Your assignment task
Your task is to develop a program that outputs a path (series of actions) for the agent (i.e. the Laser Tank),
and to provide a written report explaining your design decisions and analysing your algorithms' performance.
You will be graded on both your submitted program (Part 1, 60%) and the report (Part 2, 40%). These
percentages will be scaled to the 10% course weighting for this assessment item.
To turn LaserTank into a search problem, you will first have to define the following agent design
components:
- A problem state representation (state space); a minimal sketch of one possible representation is given after this list,
- A successor function that indicates which states can be reached from a given state (action space and transition function), and
- A cost function (utility function); we assume that each step has a uniform cost of 1.
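The following sketch illustrates one way such a state representation could look in Python. The class name LaserTankState and its fields are hypothetical illustrations for this assignment, not part of the support code:

    # A minimal sketch of a possible state representation (hypothetical names,
    # not part of the support code). A state should capture everything that can
    # change during the search: the tank's position and heading, plus any
    # mutable map cells.
    class LaserTankState:
        def __init__(self, row, col, heading, map_data):
            self.row = row            # tank's row index
            self.col = col            # tank's column index
            self.heading = heading    # e.g. one of 'up', 'down', 'left', 'right'
            self.map_data = map_data  # tuple of tuples of cell contents (immutable)

        def __eq__(self, other):
            return (self.row, self.col, self.heading, self.map_data) == \
                   (other.row, other.col, other.heading, other.map_data)

        def __hash__(self):
            # Hashable states can be stored in visited sets and dictionaries,
            # which both UCS and A* rely on.
            return hash((self.row, self.col, self.heading, self.map_data))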
Note that a goal-state test function is provided in the support code. Once you have defined the components
above, you are to submit code implementing one of two discrete search algorithms:
1. Uniform-Cost Search, or
2. A* Search
Your submitted code should run your A* search implementation if you have it. If you haven't been
able to implement A* search, your code can run UCS instead. Finally, after you have implemented and tested
the algorithms above, you are to complete the questions listed in the section "Part 2 - The Report" and
submit them as a written report.
More detail on what is required for the programming and report parts is given below. Under the grading
rubric discussed below, the testcases used to assess the programming component will give a higher mark
for A* search, and you will not be able to answer some of the report questions without considering A* or
implementing it. These elements of the rubric give you an incentive to implement A* search over the simpler
UCS algorithm. Hint: Start by implementing a working version of UCS, and then build your A* search
algorithm out of UCS using your own heuristics.
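As a hedged illustration of this hint, the sketch below uses a single best-first skeleton: with the default zero heuristic it behaves as Uniform-Cost Search, and supplying an admissible heuristic turns it into A*. The function names and the (action, next_state) successor interface are assumptions for illustration, not the support code's API.

    import heapq

    def best_first_search(initial_state, is_goal, get_successors,
                          heuristic=lambda s: 0):
        # Generic best-first search: UCS when the heuristic is identically 0,
        # A* when an admissible heuristic is supplied. All interfaces here
        # are illustrative assumptions.
        counter = 0                     # tie-breaker so states are never compared
        frontier = [(heuristic(initial_state), counter, 0, initial_state, [])]
        best_g = {initial_state: 0}     # cheapest known cost to reach each state
        while frontier:
            f, _, g, state, path = heapq.heappop(frontier)
            if is_goal(state):
                return path             # list of actions, e.g. ['f', 'l', 'f']
            if g > best_g.get(state, float('inf')):
                continue                # stale queue entry; a cheaper path exists
            for action, next_state in get_successors(state):
                new_g = g + 1           # uniform step cost of 1
                if new_g < best_g.get(next_state, float('inf')):
                    best_g[next_state] = new_g
                    counter += 1
                    heapq.heappush(frontier, (new_g + heuristic(next_state),
                                              counter, new_g, next_state,
                                              path + [action]))
        return None                     # no solution found

This skeleton requires hashable states (see the state sketch above) so they can be used as dictionary keys.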
Your program will be assessed based on the output file it generates. This is handled as follows:
- The file solver.py, supplied in the support code, is a template for you to write your solution. All of the code you write can go inside this file, or if you create your own additional Python files, they must be invoked from this file.
- Your program will: (i) take a testcase filename and an output filename as arguments, (ii) find a solution to the testcase, and (iii) write the solution to an output file with the given output filename.
- Your code should generate a solution in the form of a comma-separated list of actions, taken from the set of move symbols defined in the supplied laser_tank.py file, which are:
  - MOVE_FORWARD = 'f'
  - TURN_LEFT = 'l'
  - TURN_RIGHT = 'r'
  - SHOOT_LASER = 's'
- The main() method stub in solver.py makes it clear how to interact with the environment: (i) the LaserTankMap.process_input_file(filename) function handles reading the input file, (ii) your code is called to solve the problem, with your solver's actions written to the actions variable, then (iii) the write_output_file(filename, actions) function handles writing to the output file in the correct format, which is passed to the autograder. A rough sketch of this flow is given after this list.
- The script tester.py can be used to test individual testcases.
- The autograder (hidden to students) handles running your Python program with all of the testcases. It will run the tester Python program on your output file and assign a mark for each testcase based on the return code of tester.
- You can inspect the testcases in the support code, each of which includes information on its optimal solution path length and test time limit. Looking at the testcases might also help you develop heuristics using your human intelligence and intuition (one example heuristic is sketched below).
- To ensure your submission is graded correctly, do not rename any of the provided files or alter the methods LaserTankMap.process_input_file() or solver.write_output_file().
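As a rough illustration of the main() flow described in the list above, a sketch is given below. Here solve_problem() is a hypothetical placeholder for your own search code, and the version of write_output_file() shown only illustrates the comma-separated output format; use the version supplied in the solver.py template, unaltered, in your actual submission.

    # A rough sketch of the main() flow described in the list above.
    import sys
    from laser_tank import LaserTankMap

    def write_output_file(filename, actions):
        # Shown only to illustrate the comma-separated output format; keep the
        # version supplied in the solver.py template unaltered.
        with open(filename, 'w') as f:
            f.write(','.join(actions))

    def solve_problem(game_map):
        # Hypothetical placeholder: run your UCS / A* search here and return a
        # list of action symbols, e.g. ['f', 'l', 'f', 's'].
        raise NotImplementedError

    def main(arglist):
        input_file, output_file = arglist[0], arglist[1]
        game_map = LaserTankMap.process_input_file(input_file)  # (i) read the testcase
        actions = solve_problem(game_map)                       # (ii) solve it
        write_output_file(output_file, actions)                 # (iii) write the solution

    if __name__ == '__main__':
        main(sys.argv[1:])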
More detailed information on the LaserTank implementation is provided in the Assignment 1 Support Code
README.md, while a high-level description is provided in the LaserTank AI Environment description
document.
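As one illustration of the kind of heuristic mentioned above, the sketch below uses the Manhattan distance from the tank to the goal cell. The attribute names follow the hypothetical state sketch given earlier, not the support code:

    def manhattan_heuristic(state, goal_row, goal_col):
        # The tank needs at least this many forward moves to reach the goal on
        # a 4-connected grid with unit step cost, and turns/shots only add
        # cost, so this never overestimates (i.e. it is admissible for A*).
        return abs(state.row - goal_row) + abs(state.col - goal_col)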
Grading rubric for the programming component (total marks: 60/100)
For marking, we will use 8 different testcases to evaluate your solution. There will be 3 easy, 3 medium and
2 difficult testcases, and marks will be allocated according to the following rules:
- Solving a testcase means finding an optimal path within the given time limit (time limits are given in each testcase file).
- Approximately solving a testcase means finding a sub-optimal solution path, one longer than an optimal path, within the given time limit.
- If your code computes one (approximate or optimal) solution to one testcase, COMP3702 students receive 25 marks, and COMP7702 students receive 20 marks.
- Above this, each subsequent testcase that your code solves earns another 5 marks, up to a maximum of 60 marks.
- Approximate solutions are penalised in proportion to how far they are from an optimal solution's length: each subsequent testcase that your code approximately solves receives 5 marks minus a penalty of 0.5 marks for each step over the optimal solution length, down to a minimum of 0 marks for approximate solutions that are 10 steps or more longer than the optimal path.
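For example, under this rule an approximate solution that is 4 steps longer than the optimal path would earn 5 - (4 × 0.5) = 3 marks for that testcase.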
- Part marks are given for programming attempts that fail to (approximately or optimally) solve one testcase, as indicated in the tables below.
The details of separate grading rubrics for COMP3702 and COMP7702 are given in the tables below, where
your marks are given by the highest performance threshold your submission passes.
Note on Gradescope's autograder: Due to limitations with the Gradescope autograder, your marks for Part
1 displayed within Gradescope will reflect only the marks you have earned for solving or approximately solving
each testcase. The additional 25 marks for COMP3702 students, or 20 marks for COMP7702 students, for
approximately solving one or more testcases will be added manually in Blackboard, as will part marks for
submissions that fail to meet this threshold.
