There are several solutions for the code of the vacuum cleaner
agent. Here we present and comment on some of them. We start from a very
reactive solution and finish with a more goal-oriented version.
1. First solution
----------------------------------------
+dirty <- suck.
+pos(1) <- right.
+pos(2) <- down.
+pos(3) <- up.
+pos(4) <- left.
----------------------------------------
* comments
- this is a reactive agent, which is easy to implement with Jason,
but note that the language is meant for cognitive agents
- as a consequence, all plans have an empty context (normally used
to check the situation currently believed by the agent)
* problems
- the robot may leave dirt behind, and you will get messages like
[VCWorld] suck in a clean location!
- reason: the events related to location ("+pos(...)") are
selected before the "+dirty" event, so the suck action is performed
after the move action, possibly in a location that is already clean
2. Second solution
----------------------------------------
// plans for dirty location
+pos(1) : dirty <- suck; right.
+pos(2) : dirty <- suck; down.
+pos(3) : dirty <- suck; up.
+pos(4) : dirty <- suck; left.
// plans for clean location
+pos(1) : clean <- right.
+pos(2) : clean <- down.
+pos(3) : clean <- up.
+pos(4) : clean <- left.
----------------------------------------
* comments
- again a rather reactive agent
- the selection of plans is based on context (perceptual beliefs
in this case)
- it solves the problems of the previous solution
* problems
- the moving strategy is coded in two sets of plans, so to change
the strategy we need to change all of them
- if you leave the agent running for a long time, it eventually stops.
The reason is that sometimes the robot perceives neither
dirty nor clean, and thus no plan is selected. The following
code solves that:
----------------------------------------
// plans for a dirty location
+pos(1) : dirty <- suck; right.
+pos(2) : dirty <- suck; down.
+pos(3) : dirty <- suck; up.
+pos(4) : dirty <- suck; left.
// plans for other circumstances
+pos(1) : true <- right.
+pos(2) : true <- down.
+pos(3) : true <- up.
+pos(4) : true <- left.
----------------------------------------
3. Third solution
----------------------------------------
+pos(_) : dirty <- suck; !move.
+pos(_) : true <- !move.
// plans to move
+!move : pos(1) <- right.
+!move : pos(2) <- down.
+!move : pos(3) <- up.
+!move : pos(4) <- left.
----------------------------------------
* comments
- the moving strategy is re-factored to use a (perform) goal
(the goal '!move')
- this agent reacts to the perception of its location, but the
reaction creates a new goal (to move)
- to change the moving strategy we only need to change the
way the goal "move" is achieved (the plans for triggering
event '+!move')
* problem
- suppose that actions may fail; in that case, after performing 'up',
for example, the agent may remain in the same place, so no new
location is perceived and the agent stops moving
(this is only conceptually a problem, since the environment was not
coded to simulate action failures, i.e., the environment model is
deterministic).
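To illustrate the comment above that only the '+!move' plans need to
change, here is a sketch of an alternative strategy: a random walk.
Note that '.random' is a standard Jason internal action (it unifies its
argument with a random number in [0,1)), while the '!go' sub-goal is a
name introduced here for illustration; the sketch also ignores whether
a given direction is valid at the current position.
----------------------------------------
// hypothetical alternative strategy: random walk
// (only the plans for '+!move' change; the rest of the agent is untouched)
+!move : true <- .random(R); !go(R).
+!go(R) : R <  0.25           <- right.
+!go(R) : R >= 0.25 & R < 0.5 <- down.
+!go(R) : R >= 0.5  & R < 0.75 <- up.
+!go(R) : R >= 0.75           <- left.
----------------------------------------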
4. Fourth solution
----------------------------------------
!clean. // initial goal
+!clean : clean <- !move; !clean.
+!clean : dirty <- suck; !move; !clean.
-!clean <- !clean.
+!move : pos(1) <- right.
+!move : pos(2) <- down.
+!move : pos(3) <- up.
+!move : pos(4) <- left.
----------------------------------------
* comments
- this agent is not reactive at all; it has no behaviour which is
triggered by an external event, that is, a perception of change
in the environment (note however that BDI agents typically have
both goal-directed and reactive behaviour)
- instead, the agent has a *maintenance goal* (the '!clean' goal) and
is blindly committed towards it (if it ever fails, the goal is just
adopted again -- see below)
- this goal is implemented by a form of infinite loop using recursive
plans: all plans to achieve 'clean' finish by adding 'clean' itself
as a new goal
- if anything fails in an attempt to achieve the goal 'clean', the
contingency plan (-!clean) reintroduces the goal again at all
circumstances (note the empty plan context); this is what causes the
"blind commitment" behaviour mentioned above
- this agent is thus more 'robust' against action failures
- the use of goals also allows us to easily code plans that handle
other goals. For instance, suppose we want the robot to take a
break of 1 second after every 2 seconds of cleaning. This
behaviour can easily be coded as follows:
----------------------------------------
!clean. // initial goal to clean
!pause. // initial goal to take breaks
+!clean : clean <- !move; !clean.
+!clean : dirty <- suck; !clean.
-!clean <- !clean.
+!move : pos(1) <- right.
+!move : pos(2) <- down.
+!move : pos(3) <- up.
+!move : pos(4) <- left.
+!pause
<- .wait(2000); // suspend this intention (the pause) for 2 seconds
.suspend(clean); // suspend the clean intention
.print("I'm having a break, alright.");
.wait(1000); // suspend this intention again for 1 second
.print("cleaning");
.resume(clean); // resume the clean intention
!pause.
----------------------------------------
Just to see how flexible programming with goals is, you might want
to try to implement the break strategy for the first (purely reactive)
solution.
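One possible sketch (an assumption, not the only way to do it): guard
every reactive plan with a 'working' belief, and toggle that belief
from a '!pause' goal.
----------------------------------------
// hypothetical sketch: the break strategy over the first solution,
// guarding each reactive plan with a 'working' belief
working.   // initial belief: the robot is active
!pause.    // initial goal controlling the break cycle

+dirty  : working <- suck.
+pos(1) : working <- right.
+pos(2) : working <- down.
+pos(3) : working <- up.
+pos(4) : working <- left.

+!pause <- .wait(2000);   // clean for 2 seconds
           -working;      // stop reacting to perception
           .wait(1000);   // break for 1 second
           +working;      // react again
           !pause.
----------------------------------------
Note that this sketch inherits the fragility of the reactive solution:
while 'working' is absent, '+pos(...)' events find no applicable plan
and are discarded, and since the robot does not move during the break,
its position does not change and no new event may arrive to get it
moving again -- which again illustrates why the goal-based versions
are easier to work with.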