CSC 411: Artificial Intelligence
(Fall 2006)
Assignment 3: Chapters 5, 6, 7
Due by November 9, Thursday, 2006
1. Chapter 5: Bayes’ theorem. Suppose an automobile insurance company classifies a driver as good, average, or bad. Of all their insured drivers, 25% are classified good, 50% are average, and 25% are bad. Suppose for the coming year, a good driver has a 5% chance of having an accident, an average driver has a 15% chance of having an accident, and a bad driver has a 25% chance. If you had an accident in the past year, what is the probability that you are a good driver?
Solution:
1) Prior probabilities: p(good) = 0.25, p(average) = 0.50, p(bad) = 0.25.
2) Conditional probabilities: p(accident|good) = 0.05, p(accident|average) = 0.15, p(accident|bad) = 0.25.
3) Use Bayes’ theorem:
p(good|accident) = p(accident|good)p(good) / {p(accident|good)p(good) + p(accident|average)p(average) + p(accident|bad)p(bad)}
= (0.05)(0.25) / {(0.05)(0.25) + (0.15)(0.50) + (0.25)(0.25)}
= 0.0125 / 0.15
≈ 0.083.
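The calculation above can be checked with a short script; this is a sketch that simply restates the priors and conditionals from the problem:

```python
# Priors and conditional accident probabilities from the problem statement.
priors = {"good": 0.25, "average": 0.50, "bad": 0.25}
p_accident_given = {"good": 0.05, "average": 0.15, "bad": 0.25}

# Total probability of an accident (the denominator in Bayes' theorem).
p_accident = sum(priors[c] * p_accident_given[c] for c in priors)

# Posterior probability of being a good driver, given an accident.
p_good_given_accident = priors["good"] * p_accident_given["good"] / p_accident
print(round(p_good_given_accident, 3))  # 0.083
```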
2. Chapter 6: Production Systems. In our class, we introduced Example 6.2.2 of the textbook, where the knight’s tour problem is presented and production rules for the 3×3 knight problem are proposed. Example 6.2.3 (pp.209-210) generalizes the knight’s tour solution to the full 8×8 chessboard and designs some production rules. Using the rule in Example 6.2.3 as a model, write the eight move rules needed for the full 8×8 version of the knight’s tour.
Solution:
The form of these rules will vary, of course, but each will add +2 or -2 to one of the row and column measures of the present state and +1 or -1 to the other to produce the new state. At some point in the production of the new state we must check that it will be on the board. This can be done as in the text by checking the current state to see if the present rule can be applied, or, as in our example below, by assuming the current state is o.k. and then constraining the new state. The representation used here is a PROLOG-like version of the predicate calculus, where each state or board position is written “state(Row, Col)”. A sample rule:
CONDITION: state(Row, Col)
ACT: state(NewRow, NewCol) ∧ NewRow is Row - 2 ∧ NewRow > 0 ∧ NewCol is Col + 1 ∧ NewCol < 9
The failure of any of the ∧ (and) constraints gets the system to consider the next rule or backtrack.
The other seven rules can be created similarly.
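The eight rules can be sketched together in a short procedure; this is an illustration (not from the text) using rows and columns numbered 1 through 8, with the on-the-board checks playing the role of the ∧ constraints:

```python
# The eight knight-move rules: each adds +/-2 to one coordinate and +/-1
# to the other, matching the sample rule above.
MOVES = [(-2, 1), (-2, -1), (2, 1), (2, -1), (-1, 2), (-1, -2), (1, 2), (1, -2)]

def legal_moves(row, col):
    """Apply each of the eight rules; keep only new states still on the board."""
    states = []
    for d_row, d_col in MOVES:
        new_row, new_col = row + d_row, col + d_col
        if 1 <= new_row <= 8 and 1 <= new_col <= 8:  # the "on the board" constraints
            states.append((new_row, new_col))
    return states

print(len(legal_moves(4, 4)))  # 8: from a central square, all eight rules apply
print(legal_moves(1, 1))       # [(3, 2), (2, 3)]: from a corner, only two rules apply
```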
3. Chapter 6: In Section 2.4 (pp.73-76), Example 3.3.5 (pp. 115-116), and Example 4.2.1 (pp.143-144), the financial advisor problem is discussed. Using predicate calculus as a representation language:
i) Write the problem explicitly as a production system
ii) Generate the state space and stages of working memory for the data-driven solution to Example 3.3.5
iii) Repeat ii) for a goal-driven solution
Solution: Refer to the rules presented in Section 2.4.
i) Put rules 1 through 8 in the production memory. Place the minsavings and minincome functions in the production memory also, to be used whenever a rule requires this mathematical calculation performed. Facts 9, 10, and 11 will be obtained from the user as needed. Now run the production system as in either ii) or iii) below.
ii) Go through the rules in order until you produce the result “savings(X)” for some X. When no premise is yet known, as in the first three rules of the iteration, go on to the next rule. Finally, at rule 4 we are asked about savings and dependents. There are no rules that can conclude these results, so the system goes to the user to obtain this information. When it is supplied, minsavings is calculated and then either rule 4 or rule 5 fires to conclude about the adequacy of the savings account. This new fact is added as a result that is now known by the system. At this point, if it is breadth-first data-driven search, we continue through the remaining three rules, obtaining information on earnings and calculating minincome. Otherwise, if we are doing depth-first search, we take the result of rule 4 or 5 and go back to rule 1 again.
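A minimal data-driven (forward-chaining) run can be sketched as below. The rule contents, helper definitions, and the user-supplied facts are paraphrased from the textbook's financial advisor example and should be checked against Section 2.4; the loop itself shows the fire-until-quiescent behavior described above:

```python
# Assumed helper definitions from the textbook's financial advisor example.
def minsavings(dependents):
    return 5000 * dependents

def minincome(dependents):
    return 15000 + 4000 * dependents

# Working memory begins with the user-supplied facts (facts 9-11 in the text).
wm = {"amount_saved": 22000, "earnings": 25000, "steady": True, "dependents": 3}

def next_rule(wm):
    """Scan the rules in order; return the first new fact a rule can conclude."""
    if "savings_account" not in wm:
        adequate = wm["amount_saved"] > minsavings(wm["dependents"])
        return ("savings_account", "adequate" if adequate else "inadequate")
    if "income" not in wm:
        adequate = wm["steady"] and wm["earnings"] > minincome(wm["dependents"])
        return ("income", "adequate" if adequate else "inadequate")
    if "investment" not in wm:
        if wm["savings_account"] == "inadequate":
            return ("investment", "savings")
        if wm["income"] == "adequate":
            return ("investment", "stocks")
        return ("investment", "combination")
    return None  # no rule can add a new fact: the run is finished

# Data-driven loop: keep firing rules until working memory stops growing.
while (fact := next_rule(wm)) is not None:
    wm[fact[0]] = fact[1]

print(wm["investment"])  # combination
```

With these example facts, savings are adequate (22000 > 15000) but income is not (25000 < 27000), so the run concludes a combination investment.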
iii) Search should look something like Figure 3.2.6 (p.117), depending only on the rule ordering. How this graph is generated depends on whether the goal-driven search is breadth-first or depth-first.
4. Chapter 7: Translate each of the following sentences into predicate calculus, conceptual dependencies, and conceptual graphs:
“Jane gave Tom an ice cream cone.”
“Basketball players are tall.”
“Paul cut down the tree with an axe.”
“Place all the ingredients in a bowl and mix thoroughly.”
Sample solution: underscored items stand for the concept boxes; the relation labels stand for the ovals.
give(jane, ice_cream_cone, tom, past_time).
person:jane ←──── agent ←──── gave ────→ object ────→ ice_cream_cone
                                │
person:tom ←──── recipient ←────┘
has_property(basketball_players, tall).
basketball_player:X ────→ has_property ────→ tall
cut_down_with(paul, tree, axe)
person:paul ←──── agent ←──── cut_down ────→ object ────→ tree
                                 │
tool:axe ←──── with ←────────────┘
5. Chapter 7: The operations join and restrict (Section 7.2.4) define a generalization ordering on conceptual graphs, while specialization of conceptual graphs using join and restrict is not a truth-preserving operation.
i) Show that the generalization relation is transitive.
ii) Give an example that demonstrates that the restriction of a true graph is not necessarily true.
iii) Prove, however, that the generalization of a true graph is always true.
Sample solution:
i) There are two issues here. First, all properties of a representation are true in its generalization. Second, any type extension of an entity will be found in its generalization; in fact, it is part of its generalization. Thus, if a generalization of a generalization were not a generalization, then one of the original generalizations would not be a generalization.
ii) Consider the example of g2 and g3 of Figure 7.22 in the text, page 254. Since not all animals are dogs, g3 need not be true whenever g2 is.
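The failure of truth preservation under restriction can be illustrated with a toy model check; the world, type hierarchy, and entity name here are hypothetical, not from the text:

```python
# A world containing a single entity, a cat. Cats and dogs are both animals.
world = {"rex": "cat"}
subtypes = {"animal": {"cat", "dog"}, "cat": {"cat"}, "dog": {"dog"}}

def graph_true(concept_type):
    """Is the graph 'there exists an entity of this type' true in the world?"""
    return any(t in subtypes[concept_type] for t in world.values())

print(graph_true("animal"))  # True: "some animal exists" holds in this world
print(graph_true("dog"))     # False: restricting animal -> dog loses truth
```

Restricting the concept animal to its subtype dog specializes the graph, and the specialized graph is false in this world even though the original is true.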
iii) If a conceptual graph is true, then its generalization is still a true representation of that original situation. If the generalization were not true, then the original would not be true either.