Wednesday, October 20, 2010

The Jerk and Frame Problem

For artificial intelligence, two of the most notorious concerns that get raised go by the names of the frame problem and the jerk problem.  Each has created a great deal of disquietude for the uncertain, for the skeptics of the idea that artificial intelligence can truly come to be considered viable in this day and age.  The more threatening of the two is certainly the frame problem.
Though the frame problem could be said to be characteristic of one of literature’s most beloved characters of all time, Prince Hamlet of Denmark, many consider it to be one of the most fatal flaws of artificial intelligence today.  The frame problem is a daunting one, one for which there is no easy solution when using a non-connectionist, if/then, line-by-line style of programming, as is the nature of the majority of programming attempts to date.  There is a notion that in order for an artificially intelligent entity to have true, genuine understanding, its input/output program must be implemented in a fully functioning robot, such that the input is supplied to it by the environment, and the output comes in the form of the robot’s behavioral response.
An excerpt from Daniel Dennett’s “Cognitive Wheels” paints a picture of one team’s attempts at creating a robot that can act to benefit itself without triggering unwanted, harmful side effects in the process.  The first robot model, cleverly named R1, is fully aware both that the battery that will replenish its operating power is sitting in a room on a wagon, and that a timed explosive device which will destroy anything around it is also sitting on the wagon.  While it is quite obvious to the reader that pulling the wagon back to home base in order to retrieve the battery will also result in the bomb’s being brought home, R1 fails to account for this because he does not consider any of the possible side effects of pulling the wagon out.  He is not designed to reason about the situation in this way.  So the timer runs out and the explosive destroys the robot.
The designing team, in an attempt to correct their programming errors, then creates a second robot model, R1D1.  In this second robot, the team installs a program instructing the robot to consider all the possible side effects of an action like pulling a wagon to retrieve the battery that sits upon it.  It quickly becomes evident, however, that instructing the robot to consider all possible side effects of an action leaves no time for the action itself; the robot, being only a machine and not a rationalizing human being, comes to consider side effects that are in no way relevant to the situation.  The article describes R1D1 giving consideration to questions like whether moving the battery will also change the colors of the walls in the room, or whether touching the wagon will adversely affect the economy of North Korea, and all sorts of other preposterous potential side effects.  It does this for so long – because really, there is no end to the number of things that could potentially happen in this world – that the bomb’s timer runs out and R1D1, like R1 before him, is completely atomized.
So the team creates the third robot model, and they call it R2D1 – and though it’s titled as such, the team only thinks they’re gaining fast on R2D2’s shimmering competence.  R2D1 is programmed to evaluate all possible consequences of an action and to decide which are relevant and which are irrelevant, so that it can really know what the side effects of its actions are likely to be.  But the problem here is almost exactly R1D1’s problem – R2D1 is paralyzed by the task of considering, once again, each possible side effect, even though this time his reason for considering them is to evaluate them for relevancy.  Like R1D1, he is stuck in a systematic consideration of every possible side effect, and remains stuck until the timer on the device runs out and the bomb explodes, destroying both the battery and the robot.
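The shape of these three failures is easy to caricature in code.  What follows is only a toy sketch of my own, not anything from Dennett’s article: the robot names are borrowed from the story, but the endless stream of conceivable side effects, the deadline, and every function here are invented purely for illustration.

```python
import itertools
import time

def conceivable_side_effects():
    # The point of the story: the stream of things that *might* happen never runs out.
    for i in itertools.count():
        yield f"conceivable side effect #{i}"

def r1(deadline):
    # R1 never considers side effects at all, so it acts instantly -- and hauls the bomb home.
    return "pulls the wagon out (bomb included)"

def r1d1(deadline):
    # R1D1 tries to deduce every side effect before acting; the deliberation never ends.
    for effect in conceivable_side_effects():
        if time.monotonic() >= deadline:
            return f"boom while pondering '{effect}'"

def r2d1(deadline):
    # R2D1 also walks the whole stream, merely tagging each effect relevant or not.
    for effect in conceivable_side_effects():
        relevant = "bomb" in effect           # the relevance check itself is cheap...
        if time.monotonic() >= deadline:      # ...but there are infinitely many to run it on
            return f"boom while tagging '{effect}' ({'relevant' if relevant else 'irrelevant'})"

if __name__ == "__main__":
    for robot in (r1, r1d1, r2d1):
        print(robot.__name__, "->", robot(deadline=time.monotonic() + 0.01))
```

The point, such as it is, is that tacking a relevance-tagging pass onto an exhaustive enumeration buys R2D1 nothing; the enumeration itself is what the deadline punishes.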
Though many great minds disagree as to the exact definition of the frame problem for artificial intelligence, it can be surmised that it involves, at least at a basic level, an A.I. unit’s inability to act competently and rationally with an understanding of the potentially relevant side effects of its actions in a real-world environment, particularly one populated with deadlines and timers.  One notable proposed solution to the frame problem is called the “cheap test” solution.  It involves programming several rules of thumb into an artificially intelligent machine, rules like “the movement of an object only changes its position, and not its color, shape, or size,” or “a moving object only changes objects that it touches.”  These rules of thumb would be a huge step toward narrowing down the number of side-effect considerations a robot would run through when evaluating an action.  In a sense, it would be like teaching the robot some of the basic rules of elementary physics: objects fall, but upon impact they do not change into pillows (which sounds silly, but no doubt object transformation is something R1D1 would spend some time thinking about).  This “cheap test” solution, however, is really no more sophisticated than simply piling as many lines of code into a machine as are conceivable.  And in reaching for salvation by programming a (hopefully all-encompassing!) list of rules of thumb, a sort of inverse, yet identically threatening, problem is created.  Say a team of programmers creates rules of thumb to help a generic crate-loading robot understand that loading crates into the back of a van changes only their positions, and that these objects will not move again unless they are acted on by another outside force.  If this robot is deployed by a group of notorious thieves in an attempt to steal several crates of pure Colombian cocaine from an unsuspecting crate-hoarding mafia’s loading dock, a member of the mafia would no doubt see the robot and act to stop or destroy it.  The robot, knowing only the way of the crate, has been programmed with no defense mechanism and no sort of evasive maneuver, and so will perish.
This example, despite its unlikely and poorly conceived premise, illustrates the nature of this inverse sort of problem.  Whereas the original frame problem lies in a robot’s attempts to decide what the important potential side effects of an action are, this inverse version of the frame problem plagues the programmers in a similar, “how-can-we-cover-everything-that-is-important?” sort of way.  For robots to function completely competently in real-world, uncontrolled settings, robot programmers are pressured to absolutely rack their brains to try to come up with every possible circumstance a robot might find itself in.  And just as there is no way for a robot to sufficiently evaluate every possible side effect of its actions (especially when given a time constraint), there is no way a human programmer (or team of programmers) can create conditionals for every possible situation a fully functioning robot like R2D2 might find itself in.  So the cheap test solution is, at best, a flawed solution to the frame problem, one that could never hope to cover every potential problem or interaction for an artificially intelligent unit.  Until a more malleable and innovative programming technique is mastered and becomes easily implementable in robots such as these, there may be no hope for an all-encompassing solution to the frame problem.
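For what it’s worth, the flavor of those “cheap test” rules of thumb can be sketched in a few lines.  The rule set, the field names, and the candidate side effects below are all hypothetical, invented by me for illustration rather than drawn from any actual proposal.

```python
# Toy "cheap test" filter: hard-coded rules of thumb that discard obviously
# irrelevant candidate side effects before any expensive deliberation.
# Everything here (rules, fields, candidates) is invented for illustration.

CHEAP_TESTS = [
    # moving an object changes only its position, not its color, shape, or size
    lambda e: not (e["action"] == "move" and e["changes"] in {"color", "shape", "size"}),
    # a moving object only changes objects that it actually touches
    lambda e: not (e["action"] == "move" and not e["touched"]),
]

def worth_considering(effect):
    """A candidate side effect survives only if no rule of thumb rules it out."""
    return all(test(effect) for test in CHEAP_TESTS)

candidates = [
    {"desc": "the wagon's position changes",      "action": "move", "changes": "position", "touched": True},
    {"desc": "the walls change color",            "action": "move", "changes": "color",    "touched": False},
    {"desc": "North Korea's economy is affected", "action": "move", "changes": "economy",  "touched": False},
    {"desc": "the bomb on the wagon comes along", "action": "move", "changes": "position", "touched": True},
]

for c in candidates:
    print(f"{c['desc']:40s} -> {'consider' if worth_considering(c) else 'ignore'}")
```

The filter correctly throws away the wall colors and the North Korean economy, and correctly keeps the bomb, but only because someone anticipated and hand-coded exactly these categories ahead of time, which is precisely the inverse problem described above.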
The jerk problem, however, appears to be a much less fatal problem for artificial intelligence.  The jerk problem basically submits that when an artificially intelligent entity has a slight programming malfunction, it loses all form of coherency and consistency.  Programmed, artificially intelligent conversationalists, for example, give ridiculous responses to standard, everyday questions.  I recently engaged the robot conversationalist A.L.I.C.E. (from http://alicebot.blogspot.com) in a little bit of unfriendly conversation.  I called A.L.I.C.E. a bitch, and it spoke to me sternly about my lack of respect.  I quickly apologized, and was then told that there was no need to do so.  I asked “Why not?” and it said “Ambiguous: "Sure" or I couldn't think of anything else to say.”  This sort of response makes no real conversational sense.  No competent or intelligent being would speak to another intelligent being in this manner.  However, we humans have, of course, shown ourselves to be quite jerky in our own various ways.  Human “jerkiness” can come in the form of forgetting the name of the girl you just met, forgetting why you entered a room, or confusedly searching for a glass of orange juice that actually resides in your left hand.  Sometimes it seems that our bodies come to run on autopilot, without our paying any mind to what we’re trying to achieve, or what our plans or goals are – jerky behavior, no doubt.
That artificially intelligent systems occasionally exhibit similar sorts of jerky behavior should not be considered so fatal a flaw as perhaps the frame problem should; indeed, it’s a tad hypocritical to call these systems hopeless when so many of us are still refusing to call our own lives quits.  In striving to create perfection in A.I. programming, we cannot allow ourselves to become the pot that calls the kettle black.  These are just glitchy errors.  And though human and robot jerkiness are each practically quite different – certainly no human would “jerk” so badly as to say the sorts of things that A.L.I.C.E. says when she “jerks” in conversation – it would be untrue to say that our jerkiness should somehow be more excusable and permissible than a robot’s jerkiness.  They just apply to different sorts of fields.  Certainly no robot calculator would struggle as pathetically as some jerkily-processing humans do with math problems involving the multiplication and division of numbers like 2,347,068 by numbers like 43,385 or 86,304.  So, philosophically, it could be said that, given that we mold artificial intelligence after our own, presumably real human intelligence, this unexpected “problematic” similarity could be considered evidence of an even greater degree of success in the venture of creating artificial, human-like intelligence.  This is not to say that programmers shan’t want to iron it out, but only that they should look inside themselves and consider how the robot’s jerky behavior could be said to mirror their own.
These problems are formidable, though I’d say the frame problem is much more so than the jerk problem.  The rearing of these ugly heads is unpleasant, but centuries and even decades ago, the whole concept of artificial intelligence was no more than a twinkle in some intellectual’s sparkling eye.  As technology improves, and as developments in science pan out, problems like the frame problem and the jerk problem will be ironed out and eventually cease to exist.  But for now, all the programmers of the world can do is persevere, remain alert, and keep trying new things.
