Thursday, October 21, 2010

Geeking out with Endhiran

I saw the Tamil movie "Endhiran" a couple of days back. I liked it a lot, mainly because of Rajnikanth's excellent acting, the screenplay's internal consistency, dialog, and, of course, the beauty and stylishness of the special effects. I want to see it again.
What makes the movie impressive is that, long after watching it, you can keep thinking about its technical discussions of AI and robotics. There is a consistent theme running through them.

Asimov's Laws
When Dr.Vasi appears before the approval board, one of the members asks if the robot obeys Asimov's laws. I had been wondering about this during most of the fights before that scene: if the robot obeyed Asimov's laws, it would not be fighting at all.
Asimov's laws appear throughout his Robot stories. I first read them in the book "I, Robot". There are three laws, and the book is a series of stories structured around the contradictions between them.
The laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
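As a side note, the strict priority among the three laws can be sketched as a filter over candidate actions: each law only gets to choose among whatever the higher laws allow. This is a hypothetical toy sketch, not anything from Asimov or the movie; the `Action` fields are invented labels for illustration.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False     # would injure a human, or allow harm by inaction
    obeys_order: bool = False     # carries out an order given by a human
    endangers_self: bool = False  # risks the robot's own existence

def choose(actions):
    """Pick an action, applying the three laws in strict priority order."""
    # First Law: discard anything that harms a human - no exceptions.
    safe = [a for a in actions if not a.harms_human]
    # Second Law: among safe actions, prefer those that obey orders.
    pool = [a for a in safe if a.obeys_order] or safe
    # Third Law: among what remains, prefer self-preservation.
    pool = [a for a in pool if not a.endangers_self] or pool
    return pool[0] if pool else None

# An order to harm a human is filtered out by the First Law,
# so the robot refuses even though refusing disobeys the order.
stab = Action("stab", harms_human=True, obeys_order=True)
refuse = Action("refuse")
print(choose([stab, refuse]).name)  # -> refuse
```

Notice that the Second Law filters only within the First Law's leftovers, and the Third only within the Second's - which is exactly why a robot built this way could never be ordered into combat.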

In one of the later stories of "I, Robot", a man runs for political office. But there is a suspicion that he is not a human at all - that he is a robot. There is no real way to prove it, until a situation based on the First Law comes up. Robots cannot hurt humans - therefore, if the candidate hits a human, he cannot be a robot. How this resolves itself I cannot reveal. You can read the book.
So, Chitti is created for military applications - and therefore he is designed without the three laws.
The whole movie, thus, can be seen as a demonstration of what happens when a robot is designed without the three laws. But I don't think Shankar had that angle in mind. Dr.Vasi is shown as a noble person, who wants to help his nation by creating fighting robots. But there is a problem there - when you create machines which cause harm, they can be used by your opponents too. The "flipping" of Chitti (or his clones) to the dark side is actually inevitable if you mass produce robots for fighting.
In fact, we see such a phenomenon CURRENTLY. In personal computers, the original Terminate and Stay Resident (TSR) programs were intended for background processing. They were quickly adapted into the early viruses. Now the virtual world is awash with computer worms and viruses. We have no control over the malicious use of programming. There is an escalating fight between unethical and criminal hackers on one side and government and private security agencies on the other.
There are other ethical arguments against using Robots in combat.

The Military Application of Robots
The original reason militaries offered for using robots was mine-sweeping. Such robots comply with Asimov's laws above - they do not harm humans.
But right now, many militaries around the world are researching robots for combat applications. The US military has drones (unmanned aerial vehicles) deployed in Pakistan and Afghanistan. These are armed with missiles and have caused much damage in both countries.
To keep in line with Geneva Conventions, the drones require human input before firing. There have already been ethical questions about such uses. Let me address one angle of the drone controversy:
When a country goes to war, the assumption is that there will be checks on the deployment of its citizens - at least in democracies. Because there will be loss of life on THEIR side, the decision to go to war is not taken lightly by countries, in theory.
But if you deploy drones operated by civilian contractors sitting thousands of miles away, you have removed one major reason that may deter countries from launching aggressive wars. Without loss of life, and facing no pressure from their citizens, a democracy can sustain a war purely through money and technology.

In response to this problem, a few people have argued that while war is a bad choice, once a war is launched, a country is justified in using its technological might to win.
Thus, robots for warfare are increasingly seen as a technically brilliant advancement. They are also seen as inevitable.
This means that we will likely see a new kind of "arms" race similar to the nuclear era. It is no surprise that terrorists will seek to use robots in combat too - after all the same argument about winning applies to them too.
It is in this context that Enthiran raises questions.

Enthiran and Military Robots
If you think about it, a humanoid robot is NOT necessary for the conventional military. A conventional military may use complex machines, with a narrow range of purposes. They don't need Chitti.
The movie shows arms dealers and terrorists interested in robots. To me, that seems completely natural and inevitable. In fact, because Dr.Vasi designed the robot without the in-built (non-overridable) three laws, the entire sequence of events, from the interest of arms dealers to the hostile takeover of Chitti by Dr.Bora, is unavoidable.
There are no international laws currently in place to govern use of robots in combat. Let us say that a robot brigade in a future war with Pakistan malfunctions and ends up killing a lot of innocent civilians. Who do you blame? Who will be punished?

Some random notes
1. Neural Schema Architecture is a kind of AI architecture that supplies "schemas" for different behaviors. These are the ones that Dr.Vasi says cannot be shared until the patent is obtained.
2. An Inference Engine is the core of an "expert system". It has a rule list, and it takes actions based on applicable rules. It also has a knowledge base. A kind of Inference Engine called Fuzzy-Inference can make decisions in uncertain situations.
Dr.Bora calls Chitti "just an inference engine". What he is saying is that Chitti merely evaluates rules and takes actions - that it is not "intelligent".
3. Contrary to what the movie says, Dr.Bora did not give Chitti a contradictory command in the approval meeting. He asks Chitti to stab Dr.Vasi, and the robot attempts to do it. That command does not seem to contradict any other command (unless I am missing something).
4. Dr.Vasi works on Chitti for 10 years. What was he doing all that time?
The major portion of his work would have been the representation of knowledge. In the initial scenes, Chitti is fed martial arts, dancing programs and so on. Capturing such expertise and representing it in storage is a major problem for expert systems - that part is much more complex than building the physical body.
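To make the inference-engine note concrete, here is a toy forward-chaining engine: a knowledge base of facts plus if-then rules, with the engine firing rules until nothing new can be derived. The facts and rules below are invented purely for illustration - a real expert system (let alone Chitti) would be vastly richer.

```python
def infer(facts, rules):
    """Forward chaining: repeatedly fire any rule whose conditions all hold."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)  # derive a new fact from this rule
                changed = True
    return facts

# Invented toy knowledge base: rules are (conditions, conclusion) pairs.
rules = [
    ({"sees human", "human in danger"}, "must act"),
    ({"must act", "can reach human"}, "rescue human"),
]
derived = infer({"sees human", "human in danger", "can reach human"}, rules)
print(sorted(derived))
```

This is the "rule list plus knowledge base" structure described above in its crudest form; a fuzzy-inference engine would replace the all-or-nothing condition test with degrees of membership, letting it act under uncertainty.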
