Friday, June 18, 2010

Film: The Singularity is Near

Ray Kurzweil's movie, The Singularity is Near: A True Story, has been released. The film received a Best Special Effects award and a Second Place Audience Award at the Breckenridge Film Festival. I will be attending the film's NY debut on June 24th, and will post brief comments soon after.

Willow Garage PR2 Plays Pool

Guest Opinion: Machines and Machination - How Human Can Machines Be?

We are posting this submitted guest piece and invite others to offer differing opinions. Guest bloggers are not required to have an in-depth understanding of the field of inquiry covered by the Moral Machines blog.

The first thing that struck me when I visited this blog for the first time was the title – if ever there was a delicious yet appropriate oxymoron, this is it. Since when did machines and morality go hand in hand? Whenever we talk of technology and its intrusiveness in all aspects of our lives, we hold back from going gaga over the machines because they lack ethical values and intuitive sense. In a nutshell, they lack innate qualities of humanness like the ability to discern right from wrong based on ethics, kindness, morals, and a host of other factors that must be taken into consideration.

Take for example a court of law – if a person is on trial for murder, they are not automatically sentenced to death or life imprisonment. The circumstances under which the murder was committed are taken into consideration – some people do it in cold blood after planning it out carefully; others are psychopaths who take pleasure in the acts of torture and killing; and yet others are victims of circumstances and are provoked into killing either to defend themselves or because they are so incensed that they don’t realize what they are doing until it’s too late.

A machine could probably pronounce the right verdict if you feed in the circumstances and the associated punishments, but what if there are extenuating circumstances? What if the murder was planned, but only because the culprit was so badly affected by the victim that he saw no other way but to eliminate him from this world? What if he was avenging the brutal rape and murder of his wife and young daughters? How would a machine judge him in such a case? Being a machine, it would not be able to accord enough importance to the anguish and mental agony of the murderer, who himself is actually the main victim here. Even human beings find it hard to make the right decision in such cases. So how on earth can a machine that is made of fiber and circuits have enough moral fiber to do right in such a tough call?

There’s no doubt that the technological brigade is marching on at full speed; but as regards the implication that machines can replace humanity at some point in the future, there is much doubt and confusion as to what we can create and what those creations can do. Yes, they will be more efficient and dependable in a workhorse kind of way, but unless they are provided with human guidance and augmentation when situations call for ethical and moral decisions, machines will be more detrimental than advantageous to society.

By-line:
This article is contributed by Susan White, who regularly writes on the subject of Rad Tech Schools. She invites your questions, comments at her email address: susan.white33@gmail.com.

The Road to Minority Report


An article by Charles Arthur in the Guardian discusses,
Why Minority Report was spot on.

Pre-crime
In the film, "pre-cogs" can look into the future and inform the police (they have got no choice – they are stuck in baths in the basement). In 2008, Portsmouth city council installed CCTV linked to software that would note whether people were walking suspiciously slowly. University researchers had already realised in 2001 that, if you recorded the walking paths of people in car parks, you could spot the would-be thieves simply: they didn't walk directly to a car, but instead ambled around with no apparent target. That is because, unlike everyone else in a car park, they weren't going to their own car.
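The car-park observation above can be sketched in a few lines of code. Assume a "directness ratio" — straight-line distance from start to finish divided by total distance walked — as the suspicion signal; this measure and its threshold are hypothetical simplifications for illustration, not the researchers' actual method.

```python
# Sketch: flag car-park trajectories that wander rather than head
# straight to a vehicle. The directness ratio is a hypothetical
# stand-in for the researchers' actual path analysis.
import math

def path_length(points):
    """Total distance walked along a list of (x, y) points."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def directness(points):
    """Straight-line distance from start to end, over distance walked.
    Close to 1.0 = walked directly; near 0 = ambled around."""
    walked = path_length(points)
    if walked == 0:
        return 1.0
    return math.dist(points[0], points[-1]) / walked

def looks_suspicious(points, threshold=0.5):
    """Flag paths that are far less direct than a typical walk to a car."""
    return directness(points) < threshold

# A driver heading straight to their car vs. someone circling the lot.
direct = [(0, 0), (10, 0), (20, 0)]
wandering = [(0, 0), (10, 0), (10, 10), (0, 10), (0, 2)]

print(looks_suspicious(direct))     # False
print(looks_suspicious(wandering))  # True
```

The real systems presumably combine many more cues (speed, stops, gait), but the core signal — target-directed versus aimless movement — reduces to something this simple.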

That's not the end: Nick Malleson, a researcher at the University of Leeds, has built a system that can predict the likelihood of a house being broken into, based on how close it is to routes that potential burglars might take around the city; he is meeting Leeds council this week to discuss how to use it in new housing developments, to reduce the chances of break-ins. So although pre-crime systems can't quite predict murder yet, it may only be a matter of time.
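The proximity-to-routes idea can likewise be sketched as a toy scoring function. The exponential decay with distance is a hypothetical illustration, not Malleson's actual simulation model.

```python
# Sketch: score burglary risk from a house's distance to likely
# offender routes. The inverse-exponential scoring is a hypothetical
# illustration only.
import math

def nearest_route_distance(house, route_points):
    """Distance from a house (x, y) to the closest point on any route."""
    return min(math.dist(house, p) for p in route_points)

def risk_score(house, route_points, scale=50.0):
    """Risk near 1.0 beside a route, decaying toward 0 farther away."""
    d = nearest_route_distance(house, route_points)
    return math.exp(-d / scale)

route = [(0, 0), (100, 0), (200, 0)]   # a well-travelled path
near_house = (90, 10)
far_house = (100, 300)

print(round(risk_score(near_house, route), 2))
print(round(risk_score(far_house, route), 2))
```

A planner could run such a score over every plot in a proposed development and redesign the layouts that land closest to high-traffic routes.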

Spider robots
The US military is developing "insect robots", with the help of British Aerospace. They actually have eight legs (so, really, arachnid robots) and will be able to reconnoitre dangerous areas where you don't want to send a human, such as potentially occupied houses.

"Our ultimate goal is to develop technologies that will give our soldiers another set of eyes and ears for use in urban environments and complex terrain; places where they cannot go or where it would be too dangerous," Bill Devine, advanced concepts manager with BAE Systems, told World Military Forum. Give it 10 years and they will be there.

Thursday, June 17, 2010

IBM Computer Wins at Jeopardy


IBM's Watson computer has been designed to play the TV quiz show Jeopardy!, and wins quite often. The NYTIMES has a feature article by Clive Thompson on Watson whose title plays on the show's answer-and-question format: What Is I.B.M.'s Watson? While computers like Watson represent a dramatic step forward toward Turing-level computing, they have clear limitations and can be problematic if used inappropriately.
Watson can answer only questions asking for an objectively knowable fact. It cannot produce an answer that requires judgment. It cannot offer a new, unique answer to questions like “What’s the best high-tech company to invest in?” or “When will there be peace in the Middle East?” All it will do is look for source material in its database that appears to have addressed those issues and then collate and compose a string of text that seems to be a statistically likely answer. Neither Watson nor Wolfram Alpha, in other words, comes close to replicating human wisdom.

CULTURALLY, OF COURSE, advances like Watson are bound to provoke nervous concerns too. High-tech critics have begun to wonder about the wisdom of relying on artificial-intelligence systems in the face of complex reality. Many Wall Street firms, for example, now rely on “millisecond trading” computers, which detect deviations in prices and order trades far faster than humans ever could; but these are now regarded as a possible culprit in the seemingly irrational hourlong stock-market plunge of the spring. Would doctors in an E.R. feel comfortable taking action based on a split-second factual answer from a Watson M.D.?
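The quoted description — looking for source material that appears to address the question and composing a statistically likely answer — can be caricatured in a few lines. This word-overlap retrieval is a toy illustration only, nothing like Watson's actual DeepQA pipeline.

```python
# Toy sketch of retrieval-style question answering: pick the stored
# passage whose words best overlap the question. A drastic
# simplification of what systems like Watson actually do.
def tokenize(text):
    """Lowercase a string and split it into a set of words."""
    return set(text.lower().split())

def best_passage(question, passages):
    """Return the passage sharing the most words with the question."""
    q = tokenize(question)
    return max(passages, key=lambda p: len(q & tokenize(p)))

passages = [
    "Watson is a question answering computer system built by IBM",
    "The flash crash was a stock market plunge in May 2010",
]
print(best_passage("what computer system did IBM build", passages))
```

Even this caricature makes the article's point: the system can only surface text that resembles the question; it has no machinery for judgment about questions with no factual answer on record.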

Tuesday, June 1, 2010

Video of iRobot APOBS

iRobot Demonstrates New Weaponized Robot

John Palmisano, over at the IEEE Spectrum blog, has written about the latest developments at iRobot.

As Palmisano points out:
Back in the day, the founders of iRobot had been against the weaponization of robots. Perhaps business and financial pressures are pushing the boundaries. Indeed, the military market is becoming ever more important, according to the company's first quarter results. Finances were very tight in 2009 and many engineers, which in my professional opinion are very talented individuals, are getting paid below wage standards in the industry. iRobot probably sees military systems as a market they'll have to explore and expand.