Tuesday, October 26, 2010

10 robots you can actually date?

Over at computertechnician.net is a list of 10 robots (they say) you can actually date. Well, you pay yer money and you take yer choice!

Sunday, October 24, 2010

Robot Wars: 10 Recent Developments in Unmanned Warfare You Haven’t Heard About

"When the war in Afghanistan kicked off, the U.S. military only had a handful of drones or unmanned weapons on the battlefield. Now it’s one of the military’s main concerns as they race to outdo the competition developing innovative robots that do the dirty work. Technology is always changing and here’s a look at some of the recent developments in unmanned warfare that’s making its way to a war zone."

(more)

Sunday, October 17, 2010

IEET Poll on Robot Honesty

The IEET poll, "Do we need a law making it illegal for computers and robots to deceive or be dishonest?", produced mixed results. It was not a scientific poll, but it was nevertheless nice to see the conversations it started.

Machine Ethics in the Scientific American


Congratulations to our colleagues Michael and Susan Anderson for their article on Machine Ethics (ME) in the October issue of Scientific American. In the article, titled Robot Be Good: A Call for Ethical Autonomous Machines, they introduce both ME and their recent work programming ethical principles into Nao, a humanoid robot. Nao, pictured to the right, was developed by the French company Aldebaran Robotics.
Nao is capable of finding and walking toward a patient who needs to be reminded to take a medication, bringing the medication to the patient, interacting using natural language, and notifying an overseer by e-mail when necessary.

The robot receives initial input from the overseer (who typically would be a physician), including: what time to take a medication, the maximum amount of harm that could occur if this medication is not taken, how long it would take for this maximum harm to occur, the maximum amount of expected good to be derived from taking this medication, and how long it would take for this benefit to be lost. From this input, the robot calculates its levels of duty satisfaction or violation for each of the three duties and takes different actions depending on how those levels change over time. It issues a reminder when the levels of duty satisfaction and violation have reached the point where, according to its ethical principle, reminding is preferable to not reminding. The robot notifies the overseer only when it gets to the point that the patient could be harmed, or could lose considerable benefit, from not taking the medication.
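
To make the decision logic concrete, here is a minimal Python sketch. It is not the Andersons' actual principle (which they derive from ethicists' judgments about example cases); it assumes the three duties are nonmaleficence, beneficence, and respect for the patient's autonomy, ramps the violation levels linearly as the dose becomes overdue, and uses invented thresholds purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class MedicationCase:
    """Initial input from the overseer (typically a physician)."""
    max_harm: float            # worst harm (0..1) if the dose is missed
    hours_to_max_harm: float   # time until that maximum harm would occur
    max_good: float            # best expected benefit (0..1) from the dose
    hours_to_lost_good: float  # time until that benefit would be lost

def duty_profile(case: MedicationCase, hours_overdue: float, remind: bool):
    """Return illustrative (nonmaleficence, beneficence, autonomy) levels.

    Negative values mean a duty is violated, positive that it is satisfied.
    The linear ramps are invented for illustration only.
    """
    t = max(hours_overdue, 0.0)
    harm_frac = min(t / case.hours_to_max_harm, 1.0)
    good_frac = min(t / case.hours_to_lost_good, 1.0)
    if remind:
        # Reminding averts the looming harm and lost benefit, but mildly
        # infringes on the patient's autonomy.
        return (case.max_harm * harm_frac, case.max_good * good_frac, -0.3)
    # Staying silent respects autonomy but lets harm and lost benefit grow.
    return (-case.max_harm * harm_frac, -case.max_good * good_frac, 1.0)

def choose_action(case: MedicationCase, hours_overdue: float) -> str:
    remind_levels = duty_profile(case, hours_overdue, remind=True)
    silent_levels = duty_profile(case, hours_overdue, remind=False)
    # Placeholder principle: prefer whichever action has the better summed
    # duty profile.  (The Andersons derive their actual principle from
    # ethicists' judgments about example cases, not a fixed sum.)
    if sum(remind_levels) > sum(silent_levels):
        # Escalate to the overseer only once serious harm, or a large loss
        # of benefit, is imminent despite reminding.
        if (hours_overdue >= 0.9 * case.hours_to_max_harm or
                hours_overdue >= 0.9 * case.hours_to_lost_good):
            return "notify_overseer"
        return "remind_patient"
    return "wait"

# Example: a fairly important medication, checked three hours after it was due.
case = MedicationCase(max_harm=0.8, hours_to_max_harm=6.0,
                      max_good=0.5, hours_to_lost_good=4.0)
print(choose_action(case, hours_overdue=3.0))  # -> "remind_patient"
```

The point of the structure, as the article describes it, is that the robot's behavior changes over time as the duty levels shift, rather than reminding on a fixed schedule.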

Those familiar with the Andersons' work will appreciate that Nao is the first robotic implementation of the work they did on EthEl.

Monday, October 11, 2010

Machine Learning Project at Carnegie Mellon


The NYTIMES published a story titled Aiming to Learn as We Do, a Machine Teaches Itself. The article on machine learning focused on the Never-Ending Language Learning system (NELL) at Carnegie Mellon University. It is an interesting glimpse into how far we have come in developing learning systems.
With NELL, the researchers built a base of knowledge, seeding each kind of category or relation with 10 to 15 examples that are true. In the category for emotions, for example: “Anger is an emotion.” “Bliss is an emotion.” And about a dozen more.

Then NELL gets to work. Its tools include programs that extract and classify text phrases from the Web, programs that look for patterns and correlations, and programs that learn rules. For example, when the computer system reads the phrase “Pikes Peak,” it studies the structure — two words, each beginning with a capital letter, and the last word is Peak. That structure alone might make it probable that Pikes Peak is a mountain. But NELL also reads in several ways. It will mine for text phrases that surround Pikes Peak and similar noun phrases repeatedly. For example, “I climbed XXX.”
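
As a rough sketch of that bootstrapping idea (not NELL's actual code), the toy Python below seeds two categories with trusted examples, mines the word that precedes each seed in a miniature corpus, and reuses those contexts to propose new candidate members. All of the names, sentences, and patterns are invented for illustration.

```python
import re
from collections import defaultdict

# Toy illustration of contextual pattern learning: seed each category with
# trusted facts, mine the contexts the seeds appear in, then reuse those
# contexts to propose new candidate members.
seeds = {
    "mountain": {"Pikes Peak", "Mount Everest"},
    "emotion": {"anger", "bliss"},
}

corpus = [
    "I climbed Pikes Peak in a single day.",
    "I climbed Mount Rainier with two friends.",
    "She felt anger during the meeting.",
    "She felt serenity after the storm.",
]

def mine_contexts(category):
    """Collect the word that immediately precedes each known seed member."""
    contexts = set()
    for sentence in corpus:
        for seed in seeds[category]:
            m = re.search(r"(\w+) " + re.escape(seed), sentence, re.IGNORECASE)
            if m:
                contexts.add(m.group(1).lower())
    return contexts

def propose_members(category):
    """Find phrases that occur in the same contexts as the seed members."""
    noun_phrase = r"((?:[A-Z]\w*(?: [A-Z]\w*)*)|\w+)"
    known = {s.lower() for s in seeds[category]}
    candidates = defaultdict(int)
    for prefix in mine_contexts(category):
        pattern = re.compile(re.escape(prefix) + " " + noun_phrase)
        for sentence in corpus:
            for match in pattern.findall(sentence):
                if match.lower() not in known:
                    candidates[match] += 1
    return dict(candidates)

print(propose_members("mountain"))  # {'Mount Rainier': 1}
print(propose_members("emotion"))   # {'serenity': 1}
```

The real system couples many such extractors and patterns and weighs how often they agree; the sketch only shows why "I climbed XXX" can make "Pikes Peak" look like a mountain.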

A helping hand from humans, occasionally, will be part of the answer. For the first six months, NELL ran unassisted. But the research team noticed that while it did well with most categories and relations, its accuracy on about one-fourth of them trailed well behind. Starting in June, the researchers began scanning each category and relation for about five minutes every two weeks. When they find blatant errors, they label and correct them, putting NELL’s learning engine back on track.

When Dr. Mitchell scanned the “baked goods” category recently, he noticed a clear pattern. NELL was at first quite accurate, easily identifying all kinds of pies, breads, cakes and cookies as baked goods. But things went awry after NELL’s noun-phrase classifier decided “Internet cookies” was a baked good. (Its database related to baked goods or the Internet apparently lacked the knowledge to correct the mistake.)

NELL had read the sentence “I deleted my Internet cookies.” So when it read “I deleted my files,” it decided “files” was probably a baked good, too. “It started this whole avalanche of mistakes,” Dr. Mitchell said. He corrected the Internet cookies error and restarted NELL’s bakery education.
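
Reusing the toy learner sketched above, the snippet below illustrates how a single wrong fact can poison a context pattern and cascade, and how removing the labeled error stops the avalanche. The "baked_good" seeds and sentences are invented for illustration and are not NELL's data.

```python
# Continuing the toy sketch above: one wrong fact can poison a pattern.
seeds["baked_good"] = {"pie", "bread", "Internet cookies"}  # last one is the error

corpus += [
    "I deleted my Internet cookies yesterday.",
    "I deleted my files yesterday.",
    "She baked bread for the party.",
]

print(propose_members("baked_good"))
# 'files' now shows up as a candidate baked good, because the erroneous seed
# taught the toy learner that things following "my" can be baked goods.

# The human fix: label the error, remove it, and let learning resume.
seeds["baked_good"].discard("Internet cookies")
print(propose_members("baked_good"))  # 'files' is no longer proposed
```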

His ideal, Dr. Mitchell said, was a computer system that could learn continuously with no need for human assistance. “We’re not there yet,” he said. “But you and I don’t learn in isolation either.”

Sunday, October 10, 2010

Google Driverless Cars in SF Traffic


According to a story in today's NYTIMES, Google has been testing seven driverless cars that have driven 1,000 miles without human intervention and more than 140,000 miles with only occasional intervention. The more astonishing point: "One even drove itself down Lombard Street in San Francisco, one of the steepest and curviest streets in the nation."
Robot drivers react faster than humans, have 360-degree perception and do not get distracted, sleepy or intoxicated, the engineers argue. They speak in terms of lives saved and injuries avoided — more than 37,000 people died in car accidents in the United States in 2008. The engineers say the technology could double the capacity of roads by allowing cars to drive more safely while closer together. Because the robot cars would eventually be less likely to crash, they could be built lighter, reducing fuel consumption. But of course, to be truly safer, the cars must be far more reliable than, say, today’s personal computers, which crash on occasion and are frequently infected.

But the advent of autonomous vehicles poses thorny legal issues, the Google researchers acknowledged. Under current law, a human must be in control of a car at all times, but what does that mean if the human is not really paying attention as the car crosses through, say, a school zone, figuring that the robot is driving more safely than he would?

And in the event of an accident, who would be liable — the person behind the wheel or the maker of the software?


Read the full story, titled Google Cars Drive Themselves in Traffic.

Saturday, October 2, 2010

21 drone attacks in Sept., 18 militants killed in last 2

The United States has widened pilotless drone aircraft missile strikes against al Qaeda-linked militants in Pakistan's northwest, with 21 attacks in September alone, the highest number in a single month on record.

Angered by repeated incursions by NATO helicopters over the past week, Pakistan blocked a supply route for coalition troops in Afghanistan after one such strike killed three Pakistani soldiers on Thursday in the northwestern Kurram region.

Pakistan is a crucial ally for the United States in its efforts to pacify Afghanistan, but analysts say border incursions and disruptions in NATO supplies underline growing tensions in the relationship.

On Saturday, two drone attacks within hours of each other killed 18 militants in Datta Khel town in North Waziristan tribal region along the Afghan border, intelligence officials said.

"In the first attack two missiles were fired at a house while in the second attack four missiles targeted a house and a vehicle. The death toll in the two attacks reached 18," said one intelligence official. At least six foreigners were killed in the first strike.

There was no independent confirmation of the attacks and militants often dispute official death tolls.


Read the full NYTIMES story from October 2nd here.