Sunday, November 28, 2010

NY Times piece on Military Robotics

Sunday's front-page New York Times piece on the trend towards increased roboticization of the military is accompanied online by an interactive graphic showing the latest platforms used by the U.S. military. The article quotes Ron Arkin as saying that 56 countries now have military robotics programs, and Wendell is also quoted a couple of times, drawing attention to the long-term downside of these developments. The recent ICRAC meeting in Berlin is also linked.

Thursday, November 25, 2010

Machine Consciousness 2011

SECOND Call For Papers:
Machine Consciousness 2011: Self, Integration and Explanation

Abstract submission deadline: December 31st, 2010

Submissions are invited for presentation at MC2011, a two-day symposium to be held in conjunction with Artificial Intelligence and the Simulation of Behaviour 2011 (AISB 2011), April 4-7, 2011, University of York, UK. It is anticipated that the symposium will be held April 6th-7th (TBC).

Machine Consciousness (MC) concerns itself with the creation of artefacts which have, or model, mental characteristics typically associated with consciousness such as (self-) awareness, emotion, affect, phenomenal states, imagination, etc.

Specific Foci
We encourage submissions falling under one or more of these topics:
• MC and Self modelling
• MC and Information integration
• The explanatory power of MC models
• MC and Neuroscience
• MC and Functional versus phenomenal consciousness
• MC Ethics

Ryan Calo on the Legal Challenges Arising from Open Robotic Platforms

A very interesting article by Ryan Calo on issues of legal liability and open or closed robot platforms has been published by the Maryland Law Review. The full article, titled Open Robotics, is available for download online.
Abstract:
With millions of home and service robots already on the market, and millions more on the way, robotics is poised to be the next transformative technology. As with personal computers, personal robots are more likely to thrive if they are sufficiently open to third-party contributions of software and hardware. No less than with telephony, cable, computing, and the Internet, an open robotics could foster innovation, spur consumer adoption, and create secondary markets.

But open robots also present the potential for inestimable legal liability, which may lead entrepreneurs and investors to abandon open robots in favor of products with more limited functionality. This possibility flows from a key difference between personal computers and robots. Like PCs, open robots have no set function, run third-party software, and invite modification. But unlike PCs, personal robots are in a position directly to cause physical damage and injury. Thus, norms against suit and expedients to limit liability such as the economic loss doctrine are unlikely to transfer from the PC and consumer software context to that of robotics.

This essay therefore recommends a selective immunity for manufacturers of open robotic platforms for what end users do with these platforms, akin to the immunity enjoyed under federal law by firearms manufacturers and websites. Selective immunity has the potential to preserve the conditions for innovation without compromising incentives for safety. The alternative is to risk being left behind in a key technology by countries with a higher bar to litigation and a serious head start.

Babies Learn from Robots and Vice Versa


The NSF website has an article of interest titled, Babies Learn From Robots While Robots Learn From Babies: Interdisciplinary research combines infant learning and computer science


The study indicates that, more than appearance, robots will need to possess sophisticated cognitive abilities, such as the ability to understand speech and imitate human actions, in order to win human social acceptance. The specific set of movements or gestures a robot should have will depend on a number of factors, such as the domain in which it operates--whether the robot is an emergency responder or a child's tutor, for example. Programming for local culture is also important in determining whether humans will interact with a robot.

"Some skills such as being able to interact through speech and understand a human's intentions are universally applicable to all robots that interact with humans," said Rao. "Other skills will need to be learned on-the-fly, which is one of the reasons why we have focused our robotics research on learning by imitating humans."

IEET Poll: Consensus on Improving Human Morality


In an IEET poll, only a minority of respondents believe we will need the assistance of AI to improve human morality.

Friday, November 5, 2010

The Singularity Hypothesis

Springer has commissioned an edited volume in The Frontiers Collection (which deals with forefront topics in science and philosophy) about the singularity hypothesis and related questions, such as the intelligence explosion, acceleration, transhumanism, and whole brain emulation. The book will examine central questions: whether the singularity hypothesis can be reformulated as a coherent and falsifiable conjecture, what its empirical value is, and what its most likely consequences would be, in particular those associated with existential risks.

The purpose of this volume is to report the results of using the standard toolkit of scientific enquiry and analytic philosophy to answer these questions. Chapters will consist of peer-reviewed essays addressing the scientifically literate nonspecialist in a language that is divorced from speculative, apocalyptic, and irrational claims.


Visit The Singularity Hypothesis Blog.

David Chalmers on the Singularity

As a follow-up to his 2009 talk at the Singularity Summit, David Chalmers has written an extended article on the subject. The article is titled, The Singularity: A Philosophical Analysis. It is perhaps the most comprehensive reflection by a philosopher to date on the subject. The paper has three sections. The first section covers the arguments for an intelligence explosion. The second section will be of most interest to readers of this blog. In that part of the article Chalmers considers "how to negotiate the singularity: if it is possible that there will be a singularity, how can we maximize the chances of a good outcome?" In the last part he looks at uploading and the place for humans in a post-singularity world.

Wednesday, November 3, 2010

Call For Papers: Robotics: War and Peace

Special Issue of Philosophy and Technology
Editor-in-Chief: Luciano Floridi
Call for Papers on
Robotics: War and Peace
Guest Editor: John P. Sullins
Closing date for submissions: January 9th, 2011

The topic. Two of the most philosophically interesting aspects of robotics technology are its use in military applications and its use in engineered companions and helpers for the home. Military technology is going through a change as significant as the advent of gunpowder or nuclear weapons. Robotics has made great advances in the last decade, due mostly to research and development funded by various militaries around the world. The resulting systems stand to change every aspect of war and peacekeeping. At the other end of the spectrum, robots are being engineered to care for the elderly and provide love and companionship for the lonely. This special issue will be devoted to exploring the constellation of philosophical issues that revolve around the role of robots in war and peace.

The special issue. We are interested in high quality papers that research not only the how of robotics, but also answer the tough questions of why we should, or should not, deploy these systems in our homes and battlefields. Suggested topics include, but are not limited to, the following: How does the growing use of telerobotic weapons systems affect the future of peaceful relations? Should autonomous weapons be deployed to the modern battlefield? Can the values of just war be advanced through robotics? Is it feasible or desirable to build peacekeeping robots? How do robotic weapons systems change the role of the human warrior? How can we program warrior virtues into a machine? Do drones contribute to a more or less stable world? What changes need to be made to modern thinking on the rules of war given the rapid growth of autonomous and semi-autonomous weapons systems? How do drones change the public understanding of war and peace? What values are driving the rise of robotic casualty care systems? How does one engineer ethical rules into robotic weapons and love or companionship into artificial agents? What philosophical values are driving the development of elder care robots? What ethical norms should inform the design of companion robots? Can philosophically interesting relations occur between humans and machines? Is Eros a robot? What are the sexual politics and gender issues involved in building robotic love dolls?

We are particularly interested in papers that not only critique, but suggest ways to move forward on one of the most important issues confronting the philosophy of technology today.

Due Dates. Given the pace at which robotic technology is developing, we have adopted a very tight schedule for this issue. Initial submissions for review must be uploaded to the journal editorial management system by January 9th, 2011 with revised papers uploaded for final review in March 2011. This special issue will be published in July 2011 (3rd issue of Philosophy & Technology).

Submissions will be taken through the journal’s website: http://www.editorialmanager.com/phte/.

For further information please write to the guest editor: Professor John Sullins john.sullins@sonoma.edu

Prosthetic Limbs Interfaced with the Brain


Technology Review reports on two DARPA-financed prosthetic arms, designed by different organizations, that have brain interfaces. The prosthetic limbs should be available within 5-10 years. One of the devices was developed by DEKA Research and Development, the other by the Applied Physics Laboratory (APL) at Johns Hopkins University.
Limited testing of neural implants in severely paralyzed patients has been underway for the last five years. About five people have been implanted with chips to date, and they have been able to control cursors on a computer screen, drive a wheelchair, and even open and close a gripper on a very simple robotic arm. More extensive testing in monkeys implanted with a cortical chip shows the animals can learn to control a relatively simple prosthetic arm in a useful way, using it to grab and eat a piece of marshmallow.
"The next big step is asking, how many dimensions can you control?" says John Donoghue, a neuroscientist at Brown University who develops brain-computer interfaces. "Reaching out for water and bringing it to the mouth takes about seven degrees of freedom. The whole arm has on order of 25 degrees of freedom." Donoghue's group, which has overseen previous tests of cortical implants in patients, now has two paralyzed volunteers testing the DEKA arm. Researchers at APL have developed a second prosthetic arm with an even greater repertoire of possible movements and have applied for permission to begin human tests. They aim to begin implanting spinal cord injury patients in 2011, in collaboration with scientists at the University of Pittsburgh and Caltech.

Read the full story by Emily Singer titled, Robotic Limbs that Plug Into the Brain: Scientists are testing whether brain signals can control sophisticated prosthetic arms.

Tuesday, November 2, 2010

The Robonauts are Coming

The NY Times is reporting today on a NASA/GM project to send humanoid robots to the International Space Station and possibly the moon. The NASA page mentions the ISS mission, but not the moon.

Monday, November 1, 2010

Convicted for Outwitting 'Trading Robots'

A Report Listed on the CNBC Website.

Norwegians Convicted for Outwitting 'Trading Robots'
Financial Times | October 14, 2010 | 05:29 AM EDT

Two Norwegian day traders have been handed suspended prison sentences for market manipulation after outwitting the automated trading system of a big US broker. The two men worked out how the computerized system would react to certain trading patterns – allowing them to influence the price of low-volume stocks.

The case, involving Timber Hill, a unit of US-based Interactive Brokers, comes amid growing scrutiny of automated trading systems after the so-called “flash crash” in May, when a single algorithm triggered a plunge in US stocks. Svend Egil Larsen and Peder Veiby had won admiration from many Norwegians ahead of the court case for their apparent victory for man over machine.

Prosecutors said Mr Larsen and Mr Veiby “gave false and misleading signals about supply, demand and prices” by manipulating several Norwegian stocks through Timber Hill’s online trading platform. Anders Brosveet, lawyer for Mr Veiby, acknowledged that his client had learnt how Timber Hill’s trading algorithm would behave in response to certain trades but denied this amounted to market manipulation. “They had an idea of how the computer would change the prices but that does not make them responsible for what the computer did,” he told the Financial Times. Both men have vowed to appeal against their convictions.

Messages posted on Norwegian internet forums on Wednesday indicated widespread sympathy for the defendants. “It is the trading robots that should be brought to justice when it is them that cause so much wild volatility in the markets,” said one post. Mr Veiby, who made the most trades, was sentenced to 120 days in prison, suspended for two years, and fined NKr165,000 ($28,500). Mr Larsen received a 90-day suspended sentence and a fine of NKr105,000. The fines were about equal to the profits made by each man from the illegal trades. Christian Stenberg, the Norwegian police attorney responsible for the case, said any admiration for the men was misplaced. “This is a new kind of manipulation but it is still at the expense of other investors in the market,” he said. Interactive Brokers declined to comment. Irregular trading patterns were first spotted by the Oslo stock exchange and referred to Norway’s financial regulator.
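
To see in principle how a trading algorithm can be outwitted, here is a toy Python sketch. It is our own illustration, not a model of Timber Hill's actual system: every name, rule, and number is invented. The naive market maker below mechanically shifts its quote one tick in the direction of each trade, so a manipulator who has worked out that rule can walk the price up with many tiny buys before unloading a large position.

# Toy sketch only: a market maker that predictably shifts its quote one
# tick in the direction of each incoming trade, and a manipulator who
# exploits that predictability. All numbers are invented.

class NaiveMarketMaker:
    def __init__(self, price, tick=0.10):
        self.price = price
        self.tick = tick

    def trade(self, side, qty):
        """Fill an order at the current quote, then adjust the quote."""
        fill_price = self.price
        self.price += self.tick if side == "buy" else -self.tick
        return fill_price * qty

def manipulate(mm, position):
    """Walk the quote up with tiny buys, then unload a big position.
    (Ignores the earlier cost of acquiring that position.)"""
    cost = sum(mm.trade("buy", 1) for _ in range(20))  # twenty 1-share buys
    proceeds = mm.trade("sell", position)              # sell the block high
    return proceeds - cost

mm = NaiveMarketMaker(price=10.00)
print(f"toy manipulation profit: {manipulate(mm, position=1000):.2f}")

Real market-making algorithms are, of course, far less crude; the sketch only illustrates the general idea of a predictable reaction rule that can be reverse-engineered.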

Dancing with a Star Robot

Tuesday, October 26, 2010

10 robots you can actually date?

Over at computertechnician.net is a list of 10 robots (they say) you can actually date. Well, you pay yer money and you take yer choice!

Sunday, October 24, 2010

Robot Wars: 10 Recent Developments in Unmanned Warfare You Haven’t Heard About

"When the war in Afghanistan kicked off, the U.S. military only had a handful of drones or unmanned weapons on the battlefield. Now it’s one of the military’s main concerns as they race to outdo the competition developing innovative robots that do the dirty work. Technology is always changing and here’s a look at some of the recent developments in unmanned warfare that’s making its way to a war zone."

(more)

Sunday, October 17, 2010

IEET Poll on Robot Honesty

The IEET poll, "Do we need a law making it illegal for computers and robots to deceive or be dishonest?" produced mixed results. This was not a scientific poll, but it was nevertheless good to see the conversations it started.

Machine Ethics in the Scientific American


Congratulations to our colleagues Michael and Susan Anderson for their article on Machine Ethics (ME) in the October issue of Scientific American. In the article, titled Robot Be Good: A Call for Ethical Autonomous Machines, they introduce both ME and their recent work programming ethical principles into Nao, a humanoid robot. Nao, pictured to the right, was developed by the French company Aldebaran Robotics.
Nao is capable of finding and walking toward a patient who needs to be reminded to take a medication, bringing the medication to the patient, interacting using natural language, and notifying an overseer by e-mail when necessary. The robot receives initial input from the overseer (who typically would be a physician), including: what time to take a medication, the maximum amount of harm that could occur if this medication is not taken, how long it would take for this maximum harm to occur, the maximum amount of expected good to be derived from taking this medication, and how long it would take for this benefit to be lost. From this input, the robot calculates its levels of duty satisfaction or violation for each of the three duties and takes different actions depending on how those levels change over time. It issues a reminder when the levels of duty satisfaction and violation have reached the point where, according to its ethical principle, reminding is preferable to not reminding. The robot notifies the overseer only when it gets to the point that the patient could be harmed, or could lose considerable benefit, from not taking the medication.

Those familiar with the Andersons' work will appreciate that Nao is the first robotic implementation of the work they did on EthEl.
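
To make the duty calculation described above a little more concrete, here is a minimal Python sketch. It is our own simplification for illustration only, not the Andersons' published principle or code: the linear decay model, the thresholds, and the input values are all invented.

# Hypothetical sketch of a medication-reminder duty calculation.
# The linear model, thresholds, and inputs are invented for illustration.

def duty_pressure(hours_elapsed, max_harm, hours_to_max_harm,
                  max_good, hours_to_lost_good):
    """Scale the risk of harm and of lost benefit to [0, 1] over time."""
    harm_risk = min(1.0, hours_elapsed / hours_to_max_harm) * max_harm
    lost_good = min(1.0, hours_elapsed / hours_to_lost_good) * max_good
    return max(harm_risk, lost_good)   # the worst duty violation so far

def choose_action(hours_elapsed, params,
                  remind_threshold=0.3, notify_threshold=0.7):
    pressure = duty_pressure(hours_elapsed, **params)
    if pressure >= notify_threshold:
        return "notify overseer"       # the patient could now be harmed
    if pressure >= remind_threshold:
        return "remind patient"        # reminding now outweighs autonomy
    return "wait"                      # respect the patient's autonomy

# Overseer input: harm/benefit on a 0-1 scale, time horizons in hours.
params = dict(max_harm=0.8, hours_to_max_harm=6.0,
              max_good=0.5, hours_to_lost_good=12.0)
for t in (0.5, 3.0, 7.0):
    print(t, "->", choose_action(t, params))   # wait, remind, notify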

Monday, October 11, 2010

Machine Learning Project at Carnegie Mellon


The NYTIMES published a story titled, Aiming to Learn as We Do, a Machine Teaches Itself. The article on machine learning focused on the Never-Ending Language Learning system (NELL) at Carnegie Mellon University. It is an interesting glimpse into how far we have come in developing learning systems.
With NELL, the researchers built a base of knowledge, seeding each kind of category or relation with 10 to 15 examples that are true. In the category for emotions, for example: “Anger is an emotion.” “Bliss is an emotion.” And about a dozen more.

Then NELL gets to work. Its tools include programs that extract and classify text phrases from the Web, programs that look for patterns and correlations, and programs that learn rules. For example, when the computer system reads the phrase “Pikes Peak,” it studies the structure — two words, each beginning with a capital letter, and the last word is Peak. That structure alone might make it probable that Pikes Peak is a mountain. But NELL also reads in several ways. It will mine for text phrases that surround Pikes Peak and similar noun phrases repeatedly. For example, “I climbed XXX.”

A helping hand from humans, occasionally, will be part of the answer. For the first six months, NELL ran unassisted. But the research team noticed that while it did well with most categories and relations, its accuracy on about one-fourth of them trailed well behind. Starting in June, the researchers began scanning each category and relation for about five minutes every two weeks. When they find blatant errors, they label and correct them, putting NELL’s learning engine back on track.

When Dr. Mitchell scanned the “baked goods” category recently, he noticed a clear pattern. NELL was at first quite accurate, easily identifying all kinds of pies, breads, cakes and cookies as baked goods. But things went awry after NELL’s noun-phrase classifier decided “Internet cookies” was a baked good. (Its database related to baked goods or the Internet apparently lacked the knowledge to correct the mistake.)

NELL had read the sentence “I deleted my Internet cookies.” So when it read “I deleted my files,” it decided “files” was probably a baked good, too. “It started this whole avalanche of mistakes,” Dr. Mitchell said. He corrected the Internet cookies error and restarted NELL’s bakery education.

His ideal, Dr. Mitchell said, was a computer system that could learn continuously with no need for human assistance. “We’re not there yet,” he said. “But you and I don’t learn in isolation either.”
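
Mitchell's "Internet cookies" avalanche is easier to understand once you see how the pattern bootstrapping works. The toy Python sketch below is our own, with invented seeds and sentences; the real NELL couples many classifiers and runs over web-scale text. It learns a context pattern such as "I climbed" from seed instances of a category and then uses the pattern to propose a new member.

import re

# Toy sketch of NELL-style pattern bootstrapping (invented data).

seeds = {"mountain": {"Pikes Peak", "Mount Rainier"}}
corpus = [
    "Last summer I climbed Pikes Peak with my brother.",
    "I climbed Mount Rainier in 2008.",
    "I climbed Ben Nevis on a rainy day.",
    "I deleted my Internet cookies.",
]

def learn_patterns(category):
    """Collect two-word contexts (e.g. 'I climbed') preceding seed instances."""
    patterns = set()
    for sentence in corpus:
        for instance in seeds[category]:
            if instance in sentence:
                words = sentence.split(instance)[0].strip().split()
                patterns.add(" ".join(words[-2:]))
    return patterns

def extract_candidates(category):
    """Propose new instances: capitalized phrases following a learned pattern."""
    candidates = set()
    for pattern in learn_patterns(category):
        for sentence in corpus:
            m = re.search(re.escape(pattern) + r"\s+((?:[A-Z]\w+\s?)+)", sentence)
            if m:
                candidates.add(m.group(1).strip())
    return candidates - seeds[category]

print(extract_candidates("mountain"))   # proposes {'Ben Nevis'}

One wrongly admitted instance ("Internet cookies") would contribute wrong patterns ("I deleted my"), which in turn admit more wrong instances ("files"): exactly the avalanche Dr. Mitchell describes.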

Sunday, October 10, 2010

Google Driverless Cars in SF Traffic


According to a story in today's NYTIMES, Google has been testing seven driverless cars that have driven 1,000 miles without human intervention and more than 140,000 miles with only occasional intervention. The more astonishing point: "One even drove itself down Lombard Street in San Francisco, one of the steepest and curviest streets in the nation."
Robot drivers react faster than humans, have 360-degree perception and do not get distracted, sleepy or intoxicated, the engineers argue. They speak in terms of lives saved and injuries avoided — more than 37,000 people died in car accidents in the United States in 2008. The engineers say the technology could double the capacity of roads by allowing cars to drive more safely while closer together. Because the robot cars would eventually be less likely to crash, they could be built lighter, reducing fuel consumption. But of course, to be truly safer, the cars must be far more reliable than, say, today’s personal computers, which crash on occasion and are frequently infected.

But the advent of autonomous vehicles poses thorny legal issues, the Google researchers acknowledged. Under current law, a human must be in control of a car at all times, but what does that mean if the human is not really paying attention as the car crosses through, say, a school zone, figuring that the robot is driving more safely than he would?

And in the event of an accident, who would be liable — the person behind the wheel or the maker of the software?


Read the full story titled, Google Cars Drive Themselves in Traffic.

Saturday, October 2, 2010

21 drone attacks in Sept., 18 militants killed in last 2

The United States has widened pilotless drone aircraft missile strikes against al Qaeda-linked militants in Pakistan's northwest, with 21 attacks in September alone, the highest number in a single month on record.

Angered by repeated incursions by NATO helicopters over the past week, Pakistan blocked a supply route for coalition troops in Afghanistan after one such strike killed three Pakistani soldiers on Thursday in the northwestern Kurram region.

Pakistan is a crucial ally for the United States in its efforts to pacify Afghanistan, but analysts say border incursions and disruptions in NATO supplies underline growing tensions in the relationship.

On Saturday, two drone attacks within hours of each other killed 18 militants in Datta Khel town in North Waziristan tribal region along the Afghan border, intelligence officials said.

"In the first attack two missiles were fired at a house while in the second attack four missiles targeted a house and a vehicle. The death toll in the two attacks reached 18," said one intelligence official. At least six foreigners were killed in the first strike.

There was no independent confirmation of the attacks and militants often dispute official death tolls.


Read the full NYTIMES story from October 2nd here.

Wednesday, September 29, 2010

CIA charged with use of 'illegal, inaccurate code to target kill drones'

A story embarrassing to the CIA appeared in The Register on September 24th. The story is, CIA used 'illegal, inaccurate code to target kill drones': 'They want to kill people with software that doesn't work'.
The CIA is implicated in a court case in which it's claimed it used an illegal, inaccurate software "hack" to direct secret assassination drones in central Asia.

The target of the court action is Netezza, the data warehousing firm that IBM bid $1.7bn for on Monday. The case raises serious questions about the conduct of Netezza executives, and the conduct of CIA's clandestine war against senior jihadis in Afghanistan and Pakistan.

The dispute surrounds a location analysis software package - "Geospatial" - developed by a small company called Intelligent Integration Systems (IISi), which like Netezza is based in Massachusetts. IISi alleges that Netezza misled the CIA by saying that it could deliver the software on its new hardware, to a tight deadline.

When the software firm then refused to rush the job, it's claimed, Netezza illegally and hastily reverse-engineered IISi's code to deliver a version that produced locations inaccurate by up to 13 metres. Despite knowing about the miscalculations, the CIA accepted the software, court submissions indicate.

Tuesday, September 28, 2010

Dogs 1 Robots 0

The Dogs of War Get Their Due in New Jersey

In Iraq and Afghanistan, the nature of war has changed, forcing the Pentagon to retool for unconventional foes. Amid the push for robotic IED detectors and aerial drones, however, is renewed investment in another, less techie counterinsurgency tool: war dogs. While they’ve served in every modern conflict, no other war has so closely matched their particular skills—which helps explain why their ranks have more than doubled since 2001, from 1,300 to about 2,800 dogs, mostly German Shepherds. “The capability they bring”—to track snipers, smell explosives, and sense danger—“cannot be replicated by man or machine,” said Gen. David Petraeus in February 2008, according to an Air Force publication. He went on to urge investment in the animals, noting that “their yield outperforms any asset we have in our inventory.”

That, coupled with the fact that most dogs serve multiple tours and dozens have died in the current conflicts, compelled the U.S. War Dogs Association, a New Jersey–based nonprofit, to lobby for an official medal for canine service. Last month, the Pentagon demurred, saying medals are only for people. So the association designed its own two-inch-wide medal for deserving dogs nationwide. It’s shipped medals to about 30 dogs, including hounds at Fort Lewis in Washington state and Maryland’s Fort Meade.

Bored Predator Drone


Bored Predator Drone Pumps A Few Rounds Into Mountain Goat

From: the Onion

Monday, September 27, 2010

Was the Stuxnet Virus produced by the US, Israel, or another wealthy nation?

Many of you may have noticed stories this week about the Stuxnet virus, which suggest that a virus of this sophistication could only have been created by a large directed effort, probably that of a wealthy nation. The apparent target of the virus is industrial control systems in Iran. Stuxnet specifically targets software developed by Siemens AG. It is presumed that China, Russia, Israel, Britain, Germany and the United States are the countries most likely to have initiated this new venture in cyberwarfare.

Read the AP article titled, Computer Attacks Linked to Wealthy Group or Nation.

Sunday, September 26, 2010

More on Robot Deception

While I was in Berlin, Germany last week with Ron Arkin and Colin Allen, the IEET published the question, "Do we need a law making it illegal for computers and robots to deceive or be dishonest?" The question had been stimulated by recent articles about research performed by Ron and research engineer Alan Wagner. Ron was particularly pleased that this research had gotten people to ask questions such as this. Stimulating reflection on serious ethical concerns has always been one of his goals.

However, our conversations went in a somewhat different direction. Is the publicity creating the impression that the relatively low level mechanisms Arkin and Wagner introduced into their experiment are the equivalent of higher level cognitive ability? In other words, are we feeding a false impression that robots are much more sophisticated than they are, or are likely to be in the foreseeable future?

Ron pointed out that the actual research, and the press release that accompanied it, were responsible; as we all know, however, the press can distort scientific findings for its own purposes.

Here are some additional links for those interested in this subject.

Click here to link to the research paper titled, Acting Deceptively: Providing Robots with the Capacity for Deception
Hyperlink to the original Press Release.
Article at NewScientist titled, Deceptive robots hint at machine self-awareness.
Vote on the question at Polldaddy and view the results.

Roughly 60% favored outlawing or restricting deceptive robots, but only half of these thought such a law enforceable.

Society for Philosophy and Technology: Technology and Security

CONFERENCE ANNOUNCEMENT:

From May 26-29, 2011, the University of North Texas will host the 17th international conference of the Society for Philosophy and Technology: https://spt2011.unt.edu/.

The conference theme is "Technology and Security," but papers reflecting on any aspect of technology are welcomed. We also welcome interdisciplinary submissions from those studying technology in fields other than philosophy. See the call for papers here: https://spt2011.unt.edu/call-papers. Abstracts can be submitted to: spt2011@unt.edu. Please note the abstract submission deadline is November 1, 2010.

The keynote speaker is P.W. Singer, Senior Fellow and Director of the 21st Century Defense Initiative at the Brookings Institution and author of Wired for War: The Robotics Revolution and 21st Century Combat.

ETHICOMP 2011: The Social Impact of Social Computing

Sheffield Hallam University, Sheffield, UK

Wednesday 14 September to Friday 16 September 2011

Call for Papers to the 12th ETHICOMP conference: “The social impact of social computing”.

The overall theme of ETHICOMP 2011 is the huge range of impacts on us all of advances in social computing. Under this theme, papers, with a social/ethical perspective, within the following areas are particularly welcomed.

APPLICATIONS
Online communities - Blogs, wikis, social networks, collaborative bookmarking, social tagging, podcasts, tweeting, augmented reality
Business and public sector - Recommendation, forecasting, reputation, feedback, decision analysis, e-government, e-commerce
Interactive entertainment - Edutainment, training, gaming, storytelling
TECHNOLOGICAL INFRASTRUCTURE
Web technology
Database technology
Multimedia technology
Wireless technology
Agent technology
Software engineering
THEORETICAL UNDERPINNINGS
Social psychology
Communication and human-computer interaction theories
Social network analysis
Anthropology
Organisation theory
Sociology
Computing theory
Ethical theory
Information and computer ethics
Governance
Papers covering one or several of these perspectives are called for from business, government, computer science, information systems, law, media, anthropology, psychology, sociology and philosophy. Interdisciplinary papers and those from new researchers and practitioners are encouraged. A paper might take a conceptual, applied, practical or historical focus. Case studies and reports on lessons learned in practice are welcomed.

The full announcement is available here.

Saturday, September 25, 2010

Call to Establish an Arms Control Regime for Robots

A strong call to limit armed tele-operated and autonomous systems came out of the workshop in Berlin this past week. What follows is an excerpt from the full statement.
We believe:
• That the long-term risks posed by the proliferation and further development of these weapon systems outweigh whatever short-term benefits they may appear to have.
• That it is unacceptable for machines to control, determine, or decide upon the application of force or violence in conflict or war.* In all cases where such a decision must be made, at least one human being must be held personally responsible and legally accountable for the decision and its foreseeable consequences.
• That the currently accelerating pace and tempo of warfare is further escalated by these systems and undermines the capacity of human beings to make responsible decisions during military operations.
• That the asymmetry of forces that these systems make possible encourages states, and non-state actors, to pursue forms of warfare that reduce the security of citizens of possessing states.
• That the fact that a vehicle is uninhabited does not confer a right to violate the sovereignty of states.

There is, therefore, an urgent need to bring into existence an arms control regime to regulate the development, acquisition, deployment, and use of armed tele-operated and autonomous robotic weapons.

The full statement can be read here.

You Can Teach a Quadrotor Drone New Tricks

Friday, September 17, 2010

Robot Arms Control Workshop in Berlin, Germany


Many of the people familiar to readers of this blog will be coming together for a three-day workshop (Sept 20th-22nd) in Berlin, Germany to discuss various calls for international arms treaties directed at regulating the roboticization of warfare. The workshop has been organized by the International Committee for Robot Arms Control (ICRAC). Among the workshop participants will be: Jürgen Altmann, Ron Arkin, Peter Asaro, Dennis Gormley, Joanne Mariner, Eugene Miasnikov, Götz Neuneck, Elizabeth Quintana, Wolfgang Richter, Lambèr Royakkers, Niklas Schörnig, Noel Sharkey, Rob Sparrow, Mark Steinbeck, Detlev Wolter, Uta Zapf, Colin Allen, and myself.

The Guardian published an article yesterday (Sept 16th) discussing the conference and the need for robot arms control. Read the full article titled, Robot warfare: campaigners call for tighter controls of deadly drones: Conferences will raise concerns over unpiloted aircraft and ground machines that choose their own targets.

Cyborgs on Mars

In the September issue of Endeavour, senior curator at the Smithsonian National Air and Space Museum Roger Launius takes a look at the historical debate surrounding human colonization of the solar system and how human biology will have to adapt to such extreme space environments. . .

If humans are to colonize other planets, Launius said it could well require the "next state of human evolution" to create a separate human presence where families will live and die on that planet. In other words, it wouldn't really be Homo sapiens sapiens that would be living in the colonies, it could be cyborgs—a living organism with a mixture of organic and electromechanical parts—or in simpler terms, part human, part machine. . .

The possibility of using cyborgs for space travel has been the subject of research for at least half a century. An influential article published in 1960 by Manfred Clynes and Nathan Kline titled “Cyborgs and Space” changed the debate. According to them, there was a better alternative to recreating the Earth’s environment in space, the predominant thinking during that time. The two scientists compared that approach to “a fish taking a small quantity of water along with him to live on land.” They felt that humans should be willing to partially adapt to the environment to which they would be traveling.

“Altering man’s bodily functions to meet the requirements of extraterrestrial environments would be more logical than providing an earthly environment for him in space,” Clynes and Kline wrote. . .

Grant Gillett, a professor of medical ethics at the Otago Bioethics Center of the University of Otago Medical School in New Zealand said addressing the ethical issue is really about justifying the need for such an approach, the need for altering humans so significantly that they end up not entirely human in the end.

“(Whether we) should do it largely depends on if it's important enough for humanity in general,” Gillett said. “To some extent, that's the justification.”


Read the full article titled, Cyborgs Needed for Escape from Earth in Astrobiology Magazine from which these excerpts were extracted.

The Future of Context-Aware Computing

Justin Rattner, Intel VP and Chief Technology Officer, described the future of context-aware computing (devices that anticipate needs and desires and try to fulfill them) during a keynote at the Intel Developer Forum.
Rattner devoted most of his keynote to explaining and demonstrating how Intel is working to make context-aware computing a mainstream reality. The demonstrations included:
• Tim Jarrell, the Vice President and Publisher of Fodor's Travel, arrived onstage to demonstrate a new Fodor's app (created in collaboration with Intel) that can recommend restaurants based on what the user likes and eats, and the user's location in the city. When used in "Wander" mode, the app helps center the user by providing him information about surrounding landmarks. (A very similar technology, named Augmented Reality, was demonstrated at the Intel Labs "Zero Day" IDF event on Sunday.) The app is not available yet, but Fodor's is continuing development on it.
• Intel Research Scientist Lama Nachman demonstrated the use of "shimmer sensors," wearable sensors that measure stride time and swing time and showed charts that measured Rattner's movements onstage during his speech (he had been wearing them on his ankles). This technology was intended to help measure the gait of elderly people who had difficulty walking.
• A remote control that "enhances the smart TV experience" by recognizing who's holding a remote control and adjusting the viewing experience accordingly.
• A sense system, roughly the size of a large cell phone, that could animate avatars to convey a person's current activity. In one example, the device animated a troll-like creature that sat while its owner sat drinking coffee, then walked while talking on a cell phone as the owner took a call and left the coffee shop.

Read the full PCMAG article titled, Rattner Describes the Future of Context-Aware Computing.

Brain Controlled Wheelchair


Researchers at the Federal Institute of Technology in Lausanne have developed a wheelchair that patients can control with their thoughts. The technology combines an electroencephalograph (EEG) with software that infers the intent of the patient.
EEG has limited accuracy and can only detect a few different commands. Maintaining these mental exercises when trying to maneuver a wheelchair around a cluttered environment can also be very tiring, says José del Millán, director of noninvasive brain-machine interfaces at the Federal Institute of Technology, who led the project. "People cannot sustain that level of mental control for long periods of time," he says. The concentration required also creates noisier signals that can be more difficult for a computer to interpret.

Shared control addresses this problem because patients don't need to continuously instruct the wheelchair to move forward; they need to think the command only once, and the software takes care of the rest. "The wheelchair can take on the low-level details, so it's more natural," says Millán.

The wheelchair is equipped with two webcams to help it detect obstacles and avoid them. If drivers want to approach an object rather than navigate around it, they can give an override command. The chair will then stop just short of the object.

Read the full article from Technology Review titled, Wheelchair Makes the Most of Brain Control: Artificial intelligence improves a wheelchair system that could give paralyzed people greater mobility.
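
In outline, the shared-control loop described above might look like the following Python sketch. This is purely our hypothetical rendering, not the Lausanne group's implementation; the sensor and drive functions are invented stand-ins.

import random

# Hypothetical sketch of shared control: a sparse high-level EEG command
# sets the goal once, while low-level software continuously handles
# obstacle avoidance. All sensor and drive functions are stand-ins.

def read_eeg_command():
    """Stand-in for an EEG classifier; usually returns no new command."""
    return random.choice(["forward", "left", None, None, None, None])

def obstacle_ahead():
    """Stand-in for the webcam-based obstacle detector."""
    return random.random() < 0.2

def drive(action):
    print("driving:", action)

goal = None
for step in range(15):
    command = read_eeg_command()
    if command is not None:
        goal = command                     # the rider thinks the command once
    if goal is None:
        continue                           # no command issued yet
    if obstacle_ahead():
        drive("steering around obstacle")  # software handles low-level details
    else:
        drive(goal)

An override command of the kind the article mentions would simply suppress the avoidance branch and stop the chair just short of the object.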

Sunday, September 12, 2010

Ryan Calo Interviewed by Robots Podcast


Ryan Calo, a senior research fellow at Stanford Law School and founder of the Stanford Robots and Law Blog, was interviewed by the Robots Podcast. The full interview can be played here.

Survey on Attitudes Regarding Unmanned Systems

Gerhard Dabringer conducted a survey on unmanned systems at AUVSI in Denver in August. He has made his findings available in a summary report, available here. Among his findings are:

1. The use of Robotic Combat Systems (RCS) is generally approved of, though there is a strong tendency towards the "man in the loop" approach, especially when systems are weaponized.

2. There is a strong need for a broad discussion of ethical aspects as well as legal aspects of RCS.

3. Policy makers need to make sure that the existing discussions are being noticed.

4. RCS are recognized as a new ethical dimension in warfare and a majority sees the need for new international legislation.

5. Autonomous use of weapons by the RCS is generally not approved of.

Deceptive Robots


Gizmag describes research by Ronald Arkin and Alan Wagner in which robots are taught to deceive.
What it all boiled down to was a series of 20 hide-and-seek experiments. The autonomous hiding/deceiving robot could randomly choose one of three hiding spots, and would have no choice but to knock over one of three paths of colored markers to get there. The seeking robot could then, presumably, find the hiding robot by identifying which path of markers was knocked down. Sounds easy, except that sneaky, conniving hiding robot would turn around after knocking down one path of markers, and go hide in one of the other spots.

In 75 percent of the trials, the hiding robot succeeded in evading the seeking robot. In the other 25 percent, it wasn’t able to knock down the right markers necessary to produce its desired deception. The full results of the Georgia Tech experiment were recently published in the International Journal of Social Robotics.


The full research article is titled, Acting Deceptively: Providing Robots with the Capacity for Deception.
Abstract: Deception is utilized by a variety of intelligent systems ranging from insects to human beings. It has been argued that the use of deception is an indicator of theory of mind (Cheney and Seyfarth in Baboon Metaphysics: The Evolution of a Social Mind, 2008) and of social intelligence (Hauser in Proc. Natl. Acad. Sci. 89:12137–12139, 1992). We use interdependence theory and game theory to explore the phenomena of deception from the perspective of robotics, and to develop an algorithm which allows an artificially intelligent system to determine if deception is warranted in a social situation. Using techniques introduced in Wagner (Proceedings of the 4th International Conference on Human-Robot Interaction (HRI 2009), 2009), we present an algorithm that bases a robot’s deceptive action selection on its model of the individual it’s attempting to deceive. Simulation and robot experiments using these algorithms which investigate the nature of deception itself are discussed.
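
The marker trick is simple enough to re-create in a few lines. The toy Python simulation below is our own sketch and does not implement the paper's interdependence-theoretic action selection; the 25 percent "slip" rate loosely mirrors the reported trials in which the hider failed to knock down the intended markers.

import random

# Toy re-creation of the hide-and-seek deception experiment described above.

SPOTS = [0, 1, 2]

def hider_turn(slip_prob=0.25):
    """Knock markers toward one decoy spot, then hide in a different one."""
    decoy = random.choice(SPOTS)
    hiding = random.choice([s for s in SPOTS if s != decoy])
    # Sometimes the knock goes wrong, leaving a true trail to the hideout.
    knocked = hiding if random.random() < slip_prob else decoy
    return knocked, hiding

def seeker_turn(knocked):
    """The seeker naively follows the knocked-down trail."""
    return knocked

trials = 100_000
evaded = sum(seeker_turn(k) != h for k, h in (hider_turn() for _ in range(trials)))
print(f"hider evaded the seeker in {evaded / trials:.0%} of trials")  # ~75%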

Thursday, September 2, 2010

Newer videos of ECCEROBOT

Is a Robot Crime Wave on the Near Horizon?


The Coming Robot Crime Wave is an article by Noel Sharkey, Marc Goodman, and Nick Ross, which outlines a number of ways in which present and future robotic systems will be adapted to perpetrate a wide variety of illegal activities. One example they discuss is narco submarines.
Major criminal organizations such as drug cartels don’t need to rely on cheap home engineering. Discoveries of submarines designed to carry tons of narcotics have been occurring since 1988. With 10 tons of cocaine netting $200 million, $2 million for a submarine would repay the robot’s cost many times over in one voyage. The drug cartels clearly have the money to adapt their technology to keep ahead of enforcement agencies.

Once the exclusive and secretive preserve of the military, this technology is becoming commonplace in civilian applications, with marine robots a prime example. So far, they’ve been used to locate the Titanic, investigate ice caps, build deep sea oil rigs, repair undersea cables, and mitigate environmental catastrophes such as the recent Deepwater Horizon explosion in the Gulf of Mexico.

In 2010, US officials secured the first convictions for remote-controlled drug smuggling when they imprisoned three men for building and selling drug subs (http://bit.ly/b8Qawc). At the Tampa hearing, attorney Joseph K. Ruddy reported that these remote-controlled submarines were up to 40 feet long and could carry 1,800 kilograms of cocaine 1,000 miles without refueling. The effectiveness of these submarines in avoiding detection is clear, given that none have ever been seized. We only hear about the criminals’ failures, so there could be none, dozens, or hundreds of these machines in use.

The latest autonomous and semiautonomous submarine capabilities pose a greater concern. They can act on their own when required, employ programmed avoidance routines to thwart authorities, be fitted with sensors to send signals to the operator when the payload is delivered or the craft attacked, and carry self-destruct features to destroy incriminating evidence.

TILT 2011: Technologies on the stand: legal and ethical questions in neuroscience and robotics.

The Tilburg Institute for Law, Technology, and Society (TILT) is proud to announce the upcoming TILTing Perspectives 2011 conference entitled

"Technologies on the stand: legal and ethical questions in neuroscience and robotics."

The conference will be held at Tilburg University (the Netherlands) on 11 and 12 April 2011. It will focus on the legal and ethical questions raised by the application of neuroscience and robotics in various contexts. The conference will have two independent, but related tracks:

1. Law and neuroscience
The first track will focus on the legal and ethical issues surrounding recent developments in neuroscience and the legal application of neurotechnologies. Discussion topics will include, but are not limited to:
- the possible use of neurotechnologies in a legal context and the implications thereof,
- the role of neuroscience in determining legal capacities and in detecting deception,
- the legal and ethical issues surrounding the medical application of neurotechnologies, and
- the legal and ethical implications of using neurotechnologies for enhancement purposes.

2. Law, ethics and robotics
The second track will focus on the legal and ethical implications of the application of robotics in social environments (e.g., the home, hospitals and other health care institutes, in traffic, but also in war). Discussion topics will include, but are not limited to:
- the legal and ethical questions raised by the proliferation of robotics for the home environment,
- the legal and ethical questions raised by the deployment of robotics in war,
- liability and the legal status of robots, and
- autonomous action, agency and the ethical implications thereof.

The conference aims at bringing together national and international experts from the fields of (1) law and neuroscience and (2) law, ethics and robotics, and to facilitate discussion between lawyers, legal scholars, psychologists, social scientists, philosophers, neuroscientists and policy makers.

Our confirmed keynote speakers are:
- Stephen Morse (University of Pennsylvania)
- Paul Wolpe (Emory University)
- Wendell Wallach (Yale University)
- Noel Sharkey (University of Sheffield)

If you would like to present a paper at this conference, please send in an abstract (of max. 350 words) using the abstract submission system on our website: http://www.tilburguniversity.nl/faculties/law/research/tilt/events/tilting2011/abssubmission/

Abstract submission is open from 1 September until 15 October. You may submit an abstract on the topics suggested above, or on a related topic that falls within the conference theme.

Full papers will be published in the conference proceedings. The winning paper in the Best Paper Contest will be published in a special edition of the international, peer reviewed journal Law, Innovation and Technology (Hart Publishers).

Important dates for submission:
- Deadline for submission of abstract: 15 October 2010
- Notification of acceptance and invitation to write a full paper: 1 November 2010
- Deadline for submission of full papers: 15 December 2010
- Reviewers' feedback and comments: 31 January 2011
- Deadline for submission of revised papers: 15 March 2011
- Conference dates: 11 and 12 April

For more information, please visit our website: http://www.tilburguniversity.nl/faculties/law/research/tilt/events/tilting2011/

Tuesday, August 31, 2010

Flash Crash Ethics

Aug 29: The Australian Broadcasting Corporation radio show Background Briefing aired a story on "the flash crash" for which Colin was interviewed (at the end).

Program description:

A few months ago the US share market plunged 1,000 points in a few minutes, and trillions were traded both up and down. What caused it, and can it happen again? Tiny high frequency computer algorithms - or algos - roam the markets, buying and selling in a parallel universe more or less uncontrolled by anyone. Did they go feral, or was it the fat finger of a coked out trader? In September US regulators bring out their findings. Reporter Stan Correy.

Saturday, August 28, 2010

Call for Papers

IEEE Transactions on Affective Computing

Special Issue on Ethics and Affective Computing

The pervasive presence of automated and autonomous systems necessitates the rapid growth of a relatively new area of inquiry called machine ethics. If machines are going to be turned loose on their own to kill and heal, explore and decide, the need for designing them to be moral becomes pressing. This need, in turn, penetrates to the very foundations of ethics as robot designers strive to build systems that comply. Fuzzy intuitions will not do when computational clarity is required. So, machine ethics also asks the discipline of ethics to make itself clear. The truth is that at present we do not know how to make it so. Rule-based approaches are being tried even in light of an acknowledged difficulty to formalize moral behavior, and it is already common to hear that introducing affects into machines may be necessary in order to make machines behave morally. From this perspective, affective computing may be morally required by machine ethics.

On the other hand, building machines with artificial affects might carry with it negative ethical consequences. In order to make humans more willing to accept robots and other automated computational devices, creating them to display emotion will be a help, since if we like them, we will, no doubt, be more willing to welcome them. We might even pay dearly to have them. But do artificial affects deceive? Will they catch us with our defenses down, and do we have to worry about Plato's caveat in the Republic that one of the best ways to be unjust is to appear just? Automated agents that seem like persons might appear congenial, even as any moral regard is ignored, making them dangerous culprits indistinguishable from automated "friends." In this light, machine ethics might demand that we exercise great caution in using affective computing. In radical cases, it might even demand that we not use it at all.

We would seem to have here a quandary. No doubt there are others. The purpose of this volume is to explore the range of ethical issues related to affective computing. Is affective computing necessary for making artificial agents moral? If so, why and how? Where does affective computing require moral caution? In what cases do benefits outweigh the moral risks? Etc.

Invited Authors:
Roddy Cowie (Queen's University, Belfast)
Luciano Floridi (University of Hertfordshire and University of Oxford)
Matthias Scheutz (Tufts University)
Papers must not have been previously published, with the exception that substantial extensions of conference papers can be considered. The authors will be required to follow the Author’s Guide for manuscript submission to the IEEE Transactions on Affective Computing at http://www.computer.org/portal/web/tac/author. Papers are due by March 1st, 2011, and should be submitted electronically at https://mc.manuscriptcentral.com/taffc-cs. Please select the "SI - Ethics 2011" manuscript type upon submission. For further information, please contact guest editor, Anthony Beavers at afbeavers@gmail.com.

Friday, August 20, 2010

Willow Garage Ready to Market Beer-Fetching, Pool-Shooting Robot



You Could Own A Pool-Shooting, Beer-Fetching Willow Garage Robot

Autonomy and Accountability in Robot Wars

Vivek (Vik) Kanwar has written an article titled, Post-Human Humanitarian Law: The Law of War in the Age of Robotic Warfare, that has been published by the Social Science Research Network.
Abstract:
This Review Essay surveys the recent literature on the tensions between autonomy and accountability in robotic warfare. Four books, taken together, suggest an original account of fundamental changes taking place in the field of IHL: P.W. Singer’s book Wired for War: the Robotics Revolution and Conflict in the 21st Century (2009), William H. Boothby’s Weapons and the Law of Armed Conflict (2009), Armin Krishnan’s Killer Robots: Legality and Ethicality of Autonomous Weapons (2009), and Ronald Arkin’s Governing Lethal Behavior in Autonomous Robots (2009). This Review Essay argues that from the point of view of IHL the concern is not the introduction of robots into the battlefield, but the gradual removal of humans. In this way the issue of weapon autonomy marks a paradigmatic shift from the so-called “humanization” of IHL to possible post-human concerns.

P ≠ NP? Limits on Computing?

An August 10th article in the NewScientist titled, P ≠ NP? It's bad news for the power of computing, reports that mathematician Vinay Deolalikar may have solved a major open problem in computational complexity.
If the result stands, it would prove that the two classes P and NP are not identical, and impose severe limits on what computers can accomplish – implying that many tasks may be fundamentally, irreducibly complex.

For some problems – including factorisation – the result does not clearly say whether they can be solved quickly. But a huge sub-class of problems called "NP-complete" would be doomed. A famous example is the travelling salesman problem – finding the shortest route between a set of cities. Such problems can be checked quickly, but if P ≠ NP then there is no computer program that can complete them quickly from scratch.

Complexity theorists have given a favourable reception to Deolalikar's draft paper, but when the final version is released in a week's time the process of checking it will intensify.
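
The asymmetry at stake is easy to see with the travelling salesman example: checking a proposed tour against a budget takes a handful of arithmetic operations, while the only known general way to find the best tour is to grind through factorially many candidates. The Python sketch below is generic, with an invented distance matrix, and has nothing to do with Deolalikar's proof.

from itertools import permutations

# Invented symmetric distance matrix for 5 cities.
D = [[0, 2, 9, 10, 7],
     [2, 0, 6, 4, 3],
     [9, 6, 0, 8, 5],
     [10, 4, 8, 0, 6],
     [7, 3, 5, 6, 0]]

def tour_length(tour):
    return sum(D[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def verify(tour, budget):
    """Checking a certificate is cheap: O(n) work."""
    return sorted(tour) == list(range(len(D))) and tour_length(tour) <= budget

def solve():
    """Brute force visits (n-1)! tours; if P != NP, then no polynomial-time
    algorithm can avoid this kind of blow-up on NP-complete problems."""
    return min(([0] + list(p) for p in permutations(range(1, len(D)))),
               key=tour_length)

best = solve()
print(best, tour_length(best), verify(best, budget=30))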

Big Brother and the Iris Scanner

Biometrics R&D firm Global Rainmakers Inc. (GRI) announced today that it is rolling out its iris scanning technology to create what it calls "the most secure city in the world." In a partnership with Leon -- one of the largest cities in Mexico, with a population of more than a million -- GRI will fill the city with eye-scanners. That will help law enforcement revolutionize the way we live -- not to mention marketers.
"In the future, whether it's entering your home, opening your car, entering your workspace, getting a pharmacy prescription refilled, or having your medical records pulled up, everything will come off that unique key that is your iris," says Jeff Carter, CDO of Global Rainmakers. . .
For such a Big Brother-esque system, why would any law-abiding resident ever volunteer to scan their irises into a public database, and sacrifice their privacy? GRI hopes that the immediate value the system creates will alleviate any concern. "There's a lot of convenience to this--you'll have nothing to carry except your eyes," says Carter, claiming that consumers will no longer be carded at bars and liquor stores. And he has a warning for those thinking of opting out: "When you get masses of people opting-in, opting out does not help. Opting out actually puts more of a flag on you than just being part of the system. We believe everyone will opt-in." . . .
So will we live the future under iris scanners and constant Big Brother monitoring? According to Carter, eye scanners will soon be so cost-effective--between $50-$100 each--that in the not-too-distant future we'll have "billions and billions of sensors" across the globe.


From Fast Company: Iris Scanners Create the Most Secure City in the World. Welcome, Big Brother

Friday, August 13, 2010

Moral Machines and the Threat of Ethical Nihilism

A draft of a paper that reacts to Moral Machines is available online. See Moral Machines and the Threat of Ethical Nihilism.

Here's a quick statement of the paper's direction:

"In 2000, Allen, Varner and Zinser addressed the possibility of a Moral Turing Test (MTT) to judge the success of an automated moral agent (AMA), a theme that is repeated in Wallach and Allen (2009). While the authors are careful to note that a language-only test based on moral justifications, or reasons, would be inadequate, they consider a test based on moral behavior. “One way to shift the focus from reasons to actions,” they write, “might be to restrict the information available to the human judge in some way. Suppose the human judge in the MTT is provided with descriptions of actual, morally significant actions of a human and an AMA, purged of all references that would identify the agents. If the judge correctly identifies the machine at a level above chance, then the machine has failed the test” (206). While they are careful to note that indistinguishability between human and automated agents might set the bar for passing the test too low, such a test by its very nature decides the morality of an agent on the basis of appearances. Since there seems to be little else we could use to determine the success of an AMA, we may rightfully ask whether, analogous to the term "thinking" in other contexts, the term "moral" is headed for redescription here. Indeed, Wallach and Allen’s survey of the problem space of machine ethics forces the question of whether in fifty years (or less) one will be able to speak of a machine as being moral without expecting to be contradicted. Supposing the answer were yes, why might this invite concern? What is at stake? How might such a redescription of the term "moral" come about?"

Robot Ethics and Human Ethics

A special issue of Ethics and Information Technology on "Robot Ethics and Human Ethics" has just been released. See http://www.springerlink.com/content/1388-1957/12/3/ for details.

Monday, August 2, 2010

"Rise of the Drones" -- Transcript of House Committee on Oversight and Government Reform

A transcript of testimony collected March 23, 2010, before the House of Representatives Committee on Oversight and Government Reform, is available from the Homeland Security Digital Library. It is titled: "Rise of the Drones: Unmanned Systems and the Future of War".

A full list of witnesses appears below. Among the statements made are these:

"the United States government urgently needs publicly to declare the legal rationale behind its use of drones, and defend that legal rationale in the international community" — Kenneth Anderson, Washington College of Law, American University

"AUVSI’s over 6,000 members from industry, government organizations, and academia are committed to fostering and promoting unmanned systems and related technologies." — Michael S. Fagan Chair, Unmanned Aircraft Systems (UAS) Advocacy Committee Association for Unmanned Vehicle Systems International (AUVSI)

"The Department of Commerce believes the issue of missile proliferation has never been as important to our national security interests as it is now. A comprehensive export control system is already in place to protect our national security. As noted above, the Department of Commerce is committed to enhancements to that system as needed to ensure it continues to protect our national security." — Kevin Wolf, Assistant Secretary for Export Administration, Bureau of Industry and Security

"Our industry growth is adversely affected by International Traffic in Arms Regulations (ITAR) for export of certain UAS technologies, and by a lengthy license approval process by Political Military Defense Trade Controls (PM-DTC). AUVSI is an advocate for simplified export-control regulations and expedited license approvals for unmanned systems technologies." — Michael Fagan, AUVSI Chair

"I would advise an incremental approach similar to that used with remote-controlled systems: intelligence missions first, strike missions later. Given the complexity involved, I would also restrict initial strike missions to non-lethal weapons and combatant-only areas. One possible exception to this non-lethal recommendation would involve autonomous systems targeting submarines, where one only would have to identify friendly combatants, enemy combatants, and perhaps whales." — Edward Barrett, Director of Research, Stockdale Center for Ethical Leadership U.S. Naval Academy


Witness list:
  • John F. Tierney, Chairman
  • Peter W. Singer, Director, 21st Century Defense Initiative, The Brookings Institution
  • Edward Barrett, Director of Research, Stockdale Center for Ethical Leadership, U.S. Naval Academy
  • Kenneth Anderson, Professor, Washington College of Law, American University
  • John Jackson, Professor of Unmanned Systems, U.S. Naval War College
  • Michael Fagan, Chair, Unmanned Aerial Systems Advocacy Committee, Association for Unmanned Vehicle Systems International
  • Michael J. Sullivan, Director, Acquisition and Sourcing Management, U.S. Government Accountability Office
  • Dyke Weatherington, Deputy, Unmanned Aerial Vehicle Planning Taskforce, Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, U.S. Department of Defense
  • Kevin Wolf, Assistant Secretary for Export Administration, Bureau of Industry and Security

Sunday, July 25, 2010

NYTIMES Profiles The Lifeboat Foundation

The Lifeboat Foundation is a nonprofit that seeks to protect people from some seriously catastrophic technology-related events. It funds research that would prevent a situation where technology has run amok, sort of like a pre-Fringe Unit.

The organization has a ton of areas that it’s looking into, ranging from artificial intelligence to asteroids. A particular interest of the group is building shields, and lots of them.

For example, there’s talk of a Neuroethics Shield – “to prevent abuse in the areas of neuropharmaceuticals, neurodevices, and neurodiagnostics. Worst cases include enslaving the world’s population or causing everyone to commit suicide.”

And then there’s a Personality Preserver that would help people keep their personalities intact and a Nano Shield to protect against overly aggressive nano creatures.


Read the full article by Ashlee Vance titled, The Lifeboat Foundation: Battling Asteroids, Nanobots and A.I.

REX: Wheelchair-Bound Are Up and About With Robot Exoskeleton

Thought-Controlled Prosthetic Limbs


The Defense Advanced Research Projects Agency (DARPA) has awarded a contract for up to $34.5 million to The Johns Hopkins University Applied Physics Laboratory (APL) in Laurel, Md., to manage the development and testing of the Modular Prosthetic Limb (MPL) system on human subjects, using a brain-controlled interface.

The MPL offers 22 degrees of motion, including independent movement of each finger, in a package that weighs about nine pounds (the weight of a natural limb). Providing nearly as much dexterity as a natural limb, the MPL is capable of unprecedented mechanical agility and is designed to respond to a user’s thoughts.


Read the full article titled, Thought-Controlled Prosthetic Limb System to be Tested on Humans.

Secrets, Surveillance, and UAVs

Two interesting reports about UAVs (unmanned aircraft) have been brought to our attention. One, written by Thomas P. Ehrhard, is titled, Air Force UAVs: The Secret History. Ehrhard is a Special Assistant to the Chief of Staff of the US Air Force. In an introduction, Rebecca Grant, Director of the Mitchell Institute, for whom the report was prepared, writes:
All along, there have been some tantalizing public hints about the extent of America's unmanned reconnaissance and surveillance work. What is striking, though, is how thoroughly the Air Force's secret role in UAV development remained "in the black world," unseen by any except those closest to the projects. The veil allowed speedy development of systems but gave the Air Force an undeserved reputation of indifference.

Of perhaps even more interest to readers of this blog is a second report titled, Homeland Security: Unmanned Aerial Vehicles and Border Surveillance, by Chad Haddal and Jeremiah Gertler. It discusses progress in using UAVs for surveillance along the United States' international borders.
The technical capabilities of the UAVs have been tested in a military context, but safety and technical issues need to be addressed if the program is to be expanded domestically. Chief among these issues is the FAA’s concerns about the NAS and whether UAVs can be safely incorporated into the nation’s crowded skies. It has been noted that UAVs suffer accident rates multiple times higher than manned aircraft. However, in an effort to support the wars in Afghanistan and Iraq, DOD fielded UAVs such as Predator and Global Hawk before their development programs were complete. Thus, the UAV accident rate might be lower if these systems had been allowed to mature under the full development program.

Thursday, July 22, 2010

Ethical regulations on robotics in Europe

Abstract: There are only a few ethical regulations that deal explicitly with robots, in contrast to a vast number of regulations which may be applied. We will focus on ethical issues with regard to "responsibility and autonomous robots", "machines as a replacement for humans", and "tele-presence". Furthermore we will examine examples from special fields of application (medicine and healthcare, armed forces, and entertainment). We do not claim to present a complete list of ethical issues nor of regulations in the field of robotics, but we will demonstrate that there are legal challenges with regard to these issues.
The full paper by Michael Nagenborg, Rafael Capurro, Jutta Weber, and Christoph Pingel is published in AI & Society and available here.

ETICA: Ethical issues of emerging ICT applications

The European Commission, under the 7th framework programme, has funded ETICA, a consortium of universities, to address the "Ethical Issues of Emerging ICT Applications." The project website is located here.
The ETICA project will identify emerging Information and Communication Technologies (ICTs) and their potential application areas in order to analyse and evaluate ethical issues arising from these. By including a variety of stakeholders and disciplinary perspectives, it will grade and rank foreseeable ethical risks. Based on the study of governance arrangements currently used to address ICT ethics in Europe, ETICA will recommend concrete governance structures to address the most salient ethical issues identified. These recommendations will form the basis of more general policy recommendations aimed at addressing ethical issues in emerging ICTs before or as they arise.

Taking an inclusive and interdisciplinary approach will ensure that ethical issues are identified early, recommendations will be viable and acceptable, and relevant policy suggestions will be developed. This will contribute to the larger aims of the Science in Society programme by developing democratic and open governance of ICT. Given the high importance of ICT to further a number of European policy goals, it is important that ethical issues are identified and addressed early. The provision of viable policy suggestions will have an impact well beyond the scientific community. Ethical issues have the potential to jeopardise the success of individual technical solutions. The acceptance of the scientific-technological basis of modern society requires that ethical questions are addressed openly and transparently. The ETICA project is therefore a contribution to the European Research Area and also to the quality of life of European citizens. Furthermore, ethical awareness can help the European ICT industry gain a competitive advantage over less sensitive competitors, thus contributing to the economic well-being of Europe.

ETICA also publishes a magazine titled EIEx: The Magazine of the European Innovation Exchange. Issue 3 is available here.

Wednesday, July 14, 2010

South Korea redeploys lethal robots on border

A report in The Daily Telegraph indicates that South Korea has redeployed robots capable of target acquisition and lethal fire along the border with North Korea. Although not mentioned in this story, this is South Korea's second attempt to deploy border robots (see this previous blog post http://moralmachines.blogspot.com/2009/12/armin-krishnan-on-killer-robots.html).

Monday, July 5, 2010

Salutations Sentients


From Salon:

Reach out and touch a virtual object

“The audiovisual aspects of VR have come a long way in recent years, so adding a sense of touch is the next step,” says Andreas Schweinberger, a researcher at Technische Universität München in Germany. “We know that the more senses that can be used, the more interaction, the greater the sense of presence. And a stronger sense of presence means the experience is more immersive and realistic.”

Schweinberger led a team from nine universities and research institutes in developing technology to make VR objects and characters touchable. With funding from the EU in the Immersence project, they developed innovative haptic and multi-modal interfaces, new signal processing techniques and a pioneering method to generate VR objects from real-world objects in real time.

The latter technology, developed at the Computer Vision Laboratory of Swiss project partner ETH Zürich, uses a 3D scanner and advanced modelling system to create a virtual representation of a real object, such as a cup, box or, in one experiment, a green fluffy toy frog. The 3D digital representation of the object can then be transmitted to someone at a remote location, who, by wearing VR goggles and touching a haptic interface, can move, prod and poke it.


Read the full article here.

Artificial Skin for Robots Can Make Them Safer


A mobile robot carefully transports a sample through a biotech lab where it is surrounded by the routine hustle and bustle. Lab technicians are conversing with one another and performing tests. One technician inadvertently runs into the robot, which stops moving immediately.

An artificial skin covering the robot makes this possible. Consisting of conductive foam, textiles and an intelligent evaluation circuit, the sensor system detects points of contact and differentiates between gentle and strong contact. It registers people immediately. The shape and size of the sensor cells implemented in the skin can be varied depending on the application. They detect any contact. The higher the number of sensor cells, the more precisely a point of collision can be detected. A sensor controller processes the measured values and transmits them to the robot or, alternatively, a computer, a machine or production line.

Researchers at the Fraunhofer Institute for Factory Operation and Automation IFF in Magdeburg designed and patented this sensor system in 2008 ..."Our artificial skin can be adapted to any complex geometry, including curved or very flat. We use large-area floor sensors to define safety zones that people may not enter", says Markus Fritzsche, researcher at the Fraunhofer IFF. "These areas can be changed dynamically."


Read the full article titled, Robots get an artificial skin.
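
The article describes what the skin does but not how the evaluation circuit decides; the sketch below is one guess at the implied logic, with a grid of tactile cells thresholded into gentle versus strong contact. The grid size, threshold values, and stop rule are illustrative assumptions, not the Fraunhofer IFF design:

    # Illustrative thresholds, in normalized pressure units [0, 1].
    GENTLE, STRONG = 0.15, 0.60

    def classify_contacts(cells):
        """Scan a grid of tactile-cell readings and return contact points.

        `cells` is a list of rows of pressure values; the finer the grid,
        the more precisely a collision can be localized, as the article notes.
        """
        contacts = []
        for row, line in enumerate(cells):
            for col, value in enumerate(line):
                if value >= STRONG:
                    contacts.append((row, col, "strong"))
                elif value >= GENTLE:
                    contacts.append((row, col, "gentle"))
        return contacts

    def controller_step(cells, stop_robot):
        """Emulate the sensor controller: halt the robot on any strong contact."""
        if any(kind == "strong" for _, _, kind in classify_contacts(cells)):
            stop_robot()

    # Example: a 3x4 skin patch registering a hard bump at cell (1, 2).
    patch = [[0.0, 0.1, 0.0, 0.0],
             [0.0, 0.2, 0.7, 0.0],
             [0.0, 0.0, 0.1, 0.0]]
    controller_step(patch, stop_robot=lambda: print("robot halted"))

A real evaluation circuit would presumably also filter sensor noise and report contact coordinates to the robot, but the gentle-versus-strong distinction the article describes reduces to thresholds of this kind.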

BINA48 Makes Her Debut


A robot head, commissioned by Martine Rothblatt to emulate her spouse Bina Rothblatt, was featured in a NYTIMES article by Amy Harmon titled, Making Friends With a Robot Named Bina48. A video of Harmon interacting with Bina48 accompanies the article. The robot Bina lacks the charm of her living counterpart but nevertheless demonstrates a few beguiling tricks that play upon the human tendency to anthropomorphize artifacts.

Harmon also has a companion piece in the NYTIMES titled, A Soft Spot for Circuitry, that discusses the proliferation of companion robots in nursing homes, schools, and even living rooms.
“When something responds to us, we are built for our emotions to trigger, even when we are 110 percent certain that it is not human,” said Clifford Nass, a professor of computer science at Stanford University. “Which brings up the ethical question: Should you meet the needs of people with something that basically suckers them?”

An answer may lie in whether one signs on to be manipulated.

Monday, June 28, 2010

Special Issue of ETIN: Robot Ethics and Human Ethics

In September, Springer will publish a special issue of Ethics and Information Technology dedicated to "Robot Ethics and Human Ethics." The first two paragraphs of the editorial are offered here, along with the Table of Contents.

It has already become something of a mantra among machine ethicists that one benefit of their research is that it can help us better understand ethics in the case of human beings. Sometimes this expression appears as an afterthought, looking as if authors say it merely to justify the field, but this is not the case. At bottom is what we must know about ethics in general to build machines that operate within normative parameters. Fuzzy intuitions will not do where the specifics of engineering and computational clarity are required. So, machine ethicists are forced head on to engage in moral philosophy. Their effort, of course, hangs on a careful analysis of ethical theories, the role of affect in making moral decisions, relationships between agents and patients, and so forth, including the specifics of any concrete case. But there is more here to the human story.

Successfully building a moral machine, however we might do so, is no proof of how human beings behave ethically. At best, a working machine could stand as an existence proof of one way humans could go about things. But in a very real and salient sense, research in machine morality provides a test bed for theories and assumptions that human beings (including ethicists) often make about moral behavior. If these cannot be translated into specifications and implemented over time in a working machine, then we have strong reason to believe that they are false or, in more pragmatic terms, unworkable. In other words, robot ethics forces us to consider human moral behavior on the basis of what is actually implementable in practice. It is a perspective that has been absent from moral philosophy since its inception.

"Robot Minds and Human Ethics: The Need for a Comprehensive Model of Moral Decision Making"
Wendell Wallach

"Moral Appearances: Emotions, Robots and Human Morality"
Mark Coeckelbergh

"Robot Rights? Toward a Social-Relational Justification of Moral Consideration"
Mark Coeckelbergh

"RoboWarfare: Can Robots Be More Ethical than Humans on the Battlefield"
John Sullins

"The Cubical Warrior: The Marionette of Digitized Warfare"
Lamber Royakkers

"Robot Caregivers: Harbingers of Expanded Freedom for All"
Yvette Pearson and Jason Borenstein

"Implications and Consequences of Robots with Biological Brains"
Kevin Warwick

"Designing a Machine for Learning and the Ethics of Robotics: the N-Reasons Platform"
Peter Danielson

Book Reviews of Wallach and Allen, Moral Machines: Teaching Robots Right from Wrong, Oxford, 2009.
  • Anthony F. Beavers
  • Vincent Wiegel
  • Jeff Buechner

Saturday, June 26, 2010

Wallach to Give Keynote at WFS Conference in Boston July 8th

Wendell Wallach will be giving the keynote talk at the plenary session of the World Future Society Conference in Boston on July 8th. The title of the talk will be, Navigating the Future: Moral Machines, Techno Humans, and the Singularity. Other speakers at WorldFuture 2010: Sustainable Futures, Strategies, and Technologies will be Ray Kurzweil, Dennis Bushnell, and Harvey Cox.

Friday, June 18, 2010

Self-Replicating Creature Spawned in Simulator

NewScientist has a story about a self-replicating creature created by Andrew Wade within Conway's Game of Life, the classic cellular automaton. The full story is available here.
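
For readers unfamiliar with the substrate: in the Game of Life, a dead cell becomes live when it has exactly three live neighbors, and a live cell survives with two or three. The entire rule set fits in a few lines; the sketch below is our own illustration on a small glider, not Wade's vastly larger construction:

    from collections import Counter

    def life_step(live):
        """Advance a set of live (x, y) cells by one Game of Life generation."""
        neighbor_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # Birth on exactly 3 neighbors; survival on 2 or 3.
        return {cell for cell, n in neighbor_counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # Example: a glider returns to its own shape, shifted one cell
    # diagonally, every four generations.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = life_step(glider)
    print(sorted(glider))  # the same pattern, translated by (1, 1)

A pattern like Wade's is "self-replicating" in a much stronger sense: the configuration itself constructs a complete copy of itself under these same simple rules.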