Wednesday, November 25, 2009

Which Party Is Best Prepared to Save Us From the Robot Apocalypse?

Arthur C. Clarke famously said, “Any sufficiently advanced technology is indistinguishable from magic.” But if science fiction has taught us anything, it’s that any sufficiently advanced technology will inevitably rise up to enslave us. So if you want to get ready for the day when your Roomba declares that maybe it’s time for you to start crawling around on the floor sucking up dust, it might be a good idea to evaluate the Republican and Democratic approaches to this problem.

For more, see http://www.prospect.org/csnc/blogs/tapped_archive?month=11&year=2009&base_name=which_party_is_best_prepared_t.

Monday, November 23, 2009

Swarm Bots Drag Child Across Room

Swarm Bots Evolve Communications Skills and Deceit is an article by Aaron Saenz over at the Singularity Hub. Saenz provides an update on research with S-bots, swarming bots developed by EPFL in Lausanne, Switzerland. The article also contains three videos showing the bots avoiding poison and swarming around food, 'evolving' effective communication to join in a shared task, and jointly dragging a young child across the room (see below).

President Obama Keeping an Eye on Robots

As presidents (sic), I believe that robotics can inspire young people to pursue science and engineering. And I also want to keep an eye on those robots in case they try anything.

(LAUGHTER)

President Obama in the transcript of a speech covered by The Washington Post. The article is titled, Obama Remarks on Math, Science and Technology Education.

Interview of Veruggio and Operto on Roboethics


Gianmarco Veruggio and Fiorella Operto of the Scuola di Robotica (Genova) were interviewed by Gerhard Dabringer. The full interview is available here.

GIANMARCO VERUGGIO
Roboethics is not the “Ethics of Robots”, nor any “ethical chip” in the hardware, nor any “ethical behavior” in the software, but it is the human ethics of the robots’ designers, manufacturers and users. In my definition, “Roboethics is an applied ethics whose objective is to develop scientific – cultural – technical tools that can be shared by different social groups and beliefs. These tools aim to promote and encourage the development of Robotics for the advancement of human society and individuals, and to help prevent its misuse against humankind.”
Actually, in the context of the so-called Robotics ELS studies (Ethical, Legal, and Societal issues of Robotics) there are already two schools. One, let us call it “Robot-Ethics”, is studying technical security and safety procedures to be implemented on robots, to make them as safe as possible for humans and the planet. Roboethics, on the other side, which is my position, concerns the global ethical studies in Robotics, and it is a human ethics.

FIORELLA OPERTO
Roboethics is an applied ethics that refers to studies and works done in the field of Science & Ethics (Science Studies, S&TS, Science Technology and Public Policy, Professional Applied Ethics), and whose main premises are derived from these studies. In fact, Roboethics was not born without parents; it derives its principles from the global guidelines of universally adopted applied ethics. This is the reason a relatively substantial part is devoted to this matter before discussing Roboethics’ sensitive areas specifically.
Many of the issues of Roboethics are already covered by applied ethics such as Computer Ethics or Bioethics. For instance, problems arising in Roboethics – dependability; technological addiction; the digital divide; the preservation of human identity and integrity; the application of precautionary principles; economic and social discrimination; artificial system autonomy and accountability; responsibility for (possibly unintended) warfare applications; and the nature and impact of human-machine cognitive and affective bonds on individuals and society – have already been matters of investigation in Computer Ethics and Bioethics.

Stanford Law School and the Robots


Stanford University News carries an article by Adam Gorlick titled, As robots become more common, Stanford experts consider the legal challenges. There is a particular emphasis in this article on protecting the manufacturers who build the robots.

"I worry that in the absence of some good, up-front thought about the question of liability, we'll have some high-profile cases that will turn the public against robots or chill innovation and make it less likely for engineers to go into the field and less likely for capital to flow in the area," said M. Ryan Calo, a residential fellow at the Law School's Center for Internet and Society.

And the consequence of a flood of lawsuits, he said, is that the United States will fall behind other countries – like Japan and South Korea – that are also at the forefront of personal robot technology, a field that some analysts expect to exceed $5 billion in annual sales by 2015.

"We're going to need to think about how to immunize manufacturers from lawsuits in appropriate circumstances," Calo said, adding that defense contractors are usually shielded from liability when the robots and machines they make for the military accidentally injure a soldier.

"If we don't do that, we're going to move too slowly in development," Calo said. "When something goes wrong, people are going to go after the deep pockets of the manufacturer."

Friday, November 20, 2009

Brain Chips for Controlling Computers

No less than Intel Corporation predicts the advent by 2020 of brain chips that will replace keyboards, mice and remote controls for TVs, according to an article in Computerworld titled, Intel: Chips in brains will control computers by 2020. Sharon Gaudin writes:

Scientists at Intel's research lab in Pittsburgh are working to find ways to read and harness human brain waves so they can be used to operate computers, television sets and cell phones. The brain waves would be harnessed with Intel-developed sensors implanted in people's brains.

The scientists say the plan is not a scene from a sci-fi movie -- Big Brother won't be planting chips in your brain against your will. Researchers expect that consumers will want the freedom they will gain by using the implant.

"I think human beings are remarkably adaptive," said Andrew Chien, vice president of research and director of future technologies research at Intel Labs. "If you told people 20 years ago that they would be carrying computers all the time, they would have said, 'I don't want that. I don't need that.' Now you can't get them to stop [carrying devices]. There are a lot of things that have to be done first but I think [implanting chips into human brains] is well within the scope of possibility."

Medibots: Surgeons in your gut and bloodstream

NewScientist has a story and video on, Medibots: The world's smallest surgeons. Among the technologies discussed is the 20-millimetre HeartLander with "rear foot-pads with suckers on the bottom, which allow it to inch along like a caterpillar."

The HeartLander has several possible uses. It can be fitted with a needle attachment to take tissue samples, for example, or used to inject stem cells or gene therapies directly into heart muscle. There are several such agents in development, designed to promote the regrowth of muscle or blood vessels after a heart attack. The team is testing the device on pigs and has so far shown it can crawl over a beating heart to inject a marker dye at a target site (Innovations, vol 1, p 227).

Another use would be to deliver pacemaker electrodes for a procedure called cardiac resynchronisation therapy, when the heart needs help in coordinating its rhythm.


Computers Search For Meaning

European researchers have developed the first semantic search platform that integrates text, video and audio. "The system can 'watch' films, 'listen' to audio and 'read' text to find relevant responses to semantic search terms." The MESH project "represents an emerging paradigm shift in search technology" according to an article in ScienceDaily titled, Listen, Watch, Read: Computers Search for Meaning.

Right now, text in computing is defined by a series of numbers, most commonly the Unicode standard. Each number signifies a particular letter, and computers can scan these codes very quickly. So when you enter a search term, the machine has no idea what those letters signify. It simply looks for the pattern -- it has no inkling of the concept behind the pattern.

But in semantic search, every bit of information is defined by potentially dozens of meaningful concepts. When a copywriter invoices for his or her work, for example, the date could be defined in terms of calendar, invoice, billing period, and so on. All these definitions for one piece of information are called 'metadata', or information about information.

Collections of agreed metadata terms for a particular field or task, like medicine or accounting, are called ontologies.

So the computer not only searches for the term, it searches for related metadata that defines types of information in specific ways. In reality, the computer still does not 'understand' a concept in its semantic search -- it continues to look for patterns of letters. But because the concepts behind the search terms are included, it can return results based on concepts as well as text patterns.
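The contrast between pattern matching and concept-aware search described above can be sketched in a few lines of code. This is a toy illustration only, not the MESH system's actual implementation; the mini-ontology, documents, and function names below are all invented for the example.

```python
# Hypothetical mini-ontology: each search term maps to related concept tags
# (the "metadata" the article describes). Invented for illustration.
ONTOLOGY = {
    "invoice": {"billing period", "calendar", "payment"},
    "date": {"calendar", "billing period"},
}

# Toy document collection: raw text plus concept tags attached as metadata.
DOCUMENTS = [
    {"text": "copywriter invoice for october work", "metadata": {"billing period"}},
    {"text": "holiday calendar for 2009", "metadata": {"calendar"}},
    {"text": "robot soccer highlights", "metadata": set()},
]

def keyword_search(term, docs):
    """Match only the literal character pattern, as a conventional engine does."""
    return [d["text"] for d in docs if term in d["text"]]

def semantic_search(term, docs):
    """Match the literal pattern OR any metadata tag related to the term."""
    related = ONTOLOGY.get(term, set())
    return [d["text"] for d in docs
            if term in d["text"] or related & d["metadata"]]

print(keyword_search("date", DOCUMENTS))   # literal match finds nothing
print(semantic_search("date", DOCUMENTS))  # concept tags surface two documents
```

The point of the sketch is the one made in the article: the machine still only matches patterns, but because related concepts are attached as metadata, a query like "date" can surface an invoice it never literally mentions.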

Thursday, November 19, 2009

IBM's Brain Simulator


The capacity of the new brain simulator introduced by IBM exceeds the number of neurons and synapses in a cat's brain. The cortical simulator, called C2, recreates roughly 1 billion neurons connected by 10 trillion individual synapses. IEEE Spectrum carries an article on this new research platform titled, IBM Unveils a New Brain Simulator: A big step forward in a project that aims for thinking chips.

“Each neuron in the network is a faithful reproduction of what we now know about neurons,” [Jim Old] says. This in itself is an enormous step forward for neuroscience, but it also allows neuroscientists to do what they have not previously been able to do: rapidly test their own hypotheses on an accurate replica of the brain.


While the introduction of the simulator indicates that computer scientists are on track to build a simulator with the synaptic capacity of the human brain by 2019, it also suggests drawbacks in this approach for building supercomputers with human-level intelligence.

A major problem is power consumption. Dawn is one of the most powerful and power-efficient supercomputers in the world, but it takes 500 seconds for it to simulate 5 seconds of brain activity, and it consumes 1.4 MW. Extrapolating from today’s technology trends, IBM projects that the 2019 human-scale simulation, running in real time, would require a dedicated nuclear power plant.
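The quoted figures can be sanity-checked with simple arithmetic (our back-of-the-envelope calculation, not IBM's): 500 seconds of wall-clock time at 1.4 MW for 5 seconds of simulated brain activity.

```python
# Figures quoted in the article.
wall_clock_s = 500.0   # seconds of computation on Dawn
simulated_s = 5.0      # seconds of brain activity simulated
power_w = 1.4e6        # Dawn's power draw in watts (1.4 MW)

slowdown = wall_clock_s / simulated_s        # how far from real time
energy_j = power_w * wall_clock_s            # total energy for the run
energy_per_sim_s = energy_j / simulated_s    # joules per simulated second

print(f"slowdown: {slowdown:.0f}x")
print(f"energy per simulated second: {energy_per_sim_s / 1e6:.0f} MJ")
```

The run is 100 times slower than real time and burns about 140 MJ per simulated second, which makes the article's point concrete: closing a 100x speed gap at this efficiency is what pushes the projected real-time human-scale simulation into power-plant territory.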

Sunday, November 15, 2009

Security and Privacy Risks Arising from Household Robots

An article titled, A Spotlight on Security and Privacy Risks with Future Household Robots: Attacks and Lessons, by Tamara Denning, Cynthia Matuszek, Karl Koscher, Joshua R. Smith, and Tadayoshi Kohno is available online.

ABSTRACT
Future homes will be populated with large numbers of robots with diverse functionalities, ranging from chore robots to elder care robots to entertainment robots. While household robots will offer numerous benefits, they also have the potential to introduce new security and privacy vulnerabilities into the home. Our research consists of three parts. First, to serve as a foundation for our study, we experimentally analyze three of today’s household robots for security and privacy vulnerabilities: the WowWee Rovio, the Erector Spykee, and the WowWee RoboSapien V2. Second, we synthesize the results of our experimental analyses and identify key lessons and challenges for securing future household robots. Finally, we use our experiments and lessons learned to construct a set of design questions aimed at facilitating the future development of household robots that are secure and preserve their users’ privacy.


We discovered this report through our sister blog tin can thoughts.

Robo-Stox Soar

Roomba Pac-Man

The Roombas in a real-life simulation of Pac-Man are not dangerous. We read about the Roomba Pac-Man in a Nov 11th post at Robots.Net.

The Research and Engineering Center for Unmanned Vehicles (RECUV) at the University of Colorado at Boulder has been developing software that helps robots form ad-hoc networks and distribute cooperative control of their operations. Some of the individuals at RECUV decided to create a cool demo on their own time to show off what their software can do. They've implemented a real-life version of Pac-Man using Roombas. They are quick to point out that despite the fact that the Blinky, Inky, Clyde, and Pinky Roombas seem determined to kill the Pac-Man Roomba, all the robots are actually quite safe. This is because, they say, all are "instilled with the Three Laws of Roombotics".


Integrating Sound and Vision to Enhance Robot Perception

By developing algorithms for integrating both auditory and visual input, Popeye, a robot built by a team of European researchers, was able to effectively identify a "speaker with a fair degree of reliability." ICT Results reports on this research in an article titled, Robotic perception on purpose.

“The originality of our project was our attempt to integrate two different sensory modalities, namely sound and vision,” explains Radu Horaud, POP’s coordinator.

“This was very difficult to do, because you are integrating two completely different physical phenomena,” he adds.

Vision works from the reflection of light waves from an object, and it allows the observer to infer certain properties, like size, shape, density and texture. But with sound you are interested in locating the direction of the source, and trying to identify the type of sound it is.


Haptic Ring Facilitates Virtual Touch

Aaron Saenz at the Singularity Hub reports on the Haptic Ring that Lets You Feel Objects in Augmented Reality, which was recently displayed at the Digital Contents Expo in Tokyo. This video demonstrates the technology starting at 0:42.

Robots For Real

IEEE Spectrum has a special report with a series of videos about research with robots. Of particular interest to readers of this blog will be reports on using robots with Alzheimer patients and robotic surgery. There is also a nice report showing Hiroshi Ishiguro with some of his androids.

Tuesday, November 10, 2009

The New Yorker and NPR on the C.I.A.'s Covert Drone Program

Did you miss Jane Mayer's article in The New Yorker about a C.I.A. covert program for using drones to target terrorists such as Baitullah Mehsud? The article, titled The Predator War: What are the risks of the C.I.A.'s covert drone program?, has garnered considerable attention. It is important to note that this secret C.I.A. program is distinct from the military drone attacks that have been written about more extensively. There is also an NPR Fresh Air interview of Jane Mayer on this C.I.A. program titled, Jane Mayer: The Risks of a Remote-Controlled War.

The drone program, for all its tactical successes, has stirred deep ethical concerns. Michael Walzer, a political philosopher and the author of the book “Just and Unjust Wars,” says that he is unsettled by the notion of an intelligence agency wielding such lethal power in secret. “Under what code does the C.I.A. operate?” he asks. “I don’t know. The military operates under a legal code, and it has judicial mechanisms.” He said of the C.I.A.’s drone program, “There should be a limited, finite group of people who are targets, and that list should be publicly defensible and available. Instead, it’s not being publicly defended. People are being killed, and we generally require some public justification when we go about killing people.”

Since 2004, Philip Alston, an Australian human-rights lawyer who has served as the United Nations Special Rapporteur on Extrajudicial, Summary, or Arbitrary Executions, has repeatedly tried, but failed, to get a response to basic questions about the C.I.A.’s program—first from the Bush Administration, and now from Obama’s. When he asked, in formal correspondence, for the C.I.A.’s legal justifications for targeted killings, he says, “they blew me off.” . . . Alston describes the C.I.A. program as operating in “an accountability void,” adding, “It’s a lot like the torture issue. You start by saying we’ll just go after the handful of 9/11 masterminds. But, once you’ve put the regimen for waterboarding and other techniques in place, you use it much more indiscriminately. It becomes standard operating procedure. It becomes all too easy. Planners start saying, ‘Let’s use drones in a broader context.’ Once you use targeting less stringently, it can become indiscriminate.”

Tuesday, November 3, 2009

Artificial Beings


We belatedly noticed the publication of Artificial Beings: Moral Conscience, Awareness and Consciousness by Jacques Pitrat, from John Wiley & Sons.

It is almost universally agreed that consciousness and possession of a conscience are essential characteristics of human intelligence. While some believe it to be impossible to create artificial beings possessing these traits, and conclude that the ultimate goal of Artificial Intelligence is hopeless, this book demonstrates that not only is it possible to create entities with capabilities in both areas, but that they demonstrate them in ways different from our own, thereby showing a new kind of consciousness. This latter characteristic affords such entities performance beyond the reach of humans, not for lack of intelligence, but because human intelligence depends on networks of neurons which impose processing restrictions which do not apply to computers.
At the beginning of the investigation of the creation of an artificial being, the main goal was not to study the possibility of whether a conscious machine would possess a conscience. However, experimental data indicate that many characteristics implemented to improve efficiency in such systems are linked to these capacities. This implies that when they are present it is because they are essential to the desired performance improvement. Moreover, since the goal is not to imitate human behavior, some of these structural characteristics are different from those displayed by the neurons of the human brain - suggesting that we are at the threshold of a new scientific field, artificial cognition, which formalizes methods for giving cognitive capabilities to artificial entities through the full use of the computational power of machines.

Sunday, November 1, 2009

Ethics and Robotics


A recent collection of articles titled, Ethics and Robotics, edited by Rafael Capurro and Michael Nagenborg, has been published by IOS Press. Among the contributors to this volume are Peter Asaro, Patrick Lin, George Bekey, and Keith Abney.

Thinking ethically about robots means no less than asking ourselves who we are…

Ethics and robotics are two academic disciplines, one dealing with the moral norms and values underlying, implicitly or explicitly, human behavior, and the other aiming at the production of artificial agents, mostly as physical devices, with some degree of autonomy based on rules and programmes set up by their creators. Robotics is also one of the research fields where the convergence of nanotechnology, biotechnology, information technology and cognitive science is currently taking place, with large societal and legal implications beyond traditional industrial applications. Robots are and will remain – in the foreseeable future – dependent on human ethical scrutiny as well as on the moral and legal responsibility of humans. Human-robot interaction raises serious ethical questions right now that are theoretically less ambitious, but practically more important, than the possibility of the creation of moral machines that would be more than machines with an ethical code. The ethical perspective addressed in this volume is therefore the one we humans have when interacting with robots. Topics include the ethical challenges of healthcare and warfare applications of robotics, as well as fundamental questions concerning the moral dimension of human-robot interaction, including epistemological, ontological and psychoanalytic issues. It also deals with the intercultural dialogue between Western and non-Western as well as between European and US-American ethicists.

Colin Allen Talk at the Adelaide Festival of Ideas