A.I. and cognitive science luminaries Marvin Minsky, Patrick Winston, and Noam Chomsky, among others, weighed in on why they think there has been a lack of progress in A.I. at an event earlier this month celebrating MIT's 150th anniversary, as reported by MIT's Technology Review.
Peter Norvig has written an interesting commentary on why Chomsky is wrong to deride statistical approaches to language.
Wendell Wallach and Colin Allen maintain this blog on the theory and development of artificial moral agents and computational ethics, topics covered in their OUP 2009 book...
Sunday, May 29, 2011
Wednesday, May 25, 2011
Sword Fighting Robots
IEEE Spectrum has an article about why researchers at Georgia Tech are giving robots swords. The full article, titled "Awesomely Bad Idea: Teaching a Robot to Sword Fight," can be read here. A video of a sword fighting robot is below; researcher Tobias Kunz is already exploring putting a sword in the hand of a robotic arm.
Should We Fear a Robot Future?
From the Future of Humanity Institute 2011 Winter Intelligence conference.
Read the full article here.
Participants were also asked when human-level machine intelligence would likely be developed. The cumulative distribution below shows their responses:
The median estimate for the year by which there is a 50% chance of human-level machine intelligence is 2050. That suggests we have around 40 years to enjoy before the extremely bad outcome of human-level robot intelligence arrives. The report presents a list of milestones which participants said would let us know that human-level intelligence is within 5 years. I suppose this will be a useful guide for when we should start panicking. A sample of these includes:
Winning an Oxford union‐style debate
World's best chess-playing AI was written by an AI
Emulation/development of mouse level machine intelligence
Full dog emulation…
Whole brain emulation, semantic web
Turing test or whole brain emulation of a primate
Toddler AGI
An AI that is a human level AI researcher
Gradual identification of objects: from an undifferentiated set of unknown size (parking spaces, dining chairs, students in a class), recognition of particular objects amongst them with no re-conceptualization
Large scale (1024) bit quantum computing (assuming cost effective for researchers), exaflop per dollar conventional computers, toddler level intelligence
Already passed, otherwise such discussion among ourselves would not have been funded, let alone be tangible, observable and accordable on this scale: as soon as such a thought is considered a ‘reasonable’ thought to have
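The report's 2050 figure is read off the cumulative distribution at the 50% mark. As a rough sketch of how such a point estimate can be interpolated from pooled survey responses (the year/probability pairs below are made up for illustration and are not the report's actual data):

```python
# Illustrative (year, cumulative probability) pairs -- NOT the survey's data.
cdf = [(2020, 0.10), (2030, 0.25), (2050, 0.50), (2075, 0.70), (2100, 0.90)]

def year_at_probability(cdf, p):
    """Linearly interpolate the year at which cumulative probability reaches p."""
    for (y0, p0), (y1, p1) in zip(cdf, cdf[1:]):
        if p0 <= p <= p1:
            return y0 + (p - p0) / (p1 - p0) * (y1 - y0)
    raise ValueError("p lies outside the range covered by the data")

print(year_at_probability(cdf, 0.50))  # -> 2050.0 with these made-up numbers
```

The same interpolation gives the earlier and later quantiles (e.g. a 10% or 90% chance) that such surveys typically report alongside the median.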
Friday, May 20, 2011
Augur, Breazeal, Sharkey and Wallach in BBC Video
"Can robots know the difference between right and wrong?" is a video feature produced by David Reid for the BBC. Reid videotaped Augur, Breazeal, Sharkey and me during Innorobo, a robotics trade show. The video can be accessed here.
Thursday, May 19, 2011
Google Cars on Nevada Highways?
Google has begun a campaign lobbying the legislature of Nevada to accept the operation of its self-driving cars on public roads.
Read the full story by John Markoff from the NYTIMES titled, "Google Lobbies Nevada to Allow Self-Driving Cars."
The company confirmed on Tuesday that it has lobbied on behalf of the legislation, though executives declined to say why they want the robotic cars’ maiden state to be Nevada. Jay Nancarrow, a company spokesman, said the project was still very much in the testing phase.
Google hired David Goldwater, a lobbyist based in Las Vegas, to promote the two measures, which are expected to come to a vote before the Legislature’s session ends in June. One is an amendment to an electric-vehicle bill providing for the licensing and testing of autonomous vehicles, and the other is the exemption that would permit texting.
In testimony before the State Assembly on April 7, Mr. Goldwater argued that the autonomous technology would be safer than human drivers, offer more fuel-efficient cars and promote economic development.
Although safety systems based on artificial intelligence are rapidly making their way into today’s cars, completely autonomous systems raise thorny questions about safety and liability.
UK Approach to Unmanned Aircraft Systems
A document from the UK Ministry of Defence with a critical perspective on the development of unmanned aircraft has been receiving considerable attention in the press. I posted a link to what the media said about the document earlier. The full report, titled "Joint Doctrine Note 2/11: The UK Approach to Unmanned Aircraft Systems," is now available online and can be accessed here.
Wallach in H+ Magazine
An interview of Wendell Wallach by Ben Goertzel has been published in H+ magazine online. Goertzel asks Wallach a number of questions regarding the likelihood of developing artificial agents with moral decision-making capabilities and consciousness.
The full interview is available here.
Ben:
What are your thoughts about consciousness? What is it? Let’s say we build an intelligent computer program that is as smart as a human, or smarter. Would it necessarily be conscious? Could it possibly be conscious? Would its degree and/or type of consciousness depend on its internal structures and dynamics, as well as its behaviors?
Wendell:
There is still a touch of the mystic in my take on consciousness. I have been meditating for 43 years, and I perceive consciousness as having attributes that are ignored in some of the existing theories for building conscious machines. While I dismiss supernatural theories of consciousness and applaud the development of a science of consciousness, that science is still rather young. The human mind/body is more entangled in our world than models of the self-contained machine would suggest. Consciousness is an expression of relationship. In the attempt to capture some of that relational dynamic, philosophers have created concepts such as embodied cognition, intersubjectivity, and enkinaesthetia. There may even be aspects of consciousness that are peculiar to being carbon-based organic creatures.
We already have computers that are smarter than humans in some respects (e.g., mathematics and data-mining), but are certainly not conscious. Future (ro)bots that are smarter than humans may demonstrate functional abilities associated with consciousness. After all, even an amoeba is aware of its environment in a minimal way. But other higher-order capabilities such as being self-aware, feeling empathy, or experiencing transcendent states of mind depend upon being more fully conscious.
I suspect that without somatic emotions or without conscious awareness (ro)bots will fail to interact satisfactorily with humans in complex situations. In other words, without emotional and moral intelligence they will be dumber in some respects. However, if certain abilities can be said to require consciousness, then having the abilities is a demonstration that the agent has a form of consciousness. The degree and/or type of consciousness would depend on its internal structure and dynamics, not merely upon the (ro)bot's demonstrating behavior equivalent to that of a human.
Do Hospitals Hype Robotic Surgery?
Johns Hopkins Medical School issued the following news release: "Hospitals misleading patients about benefits of robotic surgery, study suggests."
Johns Hopkins research shows hospital websites use industry-provided content and overstate claims of robotic success
An estimated four in 10 hospital websites in the United States publicize the use of robotic surgery, with the lion’s share touting its clinical superiority despite a lack of scientific evidence that robotic surgery is any better than conventional operations, a new Johns Hopkins study finds.
The promotional materials, researchers report online in the Journal for Healthcare Quality, overestimate the benefits of surgical robots, largely ignore the risks and are strongly influenced by the product’s manufacturer.
“The public regards a hospital’s official website as an authoritative source of medical information in the voice of a physician,” says Marty Makary, M.D., M.P.H., an associate professor of surgery at the Johns Hopkins University School of Medicine and the study’s leader. “But in this case, hospitals have outsourced patient education content to the device manufacturer, allowing industry to make claims that are unsubstantiated by the literature. It’s dishonest and it’s misleading.”
In the last four years, Makary says, the use of robotics to perform minimally invasive gynecological, heart and prostate surgeries and other types of common procedures has grown 400 percent. Proponents say robot-assisted operations use smaller incisions, are more precise and result in less pain and shorter hospital stays — claims the study’s authors challenge as unsubstantiated. More hospitals are buying the expensive new equipment and many use aggressive advertising to lure patients who want to be treated with what they think is the latest and greatest in medical technology, Makary notes.
But Makary says there are no randomized, controlled studies showing patient benefit in robotic surgery. “New doesn’t always mean better,” he says, adding that robotic surgeries take more time, keep patients under anesthesia longer and are more costly.
None of that is apparent from reading the hospital websites that promote robotic surgery, he says. For example, he points out that 33 percent of hospital websites making claims about the robot say that the device yields better cancer outcomes, a notion he calls misleading to a vulnerable population of cancer patients seeking out the best care.
Makary and his colleagues analyzed 400 randomly selected websites from U.S. hospitals of 200 beds or more. Data were gathered on the presence and location of robotic surgery information on a website, the use of images or text provided by the manufacturer, and claims made about the performance of the robot.
Forty-one percent of the hospital websites reviewed described the availability and mechanics of robotic surgery, the study found. Of these, 37 percent presented the information on the homepage and 66 percent mentioned it within one click of the homepage. Manufacturer-provided materials were used on 73 percent of websites, while 33 percent directly linked to a manufacturer website.
When describing robotic surgery, the researchers found that 89 percent made a statement of clinical superiority over more conventional surgeries, the most common being less pain (85 percent), shorter recovery (86 percent), less scarring (80 percent) and less blood loss (78 percent). Thirty-two percent made a statement of improved cancer outcome. None mentioned any risks.
“This is a really scary trend,” Makary says. “We’re allowing industry to speak on behalf of hospitals and make unsubstantiated claims.”
Makary says websites do not make clear how institutions or physicians arrived at their claims of the robot’s superiority, or what kinds of comparisons are being made. “Was robotic surgery being compared to the standard of care, which is laparoscopic surgery,” Makary asks, “or to ‘open’ surgery, which is an irrelevant comparison because robots are only used in cases when minimally invasive techniques are called for.”
Makary says the use of manufacturer-provided images and text also raises serious conflict- of-interest questions. He says hospitals should police themselves in order not to misinform patients. Johns Hopkins Medicine, for example, forbids the use of industry-provided content on its websites.
“Hospitals need to be more conscientious of their role as trusted medical advisers and ensure that information provided on their websites represents the best available evidence,” he says. “Otherwise, it’s a violation of the public trust.”
In addition to Makary, other Johns Hopkins researchers involved in the study include Linda X. Jin, B.A., B.S.; Andrew A. Ibrahim, B.A.; Naeem A. Newman, M.D.; and Peter J. Pronovost, M.D., Ph.D.
Media Contact: Stephanie Desmon
410-955-8665
Saturday, May 7, 2011
An Algorithm for Evolving Altruism?
Dario Floreano and Laurent Keller of the University of Lausanne in Switzerland claim that altruism quickly evolves in simulations using robots. They suggest that an algorithm for altruism has emerged from this research and may be used in other robots. Science Magazine has an online article about the research titled, Even Robots Can Be Heroes. Science Daily's online article discussing the research is titled, Robots Learn to Share: Why We Go Out of Our Way to Help One Another. Floreano and Keller report on the research in PLoS Biology, and Floreano explains the research in a video on YouTube.
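The result tracks Hamilton's rule from kin-selection theory: an altruistic act that costs the actor c and confers benefit b on a relative tends to spread when relatedness r satisfies r·b > c. A minimal replicator-dynamics sketch of that condition follows; the model and parameters are illustrative only, not Floreano and Keller's published robot simulation:

```python
def evolve(r, b, c, generations=200):
    """Track the frequency of an altruism allele under simple replicator
    dynamics. An altruist pays fitness cost c and recoups r*b indirectly
    through relatives who carry the same allele (Hamilton's rule)."""
    freq = 0.5  # start with altruists at half the population
    for _ in range(generations):
        w_alt = 1 - c + r * b  # altruist fitness: baseline minus cost plus kin benefit
        w_self = 1.0           # selfish baseline fitness
        mean = freq * w_alt + (1 - freq) * w_self
        freq = freq * w_alt / mean  # replicator update
    return freq

# When r*b > c altruism approaches fixation; when r*b < c it dies out.
print(evolve(r=0.5, b=1.0, c=0.2))  # r*b = 0.5 > 0.2, frequency near 1.0
print(evolve(r=0.1, b=1.0, c=0.2))  # r*b = 0.1 < 0.2, frequency near 0.0
```

Floreano and Keller's contribution was showing this same threshold behavior emerging in evolving robot controllers rather than in an abstract model like the one above.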