Wendell Wallach and Colin Allen maintain this blog on the theory and development of artificial moral agents and computational ethics, topics covered in their OUP 2009 book...

Sunday, November 15, 2009

Integrating Sound and Vision to Enhance Robot Perception

By developing algorithms for integrating auditory and visual input, Popeye, a robot built by a team of European researchers, was able to identify a speaker with a fair degree of reliability. ICT Results reports on this research in an article titled "Robotic perception on purpose."

“The originality of our project was our attempt to integrate two different sensory modalities, namely sound and vision,” explains Radu Horaud, POP’s coordinator. “This was very difficult to do, because you are integrating two completely different physical phenomena,” he adds.

Vision works from the reflection of light waves off an object, allowing the observer to infer properties such as size, shape, density and texture. Sound, by contrast, is used to locate the direction of its source and to identify what kind of sound it is.
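The article does not spell out Popeye's actual fusion algorithm, but the basic idea of combining an auditory direction-of-arrival estimate with a visual one can be sketched in a few lines. The sketch below is a hypothetical illustration using simple inverse-variance weighting: each modality reports a bearing to the speaker plus an uncertainty, and the more confident estimate dominates the fused result. The function name and parameters are my own, not from the POP project.

```python
def fuse_bearings(audio_deg, audio_var, visual_deg, visual_var):
    """Fuse two bearing estimates (in degrees) of a speaker's direction.

    Hypothetical illustration of audio-visual integration: each sensory
    modality supplies a bearing and a variance; the fused estimate is the
    inverse-variance weighted average, so the less noisy modality counts
    for more. (Assumes both bearings lie in a range with no wraparound.)
    """
    w_audio = 1.0 / audio_var    # weight = inverse variance
    w_visual = 1.0 / visual_var
    return (w_audio * audio_deg + w_visual * visual_deg) / (w_audio + w_visual)


# Example: a noisy microphone-array estimate (30 deg, variance 4.0) combined
# with a sharper camera-based estimate (20 deg, variance 1.0) lands much
# closer to the visual estimate.
fused = fuse_bearings(30.0, 4.0, 20.0, 1.0)
print(fused)  # → 22.0
```

In practice a system like Popeye's would face harder problems this sketch ignores: the two sensors live in different coordinate frames, their measurements arrive at different rates, and vision can drop out entirely when the speaker leaves the frame, which is part of why Horaud describes the integration as so difficult.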