A Purdue University psychologist in West Lafayette, Indiana is developing a form of machine vision that provides a field of view with more ability to perceive objects in the context of their environment. This more human-like form of robotic vision has patents filed and is available for licensing from Purdue’s technology transfer office.
According to Zygmunt Pizlo, research psychologist and director of Purdue’s Visual Perception Laboratory, today’s robotic vision technology uses multiple cameras with laser range finders and other sensors to detect objects around them. While these systems can provide basic object recognition, they do not replicate the 3-D perception of which humans are capable.
The Visual Perception Laboratory is developing a form of robotic vision based on a model of decision-making that resembles the human mind. “We believe there is a fundamental principle for human vision and that is we rely on a prior knowledge about a physical environment,” says Pizlo, “so we’re trying to program this knowledge of the physical environment into a robot’s artificial intelligence.”
A key part of that process is figure-ground organization, which simplifies a scene or photo around a main object, with everything else in the scene relegated to the background. Tadamasa Sawada, a postdoctoral researcher in the lab, says getting a robot to develop this capability requires “incorporating visual mechanisms and a prior knowledge about the physical environment into a robot.”
Research by the Visual Perception Lab is generating the algorithms for robots to learn this capability, starting with certain assumptions about the physical world shared by humans and robots. “The physical world is not completely random,” says Pizlo. “Most natural objects are symmetrical, all objects have volume, gravity is always present, ground surfaces are approximately horizontal.”
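Priors like the ones Pizlo lists could, in principle, be expressed as a scoring function over candidate 3-D interpretations of a scene. The sketch below is purely illustrative, not the lab’s actual algorithm; the class, field names, and weights are all assumptions made for the example:

```python
# Illustrative sketch only: ranking candidate 3-D interpretations by how
# well they match simple physical-world priors (symmetry, nonzero volume,
# roughly horizontal support surface). Names and weights are hypothetical.

from dataclasses import dataclass


@dataclass
class Candidate3D:
    """A hypothetical 3-D interpretation of an object in a scene."""
    width: float      # extents of the object's bounding volume (meters)
    depth: float
    height: float
    tilt_deg: float   # deviation of the support surface from horizontal
    asymmetry: float  # 0.0 = perfectly mirror-symmetric shape


def prior_score(c: Candidate3D) -> float:
    """Higher score = interpretation agrees better with the priors."""
    score = 0.0
    # Prior 1: most natural objects are symmetrical.
    score += max(0.0, 1.0 - c.asymmetry)
    # Prior 2: all objects have volume; penalize degenerate flat readings.
    volume = c.width * c.depth * c.height
    score += 1.0 if volume > 0 else -10.0
    # Prior 3: ground surfaces are approximately horizontal.
    score += max(0.0, 1.0 - abs(c.tilt_deg) / 45.0)
    return score


# A plausible chair-like interpretation vs. a degenerate, flat one.
chair = Candidate3D(width=0.5, depth=0.5, height=0.9, tilt_deg=2.0, asymmetry=0.1)
flat = Candidate3D(width=0.5, depth=0.0, height=0.9, tilt_deg=2.0, asymmetry=0.1)

best = max([chair, flat], key=prior_score)  # the chair-like reading wins
```

The point of the sketch is only that knowledge of the physical world can act as a tiebreaker among interpretations that are equally consistent with the raw image data.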
Pizlo says accomplishing this higher-order type of vision will enable robots to interact with humans more like humans. “Until they can see like us, they can’t truly interact with us,” notes Pizlo. “Once they can interact with us they can begin doing all types of tasks such as drive a car, help surgeons in hospitals, assist the elderly, provide sight for the blind, replace people in high-risk situations like making repairs in a nuclear plant and, yes, bring us coffee in the morning.”
The researchers have built a robot they named Capek (pictured at top) that can watch individuals move around a space with physical objects like chairs and desks, and learn from the team’s actions to accurately track their movements. Purdue’s Office of Technology Commercialization has filed patents on the lab’s technology, which is available for licensing.
In the following video, Pizlo and colleagues tell more about and demonstrate the technology.