
Computer Vision, Deep Learning Aid Prosthetic Hands

Image of bottle as seen from a camera attached to the prosthetic hand (Newcastle University)

4 May 2017. A built-in camera and artificial intelligence can improve the speed and grasping ability of a prosthetic hand, as shown in tests with people missing a hand. Test results and a description of the technology developed by engineers at Newcastle University in the U.K. appear in yesterday’s issue of the Journal of Neural Engineering.

Researchers from Newcastle’s Intelligent Sensing lab are seeking to improve the capabilities of prosthetic devices, particularly those for upper limbs. The authors note that even with recent advances in materials, electronic signaling, and miniaturization, today’s commercial prosthetic hands offer a limited range of movement, control, and grasping motions. Yet the need for these devices continues to expand. In their paper, the team cites data indicating the U.S. has some 500,000 upper-limb amputees, while the U.K. records 473 new upper-limb amputations each year. In addition, more than half of those individuals are of working age, between 15 and 54 years old.

The researchers, led by biomedical engineering lecturer Kianoush Nazarpour, focused particularly on enabling prosthetic hands to respond to objects intuitively and at much the same speed as natural hands. “Responsiveness has been one of the main barriers to artificial limbs,” says Nazarpour in a university statement. “For many amputees the reference point is their healthy arm or leg so prosthetics seem slow and cumbersome in comparison.”

Nazarpour and colleagues combine two aspects of artificial intelligence in their solution. The first, computer vision, uses a common, inexpensive web cam fitted to the prosthetic hand to capture images of objects to be grasped, with algorithms that recognize each object’s size and shape. The second, deep learning, classifies the object’s properties according to the type of grasping action the hand needs to perform. The team’s software supports four types of grasping actions, depending on the position of the wrist and fingers and the number of fingers needed.
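This two-stage pipeline lends itself to a compact illustration. The sketch below is a minimal, hypothetical Python example rather than the Newcastle team’s actual model: it shows how a small convolutional network could map a single web-cam snapshot to one of four grasp classes. The architecture, image size, and grasp labels are all assumptions for illustration.

```python
# Minimal sketch, not the published model: a small convolutional network
# that maps one camera snapshot to one of four grasp types.
import torch
import torch.nn as nn

# Hypothetical labels for the four grasp classes described in the article.
GRASP_TYPES = ["palmar_wrist_neutral", "palmar_wrist_pronated", "tripod", "pinch"]

class GraspClassifier(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 72x72 -> 36x36
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 36x36 -> 18x18
        )
        self.classifier = nn.Linear(32 * 18 * 18, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One grayscale 72x72 snapshot from the prosthesis-mounted camera.
snapshot = torch.rand(1, 1, 72, 72)
logits = GraspClassifier()(snapshot)
print(GRASP_TYPES[logits.argmax(dim=1).item()])
```

In a real controller, the predicted class would select one of the hand’s preset grip patterns rather than drive the fingers directly.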

Deep learning makes it possible for the prosthetic hand to successfully grasp an object it has not seen before, and still perform quickly. As Nazarpour notes, “the hand is able to pick up novel objects, which is crucial since in everyday life people effortlessly pick up a variety of objects that they have never seen before.”

The team rejected assembling a database of all known objects, which would have been unwieldy, if not impossible. Incorporating deep learning into the system was the work of doctoral candidate and first author Ghazal Ghazaei, who instead trained the software on 473 representative graspable objects from an existing image library, with 72 images of each object taken at 5-degree increments. “So the computer isn’t just matching an image,” adds Ghazaei, “it’s learning to recognize objects and group them according to the grasp type the hand has to perform to successfully pick it up.”
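The arithmetic behind that training set is easy to check: stepping around an object in 5-degree increments covers a full turn in 360 / 5 = 72 views. A short sketch, with an entirely hypothetical data layout:

```python
# Rotating an object through a full turn in 5-degree steps yields
# 360 / 5 = 72 viewpoints, matching the 72 images per object cited above.
angles = list(range(0, 360, 5))
assert len(angles) == 72

NUM_OBJECTS = 473  # representative graspable objects, per the article
dataset = [(obj_id, angle) for obj_id in range(NUM_OBJECTS) for angle in angles]
print(len(dataset))  # 473 objects x 72 views = 34,056 training images
```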

In the first set of tests, the software correctly identified 85 percent of the objects it had already encountered, and 75 percent of objects it had not seen before. In the next tests, the team ran the software on a laptop simulating the operations of a prosthetic hand: the camera takes a snapshot of the object, and the software responds to the image. In these assessments, the software correctly identified 84 percent of the objects, counting both previously seen and newly encountered objects.
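The article does not spell out the evaluation protocol, but the seen-versus-unseen distinction implies holding out whole objects, all 72 views of each, rather than individual images. A minimal sketch of such a split, with the hold-out fraction as an assumption:

```python
import random

# Hold out whole objects (every view of each) so "unseen" accuracy
# reflects genuinely novel objects, not just new angles of familiar ones.
# The 10 percent hold-out fraction is an assumption for illustration.
random.seed(0)
object_ids = list(range(473))
random.shuffle(object_ids)
unseen = set(object_ids[: len(object_ids) // 10])
seen = set(object_ids) - unseen

def accuracy(predicted, actual):
    """Fraction of grasp-type predictions matching the ground truth."""
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)
```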

The researchers then asked two individuals with amputations below the elbow to try the system with a commercial prosthetic hand made by the company Touch Bionics. This i-limb ultra model has fingers and a thumb that operate independently and can increase the strength of its grip when needed. The i-limb ultra is normally operated through a mobile app offering 14 grasping patterns.

In this series of tests, the two participants were asked to pick up and move 24 objects with the prosthetic hand, first snapping a photo with the web cam, then grasping, moving, and releasing each object as requested. The participants performed these tasks six times, with the order of the objects altered in each batch. At first the individuals had feedback from video screens and signals from the software, but this feedback was reduced as the test went on, until the last two batches had none.
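The trial structure is straightforward to reproduce in code. The sketch below builds a schedule like the one described, six batches of the same 24 objects with the order reshuffled each time; the object names are placeholders:

```python
import random

# Six batches of the same 24 objects, order reshuffled in each batch,
# as in the reported protocol. Object names are placeholders.
objects = [f"object_{i:02d}" for i in range(1, 25)]
schedule = []
for _ in range(6):
    batch = objects.copy()
    random.shuffle(batch)
    schedule.append(batch)

assert len(schedule) == 6 and all(len(b) == 24 for b in schedule)
```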

Results of this test show the prosthetic hand controlled by the software successfully grasped and moved 88 percent of the objects encountered. Moreover, the training provided to the participants enabled them to operate the system faster as the tests progressed, even without visual or system feedback by the end. In surveys following the exercise, one participant said, “Just getting the routine was difficult at the beginning, but once this was established it became much easier.” The second participant added, “For the time being, the vision-based system seems to be a good solution. I liked its responsiveness very much.”


*     *     *
