Computer Learning Process Developed for Robotic Arm Movement

Ashutosh Saxena (Cornell University)

Computer scientists at Cornell University in Ithaca, New York, developed an algorithm that fine-tunes the movements of industrial robotic arms through feedback and learning from interactions with humans. A team from the lab of computer science professor Ashutosh Saxena will present its findings next month at the Neural Information Processing Systems conference in Lake Tahoe, California.

Early industrial robots performed repetitive tasks in structured settings, such as manufacturing assembly lines, that required little judgment or interaction with humans other than the operator. Later generations of robots, however, are more likely to be deployed in commercial or health care environments, where they must perform complex, interactive tasks with multiple people, calling for the machines to make quick and subtle adjustments in their operations.

Saxena and colleagues developed software for a Baxter robot, made by Rethink Robotics, that enables it to learn various trajectories for moving its arms and to adjust those trajectories in response to feedback from people nearby. The Baxter model has two robotic arms, each with an elbow and a rotating wrist. The device also has a touch screen for interactions with the operator.

The algorithm enables the operator to select an initial trajectory for a robotic arm from choices displayed on the touch screen. When the robot starts its arm movements, humans can intervene, physically holding or moving the arm or wrist to improve its performance, in what the researchers call zero-G mode. The system internalizes these corrections and adjusts accordingly in future similar movements. Adjustments can also be made through the touch screen.
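The learn-from-correction loop described above can be sketched as a simple preference-based weight update: the robot scores candidate trajectories with a learned weight vector, and when a person physically improves a trajectory, the weights shift toward the improved version. This is a minimal illustration only; the feature map and update rule below are assumptions for the sketch, not the Cornell team's actual implementation.

```python
import numpy as np

def feature_vector(trajectory):
    # Hypothetical feature map: mean waypoint position plus a rough
    # smoothness measure (spread of the step-to-step differences).
    # A real system would use far richer task and context features.
    traj = np.asarray(trajectory, dtype=float)
    return np.concatenate([traj.mean(axis=0), np.diff(traj, axis=0).std(axis=0)])

def score(weights, trajectory):
    # Higher score = trajectory the robot currently prefers.
    return weights @ feature_vector(trajectory)

def coactive_update(weights, current, improved, lr=1.0):
    # Perceptron-style update: move the weights toward the trajectory
    # the human demonstrated and away from the one the robot proposed.
    return weights + lr * (feature_vector(improved) - feature_vector(current))
```

After a single correction, the improved trajectory scores at least as high as the robot's original proposal, so repeated feedback cycles steer future motions toward the operator's preferences.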

The Cornell team demonstrated the algorithm with a Baxter robot programmed for supermarket checkout duties. Adjustments in arm or wrist movements are made for fragile items placed on the moving belt, such as eggs or fresh fruit, as well as for potentially dangerous housewares, such as kitchen knives. As a result, the robot can learn specific trajectories associated with each item, combining pick-and-place and grasping motions with bar-code detection to identify the specific item.
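The item-specific behavior amounts to a lookup: the scanned bar code selects the trajectory learned for that item, with a generic motion as the fallback. The bar codes and trajectory names below are invented for illustration; the sketch only shows the routing idea, not the team's software.

```python
# Hypothetical mapping from scanned bar codes to learned trajectories.
LEARNED_TRAJECTORIES = {
    "0123456789012": "gentle_lift",   # e.g., carton of eggs
    "9876543210987": "blade_averted", # e.g., kitchen knife
}

DEFAULT_TRAJECTORY = "standard_pick_place"

def trajectory_for(barcode):
    # Route the item to its learned trajectory, or fall back to the
    # default pick-and-place motion for items never seen before.
    return LEARNED_TRAJECTORIES.get(barcode, DEFAULT_TRAJECTORY)
```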

The researchers tested the algorithm with five individuals not associated with the project, who were asked to provide feedback to the robot at a simulated grocery checkout station performing 16 pick-and-place tasks. The results showed the testers were able to train the robot arm to perform the optimal trajectories within five feedback cycles, with each training task requiring 5.5 minutes on average. The testers re-ranked the available trajectories on the touch screen for easier movements, and relied on physically moving the robotic arm or wrist (i.e., zero-G mode) for more complex or difficult ones.

The following video gives a brief demonstration of the software and robot.

*     *     *
