Computer scientists at Carnegie Mellon University and Johns Hopkins University’s Applied Physics Laboratory (APL) have adapted new techniques for uncovering software bugs to the demanding requirements of robotic surgery. Carnegie Mellon’s André Platzer and APL’s Yanni Kouskoulas and colleagues will describe their work later this week at the Hybrid Systems: Computation and Control conference in Philadelphia.
Platzer and Kouskoulas led the team, which applied verification techniques designed for hybrid cyber-physical systems: systems that combine computer controls with real-world physical activity, often with life-and-death consequences, as in aircraft collision avoidance and robotic surgery. “Because the consequences of these systems malfunctioning are so great,” says Platzer, “finding a way to prove they are free of design errors has been one of the most important and pressing challenges in computer science.”
Most system verification technologies — even those that uncover design defects in complex computer circuitry — analyze all the possible states of a system, much like a mathematical proof of a theorem. But these approaches, say the researchers, assume a finite number of states against which a system’s design can be tested. With cyber-physical systems, that assumption cannot be made, since these hybrid systems must deal with the infinite variations encountered in the physical world; a surgical tool’s position, for instance, ranges over a continuum of values rather than a finite set.
To overcome this obstacle, the research team developed an approach based on differential dynamic logic, which defines an operational region for the system and accommodates the infinitely many activity states possible within that region’s boundaries. Platzer devised a tool called KeYmaeraD, a theorem prover for quantified differential dynamic logic for modeling cyber-physical systems, which he and colleagues earlier applied to aircraft collision avoidance and distributed car control systems.
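To give a flavor of the approach, safety claims in differential dynamic logic take roughly the following shape (a minimal sketch; the names init, ctrl, and safe are illustrative placeholders rather than formulas from the surgical case study):

$$
\mathit{init} \;\rightarrow\; \bigl[\,\bigl(\mathit{ctrl};\ \{x' = f(x)\ \&\ Q\}\bigr)^{*}\,\bigr]\;\mathit{safe}
$$

The box modality $[\alpha]\,\varphi$ asserts that $\varphi$ holds after every possible run of the hybrid program $\alpha$. Here the program repeatedly alternates a discrete controller with continuous dynamics $x' = f(x)$ confined to an evolution domain $Q$, so a single proof of the formula covers all of the infinitely many states the system can reach.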
The researchers applied this approach to evaluate the control algorithm of a robotic surgical system that assists a human surgeon operating in small recesses of the skull that may not be visible to the surgeon. The system manipulates and restricts the movement of the surgeon’s tools; as a tool nears the boundary with healthy brain tissue, the system exerts force feedback as a warning, or stops the tool altogether.
The team had tested the robotic surgery system extensively, including on cadavers. Yet the verification methods devised by the researchers indicated that unexpected events could occur, especially where the safety boundary of one region interfered with the boundaries of adjoining regions. The safety feedback mechanisms could cancel each other out, they discovered, allowing the tool to enter the off-limits regions.
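A small numerical sketch makes this failure mode concrete. Everything below (the geometry, the gains, and the boundary_force helper) is a hypothetical illustration of spring-like force feedback near a boundary, not the robot’s actual control code:

```python
import numpy as np

# Hypothetical illustration of the cancellation problem the verification
# uncovered; the real system's geometry, gains, and algorithm differ.

def boundary_force(dist, normal, warn_dist=2.0, k=5.0):
    """Spring-like feedback force from one safety boundary.

    dist: distance (mm) from the tool tip to the boundary plane.
    normal: unit vector pointing away from the boundary, back toward safety.
    Inside the warning band the force ramps up linearly; outside it is zero.
    """
    if dist >= warn_dist:
        return np.zeros(3)
    return k * (warn_dist - dist) * np.asarray(normal, dtype=float)

# The tool tip sits 0.5 mm from each of two adjoining boundaries whose
# warning bands overlap and whose normals point in opposite directions.
f_a = boundary_force(0.5, [1.0, 0.0, 0.0])
f_b = boundary_force(0.5, [-1.0, 0.0, 0.0])

print(f_a, f_b, f_a + f_b)  # each force is large, but their sum is zero
```

Each boundary on its own would push the tool firmly back toward safety, but superposed their forces vanish, exactly the kind of interaction between adjoining regions that case-by-case testing is unlikely to exercise.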
Their formal verification tool provides concrete examples of how the uncovered problem can occur. “It leads you to the problem,” notes Kouskoulas. “You then have to be creative to find the solution.” That solution included a new algorithm to test the safety of the system.
Platzer and Kouskoulas add that this formal verification technique could also change the way medical robots are evaluated, providing stronger assurance of safety than rigorous testing alone currently offers.
* * *