System Creates Ad Hoc Touch-Based Interfaces on Surfaces

Wall surface with touch interface created by WorldKit (chrisharrison.net)

Computer scientists at Carnegie Mellon University in Pittsburgh developed a system that projects touch-sensitive images for controlling computer devices onto everyday surfaces, almost anywhere at will. The team of doctoral candidates Robert Xiao and Chris Harrison, with professor Scott Hudson, will present their WorldKit system next week at the ACM SIGCHI Conference on Human Factors in Computing Systems in Paris.

Xiao, Harrison, and Hudson developed WorldKit at Carnegie Mellon's Human-Computer Interaction Institute. The system combines a ceiling-mounted camera, sensors, and projector to record the configuration of the room, capture hand movements and gestures, and project images onto the desired surfaces. WorldKit enables individuals to create projected images on room surfaces, on the fly and as needed, containing elements that interact with computer systems.
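To make that description concrete, here is a minimal sketch in Python of how an interactive region might be "painted" onto a surface seen by a ceiling-mounted depth camera. It is not WorldKit's actual code; the class, function names, thresholds, and data layout are assumptions for illustration only.

    import numpy as np

    class SurfaceWidget:
        """A hypothetical interactive region painted onto a flat surface."""
        def __init__(self, name, region_mask, baseline_depth, on_touch):
            self.name = name                      # label, e.g. "light switch"
            self.region_mask = region_mask        # boolean mask of pixels in the widget
            self.baseline_depth = baseline_depth  # depth map of the empty surface (meters)
            self.on_touch = on_touch              # callback fired when the widget is touched

    def paint_widget(name, depth_frame, selected_pixels, on_touch):
        """Create a widget from the pixels the user 'painted' with a hand gesture."""
        mask = np.zeros(depth_frame.shape, dtype=bool)
        rows, cols = zip(*selected_pixels)
        mask[list(rows), list(cols)] = True
        # Remember how far away the bare surface is; a finger will later show up closer.
        return SurfaceWidget(name, mask, depth_frame.copy(), on_touch)

    # Usage: paint a switch on a wall patch 2.5 meters from the camera.
    depth_frame = np.full((240, 320), 2.5)
    pixels = [(r, c) for r in range(100, 120) for c in range(150, 180)]
    switch = paint_widget("light switch", depth_frame, pixels,
                          on_touch=lambda: print("switch toggled"))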

For example, a person can create a TV remote control on the arm of a sofa, or post a calendar on the wall. In addition, people in the room can then interact with the calendar, expanding the detail of or modifying individual events.

The researchers are building on earlier work by Harrison, who joins the Carnegie Mellon faculty this summer, that harnesses a depth-sensing camera, such as the one in Microsoft's Kinect, to track the user's fingers on remote flat surfaces. Projecting a touch screen in this way allows users to control interactive applications by tapping or dragging their fingers on the projected surfaces.
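One way to picture that finger tracking: the system remembers how far away the bare surface is and flags a touch when something finger-sized appears a centimeter or two in front of it. The sketch below continues the hypothetical example above; the thresholds and blob size are invented for illustration, and a real depth sensor is considerably noisier.

    import numpy as np

    # Hypothetical thresholds: a fingertip counts as touching when it sits
    # roughly 1-3 cm in front of the recorded surface depth.
    TOUCH_MIN, TOUCH_MAX = 0.01, 0.03   # meters

    def detect_touch(widget, live_depth):
        """Return True if something finger-sized hovers just above the widget."""
        # Positive height means something is closer to the camera than the bare surface.
        height = widget.baseline_depth - live_depth
        touching = (height > TOUCH_MIN) & (height < TOUCH_MAX) & widget.region_mask
        # Require a small blob of pixels so single-pixel sensor noise is ignored.
        if np.count_nonzero(touching) > 20:
            widget.on_touch()
            return True
        return False

    # Usage: simulate a fingertip 2 cm in front of the wall, inside the widget.
    live = depth_frame.copy()
    live[105:112, 160:167] = 2.48
    detect_touch(switch, live)          # prints "switch toggled", returns True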

WorldKit, however, does not require prior calibration, and can automatically adjust its sensing and image projection to the orientation of the chosen surface. Individuals can select from a menu and project control-images such as switches, message boards, and indicator lights. Xiao says the system takes advantage of new and improved hardware, noting “Depth sensors are getting better and projectors just keep getting smaller.”
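The calibration-free adjustment to a surface's orientation can be illustrated with a standard plane fit: sample 3-D points from the chosen patch with the depth camera, fit a plane, and orient the projected image along that plane. The least-squares fit below is textbook math rather than WorldKit's published method, and the numbers are synthetic.

    import numpy as np

    def fit_surface_plane(points):
        """Least-squares plane through an (N, 3) array of x, y, z surface samples.
        Returns a unit normal vector and a point on the plane."""
        centroid = points.mean(axis=0)
        # The singular vector with the smallest singular value is the plane normal.
        _, _, vt = np.linalg.svd(points - centroid)
        normal = vt[-1]
        return normal / np.linalg.norm(normal), centroid

    # Usage: a slightly tilted, mildly noisy wall patch.
    rng = np.random.default_rng(0)
    xy = rng.uniform(0.0, 1.0, size=(500, 2))
    z = 0.10 * xy[:, 0] + 0.05 * xy[:, 1] + rng.normal(0.0, 0.002, 500)
    normal, origin = fit_surface_plane(np.column_stack([xy, z]))
    print("surface normal:", np.round(normal, 3))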

The WorldKit developers aim to refine the system so users can custom-design interfaces with hand gestures in free space, rather than selecting from a menu. In addition, higher-resolution depth cameras could enable the system to sense detailed finger gestures, and voice sensors could let it respond to spoken commands.

“People have talked about creating smart environments, where sensors, displays and computers are interwoven,” says Harrison. “With WorldKit, we say forget touchscreens and go straight to projectors, which can make the room truly interactive.”
