Visual and Spatial Robotic Task Planning Using an Augmented Reality Authoring Interface

Task planning for mobile robots is an important topic in robotics. Programming a robot to perform a sequence of location- and time-based tasks has the potential to greatly assist in daily life. Many previous works have adopted an Augmented Reality (AR) interface for robot task planning and animation authoring because it bridges the digital authoring interface with the physical world. However, these systems all rely on an external computer vision approach, such as image markers or object recognition, to localize the robot during manipulation and navigation.

Researchers at Purdue University have developed a lightweight, spatially situated design for robot task planning. The technology combines AR interfaces, Robot Assistants (RA), and the interactive Internet of Things (IoT) to program robots and machines from a smartphone. By walking around with the handheld device, the user directly and spatially authors the robot's navigation path and interactive functions at room-level scale, without needing an external tracking system.

Advantages:
-Direct visual authoring
-Easy to install
-Ready to use

Potential Applications:
-Internet of Things
Apr 23, 2018
United States

Purdue Office of Technology Commercialization
1801 Newman Road
West Lafayette, IN 47906

Phone: (765) 588-3475
Fax: (765) 463-3486