Nvidia Corp., a US designer of graphics processing units (GPUs), says its researchers have developed a system that can teach a robot how to perform a task by observing human actions.
Nvidia calls it “a first of its kind deep learning-based system” designed to strengthen “communication between humans and robots”, and says further research will help people work more seamlessly with robots, TechCrunch reports.
The team of researchers behind the system was led by the company’s principal research scientist Stan Birchfield and deep-learning researcher Jonathan Tremblay.
The researchers trained a sequence of neural networks to handle perception, program generation and program execution, using Nvidia’s TITAN X GPUs.
Using the system, a robot can learn a task just by observing a single real-world demonstration.
After observing the task, the robot produces a description of the steps required to undertake the task again.
The description is human-readable, enabling the user to quickly identify and fix any issues with the robot’s “interpretation of the human demonstration before execution on the real robot”.
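The three-stage flow described above can be sketched in miniature. This is purely an illustration, not Nvidia’s implementation: the function names, the object-state representation, and the step format are all invented here, and simple stand-ins replace the actual neural networks.

```python
# Hypothetical sketch of the pipeline: perception -> program generation ->
# execution. All names and data formats are illustrative stand-ins.

def perceive(demo_frames):
    """Stand-in for the perception stage: extract object states per frame."""
    return [frame["objects"] for frame in demo_frames]

def generate_program(object_states):
    """Stand-in for program generation: emit human-readable steps by
    comparing consecutive object states from the demonstration."""
    steps = []
    for before, after in zip(object_states, object_states[1:]):
        for obj, placement in after.items():
            if before.get(obj) != placement:
                steps.append(f"place {obj} on {placement}")
    return steps

def execute(program):
    """Stand-in for execution: the robot replays each step of the plan."""
    return [f"executed: {step}" for step in program]

# One demonstration: a red cube is stacked onto a blue cube.
demo = [
    {"objects": {"red_cube": "table", "blue_cube": "table"}},
    {"objects": {"red_cube": "blue_cube", "blue_cube": "table"}},
]

program = generate_program(perceive(demo))
print(program)  # the readable plan a user could inspect and correct
print(execute(program))
```

The key property the article highlights is the middle step: because the generated program is plain text, a user can review it and catch a misread demonstration before the robot acts.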
“For robots to perform useful tasks in real-world settings, it must be easy to communicate the task to the robot,” the researchers said in their report. “This includes both the desired result and any hints as to the best means to achieve that result.”
“With demonstrations, a user can communicate a task to the robot and provide clues as to how to best perform the task,” they said.
This week, the Nvidia researchers will present their research paper and work at the International Conference on Robotics and Automation in Brisbane, Australia.
The team plans to broaden the range of tasks the robots can learn, along with the vocabulary needed to describe those tasks.