Structured English brings robots closer to everyday users
By Anne Ju
Move over, Jetsons. A humanoid robot named Mae is traipsing around Cornell's Autonomous Systems Lab, guided by plain-English instructions and sometimes even appearing to get frustrated.
Mae understands and executes English commands, thanks to algorithms and a software toolkit called Linear Temporal Logic Mission Planning (LTLMoP) being developed in the lab of Hadas Kress-Gazit, assistant professor of mechanical and aerospace engineering.
According to Kress-Gazit, the future of robotics is in the ability of robots to easily understand everyday users and to act reliably in different situations.
"The big picture is that we want to have anybody tell the robot what to do," explained Kress-Gazit, who studies how to create provably correct, high-level behaviors for robots. "You don't want to have a programmer who's been doing the job forever to have to write the code for every single behavior, as is currently done in the field. You want to take what someone said and automatically generate the code for the robot to successfully accomplish its task."
The LTLMoP toolkit combines logic, language and control algorithms. The group has demonstrated the algorithms by having Mae, a 2-foot NAO humanoid robot made by Aldebaran Robotics, simulate looking for missing items in a grocery store while avoiding spills in the aisles. Depending on what she finds, Mae acts according to the specifications she was given.
The "store" is located in the Rhodes Hall Autonomous Systems Lab. Mae knows how to react in certain situations -- for example, if a "missing item" is encountered, she alerts a manager. If she sees a "spill," she'll avoid the area.
Traditionally, a controller for these relatively complex tasks requires programming the robot to react to every conceivable state it may find itself in. That is the tedious and error-prone nature of robotics today, the researchers say: there is no guarantee that the code accounts for every situation or that it will work, and no guarantee that the requested behavior is even possible.
The Cornell researchers are also looking at how to give the user an explanation when, for whatever reason, a task cannot be done. That kind of feedback from the robot does not exist in robotics today, Kress-Gazit says.
In LTLMoP, a high-level specification can be written in structured English. For example, Mae is told to visit all the corners of the "store" and to look side to side while walking through the aisles. The commands can be written concisely because the robot understands the store broken up into "regions," prepositional statements such as "between," and conditional statements like "if … then."
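In the linear temporal logic that gives the toolkit its name, instructions like these become formulas over logical propositions. As a rough, hypothetical sketch (not LTLMoP's actual output), with corner_1 through corner_4 and spill_region standing in for places in the "store":

$$\Box\Diamond\,\mathit{corner}_1 \;\wedge\; \Box\Diamond\,\mathit{corner}_2 \;\wedge\; \Box\Diamond\,\mathit{corner}_3 \;\wedge\; \Box\Diamond\,\mathit{corner}_4 \;\wedge\; \Box\big(\mathit{spill} \rightarrow \neg\,\mathit{spill\_region}\big)$$

Here $\Box$ ("always") and $\Diamond$ ("eventually") are the standard temporal operators: the first four conjuncts say the robot keeps returning to each corner, and the last says that whenever a spill is sensed, the robot is not in the spill region.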
"Instead of giving it a list of things to do in order, you give it a specification of the sort of behavior it should exhibit at all times," said graduate student Cameron Finucane, who works on the LTLMoP platform.
Kress-Gazit's research is supported by the National Science Foundation CAREER program and a Department of Defense Multidisciplinary University Research Initiative.