homer takes 1st place
Yay!!! We won the RoboCup German Open in the @Home track again, this time with a PAL Robotics TIAGo robot and Lisa throughout all the challenges. We are very happy about this after winning it for the first time last year. After little sleep and a lot of work, we are delighted with our result. Raphael in particular is very happy with the team's performance: out of the blue, he stepped in to handle the complete organization of this year's @Home track and could not invest much work into the team, but the team showed outstanding achievements under the circumstances.
Here are all the scores:
| Team | Stage 1+2 (Normalized, 50%) | Internal Jury (25%) | External Jury (25%) | Final Score | Final Rank |
| --- | --- | --- | --- | --- | --- |

| Team | Speech And Person | Poster | GPSR | Help Me Carry | Storing Groceries | Sum |
| --- | --- | --- | --- | --- | --- | --- |

| Team | Stage 1 | Dishwasher / Set Up The Table | EE-GPSR | Open Challenge | Restaurant | Sum |
| --- | --- | --- | --- | --- | --- | --- |
But now, from these forewords to what actually happened in the Finals.
We had a significant lead over the second-placed team, Tech United, so we decided to go all in for the final demonstration and present the possibilities of our recent research results. As in the Open Challenge, we used Lisa to observe a person fulfilling a previously unknown task. Since we had been successful at manipulating the hard-to-grasp dishes, we aimed to introduce a new task, cleaning up the table, but not by programming it line by line. Instead, Lisa should observe Ivanna putting the dishes into the dishwasher rack. We therefore introduced a new action that can be detected purely by analyzing the camera information, supported by knowledge of the environment, including its objects. An action plan should be generated from Lisa's observation, and to go even further, we planned to transmit that action plan to TIAGo, which is able to perform the exact same action set as Lisa.

Note that the imitation is not done at the level of imitating human trajectories in combination with the object observations; this could be potential future research. We imitate the observed behaviour based on a set of actions that the robots are already able to execute for other tasks, such as the General Purpose Service Robot task. A large action set is required for this, which we plan to reuse for creating new actions as combinations of smaller ones. The currently supported actions for recognition include primitives such as Pick and Place.
From these rather rudimentary actions you can already create a huge variety of new ones. For example, as we planned it, Clean the table could translate to Pick(fork), Place(fork, dishwasher_rack). This description is rather simplified: in practice there are additional parameters that can be gathered from the semantic knowledge base or by asking the human. In the example above, the location of the dishwasher rack could be unknown. In this case the robot should either ask a person where it is, search for it, or retrieve this information from the semantic knowledge base.
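To make this concrete, here is a minimal sketch of how a high-level command could be translated into a plan of primitive actions, with a missing parameter resolved from a semantic knowledge base or, as a fallback, by asking a person. All names, the toy knowledge base, and the `ask_human` callback are assumptions for illustration, not our actual implementation:

```python
# Hypothetical sketch: compose "Clean the table" from Pick/Place primitives,
# resolving unknown parameters from a knowledge base or by asking a human.

from dataclasses import dataclass

@dataclass
class Action:
    name: str   # a primitive the robot already supports, e.g. "Pick"
    args: dict  # resolved parameters, e.g. {"object": "fork"}

# Toy semantic knowledge base: known locations of objects.
KNOWLEDGE_BASE = {"dishwasher_rack": "kitchen_counter"}

def resolve_location(obj, ask_human):
    """Look the location up in the knowledge base, else ask a person."""
    if obj in KNOWLEDGE_BASE:
        return KNOWLEDGE_BASE[obj]
    return ask_human(f"Where is the {obj}?")

def clean_the_table(objects_on_table, ask_human):
    """Translate 'Clean the table' into a Pick/Place action plan."""
    target = "dishwasher_rack"
    target_location = resolve_location(target, ask_human)
    plan = []
    for obj in objects_on_table:
        plan.append(Action("Pick", {"object": obj}))
        plan.append(Action("Place", {"object": obj,
                                     "target": target,
                                     "target_location": target_location}))
    return plan

plan = clean_the_table(["fork"], ask_human=lambda question: "unknown")
# yields the plan Pick(fork), Place(fork, dishwasher_rack)
```

The point of the sketch is the separation: the plan only ever references primitives the robot can already execute, while parameter resolution is pluggable.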
We even planned to show that we can state which table to clean up. This would have built on the ideas we presented in last year's finals, where we taught new commands by a textual description given via speech. Once we integrate both approaches, we can use new textual descriptions in combination with the demonstrations that we learn visually.
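One way to picture the integration of both teaching channels is a single registry of taught commands, filled either from a spoken description or from a visually observed demonstration. This is a hedged sketch under assumptions (the function names and data shapes are invented for illustration):

```python
# Hypothetical sketch: one registry for commands, whether they were taught
# by a spoken textual description or learned by observing a demonstration.

taught_commands = {}

def teach_by_speech(name, steps):
    """Store a command defined step by step via a spoken description."""
    taught_commands[name] = list(steps)

def teach_by_demonstration(name, observed_actions):
    """Store a command from an action plan recognized in camera data."""
    taught_commands[name] = list(observed_actions)

def execute(name, run_action):
    """Replay a taught command through the robot's action executor."""
    for action in taught_commands[name]:
        run_action(action)

# A demonstration-taught command is executed like a speech-taught one.
teach_by_demonstration("clean the table",
                       [("Pick", "fork"),
                        ("Place", "fork", "dishwasher_rack")])
log = []
execute("clean the table", log.append)
```

Because both channels produce the same plan representation, execution does not need to know how a command was taught.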
Unfortunately we had some issues in the final demonstration, because Lisa recognized too many actions :). But we are working to prove that it can work at the RoboCup world championship in Montreal in June.
We want to give a big thanks to PAL Robotics for sponsoring us with a TIAGo robot for a year. Without it we would not have been able to clean up the table and put the dishes into the dishwasher tray!