homer takes 1st place

Yay!!! We won the RoboCup German Open in the @Home track again, this time with a PAL Robotics TIAGo robot and Lisa throughout all the challenges. We are very happy about that after winning it for the first time last year. After little sleep and a lot of work, we are now very pleased with our result. Raphael is especially happy with the team's performance: all of a sudden he had to step in to organize this year's entire @Home track and could not invest much work into the team, but the team showed outstanding achievements under the circumstances.

Group Picture with Nick Theisen, Tobias Evers, Niklas Yann Wettengel, Anatoli Eckert, Ivanna Mykhalshychna, Thies Möhlenhof, Lukas Debald, Raphael Memmesheimer

Here are all the scores:

Final Scores

Team          Stage 1+2 (Normalized, 50%)   Internal Jury (25%)   External Jury (25%)   Final Score   Final Rank
homer         1.00                          0.97                  0.90                  0.97          1
Tech United   0.46                          0.86                  0.70                  0.62          2
Golem         0.39                          1.00                  0.63                  0.60          3
ToBi          0.36                          0.90                  0.66                  0.57          4

Stage 1

Team          Speech And Person   Poster   GPSR   Help Me Carry   Storing Groceries   Sum
b-it bots     0                   29       -      -               0                   29
homer         140                 33       103    13              0                   289
ToBi          78                  36       16     30              0                   160
SCC           20                  31       -      -               0                   51
Tech United   98                  33       8      28              4                   171
Golem         75                  33       20     28              3                   159
Liu           -                   33       -      -               -                   33
IRSA          -                   -        -      -               -                   -

Stage 2

Team          Stage 1   Dishwasher / Set Up The Table   EE-GPSR   Open Challenge   Restaurant   Sum
b-it bots     29        -                               -         -                -            29
homer         289       130                             20        130              85           634
ToBi          160       -                               0         67               0            227
SCC           51        -                               -         48               10           109
Tech United   171       10                              0         63               50           294
Golem         159       10                              0         70               10           249
Liu           33        -                               -         -                -            33
IRSA          -         -                               -         -                -            -

But now, from those forewords to what actually happened in the finals.
We had a significant lead over the second-placed team Tech United, so we decided to go all in for the final demonstration and present the possibilities of our recent research results. As in the Open Challenge, we used Lisa to observe a person fulfilling a previously unknown task. Since we had been successful at manipulating the hard-to-grasp dishes, we aimed to introduce a new task of cleaning up the table, but not by programming it line by line: Lisa should observe Ivanna putting the dishes into the dishwasher rack. We thus introduced new possible actions that can be detected purely from an analysis of the camera information, supported by knowledge of the environment including its objects. An action plan should be generated from Lisa's observation, and to go even further we planned to transmit the action plan to TIAGo, which is able to perform the exact same action set as Lisa. Note that the imitation does not happen on the level of human trajectory imitation combined with object observations; that could be potential future research. Instead, we imitate the observed behaviour based on a set of actions that the robots are already able to execute for other tasks such as the General Purpose Service Robot task. A large action set is required for that, which we plan to reuse for creating new actions as combinations of smaller ones. The currently supported actions for recognition are Pick(<object>), Place(<object>, <target_object>) and Pour(<source_object>, <target_object>).

From these rather rudimentary actions you can already create a huge variety of new ones. For example, the Clean the table task we planned could translate to Pick(fork), Place(fork, dishwasher_rack). This description is rather simplified; in practice there are further parameters that can be gathered from the semantic knowledge base or by asking a human. In the above example, for instance, the location of the dishwasher rack could be unknown. In that case the robot should either ask a person where it is, search for it, or retrieve this information from the semantic knowledge base.
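To make this concrete, here is a minimal sketch of how observed primitives could be composed into an action plan with unresolved parameters flagged for later resolution. The class and function names are purely illustrative assumptions for this post, not the actual homer/Lisa codebase:

```python
# Hypothetical sketch: composing recognized primitive actions into a plan.
# Names (Action, plan_from_observation) are illustrative, not homer's code.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    name: str                     # "pick", "place" or "pour"
    obj: str                      # object being manipulated
    target: Optional[str] = None  # target object, if the action has one

def plan_from_observation(observed, known_locations):
    """Turn a sequence of recognized actions into an executable plan.

    Steps whose target location is not yet known are flagged, so the
    robot can ask a person, search, or query the semantic knowledge base
    before executing them.
    """
    plan = []
    for act in observed:
        step = {"action": act.name, "object": act.obj}
        if act.target is not None:
            step["target"] = act.target
            step["target_known"] = act.target in known_locations
        plan.append(step)
    return plan

# "Clean the table" as observed from a single demonstration:
observed = [Action("pick", "fork"),
            Action("place", "fork", "dishwasher_rack")]
plan = plan_from_observation(observed, known_locations={"table"})
# The place step is flagged: the dishwasher rack's location is unknown,
# so the robot must resolve it first (ask, search, or knowledge base).
```

The point of the flag is exactly the fallback described above: an unresolved parameter does not abort the plan, it just triggers a resolution strategy before that step runs.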

We even planned to show that we can state which table to clean up. This would have built on the ideas we presented in last year's finals, where we taught new commands through textual descriptions given by speech. Once we integrate both approaches, we can use new descriptions in combination with the demonstrations that we learn visually.

Group Picture during the prize giving

Unfortunately we had some issues in the final demonstration, because Lisa recognized too many actions :). But we will try to prove that it can work at the RoboCup world championship in Montreal in June.

We want to give a big thanks to PAL Robotics for sponsoring us with a TIAGo robot for a year. Without it we would not have been able to clean up the table and put the dishes into the dishwasher tray!