homer takes 1st place

Yay!!! We won the RoboCup German Open in the @Home track again, this time with a PAL Robotics TIAGo robot and Lisa throughout all the challenges. We are very happy about that after winning it last year for the first time. After little sleep and a lot of work, we are delighted with our result. Raphael in particular is very happy with the team's performance: out of the blue, he had stepped in to take over the complete organization of this year's @Home track and could not invest much work into the team, but the team showed outstanding achievements under the circumstances.

Group Picture with Nick Theisen, Tobias Evers, Niklas Yann Wettengel, Anatoli Eckert, Ivanna Mykhalshychna, Thies Möhlenhof, Lukas Debald, Raphael Memmesheimer.

Here are all the scores:

Final Scores

| Team | Stage 1+2 (normalized, 50%) | Internal Jury (25%) | External Jury (25%) | Final Score | Final Rank |
| --- | --- | --- | --- | --- | --- |
| homer | 1.00 | 0.97 | 0.90 | 0.97 | 1 |
| Tech United | 0.46 | 0.86 | 0.70 | 0.62 | 2 |
| Golem | 0.39 | 1.00 | 0.63 | 0.60 | 3 |
| ToBi | 0.36 | 0.90 | 0.66 | 0.57 | 4 |

Stage 1

| Team | Speech And Person | Poster | GPSR | Help Me Carry | Storing Groceries | Sum |
| --- | --- | --- | --- | --- | --- | --- |
| b-it bots | 0 | 29 | - | - | 0 | 29 |
| homer | 140 | 33 | 103 | 13 | 0 | 289 |
| ToBi | 78 | 36 | 16 | 30 | 0 | 160 |
| SCC | 20 | 31 | - | - | 0 | 51 |
| Tech United | 98 | 33 | 8 | 28 | 4 | 171 |
| Golem | 75 | 33 | 20 | 28 | 3 | 159 |
| Liu | - | 33 | - | - | - | 33 |
| IRSA | - | - | - | - | - | - |

Stage 2

| Team | Stage 1 | Dishwasher / Set Up The Table | EE-GPSR | Open Challenge | Restaurant | Sum |
| --- | --- | --- | --- | --- | --- | --- |
| b-it bots | 29 | - | - | - | - | 29 |
| homer | 289 | 130 | 20 | 130 | 85 | 634 |
| ToBi | 160 | - | 0 | 67 | 0 | 227 |
| SCC | 51 | - | - | 48 | 10 | 109 |
| Tech United | 171 | 10 | 0 | 63 | 50 | 294 |
| Golem | 159 | 10 | 0 | 70 | 10 | 249 |
| Liu | 33 | - | - | - | - | 33 |
| IRSA | - | - | - | - | - | - |

So, but now from those forewords to what actually happened in the finals. We had a significant lead over the second-placed team, Tech United, so we decided to go all in for the final demonstration and present the possibilities of our recent research results. As in the Open Challenge, we used Lisa to observe a person fulfilling a previously unknown task. Since we had been successful at manipulating the hard-to-grasp dishes, we aimed to introduce a new task of cleaning up the table, but not by programming it line by line. Lisa should observe Ivanna putting the dishes into the dishwasher rack. So we introduced a new possible action that could be detected purely by analyzing the camera information, supported by knowledge of the environment, including objects. An action plan should be generated from Lisa's observation, and to go even further we planned to transmit the action plan to TIAGo, which is able to perform the exact same action set as Lisa. Note that the imitation is not done at the level of imitating human trajectories in combination with object observations; this could be potential future research. We imitate the observed behaviour based on a set of actions that the robots are already able to execute for other tasks, like the General Purpose Service Robot task. A large action set is required for that, which we plan to reuse for creating new actions as combinations of smaller ones. The currently supported actions for recognition are Pick(<object>), Place(<object>, <target_object>) and Pour(<source_object>, <target_object>).

From these rather rudimentary actions you can already create a huge variety of new ones. For example, the Clean the table task we planned could translate to Pick(fork), Place(fork, dishwasher_rack). This description is rather simplified; in practice there are other parameters that can be gathered from the semantic knowledge base or by asking the human. In the above example, the location of the dishwasher rack could potentially be unknown. In that case the robot should either ask a person where it is, search for it, or retrieve the information from a semantic knowledge base.
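To make the idea concrete, here is a minimal sketch of composing the recognized primitives into an executable plan and resolving missing parameters. All names (`Action`, `plan_clean_table`, the dict-based knowledge base) are our own illustration, not the actual homer software:

```python
# Hypothetical sketch: compose recognized primitive actions (pick, place,
# pour) into a plan, resolving unknown target locations from a semantic
# knowledge base (here just a dict) or falling back to asking a person.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    name: str                      # "pick", "place" or "pour"
    obj: str                       # the manipulated object
    target: Optional[str] = None   # target object, if the action needs one

def plan_clean_table(observed, known_locations):
    """Turn a sequence of recognized actions into an executable plan.

    If a place target is not in the knowledge base, insert a step where
    the robot asks for (or searches for) its location first.
    """
    plan = []
    for act in observed:
        if act.name == "place" and act.target not in known_locations:
            plan.append(Action("ask_for_location", act.target))
        plan.append(act)
    return plan

# Observed demonstration: Ivanna picks a fork and places it in the rack.
observed = [Action("pick", "fork"), Action("place", "fork", "dishwasher_rack")]
plan = plan_clean_table(observed, known_locations={"dishwasher_rack": (1.2, 0.4)})
print([a.name for a in plan])  # → ['pick', 'place']
```

With an empty knowledge base the same observation yields an extra `ask_for_location` step before the place action, mirroring the fallback described above.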

We even planned to show that we can state which table to clean up. This would have built on the ideas we presented in last year's finals, where we taught new commands through a textual description given via speech. Once we integrate both approaches, we can use new descriptions in combination with the demonstrations that we learn visually.

Group Picture during the prize giving.

Unfortunately we had some issues in the final demonstration, because Lisa recognized too many actions :). But we are going to prove that it can work at the RoboCup world championship in Montreal in June.

We want to give a big thanks to PAL Robotics for sponsoring us with a TIAGo robot for a year. Without it we would not have been able to clean up the table and put the dishes into the dishwasher tray!

Getting closer to the final

First, a short update about the competitions held yesterday. Help Me Carry, where the robot had to follow an operator out of the arena, take their grocery bag and bring it back into the house, posed some difficulties for Lisa, as she lost track of the operator and instead wanted to follow a person in the audience. I guess it was love at first sight :D. This is a very complicated task, as Lisa has to detect a person and track them as they move, even when they walk into a crowd of people. TIAGo also had some problems while storing groceries in a shelf, but made up for it in the evening when he had to put away dishes into a dishwasher, where he did such a great job that surely everyone in the audience wished they had a helper like this at home.

TIAGo at the Dishwasher Challenge.
All the dishes placed in the sink.

Today was also an exciting day for our team. We arrived early in the morning to complete some final tests before the EE-GPSR test started. This is an enhanced and more complex version of the GPSR challenge, where commands of varying difficulty are read out loud by the jury. Additionally, the commands can be defective or some information can be missing. The robot recognized and executed 2 out of 3 commands and did its best. We are very proud, as none of the other teams could recognize a command. The jury also got into a slightly embarrassing situation when they asked one “robust” man from the audience to be a supernumerary, only to have to read out the command “Go into the living room and find the fatter person” right afterwards.

Testing the Open Challenge.

Shortly after that, the Open Challenge took place, in which each team can show the audience whatever they want. We decided to show Lisa's new action recognition skills, which allow her to recognize actions just by observing a human performing them. As this is relatively new, she can only detect a few actions so far. Lisa was shown how to pick and place a cup and how to water a plant. Afterwards, Lisa had to say which actions involving which objects she had recognized and then imitate them. Even though it did not go exactly as planned, Lisa showed a great performance and got the most points from the jury.

In the afternoon most people were hungry, which was fitting, because the next test was the Restaurant task, where Lisa had to detect people in the restaurant, take their orders and bring them food and drinks. Unfortunately the food could not be eaten, as it was still needed for other competitions. Lisa did a great job as a temporary waiter and served food to the hungry waiting guests.

Ivanna programming Lisa in a very relaxing way.

After today's competitions we spent most of our time planning, testing, programming and fine-tuning the presentations for tomorrow's final, in which we will hopefully perform as well as we did the rest of the week. Wish us luck ;)

Leading after day 1

The schedule for RoboCup@Home at this German Open is tight. Because the number of tasks and teams participating in this event is so high, the organizers decided to benchmark the first tasks on the second preparation day, meaning there was even less time to set up the robots and adapt to the environment and objects.

TIAGo at GPSR (General Purpose Service Robot).

Everything worked out as expected. We directly passed the robot inspection and afterwards achieved the highest score in the Speech and Person Recognition task. In this task the robot has to stand in front of a crowd with its back to them. After the start, the robot turns around and has to analyze the crowd: count the people and determine their gender, pose, dress color and so on. This is tricky because RoboCup is an open event and the arena is open to visitors who are also watching the task, so the robot also needs to decide whether a person is inside the apartment rather than outside. After stating the crowd size, an operator faces the robot and asks questions from a set of trivia questions as well as questions about the environment and the crowd; this tests speech recognition over a huge set of possible questions. After five answered questions, the crowd forms around the robot and keeps asking questions, but now from all sides. The robots are then supposed to still answer the questions and also face the correct person via sound source localization.
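One way to handle the "inside the apartment or not" decision is a simple geometric filter over person detections. This is a hedged sketch under our own assumptions (detections arrive as 2D map coordinates, the arena outline is a known polygon), not the actual task code:

```python
# Hypothetical sketch: filter detected persons by whether their map
# position lies inside the arena polygon, using a ray-casting test.
# Spectators standing outside the apartment are then ignored when
# counting the crowd.

def inside_arena(point, polygon):
    """Ray-casting point-in-polygon test for a 2D point."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the edge cross the horizontal ray to the right of the point?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

arena = [(0, 0), (6, 0), (6, 4), (0, 4)]           # rectangular apartment outline
detections = [(2.0, 1.5), (7.5, 2.0), (3.0, 3.9)]  # person positions in the map frame
crowd = [p for p in detections if inside_arena(p, arena)]
print(len(crowd))  # → 2, the spectator at (7.5, 2.0) is outside
```

In practice the arena outline would come from the map the robot already uses for navigation, so no extra setup is needed beyond the polygon itself.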

Programming Lisa...

As the Golem team from Mexico still had not received their robot because of customs issues, we offered them the PAL Robotics TIAGo robot for this task. It was a strange idea, but it worked out really well for them. They attached the Kinect 2 and a microphone, and we wrote a small wrapper to make the robot turn and drive. So they were able to integrate this task on the robot under time pressure, and it worked out very nicely. In the end they understood the questions and analyzed the crowd quite well. What an experience! There were many happy faces afterwards.

Team meeting.

In the evening there was the poster session, where the teams present their newest research and results; afterwards, the posters are discussed. We showed that we are participating with TIAGo for the first time and introduced our new approaches to imitation learning. Usually the poster presentation scores are quite similar for all teams, so we managed to stay first at the end of the first day. Today there is a lot on the schedule: first GPSR, Help Me Carry and Storing Groceries, and later the Dishwasher test. We are curious about this day.


threshold problems and late-night labeling

Preparations for the 8 challenges are still under way, but this afternoon at 4 p.m. the first challenge, “Speech and Person Recognition”, starts.

This challenge, as the name says, is about recognizing people as well as speech. More precisely, the robot (we use Lisa for this one) needs to look at a crowd and then answer questions about it, e.g. the number of females and males, of sitting persons, or of persons waving to the robot.

Team at work.

At the beginning our robots had some problems getting over the threshold at the entrance and exit of the test arena where the challenges take place. To help with this we built little ramps; still, we hope the robots won't get stuck doing a burnout while moving in the arena.

>Do not feed!<

We need to label a lot of pictures of objects for one of our neural networks, so it can recognize objects and thus make it possible to grab, sort and transport them. This labeling work is simple but time-consuming, so last night we all sat together in our apartment and started labeling the pictures.

Labeling, for those not familiar with the term, is nothing more than marking the region in a picture where an object is. With this information as input, neural networks can learn the objects and recognize them in new pictures.
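As a toy illustration, a labeled sample is just an image name plus bounding boxes with class names. The field names and the little flattening helper below are our own invention, not the format of any specific labeling tool:

```python
# Toy illustration of labeled object data: each sample pairs an image
# with axis-aligned bounding boxes (x, y, width, height) and class names.
# Field names here are illustrative, not a specific tool's format.

labels = [
    {
        "image": "shelf_0001.png",
        "boxes": [
            {"class": "cup",   "x": 34,  "y": 120, "w": 48, "h": 60},
            {"class": "plate", "x": 210, "y": 98,  "w": 96, "h": 40},
        ],
    },
]

def to_training_pairs(labels):
    """Flatten annotations into (image, class, box) tuples for a detector."""
    pairs = []
    for sample in labels:
        for box in sample["boxes"]:
            pairs.append((sample["image"], box["class"],
                          (box["x"], box["y"], box["w"], box["h"])))
    return pairs

print(len(to_training_pairs(labels)))  # → 2
```

A night of labeling produces thousands of such boxes, which is exactly the input a detection network needs for training.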

Team homer's elixir of life.