RoCKIn Day 2 and Day 3

After we had a base that was able to map, localize, and drive autonomously, we had to add the ears for speech understanding and the eyes for object perception and face detection. We brainstormed about what we could use to mount the Asus Xtion (which, luckily, is powered via USB only). While walking through the RoCKIn tent we asked some @Work guys whether they had a spare part we could use for this. Sven from the b-it bots team gave us a square. With that we were able to mount a rail that holds an old version of our pan-tilt unit setup, which was originally designed for use with a Kinect. After mounting it we added some tape - et voilà: a basic mount for the Asus Xtion and the microphone was built.

Because we didn’t have a PTU on the new robot, we had to change some parts of the code. For instance, while tracking people, the PTU used to keep pointing at the tracked person; we changed this code so that the youBot base turns toward the tracked person instead (a rough sketch of the idea is shown below). The nice thing about the new setup was that we could rely on components that don’t need an external power supply, which made things a bit easier. So we were able to use the new youBot setup completely on the second competition day and could compete in all tasks from then on. Some minor navigation issues were fixed and we had a competition-ready robot. Only a manipulator was missing, but as we had more important problems we didn’t pursue porting the Katana arm to our setup as well. We preferred to invest some time in getting voice output on the robot, which sounds like an easy problem to solve. But we quickly realized that it gets hard when you don’t have components like battery-powered speakers and have to use the internal microphone plug, which blocks the audio output as soon as a microphone is plugged in.
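To give an idea of what that change looks like, here is a minimal sketch (not our actual code): it assumes the people tracker publishes the person's position as a geometry_msgs/PointStamped already expressed in the robot's base frame, and that the base listens on /cmd_vel. Topic names, the gain, and the velocity limit are all assumptions.

```python
#!/usr/bin/env python
# Sketch: rotate the youBot base toward a tracked person instead of panning a PTU.
# Topic names and the tracker message type are assumptions, not the real interface.
import math

import rospy
from geometry_msgs.msg import PointStamped, Twist

GAIN = 1.0          # proportional gain on the heading error
MAX_YAW_RATE = 0.5  # rad/s, keep the base rotation gentle


class TurnToPerson(object):
    def __init__(self):
        self.cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("/people_tracker/person", PointStamped, self.on_person)

    def on_person(self, msg):
        # Heading error to the person, assuming the point is in the base frame.
        yaw_error = math.atan2(msg.point.y, msg.point.x)
        cmd = Twist()
        cmd.angular.z = max(-MAX_YAW_RATE, min(MAX_YAW_RATE, GAIN * yaw_error))
        self.cmd_pub.publish(cmd)


if __name__ == "__main__":
    rospy.init_node("turn_to_person")
    TurnToPerson()
    rospy.spin()
```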

So we tried to use a second notebook that would act as the face, using the ability of ROS to connect to an external ROS core. But this turned out to block some messages we needed for our state machines and also seemed very unstable due to network issues. Then we tried to stream the audio via RDP to the other notebook, which worked but caused other problems. In the end we just wrote a node that sends a command via ssh to the external notebook, which then publishes a message that is processed by its own ROS core and a running instance of our robot_face. A very hackish solution, but it worked out fine for us (a sketch of the idea is below). Once you have had natural voice output on your robot, you will definitely miss it afterwards when debugging state machines.
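Roughly, the hack looks like this. The ssh target, the topic names, and the ROS setup path on the second notebook are made up for illustration; the point is just that the text is handed over via ssh and published on the notebook's own ROS master, where robot_face picks it up.

```python
#!/usr/bin/env python
# Sketch of the ssh-based speech bridge (illustrative names only): a node on the
# robot forwards text to a second notebook, where a separate ROS core and the
# robot_face instance handle the actual voice output.
import subprocess

import rospy
from std_msgs.msg import String

FACE_NOTEBOOK = "user@face-notebook"  # hypothetical ssh target


def on_say(msg):
    # Publish the text on the notebook's own ROS master via rostopic;
    # robot_face running there subscribes to the (assumed) text topic.
    text = msg.data.replace('"', "")
    remote_cmd = (
        "source /opt/ros/indigo/setup.bash && "
        "rostopic pub -1 /robot_face/text_out std_msgs/String "
        "'data: \"%s\"'" % text
    )
    subprocess.call(["ssh", FACE_NOTEBOOK, remote_cmd])


if __name__ == "__main__":
    rospy.init_node("speech_bridge")
    rospy.Subscriber("/speech/say", String, on_say)
    rospy.spin()
```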

All in all, we could use all of this to test the robot on the second competition day. With some improvements in the code we started the third and last RoCKIn day, and (almost) everything went really smoothly. Almost all of our points were scored on that last day, but it was still enough to win the competition! Thanks again to team b-it bots for the robot platform - it would not have been possible without you!

[Photo: the trophy]