IROS TIAGo Manipulation Challenge
Yesterday at 17:00 we had our mobile manipulation demonstration with TIAGo at the IROS conference. Six teams from all over the world qualified for the finals.
The preparation was quite intense, as there was just half a day of setup time on site and we had to use a robot that we had not used or configured before. But this is also a way to challenge participants, and it is actually quite convenient for us to attend challenges where we don't have to ship our own robot.
We intended to show our approach to imitation learning. But due to the issues with configuring the robot on-site, we could not concentrate on preparing that and decided instead to run an older demonstration in which TIAGo cleans a table, moving hard-to-grasp objects like forks, knives and spoons from the table into a bowl.
During the preparation we faced a lot of issues caused by time differences between the robot and the notebook we attach for computation. Even when the differences are quite small, they can cause behaviours like corrupted maps or the robot stopping in front of an obstacle later than it is supposed to.
We overcame this issue remotely with Niklas via telephone. Another issue we did not figure out initially was that a mapping component running on the robot itself was fighting with ours. We first suspected a wrong transformation somewhere in the robot's laser position. The fix was simply to disable the mapping component on the robot and rely on our custom solution. These are small issues when you have more time, but for a scheduled demo with just one day of preparation they can be stressful. That is why we spontaneously decided to show another demo that did not need as much preparation on our side. Up to the end, the time offset was still fluctuating quite a bit, but during the demo it stayed constant and we did not face any further problems.
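In hindsight, a simple pre-demo sanity check on the clock offset would have saved us time. The sketch below is hypothetical (not part of our actual stack, and the tolerance value is an assumption): it flags sensor messages whose timestamps drift too far from the local clock, which is the kind of drift that silently corrupts maps and delays obstacle stops.

```python
import time

# Maximum tolerated clock offset between robot and notebook, in seconds.
# The value 0.2 s is an illustrative assumption, not a measured threshold.
MAX_OFFSET = 0.2

def clock_offset(msg_stamp: float, local_time: float) -> float:
    """Signed offset between a message timestamp and the local clock."""
    return msg_stamp - local_time

def stamp_ok(msg_stamp: float, local_time: float, tol: float = MAX_OFFSET) -> bool:
    """True if a sensor message timestamp is within tolerance of the local clock."""
    return abs(clock_offset(msg_stamp, local_time)) <= tol

# Example: a laser scan stamped half a second in the "future" relative to
# the notebook would be mis-placed or dropped by the mapping pipeline.
now = time.time()
print(stamp_ok(now + 0.05, now))  # small offset: acceptable
print(stamp_ok(now + 0.5, now))   # large offset: trouble
```

In practice, the robust fix is to synchronize the machines' clocks (e.g. with NTP/chrony) rather than to tolerate the offset.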
During the presentation TIAGo did a good job. We also got remote help running this demo from Tobias back in Koblenz. Even though the robot was not calibrated, we managed to put some objects into the bowl, and in cases where the robot did not grip correctly it tried again, compensating for the offset between the end-effector and the RGB-D camera. There was also closed-loop sensing to decide when to close the gripper, based on feedback from touching the table. This is highly relevant for hard-to-grasp objects like cutlery: in the depth image it is hard to estimate an accurate position for such small objects, because there is usually little or no depth difference between the object and the table.
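The closed-loop idea can be sketched roughly as follows. This is a simplified, hypothetical illustration (all names and parameters are made up, and the real system works with the robot's own force/touch sensing): instead of trusting a depth-derived height for flat cutlery, the gripper descends until contact with the table is felt, and only then closes.

```python
from typing import Callable

def grasp_with_touch(
    touch_detected: Callable[[float], bool],
    start_z: float = 0.30,   # starting height above the table, in metres (assumed)
    min_z: float = 0.0,      # safety limit below which we abort
    step: float = 0.005,     # descend in 5 mm increments (assumed)
) -> float:
    """Descend until the touch signal fires, then return the grasp height.

    `touch_detected(z)` stands in for the robot's touch/force feedback;
    in a real system this would come from the gripper or arm sensors.
    Raises RuntimeError if no contact is felt before the safety limit.
    """
    z = start_z
    while z > min_z:
        if touch_detected(z):
            return z  # contact with the table: close the gripper here
        z -= step
    raise RuntimeError("no table contact detected before safety limit")

# Simulated sensor for illustration: the table surface sits at z = 0.10 m.
table_height = 0.10
height = grasp_with_touch(lambda z: z <= table_height)
print(round(height, 3))
```

The point of the loop is that the grasp height comes from contact, not from the depth image, which is exactly what makes flat objects like cutlery graspable.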
Once we are back, we will focus on the World Robot Summit in Tokyo.