Why making robots is still hard

Making robots is no easy task. If you talk to roboticists, they will tell you that it took years before the last robot they built or programmed was any good at performing a specific task. And although you may see videos of impressive robot feats, the reality is often more sobering. Remember the video of robots falling at the DARPA Robotics Challenge?

So why is it difficult to make robots? Here's a breakdown of why robots still require years of research and development before we can expect to see them in our everyday lives.

Power

Most robots are required to operate without being plugged into a power socket. This means they need to carry their own energy source, be it a battery pack or a gas tank. Small drones can typically operate for less than an hour, which is also the battery life of most advanced humanoids, such as ATLAS from Google's Boston Dynamics. So by the time the robot has walked out the door and taken a few steps, it's time for a recharge.

Progress is being made, and the push for batteries that let our laptops and cellphones run for days on end is also increasing robot run time. Take the same ATLAS robot, which was tethered just a year ago and now carries its own battery pack. The main challenge is that robot motion is power hungry. Most drones spend more energy driving their propellers than on computation, sensing, and communication combined. Larger batteries store more energy, but they also make the robot heavier, which in turn requires more energy to move it. The reality is that robots are often docked to a charging station.
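To see why bigger batteries are not a simple fix, here is a rough back-of-the-envelope sketch in Python. It uses the ideal momentum-theory estimate of hover power for a rotorcraft; the frame mass, rotor area, and 150 Wh/kg battery pack are illustrative assumptions, not data for any particular drone.

```python
# Rough sketch of the battery-mass trade-off for a hovering drone, using
# the ideal momentum-theory hover power P = sqrt((m*g)^3 / (2*rho*A)).
# All numbers are illustrative assumptions, not measured data.
import math

G = 9.81                      # gravity, m/s^2
RHO = 1.225                   # air density, kg/m^3
ROTOR_AREA = 0.2              # total rotor disk area, m^2 (assumed)
FRAME_MASS = 1.0              # drone mass without battery, kg (assumed)
SPECIFIC_ENERGY = 150 * 3600  # ~150 Wh/kg pack, in joules per kg

def hover_minutes(battery_kg):
    """Estimated hover endurance for a given battery mass."""
    total_mass = FRAME_MASS + battery_kg
    power = math.sqrt((total_mass * G) ** 3 / (2 * RHO * ROTOR_AREA))
    energy = battery_kg * SPECIFIC_ENERGY
    return energy / power / 60

for kg in (0.5, 1.0, 2.0, 4.0):
    print(f"{kg:.1f} kg battery -> {hover_minutes(kg):.0f} min of hover")
```

With these assumptions, endurance rises from roughly 56 minutes at 0.5 kg of battery to a peak near 2 kg, then falls again: past a point, the extra weight costs more power than the extra capacity provides.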

Beyond power, efficiency is also a real challenge. For example, human muscles are capable of impressive strength, yet many robot manipulators don't have the strength to carry heavy loads.

Biological muscles are still an order of magnitude lighter and smaller for generating the same force as robot motors. – Herman Bruyninckx


Sensing

Have you ever wondered why most demos show robots manipulating objects with bright colors or QR codes?

Robots still have a hard time recognizing everyday objects. And although machine learning algorithms have proven effective at letting computers label images with sentences such as “black cat on a white chair”, robots also need to know what objects are used for and how to go about interacting with them. A fuchsia shirt, a striped jacket, or a pair of trousers will all look quite different to a laundry-folding robot, and each would require a different sequence of motions. And even though cameras are helpful, image processing is still a burdensome task.

Sensors like the Microsoft Kinect and laser range finders have enabled robots to make 3D maps of their environment. With the resulting point clouds, they can detect obstacles, build maps, and know where they are in them. Inferring the meaning of the scene, however, is a step further. Beyond vision, touch and sound are still seldom used in robotic systems. Fortunately, robots have access to a number of dedicated sensors that are not human-centric and are better suited to specific tasks, including accelerometers, temperature or gas sensors, and GPS.
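To give a flavour of what working with point clouds looks like, here is a minimal sketch using the open-source Open3D library; the file name and thresholds are assumptions. Segmenting out the floor plane is the easy part; saying what the remaining points actually are is the open problem.

```python
# Minimal sketch: separating the floor plane from potential obstacles in
# a depth-sensor point cloud with RANSAC, via the open-source Open3D
# library. "scan.pcd" is a hypothetical file from a Kinect-style sensor.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.pcd")

# Fit the dominant plane (typically the floor) with RANSAC.
plane, inliers = pcd.segment_plane(distance_threshold=0.02,  # 2 cm tolerance
                                   ransac_n=3,
                                   num_iterations=1000)
a, b, c, d = plane
print(f"floor plane: {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0")

floor = pcd.select_by_index(inliers)
obstacles = pcd.select_by_index(inliers, invert=True)
print(f"{len(obstacles.points)} points left as potential obstacles")
```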

Manipulation

Industrial robots are very successful in manipulating specific pre-defined objects in a repetitive manner. Manipulation outside of these constrained environments is one of the greatest challenges in robotics. There is a reason most successful commercial robots for the home environment, including telepresence robots, vacuum cleaners, and personal robots, are not built to pick up objects. Amazon solved this problem in its warehouses by building teams of humans and robots to fulfil orders. Robots move shelves to the workers, who are then responsible for picking objects off the shelves and placing them in boxes. Just last year, Amazon ran a “picking challenge” at ICRA to help move the state of the art forward. The competition was won by Team RBO from Berlin. The RoCKIn competition in Europe also focuses on manipulation in the home and work environments.

Companies such as Shadow Robot are trying to capture, in a robotic hand, the fine motor control that allows us to interact with everyday objects; using these manipulators often requires precise planning. An alternative has been to use proven manipulators from the industrial sector or, increasingly, soft robotic manipulators that conform to the shapes of different objects.
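To see what “precise planning” means even in the simplest case, consider the inverse kinematics of a planar two-link arm: given a fingertip target, solve for the joint angles. The Python sketch below uses assumed link lengths and the standard law-of-cosines solution.

```python
# Sketch: closed-form inverse kinematics for a planar 2-link arm, to show
# that even "put the fingertip at (x, y)" needs explicit geometry.
import math

L1, L2 = 0.30, 0.25  # link lengths in metres (assumed)

def ik_2link(x, y):
    """Return joint angles (radians) placing the fingertip at (x, y)."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if abs(c2) > 1:
        raise ValueError("target out of reach")
    q2 = math.atan2(math.sqrt(1 - c2 * c2), c2)  # elbow-down solution
    q1 = math.atan2(y, x) - math.atan2(L2 * math.sin(q2),
                                       L1 + L2 * math.cos(q2))
    return q1, q2

print(ik_2link(0.4, 0.2))  # reach a point 40 cm out, 20 cm up
```

That is two joints in a plane; a dexterous hand like Shadow Robot's coordinates on the order of twenty joints, in contact with objects whose shape and friction are uncertain.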

Cognition

Current robots typically use well-defined algorithms that allow them to complete specific tasks, for example navigating from point A to point B or moving an object on an assembly line. Designing collaborative robots for SMEs, or robots for the home, will increasingly require them to understand new environments and learn on the job. What seems like a simple task to us can turn into a complex cognitive exercise for a robot. Projects such as the iCub have been making progress in this direction, aiming to reach the cognitive levels of a 2.5-year-old child.
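Navigating from point A to point B is a good example of such a well-defined task. The sketch below runs A* search on a toy occupancy grid (grid and endpoints are assumed); for a real robot, the hard part is not this search but producing a reliable grid from noisy sensors in the first place.

```python
# Sketch of the kind of well-defined algorithm robots excel at: A* search
# for a shortest path on a toy occupancy grid ('#' marks an obstacle).
import heapq

GRID = ["....#...",
        ".##.#.#.",
        "....#.#.",
        ".#....#.",
        "........"]
START, GOAL = (0, 0), (4, 7)

def astar(start, goal):
    rows, cols = len(GRID), len(GRID[0])
    frontier = [(0, start)]               # priority queue of (f-score, cell)
    cost = {start: 0}
    came_from = {start: None}
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            break
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and GRID[nr][nc] != "#":
                new_cost = cost[current] + 1
                if new_cost < cost.get((nr, nc), float("inf")):
                    cost[(nr, nc)] = new_cost
                    came_from[(nr, nc)] = current
                    h = abs(goal[0] - nr) + abs(goal[1] - nc)  # Manhattan heuristic
                    heapq.heappush(frontier, (new_cost + h, (nr, nc)))
    path, node = [], goal                 # walk back from the goal
    while node is not None:
        path.append(node)
        node = came_from[node]
    return path[::-1]

print(astar(START, GOAL))
```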

Deep learning is also providing new avenues. A team at the University of Zurich recently showed drones learning to fly along forest trails using deep neural networks.
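The idea can be sketched in a few lines: a small convolutional network maps each camera frame to one of three steering classes. The PyTorch model below is illustrative only, not the published network.

```python
# Illustrative sketch of image-to-steering classification for trail
# following: the net outputs one of three classes (left, straight, right).
import torch
import torch.nn as nn

class TrailNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classify = nn.Linear(64, 3)  # left / straight / right

    def forward(self, img):
        return self.classify(self.features(img).flatten(1))

net = TrailNet()
frame = torch.randn(1, 3, 101, 101)       # a dummy camera frame
steering = net(frame).argmax(dim=1).item()
print(["turn left", "go straight", "turn right"][steering])
```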

Whatever the learning embedded in the robot, it's important to realise that we are still far from anything that resembles human intelligence or understanding. The forest-trail navigation mostly crunches data from lots of trail images and produces the correct motor commands in response. This is closer to a human learning to balance a pole on the palm of their hand through practice than to developing a real understanding of the laws of physics.

Unstructured environments

The world is a messy place, and for most robots, operating in unstructured environments is difficult. That's why commercial robots have been most successful in factories, on warehouse floors or roads, in the open air, and underwater. On the flip side, very few robots operate autonomously in the home environment, other than robot vacuum cleaners. The Dyson 360 Eye required over 100,000 hours of production time and 16 years of development before the company was convinced it would do a good job navigating a room. Their trials showed that kids would sometimes dance in front of the robot or play with it, attach little cardboard ears in front of its camera, or cover it entirely. Not to mention, every home is unique.

Integration

The trick to understanding integration is to think like a robot. What does it take for a new robot to bring a glass of water to an elderly person in a house? First the robot would need a map of the house, perhaps building it from scratch by navigating through the corridors and rooms. It would then need to understand the person's command to fetch the water, potentially using speech recognition. Next it would use its map to plan a trajectory to the kitchen, avoiding obstacles and constantly updating its estimate of where it is as it goes. Once at the cupboard, the robot would need to open it, locate a transparent glass, and pick it up. It would then turn to the sink, open the tap using fine motor skills, hold the glass under the stream until it is full, and navigate back to the person without spilling, before gently putting the glass on a table. This task requires safe hardware, an impressive sensor suite, and complex algorithms, which currently exist only as stand-alone pieces, if at all. Integrating all these components in a single robot is very difficult, and that's just for one task: fetching a glass of water.
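A deliberately naive sketch makes the point. Written as code, the task is a short pipeline, but every stub below stands in for a research field of its own (all names are hypothetical), and the chain only succeeds if every single stage does.

```python
# The "glass of water" task as a pipeline of hypothetical stubs; each
# one hides years of work, and any failure breaks the whole task.

def build_map():
    # SLAM: explore the house and build a map
    return {"kitchen": (5, 2), "living_room": (1, 1)}

def listen():
    # speech recognition and language understanding
    return "bring me a glass of water"

def plan_path(house_map, place):
    # path planning with obstacle avoidance, relocalising along the way
    return [house_map[place]]

def drive(path):
    # base control: follow the planned trajectory
    print(f"driving via {path}")

def grasp(obj):
    # perception + manipulation; a transparent glass is hard to even see
    print(f"grasping {obj}")
    return obj

def fill(glass):
    # fine motor control at the tap; stop when the glass is full
    print(f"filling {glass}")

def place_on(obj, surface):
    # gentle, compliant placement
    print(f"placing {obj} on the {surface}")

def fetch_water():
    house_map = build_map()
    if "water" in listen():
        drive(plan_path(house_map, "kitchen"))
        glass = grasp("glass")
        fill(glass)
        drive(plan_path(house_map, "living_room"))  # ...without spilling
        place_on(glass, "table")

fetch_water()
```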

On integration, John Hallam from the University of Southern Denmark says:

You’re not building a robot – you’re engineering a system – too often people focus on making the device. By doing that they miss out on important possibilities.

Legal framework and public perception

Startups are increasingly building robots that have the potential to be disruptive because they solve specific problems well. Drones and autonomous cars are good examples. The challenge for these startups is that regulation is not yet in place to allow their products to be commercialised.

Andrea Bertolini from Scuola Superiore Sant’Anna in Italy says there is “too much litigation and too little ex ante regulation”, meaning that startups may end up taking unnecessary legal risks, which could be prevented if better regulation was in place to support innovation up front.

And in a world where human safety and customer expectations are paramount, finding the right standards to make robots a reality has been an ongoing effort in the community.

Finally, public misrepresentation of what robots can do increases public concern. This carries the danger that policy makers react to opinion shaped by a lack of balanced information. Public concerns need to be discussed, and the responsible use of robots promoted, so that policy is focused on what Europe needs.

It’s still hard to make a robot

Robotics is making strides in the development of specific technologies and solutions for dedicated tasks. After years of development in research laboratories, robots are just starting to find their way to the consumer market. We are still very far from Rosie the Robot, however. Making Rosie would require profound advances in power, sensing, manipulation, cognition, and the magic glue: integration.

The ten-part series on European Robotics will be published every two weeks on the SPARC website and Robohub. Funding for the series was provided by RockEU – a Coordination and Support Action funded under FP7 by the European Commission, Grant Agreement Number 611247.

Read all the articles in the series: Focus on European Robotics