The incident quickly went viral and sparked a fierce debate about whether the robot actually attacked the woman or had simply tripped. It’s mostly being overlooked that we’re a long way from having robots that could intentionally attack someone – machines like these are often remote controlled – but the danger to the public is clearly real enough.
With sales of humanoid robots set to skyrocket over the next decade, the public will increasingly be at risk from these kinds of incidents. In our view as robotics researchers, governments have put very little thought into the risks.
Here are some urgent steps that they should take to make humanoid robots as safe as possible.
1. Increase owner requirements
The first important issue is the extent to which humanoid robots will be controlled by their users. Whereas Tesla’s Optimus can be remotely operated by people in a control centre, others, such as Unitree’s H1, are controlled by the user with a handheld joystick.
Currently on sale for around £90,000, the H1 comes with a software development kit with which owners can build their own artificial intelligence (AI) system, though only to a limited extent. For example, it could be made to say a sentence or recognise a face, but not to take your kids to school.
Who is to blame if someone gets hurt or even killed by a human-controlled robot? It’s hard to know for sure – any discussion about liability would first involve proving whether the harm was caused by human error or a mechanical malfunction.
This came up in a Florida case where a widower sued medical robot-maker Intuitive Surgical Inc over his wife’s death in 2022. Her death was linked to injuries she sustained from a heat burn in her intestine during an operation that was caused by a fault in one of the company’s machines.
The case was dropped in 2024 after being partially dismissed by a district judge. But the fact that the widower sued the manufacturer rather than the medics demonstrated that the robotics industry needs a legal framework covering such situations as much as the public does.
While drones are subject to aviation laws and other restrictions governing their use in public areas, there are no equivalent laws for walking robots.
So far, the only place to have put forward governance guidelines is the Chinese city of Shanghai. Published in summer 2024, these stipulate that robots must not threaten human security, and that manufacturers must train users in how to use the machines ethically.
As for robots controlled by their owners, there is currently nothing in the UK preventing someone from taking a robot dog out for a stroll in a busy park, or a humanoid robot to the pub for a pint.
As a starting point, we could ban people from controlling robots under the influence of alcohol or drugs, or when they are otherwise distracted such as using their phones. Their use could also be restricted in risky environments such as confined spaces with lots of members of the public, places with fire or chemical hazards, and the roofs of buildings.
2. Improve design
Robots that look sleek and can dance and flip are fun to watch, but how safe are their audiences? Safe designs would consider everything from reducing cavities where fingers could get caught to waterproofing internal components.
Protective barriers or exoskeletons could further reduce unintended contact, while cushioning mechanisms could reduce the effect of an impact.
Robots should be designed to signal their intent through lights, sounds and gestures. For example, they should arguably make a noise when entering a room so as not to surprise anyone.
Even drones can alert their user if they lose signal or run low on battery and need to return home, and such mechanisms should also be built into walking robots. At present, there are no legal requirements for any such features.