
5 Laws of Robotics


I’ve been studying the issues and opportunities in commercial robotics rollouts for many years now, and I’ve spoken at a number of conferences about the best way for us to look at robotics deployments. In the process I’ve found that my guidelines align most closely with the EPSRC Principles of Robotics, although I put additional focus on potential solutions. I call them the 5 Laws of Robotics because it’s very difficult to escape Asimov’s Laws of Robotics in the public perception of what needs to be done.

The most obvious first point about these so-called “5 Laws of Robotics” is that I’m not suggesting actual laws, and neither, in fact, was Asimov with his famous Three Laws (technically four of them). Asimov proposed something programmed or coded into a robot’s very existence, and of course it didn’t work perfectly, which provided him with material for his books. Interestingly, Asimov believed, as did many others in the era of symbolic AI, that it would be possible to define effective global rules of behavior for robots. I am not suggesting that.

My 5 Laws of Robotics are:

  1. Robots shouldn’t kill.
  2. Robots must comply with the law.
  3. Robots must be good products.
  4. Robots must be honest.
  5. Robots must be identifiable.

What exactly do I mean by these laws?

First, people should not be able to legally arm robots, although there may be exceptions for use by defense forces or first responders. Some people are totally against Lethal Autonomous Weapon Systems (LAWS) of any kind, whereas others draw the line at robotic weapons that ultimately remain under human command, with legal accountability. Currently in California there is a proposed law that would introduce fines for individuals who build or modify armed robots, drones or autonomous systems, with an exception for ‘legitimate’ uses.

Second, robots must be designed to comply with existing laws, including privacy laws. This implies some form of accountability on the part of the company for compliance across multiple jurisdictions, and while this is technically very complex, the successful company will be proactive about it; otherwise there will be a lot of court cases and insurance claims that keep lawyers happy but badly damage the reputation of all robotics companies.

Third, although we continue to develop and adapt standards as our technology evolves, the core principle is that robots are products, designed to perform tasks for humans. As such, a robot must be safe, reliable, and do what it claims to do, in the way it claims to operate. Misrepresenting the capabilities of any product is universally frowned upon.

Fourth, and this addresses a fairly unique robot capability, robots must not deceive. Robots create the illusion of emotions and agency, and humans are highly susceptible to being ‘digitally driven’ or manipulated by artificial agents. Examples include robots or avatars that claim to be your friends, but deception can be as subtle as robots that use a human voice as if a real person were listening and talking, or that don’t explain that the conversation you’re having with the robot may have multiple listeners at other times and locations. Robots are potentially extraordinarily effective advertising vehicles, in ways we haven’t yet begun to suspect.

Finally, and this extends the principles of accountability, transparency and honesty, we should be able to find out who owns and/or operates any robot we interact with, even if we only share the sidewalk with them. Almost every other vehicle must comply with some licensing or registration process, which allows ownership to be identified.

What can we do to act on these laws?

  1. Robots Registry (number plates, access to an owner/operator database; see the sketch after this list)
  2. Algorithm Transparency (via Model Cards and Test Benchmarks)
  3. Independent Ethical Review Boards (as in the biotech industry)
  4. Robot Ombudsman (to act as a liaison between the public, policy makers and the robotics industry)
  5. Rewarding Good Robots (design awards and case studies)
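To make the first of these actions concrete, here is a minimal sketch of what a machine-readable robot registry might look like, in the spirit of laws 2 and 5. Everything in it (the RegistryEntry fields, the plate format, the register and lookup helpers) is a hypothetical illustration, not an existing standard or system.

```python
# A minimal sketch of a public robot registry, assuming each robot carries a
# visible plate ID (like a vehicle number plate). All names, fields and the
# plate format are hypothetical; no such registry or standard exists yet.
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class RegistryEntry:
    """One registry record, keyed by the plate ID printed on the robot."""
    plate_id: str                # visible identifier, e.g. on a number plate
    owner: str                   # legal entity accountable for the robot
    operator: str                # organization running it day to day
    contact: str                 # public channel for complaints or incidents
    jurisdictions: List[str] = field(default_factory=list)


# In-memory stand-in for what would really be a shared public database.
REGISTRY: Dict[str, RegistryEntry] = {}


def register(entry: RegistryEntry) -> None:
    """Add or update a robot's record in the registry."""
    REGISTRY[entry.plate_id] = entry


def lookup(plate_id: str) -> Optional[RegistryEntry]:
    """Return the owner/operator record for a plate seen on the sidewalk."""
    return REGISTRY.get(plate_id)


if __name__ == "__main__":
    register(RegistryEntry(
        plate_id="SVR-0042",
        owner="Example Robotics Inc.",
        operator="Example Delivery Services",
        contact="incidents@example.com",
        jurisdictions=["CA-US"],
    ))
    print(lookup("SVR-0042"))
```

The design point is simply that a visible plate ID plus a public lookup makes ownership and operation traceable, the same way vehicle registration already does.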

There are many organizations that release suggested guidelines, principles and laws. I have surveyed most of them and looked at the research. Most of them amount to ethical hand-wringing and achieve nothing, because they don’t take into account the real-world conditions: what the goals are, who will be in charge, and how to make progress towards those goals. I wrote about this problem before giving a talk at the ARM Developer Summit in 2020 (video included below).

Silicon Valley Robotics announced the first winners of our inaugural Robotics Industry Award for 2020. The SVR Industry Award considers responsible design as well as technological innovation and commercial success. There are also some ethical checkmarks or certification initiatives in the works, but like the development of new standards these can take a long time to get right, whereas awards, support and case studies can be deployed right away to encourage discussion of what good robots look like, and of what social challenges robotics needs to solve.

The Federal Trade Commission recently published “The Luring Test: AI and the engineering of consumer trust”, which describes these same concerns about artificial agents being designed to exploit consumer trust.

For those not familiar with Isaac Asimov’s famous Three Laws of Robotics, they are:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov later added a fourth law, called the Zeroth Law because it precedes the others (0, 1, 2, 3):

Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Robin R. Murphy and David D. Woods have updated Asimov’s laws to be more like the laws I propose above, and they provide a good analysis of what Asimov’s laws were trying to do and why they changed them for modern robotics: Beyond Asimov: The Three Laws of Responsible Robotics (2009).

Several other selections I recommend, from the hundreds of principles, guidelines and ethical landscape surveys out there, come from Joanna Bryson, a co-author of the original EPSRC principles.

The Meaning of the EPSRC Principles of Robotics (2016)

And the 2016/2017 update from the original EPSRC team:

Margaret Boden, Joanna Bryson, Darwin Caldwell, Kerstin Dautenhahn, Lilian Edwards, Sarah Kember, Paul Newman, Vivienne Parry, Geoff Pegman, Tom Rodden, Tom Sorrell, Mick Wallis, Blay Whitby & Alan Winfield (2017) Principles of robotics: regulating robots in the real world, Connection Science, 29:2, 124-129, DOI: 10.1080/09540091.2016.1271400

Another survey worth reading is the Stanford Encyclopedia of Philosophy entry on the ethics of AI and robotics: https://plato.stanford.edu/entries/ethics-ai/


Andra Keay is Managing Director of Silicon Valley Robotics, founder of Women in Robotics and a mentor, investor and advisor to startups, accelerators and think tanks, with a strong interest in commercializing socially positive robotics and AI.



Silicon Valley Robotics is an industry association that supports the innovation and commercialization of robotics technology.



