Making sure
Artificial Intelligence (AI) does what we want and
behaves in predictable ways will be crucial as the technology
becomes increasingly ubiquitous. It is an area frequently
neglected in the race to develop products, but DeepMind has now
outlined its research agenda to tackle the problem.
AI safety, as the
field is known, has been gaining prominence in recent years.
That is probably at least partly down to the overzealous
warnings of a coming AI apocalypse from Elon Musk and Stephen
Hawking. It is also recognition of the fact that AI technology
is quickly pervading all aspects of our lives, making decisions
on everything from what movies we watch to whether we get a
mortgage.
That is why DeepMind
hired researchers who specialize in foreseeing the unforeseen
consequences of the way we built AI back in 2016. The team has
spelled out the three key domains they think require research if
we are going to build autonomous machines that do what we want.
In a new blog designed
to provide updates on the team’s work, they introduce the ideas
of specification, robustness, and assurance, which they say will
act as the cornerstones of future research. Specification
involves making sure AI systems do what their operator intends;
robustness means a system can cope with changes to its
environment and attempts to throw it off course; and assurance
involves our ability to understand what systems are doing and
how to control them.
No comments:
Post a Comment
Comments