Making AI Safe for Civilized Society

Commentary
James Kobielus | 2/2/2018 08:00 AM

There are plenty of concerns about the safety of artificial intelligence, and it's up to humans to set the standards for safe uses of the technology.

You don’t need to buy into the notion that artificial intelligence (AI) is a so-called “existential threat” to recognize that the technology has its downsides.

Some of AI’s risks may stem from design limitations in a specific buildout of the technology. Others may be due to inadequate runtime governance over live AI apps. Still others may be intrinsic to the technology’s inscrutable “black-box” complexity. And let’s not forget the trend toward AI’s weaponization, which poses an existential threat any way you look at it.

One of the most vibrant fields of high-tech research is what’s often called “AI safety” (or, alternately, “friendly AI” or “AI risk management”). Generally, AI safety addresses the myriad ways in which the technology may adversely impact society. The AI safety community is developing technological, procedural, regulatory, and other guardrails to mitigate the most worrisome threats.

Image: Shutterstock

As a mainstream preoccupation, AI safety has become inescapable in the popular press, the blogosphere, and technical journals. It has become a popular topic on the mainstage at tech conferences. AI safety researchers can tap into a growing pool of grants that fund innovative approaches for addressing the problem. Some of the research monies are coming from the same foundations that are addressing many types of existential threats, including global warming, nuclear weapons, and biotechnology. Research is coming from all over the AI community, from institutes around the globe, and from big technology companies. Among the most noteworthy AI safety research initiatives is a nonprofit sponsored by Elon Musk and other Silicon Valley movers and shakers.

Key AI safety research topics include the following:

· Can we prevent AI from invading people’s privacy?

· Can we eliminate socioeconomic biases that may be baked into AI-driven applications?

· Can we ensure that AI-driven processes are entirely transparent, explicable, and interpretable to average humans?

· Can we engineer AI algorithms so that there’s always a clear indication of human accountability, responsibility, and liability for their algorithmic outcomes?

· Can we build ethical and moral principles into AI algorithms so that they factor the full set of human considerations into decisions that may have life-or-death consequences?

· Can we automatically align AI applications with stakeholder values, or at least build in the ability to compromise in exceptional cases, thereby preventing the emergence of rogue bots in autonomous decision-making scenarios?

· Can we throttle AI-driven decision-making in circumstances where the uncertainty is too great to justify autonomous actions? (A simple confidence-gate sketch follows this list.)

· Can we institute fail-safe procedures so that humans may take back control when automated AI applications reach the limits of their competency?

· Can we ensure that AI-driven applications behave in consistent, predictable patterns, free from unintended side effects, even when they are required to dynamically adapt to changing circumstances?

· Can we protect AI applications from adversarial attacks that are designed to exploit vulnerabilities in their underlying statistical algorithms?

· Can we design AI algorithms that fail gracefully, rather than catastrophically, when the input data departs significantly from the circumstances for which they were trained? (A second sketch after this list illustrates one such check.)
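
To make the throttling and human-handoff questions concrete, here is a minimal sketch in Python of a confidence gate: the system acts autonomously only when the model's reported confidence clears a threshold, and otherwise routes the decision to a person. The threshold value, the Decision structure, and the review queue are illustrative assumptions, not any particular vendor's API.

# Minimal sketch of a confidence gate with a human handoff.
# The threshold, Decision structure, and review queue are illustrative
# assumptions, not a standard or any vendor's API.

from dataclasses import dataclass
from typing import Any, List

CONFIDENCE_THRESHOLD = 0.90  # assumed policy value; tune per use case

@dataclass
class Decision:
    label: Any          # the model's proposed action or classification
    confidence: float   # the model's reported probability for that label
    autonomous: bool    # True if the system may act without human sign-off

def gate(label: Any, confidence: float) -> Decision:
    """Allow autonomous action only when confidence clears the threshold."""
    return Decision(label, confidence, confidence >= CONFIDENCE_THRESHOLD)

def handle(decision: Decision, review_queue: List[Decision]) -> None:
    """Execute high-confidence decisions; defer the rest to people."""
    if decision.autonomous:
        print(f"Executing automatically: {decision.label} "
              f"({decision.confidence:.2f})")
    else:
        review_queue.append(decision)
        print(f"Deferred to human review: {decision.label} "
              f"({decision.confidence:.2f})")

if __name__ == "__main__":
    queue: List[Decision] = []
    handle(gate("approve_claim", 0.97), queue)  # clears the bar, runs on its own
    handle(gate("deny_claim", 0.62), queue)     # too uncertain, goes to a human

Real systems would replace the print statements with auditable workflows, but the pattern of measuring uncertainty and then gating the action is the core idea behind most human-in-the-loop safeguards.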

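Similarly, the graceful-failure question can be illustrated with a simple out-of-distribution check: before trusting a prediction, compare the input against the statistics of the training data and abstain when it looks nothing like what the model has seen. The per-feature z-score test, the cutoff, and the abstention response below are illustrative assumptions, not a standard technique mandated by any framework.

# Minimal sketch of failing gracefully on out-of-distribution input:
# a per-feature z-score check against the training data, with abstention
# when an input looks nothing like what the model was trained on.
# The cutoff and the abstention response are illustrative assumptions.

import numpy as np

Z_SCORE_LIMIT = 4.0  # assumed cutoff; beyond this the system abstains

def fit_reference(train_features: np.ndarray):
    """Record per-feature mean and standard deviation of the training data."""
    return train_features.mean(axis=0), train_features.std(axis=0) + 1e-9

def predict_or_abstain(x: np.ndarray, mean, std, model):
    """Return a prediction, or a graceful abstention for novel inputs."""
    z_scores = np.abs((x - mean) / std)
    if np.any(z_scores > Z_SCORE_LIMIT):
        return {"status": "abstain",
                "reason": "input far outside the training distribution"}
    return {"status": "ok", "prediction": model(x)}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, size=(1000, 3))
    mean, std = fit_reference(train)
    toy_model = lambda x: float(x.sum() > 0)  # stand-in for a trained model
    print(predict_or_abstain(np.array([0.2, -0.5, 1.0]), mean, std, toy_model))
    print(predict_or_abstain(np.array([25.0, 0.0, 0.0]), mean, std, toy_model))
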
AI safeguards will almost certainly find their way into future waves of commercial devices, applications, and cloud services. AI safety is also the focus of a growing curriculum that’s essential study for the next generation of data scientists and other application developers.

But we’d be naïve to imagine that society can ever fully protect itself from all the adverse consequences that may befall us from our AI inventions. No matter how smart humanity becomes in perfecting the state of the art in AI safety, we’re not likely to rid ourselves entirely of algorithmic insensitivity. If nothing else, the probabilistic underpinnings of AI — along with its staggering complexity, versatility, and autonomy — practically guarantee that its behavior can never be entirely predicted or controlled in advance in every real-world circumstance.

As AI remakes the human experience, we’ll have to revisit and recalibrate its guardrails to keep its worst tendencies in check.

Jim is Wikibon's Lead Analyst for Data Science, Deep Learning, and Application Development. Previously, Jim was IBM's data science evangelist, where he managed IBM's thought leadership, social, and influencer marketing programs targeted at developers of big data analytics and machine learning.