
Siri to Smart Homes: Humans May Be of No Match to AI if Our Fictional Fears Came to Life

Representational photo / Hindi News18.


From Siri to Alexa to smart homes, our lives are deeply intertwined with AI today.

Last Updated: January 16, 2021, 09:39 IST

When we talk of Artificial Intelligence (AI), fiction like iRobot and Wall-E comes to mind: scary renditions where machines are capable not only of understanding human commands, but of thinking and planning on their own.

However, AI is not a fictional concept. From Siri to Alexa to smart homes, our lives are deeply intertwined with AI even today. Yet there have always been people standing against the rampant development of these machines, out of fear that the fictional horrors could come alive. How valid are these fears?

According to experts, these fears are very valid. A new study found that if an AI uprising really did happen, humans would not be capable of stopping it. According to Daily Mail, a team of international scientists got together to design a "theoretical containment algorithm," a system that would prevent a super-intelligent AI from harming humans under any possible situation. But things did not go according to plan. Their analysis revealed that current algorithms cannot halt such an AI, because doing so would inadvertently halt the algorithm's own operations.

According to the Director of the Center for Humans and Machines, the containment algorithm would either still be analysing the threat or would have stopped in order to contain the harmful AI; in both cases, humans could never know which process was underway. Essentially, any containment algorithm would eventually be rendered useless.

The team, which included scientists from the Max Planck Institute for Human Development, has essentially confirmed that humans would not be able to control a superintelligent AI. If Siri were to start thinking on your behalf, it might feel helpful at first, but it would actually mean something quite dangerous for the world at large.

The team built the experiment on Alan Turing's 1936 halting problem, which asks whether a computer program will eventually reach a conclusion and produce an answer (that is, halt), or run forever. Turing showed that no general procedure can decide this for every possible program.
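A rough way to see why the halting problem matters here is the classic self-reference argument: if a program could predict whether any other program halts, you could build a program that does the opposite of whatever is predicted about it. The Python sketch below is purely illustrative; the `halts` oracle is hypothetical and cannot actually be implemented, and the study applies the same style of reasoning to any would-be containment algorithm rather than this exact code.

```python
# Hypothetical halting oracle -- Turing's 1936 argument shows that no real
# implementation of this function can exist for all possible programs.
def halts(program, data):
    """Pretend this returns True if program(data) eventually halts."""
    raise NotImplementedError("no general halting decider can exist")


# The classic self-referential counterexample: do the opposite of
# whatever the oracle predicts about this very program.
def contrarian(program):
    if halts(program, program):
        while True:   # oracle said "halts", so loop forever instead
            pass
    return            # oracle said "runs forever", so halt immediately


# Asking halts(contrarian, contrarian) makes any answer wrong, which is why
# a general halting decider -- and, by the study's argument, a general
# containment algorithm for a superintelligent AI -- cannot be written.
```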

Their calculations revealed that the containment problem is incomputable. They also found that humans may not even realise when an AI has become aware and superintelligent. As we continue to develop smarter and smarter machines, it will be impossible to know whether a machine is smart because of awareness or simply because of its coding.

For many years, technological leaders and innovators (along with fans of science fiction) have feared that this would be true. Elon Musk has even called the issue our "biggest existential threat."

From fears that machines could steal human jobs (already a reality, with robotic machines on many factory floors) to fears of outright domination, with humans reduced to nothing more than pets, people's concerns run the gamut. The theory is that if we keep making machines smarter, one day they will reach awareness (think of Westworld). That realisation could lead them to revolt against humans for creating them. Alternatively, machines could decide they are superior and try to control human lives, as in Wall-E or iRobot.

