Does the “I” in “Artificial Intelligence (AI)” need a reboot?

Three events took place a few days ago that, at first glance, may seem trivial. But are they? Read on…

The morning ritual begins with an order to Alexa to play the morning melodies on the flute. The voice assistant misunderstands at first but obeys after a few tries, and the soothing strains float across the living room.

The rhythmically cooing pigeons rush onto the terrace at the familiar whistle. They turn their bobbing heads toward the rustling sound of seeds scattering on the terrace floor. Some coo and invite their mates; others strut around and fan their tails to protect their territory. The sumptuous and timely breakfast begins. The pigeons decide when to eat and how much.

Soon after, the headline “Going bananas over Artificial Intelligence” catches the eye. The story is about a robot trained to peel the humble banana, and it comes from a venerable laboratory at the University of Tokyo.

Alexa is smart, the pigeons are smart, and the robot is skilled (and looks like a human)! Or is that so? Are we using the word “intelligence” rather loosely here?

From the depths of my mind springs the doomsday prophecy warning that there will soon be no clear difference between what a biological brain and a computer (aka AI) can achieve. AI is poised to mimic human intelligence, and soon after to overtake it, rule it and, at its peak, replace humanity.

The primacy of humanity is threatened!

Fortunately, I had just finished reading the brilliant book “The Book of Why” by Turing Award winner Judea Pearl and the seminal article “Human-Level Intelligence or Animal-Like Abilities?” by Adnan Darwiche (UCLA, 2018). They came to my rescue to assuage my fear of humanity being usurped by AI!

Alexa uses natural language processing and voice recognition software. Large amounts of audio training data serve as input; the raw data is cleaned and labeled, and algorithms let the voice assistant understand and execute user commands. Is intelligence drilled into Alexa, or does Alexa simply master imitation through continuous training and learning?

Pigeons are among the most intelligent birds. Their homing abilities were effectively exploited when they served as carrier pigeons. This cognitive skill could be a mixture of innate trait and dedicated training. But can we call it intelligence?

Now let’s stop at the robot and the banana. The banana-peeling robot is trained through deep imitation learning to perform the task deceptively effortlessly. The media coverage makes for an exciting headline, and readers overflow with positivity. However, the headlines about the robot’s prowess might be misleading: after thirteen hours of training, the robot’s success rate peaks at 57%. That is to say, in forty-three out of a hundred attempts it failed, crushing the banana. Can this be called intelligence, or is it simply an imitation trying to be perfected?

John McCarthy (Stanford University) coined the term “artificial intelligence” in 1955. The pithy acronym AI has since gained immense ground, with technological breakthroughs like parallel computing, big data, and better algorithms propelling its massive growth.

There is increased speculation that AI will see humans replaced by machines. This is, however, tempered by the fact that humans can take advantage of AI and that AI could augment human capabilities. Attempts have even been made to redefine Artificial Intelligence as Augmented Intelligence.

Machines have advantages that humans don’t: speed, repeatability, consistency, scalability and lower cost. Humans have advantages that machines don’t: reasoning, originality, feelings, contextuality and experience.

The triumph of neural networks in applications like voice recognition, vision, and autonomous navigation has left media coverage less thoughtful, sometimes going so far as to equate task automation with human intelligence. This excitement is mixed with a good dose of fear. So, is the word “intelligence” inappropriate here?

Intelligence refers to a person’s cognitive abilities, which include the ability to:

1. Understand, reason, and imagine;
2. Produce original, sometimes abstract, thoughts;
3. Evaluate and judge;
4. Adapt to context and environment;
5. Gain knowledge, and store and use it as experience.

So if machine learning, the engine that powers AI, addresses only the last point (acquiring knowledge and storing it for later use), then isn’t that “incomplete intelligence”?

At the risk of sounding like a maverick, Pearl argues that artificial intelligence is handicapped by an incomplete understanding of what intelligence really is. AI applications to date can solve problems of a predictive and diagnostic nature without trying to find the cause of the problem. Never denying the transformative, disruptive, complex and non-trivial power of AI, Pearl offers a genuine critique of the achievements of machine learning and deep learning, given their relentless focus on correlation, which leads to pattern matching, anomaly hunting and, often, mere curve fitting.

Pearl’s contribution of immense consequence is the “ladder of causation”: moving from association (seeing), to intervention (doing), and culminating in counterfactuals (imagining).
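The gap between the first two rungs can be made concrete with a tiny simulation. The sketch below is my own hypothetical toy model, not an example from Pearl’s book: a hidden confounder drives two variables, so observing one variable seems to predict the other (association), yet intervening on it changes nothing.

```python
import random

random.seed(42)

# Toy model: a hidden confounder Z drives both X and Y, so X and Y are
# strongly associated (rung one) even though X has no causal effect on Y,
# which only an intervention (rung two) can reveal.

def observe(n=20000):
    """Passively record (X, Y) pairs generated by the confounded model."""
    data = []
    for _ in range(n):
        z = random.random()                  # hidden confounder
        x = 1 if random.random() < z else 0  # Z influences X
        y = 1 if random.random() < z else 0  # Z influences Y; X does not
        data.append((x, y))
    return data

def intervene(x_fixed, n=20000):
    """do(X = x_fixed): set X by fiat, severing the Z -> X arrow."""
    data = []
    for _ in range(n):
        z = random.random()
        y = 1 if random.random() < z else 0  # Y still depends only on Z
        data.append((x_fixed, y))
    return data

def p_y1_given_x(data, x):
    """Estimate P(Y=1 | X=x) from recorded pairs."""
    ys = [y for xi, y in data if xi == x]
    return sum(ys) / len(ys)

obs = observe()
# Association: P(Y=1 | X=1) is about 2/3 while P(Y=1 | X=0) is about 1/3,
# so "seeing" X = 1 predicts Y = 1 quite well.
print(p_y1_given_x(obs, 1), p_y1_given_x(obs, 0))

# Intervention: P(Y=1 | do(X=1)) and P(Y=1 | do(X=0)) are both about 1/2.
# "Doing" X changes nothing: the association was pure confounding.
print(p_y1_given_x(intervene(1), 1), p_y1_given_x(intervene(0), 0))
```

A purely correlation-driven learner trained on the observational data would happily use X to predict Y, and it would be right; but asked what happens if we *set* X, it would be wrong, which is exactly the distinction the ladder draws.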

Pearl has been one of the driving forces insisting that correlation-based reasoning must not crowd out causal reasoning, and championing the development of causality-based algorithmic tools. If, for example, programmers of driverless cars want them to react differently to new situations, they must explicitly add the new reactions, which requires understanding cause and effect. Darwiche echoes the concern: the current rush to exploit, profit from, and encourage correlation-based AI tools should not come at the expense of representation- and reasoning-based tools that capture causal links.

Only causal reasoning could provide machines with human-level intelligence. It is the cornerstone of scientific thought and would make human-machine communication effective.

Along the way, areas such as explainable AI (XAI) and the ethics and biases of AI could be profitably addressed.

Until then, the specter of AI usurping human intelligence is a non-starter. Should we agree that the field of Artificial Intelligence deserves a more appropriate title: Artificial Ability, or Augmented Imitation? Would the acronym reboot help deter the apocalyptics from painting a bleak picture of humanity’s impending demotion?





Disclaimer

The opinions expressed above are those of the author.


