Can AI Think Like a Human? Here’s Why Scientists Disagree

The seeming magic of Artificial Intelligence (AI) can make people believe that it thinks like a human being. From recognizing images to generating human-like language, it's no surprise that some would think so. But despite appearances, the technology is far from thinking and understanding the way a human does.

A Common Mistake

Thanks to developments in AI, systems that exhibit human-like behaviors have come to life. The language model GPT-3, for instance, can produce text that is hard to distinguish from human writing. Likewise, PaLM can explain jokes it has never read before.

Last month, Google's AI company DeepMind released GATO, a general-purpose AI developed to perform a multitude of tasks, including writing image captions, answering questions, playing video games, and controlling a robotic arm to stack blocks. Another such model is OpenAI's DALL-E, which was trained to create images and artwork from text descriptions.

Because of these systems, claims have surfaced that AI can replicate human intelligence. Nando de Freitas, a researcher at DeepMind, argues that simply scaling up existing models will be enough to produce human-like intelligence, and many others agree. However, human-like behavior doesn't necessarily mean human-like understanding.

Can AI Think Like a Human?

Modern AI is built on Artificial Neural Networks, more popularly known as neural nets. They take their inspiration, and their name, from the human brain, where billions of cells known as neurons form complicated webs of connections and process information by signaling back and forth to one another.

Neural nets are a drastically simplified model of this biology. An actual neuron is replaced with a simple node, and the strength of the connection between nodes is represented by a single number called a "weight."

With enough connected nodes stacked in layers, neural nets can be trained to identify patterns. They can also generalize, responding sensibly to stimuli that are merely similar to what they have seen before. Generalization refers to an AI system's ability to learn from the data it has seen and apply that learning to new information.
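The "node plus weight" idea above can be sketched in a few lines of Python. This is an illustrative toy, not any particular library's implementation; the function name and numbers are invented for the example:

```python
import math

def node_output(inputs, weights, bias):
    """A single artificial 'neuron': a weighted sum of its inputs
    passed through a squashing (sigmoid) activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Two inputs; each connection's strength is given by a weight.
print(node_output([1.0, 0.5], [0.8, -0.4], bias=0.1))  # ≈ 0.668
```

Stacking many such nodes, with the output of one layer feeding the inputs of the next, is what gives a network its pattern-recognition power.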

This ability to recognize features and patterns, and to generalize from them, is what makes neural nets succeed: they loosely mimic how humans complete the same tasks. However, there are also important differences.

Usually, neural nets are trained through "supervised learning": they are given example inputs together with the desired outputs, and the connection weights are adjusted until the network learns to produce the expected results.
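A minimal sketch of that training loop, with the "network" reduced to a single weight. The data, the target rule (y = 2x), and the learning rate are all made up for illustration:

```python
# Supervised learning in miniature: training data pairs inputs with
# desired outputs, and the weight is nudged until predictions match.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # hidden rule: y = 2x

w = 0.0     # initial connection weight
lr = 0.05   # learning rate: size of each adjustment
for _ in range(200):
    for x, target in data:
        prediction = w * x
        error = prediction - target
        w -= lr * error * x   # adjust toward the expected result

print(round(w, 3))  # converges close to 2.0
```

After training, the weight has settled on the rule implicit in the examples, which is all "learning" means here.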

For a language task, a neural net might be presented with a sentence one word at a time, gradually learning to predict the next word in the sequence. This isn't how humans learn. Human learning is mostly unsupervised: we aren't told the correct response for a given stimulus; we figure it out ourselves.
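The word-at-a-time prediction setup can be illustrated with a toy. Here a simple counting (bigram) model stands in for the neural net, and the tiny corpus is invented for the example:

```python
from collections import Counter, defaultdict

# Count which word follows which in the training text, then
# predict the most frequent follower -- a stand-in for what a
# language model learns at vastly greater scale.
corpus = "the cat sat on the mat . the cat ran".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```

A real language model predicts from learned weights rather than raw counts, but the supervision signal is the same: the next word in the text is the "correct answer."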

Ways Neural Nets Can Learn 

Neural nets learn differently from how we humans do. To match a stimulus with a preferred response, neural nets use an algorithm called "backpropagation." This passes errors backward through the network, allowing the weights to be adjusted in the right direction.
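A stripped-down sketch of that backward pass, using a hypothetical two-weight "network" y = w2 * (w1 * x). The error at the output is passed backward through the chain rule so each weight knows which way to move:

```python
def backprop_step(x, target, w1, w2, lr=0.01):
    h = w1 * x           # hidden value (forward pass)
    y = w2 * h           # output (forward pass)
    error = y - target   # mistake at the output

    grad_w2 = error * h   # how the output weight contributed
    grad_h = error * w2   # error passed backward to the hidden node
    grad_w1 = grad_h * x  # how the input weight contributed

    return w1 - lr * grad_w1, w2 - lr * grad_w2

w1, w2 = 0.5, 0.5
for _ in range(500):
    w1, w2 = backprop_step(x=1.0, target=3.0, w1=w1, w2=w2)

print(round(w1 * w2, 2))  # the network now maps 1.0 to ~3.0
```

The key step is `grad_h`: the output's error is propagated back to the earlier weight, which is exactly what "backpropagation" names. There is no evidence the brain computes anything like these explicit backward gradients.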

Scientists have proposed various ways the human brain could implement something like backpropagation, but there is as yet no evidence that it actually learns this way. Researchers believe new techniques, and new insights into how the human brain works, will be needed before AI machines can truly think like humans.
