
Why Can't AI Solve Simpler Problems?

With advances in technology and research, scientists are building better AI tools every day to tackle big problems around the world, yet artificial intelligence is still far from solving the simpler ones. Human beings can explore and solve unfamiliar problems by natural instinct, but this man-made intelligence still falls far short of mimicking the general intelligence of a living being.

Source: Towards Data Science

The AI toolbox keeps growing with algorithms for specific tasks, but those algorithms cannot generalize beyond their assigned domains. There is artificial intelligence that can beat world champions at StarCraft but cannot play the game at an amateur level outside the conditions it was trained on. There are artificial neural networks that can find signs of breast cancer in mammograms but fail to tell a cat from a dog. There are complex language models that spin out thousands of seemingly coherent articles per hour but cannot answer simple logical questions about the world.

It can be concluded that our AI techniques replicate some aspects of human intelligence, but they are not enough to bridge the gap between the two. In his book Algorithms Are Not Enough, data scientist Herbert Roitblat provides an in-depth review of the different branches of AI and describes why each of them falls short of the dream of creating general intelligence.

The most common shortcoming of all AI algorithms is the need for a predefined representation. We can create an AI algorithm that solves a problem efficiently only after we have discovered the problem and represented it in a computable way. Problems that remain undiscovered, or that we cannot represent, are still unsolvable by AI.


In earlier decades, AI focused mostly on symbolic systems. This branch of AI assumes that human thinking is based on the manipulation of symbols, and that any system that can compute symbols is intelligent. In symbolic AI, human developers specify the rules, facts, and structures that define the behavior of computer programs. Symbolic systems can memorize information, compute complex mathematical formulas at ultra-fast speeds, and emulate expert decision-making. Many of the popular programming languages and appliances we use every day have their roots in the work done on symbolic AI.

Source: Analytics Insight
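To make the symbolic approach concrete, here is a minimal sketch of a rule-based system. The facts and rules below are invented purely for illustration; the key point is that a human developer hand-writes every one of them, which is the hallmark of symbolic AI:

```python
# In symbolic AI, the developer specifies all facts and rules up front.
# These particular facts and rules are invented for illustration.
facts = {"has_fur", "gives_milk"}

# Each rule: if every condition holds, the conclusion becomes a new fact.
rules = [
    ({"has_fur"}, "is_mammal"),
    ({"gives_milk"}, "is_mammal"),
    ({"is_mammal", "eats_meat"}, "is_carnivore"),
]

# Forward chaining: keep applying rules until no new facts can be derived.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))  # "is_mammal" is derived; "is_carnivore" is not,
                      # because the fact "eats_meat" was never given
```

The system can only ever derive conclusions within the representation its designer wrote down; it will never invent a concept the rules do not mention.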

“The intellectual tasks, such as chess playing, chemical structure analysis, and calculus are relatively easy to perform with a computer. Much harder are the kinds of activities that even a one-year-old human or a rat could do,” Roitblat writes in his book Algorithms Are Not Enough.

Scientist Hans Moravec observed that, in contrast to humans, computers can perform high-level reasoning tasks with ease but struggle with the simple skills that living beings acquire naturally.

“Human brains have evolved mechanisms over millions of years that let us perform basic sensorimotor functions. We catch balls, we recognize faces, we judge distance, all seemingly without effort,” Roitblat writes. “On the other hand, intellectual activities are a very recent development. We can perform these tasks with much effort and often a lot of training, but we should be suspicious if we think that these capacities are what makes intelligence, rather than that intelligence makes those capacities possible.”

Thus, despite its remarkable capabilities, AI remains strictly tied to the representations we provide.


In machine learning, AI models are trained through examples rather than explicit rules, which lets them handle situations their creators never explicitly programmed.

“[Machine learning] systems could not only do what they had been specifically programmed to do but they could extend their capabilities to previously unseen events, at least those within a certain range,” Roitblat writes in Algorithms Are Not Enough.

Supervised learning is the most popular form of machine learning. Here, a model is trained on a set of input data, such as humidity and temperature, paired with expected outcomes, such as the probability of rain. Machine learning tunes a set of parameters that maps the inputs to the outputs, and a well-trained machine learning model can predict outcomes with remarkable accuracy.

“[M]achine learning involves a representation of the problem it is set to solve as three sets of numbers. One set of numbers represents the inputs that the system receives, one set of numbers represents the outputs that the system produces, and the third set of numbers represents the machine learning model,” Roitblat writes.
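Roitblat's "three sets of numbers" can be sketched in a few lines of Python. The humidity and temperature figures below are made up for illustration, and the model is a bare-bones logistic regression trained by gradient descent, not any particular production system:

```python
import math

# Set 1: inputs (humidity, temperature), scaled to 0-1.
# Set 2: outputs (1 = it rained, 0 = it did not).
# All numbers are invented for illustration.
data = [
    ((0.90, 0.18), 1), ((0.85, 0.16), 1), ((0.80, 0.20), 1),
    ((0.30, 0.28), 0), ((0.40, 0.30), 0), ((0.25, 0.25), 0),
]

# Set 3: the model itself -- three parameters mapping inputs to an output.
w = [0.0, 0.0, 0.0]  # bias, humidity weight, temperature weight

def predict(x):
    z = w[0] + w[1] * x[0] + w[2] * x[1]
    return 1.0 / (1.0 + math.exp(-z))  # probability of rain

# Training tunes the parameters so inputs map to the expected outputs.
for _ in range(2000):
    for x, y in data:
        err = predict(x) - y
        w[0] -= 0.5 * err
        w[1] -= 0.5 * err * x[0]
        w[2] -= 0.5 * err * x[1]

print(predict((0.88, 0.17)))  # humid, cool day: high probability of rain
print(predict((0.20, 0.30)))  # dry, warm day: low probability
```

Notice that everything here except the final values of `w` was chosen by a human: what counts as an input, what counts as an output, and the form of the model itself.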

Though supervised machine learning is not strictly bound by rules like symbolic AI, it still requires a representation created by human intelligence. Engineers need to define a specific problem, curate a training dataset, and label the outcomes before they can create a machine learning model. Only once the problem has been strictly represented can supervised machine learning start tuning its parameters.

“The representation is chosen by the designer of the system,” Roitblat writes. “In many ways, the representation is the most crucial part of designing a machine learning system.”

Deep learning, a branch of machine learning that has risen in popularity over the last decade, is often compared to the human brain. Deep learning relies on deep neural networks, which stack layers upon layers of simple computational units to build machine learning models that can perform complicated tasks such as transcribing audio and classifying images.

Yet even deep learning needs an architecture and a representation to solve a problem. One must first find the problem to be solved, curate a training dataset, and then figure out the deep learning architecture that can solve it.

“Like much of machine intelligence, the real genius [of deep learning] comes from how the system is designed, not from any autonomous intelligence of its own. Clever representations, including clever architecture, make clever machine intelligence,” Roitblat writes. “Deep learning networks are often described as learning their own representations, but this is incorrect. The structure of the network determines what representations it can derive from its inputs. How it represents inputs and how it represents the problem-solving process are just as determined for a deep learning network as for any other machine learning system.”

Reinforcement learning is similar to some aspects of human and animal intelligence. Here the AI does not rely on labeled examples but on an environment and a set of allowed actions. Through trial and error, the reinforcement learning model finds sequences of actions that produce better results.

In recent years, reinforcement learning has solved complicated problems such as mastering computer games and controlling robotic hands and arms.
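A minimal sketch can show the trial-and-error idea. The "corridor" world below is invented for illustration, and the learner is tabular Q-learning; note how much of it, including the states, the two actions, and the reward, is designed by a human before learning starts:

```python
import random

random.seed(1)

# A toy "corridor" world: states 0..4, reward only at the far right end.
# The environment, the actions, and the reward are all human-designed;
# only the action values are learned, by trial and error.
N, ACTIONS = 5, (-1, +1)
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def greedy(s):
    # Pick the highest-valued action, breaking ties at random.
    return max(ACTIONS, key=lambda a: (Q[(s, a)], random.random()))

for episode in range(500):
    s = 0
    while s != N - 1:
        # Epsilon-greedy: mostly exploit, occasionally explore at random.
        a = random.choice(ACTIONS) if random.random() < 0.1 else greedy(s)
        s2 = min(max(s + a, 0), N - 1)       # take the action
        r = 1.0 if s2 == N - 1 else 0.0      # reward at the goal only
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        # Q-learning update: nudge the estimate toward reward + future value.
        Q[(s, a)] += 0.1 * (r + 0.9 * best_next - Q[(s, a)])
        s = s2

print([greedy(s) for s in range(N - 1)])  # learned policy: always move right
```

The agent discovers a good action sequence on its own, but only inside a state space, action set, and reward signal that a human laid out in advance.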

But reinforcement learning systems are very complex, and they need a great deal of human help to design the model.

“It is impossible to check, in anything but trivial systems, all possible combinations of all possible actions that can lead to reward,” Roitblat writes. “As with other machine learning situations, heuristics are needed to simplify the problem into something more tractable, even if it cannot be guaranteed to produce the best possible answer.”
Herbert Roitblat, data scientist and author of Algorithms Are Not Enough
Source: TechTalks
Roitblat summarizes the drawbacks of current AI systems in Algorithms Are Not Enough: “Current approaches to artificial intelligence work because their designers have figured out how to structure and simplify problems so that existing computers and processes can address them. To have a truly general intelligence, computers will need the capability to define and structure their own problems.”

Various efforts are being made to push past the limits of current AI systems; one of them is to continue scaling deep learning. Evidence shows that adding more layers and parameters to neural networks produces incremental improvements, especially in language models such as GPT-3. But bigger neural networks do not deliver general intelligence.

“These language models are significant achievements, but they are not general intelligence,” Roitblat says. “Essentially, they model the sequence of words in a language. They are plagiarists with a layer of abstraction. Give it a prompt and it will create a text that has the statistical properties of the pages it has read, but no relation to anything other than the language. It solves a specific problem, like all current artificial intelligence applications. It is just what it is advertised to be—a language model. That’s not nothing, but it is not general intelligence.”

Other researchers are trying to improve the structure of current AI. For example, hybrid artificial intelligence brings symbolic AI and neural networks together to combine the reasoning power of the former with the pattern recognition power of the latter.

There are already several implementations of hybrid AI, also referred to as neuro-symbolic systems, and they show that hybrid systems require less training data and are more stable at reasoning tasks than pure neural network approaches.
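The neuro-symbolic idea can be illustrated with a deliberately tiny sketch: a "perception" stage turns raw numbers into symbols, and hand-written rules reason over those symbols. Here `detect` is a hypothetical stand-in for a trained neural network, and the scenario is invented:

```python
# Toy neuro-symbolic pipeline: neural-style perception feeds symbolic rules.
# detect() is a hypothetical stand-in for a trained neural network.

def detect(brightness):
    # Pretend-neural stage: maps a raw measurement to a symbol.
    return "ball" if brightness > 0.5 else "background"

# Symbolic stage: explicit, human-written rules over the symbols.
rules = {
    ("ball", "moving"): "catch",
    ("ball", "still"): "pick up",
    ("background", "moving"): "ignore",
    ("background", "still"): "ignore",
}

def decide(brightness, motion):
    return rules[(detect(brightness), motion)]

print(decide(0.9, "moving"))  # the two stages together choose "catch"
```

The perception stage handles messy raw input while the rule table reasons over clean symbols, which is the division of labor hybrid systems aim for.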

System 2 deep learning is another effort to enable neural networks to learn "high-level representations." Self-supervised learning, meanwhile, aims to learn tasks without labeled data, exploring the world the way a curious child does.

“I think that all of these make for more powerful problem solvers (for path problems), but none of them addresses the question of how these solutions are structured or generated,” Roitblat says. “They all still involve navigating within a pre-structured space. None of them addresses the question of where this space comes from. I think that these are really important ideas, just that they don’t address the specific needs of moving from narrow to general intelligence.”

Scientists are constantly researching ways to make future AI better and less flawed. Research like Roitblat's describes how flawed today's AI still is, but it also suggests that these flaws are not an inherent property of AI tools; they mostly stem from the conceptual decisions of the scientists who design them. It can therefore be expected that AI will improve over time with proper research.


To support their work, Newsmusk allows writers to use primary sources. White papers, government data, original reporting, and interviews with industry experts are only a few examples. Where relevant, we also cite original research from other respected publishers.

Source: TechTalks
