What is Artificial Intelligence (AI), and where is it going?

17 min

More and more often, in daily press releases, news websites, industry reviews, and market analyses, you can find terms such as artificial intelligence (AI). Machine learning (ML), deep learning (DL), data science, data mining, and big data are usually close behind.

Sounds mysterious and modern, doesn’t it? Yet these “futuristic” technologies are anchoring themselves in business with growing boldness. As a result, companies, enterprises, industry, science, and many other fields benefit from the support of these latest achievements of humankind.

This probably makes you wonder: can you use this technology in your organization? Will it bring additional benefits? What are the costs and the effort needed to implement it?

Yes, these are crucial questions. But before you answer them to yourself, let me tell you a bit about these technologies and how they can be applied.

The origins of AI

When entering the world of artificial intelligence and data science methods, you need to be aware of the origins of these fields. In addition, it is worth getting acquainted with their structure, nomenclature, and relations that bind all these terms together.

At first glance, it looks very complicated. Still, you don’t have to worry about anything because, contrary to appearances, it is not difficult to grasp. All you need to do is familiarize yourself with the basics I will introduce below and catch the dependencies and connections resulting from them.

Artificial intelligence is a relatively young field of science. Its origins date back to 1956, when it was established as an academic discipline. Throughout its development, it has experienced several waves of optimism, disappointment, new approaches, and successes.

AI research has tried and discarded many different approaches during its lifetime, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. As the 21st century began, highly mathematical, statistical machine learning came to dominate AI, and this technique has proved very effective at solving problems across industry and academia.

Different sub-domains of AI research focus on specific goals and the use of particular tools.

The primary objectives of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and moving and manipulating objects. In addition, general intelligence is one of the long-term goals in this field. AI researchers use various search and mathematical optimization methods, formal logic, artificial neural networks, and methods based on statistics, probability, and economics. AI also draws from computer science, psychology, linguistics, philosophy, and many other fields.

The AI field was founded on the premise that human intelligence “can be described so precisely that it can be made into a machine to simulate it.” That raises philosophical arguments about the morality and ethics of creating artificial beings endowed with human-like intelligence. Since antiquity, these issues have been raised by mythology, fiction, and philosophy. Science fiction and futurology also suggest that AI could become an existential threat to humanity with its enormous potential and power.

Before I go into a more detailed description and definition, please note that AI is a vast field that draws on many other scientific and technical disciplines. As you delve into its secrets, you can see a structure in which each subset describes its areas of application and tools with increasing precision. So, let me introduce this structure to you in the diagram below.

As you can see in the diagram above, AI aggregates smaller domains (ML, DL, DS) as subsets. Similarly, I will show you the structure of DS, which complements AI with tools and methods.

Now that you have picked up some terms and know-how, I will introduce you to AI more specifically. So, let’s go:

Artificial intelligence (AI)

Artificial intelligence is the broadest term used to classify the capacity of a computer system or machine to mimic human cognitive abilities. These include learning and problem-solving, imitating human behavior, and performing human-like tasks.

For example, a computer system uses math and logic to simulate human reasoning: it learns from new information, makes decisions, and performs tasks that would otherwise require human intelligence. A machine can thus use AI to predict, automate, and optimize jobs that people, due to physical or biological limitations, could not perform as effectively.

In these definitions, intelligence refers to the ability to plan, reason, learn, sense, build a representation of knowledge, and communicate in natural language.

John McCarthy, one of the founding fathers of AI and the creator of the LISP programming language, defined the discipline as:

 “AI is the science and engineering of making intelligent machines, especially intelligent computer programs.”

He also shortened it a bit by simply saying:

“AI is the science and engineering of making intelligent machines.”

The term “artificial intelligence” was first used in 1956 at the Dartmouth workshop (the Dartmouth Summer Research Project on Artificial Intelligence). In defining the term, scientists attempted to model the operation of the human brain and use this knowledge to create more advanced computers. They expected rapid progress in understanding how the human brain works and how to digitize it. The workshop brought together many of the brightest minds in the field for an intense two-month brainstorming session.

Indeed, the participants had a good time at Dartmouth. They put much effort into the work, but the results were sobering: mimicking the brain with programming turned out to be far more complicated than expected.

Nevertheless, they achieved some results. Scientists realized that the key drivers of an intelligent machine are:

– Learning, i.e., interacting with changing and spontaneous environments.

– Natural language processing, which helps with human-machine interaction.

– Creativity, which could free humanity from many of its problems.

AI systems are powered by algorithms and draw on machine learning (ML), deep learning (DL), and data science (DS). ML algorithms give AI systems the ability to learn from data: statistical research, mathematical models, algorithms, and data processing let these systems improve with experience. Thanks to ML, AI systems get better at performing tasks without unique software being written for each one.

Artificial Intelligence can encompass everything from Google search algorithms to autonomous vehicles. As a result, AI technologies have enabled people to automate previously time-consuming tasks and gain untapped insight into data through rapid pattern recognition.

Even today, when AI is ubiquitous, computers are still far from perfectly modeling human intelligence. Scientists continue to develop methods, algorithms, and technologies, so AI improves and steadily approaches the set ideals. The achievements and subsequent stages of the AI development cycle thus set the direction and illustrate the progress of its capabilities over time. Hence the division into three main categories of artificial intelligence:

  • Artificial narrow intelligence (ANI), which has a narrow range of abilities;
  • Artificial general intelligence (AGI), which is on par with human capabilities;
  • Artificial superintelligence (ASI), which is more capable than a human.

Artificial Narrow Intelligence (ANI)

Artificial Narrow Intelligence (ANI), also referred to as “weak AI” or “narrow AI,” is the only type of AI humankind has implemented so far. ANI performs single tasks such as face recognition, speech recognition, voice assistance, driving a car, and much more. It is brilliant and efficient at the specific job it was designed for. Therefore, we can say that ANI is goal-oriented.

While ANI-based machines may appear intelligent, they operate within a narrow range of constraints, which is why we can commonly refer to this type as “weak AI.” ANI does not mimic or replicate human intelligence. Instead, it simulates human behavior based on a narrow range of parameters and contexts. 

Consider the speech and language recognition of virtual assistants, self-driving cars, vision recognition, and recommendation systems that suggest ads based on your browsing history. As you can see, these systems can only learn or perform their specific tasks.

ANI has seen many breakthroughs in the past decades, fueled by advances in ML and DL. For example, AI systems are used in medicine today to diagnose cancer and other diseases with remarkable accuracy by approximating aspects of human cognition and reasoning.

Part of ANI’s machine intelligence comes from natural language processing (NLP), visible in chatbots and similar technologies. Thanks to NLP, AI can interact with people in a natural, personalized way by understanding speech and text in natural language.
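To make the narrowness concrete, here is a toy sketch of the kind of pattern matching behind the simplest chatbots. The keywords and responses are invented for illustration; real NLP systems use statistical language models rather than hand-written keyword rules.

```python
# A toy chatbot: its "understanding" is just keyword matching,
# which is why such systems stay firmly in narrow-AI territory.

RESPONSES = {
    "hello": "Hi there! How can I help you?",
    "price": "Our plans start at $10/month.",
    "bye":   "Goodbye, have a great day!",
}

def reply(message: str) -> str:
    """Return the first matching canned answer, or a fallback."""
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "Sorry, I didn't understand that."

print(reply("Hello, bot!"))        # Hi there! How can I help you?
print(reply("What's the price?"))  # Our plans start at $10/month.
```

Anything outside the hard-coded keywords falls through to the fallback answer, which is exactly the brittleness the article describes.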

ANI can either be reactive or have limited memory. Reactive AI is incredibly basic. It has no memory or data storage capabilities, emulating the human mind’s ability to respond to different kinds of stimuli without prior experience. On the other hand, limited memory AI is more advanced, equipped with data storage and learning capabilities that enable machines to use historical data to inform decisions.
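The reactive vs. limited-memory distinction can be sketched in a few lines. The thermostat scenario and thresholds below are hypothetical, chosen only to show the difference: the reactive agent maps each stimulus directly to an action, while the limited-memory agent also consults stored history.

```python
# Reactive: no memory, the same input always yields the same action.
def reactive_agent(temperature: float) -> str:
    return "cool" if temperature > 22.0 else "heat"

# Limited memory: decisions also use stored past observations.
class LimitedMemoryAgent:
    def __init__(self):
        self.history = []  # past temperature readings

    def act(self, temperature: float) -> str:
        self.history.append(temperature)
        recent = self.history[-3:]           # average the last few readings
        avg = sum(recent) / len(recent)
        return "cool" if avg > 22.0 else "heat"

print(reactive_agent(25.0))  # cool
agent = LimitedMemoryAgent()
for t in (25.0, 21.0, 19.0):
    print(agent.act(t))      # cool, cool, heat
```

Note how the limited-memory agent still says “cool” at 21.0 °C because its recent history is warm; the reactive agent would have flipped immediately.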

Today’s ANI techniques fall into two categories: symbolic AI and ML.

Symbolic AI, also known as good old-fashioned AI (GOFAI), was the dominant area of research throughout much of AI history. Symbolic AI requires developers to carefully define the rules that control the behavior of an intelligent system. As a result, it lends itself to applications where the environment is predictable and the rules are clear. While symbolic AI has fallen somewhat out of favor in recent years, many applications today are still rule-based systems.
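A minimal symbolic-AI sketch, with a hypothetical loan-screening scenario invented for illustration: the behavior is fully determined by hand-written rules, so it only works while the environment matches the developer's assumptions.

```python
# Symbolic (rule-based) AI: every case is spelled out by the developer.
def loan_decision(income: float, debt: float, has_defaulted: bool) -> str:
    if has_defaulted:
        return "reject"          # hard rule: past default
    if debt > income * 0.5:
        return "reject"          # hard rule: debt ratio too high
    if income >= 30_000:
        return "approve"
    return "review"              # anything else goes to a human

print(loan_decision(income=50_000, debt=10_000, has_defaulted=False))  # approve
print(loan_decision(income=20_000, debt=1_000, has_defaulted=False))   # review
```

The system is transparent and predictable, but a case the developer never anticipated simply falls through the rules, which is the weakness the next paragraphs contrast with learning from data.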

Machine learning, the other branch of ANI, develops intelligence through examples. A developer of a machine learning system creates a model and then “trains” it by providing many examples. The ML algorithm processes the samples and builds a mathematical representation of the data that can perform prediction and classification tasks.
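In contrast to hand-written rules, the sketch below “learns” from examples. It uses a 1-nearest-neighbor classifier, one of the simplest ML techniques, and toy data invented for illustration: the decision rule comes entirely from the training samples, not from rules a developer wrote.

```python
import math

def train(examples):
    """'Training' for nearest-neighbor is just storing labeled samples."""
    return list(examples)

def predict(model, point):
    """Classify a point by the label of the closest stored example."""
    nearest = min(model, key=lambda ex: math.dist(ex[0], point))
    return nearest[1]

# Labeled samples: (feature vector, label)
samples = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
           ((5.0, 5.0), "dog"), ((4.8, 5.2), "dog")]
model = train(samples)
print(predict(model, (1.1, 0.9)))  # cat
print(predict(model, (5.1, 4.9)))  # dog
```

Swapping in different training data changes the behavior without touching the code, which is exactly the “intelligence through examples” idea described above.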
 

Most AI systems are limited memory AI systems, where machines use large volumes of data for DL. DL enables personalized AI experiences, for example, virtual assistants or search engines that store your data and personalize your future experiences. 

Examples of ANI:

  • Search engines like Google Search and others.
  • Virtual assistants like Siri by Apple, Alexa by Amazon, and Cortana by Microsoft. 
  • Image / object recognition systems.
  • Disease mapping and prediction tools.
  • Manufacturing and drone robots.
  • Email spam filters and social media monitoring tools for harmful content.
  • Entertainment or marketing content recommendations based on watch/listen/purchase behavior.
  • Self-driving cars like Tesla.

Artificial General Intelligence (AGI)

Artificial general intelligence (AGI) is also referred to as “strong AI” or “deep AI.” It is the concept of a machine that mimics human intelligence and behaviors, with the ability to learn and solve any problem. AGI could think, understand, and act indistinguishably from a human in any situation.

Researchers and scientists have not yet reached the level of strong AI. To be successful, they would have to find a way to make machines conscious by endowing them with the complete set of cognitive abilities. In addition, they would need to take experiential learning to the next level to improve performance in single tasks and to be able to apply knowledge to a broader range of problems.

Strong AI uses a theory-of-mind AI framework, which refers to the ability to discern other intelligent entities’ needs, emotions, beliefs, and thought processes. Theory-of-mind AI is not about replication or simulation; it is about training machines to truly understand humans.

The immense challenge of achieving strong AI is not surprising, considering that the human brain is the model for general intelligence. The lack of comprehensive knowledge of the brain’s functionality leaves researchers struggling to replicate even essential functions such as sight and movement.

It is difficult to determine whether or not humankind will achieve strong AI in the foreseeable future. However, as image and object recognition technology advances, we will likely see an improvement in the ability of machines to learn and see.

Artificial Super Intelligence (ASI)

Artificial superintelligence (ASI) is a hypothetical artificial intelligence that does not merely mimic or understand human intelligence and behaviors. ASI is the point in the development of AI where machines become self-aware and exceed the intelligence and abilities of humankind.

Superintelligence has long been the muse of dystopian science fiction, in which robots conquer, overthrow, and enslave humanity. However, the ASI concept assumes that AI evolves so close to human emotions and experiences that it understands them, and in interaction develops its own feelings, needs, beliefs, and desires.

In addition to replicating the multi-faceted intelligence of human beings, ASI would theoretically be exceedingly better at everything humankind does. In every aspect, i.e., science, sports, art, hobbies, emotional relationships, ASI would have a greater memory and a faster ability to process and analyze data and stimuli. Consequently, super-intelligent beings’ decision-making and problem-solving capabilities would be far superior to those of human beings.

The potential to have such powerful machines at your disposal may seem appealing. Still, the concept itself has many unknown consequences. For example, if self-aware, super-intelligent beings arose, they would be capable of ideas such as self-preservation. The impact this will have on humanity, our survival, and our way of life is pure speculation.

Strong AI vs. weak AI

As McCarthy and his colleagues envisioned it, AI is a system that can learn tasks and solve problems without being explicitly instructed in every detail. It should also be able to reason, abstract, and quickly transfer knowledge from one field to another.

Developing an AI system that meets these requirements is very difficult, as researchers have learned over decades of work. As a result, the original vision of AI, computers that mimic the human thinking process, became known as AGI.

As I presented above, AGI is “a machine capable of understanding or learning any intellectual task that a human can perform.” Unfortunately, scientists, researchers, and thought leaders believe that AGI is still at least decades away.

But in their continued endeavors to fulfill the dream of creating thinking machines, scientists have invented all sorts of valuable technologies. Narrow AI is the umbrella term that encompasses all these technologies. 

Narrow AI systems are good at performing a single task or a limited range of functions. In many cases, they even outperform humans in their specific domains. But as soon as they meet a situation that falls outside their problem space, they fail. They also can’t transfer their knowledge from one field to another.

What is the future of AI?

That is a burning question. Can we achieve AGI or ASI? Is it even possible? Optimistic experts believe that AGI and ASI are possible, but it remains challenging to determine how far we are from achieving these levels of AI.

The line between ordinary computer programs and AI is blurry. It is relatively easy to mimic narrow elements of human intelligence and behavior; creating a synthetic version of human consciousness is another matter altogether. And while AI is still in its infancy and strong AI has long been considered science fiction, breakthroughs in ML and DL suggest that we may need to think more seriously about the possibility of achieving AGI.

It is daunting to contemplate a future in which machines are better than humans at human things. Moreover, we cannot accurately predict the impact of AI advances on our future world; even whether AI could eradicate problems like disease and poverty is not yet understood.

Scientists agree that none of the AI technologies we have today have the ingredients required for AGI. Nor do they agree on the next step needed to move beyond ANI. Some of the proposals for expanding ANI’s capabilities are as follows:

  • Cognitive scientist Gary Marcus proposes to create hybrid AI systems that combine rule-based systems with neural networks. There are already some working examples that show that neuro-symbolic AI systems can overcome the data constraints of narrow AI. “There are plenty of first steps towards building architectures that combine the strengths of the symbolic approaches with insights from machine learning, to develop better techniques for extracting and generalizing abstract knowledge from large, often noisy data sets,” Marcus writes.
  • Richard Sutton, computer scientist and the co-author of a seminal book on reinforcement learning, believes that the solution to move beyond narrow AI is to continue to scale learning algorithms. Sutton argues that the AI industry owes its advances to the “continued exponentially falling cost per unit of computation” and not our progress in coding the knowledge and reasoning of the human mind into computer software.
  • Deep learning pioneer Yoshua Bengio spoke of system 2 DL at the NeurIPS 2019 conference. According to Bengio, system 2 DL algorithms will perform some form of variable manipulation without the need to have integrated symbolic AI components. “We want to have machines that understand the world, that build good world models, that understand cause and effect, and can act in the world to acquire knowledge,” Bengio says.
  • Yann LeCun, another deep learning pioneer, spoke of self-supervised learning at AAAI Conference 2020. Self-supervised learning AI should learn by observing the world and without the need for tons of labeled data. “I think self-supervised learning is the future. This is what’s going to allow our AI systems, deep learning system, to go to the next level, perhaps learn enough background knowledge about the world by observation, so that some sort of common sense may emerge,” LeCun said in his speech at the AAAI Conference.

The road to AGI

ML describes the ability to find patterns and make decisions without explicit instructions or pre-programming: the power of computer systems to truly “learn” on their own. ML is therefore a subset of AI, not the other way around.

DL is a subset of ML that “learns” from unsupervised and unstructured data processed by neural networks, which are algorithms with brain-like structures.
 

Neural networks can evolve through both learning and inference. Training means running algorithms over data and improving the model over time as new data sources come online. Inference means that a machine can use the trained model, together with logical rules and deductive reasoning, to make predictions from the data it identifies as relevant.
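The learning loop can be sketched with a single perceptron, the simplest neural unit, trained here on the AND function. This is an illustrative toy, not how modern deep learning works at scale: real networks stack many such units and train by gradient descent on large datasets.

```python
def step(x):
    """Threshold activation: fire (1) if the weighted sum is positive."""
    return 1 if x > 0 else 0

weights = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

# Training examples for logical AND: ((input1, input2), target)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(20):                      # training epochs
    for (x1, x2), target in data:
        out = step(weights[0]*x1 + weights[1]*x2 + bias)
        err = target - out               # the error drives the weight update
        weights[0] += lr * err * x1
        weights[1] += lr * err * x2
        bias += lr * err

# Inference: apply the learned weights to all inputs.
print([step(weights[0]*a + weights[1]*b + bias) for (a, b), _ in data])
# -> [0, 0, 0, 1]
```

The split in the code mirrors the paragraph above: the loop is the training phase, and the final line is inference with the weights the network learned.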

Advances in ML and DL research facilitate the transition from ANI toward AGI by reducing the need for explicit instructions.

Step toward ASI

Artificial superintelligence (ASI) is a step beyond AGI: AI that exceeds human capabilities at every level. Since ASI is still hypothetical, there are no absolute limits to what it might achieve, from building nanotechnology to fabricating objects and preventing aging.

Many philosophers and scientists hold different theories about the feasibility of reaching ASI. For example, the cognitive scientist David Chalmers believes that once we achieve AGI, it will be relatively easy to expand its capabilities and performance to the point we would call ASI. Furthermore, according to Moore’s law, computing power roughly doubles every two years, which suggests there may be no hard limit to the power of the technology.

One of the obstacles on the way to ASI is the complexity of global problems. Can machines solve world hunger or stop climate change? ASI would also need an exceptional amount of data, even compared to AGI. Some believe that using genetic engineering to create a super-intelligent group of people is the most likely path to ASI; others believe ASI will emerge from the next generation of supercomputers.

Forecast summarizing the near future of AI

While there is still a long way to go before AGI and ASI, AI is advancing rapidly with discoveries and milestones emerging. Compared to human intelligence, AI promises to multitask and remember information perfectly, continuously operate without interruptions, perform calculations with record speed and high efficiency, sift through long records and documents, and make unbiased decisions.

As AI takes over more and more jobs, there are serious debates about AI ethics and whether governments should step in to monitor and regulate its growth. AI can alter relationships, increase discrimination, invade privacy, create security threats, and even end humanity as we know it.

These issues may seem daunting, but they make AI research even more intriguing and impactful.

So let’s wait and see what the future holds for us.

About the Author

Michal Gorgon – Software engineer with a master’s degree in Automation and Robotics.
At 4Soft, he is a senior specialist for embedded systems and IoT. In his daily work, he combines the world of hardware and software, which allows him to pursue his passion for automation, control, DSP, cryptography, AI and ML.

After work, he’s passionate about martial arts, motorization, offshore sailing, and modern electronic music.

 
