Artificial Intelligence

What is artificial intelligence?

Artificial intelligence is an area of computing that enables computers to interpret external data, learn from it and imitate human intelligence in the execution of specific tasks.

Facial recognition performed by A.I.

A few years ago, tasks now performed by A.I., such as product stock control in companies, route-planning applications, facial recognition, analysis of consumer behavior and advertising on digital billboards, would not have been possible. It all seemed far removed from reality, straight out of science fiction.

Let’s understand how these fascinating algorithms work. Shall we?

History of A.I.

To understand A.I., we first need to know that the concept is not new. In fact, attempts to build intelligent functionality around the emerging computer date back to the 1940s, the decade of the Second World War, which greatly accelerated technological and military research.

Warren McCulloch and Walter Pitts

In 1943, Warren McCulloch and Walter Pitts presented the paper that described neural networks for the first time: artificial reasoning structures in the form of mathematical models that mimic our nervous system.
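
To make the idea concrete, here is a minimal sketch of a McCulloch-Pitts style neuron in Python: a unit that sums binary inputs and “fires” when the sum reaches a threshold. The thresholds below are illustrative choices for this toy, not values taken from the original paper.

```python
# Minimal sketch of a McCulloch-Pitts style neuron (illustrative values only).
def mcculloch_pitts_neuron(inputs, threshold):
    """Fire (return 1) when the number of active binary inputs reaches the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# With two inputs, a threshold of 2 behaves like logical AND,
# and a threshold of 1 behaves like logical OR.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2,
              "AND:", mcculloch_pitts_neuron([x1, x2], 2),
              "OR:", mcculloch_pitts_neuron([x1, x2], 1))
```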

In the 1950s, the term A.I. was coined by John McCarthy of MIT. We can define it as the construction of computer programs that engage in tasks which, for the time being, are performed more satisfactorily by human beings, because they require high-level mental processes such as perceptual learning, memory organization and critical reasoning.

John McCarthy, one of the first to use the term “artificial intelligence”

The field also took conceptual shape in the 1950s at what is now Carnegie Mellon University, where Herbert Simon and Allen Newell, two of the fathers of this science, created one of the first academic laboratories dedicated to A.I.

Herbert Simon and Allen Newell

Many scientists were already studying the subject at the time, but by the 1970s A.I. research had cooled down due to the technical limitations of the era, such as the lack of computer memory.

This “A.I. winter” lasted until the early 1980s, and over the following years that reality changed thanks to innovations in algorithms, the revival of the neural network techniques that would later grow into deep learning, and increased funding for research in the area.

In 1997, IBM’s Deep Blue project managed to beat the virtually invincible Garry Kasparov, then world chess champion. In 2011, Watson, also from IBM, won Jeopardy!, a US TV quiz show, searching for its answers across roughly 200 million pages of content. Incredible, right?

IBM’s Deep Blue managed to beat the virtually invincible Garry Kasparov, world chess champion.

As early as the 1960s, electronic systems needed to deal with some degree of uncertainty in their variables, and fuzzy logic was used for this. It evaluates the degree to which an input variable belongs to one or more fuzzy sets and uses that membership to calculate the output variables.

It is still widely used today in decision support systems, controllers and all kinds of multi-valued analysis applications. It can be used to imitate a human decision-making process, and it was one of the first A.I. techniques implemented in electronic devices. Many air-conditioning systems use fuzzy controllers, which are considered intelligent devices.
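
As a rough illustration of the idea, the sketch below computes triangular membership degrees for a temperature reading and blends two cooling rules, a common textbook pattern for a fuzzy air-conditioning controller. The set boundaries and output power levels are made-up values for the example, not taken from any real device.

```python
# Rough sketch of fuzzy reasoning for an air-conditioning controller.
# Membership boundaries and output power levels are illustrative values.

def triangular(x, a, b, c):
    """Degree (0..1) to which x belongs to a triangular fuzzy set peaking at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def cooling_power(temp_c):
    """Blend two rules: 'warm -> medium cooling' and 'hot -> strong cooling'."""
    warm = triangular(temp_c, 20, 26, 32)   # membership in "warm"
    hot = triangular(temp_c, 28, 36, 44)    # membership in "hot"
    if warm + hot == 0:
        return 0.0                          # neither rule fires: no cooling
    # Weighted average of the rule outputs (40% and 90% power).
    return (warm * 40 + hot * 90) / (warm + hot)

for t in (22, 27, 30, 35):
    print(f"{t} °C -> {cooling_power(t):.1f}% cooling power")
```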

After several other milestones the term A.I. became popular, but what we have today goes far beyond what we saw in those decades. Let’s understand how the structure of an A.I. system works:

When we talk about artificial intelligence, we are also talking about two technologies that sit inside A.I.: Machine Learning and Deep Learning. Neither is mandatory for a system to count as A.I.

Machine Learning: this is the technology whereby computers gain the ability to learn the expected answers through associations between different kinds of data. It doesn’t matter whether it’s images, numbers or any other kind of information that can be identified.

Deep Learning: this is an area inspired by the structure and function of the brain, built on what are called neural and predictive networks. It is used to teach the computer to learn, so that it can then recognize and predict certain patterns.
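
To make these definitions a little more concrete, here is a minimal sketch of a tiny neural network learning a pattern from examples, using scikit-learn’s MLPClassifier on the classic XOR problem. The layer size, activation and solver are arbitrary choices for this toy, not a recommended recipe.

```python
# Tiny neural network learning the XOR pattern from examples (toy illustration).
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # XOR: true when exactly one input is 1

# One small hidden layer; lbfgs tends to work well on tiny datasets.
net = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                    solver="lbfgs", random_state=0, max_iter=5000)
net.fit(X, y)

print(net.predict(X))  # expected: [0 1 1 0] once the network has learned the pattern
```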

That’s not all: it isn’t mandatory either, but some machine learning systems need to work with very large data sets, which is where another term comes in: big data.

To work with a machine learning system, you need a data set. Big data technologies allow that data to be virtualized so it can be stored more efficiently and economically.

Big data also helps to improve network speed and reliability, removing other physical limitations associated with managing large amounts of data.

Artificial intelligence is like the whole universe of computing technology that exhibits anything resembling human intelligence. It can be as simple as an application that solves problems by making decisions from a list of complex hand-written rules, or IF/THEN logic.
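
As a toy illustration of that rule-based end of the spectrum, here is a minimal sketch of an IF/THEN decision system for a made-up loan approval scenario. Nothing here is learned from data; every decision comes from rules someone wrote by hand, and the thresholds are invented for the example.

```python
# Toy rule-based "A.I.": every decision comes from hand-written IF/THEN rules.
# The thresholds and rules are made-up values for illustration.
def approve_loan(income, debt, has_defaulted):
    if has_defaulted:
        return "deny"
    if income < 2000:
        return "deny"
    if debt / income > 0.5:
        return "review manually"
    return "approve"

print(approve_loan(income=3500, debt=1000, has_defaulted=False))  # approve
print(approve_loan(income=3500, debt=2500, has_defaulted=False))  # review manually
print(approve_loan(income=1500, debt=100, has_defaulted=False))   # deny
```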

Machine Learning is a subset of artificial intelligence that learns on its own as it receives more data, so that it can carry out specific tasks with increasing precision.

Deep Learning, in turn, is a subset of Machine Learning that learns to perform specific tasks accurately on its own and evolves without the need for human intervention.

Although all Machine Learning is part of artificial intelligence, not all artificial intelligence makes use of Machine Learning.
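
To contrast the two approaches, here is a minimal sketch of the learning side: instead of hand-written rules like the loan example above, a scikit-learn decision tree infers its own decision rules from labeled examples. The numbers and labels are made-up toy data, and a decision tree is just one of many possible models.

```python
# The learning counterpart to the hand-written rules above:
# the model infers its own decision rules from labeled examples.
from sklearn.tree import DecisionTreeClassifier

# Made-up training data: [income, debt] -> past decision
X = [[3500, 1000], [4200, 500], [1500, 100], [3000, 2400], [1800, 900]]
y = ["approve", "approve", "deny", "review", "deny"]

model = DecisionTreeClassifier(random_state=0).fit(X, y)
print(model.predict([[3600, 800]]))  # the model decides based on what it learned
```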

A.I. algorithms are capable of reasoning, planning and processing based on logical and statistical computational methods, but these abilities are limited to the specific task each algorithm was built for.

Ex:

If you create an algorithm to identify how many people are in a photo and whether they are smiling, it won’t work for identifying something else, such as whether the sky in the photo is blue or cloudy; for that, you need to create another algorithm.
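
The sketch below illustrates that narrowness with two separate scikit-learn classifiers, each trained only for its own question. The “image features” and labels are tiny made-up stand-ins for real image data.

```python
# Sketch: two narrow models, each only "knows" the task it was trained for.
# Features and labels are tiny made-up stand-ins for real image data.
from sklearn.tree import DecisionTreeClassifier

# Task 1: is the person in the photo smiling? (fake 2-number "image features")
smile_X = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
smile_y = ["smiling", "smiling", "not smiling", "not smiling"]
smile_model = DecisionTreeClassifier().fit(smile_X, smile_y)

# Task 2: is the sky blue or cloudy? A *separate* model with its own training data.
sky_X = [[0.1, 0.9], [0.2, 0.7], [0.9, 0.2], [0.8, 0.1]]
sky_y = ["blue", "blue", "cloudy", "cloudy"]
sky_model = DecisionTreeClassifier().fit(sky_X, sky_y)

photo_features = [[0.85, 0.15]]
print(smile_model.predict(photo_features))  # only ever answers about smiles
print(smile_model.classes_)                 # no notion of "sky" exists in this model
print(sky_model.predict(photo_features))    # a different question needs a different model
```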

We can divide A.I. into two categories: strong and weak.

Strong A.I. – Artificial General Intelligence (AGI): this is the A.I. that most closely resembles the autonomy of the human brain, able to solve many kinds of problem and even to choose which problems it wants to solve. It is still theoretical and there are no practical examples of it in use, although Elon Musk believes we may already be on a dangerous path of no return with artificial intelligence.

Weak A.I. – Narrow A.I.: this is the A.I. we know and use most in our daily lives. It is focused on and trained to perform a specific task. We call it “weak”, but in a way that label is unfair, because this type of A.I. powers many of the technologies we use every day, such as Google Photos, some smartphone cameras and many others.

Ex: Apple’s Siri or Amazon’s Alexa

Whether it’s a camera automatically adjusting the lighting and settings when you photograph food, a system trying to predict what companies will do in the future, or a model generating code on its own, A.I. is literally present in our daily lives in a way that would be impossible to remove, and the trend is for it to help human beings with more and more tasks and make our lives easier.

Are you afraid of A.I., or in favor of it?
