The 3 stages of artificial intelligence: which one we are in and why many think the third could be fatal

Since its launch at the end of November 2022, ChatGPT, the chatbot that uses artificial intelligence (AI) to answer questions or generate text on demand, has become the fastest-growing internet application in history.

In just two months it reached 100 million active users. The popular app TikTok took nine months to reach that milestone, and Instagram two and a half years, according to data from the technology monitoring company Sensor Town.

“In the 20 years we’ve been following the internet, we can’t recall a faster growth of a consumer internet application,” said analysts at UBS, who reported the record in February.

The massive popularity of ChatGPT, developed by OpenAI with financial backing from Microsoft, has sparked all kinds of discussion and speculation about the impact that generative artificial intelligence is already having, and will have, on our near future.

Generative AI is the branch of AI dedicated to producing original content from existing data (usually taken from the internet) in response to a user’s instructions.

The texts (from essays, poetry and jokes to computer code) and images (diagrams, photos, artwork in any style and much more) produced by generative AIs such as ChatGPT, DALL-E, Bard and AlphaCode (to name just a few of the best known) are, in some cases, so indistinguishable from human work that thousands of people have already used them to replace their usual work.

From students who use them to do their homework, to politicians who entrust their speeches to them (Democratic representative Jake Auchincloss debuted the practice in the US Congress), to photographers who invent snapshots of things that never happened, and even win awards for it, like the German Boris Eldagsen, who took first place in the latest Sony World Photography Awards with an image created by AI.

This very article could have been written by a machine and you probably wouldn’t know it.

The phenomenon has led to a human resources revolution, with companies like tech giant IBM announcing that it will stop hiring people for some 8,000 jobs that could be handled by AI.

A report by investment bank Goldman Sachs estimated in late March that AI could replace a quarter of all human jobs today, although it would also create new jobs and boost productivity.

[Image: mechanical arms of industrial robots (Getty Images)]
The more AI advances, the greater its ability to replace our work.

If all these changes overwhelm you, prepare yourself for a fact that could be even more disconcerting.

The fact is that, for all its impact, what we are experiencing now is only the first stage in the development of AI.

According to experts, what could come soon, the second stage, will be much more revolutionary.

And the third and final stage, which could come very shortly after that, is so advanced that it would completely alter the world, even at the cost of human existence.

The three stages

AI technologies are classified by their ability to mimic human characteristics.

1. Artificial Narrow Intelligence (ANI)

The most basic category of AI is better known by its acronym: ANI, for Artificial Narrow Intelligence.

It is so named because it narrowly focuses on a single task, performing repetitive work within a range predefined by its creators.

ANI systems are generally trained using a large data set (for example from the internet) and can make decisions or take actions based on that training.
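
To make that pattern concrete, here is a minimal, hypothetical Python sketch (not code from any product mentioned in this article): a model is trained once on a small, fixed data set for a single invented task, and afterwards it can only answer questions of that one kind.

```python
# A toy illustration of the ANI pattern: train on a fixed data set,
# then make decisions only within that single, narrow task.
# The task, data and labels are invented for this example.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: [hours of daylight, cloud cover %] -> label
X = [[8, 90], [14, 10], [9, 80], [13, 20]]
y = ["rain", "no rain", "rain", "no rain"]

model = DecisionTreeClassifier().fit(X, y)

# The system can answer exactly this one kind of question...
print(model.predict([[10, 85]]))  # e.g. ['rain']
# ...but nothing outside the range its creators defined: it cannot
# play chess, write an essay, or do anything else.
```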

An ANI can match or exceed human intelligence and efficiency but only in that specific area in which it operates.

One example is chess programs that use AI: they can beat the world chess champion, but they cannot perform any other task.

[Image: a machine playing chess (Getty Images)]
ANI can outperform humans, but only in a specific area.

That is why it is also known as “weak AI”.

All programs and tools that use AI today, even the most advanced and complex ones, are forms of ANI. And these systems are everywhere.

Smartphones are full of apps that use this technology, from GPS maps that can locate you anywhere in the world or tell you the weather, to music and video programs that know your tastes and make recommendations.

Virtual assistants like Siri and Alexa are also forms of ANI, as are the Google search engine and the robot that cleans your house.

The business world also makes heavy use of this technology. It is used in the onboard computers of cars, in the manufacture of thousands of products, in the financial world and even in hospitals, to make diagnoses.

Even more sophisticated systems, such as driverless cars (or autonomous vehicles) and the popular ChatGPT, are forms of ANI, since they cannot operate outside the range predefined by their programmers or make decisions on their own.

They also do not have self-awareness, another trait of human intelligence.

However, some experts believe that systems programmed to learn automatically (machine learning), such as ChatGPT or AutoGPT (an “autonomous agent” or “intelligent agent” that uses output from ChatGPT to perform certain subtasks autonomously), could move to the next stage of development.
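
The “autonomous agent” idea can be sketched in a few lines. The outline below is a hypothetical illustration of the loop such agents run, not AutoGPT’s actual code; the ask_llm function is an invented stand-in for a call to a model like ChatGPT.

```python
# Minimal sketch of an "autonomous agent" loop: the model proposes its own
# sub-tasks toward a goal, executes them, and feeds the results back in.
# Real agents such as AutoGPT add memory, tools and safety checks.
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a language-model API call."""
    raise NotImplementedError

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    completed: list[str] = []
    for _ in range(max_steps):
        # The model itself decides the next sub-task toward the goal...
        subtask = ask_llm(f"Goal: {goal}\nDone so far: {completed}\nNext sub-task, or DONE?")
        if subtask.strip().upper() == "DONE":
            break
        # ...then carries it out and records the result for the next iteration.
        result = ask_llm(f"Perform this sub-task and report the result: {subtask}")
        completed.append(f"{subtask} -> {result}")
    return completed
```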

2. Artificial General Intelligence (AGI)

This category, Artificial General Intelligence, is reached when a machine acquires cognitive abilities at the human level.

That is, when it can perform any intellectual task that a person can.

[Image: a human hand and a robotic hand touching (Getty Images)]
AGI has the same intellectual capacity as a human.

It is also known as “strong AI”.

So strong is the belief that we are on the verge of reaching this level of development that last March more than 1,000 technology experts asked AI companies to pause, for at least six months, the training of programs more powerful than GPT-4, the latest version of ChatGPT.

“AI systems with intelligence that competes with humans can pose profound risks to society and humanity,” warned, in an open letter, Apple co-founder Steve Wozniak and Elon Musk, the owner of Tesla, SpaceX, Neuralink and Twitter (and one of the co-founders of OpenAI before resigning from its board over disagreements with the company’s leadership), among others.


In the letter, published by the nonprofit Future of Life Institute, the experts said that if companies do not quickly agree to halt their projects, “governments should step in and institute a moratorium” so that robust safety measures can be designed and implemented.

Although this has not happened, for the moment, the United States government did convene the heads of the main AI companies (Alphabet, Anthropic, Microsoft and OpenAI) to agree on “new actions to promote responsible AI innovation”.

“AI is one of the most powerful technologies of our time, but to take advantage of the opportunities it presents, we must first mitigate its risks,” the White House said in a statement on May 4.

The US Congress, for its part, summoned the CEO of OpenAI, Sam Altman, on Tuesday to answer questions about ChatGPT.

During the Senate hearing, Altman said it is “crucial” that his industry be regulated by the government as AI becomes “increasingly powerful.”

Carlos Ignacio Gutiérrez, a public policy researcher at the Future of Life Institute, explained to BBC Mundo that one of the great challenges posed by AI is that “there is no collegiate body of experts that decides how to regulate it, as happens, for example, with the Intergovernmental Panel on Climate Change (IPCC)”.

In their letter, the experts spelled out their main concerns.

“Should we develop non-human minds that could eventually outnumber us, outsmart us, make us obsolete and replace us?” they questioned.

“Should we risk losing control of our civilization?”

[Image: Sam Altman testifying before Congress on May 16 (Getty Images)]
Sam Altman, CEO of OpenAI, the creator of ChatGPT, supported government regulation of AI during a congressional hearing.

Which brings us to the third and final stage of AI.

3. Artificial Superintelligence (ASI)

The concern of these computer scientists stems from a well-established theory that holds that, once we reach AGI, we will shortly afterwards arrive at the final stage in the development of this technology: Artificial Superintelligence, which occurs when synthetic intelligence surpasses human intelligence.

Oxford University philosopher and AI expert Nick Bostrom defines superintelligence as “an intellect that is far more intelligent than the best human brains in virtually every field, including scientific creativity, general wisdom, and social skills.”

The theory is that when a machine achieves intelligence on a par with humans, its ability to multiply that intelligence exponentially through its own autonomous learning will cause it to vastly surpass us in a short time, reaching ASI.

“Humans who want to be engineers, nurses or lawyers must study for a long time. The issue with AGI is that it is immediately scalable,” says Gutiérrez.

This is thanks to a process called recursive self-improvement, which allows an AI application to “continually improve itself, in a time frame that we couldn’t.”
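
A deliberately crude toy model shows the arithmetic behind that argument. All the numbers below are invented assumptions, not predictions; the point is only that when the rate of improvement itself improves, capability compounds much faster than ordinary exponential growth.

```python
# Toy numerical sketch of "recursive self-improvement": the system improves,
# and its ability to improve also improves. All numbers are invented.
capability = 1.0   # 1.0 = rough human parity (AGI, by hypothesis)
rate = 0.10        # improvement per cycle, initially 10% (arbitrary)

for cycle in range(1, 16):
    capability *= 1 + rate   # the system gets better...
    rate *= 1 + rate         # ...and better at getting better
    if cycle % 5 == 0:
        print(f"cycle {cycle:2d}: {capability:10.1f}x baseline (rate {rate:.2f})")

# For comparison, a fixed 10% per cycle would reach only ~4.2x after 15 cycles;
# this toy model passes ~16,000x by cycle 15.
```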

While there is much debate about whether a machine can really acquire the kind of broad intelligence a human being has, especially when it comes to emotional intelligence, this is one of the things that most worry those who believe we are close to achieving AGI.

Recently, Geoffrey Hinton, the so-called “godfather of artificial intelligence” and a pioneer in research on the neural networks and deep learning that allow machines to learn from experience, just as humans do, warned in an interview with the BBC that we could be close to that milestone.

“Right now (the machines) are not smarter than us, from what I can see. But I think soon they could be,” said the 75-year-old, who just retired from Google.


Extinction or immortality

There are, broadly speaking, two schools of thought regarding ASI: those who believe this superintelligence will be beneficial to humanity, and those who believe the opposite.

Among the latter was the famous British physicist Stephen Hawking, who believed that super-intelligent machines posed a threat to our existence.

[Image: Stephen Hawking at the launch of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge on October 19, 2016 (Getty Images)]
The distinguished British physicist Stephen Hawking believed that super-intelligent AI could lead to the “end of the human race.”

“The development of full artificial intelligence could mean the end of the human race,” he told the BBC in 2014, four years before he died.

A machine with this level of intelligence would “take off on its own and redesign itself at an ever-increasing rate,” he said.

“Humans, who are limited by slow biological evolution, could not compete and would be outclassed,” he predicted.

On the opposite side, meanwhile, are the optimists.

One of ASI’s biggest enthusiasts is the American inventor, author and futurist Ray Kurzweil, an AI researcher at Google and co-founder of Silicon Valley’s Singularity University (“singularity” is another name for the era in which machines become super-intelligent).

Kurzweil believes that humans will be able to use super-intelligent AI to overcome our biological barriers, improving our lives and our world.

In 2015 he even predicted that by the year 2030 humans will be able to achieve immortality thanks to nanobots (extremely small robots) that will act inside our bodies, repairing and healing any damage or illness, including that caused by the passage of time.

In his statement to Congress on Tuesday, OpenAI’s Sam Altman was also optimistic about the potential of AI, noting that it could solve “humanity’s greatest challenges, like climate change and curing cancer.”

In the middle are people, like Hinton, who believe AI has enormous potential for humanity, but find the current pace of development, without clear limits and regulations, “worrying.”

In a statement sent to The New York Times announcing his departure from Google, Hinton said he now regretted the work he had done, because he feared that “bad actors” would use AI to do “bad things.”

[Image: Geoffrey Hinton (Reuters)]
Geoffrey Hinton told the BBC that advances in AI “scare” him.

Asked by the BBC, he gave this example of a “nightmare scenario”:

“Imagine, for example, that some bad actor like [Russian President Vladimir] Putin decided to give robots the ability to create their own sub-goals.”

The machines could eventually “create sub-goals like: ‘I need to get more power,’” which would pose an “existential risk,” he said.

At the same time, the British-Canadian expert said that in the short term, AI will bring many more benefits than risks, so “we must not stop developing it.”

“The issue is: Now that we’ve discovered that it works better than we expected a few years ago, what do we do to mitigate the long-term risks of things smarter than us taking over?”

Gutiérrez agrees that the key is to create an AI governance system before it develops an intelligence that can make its own decisions.

“If these entities are created and have their own motivations, what does it mean when we are no longer in control of those motivations?” he asks.

The expert points out that the danger is not only that an AGI or ASI, whether of its own volition or controlled by people with “bad objectives,” could start a war or manipulate the financial system, production, energy infrastructure, transportation or any other system that is now computerized.

A superintelligence could dominate us in a much more subtle way, he warns.

“Imagine a future where an entity has so much information about every person on the planet and their habits (thanks to our internet searches) that it could control us in ways we wouldn’t realize,” he says.

“The worst case scenario is not that there are human vs. robot wars. The worst thing is that we do not realize that we are being manipulated because we are sharing the planet with an entity that is much more intelligent than us.”

