WHAT FUTURE FOR ARTIFICIAL INTELLIGENCE?


In some science fiction movies, men are replaced at work, little by little, by mechanoid robots. Vehicles circulate without drivers; in hospitals, surgical interventions are carried out by robotic machines more efficient than any human. Humanity slowly disappears as artificial intelligence takes over civilization.

Is it possible that this catastrophic scenario will one day take hold with no way to remedy it? In Stanley Kubrick's film "2001: A Space Odyssey", astronaut David Bowman still masters the HAL 9000 supercomputer and disconnects it; it is the victory of the human over an intelligence that is beginning to be stronger than he is. In a future world, David would be defeated, and the supercomputer would kill him without feeling before allowing itself to be disabled.

The danger is there, and it is beginning to worry everyone. Artificial intelligence has already taken off and is progressing faster than the scientists of the last century imagined. Today's scientists now warn of the disappearance of the human species and the final victory of an intelligence that builds its own neurons. If man ceases to exist, everything will be lost forever, and no one knows what the relationship would be between artificial intelligence and the solar system that, for now, offers us life.

Will semiconductors one day be able to harbor feelings? We are at the beginning of a crossroads where the questions still outnumber the answers. In a statement issued at the end of May, hundreds of scientists, experts and analysts said that a series of new AI-based machines are being developed so rapidly that new regulations on this growing technology are needed to prevent it from "getting out of hand." The European Union (EU) is already working on such regulations, to mark the border at which humans would cease to be the dominant factor.

At the beginning of the year, a thousand researchers signed a declaration calling for a six-month pause in the development of AI, on the grounds that it poses very profound risks to humanity. Microsoft and Google, for their part, indicated that they would not join that request and would continue working on the advancement of AI. ChatGPT was cited as an example, even though Sam Altman, head of the company that created it, has been called a "prophet of the apocalypse".

Of course, not all scientists believe that this "apocalypse" will occur. To express their observations, some have created the Center for AI Safety (CAIS) in San Francisco, which tries to define the potential benefits of AI when it is developed under strict controls rather than anarchically.

In contrast to the caution, at times dramatic, of those who watch the development of AI with concern, the CAIS affirms that the mission of modern science is to reduce the scale of the associated risks; that is, to control the margin within which the risks can outweigh the benefits.

CAIS director Dan Hendrycks says that alarm over AI progress is "even convenient", since some of it could lead to potential human catastrophe. This diverse set of opinions still lacks a synthesis. What can be said, specifically, is that everyone intends to keep developing AI, but they do not agree on the type of control measures, barriers or otherwise, needed to prevent it from dominating the human race.

These divergences are natural. We are at the beginning of a new era, of a technology and a science that may or may not have limits, although the tendency is for it not to be hindered. As happened with other exceptional developments, such as automotive mechanics, logarithms and quantum mechanics, there is a degree of confusion that, with great attention and prudence, can give humanity the correct direction to follow.
