Title | Artificial intelligence. Freefall |
---|---|
Author | Dzhimsher Chelidze |
ISBN | 9785006509900 |
– Create a culture of continuous improvement. Retrospectives foster a culture in which continuous improvement and adaptation are the norm. This helps the team to be more flexible and ready for changes.
In general, retrospectives are an important tool for moving from a situation of uncertainty to more structured and orderly processes. They help the team adapt to changes, improve their processes, and achieve their goals.”
Yes, the model's answer is academic, and it is clear that these are excerpts from articles rather than genuine reflections. But this is huge progress, and you can already work with the model and use it as a tool.
However, even though only weak AI is available today and a more or less strong one remains a dream, researchers already distinguish super-strong AI (ASI, Artificial Superintelligence) in their classifications.
This is an AI that:
– solves both routine and creative tasks;
– instantly navigates uncertainty, even without a network or internet connection;
– adapts its problem solving to the context, the circumstances, and the available opportunities and resources;
– understands people’s emotions (not only from a text query, but also from the analysis of facial expressions, voice timbre, and other signals) and takes them into account in its work;
– can interact with the real world on its own to solve problems.
This is the kind of AI that, so far, we only see in science fiction movies. Even AI itself writes about ASI as a “hypothetical concept” and “a subject of science fiction and active research in the field of artificial intelligence” (YandexGPT 3 Pro). It is a desired point in the distant future that cannot yet be reached.
ChatGPT-4 describes ASI as follows: “a hypothetical artificial intelligence system with intelligence superior to human. This is not just an advanced AI, but a system that can adapt, learn, and develop independently, exceeding human abilities in learning, problem solving, cognitive functions, and self-improvement.
Super-strong AI, or ASI, will be able to understand and process many types of data (text, images, sound, video), which will allow it to perform versatile tasks and make decisions. It will use advanced AI technologies such as large language models (LLMs), multimodal neural networks, and evolutionary algorithms.
Currently, ASI remains a conceptual and speculative stage in the development of AI, but it represents a significant step forward from the current level of AI.”
And if there are now hundreds of weak AIs, one for each task, then there will be only dozens of strong AIs (most likely divided by domain; we will consider this in the next block), and super-strong AI will be one per state, or even one for the entire planet.
Limitations on the path to strong AI
To be honest, I have little faith in the rapid emergence of a strong or super-strong AI.
First, this is a very costly and complex task from the point of view of regulatory restrictions. The era of uncontrolled AI development is ending. More and more restrictions will be imposed on it. We’ll discuss AI regulation in a separate chapter.
The key trend is a risk-based approach, and under such an approach strong and super-strong AI will sit at the highest risk level. This means that legislative measures will also be protective.
Second, this is a difficult task from a technical point of view, and a strong AI will be very vulnerable.
Now, in the mid-2020s, creating and training a strong AI requires huge computing power. According to Leopold Aschenbrenner, a former OpenAI employee from the Superalignment team, it would require building a data center worth a trillion US dollars, whose power consumption would exceed all current electricity generation in the United States.
We also need AI models that are orders of magnitude more complex than the current ones, and combinations of them (not just an LLM for query analysis). In other words, we would have to exponentially increase the number of neurons, build connections between them, and coordinate the work of the various segments.
At the same time, it should be understood that human neurons can be in several states and can be activated “in different ways” (biologists will forgive me such simplifications), while machine AI is a simplified model that cannot do this. Simply put, a machine’s 80-100 billion neurons are not equal to a human’s 80-100 billion. The machine will need more neurons to perform similar tasks. GPT-4 itself is estimated at 100 trillion parameters (roughly, neurons), and it is still inferior to humans.
All this leads to several factors.
The first factor is that increasing complexity always leads to reliability problems, and the number of failure points increases.
Complex AI models are difficult both to create and to protect from degradation over time, during operation. AI models need to be constantly “serviced”: if this is not done, a strong AI will begin to degrade, and its neural connections will break down. This is a normal process: any complex neural network that is not constantly developing begins to destroy unused connections. At the same time, maintaining connections between neurons is a very expensive task, and AI will always optimize and search for the most efficient solution to the problem, which means it will start switching off unnecessary energy consumers.
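The “switching off unnecessary energy consumers” described above is loosely analogous to a real technique called magnitude pruning, where a network's weakest connections are zeroed out to save compute. A minimal, illustrative sketch (this is my analogy, not a method from the book):

```python
def magnitude_prune(weights, fraction):
    """Return a copy of `weights` with the smallest-magnitude
    `fraction` of entries set to zero (illustrative toy pruning)."""
    n_prune = int(len(weights) * fraction)
    # indices of weights sorted by absolute value, smallest first
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = list(weights)
    for i in order[:n_prune]:
        pruned[i] = 0.0  # this connection is "switched off"
    return pruned

# The two weakest connections (0.01 and -0.05) are removed:
print(magnitude_prune([0.9, -0.05, 0.4, 0.01, -0.7], 0.4))
```

In real systems this trade-off is exactly the one described: pruning saves resources, but prune too aggressively and the network loses capabilities it used to have.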
That is, such an AI would resemble an old man with dementia, and its “life” span would be greatly reduced. Imagine what a strong AI could do with its capabilities while suffering from memory loss and sudden regressions to a childlike state. Even for current AI solutions this is a real problem.
Let’s give a couple of simple real-life examples.
You can compare building a strong AI to training human muscles. When we first start working out in the gym and take up strength training or bodybuilding, progress is fast, but the further we go, the lower the efficiency and the smaller the gains. More and more resources (time, exercise, and energy from food) are needed to progress; even just holding your form becomes harder and harder. Moreover, strength grows with the muscle’s cross-sectional area, while mass grows with its volume. As a result, at some point the muscle would become so heavy that it could not move itself, and might even damage itself.
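The muscle analogy rests on the square-cube law: scale a muscle up by a linear factor k, and its strength (proportional to cross-sectional area) grows as k², while its mass (proportional to volume) grows as k³. A toy calculation, purely for illustration:

```python
def strength_to_mass_ratio(k):
    """Relative strength-to-mass ratio of a muscle scaled up
    uniformly by linear factor k (square-cube law)."""
    strength = k ** 2  # proportional to cross-sectional area
    mass = k ** 3      # proportional to volume
    return strength / mass  # simplifies to 1 / k

# Doubling every dimension halves strength per unit of mass:
for k in (1.0, 2.0, 4.0):
    print(k, strength_to_mass_ratio(k))
```

The ratio falls as 1/k, which is why simply scaling a system up eventually makes it unable to carry its own weight.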
Another example of this kind of complexity, this time from engineering, is Formula 1 racing. A 1-second lag can be eliminated by investing 1 million and 1 year. But winning back the crucial 0.2 seconds may already take 10 million and 2 years of work, and the fundamental limitations of the car’s design may force you to reconsider the whole concept of the racing car.
Ordinary cars show exactly the same pattern. Modern cars are more expensive to create and maintain, and without special equipment you cannot change even a light bulb. As for modern hypercars, after every outing they require entire teams of technicians for maintenance.
If you look at it from the point of view of AI development, there are two key parameters in this area:
– the number of layers of neurons (the depth of the AI model);
– the number of neurons in each layer (the width of the layer).
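A minimal sketch of how these two parameters determine model size, using a plain fully connected network as an example (the input/output sizes below are arbitrary, chosen only for illustration):

```python
def mlp_param_count(n_inputs, width, depth, n_outputs):
    """Number of parameters (weights + biases) in a fully connected
    network with `depth` hidden layers of `width` neurons each."""
    sizes = [n_inputs] + [width] * depth + [n_outputs]
    # each layer contributes (inputs * outputs) weights + outputs biases
    return sum(a * b + b for a, b in zip(sizes, sizes[1:]))

print(mlp_param_count(784, 128, 2, 10))  # baseline
print(mlp_param_count(784, 256, 2, 10))  # twice as wide
print(mlp_param_count(784, 128, 4, 10))  # twice as deep
```

Note the asymmetry: doubling the width grows the hidden-layer parameter count roughly quadratically, while doubling the depth adds layers only linearly, which is part of why scaling decisions are a genuine design trade-off.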
Depth determines how strong the AI’s capacity for abstraction is. Insufficient model depth makes deep, systemic analysis impossible and leads to superficial analysis and judgments.
The width of a layer determines the number of parameters / criteria that the neural network can use at that layer. The more of them there are, the