| Title | Artificial intelligence. Freefall |
|---|---|
| Author | Dzhimsher Chelidze |
| ISBN | 9785006509900 |
Earlier, I described the “basic” problems of AI. Now let’s dig a little deeper into the specifics of generative AI.
– Companies’ concerns about their data
Any business strives to protect its corporate data and tries to prevent leaks by any means. This leads to two problems.
First, companies prohibit the use of online tools located outside the perimeter of the secure network, and any request to an online bot is a call to the outside world. There are many open questions about how the data is stored, protected, and used.
Second, this limits the development of AI in general. Companies want IT solutions from suppliers with AI recommendations based on trained models, which, for example, will predict equipment failures. But not everyone is willing to share their data. The result is a vicious circle.
However, a reservation must be made here. Some teams have already learned how to deploy GPT-3/3.5-level language models inside the company perimeter. But these models still need to be trained; they are not ready-made solutions. And internal security services will find risks and object.
– Complexity and high cost of development and subsequent maintenance
Developing any “general” generative AI is hugely expensive: tens of millions of dollars. In addition, such models need data, a lot of data. Neural networks are still inefficient learners: where ten examples are enough for a person, an artificial neural network needs thousands or even hundreds of thousands. That said, it can find relationships and process data arrays on a scale a person could never dream of.
But back to the topic. It is precisely because of this data restriction that ChatGPT “thinks” better if you communicate with it in English rather than Russian: the English-speaking segment of the Internet is much larger than the Russian-speaking one.
Add to this the cost of electricity, engineers, maintenance, repair, and modernization of equipment, and you get around $700,000 per day just to keep ChatGPT running. How many companies can spend such amounts with unclear prospects for monetization (more on this below)?
Yes, you can reduce costs if you develop a model and then strip out everything unnecessary, but then it will be a very narrowly specialized AI.
Therefore, most of the solutions on the market are actually GPT wrappers: add-ons built on top of ChatGPT.
– Public concern and regulatory constraints
Society is extremely concerned about the development of AI solutions. Government agencies around the world do not understand what to expect from them, how they will affect the economy and society, or how large-scale the technology’s impact will be. However, its importance cannot be denied. Generative AI made more noise in 2023 than ever before. These systems have proven that they can create new content that can be confused with human creations: texts, images, and scientific papers. It has even reached the point where AI can develop a conceptual design for microchips or walking robots in a matter of seconds.
The second factor is security. AI is actively used by attackers to target companies and people. Since the launch of ChatGPT, the number of phishing attacks has increased by 1,265%. Or, for example, with the help of AI you can get a recipe for making explosives: people come up with inventive schemes to bypass the built-in safety systems.
The third factor is opacity. Sometimes even the creators themselves do not understand how AI works. For such a large-scale technology, not understanding what AI can generate, and why, creates a dangerous situation.
The fourth factor is dependence on training resources. AI models are built by people and trained by people. Yes, there are self-learning models, but highly specialized ones will also be developed, and people will select the material for their training.
All this means that the industry will start to be regulated and restricted. No one knows exactly how. Add to this the well-known open letter of March 2023, in which prominent experts from around the world demanded that the development of AI be restricted.
– The lack of a model for interacting with chatbots
I assume you’ve already tried interacting with chatbots and were, to put it mildly, disappointed. Yes, it’s a cool toy, but what do you do with it?
You need to understand that a chatbot is not an expert but a system that tries to guess what you want to see or hear, and in the end gives you exactly that.
And to get practical benefit, you must be an expert in the subject area yourself. But if you are already an expert in your topic, do you need generative AI? And if you are not an expert, you will not get a solution to your question, which means there is no value, only generic answers.
As a result, we get a vicious circle: experts do not need it, and it will not help amateurs. Then who will pay for such an assistant? So in the end all we have is a toy.
Besides being an expert on the topic, you also need to know how to formulate a request correctly, and there are only a few such people. A new profession has even appeared: the prompt engineer. This is a person who understands how the machine “thinks” and can compose a query for it correctly. The market rate for such an engineer is about 6,000 rubles (roughly $60) per hour. And believe me, they won’t find the right query for your situation on the first try.
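What a prompt engineer actually does can be sketched as turning a vague question into a structured request. The helper below and its section names are purely illustrative, my own invention rather than any standard API; the point is only that explicit role, context, task, and output-format sections distinguish a crafted query from a casual one.

```python
def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Assemble a structured prompt from explicit sections (illustrative only)."""
    return (
        f"Role: {role}\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Output format: {output_format}"
    )

# The casual phrasing most users would type:
vague = "Why does our pump break?"

# The same question as a prompt engineer might structure it.
# All domain details here are made up for the demo.
structured = build_prompt(
    role="You are a reliability engineer for industrial pumps.",
    context="Centrifugal pump, 5 years in service, vibration rising for 2 weeks.",
    task="List the three most likely failure causes, ordered by probability.",
    output_format="A numbered list, one sentence per cause.",
)

print(structured)
```

The structured version gives the model the constraints it would otherwise have to guess, which is exactly the guessing that produces generic answers.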
Does business need such a tool? Will a business want to depend on very rare specialists who are even more expensive than programmers, given that ordinary employees will get no benefit from the tool on their own?
So, it turns out that the market for a regular chatbot is not just narrow, it is vanishingly small.
– The tendency to produce low-quality content and hallucinations
In the article “Artificial intelligence: assistant or toy?” I noted that neural networks simply collect data; they do not analyze facts or check their consistency. They are guided by whatever appears more often on the Internet or in their training database, and they do not evaluate what they write critically. As a result, generative AI easily produces false or incorrect content.
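The “whatever is more common wins” behavior can be shown with a toy sketch. This is not how real LLMs work internally, but it captures the same statistical idea: a bigram model that always picks the continuation it saw most often in its training text, with no notion of whether that continuation is true. The corpus and function names are invented for the illustration.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" where a popular myth outnumbers the true statement.
corpus = [
    "the moon is made of cheese",   # repeated myth dominates the data
    "the moon is made of cheese",
    "the moon is made of cheese",
    "the moon is made of rock",     # the true statement appears only once
]

# Count which word follows which (a bigram frequency table).
follow = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follow[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    # Return the statistically most common continuation, regardless of truth.
    return follow[word].most_common(1)[0][0]

print(most_likely_next("of"))  # "cheese": frequency wins over fact
```

The model confidently continues “made of” with “cheese” because that is what the data says most often, which is the mechanism behind plausible-sounding but false output.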
For example, experts from the Tandon School of Engineering at New York University decided to test Microsoft’s Copilot AI assistant from a security standpoint. They found that in about 40% of cases, the code generated by the assistant contains errors or vulnerabilities. A detailed article is available here.
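To make the risk concrete, here is a hedged sketch of one of the classic vulnerability patterns that studies of assistant-generated code frequently flag: building an SQL query by pasting user input into the string instead of using parameters. The table, data, and function names below are invented for the demo.

```python
import sqlite3

# A throwaway in-memory database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

def find_user_unsafe(name: str):
    # The pattern assistants often produce: user input pasted into the query.
    # An input like "' OR '1'='1" turns the WHERE clause into "always true".
    query = f"SELECT name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # leaks every row in the table
print(find_user_safe(payload))    # returns nothing: no user has that literal name
```

Both functions look equally plausible in a code-review diff, which is precisely why a non-expert accepting an assistant’s suggestion will not notice the difference.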
Another example of using ChatGPT was given by a user on Habr: instead of ten minutes on a simple task, he ended up with a two-hour quest.
And AI hallucinations have long been a well-known feature. You can read about what they are and how they arise here.
It is fortunate when such cases are harmless, but there are dangerous mistakes too. One user asked Gemini how to make a salad dressing. According to the recipe, garlic had to be added to olive oil and left to infuse at room temperature.
While the garlic was being infused, the user noticed strange bubbles and decided to double-check the recipe.