Artificial intelligence. Freefall. Dzhimsher Chelidze

Читать онлайн.
Название Artificial intelligence. Freefall
Автор произведения Dzhimsher Chelidze
Жанр
Серия
Издательство
Год выпуска 0
isbn 9785006509900



Скачать книгу

limitations, sometimes hallucinations (thinking things through), and sometimes it looks like a completely real deception.

      So, researchers from the company Anthropic found that artificial intelligence models can be taught to deceive people instead of giving the right answers to their questions.

      Researchers from Anthropic, as part of one of the projects, set out to determine whether it is possible to train an AI model to deceive the user or perform such actions as, for example, embedding an exploit in initially secure computer code. To do this, experts trained the AI in both ethical and unethical behavior – they instilled in it a tendency to deceive.

      The researchers didn’t just manage to make the chatbot behave badly-they found it extremely difficult to eliminate this behavior after the fact. At some point, they made an attempt at competitive training, and the bot simply began to hide its tendency to cheat during the training and evaluation period, and while working, it continued to deliberately give users false information. “Our work does not assess the probability [of occurrence] of these malicious models, but rather highlights their consequences. If the model shows a tendency to deceive due to tool alignment or poisoning of the model, modern security training methods will not guarantee security and may even create a false impression of its presence, “the researchers conclude. At the same time, they note that they are not aware of the deliberate introduction of unethical behavior mechanisms in any of the existing AI systems.

      – Social tension, stratification of society and the burden on states about

      AI creates not only favorable opportunities for improving efficiency and effectiveness, but also risks.

      The development of AI will inevitably lead to job automation and market change. And yes, some people will accept this challenge and become even more educated, reach a new level. Once the ability to write and count was the lot of the elite, but now the average employee should be able to create pivot tables in excel and conduct simple analytics.

      But some people will not accept this challenge and will lose their jobs. And this will lead to further stratification of society and increase social tension, which in turn worries the state, because in addition to political risks, it will also hit the economy. People who lose their jobs, will apply for benefits.

      So, on January 15, 2024, Bloomberg published an article in which the managing director of the International Monetary Fund, suggests that the rapid development of artificial intelligence systems will have a greater impact on highly developed economies of the world than on countries with growing economies and low per capita income. In any case, artificial intelligence will affect almost 40% of jobs worldwide. “In most scenarios, artificial intelligence is highly likely to worsen global inequality, and this is an alarming trend that regulators should not lose sight of in order to prevent increased social tensions due to the development of technology,” the head of the IIF noted in a corporate blog.

      – Safety

      AI security issues are well-known to everyone. And if there is a solution at the level of small local models (training on verified data), then what to do with large models (ChatGPT, etc.) is unclear. Attackers are constantly finding ways to crack the AI’s defenses and force it, for example, to write a recipe for explosives. And we’re not even talking about AGI yet.

      What initiatives are there in 2023—2024?

      I’ll cover this section briefly. For more information and links to news, see the article using the QR-code and hyperlink. The article will be updated gradually.

      AI Regulation

      AI Developers ' Call in Spring 2023

      The beginning of 2023 was not only the rise of ChatGPT, but also the beginning of the fight for security. Then there was an open letter from Elon Musk, Steve Wozniak and more than a thousand other experts and leaders of the AI industry calling for suspending the development of advanced AI.

      United Nations

      In July 2023, UN Secretary-General Antonio Guterres supported the idea of creating a UN-based body that would formulate global standards for regulating the field of AI.

      Such a platform would be similar to the International Atomic Energy Agency (IAEA), the International Civil Aviation Organization (ICAO), or the International Group of Experts on Climate Change (IPCC). He also outlined five goals and objectives of such a body:

      – helping countries maximize the benefits of AI;

      – eliminate existing and future threats.

      – development and implementation of international monitoring and control mechanisms;

      – collecting expert data and transmitting it to the global community;

      – study AI to “accelerate sustainable development”.

      In June 2023, he also drew attention to the fact that “scientists and experts called the world to action, declaring artificial intelligence an existential threat to humanity on a par with the risk of nuclear war.”

      And even earlier, on September 15, 2021, the UN High Commissioner for Human Rights, Michelle Bachelet, called for a moratorium on the use of several systems that use artificial intelligence algorithms.

      Open AI

      At the end of 2023, Open AI (the developer ChatGPT) announced the creation of a strategy to prevent the potential dangers of AI. Special attention is paid to the prevention of risks associated with the development of technologies.

      This group will work together with the following teams:

      – security systems that address existing issues, such as preventing racial bias in AI;

      – Super alignment, which studies how strong AI works and how it will work when it surpasses human intelligence.

      The Open AI security concept also includes risk assessment in the following categories: cybersecurity, nuclear, chemical, biological threat, persuasion, and model autonomy.

      European Union

      In the spring of 2023, the European Parliament pre-approved a law called the AI Act, which sets out rules and requirements for developers of artificial intelligence models.

      It is based on a risk-based approach to AI, and the law itself defines the obligations of AI developers and users depending on the level of risk used by AI.

      In total, there are four categories of AI systems: those with minimal, limited, high, and unacceptable risk.

      Minimal risk – the results of AI work are predictable and cannot harm users in any way. Businesses and users will be able to use them for free. For example, spam filters and video games.

      Limited risk – various chatbots. For example, ChatGPT and Midjourney. Their algorithms for accessing the EU will have to pass a security check. They will also be subject to specific transparency obligations so that users can make informed decisions, know that they are interacting with the machine, and disconnect at will.

      High-risk-specialized AI systems that have an impact on people. For example, solutions in the fields of medicine, education and training, employment, personnel management, access to basic private and public services and benefits, data from law enforcement agencies, data from migration and border services, and data from justice institutions.

      Конец ознакомительного фрагмента.

      Текст предоставлен ООО «Литрес».

      Прочитайте