| Title | Cyberphysical Smart Cities Infrastructures |
|---|---|
| Author | Group of authors |
| Genre | Physics |
| Publisher | Physics |
| Year of publication | 0 |
| ISBN | 9781119748328 |
In the article by Bryson and Winfield, “Standardizing Ethical Design for Artificial Intelligence and Autonomous Systems” [1], the authors explore several important concepts. One of the most important is how they define intelligence. According to the authors, intelligence requires: “The capacity to perceive contexts for action, the capacity to act, and the capacity to associate contexts to actions” [1]. This definition matters because comparing organic intelligence with artificial intelligence (AI) is what allows AI to be distinguished as a new category of thought. The other important concept they discuss is the standardization of ethics as it applies to AI. According to Bryson and Winfield, standards set by the consensus of a large group should account for ethical implications, and the machine learning (ML) code that powers AI should incorporate these ethics. While Bryson and Winfield discuss the importance of such ethical standards, they do not specify what the ethics themselves should be, leaving them open to interpretation. In this chapter, that gap will be examined in an effort to establish some status quo.
Continuing the exploration of the ethical dilemmas posed by AI technology: in February 2019, the AMA Journal of Ethics published an article entitled “Ethical Dimensions of Using Artificial Intelligence in Health Care” [2]. The article explores the role that AI plays in healthcare, as well as its ethical implications. Its main focus is finding a balance between the benefits of AI technology and the inherent risks associated with it.
Another article that provided important insight was “Artificial Intelligence in Medicine” by Hamet and Tremblay [3]. The authors describe two main branches of AI in medicine: a physical branch and a virtual branch. Within the virtual branch, which can also be viewed as deep learning, there are three aspects: “(i) unsupervised (ability to find patterns), (ii) supervised (classification and prediction algorithms based on previous examples), and (iii) reinforcement learning (use of sequences of rewards and punishments to form a strategy for operation in a specific problem space)” [3]. In comparison, the physical branch largely involves robots that provide a variety of services and applications to both users and physicians.
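Two of these three aspects can be made concrete with a minimal, self-contained sketch. All of the data, function names, and labels below are hypothetical and purely illustrative: a one-nearest-neighbor rule stands in for supervised classification from previous examples, and a simple two-means procedure stands in for unsupervised pattern finding (no labels are used).

```python
def nearest_neighbor_predict(examples, query):
    """Supervised: classify `query` by the label of its closest labelled example."""
    best_label, best_dist = None, float("inf")
    for features, label in examples:
        dist = sum((f - q) ** 2 for f, q in zip(features, query))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

def two_means(points, iterations=10):
    """Unsupervised: split 1-D points into two clusters; no labels are used."""
    c1, c2 = min(points), max(points)  # initial cluster centers
    for _ in range(iterations):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return sorted([c1, c2])

# Supervised: labelled "previous examples" as (feature vector, diagnosis) pairs
examples = [((1.0, 1.0), "benign"), ((9.0, 9.0), "malignant")]
print(nearest_neighbor_predict(examples, (8.5, 9.2)))  # closest example is "malignant"

# Unsupervised: the two clusters emerge from the data alone
print(two_means([1.0, 1.2, 0.8, 9.0, 9.5, 8.8]))
```

Reinforcement learning is omitted here for brevity, since it additionally requires an environment that issues rewards and punishments over sequences of actions.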
In the official document, the National Artificial Intelligence Research and Development Strategic Plan [4], the future of AI is laid out. Across eight strategies, the National Science and Technology Council outlines the steps it considers priorities for Federal investment. “The Federal Government must therefore continually reevaluate its priorities for AI R&D investments to ensure that investments continue to advance the cutting edge of the field and are not unnecessarily duplicative of industry investments” [4]. Of the eight strategies, seven are carried over from the 2016 report. Because those seven are not new, the focus here will be on the eighth, and only new, strategy: the partnership between the federal government and academia, industry, and other parties involved in AI research and development, intended to continue generating breakthroughs. The plan also addresses ethics and AI, which will be drawn upon as that topic is explored.
In his article, “Hacking AI: Rethinking cybersecurity for artificial intelligence” [5], Davey Gibian explores how traditional cybersecurity is insufficient for evolving AI technologies. He states that AI cybersecurity requires “two algorithm‐level considerations: robustness and explainability” [5]. One interesting point he makes under robustness concerns eliminating bias as part of AI cybersecurity. In this chapter, we will examine how such bias can be introduced by the ethics implemented into AI.
The argument that traditional cybersecurity is insufficient for modern and future AI technology is also supported by Ilja Moisejevs in his article “What everyone forgets about machine learning” [6]. He briefly traces the history of cybersecurity and cyber threats, then explains the need for cybersecurity in ML and the damage that failing to implement it can cause (Figure 1.1).
1.2 A Brief History of AI
A survey of the history of literature shows that humans have long fantasized about creating non‐human entities that act, respond, and think as if they were human. Many science fiction stories depict robots powered by AI, used either for good or for ill. Although the fantasy of AI stretches back quite far, the modern age of AI begins around the 1950s [7]. What distinguishes this era from earlier written visions of AI is that Alan Turing had begun foundational work that would yield the first AI machines, turning what was once science fiction into reality.
Figure 1.1 Intersection of AI, healthcare, and cybersecurity.
Although AI development began in the 1950s, 70 years later we still do not have fully functional robots walking around. The reason is that early AI innovators were limited by the technology of their time: processor speed, memory, storage space, cost, and availability were all obstacles they had to overcome. As computers became faster, smaller, and cheaper, AI was able to move forward, clearing hurdles that had previously blocked its path. Early AI programs were often developed to play board games such as chess or checkers. These games are bounded by a limited set of rules, and thus easy to represent, yet they take a significant amount of intelligence to master [7]. By the 1960s, AI had been established as a field, and many researchers were working toward defining this new frontier.
The best comparison for the growth and development of AI is, ironically, a human child. When a child is first born, it simply exists, doing little beyond a few basic functions. Soon after, however, the child begins developing motor and verbal skills and interacting with its surroundings. Eventually these motor skills are refined and the child begins to walk; at the point of mobility, the child's world expands beyond the three feet surrounding it. Data processing, understanding, and decision making begin to take shape in the child. By this analogy, after 70 years, AI is the equivalent of a toddler: there are many things it can do independently, but it is far from being a fully functioning, independent adult.
1.3 AI in Healthcare
Thanks largely to Hollywood and science fiction, AI is synonymous in many people's minds with walking, talking robots. However, AI extends beyond robotics into machine learning and natural language processing, all of which find applications in the healthcare field [2]. Care robots, or “carebots,” do exist, but they are far from the androids that appear in Westworld. There are several schools of thought on how to classify AI in healthcare. One perspective divides it into three categories: diagnosis, clinical decision making, and personalized machines [2]. Another holds that there are two main categories, each with subcategories; in this viewpoint, the main categories are virtual and physical [3].
Before defining AI's role, it is important to understand its capabilities. Using ML, AI can process large amounts of data and look for patterns, including patterns that are often missed or overlooked by humans. In many settings, this pattern identification serves as a secondary consult to confirm a doctor's diagnosis [2]. Healthcare professionals (HCPs) place an inherent trust in these AI and ML systems. HCPs are often overworked and understaffed, and in using AI to confirm a diagnosis they assume the following: (i) the machines were coded correctly and tested, so that they identify patterns correctly and perform as expected; (ii) those who wrote the code have at least some understanding of healthcare; and (iii) the machines have not been tampered with. Later in this chapter, the third point of trust will be addressed, and whether that trust is wrongfully placed. This chapter does not investigate the manufacturing of these machines, so for the purposes of this topic, the first point of trust will be assumed to hold. Looking