Designing Agentive Technology. Christopher Noessel


      • It’s able to evaluate multiple options for achieving a goal, taking into account the trade-offs between them, and selecting the best one.

      • It is adaptable. It’s able to use feedback to track its progress toward its goal and adjust its plans accordingly.

      • In advanced agents, this can mean the capability to refine predictive models with increasing experience and as new real-time information comes in. Called machine learning in the vernacular, this helps narrow AIs adapt to an individual’s behavior and get better over time. I’ll touch on machine learning a bit more later, but for now understand that software can be programmed to make itself better at what it does over time.
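      To make that concrete, here is a minimal sketch of the idea in Python. The thermostat scenario, names, and numbers are hypothetical illustrations, not anything from a real product: a toy agent nudges its predicted setpoint toward each manual correction the user makes, so repeated feedback gradually personalizes it.

```python
# A toy "learning" thermostat agent: each time the user overrides the
# setpoint, the agent moves its prediction toward that correction.
# Hypothetical illustration only; real systems use much richer models.

class LearningThermostat:
    def __init__(self, initial_setpoint=20.0, learning_rate=0.2):
        self.setpoint = initial_setpoint    # current best guess (deg C)
        self.learning_rate = learning_rate  # how quickly to adapt

    def user_override(self, chosen_temp):
        # Move the estimate a fraction of the way toward the user's choice,
        # so the agent gets better at predicting this user over time.
        self.setpoint += self.learning_rate * (chosen_temp - self.setpoint)

agent = LearningThermostat()
for correction in [22.0, 22.5, 23.0]:  # the user keeps nudging it warmer
    agent.user_override(correction)
print(round(agent.setpoint, 1))        # the estimate drifts toward the habit
```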

      So agents are properly defined as artificial narrow intelligence—AI that is strictly fit to one domain. But where ANI is a category, the agent is the instance, the thing that will act on behalf of its user. So let’s talk about that second aspect of the definition.

      Acting on Behalf of Its User

      Similar to intelligence, agency can be thought of as a spectrum. Some things are more agentive than others. Is a hammer agentive? No. I mean, if you want to be indulgently philosophical, you could propose that the metal head is acting on the nail at the user’s request, per the rich gestural commands the user provides through the handle. But the fact that it’s always available to the user’s hand during the task means it’s a tool—that is, part of the user’s attention and ongoing effort.

      Less philosophically, is an internet search an example of an agent? Certainly the user states a need, and the software rummages through its internal model of the internet to retrieve likely matches. But that direct, practically instantaneous cause-and-effect makes it more like the hammer. Still a tool.

      But as you saw before, when Google lets you save that search so that it sits out there while you pay attention to other things, and then lets you know when new results come in, you’re talking about something that is much more clearly acting on behalf of its user in a way that is distinct from a tool. It handles tasks so that you can spend your limited attention on something else. This part of “acting on your behalf”—that it does its thing while out of sight and out of mind—is foundational to the notion of what an agent is, why it’s new, and why it’s valuable. It can help you track something you would find tedious, like a particular moment in time, a special kind of activity on the internet, or security events on a computer network.

      To do any of that, an agent must monitor some stream of data. It could be something as simple as the date and time or a temperature reading from a thermometer, or something unbelievably complicated, like watching for changes in the contents of the internet. The data could be continuous, like wind speed, or irregular, like incoming photos. As the agent watches this data stream, it looks for triggers and then runs through its rules and exceptions to determine whether and how it should act. Most agents work indefinitely, although they can be set to run for a particular length of time or until some other condition is met. Some agents, like a spam filter, will just keep doing their job quietly in the background. Others will keep going until they need your attention, and some will need to tell you right away. Nearly all will let you monitor them and the data stream, so you can check up on how they’re doing and see whether you need to adjust your instructions.
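      To make those moving parts concrete, here is a minimal sketch of that monitor-trigger-rules-act loop in Python. Every name in it (the results feed, the trigger test, the notify call) is a hypothetical stand-in, not an API from any real product:

```python
import time

def saved_search_agent(fetch_results, seen, notify, interval_seconds=3600):
    """Minimal agentive loop: watch a data stream, check a trigger,
    apply rules, act, and repeat indefinitely in the background.
    `fetch_results` and `notify` are hypothetical stand-ins supplied
    by the caller."""
    while True:                                 # most agents run indefinitely
        results = fetch_results()               # monitor the data stream
        new_items = [r for r in results if r not in seen]  # the trigger
        for item in new_items:
            seen.add(item)
            # Rules and exceptions decide whether this warrants attention.
            if "urgent" in item:
                notify(item, immediately=True)   # needs the user right away
            else:
                notify(item, immediately=False)  # queue quietly for later
        time.sleep(interval_seconds)             # then go back to watching
```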

      So those are the basics. Agentive technology watches a data stream for triggers and then responds with narrow artificial intelligence to help its user accomplish some goal. In a phrase, it’s a persistent, background assistant.

      If those are the basics, there are a few advanced features that a sophisticated agent might have. It might infer what you want without your having to tell it explicitly. It might apply machine learning methods to refine its predictive models. It might gently fade away in smart ways so that the user gains competence. You’ll learn about these in Part II, “Doing,” of this book, but for now it’s enough to know that agents can be much smarter than the basic definition we’ve established here.

      How Different Are Agents?

      Since most of our design and development processes have been built around making good tools, it’s instructive to compare and contrast them with good agents, because the two are different in significant ways.

      One of the main assertions of this book is that these differences are enough to warrant different ways of thinking about, planning for, and designing technology. They imply new use cases to master and new questions for evaluating them. They call for a community of practitioners to form around them.

       TABLE 2.1 COMPARING MENTAL MODELS

A Tool-Based Model | An Agent-Based Model
A good tool lets you do a task well. | A good agent does a task for you per your preferences.
A hammer might be the canonical model. | A valet might be the canonical model.
Design must focus on having strong affordances and real-time feedback. | Design must focus on easy setup and informative touchpoints.
When it’s working, it’s ready-to-hand, part of the body almost unconsciously doing its thing. | When the agent is working, it’s out of sight. When a user must engage its touchpoints, they require conscious attention and consideration.
The goal of the designer is often to get the user into flow (in the Mihaly Csikszentmihalyi sense) while performing a task. | The goal of the designer is to ensure that the touchpoints are clear and actionable, to help the user keep the agent on track.

      To make a concept clear, you need to assert a definition, give examples, and then describe its boundaries. Some things will not be worth considering because they are obviously in; some things will not be worth considering because they are obviously out; but the interesting stuff is at the boundary, where it’s not quite clear. What is on the edge of the concept, but specifically isn’t the thing? Reviewing these areas should help you get clear about what I mean by agentive technology and what lies beyond the scope of my consideration.

      It’s Not Assistive Technology

      Artificial narrow intelligences that help you perform a task are best described as assistants, or assistive technology. We need to think as clearly about assistive tech as we do about agentive tech, but we already have a solid foundation for designing assistive tech: we have been building those foundations for the last seven decades or so, and recent work with heads-up displays and conversational UI is making headway into best practices for assistants. It’s worth noting that designing agentive systems will often entail designing assistive aspects, but they are not the same thing.

      It seems subtle at first, but consider the difference between two ways to get an international airline ticket to a favorite destination. Assistive technology would work to make all your options and the trade-offs between them apparent as you make your selection, helping you avoid spending too much money or winding up with a miserable five-layover flight. An agent would vigilantly watch all airline offers for the right ticket and pipe up when it found one within your preferences. If it was very confident and you had authorized it, it might even make the purchase for you.
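      That difference can be sketched in code. The fare records, preference fields, and purchase call below are hypothetical placeholders; the point is only to show where the human sits in each loop:

```python
# Assistive: the human stays in the loop, making the final choice.
def assistive_search(fares, present_options, user_picks):
    # Surface the options and trade-offs; the user compares and selects.
    options = sorted(fares, key=lambda f: (f["price"], f["layovers"]))
    return user_picks(present_options(options))

# Agentive: the agent watches on its own and acts within set preferences.
def agentive_watch(fare_stream, preferences, notify, purchase):
    for fare in fare_stream:                    # vigilantly monitor offers
        if fare["price"] <= preferences["max_price"] \
                and fare["layovers"] <= preferences["max_layovers"]:
            if preferences.get("auto_buy"):     # user authorized purchases
                purchase(fare)                  # act on the user's behalf
            else:
                notify(fare)                    # pipe up with the find
```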

      It’s Not Conversational Agents

      “Agent” has been used traditionally in services to mean “someone who helps you.” Think of a customer service