
or on the phone, doing their best to minimize your wait. In my mind, this is much more akin to an assistant. But even that’s troubling since “assistant” has also been used to mean “that person who helps me at my job” both synchronously—as in “please take dictation”—and agentively—as in “hold all my calls until further notice.”

      These blurry usages are made even blurrier because human agents and assistants can act in both agentive and assistive ways. But since I have to pick, given the base meanings of the words, I think an assistant should assist you with a task, and an agent should take agency and do things for you. So “agent” and “agentive” are the right terms for what I’m talking about.

      Complicating that rightness is a recent trend in interaction design: conversational user interfaces, or chatbots. These are distinguished by having users work in a command-line interface inside a chat framework, interacting with software that is pretty good at understanding and responding to natural language. Canonical examples feature users purchasing airline tickets (yes, like a travel agent) or movie tickets.

      Because these mimic the conversations one might have with a customer service agent, they have been called conversational agents. I think they would be better described as conversational assistants, but nobody asked me, and now it’s too late. That ship has sailed. So, when I speak of agents, I am not talking about conversational agents. Agentive technology might engage its user through a conversational UI, but they are not the same thing.

      It’s Not Robots

      No. But holy processor do we love them. From Metropolis’ Maria to BB-8 and even GLaDOS, we just can’t stop talking and thinking about them in our narratives.

      A main reason I think this is the case is that they’re easy to think about. We have lots of mental equipment for dealing with humans, and a robot can be thought of as a metal-and-plastic human. So between the abstraction that is an agent and the concrete thing that is a robot, it’s easy to conflate the two. But we shouldn’t.

      Another reason is that robots promise—as do agents—“ethics-free” slave labor (please note the irony marks, and see Chapter 12, “Utopia, Dystopia, and Cat Videos,” for plenty of ethical questions). In this line of thinking, agents work for us, like slaves, but we don’t have to concern ourselves about their subservience or even subjugation the way we must with a human, because the agents and robots are programmed to be of service. There is no suffering sentience there, no longing to be free. For example, if you told your Nest Thermostat to pursue its dreams, it should rightly reply that its dream is to keep you comfortable year-round. Programming it for anything else might frustrate the user and, if it is a general artificial intelligence, be cruel to the agent.

      Of course, robots will have software running them, which, if they are to be useful, will at times be agentive. But while we expect a robot’s agent to stay put in its body, coupled as we ourselves are to ours, that’s not necessarily the case with an agent. For example, my health agent may reside on my phone for the most part, but tap into my bathroom scale when I step on it, parley with the menu when I’m at a restaurant, pop onto the cross-trainer at the gym, and jump to the doctor’s augmented reality system when I’m in her office. So while a robot may house agentive technology, and an agent may sometimes occupy a given robot, these two elements are not tightly coupled.

      It’s Not Service by Software

      I actually think this is a very useful way to think about agentive tech: service delivered by smart software. If you have studied service design, then you have a good grounding in the user-centered issues around agentive design. Users often grant agency to services to act on their behalf. For example, I grant the mail service agency to deliver letters on my behalf and agree to receive letters from others. I grant my representative in government agency to legislate on my behalf. I grant the human stock portfolio manager agency to do right by my retirement. I grant the anesthesiologist agency to keep me knocked out while keeping me alive, even though I may never meet her.

      But where a service delivers its value through humans working directly with the user or delivering the value “backstage,” out of sight, an agent’s backstage is its programming and its coordination with other agents. In practice, sophisticated agents may entail human processes, but on balance, if it’s mostly software, it’s an agent rather than a service. And where a service designer can presume the basic common sense and capabilities of any human in its design, those things must be handled much differently when we’re counting on software to deliver the same thing.

      It’s Not Automation

      If you are a distinguished, long-time student of human-computer interaction, you will note themes shared between the study of automation and what I’m describing. But where automation has as its goal the removal of the human from the system, agentive technology is explicitly in service to a human. An agent might have some automated components, but the intentions of the two fields of study are distinct.

      Hey Wait—Isn’t Every Technology an Agent?

      Hello, philosopher. You’ve been waiting to ask this question, haven’t you? A light switch, you might argue, acts as an agent, monitoring a data stream that is the position of the knife switch. And when that switch changes, it turns the light on or off accordingly.

      Similarly, a key on a keyboard watches its momentary switch and, when it is depressed, helpfully sends a signal to a small processor on the keyboard that translates the press into an ASCII code, which gets delivered to the software that accumulates these codes to do something with them. And it does it all on your behalf. So are keys agents? Are all state-based machines? Is it turtles all the way down?

      Yes, if you want to be philosophical about it, that argument could be made. But I’m not sure how useful it is. A useful definition of agentive technology is less a discrete, testable property of a given technology and more a productive way for product managers, designers, users, and citizens to think about this technology. For example, I can design a light switch when I think of it as a product, subject to industrial design decisions. But I can design a better light switch when I think of it as a problem that can be solved either manually, with a switch, or agentively, with a motion detector or a camera with sophisticated image processing behind it. And that’s where the real power of the concept comes from. Because as we continue to evolve this skin of technology that increasingly covers both our biology and the world, we don’t want it to add to people’s burdens. We want it to alleviate them and empower people to get done what needs to be done, even when we don’t want to do it ourselves. And for that, we need agents.

      To bring this working definition home, let me end this section by putting forward some practical questions you can ask of a given user task that recurs with some predictability. If the answer to all of these is yes, then you should probably employ an agentive solution; a short sketch after the list shows one way to treat them as a checklist.

      • Can the user reasonably delegate the task to another? Students learning a language cannot hand that task to an agent and expect to acquire the skill. Similarly, I cannot send an agent to the gym in the hopes of building up muscle. Ethically, I should not accept a paycheck for a task that I then secretly hand off to an agent to do for me.

      • Can the trigger for the task be reliably handled by a computer? If, for example, the triggering event is a question of subjective judgment, then a person should handle initiation and most probably execution of the task.

      • Does the user have a need for focused attention on some aspect of the task’s performance? If not, why bother the user?

      • Can the task be reliably performed with no user input, including preferences and goals? If it can, then why bother the user with it at all?
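      To make that checklist concrete, here is a minimal sketch in Python. It is my illustration, not anything from a shipping product; every name in it is invented for the example, and it simply encodes the rule of thumb that four yeses point toward an agentive solution.

# A hypothetical sketch: the four screening questions encoded as a simple
# checklist. All names are invented for illustration.

SCREENING_QUESTIONS = (
    "Can the user reasonably delegate the task to another?",
    "Can the trigger for the task be reliably handled by a computer?",
    "Does the user need focused attention on some aspect of the task's performance?",
    "Can the task be reliably performed with no user input, including preferences and goals?",
)

def screen_task(answers):
    """Rule of thumb from the text: if every answer is yes, the task is
    probably a candidate for an agentive solution."""
    if len(answers) != len(SCREENING_QUESTIONS):
        raise ValueError("Expected one answer per screening question.")
    return all(answers)

# Example: screening a recurring task, such as reordering printer ink.
answers = [True, True, True, True]
for question, answer in zip(SCREENING_QUESTIONS, answers):
    label = "yes" if answer else "no"
    print(f"{label:>3}  {question}")
print("Verdict:", "agentive candidate" if screen_task(answers) else "keep a person in the task")

      In practice, of course, these answers are judgment calls rather than booleans, which is exactly why the questions belong to designers and product managers rather than to code.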
