
times they are split into numerous categories—a characteristic of “splitters,” who like to separate things. You need to repeat this exercise with multiple different groups to get a sense of the elements and classifications they consider to be important. This process can inform the mental model of the user and then lead to the development of organizing principles that can be applied to knowledge bases and document repositories.

Process Step | Answer the Following | Examples

1. Observe and gather data (pain) points
   Answer the following: What are the specific problems and challenges that users are identifying?
   Examples: “We can’t locate information about policies for specialty coverage.” “We need to look in multiple systems to find prior experience data when underwriting new policies in high-risk areas.” “Different terminology is used in different systems, which makes queries difficult.”

2. Summarize into themes
   Answer the following: What are the common elements of the observations? How can symptoms and pains be classified according to overarching themes?
   Example: Inability to locate policy and underwriting information using common terminology.

3. Translate themes into conceptual solutions
   Answer the following: Wouldn’t it be great if we could . . . ?
   Example: Wouldn’t it be great if we could access all policy and prior experience data across multiple systems using a single search query and return consistent results?

4. Develop scenarios that comprise solutions
   Answer the following: What would a day in the life of a user look like if this solution were in place?
   Example: At a high level, describe how underwriters go about their work in writing policies for specialty and high-risk clients. Describe each potential situation and how they would go about their work.

5. Identify audiences whom the scenarios affect
   Answer the following: Who are the users that are impacted?
   Examples: Risk managers; underwriters; sales personnel.

6. Articulate tasks that audiences execute in scenarios
   Answer the following: What are the tasks that need to be executed in each scenario?
   Example: For a given scenario, articulate tasks (research options, review loss history, locate supporting research, etc.).

7. Build detailed use cases around tasks and audiences
   Answer the following: What are the specific steps to accomplish tasks?
   Example: For a single task, list the steps to execute (this level of detail is not needed in all cases). Step 1: Log on to the claims system. Step 2: Search for history on the coverage type in the geography. Step 3: . . .

8. Identify content needed by audiences in specific use cases
   Answer the following: What content and information is needed at each step in the process?
   Examples: Claims data; policy information; underwriting standards; actuarial tables; fraud reports; etc.

9. Develop organizing principles for data and content
   Answer the following: How should the things audiences need be arranged according to process, task, or another organizing principle?
   Example: Begin with “is-ness.” What is the nature of the information? Then determine “about-ness,” the additional characteristics of the information. How would you tell apart 1,000 documents of that type?
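
      The outputs of these steps are most useful when they are captured in a form that can be reviewed and tested rather than left in interview notes. As a minimal sketch, the code below shows one way the scenario, audience, task, and content outputs (steps 4 through 8) from the underwriting example might be recorded; the class and field names are illustrative assumptions, not part of the methodology described in the table.

```python
# A minimal sketch of capturing the outputs of steps 4-8 as data, so that
# scenarios, audiences, tasks, and content needs stay traceable and testable.
# All class and field names here are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class UseCase:
    task: str                  # step 6: a task executed within a scenario
    steps: List[str]           # step 7: concrete steps to accomplish the task
    content_needed: List[str]  # step 8: content required at each step

@dataclass
class Scenario:
    description: str           # step 4: "a day in the life" narrative
    audiences: List[str]       # step 5: who is affected
    use_cases: List[UseCase] = field(default_factory=list)

# Example drawn from the underwriting scenario in the table above
underwriting = Scenario(
    description="Underwriter writes policies for specialty and high-risk clients",
    audiences=["risk managers", "underwriters", "sales personnel"],
    use_cases=[
        UseCase(
            task="Review loss history for a coverage type in a geography",
            steps=["Log on to claims system",
                   "Search for history on the coverage type in the geography"],
            content_needed=["claims data", "policy information",
                            "underwriting standards", "actuarial tables"],
        )
    ],
)

# Step 9 starts from this inventory: the content_needed lists across all use
# cases suggest the "is-ness" facets an organizing scheme has to cover.
all_content = {c for s in [underwriting] for u in s.use_cases for c in u.content_needed}
print(sorted(all_content))
```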

      This is not the only data- and content-centric approach. Here are some others:

      • Start with a body of content and ask people to come up with labels.

      • Come up with labels and ask people to group them within existing categories.

      • Ask people to group labels into new categories.

      • Sample all of the existing terminology lists and have people reconcile them.

      • Hand out sample content and have people classify it with existing labels.

      Each of these approaches informs the mental model of the user and helps create language and terminology that can label and tag products, services, and reference content for easier access.

      The challenge with data- and content-centric approaches is that they lose the understanding of the user’s goals. The challenge with a user- and problem-centric approach is to prevent it from becoming a laundry list of user challenges. A combination of approaches is often the best way to crack the problem.

       Checking the Box versus Validating the Work

      How do you know when an ontology is sufficient to be useful? Here’s a story that may help answer that question.

      The head of knowledge and search at a large global services firm was having challenges with information management across the enterprise. The company’s program for searching for answers to employees’ questions was held up as a model of exemplary practices at conferences; numerous attendees would crowd around the presenter after her talk and ask how her team did it. Each week, the group reported positive results, and metrics showed steady improvement in measures such as search accuracy and precision of results. But as the head of one of the business units confided to us, “People still can’t find what they need.” The company wanted to understand why, so they hired my consulting firm.

      At first, it appeared that the company had already achieved a high level of success. My consultants and I listened to their approach, and our first thought was, “They really know best practices and understand how content, knowledge, taxonomies, and ontologies work. They are applying them and following all the steps.”

      But our main contact at the company suggested that we dig deeper. “Go to the end users and look at what they are trying to do,” they advised. “Evaluate their taxonomies and ontology and how they got there.”

      The rest of the week was quite revealing indeed. Even though the steps that the global services firm followed were valid, problems emerged. Use cases were vague: “Users must be able to access the information they need when in the field.” That type of use case is not testable. What information? For what purpose? From where? None of the details were specified.

      When it came time to build the taxonomies and ontology, a manager sent out a spreadsheet that people added terms to, and then the head of the group added his own terms and deemed the taxonomy complete. There was no validation or testing with actual users or measures of usability. The firm had not followed best practices and heuristics for ontology development. There were too many overly broad terms (such as “documents” and “content”— what is the difference between a document and content?) and too many detailed terms that had insignificant differences (“exemplars” and “examples”). Hierarchies were six or seven levels deep in some parts and one level deep in other areas, making it nearly impossible to establish a mental model of how the information was organized. And there were many other violations of accepted practices (such as large “general,” “miscellaneous,” and “other” categories—which are useless individually and nonsensical when combined).
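
      Many of these violations can be caught mechanically rather than by opinion. The sketch below illustrates a few such checks (catch-all category names, near-duplicate labels, and wildly uneven hierarchy depth) against a toy taxonomy; the nested-dictionary format, the similarity threshold, and the depth limit are assumptions chosen for illustration, not standards from this book.

```python
# A rough sketch of automating a few taxonomy heuristics. The taxonomy format
# and the thresholds are assumptions for illustration, not prescribed values.
from difflib import SequenceMatcher

CATCH_ALL = {"general", "miscellaneous", "other", "documents", "content"}

def _labels(tree):
    """Yield every label in a nested-dict taxonomy."""
    for label, child in tree.items():
        yield label
        yield from _labels(child)

def _depths(tree, level=1):
    """Yield the depth of every leaf in a nested-dict taxonomy."""
    if not tree:
        yield level - 1
        return
    for child in tree.values():
        yield from _depths(child, level + 1)

def audit(tree):
    issues = []
    labels = list(_labels(tree))
    # 1. Catch-all or overly broad category names
    issues += [f"catch-all term: {l!r}" for l in labels if l.lower() in CATCH_ALL]
    # 2. Near-duplicate labels with insignificant differences
    #    (0.7 is an arbitrary illustrative threshold)
    for i, a in enumerate(labels):
        for b in labels[i + 1:]:
            if SequenceMatcher(None, a.lower(), b.lower()).ratio() > 0.7:
                issues.append(f"near-duplicate labels: {a!r} vs {b!r}")
    # 3. Wildly uneven hierarchy depth
    ds = list(_depths(tree))
    if ds and max(ds) - min(ds) > 3:
        issues.append(f"uneven depth: leaves range from {min(ds)} to {max(ds)} levels")
    return issues

sample = {"Policies": {"Specialty": {}, "Exemplars": {}, "Examples": {}},
          "Other": {"A": {"B": {"C": {"D": {"E": {}}}}}}}
for issue in audit(sample):
    print(issue)
```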

      Ontologies should not be a matter of individual opinion. They should not be deemed complete by business leaders based only on their judgment, nor developed in a vacuum. Everything should be testable and measurable. No one at the global services firm tried ingesting information using the system and then measuring how people located the information on an end-to-end, holistic basis.
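
      One way to make findability testable on an end-to-end basis is to run a fixed benchmark of real user questions against the live search system and measure how often the documents users actually need come back near the top. The sketch below assumes a generic search() callable and invented queries and document paths; it stands in for whatever search API and query log an organization actually has.

```python
# A minimal sketch of an end-to-end findability measurement. `search()` is a
# stand-in for the organization's real search API; the benchmark queries and
# expected documents are illustrative assumptions.
from typing import Callable, Dict, List

def findability_score(search: Callable[[str], List[str]],
                      benchmark: Dict[str, str],
                      k: int = 5) -> float:
    """Fraction of benchmark queries whose expected document appears in the top k results."""
    hits = 0
    for query, expected_doc in benchmark.items():
        if expected_doc in search(query)[:k]:
            hits += 1
    return hits / len(benchmark)

# Benchmark built from real questions gathered during use-case interviews
benchmark = {
    "specialty coverage policy wording": "policy-library/specialty-coverage.pdf",
    "loss history for coastal flood risk": "claims/loss-history-coastal.xlsx",
}

def fake_search(query: str) -> List[str]:
    # Placeholder so the sketch runs; replace with the enterprise search call.
    return ["policy-library/specialty-coverage.pdf", "misc/other.doc"]

print(f"Top-5 findability: {findability_score(fake_search, benchmark):.0%}")
```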

      An even larger mistake in many organizations is the lack of a clear understanding of the customer at a level of detail that truly informs decisions and provides enough features that both humans and machine learning algorithms can interpret and act upon. Achieving this level of understanding and insight begins with humans applying consistent, repeatable, testable methodologies, because machines cannot