Fast, Accurate, Relevant, Intuitive: The Future of Search
What attributes come to mind when you consider a great search experience? You would probably say it is fast, accurate, relevant, and intuitive. You would focus on what the act of searching allowed you to achieve.
Or, as my OpenSource Connections colleague David Fisher likes to say: “No one ever says, ‘I’m going to go out to the garage and search for something.’ They say, ‘I am going out to the garage to get the rake to clean up the fall leaves.’” Essentially, they perform a “search” to find the rake among all the clutter in the garage!
Search is not about information. It’s about action. Historically, search teams have focused on collecting data, making sure datasets are comprehensive, up to date, and nicely structured. Search teams understand that there are many types of rakes: some are good for raking up leaves, some are best for grass, and some we use when landscaping.
Today, however, in our new AI world, that is not enough. Our stakeholder and user expectations have grown exponentially. They want to see results from their investments in search, and they define those results in terms of specific actions. Actions accomplished, such as “How many self-service requests did our IT portal complete?” Or actions reduced, such as “How many fewer phone calls did we get to the company help desk?”
AI TEAMS AND SEARCH TEAMS
AI teams have risen to become the biggest competitor to search teams for funding and responsibility within the enterprise.
Indeed, what I’ve heard from people on AI teams is, “Thank you, search team, for gathering the data in one place. Now, you can export every document into my AI platform, and we’ll take it from here.” And yes, it sounds very condescending when you hear that! It’s hard not to be defensive and view that perspective as a threat: a threat that the value of our work gets distilled down to just the value of our connectors to various enterprise systems.
An AI team’s perspective on search is driven in large part by how search teams measure success compared to how AI teams measure success. The search discipline, and the people in it, grew out of the rich world of information retrieval (IR). The focus of the IR world has always been on the query and the quality of the content it matches. We obsess over how to parse the user’s query, how to surface the best-matching documents, and then how to measure how well our system performed.
We benchmark our success using carefully curated datasets that say, “This is a good match for this query.”
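To make that mindset concrete, here is a minimal, purely illustrative sketch of such an offline benchmark. The query, document IDs, and relevance grades below are all invented, and NDCG is just one common choice of ranking metric; nothing here is specific to any particular search platform.

```python
import math

# Hypothetical curated judgments: query -> {doc_id: relevance grade, 0 (bad) to 3 (great)}
judgments = {
    "leaf rake": {"doc_17": 3, "doc_42": 2, "doc_08": 0, "doc_99": 1},
}

# Hypothetical ranked results our engine returned for that query
results = {
    "leaf rake": ["doc_42", "doc_17", "doc_99", "doc_08"],
}

def ndcg_at_k(query: str, k: int = 4) -> float:
    """Normalized Discounted Cumulative Gain: scores a ranking higher
    when the highest-graded documents sit nearest the top."""
    grades = [judgments[query].get(doc, 0) for doc in results[query][:k]]
    dcg = sum(g / math.log2(rank + 2) for rank, g in enumerate(grades))
    ideal = sorted(judgments[query].values(), reverse=True)[:k]
    idcg = sum(g / math.log2(rank + 2) for rank, g in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

print(f"NDCG@4 for 'leaf rake': {ndcg_at_k('leaf rake'):.3f}")  # prints ≈ 0.922
```

The metric itself matters less than the mindset it encodes: the unit of success is the query and the ranked documents it returns, not whether the user ever got the rake out of the garage.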
AI teams are fixated on how AI is going to accomplish amazing things for the organization. They have their charters built around specific processes and the outcomes on which they are measured. They don’t care about cataloguing and organizing all the potential information in an enterprise; they care about delivering specific results. The results don’t have to be the best; they just have to be good enough for the user to embrace what the AI has proposed and move on to the next step.
AI teams are focused on what actions users take when interacting with the experience. To them, the workflow is the most important thing, not the data. To go back to the garage example, you should rake leaves using a leaf rake, but you can use other types as well. Search teams are going to obsess over helping you find the specific leaf rake in the garage that works best for leaves. Meanwhile, the AI team just cares that you found a rake, any rake, that works well enough to get you out there collecting those leaves! Guess which approach gets you out there raking leaves faster?