Fast, Accurate, Relevant, Intuitive: The Future of Search
MOVING FORWARD
So how do we move forward? How do we embrace, as a key determiner of success, the actions our users take?
We need to start by upgrading our core platform, the venerable search engine. A few years ago, I got to meet with the team at Vectara, and they shared with me the idea that when RAG came along, the search engine evolved into the answer engine. No longer is search about surfacing source documents as 10 blue links; it is now about answering the question our users are asking, with no reading of source material required. However, that isn’t enough, because to satisfy someone’s information need, an action generally needs to happen. As we move beyond RAG, we need to think about our core platform as an action engine.
We need to get out of our search silo by firmly putting ourselves in our users’ shoes, which can be done by embracing the Jobs to Be Done framework (jobs-to-be-done.com/jobs-to-be-done-a-framework-for-customer-needs-c883cbf61c90). This flips how we think about why people buy things. Instead of focusing on who customers are, it looks at what they’re trying to accomplish. That example of going to the garage to get the rake to clean up the fall leaves? The intent is not to find a rake; it’s to have a leaf-free yard. The rake is just hired to do that job. This approach recognizes that people make decisions for functional reasons (getting leaves cleaned up efficiently), emotional reasons (the satisfaction of a tidy yard), or social reasons (not being the neighbor with the messy lawn).
When companies understand the real job behind a process, they can create products people actually need and market them in ways that genuinely connect. It’s about solving the right problem, not just selling features.
As search teams, we need to embrace Jobs to Be Done to facilitate thinking about the actions that we want to see happening. Are actions happening as a result of search? This is where we need to evolve our thinking. Classically, the only “action” we would focus on, especially in enterprise search, was whether a click-through to a specific document happened. Of course, with these new AI apps, the source document often isn’t even shown, much less made available to click on. I frequently get asked, “How do we measure actions?” This is where we need to think more broadly about our measurement and evaluation strategies.
ANSWER SUCCESS AND FAILURE
Let’s talk about the hardest use case for understanding success or failure: the summarized answer that tells users what they want, after which they go away. From a search perspective, it can be hard to tell whether users left because we satisfied their need or because we gave them a garbage answer and they were rage-quitting. We need a way of implicitly measuring the validity of the answer.
We know that explicitly asking users, say via thumbs up or thumbs down, doesn’t work. What we need is to embed some “next step” actions in the answer that let users react to the answer.
For example, let’s say you have an outdated company-provided laptop (like I do!). A typical query might be, “Am I eligible for a new laptop?” A basic RAG solution might say, “The company policy is to provide new laptops every 3 years” and maybe provide some links to the IT policy. A better solution says, “The company policy is to provide new laptops every 3 years. Click here to request a new laptop.” Now you can apply your standard click-through metrics, counting clicks on the action. However, you could go even a step further. A search engine implementing RAG will know who you are and so be able to consult the appropriate IT policies.
Ideally, it will even have a laptop eligibility tool that decides your specific eligibility for a new laptop based on the issuance date of your last one.
Now the search result can be: “The company policy is to provide new laptops every 3 years. You are eligible in 6 months. Click here to be notified.” Here we give users, even those who aren’t eligible, a choice that indicates they are happy with the answer. Imagine if I searched for “I need a replacement laptop bag” and received the answer, “The company policy is to provide new laptops every 3 years” with some next-step actions. Yeah, no action would be taken: a strong signal that the answer didn’t address my need. The system couldn’t distinguish between a new laptop and a new container for the laptop.
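To make the idea concrete, here is a minimal sketch of an eligibility tool plus action-instrumented answers. Everything here is assumed for illustration: the 3-year cycle constant, the action IDs, and the in-memory click log standing in for real analytics.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Assumed policy: new laptops every 3 years.
REPLACEMENT_CYCLE = timedelta(days=3 * 365)

@dataclass
class Action:
    action_id: str  # hypothetical identifier used for click tracking
    label: str

@dataclass
class Answer:
    text: str
    actions: list  # next-step actions embedded in the answer

def time_until_eligible(issued: date, today: date) -> timedelta:
    """Time remaining until eligibility (zero or negative means eligible now)."""
    return (issued + REPLACEMENT_CYCLE) - today

def answer_laptop_query(issued: date, today: date) -> Answer:
    """Build a personalized answer with a next-step action attached."""
    remaining = time_until_eligible(issued, today)
    if remaining <= timedelta(0):
        return Answer(
            "The company policy is to provide new laptops every 3 years. "
            "You are eligible now.",
            [Action("request_laptop", "Request a new laptop")],
        )
    months = round(remaining.days / 30)
    return Answer(
        "The company policy is to provide new laptops every 3 years. "
        f"You are eligible in about {months} months.",
        [Action("notify_when_eligible", "Notify me when I'm eligible")],
    )

# Implicit success signal: record which action, if any, the user clicked.
click_log: list = []

def record_action_click(query: str, action_id: str) -> None:
    click_log.append({"query": query, "action": action_id})

answer = answer_laptop_query(issued=date(2023, 1, 15), today=date(2025, 7, 15))
record_action_click("Am I eligible for a new laptop?", answer.actions[0].action_id)
```

A click on any embedded action, even “notify me,” is a positive signal; a session with no action taken is the silence you want to investigate.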
We need to leave our safe space and get out there into the enterprise, working with specific departments and teams.
We are the “forward-deployed consultants,” helping specific teams define what our users are trying to accomplish. We make sure that we define the actions that determine success or failure and that those actions are integrated into our search platforms, or, dare I say, our action platforms.
I’ve taken a somewhat negative tone about AI teams, but we need to heed Conway’s Law, which says “organizations will design systems that reflect their internal communication structures.” This means that the way teams communicate influences the architecture of the systems they create. If the future of search is fast, accurate, relevant, and intuitive, and if search is truly about action over information, then we really need blended AI/search teams.