Sharing Knowledge at Speed and Scale: The Convergence of KM and AI
Instant answers and algorithmic insights have become a fact of life and are now expected when searching, problem-solving, and, well, working in general. Today’s technology makes it very easy to mistake “fast” for “smart.” We talk about AI as an oracle, a magic box capable of solving our organizational problems at the push of a button. But, as any knowledge or information professional will tell you, information is not knowledge. And speed without structure is chaos.
AI can be powerful, but only if it’s rooted in something deeper. That “something” is knowledge management (KM). When done well, KM provides the scaffolding, ethical compass, and human context that AI desperately needs.
This article explores the magic that happens when KM and AI don’t just coexist but converge, and why that convergence is the key to sharing knowledge at speed and scale without sacrificing quality, trust, or relevance. What will this look like? And how can we, as knowledge professionals, help shape it?
GENERATIVE VS. AGENTIC AI: DIFFERENT DEMANDS ON KM
Not all AI is created equal, and different types of AI interact with KM in different ways. Two flavors of AI that are particularly relevant to this discussion are generative AI (GenAI) and agentic AI.
GenAI is the kind we see in large language models (LLMs) such as ChatGPT and Claude. These systems excel at producing text, images, code, or other content based on training data. But they lean heavily on curated, high-quality content. Without this, you risk outputs that sound plausible but are way off base.
KM has an important role to play here. It ensures that the generated content isn’t just semantically correct, but also makes sense in the context of the organization. AI-ready content should be aligned with strategy, language, and culture. It also needs to be up-to-date and curated. Without that foundation, you get hallucinations, duplication, or content that looks helpful on the surface but is subtly misleading.
KM also plays a part in the governance and structure of information and knowledge. Finding, capturing, codifying, and classifying the relevant knowledge, and then organizing it so that it is accessible with proper access control, is necessary before starting any LLM or internal-repository chatbot project. Organizations that launch first and worry about the underlying foundations later quickly find that the results are unreliable, and user confidence suffers immediately. This is the classic “garbage in, garbage out” dilemma.
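The curation step described above can be made concrete. Below is a minimal sketch, in Python, of a readiness gate that checks each knowledge asset for an owner, a recent review, and a known classification before it is allowed into a chatbot’s retrieval index. The field names (`owner`, `last_reviewed`, `classification`) and the one-year freshness threshold are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Hypothetical metadata record for a knowledge asset; fields are assumptions.
@dataclass
class KnowledgeAsset:
    title: str
    owner: Optional[str]      # accountable owner, or None if orphaned
    last_reviewed: date       # date of last editorial review
    classification: str       # e.g. "public", "internal", "restricted"

def is_ai_ready(asset: KnowledgeAsset, today: date,
                max_age_days: int = 365) -> bool:
    """Gate content before it enters an LLM retrieval index: it must be
    fresh, have an accountable owner, and carry a known classification."""
    fresh = (today - asset.last_reviewed) <= timedelta(days=max_age_days)
    governed = asset.owner is not None
    classified = asset.classification in {"public", "internal", "restricted"}
    return fresh and governed and classified

# Only assets that pass the gate are handed to the indexing pipeline.
today = date(2025, 6, 1)
corpus = [
    KnowledgeAsset("Travel policy", "HR", date(2025, 3, 1), "internal"),
    KnowledgeAsset("Old org chart", None, date(2019, 6, 1), "internal"),
]
ready = [a for a in corpus if is_ai_ready(a, today)]
```

In this sketch, the stale and ownerless org chart is filtered out before indexing, which is exactly the kind of “garbage in” that governance is meant to catch.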
Agentic AI works differently. Instead of generating content, it takes action. Some example actions are triggering workflows, sending reminders, or nudging people toward decisions. In this case, KM steps in to provide the rules of the road. Decision trees, embedded policies, business rules, and clear context all help ensure these autonomous tools don’t wander off course. The work KM has hopefully already done (mapping knowledge, codifying processes, capturing nuance, etc.) becomes the safety net.
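The “rules of the road” idea can be sketched in code. Below, KM-codified business rules act as guardrails: an agent may only take actions the organization has explicitly captured, and only when the codified conditions hold. The action names, rule predicates, and context fields are all hypothetical, chosen to mirror the examples in the text (reminders, workflows, escalation).

```python
from typing import Callable, Dict

# Actions KM has explicitly codified; anything else is off-limits to the agent.
ALLOWED_ACTIONS = {"send_reminder", "create_ticket", "escalate_to_human"}

# Business rules captured by KM: each action maps to a condition over the
# request context. These predicates are illustrative assumptions.
RULES: Dict[str, Callable[[dict], bool]] = {
    "send_reminder": lambda ctx: ctx.get("overdue_days", 0) >= 3,
    "create_ticket": lambda ctx: ctx.get("severity") in {"medium", "high"},
    "escalate_to_human": lambda ctx: True,  # escalation is always permitted
}

def authorize(action: str, ctx: dict) -> bool:
    """Return True only if the action is codified AND its rule allows it.
    Unknown actions are denied by default, keeping the agent on course."""
    if action not in ALLOWED_ACTIONS:
        return False
    return RULES[action](ctx)
```

The key design choice is deny-by-default: the safety net is the codified knowledge itself, so an action the organization never mapped simply cannot be taken.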
Whether you’re deploying a knowledge assistant or automating workflows, AI’s usefulness doesn’t just depend on the volume of data. It depends on high-quality content that is structured, validated, and made meaningful through human oversight.
AI DOESN’T REPLACE KM; IT AMPLIFIES ITS IMPORTANCE
We are not in a zero-sum game when it comes to AI and KM. AI is not going to replace KM any more than Google replaced librarians. What AI already does is change the conditions under which KM operates, making KM more essential than ever. During the initial GenAI hype a few years ago, some people warned me that KM was a thing of the past and that I had better start planning how to pivot. We are now at the stage where KM is getting a boost: teams whose AI projects are not turning out as planned are learning that they need to get their knowledge house in order to realize the true potential of their AI investments.
Think about how we used to find information—card catalogs, cross-referenced subject headings, and institutional knowledge from someone who knew the quirks of our department.
Today, AI is expected to fill that role. But without strong underlying KM practices, AI becomes a sophisticated, expensive, and resource-heavy nonsense generator.
In my work with global organizations and clients, and through conversations on the Knowledge Fika podcast, I’ve found that the most successful implementations of AI do not start with the technology. They start with the knowledge ecosystem behind it.