KM a Fad? KM a Failure? News at 11PM!

So much for "fad" labels!

KM a Fad?  KM a Failure?  Well no, not even. But it’s going to take a little bit of “thread pulling” to fully explore this one. And it makes for an interesting topic of discussion today that goes rather neatly with another post of mine — Not Dead Yet (think Monty Python).

So buckle up, it’s going to get a bit bumpy.


I came across several interesting blogs/articles yesterday. The first was a blog post by Adi Gaskell (“Is knowledge management essential or a white elephant”) that reminded me of something that I’d read years ago – a 2002 paper by T.D. Wilson (Professor Emeritus, University of Sheffield) titled “The nonsense of knowledge management,” in which he declared that KM was just a fad, driven by “management consultancies.” Along with that came a prediction that KM would “fade away like previous fads.”

Ah huh. Still waiting. I see no body, and see no crime scene tape.

Add to that, Michael Koenig did some research, published in a 2006 article titled “KM: the forest for all the trees,” that I believe effectively debunks one central concept of Wilson’s conjecture that KM is a fad. Koenig’s 2006 research went a lot further than the similar research conducted by Wilson (who attempted to discredit the concept of knowledge management), and Koenig’s research clearly showed that KM is not a fad (or, if it is a fad, it’s at the very least one that has been around for quite a while and is still going strong).

I think that Wilson’s paper represented a rather myopic view of what knowledge management supposedly is, based primarily upon what IT consulting firms consider to be “knowledge management” (what even he described as “re-branding” by many). Wilson also seems to lump some of those IT consulting firms into the general category of “management consultant,” but even the other “management consultants” he mentions more often seemed to define KM in terms of IT (and, quite typically, “their” IT solution). That view, though, ignores the whole universe of organizations that do recognize that KM is more than IT (although I’ll admit that this universe is smaller than I’d prefer it to be), and it ignores several huge issues regarding the strategic implementation of KM (I’m getting to that below!).

So…anyway, Adi’s post brought to my attention a working paper published by Harvard Business Review last year that I had missed – “Using What We Know: Turning Organizational Knowledge into Team Performance (PDF)” (Staats, Valentine and Edmondson).

And that working paper raised some questions about whether or not an investment in knowledge management by organizations positively affects team performance. According to a 2008 study (“The knowledge management spending report,” Murphy and Hackbush), “U.S. companies spent approximately $73 billion on knowledge management initiatives.”

However, I’d like to make two important points about that report (published by AMR Research, and available for download at only $4,000!):

  • The research was focused on providing information on “key application markets of Knowledge Management Software”
  • They collected data and reported on IT spending by “350 enterprise technology decision makers in the United States and Europe” with the “top 50 to 100 vendors” of….IT.

Which sounds an awful lot like the technology-driven side of KM (“Knowledge Management Software”), with no exploration of the non-technology side (see below). So to me, that sounds more like a bunch of money spent on IT, and not really spent on KM.

And so, armed with the IT spending data from the Murphy and Hackbush report, the Staats, Valentine and Edmondson HBR paper then sought to determine whether the “captured knowledge had a positive effect on the team’s efficiency and on project quality.” Which, unfortunately, is not a valid starting point for determining the benefit (or lack thereof) of KM, and it reflects what seems to me to be a weak grasp of some of the fundamental concepts of KM as it is applied in a modern organization.

Their conclusion? Drum roll please….

That this available captured knowledge DID have a positive effect on efficiency, and DID NOT have a positive impact on project quality.

Okay….now, have you been following along? Good. Let’s toss out one more….let’s call it an interesting claim about KM. This one is from an online Wall Street Journal article, “Tapping Into Social-Media Smarts”:

“About half of company knowledge-management initiatives stagnate or fail.”

No reference source for this big bold claim though. Sorry, but my “bull crap” detector is honking in the background. Reported by whom? Those same organizations that believe that IT is KM?

Which brings me back around to the whole “KM is dead” or “just a fad” viewpoint.

If you hear someone claiming that KM is dead (or dying), or stating that they think it is a fad….then they should probably consider the possibility that THEY WEREN’T REALLY DOING “KM” TO BEGIN WITH. And so the supposed “failures” that do get reported are more likely to be failures of the wrong strategy or solution (and the failure of those implementing it to recognize that it was the wrong approach!).

I think that the quote, “A fool with a tool is still a fool” applies rather neatly here.

For example, in the HBR study they discussed the significant amount of knowledge available in repositories. Best practices, lessons learned. Sounds good. Unfortunately, none of that is what we’d call “new knowledge.” New to them perhaps, but not new knowledge.

Why is that important? New knowledge leads to innovation. “New to you” knowledge leads to improvements (i.e., improving efficiency here by learning from what is done over there). Measuring efficiency may improve a process, but measuring the required results leads to improvements in the results (and often also improvements in things like efficiency). So if they were hoping to see how knowledge led to improvements in quality (or innovation), then why were they instead measuring efficiency?

Am I getting a “lightbulb” moment yet? (It helps if you imagine Gru from the movie “Despicable Me” saying “liiiightbulb” whenever he got a new idea.)

See, that’s kind of the point of codified knowledge. It’s quite useful when you deal with the exact same problems over and over, and provide nearly identical solutions to each (the “personalization” vs. “codification” distinction, per Hansen, Nohria and Tierney). Following the same set of rules, same drawings, same project work breakdown structures, and so on….over and over can do wonders for improving project and process efficiencies (but not necessarily anything for the desired results…you can become highly efficient at providing exactly the w-r-o-n-g result).

Unfortunately, that’s often not the world of today’s projects where it is increasingly more critical to achieve the right results to remain competitive.

I offer up the following sanity check as proof of that. Actual mileage may vary, but a show of hands please:

Option A — When you are about to begin a new project, do you rush to the project repositories and download project files and simply change out the dates? (Codification)

Or…

Option B – Do you go seek out those who just completed the last similar project to get the “real” perspective on what you need to know? (Personalization)

And guess which one is going to be more supportive of innovation, quality improvements and such? Right, personalization, which works well when the work pace is such that you don’t really have the time to codify everything important, or when your project stands on “shifting sand” that frequently changes your “ground rules.” And this is well supported in related research (Zack) regarding what is termed “exploration” and “exploitation.”

  • Exploration is dependent upon new knowledge and leads to innovation.
  • Exploitation is dependent upon existing knowledge, which leads to improvements like efficiency gains.

So with that in mind, I’m kind of at a loss as to why someone was expecting to see quality improvements (or innovation) based on using existing knowledge in repositories. I think that perhaps the fatal flaw in the HBR paper was that their research drew upon “information systems literatures,” and so they didn’t have a good grasp of what knowledge was needed where and why (i.e., they examined research revolving around IT and codified knowledge, rather than socialized knowledge sharing amongst the people in various project teams).

In short, they were looking for the wrong knowledge and the wrong knowledge utilization if the goal was to determine the contribution that knowledge makes to improving project quality/results.

If you were really running with this, you may even have been thinking about efficiency being a lagging indicator vs. quality being a leading indicator. If so, bonus points to you.

Is KM a fad? I’m pretty sure that after more than 20 years, and about 10 years after the predictions that it was a fad and about to fade away….it’s not a fad.

Is KM dead? Don’t think so. But I’ll qualify that by saying that perhaps what IS dead (or at least should be dead!) is believing that KM is all about IT and “Knowledge Management Software.” It’s much more than that. And that’s the point of having a KM strategy that is crafted to address critical organizational gaps. If you have a need for innovation, or if quality is a big issue, then you must address those gaps appropriately with the right type of KM strategy. And you need to have the right metrics to evaluate the effectiveness of that strategy in closing the right gaps.

Dr. Dan's Daily Dose:
Knowledge Management is not dead, and after all these years, it clearly isn’t a fad. To be effective, though, it is critical to have in place a KM strategy that is designed to close the right knowledge gaps. And it is crucial to also utilize the right KM metrics. If you’re seeking to improve the effectiveness of projects (or programs, etc.), then you’d best be using leading indicator metrics that measure that, instead of lagging indicators that do not support your strategic goals for KM. KM “failure” is almost certainly more about KM strategy failure or KM implementation failure than any failure of KM itself.