Just a small end-of-the-week thought about knowledge sharing in general, learned through a lot of experience:
- data and information sharing is critically important...
- data and information sharing increases in value when it comes out of a database and one can make sense of it in one's own context and
- see how to apply it to one's real work (this has HUGE implications for practical application; another commenter talked about leadership, but it is really about how KS and learning itself pragmatically fit into workflow)
- and discuss it with other peers (communities of practice, etc.)
- and share the experiences of that contextualized knowledge back out, to keep the knowledge flowing (continuous learning).
The second thought digs deeper into the sense-making. Past experience may or may not be useful in future application, particularly in complex situations. So knowledge sharing has to be done with clarity about the level of complexity involved. If complexity is low, KS can support replication. If complexity is high, we have to take a probe-sense-respond approach, which may lead to adaptation or to something novel. (See the work of Dave Snowden.)
So what does this mean in the context of resilience? From my distant view (I don't work on resilience directly), it seems to me that whatever approach you take, it has to have its roots in complexity theory and practice. There is a significant difference between wanting to avoid reinventing the wheel and adaptive, forward-looking learning that allows improvement in a complex context. So when we think of "platforms," I hope we think far beyond technology and really dig into the business practices in these contexts.