
GenAI's impact on risk management

This is a summary of our group discussion on June 24, 2023. Thanks to all the participants!

 

Application of (gen)AI to “standard” RM activities

·       One member reports using genAI to (re)create risk mitigation plans for program risks; the resulting plans, built on a standardized structure, were more numerous and coherent than those produced by humans, with more discipline in the “plan, do, check” cycles of risk mitigation. This built on their earlier internal “risk wiki”, but genAI is easier to harness, use, and search.

·       Another member reports using genAI as a starting point for writing reports/speeches — not as the final word, but it particularly helps in establishing a comprehensive structure. Others reported using it more generally to “not start with a blank page”.

·       Others prefer to query genAI to sanity-check and complement what they have already written or drafted, to lessen biases.

·       Several members reported using genAI to summarize, and also to compare/synthesize information from different (public) sources (e.g., comparing 10-K risk filings from different companies in an industry). Outside RM, many institutions are developing/calibrating AI models on their own private bodies of knowledge.

·       Since genAI is fairly new, it is going, and will continue to go, through growing pains; any “professional” should be testing, looking, and waiting (some debate on the latter — perhaps influencing instead!).

·       A key question is when there will be a tipping point toward greater trust in the use of AI. While AI has been in development for years or decades, there is now a step change in speed, so some sort of tipping point may come in 2–5 years, not just “in our lifetimes”. The ability to trace the antecedents (“footnotes”) of AI output will also be crucial.

·       For some time, there may be a dichotomy between those trusting AI more than (biased) humans and those trusting humans more than (black-box) AI.

·       Initially, use is likely to be greater in less mature, less well-resourced organizations, but (an opposing voice) the real value may lie where more internal data is available, ready to be summarized.

·       It is a good idea to explore using AI in a sandbox, learn its limitations, and start with work that is neither mission-critical nor highly visible.

·       It was noted that the financial aspects of genAI use are still to be sorted out; ChatGPT queries currently seem to all be loss-making, just as early Google search queries were, and the costs of calibrating an LLM — never mind the R&D — are not negligible.

 

Application of (gen)AI to refocusing on higher-value RM activities

·       The group questioned to what extent Boards and other decisionmakers already base decisions on AI, how that will evolve, and what it means for RM. The general sense is that many still largely ignore it, some want it as a voice, but few trust or outsource to it — subject to change. It is nonetheless incumbent on RM to emphasize what more it can deliver, e.g., foresight, challenge, …

·       A real challenge for RM is to step up and think outside the box, going beyond pure synthesis and extrapolation; the current “risks for year N+1, N+5, …” exercises tend to suffer from this. The solution requires being provocative and watching for early warning signals. There was skepticism that AI will step up to the plate on this anytime soon, but it can perhaps be useful in processing the volume of information flow needed to prompt such thinking.

·       One member’s summary: “AI can describe risks as points. It needs humans to turn them into vectors, with a direction and speed.” Question: Can AI help Pareto-identify the relevant weak signals?

·       There was generally little fear that genAI will obviate or replace “good” RM or decision-making.
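The “Pareto-identify” question above can be made concrete with a toy sketch: score candidate weak signals (signal names and scores here are invented for illustration) and keep only the smallest top-ranked set that accounts for roughly 80% of the total score — the classic 80/20 cut.

```python
def pareto_subset(scores, threshold=0.8):
    """Return the highest-scoring items that together cover
    `threshold` (e.g. 80%) of the total score."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(scores.values())
    picked, cumulative = [], 0.0
    for name, score in ranked:
        picked.append(name)
        cumulative += score
        if cumulative >= threshold * total:
            break
    return picked

# Hypothetical weak signals with illustrative impact scores
signals = {
    "supplier concentration": 40,
    "regulatory chatter": 25,
    "attrition in key team": 20,
    "minor audit findings": 10,
    "office relocation": 5,
}
print(pareto_subset(signals))
# → ['supplier concentration', 'regulatory chatter', 'attrition in key team']
```

The hard part, of course, is producing the scores in the first place — which is where the group saw AI helping with the raw information flow, while humans supply the direction and speed.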

 

Role of RM professionals in considering (gen)AI risks to organizations/society more broadly

·       There will be a need for regulation of, and ethical training of and about, AI and its uses. There is currently some fearmongering and also some self-interested idealism. RM, accustomed to looking at multiple possible “truths”, could be well positioned to navigate this.

·       There are real concerns about what AI means for “what is truth” and “what is understanding” — something we will all have to work through together. There will be stumbles, and there will be schisms.

 

·       There is an opportunity for RM to emphasize how some of the relevant risks (and opportunities) from AI do not exist in isolation, but are part of interrelated risks spanning a range of trends. Should RM play a more active role in defining the framework for AI?
