Artificial Intelligence is Transforming Crisis Exercising
When OpenAI released ChatGPT in December 2022, the risk landscape changed overnight, and so too did crisis exercising. In January 2023, Stanford published its report "Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations".
The three primary takeaways for risk professionals concern how generative language models can mimic actors, behaviours, and content:
- Actors: Language models make it easy for anyone to impersonate somebody else more convincingly.
- Behaviours: Generating personalised content is cheaper, and chatbots are more realistic.
- Content: Messages are more impactful and persuasive than those of traditional propagandists, especially propagandists who lack linguistic or cultural knowledge of their target audience.
This means a wider range of disgruntled customers and activists can run large-scale, reputation-damaging campaigns. However, the same AI tools can be used for good, making scenario design more realistic and engaging. Join Charlie Pratten, Crisis Training Coordinator at Conducttr, for an overview of how AI is going to transform crisis exercising for the better.
Presenter: Charlie Pratten, Crisis Training Coordinator at Conducttr