ARTICLE
8 August 2024

Modelling A Risk Assessor - Automated Risk Assessments Through AI

Transforming risk assessments with AI: automate, enhance and streamline documentation through advanced language and vision models. Discover what the future of precise and efficient risk evaluation holds.

1. Introduction

The field of Artificial Intelligence (AI) has seen tremendous growth and investment in recent years. Large Language Models (LLMs) such as GPT [1], Gemini [2] and Llama [3] are being utilised across multiple industries to automate procedures and workflows once thought to be possible only within the realm of human intelligence. Notably, recent endeavours by Microsoft to automate documentation procedures in nuclear power plants [4] underscore the vast potential of LLMs across diverse sectors. In this context, we aim to explore the implications of these advancements for the risk assessment process within insurance risk management.

The task of conducting a risk assessment is a complex and dynamic problem with multiple variables. It takes several years for a risk assessor to be trained and equipped with the knowledge to conduct a risk assessment efficiently and accurately. A few years ago, effectively modelling the process a risk assessor follows to conduct a risk assessment might have been considered impossible. However, due to advances in AI technologies, this may very soon change.

While most users of language models likely find benefit in getting quick answers to complex questions, the real strength of a language model lies in fine-tuning and retraining it to become an expert system for highly technical and complex documentation tasks. With today's technology it is entirely possible to conceive of an LLM-based 'risk assessor' capable of producing write-ups, including risk assessment reports. Furthermore, with the advent of vision-language models (VLMs) [6], it may be possible to produce this documentation from video and image data of walkthroughs of a site. This piece discusses how this is possible and how it is likely to change the industry.

2. What is a Language Model?

A language model, in its simplest form, is an algorithm that models human language very well. This understanding is gained during its training process. These models are first trained on the task of predicting the next word in a sentence, given all previous words. The model learns this by studying a large corpus of diverse text documents. In doing so, it picks up patterns by analysing relationships between words and begins to build something resembling a knowledge base or memory. Language models are then guided towards becoming expert systems through a process of fine-tuning, in which human experts construct exemplary input/output pairs, such as questions and answers. Over time the model learns to answer questions that it has not specifically been trained on but that are similar to samples it has seen before. This is how a language model can be trained to become an expert for a specific domain.
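The next-word-prediction idea can be illustrated with a deliberately tiny sketch. This is not how production LLMs are built (they use neural networks, not word counts), but a toy bigram model shows the core principle of learning which word is likely to follow another from example text:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the large text collection an LLM trains on.
corpus = (
    "the site has a fire risk . "
    "the site has a sprinkler system . "
    "the warehouse has a fire alarm ."
)

# Count how often each word follows each preceding word (bigram statistics).
transitions = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    transitions[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently seen after `word` during training."""
    return transitions[word].most_common(1)[0][0]

print(predict_next("has"))  # "a" — it follows "has" in every training sentence
```

Real language models do the same thing at vastly greater scale and with far richer statistical machinery, conditioning on the whole preceding context rather than a single word.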

An LLM for risk engineering would be trained slightly differently. Instead of a diverse corpus of text, it would be trained only on risk-engineering-specific data, and instead of questions and answers for fine-tuning, it can be provided with data about a site and the adjoining documentation. The repository of data used to train this LLM does not need to be especially diverse, as it is only going to be used for one domain: risk engineering. This simplifies the task massively, as there is no expectation of the model being a jack of all trades; instead, it is a master of one.
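In practice, domain fine-tuning data of this kind is commonly prepared as input/output pairs serialised as JSON Lines. A minimal sketch follows; the field names and site details are purely illustrative and do not follow any particular vendor's schema:

```python
import json

# Hypothetical site data paired with the report text an assessor wrote for it.
examples = [
    {
        "input": "Site: food warehouse; sprinklers: none; nearest hydrant: 400m",
        "output": "The absence of a sprinkler system materially increases fire risk...",
    },
    {
        "input": "Site: office block; sprinklers: full coverage; alarms: monitored",
        "output": "Active fire protection is adequate for the occupancy...",
    },
]

# Serialise to JSON Lines: one training example per line, the common
# interchange format for fine-tuning corpora.
jsonl = "\n".join(json.dumps(ex) for ex in examples)
print(jsonl.splitlines()[0])
```

Each line becomes one demonstration for the model: given site data like the `input`, produce documentation like the `output`.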

LLMs produce data-driven insights learnt through the training process, in a similar way to how a risk assessor produces documentation learnt through their own training. Once an LLM has seen enough examples of data-documentation pairs, it can begin to predict the documentation from the data. A model trained for this would require only that a risk assessor note down key information; the LLM would then compile that data into high-quality documentation.

3. Vision Language Models

Recent advances in expert systems built on the language modelling described above introduce VLMs, which take input from multiple modalities, such as video or images combined with text. In these systems, the model is not prompted by text alone but by an image or video adjoined with text. Following these design principles, it may be possible to build a similar model for writing risk assessments purely from video or image data. In this setting, the VLM can be tasked with producing risk assessment documentation from a video walkthrough of a site. A risk assessor only has to capture a video stream of themselves navigating around the site; the vision-language model will then process that video into documentation.
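Conceptually, the input to such a system pairs sampled video frames with a text instruction. The sketch below shows one way such a multimodal prompt might be structured; the keys, file names and helper function are hypothetical and do not correspond to any real VLM API:

```python
# Illustrative structure only: how a video walkthrough plus a text instruction
# might be packaged for a vision-language model. No real API is assumed.
request = {
    "instruction": "Draft a risk assessment report for the site shown.",
    "frames": [
        {"timestamp_s": 0.0, "image": "frame_0000.jpg"},
        {"timestamp_s": 1.0, "image": "frame_0001.jpg"},
    ],
}

def summarise_request(req: dict) -> str:
    """Describe the multimodal prompt that would be sent to the VLM."""
    return f"{len(req['frames'])} frames + 1 instruction"

print(summarise_request(request))  # "2 frames + 1 instruction"
```

The key design point is that the visual stream and the textual task description travel together: the instruction tells the model what documentation to produce, and the frames supply the evidence to produce it from.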

4. Data and Experimentation

The fundamental concept behind this idea is that behind every written risk assessment, fire safety plan and even insurance premium lies data that a risk engineer and underwriter had to process. In a similar sense to how large language models are trained to become experts through examples of questions and answers, a VLM can become an expert system for writing risk assessments from video data. Training such a model requires video sequences of risk tours paired with the adjoining documentation. Through this process it may be possible to model what the risk assessor infers during a risk assessment. The video stream acts as the 'eyes' of the risk assessor, and the VLM acts as the brain. Together they can piece together the report writing and documentation.

To train a robust and reliable LLM-based risk assessor, a large amount of data labelled with expert descriptions is required. This is one area where companies such as WTW may have an advantage, as they already own several thousand high-quality risk assessments. The task would be to adjoin these risk assessments with video input, likely broken down into many smaller chunks.

Industries such as healthcare are already incorporating these models into their workflows [5], with LLMs used to produce clinical documentation, allowing professionals to focus their expertise where it is needed most.

5. Ethical Considerations

Despite the potential, there are challenges and concerns to take into consideration, such as the need for large amounts of high-quality data to train these models effectively. However, in industries like insurance, abundant high-quality data already exists, which could facilitate the development of robust AI systems. Another concern is the ethical ramifications of this approach. Is it safe to have an algorithm construct a fire safety plan? What are the risks if incorrect output is produced? Our research aims to minimise these risks and to design experiments that take these factors into consideration. The first few iterations are likely to act as an aid to a risk assessor, or to require proofreading and supervision, rather than producing documentation unassisted. Over time, once systems can be relied upon, they can act with more autonomy.

6. How this can change the industry

The integration of LLMs and VLMs into the realm of risk assessments holds immense promise for revolutionising the insurance industry. By leveraging the capabilities of these models, it may become feasible to automate the generation of risk assessment documentation, thereby streamlining processes, enhancing efficiency, reducing barriers to entry and even improving accuracy. The availability of high-quality data presents an opportunity to develop robust AI systems capable of performing complex tasks. Other service-based industries with years of historical data have already begun experimenting with models such as these. The future risk assessor is likely to interface with expert systems while conducting risk assessments. First-generation models will likely only be able to conduct small, basic tasks and will act as personal assistants to professionals. With today's rapid pace of algorithm development and improvement, it is not inconceivable to imagine a world in which these models can be adapted to produce the entire set of documentation required within a risk assessment. Collection of this video data of sites could be streamlined through robotic systems such as small drones or handheld devices sent out to clients around the world. This would lead to more risk assessments being completed in a much shorter amount of time, potentially with higher precision and accuracy.

Previous work conducted by the WTW Research Network in collaboration with Loughborough University experimented with robotic systems automating the construction of fire asset floor plans. Current prototypes can construct precise floor plans in a matter of seconds. First experiments are likely to interface with this system directly, taking the input map of a site and producing documentation in the style of a risk assessment report.

Footnotes

1. Achiam, Josh, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida et al. "GPT-4 technical report." arXiv preprint arXiv:2303.08774 (2023).

2. Team, Gemini, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut et al. "Gemini: a family of highly capable multimodal models." arXiv preprint arXiv:2312.11805 (2023).

3. Touvron, Hugo, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov et al. "Llama 2: Open foundation and fine-tuned chat models." arXiv preprint arXiv:2307.09288 (2023).
4. Solomon Klappholz, "Microsoft is using AI to get its nuclear projects approved in the US", ITPro, WSJ.

5. Ford Parsons, MD; Roy Gill, MD; Bill Hayes, MD, "How Can Generative AI, Specifically LLMs, Aid in Documentation", HIMSS.

6. Lin, Bin, Bin Zhu, Yang Ye, Munan Ning, Peng Jin, and Li Yuan. "Video-LLaVA: Learning united visual representation by alignment before projection." arXiv preprint arXiv:2311.10122 (2023).

By Leon Davies and Simon Sølvsten

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
