IAG sponsors 2nd edition of Responsible AI Index, recommits to responsible AI across its suite of AI initiatives
Eighteen months ago IAG sponsored the inaugural Australian Responsible AI Index, and today we are proud to sponsor the second edition of the Index.
Over the last six months we have seen major advances in AI and its mass accessibility on consumer devices. The broad potential of AI to improve the world has never been more visible, but neither has its potential to cause harm. If it was not already clear, it is now certain that businesses need concrete processes in place to develop AI responsibly.
The latest Index report demonstrates the need for businesses to go beyond high-level principles and intentions and to take concrete action on responsible AI. The Index shows that many businesses understand responsible AI principles and have the right intent, but that a gap remains between that good intent and the actions actually taken.
IAG was an early supporter and adopter of responsible AI practices. In 2018, recognising the importance of responsible AI, we supported the creation of the not-for-profit Gradient Institute, and we continue to sponsor the research institute today. In 2020-21 we participated in a pilot study of the Australian AI Ethics Principles. In that case study, we illustrated how AI has improved the customer experience when a car insurance claim results in a total loss.
When an IAG customer lodges a car insurance claim, our AI models predict whether that claim will be a total loss before it has even been seen by a human assessor. If the model predicts a total loss with a suitably high level of confidence, tailored communications are sent to the customer to help them understand the total loss claim process. This was found to materially improve the customer experience.
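To make the idea concrete, the sketch below shows how a prediction above a confidence threshold might gate the tailored total loss communication. It is a minimal illustration only: the model stub, claim fields and 0.9 threshold are hypothetical assumptions, not IAG's actual system.

```python
# Illustrative sketch: a confidence-thresholded total-loss trigger.
# The model stub, claim fields and 0.9 threshold are hypothetical.
from dataclasses import dataclass


@dataclass
class Claim:
    claim_id: str
    features: dict  # e.g. vehicle age, damage indicators, repair estimate


def predict_total_loss_probability(claim: Claim) -> float:
    """Placeholder for a trained classifier's probability output."""
    # A real system would call a trained model here, e.g. model.predict_proba(...)
    return 0.95  # stubbed value for illustration


def route_claim(claim: Claim, threshold: float = 0.9) -> str:
    """Send tailored total-loss communications only when the model is suitably confident."""
    probability = predict_total_loss_probability(claim)
    if probability >= threshold:
        return "send_total_loss_guidance"  # proactive, tailored customer communication
    return "standard_claims_path"          # claim proceeds to normal assessment


print(route_claim(Claim(claim_id="C-001", features={})))  # -> send_total_loss_guidance
```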
This AI system has now directed communications to around 20,000 Australians facing the potential total loss of their vehicle, many of them likely for the first time. We continue to see positive results from this work, and no significant complaints or issues relating to it.
The application of responsible AI practices - not just principles - was critical to the success of this project. As part of AI delivery, IAG aligns its actions with the recommendations in the Responsible AI Index Report, including:
- We assess the potential impacts - both positive and negative - for consumers affected by the AI system. This allows any negative impacts to be mitigated and outcomes to be assessed for fairness
- We test models for unfairness or bias, and take steps to mitigate these issues where required (a simplified illustration of this kind of check appears after this list)
- We use pilot studies to test AI systems at a small scale, to limit the magnitude of any errors or unforeseen issues
- We establish monitoring procedures to proactively detect any changes or degradation in system performance
- We ensure individuals involved in AI work are aware of responsible AI concepts, and we ensure internal responsible AI experts are available to support them
- We ensure governance procedures for each AI system are established, including clarifying responsibility for the system
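As an illustration of the bias-testing practice above, the minimal Python sketch below compares positive-prediction rates across groups, a demographic-parity style check. The data, group labels and 0.05 tolerance are hypothetical and chosen only to show the shape of such a test, not how IAG's models are actually evaluated.

```python
# Illustrative sketch: comparing model outcome rates across groups
# (a demographic-parity style check). Data, groups and the 0.05
# tolerance are hypothetical.
from collections import defaultdict


def positive_rate_by_group(predictions, groups):
    """Return the share of positive predictions for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred == 1)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}


def parity_gap(rates):
    """Largest difference in positive-prediction rate between any two groups."""
    return max(rates.values()) - min(rates.values())


preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "B"]

rates = positive_rate_by_group(preds, groups)
gap = parity_gap(rates)
print(rates, gap)
if gap > 0.05:  # hypothetical tolerance; real thresholds need domain judgement
    print("Review model for potential unfair treatment of one group")
```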
IAG is using AI in a variety of contexts to improve both its operations and customer outcomes. Examples include:
- AI systems designed to assist with claims processing - fully automating some simple administrative tasks and assisting staff with recommendations for other, more complex decisions. This improves the speed and accuracy of claims decisions and, in turn, the customer experience.
- AI systems designed to help our fraud investigators prioritise the many fraud alerts generated (a simplified illustration appears after this list). This strengthens our investigators' ability to accurately detect claims fraud, which in turn improves outcomes for the vast majority of customers.
- AI systems designed to improve the insurance quote experience by dynamically adjusting webpages and default options depending on the context of the individual. This improves the customer experience and supports IAG's growth.
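The sketch below shows, in simplified form, how alert prioritisation of this kind can work: each alert receives a model risk score and investigators review the highest-scoring alerts first. The scoring function and alert fields are hypothetical placeholders, not IAG's fraud models.

```python
# Illustrative sketch: ranking fraud alerts by a model risk score so
# investigators see the highest-risk cases first. The scoring function
# and alert fields are hypothetical.
from typing import Dict, List


def score_alert(alert: Dict) -> float:
    """Placeholder for a trained fraud-risk model; returns a score in [0, 1]."""
    # A real system would combine many claim, customer and network features.
    return min(1.0, 0.2 * alert["prior_alerts"] + 0.5 * alert["inconsistency_flag"])


def prioritise_alerts(alerts: List[Dict]) -> List[Dict]:
    """Return alerts sorted from highest to lowest estimated fraud risk."""
    return sorted(alerts, key=score_alert, reverse=True)


alerts = [
    {"id": "A1", "prior_alerts": 0, "inconsistency_flag": 0},
    {"id": "A2", "prior_alerts": 2, "inconsistency_flag": 1},
    {"id": "A3", "prior_alerts": 1, "inconsistency_flag": 0},
]
print([a["id"] for a in prioritise_alerts(alerts)])  # -> ['A2', 'A3', 'A1']
```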
In all these AI systems and more, IAG operates using a clear and established set of responsible AI practices, aligned to the recommendations made in the Responsible AI Index report.
IAG is a strong believer in the opportunity AI presents to deliver better outcomes for our customers and our business. Our ongoing investment in developing responsible AI practices is a key enabler, giving us the confidence to advance AI knowing that we are doing so in a way that is beneficial, fair and well governed.
To read the Responsible AI Index 2022 report and media release go to: https://www.fifthquadrant.com.au/2022-responsible-ai-index