China Daily: AI in Banking and Insurance
2024-10-08 | IMI
The article was published in China Daily on Oct 4th, 2024.
The adoption of predictive AI systems by banks, including ML models, was reported to have increased rapidly in 2022-23, primarily in areas such as operations, risk modelling, and pattern recognition for fraud and financial crime prevention. The advent of Generative AI (GenAI), particularly over the last 16-18 months, has attracted the sector's interest in applications and use cases focused on harnessing the capabilities of Large Language Models (LLMs) and other GenAI models.
Reported ML-based use cases in banking include anti-money laundering (AML), fraud detection and identity verification (Know Your Customer), and these are covered by the traditional model risk management, governance and data protection frameworks already in place. Particularly in applications related to financial crime, dynamic risk analytics tools have changed the way AML and financial crime checks are conducted. These tools leverage data analytics for more targeted identification of instances of financial crime while reducing the number of false positives.
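The false-positive trade-off described above can be made concrete with a small sketch. The data, model and threshold values below are entirely illustrative (synthetic data, a simple logistic regression) and are not the tools banks actually deploy; the point is only that raising the alert threshold cuts the share of legitimate transactions flagged, at some cost in fraud cases caught.

```python
# Illustrative sketch only: tuning an alert threshold on a fraud classifier so
# that fewer legitimate transactions are flagged (false positives), while the
# recall on true fraud cases stays acceptable. Data and model are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic imbalanced data: ~3% of transactions are "fraud"
X, y = make_classification(n_samples=5000, n_features=10,
                           weights=[0.97, 0.03], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]  # fraud score per transaction

def alert_stats(threshold):
    """False-positive rate and fraud recall at a given alert threshold."""
    flagged = scores >= threshold
    fpr = flagged[y_te == 0].mean()     # share of legitimate txns flagged
    recall = flagged[y_te == 1].mean()  # share of fraud cases caught
    return fpr, recall

# Raising the threshold reduces false positives at some cost in recall.
for t in (0.1, 0.3, 0.5):
    fpr, recall = alert_stats(t)
    print(f"threshold={t:.1f}  FPR={fpr:.3f}  recall={recall:.3f}")
```

In practice the threshold (and far richer features than this toy example uses) would be chosen against a firm's own tolerance for missed fraud versus investigation workload.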
Initial use cases of GenAI deployed in banking were reported to be internal-facing, and include summarisation, translation and information retrieval (particularly where context is important). Code generation was noted as an important area of current experimentation with GenAI tools in banking, given the sheer volume of existing apps in place to serve customers. Customer service was noted as an important area for the future application of GenAI tools. Banks reported a risk-aligned approach to the deployment of GenAI models, enhancing existing AI governance frameworks and using models that are aligned with these frameworks (e.g. use of small language models, training of these models with proprietary data). Although direct client interaction with such models remains limited today, participants expect the use of GenAI models to expand in the future as customers increasingly come to expect them.
In insurance, predictive AI models were reported to be extensively used in underwriting, risk assessment, risk modelling, as well as claims management and handling across insurance lines. The introduction of GenAI enables insurance companies to better process language-driven information, primarily in handling policies and claims. The translation capabilities of AI models allow for efficient cross-country comparison of claims and policies. LLMs also help agents retrieve information from better-informed systems when giving advice, and offer efficient, simplified communication for complex products (e.g. life insurance, pensions). Nevertheless, human involvement remains essential in the process, especially when interacting with clients.
In terms of materialised benefits, the use of AI tools in insurance was reported to offer operational efficiencies and a better customer experience (e.g. faster claims processing). AI offers a deeper understanding of insurance losses, allowing for better coverage of client needs, including better pricing. GenAI's cross-language capabilities allow information to be analysed in greater depth and across countries.
Both in banking and insurance, culture, education and literacy were highlighted as important areas that remain to be addressed in AI governance frameworks, including for GenAI. Understanding and managing AI tools is a responsibility that extends to all levels of organisations, as these tools are widely accessible and not limited to experts, unlike ML models. Knowing what questions to ask, how reliable the outputs are, and the ethical considerations that relate to the use of such models were all noted as important for users. Industry considered the recruitment of diverse external talent and the upskilling of existing staff to be of utmost importance.
The importance of data was highlighted, in particular aspects such as data accessibility, training data, data flows, and the integration of financial and non-financial data in new AI tools. The frameworks that regulate data flows and data treatment across industries are believed to significantly influence the outcomes and value generated by these new tools.
The reliance on third-party service providers was highlighted as a critical issue in the wider use of AI in finance. Financial services firms already have established processes for third-party services such as cloud, but these need to be extended to AI model and data providers. Participants noted the need for transparency, especially in contracts with third-party service providers, to ensure visibility of AI's integration and impact, as ultimate accountability for AI outputs remains with the financial service provider. The risk of trust erosion was also discussed in this context.
Participants noted that current regulations sufficiently cover AI use in banking and insurance, and regulation was not considered an impediment to the materialisation of AI-derived benefits. However, they expressed concerns that less stringent regulations in some sectors could attract regulated activities, posing a risk of regulatory arbitrage. The need for a risk-based approach to model risk management was emphasised, with concerns raised about overly broad definitions of AI by some policymakers. One example cited was regulation classifying models such as Generalised Linear Models (GLMs) as AI, increasing compliance requirements. Suggestions relating to regulation included metric-linked rules and machine-readable formats for easier compliance.
Academic research in AI, particularly GenAI, is growing but requires a more interdisciplinary approach. In terms of model performance, some academic research highlights the superior performance of ML, and sometimes GenAI, in specific fields such as credit, financial predictions, portfolio allocation and insurance consumer pricing. However, there are potential disadvantages, notably in explainability, robustness of model outputs and computational costs. The absence of simple metrics for these factors complicates the understanding of potential trade-offs such as explainability versus accuracy. Ultimately, wider deployment of advanced GenAI tools depends on each financial institution's risk appetite.
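To illustrate the GLM point raised above: a Poisson GLM for claim frequency is a decades-old actuarial technique, and the sketch below (synthetic data, purely hypothetical rating factor) fits one in a few lines of numpy — underscoring why sweeping such classical statistical models under an "AI" label can sharply expand compliance scope.

```python
# Illustrative sketch: a Poisson GLM for insurance claim frequency, fitted by
# Newton-Raphson on synthetic data. Models like this are classical statistics,
# which is why classing GLMs as "AI" can greatly widen compliance requirements.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Design matrix: intercept plus one hypothetical rating factor
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_beta = np.array([-1.0, 0.5])
y = rng.poisson(np.exp(X @ true_beta))  # simulated claim counts

beta = np.zeros(2)
for _ in range(25):                      # Newton-Raphson iterations
    mu = np.exp(X @ beta)                # expected claim frequency
    grad = X.T @ (y - mu)                # score vector
    hess = X.T @ (mu[:, None] * X)       # Fisher information
    beta = beta + np.linalg.solve(hess, grad)

print("fitted coefficients:", beta)      # should land close to true_beta
```

The fitted coefficients recover the simulated ones; the same model could equally be estimated with any standard statistical package, with no machine learning involved.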