AI in Finance: Navigating the Critical Risks Before Realising the Rewards
AI is transforming the finance industry, offering exciting opportunities alongside serious risks. As a follow-up to our recent webinar on Gen AI, we are publishing two articles: the first on the biggest risks of AI, and the second on industry-specific use cases.
In this first article, we delve into some of the risks of AI.
Risk 1: Hallucinations - how can AI hallucinate so confidently?
There is a widely reported case in which a judge caught a lawyer submitting case citations that did not exist. There are other examples of hallucinations in Australia, including the LJ Hooker case.
So why does this happen? AI hallucinates because it does not actually reason; it predicts based on patterns. That is, it generates text by predicting which words statistically follow others, not by fact-checking or by understanding the physical world around us. So when it doesn’t “know” something, or has incomplete or conflicting data in its training, it guesses in a way that sounds plausible but can be totally false.
This fundamental nature leads to "hallucinations": fabricated data, cases, or analysis. The now-infamous legal case of invented precedents wasn't an anomaly. And precisely because AI sounds so confident, incorrect responses erode trust.
Given this is how Large Language Models (LLMs) are set up, there is no 100% guaranteed way to remove hallucinations.
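To make the mechanism concrete, here is a deliberately tiny sketch of next-token prediction. The probability table is invented for illustration; a real LLM learns billions of such statistics, but the key point is the same: there is no fact-checking step anywhere in the loop.

```python
# Toy next-token predictor (not a real LLM; probabilities are invented).
# For each word, the "model" stores learned probabilities of the next word.
next_word_probs = {
    "cite":    {"Smith": 0.6, "Jones": 0.4},
    "Smith":   {"v.": 1.0},
    "v.":      {"Avianca": 0.5, "Brown": 0.5},
    "Avianca": {"(2019)": 0.7, "(2021)": 0.3},
}

def generate(context, steps):
    """Greedily append the statistically most likely next word."""
    for _ in range(steps):
        candidates = next_word_probs.get(context[-1])
        if not candidates:
            break
        # No truth check here: just the highest-probability continuation.
        context.append(max(candidates, key=candidates.get))
    return " ".join(context)

# Produces a plausible-looking citation that was never verified to exist.
print(generate(["cite"], steps=4))  # "cite Smith v. Avianca (2019)"
```

The output looks like a genuine citation because each word plausibly follows the last; nothing in the process ever asked whether the case exists.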
What can we do to reduce hallucinations?
The most basic way to reduce hallucinations is to have another person review the output. Depending on the use case, treat the AI as a junior intern: its work needs checking before it is relied upon.
For increased accuracy, a second AI model can be used to cross-check the first model's output, as in the sketch below.
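Here is a minimal sketch of that cross-checking pattern. The call_llm helper is a hypothetical stand-in for whichever provider's API you have licensed, and the prompts are assumptions rather than a tested recipe; the value is in the structure: one model drafts, a second model reviews, and a human still signs off.

```python
# Sketch of the "second AI checks the first AI" pattern.
# call_llm is a hypothetical stand-in for your provider's chat API;
# replace it with the real client call for the tool you have licensed.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with your provider's API call.")

def draft_and_verify(task: str) -> dict:
    # The first model drafts the answer.
    draft = call_llm(f"Complete this task:\n{task}")

    # The second model acts only as a sceptical reviewer.
    review = call_llm(
        "List every factual claim, figure, or citation in the text below "
        "that you cannot verify, or reply 'NO ISSUES FOUND'.\n\n" + draft
    )

    # A human still makes the final call on anything flagged.
    return {"draft": draft, "review": review}
```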
Risk 2: Data security - how do we create our own secure environment?
In addition to the robust regulatory requirements for financial services, customers and the community expect our confidential data to be kept secure.
One practical application of AI is to create custom GPTs based on your own datasets. But how do we ensure the confidentiality of our information is protected, our privacy is respected and copyright obligations are met?
There are a number of options available, depending on your use cases, your risk appetite, your client requirements and any requirements from your global headquarters.
The first step is to get a paid version of an AI tool, even for personal use. Paid plans give you the option to opt out of having your data used to train the provider's LLMs. This is the first action we recommend everyone take: review your data security settings.
Many general AI tools also offer Team editions. We have seen many successful use cases built on this approach, even by firms with only a small technology team in place.
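To show what a "custom GPT over your own datasets" looks like architecturally, here is a minimal retrieval sketch: the documents stay in your own store, and only the few passages relevant to a question are sent to the model. The keyword scoring and call_llm helper are illustrative assumptions; a production system would use embeddings, access controls and an enterprise-grade API.

```python
# Illustrative retrieval sketch: documents stay in your own store,
# and only relevant snippets are sent to the model. Keyword overlap
# stands in for the embedding search a real system would use;
# call_llm is a hypothetical wrapper around your provider's API.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with your provider's API call.")

documents = {
    "fund_policy.txt": "Redemptions are processed monthly with 30 days notice.",
    "fee_schedule.txt": "The management fee is 1.25% per annum of net assets.",
}

def retrieve(question, k=1):
    """Rank documents by naive keyword overlap with the question."""
    words = set(question.lower().split())
    scored = sorted(
        documents.values(),
        key=lambda text: len(words & set(text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(question):
    context = "\n".join(retrieve(question))
    # Only the retrieved snippets, not the whole dataset, reach the model.
    return call_llm(f"Answer using only this context:\n{context}\n\nQ: {question}")
```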
Risk 3: How do we protect ourselves against deepfakes?
Beyond hallucinated text and numbers, AI's ability to generate hyper-realistic text, images, and voice ("deepfakes") poses a systemic threat. Can you trust that analyst report, that CEO statement, or that transaction record? The lack of reliable, industry-standard watermarking or provenance tracking creates fertile ground for fraud and market manipulation.
Our call to action is to review your controls in areas where deepfakes could impact your business. This could mean winding back some of the technology you have been using. For example, many banks have stepped back from using voice identification for phone banking.
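One concrete control worth illustrating here is provenance checking: cryptographically signing official communications so the recipient can verify the source, however convincing a fake looks. The sketch below uses Python's standard hmac library with an invented shared key; a real scheme would rest on proper key management and, more typically, public-key signatures.

```python
import hashlib
import hmac

# Illustrative provenance check with a shared secret key. The key here
# is invented; in practice it would live in a key-management system,
# and public-key signatures (no shared secret) are the usual choice.
SECRET_KEY = b"replace-with-managed-key"

def sign(message: str) -> str:
    """Produce a tag only a holder of the key could have generated."""
    return hmac.new(SECRET_KEY, message.encode(), hashlib.sha256).hexdigest()

def verify(message: str, tag: str) -> bool:
    """A deepfaked or edited message fails, however realistic it looks."""
    return hmac.compare_digest(sign(message), tag)

statement = "CEO statement: results will be released on 15 August."
tag = sign(statement)

print(verify(statement, tag))                # True: authentic
print(verify(statement + " (edited)", tag))  # False: tampered or faked
```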
Risk 4: What is AI’s broader impact on the environment and society?
The computational power driving advanced AI, particularly LLMs, is significant. That the US is exploring nuclear power for AI data centres underscores the scale. In Australia, amid energy-transition pressures and cost volatility, this raises a critical strategic question: do the projected efficiency gains from AI offset its significant, ongoing environmental footprint and operational cost?
The other environmental footprint consideration is the water and energy required for cooling, both in data centres and in generating the power they consume. This has been discussed in recent articles in the AFR.
Other areas being debated include the potential for a universal income if the productivity gains come to fruition, and taxing the companies that benefit most from AI so the gains are distributed more fairly.
These issues currently have little visibility in industry and the media, but that is likely to change as the bigger questions about AI's impact on society come to light.
AI is a technology that many are still grappling with, and certainly one on which governments should collaborate to achieve better outcomes.
In the next article, we will take a deep dive into the use cases and tools for AI.
About the Author: Alice Tang
Alice Tang is the Co-CEO of Amplify AI Group, a consulting firm that advises on, builds and coaches AI for the finance industry. As the AI expert for financial services, Amplify AI Group’s mission is to solve your hardest problems with AI and data.
Alice has over 25 years of industry experience and was most recently the COO of MA Asset Management, where she partnered with the business to grow its assets under management 2.9x, from $3.5bn to $10.3bn, over her six-year tenure. Her broad remit included product, fund operations, fund accounting, client services, risk management and setting up the data, technology and AI team that helped drive the growth of the business.
Prior to MA Asset Management, Alice spent 16 years at Macquarie Group, where she held senior leadership positions across investment, fund management, and risk and compliance roles.
Alice has a BComm (Co-Op) from UNSW and has undertaken her MBA (Exec) with the AGSM. She is a Fellow of Chartered Accountants ANZ and a GAICD.
You can connect with Alice Tang on LinkedIn.