AI in Financial Services: Balancing Applications Against Pitfalls


AI has diverse applications in finance, but it also introduces pitfalls that can undermine trust between consumers and the industry. Transparency and explainability are crucial to addressing these concerns.


SwissCognitive Guest Blogger: Miranda Hartley – “AI in Financial Services: Balancing Applications Against Pitfalls”


Once artificial intelligence (AI) entered the financial services sector, the industry would never be the same. Banking, insurance and investment management all present excellent opportunities for leveraging AI’s analytical capabilities. In June 2023, it was reported that up to 40% of open roles advertised at Wall Street banks were AI-related. The ‘AI Arms Race’, as it has been called, has seen financial services firms scrambling to implement the most intelligent, future-friendly AI-based technology.

What’s particularly interesting about the stampede towards AI is that there are multiple areas of interest in financial services. There’s customer-facing AI – conversational bots – as well as back-office functions: document extraction and labelling, KYC and risk monitoring. Four of the most interesting and topical AI applications are instant credit decisioning, portfolio management, sentiment analysis and fraud detection.

Firstly, instant credit decisioning is rapidly becoming the baseline for lending. The global lending industry – valued at nearly one trillion dollars in 2023 – relies on quick, flexible, yet robust decision-making; instant credit decisioning gets funds to individuals and businesses sooner. AI can automatically extract data from financial documents, such as invoices, bank statements and financial statements, before performing the underwriting process and uploading the result to an online platform.
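To give a flavour of the document-extraction step, here is a deliberately minimal sketch that pulls a few fields from plain invoice text. The function name, field patterns and sample text are hypothetical; production systems use OCR and trained models rather than regular expressions.

```python
import re

# Hypothetical sketch: pull basic fields from raw invoice text.
# Real extraction pipelines use OCR plus trained models, not regexes.
def extract_invoice_fields(text: str) -> dict:
    """Extract invoice number, date and total from plain invoice text."""
    patterns = {
        "invoice_number": r"Invoice\s*(?:No\.?|Number)[:\s]+(\S+)",
        "date": r"Date[:\s]+(\d{4}-\d{2}-\d{2})",
        "total": r"Total[:\s]+\$?([\d,]+\.\d{2})",
    }
    fields = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, text, re.IGNORECASE)
        # Store the captured value, or None if the field is absent
        fields[name] = match.group(1) if match else None
    return fields

sample = "Invoice No: INV-1042\nDate: 2023-06-01\nTotal: $1,250.00"
print(extract_invoice_fields(sample))
```

Once fields like these are structured, the downstream underwriting logic can run automatically over them.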

Similarly, AI can assist decision-making in trading and investment management by releasing structured information about funds, and facilitating risk analysis and prediction. AI can also enhance portfolio management through data analysis and forecasting, enabling more precise investment decisions and more considered risk management.

Another direct application of AI to investments and trading is sentiment analysis. Sentiment-analysis bots harness natural language processing (NLP), a subfield of artificial intelligence focused on comprehending text and speech. NLP interprets individual words in context rather than in isolation, based solely on their definitions. Trading bots can sweep news and social media, producing real-time sentiment insights that guide decision-making.
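The idea can be illustrated with a toy lexicon-based scorer. The word lists and function below are illustrative only; a real trading bot would use a trained NLP model that understands context, not bare word matching.

```python
# Toy lexicon-based sentiment scorer for headlines. Word lists are
# illustrative; production systems use trained NLP models instead.
POSITIVE = {"beats", "growth", "surge", "record", "upgrade"}
NEGATIVE = {"miss", "loss", "fraud", "downgrade", "plunge"}

def headline_sentiment(headline: str) -> float:
    """Score in [-1, 1]: +1 if all matched words are positive, -1 if negative."""
    words = [w.strip(".,!?").lower() for w in headline.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

print(headline_sentiment("Acme beats estimates, record growth"))  # 1.0
```

A contextual model would go further, recognising, for example, that “fraud probe dismissed” is good news despite the word “fraud”.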

The sophistication of AI’s pattern and language detection also makes it a high-potential tool for fraud detection. Machine learning algorithms can analyse large datasets, enabling instantaneous identification of anomalous behaviour. Given that fraud is believed to have accounted for 6.05% of global gross domestic product over the last two decades, AI has enormous economic and cultural potential in combating it.
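A minimal sketch of anomaly detection on transaction amounts, assuming a simple z-score rule over an account’s history. Production fraud systems use rich features and trained models; the threshold and data here are purely illustrative.

```python
import statistics

# Illustrative sketch: flag transactions whose amount deviates sharply
# from the account's history. Real fraud systems use many more features.
def flag_anomalies(amounts: list[float], threshold: float = 2.0) -> list[int]:
    """Return indices of amounts more than `threshold` std devs from the mean."""
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []  # all amounts identical: nothing stands out
    return [i for i, a in enumerate(amounts) if abs(a - mean) / stdev > threshold]

history = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0, 4900.0]
print(flag_anomalies(history))  # flags index 6, the 4900.0 transaction
```

The real benefit of learned models over a fixed rule like this is that they adapt the notion of “anomalous” to each customer’s behaviour.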

Of course, AI’s convenience and potential may make it appear a Trojan horse. An almost unprecedented wave of misinformation surrounds how AI will distort the future of financial services. Prevalent among these beliefs is the assertion that artificial intelligence will ‘steal people’s jobs’ by making employees obsolete.

In many cases, companies absorb the ease of AI-powered automation by slowing their recruitment, even during growth periods. With the mundane aspects of their jobs automated away, existing employees can focus on creative and strategic work. They become more valuable to the company, and their experience of work is likely to be more positive.

A far more pressing issue for AI than taking jobs is bias and a lack of transparency. Earlier this year, the European Union ruled that AI should be human-centred and trustworthy. To achieve either objective, AI models must fulfil epistemic transparency – the capacity of an AI system to provide clear, interpretable explanations and justifications for its output.

An important concept linked to epistemic transparency is explainability: the ability of an AI model to justify its decision-making and judgement in a way that is interpretable by humans. However, it is not just AI models that should be clear about their decision-making – explainability should also be upheld by financial organisations when communicating with their customers and stakeholders. Ideally, companies should be able to outline the benefits of their AI-based systems and demonstrate their lack of bias.

Unfortunately, AI systems are more than capable of perpetuating existing biases and inequities. Back in 2015, Amazon had to remove the AI component of its recruitment process when it was revealed that it discriminated against female software developers.

Though AI and efforts to safeguard against its exclusionary effects have improved since then, it remains a cautionary tale for unmonitored, improperly trained AI systems.

Aside from the ethical ramifications of bias, skewed data is simply unproductive. For example, commercial lenders don’t want to turn perfectly viable customers away due to AI bias. Potentially profitable investment opportunities can also be lost if certain characteristics are favoured. Bias can even reduce the efficacy of fraud detection if false positives or negatives are generated based on misleading attributes. The list goes on.

As it stands, the low-hanging fruit of AI is automated back-office processes, such as automated data extraction, customer onboarding, and email and document management. As these tasks do not directly involve decision-making, the risk of bias is low. However, for organisations seeking the benefits of more sophisticated applications of AI, training and monitoring the model couldn’t be more important. While AI boasts immense potential and countless applications, the financial services sector must employ it in a practical and responsible manner.


About the Author:

Miranda Hartley is a published, part-time freelance writer and content manager at Evolution AI, a leading data extraction firm. Miranda also co-organises the London Machine Learning Meetup: the largest community of data scientists, AI specialists and machine learning experts in Europe. Her writing focuses on technology, industry and travel.

The post AI in Financial Services: Balancing Applications Against Pitfalls first appeared on SwissCognitive, World-Leading AI Network.