Gen AI: Improving productivity in banking by 30%
“The technology has the potential to improve productivity in banking by up to 30%,” says Russ. “Moreover, a recent survey found that 68% of the industry’s leaders believe that having a platform that enables the adoption of emerging technologies is of high importance – so it’s clear that generative AI is rewriting the future of finance as we know it.
“That being said, if institutions want to capitalise on the technology, they have a lot to factor into their adoption roadmap – from leveraging the right data to complying with regulations to bolstering a workforce’s data intelligence.”
Smaller LLMs: They’re just as mighty
So, what’s the first thing to consider when businesses are looking to adopt an LLM? According to Russ, it’s all about size: “When people use the term ‘LLM’, they usually think of the huge consumer chatbots that have been driving the AI conversation. But financial institutions need a smaller, more secure LLM.”
And whilst the right model size is crucial, it’s also about leveraging a solution tailored to an organisation’s needs: “A financial enterprise’s model does not need to know anything about, say, celebrities; this data is irrelevant. The model needs to help organisations with what matters to them, like determining risk, minimising fraud, or providing more personalised experiences for customers.”
Enter custom open-source models. But for an LLM to be tailored to a specific need, it must first be trained on, and reason over, an enterprise’s proprietary data. “Customised models are actually more cost-effective to run due to their smaller size. Not to mention, smaller, higher-quality datasets will result in the model producing more relevant and accurate results”, Russ notes.
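To make the idea concrete, the sketch below shows one common way a team might adapt a small open-source model to its own data: parameter-efficient fine-tuning with a LoRA adapter, using the Hugging Face transformers, datasets and peft libraries. The base model name, file path and hyperparameters are illustrative assumptions rather than a recommended recipe, and the example reflects the general approach Russ describes, not his specific method.

```python
# A minimal sketch, assuming a small open-source base model and a proprietary
# JSONL dataset of {"text": ...} records. Model name, file paths and
# hyperparameters are hypothetical placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "meta-llama/Llama-3.2-1B"          # hypothetical small open model
DATA_FILE = "proprietary_finance_corpus.jsonl"  # enterprise-owned data, kept in-house

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Parameter-efficient fine-tuning (LoRA) trains only a small adapter,
# which keeps the customised model cheap to train and run.
model = get_peft_model(
    model,
    LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"),
)

# Load and tokenise the proprietary corpus.
dataset = load_dataset("json", data_files=DATA_FILE, split="train")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="custom-finance-llm",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        logging_steps=50,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("custom-finance-llm")  # the resulting adapter never leaves the organisation
```

Because only the small adapter is trained and stored, the resulting model stays compact, and both the training data and the weights remain inside the organisation’s own infrastructure.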
“Imagine how efficient and productive an enterprise could be if its LLM could analyse a consumer’s buying behaviour and flag suspicious or fraudulent actions. Or, if a consumer is applying for a loan, the LLM could use the specific algorithms it was trained on to determine eligibility.”
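As a rough illustration of the kind of workflow Russ imagines, the snippet below queries an in-house model (here, the hypothetical adapter saved in the previous sketch) to flag a suspicious pattern of card activity. The prompt format, labels and transaction details are invented for illustration only.

```python
# A minimal inference sketch; assumes the hypothetical "custom-finance-llm"
# adapter from the previous example and an illustrative prompt format.
from transformers import pipeline

# With peft installed, the text-generation pipeline can load the saved adapter directory.
flagger = pipeline("text-generation", model="custom-finance-llm")

transaction = (
    "Customer 4471: three card-not-present purchases of £980 each within "
    "ten minutes, merchant located outside the customer's home country."
)
prompt = (
    "Classify the following card activity as NORMAL or SUSPICIOUS and give a one-line reason.\n\n"
    f"Activity: {transaction}\nAssessment:"
)

result = flagger(prompt, max_new_tokens=60, do_sample=False)
print(result[0]["generated_text"])
```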
So, how do these smaller, tailored options fare in comparison to their much larger counterparts? Simply put, it all comes down to the data.
“The large, general-purpose models are trained on a much larger dataset, often composed of data scraped from the web. This takes in all manner of information, including irrelevant or poor-quality data, which has a huge impact on the model’s output”, Russ points out.
“For example, it could hallucinate or produce inaccuracies, and in an industry as heavily regulated as finance, this would spell disaster.”
Naturally, data quality is at the forefront of financial institutions’ concerns. But the question then becomes: how can enterprises safely experiment with the technology, and unlock its benefits, whilst ensuring governance?
“If institutions leverage their own LLM that is trained on their own data, that model belongs to them”, Russ explains.
“They control what data goes into building their model, and what data is left out. Plus, they won’t need to share anything with a third party. As such, this approach ensures they comply with regulatory requirements, and can experiment within secure boundaries.”
On a broader level, Russ adds, this can facilitate in-house training of custom models across the industry, empowering every organisation to extract the most value out of a model that is powered by its own private data.