
Top 5 Risks of AI in Banking

July 1, 2024



Financial services leaders are looking to AI to achieve new operational efficiencies, reduce costs, automate repetitive tasks, improve customer experiences, and drive innovation in products and services. Many financial providers are already seeing results across use cases like fraud detection, customer service, and risk management.

However, financial services organizations face challenges when it comes to leveraging business data and consumer-permissioned financial data effectively — which gets even more challenging when AI enters the picture.

Artificial intelligence depends entirely on the data fed into it. If that data is inaccurate, inconsistent, or outdated, financial providers end up making decisions, running operations, and serving customers based on bad information.

So what happens when the data flowing into and out of an AI model is dumb, deficient, or deceitful? Here’s our take on the top 5 risks to watch out for: 

1. Data Quality 

How many companies are struggling with "bad data in, bad decisions out"? The vast majority (77%) of organizations have data quality issues, and 91% report this impacts their company's performance, according to a 2022 survey of 500 data professionals from Great Expectations. AI is only as good as the data that fuels it, and poor data quality, incomplete data sets, and disjointed information can be costly for businesses.
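As a minimal illustration of the "bad data in, bad decisions out" idea, a data-quality gate can flag incomplete or stale records before they reach a model. The field names and thresholds below are hypothetical, a sketch rather than a production pipeline:

```python
# Illustrative sketch: a simple data-quality gate applied to transaction
# records before they feed an AI model. Field names ("account_id",
# "amount", "as_of") and the one-year staleness threshold are assumptions.
from datetime import date, timedelta

def quality_issues(record):
    """Return a list of problems found in one record (empty list = clean)."""
    issues = []
    if record.get("amount") is None or record["amount"] < 0:
        issues.append("missing or negative amount")
    if not record.get("account_id"):
        issues.append("missing account_id")
    # Stale inputs lead to outdated decisions, so flag old records too.
    if record.get("as_of") and date.today() - record["as_of"] > timedelta(days=365):
        issues.append("record older than one year")
    return issues

records = [
    {"account_id": "A1", "amount": 120.0, "as_of": date.today()},
    {"account_id": "", "amount": -5.0, "as_of": date(2020, 1, 1)},
]
clean = [r for r in records if not quality_issues(r)]
```

Only records that pass every check reach the model; everything else is routed for review instead of silently polluting downstream decisions.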

2. AI Hallucinations and Misinformation

Whether AI is intentionally misled by malicious actors or simply making things up to satisfy human prompts, the risk of inaccurate AI-generated information is real. Prominent examples of misinformation and AI hallucinations (incorrect or misleading results that AI models generate) range from an Air Canada chatbot giving a customer false policy information to fictitious court cases that ChatGPT invented, which were then cited in a U.S. District Court filing.

3. Data-Generated Bias 

AI can also be led astray by the same stereotypes and biases that humans struggle to combat. The source can be the data used to train the model, the algorithms themselves, or cognitive bias among the model's creators. Gartner estimates that 85% of AI projects deliver false results due to bias built into the data or the algorithms, or bias among the professionals managing those deployments. For financial services, biased AI could skew credit and loan decisions, financial advice and investments, customer service, and more.

4. Data Privacy and Security

Privacy and security are already top concerns around data, and AI shines an even brighter spotlight on the need for strong data privacy and security protocols and tools. The risk is twofold:

  • Protecting against AI-powered cyberattacks, threats, and fraud
  • Educating on proper AI usage and establishing policies to protect against data breaches and leaks

5. Mistrust and Reputation Risk 

At a high level, a recent Harvard Business Review study found that most people (57%) don't trust AI. Still, consumers are starting to trust AI in place of human interaction in certain cases. For example, PwC found that about half of consumers would trust GenAI to collate product information before a purchase (55%) or provide product recommendations (50%). There is far more ground to cover on complex topics, however: only 23% trust GenAI to assist with legal advice, and just 27% trust it to execute financial transactions.

Want to learn more? Check out the full report on the top risks and rewards for AI in banking: 
