July 1, 2024
Financial services leaders are looking to AI to help them achieve new operational efficiencies, reduce costs, automate repetitive tasks, improve customer experiences, and drive new product and service innovation. And many financial providers are already seeing results across use cases like fraud detection, customer service, and risk management.
However, financial services organizations often struggle to leverage business data and consumer-permissioned financial data effectively, and those struggles only intensify when AI enters the picture.
Artificial intelligence depends entirely on the data fed into it. If that data is inaccurate, inconsistent, or outdated, financial providers are left making decisions, running operations, or serving customers based on bad information.
So what happens when the data flowing into and out of an AI model is dumb, deficient, or deceitful? Here’s our take on the top 5 risks to watch out for:
1. Bad data quality

How many companies are struggling with "bad data in, bad decisions out"? The vast majority: 77% of organizations have data quality issues, and 91% report that those issues hurt company performance, according to a 2022 survey of 500 data professionals from Great Expectations. AI is only as good as the data that fuels it, and poor data quality, incomplete data sets, and disjointed information can be costly for businesses.
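As a concrete illustration, here is a minimal sketch of the kind of pre-training data-quality gate this implies. It assumes a pandas DataFrame of transactions; the column names, thresholds, and helper functions are hypothetical stand-ins, not a prescribed standard.

```python
import pandas as pd

# Hypothetical transaction table; column names and thresholds are illustrative.
REQUIRED_COLUMNS = ["account_id", "amount", "posted_at"]
MAX_NULL_RATE = 0.05      # reject if more than 5% of any column is missing
MAX_STALENESS_DAYS = 30   # reject if the newest record is over a month old

def quality_report(df: pd.DataFrame) -> dict:
    """Run basic 'bad data in' checks before any model sees the data."""
    newest = pd.to_datetime(df["posted_at"]).max() if "posted_at" in df.columns else None
    return {
        "missing_columns": [c for c in REQUIRED_COLUMNS if c not in df.columns],
        "null_rates": df.isna().mean().to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
        "staleness_days": (pd.Timestamp.now() - newest).days if pd.notna(newest) else None,
    }

def fit_for_training(report: dict) -> bool:
    """Gate: only let complete, deduplicated, fresh data reach the model."""
    return (
        not report["missing_columns"]
        and report["duplicate_rows"] == 0
        and all(rate <= MAX_NULL_RATE for rate in report["null_rates"].values())
        and report["staleness_days"] is not None
        and report["staleness_days"] <= MAX_STALENESS_DAYS
    )
```

The point of a gate like this is to fail loudly before training or inference, rather than letting stale or incomplete data silently shape decisions.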
2. Misinformation and AI hallucinations

Whether AI is intentionally misled by malicious actors or simply making things up to satisfy human prompts and questions, the risk of inaccurate AI-generated information is real. Notable examples of misinformation and AI hallucinations (incorrect or misleading results that AI models generate) range from an Air Canada chatbot giving a customer false information about refund policy to fictitious court cases that ChatGPT invented for a filing in U.S. District Court.
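One common mitigation is to refuse to surface a model answer that cannot be grounded in approved source text. Below is a minimal sketch of that idea; the `ask_llm` callable, the policy snippets, and the similarity threshold are all hypothetical, and a production system would use proper retrieval and citation checks rather than raw string similarity.

```python
from difflib import SequenceMatcher

# Approved source snippets the bot is allowed to speak from; illustrative only.
POLICY_SNIPPETS = [
    "Bereavement fares must be requested before travel is completed.",
    "Refunds are issued to the original form of payment within 30 days.",
]

def is_grounded(answer: str, sources: list[str], threshold: float = 0.6) -> bool:
    """Accept an answer only if it closely matches an approved source snippet."""
    return any(
        SequenceMatcher(None, answer.lower(), src.lower()).ratio() >= threshold
        for src in sources
    )

def safe_reply(question: str, ask_llm) -> str:
    answer = ask_llm(question)  # `ask_llm` is a placeholder for a real model call
    if is_grounded(answer, POLICY_SNIPPETS):
        return answer
    return "I'm not certain about that; let me connect you with a human agent."
```

Falling back to a human when the model cannot be grounded is exactly the safeguard the Air Canada case lacked.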
3. Bias in AI models

AI can also be led astray by the same stereotypes and biases that humans struggle to combat. This can stem from the data used to train the AI model, from the algorithms themselves, or from cognitive bias on the part of the model's creators. Gartner estimates that 85% of AI projects deliver false results due to bias built into the data or the algorithms, or bias in the teams managing those deployments. For financial services, bias in AI use cases could impact credit and loan decisions, financial advice and investments, customer service, and more.
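One concrete screen teams can run is comparing outcome rates across groups. Here is a minimal sketch using the "four-fifths rule" on hypothetical loan decisions; the group labels and data are invented for illustration, and real fair-lending analysis is considerably more involved.

```python
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group_label, approved) pairs; returns approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Lowest group approval rate divided by the highest; below 0.8 is a red flag."""
    return min(rates.values()) / max(rates.values())

# Illustrative data: group A approved 2 of 3, group B approved 1 of 3.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)
print(rates, disparate_impact_ratio(rates))  # ratio 0.5 here, well under 0.8
```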
4. Data privacy and security

Privacy and security are already top concerns related to data, and the use of AI shines an even bigger spotlight on the need for strong data privacy and security protocols and tools.
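On the privacy side, one basic safeguard is stripping obvious PII before any consumer-permissioned data reaches an external model. A minimal sketch follows; the regex patterns are illustrative and deliberately incomplete, so a vetted PII-detection tool should do this job in production.

```python
import re

# Illustrative patterns only; real PII detection needs a dedicated tool.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace recognizable PII with labeled placeholders before it leaves the firewall."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Customer 123-45-6789 (jane@example.com) disputes a $42 charge."
print(redact(prompt))
# Customer [SSN REDACTED] ([EMAIL REDACTED]) disputes a $42 charge.
```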
5. Consumer trust

At a high level, a recent Harvard Business Review study found that most people (57%) don't trust AI. But consumers are starting to trust AI to take the place of human interactions in certain cases. For example, PwC found that roughly half of consumers would trust GenAI to collate product information before a purchase (55%) or provide product recommendations (50%). There is still room to grow on more complex topics, though: only 23% trust GenAI to assist them with legal advice, and just 27% trust it to execute financial transactions.
Want to learn more? Check out the full report on the top risks and rewards for AI in banking: https://www.mx.com/guides/risks-rewards-ai/