AI is changing the way businesses operate across the globe. But one sector with arguably the biggest potential for AI-powered digital transformation is financial services. In fact, it is already one of the most prolific users of the technology, accounting for nearly a fifth of market use. But greater use of AI also opens the door to business risk. It expands the attack surface and exposes organizations to potentially significant financial and reputational damage. That’s why any new initiative must take a security-by-design approach.
How AI is being used in banking
According to McKinsey, AI can drive multiple benefits, such as:
- Increasing revenue through enhanced personalization of services to customers and employees
- Lowering costs by driving automation-related efficiencies, reduced error rates and better use of resources
- Uncovering new business opportunities by driving insight from vast quantities of data
In practice, there are many banking use cases for both predictive AI – which analyzes historical data to uncover insights – and generative AI (GenAI), which is focused on content creation and translation. These include:
Fraud detection/prevention: Using real-time monitoring of merchant transactions, banks can better spot when a customer account has been compromised or is being used suspiciously.
Process automation: Using robotic process automation (RPA) for repetitive tasks, banks can free up staff for higher-value work, reduce error rates and drive efficiency.
Customer authentication: Deploying AI-powered facial recognition and other biometric systems for Know Your Customer (KYC) and account log-in verification/authentication.
Risk management: Predictive AI can analyze historical client data to empower banking staff to make better informed investment and lending decisions.
App development: GenAI tools can be used by DevOps teams to accelerate development of new customer and employee-facing apps, designed to enhance end-user experiences.
Customer service: GenAI can be used to power 24/7/365 customer-facing chatbots with natural language interactions.
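To make the fraud detection use case above concrete, here is a minimal sketch of real-time transaction screening using a simple statistical anomaly score. The threshold, the z-score approach and the sample amounts are illustrative assumptions; production systems use far richer features and trained models.

```python
from statistics import mean, stdev

def flag_suspicious(history, new_amount, threshold=3.0):
    """Flag a transaction whose amount deviates sharply from the
    customer's historical spending (simple z-score check)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    z = abs(new_amount - mu) / sigma
    return z > threshold

history = [42.0, 55.0, 38.0, 61.0, 47.0]   # typical card spend
print(flag_suspicious(history, 50.0))      # in-pattern purchase -> False
print(flag_suspicious(history, 4800.0))    # sudden large transfer -> True
```

The same idea generalizes: score each incoming transaction against the account's learned behavior and escalate outliers for review rather than blocking them outright.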
Where’s the risk?
However, use of AI technologies like this can create new opportunities for threat actors. Data protection lies at the heart of the challenge for banks. The data on which AI and machine learning (ML) models are trained is often highly sensitive. It could also be distributed across the enterprise, or even located in third-party data stores. If a hacker is able to get their hands on it, they could use the data to impersonate customers in identity fraud attempts. Or they could launch more sophisticated data poisoning attacks.
Data poisoning occurs when threat actors delete or change specific data points in training data with a view to sabotaging AI/ML models and/or introducing specific vulnerabilities and errors to achieve pre-determined aims. It could result in disruption to critical fraud detection, customer authentication or process automation systems. Threat actors may use data poisoning to:
- Hold a victim organization to ransom
- Target a specific function like KYC authentication in order to bypass security checks
- Cause maximum financial and reputational damage by sabotaging key business processes
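The mechanics of the second scenario can be illustrated with a toy example. The "model" below is a deliberately simple threshold rule learned from labeled transactions; the point is how relabeling the training data silently disables detection. All names and values here are hypothetical.

```python
def train_threshold(samples):
    """Learn the simplest possible fraud rule: flag any amount above
    the midpoint between the largest 'legit' and smallest 'fraud' example."""
    legit = [amt for amt, label in samples if label == "legit"]
    fraud = [amt for amt, label in samples if label == "fraud"]
    if not fraud:                 # poisoned data: no fraud left to learn from
        return float("inf")      # model can never flag anything
    return (max(legit) + min(fraud)) / 2

clean = [(30, "legit"), (80, "legit"), (5000, "fraud"), (9000, "fraud")]
poisoned = [(amt, "legit") for amt, _ in clean]  # attacker relabels everything

print(train_threshold(clean))     # 2540.0 -> large transfers get flagged
print(train_threshold(poisoned))  # inf    -> detection silently disabled
```

A real poisoning attack is subtler, altering a small fraction of records so accuracy metrics barely move, but the effect is the same: the deployed model inherits the attacker's chosen blind spot.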
How optimized security can drive digital growth
There’s a huge AI growth opportunity for the financial services sector. But only if organizations proactively build risk mitigations into new initiatives. New AI/ML models and infrastructure should be designed with security in mind from the ground up, following best practices like the ones published by the UK’s National Cyber Security Centre (NCSC). Its Guidelines for secure AI system development are a must-read for any AI developer, and they also emphasize the importance of data security best practices.
A data-centric security approach can help to mitigate the risk of threat actors stealing or manipulating AI training data, by ensuring that – even if they manage to circumvent perimeter IT checks – they won’t be able to make sense of or monetize the targeted data. The comforte Data Security Platform automatically discovers and classifies data wherever it is across the enterprise and applies protection in line with policy. Critically, it offers tokenization technology which enables organizations to continue using data for AI model training, without sacrificing security.
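To show the principle behind tokenization, here is a minimal sketch of deterministic, format-preserving token generation. This is a hypothetical illustration only, not comforte's actual algorithm: the HMAC construction, the demo key and the choice to retain the last four digits are all assumptions made for the example.

```python
import hashlib
import hmac

def tokenize(pan: str, key: bytes) -> str:
    """Deterministically replace a card number with a same-length digit
    token derived via HMAC, keeping the last four digits for analytics.
    (Illustrative sketch; retaining real digits weakens protection.)"""
    digest = hmac.new(key, pan.encode(), hashlib.sha256).hexdigest()
    body = "".join(str(int(c, 16) % 10) for c in digest)[: len(pan) - 4]
    return body + pan[-4:]

key = b"demo-key"  # in practice, a managed secret in a key vault
t1 = tokenize("4111111111111111", key)
t2 = tokenize("4111111111111111", key)
print(t1 == t2)   # True: same input always maps to the same token
print(len(t1))    # 16: format (length, digits) is preserved
```

Because tokens are consistent and keep the original format, joins, aggregations and model training still work on the tokenized data, while the real card numbers never leave the protected store.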
Data-centric security is only one piece of the puzzle. It must be combined with strict access controls, asset discovery and management, continuous monitoring, supply chain risk management and staff training/awareness raising, among other measures. But it is a crucial means to stay one step ahead of the bad guys, and to satisfy regulators enforcing new rules such as the EU’s DORA legislation.
With a security-by-design mindset like this, financial services organizations can tap the full transformative potential of AI to innovate, create outstanding customer experiences and enter new markets.