Across the globe, organizations are waking up to the power of artificial intelligence (AI). Even before ChatGPT, the technology was starting to move from being an early adopter buzzword to something capable of effecting business transformation for the corporate masses. The market for AI software, hardware and services is expected to hit yet another milestone – $500bn in value – by 2024, thanks to impressive double-digit growth. Yet however they’re deployed to drive business success, AI services are only as strong as the data that powers them.
That means organizations wanting to extract maximum value from AI must also build security into projects from the start. A best-practice, layered approach must begin with securing the data itself.
The power of AI
We should all be aware by now of the value AI tools can provide to organizations, in both customer-facing and back-office contexts. One obvious application is chatbots like ChatGPT, which enhance customer service with a realistic, human-like interface for fielding queries and complaints, freeing staff to focus on higher-value tasks. AI-based automation could also take on the manual, repetitive work some team members still undertake in industries like banking and retail, such as on- and offboarding customers and approving mortgages. This would also minimize human error and improve overall efficiency.
A third benefit of AI tools, and perhaps the one business leaders are most excited about, revolves around decision making. By applying the right algorithms to large datasets, companies can generate insights that drive better decisions. For example, they can forecast multiple market scenarios to help predict stock levels, customer sentiment and pricing sensitivity, and adjust strategy accordingly. This will be an increasingly critical driver of agility, especially in times of tremendous business uncertainty and macroeconomic volatility.
Risk is everywhere
Yet where there is data, there are inevitably risks over how it is used and who has access to it. These include:
Model poisoning: threat actors inject malicious data into an AI model's training set, causing it to misclassify inputs and make bad decisions (a minimal illustration follows this list).
Data manipulation/tampering: data is altered before it is even fed into algorithms, to sabotage results or achieve other outcomes the threat actor wants.
Data theft: the personal and financial information fed into AI systems could be sold on the cybercrime underground to fraudsters for use in follow-on phishing or identity fraud attacks.
Accidental disclosure: an ever-present risk resulting from human error, whether system misconfiguration or simple mishandling of data.
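To make the model poisoning risk concrete, the sketch below shows how flipping a fraction of training labels, one crude form of poisoning, degrades a simple classifier. This is a hypothetical illustration, assuming scikit-learn and NumPy are available; the synthetic dataset, model and 30% poisoning rate are arbitrary choices, not drawn from any real incident.

```python
# Minimal illustration of training-data poisoning via label flipping.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean data.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:    %.3f" % clean.score(X_test, y_test))

# Attacker flips the labels of 30% of the training set.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
idx = rng.choice(len(poisoned_y), size=int(0.3 * len(poisoned_y)), replace=False)
poisoned_y[idx] = 1 - poisoned_y[idx]

# The same model, trained on poisoned data, performs measurably worse.
poisoned = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)
print("poisoned accuracy: %.3f" % poisoned.score(X_test, y_test))
```

Real-world poisoning can be far subtler than this, targeting specific inputs or planting backdoors rather than degrading overall accuracy.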
These AI-related cyber risks could cause significant financial or reputational damage, especially if they invite the scrutiny of data protection regulators. The GDPR generally requires organizations to tell data subjects if their information is to be used in an AI system. It also requires firms to handle and secure that data properly, in line with principles like data minimization.
Some AI security basics
Cybersecurity for AI is a rapidly maturing space. But there are still things organizations can do today to help mitigate some of the risks outlined above. These include best practices like:
Data-centric security: Tokenization is one of the most effective ways to protect data. It substitutes sensitive values with randomly generated tokens, meaning that if hackers get hold of them they have nothing useful to use or sell. It is well suited to AI systems because tokenized data can still be processed as normal, without compromising on protection. Tokenization can also reduce compliance scope, because the original data no longer resides in the systems being assessed (a minimal sketch follows this list).
Access controls: Organizations should limit access to data based on the user's identity and role, according to the principle of least privilege. This ensures that only authorized users can access the data, and only for the minimum time necessary to complete their work. Strong passwords should be reinforced with multi-factor authentication (MFA). See the access-control sketch below.
Regular audits: These help organizations identify vulnerabilities and areas for improvement in their data protection policies and practices. They also confirm that policies and procedures are being followed consistently.
Employee training: Staff can be a weak link in corporate security, so regular training is essential to ensure employees understand the importance of data protection and follow policy. Training should be delivered regularly to all staff, including contractors.
Monitoring and logging: Logging tools collect data on security “events”, everything from emails to logins. Monitoring tools then sift through these logs in real time to alert teams to suspicious behavior such as unauthorized access or account usage (a minimal alerting sketch closes this list).
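To illustrate the data-centric approach, here is a minimal sketch of vault-based tokenization. It assumes an in-memory store and hypothetical helper names (tokenize, detokenize); a production deployment would use a hardened token vault or a commercial tokenization service. Tokenization here is deterministic, so equal values map to equal tokens and downstream AI pipelines can still join and group records.

```python
# Minimal sketch of vault-based tokenization (hypothetical helper names).
import secrets

_vault: dict[str, str] = {}    # token -> original sensitive value
_reverse: dict[str, str] = {}  # original value -> token (keeps tokens deterministic)

def tokenize(value: str) -> str:
    """Swap a sensitive value for a random token with no exploitable meaning."""
    if value in _reverse:       # deterministic: same value, same token,
        return _reverse[value]  # so joins and group-bys still work on tokens
    token = "tok_" + secrets.token_hex(16)
    _vault[token] = value
    _reverse[value] = token
    return token

def detokenize(token: str) -> str:
    """Recover the original value; only the vault holder can do this."""
    return _vault[token]

# Usage: AI pipelines see only tokens, never the raw data.
card = "4111 1111 1111 1111"
token = tokenize(card)
print(token)                      # e.g. tok_9f2c... is worthless to a thief
print(detokenize(token) == card)  # True: recoverable only via the vault
```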
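Least-privilege access control can start as simply as an explicit role-to-permission map that is checked, deny-by-default, on every data request. The roles and permission names below are hypothetical:

```python
# Minimal role-based access control sketch (roles and permissions hypothetical).
# Each role is granted only the permissions it needs: least privilege.
ROLE_PERMISSIONS = {
    "data_scientist": {"read:training_data"},
    "ml_engineer":    {"read:training_data", "write:model"},
    "auditor":        {"read:audit_log"},
}

def authorize(role: str, permission: str) -> bool:
    """Permit an action only if the role explicitly grants it (deny by default)."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert authorize("ml_engineer", "write:model")
assert not authorize("data_scientist", "write:model")       # not granted
assert not authorize("unknown_role", "read:training_data")  # deny by default
```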
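Finally, a miniature version of the monitoring idea: scanning login events for repeated failures, the kind of suspicious behavior a real SIEM would alert on. The event format and the threshold of three failures are illustrative assumptions; a real deployment would stream events from a log aggregator rather than a hard-coded list.

```python
# Minimal log-monitoring sketch: flag accounts with repeated failed logins.
from collections import Counter

events = [
    {"user": "alice", "action": "login", "success": True},
    {"user": "bob",   "action": "login", "success": False},
    {"user": "bob",   "action": "login", "success": False},
    {"user": "bob",   "action": "login", "success": False},
]

# Count failed login attempts per account.
failures = Counter(e["user"] for e in events
                   if e["action"] == "login" and not e["success"])

# Alert on any account crossing the (assumed) threshold.
for user, count in failures.items():
    if count >= 3:
        print(f"ALERT: {count} failed logins for account '{user}'")
```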