We all make them. Silly, simple human errors and oversights. Call it a foible in human nature, but often it just exemplifies our inability to keep our attention focused on one thing in an increasingly distracted world.
Sure, it’s one thing when we misplace our keys or absent-mindedly put the cereal box in the refrigerator, but it’s another thing entirely when we commit a serious error on the job, especially when the mistake carries punitive repercussions. Like accidentally deleting a production database, or forgetting to protect a trove of very sensitive information that you intended to store only temporarily in a cloud service and then promptly forgot about. Yeah, no matter how much we think we’re paying attention as we carry out our professional duties, we all make mistakes like these fairly frequently.
The above example of sensitive data becoming exposed demonstrates the real dangers that arise when human error is injected into the data handling and processing workflow. Data collection and handling is tedious, repetitive work, and it doesn’t take much to make a mistake that can be very costly for your business. One remedy is to replace human intervention with machine automation: position humans in a “review” or “supervisory” capacity while the machine performs the tedious rote procedures needed to work with large datasets.
Humans are notoriously prone to mistakes when inattention and fatigue get in the way: when we’re tired but have a deadline to meet, that’s when the errors start. Properly configured software, by contrast, can handle sensitive information the same way 24/7, across massive volumes of data, without getting tired or committing exhaustion-induced errors. Of course, the operative part of that scenario is ensuring that no errors creep in during the configuration and instruction phase. If the software “learns” a bad step, the error is propagated over and over again, producing mistakes at a volume no person could ever match. Garbage in, garbage out. So automation can’t be the whole solution.
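As a toy illustration of that configure-once, repeat-forever dynamic, here is a minimal Python sketch of automated redaction. The patterns and placeholder labels are hypothetical; a real deployment would rely on a vetted data-classification tool rather than hand-rolled regexes. The point is simply that the same rule is applied identically to every record, whether there are two rows or two billion:

```python
import re

# Hypothetical detection patterns, for illustration only.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(record: str) -> str:
    """Replace every detected sensitive value with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        record = pattern.sub(f"[{label.upper()} REDACTED]", record)
    return record

rows = [
    "Contact: jane.doe@example.com, SSN 123-45-6789",
    "No sensitive fields here",
]
# The machine never gets bored on row one million; the rule is the rule.
cleaned = [redact(r) for r in rows]
```

Equally, if one of those patterns were wrong, the same flaw would be stamped onto every record processed, which is exactly the garbage-in, garbage-out risk described above.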
I can think of two other parts of the solution worth considering when trying to eliminate human error from your organization’s handling and processing of sensitive data. The first is preventative: build into your organization a culture that values data privacy and security, so that sensitive data is always handled with care and attention. The second is mitigating: protect all sensitive data with data-centric security, so that when mistakes do occur, the data cannot be compromised. Let’s briefly explore both.
Our business world is a rushed, even frantic one. Juggling multiple channels of constant communication alongside any number of tasks and workflows usually forces a choice between quick task completion and precise, careful task completion. Unfortunately, most companies value the former: employees are usually judged by the volume of their output. It doesn’t have to be that way, though. Organizations can deliberately institute and support a stronger culture of data privacy and security, so that whenever somebody works on or with sensitive data, they always abide by strict rules and best practices. No shortcuts allowed, such as quickly dumping sensitive data somewhere else (like into a cloud storage bucket) as a temporary storage maneuver. Every employee needs to understand that data is the lifeblood of the company and its most valuable asset, like gold, so treating it with caution and care should always take precedence over haste in task execution. This is a conscious decision the business makes. If your professionals are handling data insecurely, know that it’s within your power to correct that.
The realist would say that no matter how much work goes into cultivating a strong culture of data privacy and security, a situation will inevitably arise in which somebody loses focus or tries a shortcut while working with sensitive information. To prepare for that inevitability, companies should look into data-centric security to augment their entire portfolio of defensive measures. This means applying strong protection such as tokenization and format-preserving encryption directly to the data, as soon as it enters your data ecosystem. Because these forms of protection preserve the data’s format, business applications can work with protected data (unlike data protected with classic encryption), so the data never has to be de-protected. Best of all, if threat actors get their hands on the data, it is incomprehensible to them and cannot be leveraged for gain.
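To make the format-preservation idea concrete, here is a deliberately simplified Python sketch: a keyed digit substitution that keeps the length and punctuation of a card-number-shaped value intact. This is not a real format-preserving encryption algorithm (production systems use vetted schemes such as NIST’s FF1, or a tokenization product); the key and sample value are illustrative only.

```python
import hashlib
import hmac

def _keystream(key: bytes, length: int) -> list:
    """Derive a deterministic stream of digits 0-9 from the key."""
    digest = hmac.new(key, b"fpe-demo", hashlib.sha256).digest()
    while len(digest) < length:
        digest += hashlib.sha256(digest).digest()
    return [b % 10 for b in digest[:length]]

def protect(value: str, key: bytes) -> str:
    """Shift each digit by a keyed offset; leave formatting intact."""
    stream = _keystream(key, len(value))
    return "".join(
        str((int(ch) + k) % 10) if ch.isdigit() else ch
        for ch, k in zip(value, stream)
    )

def unprotect(value: str, key: bytes) -> str:
    """Reverse the keyed shift to recover the original value."""
    stream = _keystream(key, len(value))
    return "".join(
        str((int(ch) - k) % 10) if ch.isdigit() else ch
        for ch, k in zip(value, stream)
    )

card = "4111-1111-1111-1111"
token = protect(card, b"secret-key")
# The token keeps the same length and dash layout as the original,
# so downstream applications that expect that shape still work.
assert unprotect(token, b"secret-key") == card
```

Because the protected value still looks like a card number, validation rules, database schemas, and reports that expect that shape keep working, which is why the data rarely, if ever, needs to be de-protected in the clear.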
As with most things in life, balance is key. Knowing that a majority of data breaches can be directly traced to human error should be a sobering realization. Tactical measures such as increased automation, a stronger corporate culture of caution when working with sensitive data, and applying data-centric protection to sensitive information as soon as it enters your IT environment all can have a positive impact.
Don’t make the mistake of ignoring the role of human error in data breaches.