In this post I'll explain why it is impossible to achieve perfect or complete security in any business environment. Stay with me, especially if you don't already believe this. I'm going to briefly describe just 2 different reasons why perfect security is impossible today, and likely to remain so for the foreseeable future.
People
"We have met the enemy, and he is US" is a very apt phrase in the security
world. Experts agree that roughly 85% of all data breaches can be traced
back to staff errors, both of omission and commission.
Examples here include falling victim to a phishing attack by clicking on a
malicious attachment, or failing to apply the correct firewall rules during
a software rollout.
Eliminating the human element is likely to be impossible for the rest of
the working lives of anyone reading this.
Your data defenders must be perfect 100% of the time or those seeking to
extract data or to extort money from you will get in.
Today, any public-facing computer is scanned constantly by criminal
enterprises probing for a way into the system.
Even a seemingly minor slip up can easily turn disastrous.
Any company that develops software will have developers, probably a DevOps
team, and others who intimately touch the code and infrastructure on a
regular basis.
Any of those people can miss a step, forget a check, or simply write bad
code or configuration on an off day.
Even with modern attempts to improve code development quality (such as
Pair Programming) incorrect code will still find its way onto your
company's systems.
That's not to say you shouldn't support these efforts, but they're not
a panacea.
While we're on the subject of staff, staff participation in data breaches
is not unheard of, but it is reasonably rare.
Most estimates suggest that significantly less than 10% of data breaches
involve some internal bad actor aiding the attacker (or being the attacker).
Vulnerabilities
Coding-related vulnerabilities come in several forms. Here I'm going to
separate them into 4 categories:
- code developed "in-house", known vulnerabilities
- code developed "in-house", unknown vulnerabilities
- 3rd-party code, known vulnerabilities
- 3rd-party code, unknown vulnerabilities
For our purposes, "in-house" code is developed by someone your organization
directly controls: employees, contractors, and so on. 3rd-party code comes
from external sources, such as free and open source software (FOSS) or code
you've licensed from external companies, which your developers have
incorporated into your products.
Vulnerabilities also vary by severity. Most rating systems break these down
into buckets such as 'critical', 'high', and 'medium', based on how likely
they are to cause trouble in the real-world applications that use the
impacted software.
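As a concrete illustration, the widely used CVSS v3.x standard maps a numeric base score (0.0 to 10.0) onto exactly these qualitative buckets. A minimal sketch in Python, using the thresholds from the published CVSS v3.1 qualitative severity scale:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.x base score (0.0-10.0) to its qualitative bucket."""
    if not 0.0 <= score <= 10.0:
        raise ValueError(f"CVSS base scores run 0.0-10.0, got {score}")
    if score == 0.0:
        return "none"      # 0.0
    if score <= 3.9:
        return "low"       # 0.1 - 3.9
    if score <= 6.9:
        return "medium"    # 4.0 - 6.9
    if score <= 8.9:
        return "high"      # 7.0 - 8.9
    return "critical"      # 9.0 - 10.0
```

Many teams use exactly this kind of mapping to decide which findings must be fixed before the next release and which can wait for a scheduled patch cycle.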
In-house developed vulnerabilities
Your developers can accidentally code mistakes into your software. There are
tools to help identify these, but it's often left to the developer to figure
out how to fix the issue. Some tools offer suggested fixes, but even those
are not foolproof and are occasionally just plain wrong. It's not difficult
for a well-intentioned developer to leave a vulnerability in code they've
written.
And, if you're not scanning for these 'self induced' vulnerabilities, you have
very likely inadvertently created many, many of these. These are, of course,
preventable - if your teams scan for them and diligently fix every one found.
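One common way to enforce that diligence is a CI gate that fails the build whenever a scan reports findings at or above a chosen severity. A minimal sketch, assuming a hypothetical scanner that emits findings as (identifier, severity) pairs - the data shape, finding names, and threshold here are illustrative, not any specific tool's output:

```python
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings, fail_at="high"):
    """Return the findings that should block the build.

    `findings` is a list of (identifier, severity) pairs, e.g. the parsed
    output of a static-analysis scan; `fail_at` is the lowest severity
    that blocks the build.
    """
    threshold = SEVERITY_RANK[fail_at]
    return [(fid, sev) for fid, sev in findings
            if SEVERITY_RANK[sev] >= threshold]

# Example: two of these three hypothetical findings block a build
# gated at "high".
report = [("sql-injection", "critical"),
          ("hardcoded-secret", "high"),
          ("verbose-logging", "low")]
blocking = gate(report)
```

Wiring a check like this into the pipeline makes "fix every one found" a build-breaking rule rather than a good intention.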
Known and Unknown Vulnerabilities in 3rd-Party Code
Long lists of known vulnerabilities in released software are published,
covering both FOSS and software released by companies. In 2022, 26,448
new vulnerabilities were disclosed - a new record at the time.
In 2023 there appear to have been 29,065 new ones found.
It's important to note that, even once a vulnerability has been identified by
a security researcher and reported to the developers of that code,
it can take months before a patch is developed and released to address it.
Only at that point is the vulnerability made "known" to the public.
And yes, until your systems are patched, they are at risk of an attacker
discovering and exploiting the problem.
This is beyond your control, but does also represent real risk to your business.
But there's another class of problems here: the unknown vulnerabilities.
These are often called 'zero-day vulnerabilities' because they've been
known for zero days to the security community and to the code project's
developers. They are often discovered during a data breach investigation,
at which point they are reported to the software maintainers, who
(hopefully) leap into action to generate a patch.
Note that, once a zero-day becomes publicly known, it may still take
weeks for the software maintainers to create and release a patch that
completely addresses it.
We do occasionally see false starts on these patches too, where an initial
patch is released only to (usually quickly) be discovered to have left part
of the attack still viable.
Hours or days go by while the developers work to address the new issue and
release an updated patch.
All the while your company may be vulnerable to attackers making use of
the zero-day, as we saw with the infamous Log4j vulnerability.
Generally, significant zero-day vulnerabilities are addressed quickly though.
Note that this is not only a software problem; we also see vulnerabilities
in some hardware, though these are much rarer. The most recent that had
widespread impact were nicknamed Spectre and Meltdown.
These impacted a wide array of general purpose computing devices from laptops
to significantly larger computers.
For both of these hardware vulnerabilities, software and firmware patches
were developed that addressed the underlying hardware problems.
This may not always be the case. A future hardware vulnerability could arise
which is impossible to work around, though this is very unlikely.
Hardware vulnerabilities are another class of vulnerabilities that can impact
your company, which are completely beyond your control.
Zero-day software vulnerabilities are reasonably uncommon, but they're not
always minor.
For example, the Log4j problem from late 2021 was a zero-day that caused
a lot of disruption and cost, as it is a very widely used package.
See the Wikipedia entry for Log4j.
And zero-days can arrive even via software you're paying for.
Recall the SolarWinds incident of a few years ago, where hackers were able
to place 'backdoors' into one of SolarWinds' products. See the
Ars Technica report.
The above represents several classes of problems, like human error, that simply cannot be eliminated by any known method.
In addition to code that is actually embedded into your systems, vulnerabilities
can exist in open source software your teams use in order to build your
products.
A widely used build tool, Jenkins, has had numerous such vulnerabilities,
and failure to patch Jenkins, or similar tools, can also lead to costly
breaches for you.
[Note that this is not intended to discourage you from using Jenkins, but
rather to urge you to keep it patched and/or create other mechanisms to
keep your systems safe. I'm aware of several instances where attackers used
unpatched Jenkins servers to get into companies' systems.]
Unpatched build systems are something you *can* eliminate, but they are often overlooked.
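A lightweight way to catch stale build systems is a periodic sweep that compares each tool's installed version against the minimum version you consider patched. A minimal sketch in Python - the tool names and version numbers below are hypothetical placeholders, not real Jenkins advisories:

```python
def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '2.414.3' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def needs_patching(installed: str, minimum_patched: str) -> bool:
    """True when the installed version is older than the minimum patched one."""
    return parse_version(installed) < parse_version(minimum_patched)

# Hypothetical inventory: tool -> (installed version, minimum patched version).
inventory = {
    "jenkins": ("2.401.1", "2.414.3"),
    "build-agent": ("1.9.0", "1.9.0"),
}
stale = [tool for tool, (have, need) in inventory.items()
         if needs_patching(have, need)]
```

Run on a schedule and fed from your security advisories, a check like this turns "did anyone remember to patch the build server?" into an alert you can act on.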
Closing Thoughts
I hope the above has proven to you, beyond a shadow of a doubt, that it
really is impossible to be 100% secure.
Another issue I've run into, and I know my peers have too, is that many,
if not most, companies don't balance their security resources/spend well.
Many companies err towards favoring prevention tactics, while treating detection/monitoring/alerting as a "nice to have".
If you've read this far, you can see the wisdom in recognizing that attacks
will happen, and that some will breach your systems.
Prevention is important, and I would never suggest
otherwise, but early detection and a plan for eradicating attacks is
necessary to minimize the damage and related costs to your business.