So you’ve decided to set up a vulnerability scanning programme – great. That’s one of the best ways to avoid data breaches. How often you should run your scans, though, isn’t such a simple question: the answers aren’t the same for every type of organization or every type of system you’re scanning.
This guide will help you understand the questions you should be asking and help you come up with the answers that are right for you.
How often should vulnerability scans be run?
A lot of the advice below depends on what exactly you’re scanning. If you’re not sure about that yet – check out this comprehensive vulnerability scanning guide.
Once you’ve decided which systems should be in scope, and what type of scanner you need, you’re ready to start scanning. So how often should you ideally be running vulnerability scans?
Here are three strategies to consider, and we’ll discuss the scenarios in which each works best:
- Change-based
- Emerging threat-based
- Compliance-based

Change-based
Fast-moving tech companies often deploy code or infrastructure changes multiple times a day, while other organizations can have a relatively static setup, and may not be making regular changes to any of their systems.
The complexity of the technology we use means that each change can bring with it a catastrophic configuration mistake, or the accidental introduction of a component with known vulnerabilities. For this reason, running a vulnerability scan after even minor changes are applied to your systems is a sensible approach.
Because it’s based on changes, this approach is most suited for rapidly changing assets, like web applications, or cloud infrastructure like AWS, Azure and GCP, where new assets can be deployed and destroyed on a minute-by-minute basis. It’s also particularly worth doing in cases where these systems are exposed to the public internet.
For this reason, many companies choose to integrate scanning into their deployment pipelines, triggering their chosen tool automatically through its API.
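As an illustration, here’s a minimal sketch of a post-deployment hook that kicks off a scan. The endpoint, token and payload are hypothetical placeholders, not a real vendor API – most commercial scanners expose something similar, but check your vendor’s documentation for the actual interface.

```python
# Illustrative post-deployment hook: queue a vulnerability scan after each release.
# The endpoint, token and payload below are hypothetical placeholders -- substitute
# your scanner's real API.
import os
import sys

import requests

SCANNER_API = "https://scanner.example.com/api/v1/scans"  # hypothetical endpoint
API_TOKEN = os.environ["SCANNER_API_TOKEN"]  # keep secrets out of pipeline config files


def trigger_scan(target: str) -> str:
    """Ask the scanner to queue a scan of `target`, returning the new scan's ID."""
    response = requests.post(
        SCANNER_API,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"target": target},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["scan_id"]


if __name__ == "__main__":
    # e.g. called as the final step of a deploy job:
    #   python trigger_scan.py app.example.com
    scan_id = trigger_scan(sys.argv[1])
    print(f"Scan {scan_id} queued")
```

Wiring this in as the last step of a deploy job means every change, however small, gets checked without anyone having to remember to run a scan.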
It’s also worth considering how complex the change you’re making is.
While automated tools are great for regular testing, the bigger or more dramatic the change you’re making, the more you may want to consider getting a penetration test to double-check no issues have been introduced.
Good examples of this might be making big structural changes to the architecture of web applications, any sweeping authentication or authorization changes, or large new features introducing lots of complexity. On the infrastructure side the equivalent might be a big migration to the cloud, or moving from one cloud provider to another.
Emerging threat-based
Even if you don’t make regular changes to your systems, there is still an incredibly important reason to scan them on a regular basis – one that is often overlooked by organizations new to vulnerability scanning.
Security researchers regularly find new vulnerabilities in software of all kinds, and public exploit code that makes exploiting them a breeze can be disclosed at any time. This has been the root cause of some of the most impactful hacks in recent history: both the Equifax breach and the WannaCry ransomware stemmed from new flaws being uncovered in common software, and from criminals rapidly weaponizing exploits to their own ends.
No software is exempt from this rule of thumb, whether it’s your web server, your operating systems, a development framework you rely on, your remote-working VPN, or your firewall. The end result is that even if a scan yesterday said you were secure, that’s not necessarily going to be true tomorrow.
New vulnerabilities are discovered every day, so even if no changes are deployed to your systems, they could become vulnerable overnight.
Does that mean you should simply run vulnerability scans non-stop, though? Not necessarily, as that could generate problems of its own, from excess traffic on your network to a stream of noise that drowns out genuine issues.
For a yardstick, the notorious WannaCry cyber-attack shows us that timelines in such situations are tight, and organizations that don’t discover and remediate their security issues in reasonable time put themselves at risk. Microsoft released a patch for the vulnerability WannaCry used to spread just 59 days before the attacks took place. What’s more, attackers started compromising machines en masse only 28 days after a working exploit was publicly leaked.
Looking at the timelines in this case alone, it’s clear that failing to run vulnerability scans and fix issues within a 30-60 day window means taking a big risk – and don’t forget that even after you’ve discovered an issue, it may take some time to fix.
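To make that arithmetic concrete, here’s a rough back-of-the-envelope calculation (our own illustration, with an assumed 14-day fix time): in the worst case, a vulnerability disclosed the day after your last scan sits undetected for a full scan interval, and then for however long remediation takes.

```python
# Back-of-the-envelope: worst-case exposure window for a given scan cadence.
# The 14-day remediation time is an assumption for illustration.

def worst_case_exposure(scan_interval_days: int, remediation_days: int) -> int:
    """Days between a vulnerability's disclosure and its fix, in the worst case."""
    return scan_interval_days + remediation_days

for label, interval in [("quarterly", 90), ("monthly", 30), ("weekly", 7), ("daily", 1)]:
    exposure = worst_case_exposure(interval, remediation_days=14)
    print(f"{label:>9} scans, 14-day fix time: up to {exposure} days exposed")

# With WannaCry, a working exploit circulated just 28 days before the mass attacks --
# of the cadences above, only weekly (21 days) and daily (15 days) reliably beat
# that window; monthly (44 days) and quarterly (104 days) do not.
```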
Our recommendation for good cyber hygiene for most businesses is to run a vulnerability scanner on your external-facing infrastructure at least monthly, to keep you one step ahead of these nasty surprises. For organizations with a heightened sensitivity to cyber security, weekly or even daily scans may make more sense. Similarly, monthly scans of your internal infrastructure help maintain good cyber hygiene.
For web applications, scanning the framework and infrastructure components they’re built on at a regular cadence makes equal sense, but if you’re looking for mistakes in your own code with authenticated scans, a change-based approach is far more effective.
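Pulling those recommendations together, one practical approach is to encode a cadence per asset class as a simple policy. The sketch below is illustrative only: the asset classes are made-up names, and the intervals are the guidance above rather than a prescription.

```python
# Illustrative scan-frequency policy encoding the guidance above.
# Asset-class names and intervals are examples -- tune them to your risk appetite.
from datetime import date, timedelta

SCAN_POLICY = {
    "external-infrastructure": timedelta(days=30),  # at least monthly; weekly/daily if sensitive
    "internal-infrastructure": timedelta(days=30),  # monthly hygiene scans
    "web-app-platform":        timedelta(days=30),  # frameworks and underlying components
    "web-app-code":            None,                # change-based: scan on every deploy instead
}

def scan_due(asset_class: str, last_scanned: date, today: date) -> bool:
    """True if the asset class is overdue for a scheduled scan."""
    interval = SCAN_POLICY[asset_class]
    if interval is None:
        return False  # handled by the deployment pipeline, not the calendar
    return today - last_scanned >= interval

# Example: an external host last scanned 45 days ago is overdue.
print(scan_due("external-infrastructure", date(2024, 1, 1), date(2024, 2, 15)))  # True
```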
Compliance-based
If you’re running vulnerability scans for compliance reasons, specific regulations often state explicitly how often scans must be performed. For instance, PCI DSS requires quarterly external scans of the systems in its scope.
However, you should think carefully about your scanning strategy, as regulatory rules are meant as a one-size-fits-all guideline that may not be appropriate for your business.
Simply comparing this 90-day regulation with the timelines seen in the WannaCry example above shows us that such guidelines don’t always cut the mustard. If you actually want to stay secure rather than simply ticking a box, often it makes sense to go above and beyond these regulations, in the ways described above.