This is our trilogy moment!
Trilogies can sometimes be good, right? For example, Lord of the Rings, Back to the Future, and the original Star Wars trilogy.
This week we’re continuing our CIS 20 critical control series with the “foundational” controls 9–12. If you need context or missed the first two parts, head here.
Control 9 (Foundational) – Knowing your limitations
By now I’m sure you’re aware that the internet is both a gift and a curse. Our ninth control helps limit the amount of downside that can come from direct access to the internet.
Almost every cyber attack you’ve come across included scanning of some sort. At the beginning of every attack, there’s a discovery phase where the attacker gathers info about their target, including passive and active network scans. Depending on how your network is set up, there’s a chance you have unauthorized ports, services, and protocols publicly available for attackers to discover.
The Equifax breach, one of the most well-known cyberattacks of the last decade, started from a simple scan. The attackers began with broad scans to map the Equifax network, then narrowed in on easily accessible and vulnerable systems. As we’re all aware, this attack exposed the records of roughly 147 million people.
Here’s what CIS has to say about limiting public access…

Mapping and scanning are the two subcontrols I’ll rant about.
- Mapping: Every device on your network should have its publicly open ports, protocols, and services mapped. This is the high-level picture any sophisticated, determined attacker will build of your network, and you should have it too. This inventory should be updated automatically via one of the many magical automation tools in InfoSec.
- Scanning: If attackers are constantly scanning your ports, then why shouldn’t you? I’m surprised by how many InfoSec people I meet have never scanned (passively or actively) their own network and have zero visibility into what’s happening. This goes back to a point I made in the first part of this series… if you don’t know what you have, how are you supposed to protect it? A minimal scan sketch follows this list.
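To make the scanning point concrete, here’s a minimal active-scan sketch using nothing but Python’s standard library. The host list and ports are hypothetical placeholders, so point it only at gear you own; for real work, a proper scanner like nmap does this far better.

```python
import socket
from datetime import datetime, timezone

# Hypothetical host list: replace with your real device inventory.
HOSTS = ["192.168.1.1", "192.168.1.10"]
# A handful of commonly exposed ports; a real scan would cover far more.
PORTS = [21, 22, 23, 80, 443, 3389]

def scan_host(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Build a timestamped inventory of what is actually listening right now.
    for host in HOSTS:
        print(datetime.now(timezone.utc).isoformat(), host, scan_host(host, PORTS))
```

Run it on a schedule and diff the output against last week’s; newly opened ports are exactly the surprises this control exists to catch.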
Control 10 (Foundational) – Data Recovery
We’re all going to be attacked at some point, so mitigating this risk through backups, images, and redundancy plans is obvious, but rarely done correctly.
So what do I mean by “correctly”? Well, many InfoSec groups back up their data but never check whether they can restore their systems from scratch. It’s a “half-baked” plan: they’re backing up data they may be unable to restore. That’s why testing your backup processes for different systems on a recurring basis is important.
Now more than ever, we’re experiencing the negative effects of this half-baked plan thanks to ransomware, the new cool thing.
Recently, Maze ransomware has gone on a rampage through the healthcare industry and companies like Cognizant, Canon, and Xerox. Ransomware is a simple concept to understand…

Hackers break into your systems, encrypt your data, and hold it for ransom in exchange for cryptocurrency. Over time, ransomware crews have become more sophisticated, attempting to make the process of buying back your data as seamless as possible. Kind of like a company improving its purchasing process (e.g., Amazon).
Let’s see how CIS recommends we protect against this and other data recovery situations.

The three I would emphasize here are automation, testing, and offline backups.
- Automation: The backup process should be automatic, whether it’s a differential, incremental, or full backup. By putting in the work upfront to automate this, it’s one less thing you’ll have to worry about.
- Testing: Now that we’ve automated the backup process, it’s not a “set it and forget it” kind of thing; that’s where most InfoSec groups go wrong. We need to test our backups on a recurring basis, making sure they’re correct and can actually be restored. These tests should cover different systems, situations, and times. A sketch of this back-up-and-verify loop follows the list.
- Offline: Another major mistake InfoSec groups have made in recent years is auto-syncing their backups to the cloud and assuming that’s safe. Auto-synced backups have proven easy to “ransomware” along with the rest of a company’s data. Everything being backed up should have at least one offline copy that’s physically and logically separated from the network.
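Here’s a rough sketch of what that back-up-and-verify loop can look like for plain files on disk. The paths are hypothetical, and real systems (databases, VM images, SaaS data) need their own restore drills, but the shape is the same: every run exercises the restore path, not just the backup path.

```python
import hashlib
import tarfile
import tempfile
from pathlib import Path

# Hypothetical paths: point these at a real data directory and backup target.
DATA_DIR = Path("/srv/app/data")
BACKUP = Path("/backups/app-data.tar.gz")

def file_hashes(root):
    """Map each file's relative path to its SHA-256 digest."""
    return {
        p.relative_to(root): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

def back_up():
    """Write a compressed archive of the data directory."""
    with tarfile.open(BACKUP, "w:gz") as tar:
        tar.add(DATA_DIR, arcname="data")

def verify_restore():
    """Restore into a scratch directory and confirm every file matches."""
    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(BACKUP, "r:gz") as tar:
            tar.extractall(scratch)  # on Python 3.12+, pass filter="data"
        restored = file_hashes(Path(scratch) / "data")
        assert restored == file_hashes(DATA_DIR), "restore does not match source!"

if __name__ == "__main__":
    back_up()
    verify_restore()
    print("backup written and restore verified")
```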
Control 11 (Foundational) – Secure Network Devices
In a previous post in this series, I touched on secure configurations for endpoint devices; control 11 is very similar but applies to network devices instead.
All the security appliances, network gear, and general IoT devices we install on our networks favor convenience over security, which is understandable but can cause issues.
The below convenient default features can all lead to vulnerabilities (a default-credential sketch follows the list).
- Default username and passwords
- Open services and ports
- Support for older protocols
- Pre-installation of unneeded software
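To illustrate the first item, here’s a hedged sketch that tries a few well-known factory logins against a device’s web interface. The URL and credential list are made up for illustration, it assumes the device protects that URL with HTTP basic auth, and you should only point it at devices you’re authorized to test.

```python
import urllib.request
from urllib.error import HTTPError, URLError

# Hypothetical management interface: swap in a device you are authorized to test.
DEVICE_URL = "http://192.168.1.1/"

# A tiny sample of well-known factory defaults; real lists are much longer.
DEFAULT_CREDS = [("admin", "admin"), ("admin", "password"), ("root", "root")]

def try_default_creds(url, creds):
    """Report any default username/password pair the device still accepts."""
    for user, password in creds:
        mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
        mgr.add_password(None, url, user, password)
        opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))
        try:
            opener.open(url, timeout=3)  # a 2xx response means the login was accepted
            print(f"ACCEPTED default credentials: {user}/{password}")
        except (HTTPError, URLError):
            pass  # 401, timeout, etc.: this pair was rejected

if __name__ == "__main__":
    try_default_creds(DEVICE_URL, DEFAULT_CREDS)
```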
This attack surface is only increasing due to the massive spike in IoT devices. Estimates put us at roughly 50 billion connected devices by 2030; those numbers are most likely wrong (because all stats are wrong), but directionally accurate, and concerning given that the InfoSec community needs to protect all of these devices.

Let’s see what CIS says about protecting our network devices.

Two subcontrols that jump out to me are…
- Secure Configs: Early in this series, we talked about the importance of maintaining an inventory of secure baseline configurations, and this is where that comes into play. Spending time upfront to ensure each net-new device is securely configured, imaged, and stored for future use can go a long way toward protecting your devices. Think about it… securely configuring and testing ten thousand devices seems impossible, unless each device is the same; then we can configure one and copy/paste that secure configuration to the rest (see the drift-check sketch after this list).
- Dedicated Networks: Most networks have a central location where all the devices are configured, which is a golden ticket for any attacker. A common practice is to put that device-management network on its own segment, separated from the rest of your network. This separation lowers the chances of an attacker getting access to all the devices.
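Here’s a minimal sketch of that “configure one, copy to the rest” idea in reverse: checking a device’s exported running config against the golden baseline. The file paths are hypothetical, and how you export a running config (SSH, API, TFTP) varies by vendor.

```python
import difflib
from pathlib import Path

# Hypothetical paths: the baseline comes from your secure-image store, the
# running config from however your devices export it.
BASELINE = Path("baselines/switch-gold.cfg")
RUNNING = Path("exports/switch-042.cfg")

def config_drift(baseline_path, running_path):
    """Return a unified diff of the running config against the golden baseline."""
    baseline = baseline_path.read_text().splitlines()
    running = running_path.read_text().splitlines()
    return list(difflib.unified_diff(
        baseline, running,
        fromfile=str(baseline_path), tofile=str(running_path), lineterm="",
    ))

if __name__ == "__main__":
    drift = config_drift(BASELINE, RUNNING)
    if drift:
        print(f"{len(drift)} lines of drift from baseline:")
        print("\n".join(drift))
    else:
        print("device matches the golden config")
```

Wire a check like this into the same automation that maintains your device inventory, and configuration drift stops being a surprise.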
Control 12 (Foundational) – Castle Walls
The old-school InfoSec mentality revolved around prevention: building really large castle walls at the edge of the network while ignoring the inside. This mentality led to networks being rampaged from within, making lateral (East-West) movement simple, and it’s a common concern today with everyone moving to cloud infrastructure.
It’s kind of like a turtle’s shell, hard on the outside, soft on the inside.

With that said, it’s still critical to secure your boundaries because that’s where most attackers come from.
“Defense in Depth” is the common strategy InfoSec groups use when protecting their boundaries; it’s simply multiple layers of defense. These layers are built from different security appliances, and from different vendors for the same type of appliance.
I’ll emphasize the second point about vendors… if all your firewalls come from the same vendor, there’s a chance they share a common vulnerability an attacker can take advantage of; with multiple vendors, that’s less likely.

CIS has a lot to say about this control…

There are too many subcontrols to expand on, so I’ll pick two.
- Decryption: In today’s internet, most traffic is encrypted, so if a user is accessing a site that’s not whitelisted, that traffic should be decrypted and analyzed. Decrypting traffic takes a lot of processing power, and users (including me) carelessly surf the internet, so being picky about what you decrypt is a good idea.
- Blocking Known Bad: Luckily, InfoSec groups share known bad IP addresses, so blocking communications with those IPs should be a default reflex. A “bad” IP may not be inherently malicious; often a victim’s infrastructure is being used as a stepping stone. Refreshing this list of known bad IPs is a good habit to get into, since IPs flow in and out of bad behavior (a blocklist-refresh sketch follows).
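As a rough sketch of that refresh habit, here’s a script that parses a plain-text feed of bad IPs (one per line; the file name is hypothetical, so swap in your threat-intel source) and emits iptables-style drop rules. Re-run it on a schedule so the blocklist tracks the feed.

```python
import ipaddress
from pathlib import Path

# Hypothetical feed file: in practice you would re-download this from your
# threat-intel source on a schedule, since IPs flow in and out of bad behavior.
FEED = Path("known_bad_ips.txt")

def load_feed(path):
    """Parse one IP per line, skipping comments and anything malformed."""
    bad = set()
    for line in path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        try:
            bad.add(ipaddress.ip_address(line))
        except ValueError:
            pass  # stale or garbled entry: skip it rather than break the refresh
    return bad

if __name__ == "__main__":
    # Emit iptables-style drop rules; adapt to whatever your boundary device speaks.
    for ip in sorted(load_feed(FEED), key=str):
        print(f"iptables -A INPUT -s {ip} -j DROP")
```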
That’s part three in the bag, until next time my fellow humans!