This is it, the grand finale of our CIS 20 critical controls five-part series.
Five weeks ago we began with an introduction to what the CIS 20 controls are, why they’re relevant, and how they’re applied, so if you’re new to the series go check out part 1. If not, let’s get this party started.
Part five of our series covers controls 17 – 20, which are very different from controls 1 – 16. This final part focuses more on the organizational side of things.
Control 17 (organizational) – Awareness
In this control, we’re going to focus on the weakest link in every chain, which is us… Humans.
No matter how clever we think we are, in the end we’re always going to make a silly mistake, and that’s exactly what attackers hope for. Specifically, InfoSec is concerned with the entire lifecycle of a product – development, distribution, and operation.
Take a second to think about all the different humans involved in creating a product and the possible mistakes that can be made. Here’s an example specific to software…
- Programmers writing code without considering security
- IT operations unable to recognize the security implications of logs
- Security analysts ignoring alerts due to alert fatigue
- End-users constantly falling into social engineering traps
- Executives struggling to quantify risk and its monetary impact on their company
Technology is hard, but complex systems (e.g. humans and their interactions) are way harder, so the CIS can provide some guidance, but shaping human behavior is never a simple fix.
The two subcontrols I’ll point out are…
- Skepticism: Many of the subcontrols in this control boil down to a personality trait that is useful well beyond InfoSec: skepticism. Training your employees to be skeptical of the emails they receive, the sites they visit, and the apps they download goes a long way toward securing any company. Skepticism takes a long time to develop, but once you have it you can apply a critical lens to every situation. A practical way to begin is simple security awareness training that educates employees on common social engineering tactics and trending cyber-attacks.
- Knowing Bad: Many people never realize they’ve been hacked until something extremely obvious happens, like their computer dying or a ransomware note appearing. Teaching your employees how to spot “bad” behavior on their computers is a good way to detect compromised machines. Examples include a slow computer that overheats, the mouse moving on its own, frequent random pop-ups, unexpected software installs, and social media accounts sending invitations on their behalf.
Control 18 (organizational) – Web App Security
This control, like many others, is a beast, so we’ll only skim the surface of a complex topic.
The majority of security issues in today’s modern techno-utopia revolve around web apps, because that’s where users spend most of their time. Most apps are also insecure, so it’s easy for attackers to break in.
Web app security is complex, but it boils down to a simple issue: a lack of secure coding in the developer community. That doesn’t mean developers don’t try – they do – but creating a completely secure app is impossible. Even so, many apps today ship with basic security issues, showing either ignorance of security or carelessness on the developers’ part.
Here are a few controls from the CIS perspective.
The two I’ll touch on are…
- Web Application Firewalls (WAFs): WAFs are firewalls that understand application-layer (HTTP) traffic rather than just ports and IPs. These specialized firewalls are an important piece of the security puzzle because their main job is to ensure the data flowing in and out of the application is safe. The trick with WAFs is that they can’t inspect what they can’t read: TLS should be terminated at or before the WAF so it examines requests in plaintext before passing them on to the app.
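As a toy illustration of the kind of rule logic a WAF applies – real products (e.g. ModSecurity with the OWASP Core Rule Set) use far larger, regularly updated rule sets, and the signatures below are simplified examples, not production rules – a request inspector might look like this:

```python
import re

# Simplified signatures illustrating the kind of patterns WAF rules match.
SIGNATURES = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # classic SQL injection probe
    re.compile(r"(?i)<script\b"),              # reflected XSS attempt
    re.compile(r"\.\./"),                      # path traversal
]

def inspect_request(path: str, query: str) -> bool:
    """Return True if the request looks malicious and should be blocked."""
    payload = f"{path}?{query}"
    return any(sig.search(payload) for sig in SIGNATURES)

print(inspect_request("/search", "q=shoes"))                    # False
print(inspect_request("/search", "q=1 UNION SELECT password"))  # True
```

Note that this only works because the inspector sees the request in plaintext – which is exactly why TLS has to be terminated before inspection.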
- Secure Coding: Securing an app is hard work, and the knowledge needed changes with the programming language and environment the developer is working in. Training specific to that language and environment is a good first step toward running a developer shop with solid secure coding practices.
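To make the secure-coding point concrete, here’s a minimal sketch of the classic SQL injection mistake and its fix, using Python’s built-in sqlite3 module (the table and data are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # VULNERABLE: user input is concatenated into the SQL string,
    # so a name like "' OR '1'='1" returns every row.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # SAFE: the ? placeholder makes the driver treat input as data, not SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # leaks all rows
print(find_user_safe("' OR '1'='1"))    # returns nothing
```

The fix is a one-line change, which is exactly why basic flaws like this signal carelessness rather than a genuinely hard problem.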
If you’re interested in learning more about web app security head over to the OWASP site.
Control 19 (organizational) – Incident Response
Once you get punched in the face, how do you respond?
Every company will get hacked; it’s just a matter of time, so being ready for that moment is critical to limiting the pain and loss. Dedicated incident response teams are too expensive for every company to maintain, which is why incident response for hire is a thing.
But that doesn’t mean a company shouldn’t have a plan in place for when sh** hits the fan.
A book I recommend to those looking to build out an IR plan is “Crafting the InfoSec Playbook”.
Let’s see what controls CIS recommends…
The two subcontrols that jump out to me are…
- Decision making: A well-thought-out incident response plan will have roles and people tagged against those roles, so everyone has accountability. One role that is critical and often overlooked is the “decision-maker”. During an incident, there will be major decisions that need to be made quickly, and without having someone in the room to make those decisions the incident could be prolonged.
- Practice, practice, practice: Theory is nice, but without practice it’s useless. When creating an IR plan, it’s important to run through scenarios, letting each person play their role. To make a plan genuinely useful you need to test it against mock IR scenarios, putting both the plan and the people to the test. One common weakness found in IR plans is communication: the techies need to talk to each other without interruption from executives constantly asking for updates, so having a go-between who keeps the executives calm while the techies focus is key.
Control 20 (organizational) – Pen testing & red teaming
Finally the last control! 🙂
You may wonder why CIS has prioritized pen testing and red teaming so low… Simply put, not everyone can or will use this tool, but more importantly, you get more “bang for your buck” from the other controls.
Running through proper penetration tests and red teaming exercises puts any organization’s security to the test. Many companies like to flaunt how secure they are, but if they’re not constantly testing that hypothesis through experimental attacks, then it’s all marketing and PR.
Before diving into the subcontrols let’s draw a line between pen testing and red teaming…
- Pen testing (narrow/short) – Pen testing is a very specific activity with many constraints and a short timeframe. Think of pen testing as a targeted test of a certain app’s security, or of a single attack technique used by real attackers.
- Red teaming (wide/long) – Red teaming is a much more realistic, but costly exercise. This is usually an external team hired to spend months attacking a company with very few constraints and they’re not only testing a specific attack but stringing together entire attack campaigns.
Now let’s see what CIS has to say about this…
The two subcontrols that jumped out to me are…
- The attack inside and out: Most companies have strong preventive security on the outside of their network (think large castle walls), but on the inside they’re squishy. Detecting an attacker once they’ve gotten in, and limiting their ability to move around, makes a huge difference in how much damage they can cause. By testing both the outside and the inside of your network, you’ll be better prepared for a real-world attack. In the InfoSec world, these are formally called “non-credentialed” and “credentialed” tests.
- Scoring: If you’re able to run recurring attacks on the company, it’s useful to keep a score of good vs. evil. Creating metrics that track how your company’s security has held up against previous attacks is a good way to see whether the company’s overall security posture is improving over time.
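As a minimal sketch of what such scoring might look like – the exercise data below is entirely hypothetical – you could track the blue team’s detection rate across recurring red-team exercises:

```python
# Hypothetical results from recurring red-team exercises: for each run,
# how many attack techniques were attempted vs. detected by the blue team.
exercises = [
    {"quarter": "Q1", "attempted": 20, "detected": 8},
    {"quarter": "Q2", "attempted": 22, "detected": 13},
    {"quarter": "Q3", "attempted": 25, "detected": 19},
]

def detection_rate(run):
    """Fraction of attempted attack techniques the blue team caught."""
    return run["detected"] / run["attempted"]

for run in exercises:
    print(f'{run["quarter"]}: detection rate {detection_rate(run):.0%}')
```

A rising detection rate over successive exercises is one simple, concrete signal that the security posture is actually improving rather than just being marketed as such.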
And that’s it… We’re finished with the CIS 20 critical control series! 🙂
I’ll see you next time my fellow humans.