Recently, I was given an opportunity to further my education in the world of InfoSec and decided to pursue a certification in DevOps, specifically securing DevOps through automation. This post is all about why I decided on that certification (or really any cert), how I accelerated the learning process (passed in three weeks), and which main concepts from the course I’ll carry with me.
The “why” – Why this or any certification
In all honesty, I’m not a huge fan of certifications. The InfoSec industry, specifically HR teams all over the world, uses certifications as a barrier to entry, assuming that if someone has passed an exam they’re qualified to do the job. You and I both know that’s not true, but this traditional method of validating knowledge is still the most popular form of “credibility”.
Good news! This is a dying practice, at least at the companies I find more interesting… Alternative ways of proving credibility can take the form of personal websites filled with CTF write-ups, blog posts, videos, and open-source contributions, and the list is growing.
Now that that’s out of the way, I’m going to contradict myself… I would be lying if I said part of the reason I pursued this exam wasn’t the shiny certificate and how others may perceive my “credibility”. This was a small factor, but a factor nonetheless.

The two more important reasons I’ve pursued this certification are structure and “following the puck”. Let’s start with structure.
Whenever I want to learn about a new bucket of concepts, I need structure; without structure, I’m lost. Structure for me comes in the form of books, courses, and lengthier blog posts. With structure, I’m able to digest another’s perspective while developing my own.
This course provided that structure by tying many seemingly different topics into a semi-cohesive narrative, told from the perspective I’m interested in (i.e. security).
The second and, for me, more important reason comes from advice I’ve held close for many of the decisions I’ve made in relation to learning. This advice was given to me when I was first kicking off my career: “go where the puck is headed, not where it is now”. Let’s break this down.
Not to hurt any feelings, but the InfoSec industry tends to lag behind when it comes to adopting new processes, technologies, etc., and this delayed adoption is amplified if you’re working within a large enterprise. I’m fortunate enough to interact with InfoSec practitioners on a daily basis across many different industries, company sizes, and levels of security maturity, which provides a unique perspective. Through these interactions, I’ve realized the “laggards” and ”late majority” are beginning their migration to “the cloud”, while the early adopters are beginning the “codification” of InfoSec and simultaneously becoming more embedded in their developers’ workflows.

My intent was not to pursue where everyone is now, but where they’re headed, giving me a little head start. And yes… I know there are many “innovators” or “early adopters” saying that I’m late to the party, but I’ve got to start somewhere.
The above advice not only applies to my learning journey but also to my career moves… When moving toward a company or industry, it’s important to ask yourself, “Will this grow in the coming decade?” By moving towards growth in companies, skills, and industries, you’re tipping the economic supply and demand scales in your favor. The ideal situation is having high-demand skills within a high-demand industry, so when a recession rolls around you’ll be better off than most.
I know this approach isn’t ideal for those of you hellbent on pursuing your passion, but most of us aren’t as blessed as you to have a passion, so we instead pursue the intersection of interest, economic demand, and skill.

The “how” – How do I approach learning?
My approach hasn’t changed much since we discussed “Accelerated Learning” a little over a year ago, so we’ll quickly summarize and cover any new bits.
This bout of accelerated learning involved three things – interest, immersion, and consistency. Interest and immersion are repeats from my previous post on this topic, so we’ll skim over these.
First off, interest is everything. If you’re not interested in a topic, then accelerated learning will be tough. This certification’s curriculum covered many of the topics I’m interested in, so I had the interest piece down.
Next is complete immersion, but this time around immersion looked slightly different for me due to a full-time gig. With work, I was limited in the amount of attention I could put towards this certification day-to-day, but I still had to maximize each spare mental cycle. With these restrictions, I decided on a minimum of reading, watching, and practicing each evening for at least 1.5 hours.
Now that we had a plan, it was time to execute with an “essentialism” focus. With consistent habits, and by not getting too lost in extra labs or disconnected topics, I was able to pass the exam within three weeks. Keep in mind I had already done a substantial amount of reading into subtopics in this bucket of ideas, so I wasn’t starting from complete scratch.
This is all easy to say but painstakingly hard to do when you’re emotionally drained from a long day at work, yet those are the moments with the best dopamine high once completed.
The “what” – What are the main concepts I’ll carry with me?
There are too many topics to cover (hundreds of pages worth), so I’ll stick with the most interesting or non-obvious concepts that are stuck in my brain.
And we’ll do this listicle style!
DevOps mindset – The least technical and most important concept is the fundamental shift security teams need to make when moving to a DevOps-driven organization. Traditionally, security has been the “no” team, always shoving rejection into the faces of others, but with a DevOps mindset your role is to say “yes”. A fast-moving company tends to be technologically polyglot, meaning engineers are able to use whatever new tech is best for the job without fear of security shutting them down. Our focus is to embed security into the developer’s workflow with minimal impact on their speed. This manifests in different forms of automated testing, code reviews, and standards; a specific example is automated SAST, DAST, RASP, IAST, and other oddly-acronymed testing tools.
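To make “embedding security into the developer’s workflow” a little more concrete, here is a minimal sketch of an automated SAST gate running on every pull request. This is not from the course material; I’m assuming GitHub Actions and the open-source Bandit scanner against a hypothetical Python codebase under src/.

```yaml
# .github/workflows/sast.yml (illustrative sketch)
name: sast-scan
on: [pull_request]                     # run before code ever merges

jobs:
  bandit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4      # pull down the code under review
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Run Bandit (SAST)
        run: |
          pip install bandit
          bandit -r src/ -ll           # non-zero exit on medium/high findings fails the check
```

Because the scan runs on the pull request itself, developers get the feedback in the same place they already work, which is the whole point of saying “yes” without slowing anyone down.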
Codifying all the things! – InfoSec, along with many other professions, has tasks that can, should, and will be automated away with time. As we move closer to a DevOps-centric organization, your job will transition into being a “YAML engineer”. YAML, along with other declarative languages, is what’s shaping the automation of our infrastructure, configuration, testing, etc., and this codification movement gifts us more security. If the creation, configuration, and testing of our infrastructure are codified, we’re able to patch, deploy/redeploy, and test in a repeatable way.
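As a small, hedged example of why codification buys us security, here’s what an S3 bucket might look like when its security settings are declared in a CloudFormation template instead of clicked together in a console. The resource name is made up; the point is that encryption and public-access controls now live in version control and can be reviewed, tested, and redeployed like any other code.

```yaml
# Illustrative CloudFormation snippet: security controls as code
Resources:
  AppLogsBucket:                          # hypothetical bucket
    Type: AWS::S3::Bucket
    Properties:
      BucketEncryption:                   # encryption at rest is declared, not assumed
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: aws:kms
      PublicAccessBlockConfiguration:     # no accidental public exposure
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
```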
Version Control – Git sits at the center of our DevOps universe, helping us track all the changes to our applications, infrastructure, testing suites, etc., making security more effective in the long run. With Git we’re able to ensure no unexpected or known-bad changes slip in, thanks to automated testing. From time to time, manual processes such as manual code reviews will sneak in for high-risk changes (e.g. encryption, secrets management, etc.), but what can you do?
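One lightweight way a Git-centric workflow catches known-bad changes before they land is a hook framework. A minimal sketch, assuming the pre-commit tool and a couple of its publicly available hooks (the pinned revisions are illustrative):

```yaml
# .pre-commit-config.yaml (illustrative): runs locally and/or in CI on every commit
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0                  # illustrative pin
    hooks:
      - id: detect-private-key   # block accidentally committed private keys
      - id: check-merge-conflict
  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.4.0                  # illustrative pin
    hooks:
      - id: detect-secrets       # flag hard-coded credentials and tokens
```

For the high-risk areas mentioned above (encryption, secrets management), branch protection that requires a human reviewer still makes sense; the hooks just catch the obvious mistakes automatically.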
Continuous Integration and Continuous Deployment (CI/CD) – Another major piece of the DevOps puzzle is CI/CD tooling, which lets us string all of this automation together into a single flow. The most popular CI/CD tool today is Jenkins, but it’s quickly losing market share to cloud-managed CI/CD tooling like GitHub Actions and GitLab CI/CD templates. This mass migration away from Jenkins is due to the resources that go into running a self-managed CI/CD server (e.g. Jenkins), which sounds very similar to something… the cloud migration most enterprises took many moons ago.
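For comparison with the pull-request workflow sketched earlier, the same idea expressed as a GitLab CI pipeline might look like the following, with security sitting as an ordinary stage between build and deploy. Job names and commands are placeholders.

```yaml
# .gitlab-ci.yml (illustrative): build, security test, and deploy in one flow
stages:
  - build
  - test
  - deploy

build-app:
  stage: build
  script:
    - make build                # placeholder build command

security-scan:
  stage: test
  script:
    - pip install bandit
    - bandit -r src/ -ll        # same SAST gate, different CI runner

deploy-app:
  stage: deploy
  script:
    - make deploy               # placeholder deploy command
  only:
    - main                      # deploy only from the main branch
```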
Authorization and Authentication Everywhere – “Zero trust”, like many other trendy marketing terms, is thrown around too much, but when adopting a DevOps approach, microservices can sometimes follow (not always). With a microservices architecture, not relying solely on your perimeter is critical for a secure network due to the amount of interaction happening behind the castle walls. A heavy topic discussed throughout the DevOps community is the importance of authentication and authorization throughout a network. Two heavily used approaches are OAuth via OpenID Connect (e.g. login with Google, Facebook, etc.) and SAML tokens via Active Directory (AWS/Azure). Another interesting model is burning PKI into all your interconnected services, so each service is limited in who (humans and machines) it’s able to communicate with.
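The “burn PKI into every service” model is roughly what a service mesh gives you out of the box. As a hedged sketch, assuming Istio is already running in the cluster and a hypothetical payments namespace, this single policy forces mutual TLS for all service-to-service traffic there:

```yaml
# Illustrative Istio policy: every workload must present a client certificate
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: payments          # hypothetical namespace
spec:
  mtls:
    mode: STRICT               # plaintext traffic between services is rejected
```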
Measure Everything – I wouldn’t go as far as to say log “all the things”; instead I’ll say log “most of the things”. Measuring the efficiency and effectiveness of new features being rolled out or patched is a major piece of a successful DevOps movement, and embedding security is no exception. I know within security we have issues logging the right data, but when “measurement” is at the center of everything you do, this moves up the priority list. The three elements of measurement are logs (timestamped events), metrics (measurements of a thing), and traces (the path of a single request as it moves through your services). One of the main measurements any security team should focus on is the “mean time to detect”.
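To make the logs/metrics/traces split a bit more tangible, here’s a sketch of a metric-driven alert, the kind of signal that feeds a “mean time to detect” number. I’m assuming a Prometheus-style setup, and the failed_logins_total counter is made up:

```yaml
# Illustrative Prometheus alerting rule; the metric name is hypothetical
groups:
  - name: security-signals
    rules:
      - alert: FailedLoginSpike
        expr: rate(failed_logins_total[5m]) > 5    # sustained burst of failed logins per second
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Unusual spike in failed logins"
```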
Complexity does NOT = Risk – It’s easy to assume that the more complexity you introduce to a system, the riskier it’ll be, but that’s not always the case. If done correctly, we’re able to improve security through automated CI/CD pipelines, proper authorization/authentication, and microservices. The trick is to recreate chokepoints that harden the perimeter within a “zero trust” architecture, which is possible thanks to API gateways and serverless functions. I mean, if Netflix and Amazon can do it at their scale, I’m sure we’re able to achieve it at a smaller one. 😉
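To sketch the chokepoint idea, an API gateway can enforce authentication for every route in a single declarative file, no matter how many microservices sit behind it. Assuming Kong in DB-less mode; the service name and upstream URL are made up:

```yaml
# Illustrative Kong declarative config: one enforced chokepoint in front of a service
_format_version: "3.0"
services:
  - name: orders-api                   # hypothetical internal service
    url: http://orders.internal:8080
    routes:
      - name: orders-route
        paths:
          - /orders
    plugins:
      - name: jwt                      # every request must carry a valid JWT
```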

What’s next – Lastly, I want to share some interesting concepts that are currently being adopted by the “early adopters” and will be relevant for the masses in a few years’ time: micro-VMs and serverless architectures (or Functions as a Service).
- Micro-VMs are a newish approach to getting the speed of containers with the security of VMs; one example is AWS Firecracker. Docker containers are great, but they come with overhead due to the complexity baked into the long list of syscalls they make to function; with VMs, that’s not the case.
- Serverless architecture is acting as a game-changer for many companies because the operational side of the work is abstracted away even further. With serverless, or Functions as a Service (FaaS), the techies only need to focus on the code and nothing else. Cloud providers are abstracting away the need to patch the OS, server, etc., freeing up the developer’s mental cycles for creating high-quality code… This comes with many benefits for cost, security, and operations, but we’ll leave this topic for a different day.
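To show how little is left on the developer’s plate, here’s a hedged sketch of a Functions-as-a-Service deployment using AWS SAM; the function name, handler, and route are illustrative:

```yaml
# Illustrative AWS SAM template: no servers, OS patching, or fleet management to own
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:                       # hypothetical function
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler             # the code is the only thing the developer really owns
      Runtime: python3.12
      CodeUri: src/
      Events:
        HelloApi:
          Type: Api
          Properties:
            Path: /hello
            Method: get
```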
No matter the topic, there’s a common thread we can pull to learn basically anything, and that’s demystifying jargon. Every topic has a list of jargon you’ve never interacted with, and our goal is to understand, not just memorize, that jargon. To “understand” is to constantly wrestle with an idea in your mind by asking “Why is this mentioned?” and “How does it connect with everything else?”; by repeating these two questions you’ll move up Bloom’s taxonomy (video explanation) and reach higher levels of understanding.