Last week’s Twitter hack has the tech leadership world abuzz with worried CTOs. This conversation comes up about once per year, typically after a high-profile security incident. The core of What CTOs Worry About is: how do I foster a healthy security culture?


Conventional Wisdom Says…

As with home security, most intrusions are opportunistic. Every home or car has a window that can be broken, yet most intruders will simply check the door to see if it’s unlocked. If it’s not, they’ll go see if one of your neighbors was careless. Therefore, a CTO’s first step should be to eliminate all opportunistic attack vectors. (UFO VPN’s logs were exposed via an unsecured Elasticsearch instance.)

The other piece of conventional wisdom says that you will likely not be able to withstand a directed, concerted attack from a skilled attacker. Therefore, a CTO’s second step should be to hire a concerted attacker.

Unfortunately, many companies don’t have the budget for this, so the conversation naturally turns to what other concrete steps CTOs can take to secure their environments. We like security in layers and defense in depth, so don’t look at the following ideas as options to choose from, but rather as a whole set of tactics and strategies you can implement as they become appropriate for your organization.

OWASP Top 10

Most CTOs agree that the OWASP framework is… shoddy. At the same time, most also build some sort of OWASP review into their code review cycles. Why the apparent hypocrisy?

In an ideal world, all of your engineers would also be security experts. But this isn’t an ideal world, and they’re not. The OWASP Top 10 ends up serving not as a tight security framework, but rather as an awareness campaign. By adding a security or OWASP portion to code review, CTOs hope to give their engineers a basic understanding of security concerns. The OWASP Top 10 is an easily digestible checklist that engineers can glance at before hitting commit, and the hope is that a code reviewer — with OWASP in the back of their mind — will also be looking for potential flags to raise outside of the OWASP framework.

If you’re not doing this already, it’s a cheap and easy way to a) incorporate security review into code review, hopefully catching the truly low-hanging fruit, and b) force engineers to think about the security implications of their code.
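To make that concrete, here is the kind of low-hanging fruit an OWASP-aware reviewer should catch on sight: user input interpolated directly into a query (injection, a perennial Top 10 entry). This is a minimal, hypothetical Python sketch; the `users` table and the `find_user_*` functions are made up for illustration.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, email: str):
    # Red flag for an OWASP-aware reviewer: user input interpolated directly
    # into SQL (injection). An "email" like ' OR '1'='1 rewrites the query.
    return conn.execute(
        f"SELECT id, email FROM users WHERE email = '{email}'"
    ).fetchone()

def find_user_safe(conn: sqlite3.Connection, email: str):
    # The fix the reviewer should ask for: a parameterized query, which keeps
    # user input as data rather than executable SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE email = ?", (email,)
    ).fetchone()
```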

Penetration Testing

If you have a little bit of budget, you should hire an external penetration testing firm. Conventional wisdom says you should do this at least once per year, preferably twice. Take the pentest findings seriously. Generally, a pentest report will contain lots of items that you consider irrelevant or false positives. Don’t dismiss them out of hand! Often, a tiny little vulnerability can be leveraged into a larger one later on. Just because a pentester wasn’t able to convert a vulnerability into an exploit today doesn’t mean someone else won’t be able to six months from now, as your codebase and architecture change.

I also strongly recommend buying a copy of Burp Suite for your team and having some of your engineers learn it. You won’t be able to do as good a job internally as a professional firm can, but even naively running Burp Suite against your app once per month can pay dividends. And, as with the OWASP approach, making an internal pentest part of your regularly scheduled audits will only serve to attune your team more closely to security concerns.

Bug Bounty Programs

The problem with pentest firms is that pentests will typically only scratch the surface. The pentesting firm is paid to generate a pretty report; they’re not actually paid to discover vulnerabilities for you.

If you can’t afford a full-blown grey-hat hacker to put you through your paces, but you still have some budget left after hiring your pentesting firm, then you should consider running a bug bounty program through HackerOne or a similar platform.

Unlike pentesters, bounty hunters are paid per exploit. And they will find exploits. In my opinion this is the single most effective thing you can do with your budget.

You are typically allowed to define the parameters of the bug bounty program, e.g. “look only for authorization exploits” or “ignore our static-hosted sites”. This is a good way to make sure you don’t blow through your budget in the first week. You must be prepared to pay; expect several hundred to a couple of thousand dollars per exploit. (An ounce of prevention is worth a pound of cure.)

A fun exercise here is to get your team to tighten up security as much as they possibly can, until they’re convinced that the app has no holes. Then open up the bug bounty program and watch as your team realizes they’ve reinvented Swiss cheese.

White-Hat and Grey-Hat Researchers

Everything above really only aims to secure your application or software. But just as most undirected attacks are opportunistic, most directed attacks involve some social engineering. This is why conventional wisdom says you won’t be able to withstand a concerted hacker.

The only way to get a handle on this is to preemptively hire a concerted hacker. If you’ve run a bug bounty program already, then a white-hat hacker may not be able to find too many (or any) software exploits. But a grey-hat researcher who also uses social engineering almost certainly will find a way in.

This is where things get really messy and difficult to control. Think about all the ways “trusted users” could (accidentally or nefariously) exfiltrate data from your app. Every admin panel is a risk, and every person with access to an admin panel is a risk.

I still see companies using production exports as seed data for local development. This is both a terrible idea and a common vector for data leaks.
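If the objection is “but we need realistic data to develop against,” synthetic seed data gets you most of the way there. Here’s a minimal sketch, assuming the third-party Faker library and a made-up users schema, that generates realistic-looking records containing no real customer PII:

```python
import json
import random
from faker import Faker  # third-party: pip install faker

fake = Faker()

def generate_seed_users(count: int = 50) -> list:
    # Fabricated records: realistic enough for local development, but
    # containing no real customer PII that could leak from a laptop.
    return [
        {
            "id": i,
            "name": fake.name(),
            "email": fake.unique.email(),
            "signed_up": fake.date_this_decade().isoformat(),
            "plan": random.choice(["free", "pro", "enterprise"]),
        }
        for i in range(1, count + 1)
    ]

if __name__ == "__main__":
    with open("seed_users.json", "w") as f:
        json.dump(generate_seed_users(), f, indent=2)
```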

The important questions to ask yourself here are: who has access, and how? How do we protect data even when “trusted users” are involved? How do we set up layers of security to mitigate or minimize social engineering? And how can we best monitor for “funnies”?

Because every organization is different, the best thing to do here is to hire a skilled professional to put you through your paces.

Monitoring, Routines, and Incident Response

Healthy organizations don’t just do these things once a year or once per quarter. They maintain a state of constant vigilance: they install intrusion detection and network monitoring software, use tools like Amazon Macie to detect accidental PII leaks, and, most importantly, respond to incidents in a predictable and timely manner.
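As a rough sketch of what the Macie piece can look like in practice (assuming boto3, that Macie is already enabled in the account, and placeholder account and bucket names), kicking off a one-time sensitive-data discovery job over an S3 bucket is only a few lines:

```python
import uuid
import boto3  # assumes AWS credentials are configured and Macie is enabled

macie = boto3.client("macie2", region_name="us-east-1")

# One-time sensitive-data discovery job over a single (placeholder) bucket.
# In practice you'd schedule this and route findings into your alerting.
response = macie.create_classification_job(
    clientToken=str(uuid.uuid4()),  # idempotency token
    jobType="ONE_TIME",
    name="pii-scan-customer-uploads",
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "123456789012", "buckets": ["customer-uploads"]}
        ]
    },
)
print("Started Macie classification job:", response["jobId"])
```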

I find that about half of startup CTOs have no incident response policy in place whatsoever. This, too, is a bad idea. Many companies are in an in-between state: they have an on-call rotation for engineers, and they have some monitoring and firewalls, but they have no actual policies for what to do when they find something. So the on-call engineer calls their boss, and the boss makes an ad-hoc judgement call. Most people are not crisis experts, so it’s tough for me to advise letting people make ad-hoc judgement calls while they’re under duress. This is how security incidents get swept under the rug.

A better approach is to have scheduled audits and a well-defined incident response policy. When an engineer finds something funny, they should be able to look at their checklist or flowchart and respond to the incident with confidence. When their boss is called in, they should be able to look at the flowchart and the risk categorization of the servers, systems, or data that were breached, and know exactly under what conditions customers and authorities need to be notified, and how.
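Here’s a hypothetical sketch of what that flowchart can look like when encoded as data rather than tribal knowledge. The risk categories, contacts, and timelines below are illustrative placeholders, not a real policy:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ResponsePlaybook:
    isolate: bool                       # pull the affected system off the network?
    notify_internally: List[str]        # who gets paged, and in what order
    notify_customers: bool              # does this category require customer notice?
    notify_regulator_within: Optional[str]  # e.g. GDPR's 72-hour window

PLAYBOOKS = {
    # risk classification of the breached system/data -> required response
    "public-data":   ResponsePlaybook(False, ["on-call", "eng-lead"], False, None),
    "internal-only": ResponsePlaybook(True,  ["on-call", "eng-lead", "cto"], False, None),
    "customer-pii":  ResponsePlaybook(True,  ["on-call", "cto", "legal"], True, "72 hours"),
}

def playbook_for(risk_class: str) -> ResponsePlaybook:
    # Unknown classification? Treat it as the worst case rather than guessing.
    return PLAYBOOKS.get(risk_class, PLAYBOOKS["customer-pii"])
```

The specific structure doesn’t matter; what matters is that these decisions are made in advance and in daylight, not improvised by a stressed engineer at 3am.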

Most CTOs wait until their first security incident to figure this stuff out. That happened to me nearly a decade ago, and it’s not a comfortable position to be in.

This all feels like paperwork, and most techies hate paperwork, so unfortunately most ignore it.

Fortunately, the various compliance frameworks make it easy to wrap all of this up into a single procedure. GDPR, ISO 27001, SOC 2, and the NIST framework all require that you have risk classifications as well as incident response procedures, so in this case you can kill two birds with one stone: pursuing compliance will also prepare you for these eventualities. Even a company with little or no budget can strive to hit all the GDPR/ISO/NIST checkboxes without paying for a certification at the end of the day.

There are two major, non-technical advantages to pursuing these frameworks. First, they force you to maintain a data room with good paperwork. It’s time-consuming to set up, but easy to maintain once it’s in place. Second, they force you to continually audit and record your procedures.

Fortunately for us, our data room has only one security incident filing in it. But it does have dozens of audit reports, attestations from engineers about the training they’ve received, checklists, procedures, form letters, and anything else we could possibly need if facing either a security incident or an audit from a regulatory agency. The peace of mind this offers a CTO simply can’t be measured in dollars.

In Summary

All CTOs must practice defense-in-depth. It’s too easy to focus only on software and code security, however, so you must also take a close look at your people and processes.

As always, if you’re not an expert in something, hire an expert. Get an external pentest once per year, pay for internal Burp Suite training, start a bug bounty program, and hire a grey-hat.

If you can’t afford to hire an expert (and even if you can), strive to make security cultural. Train your engineers on security practices quarterly. Make security review a part of code review. Train your non-engineers about PII, data exfiltration, and social engineering.

Finally, hope for the best but plan for the worst: make sure you have clear incident response procedures, make sure you have an email template prepared for informing customers of a data breach, make sure your data room is up to date, and make sure you know how to contact the appropriate authorities for your jurisdiction.

And most of all, make sure to keep your front door locked even though you have that huge window in your living room.


Hey, you made it all the way down here! That means you probably found this content valuable. I won’t force you to give me money, I just ask that you share this with your friends.


If you want to support me writing this content, however, feel free to give me money, either by subscribing or giving a gift subscription to a friend.


And feel free to Tweet your tech leadership worries to @bkanber, and I’ll try to address them in an upcoming newsletter.