In Search of a Cloud-Native Security 'Top 10'

The Wild West of Cloud-Native Security

As an industry, we have work to do. I might be so bold as to say, “We don’t yet agree on what Cloud Security means!” Some evidence? The ‘shared responsibility’ model leaves customer organizations holding the bag: Amazon’s evangelists indicate that 90% or more of successful attacks are the customer’s fault, often a configuration failure. The security tools ecosystem is so hopelessly fragmented, Gartner claims, that larger organizations may need 10-30 tools to cover their bases. It’s the wild west out there.

I remember a similar time in application security’s infancy, around 2002-2003. Organizations had a reasonable handle on their network and host security, but wondered, “What does application security entail?” What can this ancient history tell us about defining what cloud-native security means today, and about contending with its vendors’ fragmented capabilities? Maybe a lot.

Bringing Law to the West

Something transformative happened in 2003. The Open Web Application Security Project (OWASP) published its first “Top 10” list of web application security vulnerabilities. Yes, such lists have myriad problems, some nearly fatal, but they serve a crucial function: organizing a set of expectations, shared by vendors and the organizations that buy their products and services, as to what the scope of “security” provided is, and what matters most.

Cloud security doesn’t have its definitive Top 10. Sure, OWASP has published cloud- and container-security Top 10 lists. CSA has published the CCM 3.0.x. These lists are each an agglomeration, mixing practice and procedure with principle and policy, as well as sprinkling in the occasional technical control or defect/vulnerability to be avoided. Taken in aggregate, these Top N lists create a salmagundi that fails to answer the question: what people, process, and technology controls constitute a competent and functioning cloud-native security practice? Maybe it’s too early for so highly synthesized a resource. In the meantime, can we agree on what cloud security entails?

The Good, the Bad, and the Ugly

At this point in cloud security’s maturity, it makes sense to take the time to systematize the space. Hasn’t that been done? In some ways, yes — in others, no.

The good? The CSA Cloud Controls Matrix drives industry practice, as do compliance regimes like SOC 2 and their commonly prescribed controls. From the perspective of an organization’s security initiative, these lists do the best job of answering the following question:

What kinds of policies and procedures need to be in place in order to be compliant with industry practice?

The bad? These lists do a poor job of enumerating:

What guardrails reflect secure coding practice and prevent exploitable vulnerabilities in cloud-native apps?

The ugly? Vendor blogs do attempt to enumerate technical controls for software and infrastructure, but their writing gives us insight into how sparse vulnerability/control coverage is, and sometimes how narrow the scope of their products’ capabilities is.

True, a dozen cloud and container guardrails tools exist to ensure organizations build security into applications, operate them securely, and prevent operation of that application from drifting away from secure or approved behavior. But it isn’t clear from industry, OWASP, or vendor lists what the scope of those guardrails needs to be to ensure the organization’s security posture.

Toward a Cloud-Native Security Top-N List

Sourcing the material outlined in the previous section, and applying an organizational security initiative lens to it, we can begin to understand the true scope of cloud-native security, as it pertains to the technical controls necessary to avoid security defects. From this ‘universe’ of potential vulnerabilities and the mirroring preventative, detective, and resilience-based controls, we will eventually, collectively, discern a Cloud-Native Top-N list, which industry can rally around as a priority.

Consider the following:

Figure 1 – Classes of Cloud-native Technical Control

An organization’s customer-facing value streams are delivered in a variety of forms: software and services, containers, instances, and network infrastructure. When applying guardrails to assure the technical posture of these assets, industry has, sometimes implicitly, focused on six areas (a brief code sketch follows the list):

  • Identity and Access Management – defects and related controls concerning proofing a user’s identity, authenticating that user, and applying access control to that user’s system use. Also includes the notion of ownership of assets, separation of duties, and least privilege, per the organization’s policies.
  • Trusted Sourcing – defects and technical controls associated with the packaging, registry, and orchestration technologies that deliver software and infrastructure after the hard work of “building security in” has been accomplished. Includes composition analysis and patch management to ensure secure components, and code identity, ranging from proven authorship and provenance to runtime identity and role.
  • Monitoring – defects and controls associated with assuring that sufficient logging occurs, navigating trade-offs with data redaction and other applicable privacy regulation. Distributed tracing to ensure reconstruction of business context in microservice and federated architectures. As with privacy, settings that navigate trade-offs between expungement and retention requirements in all cases.
  • Security Context – defects and associated controls assuring the integrity of security context by assuring a) the immutability, b) the separation, and c) the least privilege of security contexts. Also, applying standard software security best practices to configuration and code, either a priori through application security testing (AST) or continuously at runtime (RASP).
  • Privacy – defects and associated controls assuring that use of cryptography meets compliance and privacy needs. Most importantly, that the handling of cryptographic material (credentials, keys, and the like) meets policy and standards.
  • Resilience – orchestration and other technology controls supporting an ability to adjust deployed production state upon a discovered vulnerability or incident. Controls applied to software and infrastructure built and/or operated through third parties. Specific controls to prevent DoS and meet SLAs while under attack.
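
To make the taxonomy concrete, here is a minimal sketch, in Python, of how a guardrails tool might represent these six control classes and tag individual checks with them. The class names mirror the list above; the rule identifiers and descriptions are hypothetical illustrations of my own, not any particular vendor’s rule set.

```python
from enum import Enum
from dataclasses import dataclass

class ControlClass(Enum):
    """The six classes of cloud-native technical control described above."""
    IDENTITY_AND_ACCESS_MANAGEMENT = "Identity and Access Management"
    TRUSTED_SOURCING = "Trusted Sourcing"
    MONITORING = "Monitoring"
    SECURITY_CONTEXT = "Security Context"
    PRIVACY = "Privacy"
    RESILIENCE = "Resilience"

@dataclass(frozen=True)
class GuardrailCheck:
    """A single detective/preventative rule, tagged with its control class."""
    rule_id: str              # hypothetical identifier
    control_class: ControlClass
    description: str

# Hypothetical example rules, one per control class, for illustration only.
CATALOG = [
    GuardrailCheck("IAM_WILDCARD_ACTION", ControlClass.IDENTITY_AND_ACCESS_MANAGEMENT,
                   "IAM policies must not grant wildcard actions (least privilege)."),
    GuardrailCheck("IMAGE_UNSIGNED", ControlClass.TRUSTED_SOURCING,
                   "Container images must carry verifiable provenance."),
    GuardrailCheck("LOGGING_DISABLED", ControlClass.MONITORING,
                   "Audit logging must be enabled on every account."),
    GuardrailCheck("CONTAINER_RUNS_PRIVILEGED", ControlClass.SECURITY_CONTEXT,
                   "Workloads must not run in a privileged security context."),
    GuardrailCheck("S3_BUCKET_UNENCRYPTED", ControlClass.PRIVACY,
                   "Object storage must enforce encryption at rest."),
    GuardrailCheck("NO_AUTOSCALING_POLICY", ControlClass.RESILIENCE,
                   "Services must define scaling/failover to meet SLAs under load."),
]
```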

Cloud guardrails tools have detective/preventative capabilities that touch on point aspects of the IAM and Privacy categories. Perimeter and data security tools tend to cover the “Access Control”, “Consistent Use of Encryption”, and “Tenancy and Separation” categories, and perhaps one or two others. Even where greater coverage of the control areas is present, support for CSP APIs tends to be sparse. Container security and some drift-detection features touch on elements of the Security Context category. Application security tooling, a mature ecosystem, touches only the “Secure Assurance” and “Composition Security” areas.

Yet, by scanning CSP and IaC configuration, organizations should be able to detect that the breadth of controls described by the above “Technical Control Classes” is implemented, or, where one is absent, that a defect or policy violation is present.
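
As a sketch of what such a scan could look like, the snippet below walks a list of simplified, Terraform-style resource records and reports violations against two of the hypothetical rules named above. The resource shapes and attribute names here are assumptions for illustration, not a real Terraform or CloudFormation schema.

```python
# A minimal IaC scan sketch: flag resources that violate two hypothetical
# rules. The resource dictionaries are simplified stand-ins for parsed
# Terraform/CloudFormation configuration, not a real provider schema.
def scan(resources):
    findings = []
    for res in resources:
        # Privacy class: object storage must be encrypted at rest.
        if res.get("type") == "aws_s3_bucket" and not res.get("encrypted", False):
            findings.append(("S3_BUCKET_UNENCRYPTED", res["name"]))
        # IAM class: no wildcard actions (least privilege).
        if res.get("type") == "aws_iam_policy" and "*" in res.get("actions", []):
            findings.append(("IAM_WILDCARD_ACTION", res["name"]))
    return findings

# Example: one compliant bucket, one violation of each rule.
resources = [
    {"type": "aws_s3_bucket", "name": "logs", "encrypted": True},
    {"type": "aws_s3_bucket", "name": "backups", "encrypted": False},
    {"type": "aws_iam_policy", "name": "admin", "actions": ["*"]},
]
print(scan(resources))
# [('S3_BUCKET_UNENCRYPTED', 'backups'), ('IAM_WILDCARD_ACTION', 'admin')]
```

A finding here is evidence of either a missing control or a policy violation; mapping each rule back to its control class is what lets an organization reason about breadth of coverage rather than individual checks.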

For example, Concourse provides institutions a complete out-of-the-box solution designed to cover all the security domains above and keep organizations in compliance with regulators’ requirements for cloud security, risk management, and governance. Concourse gives you the necessary policies and controls, visibility into cloud usage, and automation of cloud compliance functions. Learn more here.

Editor’s note: John Steven is a software security expert with 20+ years of experience bringing innovation to market as products and services. Reach out to him and/or follow him on LinkedIn and Twitter.

Related Resources

Learn more about one policy architecture and Concourse Labs.