Security and continuous delivery each affect the other. Continuous delivery can demand new security measures, while traditional security can slow continuous delivery or fail to address its security needs at all.
Micro Case Study Clarifies Security Conflict
Amrit Williams, now CTO of CloudPassage, shares an experience that exemplifies the challenges continuous delivery poses for security. Williams once worked with one of the largest national HMOs in the United States, which developed and supported a customer healthcare portal to enable customers to access health data and take advantage of other HMO services.
During his tenure there, the company sought to improve its portal's performance to compete with rival HMOs whose popularity stemmed, in part, from the strength of their portals.
“They wanted to add a bunch of new features and make continual improvements by way of moving from a development model of new releases every six-month period to new site releases and updates every couple of hours or about four times per day,” says Williams. But every time the Ops team brought up new containers and compute resources to deploy new technologies and application code, success depended on repeated manual changes to firewall rules so that data could pass appropriately and applications could talk to the portal and its supporting technologies.
“They finally decided to allow the firewall rules to apply to the application at the server level rather than at the underlying network level so that they didn’t have to go changing those rules all the time,” says Williams.
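The shift Williams describes, attaching firewall rules to the application at the server level, can be automated as part of each deployment. The sketch below is hypothetical (the article names no specific tooling): it derives per-host iptables-style rules from a simple service manifest, so each deploy carries its own network policy instead of waiting on manual changes to a shared network firewall. All service names, ports, and CIDR ranges are invented for illustration.

```python
# Sketch: render host-level firewall rules from a service manifest at
# deploy time, rather than filing manual network-firewall change requests.
# Service names, ports, and source ranges are hypothetical examples.

SERVICE_MANIFEST = {
    "portal-web": {"port": 443,  "allow_from": ["0.0.0.0/0"]},
    "portal-api": {"port": 8443, "allow_from": ["10.0.1.0/24"]},
    "portal-db":  {"port": 5432, "allow_from": ["10.0.2.0/24"]},
}

def iptables_rules(manifest):
    """Render one ACCEPT rule per allowed source, then a default DROP."""
    rules = []
    for name, spec in sorted(manifest.items()):
        for cidr in spec["allow_from"]:
            rules.append(
                f"-A INPUT -p tcp --dport {spec['port']} "
                f"-s {cidr} -j ACCEPT  # {name}"
            )
    rules.append("-A INPUT -j DROP  # default deny")
    return rules

if __name__ == "__main__":
    for rule in iptables_rules(SERVICE_MANIFEST):
        print(rule)
```

Because the manifest travels with the application code, a new container inherits the right rules the moment it is deployed, which is the point of moving the policy to the server level.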
In DevOps, Security Still in First Gear
“Traditionally in the enterprise, security has been about speed only when the company is potentially dealing with a compromise, a policy violation or some other threat to the business,” says Williams. Security organizations are not accustomed to responding quickly simply to deliver code, and common security tools were not designed to shift and adapt to automated code development and deployment.
“For example, a lot of times the security teams will rely on endpoint security tools that typically require static IP addressing, which is antithetical to dynamic compute models,” explains Williams. These mammoth, resource-hogging security applications demand too much and secure too little, if they work effectively in DevOps shops at all. Stopping to adjust them can trigger change management processes that slow development even further.
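Williams's point about static IP addressing can be made concrete. The hypothetical sketch below (hosts, roles, and policy names are invented) contrasts a policy store keyed on IP addresses, which loses track of an instance the moment it is relaunched with a new address, with one keyed on the instance's role tag, which survives the churn of dynamic compute:

```python
# Sketch: why IP-keyed security policy breaks under dynamic compute,
# and how keying policy on an instance's role (a tag) survives churn.
# All hosts, roles, and policy names here are hypothetical.

IP_POLICY = {"10.0.1.15": "web-hardening"}      # keyed on a static IP
ROLE_POLICY = {"web": "web-hardening",          # keyed on a role tag
               "db": "db-hardening"}

def policy_by_ip(ip):
    # Returns None as soon as the instance comes back with a new address.
    return IP_POLICY.get(ip)

def policy_by_role(instance):
    return ROLE_POLICY.get(instance["role"])

# An instance is terminated and relaunched with a fresh address:
old = {"ip": "10.0.1.15", "role": "web"}
new = {"ip": "10.0.3.201", "role": "web"}

assert policy_by_ip(old["ip"]) == "web-hardening"
assert policy_by_ip(new["ip"]) is None          # IP-keyed policy is lost
assert policy_by_role(new) == "web-hardening"   # role-keyed policy follows
```

The design choice is the whole lesson: identity in a dynamic environment has to be something the platform preserves across relaunches, and an IP address is not.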
Why Not Fail Fast?
While DevOps makes it easier to isolate and fix coding errors quickly, this doesn’t appear to spill over to security. Fixing errors to make the code work doesn’t mean the code is more secure. These changes could even introduce new security flaws that were not there before.
Securing continuous delivery is about the security team surrounding dynamic processes with protections that keep up with the speed of the continuous delivery pipeline. This can be difficult when these processes are automated, bringing up resources on demand to create and deploy software.
“It is really the deployment and configuration of these environments that can lead to some of the biggest security concerns and not necessarily the code itself,” he says. “It is that aspect of DevOps that security is usually not very familiar with, which can really result in them being left out of the process because they try to apply a rule or a governance policy that is, again, antithetical to that process.”
Continuous Delivery Security Incidents Challenge Legacy Security
In the following example, Williams describes a particular type of incident that legacy security tools could not withstand in a continuous delivery environment.
“It was Christmas 2012, and it happened to customers using a region of Amazon's public cloud services. Amazon rebooted a bunch of servers, and some of those servers were running local host-based firewall management tools that were reliant on connectivity back to the management consoles as well as a static IP address,” says Williams.
When those systems came back online, neither the agents on the hosts nor the firewall management tools functioned properly. Amazon and its customers had to work to ensure that the agent software on the hosts was running and communicating with the firewall management tools, and that customer systems were secure. The stakes were high: just a few days before Christmas Eve, Amazon's customers were processing a heavy volume of sales that had to be handled securely.
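One defense against the failure mode Williams describes is an agent that does not assume a static address or an always-reachable console. The sketch below is hypothetical (the console interface and policy contents are invented): on every boot the agent re-registers with its current IP, and if the console is unreachable it falls back to the last cached policy rather than enforcing nothing:

```python
# Sketch: a host agent that re-registers with its management console on
# every boot using its current (possibly new) IP, and falls back to a
# cached local policy when the console is unreachable.
# The console API and policy contents are hypothetical.

import json
import os
import tempfile

class Agent:
    def __init__(self, cache_path):
        self.cache_path = cache_path

    def boot(self, current_ip, console=None):
        """Called on every boot; `console` is None when unreachable."""
        if console is not None:
            policy = console.register(current_ip)   # report the new IP
            with open(self.cache_path, "w") as f:
                json.dump(policy, f)                # refresh local cache
            return policy
        # Fail safe: enforce the last known policy rather than nothing.
        try:
            with open(self.cache_path) as f:
                return json.load(f)
        except FileNotFoundError:
            return {"default": "deny-all"}          # safest fallback

class FakeConsole:
    """Stand-in for a real management console, for illustration only."""
    def register(self, ip):
        return {"ip": ip, "rules": ["allow 443/tcp"]}
```

With this pattern, a mass reboot like the one Williams recounts leaves each host enforcing its last known policy instead of sitting unprotected while it waits for a console that may itself be recovering.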
Again, this is an example of legacy security tools failing inside an environment that was adding automation and approaching continuous delivery. The lesson: just as DevOps and continuous delivery differ fundamentally from legacy development, the security tools that protect these environments must be new, purpose-built, and suited to continuous delivery by design, lest they fail to survive partial outages such as the Amazon example.