Agile Development Doesn't Create Secure Software questions whether Agile development teams can build secure code. It mostly references a study on small- and medium-sized Agile development teams, which found that Agile teams don't take security seriously even when building systems that are "web-facing and potential targets of attack". This isn't surprising. We already know that many development teams, especially small teams, do a poor job of building secure software. But is it Agile development specifically that is the problem?
Many security experts, especially those who work in or for enterprises, think it is. In Agile Development = Security Fail, Adrian Lane looks at the risks and problems in developing software following Agile methods from a software security perspective. And he finds a lot of them (as you can tell from the title of his talk). Let's look at these problems, and what they mean to Agile development teams.
<h3>Do Agile Teams Move too Fast to do a Good Job?</h3>
All development teams are under pressure to deliver as soon as possible, whether they are following incremental, iterative (capital-A Agile or small-a agile) methods or not. This pressure often leads teams to cut short reviews and testing, including security work. Agile teams move especially fast — that's one of the main reasons to follow Agile development. Agile teams try to deliver working software to the customer quickly and often so that the team and the customer can make sure that the project stays on track. But Agile teams that follow good practices can deliver code that is reliable and high-quality, and still do this quickly — this is why so many teams have adopted or are adopting Agile methods today. In Code Complete, Steve McConnell found that Agile teams following XP practices like pair programming and automated testing can achieve defect-removal rates of 90% on average and up to 97%, "which is far better than the industry average of 85 percent defect removal." If Agile teams can deliver high-quality code, why can't they deliver secure code?
<h3>Microsoft thinks that Agile teams can build secure software - they even explain how</h3>
Microsoft's SDL Agile breaks the set of security practices from their secure SDL down into steps that can be followed by Agile development teams working in sprints or time boxes. Some steps only need to be done once, usually at the start of a project — making sure that everyone on the team understands security and privacy requirements, making sure that the team is trained on secure development practices, setting up a way to track security bugs, and assigning a security lead for the team. Some practices need to be followed all of the time, in each sprint — using safe libraries and frameworks, running static analysis tools, and doing threat modeling on new features. And other steps the team should do as often as they can — security testing, design and code reviews, and incident response planning.
SDL Agile shows that Agile teams can build secure software — they just do it differently. Just like everything else in Agile development, security problems and security practices have to be broken down into smaller pieces — do less, but more often. You don't try to do things perfectly each time, because you don't have the time, and because you know that you will get another chance to do it again soon.
<h3>Iteration Zero — More work needs to be done upfront</h3>
Because attention to design and architecture upfront will pay dividends later, many (if not most) Agile teams start with an Iteration Zero — some time at the start of the project to get the team together, to choose the tools and frameworks that they want to start using and try them out, to learn about the domain and think through some of the design problems upfront, and to get an understanding of risks and constraints. This is also the time to understand the security, privacy, compliance and governance requirements for the system and the project.
Security needs to be included in design and coding work from the beginning. This means that the development team needs software security training early on so that they understand secure design problems and risks going in, and so that they can take care of them by making the right design and platform decisions. For example, SQL Injection (one of the leading vulnerabilities, especially for web apps) can be prevented upfront by deciding against dynamic SQL statements and using prepared statements with bind variables instead. It's a simple decision to make, and doesn't add to the cost of developing the system — but it's a decision that the team needs to know they need to make.
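To make the contrast concrete, here is a minimal sketch (using Python's standard `sqlite3` module and a made-up `users` table) of the difference between building SQL dynamically from user input and binding the same input as a parameter:

```python
import sqlite3

# Hypothetical schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "x' OR '1'='1"  # a classic injection payload

# Vulnerable: the payload becomes part of the SQL text and matches every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % user_input
).fetchall()

# Safe: the driver binds the value as data, so the payload matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # every row leaked
print(safe)    # payload treated as a literal string, no match
```

The safe version costs nothing extra to write — which is the point: it's a decision, not an expense.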
<h3>Taking small steps forward — Dealing with risks incrementally</h3>
With incremental, iterative development you lose the gating steps that are inherent to serial Waterfall development — the handoffs from design to coding, coding to testing. Security practices, like quality practices in Agile, have to be built into how the team works — done continuously, iteratively and incrementally instead. The team can use incremental Attack Surface Analysis to watch for changes to the system's security risk profile — to determine when they have changed the architecture or interfaces to the system in a way that could make the system more vulnerable to attack, and when they need to do more testing, code reviews, or threat modeling.
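At its simplest, incremental Attack Surface Analysis is a diff of the system's entry points between releases. A sketch, with hypothetical endpoint names, of the kind of check a team could run each sprint:

```python
# Entry points exposed by the previous and current release (hypothetical).
previous = {"GET /orders", "POST /orders", "GET /health"}
current = {"GET /orders", "POST /orders", "GET /health",
           "POST /admin/import", "GET /export"}

# New entry points signal a grown attack surface that may warrant
# extra review, testing, or threat modeling this sprint.
added = current - previous
removed = previous - current

if added:
    print("Attack surface grew; review these new entry points:")
    for endpoint in sorted(added):
        print("  " + endpoint)
```

In practice the entry-point list would be generated from the code or the router configuration rather than maintained by hand, but the idea is the same: only the delta needs attention each sprint.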
Threat modeling doesn't have to be a big deal for teams that are building software incrementally and that know the code well. Most changes that can be done in a 1-week or 2-week sprint are small and incremental, and shouldn't take a lot of time to review even when you have changed the attack surface. Like all things in Agile development, the ceremony can be kept to a minimum. What is important is to always be looking for risks and threats, for what can go wrong.
Developers who are pairing can also look out for security bugs and risks while they look for other coding and design problems. Static analysis tools can be plugged into Continuous Integration to check for security vulnerabilities and other coding mistakes. And there is time in Agile development to make sure that changes to high-risk code (network-facing and customer-facing code, plumbing and security features) are manually code reviewed against security coding checklists.
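To illustrate the kind of check a CI pipeline runs — not a real tool, just a toy version of the idea — here is a trivial "static analysis" pass that flags string-built SQL and `eval()` calls; a real team would wire an actual analyzer into their build instead:

```python
import re

# Toy patterns for illustration; real analyzers are far more sophisticated.
RISKY = [
    (re.compile(r"execute\([^)]*%"), "string-formatted SQL (possible injection)"),
    (re.compile(r"\beval\("), "use of eval()"),
]

def scan(source: str) -> list:
    """Return (line number, message) pairs for risky-looking lines."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for pattern, message in RISKY:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

sample = 'cursor.execute("SELECT * FROM t WHERE id = %s" % user_id)'
print(scan(sample))
```

Because the check is cheap and automatic, it runs on every commit — which is exactly how security work gets done continuously instead of in a late-phase gate.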
<h3>The Whole Team</h3>
The idea of the Whole Team is that everyone works together and shares responsibility for the code. But security work — session management, encryption, the database access layer (the kind of code that you should be using a framework for anyway) — needs to be done by experienced, technically strong developers. Adrian Lane is right that it's foolish to let inexperienced developers work on security-sensitive parts of the system, at least not without the help of experienced people who understand the risks and care about details, just like it is foolish to have them work on other technically complex or risky parts of the system.
<h3>Automated Testing isn't Enough</h3>
Many Agile teams rely on automated unit testing and acceptance testing to prove that the code works. But incremental, in-phase functional testing isn't enough in itself to build secure and reliable software. Adrian Lane makes a good point that developers aren't good at breaking their own code — developer testing tends to focus on proving that the code works. So somebody else (QA testers, outside security testers) needs to try to break the system and look for holes through adversarial, exploratory testing, destructive testing and simulated attacks.
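The difference in mindset is easy to show. A typical developer test confirms that valid input works; a destructive test throws hostile and malformed input at the code and checks that it fails safely. A sketch, using a hypothetical input validator:

```python
def parse_quantity(raw: str) -> int:
    """Parse an order quantity; reject anything outside 1..1000."""
    value = int(raw)  # raises ValueError on non-numeric input
    if not 1 <= value <= 1000:
        raise ValueError("quantity out of range")
    return value

# Adversarial inputs: negatives, zero, overflow attempts, injection
# payloads, control characters, empty strings.
hostile_inputs = ["-1", "0", "1e9", "999999", "'; DROP TABLE orders;--", "\x00", ""]

for raw in hostile_inputs:
    try:
        parse_quantity(raw)
        print("ACCEPTED (bug?):", repr(raw))
    except ValueError:
        print("rejected:", repr(raw))
```

A developer writing `parse_quantity` would naturally test `"5"`; it takes an adversarial tester (or a fuzzer) to think of feeding it `"'; DROP TABLE orders;--"`.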
At some point you should pen test the running system. With iterative, incremental development, the system is a constantly moving target, changing every week or two. You can't pen test the system at the end of each time box — there isn't enough time. But you don't have to. Schedule a baseline pen test when there is enough key functionality in place to be worth testing, but early enough that you can learn from it and before you have built too much software that may need to be changed. Then pen test again, based on risk, any time you have made a big change to the architecture or the attack surface.
Pen tests and other security reviews don't fit nicely into time boxes, but they don't have to — they can be run as parallel engagements especially if you are bringing in someone from outside to do the work. The team members who are needed to help out will need some time buffered from their sprint commitments, in the same way that some teams need to buffer time for support work.
Any problems found in security reviews and pen testing need to be added to the team's backlog and dealt with like any other bug. If it's a serious enough problem, it should be reviewed by the team as part of their retrospectives — not just how to fix the problem, but what it means for how they develop software, and what changes they need to make to how they design, code and test software to prevent problems like this from happening again.
<h3>Security Sprints and Hardening Sprints?</h3>
Some people recommend security sprints: periodic sprints where the team focuses on security issues instead of delivering features. One type of security sprint is what Microsoft calls a security spike or "mini security push", where the team looks for security bugs and other bugs and problems in an existing code base before making any major changes. Find the bugs, review and triage them, identify high-risk areas of code that may need more testing or review (or even rewriting if the code is bad enough), and then decide what you will have to fix so that you can start with a stable and safe base.
Another kind of security sprint is a "hardening sprint": a sprint that focuses on fixing security problems found in a pen test or after running a static analysis tool for the first time, or after the team finishes their security training and wants to get their hands dirty, or after a security incident in production; as well as fixing operational problems and outstanding bugs, and reviewing and updating documentation.
<h3>Who is Responsible for Security: the Customer or the Team?</h3>
Adrian Lane also talks about the central problem of Agile's Customer or Product Owner — that it's the Customer who drives the team's requirements and priorities, decides what gets done and what doesn't. Even a strong and well-intentioned Customer can't be expected to understand application security issues or how to build secure software. They already have too much on their plate. The Customer's job is to decide what the team builds and in what order — NOT how they build it. At most they should be responsible for helping the team to understand the system's security and privacy and compliance constraints and basic security-related features. Defining what information is sensitive and needs to be protected, what activities need to be logged and recorded and reported in the system, and helping to define rules like access control restrictions and entitlements.
Security isn't the Customer's responsibility. And there's no time or opportunity to force security from outside — an Agile culture doesn't support this well anyway. Adrian Lane talks about the "Chickens and Pigs" problem with Agile (especially Scrum) teams: outsiders like security and compliance aren't considered part of the team, so they don't share in the outcome and can't force one.
This means that the real job of building a secure system has to fall to the development team — it's their job to design, build and deliver a system that works and that is reliable and secure. Just because many Agile teams building online web systems and games and mobile apps don't build secure software doesn't mean that they can't. There's no reason that Agile Development has to = Security Fail. The technical work, the commitment to quality and detail that is required to build secure software is the same, regardless of what development approach a team follows.