“Process” is often seen as antithetical to the fast-moving nature of startups; security processes, in particular, can be regarded as a direct impediment to shipping cool features. On the other hand, the security of an organization and its users shouldn’t be disregarded for the sake of speed. Striking a balance between security and nimble development is a vital responsibility of a security team (in particular, an application security team). At Slack, we have implemented a secure development process which has both accelerated development and allowed us to scale our small team to cover the features of a rapidly growing engineering organization.
In this presentation we will discuss both our Secure Development Lifecycle (SDL) process and tooling, and present metrics and analysis of how the process has worked thus far. We intend to open-source our tooling as a supplement to this presentation, and to offer advice for others wishing to attempt similar implementations. We'll discuss our deployment of a flexible framework for security reviews, including a lightweight self-service assessment tool, a checklist generator, and most importantly a chat-based process that meets people where they are already working. We’ll show how it’s possible to encourage a security mindset among developers while avoiding an adversarial relationship. By tracking data from multiple sources, we can also quantify the success of such an approach and show how it can be applied in other organizations.
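To make the checklist-generator idea concrete, here is a minimal sketch of how a self-service assessment could feed a review checklist. The rule names and checklist items are hypothetical illustrations, not Slack's actual tooling:

```python
# Hypothetical sketch: map yes/no answers from a self-service assessment
# to security review checklist items. Rules and items are illustrative.

CHECKLIST_RULES = {
    "handles_user_data": [
        "Confirm data is encrypted at rest",
        "Review access-control checks on new endpoints",
    ],
    "adds_external_endpoint": [
        "Add endpoint to the security scanning scope",
        "Verify authentication is enforced",
    ],
    "uses_third_party_service": [
        "Review the vendor's security posture",
    ],
}

def generate_checklist(answers):
    """Return the checklist items triggered by a dict of yes/no answers."""
    items = []
    for question, triggered_items in CHECKLIST_RULES.items():
        if answers.get(question):
            items.extend(triggered_items)
    return items
```

A chat bot could then post `generate_checklist(answers)` into the team's channel, which is the "meet people where they already work" part of the approach.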
At Netflix Security, we try our best to enable developers by removing roadblocks and providing systems with “sane” defaults that keep everyone from shooting themselves in the foot. When dealing with AWS security groups, not shooting yourself in the foot is important. VPCs, subnets, CIDR ranges, and group membership are all part of the security group vocabulary and essential in ensuring that applications can only talk to each other on an as-needed basis.
How many times have you heard fellow engineers mutter, “Well, adding 0.0.0.0/0 seems to work. We’ll fix it later”? Grouper and Dredge together provide a solution for generating AWS security group rules based on current network data, ensuring that least privilege isn’t a future milestone. Both Grouper and Dredge are deeply integrated into our stack, providing developers with network insights that were previously unavailable.
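The rule-generation idea can be sketched in a few lines. This is an illustrative reconstruction of the concept, not Grouper or Dredge itself, and the group names are made up:

```python
# Illustrative sketch: derive least-privilege ingress rules from observed
# network flows, instead of falling back to 0.0.0.0/0.

from collections import defaultdict

def rules_from_flows(flows):
    """flows: iterable of (src_group, dst_group, port) tuples observed on
    the network. Returns per-destination ingress rules that allow only the
    source groups and ports actually seen."""
    ingress = defaultdict(set)
    for src, dst, port in flows:
        ingress[dst].add((src, port))
    return {
        dst: [{"source_group": src, "port": port, "protocol": "tcp"}
              for src, port in sorted(pairs)]
        for dst, pairs in ingress.items()
    }
```

Feeding this from real flow data is what turns “we’ll fix it later” into rules that are least-privilege from day one.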
This talk will cover the history of our security group infrastructure; the challenges of security groups in a large environment (limits on the number of rules, multiple accounts, the lack of cross-region security groups, etc.); our current security group management and maturity strategy; and how Grouper aligns with the freedom and responsibility culture at Netflix.
The Netflix cloud security team has a strong commitment to open source. Given sufficient interest in and maturity of these projects, we are open to open-sourcing them in the future.
Writing secure code is not as glamorous as releasing the next cool feature. However, we know that fixing security vulnerabilities in production is hard and costly. In order to build a more secure application, it is important to consider what makes it secure from the start, during the design phase. But which security requirements make sense? How can a security organization track whether its multitude of applications adhere to application security best practices and known secure states? How does a development team prioritize all the security requirements?
Driving uniform security requirements across a large company is no small task. Many development groups write security requirements for their specific application, guided by regulation or industry standards, that are never seen by other development teams or the security organization. Further difficulties arise from dispersed teams using different tools and processes. Acquired development organizations accustomed to different processes pose their own challenges. The sum of these items leads to a siloed approach to writing and tracing security requirements, complicating efforts by the security organization to understand whether applications are developing secure code.
With the OWASP ASVS, a set of verification statements can be used to create a list of functional and non-functional requirements and controls that an application can adhere to in order to maintain a secure posture appropriate to its risk tolerance. Our Application Security team used the verification statements from the ASVS to create a set of security requirements, controls, and technical design decisions that application teams can use in their normal Scrum process, just as they would for feature development. The Application Security team also provides a priority ranking on each of the work items to assist teams in prioritizing the work.
Our team developed a modified version of the open source Google VSAQ in order to present our applications with a questionnaire that determines the ASVS level an application should strive toward. The questionnaire asks questions about the types of features and functions the application may have in order to identify the tasks the application needs to complete to meet that ASVS level. In some cases, the application may use a third party or another internal application to handle functionality listed in the ASVS, giving the development team the ability to opt out of some security requirements. For instance, user authentication may be a module developed by another application, as in the case of an SSO-enabled application.
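The mapping from questionnaire answers to a target level and a filtered requirement set could be sketched as follows. The thresholds, categories, and requirement IDs are illustrative assumptions, not the actual VSAQ logic or ASVS content:

```python
# Hypothetical sketch: map feature/risk answers to a target ASVS level,
# then filter the requirement catalogue, honoring opt-outs for categories
# delegated to another component (e.g. SSO handles authentication).

def target_asvs_level(answers):
    """Pick an ASVS level from yes/no answers (illustrative rules)."""
    if answers.get("handles_payment_data") or answers.get("handles_health_data"):
        return 3
    if answers.get("authenticates_users") or answers.get("stores_pii"):
        return 2
    return 1

def applicable_requirements(requirements, level, delegated=()):
    """Keep requirements at or below the target level, dropping any whose
    category is delegated to another application or third party."""
    return [r for r in requirements
            if r["level"] <= level and r["category"] not in delegated]
```

The surviving requirements are what gets pushed into the team's backlog alongside ordinary feature work.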
As with most projects, creating new processes and procedures for something specific like security requirements can create turmoil and outright revolt among the consumers of the new process. So bringing a set of uniform security requirements to an established organization requires working within the existing process. To this end we are utilizing current internal requirements tracking, enhancement tracking, and testing tools as a way to reduce resistance to the new process. Through this already-defined process, security tasks can be viewed and treated like any other type of development task. This allows the security organization to see which applications are adhering to the controls, which ones are not, and which controls are the most challenging across the application base, and to follow the work through the lifecycle using standard reporting.
Test plans can be written using the ASVS verification statements as they are or as a guide to a more specific test plan. To verify that the requirements have been met by an application, the test plans will be mapped to the requirement in a requirements tracking tool.
Secure development does not need to be painful or difficult. In this talk I will show how an organization can apply the ASVS to its software security life cycle to create more secure applications. Working with a ready-made set of security requirements and methods of validation takes the ambiguity out of creating security requirements and makes them more consumable by development teams. The OWASP ASVS provides the guidance for that prepared set of requirements, which can be used within an already established software development life cycle.
Description:
The Home Depot, the world’s largest home improvement retailer, has been providing hammers, saws, nails, lumber, and paint to Do-It-Yourselfers and Pros alike since 1978. In the same spirit, the Product Security team offers self-service tools and materials to help software developers analyze their source code and deployed applications at scale and speed, matching the pace of agile.
Key Takeaways:
• Build tooling using the same technologies and methods developers use
• Ensure tooling is available when and how developers want it
• Eliminate friction by providing meaningful results and teaching developers how to interpret them
• Empower developers to determine a path toward issue resolution
Getting developers to care about security is tough, but turning your developer training into a hands-on puzzle game with a Capture the Flag (CTF) event can create excitement while effectively accomplishing the real goal of the training: permanently opening their eyes to what goes wrong when security controls are left out, and giving them the attacker’s perspective so they look critically at their code going forward. Consider that students remember 20% of what they hear and 90% of what they do. Hands-on training is radically more effective.
This presentation will discuss the pedagogical underpinnings to the technique (so management will approve it), and practical recommendations on implementing an event (so that the participants will have a good time). After several years of running events in a variety of contexts, I’ll share some success stories and admit to some failures that will help put you on the right path for your own event.
Topics will include:
• Designing your event infrastructure to minimize risk and satisfy IT policies.
• Preparing difficult, but solvable challenges.
• Managing players while encouraging them to break the rules.
Cookies are an integral part of any web application, and secure management of cookies is essential to web security. However, during my years as a security consultant I've often encountered various myths and misconceptions regarding cookie security from developers and security professionals alike. This talk will dive into the details of cookie security and highlight some of the lesser known facts about well-known cookie attributes. For example, we will see why the ‘Secure’ attribute doesn’t make a cookie immune to active man-in-the-middle attacks, how JavaScript can manipulate cookies marked with ‘HttpOnly’, why setting the ‘Domain’ attribute to the origin host may make a cookie less secure, and how other applications on the same host can still access cookies scoped to a path outside their application. This talk will also cover many of the recent improvements to cookie security implemented in modern browsers, such as ‘Strict secure cookie’, ‘Cookie prefixes’, and the ‘SameSite’ attribute. You will come away with a solid understanding of the pitfalls affecting cookie security, the risks associated with them, and how you can leverage modern security specifications to enhance the protection of cookies in your web application.
Tentative outline:
-Cookie Basics
-Cookie Lifetime
o Persistent vs. non-persistent cookies
o Expires and Max-Age Attribute
o Security implications
-Cookie Scope vs. Same-origin Policy
-Secure Attribute
o What it protects against
o What it doesn’t protect against
o Targeting ‘Secure’ cookies in MiTM attacks
o Demo
-HttpOnly Attribute
o What it protects against
o What it doesn’t protect against
o Attacking ‘HttpOnly’ cookies from JavaScript
o Demo
-Path Attribute
o Isolating cookies between applications on same domain
o Compromising cookies scoped to another application’s path
o Demo
-Domain Attribute
o Broadly scoped domains
o Narrowly scoped domains
o Risks with setting the domain attribute
-Modern Cookie Protections
o SameSite Attribute
o Cookie Prefixes
o Strict Secure Cookie
-Summary
o Is there an ultimate cookie configuration?
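By way of illustration, a session cookie that combines the modern protections from the outline might be emitted like this. The helper below is a minimal sketch, not an "ultimate" configuration, and the cookie name and value are placeholders:

```python
# Minimal sketch: a hardened Set-Cookie header combining the protections
# discussed above. The __Host- prefix requires Secure, forbids a Domain
# attribute, and requires Path=/, so the cookie can't be broadly scoped
# or shadowed by a cookie set from another path or subdomain.

def hardened_session_cookie(name, value):
    """Build a Set-Cookie header value using the talk's defenses."""
    return (f"__Host-{name}={value}; Secure; HttpOnly; "
            f"Path=/; SameSite=Lax")
```

Note what each attribute still does *not* buy you, per the outline: `Secure` does not stop an active MitM from overwriting the cookie (the prefix helps there), and `HttpOnly` only blocks reads from JavaScript, not all manipulation.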
In 1984, Ken Thompson wrote, “You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.)” [1] Yet modern software applications are 80% open source components.[2] The supply chain is total anarchy.
All this third-party code runs with the full privileges of the application, essentially granting full access to host, backend, datacenter, and possibly intranet. Obviously, if a popular component, like Log4j or Apache Commons, were trojaned, it would give an attacker a hall pass to most of the datacenters in the world. Much of our trust in open source components comes from the fact that the source is public and “given enough eyeballs, all bugs are shallow.” [3] Unfortunately, in the Java ecosystem (and most other environments), there is literally no assurance that a given binary matches the source.
This talk reports on the results of a large-scale experiment to search the universe of Java libraries for malicious discrepancies between source code and binaries. We created an automated security pipeline that automatically matches repositories, builds code, performs a “security diff” of the bytecode instructions, and generates human-readable reports for analysis. Our “security diff” tool ignores inconsequential differences between compilers, flags, and versions, so that only truly different code gets flagged. The experiment is currently underway and hundreds of libraries have been analyzed.
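The core of a “security diff” can be sketched roughly as below. This is an illustrative reconstruction of the idea, not the authors' pipeline: it compares two javap-style disassembly listings after normalizing details that legitimately vary between compilers, and the simplification to sets ignores instruction ordering and duplicates:

```python
# Rough sketch: diff two bytecode disassembly listings, ignoring
# compiler-dependent noise (offsets, constant-pool indexes, line tables)
# so only genuinely different instructions get flagged.

import re

def normalize(disassembly):
    """Strip bytecode offsets, constant-pool indexes, and line-number
    tables from a javap-style listing."""
    out = []
    for line in disassembly.splitlines():
        line = line.strip()
        if not line or line.startswith("LineNumberTable"):
            continue
        line = re.sub(r"^\d+:\s*", "", line)   # drop bytecode offsets
        line = re.sub(r"#\d+", "#_", line)     # drop constant-pool indexes
        out.append(line)
    return out

def security_diff(built_from_source, published_binary):
    """Return instructions present in one listing but not the other."""
    a = set(normalize(built_from_source))
    b = set(normalize(published_binary))
    return sorted(a ^ b)
```

A real tool must also align methods and handle reordering, but the principle is the same: anything surviving normalization is a candidate for human review.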
Of course, source-to-binary traceability is not everything; a malicious developer could hide attacks in the source code [4]. A crafty malicious developer would intentionally introduce vulnerabilities that look like accidents to establish some plausible deniability. So, given the trust that these libraries have been granted, and the potential attractiveness to an attacker (particularly nation-sponsored or financially motivated hackers), we absolutely have to know if public source code matches the binaries we blindly trust.
The bigger the company you work in, the more technologies and methodologies you will encounter across development teams. At the same time, you want to address security risks in an appropriate, reliable, and traceable way for all of them.
After a short introduction of a unified process for handling security requirements in a large company, the main part of the talk is going to focus on a tool called SecurityRAT which we developed in order to support and accelerate this process.
The goal of the tool is first to provide a list of relevant security requirements according to properties of the developed software (e.g. type of software, criticality), and afterwards to handle these in a mostly automated way, with integration into an issue tracker as a core feature.
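The selection-and-handoff step can be sketched as follows. This is an illustrative sketch of the workflow, not SecurityRAT's code, and the catalogue fields and issue payload shape are generic assumptions rather than any real tracker's API:

```python
# Illustrative sketch: filter a requirement catalogue by software
# properties, then emit generic issue-tracker payloads for bulk creation.

def select_requirements(catalogue, software_type, criticality):
    """Keep requirements that apply to this type of software and whose
    minimum criticality is at or below the project's criticality."""
    return [r for r in catalogue
            if software_type in r["applies_to"]
            and r["min_criticality"] <= criticality]

def as_tickets(requirements, project_key):
    """Turn selected requirements into issue payloads (generic shape)."""
    return [{"project": project_key,
             "summary": f"[SecReq] {r['id']}: {r['title']}",
             "labels": ["security-requirement"]}
            for r in requirements]
```

Once the requirements live in the tracker, their status can be followed with the same reporting used for any other work item.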
Work in progress (currently targeting mainly integration with other systems, automated testing of requirements, and reporting) as well as future plans will form the last part of the talk.
During the past 7 years, I have examined how cryptography has been used in 200+ different projects from a security risk perspective. This includes 85+ design reviews and well over 100 secure code reviews (mostly Java, with some C/C++ and C# thrown in for good measure) performed for two different companies. That covers the proprietary code of those two companies, proprietary vendor code reviewed under NDA, and some FOSS code. This talk explores the most commonly observed applied cryptography mistakes made by developers during that 7-year window, how you can spot those mistakes, and finally how to correct them.
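As a taste of the genre, here is one recurring class of mistake the talk covers, sketched with only the Python standard library (the talk's examples are mostly Java; this is the same mistake in a different language): verifying a MAC with plain equality instead of a constant-time comparison.

```python
# One recurring applied-crypto mistake: MAC verification with ==, which
# leaks timing information. Shown wrong and right, stdlib only.

import hashlib
import hmac
import secrets

def sign(key, message):
    """HMAC-SHA256 tag for a message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify_broken(key, message, tag):
    # WRONG: == short-circuits on the first differing byte, letting an
    # attacker who can measure timing forge a tag byte-by-byte.
    return sign(key, message) == tag

def verify_correct(key, message, tag):
    # RIGHT: constant-time comparison.
    return hmac.compare_digest(sign(key, message), tag)

key = secrets.token_bytes(32)
tag = sign(key, b"transfer $100")
```

Spotting this in review is easy once you know to look for it: any `==` (or `equals`) on a MAC, signature, or password hash is a red flag.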
Best practices for HTTPS deployment have been steadily improving over the past decade. TLS usage on web servers has been steadily increasing and there are dozens of tools (O-Saft being the most popular) now available to test the correctness of the TLS configuration of a front-end web server. All good news. But what about the other services and protocols used in a web application stack? What about the connection between the web application server and the backing data store? Unfortunately, the state of the art regarding proper TLS configuration in popular databases has not progressed as quickly as it has for HTTPS.
Virtually all important data sent between a client and a web application will also be sent between the application server and its backing data store. The network IS hostile, and any connection to the backing data store of a web application needs the same level of network confidentiality and integrity as the front-end client.
This talk will look at the current TLS capabilities of popular web application data stores (MySQL, PostgreSQL, and MongoDB), including both the most recent versions and the most widely deployed versions. We’ll discuss best practices for TLS configuration within these data stores, which differ somewhat from HTTPS, as well as improvements the presenter has made to tools that help verify proper server-side TLS configuration. Finally, with these new tools, we’ll survey the actual TLS configurations of publicly connected data stores to determine adherence to best practices in the wild.
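A minimal version of such a check might look like the sketch below. It only covers stores that terminate TLS directly on their port (MongoDB-style); MySQL and PostgreSQL negotiate TLS inside their own protocols, so real tooling needs protocol-aware probes. The host and port are placeholders, and the policy helper encodes one common best practice (TLS 1.2+), not the presenter's full rule set:

```python
# Sketch: probe a data store that speaks TLS from the first byte and
# report the negotiated parameters, verifying the cert against system CAs.

import socket
import ssl

def probe_tls(host, port, timeout=5.0):
    """Return (protocol_version, cipher_name, cert_subject)."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            return tls.version(), tls.cipher()[0], cert.get("subject")

def acceptable(version):
    """One widely cited best practice: TLS 1.2 or newer."""
    return version in ("TLSv1.2", "TLSv1.3")
```

Surveying public endpoints is then a loop over `probe_tls` results, flagging anything where `acceptable()` is false or the handshake fails.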
Building secure applications is a difficult task, especially on top of a new application framework. ASP.NET Core is a new open-source and cross-platform framework, completely rewritten from scratch. It runs on Windows, macOS, and Linux, and it has moved to a more modular approach that gives more flexibility when building solutions.
How secure is ASP.NET Core by default? Do the APIs help developers do a good job, or are mistakes easily made? In this session, we're going to investigate how ASP.NET Core MVC deals with these questions for issues such as Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF). We’ll also extend it, adopt new web standards, and see how we can validate an existing solution for the problems we’ve identified.