Login CSRF is a well-known vulnerability that allows an attacker to hijack a victim’s browser into logging in to an application with the attacker’s own credentials. This paper applies a similar concept to an application that uses federated identities. Specifically, we walk through a CSRF issue that can creep into an application using OpenID Connect and OAuth 2.0, where more than one identity for a user needs to be linked together. We examine the conditions under which such a Federated Login CSRF may occur and the mitigations available.
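The classic mitigation for this class of CSRF in OAuth 2.0 flows is binding the authorization request to the user's session with an unguessable `state` value that is verified on the callback. A minimal sketch, assuming a dict-like server-side session; the function names are illustrative, not a specific framework's API:

```python
import hmac
import secrets

def begin_login(session: dict) -> str:
    """Generate a per-session state value before redirecting to the IdP."""
    state = secrets.token_urlsafe(32)
    session["oauth_state"] = state
    return state  # include as the `state` query parameter in the auth request

def handle_callback(session: dict, returned_state: str) -> bool:
    """Reject the callback unless the state round-tripped unchanged."""
    expected = session.pop("oauth_state", None)  # single use: consume it
    if expected is None:
        return False  # no login in flight for this session
    # constant-time compare to avoid leaking the expected value
    return hmac.compare_digest(expected, returned_state)
```

Because the attacker cannot read the victim's session, a forged callback (carrying the attacker's own authorization code) fails the state check, which is exactly the linking step where a federated-login CSRF would otherwise slip in.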
I have worked on enterprise APIs used by millions of users worldwide, both as an Enterprise Security Architect and as a developer building these services. In this session, I will talk about the top 10 ways to design and build secure microservices to protect your users and your reputation. This top 10 list includes:
1. Use the latest version of TLS
2. Designing a secure infrastructure and network, whether on premises or in the cloud
3. Best practices in authentication to authenticate your clients or end users
4. Authorization of your end users or clients so they get just the right access based on least privilege and need to know.
5. Protecting your APIs against Distributed Denial of Service by using patterns such as Rate Limiting, Throttling, Daily limits etc.
6. Alerting and Monitoring your APIs to detect abnormal patterns and security issues.
7. API resiliency that directly affects Availability of your Microservices.
8. Encrypting and hashing sensitive data, both at rest and in transit - in memory, in cache, in the database, and in the UI
9. Key management security
10. Session Management best practices
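As a concrete illustration of item 5 above, rate limiting is often implemented as a token bucket. A minimal sketch; the rate and capacity parameters are illustrative assumptions:

```python
import time

class TokenBucket:
    """Allow up to `capacity` burst requests, refilled at `rate` tokens/second."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # caller would typically respond 429 Too Many Requests
```

The same bucket structure, keyed per client or per API key, also covers throttling and daily limits by varying the rate and capacity.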
Leveraging Blockchain for Identity and Authentication in IoT is good for Security
Since the beginning of the internet, attempts have been made to solve the problem of privacy and security. Every effort has had challenges of inconvenience, cost and insecurity.
How do we prove our identity?
Blockchain technology’s mutual distributed ledgers (MDLs) cannot be altered, and they allow people and companies to record, validate and track transactions across a network of decentralized computer systems. These MDLs are databases with a time-stamped audit trail.
By leveraging this technology, an app on our device hashes our identifying information and inserts it into the public blockchain. Any time we need to authenticate to another service or user, we share the information, which is then run through the algorithm and checked against the blockchain. Once authenticated, our identifying information is not needed again.
The hashed information is decentralized, which provides interoperability. Personal information never leaves the device and is not stored on a centralized server. Taking the personal data, hashing it, and then discarding everything but the hashes allows the network to accept the information in the same manner as our ID cards.
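The hash-and-compare step described above can be sketched as follows. This is a minimal illustration, not any specific product's design; the attribute names, canonicalization, and salting scheme are our own assumptions:

```python
import hashlib
import hmac

def identity_digest(attributes: dict, salt: bytes) -> str:
    """Canonicalize identity attributes and hash them; only this digest
    is ever published (e.g. anchored on the ledger)."""
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(salt + canonical.encode("utf-8")).hexdigest()

def verify(presented: dict, salt: bytes, anchored_digest: str) -> bool:
    """A verifier recomputes the digest from presented attributes and
    checks it against the value anchored on the ledger."""
    return hmac.compare_digest(identity_digest(presented, salt), anchored_digest)
```

Because only the digest is shared, the raw attributes stay on the device, matching the "personal information never leaves the device" property described above.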
These blockchains open the door to innovation and enable greater interoperability, connecting various distributed services.
There can be two distinct MDLs: one holding the encrypted documents, and a separate ledger holding encryption-key access to the folders that encompass our identity, health or other qualifying records. Driver’s license bureaus can provide us a digitally signed copy of our driver’s license that we control. We then offer controlled access to entities that need to inspect the documents recorded on the MDL.
This use of an immutable ledger can become the accepted modality of the future.
The OWASP 2017 Top Ten adds a new category: underprotected APIs. This reflects how RESTful web APIs are rapidly becoming the backbone of communication on the modern web, presenting a whole series of new challenges for security and access authorization that are not well covered by existing tools or techniques. This talk will cover some of the potential threats that result from failure to secure web APIs sufficiently and discuss some of the emerging security technologies in the field.

In this API-driven world there is a more complex set of API-consuming clients, some of which may need to embed access credentials such as API keys. We will discuss the differences between software authorization via static API keys and user authorization via OAuth2, and the interplay between them. We will pay particular attention to API consumers such as mobile apps, where the code must be published in the public domain, and look at the typically poor level of practice in concealing access credentials such as API keys in these apps. Some practical advice with code examples will be provided on how to improve the security posture of mobile apps accessing an API.

We will cover the use of TLS and how it is not an effective countermeasure to credential extraction unless certificate pinning is also used to prevent Man-in-the-Middle attacks against the app, with practical advice on how to implement TLS pinning with code examples. Finally, we will look at more advanced techniques such as app hardening, white-box cryptography and software attestation for mobile applications where security is crucial. Attendees should gain a good understanding of the underprotected API problem, some short-term practical tips to improve their API security posture with minimal effort, and an appreciation of emerging tools and technologies that enable a significant step change in security.
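The pin-check step at the heart of certificate pinning can be sketched as follows. This is an illustrative assumption-laden sketch, not the talk's actual code: obtaining the peer certificate's DER bytes is platform-specific and omitted, and production implementations typically pin the SubjectPublicKeyInfo rather than the whole certificate so pins survive reissuance:

```python
import base64
import hashlib

def pin_for(cert_der: bytes) -> str:
    """Compute the pin as base64(SHA-256(certificate DER bytes))."""
    return base64.b64encode(hashlib.sha256(cert_der).digest()).decode("ascii")

def connection_allowed(cert_der: bytes, pinned: set) -> bool:
    """Accept the TLS connection only if the presented certificate
    matches one of the pins baked into the app (allowing a backup pin)."""
    return pin_for(cert_der) in pinned
```

Shipping at least one backup pin alongside the active one is the usual way to avoid bricking the app when the server certificate rotates.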
Each Android app runs in its own VM, with every VM allocated a limited heap size for creating new objects. Neither the app nor the OS differentiates between regular objects and objects that contain security-sensitive information like user authentication credentials, authorization tokens, en/decryption keys, PINs, etc. These critical objects, like any other object, are kept in the heap until the OS hits a memory constraint and realizes that it needs more memory. The OS then invokes the garbage collector to reclaim memory from the apps. Java does not provide explicit APIs to reclaim memory occupied by objects. This leaves a window of time during which the security-critical objects live in memory waiting to be garbage collected. During this window, a compromise of the app can allow an attacker to read the credentials. This is a needless risk every Android application lives with today. To exacerbate the situation, apps today make heavy use of identity providers to implement OpenID/OAuth-based authentication and authorization.
In this paper we propose a novel approach to determine at every program statement, which security critical objects will not be used by the app in the future. An Android application once compiled, has all the information needed to determine this. Using results from our data flow analysis [1] we can decide to flush out the security sensitive information from the objects immediately after their last use, thereby preventing an attacker who has compromised the app from reading security critical information. This way an app can truly provide defence in depth, protecting sensitive data even after a compromise.
We propose a new tool called Androsia, which uses static program analysis techniques to perform a summary-based [2] interprocedural data flow analysis that determines the points in the program where security-sensitive objects are last used (so that their contents can be cleared). Androsia then performs bytecode transformation of the app to flush out the secrets, resetting the objects to their default values. The data flow analysis associates two flow sets with each statement in the unit control flow graph: an in-set and an out-set. These sets are (1) initialized, then (2) propagated through the unit graph along statement nodes until (3) a fixed point is reached.
We leverage the power of Soot [3], a static Java bytecode analysis framework, to identify the points in the program where an object is last used. Detecting the Last Usage Point (LUP) of objects requires analyzing methods in reverse topological order of the call graph, which means that a callee method is analyzed before its callers. We construct flow functions for the analysis and use them to propagate the data flow sets [4]. The flow functions are as follows:
Out(i) = ∅, if S(i) is the exit node of the CFG
       = ∪ In(j), where S(j) ranges over the successor statements of S(i), otherwise
In(i)  = Out(i) ∪ Gen(i), where
Gen(i) = {var(y)}, if S(i) is of the form x = y
       = {var(y)}, if S(i) is of the form if(y)
       = {var(y)}, if S(i) is of the form while(y)
       = {p}, if S(i) is of the form x = f(p)
       = ∅, otherwise
(In the interest of space, please refer to [1] for more on how flow functions work in a data flow analysis.)
In our analysis, the flow sets are propagated backwards in the unit graph [5]. The analysis result for a method is kept as a summary for that method and is propagated to caller methods at each call site, giving rise to an inter-procedural, summary-based analysis.
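The fixed-point iteration over these flow sets can be sketched on a toy intraprocedural CFG as follows. The graph, statements, and gen sets below are hand-written stand-ins for what Soot would derive from bytecode; this illustrates the iteration scheme, not Androsia itself:

```python
def solve(succ, gen):
    """Backward analysis: succ maps node -> successor nodes,
    gen maps node -> set of variables used at that node.
    Computes In(i) = Out(i) ∪ Gen(i), Out(i) = ∪ In(succ)."""
    nodes = list(succ)
    In = {n: set() for n in nodes}
    Out = {n: set() for n in nodes}
    changed = True
    while changed:                      # iterate until a fixed point
        changed = False
        for n in nodes:
            new_out = (set().union(*(In[s] for s in succ[n]))
                       if succ[n] else set())   # exit node: Out = ∅
            new_in = new_out | gen[n]
            if new_out != Out[n] or new_in != In[n]:
                Out[n], In[n] = new_out, new_in
                changed = True
    return In, Out

def last_use_points(succ, gen):
    """A node is a last-use point for v if v is used there (v in Gen)
    but not needed afterwards (v not in Out) - the spot where Androsia
    would insert the bytecode that clears the secret."""
    In, Out = solve(succ, gen)
    return {n: gen[n] - Out[n] for n in succ}
```

On a straight-line graph where a key is fetched at node 1, used at node 2, and node 3 is the exit, the analysis reports node 2 as the last-use point of the key.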
Using the results from this analysis, we then perform bytecode transformation on the target app to remove sensitive information from the objects at the program points identified by our analysis. As a case study, we take Android apps and demonstrate the security that Androsia has to offer.
[1] Data flow analysis, https://en.wikipedia.org/wiki/Data-flow_analysis
[2] D. Yan, G. Xu, and A. Rountev. Rethinking soot for summary-based wholeprogram analysis. In Proceedings of the ACM SIGPLAN International Workshop on State of the Art in Java Program Analysis, SOAP ’12, pages 9–14, New York, NY, USA, 2012. ACM
[3] Soot: a Java Optimization Framework. http://www.sable.mcgill.ca/soot/
[4] Implementing an intra procedural data flow analysis in Soot. https://github.com/Sable/soot/wiki/Implementing-an-intra-procedural-data-flow-analysis-in-Soot
[5] UnitGraph. https://www.sable.mcgill.ca/soot/doc/soot/toolkits/graph/UnitGraph.html
Most businesses have at least one old clunker app kicking around, and the longer it has been around and the more clunky it is, the more likely it is to be vital to your business (otherwise you’d have gotten rid of it, right?). So how do you approach getting an old clunker migrated to the cloud? Think you can put it off? You’ll probably discover that there is a compelling business reason to get it migrated lurking just around the corner that will force your hand. Whether it is as mundane as a data center consolidation effort, or as aspirational as a push to transform the business to be more agile and customer focused, the cloud has your app in its sights and will not rest until your app has made the leap.
There are a variety of approaches touted for app migration, from decomposition into micro-services to blatant lift-and-shift, so how can you tell which migration pattern is most likely to succeed and meet business objectives? Much like approaching a renovation of an old house, how can you tell which apps are the ‘scrapers’ where refactoring might as well mean rewriting, and which ones ‘have good bones’ and might successfully make the transition without much more than basic updates? Cloud purists will promote a refactoring pattern where an app is decomposed into a collection of cloud-native micro-services. Others will promise that you can forklift the app into a cloud with almost no change. But do you understand the benefits and pitfalls of the various approaches? Is there a middle path?
Many questions arise, such as: Should the app be migrated to a public or private cloud? Would an IaaS or PaaS be a better fit? Can it be outsourced to a SaaS, essentially replacing the app with a cloud native offering and avoiding migration of the app itself? What are the security implications of each app migration pattern combined with the target cloud environment? Does my legacy app have inherent design assumptions that conflict with the design assumptions of the target cloud environment? Are there the necessary supporting organizational capabilities (DevOps, Agile, DevSecOps, Test Driven Design, etc.), and technologies (continuous integration/continuous deployment, configuration management automation, etc.) to support cloud migration success?
This presentation will explore these topics and more to provide a roadmap to making both good security decisions and good decisions overall in planning your app’s migration to the cloud.
Machine learning (ML) has proven particularly useful in malware detection. However, as malware evolves very fast, the stability of the features extracted from malware is a critical issue in malware detection. The recent success of deep learning in image recognition, natural language processing, and machine translation suggests a potential solution for stabilizing malware detection effectiveness. We present a color-inspired convolutional neural network-based Android malware detector, R2-D2, which can detect malware without extracting pre-selected features (e.g., the control flow of op-codes, classes, methods of functions and the timing at which they are invoked) from Android apps. In particular, we develop a color representation that translates Android apps into RGB color codes and transforms them into a fixed-size encoded image. The encoded image is then fed to a convolutional neural network for automatic feature extraction and learning, reducing expert intervention. We have run our system over 800k malware samples and 800k benign samples through our back-end (60 million monthly active users and 10k new malware samples per day), showing that R2-D2 can effectively detect malware. We will keep our research results updated at http://R2D2.TWMAN.ORG.
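The encoding step described above - packing an app's raw bytes into a fixed-size grid of RGB triples - can be sketched as follows. The 32x32 size and the truncate/zero-pad policy are our own assumptions for illustration; the paper's exact transform may differ:

```python
def bytes_to_rgb_image(data: bytes, side: int = 32):
    """Map raw bytes to a side x side image of (R, G, B) tuples.
    Input is truncated or zero-padded to exactly side*side*3 bytes."""
    needed = side * side * 3                       # 3 channels per pixel
    buf = data[:needed].ljust(needed, b"\x00")     # truncate or zero-pad
    # consecutive byte triples become one pixel each
    pixels = [tuple(buf[i:i + 3]) for i in range(0, needed, 3)]
    # reshape the flat pixel list into `side` rows of `side` pixels
    return [pixels[r * side:(r + 1) * side] for r in range(side)]
```

A fixed output shape is what lets apps of wildly different sizes share one CNN input layer.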
A Secure Product Lifecycle (SPLC) is integral in ensuring software is written with security in mind, but companies struggle to create a successful process with limited security resources and minimal impact to engineering teams. This session will discuss lessons learned, soup-to-nuts, through the process of designing, rolling out, and measuring a scalable SPLC.
In Adobe’s Digital Marketing business unit, two security analysts created a successful program that has scaled to support thousands of engineers. Defining security requirements and KPIs for engineering teams is just the first step in creating the SPLC. To make the design a reality across several products, thousands of engineers, and millions of lines of code, we organized our team into an ‘as a service’ model and used automation to scale to meet this demand. Establishing a strong security ambassador program helped ensure the success of the SPLC. The centralized ambassador network has been crucial to the success of all product security initiatives throughout the business unit. We will give examples of how ambassadors have assisted with incident response, driven training and security culture initiatives, and championed security-related projects on their individual teams.
We will explore a case study of one of our most successful SPLC-driven programs - static code analysis. By fully automating the process from code check-in to delivery of results, we achieved 100% buy-in from all engineering teams in the Digital Marketing business unit. The process was designed to have minimal impact on the engineering teams and to be integrated into their existing workflows, allowing for a very low-overhead program that adds value. The engineers code and commit as they normally would; on the backend, our static code analysis engine scans the code and injects any findings into their existing bug-tracking system.
You will walk away from this talk with on-the-ground knowledge to establish an effective SPLC by establishing and utilizing security ambassadors and providing seamless automation to support these key initiatives.
In an age of ever more sophisticated cybercrime and mass surveillance, secure communication is an increasingly rare premium commodity. In this talk we take a look at how the threat model for secure messaging applications has evolved beyond the traditional man-in-the-middle attacker.
We will cover the new goals and capabilities of attackers targeting modern communications networks, and examine several classes of attacks, many of which are already being used effectively in the wild. For each, we will cover the capabilities needed to launch the attack and the effects successful execution can entail.
Some of the real world attacks/vulnerabilities we will touch upon are:
- The account enumeration and hijacking of Telegram accounts in Iran.
- Detecting the language in encrypted text messages.
- Recovering the content of encrypted VoIP conversations.
- HipChat server compromise leading to leak of meta-data and chat logs.
- Invisible rekeying on WhatsApp.
- Widespread lack of even basic privacy in the face of future quantum attacks.
- Browser based attacks on WhatsApp and Telegram.
- iMessage protocol attack
For more than a decade, independent arms of the federal government have published application and hardware security standards that only a small subset of the InfoSec community has a true grasp on. The Federal Information Processing Standard (FIPS) 140-2 contains 11 comprehensive security requirement areas, and the National Information Assurance Partnership (NIAP) has created Common Criteria Protection Profiles for Network Devices and Applications that address many of the security threats and design issues that are still persistent today. These standards take a detailed, secure-by-design approach to security that could be hugely beneficial to engineers and system architects beginning to design new systems. Yet, because of the dense and academic style of these standards, many are only vaguely aware of them, seeing them only as a headache forced onto them by sales managers as development is wrapping up.
For three years I worked to formally validate products against these standards, and recently I’ve made the switch to application security assessment where I see many product teams entirely unaware of these practices and standards. This talk aims to cherry-pick the crucial security requirements and principles in these standards and present them in an easily understandable format for development teams, product architects, and security engineers. My goal is to improve your security throughout development and reduce risk for both your customers and company.
I will start by briefly discussing the standards themselves and the context in which they were created and still apply.
Next, I will dive into detail on 5 major security principles that are seen throughout these standards. As I discuss these I will include examples and my observations on how they are currently implemented in the industry.
1. Define the security boundary
2. Create a functional specification
3. Prove that the boundary and services protect Critical Security Parameters
4. Protect all network traffic using SSH, TLS, or IPsec
5. Prove the strength of your entire cryptographic stack

The consequences of not complying with the requirements of the General Data Protection Regulation (GDPR) are immense for all international data processors. The fines and penalties even for small companies can be as high as 20 million EUR, and GDPR requires data protection by design and by default. Most IT companies do not have in-house expertise to identify the required features for full compliance. This work provides a valuable vendor- and technology-agnostic toolkit for building GDPR-compliant software with minimum cost and effort. The toolkit is based on a tag-based approach for identifying required features and tasks. After reviewing various privacy regulations, including GDPR, and coding their content, we arrived at a set of tags that fully captures the principles and notions of privacy requirements relevant to software development, deployment and operation. The tags are organized into 14 classes and include sub-tags and variants. Any list of privacy and security controls can be evaluated using these tags to ascertain whether it adequately enables the desired level of privacy. As a case study, we will develop the first publicly available agile Scrum template, using the proposed tagging system, for the development of an IoT system that transmits private information across international borders. The tagging system and the approach can easily be customized for any other agile methodology and framework. The talk will expand on recent stories and case studies of how missing the tags can create non-compliance and, as a result, huge liability.
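The evaluation step - checking a list of controls against the tag set - can be sketched mechanically. The tag names and control annotations below are invented examples for illustration, not the talk's actual 14-class taxonomy:

```python
# Hypothetical required tags; the real toolkit defines 14 classes
# with sub-tags and variants.
REQUIRED_TAGS = {"consent", "data-minimization", "right-to-erasure",
                 "breach-notification", "cross-border-transfer"}

def coverage_gaps(controls: dict) -> set:
    """controls maps a control name to the set of tags it satisfies.
    Returns the required tags that no control covers - each gap is a
    feature or task that must be added for compliance."""
    covered = set().union(*controls.values()) if controls else set()
    return REQUIRED_TAGS - covered
```

Running this over a backlog of security controls turns "are we GDPR-ready?" into a concrete, auditable list of missing work items.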