John L. Whiteman

Security Researcher at Intel Corporation

John L. Whiteman is a security researcher for Intel and a part-time adjunct cybersecurity instructor at the University of Portland. He also teaches the UC Berkeley Extension’s Cybersecurity Boot Camp. John holds a Master of Science in Computer Science from the Georgia Institute of Technology and multiple security certifications, including CISSP and CCSP. He has over 20 years of experience in high tech, more than half of it focused on security. You can also hear John host the OWASP PDX Security Podcast online. John grows wasabi during his “off” hours.

Presentation Abstract

Living Threat Models Are Better Than Dead Threat Models

The cornerstone of security for every application is its threat model. Without one, how do you know what to protect and from whom? Remarkably, most applications do not have threat models; just look at the open-source community. And even when a threat model is created, it tends to be neglected as the project matures, because any new code checked in by the development team can change the threat landscape. Once such a gap exists, the existing threat model is as good as dead.

Our talk is about creating a Living Threat Model (LTM), where the same best practices used in the continuous integration of source code apply to the model itself. LTMs are machine-readable text files that coexist in the Git repository and, like source code, can be updated, scanned, peer reviewed, and approved by the community in a transparent way. Wouldn’t it be nice to see a threat model included in every open-source project?
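To make the idea concrete, here is a minimal sketch of what a machine-readable threat model committed next to the code might look like. The field names (assets, trust boundaries, threats, mitigations) and the JSON file name are illustrative assumptions, not the actual LTM schema presented in the talk.

    # Hypothetical sketch of a Living Threat Model as a machine-readable file.
    # Field names and the output file name are assumptions for illustration only.
    import json
    from dataclasses import dataclass, field, asdict
    from typing import List

    @dataclass
    class Threat:
        id: str
        description: str
        mitigations: List[str] = field(default_factory=list)

    @dataclass
    class LivingThreatModel:
        assets: List[str]
        trust_boundaries: List[str]
        threats: List[Threat]

    ltm = LivingThreatModel(
        assets=["user credentials", "TLS private keys"],
        trust_boundaries=["browser <-> API gateway"],
        threats=[Threat(id="T-001",
                        description="Credential theft over the wire",
                        mitigations=["TLS 1.3", "HSTS"])],
    )

    # Committed alongside the source code (e.g. threat_model.json) so it can be
    # diffed, scanned, peer reviewed, and approved like any other file.
    with open("threat_model.json", "w") as f:
        json.dump(asdict(ltm), f, indent=2)

Because the model lives in the repository as plain text, a change to it shows up in a pull request diff just like a code change.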

Automation is also needed to make this work in the CI/CD pipeline. We use the open-source Data Flow Facilitator for Machine Learning (DFFML) framework to establish a bidirectional data bridge between the LTM and the source code. When a new pull request is created, an audit-like scan checks whether the LTM needs to be updated. For example, if the scan detects that new cryptography has been added to the code but the existing LTM doesn’t account for it, a warning is triggered. Project teams can then triage the issue to determine whether it is a false positive, just like a source code scan.
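The sketch below illustrates the kind of audit check described above: warn when a pull request adds cryptography that the threat model does not mention. It is a simplified illustration of the concept, not the DFFML implementation; the regex heuristic, file names, and branch name are assumptions.

    # Illustrative CI check: flag new crypto-related imports that the LTM
    # does not cover. Not the DFFML implementation; heuristics are assumptions.
    import json
    import re
    import subprocess

    CRYPTO_PATTERN = re.compile(r"\b(import\s+(cryptography|hashlib|ssl)|from\s+Crypto)\b")

    def new_crypto_in_diff(base: str = "origin/main") -> bool:
        """Return True if the pending changes add crypto-related imports."""
        diff = subprocess.run(["git", "diff", base, "--", "*.py"],
                              capture_output=True, text=True, check=True).stdout
        added = [line[1:] for line in diff.splitlines() if line.startswith("+")]
        return any(CRYPTO_PATTERN.search(line) for line in added)

    def ltm_mentions_crypto(path: str = "threat_model.json") -> bool:
        """Naive check: does the threat model mention cryptography at all?"""
        with open(path) as f:
            text = json.dumps(json.load(f)).lower()
        return "crypt" in text or "tls" in text

    if new_crypto_in_diff() and not ltm_mentions_crypto():
        print("WARNING: new cryptography detected but the LTM does not cover it; "
              "triage as a real gap or a false positive.")

In practice such a check would run as a pull request gate, so the warning lands in the same review workflow the team already uses for code.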

We have been working on this effort for a few years and feel we are on the right track to make open-source applications more secure in a way that developers can understand.