John S. Andersen

Infrastructure and DevOps Architect at Intel Corporation

John Andersen is driven by a passion for exploration and understanding. He is currently analyzing software development methodologies and technologies to identify and propagate best practices across organizations, optimizing asset allocation in alignment with each organization's strategic principles. He is excited to explore the application of these techniques to both human and AI agents. His mission is to help foster an environment that maintains agent freedom, privacy, and security.

Presentation Abstract

Living Threat Models Are Better Than Dead Threat Models

The cornerstone of security for every application is its threat model. Without it, how does one know what to protect and from whom? Remarkably, most applications do not have threat models; just look at the open-source community. And even if a threat model is created, it tends to be neglected as the project matures, since any new code checked in by the development team can change the threat landscape. One could say that the existing threat model is as good as dead if such a gap exists.

Our talk is about creating a Living Threat Model (LTM), where the same best practices used in the continuous integration of source code apply to the model itself. LTMs are machine-readable text files that coexist in the Git repository and, like source code, can be updated, scanned, peer reviewed, and approved by the community in a transparent way. Wouldn’t it be nice to see a threat model included in every open-source project?
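As a purely illustrative sketch, and not the project's actual schema, a minimal LTM could be a small JSON document versioned alongside the code and loaded by tooling like any other artifact. The file name ltm.json, the field names, and the load_ltm helper below are assumptions for illustration only.

    import json

    # Hypothetical machine-readable threat model stored as ltm.json in the
    # repository root; the real format used by the project may differ.
    LTM_TEXT = """
    {
      "assets": ["user credentials", "session tokens"],
      "threats": [
        {
          "id": "THREAT-001",
          "description": "Credential theft via weak transport encryption",
          "mitigations": ["Enforce TLS 1.3 on all endpoints"]
        }
      ],
      "cryptography": ["TLS 1.3"]
    }
    """

    def load_ltm(text: str) -> dict:
        """Parse the LTM and verify the sections reviewers expect are present."""
        ltm = json.loads(text)
        for section in ("assets", "threats", "cryptography"):
            if section not in ltm:
                raise ValueError(f"LTM is missing the '{section}' section")
        return ltm

    if __name__ == "__main__":
        model = load_ltm(LTM_TEXT)
        print(f"Loaded {len(model['threats'])} threat(s) covering {len(model['assets'])} asset(s)")

Because the file lives next to the code, a change to it shows up in a pull request diff and can be reviewed and approved exactly like a code change.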

Automation is also needed to make this work in the CI/CD pipeline. We use the open-source Data Flow Facilitator for Machine Learning (DFFML) framework to establish a bidirectional data bridge between the LTM and the source code. When a new pull request is created, an audit-like scan is initiated to check whether the LTM needs to be updated. For example, if the scan detects that new cryptography has been added to the code but the existing LTM doesn’t know about it, a warning is triggered. Project teams can triage the issue to determine whether it is a false positive, just as they do with findings from source code scans.
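As a rough sketch of that kind of check, a pull-request job could compare cryptography imports found in the code against what the LTM already documents. The regular expression, file layout, and ltm.json name here are illustrative assumptions, and the real pipeline uses DFFML data flows rather than this standalone script.

    import json
    import re
    from pathlib import Path

    # Hypothetical marker imports suggesting cryptography is used in the code;
    # a real scanner would be far more thorough than this regular expression.
    CRYPTO_IMPORT_PATTERN = re.compile(
        r"^\s*(?:import|from)\s+(cryptography|hashlib|ssl)\b", re.MULTILINE
    )

    def crypto_used_in_code(repo_root: Path) -> set:
        """Collect crypto-related modules imported anywhere in the repository."""
        found = set()
        for source_file in repo_root.rglob("*.py"):
            found.update(CRYPTO_IMPORT_PATTERN.findall(source_file.read_text(errors="ignore")))
        return found

    def audit_ltm(repo_root: Path) -> list:
        """Return warnings for crypto usage the LTM does not yet document."""
        ltm = json.loads((repo_root / "ltm.json").read_text())
        documented = [entry.lower() for entry in ltm.get("cryptography", [])]
        warnings = []
        for module in sorted(crypto_used_in_code(repo_root)):
            if not any(module.lower() in entry for entry in documented):
                warnings.append(f"'{module}' appears in the code but not in the LTM")
        return warnings

    if __name__ == "__main__":
        # In CI this would run against the pull request's merge commit and
        # surface warnings for the project team to triage.
        for warning in audit_ltm(Path(".")):
            print("WARNING:", warning)

The warnings behave like any other static-analysis finding: they block nothing by themselves, but they give reviewers a signal that the threat model may have drifted from the code.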

We have been working on this effort for a few years and feel we are on the right track to make open-source applications more secure in a way that developers can understand.