
Open Source Threat Modeling


A guest blog post by Mike Goodwin.

What is threat modeling?

Application threat modeling is a structured approach to identifying ways that an adversary might try to attack an application and then designing mitigations to prevent, detect or reduce the impact of those attacks. The description of an application’s threat model is identified as one of the criteria for the Linux CII Best Practices Silver badge.

Why threat modeling?

It is well established that defense-in-depth is a key principle for network security and the same is true for application security. But although most application developers will intuitively understand this as a concept, it can be hard to put it into practice. After many years and sleepless nights, worrying and fretting about application security, one thing I have learned is that threat modeling is an exceptionally powerful technique for building defense-in-depth into an application design. This is what first attracted me to threat modeling. It is also great for identifying security flaws at design time where they are cheap and easy to correct. These kinds of flaws are often subtle and hard to detect by traditional testing approaches, especially if they are buried in the innards of your application.

Three stages of threat modeling

There are several ways of doing threat modeling ranging from formal methodologies with nice acronyms (e.g. PASTA) through card games (e.g. OWASP Cornucopia) to informal whiteboard sessions. Generally though, the technique has three core stages:

Decompose your application – This is almost always done using some kind of diagram. I have seen successful threat modeling done using many types of diagram, from UML sequence diagrams to informal architecture sketches. Whatever format you choose, it is important that the diagram shows how the different internal components of your application and external users/systems interact to deliver its functionality. My preferred type of diagram is a Data Flow Diagram with trust boundaries.

Identify threats – In this stage, the threat modeling team ask questions about the component parts of the application and (very importantly) the interactions or data flows between them to guess how someone might try to attack it. The answers to these questions are the threats. Typical questions and resulting threats are:

Question: What assumptions is this process making about incoming data? What if they are wrong?
Threat: An attacker could send a request pretending to be another person and access that person’s data.

Question: What could an attacker do to this message queue?
Threat: An attacker could place a poison message on the queue, causing the receiving process to crash.

Question: Where might an attacker tamper with the data in the application?
Threat: An attacker could modify an account number in the database to divert payment to their own account.

Design mitigations – Once some threats have been identified the team designs ways to block, avoid or minimise the threats. Some threats may have more than one mitigation. Some mitigations might be preventative and some might be detective. The team could choose to accept some low-risk threats without mitigations. Of course, some mitigations imply design changes, so the threat model diagram might have to be revisited.

Threat: An attacker could send a request pretending to be another person and access that person’s data.
Mitigation: Identify the requestor using a session cookie and apply authorization logic.

Threat: An attacker could place a poison message on the queue, causing the receiving process to crash.
Mitigations: Digitally sign messages on the queue and validate their signatures before processing. Maintain a retry count on each message and discard it after three retries.

Threat: An attacker could modify an account number in the database to divert payment to their own account.
Mitigations: Preventative – restrict access to the database using a firewall. Detective – log all changes to bank account numbers and audit the changes.

OWASP Threat Dragon

Threat modeling can be usefully done with a pen, whiteboard and one or more security-aware people who understand how their application is built, and this is MUCH better than not threat modeling at all. However, to do it effectively with multiple people and multiple project iterations you need a tool. Commercial tools are available, and Microsoft provides a free tool for Windows only, but established, free, open-source, cross-platform tools are non-existent. OWASP Threat Dragon aims to fill this gap. The aims of the project are:

  • Great UX – Using Threat Dragon should be simple, engaging and fun
  • A powerful threat/mitigation rule engine – This will lower the barrier to entry for teams and encourage non-specialists to contribute
  • Integration with other development lifecycle tools – This will ensure that models slot easily into the developer workflows and remain relevant as the project evolves
  • To always be free, open-source (like all OWASP projects) and cross-platform. The full source code is available on GitHub

The tool comes in two variants:

  • an online web application
  • a desktop application

End-user documentation is available for both variants and, most importantly, it has a cute logo called Cupcakes…

Threat Dragon is an OWASP Incubator Project – so it is still early stage but it can already support effective threat modeling. The near-term roadmap for the tool is to:

  • Achieve a Linux CII Best Practices badge for the project
  • Implement the threat/mitigation rule engine
  • Continue to evolve the usability of the tool based on real-world feedback from users
  • Establish a sustainable hosting model for the web application

If you want to harden your application designs you should definitely give threat modeling a try. If you want a tool to help you, try OWASP Threat Dragon! All feedback, comments, issue reports and pull requests are very welcome.

About the Author
Mike Goodwin is a full-time security professional at the Sage Group where he leads the team responsible for product security. Most of his spare time is spent working on Threat Dragon or co-leading his local OWASP chapter.

Securing Network Time


Since its inception the CII has considered network time, and implementations of the Network Time Protocol, to be “core infrastructure.” Correctly synchronising clocks is critical both to the smooth functioning of many services and to the effectiveness of numerous security protocols; as a result most computers run some sort of clock synchronization software and most of those computers implement either the Network Time Protocol (NTP, RFC 5905) or the closely related but slimmed down Simple Network Time Protocol (SNTP, RFC 4330).


There are several different implementations of NTP and SNTP, including both open source and proprietary versions. For many years the canonical open source implementation has been ntpd, which was started by David Mills and is now developed by Harlan Stenn at the Network Time Foundation. Parts of the ntpd code date back at least 25 years, and the developers pride themselves on having the most complete implementation of the protocol and a wide set of supported platforms. Over the years forks of the ntpd code have been made, including the NTPSec project, which seeks to remove much of the complexity of the ntpd code base at the expense of some of the more esoteric NTP features and breadth of platform support. Others have reimplemented NTP from scratch, and one of the more complete open source alternatives is Chrony, originally written by Richard Curnow and currently maintained by Miroslav Lichvar.

The CII recently sponsored a security audit of the Chrony code, carried out by the security firm Cure53 (here is the report). In recent years, the CII has also provided financial support to both the ntpd project and the NTPSec project. Cure53 carried out security audits of both ntpd and NTPSec earlier this year, and Mozilla Foundation’s Secure Open Source (SOS) project funded those two audits. SOS also assisted the CII with the execution of the Chrony audit.

Since the CII has offered support to all three projects and since all three were reviewed by the same firm, close together in time, we thought it would be useful to present a direct comparison of their results.

ntpd

Full report PDF

The ntpd code base is the largest and most complex of the three and it carries a lot of legacy code. As a result, unsurprisingly, it fared the worst of the three in security testing with the report listing 1 Critical, 2 High, 1 Medium and 8 Low severity issues along with 2 Informational comments. It should be noted that these issues were largely addressed in the 4.2.8p10 release back in March 2017. That said, the commentary in the report is informative, with the testers writing:

“The general outcome of this project is rooted in the fact that the code has been left to grow organically and had aged somewhat unattended over the years. The overall structure has thus become very intricate, while also yielding a conviction that different styles and approaches were used and subsequently altered. The seemingly uncontrolled inclusion of variant code via header files and complete external projects engenders a particular problem. Most likely, it makes the continuous development much more difficult than necessary.”

As a result, it seems quite likely that there are more lurking issues and that it will be difficult for the authors to avoid introducing new security issues in the future without some substantial refactoring of the code.

As mentioned above, ntpd is the most complete implementation of NTP and as a result is the most complex. Complexity is the enemy of security and that shows up in this report.

NTPSec

Full report PDF

As mentioned previously, the NTPSec project started as a fork of ntpd with the specific aim of cleaning up a lot of the complexity in ntpd, even if that meant throwing out some of the less-used features. The NTPSec project is still in its early days; the team has not yet made a version 1.0 release, but it has already thrown out nearly 75% of the ntpd code and refactored many other parts. Still, the security audit earlier this year yielded 3 High, 1 Medium and 3 Low severity issues, as well as raising 1 Informational matter. The testers’ comments were again telling:

“On the one hand, much cruft has been removed successfully, yet, on the other hand, the code shared between the two software projects bears tremendous similarities. The NTPsec project is still relatively young and a major release has not yet occurred, so the expectations are high for much more being done beforehand in terms of improvements. It must be mentioned, however, that the regression bug described in NTP-01-015 is particularly worrisome and raises concerns about the quality of the actions undertaken.

In sum, one can clearly discern the direction of the project and the pinpoint the maintainers’ focus on simplifying and streamlining the code base. While the state of security is evidently not optimal, there is a definite room for growth, code stability and overall security improvement as long as more time and efforts are invested into the matter prior to the official release of NTPsec.”

The NTPSec project has made some significant technical progress, but there is more work to do before the developers get to an official release. Even then, the history of the code may well haunt them for some time to come.

Chrony

Full report PDF

Unlike NTPSec, Chrony is not derived from the ntpd code but was implemented from scratch. It implements both client and server modes of the full NTPv4 protocol (as opposed to the simplified SNTP protocol), including operating as a Stratum 1 reference server, and was specifically designed to handle difficult conditions such as intermittent network connections, heavily congested networks and systems that do not run continuously (like laptops) or which run on a virtual machine. The development is currently supported by Red Hat Software and it is now the default NTP implementation on their distributions.

In the 20+ years that I’ve worked in the security industry I’ve read many security audits. The audit that the CII sponsored for Chrony was the first time that I’d used Cure53, and I had not seen any previous reports from them, so when I received the report on Chrony I was very surprised. So surprised that I stopped to email people who had worked with Cure53 to question their competence. When they assured me that the team was highly skilled and capable, I was astounded. Chrony withstood three skilled security testers for 11 days of solid testing and the result was just 2 Low severity issues (both of which have since been fixed). The test report stated:

“The overwhelmingly positive result of this security assignment performed by three Cure53 testers can be clearly inferred from a marginal number and low-risk nature of the findings amassed in this report. Withstanding eleven full days of on-remote testing in August of 2017 means that Chrony is robust, strong, and developed with security in mind. The software boasts sound design and is secure across all tested areas. It is quite safe to assume that untested software in the Chrony family is of a similarly exceptional quality. In general, the software proved to be well-structured and marked by the right abstractions at the appropriate locations. While the functional scope of the software is quite wide, the actual implementation is surprisingly elegant and of a minimal and just necessary complexity. In sum, the Chrony NTP software stands solid and can be seen as trustworthy.”

The head of Cure53, Dr. Mario Heiderich, indicated that it was very rare for the firm to produce a report with so few issues and that he was surprised that the software was so strong.

Of course just because the software is strong does not mean that it is invulnerable to attack, let alone free from bugs. What it does mean however is that Chrony is well designed, well implemented, well tested and benefits from the hindsight of decades of NTP implementation by others without bearing the burden of legacy code.

Conclusions

From a security standpoint (and here at the CII we are security people), Chrony was the clear winner between these three NTP implementations. Chrony does not have all of the bells and whistles that ntpd does, and it doesn’t implement every single option listed in the NTP specification, but for the vast majority of users this will not matter. If all you need is an NTP client or server (with or without reference clock), which is all that most people need, then its security benefits most likely outweigh any missing features.


Acknowledgements

The security audit on Chrony was funded by the CII but the Mozilla SOS project handled many of the logistics of getting the audit done and we are very grateful to Gervase Markham for his assistance. Mozilla SOS funded the audits of ntpd and NTPSec. All three audits were performed by Cure53.

1,000 Projects Registered for the CII Best Practice Badge, 100 Badges Granted and Prizes!!!


In May of last year the CII launched its Best Practice Badge program. Our goal was to raise awareness of development processes and project governance steps that will help projects have better security outcomes. By giving project maintainers a list of actionable items that we know improve security, teaching them why these steps lead to improvement and showing them how to implement them, we can raise security standards and help projects get better at delivering secure products. By offering a visual “badge” we can make it easier for consumers of open source projects to see which projects take security seriously. More recently, in June of this year, we added new Silver and Gold levels to the badges, to allow projects that make further efforts to drive security improvements to show off their commitment.

We recently issued our 100th badge to a passing project. A few days later, we had our 1,000th project sign up for the Best Practice Badge program. Our goal for the Best Practice Badge is for it to be a recognisable mark of commitment to security by projects, and for any mark to gain recognition, it needs to be used and on display. In light of that, we are delighted that the Best Practice Badge recently passed these two major adoption milestones.

Some people have questioned why the pass rate is only 10 percent. The fraction of projects getting a badge has been fairly stable for a while, even as the number of registered projects continues to grow, as can be seen from the project statistics page. When we set up the program it was very much our intent that this should not be some “rubber stamp” process but that projects would need to work to get their badge. To date nearly every project has had to make some improvement in order to achieve a badge, which indicates that the program is actually moving the needle on Open Source Security.

Several projects have given us feedback on the badging process and there are several topics that came up over and over again. Common issues that often need to be fixed include:

  • not supporting a secure way to access the project web site (or not having a valid certificate for the site),

  • not performing automated testing,

  • not performing any sort of code analysis, and

  • not having a publicly documented process for reporting security vulnerabilities.

Other important changes projects have made as a result of going through the badge process include:

  • removing insecure cryptographic algorithms,

  • adding unique version numbers for each release,

  • documenting release notes and the contribution process, and

  • including coding style guidelines for contributions.

History shows that these sorts of steps can improve the security outcomes for projects so we are delighted that all of the passing projects are now taking these steps.

On to Silver and Gold

As well as the huge progress we have made with getting projects to a “passing grade,” the CII Best Practice Badge program recently launched its enhanced Silver and Gold badges. These higher-level badges add a number of extra criteria on top of the passing level and make mandatory some of the criteria that are only recommended at the lower levels. These higher levels give our passing projects some new stretch goals to which they can aspire.

Today we are delighted to announce that the higher-level badges now bring not only glory and fame but prizes as well! The maintainers of the first 50 projects to complete the Silver badge process will each receive a bag of Linux Foundation and CII branded swag (probably a hoodie, t-shirt and some other stuff; we’ve not quite pinned down the details yet). Furthermore, the maintainers of the first 5 projects to have a Gold badge validated will each be invited to attend the Linux Foundation-organised conference of their choice, along with an invitation to present at that conference on how their project runs its Secure Development Life Cycle process. Don’t worry if you’re too shy to get up on stage; presenting isn’t obligatory, but we really do want successful projects to share their experiences so that other projects can learn from them.

On to the 10,000 projects and 1,000 badges! Woohoo!

CII Best Practices Badge Program Announces Higher-level Certification and Expanded Language Support


In May last year the CII launched its Best Practices Badge program, a qualitative self-assessment approach, available online, that allows open source projects to grade themselves against a set of best practices for open source development.

Today we are pleased to announce the next stage of the Best Practice Badge program, which adds two major upgrades to the original program: higher-level certification and internationalisation.

Since formally launching 13 months ago, more than 850 projects have signed up for the process, which requires project maintainers to answer an extensive questionnaire about their development process and explain how they meet the 60+ criteria. Although this is a self-assessment, it is not a low bar: so far only about 10 percent of projects have passed, while many others are making changes to allow them to meet the requirements. Projects that have received their badges so far include GitLab, Hyperledger Fabric, Linux, NTPSec, Node.js, OPNFV, OpenBlox, OpenSSL, OpenStack, and Zephyr.

The chart below shows the number of projects working toward earning a badge and indicates meaningful progress across the board. More statistics on CII Best Practices Badge growth and pass rates can be found here.


It has always been our intention to use the program to push projects to raise their own standards and, to that end, today we are launching two new badges for projects that meet these higher standards. In addition to the original “Passing” badge, we are adding enhanced “Silver” and “Gold” badges. The new Silver and Gold criteria build on the existing criteria for the “Passing” level.

The new levels raise the bar in a number of areas and are meant to identify projects that are not only highly committed to improving the quality and security of their code, but are also mindful and proactive about other success factors. For developers, the badges signal which projects are well organised and easy to participate in, especially for newcomers. For consumers, the new requirements, such as quick-start guides, will ease the on-ramp, while criteria that call for even more rigorous development practices will build confidence among businesses leveraging open source. In fact, meeting the new criteria, especially at the Gold level, will likely not be achievable by many small and single-organization projects.

To earn a Silver badge, for example, projects are now required to adopt a code of conduct, clearly define their governance model, fix known cryptographic weaknesses and, where possible, use at least one static analysis tool to look for common vulnerabilities in the analyzed language or environment.

The other change that we are excited to announce is internationalisation. To broaden the program’s reach and make it easier for projects around the world to participate in the Best Practice Badge program, we have updated the badge application to support multiple languages. We are launching the site with full Chinese and French language support today, with German, Russian and Japanese in progress. We would especially like to thank CII member company Huawei for its generous support of the translation into Chinese, and Yannick Moy for his hard work translating the site into French.

As with the original work, David Wheeler, project leader at the Institute for Defense Analyses, did the hard work to expand the program. We continue to welcome community feedback, especially on the translation work. To get involved, please join the cii-badges mailing list and track us on GitHub at coreinfrastructure/best-practices-badge. Of course, we also encourage projects to begin the CII Best Practices Badge application process.

For those attending LinuxCon | ContainerCon | CloudOpen China, CII Program Director Marcus Streets is presenting “The Core Infrastructure Initiative: Its First Three Years and Onwards to the Future” on June 20th. He will also share more on these new developments and explain how you can apply for a badge for your free software project.

The CII Advances Kernel Security


The Core Infrastructure Initiative exists to support work improving the security of critical open source components. In a Linux system, a flaw in the kernel can open up the opportunity for security problems in any or all of the other components, so the kernel is in some sense the most critical component we have. Unsurprisingly, we have always been keen to support work that makes it more secure, and we plan to do even more going forward.

There has been some public discussion in the last week regarding the decision by Open Source Security Inc. and the creators of the Grsecurity® patches for the Linux kernel to cease making these patches freely available to users who are not paid subscribers to their service. While we would have preferred them to keep these patches freely available, the decision is absolutely theirs to make. From the point of view of the CII, we would much rather have security capabilities such as those offered by Grsecurity® in the main upstream kernel rather than available as a patch that needs to be applied by the user. That said, we fully understand that there is a lot of work involved in upstreaming extensive patches such as these and we will not criticise the Grsecurity® team for not doing so. Instead we will continue to support work to make the kernel as secure as possible.

Over the past few years the CII has been funding the Kernel Self Protection Project, the aim of which is to ensure that the kernel fails safely rather than just running safely. Many of the threads of this project were ported from the GPL-licensed code created by the PaX and Grsecurity® teams while others were inspired by some of their design work. This is exactly the way that open source development can both nurture and spread innovation. Below is a list of some of the kernel security projects that the CII has supported.

One of the larger kernel security projects that the CII has supported was the work performed by Emese Renfy on the plugin infrastructure for gcc. This architecture enables security improvements to be delivered in a modular way and Emese also worked on the constify, latent_entropy, structleak and initify plugins.

  • Constify automatically applies const to structures which consist of function pointer members.

  • The Latent Entropy plugin mitigates the problem of the kernel having too little entropy during and after boot for generating crypto keys. This plugin mixes random values into the latent_entropy global variable in functions marked by the __latent_entropy attribute. The value of this global variable is added to the kernel entropy pool to increase the entropy.

  • The Structleak plugin zero-initializes any structures that contain a __user attribute. This can prevent some classes of information exposure. For example, the exposure of siginfo in CVE-2013-2141 would have been blocked by this plugin.

  • Initify extends the kernel mechanism to free up code and data memory that is only used during kernel or module initialization. This plugin will teach the compiler to find more such code and data that can be freed after initialization, thereby reducing memory usage. It also moves string constants used in initialization into their own sections so they can also be freed.

Another, current project that the CII is supporting is the work by David Windsor on HARDENED_ATOMIC and HARDENED_USERCOPY.

HARDENED_ATOMIC is a kernel self-protection mechanism that greatly helps with the prevention of use-after-free bugs. It is based on work done by Kees Cook and the PaX Team. David has been adding new data types for reference counts and statistics so that these do not need to use the main atomic_t type.

The overall hardened usercopy feature is extensive and has many sub-components. The main part David is working on is called slab cache whitelisting. Basically, hardened usercopy adds checks to the Linux kernel to make sure that buffer overflows do not occur when data is copied to or from userspace. It does this by verifying the size of the source and destination buffers, the location of these buffers in memory, and other properties.

One of the ways it does this is to deny, by default, copying from kernel slabs unless they are explicitly marked as safe to copy. Slabs are areas of memory that hold frequently used kernel objects. Because these objects are allocated and freed many times, the kernel takes one from a slab rather than calling the allocator each time it needs a new object, and returns it to the appropriate slab rather than freeing it. Hardened usercopy, by default, denies copying objects obtained from slabs. The work David is doing adds the ability to mark slabs as “copyable.” This is called “whitelisting” a slab.

We also have two new projects starting, where we are working with a senior member of the kernel security team mentoring a younger developer. The first of these projects is under Julia Lawall, who is based at the Université Pierre-et-Marie-Curie in Paris and who is mentoring Bhumika Goyal, an Indian student who will travel to Paris for the three months of the project. Bhumika will be working on ‘constification’ – systematically ensuring that those values that should not change are defined as constants.

The second project is under Peter Senna Tschudin, who is based in Switzerland and is mentoring Gustavo Silva, from Mexico, who will be working on the issues found by running the Coverity static analysis tool over the kernel. Running a tool like Coverity over a very large body of code like the Linux kernel produces a very large number of results. Many of these results may be false positives, and many of the others will be very similar to each other. Peter and Gustavo intend to use the Semantic Patch Language (SmPL) to write patches that fix whole classes of issues detected by Coverity, in order to work through the long list more rapidly. The goal is to get the kernel source to a state where the static analysis scan yields very few warnings; new code that introduces a warning will then stand out prominently, making the results of future analysis much more valuable.

The Kernel Self Protection Project keeps a list of projects that they believe would be beneficial to the security of the kernel. The team has been working through this list and if you are interested in helping to make the Linux kernel more secure then we encourage you to get involved. Sign up to the mailing lists, get involved in the discussions and if you are up for it then write some code. If you have specific security projects that you want to work on and you need some support in order to be able to do so then do get in touch with the CII. Supporting this sort of work is our job and we are standing by for your call!

Core Infrastructure Initiative Celebrates 3 Year Anniversary


This month marks the three year anniversary of the formation of the Core Infrastructure Initiative. It’s also the third anniversary of the Heartbleed vulnerability that served as a wake up call for the industry and which was a catalyst for the creation of the CII. For those not immersed in the security or technology industries, that bug revealed just how widespread and critical open source software is to the Internet’s infrastructure. The simple yet damaging security vulnerability uncovered in the hugely popular OpenSSL software had an enormous impact, in some cases allowing attackers to steal passwords, private keys, credit card numbers, financial information and more. At the time, it was estimated that almost one in five secure web servers were vulnerable to attack.

That episode also exposed limitations to Linus’s Law “many eyeballs make bugs shallow.” While in theory the openness of open source allows for huge numbers of people to get involved in checking the source, when software lacks an investment commensurate with its importance, we’re all at greater risk.

To help correct this, the Linux Foundation mobilized to form the Core Infrastructure Initiative. Twenty industry giants, including many of the world’s largest software companies, joined us in our initial mandate to secure the projects that are most critical to businesses on the Internet. To achieve this we set out to identify projects at risk, understand their needs and provide them with the resources necessary both to make them more secure in the short term and to keep them more secure in the long term. As reported at the time in The Economist, “OpenSSL, with its single main developer scraping by without a fair salary, was highlighted as a project that needed most attention.”

Less Fire-Fighting, More Strategizing

So three years in, are open source software vulnerabilities still as big a problem? Has the awareness raised by Heartbleed had a positive impact on online security and open source management? What have we been able to do to make things better?

Firstly, of course software vulnerabilities are still a problem, in open source and in closed source, and as long as software is specified and written by humans, that is unlikely to change anytime soon. That said, we have made tremendous progress in the last few years.

Heartbleed uncovered a major gap in how we protect and secure the technology we use every day. It showed us there is a major need to build a pre-emptive and collective system, absent of any one company’s individual priorities, to safeguard the Internet today and into the future. Quantitative and qualitative analysis of the security of software, both closed and open, helps safeguard corporations and individuals.

I’m proud to say CII has made real progress and achieved many of our initial objectives, including our goal to make OpenSSL significantly more secure. Funding from CII has facilitated the fixing of many of its bugs and, importantly, reduced the chance of introducing new ones. The OpenSSL team has indicated that it has moved out of “firefighting mode” and is now able to pursue more strategic approaches to securing the project. Static and dynamic analysis are performed regularly, including dynamic fuzz testing with tools like AFL. In a few weeks’ time, we will be releasing results from a CII-funded external audit of the OpenSSL code base. The OpenSSL project now has a well-defined and published approach for how it informs all interested parties of security advisories. The project is more secure than it was three years ago, both in terms of the code and the process, and we are delighted to have been instrumental in helping this happen.

Beyond OpenSSL, the CII has provided direct funding for architectural, development and testing work for dozens of other projects, relieving some of the financial pressure felt by many of their developers and allowing them to reduce technical debt and make structural improvements that will pay dividends into the future. Details of many of the successes from 2016 can be found in our most recent annual report.

Early on we recognised that we needed to apply quantitative and qualitative measures to find where risks lay. Our first CII Census project used a variety of metrics around bug density, developer community engagement, the number of security vulnerabilities, and download and usage statistics to help identify open source components that might be sources of risk and target them for support. I am pleased to say that the CII members recently voted to extend funding for the Census project, allowing it to expand the number of packages under consideration, draw more detailed usage data from more sources and provide more continuous updates so that we can track projects as they improve (or, hopefully rarely, get worse).

Another success is the CII Best Practices Badge, which uses a qualitative self-assessment approach whereby open source project participants can grade themselves against a set of Best Practices for Open Source Development. Since formally launching in May 2016, more than 700 projects have signed up for the process and more than 70 projects have earned the badge.

We specifically reached out to both smaller projects, like cURL, and bigger projects, like the Linux kernel, to make sure that our criteria made sense for many different kinds of projects. The list of projects that proudly display the badge continues to grow — GitLab, Node.js, OpenBlox, OpenSSL, OpenStack, OPNFV, and Zephyr. The CII Badges program continues to evolve with work underway to introduce new badge levels to provide more sophisticated criteria.

Going forward we see the CII needing to do less fire-fighting and being able to apply more strategy. While we don’t expect to see an end to the need for supporting important maintenance work and to underwrite “orphaned projects,” many of our most successful initiatives have been the ones that have allowed us to help hundreds or thousands of open source projects, rather than supporting them one at a time. The Best Practices Badge has helped hundreds of projects review and improve their security process. The Fuzzing Project has also applied dynamic fuzz-testing tools to hundreds of projects, while the Reproducible Builds project has helped enhance the build systems of tens of thousands of projects. We are also supporting the ongoing development of open source security testing tools ranging from the OWASP ZAP project to the Frama-C static analysis tools.

Maintaining the code that we depend upon is still very important but we also need to build systems that allow us to help a much wider open source community. Thus, while our initial mandate was to target the projects that are most critical to businesses on the Internet, the CII is targeting the broadest range of projects possible within this remit — established and new, large and small, infrastructure and front-facing — in order to make the biggest impact possible. Below is a chart that shows our spending pattern over the last year. As time has gone by, the CII funding is moving up and to the right as we assign more funding to projects with continued high impact. We expect this trend to continue.

The first graphic titled “Annual Investment in Project” shows the current state of confirmed spending through 2017. The second one, titled “Total Investment in Project,” illustrates CII spending since our start three years ago.

Diagram Annual CII Spending

Diagram Total Project Investment

New Structure Expedites Funding Decisions and Grant Dispersal

As we enter our fourth year, the CII has also made some changes to its membership structure. We need to be able to expand our membership, and we need to be able to make decisions quickly. To that end, earlier this year we updated our charter and introduced new membership levels, creating a smaller, elected Steering Committee (SC) and a new Investment Committee (IC). With these changes, CII’s committees are more empowered to make swift decisions related to the organization’s operations and distribution of funds. Additionally, the CII charter now explicitly calls out the Steering Committee’s role to provide governance, oversight and audit of the CII and the role of a separate Investment Committee that will determine funding of specific projects.

Having two committees with more distinct areas of focus also means CII members are able to nominate someone from their legal and/or Open Source teams to work on governance issues, and appoint someone with domain expertise to vote on grants and funding decisions.

CII also introduced Platinum and Gold membership levels. The only difference between them is whether the member gets automatic representation on the CII Steering Committee (for Platinum members) or gets to vote to elect SC representatives (for Gold members).

Aside from now having an elected steering committee rather than direct representation, CII has also changed the way in which we vote on which projects we want to fund. Previously we needed to have a majority of members vote in favor of a project. Now we open voting for a period of three weeks and require a majority of votes cast in that window in order to accept the proposal. Voting can also close early once more than 50 percent of members have voted one way or the other. With these changes, all members are able to have their say, but never hold up the voting process. We believe that these streamlined procedures will allow us to get the resources where they are needed more quickly and also ensure that when great open source developers are available we can snatch them up quickly before they take a job elsewhere.

We’re proud of the progress we’ve made in the past three years. We took on a huge and open-ended challenge. By its very nature we will likely never be “done,” but it is clear that we have already made a significant impact. Going forward we will continue to build better open source security tools, drive better security processes and support the communities that are building the technology on which we all depend. We will also continue to support many of the teams that toil to maintain the old foundation stones of the Internet, some of which go back decades.

We look forward to the next three years!

Kees Cook Updates CollabSummit Attendees on the Kernel Self-Protection Project

By Blogs

Kees Cook enthralled CollabSummit attendees last week with his update on how the Linux Kernel Self-Protection project is coming along. There are now developers from several different organizations (Google, Linaro, Oracle, Red Hat, Intel, one self-funded, and one funded by CII) participating in the project. Kees explained why it is important not to stall out just fixing security bugs as they appear, and why we need to proactively develop technology to defeat entire classes of bugs before they can be exploited (with examples). Kees’ charts are available online.

Working With the White House

By Blogs

Head over to Linux.com to read Jim Zemlin’s latest blog posting The Linux Foundation’s Core Infrastructure Initiative Working with White House on Cybersecurity National Action Plan.

We are pleased The White House recognizes the work that CII has been doing to improve the security of open source software as it’s used on the Internet and by business and government. We look forward to working closely with the White House and the Department of Homeland Security as they implement CNAP and believe that private-public partnerships of this kind can have a major impact on improving security best practices.

This Anti-Pattern Must Die

By Blogs

One of the fun things about working in computer security is the emotional rollercoaster that vendors and journalists use to try to sustain attention on security topics, get dollars spent, and get bugs fixed. “If this patch isn’t applied immediately, then the earth will be hit by asteroids and we are all going to DIE!” “If you don’t buy this security product, then your network will be hacked and you will be FIRED!”

With that said, there is one security anti-pattern that really must die an immediate death. I promise not to name and shame, but if you are doing this, please stop immediately, especially if you are doing it with a security package.

The issue at hand is the damage caused when users follow instructions similar to the following:

“1. Install the apt-get repository key:

# apt-key adv --fetch-keys http://<removed to protect the guilty>/repos/apt/conf/<removed>.key

2. …”

A pattern closely related to the above:

“$ wget -qO - http://<url removed>/<removed>.key | sudo apt-key add -

2. …”

While I’m off crying softly in the corner — LOOK OUT FOR ASTEROIDS! You can read calm and well-reasoned arguments for why this (and its cousin where the key is acquired over http via wget) is such a bad practice on StackExchange and in the Debian manual. Hint: from this point forward, the database of keys which are used to validate packages prior to installation and update can no longer be trusted to contain only non-malicious keys.

Once you have recovered from the shock of awareness of what you have done to anyone who was foolhardy enough to follow your instructions, head over to Let’s Encrypt to get an SSL certificate. Also, publish the key’s fingerprint along with instructions for users on how to validate the key.
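The safer pattern boils down to: serve the key over HTTPS, publish its fingerprint (or checksum) out-of-band, and have users verify the download against that published value before handing anything to apt-key. Here is a minimal sketch of that verification step — all names are placeholders, and a local file stands in for the HTTPS download so the example is self-contained:

```shell
#!/bin/sh
# Sketch: verify a downloaded repository key against a published checksum
# before trusting it. In practice KEY_FILE would come from something like
#   wget -qO repo.key https://example.org/repos/apt/conf/repo.key
# (note: https, and no blind pipe into apt-key).
set -e

KEY_FILE=repo.key
printf 'placeholder key material\n' > "$KEY_FILE"   # stand-in for the HTTPS download

# In a real setup this value is hard-coded from the publisher's
# documentation; here we derive it so the example runs anywhere.
EXPECTED=$(sha256sum "$KEY_FILE" | cut -d' ' -f1)

ACTUAL=$(sha256sum "$KEY_FILE" | cut -d' ' -f1)
if [ "$ACTUAL" = "$EXPECTED" ]; then
    echo "checksum verified for $KEY_FILE"
    # Only now is it reasonable to run: sudo apt-key add "$KEY_FILE"
else
    echo "checksum mismatch: refusing to trust $KEY_FILE" >&2
    exit 1
fi
```

For OpenPGP keys specifically, comparing the output of `gpg --with-fingerprint` on the downloaded file against the fingerprint published on the project's site serves the same purpose.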

If you see this pattern online or if you are working on an open source project which uses this anti-pattern, please feel free to drop me a line at eratliff at linuxfoundation dot org or ejratl at gmail dot com.