

Node.js 10 is here!

In the News

Node.js wants to make it easier for you to keep things secret and keep them safe. This is the first release line to include OpenSSL 1.1.0. Recent work done by the OpenSSL team and the Core Infrastructure Initiative has made it possible for Node to take full advantage of everything OpenSSL has to offer, including the ChaCha20 cipher and the Poly1305 authenticator.

By moving to OpenSSL 1.1.0, Node.js has made it easier to upgrade to future OpenSSL versions. Node developers will be able to be secure while using the gold-standard of encrypted communications on the web.


The Linux Foundation’s Core Infrastructure Initiative Announces New Backers, First Projects to Receive Support and Advisory Board Members

Announcements

News Highlights

* Additional founding members Adobe, Bloomberg, HP, Huawei and salesforce.com join CII

* Network Time Protocol, OpenSSH and OpenSSL first projects to receive support; Open Crypto Audit Project to conduct security audit of OpenSSL

* Advisory Board members include longtime Linux kernel developer and open source advocate Alan Cox; Matt Green of Open Crypto Audit Project; Dan Meredith of the Radio Free Asia’s Open Technology Fund; Eben Moglen of Software Freedom Law Center; Bruce Schneier of the Berkman Center for Internet & Society at Harvard Law School; Eric Sears of the MacArthur Foundation; and Ted Ts’o of Google and the Linux kernel community

SAN FRANCISCO, May 29, 2014 – The Core Infrastructure Initiative (CII), a project hosted by The Linux Foundation that enables technology companies, industry stakeholders and esteemed developers to collaboratively identify and fund open source projects that are in need of assistance, today announced five new backers, the first projects to receive funding from the Initiative and the Advisory Board members who will help identify critical infrastructure projects most in need of support.

CII provides funding for fellowships for key developers to work fulltime on open source projects, security audits, computing and test infrastructure, travel, face-to-face meeting coordination and other support. The Steering Committee, comprised of members of the Initiative, and the Advisory Board of industry stakeholders and esteemed developers, are tasked with identifying underfunded open source projects that support critical infrastructure, and administering the funds through The Linux Foundation.

The computing industry has increasingly come to rely upon shared source code to foster innovation. But as this shared code has become ever more critical to society and more complex to build and maintain, there are certain projects that have not received the level of support commensurate with their importance. CII changes funding requests from the reactive post-crisis asks of today to proactive reviews identifying the needs of the most important projects. By raising funds at a neutral organization like The Linux Foundation, the industry can effectively give these projects the support they need while ensuring that open source projects retain their independence and community-based dynamism.

“All software development requires support and funding. Open source software is no exception and warrants a level of support on par with the dominant role it plays supporting today’s global information infrastructure,” said Jim Zemlin, executive director at The Linux Foundation. “CII implements the same collaborative approach that is used to build software to help fund the most critical projects. The aim of CII is to move from the reactive, crisis-driven responses to a measured, proactive way to identify and fund those projects that are in need. I am thrilled that we now have a forum to connect those in need with those with funds.”

Additional Backers Represent Overwhelming Support for Open Source Projects

Additional founding members of CII include Adobe, Bloomberg, HP, Huawei and salesforce.com. These companies represent the ongoing and overwhelming support for the open source software that provides the foundation for today’s global infrastructure. They join other members of CII who include Amazon Web Services, Cisco, Dell, Facebook, Fujitsu, Google, IBM, Intel, Microsoft, NetApp, Rackspace and VMware. Comments from some of the newest members are included below.

Range of Projects Prioritized for First Round of Funding

Upon an initial review of critical open source software projects, the CII Steering Committee has prioritized Network Time Protocol, OpenSSH and OpenSSL for the first round of funding. OpenSSL will receive funds from CII for two full-time core developers. The OpenSSL project is accepting additional donations, which can be coordinated directly with the OpenSSL Foundation.

The Open Crypto Audit Project (OCAP) will also receive funding in order to conduct a security audit of the OpenSSL code base. Other projects are under consideration and will be funded as assessments are completed and budget allows.

Esteemed Industry Experts Will Advise CII on Projects Most in Need

The CII Advisory Board will inform the CII Steering Committee about the open source projects most in need of support. With highly esteemed experts from the developer, security and legal communities, the CII Advisory Board plays an important role in prioritizing projects and individuals who are building the software that runs our lives.

Alan Cox is a longtime Linux kernel developer and has been recognized by the Free Software Foundation for advancing free software.

Matthew Green is a Research Professor of Computer Science at the Johns Hopkins University and a co-founder of the Open Crypto Audit Project. His research focuses on computer security and cryptography, and particularly the way that cryptography can be used to promote individual privacy.

“Whether we acknowledge it or not, the security of today’s Internet depends on a small number of open source projects. This initiative puts the resources in place to ensure the long-term viability of those projects. It makes us all more secure,” said Green.

Dan Meredith is a director at Radio Free Asia’s Open Technology Fund. He has been an activist and technologist exploring emerging trends intersecting human rights, transparency, global communication policy, the Internet, and information security for over a decade.

Eben Moglen is a professor of law and legal history at Columbia University and is the founder, director-counsel and chairman of Software Freedom Law Center. He is considered the foremost expert on open source legal practices and represents a variety of open source projects and developers.

Bruce Schneier is a fellow at the Berkman Center for Internet & Society at Harvard Law School and a well-recognized expert on computer security and privacy. He is also a fellow at New America Foundation’s Open Technology Institute.

Schneier commented on the Core Infrastructure Initiative: “This is an important step towards improving the security of the Internet. I’m happy to see the technology companies that rely on the security of open source software investing in that security.”

Eric Sears is a Program Officer for Human Rights for MacArthur Foundation. His grant-making portfolio includes efforts to strengthen digital free expression and privacy through advancing a more open and secure Internet.

Ted Ts’o has been recognized as the first Linux kernel developer in North America and today is a file system developer at Google who is also the Linux /dev/random maintainer.

Member Comments


“Adobe believes that open development and open source software are fundamental building blocks for software development,” said Dave McAllister, director of open source at Adobe. “The Core Infrastructure Initiative allows us to extend our support through a neutral forum that can prioritize underfunded yet critical projects. We’re excited to be a part of this work.”


“Open source software provides a critical foundation for the technologies we build for our clients,” said Shawn Edwards, CTO, Bloomberg. “We are proud to support the Core Infrastructure Initiative so we can contribute to building the foundational technologies that make future innovation possible.”


“HP strongly believes in the quality of open source software, as evidenced by its use, participation in, and support of open source projects and software,” said Eileen Evans, vice president and deputy general counsel, cloud and open source, HP.  “As a member of the Core Infrastructure Initiative, HP will lend its expertise and resources to further improve the technology of open source global information infrastructure, and in particular, work to reduce the likelihood of security-related incidents.”

“Open source software has fueled the advancements we’ve seen over the last decade in cloud and mobile computing,” said Parker Harris, co-founder, salesforce.com. “That is why supporting The Linux Foundation’s Core Infrastructure Initiative is an absolute necessity in today’s software industry, and salesforce.com is delighted to contribute to this effort and foster the next generation of open source computing innovation.”

Anyone can donate to the Core Infrastructure Initiative fund. To join, donate, or find out more about the Core Infrastructure Initiative, please visit the CII website.

About The Linux Foundation

The Linux Foundation is a nonprofit consortium dedicated to fostering the growth of Linux and collaborative software development. Founded in 2000, the organization sponsors the work of Linux creator Linus Torvalds and promotes, protects and advances the Linux operating system and collaborative software development by marshaling the resources of its members and the open source community. The Linux Foundation provides a neutral forum for collaboration and education by hosting Collaborative Projects and Linux conferences, including LinuxCon, and by generating original research and content that advances the understanding of Linux and collaborative software development. More information can be found on The Linux Foundation’s website.

The Linux Foundation, Linux Standard Base, MeeGo, Tizen and Yocto Project are trademarks of The Linux Foundation. OpenBEL is a trademark of OpenBEL Consortium. OpenDaylight is a trademark of the OpenDaylight Project. Linux is a trademark of Linus Torvalds.

# # #

ADTmag: Open Source Node.js Hits v10, with Better Security, Performance, More

In the News

The open source Node.js project for server-side JavaScript today hit a major milestone with the release of version 10.0.0. It marks the seventh major release of the cross-platform JavaScript runtime since the formation of the governing Node.js Foundation in 2015. Node.js itself debuted in 2009, promising a unified JavaScript-based Web application platform that allowed for creating dynamic sites with server-side code, rather than just static client code embedded in browsers.


ZDNet: Hyperledger bug bounty program goes public

In the News

The Hyperledger project has opened the doors of its bug bounty program to the public. Hyperledger is an open-source project and hub for developers to work on blockchain technologies. The Hyperledger infrastructure is being developed in order to support cross-industry uses of distributed ledger technologies, most commonly associated with the exchange of cryptocurrency. Hosted by the Linux Foundation, Hyperledger focuses on cross-industry support for distributed ledger frameworks, smart contracts, and libraries, and already supports a range of business-based blockchain frameworks and transactional applications.


FCW: Lawmakers worry about a second Heartbleed

In the News

Two Republicans on key House committees are looking for more information about the challenges surrounding the cybersecurity of open-source software.

Reps. Greg Walden (R-Ore.) and Gregg Harper (R-Miss.), respectively the chairs of the House Energy and Commerce Committee and its Subcommittee on Oversight and Investigations, want information from Linux Foundation Executive Director Jim Zemlin about the cybersecurity risks of open-source software.


The Hill: Lawmakers press Linux on security of open-source software

In the News

Republican leaders of the House Energy and Commerce Committee are pressing the nonprofit Linux Foundation on how the tech community can better mitigate vulnerabilities in open-source software. Rep. Greg Walden (R-Ore.), the committee chairman, and Rep. Gregg Harper (R-Miss.) sent a letter to the Linux Foundation on Monday, citing the critical “Heartbleed” vulnerability discovered in 2014 that impacted thousands of websites and allowed hackers to steal user passwords.

“As the last several years have made clear, OSS [open-source software] is such a foundational part of the modern connected world that it has become critical cyber infrastructure,” the lawmakers wrote. “As we continue to examine cybersecurity issues generally, it is therefore imperative that we understand the challenges and opportunities the OSS ecosystem faces, and potential steps that OSS stakeholders may take to further support it.”


Open Source Threat Modeling

Blogs

A guest blog post by Mike Goodwin.

What is threat modeling?

Application threat modeling is a structured approach to identifying ways that an adversary might try to attack an application and then designing mitigations to prevent, detect or reduce the impact of those attacks. The description of an application’s threat model is one of the criteria for the CII Best Practices Silver badge.

Why threat modeling?

It is well established that defense-in-depth is a key principle for network security and the same is true for application security. But although most application developers will intuitively understand this as a concept, it can be hard to put it into practice. After many years and sleepless nights, worrying and fretting about application security, one thing I have learned is that threat modeling is an exceptionally powerful technique for building defense-in-depth into an application design. This is what first attracted me to threat modeling. It is also great for identifying security flaws at design time where they are cheap and easy to correct. These kinds of flaws are often subtle and hard to detect by traditional testing approaches, especially if they are buried in the innards of your application.

Three stages of threat modeling

There are several ways of doing threat modeling ranging from formal methodologies with nice acronyms (e.g. PASTA) through card games (e.g. OWASP Cornucopia) to informal whiteboard sessions. Generally though, the technique has three core stages:

Decompose your application – This is almost always done using some kind of diagram. I have seen successful threat modeling done using many types of diagrams, from UML sequence diagrams to informal architecture sketches. Whatever format you choose, it is important that the diagram shows how the different internal components of your application and external users/systems interact to deliver its functionality. My preferred type of diagram is a Data Flow Diagram with trust boundaries.
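As a minimal sketch of this decomposition step in code (the component names, trust zones and data model below are invented for illustration; this is not Threat Dragon's internal representation), flows that cross a trust boundary are the ones that deserve the most scrutiny in the next stage:

```python
# Toy data flow diagram with trust boundaries. All names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    name: str
    zone: str  # trust zone the element lives in, e.g. "internet", "internal"

@dataclass(frozen=True)
class DataFlow:
    source: Element
    sink: Element
    label: str

    def crosses_trust_boundary(self) -> bool:
        # A flow between different trust zones crosses a trust boundary
        # and deserves the most questions during threat identification.
        return self.source.zone != self.sink.zone

browser = Element("Browser", "internet")
web_app = Element("Web application", "dmz")
database = Element("Database", "internal")

flows = [
    DataFlow(browser, web_app, "HTTP request"),
    DataFlow(web_app, database, "SQL query"),
]

risky = [f.label for f in flows if f.crosses_trust_boundary()]
print(risky)  # both flows cross a boundary in this example
```

Even this crude model is enough to drive the questioning in the next stage: every flow in `risky` should get at least one "what could an attacker do here?" question.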

Identify threats – In this stage, the threat modeling team ask questions about the component parts of the application and (very importantly) the interactions or data flows between them to guess how someone might try to attack it. The answers to these questions are the threats. Typical questions and resulting threats are:

Question: What assumptions is this process making about incoming data? What if they are wrong?
Threat: An attacker could send a request pretending to be another person and access that person’s data.

Question: What could an attacker do to this message queue?
Threat: An attacker could place a poison message on the queue, causing the receiving process to crash.

Question: Where might an attacker tamper with the data in the application?
Threat: An attacker could modify an account number in the database to divert payment to their own account.

Design mitigations – Once some threats have been identified the team designs ways to block, avoid or minimise the threats. Some threats may have more than one mitigation. Some mitigations might be preventative and some might be detective. The team could choose to accept some low-risk threats without mitigations. Of course, some mitigations imply design changes, so the threat model diagram might have to be revisited.

Threat: An attacker could send a request pretending to be another person and access that person’s data.
Mitigation: Identify the requestor using a session cookie and apply authorization logic.

Threat: An attacker could place a poison message on the queue, causing the receiving process to crash.
Mitigations: Digitally sign messages on the queue and validate their signatures before processing. Maintain a retry count on messages and discard them after three retries.

Threat: An attacker could modify an account number in the database to divert payment to their own account.
Mitigations: Preventative: restrict access to the database using a firewall. Detective: log all changes to bank account numbers and audit the changes.
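The message-queue mitigations above can be sketched in a few lines. This is only an illustrative outline using Python's standard library; the shared key, message format and `consume` interface are invented, and a real system would also need key management and a dead-letter queue:

```python
# Sketch of two mitigations: HMAC-sign queue messages so tampering is
# detected, and cap redelivery so a poison message cannot crash the
# consumer forever. Names and formats are illustrative only.
import hashlib
import hmac

SECRET = b"shared-queue-key"  # assumption: producer and consumer share a key

def sign(message: bytes) -> bytes:
    return hmac.new(SECRET, message, hashlib.sha256).digest()

def verify(message: bytes, signature: bytes) -> bool:
    # Constant-time comparison avoids leaking signature bytes via timing.
    return hmac.compare_digest(sign(message), signature)

def consume(message: bytes, signature: bytes, retries: int,
            max_retries: int = 3) -> str:
    if not verify(message, signature):
        return "reject"    # tampered or forged message
    if retries >= max_retries:
        return "discard"   # likely a poison message; divert to dead letter
    return "process"

msg = b'{"account": "12345", "amount": 10}'
sig = sign(msg)
print(consume(msg, sig, retries=0))         # process
print(consume(msg + b"x", sig, retries=0))  # reject
print(consume(msg, sig, retries=3))         # discard
```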

OWASP Threat Dragon

Threat modeling can be usefully done with a pen, whiteboard and one or more security-aware people who understand how their application is built, and this is MUCH better than not threat modeling at all. However, to do it effectively with multiple people and multiple project iterations you need a tool. Commercial tools are available, and Microsoft provides a free tool for Windows only, but established, free, open-source, cross-platform tools are non-existent. OWASP Threat Dragon aims to fill this gap. The aims of the project are:

  • Great UX – Using Threat Dragon should be simple, engaging and fun
  • A powerful threat/mitigation rule engine – This will lower the barrier to entry for teams and encourage non-specialists to contribute
  • Integration with other development lifecycle tools – This will ensure that models slot easily into the developer workflows and remain relevant as the project evolves
  • To always be free, open-source (like all OWASP projects) and cross-platform. The full source code is available on GitHub
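To give a feel for the second aim, here is a guess at what a simple threat/mitigation rule engine could look like: element kinds map to STRIDE-flavoured threat suggestions, so non-specialists get a starting list automatically. The rules and element kinds below are invented for illustration and are not Threat Dragon's actual engine:

```python
# Toy rule engine: suggest candidate threats per diagram element kind.
# Rules are illustrative STRIDE-style guesses, not Threat Dragon's rules.
RULES = {
    "process": ["Spoofing of the caller", "Elevation of privilege"],
    "data_store": ["Tampering with stored data", "Information disclosure"],
    "data_flow": ["Tampering in transit", "Information disclosure in transit"],
}

def suggest_threats(elements):
    """Yield (element_name, threat) pairs for every matching rule."""
    for name, kind in elements:
        for threat in RULES.get(kind, []):
            yield name, threat

diagram = [("Web app", "process"), ("Accounts DB", "data_store")]
for element, threat in suggest_threats(diagram):
    print(f"{element}: {threat}")
```

A generated list like this is only a starting point; the team still needs to review, discard and extend the suggestions during the whiteboard discussion.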

The tool comes in two variants: an online web application and a desktop application.

End-user documentation is available for both variants and, most importantly, it has a cute logo called Cupcakes…

Threat Dragon is an OWASP Incubator Project – so it is still early stage but it can already support effective threat modeling. The near-term roadmap for the tool is to:

  • Achieve a CII Best Practices badge for the project
  • Implement the threat/mitigation rule engine
  • Continue to evolve the usability of the tool based on real-world feedback from users
  • Establish a sustainable hosting model for the web application

If you want to harden your application designs you should definitely give threat modeling a try. If you want a tool to help you, try OWASP Threat Dragon! All feedback, comments, issue reports and pull requests are very welcome.

About the Author
Mike Goodwin is a full-time security professional at the Sage Group where he leads the team responsible for product security. Most of his spare time is spent working on Threat Dragon or co-leading his local OWASP chapter.

Securing Network Time

Blogs

Since its inception the CII has considered network time, and implementations of the Network Time Protocol, to be “core infrastructure.” Correctly synchronising clocks is critical both to the smooth functioning of many services and to the effectiveness of numerous security protocols; as a result most computers run some sort of clock synchronization software and most of those computers implement either the Network Time Protocol (NTP, RFC 5905) or the closely related but slimmed down Simple Network Time Protocol (SNTP, RFC 4330).
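For a flavour of what these protocols involve, here is a minimal sketch of the SNTP client basics: the fixed 48-byte request packet and the conversion between NTP time (seconds since 1900) and Unix time (seconds since 1970). No network I/O is performed, and a real client would also handle timestamp fractions, round-trip delay and clock offset:

```python
# Minimal SNTP (RFC 4330) client building blocks; no network I/O here.
import struct

NTP_TO_UNIX = 2_208_988_800  # seconds between 1900-01-01 and 1970-01-01

def build_request() -> bytes:
    # First byte packs LI = 0, version = 4, mode = 3 (client): 0b00_100_011
    packet = bytearray(48)
    packet[0] = (4 << 3) | 3
    return bytes(packet)

def ntp_to_unix(ntp_seconds: int, ntp_fraction: int) -> float:
    # NTP timestamps are 32-bit seconds plus a 32-bit binary fraction.
    return ntp_seconds - NTP_TO_UNIX + ntp_fraction / 2**32

req = build_request()
print(len(req), hex(req[0]))          # 48 0x23
# NTP transmit timestamp for 2017-01-01T00:00:00Z:
print(ntp_to_unix(3_692_217_600, 0))  # 1483228800.0
```

Everything hard about NTP (and everything the audits below scrutinise) lives beyond this sketch: filtering multiple samples, authentication, leap seconds and slewing the local clock safely.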


There are several different implementations of NTP and SNTP, including both open source and proprietary versions. For many years the canonical open source implementation has been ntpd, which was started by David Mills and is now developed by Harlan Stenn at the Network Time Foundation. Parts of the ntpd code date back at least 25 years and the developers pride themselves on having the most complete implementation of the protocol and a wide set of supported platforms. Over the years forks of the ntpd code have been made, including the NTPSec project that seeks to remove much of the complexity of the ntpd code base, at the expense of completeness of the more esoteric NTP features and breadth of platform support. Others have reimplemented NTP from scratch, and one of the more complete open source alternatives is Chrony, originally written by Richard Curnow and currently maintained by Miroslav Lichvar.

The CII recently sponsored a security audit of the Chrony code, carried out by the security firm Cure53 (here is the report). In recent years, the CII has also provided financial support to both the ntpd project and the NTPSec project. Cure53 carried out security audits of both ntpd and NTPSec earlier this year, and Mozilla Foundation’s Secure Open Source (SOS) project funded those two audits. SOS also assisted the CII with the execution of the Chrony audit.

Since the CII has offered support to all three projects and since all three were reviewed by the same firm, close together in time, we thought it would be useful to present a direct comparison of their results.


Full report PDF

The ntpd code base is the largest and most complex of the three and it carries a lot of legacy code. As a result, unsurprisingly, it fared the worst of the three in security testing with the report listing 1 Critical, 2 High, 1 Medium and 8 Low severity issues along with 2 Informational comments. It should be noted that these issues were largely addressed in the 4.2.8p10 release back in March 2017. That said, the commentary in the report is informative, with the testers writing:

“The general outcome of this project is rooted in the fact that the code has been left to grow organically and had aged somewhat unattended over the years. The overall structure has thus become very intricate, while also yielding a conviction that different styles and approaches were used and subsequently altered. The seemingly uncontrolled inclusion of variant code via header files and complete external projects engenders a particular problem. Most likely, it makes the continuous development much more difficult than necessary.”

As a result, it seems quite likely that there are more lurking issues and that it will be difficult for the authors to avoid introducing new security issues in the future without some substantial refactoring of the code.

As mentioned above, ntpd is the most complete implementation of NTP and as a result is the most complex. Complexity is the enemy of security and that shows up in this report.


Full report PDF

As mentioned previously, the NTPSec project started as a fork of ntpd with the specific aim of cleaning up a lot of the complexity in ntpd, even if that meant throwing out some of the less-used features. The NTPSec project is still in its early days; the team has not yet made a version 1.0 release, but has already thrown out nearly 75% of the code from ntpd and refactored many other parts. Still, the security audit earlier this year yielded 3 High, 1 Medium and 3 Low severity issues as well as raising 1 Informational matter. The testers comments again were telling:

“On the one hand, much cruft has been removed successfully, yet, on the other hand, the code shared between the two software projects bears tremendous similarities. The NTPsec project is still relatively young and a major release has not yet occurred, so the expectations are high for much more being done beforehand in terms of improvements. It must be mentioned, however, that the regression bug described in NTP-01-015 is particularly worrisome and raises concerns about the quality of the actions undertaken.

In sum, one can clearly discern the direction of the project and the pinpoint the maintainers’ focus on simplifying and streamlining the code base. While the state of security is evidently not optimal, there is a definite room for growth, code stability and overall security improvement as long as more time and efforts are invested into the matter prior to the official release of NTPsec.”

The NTPSec project has made some significant technical progress, but there is more work to do before the developers get to an official release. Even then, the history of the code may well haunt them for some time to come.


Full report PDF

Unlike NTPSec, Chrony is not derived from the ntpd code but was implemented from scratch. It implements both client and server modes of the full NTPv4 protocol (as opposed to the simplified SNTP protocol), including operating as a Stratum 1 reference server, and was specifically designed to handle difficult conditions such as intermittent network connections, heavily congested networks and systems that do not run continuously (like laptops) or which run on a virtual machine. The development is currently supported by Red Hat Software and it is now the default NTP implementation on their distributions.

In the 20+ years that I’ve worked in the security industry I’ve read many security audits. The audit that the CII sponsored for Chrony was the first time that I’d used Cure53, and I had not seen any previous reports from them, so when I received the report on Chrony I was very surprised. So surprised that I stopped to email people who had worked with Cure53 to question their competence. When they assured me that the team was highly skilled and capable, I was astounded. Chrony withstood three skilled security testers for 11 days of solid testing and the result was just 2 Low severity issues (both of which have since been fixed). The test report stated:

“The overwhelmingly positive result of this security assignment performed by three Cure53 testers can be clearly inferred from a marginal number and low-risk nature of the findings amassed in this report. Withstanding eleven full days of on-remote testing in August of 2017 means that Chrony is robust, strong, and developed with security in mind. The software boasts sound design and is secure across all tested areas. It is quite safe to assume that untested software in the Chrony family is of a similarly exceptional quality. In general, the software proved to be well-structured and marked by the right abstractions at the appropriate locations. While the functional scope of the software is quite wide, the actual implementation is surprisingly elegant and of a minimal and just necessary complexity. In sum, the Chrony NTP software stands solid and can be seen as trustworthy.”

The head of Cure53, Dr. Mario Heiderich, indicated that it was very rare for the firm to produce a report with so few issues and that he was surprised that the software was so strong.

Of course just because the software is strong does not mean that it is invulnerable to attack, let alone free from bugs. What it does mean however is that Chrony is well designed, well implemented, well tested and benefits from the hindsight of decades of NTP implementation by others without bearing the burden of legacy code.


From a security standpoint (and here at the CII we are security people), Chrony was the clear winner between these three NTP implementations. Chrony does not have all of the bells and whistles that ntpd does, and it doesn’t implement every single option listed in the NTP specification, but for the vast majority of users this will not matter. If all you need is an NTP client or server (with or without reference clock), which is all that most people need, then its security benefits most likely outweigh any missing features.



The security audit on Chrony was funded by the CII but the Mozilla SOS project handled many of the logistics of getting the audit done and we are very grateful to Gervase Markham for his assistance. Mozilla SOS funded the audits of ntpd and NTPSec. All three audits were performed by Cure53.

1,000 Projects Registered for the CII Best Practice Badge, 100 Badges Granted and Prizes!!!

Blogs

In May of last year the CII launched its Best Practice Badge program. Our goal was to raise awareness of development processes and project governance steps that help projects achieve better security outcomes. By giving project maintainers a list of actionable items that we know will improve security, teaching them why these steps lead to improvement and showing them how to implement them, we can raise security standards and help projects get better at delivering secure products. By offering a visual “badge” we make it easier for consumers of open source projects to see which projects take security seriously. More recently, in June of this year, we added new Silver and Gold levels to the badges, allowing projects that make further efforts to drive security improvements to show off their commitment.

We recently issued our 100th badge to a passing project. A few days later, we had our 1,000th project sign up for the Best Practice Badge program. Our goal for the Best Practice Badge is to be a recognisable mark of commitment to security by projects, and for any mark to gain recognition, it needs to be used and on display. In light of that fact, we are delighted that the Best Practice Badge recently passed these two major adoption milestones.

Some people have questioned why the pass rate is only 10 percent. The fraction of projects earning a badge has been fairly stable for a while, even as the number of registered projects continues to grow, as can be seen from the project statistics page. When we set up the program it was very much our intent that this should not be some “rubber stamp” process but that projects would need to work to earn their badge. To date nearly every project has had to make some improvement in order to achieve a badge, which indicates that the program is actually moving the needle on open source security.

Several projects have given us feedback on the badging process and there are several topics that came up over and over again. Common issues that often need to be fixed include:

  • not supporting a secure way to access the project web site (or not having a valid certificate for the site),

  • not performing automated testing,

  • not performing any sort of code analysis, and

  • not having a publicly documented process for reporting security vulnerabilities.
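Gaps like these are easy to check for mechanically. Below is a toy self-check; the criteria keys and the example project metadata are invented for illustration, and the real badge questionnaire is far more detailed than four booleans:

```python
# Toy badge-gap checker inspired by the common issues listed above.
# Criteria keys and project fields are invented; the real CII Best
# Practices questionnaire has 60+ criteria.
CRITERIA = {
    "https_site": "Project web site is reachable over HTTPS with a valid certificate",
    "automated_tests": "Project runs an automated test suite",
    "code_analysis": "Project applies some form of code analysis",
    "vuln_report_process": "Vulnerability reporting process is publicly documented",
}

def badge_gaps(project: dict) -> list:
    """Return human-readable descriptions of the criteria a project fails."""
    return [desc for key, desc in CRITERIA.items() if not project.get(key)]

example_project = {
    "https_site": True,
    "automated_tests": True,
    "code_analysis": False,
    "vuln_report_process": False,
}
for gap in badge_gaps(example_project):
    print("TODO:", gap)
```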

Other important changes projects have made as a result of going through the badge process include:

  • removing insecure cryptographic algorithms,

  • adding unique version numbers for each release,

  • documenting release notes and the contribution process, and

  • including coding style guidelines for contributions.

History shows that these sorts of steps can improve the security outcomes for projects so we are delighted that all of the passing projects are now taking these steps.

On to Silver and Gold

As well as the huge progress we have made with getting projects to a “passing grade,” the CII Best Practice Badge program recently launched its enhanced Silver and Gold badges. These higher-level badges add a number of extra criteria on top of the passing level and make mandatory some of the criteria that are recommended at the lower levels. These higher levels give our passing projects some new stretch goals to which they can aspire.

Today we are delighted to announce that the higher-level badges bring not only glory and fame but prizes as well! The maintainers of the first 50 projects to complete the Silver badge process will each receive a bag of Linux Foundation and CII branded swag (probably a hoodie, t-shirt and some other stuff; we’ve not quite pinned down the details yet). Furthermore, the maintainers of the first 5 projects to have a Gold badge validated will be invited to attend the Linux Foundation-organised conference of their choice, along with an invitation to present at that conference on how their project runs its Secure Development Life Cycle process. Don’t worry if you’re too shy to get up on stage; presenting isn’t obligatory, but we really do want successful projects to share their experiences so that others can learn from them.

On to the 10,000 projects and 1,000 badges! Woohoo!

CII Best Practices Badge Program Announces Higher-level Certification and Expanded Language Support

By Blogs

In May last year the CII launched its Best Practices Badge program. A qualitative self-assessment approach available online, the CII Best Practices Badge program allows open source projects to grade themselves against a set of best practices for open source development.

Today we are pleased to announce the next stage of the Best Practice Badge program, which adds two major upgrades to the original program: higher-level certification and internationalisation.

Since formally launching 13 months ago, more than 850 projects have signed up for the process, which requires project maintainers to answer an extensive questionnaire about their development process and explain how they meet the 60+ criteria. Although this is a self-assessment, it is by no means a low bar: so far only about 10 percent of projects have passed, while many others are making changes to allow them to meet the requirements. Projects that have received their badges so far include GitLab, Hyperledger Fabric, Linux, NTPSec, Node.js, OPNFV, OpenBlox, OpenSSL, OpenStack, and Zephyr.

The chart below shows the number of projects working toward earning a badge and indicates meaningful progress across the board. More statistics on CII Best Practices Badge growth and pass rates can be found here.

[Diagram: project progress toward a badge]

It has always been our intention to use the program to push projects to raise their own standards and, to that end, today we are launching two new badges for projects that meet these higher standards. In addition to the original “Passing” badge, we are adding enhanced “Silver” and “Gold” badges. The criteria for the Silver and Gold levels build on the existing criteria for the “Passing” level.

The new levels raise the bar in a number of areas and are meant to help identify projects that are not only highly committed to improving the quality and security of their code, but are also mindful of and proactive about other success factors. For developers, the badges signal which projects are well-organised and easy to participate in, especially for newcomers. For consumers, the changes will ease the on-ramp by requiring quick-start guides, for example, while criteria that call for even more rigorous development best practices will instil greater confidence in businesses leveraging open source. Indeed, meeting the new criteria, especially at the Gold level, will likely not be achievable by many small and single-organisation projects.

To earn a Silver badge, for example, projects are now required to adopt a code of conduct, clearly define their governance model, fix known cryptographic weaknesses (the crypto_weaknesses criterion), and use at least one static analysis tool to look for common vulnerabilities in the analysed language or environment, where possible.
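To illustrate the kind of check a static analysis tool performs, here is a minimal sketch (not part of the badge criteria themselves, and the function and sample names are illustrative) that walks a Python abstract syntax tree and flags calls to `eval()`, a common code-injection risk:

```python
# Minimal sketch of static analysis: scan Python source without running it,
# flagging calls to eval(), which real tools also warn about.
import ast


def find_eval_calls(source: str) -> list[int]:
    """Return the line numbers where eval() is called in the given source."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id == "eval"
    ]


sample = "x = eval(input())\ny = len('safe')\n"
print(find_eval_calls(sample))  # only the eval() call on line 1 is flagged
```

Production-grade tools apply hundreds of such checks; the badge criterion simply asks that projects run at least one of them regularly.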

The other change that we are excited to announce is internationalisation. To broaden the program’s reach and make it easier for projects around the world to participate in the Best Practices Badge program, we have updated the badge application to support multiple languages. We are launching the site with full Chinese and French language support today, with German, Russian and Japanese in progress. We would especially like to thank CII member company Huawei for their generous support of the translation into Chinese, and Yannick Moy for his hard work translating the site into French.

As with the original work, David Wheeler, project leader at the Institute for Defense Analyses, did the hard work to expand the program. We continue to welcome community feedback, especially on the translation work. To get involved, please join the cii-badges mailing list and track us on GitHub at coreinfrastructure/best-practices-badge. Of course, we also encourage projects to begin the CII Best Practices Badge application process.

For those attending LinuxCon | ContainerCon | CloudOpen China, CII Program Director Marcus Streets is presenting “The Core Infrastructure Initiative: Its First Three Years and Onwards to the Future” on June 20th. He will also share more on these new developments and explain how you can apply for a badge for your free software project.