
The Biden Administration National Cybersecurity Strategy – A Practitioner’s Take

Mike Parkin | March 03, 2023

This blog post is a practitioner’s take (my take) on the Biden administration’s National Cybersecurity Strategy: a solid effort to codify a go-forward cybersecurity strategy, though one that at times lacks a realistic path to executing certain objectives. Focusing on five main pillars, it lays out a strategy to establish incentives and responsibility for keeping assets secure in a rapidly changing environment. It’s a reasonable strategy overall, reflecting current realities in cyber risk and the geopolitical situation.

Biden Administration National Cybersecurity Strategy

It will be challenging for most cybersecurity organizations to reach the objectives and goals laid out in the Biden administration’s National Cybersecurity Strategy, but it is ultimately what we are striving for. Critical infrastructure must be secure. Threat actors need to be dealt with globally, which will require the cooperation of international law enforcement and intelligence communities. And there needs to be more investment in privacy, security, and related fields that drive long-term solutions.

Most of this strategy parallels initiatives the cybersecurity community has been pushing for decades, especially regarding investing in and defending critical infrastructure. Developers have long had to juggle the conflicting priorities of “coding securely” with security baked in from the start, and “coding quickly” to meet the real-world business needs of their organization. Policies based on this strategy will, hopefully, shift the balance to more security baked into code rather than code that’s just half baked. 

On the other hand, some of the pillars of the Biden Administration National Cybersecurity Strategy are initiatives the cybersecurity community has wished for but has never had a real expectation of getting: specifically, the second pillar, on directly engaging threat actors, and the fifth pillar, on improving international cooperation on cybersecurity.

Law enforcement involvement has always been a challenge. When the threat is local, or in-country, it’s obvious who to go to. But how many local or even regional (state) level law enforcement agencies have the resources, time, or skills to deal with a cyberattack? All too often, local law enforcement lacks the skills while state and federal organizations lack the bandwidth.  

If the threat is international, there is little local or state agencies can do, since they will need to cooperate with multiple jurisdictions just to find the perpetrator, and then with an agency that has the authority to make an arrest or seize assets. This leads naturally to the fifth pillar, international cooperation. Such cooperation has been improving for some time, with some notable successes in recent years, but it still runs into resource constraints, jurisdictional rivalries, and the fact that some nations are simply not interested in cooperating.

Threat actors have good reason, after all, to congregate in a few notable countries. 

The third pillar, using market forces to drive security and resilience, may receive pushback. On the surface, this sounds like a great idea, but there are some potential concerns with how it could be implemented. Making developers responsible for their applications is a good idea: it provides an impetus for developing secure code, shifting the balance from “Do it Fast” to “Do it Right.” This is a good thing, and how we should be developing code regardless.

The fact sheet summarizes the third pillar of the Biden Administration National Cybersecurity Strategy as follows: 

  1. Shape Market Forces to Drive Security and Resilience – We will place responsibility on those within our digital ecosystem that are best positioned to reduce risk and shift the consequences of poor cybersecurity away from the most vulnerable in order to make our digital ecosystem more trustworthy, including by:
    • Promoting privacy and the security of personal data; 
    • Shifting liability for software products and services to promote secure development practices; and, 
    • Ensuring that Federal grant programs promote investments in new infrastructure that are secure and resilient. 

A couple of the points above may have unintended consequences. Both are conceptually good ideas. As mentioned, application developers should emphasize doing it right, coding with a focus on security and privacy rather than on getting the next release out as quickly as possible. But it raises the question: how much of that responsibility will shift?

It’s obvious that responsibility for sloppy coding should fall to the developer who released a vulnerable application without following best practices. But what about situations in which the problem traces back to a library with a latent vulnerability? One that lay undiscovered for years? Or one found in an open source library developed by volunteers, which may not have had an active code review in years, or may no longer be actively maintained at all?

Who takes responsibility for a breach that happened because an organization deployed an application without following the vendor’s recommendations for secure implementation? What about products left in service well past their end of life, which users neither replaced nor retired? Who takes responsibility for a breach that happened because a user fell for a sophisticated social engineering attack?

Ultimately, responsibility is shared. While developers need to do all they can to make their code secure and their products robust, it is up to their customers to deploy those products securely, following industry best practices and vendor advice. And it’s up to the user community to follow good basic security practices so they don’t become victims of phishing, social engineering, or other user-focused attacks.

It will be interesting to see how this strategy is implemented and how it plays out in practice. There are technical, financial, political, and diplomatic challenges that need to be overcome for this strategy to succeed. While we already have many of the tools we need to manage cyber risk, this strategy has the potential to reduce our threat surfaces and make life much riskier for threat actors who currently operate with impunity.