In 1998, the US passed the Digital Millennium Copyright Act (DMCA) in an effort to implement several requirements of the World Intellectual Property Organization (WIPO) treaties. The DMCA makes it a crime to distribute technologies developed to bypass measures that control access to copyrighted works, and it also criminalizes circumventing the access controls built into any technology. Unfortunately, the enactment of the DMCA has had a chilling effect on security research and has the potential to serve as a foundation for the prosecution of security researchers who, by design, test and validate the security of technology platforms(1).
As security professionals, we live in a schizophrenic world. Every day, we strive to protect our organizations from the ongoing technology-centric threats that could harm our customers, our employees, and our shareholders. Many, if not all of us, take immense pride in what we do; it's far more than just a job, and in many ways, a calling. However, just as a great detective must 'think' like a perpetrator, we must also think like those who have set out to infiltrate our organizations by whatever means possible. We have become masters of compartmentalization: speaking to our executive boards one moment, and digging around in the underbelly of the internet the next.
We, probably more acutely than anyone, understand the threat not only to our organizations but, in many ways, to the stability of our country and, in fact, our very way of life. In order for us to effectively counter those threats, we rely heavily on those who voluntarily invest their time in identifying the vulnerabilities and issues that could potentially cause us harm.
For several years now, I have followed, albeit from a distance, countless stories of security researchers who have been afraid to publish their research because of some antiquated public statute. Several rather extreme cases have involved government or corporate attorneys stepping in and flat-out threatening these researchers with legal consequences if they released their work.
To be honest, this has me rather concerned about the future of our industry.
The “cyber”-security field relies heavily on such researchers to help us fulfill our day-to-day responsibilities. We rely on privately funded research as much as we do on that which comes from academia. When such research is locked away out of fear of imprisonment, it evokes a future right out of Fahrenheit 451. And while the majority of books have not (yet) been banned, the outdated and ill-considered laws currently in place fundamentally have the same effect as an outright ban: factual information is not being published for fear of incarceration.
Welcome to 1999, Fireman Montag.
And yet, as I write this, I understand, and support, the need for such laws. Mr. Compartmentalization, present and accounted for.
How do I justify being deeply offended by the quandary our researchers face while supporting the very system that causes it? Allow me to explain.
The Legal Dilemma
Since the 1980s, the legal system has been wrangling with how to 'adjust' the criminal code to take into account the very different world of 'cyber' crime. There have been numerous attempts to plug the various holes defendants have used to escape cyber-crime charges brought under legacy criminal code. But this patchwork of statutes has done nothing but complicate the matter, leaving interpretation to a judicial system ill-equipped to understand the complexities of the crimes at hand, how they differ from one another, and the potential impact of future legal opinions. A murder is a murder. It's either intentional or accidental, premeditated or impulsive, but in the end, a human life was taken. Rarely are the circumstances so clear in 'cyber' crime. Breaches occur; information is stolen. But if that same information can be found posted on Facebook or LinkedIn, what are the grounds for calling it a crime? If a person is in possession of a stolen car, that is very clearly a crime. However, if someone is in possession of a text file containing the same information that can be culled from social media, is that person guilty of 'possession of stolen property' as if it were a car?
In the research world, things are far less clear and distinct. Laws have been rightfully enacted to protect our industries and corporations from industrial espionage and the theft of the proprietary information they have invested in developing. The existence of such laws is critical to the ongoing growth of our economy and plays a pivotal role in encouraging further investment in development and creativity. However, through the '80s and '90s, many regulatory efforts were proposed to 'update' such laws to address the influx of computer-related crimes that law enforcement agencies were trying to deal with. The Computer Fraud and Abuse Act (CFAA), first enacted in 1984, was arguably the first such law to cause serious concern over how security researchers would be treated(2). Unfortunately, things haven't gotten much better over the past few decades. Since its passage, the CFAA has been amended seven times, and newer laws such as the Digital Millennium Copyright Act (DMCA) fundamentally make it illegal for researchers to do what they do best. While efforts such as Aaron's Law have attempted to address the vagueness of the CFAA, none have been successful to date.
I say all this not to debate the current legal system, but rather to call attention to how, over time, this patchwork of ad-hoc cyber-laws is being twisted and distorted to such a degree that it will actually make protecting the country's infrastructure impossible.
So when I'm culling through my daily news feed and I see a post that once again alludes to the legal ramifications of security research, I begrudgingly grab my coffee and settle in for a read. Much to my surprise, I find that the Library of Congress, in cooperation with the US Copyright Office, has added exemptions to the DMCA which expressly protect most security research.
From the Vice.com article(3):
As part of an effort to keep the DMCA timely, Congress included a so-called “safety valve” dubbed the Section 1201 triennial review process that, every three years, mandates that activists and concerned citizens beg the Copyright Office and the Librarian of Congress to craft explicit exemptions from the law to ensure routine behavior won’t be criminalized.
As part of this mandated three-year review, the Copyright Office and the Librarian of Congress have updated the DMCA's exemption list and removed the majority of the language that could potentially limit research or serve as a foundation for prosecutorial actions against security researchers.
Yes, you read that right. Two bureaucratic government agencies got together and agreed to do the right thing. And while one could argue that the areas still 'off limits' for disclosure may be the most critical ones that need to be tested, at least we're moving in the right direction. For the most part, researchers will no longer have to worry about working in a true 'lab environment' or purchasing licensed copies of some piece of software that's running on millions of devices. While I have no doubt that we will still hear of canceled talks or research deemed too risky to disclose, at least the work will be getting done.
I'm hoping that the DMCA changes will not only encourage more researchers to begin their work, but that the tsunami of research that's been going on behind closed doors will flood us with new information and new approaches to protecting our collective world. That said, I strongly believe that, as leaders in our industry, we need to do more to protect those who give so much. Security leaders need to hold themselves accountable for supporting, enabling, and furthering the practices of our collective research establishment. Collectively, we must find ways to ensure such practices are protected from overzealous corporate lawyers and federal authorities. After all, who is better prepared to stop our industry's march towards a dystopian Bradbury future than the people who live it every day?
Let's be frank: our experience tells us our applications, networks, and environments are full of exploitable vulnerabilities that have yet to be disclosed but are likely being used by attackers as you read this. If it weren't for our illustrious researchers, we might never know about them, never protect our companies against them, and never hold vendors accountable for fixing them. Just imagine if we had never learned about the vulnerabilities in our SCADA systems, our financial applications, or our airline Wi-Fi. Does anyone truly think we'd be better off not knowing?
As the saying goes: If not us, then who? If not now, then when?
Copyright © 2002-2020 John Masserini. All rights reserved.