Wednesday, January 27, 2016

An attempt at fixing Wassenaar

Last year in May, I wrote extensively about the many ways in which the 2013 "intrusion software" amendments to the Wassenaar Arrangement were broken and downright dangerous to all efforts at securing the global IT infrastructure. Since then, the debate has heated up from all sides -- extending to a hearing in front of the US Congress (which was pretty unanimous in condemning these amendments), but also including voices such as James Bamford arguing for these controls in an op-ed. The landscape of the discussion is too complex now to be summarized here (the interested reader can find a partial survey of recent developments here).

Common ground between the different sides of the discussion is not large, but the one thing that almost everybody agrees on is "it is bad when despotic regimes that could not otherwise obtain sophisticated surveillance software simply purchase it from abroad". How to prevent this is up for discussion, and it is unclear whether export control (and specifically the Wassenaar Arrangement) is the right tool for the task.

To find out whether the export control language can be made to work, my colleagues Mara Tam and Vincenzo Iozzo and I have worked jointly to come up with an amendment to the language of the Wassenaar Arrangement that would satisfy the following criteria:

  • Make sure that click-and-play surveillance frameworks such as the ones marketed by HackingTeam or Gamma are caught and controlled.
  • Make sure that no technology that is required for defending networks (including bugs, proof-of-concept exploits, network scanners etc.) is caught and controlled.
In order to achieve this, we had to depart from the "traditional" Wassenaar language (which is focused on performance metrics and technical properties) and place much greater emphasis on "intent" and especially "informed consent by the user". We draw the line between good and bad based on whether the software in question is designed to be used against people who did not consent.

As of today, we are circulating our draft more widely. We are not 100% sure that our language achieves what we want to achieve, and we are not even sure whether what we want to achieve can be achieved within the language of export control -- but we have made a very thorough effort at testing our language against all scenarios we could come up with, and it worked well.

We are hoping that by circulating our proposal we can somewhat de-polarize the discussion and attempt to find a middle ground that everybody can be happy with -- or, failing that, to show that even with a lot of effort the 2013 amendments may end up being unfixable.

Anyhow, if you are interested in our document, you can download it here. As we get more feedback, the document will be updated and replaced with newer versions.

Tuesday, December 22, 2015

Open-Source BinNavi ... and fREedom

One of the cool things that my former zynamics colleagues (now at Google) did was the open-sourcing of BinNavi - a tool that I used to blog about quite frequently in the old days (here for example when it came to debugging old ScreenOS devices, or here for much more - kernel debugging, REIL etc.).

BinNavi is a GUI / IDE for performing multi-user reverse engineering, debugging, and code analysis. BinNavi allows the interactive exploration and annotation of disassemblies, displayed as browsable, clickable, and searchable graphs - based on a disassembly read from a PostgreSQL database, which can, in theory, be written by any other engine.

Writing UIs is hard work, and while there are many very impressive open-source reverse engineering tools around (Radare comes to mind first, but there are many others), the UI is often not very pretty - or convenient. My hope is that BinNavi can become the "default UI" for a plethora of open-source reverse engineering tools, and grow to realize its full potential as "the open-source reverse engineering IDE".

One of the biggest obstacles to BinNavi becoming more widely adopted is the fact that IDA is the only "data source" for BinNavi - i.e. while BinNavi is FOSS, somebody who wishes to start reverse engineering still needs IDA to fill the Postgres database with disassembly.

To remedy this situation, Dave Aitel put up a contest: Anybody that either builds a Capstone-to-BinNavi-SQL-bridge or that adds decompilation as a feature to BinNavi gets free tickets to INFILTRATE 2016.
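
For the disassembly half of such a bridge, Capstone's Python bindings already do most of the heavy lifting. Below is a minimal sketch of that half only - the function name and sample bytes are mine for illustration, and the actual work of the contest, writing the results into BinNavi's PostgreSQL schema, is deliberately left out here:

from capstone import Cs, CS_ARCH_X86, CS_MODE_32

def disassemble(code, base_addr):
    # Walk a byte buffer and yield (address, mnemonic, operands) tuples.
    md = Cs(CS_ARCH_X86, CS_MODE_32)
    md.detail = True  # operand detail is needed later for CFG construction
    for insn in md.disasm(code, base_addr):
        yield insn.address, insn.mnemonic, insn.op_str

if __name__ == "__main__":
    # 55 8B EC 5D C3 = push ebp; mov ebp, esp; pop ebp; ret
    sample = bytes.fromhex("558bec5dc3")
    for addr, mnemonic, ops in disassemble(sample, 0x401000):
        print("0x%08x  %s %s" % (addr, mnemonic, ops))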

Last week Chris Eagle published fREedom, a Python-based tool to disassemble x86 and x86_64 programs in the form of PE32, PE32+, and ELF files. This is pretty awesome - because it means that BinNavi moves much closer to being usable without any non-free tools.

In this blog post, I will post some first impressions, observations, and screenshots of fREedom in action.

My first test file is putty.exe (91b21fffe934d856c43e35a388c78fccce7471ea) - a relatively small Win32 PE file with roughly 1800 functions when disassembled in IDA.

Let's look at the first function:
Left: IDA's disassembly output. Right: fREedom's disassembly output
So disassembly, CFG building etc. have worked nicely. Multi-user commenting works as expected, as does translation to REIL. Callgraph browsing works, too:



The great thing about having fREedom to start from is that further improvements can be incremental and layered - people have something good to work from now :-) So what is missing / needs to come next? 
  1. fREedom: Function entry point recognition is still relatively poor - out of the ~1800 functions that IDA recognizes in putty, only 430 or so are found. This seems like an excellent target for one of those classic "using Python and some machine learning to do XYZ" blog posts (a trivially simple baseline heuristic is sketched right after this list).
  2. fREedom: The CFG reconstruction and disassembly need to be put through their paces on bigger and harder executables.
  3. BinNavi: Stack frame information should be reconstructed - not by fREedom, but within BinNavi (and via REIL). This will require digging into (and documenting) the powerful-but-obscure type system design.
  4. BinNavi: There has been some bitrot in many areas of BinNavi since 2011 - platforms change, systems change, and there are quite a few areas that are somewhat broken or need updating (for example debugging on x64). Time to brush off the dust :-)
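
To make point 1 concrete, here is a deliberately naive baseline for entry point recognition: scan a code section for the most common 32-bit x86 function prologues. This is my own illustrative sketch, not how fREedom or IDA work - a real recognizer (and certainly the machine-learning approach suggested above) would also use call targets, alignment padding, exception data and many more features:

# Two common encodings of "push ebp; mov ebp, esp".
COMMON_PROLOGUES = [
    bytes.fromhex("558bec"),
    bytes.fromhex("5589e5"),
]

def candidate_entry_points(code, base_addr):
    # Return sorted addresses where a known prologue byte pattern starts.
    hits = set()
    for pattern in COMMON_PROLOGUES:
        idx = code.find(pattern)
        while idx != -1:
            hits.add(base_addr + idx)
            idx = code.find(pattern, idx + 1)
    return sorted(hits)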
Personally, I am both super happy and pretty psyched about fREedom + BinNavi, and I hope that the two can be fully integrated so that BinNavi always has fREedom as default disassembly backend.

Wednesday, December 16, 2015

A decisionmaker's guide to buying security appliances and gateways

With the prevalence of targeted "APT-style" attacks and the business risks of data breaches reaching the board level, the market for "security appliances" is as hot as it has ever been. Many organisations feel the need to beef up their security - and vendors of security appliances offer a plethora of content-inspection / email-security / anti-APT appliances, along with glossy marketing brochures full of impressive-sounding claims.

Decisionmakers often compare the offerings on criteria such as easy integration with existing systems, manageability, false-positive-rate etc. Unfortunately, they often don't have enough data to answer the question "will installing this appliance make my network more or less secure?".

Most security appliances are Linux-based, and use a rather large number of open-source libraries to parse the untrusted data stream which they are inspecting. These libraries, along with the proprietary code by the vendor, form the "attack surface" of the appliance, i.e. the code that is exposed to an outside attacker looking to attack the appliance. All security appliances require a privileged position on the network - a position from which all or most incoming and outgoing traffic can be seen. This means that vulnerabilities within security appliances give an attacker a particularly privileged position - and implies that the security of the appliance itself is rather important.

Installing an insecure appliance will make your network less secure, not safer. If best engineering practices are not followed by the vendor, a mistake in any of the libraries parsing the incoming data will compromise the entire appliance.

How can you decide whether an appliance is secure or not? Performing an in-depth third-party security assessment of the appliance may be impractical for financial, legal, and organisational reasons.

Five questions to ask the vendor of a security appliance


In the absence of such an assessment, there are a few questions you should ask the vendor prior to making a purchasing decision:

  1. What third-party libraries interact directly with the incoming data, and what are the processes to react to security issues published in these libraries?
  2. Are all these third-party libraries sandboxed in a sandbox that is recognized as industry-standard? The sandbox Google uses in Chrome and Adobe uses in Acrobat Reader is open-source and has undergone a lot of scrutiny, so have the isolation features of KVM and qemu. Are any third-party libraries running outside of a sandbox or an internal virtualization environment? If so, why, and what is the timeline to address this?
  3. How much of the proprietary code which directly interacts with the incoming data runs outside of a sandbox? To what extent has this code been security-reviewed?
  4. Is the vendor willing to provide a hard disk image for a basic assessment by a third-party security consultancy? Misconfigured permissions that allow privilege escalation happen all too often, so a basic permissions lockdown should have been performed on the appliance.
  5. In the case of a breach in your company, what is the process through which your forensics team can acquire memory images and hard disk images from the appliance?
A vendor that takes their product quality (and hence your data security) seriously will be able to answer these questions, will be able to confidently state that all third-party parsers and a large fraction of their proprietary code run sandboxed or virtualized and that the configuration of the machine has been reasonably locked down - and will be willing to provide evidence for this (for example a disk image or virtual appliance, along with permission to inspect it).

Why am I qualified to write this?


From 2004 to 2011 I was CEO of a security company called zynamics that was acquired by Google in 2011. Among other things, we used to sell a security appliance that inspected untrusted malware. I know the technical side involved with building such an appliance, and I understand the business needs of both customers and vendors. I also know quite a bit about the process of finding and exploiting vulnerabilities, having worked in that area since 2000.

Our appliance at the time was Debian-based - and the complex processing of incoming malware happened either in memory-safe languages or inside a locked-down virtualized environment (an emulator), on a reasonably hardened Linux machine. This does not mean that we never had security issues (at one point we had XSS problems where strings extracted from the malware could be used to inject into the web UI) - but we made a reasonable effort to adhere to the best engineering practices available to keep the box secure. Security problems happen, but mitigating their impact is not rocket science - good, robust, and free software exists that can sandbox code, and the engineering effort to implement such mitigations is not excessive.

Bonus questions for particularly good vendors

If your vendor can answer the 5 questions above in a satisfactory way, their performance is already head and shoulders above the industry average. If you wish to further encourage the vendor to be proactive about your data security, you can ask the following "bonus questions":
  1. Has the vendor considered moving the Linux on their appliance to GRSec in order to make privilege escalations harder?
  2. Does the vendor publish hashes of the packages they install on the appliance, so that in a forensic investigation it is easy to verify that an attacker has not replaced any of them? (A sketch of such a check follows below.)
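
To illustrate what question 2 enables, here is a small sketch of the kind of check a forensics team could run against a disk image if such hashes were published. The manifest format (one "path<TAB>sha256" pair per line) is something I made up for the example - a real vendor would document their own:

import hashlib
import sys

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(manifest_path):
    # Flag every file that is missing or whose hash differs from the manifest.
    for line in open(manifest_path):
        path, expected = line.rstrip("\n").split("\t")
        try:
            actual = sha256_of(path)
        except OSError as err:
            print("MISSING  %s (%s)" % (path, err))
            continue
        if actual != expected:
            print("MODIFIED %s" % path)

if __name__ == "__main__":
    verify(sys.argv[1])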

Monday, May 25, 2015

Why changes to Wassenaar make oppression and surveillance easier, not harder

Warning to EU readers: EU writing culture tends to lay out the arguments first and draw a strong statement as the conclusion; US writing culture seems to prefer a strong statement up front, followed by the arguments. This article follows the US convention.

Adding exploits to Wassenaar was a mistake if you care about security


The addition of exploits to the Wassenaar Arrangement is an egregious mistake for anyone that cares about a more secure and less surveilled Internet. The negative knock-on effects of the agreement include, but are not limited to, the following:

  • It provides governments with a massive coercive tool to control public security research and disadvantage non-military security research. This coercive power need not be exercised in order to chill public research and vulnerability disclosure.
  • It tilts the incentive structure strongly in favor of providing all exploits to your host government, and makes disclosure or collaborative research across national boundaries risky.
  • It provides a way to prohibit security researchers from disseminating attack tools uncovered on compromised machines.
  • It risks fragmenting, balkanizing, and ultimately militarizing the currently existing public security research community.

The intention of those that supported the amendment to Wassenaar was to protect freedom of expression and privacy worldwide; unfortunately, their implementation achieved almost the exact opposite. 

With friends of such competence, freedom does not need enemies. The changes to Wassenaar need to be repealed, along with their national implementations.

A pyrrhic victory with unintended consequences


In December 2013, activists worldwide celebrated a big success: Intrusion Software was added to the list of technologies regulated by the Wassenaar Arrangement. In the cyber activist community, people rejoiced: Finally, the people they call "cyber arms dealers" would no longer be able to act with impunity. Oppressive regimes would no longer be able to buy the software that they use for mass surveillance and repression. A victory for freedom and democracy, no doubt.

Unfortunately, the changes to the regulation have horrible knock-on effects for security research, privacy, and society at large. The change to the Wassenaar Arrangement achieves the exact opposite of what it was intended to do.

The difficulties of being a security researcher


To discuss the many ways in which this regulation is flawed requires some background on the difficulties faced by security researchers worldwide:

Security research is an activity fraught with many difficulties. There are few historical precedents where a talented 20-year-old can accidentally conceive a method capable of intruding into hundreds of well-fortified institutions and exfiltrating information from them. The best (if very cheesy) analogy I can come up with to explain the difficulties of young researchers is the X-Men comic books -- teenagers discover that they have a special ability, and are suddenly thrust into a much bigger conflict that they have to navigate. One week you're sitting in your room doing something technically interesting; a few weeks later people in coffee shops or trains may strike up a conversation with you, trying to convince you that government X is evil or that you could really be helpful in fighting terrorist organisation Y.

Security researchers face a fundamental problem: In order to prove exploitability, and in order to be 100% sure that they are not crying wolf, they need to demonstrate beyond any doubt that an attack is indeed possible and reliable. This means that the researcher needs to build something that is reliable enough to be dangerous.

Once successful, the researcher is in a very difficult spot -- with no evident winning move. What should he/she do with the exploit and the vulnerability?

Different people will posit different things as "the right behavior" at this point. Most people argue the researcher should provide the vulnerability (and sometimes the exploit) to the software vendor as quickly as possible, so that the vulnerability can be fixed. This often comes with risk -- for many closed-source programs, the researcher had to violate the EULA to find the vulnerability, and many vendors perceive vulnerability reports as attention-seeking at best and blackmail at worst. In the best case, reporting simply involves some extra work that will not be compensated; in the worst case, the researcher will face legal and/or extralegal threats from the party he/she is reporting the vulnerability to.

After the researcher hands over the vulnerability and exploit, he/she is often made to wait for many months -- wondering whether the people he/she provided the code to will fix the issue as swiftly as possible -- or whether they are silently passing the information on to third parties. In many cases, he/she will receive little more than a handshake and a "thank you" for the effort.

At the same time, various parties are likely to offer him/her money for the vulnerability and the exploit -- along with a vague promise/assurance that it will only be used for "lawful purposes". Given the acrobatics and risks that responsible disclosure carries, it is unsurprising that this offer is tempting. Anything is better than "legal risk with no reward".

Partly to alleviate this imbalance, more mature vendors have begun introducing bug bounties -- small payments that are meant to encourage disclosing the bug to the vendor. The sums are smaller than in the grey market, but -- by and large -- enough to compensate for the time spent and to offer the researcher positive recognition. Talented researchers can scrape out a living from these bounties.

Security researchers are in an odd position: In order to check the validity of their work, they need to create something with the inherent potential for harm. Once the work is shown to be valid, the result becomes the object of desire of many different military and intelligence organisations which, given the general scarcity of "cyber talent", would love to get these researchers to cooperate with them. The software industry has grudgingly accepted the existence of the researchers, and the more mature players in that industry have understood the value of their contributions and tried to re-shape the playing field so that getting security issues reported and fixed is incentivized.

The researcher has to balance a lot of difficult ethical deliberations -- should the exploit be sent to the vendor? Can the people that wish to buy the exploit on the grey market be trusted when they claim that the exploit will save lives as it will be used to prevent terrorist strikes? What is a reasonable timeframe for a vendor to fix the issue? Can disclosure accelerate the process of producing a fix and thus close the window of opportunity for the attacker more quickly? Is fixing the issue even desirable?

There are no 100% simple answers to any of the above questions -- each one of them involves a long and difficult debate, where reasonable people can disagree depending on the exact circumstances.

Adding exploits to Wassenaar, and shoddily crafted regulation


The Wassenaar Arrangement is not a law by itself -- it is an agreement by the participating countries to pass legislation in accordance with the Wassenaar Arrangement, which then stipulates that "export licenses" have to be granted by governments before technology listed in the agreement can be exported.

From an engineering perspective, it is good to think of Wassenaar as a "reference implementation" of a law -- different legal systems may have to adapt slightly different wording, but their implementation will be guided by the text of the arrangement. The most damaging section in the new version reads as follows:
Intrusion software:
"Software" specially designed or modified to avoid detection by 'monitoring tools', or to defeat 'protective countermeasures', of a computer or network capable device, and performing any of the following:
a. The extraction of data or information, from a computer or network capable device, or the modification of system or user data; or
b. The modification of the standard execution path of a program or process in order to allow the execution of externally provided instructions.
This formulation is extremely broad -- any proof-of-concept exploit that defeats ASLR or a stack canary (or anything else for that matter) and achieves code execution falls under this definition. Even worse -- by using a formulation such as "standard execution path" without properly defining how this should be interpreted, it casts a shadow of uncertainty over all experimentation with software. Nobody can confidently state that he knows how this will be interpreted in practice.

Legal uncertainty and coercive power to nation states


Legal uncertainty is almost always to the benefit of the powerful. Selective enforcement of vague laws that criminalize common behavior is a time-honored technique -- perhaps best embodied in Beria's famous statement "Find me the man and I will find you the crime".

The principle is simple: As long as a person conforms to the expectations of society and the powerful, the laws are not stringently applied -- but as soon as the person decides to challenge the status quo or not cooperate with the instruments of power, the laws are applied with full force.

I live in Switzerland. Most other security researchers that I like to collaborate with are spread out all over the world, and I routinely travel across international borders. Occasionally, I carry working exploits with me -- I have to do this if I wish to collaborate with other researchers, discuss new discoveries, or perform investigations into reliable methods of exploitation. Aside from the physical crossing of borders, it is routine to work jointly on a shared code repository across borders and timezones.

The newly created legal situation criminalizes this behavior. It puts me personally at huge risk -- and gives local governments a huge coercive tool to make me hand over, ahead of time, any information about zero-days I may find: If I do not apply for an export license before visiting another researcher in Germany or France or the US, I will be breaking export control regulations.

The cyber activists that celebrated Wassenaar have mainly made sure that every security researcher who regularly leaves his country to collaborate with others can easily be coerced into cooperating with his host government. They have made it easier for all governments to obtain tools for surveillance and oppression, not harder.

A story of Elbonia and Wassenaar


Let us imagine a young researcher Bob in a country named Elbonia. Elbonia happens to be a Wassenaar member, and otherwise mostly under the rule of law with comparatively solid institutions. 
Bob finds a number of vulnerabilities in a commonly used browser. He reports these to the vendor in the US, and the vendor publishes fixes and awards bug bounties.

The domestic intelligence service of Elbonia has a thorny problem on the counterterrorism side -- separatists from one of their provinces have repeatedly detonated bombs over the last few years. The intelligence service could really use some good exploits to tackle this. Unfortunately, they have had difficulties hiring in recent years, and they do not have the tooling or expertise -- but they do see the security bulletins and the bug bounties awarded to Bob.

They decide to have a coffee with Bob -- who does not want to work with the intelligence service and would prefer to get the problems fixed as quickly as possible. The friendly gentlemen from the service then explain to the researcher that he has been breaking the law, and that it would be quite unfortunate if this led to any problems for him -- but that it would be easy to get permission for future exports, provided that a three-month waiting period is observed in which the Elbonian intelligence service gets to use the exploits for protecting the national security and territorial integrity of Elbonia.

What would the researcher do?

Balkanising and nationalising an international research community


The international security research community has greatly contributed to our understanding of computer security over the last 20+ years. Highly international speaker line-ups are standard, and cooperation between people from different nations and continents is the norm rather than the exception.

The criminalization of exporting exploits risks balkanising and nationalising what is currently a thriving community whose public discussion of security issues and methods for exploitation benefits everybody.

The implementation of Wassenaar makes it much easier and less risky to provide all exploitable bugs and associated exploits to the government of the country you reside in than to do any of the following:
  • report the vulnerability to a vendor and provide a proof-of-concept exploit
  • perform full disclosure by publishing an exploit in order to force a fix and alert the world of the problem
  • collaborate with a researcher outside of your home country in order to advance the state of our understanding of exploitation and security

Wassenaar heavily tilts the table toward "just sell the exploit to your host government".

Making "defense" contractors the natural place to do security research


With the increased risk to the individual conducting research, and a fragmentation of the international research community, what becomes the natural place for a security researcher to go to pursue his interest? What employer is usually on great terms with his host government, and can afford to employ a significant number of other researchers with the same nationality?

The changes to Wassenaar make sure that security researchers, who in the past few years have been recruited in large numbers into large, private-sector, consumer-facing companies, will have much less attractive prospects outside of the military-industrial complex. A company that does not separate people by nationality internally and is unused to heavy classification and compartmentalisation will simply not want to run the risk of violating export-control rules -- which means that the interesting jobs will be offered by the same contracting firms that dominate the manufacturing of arms. These companies are much less interested in the security of the global computing infrastructure -- their business model is to sell the best methods of knocking out the civilian and military infrastructure of their opponents.

Will the business of the defense contractors be impacted? This is highly doubtful -- the most likely scenario is that host governments for defense contractors will grant export licenses, provided that the exploits in question are also provided to the host government. The local military can then defend their systems while keeping everybody else vulnerable -- while keeping good export statistics and "protecting Elbonian jobs".

Everybody that cares about having more secure systems loses in this scenario.

Weakening defensive analysis


The list of knock-on effects that negatively affect the security of the Internet can be extended almost arbitrarily. The changed regulation also threatens to re-balance the scales to make the public analysis of surveillance and espionage toolkits harder and riskier.

The last few years have seen a proliferation of activist-led analysis of commercial surveillance tools and exploit chains -- publicly accessible analysis reports that disassemble and dissect the latest government-level rootkit have become a regular occurrence. This has, no doubt, compromised more than one running intelligence-gathering operation, and in general caused a lot of pain and cost on the side of the attackers.

Some implants / rootkits / attack frameworks came packaged with a stack of previously-unknown vulnerabilities, and most came with some sort of packaged exploit.

It is very common for international groups of researchers to circulate and distribute samples of both the attack frameworks and the exploits for analysis. Quick dissemination of the files and the exploits they contain is necessary so that the public can understand the way they work and thus make informed decisions about the risks and capabilities.

Unfortunately, sending a sample of an attack framework to a researcher in another country risks being made illegal.

On export controls and their role in protecting privacy and liberty


There is a particular aspect of the lobbying for adding exploits to Wassenaar that I personally have a very hard time understanding: How did anyone convince themselves that the amendments to Wassenaar were a good idea, or that export control would be helpful in preventing surveillance?

Any closer inspection of the agreement and the related history will bring to light that it was consistently used in the past to restrict the export of encryption -- as you can read here, Wassenaar restricted export of any mass-market encryption with key sizes in excess of 64 bits in the late 1990s. 

Everybody with any understanding of cryptographic history knows that export licenses were used as a coercive mechanism by governments to enhance their surveillance ability -- "you can get an export license if you do key escrow, or if you leak a few key bits here and there in the ciphertext". To this day, the security of millions of systems is regularly threatened by the remains of "export-grade encryption".

Even today, there are plenty of items on the Wassenaar restrictions list that would have great pro-privacy and anti-surveillance implications -- like encrypted, frequency-hopping radios. 

I have a difficult time understanding how anyone that claims to support freedom of expression and wishes to curb surveillance by governments would provide additional coercive power to governments, instead of advocating that encrypted, frequency-hopping radios should be freely exportable and available in places of heavy government surveillance.

Summary


While the goal of restricting intrusive surveillance by governments is laudable, the changes to Wassenaar threaten to achieve the opposite of their intent -- with detrimental side effects for everybody. The changes need to be repealed, and national implementations of these changes rolled back.

(Thanks to all pre-reviewers for useful comments & suggestions :-) 

Monday, January 13, 2014

Full-packet-capture society - and how to avoid it

After my previous post on the need for intelligence reform, this post discusses a concrete policy recommendation - but before that, I will describe what is at stake.

I see our world on a trajectory into a dystopian future that is frightening and undesirable. Technological progress is deeply transforming our societies, and while most of it is for the better, we need to step back occasionally and look at the bigger picture.

Storage costs are on a steep downward curve, similar to CPU costs -- except that the end of Kryder's Law (the storage equivalent of Moore's Law) is not in sight yet. Kryder's conservative forecast in 2009 estimated that a zettabyte of storage will cost about 2.8 billion USD by 2020. Extrapolating that prices will halve roughly every two years, this means that a zettabyte might be as cheap as 100 million USD sometime between 2030 and 2040.
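
A quick back-of-the-envelope check of that extrapolation, using only the two premises above (2.8 billion USD per zettabyte in 2020, prices halving every two years) - the script is just illustrative arithmetic:

COST_2020 = 2.8e9  # USD per zettabyte, Kryder's 2009 forecast for 2020

for year in range(2020, 2042, 2):
    halvings = (year - 2020) // 2
    cost = COST_2020 / (2 ** halvings)
    print("%d: ~%.0f million USD per zettabyte" % (year, cost / 1e6))

# Under this naive halving assumption, the price crosses 100 million USD
# per zettabyte around 2030 and sits near 3 million USD by 2040.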

All human speech ever spoken, sampled at 16 kHz audio, is estimated to amount to roughly 42 zettabytes. This means that by the time I reach retirement age, storage systems that can keep a full audio transcript of everything humanity has said in the last 10 years will be within the reach of many larger nation states. Perhaps I will live long enough to get CD quality, too. Impressive, but also terrifying.

A future where every word ever spoken and every action ever taken is recorded somewhere will lead to a collapse of what we understand as freedom in society. There is good reason that both the East German Stasi and the KGB kept vast troves of "kompromat" on anyone who showed political ambitions - such data was useful to discredit people who were politically active or to blackmail them into cooperation.

The trouble with kompromat is, though, that nobody needs to actually use it, or threaten its use, for it to become an effective deterrent to political activity. We can see this in western societies already: It is not uncommon for qualified and capable individuals to decide against standing in elections for fear of having their lives examined under a microscope. When everything you have ever done has been recorded, are you sure that none of it could be used to make you look bad?

What about the famous "three felonies a day" that even well-meaning and law-abiding citizens run into?

Clapper's argument that "it isn't collection until you look at it" is disingenuous and dangerous. By this logic, vast files tracking people's lives in pedantic detail are not problematic until that data is retrieved from a filing cabinet and read by a human. Transported into the East Germany of the early 80s, his logic says that collecting excruciating detail about people's private lives was OK, and that things only went wrong when the Stasi actively used this data.

The discussion about whether phone metadata records should be held by the government or by private entities does not matter. Data should only be held for the period necessary to perform a task, and storing data beyond this period without allowing people to view / edit / remove it carries the implicit threat that this data may be used to harm you in the future. Involuntary mass retention of data is oppressive. And while checks and balances exist now, we cannot be sure how they will hold up over time. Deleting the data is the only prudent choice.

Well-intentioned people can build highly oppressive systems, and not realize what they are doing. Erich Mielke, who had built the most oppressive security agency in living memory in order to protect "his" country from external and internal foes, famously said "but I love all people" in front of the East German Parliament. He did not grasp the extent of the evil he had constructed and presided over.

Nobody wants a full-packet-capture society. It is fundamentally at odds with freedom. Arbitrary collection and retention of data on people is a form of oppression.

Policy recommendation: A different form of SIGINT budget


How do we balance the need to protect our countries against terrorism and foreign aggression with the need for privacy and data deletion that is necessary to have functioning democracies and non-oppressive societies?

This question has been much-discussed in recent months, culminating in a set of recommendations made by the panel of experts that the Obama administration had convened.

I agree with the principles set forth in the above-mentioned document, and with several of the recommendations. On the other hand, I feel that some of the recommendations focus too narrowly on the "how" of collection, rather than policing the overall goal of avoiding mass surveillance. Regulations that are overly specific are often both cumbersome in practice and easily side-stepped -- properties you do not want in a law.

My personal recommendation is a different form of SIGINT budget: Aside from the monetary budget that Congress allots to the different intelligence agencies, a surveillance budget would be allotted, of a form similar to:
For the fiscal year 2014, you are allowed to collect and retain data on [X number] citizens and [Y number] non-citizens for a year. Restrictions on the purposes of collection and retention of data still apply.
This budget could be publicly debated - and it would make sure that data collection is focused on the areas that truly matter, instead of rewarding SIGINT middle managers who try to improve their career trajectories by showing how much "more data" they can collect. The budget would be accounted for in "storage-hours" to create incentives for early deletion. People could get promoted by showing the ability to do the same work while retaining less data, or retaining the data for briefer periods.

This may look similar in practice to the way cloud providers (like Amazon) charge for storage. The agencies get to store and keep data, but they get charged internally for this, daily or weekly. Retain too much data and your collection system runs out of budget - but you can free up budget by deleting old data. The overall budget is public, so the public can have a clear view of how much data is collected under all programs, instead of the undignified spectacle of "we do not collect this data under this program" non-denials.
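
As a toy illustration of the accounting (all names and numbers below are made up; this is a sketch of the idea, not a proposal for a concrete system), charging retained records per hour and letting deletion free up budget could look roughly like this:

class SurveillanceBudget:
    """Toy ledger: every retained record costs one storage-hour per hour."""

    def __init__(self, yearly_storage_hours):
        self.remaining = yearly_storage_hours
        self.holdings = {}  # dataset name -> number of retained records

    def ingest(self, dataset, records):
        self.holdings[dataset] = self.holdings.get(dataset, 0) + records

    def delete(self, dataset):
        # Deleting a dataset stops the hourly charges for it immediately.
        self.holdings.pop(dataset, None)

    def charge_one_hour(self):
        cost = sum(self.holdings.values())
        if cost > self.remaining:
            raise RuntimeError("surveillance budget exhausted - delete data "
                               "or ask the legislature for an emergency loan")
        self.remaining -= cost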

The big trouble with sniffing internet traffic is that it is fundamentally addictive. You can see the spiral of escalation in almost every criminal hacking career. It is easy to underestimate how strongly the same addictive property of data collection applies to organisations. Middle managers can shine by showing growth in collection, upper management can speak of "total domain dominance" and similarly powerful-sounding words. Collection becomes an end in itself. By imposing hard limits on the number of people whose lives can be touched through surveillance, we make sure that our efforts are focused on the real problems -- and remain liberty-preserving.

If, for whatever reason, a SIGINT agency runs out of "surveillance budget" in a given fiscal year, it can always ask Congress to grant an "emergency loan" - provided, of course, that this remains an exception.

Public budgeting and proper accounting of retained data, implemented in modern democracies, would give citizens a clean and understandable method to evaluate and discuss the extent of governmental data collection for national security, without introducing detailed micro-management rules on the "how" of collection. It would provide a clear answer to "how much data are you actually keeping?", and create strong incentives for early data deletion. It is not perfect, but it may be the cleanest way of achieving both the security and the privacy that a free society needs.



[Many people helped improve this article by proofreading it and offering helpful suggestions or interesting counterarguments. Aside from anonymous help, I have to thank Dave Aitel, Chris Eng, Vincenzo Iozzo, Felix Lindner, Window Snyder, Ralf-Philipp Weinmann and Peiter Zatko]

Sunday, January 12, 2014

Why Intelligence Reform is necessary

This is the first part of a two-part blog post on the need for intelligence reform.

Why do I even feel entitled to an opinion?

I have been dealing with the technical side of computer network attacks for more than 15 years, and have written exploits for about as long as the now-famous "tailored access operations" team inside the NSA has existed. Many people consider me to be an expert on all things related to reverse engineering and exploitation. Through my work, I have had as much exposure to government-organized hacking as you can have without getting a clearance. I understand this stuff, and as a firm believer in the ability of democracies to right themselves through informed debate, I feel the need to stray from my usual technical stomping grounds and talk about politics.

Over the years, I have met and talked with a number of people who used to work in, or close to, the intelligence community. I have found the vast majority of them to be conscientious, hard-working, idealistic (after all, pay in the government sector is often significantly below the private sector, so a sense of duty plays a large role), and overall good people. In political discussions, we had more commonalities than disagreements. Politically, while I am slightly left-of-center on many questions, I am a defense and intelligence hawk (at least by European standards) - I do believe that intelligence agencies have a legitimate role to play in both foreign policy and counter-terrorism, and I am aware enough of the realities of international law to know that countries that neglect their defense and intelligence organizations do so at their own peril.

At the same time, having grown up in a country more heavily burdened by historical abuse of state security institutions than most, and in a region of the world where - in living memory - many countries lost 5-10%+ of their entire population in wars fueled by nationalist ideals, I am instinctively worried about concentrating excessive powers in state security institutions. I am also easily alarmed by nationalist thoughts and ideology.

The Snowden revelations, but much more so the reactions to the Snowden revelations, have caused me to think about the implications of the technological changes we are in the midst of - for both society and surveillance. I conclude that our societies need a reform of the legal frameworks for signals intelligence in a digitized world - not only in the English-speaking countries, but also in all those countries that aspire to obtain the same capabilities.

Policy ideas are always the result of a combination of practical considerations and personal ideology. In order to be transparent with my personal ideology, I should explain as much of it as possible before delving into my ideas for reform. To do this, I will address a few common arguments that I have encountered that express incredulity at the public outrage, and explain why I think the outrage is (partially) justified.

"The Russians and Chinese are much worse, so where's the outrage about them?"

People are outraged at the disclosures about widespread espionage by English-speaking countries while they are not outraged by Russian or Chinese espionage because people expect different behavior from friends than from adversaries. Most of the world considers the English-speaking countries to be committed to principles of democracy, justice, and fairness. When dealing with them, these countries are treated as friends and allies. Nobody in central Europe for example is worried about a US invasion, while a faint fear of Russian invasion is never far away. 

Expectations are different when it comes to Russia or China: These countries have such an abysmal record on human rights and on questions of the rule of law that nobody expects anything from them. Russia is, for all purposes, treated as an aging and wounded bear, unpredictable but still dangerous. China is even compared to 1910-1914 Germany in the current issue of "The Economist", hardly a flattering comparison.

In short, it is entirely normal to expect different behavior from your friends than from your enemies or rivals. Having your apartment burgled by a known criminal gang is one thing; having your friend, whom you have had over for dinner repeatedly, burgle your apartment is a very different thing.

"We do not violate the privacy of our own citizens, and everything we do is outside our territory, so what's the damage?"

The problem with this argument is a discrepancy between the legalistic interpretation of the constitution and the emotional interpretation of the constitution - a discrepancy between "the letter of the law" and "the spirit of the law". 

A constitution is aspirational - it outlines the basic principles and values to which a society aspires. These principles are universally recognized by a country's population as "the right thing to do".

In practice, though, the US cannot reasonably grant the rights in the 4th Amendment to people living in China, and Germany could not enforce the constitutionally guaranteed equality of all humans in apartheid-era South Africa. As a result, Constitutional rights end at borders. It is important to keep in mind, though, that this is not because we think that Chinese do not deserve protection from unreasonable search & seizure, or because we think that Freedom of Speech should not apply outside of our borders - but only because we are in no practical position to grant rights to someone living under the jurisdiction of another government. (There is the other matter that we'd violate international law, but if history is any guide, international law does not exist unless the strongest player wants to enforce it).

Nobody extends their constitutions across their borders because it would mean intervening in other countries. But the principles in the constitution are good principles, and we should try to adhere to them wherever possible. We cannot force the Chinese government to allow Freedom of Speech in China, but that does not mean that it would be OK for us to further suppress Freedom of Speech there - just because China happens to be outside of our borders.

Secondly, there is the Universal Declaration of Human Rights. This is as close to a universal constitution as humanity has gotten, and it explicitly states in article 12:
No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.
The UDHR is a good document, and one that all important powers signed after the atrocities of the two world wars. The US was a driving force in drafting it and getting it ratified - why are we completely ignoring it now, arguing that privacy protections do not apply to non-citizens outside of our territory?

"Corporations collect vastly more data, and they are not under democratic control."

There is an important bit of truth in this statement: Corporations are collecting ever-more data, and it is quite unclear whether existing legal frameworks are sufficient to protect privacy. In my personal opinion, all developed nations should pass legislation that enforces something similar to the OECD's "seven principles for the protection of personal data", and hold companies accountable for this. People need to understand what data is collected, for what purpose, and have wide-ranging ability to inspect, edit and delete the collected data. 

At the same time, the argument that insufficient legal oversight in one area justifies insufficient legal oversight in another area is clearly wrong. Both areas, corporate and government data collection, need to have their oversight fixed.

"Sufficient controls are in place to prevent abuse of power"

I'd be strongly inclined to believe this argument - but there are two important points that we should keep in mind. First off, checks and safety procedures are hardly ever perfect, and tend to erode in times of crisis. One could say that most democracies are two terror strikes and one opportunist away from a dictatorship, and safeguards are much more quickly eroded than they are rebuilt. Democratic societies need to stay in constant debate about where the limits of surveillance are supposed to lie.

I believe that today the controls in the US are sufficient to prevent the most egregious abuse of power. I do not have much faith, though, that they would survive one major terrorist strike combined with a wrongly ambitious president or vice president.

Legal safeguards in a democracy buy you time. If you elect a madman, dismantling the safeguards will take him some time. Hopefully, the safeguards take longer than 8 years to dismantle. Being a security-minded person, I'd like to have some margin of error on this.

The second point to consider is that of "creeping abuse". Post 9/11, exceptional powers were granted to the security apparatus to protect our societies from further terrorist strikes. These powers were explicitly granted for counter-terrorism. The natural inclination of the security apparatus is then to slowly and carefully widen the definition of terrorism. We can see this in action: Glenn Greenwald's partner, David Miranda, was detained under legislation explicitly drafted for counter-terrorism - using powers only granted for fighting terrorists bent on mass killing, something Mr Miranda was clearly not about to do. We have also watched Mr Clapper publicly twist the meaning of the word "collection" until it implied that a stamp collector doesn't collect stamps unless he looks at them.

In short: I am uncomfortable with what I perceive is an insufficiently wide safety margin against abuse - and we have all seen an abuse of anti-terror legislation for an entirely unrelated cause, that of self-defense of the security organizations against embarrassment. We need much stronger safeguards, and much more transparency.

"Spies spy, why are people surprised?"

I am not surprised, or even particularly worried, about state-to-state espionage. My opinion on this is that where matters are truly vital (nuclear proliferation, questions of war and peace etc.) intelligence collection should lead to better-informed leaders and hopefully peaceful outcomes. 

My ethics dictate that strength should not be abused - e.g. I would consider it unethical for a strong developed nation to use espionage against a weak developing country to get a leg up in trade negotiations - but in general, nobody is surprised or outraged that the people in the White House want to know what the leaders in Tehran are thinking, and vice versa.

People are surprised because governments everywhere have been hesitant to explain to their own population what exactly intelligence agencies do. Similar to internet companies that hide the true extent of data collection in a gigantic EULA that no user understands, governments everywhere "hide" what these agencies do in plain sight: Large quantities of dispersed legalese and vague formulations. 

Democratic governments need to become better at explaining what these agencies are for and what the exact authorities and limitations of these agencies are. Voters can then decide if they are cool with that. The historical tendency to hide these organisations from public view is wrong, antidemocratic, and ultimately harmful to both the democracies and the mission of these organisations.

"Everybody does it and has always done it!"

One could easily get into an argument about whether this statement is true or not - historically, many countries (including the US) only performed intercept and cryptanalysis during times of war. Then again, when politicians tried to disband signals intelligence (SIGINT) organisations, these organisations had a tendency to be preserved elsewhere in the bureaucracy. So even if we accept that SIGINT collection in times of peace is an unchangeable fact of life, the nature of collection has changed significantly in recent decades.

Even during the height of the cold war, when the US had all its ears focused on Russia, the odds that some random Russian person had their communication intercepted and archived by the US were near-zero.

The technological explosion we're living in changed this: International communication has grown exponentially, and it is likely that the majority of the population of most industrialized nations have participated in communications that were intercepted (if not necessarily read by a human being).

This is a radical change. Technology has amplified everybody's ability to communicate, but also created a society where virtually everybody's data has been touched by one, if not more, security organisations - both domestic and foreign. The legal framework has simply been outpaced by technological progress, and the security agencies have been extremely happy to not draw attention to this.

This new reality needs to be addressed - not only in the countries that were hit by the recent revelations, but in all modern democracies (many of which have even weaker oversight over their intelligence agencies than the famous "5 eyes"). 

Summary:

Technology has changed the world, vastly expanding everybody's ability to communicate - but at the same time, also vastly expanding not only the potential for surveillance, but actual surveillance. 

Intelligence collection should not be done "in bulk" - a regular person should have negligible odds of ever having their communication intercepted and archived.

Intelligence reform is needed - in all modern democracies - to ensure that people can have privacy, to combat the mistaken view that "all is fair if it's not on my territory", and to strengthen the safeguards against abuse.

My next post will talk a bit more about what reforms should be enacted, and what may happen if we fail to act.

Sunday, June 02, 2013

Analogies, Piracy, Attribution, and the law of unintended consequences

I learnt a few interesting lessons this week...

I was honored to be invited as a keynote speaker at SOURCE Dublin 2013, which was held on the 23rd/24th of May. Given that I had nothing technical to speak about, and given that keynote talks are not supposed to be technical, I needed to come up with an entertaining topic related to IT security.

A few weeks earlier, I had watched Dave Aitel give an interview on TV somewhere, and he said a particular sentence that I found thought-provoking: He compared the hiring of computer security folks by the DoD to "building a new Navy" that will control the trade routes of the future, the internet.

I found the thought interesting, and decided to see how far I could stretch this sentence. When thinking about the internet as the ocean, one is half a step away from the word "Pirates". And boy, everybody loves hearing about Pirates - the topic is rich in historical sources of dubious veracity and colorful lore. Clearly, I had found a great way to entertain the audience for 50 minutes.

So I used this as an excuse to buy and read some books on the history of piracy, and managed to construct what I immodestly think is a great piece of entertainment - a talk that manages to engage the audience, draw parallels between the early Boucaniers and Hackers (it's always good to flatter the audience a bit), comment on how the Boucaniers turned into Privateers, and generally get people to dream a bit. (Slides)

While flattering the audience is all good and well, I also wanted to get the audience to question something they believe in. With everybody arguing about the evils of government-tolerated industrial espionage, I wanted to tell the audience that one country's criminal is often another country's hero - and what better way to do that than to draw a parallel between today's attackers and 16th-century Britain, an upcoming power attempting to gain the upper hand against almighty Spain.

All in all, I gave the talk (somewhat nervously), and I think I managed to engage and entertain the audience. I was quite happy with how it went, particularly because I was extremely nervous about having to give a presentation with no technical verifiable truth in it.

I had expected the talk to be an entertaining diversion, with little real relevance. Now, some very surprising things happened after the talk:

First, a number of people took the talk way too seriously, attempting to derive policy recommendations from my very tenuously constructed analogy. Analogies are great for examining a problem - given something unknown, there are few more interesting activities than to construct different analogies and then reason about where they fit and where they do not fit. Thus, they are great tools for understanding and examining - while being dangerously bad for prediction and policy advice.

Secondly, a different set of people began arguing that the analogies are flawed because at some lower level of abstraction they break down ("I can't use the internet to turn sewage into shrimp, hence the internet can't be like an ocean"). I had difficulty understanding the effort and emotion people put into finding places where the analogy breaks down - given that I had meant it as entertainment, they seemed to me like the guy at a superhero movie who complains that some action scene was unrealistic.

Then something else happened that caught me completely off-guard: Almost exactly one week after my presentation went online, the NYT published an op-ed by JC Hirsch and Sam Adelsberg titled "An Elizabethan Cyberwar" - which seemed clearly inspired by my keynote, down to individual details of my constructed analogies. The article takes the Britain/Spain analogy, mentions the deniability afforded to the British Crown by the privateering constructs that I had highlighted, and then proceeds to provide policy advice based on this.

I was stunned - first, that something I had constructed for entertainment would end up inspiring an NYT op-ed a week later, and second, that people were really trying to construct advice from it.

To clarify: I used Dave's analogy of "the internet as sea" and constructed the analogy to the Spanish Main as a form of entertainment, something to discuss over a glass of wine - not as something that should be used to draw any real-life lessons about the internet, or about cyberwarfare.

So what real-life lessons did I learn through this? A good analogy is like a good joke: It is impossible to contain, travels fast, and can have surprising unintended consequences. Also, everybody is so desperate to understand "the internet" that the path from "small conference talk in Dublin" to "heavily influencing an NYT op-ed" is short. This highlights how little we understand technology's impact, and how much even tenuously constructed analogies fill an emotional need. Finally, as with good jokes and cyber attacks, attribution for good analogies seems hard - selfishly, I would have really liked a footnote to the op-ed.

Update: It seems analogies are like 0days - often discovered by multiple parties in parallel, confusing anyone who wants to do attribution :-). The authors of the NYT op-ed had developed these ideas independently prior to my talk, and had merely delayed the publication of the article due to current events. They were not influenced in any way by my keynote. :-)