BOSTON — Edward Snowden talked live via satellite with OpenStack COO Mark Collier about why open source matters, his toolkit and how fear has become the most common political value. Here’s the complete 25-minute Q&A that closed out the day-two keynote of the OpenStack Summit Boston. You can also catch the video here.
There are thousands of people here from the OpenStack community, building the cloud in over 60 countries. What’s your take on cloud computing and what does that mean from your point of view?
SNOWDEN: There’s a lot of different ways to look at it. In the abstract sense – what does the ordinary user think about cloud, what does it mean for them? Cloud means Google apps, Gmail — they’ve got their stuff on someone else’s computer.
On the other hand we have what you guys do, the IaaS layer, which is increasingly becoming the bones of the internet, the thing that we build on. One of the most dangerous things we can see happen in this space is people adopting it unthinkingly, and one of the things that you guys do best is help the people who are in a place to make the decisions make them in a considered way. For most people, the internet is kind of magic. It just happens. They look at it on their smartphones, their Facebook app is the internet, but that’s not enough.
“What OpenStack does is make you lose that inherent silent vulnerability of investing in things that you do not influence, you do not own, you do not control or even shape.”
We can’t let people go at this mindlessly, effortlessly, when they’re in the act of building rather than consuming. It raises the question of why, particularly when there are these for-profit alternatives at the infrastructure-as-a-service layer. You could use EC2, Google Compute Engine or whatever; they’re fine, they work.
The problem is that they are fundamentally disempowering. You give them money and in exchange you’re supposed to be provided with a service. But you’re providing more than money: you’re also providing them with data, and you’re giving up control, you’re giving up influence. You can’t shape their infrastructure; they’re not going to change things and tailor them to your needs. You end up reaching a point where, yes, things are portable to a certain extent, you can containerize and shift things around, but you’re sinking costs into an infrastructure that is not yours, fundamentally.
What OpenStack does is make you lose that inherent silent vulnerability of investing in things that you do not influence, you do not own, you do not control or even shape. Whereas with OpenStack, you build it layer on layer; it requires more technical understanding, but as it grows and complies with the free and open values that open source drives, we can start to envision a world where cloud infrastructures are private not in the sense of a private corporation but in the sense of a private person. Whether you’re a small business, a large business or a community of technologists, you can own it, control it, you can shape it. You can lay the foundation on which everybody builds. I think that’s one of the most powerful ideas shaping the history of the internet, and it will hopefully allow us to direct the future in a more free, rather than more closed, way.
Some members of the OpenStack community wanted to know about your experience with open source, what’s in your toolbox?
Probably my most famous involvement with open source was at the NSA in 2013, which people don’t think about too much in depth because the NSA runs on Windows, [although] they have Linux machines and servers. And for the NSA and the CIA, through the Vault 7 leaks, we’ve seen that they’re very aggressive GPL violators.
But there’s the journalistic side: how did we make this happen? How did we effectuate the return of public information and reveal unlawful, unconstitutional activity? That was almost entirely powered by open source. The Tor guards that I was going through, I stood up myself; they were running on Debian. All the journalists involved were using Tails, because I wanted to limit the number of mistakes they could make, and they weren’t specialists.
“We don’t work for governments, we don’t work for states, we don’t work for corporations. We should be working for the spirit of technology itself, moving people closer to a more empowered future.”
The Tor Project was the most critical piece. That’s not to say that it will secure everyone from everything forever, but it gave them enough breathing room to make things happen. Since then, I was the director and am now the president of the Freedom of the Press Foundation. My primary work there since joining has been expanding the open source efforts that we have in house, such as SecureDrop.
There are a lot of other interesting efforts coming this year, for example in open hardware. I gave a talk last year at MIT’s Forbidden Research event with Andrew “bunnie” Huang on an ‘introspection engine’ for modern smartphones.
It gets back to this infrastructure issue we were talking about earlier: you’re running things on Google’s stack or Amazon’s, but how do you know when it starts spying on you? How do you know when your image has been passed to some adversarial group? Whether it’s an image handed to a competitor or to the FBI, legally or illegally, you don’t have any awareness of it, because it’s happening at a layer that’s hidden. It’s invisible to you. The same thing happens with our phones. When we turn on airplane mode, when we turn off location services, how do we know the GPS is off? That the antenna is powered down? We’re trusting software, and software can be subverted by a rootkit. So we’re developing hardware (which everyone will be able to replicate; we’re providing the plans) where you’ll be able to look at the electron flow over these circuit paths to see that for yourself.
It’s great that you’re driving forward with open source in meaningful ways. Another question from the community: what are some of the ethical implications for people in open source when they don’t know how their work will be used, maybe in ways that they don’t want or don’t agree with?
To think beyond what the license says is clearly lesson number one. That stuff matters, that capability matters. We have to recognize that not all government involvement is necessarily bad, not all intelligence agency involvement is necessarily bad and, hard as it is to say, there are a lot of good people at the NSA and the CIA, and there are even some good people at the FBI. We want to enable everybody, and we need to think about the context of our work.
We don’t work for governments, we don’t work for states, we don’t work for corporations; we should be working for the spirit of technology itself, moving people closer to a more empowered future. I try to think of this in terms of values: all systems should be designed to obey the user; they should not be designed to hide things from them, deceive them or lie to them. That’s one of the largest problems that we have with closed source. It’s not so much that someone doesn’t want to share source code, though that matters; it’s what that actually means for what they do. That leads to the world we’ve got today, where we’ve got vulnerabilities in every Intel chip, and because Intel is the one “monitoring” it, we can’t see it, patch it or change it for ourselves.
“This is the atomic moment for the profession of computer science.”
When you’re thinking about your ethical obligations, the main thing is: how do I empower the user? And if this creates a large-scale disruption of a traditional power structure, or an amplification of those structures (whether they’re corporate or government structures), how can people be sheltered? At least think about what you can do to protect people. The traditional worldview of the happy policeman on the street looking out for people is increasingly, and some would say tragically, being displaced by technology, because we can only put so many people in so many places, but tech is everywhere. And if we are going to have a computer in every home, in every pocket, in every place, they need to abide by values that protect and serve the public.
I wanted to ask about the dynamics of exploits – there are people now who would pay a million dollars for a zero-day exploit. There’s a market for it – how does the economics of it affect the dynamics of people who are trying to secure their infrastructure.
“All systems should be designed to obey the user; they should not be designed to hide things from them, deceive them or lie to them.”
This is a complicated space; we could spend an hour talking on this alone… To look at it briefly: mitigations work. We know that. You can start to move entire bug classes off the table: use memory-safe languages for development and things like that, best practices for coding standards and design standards to limit weaknesses, validate the inputs… This will make attacks much more expensive… But there’s still going to be a market, there will still be people looking for exploits and there will still be people who succeed… Traditionally, we say the beauty of open source is that ‘many eyes make all bugs shallow,’ but we see that bugs still get through, and they get through even in the most open contexts. We see things like Shellshock, and the impacts of this are large, but that’s not an argument that we shouldn’t do open source. The beauty is that when they do come through, the entire community can respond; a codebase with a Shellshock-type bug gets more eyes on it and more people involved in the process. We don’t want to encourage or go looking for big bugs, but the fact that we can find them is fundamentally empowering and educational. We learn and improve as a community.
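The mitigation strategy described here, removing an entire bug class rather than hunting individual bugs, is concrete enough to sketch. Below is a minimal illustration in Python (the function name and validation rules are assumptions for this example, not code from any project mentioned in the interview). Shellshock-class injection works when attacker-influenced strings reach a shell parser, so never handing input to a shell makes that whole class of metacharacter attack inert:

```python
# Illustrative sketch only: the helper name and validation rules are
# assumptions for this example, not code from anything discussed above.
import subprocess

def list_entry(path: str) -> str:
    """List one directory entry without ever invoking a shell."""
    # Validate the input up front instead of trusting it.
    if "\x00" in path:
        raise ValueError("null byte in path")
    # Passing an argument list (not a command string) means no shell
    # ever parses `path`, so metacharacters such as ';', '|' and
    # '$(...)' are treated as literal filename characters.
    result = subprocess.run(
        ["ls", "-d", "--", path], capture_output=True, text=True
    )
    return result.stdout

# A string that would be dangerous in `os.system("ls " + path)` is
# inert here: `ls` just reports a nonexistent file on stderr.
print(list_entry("."))
```

The design choice, validating early and bypassing the shell entirely, is what “moving a bug class off the table” means in practice: no amount of attacker creativity in `path` can reintroduce shell interpretation.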
When Apple has a security flaw, or Google, or Amazon’s stack has something, we don’t know what they learn, we can’t evaluate if their response was good enough. And ultimately, if we don’t like it we have no influence over it. Now, people can say ‘boo-hoo, that’s how private industry works,’ and that’s a fair argument but the point of open source is that we have a better one, we don’t have to compromise. We want a better world, and so we’re going to build it.
Looking ahead, with everything you’ve seen and everything you know: Are you an optimist or a pessimist at this point?
[Laughs.] It depends on the day. I’m fundamentally optimistic. When you look at where we are with the progress of technology today, we’re at a crossroads. We’ve been struck by a moral dilemma that we did not ask for, that we did not see coming. This is the atomic moment for the profession of computer science. In the last century we had nuclear physicists who were trying to master the science and see how it all fit together and see how far they could go… That’s what the internet is today. The problem is that we didn’t predict what the bad actors, the violent actors, would apply these discoveries of ours to. Now we need to think about how to mitigate that. We can’t put the genie back in the bottle, we can’t unsplit the atom. But we can make sure that we don’t make the same errors that we made in the last century, when, just to ensure that things like cell phone networks were working, we adopted terrible standards for them…
“Fear has become the most common political value in the world.”
We have the weakest possible encryption schemes applied to typical cellphone networks because governments encouraged the adoption of weak standards. They wanted them to be weak enough to break, even though they didn’t say that… But we need to make sure we’re not settling for “good enough,” because it’s very hard to update the technology, it’s very hard to move legacy stuff out of production (everyone in this room knows that). We need to figure out how to build not just for today, but for the next hundred years, by setting the example, by setting protocols for building beyond tomorrow.
One final question from the audience, a little more political: what can we do to reverse the trend of protectionism and nationalism?
This is extremely complex… When you look at political dynamics today, whether it’s the U.S. elections, the closeness of the elections in France, perhaps the most extreme surveillance bill in the history of Western democracy passed last year in the U.K. (the Investigatory Powers Bill) or what the Russians call their Big Brother law (and when the Russians call it that, you know there’s a problem), fear has become the most common political value in the world.
Saying [the word] terrorism will defang any opposition; it will silence any counter-proposals. This puts us in a systemically vulnerable place, where the traditional system of checks and balances upon which Western civilization has relied is starting to fade. Courts are afraid to rule in cases that are politically controversial… For example, all the mass surveillance in the U.S. is a pretty clear violation of the Fourth Amendment, and expert groups like the ACLU look at it and file cases on it, but the courts are hesitant… because they don’t want to be seen as political rather than simply applying the law in a way that makes sense. Because judges are people too; judges are vulnerable to fear like everyone else. Politicians are vulnerable to fear like everyone else. Presidents are vulnerable to fear.
This creates a world where the weakest link in safeguarding human rights is increasingly becoming human. This leaves us in a place where the traditional mechanisms of enforcing human rights are beginning to fail.
The beautiful thing is that at the same time these old processes are beginning to fail, we’re seeing glimpses of how technology can enforce human rights in ways that go beyond borders.
Let’s say we have a country that doesn’t respect human rights to global standards. It doesn’t have to be a place like South Sudan, Cameroon, Russia or China; it could be a place like the United Kingdom, or France, or right here in the United States, in Boston.
But we are developing protocols and systems that invisibly surround us every day, that are in our pockets. Even if someone doesn’t touch the internet, their communications are transiting the internet; they’re relying on this fabric, this mesh. Whether it’s the infrastructure that you’re providing to a hospital that’s putting its records online, or a 90-year-old woman who doesn’t even have a phone, they’re still relying on it. These same things can be pushed across borders instantly. And when we create safe and reliable means of protecting human rights, at the protocol level, at the system level, those rights can’t be abrogated simply because it’s convenient or simply because someone asked.
We create not just a better world but a freer world, and it can happen in every corner of the earth as fast as we can proliferate the technology. And I would argue not only that we can do it, and that we should do it, but that if the next generation is going to enjoy the same rights that we inherited, we must.