Hacking Elections: A Conversation with Matt Tait


[MUSIC PLAYING] RYAN CALO: So welcome to the
Tech Policy Lab’s Distinguished Lecture series. For those of you
who do not know, the Tech Policy Lab is
an interdisciplinary unit here on campus. We formally bridge the
School of Computer Science and Engineering, the information
school, and the law school. But within our
community, we have folks that are
faculty and students from many other
departments, including electrical engineering,
natural language processing, urban studies,
communications, and others. And we are devoted
to putting together interdisciplinary teams
to help policymakers broadly understood, make more
wise and inclusive tech policy. My name is Ryan Calo. Along with Batya Friedman and Tadayoshi Kohno, I am one of the three co-directors of the lab. And we’re very excited
to welcome you. Up to two times a year,
we bring in someone really special
into the community in order to deliver our
distinguished lecture. And so today, I wanted to talk
to you about who we brought in and also, about the format
that we have in store. And it’s a real treat. So the person that we
brought in is Matt Tait. And Matt Tait is famous
in some circles– we just talked about infamous. But he’s an individual
who has been instrumental in uncovering
election interference by foreign entities. Most notably, in
2016, with respect to Russian interference in
the United States election. He was well-positioned
to do that. He was working as an
independent consultant, but had experience working
for the British government. He was with the UK’s top digital
intelligence agency, this is the Government
Communications Headquarters– the GCHQ. And so, I won’t
steal his thunder, but let me just say that he was, and remains, a very instrumental figure in this. Presently, he has joined us in the United States. And he is now at
UT Austin, where he serves as a senior cyber
security fellow at the Robert Strauss Center for
International Security and Law, again, at the University
of Texas at Austin– one of our great sister
public research institutions and a wonderful place. And there, he does a
very interesting thing, which is he teaches
cybersecurity at a technical level to public
affairs and law students. Which, I imagine, is quite an
interesting challenge and also, quite a service. And the idea is
very similar to what we think about at
the Tech Policy Lab, which is creating a
generation of policymakers and others– industry leaders, people
in government, lawyers, and technologists– who are
fluent, or at least conversant, in the technical, as well as
the policy and value discussion. So that’s a project near
and dear to our heart. And so, he is someone that
I could not recommend more that you follow on Twitter. He has an enormous
Twitter following. And you can follow him @pwnallthethings– “pwn all the things,” which is very, very clever. And what I would say about him– first of all– is that in addition to being really well-informed and surprisingly accurate about the law– in fact, until I did some investigation, I thought he must have had a legal background– he is also funny, and charming,
and all the other things. So I would check that
out, if you don’t already. So what’s going
to happen tonight is that Matt’s going to
talk to us about the issues that he’s seeing out there
now and in the future around interference with
democratic processes. But then, I’m very
excited to announce that one of our colleagues
in information science over at the information school– Megan Finn– is then going to
have a conversation with Matt about this ecosystem. Megan is an extraordinary
faculty member in her own right. And she studies not precisely
election interference, but she studies information
infrastructures. And she’s particularly
well-known for crisis informatics– that is studying
information flows in the wake of a disaster or a crisis. Truly fascinating work. For Megan, I would
recommend her book– Documenting Aftermath–
Information Infrastructures in the Wake of Disasters– that is out with MIT Press. And you can find it
at MIT Press’s website or you can find it on Amazon. And so they’re going to have
a discussion, a dialogue about what we’re seeing today. We thought that that
interactive format would be really enriching. And then, finally, we’re
going to, of course, open it up to your
pressing questions– which I’m sure that you have. So you’ll see me twice
more– once after Matt is done with this
presentation, giving us the lay of the land
and his thoughts. And then to introduce
Megan and Matt. And then, once again, at the
closing of this discussion. So I just want to say, thank
you to all of you for coming. Thanks very much to Matt,
for coming all the way over from Texas. And to Megan, for spending
an evening with us. And thanks so much to
Hannah, who is our program manager for the Lab, who once
again, has put together just, a truly amazing, amazing event. So with that, I’m going
to welcome you, up here– Matt– and please join me
in welcoming Matt Tait. [APPLAUSE] MATT TAIT: Wow. Thanks so much, Ryan, and Megan,
and Hannah for putting this on. Thanks to the school, as
well, for putting this on. When I was asked to come in and
talk about hacking elections and to give an overview of
this in 15 to 20 minutes, I thought– wow, that’s a
broad topic to talk about. This is– hacking elections
is something really enormous. And why do you want
me to talk about it in front of all of these people? Like, this is a very
difficult topic to talk about. And we’ve just had an election
here in the United States. You might be able to
tell from my accent, that I didn’t get to
participate in it– which was very sad to me. But it was very
exciting to watch– I guess– from a distance. So who am I? My name’s Matt Tait. I currently– Ryan’s
already introduced me– but I work at UT Austin. But most of you probably know me
as @pwnallthethings on Twitter, where I mostly waste time
and pretend to do work. But before that, I
used to work doing lots of quite technical things. And I’m actually, quite glad
to be back here in Seattle. When I first moved to the US,
I lived here for a brief while. And so it’s nice to be back
here with Seattle weather. So I worked briefly with
Microsoft and with Google, doing very technical things. Now I have the opportunity to
do much less technical things with policymakers. And elections are one
of these things where the interaction of cyber
security and policy has come to the forefront. So anyway, one of the things
that’s very difficult for me when talking about elections
and election security, in particular– first of
all, I’m not a US citizen. Second of all, this is
a really partisan topic. It’s very difficult to talk
about elections and election security without
necessarily getting sucked into the partisan
divide in the United States. And coming as an
intelligence professional– an ex-intelligence
professional– it’s really difficult to
have this conversation without having all sorts
of different examples from Republicans
versus Democrats. And this is a very difficult
conversation to have, but I think it’s very
important to have it in a nonpartisan way. So I guess, we can
start off by asking, what is election hacking? And why do we care about it? I appreciate the irony of a
British person coming over to the United States and asking,
what’s the point of elections? What’s the point of– why do we bother with this? It turns out that
elections actually are very controversial. They’re actually
extraordinarily expensive. So I have some figures here–
like, in the 2016 election, it cost about $4 billion just
for the congressional races in political spending. It cost a further $2.5 billion– with a B– for the presidential
part of the election, just in political spending. And just in terms of
the time that everybody spent, queuing up in
order to put their cross next to who they
would particularly like to win this election– nobody’s doing this for fun. People are waiting in line
for hours in order to do this. This is a lot of wasted time. This is a lot of
opportunity cost when these people
could be doing work. It turns out, that’s about
half a billion dollars worth of opportunity cost. And if we look at
the infrastructure– the infrastructure
for elections– is enormous, it’s vast. Just replacing all of
the voting machines across the United States
would cost about $3 billion. In all, this is about $10 billion
of cost for doing elections. So why? Why do we do elections? What is the point of them? Nobody enjoys elections. This is not something
that we’re doing for fun. And I think, one of the
things that’s interesting is if we ask, well, what happens
in countries that don’t bother with democratic elections? Historically and currently
around the world, there’s lots of countries which
didn’t bother with elections. And it turns out
that those countries do leadership changes, too. But the way that they
do leadership changes is quite different to the
way that the United States does leadership changes. At the end of our election– in the United States, here– we choose someone
different to be in charge, and that person
is now in charge. And the previous guy
gets to go on book tours. He gets to go and do whatever. And he seems reasonably
happy about that. In other countries, when you
want to become the next king– the way that you do it is you
wait for the previous king to die or surprisingly
often, you try to speed the process up. And it turns out that one
of the things that happens is once you’ve taken
control as the new king– one of the first
things that you realize is that you’re now the
target of the next guy who wants to become king. And this means that
consolidating power is no longer something
that you might think of as being in the
national interest, it’s something that’s now of
extraordinary personal security interest to you. And what this means it
means that the next guy that comes in, the first thing
that he’s going to do is consolidate power. And the way that they do that– increasingly well,
historically– has been to do this through
quite bloodthirsty means. If you look at Saudi
Arabia– for instance– this is not a
democratic country. As part of the transition
of power there, the crown prince seized control. And what he did was he
arranged for all of the people that might be a threat
to his legitimacy, a threat to his power– he arranged for them to
be taken to the Ritz Hotel and persuaded that he was
the legitimate ruler, in ways that were pretty brutal. Like, not everybody
survived that. It matters that we do elections
versus other countries that don’t do elections because
power transfers happen anyway. And elections are
one of the ways that we can, as a
democratic country, avoid some of the
bloodthirstiness of power transfers that happen
in other systems. But what do we mean by hacking? Hacking is one of these
very complicated words. I come from a technical
background, and hacking– to me– normally means breaking
into computers. But when we talk about
election hacking– especially when we talk
about the 2016 election– hacking tends to mean
something quite broader. Lots of people talk
about the 2016 election having been hacked. And what they don’t
mean, usually, is Russian government officials
breaking into e-voting machines in order to change votes. Now that’s, of course,
something that’s a paramount concern. It’s something that’s very
important we make sure that this doesn’t happen. But e-vote hacking
is a very small part of the overall picture. When we talk about
election hacking, often we’re talking
about something that’s much wider than that. There’s lots of
different aspects to it. One of the aspects that
matters for election hacking is well, breaking into
computer systems– breaking into John Podesta’s email. Most of you will, of course, remember that in 2016, John Podesta’s emails were stolen, and then they were published. But does it matter that it was Wikileaks publishing this? Do we include the disinformation aspect surrounding that as part
of our definition of hacking? Do we include things like
the Internet Research Agency and their
troll farms as part of our definition of hacking? What about voter
disenfranchisement– is that election hacking? What about gerrymandering? Is this election hacking? So one of the aspects
that’s very important is disinformation– disinformation is probably the hardest aspect of election hacking to think about. Because it exists in
a continuum, where on one side of the continuum is
clearly legitimate journalism. And on the other side
of this continuum is clearly illegitimate
election interference. And the real question is,
how do we counter this? How do we think about it? How do we rationalize
where on this line– the bright line between
legitimacy and illegitimacy– lives? When we talk about– for instance– political adverts, where do they live on this line? When we start talking about
partisan media coverage, where do they live on this line? One of the difficulties
we’re currently seeing in the press, with the Mueller investigation, is that certain aspects of the Trump campaign existed on this spectrum, but further out towards the illegitimate end. I think that that’s
something which has caused real concern
in the United States. Where is the law on this? And the law is the
point where the state is going to intervene. The state is going to say that
beyond some particular line, it’s illegitimate,
and the state is going to come in and
say you can’t do this. But if we draw that line in the wrong place– actually, we’re censoring
legitimate journalism. This is a real difficulty–
where do we draw these lines? So what can we do about it? And does it matter
who is doing it? For instance, the
Internet Research Agency was Russian citizens, as
non-state actors interfering in our election. But when alt-right
trolls do it– are they interfering
in the same way? Are we going to
indict those, too? When Russia Today–
for instance– broadcasts extremely
partisan press coverage, do we say that their coverage is illegitimate, versus Fox News? Fox News also provides
very partisan coverage. And it gets considerably more
attention from US citizens. I think, it really
matters when we’re looking at this to break down
the specific areas of election interference. And to recognize that
different parts of it have different responses. So the question is well,
who can do different things within the system? So to give you an example here– when the Russian
government decides to interfere in the election,
the federal government can come in and it
can say, I’m going to push back against that. Because the federal
government operates at this international sphere. And that means that they can
do things like deterrence. Whereas when someone
like Fox News publishes partisan
media coverage– that’s something that
the federal government can’t interfere in
because suddenly, this is domestic media. I think, recognizing that there
are different aspects to this and that there are different
groups in play, really matters. So what can we do about it? That’s a much harder
question, that’s a really difficult question. Well, different parts of it
have different solutions. So we all know this guy. This guy is John Podesta. We all remember him. Why do we know him? Why do we care who he is? Well, because he
got sent this email. This is a very
interesting email. And part of the thing
that’s really interesting about it is we have this email
because Wikileaks published it. This is the email that
was sent to John Podesta that he clicked on, and
then gave his credentials to the Russian government. The reason we know that
he was sent this email is because in a
grand show of irony, this was one of the
emails that they took and then they published it. So we actually know for
sure that it was this email. And we also are able
to work out for sure that it’s the Russian
government because we can track these links. And we can say, well,
this particular link was sent by the same groups
that also were targeting NATO officials,
that were targeting Ukrainian politicians,
that were targeting all sorts of other people. It’s very interesting
that in their attempts to publish all of
his emails, they forgot to go back and
correct the one email that revealed who they were. And this is what he saw. I challenge any of
you to say that if you were sent that kind
of email that you wouldn’t click through it. That you wouldn’t enter
your password on that page. We’re trained– I
guess– in cybersecurity to look at phishing emails
and to identify the ones which are most likely to trick us. This one would 100% trick me. If you sent me this
link with a URL that looked that legitimate, with my picture on it, with my name pre-populated– I would immediately and
without a second thought, enter my credentials
into that box. And this is literally, my job. And the reason that
he got caught out was because your username
and your password is sufficient to
give the hackers access to your entire inbox. And it shouldn’t be. This is very complicated. In the event that we have
technical solutions– and we do now have
technical solutions– to some aspects
of it, then we can use these technical solutions
to eliminate entire categories of the problem. This is a YubiKey
or a security key. If John Podesta had had one of these on his machine– then when he was sent this email and he typed in his password, it would have said, OK, now it’s time for you to tap your YubiKey. And because of the technology in this system, they wouldn’t have been able to access his account.
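To make that idea concrete, here is a minimal toy sketch in Python of why a security key defeats phishing even after the password has been typed in. It is not a real FIDO2/WebAuthn implementation, and the domain names are made up; the point is only that the browser binds the site’s origin into what the key signs, so a look-alike phishing domain produces a response the real site will reject.

```python
# Toy sketch (not a real FIDO2/WebAuthn implementation) of why a security key
# stops phishing even after the password is stolen: the key signs a challenge
# that is cryptographically bound to the site (origin) the browser is really on.
import hmac, hashlib, secrets

class SecurityKey:
    """Stand-in for a YubiKey: holds a secret that never leaves the device."""
    def __init__(self):
        self._device_secret = secrets.token_bytes(32)

    def sign(self, origin: str, challenge: bytes) -> bytes:
        # The browser, not the user, supplies the origin, so a look-alike
        # phishing domain produces a different signature.
        return hmac.new(self._device_secret, origin.encode() + challenge,
                        hashlib.sha256).digest()

key = SecurityKey()
challenge = secrets.token_bytes(32)          # issued by the legitimate server

# Registration: the real site records what a valid response looks like.
expected = key.sign("https://mail.example.com", challenge)

# Attack: the victim is lured to a look-alike domain and taps the key anyway.
phished = key.sign("https://mail.examp1e.com", challenge)

print("accepted by real site?", hmac.compare_digest(expected, phished))  # False
```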
But it’s interesting that there are aspects of this security
election hacking problem that we can fully
defend against with technology. There are other parts that
we can’t defend so easily with technology. Vote counts– I think– are
a really complicated issue. They’re one of these areas
where there are really good technical solutions
to parts of this problem. But the real difficulty with
e-vote counts, in particular, is the technology
to defend them– although it works–
is so complicated, ordinary voters
can’t understand it. We have an entire
field of cryptography which is designed specifically
around solving this one problem– how can you have
e-voting systems which are so secure that you
can know for sure that hackers didn’t break in? That’s actually a
stronger statement than saying the hackers
didn’t break in, but knowing for sure that
hackers didn’t break in. And it turns out
that this is actually a more important problem to solve than keeping hackers out. Because when we think about e-voting machines, we always think, how do we keep hackers out? It’s more important to prove that we kept hackers out.
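As a rough, hypothetical illustration of what “proving” can mean here, this sketch shows the public-bulletin-board idea behind end-to-end verifiable voting: every voter gets a tracking code, anyone can recount the published ballots, and each voter can check that their own ballot is included unmodified. Real schemes add encryption (mix-nets or homomorphic tallying) so published ballots stay secret, which this toy deliberately omits.

```python
# Toy sketch of the "prove nobody tampered" idea behind end-to-end verifiable
# voting (hypothetical and greatly simplified: real systems also keep
# individual ballots secret, which this example does not).
import hashlib, secrets
from collections import Counter

def receipt(ballot: str, nonce: str) -> str:
    # Each voter gets a tracking code derived from their ballot + a private nonce.
    return hashlib.sha256(f"{nonce}|{ballot}".encode()).hexdigest()[:12]

# The election authority publishes every (tracking code, ballot) pair on a
# public bulletin board, so anyone can recount and any voter can find their code.
voters = {name: (secrets.token_hex(8), ballot)
          for name, ballot in [("alice", "A"), ("bob", "B"), ("carol", "A")]}
bulletin_board = sorted(receipt(b, n) + ":" + b for n, b in voters.values())

# 1) Any observer can recompute the tally from the public board.
tally = Counter(entry.split(":")[1] for entry in bulletin_board)
print("public tally:", dict(tally))

# 2) Each voter can check that their own ballot is really included, unmodified.
nonce, ballot = voters["alice"]
print("alice's vote included?", receipt(ballot, nonce) + ":" + ballot in bulletin_board)
```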
Because the thing that you’re trying to defend is not the vote count
itself, but the legitimacy of the election. Because what happens
when people stop trusting in the legitimacy
of the election? Well, then you end up with
all sorts of serious problems about whether or not
those people in charge actually have the
legitimacy conferred on them through this election. And that’s where things
start to become dangerous. So we need to defend, and we
can defend– with technology, we can defend e-voting
systems, but we actually need to just do it. But at the moment,
we don’t do this. It’s thought of as sufficient when DHS says, we don’t have any evidence that people have tried to break into our e-voting systems– but that’s not sufficient. Because for most people in this country, or for large numbers of people in this country, we look at this and we say, the fact that you haven’t found hackers in our system– well, is that just because you haven’t looked hard enough? Have you not found anything
because you’re not looking? Or have you not found anything
because there’s nothing there? So we can defend some
things with technology. But we really can’t defend
everything with technology. And I think one of the things
that we’ve learned from 2016 is that there hasn’t been enough
introspection from all sorts of parts of society,
which were really quite responsible for aspects
of amplifying misinformation from the Russian government. When we look at– this
is from a Harvard study– when we look at the
news coverage in 2016, we can see really
clearly that some stories got a completely
disproportionate amount of coverage. Whether or not you believe,
for instance, Hillary Clinton’s emails were a very
important topic, I’m sure they didn’t deserve this
sort of level of coverage that they got compared
with, for instance, Trump’s Russia connections. I think that it’s very
important to recognize that when Trump was
getting coverage, he got primarily positive
coverage for his policies. And tended not to get very
much coverage for his scandals, of which there were many. Whereas when you look
with Hillary Clinton, she got enormous amounts of
coverage for her scandals, and very little coverage
for her policies. I think it’s important
for news media to recognize that
this is something that they need to
look at, where they’re amplifying misinformation
from foreign hacked sources. What their role in this is. And to recognize that the
federal government is not going to be able to
solve this problem. And technology is not going to
be able to solve this problem. And I think it’s
also incumbent on us all to think quite
carefully about social media and our own influence
on social media. Because this is,
again, something that technology
companies– something that the federal
government is not going to be able
to solve for us. Why is it the case that we see
so many clickbaity headlines? Why is it the case that
we see so many problems with social media? Well, fundamentally, the problem is adverts. I hate to put all of this on one industry, but adverts are one of these areas that cause this enormous
financial incentive for us to spend time in
front of a computer screen on someone’s website. And they’ve realized
that they can get us to spend more screen time
looking at clickbaity headlines, that they can get us to spend more time looking at things designed to cause outrage, things designed to keep us in a sort of happy mood, to avoid cognitive dissonance. And this is how you get
to financial incentives for echo chambers. And the financial incentives
are not there for trustworthy reporting. The financial incentives are
not there for true reporting. The financial
incentives are there to show us things that
we already agree with, and things that
spark our emotions, and things that keep
us clicking back. Things which give us headlines
that we agree with or could disagree with so that
we share them more. Because adverts are driving
the financial markets here. They’re driving the
financial incentives here. So I hope this has framed
some of the conversation that we’re going to have– that when we talk about
hacking elections, this is actually a really broad
area that we can break down into certain categories. And that some parts
of them we actually do have really good answers to. When it comes to defending
against phishing, when it comes to defending against certain
categories of hacking, we actually have
complete solutions. Like when it comes to things
like SQL injections in state and local infrastructure,
we actually know how to defend those. We know how to eliminate SQL
injections from code bases. We just should.
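As a small illustration of a category we know how to eliminate completely, here is a hedged sketch (the voter table and values are invented) contrasting a string-built query, which the classic injection input subverts, with the parameterized form, which treats the input purely as data.

```python
# Minimal sketch (using Python's built-in sqlite3 and a made-up voter table)
# of the standard fix for SQL injection: parameterized queries instead of
# string concatenation, so user input can never be executed as SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE voters (name TEXT, precinct TEXT)")
conn.execute("INSERT INTO voters VALUES ('Ada Lovelace', '42')")

user_input = "x' OR '1'='1"   # classic injection attempt

# Vulnerable: the input is spliced directly into the SQL string.
unsafe = f"SELECT * FROM voters WHERE name = '{user_input}'"
print("vulnerable query returns:", conn.execute(unsafe).fetchall())   # leaks every row

# Safe: the driver sends the input as data via a placeholder, never as SQL.
safe = "SELECT * FROM voters WHERE name = ?"
print("parameterized query returns:", conn.execute(safe, (user_input,)).fetchall())  # []
```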
When it comes to defending politicians from phishing, we can do that. And one of these security
keys will cost $20. There’s not that many
politicians in the country. We can get one for all of them. If it costs $10 billion
to put on an election, how much does it cost to get everybody who’s in the political sphere a $20 YubiKey? We can completely defend some
categories of this problem. And we should. But there are some
parts that we can’t defend with just technology. For instance, when
it comes to defending against Russian interference,
that’s something only really the federal
government can do. This is a question of
deterrence fundamentally. This is something that we as
citizens can’t directly solve. This is an issue for deterrence. But I think it’s also
important fundamentally to realize that we can’t fix
everything with technology, and we can’t fix
everything with government. And that some of
these problems we need to have a much bigger
conversation within society, because there’s no
knights in shining armor out there who are
coming to save us on things like media coverage,
on things like social media. We’re not going to fix
this with regulation. We’re not going to fix
this with technology. And fundamentally,
those problems are the things that we need
to spend more time looking at ourselves, looking at
our influence on each other, and the financial incentives
that are driving this. Because if we’re going to get
stuck in echo chambers, then that’s on us. So that frames it in
I guess 15 minutes. It’s a very short time. But we have two
years to get ready. And I think if there’s one
thing that anyone in the United States can agree on– the United States is
built on elections. This is a country
more than any other that cares about democracy, that
talks about democracy as one of its central
founding principles. If there’s one thing that
everyone in the United States can agree on more
than anything else, it’s that of all of
the eligible voters who want to cast their vote,
they should be allowed to. And that their votes should
be counted correctly. And that they should
be informed when they go to cast their votes. We have two years until
the next election. And I think we have time
to get some of this ready. [APPLAUSE] RYAN CALO: Thank you, Matt. That was absolutely terrific. Somebody that Matt
and I both know is a guy named
Bobby Chesney who’s the director of the Strauss
Center where Matt works. And Bobby was interviewed
for a profile of Matt that was done by a
journalist over the summer. And he said, “Matt Tait
has a unique ability to speak to all audiences
very intelligently. Maybe it’s his wonderful accent. Maybe it’s the personal charm. But maybe it’s also
his deep reflection on these values
and these issues.” So I really appreciate
that very, very much. So right now I’m
going to invite Matt back up, along with Megan,
for the portion of this– that is, the
structured dialogue. And so Megan, could you
please come up here? And Matt, I’m going to
invite you up here again. And again, Megan– for those
of you who came in a little later– Megan is a wonderful
faculty member in our Information
School who studies information infrastructure. And she has agreed– to our delight–
to interview Matt for this portion of the program. So Megan, I’m going to turn
it over to you and to Matt. And thank you again
for doing this. We appreciate it. MEGAN FINN: Thank you. All right. Wow, quite loud. Hi everybody. All right. Well, thanks Matt for
such a provocative talk that was very broad. I think you did a really
great job of highlighting how complex all of these issues
facing election security are, and how many
different institutions and technical elements
are brought together when we start to try to talk
about what happened in 2016, and what we can
do better in 2020. Because I wanted to start by
just asking what elections are you watching right
now that you think might have interesting
implications in the election security space? MATT TAIT: The United States
is in permanent election mode. So I think watching
the United States– and especially the 2018
election [INAUDIBLE] calming down after the sheer
level of excitement as the 2018 election finally, finally now–
weeks after the election– is coming to a close. Which is one of the weird
things about US elections, they never quite seem to end. But at the moment, one of
the things that I’m watching is the UK. Not so much about
elections, but about the referendum, and the
closing of the Brexit campaign. There’s a lot of instability
at the moment in the UK as a consequence of this. And it’s something which is
of very paramount concern to the future of Europe,
and the future of the UK. And there’s lots
of external actors which are very interested
in what that looks like. Either to end up with
soft or harder landings. And that’s been very
interesting to watch. MEGAN FINN: So what kind
of election security issues, of the many, many topics that you highlighted in your talk, are coming up as you’re watching what Brexit is going to look like? Do you see that there are influence campaigns still going on, even though– MATT TAIT: So for
sure, because one of the things that’s very
controversial at the moment is the extent to
which this is going to become a very hard
Brexit with no deal, versus whether it’s
going to be a soft deal, versus where plausibly there
might be a second referendum, or something much softer. And that of course
spans the entire range, and it is extraordinarily
politically polarized in the UK. And it’s fascinating
to watch as– especially online–
there’s been lots of people who are taking
curiously both positions, and recognizing that you
can have one set of Twitter bots arguing very strongly
for a no deal Brexit, and you can have another
set of Twitter bots arguing very strongly that
there should be no deal, and there should be
a second referendum. And the purpose of
this is to recognize that this is a
polarizing issue, and it drives a wedge in society. And that’s actually very much
the same sort of techniques that trolls have used in
the United States as well. Both alt-right trolls and
foreign governments trolls. This is identifying
wedge points in society that they can take
both sides on, because it’s the
chaos that they want. It’s driving the
wedge that they want, rather than necessarily a
particular policy issue. MEGAN FINN: Have
any lessons been learned by what happened
in the US in 2016? MATT TAIT: So I think yes. But as I mentioned, this
is such a broad area that lessons are
often learned in quite narrow specific segments. So in the UK, for
instance, they’ve spent a lot of time talking
about defending politicians’
much harder for those accounts to be hacked. This is one of the weird
things between the UK and the US, which is that in the
UK, when your intelligence agencies go to politicians
and they say, hey, we can help secure your
accounts, they’re like, cool. This sounds like a great idea. And if you did that
in the US, I think there would be
somewhat more freak out if the FBI went to the
Democrats and said, hey, we’re the right guys
to secure all your emails. [INAUDIBLE] have
different viewpoints. Or even Republicans now. Nobody in the US
trusts the FBI anymore. But it’s fascinating
to see specific areas of it being fixed. And we saw this with the
French elections, for instance. Their election that led
to the election of Macron. They spent a lot of time
thinking in advance, knowing in advance that
the Russian government in particular was going to be
putting out a disinformation campaign, and preparing
for that in advance. And that sort of
worked for them. MEGAN FINN: When we look
at other institutional configurations or ways
of approaching the media and technology,
are there, again, lessons to be learned
from abroad for us here in the states? MATT TAIT: It depends. So part of the
problem with the US is that it’s
constitutionally very different to lots of the other
countries that you can look at. So for instance, if
I were to come up with a recommendation
of, hey, you could have a single bipartisan
or nonpartisan election commission, and they’ll
run all of your elections, this might be a great
idea in the United States. But it’s just never
going to work. It’s going to run into
constitutional problems. It’s going to run into just
practical political problems. So there’s some things
that we can learn abroad. I think that the US is
such a unique system. It’s such a big system
as well, that it’s difficult to transfer ideas
wholesale from say Europe into the United States and
say, they’ve got this solution. It works much better. It would be much better if we
had a parliamentary system, perhaps, in the United States. But I don’t this is going
to happen any time soon. MEGAN FINN: Right. But even the way the French
did prepare the media at least to consider some of
the more extreme views as possibly not coming
from the French themselves. It seems that that might
offer a possible way forward. MATT TAIT: So not
so much the French, but the Germans have
certainly done this. So we definitely saw this
over the recent elections that journalists were much
more generally suspicious of leaked documents that
had been given to them. Because they knew not
just from their history, but also from the 2016
election, that when you see hacked
documents, you should be quite careful about
taking them at face value. And this is one of
the stories that I don’t think people fully
remember from the 2016 election, but actually
several of the documents that were given to the public as
ostensibly hacked from the DNC were actually doctored. There were the edits that were
made to them for the purposes of influencing the public. And this is one of the reasons
why my Twitter account has as many followers as it does. It’s that when this first came out, one of the things that I immediately thought was that this felt like an old
fashioned Soviet disinformation campaign. And it seemed
interesting to me– based on the history of Soviet
disinformation campaigns– what they would
often do is release a bunch of legitimate
stolen documents. And inside there they would
insert strategic fakes. Because it’s very
difficult, if you’re confronted with a large body
of evidence, most of which is true– 98% is true– and
then there’s 2% which is just something
dreadfully scandalous dropped in a footnote, or
in an extra page. Then of course you’re
primed to believe it. And that happened in
the 2016 election. It’s just that it’s not
particularly public. So for instance,
we know for sure that there was one
of the documents that was released containing
ostensibly a national security strategy from the DNC. And we know that hackers
in the Russian government went into that
document, and they added in the header
of that document the word “secret” in capitals. And the purpose of that
was they recognized that for Hillary Clinton,
one of the really politically polarizing things was her use
of her personal email server, and potentially whether
or not there was classified information there. And so in the event that this
was a hacked document that contained the word
secret in the header, and said it was a national
security strategy, that would be very
damaging to her. And it didn’t get
very much attention because most people looked
at it and they thought, this is just secret in the
colloquial sense, rather than secret in the national
security sense. So it didn’t get much attention. But we know for sure that
some of these strategic edits took place. MEGAN FINN: Can you
talk a little bit about how you became involved
in this investigation of Russian interference
in the 2016 election? MATT TAIT: Sure. It’s a very surreal part
of my life, I guess. So on June 14th, 2016,
you ended up with– the Washington Post released
this blockbuster piece about how the DNC
had been hacked. And this cybersecurity company
called CrowdStrike had come in, they had looked at this server, and they had concluded that there were two
different families of malware that had infected the server. Which they called Fancy
Bear and Cozy Bear. And Fancy Bear was
their internal name for the GRU, Cozy Bear was their internal name for the SVR. And it seemed to me when they
first published this, it seemed a little bit
difficult to believe that the Russian
government would have hacked the same
target twice from two different [INAUDIBLE]. That seems like perhaps
a communication error on their part. But also, there’s good
reasons, if you’re a cybersecurity company,
to perhaps overblow how advanced the hackers
are, because that makes you sound better as
a cybersecurity company. Makes your victim feel
a little bit better that they were hacked by super
sophisticated people, rather than super
unsophisticated people. And also, it didn’t seem to
me particularly controversial that foreign
intelligence agencies are very interested in the
political maneuverings of their major rivals. In the event the
Russian government was not trying to
hack the DNC, frankly they should get their
tax rubles back. Because that’s just
intelligence 101. And the thing for me which
just was really weird was the next day
what happened, which was that a WordPress
account appeared called Guccifer 2.0,
or Guccified 2.0, depending on how
you pronounce it. And they were trading off
the back of the reputation of a previous hacker– Guccifier or Guccifer– who
was a Romanian hacker who had hacked Sidney Blumenthal. And this new hacker
was claiming, I am the person
that hacked the DNC. Look, here’s a whole bunch
of internal documents that I’ve taken, and this proves
that CrowdStrike is wrong. Like that CrowdStrike
said that this was the Russian
government, but it’s not the Russian government. It’s me, some lone
Romanian hacker. And to me, this was like
an enormous red flag. Because there’s
loads of countries which would be interested in the
political system in the United States. There’s loads of
countries which would be really interested in hacking
those types of documents. And almost none of them
would be brave enough to then turn that and pivot it
into a disinformation campaign. And this was something
the Soviets used to do, and not something that
the Russian Federation had been doing very publicly,
at least for a long time. And that seemed to me to
be really interesting. And so my immediate
thought was, if this is an old style Soviet
disinformation campaign, are they inserting fakes? What can we tell
about these documents? Can we see where they’ve
deleted a paragraph? Can we see where they’ve
added a paragraph? And I started going through
the metadata of these documents for that purpose. And then as I was
going through that, I would discover other
mistakes that they had made. So for instance, they had
used Russian language settings on their computer as they
had made these edits. Or the username
on their computer was a reference to the
founding member of the FSB. And it seemed to me like this
is just an error on their part, and I was just calling it out– expecting that this would all wind down, and they would go away, run back to where they came from with their tail between their legs. And they didn’t. They doubled down each time. And it was fascinating, because what they would do, they would not only go further
and further in their pretext– MEGAN FINN: “They” is Guccifer 2 here? MATT TAIT: Yes. MEGAN FINN: OK. Or whoever? MATT TAIT: –of not being the Russian government. But also they would
go out of their way to discredit previous analysis. So having got caught out
with Russian language settings on their system,
what they would do is they would start
publishing more documents. But now they would
start publishing them with intentionally
different language settings, or with different usernames. So the next one would say Che
Guevara as their username. And what they were doing
was they were clearly playing with the researchers. Sort of watching what they
were doing in real time, and then adding new
bits of information to discredit the old ones. And that was
fascinating to watch. And it wasn’t really until
the Wikileaks dump of the DNC documents– which led to the
resignation of Debbie Wasserman Schultz very publicly– that
the entire operation suddenly became very
professional and started becoming much more clearly
politically directed against the Democrats. And so at the time I was
just one of the first people to start calling
all of this out, and going through these
documents as I went. And saying, this
is what I’m seeing, versus, they’re
doing this, and this is what I assess that
they’re trying to do here. MEGAN FINN: Yeah. It’s pretty fun to read through
that 50 plus Twitter chain to understand your thinking. One question that
came up– so I got to ask a bunch of
students and colleagues what they wanted
to know from you. And a lot of people
were like, how did he learn how to do this stuff? And what are the techniques? Some of the techniques
that you were using, like looking
at macros in Word seem rather pedestrian maybe. Even for average users, we
could figure out how to do that. And then other ones,
much more exotic. So how did you learn how to
do this really cool stuff? MATT TAIT: My focus
has always been on looking at primary documents. Whenever I see new stories,
my instinct is to say, I don’t necessarily trust the
analysis by the journalists. I want to go to the
primary documents and then see what’s
going on for myself. So I want to get into
the mind of the people writing these documents. And so a lot of that
has sort of led me to when I see documents,
to look for bits and pieces of hidden data. Not just what this
person is writing, but how they’re writing. The style that they’re doing. And then also when
did they write it? What computer did
they write it on? And looking for
forgeries in particular. Looking for tiny mistakes
that they’ve made. And seeing whether or not
you can learn something about this document that nobody
else is going to learn from it. Because that will give you some
insight into who these people are, how they’re thinking. And that will give
you a little bit of an edge when you’re
trying to predict what they’re going to do next. And so a lot of
looking at metadata, looking at these timestamps
and all that kind of thing evolved from that. Going to the original
documents and trying to find bits of
hidden information that nobody else is
really looking for.
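For a sense of how little tooling this kind of first pass requires, here is a minimal sketch, with a hypothetical filename, of pulling the core metadata out of a .docx: the file is just a zip archive, and docProps/core.xml records the creator, the last editor, the timestamps, and the document language– exactly the sort of fields where the Guccifer 2.0 releases leaked clues.

```python
# Minimal sketch of the kind of metadata check described here, assuming you
# have a .docx file on disk (the filename below is hypothetical). A .docx is a
# zip archive, and docProps/core.xml records who created and last modified it,
# and when.
import zipfile
import xml.etree.ElementTree as ET

NS = {
    "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties",
    "dc": "http://purl.org/dc/elements/1.1/",
    "dcterms": "http://purl.org/dc/terms/",
}

def core_properties(path):
    """Return author, last editor, timestamps, and language from a .docx."""
    with zipfile.ZipFile(path) as z:
        root = ET.fromstring(z.read("docProps/core.xml"))
    fields = ["dc:creator", "cp:lastModifiedBy", "dcterms:created",
              "dcterms:modified", "dc:language"]
    return {f: getattr(root.find(f, NS), "text", None) for f in fields}

if __name__ == "__main__":
    for field, value in core_properties("leaked_document.docx").items():
        print(f"{field:20} {value}")
```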
MEGAN FINN: Yeah. And it’s super interesting that you’ve been publishing a lot of
your findings on Twitter. And so I’m curious
how using Twitter fits into your public
scholarship and your thinking about doing this analysis. MATT TAIT: Well, Twitter’s
a really strange place. So I set up my Twitter
account, really never expecting it would look anything like this. The reason it has a cartoon meme and a silly name is that I never expected it to be attached to me personally. It’s sort of taken off
on a life of its own, and it’s kind of cool. It has its upsides and
downsides, I guess. But one of the things
that I always used it for was trying to look
for in documents– go back to the primary
sources, and trying to extract as much
information as I can. And one of the things that
I’ve always been very wary of is it’s easy to read
documents in ways that mean that you’re
looking into them for what you want to find, rather
than what’s in there. And so what I would
intentionally do is I would create long
Twitter threads of everything that I could find. And I would take this
mental effort of, I will start at the
beginning and I will not stop till I hit the end. I will read the whole
document from start to finish, rather than going through and Control-F-ing for the words that would give me some cool news story. Because I’m not
constrained in the same way that most journalists
are, which is that they have to get
a story out quickly. I have the opportunity of time
that I can just go through and I can find everything. I can look at the little stories
which are not good enough for journalists to publish. And so that’s what I
was using Twitter for, was just to call out the
things that I thought were interesting, see
what other people had as theories as to what
was going on in something versus something else. To draw attention to it. But also, just because if
you’re doing it publicly, it actually incentivizes
you to do all of it. In the event that you’re
doing it not publicly, then it’s very easy
to say, I’m bored. I’ve done two hours of
reading this document. I think I understand where
the document is going. I’ll stop now. Whereas in the event that it’s
like a thousand page document, it’s going to take you
24 hours to read it. That’s a lot of work. And if you’re doing it
publicly, it sort of forces you into the habit of
doing it a little bit more carefully. And then once you’ve done it,
it’s also a reference for you to go back to. I often use my Twitter
account for that purpose. I remember I read this document. It had something interesting
or important to say. What did it say? I’ll go back, I’ll search
my own Twitter account. And that will take me back to
the point where I found it. And then I’ll know
what the source was. I’ll know where it was. MEGAN FINN: OK. Do you
worry at all that you’re saying things that
would in some ways compromise American security? Or your analysis is catching
things that CrowdStrike is not catching. MATT TAIT: So there
have been times before, especially when it
comes to FOIA requests, there’s been a
lot of times where people have misredacted things. And in some cases I
will call that out for the purposes of
making the point that it’s been misredacted. Because I think it’s
important when things actually have real world
consequences, it’s important that they
get redactions correct. But there have been some
times where I’ve found things, like names of particular staff. Identifying specific
things which would cause quite exquisite
harm to specific individuals. I’m not going to call
that out on Twitter, cause I don’t think that
would be appropriate. MEGAN FINN: Right. So this reading, for you, is obviously multilayered. A lot of us have
been wondering a lot about how is artificial
intelligence going to affect the election
interference picture? And how are you
thinking about this sort of multilayered reading, and
the potential for editing– if you will– like
videos and images and all of these other interesting
ways of manipulating public opinion with AI and all
these other advances coming at us? MATT TAIT: So I think one
of the real dangers with AI is the extent to
which it’s going to enable forgeries
that are going to be really compelling forgeries. And we can already– if you want to know
how good AI is, look at the predictive
text on your phone, or the predictive
texting on Gmail. Like how good it
is at constructing completely legitimate sentences
that you might plausibly otherwise type, by learning from
what it is that you normally type. And it’s going to be only a matter
of time until someone can say, I have all of these
hacked emails from you. I can create new
complete forgeries which have your style. It’s not just that they’re saying
something that you didn’t say. They’re saying something you
didn’t say in your style, in a way that any
of your friends would recognize
immediately as your style. Because the AI has learned it.
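A toy way to see the principle is a tiny bigram “predictive text” model trained on a made-up corpus of someone’s past emails. Real forgeries would use large neural language models, but the underlying idea– predict what this particular person would write next, from what they have written before– is the same.

```python
# Toy sketch of the style-learning idea: a tiny bigram "predictive text" model
# trained on a (hypothetical) corpus of someone's past emails. Real systems use
# large neural language models, but the principle - predict the next word from
# what this person has written before - is the same.
import random
from collections import defaultdict

corpus = (
    "thanks for the update please send the latest draft "
    "please send the schedule thanks for the schedule update"
)

# Count, for each word, which words tend to follow it in this person's writing.
next_words = defaultdict(list)
tokens = corpus.split()
for current, following in zip(tokens, tokens[1:]):
    next_words[current].append(following)

def mimic(seed: str, length: int = 8) -> str:
    """Generate text that statistically resembles the training corpus."""
    out = [seed]
    for _ in range(length):
        choices = next_words.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

random.seed(0)
print(mimic("please"))   # e.g. "please send the latest draft ..."
```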
And it’s not just going to do this for text. It’s going to do this for pictures. I’m here at U Dub, so
there’s lots of research here on editing videos in ways
that are extremely plausible. And I think that’s going
to be really dangerous. We already have
politicians who are willing to post things
which are edited, and sort of in nice meme format. What happens when
they start getting sent memes which are actually
completely fabricated videos? What happens when
it’s former President Obama saying something
that he never actually said that’s getting retweeted
by the president? But what happens then? How many people are
going to believe this? And how many people are
going to say, you know what? I’m going to wait to find the
ground truth on this before I assume that this is
what actually happened. MEGAN FINN: Frightening. So you touched a little bit
on bots in the UK debates that are going on. Can you talk about
what the role of bots is in shaping the electorate
and public dialogue? Because I think not
all of us are as well versed in some of the
stuff that you referenced. MATT TAIT: Right. So that’s a very broad,
complicated question. So one of the things
that bots primarily are used for at the moment is
amplifying specific topics. So often there’ll be a topic
which actually doesn’t– under normal circumstances would
not get a lot of attention. Because they’re kind
of niche, or they’re viewpoints that actually very
few people actually hold. Like they might be
extremely offensive views, but they’re views that not
very many people actually hold. And with the use of bots,
what they’re able to do is they’re able to massively
amplify those voices, and make it sound like
those views actually are held by a much larger
percentage of the population than actually there really are. And a consequence
of this is that they can then cause a much
greater deal of panic amongst the rest of the
public who then think, oh this viewpoint
that previously I assumed there was
a couple of hundred people across the United States
held this view– actually maybe a couple of hundred thousand people
across the United States hold this view. And that can lead to massive
divisions internally. And we certainly see this
with massive amplification of alt-right messages. And we’ve seen this in
the UK, for instance, where neo-Nazis in the UK said,
we’re going to hold this rally. And this gets
massively amplified. And everyone thinks,
oh my goodness, there’s going to be this massive
rally of neo-Nazis. And the police have to come
out in a massive show of force in order to try and
have a clear divide with the neo-Nazis on one
side, and the antifascists on the other. And then only six
people actually turn up, because in
reality it’s six people plus 100,000 bots who are
amplifying their message. And I think that– especially in social media–
we can get very distorted views as to what actually
people really do think because of
the amplification, the misamplification of certain
voices through these bots. And I think that’s something
that’s quite difficult. MEGAN FINN: Are there
technical approaches to dealing with some of
the issues around bots? MATT TAIT: So it’s
very difficult because inherently
the way that you are interacting
with the internet is through your computer. So anything that you
can do with the internet through your computer, AI can
also do through the internet– through your computer– in a
way that’s indistinguishable. But I think there’s
some ways that we can make it more clear to people
that certain things are using– for instance, when you’re
posting through the Twitter website, that’s how humans
interact with Twitter. Whereas with lots of these bots,
they interact with Twitter through some of these
programmable interfaces. I do think that there’s a lot
of scope for Twitter saying, you know what? The accounts that
are interacting through the
programmable interface, we’re going to highlight in
some slightly different way. And of course that’s not going
to prevent very advanced people designing these bots to
interact with Twitter through the website. But it makes it harder for them. And it makes it more
likely that these people– or these accounts which are pretending to be real users, but actually are just one of a swarm of programmed interfaces and programmed bots– get highlighted, making it clear that some of the people that you’re interacting with on Twitter are not real people. That’s something
that they can do. And they can catch sort of
98% of them on the first cut. And then you can
do something else in order to try and catch
the 98% of what’s left. And so on and so forth. And eventually
you get to a point where only very,
very advanced people are able to genuinely
masquerade computers as humans. I think that’s something that
social media companies haven’t spent much time doing, and
something that they could.
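A minimal sketch of that first cut might look like the following. The tweet records are invented; the assumption, which matches Twitter’s data model, is that each tweet carries a “source” field naming the client application that posted it, so posts coming through custom programmatic clients can be flagged for a closer look.

```python
# Minimal sketch of the first-cut heuristic described here: flag accounts whose
# posts come through the programmable interface (the API) rather than an
# official client. The tweet records below are made up; in Twitter's data model
# each tweet carries a "source" field naming the app that posted it.
OFFICIAL_CLIENTS = {"Twitter Web App", "Twitter for iPhone", "Twitter for Android"}

tweets = [
    {"user": "real_person", "source": "Twitter for iPhone"},
    {"user": "botnet_4412", "source": "my-bulk-poster-v2"},
    {"user": "botnet_4413", "source": "my-bulk-poster-v2"},
]

def looks_automated(tweet: dict) -> bool:
    """First cut only: posting via a custom app is a hint, not proof, of a bot."""
    return tweet["source"] not in OFFICIAL_CLIENTS

for t in tweets:
    label = "flag for review" if looks_automated(t) else "ok"
    print(f'{t["user"]:15} {t["source"]:20} -> {label}')
```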
MEGAN FINN: That’s an exciting idea. So what kind of
frameworks do you think would support
a stronger voting infrastructure from a
policy standpoint, if any? MATT TAIT: So one of
the real difficulties with voting infrastructure
in the US is first of all that it’s massively outsourced
to the individual states, who have their own reasons
for either having good systems or bad systems. And sometimes having
bad systems is a feature, not a bug for them. And I think that’s
really problematic. When you turn up to a
voting precinct and there’s four hour lines, when you
first look at that you think, my goodness. How did we screw up so
badly that this happened? And after a while they think,
oh, this wasn’t actually an accident. I think that it is
deeply un-American that some people are
trying to disenfranchise other citizens from the vote. And I think it’s
worth recognizing that some of the problems
that we have are intentional, rather than because we
haven’t got enough money, or because we haven’t put enough
time and work into fixing it. And I think that
that’s something which we as society need
to get better at calling out and clamping down on. But it’s very difficult at
the federal level to force that down, because this is
an inherently state-level issue. And it is fiercely contested. If you ever speak with election
officials in any of the states, one of the things that
you discover very quickly is their antipathy often
towards the federal government. They don’t want the
federal government coming in and telling them
how to run their systems. And that can cause enormous
amounts of friction in very complicated ways. One thing that I do think we
can do, which we have done, is designating things
like election systems as critical national
infrastructure. Which gives the
federal government more opportunity to
defend election systems at the federal level. So the Department of Defense,
the Department of Homeland Security have more opportunity
to defend these systems because we have designated them
as critical national infrastructure. One of the other
things that I think we can do that we’re
not very good at doing is making funds available
to the states for things like cybersecurity to say– if we want cybersecurity for
e-voting machines for instance, there’s actually a
really big difference between the federal
government saying, we will test your infrastructure
up to the value of $2 million, versus, here is
$2 million for you to test your infrastructure. One of these is going to get
an enormous amount of pushback, and the other one is going
to be invited with open arms. Because it’s money for
them to spend as they want. And I think having a better
understanding that sometimes– actually, everyone
wants the same outcome, but how you get to that outcome
can actually really matter. I think that that’s
something that we’re not very good at as a society. MEGAN FINN: Super interesting. So how do you see
these election issues that we’ve been talking about
connecting to broader cyber insecurity trends generally? MATT TAIT: How do you mean? MEGAN FINN: Well, so we
talked about interference in the US elections along
the lines of misinformation, or these very specific
hacks that the Russians seek to sow divisions, or
sow confusion and chaos. To even fabricate
activist groups. Do you see this having
more broad effects beyond the election? Certainly you talked about
the referendum in the UK, and just influence within the
general conversation there. But elections are these very
specific decision points. MATT TAIT: So the point of
these types of misinformation campaigns is in order to
affect the citizenry, in order to affect how people
view specific issues. And elections are one
very obvious example where affecting
people’s viewpoints is going to affect a real world
outcome– i.e. the election. But also referendums are
another good example. But also just general
policy issues. So another good example is
the Mueller investigation. The Mueller investigation
is enormously polarized in the United States. There’s a lot of people who
deeply believe that this is going to fix the United States. There’s a lot of
people who deeply believe that it is
antithetical to the US system, and would like it
to be disbanded. And this is an enormous friction
point within the United States as a policy issue. And that it touches
on foreign governments as well, in this case. And it’s something that is
very easy for these bots– for foreign governments–
to want to affect on this specific issue. So it’s not just elections. It’s issues that, when they get to the national level and become enormous friction points, become something which is
of real interest to them. It’s less the case, once
you start to get down towards the state referenda. But we’ve definitely seen
interference with California, for instance, with trying to
boost the California secession movement. There’s been large
amounts of work put in to try and make that movement
sound bigger than it is, and to fund it and
make it feel bigger. But at the national
level, I think that’s still where a
lot of these concerns are getting driven. MEGAN FINN: Great. So I’m curious– you’ve given
a couple of really interesting talks about the history
of disinformation during the Cold War. And I’m curious if you can talk
a little bit about what you’ve learned by looking at
that history yourself, and what we can take away
from thinking about that? MATT TAIT: Sure. So it’s a fascinating
area of history, actually. But there are two big
things to recognize. First of all, that this is
part of a grand strategy of– there are very serious amounts
of money put into this. There were during
the Soviet era. The Soviet government
was putting in roughly the same
amount of money into disinformation
targeted at the US that the US was putting into the
National Security Agency in total. These are vast, vast amounts of cash. And what they did was
they didn’t just say, I’m going to do this one thing. And then six months
later that didn’t work. Let’s do this one other thing. They were just
doing large numbers of these all of the time. And some of them would work,
and some of them wouldn’t work. And the ones which would work
they would then drive further. And the ones which
wouldn’t work, they would throw them away and
they would try other ideas. And what they would do is
look for compliant journalists who were not going to check
their sources particularly carefully. Who often had an ideological
bent to start off with. And they would feed
them either real things with a massive partisan spin put on them, or real things
combined with forgeries that could be used in order
to drive public opinion. And there’s lots of examples
of that in the history of– MEGAN FINN: In archives. Yeah. MATT TAIT: Yeah. Where things like suggesting
that AIDS was caused by the US government was specifically
a campaign in order to influence Africa,
and to drive people and regimes in Africa
towards the Soviet Union, away from the United States. And things like inserting the
forgery of the US Army Field Manual to say that the US was
torturing people in Vietnam. The purpose of this was in
order to drive public opinion– both in the United
States, but also outside of the United
States– to be more hostile to the United States. And therefore by extension,
closer to the Soviet Union. And they just see
this as part of their general foreign policy. You have your overt
foreign policy done through the Ministry
of Foreign Affairs. But you have this
much wider gray space which is just part of your
standard operating procedures in order to drive people
closer to your alignment, and away from the US. MEGAN FINN: And so much of
that sounds so familiar today. And it just raises the
question for me of, say, the early ’90s through 2014– were we just all asleep at the wheel? Or were these covert operations still being planned and planted? These sorts of seeds
of misinformation being planted in
our public sphere? Or was there a dormant period? MATT TAIT: Both, I think. So I think post Cold War it
dropped off dramatically. But there’s always been sort
of an undercurrent there, and it sort of ramped back
up very massively in 2016. It was quantitatively
quite different to 2016 compared with other
previous elections. But it’s important to recognize
that these things have always been there. And part of the reason
it was as big as it was was because they really didn’t
get that much pushback on it. And I think the US has
forgotten a lot of the lessons that it learned about
deterrence from the Cold War. Especially with the
previous administration. And it often led to very high-volume rhetoric that got, I think,
heard in Moscow quite differently to how the
US intended it to be heard. So for instance, when you phone
up your counterpart in FSB and you say, I see what
you’re doing, they say, so? We don’t care. What are you going
to do about it? Cool. You can tell that
we’re doing this. But we’re not
getting any pushback. And what happened was they
were quite careful and quite strategic as to pushing
back against US provocation, or what they perceived
as US provocations. So we often forget that towards the end of summer 2016 there was a series of leaks
in US newspapers– the Washington Post,
the New York Times– that the US was planning
a retaliatory strike against the Russian
government in the event that they went ahead with interference. And this sounded like potentially the US had access to the Russian grid. That they would be able to turn the lights out for the Russian government. And then in the
middle of this there was this massive leak
of this new group called Shadow Brokers, which
released a bunch of NSA– or allegedly NSA– tools. And part of the purpose of this
was to say, we see you too. We have leverage too. And that caused internal chaos
inside the United States. And it’s had some very
significant knock-on effects, with things like the WannaCry ransomware, so there were real consequences there. But also this trying to do covert signaling I think has been very dangerous. We’ve forgotten
that deterrence requires overt signaling. We have the Open Skies
Treaty in the United States so that the Russian government
can fly over our nukes and see that they’re there. We’re not hiding them. We want them to see
how many we have. We want them to understand
what it is that we have, so that they can plan for it,
so that everybody understands where everything is. When we expel x many
diplomats, they’re going to expel x many diplomats. Everyone understands this,
because it’s overt signaling. And the covert signaling
that went on in 2016 I think was extremely dangerous. And a good example of this was
at the very end of the summer campaign, there was
the Mirai botnet. It was a cybercrime group
which hijacked loads of internet-of-things cameras and used them to take out DNS servers, which are core internet infrastructure
for the east coast. And this took down Amazon
AWS, took down Netflix, took down a bunch of
core infrastructure on the east coast. And this was all
happening in a world where the Washington Post
was talking about the NSA making Moscow go dark. Where Shadow Brokers
had suddenly leaked a bunch of allegedly NSA tools. Where you’d have
everybody talking about the Russian
government hacking the US, and maybe the US doing
some kind of response. And the east coast goes dark. Suddenly everyone’s
panicking and thinking, this sounds like a cyber attack
by the Russian government. And it turns out it wasn’t. It was just this criminal group. And when you’re doing all
of this covert signaling, you really run the risk
of massive escalations that you didn’t expect. And I think the previous
administration was not good at understanding how
to do overt signaling. As a final example of
this, the sanctions that they did right
at the very end of the previous administration
in December of 2016 were really muddied. Where they basically
said we are sanctioning all of these large
collection of groups for all of this collection of facts. And they weren’t
good at saying, we’re doing this bit for this reason. And this bit for this reason. And this bit for this reason. Which means that when they did
that, it wasn’t particularly clear which parts you were trying to signal are things that we can’t do. What are real red lines? What is just you being angry at us? And of course, that was
made much more complicated by the incoming administration’s
messing with that. But understanding
deterrence is something which I think the
previous administration– and certainly the
current administration– haven’t done particularly well. MEGAN FINN: I’m going to invite
people up to the microphones– I think there is one in that corner, and one over here– to ask questions. Am I right? Oh yeah, one at the far end of the room, to ask questions. And I guess– so what would good
deterrence look like right now? While people are queuing up. MATT TAIT: So I think it’s– with the current president’s
current approach to Russia, I think that it’s
very difficult to see how you can have any kind
of plausible deterrence. But in the event that the
president said, for instance, you know what? The Russian government
did hack the DNC, and that that’s an
outrageous infringement on the political process
inside the United States. We’re going to sanction that. We’re going to put in serious
resources into defending it. And in the event that
this happens in 2020, then here’s the
sequence of events which the Russian government
is going to really dislike. So lethal aid to
Ukraine, for instance. Or ramping up US
efforts in Syria. Those would be of extraordinary
foreign policy consequence to the Russian government. They would then see that
as a legitimate threat. But I think part
of the difficulty is that you can’t really
do retrospective deterrence this far after the fact. I don’t think there’s
anything the US government can do to deter 2016. But what you can do is you
can say, going forward, these are all red lines, and
this is what we’re going to do. But the important thing is
that they need to be credible. And at the moment
it’s very difficult with the current
administration to see how they’re going to come
up with a threat that’s going to be credible. MEGAN FINN: Yes. All right. So we’ve got some folks
who have questions. I just ask that people ask
their questions briefly so we have lots of
time to hear answers. I’ll start on this
side with Darma. AUDIENCE: Hi. Thank you for the talk
and the conversation. I’ve been listening and thinking
about Robert McChesney’s book Telecommunications, Mass
Media, and Democracy, which is about the 1920s and
the setup of the broadcast system in the US. And how it very
intentionally was designed to keep Russians
from being able to have access to the US. And it did so from a
structural perspective of how the broadcast
system was designed. So I’m wondering if you’ve
thought about that comparison. And now that everything is
connected to everything, I think we’re not really
prepared for that. But I don’t know what that
brings to mind for you too. Thank you. MATT TAIT: Right. So that is a really
good question. The internet makes a lot
of these problems much more difficult because of the
global nature of the internet. It’s very difficult to– as we’ve seen with the 2016
events, a lot of these events were actually being
done from inside Russia. So these are things that
we can’t kick people out as a consequence of. We can’t send the FBI
around to their house to arrest them in ways that
would have been much more plausible back in say the 1920s. We do have real
problems, I think, trying to get a
handle on the fact that information is now an
extremely global affair. That actually the news
sources that we have are often very international. And the people that
we’re interacting with on social
media in particular are very international. And we can’t have a
system where we just say, here’s a US Twitter
that everybody uses. Because US tech companies,
they have no interest in carving up their platforms,
the specific geographic areas. So I think unless we get
something like that– and I don’t think that there’s
any reason to believe that that’s going to happen– we just have to
cope with the fact that our information sources
aren’t going to be controlled. And perhaps shouldn’t
be controlled. We just have to learn
how to cope with that. MEGAN FINN: There’s
one over this side. Yeah. AUDIENCE: Hi. Thank you for your talk
and the conversation. I just wanted to ask
if you have any insight into the Estonian
online voting system– which is a country
that obviously has 100% online voting– and how that’s playing out
in the current context. And if there’s been any
adversarial pressure on their system. MATT TAIT: Right. So the Estonian system is
actually really interesting. And part of the reason
it’s so interesting is because they’re, of
course, right next to Russia. They’ve had real problems
with the Russian government interfering in their
processes for forever. And one of the ways that
they solved this is they say, we’re going to
have online voting. But it’s not just
ordinary online voting. It’s not online voting where
you just log into a website and provide your credentials,
and then you’re in. Everybody in Estonia has an e-ID card, which contains a private key, which allows you to sign an action as a private citizen. So when you vote, you’re
voting through your computer. You’re not just typing in a
username and password, which can be easily stolen,
or can be hacked, or could be forged by
the Russian government. You’re actually digitally
signing it with an e-ID card. Which means that in the
event that someone did hack one of these
vote tally systems, you’d be able to see immediately
these digital signatures are wrong. In the event they tried
to do ballot stuffing, you’d realize immediately that
actually a lot of these ballots are forged because of the
cryptography– because of the digital systems
that are involved. I do think that Estonia’s a
really good example of where technology is seen as a first
party part of the solution. You can’t solve everything
with technology, but there’s a bunch of
things that we really can solve with technology. And I think it’s very easy
to get sucked into nihilism in this country with how
bad some of the systems are. Most people, when you talk about
e-voting systems, they say, no. These are terrible. We have to go to paper. But then paper is also
pretty terrible for all sorts of other reasons. We can have secure systems. Estonia is a fantastic
example where that works. And we can and should
learn from them, bluntly.
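To make the mechanism described here concrete, below is a minimal sketch in Python (using the widely available cryptography package) of how a tally system can reject a forged or altered ballot when every ballot must carry the voter’s digital signature. This is only an illustration of the general technique, not the actual Estonian protocol: the software-generated keys, the in-memory voter registry, and the unencrypted ballots are simplifying assumptions, since real e-ID private keys live on a smartcard and real ballots are also encrypted.

```python
# Simplified sketch of signature-checked ballots. Not the real Estonian
# e-voting protocol: keys are generated in software here, whereas real
# e-ID private keys never leave the voter's smartcard.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Voter side: in reality this private key sits on the voter's e-ID card.
voter_key = Ed25519PrivateKey.generate()

# Election authority side: registry of each voter's public key.
registry = {"voter-123": voter_key.public_key()}

def cast_ballot(voter_id, choice, private_key):
    """Sign the ballot contents with the voter's private key."""
    payload = f"{voter_id}:{choice}".encode()
    return {"voter_id": voter_id, "choice": choice,
            "signature": private_key.sign(payload)}

def verify_ballot(ballot):
    """Tally side: reject any ballot whose signature does not check out."""
    public_key = registry.get(ballot["voter_id"])
    if public_key is None:
        return False  # no such registered voter
    payload = f"{ballot['voter_id']}:{ballot['choice']}".encode()
    try:
        public_key.verify(ballot["signature"], payload)
        return True
    except InvalidSignature:
        return False

genuine = cast_ballot("voter-123", "candidate-A", voter_key)
tampered = dict(genuine, choice="candidate-B")  # attacker alters the vote

print(verify_ballot(genuine))   # True
print(verify_ballot(tampered))  # False: the forgery is immediately visible
```

The point is the one made above: even if a tally server is compromised, ballots without valid signatures stand out immediately, because an attacker cannot produce those signatures without the voters’ private keys.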
MEGAN FINN: Let’s go back to the side. AUDIENCE: Hi. So correct me if
I’m wrong, but I believe captchas are a
technology that ascertains that a user of a system is
a real human, as opposed to a bot. So why don’t
Twitter and Facebook implement them for each post to
ascertain that it’s not a bot? MATT TAIT: That’s
a great question. The answer is
because the more you ask people to
demonstrate that they’re a human before every
time that they do it– before every time that
they post– the more you’re going to annoy them. And part of the financial
incentives of social media companies is not to
make sure that there are no bots in the
system, but to make sure that you’re spending
as much time on their system as
possible so that you get to see as many ads as possible. Because that’s how
they make money. And in the event that
you had a Facebook or you had a Twitter where every
time you made a post you had to complete one of these
annoying captcha things, what would happen very
quickly is that there would become a
rival service where you didn’t have to do that. And people would quickly
drain towards it. Because if you’re Twitter, if
you’re Facebook, what you want is you want people producing
good content for your system. So not only do those people
spend lots of time watching adverts, but other people
come to their platform to look at those
people generating content to see those adverts. And the more you require them
to do complicated proofs– and you’ve all done captchas. They’re annoying. And increasingly they’re
difficult for us humans to even do. Quite often I get shown
a picture of a street, and they say, click
on all of the signs. And I click on all
the signs, and then it says, you still
haven’t clicked on all, because there’s one
hidden behind a tree. And at some point I feel
like it’s a personal attack that I’m not a real human. But one of the other
problems, of course, with AI is that AI is getting
more and more advanced. The reason these captchas are
becoming more and more annoying is because AI is getting
better and better at fooling these systems into
saying that they are human. And so what’s going to happen is
over time we’re going to end up with these systems where it
becomes almost impossible to get a system where a human
is able to demonstrate that they’re human and an AI isn’t. But even if you could, you’re
annoying your human visitors if you’re asking
them constantly. AUDIENCE: Thank you.
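As a rough sketch of the trade-off being described, here is what a captcha-before-every-post gate might look like on the server side. This is a hypothetical illustration in Python, not any real Twitter or Facebook API; verify_captcha_token is a stand-in for whatever external challenge service a platform would actually call. The point is simply that every post now carries an extra challenge, which is friction for legitimate users as much as for bots.

```python
# Hypothetical per-post captcha gate: an illustration only, not a real platform API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PostResult:
    accepted: bool
    reason: str = ""

def verify_captcha_token(token: str) -> bool:
    """Stand-in for a call out to an external captcha provider."""
    return token == "solved"  # assumption for illustration only

def submit_post(text: str, captcha_token: Optional[str]) -> PostResult:
    # Every single post requires a freshly solved challenge first.
    if captcha_token is None or not verify_captcha_token(captcha_token):
        return PostResult(False, "captcha required")
    # Storing or publishing the post is omitted here.
    return PostResult(True)

print(submit_post("hello world", None))      # rejected: the human must stop and solve a puzzle
print(submit_post("hello world", "solved"))  # accepted
```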
AUDIENCE: Thanks for coming. So there were some media reports that you were interviewed by the special counsel’s office
related to the whole Peter Smith affair. What was that like? Are there any fun
anecdotes you can share? MATT TAIT: Wow. So I have learned today
from Jerome Corsi what it’s like to go on record
with all sorts of things that you shouldn’t
probably go on record with. The question is well met. I’m going to decline to answer. AUDIENCE: Worth a shot. Thanks. MATT TAIT: It’s a great question. Thanks. AUDIENCE: Hi there. I really enjoyed your talk. I know that you were
very careful from the start, and just went out
of your way to be very diplomatic about
the partisan nature of this kind of interference. But it is inherently partisan,
and is a very clear trend– both in the US and
the UK– that there is a clear beneficiary
of this activity, and a clear opponent of it. How do we approach any
of these solutions where half the country– and the major political party– doesn’t seem to be in any
way motivated to give a shit. Because they’re
benefiting from it. And one day they might not
be– because as you say, they’re just trying
to sow chaos. But right now, what
are the levers of power that we can actually
exert any influence? Because the problem isn’t voting
systems in Washington state where we’re all crazy hippies. The problem is the
voting systems in Wisconsin, Pennsylvania, Ohio, which don’t
exactly have motivated parties. MATT TAIT: Right. It’s a really difficult
question, and it’s a really difficult problem. Because fundamentally
the good solution to– if this was
any other system, the solution would
be you elect them out and get different people. But this becomes
self-referential, because to vote
them out requires you to use the election
system that you have. And I think elections in this
country have never been good, bluntly. They’ve never been good. And they were designed
from the ground up to not be particularly fair. And over time they
have got better. And partly that’s through
constitutional amendments. A lot of the constitutional
amendments in the US are specifically
extending the franchise to people that were
previously disenfranchised. And part of the
reason for this is that this happens as
an evolution over time. And I think that this might
be the way that some of these get solved is through
constitutional amendments. But ultimately, no matter how
much these systems are unfair, you can overcome the
unfairness in it in the event that it becomes so
drastic that you end up with a complete tidal
wave that overthrows it. And then those people can
actually reform the system to make it more central. And then that tends
to entrench it. It is a very difficult problem,
because the election system is probably the most difficult
system of all systems to fix– because it’s
inherently political. The people that are in
charge, by definition, have benefited from it. And they are therefore also,
by definition, the least incentivized
to fix it. When elections go wrong,
what you’re really doing is you’re saying that the
election is illegitimate. And the people in power, the
one thing that they can all agree on is that them being
elected is 100% legitimate. It’s the most legitimate
election that has ever been. And so it’s a really
difficult question. And also, when you want the
state to intervene in order to make things better,
you have to recognize that the state that will
be doing the intervening is the current state– which is the state that
you think has problems. So in the event
that you wanted– we’re here in Seattle. So most people here
I’m sure are Democrats. In the event that you think
the federal government should come in and fix a lot
of these state systems, do you really want Donald
Trump coming in and saying, I’m going to fix the
problems in Washington state. Which might be problems
that he’s invented. They might be problems
which are going to try and skew the vote
in a different direction. I think this is the most difficult system to fix, and ultimately it is only
fixed at the ballot box. RYAN CALO: All righty. Well, I just want to say
this has been an absolutely fascinating conversation. We could continue all night. But we are out of time. So please join me in thanking
both Megan and Matt for coming. [APPLAUSE] [MUSIC PLAYING]
