The Bot Wars, begun they have.
Over the past two years, automated social media accounts and fraudulent regulatory filings have been used by anonymous parties to obscure public opinion, distort public discourse, and corrupt the integrity of rulemaking in the United States government.
The legitimacy and utility of the open platforms that federal, state and local governments have built online to empower distributed national populations to read and comment on proposed rules and regulations over the Internet are now under threat from corporate astroturfing, targeted misinformation campaigns by foreign nations, and malicious non-state actors.
While the millions of fake and fraudulent FCC comments have drawn the most attention, for good reason, the half dozen other federal agencies that had fake or fraudulent comments entered into their rulemaking systems over the past year should be viewed as canaries in the coal mine.
Federal Communications Commissioner Jessica Rosenworcel sounded the alarm at the State of the Net Conference in Washington, DC this January, when she said that there has been a systemic effort to corrupt the process by which the public participates in some of the biggest issues in Washington, calling for the Justice Department to investigate the issue and for the federal government to invest in civic infrastructure. I was present for that conference and documented it:
At the State of the Net conference, @JRosenworcel says people from across the USA, across politics persuasions, have found their names & identities stolen & used in the @FCC net neutrality rule making. 2 million total. ~500K from Russian email addresses. #sotn2018 pic.twitter.com/zCUNlssA54
— Alex Howard (@digiphile) January 29, 2018
Over the two years I spent at the Sunlight Foundation, we repeatedly called for transparency regarding how the agencies were investigating or responding to these problems.
The FCC, which was ground zero for the problem, stonewalled us – along with other media and New York State’s attorney general.
In the interim, the problem has continued to fester. If our elected representatives don’t act, sock puppets and bots could break our ability to use the Internet for public comment in rulemakings.
Over the past decade, the redesigns of Regulations.gov and FederalRegister.gov constituted two of the most significant upgrades to public information and participation in the United States, dramatically improving opportunities to learn more about proposed regulations and to weigh in on them.
Site data suggests, however, that both have received relatively little mainstream attention, awareness or participation compared to consumer Internet platforms or apps, particularly measured against the more than 200 million Americans registered to vote. That said, there has never been more potential to solicit and record the feedback of Americans.
As I highlighted earlier, this is primarily grounded in design and human choices, rather than any technical challenge. Many government agencies continue to use social media to push out pictures, press releases, remarks and announcements, rather than links to comment on rules. Just compare what @Interior tweets to what they publish in the Federal Register.
The Federal Communications Commission, which has used its social media presence better than most, has a different challenge: after its ancient ECFS comment system didn’t hold up during the 2014 Open Internet proceeding, the agency chose to redesign it, instead of using Regulations.gov.
Part of that design included an API without terms of service or clear limits on abuse, or mechanisms for enforcement – which might have included blocking IP addresses or revoking keys.
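The enforcement layer the ECFS API lacked is not exotic. A minimal sketch of what it could look like, with a hypothetical key store and rate limits that do not reflect any real FCC system:

```python
import time
from collections import defaultdict, deque

# Hypothetical enforcement layer for a public comment API.
# The limits and stores below are illustrative assumptions, not
# a description of the actual ECFS implementation.

RATE_LIMIT = 100        # max requests per key per window
WINDOW_SECONDS = 60

revoked_keys = set()    # API keys revoked for abuse
blocked_ips = set()     # IP addresses blocked for abuse
requests_by_key = defaultdict(deque)

def allow_request(api_key, ip_address, now=None):
    """Return True if a request may proceed, False if it should be rejected."""
    now = time.monotonic() if now is None else now
    if api_key in revoked_keys or ip_address in blocked_ips:
        return False
    window = requests_by_key[api_key]
    # Drop timestamps that have aged out of the rate-limit window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= RATE_LIMIT:
        return False
    window.append(now)
    return True
```

None of this stops a determined adversary, but it raises the cost of bulk submission and gives the agency levers – revocation and blocking – that published terms of service could back up.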
The other part, however, was not building in more friction for those comments, particularly with respect to identifying that a human had authored a given comment.
As I told Federal Computer Week in December 2017, the FCC’s comment problem is the result of flawed product design and maladministration.
Fake and fraudulent comments are the product of human choices, rather than any technical challenge.
The festering issue has only been compounded by the void in communication from the agency, with little public evidence of investigation or accountability regarding these demonstrated problems and potential solutions.
What to do now?
To state the obvious, there have been many missed opportunities, but it’s not too late for agencies and Congress to start to take simple steps now. If they do not, it’s reasonable to expect that foreign and domestic actors will continue to disrupt, pollute or otherwise create doubt about the legitimacy of public comments in rulemakings.
I think that’s unacceptable, and said as much last week during an event at New America at which Rosenworcel again outlined the challenges, followed by a panel at which we discussed how the comment process could be fixed. You can watch archived video of the event below:
There are some steps that agencies could take today, as Rosenworcel, Matt Cutts (the acting administrator of the U.S. Digital Service) and others have suggested: use improved CAPTCHAs and email confirmations, and adopt multi-factor authentication.
Simply put, there are existing means to authenticate someone online with a higher level of accuracy that have already been used by other agencies. After the IRS system for providing public access to tax transcripts was compromised, the agency took it down and then added another factor of authentication: a cellphone number.
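That second factor boils down to a simple challenge: send a short-lived one-time code to a phone number or email address, and accept the submission only if the code comes back. A sketch of that flow, with a hypothetical in-memory code store (the IRS’s actual implementation is not public):

```python
import hmac
import secrets
import time

CODE_TTL_SECONDS = 600   # codes expire after ten minutes (an assumed policy)

pending_codes = {}       # hypothetical store: contact -> (code, issued_at)

def issue_code(contact):
    """Generate a six-digit one-time code for a phone number or email address."""
    code = f"{secrets.randbelow(10**6):06d}"
    pending_codes[contact] = (code, time.time())
    return code          # in practice, delivered via SMS or email, never echoed back

def verify_code(contact, submitted):
    """Accept the submission only if the code matches, is unexpired, and unused."""
    entry = pending_codes.pop(contact, None)  # pop makes the code single-use
    if entry is None:
        return False
    code, issued_at = entry
    if time.time() - issued_at > CODE_TTL_SECONDS:
        return False
    # Constant-time comparison avoids leaking the code through timing.
    return hmac.compare_digest(code, submitted)
```

Tying a comment to a working phone number or inbox doesn’t prove identity, but it makes submitting two million fraudulent comments vastly more expensive than calling an unauthenticated API in a loop.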
Those approaches are a start, but they shouldn’t be the end of the conversation, as questions at New America made clear.
Public interest advocates expressed legitimate concerns that added friction could disenfranchise the public by making it harder to comment. I share that concern. This is too important to get wrong or continue to neglect.
Here’s the rub: some online activities need to be anonymous, as with whistleblowing. Others can be pseudonymous.
When it comes to financial transactions or health records or any other exchange of highly sensitive data, much more friction in identity verification is necessary.
If a local, state or national government is going to pursue rulemaking online that is meant to be a meaningful gauge of public opinion on a given issue, that feedback needs to be tied to citizens and residents.
If it is not, it’s unfortunately reasonable to expect that foreign and domestic actors will seek to disrupt, pollute or otherwise create doubt about the legitimacy of that feedback.
That identity could be provided by a government entity, like a post office or tax agency, or a private sector entity that has been validated by government, like a telecom company, bank or technology company, or something even more novel, like the blockchain — although there are some serious downsides to radical transparency that should also be weighed.
Legislators, law enforcement and regulators around the world need to understand the depth of the problems that our increasingly connected world has ushered in, and then consider how they could collaborate to protect the integrity of public comments on matters of public interest, neutralize and mitigate propaganda, and prevent stolen identities from being used to commit comment fraud.
The Government Accountability Office’s investigation of what happened at the FCC is necessary, but insufficient.
Every democratic nation and state needs to convene informed debates about the options before it, from adding friction to existing digital government comment systems, to setting civic standards, to ensuring that every one of its citizens has the access and opportunity to be informed and participate.
Around the world, there’s extraordinary, exciting innovation in this space, from digital government in Estonia to Loomio in New Zealand, “liquid democracy” in Europe, biometrics in India, and blockchain pilots.
There is still immense potential for online platforms to play a foundational role in enabling nations, states and cities to nurture more transparent, equitable and just digital mechanisms for ensuring that the consent of the governed is honored in practice, not just principle or rhetoric.
Here’s hoping that somehow, we can find a way to have the necessary conversations and hearings.