This morning, James Grimaldi reported that Ajit Pai, chairman of the Federal Communications Commission, has told two U.S. Senators that he has proposed “to rebuild and re-engineer” the agency’s online electronic comment system “to institute appropriate safeguards against abusive conduct.”
The proposal, made in a letter to Congress that has not yet been publicly disclosed, comes after Grimaldi and his colleagues at the Wall Street Journal and other outlets reported that millions of fake and fraudulent comments were filed in a 2017 rulemaking proceeding on net neutrality.
As FCC Commissioner Jessica Rosenworcel and others have noted, fake and fraudulent comments have also been filed with several other agencies, drawing attention to the need to fix online comment systems.
While I’m glad to hear that the FCC chair is now considering changes to the new ECFS system, he has had ample opportunity to respond to the many inquiries and concerns from the public, the press, a state attorney general, and Congress regarding fake and fraudulent comments filed in the agency’s proceedings.
As I told the Wall Street Journal, however, adding a CAPTCHA to try to prevent spam unfortunately sounds like a solution from the last millennium to a decidedly 21st century set of problems.
As it happens, the World Wide Web Consortium just released a working paper that makes clear how unsuitable a CAPTCHA would be for public comment at a public agency.
Here’s the abstract:
“Various approaches have been employed over many years to distinguish human users of web sites from robots. While the traditional CAPTCHA approach of asking the user to identify obscured text in an image remains common, other mechanisms are gaining in prominence. These approaches generally require users to perform a task believed to be possible for humans and difficult for robots, but the nature of the task inherently excludes many people with disabilities, resulting in an incorrect denial of service to these users. Research findings also indicate that many popular CAPTCHA techniques are no longer particularly effective or secure, so it is necessary to consider alternative approaches to block robots, yet ensure these approaches support access for people with disabilities.”
The FCC has, as far as I can tell, failed to publicly engage stakeholders about this, to acknowledge the validity of the concerns, to explain the steps the agency has taken to investigate them and to make whole the people whose identities were used, or to host any public forums about potential approaches to mitigating the problem. If I missed any of that, mea culpa.
Given the agency’s resources, mission, professed commitment to transparency, and legal mandate to consult the public in its rulemakings, the FCC needs to aim much higher than a CAPTCHA or even a reCAPTCHA, particularly given these concerns.
It could work with the USDS, 18F, and the country’s technologists to consider how existing platforms like Regulations.gov work. It could propose ways to authenticate public comments online that are based upon principles of public accessibility, privacy, security, and transparency.
A responsive website built on an API and integrated with something like Login.gov is one direction. Using multi-factor authentication with email confirmation and an app is another.
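To make the email-confirmation direction concrete, here is a minimal sketch of how a comment system could tie a one-time confirmation token to a commenter’s email address and draft comment, so a comment is only published after the emailed token is returned. This is purely illustrative: the function names, the secret-key handling, and the docket-style comment ID are my own assumptions, not any real FCC or Login.gov API.

```python
import hashlib
import hmac
import secrets

# Illustrative server-side secret; a real deployment would manage and
# rotate this key properly rather than generating it at import time.
SECRET_KEY = secrets.token_bytes(32)

def issue_token(email: str, comment_id: str) -> str:
    """Derive a one-time token bound to the commenter's email and draft comment.

    The token would be sent to the commenter's email address; the comment
    is held unpublished until the token comes back.
    """
    msg = f"{email}:{comment_id}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

def verify_token(email: str, comment_id: str, token: str) -> bool:
    """Constant-time check that the returned token matches this submission."""
    expected = issue_token(email, comment_id)
    return hmac.compare_digest(expected, token)
```

Because the token is an HMAC over both the email and the comment ID, a bot that harvests one confirmation link cannot reuse it to confirm comments filed under other identities; and unlike a CAPTCHA, an emailed link works with screen readers and other assistive technology.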
I’ve asked the FCC to disclose Pai’s letter to Congress and will share it here, along with any needed comment on its substance.
Subsequently, as with other congressional correspondence, the FCC published the letter from Senators Merkley and Toomey and Pai’s response on its website. (Ars Technica was the first outlet to publish Pai’s letter online as part of its story on the FCC’s proposal on comments. As was the case during my tenure at Sunlight, the FCC’s taxpayer-funded public information officers and officials never responded to my inquiry regarding fixing online comments or any investigation into fraudulent filings.)
As reported, in the letter, Pai wrote that he agrees that the FCC “should—and if we receive the requisite approval, will—incorporate CAPTCHA or a similar mechanism to prevent bots from submitting comments.”
You make a good point that CAPTCHA/ReCAPTCHA aren’t very accessible to disabled users. By using CAPTCHA, we exclude a segment of the population, biasing our results.
However, wouldn’t additional steps like requiring an email/two-factor authentication also bias results, since more invested/passionate/extreme users are more likely to continue through the funnel? The W3C working paper was a great read, but I didn’t notice any statistics on what percentage of users would be adversely affected by CAPTCHA.
I’m assuming that “% of users unable to use CAPTCHA” is less than “% of users who wouldn’t complete account creation/login” – however, the nature of the “exclusions” is obviously very different, and we would never ignore disabled users simply because they are a small group.
It would be fascinating to look at experiments on how survey results change as we make the system more/less accessible.