“The consumer experience shared in the narrative is the heart and soul of the complaint,” said CFPB Director Richard Cordray, in a statement. “By publicly voicing their complaint, consumers can stand up for themselves and others who have experienced the same problem. There is power in their stories, and that power can be put in service to strengthen the foundation for consumers, responsible providers, and our economy as a whole.”
The CFPB was given authority and responsibility for handling consumer complaints about financial services by the Dodd-Frank Wall Street Reform and Consumer Protection Act more than three years ago. Today, the CFPB released an overview of the complaints the agency has handled since it opened on July 21, 2011. (The graphics atop this post and below are sourced from that analysis.) According to the data, the CFPB handled approximately 395,300 consumer complaints through June 30, 2014.
According to the overview, the Web has been a key channel for people to file complaints with the CFPB: 56% of all consumer complaints were submitted through the CFPB’s website, 10% came in via telephone, and the balance arrived by mail, email, and fax. The rest of the report contains tables and data that break down complaints by type, actions taken, company responses, and consumers’ feedback about those responses.
By releasing the narratives themselves, not just the number of complaints, the agency holds that several benefits will accrue: more context for each complaint, clearer trends, more informed consumer decisions, and competition based on consumer satisfaction. In the release announcing the proposed policy, the CFPB emphasized that consumers must opt in to share these stories: “The CFPB would not publish the complaint narrative unless the consumer provides informed consent. This means that when consumers submit a complaint through consumerfinance.gov, they would have to affirmatively check a consent box to give the Bureau permission to publish their narrative. At least initially, only narratives submitted online would be available for the opt-in.”
Consumers could subsequently withdraw their consent, in which case the regulator would remove the complaint from its website. Companies will be given the opportunity to publish a written response that would appear next to a given consumer’s story.
The agency’s proposal states that no personal information will be shared: “complaints would be scrubbed of information such as names, telephone numbers, account numbers, Social Security numbers, and other direct identifiers.”
Getting that right is important — watch for powerful financial companies, their lobbyists and sympathetic politicians to raise privacy concerns about the proposal in DC in the weeks to follow.
While it may not be apparent at first glance, the collection and publication of these complaints would also have an important, if indirect, effect upon the market for financial services. By collecting, structuring and releasing consumer complaints as data, the CFPB could add crucial business intelligence to the marketplace for these services. This isn’t a novel model: the Consumer Product Safety Commission already publishes a public complaint database at SaferProducts.gov, enabling merchants and services like Consumer Reports to give people crucial information about their purchases. The SEC and FINRA would be well-advised to release financial advisor data in a similar fashion. Someday, complaints submitted from mobile e-patients may have a similarly powerful corrective effect in the market for health care goods and services.
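To make that business-intelligence point concrete, here is a minimal sketch of how an analyst might explore complaint data once it is published in bulk. It assumes a CSV export and column names like “Product,” “Company,” and “Consumer complaint narrative”; the download URL is a placeholder, not a confirmed CFPB address.

```python
# Hypothetical sketch: summarize consumer complaints by product and company.
# The CSV URL and the column names are assumptions for illustration; check
# consumerfinance.gov for the actual bulk export and schema.
import pandas as pd

COMPLAINTS_CSV = "https://example.com/cfpb/complaints.csv"  # placeholder URL

def summarize_complaints(path: str) -> None:
    df = pd.read_csv(path)
    # Complaint volume by financial product (mortgages, credit cards, debt collection...)
    print(df["Product"].value_counts().head(10))
    # Companies drawing the most complaints
    print(df["Company"].value_counts().head(10))
    # Share of complaints where the consumer opted in to publish a narrative
    if "Consumer complaint narrative" in df.columns:
        share = df["Consumer complaint narrative"].notna().mean()
        print(f"Complaints with a published narrative: {share:.1%}")

if __name__ == "__main__":
    summarize_complaints(COMPLAINTS_CSV)
```

Even a simple tally like that, run by a lender, a consumer group or a journalist, is the kind of market signal the agency is betting on.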
Has the Internet shown up to comment on the Federal Communications Commission’s rulemaking around net neutrality, as I wondered when the Open Internet proceeding began? Well, yes and no. According to FCC press secretary Kim Hart, the FCC has received some 677,000 total public comments on net neutrality ahead of tomorrow’s deadline.
@digiphile @pd_w @FCC Update: FCC has received approximately 677,000 comments on #NetNeutrality so far. Includes comments to docket + email
As Wall Street Journal reporter Gautham Nagesh tweeted, the FCC’s action on media deregulation a decade ago received the most public comments of any of the agency’s rulemakings to date, with two million or so comments.
What this total number means in practice, however, is that network neutrality advocates have failed to stimulate public interest or engagement with this issue, despite “warnings about the FCC’s fast lane” in the New York Times. While that is in part because net neutrality is to many people a “topic that generally begets narcolepsy,” to use David Carr’s phrase, it may also be because cable, broadcast and radio news haven’t covered the issue, much less shown the email address or offered a short URL for people to officially comment. The big jump in the graphic below after June 1st can reasonably be attributed to John Oliver’s segment on this issue on his HBO show, not other media.
That doesn’t mean that the comments haven’t flowed fast and furious at times, taking down the FCC’s ECFS system after Oliver’s show. (Shenanigans may have been at fault with the outage, too, as Sam Gustin reported at Vice.)
“During the past 60 days, the Commission has received a large number of comments from a wide range of constituents,” wrote FCC chief information officer David Bray on the FCC blog, where he reported the rate and total number of email comments on the Open Internet proceeding as open data and shared two graphics, including the one below.
Chairman Tom Wheeler and I both enthusiastically support open government and open data, so with this post I wanted to share the hourly rate of comments submitted into the FCC’s Electronic Comment Filing System (ECFS) since the start of public comments on the FCC’s Open Internet Proceeding (Proceeding 14-28). Here’s a link to a Comma Separated Values (CSV) text file providing those hourly rates for all comments submitted to ECFS and those specific to the Open Internet Proceeding; below is a graphical presentation of that same data.
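For readers who want to poke at the hourly-rate file Bray linked, a short script along these lines would tally and chart the submissions. The column names (“hour,” “open_internet_comments”) are assumptions for illustration, since the CSV’s exact layout isn’t reproduced here.

```python
# Sketch: chart daily comment volume from the FCC's CSV of hourly ECFS submission rates.
# Column names are assumptions; adjust them to match the header row of the actual file.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("ecfs_hourly_rates.csv", parse_dates=["hour"])
print("Total Open Internet comments:", df["open_internet_comments"].sum())

# Daily totals make the post-June 1st jump easier to see than raw hourly counts.
daily = df.set_index("hour")["open_internet_comments"].resample("D").sum()
daily.plot(title="Open Internet (14-28) comments per day")
plt.ylabel("Comments")
plt.tight_layout()
plt.savefig("open_internet_comments.png")
```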
I’m hoping we see the content of those public comments, too. I’ve asked.
Bray also wrote that the FCC’s inbox and (aged) public comment system will remain open and that the agency continues to “invite engagement from all interested parties.” He also indicated that the FCC will be considering ways to make it easier for third parties to scrape the comment data from the system.
The FCC IT team will also look into implementing an easier way for electronic “web scraping” of comments available in ECFS for comment downloads greater than 100,000 comments at once as we work to modernize the FCC enterprise.
The number of people submitting comments is impressive, underscoring the importance of this issue and the critical role public engagement plays in the Commission’s policy-making process. When the ECFS system was created in 1996, the Commission presumably didn’t imagine it would receive more than 100,000 electronic comments on a single telecommunications issue. Open government and open data is important to our rapidly changing times both in terms of the pace of technology advances and the tightening of budgets in government. I hope you find this information useful.
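Bray’s note about easier “web scraping” hints at what bulk third-party access could eventually look like. The endpoint, parameters and response format in the sketch below are entirely hypothetical (ECFS offered no such documented bulk API at the time), but the paginated-download pattern is the standard way to pull a large docket in batches.

```python
# Hypothetical sketch of a paginated bulk download from a comment system.
# The endpoint, query parameters and JSON shape are invented for illustration only.
import json
import time
import urllib.request

BASE_URL = "https://example.fcc.gov/ecfs/filings"  # placeholder endpoint
PROCEEDING = "14-28"
PAGE_SIZE = 500

def fetch_page(offset: int) -> list:
    url = f"{BASE_URL}?proceeding={PROCEEDING}&limit={PAGE_SIZE}&offset={offset}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp).get("filings", [])

def download_all() -> list:
    filings, offset = [], 0
    while True:
        page = fetch_page(offset)
        if not page:
            break
        filings.extend(page)
        offset += PAGE_SIZE
        time.sleep(1)  # be polite to an already strained system
    return filings

if __name__ == "__main__":
    print(f"Downloaded {len(download_all())} filings for proceeding {PROCEEDING}")
```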
In the meantime, you have until tomorrow to participate.
UPDATE: On the afternoon of July 15th, the FCC extended the Open Internet comment period until Friday, July 18 at midnight. It appears that online interest was a large part of the decision. FCC press secretary Kim Hart:
“The deadline for filing submissions as part of the first round of public comments in the FCC’s Open Internet proceeding arrived today. Not surprisingly, we have seen an overwhelming surge in traffic on our website that is making it difficult for many people to file comments through our Electronic Comment Filing System (ECFS). Please be assured that the Commission is aware of these issues and is committed to making sure that everyone trying to submit comments will have their views entered into the record. Accordingly, we are extending the comment deadline until midnight Friday, July 18.”
One additional clarification from Hart, regarding the total number of comments and public access to their contents: emails are being entered into the official docket in ECFS but are not being filed individually in the docket. “A large number of them are put into a big PDF and then that single PDF is filed into ECFS, rather than filing them one by one,” she said, via email. “So they will all be in the docket, but in a couple dozen large files rather than individually. Some are already entered, but there’s a bit of a lag.”
Update: Per Hart, as of Thursday morning, the FCC has received a cumulative total of 968,762 comments: 369,653 to ECFS and 599,109 emails to the Open Internet inbox.
“This is the most comments the FCC has received in a rulemaking proceeding,” said Hart.
Update: As of Friday at 4 pm, 1,062,000 comments had been filed in the FCC’s Open Internet proceeding.
Statement from FCC Chairman Tom Wheeler regarding this outpouring of comments:
“When the Commission launched its effort to restore Open Internet protections that were struck down in January, I said that where we end up depends on what we learn during this process. We asked the public a fundamental question: “What is the right public policy to ensure that the Internet remains open?” We are grateful so many Americans have answered our call. Our work is just beginning as we review the more than one million comments we have received. There are currently no rules on the books to protect an Open Internet and prevent ISPs from blocking or degrading the public’s access to content. There is no question the Internet must remain open as a platform for innovation, economic growth and free expression. Today’s deadline is a checkpoint, not the finish line for public comment. We want to continue to hear from you. “
Statement from FCC spokesman Mark Wigfield regarding the process for reviewing these comments:
“We appreciate the high level of public engagement on the Open Internet proceeding and value the feedback we have received. The FCC has a great deal of experience handling complicated issues that draw extensive public comment. Managing this flood of information requires a combination of good technology, good organization and good people. We are currently examining a number of approaches. The FCC will deploy staff from across many bureaus and offices who have the training, organizational expertise, and track record of success sorting through large volumes of information to ensure that we account for all views in the record.”
Update: At the close of the initial comment period of the Open Internet proceeding, the FCC had received 1,067,779 comments: 446,843 were filed through the Electronic Comment Filing System, and 620,936 through the Open Internet inbox. Now, the “reply” period begins, and will run through September 10. Update: the FCC extended the reply period until September 15th to allow more time for the public to comment.
“The comment and reply deadlines serve to get public input to the FCC in a timely and organized way to provide more time for analysis.
However, comments are permitted in this proceeding any time up until a week before a vote is scheduled at an Open Meeting (the “Sunshine” period under the Government in the Sunshine Act).”
This post has been updated with more numbers, links and commentary, including the headline.
Short version: The board found little legally awry with surveillance conducted under Section 702 of FISA, which permits the federal government to compel United States companies to assist it in conducting surveillance targeting foreign people and entities, noting that it was a strong, effective tool for counterterrorism. The extensive report explores the legal rationales for such surveillance and lists ten recommendations. The scope of digital surveillance was detailed in The Washington Post on Monday, which reported that only four countries in the world (the UK, Canada, New Zealand and Australia) are not subject to the surveillance enabled by this legal authority to intercept communications.
“Section 702 of FISA has not received the same level of attention as the 215 metadata collection program, largely because the program is not directly targeted at U.S. persons. However, under Section 702, the government can collect the contents of communications (for example examining email and other communications), rather than mere metadata, which it collects under Section 215.”
“702 is also a more powerful program because under it the government can collect the content of U.S. persons’ communications, if those persons are communicating with a foreign target. This means that U.S. persons’ communications can be incidentally collected by the agency, such as when two non-U.S. persons discuss a U.S. person. Communications of or concerning U.S. persons that are acquired in these ways may be retained and used by the government, subject to applicable rules and requirements. The communications of U.S. persons may also be collected by mistake, as when a U.S. person is erroneously targeted or in the event of a technological malfunction, resulting in “inadvertent” collection. In such cases, however, the applicable rules generally require the communications to be destroyed. Another circumstance where 702 collection has raised concerns is the collection of so-called “about” communication. An “about” communication is one in which the selector of a targeted person (such as that person’s email address) is contained within the communication but the targeted person is not necessarily a participant in the communication.” The PCLOB addresses each of these issues in its report.
The PCLOB did find that “certain aspects of the program’s implementation raise privacy concerns,” specifically the “scope of the incidental collection of U.S. persons’ communications” when intelligence analysts targeted other individuals or entities.
As Josh Gerstein reported in Politico, the PCLOB “divided over key reforms to government collection of large volumes of email and other data from popular web businesses and from the backbone of the Internet. A preliminary report released Tuesday night shows that some of the proposals for changes to the Section 702 program caused a previously unseen split on the five-member Privacy and Civil Liberties Oversight Board: Two liberal members of the commission urged more aggressive safeguards, but a well-known privacy activist on the panel joined with two conservatives to withhold official endorsement of some of those changes.”
As Gerstein pointed out in a tweet, that means the reforms proposed in the House of Representatives go further than those recommended by the independent, bipartisan agency within the executive branch vested with the authority “to review and analyze actions the executive branch takes to protect the Nation from terrorism, ensuring the need for such actions is balanced with the need to protect privacy and civil liberties” and “ensure that liberty concerns are appropriately considered in the development and implementation of laws, regulations, and policies related to efforts to protect the Nation against terrorism.”
Put another way: House of Representatives endorsing more aggressive 702 reforms than Privacy & Civil Liberties panel http://t.co/iRsJBOcg4X
Perhaps even more problematically, the PCLOB wrote in the report that “the government is presently unable to assess the scope of the incidental collection of U.S. person information under the program.”
As Matt Sledge observed in the Huffington Post, the report’s authors “express frustration that the NSA and other government agencies have been unable to furnish estimates of the incidental collection of Americans’ communications, which ‘hampers attempts to gauge whether the program appropriately balances national security interests with the privacy of U.S. persons.’”
But without signs of abuse, the board concludes privacy intrusions are justified in protecting against threats to the U.S. Nevertheless, the board suggests that the government take on the ‘backdoor searches’ that have alarmed Wyden. In those searches, the government searches through the content of communications collected while targeting foreigners for search terms associated with U.S. citizens and residents. The House voted in June to end such searches. The searches ‘push the program close to the line of constitutional reasonableness,’ the privacy board report says, but it doesn’t recommend ending them.
Privacy and civil liberties advocates issued swift expressions of dismay about the constitutionality of the surveillance and questioned the strength of the recommendations.
“The Board’s report is a tremendous disappointment,” said Nuala O’Connor, the president of the Center for Democracy and Technology, in a statement. “Even in the few instances where it recognizes the privacy implications of these programs, it provides little reassurance to all who care about digital civil liberties. The weak recommendations in the report offer no serious reform of government intrusions on the lives of individuals. It also offers scant support to the U.S. tech industry in its efforts to alleviate customer concerns about NSA surveillance, which continue to harm the industry in the global marketplace,” she added.
“If there is a silver lining, it is that the Board recognized that surveillance of people abroad implicates their human rights, as well as the constitutional rights of people in the U.S.,” said Greg Nojeim, director of the Center’s Project on Freedom, Security and Technology. “However, the Board defers until a future date its consideration of human rights and leaves it to Congress to address the important constitutional issues.”
“If the Board’s last report on the bulk collection of phone records was a bombshell, this one is a dud,” said Kevin Bankston, policy director of New America’s Open Technology Institute (OTI).
“If the Board’s last report on the bulk collection of phone records was a bombshell, this one is a dud. The surveillance authority the Board examined in this report, Section 702 of 2008’s FISA Amendments Act, is in many ways much more worrisome than the bulk collection program. As the Board itself explains, that law has been used to authorize the NSA’s wiretapping of the entire Internet backbone, so that the NSA can scan untold numbers of our emails and other online messages for information about tens of thousands of targets that the NSA chooses without individualized court approval. Yet the reforms the Board recommends today regarding this awesome surveillance power are much weaker than those in their last report, and essentially boil down to suggesting that the government should do more and better paperwork and develop stricter internal protocols as a check against abuse.
“As Chief Justice Roberts said just last week, “the Founders did not fight a revolution to gain the right to government agency protocols,” they fought to require search warrants that are based on probable cause and specifically identify who or what can be searched. Yet as we know from documents released earlier this week, government agents are searching through the data they’ve acquired through this surveillance authority–an authority that was sold to Congress as being targeted at people outside the US–tens of thousands of times a year without having to get a warrant first.
“The fact that the Board has endorsed such warrantless rummaging through our communications, just weeks after the House of Representatives voted almost three to one to defund the NSA’s “backdoor” searches of Americans’ data, is a striking disappointment. The Board is supposed to be an independent watchdog that aggressively seeks to protect our privacy against government overreach, rather than undermining privacy by proposing reforms that are even weaker than those that a broad bipartisan majority of the House has already endorsed.
“We are grateful to the Board for its last report and are grateful to them now for laying out, in the clearest and most comprehensive way we’ve seen so far, exactly how the NSA is using its surveillance authority. But Congress shouldn’t wait for the NSA to take the Board’s weak set of recommendations and get its own house in order. Congress should instead move forward with strong reforms that protect our privacy and that tell the NSA, as the Supreme Court told the government last week: if you want our data you need to come back with a warrant.”
The Electronic Frontier Foundation was similarly critical. Hiding behind the “complexity” of the technology, the foundation argued, the board gives short shrift to the very serious privacy concerns that the surveillance has rightly raised for millions of Americans. The board also deferred considering whether the surveillance infringed the privacy of many millions more foreigners abroad.
The board skips over the essential privacy problem with the 702 “upstream” program: that the government has access to, or is acquiring, nearly all communications that travel over the Internet. The board focuses only on the government’s methods for searching and filtering out unwanted information. This ignores the fact that the government is collecting and searching through the content of millions of emails, social networking posts, and other Internet communications, steps that occur before the PCLOB analysis starts. This content collection is the centerpiece of EFF’s Jewel v. NSA case, a lawsuit battling government spying filed back in 2008.
Trevor Timm, writing in the Guardian, said the PCLOB “chickened out of making any real reform proposals” and questioned why one member of the panel didn’t support more aggressive recommendations:
“More bizarrely, one of the holdouts on the panel for calling for real reform is supposed to be a civil liberties advocate. The Center for Democracy and Technology’s vice president, James Dempsey, had the chance to side with two other, more liberal members on the four-person panel to recommend the FBI get court approval before rummaging through the NSA’s vast databases, but shamefully he didn’t.
Now, as the Senate takes up a weakened House bill along with the House’s strengthened backdoor-proof amendment, it’s time to put focus back on sweeping reform. And while the PCLOB may not have said much in the way of recommendations, now Congress will have to. To help, a coalition of groups (including my current employer, Freedom of the Press Foundation) have graded each and every representative in Washington on the NSA issue. The debate certainly isn’t going away – it’s just a question of whether the public will put enough pressure on Congress to change.”
Editor’s note: This post has been substantially rewritten. More statements were added, and the headline has been amended.
The Department of Energy announced that its Buildings Performance Database has exceeded a milestone of 750,000 building records, making it the world’s largest public database of real buildings’ energy performance information.
The Department of Energy launched a National Geothermal Data System, a “resource that contains enough raw geoscience data to pinpoint elusive sweet spots of geothermal energy deep in the earth, enabling researchers and commercial developers to find the most promising areas for geothermal energy. Access to this data will reduce costs and risks of geothermal electricity production and, in turn, accelerate its deployment.”
The Department of Energy released a study “which identified 65-85 gigawatts of untapped hydropower potential in the United States. Accompanying the release of this report, Oak Ridge National Laboratory has released detailed data resulting from this study.”
Energy Secretary Ernie Moniz announced that WattBuddy won the Department of Energy’s “Apps for Energy” contest, the second part of its year-long American Energy Data Challenge.
The U.S. Environmental Protection Agency (EPA) released the AVoided Emissions and geneRation Tool (AVERT), “a free software tool designed to help state and local air quality planners evaluate county-level emissions displaced at electric power plants by efficiency and renewable energy policies and programs.”
7 new utilities and state-wide energy efficiency programs adopted the Green Button standard, including Seattle City Light, Los Angeles Department of Water and Power, Green Mountain Power, Wake Electric, Hawaiian Electric Company, Maui Electric Company, Hawai’i Electric Light Company, and Hawaii Energy.
Pivotal Labs collaborated with NIST and EnergyOS to create OpenESPI, an open source implementation of the Green Button standard.
7 electric utilities “agreed to the development and use of a voluntary open standard for the publishing of power outage and restoration information. The commitment of utilities to publish their already public outage information as a structured data in an easy-to-use and common format, in a consistent location, will make it easier for a wide set of interested parties—including first responders, public health officials, utility operations and mutual assistance efforts, and the public at large—to make use of and act upon this important information, especially during times of natural disaster or crisis.” iFactor Consulting will support it and, notably, Google will use the data in its Crisis Maps.
Philadelphia, San Francisco and Washington D.C. will use the Department of Energy’s open source Standard Energy Efficiency Data (SEED) platform to publish data collected through benchmarking disclosure of building energy efficiency.
This morning, Adam Liptak reported at the New York Times that the Supreme Court has been quietly editing its legal decisions without notice or indication. According to Richard J. Lazarus, a Harvard law professor whom Liptak interviewed about a new study examining the issue, these revisions include “truly substantive changes in factual statements and legal reasoning.”
The court does warn readers that early versions of its decisions, available at the courthouse and on the court’s website, are works in progress. A small-print notice says that “this opinion is subject to formal revision before publication,” and it asks readers to notify the court of “any typographical or other formal errors.”
But aside from announcing the abstract proposition that revisions are possible, the court almost never notes when a change has been made, much less specifies what it was. And many changes do not seem merely typographical or formal.
Four legal publishers are granted access to “change pages” that show all revisions. Those documents are not made public, and the court refused to provide copies to The New York Times.
The Supreme Court secretly editing the legal record seems like a big deal to me. (Lawyers, professors, court reporters, tell me I’m wrong!)
To me, this story highlights the need for, and eventually the use of, data and software to track changes in a public, online record of Supreme Court decisions.
Static PDFs that are edited without notice, data, or any indication of changes don’t seem good enough for the judicial branch of a constitutional republic in the 21st century.
Just as the U.S. Code and state and local codes are constantly updated and consulted by lawyers, courts and the people, the Supreme Court’s decisions could be published and maintained online at SupremeCourt.gov as a living body of law, so that they may be read and consulted by all.
Embedded and integrated into those decisions and codes would be a record of the changes to them: the metadata of the actions of the republic’s lawmaking and law-interpreting institutions.
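A rough sketch of what that tracking could look like in practice: given two published versions of the same opinion as plain text, a few lines of standard-library Python will surface every line that changed. The file names are placeholders.

```python
# Sketch: produce a human-readable diff between two versions of a published opinion.
# "opinion_slip.txt" and "opinion_revised.txt" are placeholder file names.
import difflib
from pathlib import Path

def diff_opinions(old_path: str, new_path: str) -> str:
    old = Path(old_path).read_text(encoding="utf-8").splitlines()
    new = Path(new_path).read_text(encoding="utf-8").splitlines()
    return "\n".join(
        difflib.unified_diff(old, new, fromfile=old_path, tofile=new_path, lineterm="")
    )

if __name__ == "__main__":
    print(diff_opinions("opinion_slip.txt", "opinion_revised.txt"))
```

A public archive that stored every released version of every opinion, in a version control system like Git, for instance, would make such comparisons automatic rather than the private privilege of a few legal publishers.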
Residents of the District of Columbia now have a new way to comment on proposed legislation before the City Council: MadisonDC. Today, David Grosso, an at-large D.C. Councilmember, introduced the new initiative to collaboratively draft laws online in a release and a video on YouTube.
“As we encourage more public engagement in the legislative process, I hope D.C. residents will take a moment to log onto the Madison project,” said Councilmember Grosso. “I look forward to seeing the public input on my proposed bills.”
MadisonDC has its roots in the first Congressional hackathon, back in 2011. The event spawned a beta version of the Madison Project, an online platform where lawmakers could crowdsource legislative markup. It was deployed first by the office of Representative Darrell Issa, crowdsourcing comments on several bills. The code was subsequently open sourced and has since been deployed by the OpenGov Foundation as a way to publish municipal codes online, along with other uses.
“We are excited to support Councilmember Grosso’s unprecedented efforts to welcome residents – and their ideas – directly into the local lawmaking process,” said Seamus Kraft, co-founder & executive director of The OpenGov Foundation, on the nonprofit organization’s blog. “But what really matters is that we’re going to produce better City Council bills, with fewer frustrations and unintended consequences. These three bills are only a start. The ultimate goal of MadisonDC is transforming D.C.’s entire policymaking machine for the Internet Age, creating an end-to-end, on-demand collaboration ecosystem for both citizens and city officials. The possibilities are limitless.”
In January 2014, the IRS quietly introduced a new feature at IRS.gov that enabled Americans to download their tax transcript over the Internet. Previously, filers could request a copy of the transcript (not the full return) but had to wait 5-10 business days to receive it in the mail. For people who needed more rapid access for applications, the delay could be critical.
What’s a tax transcript?
It’s a list of the line items that you entered onto your federal tax return (Form 1040), as it was originally filed to the IRS.
Wait, we couldn’t already download a transcript like this before 2014?
Nope. Previously, filers could request a copy of the transcript (not the full return) but they would have to wait 5-10 business days to receive it in the mail.
Why did this happen now?
The introduction of the IRS feature coincided with a major Department of Education event focused on opening up such data. A U.S. Treasury official said that the administration was doing that to make it “easier for student borrowers to access tax records he or she might need to submit loan applications or grant applications.”
Why would someone want their tax transcript?
As the IRS itself says, “IRS transcripts are often used to validate income and tax filing status for mortgage applications, student and small business loan applications, and during tax preparation.” It’s pretty useful.
OK, so what do I do to download my transcript?
Visit “Get Transcript” and register online. You’ll find that the process is very similar to setting up online access for a bank account. You’ll need to choose a passphrase, a pass image and security questions, and then answer a series of questions about your life, like where you’ve lived. If you write them down, store them somewhere safe and secure offline, perhaps with your birth certificate and other sensitive documents.
Wait, what? That sounds like a lot of private information.
True, but remember: the IRS already has a lot of private data about you. These questions are designed to prevent someone else from setting up a fake account in your name and stealing your information. If you’re uncomfortable with answering these questions, you can request a print version of your transcript. To do so, you’ll need to enter your Social Security number, date of birth and street address online. If you’re still uncomfortable doing so, you can contact or visit the IRS in person.
So is this safe?
It’s probably about as safe as online banking. Virtually nothing you do online is without risk. Make sure you 1) go to the right website, 2) connect securely and 3) protect the transcript, just as you would paper tax records. Here’s what the IRS told me about their online security:
“The IRS has made good progress on oversight and enhanced security controls in the area of information technology. With state-of-the-art technology as the foundation for our portal (e.g. irs.gov), we continue to focus on protecting the PII of all taxpayers when communicating with the IRS.
However, security is a two-way street with both the IRS and users needing to take steps for a secure experience. On our end, our security is comparable to leaders in private industry.
Our IRS2GO app has successfully completed a security assessment and received approval to launch by our cybersecurity organization after being scanned for weaknesses and vulnerabilities.
Any personally identifiable information (PII) or sensitive information transmitted to the IRS through IRS2Go for refund status or tax record requests uses secure communication channels that meet or exceed federal requirements for encryption. No PII is passed back to the taxpayer through IRS2GO and no PII is stored on the smartphone by the application.
When using our popular “Where’s My Refund?” application, taxpayers may notice just a few of our security measures. The URL for Where’s My Refund? begins with https. Just like in private industry, the “s” is a key indicator that a web user should notice indicating you are in a “secure session.” Taxpayers may also notice our message that we recommend they close their browser when finished accessing your refund status.
As we become a more mobile society and able to link to the internet while we’re on the go, we remind taxpayers to take precautions to protect themselves from being victimized, including using secure networks, firewalls, virus protection and other safeguards.
We always recommend taxpayers check with the Federal Trade Commission for the latest on reporting incidents of identity theft. You can find more information on our website, including tips if you believe you have become the victim of identity theft.”
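The agency’s point about the “https” prefix can be checked programmatically as well as by eye. Here’s a minimal sketch, using only the Python standard library, that confirms a connection is encrypted and that the site presents a valid certificate; it’s an illustration of the principle, not an IRS-provided tool.

```python
# Sketch: confirm a connection uses HTTPS and a valid, verified certificate.
import ssl
import socket
from urllib.parse import urlparse

def check_secure(url: str) -> None:
    parts = urlparse(url)
    if parts.scheme != "https":
        raise ValueError(f"Not an HTTPS URL: {url}")
    context = ssl.create_default_context()  # verifies the certificate chain and hostname
    with socket.create_connection((parts.hostname, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=parts.hostname) as tls:
            cert = tls.getpeercert()
            subject = dict(item[0] for item in cert["subject"])
            print("Secure session established with", parts.hostname)
            print("Certificate issued to:", subject.get("commonName"))

check_secure("https://www.irs.gov/Individuals/Get-Transcript")
```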
What do I do with the transcript?
If you download tax transcripts or personal health information to a mobile device, laptop, tablet or desktop, set a passcode and enable full-disk encryption, where available, on every machine it’s on. Leaving your files unprotected on computers connected to the Internet is like leaving the door to your house unlocked with your tax returns and medical records on the kitchen table.
I got an email from the IRS that asks me to email them personal information to access my transcript. Is this OK?
Nope! Don’t do it: it’s not them. The new functionality will likely inspire criminals to create mockups of the government website that look similar and then send phishing emails to consumers, urging them to “log in” to fake websites. You should know that the IRS “does not send out unsolicited e-mails asking for personal information.” If you receive such an email, consider reporting the phishing to the IRS. Start at www.irs.gov/Individuals/Get-Transcript every time.
I tried to download my transcript but it didn’t work. What the heck?
You’re not alone. I had trouble using an Apple computer. Others have had technical issues as well.
Here’s what the IRS told me: “As a web application Get Transcript is supported on most modern OS/browser combinations. While there may be intermittent issues due to certain end-user configurations, IRS has not implemented any restrictions against certain browsers or operating systems. We are continuing to work open issues as they are identified and validated.”
A side note from the agency: “For the best user experience, taxpayers may want to try up-to-date versions of Internet Explorer and a supported version of Microsoft Windows; however, that is certainly not a requirement.”
What does that mean in practice? That not every modern OS/browser combination is supported, potentially including OS X and Android; that the IRS digital staff knows it, although the agency isn’t telling IRS.gov users which versions of IE, Windows or other browsers and operating systems are presently supported and which are not; and that it is working to improve.
Well, OK, but shouldn’t having a user account and years of returns on file make it easier to file without filling out a return at all?
It could. As you may know, other countries already have “return-free filing,” where a taxpayer can go online, log in and access a pre-populated tax return, see what the government estimates he or she owes, make any necessary adjustments, and file.
Wait, that sounds pretty good. Why doesn’t the USA have return-free filing yet?
It does sound good. As ProPublica reported last year, “the concept has been around for decades and has been endorsed by both President Ronald Reagan and a campaigning President Obama.”
ProPublica also reported that both H&R Block and Intuit, the maker of TurboTax, have lobbied against free and simple tax filing in Washington, given that it’s in their economic self-interest to do so:
In its latest annual report filed with the Securities and Exchange Commission, however, Intuit also says that free government tax preparation presents a risk to its business. Roughly 25 million Americans used TurboTax last year, and a recent GAO analysis said the software accounted for more than half of individual returns filed electronically. TurboTax products and services made up 35 percent of Intuit’s $4.2 billion in total revenues last year. Versions of TurboTax for individuals and small businesses range in price from free to $150.
What are the chances return-free filing could be on IRS.gov soon?
Hard to say, but the IRS told me that something that sounds like a precursor to return-free filing is on the table. According to the agency, “the IRS is considering a number of new proposals that may become a part of the online services roadmap some time in the future. This may include a taxpayer account where up to date status could be securely reviewed by the account owner.”
Creating the ability for people to establish secure access to IRS.gov to review and download tax transcripts is a big step in that direction. Whether the IRS takes any more steps soon is more of a political and policy question than a technical one, although the details of the latter matter.
Is the federal government offering other services like this for other agencies or personal data?
The Obama administration has been steadily modernizing government technology, although progress has been uneven across agencies. While the woes of Healthcare.gov attracted a lot of attention, many federal agencies have improved how they deliver services over the Internet. One of the themes of the administration’s digital government approach is “smart disclosure,” a form of targeted transparency in which people are offered the opportunity to download their own data, or data about them, from government or commercial services. The Blue Button is an example of this approach that has the potential to scale nationally.
I had a blast interviewing Matt Mullenweg, the co-creator of WordPress and CEO of Automattic, last night at the inaugural WordPress and government meetup in DC. UPDATE: Video of our interview and the Q&A that followed is embedded below:
WordPress code powers some 60 million websites, including 22% of the top 10 million sites on the planet and .gov platforms like Broadbandmap.gov. Mullenweg was, by turns, thoughtful, geeky and honest about open source and giving hundreds of millions of people free tools to express themselves, and quietly principled about the corporate values of an organization spread across 35 countries, government censorship and the ethics of transparency.
After Mullenweg finished taking questions from the meetup, Data.gov architect Philip Ashlock gave a presentation on how the staff working on the federal government’s open data platform are using open source software to design, build, publish and collaborate, from WordPress to CKAN to Github issue tracking.
As White House special advisor John Podesta noted in January, the PCAST has been conducting a study “to explore in-depth the technological dimensions of the intersection of big data and privacy.” Earlier this week, the Associated Press interviewed Podesta about the results of the review, reporting that the White House had learned of the potential for discrimination through the use of data aggregation and analysis. These are precisely the privacy concerns that stem from data collection that I wrote about earlier this spring. Here’s the PCAST’s list of “things happening today or very soon” that provide examples of technologies that can have benefits but pose privacy risks:
Pioneered more than a decade ago, devices mounted on utility poles are able to sense the radio stations being listened to by passing drivers, with the results sold to advertisers.
In 2011, automatic license-plate readers were in use by three quarters of local police departments surveyed. Within 5 years, 25% of departments expect to have them installed on all patrol cars, alerting police when a vehicle associated with an outstanding warrant is in view. Meanwhile, civilian uses of license-plate readers are emerging, leveraging cloud platforms and promising multiple ways of using the information collected.
Experts at the Massachusetts Institute of Technology and the Cambridge Police Department have used a machine-learning algorithm to identify which burglaries likely were committed by the same offender, thus aiding police investigators.
Differential pricing (offering different prices to different customers for essentially the same goods) has become familiar in domains such as airline tickets and college costs. Big data may increase the power and prevalence of this practice and may also decrease even further its transparency.
reSpace offers machine-learning algorithms to the gaming industry that may detect early signs of gambling addiction or other aberrant behavior among online players.
Retailers like CVS and AutoZone analyze their customers’ shopping patterns to improve the layout of their stores and stock the products their customers want in a particular location. By tracking cell phones, RetailNext offers bricks-and-mortar retailers the chance to recognize returning customers, just as cookies allow them to be recognized by online merchants. Similar WiFi tracking technology could detect how many people are in a closed room (and in some cases their identities).
The retailer Target inferred that a teenage customer was pregnant and, by mailing her coupons intended to be useful, unintentionally disclosed this fact to her father.
The author of an anonymous book, magazine article, or web posting is frequently “outed” by informal crowd sourcing, fueled by the natural curiosity of many unrelated individuals.
Social media and public sources of records make it easy for anyone to infer the network of friends and associates of most people who are active on the web, and many who are not.
Marist College in Poughkeepsie, New York, uses predictive modeling to identify college students who are at risk of dropping out, allowing it to target additional support to those in need.
The Durkheim Project, funded by the U.S. Department of Defense, analyzes social-media behavior to detect early signs of suicidal thoughts among veterans.
LendUp, a California-based startup, sought to use nontraditional data sources such as social media to provide credit to underserved individuals. Because of the challenges in ensuring accuracy and fairness, however, they have been unable to proceed.
The PCAST meeting was open to the public through a teleconference line. I called in and took rough notes on the discussion of the forthcoming report as it progressed. My notes on the comments of professors Susan Graham and Bill Press offer sufficient insight into the forthcoming report, however, that I thought publishing them today was warranted, given the ongoing national debate regarding data collection, analysis, privacy and surveillance. The following should not be considered verbatim or an official transcript. The emphases below are mine, as are the words in [brackets]. For an official record, look for the PCAST to make a recording and transcript available online in the future, at its archive of past meetings.
Susan Graham: Our charge was to look at the confluence of big data and privacy, to summarize current technology and the way it is moving in the foreseeable future, including its influence on the way we think about privacy.
The first thing that’s very very obvious is that personal data in electronic form is pervasive. Traditional data that was in health and financial [paper] records is now electronic and online. Users provide info about themselves in exchange for various services. They use Web browsers and share their interests. They provide information via social media, Facebook, LinkedIn, Twitter. There is [also] data collected that is invisible, from public cameras, microphones, and sensors.
What is unusual about this environment and big data is the ability to do analysis of huge corpuses of that data. We can learn things from the data that allow us to provide a lot of societal benefits. There is an enormous amount of patient data, data about disease, and data about genetics. By putting it together, we can learn about treatment. With enough data, we can look at rare diseases and learn what has been effective. We could not have done this otherwise.
We can analyze more online information about education and learning, not only MOOCs but lots of learning environments. [Analysis] can tell teachers how to present material effectively, to do comparisons about whether one presentation of information works better than another, or analyze how well assessments work with learning styles.
Certain visual information is comprehensible, certain verbal information is hard to understand. Understanding different learning styles [can enable us to] develop customized teaching.
The reason this all works is the profound nature of the analysis. This is the idea of data fusion, where you take multiple sources of information and combine them, which provides a much richer picture of some phenomenon. If you look at patterns of human movements on public transport, or pollution measures, or weather, maybe we can predict dynamics caused by human context.
We can use statistics to do statistics-based pattern recognition on large amounts of data. One of the things that we understand about this statistics-based approach is that it might not be 100% accurate if mapped down to the individual providing data in these patterns. We have to be very careful not to make mistakes about individuals because we make [an inference] about a population.
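A toy illustration of the caution Graham is raising, with invented numbers: a pattern that performs well at the population level can still mislabel a large number of individuals, because the people who don’t fit the pattern vastly outnumber those who do.

```python
# Toy illustration (invented numbers): a statistically "accurate" pattern can still
# be wrong about most of the individuals it flags when the trait is rare.
population = 1_000_000
true_rate = 0.01            # 1% of people actually match the pattern of interest
sensitivity = 0.95          # the model catches 95% of true matches
false_positive_rate = 0.02  # and wrongly flags 2% of everyone else

true_matches = population * true_rate
flagged_correctly = true_matches * sensitivity
flagged_wrongly = (population - true_matches) * false_positive_rate

precision = flagged_correctly / (flagged_correctly + flagged_wrongly)
print(f"People wrongly flagged: {flagged_wrongly:,.0f}")
print(f"Chance a flagged person is a true match: {precision:.0%}")
```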
How do we think about privacy? We looked at it from the point of view of harms. There are a variety of ways in which results of big data can create harm, including inappropriate disclosures [of personal information], potential discrimination against groups, classes, or individuals, and embarrassment to individuals or groups.
We turned to what technology has to offer in helping to reduce harms. We looked at a number of technologies in use now and a bunch coming down the pike. Some of those in use have become less effective because of the pervasiveness [of data] and the depth of analytics.
We traditionally have controlled [data] collection. We have seen some data collection from cameras and sensors that people don’t know about. If you don’t know, it’s hard to control.
Tech creates many concerns. We have looked at methods coming down the pike. Some are more robust and responsive. We have a number of draft recommendations that we are still working out.
Part of privacy is protecting the data using security methods. That needs to continue. It needs to be used routinely. Security is not the same as privacy, though security helps to protect privacy. There are a number of approaches that are now used by hand that, with sufficient research, could be automated and used more reliably, so that they scale.
There needs to be more research on, and education about, privacy. Professionals need to understand how to treat privacy concerns anytime they deal with personal data. We need to create a large group of professionals who understand privacy, and privacy concerns, in tech.
Technology alone cannot reduce privacy risks. There has to be a policy as well. It was not our role to say what that policy should be. We need to lead by example by using good privacy protecting practices in what the government does and increasingly what the private sector does.
Bill Press: We tried throughout to think of scenarios and examples. There’s a whole chapter [in the report] devoted explicitly to that.
They range from things being done today with present technology, even though they are not all known to people, to our extrapolations to the outer limits of what might well happen in the next ten years. We tried to balance the examples by showing both the benefits, which are great, and the challenges they raise, including the possibility of new privacy issues.
In another aspect, in Chapter 3, we tried to survey technologies from both sides: both those that will bring benefits and protect [people], and those that will raise concerns.
In our technology survey, we were very much helped by the team at the National Science Foundation. They provided a very clear, detailed outline of where they thought that technology was going.
This was part of our outreach to a large number of experts and members of the public. That doesn’t mean that they agree with our conclusions.
Eric Lander: Can you take everybody through analysis of encryption? Are people using much more? What are the limits?
Graham: The idea behind classical encryption is that when data is stored, when it’s sitting around in a database, let’s say, encryption entangles the representation of the data so that it can’t be read without using a mathematical algorithm and a key to convert a seemingly meaningless set of bits into something readable.
The same technology, where you convert and change meaningless bits, is used when you send data from one place to another. So, if someone is scanning traffic on the internet, they can’t read it. Over the years, we’ve developed pretty robust ways of doing encryption.
The weak link is that to use data, you have to read it, and it becomes unencrypted. Security technologists worry about it being read during that short time.
Encryption technology is vulnerable. The key that unlocks the data is itself vulnerable to theft, or to getting the wrong user to decrypt.
Both problems are active topics of research, including how to use data without being able to read it. There is research on increasing the robustness of encryption, so that if a key is disclosed, you haven’t lost everything and you can protect some of the data, or the future encryption of new data. This reduces risk a great deal and is important to use. But encryption alone doesn’t protect.
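A small sketch of the tradeoff Graham describes, using the widely used Python cryptography package (my illustration, not anything PCAST presented): data encrypted at rest is meaningless without the key, but the moment a program actually wants to use the data, it has to decrypt it, and whoever holds the key can do the same.

```python
# Sketch: symmetric encryption at rest with the "cryptography" package
# (pip install cryptography). The ciphertext is unreadable without the key,
# but the data must be decrypted, with the key present, to actually be used.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # the weak link: whoever holds this can read the data
fernet = Fernet(key)

record = b"SSN: 000-00-0000, diagnosis: ..."
ciphertext = fernet.encrypt(record)  # safe to store or transmit as meaningless bits
print(ciphertext[:20], b"...")

# To analyze the record, it has to be turned back into readable form.
plaintext = fernet.decrypt(ciphertext)
assert plaintext == record
```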
Unknown Speaker: People read of breaches caused by security failures. I see a different set of privacy issues arising from big data versus those in security. Can you distinguish them?
Bill Press: Privacy and security are different issues. Security is necessary to have good privacy in the technological sense: if communications are insecure, they clearly can’t be private. But privacy goes beyond that, to how parties that are authorized, in a security sense, to see the information may use it. Privacy is much closer to values; security is much closer to protocols.
The interesting thing is that this is less about purely technical elements — everyone can agree on the right protocol, eventually. It’s the things that go beyond protocols that have to do with values.
This afternoon, the United States House of Representatives passed the Digital Accountability and Transparency Act (DATA Act), voting to send S.994, the bill that enjoyed unanimous support in the U.S. Senate earlier this month, on to the president’s desk.
The DATA Act is the most significant open government legislation enacted by Congress in generations, going back to the Freedom of Information Act in 1966. An administration official at the White House Office of Management and Budget confirmed that President Barack Obama will sign the bill into law.
The DATA Act establishes financial open data standards for agencies in the federal government, requires compliance with those standards, and mandates that the data then be published online. The bipartisan bill was sponsored in the Senate by Senator Rob Portman (R-OH) and Senator Mark Warner (D-VA), and in the House by Representative Darrell Issa (R-CA) and Representative Elijah Cummings (D-MD).
Representative Issa, who first introduced the transparency legislation in 2011, spoke about the bill on the House floor this afternoon and tweeted out a long list of beneficial outcomes his office expects to result from its passage.
#DATAact will give lawmakers & public watchdogs powerful tools to identify and root out waste, fraud & abuse in gov. pic.twitter.com/Y8YvP9ofJU
The Senators who drafted and co-sponsored the version of the bill that the House passed today quickly hailed its passage.
“In the digital age, we should be able to search online to see how every grant, contract and disbursement is spent in a more connected and transparent way through the federal government,” said Senator Warner, in a statement. “Independent watchdogs and transparency advocates have endorsed the DATA Act’s move toward greater transparency and open data. Our taxpayers deserve to see clear, accessible information about government spending, and this accountability will highlight and help us eliminate waste and fraud.”
“During a time of record $17 trillion debt, our bipartisan bill will help identify and eliminate waste by better tracking federal spending,” said Senator Portman, in a statement. “I’m pleased that our bill to empower taxpayers to see how their money is spent and improve federal financial transparency has unanimously passed both chambers of Congress and is now headed to the President’s desk for signature.”
Pleased the House just passed my bipartisan #DataAct. Great news for govt transparency & taxpayers. Now to President for signature. #opengov
“The DATA Act is a transformational piece of legislation that has the potential to permanently transform how the Federal government operates,” said House Majority Leader Eric Cantor, in a statement. “For the first time ever, the American people will have open, standardized access to how the federal government spends their money. Washington has an abundance of information that is often bogged down by federal bureaucracy and is inaccessible to our nation’s innovators, developers and citizens. The standardization and publication of federal spending information in an open format will empower innovative citizens to tackle many of our nation’s challenges on their own. Government of the people, by the people, and for the people should be open to the people.”
The DATA Act earned support from a broad coalition of open government advocates and industry groups, and its passage in Congress was hailed today by both.
“The central idea behind the Digital Accountability and Transparency Act is simple: disclose to the public what the federal government spends,” said Daniel Schuman, policy counsel for Citizens for Responsibility and Ethics in Washington.
“The means necessary to accomplish this purpose—increased agency reporting, the use of modern technology, implementation of government-wide standards, regular quality assurance on the data—will require government to systematically address how it stovepipes federal spending information. This is no small task, and one that is long overdue. The effort to reform transparency around federal spending arose in large part because members of both political parties concluded that their ability to govern effectively depends on making sure federal spending data is comprehensive, accessible, reliable, and timely. Currently, it is not. The leaders of the reform efforts in the Senate are Senators Mark Warner (D-VA), Rob Portman (R-OH), Tom Carper (D-DE), and Tom Coburn (R-OK), and the leaders in the House are Representatives Darrell Issa (R-CA) and Elijah Cummings (D-MD), although they are joined by many others. We welcome and applaud the House of Representative’s passage of the DATA Act. It is a remarkable bill that, if properly implemented, will empower elected officials and everyday citizens alike to follow how the federal government spends money.”
“Sunlight has been advocating for the DATA Act for some time, and are thrilled to see it emerge from Congress,” said Matt Rumsey, a policy analyst at the Sunlight Foundation. “As I wrote while describing the history of the bill after it passed through the Senate, ‘Congress has taken a big step by passing the DATA Act. The challenge now will be ensuring that it is implemented effectively.’ We hope that the President swiftly signs the bill and we look forward to working with his administration to shed more light on federal spending.”
“With this legislation, big data is finally coming of age in the federal government,” said Daniel Castro, Director of the Center for Data Innovation, in a statement. “The DATA Act promises to usher in a new era of data-driven transparency, accountability, and innovation in federal financial information. This is a big win for taxpayers, innovators, and journalists.”
“After three years of debate and negotiation over the DATA Act, Congress has issued a clear and unified mandate for open, reliable federal spending data,” said Hudson Hollister, the Executive Director of the Data Transparency Coalition. Hollister helped to draft the first version of the DATA Act in 2011, when he was on Representative Issa’s staff. “Our Coalition now calls on President Obama to put his open data policies into action by signing the DATA Act and committing his Office of Management and Budget to pursue robust data standards throughout federal financial, budget, grant, and contract reporting.”
“The Administration shares Senator Warner’s commitment to government transparency and accountability, and appreciates his leadership in Congress on this issue,” said Steve Posner, spokesman for the White House Office of Management and Budget. “The Administration supports the objectives of the DATA Act and looks forward to working with Congress on implementing the new data standards and reporting requirements within the realities of the current constrained budget environment and agency financial systems.”
Update: Speaker of the House John Boehner (R-OH) signed the DATA Act on April 30, before sending it on to President Obama’s desk.
“From publishing legislative data in XML to live-streaming hearings and floor debates, our majority has introduced a number of innovations to make the legislative process more open and accessible,” he said, in a statement touting open government progress in the House. “With the DATA Act, which I signed today, we’re bringing this spirit of transparency to the rest of the federal government. For years, we’ve been able to track the status of our packages, but to this day there is no one website where you can see how all of your tax dollars are being spent. Once the president signs this bill, that will start to change. There is always more to be done when it comes to opening government and putting power back in the hands of the people, and the House will be there to lead the way.”
UPDATE: On May 9th, 2014, President Barack Obama signed the DATA Act into law.
Statement by Press Secretary Jay Carney:
On Friday, May 9, 2014, the President signed into law:
S. 994, the “Digital Accountability and Transparency Act of 2014” or the “DATA Act,” which amends the Federal Funding Accountability and Transparency Act of 2006 to make publicly available specific classes of Federal agency spending data, with more specificity and at a deeper level than is currently reported; require agencies to report this data on USASpending.gov; create Government-wide standards for financial data; apply to all agencies various accounting approaches developed by the Recovery Act’s Recovery Accountability and Transparency Board; and streamline agency reporting requirements.
Rep. Darrell Issa issued the following statement in response:
“The enactment of the DATA Act marks a transformation in government transparency by shedding light on runaway federal spending,” said Chairman Issa. “The reforms of this bipartisan legislation not only move the federal bureaucracy into the digital era, but they improve accountability to taxpayers and provide tools to allow lawmakers and citizen watchdogs to root out waste and abuse. Government-wide structured data requirements may sound like technical jargon, but the real impact of this legislation on our lives will be more open, more effective government.”