[FAQ] How do I download a tax transcript from IRS.gov?

UPDATE: This service was taken offline after IRS security was compromised.

In January 2014, the IRS quietly introduced a new feature at IRS.gov that enabled Americans to download their tax transcript over the Internet. For people who needed rapid access to tax records for applications, the 5-10 business day wait for a mailed copy could be critical.

What’s a tax transcript?

It’s a list of the line items that you entered onto your federal tax return (Form 1040), as it was originally filed with the IRS.

Wait, we couldn’t already download a transcript like this in 2014?

Nope. Previously, filers could request a copy of the transcript (not the full return) but they would have to wait 5-10 business days to receive it in the mail.

Why did this happen now?

The introduction of the IRS feature coincided with a major Department of Education event focused on opening up such data. A U.S. Treasury official said that the administration was doing so to make it “easier for student borrowers to access tax records he or she might need to submit loan applications or grant applications.”

Why would someone want their tax transcript?

As the IRS itself says, “IRS transcripts are often used to validate income and tax filing status for mortgage applications, student and small business loan applications, and during tax preparation.” It’s pretty useful.

OK, so what do I do to download my transcript?

Visit “get transcript” and register online. You’ll find that the process is very similar to setting up online access for a bank account. You’ll need to choose a passphrase, pass image and security questions, and then answer a series of questions about your life, like where you’ve lived. If you write them down, store them somewhere safe and secure offline, perhaps with your birth certificate and other sensitive documents.

Wait, what? That sounds like a lot of private information.

True, but remember: the IRS already has a lot of private data about you. These questions are designed to prevent someone else from setting up a fake account in your name and stealing that data. If you’re uncomfortable answering these questions, you can request a print version of your transcript. To do so, you’ll need to enter your Social Security number, date of birth and street address online. If you’re still uncomfortable doing so, you can visit an IRS office or contact the agency directly.

So is this safe?

It’s probably about as safe as doing online banking. Virtually nothing you do online is without risk. Make sure you 1) go to the right website, 2) connect securely, and 3) protect the transcript, just as you would paper tax records. Here’s what the IRS told me about their online security:

“The IRS has made good progress on oversight and enhanced security controls in the area of information technology. With state-of-the-art technology as the foundation for our portal (e.g. irs.gov), we continue to focus on protecting the PII of all taxpayers when communicating with the IRS.

However, security is a two-way street with both the IRS and users needing to take steps for a secure experience. On our end, our security is comparable to leaders in private industry.

Our IRS2GO app has successfully completed a security assessment and received approval to launch by our cybersecurity organization after being scanned for weaknesses and vulnerabilities.

Any personally identifiable information (PII) or sensitive information transmitted to the IRS through IRS2Go for refund status or tax record requests uses secure communication channels that meet or exceed federal requirements for encryption. No PII is passed back to the taxpayer through IRS2GO and no PII is stored on the smartphone by the application.

When using our popular “Where’s My Refund?” application, taxpayers may notice just a few of our security measures. The URL for Where’s My Refund? begins with https. Just like in private industry, the “s” is a key indicator that a web user should notice indicating you are in a “secure session.” Taxpayers may also notice our message that we recommend they close their browser when finished accessing your refund status.

As we become a more mobile society and able to link to the internet while we’re on the go, we remind taxpayers to take precautions to protect themselves from being victimized, including using secure networks, firewalls, virus protection and other safeguards.

We always recommend taxpayers check with the Federal Trade Commission for the latest on reporting incidents of identity theft. You can find more information on our website, including tips if you believe you have become the victim of identity theft.”

What do I do with the transcript?

If you download tax transcripts or personal health information to a mobile device, laptop, tablet or desktop, install passcodes and full disk encryption, where available, on every machine it’s on. Leaving your files unprotected on computers connected to the Internet is like leaving the door to your house unlocked with your tax returns and medical records on the kitchen table.

I got an email from the IRS that asks me to email them personal information to access my transcript. Is this OK?

Nope! Don’t do it: it’s not them. The new functionality will likely inspire criminals to create mockups of the government website that look similar and then send phishing emails to consumers, urging them to “log in” to fake websites. You should know that the IRS “does not send out unsolicited e-mails asking for personal information.” If you receive such an email, consider reporting the phishing to the IRS. Start at www.irs.gov/Individuals/Get-Transcript every time.

I tried to download my transcript but it didn’t work. What the heck?

You’re not alone. I had trouble using an Apple computer. Others have had technical issues as well.

Here’s what the IRS told me: “As a web application Get Transcript is supported on most modern OS/browser combinations. While there may be intermittent issues due to certain end-user configurations, IRS has not implemented any restrictions against certain browsers or operating systems. We are continuing to work open issues as they are identified and validated.”

A side note from the agency: “For the best user experience, taxpayers may want to try up-to-date versions of Internet Explorer and a supported version of Microsoft Windows; however, that is certainly not a requirement.”

What does that mean, in practice? Not all modern OS/browser combinations are supported, potentially including OS X and Android. The IRS digital staff knows it and is working on fixes, although the agency isn’t telling IRS.gov users which versions of IE, Windows or other browsers and operating systems are currently supported and which are not.

Unfortunately, ongoing security issues with Internet Explorer mean that in 2014, we have the uncomfortable situation where the Department of Homeland Security is recommending that people avoid using Internet Explorer while the IRS recommends that its customers choose it for the “best experience.”

Given the comments from frustrated users, the IRS could and should do better on all counts.

Will I be able to file my tax return directly to the government through IRS.gov now?

You can already file your federal tax return online. According to the IRS, almost 120 million people used IRS e-file last year.

Well, OK, but shouldn’t having a user account and years of returns make it easier to file without filling out a return at all?

It could. As you may know, other countries already have “return-free filing,” where a taxpayer can go online, log in and access a pre-populated tax return, see what the government estimates he or she owes, make any necessary adjustments, and file.

Wait, that sounds pretty good. Why doesn’t the USA have return-free filing yet?

Yes, it does sound good. As ProPublica reported last year, “the concept has been around for decades and has been endorsed by both President Ronald Reagan and a campaigning President Obama.”

The same ProPublica reporting detailed how both H&R Block and Intuit, the maker of TurboTax, have lobbied against free and simple tax filing in Washington, given that it’s in their economic self-interest to do so:

In its latest annual report filed with the Securities and Exchange Commission, however, Intuit also says that free government tax preparation presents a risk to its business. Roughly 25 million Americans used TurboTax last year, and a recent GAO analysis said the software accounted for more than half of individual returns filed electronically. TurboTax products and services made up 35 percent of Intuit’s $4.2 billion in total revenues last year. Versions of TurboTax for individuals and small businesses range in price from free to $150.

What are the chances return-free filing could be on IRS.gov soon?

Hard to say, but the IRS told me that something that sounds like a precursor to return-free filing is on the table. According to the agency, “the IRS is considering a number of new proposals that may become a part of the online services roadmap sometime in the future. This may include a taxpayer account where up to date status could be securely reviewed by the account owner.”

Creating the ability for people to establish secure access to IRS.gov to review and download tax transcripts is a big step in that direction. Whether the IRS takes any more steps soon is more of a political and policy question than a technical one, although the details of the latter matter.

Is the federal government offering other services like this for other agencies or personal data?

The Obama administration has been steadily modernizing government technology, although progress has been uneven across agencies. While the woes of Healthcare.gov attracted a lot of attention, many federal agencies have improved how they deliver services over the Internet. One of the themes of the administration’s digital government approach is “smart disclosure,” a form of targeted transparency in which people are offered the opportunity to download their own data, or data about them, from government or commercial services. The Blue Button is an example of this approach that has the potential to scale nationally.

From broadband maps to Data.gov, WordPress looks to power more open source government

I had a blast interviewing Matt Mullenweg, the co-creator of WordPress and CEO of Automattic, last night at the inaugural WordPress and government meetup in DC. UPDATE: Video of our interview and the Q&A that followed is embedded below.

WordPress code powers some 60 million websites, including 22% of the top 10 million sites on the planet and .gov platforms like Broadbandmap.gov. Mullenweg was, by turns, thoughtful, geeky and honest about open source and giving hundreds of millions of people free tools to express themselves, and quietly principled with respect to corporate values for an organization spread across 35 countries, government censorship and the ethics of transparency.

After Mullenweg finished taking questions from the meetup, Data.gov architect Philip Ashlock gave a presentation on how the staff working on the federal government’s open data platform are using open source software to design, build, publish and collaborate, from WordPress to CKAN to Github issue tracking.

New study details technology deficit in government and civil society

The botched re-launch of Healthcare.gov led many observers unfamiliar with the endemic issues in government information technology to wonder how the first Internet president produced the government’s highest-profile Internet failure. The Obama administration endured a winter full of well-deserved criticism, some informed, some less so, regarding what went wrong at Healthcare.gov, from bad management to poor technology choices and implementation, agency insularity and political sensitivity at the White House.

While “Obama’s trauma team” successfully repaired the site, enabling millions to enroll in the health insurance plans offered in the online marketplace, the problems the debacle laid bare in human resources and IT procurement are now receiving overdue attention. While the apparent success of “the big fix” has taken some urgency away from Congress and the administration to address how the federal government can avoid another Healthcare.gov, the underlying problems remain. Although lawmakers have introduced legislation to create a “Government Digital Office” and the U.S. House of Representatives passed a bill to reform aspects of federal IT, neither has gotten much traction in the Senate. In the meantime, hoping to tap into the success of the United Kingdom’s Government Digital Service team, the U.S. General Services Administration has stood up a new IT services unit, 18F, which officials hope will help government technology projects fail fast instead of failing big.

Into this mix comes a new report from Friedman Consulting, commissioned by the Ford and MacArthur Foundations. Notably, the report also addresses the deficit of technology talent in the nonprofit sector and other parts of civil society, where such expertise and capacity could make demonstrable improvements to operations and performance. The full 51-page report is well worth reading for those interested in the topic, but for those limited by time, here are the key findings:

1) The Current Pipeline Is Insufficient: the vast majority of interviewees indicated that there is a severe paucity of individuals with technical skills in computer science, data science, and the Internet or other information technology expertise in civil society and government. In particular, many of those interviewed noted that existing talent levels fail to meet current needs to develop, leverage, or understand technology.
2) Barriers to Recruitment and Retention Are Acute: many of those interviewed said that substantial barriers thwart the effective recruitment and retention of individuals with the requisite skills in government and civil society. Among the most common barriers mentioned were those of compensation, an inability to pursue groundbreaking work, and a culture that is averse to hiring and utilizing potentially disruptive innovators.
3) A Major Gap Between The Public Interest and For-Profit Sectors Persists: as a related matter, interviewees discussed superior for-profit recruitment and retention models. Specifically, the for-profit sector was perceived as providing both more attractive compensation (especially to young talent) and fostering a culture of innovation, openness, and creativity that was seen as more appealing to technologists and innovators.
4) A Need to Examine Models from Other Fields: interviewees noted significant space to develop new models to improve the robustness of the talent pipeline; in part, many existing models were regarded as unsustainable or incomplete. Interviewees did, however, highlight approaches from other fields that could provide relevant lessons to help guide investments in improving this pipeline.
5) Significant Opportunity for Connection and Training: despite consonance among those interviewed that the pipeline was incomplete, many individuals indicated the possibility for improved and more systematic efforts to expose young technologists to public interest issues and connect them to government and civil society careers through internships, fellowships, and other training and recruitment tools.
6) Culture Change Necessary: the culture of government and civil society – and its effects on recruitment and other bureaucratic processes – was seen as a vital challenge that would need to be addressed to improve the pipeline. This view manifested through comments that government and civil society organizations needed to become more open to utilizing technology and adopting a mindset of experimentation and disruption.

And here’s the conclusion:

Based on this research, the findings of the report are clear: technology talent is a key need in government and civil society, but the current state of the pipeline is inadequate to meet that need. The bad news is that existing institutions and approaches are insufficient to build and sustain this pipeline, particularly in the face of sharp for-profit competition. The good news is that stakeholders interviewed identified a range of organizations and practices that, at scale, have the potential to make an enormous difference. While the problem is daunting, the stakes are high. It will be critical for civil society and government to develop sustainable and effective pathways for the panoply of technologists and experts who have the skills to create truly 21st century institutions.

For those interested, the New America Foundation will be hosting a forum on the technology deficit in Washington, DC, on April 29th. The event will be livestreamed and archived.

Presidential Innovation Fellows show (some) government technology can work, after all

The last six months haven’t been kind to the public’s perception of the Obama administration’s ability to apply technology to government. The administration’s first term featured fitful but genuine progress in modernizing the federal government’s use of technology, from embracing online video and social media to adopting cloud computing, virtualization, mobile devices and open source software. The Consumer Financial Protection Bureau earned praise from The Washington Post, Bloomberg View, and The New York Times for getting government technology right.

Last fall, however, the White House fell into a sinkhole of its own creation when the troubled launch of Healthcare.gov led to the novel scene of a President of the United States standing in the Rose Garden, apologizing for the performance of a website. After the big fix to Healthcare.gov by a quickly assembled trauma team got the site working, the administration has quietly moved toward information technology reforms, with the hope of avoiding the next Healthcare.gov, considering potential shifts in hiring rules and forming a new development unit within the U.S. General Services Administration.

Without improved results, however, those reforms won’t be sufficient to shift the opinion of millions of angry Americans. The White House and agencies will have to deliver on better digital government, from services to public engagement.

This week, the administration showed evidence that it has done so: The projects from the second round of the White House’s Presidential Innovation Fellows program are online, and they’re impressive. US CTO Todd Park and US GSA Administrator Dan Tangherlini proudly described their accomplishments today:

Since the initiative launched two years ago, Presidential Innovation Fellows, along with their government teammates, have been delivering impressive results—at start-up velocity. Fellows have unleashed the power of open government data to spur the creation of new products and jobs; improved the ability of the Federal government to respond effectively to natural disasters; designed pilot projects that make it easier for new economy companies to do business with the Federal Government; and much more. Their impact is enormous.

These projects show that a relatively small number of talented fellows can work with and within huge institutions to rapidly design and launch platforms, Web applications and open data initiatives. The ambition and, in some cases, successful deployment of projects like RFPEZ, Blue Button Connect, OpenFDA, a GI Bill tool, Green Button, and a transcription tool at the Smithsonian Institution are a testament to the ability of public servants in the federal government to accomplish their missions using modern Web technologies and standards. (It’s also an answer to some of the harsh partisan criticism that the program faced at launch.)

In a blog post and YouTube video from deputy U.S. chief technology officer Jennifer Pahlka, the White House announced today they had started taking applications for a third round of fellows that would focus on 14 projects within three broad areas: veterans, open data and crowdsourcing:

  • “Making Digital the Default: Building a 21st Century Veterans Experience: The U.S. Department of Veterans Affairs is embarking on a bold new initiative to create a “digital by default” experience for our Nation’s veterans that provides better, faster access to services and complements the Department’s work to eliminate the disability claims backlog.
  • Data Innovation: Unleashing the Power of Data Resources to Improve Americans’ Lives: This initiative aims to accelerate and expand the Federal Government’s efforts to liberate government data by making these information resources more accessible to the public and usable in computer-readable forms, and to spur the use of those data by companies, entrepreneurs, citizens, and others to fuel the creation of new products, services, and jobs.
  • By the People, for the People: Crowdsourcing to Improve Government: Crowdsourcing is a powerful way to organize people, mobilize resources, and gather information. This initiative will leverage technology and innovation to engage the American public as a strategic partner in solving difficult challenges and improving the way government works—from helping NASA find asteroid threats to human populations to improving the quality of U.S. patents to unlocking information contained in government records.”

Up until today, the fruits of the second class of fellows have been a bit harder to ascertain from the outside, as compared to the first round of five projects, like RFPEZ, where more iterative development was happening out in the open on Github. Now, the public can see for themselves what has been developed on their behalf and judge whether it works or not, much as they have with Healthcare.gov.

I’m particularly fond of the new Web application at the Smithsonian Institution, which enables the public to transcribe handwritten historic documents and records. It’s live at Transcription.si.edu; if you’d like to pitch in, you can join more than three thousand volunteers who have already transcribed and reviewed more than 13,000 historic and scientific records. It’s a complement to the citizen archivist platform that the U.S. National Archives announced in 2011 and subsequently launched. Both make exceptional use of the Internet’s ability to distribute and scale a huge project around the country, enabling public participation in the creation of a digital commons in a way that was not possible before.

RankAndFiled.com is like the SEC’s EDGAR database, but for humans

A new website, Rank and Filed, gathers data from the Securities and Exchange Commission’s EDGAR database, indexes it, and publishes it online in open formats that investors can use to research and discover companies. I’ve included a screenshot of Tesla’s SEC filings below.

[Screenshot: Tesla’s SEC filings on Rank and Filed]

The site currently has over 25 million files indexed.

I heard about the new website directly from its creator, Maris Jensen, a former SEC analyst who built the site independently. According to Maris, she proposed the project internally in March 2013 but was immediately turned down.

A month later, after she was terminated for threatening the Commission’s mission with a “lack of respect for senior management” — an issue she holds was unrelated to the proposal — Maris decided to make the idea real on her own and started building. She has since offered to give the site and its code to the SEC but has not heard back from them yet.

Our interview, lightly edited for content and clarity, follows.

Where did the idea for this originate?

The breaking point was realizing that the guy in the cubicle across from me had spent a week writing the same parser as me — a Python program to parse the EDGAR FTP index for specific filings. This is nearly two decades after Carl Malamud set everything up; the FTP index is exactly as he left it. We were in the division responsible for the SEC’s data analytics and interactive data initiatives. The division literally rewrites this program each time they need SEC filings data. There’s no version control. There’s just no excuse!  Hilariously, that guy also left the SEC and built an SEC filings website, though his is for-profit: http://legalai.com/
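
For the curious, here’s a minimal sketch of the kind of parser she’s describing, assuming the public quarterly form.idx layout (a text header, a dashed rule, then whitespace-padded columns of form type, company name, CIK, date filed and file name). The URL, the declared User-Agent and the column handling are my assumptions, not Rank and Filed’s code; a production parser would use the fixed column offsets, since a few form types contain spaces.

```python
# Sketch: pull one form type out of an EDGAR quarterly form index.
import urllib.request

INDEX_URL = "https://www.sec.gov/Archives/edgar/full-index/2014/QTR1/form.idx"

def fetch_filings(form_type="10-K"):
    """Yield (company, cik, date, path) tuples for one form type."""
    req = urllib.request.Request(
        INDEX_URL, headers={"User-Agent": "example-app admin@example.com"})
    with urllib.request.urlopen(req) as resp:
        lines = resp.read().decode("latin-1").splitlines()
    body = iter(lines)
    for line in body:            # skip the header...
        if line.startswith("---"):
            break                # ...up to and including the dashed rule
    for line in body:
        parts = line.split()
        if not parts or parts[0] != form_type:
            continue
        # The file path is the last field, the date second-to-last,
        # the CIK third-to-last; the company name is everything between.
        path, date, cik = parts[-1], parts[-2], parts[-3]
        company = " ".join(parts[1:-3])
        yield company, cik, date, path

if __name__ == "__main__":
    for company, cik, date, path in fetch_filings("10-K"):
        print(date, cik, company, path)
```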

What does this do that the SEC needed?

In 2008, the SEC set up a task force (the ‘21st Century Disclosure Initiative’) to rethink the way they were making data available to the public. A year later, they published this report, with their conclusion and proposal for a new, modernized disclosure system. I basically just tried to build the system they described. I also did lots of googling — ‘SEC EDGAR tool terrible’, ‘how to find SEC data’, etc — and then tried to address the problems people were having.

The problems have been the same for decades. In 1994, people wanted an SEC CIK-to-ticker mapping. 20 years later, this question still pops up on forums monthly.
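
To illustrate how thin the official support was: one workaround (an assumption about the public interface on my part, not something the SEC documents well, and not necessarily what Rank and Filed does) is that EDGAR’s company browse endpoint accepts a ticker in its CIK parameter and reports the resolved CIK in its Atom output.

```python
# Sketch: resolve a ticker to a CIK via EDGAR's company browse endpoint.
import re
import urllib.request

def ticker_to_cik(ticker):
    url = ("https://www.sec.gov/cgi-bin/browse-edgar"
           "?action=getcompany&CIK=" + ticker + "&count=1&output=atom")
    req = urllib.request.Request(
        url, headers={"User-Agent": "example-app admin@example.com"})
    with urllib.request.urlopen(req) as resp:
        atom = resp.read().decode("utf-8", "replace")
    match = re.search(r"<cik>(\d+)</cik>", atom)  # assumed tag in the feed
    return match.group(1) if match else None

print(ticker_to_cik("TSLA"))
```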

There are over 600 different forms on EDGAR but the SEC’s form lists are basically no help at all. I went through and googled each form individually. I tried to group them into understandable categories.

The comment at the bottom of this post describes the SEC’s current problem better than I ever could:

Has anyone out there ever tried to use SEC.GOV to search for information about a company? The problem is very easy to articulate. If you search for something, you get 5000 results. At about 10 results per page, you have 500 pages to sift through to find what you want. Once you find what you want, there is ZERO ability to navigate from what you found into related documents!

What if you want to research a particular company’s board of directors? What other companies is each director associated with? Have there been any problems in any of those companies? You can’t investigate these types of things using the technology sec.gov has fielded. You want a needle. The SEC gives you a haystack.

Why not allow for better discovery of all of the SEC data and let investors perform their own investigations of markets & companies?

So instead of focusing on this obvious improvement to the public service the SEC provides, the emphasis apparently is on improving investigative actions. Great. Why not just shut off the sec.gov website completely and let the SEC do all of the investigating and researching of SEC data?

How does RankAndFiled.com compare to other sources of SEC data online?

I unfortunately haven’t added that much ‘value’ yet. I’m a total amateur. I’m just trying to make the data available and understandable! The website doesn’t do any analysis: it just collects, links and presents data from different SEC filings.

Looks like you got some great help from the folks you thanked. Did you build this all yourself with these tools?

Yes, open source tools these days are amazing!!  I started this project with no web or software development experience at all.

I actually feel really lucky to have fallen into all of this. Everything I know I learned on google, mostly through tutorials written by the developers listed there.

I also didn’t know anyone in the dataviz or open source community, so I reached out to some of them with stuff like etiquette questions. Their response and support was just incredible — especially the D3 community, they’re just wonderful.

Can you tell me more about where the data on this site comes from and what you’ve done to it?

Basically, the system watches the SEC’s RSS feeds. It reads and indexes data from SEC filings as they come in. Not all the filings show up on the feeds — I’m not sure why — so it also scans the FTP index for any missed filings.
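
A minimal sketch of that watch loop, assuming EDGAR’s public “latest filings” Atom feed; the URL, polling interval and User-Agent are illustrative, and this is my reconstruction rather than her code (her pipeline, as she says, also backfills from the FTP index):

```python
# Sketch: poll EDGAR's "latest filings" Atom feed and report new entries.
import time
import urllib.request
import xml.etree.ElementTree as ET

FEED = ("https://www.sec.gov/cgi-bin/browse-edgar"
        "?action=getcurrent&owner=include&count=40&output=atom")
ATOM = "{http://www.w3.org/2005/Atom}"

def poll(seen):
    req = urllib.request.Request(
        FEED, headers={"User-Agent": "example-app admin@example.com"})
    with urllib.request.urlopen(req) as resp:
        root = ET.parse(resp).getroot()
    for entry in root.iter(ATOM + "entry"):
        link = entry.find(ATOM + "link").get("href")
        if link not in seen:
            seen.add(link)
            yield entry.find(ATOM + "title").text, link

if __name__ == "__main__":
    seen = set()
    while True:
        for title, link in poll(seen):
            print(title, link)   # hand off to an indexer here
        time.sleep(600)          # poll politely
```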

About 25 million SEC documents have been parsed and incorporated so far, which is everything that’s publicly available on EDGAR.  So companies and people are tracked and connected over time — who’s raising money where, who owns whom, who moved companies or got promoted, who sold a ton of shares.  I also realign all the financial data from quarterly and annual reports so you can see a company’s financial history and so the data is comparable between companies.
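
One standard trick for that kind of realignment (my assumption about the approach, not her actual code) is deriving a company’s unreported fourth quarter for flow measures like revenue from the annual figure:

```python
# Sketch: many companies never file a separate Q4 report, so for flow
# measures (revenue, net income) the fourth quarter is implied by the
# annual figure minus the three reported quarters.
def derive_q4(fy_value, q1, q2, q3):
    return fy_value - (q1 + q2 + q3)

# FY revenue of 4200 (in millions) with 900, 1000 and 1100 reported in
# Q1-Q3 implies 1200 in Q4.
print(derive_q4(4200, 900, 1000, 1100))
```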

It actually feels silly even talking about it, because it’s just so basic. This is stuff the SEC should have been doing years and years and years ago.

But it’s not a perfect science because, one, only a few SEC forms are machine-readable and, two, the SEC doesn’t even try to standardize names. SEC registrants are given distinct identifiers but anything goes when companies or names are listed inside a filing. Middle names, middle initials, nicknames, suffixes, titles…
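
A toy example of the kind of normalization that problem forces on anyone linking filers across filings (my sketch, not Rank and Filed’s code): fold case, strip punctuation, drop suffixes and middle initials, and sort tokens so “Last, First” and “First Last” collide.

```python
# Sketch: normalize filer names well enough to match them across filings.
import re

SUFFIXES = {"JR", "SR", "II", "III", "IV", "MD", "PHD", "ESQ", "DR"}

def normalize(name):
    tokens = re.sub(r"[^\w\s]", " ", name.upper()).split()
    tokens = [t for t in tokens if t not in SUFFIXES]
    tokens = [t for t in tokens if len(t) > 1]   # drop middle initials
    return " ".join(sorted(tokens))              # order-insensitive match

assert normalize("Public, John Q., Jr.") == normalize("JOHN PUBLIC")
```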

What’s next?

I spent November and December trying to give all my code to the SEC. I received no response, not even a polite no. That’s still the goal — I want them to take over and open source it, or at the very least host the underlying API.  It’s their job to make this data available and accessible. They NEED a team over there doing hands-on work with SEC filings, a team struggling to make sense of this data with just the tools available to retail investors, especially now that they’re talking about disclosure reform.  Right now, they have almost no incentive to change things over to structured data — they buy all the structured EDGAR data they need.

The SEC keeps saying that it’s the private sector’s job to build tools like this, not theirs, but in the past 20 years nobody has come up with a really great, really affordable option.  It doesn’t make sense for any of us to even try — I’ve heard that Bloomberg and Thomson Reuters hire legions of Indian professionals to go through each SEC filing by hand.  We just can’t compete.

The SEC will have to make a lot more of their data machine-readable before any ‘disruptive’ innovation can happen, but they won’t do that until they’re forced to (by Congress), unless they have people there who realize how unfair the situation has become.

There are actually a heartbreaking number of SEC employees who also want this to happen, self-described worker bees who’ve reached out to me from personal email to say they’ve been trying to convince their bosses to give this thing a chance.  So far, no luck! I would open source it myself, but unfortunately I can’t afford to host the project indefinitely.

U.S. CIO Steven VanRoekel on the risks and potential of open data and digital government

Last year, I conducted an in-depth interview with United States chief information officer Steven VanRoekel in his office in the Eisenhower Executive Office Building, overlooking the White House. I was there to talk about the historic open data executive order that President Obama had signed in May 2013. On this visit, I couldn’t help but notice that VanRoekel has a Star Wars clock in his office. The Force is strong here. The US CIO also had a lot of other consumer technology around his workspace: a MacBook and Windows laptop and dock, dual monitors, iPad, a teleconferencing system integrated with a desktop PC, and an iPhone, which recently became securely permissible in the White House IT system in a “bring your own device” pilot. The interview that follows is slightly dated, in certain respects, but still offers significant insight into how the nation’s top IT executive is thinking about digital government, open data and more. It has also been lightly edited, primarily removing the long-winded questions of the interviewer.

We’re at the one year mark of the Digital Government Strategy. Where do we stand with hitting the metrics in the strategy? Why did it take until now to get this out?

VanRoekel: The strategy calls for the launch of the policy itself. Throughout the year, the policy was a framework for a 12-month set of deliverables of different aspects, from the work we’re doing in mobile, from ‘bring your own device,’ to security baselines and mobile device management platforms. Not only streamlining procurement, streamlining app development in government. Managing those devices securely to thinking about the way we do customer service and the way we think about the power of data and how it plays into all of this. It’s been part of that process for about the year we’ve been working on it. Of course, we thought through these principles and have been working on data-related aspects for longer. The digital strategy policy was the framework for us to catalyze and accelerate that, and over the course of the year, the stuff that’s been going on behind the scenes has largely been working with agencies on building some of this capability around open data. You’re going to see some things happening very soon on the release of some of this capability. Second, standing up the Presidential Innovation Fellows program and then putting specific ‘PIFs’ into certain targeted agencies to fast track their opening of data — that’s going to extend into Wave Two. You’re going to see that continuing to happen, where we just take these principles and just kind of ‘rinse and repeat’ in government. Third, we’re working with a small set of the community to build tools to make it easy for agencies to implement these guidelines. So if there’s an agency that doesn’t know how to create a JSON file, that tool is on Github. You can see that on Project Open Data.
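
For a concrete sense of what that JSON looks like, here is a hypothetical catalog record along the lines of the Project Open Data metadata schema’s common core fields (field names as I read the v1.0 schema; the dataset itself is invented). An agency’s /data.json is, in that scheme, a list of records like this one:

```python
# Sketch: one invented dataset record in the spirit of Project Open Data.
import json

entry = {
    "title": "Example Broadband Availability Dataset",
    "description": "Hypothetical dataset of broadband availability.",
    "keyword": ["broadband", "telecommunications"],
    "modified": "2013-05-09",
    "publisher": "Example Agency",
    "contactPoint": "Jane Doe",
    "mbox": "jane.doe@example.gov",
    "identifier": "example-agency-broadband-001",
    "accessLevel": "public",
    "accessURL": "https://www.example.gov/data/broadband.csv",
    "format": "text/csv",
}

print(json.dumps([entry], indent=2))  # an agency catalog is a list of these
```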

How involved has the president been in this executive order? It’s his name, his words are in there — how much have you and U.S. chief technology officer Todd Park talked with the president about this?

VanRoekel: Ever since about last summer, we’ve been talking to the president about open data, specifically. I think there’s lots of examples where we’ve had conversations on the periphery, and he’s met with a lot of tech leaders and others around the country that in many, many cases have either built their business or are relying upon some government service or data stream. We’re seeing that culminating into the mindset of what we do as a factor of economic growth. His thoughts are ‘how do we unlock this national resource?’ We’re sitting on this treasure trove – how do we unleash it into the developer community, so that these app developers can build these different solutions?’ He’s definitely inspired – he wrote that cover memo to the digital strategy last May – and then we’ve had all of these different meetings, across the course of the year, and now it culminates into this executive order, where we’re working to catalyze these agencies and get them to pay attention and follow up.

We’ve been down this road before, in some respects, with the Open Government Directive in 2009, with former US CIO Vivek Kundra putting forward claims of positive outcomes from releasing data. Yet, what have we learned over the past four years? What makes this different? Where’s the “how,” in terms of implementing this?

VanRoekel: The original launch of data.gov was, I think, a way of really shocking the system, and getting people to pay attention to and notice that there was an important resource we’re sitting on called data. Prior to data.gov, and prior to the work of this administration, the daily approach to data was very ad hoc. It wasn’t taken as data, it was just an output or a piece of a broader mix. That’s why you get so much disparity in the approach to the way we manage data. You get the paper-driven processes that are still very prevalent, where someone will send a paper document, and someone will sign it, and scan it, feed it into a system, and then eventually print it and mail it. It’s crazy what you end up seeing and experiencing inside of government in terms of how these things work. Data.gov was an important first step. The difference now is really around taking this approach to everything that we do. The work that we did with the Open Government Directive back in 2009 was really about taking some high value data sets and putting them up on Data.gov. What you ended up seeing was kind of a ‘bulk upload, bulk download,’ kind of access to the data. Machine-readability and programmability weren’t taken into account, nor were searchability and findability.

Did entrepreneurs or advocates validate these data sets as “high value?” Entrepreneurs have kept buying data from government over the past four years or making Freedom of Information Act requests for data from government or scraping data. They’re not getting that from Data.gov.

VanRoekel: I have no official way of measuring the ‘value’ of the data, other than anecdotal conversations. I do think that the motion of getting people to wake up and think about how they are treating data internally within an organization – well, there was a convenience factor to that, which basically was that ‘I got to pick what data I release,’ which probably dates from ‘what data I have that’s releasable?’ The different tiers to this executive order and this policy are a huge part of why it’s different. It sets the new default. It basically says, if you are modernizing a system or creating a new system, you can do that in a way that adopts these principles. If you [undertake] the collection, use and dissemination of data, you’ll make those machine-readable and interoperable by default. That doesn’t always mean public, because there are cases where privacy and national security mean we should not make data public, but those principles still hold, in terms of the way I believe the things we build should evolve on this foundation. For the community that’s getting value outside of the government, this really sets a predictable, consistent course for the government to open up data. Any business decisions are risk-based decisions. You have to assume some level of risk with anything you do.

If there’s too much risk, entrepreneurs won’t do it.

VanRoekel: True. To that end, the work we’ve done in this policy that’s different than before is that the way we’re collecting information about the data is being standardized. We’re creating a meta data infrastructure. Data itself doesn’t have to be all described in the same way. We’re not coming up with “one schema to rule them all” across government. The complexity of that would be insurmountable. I don’t think that’s a 21st century approach. That’s probably last-century thinking, to say that if we get one schema, we’re going to get it all done. The meta data approach is to say let’s collect a standard template way of describing – but flexible for future extension – the data that is contained in government. In that description, and in that meta data, are tags like “who owns this data” and “how often is the data updated,” and information about how to get a hold of people to find out more about descriptions within the data. They will be a part of that description in a way that gives you some level of assurance on how the data is managed. Much of the data we have out there, there’s existing laws on the books to collect the data. Most of it, there’s existing laws, not just a business process. One of the great conversations we’re having with the agencies is that they find greater efficiency in the way they collect data and build solutions based upon these open data principles.

I received a question from David Robinson, regarding open licensing in this policy. Isn’t U.S. government data exempt from copyright?

VanRoekel: Not all government data is exempt from copyright, but those are generally edge cases. The Smithsonian takes pictures of things that are still under copyright, for instance. That’s government data. I sent a note about this announcement to the Secretary of the Smithsonian this morning. I’ve been talking to him about opening up data for some time. The nuance there, about open licenses, is really around the types of systems that create the data, and putting a preference for a non-proprietary format. You can imagine a world in which I give you an XML file, and I give you a Microsoft Excel file. Those are both pieces of data. To some extent, the Excel format is machine-readable. You can open it up and look at it internally just the way it is, but do you have to go buy a special piece of software to read the file or not? That kind of denotes the open[ness] and accessibility of the data. In the case of policy, we declare a strong preference towards these non-proprietary formats, so that not only do you get machine-readability but you get the broadest access to the data. It’s less about the content in there – whether that’s copyrighted or not – I think most data in government, outside of the realm of confidential or private data, is not copyrighted, so to speak, from the standpoint of the license. It’s more about the format, and whether there’s a proprietary standard wrapped in the stuff. We have an obligation as a government to pick formats, pick solutions, et cetera that not only have the broadest applicability and accessibility for the public but also create the most opportunity in the broadest sense.

Open data doesn’t come without costs. Is this open data policy an unfunded mandate on all of the agencies, instructing them to put all of the data online they can, to digitize content?

VanRoekel: In the broadest sense, the phrase ‘the new default’ is an important one. It basically says, for enhancements to existing systems or new systems, follow this guideline. If people are making changes, this is in the list of requirements. From a costing perspective, it’s pre-baked into the cost of any enhancement or release. That’s the broad statement. The narrow statement is that there are many agencies out there, increasing every day, that are embracing these retroactive open data approaches, saying that there is value to the outside world, there is lower cost, greater interoperability, there are solutions that can be derived from taking these open data approaches inside of my own organization. That’s what we saw in PIF [Presidential Innovation Fellows] round one, where these agencies adopted the innovation fellows to unlock their data. That’s increasing and expanding in round two, and continuing in the agencies which we thought were high administration priorities, along with others. I think we’re going to continue to see this as a catalyzing element of that phenomenon, where people are going to go back and spend the resources on doing this. Just invite any of these leaders to the last twenty minutes of a hackathon, where folks are standing up and showing their solutions that they developed in one day, based on the principles of open data and APIs. They just are overwhelmed by the potential within their own organizations, and they run back and want to do this as fast as they can.

Are you using anything that has ever been developed at a hackathon, personally or professionally?

VanRoekel: We are incorporating code from the “We The People” hackathon, the most recent one. I know Macon Phillips and team are looking at incorporating feature sets they got out of that. An important part of the hackathon, like most conferences you go to, is the time between the sessions. They’re the most important – the relationship building aspect, figuring out how we shape the next set of capabilities or APIs or other things you want to build.

How does this relate to the way that the federal government uses open data internally?

VanRoekel: There are so many examples of government agencies that, when faced with a technical problem, will go hire a single monolithic vendor to do a single, monolithic solution – and spend most of the budget on the planning cycle – and you end up with these multi-million dollar, 3-ring binders that ultimately fail because, five or ten years after they started these projects, technology has moved on or people have left or laws have moved on. One of the key components of this is laying foundational stones down to say how are we going to build upon that, to create the apps and solutions of the future. You know, I can swoop in and say “here’s how to do modular contracting in the context of government acquisition” – but unless you say you’ve got to adopt open data and these principles of API-first, of doing things a different way – smaller, reusable, interoperable pieces – you can’t really build the phenomenon. These are all elements of that – and the cost savings aspects of it are extraordinary. The risk profile is going to be a lot smaller. I’m as excited about this inside government as outside.

Do you think the federal government will ever be able to move from big data centers and complicated enterprise software to a lightweight, distributed model for mobile services built on APIs?

VanRoekel: I think there is massive potential for things like that across the whole of government. I mean, we’re a big organization. We’re the largest buyer of technology in the world. We have unending opportunities to do things in a more efficient way. I’ve been running this process that I launched last year called PortfolioStat. It’s all about taking a left-to-right look, sitting down with agencies. What’s always been missing from those is some of these groundbreaking policies that start to paint the picture for what the ideal is, and how to get your job done in a way that’s different than the way you’ve done it before, like the notion of continuous improvement. We’ve needed things like the EO to give us those conversation starters to say, here’s the way to do it, see what they are doing over at HHS. “How are you going to bring that kind of discipline into your organization?” I’m sitting down with every deputy secretary and all the C-level executives to have those tough conversations. Fruitful, but good conversations about how we are going to change the way we deliver solutions inside of government. The ideal state that they’ll all hear about is the service-oriented model with centralized, commodity computing that’s mostly cloud-based. Then, how do you provide services out to the periphery of your organization?

You told me in our last interview that you had statutory authority to make things happen. What happens if a federal CIO drags his or her feet and, a year from now, you’re still here and they’re not moving on these policies, from cloud to open data?

VanRoekel: The answer I gave to you last time still holds: it’s about inspire and push. Inspire comes in many factors. One is me coming in and showing them the art of the possible, saying there’s a better way of doing this, getting their customers to show up at the door to say that we want better capabilities and get them inspired to do things, getting their leadership to show up and say we want better things. Push is about budget – how do you manage their budget. There’s aspects of both inspire and push in the way we’ve managed the budget this year. I have the authority to do that.

What’s your best case for adopting an open data strategy and enterprise data inventory, if you’re trying to inspire?

VanRoekel: The bottom line is meet your mission faster and at a much lower cost. Our job is not about technology as an end state – it’s about our mission. We’ve got to get the mission of government done. You’re fostering immigration, you’re protecting public safety, you’re providing better energy guidance, you’re shaping an industry for the country. Open data is a fundamental building block of providing flexibility and reusability into the workplace. It’s what you do to get you to the end state of your mission. I hearken back a lot to the examples we used at the FCC, which was moving from like fourteen websites to one and how we managed that. How do we take workload off a place so that the effort pays for itself in six months and starts yielding benefits beyond that? The benefits are long-term. When you build that next enhancement, or that new thing on top of it, you can realize the benefits at lower cost. It’s amazing. I do these TechStat processes, where I sit down with the agencies. They have some project that’s going off the rails. They need help, focus, and some executive oversight. I sit down, usually in a big room of people, and it’s almost gotten to the point where you don’t need to look at the briefing documents ahead of time. You sit down and say, I bet you’re doing it this way – and it’s monolithic, proprietary, probably taking a lot of packaged software and writing a lot of glue code to hold it all together – and you then propose to them the principles of open data and open approaches to doing the solution, and tell them I want to see in the next sixty days some customer-facing, benefit value that’s built on this model. They go off and do that, and they get right back on the tracks and they succeed. Time after time when we do TechStat, that’s the formula and it’s yielded these incredible results. That culture is starting to permeate into how we get stuff done, because they see how it might accomplish their mission if they just turn 45 degrees and try a different approach. If that makes them successful, they will go there every time.

Critiques of open data raise concerns about “discretionary disclosure,” where a government entity releases what it wants, claims credit for embracing open government, and obfuscates the rest of the data. Does this policy change any of the decisions that are being made to delay, redact or not release requested data?

VanRoekel: I think today marks an inflection point that will set a course for the future. It’s not that tomorrow or next month or next year all government data will just be transformed into open, machine-readable form. It will happen over time. The key here is that we’ve created mechanisms to protect privacy and security of data but built in a culture where that which is intended to be public should be made public. Part of what is described in the executive order is the formation of this cross-agency executive group that will define a cross-agency priority goal: we need to get inventories in from agencies regarding what they hold that could be made public. We want to know stuff that’s not public today, what could be out there. We’re going to take that in and look at how we can set goals for this year, the next year and the year after that to continue to open up data at a faster pace than we’ve been doing in the past. The modernization act and some of the work around setting goals in government is much more compatible and looks a lot like the private sector. We’re embracing these notions that I’ve really grown to love and respect over the course of my private sector career in government around methodologies. Stay tuned on the capital and what that looks like.

Are you all going to work with the House and Senate on the DATA Act or are statutory issues on oversight still a stumbling block?

VanRoekel: The spirit of the DATA Act, of transparency and openness, are the things we’re doing, and I think are embraced. Some of the tactical aspects of the act were a little off the mark, in terms of getting to the end state that we want to get to. If you look at the FY-14 budget and the work we’ve done on transferring USASpending.gov to Treasury to get it closer to the source of the data, plus a view into how those systems get modernized, how we bring these principles into that mix, that will all be a part of the end state, which is how we track the spending.

Do you ever anticipate the data going into FOIA.gov also going into Data.gov?

VanRoekel: I don’t know. I can’t speculate on that. I’m not close enough to it.

Well, FOIA requests show demand. Do you have any sense of what people are paying for now, in terms of government data?

VanRoekel: I don’t.

Has anybody ever asked, to try to figure that out?

VanRoekel: I think that would be a great thing for you to do.

I appreciate that, but this strikes me as an interesting assessment that you could be doing, in terms of measuring outflows for business intelligence. If someone buys data, it shows that there is value in it. What would it mean if releases reflected that signal?

VanRoekel: You mean preference data that is being purchased?

Right.

VanRoekel: Well, part of this will be building and looking at Data.gov. Some of the stuff coming there is really building community around the data. The number one question Todd Park and I had coming out of the PIF program, at the end of May [2013] was, what if I think there’s data, but I don’t know, who do I contact? An important part of the delivery of this wave and the product coming out as part of this policy is going to be this enhanced Data.gov; our intention is to build a much richer community around government data. We want to hear from people. If there are data sources that do hold promise and value, let’s hear about those and see if there are things we can do to get a PIF on structuring it, and get agencies to modernize systems to get it released and open. I know some of the costs are like administrative fees for printing or finding the data, something that’s related to third parties collecting it and then reselling it. We want to make sure that we’re thoughtful in how we approach that.

How has the experience that you’ve seen everyone have with the first iteration of Data.gov informed the nation’s open data strategy today? What specifically had not been done before that you will be doing now?

VanRoekel: The first Data.gov set us on a cultural path. What it didn’t do was connect you to the data at its source. What is this data? How often is it updated? Findability and searchability of broad government data wasn’t there. Programmability of the data wasn’t necessarily there. Data.gov, in the future, instead of being a repository for data, a place to upload the data, my intention is that it will become a meta data catalog. It will be the place you go, the one-stop-shop, to find government data, across multiple aspects. The way we’re doing this is through the policy itself, which says that agencies have to go and set up this new page, similar to what is now standard in open government, /open, /developer. The most important part of that page is a JSON file. That’s what data.gov can go out and crawl, or any developer outside can go out and crawl, to find out when data has been updated, what data is available, in what format. All of the standard meta data that I’ve described earlier will be represented through that JSON file. Data.gov will then become a meta data catalog of all the open data out in government at its source. As a developer, you’d come in, and if you wanted to do a map, for instance, to see what broadband capabilities exist near low-income Americans and then overlay locations of educational institutions, if you wanted to look for a correlation between income and broadband deployment and education, you’d hypothetically be looking for 3 different data sources, from 3 different agencies. You’d be able to find the open data streams, the APIs, to go get that data in one place, and then you’d have a connection back to the mothership to be able to grab it, find out who owns it. We want to still have a center of gravity for data, but make the data itself follow these principles, in terms of discoverability and use. The thing that probably got me most pointed in this direction is the President’s Council of Advisors on Science and Technology (PCAST), which did a report on health IT. Buried on page 60 or something, it had this description of meta data as the linchpin of discoverability of diverse data sources. That’s the approach we’ve taken, much like Google.
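
To make the crawling concrete, here is a rough sketch of what he describes, assuming an agency publishes a Project Open Data style /data.json; the agency URL is hypothetical, and data.gov’s real harvester is certainly more involved than this:

```python
# Sketch: fetch an agency's /data.json and list each dataset's metadata.
import json
import urllib.request

def crawl(agency_root):
    url = agency_root.rstrip("/") + "/data.json"
    req = urllib.request.Request(
        url, headers={"User-Agent": "example-crawler admin@example.com"})
    with urllib.request.urlopen(req) as resp:
        catalog = json.load(resp)
    for record in catalog:       # assumes the catalog is a list of records
        yield record.get("title"), record.get("modified"), record.get("accessURL")

if __name__ == "__main__":
    for title, modified, access_url in crawl("https://www.example.gov"):
        print(modified, title, access_url)
```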

5 years from now, what will have changed because of this effort?

VanRoekel: The way we build solutions inside of government is going to change, and the amount of apps and solutions outside of government are going to fundamentally change. You and I now, sitting in our cars, take for granted the GPS signal going to the device on the dash. I think about government. Government is right there with me, every single day, as I’m driving my car, or when I do a Foursquare check-in on my phone. We’ll be bringing government data to citizens where they are, versus making people come to government. It’s been a long time since the mid-80s, when we opened up GPS, but look at where we are today. I think we’ll look back in 10 or 15 years and think about all of the potential we unlocked today.

What data could be like GPS, in terms of their impact on our lives?

VanRoekel: I think health and energy are probably two big ones.

POSTSCRIPT

Since we talked, the Obama administration has followed through on some of the commitments the U.S. CIO described, including relaunching Data.gov and releasing more data. Other goals, like every agency releasing an enterprise data inventory or publishing a /data and /developer page online, have seen mixed compliance, as an audit by the Sunlight Foundation showed in December. The federal government shutdown last fall also scuttled open data access, where certain data types were deemed essential to maintain and others were not. The shutdown also suggested that an “API-first” strategy for open data might be problematic.

OMB, where VanRoekel works, has also quietly called for major changes in the DATA Act, which passed the House of Representatives with overwhelming support at the end of last year. A marked-up version of the DATA Act obtained by Federal News Radio removes funding for the legislation and language that would require standardized data elements for reporting federal government spending. The news was not received well on Capitol Hill. Sen. Mark Warner, D-Va., the lead sponsor of the DATA Act in the Senate, reaffirmed his commitment to the current version of the bill in a statement: “The Obama administration talks a lot about transparency, but these comments reflect a clear attempt to gut the DATA Act. DATA reflects years of bipartisan, bicameral work, and to propose substantial, unproductive changes this late in the game is unacceptable. We look forward to passing the DATA Act, which had near universal support in its House passage and passed unanimously out of its Senate committee. I will not back down from a bill that holds the government accountable and provides taxpayers the transparency they deserve.”

The leaked markup has led some observers to wonder whether the White House wants to scuttle the DATA Act, and others to potentially withdraw support. “OMB’s version of the DATA Act is not a bill that the Sunlight Foundation can support,” wrote Matt Rumsey, a policy analyst at the Sunlight Foundation. “If OMB’s suggestions are ultimately added to the legislation, we will join our friends at the Data Transparency Coalition and withdraw our support of the DATA Act.”

In response to repeated questions about the leaked draft, the OMB press office has sent the same statement to multiple media outlets: “The Administration believes data transparency is a critical element to good government, and we share the goal of advancing transparency and accountability of Federal spending. We will continue to work with Congress and other stakeholders to identify the most effective & efficient use of taxpayer dollars to accomplish this goal.” I have asked the Office of Management and Budget (OMB) about all of these issues and will publish any reply I receive separately, with a link from this post.

Lawmakers release proposed draft to codify U.S. CTO role, create U.S. Digital Government Office (DGO)

After months of discussion regarding how the government can avoid another Healthcare.gov debacle, legislative proposals are starting to emerge in Washington. Last year, FITARA gathered steam before running into a legislative impasse. Today, a new draft bill introduced for discussion in the U.S. House of Representatives proposes specific reforms that substantially parallel those the United Kingdom made after a similar technology debacle in its National Health Service.

The draft bill is embedded below.

The subtext for the ‘Reforming Federal Procurement of Information Technology Act’ (RFP-IT) is a newfound awareness in Congress and the nation at large, driven by the issues with Healthcare.gov, that something is profoundly amiss in the way the federal government buys, builds and maintains technology.

“Studies show that 94 percent of major government IT projects between 2003 and 2012 came in over budget, behind schedule, or failed completely,” said Representative Anna G. Eshoo (D-CA), ranking member of the House Communications and Technology Subcommittee, and co-sponsor of RFP-IT, in a statement. “In an $80 billion sector of our federal government’s budget, this is an absolutely unacceptable waste of taxpayer dollars. Furthermore, thousands of pages of procurement regulations discourage small innovative businesses from even attempting to navigate the rules. Our draft bill puts proven best practices to work by instituting a White House office of IT procurement and gives all American innovators a fair shake at competing for valuable federal IT contracts by lowering the burden of entry.”

Specifically, RFP-IT would:

  • Make the position of U.S. chief technology officer and the Presidential Innovation Fellows program permanent
  • Create a U.S. Digital Government Office (DGO) that would not only govern the country’s mammoth federal information technology project portfolio more effectively but actively build and maintain aspects of it
  • Increase the size of a contract for IT services allowable under the Small Business Act from $100,000 to $500,000
  • Create a U.S. DGO fund supported by 5% of the fees collected by executive agencies for various types of contracts

“In the 21st century, effective governance is inextricably linked with how well government leverages technology to serve its citizens,” said Representative Gerry Connolly (D-VA), ranking member of the House Oversight and Government Reform Subcommittee, and co-sponsor of RFP-IT, in a statement. “Despite incremental improvements in federal IT management over the years, the bottom line is that large-scale federal IT program failures continue to waste taxpayers’ dollars, while jeopardizing our Nation’s ability to carry out fundamental constitutional responsibilities, from conducting a census to securing our borders. Our RFP-IT discussion draft recognizes that transforming how the federal government procures critical IT assets will likely require bolstering ongoing efforts to comprehensively strengthen general federal IT management practices with targeted enhancements that promote innovative and bold procurement strategies from the White House on down.”

The legislative proposal earned qualified praise from Clay Johnson, former Presidential Innovation Fellow and CEO of the Department for Better Technology, whose advocacy for reforming government IT procurement and fixing the issues behind Healthcare.gov seemed to be on every cable news channel and editorial page last fall and winter.

“This, I think, really works well alongside FITARA, which calls for increased agency CIO authority,” wrote Johnson. “What will hopefully end up happening if both bills pass, is that good talent can get inside of government, and agencies that perform well can operate independently, and agencies that don’t can be pulled back in and reformed, while still having operational continuity (meaning: while that reform is happening, IT projects can still be done well, and run by the DGO).”

In 2014, digital government supports open government. What’s unclear is whether this proposal from two Democratic lawmakers can gain a Republican co-sponsor in the GOP-controlled House, or whether a federal IT reform-minded senator like Tom Carper or Cory Booker will take it up in the Senate.

This single bill isn’t a panacea, however, Johnson emphasized, pointing to the need to fix SAM.gov, the error-prone website contractors use to register with the federal government, and to reform registration for “set-aside” businesses.

“We’re not sure how Congress writes a ‘stop throwing errors when a user clicks submit on sam.gov’ law,” wrote Johnson. “That’s going to take hearings, and most likely, a digital government office to fix. And we think this is a bill that complements Issa’s FITARA. Since this bill is at the discussion draft stage, perhaps soon we’ll see some Republicans jump on board.”
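
Johnson’s quip points at why that is hard to legislate: graceful error handling is an engineering practice, not a statute. As a purely illustrative sketch, with invented field names that do not reflect SAM.gov’s actual schema, a submit handler should validate input and return actionable messages rather than letting an exception surface as a generic error page:

```python
# Purely illustrative: validate a contractor registration form and return
# actionable messages instead of throwing on submit. The field names are
# invented for this sketch and do not reflect SAM.gov's actual schema.
def validate_registration(form: dict) -> list[str]:
    errors = []
    duns = form.get("duns", "").strip()
    if not duns:
        errors.append("A DUNS number is required.")
    elif not (duns.isdigit() and len(duns) == 9):
        errors.append("A DUNS number must be exactly nine digits.")
    if not form.get("legal_business_name", "").strip():
        errors.append("A legal business name is required.")
    return errors  # an empty list means the submission is valid

# Example: report problems to the user instead of erroring out.
for message in validate_registration({"duns": "12345"}):
    print(message)
```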

UPDATE:
On July 30, RFP-IT was officially introduced. (Full text of the bill, via Rep. Eshoo’s office): “The Reforming Federal Procurement of Information Technology (RFP-IT) Act, introduced by Rep. Anna G. Eshoo (D-Calif.), Ranking Member of the Communications and Technology Subcommittee, Rep. Gerry Connolly (D-Va.), Ranking Member of the Oversight and Government Reform Subcommittee, Rep. Richard Hanna (R-N.Y.), Chairman of the Small Business Subcommittee on Contracting and Workforce, and Rep. Eric Swalwell (D-Calif.), Ranking Member of the Committee on Science, Space and Technology’s Energy Subcommittee, and Rep. Suzan DelBene (D-Wash.)”

Here’s a quick summary of the revised RFP-IT Act:

1) It would officially establish a Digital Government Office (DGO) within the White House Office of Management and Budget, headed by the U.S. CIO as a Senate-confirmed presidential appointee reporting to the head of OMB, shifting the statute’s language from “electronic government” to “digital government.”
2) It would codify the Presidential Innovation Fellows program.
3) It would expand competition for federal IT contracting under a simplified process that eases the regulatory and compliance burden on smaller companies bidding, bumping the threshold for information technology projects up to $500,000.
4) It would establish a digital service pilot program.
5) It would direct the General Services Administrator to conduct an in-depth analysis of IT Schedule 70.
6) It would direct the Comptroller General of the United States to deliver three reports to Congress within two years of enactment: on the effectiveness of the General Services Administration’s 18F program, on IT Schedule 70, and on “challenges and barriers to entry for small business technology firms.”

President Obama to host Google+ Hangout on January 31st

The Google home page currently has a link to ask President Obama a question in a Google+ Hangout. That’s some mighty popular online real estate devoted to citizen engagement.

The first presidential hangout featured real questions from citizens. I hope this one is up to the same standard.

You can see publicly shared questions on the #AskObama2014 hashtag on YouTube or Google+.

More details on the “virtual road trip” with President Obama are available at the official Google blog.

We are, once again, living in the future.

Senators introduce bill to rename U.S. GPO “Government Publishing Office”

As ever, laws and institutions lag the rapid pace of technological change. In 2014, for instance, the statutory requirement that the person designated to publish federal information be a practical printer “versed in the art of bookbinding” is a remnant of a bygone age.

Last week, Senator Amy Klobuchar (D-MN) introduced the Government Publishing Office Act of 2014 (S. 1947), which would rename the United States Government Printing Office the Government Publishing Office. (It would also strike the bookbinding requirement.)

The current Public Printer of the United States supported the proposal. “Publishing defines a broad range of services that includes print, digital, and future technological advancements,” said Public Printer Davita Vance-Cooks, in a statement. “The name Government Publishing Office better reflects the services that GPO currently provides and will provide in the future. I appreciate the efforts of Senators Klobuchar and Chambliss for introducing and supporting this bill. GPO will continue to meet the information needs of Congress, Federal agencies, and the public and carry out our mission of Keeping America Informed.”

“The idea of renaming GPO was discussed in a December Committee on House Administration hearing entitled ‘Mission of the Government Printing Office in a post-print world,’ which I wrote about here,” wrote Daniel Schuman, policy director at Citizens for Responsibility and Ethics in Washington (CREW), in a blog post on the GPO bill.

While many questions about the GPO’s digital future remain, there’s some hope that at least the name of the institution might receive an update.

For more on the history of the Government Printing Office, watch citizen archivist Carl Malamud’s talk from 2009, embedded below:

We can only wonder how the last five years might have been different if President Obama had nominated him as the head of the U.S. GPO.