Remote biosurveillance poses novel existential threats to civil liberties

FierceElectronics reports that Draganfly claims its technology can, when attached to a drone, detect fever, coughs, respiration, heart rate, and blood pressure for a given human at a distance. Put another way, this drone company is saying that …

Digital technologies are disrupting democracies, for both good and ill

Elon University and Pew Research Center asked experts what the impact of digital disruption will be upon democracy in 2030: Perspectives differ! About half predicted that humans will use technology to weaken democracy over the next decade, with concerns grounded …

White House moves WhiteHouse.gov to HTTPS by default, tying privacy to security


A .gov website that uses HTTPS encryption by default for its visitors is a superb example of “privacy by design.” On March 6th, the Federal Trade Commission enabled encryption for FTC.gov. When I visited whitehouse.gov tonight, I found that the White House digital team had flipped the switch on HTTPS for what’s likely the most prominent government website in the world. The White House Web team confirmed the change just after midnight.

According to Leigh Heyman, director of new media technologies at the White House, over the next few days the team will be migrating other domains, like the bare domain name, whitehouse.gov, and m.whitehouse.gov, over to HTTPS as well, joining www.whitehouse.gov.

“Americans care about their privacy, and that’s what the White House’s move to HTTPS by default is about,” said Eric Mill, an open government software engineer at 18F. “The White House’s use of HTTPS protects visitors’ personal information and browsing activity when they connect to whitehouse.gov across the vast, unpredictable network of computers that is the internet.”

If you’re unfamiliar with HTTPS, it’s a way of encrypting your connection to a Web server online. Specifically, HTTPS refers to layering the Hypertext Transfer Protocol (HTTP) on top of the Secure Sockets Layer (SSL) or Transport Layer Security (TLS). What that means in practice is that your requests to the Web server and the pages returned from it are encrypted in transit and decrypted only at each end. Why does that matter? Consider, for instance, someone looking up sensitive health information online who visits a government website without HTTPS that also has data collection: anyone sitting on the network path between them and the site could see exactly what they are reading.
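
To see what that layering means in practice, here’s a minimal sketch using only Python’s standard library: it opens a TLS connection to www.whitehouse.gov, checks the certificate that authenticates the site, and sends a request that travels encrypted. (This is an illustration of my own, not anything published by the teams quoted here.)

```python
# Minimal sketch of what HTTPS layers on top of HTTP, using only the Python
# standard library. Any HTTPS-enabled host works; whitehouse.gov is used here
# because it is the site discussed above.
import socket
import ssl

HOST = "www.whitehouse.gov"

# create_default_context() verifies the certificate chain and the hostname,
# which is the authentication piece of HTTPS.
context = ssl.create_default_context()

with socket.create_connection((HOST, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        cert = tls_sock.getpeercert()
        print("Issued to:", dict(pair[0] for pair in cert["subject"]))
        print("Issued by:", dict(pair[0] for pair in cert["issuer"]))
        print("Protocol:", tls_sock.version())  # e.g. TLSv1.2 or TLSv1.3

        # This request and its response are encrypted in transit; an eavesdropper
        # on the network can see which host was contacted, but not the content.
        request = b"HEAD / HTTP/1.1\r\nHost: " + HOST.encode() + b"\r\nConnection: close\r\n\r\n"
        tls_sock.sendall(request)
        print(tls_sock.recv(256).decode(errors="replace"))
```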

“Use of https is generally considered to be good practice, however, as opposed to unencrypted, regular http, although it adds a small amount of extra processing and delay to do the encryption,” commented Eugene Spafford, a Purdue University computer science professor and founder and executive director of the Center for Education and Research in Information Assurance and Security.

“HTTPS primarily provides three things: greater authentication, stream privacy, and message integrity. A quick look at the site doesn’t reveal (to me) anything that would likely require privacy or heightened message integrity. The most immediate consequence is that parties connecting to the website can have increased confidence of the site’s authenticity because a signed certificate will be employed. Of course, most people don’t actually verify certificates and their roots (cf. Superfish), so this isn’t an ironclad identification.”

Why does this matter?

“This immediately creates a strong baseline of privacy and security for anyone in the world, American or otherwise, who visits the White House website — whether to read their blog, learn more about the President, download official policies, or anything else inside whitehouse.gov,” said Mill.

“At a basic level, what a person sees and does on whitehouse.gov should be between them and the White House. When someone reads official policies published on whitehouse.gov, they should be confident that policy is real and authentic. The White House’s use of HTTPS by default means those promises just got a lot stronger.”

Ashkan Soltani, the FTC’s chief technologist, explained why that federal agency shifted at the Tech@FTC blog:

As a quick primer, HTTPS encryption secures your communications while in transit with websites so that only you and the website are able to view the content. The lock icon now appearing in your browser represents that the communication is encrypted and eavesdroppers are unable to look in. At this time, secure browsing is generally not a requirement for federal websites, but it is considered an industry best practice. Transit encryption is an important safeguard against eavesdroppers and has been the subject of previous investigations where we alleged companies failed to live up to their security promises when collecting personal information. It’s an important step when websites or apps collect personal information, and is a great best practice even if they don’t.

What broader trends does this tap into?

The White House moving to HTTPS is part of a larger move to lead by example in promoting privacy and security best practices, related Soltani, over email.

“I believe we’ll see a slow shift over the next few years of websites and services moving to HTTPS by default,” he said, “something a number of standards bodies including ISOC, IETF, and IAB have also called for.”

Along with FTC.gov, Mill highlighted the move to HTTPS by the Privacy and Civil Liberties Oversight Board (PCLOB), the independent agency charged with balancing the rights of American citizens against the security steps taken in the wake of the terrorist attacks of 9/11.

They’re far from alone: “Last month, 18F worked with 19 other .gov domains to go the distance to ensure browsers would always connect to them over HTTPS,” said Mill.

“It’s important to understand that what’s happening now in the federal government is what the broader internet has been working on for a while: making privacy the default.

The standards bodies that guide the internet’s development are recommending that the internet be encrypted by default, instructing their working groups to prioritize encryption in new protocol development, and declaring a more secure future for the web. The fastest versions of HTTP today already require encryption in major browsers, and it’s becoming easier to imagine a future where web browsers proactively warn users about unencrypted websites.

This is also why every .gov that 18F builds with its partner agencies uses HTTPS, full stop. We work hard to demonstrate that HTTPS can be fast, inexpensive, and easy. It’s a better future, and a practical one.”

“The kind of privacy and security the White House is offering its visitors is what we should come to expect from the entire web, not just websites someone thinks are “sensitive”. All Web browsing is sensitive, and the White House’s leadership here reinforces that.”

It looks like Chris Soghoian, the principal technologist at the Speech, Privacy and Technology Project in the American Civil Liberties Union, is going to have a good day tomorrow.

While the Obama administration has taken its lumps on digital privacy after revelations of bulk surveillance of the Internet backbone by the National Security Agency, this is undeniably an important step towards securing the traffic of millions of people who visit whitehouse.gov every month.

Now that the White House is leading by example, hopefully other federal, state and local government entities will also adopt the standard.

“Everyone should want a simple feeling of privacy as they use the web, and confidence that they’re at the real and exact website they meant to visit,” said Mill. “While not everyone is highly attuned to watching for that padlock in their browser, the more websites that add it — especially high profile ones like the White House — the more that people can depend on that promise being met.”

On its 3rd anniversary, opportunities and challenges for the Open Government Partnership


In 2010, President Barack Obama spoke to the United Nations General Assembly about open government. “The common thread of progress is the principle that government is accountable to its citizens,” he said, “and the diversity in this room makes clear — no one country has all the answers, but all of us must answer to our own people.”

In all parts of the world, we see the promise of innovation to make government more open and accountable.  And now, we must build on that progress.  And when we gather back here next year, we should bring specific commitments to promote transparency; to fight corruption; to energize civic engagement; to leverage new technologies so that we strengthen the foundations of freedom in our own countries, while living up to the ideals that can light the world.

Open government, said Samantha Power, now the U.S. ambassador to the United Nations, could have a global impact.

In 2011, a historic Open Government Partnership launched in New York City, hailed as a fresh approach to parting the red tape by the Economist. “The partnership is really the first time that there is a multilateral platform to address these issues,” said Maria Otero, former under secretary of state for democracy and global affairs at the United States State Department. “The partnership could have focused on having countries come in and present best practices and exchange ideas and then just go home.”

“The partnership is really focused on first having countries participate that have already demonstrated interest in this area and have already put in place a number of specific things and the material laid out, if you will, the minimum standards that are being requested. What the partnership really looks for is to provide a mechanism by which the countries can each develop their own national plans on ways to expand what they’re doing on transparency, accountability, and civic engagement, or to start new initiatives for them. That is really what is very different and important about this partnership, is that it is very action- and results-oriented.”

In 2012, the Open Government Partnership became a player on the world stage as it hosted a global gathering of national leaders and civil society at an annual meeting in Brazil, taking on the responsibilities and challenges that accompany that role, including pushing participants to submit missing action plans and progress reports, not just letters of commitment.

In January 2013, Power hailed the Open Government Partnership (OGP) as President Obama’s signature governance initiative:

It’s not about the abstraction about ‘fighting corruption’ or ‘promoting transparency’ or ‘harnessing innovation’ — it’s about ‘are the kids getting the textbooks they’re supposed to get’ or does transparency provide a window into whether resources are going where they’re supposed to go and, to the degree to which that window exists, are citizens aware and benefiting from the data and that information such that they can hold their governments accountable. And then, does the government care that citizens care that those discrepancies exist?

In May 2013, a seminal event in the evolution of OGP occurred when Russia withdrew from the Open Government Partnership:

If the dominant binary of the 21st century is between open and closed, Russia looks more interested in opting towards more controllable, technocratic options that involve discretionary data releases instead of an independent judiciary or freedom of assembly or the press. One of the challenges of the Open Government Partnership has always been the criteria that a country had to pass to join and then continue to be a member. Russia’s inclusion in OGP instantly raised eyebrows, doubts and fears last April, given rampant corruption in the public sector and Russia’s terrible record on press freedom. “Russia’s withdrawal from the OGP is an important reminder that open government isn’t easy or politically simple,” said Nathaniel Heller, executive director of Global Integrity. “While we don’t yet fully understand why Russia is leaving OGP, it’s safe to assume that the powers that be in the Kremlin decided that it was untenable to give reformers elsewhere in the Russian government the freedom to advance the open government agenda within the bureaucracy.”

In November 2013, the world may have hit ‘peak open’ at the OGP annual summit in London, despite the partnership’s members facing default states of closed.

Swirling underneath the professional glitz of an international summit were strong undercurrents of concern about its impact upon governments reluctant to cede power, reveal corruption or risk embarrassment upon disclosure of simple incompetence. The OGP summit took place at a moment where 21st century technology-fueled optimism has splashed up against the foundations of institutions created in the previous century. While the use of the Internet as a platform for collective action has grown, so too have attendant concerns about privacy and surveillance in the wake of disclosures by NSA contractor Edward Snowden, as the same technologies that accelerated revolutions across the Middle East and North Africa are being used to capture and track the people advocating for change.

In 2014, the Open Government Partnership has matured and expanded, with France joining earlier in the year and Bosnia and Herzegovina bringing the total number of participating countries to 65 out of about 88 eligible countries worldwide. As OGP turns three, the partnership is celebrating the success of its expansion and looking ahead to its future, with a clearer mission and goals and an ambitious four-year strategy (PDF). The partnership is finally writing letters to countries that are not living up to their commitments, although the consequences for their continued participation if they do not comply remain to be seen.

The challenges and opportunities ahead for a partnership that provides a platform for civil society to hold government accountable are considerable, given the threats to civil society worldwide and the breathtaking changes brought about through technological innovation. Today, 10 national leaders will speak in New York City to mark OGP’s third anniversary. (I’ll be there to listen and share what I can.)

After the speeches end and the presidents and prime ministers return home, serious questions will remain regarding their willingness to put political capital behind reforms and take tough stands to ensure that their governments actually open up. Digital government is not open government, just as not all open data supports democratic reforms. As Mexico prepares to become lead co-chair of OGP, one element that didn’t make it into the challenges listed for the country is the state of press freedom in Mexico. As the Committee to Protect Journalists highlighted, open government is not sustainable without a free press. As long as the murders of journalists go unpunished in Mexico, the commitments and efforts of the Mexican national government will have to be taken in context.

Given this blog’s past stance that as press freedom goes, so too does open government, I’ve signed a petition urging the White House to explicitly support a right to report. Every other country that has committed to open government should do the same. Given OGP’s own challenges around the media and open government (PDF), I would also urge the partnership to make sure that press freedom and freedom of expression occupies a prominent place in its advocacy efforts in the years ahead.

Data journalism and the changing landscape for policy making in the age of networked transparency

This morning, I gave a short talk on data journalism and the changing landscape for policy making in the age of networked transparency at the Woodrow Wilson Center in DC, hosted by the Commons Lab.

Video from the event is online at the Wilson Center website. Unfortunately, I found that I didn’t edit my presentation down enough for my allotted time. I made it to slide 84 of 98 in 20 minutes and had to skip the 14 predictions and recommendations section. While many of the themes I describe in those 14 slides came out during the roundtable question and answer period, they’re worth resharing here, in the presentation I’ve embedded below:

Congress passes bill to make unlocking cellphones legal, shining new sunlight on White House e-petitions


When the U.S. House of Representatives passed S.517 today, voting to send the “Unlocking Consumer Choice and Wireless Competition Act” that the U.S. Senate passed unanimously last week, the legislative branch completed an unprecedented democratic process: a bill that had its genesis in a White House e-petition signed by more than 100,000 consumers was sent back to the White House for the President’s signature. If signed into law, the bill would 1) make it legal for consumers to unlock their cellphones in January 2015, reversing a controversial decision made by the Librarian of Congress in 2013 by reinstating a 2010 rulemaking, and 2) direct the Librarian to consider if other mobile devices, like tablets, should also be eligible to be unlocked.

As Vermont Senator Patrick Leahy’s staff highlighted, the ranking members and chairmen of the House and Senate Judiciary Committees started cooperating on the issue in 2013 after the White House responded to the e-petition.

“I thank the House for moving so quickly on the bill we passed in the Senate last week and for working in a bipartisan way to support consumers,” said Leahy, in a statement. “The bipartisan Unlocking Consumer Choice and Wireless Competition Act puts consumers first, promotes competition in the wireless phone marketplace, and encourages continued use of existing devices. Once the President signs this bill into law, consumers will be able to more easily use their existing cell phones on the wireless carrier of their choice.”

In the annals of still-embryonic American experiments in digital democracy, I can find no ready equivalent or precedent for this positive outcome for the people petitioning their government. The closest may be when the White House responded to an e-petition on the Stop Online Piracy Act in 2012, taking a position on anti-piracy bills that posed a threat to online industry, security and innovation. Even then, it took the voices of millions of people activated online to change Washington and the votes of members of Congress.

It’s critical to note that there’s a much deeper backstory to why activism worked: the people behind the e-petition didn’t stop with an official response from the White House. After making a lot of noise online, activists engaged Congress over a year and a half, visiting Capitol Hill, sitting in on phone calls and hearings, and being involved in the democratic process that led to this positive change.

“Many of the initial conversations on DMCA reform were engaged with the Republican Study Committee copyright memo in 2012, so it’s been a 21-month process,” said former Congressional staffer Derek Khanna, via email, “but such sea changes in policy usually take a long time, particularly if you’re confronting very powerful interests.”

Khanna, now a fellow at Yale Law School and columnist, was part of the coalition of activists advocating for this change in Congress.

“The campaign on unlocking was really trying to drive those issues and solutions through movement politics,” he said, “and that movement has succeeded in more than just the unlocking bill: now there is also the Judiciary Committee having hearings on copyright reform. The YG Network report, “Room to Grow,” also called for wholesale copyright and patent reforms and cited the RSC memo.”

This is an important lesson in why “clicktivism” alone won’t be enough to make changes to laws or regulations emanating from Washington: people who want shifts in policy or legislation have to learn how Congress works and act on that knowledge.

“A key part of our success was starting small with definable goals, and taking small successes and building upon them,” said Khanna. “Most movements throughout history have followed this strategy. Sometimes, e-campaigns shoot for the moon when the small battles have not been won yet. This is particularly a problem with tech issues. One reason why the unlocking petition was more successful than others was because it was only a tool in the toolkit. While it was ongoing, I was arguing our cause in the media, writing op-eds, meeting with Congress, giving speeches, and working with think-tanks. We basically saw the petition as energy to reinforce our message and channel our support, not the entire ballgame. Some petition campaigns fail because they assume that the petition is it: you get it to 100,000 signatures and you win or lose. Some fail because they don’t have a ground presence in Washington, DC, trying to influence the actual channels that Members of Congress and their staff follow.”

The hardest part, according to Khanna, was  keeping the momentum going after the e-petition succeeded and the White House responded, agreeing with the petitioners.

“We had no list-serve of our signatories, no organization, and no money,” he said. “It was extremely difficult. In fact, some of us were pushing for a more unified organization at the time. Others were more reluctant to go in that direction. A unified organization will be critical to future battles. Special interests were actively working against us and even derailed the original House bill after it passed Committee; having a unified organization would have helped move this process more quickly.”

The need for that organization and DC ground game doesn’t mean that this e-petition didn’t matter: its success was a strong signal for policy makers that people cared about this issue. That’s also important: in the years since the launch of White House e-petitions in September 2011, the digital manifestation of the right of the people “to petition the Government for a redress of grievances” guaranteed by the First Amendment of the Bill of Rights of the Constitution of the United States has come in for a lot of grief.

While White House e-petitions do sometimes work, 10% of successful e-petitions remain unanswered months or even years after they passed the threshold for a response, with activity in 2014 leading some critics to call “We the People” a “virtual ghost town.” Many of these criticisms remain founded in fact: popular e-petitions do remain unanswered. The longer these e-petitions go unanswered, the higher the chance that the platform will drive public disillusionment in “We the People,” not confidence that public participation matters.

For instance, an e-petition on ECPA reform still sits unaddressed. For those unfamiliar with the acronym, it refers to the long-overdue legislative effort to make due process digital by updating the Electronic Communications Privacy Act to require law enforcement to get a warrant before accessing cloud-based email or data of American citizens online. A majority of the U.S. House of Representatives supports ECPA reform. The White House has voiced support for “robust privacy and civil liberties protections.” The Supreme Court has made it clear that law enforcement needs a warrant to search the contents of cellphones.

In a statement published by Politico, President Obama indicated that he would make unlocking cellphones legal: “The bill Congress passed today is another step toward giving ordinary Americans more flexibility and choice so that they can find a cellphone carrier that meets their needs and their budget,” he said.

Perhaps it’s now, finally, time for the President of the United States to personally respond to a second e-petition, making it clear whether or not a constitutional law professor believes that the federal government should have to get a warrant before reading the email or personal papers of citizens stored online.

UPDATE: On August 1st, President Obama signed the bill into law.

President Barack Obama signs S. 517, Unlocking Consumer Choice and Wireless Competition Act, in the Oval Office, Aug. 1, 2014. (Official White House Photo by Pete Souza)

This is an example of the federal government “answering the public’s call,” wrote Jeffrey Zients, Director of the National Economic Council and Assistant to the President for Economic Policy, and Senator Patrick Leahy, Chairman of the Senate Judiciary Committee.

“Today, President Obama will sign into law the Unlocking Consumer Choice and Wireless Competition Act, and in doing so, will achieve a rare trifecta: a win for American consumers, a win for wireless competition, and an example of Democracy at its best — bipartisan Congressional action in direct response to a call to action from the American people.

The story of how we broke through Washington gridlock to restore the freedom of consumers to take their mobile phone wherever they choose is one worth telling, and a model worth repeating.”

Activist Sina Khanifar added a celebratory note via email, echoing Khanna’s points about process and highlighting what’s left to do to enable all consumers to unlock their mobile devices:

The original petition did a lot to kick off the process, but it took about a year and a half of negotiating with stakeholders, going back and forth with congressional staffers, and pushing back against corporate lobbies to get to an actual law. A big thanks to public advocacy groups like Public Knowledge, Consumers Union and the Electronic Frontier Foundation who helped guide that process.

The bill’s a great step forwards, but we had to make a lot of compromises along the way. For one thing, it’s not a permanent fix. In 2015, the exemption expires and the Librarian of Congress will make another rulemaking and decide the fate of unlocking. I asked repeatedly for Congress to make the exemption permanent, and Rep. Zoe Lofgren even introduced the excellent “Unlocking Technology Act of 2013” that would have done just that. Unfortunately, Congress wasn’t ready to deal with the underlying copyright issue that makes it illegal to unlock your phone. Doing so would require amending the DMCA’s controversial anti-circumvention provisions, a step that’s desperately needed.

It’s not too late for Congress to pass real reform along the lines of Zoe Lofgren’s bill. I’ll continue to push for that change as part of my campaign at FixtheDMCA.org. In the meanwhile, I’m going to be celebrating tonight. And consumers have another year and a half to unlock their devices. Hopefully the Librarian of Congress will have better sense than to deny an unlocking exemption again – Congress sent a very clear message that unlocking should be legal by overturning a DMCA rulemaking for the first time in the law’s history.

UPDATE: On August 15th, weeks after the bill passed, the White House published a blog post with its take on how cell phone unlocking became legal, including a note regarding explicit involvement in policy change:

The White House policy team convened more than a half-dozen agencies and offices’ senior officials to ask a simple question: How can we move this issue forward? After careful deliberation, it was clear to us: The Administration couldn’t agree more with petitioners, and we came out in strong support of again making it legal for consumers to unlock their devices.

But we didn’t just agree; we offered a template for how to make it a reality. Our response laid out steps that the Federal Communications Commission (FCC), wireless carriers, and Congress could take to make sure copyright law didn’t stand in the way of consumer choice. And over the following weeks and months, we worked with the FCC and wireless carriers to reach voluntary agreements to provide consumers with additional flexibility. That captured national attention, including support from national editorial pages.

All that helped motivate Congress to take action, and heed the call in a bipartisan way.

This post has been updated with a statement from the White House, comments from Derek Khanna, Sina Khanifar, a post by Senator Leahy and Jeffrey Zients, director of the National Economic Council and Assistant to the President for Economic Policy, and a WhiteHouse.gov blog post.

PCLOB issues report on U.S. government surveillance under Section 702 of FISA [UPDATED]


The pre-release version of the Privacy and Civil Liberties Oversight Board’s Report on the Surveillance Program Operated Pursuant to Section 702 of the Foreign Intelligence Surveillance Act (FISA) is now available online. [PDF]

Short version: The board found little legally awry with surveillance conducted under Section 702 of FISA, which permits the federal government to compel United States companies to assist it in conducting surveillance targeting foreign people and entities, noting that it was a strong, effective tool for counterterrorism. The extensive report explores the legal rationales for such surveillance and lists ten recommendations. The scope of digital surveillance was detailed in The Washington Post on Monday, which reported that only four countries in the world (Canada, the UK, New Zealand and Australia), along with the United States itself, are not subject to the surveillance enabled by this legal authority to intercept communications.

Context from Gregory McNeal in Forbes:

“Section 702 of FISA has not received the same level of attention as the 215 metadata collection program, largely because the program is not directly targeted at U.S. persons. However, under Section 702, the government can collect the contents of communications (for example examining email and other communications), rather than mere metadata, which it collects under Section 215.”

“702 is also a more powerful program because under it the government can collect the content of U.S. persons communications, if those persons are communicating with a foreign target. This means that U.S. persons communications can be incidentally collected by the agency, such as when two non-U.S. persons discuss a U.S. person. Communications of or concerning U.S. persons that are acquired in these ways may be retained and used by the government, subject to applicable rules and requirements. The communications of U.S. persons may also be collected by mistake, as when a U.S. person is erroneously targeted or in the event of a technological malfunction, resulting in “inadvertent” collection. In such cases, however, the applicable rules generally require the communications to be destroyed. Another circumstance where 702 collection has raised concerns is the collection of so-called “about” communication. An “about” communication is one in which the selector of a targeted person (such as that person’s email address) is contained within the communication but the targeted person is not necessarily a participant in the communication.” The PCLOB addresses each of these issues in their report.”

The PCLOB did find that “certain aspects of the program’s implementation raise privacy concerns,” specifically the “scope of the incidental collection of U.S. persons’ communications” when intelligence analysts targeted other individuals or entities.

As Josh Gerstein reported in Politico, the PCLOB “divided over key reforms to government collection of large volumes of email and other data from popular web businesses and from the backbone of the Internet. A preliminary report released Tuesday night shows that some of the proposals for changes to the Section 702 program caused a previously unseen split on the five-member Privacy and Civil Liberties Oversight Board: Two liberal members of the commission urged more aggressive safeguards, but a well-known privacy activist on the panel joined with two conservatives to withhold official endorsement of some of those changes.”

As Gerstein pointed out in a tweet, that means that reforms proposed in the House of Representatives go further than those recommended by the independent, bipartisan agency within the executive branch vested with the authority “to review and analyze actions the executive branch takes to protect the Nation from terrorism, ensuring the need for such actions is balanced with the need to protect privacy and civil liberties” and “ensure that liberty concerns are appropriately considered in the development and implementation of laws, regulations, and policies related to efforts to protect the Nation against terrorism.”

Perhaps even more problematically, the PCLOB wrote in the report that “the government is presently unable to assess the scope of the incidental collection of U.S. person information under the program.”

As Matt Sledge observed in the Huffington Post, the report’s authors “express frustration that the NSA and other government agencies have been unable to furnish estimates of the incidental collection of Americans’ communications, which ‘hampers attempts to gauge whether the program appropriately balances national security interests with the privacy of U.S. persons.’

But without signs of abuse, the board concludes privacy intrusions are justified in protecting against threats to the U.S. Nevertheless, the board suggests that the government take on the ‘backdoor searches’ that have alarmed Wyden. In those searches, the government searches through the content of communications collected while targeting foreigners for search terms associated with U.S. citizens and residents. The House voted in June to end such searches. The searches ‘push the program close to the line of constitutional reasonableness,’ the privacy board report says, but it doesn’t recommend ending them.

Privacy and civil liberties advocates issued swift expressions of dismay about the constitutionality of the surveillance and questioned the strength of the recommendations.

“The Board’s report is a tremendous disappointment,” said Nuala O’Connor, the president of the Center for Democracy and Technology, in a statement. “Even in the few instances where it recognizes the privacy implications of these programs, it provides little reassurance to all who care about digital civil liberties. The weak recommendations in the report offer no serious reform of government intrusions on the lives of individuals. It also offers scant support to the U.S. tech industry in its efforts to alleviate customer concerns about NSA surveillance, which continue to harm the industry in the global marketplace,” she added.

“If there is a silver lining, it is that the Board recognized that surveillance of people abroad implicates their human rights, as well as the constitutional rights of people in the U.S.,” said Greg Nojeim, director of the Center’s Project on Freedom, Security and Technology.  “However, the Board defers until a future date its consideration of human rights and leaves it to Congress to address the important constitutional issues.”

“If the Board’s last report on the bulk collection of phone records was a bombshell, this one is a dud,” said Kevin Bankston, policy director of New America’s Open Technology Institute (OTI).

“If the Board’s last report on the bulk collection of phone records was a bombshell, this one is a dud.  The surveillance authority the Board examined in this report, Section 702 of 2008’s FISA Amendments Act, is in many ways much more worrisome than the bulk collection program.  As the Board itself explains, that law has been used to authorize the NSA’s wiretapping of the entire Internet backbone, so that the NSA can scan untold numbers of our emails and other online messages for information about tens of thousands of targets that the NSA chooses without individualized court approval.  Yet the reforms the Board recommends today regarding this awesome surveillance power are much weaker than those in their last report, and essentially boil down to suggesting that the government should do more and better paperwork and develop stricter internal protocols as a check against abuse.

“As Chief Justice Roberts said just last week, “the Founders did not fight a revolution to gain the right to government agency protocols,” they fought to require search warrants that are based on probable cause and specifically identify who or what can be searched.  Yet as we know from documents released earlier this week, government agents are searching through the data they’ve acquired through this surveillance authority–an authority that was sold to Congress as being targeted at people outside the US–tens of thousands of times a year without having to get a warrant first.

“The fact that the Board has endorsed such warrantless rummaging through our communications, just weeks after the House of Representatives voted almost three to one to defund the NSA’s “backdoor” searches of Americans’ data, is a striking disappointment.  The Board is supposed to be an independent watchdog that aggressively seeks to protect our privacy against government overreach, rather than undermining privacy by proposing reforms that are even weaker than those that a broad bipartisan majority of the House has already endorsed.

“We are grateful to the Board for its last report and are grateful to them now for laying out, in the clearest and most comprehensive way we’ve seen so far, exactly how the NSA is using its surveillance authority.  But Congress shouldn’t wait for the NSA to take the Board’s weak set of recommendations and get its own house in order.  Congress should instead move forward with strong reforms that protect our privacy and that tell the NSA, as the Supreme Court told the government last week: if you want our data you need to come back with a warrant.”

The Electronic Frontier Foundation was even stronger, with Cindy Cohn calling the PCLOB report “legally flawed and factually incomplete.”

Hiding behind the “complexity” of the technology, it gives short shrift to the very serious privacy concerns that the surveillance has rightly raised for millions of Americans. The board also deferred considering whether the surveillance infringed the privacy of many millions more foreigners abroad.

The board skips over the essential privacy problem with the 702 “upstream” program: that the government has access to or is acquiring nearly all communications that travel over the Internet. The board focuses only on the government’s methods for searching and filtering out unwanted information. This ignores the fact that the government is collecting and searching through the content of millions of emails, social networking posts, and other Internet communications, steps that occur before the PCLOB analysis starts.  This content collection is the centerpiece of EFF’s Jewel v. NSA case, a lawsuit battling government spying filed back in 2008.

Trevor Timm, writing in the Guardian, said the PCLOB “chickened out of making any real reform proposals” and questioned why one member of the panel didn’t support more aggressive recommendations:

“More bizarrely, one of the holdouts on the panel for calling for real reform is supposed to be a civil liberties advocate. The Center for Democracy and Technology’s vice president, James Dempsey, had the chance to side with two other, more liberal members on the four-person panel to recommend the FBI get court approval before rummaging through the NSA’s vast databases, but shamefully he didn’t.

Now, as the Senate takes up a weakened House bill along with the House’s strengthened backdoor-proof amendment, it’s time to put focus back on sweeping reform. And while the PCLOB may not have said much in the way of recommendations, now Congress will have to. To help, a coalition of groups (including my current employer, Freedom of the Press Foundation) have graded each and every representative in Washington on the NSA issue. The debate certainly isn’t going away – it’s just a question of whether the public will put enough pressure on Congress to change.”

Editor’s note: This post has been substantially rewritten. More statements were added, and the headline has been amended.

[REPORT] On data journalism, democracy, open government and press freedom

On May 30, I gave a keynote talk on my research on the art and science of data journalism at the first Tow Center research conference at Columbia Journalism School in New York City. I’ve embedded the video below:

My presentation is embedded below, if you want to follow along or visit the sites and services I described.

Here’s an observation drawn from an extensive section on open government that should be of interest to readers of this blog:

“Proactive, selective open data initiatives by government focused on services that are not balanced by support for press freedoms and improved access can fairly be criticized as “openwashing” or “fauxpen government.”

Data journalists who are frequently faced with heavily redacted document releases or reams of blurry PDFs are particularly well placed to make those critiques.”

My contribution was only one part of the proceedings for “Quantifying Journalism: Metrics, Data and Computation,” which you can catch up on through the Tow Center’s live blog or TechPresident’s coverage of measuring the impact of journalism.

PCAST report on big data and privacy emphasizes value of encryption, need for policy

April 4, 2014 meeting of PCAST at National Academy of Sciences

This week, the President’s Council of Advisors on Science and Technology (PCAST) met to discuss and vote to approve a new report on big data and privacy.

UPDATE: The White House published the findings of its review on big data today, including the PCAST review of technologies underpinning big data (PDF), discussed below.

As White House special advisor John Podesta noted in January, the PCAST has been conducting a study “to explore in-depth the technological dimensions of the intersection of big data and privacy.” Earlier this week, the Associated Press interviewed Podesta about the results of the review, reporting that the White House had learned of the potential for discrimination through the use of data aggregation and analysis. These are precisely the privacy concerns that stem from data collection that I wrote about earlier this spring. Here’s the PCAST’s list of “things happening today or very soon” that provide examples of technologies that can have benefits but pose privacy risks:

- Pioneered more than a decade ago, devices mounted on utility poles are able to sense the radio stations being listened to by passing drivers, with the results sold to advertisers. [26]
- In 2011, automatic license-plate readers were in use by three quarters of local police departments surveyed. Within 5 years, 25% of departments expect to have them installed on all patrol cars, alerting police when a vehicle associated with an outstanding warrant is in view. [27] Meanwhile, civilian uses of license-plate readers are emerging, leveraging cloud platforms and promising multiple ways of using the information collected. [28]
- Experts at the Massachusetts Institute of Technology and the Cambridge Police Department have used a machine-learning algorithm to identify which burglaries likely were committed by the same offender, thus aiding police investigators. [29]
- Differential pricing (offering different prices to different customers for essentially the same goods) has become familiar in domains such as airline tickets and college costs. Big data may increase the power and prevalence of this practice and may also decrease even further its transparency. [30]
- reSpace offers machine-learning algorithms to the gaming industry that may detect early signs of gambling addiction or other aberrant behavior among online players. [31]
- Retailers like CVS and AutoZone analyze their customers’ shopping patterns to improve the layout of their stores and stock the products their customers want in a particular location. [32] By tracking cell phones, RetailNext offers bricks-and-mortar retailers the chance to recognize returning customers, just as cookies allow them to be recognized by online merchants. [33] Similar WiFi tracking technology could detect how many people are in a closed room (and in some cases their identities).
- The retailer Target inferred that a teenage customer was pregnant and, by mailing her coupons intended to be useful, unintentionally disclosed this fact to her father. [34]
- The author of an anonymous book, magazine article, or web posting is frequently “outed” by informal crowdsourcing, fueled by the natural curiosity of many unrelated individuals. [35]
- Social media and public sources of records make it easy for anyone to infer the network of friends and associates of most people who are active on the web, and many who are not. [36]
- Marist College in Poughkeepsie, New York, uses predictive modeling to identify college students who are at risk of dropping out, allowing it to target additional support to those in need. [37]
- The Durkheim Project, funded by the U.S. Department of Defense, analyzes social-media behavior to detect early signs of suicidal thoughts among veterans. [38]
- LendUp, a California-based startup, sought to use nontraditional data sources such as social media to provide credit to underserved individuals. Because of the challenges in ensuring accuracy and fairness, however, they have been unable to proceed.

The PCAST meeting was open to the public through a teleconference line. I called in and took rough notes on the discussion of the forthcoming report as it progressed. My notes on the comments of professors Susan Graham and Bill Press offer enough insight into the forthcoming report, however, that I thought publishing them today was warranted, given the ongoing national debate regarding data collection, analysis, privacy and surveillance. The following should not be considered verbatim or an official transcript. The emphases below are mine, as are the words in [brackets]. For an official record, look for the PCAST to make a recording and transcript available online in the future, at its archive of past meetings.


 

Susan Graham: Our charge was to look at the confluence of big data and privacy, to summarize current tech and the way technology is moving in the foreseeable future, including its influence on the way we think about privacy.

The first thing that’s very very obvious is that personal data in electronic form is pervasive. Traditional data that was in health and financial [paper] records is now electronic and online. Users provide info about themselves in exchange for various services. They use Web browsers and share their interests. They provide information via social media, Facebook, LinkedIn, Twitter. There is [also] data collected that is invisible, from public cameras, microphones, and sensors.

What is unusual about this environment and big data is the ability to do analysis of huge corpuses of that data. We can learn things from the data that allow us to provide a lot of societal benefits. There is an enormous amount of patient data, data about disease, and data about genetics. By putting it together, we can learn about treatment. With enough data, we can look at rare diseases, and learn what has been effective. We could not have done this otherwise.

We can analyze more online information about education and learning, not only MOOCs but lots of learning environments. [Analysis] can tell teachers how to present material effectively, to do comparisons about whether one presentation of information works better than another, or analyze how well assessments work with learning styles.
Certain visual information is comprehensible, while certain verbal information is hard to understand. Understanding different learning styles [can enable the development of] customized teaching.

The reason this all works is the profound nature of the analysis. This is the idea of data fusion, where you take multiple sources of information and combine them, which provides a much richer picture of some phenomenon. If you look at patterns of human movement on public transport, or pollution measures, or weather, maybe we can predict dynamics caused by human context.
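
As a small illustration of the data fusion Graham describes, here is a hypothetical sketch in Python; the sources, column names and values are invented, but it shows how joining independently collected data sets on a shared key yields a richer combined picture than any single source provides.

```python
# Hypothetical sketch of data fusion: three independently collected sources
# joined on a shared key (hour of day). All values below are invented.
import pandas as pd

transit = pd.DataFrame({"hour": [7, 8, 9], "riders": [12000, 18500, 9800]})
pollution = pd.DataFrame({"hour": [7, 8, 9], "pm25": [14.2, 22.8, 17.1]})
weather = pd.DataFrame({"hour": [7, 8, 9], "temp_f": [61, 64, 68], "precip_in": [0.0, 0.1, 0.0]})

# Fuse the sources on the shared key.
fused = transit.merge(pollution, on="hour").merge(weather, on="hour")

# The combined table supports questions none of the sources answers alone,
# such as how ridership co-varies with air quality and rain.
print(fused)
print(fused[["riders", "pm25", "precip_in"]].corr())
```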

We can use statistics to do statistics-based pattern recognition on large amounts of data. One of the things that we understand about this statistics-based approach is that it might not be 100% accurate if mapped down to the individual providing data in these patterns. We have to be very careful not to make mistakes about individuals because we make [an inference] about a population.

How do we think about privacy? We looked at it from the point of view of harms. There are a variety of ways in which results of big data can create harm, including inappropriate disclosures [of personal information], potential discrimination against groups, classes, or individuals, and embarrassment to individuals or groups.

We turned to what tech has to offer in helping to reduce harms. We looked at a number of technologies in use now and at a bunch coming down the pike. Several technologies in use have become less effective because of the pervasiveness [of data] and the depth of analytics.

We traditionally have controlled [data] collection. We have seen some data collection from cameras and sensors that people don’t know about. If you don’t know, it’s hard to control.

Tech creates many concerns. We have looked at methods coming down the pike. Some are more robust and responsive. We have a number of draft recommendations that we are still working out.

Part of privacy is protecting the data using security methods. That needs to continue. It needs to be used routinely. Security is not the same as privacy, though security helps to protect privacy. There are a number of approaches that are now used by hand that, with sufficient research, could be automated and used more reliably, so they scale.

There needs to be more research and education about privacy. Professionals need to understand how to treat privacy concerns anytime they deal with personal data. We need to create a large group of professionals who understand privacy, and privacy concerns, in tech.

Technology alone cannot reduce privacy risks. There has to be a policy as well. It was not our role to say what that policy should be. We need to lead by example by using good privacy protecting practices in what the government does and increasingly what the private sector does.

Bill Press: We tried throughout to think of scenarios and examples. There’s a whole chapter [in the report] devoted explicitly to that.

They range from things being done today with present technology, even though they are not all known to people, to our extrapolations to the outer limits of what might well happen in the next ten years. We tried to balance the examples by showing both the benefits, which are great, and the challenges they raise, including the possibility of new privacy issues.

In another aspect, in Chapter 3, we tried to survey technologies from both sides, covering both the technologies that are going to bring benefits and protect [people], and also those that will raise concerns.

In our technology survey, we were very much helped by the team at the National Science Foundation. They provided a very clear, detailed outline of where they thought that technology was going.

This was part of our outreach to a large number of experts and members of the public. That doesn’t mean that they agree with our conclusions.

Eric Lander: Can you take everybody through analysis of encryption? Are people using much more? What are the limits?

Graham: The idea behind classical encryption is that when data is stored, when it’s sitting around in a database, let’s say, encryption entangles the representation of the data so that it can’t be read without using a mathematical algorithm and a key to convert a seemingly meaningless set of bits into something reasonable.

The same technology, where you convert and change meaningless bits, is used when you send data from one place to another. So, if someone is scanning traffic on the internet, they can’t read it. Over the years, we’ve developed pretty robust ways of doing encryption.

The weak link is that to use data, you have to read it, and it becomes unencrypted. Security technologists worry about it being read during that short time.

Encryption technology is vulnerable. The key that unlocks the data is itself vulnerable to theft or getting the wrong user to decrypt.

Both problems of encryption are active topics of research, including how to use data without being able to read it. There is also research on increasing the robustness of encryption, so that if a key is disclosed, you haven’t lost everything and you can protect some of the data or the future encryption of new data. This reduces risk a great deal and is important to use. Encryption alone doesn’t protect, though.
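
To make the mechanics Graham describes concrete, here is a minimal sketch of symmetric encryption at rest; it is my illustration, using the third-party Python cryptography package rather than anything cited at the meeting. The stored record is meaningless bits without the key, and anyone who obtains the key, or the plaintext while the data is in use, can read it, which is exactly the weak link she notes.

```python
# Minimal sketch of symmetric encryption at rest, using the third-party
# "cryptography" package (pip install cryptography). Illustrative only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the secret that must itself be protected
f = Fernet(key)

record = b"patient_id=1234; diagnosis=..."
token = f.encrypt(record)     # ciphertext: meaningless bits without the key
print(token[:40])

# To analyze the data you have to decrypt it, so it exists in the clear in
# memory, and anyone holding the key can perform the same decryption.
print(f.decrypt(token))
```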

Unknown Speaker: People read of breaches derived from security. I see a different set of issues of privacy from big data vs those in security. Can you distinguish them?

Bill Press: Privacy and security are different issues. Security is necessary to have good privacy in the technological sense: if communications are insecure, they clearly can’t be private. Privacy goes beyond that, to the parties that are authorized, in a security sense, to see the information. Privacy is much closer to values; security is much closer to protocols.

The interesting thing is that this is less about purely technical elements — everyone can agree on the right protocol, eventually. These are things that go beyond that and have to do with values.

Boston Mayor Marty Walsh issues open data executive order; city council ordinance to come?


The City of Boston has joined the growing list of cities around the world that have adopted open data. The executive order issued yesterday by Mayor Marty Walsh has been hailed by open government advocates around the country. The move to open up Boston’s data has been followed by action, with 411 data sets listed on data.cityofboston.gov as of this morning. The EO authorizes and requires Boston’s chief information officer to issue a City of Boston Open Data Policy and “include standards for the format and publishing of such data and guidance on accessibility, re-use and minimum documentation for such data.”

The element on re-use is critical: the success of such initiatives should be judged based upon the network effects of open data releases and the resulting improvements to productivity, efficiency, city services, accountability and transparency, not the raw amount of data published online.
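
As an illustration of that kind of re-use, here is a hedged sketch in Python; the dataset identifier and the neighborhood field below are hypothetical placeholders, but the pattern of pulling machine-readable records from a city portal and summarizing them is where those network effects come from.

```python
# Hypothetical sketch of re-using an open data set from a city data portal.
# The dataset ID and the "neighborhood" field are placeholders, not a real
# Boston data set; the $limit parameter follows the Socrata-style convention.
import requests

URL = "https://data.cityofboston.gov/resource/EXAMPLE-ID.json"  # hypothetical

response = requests.get(URL, params={"$limit": 1000}, timeout=30)
response.raise_for_status()
records = response.json()

# Count records by neighborhood to get a quick picture of where they cluster.
counts = {}
for row in records:
    neighborhood = row.get("neighborhood", "unknown")
    counts[neighborhood] = counts.get(neighborhood, 0) + 1

for name, total in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {total}")
```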

Notably, Boston City Councilor-at-Large Michelle Wu also filed a proposal yesterday morning to create an open data ordinance that would require city agencies and departments to make open data available, codifying the executive order into statute as San Francisco, New York City and Philadelphia have done.

“Government today should center on making data-driven decisions and inviting in the public to collaborate around new ideas and solutions,” said Wu, in a statement.  “The goal of this ordinance is greater transparency, access, and innovation.  We need a proactive, not a reactive, approach to information accessibility and open government.”

 

Notably, she posted the text of her proposed open data ordinance online on Monday, unlike the city government, and tweeted a link to it. (It took until today for the city of Boston to post the order; city officials have yet to share it on social media.)

“Boston is a world-class city full of energy and talent,” said Wu. “In addition to promoting open government, making information available to the fullest extent possible will help leverage Boston’s energy and talent for civic innovation. From public hackathons to breaking down silos between city departments, putting more data online can help us govern smarter for residents in every neighborhood.”

As long-time readers know, I lived in Boston for a decade. It’s good to see the city government move forward to making the people’s data available to them for use and reuse. I look forward to seeing what the dynamic tech, financial, health care, educational and research communities in the greater Boston area do with it.

EXECUTIVE ORDER OF MAYOR MARTIN J. WALSH

An Order Relative to Open Data and Protected Data Sharing

Whereas, it is the policy of the City of Boston to practice Open Government, favoring participation, transparency, collaboration and engagement with the people of the City and its stakeholders; and
Whereas, information technologies, including web-based and other Internet applications and services, are an essential means for Open Government, and good government generally; and
Whereas, the City of Boston should continue, expand and deepen the City’s innovative use of information technology toward the end of Open Government, including development and use of mobile computing and applications, provision of online data, services and transactions; and
Whereas, the City of Boston also has an obligation to protect some data based upon privacy, confidentiality and other requirements and must ensure that protected data not be released in violation of applicable constraints; and
Whereas, clarification and definition of open data, privacy, security requirements, interoperability and interaction flows is necessary for the City’s Open Government agenda;
NOW THEREFORE, pursuant to the authority vested in me as Chief Executive Officer of the City of Boston by St. 1948, c. 452 Section 11, as appearing in St. 1951, c. 376, Section 1, and every other power hereto enabling, I hereby order and direct as follows:

1. The City of Boston recognizes Open Government as a key means for enabling public participation, transparency, collaboration and effective government, including by ensuring the availability and use of Open Data, appropriate security and sharing of Protected Data, effective use of Identity and Access Management and engagement of stakeholders and experts toward the achievement of Open Government.
2. The City of Boston Chief Information Officer (“CIO”), in consultation with City departments, is authorized and directed to issue a City of Boston Open Data Policy.
a) The Open Data Policy shall include standards for the format and publishing of such data and guidance on accessibility, re-use and minimum documentation for such data;

b) The Open Data Policy shall include guidance for departments on the classification of their data sets as public or protected and a method to report such classification to the CIO. All departments shall publish their public record data sets on the City of Boston open data portal to the extent such data sets are determined to be appropriate for public disclosure, and/or if appropriate, may publish their public record data set through other methods, in accordance with API, format, accessibility and other guidance of the Open Data Policy.
3. The City of Boston CIO, in consultation with City departments, is authorized and directed to issue a City of Boston Protected Data Policy applicable to non-public data, such as health data, educational records and other protected data;

a) The policy shall provide guidance on the management of Protected Data, including guidance on security and other controls to safeguard Protected Data, including appropriate Identity and Access Management and good practice guidelines for compliance with legal or other rules requiring the sharing of Protected Data with authorized parties upon the grant of consent, by operation of law or when otherwise so required;
b) The policy shall provide a method to ensure approval by the Corporation Counsel of the City of Boston to confirm Protected Data is only disclosed in accordance with the Policy.
4. This Executive Order is not intended to diminish or alter the rights or obligations afforded under the Massachusetts Public Records Law, Chapter 66, Section 10 of the Massachusetts General Laws and the exemptions under Chapter 4, Section 7(26). Additionally, this Executive Order is intended to be interpreted consistent with Federal, Commonwealth, and local laws and regulations regarding the privacy, confidentiality, and security of data. Nothing herein shall authorize the disclosure of data that is confidential, private, exempt or otherwise legally protected unless such disclosure is authorized by law and approved by the Corporation Counsel of the City of Boston.
5. This Executive Order is not intended to, and does not, create any right or benefit, substantive or procedural, enforceable at law or in equity by any party against the City of Boston, its departments, agencies, or entities, its officers, employees, or agents, or any other person.
6. The City of Boston CIO is authorized and directed to regularly consult with experts, thought leaders and key stakeholders for the purpose of exploring options for the implementation of policies and practices arising under or related to this Executive Order.