Apple, Secrecy, and Paranoia

While we’re on the subject of trade secrets and paranoia: last week, Gizmodo got its hands on a prototype of the new iPhone, allegedly left in a Redwood City bar, picked up by someone, and sold to Gizmodo for $5,000. A criminal investigation is now under way, not about trade secrets, but about whether Gizmodo’s acquisition of the phone amounts to theft. It’s like a trade secret case on steroids. CNET reports that a San Mateo judge granted California’s Rapid Enforcement Allied Computer Team (REACT) a warrant to raid Gizmodo editor Jason Chen’s house and seize “four computers and two servers.” I know this is a bit outside the scope of this class’s readings, but the most interesting part of this will be the question of First Amendment protection for online journalism. According to CNET, “Any prosecution would be complicated because of the First Amendment’s guarantee of freedom of the press: the U.S. Supreme Court ruled in 2001 that confidential information leaked to a news organization could be legally broadcast, although that case did not deal with physical property and the radio station did not pay its source.”

When does the oppressed become the oppressor?

I remember reading a TechCrunch article around the start of the year about how the EFF defended three anti-H-1B websites that were taken down because of defamatory messages against Apex Technology Group, an IT staffing and consulting services firm. [For those not in the know, an H-1B visa enables foreign nationals to work full-time at American corporations.] Here’s a link to the article: http://tcrn.ch/cUsg3T

There was some shocking language on display, and as the article demonstrates, it was decidedly racist and promoted hatred. Why, one of the posts was an open threat: the author said he would “stop blogging” if a senior official at an Indian IT firm was killed. Armed with some (basic) legal knowledge, I decided to visit the EFF blog post to better understand their stance.

The core of their argument is this: should an entire website be taken down due to a few defamatory messages? The First Amendment should apply unless there is a “clear and present danger or a serious and imminent threat to a protected competing interest”, or less restrictive measures are not available. Aren’t these serious threats, though? Or do they suggest that these exceptions don’t apply in the case of foreign nationals?

A reference is also made to Section 230, arguing that websites should be protected from claims arising from information published by another “information content provider”. However, I am not convinced that the propaganda websites in this example can be classified as an “access software provider”, since they are essentially a forum for hate speech. Which also raises the question: when does free speech cross over to the dark side and exhibit the properties of hate speech?

It seems as though the EFF’s concern was based not so much on the actual content of the posts as on the fact that the entire website was taken down; the usual remedy has been to remove only the questionable content. However, the author of the EFF post (which can be found here: http://bit.ly/5S76b6) admitted that he hadn’t even read the disputed content.

Our readings for the Freedom of Expression lecture touched on the issue, but none of them examined it in detail. The proceedings of the ACLU v. Mukasey case pointed to how relative the question of objectionable content can be. Berman & Weitzner suggest that interactive media means a lesser need for regulation, but doesn’t user-controlled content introduce a bias of its own? The UN Joint Declaration asks us to differentiate between incitement and glorification of terrorism. If we adhere strictly to the definition of incitement, we can find some justification for the NJ court’s ruling against the websites. However, the distinction may not always be that clear: the line between simply expressing one’s views and encouraging hatred is blurred, and the rise of interactive media only means such cases will become more commonplace.

The ACTA Threat

By Yoshang Cheng, David Rolnitzky, Lee Schneider, and Jessica Voytek

A coalition of governments is secretly meeting in an attempt to form a far-reaching agreement on copyright infringement. The agreement, called the Anti-Counterfeiting Trade Agreement (ACTA), would establish international standards for intellectual property rights enforcement among participating countries, including the US, Canada, and several members of the EU. It would formalize a process for finding third parties liable for the infringements of their subscribers, and for limiting their liability if they remove the content. Although the actual wording of the agreement has not been officially published, reliable sources report that negotiations are occurring behind closed doors, without public input. The agreement is structured similarly to the North American Free Trade Agreement (NAFTA), except that it would create rules and regulations regarding private copying and copyright law. Recently a document containing the ACTA provisions related to the Internet was leaked, which has spurred more discussion.

ACTA has come under fire from a number of organizations, including the Electronic Frontier Foundation and Public Knowledge, which argue that it raises serious concerns for consumers’ privacy, civil liberties, innovation, the free flow of information on the Internet, and the sovereignty of developing nations. Increased liability for ISPs (Internet Service Providers) could result in increased monitoring of users, which could in turn impinge on their civil rights. Although developing nations have not been involved in the negotiations, chances are that their compliance with the regulations will be required as a condition of future trade agreements, effectively limiting the ability of developing countries to choose the policy options best suited to them.

The effect of this secret agreement on ISPs could be far-reaching. The new provisions would require ISPs to take down possibly infringing content, and to delete the accounts and subscriptions of accused users, without any judicial oversight. As a result, the so-called “chilling effect” documented by Urban and Quilter in their 2005 study could be expanded to a global scale. The chilling effect describes the practice of using DMCA takedown notices to improperly limit the speech of others, usually for commercial or political gain. According to the study, an astonishing 57% of DMCA takedown notices sent to Google targeted apparent competitors. ISPs, fearing liability under this new international regime, would be compelled to comply with such notices without any kind of judicial review.

Civil rights, including freedom of expression, may be affected by this agreement. In the US, the fair use doctrine was adopted to mitigate the tension between freedom of expression and copyright’s restrictions on expression. Fair use is not a standard component of copyright law in every country; in a society without fair use, any unauthorized use would be infringement. “If Hollywood could order intellectual property laws for Christmas what would they look like? This is pretty close,” said David Fewer, staff counsel at the University of Ottawa’s Canadian Internet Policy and Public Interest Clinic, in an article posted on Canada.com. Such an agreement would unduly benefit corporations at the expense of civil liberties.

The implementation of these new rules raises civil rights concerns. For example, media outlets in Ireland and Canada have described scenarios in which people would be searched at international borders by security agents turned copyright enforcers. Such searches, and the potential seizure of any items that “infringe” copyright, such as ripped CDs and movies, could be conducted without probable cause.

The agreement also has potential policy consequences both within and outside the negotiating countries. As stated earlier, it contains a provision to create a legal regime that would “encourage ISP’s to cooperate with right holders in the removal of infringing material” by giving them safe harbor from certain legal threats. Though the US already has a takedown-notice provision, the agreement could give other coalition governments new leeway to bypass the normal legislative process in their countries in order to get DMCA-like legislation on the books. Furthermore, tying such legislation to foreign treaties may constrain Congress’s ability to modify the DMCA in the future.

National sovereignty may also be affected by ratification of this agreement. Developing nations (such as China, India, and Indonesia) have not been included in the negotiations but may still be required, indirectly, to comply with them. Since developing nations rely on international trade agreements for their well-being, it would be easy for the ACTA nations to require compliance as a prerequisite to trade. These laws would then supersede the ability of developing nations to craft laws suited to their own countries.

The effects of ratifying ACTA in its current state would be far-reaching. The application of DMCA law in the US, specifically the safe harbor provisions and the takedown-notice process, is easily manipulated for commercial or political gain. The rights of corporations are placed above the rights of citizens, and due process is turned on its head: accusation alone is enough to force removal of the offending content. The agreement would restrict the choices available to signatory and developing nations alike. With so much at stake concerning the rights of individuals and the implications for international policy, it is troubling that these negotiations are being conducted in secret.

The changing nature of regulation and speech online

by Heather Ford, Thomas Schlucter and Emily Wagner

The tussle between the government and civil liberties groups over how to regulate the Internet often has more to do with the changing role of government, and its struggle against that change, than with any specific inability of the government to understand how the Internet is “different” technically.

Before the Internet, regulating freedom of expression was characterised by well-defined roles for the government as rule-maker and enforcer. Both roles were easily documented, and their effectiveness could be clearly measured. In the Internet era, the government seems pushed toward softer roles as educator and advocate, leaving the choice of regulation to individuals and corporations.

For example, in COPA, the problem wasn’t simply that the government tried to transplant the old analogue machinery of regulation and enforcement (in this case, substituting age verification mechanisms for the blinder racks used for magazines), but rather that it relied on the old framework in which it makes the rules centrally in order to ensure enforcement.

As Greenberg narrates: ‘The court reasoned that “unlike COPA there are no fines or prison sentences associated with filters which would chill speech.”‘ It is exactly this loss of centralized enforcement, and of the statistics that flow from it, that the government seems to fear. How will it know whether it is being successful? How will officials justify their role in government if they can’t persuade the populace that they have allayed its fears (even ineffectively)? If people can govern themselves and achieve freedom of expression through technology, why do they need the government?

It is this central question that forces governments to regulate the Internet as they did old media – not because they don’t get the technology, but because they don’t get their changing role.

The changing nature of speech

The advent of the commercial Internet has brought not only a new technical infrastructure (and the struggle to keep up with it through legislative efforts); it has also changed the nature of speech. Two key characteristics of pre-Internet speech were attribution and limited reach. Whether you look at Habermas’s vision of a deliberative public culture in 18th-century Europe, the proverbial speaker on a soapbox, the circulation of newspapers in the formative phase of American democracy, or the channeled architecture of a television network, speech delivered in these forms is always attributable to someone and reaches a limited audience.

These examples nonetheless differ in how attribution and reach operate; the key difference is between mediated and unmediated speech. With unmediated speech, the case could not be clearer: soapbox speakers’ words reach only as far as their voices do. They embody the speech as they deliver it; it is always attributable to them. (It would be an interesting experiment to try to exercise your right of speech anonymously in public. Many countries guard against this by banning face coverings at demonstrations.)

Mediated speech, by comparison, reduces attribution while expanding reach. The content of newspapers or television programs cannot be attributed to individuals as easily as embodied speech can, and distribution through media extends the potential audience by several orders of magnitude. From a practical perspective, though, the principles of attribution and limited reach remain intact: even if it is not possible to find the one person responsible for a television news show, we can attribute that speech to the institution that broadcasts it. And even if newspapers circulate widely throughout a nation, the physical nature of the medium and its internal logic limit its reach to those territories where a reasonable logistical effort can be made to deliver it.

The Internet changes this equation entirely. Where the architecture that Berman and Weitzner argue for becomes reality, there is the potential for a medium that delivers speech with no attribution and potentially unlimited reach. The First Amendment was conceived to further a set of societal goals that we have come to view as essential to democracy, but the conditions of speech it was designed for are changing. The struggles over regulating the Internet reflect, in my opinion, this fundamental shift. Whether democratic structures can be fostered by speech that is freer (in many respects) than ever before remains to be seen. Those who try to regulate the Internet clearly see a danger that uncontrolled speech might erode the foundations of society.

There is evidence to suggest that the Internet may not necessarily be a medium of limitless reach.

The Berman and Weitzner article emphasized the potential pitfalls of a user-controlled Internet over those of an open-access one, and I think we should be cautious not to assume that open access is a foregone conclusion. While the end-to-end architecture of the Internet is well positioned to support a decentralized, open communication medium, economic and skill barriers still stand between where we are now and the democratic ideal of freedom of expression and an abundance of ideas on the Internet.

First, this may seem obvious, but someone still has to install the wires, fiber-optic cable, or wireless access points for any kind of network to exist, and there must be devices at the end points to connect to it. What I found interesting about overcoming these obstacles was the different take Berman and Weitzner had compared with the U.N.’s Joint Declaration. Berman and Weitzner favored open technical standards to allow for a multiplicity of devices and to avoid restrictive licenses, but left the nuts and bolts of network architecture to the telecom companies and market forces, whereas the U.N.’s declaration took a much more active stance, stating that the international community should provide assistance programs to help provide public access. It made me wonder whether the U.S. government would ever subsidize telecom the way it subsidizes the postal service (which, news to me, has been experiencing solvency problems for several years). If you count computer terminals in public libraries, I suppose we already do. Providing the infrastructure for the free exchange of ideas is neither an insignificant nor an inexpensive task, especially at the fringes of the network.

Second, and this may be another obvious point, not everyone with access to a home or public Internet connection is able to self-publish ideas online. Does this mean we also need to provide technical “translators” to write web pages, as we provide language translators for voters during public elections? Wouldn’t hiring web developers at every public library rival the postal system in cost for such skilled workers? My point is that architecting a decentralized, open-access Internet is necessary, but not sufficient, for enabling freedom of expression online.