EFF statement: Facebook TOS violations != Criminal violations

I stumbled upon this post today and found it very relevant both to our first assignment and to the Lori Drew case we read a few weeks ago.

Power Ventures built a service that aggregates information from different social networks and provides users with a single view. To get information from Facebook, however, it logs in with the user’s password and scrapes the data: basically what the company in our first assignment was supposed to do.

Facebook is suing Power Ventures for violating its TOS by logging in through “automated means”. The catch is that Facebook is also trying to penalize Power’s users under the California Penal Code for access “without permission”. This looks very similar to the Drew case, and it is another example of a company trying to establish the definition of a crime in its TOS. This time, however, the argument rests on the California Penal Code instead of the CFAA.

The EFF, however, is urging a federal judge to dismiss the claim, because it would set a terrible precedent for the scope of California criminal law.

I’m not an expert on how case law from one jurisdiction applies in another, but I feel the judge could easily find a parallel between the two cases. Hopefully he will agree with the district court in Drew, and that will put up a huge barrier to Facebook’s claims.

FCC “Third Way” Compromise Challenged

As technology continues to evolve ahead of regulation, the FCC is trying to strike a balance between the current lack of regulation and reclassifying broadband operators from Title I, as an “information service”, to Title II, as a “telecommunications service”. The latter would have the unintended consequence of subjecting broadband providers to extensive Title II regulations not suited to that industry. The FCC’s “Third Way” proposal suggests that a limited subset of Title II controls be applied specifically to broadband operators and, more generally, clarifies FCC authority over wired and wireless broadband.

The challenge to the “Third Way” is the latest in a series of tussles between the regulator and the industry over net neutrality. In Comcast v. FCC, the DC Circuit overturned the agency’s 2008 ruling against Comcast’s practice of discriminating against packets from peer-to-peer services such as BitTorrent, holding that the FCC did not have “any statutorily mandated responsibility” to enforce network neutrality. Without Congress further clarifying the role and scope of the FCC, the agency’s ability to pursue its National Broadband Plan, which outlines a vision for the future of the US Internet, is impaired.

PC World article on the challenge:
http://www.pcworld.com/businesscenter/article/195848/fcc_third_way_compromise_challenged.html
FCC’s legal framework on the “Third Way”:
http://www.broadband.gov/the-third-way-narrowly-tailored-broadband-framework-chairman-julius-genachowski.html
Connecting America: The National Broadband Plan:
http://www.broadband.gov/

Better Late Than Never!

After learning about computer ethics and how important it can be to apply ethical approaches in our day-to-day use of computer technology as computer professionals, I was wondering whether India (yes, I come from India :O) had any such ethical policies for computer professionals. Sadly, I couldn’t find any, but I did come across this recent news article about a recommendation report published by Nasscom on Ethics and Corporate Governance.

In India, the service sector accounts for more than 50% of total GDP, and IT services and outsourcing play a major role in it. After the Satyam fiasco in January 2009, I think it is high time that Indian companies put strict policies in place to avoid such corporate disasters in the future, as such a crisis could shake foreign customers’ confidence in other Indian IT outsourcers too.

This report recommends approaches similar to what we studied in class, like having whistleblower policies, high standards of probity and corporate governance, and a code of conduct with customers, including in the areas of data security and protection of intellectual property.

In the end, I’m happy to see at least a small stepping stone in this direction, if not a complete policy in place. Hopefully it will encourage Indian companies to implement much-needed policies on ethics and corporate governance and help the industry regain its lost credibility.

Cyberlaws in Malaysia

Coming into this class, I didn’t really have a clear grasp of the complexity and issues faced when dealing with “cyber” laws and policy. As a student from Malaysia, I think the USA has done a lot to try to make sure that the internet is not a free-for-all, do-as-you-please venue. The class has also made me think about cyber laws back home in Malaysia and how they differ.

The main body of Malaysian cyber law seems to be centered around the Communications and Multimedia Act of 1998 (http://www.msc.com.my/cyberlaws/act_communications.asp).

So what’s different? The Malaysian laws seem much more draconian. There is no distinction between “offensive” and “false” material, so if you post content online that is true but “offensive” because it defames someone, you’re in trouble. Surprisingly, this law has only been invoked a few times since its inception. The first was in March 2009, when six bloggers were charged with insulting the king of a Malaysian state (http://thestar.com.my/news/story.asp?sec=nation&file=/2009/3/12/nation/20090312194041#). Free speech isn’t held as highly in Malaysia as it is here in the USA.

More recently, a student was charged under this law for posting hateful comments about a certain race on his Facebook account (http://www.theborneopost.com/?p=4936).

Before these cases, I (and most probably many other Malaysians) had no idea that the country even had such laws. The internet is a very different arena for the people there, with content generally uncensored by the government.

Marketing, Internet and Privacy

WSJ Article

The article linked above discusses new privacy legislation aimed specifically at advertisers and Internet companies. The legislation, proposed by Rep. Rick Boucher (D., Va.) and Rep. Cliff Stearns (R., Fla.), will be made public tomorrow and will sit for two months before being introduced, to allow time for feedback and tweaking.

In its current form, it seems the bill addresses the need for websites to clearly disclose to users how their information is collected, used and shared. Further, the bill suggests users should have an opportunity to opt out if they do not want their information to be collected or used in certain ways. And not surprisingly, the bill would grant authority to the FTC to enforce its provisions.

So, please read and enjoy yet another example of how our discussions in class readily apply to the world around us.

Apple, the CIA & Shield Law

There has been further progress on the iPhone/Gizmodo debacle that Jess wrote about last week, and on the implications it could have for more serious journalistic concerns. On the Media reported April 30th that, in order to halt further investigation into the property of Jason Chen, the Gizmodo blogger who had much of his computing equipment seized, the Shield Law was invoked so that he would not need to reveal his sources or how he came to receive the next-generation iPhone prototype. I thought this would be especially relevant to talk about because today is, of course, World Press Freedom Day, a UN-supported day of awareness for governments to uphold the rights of the press.

In the segment, On the Media features Jonathan Turley, a law professor at George Washington University, who addresses whether or not Jason Chen should be eligible for Shield Law protection and how this could affect journalists pursuing more serious matters, such as James Risen. Last week, New York Times reporter and author James Risen was given a grand jury subpoena to reveal his sources for part of a book he wrote concerning the CIA. Professor Turley argues that whether or not Jason Chen is able to use the Shield Law, his case could affect journalists like Risen, who are actually reporting on subjects important to American freedoms and who should potentially be protected.

Turley argues that the Shield Law should probably not apply to a blogger like Jason Chen. Chen supposedly paid five thousand dollars for the phone, and under California Penal Code Section 485, if an item is found, the owner is known, and no proper effort is made to return it, the item is considered stolen. Turley believes that cases like Chen’s could weaken the power of the Shield Law, which would be detrimental to our freedom of speech and freedom of the press. I’m inclined to agree, and don’t believe what Jason Chen and Gizmodo did should necessarily be protected. I’m a frequent reader of AppleInsider, MacRumors, Gizmodo and Wired, but I still believe that what Jason Chen was doing could have been crossing the line. I don’t believe enough effort was put into returning the phone on the part of its original finder; it is known that he shopped the phone around instead of trying to return it to the bar where it was found, or to Apple.

Similar to Turley, I also believe that if Jason Chen can use the fact that he is a blogger to halt the investigation into how he obtained this “stolen” good, then why can’t anyone, or more importantly any criminal, do the same thing? The real question is: who is actually a journalist? Twitter is a microblogging service and I definitely have followers I don’t know; does that make me a journalist? I use Facebook all the time and update my status; am I technically reporting then? Even this blog post I’m writing now can be viewed by anyone on the web; does that make me a writer? I’d say I am not, and I hope that the Shield Law is reserved for important cases like Risen’s rather than Chen’s.

Ethics, Policy, and the Information “Professional”

In James Moor’s “What Is Computer Ethics?,” he notes that computers are “special technology” raising equally “special” ethical issues.  New technology continues to confer new capabilities on humans, but it does so within a policy vacuum regarding how we should use these new capabilities, whose effects may fall merely on the individual or may reverberate society-wide.  Computer ethics, then, must consider both personal and social policies on the ethical use of the technology (Moor includes software, hardware, and networks under the “computer” rubric).

What’s so “special,” then, about computers?  Many new technologies are considered on some scale to be revolutionary: are such claims, when made about computers, hyperbole, or is there some basis to them?  Moor argues that a few features present themselves prominently.  First, computers are affordable and abundant (within our society, certainly, and most other developed societies).  Additionally, they are increasingly integrated into many other devices which are not directly or intuitively tied to computer technology.  But abundance and cheapness, in themselves, are not sufficient characteristics for a revolutionary technology, or pencils, Moor points out, would hold a higher place in history.  What makes computers revolutionary is what he calls “logical malleability”: the capacity to be used modularly as components within many other technologies.  Analogous to the way steam engines were used in a “plug-and-play” (relatively speaking, of course) way to power the machines of the Industrial Age, computers are used modularly to bring intelligence to those machines in the Information Age.  This is what revolution looks like.  Importantly, though, while computers bring great logical power to a variety of media, we should not view their operations in purely logical/numerical terms.  As computing power and utilization increase, so will the importance of our conceptualization of computing “work.”  To use a crude example: a packet sniffer at one level is simply copying bit patterns, but at another level, the nature and impact of its work is completely different and perhaps more relevant in human terms.
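To make the packet-sniffer example concrete, here is a minimal Python sketch; the “packet” is a hypothetical, hand-written HTTP request, not a real capture, and the point is how the same bits read at two levels of description:

```python
# Two views of the same bits, per Moor's point about how we conceptualize
# computing "work". The packet below is a hypothetical, hand-written HTTP
# request, not a real capture.

packet = (b"POST /login HTTP/1.1\r\n"
          b"Host: example.com\r\n"
          b"\r\n"
          b"user=alice&password=hunter2")

# Level 1: the sniffer "is simply copying bit patterns" -- a neutral dump.
print(packet.hex(" "))

# Level 2: the same bytes, read as a login attempt -- now the copy is an
# intercepted credential, and its human impact looks entirely different.
headers, _, body = packet.partition(b"\r\n\r\n")
fields = dict(pair.split(b"=", 1) for pair in body.split(b"&"))
print("captured password:", fields[b"password"].decode())
```

Nothing changes between the two print statements except the level at which we describe the work, which is exactly Moor’s point about conceptualization.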

Such reconceptualizations will take on increased necessity, Moor asserts, not just because of computing’s impact on human/social activity but also in light of its ability to transform those activities themselves.  Not much imagination is required to describe how computers have changed work, school, and social life.  But computers are transforming activities of broader scope and import as well.  Moor shows that computers’ roles in finance, education, and voting transform the essence of the activities, raising questions about the very nature of money, education, and even the concept of a fair election in a world where votes are counted and winners projected before significant numbers of voters have cast their ballots.  Logical malleability ensures that technology will inexorably expand its reach, creating situations to which existing policy cannot map.

Moor admits that his argument rests on a vision of the Information Revolution that some may not share, so he suggests a more ominous example, which he calls the invisibility factor.  Privacy abuses become all the more prevalent in the networked world, but at the same time, more difficult for the average person to detect.  He also warns of biases within code itself that benefit one party over another, citing the “SABRE” airline reservation system that made suggestions beneficial to American Airlines by default.  Lastly, Moor highlights invisible complex calculation as perhaps the greatest possible cause for concern.  As we rely increasingly on computers to do our calculations for us, we make judgments and choices about fallibility and traceability.  Without policies, we leave such interactions up to fate; without computer ethics, we have no framework for policy making.

Morris and Davidson’s “Policy Impact Assessments” describes the importance of addressing the public policy impacts of standards.  Public policy advocates lack the resources to assess and address the impacts of every standards body: the large time commitment required of individuals, the scarcity of people with both the technical and public policy experience required, the long “time horizons” of standards development, and the sheer number of technical standards bodies all stand in the way.  Given these constraints, as well as the need for streamlined, “systematic approaches” to the consideration of public policy impacts that incorporate public policy concerns early in the design phase, the article suggests that it may be helpful in some circumstances for public policy people to create tools that standards bodies can use internally to identify areas where there may be public policy impacts.  One example is the draft Public Policy Considerations for Internet Design Decisions created for the IETF, which suggests that technologists consider questions related to the control of information (such as the creation of chokepoints easily amenable to surveillance and censorship), privacy (such as the use of persistent identifiers), and accessibility (such as for disabled individuals).  Tools created for technologists should be designed to “look[] at technical design issues from the perspective of the technology designer, not the public policy advocate,” because they will be executed internally.

In addition to suggesting the creation of policy impact assessment tools customized to the needs of individual standards bodies, the article recommends more generally that standards bodies solicit the input of public policy advocates early in the design process.  In addition, the authors observe that there is a need to raise awareness among technologists about “the social context and implications of their work,” perhaps through education initiatives (the I-School!).

Although both “Tussle in Cyberspace” and the Morris and Davidson article address the tussle between technology and policy, Morris and Davidson place a greater emphasis on identifying design issues and turning them over to public consideration early in the design phase, rather than designing around the tussle — perhaps emphasizing issues that can’t be effectively designed around.  Even if the IETF claims that “we don’t do public policy,” the article suggests that public policy is inevitably impacted by IETF standards.  We see the “Tussle” continuing to play out in issues such as network neutrality, privacy, and more.

Morris and Davidson mention IPv6 as an example of “the ways that technical design decisions can directly impact public policy concerns,” in which people outside the organization pointed out the privacy implications of generating a unique identifier tied to the user’s Ethernet network card. By contrast, P3P is mentioned as an example where “public interest participation has been a part of a technical design effort from its inception.”
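To see why that identifier worried privacy advocates, here is a short sketch of the modified EUI-64 scheme (RFC 4291, Appendix A) that early IPv6 stateless autoconfiguration used to build an interface identifier from the Ethernet card’s MAC address; the MAC value below is made up:

```python
def eui64_interface_id(mac: str) -> str:
    """Build an IPv6 interface identifier from a MAC address using the
    modified EUI-64 scheme (RFC 4291, Appendix A)."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                                  # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]     # wedge FF:FE into the middle
    return ":".join(f"{eui64[i] << 8 | eui64[i + 1]:x}" for i in range(0, 8, 2))

# A made-up MAC address for illustration: the derived identifier is the
# same on every network the machine joins, so its traffic is linkable.
print(eui64_interface_id("00:1b:44:11:3a:b7"))   # -> 21b:44ff:fe11:3ab7
```

Because the MAC address is burned into the hardware, the resulting identifier follows the machine onto every network it visits; the later privacy extensions (RFC 4941) addressed this with temporary, randomized identifiers, which is the remedy alluded to below.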

However, even with initial public interest participation in P3P, and attempts to remedy the public interest issues in IPv6, both seem to have flaws from a usability/social point of view. On some platforms, IPv6 continues to require onerous interventions by users to avoid potentially non-anonymous identifiers, and P3P doesn’t seem to be as widely used as people initially expected. According to EPIC, P3P may not have been successful because it is too complicated and the defaults don’t provide adequate privacy protections. If “user choice is a fundamental public policy concern,” as described in the “Tussle in Cyberspace” article, then attempts to address public policy impacts will need to incorporate usability concerns. Perhaps issues related to usability and the social world are yet another kind of expertise that public policy advocates need to bring to the table. However, as Morris and Davidson point out, there is no perfect way to address public policy impacts, because there is no way to know what all the potential impacts will be…

So how is an information professional to deal with ethical dilemmas in his/her daily work? In “Paramedic Ethics for the Information Professional,” Collins and Miller propose a “paramedic” method of ethics by which computer professionals can quickly and systematically assess all the relevant factors of an ethical problem that arises in their work. The authors call their approach “paramedic ethics” because it functions much like what paramedics do when they respond to an emergency. Paramedics are not doctors, but they are trained to quickly assess an emergency and make good decisions, stabilizing the problem until they can deliver a patient to better-trained personnel. Similarly, the paramedic approach to ethics is not meant to replace a more formal or rigorous study of ethics, but rather serves as a toolkit that computer professionals can use to organize the relevant facts and actors in an ethical problem in order to come up with the best possible solution for all parties involved.

The method looks like an algorithm so that computer professionals can approach novel ethical dilemmas with a language and form they are familiar with. The method makes use of the concepts of vulnerability and opportunity. Simply speaking, a vulnerability is anything that results in a loss for one of the parties if a particular solution to an ethical dilemma were carried out, while an opportunity represents a gain. So higher pay, a sense of well-being or integrity, having a good reputation, etc. are all opportunities, while the loss of control over one’s own work is a vulnerability. The paramedic approach to ethics also centers around the different obligations that the constituents in an ethical dilemma have towards one another. According to the authors (who draw from theories of deontological and utilitarian ethics), the best solution to that dilemma is the one where the greatest number of those obligations can be fulfilled.

In a nutshell, the paramedic method takes a user sequentially through a series of phases. Phase 1 involves gathering data. The individual faced with an ethical problem starts by listing all the possible alternatives for a decision or set of decisions he must make. The article presented the example of George, an electrical engineer working for an aerospace company. George headed the development of a computerized control system for a military aircraft. He became convinced that there was a fundamental flaw in the system’s design that could potentially lead to a crash if not addressed. However, George’s superiors were convinced that the system was safe because it had passed required tests. George was thus faced with the pressure of signing off on the system; not doing so would cause delays that might make the company miss the contracted delivery date. In addition, a non-compliant George could see his duties reassigned to someone else.

Using the example of George, Phase 1 of the paramedic process would involve him listing out all the alternatives available to him. He could either sign off on the project or delay it, and in either case he could either publicly object to the company or publicly comply with it. Phase 1 also asks George to determine all the obligations and rights between all possible parties in this situation (and create a diagram if that helps). For example, George has an obligation to himself to keep his integrity and an obligation to support his family. His company has a right to his loyalty.

Phase 2 of the process asks users to assess how each alternative affects each of the parties involved. The article presented a series of matrices that helped to clearly organize all the data in this phase of the process. The matrices enabled the reader to quickly see, for example, that if George signed off on his project, he would not keep his integrity nor would he be fulfilling his obligation to the pilot testing his system, but he would be fulfilling his obligation to support his family (by keeping his job).

Phase 3 of the paramedic process asks the user to negotiate an agreement. By applying social contract ethics, whereby all parties come to a consensus, the user must step into the shoes of all the other parties and ask whether or not he could live with a particular solution to the ethical problem at hand. In this way, the user might discover new alternative solutions during this step of the paramedic process. In our example case, social contract analysis might yield an additional possible solution in which George’s company, in return for his signing off, agreed to inform the test pilots about possible airplane malfunctions.

Finally, in Phase 4, the user applies deontological and utilitarian ethics in order to come to a conclusive decision. Deontological ethics focuses on the decision by assessing whether it violates or meets the obligations of the parties involved; even when a decision affects multiple parties in the same positive or negative direction, it might not do so to the same degree. Utilitarian ethics, on the other hand, seeks the overall best solution: the one that maximizes opportunities and minimizes vulnerabilities for the most people. The result of Phase 4 should be a ranking of all the possible alternatives from best to worst in order to judge the best possible solution. The authors note that in the best case one alternative stands out as clearly best, but otherwise the user must choose one according to his judgment.
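Since the method is deliberately algorithm-like, here is a toy Python rendering of its bookkeeping; the parties, alternatives, and +1/-1 scores are invented for George’s case, and the real method calls for human judgment at every phase, not just arithmetic:

```python
# A toy rendering of the paramedic method's bookkeeping. The parties,
# alternatives, and +1/-1 scores are invented for George's case; the real
# method calls for human judgment at every phase, not just arithmetic.

alternatives = {
    # impact per party: +1 = obligation fulfilled (an opportunity),
    #                   -1 = obligation violated (a vulnerability)
    "sign off quietly":           {"George": -1, "family": +1, "company": +1, "pilots": -1},
    "sign off, inform pilots":    {"George": +1, "family": +1, "company": +1, "pilots": +1},
    "refuse and object publicly": {"George": +1, "family": -1, "company": -1, "pilots": +1},
}

# Phase 4 (utilitarian pass): rank alternatives by net obligations met.
ranking = sorted(alternatives, key=lambda alt: sum(alternatives[alt].values()),
                 reverse=True)
for alt in ranking:
    print(f"{sum(alternatives[alt].values()):+d}  {alt}")
```

Running this ranks the negotiated “sign off, inform pilots” option first, echoing the authors’ point that Phase 3 can surface a new alternative that dominates the original ones.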

The article goes on to apply the four phases of the paramedic method to another example case. Simply put, the entire paramedic method could be thought of as iteratively asking questions about who is involved in a decision, what the possible solutions are, what everyone can agree on, and what is the best possible solution given all that information. In conclusion, the authors express their hopes that professionals would utilize the paramedic method to make thorough inquiries about the impact of their ethical decisions. However, the authors are also careful to qualify that their method is not meant to be some sort of quick fix to difficult problems. They also note that “ethical problems are [not] amenable to unequivocal algebraic solutions.” Their main goal is just to help people see and consider all the relevant issues in an ethical dilemma such that the process they use is systematic and logical.

Having a unifying ethical approach can be especially helpful to information professionals, especially as the concept of an information professional grows and changes. Existing professional bodies have compiled codes of ethics, listed below:

  • ACM Code of Ethics
  • IEEE Code of Ethics
  • Association of IT Professionals Code of Ethics
  • Librarian’s Code of Ethics
  • USENIX SAGE (System Administrator’s Guild) Code of Ethics

Because of the gaps (and tussles) between technology and policy, a code of ethics can provide a valuable guide for professionals navigating ethical quandaries in the workplace. However, this is by no means a panacea. Tension between ethical approaches, professional ethics and the law, or even professional ethics and organizational policy, can still exist. The Paramedic Approach can help alleviate this by striving toward a “social contract” in which no parties end up satisfied (just kidding!). The aim is to arrive at a detente in which all competing factors have been taken into account and judged as objectively and fairly as possible.

The proliferation of codes implies a more diffuse standard for information professionals than that of doctors or attorneys. What exactly is an information professional’s duty of care? And how does the definition of an information professional change, when the abilities of a “Superuser” (to paraphrase Paul Ohm’s definition, a user who, via additional time, capabilities, or access, gains abilities to do things that the average user cannot) become extended to non-professionals everywhere? The recent case of the disgruntled former auto dealership worker who used a web-based administrative panel to disable more than 100 cars remotely exemplifies this issue’s complexity. What will it mean when we are all “Superusers” in some way?

-Rachelle, Prateek, Daniel, and Tony

Is Google Really Reasonable? Are Any of Us?

Is there privacy in public spaces?  This legal question has been around the block a time or two, but it recently resurfaced in a NYTimes article titled “New Questions Over Google’s Street View in Germany.”  The article covers an ongoing dispute between German officials and Google over what information can and cannot be collected and displayed in Google Street View.  If this sounds like our “Oh Canada” neighbor, that’s because German privacy laws are much stricter than those of the United States, which has taken no privacy issue with Street View.  In response to official and judicial outcry, Google’s German site agreed to blur out identifying information like faces, license plates, and house numbers, and even to remove entire properties at the owner’s request.  Why not just put fig leaves over all of this offensive data sitting around?

While it might be easy to smile in the United States about the demure Germans (certainly the French took no issue), this debate brings up important considerations regarding the reasonableness of American privacy laws.  In the Nissenbaum article, the author points to the challenge of delineating a “reasonable expectation of privacy in public” (12).  Exactly what does “reasonable” mean anyway?  If I’m walking down the street taking pictures of my dog and happen to capture a view inside my neighbor’s window, is this reasonable?  According to California Civil Code Section 1708.8, this would all depend on whether the neighbor might find said picture “offensive to a reasonable person.”  So apparently whether I’m reasonable and whether my neighbor is reasonable legally matters!

The U.S. court cases that provide a precedent for this legal gray area offer little help in qualifying “reasonable.”  In Nissenbaum we are given the example of Florida v. Riley, in which the police discovered marijuana growing in the defendant’s yard.  The court ruled that the police department did not violate an expectation “that society is prepared to recognize as ‘reasonable’” by flying over someone’s yard at 400 feet in a helicopter (14).  Sure, I take my helicopter out all the time; I just don’t do it while walking the dog, and that, my friends, seems reasonable.

Big Content doesn’t like proposed amendments to Indian Copyright Law

The last major revision to Indian Copyright Law was in 1957. It has seen a number of amendments since, the last one in 1999. These laws have not kept up with the changing role of technology in the distribution of copyrighted material. India recently proposed amendments (pdf) to the law to address these changes and to bring Indian Copyright Law into compliance with the WIPO Internet Treaties (pdf).

As discussed in this article at Ars Technica, the International Intellectual Property Alliance (IIPA), representing Big Content in the US (the MPAA, RIAA, etc.), is not happy with the proposed amendments. Their main complaint is about India’s take on DRM anticircumvention provisions. The amendment allows the circumvention of DRM if the intended use is not infringing. For example, end users can rip a DRM-protected DVD for personal use without breaking the law. While this seems fair and is in compliance with the WIPO Internet Treaties, the DMCA, the equivalent law in the US, holds that circumventing DRM for any use is illegal; clearly, that serves the purposes of Big Content well. Under the Indian version of the law, the onus would be on copyright holders to determine whether a circumvention was an infringement.

My main takeaway was the influence Big Content has on government policy in the US. Apart from shaping laws like the DMCA, parts of which go overboard in their attempt to protect the interests of copyright holders, Big Content also heavily influences the creation of the US “Priority Watch List”, which includes India, China, and even Canada. Formulating unbiased policy is a real challenge, and I’m glad India got it right in this case.

Senators appeal to Facebook

I found this article interesting, as it demonstrates a very active role taken by national politicians related to something as (seemingly) trivial as Facebook:

http://voices.washingtonpost.com/posttech/2010/04/senators_pressure_facebook_to.html

My immediate reaction to this article was “why are our Senators spending time writing letters to Facebook when they should be dealing with more important stuff, like healthcare and immigration rights, and you know, other life-or-death types of things…?”. But after thinking about it, I realized that this action was more than just a publicity stunt directed towards their constituents of the Internet generation: it underscores the fact that Facebook, while it may have started as a place to post pictures of last night’s parties and stalk your old high school crushes, now represents the frontier in the collection and storage of (non-financial) personal information, so its policies are likely to be referenced as standards in the “Sears Holdings Management/FTC” issue or “Quon v. Arch” case of the future. And after our class assignments, we all know how beneficial (or dangerous) it would be for advertisers and other third parties to be able to store personal data indefinitely. Yes, this really does “fundamentally alter the relationship between the user and social networking site,” and I am glad that it was officially recognized.

[I wonder if those four senators would have been elected to office if information they had posted on Facebook in their youth was available to be stored indefinitely by media outlets…]
