Ethics, Policy, and the Information “Professional”

In “What Is Computer Ethics?,” James Moor notes that computers are a “special technology” raising equally “special” ethical issues.  New technology continues to confer new capabilities on humans, but it does so within a policy vacuum: we have little guidance on how we should use these new capabilities, whose effects may touch only the individual or may reverberate society-wide.  Computer ethics, then, must consider both personal and social policies on the ethical use of the technology (Moor includes software, hardware, and networks under the “computer” rubric).

What’s so “special,” then, about computers?  Many new technologies are considered on some scale to be revolutionary–are such claims, when made about computers, hyperbole, or is there some basis to them?  Moor argues that a few features present themselves prominently.  First, computers are affordable and abundant (within our society, certainly, and most other developed societies).  Additionally, they are increasingly integrated into many other devices not directly or intuitively tied to computer technology.  But abundance and cheapness, in themselves, are not sufficient characteristics for a revolutionary technology, or pencils, Moor points out, would hold a higher place in history.  What makes computers revolutionary is what he calls “logical malleability”–the capacity to be used modularly as components within many other technologies.  Analogous to the way steam engines were used in a “plug-and-play” (relatively speaking, of course) way to power the machines of the Industrial Age, computers are used modularly to bring intelligence to the machines of the Information Age.  This is what revolution looks like.  Importantly, though, while computers bring great logical power to a variety of media, we should not view their operations in purely logical or numerical terms.  As computing power and utilization increase, so will the importance of how we conceptualize computing “work.”  To use a crude example: a packet sniffer at one level is simply copying bit patterns, but at another level, the nature and impact of its work is completely different and perhaps more relevant in human terms.
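
To make the packet-sniffer example concrete, here is a minimal sketch of such a tool, assuming a Linux machine, root privileges, and Python’s standard raw-socket interface:

```python
import socket

ETH_P_ALL = 0x0003  # Linux constant: capture frames of every protocol

# At one level of description this loop does nothing but copy bit
# patterns off the wire; described in human terms, it may be reading
# everyone's private traffic on the local network.
sniffer = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                        socket.ntohs(ETH_P_ALL))
while True:
    frame, _ = sniffer.recvfrom(65535)
    print(f"copied {len(frame)} bytes")
```

Same dozen lines, two completely different descriptions: exactly Moor’s point about levels of conceptualization.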

Such reconceptualizations will take on increased necessity, Moor asserts, not just because of computing’s impact on human and social activity but also in light of its ability to transform those activities themselves.  Not much imagination is required to describe how computers have changed work, school, and social life.  But computers are transforming activities of broader scope and import as well.  Moor shows that computers’ roles in finance, education, and voting transform the essence of those activities, raising questions about the very nature of money, of education, and even of a fair election in a world where votes are counted and winners projected before significant numbers of voters have cast their ballots.  Logical malleability ensures that the technology will inexorably expand its reach, creating situations to which existing policy cannot map.

Moor admits that his argument rests on a vision of the Information Revolution that some may not share, so he suggests a more ominous example, which he calls the invisibility factor.  Privacy abuses become all the more prevalent in the networked world, but at the same time, more difficult for the average person to detect.  He also warns of biases within code itself that benefit one party over another, citing the “SABRE” airline reservation system that made suggestions beneficial to American Airlines by default.  Lastly, Moor highlights invisible complex calculation as perhaps the greatest possible cause for concern.  As we rely increasingly on computers to do our calculations for us, we make judgments and choices about fallibility and traceability.  Without policies, we leave such interactions up to fate; without computer ethics, we have no framework for policy making.
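
A purely hypothetical sketch (not SABRE’s actual logic) shows how easily such a programmed bias hides behind an innocent-looking default; the host-carrier tiebreak below is our own invention for illustration:

```python
HOST_CARRIER = "AA"  # hypothetical host airline baked into the system

def rank_flights(flights):
    # To the user this looks like a neutral sort by price, but ties
    # quietly go to the host carrier; the bias is invisible unless
    # you read the code.
    return sorted(flights, key=lambda f: (f["price"],
                                          f["carrier"] != HOST_CARRIER))

flights = [
    {"carrier": "UA", "price": 199},
    {"carrier": "DL", "price": 189},
    {"carrier": "AA", "price": 199},
]
print(rank_flights(flights))  # the AA flight wins the $199 tie every time
```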

Morris and Davidson’s “Policy Impact Assessments” describes the importance of addressing the public policy impacts of standards.  Public policy advocates lack the resources to assess and address the impacts of every standards body: the large time commitment required of individuals, the scarcity of people with both the technical and the public policy experience required, the long “time horizons” of standards development, and the sheer number of technical standards bodies all stand in the way.  Because of this scarcity, as well as the need for streamlined, “systematic approaches” that incorporate public policy concerns early in the design phase, the article suggests that it may be helpful in some circumstances for public policy people to create tools that standards bodies can use themselves to identify areas of potential public policy impact.  One example is the draft Public Policy Considerations for Internet Design Decisions created for the IETF, which suggests that technologists consider questions related to the control of information (such as the creation of chokepoints easily amenable to surveillance and censorship), privacy (such as the use of persistent identifiers), and accessibility (such as for disabled individuals).  Tools created for technologists should be designed to “look[] at technical design issues from the perspective of the technology designer, not the public policy advocate,” because they will be executed internally.
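
As a rough illustration, such a tool could be as simple as a structured checklist. In this sketch the area names and questions paraphrase the draft considerations above, while the code structure is entirely our own invention:

```python
# Hypothetical encoding of a policy impact checklist for technologists.
CHECKLIST = {
    "control of information": [
        "Does the design create chokepoints amenable to surveillance"
        " or censorship?",
    ],
    "privacy": [
        "Does the design rely on persistent identifiers?",
    ],
    "accessibility": [
        "Can disabled individuals use systems built on this design?",
    ],
}

def flag_concerns(answers):
    """Return checklist areas where any question was answered True.

    `answers` maps question text to True (a concern) or False.
    """
    return [area for area, questions in CHECKLIST.items()
            if any(answers.get(question, False) for question in questions)]

print(flag_concerns({"Does the design rely on persistent identifiers?": True}))
# -> ['privacy']
```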

In addition to suggesting the creation of policy impact assessment tools customized to the needs of individual standards bodies, the article recommends more generally that standards bodies solicit the input of public policy advocates early in the design process.  The authors also observe that there is a need to raise awareness among technologists about “the social context and implications of their work,” perhaps through education initiatives (the I-School!).

Although both “Tussle in Cyberspace” and the Morris and Davidson article address the tussle between technology and policy, Morris and Davidson place a greater emphasis on identifying design issues and turning them over to public consideration early in the design phase, rather than designing around the tussle — perhaps emphasizing issues that can’t be effectively designed around.  Even if the IETF claims that “we don’t do public policy,” the article suggests that public policy is inevitably impacted by IETF standards.  We see the “Tussle” continuing to play out in issues such as network neutrality, privacy, and more.

Morris and Davidson mention IPv6 as an example of “the ways that technical design decisions can directly impact public policy concerns”: people outside the organization pointed out the privacy implications of generating a unique identifier tied to the user’s Ethernet network card. By contrast, P3P is mentioned as an example where “public interest participation has been a part of a technical design effort from its inception.”
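
The IPv6 concern is easy to demonstrate. Stateless autoconfiguration originally derived a host’s interface identifier from its MAC address via the “modified EUI-64” transform (RFC 4291), so the same identifier follows the machine from network to network; a sketch of the derivation:

```python
def eui64_interface_id(mac: str) -> str:
    """Derive a modified EUI-64 interface identifier from a MAC address."""
    octets = [int(byte, 16) for byte in mac.split(":")]
    octets[0] ^= 0x02                              # flip universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]   # splice ff:fe into middle
    return ":".join(f"{eui[i] << 8 | eui[i + 1]:04x}" for i in range(0, 8, 2))

# The identifier is stable and globally unique: a trackable beacon.
print(eui64_interface_id("00:1a:2b:3c:4d:5e"))  # -> 021a:2bff:fe3c:4d5e
```

The later remedy, randomized “privacy extensions,” replaces this stable value with short-lived random identifiers; as noted below, on some platforms users still have to enable it by hand.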

However, even with initial public interest participation in P3P, and attempts to remedy the public interest issues in IPv6, both seem to have flaws from a usability/social point of view. On some platforms, IPv6 continues to require onerous interventions by users to circumvent the use of potentially non-anonymous identifiers, and P3P doesn’t seem to be as widely used as people initially expected. According to EPIC, P3P may not have been successful because it is too complicated and its defaults don’t provide adequate privacy protections. If “user choice is a fundamental public policy concern,” as described in the “Tussle in Cyberspace” article, then attempts to address public policy impacts will need to incorporate usability concerns. Perhaps issues related to usability and the social world are yet another kind of expertise that public policy advocates need to bring to the table. However, as Morris and Davidson point out, there is no perfect way to address public policy impacts, because there is no way to know what all the potential impacts will be…

So how is an information professional to deal with ethical dilemmas in his/her daily work? In “Paramedic Ethics for the Information Professional,” Collins and Miller propose a “paramedic” method of ethics by which computer professionals can quickly and systematically assess all the relevant factors of an ethical problem that arises in their work. The authors call their approach “paramedic ethics” because it functions much like what paramedics do when they respond to an emergency. Paramedics are not doctors, but they are trained to quickly assess an emergency and make good decisions, stabilizing the problem until they can deliver a patient to better-trained personnel. Similarly, the paramedic approach to ethics is not meant to replace a more formal or rigorous study of ethics, but rather serves as a toolkit that computer professionals can use to organize the relevant facts and actors in an ethical problem in order to come up with the best possible solution for all parties involved.

The method looks like an algorithm so that computer professionals can approach novel ethical dilemmas with a language and form they are familiar with. It makes use of the concepts of vulnerability and opportunity. Simply speaking, a vulnerability is anything that results in a loss for one of the parties if a particular solution to an ethical dilemma were carried out, while an opportunity represents a gain. So higher pay, a sense of well-being or integrity, and a good reputation are all opportunities, while the loss of control over one’s own work is a vulnerability. The paramedic approach also centers on the different obligations that the constituents in an ethical dilemma have toward one another. According to the authors (who draw from theories of deontological and utilitarian ethics), the best solution to a dilemma is the one in which the greatest number of those obligations can be fulfilled.
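
One reason the method feels natural to programmers is that its vocabulary maps directly onto simple data structures; here is a sketch of our own encoding (not the authors’):

```python
from dataclasses import dataclass, field

@dataclass
class Obligation:
    holder: str        # the party who owes the obligation
    owed_to: str       # the party it is owed to
    description: str   # e.g., "keep his integrity"

@dataclass
class Alternative:
    name: str
    # party -> gains (higher pay, integrity, reputation, ...)
    opportunities: dict = field(default_factory=dict)
    # party -> losses (e.g., loss of control over one's own work)
    vulnerabilities: dict = field(default_factory=dict)
```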

In a nutshell, the paramedic method takes a user sequentially through a series of phases. Phase 1 involves gathering data. The individual faced with an ethical problem starts by listing all the possible alternatives for a decision or set of decisions he must make. The article presented the example of George, an electrical engineer working for an aerospace company. George headed the development of a computerized control system for a military aircraft. He became convinced that there was a fundamental flaw in the system’s design that could potentially lead to a crash if not addressed. However, George’s superiors were convinced that the system was safe because it had passed the required tests. George was thus faced with pressure to sign off on the system; refusing would cause delays that might make the company miss the contracted delivery date. In addition, a non-compliant George could see his duties reassigned to someone else.

Applying the paramedic process to George’s case, Phase 1 would involve him listing all the alternatives available to him: he could either sign off on the project or delay it, and in either case he could either publicly object to the company’s decision or publicly comply with it. Phase 1 also asks George to determine all the obligations and rights between all possible parties in the situation (and to create a diagram if that helps). For example, George has an obligation to himself to keep his integrity and an obligation to support his family, and his company has a right to his loyalty.
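
In code, George’s Phase 1 amounts to enumerating the cross product of his two independent choices and tabulating the obligations in play; a standalone sketch using plain tuples:

```python
from itertools import product

# Phase 1: gather data. Every alternative is a combination of George's
# two independent choices.
actions = ["sign off", "delay"]
postures = ["publicly object", "publicly comply"]
alternatives = [f"{action} and {posture}"
                for action, posture in product(actions, postures)]

# Obligations and rights as (holder, owed to, description) triples.
obligations = [
    ("George", "himself",     "keep his integrity"),
    ("George", "his family",  "support them"),
    ("George", "the pilots",  "deliver a safe control system"),
    ("George", "his company", "loyalty (the company's right)"),
]
print(alternatives)  # four alternatives to carry into Phase 2
```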

Phase 2 of the process asks users to assess how each alternative affects each of the parties involved. The article presented a series of matrices that helped to clearly organize all the data in this phase of the process. The matrices enabled the reader to quickly see, for example, that if George signed off on his project, he would not keep his integrity nor would he be fulfilling his obligation to the pilot testing his system, but he would be fulfilling his obligation to support his family (by keeping his job).
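
A sketch of one such matrix as a nested dictionary; the row shown paraphrases the article’s reading of the sign-off-and-comply alternative, and the remaining rows would be filled in the same way:

```python
# Phase 2: for each alternative, record whether each obligation would
# be fulfilled (True) or violated (False).
matrix = {
    "sign off and publicly comply": {
        "integrity (to himself)":   False,
        "safety (to the pilots)":   False,
        "support (to his family)":  True,   # he keeps his job
        "loyalty (to his company)": True,
    },
    # ... one row for each remaining alternative from Phase 1
}
```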

Phase 3 of the paramedic process asks the user to negotiate an agreement. Applying social contract ethics, whereby all parties come to a consensus, the user must step into the shoes of all the other parties and ask whether or not he could live with a particular solution to the ethical problem at hand. In this way, the user might discover new alternative solutions during this step of the process. In our example case, social contract analysis might yield an additional possible solution in which George’s company, in return for his signing off on the system, agrees to inform the test pilots about possible airplane malfunctions.
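
Mechanically, the social-contract step can be pictured as a filter over the alternatives. This is a crude sketch of our own (the real negotiation is a human conversation, and as the example shows, it can add alternatives rather than merely eliminate them):

```python
# Phase 3: an alternative survives negotiation only if no party,
# standing in the others' shoes, would veto it.
def survives_negotiation(alternative, vetoes):
    """`vetoes` maps each party to the set of alternatives it rejects."""
    return all(alternative not in rejected for rejected in vetoes.values())

vetoes = {
    "George":      {"sign off and publicly comply"},   # costs his integrity
    "the company": {"delay and publicly object"},      # misses the deadline
}
print(survives_negotiation("sign off, but inform the test pilots", vetoes))
# -> True: the newly discovered compromise survives the negotiation
```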

Finally, in Phase 4, the user applies deontological and utilitarian ethics in order to come to a conclusive decision. Deontological ethics assesses whether a decision violates or meets the obligations of the parties involved, keeping in mind that even if a decision affects multiple parties in the same positive or negative direction, it might not do so to the same degree. Utilitarian ethics, on the other hand, seeks the overall best solution for the most people–the one that maximizes opportunities and minimizes vulnerabilities for the most people. The result of Phase 4 should be a ranking of all the possible alternatives from best to worst in order to judge the best possible solution. The authors note that in the best-case scenario, one alternative stands out as clearly best, but otherwise the user must choose one according to his judgment.
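
A deliberately naive utilitarian scoring over the Phase 2 matrix might look like the following sketch; the authors would be the first to insist that such a number is an organizing aid, not an answer:

```python
# Phase 4: rank alternatives best-first by obligations fulfilled minus
# obligations violated (a crude utilitarian score over the Phase 2 matrix).
def rank_alternatives(matrix):
    def score(row):
        return sum(1 if fulfilled else -1 for fulfilled in row.values())
    return sorted(matrix, key=lambda alt: score(matrix[alt]), reverse=True)
```

Ideally one alternative stands clearly at the head of the list; when it doesn’t, the user’s judgment takes over, exactly as the article says.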

The article goes on to apply the four phases of the paramedic method to another example case. Simply put, the entire paramedic method could be thought of as iteratively asking questions about who is involved in a decision, what the possible solutions are, what everyone can agree on, and what is the best possible solution given all that information. In conclusion, the authors express their hopes that professionals would utilize the paramedic method to make thorough inquiries about the impact of their ethical decisions. However, the authors are also careful to qualify that their method is not meant to be some sort of quick fix to difficult problems. They also note that “ethical problems are [not] amenable to unequivocal algebraic solutions.” Their main goal is just to help people see and consider all the relevant issues in an ethical dilemma such that the process they use is systematic and logical.

Having a unifying ethical approach can be especially helpful to information professionals, particularly as the concept of an information professional grows and changes. Existing professional bodies have compiled codes of ethics, listed below:

  • ACM Code of Ethics
  • IEEE Code of Ethics
  • Association of IT Professionals Code of Ethics
  • Librarian’s Code of Ethics
  • USENIX SAGE (System Administrator’s Guild) Code of Ethics

Because of the gaps (and tussles) between technology and policy, a code of ethics can provide a valuable guide for professionals navigating ethical quandaries in the workplace. However, it is by no means a panacea. Tensions between ethical approaches, between professional ethics and the law, or even between professional ethics and organizational policy can still exist. The Paramedic Approach can help alleviate this by striving toward a “social contract” in which no parties end up satisfied (just kidding!). The aim is to arrive at a détente in which all competing factors have been taken into account and judged as objectively and fairly as possible.

The proliferation of codes implies a more diffuse standard for information professionals than that of doctors or attorneys. What exactly is an information professional’s duty of care? And how does the definition of an information professional change, when the abilities of a “Superuser” (to paraphrase Paul Ohm’s definition, a user who, via additional time, capabilities, or access, gains abilities to do things that the average user cannot) become extended to non-professionals everywhere? The recent case of the disgruntled former auto dealership worker who used a web-based administrative panel to disable more than 100 cars remotely exemplifies this issue’s complexity. What will it mean when we are all “Superusers” in some way?

-Rachelle, Prateek, Daniel, and Tony

Twitter’s Geolocation Feature

There has been some news over the last few days about Twitter finally turning on its geolocation feature.  It allows you to see a map overlaying individual tweets together with place names and the location of the tweet.  Though the feature has been live via the Twitter API since last fall (so it’s not “breaking” news), this week it was turned on.  Facebook is also expected to turn on geolocation in the near future.  Though the Twitter service is opt-in (what a Google Buzz-like story it would have been otherwise!), and there are a number of really cool/useful things that twitter+location could bring (Another way for impromptu meetups with friends! Deals from the store on the corner!), some people (here, here, and here are a few) are raising privacy concerns.  There aren’t any current tools for controlling who actually sees your location (uh oh), and scenarios like tweeting while you’re on vacation and getting burglarized, more tools for stalkers, employers going all Big Brother on their employees, etc. could happen.  Some food for thought as we talk more about technology and privacy issues in lecture.
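
For the technically curious, reading the new data is trivial once a user opts in. A hedged sketch, assuming the tweet JSON of the era exposes a “geo” field holding a point in [lat, lon] order (treat the field names as assumptions, not an API reference):

```python
# Pull a location out of a tweet's JSON, if the author attached one.
def tweet_location(tweet: dict):
    geo = tweet.get("geo")  # assumed field name; absent if not opted in
    if geo and geo.get("type") == "Point":
        lat, lon = geo["coordinates"]  # assumed [lat, lon] ordering
        return lat, lon
    return None  # the user opted out or attached no location

print(tweet_location({"geo": {"type": "Point",
                              "coordinates": [37.8716, -122.2727]}}))
# -> (37.8716, -122.2727): precision like this is exactly the privacy worry
```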