Court Rules LimeWire Infringed on Copyrights

http://www.wired.com/images_blogs/threatlevel/2010/05/limewireruling.pdf

The court ruled that LimeWire intended to “induce infringement” because of:

(1) The massive scale of infringement committed by LimeWire users

(2) LimeWire’s knowledge of the massive scale of infringing users on its network

One item I found interesting was that the court determined LimeWire’s response to the massive scale of infringement to be insufficient. There were company papers acknowledging the mass infringement, but also a strategic plan to convert infringing users into customers of LimeWire’s online store (which would sell authorized music). It is striking that the strategic plan for conversion did not protect LimeWire: the court implies that knowledge of infringement, combined with an insufficient response, may render a company liable for copyright infringement.

Additionally, the court ruled that LimeWire’s advertising campaigns incited copyright infringement. However, I think these ad campaigns were very different from the slogans used by Grokster and Napster. In those earlier cases, advertisements explicitly named the copyrighted materials that could be downloaded (e.g., the top 10 songs on the network). In the case of LimeWire, by contrast, campaigns were designed to advertise the technical functionality of the application: “free music downloads,” “excellent for downloading music files.” Where the company faltered was in associating itself with other infringement-fostering programs (Napster, Morpheus, etc.); there were no explicit advertisements marketing LimeWire as a way to obtain copyrighted material for free. This is interesting because it places more pressure on companies: the marketing of their products could be used against them if interpreted as promoting illegal activities.

Ethics, Policy, and the Information “Professional”

In James Moor’s “What Is Computer Ethics?,” he notes that computers are a “special technology” raising equally “special” ethical issues.  New technology continues to confer new capabilities on humans, but it does so within a policy vacuum regarding how we should use those capabilities, whose effects may be confined to the individual or may reverberate society-wide.  Computer ethics, then, must consider both personal and social policies on the ethical use of the technology (Moor includes software, hardware, and networks under the “computer” rubric).

What’s so “special,” then, about computers?  Many new technologies are claimed on some scale to be revolutionary–are such claims, when made about computers, hyperbole, or is there some basis to them?  Moor argues that a few features present themselves prominently.  First, computers are affordable and abundant (within our society, certainly, and most other developed societies).  Additionally, they are increasingly integrated into many other devices not directly or intuitively tied to computer technology.  But abundance and cheapness, in themselves, are not sufficient characteristics for a revolutionary technology, or pencils, Moor points out, would hold a higher place in history.  What makes computers revolutionary is what he calls “logical malleability”–the capacity to be used modularly as components within many other technologies.  Analogous to the way steam engines were used in a “plug-and-play” (relatively speaking, of course) way to power the machines of the Industrial Age, computers are used modularly to bring intelligence to the machines of the Information Age.  This is what revolution looks like.  Importantly, though, although computers bring great logical power to a variety of media, we should not view their operations in purely logical/numerical terms.  As computing power and utilization increase, so will the importance of how we conceptualize computing “work.”  To use a crude example: a packet sniffer at one level is simply copying bit patterns, but at another level, the nature and impact of its work is completely different and perhaps more relevant in human terms.

Such reconceptualizations will take on increased necessity, Moor asserts, not just because of computing’s impact on human and social activity but also in light of its ability to transform those activities themselves.  Not much imagination is required to describe how computers have changed work, school, and social life.  But computers are transforming activities of broader scope and import as well.  Moor shows that computers’ roles in finance, education, and voting transform the essence of those activities, raising questions about the very nature of money, education, and even the concept of a fair election in a world where votes are counted and winners projected before significant numbers of voters have cast their ballots.  Logical malleability ensures that the technology will inexorably expand its reach, creating situations onto which existing policy cannot map.

Moor admits that his argument rests on a vision of the Information Revolution that some may not share, so he suggests a more ominous example, which he calls the invisibility factor.  Privacy abuses become all the more prevalent in the networked world, but at the same time, more difficult for the average person to detect.  He also warns of biases within code itself that benefit one party over another, citing the “SABRE” airline reservation system that made suggestions beneficial to American Airlines by default.  Lastly, Moor highlights invisible complex calculation as perhaps the greatest possible cause for concern.  As we rely increasingly on computers to do our calculations for us, we make judgments and choices about fallibility and traceability.  Without policies, we leave such interactions up to fate; without computer ethics, we have no framework for policy making.

Morris and Davidson’s “Policy Impact Assessments” describes the importance of addressing the public policy impacts of standards.  Public policy advocates lack the resources to assess and address the public policy impacts of every standards body — specifically, they face the large time commitment required of individuals, a shortage of people with both the technical and the public policy experience required, long “time horizons” for standards development, and the sheer number of technical standards bodies.  Because of this lack of resources, as well as the need for streamlined, “systematic approaches” to the consideration of public policy impacts that incorporate public policy concerns early in the design phase, the article suggests that it may be helpful in some circumstances for public policy people to create tools that standards bodies can implement internally to identify areas where there may be public policy impacts.  One example is the draft Public Policy Considerations for Internet Design Decisions created for the IETF, which suggests that technologists consider questions related to the control of information (such as the creation of chokepoints easily amenable to surveillance and censorship), privacy (such as the use of persistent identifiers), and accessibility (such as access for disabled individuals).  Tools created for technologists should be designed to “look[] at technical design issues from the perspective of the technology designer, not the public policy advocate,” because they will be executed internally.

In addition to suggesting the creation of policy impact assessment tools customized to the needs of individual standards bodies, the article recommends more generally that standards bodies solicit the input of public policy advocates early in the design process.  The authors also observe that there is a need to raise awareness among technologists about “the social context and implications of their work,” perhaps through education initiatives (the I-School!).

Although both “Tussle in Cyberspace” and the Morris and Davidson article address the tussle between technology and policy, Morris and Davidson place a greater emphasis on identifying design issues and turning them over to public consideration early in the design phase, rather than designing around the tussle — perhaps emphasizing issues that can’t be effectively designed around.  Even if the IETF claims that “we don’t do public policy,” the article suggests that public policy is inevitably impacted by IETF standards.  We see the “Tussle” continuing to play out in issues such as network neutrality, privacy, and more.

Morris and Davidson mention IPv6 as an example of “the ways that technical design decisions can directly impact public policy concerns”: people outside the organization pointed out the privacy implications of generating a unique identifier tied to the user’s Ethernet network card. By contrast, P3P is mentioned as an example where “public interest participation has been a part of a technical design effort from its inception.”

However, even with initial public interest participation in P3P, and attempts to remedy the public interest issues in IPv6, both seem to have flaws from a usability/social point of view. On some platforms, IPv6 continues to require onerous interventions by users to avoid the use of potentially non-anonymous identifiers, and P3P doesn’t seem to be as widely used as people initially expected. According to EPIC, P3P may not have been successful because it is too complicated and its defaults don’t provide adequate privacy protections. If “user choice is a fundamental public policy concern,” as described in the “Tussle in Cyberspace” article, then attempts to address public policy impacts will need to incorporate usability concerns. Perhaps issues related to usability and the social world are yet another kind of expertise that public policy advocates need to bring to the table. However, as Morris and Davidson point out, there is no perfect way to address public policy impacts, because there is no way to know what all the potential impacts will be…

So how is an information professional to deal with ethical dilemmas in his/her daily work? In “Paramedic Ethics for the Information Professional,” Collins and Miller propose a “paramedic” method of ethics by which computer professionals can quickly and systematically assess all the relevant factors of an ethical problem that arises in their work. The authors call their approach “paramedic ethics” because it functions much like what paramedics do when they respond to an emergency. Paramedics are not doctors, but they are trained to quickly assess an emergency and make good decisions, stabilizing the problem until they can deliver a patient to better-trained personnel. Similarly, the paramedic approach to ethics is not meant to replace a more formal or rigorous study of ethics; rather, it serves as a toolkit that computer professionals can use to organize the relevant facts and actors in an ethical problem in order to come up with the best possible solution for all parties involved.

The method looks like an algorithm so that computer professionals can approach novel ethical dilemmas with a language and form they are familiar with. The method makes use of the concepts of vulnerability and opportunity. Simply speaking, a vulnerability is anything that results in a loss for one of the parties if a particular solution to an ethical dilemma were carried out, while an opportunity represents a gain. So higher pay, a sense of well-being or integrity, and a good reputation are all opportunities, while the loss of control over one’s own work is a vulnerability. The paramedic approach to ethics also centers on the different obligations that the constituents in an ethical dilemma have toward one another. According to the authors (who draw from theories of deontological and utilitarian ethics), the best solution to a dilemma is the one in which the greatest number of those obligations can be fulfilled.

In a nutshell, the paramedic method takes a user sequentially through a series of phases. Phase 1 involves gathering data. The individual faced with an ethical problem starts by listing all the possible alternatives for a decision or set of decisions he must make. The article presented the example of George, an electrical engineer working for an aerospace company. George headed the development of a computerized control system for a military aircraft. He became convinced that there was a fundamental flaw in the system’s design that could potentially lead to a crash if not addressed. However, George’s superiors were convinced that the system was safe because it had passed the required tests. George was thus faced with pressure to sign off on the system; not doing so would cause delays that might make the company miss the contracted delivery date. In addition, a non-compliant George could have his duties reassigned to someone else.

Using the example of George, Phase 1 of the paramedic process would involve him listing out all the alternatives available to him. He could either sign off on the project or delay it, and in either case he could either publicly object to the company or publicly comply with it. Phase 1 also asks George to determine all the obligations and rights between all possible parties in this situation (and create a diagram if that helps). For example, George has an obligation to himself to keep his integrity and an obligation to support his family. His company has a right to his loyalty.

Phase 2 of the process asks users to assess how each alternative affects each of the parties involved. The article presented a series of matrices that helped to clearly organize all the data in this phase of the process. The matrices enabled the reader to quickly see, for example, that if George signed off on his project, he would not keep his integrity nor would he be fulfilling his obligation to the pilot testing his system, but he would be fulfilling his obligation to support his family (by keeping his job).

Phase 3 of the paramedic process asks the user to negotiate an agreement. Applying social contract ethics, whereby all parties come to a consensus, the user must step into the shoes of all the other parties and ask whether or not he could live with a particular solution to the ethical problem at hand. In this way, the user might discover new alternative solutions during this step of the paramedic process. In our example case, social contract analysis might yield an additional possible solution in which George’s company, in return for his signing off on the project, agreed to inform the test pilots about possible airplane malfunctions.

Finally, in Phase 4, the user applies deontological and utilitarian ethics in order to come to a conclusive decision. Deontological ethics assesses a decision by whether it violates or meets the obligations of the parties involved; moreover, even if a decision affects multiple parties in the same positive or negative direction, it might not do so to the same degree. Utilitarian ethics, on the other hand, seeks the overall best solution for the most people–the one that maximizes opportunities and minimizes vulnerabilities for the most people. The result of Phase 4 should be a ranking of all the possible alternatives from best to worst in order to judge the best possible solution. The authors note that in the best-case scenario one alternative stands out as clearly best, but otherwise the user must choose one according to his judgment.

The article goes on to apply the four phases of the paramedic method to another example case. Simply put, the entire paramedic method can be thought of as iteratively asking who is involved in a decision, what the possible solutions are, what everyone can agree on, and what the best possible solution is given all that information. In conclusion, the authors express their hope that professionals will use the paramedic method to make thorough inquiries into the impact of their ethical decisions. However, they are careful to qualify that the method is not meant to be a quick fix for difficult problems, noting that “ethical problems are [not] amenable to unequivocal algebraic solutions.” Their main goal is simply to help people see and consider all the relevant issues in an ethical dilemma so that the process they use is systematic and logical.
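Since the authors deliberately cast their method in algorithmic form, the bookkeeping of Phase 2 and the utilitarian ranking of Phase 4 can be sketched in a few lines of code. The alternatives, parties, and +1/-1 scores below are hypothetical stand-ins loosely inspired by the George example, not the article’s actual matrices:

```python
# Toy sketch of the paramedic method's Phase 2 matrix and Phase 4 ranking.
# All alternatives, parties, and scores are hypothetical illustrations.

# Phase 2: for each alternative, record whether each party's obligations
# are fulfilled (+1), violated (-1), or unaffected (0).
matrix = {
    "sign off quietly":        {"George": -1, "family": +1, "company": +1, "test pilot": -1},
    "sign off, warn pilots":   {"George": +1, "family": +1, "company": +1, "test pilot": +1},
    "refuse, object publicly": {"George": +1, "family": -1, "company": -1, "test pilot": +1},
}

# Phase 4 (utilitarian flavor): rank alternatives by total score, i.e.
# maximize fulfilled obligations and minimize violations across parties.
def rank(matrix):
    return sorted(matrix, key=lambda alt: sum(matrix[alt].values()), reverse=True)

for alt in rank(matrix):
    print(alt, sum(matrix[alt].values()))
```

The sketch captures only the mechanical ranking step; Phases 1 and 3 (enumerating parties and obligations, and negotiating a social contract) remain matters of human judgment, which is exactly why the authors warn the method is no "algebraic solution."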

Having a unifying ethical approach can be valuable to information professionals, especially as the concept of an information professional grows and changes. Existing professional bodies have compiled codes of ethics, listed below:

  • ACM Code of Ethics
  • IEEE Code of Ethics
  • Association of IT Professionals Code of Ethics
  • Librarian’s Code of Ethics
  • USENIX SAGE (System Administrator’s Guild) Code of Ethics

Because of the gaps (and tussles) between technology and policy, a code of ethics can provide a valuable guide for professionals navigating ethical quandaries in the workplace. However, this is by no means a panacea. Tension between ethical approaches, professional ethics and the law, or even professional ethics and organizational policy, can still exist. The Paramedic Approach can help alleviate this by striving toward a “social contract” in which no parties end up satisfied (just kidding!). The aim is to arrive at a detente in which all competing factors have been taken into account and judged as objectively and fairly as possible.

The proliferation of codes implies a more diffuse standard for information professionals than that of doctors or attorneys. What exactly is an information professional’s duty of care? And how does the definition of an information professional change, when the abilities of a “Superuser” (to paraphrase Paul Ohm’s definition, a user who, via additional time, capabilities, or access, gains abilities to do things that the average user cannot) become extended to non-professionals everywhere? The recent case of the disgruntled former auto dealership worker who used a web-based administrative panel to disable more than 100 cars remotely exemplifies this issue’s complexity. What will it mean when we are all “Superusers” in some way?

-Rachelle, Prateek, Daniel, and Tony

Is consumers’ privacy protected by consumer protection policies?

Alex Kantchelian, Dhawal Mujumdar & Sean Carey

FTC Policies on Deception and Unfairness

These two papers outline the FTC’s policies for cracking down on consumer unfairness and deception. The policies are distilled from several court cases that shaped consumer protection; up to that point, however, the FTC had issued no single statement on consumer unfairness or deception.


The FTC policy statement on Unfairness


The FTC is responding to a letter from Senators Danforth and Ford concerning one aspect of the FTC’s jurisdiction over “unfair or deceptive acts or practices.” The Senate subcommittee is planning to hold hearings on the concept of “unfairness” as applied to consumer transactions.

The FTC states that the concept of consumer unfairness is not immediately obvious, and this uncertainty is troublesome for some businesses and members of the legal profession. The agency attempts to delineate a concrete framework for future application of its unfairness authority. From court rulings, the FTC has boiled down unfair acts or practices affecting commerce into three criteria: whether the practice injures consumers, whether it violates established public policy, and whether it is unethical or unscrupulous.

Consumer Injury

The commission is concerned with substantive harms, such as monetary harm and unwarranted health and safety risks; emotional effects tend not to “make the cut” as evidence of injury. The injury must not be outweighed by any offsetting consumer or competitive benefits that the sales practice also produces; i.e., a producer can justify not informing the consumer if doing so saves the consumer money. However, if sellers adopt practices that unjustifiably hinder free market decisions, those practices can be considered unfair. This includes overt coercion and the exercise of undue influence over highly susceptible purchasers.

Violation of public policy

Violation of public policy is used by the FTC as a means of providing additional evidence on the degree of consumer injury caused by specific practices; the S&H court treated it as a separate consideration. The FTC thinks it important to examine outside statutory policies and established judicial principles for assistance in assessing the degree of consumer injury.

Unethical or unscrupulous conduct

Unethical or unscrupulous conduct is invoked for certainty in reaching all the purposes of the underlying statute that forbids “unfair” acts or practices. The FTC considers this criterion largely duplicative, because truly unethical or unscrupulous conduct will almost always injure consumers or violate public policy as well.

Summary of FTC policy statement on deception

Section 5 of the FTC act declares unfair or deceptive acts or practices unlawful. Section 12 specifically prohibits false ads. There is no single definitive statement of the Commission’s authority on deceptive acts.

Summary:

The FTC does not have any single, definitive statement of its authority over deceptive acts. However, it has an outline for the basis of a deception case. First, there must be a representation, omission, or practice that is likely to mislead the consumer: false oral or written representations, misleading price claims, sales of hazardous or systematically defective products or services without adequate disclosures, and similar conduct. Second, the FTC examines the practice from the perspective of a consumer acting reasonably in the circumstances. Third, the FTC asks whether the representation, omission, or practice is a material one. Most deception involves written or oral misrepresentations or omissions of material information, and it generally occurs in conduct associated with a sales transaction. Advertisements are also considered when dealing with a case of deception; the commission has even found deception where a sales representative misrepresented the purpose of the initial contact with customers.

Part 2, There Must be a Representation, Omission or Practice that is likely to mislead the consumer.

Most deception involves written or oral misrepresentations, or omissions of material information. The Commission looks for both express and implied claims, the latter determined through an examination of the representation itself. In some cases, consumers can be presumed to reach false beliefs about products or services because of omissions. The commission can sometimes evaluate these claims on its own, but at other times it may require evidence of consumers’ expectations.

Part 3, The act or practice must be considered from the perspective of the reasonable consumer.

Marketing and point-of-sale practices, such as bait-and-switch schemes, that can mislead consumers are also deceptive. When a product is sold, there is an implied representation that the product is fit for the purpose for which it is sold; if it is not, the sale is considered deceptive. Additionally, the FTC gives special consideration to the needs of specific audiences, for example vulnerable audiences such as the terminally ill, the elderly, and young children. The FTC takes into consideration how the consumer will interpret claims in advertisements and written material. It will avoid cases with “obviously exaggerated or puffing representations” that consumers would not take seriously. The Commission also notes that it sees little incentive to deceive consumers with products that are inexpensive or easy to evaluate, such as consumables (toilet paper, soap, etc.), and it will look at such practices closely before issuing a complaint based on deception. The FTC takes into account the entire advertisement, transaction, or course of dealing, the entire “mosaic,” and how the consumer is likely to respond, in addition to materiality.

Part 4, the representation, omission or practice must be material

The third major element that the FTC considers is the materiality of the representation. The FTC considers information “material” if it affects the consumer’s choice or conduct; such information may concern purpose, safety, efficacy, or cost. If the commission cannot presume that a piece of information is material, it will seek evidence that the omitted information is important to consumers.

Conclusion:

The Commission finds an act or practice deceptive if there is a misrepresentation, omission, or other practice likely to harm consumers. Although the commission does not generally require extrinsic evidence of materiality, in certain situations such evidence might be necessary.

Sears Holdings Management Corporation Case

Sometimes you wonder whether commissions like the Federal Trade Commission exist in name only. But when you look at the recent case involving Sears Holdings Management Corporation, you realize their importance. The principal mission of the Federal Trade Commission (FTC) is “consumer protection” and the prevention of “anti-competitive” business practices, and in this case it stuck precisely to that core mission and once again proved its worth.

Sears Holdings Management Corporation (“respondent” or “SHMC”) is a subsidiary of Sears Holdings Corporation. SHMC handles marketing operations for the Sears Roebuck and Kmart retail stores, and operates the sears.com and kmart.com retail internet websites.
From on or about April 2007 through January 2008, SHMC disseminated via the internet a software application for consumers to download and install on their computers. The application was created, developed, and managed for SHMC by a third party in connection with SHMC’s “My SHC Community” market research program. When installed, the application runs in the background at all times on consumers’ computers and transmits tracked information, including nearly all of the internet behavior that occurs on those computers, to servers maintained on behalf of SHMC. The information collected and transmitted included all web browsing, secure sessions, online account activity, and use of web-based email and instant messaging services.
If you are angered and aghast at this level of encroachment into consumers’ privacy, hold on to your seat; it’s just the beginning. SHMC didn’t mention all the details about the application and what it was going to collect in its “click-wrap” license or its privacy policies. Fifteen out of every hundred visitors to the sears.com and kmart.com websites were presented with a “My SHC Community” pop-up box. This pop-up box mentioned the purpose and benefits of joining “My SHC Community,” but it made no mention of the software application (“the application”). Likewise, the general “Privacy Policy” statement accessed via the hyperlink in the pop-up box did not mention the application. Furthermore, the pop-up box message invited consumers to enter their email address to receive a follow-up email from SHMC with more information. Invitation messages were subsequently emailed to those consumers who supplied their email address. These messages described what consumers would receive in exchange for becoming a member of the “My SHC Community,” and consumers who wished to proceed were asked to click the “Join Today” button at the bottom of the message.
After clicking the “Join Today” button in the email, consumers were directed to a landing page that restated many of the representations about the potential interactions between members and the “community.” However, the landing page did not mention the application either. There was one more “Join Today” button on the landing page; consumers who clicked it were directed to a registration page. To complete the registration, consumers needed to enter their name, address, age, and email address. Below these entry fields, the registration page presented a “Privacy Statement and User License Agreement” (PSULA) in a scroll box that displayed ten lines of a multi-page document at a time.
A description of the software application that would be installed begins on approximately the 75th line down in the scroll box, meaning consumers had to scroll through seven screens of text to reach this information. The description covered the application’s collection of internet usage information and mentioned various activities it was going to monitor. Even so, the PSULA remained ambiguous about what the application would actually do. For example, it stated that the application would monitor the collected information for a better understanding of consumers and their households, but it did not explain what SHMC meant by monitoring: would it be done by automated programs or manually? Nor did the PSULA identify the specific information that would be monitored. It also mentioned that the application might examine the header information of consumers’ instant and email messages. The PSULA further described how the information the application collected would be transmitted to SHMC’s servers, how it might be used, and how it would be maintained, and it clearly stated that SHMC reserved the right to continue to use the information collected. At the end, consumers were asked to accept these terms and conditions; those who accepted were directed to an installation page with downloading and installation instructions. The installation page gave no information about the application. When installed, the application worked and transmitted information substantially as described in the PSULA.
The tracked information included not only the websites consumers visited and the links they clicked, but also the text of secure pages, such as online banking statements and online drug prescription records, and select header fields that could show the sender, recipient, subject, and size of web-based email messages.
We believe the level of encroachment into consumers’ privacy was not only blatant but also shocking. SHMC failed to disclose adequately that the software application, once installed, would monitor nearly all of the internet behavior and activity on consumers’ computers. This failure to disclose material facts was nothing short of the deceptive practice discussed in the FTC’s policy statement on deception.

Understanding privacy under FTC and OECD consumer protection policies


Precisely and exhaustively defining the concept of privacy is a challenging problem. For starters, Merriam-Webster defines one’s right to privacy as the “freedom from unauthorized intrusion.” How inclusive is this definition?
As suggested, we often equate privacy violations with being spied on: someone collecting and possibly disclosing data about us without our knowledge or consent. The FTC policy on unfairness would a priori seem naturally suited to this problem. To be unfair, the privacy breach has to be a practice that injures the consumer. Can we establish injury in a general privacy breach case? Unfortunately, the requirements do not look promising. First, the privacy breach must have a substantial effect, namely monetary or physical harm to the consumer: “more subjective types of harm are excluded” [FTC on unfairness]. There is usually no directly observable monetary or physical harm when a privacy breach occurs, with the exception of a few cases that tend to receive massive media coverage, such as the murder of Rebecca Schaeffer, whose stalker obtained the actress’s home address through California DMV records.  Second, the net value of the privacy breach has to be considered: possible good outcomes balance the gravity of the injury. So trading privacy in exchange for cash has a good chance of actually playing in your favor before the FTC (and your department store loyalty card does just that). Third, the privacy breach has to be unavoidable by the consumer. This is obviously the case with an information-hiding manufacturer [Sony BMG rootkit incident], but a breach need not be unavoidable [Sears Holdings case] in order to result in a huge privacy disaster.
The FTC statement on unfairness is thus not so well suited to privacy protection. What about the statement on deception? Oddly, it turns out that one can approach the problem from a somewhat idiosyncratic angle: alleging misleading privacy expectations regarding a given product [Sears Holdings case]. What is surprising is that privacy is treated like any other product feature, so that we are never really talking about privacy so much as about misrepresentation and deceptive practices. Moreover, the analysis of the likelihood of deception implicitly relies on the unstated privacy expectations of reasonable consumers. The problem is that even reasonable consumers may not have enough technical knowledge to understand privacy issues in today’s highly complex world of software, so the very foundations of reasonable expectations, and the analysis of effects on the targeted audience, are deeply weakened.
Privacy, understood as the right to be left alone, is at most moderately well served by the FTC consumer protection policies. Unfortunately, in an information-intensive world, a lot of “non-intrusive” data processing also naturally falls within our understanding of privacy. For example, one’s ability to inspect her stored personal data on relevant systems, or one’s right to have her personal data well secured from malevolent outsiders, are pretty basic privacy requirements that are not covered by our leading definition.
Interestingly, the OECD has addressed some of those issues in its Guidelines on the Protection of Privacy and Transborder Flows of Personal Data. In the tussle between the free flow of information, which is economically beneficial, and privacy protections for the consumer, the OECD suggests seven principles to be enforced by its members: collection limitation; data quality, restricting collected data to what is relevant to the purpose for which it is collected; purpose specification at or before the time of collection; security safeguards for protecting collected data; openness, making the purposes and nature of the collected data readily available; individual participation, to avoid the tyranny of a blind administration; and finally accountability.
In France, for instance, the CNIL (the National Commission on Computerized Systems and Liberties) has implemented these recommendations since 1978 (thus ahead of the OECD’s guidelines), albeit not without criticism, ranging from the quality of the decisions reached, which are often in favor of governmental action, to its painfully long processes, owing to the overwhelming number of requests and cases submitted to this relatively small administrative organ.