Protection of AI through the Department of Technology: an Incomplete Solution
By Anonymous | December 5, 2019
From Stephen Hawking to Elon Musk, futurists and scientists have feared for the state of humanity after the singularity. This apocalyptic Roko’s Basilisk scenario has now entered a United States presidential candidate’s platform for the first time, via Andrew Yang.
Yang has received both positive and negative attention for his proposal to treat data as individual property, a position that could mitigate some of the current harms of data collection, security, use, and reuse by many companies today. But there has been less talk about his idea to create a cabinet-level Department of Technology to address advances in algorithms.
On his platform page, Yang vaguely casts artificial intelligence as a bogeyman that is out “displacing…jobs” and “causing unknown psychological issues for our children,” created by techies who “don’t fully understand how it works.” While these broad strokes echo the language of the 19th-century Luddites, one thing we can agree on is that no one truly knows where technology will land or what it will mean for our lives. To address this uncertainty, Yang proposes creating the Department of Technology to:
- Monitor the development of new technology
- More quickly adapt to the changing technological landscape
- Prevent technological threats to humanity from developing without oversight
While none of these ideas is controversial, they are also unhelpfully broad. Those objectives already have representation, at least in theory, in presidential administrations through the Office of Science and Technology Policy (OSTP), both through its directorate and through the position of United States Chief Technology Officer. Yang does not suggest abolishing or transforming the OSTP; instead, he additionally calls for reinstating the Office of Technology Assessment (OTA) as its legislative counterpart.
It is unclear how the proposed Department of Technology would interact with the Senate-confirmed OSTP, the revived OTA, and his other recent addition to Cabinet-rank positions, the Department of the Attention Economy, which would focus on protections from the negative side effects of social media, especially for children.
Adding ever more departments and offices to the executive branch cheapens the sophistication of the solutions on offer; the roles described resemble think tanks more than governing bodies.
Nevertheless, Mr. Yang’s surfacing of these potential harms to children gets closer to the algorithmic needs his policies do not address: the biases and potential harms of machine-learning applications already in use across the country. While the Department of Technology works to forestall the singularity, unaddressed algorithmic issues and processes remain. Yang is right that the government’s capacity to respond to these needs is insufficient. But it is not clear that creating another department would actually give people the protection they need: to understand the implications of their data as property, to shield them from data exposure and algorithmic abuse, and to secure their information and safety in the evolving cyberscape.
If we do accept his assumption that the most pressing technological issues are in regulating advanced AI applications, we still don’t have what is needed to protect the electorate: checks and balances. As Mr. Yang’s policy states:
“The level of technological fluency that members of our government has shown has created justified fears in the minds of Americans that the government isn’t equipped to create a regulatory system that’s designed to protect them.”
Congress would have a difficult time crafting enforceable laws, as some congresspersons, I gather, think that Pac-Man is an emerging technology. And the judicial branch already struggles to process technology patent cases, given the depth of expertise, both legal and technical, demanded of content experts.
While reviving the OTA would help, it would require a legislative act, since Congress defunded the office in 1995 after “some Republican lawmakers came to view [the OTA] as duplicative, wasteful, and biased against their party.” And no equivalent judicial plan has been put forward, leaving the public without one of its key levers for recourse: if technological fluency is not equally available across all branches of government, we risk abuses from the branches that do hold that intellectual power.
The success of the Cabinet-rank EPA in protecting the environment and the public from economic externalities came in part from accessible content expertise in all three branches (consider the legislative Clean Air Act and the judicial Massachusetts v. Environmental Protection Agency). To be at least as successful as the EPA (whose “success” has arguably been diminished under Scott Pruitt), these proposed regulatory bodies would need both to be given that power through law and to be held accountable for their work in the courts.
In short, to protect individuals from the unknown harmful effects of future technology, the government needs to be much better equipped to address these needs. But the promotion or creation of a Department of Technology led by deeply sophisticated technologists is less than one third of the solution to our algorithmic obstacles. That level of technological acumen would need to permeate all three branches of government and would need to address current algorithmic issues, not just the rise of our computer overlords.