Staying One Step Ahead: How Future Technologies Should Cooperate With Society

In modern times, computing technologies have been developing rapidly. Big Data, machine learning, and AI are playing a larger part in our daily lives than ever before, creating an epistemic transition. As these systems advance, the developers of these technologies will be tasked with finding progressive ways of meshing technology with society, ever cognizant of the possible consequences at stake. While it is important to continue developing these technologies, we must not continue moving forward blindly; rather, engineers and scientists must pay close attention to the way their technologies interact with society.

In the article "Engineering the Public: Big Data, Surveillance and Computational Politics," author Zeynep Tufekci touches upon Big Data and its effects on society. One consequence Tufekci mentions is the invasion of personal privacy: online platforms where a "user is going about her day… purchasing products and participating on social media" leave traceable imprints. These remnants can carry important information, including personal conversations and private details, which are then available to anyone willing to retrace the user's steps. Big Data systems handle a great deal of private information such as this; that is, after all, what they are built to do. However, this private data can be analyzed with machine learning in undesirable ways, as Tufekci describes. She warns that Big Data, particularly when applied to politics, has the potential to "undermine the civic experience." She explains how gathering data to create ads targeted at specific viewers can result in people seeing only one side of a story, in a manner similar to propaganda. This may eventually hinder or distort the flow of information and may even raise constitutional concerns. Such fears are why computer scientists working with Big Data need to pay attention to how they use people's private information. Machine learning should not be used in a way that regulates one's information feed, but rather in a way that personalizes computing experiences.
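To make this distinction concrete, the short sketch below is our own illustration, not drawn from Tufekci's article; the function names, engagement scores, and "viewpoint" labels are hypothetical. It contrasts a feed ranked purely by predicted engagement, which tends to keep surfacing one side of a story, with one that still personalizes but caps how much any single viewpoint can dominate.

# Illustrative sketch: engagement-only ranking vs. ranking with a diversity cap.
# All names and data here are hypothetical, for illustration only.

def rank_engagement_only(articles, predicted_engagement):
    """Sort purely by predicted engagement; over time this tends to show
    a user only the viewpoints they already agree with."""
    return sorted(articles, key=lambda a: predicted_engagement[a], reverse=True)

def rank_with_diversity(articles, predicted_engagement, viewpoint, cap=3):
    """Same ranking, but limit how many items from any one viewpoint appear,
    so the feed is personalized without sealing the user into one side."""
    ranked, shown = [], {}
    for a in sorted(articles, key=lambda a: predicted_engagement[a], reverse=True):
        v = viewpoint[a]
        if shown.get(v, 0) < cap:
            ranked.append(a)
            shown[v] = shown.get(v, 0) + 1
    return ranked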

Artificial Intelligence is another rapidly emerging field of modern technology which, when harnessed responsibly, has the potential to greatly improve societal functions. A specific application of AI can be found in the development of autonomous cars. There are of course many benefits that come with driverless cars, such as the perceived removal of human error and more comfortable, efficient travel. However, there are also many ethical concerns engineers and scientists must consider before attempting to implement this technology throughout society. As Michael Nees, author of "Self-Driving Cars Will Need People, Too," explains, "tech innovators know from experience that automation will fail at least some of the time." By removing the human element from driving, consumers must place their safety, as well as the safety of those around them, entirely in the hands of these automated systems. If these systems were to malfunction while performing such a complex and variable task, with no possibility of human intervention, the consequences could be catastrophic. To address these flaws in autonomous technology, Nees argues that "the best option remains intervention by the human driver," allowing the consumer to remain engaged and responsible for their own well-being. This coincides with J. C. R. Licklider's theory of "man-computer symbiosis," "in which machines would aid people in the real-time work of thinking," avoiding the counterproductive exclusion of humans from automated processes (Mindell 4). Although many in the general populace believe Artificial Intelligence is flawless by nature, engineers must not ignore the risks that emerge when the possibility of human involvement in automated processes is removed.
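To illustrate what this human-in-the-loop idea might look like in practice, the sketch below is our own, purely hypothetical example; the confidence threshold and the "safe stop" fallback are assumptions, not Nees's or Licklider's actual design. It shows a supervisory loop in which the automation drives only while its confidence is high and otherwise hands control back to the human driver.

# Illustrative sketch of human-machine symbiosis in a driving loop.
# The threshold and interfaces are hypothetical assumptions.

CONFIDENCE_THRESHOLD = 0.95  # assumed cutoff for trusting the automation

def control_step(perception_confidence, human_ready):
    """Decide who controls the vehicle for the next time step."""
    if perception_confidence >= CONFIDENCE_THRESHOLD:
        return "automation"   # the system is confident enough to keep driving
    if human_ready:
        return "human"        # degrade gracefully to the engaged human driver
    return "safe_stop"        # neither is available: bring the car to a safe stop

# Example: a momentary sensor dropout triggers a handover instead of a blind guess.
print(control_step(perception_confidence=0.80, human_ready=True))   # -> human
print(control_step(perception_confidence=0.99, human_ready=False))  # -> automation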

The contemporary wave of rapidly emerging technology is, in fact, not a novel occurrence. Previous waves of Big Data should not be overlooked when considering how to stay ahead of the epistemic transition taking place. By "comparing the flood of data that washed over society after a technical revolution two hundred years ago to the flood of data we are experiencing today," new insights into how these technologies should be implemented and governed are revealed (Ambrose). Engineers, scientists, and others must be vigilant that their innovations, once implemented in society, do not bring with them negative, unintended consequences for consumers. Society must also adopt policies that adapt to the epistemic and ethical transitions occurring, so that the security and well-being of the populace is maintained and not eroded by misused technology. Only then can the full potential of these emerging technologies be harnessed to benefit the majority of the populace, improving society as a whole.

Gabriel Dudlicek, Phillip Durgin, Angela Ferro, Adam Goldsmith

Works Cited:

Mindell, David A. Between Human and Machine: Feedback, Control, and Computing before Cybernetics. The Johns Hopkins University Press, 2004.

Tufekci, Zeynep. “Engineering the Public: Big Data, Surveillance and Computational Politics.” First Monday, 7 July 2014, firstmonday.org/article/view/4901/4097.

Nees, Michael. “Self-Driving Cars Will Need People, Too.” IFLScience, 20 Mar. 2018, www.iflscience.com/technology/self-driving-cars-will-need-people-too/.

Ambrose, Meg Leta. “Lessons from the Avalanche of Numbers: Big Data in Historical Perspective.” 2015.
