Looking Down the Barrel of Technological Challenges

Connor Burri, Patrick O’Mullan, Matthew Spofford, Alex Young

 

While we develop new and innovative technologies, society continues to evolve faster than ever before. Technologies such as AI, machine learning, and big data are going to affect our future in extreme ways. By looking to historical examples of periods when technology was rapidly changing, we can predict the types of changes to expect, and from this we can infer what issues may present themselves in the future. We believe that the biggest challenges have to do with automation, the ever-shrinking distinction between human and non-human, and ethical issues related to AI and big data.

Ever since we conceived the idea of automation in the workplace, we have been slowly replacing workers with machines. In his book Between Human and Machine: Feedback, Control, and Computing Before Cybernetics, David Mindell addresses Lewis Mumford's idea that once we reach the full potential of the neotechnic, the worker will be gone. Mindell warns us that "when the neotechnic is fully realized, 'automatism' in production progresses to the point where 'in the really neotechnic industries and processes, the worker has been almost eliminated'" (Mindell 1). While automation has been an issue for nearly a century, its scale today makes it more hazardous. When the assembly line was first created, it replaced skilled workers with many more unskilled ones working along an automated track. Modern automation would see the exit of even these unskilled workers from the workplace. The jobs that remained would demand very specific skill sets that many people do not have and cannot easily acquire. Understanding how modern automation will change society is a crucial consideration as we move forward with these technologies.

Any time there is a significant technological advancement, society integrates that technology into its norms. Mindell describes how technology developed during World War II was absorbed in this way: "Yet even as Mumford wrote, people were entering into new, intimate couplings with machines, with dramatic effects… World War II continued to blur the boundaries between mechanical and organic. Radar operators manipulated blips on screens as they fought automated attackers, and aircraft made human bodies into new and terrible weapons" (Mindell 2). Human beings tend to adapt to new technology when it is introduced and fold it into their societies. Today it is normal to see people flying airplanes or driving cars, yet when we talk about it we say "I drove" or "I flew," not "the car drove" or "the plane flew." The effect is even more drastic with our phones, which have become almost an extra appendage; many people can hardly make it through the day without one. When developing AI, perfecting machine learning, and utilizing big data, experts must consider how these new technologies will integrate themselves into society and what issues this may cause, ethically, socially, or otherwise.

Along with this expansion in computer technology and machine learning, the amount of data that exists about every person is rapidly growing. The dilemma lies in whether this data is being used for beneficial statistics or for malevolent purposes. The need to protect individuals' data has been prevalent since the 1970s, as shown in "Lessons from the Avalanche of Numbers." International governments at that time developed laws to help protect individuals' information from third parties. The journal states that these laws must be "efficient for data practices because those data practices result in profound social benefits" (Ambrose 18), such as medical and security advancements. Today, according to "Engineering the Public: Big Data, Surveillance and Computational Politics," companies such as Facebook are storing up to 100 petabytes of data on their users. Individuals freely give their data to corporations through socializing and "civic participation." While some of this data may seem harmless, much of it is now used to target individuals with advertisements or political campaigns. By modeling these datasets, analysts can predict many attributes about an individual from their use of like buttons alone. Without asking users a single question, analysts and software algorithms can now see "psychological traits as accurately as a psychologist," discovering a user's preferences. With this amount of real-time surveillance of every individual, this wealth of data will lead to serious privacy concerns. Not only could corporations use this data to bombard users with advertisements, but it could also be used with malicious intent. New laws will need to be put in place to prevent these major privacy issues; without them, individuals' private data will become exposed to the public eye, and almost everything about anyone could be known. These legal implementations are a serious challenge that developers and researchers will need to consider when expanding these technologies.
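To make the like-based prediction Tufekci describes concrete, here is a minimal, hypothetical sketch of the idea: given a handful of users whose trait label is known, score each liked page by how strongly it co-occurs with the trait, then classify a new user from their likes alone. The page names, users, and scoring rule are invented for illustration; real systems use far larger datasets and more sophisticated statistical models.

```python
from collections import Counter

def train(liked_pages_by_user, labels):
    """Estimate how strongly each page is associated with a binary trait.

    liked_pages_by_user: dict mapping user -> set of liked page names.
    labels: dict mapping user -> True/False for the trait.
    """
    pos, neg = Counter(), Counter()
    for user, pages in liked_pages_by_user.items():
        bucket = pos if labels[user] else neg
        for page in pages:
            bucket[page] += 1
    # Score = (likes from trait-positive users) - (likes from trait-negative users)
    return {page: pos[page] - neg[page] for page in set(pos) | set(neg)}

def predict(page_scores, pages):
    """Classify a new user by summing the scores of the pages they like."""
    return sum(page_scores.get(p, 0) for p in pages) > 0

# Toy training data: three users, labeled for a hypothetical trait.
likes = {
    "user_a": {"skydiving", "parties"},
    "user_b": {"parties", "concerts"},
    "user_c": {"chess", "libraries"},
}
labels = {"user_a": True, "user_b": True, "user_c": False}

scores = train(likes, labels)
print(predict(scores, {"parties"}))             # a likes-only guess for a new user
print(predict(scores, {"chess", "libraries"}))
```

The point of the sketch is not accuracy but mechanism: no question is ever asked of the new user, yet the likes they leave behind are enough to place them on one side of the trait.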


Works Cited:

Ambrose, Meg Leta. “Lessons from the Avalanche of Numbers: Big Data in Historical Perspective.” A Journal of Law & Policy for the Information Society, 2015, pp. 1–56.

 

Mindell, David A. Between Human and Machine: Feedback, Control, and Computing Before Cybernetics. Johns Hopkins University Press, 2003.

 

Tufekci, Zeynep. “Engineering the Public: Big Data, Surveillance and Computational Politics.” First Monday, 7 July 2014, firstmonday.org/article/view/4901/4097.

 
