AI & Privacy

It was not a given that we would lose ownership of our personal digital/data outputs. As with every new technological innovation and the often unintended phenomena that accompany it, we have a choice as to what to do with all the “stuff” we generate. Unwieldy technopolies like Meta, Amazon, or Alphabet were not bestowed ownership of our digital debris; instead, according to Shoshana Zuboff in The Age of Surveillance Capitalism, they developed a new form of capitalism. Under the new paradigm, “[s]urveillance capitalism unilaterally claims human experience as free raw material for translation into behavioral data…the rest are declared as a proprietary behavioral surplus, fed into advanced manufacturing processes known as ‘machine intelligence,’ and fabricated into prediction products that anticipate what you will do now, soon, and later.” We are, therefore, subject to shaping and molding in the most nuanced of ways through the use of our data.

These mountains of data now follow us, determine us, and incriminate us. We have created personal archives, not only of ourselves but also of our family and friends (often without their express permission), that rival those of royalty. All of this associated data becomes especially problematic, however, when paired with AI. According to an IBM post entitled “Exploring privacy issues in the age of AI,” the main issues for AI and privacy include:

  • Collection of sensitive data,
  • Collection of data without consent,
  • Use of data without permission,
  • Unchecked surveillance and bias,
  • Data exfiltration (theft), and
  • Data leakage (accidental exposure).

However, such a nicely worded list obscures the kinds of activities happening to unsuspecting citizens. An example is the case of Williams v. City of Detroit, in which Robert Williams was wrongfully arrested due to a false facial recognition match. The police attempted to identify a suspect from grainy surveillance footage of the incident by sharing it with the Michigan State Police to run a “face recognition technology search.”

One of the known limits of this technology, however, is racial bias. In an article entitled “Police Facial Recognition Technology Can’t Tell Black People Apart,” Dr. Thaddeus Johnson and Dr. Natasha Johnson write:

facial recognition technology (FRT) can worsen racial inequities in policing. We found that law enforcement agencies that use automated facial recognition disproportionately arrest Black people. We believe this results from factors that include the lack of Black faces in the algorithms’ training data sets, a belief that these programs are infallible and a tendency of officers’ own biases to magnify these issues.

This is why the EU AI Act addressed issues such as these head on. The first law of its kind in the world, the EU AI Act established four levels of risk for AI use in society. The four levels are:
  • Unacceptable risk – systems considered a clear threat to the safety, livelihoods and rights of people are banned.
  • High risk – systems that pose serious risks to health, safety, or fundamental rights.
  • Limited risk – systems subject to specific disclosure obligations to ensure that humans are informed when necessary to preserve trust.
  • Minimal risk – the vast majority of AI systems currently used in the EU fall into this category (e.g., games).

While the vast majority of AI systems in the EU fall into the minimal risk category, AI used in policing falls into the high risk category. Other activities that fall into the high risk category are ones commonly used in both the public and private sectors in the United States. For example, it is now considered a high risk AI activity to use AI tools in employment, specifically to sort resumes of potential hires, because of the errors, long-term damage, and loss of employment opportunities individuals may face when a resume is never reviewed by a person who understands the nuances present in the document. The same reasoning holds for the grading of schoolwork, as it “may determine the access to education and course of someone’s professional life.”