Since the beginning of Spring Semester, I have been working on a new research project with my colleague, Josh Azriel. Together, we are crafting a textbook for undergraduates on digital privacy rights. Understanding our digital privacy rights in the United States is an essential part of our lives, for we have less and less autonomy over the digital footprints we create.
I believe a textbook on this topic is long overdue. I began teaching at Kennesaw State University in the School of Media and Communication over a decade ago. In the earlier part of my career, when I would ask my students whether they had an expectation of privacy, the majority would, without hesitation, say yes. In the intervening years, however, the response has reversed. I rarely, if ever, have a student who has an expectation of online privacy. There is an inherent sense of resignation in the air when I pose the question. However, the erosion of our digital rights does not have to be a “given,” for there are examples from around the world in which individuals have a say in how corporate and governmental entities use their data.
I have spent the last few weeks thinking through the outline for the chapter on AI and which facets of industry and society need to be discussed. The introduction of the chapter needs to explain what AI is, the types of AI that currently exist, AI’s connection to big data and why that is of central concern to the issue of privacy, and what it would mean to integrate AI more fully into most aspects of our daily lives. Even a Google search is now crowned with Google’s AI overview. More and more of what we do online is touched, in some way, by AI. However, AI has also begun to seep into those aspects of our lives which had heretofore been considered offline.
Although AI may be living in McLuhan’s global village, nations around the globe have begun to legislate AI. Providing an understanding of the current patchwork of legislation, some proposed and some already law, is an important part of the chapter. While the discussion will take the form primarily of an overview, examples from around the world demonstrate that the lack of protection in the United States is not the only way forward. For a short period of time, the United States did have some overarching protections through President Biden’s Executive Order 14110, which called for greater protections and safety in the use of AI; however, President Trump revoked it on his first day in office. The US Federal Government, via the Office of Management and Budget, released two memos this month, M-25-21: Accelerating Federal Use of AI through Innovation, Governance, and Public Trust and M-25-22, to provide guidance on the use of AI in both the public and private sectors. Previously, the Federal Government had issued a request for information (RFI) for the development of an Artificial Intelligence (AI) Action Plan, which opened in January and closed in mid-March of 2025.
The United States of America, however, is not the only player in this game. Canada, our neighbor to the north, has already introduced a Code of Practice for generative AI as well as a directive on automated decision-making. The European Union leads the world in AI legislation with the passage of the EU AI Act in 2024. On the continent of Africa, Mauritius issued the 2018 Artificial Intelligence Strategy and published its 2030 Digital Strategic Plan. Countries including Egypt, Morocco, and South Africa have all issued AI policy frameworks to guide them toward national legislation. Finally, Chinese scholars proposed a draft of the Artificial Intelligence Law of the People’s Republic of China in 2024.
The next section of the chapter needs to examine the ways in which sectors of our society, such as medicine and education, are engaging with AI. Finally, the chapter will need to consider the dangers of, as well as best practices for, artificial intelligence.
