Julia Kelley
Artificial intelligence is quickly becoming a vital part of everyday life, from social media to healthcare services, compelling governments worldwide to introduce regulations in the face of potentially harmful developments.
The World Artificial Intelligence Conference. ITU Pictures. CC BY-NC-SA 2.0.
As artificial intelligence develops at speed, it is becoming omnipresent in daily life. Whether we are looking something up in a search engine or simply scrolling through social media, our web spaces are increasingly inundated with automated applications. While the technology is intended to make consumers’ lives easier, its widespread distribution and easy accessibility risk amplifying harmful rhetoric and compromising data protection for the general population. In an interview with Virginia Tech Engineer, Ella Atkins, head of Virginia Tech’s Department of Aerospace and Ocean Engineering, warns of AI’s use as a propaganda engine and its ability to mislead consumers. This has become a rising concern as certain AI technology is weaponized politically; during the last U.S. presidential election, for example, U.S. intelligence officials found that Russian services funded the widespread distribution of deepfake videos and misinformation targeting Democratic candidate Kamala Harris.

Moreover, AI is unique in its ability to collect large amounts of data from users, sometimes without their consent, and insufficient regulation leaves that data vulnerable. Between March 2024 and February 2025 alone, Michigan’s Ponemon Institute found that 600 organizations globally experienced data breaches, with 97% admitting they lacked established AI access controls and 63% stating they had no AI governance policy or were still in the process of developing one.
In the face of this changing digital landscape, governments worldwide are recognizing the need for new regulations. Brazil, for example, passed an AI Act in December 2024 aimed at protecting consumers’ rights and managing risk. The bill categorizes AI systems by risk level, excessive, high or low, to certify that models are safe for public use. It also establishes a user’s right to privacy, data protection and non-discrimination when using AI, enforced through documentation requirements, incident reporting and the promotion of best practices. By complying, AI companies can guard against data breaches while operating under ethical standards. Likewise, the European Union’s AI Act imposes transparency requirements to disclose which content is AI-generated and to prevent the dissemination of illegal content; it also requires high-risk AI systems to be registered in an EU database. Some countries are going a step further, including Denmark, which is amending its copyright law to protect individuals against AI-generated deepfakes. These images or videos, created by deep machine learning, can impersonate individuals by producing realistic digital representations of their appearance and voice, making it appear as though they are doing or saying something compromising.
Since 2023, China has also administered regulations on generative AI, implementing risk categorization for AI services, ensuring that data is processed lawfully in line with consumers’ intellectual property and personal information rights, and strengthening cybersecurity. Chinese Premier Li Qiang recently called for a new organization to foster global cooperation on the development and security of AI as the technology rapidly evolves. He made the announcement at the July 2025 World Artificial Intelligence Conference, days after U.S. President Donald Trump released his AI Action Plan, which proposed limiting federal funding to states with restrictive AI laws and supported accelerated AI development and adoption in government under fewer regulations. While many view the plan as fueling a fast-paced race to expand AI, the introduction of stronger regulations worldwide signals a broadening recognition of the need for governance, including in open-source development. As the technology permeates global systems ever more deeply, worldwide collaboration is becoming imperative.
GET INVOLVED:
Those looking to support the ethical adoption of AI can explore organizations such as ForHumanity, which analyzes the risks associated with AI; the Center for AI and Digital Policy, which aims to build AI education promoting fundamental rights and democratic values; and the AI Sustainability Center, which focuses on the impacts of AI on people and society. You can also check out organizations like Doteveryone, The Governance Lab, Privacy International and the Open Data Institute, which work with governments and other organizations to support research on AI, regulation and the protection of individuals’ privacy and data. To guard against harmful uses of AI, technology users can also learn about its risks, including deepfake technologies, hacking and threats to data privacy.
Julia Kelley
Julia is a recent graduate of UC San Diego, where she majored in Sociocultural Anthropology with a minor in Art History. She is passionate about cultural studies and social justice, and one day hopes to pursue a postgraduate degree in these subjects. In her free time, she enjoys reading, traveling, and spending time with her friends and family.
