Politics in Pop Culture

The Fantastic Nightmare of Generative AI: The lack of legislation surrounding generative AI has made its use increasingly dangerous.

Have you ever wanted to ride a cotton candy horse in outer space, own a refrigerator that dispenses full seafood boils at the press of a button or know what you would look like as a pile of melted goo? Unfortunately, you cannot, even though it’s 2025. You can, however, watch yourself doing all of those things through OpenAI’s Sora.

Sora 2, launched by OpenAI on September 30, is a video generation platform in which the user uploads images and enters a prompt to create short videos of 15 to 20 seconds. Users immediately flooded Instagram Reels with Sora 2-generated content ranging from fruit-pet hybrids to deepfakes of a Charlie Kirk / Bhad Bhabie fusion singing her hit song, “Gucci Flip Flops.”

These videos can be remarkably advanced in quality and realism, making it increasingly difficult to tell the difference between an AI-generated video and a real one. The conflation of the virtual and the real is exacerbated not only by AI-generated video but by large language models as well, and the repercussions are dangerous. There has already been a surge in AI-related scams, the spread of misinformation and even mental health disorders, yet there is still a severe lack of policy around AI.

Let me reiterate: there is currently no legislation inhibiting the development or use of AI. In 2023, President Biden signed Executive Order 14110, titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which aimed to prevent AI-driven fraud and privacy violations and established standards and practices for detecting AI-generated content, such as stamping material with watermarks or banners. While the order would not have eradicated AI, it would have reduced many of the risks associated with it by educating organizations and the public on ethical uses of AI and on how to identify AI-generated material.

However, President Trump rescinded this executive order in January 2025 and replaced it with one entitled “Removing Barriers to American Leadership in Artificial Intelligence.” Trump’s new order revoked the parts of Biden’s mandate that protected the public: it dismantled the AI Safety Institute and removed the requirement that companies developing AI models trained on extensive data test those models against sophisticated threats, such as hacking, and report the results to the federal government.

While it can be funny and seemingly harmless to scroll through these videos, the problem arises when reality is conflated with the digital world. Already, elderly populations have lost a total of $3.4 billion to AI-related scams. One such scam, “voice cloning,” uses AI to mimic the voices of loved ones. With only a few seconds of a voice clip pulled from social media, scammers can generate an entire dialogue in that person’s voice, in which they claim to be in an emergency, such as a kidnapping, and to need funds immediately. The elderly are already more vulnerable to scams because of their unfamiliarity with the digital world, and machine-learning software complicates this further by creating fake websites that are almost identical to the real thing. An 82-year-old veteran lost $97,000 in a cryptocurrency scam: he believed he was investing in a government-backed cryptocurrency when in reality the website he was using was fake, and scammers used AI to send him fake emails and documents that propped up the facade.

While AI may seem easy to detect for those who have grown up in the digital age, the truth is that the software is advancing ever faster, creating ever more realistic content. Deepfakes (synthetic audio, video or images that convincingly imitate a real person’s appearance or voice) are equally concerning, given their capacity to generate and spread disinformation. This is especially worrying given the period of heightened political polarization the U.S. is currently living through: a candidate running for office can use AI to generate campaign material designed to catch a viewer’s attention, even if it spreads misinformation. President Trump’s most recent campaign used an AI-generated image of Taylor Swift wearing a Swift-ified Uncle Sam costume and pointing at the viewer, captioned “Taylor wants you to vote for Donald Trump.” Additional images showed crowds of “Swifties” wearing shirts that read “Swifties for Trump.” Viewers quickly spotted the telltale characteristics of generative AI in these images, but with recent advances in image and video generation through platforms such as Sora 2, those tells may soon disappear.

Most people use AI every day without even realizing it. When you say “Hey Siri,” scroll through Instagram or shop online, you are relying on algorithms that determine what you enjoy or are most likely to click on in order to maximize your consumption. There are also AI platforms, such as Google’s Gemini and OpenAI’s ChatGPT, that many people use for productivity gains. While a common case made for AI is that it makes us more efficient and therefore smarter, it is also responsible for a phenomenon known as “false narcissism”: the illusion that one’s abilities are limitless, when in reality the user is conflating their own work with the AI’s output.

Say a student is writing an essay and uses ChatGPT to generate an outline. Prompted with only a few words, ChatGPT can produce an extremely elaborate output that far surpasses the student’s knowledge; yet, because the prompt was so simple, the student internalizes this work as their own. As a result, the student may feel simultaneously like a success and a fraud, unable to tell where their work ends and the AI’s begins. This is just one small example of AI’s impact on the brain.

Researchers fear a mental health crisis in which adolescents, elderly adults and people with mental illness become psychologically attached to AI, leading to issues such as delusional thinking, emotional dysregulation and social withdrawal. With no legislation regulating AI use, more people are at risk of developing mental health issues through their interactions with these models.

However, a stigma around AI is already taking hold. A recent MIT study found that people are biased against content created by generative AI. When subjects were presented with AI-generated content that was labeled as such, they responded negatively and expressed a positive bias towards content created by humans. However, when shown unlabeled AI-generated content, subjects preferred it over the human-generated alternative. As generative AI expands even further into our lives, we must accept that at some point, AI-generated and human-created works will be indistinguishable when viewed without context. The only current solution is to be aware of AI’s potential presence, be skeptical of the content we consume, never take it at face value, seek out trustworthy sources and be more vigilant than ever in our fact-checking.


The image featured in this article is licensed for reuse under the Creative Commons Attribution 2.0 Generic license. The photo was originally created by Imagen 4 AI. No changes were made to the original, which can be found here.
