Rise of AI-Generated Fake Media

In an era where seeing is no longer believing, the rise of AI-generated fake media is reshaping our digital landscape. Advances in Artificial Intelligence have fueled a surge of fake media, making it increasingly difficult to distinguish real content from fabricated content. Deepfake videos can make anyone appear to say anything, AI-altered images can distort reality, and voice cloning can mimic voices with unsettling accuracy — the potential for deception is unprecedented. These AI-driven scams are not just technological curiosities; they pose real threats to our privacy, security, and trust.

As our LMU community navigates this new digital frontier, understanding and mitigating these risks is more crucial than ever. Join us as we delve into the world of virtual deception and uncover the tactics cybercriminals use to exploit these advanced technologies. Here’s a detailed look at AI-generated risks, their impact on daily activities, and practical tips to mitigate them.

AI-generated Media Types and Potential Risks

  • Deepfake Videos: Malicious actors can use AI to create fake videos of faculty members or administrative staff, spreading false information or damaging reputations. 
  • Altered Images: AI-generated images can be used to create misleading visuals that alter public perception of events or individuals. 
  • Voice Cloning: AI-generated audio can clone the voices of faculty or staff, allowing cybercriminals to impersonate them and gain access to sensitive information. 
  • AI-Generated Communication: AI can generate convincing text that enhances disinformation campaigns, spreading false narratives about the institution or its members. 

Potential Impact on Faculty, Staff, and Students

  • Research Integrity: Deepfake videos or AI-generated text could misrepresent research findings or academic opinions, undermining the credibility of faculty members. 
  • Classroom Security: Voice cloning could be used to impersonate professors, leading to unauthorized access to virtual classrooms or the dissemination of false information to students. 
  • Administrative Security: AI-generated emails or voice messages could impersonate university officials, leading to unauthorized access to administrative systems or the mishandling of sensitive information. 
  • Staff Reputation: Altered images or deepfake videos could damage the reputation of staff members, leading to personal and professional consequences. 
  • Campus Misinformation: AI-generated images of campus events or faculty members could be used to spread misinformation. For example, an altered image showing a fake protest on campus could cause confusion and panic among students and staff. 
  • Student Personal Information: AI-generated phishing attempts could trick students into sharing personal information, leading to identity theft or financial loss. 
  • Academic Integrity: AI-generated content could be used to cheat on assignments or exams, undermining the integrity of the academic process. 
  • Voice Cloning of Professors or University Leadership: A cybercriminal could use AI-generated audio to clone the voice of an LMU professor, calling students to request personal information or directing them to submit assignments to a fraudulent platform. 

How to Protect Yourself

1. Verify Sources

  • Check Authenticity: Always verify the source of information, especially if it seems sensational or urgent. Rely on trusted news outlets and official university communications.
  • Fact-Checking Tools: Use fact-checking websites and tools to confirm the authenticity of information before sharing it with others.

2. Restrict Sharing Information

  • Privacy Settings: Review the privacy settings of external websites and social media platforms, and set your accounts and personal information to private.
  • Limit How Much You Share Online: Avoid oversharing personal or sensitive information publicly so it cannot be collected and misused.

3. Monitor and Report

  • Regular Monitoring: Regularly monitor your university accounts and any systems you manage for unusual activity. Set up alerts for suspicious transactions or login attempts.
  • Report Suspicious Activity: If you encounter any suspicious content or activity, report it to servicedesk@lmu.edu or the appropriate authorities immediately. 

4. Stay Informed & Be Vigilant

  • Stay Up to Date on Current Scams: Educate yourself on common AI-generated scams. Check out the LMU Phish Bowl to see the latest scams seen across our community.
  • Awareness Programs: Participate in cybersecurity awareness programs and training offered by LMU. Stay informed about the latest AI-related threats and how to recognize them. Learn more here. 

 

How to Report AI-Generated Fake Media Incidents

By understanding the risks of AI-generated fake media and implementing preventive measures, we can collectively protect our LMU community and maintain a secure and trustworthy environment. Stay vigilant, stay informed, and prioritize cybersecurity in all your activities. 

If you have any specific concerns or need further assistance, feel free to reach out to the ITS Service Desk at servicedesk@lmu.edu. If you are at risk of potential harm, please notify LMU Campus Safety and local law enforcement. 

If you have any questions or need more detailed advice, the ITS Information Security Team is here to help! 

 

The following resources from the Cybersecurity & Infrastructure Security Agency (CISA) provide additional guidance on the risks and threats of AI-generated fake media. 

Social Media Bots Infographic 

Tactics of Disinformation & AI-Generated Fake Media