#FactCheck - "Deepfake Video Falsely Claims Justin Trudeau Endorses Investment Project"
Executive Summary:
A viral online video claims Canadian Prime Minister Justin Trudeau promotes an investment project. However, the CyberPeace Research Team has confirmed that the video is a deepfake, created using AI technology to manipulate Trudeau's facial expressions and voice. The original footage has no connection to any investment project. The claim that Justin Trudeau endorses this project is false and misleading.

Claims:
A viral video falsely claims that Canadian Prime Minister Justin Trudeau is endorsing an investment project.

Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on the keyframes of the video. The search led us to various legitimate sources featuring Prime Minister Justin Trudeau, none of which included promotion of any investment projects. The viral video exhibited signs of digital manipulation, prompting a deeper investigation.
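As a rough illustration of the first step of this methodology (not CyberPeace's actual tooling), keyframe-based verification usually begins by sampling frames at regular intervals so each one can be submitted to a reverse image search. The helper below only computes which frame indices to sample; the video decoding itself would be handled by a library such as OpenCV, and the two-second interval is an assumed default:

```python
def keyframe_indices(total_frames: int, fps: float, every_seconds: float = 2.0) -> list[int]:
    """Return evenly spaced frame indices, one roughly every `every_seconds`.

    total_frames: number of frames in the clip
    fps: frames per second of the clip
    every_seconds: sampling interval (hypothetical default)
    """
    if total_frames <= 0 or fps <= 0:
        return []
    # Convert the time interval into a frame step, sampling at least every frame.
    step = max(1, round(fps * every_seconds))
    return list(range(0, total_frames, step))

# e.g. a 10-second clip at 30 fps, sampled every 2 seconds
print(keyframe_indices(300, 30.0))  # → [0, 60, 120, 180, 240]
```

Each sampled frame would then be exported as an image and checked against legitimate footage via a tool like Google Lens.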

We used AI detection tools, such as TrueMedia, to analyze the video. The analysis confirmed with 99.8% confidence that the video was a deepfake. The tools identified "substantial evidence of manipulation," particularly in the facial movements and voice, which were found to be artificially generated.



Additionally, an extensive review of official statements and interviews with Prime Minister Trudeau revealed no mention of any such investment project. No credible reports were found linking Trudeau to this promotion, further confirming the video’s inauthenticity.
Conclusion:
The viral video claiming that Justin Trudeau promotes an investment project is a deepfake. Research using tools such as Google Lens and AI detection tools confirms that the video was manipulated using AI technology, and no official source links Trudeau to any such project. Thus, the CyberPeace Research Team confirms that the claim is false and misleading.
- Claim: Justin Trudeau promotes an investment project viral on social media.
- Claimed on: Facebook
- Fact Check: False & Misleading
Related Blogs
Introduction
MeitY’s Indian Computer Emergency Response Team (CERT-In), in collaboration with SISA, a global leader in forensics-driven cybersecurity, launched the ‘Certified Security Professional for Artificial Intelligence’ (CSPAI) program on 23rd September. This initiative marks the first ANAB-accredited AI security certification of its kind. The CSPAI also complements global AI governance efforts: international frameworks such as the OECD AI Principles and the European Union's AI Act, which aim to regulate AI technologies to ensure fairness, transparency, and accountability in AI systems, serve as reference points for this initiative.
About the Initiative
The Certified Security Professional for Artificial Intelligence (CSPAI) is the world’s first ANAB-accredited certification program that focuses on Cyber Security for AI. The collaboration between CERT-In and SISA plays a pivotal role in shaping AI security policies. Such partnerships between the public and private players bridge the gap between government regulatory needs and the technological expertise of private players, creating comprehensive and enforceable AI security policies. The CSPAI has been specifically designed to integrate AI and GenAI into business applications while aligning security measures to meet the unique challenges that AI systems pose. The program emphasises the strategic application of Generative AI and Large Language Models in future AI deployments. It also highlights the significant advantages of integrating LLMs into business applications.
The program is tailored for security professionals to understand the do’s and don’ts of AI integration into business applications, with a comprehensive focus on sustainable practices for securing AI-based applications. This is achieved through comprehensive risk identification and assessment frameworks recommended by ISO and NIST. The program also emphasises continuous assessment and conformance to AI laws across various nations, ensuring that AI applications adhere to standards for trustworthy and ethical AI practices.
Aim of the Initiative
As AI technology integrates itself to become an intrinsic part of business operations, a growing need for AI security expertise across industries is visible. Keeping this thought in the focal point, the accreditation program has been created to equip professionals with the knowledge and tools to secure AI systems. The CSPAI program aims to make a safer digital future while creating an environment that fosters innovation and responsibility in the evolving cybersecurity landscape focusing on Generative AI (GenAI) and Large Language Models (LLMs).
Conclusion
This Public-Private Partnership between the CERT-In and SISA, which led to the creation of the Certified Security Professional for Artificial Intelligence (CSPAI) represents a groundbreaking initiative towards AI and its responsible usage. CSPAI can be seen as an initiative addressing the growing demand for cybersecurity expertise in AI technologies. As AI becomes more embedded in business operations, the program aims to equip security professionals with the knowledge to assess, manage, and mitigate risks associated with AI applications. CSPAI as a programme aims to promote trustworthy and ethical AI usage by aligning with frameworks from ISO and NIST and ensuring adherence to AI laws globally. The approach is a significant step towards creating a safer digital ecosystem while fostering responsible AI innovation. This certification will significantly impact the healthcare, finance, and defence sectors, where AI is rapidly becoming indispensable. By ensuring that AI applications meet the requirements of security and ethical standards in these sectors, CSPAI can help build public trust and encourage broader AI adoption.
References
- https://pib.gov.in/PressReleasePage.aspx?PRID=2057868
- https://www.sisainfosec.com/training/payment-data-security-programs/cspai/
- https://timesofindia.indiatimes.com/business/india-business/cert-in-and-sisa-launch-ai-security-certification-program-to-integrate-ai-into-business-applications/articleshow/113622067.cms
Introduction
Digitalization has been a transformative force in India, which now ranks second in the world in terms of active internet users. With this adoption of digital technology, the country is becoming a digitally empowered society and knowledge-based economy. However, the number of cybercrimes in the country has also spiked sharply in recent years, with cybercriminals using sophisticated attacks and manipulative techniques to lure innocent individuals and businesses.
As per recent reports, over 740,000 cybercrime cases were reported to the I4C in the first four months of 2024 alone, raising serious concern about the growing scale of cybercrime in the country. Recently, Prime Minister Modi, in his Mann Ki Baat address, cautioned the public about a rising cyber scam known as ‘digital arrest’, highlighted the seriousness of the issue, and urged people to stay aware and alert in order to counter such scams. The government has been keen to reduce and combat cybercrime by introducing new measures and strengthening the regulatory landscape governing cyberspace in India.
Indian Cyber Crime Coordination Centre
The Indian Cybercrime Coordination Centre (I4C) was established by the Ministry of Home Affairs (MHA) to provide a framework and ecosystem for law enforcement agencies (LEAs) to deal with cybercrime in a coordinated and comprehensive manner. I4C operates the ‘National Cyber Crime Reporting Portal’ (https://cybercrime.gov.in) and the 1930 Cyber Crime Helpline. Recently, at the I4C Foundation Day celebration, Union Home Minister Amit Shah launched the Cyber Fraud Mitigation Centre (CFMC), the Samanvay platform (Joint Cybercrime Investigation Facilitation System), the 'Cyber Commandos' program, and an Online Suspect Registry in an effort to combat cybercrime, build cyber resilience and awareness, and strengthen the capabilities of law enforcement agencies.
Regulatory landscape Governing Cyber Crimes
The Information Technology Act, 2000 (IT Act) and the rules made thereunder, the Intermediary Guidelines, the Digital Personal Data Protection Act, 2023, and the Bharatiya Nyaya Sanhita, 2023 are the major legislations governing cyber law in India.
CyberPeace Recommendations
There has been an alarming uptick in cybercrimes in the country highlighting the need for proactive approaches to counter these emerging threats. The government should prioritise its efforts by introducing robust policies and technical measures to reduce cybercrime in the country. The law enforcement agencies' capabilities must be strengthened with advanced technologies to deal with cyber crimes especially considering the growing sophisticated nature of cyber crime tactics used by cyber criminals.
The netizens must be aware of the manipulative tactics used by cyber criminals to target them. Social media companies must also implement robust measures on their respective platforms to counter and prevent cyber crimes. Coordinated approaches by all relevant authorities, including law enforcement, cybersecurity agencies, and regulatory bodies, along with increased awareness and proactive engagement by netizens, can significantly reduce cyber threats and online criminal activities.
References
- https://www.statista.com/statistics/1499739/india-cyber-crime-cases-reported-to-i4c/#:~:text=Cyber%20crime%20cases%20registered%20by%20I4C%20India%202019%2D2024&text=Over%20740%2C000%20cases%20of%20cyber,related%20to%20online%20financial%20fraud
- https://www.deccanherald.com/india/parliament-panel-to-examine-probe-agencies-efforts-to-tackle-cyber-crime-illegal-immigration-3270314
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2003158

Introduction
The sexual harassment of minors in cyberspace has become a matter of grave concern that needs to be addressed. Sextortion is the practice of extorting individuals into sharing explicit and sexual content under the threat of exposure. This grim activity has evolved into a pervasive issue on several social media platforms, particularly Instagram. To combat this illicit act, corporate giants such as Meta have deployed a comprehensive ‘nudity protection’ feature, leveraging AI (Artificial Intelligence) algorithms to detect and curb the rapid distribution of unsolicited explicit content.
The Meta Initiative presented a multifaceted approach to improve user safety, especially for young people online, who are more vulnerable to predatory behavior.
The Salient Feature
Instagram’s use of advanced AI algorithms to automatically identify and blur out explicit images shared within direct messages is the driving force behind this initiative. This new safety measure serves two essential purposes.
- Preventing dissemination of sensitive content - The feature, when enabled, obstructs the visibility of sensitive personal pictures and also limits dissemination of the same.
- Empowering minors to exercise more control over their social media - This cutting-edge feature can be disabled at the will of users, allowing users, including minors, to regulate their exposure to age-inappropriate and harmful material online. The nudity protection feature is enabled by default for all users under 18 on Instagram globally, guaranteeing a baseline standard of security for the most vulnerable demographic of users. Adults have more autonomy over the feature, receiving periodic prompts for its voluntary activation. When the feature detects an explicit image, it automatically blurs the image with a cautionary overlay, enabling recipients to make an informed decision about whether or not they wish to view the flagged content. The decision to introduce this feature is a sensitive approach to balancing individual agency with institutionalised online protection.
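The blurring step itself is standard image processing. As a toy sketch (not Meta's actual implementation, and assuming a separate classifier has already flagged the image), a box blur replaces each pixel with the average of its neighbourhood, which is roughly how an obscuring overlay degrades detail until the recipient opts in to viewing the original:

```python
def box_blur(image: list[list[float]]) -> list[list[float]]:
    """Blur a 2D grayscale grid by averaging each pixel with its 3x3 neighbourhood.

    A toy stand-in for the kind of blur a platform might apply to a
    flagged image before the recipient chooses whether to view it.
    """
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Gather the pixel's 3x3 neighbourhood, clipped at the edges.
            vals = [
                image[ny][nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2))
            ]
            out[y][x] = sum(vals) / len(vals)
    return out

# A sharp bright pixel on a dark background spreads into its neighbours:
sharp = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
print(box_blur(sharp)[1][1])  # → 1.0 (9 averaged over the full 3x3 window)
```

In production, platforms would use optimised library routines (e.g. a Gaussian blur) on full-colour images, but the principle of irreversibly reducing detail before display is the same.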
Comprehensive Safety Measures Beyond Nudity Detection
The cutting-edge nudity protection feature is a crucial element of Instagram’s new strategy and is supported by a comprehensive set of measures devised to tackle sextortion and ensure a safe cyber environment for its users:
Awareness Drives and Safety Tips - Users sending and receiving sexually explicit content are directed to a screen with curated safety tips to ensure complete user awareness and inspire due diligence. These safety tips are critical in raising awareness about the risks of sharing sensitive content and inculcating responsible online behaviour.
New Technology to Identify Sextortionists - Meta Platforms are constantly evolving, and new sophisticated algorithms are introduced to better detect malicious accounts engaged in possible sextortion. These proactive measures check for any predatory behaviour so that such threats can be neutralised before they escalate and do grave harm.
Superior Reporting and Support Mechanisms - Instagram is implementing new technology to bolster its reporting mechanisms so that users reporting concerns pertaining to nudity, sexual exploitation and threats are instantaneously directed to local child safety authorities for necessary support and assistance.
This sophisticated approach highlights Instagram's commitment to forging a safer environment for users by addressing various aspects of this grim issue through the three-pronged strategy of detection, prevention and support.
User’s Safety and Accountability
The implementation of the nudity protection feature and various associated safety measures is Meta’s way of tackling the growing concern about user safety in a more proactive manner, especially when it concerns minors. Instagram’s experience with this feature will likely be the sandbox in which Meta tests its new user protection strategy and refines it before extending it to other platforms like Facebook and WhatsApp.
Critical Reception and Future Outlook
The nudity protection feature has been met with positive feedback from experts and online safety advocates, commending Instagram for taking a proactive stance against sextortion and exploitation. However, critics also emphasise the need for continued innovation, transparency, and accountability to effectively address evolving threats and ensure comprehensive protection for all users.
Conclusion
As digital spaces continue to evolve, Meta Platforms must demonstrate an ongoing commitment to adapting its safety measures and collaborating with relevant stakeholders to stay ahead of emerging challenges. Ongoing investment in advanced technology, user education, and robust support systems will be crucial in maintaining a secure and responsible online environment. Ultimately, Instagram's nudity protection feature represents a significant step forward in the fight against online sexual exploitation and abuse. By leveraging cutting-edge technology, fostering user awareness, and implementing comprehensive safety protocols, Meta Platforms is setting a positive example for other social media platforms to prioritise user safety and combat predatory behaviour in digital spaces.
References
- https://www.nbcnews.com/tech/tech-news/instagram-testing-blurring-nudity-messages-protect-teens-sextortion-rcna147402
- https://techcrunch.com/2024/04/11/meta-will-auto-blur-nudity-in-instagram-dms-in-latest-teen-safety-step/
- https://hypebeast.com/2024/4/instagram-dm-nudity-blurring-feature-teen-safety-info