#FactCheck - "Deepfake Video Falsely Claims Justin Trudeau Endorses Investment Project”
Executive Summary:
A viral online video claims Canadian Prime Minister Justin Trudeau promotes an investment project. However, the CyberPeace Research Team has confirmed that the video is a deepfake, created using AI technology to manipulate Trudeau's facial expressions and voice. The original footage has no connection to any investment project. The claim that Justin Trudeau endorses this project is false and misleading.

Claims:
A viral video falsely claims that Canadian Prime Minister Justin Trudeau is endorsing an investment project.

Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on the keyframes of the video. The search led us to various legitimate sources featuring Prime Minister Justin Trudeau, none of which included promotion of any investment projects. The viral video exhibited signs of digital manipulation, prompting a deeper investigation.
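For readers who wish to reproduce this first step, the sketch below shows one way keyframes can be pulled from a video before running them through a reverse-image search such as Google Lens. It is a minimal illustration using OpenCV; the file name and sampling interval are assumptions for the example, not details from our investigation.

```python
# Illustrative sketch: extract one frame every few seconds so the stills can be
# submitted to a reverse-image search. File name and interval are placeholders.
import cv2  # pip install opencv-python

def extract_keyframes(video_path: str, every_n_seconds: int = 2) -> list[str]:
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 25  # fall back if FPS metadata is missing
    step = max(1, int(fps * every_n_seconds))
    saved, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0:
            name = f"keyframe_{index:06d}.jpg"
            cv2.imwrite(name, frame)
            saved.append(name)
        index += 1
    capture.release()
    return saved

frames = extract_keyframes("viral_video.mp4")  # hypothetical file name
print(f"Extracted {len(frames)} keyframes for reverse-image search")
```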

We used AI detection tools, such as TrueMedia, to analyze the video. The analysis confirmed with 99.8% confidence that the video was a deepfake. The tools identified "substantial evidence of manipulation," particularly in the facial movements and voice, which were found to be artificially generated.
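As an illustration of how such tools are typically used, the sketch below uploads a media file to a deepfake-detection service over HTTPS and reads back a manipulation score. The endpoint URL, key, and response fields are hypothetical placeholders for the general pattern, not TrueMedia's actual API.

```python
# Illustrative sketch: submit a video to a (hypothetical) deepfake-detection API
# and read back a manipulation score. Endpoint and fields are placeholders.
import requests

API_URL = "https://api.example-detector.com/v1/analyze"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"

with open("viral_video.mp4", "rb") as media:  # hypothetical file name
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"media": media},
        timeout=120,
    )
response.raise_for_status()
result = response.json()

# Assumed response shape: {"manipulation_probability": 0.998, "signals": [...]}
score = result.get("manipulation_probability", 0.0)
print(f"Estimated probability of manipulation: {score:.1%}")
```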



Additionally, an extensive review of official statements and interviews with Prime Minister Trudeau revealed no mention of any such investment project. No credible reports were found linking Trudeau to this promotion, further confirming the video’s inauthenticity.
Conclusion:
The viral video claiming that Justin Trudeau promotes an investment project is a deepfake. Research using tools such as Google Lens and AI detection services confirms that the video was manipulated using AI technology, and no official source makes any mention of such a project. The CyberPeace Research Team therefore confirms that the claim is false and misleading.
- Claim: A viral social media video shows Justin Trudeau promoting an investment project.
- Claimed on: Facebook
- Fact Check: False & Misleading
Related Blogs

Executive Summary:
A viral claim circulated on social media that Anant Ambani and Radhika Merchant wore clothes made of pure gold during their pre-wedding cruise party in Europe. Thorough analysis revealed abnormalities in image quality, particularly around the face, neck, and hands relative to the claimed gold clothing, pointing to possible AI manipulation. A keyword search found no credible news reports or authentic images supporting the claim. Further analysis using the AI detection tools TrueMedia and Hive Moderation confirmed substantial evidence of AI fabrication, with a high probability that the image is AI-generated or a deepfake. Additionally, a photo from a previous event at Jio World Plaza matched the pose in the manipulated image, further disproving the claim and indicating that the image of Anant Ambani and Radhika Merchant wearing golden outfits during their pre-wedding cruise was digitally altered.

Claims:
Anant Ambani and Radhika Merchant wore clothes made of pure gold during their pre-wedding cruise party in Europe.



Fact Check:
When we received the posts, we noticed anomalies typically found in edited or AI-manipulated images, particularly around the face, neck, and hands.
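One common way to surface such local inconsistencies is error level analysis (ELA), which re-saves the image and amplifies the compression residual so edited regions stand out. The sketch below is shown for context only and is not necessarily the method we applied; the file names are placeholders.

```python
# Illustrative error level analysis (ELA) sketch: regions that were pasted in or
# regenerated often show a different error level than the rest of the picture.
from PIL import Image, ImageChops, ImageEnhance  # pip install pillow

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    original.save("resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("resaved.jpg")
    diff = ImageChops.difference(original, resaved)
    # Amplify the faint residual so it becomes visible to the eye.
    max_channel = max(channel[1] for channel in diff.getextrema()) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_channel)

error_level_analysis("viral_image.jpg").save("ela_result.png")  # placeholder names
```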

Such inconsistency is unusual in an authentic photograph, so we ran the image through the Hive Moderation AI-detection tool, which rated it 95.9% likely to be AI-manipulated.

We also checked with another widely used AI detection tool, TrueMedia, which assessed the image as 100% likely to have been made using AI.




This indicates that the image is AI-generated. To trace the original image that had been edited, we performed a keyword search and found a photograph with the same pose as the manipulated image, titled "Radhika Merchant, Anant Ambani pose with Mukesh Ambani at Jio World Plaza opening". Comparing the two images confirms that the viral picture is a digitally altered copy of this photograph.
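For readers who want to quantify such a comparison, a perceptual hash offers one simple measure of how closely the viral picture matches the original photograph. The sketch below is a minimal illustration under that assumption; the file names are placeholders and this is not necessarily the exact method used.

```python
# Illustrative sketch: compare the viral image with the original photograph using
# a perceptual hash. A small Hamming distance means the two images share the same
# underlying composition, consistent with an edited copy. File names are placeholders.
import imagehash  # pip install imagehash pillow
from PIL import Image

viral = imagehash.phash(Image.open("viral_gold_outfit.jpg"))
original = imagehash.phash(Image.open("jio_world_plaza_original.jpg"))

distance = viral - original  # Hamming distance between the 64-bit hashes
print(f"Perceptual hash distance: {distance}")
```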

Hence, it is confirmed that the viral image is digitally altered and has no connection with the second pre-wedding cruise party in Europe. The viral image is therefore fake and misleading.
Conclusion:
The claim that Anant Ambani and Radhika Merchant wore clothes made of pure gold at their pre-wedding cruise party in Europe is false. Analysis of the image showed signs of manipulation, and the absence of credible news reports or authentic photos further indicates that it was digitally altered. AI detection tools confirmed a high probability that the image is fake, and a comparison with a genuine photo from another event revealed that the image had been edited. Therefore, the claim is false and misleading.
- Claim: Anant Ambani and Radhika Merchant wore clothes made of pure gold during their pre-wedding cruise party in Europe.
- Claimed on: YouTube, LinkedIn, Instagram
- Fact Check: Fake & Misleading

Introduction
The Australian Parliament has passed the world's first legislation banning social media for children under 16, citing risks to children's mental and physical well-being and the need to contain misogynistic influences on them. Debate around the legislation remains intense: it is the first law of its kind and will set a precedent for how other countries assess their own laws and priorities regarding children and social media platforms.
The Legislation
The legislation mandates a complete ban on social media use by children under 16, with an age-verification system (such as biometrics or government identification) currently being trialled. The law provides no exemptions of any kind, whether for pre-existing accounts or for parental consent. With federal elections approaching, the law seeks to address parental concerns about protecting children from threats lurking on social media platforms, and every step in this regard is being observed with keen interest.
The Australian Prime Minister, Anthony Albanese, emphasised that the onus of taking responsible steps to prevent underage access falls on the social media platforms, absolving parents and their children of the same. Social media platforms such as TikTok, X, and Meta's Facebook and Instagram all come under the purview of this legislation.
CyberPeace Overview
The issue of a complete age-based ban raises a few concerns:
- It is challenging to enforce digitally, as children may find ways to circumvent such restrictions. An example is the Cinderella Law, formally known as the Shutdown Law, which the Government of South Korea implemented in 2011 to reduce online gaming and promote healthy sleeping habits among children. The law prohibited children under the age of 16 from accessing online games between 12 A.M. and 6 A.M. However, several drawbacks rendered it less effective over time: children used the login IDs of adults, switched to VPNs, or moved to offline gaming. In addition, parents felt the government was infringing on the right to privacy, and the restrictions applied only to online PC games, not to mobile phones. Consequently, the law lost relevance and was repealed in 2021.
- Age verification inherently requires collecting more personal data, which raises concerns about individual privacy.
- A ban is likely to reduce the pressure on technology and social media companies to make their services a safe, child-friendly environment.
Conclusion
Rather than relying on restrictions alone, social media platforms can adopt an approach focused on creating a safe online environment for children. An example of an impactful yet balanced step towards protecting children on social media while respecting their privacy is the U.K.'s Age-Appropriate Design Code (UK AADC), a statutory code of practice prepared under the U.K.'s data protection framework by the ICO (Information Commissioner's Office), the U.K.'s data protection regulator. It follows a safety-by-design approach for children. As we move towards a future that is predominantly online, we must continue striving to create a safe space for children and to address issues in innovative ways.
References
- https://indianexpress.com/article/technology/social/australia-proposes-ban-on-social-media-for-children-under-16-9657544/
- https://www.thehindu.com/opinion/op-ed/should-children-be-barred-from-social-media/article68661342.ece
- https://forumias.com/blog/debates-on-whether-children-should-be-banned-from-social-media/
- https://timesofindia.indiatimes.com/education/news/why-banning-kids-from-social-media-wont-solve-the-youth-mental-health-crisis/articleshow/113328111.cms
- https://iapp.org/news/a/childrens-privacy-laws-and-freedom-of-expression-lessons-from-the-uk-age-appropriate-design-code
- https://www.techinasia.com/s-koreas-cinderella-law-finally-growing-up-teens-may-soon-be-able-to-play-online-after-midnight-again
- https://wp.towson.edu/iajournal/2021/12/13/video-gaming-addiction-a-case-study-of-china-and-south-korea/
- https://www.dailysabah.com/world/asia-pacific/australia-passes-worlds-1st-total-social-media-ban-for-children

Introduction
The constantly evolving technological landscape has brought an age of unprecedented challenges, and the misuse of deepfake technology has become a cause for concern that the Indian judiciary has also addressed. The Supreme Court has expressed concern about the consequences of this rapidly developing technology, citing issues ranging from security hazards and privacy violations to the spread of disinformation. Misuse of deepfake technology is particularly dangerous because deepfakes are nearly identical to authentic content and can fool even the sharpest eye.
SC Judge Expresses Concerns: A Complex Issue
During a recent speech, Supreme Court Justice Hima Kohli emphasized the various issues that deepfakes present. She conveyed grave concerns about the possibility of invasions of privacy, the dissemination of false information, and the emergence of security threats. The ability of deepfakes to be created so convincingly that they seem to come from reliable sources is especially concerning as it increases the potential harm that may be done by misleading information.
Gender-Based Harassment Enhanced
In this internet era, there is a concerning possibility that gender-based harassment will become more severe, as Justice Kohli noted. She pointed out that internet platforms can become epicentres for the rapid spread of false information by anonymous offenders who act freely and with little accountability. Because virtual harassment is often invisible, it can be difficult to lessen the harm caused by toxic online posts. In response, it is advocated that a comprehensive policy framework be developed that adapts existing legal frameworks, such as laws prohibiting online sexual harassment, to adequately address the issues brought on by technological breakthroughs.
Judicial Stance on Regulating Deepfake Content
In a separate development, the Delhi High Court voiced concerns about the misuse of deepfakes and exercised judicial intervention to limit the use of artificial intelligence (AI)-generated deepfake content. A division bench highlighted the intricacy of the matter and proposed that the government, with its wider outlook, may be better placed to handle the situation and arrive at a fair resolution. This position reflects the court's acknowledgement of the technology's global and borderless character and highlights the need for an all-encompassing strategy.
PIL on Deepfake
In light of these concerns, an advocate from Delhi has taken it upon himself to address the unchecked use of AI, with a particular emphasis on deepfake material. His Public Interest Litigation (PIL), filed in the Delhi High Court, emphasises the necessity of either strict limits on AI or an outright prohibition should regulatory measures not be taken. At the centre of the case is the need to distinguish real information from fake. The advocate suggests using distinguishable indicators, such as watermarks, to identify AI-generated work, reiterating the demand for openness and accountability in the digital sphere.
The Way Ahead:
Finding a Balance:
- The authorities must strike a careful balance between protecting privacy, promoting innovation, and safeguarding individual rights as they negotiate the complex world of deepfakes. The Delhi High Court's cautious stance and Justice Kohli's concerns highlight the necessity for a nuanced response that takes into account the complexity of deepfake technology.
- Because of the increased sophistication with which information may be manipulated in this digital era, the courts play a critical role in preserving the integrity of the truth and shielding people from the possible dangers of misleading technology. These legal actions will surely influence how the Indian judiciary and legislature respond to deepfakes and establish guidelines for the regulation of AI in the nation. The legal environment needs to evolve as technology does, so that innovation and accountability can coexist.
Collaborative Frameworks:
- The misuse of deepfake technology poses an international problem that cuts across national boundaries. International collaborative frameworks could make it easier to share technical innovations, legal insights, and best practices. Starting a worldwide conversation on deepfake regulation would help ensure a coordinated response to this digital threat.
Legislative Flexibility:
- Given the speed at which technology is advancing, the legislative system must remain adaptable. New legislation expressly addressing emerging technologies will need to be introduced, and current laws regularly evaluated and updated. This ensures that the legal system can respond to the changing challenges posed by the misuse of deepfakes.
AI Development Ethics:
- Promoting moral behaviour in AI development is crucial. Tech businesses should abide by moral or ethical standards that place a premium on user privacy, responsibility, and openness. As a preventive strategy, ethical AI practices can lessen the possibility that AI technology will be misused for malevolent purposes.
Government-Industry Cooperation:
- It is essential that the public and commercial sectors work closely together. Governments and IT corporations should collaborate to develop and implement legislation. A thorough and equitable approach to the regulation of deepfakes may be ensured by establishing regulatory organizations with representation from both sectors.
Conclusion
A comprehensive strategy integrating technical, legal, and social interventions is necessary to navigate the path ahead. Governments, IT corporations, the courts, and the general public must all actively participate in the collective effort to combat the misuse of deepfakes, which goes beyond legal measures alone. By encouraging a shared commitment to tackling the issues raised by deepfakes, we can create a future in which the digital ecosystem is both safe and innovative. The government is on its way to introducing dedicated legislation to tackle the issue of deepfakes, following its recently issued advisory on misinformation and deepfakes.