#FactCheck - "Deepfake Video Falsely Claims Justin Trudeau Endorses Investment Project"
Executive Summary:
A viral online video claims Canadian Prime Minister Justin Trudeau promotes an investment project. However, the CyberPeace Research Team has confirmed that the video is a deepfake, created using AI technology to manipulate Trudeau's facial expressions and voice. The original footage has no connection to any investment project. The claim that Justin Trudeau endorses this project is false and misleading.

Claims:
A viral video falsely claims that Canadian Prime Minister Justin Trudeau is endorsing an investment project.

Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on the keyframes of the video. The search led us to various legitimate sources featuring Prime Minister Justin Trudeau, none of which included promotion of any investment projects. The viral video exhibited signs of digital manipulation, prompting a deeper investigation.
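The keyframe step described above can be sketched simply. The following is a minimal, hypothetical illustration of keyframe selection (not the team's actual tooling): frames are modelled as flat lists of pixel intensities, and a frame is kept whenever it differs enough from the last kept frame. A real pipeline would decode frames from the video with a library such as OpenCV before submitting the selected frames to a reverse image search like Google Lens.

```python
# Hypothetical sketch of keyframe selection for reverse-image search.
# Frames are modeled as flat lists of pixel intensities (0-255); a real
# pipeline would decode them from the video file with e.g. OpenCV.

def mean_abs_diff(a, b):
    """Average per-pixel absolute difference between two frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def select_keyframes(frames, threshold=30.0):
    """Keep the first frame, then every frame that differs from the
    previously kept frame by more than `threshold` on average."""
    if not frames:
        return []
    kept = [0]
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i], frames[kept[-1]]) > threshold:
            kept.append(i)
    return kept
```

The point of thresholding is to skip near-duplicate frames so that only visually distinct moments of the video are searched.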

We used AI detection tools, such as TrueMedia, to analyze the video. The analysis confirmed with 99.8% confidence that the video was a deepfake. The tools identified "substantial evidence of manipulation," particularly in the facial movements and voice, which were found to be artificially generated.

Additionally, an extensive review of official statements and interviews with Prime Minister Trudeau revealed no mention of any such investment project. No credible reports were found linking Trudeau to this promotion, further confirming the video’s inauthenticity.
Conclusion:
The viral video claiming that Justin Trudeau promotes an investment project is a deepfake. Research using tools such as Google Lens and AI detection services confirms that the video was manipulated using AI technology, and no official source corroborates the claim. The CyberPeace Research Team therefore concludes that the claim is false and misleading.
- Claim: A video viral on social media shows Justin Trudeau promoting an investment project.
- Claimed on: Facebook
- Fact Check: False & Misleading

Introduction
According to a shocking report, multiple scam loan apps on the Indian App Store charge excessive interest rates and force users to pay by blackmailing and harassing them. Apple has banned and removed these apps from the App Store, but they may still be installed and running on your iPhone; if you have downloaded any of them, delete them. Read on to learn the names of these apps and how the fraud operated.
Why did Apple ban these apps?
- Apple has removed certain apps from the Indian App Store that engaged in unethical behaviour, such as impersonating financial institutions, demanding high fees, and threatening borrowers. Here are the names of these apps, along with what Apple has said about their removal.
- Following user complaints, Apple removed six loan apps from the Indian App Store, including White Kash, Pocket Kash, Golden Kash, and Ok Rupee.
- According to multiple user reviews, some of these apps sought unjustified access to users' contact lists and media. They also charged exorbitant, unwarranted fees and engaged in unethical tactics such as high interest rates and "processing fees" equal to half the loan amount.
- Some users of these lending apps reported being harassed and threatened for failing to repay their loans on time. In some cases, the apps threatened to contact the user's contacts if payment was not made by the deadline; one user said the app company threatened to fabricate false photographs of her and send them to her contacts.
- According to Apple, these loan apps were removed from the App Store because they violated the norms and standards of the Apple Developer Program License Agreement, including by falsely claiming affiliation with financial institutions.
Issue of Fake loan apps on the App Store
- "The App Store and our App Review Guidelines are designed to ensure we provide our users with the safest experience possible," Apple explained. "We do not tolerate fraudulent activity on the App Store and have strict rules against apps and developers who attempt to game the system."
- In 2022, Apple blocked nearly $2 billion in fraudulent App Store sales. It also rejected nearly 1.7 million app submissions that did not meet Apple's quality and safety criteria and cancelled 428,000 developer accounts over suspected fraudulent activity.
- The scammers also used heinous tactics to force the loanees to pay. According to reports, the scammers behind the apps gained access to the user’s contact list as well as their images. They would morph the images and then scare the individual by sharing their fake nude photos with their whole contact list.
Dangerous financial fraud apps have surfaced on the App Store
- TechCrunch acquired a user review from one of these apps. “I borrowed an amount in a helpless situation, and a day before the repayment due date, I got some messages with my picture and my contacts in my phone saying that repay your loan or they will inform our contacts that you are not paying the loan,” it said.
- Sandhya Ramesh, a journalist from The Print, recently tweeted a screenshot of a direct message she got. A victim’s friend told a similar story in the message.
- TechCrunch contacted Apple, who confirmed that the apps had been removed from the App Store for breaking the Apple Developer Program License Agreement and guidelines.
Conclusion
Recently, users have reported that some quick-loan applications, such as White Kash, Pocket Kash, and Golden Kash, appeared on the Top Finance apps chart. These apps demand unauthorised, intrusive access to users' contact lists and media. According to hundreds of user reviews, they charged exorbitant, unjustified fees and used unscrupulous techniques such as demanding "processing fees" equal to half the loan amount and charging high interest rates. Users were also harassed and threatened over repayment: if payments were not made by the due date, the lending apps threatened to notify users' contacts, and according to one user, the app provider even threatened to generate fake nude images of her and send them to her contacts.

Introduction
The rise of start-up culture, increasing investment, and technological breakthroughs are encouraging innovation, including the incorporation of generative Artificial Intelligence. Given the growing focus on human-centred AI, its potential to transform industries like education is undeniable, and there is much to explore in enhancing experiences and inculcating new ways of learning. Recently, a Delhi-based non-profit called Rocket Learning, in collaboration with Google.org, launched Appu, a personalised AI educational tool providing a multilingual, conversational learning experience for children aged 3 to 6.
AI Appu
Developed in six months with the help of dedicated Google.org fellows, the interactive Appu has resonated with those the founders call "super-users", i.e. parents and caregivers. Instead of redirecting students to standard content and instructional videos, it operates on the idea of conversational learning, which is especially important for children in the targeted age bracket. Designed in the form of an elephant, Appu acts as a personalised tutor, helping both children and parents understand concepts through dialogue. When a child is in doubt, the AI can generate different explanations to aid understanding. If a child answers in mixed languages rather than one complete sentence in a single language (e.g. Hindi and English), the AI still accepts it as a response. The AI lessons are two minutes long and built around real-world examples. This emphasis on interactive, fun learning of concepts through innovation enhances the learning experience. Currently available only in Hindi, Appu is being expanded to cover 20 other languages, such as Punjabi and Marathi.
UNESCO, AI, and Education
It is important to note that such innovations also find encouragement in UNESCO's mandate, as AI in education contributes to achieving the 2030 Agenda for Sustainable Development (here, SDG 4, which focuses on quality education). Within the ambit of the 2019 Beijing Consensus, UNESCO encourages a human-centred approach to AI and has developed "Artificial Intelligence and Education: Guidance for Policymakers", which aims at understanding AI's potential and opportunities in education as well as the core competencies it needs to work on. Another publication was launched during one of UNESCO's flagship events (Digital Learning Week, 2024): AI competency frameworks for students and for teachers. These provide a roadmap for assessing the potential and risks of AI, each covering common aspects such as AI ethics and a human-centred mindset, as well as distinct topics such as AI system design for students and AI pedagogy for teachers.
Potential Challenges
While AI holds immense promise in education, innovation in learning remains contentious, as several risks must be carefully managed. Depending on the innovation, AI's struggle with tasks beyond the classroom, such as administrative duties and tedious grading that require highly detailed role descriptions, could prove a challenge. This can become exhausting for developers managing innovative AI systems, as they must anticipate and accommodate varied responses, owing to the inherent need to train AI to produce output. Security is another major concern, as data breaches could compromise sensitive student information. Implementation costs also present challenges, since access to AI-driven tools depends on financial resources. Furthermore, AI-driven personalised learning, while beneficial, may inadvertently reduce student motivation and compromise soft skills such as teamwork and communication, which are crucial for real-world success. These risks highlight the need for a balanced approach to AI integration in education.
Conclusion
Innovations in education, especially those that take a human-centred AI approach, have immense potential not only to enhance learning experiences but also to reshape how knowledge is accessed, understood, and applied. There is also untapped potential for similar services in this sector. However, maintaining a balance between fostering curiosity and ensuring ethical, secure AI remains imperative.
References
- https://www.unesco.org/en/articles/what-you-need-know-about-unescos-new-ai-competency-frameworks-students-and-teachers?hub=32618
- https://www.unesco.org/en/digital-education/artificial-intelligence
- https://www.deccanherald.com/technology/google-backed-rocket-learning-launches-appu-an-ai-powered-tutor-for-kids-3455078
- https://indianexpress.com/article/technology/artificial-intelligence/how-this-google-backed-ai-tool-is-reshaping-education-appu-9896391/
- https://www.thehindu.com/business/ai-appu-to-tutor-children-in-india/article69354145.ece
- https://www.velvetech.com/blog/ai-in-education-risks-and-concerns/

Key points: Data collection, Protecting Children, and Awareness
Introduction
Technology has evolved drastically over time, reshaping mankind and its lifestyle. For even the smallest task, humans now rely on the computers they have manufactured. The use of AI has arguably hindered human development: children today are more lethargic about working and writing sensibly on their own, and are more interested in television, video games, and mobile games. School kids use AI just to complete their homework. Is that a good sign for the country's future? Studies suggest that tools like ChatGPT threaten a person's capacity to be creative and produce original content that requires a human writer's insight, and that such tools can erase students' unique artistic voices and writing styles.
Do browsers or search engines use your search history against you? And how do non-users end up losing their private information to such services?
Are governments taking any safety measures to protect their people's rights?
Some of us might wonder how these two worlds merge: are they a boon or a curse?
Here is the top news flooding the internet worldwide:
“Italian Agency Imposes Strict Measures on OpenAI’s ChatGPT”
Italy has become the first Western European country to take serious measures against OpenAI's ChatGPT. The Italian data protection agency, known as Garante, has set mandates on ChatGPT, raising concerns about privacy violations and the inability to verify users' ages. Garante has also claimed that the AI chatbot violates the EU's General Data Protection Regulation (GDPR). In a press release, Garante demanded that OpenAI take the necessary corrective actions.
To begin with, Garante has demanded that OpenAI increase ChatGPT's transparency and publish a comprehensive statement about its data processing practices. OpenAI must clarify whether it obtains user consent for processing users' data to train its AI model or relies on another legitimate legal basis, and it must maintain the privacy of users' data.
In addition, OpenAI should take measures to prevent minors from accessing the technology, since exposure at such an early stage of life could hinder their development; in particular, it should add an age verification system to keep minors away from explicit content. Moreover, Garante suggests that OpenAI make its users aware that their data is being processed to train its AI model. Garante has set a deadline of April 30 for OpenAI to complete these tasks; until then, the service is to remain banned in the country.
Child safety while using ChatGPT
The Italian agency demands an age limit for using the service: an age verification method to exclude users under 13, and parental consent for users between the ages of 13 and 18. This is a matter of security, as children might be exposed to explicit or illegitimate content inappropriate for their age, and the AI chatbot cannot reliably determine which content is suitable for an underage audience. Through tools like chatbots, mature material is already within easy reach of young students, endangering their development. ChatGPT can also hinder young minds' potential and ability to create original, creative content; it threatens people's motivation to write. Moreover, when students need time to think and analyse, tools like ChatGPT make them lethargic, and the practice they need fades away.
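The age-gating rule Garante is asking for can be expressed as a simple decision function. This is a hypothetical sketch of that rule only (block under-13s, require parental consent for 13 to 17, allow adults), not an actual OpenAI or Garante implementation:

```python
# Hypothetical sketch of the age-gating rule described above:
# under 13 -> deny; 13-17 -> allow only with parental consent; 18+ -> allow.

def access_decision(age: int, parental_consent: bool = False) -> str:
    """Return "allow", "needs_consent", or "deny" for a given user."""
    if age < 13:
        return "deny"
    if age < 18:
        return "allow" if parental_consent else "needs_consent"
    return "allow"
```

In practice the hard part is not this rule but verifying the declared age reliably, which is precisely the gap Garante flagged.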
Collection of User’s Data
According to the company's privacy policy, OpenAI's ChatGPT collects an assortment of data. When a session starts, it asks users to log in or sign up, for example through a Gmail account, and it collects the IP address, browser type, and whatever the user enters as input; in other words, it collects data on the user's interaction with the website. It also collects data such as session time and cookies set through third parties, and this data may be sold to unspecified third parties.
A snapshot of the privacy policy shows that OpenAI added a few disclosures after Garante's draft order.
Conclusion
The AI chatbot ChatGPT is an advanced technological tool that makes work a little easier, but anyone using such tools must stay aware of the information they are asked to provide. Such AI bots are trained to understand mankind; their job is to lend a helping hand, not to mislead. Even so, some people unknowingly provide sensitive information, and young minds can be exposed to explicit content, so such bots need age limitations. Innovations like these will keep appearing, but it is each individual's responsibility to decide what may access their internet-connected devices. The Italian agency, for its part, has taken preventive measures to keep its citizens' data safe, mindful of the adverse effects such chatbots can have on young minds.