Deepfake Alert: Sachin Tendulkar's Warning Against Technology Misuse
Introduction
Deepfakes have become a source of worry in an age of advanced technology, particularly when they involve the manipulation of public figures for deceitful ends. A deepfake video of cricket legend Sachin Tendulkar advertising a gaming app recently went viral on social media, prompting the sports icon to issue a warning against the widespread misuse of technology.
The Deepfake Incident
In the deepfake video, Sachin Tendulkar appears to endorse a gaming app called Skyward Aviator Quest. The video's startling realism led some viewers to assume that the cricket legend genuinely backs the app. Tendulkar, however, took to social media to make clear that the video is fake, highlighting the troubling trend of technology being abused for deceitful ends.
Tendulkar's Reaction
Sachin Tendulkar expressed his concern about the exploitation of technology and urged people to report such videos, advertisements, and applications that spread disinformation. The incident underscores the importance of raising awareness of, and vigilance about, the legitimacy of material circulated on social media platforms.
The Warning Signs
The deepfake video raises questions not just because of its lifelike depiction of Tendulkar, but also because of the material it promotes. An endorsement of a gaming app that purports to help users make money is a significant red flag, especially when it appears to come from a well-known figure. This underscores how deepfakes can be exploited for financial gain, and why information that appears too good to be true deserves scrutiny.
How to Protect Yourself Against Deepfakes
As deepfake technology advances, it is critical to recognise the telltale signs of manipulation. Here are some pointers to help you spot deepfake videos (a minimal automated check of one of these cues is sketched after this list):
- Facial Movements and Expressions: Look for unnatural or stiff facial movements and expressions.
- Body Motion and Posture: Take note of awkward body movements or inconsistencies in the person's posture.
- Lip Sync and Audio Quality: Look for mismatches between the audio and the lip movements.
- Background and Content: Consider the video's context, especially if it shows a well-known figure endorsing something out of character.
- Source Verification: Verify the video's legitimacy by checking the official channels or accounts of the person featured.
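For technically inclined readers, some of these cues can be screened for automatically. Below is a minimal sketch in Python, assuming the opencv-python and mediapipe packages and a hypothetical input file suspect.mp4; it measures frame-to-frame facial-landmark jitter, a rough proxy for the unnatural facial motion described above. Treat it as a screening heuristic that flags clips for closer human review, not as a reliable deepfake detector.

```python
# A minimal, illustrative heuristic, NOT a production deepfake detector.
# Assumes opencv-python and mediapipe are installed; "suspect.mp4" is a
# hypothetical file name used only for illustration.
import cv2
import mediapipe as mp
import numpy as np

def landmark_jitter(video_path: str, max_frames: int = 300) -> float:
    """Return the mean per-frame displacement of facial landmarks.

    Deepfakes sometimes show either unnaturally smooth or erratically
    jumpy facial motion; extreme values on either end are a cue to
    inspect the clip more closely (not proof of manipulation).
    """
    face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)
    cap = cv2.VideoCapture(video_path)
    prev, deltas = None, []
    while cap.isOpened() and len(deltas) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not results.multi_face_landmarks:
            continue  # no face found in this frame; skip it
        pts = np.array([(p.x, p.y) for p in results.multi_face_landmarks[0].landmark])
        if prev is not None:
            # Average Euclidean displacement of all landmarks since the last frame.
            deltas.append(np.linalg.norm(pts - prev, axis=1).mean())
        prev = pts
    cap.release()
    return float(np.mean(deltas)) if deltas else float("nan")

if __name__ == "__main__":
    print(f"mean landmark jitter: {landmark_jitter('suspect.mp4'):.5f}")
```

In practice, such a score would be compared against values from known-genuine footage of the same person; on its own it only helps prioritise which clips to verify through official channels.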
Conclusion
The proliferation of deepfake videos threatens the credibility of social media content. Sachin Tendulkar's response to the deepfake in which he appears is a reminder to users to remain vigilant and report questionable material. As technology advances, individuals and authorities must work together to counter the misuse of AI-generated content and safeguard the integrity of online information.
References
- https://www.news18.com/tech/sachin-tendulkar-disturbed-by-his-new-deepfake-video-wants-swift-action-8740846.html
- https://www.livemint.com/news/india/sachin-tendulkar-becomes-latest-victim-of-deepfake-video-disturbing-to-see-11705308366864.html
Related Blogs
Introduction
Snapchat's Snap Map redefined location sharing with an ultra-personalised feature that allows users to track where they and their friends are, discover hotspots, and even explore events worldwide. In November 2024, Snapchat introduced a new addition to its Family Center, aiming to bolster teen safety. This update enables parents to request and share live locations with their teens, set alerts for specific locations, and monitor who their child shares their location with.
While designed with safety in mind, such tracking tools raise significant privacy concerns. Misuse of these features could expose teens to potential harm, amplifying the debate around safeguarding children's online privacy. This blog delves into the privacy and safety challenges Snap Map poses under existing data protection laws, highlighting critical gaps and potential risks.
Understanding Snap Map: How It Works and Why It’s Controversial
Snap Map, built on technology from Snap's acquisition of the social mapping startup Zenly, revolutionises real-time location sharing by letting users track friends, send messages, and explore the world through an interactive map. With Snapchat counting over 350 million active users by Q4 2023, led by India's 202.51 million users, Snap Map has become a global phenomenon.
This opt-in feature allows users to customise their location-sharing settings, offering modes like "Ghost Mode" for privacy, sharing with all friends, or selectively with specific contacts. However, location updates occur only when the app is in use, adding a layer of complexity to privacy management.
While it empowers users to connect and share, Snap Map's location-sharing capabilities raise serious concerns. Unintentional sharing or misuse of the tool could expose users, especially teens, to risks like stalking or predatory behaviour. As Snap Map grows in popularity, ensuring its safe use and addressing its potential for harm remains a critical challenge for users and regulators.
The Policy Vacuum: Protecting Children’s Data Privacy
Given the potential misuse of location-sharing features, it is important to evaluate the existing regulatory frameworks protecting children's geolocation privacy. Geolocation features remain under-regulated in many jurisdictions, creating opportunities for misuse such as stalking or unauthorised surveillance. Several national and international privacy laws are now in force or under development; the most notable are the Children's Online Privacy Protection Act (COPPA) in the US, the General Data Protection Regulation (GDPR) in the EU, and India's Digital Personal Data Protection (DPDP) Act, 2023, all of which have made considerable progress on children's privacy and online safety. COPPA and the GDPR prioritise children's online safety through strict data protections, consent requirements, and limits on profiling. The DPDP Act prohibits behavioural tracking and targeted advertising aimed at children, enhancing their privacy. However, it lacks safeguards against geolocation tracking, leaving a critical gap in protecting children from risks posed by location-based features.
Balancing Innovation and Privacy: The Role of Social Media Platforms
Privacy is an essential right that must be safeguarded, and this is especially important for children, who are vulnerable to harms they cannot always foresee. Social media companies must uphold their responsibility to ensure their platforms do not become a breeding ground for offences against children. This requires robust parental controls and consent mechanisms, so that parents are informed about their children's online presence and can opt out of services they consider unsafe for them. Platforms must also be transparent about what data they collect and about their data-sharing and retention policies.
Policy Recommendations: Addressing the Gaps
Some of the recommendations for addressing the gaps in the safety of minors are as follows:
- Enhancing privacy and safety for minors by taking measures such as mandatory geolocation restrictions for underage users.
- Integrating clear consent guidelines for the protection of user data.
- Collaboration between stakeholders such as government, social media platforms, and civil society is necessary to create awareness about location-sharing risks among parents and children.
Conclusion
Safeguarding privacy, especially of children, with the introduction of real-time geolocation tools like Snap Map, is critical. While these features offer safety benefits, they also present the danger of misuse, potentially harming vulnerable teens. Policymakers must urgently update data protection laws and incorporate child-specific safeguards, particularly around geolocation tracking. Strengthening regulations and enhancing parental controls are essential to protect young users. However, this must be done without stifling technological innovation. A balanced approach is needed, where safety is prioritised, but innovation can still thrive. Through collaboration between governments, social media platforms, and civil society, we can create a digital environment that ensures safety and progress.
References
- https://indianexpress.com/article/technology/tech-news-technology/snapchat-family-center-real-time-location-sharing-travel-notifications-9669270/
- https://economictimes.indiatimes.com/tech/technology/snapchat-unveils-location-sharing-features-to-safeguard-teen-users/articleshow/115297065.cms?from=mdr
- https://www.thehindu.com/sci-tech/technology/snapchat-adds-more-location-safety-features-for-teens/article68871301.ece
- https://www.moneycontrol.com/technology/snapchat-expands-parental-control-with-location-tracking-to-make-it-easier-for-parents-to-track-their-kids-article-12868336.html
- https://www.statista.com/statistics/545967/snapchat-app-dau/
Brief Overview of the EU AI Act
The EU AI Act, Regulation (EU) 2024/1689, was officially published in the EU Official Journal on 12 July 2024. This landmark legislation on Artificial Intelligence (AI) entered into force 20 days after publication, on 1 August 2024, setting harmonized rules across all 27 EU Member States; it amends key regulations and directives to ensure a robust framework for AI technologies. In development for two years, the Act takes a phased approach to implementation, with various deadlines before the majority of its provisions become enforceable on 2 August 2026. The law prohibits certain uses of AI that threaten citizens' rights, including biometric categorization systems, untargeted scraping of facial images, emotion recognition in the workplace and schools, and social scoring; it also prohibits predictive policing tools in some instances.
The framework puts different obligations on AI developers, depending on use cases and perceived risk. The bulk of AI uses will not be regulated as they are considered low-risk, but a small number of potential AI use cases are banned under the law. High-risk use cases, such as biometric uses of AI or AI used in law enforcement, employment, education, and critical infrastructure, are allowed under the law but developers of such apps face obligations in areas like data quality and anti-bias considerations. A third risk tier also applies some lighter transparency requirements for makers of tools like AI chatbots.
Companies providing, distributing, importing, or using AI systems and GPAI models in the EU that fail to comply with the Act are subject to fines of up to EUR 35 million or seven per cent of total worldwide annual turnover, whichever is higher.
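To make the "whichever is higher" rule concrete, here is a minimal sketch in Python; the turnover figure is a made-up example, not data from any real company.

```python
# Illustrative arithmetic for the AI Act's top penalty tier:
# up to EUR 35 million or 7% of total worldwide annual turnover,
# whichever is higher.
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical company with EUR 2 billion turnover: 7% is EUR 140 million,
# which exceeds the EUR 35 million floor, so the percentage figure applies.
print(max_fine_eur(2_000_000_000))  # 140000000.0
```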
Key highlights of EU AI Act Provisions
- The AI Act classifies AI according to its risk. It prohibits unacceptable-risk applications such as social scoring systems and manipulative AI, and most of the regulation addresses high-risk AI systems (a schematic mapping of the risk tiers to their headline obligations is sketched after this list).
- Limited-risk AI systems are subject to lighter transparency obligations: developers and deployers must ensure that end-users are aware they are interacting with AI, as with chatbots and deepfakes. The Act allows the free use of minimal-risk AI, which covers the majority of applications currently on the EU single market, such as AI-enabled video games and spam filters, though this may change as generative AI advances. The bulk of the obligations fall on providers (developers) that intend to place high-risk AI systems on the market or put them into service in the EU, regardless of whether they are based in the EU or a third country, and on third-country providers where the high-risk system's output is used in the EU.
- Users are natural or legal persons who deploy an AI system in a professional capacity, not affected end-users. Users (deployers) of high-risk AI systems have some obligations, though less than providers (developers). This applies to users located in the EU, and third-country users where the AI system’s output is used in the EU.
- Providers of general-purpose AI (GPAI) models must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. Providers of free and open-license GPAI models need only comply with copyright and publish the training-data summary, unless the model presents a systemic risk. All providers of GPAI models that present a systemic risk, whether open or closed, must also conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections.
- The Codes of Practice will account for international approaches. They will cover, but are not necessarily limited to, the obligations above, particularly the relevant information to include in technical documentation for authorities and downstream providers, the identification of the type, nature, and sources of systemic risks, and the modalities of risk management, accounting for the specific challenges of addressing risks as they emerge and materialize throughout the value chain. The AI Office may invite GPAI model providers and relevant national competent authorities to participate in drawing up the codes, while civil society, industry, academia, downstream providers, and independent experts may support the process.
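As a rough schematic of the tiered structure described in this list, not a statement of the legal text, the mapping from risk tier to headline obligation can be sketched as a simple lookup:

```python
# A simplified, non-authoritative summary of the AI Act's risk tiers;
# the binding obligations are those in Regulation (EU) 2024/1689 itself.
RISK_TIERS = {
    "unacceptable": "prohibited (e.g., social scoring, manipulative AI)",
    "high": "conformity obligations: data quality, documentation, anti-bias, oversight",
    "limited": "transparency: tell end-users they are interacting with AI",
    "minimal": "free use (e.g., AI-enabled video games, spam filters)",
}

def headline_obligation(tier: str) -> str:
    """Return the headline obligation for a given risk tier."""
    return RISK_TIERS.get(tier.lower(), "unknown tier")

print(headline_obligation("high"))
```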
Application & Timeline of Act
The EU AI Act will be fully applicable 24 months after entry into force, but some parts apply sooner: the ban on AI systems posing unacceptable risks applies six months after entry into force, the Codes of Practice nine months after, and the rules on general-purpose AI systems that must comply with transparency requirements 12 months after. High-risk systems have more time: the obligations concerning them become applicable 36 months after entry into force. The expected timeline is:
- August 1st, 2024: The AI Act will enter into force.
- February 2025: Chapters I (general provisions) & II (prohibited AI systems) will apply; the prohibition of certain AI systems takes effect.
- August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Article 78 (confidentiality) will apply, except for Article 101 (fines for GPAI providers); requirements for new GPAI models take effect.
- August 2026: The whole AI Act applies, except for Article 6(1) and its corresponding obligations (one of the categories of high-risk AI systems).
- August 2027: Article 6(1) & corresponding obligations apply.
The AI Act sets out clear definitions for the different actors involved in AI: providers, deployers, importers, distributors, and product manufacturers (collectively referred to as “Operators”). This means all parties involved in the development, usage, import, distribution, or manufacturing of AI systems will be held accountable. The Act applies to any AI system within the EU that is on the market, in service, or in use, covering both AI providers (the companies selling AI systems) and AI deployers (the organizations using them). It also has extraterritorial application: it applies to providers and deployers located outside the EU, e.g., in Switzerland, if they make an AI system or GPAI model available on the EU market, or even if only the output generated by the system is intended to be used in the EU.
CyberPeace Outlook
The EU AI Act, approved by EU lawmakers in 2024, is landmark legislation designed to protect citizens' health, safety, and fundamental rights from potential harms caused by AI systems. The Act applies to AI systems and GPAI models, and adopts a risk-based approach to governance, categorizing potential risks into four tiers: unacceptable, high, limited, and minimal, with stiff penalties for noncompliance. Violations of the ban on prohibited systems carry the highest fine: EUR 35 million, or 7 per cent of global annual revenue, whichever is higher. The Act establishes transparency requirements for general-purpose AI systems, provides specific rules for GPAI models, and lays down more stringent requirements for GPAI models with 'high-impact capabilities' that could pose a systemic risk and significantly impact the internal market. For high-risk AI systems, it addresses fundamental rights impact assessments and data protection impact assessments.
The EU AI Act aims to enhance trust in AI technologies by establishing clear regulatory standards for AI. We encourage regulatory frameworks that strive to balance the desire to foster innovation with the critical need to prevent unethical practices that may cause user harm. The legislation strengthens the EU's position as a global leader in AI innovation and in developing regulatory frameworks for emerging technologies, and it sets a global benchmark for regulating AI. Companies to which the Act applies will need to ensure their practices align with it, and the Act may inspire other nations to develop their own legislation, contributing to global AI governance. The world of AI is complex and challenging; implementing regulatory checks and securing compliance from the companies concerned pose a conundrum. In the end, however, balancing innovation with ethical considerations is paramount.
At the same time, the tech sector welcomes regulatory progress but warns that overly rigid rules could stifle innovation; flexibility and adaptability are key to effective AI governance. The journey towards robust AI regulation has begun in major jurisdictions, and it is important to strike the right balance between safety and innovation while taking industry reactions into consideration.
References:
- https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689
- https://www.theverge.com/2024/7/12/24197058/eu-ai-act-regulations-bans-deadline
- https://techcrunch.com/2024/07/12/eus-ai-act-gets-published-in-blocs-official-journal-starting-clock-on-legal-deadlines/
- https://www.wsgr.com/en/insights/eu-ai-act-to-enter-into-force-in-august.html
- https://www.techtarget.com/searchenterpriseai/tip/Is-your-business-ready-for-the-EU-AI-Act
- https://www.simmons-simmons.com/en/publications/clyimpowh000ouxgkw1oidakk/the-eu-ai-act-a-quick-guide
On the occasion of the 20th edition of Safer Internet Day 2023, CyberPeace, in collaboration with UNICEF, DELNET, NCERT, and the National Book Trust (NBT), India, took a step towards safer cyberspace by launching the iSafe Multimedia Resources, CyberPeace TV, and the CyberPeace Café at an event held today in Delhi.
CyberPeace also showcased its efforts, in partnership with UNICEF, to create a secure and peaceful online world through its Project iSafe, which aims to bridge the knowledge gap between emerging advancements in cybersecurity and first responders. Through Project iSafe, CyberPeace has successfully raised awareness among law enforcement agencies, education departments, and frontline workers across various fields. The event marked a significant milestone in the efforts of the foundation to create a secure and peaceful online environment for everyone.
Launching CyberPeace TV, the café, and the iSafe material, Lt Gen Rajesh Pant, National Cybersecurity Coordinator, Government of India, interacted with the students and introduced them to the theme of this Safer Internet Day. He spoke about the collaborative cyber challenge initiatives being undertaken by countries worldwide and stressed that content is most important in cyberspace. He also assured everyone that the Government of India is taking many steps at the national level to make cyberspace safer, and complimented CyberPeace Foundation on its initiatives.
Ms. Zafrin Chaudhry, Chief of Communication, UNICEF, addressed the students, noting that children account for one in three users of cyberspace, so they deserve a safe online environment. They should be informed and equipped with the knowledge to deal with any issues they face in cyberspace, and should share their experiences to make others aware. UNICEF, in partnership with CPF, is helping equip children with that information and support.
Major Vineet Kumar, Founder and Global President of CPF, welcomed the gathering and introduced the launch of the iSafe Multimedia Resources, CyberPeace TV, and the CyberPeace Café. He also shed light on upcoming plans, such as a metaverse learning module using AR and VR. Wanting to make cyberspace safe even in tier-3 cities, he established the first CyberPeace Café in Ranchi.
As the internet plays a crucial role in our lives, CyberPeace has taken action to combat potential cyber threats. It introduced CyberPeace TV, the world’s first multilingual TV channel focused on education and entertainment, available on Jio TV, alongside a comprehensive online platform providing the latest cybersecurity news, expert analysis, and a community for all stakeholders in the field. CyberPeace also launched its first CyberPeace Café for creators and innovators, and released the iSafe Multimedia Resources, containing flyers, posters, an e-handbook, and a handbook on digital safety for children, developed jointly by CyberPeace, UNICEF, and NCERT for the public.
O.P. Singh, former DGP, UP Police, and CEO of the Kailash Satyarthi Foundation, opened with data on internet users in India. The internet is now part of day-to-day life, primarily through social media, and he advised students to take a channelised approach to cyberspace: fixed screen time, access to the right content, and mindful internet usage. He said he appreciated the initiatives CyberPeace is taking in this direction.
The celebration continued with an iSafe Panel Discussion on “Creating Safer Cyberspace for Children.” The discussion was moderated by Dr. Sangeeta Kaul, Director of DELNET, and attended by panellists Mr. Rakesh Maheshwari from MeitY (Ministry of Electronics and Information Technology, Government of India), Dr. Indu Kumar from CIET-NCERT, Ms. Bindu Sharma from ICMEC, and Major Vineet Kumar from CyberPeace.
The event was also graced by professional artists from the National School of Drama, who performed Nukkad Natak and Qawwali based on cyber security themes. Students from SRDAV school also entertained the audience with their performances. The attendees were also given a platform to share their experiences with online security issues, and ICT Awardees, Parents and iSafe Champions shared their insights with the guests. The event also had stalls by CyberPeace Corps, a Global volunteer initiative, and CIET-NCERT for students to explore and join the cause. The event’s highlight was the 360 Selfie Booth, where attendees lined up to have their turn.