#FactCheck - Deepfake Alert: Virat Kohli's Alleged Betting App Endorsement Exposed
Executive Summary
A viral video allegedly featuring cricketer Virat Kohli endorsing a betting app named ‘Aviator’ is being widely shared across social media platforms. The CyberPeace Research Team’s investigation revealed that the video was created using deepfake technology. We found several anomalies in the viral video that indicate it was produced with synthetic media, and no genuine celebrity endorsement for the app exists. We have also previously debunked similar deepfake videos of Virat Kohli that misused this technology. The spread of such content underscores the need for social media platforms to implement robust measures to combat online scams and misinformation.
Claims:
A video circulating on social media depicts Indian cricketer Virat Kohli endorsing a betting app called "Aviator." The video features the Indian news channel India TV, in which the journalist appears to endorse the betting app, followed by Virat Kohli describing his experience with it.
Fact Check:
Upon receiving the video, we watched it thoroughly and found several anomalies typical of deepfake videos: the journalist’s lip sync is off, and on close inspection the lip movements do not match the audio. The same is true when Virat Kohli speaks in the video.
We then divided the video into keyframes and ran a reverse image search on one of the frames from Kohli’s segment. We found a similar video, uploaded on his verified Instagram handle, in which Virat Kohli wears the same brown jacket; it is an ad promotion in collaboration with American Tourister.
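The keyframe step described above can be approximated programmatically. In practice the frames would come from a video-decoding tool; as an illustrative sketch (not the team's actual workflow), the heuristic below selects "keyframes" as frames that differ noticeably from the last selected one, using synthetic NumPy arrays in place of real decoded frames. The function name and threshold are hypothetical choices for this example.

```python
import numpy as np

def select_keyframes(frames, threshold=25.0):
    """Return indices of frames that differ noticeably from the previously
    selected frame -- a simple scene-change heuristic for picking keyframes
    to feed into a reverse image search."""
    if not frames:
        return []
    keyframes = [0]                       # always keep the first frame
    last = frames[0].astype(np.float32)
    for i, frame in enumerate(frames[1:], start=1):
        f = frame.astype(np.float32)
        # mean absolute pixel difference against the last kept frame
        if np.mean(np.abs(f - last)) > threshold:
            keyframes.append(i)
            last = f
    return keyframes

# Synthetic demo: three "scenes" of 8x8 grayscale frames
dark = np.zeros((8, 8), dtype=np.uint8)
mid = np.full((8, 8), 120, dtype=np.uint8)
bright = np.full((8, 8), 240, dtype=np.uint8)
frames = [dark] * 3 + [mid] * 3 + [bright] * 3
print(select_keyframes(frames))  # → [0, 3, 6]
```

Each selected index marks a visually distinct moment; exporting those frames as images gives a small set of candidates to run through a reverse image search, rather than searching every frame.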
After going through the original video in full, it is evident that Virat Kohli is not endorsing any betting app; rather, he is promoting a collaboration with American Tourister.
We then ran keyword searches to check whether India TV had published any news as claimed in the viral video, but found no credible source.
Therefore, having noted the major anomalies in the video and conducted further analysis, we conclude that the video was created using synthetic media and is fake and misleading.
Conclusion:
The video of Virat Kohli promoting a betting app is fake and does not actually feature the celebrity endorsing the app. This raises serious concerns about how artificial intelligence is being misused for fraudulent activities. Social media platforms need to take action against the spread of fake videos like these.
Claim: Video surfacing on social media shows Indian cricket star Virat Kohli promoting a betting application known as "Aviator."
Claimed on: Facebook
Fact Check: Fake & Misleading
Related Blogs
Introduction
US President Biden has taken a significant step by signing a key executive order to manage the risks posed by Artificial Intelligence (AI). The presidential order, signed on 30 October 2023, sets rules for a rapidly growing technology that holds great potential but also carries real risks, and it is the strongest action the US government has taken to date on AI safety and security. The order requires developers of the most powerful AI models to share their safety test results with the government before releasing their products to the public. It also calls for developing standards for the ethical use of AI and for detecting and labelling AI-generated content. As AI rapidly advances, it poses risks such as displacing human workers, spreading misinformation, and stealing people’s data. The White House has made clear that this is not just America’s problem: the US intends to work with the world to set standards and ensure the responsible use of AI, and it is also urging Congress to pass comprehensive privacy legislation. The order includes new safety guidelines for AI developers, standards for disclosing AI-generated content, and requirements for federal agencies that utilise AI. In a related development, India has reported its biggest-ever data breach, in which the data of 81.5 crore Indians was leaked from the Indian Council of Medical Research (ICMR), the country’s apex medical research institution.
Key highlights of the presidential order
The presidential order requires developers to share safety test results and focuses on developing standards, tools, and tests to ensure AI is safe. It seeks to protect Americans from AI-enabled fraud, safeguard their privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, protect against the risk of AI being used to engineer dangerous materials, and provide guidelines for detecting AI-generated content while establishing overall standards for AI safety and security.
Online content authentication and labelling
The Biden administration has asked the Department of Commerce to set guidelines for authenticating content originating from the government, so that the American people can trust official documents. Alongside content authentication, the order addresses labelling AI-generated content, making it possible to distinguish an authentic piece of content from something that has been manipulated or generated using AI.
ICMR Breach
On 31 October 2023, an American intelligence and cybersecurity agency flagged the biggest-ever Indian data breach, putting the data of 81.5 crore Indians at risk of making its way to the dark web. The agency reported that a threat actor known as ‘pwn001’ had shared a thread on Breach Forums, which bills itself as the ‘premier Databreach discussion and leaks forum’, claiming a breach affecting 81.5 crore Indians. As of today, ICMR has not issued an official statement, but the government has been informed that the prestigious Central Bureau of Investigation (CBI) will take on the investigation and pursue the cybercriminals behind the attack. In a post on X (formerly Twitter), ‘pwn001’ stated that Aadhaar and passport information, along with personal data such as names, phone numbers, and addresses, had been leaked. The data was reportedly extracted from the COVID-19 test records of citizens registered with ICMR. This poses a serious threat to Indian netizens, exposing them to cybercrime from anywhere in the world.
Conclusion:
The US presidential order on AI is a move towards making artificial intelligence safe and secure. It is a major step by the Biden administration to protect both Americans and the world from the considerable dangers of AI, requiring the development of standards, tools, and tests to ensure AI safety. The US administration will work with allies and global partners, including India, to develop a strong international framework to govern the development and use of AI and ensure its responsible use. With the passing of legislation such as the Digital Personal Data Protection Act, 2023, it is pertinent that the Indian government works towards creating precautionary and preventive measures to protect Indian data. As cyber laws continue to evolve, we need to keep an eye on emerging technologies and update our digital routines and hygiene to stay safe and secure.
References:
- https://m.dailyhunt.in/news/india/english/lokmattimes+english-epaper-lokmaten/biden+signs+landmark+executive+order+to+manage+ai+risks-newsid-n551950866?sm=Y
- https://www.hindustantimes.com/technology/in-indias-biggest-data-breach-personal-information-of-81-5-crore-people-leaked-101698719306335-amp.html?utm_campaign=fullarticle&utm_medium=referral&utm_source=inshorts
Introduction
Misinformation is rampant all over the world and affects people at large. In 2023, UNESCO commissioned a survey on the impact of fake news, conducted by IPSOS across 16 countries due to hold national elections in 2024, representing a total of 2.5 billion voters. The survey showed how pressing the need for effective regulation has become: 85% of respondents are apprehensive about the repercussions of online disinformation and misinformation. In light of these concerns, UNESCO has introduced an action plan to regulate social media platforms, which have become major sources of misinformation and hate speech online. The action plan, supported by the worldwide opinion survey, outlines the fundamental principles that must be respected and the concrete measures to be implemented by all associated stakeholders, i.e., governments, regulators, civil society, and the platforms themselves.
The Key Areas in Focus of the Action Plan
The action plan focuses on protecting freedom of expression, access to information, and other human rights in digital platform governance. It works on the basic premise that the impact on human rights must be the compass for all decision-making, at every stage and by every stakeholder. It calls for groups of independent regulators to work in close coordination as part of a wider network, preventing digital companies from taking advantage of disparities between national regulations, and for content moderation that is feasible and effective at the required scale, in all regions and all languages.
The algorithms of these online platforms, particularly social media platforms, are too often geared towards maximizing engagement rather than the reliability of information. Platforms are required to take more initiative in educating and training users to be critical thinkers rather than passive consumers of content. Regulators and platforms must also be ready to take strong measures during particularly sensitive periods, ranging from elections to crises, especially given the information overload now taking place.
Key Principles of the Action Plan
- Human Rights Due Diligence: Platforms are required to assess their impact on human rights, including gender and cultural dimensions, and to implement risk mitigation measures. This would ensure that the platforms are responsible for educating users about their rights.
- Adherence to International Human Rights Standards: Platforms must align their design, content moderation, and curation with international human rights standards. This includes ensuring non-discrimination, supporting cultural diversity, and protecting human moderators.
- Transparency and Openness: Platforms are expected to operate transparently, with clear, understandable, and auditable policies. This includes being open about the tools and algorithms used for content moderation and the results they produce.
- User Access to Information: Platforms should provide accessible information that enables users to make informed decisions.
- Accountability: Platforms must be accountable to their stakeholders, including users and the public, ensuring that redressal for content-related decisions is not compromised. This accountability extends to the implementation of their terms of service and content policies.
Enabling Environment for the application of the UNESCO Plan
The UNESCO Action Plan to counter misinformation aims to foster an environment where freedom of expression and access to information flourish, while ensuring safety and security for digital platform users and non-users alike. This endeavour calls for collective action: societies as a whole must work together, with relevant stakeholders, from vulnerable groups to journalists and artists, enabling the right to expression.
Conclusion
The UNESCO Action Plan is a response to the dilemma created by information overload, particularly now that the distinction between information and misinformation has become so clouded. The IPSOS survey has revealed the urgency of addressing these challenges for users who fear the repercussions of misinformation.
The UNESCO action plan provides a comprehensive framework that emphasises the protection of human rights, particularly freedom of expression, while also emphasizing the importance of transparency, accountability, and education in the governance of digital platforms as a priority. By advocating for independent regulators and encouraging platforms to align with international human rights standards, UNESCO is setting the stage for a more responsible and ethical digital ecosystem.
The recommendations include integrating regulators through collaboration and promoting global cooperation to harmonize regulations, expanding digital literacy campaigns to educate users about misinformation risks and online rights, ensuring inclusive access to diverse content in multiple languages and contexts, and monitoring and refining technological advancements and regulatory strategies as challenges evolve, ultimately promoting a trustworthy online information landscape.
Reference
- https://www.unesco.org/en/articles/online-disinformation-unesco-unveils-action-plan-regulate-social-media-platforms
- https://www.unesco.org/sites/default/files/medias/fichiers/2023/11/unesco_ipsos_survey.pdf
- https://dig.watch/updates/unesco-sets-out-strategy-to-tackle-misinformation-after-ipsos-survey
Introduction
Assisted Reproductive Technology (“ART”) refers to a diverse set of medical procedures designed to aid individuals or couples in achieving pregnancy when conventional methods are unsuccessful. This umbrella term encompasses various fertility treatments, including in vitro fertilization (IVF), intrauterine insemination (IUI), and gamete and embryo manipulation. ART procedures involve the manipulation of both male and female reproductive components to facilitate conception.
The dynamic landscape of data flows within the healthcare sector, notably in the realm of ART, demands a nuanced understanding of the complex interplay between privacy regulations and medical practices. In this context, the Information Technology (Reasonable Security Practices And Procedures And Sensitive Personal Data Or Information) Rules, 2011, play a pivotal role, designating health information as "sensitive personal data or information" and underscoring the importance of safeguarding individuals' privacy. This sensitivity is particularly pronounced in the ART sector, where an array of personal data, ranging from medical records to genetic information, is collected and processed. The recent Assisted Reproductive Technology (Regulation) Act, 2021, in conjunction with the Digital Personal Data Protection Act, 2023, establishes a framework for the regulation of ART clinics and banks, presenting a layered approach to data protection.
A note on data generated by ART
Data flows in any sector are scarcely uniform and often not easily classified under straight-jacket categories. Consequently, mapping and identifying data and its types become pivotal. Most data flows in the healthcare sector are highly sensitive and personal in nature, and a breach may severely compromise the privacy and safety of an individual. The Information Technology (Reasonable Security Practices And Procedures And Sensitive Personal Data Or Information) Rules, 2011 (“SPDI Rules”) categorize any information pertaining to physical, physiological or mental conditions, or medical records and history, as “sensitive personal data or information”; this definition is broad enough to encompass any data collected by any ART facility or equipment. This includes any information collected during the screening of patients, pertaining to ovulation and menstrual cycles, follicle and sperm count, ultrasound results, blood work etc. It also includes pre-implantation genetic testing on embryos to detect any genetic abnormality.
But data flows extend beyond mere medical procedures and technology. Health data also involves any medical procedures undertaken, the amount of medicine and drugs administered during any procedure, its resultant side effects, recovery etc. Any processing of the above-mentioned information, in turn, may generate more personal data points relating to an individual’s political affiliations, race, ethnicity, genetic data such as biometrics and DNA etc.; It is seen that different ethnicities and races react differently to the same/similar medication and have different propensities to genetic diseases. Further, it is to be noted that data is not only collected by professionals but also by intelligent equipment like AI which may be employed by any facility to render their service. Additionally, dissemination of information under exceptional circumstances (e.g. medical emergency) also affects how data may be classified. Considerations are further nuanced when the fundamental right to identity of a child conceived and born via ART may be in conflict with the fundamental right to privacy of a donor to remain anonymous.
Intersection of Privacy laws and ART laws:
In India, ART technology is regulated by the Assisted Reproductive Technology (Regulation) Act, 2021 (“ART Act”). With this, the Union aims to regulate and supervise assisted reproductive technology clinics and ART banks, prevent misuse and ensure safe and ethical practice of assisted reproductive technology services. When read with the Digital Personal Data Protection Act, 2023 (“DPDP Act”) and other ancillary guidelines, the two legislations provide some framework regulations for the digital privacy of health-based apps.
The ART Act establishes a National Assisted Reproductive Technology and Surrogacy Registry (“National Registry”), which acts as a central database for all clinics and banks and the nature of their services. The Act also establishes a National Assisted Reproductive Technology and Surrogacy Board (“National Board”) under the Surrogacy Act to monitor the implementation of the Act and advise the central government on policy matters. The National Board also supervises the functioning of the National Registry, liaises with State Boards and curates a code of conduct for professionals working in ART clinics and banks. Under the DPDP Act, these bodies (i.e. the National Board, State Boards, ART clinics and banks) are most likely classified as data fiduciaries (primarily clinics and banks), data processors (these may include the National Board and State Boards) or an amalgamation of both (including any appropriate authority established under the ART Act for investigating complaints or suspending or cancelling the registration of clinics), depending on the nature of work undertaken by them. If so classified, the duties and liabilities of data fiduciaries and processors would necessarily apply to these bodies. As a result, all such bodies would have to adopt Privacy Enhancing Technologies (PETs) and other organizational measures to ensure compliance with the privacy laws in place. This may be considered one of the most critical considerations for any ART facility, since any data it collects would be sensitive personal data pertaining to health, regulated by the Information Technology (Reasonable Security Practices And Procedures And Sensitive Personal Data Or Information) Rules, 2011 (“SPDI Rules 2011”). These rules provide for how sensitive personal data or information is to be collected, handled and processed.
The ART Act independently also provides for the duties of ART clinics and banks in the country. ART clinics and banks are required to inform the commissioning couple/woman of all procedures undertaken and all costs, risks, advantages, and side effects of their selected procedure. The Act mandates that information collected by such clinics and banks must not be disclosed to anyone except the database established by the National Registry, in cases of medical emergency, or on the order of a court. Data collected by clinics and banks (including details of donor oocytes, sperm or embryos, used or unused) must be detailed and must be submitted to the National Registry online. ART banks are also required to collect personal information from donors, including name, Aadhaar number, address and other details. By mandating online submission, the ART Act is harmonized with the DPDP Act, which regulates all digital personal data and emphasises free, informed consent.
Conclusion
With the increase in active opt-ins for ART, data privacy becomes a vital consideration for all healthcare facilities and professionals. Safeguard measures are required not only at the corporate level but also at the governmental level. Notably, in the 262nd Session of the Rajya Sabha, the Ministry of Electronics and Information Technology reported 165 data breach incidents involving citizen data from the Central Identities Data Repository between January 2018 and October 2023, despite earlier public denials. This discovery calls into question the safety and integrity of data that may be submitted to the National Registry database, especially given the type of data (both personal and sensitive information) it aims to collate. At present, the ART Act is well supported by the DPDP Act; however, further judicial and legislative deliberation is required to effectively regulate and balance the interests of all stakeholders.
References
- The Information Technology (Reasonable Security Practices And Procedures And Sensitive Personal Data Or Information) Rules, 2011
- Caring for Intimate Data in Fertility Technologies https://dl.acm.org/doi/pdf/10.1145/3411764.3445132
- Digital Personal Data Protection Act, 2023
- https://www.wolterskluwer.com/en/expert-insights/pharmacogenomics-and-race-can-heritage-affect-drug-disposition