#FactCheck - Edited Video of ‘India-India’ Chants at Republican National Convention
Executive Summary:
A video circulating online alleges that people are chanting "India India" as Ohio Senator J.D. Vance meets them at the Republican National Convention (RNC). This claim is false. The CyberPeace Research team’s investigation found that the video was digitally altered to add the chanting. The original footage was shared by The Wall Street Journal and corroborated via the YouTube channel of Forbes Breaking News; it features only background music playing as Senator Vance and his wife, Usha Vance, greeted those present at the gathering. The claim that participants chanted "India India" is therefore untrue.

Claim:
A video circulating on social media shows attendees chanting "India-India" as Ohio Senator J.D. Vance and his wife, Usha Vance, greet them at the Republican National Convention (RNC).


Fact Check:
Upon receiving the posts, we ran a keyword search related to the context of the viral video. We found a video uploaded by The Wall Street Journal on July 16, titled "Watch: J.D. Vance Is Nominated as Vice Presidential Nominee at the RNC". At the timestamp 0:49, no "India-India" chant can be heard, whereas it is clearly audible in the viral video.
We also found the footage on the YouTube channel of Forbes Breaking News. At the timestamp 3:00:58, the same clip as in the viral video appears, but no "India-India" chant can be heard.
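For readers who want to replicate this kind of comparison programmatically, below is a minimal sketch of one way to compare the audio of two clips over the same moment. It is illustrative only, not the exact workflow our team used; the file names, offsets, and window length are hypothetical, and the soundtracks are assumed to have been extracted to WAV files beforehand.

```python
# Minimal sketch: compare audio from the original footage and the viral
# clip over the same moment. File names, offsets, and durations are
# hypothetical; audio is assumed to be pre-extracted to WAV.
import librosa
import numpy as np

def audio_fingerprint(path: str, offset: float, duration: float) -> np.ndarray:
    """Load a window of audio and return its time-averaged magnitude spectrum."""
    y, sr = librosa.load(path, sr=22050, offset=offset, duration=duration)
    return np.abs(librosa.stft(y)).mean(axis=1)

# Same moment in both videos: 0:49 in the WSJ upload, start of the viral clip.
original = audio_fingerprint("wsj_rnc_footage.wav", offset=49.0, duration=10.0)
viral = audio_fingerprint("viral_clip.wav", offset=0.0, duration=10.0)

# Cosine similarity near 1.0 suggests identical soundtracks; a low score
# indicates the audio was altered, e.g. chants dubbed over the music.
similarity = np.dot(original, viral) / (np.linalg.norm(original) * np.linalg.norm(viral))
print(f"Spectral similarity: {similarity:.3f}")
```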

Hence, the claim made in the viral video is false and misleading.
Conclusion:
The viral video claiming to show "India-India" chants during Ohio Senator J.D. Vance's greeting at the Republican National Convention is altered. The original video, confirmed by sources including The Wall Street Journal and Forbes Breaking News, features only background music and no such chants. Therefore, the claim is false and misleading.
Claim: A video circulating on social media shows attendees chanting "India-India" as Ohio Senator J.D. Vance and his wife, Usha Vance, greet them at the Republican National Convention (RNC).
Claimed on: X
Fact Check: Fake & Misleading
Related Blogs
Introduction
Privacy has become a pressing concern for netizens, as social media companies have access to users’ data and the ability to use it as they see fit. Meta’s business model, which relies heavily on collecting and processing user data to deliver targeted advertising, has come under particular scrutiny. The conflict between Meta and the EU traces back to the enactment of the GDPR in 2018. Meta has faced numerous fines for failing to comply with the regulation, chiefly for failing to obtain explicit consent for data processing under Chapter 2, Article 7 of the GDPR. The ePrivacy Regulation, which focuses on digital communication and digital data privacy, is the next step in the EU’s arsenal for protecting user privacy and will target the cookie policies and tracking technologies crucial to Meta's ad-targeting mechanism. Because Meta’s core revenue stream is targeted advertising, which requires vast amounts of data to create a personalised experience, the company remains squarely under EU scrutiny.
Pay for Privacy Model and its Implications with Critical Analysis
Meta came up with a solution to the privacy issue: ‘Pay or Consent,’ a model that allows users to opt out of data-driven advertising by paying a subscription fee. The platform offers users a choice between a free, ad-supported service and a paid, privacy-enhanced experience, which aligns with the GDPR and potentially reduces regulatory pressure on Meta.
Meta now needs to assess the economic feasibility of this model: how much would users be willing to pay for the privacy on offer, and how would monetisation shift from ad-driven profits to subscription revenues? The model directly affects advertisers who rely on Meta’s detailed user data for targeted advertising, and could decrease ad revenue and push Meta to innovate other monetisation strategies.
For users, a potential outcome is increased privacy and greater control over their data, in line with global privacy concerns. While users will undoubtedly appreciate the option to avoid tracking, the need to pay might become a barrier, possibly dividing users into cost-conscious and privacy-conscious segments. Setting a reasonable price point is necessary for widespread adoption of the model.
For regulators and the industry, the model would set a new precedent in the tech industry and could influence other companies’ approaches to data privacy. Regulators might welcome the move and encourage further innovation in privacy-respecting business models.
The affordability and fairness of the ‘pay or consent’ model could create digital inequality if privacy comes at a cost, or worse, becomes a luxury. The subscription model also needs to clarify what data would be collected and how it would be used for non-advertising purposes. In terms of market competition, competitors might capitalise on Meta’s subscription model by offering free services with privacy guarantees, which could further pressure Meta to refine its offerings to stay competitive. According to the EU, the model also needs to provide a third way for users: a free service that still carries ads, but only non-personalised ones.
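The branching structure of the model is simple to express in code. The sketch below is a purely illustrative rendering of the three options discussed above, assuming a simplified `User` record; it is not Meta’s actual implementation.

```python
# Illustrative sketch of the 'Pay or Consent' decision logic described
# above; field names and the User class are assumptions, not Meta's code.
from dataclasses import dataclass

@dataclass
class User:
    has_paid_subscription: bool   # opted into the ad-free, privacy-enhanced tier
    gave_explicit_consent: bool   # GDPR Art. 7-style consent to data processing

def ad_experience(user: User) -> str:
    """Return which experience the platform may lawfully serve."""
    if user.has_paid_subscription:
        return "no ads, no behavioural tracking"
    if user.gave_explicit_consent:
        return "personalised ads based on processed user data"
    # The EU's proposed "third way": the service stays free,
    # but ads are non-personalised (contextual only).
    return "non-personalised (contextual) ads"

print(ad_experience(User(has_paid_subscription=False, gave_explicit_consent=False)))
```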
Meta has expressed a willingness to explore various models to address regulatory concerns and enhance user privacy. Its recent pilot programs testing the pay-for-privacy model are one example. Meta is actively engaging with EU regulators to find mutually acceptable solutions and to demonstrate its commitment to compliance while advocating for business models that sustain innovation. Meta executives have emphasised the importance of user choice and transparency in their future business strategies.
Future Impact Outlook
- The Meta-EU tussle over privacy is a manifestation of broader debates about data protection and business models in the digital age.
- The EU's stance on Meta’s ‘pay or consent’ model and any new regulatory measures will shape the future landscape of digital privacy; other jurisdictions may take their cues from it, potentially producing global shifts in privacy regulation.
- Meta may need to iterate on its approach based on consumer preferences and concerns. Competitors and tech giants will closely monitor Meta’s strategies, possibly adopting similar models or innovating new solutions, and the industry’s overall approach to privacy could evolve to prioritise user control and transparency.
Conclusion
Consent is the cornerstone of privacy, and sidestepping it violates the rights of users. The manner in which tech companies foster a culture of consent is of paramount importance in today's digital landscape. As Meta explores the ‘pay or consent’ model, it faces both opportunities and challenges in balancing user privacy with business sustainability. This situation serves as a critical test case for the tech industry, highlighting the need for innovative solutions that respect privacy while fostering growth, and for careful engagement with data protection laws worldwide, starting with India’s Digital Personal Data Protection Act, 2023.
References
- https://ciso.economictimes.indiatimes.com/news/grc/eu-tells-meta-to-address-consumer-fears-over-pay-for-privacy/111946106
- https://www.wired.com/story/metas-pay-for-privacy-model-is-illegal-says-eu/
- https://edri.org/our-work/privacy-is-not-for-sale-meta-must-stop-charging-for-peoples-right-to-privacy/
- https://fortune.com/2024/04/17/meta-pay-for-privacy-rejected-edpb-eu-gdpr-schrems/

Introduction
Misinformation and disinformation are significant issues in today's digital age. The challenge is not limited to any one sector or industry and has been seen to affect everyone who deals with data of any sort. In recent times, we have seen a rise in misinformation about all manner of subjects, from product and corporate misinformation to manipulated content about regulatory or policy developments.
Micro, Small, and Medium Enterprises (MSMEs) play an important role in economies, particularly in developing nations, by promoting employment, innovation, and growth. However, in the evolving digital landscape, they also confront tremendous hurdles, such as the dissemination of mis/disinformation, which can harm reputations, disrupt business, and erode consumer trust. MSMEs are particularly susceptible because they have minimal resources at their disposal and cannot afford to invest in the talent, technology, and training a business needs to protect itself in today’s digital-first ecosystem. Mis/disinformation affecting MSMEs can arise from internal communications, supply chain partners, social media, competitors, and other sources. To address these dangers, MSMEs must take proactive steps, such as adopting frameworks to counter misinformation and prioritising best practices like digital literacy and training, monitoring and social listening, transparency protocols, and robust communication practices.
Assessing the Impact of Misinformation on MSMEs
To assess the impact of misinformation on MSMEs, it is essential to get a full sense of the challenges. To begin with, one must consider the categories of damage, which include financial loss, reputational damage, operational disruption, and regulatory noncompliance. Various assessment methodologies can be used to analyse the impact of misinformation, including surveys, interviews, case studies, analysis of social media and news data, and risk analysis practices.
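As a concrete illustration of the risk analysis practices mentioned above, the sketch below combines severity ratings for the four damage categories into a single weighted score. The category weights and example ratings are placeholder assumptions that an MSME would calibrate for its own context.

```python
# Illustrative weighted risk score over the damage categories named above;
# weights and ratings are placeholder assumptions, not an official rubric.
DAMAGE_WEIGHTS = {
    "financial_loss": 0.35,
    "reputational_damage": 0.30,
    "operational_disruption": 0.20,
    "regulatory_noncompliance": 0.15,
}

def misinformation_risk_score(ratings: dict) -> float:
    """Combine 1-5 severity ratings (from surveys, interviews, or
    social-media analysis) into a single 0-5 risk score."""
    return sum(DAMAGE_WEIGHTS[cat] * rating for cat, rating in ratings.items())

# Example: a viral false claim about a product defect.
score = misinformation_risk_score({
    "financial_loss": 4,
    "reputational_damage": 5,
    "operational_disruption": 2,
    "regulatory_noncompliance": 1,
})
print(f"Risk score: {score:.2f} / 5")  # higher means a more urgent response
```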
Policy Framework and Gaps in Addressing Misinformation
The Digital India Initiative, a flagship programme of the Government of India, aims to transform India into a digitally empowered society and knowledge economy. The Information Technology Act, 2000 and the rules made thereunder govern the technology space and serve as the legal framework for cyber security and data protection. The Bharatiya Nyaya Sanhita, 2023 also contains provisions regarding ‘fake news’. The Digital Personal Data Protection Act, 2023 is a brand-new law aimed at protecting personal data. Fact-check units (FCUs) are government and independent private bodies that verify claims about government policies, regulations, announcements, and measures. However, these policy measures are not sector-specific and lack targeted guidelines, which limits the reach of their awareness initiatives on misinformation and leaves MSMEs without a sufficient support structure to verify information and protect themselves.
Recommendations for Countering Misinformation in the MSME Sector
To counter misinformation in the MSME sector, recommendations include creating a dedicated misinformation helpline, promoting awareness campaigns, creating regulatory support and guidelines, and collaborating with tech platforms and expert organisations to identify and curb misinformation.
Organisational recommendations include: adopting information verification protocols so that critical information is verified before being acted upon; conducting regular employee training on identifying and managing misinformation; creating a crisis management plan for misinformation incidents; and forming collaboration networks with other MSMEs to share verified information and best practices.
MSMEs should also engage with technological solutions, such as AI and ML tools for detecting and flagging potential misinformation, fact-checking tools, and cyber security measures that prevent misinformation from spreading via digital channels.
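To give a flavour of the AI/ML flagging tools referred to above, here is a minimal sketch of a text classifier trained on a toy set of previously verified claims. The training examples are invented for illustration; real deployments would use far larger labelled datasets and keep human fact-checkers in the loop rather than acting on scores alone.

```python
# Minimal sketch of an ML-based misinformation flagger, assuming a small
# labelled dataset of past claims (1 = previously debunked as false).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: claims previously verified by fact-checkers.
claims = [
    "Government announces new MSME subsidy scheme",      # verified true
    "Company X products cause illness, forward this!",   # debunked
    "Ministry publishes updated GST filing deadlines",   # verified true
    "Banks to seize all MSME accounts tomorrow",         # debunked
]
labels = [0, 1, 0, 1]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(claims, labels)

# Flag incoming messages whose predicted probability of being false is high.
incoming = "Forward this: all MSME licences cancelled from Monday!"
prob_false = model.predict_proba([incoming])[0][1]
if prob_false > 0.5:
    print(f"Flag for human verification (p_false={prob_false:.2f})")
```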
Conclusion: Developing a Vulnerability Assessment Framework for MSMEs
Creating a vulnerability assessment framework for misinformation in Micro, Small, and Medium Enterprises (MSMEs) in India involves several key components: understanding the sources and types of misinformation, assessing the impact on MSMEs, identifying current policies and their gaps, and providing actionable recommendations. Policies to counter misinformation in the MSME sector can be implemented by starting with pilot programs in key MSME clusters, engaging stakeholders such as industry associations, tech companies, and government bodies, initiating a feedback mechanism for continuous improvement of the framework, and finally developing a plan to scale successful initiatives across the country.
References
- https://publications.ut-capitole.fr/id/eprint/48849/1/wp_tse_1516.pdf
- https://techinformed.com/how-misinformation-can-impact-businesses/
- https://pib.gov.in/aboutfactchecke.aspx

AI has grown manifold in the past decade, and so has our reliance on it. A MarketsandMarkets study estimates the AI market will reach $1,339 billion by 2030. Further, Statista reports that ChatGPT amassed more than a million users within the first five days of its release, showcasing how rapidly it has been integrated into our lives. This development and integration carry risks. Consider this response from Google’s AI chatbot Gemini to a student’s homework inquiry: “You are not special, you are not important, and you are not needed…Please die.” In other instances, AI has suggested eating rocks for minerals or adding glue to pizza sauce. Such nonsensical outputs are not just absurd; they are dangerous. They underscore the urgent need to address the risks of unrestrained reliance on AI.
AI’s Rise and Its Limitations
The swiftness of AI’s rise, fuelled by OpenAI's GPT series, has revolutionised fields like natural language processing, computer vision, and robotics. Generative AI models like GPT-3, GPT-4, and GPT-4o, with their advanced language understanding, learn from data, recognise patterns, predict outcomes, and improve through trial and error. However, despite their efficiency, these models are not infallible. Seemingly harmless outputs can spread toxic misinformation or cause real harm in critical areas like healthcare or legal advice. Such instances underscore the dangers of blindly trusting AI-generated content and highlight the need to understand its limitations.
Defining the Problem: What Constitutes “Nonsensical Answers”?
AI algorithms sometimes produce outputs that are not grounded in their training data, are incorrectly decoded by the transformer, or do not follow any identifiable pattern. Such a response is known as a nonsensical answer, and the phenomenon is known as an “AI hallucination”. It can take the form of factual inaccuracies, irrelevant information, or contextually inappropriate responses. The harm varies with the context: a nonsensical response can be as benign as a wrong answer to a trivia question, or as damaging as incorrect legal advice.
A significant source of hallucination in machine learning algorithms is bias in the input they receive. If an AI model is trained on biased or unrepresentative datasets, it may hallucinate and produce results that reflect those biases. These models are also vulnerable to adversarial attacks, wherein bad actors manipulate an AI model's output by tweaking the input data in a subtle manner.
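One widely discussed heuristic for catching hallucinations, sketched below, is a self-consistency check: ask the model the same question several times and treat disagreement among the answers as a warning sign. The `ask_model` function is a hypothetical stand-in for whatever LLM API is in use, and the agreement threshold is an assumption; this is a mitigation heuristic, not a cure.

```python
# Hedged sketch of a self-consistency hallucination check. `ask_model`
# is a hypothetical placeholder, not a real library call.
from collections import Counter

def ask_model(question: str) -> str:
    """Placeholder: call your LLM of choice with non-zero temperature."""
    raise NotImplementedError("wire this to an actual model API")

def consistency_check(question: str, samples: int = 5, threshold: float = 0.6):
    """Flag an answer as unreliable if the model cannot repeat it."""
    answers = [ask_model(question) for _ in range(samples)]
    answer, count = Counter(answers).most_common(1)[0]
    agreement = count / samples
    if agreement < threshold:
        return None, agreement   # likely hallucination: answers diverge
    return answer, agreement     # stable answer: more (though not fully) trustworthy
```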
The Need for Policy Intervention
Nonsensical AI responses risk eroding user trust and causing harm, highlighting the need for accountability despite AI’s opaque and probabilistic nature. Different jurisdictions address these challenges in varied ways. The EU’s AI Act enforces stringent reliability standards through a risk-based, transparency-focused approach. The U.S. emphasises ethical guidelines and industry-driven standards. India’s DPDP Act indirectly tackles AI safety through data protection, focusing on the principles of accountability and consent. While the EU prioritises compliance, the U.S. and India balance innovation with safeguards, reflecting the diversity of national approaches to AI regulation.
Where Do We Draw the Line?
The critical question is whether AI policies should demand perfection or accept a reasonable margin for error. Striving for flawless AI responses may be impractical, but a well-defined framework can balance innovation and accountability, creating an ecosystem where AI develops responsibly while minimising the societal risks it can pose. Key measures to achieve this include:
- Ensuring users are informed about AI, its capabilities, and its limitations; transparent communication is key.
- Implementing regular audits and rigorous quality checks to maintain high standards and prevent lapses (see the sketch after this list).
- Establishing robust liability mechanisms to address harms caused by AI-generated misinformation, which fosters trust and accountability.
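As referenced above, here is an illustrative sketch of what a lightweight automated audit might look like: every model response is run through basic checks and logged for human review. The check rules, marker list, and log format are placeholder assumptions, not a production auditing system.

```python
# Illustrative sketch of a lightweight response audit: run basic automated
# checks on every AI response and append a record for human review.
import json
import time

HARMFUL_MARKERS = ["please die", "you are not needed"]  # e.g. from the Gemini incident

def audit_response(prompt: str, response: str, log_path: str = "audit_log.jsonl") -> bool:
    """Append an audit record; return True if the response needs human review."""
    flags = [m for m in HARMFUL_MARKERS if m in response.lower()]
    record = {
        "timestamp": time.time(),
        "prompt": prompt,
        "response": response,
        "flags": flags,  # non-empty flags should trigger escalation
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return bool(flags)
```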
CyberPeace Key Takeaways: Balancing Innovation with Responsibility
The rapid growth in AI development offers immense opportunities, but it must be pursued responsibly. Overregulation could stifle innovation; being too lax could lead to unintended societal harm or disruption.
Maintaining a balanced approach to development is essential, and collaboration between stakeholders such as governments, academia, and the private sector is important: together they can establish guidelines, promote transparency, and create liability mechanisms. Regular audits and user education can build trust in AI systems. Furthermore, policymakers need to prioritise user safety and trust without hindering creativity when making regulatory policy.
By fostering ethical AI development and enabling innovation, we can create a future that benefits us all. Striking this balance will ensure AI remains a tool for progress, underpinned by safety, reliability, and human values.
References
- https://timesofindia.indiatimes.com/technology/tech-news/googles-ai-chatbot-tells-student-you-are-not-needed-please-die/articleshow/115343886.cms
- https://www.forbes.com/advisor/business/ai-statistics/#2
- https://www.reuters.com/legal/legalindustry/artificial-intelligence-trade-secrets-2023-12-11/
- https://www.indiatoday.in/technology/news/story/chatgpt-has-gone-mad-today-openai-says-it-is-investigating-reports-of-unexpected-responses-2505070-2024-02-21