#FactCheck - Debunking Viral Photo: Tears of Photographer Not Linked to Ram Mandir Opening
Executive Summary:
The photographer breaking down in tears in a viral photo has no connection to the Ram Mandir opening. Social media users are sharing a collage that pairs images of the recently consecrated Lord Ram idol at the Ayodhya Ram Mandir with a shot claimed to show a photographer crying at the sight of the deity. A Facebook post sharing the collage says, "Even the cameraman couldn't stop his emotions." The CyberPeace Research team found that the moment actually occurred at the 2019 AFC Asian Cup football tournament: during a match between Iraq and Qatar, an Iraqi photographer began crying because Iraq had lost and was knocked out of the competition.
Claims:
The photographer in the widely shared images broke down in tears at seeing the icon of Lord Ram during the Ayodhya Ram Mandir's consecration. The collage was also shared by many users on other social media platforms, including X, Reddit, and Facebook. One Facebook user shared it with the caption, "Even the cameraman couldn't stop his emotions."
Fact Check:
The CyberPeace Research team ran a reverse image search on the photographer's picture. It led to several memes using the same image and, from there, to a Pinterest post captioned, "An Iraqi photographer as his team is knocked out of the Asian Cup of Nations."
Taking this as a lead, we ran a keyword search to trace the actual news behind the image. It led us to the official Asian Cup X (formerly Twitter) handle, where the image had been shared five years earlier, on 24 January 2019. The post reads, "Passionate. Emotional moment for an Iraqi photographer during the Round of 16 clash against ! #AsianCup2019"
This confirmed the news event and the origin of the image. Notably, while investigating this fact check, we also found several other misleading posts using the same photograph with different captions, all of them misinformation like this one.
Conclusion:
The recent viral image of the photographer claimed to be associated with the Ram Mandir opening is misleading. It is a five-year-old image of an Iraqi photographer crying during the AFC Asian Cup football competition, not the recent Ram Mandir opening. Netizens are advised not to believe or share such misinformation posts on social media.
- Claim: A person in the widely shared images broke down in tears at seeing the icon of Lord Ram during the Ayodhya Ram Mandir's consecration.
- Claimed on: Facebook, X, Reddit
- Fact Check: Fake
Related Blogs
The World Economic Forum's Global Risks Report 2024 identified AI-generated misinformation and disinformation as the second most likely threat to present a material crisis on a global scale in 2024, selected by 53% of respondents surveyed in September 2023. Artificial intelligence is automating the creation of fake news faster than it can be fact-checked, spurring an explosion of web content that mimics factual articles while disseminating false information about grave themes such as elections, wars and natural disasters.
According to a report by the Centre for the Study of Democratic Institutions, a Canadian think tank, the most prevalent effect of generative AI is its ability to flood the information ecosystem with misleading and factually incorrect content. As reported by Democracy Reporting International during the 2024 European Union elections, Google's Gemini, OpenAI's ChatGPT 3.5 and 4.0, and Microsoft's AI interface 'CoPilot' were inaccurate one-third of the time when queried about election data. An innovative regulatory approach, such as regulatory sandboxes, is therefore needed to address these challenges while encouraging responsible AI innovation.
What Is AI-driven Misinformation?
AI-driven misinformation is false or misleading information created, amplified, or spread using artificial intelligence technologies. Machine learning models are leveraged to automate and scale the production of false and deceptive content; examples include deepfakes, AI-generated news articles, and bots that amplify false narratives on social media.
The biggest challenge is in the detection and management of AI-driven misinformation. It is difficult to distinguish AI-generated content from authentic content, especially as these technologies advance rapidly.
AI-driven misinformation can influence elections, public health, and social stability by spreading false or misleading information. While public adoption of the technology has undoubtedly been rapid, it has yet to achieve true acceptance or fulfil its potential in a positive manner because of widespread, and often justified, cynicism about it. General public sentiment about AI is laced with concern and doubt regarding the technology's trustworthiness, mainly because regulatory frameworks have not matured on par with the technological development.
Regulatory Sandboxes: An Overview
Regulatory sandboxes are regulatory tools that allow businesses to test and experiment with innovative products, services, or business models under the supervision of a regulator for a limited period. They work by creating a controlled environment in which regulators allow businesses to trial new technologies or business models under relaxed regulations.
Regulatory sandboxes have been used across many industries, most recently in sectors like fintech, such as the UK's Financial Conduct Authority sandbox. These models have been shown to encourage innovation while allowing regulators to understand emerging risks. Lessons from the fintech sector show that the benefits of regulatory sandboxes include facilitating firm financing and market entry and increasing speed-to-market by reducing administrative and transaction costs. For regulators, testing in sandboxes informs policy-making and regulatory processes. Given this success in fintech, regulatory sandboxes could be adapted to AI, particularly for overseeing technologies with the potential to generate or spread misinformation.
The Role of Regulatory Sandboxes in Addressing AI Misinformation
Regulatory sandboxes can be used to test AI tools designed to identify or flag misinformation without the risks associated with immediate, wide-scale implementation. Stakeholders like AI developers, social media platforms, and regulators work in collaboration within the sandbox to refine the detection algorithms and evaluate their effectiveness as content moderation tools.
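As an illustration of the kind of evaluation such a sandbox could host, the minimal Python sketch below trains a baseline text classifier on a handful of hypothetical posts and reports standard content-moderation metrics. The example posts, labels, and model choice are assumptions made purely for illustration; a real sandbox trial would use test corpora and evaluation criteria agreed with the regulator.

```python
# Minimal sketch of evaluating a misinformation classifier inside a sandbox.
# The tiny toy dataset and labels below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.pipeline import make_pipeline

# Hypothetical labelled posts: 1 = misleading, 0 = reliable
posts = [
    "Shocking footage proves the event was staged",
    "Officials release verified schedule for the ceremony",
    "Secret cure suppressed by doctors, share before it is deleted",
    "Health ministry publishes updated vaccination guidance",
]
labels = [1, 0, 1, 0]

# Train a simple TF-IDF + logistic regression baseline
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# In a sandbox, the regulator would supply a held-out test set;
# here the toy data is reused purely to show the reporting step.
predictions = model.predict(posts)
print(classification_report(labels, predictions, target_names=["reliable", "misleading"]))
```

Reporting precision and recall separately matters in this setting: over-flagging reliable posts and missing misleading ones carry very different costs for content moderation.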
These sandboxes can help balance the need for innovation in AI and the necessity of protecting the public from harmful misinformation. They allow the creation of a flexible and adaptive framework capable of evolving with technological advancements and fostering transparency between AI developers and regulators. This would lead to more informed policymaking and building public trust in AI applications.
CyberPeace Policy Recommendations
Regulatory sandboxes offer a mechanism to test and refine solutions for regulating the misinformation that AI technologies create. Some policy recommendations are as follows:
- Create guidelines for a global standard for regulatory sandboxes that can be adapted locally, ensuring consistency in tackling AI-driven misinformation.
- Regulators can offer incentives, such as tax breaks or grants, to companies that participate in sandboxes, encouraging innovation in the development of anti-misinformation tools.
- Awareness campaigns can educate the public about the risks of AI-driven misinformation and the role of regulatory sandboxes, helping to manage public expectations.
- Periodic reviews and updates of the sandbox frameworks should be conducted to keep pace with advancements in AI technology and emerging forms of misinformation.
Conclusion and the Challenges for Regulatory Frameworks
Regulatory sandboxes offer a promising pathway to counter the challenges that AI-driven misinformation poses while fostering innovation. By providing a controlled environment for testing new AI tools, these sandboxes can help refine technologies aimed at detecting and mitigating false information. This approach ensures that AI development aligns with societal needs and regulatory standards, fostering greater trust and transparency. With the right support and ongoing adaptations, regulatory sandboxes can become vital in countering the spread of AI-generated misinformation, paving the way for a more secure and informed digital ecosystem.
References
- https://www.thehindu.com/sci-tech/technology/on-the-importance-of-regulatory-sandboxes-in-artificial-intelligence/article68176084.ece
- https://www.oecd.org/en/publications/regulatory-sandboxes-in-artificial-intelligence_8f80a0e6-en.html
- https://www.weforum.org/publications/global-risks-report-2024/
- https://democracy-reporting.org/en/office/global/publications/chatbot-audit#Conclusions
Executive Summary:
A viral online video claims to show an attack on Prime Minister Benjamin Netanyahu in the Israeli Senate. However, the CyberPeace Research Team has confirmed that the video is fake: it was created with video editing tools that merge two very different videos into one, distorting the true essence of the original footage and making false claims. The original footage has no connection to any attack on Mr. Netanyahu, and the claim is therefore false and misleading.
Claims:
A viral video claims to show an attack on Prime Minister Benjamin Netanyahu in the Israeli Senate.
Fact Check:
Upon receiving the viral posts, we conducted a reverse image search on keyframes of the video. The search led us to various legitimate sources featuring an attack on an ethnic Turkish leader in Bulgaria; none of them reported any attack on Prime Minister Benjamin Netanyahu.
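For readers curious how keyframes can be pulled from a clip before running a reverse image search, the minimal sketch below uses OpenCV to save roughly one frame every two seconds. The file name and sampling interval are illustrative assumptions, not the exact workflow used in this fact check.

```python
# Minimal sketch: extract periodic frames from a video for reverse image searching.
# "viral_clip.mp4" and the 2-second interval are illustrative assumptions.
import cv2

video = cv2.VideoCapture("viral_clip.mp4")
fps = video.get(cv2.CAP_PROP_FPS) or 30   # fall back if FPS metadata is missing
interval = max(1, int(fps * 2))           # grab roughly one frame every 2 seconds

frame_index, saved = 0, 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    if frame_index % interval == 0:
        cv2.imwrite(f"keyframe_{saved:03d}.jpg", frame)
        saved += 1
    frame_index += 1

video.release()
print(f"Saved {saved} keyframes for reverse image search")
```

Each saved frame can then be uploaded to a reverse image search engine to trace earlier appearances of the footage.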
We used AI detection tools, such as TrueMedia.org, to analyze the video. The analysis concluded, with 68.0% confidence, that the video had been edited. The tools identified "substantial evidence of manipulation," particularly in the change in graphics quality and the break in the flow of the footage as the overall background environment changes.
Additionally, an extensive review of official statements from the Knesset revealed no mention of any such incident. No credible reports were found of any attack on the Israeli PM, further confirming the video's inauthenticity.
Conclusion:
The viral video claiming to show an attack on Prime Minister Netanyahu is an old video that has been edited. Analysis with various AI detection tools confirms that the video was manipulated from edited footage, and no official sources carry any information about such an incident. The CyberPeace Research Team therefore confirms that the video was manipulated using video editing technology, making the claim false and misleading.
- Claim: Attack on Prime Minister Netanyahu in the Israeli Senate
- Claimed on: Facebook, Instagram and X (formerly Twitter)
- Fact Check: False & Misleading
Introduction
Microsoft has unveiled its ambitious roadmap for developing a quantum supercomputer with AI features, acknowledging the transformative power of quantum computing in solving complex societal challenges. Quantum computing has the potential to revolutionise AI by enhancing its capabilities and enabling breakthroughs in different fields. This blog examines Microsoft's announcement of its plans to develop a quantum supercomputer, the technology's potential applications, and the implications for the future of artificial intelligence (AI). It also considers the need for regulation in the realms of quantum computing and AI, the significant policies and considerations associated with these transformative technologies, and the potential benefits and challenges of their development and deployment.
What is Quantum Computing?
Quantum computing is an emerging field of computer science and technology that utilises principles from quantum mechanics to perform complex calculations and solve certain types of problems more efficiently than classical computers. While classical computers store and process information using bits, quantum computers use quantum bits or qubits.
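To make the qubit idea concrete, the short sketch below simulates a single qubit with plain NumPy: applying a Hadamard gate to the |0⟩ state produces an equal superposition, so a measurement yields 0 or 1 with 50% probability each. This is a toy state-vector simulation for illustration only, not production quantum tooling.

```python
# Toy state-vector simulation of one qubit, for illustration only.
import numpy as np

ket_zero = np.array([1.0, 0.0])                 # |0> basis state
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

state = hadamard @ ket_zero                     # equal superposition (|0> + |1>) / sqrt(2)
probabilities = np.abs(state) ** 2              # Born rule: measurement probabilities

print("Amplitudes:", state)                     # [0.7071..., 0.7071...]
print("P(0), P(1):", probabilities)             # [0.5, 0.5]
```

A classical bit would need to be either 0 or 1 at this point; the qubit's amplitudes carry both possibilities at once until measurement, which is the property later sections refer to as superposition and parallelism.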
Interconnected Future
Quantum computing and artificial intelligence (AI) are two rapidly evolving fields with the potential to revolutionise technology and reshape industries. Quantum computing promises to expand AI's capabilities well beyond their current limits, and integrating the two technologies could lead to profound advancements across sectors such as healthcare, finance, and cybersecurity, as this section explores.
- Enhancing AI Capabilities:
Quantum computing holds the promise of significantly expanding the capabilities of AI systems. Traditional computers, based on classical physics and binary logic, struggle with complex problems because computational requirements grow exponentially. Quantum computing, on the other hand, leverages the principles of quantum mechanics to perform computations on quantum bits, or qubits, which can exist in multiple states simultaneously. This inherent parallelism and the superposition property of qubits could accelerate AI algorithms and enable more efficient processing of vast amounts of data.
- Solving Complex Problems:
The integration of quantum computing and AI has the potential to tackle complex problems that are currently beyond the reach of classical computing methods. Quantum machine learning algorithms, for example, could leverage quantum superposition and entanglement to analyse and classify large datasets more effectively. This could have significant applications in healthcare, where AI-powered quantum systems could aid in drug discovery, disease diagnosis, and personalised medicine by processing vast amounts of genomic and clinical data.
- Advancements in Finance and Optimisation:
The financial sector can benefit significantly from integrating quantum computing and AI. Quantum algorithms can be employed to optimise portfolios, improve risk analysis models, and enhance trading strategies. By harnessing the power of quantum machine learning, financial institutions can make more accurate predictions and informed decisions, leading to increased efficiency and reduced risks.
- Strengthening Cybersecurity:
Quantum computing can also play a pivotal role in bolstering cybersecurity defences. Quantum techniques can be employed to develop new cryptographic protocols that are resistant to quantum attacks. In conjunction with quantum computing, AI can further enhance cybersecurity by analysing massive amounts of network traffic and identifying potential vulnerabilities or anomalies in real time, enabling proactive threat mitigation (a minimal anomaly-detection sketch follows this list).
- Quantum-Inspired AI:
Beyond the direct integration of quantum computing and AI, quantum-inspired algorithms are also being explored. These algorithms, designed to run on classical computers, draw inspiration from quantum principles and can improve performance in specific AI tasks. Quantum-inspired optimisation algorithms, for instance, can help solve complex optimisation problems more efficiently, enabling better resource allocation, supply chain management, and scheduling in various industries.
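As a concrete illustration of the AI-assisted traffic analysis mentioned under "Strengthening Cybersecurity" above, the sketch below applies an Isolation Forest to a few hypothetical connection features (bytes transferred and duration) to flag unusual traffic. The simulated data and parameters are assumptions made for illustration; real deployments would rely on far richer telemetry.

```python
# Minimal sketch: flag anomalous network connections with an Isolation Forest.
# The traffic features (bytes sent, duration in seconds) are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulated "normal" traffic plus a couple of obviously unusual connections.
normal = rng.normal(loc=[500, 2.0], scale=[50, 0.5], size=(200, 2))
unusual = np.array([[50_000, 0.1], [45_000, 0.2]])
traffic = np.vstack([normal, unusual])

detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(traffic)      # -1 marks suspected anomalies

flagged = traffic[labels == -1]
print(f"Flagged {len(flagged)} of {len(traffic)} connections as anomalous")
```

Unsupervised detectors of this kind do not need labelled attack data, which is why they are often paired with human review rather than used for automatic blocking.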
How Quantum Computing and AI Should Be Regulated
As quantum computing and artificial intelligence (AI) continue to advance, questions arise about the need for regulations to govern these technologies. The debate surrounding their regulation weighs the potential risks, the ethical implications, and the balance between innovation and societal protection.
- Assessing Potential Risks: Quantum computing and AI bring unprecedented capabilities that can significantly impact various aspects of society. However, they also pose potential risks, such as unintended consequences, privacy breaches, and algorithmic biases. Regulation can help identify and mitigate these risks, ensuring these technologies’ responsible development and deployment.
- Ethical Implications: AI and quantum computing raise ethical concerns related to privacy, bias, accountability, and the impact on human autonomy. For AI, issues such as algorithmic fairness, transparency, and decision-making accountability must be addressed. Quantum computing, with its potential to break current encryption methods, requires regulatory measures to protect sensitive information. Ethical guidelines and regulations can provide a framework to address these concerns and promote responsible innovation.
- Balancing Innovation and Regulation: Regulating quantum computing and AI involves striking a balance between fostering innovation and protecting society's interests. Excessive regulation could stifle technological advancements, hinder research, and impede economic growth. On the other hand, a lack of regulation may lead to the proliferation of unsafe or unethical applications. A thoughtful and adaptive regulatory approach is necessary, one that considers the dynamic nature of these technologies and allows for iterative improvements based on evolving understanding and risks.
- International Collaboration: Given the global nature of quantum computing and AI, international collaboration in regulation is essential. Harmonising regulatory frameworks can avoid fragmented approaches, ensure consistency, and facilitate ethical and responsible practices across borders. Collaborative efforts can also address data privacy, security, and cross-border data flow challenges, enabling a more unified and cooperative approach towards regulation.
- Regulatory Strategies: Regulatory strategies for quantum computing and AI should adopt a multidisciplinary approach involving stakeholders from academia, industry, policymakers, and the public. Key considerations include:
- Risk-based Approach: Regulations should focus on high-risk applications while allowing space for low-risk experimentation and development.
- Transparency and Explainability: AI systems should be transparent and explainable to enable accountability and address concerns about bias, discrimination, and decision-making processes.
- Privacy Protection: Regulations should safeguard individual privacy rights, especially in quantum computing, where current encryption methods may be vulnerable.
- Testing and Certification: Establishing standards for the testing and certification of AI systems can ensure their reliability, safety, and adherence to ethical principles.
- Continuous Monitoring and Adaptation: Regulatory frameworks should be dynamic, regularly reviewed, and adapted to keep pace with the evolving landscape of quantum computing and AI.
Conclusion:
Integrating quantum computing and AI holds immense potential for advancing technology across diverse domains. Quantum computing can enhance the capabilities of AI systems, enabling the solution of complex problems, accelerating data processing, and revolutionising industries such as healthcare, finance, and cybersecurity. As research and development in these fields progress, collaborative efforts among researchers, industry experts, and policymakers will be crucial in harnessing the synergies between quantum computing and AI to drive innovation and shape a transformative future. The regulation of quantum computing and AI is a complex and ongoing discussion. Striking the right balance between fostering innovation, protecting societal interests, and addressing ethical concerns is crucial. A collaborative, multidisciplinary approach to regulation, considering international cooperation, risk assessment, transparency, privacy protection, and continuous monitoring, is necessary to ensure the responsible development and deployment of these transformative technologies.