#FactCheck - Old Video Misleadingly Claimed as Footage of Iranian President Before Crash
Executive Summary:
A video circulated on social media claiming to show Iranian President Ebrahim Raisi inside a helicopter moments before the tragic crash on May 20, 2024. Verification established that the video was actually shot in January 2024, during Raisi’s visit to the Nemroud Reservoir Dam project. To trace the origin of the video, the CyberPeace Research Team conducted a reverse image search and analysed reports from the Islamic Republic News Agency (IRNA), Mehran News, and the Iranian Students’ News Agency. The Associated Press also pointed out inconsistencies between the viral clip and the segment aired by Iranian state television. The snowy background in the clip does not match the green landscape with a river seen in footage related to the crash, confirming that the viral video is old and unrelated to the tragedy.
Claims:
A video circulating on social media claims to show Iranian President Ebrahim Raisi inside a helicopter an hour before his fatal crash.
Fact Check:
On examining the posts, we found that some of them carried watermarks of the IRNA News Agency and Nouk-e-Qalam News.
Taking a cue from this, we performed a keyword search for a credible source of the shared video but found no such footage on the IRNA News Agency’s website, nor any recent upload related to the viral claim.
On closely analysing the video, we observed President Ebrahim Raisi looking out over snow-covered mountains, whereas the publicly available footage of the accident site shows green forest, with no snow-covered mountains in sight.
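The snippet below is a minimal, illustrative Python sketch (using OpenCV) of one step in such a verification workflow: pulling individual frames out of a clip so that each frame can be run through a reverse image search engine. The file name, frame count, and output names are hypothetical placeholders, and this is not the research team’s actual tooling.

```python
# Minimal sketch: extract evenly spaced frames from a viral clip so each frame
# can be uploaded to a reverse image search engine. File name and frame count
# are hypothetical placeholders.
import cv2  # OpenCV


def extract_keyframes(video_path: str, num_frames: int = 5, out_prefix: str = "frame"):
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    if total == 0:
        raise ValueError("Could not read video: " + video_path)
    step = max(total // num_frames, 1)
    saved = []
    for i in range(0, total, step):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i)   # jump to frame index i
        ok, frame = cap.read()
        if not ok:
            continue
        name = f"{out_prefix}_{i}.jpg"
        cv2.imwrite(name, frame)              # save the frame for reverse image search
        saved.append(name)
    cap.release()
    return saved


# Example (hypothetical file name):
# extract_keyframes("viral_clip.mp4")  ->  ["frame_0.jpg", "frame_100.jpg", ...]
```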
We then checked for social media posts uploaded by the IRNA News Agency and found that it had posted the same video on X on January 18, 2024. The post clearly refers to the President’s aerial visit to the Nemroud Dam.
The viral video is old and does not show the moments before the tragic chopper crash involving President Raisi.
Conclusion:
The viral clip is not related to the fatal crash of Iranian President Ebrahim Raisi's helicopter and is actually from a January 2024 visit to the Nemroud Reservoir Dam project. The claim that the video shows visuals before the crash is false and misleading.
- Claim: Viral Video of Iranian President Raisi was shot before fatal chopper crash.
- Claimed on: X (Formerly known as Twitter), YouTube, Instagram
- Fact Check: Fake & Misleading
Related Blogs
Introduction
Data protection has been a critical aspect of advocacy and governance across the world. Data fuels our cyber-ecosystem and underpins the era of emerging technologies, with nearly every industry and sector now dependent on user data. Governments across the world have been deliberating on how to address the legality of data protection and privacy. India has seen various draft bills and policies on data protection over the years; the most recent is the Digital Personal Data Protection Bill, 2023, which was tabled in the Lok Sabha (the Lower House of Parliament) on 3 August for discussion and parliamentary assent.
What is DPDP, 2023?
The Digital Personal Data Protection Bill, 2023 aims to establish a comprehensive framework for the protection of personal data in India. The bill acknowledges the significance of protecting personal data and seeks to strike a balance between the need to process personal data for legitimate purposes and individuals’ right to the protection of their personal data. It defines a number of crucial terms and concepts associated with data protection, including “data fiduciary,” “data principal,” and “sensitive personal data.” It also sets out the duties of data fiduciaries, including the obligation to put suitable security measures in place to safeguard personal data and to obtain data principals’ consent before processing their personal information. The bill further establishes the Data Protection Board of India, which will implement its requirements and ensure compliance by data fiduciaries. The Board will have the authority to look into grievances, issue directions, and impose penalties for non-compliance.
Key Features of the Bill
The bill tabled at the parliament has the following key features:
- The 2023 bill imposes reasonable obligations on data fiduciaries and data processors to safeguard digital personal data.
- Under the 2023 bill, a new Data Protection Board is established to ensure compliance and administer remedies and penalties.
- Under the new bill, the Board has been entrusted with powers equivalent to those of a civil court, such as the power to take cognisance of personal data breaches, investigate complaints, and impose penalties. Additionally, the Board can issue directions to ensure compliance with the Act.
- The 2023 bill also secures more rights for individuals and strikes a balance between user protection and growing innovation.
- The bill creates a transparent and accountable data governance framework by giving more rights to individuals.
- The bill incorporates business-friendly provisions by removing criminal penalties for non-compliance and facilitating international data transfers.
- The new 2023 bill balances the fundamental right to privacy with reasonable limitations on that right.
- The new Data Protection Board will carefully examine instances of non-compliance and impose penalties on non-compliant entities.
- The bill does not expressly clarify what compensation, if any, is to be granted to the data principal in case of a data breach.
- Under the 2023 bill, deemed consent appears in a new form as “legitimate uses,” covering grounds such as the sovereignty and integrity of India.
- The bill introduces a negative list, under which cross-border data transfers are permitted except to countries restricted by the government.
The bill also makes special provisions for the processing of children’s personal data, acknowledging the significance of protecting children’s privacy. It further highlights the rights of data principals, including the right to access their personal information, to have incorrect information corrected, and to be forgotten.
Drive4CyberPeace
A campaign was undertaken by CyberPeace to gain a critical understanding of what people understand about Data privacy and protection in India. The 4-month long campaign led to a pan-India interaction with netizens from different areas and backgrounds. The thoughts and opinions of the netizens were understood and collated in the form of a whitepaper which was, in turn, presented to Parliamentarians and government officials. The whitepaper laid the foundation of the recommendations submitted to the Ministry of Electronics and Information Technology as part of the stakeholder consultation.
Conclusion
Overall, the Digital Personal Data Protection Bill of 2023 is an important step towards safeguarding Indian citizens’ privacy and personal data. It creates a regulatory agency to guarantee compliance and enforcement and offers a thorough framework for data protection. The law includes special measures for the protection of sensitive personal data and the personal data of children and acknowledges the significance of striking a balance between the right to privacy and the necessity of data processing.
Introduction
In the digital era, where technology is growing rapidly, Artificial Intelligence (AI) is making its way into every corner of the world. Technology and innovation continue to move in step, and innovation is once again in the limelight with a groundbreaking initiative known as “Project GR00T,” announced by the AI chip leader Nvidia. At the core of the project is the fusion of AI and robotics: a humanoid robot capable of understanding natural language, interacting with people, and learning from the physical environment by observing human actions and skills. Project GR00T aims to assist humans in diverse sectors such as healthcare.
Humanoid robots in the project are based on NVIDIA’s Thor system-on-chip (SoC). Thor powers the intelligence of these robots; the chip is designed to handle complex tasks and to ensure safe and natural interaction between humans and robots. However, big questions arise about the ethical considerations of privacy, autonomy, and the possible replacement of human workers.
Brief Analysis
Nvidia has announced Project GR00T, or Generalist Robot 00 Technology, which aims to create AI-powered humanoid robots with human-like understanding and movement. The project is part of Nvidia's efforts to drive breakthroughs in robotics and embodied AI, which can interact with and learn from a physical environment. The robots built on this platform are designed to understand natural language and emulate movements by observing human actions, such as coordination, dexterity, and other skills.
The model has been trained on NVIDIA GPU-accelerated simulation, enabling the robots to learn from human demonstrations with imitation learning and from the robotics platform NVIDIA Isaac Lab for reinforcement learning. This multimodal AI system acts as the mind for humanoid robots, allowing them to learn new skills and interact with the real world. Leading names in robotics, such as Figure, Boston Dynamics, Apptronik, Agility Robotics, Sanctuary AI, and Unitree, are reported to have collaborated with Nvidia to leverage GR00T.
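As a rough illustration of the imitation-learning idea described above, the following is a minimal behaviour-cloning sketch in Python (PyTorch): a small policy network is trained to reproduce recorded state-action pairs from human demonstrations. This is not NVIDIA’s GR00T model or the Isaac Lab API; the dimensions, network, and data are hypothetical placeholders.

```python
# Minimal behaviour-cloning sketch: train a policy to imitate demonstrated actions.
# All sizes and data are illustrative placeholders, not GR00T or Isaac Lab code.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 48, 12          # hypothetical robot state/action sizes

policy = nn.Sequential(
    nn.Linear(STATE_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, ACTION_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for a dataset of human demonstrations (random tensors here).
demo_states = torch.randn(1024, STATE_DIM)
demo_actions = torch.randn(1024, ACTION_DIM)

for epoch in range(10):
    pred = policy(demo_states)             # actions the policy would take
    loss = loss_fn(pred, demo_actions)     # distance from the demonstrated actions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a real pipeline the imitation-learning stage would typically be followed by reinforcement learning in simulation, where the policy is refined against a reward signal rather than demonstration data.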
Nvidia has also updated its Isaac platform with Isaac Manipulator and Isaac Perceptor, which add multi-camera 3D vision. The company further unveiled a new computer, Jetson Thor, built on NVIDIA’s Thor SoC, to power humanoid robots; it is designed to handle complex tasks and ensure safe and natural interaction between humans and robots.
Despite concerns about potential job losses, many argue that humanoid robots handling hazardous and repetitive tasks can aid humans and make their lives more comfortable rather than replace them.
Policy Recommendations
The Nvidia project highlights a significant development in AI robotics, presenting immense potential alongside ethical challenges that are critical to the overall development and smooth assimilation of AI-driven tech in society. To ensure this smooth assimilation, a comprehensive policy framework must be put in place. This includes:
- Human First Policy - Emphasis should be on augmentation rather than replacement. The authorities must focus on research and development (R&D) of applications that augment human capabilities, improve working conditions, and contribute to societal growth.
- Proper Ethical Guidelines - Guidelines stressing human safety, autonomy and privacy should be established. These norms must include consent for data collection, fair use of AI in decision making and proper protocols for data security.
- Deployment of Inclusive Technology - Access to AI Driven Robotics tech should be made available to diverse sectors of society. It is imperative to address potential algorithm bias and design flaws to avoid discrimination and promote inclusivity.
- Proper Regulatory Frameworks - It is crucial to establish regulatory frameworks to govern the smooth deployment and operation of AI-driven tech. The framework must include certification for safety and standards, frequent audits and liability protocols to address accidents.
- Training Initiatives - Educational programs should be introduced to train the workforce for integrating AI driven robotics and their proper handling. Upskilling of the workforce should be the top priority of corporations to ensure effective integration of AI Robotics.
- Collaborative Research Initiatives - AI and emerging technologies have a profound impact on the trajectory of human development. It is imperative to foster collaboration among governments, industry and academia to drive innovation in AI robotics responsibly and undertake collaborative initiatives to mitigate and address technical, societal, legal and ethical issues posed by AI Robots.
Conclusion
On the whole, Project GR00T is a quantum leap in the advancement of robotic technology and paves the way for a future where robots integrate seamlessly into various aspects of human life.
References
- https://indianexpress.com/article/explained/explained-sci-tech/what-is-nvidias-project-gr00t-impact-robotics-9225089/
- https://medium.com/paper-explanation/understanding-nvidias-project-groot-762d4246b76d
- https://www.techradar.com/pro/nvidias-project-groot-brings-the-human-robot-future-a-significant-step-closer
- https://www.barrons.com/livecoverage/nvidia-gtc-ai-conference/card/nvidia-announces-ai-model-for-humanoid-robot-development-BwT9fewMyD6XbuBrEDSp
Introduction
The spread of information in the quickly changing digital age presents both advantages and difficulties. The terms "misinformation" and "disinformation" come up frequently in conversations about information inaccuracy, and countering these threats is important, especially in light of how they affect countries like India. It is therefore essential to examine the practical ramifications of misinformation, disinformation, and other prevalent digital threats. Like many other nations, India had to deal with the fallout from fraudulent online activity in 2023, which highlighted the critical need for strong cybersecurity safeguards.
The Emergence of AI Chatbots: OpenAI's ChatGPT and Google's Bard
The launch of OpenAI's ChatGPT in November 2022 was a major turning point in the AI space, inspiring the creation of a rival chatbot, Google's Bard, launched in 2023. These chatbots represent a significant breakthrough in artificial intelligence: driven by Large Language Models (LLMs), they produce replies by combining information gathered from huge datasets. Similarly, AI image generators that make use of diffusion models and existing datasets attracted a great deal of interest in 2023.
Deepfake Proliferation in 2023
Deepfake technology's proliferation in 2023 contributed to misinformation and disinformation in India, affecting politicians, corporate leaders, and celebrities. Some of these fakes were used for political purposes, while others were created as pornographic or entertainment content. The outcomes included social turmoil, political instability, and financial damage. The lack of technical countermeasures made detection and prevention difficult, allowing synthetic content to spread widely.
Challenges of Synthetic Media
Synthetic media, especially AI-generated audio and video content, proliferated widely in India during 2023. The problems it raised included political manipulation, identity theft, disinformation, legal and ethical concerns, security risks, difficulties of detection, and threats to media integrity. The consequences ranged from financial deception and the dissemination of false information to attempts to sway elections and intensify intercultural conflict.
Biometric Fraud Surge in 2023
Biometric fraud in India, especially through the Aadhaar-enabled Payment System (AePS), became a major threat in 2023. Cybercriminals exploited weaknesses in the AePS, and many depositors had their hard-earned savings stolen through fraudulent activity. This demonstrates the real impact of biometric fraud on people whose Aadhaar-linked data was manipulated and accessed without authorisation. The use of biometric data in financial systems not only endangers individual financial stability but also raises broader questions about the security and integrity of the nation's digital payment systems.
Government strategies to counter digital threats
- The Indian Union Government has sent a warning to the country's largest social media platforms, highlighting the importance of exercising caution when spotting and responding to deepfake and false material. The advice directs intermediaries to delete reported information within 36 hours, disable access in compliance with IT Rules 2021, and act quickly against content that violates laws and regulations. The government's dedication to ensuring the safety of digital citizens was underscored by Union Minister Rajeev Chandrasekhar, who also stressed the gravity of deepfake crimes, which disproportionately impact women.
- The government has recently issued an advisory to social media intermediaries to identify misinformation and deepfakes and to ensure compliance with the Information Technology (IT) Rules, 2021. Online platforms have a legal obligation to prevent the spread of misinformation and to exercise due diligence or reasonable efforts to identify misinformation and deepfakes.
- The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 were amended in 2023, requiring the online gaming industry to abide by a set of rules. These include not hosting harmful or unverified online games, not promoting games without approval from the self-regulatory body (SRB), labelling real-money games with a verification mark, educating users about deposit and winning policies, setting up a quick and effective grievance redressal process, requesting user information, and forbidding the offering of credit or financing for real-money gaming. These steps are intended to ensure ethical and transparent behaviour throughout the online gaming industry.
- With an emphasis on personal data protection, the government enacted the Digital Personal Data Protection Act, 2023, a new framework aimed at protecting individuals' digital personal data.
- The " Cyber Swachhta Kendra " (Botnet Cleaning and Malware Analysis Centre) is a part of the Government of India's Digital India initiative under the (MeitY) to create a secure cyberspace. It uses malware research and botnet identification to tackle cybersecurity. It works with antivirus software providers and internet service providers to establish a safer digital environment.
Strategies by Social Media Platforms
Various social media platforms, such as YouTube and Meta, have reformed their policies on misinformation and disinformation, reflecting a comprehensive strategy for combating deepfakes and false content on their networks. YouTube prioritises removing content that violates its policies, reducing recommendations of questionable information, promoting reliable news sources, and supporting reputable creators. It relies on established facts and expert consensus to counter misrepresentation. To remove policy-violating content quickly, the enforcement process combines human content reviewers with machine learning, and policies are designed in partnership with external experts and creators. To improve the overall quality of information users can access, the platform also lets users flag material, places a strong emphasis on media literacy, and prioritises providing context.
Meta’s policies address different categories of misinformation, aiming for a balance between expression, safety, and authenticity. Content that directly contributes to imminent harm or political interference is removed, with expert partners engaged for assessment. To counter misinformation, its efforts include fact-checking partnerships, directing users to authoritative sources, and promoting media literacy.
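The "machine learning plus human reviewers" pattern that both platforms describe can be sketched in a few lines of Python. The classifier, keywords, and thresholds below are illustrative placeholders rather than any platform's actual system: an automated score is computed first, clear violations are removed, and borderline cases are routed to human reviewers.

```python
# Illustrative sketch of automated scoring followed by human review.
# The classifier, keywords, and thresholds are placeholders, not any platform's system.
from dataclasses import dataclass


@dataclass
class Decision:
    action: str      # "remove", "human_review", or "keep"
    score: float


def classify(text: str) -> float:
    """Stand-in for a trained misinformation classifier returning a risk score in [0, 1]."""
    flagged_terms = ("miracle cure", "guaranteed returns")   # hypothetical flags
    return 0.9 if any(t in text.lower() for t in flagged_terms) else 0.1


def moderate(text: str, remove_at: float = 0.95, review_at: float = 0.5) -> Decision:
    score = classify(text)
    if score >= remove_at:
        return Decision("remove", score)          # clear policy violation
    if score >= review_at:
        return Decision("human_review", score)    # routed to content reviewers
    return Decision("keep", score)


print(moderate("This miracle cure reverses ageing overnight!"))
```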
Promoting ‘Tech for Good’
By 2024, the vision for "Tech for Good" will have expanded to include programmes that enable people to navigate an ever more complex digital world and promote a more secure and reliable online community. The emphasis is on using technology to strengthen cybersecurity defences and combat dishonest practices. This entails encouraging digital literacy and equipping users with the knowledge and skills to recognise and stop false information, online dangers, and cybercrime. Furthermore, the focus is on promoting and showcasing effective strategies for preventing cybercrime through cooperation between citizens, government agencies, and technology businesses. The intention is to employ technology's positive potential to build a digital environment that values security, honesty, and ethical behaviour while also promoting innovation and connectedness.
Conclusion
In the evolving digital landscape, false information powered by artificial intelligence and the misuse of advanced technology by bad actors present real difficulties. Notably, there are ongoing collaborative efforts and progress in creating a secure digital environment. Governments, social media corporations, civil societies and tech companies have shown a united commitment to tackling the intricacies of the digital world in 2024 through their own projects. It is evident that everyone has a shared obligation to establish a safe online environment through the adoption of ethical norms, protective laws, and cybersecurity measures. The "Tech for Good" goal for 2024, which emphasises digital literacy, collaboration, and the ethical use of technology, seems promising. The cooperative efforts of people, governments, civil societies and tech firms will play a crucial role as we continue to improve our policies, practices, and technical solutions.
References:
- https://news.abplive.com/fact-check/deepfakes-ai-driven-misinformation-year-2023-brought-new-era-of-digital-deception-abpp-1651243
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1975445