#FactCheck: Edited Broadcast Misused to Spread False Assam Political Rift Claim
Executive Summary:
A video from an India TV news show related to the Assam elections is going viral on social media. In the clip, anchor Meenakshi Joshi allegedly claims that there is a rift between the BJP and the RSS in Assam. The video further suggests that RSS chief Mohan Bhagwat wrote a letter to Prime Minister Narendra Modi stating that former Congress members have taken over the BJP and that RSS volunteers would not work for the party in Assam. However, research by CyberPeace found that the viral video is edited and misleading; the original video contains no such claims.
Claim:
A social media user, Ajit Singh, shared the video on X with the caption: “The core idea of today’s BJP is to capture power by any means. We have been saying this for long, and now even RSS has accepted that BJP in Assam has been taken over by Congress mindset.”

Fact Check:
To verify the claim, we searched relevant keywords about the alleged letter by RSS chief Mohan Bhagwat to Prime Minister Narendra Modi, but found no credible media reports supporting it. We then checked India TV’s YouTube channel and could not find the viral clip there. During the search, we did find a similar video from Meenakshi Joshi’s show; the portion seen in the viral clip appears at its beginning.

In the original video, the anchor is discussing the announcement of election dates in five states. There is no mention of any rift between the BJP and RSS in Assam.
Conclusion:
The viral India TV video claiming a rift between the BJP and RSS in Assam is edited and misleading. The original broadcast was about election dates in five states and did not include any such claims.
Related Blogs

On the occasion of the 20th edition of Safer Internet Day 2023, CyberPeace, in collaboration with UNICEF, DELNET, NCERT, and the National Book Trust (NBT), India, took steps towards a safer cyberspace by launching iSafe Multimedia Resources, CyberPeace TV, and the CyberPeace Café at an event held today in Delhi.
CyberPeace also showcased its efforts, in partnership with UNICEF, to create a secure and peaceful online world through its Project iSafe, which aims to bridge the knowledge gap between emerging advancements in cybersecurity and first responders. Through Project iSafe, CyberPeace has successfully raised awareness among law enforcement agencies, education departments, and frontline workers across various fields. The event marked a significant milestone in the efforts of the foundation to create a secure and peaceful online environment for everyone.
While launching CyberPeace TV, the CyberPeace Café, and the iSafe material, Lt Gen Rajesh Pant, National Cybersecurity Coordinator, Government of India, interacted with the students and introduced them to the theme of this Safer Internet Day. He also spoke about the joint cyber challenge initiative launched by participating countries and stressed that content is the most important element of cyberspace. He assured everyone that the Government of India is taking many steps at the national level to make cyberspace safer, and complimented CPF on its initiatives.
Ms. Zafrin Chaudhry, Chief of Communication, UNICEF, addressed the students, noting that children account for one in three internet users, so they deserve a safe cyberspace. They should be informed and equipped with everything they need to deal with any issues they face online, and they should share their experiences to make others aware. UNICEF, in partnership with CPF, is helping to equip children with this support and information.
Major Vineet Kumar, Founder and Global President of CPF, welcomed everyone and introduced the launch of iSafe Multimedia Resources, CyberPeace TV, and the CyberPeace Café. He also shed light on upcoming plans, such as a metaverse learning module using AR and VR. To make cyberspace safe even in tier-3 cities, he established the first CyberPeace Café in Ranchi.
As the internet plays a crucial role in our lives, CyberPeace has taken action to combat potential cyber threats. It introduced CyberPeace TV, the world’s first multilingual TV channel on Jio TV focused on education and entertainment, alongside a comprehensive online platform that provides the latest cybersecurity news, expert analysis, and a community for all stakeholders in the field. CyberPeace also launched its first CyberPeace Café for creators and innovators, and released the iSafe Multimedia Resources, containing flyers, posters, an e-handbook, and a handbook on digital safety for children, developed jointly by CyberPeace, UNICEF, and NCERT for the public.
O.P. Singh, Former DGP, UP Police, and CEO of the Kailash Satyarthi Foundation, began with data on internet users in India. The internet is now part of day-to-day activities, primarily through social media, and students should take a channelled approach to cyberspace: fixed screen time, access to the right content, and disciplined use of the internet. He said he greatly appreciated the initiatives CyberPeace is taking in this direction.
The celebration continued with an iSafe Panel Discussion on “Creating Safer Cyberspace for Children.” The discussion was moderated by Dr. Sangeeta Kaul, Director of DELNET, and attended by panellists Mr. Rakesh Maheshwari from MeitY (Ministry of Electronics and Information Technology, Govt. of India), Dr. Indu Kumar from CIET-NCERT, Ms. Bindu Sharma from ICMEC, and Major Vineet Kumar from CyberPeace.
The event was also graced by professional artists from the National School of Drama, who performed a Nukkad Natak and Qawwali based on cybersecurity themes. Students from SRDAV School also entertained the audience with their performances. Attendees were given a platform to share their experiences with online security issues, and ICT Awardees, parents, and iSafe Champions shared their insights with the guests. The event also had stalls by CyberPeace Corps, a global volunteer initiative, and CIET-NCERT for students to explore and join the cause. The event’s highlight was the 360° Selfie Booth, where attendees lined up for their turn.

Introduction
Robotic dogs, or robodogs, are created to resemble dogs in behaviour and appearance, usually featuring canine traits such as barking and tail wagging. Examples include RHex (a hexapod robot) and LittleDog and BigDog (created by Boston Dynamics). Robodogs can even respond to commands and look at a person with large LED-lit puppy eyes.
A four-legged robot recently completed its first successful radiation protection test inside the largest experimental area at the European Organization for Nuclear Research (CERN). Each robot created at CERN is carefully crafted to meet particular challenges, and the robots complement one another. Unlike the earlier wheeled, tracked, or monorail robots, robodogs will be capable of entering unexplored parts of the caverns, expanding the range of environments that CERN robots can inspect. Incorporating the robodog alongside the existing monorail robots in the Large Hadron Collider (LHC) tunnel will also expand the range of places available for monitoring and supervision, improving the security and efficiency of CERN’s operations. Lenovo, too, has designed a six-legged robot called the "Daystar Bot GS", to be launched this year, which promises "comprehensive data collection."
Use of Robodogs in diverse domains
Owing to advances in Artificial Intelligence (AI), robodogs can be a boon for those with special requirements. The advantage of AI lies in the dependability of its features, which can be programmed to answer specific commands tailored to the user.
In the context of health and well-being, they can be useful when programmed to care for a person with distinct or special requirements, such as an elderly or visually impaired person. For this reason, they are sometimes considered more advantageous than real dogs. Recently, Stanford researchers designed robodogs that can perform several physical activities, including dancing, and that may one day help comfort pediatric patients during their hospital stays. Similarly, the robodog "Pupper" is a revamped version of another robotic dog designed at Stanford called "Doggo", an open-source bot with 3D-printed elements that one could build on a fairly small budget; both were also created to interact with humans. Furthermore, robots as companions are a more comfortable leap for the Japanese. The oldest and most successful social robot in Japan is called "Paro"; resembling an ordinary plush toy, it can help treat depression, stress, anxiety, and mood swings. Since 1998, several Paro robots have been exported and put into service globally, reducing stress among children in ICUs, treating American veterans suffering from Post-Traumatic Stress Disorder (PTSD), and assisting dementia patients.
Post-pandemic, Japanese people experiencing loneliness and isolation have been turning to social robots for healing and comfort. At a cafe in Japan, the AI-driven robot dog "Aibo" has pawed its way into people’s minds and hearts, with proud owners gathering to show off their companions. Robots are even replacing the conventional class guinea pig or bunny at Moriyama Kindergarten in the central Japanese city of Nagoya; according to the teachers there, the bots reduce stress and teach kids to be more humane.
In the security and defence domain, the unique abilities of robodogs allow them to be used in hazardous and challenging circumstances. They can navigate rugged topography with assurance to reach stranded individuals after natural catastrophes, and they can help with search and rescue operations, surveillance, and other situations that would be dangerous for humans. Researchers are still fine-tuning the algorithms, refining the technology on affordable off-the-shelf robots that are already functional. Robodogs have further been used for surveillance in hostage crises and for defusing bombs, and their potential use of lethal force against attackers has been hotly debated. Similarly, the Australian military is testing an AI breakthrough that reportedly allows soldiers to control robodogs solely with their minds. Cities such as St. Petersburg, Florida, also seem set to keep police robodogs, and the U.S. Department of Homeland Security is seeking plans to deploy robot dogs at the border. The New York City Police Department (NYPD) intends to once again deploy four-legged robodogs to deal with high-risk circumstances like hostage negotiations; it has previously employed similar robodogs for high-octane duties, examining unsafe environments to which human officers should not be exposed. The U.S. Marine Corps is additionally experimenting with a new breed of robotic canine that could be helpful on the battlefield, enhance the safety and mobility of soldiers, and aid in other tasks. The Marines’ Unitree Go1 robot dog (nicknamed GOAT, for Grounded Open-Air Transport) is a four-legged machine with a built-in AI system that can be equipped to carry an infantry anti-armour rocket launcher on its back. The GOAT robot dog is designed to help the Marines move heavy loads, analyse topography, and deliver fire support in distant and dangerous places.
On the other hand, robodogs pose ethical and moral predicaments: who is accountable for their actions, and how can their adherence to the laws of warfare be ensured? They also raise security and privacy concerns about how to safeguard the data robotic dogs collect and how to prevent hacking or sabotage.
Conclusion
Teaching robots to traverse the world has conventionally been an extravagant challenge. Though the world has seen an increase in their manufacture, a robodog is still simply a machine and can never replace the feeling of owning a real dog. Designers state that intelligent social robots will never replace humans, though robots promise social harmony without social contact. They may also be incapable of managing complicated or unforeseen circumstances that require instinct or human decision-making. Nevertheless, owning robodogs is expected to become even more common and cost-effective in the coming decades as they advance and new algorithms are tested and implemented.
References:
- https://home.cern/news/news/engineering/introducing-cerns-robodog
- https://news.stanford.edu/2023/10/04/ai-approach-yields-athletically-intelligent-robotic-dog/
- https://nypost.com/2023/02/17/combat-ai-robodogs-follow-telepathic-commands-from-soldiers/
- https://www.popsci.com/technology/parkour-algorithm-robodog/
- https://ggba.swiss/en/cern-unveils-its-innovative-robodog-for-radiation-detection/
- https://www.themarshallproject.org/2022/12/10/san-francisco-killer-robots-policing-debate
- https://www.cbsnews.com/news/robo-dogs-therapy-bots-artificial-intelligence/
- https://news.stanford.edu/report/2023/08/01/robo-dogs-unleash-fun-joy-stanford-hospital/
- https://www.pcmag.com/news/lenovo-creates-six-legged-daystar-gs-robot
- https://www.foxnews.com/tech/new-breed-military-ai-robo-dogs-could-marines-secret-weapon
- https://www.wptv.com/news/national/new-york-police-will-use-four-legged-robodogs-again
- https://www.dailystar.co.uk/news/us-news/creepy-robodogs-controlled-soldiers-minds-29638615
- https://www.newarab.com/news/robodogs-part-israels-army-robots-gaza-war
- https://us.aibo.com/

Introduction
In today’s digital world, data has emerged as the new currency that influences global politics, markets, and societies. Companies, governments, and tech behemoths aim to control data because it accords them influence and power. However, a fundamental challenge brought about by this increased reliance on data is how to strike a balance between privacy protection and innovation and utility.
In recognition of these dangers, more than 200 Nobel laureates, scientists, and world leaders have recently signed the Global Call for AI Red Lines. Governments are urged by this initiative to create legally binding international regulations on artificial intelligence by 2026. Its goal is to stop AI from going beyond moral and security bounds, particularly in areas like political manipulation, mass surveillance, cyberattacks, and dangers to democratic institutions.
One way to address the threat to privacy is pseudonymization, which keeps data valuable for research and innovation by replacing personal identifiers with artificial ones. Pseudonymization thus directly advances the AI Red Lines initiative's mission of facilitating technological advancement while lowering the risks of data misuse and privacy violations.
The Red Lines of AI: Why do they matter?
The Global Call for AI Red Lines initiative represents a collective attempt to impose precaution before catastrophe by recognising red lines in the use of AI tools. What unites these risks is the absence of global safeguards. Some of these red lines can be understood as:
- Cybersecurity breaches in the form of exposure of financial and personal data due to AI-driven hacking and surveillance.
- Privacy invasions caused by relentless tracking.
- Generative AI creating realistic fake content, undermining trust in public discourse and fuelling misinformation.
- Algorithmic amplification of polarising content, threatening civic stability and democratic processes.
Legal Frameworks and Regulatory Landscape
The regulation of Artificial Intelligence remains fragmented across jurisdictions, leaving significant gaps. Some frameworks already provide partial guidance: the European Union’s Artificial Intelligence Act 2024 bans “unacceptable” AI practices, while a US-China agreement ensures that nuclear weapons remain under human, not machine, control. The UN General Assembly has adopted resolutions urging safe and ethical AI usage, though a binding global treaty remains elusive.
On the data protection front, the EU’s General Data Protection Regulation (GDPR) offers a clear definition of pseudonymisation under Article 4(5): a process in which personal data is altered so that it can no longer be attributed to an individual without additional information, which must be stored securely and separately. Importantly, pseudonymised data still qualifies as “personal data” under the GDPR. India’s Digital Personal Data Protection Act (DPDP) 2023 adopts a similar stance: it does not explicitly define pseudonymisation, but its broad definition of “personal data” can cover potentially reversible identifiers. Under Section 8(4) of the Act, companies must adopt appropriate technical and organisational measures. International instruments such as the OECD Principles on AI and the Council of Europe’s Convention 108+ emphasise accountability, transparency, and data minimisation. Collectively, these instruments point towards pseudonymisation as a best practice, though interpretations of its scope differ.
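The Article 4(5) process described above can be illustrated with a minimal, hypothetical sketch: each personal identifier is replaced with a keyed token, and the secret key plays the role of the "additional information" that must be held securely and separately. The names, key, and record fields here are purely illustrative, not any specific organisation's scheme.

```python
import hmac
import hashlib

# Illustrative only: in practice this key would live in a key vault,
# stored separately from the pseudonymised dataset (GDPR Art. 4(5)).
SECRET_KEY = b"stored-separately-in-a-key-vault"

def pseudonymise(identifier: str) -> str:
    """Deterministically map a personal identifier to an opaque pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()[:16]

# A hypothetical patient record before and after pseudonymisation.
record = {"name": "Asha Rao", "diagnosis": "hypertension"}
pseudonymised = {
    "patient_id": pseudonymise(record["name"]),  # opaque token, not a name
    "diagnosis": record["diagnosis"],            # analytic value preserved
}
```

Because the mapping is deterministic, the same person always receives the same `patient_id`, so records stay linkable for research; without the separately held key, the token cannot be traced back to the individual, which is also why such data still counts as "personal data" under the GDPR.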
Strategies for Corporate Implementation
For a company, pseudonymisation is not just about compliance; it is also a practical solution that offers measurable benefits. By pseudonymising data, businesses can:
- Enhance privacy protection by masking identifiers like names or IDs, reducing the impact of data breaches.
- Preserve data utility: unlike full anonymisation, pseudonymisation retains the patterns essential for analytics and innovation.
- Facilitate data sharing, allowing organisations to collaborate with partners and researchers while maintaining trust.
These benefits translate into competitive advantage: customers are more likely to trust organisations that prioritise data protection, and pseudonymisation enables firms to engage in cross-border collaboration without violating local data laws.
Balancing Privacy Rights and Data Utility
Balancing is the central dilemma. On one side lies the necessity of data utility: companies, researchers, and governments rely on large datasets to scale AI innovation. On the other lies the right to privacy, a non-negotiable principle protected under international human rights law.
Pseudonymisation offers a practical compromise by enabling the use of sensitive data while reducing privacy risks. In healthcare, for example, it allows researchers to work with patient information without exposing identities; in finance, it supports fraud detection without revealing customer details.
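The fraud-detection example can be sketched in a few lines, assuming transactions have already been pseudonymised so that customer names are opaque tokens. The token values and the threshold below are invented for illustration; real screening logic would be far richer.

```python
from collections import Counter

# Hypothetical pseudonymised transaction log: "customer" holds an opaque
# token, not a real identity. Analysts never see who is behind a token.
transactions = [
    {"customer": "tok_9f3a", "amount": 120.0},
    {"customer": "tok_9f3a", "amount": 980.0},
    {"customer": "tok_9f3a", "amount": 1500.0},
    {"customer": "tok_51bc", "amount": 40.0},
]

# Flag unusually active accounts (illustrative threshold: 3+ transactions).
counts = Counter(t["customer"] for t in transactions)
flagged = [token for token, n in counts.items() if n >= 3]
# "flagged" contains tokens only; re-identifying a flagged account
# requires the separately held key or mapping, under controlled access.
```

The design point is that linkability survives pseudonymisation: all of one customer's transactions share a token, so anomaly detection still works, while identity disclosure becomes a separate, gated step.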
Conclusion
The rapid rise of artificial intelligence has outpaced regulation, raising urgent questions about safety, fairness, and accountability. The global call to recognise AI red lines is a bold step towards setting universal boundaries. Yet, alongside pending global treaties, practical safeguards are also needed. Pseudonymisation exemplifies such a safeguard: it is legally recognised under the GDPR, increasingly relevant under India’s DPDP Act, and balances the twin imperatives of privacy protection and data utility. For organisations, adopting pseudonymisation is not only about ensuring regulatory compliance; it is also about building trust, ensuring resilience, and aligning with broader ethical responsibilities in the digital age. As the future of AI is debated, the guiding principles need to be clear. By embedding privacy-preserving techniques like pseudonymisation into AI systems, we can take a significant step towards a sustainable, ethical, and innovation-driven digital ecosystem.
References
- https://www.techaheadcorp.com/blog/shadow-ai-the-risks-of-unregulated-ai-usage-in-enterprises/
- https://planetmainframe.com/2024/11/the-risks-of-unregulated-ai-what-to-know/
- https://cepr.org/voxeu/columns/dangers-unregulated-artificial-intelligence
- https://www.forbes.com/sites/bernardmarr/2023/06/02/the-15-biggest-risks-of-artificial-intelligence/