#FactCheck: Edited Broadcast Misused to Spread False Assam Political Rift Claim
Executive Summary:
A video from an India TV news show related to the Assam elections is going viral on social media. In the clip, anchor Meenakshi Joshi is allegedly seen claiming that there is a rift between the BJP and the RSS in Assam. The video further suggests that RSS chief Mohan Bhagwat wrote a letter to Prime Minister Narendra Modi stating that former Congress members have taken over the BJP, and that RSS volunteers would not work for the party in Assam. However, research by CyberPeace found that the viral video is edited and misleading. The original video contains no such claims.
Claim:
A social media user, Ajit Singh, shared the video on X with the caption: “The core idea of today’s BJP is to capture power by any means. We have been saying this for long, and now even RSS has accepted that BJP in Assam has been taken over by Congress mindset.”

Fact Check:
To verify the claim, we searched relevant keywords about the alleged letter by RSS chief Mohan Bhagwat to Prime Minister Narendra Modi. However, we found no credible media reports supporting this claim. We then checked the YouTube channel of India TV but could not find the viral clip there. During the search, we did find a similar video from Meenakshi Joshi’s show; the portion seen in the viral clip appears at the beginning of that video.

In the original video, the anchor is discussing the announcement of election dates in five states. There is no mention of any rift between the BJP and RSS in Assam.
Conclusion:
The viral India TV video claiming a rift between the BJP and RSS in Assam is edited and misleading. The original broadcast was about election dates in five states and did not include any such claims.

Executive Summary:
A viral social media post claims to show a mosque being set on fire in India, contributing to growing communal tensions and misinformation. However, a detailed fact-check has revealed that the footage actually comes from Indonesia. The spread of such misleading content can dangerously escalate social unrest, making it crucial to rely on verified facts to prevent further division and harm.

Claim:
The viral video claims to show a mosque being set on fire in India, suggesting it is linked to communal violence.

Fact Check:
The investigation revealed that the video was originally posted on 8th December 2024. A reverse image search allowed us to trace the source and confirm that the footage is not linked to any recent incidents. The original post, written in Indonesian, explained that the fire took place at the Central Market in Luwuk, Banggai, Indonesia, not in India.

Conclusion: The viral claim that a mosque was set on fire in India is false. The video is actually from Indonesia and has been intentionally misrepresented to circulate false information. This event underscores the need to verify information before spreading it. Misinformation can spread quickly and cause harm. By taking the time to check facts and rely on credible sources, we can prevent false information from escalating and protect harmony in our communities.
- Claim: The video shows a mosque set on fire in India
- Claimed On: Social Media
- Fact Check: False and Misleading

AI has grown manifold in the past decade, and so has our reliance on it. A MarketsandMarkets study estimates the AI market will reach $1,339 billion by 2030. Further, Statista reports that ChatGPT amassed more than a million users within the first five days of its release, showcasing its rapid integration into our lives. This development and integration carry their own risks. Consider this response from Google’s AI chatbot Gemini to a student’s homework inquiry: “You are not special, you are not important, and you are not needed…Please die.” In other instances, AI has suggested eating rocks for minerals or adding glue to pizza sauce. Such nonsensical outputs are not just absurd; they’re dangerous. They underscore the urgent need to address the risks of unrestrained AI reliance.
AI’s Rise and Its Limitations
The swiftness of AI’s rise, fueled by OpenAI’s GPT series, has revolutionised fields like natural language processing, computer vision, and robotics. Generative AI models like GPT-3, GPT-4, and GPT-4o, with their advanced language understanding, learn from data, recognise patterns, predict outcomes, and improve through trial and error. However, despite their efficiency, these models are not infallible. Seemingly harmless outputs can spread toxic misinformation or cause harm in critical areas like healthcare or legal advice. These instances underscore the dangers of blindly trusting AI-generated content and highlight the need to understand its limitations.
Defining the Problem: What Constitutes “Nonsensical Answers”?
AI algorithms sometimes produce outputs that are not grounded in their training data, are incorrectly decoded by the transformer, or do not follow any identifiable pattern. Such a nonsensical answer is commonly called an “AI hallucination”. Hallucinations can take the form of factual inaccuracies, irrelevant information, or contextually inappropriate responses, and they range from harmless errors, such as a wrong answer to a trivia question, to critical failures as damaging as incorrect legal advice.
A significant source of hallucination in machine learning algorithms is bias in the input data. If an AI model is trained on biased or unrepresentative datasets, it may hallucinate and produce results that reflect those biases. These models are also vulnerable to adversarial attacks, wherein bad actors manipulate an AI model’s output by tweaking the input data in a subtle manner.
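The adversarial idea described above can be illustrated with a toy sketch. The weights and input below are entirely hypothetical (no real model is involved); the example only shows how stepping the input in the sign of the loss gradient, a fast-gradient-sign-style perturbation, can flip a simple classifier's decision:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier with hypothetical, fixed weights (no training involved).
w = np.array([2.0, -3.0, 1.0])

def predict(x):
    return sigmoid(w @ x)  # probability of class 1

# A benign input the model classifies as class 1 (probability > 0.5).
x = np.array([0.5, 0.1, 0.2])

# For label y = 1, the gradient of the log-loss with respect to the input
# is (p - y) * w. Stepping the input in the sign of that gradient increases
# the loss, pushing the prediction toward the wrong class.
y = 1.0
grad = (predict(x) - y) * w
x_adv = x + 0.2 * np.sign(grad)  # epsilon = 0.2: a small, targeted tweak

print(predict(x))      # > 0.5: original input classified as class 1
print(predict(x_adv))  # < 0.5: subtly perturbed input flips to class 0
```

Real attacks work the same way against far larger models, where perturbations can be small enough to be imperceptible to humans while still flipping the output.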
The Need for Policy Intervention
Nonsensical AI responses risk eroding user trust and causing harm, highlighting the need for accountability despite AI’s opaque and probabilistic nature. Different jurisdictions address these challenges in varied ways. The EU’s AI Act enforces stringent reliability standards with a risk-based and transparent approach. The U.S. emphasises creating ethical guidelines and industry-driven standards. India’s DPDP Act indirectly tackles AI safety through data protection, focusing on the principles of accountability and consent. While the EU prioritises compliance, the U.S. and India balance innovation with safeguards. This reflects the diverse approaches nations take to AI regulation.
Where Do We Draw the Line?
The critical question is whether AI policies should demand perfection or accept a reasonable margin for error. Striving for flawless AI responses may be impractical, but a well-defined framework can balance innovation and accountability. Adopting these simple measures can lead to the creation of an ecosystem where AI develops responsibly while minimising the societal risks it can pose. Key measures to achieve this include:
- Ensure that users are informed about AI’s capabilities and limitations; transparent communication is key to this.
- Implement regular audits and rigorous quality checks to maintain high standards and prevent lapses.
- Establish robust liability mechanisms to address harms caused by AI-generated misinformation; this fosters trust and accountability.
CyberPeace Key Takeaways: Balancing Innovation with Responsibility
The rapid growth of AI offers immense opportunities, but development must proceed responsibly. Overregulation can stifle innovation; being lax, on the other hand, could lead to unintended societal harm or disruption.
Maintaining a balanced approach to development is essential. Collaboration between stakeholders such as governments, academia, and the private sector is important. They can ensure the establishment of guidelines, promote transparency, and create liability mechanisms. Regular audits and promoting user education can build trust in AI systems. Furthermore, policymakers need to prioritise user safety and trust without hindering creativity while making regulatory policies.
We can create a future that is AI-development-driven and benefits us all by fostering ethical AI development and enabling innovation. Striking this balance will ensure AI remains a tool for progress, underpinned by safety, reliability, and human values.
References
- https://timesofindia.indiatimes.com/technology/tech-news/googles-ai-chatbot-tells-student-you-are-not-needed-please-die/articleshow/115343886.cms
- https://www.forbes.com/advisor/business/ai-statistics/#2
- https://www.reuters.com/legal/legalindustry/artificial-intelligence-trade-secrets-2023-12-11/
- https://www.indiatoday.in/technology/news/story/chatgpt-has-gone-mad-today-openai-says-it-is-investigating-reports-of-unexpected-responses-2505070-2024-02-21

Introduction
CERT-In (the Indian Computer Emergency Response Team) has recently issued the “Guidelines on Information Security Practices” for Government Entities for a Safe & Trusted Internet. The guidelines come at a critical time, when the Draft Digital India Bill, aimed at revamping the legal framework of Indian cyberspace, is about to be released. They lay down the policy framework and the requirements for critical infrastructure for all government organisations and institutions, to improve the overall cyber security of the nation.
What is Cert-In?
A Computer Emergency Response Team (CERT) is a group of information security experts responsible for the protection against, detection of and response to an organisation’s cybersecurity incidents. A CERT may focus on resolving data breaches and denial-of-service attacks and providing alerts and incident handling guidelines. CERTs also conduct ongoing public awareness campaigns and engage in research aimed at improving security systems. The Ministry of Electronics and Information Technology (MeitY) oversees CERT-In. It regularly releases alerts to help individuals and companies safeguard their data, information, and ICT (Information and Communications Technology) infrastructure.
The Indian Computer Emergency Response Team (CERT-In) has been established and appointed as the national agency for cyber incidents and cyber security incidents under the provisions of section 70B of the Information Technology (IT) Act, 2000.
CERT-In requests information from service providers, intermediaries, data centres, and body corporates to coordinate response actions and emergency procedures regarding cyber security incidents. It is the focal point for incident reporting and offers round-the-clock security services. It manages tracked and reported cyber incidents while continuously analysing cyber risks, strengthening the security barriers of the Indian Internet domain.
Background
India is fast becoming one of the world’s largest connected nations, with over 80 crore Indians (Digital Nagriks) presently connected and using the Internet and cyberspace, a number expected to touch 120 crore in the coming years. The Digital Nagriks of the country use the Internet for business, education, finance, and various applications and services, including digital government services. The Internet drives growth and innovation, but it has also seen a rise in cybercrimes, user harm, and other challenges to online safety. The Government’s policies are aimed at ensuring an Open, Safe & Trusted, and Accountable Internet for its users, and the Government is fully cognizant of the growing cyber security threats and attacks.
It is the Government of India’s objective to ensure that Digital Nagriks experience a Safe & Trusted Internet. Alongside the ubiquitous application of Information & Communication Technologies (ICT) in almost all facets of service delivery and operations, continuously evolving cyber threats have become a concern for the Government. Cyber-attacks can come in the form of malware, ransomware, phishing, data breaches, etc., that adversely affect an organisation’s information and systems. Cyber threats leading to attacks or incidents can compromise the confidentiality, integrity, and availability of an organisation’s information and systems, with far-reaching impact on essential services and national interests. To protect against cyber threats, it is important for government entities to implement strong cybersecurity measures and follow best practices. As the ICT infrastructure of government entities is one of the preferred targets of malicious actors, the responsibility for implementing good cyber security practices, protecting computers, servers, applications, electronic systems, networks, and data from digital attacks, also remains with the ICT asset owner, i.e., the government entity.
What are the new Guidelines about?
The First Schedule of the Government of India (Allocation of Business) Rules, 1961 lists a number of Ministries, Departments, Secretariats, and Offices, along with their attached and subordinate offices, all of which are subject to the guidelines. The guidelines also cover all governmental organisations, public-sector undertakings, and other governmental entities under their administrative control.
“The government has launched a number of steps to guarantee an accessible, trustworthy, and accountable digital environment. With a focus on capabilities, systems, human resources, and awareness, we are extending and speeding our work in the area of cyber security,” said Rajeev Chandrasekhar, Minister of State for Electronics, Information Technology, Skill Development, and Entrepreneurship.
The Recommendations
- The guidelines cover various security domains, including network security, identity and access management, application security, data security, third-party outsourcing, hardening procedures, security monitoring, incident management, and security audits.
- For instance, on desktop, laptop, and printer security in the workplace, the guidelines advise using only a Standard User (non-administrator) account for regular work. Users may be granted administrative access only with the CISO’s consent.
- Use lengthy passwords of at least eight characters that combine capital letters, small letters, numerals, and special characters. Never save usernames, passwords, or payment-related data in your web browser.
- The guidelines also include recommendations created by the National Informatics Centre for Chief Information Security Officers (CISOs) and staff of Central government Ministries/Departments to improve cyber security and cyber hygiene, in addition to adherence to industry best practices.
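The password rule in the list above is easy to enforce programmatically. The sketch below is a minimal illustration of that rule; the function name and the exact special-character set are assumptions for this example, not taken from the CERT-In guidelines themselves:

```python
import string

def meets_password_policy(password: str) -> bool:
    """Check the rule sketched above: at least eight characters, combining
    capital letters, small letters, numerals, and special characters."""
    return (
        len(password) >= 8
        and any(c.isupper() for c in password)   # capital letter
        and any(c.islower() for c in password)   # small letter
        and any(c.isdigit() for c in password)   # numeral
        and any(c in string.punctuation for c in password)  # special character
    )

print(meets_password_policy("Gov@2024secure"))  # True: satisfies all four rules
print(meets_password_policy("password"))        # False: no capital, numeral, or special character
```

A check like this could sit in an organisation's account-provisioning workflow so that weak passwords are rejected before they ever reach production systems.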
Conclusion
The government has been proactive in contemporary times in eradicating the menace of cybercrimes and threats from Indian cyberspace, and hence we have seen a series of new bills and policies introduced by the Ministry of Electronics and Information Technology and various other government organisations like CERT-In and TRAI. These policies are aimed at remaining relevant to the times and to current technologies. The threats from emerging technologies like Web 3.0 cannot be ignored; active netizen participation and synergy between government and corporates will lead to a better and improved cyber ecosystem in India.