#FactCheck: False Social Media Claim That Six Army Personnel Were Killed in a Retaliatory Attack by ULFA in Myanmar
Executive Summary:
A widely circulated social media claim alleges that six Assam Rifles soldiers were killed in a retaliatory attack by the Myanmar-based United Liberation Front of Asom (Independent), or ULFA (I). The post included a photograph of coffins draped in Indian flags, presented as the soldiers killed in the alleged attack. Although the post was widely shared, our fact-check confirms that the photograph is old and unrelated, and that no trustworthy reports indicate any such incident took place. The claim is therefore false and misleading.

Claim:
Social media users claimed that the banned militant outfit ULFA (I) killed six Assam Rifles personnel in retaliation for an alleged drone and missile strike by Indian forces on its camp in Myanmar, with captions reading, “Six Indian Army Assam Rifles soldiers have reportedly been killed in a retaliatory attack by the Myanmar-based ULFA group.” The claim was accompanied by a viral post showing coffins of Indian soldiers, which added emotional weight and perceived authenticity to the narrative.

Fact Check:
We began our research with a reverse image search of the photograph of flag-draped coffins shared with the viral claim. The image traces back to August 2013: it appears in a report by The Washington Post, confirming that the viral photograph shows a past incident in which five Indian Army soldiers were killed by Pakistani intruders in Poonch, Jammu and Kashmir, on August 6, 2013.

Leading outlets such as The Hindu and India Today offered no confirmation of the deaths of six Assam Rifles personnel. ULFA (I) did, however, issue a statement dated July 13, 2025, claiming that three of its leaders had been killed in a drone strike by Indian forces.

The same image also appears on Shutterstock, further confirming that it is old and not representative of any current actions by the United Liberation Front of Asom (ULFA).

The Indian Army denied the claim, with Defence PRO Lt Col Mahendra Rawat telling reporters there were "no inputs" of such an operation. Assam Chief Minister Himanta Biswa Sarma also denied that any cross-border military action had taken place. Therefore, the viral claim is false and misleading.

Conclusion:
The assertion that ULFA (I) killed six Assam Rifles personnel in a retaliatory strike is incorrect. The viral image used in these posts dates to a 2013 incident in Jammu & Kashmir and has no relevance to the present claim. There are no verified reports of any such killings, and both the Indian Army and the Assam government have categorically denied conducting, or having any knowledge of, a cross-border operation. This false narrative serves only to incite fear and spread misinformation, and should be disregarded.
- Claim: Report confirms the death of six Assam Rifles personnel in a ULFA-led attack.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
Microsoft has unveiled an ambitious roadmap for developing a quantum supercomputer with AI features, acknowledging the transformative power of quantum computing in solving complex societal challenges. Quantum computing has the potential to revolutionise AI by enhancing its capabilities and enabling breakthroughs in different fields. This blog examines Microsoft’s announcement, the potential applications of a quantum supercomputer, and the implications for the future of artificial intelligence (AI). It also considers the need for regulation in the realms of quantum computing and AI, the significant policies and considerations associated with these transformative technologies, and the potential benefits and challenges of their development and deployment.
What is Quantum Computing?
Quantum computing is an emerging field of computer science and technology that utilises principles from quantum mechanics to perform complex calculations and solve certain types of problems more efficiently than classical computers. While classical computers store and process information using bits, quantum computers use quantum bits or qubits.
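As a compact way to see the difference, a qubit's state is conventionally written as a superposition of the two classical basis states. The notation below is standard quantum mechanics, not anything specific to Microsoft's design:

```latex
% A qubit is a weighted superposition of the basis states |0> and |1>:
\[
\lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
\qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1 .
\]
% Measurement returns 0 with probability |alpha|^2 and 1 with probability
% |beta|^2; a register of n qubits spans a 2^n-dimensional state space,
% which is the source of the "quantum parallelism" discussed below.
```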
Interconnected Future
Quantum computing and artificial intelligence (AI) are two rapidly evolving fields with the potential to revolutionise technology and reshape entire industries. Quantum computing promises to expand AI’s capabilities well beyond their current limits, and integrating the two could lead to profound advancements across sectors such as healthcare, finance, and cybersecurity. This section explores that interdependence.
- Enhancing AI Capabilities:
Quantum computing holds the promise of significantly expanding the capabilities of AI systems. Traditional computers, based on classical physics and binary logic, struggle to solve complex problems because their computational requirements grow exponentially. Quantum computing, by contrast, leverages the principles of quantum mechanics to perform computations on quantum bits, or qubits, which can exist in multiple states simultaneously. This inherent parallelism and superposition property of qubits could potentially accelerate AI algorithms and enable more efficient processing of vast amounts of data (a toy simulation of this parallelism appears after this list).
- Solving Complex Problems:
The integration of quantum computing and AI has the potential to tackle complex problems that are currently beyond the reach of classical computing methods. Quantum machine learning algorithms, for example, could leverage quantum superposition and entanglement to analyse and classify large datasets more effectively. This could have significant applications in healthcare, where AI-powered quantum systems could aid in drug discovery, disease diagnosis, and personalised medicine by processing vast amounts of genomic and clinical data.
- Advancements in Finance and Optimisation:
The financial sector can benefit significantly from integrating quantum computing and AI. Quantum algorithms can be employed to optimise portfolios, improve risk analysis models, and enhance trading strategies. By harnessing the power of quantum machine learning, financial institutions can make more accurate predictions and informed decisions, leading to increased efficiency and reduced risks.
- Strengthening Cybersecurity:
Quantum computing can also play a pivotal role in bolstering cybersecurity defences. Quantum techniques can be employed to develop new cryptographic protocols that are resistant to quantum attacks. In conjunction with quantum computing, AI can further enhance cybersecurity by analysing massive amounts of network traffic and identifying potential vulnerabilities or anomalies in real time, enabling proactive threat mitigation.
- Quantum-Inspired AI:
Beyond the direct integration of quantum computing and AI, quantum-inspired algorithms are also being explored. These algorithms, designed to run on classical computers, draw inspiration from quantum principles and can improve performance in specific AI tasks. Quantum-inspired optimisation algorithms, for instance, can help solve complex optimisation problems more efficiently, enabling better resource allocation, supply chain management, and scheduling in various industries.
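To make the superposition and parallelism claims above concrete, here is a minimal sketch: a plain NumPy state-vector simulation (an illustrative toy, not Microsoft's technology or real quantum hardware) that applies a Hadamard gate to each of three qubits and shows the register carrying amplitude on all eight basis states at once:

```python
import numpy as np

# Single-qubit |0> state and the Hadamard gate.
zero = np.array([1.0, 0.0])
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)

# Build the three-qubit register |000> as a tensor (Kronecker) product.
state = np.kron(np.kron(zero, zero), zero)        # 8-dimensional vector

# Apply a Hadamard gate to every qubit.
H3 = np.kron(np.kron(H, H), H)
state = H3 @ state

# All 2^3 = 8 basis states now carry equal amplitude: this simultaneous
# weighting of every classical configuration is the parallelism the text
# refers to. Note that the classical simulation must store all 2^n
# amplitudes explicitly, exactly the blow-up a real quantum device avoids.
for index, amplitude in enumerate(state):
    print(f"|{index:03b}>  amplitude {amplitude:+.4f}  "
          f"probability {abs(amplitude) ** 2:.4f}")
```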
How Quantum Computing and AI Should Be Regulated
As quantum computing and artificial intelligence (AI) continue to advance, questions arise about the need for regulations to govern these technologies. The debate over regulating quantum computing and AI weighs potential risks and ethical implications against the balance between innovation and societal protection.
- Assessing Potential Risks: Quantum computing and AI bring unprecedented capabilities that can significantly impact various aspects of society. However, they also pose potential risks, such as unintended consequences, privacy breaches, and algorithmic biases. Regulation can help identify and mitigate these risks, ensuring these technologies’ responsible development and deployment.
- Ethical Implications: AI and quantum computing raise ethical concerns related to privacy, bias, accountability, and the impact on human autonomy. For AI, issues such as algorithmic fairness, transparency, and decision-making accountability must be addressed. Quantum computing, with its potential to break current encryption methods, requires regulatory measures to protect sensitive information. Ethical guidelines and regulations can provide a framework to address these concerns and promote responsible innovation.
- Balancing Innovation and Regulation: Regulating quantum computing and AI involves striking a balance between fostering innovation and protecting society’s interests. Excessive regulation could stifle technological advancement, hinder research, and impede economic growth. On the other hand, a lack of regulation may lead to the proliferation of unsafe or unethical applications. A thoughtful and adaptive regulatory approach is necessary, one that accounts for the dynamic nature of these technologies and allows for iterative improvements as understanding and risks evolve.
- International Collaboration: Given the global nature of quantum computing and AI, international collaboration in regulation is essential. Harmonising regulatory frameworks can avoid fragmented approaches, ensure consistency, and facilitate ethical and responsible practices across borders. Collaborative efforts can also address data privacy, security, and cross-border data flow challenges, enabling a more unified and cooperative approach towards regulation.
- Regulatory Strategies: Regulatory strategies for quantum computing and AI should adopt a multidisciplinary approach involving stakeholders from academia, industry, policymakers, and the public. Key considerations include:
- Risk-based Approach: Regulations should focus on high-risk applications while leaving space for low-risk experimentation and development.
- Transparency and Explainability: AI systems should be transparent and explainable to enable accountability and address concerns about bias, discrimination, and decision-making processes.
- Privacy Protection: Regulations should safeguard individual privacy rights, especially in quantum computing, where current encryption methods may be vulnerable.
- Testing and Certification: Establishing standards for the testing and certification of AI systems can ensure their reliability, safety, and adherence to ethical principles.
- Continuous Monitoring and Adaptation: Regulatory frameworks should be dynamic, regularly reviewed, and adapted to keep pace with the evolving landscape of quantum computing and AI.
Conclusion:
Integrating quantum computing and AI holds immense potential for advancing technology across diverse domains. Quantum computing can enhance the capabilities of AI systems, enabling the solution of complex problems, accelerating data processing, and revolutionising industries such as healthcare, finance, and cybersecurity. As research and development in these fields progress, collaborative efforts among researchers, industry experts, and policymakers will be crucial in harnessing the synergies between quantum computing and AI to drive innovation and shape a transformative future.

The regulation of quantum computing and AI is a complex and ongoing discussion. Striking the right balance between fostering innovation, protecting societal interests, and addressing ethical concerns is crucial. A collaborative, multidisciplinary approach to regulation, considering international cooperation, risk assessment, transparency, privacy protection, and continuous monitoring, is necessary to ensure these transformative technologies' responsible development and deployment.

Introduction
Law grows by confronting its absences; it heals through its own gaps. In an era of expanding digital boundaries and growing cyber harms, states often find themselves navigating a shared frontier without a common guide or settled lines of law. The United Nations General Assembly adopted the United Nations Convention against Cybercrime on December 24, 2024, and more than sixty governments attended the signing ceremony on 24 and 25 October this year, marking a moment of institutional regeneration and global commitment.
A New Lexicon for Global Order
The old liberal order is being strained by rising nationalism, economic fracturing, populism, and great-power competition, as often emphasised in the works of scholars like G. John Ikenberry and John Mearsheimer. Multilateral arrangements become more brittle in such circumstances. The new cybercrime convention therefore represents not only a legal tool but also a resurgence of international promise, a significant win for collective governance in an uncertain time. It serves as a reminder that institutions can be rebuilt even after they have been damaged.
In Discussion: The Fabric of the Digital Polis
The digital sphere has become a contentious area. On the one hand, the US and its allies support stakeholder governance, robust individual rights, and open data flows. On the other hand, nations like China and Russia describe a “post-liberal cyber order” based on state mediation, heavily regulated flows, and sovereignty. Instead of focusing on ideological dichotomies, India, positioned as both a rising power and a voice of the Global South, has offered a viewpoint grounded in supply-chain security, data localisation, and capacity building. Thus, rather than being merely a regulation, the treaty arises from a framework of strategic recalibration.
What Changed & Why it Matters
Until now, cybercrime cooperation has rested on regional accords such as the Budapest Convention. The new international convention, open to all UN member states, aims to standardise definitions, evidence sharing, and investigative instruments. Seventy-two states signed it at the ceremony in Hanoi in October 2025, demonstrating an unparalleled level of scope and determination. In addition to establishing structures for cooperative investigations, extradition, and the sharing of electronic evidence, it requires signatories to criminalise acts such as fraud, unlawful access to systems, data interference, and online child exploitation.
For the first time, a legally binding global architecture aims to harmonise cross-border evidence flows, mutual legal assistance, and national procedural laws. The convention offers genuine promise for collective defence at a time when cybercrime is no longer incidental but existential: attacks on hospitals, schools, and infrastructure are now common, according to the Global Observatory.
Holding the Line: India’s Deliberate Path in the Age of Cyber Multilateralism
India takes a contemplative rather than a reluctant stance towards the UN Cybercrime Treaty. Though it played an active role during the drafting sessions and lent its voice to the shaping of global cyber norms, New Delhi is yet to sign the convention. Subtle but intentional, the reluctance suggests a more comprehensive reflection, an evaluation of how international obligations correspond with domestic constitutional protections, especially the right to privacy upheld by the Supreme Court in Puttaswamy v. UOI (2017).
This pause is born of prudence. Policy circles speculate that the government is still assessing the treaty’s consequences for national data protection, surveillance regimes, and territorial sovereignty; officials have not given explicit reasons for India’s decision not to sign yet. India’s position has frequently been characterised by a careful balance between digital sovereignty and participation in cooperative international regimes. In earlier negotiations, India had even proposed clauses penalising “offensive messages” on social media, echoing the erstwhile Section 66A of the IT Act, 2000, but the suggestion found little international traction.
Digital-rights advocates such as Raman Jit Singh Chima of Access Now have warned that assurances that the treaty’s implementation will uphold constitutional privacy principles may be necessary before India eventually endorses it. In the absence of such pledges, he contends, the treaty’s wording might not fully meet India’s legal requirements.
UN Secretary-General António Guterres praised the agreement as “a powerful, legally binding instrument to strengthen our collective defences against cybercrime” during its signing in Hanoi. The challenge for India is not to reject that vision but to ensure that multilateral collaboration develops in accordance with constitutional values. The path forward is therefore one of assertion rather than absence: a careful march towards a cyber future that protects both freedom and sovereignty.

AI has grown manifold in the past decade, and so has our reliance on it. A MarketsandMarkets study estimates that the AI market will reach $1,339 billion by 2030. Further, Statista reports that ChatGPT amassed more than a million users within the first five days of its release, showcasing its rapid integration into our lives. This development and integration carry risks. Consider this response from Google’s AI chatbot Gemini to a student’s homework query: “You are not special, you are not important, and you are not needed…Please die.” In other instances, AI has suggested eating rocks for minerals or adding glue to pizza sauce. Such nonsensical outputs are not just absurd; they are dangerous. They underscore the urgent need to address the risks of unrestrained reliance on AI.
AI’s Rise and Its Limitations
The swiftness of AI’s rise, fuelled by OpenAI's GPT series, has revolutionised fields like natural language processing, computer vision, and robotics. Generative AI models like GPT-3, GPT-4, and GPT-4o, with their advanced language understanding, learn from data, recognise patterns, predict outcomes, and improve through trial and error. Despite their efficiency, however, these models are not infallible. Seemingly harmless outputs can spread toxic misinformation or cause harm in critical areas like healthcare or legal advice. These instances underscore the dangers of blindly trusting AI-generated content and highlight the need to understand its limitations.
Defining the Problem: What Constitutes “Nonsensical Answers”?
Nonsensical AI responses range from harmless errors, such as a wrong answer to a trivia question, to critical failures as damaging as incorrect legal advice.
AI algorithms sometimes produce outputs that are not grounded in their training data, are incorrectly decoded by the transformer, or do not follow any identifiable pattern. Such an output is known as a nonsensical answer, and the phenomenon is known as an “AI hallucination”. Hallucinations can take the form of factual inaccuracies, irrelevant information, or contextually inappropriate responses.
A significant source of hallucination in machine-learning systems is bias in the input they receive. If an AI model is trained on biased or unrepresentative datasets, it may hallucinate and produce results that reflect those biases. These models are also vulnerable to adversarial attacks, in which bad actors manipulate a model's output by tweaking the input data in a subtle manner.
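To make the adversarial-attack point concrete, the sketch below uses a toy NumPy logistic-regression model (a hypothetical stand-in for a real system, with made-up weights, not any production model) and an FGSM-style gradient-sign step: a thousand tiny, coordinated feature changes, each only 0.01 in size, flip a confident prediction:

```python
import numpy as np

rng = np.random.default_rng(42)
d = 1000

# Toy logistic-regression "model" with fixed random weights.
w = rng.normal(size=d)

def predict_proba(x):
    """P(class = 1) under the toy model."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

# A clean input, deliberately nudged so the model is confident in class 1.
x_clean = rng.normal(size=d)
x_clean += (3.0 - w @ x_clean) / (w @ w) * w   # force the score to +3.0
print(f"clean prediction:       P(1) = {predict_proba(x_clean):.4f}")

# FGSM-style perturbation: step every feature a tiny amount (0.01) against
# the gradient of the score, which for this linear model is simply w.
epsilon = 0.01
x_adv = x_clean - epsilon * np.sign(w)

# The micro-changes shift the score by epsilon * sum(|w|) (roughly 8 here),
# enough to flip a confident "class 1" into a confident "class 0".
print(f"adversarial prediction: P(1) = {predict_proba(x_adv):.4f}")
print(f"largest single-feature change: {np.abs(x_adv - x_clean).max():.3f}")
```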
The Need for Policy Intervention
Nonsensical AI responses risk eroding user trust and causing harm, highlighting the need for accountability despite AI’s opaque and probabilistic nature. Different jurisdictions address these challenges in varied ways. The EU’s AI Act enforces stringent reliability standards through a risk-based, transparency-focused approach. The U.S. emphasises ethical guidelines and industry-driven standards. India’s DPDP Act tackles AI safety indirectly through data protection, focusing on the principles of accountability and consent. While the EU prioritises compliance, the U.S. and India balance innovation with safeguards, reflecting the diversity of national approaches to AI regulation.
Where Do We Draw the Line?
The critical question is whether AI policies should demand perfection or accept a reasonable margin for error. Striving for flawless AI responses may be impractical, but a well-defined framework can balance innovation and accountability. Adopting a few straightforward measures can foster an ecosystem in which AI develops responsibly while minimising the societal risks it can pose. Key measures include:
- Ensure that users are informed about AI's capabilities and limitations; transparent communication is key to this.
- Implement regular audits and rigorous quality checks to maintain high standards and, in turn, prevent lapses.
- Establish robust liability mechanisms to address harms caused by AI-generated material, such as misinformation. This fosters trust and accountability.
CyberPeace Key Takeaways: Balancing Innovation with Responsibility
The rapid growth of AI offers immense opportunities, but development must proceed responsibly. Overregulation can stifle innovation; a lax approach, on the other hand, could lead to unintended societal harms or disruptions.
Maintaining a balanced approach to development is essential, and collaboration among stakeholders such as governments, academia, and the private sector is important. Together they can establish guidelines, promote transparency, and create liability mechanisms. Regular audits and user education can build trust in AI systems. Policymakers, meanwhile, must prioritise user safety and trust without hindering creativity when framing regulatory policy.
By fostering ethical AI development while enabling innovation, we can create a future that benefits us all. Striking this balance will ensure AI remains a tool for progress, underpinned by safety, reliability, and human values.
References
- https://timesofindia.indiatimes.com/technology/tech-news/googles-ai-chatbot-tells-student-you-are-not-needed-please-die/articleshow/115343886.cms
- https://www.forbes.com/advisor/business/ai-statistics/#2
- https://www.reuters.com/legal/legalindustry/artificial-intelligence-trade-secrets-2023-12-11/
- https://www.indiatoday.in/technology/news/story/chatgpt-has-gone-mad-today-openai-says-it-is-investigating-reports-of-unexpected-responses-2505070-2024-02-21