#FactCheck: False Social Media Claim That Six Army Personnel Were Killed in a Retaliatory Attack by ULFA in Myanmar
Executive Summary:
A widely circulated social media claim alleges that six Assam Rifles soldiers were killed in a retaliatory attack by the Myanmar-based United Liberation Front of Asom (Independent), or ULFA (I). The post included a photograph of coffins draped in Indian flags, presented as the soldiers killed in the alleged incident. The post was widely shared; however, our fact-check confirms that the photograph is old and unrelated, and that no credible reports indicate any such incident took place. The claim is therefore false and misleading.

Claim:
Social media users claimed that the banned militant outfit ULFA (I) killed six Assam Rifles personnel in retaliation for an alleged drone and missile strike by Indian forces on their camp in Myanmar, with captions such as “Six Indian Army Assam Rifles soldiers have reportedly been killed in a retaliatory attack by the Myanmar-based ULFA group.” The claim was accompanied by a viral image showing coffins of Indian soldiers, which added emotional weight and perceived authenticity to the narrative.

Fact Check:
We began our research with a reverse image search of the photograph of flag-draped coffins shared with the viral claim. The image can be traced back to August 2013: a report in The Washington Post confirms that it shows a past incident in which five Indian Army soldiers were killed by Pakistani intruders in Poonch, Jammu and Kashmir, on August 6, 2013.
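Reverse image search engines typically match a photo by comparing compact perceptual fingerprints rather than exact file bytes, which is why a recirculated image can be traced even after resizing or recompression. As a hedged illustration of the underlying idea (not the specific tool used in this fact-check), the sketch below implements a minimal average-hash comparison in Python; the 8×8 pixel grids are hypothetical stand-ins for downscaled grayscale images.

```python
# Minimal average-hash (aHash) sketch: how near-duplicate detection can
# recognise a recirculated photo despite recompression. Inputs are assumed
# to be 8x8 grayscale grids (values 0-255); a real pipeline would first
# downscale and grey-convert the actual image files.

def average_hash(pixels):
    """Return a 64-bit fingerprint: bit set where a pixel exceeds the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the same image."""
    return bin(h1 ^ h2).count("1")

# Hypothetical data: an "archive" image and a slightly brightened copy,
# standing in for the 2013 photo and its re-uploaded variant.
archive = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
recompressed = [[min(255, v + 3) for v in row] for row in archive]

d = hamming_distance(average_hash(archive), average_hash(recompressed))
print("bit distance:", d)  # a small distance flags a near-duplicate
```

Because the hash thresholds each pixel against the image's own mean, uniform brightness shifts and mild compression noise leave the fingerprint largely unchanged, which is what makes old photos traceable across re-uploads.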

Reports in The Hindu and India Today likewise offered no confirmation of the deaths of six Assam Rifles personnel. ULFA (I) did, however, issue a statement dated July 13, 2025, claiming that three of its leaders had been killed in a drone strike by Indian forces.

A listing on Shutterstock further confirms that the coffin image is old and does not depict any recent action involving the United Liberation Front of Asom (ULFA).

The Indian Army denied the claim, with Defence PRO Lt Col Mahendra Rawat telling reporters there were "no inputs" of such an operation. Assam Chief Minister Himanta Biswa Sarma also denied that any cross-border military action had taken place. The viral claim is therefore false and misleading.

Conclusion:
The assertion that ULFA (I) killed six Assam Rifles soldiers in a retaliatory strike is incorrect. The viral image used in these posts dates from 2013 in Jammu & Kashmir and has no connection to the present. There are no verified reports of any such killings, and both the Indian Army and the Assam government have categorically denied conducting or knowing of any cross-border operation. This false narrative serves only to incite fear and spread misinformation; please do not share it.
- Claim: Report confirms the death of six Assam Rifles personnel in a ULFA-led attack.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
China is on the verge of unveiling a new policy addressing how Artificial Intelligence (AI) influences employment. On January 27, 2026, the Ministry of Human Resources and Social Security (MOHRSS) announced it would publish a paper on AI's contribution to the labour and employment markets. The policy will include provisions to help impacted industries, expand assistance to young workers and graduates, and develop interdisciplinary training programmes to equip individuals for jobs in an AI-enabled economy. The authorities have stressed that AI does not kill jobs but changes them, and that education will be needed to help employees adjust to these changes.
This announcement reflects a more proactive policy on AI-driven changes in labour, showing that China intends to sustain economic modernisation through AI while preserving social stability. It also reflects wider international concerns about the pace of automation and the need to rethink labour and training policy.
AI and the Changing Nature of Work
AI is transforming the content and nature of work across industries. AI systems enhance productivity in functions such as data processing, logistics, and customer service, while altering the tasks humans perform. Existing studies indicate that although AI can automate routine activities, it may also generate new occupations requiring complex thinking, AI oversight, and interpersonal skills such as empathy, creativity, and problem-solving.
This is the key nuance in China's policy framing. Authorities stress that AI does not necessarily cause mass unemployment; rather, it transforms jobs and requires workers to adapt to new task profiles. This perspective aligns with recent reports from global research organisations, which describe AI's effects as transformational rather than necessarily destructive. For example, the World Economic Forum's Future of Jobs Report 2023 notes that technological change will create jobs that did not exist a decade ago, and that retraining and upskilling will be instrumental in accessing those opportunities.
Key Components of China’s Policy Response
China’s forthcoming policy is expected to focus on three main areas that address both current workforce needs and future readiness.
Support for Key Industries
The policy will offer targeted assistance to sectors where AI adoption is gathering pace. Industries such as advanced manufacturing, high-tech services, and online logistics will receive specialised support to help companies use AI to complement human labour rather than simply replace it. By channelling resources to growth areas, the Chinese government seeks to balance industrial upgrading with employment.
Assistance for Youth and Graduates
Young people and recent graduates are entering a labour market that is changing rapidly. The policy aims to expand support services for this group through career counselling, internships, and training programmes aligned with changing employer demands. According to the McKinsey Global Institute, young workers worldwide can face disproportionate disruption if training opportunities are scarce, making early-career support imperative.
Interdisciplinary Talent Development
The Chinese strategy focuses on interdisciplinary training that blends domain knowledge with AI literacy and digital literacy. This reflects the recognition that hybrid skills will be required in the future. The Organisation for Economic Co-operation and Development suggests that workers who can bridge the technical and non-technical elements of work will be better positioned to thrive in the AI age.
These components show that China’s strategy is not simply to protect existing jobs but to help workers transition to roles that leverage AI’s strengths.
Economy, Stability and Strategic Modernisation
The policy treats technological transition as part of wider economic planning. It indicates that the government regards AI as a structural change that can be anticipated and shaped by policy, rather than as an external shock.
This contrasts with labour market responses in some other countries, where policy has been largely reactive, arriving only after job losses have already materialised. China's initiative suggests an approach of anticipating change rather than merely reacting to it.
Global Comparisons and Shared Challenges
Governments worldwide are testing options for adapting to AI's effects on work. The European Union is considering individual learning accounts and portable training benefits, which would help workers access reskilling opportunities throughout their careers. In the US, public-private partnerships are working to align workforce development with technological deployment.
China's strategy shares some of these components but stands out for its integration with national planning processes. By connecting workforce policy to broader innovation and economic goals, China intends AI adoption to serve the common good rather than deepen division.
Meanwhile, balancing labour supply with technological demand is a distinct challenge for countries with ageing populations and shrinking workforces. In China, with its large labour force and ongoing demographic change, the timing and design of policy are particularly significant.
Practical Challenges and Risks
The success of China’s emerging policy will depend on effective implementation. Several practical issues will require careful attention:
Ensuring Equitable Access to Training
China's labour force is diverse, spanning urban technology hubs and rural areas. Ensuring that upskilling opportunities reach workers across this spectrum will be paramount to prevent regional inequalities from worsening. Global research on reskilling shows that rural and low-income groups often lack access to training even where programmes exist.
Aligning Training with Labour Demand
Upskilling programmes must be tied to market requirements. Disconnected training risks producing skills that are obsolete or inapplicable in real work settings. Experience in emerging economies shows that involving employers in training design improves learners' placement outcomes.
Private Sector Participation
Private companies will be essential to translating the policy into employment outcomes. Incentives for firms to invest in worker training, internships, and apprenticeships will help workers move smoothly into AI-augmented roles.
A Model for AI Workforce Policy
China's policy can serve as an example for other countries seeking to balance technological advancement with labour market security. It acknowledges that AI's effect on employment is not only a technical or economic problem but also a social challenge. By foregrounding training, support, and coordinated action, China aims to create a future in which people are prepared for change rather than displaced by it.
This strategy aligns with the recommendations of international organisations such as the World Bank and the OECD, which emphasise lifelong learning, flexible labour markets, and proactive investment in human capital as the pillars of future labour policy.
Conclusion
Artificial intelligence will continue to reshape work around the world. China’s forthcoming policy, which emphasises support, training and strategic integration of AI into labour markets, reflects a proactive and holistic view of technological transition. Other countries could benefit from studying this approach, especially in terms of linking workforce development with innovation goals.
By anticipating disruption and investing in people as well as technology, policymakers can help ensure that AI becomes a driver of shared economic opportunity rather than a source of exclusion. The balance between innovation and employment will shape not only economic outcomes but also social cohesion in the years ahead.

Executive Summary:
Recently, a viral social media post alleged that the Delhi Metro Rail Corporation Ltd. (DMRC) had increased ticket prices following the BJP’s victory in the Delhi Legislative Assembly elections. After thorough research and verification, we have found this claim to be misleading and entirely baseless. Authorities have asserted that no fare hike has been declared.
Claim:
Viral social media posts have claimed that the Delhi Metro Rail Corporation Ltd. (DMRC) increased metro fares following the BJP's victory in the Delhi Legislative Assembly elections.


Fact Check:
After thorough research, we conclude that the claims regarding a fare hike by the Delhi Metro Rail Corporation Ltd. (DMRC) following the BJP's victory in the Delhi Legislative Assembly elections are misleading. Our review of DMRC's official website and social media handles found no mention of any fare increase. Furthermore, DMRC's official X (formerly Twitter) handle has clarified that no such fare hike has been announced. We urge the public to rely on verified sources for accurate information and refrain from spreading misinformation.

Conclusion:
Upon examining the alleged fare hike, it is evident that the increase pertains to Bengaluru, not Delhi. To verify this, we reviewed the official website of Bangalore Metro Rail Corporation Limited (BMRCL) and cross-checked the information with appropriate evidence, including relevant images. Our findings confirm that no fare hike has been announced by the Delhi Metro Rail Corporation Ltd. (DMRC).

- Claim: Delhi Metro fare hike after the BJP's victory in the Delhi election
- Claimed On: X (Formerly Known As Twitter)
- Fact Check: False and Misleading

Overview of the Advisory
On 18 November 2025, the Ministry of Information and Broadcasting (I&B) published an advisory addressed to all private satellite television channels in India. The advisory is a key institutional intervention concerning the broadcast of sensitive content related to the blast at the Red Fort on 10 November 2025. It was issued after the Ministry observed that some news channels had been broadcasting content on alleged persons involved in the Red Fort blast, including justifications of their acts of violence and information or video on explosive material. Such broadcasting at a critical moment may inadvertently encourage or incite violence, disrupt public order, and pose risks to national security.
Key Instructions under the Advisory
The advisory directs television channels to ensure strict compliance with the Programme and Advertising Code under the Cable Television Networks (Regulation) Act, 1995. Channels are advised to exercise the utmost discretion and sensitivity when reporting on alleged perpetrators of violence, particularly where content could justify acts of violence or provide instructional material on making explosives. The core requirement is strict adherence to the Programme and Advertising Code as stipulated in the Cable Television Network Rules. In particular, broadcasters should not air programming that:
- Contains anything obscene, defamatory, deliberately false, or suggestive innuendos and half-truths.
- Is likely to encourage or incite violence, contains anything against the maintenance of law and order, or promotes an anti-national attitude.
- Affects the integrity of the Nation.
- Could aid, abet or promote unlawful activities.
Responsible Reporting Framework
The advisory does not constitute outright censorship; rather, it establishes a self-regulatory approach in which channels exercise discretion and sensitivity, distinguishing legitimate news coverage from content that crosses the threshold from information dissemination to incitement.
Why This Advisory is Important in a Digital Age
In modern media systems, the line between traditional broadcast journalism and digital virality has eroded. Television content is no longer confined to scheduled programmes or cable distribution channels. A single news segment, especially a dramatic or contentious one, can be clipped, edited, and repackaged across social media networks within minutes of airing, often stripped of context, editorial discretion, or timing indicators.
This gives sensitive content a multiplier effect. A short news item featuring a suspect justifying violence, or showing explosive materials, can reach millions on YouTube, WhatsApp, Twitter/X, and Facebook, spreading organically and through algorithmic amplification. Studies have shown that misinformation and sensational reporting circulate far faster than factual corrections, a pattern observed during recent conflicts and crises in India and elsewhere.
Vulnerabilities of Information Ecosystems
The advisory is issued in an information environment characterised by:
- Rapid viral mechanisms: content spreads faster than it can be verified.
- Algorithm-driven amplification: platform mechanisms boost emotionally charged content.
- Coordinated amplification networks: organised groups work to make posts and videos go viral in order to set a narrative for the general public.
- Deepfake and synthetic media risks: original broadcasts can be manipulated and reposted with false attribution.
Interconnection with Cybersecurity and National Security
Unverified or sensationalised reporting of security incidents creates specific vulnerabilities:
- Trust Erosion: Public trust is broken when audiences see broadcasters airing unverified claims or emotional accounts as fact. The damage extends to security agencies, law enforcement, and government institutions themselves. Distrust of official information creates gaps that are filled by rumours, conspiracy theories, and hostile narratives.
- Cognitive Fragmentation: Misinformation produces multiple versions of the truth. Citizens receive different narratives depending on the media sources they follow. This fragmentation complicates organising a collective societal response to an actual security threat, because populations can be mobilised around misguided stories rather than accurate information.
- Radicalisation Pipeline: Individuals seeking ideological justification for violent action may encounter media-derived material, carefully distorted to present terrorism as a legitimate political or religious stance.
How Social Instability Is Exploited in Cyber Operations and Influence Campaigns
Misinformation creates exploitable vulnerability in three phases:
- First, conflicting unverified accounts fragment the information environment; populations are presented with contradictory versions of events by different media sources.
- Second, institutional trust in media and security agencies is shaken when false information is later corrected, creating an information vacuum.
- Third, in such a distrustful and confused setting, populations become susceptible to organised manipulation by malicious actors.
Sensationalised broadcasting hands adversaries content assets, narrative frameworks, and information gaps that they can use to advance destabilisation campaigns. Responsible broadcasting directly counters these mechanisms of exploitation.
Media Literacy and Audience Responsibility
Structural Information Vulnerabilities
A major part of the Indian population is structurally disadvantaged in information access:
- Language barriers: fact-checking infrastructure remains concentrated in English and Hindi, while vernacular-language misinformation goes viral in Tamil, Telugu, Marathi, Punjabi, and other languages.
- Digital literacy gaps: an estimated 40 million people in India have received digital literacy training, yet more than 900 million Indians access digital content with widely varying ability to evaluate it critically.
- Rural-urban divides: rural and less affluent citizens face greater difficulty accessing verification tools and media literacy resources.
- Algorithmic capture: social media platforms maximise engagement over accuracy, actively promoting emotionally inflammatory or divisive content to users based on their engagement history.
Conclusion
The Ministry of Information and Broadcasting's advisory acknowledges that media accountability is part of national security in the information era. It sets out principles of responsible reporting without interfering with editorial autonomy, a balance that all stakeholders must uphold. Implementation requires concerted action by broadcasters, platforms, civil society, government, and educational institutions; information integrity cannot be secured by any single actor. Without media literacy resources, citizens cannot evaluate information responsibly; without open and rapid communication with media stakeholders, government agencies cannot effectively counter misinformation.
The way forward is collaborative governance: institutional arrangements in which media self-regulation, technological safeguards, user empowerment, and policy frameworks work together rather than compete. Successful implementation will determine whether India can preserve open and free media while maintaining the information integrity needed to sustain national security, democratic governance, and social stability in an era of high-speed information flows, algorithmic amplification, and information warfare.
References
https://mib.gov.in/sites/default/files/2025-11/advisory-18.11.2025.pdf