#FactCheck - No Evidence IPS Officer Ajay Pal Sharma Has Been Deputed to West Bengal for Five Years
Executive Summary
Ahead of the final phase of the West Bengal Assembly elections, a claim regarding Uttar Pradesh cadre IPS officer Ajay Pal Sharma began circulating widely on social media. Users claimed that Sharma was being sent to West Bengal on deputation for a period of five years. However, research conducted by CyberPeace Research Wing found the claim to be false. Sources close to the IPS officer confirmed that no such deputation order has been issued so far and that Ajay Pal Sharma is currently posted as Additional Commissioner in Prayagraj, Uttar Pradesh. Ajay Pal Sharma had earlier been deployed as a police observer during the West Bengal elections. During that period, a video of him warning Trinamool Congress candidate Jahangir Khan from the Falta constituency had gone viral on social media.
Claim
Several users on Facebook and X claimed that Ajay Pal Sharma had been transferred to West Bengal for five years under an administrative arrangement involving experienced officers from different states. One Facebook user wrote: “This decision has been taken under an administrative arrangement through which experienced officers are deployed in different states.”
- https://www.facebook.com/photo.php?fbid=818902764628152&set=a.296761956842238&type=3
- https://perma.cc/FD8Q-CF7L?type=standard

Fact Check
Our research found that the deputation claim is false. Ajay Pal Sharma is currently serving as Additional Commissioner in Prayagraj, a position he has held since 2025. Further scrutiny revealed that the claim appears to have originated from a parody account on X. On May 4, around 6 PM, the account @abdullah_0mar posted the claim regarding Sharma’s alleged five-year deputation to Bengal. However, in the comments section, the user later clarified that the post was intended as satire.

We also reviewed several news reports regarding Ajay Pal Sharma’s role during the West Bengal elections. Reports confirmed that the Election Commission had deployed him as a police observer in South 24 Parganas district during the polls. However, none of the reports mentioned any five-year transfer or deputation to West Bengal.

Conclusion
The viral claim is false. No official order has been issued regarding IPS officer Ajay Pal Sharma’s deputation to West Bengal for five years. Sources close to the officer confirmed that he continues to serve as Additional Commissioner in Prayagraj, Uttar Pradesh. Sharma had only been deputed as a police observer during the West Bengal Assembly elections, during which a video of him warning TMC candidate Jahangir Khan went viral online.

Introduction
We live in a time where technological change is no longer slow or subtle. Robotics, automation, artificial intelligence, and digital systems are transforming the way we work, think, and even imagine the future. This is often celebrated as great progress. But a deeper question quietly waits behind the noise. Is every advancement truly an uplift when seen through the lens of scriptures, culture, and Indian philosophical thought? Are we creating for the good of humanity, or are we only chasing convenience and speed? And what kind of future are we actually preparing, not just for ourselves, but for those who will be born into this world shaped by these tools from the very beginning?
India has long been seen as a land that values balance, purity, and harmony with nature. Its rivers, mountains, forests, and traditions are not just geography or history, they are part of a civilizational way of thinking that connects life, duty, and responsibility. In this context, it becomes important to ask what the long-term cost of our technological appetite might be. Every invention has a footprint. Industries change landscapes. Energy demands reshape ecosystems. Convenience today often hides consequences that only appear years later. Progress, when measured only in speed and output, forgets to ask what it takes away in silence.
There is also a quieter change happening inside the human mind. As tools become smarter, humans begin to feel more powerful. The thought slowly shifts from “I can use this” to “I control this.” With artificial intelligence, the language becomes even bolder. We start hearing phrases like “we can create worlds, faces, voices, even minds.” But history has always warned us about overreach. Not because power is evil, but because pride blinds judgment. When ability grows faster than wisdom, imbalance follows. We can already see early signs of this in concerns about shrinking attention spans, weakening cognitive habits, and a growing dependence on systems that think for us before we learn to think for ourselves deeply.
None of this is an argument to reject innovation. The idea is not to blacklist technology or romanticise the past. The real question is about direction and responsibility. Advancements are not only for the comfort of the present generation. They shape the mental, moral, and emotional world of future generations who will grow up surrounded by these systems as something normal and unavoidable. What values will guide that world? What habits will it encourage? What will it quietly take away?
This is where the richness of Indian thought becomes relevant, not as nostalgia, but as guidance. Ideas of dharma, restraint, balance, and ethical action were never anti-progress. They were reminders that power without responsibility becomes dangerous, and that ability without humility leads to decline. In modern terms, we talk about safety by design, ethical innovation, and human-centred technology. In older language, we talked about duty, limits, and the consequences of unchecked desire. The words have changed, but the concern is the same.
Perhaps the real question is not whether we are becoming creators, but whether we remember that we are also caretakers. We do not bring existence out of nothing. We reshape what already exists. And in that reshaping, the line between wisdom and arrogance, between progress and pride, becomes the most important line of all.
The futuristic impact of AI, robotics, and technologies
In every yuga, humans have extended the limits of what they can do. What changes is not the desire, but the form it takes. Our ancient history speaks of extraordinary abilities, not as fantasies, but as reminders of how power tests character. Figures like Naradmuni, a prominent divine sage (Rishiraja) in Hinduism, are described as moving from one place to another in moments. Others gained immense strength, knowledge, or influence through years of discipline and tapasya. Ravana, a figure from the Ramayana, was himself learned and powerful, far beyond ordinary human measure. Sanjaya, the charioteer and advisor of King Dhritarashtra in the Mahabharata, receives the gift of divya drishti and narrates the events of the battlefield without being physically present there, seeing and speaking across distance in a way that still feels remarkable even today.
In this yuga, that ancient search for power and reach has not disappeared, it has only changed its language, and today it speaks through robotics, artificial intelligence, and advanced technologies, making us ask whether we are truly creators or only very advanced arrangers of what already exists.
In this age, science and technology are attempting something similar in a different language. We may not travel like Naradmuni, but we send our voices, images, and thoughts across the world in seconds. We build machines that can see, listen, respond, and even imitate human thinking. Artificial intelligence and robotics promise comfort, speed, and efficiency, and in many ways, they truly improve human life. Yet the old question remains. Not just what can we do, but how far should we go, and at what cost.
When we primarily build for human convenience, we often fail to thoroughly examine the long-term consequences. The environmental impact of large-scale technology is already visible in the pressure on resources, the growth of waste, and the slow damage to air, water, and soil. Nature does not recover at the pace of human ambition. What feels like small compromises today can become heavy burdens for tomorrow.
There is also the impact on the human mind. As systems become more capable, humans risk becoming more dependent. When answers arrive instantly, patience weakens. When machines start deciding for us, the habit of deep thinking slowly fades. Over time, this can affect attention, memory, and judgment. Knowledge becomes easier to reach, but wisdom becomes harder to build. Just as in old stories, the danger is not in having power, but in losing clarity while using it.
Future generations will not encounter these technologies as new inventions. They will be born into them. What we treat today as tools, they will experience as the normal environment of life. This makes responsibility unavoidable. The real question is not only whether these systems work, but what kind of humans they will shape.
The purpose of this reflection is not to reject progress. It is to ask for balance. Building for human comfort is important, but building without studying long-term impact is risky. If this age has the power to create intelligent systems, it must also have the wisdom to protect the environment, care for future generations, and preserve the depth of the human mind. Otherwise, advancement becomes speed without direction, and power without responsibility.
The Acceleration of the Technological Age
The current era has reached a point where technological progress occurs through near-instantaneous change, transforming how we work, make decisions, and plan for the future. Robotics, automation, and artificial intelligence are widely viewed as signs of progress, yet a quieter question persists: does every technological advancement enhance human existence, or do we merely pursue efficient and easy solutions without thinking about their implications? Indian philosophical thought offers a useful lens here, one that does not reject progress but asks whether it aligns with balance, responsibility, and long-term harmony. On this view, intelligence extends beyond computational skill and pattern imitation; it requires awareness, intent, and genuine understanding. Current machines can mimic human reasoning, produce language, and replicate decision-making processes, but they lack both consciousness and lived experience.
Power, Responsibility, and Ethical Imbalance
New technological capabilities bring with them ethical responsibilities that every society must shoulder. History suggests that human beings must take on new ethical duties as their capabilities grow. Today, however, we create faster than we reflect: systems built to enhance performance increasingly shape human action and extend human power without their full impact being evaluated. Indian traditions emphasise dharma, the principle of balance and rightful action, which warns that power without ethical grounding becomes a destructive force. Imbalance does not always announce itself. It develops gradually, through environmental degradation, social inequality, and the slow erosion of human agency.
Society already demonstrates the results of this transformation. Algorithms now shape our consumption choices and our ways of understanding the world around us. They provide personalised comfort, but they also create hidden patterns that condition our preferences. What begins as decision assistance progresses to decision influence, and eventually to decision conditioning. The concept of swatantrata, inner freedom, becomes complicated in such an environment. When it is easier to select from what a system presents than to choose freely, the capacity for genuine choice atrophies. People start to measure their work and even their identity through systems built on optimisation and digital validation, shrinking the space left for independent thought and reflection.
Technology, Ecology, and Civilizational Values
The environmental cost of technological demand accompanies these social transformations. Every system needs power, every piece of infrastructure leaves an ecological footprint, and every product we use carries hidden costs that become apparent only after many years. India's civilizational values have long held nature in reverence, treating rivers, forests, and ecosystems as essential parts of existence. Modern notions of success measure output as the main achievement, and much of what truly matters disappears in that accounting. The future requires us to create new things, but it also requires us to decide what must be kept intact.
Progress therefore needs to be defined differently: measured by careful stewardship rather than continuous acceleration. The question now extends beyond how far technology can advance to whether it is guided intelligently. The more capable machines become, the greater the need for people to preserve their essential human characteristics, actively maintaining the capacity for ethical judgment and an understanding of life's meaning and purpose. The future depends on two factors: the innovations that will emerge, and the values that will guide their development.
Conclusion
It is high time we pause and honestly examine the path we are taking. The question is not whether technology should grow, but whether its overreach should be allowed to shape the future without restraint. We are building faster than ever, developing systems that touch every part of life. That makes it even more important to study their long-term impact, not only on markets or productivity, but on nature, on the human mind, and on the generations who will inherit this tech-driven world.
Progress should benefit those who come after us, not quietly weaken them. A future where people are born into pure convenience, surrounded by tools that think, decide, and act for them, may look comfortable, but comfort alone does not build strong, aware, or responsible human beings. Growth without effort and ease without discipline slowly takes away depth, resilience, and clarity. Technology should support human potential, not replace it.
This is why morality, ethics, and balance cannot be treated as optional ideas. They must guide innovation, not follow it. We do not need to overcreate. We need to create ‘wisely’. We need to build systems that remain under human control, not systems that slowly train humans to surrender their judgment, attention, and responsibility. Tools should remain tools. They should serve life, not define it.
Indian thought has always placed intention at the centre of action. Karma is not judged only by outcome, but by the spirit in which an act is performed. A tool in itself is neither pure nor impure. It becomes one or the other through the hand that uses it. This is a lens through which modern technology can also be examined. Artificial intelligence can help doctors read scans faster, help farmers predict weather patterns, and help students in remote areas access knowledge. At the same time, it can be used to watch, to sort, to exclude, and to reduce human beings to data points that fit neatly into a system. The difference lies not in the machine, but in the values of those who design and deploy it.
The purpose of this reflection is simple. We should build, but we should build with responsibility. We should innovate, but with awareness of consequences. True progress is not just about what is possible today. It is about what remains healthy, meaningful, and sustainable tomorrow. If this age can combine intelligence with humility, and power with restraint, then technology will not become a symbol of overreach. It will become a sign of maturity.

Introduction
As the sun rises on the Indian subcontinent, a nation teeters on the precipice of a democratic exercise of colossal magnitude. The Lok Sabha elections, a quinquennial event that mobilises the will of over a billion souls, is not just a testament to the robustness of India's democratic fabric but also a crucible where the veracity of information is put to the sternest of tests. In this context, the World Economic Forum's 'Global Risks Report 2024' emerges as a harbinger of a disconcerting trend: the spectre of misinformation and disinformation that threatens to distort the electoral landscape.
The report, a carefully crafted document that shares the insights of 1,490 experts drawn from academia, government, business and civil society, paints a tableau of the global risks that loom large over the next decade. These risks, spawned by the churning cauldron of rapid technological change, economic uncertainty, a warming planet, and simmering conflict, are not just abstract threats but tangible realities that could shape the future of nations.
India’s Electoral Malaise
India, as it strides towards the general elections scheduled in the spring of 2024, finds itself in the vortex of this storm. The WEF survey positions India at the zenith of vulnerability to disinformation and misinformation, a dubious distinction that underscores the challenges facing the world's largest democracy. The report depicts misinformation and disinformation as the chimaeras of false information—whether inadvertent or deliberate—that are dispersed through the arteries of media networks, skewing public opinion towards a pervasive distrust in facts and authority. This encompasses a panoply of deceptive content: fabricated, false, manipulated and imposter.
The United States, the European Union, and the United Kingdom too, are ensnared in this web of varying degrees of misinformation. South Africa, another nation on the cusp of its own electoral journey, is ranked 22nd, a reflection of the global reach of this phenomenon. The findings, derived from a survey conducted over the autumnal weeks of September to October 2023, reveal a world grappling with the shadowy forces of untruth.
Global Scenario
The report prognosticates that as close to three billion individuals across diverse economies—Bangladesh, India, Indonesia, Mexico, Pakistan, the United Kingdom, and the United States—prepare to exercise their electoral rights, the rampant use of misinformation and disinformation, and the tools that propagate them, could erode the legitimacy of the governments they elect. The repercussions could be dire, ranging from violent protests and hate crimes to civil confrontation and terrorism.
Beyond the electoral arena, the fabric of reality itself is at risk of becoming increasingly polarised, seeping into the public discourse on issues as varied as public health and social justice. As the bedrock of truth is undermined, the spectre of domestic propaganda and censorship looms large, potentially empowering governments to wield control over information based on their own interpretation of 'truth.'
The report further warns that disinformation will become increasingly personalised and targeted, homing in on specific groups such as minority communities and disseminating through more opaque messaging platforms like WhatsApp or WeChat. This tailored approach to deception signifies a new frontier in the battle against misinformation.
In a world where societal polarisation and economic downturn are seen as central risks in an interconnected 'risks network,' misinformation and disinformation have ascended rapidly to the top of the threat hierarchy. The report's respondents—two-thirds of them—cite extreme weather, AI-generated misinformation and disinformation, and societal and/or political polarisation as the most pressing global risks, followed closely by the 'cost-of-living crisis,' 'cyberattacks,' and 'economic downturn.'
Current Situation
In this unprecedented year for elections, the spectre of false information looms as one of the major threats to the global populace, according to the experts surveyed for the WEF's 2024 Global Risk Report. The report offers a nuanced analysis of the degrees to which misinformation and disinformation are perceived as problems for a selection of countries over the next two years, based on a ranking of 34 economic, environmental, geopolitical, societal, and technological risks.
India, the land of ancient wisdom and modern innovation, stands at the crossroads where the risk of disinformation and misinformation is ranked highest. Out of all the risks, these twin scourges were most frequently selected as the number one risk for the country by the experts, eclipsing infectious diseases, illicit economic activity, inequality, and labour shortages. The South Asian nation's next general election, set to unfurl between April and May 2024, will be a litmus test for its 1.4 billion people.
The spectre of fake news is not a novel adversary for India. The 2019 election was rife with misinformation, with reports of political parties weaponising platforms like WhatsApp and Facebook to spread incendiary messages, stoking fears that online vitriol could spill over into real-world violence. The COVID-19 pandemic further exacerbated the issue, with misinformation once again proliferating through WhatsApp.
Other countries facing a high risk of the impacts of misinformation and disinformation include El Salvador, Saudi Arabia, Pakistan, Romania, Ireland, Czechia, the United States, Sierra Leone, France, and Finland, all of which consider the threat to be one of the top six most dangerous risks out of 34 in the coming two years. In the United Kingdom, misinformation/disinformation is ranked 11th among perceived threats.
The WEF analysts conclude that the presence of misinformation and disinformation in these electoral processes could seriously destabilise the real and perceived legitimacy of newly elected governments, risking political unrest, violence, and terrorism, and a longer-term erosion of democratic processes.
The 'Global Risks Report 2024' of the World Economic Forum thus ranks India first in the world for the risk of misinformation and disinformation at a time when the country faces general elections this year. Released in early January, the 19th edition of the report and its accompanying Global Risk Perception Survey set out the varying degrees to which misinformation and disinformation are rated as problems for the countries analysed over the next two years, based on a ranking of 34 economic, environmental, geopolitical, societal, and technological risks.
Some governments and platforms aiming to protect free speech and civil liberties may fail to act effectively to curb falsified information and harmful content, making the definition of 'truth' increasingly contentious across societies. State and non-state actors alike may leverage false information to widen fractures in societal views, erode public confidence in political institutions, and threaten national cohesion and coherence.
Trust in specific leaders will confer trust in information, and the authority of these actors—from conspiracy theorists, including politicians, and extremist groups to influencers and business leaders—could be amplified as they become arbiters of truth.
False information could not only be used as a source of societal disruption but also of control by domestic actors in pursuit of political agendas. The erosion of political checks and balances and the growth in tools that spread and control information could amplify the efficacy of domestic disinformation over the next two years.
Global internet freedom is already in decline, and access to more comprehensive sets of information has dropped in numerous countries. The implication: Falls in press freedoms in recent years and a related lack of strong investigative media are significant vulnerabilities set to grow.
Advisory
Here are specific best practices for citizens to help prevent the spread of misinformation during electoral processes:
- Verify Information: Double-check the accuracy of information before sharing it. Use reliable sources and fact-checking websites to verify claims.
- Cross-Check Multiple Sources: Consult multiple reputable news sources to ensure that the information is consistent across different platforms.
- Be Wary of Social Media: Social media platforms are susceptible to misinformation. Be cautious about sharing or believing information solely based on social media posts.
- Check Dates and Context: Ensure that information is current and consider the context in which it is presented. Misinformation often thrives when details are taken out of context.
- Promote Media Literacy: Educate yourself and others on media literacy to discern reliable sources from unreliable ones. Be skeptical of sensational headlines and clickbait.
- Report False Information: Report instances of misinformation to the platform hosting the content and encourage others to do the same. Utilise fact-checking organisations or tools to report and debunk false information.
- Critical Thinking: Foster critical thinking skills among your community members. Encourage them to question information and think critically before accepting or sharing it.
- Share Official Information: Share official statements and information from reputable sources, such as government election commissions, to ensure accuracy.
- Avoid Echo Chambers: Engage with diverse sources of information to avoid being in an 'echo chamber' where misinformation can thrive.
- Be Responsible in Sharing: Before sharing information, consider the potential impact it may have. Refrain from sharing unverified or sensational content that can contribute to misinformation.
- Promote Open Dialogue: Encourage open discussions within your community about the significance of factual information and the dangers of misinformation.
- Stay Calm and Informed: During critical periods, such as election days, stay calm and rely on official sources for updates. Avoid spreading unverified information that can contribute to panic or confusion.
- Support Media Literacy Programs: Advocate for media literacy programmes in schools to equip individuals with the skills needed to navigate today's information landscape.
Conclusion
Preventing misinformation requires a collective effort from individuals, communities, and platforms. By adopting these best practices, citizens can play a vital role in reducing the impact of misinformation during electoral processes.
References:
- https://thewire.in/media/survey-finds-false-information-risk-highest-in-india
- https://thesouthfirst.com/pti/india-faces-highest-risk-of-disinformation-in-general-elections-world-economic-forum/

As AI language models become more powerful, they are also becoming more prone to errors. One increasingly prominent issue is AI hallucinations: instances where models generate outputs that are factually incorrect, nonsensical, or entirely fabricated, yet present them with complete confidence. Recently, OpenAI released two new models, o3 and o4-mini, which differ from earlier versions in that they focus on step-by-step reasoning rather than simple text prediction. With the growing reliance on chatbots and generative models for everything from news summaries to legal advice, this phenomenon poses a serious threat to public trust, information accuracy, and decision-making.
What Are AI Hallucinations?
AI hallucinations occur when a model invents facts, misattributes quotes, or cites nonexistent sources. This is not a bug but a side effect of how Large Language Models (LLMs) work; their probability can be reduced, but their occurrence cannot be eliminated altogether. Trained on vast internet data, these models predict which word is likely to come next in a sequence. They have no true understanding of the world or of facts; they simulate reasoning based on statistical patterns in text. What is alarming, and seemingly counterintuitive, is that newer and more advanced models are producing more hallucinations, not fewer. This has been especially prevalent in reasoning-based models, which generate answers step-by-step in a chain-of-thought style. While this can improve performance on complex tasks, it also opens more room for error at each step, especially when no factual retrieval or grounding is involved.
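The "statistical pattern" idea can be made concrete with a toy sketch. The bigram model below is a deliberately simplified stand-in for an LLM (the corpus and word-level counting are illustrative assumptions; real models learn vastly richer statistics over subword tokens), but it shows the core mechanism: the next token is sampled from a probability distribution, with no built-in notion of truth.

```python
import random

# Toy bigram "language model": for each word, count which words follow it
# in a tiny illustrative corpus. The principle is the same as in an LLM:
# predict the next token from learned statistics, not from facts.
corpus = "the cat sat on the mat the cat saw the dog".split()

counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {}).setdefault(nxt, 0)
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    followers = counts[prev]
    words = list(followers)
    weights = [followers[w] for w in words]
    return random.choices(words, weights=weights)[0]

# After "the", this model continues with cat (2/4), mat (1/4) or dog (1/4).
# The output is always fluent-looking, but nothing checks whether it is true.
print(next_word("the"))
```

Nothing in this loop can distinguish a true continuation from a false one, which is why hallucination is a side effect of the method itself rather than a bug to be patched out.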
Reports on TechCrunch noted that when users asked AI models for short answers, hallucinations increased by up to 30%. And a study published in eWeek found that ChatGPT hallucinated in 40% of tests involving domain-specific queries, such as medical and legal questions. This was not limited to that particular Large Language Model; similar ones like DeepSeek showed the same behaviour. Even more concerning are hallucinations in multimodal models like those used for deepfakes. Forbes reports that some of these models produce synthetic media that not only look real but are also capable of contributing to fabricated narratives, raising the stakes for the spread of misinformation during elections, crises, and other sensitive events.
It is also notable that AI models are continually improving with each version, focusing on reducing hallucinations and enhancing accuracy. New features, such as providing source links and citations, are being implemented to increase transparency and reliability in responses.
The Misinformation Dilemma
The rise of AI-generated hallucinations exacerbates the already severe problem of online misinformation. Hallucinated content can quickly spread across social platforms, get scraped into training datasets, and re-emerge in new generations of models, creating a dangerous feedback loop. However, it helps that the developers are already aware of such instances and are actively charting out ways in which we can reduce the probability of this error. Some of them are:
- Retrieval-Augmented Generation (RAG): Instead of relying purely on a model’s internal knowledge, RAG allows the model to “look up” information from external databases or trusted sources during the generation process. This can significantly reduce hallucination rates by anchoring responses in verifiable data.
- Use of smaller, more specialised language models: Lightweight models fine-tuned on specific domains, such as medical records or legal texts, tend to hallucinate less because their scope is limited and better curated.
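The RAG idea can be sketched in a few lines. Everything in this example is an illustrative assumption rather than any real library's API: the document store is hard-coded, retrieval uses naive word overlap instead of vector embeddings, and the final prompt is printed rather than sent to an actual LLM. The point is the shape of the pipeline: retrieve trusted text first, then constrain the model's answer to it.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG).
# A production system would embed documents in a vector store and
# pass the assembled prompt to a real language model.

documents = [
    "Ajay Pal Sharma is currently posted as Additional Commissioner in Prayagraj.",
    "The Election Commission deployed a police observer in South 24 Parganas district.",
]

def retrieve(query, docs, k=1):
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    def score(doc):
        return len(q_words & set(doc.lower().split()))
    return sorted(docs, key=score, reverse=True)[:k]

def build_prompt(query):
    # Anchor the model's answer in retrieved text instead of relying
    # only on its internal (possibly hallucinated) knowledge.
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQ: {query}\nA:"

print(build_prompt("Where is Ajay Pal Sharma posted"))
```

Because the model is instructed to answer from retrieved, verifiable text, a wrong answer can at least be traced back to a source, which is exactly the grounding that pure generation lacks.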
Furthermore, transparency mechanisms such as source citation, model disclaimers, and user feedback loops can help mitigate the impact of hallucinations. For instance, when a model generates a response, linking back to its source allows users to verify the claims made.
Conclusion
AI hallucinations are an intrinsic part of how generative models function today, and this side effect will continue until foundational changes are made in how models are trained and deployed. For the time being, developers, companies, and users must approach AI-generated content with caution. LLMs are, fundamentally, word predictors: brilliant but fallible. Recognising their limitations is the first step in navigating the misinformation dilemma they pose.
References
- https://www.eweek.com/news/ai-hallucinations-increase/
- https://www.resilience.org/stories/2025-05-11/better-ai-has-more-hallucinations/
- https://www.ekathimerini.com/nytimes/1269076/ai-is-getting-more-powerful-but-its-hallucinations-are-getting-worse/
- https://techcrunch.com/2025/05/08/asking-chatbots-for-short-answers-can-increase-hallucinations-study-finds/
- https://en.as.com/latest_news/is-chatgpt-having-robot-dreams-ai-is-hallucinating-and-producing-incorrect-information-and-experts-dont-know-why-n/
- https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/
- https://www.forbes.com/sites/conormurray/2025/05/06/why-ai-hallucinations-are-worse-than-ever/
- https://towardsdatascience.com/how-i-deal-with-hallucinations-at-an-ai-startup-9fc4121295cc/
- https://www.informationweek.com/machine-learning-ai/getting-a-handle-on-ai-hallucinations