#FactCheck - Fake Video of Mass Cheating at UPSC Exam Circulates Online
Executive Summary:
A video that has gone viral purportedly shows mass cheating during the UPSC Civil Services Exam conducted in Uttar Pradesh, with students allegedly filmed copying answers. However, thorough research established that the incident took place during an LLB exam, not the UPSC Civil Services Exam. This is an example of misleading content being circulated to spread misinformation.

Claim:
Mass cheating took place during the UPSC Civil Services Exam in Uttar Pradesh, as shown in a viral video.

Fact Check:
Upon careful verification, it has been established that the viral video being circulated does not depict the UPSC Civil Services Examination, but rather an incident of mass cheating during an LLB examination. Reputable media outlets, including Zee News and India Today, have confirmed that the footage is from a law exam and is unrelated to the UPSC.
The video in question was reportedly live-streamed by one of the LLB students during an examination held in February 2024 at City Law College in Lakshbar Bajha, located in the Safdarganj area of Barabanki, Uttar Pradesh.
The misleading attempt to associate this footage with the highly esteemed Civil Services Examination is not only factually incorrect but also unfairly casts doubt on a process that is known for its rigorous supervision and strict security protocols. It is crucial to verify the authenticity and context of such content before disseminating it, in order to uphold the integrity of our institutions and prevent unnecessary public concern.

Conclusion:
The viral video purportedly showing mass cheating during the UPSC Civil Services Examination in Uttar Pradesh is misleading and not genuine. Upon verification, the footage has been found to be from an LLB examination, not related to the UPSC in any manner. Spreading such misinformation not only undermines the credibility of a trusted examination system but also creates unwarranted panic among aspirants and the public. It is imperative to verify the authenticity of such claims before sharing them on social media platforms. Responsible dissemination of information is crucial to maintaining trust and integrity in public institutions.
- Claim: A viral video shows UPSC candidates copying answers.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
Fundamentally, artificial intelligence (AI) is the greatest extension of human intelligence. It is the culmination of centuries of logic, reasoning, mathematics, and creativity: machines trained to reflect cognition. However, such intelligence no longer resembles intelligence at all when it is placed in the hands of the irresponsible, the malicious, or the perverse, and unleashed into the wild with minimal safeguards. Instead, it emerges as a tool of debasement rather than enlightenment.
Recent incidents involving sexually explicit photographs created by AI on social media sites reveal an extremely unsettling reality. When intelligence is detached from accountability, morality, and governance, it corrodes society rather than elevates it. We are seeing a failure of stewardship rather than just a failure of technology.
The Cost of Unchecked Intelligence
The AI chatbot Grok, which operates under Elon Musk’s X (formerly Twitter), is the subject of a debate that goes beyond a single platform or product. The romanticisation of “unfiltered” knowledge and the perilous notion that innovation should come before accountability are signs of a bigger lapse in the digital ecosystem. We have allowed mechanisms that can be used as weapons against human dignity, especially the dignity of women and children, in the name of freedom.
We are no longer discussing artistic expression or experimental AI when a machine can digitally undress women, morph photos, or produce sexualised portrayals of kids with a few keystrokes. We stand in the face of algorithmic violence. Even if the physical touch is absent, the harm caused by it is genuine, long-lasting, and extremely personal.
The Regulatory Red Line
A major inflexion point was reached when the Indian government responded by ordering a thorough technical, procedural, and governance-level audit. The order acknowledges that AI systems are not isolated entities, and that the platforms deploying them are not neutral pipes but intermediaries with responsibilities. The Bharatiya Nyaya Sanhita, the IT Act, the IT Rules 2021, and the possible removal of Section 79 safe-harbour protections all make it quite evident that innovation does not confer automatic immunity.
However, the fundamental dilemma cannot be resolved by legislation alone. AI is hailed as a force multiplier for innovation, productivity, and advancement, but when incentives are biased towards engagement, virality, and shock value, its misuse shows how easily intelligence can turn into ugliness. The more provocative the output, the more attention it receives; the more attention, the greater the profit. In such an ecosystem, restraint becomes a business disadvantage.
The Aftermath
Grok’s own acknowledgement that “safeguard lapses” enabled the creation of pictures showing children wearing skimpy attire underscores a troubling reality: safety was not absent due to impossibility, but due to insufficiency. Sophisticated filtering, more robust monitoring, and stricter oversight were always possible; they were simply not prioritised. When a system asserts that “no system is 100% foolproof,” it must also acknowledge that there is no acceptable margin of error when it comes to child protection.
The casual normalisation of such lapses is what is most troubling. By characterising these instances as “isolated cases,” systemic design decisions run the risk of being trivialised. In addition to intelligence, AI systems that have been taught on enormous amounts of human data also inherit bias, misogyny, and power imbalances.
Conclusion
What is required today is recalibration. Platforms need to shift from reactive compliance to proactive accountability. Safeguards must be incorporated at the architectural level; they cannot be cosmetic or post-facto. Governance must encompass enforced ethical boundaries in addition to terms of service. The idea that “edgy” AI is a sign of advancement must also be rejected by society.
Artificial intelligence never promised freedom under the guise of vulgarity; it promised improvement, support, and augmentation. The fundamental core of intelligence is lost when it is used as a tool for degradation. So what is left is a decision between principled innovation and unbridled novelty; between responsibility and spectacle; between intelligence as purpose and intellect as power.
References
https://www.rediff.com/news/report/govt-orders-x-review-of-grok-over-explicit-content/20260103.htm

Amid the popularity of OpenAI’s ChatGPT and Google’s announcement of introducing its own Artificial Intelligence chatbot called Bard, there has been much discussion over how such tools can impact India at a time when the country is aiming for an AI revolution.
During the Budget Session, Finance Minister Nirmala Sitharaman talked about AI, while her colleague, Minister of State (MoS) for Electronics and Information Technology Rajeev Chandrasekhar discussed it at the India Stack Developer Conference.
While Sitharaman stated that the government will establish three centres of excellence in AI in the country, Chandrasekhar mentioned at the event that India Stack, which includes digital solutions like Aadhaar, Digilocker and others, will become more sophisticated over time with the inclusion of AI.
As AI chatbots become the buzzword, News18 discusses with experts how such tech tools will impact India.
AI IN INDIA
Many experts believe that in a country like India, which is extremely diverse and has a sizeable population, the introduction of new technologies and their proper adoption can bring about a massive digital revolution.
For example, Manoj Gupta, Cofounder of Plotch.ai, a full-stack AI-enabled SaaS product, told News18 that Bard is still experimental and not open to everyone to use while ChatGPT is available and can be used to build applications on top of it.
He said: “Conversational chatbots are interesting since they have the potential to automate customer support and assisted buying in e-commerce. Even simple banking applications can be built that can use ChatGPT AI models to answer queries like bank balance, service requests etc.”
According to him, such tools could be extremely useful for people who are currently excluded from the digital economy due to language barriers.
Ashwini Vaishnaw, Union Minister for Communications, Electronics & IT, has also talked about using such tools to reduce communication issues. At the World Economic Forum in Davos, he said: “We integrated our Bhashini language AI tool, which translates from one Indian language to another Indian language in real-time, spoken and text everything. We integrated that with ChatGPT and are seeing very good results.”
‘DOUBLE-EDGED SWORD’
Sundar Balasubramanian, Managing Director, India & SAARC, at Check Point Software, told News18 that generative AI like ChatGPT is a “double-edged sword”.
According to him, used in the right way, it can help developers write and fix code quicker, enable better chat services for companies, or even be a replacement for search engines, revolutionising the way people search for information.
“On the flip side, hackers are also leveraging ChatGPT to accelerate their bad acts and we have already seen examples of such exploitations. ChatGPT has lowered the bar for novice hackers to enter the field as they are able to learn quicker and hack better through asking the AI tool for answers,” he added.
Balasubramanian also stated that Check Point Research (CPR) has seen the quality of phishing emails improve tremendously over the past three months, making it increasingly difficult to discern between a legitimate source and a targeted phishing scam.
“Despite the emergence of generative AI impacting cybercrime, Check Point is continually reminding organisations and individuals of the significance of being vigilant. As ChatGPT and Codex become more mature, they can affect the threat landscape, for both good and bad,” he added.
While the real-life applications of ChatGPT include several things ranging from language translation to explaining tricky math problems, Balasubramanian said it can also be used for making the work of cyber researchers and developers more efficient.
“Generative AI or tools like ChatGPT can be used to detect potential threats by analysing large amounts of data and identifying patterns that may indicate malicious activity. This can help enterprises quickly identify and respond to a potential threat before it escalates to something more,” he added.
POSITIVE FACTORS
Major Vineet Kumar, Founder and Global President of CyberPeace Foundation, believes that the deployment of AI chatbots has proven to be highly beneficial in India, where a booming economy and increasing demand for efficient customer service have led to a surge in their use. According to him, both ChatGPT and Bard have the potential to bring significant positive change to various industries and individuals in India.
“ChatGPT has already made an impact by revolutionising customer service, providing instant and accurate support, and reducing wait time. It has automated tedious and complicated tasks for businesses and educational institutions, freeing up valuable time for more significant activities. In the education sector, ChatGPT has also improved learning experiences by providing quick and reliable information to students and educators,” he added.
He also said the AI chatbots ChatGPT and Bard could have several positive impacts in India, including improved customer experience, increased productivity, better access to information, improved healthcare, improved access to education, and better financial services.
Reference Link : https://www.news18.com/news/explainers/confused-about-chatgpt-bard-experts-tell-news18-how-openai-googles-ai-chatbots-may-impact-india-7026277.html

Introduction
In the boundless world of the internet—a digital frontier rife with both the promise of connectivity and the peril of deception—a new spectre stealthily traverses the electronic pathways, casting a shadow of fear and uncertainty. This insidious entity, cloaked in the mantle of supposed authority, preys upon the unsuspecting populace navigating the virtual expanse. And in the heart of India's vibrant tapestry of diverse cultures and ceaseless activity, Mumbai stands out—a sprawling metropolis of dreams and dynamism, yet also the stage for a chilling saga, a cyber charade of foul play and fraud.
The city's relentless buzz and hum were punctuated by a harrowing tale that unwound within the unassuming confines of a Kharghar residence, where a 46-year-old individual's brush with this digital demon would unfold. His typical day veered into the remarkable as his laptop screen lit up with an ominous pop-up, infusing his routine with shock and dread. This deceptive pop-up, masquerading as an official communication from the National Crime Records Bureau (NCRB), demanded an exorbitant fine of Rs 33,850 for ostensibly browsing adult content—an offence he had not committed.
The Cyber Deception
This tale of deceit and psychological warfare is not unique, nor is it the first of its kind. It finds echoes in the tragic narrative that unfurled in September 2023, far south in the verdant land of Kerala, where a young life was tragically cut short. A 17-year-old boy from Kozhikode, caught in the snare of similar fraudulent claims of NCRB admonishment, was driven to the extreme despair of taking his own life after being coerced to dispense Rs 30,000 for visiting an unauthorised website, as the pop-up falsely alleged.
Sewn with a seam of dread and finesse, the pop-up that appeared in the recent Navi Mumbai case illustrates this virtual tapestry of psychological manipulation, woven with threatening threads designed to entrap and frighten. The message, delivered through a fake NCRB website created to dupe people, pronounced that the user had engaged in illicit browsing and issued an ultimatum: pay the Rs 33,850 fine within six hours, or face the critical implications of a criminal case. The panacea it offered was simple—settle the demanded amount and the shackles on the browser shall be lifted.
It was amidst this web of lies that the man from Kharghar found himself entangled. The story, as retold by his brother, an IT professional, reveals the close brush with disaster that was narrowly averted. His brother's panicked call, and the rush of relief upon realising the scam, underscores the ruthless efficiency of these cyber predators. They leverage sophisticated deceptive tactics, even specifying convenient online payment methods to ensnare their prey into swift compliance.
A glimmer of reason pierced through the narrative as Maharashtra State cyber cell special inspector general Yashasvi Yadav illuminated the fraudulent nature of such claims. With authoritative clarity, he revealed that no legitimate government agency would solicit fines in such an underhanded fashion. Rather, official procedures involve FIRs or court trials—a structured route distant from the scaremongering of these online hoaxes.
Expert Take
Concurring with this perspective, cyber experts note that such pop-ups are mere facsimiles of official notices. By tapping into primal fears and conjuring up grave consequences, the fraudsters follow a time-worn strategy, cloaking their ill intentions in the guise of governmental or legal authority—a phantasm of legitimacy that prompts hasty financial decisions.
To pierce the veil of this deception, D. Sivanandhan, the former Mumbai police commissioner, categorically denounced the absurdity of the hoax. With a voice tinged by experience and authority, he made it abundantly clear that the NCRB's role did not encompass the imposition of fines without due process of law—a cornerstone of justice grossly misrepresented by the scam's premise.
New Lesson
This scam, a devilish masquerade given weight by deceit, might surge with the pretence of novelty, but its underpinnings are far from new. The manufactured pop-ups that propagate across corners of the internet issue fabricated pronouncements, feigned lockdowns of browsers, and the spectre of being implicated in taboo behaviours. The elaborate ruse doesn't halt at mere declarations; it painstakingly fabricates a semblance of procedural legitimacy by preemptively setting penalties and detailing methods for immediate financial redress.
Yet another dimension of the scam further bolsters the illusion—the ominous ticking clock set for payment, endowing the fraud with an urgency that can disorient and push victims towards rash action. With a spurious 'Payment Details' section, complete with options to pay through widely accepted credit networks like Visa or MasterCard, the sham dangles the false promise of restored access, should the victim acquiesce to their demands.
Conclusion
In an era where the demarcation between illusion and reality is nebulous, the impetus for individual vigilance and scepticism is ever-critical. The collective consciousness, the shared responsibility we hold as inhabitants of the digital domain, becomes paramount to withstand the temptation of fear-inducing claims and to dispel the shadows cast by digital deception. It is only through informed caution, critical scrutiny, and a steadfast refusal to capitulate to intimidation that we may successfully unmask these virtual masquerades and safeguard the integrity of our digital existence.
References:
- https://www.onmanorama.com/news/kerala/2023/09/29/kozhikode-boy-dies-by-suicide-after-online-fraud-threatens-him-for-visiting-unauthorised-website.html
- https://timesofindia.indiatimes.com/pay-rs-33-8k-fine-for-surfing-porn-warns-fake-ncrb-pop-up-on-screen/articleshow/106610006.cms
- https://www.indiatoday.in/technology/news/story/people-who-watch-porn-receiving-a-warning-pop-up-do-not-pay-it-is-a-scam-1903829-2022-01-24