#FactCheck - Misleading Social Media Claim Targets University Over Viral Video
Executive Summary
A video circulating on social media shows a woman using abusive language on camera. Users sharing the clip claim that the woman is a professor at Galgotias University and that the video exposes her alleged reality. However, an investigation by CyberPeace found the claim to be misleading. The probe revealed that the woman in the viral video has no connection with Galgotias University and is not a professor there. Fact-checking further showed that the video is not recent but roughly seven years old. The woman featured in the clip was identified as Shubhrastha, a political strategist by profession.
Claim:
A user on X (formerly Twitter) shared the viral video on February 18, 2026, claiming: “A ‘class in abuse studies’ at Galgotias University? An obscene video of a professor teaching ethics has gone viral. Another shameful chapter has been added to the list of controversies surrounding Galgotias University.” The post further alleged that after falsely claiming a Chinese robot as its own, the university’s “Culture and Ethics” faculty member was seen publicly using abusive language in the viral clip. The post link and its archived version are provided below:

Fact Check:
To verify the authenticity of the viral claim, we extracted key frames from the video and conducted a reverse image search using Google Lens. The search led us to the same video, uploaded to the Indian Spectator’s YouTube channel on June 9, 2018.

The video was also found on another YouTube channel, where it had been uploaded on June 12, 2018.
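The verification workflow described above (sampling representative frames from a clip, then reverse-searching them) can be sketched in miniature. The function below is a toy key-frame selector over synthetic "frames" represented as plain lists of pixel intensities; the threshold, the data, and the function name are illustrative assumptions, not the actual tooling used in this fact-check:

```python
def extract_key_frames(frames, threshold=30.0):
    """Return indices of frames that differ noticeably from the last key frame.

    frames: list of equal-length pixel-intensity lists (values 0-255).
    A frame becomes a key frame when its mean absolute pixel difference
    from the previous key frame exceeds `threshold`.
    """
    if not frames:
        return []
    keys = [0]          # the first frame is always kept
    ref = frames[0]     # last selected key frame
    for i, frame in enumerate(frames[1:], start=1):
        diff = sum(abs(a - b) for a, b in zip(frame, ref)) / len(frame)
        if diff > threshold:
            keys.append(i)
            ref = frame
    return keys

# Toy "video": three near-identical dark frames, then a scene cut to bright frames.
video = [[10, 12, 11, 10]] * 3 + [[200, 198, 201, 199]] * 2
print(extract_key_frames(video))  # [0, 3]
```

Each selected frame would then be fed to a reverse image search service such as Google Lens to locate earlier uploads of the same footage.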

Conclusion
The investigation clearly establishes that the woman in the viral video has no association with Galgotias University and is not a professor there. The clip is not recent but approximately seven years old, and the woman in the video was identified as Shubhrastha, a political strategist.

Artificial Intelligence (AI) provides a wide range of services and continues to attract intrigue and experimentation. It has altered how we create and consume content: specific prompts can now generate desired images, enriching storytelling and even education. However, because this content can influence public perception, its potential to cause misinformation must be noted as well. The realistic quality of such images can make it hard for the untrained eye to discern that they are artificially generated. Because AI models work by analysing the data they were trained on, a lack of contextual knowledge and the human biases embedded in prompts also come into play. The stakes are higher when dabbling in subjects such as history, as there is a fine line between content created for mere entertainment and misinformation spread through unchecked biases and inaccuracies. For instance, an AI-generated image of London during the Black Death might include inaccurate details, misleading viewers about the past.
The Rise of AI-Generated Historical Images as Entertainment
Recently, generated images and videos of various historical moments, often rendered from the point of view of the people present, have been circulating all over the internet. Examples include the streets of London during the Black Death in the 1300s and the eruption of Mount Vesuvius at Pompeii. Hogne and Dan, two creators who operate the TikTok accounts POV Lab and Time Traveller POV, say they make such videos because seeing the past through a first-person perspective is an engaging way to bring history back to life, highlighting its most striking moments and helping audiences learn something new. Mostly sensationalised for visual impact and storytelling, such content has been called out by historians for inconsistencies in period detail. For now, the artists themselves admit their creations are inaccurate, describing them as artistic interpretations rather than fact-checked documentaries.
It is important to note that AI models may inaccurately depict objects (issues with lateral inversion), people (anatomical implausibilities), or scenes, owing to a "present-ist" bias. As Lauren Tilton, an associate professor of digital humanities at the University of Richmond, notes, many AI models rely primarily on data from the last 15 years, making them prone to modern-day distortions, especially when analysing and creating historical content. The idea is to spark interest rather than replace genuine historical facts, and engagement with these images and videos is assumed to be partly a product of fascination with emerging AI tools. Beyond images, chatbots like Hello History and Character.ai, which simulate conversations with historical figures, have also piqued curiosity.
Although such content offers an interesting perspective, one cannot ignore that our inherent biases shape how we perceive the information presented. The dangers include feeding conspiracy theories and eroding established facts, since the material is geared chiefly toward garnering attention and providing entertainment. Exposure of such content to impressionable audiences with short attention spans only increases the gravity of the matter. In such cases, transparency about the sources used for creation becomes an important factor.
Acknowledging the risks posed by AI-generated images and their susceptibility to creating misinformation, the Government of Spain has taken a step toward regulating AI-generated content. It has passed a bill mandating the labelling of AI-generated images, with failure to comply warranting massive fines (up to $38 million or 7% of a company's turnover). The idea is to ensure that creators label their content, helping viewers distinguish artificially created images from genuine ones.
The Way Forward: Navigating AI and Misinformation
While AI-generated images make for exciting possibilities for storytelling and enabling intrigue, their potential to spread misinformation should not be overlooked. To address these challenges, certain measures should be encouraged.
- Media Literacy and Awareness – In this day and age, critical thinking and media literacy among consumers of content are imperative. Awareness, understanding, and access to tools that help detect AI-generated content can prove valuable.
- AI Transparency and Labeling – Implementing regulations similar to Spain’s labelling bill could guide people who have yet to learn to distinguish AI-generated content from authentic material.
- Ethical AI Development – AI developers must prioritise ethical considerations, training models on diverse and historically accurate datasets and sources to minimise bias.
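Labelling regimes like the one described above depend on machine-readable provenance travelling with the file. As a rough illustration of the mechanics only (the Spanish bill's actual technical requirements are not specified here), the sketch below embeds and reads back a provenance note in a PNG's standard `tEXt` metadata chunk; the `AI-Generated` keyword and the tiny 1×1 image are purely illustrative assumptions:

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialise one PNG chunk: length, type, data, CRC32 of type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def make_labelled_png(key: bytes, value: bytes) -> bytes:
    """Build a minimal 1x1 greyscale PNG carrying a tEXt provenance label."""
    sig = b"\x89PNG\r\n\x1a\n"
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)  # 1x1, 8-bit grey
    idat = zlib.compress(b"\x00\x00")  # filter byte + one grey pixel
    text = key + b"\x00" + value       # tEXt: keyword NUL text
    return (sig + chunk(b"IHDR", ihdr) + chunk(b"tEXt", text)
            + chunk(b"IDAT", idat) + chunk(b"IEND", b""))

def read_labels(png: bytes) -> dict:
    """Walk the chunk stream and collect all tEXt key/value pairs."""
    labels, pos = {}, 8  # skip the 8-byte PNG signature
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        data = png[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            k, _, v = data.partition(b"\x00")
            labels[k.decode()] = v.decode()
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return labels

png = make_labelled_png(b"AI-Generated", b"true; model=example-model-v1")
print(read_labels(png))  # {'AI-Generated': 'true; model=example-model-v1'}
```

Real-world provenance schemes (such as C2PA content credentials) are cryptographically signed and far richer than this, but the principle is the same: the label rides inside the file rather than in a caption that can be stripped.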
As AI continues to evolve, balancing innovation with responsibility is essential. By taking proactive measures early, we can harness AI's potential while safeguarding the integrity of historical sources and public trust in generated imagery.
References:
- https://www.npr.org/2023/06/07/1180768459/how-to-identify-ai-generated-deepfake-images
- https://www.nbcnews.com/tech/tech-news/ai-image-misinformation-surged-google-research-finds-rcna154333
- https://www.bbc.com/news/articles/cy87076pdw3o
- https://newskarnataka.com/technology/government-releases-guide-to-help-citizens-identify-ai-generated-images/21052024/
- https://www.technologyreview.com/2023/04/11/1071104/ai-helping-historians-analyze-past/
- https://www.psypost.org/ai-models-struggle-with-expert-level-global-history-knowledge/
- https://www.youtube.com/watch?v=M65IYIWlqes&t=2597s
- https://www.vice.com/en/article/people-are-creating-records-of-fake-historical-events-using-ai/
- https://www.reuters.com/technology/artificial-intelligence/spain-impose-massive-fines-not-labelling-ai-generated-content-2025-03-11/
- https://www.theguardian.com/film/2024/sep/13/documentary-ai-guidelines

Introduction
As the calendar pages turn inexorably towards 2024, a question looms large on the horizon of our collective consciousness: Are we cyber-resilient? This is not a rhetorical flourish but a pragmatic inquiry, as the digital landscape we navigate is fraught with cyberattacks and disruptions that threaten to capsize our virtual vessels.
What, then, is Cyber Resilience? It is the capacity to prepare for, respond to, and recover from these cyber squalls. Picture, if you will, a venerable oak amid a howling gale. The roots, those unseen sinews, delve deep into the earth, anchoring the tree – this is preparation. The robust trunk and flexible branches, swaying yet unbroken, embody response. And the new growth that follows the storm's rage is recovery. Cyber resilience is the digital echo of this natural strength and flexibility.
The Need for Resilience
Why, you might ask, is Cyber Resilience of such paramount importance as we approach 2024? The answer lies in the stark reality of our times:
- A staggering half of businesses have been breached by cyberattacks in the past three years.
- The financial haemorrhage from these incursions is projected to exceed a mind-numbing $10 trillion by the end of 2024.
- The relentless march of technology has not only brought innovation but also escalated the arms race against cyber threats.
- Cyber resilience transcends mere cybersecurity; it is a holistic approach that weaves recovery and continuity into the fabric of digital defenses.
- The adaptability of organisations, often through measures such as remote working protocols, is a testament to the evolving strategies of cyber resilience.
- The advent of AI and Machine Learning heralds a new era of automated cyber defense, necessitating an integrated framework that marries security with continuity protocols.
- Societal awareness, particularly of social engineering tactics, and maintaining public relations during crises are now recognised as critical elements of resilience strategies.
- Cyber threats have evolved in sophistication, paralleling the intense competition to develop new AI-driven solutions.
- As we gaze towards the future, cyber resilience is expected to be a prominent trend in both business and consumer technology sectors throughout 2024.
The Virtues
The benefits of cyber resilience for organisations are manifold, offering a bulwark against the digital onslaught:
- A reduction in the risk of data breaches, safeguarding sensitive information and customer data.
- Business continuity, ensuring operations persist with minimal disruption.
- Protection of reputation, as companies that demonstrate effective cyber resilience engender trust.
- Compliance with data protection and privacy regulations, thus avoiding fines and legal entanglements.
- Financial stability, as the costs associated with breaches can be mitigated or even prevented.
- Enhanced customer trust, as clients feel more secure with companies that take cybersecurity seriously.
- A competitive advantage in a market rife with cyber threats.
- Innovation and agility, as cyber-resilient companies can pivot and adapt without fear of digital disruptions.
- Employee confidence, leading to improved morale and productivity.
- Long-term savings by sidestepping the expenses of frequent or major cyber incidents.
As the year wanes, it is a propitious moment to evaluate your organisation's cyber resilience. In this edition, we will guide you through the labyrinth of cyber investment buy-in, tailored discussions with stakeholders, and the quintessential security tools for your 2024 cybersecurity strategy.
How to be more Resilient
Cyber resilience is more than a shield; it is the preparedness to withstand and recover from a cyber onslaught. Let us explore the key steps to fortify your digital defenses:
- Know your risks: Map the terrain where you are most vulnerable, identify the treasures that could be plundered, and fortify accordingly.
- Get the technology right: Invest in solutions that not only detect threats with alacrity but also facilitate rapid recovery, all the while staying one step ahead of the cyber brigands.
- Involve your people: Embed cybersecurity awareness into the fabric of every role. Train your crew in the art of recognising and repelling digital dangers.
- Test your strategies: Regularly simulate incidents to stress-test your policies and procedures, honing your ability to contain and neutralise threats.
- Plan for the worst: Develop a playbook so that everyone knows their part in the grand scheme of damage control and communication in the event of a breach.
- Continually review: The digital seas are ever-changing; adjust your sails accordingly. Cyber resilience is not a one-time endeavour but a perpetual commitment.
Conclusion
As we stand on the precipice of 2024, let us not be daunted by the digital storms that rage on the horizon. Instead, let us embrace the imperative of cyber resilience, for it is our steadfast companion in navigating the treacherous waters of the cyber world. Civil society organisations such as the CyberPeace Foundation play a crucial role in promoting cyber resilience by bridging the gap between the public and the complexities of cybersecurity, conducting awareness campaigns, and advocating for robust policies to safeguard collective digital interests. Their active role is imperative in fostering a culture of cyber hygiene and vigilance.
References
- https://www.loginradius.com/blog/identity/cybersecurity-trends-2024/
- https://ciso.economictimes.indiatimes.com/news/ciso-strategies/cisos-guide-to-2024-top-10-cybersecurity-trends/106293196

Introduction
In the digital realm of social media, Meta Platforms, the driving force behind Facebook and Instagram, faces intense scrutiny following The Wall Street Journal's investigative report. This exploration delves deeper into critical issues surrounding child safety on these widely used platforms, unravelling algorithmic intricacies, enforcement dilemmas, and the ethical maze surrounding monetisation features. Instances of "parent-managed minor accounts" leveraging Meta's subscription tools to monetise content featuring young individuals have raised eyebrows. While skirting the line of legality, this practice prompts concerns due to its potential appeal to adults and the associated inappropriate interactions. It's a nuanced issue demanding nuanced solutions.
Failed Algorithms
The very heartbeat of Meta's digital ecosystem, its algorithms, has come under intense scrutiny. These algorithms, designed to curate and deliver content, were found to be actively promoting accounts featuring explicit content to users with known pedophilic interests. The revelation sparks a crucial conversation about the ethical responsibilities tied to the algorithms shaping our digital experiences. Striking the right balance between personalised content delivery and safeguarding users is a delicate task.
While algorithms play a pivotal role in tailoring content to users' preferences, Meta needs to reevaluate the algorithms to ensure they don't inadvertently promote inappropriate content. Stricter checks and balances within the algorithmic framework can help prevent the inadvertent amplification of content that may exploit or endanger minors.
Major Enforcement Challenges
Meta's enforcement challenges have come to light as previously banned parent-run accounts resurrect, gaining official verification and accumulating large followings. The struggle to remove associated backup profiles adds layers to concerns about the effectiveness of Meta's enforcement mechanisms. It underscores the need for a robust system capable of swift and thorough actions against policy violators.
To enhance enforcement mechanisms, Meta should invest in advanced content detection tools and employ a dedicated team for consistent monitoring. This proactive approach can mitigate the risks associated with inappropriate content and reinforce a safer online environment for all users.
The financial dynamics of Meta's ecosystem raise concerns about the exploitation of videos eligible for cash gifts from followers. The decision to expand the subscription feature before implementing adequate safety measures poses ethical questions. Prioritising financial gains over user safety risks tarnishing the platform's reputation and trustworthiness. A re-evaluation of this strategy is crucial for maintaining a healthy and secure online environment.
To address safety concerns tied to monetisation features, Meta should consider implementing stricter eligibility criteria for content creators. Verifying the legitimacy and appropriateness of content before allowing it to be monetised can act as a preventive measure against the exploitation of the system.
Meta's Response
In the aftermath of the revelations, Meta's spokesperson, Andy Stone, took centre stage to defend the company's actions. Stone emphasised ongoing efforts to enhance safety measures, asserting Meta's commitment to rectifying the situation. However, critics argue that Meta's response lacks the decisive actions required to align with industry standards observed on other platforms. The debate continues over the delicate balance between user safety and the pursuit of financial gain. A more transparent and accountable approach to addressing these concerns is imperative.
To rebuild trust and credibility, Meta needs to implement concrete and visible changes. This includes transparent communication about the steps taken to address the identified issues, continuous updates on progress, and a commitment to a user-centric approach that prioritises safety over financial interests.
The formation of a task force in June 2023 was a commendable step to tackle child sexualisation on the platform. However, the effectiveness of these efforts remains limited. Persistent challenges in detecting and preventing potential child safety hazards underscore the need for continuous improvement. Legislative scrutiny adds an extra layer of pressure, emphasising the urgency for Meta to enhance its strategies for user protection.
To overcome ongoing challenges, Meta should collaborate with external child safety organisations, experts, and regulators. Open dialogues and partnerships can provide valuable insights and recommendations, fostering a collaborative approach to creating a safer online environment.
Drawing a parallel with competitors such as Patreon and OnlyFans reveals stark differences in child safety practices. While Meta grapples with its challenges, these platforms maintain stringent policies against certain content involving minors. This comparison underscores the need for universal industry standards to safeguard minors effectively. Collaborative efforts within the industry to establish and adhere to such standards can contribute to a safer digital environment for all.
To align with industry standards, Meta should actively participate in cross-industry collaborations and adopt best practices from platforms with successful child safety measures. This collaborative approach ensures a unified effort to protect users across various digital platforms.
Conclusion
Navigating the intricate landscape of child safety concerns on Meta Platforms demands a nuanced and comprehensive approach. The identified algorithmic failures, enforcement challenges, and controversies surrounding monetisation features underscore the urgency for Meta to reassess and fortify its commitment to being a responsible digital space. As the platform faces this critical examination, it has an opportunity to not only rectify the existing issues but to set a precedent for ethical and secure social media engagement.
This comprehensive exploration aims not only to shed light on the existing issues but also to provide a roadmap for Meta Platforms to evolve into a safer and more responsible digital space. The responsibility lies not just in acknowledging shortcomings but in actively working towards solutions that prioritise the well-being of its users.
References
- https://timesofindia.indiatimes.com/gadgets-news/instagram-facebook-prioritised-money-over-child-safety-claims-report/articleshow/107952778.cms
- https://www.adweek.com/blognetwork/meta-staff-found-instagram-tool-enabled-child-exploitation-the-company-pressed-ahead-anyway/107604/
- https://www.tbsnews.net/tech/meta-staff-found-instagram-subscription-tool-facilitated-child-exploitation-yet-company