#FactCheck - AI-Generated Image of Abhishek Bachchan and Aishwarya Rai Falsely Linked to Kedarnath Visit
A photo featuring Bollywood actor Abhishek Bachchan and actress Aishwarya Rai is being widely shared on social media. In the image, the Kedarnath Temple is clearly visible in the background. Users are claiming that the couple recently visited the Kedarnath shrine for darshan.
Cyber Peace Foundation’s research found the viral claim to be false. The image of Abhishek Bachchan and Aishwarya Rai is not a real photograph but an AI-generated one, misleadingly shared as genuine.
Claim
On January 14, 2026, a user on X (formerly Twitter) shared the viral image with a caption suggesting that all rumours had ended and that the couple had restarted their life together. The post further claimed that both actors were seen smiling after a long time, implying that the image was taken during their visit to Kedarnath Temple.
The post has since been widely circulated on social media platforms.

Fact Check
To verify the claim, we first conducted a keyword search on Google related to Abhishek Bachchan, Aishwarya Rai, and a Kedarnath visit. However, we did not find any credible media reports confirming such a visit.
On closely examining the viral image, we noticed several visual inconsistencies that suggested it was artificially generated. To confirm this, we scanned the image with the AI detection tool Sightengine, whose analysis assessed the image as 84 percent likely to be AI-generated.

Additionally, we scanned the same image using another AI detection tool, HIVE Moderation. The results showed an even stronger indication, classifying the image as 99 percent AI-generated.
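For readers who want to run similar checks themselves, the sketch below shows how an image URL might be submitted to a detection service and the returned probability interpreted. It is a minimal illustration only: the Sightengine endpoint and response fields are taken from its public API documentation and should be verified there before use, and the `interpret_ai_score` helper and its 0.5 threshold are hypothetical choices, not part of our fact-checking workflow.

```python
import json
import urllib.parse
import urllib.request


def interpret_ai_score(ai_probability, threshold=0.5):
    """Map a detector's 0-1 'AI-generated' probability to a verdict label.

    The 0.5 cut-off is an arbitrary illustrative choice.
    """
    if ai_probability >= threshold:
        return "likely AI-generated"
    return "likely authentic"


def check_image_url(image_url, api_user, api_secret):
    """Query Sightengine's 'genai' model for an image hosted at a URL.

    Endpoint and parameter names follow Sightengine's public API docs;
    confirm them against the current reference before relying on this.
    """
    params = urllib.parse.urlencode({
        "models": "genai",
        "url": image_url,
        "api_user": api_user,
        "api_secret": api_secret,
    })
    endpoint = "https://api.sightengine.com/1.0/check.json?" + params
    with urllib.request.urlopen(endpoint, timeout=30) as resp:
        data = json.load(resp)
    return data["type"]["ai_generated"]  # probability between 0 and 1


if __name__ == "__main__":
    # Replace the placeholders with real credentials and an image URL.
    score = check_image_url("https://example.com/viral.jpg", "API_USER", "API_SECRET")
    print(f"{score:.0%} -> {interpret_ai_score(score)}")
```

Note that detector scores are probabilistic, which is why we corroborated Sightengine's 84 percent result with a second tool rather than treating one number as conclusive.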

Conclusion
Our research confirms that the viral image showing Abhishek Bachchan and Aishwarya Rai at Kedarnath Temple is not authentic. The picture is AI-generated and is being falsely shared on social media to mislead users.

Artificial Intelligence (AI) provides a varied range of services and continues to invite intrigue and experimentation. It has altered how we create and consume content: specific prompts can now be used to generate desired images, enriching storytelling and even education. However, because such content can influence public perception, its potential to cause misinformation must be acknowledged. The realism of these images can make them hard for the untrained eye to identify as artificially generated. Since AI generates output by analysing the data it was trained on, its lack of contextual knowledge, along with the human biases embedded in prompts, also comes into play. The stakes are higher with subjects such as history, where there is a fine line between content created for mere entertainment and misinformation spread through unchecked biases and inaccuracies. An AI-generated image of London during the Black Death, for instance, might include inaccurate details and mislead viewers about the past.
The Rise of AI-Generated Historical Images as Entertainment
Recently, generated images and videos of various historical moments, often rendered from the point of view of the people present, have been circulating all over the internet. Examples include the streets of London during the Black Death in 1300s England and the eruption of Mount Vesuvius at Pompeii. Hogne and Dan, two creators who operate the TikTok accounts POV Lab and Time Traveller POV, say they create such videos because seeing the past through a first-person perspective is an engaging way to bring history back to life, highlighting its most striking moments and helping the audience learn something new. Mostly sensationalised for visual impact and storytelling, such content has been called out by historians for inconsistencies in period-specific details. For now, the artists themselves admit their creations are inaccurate, describing them as artistic interpretations rather than fact-checked documentaries.
It is important to note that AI models may inaccurately depict objects (issues with lateral inversion), people (anatomical implausibilities), or scenes due to "present-ist" bias. As noted by Lauren Tilton, an associate professor of digital humanities at the University of Richmond, many AI models rely primarily on data from the last 15 years, making them prone to modern-day distortions, especially when analysing or creating historical content. The idea is to spark interest rather than replace the genuine historical record, and engagement with these images and videos is assumed to be partly a product of fascination with emerging AI tools. Beyond images, chatbots such as Hello History and Character.ai, which simulate conversations with historical figures, have also piqued curiosity.
Although it makes for an interesting perspective, one cannot ignore that our inherent biases shape how we perceive the information presented. The dangers include feeding conspiracy theories and eroding established facts, since such content is geared primarily toward attention and entertainment. Its exposure to impressionable audiences with short attention spans adds to the gravity of the matter. In such cases, transparency about the sources used for creation becomes an important factor.
Acknowledging the risks posed by AI-generated images and their susceptibility to fuel misinformation, the Government of Spain has taken a step toward regulating AI-generated content. It has passed a bill mandating the labelling of AI-generated images; failure to comply can draw massive fines of up to $38 million or 7 percent of a company's turnover. The idea is that mandatory labelling will help viewers distinguish artificially created images from authentic ones.
The Way Forward: Navigating AI and Misinformation
While AI-generated images make for exciting possibilities for storytelling and enabling intrigue, their potential to spread misinformation should not be overlooked. To address these challenges, certain measures should be encouraged.
- Media Literacy and Awareness – In this day and age, critical thinking and media literacy among consumers of content are imperative. Awareness, understanding, and access to tools that aid in detecting AI-generated content can prove helpful.
- AI Transparency and Labeling – Implementing regulations similar to Spain’s labelling bill could guide people who have yet to learn to tell AI-generated content apart from authentic material.
- Ethical AI Development – AI developers must prioritize ethical considerations in training, using diverse and historically accurate datasets and sources to minimise biases.
As AI continues to evolve, balancing innovation with responsibility is essential. By taking proactive measures early, we can harness AI's potential while safeguarding the integrity of, and trust in, the sources behind generated images.
References:
- https://www.npr.org/2023/06/07/1180768459/how-to-identify-ai-generated-deepfake-images
- https://www.nbcnews.com/tech/tech-news/ai-image-misinformation-surged-google-research-finds-rcna154333
- https://www.bbc.com/news/articles/cy87076pdw3o
- https://newskarnataka.com/technology/government-releases-guide-to-help-citizens-identify-ai-generated-images/21052024/
- https://www.technologyreview.com/2023/04/11/1071104/ai-helping-historians-analyze-past/
- https://www.psypost.org/ai-models-struggle-with-expert-level-global-history-knowledge/
- https://www.youtube.com/watch?v=M65IYIWlqes&t=2597s
- https://www.vice.com/en/article/people-are-creating-records-of-fake-historical-events-using-ai/
- https://www.reuters.com/technology/artificial-intelligence/spain-impose-massive-fines-not-labelling-ai-generated-content-2025-03-11/
- https://www.theguardian.com/film/2024/sep/13/documentary-ai-guidelines
Introduction
Empowering today’s youth with the right skills is more crucial than ever in a rapidly evolving digital world. Every year on July 15th, the United Nations marks World Youth Skills Day to emphasise the critical role of skills development in preparing young people for meaningful work and resilient futures. As AI transforms industries and societies, equipping young minds with digital and AI skills is key to fostering security, adaptability, and growth in the years ahead.
Why AI Upskilling is Crucial in Modern Cyber Defence
Security in the digital age remains a complex challenge, regardless of the presence of Artificial Intelligence (AI). It is one of the great modern ironies, a paradox wrapped in code, where the cure and the curse are written in the same language: the very hand that protects the world from cyber threats can just as well create them. Modern implementations of AI must therefore be designed to withstand the threats posed by AI itself and by other advanced technologies. A solid grasp of AI and machine learning mechanisms is no longer optional; it is fundamental to modern cybersecurity. Traditional cybersecurity training programs rely on static content, which quickly becomes outdated and inadequate against new vulnerabilities. AI-powered solutions, such as intrusion detection systems and next-generation firewalls, use behavioural analysis instead of just matching signatures. AI models are nevertheless susceptible themselves, as malevolent actors can introduce adversarial inputs or tainted training data to trick systems into misclassification. According to research from Cisco, data poisoning is a major threat to AI defences.
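To make the contrast with signature matching concrete, here is a toy behavioural check in Python: it takes a baseline of per-user request rates and flags readings that deviate by more than a chosen number of standard deviations. This is an illustrative sketch, not any vendor's implementation; the sample data, function name, and the z-score threshold of 3.0 are all hypothetical.

```python
from statistics import mean, stdev


def flag_anomaly(baseline, current, z_threshold=3.0):
    """Flag `current` when it deviates from the baseline by more than
    `z_threshold` standard deviations. Unlike signature matching, this
    needs no catalogue of known-bad patterns, only normal behaviour."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_threshold


# Hypothetical hourly request counts for one user over a normal day
normal_hours = [48, 52, 55, 47, 50, 53, 49, 51]

print(flag_anomaly(normal_hours, 54))   # within the usual range -> False
print(flag_anomaly(normal_hours, 400))  # sudden spike -> True
```

The trade-off this sketch also illustrates is the one noted above: because the model's notion of "normal" is learned from data, an attacker who can poison that baseline can quietly widen what the detector tolerates.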
As threats outpace the current understanding of cybersecurity professionals, there is a need to upskill them in advanced AI technologies so that they can fortify the security of existing systems. Two of the most important skills for professionals are AI/ML model auditing and data science. Skilled data scientists can sift through vast logs, from packet captures to user profiles, to detect anomalies, assess vulnerabilities, and anticipate attacks. A Business Insider report puts it aptly: ‘It takes a good-guy AI to fight a bad-guy AI.’ Generative AI is still a young technology; as a result, it both poses fresh security issues and faces risks of its own, such as data exfiltration and prompt injection.
Another method that can prove effective is Natural Language Processing (NLP), which helps machines process unstructured data such as emails, chat logs, and documents, enabling automated spam detection, sentiment analysis, and threat-context extraction. Security teams skilled in NLP can deploy systems that flag suspicious email patterns, detect malicious content in code reviews, and monitor internal networks for insider threats, all at speeds and scales humans cannot match.
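A minimal sketch of the kind of pattern-flagging described above, under the assumption of a hand-weighted term list (a production system would learn weights from labelled mail rather than hand-pick them, and would use far richer features than bare tokens):

```python
import re

# Hypothetical weights for terms common in phishing mail; a real
# system would learn these from labelled data.
SUSPICIOUS_TERMS = {
    "urgent": 2.0,
    "verify": 1.5,
    "password": 2.5,
    "winner": 2.0,
    "click": 1.0,
}


def spam_score(message):
    """Tokenise the message and sum the weights of suspicious terms."""
    tokens = re.findall(r"[a-z']+", message.lower())
    return sum(SUSPICIOUS_TERMS.get(t, 0.0) for t in tokens)


def is_suspicious(message, threshold=3.0):
    """Flag a message whose accumulated term weight crosses the threshold."""
    return spam_score(message) >= threshold


print(is_suspicious("URGENT: verify your password now"))  # True (score 6.0)
print(is_suspicious("Lunch at noon tomorrow?"))           # False (score 0.0)
```

Even this crude scorer shows why NLP scales where manual review cannot: the same function can screen millions of messages per day at negligible cost.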
The AI skills mentioned above are no longer a nicety; they have become essential in the current landscape. India is not far behind in this mission: alongside its Western counterparts, it is committed to employing emerging technologies in its larger goal of advancement. With quiet confidence, India takes pride in its remarkable capacity to nurture exceptional talent in science and technology, with Indian minds making significant contributions across global arenas.
AI Upskilling in India
As per a news report of March 2025, Jayant Chaudhary, Minister of State, Ministry of Skill Development & Entrepreneurship, highlighted that various schemes under the Skill India Programme (SIP) guarantee greater integration of emerging technologies, such as artificial intelligence (AI), cybersecurity, blockchain, and cloud computing, to meet industry demands. A parliamentary brochure on the SIP states that more than 6.15 million beneficiaries had received training as of December 2024. Other schemes that facilitate training professionals for roles such as Data Scientist, Business Intelligence Analyst, and Machine Learning Engineer include:
- Pradhan Mantri Kaushal Vikas Yojana 4.0 (PMKVY 4.0)
- Pradhan Mantri National Apprenticeship Promotion Scheme (PM-NAPS)
- Jan Shikshan Sansthan (JSS)
Another report shows how companies operating in India, such as Ernst & Young (EY), are recognising both the potential of the Indian workforce and its gaps in emerging technologies, and are leading the way through internal upskilling. EY has established an AI Academy, a program designed to help businesses equip their employees with essential AI capabilities in response to the growing need for AI expertise. Drawing on more than 200 real-world AI use cases, the program offers interactive, structured learning that covers everything from basic concepts to sophisticated generative AI capabilities.
To understand the need for these initiatives, it is worth referring to a report backed by Google.org and the Asian Development Bank, which suggests India is at a turning point in the global adoption of AI. According to the research, “AI for All: Building an AI-Ready Workforce in Asia-Pacific,” India urgently needs accessible and effective AI upskilling despite having the largest workforce in the world. The report estimates that, by 2030, AI could boost the Asia-Pacific region’s GDP by up to USD 3 trillion, and India, with the youngest and fastest-growing population, is key to that potential.
Conclusion and CyberPeace Resolution
As the world stands at the crossroads of innovation and insecurity, India finds itself uniquely poised, with its vast young population and growing technologies. But to truly safeguard its digital future and harness the promise of AI, the country must think beyond flagship schemes: imagine classrooms where students learn not just to code but to question algorithms, and workplaces where AI training is as routine as onboarding.
India’s journey towards digital resilience is not just about mastering technology but about cultivating curiosity, responsibility, and trust. CyberPeace is committed to this future and is resolute in this collective pursuit of an ethically secure digital world. CyberPeace resolves to be an active catalyst in AI upskilling across India. We commit to launching specialised training modules on AI, cybersecurity, and digital ethics tailored for students and professionals. By working with educational institutions, skilling initiatives, and industry stakeholders, we seek to close the AI literacy gap and develop a workforce that is both ethically aware and technologically proficient.
References
- https://www.helpnetsecurity.com/2025/03/07/ai-gamified-simulations-cybersecurity/
- https://www.businessinsider.com/artificial-intelligence-cybersecurity-large-language-model-threats-solutions-2025-5
- https://apacnewsnetwork.com/2025/03/ai-5g-skills-boost-skill-india-targets-industry-demands-over-6-15-million-beneficiaries-trained-till-2024/
- https://indianexpress.com/article/technology/artificial-intelligence/india-must-upskill-fast-to-keep-up-with-ai-jobs-says-new-report-10107821/

The global race for Artificial Intelligence is heating up, and India has become one of its most important battlegrounds. Over the past few months, tech giants like OpenAI (ChatGPT), Google (Gemini), X (Grok), Meta (Llama), and Perplexity AI have stepped up their presence in the country, not by selling their AI tools, but by offering them free or at deep discounts.
At first, it feels like a huge win for India’s digital generation. Students, professionals, and entrepreneurs today can tap into some of the world’s most powerful AI tools without paying a rupee. It feels like a digital revolution unfolding in real time. Yet beneath this generosity lies a more complicated truth. Experts caution that this wave of “free” AI access comes with strings attached, with implications for data privacy, the fairness of competition, and the pace of the homegrown AI innovation the country is banking on.
The Market Strategy: Free Now, Pay Later
The choice of global AI companies to offer free access in India is a calculated business strategy. With one of the world’s largest and fastest-growing digital populations, India is a market no tech giant wants to miss. By giving away their AI tools for free, these firms are playing a long game:
- Securing market share early: Flooding the market with free access helps them quickly attract millions of users before Indian startups have a chance to catch up. Recent examples include Perplexity, ChatGPT Go, and Gemini, which are offering free subscriptions to Indian users.
- Gathering local data: Every interaction, every prompt, question, or language pattern, helps these models learn from larger datasets and improve their product offerings in India and the rest of the world. As the popular saying goes, “if something is free, you are the product.” The same applies to these AI platforms: they monetise user data by analysing chats and behaviour to refine their models and build paid products. This creates a privacy risk, as India currently lacks specific laws governing how such data is stored, processed, or used for AI training.
- Creating user dependency: Once users grow accustomed to the quality and convenience of these global models, shifting to Indian alternatives will be difficult, even once the free tools become paid. This approach mirrors the “freemium” model used in other tech sectors, where users are first attracted through free access and later monetised through subscriptions or premium features, raising ethical concerns.
Impact on Indian Users
For most Indians, the short-term impact of free AI access feels overwhelmingly positive. Tools like ChatGPT and Gemini are breaking down barriers, democratising knowledge and making advanced technology available to everyone from students and professionals to small businesses. It is changing how people learn, think, and work, all without spending a single rupee. But the long-term picture isn’t quite as simple. Beneath the convenience lies a set of growing concerns:
- Data privacy risks: Many users don’t realise that their chats, prompts, or queries might be stored and used to train global AI models. Without strong data protection laws in action, sensitive Indian data could easily find its way into foreign systems.
- Overdependence on foreign technology: Once these AI tools become part of people’s daily lives, moving away from them gets harder, especially if free access later turns into paid plans or comes with restrictive conditions.
- Language and cultural bias: Most large AI models are still built mainly around English and Western data. Without enough Indian language content and cultural representation, the technology risks overlooking the very diversity that defines India.
Impact on India’s AI Ecosystem
India’s generative AI market, valued at USD 1.30 billion in 2024, is projected to reach USD 5.40 billion by 2033. Yet this growth story may become uneven if global players dominate early.
Domestic AI startups face multiple hurdles: limited funding, high compute costs, and difficulty accessing large, diverse datasets. The arrival of free, GPT-4-level models sharpens these challenges by raising user expectations and increasing customer acquisition costs.
As AI analyst Kashyap Kompella notes, “If users can access GPT-4-level quality at zero cost, their incentive to try local models that still need refinement will be low.” This could stifle innovation at home, resulting in a shallow domestic AI ecosystem where India consumes global technology but contributes little to its creation.
CCI’s Intervention: Guarding Fair Competition
The Competition Commission of India (CCI) has started taking note of how global AI companies are shaping India’s digital market. In a recent report, it cautioned that AI-driven pricing strategies such as offering free or heavily subsidised access could distort healthy competition and create an uneven playing field for smaller Indian developers.
The CCI’s decision to step in is both timely and necessary. Without proper oversight, such tactics could gradually push homegrown AI startups to the sidelines and allow a few foreign tech giants to gain disproportionate influence over India’s emerging AI economy.
What the Indian Government Should Do
To ensure India’s AI landscape remains competitive, inclusive, and innovation-driven, the government must adopt a balanced strategy that safeguards users while empowering local developers.
1. Promote Fair Competition
The government should mandate transparency in free access offers, including their duration, renewal terms, and data-use policies. Exclusivity deals between foreign AI firms and telecom or device companies must be closely monitored to prevent monopolistic practices.
2. Strengthen Data Protection
Under the Digital Personal Data Protection (DPDP) Act, companies should be required to obtain explicit consent from users before their data is used for model training. The government should also encourage data localisation, ensuring that sensitive Indian data remains stored within India’s borders.
3. Support Domestic AI Innovation
Accelerate the implementation of the IndiaAI Mission to provide public compute infrastructure, open datasets, and research funding to local AI developers. Sarvam AI, for instance, is the Indian company chosen by the government under the IndiaAI Mission to build the country's first homegrown large language model (LLM).
4. Create an Open AI Ecosystem
India should develop national AI benchmarks to evaluate all models, foreign or domestic, on performance, fairness, and linguistic diversity. At the same time, it should build national data centres to train its indigenous AI models.
5. Encourage Responsible Global Collaboration
Speaking at the AI Action Summit 2025, the Prime Minister highlighted that governance should go beyond managing risks and should also promote innovation for the global good. Building on this idea, India should encourage global AI companies to invest meaningfully in the country’s ecosystem through research labs, data centres, and AI education programmes. Such collaborations will ensure that these partnerships not only expand markets but also create value, jobs and knowledge within India.
Conclusion
The surge of free AI access across India represents a defining moment in the nation’s digital journey. On one hand, it’s empowering millions of people and accelerating AI awareness like never before. On the other hand, it poses serious challenges from over-reliance on foreign platforms to potential risks around data privacy and the slow growth of local innovation. India’s real test will be finding the right balance between access and autonomy, allowing global AI leaders to innovate and operate here, but within a framework that protects the interests of Indian users, startups, and data ecosystems. With strong and timely action under the Digital Personal Data Protection (DPDP) Act, the IndiaAI Mission, and the Competition Commission of India’s (CCI) active oversight, India can make sure this AI revolution isn’t just something that happens to the country, but for it.
References
- https://www.moneycontrol.com/artificial-intelligence/cci-study-flags-steep-barriers-for-indian-ai-startups-calls-for-open-data-and-compute-access-to-level-playing-field-article-13600606.html
- https://www.imarcgroup.com/india-generative-ai-market
- https://www.mea.gov.in/Speeches-Statements.htm?dtl/39020/Opening_Address_by_Prime_Minister_Shri_Narendra_Modi_at_the_AI_Action_Summit_Paris_February_11_2025
- https://m.economictimes.com/tech/artificial-intelligence/nasscom-planning-local-benchmarks-for-indic-ai-models/articleshow/124218208.cms
- https://indianexpress.com/article/business/centre-selects-start-up-sarvam-to-build-country-first-homegrown-ai-model-9967243/