#FactCheck - Viral image of a bridge claimed to be in Mumbai is actually the Jiaozhou Bay Bridge in Qingdao, China
Executive Summary:
The claim that a photograph of a bridge circulating on social media shows Mumbai, India has been found to be false. Investigation, including reverse image searches, examination of similar videos, and comparison with reputable news sources and Google Images, shows that the bridge in the viral photo is the Qingdao Jiaozhou Bay Bridge in Qingdao, China. Multiple pieces of evidence, including matching architectural features and corroborating videos, confirm that the bridge is not in Mumbai. No credible reports or sources attest to the existence of a similar bridge in Mumbai.

Claims:
Social media users claim that a viral image shows a bridge located in Mumbai.



Fact Check:
Once we received the image, we ran a reverse image search to find leads or related information. We found an image published by the Mirror news outlet; while we could not confirm an exact match at that stage, it shows the same upper pylons and white foundation pillars seen in the viral image.

The bridge is the Jiaozhou Bay Bridge in China, which connects the country's eastern port city of Qingdao to the offshore island of Huangdao.
Taking a cue from this, we searched for the bridge to find other related images or videos. We found a YouTube video uploaded by a channel named xuxiaopang that shows the same structures, including the pylons and road design.

The reverse image search also surfaced another news article about the same bridge in China, closely matching the viral image.
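Beyond manual reverse image search, investigators often compare a suspect image against candidate reference photos programmatically. Below is a minimal sketch of that general technique using perceptual hashing via the Python ImageHash library; the file names are hypothetical, and this illustrates the method in general, not the exact workflow of this fact-check.

```python
# Sketch: comparing a viral image against a reference photo with perceptual
# hashing, a technique commonly used alongside reverse image search.
# File names are hypothetical; requires the Pillow and ImageHash packages.
from PIL import Image
import imagehash

viral = imagehash.phash(Image.open("viral_bridge.jpg"))
reference = imagehash.phash(Image.open("jiaozhou_bay_reference.jpg"))

# Hamming distance between the two hashes: small values suggest the same
# scene even after resizing, recompression, or minor edits.
distance = viral - reference
print(f"Hash distance: {distance} (values near 0 indicate a likely match)")
```

Perceptual hashes tolerate the recompression and resizing that viral images typically undergo, which is why they are preferred over exact byte comparison in verification work.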

Given the absence of evidence or credible sources for the opening of a similar bridge in Mumbai, and after a thorough investigation, we conclude that the claim made in the viral post is false and misleading. The bridge is located in China, not Mumbai.
Conclusion:
In conclusion, our fact-check found that the viral image does not show a bridge in Mumbai, India. The bridge in the picture, claimed to be in Mumbai, is in fact the Qingdao Jiaozhou Bay Bridge located in Qingdao, China. Reverse image searches, videos, and reliable news outlets all point to the same conclusion, and no evidence suggests that a comparable bridge exists in Mumbai. The claim is therefore false: the bridge is in China, not Mumbai.
- Claim: The bridge seen in the popular social media posts is in Mumbai.
- Claimed on: X (formerly known as Twitter), Facebook
- Fact Check: Fake & Misleading
Related Blogs

Introduction
In today's digital age, we consume a great deal of information and content on social media apps, and they have become part of our daily lives. Additionally, these apps' algorithms are such that once you like or show interest in a particular category of content, they begin showing you much more of the same. With this, the hype around becoming a content creator has also grown, and people have started making short reel videos and sharing large amounts of information. There are influencers in every field, whether lifestyle, fitness, education, entertainment, vlogging, or, now, even legal advice.
Online content, reels, and viral videos in which social media influencers give legal advice can have far-reaching consequences. ‘LAW’ is a vast subject in which even a single punctuation mark holds significant meaning. If the law is misinterpreted or only partially explained in social media reels and short videos, it can lead to serious consequences. Laws apply based on the facts and circumstances of each case, and they can differ depending on the nature of the case or offence. This trend of ‘swipe for legal advice’ or ‘law in 30 seconds’, along with the rising number of legal influencers, poses a serious problem in the online information landscape. It raises questions about the credibility and accuracy of such legal advice, as misinformation can mislead the masses, fuel legal confusion, and create risk.
Bar Council of India’s stance against legal misinformation on social media platforms
The Bar Council of India (BCI) on Monday (March 17, 2025) expressed concern over the rise of self-styled legal influencers on social media, stating that many without proper credentials spread misinformation on critical legal issues. Additionally, “Incorrect or misleading interpretations of landmark judgments like the Citizenship Amendment Act (CAA), the Right to Privacy ruling in Justice K.S. Puttaswamy (Retd.) v. Union of India, and GST regulations have resulted in widespread confusion, misguided legal decisions, and undue judicial burden,” the body said. The BCI also ordered the mandatory cessation of misleading and unauthorised legal advice dissemination by non-enrolled individuals and called for the establishment of stringent vetting mechanisms for legal content on digital platforms. The BCI emphasised the need for swift removal of misleading legal information.
Conclusion
Legal misinformation on social media is a growing issue that not only disrupts public perception but also influences real-life decisions. The internet is turning complex legal discourse into a chaotic game of whispers, with influencers sometimes misquoting laws and self-proclaimed "legal experts" offering advice that wouldn't survive in a courtroom. The solution is not censorship, but counterbalance. Verified legal voices need to step up, fact-checking must be relentless, and digital literacy must evolve to keep up with the fast-moving world of misinformation. Otherwise, "legal truth" could be determined by whoever has the best engagement rate, rather than by legislation or precedent.

Introduction
As experiments with Generative Artificial Intelligence (AI) continue, companies and individuals look for new ways to incorporate and capitalise on it, including big tech companies betting on its potential through investments. This process also sheds light on how such innovations are carried out, how they are used, and how they affect other stakeholders. Google’s AI Overview feature has raised concerns among website publishers and regulators. Recently, Chegg, a US-based tech education company that provides online resources for high school and college students, filed a lawsuit against Google alleging abuse of its monopoly over search.
Legal Background
Google’s AI Overview/Search Generative Experience (SGE) is a feature that incorporates AI into its standard search tool and summarises search results. The summary is presented at the top of the results page, above links to the published websites. Although the sources of the information are linked, they are partially hidden, and it is ambiguous which of the AI’s claims come from which link; to find out, the user interface requires the searcher to click a drop-down box. Individual publishers and companies like Chegg have argued that such summaries deter their potential traffic and lead to losses, as they continue to bid ever higher for Google’s advertisement services only to have their target audience discouraged from visiting their websites. What is unique about Chegg’s lawsuit is that it is based on antitrust law rather than copyright law, which Google has dealt with previously. In August 2024, a US federal judge ruled that Google held an illegal monopoly over the internet search and search text advertising markets, and by November the US Department of Justice (DOJ) had filed its proposed remedies. These included giving advertisers and publishers more control over their data flowing through Google’s products, opening Google’s search index to the rest of the market, and imposing public oversight of Google’s AI investments. Currently, the DOJ has emphasised dismantling the search monopoly through structural separation, i.e., divesting Google of Chrome. The company is slated to defend itself before DC District Court Judge Amit Mehta starting April 20, 2025.
CyberPeace Insights
As per a report by Statista (Global market share of leading search engines 2015-2025), Google, as the market leader, held a search traffic share of around 89.62 per cent. Its advertising services account for the majority of its revenue, which amounted to a total of 305.63 billion U.S. dollars in 2023. The inclusion of the AI feature is undoubtedly changing how we search online. For users, the benefit is an immediate, convenient scan of general information about the looked-up subject; for website publishers, it raises concerns about lost ad revenue owing to fewer impressions and clicks. Even though links (sources) are mentioned, they are usually buried. Such a search mechanism weakens incentives on both ends: for the user to explore various viewpoints, since people are now satisfied with the first few results that appear, and for creators/publishers to produce new content and earn an income from it. There may be a shift toward passive consumption rather than active, genuine searching for information.
Conclusion
AI may make life more convenient, but in this case it may also take from small businesses their traffic, their finances, and the results of their hard work. Regulators, publishers, and users must continue asking such critical questions to keep big tech giants accountable without compromising creators’ works and publications.
References
- https://www.washingtonpost.com/technology/2024/05/13/google-ai-search-io-sge/
- https://www.theverge.com/news/619051/chegg-google-ai-overviews-monopoly
- https://economictimes.indiatimes.com/tech/technology/google-leans-further-into-ai-generated-overviews-for-its-search-engine/articleshow/118742139.cms?from=mdr
- https://www.nytimes.com/2024/12/03/technology/google-search-antitrust-judge.html
- https://www.odinhalvorson.com/monopoly-and-misuse-googles-strategic-ai-narrative/
- https://cio.economictimes.indiatimes.com/news/artificial-intelligence/google-leans-further-into-ai-generated-overviews-for-its-search-engine/118748621
- https://www.techpolicy.press/the-elephant-in-the-room-in-the-google-search-case-generative-ai/
- https://www.karooya.com/blog/proposed-remedies-break-googles-monopoly-antitrust/
- https://getellipsis.com/blog/googles-monopoly-and-the-hidden-brake-on-ai-innovation/
- https://www.statista.com/statistics/266249/advertising-revenue-of-google/
- https://www.statista.com/statistics/1381664/worldwide-all-devices-market-share-of-search-engines/
- https://www.techpolicy.press/doj-sets-record-straight-of-whats-needed-to-dismantle-googles-search-monopoly/

Introduction
In a world where Artificial Intelligence (AI) is already changing the creation and consumption of content at a breathtaking pace, distinguishing between genuine media and false or doctored content is a serious issue of international concern. AI-generated content in the form of deepfakes, synthetic text and photorealistic images is being used to disseminate misinformation, shape public opinion and commit fraud. As a response, governments, tech companies and regulatory bodies are exploring ‘watermarking’ as a key mechanism to promote transparency and accountability in AI-generated media. Watermarking embeds identifiable information into content to indicate its artificial origin.
Government Strategies Worldwide
Governments worldwide have pursued different strategies to address AI-generated media through watermarking standards. In the US, President Biden's 2023 Executive Order on AI directed the Department of Commerce and the National Institute of Standards and Technology (NIST) to establish clear guidelines for digital watermarking of AI-generated content. This action places significant responsibility on large technology firms to embed identifiers in media produced by generative models, identifiers meant to help fight misinformation and strengthen digital trust.
The European Union, in its Artificial Intelligence Act of 2024, requires AI-generated content to be labelled. Article 50 of the Act specifically demands that developers indicate whenever users engage with synthetic content. In addition, the EU is a proponent of the Coalition for Content Provenance and Authenticity (C2PA), an organisation that produces secure metadata standards to track the origin and changes of digital content.
India is currently in the process of developing policy frameworks to address AI and synthetic content, guided by judicial decisions that are helping shape the approach. In 2024, the Delhi High Court directed the central government to appoint members for a committee responsible for regulating deepfakes. Such moves indicate the government's willingness to regulate AI-generated content.
China has already implemented mandatory watermarking of all deep-synthesis content. Service providers must embed digital identifiers in AI media, making China one of the first countries to adopt strict watermarking legislation.
Understanding the Technical Feasibility
Watermarking AI media means inserting recognisable markers into digital material. They can be perceptible, such as logos or overlays, or imperceptible, such as cryptographic tags or metadata. Sophisticated methods such as Google's SynthID apply imperceptible pixel-level changes that survive standard image manipulations such as resizing or compression. Likewise, C2PA metadata standards enable users to trace the source and provenance of a piece of content.
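To make the idea concrete, here is a minimal sketch of the simplest imperceptible-watermark technique, least-significant-bit (LSB) embedding, written in Python with NumPy and Pillow. This illustrates the general concept only: production systems such as SynthID use far more robust, learned pixel-level perturbations, and the file names below are hypothetical.

```python
# Minimal sketch: least-significant-bit (LSB) watermark embedding.
# Illustrative only; real systems (e.g. SynthID) use far more robust methods.
import numpy as np
from PIL import Image

def embed_watermark(image_path: str, message: str) -> Image.Image:
    """Hide a short ASCII message in the LSBs of an image's pixels."""
    pixels = np.array(Image.open(image_path).convert("RGB"))
    bits = np.unpackbits(np.frombuffer(message.encode("ascii"), dtype=np.uint8))
    flat = pixels.flatten()
    if bits.size > flat.size:
        raise ValueError("Message too long for this image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    return Image.fromarray(flat.reshape(pixels.shape))

def extract_watermark(image: Image.Image, length: int) -> str:
    """Read back `length` ASCII characters from the pixel LSBs."""
    flat = np.array(image.convert("RGB")).flatten()
    bits = flat[: length * 8] & 1
    return np.packbits(bits).tobytes().decode("ascii")

# Hypothetical usage:
marked = embed_watermark("generated.png", "AI-GENERATED:model-x")
marked.save("generated_marked.png")  # PNG is lossless, so the LSBs survive
print(extract_watermark(marked, len("AI-GENERATED:model-x")))
```

Because an LSB mark lives in the lowest bits of each pixel, a single lossy re-encode (saving as JPEG, for instance) erases it entirely, which is exactly the fragility discussed next.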
Nonetheless, watermarking is not infallible. Most watermarking methods are susceptible to tampering: adversaries with expertise can use cropping, editing, or AI software to delete visible watermarks or strip metadata. Further, the lack of interoperability between different watermarking systems and platforms hampers their effectiveness. Scalability is also an issue: embedding and verifying watermarks for billions of pieces of online content requires enormous computational effort and consistent policy enforcement across platforms. Researchers are currently working on solutions such as blockchain-based content authentication and zero-knowledge watermarking, which preserve authenticity without sacrificing privacy. These emerging techniques show promise for overcoming the technical deficiencies and making watermarking more secure.
Challenges in Enforcement
Though agreement on watermarking is growing, enforcing such policies remains a major challenge. Jurisdictional constraints prevent global enforceability: a watermarking policy in one nation might not extend to content created or stored in another, particularly across decentralised or anonymous domains. This creates an urgent need for international coordination and the development of worldwide digital trust standards. While it is a welcome step that platforms like Meta, YouTube, and TikTok have begun flagging AI-generated content, there remains a pressing need for a standardised policy that ensures consistency and accountability across all platforms. Voluntary compliance alone is insufficient without clear global mandates.
User literacy is also a significant hurdle. Even when content is properly watermarked, users might not notice or understand the label. This mirrors the broader challenge of misinformation: it is not enough to mark fake content; users must be taught to think critically about the information they consume. Public education campaigns, digital media literacy, and watermarking labels embedded in user-friendly UI elements are necessary for this technology to be truly effective.
Balancing Privacy and Transparency
While watermarking serves digital transparency, it also presents privacy issues. In certain instances, watermarking might require embedding metadata that discloses the source or identity of the content producer. This threatens journalists, whistleblowers, activists, and artists who use AI tools for creative or informative purposes. Governments have a responsibility to ensure that watermarking norms do not violate freedom of expression or facilitate surveillance. The solution is to strike a balance by employing privacy-protecting watermarking strategies that verify the origin of content without revealing personally identifiable data. "Zero-knowledge proofs" in cryptography may help build watermarking systems that guarantee authentication without undermining user anonymity.
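As a simplified illustration of the underlying idea (real zero-knowledge watermarking schemes are considerably more sophisticated), a cryptographic commitment lets a creator prove authorship later without embedding any identifying data in the watermark itself. The Python sketch below is purely hypothetical, including the creator ID:

```python
# Sketch: privacy-preserving attribution via a hash commitment.
# The watermark carries only the commitment, which reveals nothing about the
# creator; the creator can later prove authorship by revealing id + nonce.
# Illustrative only; true zero-knowledge schemes avoid even this reveal step.
import hashlib
import secrets

def make_commitment(creator_id: str) -> tuple[str, str]:
    """Return (commitment, nonce). Embed only the commitment in the watermark."""
    nonce = secrets.token_hex(16)
    commitment = hashlib.sha256(f"{creator_id}:{nonce}".encode()).hexdigest()
    return commitment, nonce

def verify_commitment(commitment: str, creator_id: str, nonce: str) -> bool:
    """Check a claimed identity against the embedded commitment."""
    return hashlib.sha256(f"{creator_id}:{nonce}".encode()).hexdigest() == commitment

commit, nonce = make_commitment("newsroom-231")  # hypothetical creator ID
# ... embed `commit` (never the identity) in the content's watermark ...
assert verify_commitment(commit, "newsroom-231", nonce)
```

The watermark itself stays identity-free; the creator alone decides if and when to open the commitment, which is the balance between provenance and anonymity described above.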
On the transparency side, watermarking can be an effective antidote to misinformation and manipulation. For example, during the COVID-19 crisis, AI-spread misinformation about vaccines, treatments, and public health interventions had a widespread impact on public behaviour and policy uptake. Watermarked content would have helped distinguish authentic sources from manipulated media, protecting public health efforts accordingly.
Best Practices and Emerging Solutions
Several programs and frameworks are at the forefront of watermarking norms. The collaborative C2PA framework from Adobe, Microsoft, and others embeds tamper-evident metadata into images and videos, enabling traceability of content origin. Google's SynthID, already deployed on its Imagen text-to-image model, invisibly watermarks AI-generated images in a way designed to withstand common manipulations. The Partnership on AI (PAI) is also taking a leadership role by building ethical standards for synthetic content, including standards around provenance and watermarking. These frameworks can serve as guides for governments seeking to introduce equitable, effective policies. In addition, India's emerging legal mechanisms on misinformation and deepfake regulation present a timely opportunity to integrate watermarking standards consistent with global practices while safeguarding civil liberties.
Conclusion
Watermarking regulations for synthetic media are an essential step toward a safer and more credible digital world. As synthetic media becomes increasingly indistinguishable from authentic content, the demand for transparency, provenance, and accountability grows. Governments, platforms, and civil society organisations will have to collaborate to deploy watermarking mechanisms that are technically feasible, enforceable, and privacy-friendly. India in particular is at a turning point, with courts calling for action and regulatory agencies starting to take on the challenge. By drawing on global lessons, applying best-in-class watermarking frameworks, and promoting public awareness, the nation can build resilience against digital deception.
References
- https://artificialintelligenceact.eu/
- https://www.cyberpeace.org/resources/blogs/delhi-high-court-directs-centre-to-nominate-members-for-deepfake-committee
- https://c2pa.org
- https://www.cyberpeace.org/resources/blogs/misinformations-impact-on-public-health-policy-decisions
- https://deepmind.google/technologies/synthid/
- https://www.imatag.com/blog/china-regulates-ai-generated-content-towards-a-new-global-standard-for-transparency