Viral video attributed to the Australian Prime Minister is AI-generated; claim of cancelling Pakistani visas is false
A video is being shared on social media that is falsely attributed to Australian Prime Minister Anthony Albanese. In it, he appears to announce that, following the Bondi Beach attack, he has decided to cancel the visas of Pakistani citizens.
An investigation by the Cyber Peace Foundation revealed that the viral video was created using AI. In the original video, Anthony Albanese was answering questions about the Climate Change Bill during a press conference. It is important to note that 15 people were killed in the attack that took place last Sunday (14 December) at Bondi Beach in Sydney, New South Wales, Australia. According to Australian police, the attack targeted the Jewish community. New South Wales Police Commissioner Mal Lanyon stated that the two accused were a father and son, aged 50 and 24. Media reports identified them as Sajid and Naved Akram.
Claim:
On 14 December 2025, a user on the social media platform X shared a video claiming, “After the attack by a Pakistani Islamic terrorist, the Australian Prime Minister has decided to cancel the visas of all Pakistanis. The whole world is troubled by this community, and in India it is said that Abdul cannot buy a house in a Hindu neighbourhood.”

Investigation:
Upon closely examining the viral video, we suspected it to be AI-generated. We then scanned the video using the AI detection tool aurigin.ai, whose results indicated that the video was AI-generated.
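For readers who want to script similar checks, below is a minimal sketch of submitting a video to a detection service over HTTP. aurigin.ai's actual API is not documented in this article, so the endpoint, field names and response format shown here are hypothetical placeholders, not the tool's real interface.

```python
import requests

# Hypothetical endpoint: aurigin.ai's real API is not described in the
# article, so this URL and the field names below are placeholders only.
DETECTION_ENDPOINT = "https://api.example-detector.ai/v1/scan"

def scan_video(path: str, api_key: str) -> dict:
    """Upload a video file to a (hypothetical) AI-detection service and
    return its verdict as parsed JSON."""
    with open(path, "rb") as f:
        response = requests.post(
            DETECTION_ENDPOINT,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"video": f},  # multipart upload of the clip under review
            timeout=120,
        )
    response.raise_for_status()
    return response.json()

# Hypothetical output shape: {"ai_generated": true, "confidence": 0.97}
```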

Brief Overview of the EU AI Act
The EU AI Act, Regulation (EU) 2024/1689, was officially published in the EU Official Journal on 12 July 2024. This landmark legislation on Artificial Intelligence (AI) came into force 20 days after publication, on 1 August 2024, setting harmonized rules across all 27 EU Member States and amending key regulations and directives to ensure a robust framework for AI technologies. The Act, which had been in development for two years, takes a phased approach to implementation: different legal provisions start to apply at various deadlines between now and 2 August 2026, when enforcement of the majority of its provisions commences. The law prohibits certain uses of AI tools that threaten citizens' rights, such as biometric categorization and untargeted scraping of faces; systems that try to read emotions are banned in the workplace and schools, as are social scoring systems. It also prohibits the use of predictive policing tools in some instances.
The framework puts different obligations on AI developers depending on use cases and perceived risk. The bulk of AI uses will not be regulated, as they are considered low-risk, but a small number of potential use cases are banned under the law. High-risk use cases, such as biometric uses of AI or AI used in law enforcement, employment, education, and critical infrastructure, are allowed, but developers of such applications face obligations in areas like data quality and anti-bias considerations. A third risk tier applies lighter transparency requirements to makers of tools like AI chatbots.
Companies providing, distributing, importing, or using AI systems and GPAI models in the EU that fail to comply with the Act are subject to fines of up to EUR 35 million or seven per cent of total worldwide annual turnover, whichever is higher.
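To make the "whichever is higher" cap concrete, here is a minimal sketch of the calculation; the figures follow the maximum stated above, while the function name and the example turnover are illustrative only.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of a fine for the most serious violations:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    FLAT_CAP_EUR = 35_000_000
    TURNOVER_SHARE = 0.07
    return max(FLAT_CAP_EUR, TURNOVER_SHARE * worldwide_annual_turnover_eur)

# A company with EUR 2 billion turnover faces a cap of EUR 140 million,
# since 7% of turnover exceeds the EUR 35 million floor.
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```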
Key highlights of EU AI Act Provisions
- The AI Act classifies AI according to its risk. It prohibits unacceptable-risk uses such as social scoring systems and manipulative AI, while the bulk of the regulation addresses high-risk AI systems (a simple illustration of this tiering appears in the sketch after this list).
- Limited-risk AI systems are subject to lighter transparency obligations: developers and deployers must ensure that end-users are aware they are interacting with AI, as with chatbots and deepfakes. The AI Act allows the free use of minimal-risk AI, which covers the majority of AI applications currently available in the EU single market, such as AI-enabled video games and spam filters, though this may change as generative AI advances.
- The majority of obligations fall on providers (developers) of high-risk AI systems that intend to place such systems on the market or put them into service in the EU, regardless of whether they are based in the EU or a third country. The same applies to third-country providers where the high-risk AI system's output is used in the EU.
- Users (deployers) are natural or legal persons who deploy an AI system in a professional capacity, not affected end-users. Deployers of high-risk AI systems have some obligations, though fewer than providers (developers). These apply to users located in the EU and to third-country users where the AI system's output is used in the EU.
- General-purpose AI (GPAI) model providers must supply technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. Providers of free and open-license GPAI models need only comply with copyright and publish the training-data summary, unless their models present a systemic risk. All providers of GPAI models that present a systemic risk, open or closed, must also conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections.
- The Codes of Practice will account for international approaches. They will cover, but not necessarily be limited to, the obligations above: in particular, the relevant information to include in technical documentation for authorities and downstream providers, the identification of the type and nature of systemic risks and their sources, and the modalities of risk management, accounting for the specific challenges of addressing risks as they emerge and materialize throughout the value chain. The AI Office may invite GPAI model providers and relevant national competent authorities to participate in drawing up the codes, while civil society, industry, academia, downstream providers and independent experts may support the process.
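To make the four-tier structure above concrete, here is a minimal sketch of how an organization might label its own AI use cases against the Act's tiers. The tier names follow the Act, but the example use cases and the mapping are illustrative assumptions, not a legal test.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "allowed, subject to strict obligations (data quality, anti-bias)"
    LIMITED = "allowed, subject to transparency duties (disclose AI use)"
    MINIMAL = "free use"

# Illustrative mapping based on categories named in the Act; an actual
# classification is a legal assessment, not a lookup table.
EXAMPLE_USE_CASES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "emotion recognition in schools": RiskTier.UNACCEPTABLE,
    "AI screening of job applicants": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```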
Application & Timeline of the Act
The EU AI Act will be fully applicable 24 months after entry into force, but some parts apply sooner: the ban on AI systems posing unacceptable risks applies six months after entry into force, the Codes of Practice nine months after, and the rules on general-purpose AI systems that must comply with transparency requirements 12 months after. High-risk systems have more time to comply, as the obligations concerning them become applicable 36 months after entry into force. The expected timeline is:
- August 1st, 2024: The AI Act will enter into force.
- February 2025: Chapters I (general provisions) & II (prohibited AI systems) will apply; prohibition of certain AI systems takes effect.
- August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Article 78 (confidentiality) will apply, except for Article 101 (fines for general-purpose AI providers); requirements for new GPAI models take effect.
- August 2026: The whole AI Act applies, except for Article 6(1) & corresponding obligations (one of the categories of high-risk AI systems).
- August 2027: Article 6(1) & corresponding obligations apply.
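Since every milestone is a fixed offset from the entry-into-force date, the list above can be reproduced with plain calendar arithmetic, as in the following sketch (a rough illustration; the Act's own text governs the exact dates).

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

def add_months(d: date, months: int) -> date:
    """Advance a date by whole calendar months (day clamped to the 1st)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, 1)

# Offsets in months from entry into force, per the Act's phased approach.
MILESTONES = {
    "Prohibitions on unacceptable-risk AI": 6,
    "Codes of Practice": 9,
    "GPAI transparency rules": 12,
    "Act fully applicable (most provisions)": 24,
    "Article 6(1) high-risk obligations": 36,
}

for label, offset in MILESTONES.items():
    print(f"{add_months(ENTRY_INTO_FORCE, offset):%B %Y}: {label}")
# February 2025, May 2025, August 2025, August 2026, August 2027
```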
The AI Act sets out clear definitions for the different actors involved in AI, such as the providers, deployers, importers, distributors, and product manufacturers. This means all parties involved in the development, usage, import, distribution, or manufacturing of AI systems will be held accountable. Along with this, the AI Act also applies to providers and deployers of AI systems located outside of the EU, e.g., in Switzerland, if output produced by the system is intended to be used in the EU. The Act applies to any AI system within the EU that is on the market, in service, or in use, covering both AI providers (the companies selling AI systems) and AI deployers (the organizations using those systems).
In short, the AI Act will apply to different companies across the AI distribution chain, including providers, deployers, importers, and distributors (collectively referred to as “Operators”). The EU AI Act also has extraterritorial application: it can apply to companies not established in the EU, or to providers outside the EU, if they make an AI system or GPAI model available on the EU market. Even if only the output generated by the AI system is used in the EU, the Act still applies to such providers and deployers.
CyberPeace Outlook
The EU AI Act, approved by EU lawmakers in 2024, is landmark legislation designed to protect citizens' health, safety, and fundamental rights from potential harm caused by AI systems. It applies to AI systems and GPAI models, and creates a tiered risk categorization with various regulations and stiff penalties for noncompliance, adopting a risk-based approach that sorts potential risks into four tiers: unacceptable, high, limited, and low. Violations involving banned systems carry the highest fine: EUR 35 million, or 7 per cent of global annual revenue, whichever is higher. The Act establishes transparency requirements for general-purpose AI systems and lays down more stringent requirements for GPAI models with 'high-impact capabilities' that could pose a systemic risk and have a significant impact on the internal market. For high-risk AI systems, it addresses fundamental rights impact assessments and data protection impact assessments.
The EU AI Act aims to enhance trust in AI technologies by establishing clear regulatory standards governing AI. We encourage regulatory frameworks that strive to balance the desire to foster innovation with the critical need to prevent unethical practices that may cause user harm. The legislation strengthens the EU's position as a global leader in AI innovation and in developing regulatory frameworks for emerging technologies, and it sets a global benchmark for regulating AI. Companies to which the Act applies will need to ensure their practices align with it, and the Act may inspire other nations to develop their own legislation, contributing to global AI governance. The world of AI is complex and challenging; implementing regulatory checks and securing compliance from the companies concerned will be demanding. In the end, however, balancing innovation with ethical considerations is paramount.
At the same time, the tech sector welcomes regulatory progress but warns that overly rigid regulations could stifle innovation. Flexibility and adaptability are therefore key to effective AI governance. The journey towards robust AI regulation has begun in major countries, and it is important to find the right balance between safety and innovation while also taking industry reactions into consideration.
References:
- https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689
- https://www.theverge.com/2024/7/12/24197058/eu-ai-act-regulations-bans-deadline
- https://techcrunch.com/2024/07/12/eus-ai-act-gets-published-in-blocs-official-journal-starting-clock-on-legal-deadlines/
- https://www.wsgr.com/en/insights/eu-ai-act-to-enter-into-force-in-august.html
- https://www.techtarget.com/searchenterpriseai/tip/Is-your-business-ready-for-the-EU-AI-Act
- https://www.simmons-simmons.com/en/publications/clyimpowh000ouxgkw1oidakk/the-eu-ai-act-a-quick-guide

Social media has become far more than a tool of communication, engagement and entertainment. It shapes politics and community identity, and even sets agendas. When misused, the consequences can be grave: communal disharmony, riots, false rumours, harassment or worse. Emphasising the need for digital atmanirbharta (self-reliance), Prime Minister Narendra Modi recently urged India's youth to develop the country's own social media platforms, akin to Facebook, Instagram and X, to ensure that the nation's technological ecosystems remain secure and independent, reinforcing digital autonomy. This growing influence of platforms has sharpened the tussle between government regulation, the independence of social media companies, and the protection of freedom of expression in most countries.
Why Government Regulation Is Especially Needed
While self-regulation has its advantages, real-world harms show why state oversight cannot be optional:
- Incitement to violence and communal unrest: Misinformation and hate speech can inflame tensions. In Manipur (May 2023), false posts, including unverified sexual-violence claims, spread online, worsening clashes. Authorities shut down mobile internet on 3 May 2023 to curb “disinformation and false rumours,” showing how quickly harmful content can escalate and why enforceable moderation rules matter.
- Fake news and misinformation: False content about health, elections or individuals spreads far faster than corrections. During COVID-19, an “infodemic” of fake cures, conspiracy theories and religious discrimination went viral on WhatsApp and Facebook, including early false claims that the virus came from eating bats. The WHO warned of serious knock-on effects, and a Reuters Institute study found that although false claims from public figures were relatively few, they gained the highest engagement, showing why self-regulation alone often fails to stop misinformation.
Nepal’s Example:
Nepal provides a clear example of the tension between government regulation and platform self-regulation. In 2023, the government issued rules requiring all social media platforms, whether local or foreign, to register with the Ministry of Communication and Information Technology, appoint a local contact person, and comply with Nepali law. By 2025, major platforms such as Facebook, Instagram, and YouTube had not met the registration deadline, and in response the Nepal Telecommunications Authority began blocking unregistered platforms until they complied. Journalists, civil-rights groups and Gen Z critics argued the move could limit free speech and shut down a channel for exposing government corruption, while the government maintained it was necessary to stop harmful content and misinformation. The case shows that without enforceable obligations, self-regulation can leave platforms unaccountable, but enforcement must also be balanced with protecting free speech.
Self-Regulation: Strengths and Challenges
Most social-media companies prefer to self-regulate. They write community rules and trust & safety guidelines, give users ways to flag harmful posts, and lean on a mix of staff, outside boards and AI filters to handle content that crosses the line. The big advantage here is speed: when something dangerous appears, a platform can react within minutes, far quicker than a court or lawmaker. Because they know their systems inside out, from user habits to algorithmic quirks, they can adapt fast.
But there's a downside. These platforms thrive on engagement, and sensational or hateful posts often keep people scrolling longer. That means the very content that makes money can also be the content that most needs moderating: a built-in conflict of interest.
Government Regulation: Strengths and Risks
Public rules make platforms answerable. Laws can require illegal content to be removed, force transparency and protect user rights. They can also stop serious harms such as fake news that might spark violence, and they often feel more legitimate when made through open, democratic processes.
Yet regulation can lag behind technology. Vague or heavy-handed rules may be misused to silence critics or curb free speech. Global enforcement is messy, and compliance can be costly for smaller firms.
Practical Implications & Hybrid Governance
For users, regulation brings clearer rights and safer spaces, but it must be carefully drafted to protect legitimate speech. For platforms, self-regulation gives flexibility but less certainty; government rules provide a level playing field but add compliance costs. For governments, regulation helps protect public safety, reduce communal disharmony, and fight misinformation, but it requires transparency and safeguards to avoid misuse.
Hybrid Approach
A combined model of self-regulation plus government regulation is likely to be most effective. Laws should establish baseline obligations: registration, local grievance officers, timely removal of illegal content, and transparency reporting. Platforms should retain flexibility in how they implement these obligations and innovate with tools for user safety. Independent audits, civil society oversight, and simple user appeals can help keep both governments and platforms accountable.
Conclusion
Social media has great power. It can bring people together, but it can also spread false stories, deepen divides and even stir violence. Acting on their own, platforms can move fast and try new ideas, but that alone rarely stops harmful content. Good government rules can fill the gap by holding companies to account and protecting people’s rights.
The best way forward is to mix both approaches: clear laws, outside checks, open reporting, easy complaint systems and support for local platforms, so the digital space stays safer and more trustworthy.
References
- https://timesofindia.indiatimes.com/india/need-desi-social-media-platforms-to-secure-digital-sovereignty-pm/articleshow/123327780.cms#
- https://www.bbc.com/news/world-asia-india-66255989
- https://nepallawsunshine.com/social-media-registration-in-nepal/
- https://www.newsonair.gov.in/nepal-bans-26-unregistered-social-media-sites-including-facebook-whatsapp-instagram/
- https://hbr.org/2021/01/social-media-companies-should-self-regulate-now
- https://www.drishtiias.com/daily-updates/daily-news-analysis/social-media-regulation-in-india

Introduction
India envisions becoming Viksit Bharat (a developed nation) by 2047. With a net-zero emissions target of 2070, it has already reduced the emission intensity of its GDP by 36% between 2005 and 2020 and is working towards a 45% reduction by 2030. This will help the country achieve economic growth while minimizing environmental impact, ensuring sustainable development for the future. The 2025 Union Budget prioritises energy security, clean energy expansion, and green tech manufacturing. Furthermore, India's promotion of sustainability policies in startups, MSMEs, and clean tech shows its commitment to COP28 and the SDGs. Key policy developments for sustainability and energy efficiency include the Energy Conservation (Amendment) Act, 2022, the PAT scheme, the S&L scheme, and the Energy Conservation Building Code, driving decarbonization, energy efficiency, and a sustainable future.
India’s Policy and Regulatory Landscape
India's Energy Conservation (Amendment) Act, enacted in 2022, aims to enhance energy efficiency while ensuring economic growth, in support of the goal of reducing emission intensity by 2030. The Act tackles regulatory, financial, and awareness barriers to promote energy-saving technologies. The Perform, Achieve, and Trade (PAT) scheme improves cost-effective energy efficiency in energy-intensive industries through tradable energy-saving certificates, while the PLI scheme boosts green manufacturing by attracting both domestic and international investment. The Bureau of Energy Efficiency (BEE) enforces Minimum Energy Performance Standards (MEPS) and star ratings for appliances, guiding consumers toward energy-efficient choices. Together, these initiatives drive carbon reduction and sustainable energy use in India.
Growth of Energy-Efficient Technologies
India has been making massive strides in integrating renewable energy such as solar and wind, aided by improvements in storage technologies. Another key development is the real-time optimization of energy usage through smart grids and AI-driven energy management. The EV and green mobility boom has been powered by the rapid expansion of charging infrastructure and by policy interventions supporting the shift. Green building codes and IoT-driven energy management have improved building efficiency, and industrial energy optimization is being pursued through AI/ML-driven demand-side management in heavy industries.
Market Trends, Investment, and Industry Adoption
The World Energy Investment Report 2024 (IEA) projects global energy investment to surpass $3 trillion, with $2 trillion allocated to clean energy. India's clean energy investment reached $68 billion in 2023, a rise of more than 40% from 2016-2020 levels, with nearly 50% directed toward low-emission power, including solar PV. Investment is set to double by 2030, but a further 20% rise is needed to meet climate goals.
India’s ESG push is driven by Net Zero 2070, SEBI’s BRSR mandates, and UN SDGs, with rising scrutiny on corporate governance. ESG-aligned investments are expanding, reinforcing sustainability. Meanwhile, energy efficiency in manufacturing minimizes waste and environmental impact, while digital transformation in energy management boosts renewable integration, grid reliability, and cost efficiency, ensuring a sustainable energy transition.
The Way Forward
Multiple implementation bottlenecks confront the active policies, including infrastructure paucity, financing constraints and on-ground execution challenges. To combat these, India needs to promote public-private partnerships to scale energy-efficient solutions. Incentives for industries to adopt green technologies should be strengthened (tax exemptions and subsidies for specific periods), alongside increased R&D support and regulatory sandboxes to encourage adoption. Finally, industries, policymakers and consumers must work in tandem to accelerate efforts towards a sustainable and green future for India. Emerging technologies play an important role in bridging gaps and in moving India toward the adoption of global best practices.
References
- https://instituteofsustainabilitystudies.com/insights/lexicon/green-technologies-innovations-opportunities-challenges/
- https://powermin.gov.in/sites/default/files/The_Energy_Conservation_Amendment_Act_2022_0.pdf
- https://www.ibef.org/blogs/esg-investing-in-india-navigating-environmental-social-and-governance-factors-for-sustainable-growth