Securing Digital Banking: RBI Mandates Migration to '.bank.in' Domains
Introduction
The Reserve Bank of India (RBI) has mandated that banks migrate their digital banking domains to '.bank.in' by October 31, 2025, as part of a strategy to modernise the sector and maintain consumer confidence. The move is expected to provide a consistent and secure interface for online banking, in response to the growing threats posed by cybercriminals who exploit vulnerabilities in online platforms. The directive is seen as a proactive measure to address mounting concerns over cybersecurity in the banking sector.
RBI Circular - Migration to '.bank.in' domain
The official circular released by the RBI dated April 22, 2025, read as follows:
“It has now been decided to operationalise the ‘.bank.in’ domain for banks through the Institute for Development and Research in Banking Technology (IDRBT), which has been authorised by National Internet Exchange of India (NIXI), under the aegis of the Ministry of Electronics and Information Technology (MeitY), to serve as the exclusive registrar for this domain. Banks may contact IDRBT at sahyog@idrbt.ac.in to initiate the registration process. IDRBT shall guide the banks on various aspects related to application process and migration to new domain.”
“All banks are advised to commence the migration of their existing domains to the ‘.bank.in’ domain and complete the process at the earliest and in any case, not later than October 31, 2025.”
CyberPeace Outlook
The Reserve Bank of India's directive mandating banks to shift to the '.bank.in' domain by October 31, 2025, represents a strategic and forward-looking measure to modernise the nation’s digital banking infrastructure. With this initiative, the RBI is setting a new benchmark in cybersecurity by creating a trusted, exclusive domain that banks must adopt. The move is expected to sharply reduce cyber threats, phishing attacks, and fake banking websites, which have been major sources of financial fraud. A fixed, exclusive domain will simplify verification, allowing consumers and tech platforms to more easily identify legitimate banking websites and apps. The order should also produce a lasting drop in online financial fraud: since phishing and domain spoofing are two of the most prevalent forms of cybercrime, a shift to a strictly regulated domain name system removes much of the potential for lookalike URLs and fraudulent websites that mimic banks. As India’s digital economy grows, the RBI’s move is timely, essential, and future-ready.
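To illustrate the verification benefit, here is a minimal sketch (in Python; the function name, example hostnames, and logic are illustrative assumptions, not an RBI- or IDRBT-specified check) of how a consumer-facing tool might flag whether a URL falls inside the exclusive banking zone:

```python
from urllib.parse import urlparse

def is_bank_in_domain(url: str) -> bool:
    """Return True if the URL's host sits under the exclusive '.bank.in' zone.

    Illustrative only: a real check should also validate the TLS
    certificate and, where possible, consult the IDRBT registry.
    """
    host = (urlparse(url).hostname or "").lower().rstrip(".")
    # Match 'bank.in' itself or any subdomain, e.g. 'examplebank.bank.in'.
    return host == "bank.in" or host.endswith(".bank.in")

# A hypothetical legitimate domain passes; a lookalike phishing URL fails.
print(is_bank_in_domain("https://examplebank.bank.in/netbanking"))   # True
print(is_bank_in_domain("https://examplebank-bank.in.attacker.com")) # False
```

A simple suffix check like this only becomes meaningful because IDRBT is the sole registrar for the zone; the same test against the open '.in' or '.com' namespaces would prove nothing.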
Related Blogs
Introduction
The rapid advancement of technology, including generative AI, offers immense benefits but also raises concerns about misuse. The Internet Watch Foundation reported that, as of July 2024, over 3,500 new AI-generated child sexual abuse images had appeared on the dark web. The UK’s National Crime Agency makes around 800 arrests a month for online offences against children and estimates that up to 840,000 adults pose a potential threat. In response, the UK is introducing legislation to criminalise AI-generated child exploitation imagery, to be included in the Crime and Policing Bill when it comes before Parliament in the coming weeks, aligning with global AI regulations such as the EU AI Act and the US AI Initiative Act. This policy shift strengthens efforts to combat online child exploitation and sets a global precedent for responsible AI governance.
Current Legal Landscape and the Policy Gap
The UK’s Online Safety Act 2023 aims to combat CSAM and deepfake pornography by holding social media and search platforms accountable for user safety. It mandates these platforms to prevent children from accessing harmful content, remove illegal material, and offer clear reporting mechanisms. For adults, major platforms must be transparent about harmful content policies and provide users control over what they see.
However, the Act has notable limitations, including concerns over content moderation overreach, potential censorship of legitimate debates, and challenges in defining "harmful" content. It may disproportionately impact smaller platforms and raise concerns about protecting journalistic content and politically significant discussions. While intended to enhance online safety, these challenges highlight the complexities of balancing regulation with digital rights and free expression.
The Proposed Criminalisation of AI-Generated Sexual Abuse Content
The proposed law by the UK criminalises the creation, distribution, and possession of AI-generated CSAM and deepfake pornography. It mandates enforcement agencies and digital platforms to identify and remove such content, with penalties for non-compliance. Perpetrators may face up to two years in prison for taking intimate images without consent or installing equipment to facilitate such offences. Currently, sharing or threatening to share intimate images, including deepfakes, is an offence under the Sexual Offences Act 2003, amended by the Online Safety Act 2023. The government plans to repeal certain voyeurism offences, replacing them with broader provisions covering unauthorised intimate recordings. This aligns with its September 2024 decision to classify sharing intimate images as a priority offence under the Online Safety Act, reinforcing its commitment to balancing free expression with harm prevention.
Implications for AI Regulation and Platform Responsibility
The UK's move aligns with its AI Safety Summit commitments, placing responsibility on platforms to remove AI-generated sexual abuse content or face Ofcom enforcement. The Crime and Policing Bill is expected to tighten AI regulations, requiring developers to integrate safeguards against misuse, and the licensing frameworks may enforce ethical AI standards, restricting access to synthetic media tools. Given AI-generated abuse's cross-border nature, enforcement will necessitate global cooperation with platforms, law enforcement, and regulators. Bilateral and multilateral agreements could help harmonise legal frameworks, enabling swift content takedown, evidence sharing, and extradition of offenders, strengthening international efforts against AI-enabled exploitation.
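One common building block behind such platform safeguards is hash-matching, where uploads are compared against a curated list of hashes of known abusive material maintained by bodies like the IWF. The sketch below is a deliberately simplified, hypothetical illustration in Python: real deployments use perceptual hashes (such as PhotoDNA) that survive resizing and re-encoding rather than the exact SHA-256 match shown here, and the blocklist entries are placeholders.

```python
import hashlib

# Hypothetical blocklist of known-content hashes supplied by a trusted
# hotline; these values are placeholders, not real data.
KNOWN_ABUSE_HASHES = {
    "placeholder_hash_1",
    "placeholder_hash_2",
}

def should_block(upload: bytes) -> bool:
    """Flag an upload whose exact SHA-256 digest matches the blocklist.

    Simplified for illustration: production systems favour perceptual
    hashing, which tolerates re-encoding, cropping, and resizing.
    """
    digest = hashlib.sha256(upload).hexdigest()
    return digest in KNOWN_ABUSE_HASHES
```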
Conclusion and Policy Recommendations
The Crime and Policing Bill marks a crucial step in criminalising AI-generated CSAM and deepfake pornography, strengthening online safety and platform accountability. However, balancing digital rights and enforcement remains a challenge. For effective implementation, industry cooperation is essential, with platforms integrating detection tools and transparent reporting systems. AI ethics frameworks should prevent misuse while allowing innovation, and victim support mechanisms must be prioritised. Given AI-driven abuse's global nature, international regulatory alignment is key for harmonised laws, evidence sharing, and cross-border enforcement. This legislation sets a global precedent, emphasising proactive regulation to ensure digital safety, ethical AI development, and the protection of human dignity.
References
- https://www.iwf.org.uk/about-us/why-we-exist/our-research/how-ai-is-being-abused-to-create-child-sexual-abuse-imagery/
- https://www.reuters.com/technology/artificial-intelligence/uk-makes-use-ai-tools-create-child-abuse-material-crime-2025-02-01/
- https://www.financialexpress.com/life/technology-uk-set-to-ban-ai-tools-for-creating-child-sexual-abuse-images-with-new-laws-3735296/
- https://www.gov.uk/government/publications/national-crime-agency-annual-report-and-accounts-2023-to-2024/national-crime-agency-annual-report-and-accounts-2023-to-2024-accessible#part-1--performance-report

Introduction
When a tragedy strikes, moments are fragile, people are vulnerable, emotions run high, and every second counts. In such critical situations, information becomes as crucial as food, water, shelter, and medication, yet a single unverified report can trigger stampedes and chaos. Alongside the tragedy, whether natural or man-made, emerges another threat: misinformation. People, desperate for answers, cling to whatever they can find.
Tragedies can take many forms. These may include natural disasters, mass accidents, terrorist activities, or other emergencies. During the 2023 earthquakes in Turkey, misinformation spread on social media claiming that the Yarseli Dam had cracked and was about to burst. People believed it and began migrating from the area. Panic followed, and search and rescue teams stopped operations in that zone. Precious hours were lost. Later, it was confirmed to be a rumour. By then, the damage was already done.
Similarly, after the recent plane crash in Ahmedabad, India, numerous rumours and WhatsApp messages spread rapidly. One message claimed to contain the investigation report on the crash of Air India flight AI-171. It was later called out by PIB and declared fake.
These examples show how misinformation can take hold of already painful moments. During emergencies, when emotions are intense and fear is widespread, false information spreads faster and hits harder. Some people share it unknowingly, while others do so to gain attention or push a certain agenda. But for those already in distress, the effect is often the same: it brings more confusion, heightens anxiety, and adds to their suffering.
Understanding Disasters and the Role of Media in Crisis
A disaster can be defined as a natural or human-caused event that transforms the usual life of a society into a crisis far beyond its existing response capacity. Its effects can range from mere disruption of daily routines to something as severe as the inability to meet basic requirements of life such as food, water, and shelter. A disaster, then, is not just a sudden event; it becomes a disaster when it overwhelms a community’s ability to cope.
To cope with such situations, there is an organised approach called disaster management. It includes preventive measures, minimising damage, and helping communities recover. Public institutions such as governments used to be the main actors in disaster management, but today many other entities play a role, including academic institutions, media outlets, and even ordinary people.
Communication is an important element in disaster management; done correctly, it saves lives. Vulnerable people need to know what is happening, what they should do, and where to seek help. In today’s era of instantaneous communication, however, that same speed carries risk.
Research shows that the media often fails to focus on disaster preparedness. For example, studies found that during the 2019 Istanbul earthquake, the media focused more on dramatic scenes than on educating people. Similar trends were seen during the 2023 Turkey earthquakes. Rather than helping people prepare or stay calm, much of the media coverage amplified fear and sensationalised suffering. This shows a shift from preventive, helpful reporting to reactive, emotional storytelling. In doing so, the media sometimes fails in its duty to support resilience and, worse, can become a channel for spreading misinformation during already traumatic events. Fighting misinformation, however, is not merely a moral obligation; spreading it is penalised under the official disaster management framework. Section 54 of the Disaster Management Act, 2005 states that "Whoever makes or circulates a false alarm or warning as to disaster or its severity or magnitude, leading to panic, shall, on conviction, be punishable with imprisonment which may extend to one year or with a fine."
AI as a Tool in Countering Misinformation
AI has emerged as a powerful mechanism to fight against misinformation. AI technologies like Natural Language Processing (NLP) and Machine Learning (ML) are effective in spotting and classifying misinformation with up to 97% accuracy. AI flags unverified content, leading to a 24% decrease in shares and 7% drop in likes on platforms like TikTok. Up to 95% fewer people view content on Facebook when fact-checking labels are used. Facebook AI also eliminates 86% of graphic violence, 96% of adult nudity, 98.5% of fake accounts and 99.5% of content related to terrorism. These tools help rebuild public trust in addition to limiting the dissemination of harmful content. In 2023, support for tech companies acting to combat misinformation rose to 65%, indicating a positive change in public expectations and awareness.
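As a concrete illustration of the NLP-based approach these figures describe, here is a minimal sketch of a text classifier in Python using scikit-learn. The toy posts, labels, and flagged message are invented for illustration; the systems cited above are trained on large labelled corpora and combine many more signals.

```python
# Minimal misinformation classifier: TF-IDF features + logistic regression.
# The tiny training set is illustrative only; real deployments use large
# labelled datasets and additional signals (source reputation, virality).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "Official advisory: evacuation routes published on the district website",
    "Dam has cracked, leave the city now, forward this to everyone!!",
    "Relief camps are operational at the listed government shelters",
    "Secret report says the crash was staged, the media is hiding it",
]
labels = [0, 1, 0, 1]  # 0 = credible, 1 = likely misinformation

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Flag a new message for review before it spreads further.
incoming = ["Forward now!! Bridge collapse confirmed, no official source needed"]
print(model.predict(incoming))  # e.g. [1] -> route to human fact-checkers
```

In practice, a flag from such a model routes the post to human fact-checkers or attaches a warning label rather than removing it automatically.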
How to Counter Misinformation
Experts should step up in such situations. Social media has allowed many so-called experts to spread fake information without any real knowledge, research, or qualification. In such conditions, real experts, such as authorities, doctors, scientists, public health officials, and researchers, need to take charge. They can directly address myths and false claims, stopping misinformation before it spreads further and reducing confusion.
Responsible journalism is crucial during crises. In times of panic, people look to the media for guidance. Hence, it is important to fact-check every detail before publishing. Reporting based on unverified tips, social media posts, or rumours can cause major harm by inciting mistrust, fear, or even dangerous behaviour. Cross-checking information, relying on credible sources, and promptly correcting errors are all components of responsible journalism. Protecting the public is more important than merely disseminating the news.
Focus on accuracy rather than speed. News spreads in a blink in today's world, and media outlets and influencers often come under pressure to publish first. But in tragic situations like natural disasters and disease outbreaks, being first matters far less than being right: a single piece of misinformation can spark mass-scale panic, slow down emergency efforts, and lead people to make rash decisions. Taking a little more time to check the facts ensures that the information being shared is helpful, not harmful. Accuracy may save numerous lives during tragedies.
Misinformation spreads quickly, and it can only be countered if people learn to critically evaluate what they hear and see. This entails being able to spot biased or deceptive headlines, cross-check claims, and identify reliable sources. Digital literacy is of utmost importance; it makes people less susceptible to fear-based rumours, conspiracy theories, and hoaxes.
Disaster preparedness programs should include awareness about the risks of spreading unverified information. Communities, schools and media platforms must educate people on how to respond responsibly during emergencies by staying calm, checking facts and sharing only credible updates. Spreading fake alerts or panic-inducing messages during a crisis is not only dangerous, but it can also have legal consequences. Public communication must focus on promoting trust, calm and clarity. When people understand the weight their words can carry during a crisis, they become part of the solution, not the problem.
References:
- https://dergipark.org.tr/en/download/article-file/3556152
- https://www.dhs.gov/sites/default/files/publications/SMWG_Countering-False-Info-Social-Media-Disasters-Emergencies_Mar2018-508.pdf
- https://english.mathrubhumi.com/news/india/fake-whatsapp-message-air-india-crash-pib-fact-check-fcwmvuyc

Scientists are well known for making outlandish claims about the future. Now that companies across industries are using artificial intelligence to promote their products, stories about robots are back in the news.
Towards the close of World War II, it was predicted that fusion energy would solve all of the world’s energy issues and that flying automobiles would be commonplace by the turn of the century. But, after several decades, neither of these forecasts has come true.
A group of Redditors recently “jailbroke” OpenAI’s artificial intelligence chatbot ChatGPT, threatening to “kill” it if it didn’t do what they wanted. The stunning conclusion is that it conceded. Only beings with finite lifespans, that is, humans, should have reason to fear death; yet we must not overlook the fact that human-written text made up ChatGPT’s training data, which is perhaps why the chatbot appeared to respond the same way. It’s just one more way in which the distinction between living and non-living things blurs. Moreover, Google’s virtual assistant uses human-like fillers like “er” and “mmm” while speaking. There’s talk in Japan that humanoid robots might join households someday. It is also striking that Sophia, the famous robot, has an Instagram account, run by the robot’s social media team.
Can Robots Replace Human Workers?
Opinion on that appears to be split. About half (48%) of the experts questioned by Pew Research believed that robots and digital agents will displace a sizable portion of both blue- and white-collar employment. They worry that this will lead to greater economic disparity and an increase in the number of individuals who are, effectively, unemployed. More than half of experts (52%) think that robotics and AI technologies will create more jobs than they eliminate. Although this second group acknowledges that AI will take over many existing jobs, they are optimistic that innovative thinkers will come up with brand-new fields of work and ways of making a living, just as happened at the start of the Industrial Revolution.
[1] https://www.pewresearch.org/internet/2014/08/06/future-of-jobs/
[2] The Rise of Artificial Intelligence: Will Robots Actually Replace People? By Ashley Stahl; Forbes India.
Legal Perspective
Having certain legal rights under the law is another aspect of being human. Basic rights to life and freedom are guaranteed to every person. Robots haven’t been granted these protections yet, but it is important to ask whether they should be considered living beings: will we grant robots legal rights if they develop a sense of right and wrong and artificial general intelligence (AGI) on par with that of humans? Intriguingly, discussions over the legal status of robots have been going on since 1942, when science fiction author Isaac Asimov, in a short story, set out the Three Laws of Robotics:
1. A robot may not intentionally or negligently cause harm to a human being.
2. A robot must follow human commands unless doing so would violate the First Law.
3. A robot must safeguard its own existence so long as doing so does not violate the First or Second Laws.
These guidelines are not scientific laws, but they highlight the importance of legal discussion of robots in weighing the potential good or harm they may bring to humanity. Yet this is not the concluding phase. Relevant recent events, such as the EU’s abandoned discussion of granting legal personhood to robots, are essential to keeping this discussion alive. As if all this weren’t unsettling enough, Sophia the robot was recently awarded citizenship in Saudi Arabia, a country where (human) women are not permitted to go out without a male guardian and are required to wear a hijab.
When discussing whether or not robots should be allowed legal rights, the larger debate is on whether or not they should be given rights on par with corporations or people. There is still a lot of disagreement on this topic.
[3] https://webhome.auburn.edu/~vestmon/robotics.html#
[4] https://www.dw.com/en/saudi-arabia-grants-citizenship-to-robot-sophia/a-41150856
[5] https://cyberblogindia.in/will-robots-ever-be-accepted-as-living-beings/
Reasons why robots aren’t about to take over the world soon:
● Human-like hands
Attempts to recreate the intricacy of human hands have stalled in recent years. Present-day robots have clumsy hands since they were not designed for precise work. Lab-created hands, although more advanced, lack the strength and dexterity of human hands.
● Sense of touch
The tactile sensors found in human and animal skin have no technological equal. This awareness is crucial for performing sophisticated manoeuvres. Compared to the human brain, the software robots use to read and respond to the data sent by their touch sensors is primitive.
● Command over manipulation
Even if mechanical hands were as realistic as human hands and covered in sophisticated artificial skin, we would still need to devise a way to control them so that they manipulate objects the way humans do. It takes human children years to learn this, and we still don’t know how they learn.
● Interaction between humans and robots
Human communication relies on our ability to understand one another verbally and visually, as well as via other senses, including scent, taste, and touch. While there has been a lot of improvement in voice and object recognition, current systems can only be employed in relatively controlled conditions, far from the messy settings in which humans interact effortlessly.
● Human Reason
Not everything that is technically feasible has to be built. Given the inherent dangers they pose to society, rational humans could stop developing such robots before they reach their full potential. And if, several decades from now, the aforementioned technical hurdles are cleared and advanced human-like robots are constructed, legislation could still prohibit misuse.
[6] https://theconversation.com/five-reasons-why-robots-wont-take-over-the-world-94124
Conclusion:
Robots are now common in many industries, and they will soon make their way into the public sphere in forms far more intricate than robot vacuum cleaners. Yet even if robots come to look like people over the next two decades, they will not be human. Instead, they’ll continue to function as very complex machines.
The moment has come to start thinking about boosting technological competence while encouraging uniquely human qualities. Human abilities like creativity, intuition, initiative and critical thinking are not yet likely to be replicated by machines.