Centre Proposes New Bills for Criminal Law
Introduction
Criminal justice in India is primarily governed by three laws: the Indian Penal Code, the Code of Criminal Procedure, and the Indian Evidence Act. On Friday, 11 August 2023, the Centre introduced new bills in Parliament to replace these major criminal laws.
The following three bills are being proposed to replace major criminal laws in the country:
- The Bharatiya Nyaya Sanhita Bill, 2023, to replace the Indian Penal Code, 1860.
- The Bharatiya Nagrik Suraksha Sanhita Bill, 2023, to replace the Code of Criminal Procedure, 1973.
- The Bharatiya Sakshya Bill, 2023, to replace the Indian Evidence Act, 1872.
Cyber law-oriented view of the new shift in criminal law
Notable changes: the Bharatiya Nyaya Sanhita Bill, 2023, replacing the Indian Penal Code, 1860.
Way ahead for digitalisation
The new laws aim to expand the use of digital services in the court system: they provide for online registration of FIRs, online filing of charge sheets, service of summons in electronic mode, and trials and proceedings conducted electronically. The bills also allow witnesses, the accused, experts, and victims to appear virtually in certain instances. This shift will drive the adoption of technology in courts, with all courts expected to be computerised in the coming years.
Enhanced recognition of electronic records
As daily life increasingly moves into the digital sphere, the bills give electronic records the same legal recognition as paper records.
Conclusion
The criminal laws of a country play a significant role in establishing law and order and delivering justice. India's criminal laws date back to British rule, and although they have been amended several times to address growing crime and new circumstances, there remained a need for well-established criminal laws suited to the present era. The legislature's step of consolidating the criminal laws in new form through these three bills is a sound approach that will ultimately strengthen the criminal justice system in India and facilitate the use of technology in the court system.

Executive Summary:
A viral social media video falsely claims that Meta AI reads all WhatsApp group and individual chats by default, and that enabling “Advanced Chat Privacy” can stop this. A reverse image search led us to a WhatsApp blog post from April 2025, which confirms that all personal and group chats remain protected by end-to-end (E2E) encryption, accessible only to the sender and recipient. Meta AI can interact only with messages explicitly sent to it or tagged with @Meta AI. The “Advanced Chat Privacy” feature is designed to prevent external sharing of chats, not to restrict Meta AI access. The viral claim is therefore misleading and factually incorrect, and aims to create unnecessary fear among users.
Claim:
A viral social media video [archived link] alleges that Meta AI is actively accessing private conversations on WhatsApp, including both group and individual chats, due to the current default settings. The video further claims that users can safeguard their privacy by enabling the “Advanced Chat Privacy” feature, which purportedly prevents such access.

Fact Check:
A reverse image search on a keyframe of the viral video led us to a WhatsApp blog post from April 2025 that explains new privacy features to help users control their chats and data. It states that Meta AI can only see messages directly sent to it or tagged with @Meta AI. All personal and group chats are secured with end-to-end encryption, so only the sender and receiver can read them. The "Advanced Chat Privacy" setting helps stop chats from being shared outside WhatsApp, for example by blocking exports and auto-downloads, but it does not affect Meta AI, which is already blocked from reading chats. This shows the viral claim is false and intended to confuse people.


Conclusion:
The claim that Meta AI is reading WhatsApp group chats and that enabling the "Advanced Chat Privacy" setting can prevent this is false and misleading. WhatsApp has officially confirmed that Meta AI only accesses messages explicitly shared with it, and all chats remain protected by end-to-end encryption, ensuring privacy. The "Advanced Chat Privacy" setting does not relate to Meta AI access, which is already restricted by default.
- Claim: A viral social media video claims that WhatsApp group chats are being read by Meta AI due to current settings, and that enabling the "Advanced Chat Privacy" setting can prevent this.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
Taj Hotels Group is well known for its luxurious ambience and old-world grace and charm, blended with contemporary comforts and amenities for its guests. What has left netizens perplexed, however, is the recent data breach at the Tata-owned Taj Hotels. According to sources, the hotel group suffered a data breach that compromised the data of nearly 1.5 million customers, including addresses, membership IDs, mobile numbers and other personally identifiable information. The news raised concerns about privacy and the protection of individuals' personal data. We live in a space shaped by advanced technology and digital communication, which makes securing the personal information of individuals an ongoing challenge.
Unveiling the incident
The Tata-owned Taj Hotels group has suffered a data breach that compromised the information of over 1.5 million customers, according to a news report. A threat actor going by the name “Dnacookies” claimed the data set contains data from the 2014-2020 period and has not been disclosed anywhere until now. The personal data includes names, addresses, customer IDs, mobile numbers and other personally identifiable information. The incident highlights the risks and vulnerabilities that even large corporate giants may face in protecting and securing data. The actor behind the “Dnacookies” handle also demanded a ransom of about Rs 4.16 lakh from the Taj Hotels group. In response, a spokesperson for the hotel group said that the group had been made aware of someone claiming possession of a limited customer data set, which is non-sensitive in nature, that an investigation is underway, and that the relevant authorities have been notified.
A demand for ransom
The report from CNBC-TV18 states that the threat actor not only stole the data but also demanded around Rs 4.16 lakh as ransom for the database. The actor also set three conditions: first, negotiations must go through a middleman; second, the data cannot be split, so either the entire data set is taken at the demanded ransom or none at all; and third, no additional samples of the data will be provided. A spokesperson for Indian Hotels Company Limited further said the company had been alerted to someone claiming possession of a limited data set. The threat actor claimed that the database contains information from 2014 to 2020 that has been kept confidential until now, and went so far as to provide a sample containing one thousand rows of unique entries as proof. The incident underlines the growing threat in cyberspace and the urgency for individuals, organisations and other entities to prioritise data security measures and maintain cyber resilience.
Personal Data at Stake
Such data comprises the personal information of individuals, including their personal tastes and preferences, which can be exploited. The greatest danger the hotel and its customers face from such a breach is not only the volume of data compromised but also the ways cybercriminals could misuse it against them. Knowing the sensitivity of the data, criminals can put forward any demand, leaving the affected entities in a dilemma: accept the ransom demand or stand against it. Since the stakes are high, either path can have an adverse impact on the security of personal data. Organisations and entities holding personal data must ensure that the data in their custody is well protected and secured.
While the organisation has to sail through the aftermath of this breach, such incidents also challenge it to maintain trust and reputation, since they call its cybersecurity posture into question. It is advisable to be transparent with stakeholders, open about the vulnerabilities and the steps taken to address them, and clear about the additional safeguards put in place to protect customers' personal data. Since Taj is well known for its exceptional luxury and the comfort it provides to customers, it should take the lead in reinforcing its digital infrastructure to ensure the security of data.
Digital Personal Data Protection Act, 2023
The newly enacted Digital Personal Data Protection Act, 2023 places certain obligations on data fiduciaries to take reasonable measures to maintain the security of personal data, and requires them to report data breaches to the Data Protection Board constituted under the Act. The Act aims to protect individuals' digital personal data and casts obligations on both data principals and data fiduciaries. It provides for penalties of up to Rs 250 crore in case of a data breach, mandates consent-based data collection, and establishes the Data Protection Board to ensure compliance with its provisions and address grievances.
Conclusion
A data breach at such a market giant is an alarming reminder to be more cautious and to proactively take precautionary measures to protect data and comply with data protection laws and regulations. We live in an era where digital security is as important as an individual's basic fundamental rights. Taj Hotels Group has actively taken steps to handle the aftermath of the breach by reporting the incident to law enforcement agencies and taking the necessary measures. It is also up to us to be more aware and vigilant about our personal data. Entities need to ensure compliance and adopt measures to protect personal data, and overall ensure a truly cyber-safe digital environment.

Introduction
Artificial Intelligence (AI) has transcended its role as a futuristic tool; it is already an integral part of decision-making in various sectors worldwide, including governance, medicine, education, security, and the economy. There are concerns about the nature of AI, its advantages and disadvantages, and the risks it may pose to the world, as well as doubts about the technology's capacity to provide effective solutions, especially as threats such as misinformation, cybercrime, and deepfakes become more common.
Recently, global leaders have reiterated that the use of AI should continue to be human-centric, transparent, and governed responsibly. The issue of offering unbridled access to innovators, while also preventing harm, is a dilemma that must be resolved.
AI as a Global Public Good
In earlier times, only the most influential states and large corporations controlled the supply and use of advanced technologies, guarding them as national strategic assets. In contrast, AI has emerged as a digital innovation that exists and evolves within a deeply interconnected environment, which makes access far more distributed than before. The use of AI in one country brings its pros and cons not only to that place but to the rest of the world as well. For instance, deepfake scams and biased algorithms affect not only people in the country where they originate but also people in every other country who do business or communicate with them.
The Growing Threat of AI Misuse
- Deepfakes, Crime, and Digital Terrorism
The misuse of artificial intelligence is quickly becoming one of the main security problems. Deepfake technology is being used to spread electoral misinformation, communicate falsehoods, and create false narratives. Cybercriminals now use AI to make phishing attacks faster and more efficient, to break into security systems, and to devise elaborate social engineering tactics. In the hands of extremist groups, AI can enhance propaganda, recruitment, and coordination.
- Solution - Human Oversight and Safety-by-Design
To overcome these dangers, a global AI system must be developed on the principle of safety-by-design. This means incorporating ethical safeguards from the development phase rather than reacting after the damage is done. Human control is just as vital: AI systems that influence public confidence, security, or human rights should always remain under the control of human decision-makers. Automated decision-making without transparency or auditability can produce black-box systems in which the assignment of responsibility is unclear.
Three Pillars of a Responsible AI Framework
- Equitable Access to AI Technologies
One of the major hindrances to global AI development is the non-uniformity of access. High-end computing capability, data infrastructure, and AI research resources are still highly concentrated in a few areas. A sustainable framework needs to be set up so that smaller countries, rural areas, and speakers of different languages can also share in the benefits of AI. Distributing access fairly will be a gradual process, but it will also spur new ideas and improvements across local markets. This would prevent a digital divide and ensure that the AI future is not determined exclusively by the wealthy economies.
- Population-Level Skilling and Talent Readiness
AI will reshape workplaces worldwide. Societies must therefore equip their people not only with existing job skills but also with future technology-based skills. Large-scale AI literacy programmes, digital competency enhancement, and cross-disciplinary education are all vital. Preparing human resources for roles in AI governance, data ethics, cybersecurity, and emerging technologies will help prevent large-scale displacement while promoting growth that is genuinely inclusive.
- Responsible and Human-Centric Deployment
The adoption of Responsible AI ensures that technology is used for social good and not just for profit. Human-centred AI directs applications to sectors such as healthcare, agriculture, education, disaster management, and public services, especially in the underserved regions of the world that need these innovations most. This strategy guarantees that technological progress improves human life instead of worsening the situation of the poor or removing responsibility from humans.
Need for a Global AI Governance Framework
- Why International Cooperation Matters
AI governance cannot be fragmented. Divergent national regulations create loopholes that allow bad actors to operate across jurisdictions. Hence, global coordination and harmonisation of safety frameworks are of utmost importance. A unified AI governance framework should stipulate:
- Clear prohibitions on the misuse of AI for terrorism, deepfakes, and cybercrime.
- Transparency and algorithmic audits as a compulsory requirement.
- Independent global oversight bodies.
- Ethical codes of conduct in harmony with humanitarian laws.
A framework like this makes it clear that AI will be shaped by common values rather than by the influence of particular interest groups.
- Talent Mobility and Open Innovation
If AI is to be universally accepted, then global mobility of talent must be made easier. The flow of innovation takes place when the interaction between researchers, engineers, and policymakers is not limited by borders.
- AI, Equity, and Global Development
The rapid concentration of technology in a few hands risks widening inequality among countries. Most developing countries face poor infrastructure and a lack of education and digital resources. Treating them only as technology markets, rather than as partners in innovation, isolates them even further from mainstream development. A human-centred, technology-driven approach to AI development must therefore include the participation of the whole world. The COVID-19 pandemic, for example, demonstrated how technology can be a major factor in building healthcare capacity and crisis resilience. When used fairly, AI has a significant role to play in realising the Sustainable Development Goals.
Conclusion
AI stands at a crucial juncture. It can either advance human progress or amplify digital risks. Ensuring that AI is a global good goes beyond sophisticated technology; it requires moral leadership, inclusive governance, and collaboration between countries. Preventing misuse through openness, human oversight, and responsible policy will be vital to preserving public trust. Properly guided, AI can make societies more resilient, accelerate development, and empower future generations. The future we choose depends on how responsibly we act today.
As PM Modi stated, ‘AI should serve as a global good, and at the same time nations must stay vigilant against its misuse’. CyberPeace reinforces this vision by advocating responsible innovation and a secure digital future for all.
References
- https://www.hindustantimes.com/india-news/ai-a-global-good-but-must-guard-against-misuse-pm-101763922179359.html
- https://www.deccanherald.com/india/g20-summit-pm-modi-goes-against-donald-trumps-stand-seeks-global-governance-for-ai-3807928
- https://timesofindia.indiatimes.com/india/need-global-compact-to-prevent-ai-misuse-pm-modi/articleshow/125525379.cms