iOS Lockdown Mode Feature: The Cyber Bouncer for Your iPhone!
Introduction
Your iPhone isn’t just a device: it’s a central hub for almost everything in your life. From personal photos and videos to sensitive data, it holds it all. You rely on it for everything from personal and official communications to information sharing, banking, and financial transactions. With so much critical information stored on your device, protecting it from cyber threats becomes essential. This is where the iOS Lockdown Mode feature comes in as a digital bouncer to keep cyber crooks at bay.
Apple introduced Lockdown Mode in 2022. It is an optional security feature available on iPhones, iPads, and Mac devices, designed as an extreme protection mechanism for the small segment of users who face a higher risk of being targeted by serious cyber threats and intrusions into their digital security. Journalists, activists, government officials, celebrities, cybersecurity professionals, law enforcement officers, and lawyers are among the intended beneficiaries of the feature. The data on their devices can be highly confidential, and a leak or compromise can cause a lot of disruption. Given how prevalent cyber attacks are in this day and age, the need for such a feature cannot be overstated. Lockdown Mode provides an additional layer of defence by limiting certain functions of the device, thereby reducing the chances of the user being compromised in a digital attack.
How to Enable Lockdown Mode in Your iPhone
On an iPhone running iOS 16 or later, go to Settings > Privacy & Security > Lockdown Mode. Tap Turn On Lockdown Mode and read the information about the features that will be unavailable on your device if you proceed; if you are satisfied, scroll down and tap Turn On Lockdown Mode again. Your iPhone will restart with Lockdown Mode enabled.
Easy steps to enable lockdown mode are as follows:
- Open the Settings app.
- Tap Privacy & Security.
- Scroll down, tap Lockdown Mode, then tap Turn On Lockdown Mode.
How Lockdown Mode Protects You
Lockdown Mode is a security feature that limits certain apps, websites, and features when enabled. For example, your device will not automatically join insecure Wi-Fi networks and will disconnect from a non-secure network once Lockdown Mode is activated. Many other features may be affected because the system prioritises security standards above typical operational convenience. Since Lockdown Mode restricts certain features and activities, you can exclude a particular app, or a particular website in Safari, from its restrictions. Exclude only trusted apps or websites, and only if necessary.
References:
- https://support.apple.com/en-in/105120
- https://www.business-standard.com/technology/tech-news/apple-lockdown-mode-what-is-it-and-how-it-prevents-spyware-attacks-124041200667_1.html
Modern international trade relies heavily on data transfers for the exchange of digital goods and services. User data travels across multiple jurisdictions and legal regimes, each with different rules for processing it. Since international treaties and standards for data protection are inadequate, states, in an effort to protect their citizens' data, have begun extending their domestic privacy laws beyond their borders. However, this opens a Pandora's box of legal and administrative complexities for both data protection authorities and data processors. The former must balance the harmonization of domestic data protection laws with their extraterritorial enforcement, without overreaching into the sovereignty of other states. The latter must comply with the data privacy laws of every state in which they collect, store, and process data. While the international legal community continues to grapple with these challenges, India can draw valuable lessons to refine the Digital Personal Data Protection Act, 2023 (DPDP) in a way that effectively addresses these complexities.
Why Extraterritorial Application?
Since data moves freely across borders, and entities collecting such data from users in multiple states can misuse it or use it to gain an unfair competitive advantage in local markets, data privacy laws carry a clause on their extraterritorial application. States use this principle to frame laws that ensure comprehensive data protection for their citizens, irrespective of where the data is located. The foremost example is the European Union’s (EU) General Data Protection Regulation (GDPR), 2016, which applies to any entity that processes the personal data of individuals in the EU, regardless of the entity’s location. More recently, India has enacted the DPDP Act, 2023, which includes a similar clause on extraterritorial application.
The Extraterritorial Approach: GDPR and DPDP Act
The GDPR is considered the toughest data privacy law in the world and sets a global standard in data protection. According to Article 3, its provisions apply not only to data controllers and processors within the EU but also to those established outside its territory, if they offer goods or services to, or monitor the behaviour of, data subjects within the EU. Enforcement relies on heavy penalties for non-compliance: for severe violations, fines of up to €20 million or 4% of the company’s global turnover, whichever is higher. As a result, corporations based in the USA, like Meta and Clearview AI, have been fined over €1.5 billion and €5.5 million respectively under the GDPR.
Like the GDPR, the DPDP Act extends its jurisdiction to foreign companies handling the personal data of data principals within Indian territory, under Section 3(b). It has a similar extraterritorial reach and prescribes a penalty of up to Rs 250 crore in case of breaches. However, neither the Act nor the DPDP Rules, 2025, which are currently under deliberation, elaborates on an enforcement mechanism through which foreign companies can be held accountable.
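To make the rules above concrete, here is a minimal sketch in Python that encodes the jurisdictional test of GDPR Article 3 and the two penalty ceilings exactly as described in this section. The function names and the simplified boolean inputs are illustrative assumptions, not anything drawn from either statute, and real applicability analysis is far more fact-specific.

```python
# Illustrative sketch only: encodes the jurisdictional test of GDPR Article 3
# and the penalty ceilings described above. Names and boolean inputs are
# hypothetical simplifications; real legal analysis is fact-specific.

def gdpr_applies(established_in_eu: bool,
                 offers_goods_or_services_to_eu: bool,
                 monitors_behaviour_in_eu: bool) -> bool:
    """Article 3: GDPR reaches EU establishments, and non-EU entities that
    either target or monitor data subjects within the EU."""
    return (established_in_eu
            or offers_goods_or_services_to_eu
            or monitors_behaviour_in_eu)


def gdpr_fine_ceiling_eur(global_annual_turnover_eur: float) -> float:
    """Severe violations: up to EUR 20 million or 4% of global turnover,
    whichever is higher."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)


# DPDP Act, 2023: a flat ceiling of Rs 250 crore (1 crore = Rs 10 million).
DPDP_FINE_CEILING_INR = 250 * 10_000_000

if __name__ == "__main__":
    # A non-EU company that monitors EU users' behaviour is still in scope.
    print(gdpr_applies(False, False, True))    # True
    # For a firm with EUR 100 billion in global turnover, the 4% arm dominates.
    print(gdpr_fine_ceiling_eur(100e9))        # 4000000000.0
    print(DPDP_FINE_CEILING_INR)               # 2500000000
```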
Lessons for India’s DPDP on Managing Extraterritorial Application
- Clarity in Definitions: The GDPR clearly defines ‘personal data’, covering direct information such as name and identification number, indirect identifiers like location data, and online identifiers that can be used to establish the physical, physiological, genetic, mental, economic, cultural, or social identity of a natural person. It also prohibits revealing special categories of personal data, like religious beliefs and biometric data, to protect the fundamental rights and freedoms of data subjects. By contrast, the DPDP Act and Rules define ‘personal data’ vaguely, leaving broad scope for Big Tech and ad-tech firms to bypass obligations.
- International Cooperation: Compliance is complex for companies due to varying data protection laws across countries. The success of regulatory measures in such a scenario depends on international cooperation in governing cross-border data flows and enforcement. For the DPDP to be effective, India will have to foster cooperation frameworks with other nations.
- Adequate Safeguards for Data Transfers: The GDPR regulates data transfers outside the EU via pre-approved legal mechanisms such as standard contractual clauses or binding corporate rules to ensure that the same level of protection applies to EU citizens’ data even when it is processed outside the EU. The DPDP should adopt similar safeguards to ensure that Indian citizens’ data is protected when processed abroad.
- Revised Penalty Structure: The GDPR mandates a penalty structure that must be effective, proportionate, and dissuasive. The supervisory authority in each member state has the power to impose administrative fines as per these principles, up to an upper limit set by the GDPR. The DPDP’s penalty structure, on the other hand, is simplistic and will disproportionately impact smaller businesses. It should instead account for factors such as the nature, gravity, and duration of the infringement, its consequences, and the compliance measures taken.
- Governance Structure: The GDPR envisages a multi-tiered governance structure comprising:
- National-level Data Protection Authorities (DPAs), which enforce national data protection laws and the GDPR;
- the European Data Protection Supervisor (EDPS), which monitors the processing of personal data by EU institutions and bodies;
- the European Commission (EC), which develops GDPR legislation; and
- the European Data Protection Board (EDPB), which enables coordination between the EC, EDPS, and DPAs.
In contrast, the Data Protection Board (DPB) under the DPDP will be a single, centralized body overseeing compliance and enforcement. Since its members are to be appointed by the Central Government, this raises questions about the Board’s autonomy and its ability to apply regulations consistently. Further, its investigative and enforcement capabilities are not well defined.
Conclusion
The protection of the human right to privacy (under the International Covenant on Civil and Political Rights and the Universal Declaration of Human Rights) in today’s increasingly interconnected digital economy warrants international standard-setting on cross-border data protection. In the meantime, states will inevitably rely on the extraterritorial application of domestic laws. While India’s DPDP takes steps in this direction, its provisions must be refined to ensure clarity regarding implementation mechanisms, to push for alignment with the data protection laws of other states, and to account for the complexity of enforcement in cases involving extraterritorial jurisdiction. As India sets out to position itself as a global digital leader, a well-crafted extraterritorial framework under the DPDP Act will be essential to promote international trust in India’s data governance regime.
Sources
- https://gdpr-info.eu/art-83-gdpr/
- https://gdpr-info.eu/recitals/no-150/
- https://gdpr-info.eu/recitals/no-51/
- https://www.meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf
- https://www.eqs.com/compliance-blog/biggest-gdpr-fines/
- https://gdpr-info.eu/art-3-gdpr/
- https://www.legal500.com/developments/thought-leadership/gdpr-v-indias-dpdpa-key-differences-and-compliance-implications/

Introduction
Growing online interaction and the popularity of social media platforms among netizens have created a breeding ground for the generation and spread of misinformation. Misinformation propagates faster and more easily on online social media platforms than through traditional news sources like newspapers or TV. Big data analytics and Artificial Intelligence (AI) systems have made it possible to gather, combine, analyse, and indefinitely store massive volumes of data. Constant surveillance of digital platforms can help detect and promptly respond to false and misleading content.
During the recent Israel-Hamas conflict, a great deal of misinformation spread on big platforms like X (formerly Twitter) and Telegram. Images and videos were falsely attributed to the ongoing conflict, spreading widespread confusion and tension. While advanced technologies such as AI and big data analytics can help flag harmful content quickly, they must be carefully balanced against privacy concerns to ensure that surveillance practices do not infringe upon individual privacy rights. Ultimately, the challenge lies in creating a system that upholds both public security and personal privacy, fostering trust without compromising on either front.
The Need for Real-Time Misinformation Surveillance
According to a recent survey from the Pew Research Center, 54% of U.S. adults at least sometimes get news on social media. Facebook and YouTube take the top two spots, with Instagram third and TikTok and X fourth and fifth. Social media platforms provide users with instant connectivity, allowing them to share information quickly with other users without requiring the permission of a gatekeeper, such as an editor in traditional media channels.
The volume of information, both true and false, generated around the elections held in more than 100 countries in 2024, the COVID-19 public health crisis, and the conflicts in the West Bank and Gaza Strip has been immense, and identifying accurate information amid real-time misinformation is challenging. Traditional content moderation techniques alone may not be sufficient to curb it. Hence, a dedicated, real-time misinformation surveillance system, backed by AI, subject to human oversight, and respectful of users’ data privacy, could prove an effective mechanism for countering misinformation on larger platforms. Data privacy concerns must be prioritized before such technologies are deployed on platforms with large user bases.
Ethical Concerns Surrounding Surveillance in Misinformation Control
Real-time misinformation surveillance poses significant ethical and privacy risks. Monitoring communication patterns and metadata, or even inspecting private messages, can infringe upon user privacy and restrict freedom of expression. Furthermore, defining misinformation remains a challenge; overly restrictive surveillance can unintentionally stifle legitimate dissent and alternative perspectives. Beyond these concerns, real-time surveillance mechanisms could be exploited for political, economic, or social objectives unrelated to misinformation control. Establishing clear ethical standards and limitations is essential to ensure that surveillance supports public safety without compromising individual rights.
In light of these ethical challenges, developing a responsible framework for real-time surveillance is essential.
Balancing Ethics and Efficacy in Real-Time Surveillance: Key Policy Implications
Despite these ethical challenges, a reliable misinformation surveillance system is essential. Key considerations for creating ethical, real-time surveillance may include:
- Misinformation-detection algorithms should be designed with transparency and accountability in mind. Third-party audits and explainable AI can help ensure fairness, avoid bias, and foster trust in monitoring systems (see the sketch after this list).
- Establishing clear, consistent definitions of misinformation is crucial for fair enforcement. These guidelines should carefully differentiate harmful misinformation from protected free speech to respect users’ rights.
- Collecting only necessary data and adopting a consent-based approach protects user privacy and enhances transparency and trust. It also guards users against the stifling of dissent and against profiling for targeted ads.
- An independent oversight body should be created to monitor surveillance activities, ensure accountability, and prevent misuse or overreach. Measures such as the ability to appeal wrongful content flagging can increase user confidence in the system.
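A minimal sketch of how these principles might be wired together in practice is given below. The `score_misinformation` function is a hypothetical placeholder for whatever classifier a platform actually uses, and the thresholds, hash-based audit log, and human-review routing illustrate the transparency, data-minimisation, and appeal principles listed above, not any real platform’s pipeline.

```python
# Hypothetical sketch: routes content through a misinformation classifier,
# escalating uncertain cases to human review and keeping a minimal audit log.
# score_misinformation() is a stub; the thresholds are illustrative.
import hashlib
from dataclasses import dataclass, field


def score_misinformation(text: str) -> float:
    """Placeholder for a real classifier; returns P(misinformation)."""
    return 0.5  # stub value so the sketch runs end to end


@dataclass
class Decision:
    content_hash: str  # store a hash, not the text itself: data minimisation
    score: float
    action: str        # "allow" | "human_review" | "flag"


@dataclass
class ModerationPipeline:
    flag_threshold: float = 0.9    # auto-flag only at high confidence
    review_threshold: float = 0.6  # the uncertain band goes to humans
    audit_log: list = field(default_factory=list)

    def moderate(self, text: str) -> Decision:
        score = score_misinformation(text)
        if score >= self.flag_threshold:
            action = "flag"          # flagged items remain appealable
        elif score >= self.review_threshold:
            action = "human_review"  # human oversight for uncertain cases
        else:
            action = "allow"
        decision = Decision(
            hashlib.sha256(text.encode()).hexdigest(), score, action)
        self.audit_log.append(decision)  # supports third-party audits/appeals
        return decision


print(ModerationPipeline().moderate("example post").action)  # -> "allow"
```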
Conclusion: Striking a Balance
Real-time misinformation surveillance has shown its usefulness in counteracting the rapid spread of false information online. However, it brings complex ethical challenges that cannot be overlooked: balancing the need for public safety with the preservation of privacy and free expression is essential to maintaining a democratic digital landscape. The experiences of the EU’s Digital Services Act and Singapore’s POFMA underscore that, while regulation can enhance accountability and transparency, it also risks overreach if not carefully structured. Moving forward, a framework for misinformation monitoring must prioritise transparency, accountability, and user rights, ensuring that algorithms are fair, oversight is independent, and user data is protected. By embedding these safeguards, we can create a system that addresses the threat of misinformation while upholding the foundational values of an open, responsible, and ethical online ecosystem. Ethics- and privacy-conscious, policy-driven AI solutions for real-time misinformation monitoring are the need of the hour.
References
- https://www.pewresearch.org/journalism/fact-sheet/social-media-and-news-fact-sheet/
- https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:C:2018:233:FULL
Introduction
Big Tech has been pushing back against regulatory measures, particularly regarding data handling practices. X Corp (formerly Twitter) has taken a prominent stance in India. The platform has filed a petition against the Central and State governments, challenging content-blocking orders and opposing the Centre’s newly launched Sahyog portal. X Corp has further labelled the Sahyog portal a ‘censorship portal’ that enables government agencies to issue blocking orders using a standardized template.
The key regulations governing the tech space in India include the IT Act, 2000, the IT Rules, 2021 and 2023 (which stress platform accountability and content moderation), and the DPDP Act, 2023, which intersects with personal data governance. X Corp’s petition raises concerns for digital freedom, platform accountability, and the evolving regulatory frameworks in India.
Elon Musk vs Indian Government: Key Issues at Stake
The 2021 IT Rules, particularly Rule 3(1)(d) of Part II, outline intermediaries’ obligations regarding content takedowns. Intermediaries must remove or disable access to unlawful content within 36 hours of receiving a court order or government notification. Notably, the rules do not require government takedown requests to be explicitly in writing, raising concerns about potential misuse.
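As a concrete illustration of the 36-hour window, the short sketch below computes a compliance deadline from the time an order is received. The function name and the use of UTC timestamps are assumptions made for the example, not anything prescribed by the Rules.

```python
# Illustrative only: computes the 36-hour takedown deadline under the
# 2021 IT Rules from the time an order is received. The UTC timezone
# choice is an assumption for the sketch, not mandated by the Rules.
from datetime import datetime, timedelta, timezone

TAKEDOWN_WINDOW = timedelta(hours=36)


def takedown_deadline(received_at: datetime) -> datetime:
    """Latest time by which access to the content must be disabled."""
    return received_at + TAKEDOWN_WINDOW


received = datetime(2025, 3, 20, 10, 0, tzinfo=timezone.utc)
print(takedown_deadline(received))  # 2025-03-21 22:00:00+00:00
```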
X’s petition also focuses on the Sahyog Portal, a government-run platform that allows various agencies and state police to request content removal directly. X contends that failure to comply with such orders can expose intermediaries’ officers to prosecution. This has sparked controversy, with platforms like Elon Musk’s X arguing that such provisions grant the government excessive control, potentially undermining free speech and fostering undue censorship.
The broader implications include geopolitical tensions, potential business risks for big tech companies, and significant effects on India's digital economy, user engagement, and platform governance. Balancing regulatory compliance with digital rights remains a crucial challenge in this evolving landscape.
The Global Context: Lessons from Other Jurisdictions
The EU’s Digital Services Act (DSA) establishes a baseline ‘notice and takedown’ system. Under the Act, hosting providers, including online platforms, must enable third parties to notify them of illegal content, which they must promptly remove to retain their hosting defence. The DSA also mandates expedited handling of notifications from trusted flaggers, suspension of users who frequently violate the rules, and enhanced protections for minors. Additionally, hosting providers must adhere to specific content removal obligations, including removing terrorist content within one hour and deploying technology to detect and remove known or new CSAM material.
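To illustrate how such tiered removal deadlines might be operationalised, here is a small sketch that orders incoming notices by their deadline. The one-hour window for terrorist content reflects the obligation described above; the other categories and windows are placeholder assumptions, not values prescribed by the DSA.

```python
# Hypothetical sketch of a tiered notice-and-takedown queue. Only the
# one-hour terrorist-content window comes from the obligations described
# above; the other categories and windows are illustrative assumptions.
import heapq
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOWS = {
    "terrorist_content": timedelta(hours=1),   # per the obligation above
    "csam": timedelta(hours=2),                # assumed: highly urgent
    "trusted_flagger": timedelta(hours=24),    # assumed expedited lane
    "general_illegal": timedelta(hours=72),    # assumed default lane
}


def enqueue(queue: list, category: str, content_id: str,
            received_at: datetime) -> None:
    """Push a notice onto the queue, ordered by its removal deadline."""
    deadline = received_at + REMOVAL_WINDOWS[category]
    heapq.heappush(queue, (deadline, category, content_id))


queue: list = []
now = datetime.now(timezone.utc)
enqueue(queue, "general_illegal", "post-1", now)
enqueue(queue, "terrorist_content", "post-2", now)
# The terrorist-content notice surfaces first despite arriving second.
print(heapq.heappop(queue)[2])  # -> "post-2"
```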
In contrast to the EU, the US First Amendment protects speech from state interference but does not extend to private entities. Dominant digital platforms, however, significantly influence discourse by moderating content, shaping narratives, and controlling advertising markets. This dual role creates tension as these platforms balance free speech, platform safety, and profitability.
India has adopted a model closer to the EU's approach, emphasizing content moderation to curb misinformation, false narratives, and harmful content. Drawing from the EU's framework, India could establish third-party notification mechanisms, enforce clear content takedown guidelines, and implement detection measures for harmful content like terrorist material and CSAM within defined timelines. This would balance content regulation with platform accountability while aligning with global best practices.
Key Concerns and Policy Debates
As the issue stands, the main concerns that arise are:
- The need for transparency in government takedown orders, including the reasons they are issued, a clear framework for when they are warranted, and guidelines for issuing them.
- The need to balance digital freedom with national security, and the concerns this raises for tech companies; essentially, the role platforms play in safeguarding the democratic values enshrined in the Constitution of India.
- The potential of the Karnataka HC’s ruling to redefine the principles upon which the intermediary guidelines function under Indian law.
Potential Outcomes and the Way Forward
While we await the Hon’ble Court’s directives in response to the suit, the decision could favour either side or lead to a negotiated resolution; the broader takeaway is the necessity of collaborative policymaking that balances governmental oversight with platform accountability. This debate underscores the pressing need for a structured and transparent regulatory framework for content moderation. The case also highlights the importance of due process in content regulation and the need for legal clarity for tech companies operating in India. Ultimately, a consultative, principles-based approach will be key to ensuring a fair and open digital ecosystem.
References
- https://www.thehindu.com/sci-tech/technology/elon-musks-x-sues-union-government-over-alleged-censorship-and-it-act-violations/article69352961.ece
- https://www.hindustantimes.com/india-news/elon-musk-s-x-sues-union-government-over-alleged-censorship-and-it-act-violations-101742463516588.html
- https://www.financialexpress.com/life/technology-explainer-why-has-x-accused-govt-of-censorship-3788648/
- https://thelawreporters.com/elon-musk-s-x-sues-indian-government-over-alleged-censorship-and-it-act-violations
- https://www.linklaters.com/en/insights/blogs/digilinks/2023/february/the-eu-digital-services-act---a-new-era-for-online-harms-and-intermediary-liability