Using incognito mode and VPN may still not ensure total privacy, according to expert
SVIMS Director and Vice-Chancellor B. Vengamma lighting a lamp to formally launch the cybercrime awareness programme conducted by the police department for the medical students in Tirupati on Wednesday.
An awareness meet on safe Internet practices was held for the students of Sri Venkateswara University (SVU) and Sri Venkateswara Institute of Medical Sciences (SVIMS) here on Wednesday.
“Cyber criminals on the prowl can easily track our digital footprint, steal our identity and resort to impersonation,” cyber expert I.L. Narasimha Rao cautioned the college students.
Addressing the students in two sessions, Mr. Narasimha Rao, who is a Senior Manager with CyberPeace Foundation, said seemingly common acts like browsing a website, and liking and commenting on posts on social media platforms could be used by impersonators to recreate an account in our name.
Turning to the youth, Mr. Narasimha Rao said the incognito mode and Virtual Private Networks (VPNs), used as protected network connections, do not ensure total privacy, as third parties could still snoop on the websites being visited by users. He also cautioned them about tactics like ‘phishing’, ‘vishing’ and ‘smishing’ being used by cybercriminals to steal passwords and gain access to accounts.
“After cracking the whip on websites and apps that could potentially compromise our security, the Government of India has recently banned 232 more apps,” he noted.
Additional Superintendent of Police (Crime) B.H. Vimala Kumari appealed to cyber victims to call 1930 or the Cyber Mitra’s helpline 9121211100. SVIMS Director B. Vengamma stressed the need for caution with smartphones becoming an indispensable tool for students, be it for online education, seeking information, entertainment or for conducting digital transactions.

Introduction
The recent cyber-attack on Jaguar Land Rover (JLR), one of the world's best-known car makers, has revealed extensive weaknesses in the interlinked character of international supply chains. The incident highlights the growing cybersecurity challenges facing industries undergoing digital transformation. With production stopped in several UK factories, supply chain disruptions, and service delays for customers worldwide, this cyber-attack shows how a single cyber event can ripple into operational, financial, and reputational risks for large businesses.
The Anatomy of a Breakdown
Jaguar Land Rover, a Tata Motors subsidiary, was forced to disable its IT infrastructure following a cyber-attack over the weekend. The shutdown was an emergency measure to mitigate damage, and the disruption to business was serious.
- Production Halted: The car plants at Halewood (Merseyside) and Solihull (West Midlands) and the engine plant at Wolverhampton were completely shut down.
- Sales and Distribution: Car sales were significantly impaired during September's high-volume registration period, although some transactions were still processed through manual procedures.
- Global Effect: The disruption was not confined to the UK; dealers and repair specialists across the world, including in Australia, were left without access to parts databases.
JLR described the recovery process as "extremely complex", as it involved a controlled restoration of systems and alternative workarounds for offline services. The effects include an immediate and massive impact on suppliers and customers, and the incident has raised larger questions about the sustainability of digital ecosystems in the automotive value chain.
The Human Impact: Beyond JLR's Factories
The implications of the cyber-attack have extended beyond the production lines of JLR:
- Independent Garages: Repair centres such as Nyewood Express of West Sussex reported that they could not access vital parts databases, bringing repair work to a standstill and leaving customers waiting indefinitely.
- Global Dealers: Land Rover specialists as far away as Tasmania reported complete system outages, highlighting global dependency on centralized IT systems.
- Customer Frustration: Regular customers in need of urgent repairs were stranded by the inability to order replacement parts from original manufacturers.
This attack exemplifies the cascading effect of cyber disruptions across interconnected industries, with a single point of failure paralyzing entire ecosystems.
The Culprit: The Hacker Collective
Responsibility for the attack has been claimed by a hacker collective calling itself "Scattered Lapsus$ Hunters." The group describes itself as consisting of young English-speaking hackers and has previously targeted blue-chip brands such as Marks & Spencer. While the attackers have not publicly confirmed whether they exfiltrated sensitive information or deployed ransomware, they posted screenshots of internal JLR documents, including troubleshooting guides and system logs, indicating unauthorized access to some of Jaguar Land Rover's core IT systems.
Jaguar Land Rover has stated that it has found no evidence of customer data being compromised; however, the very occurrence of the attack raises serious questions about insider threats, social engineering, and the effectiveness of cybersecurity governance frameworks.
Cybersecurity Weaknesses and Lessons Learned
The JLR attack exposes several weaknesses common to large-scale manufacturing organizations:
- Centralized IT Dependencies: Today's auto firms rely on global IT systems for operations, logistics, and customer care. A single compromise can lead to widespread outages.
- Supply Chain Vulnerabilities: Tier-1 and Tier-2 suppliers rely on OEM systems to place and trace component orders; a disruption at the OEM level automatically halts their processes.
- Inadequate Incident Visibility: Several suppliers complained of a lack of clear information from JLR, which increased uncertainty and financial loss.
- Rise of Youth Hacking Groups: The involvement of young hacker collectives highlights the need for active monitoring and community-level cybersecurity awareness initiatives.
Broader Industry Context
This incident falls within a broader pattern of escalating cyber-attacks on the automotive industry, a sector being rapidly digitalised through connected cars, IoT-enabled factories, and cloud-based operations. In 2023, JLR awarded an £800 million contract to Tata Consultancy Services (TCS) to support the company's digital transformation and cybersecurity enhancement. This attack shows that, however much is spent, a poorly conceptualised security programme cannot keep pace with ever-evolving cyber threats.
What Can Organizations Do? – Cyberpeace Recommendations
To contain risks and develop a resilience against such events, organizations need to implement a multi-layered approach to cybersecurity:
- Adopt Zero Trust Architecture - Presume breach as the new normal. Verify each user, device, and application before access is given, even inside the internal network.
- Enhance Supply Chain Security - Perform targeted supplier assessments on a routine basis to identify and reduce risk. Include rigorous cybersecurity provisions in supplier agreements, covering vulnerability disclosure and agreed incident-response timelines.
- Durable Backups and Restoration - Keep backups isolated and encrypted so that operations can continue in the event of ransomware or any other system compromise.
- Periodic Red Team Exercises - Simulate cyber-attacks on IT and OT systems to uncover vulnerabilities and evaluate current incident-response measures.
- Employee Training and Insider Threat Monitoring - With social engineering at the forefront of attack vectors, continuous training and behavioural monitoring are needed to prevent credential compromise.
- Public-Private Partnership - Engage with government agencies and cybersecurity groups to share threat intelligence and adopt best practices aligned with ISO/IEC 27001 and the NIST Cybersecurity Framework.
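The Zero Trust recommendation above ("presume breach; verify each user, device, and application before access is given, even inside the internal network") can be illustrated with a minimal sketch. All names here (`VALID_TOKENS`, `HEALTHY_DEVICES`, `authorize`) are hypothetical stand-ins for a real identity provider, device-posture service, and policy engine:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_token: str          # identity credential presented with the request
    device_id: str           # device making the request
    resource: str            # resource being accessed
    from_internal_net: bool  # deliberately ignored: network location grants nothing

# Toy stand-ins for an identity provider, a device-posture service,
# and a per-resource policy store.
VALID_TOKENS = {"token-alice"}
HEALTHY_DEVICES = {"laptop-001"}
PERMISSIONS = {("token-alice", "parts-db"): True}

def authorize(req: Request) -> bool:
    """Verify user, device, and per-resource permission on EVERY request.
    Being inside the corporate network is never sufficient ("presume breach")."""
    if req.user_token not in VALID_TOKENS:
        return False
    if req.device_id not in HEALTHY_DEVICES:
        return False
    return PERMISSIONS.get((req.user_token, req.resource), False)
```

For example, a request from a valid user on a healthy device is allowed only for resources it has an explicit grant for; a request from inside the internal network with an unknown token is denied all the same.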
Conclusion
The hack at Jaguar Land Rover is yet another reminder that cybersecurity can no longer be treated as a back-office function; it is a core business-continuity concern. As digital transformation proceeds, the attack surface grows, making organisations ever more attractive targets for cybercriminals. Operational security demands proactive cybersecurity, resilient supply chains, and stakeholders working together. The JLR attack is not an isolated event; it is a warning for the entire automobile sector to maintain security at every level of digitalisation.
References
- https://www.bbc.com/news/articles/c1jzl1lw4y1o
- https://www.theguardian.com/business/2025/sep/07/disruption-to-jaguar-land-rover-after-cyber-attack-may-last-until-october
- https://uk.finance.yahoo.com/news/jaguar-factory-workers-told-stay-073458122.html

Overview of the Advisory
On 18 November 2025, the Ministry of Information and Broadcasting (I&B) issued an advisory addressed to all private satellite television channels in India. The advisory is a critical institutional intervention concerning the broadcast of sensitive content related to the blast at the Red Fort on 10 November 2025. It came after the Ministry observed that some news channels had been broadcasting content about the alleged persons involved in the Red Fort blast, including justifications of their acts of violence and information or video on explosive material. Such broadcasts at a critical juncture may inadvertently encourage or incite violence, disrupt public order, and pose risks to national security.
Key Instructions under the Advisory
The advisory directs TV channels to ensure strict compliance with the Programme and Advertising Codes under the Cable Television Networks (Regulation) Act, 1995 and the Cable Television Networks Rules framed under it. Channels are advised to exercise the highest level of discretion and sensitivity when reporting on alleged perpetrators of violence, especially on matters involving the justification of acts of violence or instructional material on making explosives. In particular, broadcasters should not carry programming that:
- Contains anything obscene, defamatory, deliberately false, or suggestive innuendos and half-truths.
- Is likely to encourage or incite violence, contains anything against the maintenance of law and order, or promotes an anti-national attitude.
- Contains anything that affects the integrity of the Nation.
- Could aid, abet or promote unlawful activities.
Responsible Reporting Framework
The advisory does not constitute outright censorship; rather, it establishes a self-regulatory framework that relies on the discretion and sensitivity of TV channels to differentiate legitimate news broadcasting from content that crosses the threshold from information dissemination to incitement.
Why This Advisory is Important in a Digital Age
In modern media systems, the line between traditional broadcast journalism and digital virality has eroded. Television content is no longer confined to scheduled programmes or cable distribution channels. A single news segment, especially one of a dramatic or contentious nature, can be ripped, re-edited and repackaged across social media networks within minutes of airing, often stripped of context, editorial discretion or timing indicators.
This gives sensitive content a multiplier effect. A short news item featuring a suspect justifying violence, or showing explosive material, can be viewed by millions on YouTube, WhatsApp, Twitter/X and Facebook, spreading organically and being amplified algorithmically. Studies have shown that misinformation and sensational reporting circulate far faster than factual corrections, a pattern observed during recent conflicts and crises in India and elsewhere.
Vulnerabilities of Information Ecosystems
The advisory was created in an information environment characterised by:
- Rapid viral mechanics: Content spreads faster than it can be verified.
- Algorithm-driven amplification: Platform mechanisms boost emotionally charged content.
- Coordinated amplification networks: Organised groups work to make posts and videos go viral in order to set a narrative for the general public.
- Deepfake and synthetic media risks: Original broadcasts can be manipulated and reposted with false attribution.
Interconnection with Cybersecurity and National Security
Unverified or sensationalised reporting of security incidents creates specific vulnerabilities:
- Trust Erosion: Trust is broken when the public sees broadcasters airing unverified claims or emotional accounts as facts. This erosion extends to security agencies, law enforcement and government institutions themselves. Distrust of official information creates information gaps, which are filled by rumours, conspiracy theories and adversarial narratives.
- Cognitive Fragmentation: Misinformation creates multiple versions of the truth among the public; the narratives citizens receive vary with the media sources they watch or read. This fragmentation complicates organising a collective societal response to an actual security threat, because populations can be mobilised around misguided stories rather than accurate information.
- Radicalisation Pipeline: People searching for ideological justifications of violent action may encounter media-derived material, carefully distorted to present terrorism as a legitimate political or religious stance.
How Social Instability Is Exploited in Cyber Operations and Influence Campaigns
Misinformation creates exploitable vulnerability in three phases:
- First, conflicting unverified accounts fragment the information environment: populations are presented with conflicting versions of events by different media sources.
- Second, institutional trust in media and security agencies is shaken by exposure to subsequently corrected false information, creating an information vacuum.
- Third, in such a distrustful and confused setting, the population becomes susceptible to organised manipulation by malicious actors.
Sensationalised broadcasting hands adversaries content assets, narrative frameworks, and information gaps that they can exploit to drive destabilisation campaigns. Responsible broadcasting directly counters these mechanisms of exploitation.
Media Literacy and Audience Responsibility
Structural Information Vulnerabilities
A major part of the Indian population is structurally disadvantaged in information access:
- Language barriers: Fact-checking infrastructure remains heavily concentrated in English and Hindi, while vernacular-language misinformation goes viral in Tamil, Telugu, Marathi, Punjabi, and other languages.
- Digital literacy gaps: An estimated 40 million people in India have received digital literacy training, yet more than 900 million Indians access digital content with widely varying ability to evaluate it critically.
- Rural-urban divides: Rural citizens and less affluent people face greater difficulty in accessing verification tools and media literacy resources.
- Algorithmic capture: Social media platforms optimise for engagement over accuracy, actively promoting emotionally inflammatory or divisive content to users based on their engagement history.
Conclusion
The Ministry of Information and Broadcasting's advisory is an acknowledgment that media accountability is part of state security in the information era. It states the principles of responsible reporting without interfering with editorial autonomy, a balance that all stakeholders should uphold. Implementing the advisory requires concerted effort from broadcasters, platforms, civil society, government and educational institutions: information integrity cannot be safeguarded by a single player. Without media literacy resources, citizens cannot responsibly evaluate information; without open and fast communication with media stakeholders, government agencies cannot combat misinformation.
The recommendations point to collaborative governance: institutional arrangements in which media self-regulation, technological protection, user empowerment, and policy frameworks work together rather than compete. Successful implementation will decide whether India can keep its media open and free while maintaining the information integrity needed for national security, democratic governance and social stability in an era of high-speed information flows, algorithmic amplification, and information warfare.
References
- https://mib.gov.in/sites/default/files/2025-11/advisory-18.11.2025.pdf

Introduction
Growing online interaction and the popularity of social media platforms have created a breeding ground for the generation and spread of misinformation. Misinformation propagates far more easily and quickly on online social media platforms than through traditional news media such as newspapers or TV. Big data analytics and Artificial Intelligence (AI) systems have made it possible to gather, combine, analyse and indefinitely store massive volumes of data, and constant monitoring of digital platforms can help detect and promptly respond to false and misleading content.
During the recent Israel-Hamas conflict, considerable misinformation spread on large platforms such as X (formerly Twitter) and Telegram. Images and videos were falsely attributed to the ongoing conflict, spreading widespread confusion and tension. While advanced technologies such as AI and big data analytics can help flag harmful content quickly, they must be carefully balanced against privacy concerns to ensure that surveillance practices do not infringe upon individual privacy rights. Ultimately, the challenge lies in creating a system that upholds both public security and personal privacy, fostering trust without compromising on either front.
The Need for Real-Time Misinformation Surveillance
According to a recent survey from the Pew Research Center, 54% of U.S. adults at least sometimes get news on social media, with Facebook and YouTube in the top spots, Instagram third, and TikTok and X fourth and fifth. Social media platforms give users instant connectivity, allowing them to share information quickly without the permission of a gatekeeper such as an editor in traditional media channels.
Between the elections held in 2024 in more than 100 countries, the COVID-19 public health crisis, and the conflicts in the West Bank and Gaza Strip, the sheer volume of information in circulation, both true and false, has been immense. Identifying accurate information amid real-time misinformation is challenging, and traditional content moderation techniques may not be sufficient to curb it. A dedicated, real-time misinformation surveillance system, backed by AI with human oversight and balanced against the privacy of users' data, could prove an effective mechanism for countering misinformation on larger platforms. Data privacy concerns must be prioritised before such technologies are deployed on platforms with large user bases.
Ethical Concerns Surrounding Surveillance in Misinformation Control
Real-time misinformation surveillance poses significant ethical and privacy risks. Monitoring communication patterns and metadata, or even inspecting private messages, can infringe upon user privacy and restrict freedom of expression. Furthermore, defining misinformation remains a challenge; overly restrictive surveillance can unintentionally stifle legitimate dissent and alternative perspectives. Beyond these concerns, real-time surveillance mechanisms could be exploited for political, economic, or social objectives unrelated to misinformation control. Establishing clear ethical standards and limitations is essential to ensure that surveillance supports public safety without compromising individual rights.
In light of these ethical challenges, developing a responsible framework for real-time surveillance is essential.
Balancing Ethics and Efficacy in Real-Time Surveillance: Key Policy Implications
Despite these ethical challenges, a reliable misinformation surveillance system is essential. Key considerations for creating ethical, real-time surveillance may include:
- Misinformation-detection algorithms should be designed with transparency and accountability in mind. Third-party audits and explainable AI can help ensure fairness, avoid biases, and foster trust in monitoring systems.
- Establishing clear, consistent definitions of misinformation is crucial for fair enforcement. These guidelines should carefully differentiate harmful misinformation from protected free speech to respect users’ rights.
- Collect only the data that is necessary and adopt a consent-based approach, which protects user privacy and enhances transparency and trust. It also protects users from the stifling of dissent and from profiling for targeted ads.
- Create an independent oversight body to monitor surveillance activities, ensuring accountability and preventing misuse or overreach. Measures such as the ability to appeal wrongful content flagging can increase user confidence in the system.
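The considerations above (explainable flagging, data minimisation, and an appeal path) can be sketched in a minimal pipeline. Everything here is illustrative: keyword matching stands in for a real ML classifier, and names such as `flag_for_review` and `appeal` are hypothetical, not any platform's actual API:

```python
from dataclasses import dataclass
from typing import Optional

# Toy phrase list standing in for a trained misinformation classifier.
SUSPECT_PHRASES = {"miracle cure", "guaranteed vaccine danger"}

@dataclass
class FlaggedPost:
    post_id: str   # minimal identifier only: no user profile data is retained
    reason: str    # human-readable explanation of the flag (transparency)
    status: str = "pending_human_review"  # flagged items are never auto-removed

def flag_for_review(post_id: str, text: str) -> Optional[FlaggedPost]:
    """Return a FlaggedPost with an explanation if any suspect phrase matches,
    else None. Only the post text is inspected (data minimisation)."""
    hits = sorted(p for p in SUSPECT_PHRASES if p in text.lower())
    if not hits:
        return None
    return FlaggedPost(post_id, reason="matched phrases: " + ", ".join(hits))

def appeal(flag: FlaggedPost) -> FlaggedPost:
    """A user appeal re-queues the item for independent oversight review."""
    flag.status = "appealed_pending_oversight"
    return flag
```

The design choice worth noting is that the automated stage only routes content to humans and records why; removal decisions and appeals sit with reviewers and an oversight body, matching the accountability and appeal points above.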
Conclusion: Striking a Balance
Real-time misinformation surveillance has shown its usefulness in counteracting the rapid spread of false information online. However, it brings complex ethical challenges that cannot be overlooked: balancing the need for public safety with the preservation of privacy and free expression is essential to maintaining a democratic digital landscape. The experiences of the EU's Digital Services Act and Singapore's POFMA underscore that, while regulation can enhance accountability and transparency, it also risks overreach if not carefully structured. Moving forward, a framework for misinformation monitoring must prioritise transparency, accountability, and user rights, ensuring that algorithms are fair, oversight is independent, and user data is protected. By embedding these safeguards, we can create a system that addresses the threat of misinformation while upholding the foundational values of an open, responsible, and ethical online ecosystem. Policy-driven AI solutions for real-time misinformation monitoring, balanced against ethics and privacy, are the need of the hour.
References
- https://www.pewresearch.org/journalism/fact-sheet/social-media-and-news-fact-sheet/
- https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:C:2018:233:FULL