Apple introduced passkeys with iOS 16 as a more convenient and secure authentication mechanism. Passkeys are safer than passwords and more efficient to use. Apple users who want to use the iOS 16 passkeys feature must have two-factor authentication enabled on their Apple ID. Passkeys are also far less demanding than passwords: the user simply opens the app or website, and the device's biometric sensor recognises their face or fingerprint; a device PIN or pattern can be used to log in instead. Passkeys add an extra layer of protection against cyber threats such as phishing attacks that target SMS- and one-time-password-based logins. A report by 9to5Mac states that 95% of users are using passkeys. With passkeys, the user experience improves and the mechanism itself is more secure, whereas passwords suffer from weak choices, reused and leaked credentials, and a very real risk of phishing.
What are passkeys?
A passkey is a digital key linked to a user's account on a website or application. Passkeys allow the user to log in to an application or website without entering a password, username, or other details. The aim of this feature is to replace the old routine of typing a password every time the user signs in to a website or application.
Passkeys were developed jointly by Microsoft, Apple, and Google, and are built on FIDO (Fast IDentity Online) authentication standards. They eliminate the need to remember or type passwords. A passkey replaces the password with a unique digital key pair tied to the account; the private key is stored on the device itself and is end-to-end encrypted when synced. A passkey works only on the site or app for which the user specifically created it. Because passkeys rely on public-key cryptography and no shared secret ever leaves the device, they protect users against phishing.
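To make the mechanism concrete, below is a minimal sketch of the public-key challenge-response idea that passkey (FIDO/WebAuthn) logins are built on. This is an illustration only, not Apple's or FIDO's actual implementation: a real passkey flow also binds the signature to the website's origin and to server-issued metadata, and the key type and names used here (such as device_private_key) are assumptions made for the example.

```python
# Conceptual sketch of the public-key challenge-response behind passkeys.
# Assumes the 'cryptography' package is installed (pip install cryptography).
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# 1. Registration: the device creates a key pair for this site only.
#    The private key never leaves the device; the site stores the public key.
device_private_key = Ed25519PrivateKey.generate()
server_stored_public_key = device_private_key.public_key()

# 2. Sign-in: the server sends a fresh random challenge...
challenge = os.urandom(32)

# ...the device signs it after local biometric/PIN verification...
signature = device_private_key.sign(challenge)

# ...and the server verifies the signature with the stored public key.
try:
    server_stored_public_key.verify(signature, challenge)
    print("Login accepted: signature matches the registered passkey.")
except InvalidSignature:
    print("Login rejected.")
```

Because the server only ever stores the public key and each signature covers a fresh challenge, a leaked server database or a phishing page cannot reuse the credential, which is the property that makes passkeys phishing-resistant.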
Because passkeys follow FIDO standards, they can also be used to sign in on third-party, non-Apple devices: the third-party device displays a QR code, which the iOS user scans to log in. The iPhone then authenticates the user by face or fingerprint and asks them to allow or deny the sign-in on the other device.
How are passkeys more secure than passwords?
Passkeys use the same public-key cryptographic protocols that underpin hardware security keys, which makes them resistant to phishing and other cyber threats. They are more secure than SMS codes, app-based one-time passwords, and other common forms of multi-factor authentication.
Why are passwords insecure?
Users tend to create passwords that are easy to remember, which makes it doubtful that they are secure. Even passwords protecting important accounts are often short and easy to crack because they are based on the user's personal information or on popular words. Many users also reuse one password across different accounts, so compromising a single account can give hackers access to all of them. The underlying problem is that passwords have inherent flaws: they can be guessed, leaked, or easily stolen.
Are passkeys about to become obligatory?
Many websites already constrain how passwords are chosen, for example by requiring a mix of numbers and symbols, and many also ask for two-factor authentication. There is no certainty that passkeys will become mandatory any time soon: the concept is still new and adoption will take time, so passkeys are likely to remain optional for a while.
Consider the Heartland Payment Systems data breach. At the time of the incident, Heartland was handling over 100 million credit card transactions per month for 175,000 retailers. The hack came to light in January 2009, when Visa and MasterCard notified Heartland of suspicious transactions, and it was attributed to compromised passwords. The corporation paid an estimated $145 million in settlements for fraudulent payments. Today, data breaches like this affect the personal information of millions of people.
GoDaddy reported a security breach in November 2021 that affected the accounts of over a million of its WordPress customers. The attacker gained unauthorised access to GoDaddy's Managed WordPress hosting environment by compromising the provisioning system in the company's legacy Managed WordPress code.
Conclusion
Strong, unique passwords are an essential requirement for safeguarding information and data from cyberattacks, yet passwords still have their own disadvantages. Replacing them with passkeys, digital keys tied to the device and the account, provides proper safety and better security against cyberattacks and cybercrime. The cases mentioned above happened because of the weaker security of passwords. In today's technology-driven world, stronger protection and prevention against cybercrime are needed, and that is a compelling reason for the world to move away from passwords and adopt passkeys.
In an era when misinformation spreads like wildfire across the digital landscape, the need for effective strategies to counteract these challenges has grown exponentially in a very short period. Prebunking and Debunking are two approaches for countering the growing spread of misinformation online. Prebunking empowers individuals by teaching them to discern between true and false information and acts as a protective layer that comes into play even before people encounter malicious content. Debunking is the correction of false or misleading claims after exposure, aiming to undo or reverse the effects of a particular piece of misinformation. Debunking includes methods such as fact-checking, algorithmic correction on a platform, social correction by an individual or group of online peers, or fact-checking reports by expert organisations or journalists. An integrated approach which involves both strategies can be effective in countering the rapid spread of misinformation online.
Brief Analysis of Prebunking
Prebunking is a proactive practice that seeks to rebut erroneous information before it spreads. The goal is to train people to critically analyse information and develop ‘cognitive immunity’ so that they are less likely to be misled when they do encounter misinformation.
The Prebunking approach, grounded in Inoculation theory, teaches people to recognise, analyse and avoid manipulation and misleading content so that they build resilience against it. Inoculation theory, a social psychology framework, suggests that pre-emptively conferring psychological resistance against malicious persuasion attempts can reduce susceptibility to misinformation across cultures. As the term suggests, the idea is to help the mind develop resistance now to influence it may encounter in the future. Just as medical vaccines or inoculations help the body build resistance to future infections by administering weakened doses of the harmful agent, inoculation theory seeks to teach people to tell fact from fiction through exposure to examples of weak, dichotomous arguments, manipulation tactics like emotionally charged language, case studies that draw parallels between truths and distortions, and so on. In showing people the difference, inoculation theory teaches them to be on the lookout for misinformation and manipulation even, or especially, when they least expect it.
The core difference between Prebunking and Debunking is that while the former is preventative and seeks to provide a broad-spectrum cover against misinformation, the latter is reactive and focuses on specific instances of misinformation. While Debunking is closely tied to fact-checking, Prebunking is tied to a wider range of specific interventions, some of which increase motivation to be vigilant against misinformation and others increase the ability to engage in vigilance with success.
There is much to be said in favour of the Prebunking approach because these interventions build the capacity to identify misinformation and recognise red flags. However, their success in practice may vary. It might be difficult to scale up Prebunking efforts and ensure they reach a larger audience. Sustainability is critical in ensuring that Prebunking measures maintain their impact over time. Continuous reinforcement and reminders may be required to ensure that individuals retain the skills and information they gained from Prebunking training activities. Misinformation tactics and strategies are always evolving, so it is critical that Prebunking interventions remain flexible and agile and respond promptly to developing challenges. This may be easier said than done, but with new misinformation and cyber threats developing frequently, it is a challenge that has to be addressed for Prebunking to be a successful long-term solution.
Encouraging people to be actively cautious while interacting with information, acquire critical thinking abilities, and reject the effect of misinformation requires a significant behavioural change over a relatively short period of time. Overcoming ingrained habits and prejudices, and countering a natural reluctance to change is no mean feat. Developing a widespread culture of information literacy requires years of social conditioning and unlearning and may pose a significant challenge to the effectiveness of Prebunking interventions.
Brief Analysis of Debunking
Debunking is a technique for identifying and informing people that certain news items or information are incorrect or misleading. It seeks to lessen the impact of misinformation that has already spread. The most popular kind of Debunking occurs through collaboration between fact-checking organisations and social media businesses. Journalists or other fact-checkers discover inaccurate or misleading material, and social media platforms flag or label it. Debunking is an important strategy for curtailing the spread of misinformation and promoting accuracy in the digital information ecosystem.
Debunking interventions are crucial in combating misinformation. However, there are certain challenges associated with the same. Debunking misinformation entails critically verifying facts and promoting corrected information. However, this is difficult owing to the rising complexity of modern tools used to generate narratives that combine truth and untruth, views and facts. These advanced approaches, which include emotional spectrum elements, deepfakes, audiovisual material, and pervasive trolling, necessitate a sophisticated reaction at all levels: technological, organisational, and cultural.
Furthermore, it is impossible to debunk all misinformation at any given time, which effectively means it is impossible to protect everyone at all times; at least some netizens will fall victim to manipulation despite our best efforts. Debunking is inherently reactive in nature, addressing misinformation after it has spread extensively. In terms of total harm done, this reactive method may be less successful than proactive strategies such as Prebunking. Misinformation producers operate swiftly and unexpectedly, making it difficult for fact-checkers to keep up with the rapid dissemination of erroneous or misleading information. Debunking may require repeated exposure to fact-checks to prevent erroneous beliefs from taking hold, implying that a single Debunking effort may not be enough to rectify misinformation. Debunking requires time and resources, and it is not possible to disprove every piece of misinformation that circulates at any particular moment. This constraint may cause certain misinformation to go unchecked, perhaps leading to unexpected effects. Misinformation on social media can spread quickly and may go viral faster than Debunking pieces or articles. This leads to a situation in which misinformation spreads like a virus while the antidote, the debunked facts, struggles to catch up.
Prebunking vs Debunking: Comparative Analysis
Prebunking interventions seek to educate people to recognise and reject misinformation before they are exposed to actual manipulation. Prebunking offers tactics for critical examination, lessening the individuals' susceptibility to misinformation in a variety of contexts. On the other hand, Debunking interventions involve correcting specific false claims after they have been circulated. While Debunking can address individual instances of misinformation, its impact on reducing overall reliance on misinformation may be limited by the reactive nature of the approach.
CyberPeace Policy Recommendations for Tech/Social Media Platforms
With the rising threat of online misinformation, tech/social media platforms can adopt an integrated strategy that deploys and supports both Prebunking and Debunking initiatives across all platforms, empowering users to recognise manipulative messaging through Prebunking and to learn the truth about specific false claims through Debunking interventions.
Gamified Inoculation: Tech/social media companies can encourage gamified inoculation campaigns, a competence-oriented approach to Prebunking misinformation. Such campaigns can be effective in immunising users against subsequent exposure to misinformation and can empower people to build the competencies needed to detect it.
Promotion of Prebunking and Debunking Campaigns through Algorithmic Mechanisms: Tech/social media platforms can ensure that their algorithms prioritise the distribution of Prebunking materials to users, boosting educational content that strengthens resistance to misinformation. Platform operators should likewise ensure that algorithms prioritise the visibility of Debunking content in order to counter the spread of erroneous information and deliver timely corrections. This can help both Prebunking and Debunking interventions reach a larger or better-targeted audience.
User Empowerment to Counter Misinformation: Tech/social media platforms can design user-friendly interfaces that give people access to Prebunking materials, quizzes, and instructional content to help them improve their critical thinking abilities. Furthermore, they can incorporate simple reporting tools for flagging misinformation, as well as links to fact-checking resources and corrections.
Partnership with Fact-Checking/Expert Organizations: Tech/social media platforms can facilitate Prebunking and Debunking initiatives and campaigns by collaborating with fact-checking and expert organisations, promoting such initiatives at a larger scale and ultimately fighting misinformation through joint efforts.
Conclusion
The threat of online misinformation is only growing with every passing day, and so deploying effective countermeasures is essential. Prebunking and Debunking are two such interventions. To sum up: Prebunking interventions try to increase resilience to misinformation, proactively lowering susceptibility to erroneous or misleading information and addressing broader patterns of misinformation consumption, while Debunking is effective in correcting a particular piece of misinformation and having a targeted impact on belief in individual false claims. An integrated approach involving both methods, backed by joint initiatives between tech/social media platforms and expert organizations, can ultimately help in fighting the rising tide of online misinformation and establishing a resilient online information landscape.
A dramatic video showing several people jumping from the upper floors of a building into what appears to be thick snow has been circulating on social media, with users claiming that it captures a real incident in Russia during heavy snowfall. In the footage, individuals can be seen leaping one after another from a multi-storey structure onto a snow-covered surface below, eliciting reactions ranging from amusement to concern. The claim accompanying the video suggests that it depicts a reckless real-life episode in a snow-hit region of Russia.
A thorough analysis by CyberPeace confirmed that the video is not a real-world recording but an AI-generated creation. The footage exhibits multiple signs of synthetic media, including unnatural human movements, inconsistent physics, blurred or distorted edges, and a glossy, computer-rendered appearance. In some frames, a partial watermark from an AI video generation tool is visible. Further verification using the Hive Moderation AI-detection platform indicated that 98.7% of the video is AI-generated, confirming that the clip is entirely digitally created and does not depict any actual incident in Russia.
Claim:
The video was shared on social media by an X (formerly Twitter) user ‘Report Minds’ on January 25, claiming it showed a real-life event in Russia. The post caption read: "People jumping off from a building during serious snow in Russia. This is funny, how they jumped from a storey building. Those kids shouldn't be trying this. It's dangerous."
The Desk used the InVid tool to extract keyframes from the viral video and conducted a reverse image search, which revealed multiple instances of the same video shared by other users with similar claims. Upon close visual examination, several anomalies were observed, including unnatural human movements, blurred and distorted sections, a glossy, digitally rendered appearance, and a partially concealed logo of the AI video generation tool ‘Sora AI’ visible in certain frames. Screenshots highlighting these inconsistencies were captured during the research.
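For readers who want to reproduce this kind of check, the sketch below shows a generic way to pull frames out of a video file so they can be fed to a reverse image search or an AI-detection service. It is a minimal illustration using the open-source OpenCV library, not the InVid tool itself; the file name viral_video.mp4 and the one-frame-per-second sampling rate are assumptions made for the example.

```python
# Minimal keyframe-extraction sketch (not the InVid tool itself).
# Assumes OpenCV is installed (pip install opencv-python) and that
# 'viral_video.mp4' is a local copy of the clip being checked.
import cv2

video = cv2.VideoCapture("viral_video.mp4")
fps = video.get(cv2.CAP_PROP_FPS) or 30  # fall back if FPS metadata is missing
frame_index = 0
saved = 0

while True:
    ok, frame = video.read()
    if not ok:
        break
    # Save roughly one frame per second for reverse image search.
    if frame_index % int(fps) == 0:
        cv2.imwrite(f"keyframe_{saved:03d}.jpg", frame)
        saved += 1
    frame_index += 1

video.release()
print(f"Saved {saved} keyframes for manual review or reverse image search.")
```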
The video was analyzed on Hive Moderation, an AI-detection platform, which confirmed that 98.7% of the content is AI-generated.
The viral video showing people jumping off a building into snow, claimed to depict a real incident in Russia, is entirely AI-generated. Social media users who shared it presented the digitally created footage as if it were real, making the claim false and misleading.
The IT giant Apple has alerted customers to the impending threat of "mercenary spyware" assaults in 92 countries, including India. These highly skilled attacks, which are frequently linked to both private and state actors (such as the NSO Group’s Pegasus spyware), target specific individuals, including politicians, journalists, activists and diplomats. In sharp contrast to consumer-grade malware, these attacks are in a league unto themselves: highly-customized to fit the individual target and involving significant resources to create and use.
As the incidence of such attacks rises, it is important that all persons, businesses, and officials equip themselves with information about how such mercenary spyware programs work, which methods are most commonly used, how these attacks can be prevented, and what one must do if targeted. Individuals and organizations can begin protecting themselves against these attacks by enabling "Lockdown Mode" to provide an extra layer of security to their devices, by frequently changing passwords, and by not opening suspicious URLs or attachments.
Introduction: Understanding Mercenary Spyware
Mercenary spyware is a special kind of spyware developed exclusively for law enforcement and government organizations. Such spyware is not available in app stores; it is built to attack particular individuals and requires a significant investment of resources and advanced technology. Mercenary spyware operators infiltrate systems using techniques such as phishing (sending malicious links or attachments), pretexting (manipulating individuals into sharing personal information) or baiting (using tempting offers). They often mount Advanced Persistent Threats (APTs), in which the attackers remain undetected for a prolonged period, continuously and stealthily infiltrating the target's network to steal data. Another method of gaining access is through zero-day vulnerabilities, that is, exploiting software flaws that have not yet been patched to compromise mobile devices. A well-known example of mercenary spyware is the infamous Pegasus by the NSO Group.
Actions: By Apple against Mercenary Spyware
Apple has introduced an advanced, optional protection feature in its newer product versions (including iOS 16, iPadOS 16, and macOS Ventura) to combat mercenary spyware attacks. These protections are aimed at users who are at heightened risk of targeted cyber attacks.
Apple released a statement on the matter, sharing, “mercenary spyware attackers apply exceptional resources to target a very small number of specific individuals and their devices. Mercenary spyware attacks cost millions of dollars and often have a short shelf life, making them much harder to detect and prevent.”
When Apple's internal threat intelligence and investigations detect these highly-targeted attacks, they take immediate action to notify the affected users. The notification process involves:
Displaying a "Threat Notification" at the top of the user's Apple ID page after they sign in.
Sending an email and iMessage alert to the addresses and phone numbers associated with the user's Apple ID.
Providing clear instructions on steps the user should take to protect their devices, including enabling "Lockdown Mode" for the strongest available security.
Apple stresses that these threat notifications are "high-confidence alerts" - meaning they have strong evidence that the user has been deliberately targeted by mercenary spyware. As such, these alerts should be taken extremely seriously by recipients.
Modus Operandi of Mercenary Spyware
Installing advanced surveillance software remotely and covertly.
Using zero-click or one-click attacks to take advantage of device vulnerabilities.
Gaining access to a variety of data on the device, including location, call logs, text messages, passwords, microphone, camera, and app information.
Installing the spyware by exploiting multiple system vulnerabilities on devices running particular iOS and Android versions.
Defenses against such spyware include patching the exploited vulnerabilities through security updates (e.g., CVE-2023-41991, CVE-2023-41992, CVE-2023-41993).
Further mitigation techniques include defensive DNS services, non-signature-based endpoint technologies, and frequent device reboots.
Prevention Measures: Safeguarding Your Devices
Turn on security measures: Make use of the security features that the device maker has supplied, such as Apple's Lockdown Mode, which is designed to sharply reduce the attack surface that sophisticated spyware can exploit on Apple devices such as iPhones.
Frequent software upgrades: Make sure the newest security and software updates are installed on your devices. This helps patch the holes that mercenary spyware could exploit.
Steer clear of suspicious links: Exercise caution when opening attachments or accessing links from unidentified sources; phishing links and attachments are a common way of installing mercenary spyware.
Limit app permissions: Reassess and restrict app permissions to avoid unwanted access to private information.
Use secure networks: To reduce the chance of data interception, connect to secure Wi-Fi networks and stay away from public or unprotected connections.
Install security applications: To identify and stop any spyware attacks, think about installing reliable security programs from reliable sources.
Be alert: If Apple or other device makers send you a threat notice, consider it carefully and take the advised security precautions.
Two-factor authentication: To provide an extra degree of protection against unwanted access, enable two-factor authentication (2FA) on your Apple ID and other significant accounts.
Consider additional security measures: For high-risk individuals, consider using additional security measures, such as encrypted communication apps and secure file storage services.
Way Forward: Strengthening Digital Defenses, Strengthening Democracy
People, businesses and administrations must prioritize cyber security measures and keep up with emerging dangers as mercenary spyware attacks continue to develop and spread. To effectively address the growing threat of digital espionage, cooperation between government agencies, cybersecurity specialists, and technology businesses is essential.
In the Indian context, the update carries significant policy implications and must inspire a discussion on legal frameworks for government surveillance practices and cyber security protocols in the nation. As the public becomes more informed about such sophisticated cyber threats, we can expect a greater push for oversight mechanisms and regulatory protocols. The misuse of surveillance technology poses a significant threat to individuals and institutions alike. Policy reforms concerning surveillance tech must be tailored to address the specific concerns of the use of such methods by state actors vs. private players.
There is a pressing need for electoral reforms that help safeguard democratic processes in the current digital age. There has been a paradigm shift in how political activities are conducted in current times: the advent of the digital domain has seen parties and leaders pivot their campaigning efforts to favor the online audience as enthusiastically as they campaign offline. Given that this is an election year, quite possibly the most significant one in modern Indian history, digital outreach and online public engagement are expected to be at an all-time high. And so, it is imperative to protect the electoral process against cyber threats so that public trust in the legitimacy of India’s democratic processes is rewarded and the digital domain is an asset, and not a threat, to good governance.