#FactCheck - Viral Claim of Highway in J&K Proven Misleading
Executive Summary:
A viral post on social media, shared with misleading captions, claims to show a National Highway being built with large bridges over a mountainside in Jammu and Kashmir. However, our investigation shows that the bridge is in China. Thus, the video is false and misleading.

Claim:
A circulating video claims to show National Highway 14 being constructed on a mountainside in Jammu and Kashmir.

Fact Check:
Upon receiving the post, we carried out a reverse image search. The image of the under-construction road, falsely linked to Jammu and Kashmir, was found to be from a different location: the G6911 Ankang-Laifeng Expressway in China. This highlights the need to verify information before sharing.


Conclusion:
The viral claim that the video shows an under-construction highway in Jammu and Kashmir is false; the footage is actually from China, not J&K. Misinformation like this can mislead the public, so take a brief moment to verify the facts before sharing viral posts. This highlights the importance of verifying information and relying on credible sources to combat the spread of false claims.
- Claim: Under-Construction Road Falsely Linked to Jammu and Kashmir
- Claimed On: Instagram and X (Formerly Known As Twitter)
- Fact Check: False and Misleading

Introduction
Ransomware is one of the most serious cyber threats, causing consequences such as financial losses, data loss, and reputational damage. In 2023, a new strain called Akira ransomware emerged. It has targeted enterprises across industries such as BFSI, construction, education, healthcare, manufacturing, real estate, and consulting, primarily based in the United States. Akira employs a double-extortion technique: it exfiltrates and encrypts sensitive data, then threatens victims with leaking or selling the data on the dark web if the ransom is not paid. The Akira ransomware gang has demanded ransoms ranging from $200,000 to millions of dollars.
Uncovering the Akira Ransomware operations and their targets
The Akira ransomware gang gains unauthorised access to computer systems and then uses sophisticated encryption algorithms to encrypt data. Once the encryption process is complete, the affected device or network can no longer access its files or use its data.
Files encrypted by Akira ransomware carry the extension “.akira”, and their icons appear as blank white pages. The gang also operates a data leak site to extort victims and drops a ransom note named “akira_readme.txt”.
Akira ransomware has stolen corporate data from various organisations, which the gang uses as leverage for high ransom demands, threatening to leak victims’ sensitive or corporate data in the public domain if the demanded amount is not paid. The gang has already leaked the data of four organisations, with leaks ranging in size from 5.9 GB to 259 GB.
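As a simple illustration of how a defender might triage a system for the indicators described above, the sketch below walks a directory tree looking for the “.akira” file extension and the “akira_readme.txt” ransom note. This is a minimal, hypothetical check, not a substitute for proper anti-malware tooling:

```python
import os

# Indicators of compromise reported for Akira ransomware:
# encrypted files gain the ".akira" extension, and a ransom
# note named "akira_readme.txt" is dropped in affected folders.
IOC_EXTENSION = ".akira"
IOC_RANSOM_NOTE = "akira_readme.txt"

def scan_for_akira_iocs(root):
    """Walk a directory tree and collect paths matching known Akira IOCs."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            lowered = name.lower()
            if lowered == IOC_RANSOM_NOTE or lowered.endswith(IOC_EXTENSION):
                hits.append(os.path.join(dirpath, name))
    return hits

if __name__ == "__main__":
    for path in scan_for_akira_iocs("."):
        print("Possible Akira indicator:", path)
```

A hit from a script like this only signals that files matching the reported naming pattern exist; confirming an actual infection requires a full anti-malware scan and incident-response investigation.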
Akira Ransomware gang communicating with Victims
The Akira gang provides each victim with a unique negotiation password to initiate communication through a chat system it has deployed for negotiating and demanding ransom from affected organisations. The ransom note, akira_readme.txt, explains how the victim’s files were affected and includes links to the Akira data leak site and negotiation site.
How Akira Ransomware is different from Pegasus Spyware
Pegasus, developed in 2011, belongs to one of the most powerful families of spyware. Once it has infected a phone, it can read text messages and emails, and it can turn the device into a surveillance tool: copying messages, harvesting photos, recording calls, and even recording the user through the phone’s camera or microphone. It can also track the user’s pinpoint location. In contrast, the newer Akira ransomware encrypts files to block access to data, then demands a ransom under the threat of leaking the data or withholding decryption.
How to recover from malware attacks
If affected by such a malware attack, you can use anti-malware tools such as SpyHunter 5 or Malwarebytes to scan your system. These security tools can scan your system and remove suspicious malware files and entries. If malware prevents you from running the scan in normal mode, run it in Safe Mode. Also try to find a relevant decryptor that can help you recover your files. Do not fall into the ransomware gang’s trap: there is no guarantee that they will help you recover your data, or refrain from leaking it, after the ransom is paid.
Best practices to be safe from such ransomware attacks
- Maintain regular, offline backups of critical data and test their restoration.
- Keep operating systems and software patched and up to date.
- Use strong, unique passwords and enable multi-factor authentication.
- Be cautious with email attachments and links to avoid phishing.
- Restrict administrative privileges and segment networks where possible.
Conclusion
The Akira ransomware operation poses a serious threat to organisations worldwide, and there is a pressing need for robust cybersecurity measures to safeguard networks and sensitive data. Organisations must keep their software systems updated and regularly back up data to a secure network. Instead of paying the ransom, which may itself be unlawful and offers no guarantees, report the incident to law enforcement agencies and consult cybersecurity professionals about recovery methods.

Introduction
The Ministry of Electronics and Information Technology recently released the IT Intermediary Guidelines 2023 Amendment for social media and online gaming. The notification is crucial at a time when the drafting of the Digital India Bill is underway. There is no denying that this amendment, part of a series of measures amending and adding new provisions, will significantly improve the dynamics of cyberspace in India in terms of reporting, grievance redressal, accountability, and the protection of digital rights and duties.
What is the Amendment?
The amendment marks a key development for cyberspace, as it introduces fact-checking, a crucial aspect of verifying information on the various platforms prevailing in cyberspace. Misinformation and disinformation rose significantly during the Covid-19 pandemic, making fact-checking more important than ever. Policymakers have taken this into consideration and incorporated it into the Intermediary Guidelines. The key features of the guidelines are as follows –
- A definition of the phrase “online game” has been added: “a game that is offered on the Internet and is accessible by a user through a computer resource or an intermediary.”
- A clause has been added that emphasises that if an online game poses a risk of harm to the user, intermediaries and complaint-handling systems must advise the user not to host, display, upload, modify, publish, transmit, store, update, or share any data related to that risky online game.
- A proviso to Rule 3(1)(f) has been added, which states that if an online gaming intermediary has provided users access to any legal online real money game, it must promptly notify its users of the change, within 24 hours.
- Sub-rules have been added to Rule 4 that focus on any legal online real money game and require large social media intermediaries to exercise further due diligence. In certain situations, online gaming intermediaries:
- Are required to display a demonstrable and obvious mark of verification of such online game by an online gaming self-regulatory organisation on such permitted online real money game
- Must not themselves finance users, or enable third-party financing of users, for playing such games.
- Verification of real money online gaming has been added to Rule 4-A.
- The Ministry may name as many self-regulatory organisations for online gaming as it deems necessary for confirming an online real-money game.
- Each online gaming self-regulatory body will prominently publish on its website/mobile application the procedure for filing complaints and the appropriate contact information.
- After reviewing an application, the self-regulatory authority may declare a real money online game to be a legal game if it is satisfied that:
- The game does not involve wagering on its outcome.
- The game complies with the regulations governing the legal age at which a person can enter into a contract.
- The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 have a new rule 4-B (Applicability of certain obligations after an initial period) that states that the obligations of the rule under rules 3 and 4 will only apply to online games after a three-month period has passed.
- According to Rule 4-C (Obligations in Relation to Online Games Other Than Online Real Money Games), the Central Government may direct the intermediary to make necessary modifications without affecting the main idea if it deems it necessary in the interest of India’s sovereignty and integrity, the security of the State, or friendship with foreign States.
- Intermediaries, such as social media companies or internet service providers, will have to take action against content identified by the government’s fact-checking unit or risk losing their “safe harbour” protections under Section 79 of the IT Act, which lets intermediaries escape liability for what third parties post on their websites. This is problematic and unacceptable. Additionally, these notified revisions can circumvent the takedown-order process described in Section 69A of the IT Act, 2000, and they run counter to the ruling in Shreya Singhal v. Union of India (2015), which established precise rules for content blocking.
- The government should not be able to decide whether material is “fake” or “false” without a right of appeal or judicial oversight, since such a power could be abused to suppress scrutiny or investigation by media organisations. Government takedown orders have been issued for critical remarks or opinions posted on social media sites; most platforms have to abide by them, and only a few, like Twitter, have challenged them in court.
Conclusion
The new rules briefly cover fact-checking, content takedown by the government, and the relevance and scope of Sections 69A and 79 of the Information Technology Act, 2000. Hence, it is pertinent that intermediaries maintain compliance with the rules to ensure the regulations remain sustainable and efficient for the future. Despite these rules, the responsibility of netizens cannot be neglected, and active civic participation coupled with efficient regulation will go a long way in safeguarding the Indian cyber ecosystem.

Introduction
The advent of AI-driven deepfake technology has facilitated the creation of explicit counterfeit videos for sextortion, and there has been an alarming increase in the use of artificial intelligence to create fake explicit images or videos for this purpose.
What is AI Sextortion and Deepfake Technology
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently, there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images, as the underlying algorithms have become more sophisticated, allowing more seamless and realistic manipulations. Moreover, the accessibility of AI tools and resources has increased, with open-source software and cloud-based services readily available to anyone. This has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion.

The proliferation of sharing content on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time to strengthen efforts to raise awareness about the risks of AI sextortion, develop detection and prevention tools, and reinforce legal frameworks to address these emerging threats to individuals’ privacy, safety, and well-being.
There is a need for technological solutions that develop and deploy advanced AI-based detection tools to identify and flag AI-generated deepfake content on platforms and services, along with collaboration with technology companies to integrate such solutions.
Collaboration with social media platforms is also needed. Social media platforms and technology companies can reframe and enforce community guidelines and policies against disseminating AI-generated explicit content, and can foster cooperation in developing robust content moderation systems and reporting mechanisms.
There is a need to strengthen the legal frameworks to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content. Ensure adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
Prevention and Awareness: Proactive measures raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
Early Detection and Reporting: Proactive measures employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
Legal Frameworks and Regulations: Proactive measures strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose offender penalties.
Technological Solutions: Proactive measures focus on developing tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
International Cooperation: Proactive measures foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
Support for Victims: Proactive measures provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.
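One concrete building block behind the detection-and-reporting measures above is hash matching: a platform compares the hash of an uploaded file against a shared list of hashes of content already identified as abusive. The sketch below illustrates the idea with a hypothetical hash list; the names and entries are illustrative, not a real database:

```python
import hashlib

# Hypothetical hash list standing in for the shared databases of
# known abusive or manipulated content that platforms and safety
# organisations maintain in practice.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"placeholder-known-bad-file").hexdigest(),
}

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def is_known_bad(data: bytes) -> bool:
    """Flag an upload whose exact bytes match a known-bad hash."""
    return sha256_of(data) in KNOWN_BAD_HASHES
```

Note that exact cryptographic hashing only catches byte-identical copies; production moderation systems typically rely on perceptual hashing, which remains robust to re-encoding, resizing, or cropping of the same image or video.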

Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-generated manipulations. This violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, thereby increasing the potential for humiliation, harassment, and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to their reputation and well-being.
Targeting Teenagers: Targeting teenagers with extortion demands is a particularly alarming aspect of AI sextortion. Teenagers are especially vulnerable due to their heavy use of social media platforms for sharing personal information and images, and perpetrators exploit this exposure to manipulate and coerce them.
Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing the concern of AI sextortion requires a multi-faceted approach, including technological advancements in detection and prevention, legal frameworks to hold offenders accountable, awareness about the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.