Advisory for APS School Students
Overview
The Army Welfare Education Society (AWES) has informed parents and students that a scam is targeting students of Army schools. The scamster approaches students by faking a female or male voice and asks for their personal information and photos, claiming that the details are needed for an Independence Day event being organised by the Army Welfare Education Society. AWES has cautioned parents to beware of these calls from scammers.
Students of Army schools in Jammu & Kashmir and Noida are receiving calls from the scamster asking them to share sensitive information. Students across the country are getting calls and WhatsApp messages from two numbers ending in 1715 and 2167. The scamsters pose as teachers and ask for students’ names on the pretext of adding them to WhatsApp groups. They then send form links to the groups and ask students to fill out the forms, seeking further sensitive information.
Do’s
- Do verify the caller’s identity.
- Do block the caller if you find the call suspicious.
- Do be careful while sharing personal information.
- Do inform the school authorities if you receive such calls or messages from people posing as teachers.
- Do check the legitimacy of any agency or organisation before sharing details.
- Do record calls that ask for personal information.
- Do inform parents about scam calls.
- Do cross-check any caller who asks for crucial information.
- Do make others aware of the scam.
Don’ts
- Don’t answer calls from anonymous or unknown numbers.
- Don’t share personal information with anyone.
- Don’t share OTPs with anyone.
- Don’t open suspicious links.
- Don’t fill out any forms asking for personal information.
- Don’t confirm your identity until you know who the caller is.
- Don’t reply to messages asking for financial information.
- Don’t visit websites at the prompting of an unverified caller.
- Don’t share bank details or passwords.
- Don’t make payments in response to a prompting call.

Introduction
Digital arrests are a form of scam involving the digital restraint of individuals. The restraint can range from restricting access to accounts and digital platforms to preventing further digital activity, or keeping the victim confined and monitored on a video call. These scams typically target vulnerable individuals who are unfamiliar with digital fraud tactics and are therefore more susceptible to manipulation. Victims are usually accused of serious crimes such as drug trafficking, money laundering, or document falsification, and the scammers frighten them into believing either that their identities were used to commit these crimes or that they committed the crimes themselves. There has recently been an uptick in such digital fraud scams in India, highlighting a growing concern.
The Legality of Digital Arrests in India
There is no legal provision for law enforcement to conduct ‘arrests’ via video calls or online monitoring; if you receive such a call, it is a clear scam. The recently enacted criminal laws contain no provision for law enforcement agencies to conduct a digital arrest. The law only provides for the service of summons and the conduct of proceedings in electronic mode.
The Bharatiya Nagarik Suraksha Sanhita (BNSS), 2023 provides for summons to be served electronically under Section 63, which defines the form of summons. It states that every summons served electronically shall be encrypted and bear the image of the seal of the Court or a digital signature. Further, under Section 532 of the BNSS, trials and proceedings may be held in electronic mode, by use of electronic communication or audio-video electronic means.
Modus Operandi
In a digital arrest scam, the scammer connects with the victim via video call (WhatsApp, Skype, etc.) over the victim’s alleged involvement in crimes (financial offences, drug trafficking, etc.) on bogus charges. The victim is told that an arrest is imminent and that, until the arresting officers arrive, they must remain on the call under digital surveillance and must not contact anyone during the ongoing ‘investigation’.
During this period, the scammers collect information from the victim to ‘confirm’ their identity and create the impression that multiple senior officials are on the victim’s case, investigating it thoroughly. By this time the victim, scared out of their wits, sits through the ‘arrest’, and it is then that the scammers, posing as law enforcement officials, suggest that the arrest can be avoided by paying certain ‘fines’ into accounts they specify. The monitoring continues until the victim makes the transfers to the accounts provided by the scammers. These are the common manipulation tactics used in digital arrest fraud.
Recent Cyber Arrest Cases
- Recently a 35-year-old NBCC official was duped of Rs 55 lakh in a 'digital arrest' scam. Posing as customs officials, fraudsters claimed her details were linked to intercepted illegal items and a pending arrest. They kept her on video calls, convincing her to transfer Rs 55 lakh to avoid money laundering charges. After the transfer, the scammers vanished. A police investigation traced the funds to a fake company, leading to the arrest of suspects.
- Another recent case involved a neurologist who was duped of Rs 2.81 crore in a ‘digital arrest’ scam. Fraudsters claimed her phone number and Aadhaar were linked to accounts transferring funds to an individual. Under pressure, she was convinced to undergo “verification” and made multiple transactions over two days. The scammers threatened legal consequences for money laundering if she didn’t comply. A police investigation is ongoing, and her immense financial loss highlights the severity of this cybercrime.
- In another case, the victim was duped of Rs 7.67 crore in a prolonged ‘digital arrest’ scam spanning three months. Fraudsters posing as TRAI officials claimed there were complaints against her phone number and threatened to suspend it, alleging illegal use of another number linked to her Aadhaar. Pressured and manipulated through video calls, the victim was coerced into transferring large sums, even taking out an Rs 80 lakh loan. The case is under investigation as authorities pursue the cybercriminals behind the massive fraud.
Best Practices
- Do not panic when you get any calls where sudden unexpected news is shared with you. Scammers thrive on the panic that they create.
- Do not share personal details such as your Aadhaar number or PAN number with unknown or suspect entities, and never share financial information such as credit card numbers, OTPs, or passwords with anyone.
- If individuals contact you claiming to be government officials, always verify their identities by contacting the concerned agency through its official channels.
- Report and block any fraudulent communications you receive and mark them as spam. This also warns other users when they see the caller ID flagged as fraud or spam.
- If you have been defrauded, report the incident to the authorities so that action can be taken and the fraudsters can be apprehended.
- Do not transfer any money as part of ‘fines’ or ‘dues’ to the accounts that these calls or messages link to.
- In case of any threat, issue or discrepancy, file a complaint at cybercrime.gov.in or helpline number 1930. You can also seek assistance from the CyberPeace helpline at +91 9570000066.
References:
- https://www.cyberpeace.org/resources/blogs/digital-arrest-fraud
- https://www.business-standard.com/india-news/what-is-digital-house-arrest-find-out-how-to-avoid-this-new-scam-124052400799_1.html
- https://www.the420.in/ias-ips-officers-major-generals-doctors-and-professors-fall-victim-to-digital-arrest-losing-crores-stay-alert-read-5-real-cases-inside/
- https://indianexpress.com/article/cities/delhi/senior-nbcc-official-duped-in-case-of-digital-arrest-3-arrested-delhi-police-9588418/#:~:text=Of%20the%20duped%20amount%2C%20Rs,a%20Delhi%20police%20officer%20said (case study 1)
- https://timesofindia.indiatimes.com/city/lucknow/lucknow-sgpgims-professor-duped-of-rs-2-81-crore-in-digital-arrest-scam/articleshow/112521530.cms (case study 2)
- https://timesofindia.indiatimes.com/city/jaipur/bits-prof-duped-of-7-67cr-cops-want-cbi-probe-in-case/articleshow/109514200.cms (case study 3)

Introduction
Deepfake technology, a term combining “deep learning” and “fake,” uses highly developed artificial intelligence, specifically generative adversarial networks (GANs), to produce remarkably lifelike computer-generated content, including audio and video recordings. Because it can produce credible false material, there are concerns about its misuse, including identity theft and the spread of fake information. Cybercriminals leverage AI tools and technologies for malicious activities and various cyber frauds; through the misuse of advanced technologies such as AI, deepfakes, and voice cloning, new cyber threats have emerged.
India Among the Top Targets for Deepfake Attacks
According to the 2023 Identity Fraud Report by Sumsub, a well-known digital identity verification company headquartered in the UK, India, Bangladesh, and Pakistan have become significant participants in the Asia-Pacific identity fraud landscape, with India’s fraud rate growing by 2.99% from 2022 to 2023. They are among the top ten nations most affected by the misuse of deepfake technology. The report also finds that deepfake technology is being used in a significant number of cybercrimes, a trend expected to continue in the coming year. This highlights the need for increased cybersecurity awareness and safeguards as identity fraud poses a growing concern in the region.
How Deepfakes Work
Deepfakes are a fascinating and worrisome phenomenon of the modern digital landscape. These realistic-looking but wholly artificial videos have become quite popular in recent months and have ingrained themselves in the very fabric of our digital civilisation. The attraction is irresistible, and the consequences are enormous.
Deep Learning Algorithms
Deepfake systems analyse large datasets, frequently pictures or videos of a target person, using deep learning techniques, especially Generative Adversarial Networks. By learning from and mimicking gestures, speech patterns, and facial expressions, these algorithms extract the information they need from the data, and generative models then create material that blends seamlessly with the target context. Misuse of this technology, including the dissemination of false information, is a worry. As deepfake capabilities improve, sophisticated detection techniques become increasingly necessary to separate real content from manipulated content.
Generative Adversarial Networks
Deepfake technology is based on GANs, which use a dual-network design. Made up of a generator and a discriminator, the two networks engage in an ongoing cycle of competition: the generator aims to create fake material, such as realistic voice patterns or facial expressions, while the discriminator assesses how authentic the generated content is. This continuous cycle of creation and evaluation steadily improves the deepfake’s effectiveness over time: the discriminator adjusts to become more perceptive, and the generator adapts to produce ever more convincing content.
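The adversarial loop described above can be illustrated with a deliberately tiny sketch. This toy is an illustration only, not how real deepfakes are built: the “generator” here is a two-parameter linear map producing 1-D numbers, the “discriminator” is a single logistic unit, and the gradients are derived by hand; real systems use deep neural networks over images or audio.

```python
import math
import random

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def train_toy_gan(steps=5000, lr=0.01, seed=0):
    """Toy 1-D GAN: generator x = w_g*z + b_g vs. a logistic discriminator."""
    rng = random.Random(seed)
    w_g, b_g = 1.0, 0.0   # generator parameters
    w_d, b_d = 0.0, 0.0   # discriminator parameters
    for _ in range(steps):
        x_real = rng.gauss(4.0, 1.0)   # one "real" sample
        z = rng.gauss(0.0, 1.0)        # generator noise input
        x_fake = w_g * z + b_g         # one "fake" sample

        # Discriminator step: push D(real) toward 1 and D(fake) toward 0
        # (hand-derived gradients of the loss -log D(real) - log(1 - D(fake))).
        d_real = sigmoid(w_d * x_real + b_d)
        d_fake = sigmoid(w_d * x_fake + b_d)
        w_d -= lr * ((d_real - 1.0) * x_real + d_fake * x_fake)
        b_d -= lr * ((d_real - 1.0) + d_fake)

        # Generator step: push D(fake) toward 1, i.e. fool the discriminator
        # (gradient of -log D(fake) with respect to the generator's output).
        d_fake = sigmoid(w_d * x_fake + b_d)
        dx = -(1.0 - d_fake) * w_d
        w_g -= lr * dx * z
        b_g -= lr * dx
    return w_g, b_g, w_d, b_d
```

Each iteration mirrors the push-pull just described: the discriminator first sharpens its real-versus-fake judgement, then the generator shifts its output in whatever direction the updated discriminator currently finds more “real”.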
Effect on Community
The extensive use of deepfake technology has serious ramifications for several industries. As the technology develops, immediate action is required to manage its effects appropriately and to promote the ethical use of such technologies, including strict laws and technological safeguards. Deepfakes that mimic prominent politicians’ statements or videos are a serious issue, since they have the potential to spread instability and make it difficult for the public to understand the true state of politics. In the entertainment industry, deepfake technology can generate entirely new characters or bring stars back to life for posthumous roles. As it becomes harder and harder to tell fake content from authentic content, it becomes simpler for malicious actors to trick people and businesses.
Ongoing Deepfake Assaults In India
Deepfake videos continue to target popular celebrities, with Priyanka Chopra the most recent victim of this unsettling trend. Priyanka’s deepfake adopts a different strategy from other examples involving actresses like Rashmika Mandanna, Katrina Kaif, Kajol, and Alia Bhatt. Rather than editing her face into contentious situations, the misleading video keeps her appearance the same but modifies her voice, replacing real interview quotes with made-up commercial phrases. The deceptive video shows Priyanka promoting a product and talking about her annual income, highlighting the worrying development of deepfake technology and its possible effects on prominent personalities.
Actions Considered by Authorities
A PIL was filed before the Delhi High Court requesting that access to websites that produce deepfakes be blocked. The petitioner’s counsel argued that the government should at the very least establish guidelines to hold individuals accountable for misuse of deepfake and AI technology. He also proposed that websites be required to label AI-generated content as such and be prevented from producing illegal content. A division bench highlighted how complicated the problem is and suggested that the Centre arrive at a balanced solution without infringing the right to freedom of speech and expression on the internet.
Information Technology Minister Ashwini Vaishnaw stated that the government would implement new laws and guidelines to curb the dissemination of deepfake content. He presided over a meeting with social media companies to discuss the problem of deepfakes. “We will begin drafting regulation immediately, and soon we are going to have a fresh set of regulations for deepfakes. This might come by way of amending the current framework, ushering in new rules, or a new law,” he stated.
Prevention and Detection Techniques
To effectively combat the growing threat posed by the misuse of deepfake technology, people and institutions should prioritise developing critical thinking abilities, carefully examining visual and auditory cues for discrepancies, making use of tools like reverse image search, keeping up with the latest deepfake trends, and rigorously fact-checking against reputable media sources. Important steps for improving resistance to deepfake threats include putting strong security policies in place, integrating cutting-edge deepfake detection technologies, supporting the development of ethical AI, and encouraging candid communication and cooperation. By combining these tactics and adapting to the constantly changing terrain, we can manage the problems presented by deepfake technology effectively and mindfully.
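One of the tools mentioned above, reverse image search, rests on perceptual hashing: visually similar images yield similar hashes, so a large hash distance between a circulating image and its purported original is a manipulation signal. Below is a minimal average-hash (“aHash”) sketch in pure Python; it assumes the images have already been downscaled to 8x8 grayscale grids, a preprocessing step that real pipelines perform with an imaging library.

```python
# Minimal average-hash (aHash) sketch in pure Python.
# Assumes inputs are already 8x8 grayscale grids (values 0-255);
# real pipelines resize with a library such as Pillow first.

def average_hash(pixels):
    """Return a 64-bit hash: 1 where a pixel is above the image mean, else 0."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return [1 if p > avg else 0 for p in flat]

def hamming(h1, h2):
    """Count differing bits; small distances indicate similar images."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Synthetic demo: an 8x8 gradient "original" and a copy with one corner tampered.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
tampered = [row[:] for row in original]
for r in (0, 1):
    for c in (0, 1):
        tampered[r][c] = 255  # simulated edit in the top-left corner

print(hamming(average_hash(original), average_hash(original)))  # 0
print(hamming(average_hash(original), average_hash(tampered)))  # nonzero
```

An identical image pair hashes to distance zero, while the tampered copy produces a clearly nonzero distance, which is the basic similarity signal reverse image search engines exploit at scale.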
Conclusion
Advanced artificial-intelligence-powered deepfake technology produces extraordinarily lifelike computer-generated content, raising both creative and ethical questions. Misuse of deepfakes presents major difficulties such as identity theft and the propagation of misleading information, as demonstrated by examples in India like the recent deepfake video involving Priyanka Chopra. To counter this danger, it is important to develop critical thinking abilities, use detection strategies such as analysing audio quality and facial expressions, and keep up with current trends. A thorough strategy that incorporates fact-checking, preventative tactics, and awareness-raising is necessary to protect against the negative effects of deepfake technology, alongside strong security policies, cutting-edge detection technologies, the development of ethical AI, and candid communication and cooperation. By combining these tactics and adjusting to the constantly changing terrain, we can together manage the challenges of deepfake technology and create a truly cyber-safe environment for netizens.
References:
- https://yourstory.com/2023/11/unveiling-deepfake-technology-impact
- https://www.indiatoday.in/movies/celebrities/story/deepfake-alert-priyanka-chopra-falls-prey-after-rashmika-mandanna-katrina-kaif-and-alia-bhatt-2472293-2023-12-05
- https://www.csoonline.com/article/1251094/deepfakes-emerge-as-a-top-security-threat-ahead-of-the-2024-us-election.html
- https://timesofindia.indiatimes.com/city/delhi/hc-unwilling-to-step-in-to-curb-deepfakes-delhi-high-court/articleshow/105739942.cms
- https://www.indiatoday.in/india/story/india-among-top-targets-of-deepfake-identity-fraud-2472241-2023-12-05
- https://sumsub.com/fraud-report-2023/

Executive Summary:
A manipulated image showing someone making an offensive gesture towards Prime Minister Narendra Modi is circulating on social media. The original photo, however, shows no such behaviour towards the Prime Minister. The CyberPeace Research Team conducted an analysis and found that the genuine image was published in a Hindustan Times article in May 2019, where no rude gesture is visible. A comparison of the viral and authentic images clearly shows the manipulation. The Hitavada also published the same image in 2019, and further investigation revealed that ABP Live carried the image as well.

Claims:
A picture showing an individual making a derogatory gesture towards Prime Minister Narendra Modi is being widely shared across social media platforms.



Fact Check:
Upon receiving the news, we immediately ran a reverse image search and found an article by Hindustan Times in which a similar photo was published with no sign of any obscene gesture towards PM Modi.

ABP Live and The Hitavada also published the same image on their websites in May 2019.


Comparing the viral photo with the photo found on official news websites, we found that the two are nearly identical, except for the derogatory gesture present in the viral image.
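A comparison like the one just described can be automated: a per-pixel difference between the viral image and the original localises exactly where an edit was made. The sketch below is a simplified illustration that assumes both images are aligned, same-size grayscale grids; a real workflow would first align and resize them with an image library.

```python
# Locate an edited region by differencing two aligned grayscale images.
# Assumes both inputs are same-size 2D lists of 0-255 values.

def changed_region(a, b, threshold=10):
    """Return the bounding box (top, left, bottom, right) of pixels whose
    difference exceeds threshold, or None if the images match."""
    coords = [(r, c)
              for r, row in enumerate(a)
              for c, pa in enumerate(row)
              if abs(pa - b[r][c]) > threshold]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return (min(rows), min(cols), max(rows), max(cols))

# Synthetic demo: a blank 4x4 "original" and a copy edited in the middle.
original = [[0] * 4 for _ in range(4)]
edited = [row[:] for row in original]
edited[1][1], edited[1][2] = 200, 180  # simulated pasted-in gesture

print(changed_region(original, original))  # None
print(changed_region(original, edited))    # (1, 1, 1, 2)
```

The nonzero bounding box pinpoints the tampered area, which is exactly the kind of localised discrepancy a manual side-by-side comparison of the viral and original photos reveals.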

From this, we conclude that someone took the original image, published in May 2019, edited in a disrespectful hand gesture, and circulated it; the recently viral image has no connection with reality.
Conclusion:
In conclusion, the manipulated picture circulating online showing someone making a rude gesture towards Prime Minister Narendra Modi has been debunked by the CyberPeace Research Team. The viral image is merely an edited version of the original published in 2019. This demonstrates the need for all social media users to verify information and facts before sharing, to prevent the spread of fake content. Hence, the viral image is fake and misleading.
- Claim: A picture shows someone making a rude gesture towards Prime Minister Narendra Modi
- Claimed on: X, Instagram
- Fact Check: Fake & Misleading