# FactCheck - Viral Clip and Newspaper Article Claiming 18% GST on 'Good Morning' Messages Debunked
Executive Summary
A recent viral message on social media platforms such as X and Facebook claims that the Indian Government will start charging 18% GST on "good morning" texts from April 1, 2024. This is misinformation. The message includes a newspaper clipping and a video that were actually part of a fake news report from 2018. The newspaper article from Navbharat Times, published on March 2, 2018, was clearly intended as a joke. In addition, we found that an ABP News video originally aired on March 20, 2018 was part of a fact-checking segment that debunked the rumour of a GST on greetings.

Claims:
The claim circulating online suggests that the Government will start applying 18% GST on all "Good Morning" texts sent through mobile phones from April 1 this year, with the tax added to monthly mobile bills.

Fact Check:
When we received the claim, we ran relevant keyword searches. We found a Facebook video by ABP News titled Viral Sach: ‘Govt to impose 18% GST on sending good morning messages on WhatsApp?’


We watched the full video and found that the news is six years old. The Research Wing of CyberPeace Foundation also found the full version of the widely shared ABP News clip on its website, dated March 20, 2018. The video showed a newspaper clipping from Navbharat Times, published on March 2, 2018, carrying a humorous article with the tagline "Bura na mano, Holi hai." The recent viral image is a cutout from the same 2018 ABP News broadcast.
Hence, the image now spreading widely is fake and misleading.
Conclusion:
The viral message claiming that the government will impose GST (Goods and Services Tax) on "Good morning" messages is completely fake. The newspaper clipping used in the message is from an old comic article published by Navbharat Times, while the clip and image from ABP News have been taken out of context to spread false information.
Claim: India will introduce a Goods and Services Tax (GST) of 18% on all "good morning" messages sent through mobile phones from April 1, 2024.
Claimed on: Facebook, X
Fact Check: Fake; the clipping is from a comic article published by Navbharat Times on 2 March 2018
Related Blogs

In the vast, uncharted territories of the digital world, a sinister phenomenon is proliferating at an alarming rate. It's a world where artificial intelligence (AI) and human vulnerability intertwine in a disturbing combination, creating a shadowy realm of non-consensual pornography. This is the world of deepfake pornography, a burgeoning industry that is as lucrative as it is unsettling.
According to a recent assessment, at least 100,000 deepfake porn videos are readily available on the internet, with hundreds, if not thousands, being uploaded daily. This staggering statistic prompts a chilling question: what is driving the creation of such a vast number of fakes? Is it merely for amusement, or is there a more sinister motive at play?
Recent Trends and Developments
An investigation by India Today’s Open-Source Intelligence (OSINT) team reveals that deepfake pornography is rapidly morphing into a thriving business. AI enthusiasts, creators, and experts are lending their expertise, investors are injecting money, and companies ranging from small financial firms to giants like Google, VISA, Mastercard, and PayPal are being misused in this dark trade. Synthetic porn has existed for years, but advances in AI and the increasing availability of technology have made it easier, and more profitable, to create and distribute non-consensual sexually explicit material. The 2023 State of Deepfake report by Home Security Heroes reveals a staggering 550% increase in the number of deepfakes compared to 2019.
What’s the Matter with Fakes?
But why should we be concerned about these fakes? The answer lies in the real-world harm they cause. India has already seen cases of extortion carried out by exploiting deepfake technology. An elderly man in UP’s Ghaziabad, for instance, was tricked into paying Rs 74,000 after receiving a deepfake video of a police officer. The situation could have been even more serious had the perpetrators decided to create deepfake porn of the victim.
The danger is particularly severe for women. The 2023 State of Deepfake Report estimates that at least 98 percent of all deepfakes are porn and 99 percent of their victims are women. A Harvard University study refrained from using the term “pornography” for the creation, sharing, or threatened sharing of sexually explicit images and videos of a person without their consent. “It is abuse and should be understood as such,” it states.
Based on interviews with victims of deepfake porn conducted last year, the study said 63 percent of participants described experiences of “sexual deepfake abuse” and reported that their sexual deepfakes had been monetised online. It also found “sexual deepfake abuse to be particularly harmful because of the fluidity and co-occurrence of online offline experiences of abuse, resulting in endless reverberations of abuse in which every aspect of the victim’s life is permanently disrupted”.
Creating deepfake porn is disturbingly easy. There are largely two types of deepfakes: one featuring faces of humans and another featuring computer-generated hyper-realistic faces of non-existing people. The first category is particularly concerning and is created by superimposing faces of real people on existing pornographic images and videos—a task made simple and easy by AI tools.
During the investigation, platforms were encountered hosting deepfake porn of stars like Jennifer Lawrence, Emma Stone, Jennifer Aniston, Aishwarya Rai, and Rashmika Mandanna, as well as TV actors and influencers like Aanchal Khurana, Ahsaas Channa, Sonam Bajwa, and Anveshi Jain. It takes a few minutes and as little as Rs 40 for a user to create a high-quality fake porn video of 15 seconds on platforms like FakeApp and FaceSwap.
The Modus Operandi
These platforms brazenly flaunt their business associations and hide behind frivolous disclaimers stating that the content is “meant solely for entertainment” and “not intended to harm or humiliate anyone”. The irony of these disclaimers is not lost on anyone, especially when the platforms host thousands of non-consensual deepfake pornographic videos.
As fake porn content and its consumers surge, deepfake porn sites are rushing to forge collaborations with generative AI service providers and have integrated their interfaces for enhanced interoperability. The promise and potential of making quick bucks have given birth to step-by-step guides, video tutorials, and websites that offer tools and programs, recommendations, and ratings.
Nearly 90 per cent of all deepfake porn is hosted by dedicated platforms that charge for long-duration premium fake content, create porn of whoever a user wants, and take requests for celebrities. To encourage creators further, they enable them to monetise their content.
One such website, Civitai, has a system in place that pays “rewards” to creators of AI models that generate “images of real people”, including ordinary people. It also enables users to post AI images, prompts, model data, and LoRA (low-rank adaptation of large language models) files used in generating the images. Model data designed for adult content is gaining great popularity on the platform, and celebrities are not the only targets: ordinary people are equally susceptible.
Access to premium fake porn, like any other content, requires payment. But how can a gateway process payment for sexual content that lacks consent? It seems financial institutes and banks are not paying much attention to this legal question. During the investigation, many such websites accepting payments through services like VISA, Mastercard, and Stripe were found.
Those who have failed to register or partner with these fintech giants have found a way out. While some direct users to third-party sites, others use the personal PayPal accounts of their employees or stakeholders to collect money manually, which potentially violates the platform's terms of use banning the sale of “sexually oriented digital goods or content delivered through a digital medium.”
Among others, the MakeNude.ai web app – which lets users “view any girl without clothing” in “just a single click” – has an interesting method of circumventing restrictions around the sale of non-consensual pornography. The platform has partnered with Ukraine-based Monobank and Dublin’s BetaTransfer Kassa which operates in “high-risk markets”.
BetaTransfer Kassa admits to serving “clients who have already contacted payment aggregators and received a refusal to accept payments, or aggregators stopped payments altogether after the resource was approved or completely freeze your funds”. To make payment processing easy, MakeNude.ai seems to be exploiting the donation ‘jar’ facility of Monobank, which is often used by people to donate money to Ukraine to support it in the war against Russia.
The Indian Scenario
India is currently on its way to designing dedicated legislation to address issues arising out of deepfakes, though existing general laws requiring platforms to remove offensive content also apply to deepfake porn. However, prosecuting and convicting offenders is extremely difficult for law enforcement agencies, as this is a borderless crime that can involve several countries at once.
A victim can register a police complaint under Sections 66D and 66E of the IT Act, 2000. The recently enacted Digital Personal Data Protection Act, 2023 aims to protect the digital personal data of users. The Union Government has also issued an advisory to social media intermediaries to identify misinformation and deepfakes. The comprehensive law promised by Union IT Minister Ashwini Vaishnaw is expected to address these challenges.
Conclusion
In the end, the unsettling dance of AI and human vulnerability continues in the dark web of deepfake pornography. It's a dance that is as disturbing as it is fascinating, a dance that raises questions about the ethical use of technology, the protection of individual rights, and the responsibility of financial institutions. It's a dance that we must all be aware of, for it is a dance that affects us all.
References
- https://www.indiatoday.in/india/story/deepfake-porn-artificial-intelligence-women-fake-photos-2471855-2023-12-04
- https://www.hindustantimes.com/opinion/the-legal-net-to-trap-peddlers-of-deepfakes-101701520933515.html
- https://indianexpress.com/article/opinion/columns/with-deepfakes-getting-better-and-more-alarming-seeing-is-no-longer-believing/

Executive Summary:
New Linux malware has been discovered by the cybersecurity firm Volexity, and this new strain is being referred to as DISGOMOJI. A Pakistan-based threat actor with the alias ‘UTA0137’ has been identified as having espionage aims, with a primary focus on Indian government entities. Like other backdoors and botnets used in cyberattacks, DISGOMOJI lets the attacker issue commands to capture screenshots, search for files to steal, deploy additional payloads, and transfer files. DISGOMOJI uses Discord (a messaging service) for Command & Control (C2) and uses emojis for C2 communication. The malware targets Linux operating systems.
The DISGOMOJI Malware:
- The DISGOMOJI malware opens a dedicated channel in a Discord server for each new victim, allowing the attacker to communicate with every victim individually.
- The malware connects to the attacker-controlled Discord server using an emoji-based relay protocol. The attacker sends unique emojis as instructions, and the malware replies with emojis to report the status of each command.
- For instance, the ‘camera with flash’ emoji takes a screenshot of the victim’s device, the ‘fox’ emoji compresses all Firefox profiles, and the ‘skull’ emoji terminates the malware process.
- Because this C2 communication rides on ordinary Discord messaging, it is almost impossible for Discord to shut down the malware: the malware can simply switch to new Discord account credentials once a malicious server is blocked.
- Beyond the emoji-based C2, the malware also has capabilities such as network probing, tunneling, and data theft that help the UTA0137 threat actor achieve its espionage goals.
Specific emojis used for different commands by UTA0137:
- Camera with Flash (📸): Captures a screenshot of the victim’s device as directed by the attacker.
- Backhand Index Pointing Down (👇): Extracts files from the targeted device and sends them to the command channel as attachments.
- Backhand Index Pointing Right (👉): Sends a file from the victim’s device to the web-hosted file storage service Oshi (oshi[.]at).
- Backhand Index Pointing Left (👈): Sends a file from the victim’s device to transfer[.]sh, an online file-sharing service.
- Fire (🔥): Finds and transmits all files on the victim’s device with certain extensions, such as *.txt, *.doc, *.xls, *.pdf, *.ppt, *.rtf, *.log, *.cfg, *.dat, *.db, *.mdb, *.odb, *.sql, *.json, *.xml, *.php, *.asp, *.pl, *.sh, *.py, *.ino, *.cpp, and *.java.
- Fox (🦊): Compresses all Firefox-related profiles on the affected device.
- Skull (💀): Kills the malware process using ‘os.Exit()’.
- Man Running (🏃♂️): Executes a command on the victim’s device; the emoji is sent with an argument, which is the command to execute.
- Index Pointing Up (👆): Uploads a file to the victim's device; the file to upload is attached along with the emoji.
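The emoji protocol above can be pictured as a simple dispatch table. The following sketch is purely illustrative, modeling the command mapping described in the list; the handler names are our own hypothetical labels, not DISGOMOJI's actual code.

```python
# Illustrative model of the emoji-to-command mapping described above.
# This is NOT DISGOMOJI's real code; all handler names are hypothetical.

EMOJI_COMMANDS = {
    "📸": "screenshot",            # capture the victim's screen
    "👇": "exfil_to_channel",      # send a file to the command channel
    "👉": "upload_to_oshi",        # send a file to oshi[.]at
    "👈": "upload_to_transfer_sh", # send a file to transfer[.]sh
    "🔥": "exfil_by_extension",    # collect *.txt, *.doc, *.pdf, ...
    "🦊": "zip_firefox_profiles",  # compress Firefox profiles
    "💀": "terminate",             # kill the malware process
    "🏃": "exec_command",          # run an attacker-supplied command
    "👆": "receive_upload",        # write an attached file to the device
}

def dispatch(emoji: str) -> str:
    """Map a received emoji to the command it represents; ignore unknown ones."""
    return EMOJI_COMMANDS.get(emoji, "ignored")
```

The point of such a protocol is that each instruction looks like an ordinary chat reaction, which is what makes the traffic so hard to flag.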
Analysis:
The analysis was carried out on one of the indicators of compromise, the SHA-256 hash C981aa1f05adf030bacffc0e279cf9dc93cef877f7bce33ee27e9296363cf002.
Most vendors on VirusTotal have flagged the file as a trojan, and the relationship graph illustrates the malicious nature of the contacted domains and IPs.
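To reproduce such a lookup, one can query VirusTotal's public v3 API for a file report by hash. The sketch below only builds the request; the `build_vt_request` helper is our own naming, a valid API key is required, and nothing is actually sent here.

```python
# Build (but do not send) a VirusTotal v3 file-report request for the
# DISGOMOJI sample hash quoted above. Requires a VirusTotal API key.
import urllib.request

SAMPLE_SHA256 = "c981aa1f05adf030bacffc0e279cf9dc93cef877f7bce33ee27e9296363cf002"

def build_vt_request(sha256: str, api_key: str) -> urllib.request.Request:
    """Prepare a GET request against VirusTotal's /api/v3/files/{hash} endpoint."""
    url = f"https://www.virustotal.com/api/v3/files/{sha256}"
    return urllib.request.Request(url, headers={"x-apikey": api_key})
```

Sending the prepared request (for example with `urllib.request.urlopen`) returns a JSON report whose `last_analysis_stats` field summarizes how many vendors flag the file as malicious.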


Discord & C2 Communication for UTA0137:
- Stealthiness: Discord is a well-known messaging platform used for many legitimate purposes, so messages or files sent through its servers do not attract suspicion. This stealth allows UTA0137 to remain dormant for long periods before launching an attack.
- Customization: UTA0137 creates a dedicated channel for each victim on the Discord server. This structure lets the attackers communicate with each victim individually, making the process more precise and efficient.
- Emoji-based protocol: Using emojis for C2 communication greatly complicates any attempt by Discord to interfere with the malware's operations. Even if a malicious server is banned, the malware can recover by fetching fresh Discord credentials from the C2 server.
- Persistence: The malware persists on the infected system and survives reboots, so it can continue operating without being detected by the owner of the compromised system.
- Advanced capabilities: DISGOMOJI also performs network mapping using the Nmap scanner, network tunneling through Chisel and Ligolo, and data exfiltration via file-sharing services. These capabilities further aid UTA0137's espionage goals.
- Social engineering: The malware can display pop-up windows and prompts, such as a fake Firefox update, to trick the user into entering a password.
- Dynamic credential fetching: The malware does not hardcode the credentials used to connect to the Discord server; it fetches them dynamically. This inconveniences analysts, who cannot easily locate the C2 server.
- Bogus informational and error messages: The malware never shows real information or errors, so its malicious behavior cannot be easily deciphered.
Recommendations to mitigate the risk of UTA0137:
- Regularly Update Software and Firmware: Regularly update all application software and device firmware, particularly on routers, to prevent attackers from exploiting known and disclosed flaws, such as CVE-2024-3080 and CVE-2024-3912 on ASUS routers.
- Implement Multi-Factor Authentication: Given how frequently user accounts are attacked, incorporate multi-factor authentication to further secure accounts.
- Deploy Advanced Malware Protection: Deploy robust anti-malware defenses that can recognize and block the execution of DISGOMOJI and similar threats.
- Enhance Network Segmentation: Use stringent network isolation to compartmentalize key systems and data from the rest of the network, minimizing the attack surface.
- Monitor Network Activity: Continuously monitor the network to identify and handle security breaches, watching in particular for tools such as Nmap, Chisel, and Ligolo.
- Utilize Threat Intelligence: Leverage advanced threat intelligence to learn from past threats and vulnerabilities and take informed action.
- Secure Communication Channels: Protect developers' credentials and Discord integrations so that attackers cannot abuse Discord as an attack vector.
- Enforce Access Control: Regularly review and update user authentication processes, adopting stricter access control measures so that only authorized personnel can reach sensitive systems and information.
- Conduct Regular Security Audits: Periodically carry out security audits to check for weaknesses in the network or systems.
- Implement an Incident Response Plan: Based on a risk assessment, design and establish an efficient incident response plan that supports early identification, isolation, and management of security breaches.
- Educate Users: Educate users on cybersecurity hygiene and conduct regular training on threats like phishing and social engineering.
Conclusion:
Volexity discovered a new Pakistan-based threat actor, UTA0137, using the DISGOMOJI malware to attack Indian government institutions via emoji commands sent through the Discord app. The malware can exfiltrate data and aims to steal information from government entities, and it has been continuously improved over time to maintain communication with victims. This underlines the necessity of strong protection against viruses and hacker attacks: using secure, unique passwords, updating software frequently, and deploying high-grade anti-malware tools. Organizations can minimize advanced threats like DISGOMOJI and protect sensitive data by improving network segmentation, continuously monitoring activity, and raising user awareness.
References:
- https://otx.alienvault.com/pulse/66712446e23b1d14e4f293eb
- https://thehackernews.com/2024/06/pakistani-hackers-use-disgomoji-malware.html?m=1
- https://cybernews.com/news/hackers-using-emojis-to-command-malware/
- https://www.volexity.com/blog/2024/06/13/disgomoji-malware-used-to-target-indian-government/

Introduction
The Ministry of Electronics and Information Technology (MeitY), through its Information Security Education & Awareness (ISEA) programme, has issued an advisory regarding the growing cases of e-challan fraud. Cybercriminals are exploiting individuals' trust by luring them into clicking malicious links under the impression of paying traffic fines. Their primary method is sending phishing messages that impersonate official e-challan notifications. These messages are crafted to convey a sense of urgency, provoking individuals to click a link and pay immediately. To build trust, the scammers deviously mimic official communication, when in reality the messages are fake and designed to commit online financial fraud.
Unveiling the E-Challan Scam
Scammers send text messages that closely resemble e-challan alerts. The text appears to come from the traffic police, informing netizens of a traffic violation that requires a fine payment, and urges the recipient to settle the fine by clicking a link. These malicious links lead to fake websites where users are prompted to enter their bank account details or make a payment, ultimately causing financial loss to victims. The scammers have meticulously copied the format used by traffic authorities, but a close examination can help spot the trap. Once an individual clicks such a payment link, they unknowingly end up paying the cybercriminals instead of the police in a bid to discharge the traffic e-challan.
How to spot a fake E-Challan?
- Verify the Vehicle Number: Make sure that the vehicle number mentioned in the message matches your vehicle's number. Cross-check this information with your vehicle's number plate or the smart card (blue book) issued by the Regional Transport Office (RTO).
- Verify the E-challan Number: Check the validity of the e-challan number by logging into the official traffic police website or app; legitimate e-challans will have a corresponding record that can be cross-checked for authenticity. It is always advisable to visit the official government website to confirm whether you have actually been fined.
- Inspect the Message Content: Pay attention to the language used in the message. Scam messages may contain grammatical errors or unusual phrases; for example, cybercriminals might encourage victims to visit the RTO office in person to build confidence. Do not make such payments in haste, and check the message carefully before clicking any link.
Best Practices to Stay Safe
- Be wary of unsolicited messages: Be cautious when you receive unsolicited e-challan notifications, and refrain from clicking links or downloading attachments from unknown sources.
- Always stick to official websites: Scammers use links that look similar to the official one, and a casual glance can miss the difference. Visit official websites only, and note that Indian government websites use the domain '.gov.in'. The official e-challan website is https://echallan.parivahan.gov.in/
- Get it cross-checked through official channels: Always cross-check the authenticity of an e-challan by directly accessing official channels, such as the official traffic police website or application.
- Connect with the RTO directly: If in doubt, contact the Regional Transport Office (RTO) using official contact details to verify the authenticity of the e-challan. Do not rely solely on information received in a suspicious message.
- Software updates: Make sure that your device's security software is up to date to protect against malware and phishing scams.
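The "official websites only" advice above can even be automated. The sketch below is a minimal assumption of ours (the `looks_official` helper is not an official tool): it checks that a link's host is exactly the official e-challan domain, rather than merely containing '.gov.in' somewhere in the URL.

```python
# Minimal host check for e-challan payment links: only the official
# echallan.parivahan.gov.in domain (or a subdomain of it) passes.
from urllib.parse import urlparse

OFFICIAL_HOST = "echallan.parivahan.gov.in"

def looks_official(url: str) -> bool:
    """Return True only if the link's host is the official e-challan domain."""
    host = urlparse(url).hostname or ""
    return host == OFFICIAL_HOST or host.endswith("." + OFFICIAL_HOST)
```

A lookalike such as https://echallan-parivahan.xyz/pay fails this check, as does a phishing URL that hides the official name in its path rather than its host.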
Conclusion:
Cybercriminals are exploiting the fear of traffic fines to trick individuals into clicking on malicious links and revealing their personal and financial information. These scams can lead to significant financial losses for the victims. To stay safe, it is important to be cautious of unsolicited messages, verify the authenticity of e-challans through official channels, and avoid clicking on links or downloading attachments from unknown sources. Awareness is the first line of defence in the evolving landscape of online threats.
References:
- https://economictimes.indiatimes.com/news/new-updates/ahmedabad-residents-duped-out-of-lakhs-in-e-challan-scam-cops-arrest-jharkhand-man/articleshow/103528317.cms
- https://economictimes.indiatimes.com/wealth/save/new-traffic-e-challan-fraud-heres-how-to-identify-scam-messages-and-avoid-getting-duped/articleshow/104960817.cms
- https://www.ndtv.com/india-news/explained-the-new-e-challan-scam-how-we-can-escape-it-4342129