#FactCheck - Misleading Video Allegedly Depicting Trampling of Indian Tri-colour in Kerala or Tamil Nadu Circulates on Social Media
Executive Summary:
A video allegedly showing cars driving over an Indian flag while Pakistani flags fly overhead, supposedly in an Indian state, went viral on social media, but it has been established to be misleading. The video is neither from Kerala nor Tamil Nadu as claimed; it is from Karachi, Pakistan. Specific details, such as a shop's name, the Pakistani flags, a car's number plate, and geolocation analysis, establish where the video was recorded. The false information underscores the importance of verifying content before sharing it.
Claims:
A video circulating on social media shows cars trampling the Indian Tricolour painted on a road, as Pakistani flags are raised in pride, with the incident allegedly taking place in Tamil Nadu or Kerala.
Fact Check:
Upon receiving the post, we closely watched the video and found several signs indicating that it was filmed in Pakistan, not anywhere in India.
We divided the video into keyframes and noticed a shop sign near the road.
We enhanced the image quality to read the shop's name clearly.
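For readers curious how such a step can be done, here is a minimal sketch using OpenCV. The filename, one-frame-per-second sampling, and 4x cubic upscaling are illustrative assumptions, not the exact tools used in this investigation.

```python
# Minimal sketch: sample roughly one frame per second from a video and enlarge
# each frame so small details (such as shop signs) are easier to read.
# Requires: pip install opencv-python. "viral_video.mp4" is a placeholder name.
import cv2

cap = cv2.VideoCapture("viral_video.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
frame_idx = saved = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % int(fps) == 0:
        # Plain cubic upscaling: a basic enhancement, not super-resolution.
        enlarged = cv2.resize(frame, None, fx=4, fy=4, interpolation=cv2.INTER_CUBIC)
        cv2.imwrite(f"keyframe_{saved:03d}.png", enlarged)
        saved += 1
    frame_idx += 1

cap.release()
```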
The sign reads 'Sanam', and Pakistani flags can be seen waving along the road. Taking a cue from this, we ran keyword searches on the shop name and found several matching shops; one of them, 'Sanam Boutique' in Karachi, Pakistan, matched the storefront in the video when analysed using geospatial techniques.
While geolocating the place, we also found that the building's structure matches the one in the viral video.
The car's number plate, visible in the keyframes of the video, provides additional confirmation of the location.
A vehicle-registration lookup website shows that the number plate is registered in Karachi, Pakistan.
Upon thorough investigation, it was found that the location in the viral video is Karachi, Pakistan, not Kerala or Tamil Nadu as claimed by various users on social media. Hence, the claim is false and misleading.
Conclusion:
The video circulating on social media, claiming to show cars trampling the Indian Tricolour on a road while Pakistani flags are waved, does not depict an incident in Kerala or Tamil Nadu as claimed. Fact-checking methodologies confirm that the video was actually recorded in Karachi, Pakistan. The misrepresentation shows the importance of verifying the source of any information before sharing it on social media, to prevent the spread of false narratives.
- Claim: A video shows cars trampling the Indian Tricolour painted on a road, as Pakistani flags are raised in pride, taking place in Tamil Nadu or Kerala.
- Claimed on: X (Formerly known as Twitter)
- Fact Check: Fake & Misleading
Related Blogs
Introduction
Advanced deepfake technology blurs the line between authentic and fake content. With AI-generated voice clones and videos proliferating on the Internet and social media, it has become important to differentiate between genuine content and manipulated or synthetic content shared widely on social media platforms. Sophisticated AI algorithms can manipulate or generate synthetic multimedia such as audio, video, and images, making it increasingly difficult to tell genuine, altered, and fake content apart. McAfee Corp., a global leader in online protection, has recently launched an AI-powered deepfake audio detection technology under Project "Mockingbird", intending to safeguard consumers against the surging threat of fabricated, AI-generated audio and voice clones used to dupe people out of money or to obtain their personal information without authorisation. McAfee announced the technology at the Consumer Electronics Show (CES) 2024.
What is voice cloning?
Voice cloning uses deepfake technology to create a synthetic copy of a person's voice: the audio closely resembles the real voice but is, in actuality, fake.
Emerging Threats: Cybercriminal Exploitation of Artificial Intelligence in Identity Fraud, Voice Cloning, and Hacking Acceleration
AI is used for all kinds of things, from smart tech to robotics and gaming, but cybercriminals are misusing it for nefarious ends, including voice cloning to commit cyber fraud. AI can manipulate an individual's lip movements so it looks like they are saying something different; it can enable identity fraud by impersonating someone during remote verification, for example with a bank; and it makes traditional hacking more convenient. This misuse of advanced technologies has increased the speed and volume of cyber attacks, a recurring theme in recent times.
Technical Analysis
To combat fraud based on audio cloning, McAfee Labs has developed a robust AI model that detects artificially generated audio used in videos or elsewhere.
- Context-Based Recognition: Contextual assessment examines audio components within the overall setting in which they appear. Evaluating this surrounding information improves the model's capacity to recognise discrepancies suggestive of AI-generated audio.
- Behavioural Examination: Behavioural detection techniques examine linguistic habits and subtleties, concentrating on departures from typical individual behaviour. Examining speech patterns, tempo, and pronunciation enables the model to identify synthetically produced material.
- Classification Models: Classification algorithms categorise auditory components according to established traits of human communication. By comparing samples against an extensive library of legitimate human speech features, the technology differentiates between real and AI-synthesised voices (a simplified illustration follows this list).
- Accuracy Outcomes: McAfee Labs' deepfake voice recognition solution, which boasts an impressive ninety per cent success rate, combines the behavioural, contextual, and classification models described above. By examining audio components in the larger video context and analysing speech characteristics such as intonation, rhythm, and pronunciation, the system can identify discrepancies that may signal AI-produced audio. Classification models contribute further by categorising audio according to known traits of human speech. This all-encompassing strategy is essential for precisely recognising and reducing the risks of AI-generated audio, offering a strong barrier against the growing danger of deepfakes.
- Application Instances: The technique protects against various harmful schemes, such as celebrity voice-cloning fraud and misleading content about important subjects.
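To make the classification idea concrete, here is a deliberately simplified sketch. It is not McAfee's proprietary Mockingbird model: MFCC statistics stand in for "traits of human speech", a random forest stands in for the classifier, and the file names and tiny training set are hypothetical.

```python
# Simplified stand-in for a real-vs-synthetic audio classifier.
# Requires: pip install librosa scikit-learn numpy
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def speech_features(path: str) -> np.ndarray:
    """Summarise a clip as the mean and std of its MFCCs, a common voice fingerprint."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical labelled corpus: 1 = genuine human speech, 0 = AI-generated clone.
paths, labels = ["real_01.wav", "clone_01.wav"], [1, 0]
X = np.stack([speech_features(p) for p in paths])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)

# Probability that an unseen clip is synthetic vs. genuine.
print(clf.predict_proba(speech_features("suspect.wav").reshape(1, -1)))
```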
Conclusion
It is important to foster ethical and responsible consumption of technology. Awareness of common uses of artificial intelligence is a first step toward broader public engagement with debates about the appropriate role and boundaries for AI. Project Mockingbird by McAfee employs AI-driven deepfake audio detection to guard against cybercriminals who use fabricated AI-generated audio for scams and to manipulate the public image of notable figures, protecting consumers from financial and personal-information risks.
References:
- https://www.cnbctv18.com/technology/mcafee-deepfake-audio-detection-technology-against-rise-in-ai-generated-misinformation-18740471.htm
- https://www.thehindubusinessline.com/info-tech/mcafee-unveils-advanced-deepfake-audio-detection-technology/article67718951.ece
- https://lifestyle.livemint.com/smart-living/innovation/ces-2024-mcafee-ai-technology-audio-project-mockingbird-111704714835601.html
- https://news.abplive.com/fact-check/audio-deepfakes-adding-to-cacophony-of-online-misinformation-abpp-1654724
Introduction
Social media has become integral to our lives and livelihoods in today's digital world, and influencers are now powerful figures who shape trends, opinions, and consumer behaviour. Their significant online presence has made influencers targets for bad actors aiming to abuse their fame, and account hacking has unfortunately become frequent, with significant ramifications for influencers and their followers. The emergence of social media platforms in recent years paved the way for influencer culture: influencers exert power over their followers' ideas, lifestyle choices, and purchase decisions, and brands frequently collaborate with them to exploit that reach, creating a mutually beneficial environment. As a result, the value of influencer accounts has risen dramatically, attracting hackers looking to abuse them for financial gain or personal advantage.
Instances of recent attacks
Places of worship
Hackers have targeted renowned temples to carry out their malicious activities. A recent attack hit the Khatu Shyam Temple, a famous religious institution with enormous cultural and spiritual value for its adherents. It serves as a place of worship, community events, and numerous religious activities, and as technology has reached all sectors of life, the temple's online presence has grown, giving worshippers access to information, virtual darshans (holy viewings), and interactive forums. Unfortunately, this digital growth has also rendered the shrine vulnerable to cyber threats. Hackers compromised the temple's Facebook page twice in a month, demanding donations and misusing the cheques devotees had given to the trust. In the second incident, they posted objectionable images on the page, hurting the sentiments of devotees. The temple committee has filed an FIR under various charges and is also seeking help from the cyber cell.
Social media Influencers
Influencers enjoy vast online followings worldwide, but their presence lives almost entirely in the digital space, so every video and photo matters to them. In one incident, leading news anchor and reporter Barkha Dutt's YouTube channel was hacked: all the posts made from the channel were deleted, the channel's logo was replaced with Tesla's, and the hackers streamed a live video on the channel featuring Elon Musk. A similar incident was reported by influencer Tanmay Bhat, who also lost all the content he had posted on his channel. Hackers typically use the following methods to con social media influencers:
- Social engineering
- Phishing
- Brute Force Attacks
Such attacks can harm an influencer's reputation, cause financial loss, and erode the trust of their viewers and followers, further impacting brand collaborations.
Safeguards
Social media influencers need to be very careful about their cybersecurity, as their prominent presence is in the online world. Influencers across platforms should practise the following safeguards to better protect themselves and their content online.
Secure your accounts
Protecting your accounts with passphrases or strong passwords is the first step. The best strategy for doing this is to create a passphrase, a phrase only you know. We advise choosing a passphrase with at least four words and 15 characters.
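As a concrete illustration of this advice, here is a minimal sketch that generates a random passphrase meeting the four-word, 15-character guideline. The short word list is a placeholder; in practice you would draw from a large diceware-style list.

```python
# Minimal passphrase sketch: at least four random words and 15+ characters.
# The word list below is a tiny placeholder, not a recommended dictionary.
import secrets

WORDS = ["horizon", "copper", "verandah", "monsoon", "kestrel", "lantern",
         "quartz", "bramble", "saffron", "glacier", "pebble", "mariner"]

def make_passphrase(min_words: int = 4, min_length: int = 15) -> str:
    words = [secrets.choice(WORDS) for _ in range(min_words)]
    while len("".join(words)) < min_length:  # top up until 15+ characters
        words.append(secrets.choice(WORDS))
    return "-".join(words)

print(make_passphrase())  # e.g. "monsoon-quartz-lantern-pebble"
```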
The second step is to enable multi-factor authentication on your accounts.
With it enabled, a hacker must both guess your password and provide a second authentication factor (such as a face scan or fingerprint) that matches yours before they can access your account.
Be careful about who has access
Many social media influencers collaborate with a team to help generate and post content while building their personal brands.
Some handle this by having team members write and produce material that the influencer then shares themselves; in those situations, the influencer remains the only person with access to the account.
The more people who have access, the more potential weak spots there are, and the more ways a password or account credentials could fall into a cybercriminal's hands. Not every staff member will be as cautious about password security as you are.
Stay up-to-date on the threats
What's the most effective way to combat computer security threats? Information.
Cybercriminals constantly adapt their methods, so it's crucial to stay informed about emerging threats and how they can be used against you.
But it's not just the threats that change: social media platforms and other service providers likewise update their offerings to counter these challenges.
Educate yourself to protect yourself. By continuously learning, you can stay one step ahead of the hazards cybercriminals pose.
Preach cybersecurity
As an influencer, preach cybersecurity no matter what your agenda is.
This will help your audience inculcate best practices for digital hygiene.
It will also boost reporting numbers and increase public awareness, helping drive such bad actors out of our cyberspace.
Acknowledge the risks
Turning a blind eye always hurts safety, as ignorance invariably causes issues.
Risks should be kept in mind while building your digital routine and netiquette.
Always inform your audience of existing and potential risks.
Monitor threats
After acknowledging the risks, it is essential to monitor threats.
Actively looking out for threats will help you understand criminals' modus operandi and the vulnerabilities they exploit, so you can avoid them.
Monitoring threats is also a basic responsibility of every netizen, ensuring that threats are reported as they emerge.
Interpret the data
Cyber nodal agencies release data on cybercrime trends; understand those trends and shore up your own vulnerabilities.
Interpreting the data can lead to early flagging of threats and issues, protecting the cyber ecosystem at large.
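As a toy example of such interpretation, the sketch below flags a month whose reported incident count jumps well above the average of the preceding three months. The figures and the 1.5x threshold are invented purely for illustration.

```python
# Illustrative only: flag months where reported incidents spike well above the
# recent average. The counts are made up; real figures would come from nodal
# agencies' published cybercrime statistics.
import pandas as pd

incidents = pd.Series(
    [210, 225, 218, 240, 231, 410],
    index=pd.period_range("2023-01", periods=6, freq="M"),
)

# Average of the *previous* three months, so a spike doesn't mask itself.
baseline = incidents.rolling(window=3).mean().shift(1)
spikes = incidents[incidents > 1.5 * baseline]  # 50% above the recent baseline
print(spikes)  # 2023-06 (410) is flagged as an early warning
```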
Create risk profiles
All influencers should create risk profiles and maintain backup profiles.
Spreading data across different profiles also helps protect it.
Risk profiles, along with a private personal profile, are essential to safeguarding an influencer's basic cyber interests.
Conclusion
As we go deeper into the digital age, new technologies keep emerging, and with them a new generation of cyber threats and challenges. Physical space and cyberspace are now intertwined and interdependent. Practising basic cybersecurity, hygiene, netiquette, and threat monitoring will go a long way in protecting influencers' online interests, and it will encourage their followers to adopt the same best practices, safeguarding the cyber ecosystem at large.
Brief Overview of the EU AI Act
The EU AI Act, Regulation (EU) 2024/1689, was officially published in the EU Official Journal on 12 July 2024. This landmark legislation on Artificial Intelligence (AI) comes into force 20 days after publication, setting harmonized rules across the EU, and it amends key regulations and directives to ensure a robust framework for AI technologies. The AI Act, a set of EU rules governing AI that has been in development for two years, enters into force across all 27 EU Member States on 1 August 2024, with a series of future deadlines; enforcement of the majority of its provisions will commence on 2 August 2026. The law prohibits certain uses of AI tools that threaten citizens' rights, such as biometric categorization and untargeted scraping of facial images; systems that try to read emotions are banned in the workplace and schools, as are social scoring systems, and the use of predictive policing tools is prohibited in some instances. The law takes a phased approach to implementing the EU's AI rulebook, meaning various deadlines apply as different legal provisions start to take effect.
The framework puts different obligations on AI developers, depending on use cases and perceived risk. The bulk of AI uses will not be regulated as they are considered low-risk, but a small number of potential AI use cases are banned under the law. High-risk use cases, such as biometric uses of AI or AI used in law enforcement, employment, education, and critical infrastructure, are allowed under the law but developers of such apps face obligations in areas like data quality and anti-bias considerations. A third risk tier also applies some lighter transparency requirements for makers of tools like AI chatbots.
In case of failure to comply with the Act, companies providing, distributing, importing, or using AI systems and GPAI models in the EU are subject to fines of up to EUR 35 million or seven per cent of total worldwide annual turnover, whichever is higher.
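Expressed as arithmetic, the "whichever is higher" penalty rule looks like the following sketch; the turnover figures are illustrative.

```python
# The Act's headline penalty: up to EUR 35 million or 7% of total worldwide
# annual turnover, whichever is higher.
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

print(max_fine_eur(2_000_000_000))  # turnover EUR 2bn  -> 140000000.0 (7% applies)
print(max_fine_eur(100_000_000))    # turnover EUR 100m -> 35000000.0 (floor applies)
```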
Key highlights of EU AI Act Provisions
- The AI Act classifies AI according to its risk. It prohibits unacceptable-risk applications such as social scoring systems and manipulative AI, and its provisions mostly address high-risk AI systems.
- Limited-risk AI systems are subject to lighter transparency obligations: developers and deployers must ensure that end-users are aware they are interacting with AI, as with chatbots and deepfakes. The AI Act allows the free use of minimal-risk AI, which covers the majority of AI applications currently available in the EU single market, such as AI-enabled video games and spam filters, though this may change as generative AI advances. The majority of obligations fall on providers (developers) who intend to place on the market or put into service high-risk AI systems in the EU, regardless of whether they are based in the EU or a third country; the obligations also apply to third-country providers where the high-risk AI system's output is used in the EU. (A toy summary of the four risk tiers appears after this list.)
- Users are natural or legal persons who deploy an AI system in a professional capacity, not affected end-users. Users (deployers) of high-risk AI systems have some obligations, though less than providers (developers). This applies to users located in the EU, and third-country users where the AI system’s output is used in the EU.
- General purpose AI or GPAI model providers must provide technical documentation, and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. Free and open license GPAI model providers only need to comply with copyright and publish the training data summary, unless they present a systemic risk. All providers of GPAI models that present a systemic risk – open or closed – must also conduct model evaluations, and adversarial testing, and track and report serious incidents and ensure cybersecurity protections.
- The Codes of Practice will account for international approaches. They will cover, but are not necessarily limited to, the obligations above, particularly the relevant information to include in technical documentation for authorities and downstream providers, identification of the type, nature, and sources of systemic risks, and the modalities of risk management, accounting for the specific challenges of addressing risks as they emerge and materialize throughout the value chain. The AI Office may invite GPAI model providers and relevant national competent authorities to participate in drawing up the codes, while civil society, industry, academia, downstream providers, and independent experts may support the process.
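The toy summary promised above maps the Act's four risk tiers to the headline consequence each carries, as described in this section. It is purely an illustration of the tiered structure; classifying a real AI system under the Act is a legal assessment, not a dictionary lookup.

```python
# Toy summary of the AI Act's four risk tiers, as described in this section.
RISK_TIERS = {
    "unacceptable": "Prohibited (e.g. social scoring systems, manipulative AI)",
    "high": "Allowed with strict obligations (data quality, anti-bias, documentation)",
    "limited": "Lighter transparency duties (tell end-users they face AI, e.g. chatbots)",
    "minimal": "Free use (e.g. spam filters, AI-enabled video games)",
}

def headline_rule(tier: str) -> str:
    # Fall back explicitly rather than guessing a tier.
    return RISK_TIERS.get(tier.lower(), "Unknown tier: consult the Act itself")

print(headline_rule("high"))
```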
Application & Timeline of Act
The EU AI Act will be fully applicable 24 months after entry into force, but some parts apply sooner: the ban on AI systems posing unacceptable risks applies six months after entry into force, the Codes of Practice nine months after, and the rules on general-purpose AI systems that must comply with transparency requirements 12 months after. High-risk systems have more time to comply, as their obligations become applicable 36 months after entry into force. The expected timeline is:
- August 1st, 2024: The AI Act will enter into force.
- February 2025: Chapters I (general provisions) & II (prohibited AI systems) will apply; prohibition of certain AI systems takes effect.
- August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Article 78 (confidentiality) will apply, except for Article 101 (fines for general-purpose AI providers); requirements for new GPAI models take effect.
- August 2026: The whole AI Act applies, except for Article 6(1) & corresponding obligations (one of the categories of high-risk AI systems).
- August 2027: Article 6(1) & corresponding obligations apply.
The AI Act sets out clear definitions for the different actors involved in AI, such as the providers, deployers, importers, distributors, and product manufacturers. This means all parties involved in the development, usage, import, distribution, or manufacturing of AI systems will be held accountable. Along with this, the AI Act also applies to providers and deployers of AI systems located outside of the EU, e.g., in Switzerland, if output produced by the system is intended to be used in the EU. The Act applies to any AI system within the EU that is on the market, in service, or in use, covering both AI providers (the companies selling AI systems) and AI deployers (the organizations using those systems).
In short, the AI Act will apply to companies across the AI distribution chain, including providers, deployers, importers, and distributors (collectively referred to as "Operators"). The EU AI Act has extraterritorial application: it can apply to companies not established in the EU if they make an AI system or GPAI model available on the EU market, and even if only the output generated by the AI system is used in the EU, the Act still applies to such providers and deployers.
CyberPeace Outlook
The EU AI Act, approved by EU lawmakers in 2024, is a landmark legislation designed to protect citizens' health, safety, and fundamental rights from potential harm caused by AI systems. It applies to AI systems and GPAI models and adopts a risk-based approach to AI governance, categorizing potential risks into four tiers (unacceptable, high, limited, and low) with various regulations and stiff penalties for noncompliance. Violations involving banned systems carry the highest fine: EUR 35 million, or 7 per cent of global annual revenue. The Act establishes transparency requirements for general-purpose AI systems and provides specific rules for GPAI models, laying down more stringent requirements for those with 'high-impact capabilities' that could pose a systemic risk and significantly affect the internal market. For high-risk AI systems, the AI Act addresses fundamental-rights impact assessments and data-protection impact assessments.
The EU AI Act aims to enhance trust in AI technologies by establishing clear regulatory standards governing AI. We encourage regulatory frameworks that strive to balance the desire to foster innovation with the critical need to prevent unethical practices that may cause user harm. The legislation strengthens the EU's position as a global leader in AI innovation and in developing regulatory frameworks for emerging technologies, setting a global benchmark for regulating AI. Companies to which the Act applies will need to ensure their practices align with it, and the Act may inspire other nations to develop their own legislation, contributing to global AI governance. The world of AI is complex and challenging, and implementing regulatory checks and securing compliance from the companies concerned pose a conundrum; in the end, however, balancing innovation with ethical considerations is paramount.
At the same time, the tech sector welcomes regulatory progress but warns that overly rigid regulations could stifle innovation; flexibility and adaptability are key to effective AI governance. The journey towards robust AI regulation has begun in major countries, and it is important to find the right balance between safety and innovation while taking industry reactions into consideration.
References:
- https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689
- https://www.theverge.com/2024/7/12/24197058/eu-ai-act-regulations-bans-deadline
- https://techcrunch.com/2024/07/12/eus-ai-act-gets-published-in-blocs-official-journal-starting-clock-on-legal-deadlines/
- https://www.wsgr.com/en/insights/eu-ai-act-to-enter-into-force-in-august.html
- https://www.techtarget.com/searchenterpriseai/tip/Is-your-business-ready-for-the-EU-AI-Act
- https://www.simmons-simmons.com/en/publications/clyimpowh000ouxgkw1oidakk/the-eu-ai-act-a-quick-guide