Advisory for APS School Students
Pretext
The Army Welfare Education Society (AWES) has informed parents and students that a scam is targeting students of Army schools. The scamster approaches students by faking a female or male voice and asks for their personal information and photos, claiming that the details are being collected for an Independence Day event organised by the Army Welfare Education Society. AWES has cautioned parents to beware of these calls from scammers.
Students of Army schools in Jammu & Kashmir and Noida have been receiving calls from the scamster asking them to share sensitive information. Students across the country are getting calls and WhatsApp messages from two numbers ending in 1715 and 2167. The scamsters pose as teachers and ask for students’ names on the pretext of adding them to WhatsApp groups. They then send form links to these groups and ask students to fill out the forms, seeking further sensitive information.
Do’s
- Do verify the identity of the caller.
- Do block the caller if the call seems suspicious.
- Do be careful while sharing personal information.
- Do inform the school authorities upon receiving calls or messages from people posing as teachers.
- Do check the legitimacy of any agency or organisation before sharing details.
- Do record calls asking for personal information.
- Do inform parents about scam calls.
- Do cross-check any caller who asks for crucial information.
- Do make others aware of the scam.
Don’ts
- Don’t answer calls from unknown or anonymous numbers.
- Don’t share personal information with anyone.
- Don’t share OTPs with anyone.
- Don’t open suspicious links.
- Don’t fill out any forms asking for personal information.
- Don’t confirm your identity until you know who the caller is.
- Don’t reply to messages asking for financial information.
- Don’t visit websites prompted during a suspicious call.
- Don’t share bank details or passwords.
- Don’t make payments prompted by a suspicious call.

Introduction
In a world teeming with digital complexities, where information wends through networks with the speed and unpredictability of quicksilver, companies find themselves grappling with the paradox of our epoch: the vast potential of artificial intelligence (AI) juxtaposed with glaring vulnerabilities in data security. It's a terrain fraught with risks, but in the intricacies of this digital age emerges a profound alchemy—the application of AI itself to transmute vulnerable data into a repository as secure and invaluable as gold.
The deployment of AI technologies comes with its own set of challenges, chief among them being concerns about the integrity and safety of data—the precious metal of the information economy. Companies cannot afford to remain idle as the onslaught of cyber threats threatens to fray the fabric of their digital endeavours. Instead, they are rallying, invoking the near-miraculous capabilities of AI to transform the very nature of cybersecurity, crafting an armour of untold resilience by empowering the hunted to become the hunter.
AI’s Untapped Potential
Industries spanning the globe, varied in their scopes and scales, recognize AI's potential to hone their processes and augment decision-making capabilities. Within this dynamic lies a fertile ground for AI-powered security technologies to flourish, serving not merely as auxiliary tools but as essential components of contemporary business infrastructure. Dynamic solutions, such as anomaly detection mechanisms, highlight the subtle and not-so-subtle deviances in application behaviour, shedding light on potential points of failure or provoking points of intrusion, turning what was once a prelude to chaos into a symphony of preemptive intelligence.
In the era of advanced digital security, AI, exemplified by Dynatrace, stands as the pinnacle, swiftly navigating complex data webs to fortify against cyber threats. These digital fortresses, armed with cutting-edge AI, ensure uninterrupted insights and operational stability, safeguarding the integrity of data in the face of relentless cyber challenges.
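The anomaly-detection idea described above can be sketched in a few lines: learn a baseline of normal behaviour, then flag observations that deviate sharply from it. The example below uses a simple z-score test over hypothetical response-time metrics; it is illustrative only, and platforms such as Dynatrace use far more sophisticated models.

```python
from statistics import mean, stdev

def is_anomalous(baseline, value, threshold=3.0):
    """Return True if `value` deviates from the baseline mean by more
    than `threshold` standard deviations (a simple z-score test)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return sigma > 0 and abs(value - mu) / sigma > threshold

# Hypothetical application response times (ms) during normal operation.
baseline = [102, 98, 110, 95, 105, 99, 101, 97, 103]

print(is_anomalous(baseline, 104))  # ordinary fluctuation -> False
print(is_anomalous(baseline, 900))  # sudden spike -> True
```

Production systems replace the static baseline with rolling windows and multivariate models, but the principle—learning normal behaviour and flagging deviations—is the same.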
India’s AI Stride
India, a burgeoning hub of technology and innovation, is witnessing AI's transformative power within its rapidly growing intelligent automation market. Driven by the voracious adoption of groundbreaking technological paradigms such as machine learning (ML), natural language processing (NLP), and Automated Workflow Management (AWM), sectors as disparate as banking, finance, e-commerce, healthcare, and manufacturing are swept up in an investment maelstrom. This is further bolstered by the Indian government’s supportive policies like 'Make in India' and 'Digital India'—bold initiatives underpinning the accelerating trajectory of intelligent automation in this South Asian powerhouse.
Consider the velocity at which the digital universe expands: IDC posits that the 5 billion internet denizens, along with the nearly 54 billion smart devices they use, generate about 3.4 petabytes of data each second. The implications for enterprise IT teams, caught in a fierce vice of incoming cyber threats, are profound. AI's emergence as the bulwark against such threats provides the assurance they desperately seek to maintain the seamless operation of critical business services.
The AI Integration
The list of industries touched by the chilling specter of cyber threats is as extensive as it is indiscriminate. We've seen international hotel chains ensnared by nefarious digital campaigns, financial institutions laid low by unseen adversaries, Fortune 100 retailers succumbing to cunning scams, air traffic controls disrupted, and government systems intruded upon and compromised. Cyber threats stem from a tangled web of origins—be it an innocent insider's blunder, a cybercriminal's scheme, the rancor of hacktivists, or the cold calculation of state-sponsored espionage. The damage dealt by data breaches and security failures can be monumental, staggering corporations with halted operations, leaked customer data, crippling regulatory fines, and the loss of trust that often follows in the wake of such incidents.
However, the revolution is upon us—a rising tide of AI and accelerated computing that truncates the time and costs imperative to countering cyberattacks. Freeing critical resources, businesses can now turn their energies toward primary operations and the cultivation of avenues for revenue generation. Let us embark on a detailed expedition, traversing various industry landscapes to witness firsthand how AI's protective embrace enables the fortification of databases, the acceleration of threat neutralization, and the staunching of cyber wounds to preserve the sanctity of service delivery and the trust between businesses and their clientele.
Public Sector
Examine the public sector, where AI is not merely a tool for streamlining processes but stands as a vigilant guardian of a broad spectrum of securities—physical, energy, and social governance among them. Federal institutions, laden with the responsibility of managing complicated digital infrastructures, find themselves at the confluence of rigorous regulatory mandates, exacting public expectations, and the imperative of protecting highly sensitive data. The answer, increasingly, resides in the AI pantheon.
Take the U.S. Department of Energy's (DOE) Office of Cybersecurity, Energy Security, and Emergency Response (CESER) as a case in point. An investment exceeding $240 million in cybersecurity R&D since 2010 manifests in pioneering projects, including AI applications that automate and refine security vulnerability assessments, and those employing cutting-edge software-defined networks that magnify the operational awareness of crucial energy delivery systems.
Financial Sector
Next, we pivot our gaze to financial services—a domain where approximately $6 million evaporates with each data breach incident, compelling the sector to harness AI not merely for enhancing fraud detection and algorithmic trading but for its indispensability in preempting internal threats and safeguarding vast vaults of valuable data. Ventures like the FinSec Innovation Lab, born from the collaborative spirits of Mastercard and Enel X, demonstrate AI's facility in real-time threat response—a lifeline in preventing service disruptions and the erosion of consumer confidence.
Retail giants, repositories of countless payment credentials, stand at the threshold of this new era, embracing AI to fortify themselves against the theft of payment data—a grim statistic that accounts for 37% of confirmed breaches in their industry. Best Buy's triumph in refining its phishing detection rates while simultaneously dialling down false positives is a testament to AI's defensive prowess.
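The balance reportedly struck here—raising phishing detection while dialling down false positives—is at heart a classification-threshold trade-off. The sketch below is a hypothetical illustration (all scores, labels, and thresholds are invented, not any vendor's actual data or system) of how moving the decision threshold trades detection rate against false-positive rate:

```python
def rates(scores, labels, threshold):
    """Return (detection_rate, false_positive_rate) at a score threshold.
    labels: 1 = phishing, 0 = legitimate."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return tp / labels.count(1), fp / labels.count(0)

# Hypothetical model confidence scores for ten emails.
scores = [0.95, 0.80, 0.65, 0.40, 0.30, 0.90, 0.20, 0.55, 0.10, 0.70]
labels = [1,    1,    1,    0,    0,    1,    0,    1,    0,    0]

# A lower threshold catches all phishing but misfires on legitimate mail;
# a higher threshold eliminates false positives but misses some phish.
print(rates(scores, labels, 0.5))   # -> (1.0, 0.2)
print(rates(scores, labels, 0.75))  # -> (0.6, 0.0)
```

Tuning this threshold (or improving the underlying model so the score distributions separate more cleanly) is how a deployment can improve detection and reduce false positives at the same time.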
Smart Cities
Consider, too, the smart cities and connected spaces that epitomize technological integration. Their web of intertwined IoT devices and analytical AI, which scrutinizes the flows of urban life, is no stranger to the drumbeat of cyber threats. AI-driven defense mechanisms not only predict but also quarantine threats, ensuring the continuous, safe hum of civic life even in the aftermath of intrusions.
Telecom Sector
Telecommunications entities, stewards of crucial national infrastructures, dial into AI for anticipatory maintenance, network optimization, and ensuring impeccable uptime. By employing AI to monitor the edges of IoT networks, they stem the tide of anomalies, deftly handle false users, and parry the blows of assaults, upholding the sanctity of network availability and individual and enterprise data security.
Automobile Industry
Similarly, the automotive industry finds AI an unyielding ally. As vehicles become complex, mobile ecosystems unto themselves, AI's cybersecurity role is magnified, scrutinizing real-time in-car and network activities, safeguarding critical software updates, and acting as the vanguard against vulnerabilities—the linchpin for the assured deployment of autonomous vehicles on our transit pathways.
Conclusion
The inclination towards AI-driven cybersecurity permits industries not merely to cope, but to flourish by reallocating their energies towards innovation and customer experience enhancement. Through AI's integration, developers spanning a myriad of industries are equipped to construct solutions capable of discerning, ensnaring, and confronting threats to ensure the steadfastness of operations and consumer satisfaction.
In the crucible of digital transformation, AI is the philosopher's stone—an alchemic marvel transmuting the raw data into the secure gold of business prosperity. As we continue to sail the digital ocean's intricate swells, the confluence of AI and cybersecurity promises to forge a gleaming future where businesses thrive under the aegis of security and intelligence.
References
- https://timesofindia.indiatimes.com/gadgets-news/why-adoption-of-ai-may-be-critical-for-businesses-to-tackle-cyber-threats-and-more/articleshow/106313082.cms
- https://blogs.nvidia.com/blog/ai-cybersecurity-business-resilience/

Executive Summary:
A circulating picture, said to show United States President Joe Biden wearing a military uniform during a meeting with military officials, has been found to be AI-generated. The viral image is being shared with the false claim that it shows President Biden authorizing US military action in the Middle East. The CyberPeace Research Team has identified that the photo was produced by generative AI and is not real. Multiple visual discrepancies in the picture mark it as a product of AI.
Claims:
A viral image, created using artificial intelligence, claims to show US President Joe Biden in a military outfit during a meeting with military officials. The picture is being shared on social media with the false claim that it shows President Biden convening to authorize the use of the US military in the Middle East.


Fact Check:
The CyberPeace Research Team discovered that the photo of US President Joe Biden in a military uniform at a meeting with military officials was made using generative AI and is not authentic. Several obvious visual discrepancies plainly suggest that this is an AI-generated image.

First, President Biden's eyes are completely black; second, the military official's face is blended; and third, the phone stands upright without any support.
We then ran the image through an AI image detection tool.

The tool predicted the image to be 4% human and 96% AI, which indicates that it is likely deepfake content.
We then checked the image with another tool, Hive Detector.

Hive Detector classified the image as 100% AI-generated, which further indicates that it is likely deepfake content.
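The two tools' outputs can be read as per-tool probabilities that the content is AI-generated, and using more than one tool reduces reliance on any single model. A minimal sketch of combining such scores into one verdict follows; the averaging rule and the 0.5 threshold are our own illustration, not the method used by either tool.

```python
def aggregate_verdict(ai_probabilities, threshold=0.5):
    """Average per-tool AI probabilities and apply a decision threshold."""
    avg = sum(ai_probabilities) / len(ai_probabilities)
    verdict = "likely AI-generated" if avg >= threshold else "likely authentic"
    return avg, verdict

# Scores reported by the two detectors used above: 96% AI and 100% AI.
avg, verdict = aggregate_verdict([0.96, 1.00])
print(f"average AI probability: {avg:.0%} -> {verdict}")
```

In practice, fact-checkers weigh such scores alongside the visual discrepancies noted above rather than relying on detector output alone.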
Conclusion:
Thus, the growth of AI-produced content is a challenge in determining fact from fiction, particularly in the sphere of social media. In the case of the fake photo supposedly showing President Joe Biden, the need for critical thinking and verification of information online is emphasized. With technology constantly evolving, it is of great importance that people be watchful and use verified sources to fight the spread of disinformation. Furthermore, initiatives to make people aware of the existence and impact of AI-produced content should be undertaken in order to promote a more aware and digitally literate society.
- Claim: A circulating picture said to show United States President Joe Biden wearing a military uniform during a meeting with military officials
- Claimed on: X
- Fact Check: Fake

Introduction
The pervasive issue of misinformation in India is a multifaceted challenge with profound implications for democratic processes, public awareness, and social harmony. The Election Commission of India (ECI) has taken measures to counter misinformation during the 2024 elections. ECI has launched campaigns to educate people and urge them to verify election-related content and share responsibly on social media. In response to the proliferation of fake news and misinformation online, the ECI has introduced initiatives such as ‘Myth vs. Reality’ and 'VerifyBeforeYouAmplify' to clear the air around fake news being spread on social media. EC measures aim to ensure that the spread of misinformation is curbed, especially during election time, when voters consume a lot of information from social media. It is of the utmost importance that voters take in facts and reliable information and avoid any manipulative or fake information that can negatively impact the election process.
EC Collaboration with Tech Platforms
In this new age of technology, the Internet and social media continue to witness a surge in the spread of misinformation, disinformation, synthetic media content, and deepfake videos. This has rightly raised serious concerns. The responsible use of social media is instrumental in maintaining the accuracy of information and curbing misinformation incidents.
The ECI has collaborated with Google to empower the citizenry by making it easy to find critical voting information on Google Search and YouTube. In this way, Google supports the 2024 Indian General Election by providing high-quality information to voters, safeguarding platforms from abuse, and helping people navigate AI-generated content. The company connects voters to helpful information through product features that show data from trusted organisations across its portfolio. YouTube showcases election information panels, including how to register to vote, how to vote, and candidate information. YouTube's recommendation system prominently features content from authority sources on the homepage, in search results, and in the "Up Next" panel. YouTube highlights high-quality content from authoritative news sources during key moments through its Top News and Breaking News shelves, as well as the news watch page.
Google has also implemented strict policies and restrictions regarding who can run election-related advertising campaigns on its platforms. They require all advertisers who wish to run election ads to undergo an identity verification process, provide a pre-certificate issued by the ECI or anyone authorised by the ECI for each election ad they want to run where necessary, and have in-ad disclosures that clearly show who paid for the ad. Additionally, they have long-standing ad policies that prohibit ads from promoting demonstrably false claims that could undermine trust or participation in elections.
CyberPeace Countering Misinformation
CyberPeace Foundation, a leading organisation in the field of cybersecurity, works to promote digital peace for all. CyberPeace is working across the wider ecosystem to counter misinformation and develop a safer, more responsible Internet. CyberPeace has collaborated with Google.org to run a pan-India awareness-building program and a comprehensive multilingual digital resource hub, with content available in up to 15 Indian languages, to empower over 40 million netizens in building resilience against misinformation and practising responsible online behaviour. This step is crucial in creating a strong foundation for a trustworthy Internet and a secure digital landscape.
Myth vs Reality Register by ECI
The Election Commission of India (ECI) has launched the 'Myth vs Reality Register' to combat misinformation and ensure the integrity of the electoral process during the general elections 2024. The 'Myth vs Reality Register' can be accessed through the Election Commission's official website (https://mythvsreality.eci.gov.in/). All stakeholders are urged to verify and corroborate any dubious information they receive through any channel with the information provided in the register. The register provides a one-stop platform for credible and authenticated election-related information, with the factual matrix regularly updated to include the latest busted fakes and fresh FAQs. The ECI has identified misinformation as one of the challenges, along with money, muscle, and Model Code of Conduct violations, for electoral integrity. The platform can be used to verify information, prevent the spread of misinformation, debunk myths, and stay informed about key issues during the General Elections 2024.
The ECI has taken proactive steps to combat the challenge of misinformation which could cripple the democratic process. EC has issued directives urging vigilance and responsibility from all stakeholders, including political parties, to verify information before amplifying it. The EC has also urged responsible behaviour on social media platforms and discourse that inspires unity rather than division. The commission has stated that originators of false information will face severe consequences, and nodal officers across states will remove unlawful content. Parties are encouraged to engage in issue-based campaigning and refrain from disseminating unverified or misleading advertisements.
Conclusion
The steps taken by the ECI have been designed to empower citizens and help them affirm the accuracy and authenticity of content before amplifying it. All citizens must be well-educated about the entire election process in India. This includes information on how the electoral rolls are made, how candidates are monitored, a complete database of candidates and candidate backgrounds, party manifestos, etc. For informed decision-making, active reading and seeking information from authentic sources is imperative. The partnership between government agencies, tech platforms and civil societies helps develop strategies to counter the widespread misinformation and promote online safety in general, and electoral integrity in particular.
References
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2016941#:~:text=To%20combat%20the%20spread%20of,the%20ongoing%20General%20Elections%202024
- https://www.business-standard.com/elections/lok-sabha-election/ls-elections-2024-ec-uses-social-media-to-nudge-electors-to-vote-124040700429_1.html
- https://blog.google/intl/en-in/company-news/outreach-initiatives/supporting-the-2024-indian-general-election/
- https://blog.google/intl/en-in/partnering-indias-success-in-a-new-digital-paradigm/