Google’s AI Call Screener: A Glimpse Into the Future of Automated Call Management?
Aditi Pangotra
Research Analyst, Policy & Advocacy, CyberPeace
PUBLISHED ON
Nov 19, 2024
Introduction
AI has penetrated most industries and telecom is no exception. According to a survey by Nvidia, enhancing customer experiences is the biggest AI opportunity for the telecom industry, with 35% of respondents identifying customer experiences as their key AI success story. Further, the study found nearly 90% of telecom companies use AI, with 48% in the piloting phase and 41% actively deploying AI. Most telecom service providers (53%) agree or strongly agree that adopting AI would provide a competitive advantage. AI in telecom is primed to be the next big thing and Google has not ignored this opportunity. It is reported that Google will soon add “AI Replies” to the phone app’s call screening feature.
How Does The ‘AI Call Screener’ Work?
Given the busy lives people lead nowadays, Google has created a helpful tool for managing calls amidst packed schedules. Google Pixel smartphones now ship with AI-powered calling tools that can help with call screening, note-taking during important calls, filtering and declining spam, and, most importantly, ending the frustration of being on hold.
In the official Google Phone app, users can respond to a caller through “new AI-powered smart replies”. While “contextual call screen replies” are already part of the app, this new feature allows users to not have to pick up the call themselves.
With this new feature, Google Assistant will be able to respond to the call with a customised audio response.
The Google Assistant, responding to the call, will ask the caller’s name and the purpose of the call. If they are calling about an appointment, for instance, Google will show the user suggested responses specific to that call, such as ‘Confirm’ or ‘Cancel appointment’.
Google will build on the call-screening feature by using a “multi-step, multi-turn conversational AI” to suggest replies more appropriate to the nature of the call. Google’s Gemini Nano AI model is set to power this new feature and enable it to handle phone calls and messages even if the phone is locked and respond even when the caller is silent.
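The multi-turn flow described above can be pictured with a small sketch. This is a purely hypothetical illustration, not Google's actual implementation: the intent keywords and reply texts are assumptions made for the example.

```python
# Hypothetical sketch of how a call-screening assistant might map a
# caller's stated purpose to suggested one-tap replies. The intent
# keywords and reply options below are illustrative assumptions,
# not Google's real categories.

def suggest_replies(caller_statement: str) -> list[str]:
    """Return suggested one-tap replies for a screened call."""
    statement = caller_statement.lower()
    if "appointment" in statement:
        # Mirrors the article's example of appointment-specific replies
        return ["Confirm", "Cancel appointment", "Request a different time"]
    if "delivery" in statement:
        return ["Leave at the door", "Call back later"]
    if any(word in statement for word in ("offer", "prize", "warranty")):
        return ["Report as spam", "Decline"]
    # Fallback when no intent is recognised
    return ["I'll call you back", "Tell me more"]
```

In a real multi-turn system, each user-selected reply would feed back into the conversation state, letting the assistant refine its next round of suggestions.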
Benefits of AI-Powered Call Screening
This AI-powered call screening feature offers multiple benefits:
The AI feature will enhance user convenience by reducing the disruptions caused by spam calls. This will, in turn, increase productivity.
It will increase call privacy and security by filtering high-risk calls, thereby protecting users from attempts of fraud and cyber crimes such as phishing.
The new feature can potentially increase efficiency in business communications by screening for important calls, delegating routine inquiries and optimising customer service.
Key Policy Considerations
Adhering to transparent, ethical, and inclusive policies while anticipating regulatory changes can establish Google as a responsible innovator in AI call management. Some key considerations for AI Call Screener from a policy perspective are:
The AI call screener will process and transcribe sensitive voice data; the data-handling policies for this feature therefore need to be transparent to reassure users of compliance with applicable laws.
The ethical use of AI and the mitigation of bias remain open challenges. The underlying algorithms must be designed to avoid bias and to reflect inclusivity in their understanding of language.
The data the screener processes is subject to global and regional data privacy regulations such as the GDPR, the DPDP Act and the CCPA, which impose consent requirements for recording or transcribing calls and place emphasis on user rights.
Conclusion: A Balanced Approach to AI in Telecommunications
Google’s AI Call Screener offers a glimpse into the future of automated call management, reshaping customer service and telemarketing by streamlining interactions and reducing spam. As this technology evolves, businesses may adopt similar tools, balancing customer engagement with fewer unwanted calls. AI-driven screening will also impact call centres, shifting roles toward complex, human-centred interactions while automation handles routine calls, with potential knock-on effects on support and managerial roles. Ultimately, as AI call management grows, responsible design and transparency will be needed to ensure a seamless, beneficial experience for all users.
Recently, our team came across a widely circulated post on X (formerly Twitter), claiming that the Indian government would abolish paper currency from February 1 and transition entirely to digital money. The post, designed to resemble an official government notice, cited the absence of advertisements in Kerala newspapers as supposed evidence—an assertion that lacked any substantive basis.
Claim:
The Indian government will ban paper currency from February 1, 2025, and adopt digital money as the sole legal tender to fight black money.
The claim that the Indian government will ban paper currency and transition entirely to digital money from February 1 is completely baseless and lacks any credible foundation. Neither the government nor the Reserve Bank of India (RBI) has made any official announcement supporting this assertion.
Furthermore, the supposed evidence—the absence of specific advertisements in Kerala newspapers—has been misinterpreted and holds no connection to any policy decisions regarding currency.
During our research, we found that the viral image was a mock-up of what a newspaper from the year 2050 might look like, not a statement that banknotes will be banned and replaced with digital currency.
Such a massive change would necessitate clear communication to the public, major infrastructure improvements, and precise policy announcements, none of which have happened. The rumour has spread widely on social media without a shred of evidence, and its source is unreliable; the claim is therefore completely false.
We also found a clip from a news channel, posted by asianetnews on Instagram, that supports our findings.
We found that the event in question will be held at Jain Deemed-to-be University, Kochi, from 25th January to 1st February. After the advertisement went viral and drew criticism, the director of "The Summit of Future 2025" apologised for the confusion. According to him, it was a fictional future news story published with a disclaimer, which some readers misread.
The X handle of Summit of Future 2025 also posted a video of the official statement from Dr Tom.
Conclusion:
The claim that the Indian government will discontinue paper currency by February 1 and move entirely to digital money is entirely false. There is no government announcement, nor any evidence, to support it. We urge everyone to rely on authoritative sources for accurate information and to stay alert to misinformation online.
In an era when misinformation spreads like wildfire across the digital landscape, the need for effective strategies to counteract these challenges has grown exponentially in a very short period. Prebunking and Debunking are two approaches for countering the growing spread of misinformation online. Prebunking empowers individuals by teaching them to discern between true and false information and acts as a protective layer that comes into play even before people encounter malicious content. Debunking is the correction of false or misleading claims after exposure, aiming to undo or reverse the effects of a particular piece of misinformation. Debunking includes methods such as fact-checking, algorithmic correction on a platform, social correction by an individual or group of online peers, or fact-checking reports by expert organisations or journalists. An integrated approach which involves both strategies can be effective in countering the rapid spread of misinformation online.
Brief Analysis of Prebunking
Prebunking is a proactive practice that seeks to rebut erroneous information before it spreads. The goal is to train people to critically analyse information and develop ‘cognitive immunity’ so that they are less likely to be misled when they do encounter misinformation.
The Prebunking approach, grounded in Inoculation theory, teaches people to recognise, analyse and avoid manipulation and misleading content so that they build resilience against the same. Inoculation theory, a social psychology framework, suggests that pre-emptively conferring psychological resistance against malicious persuasion attempts can reduce susceptibility to misinformation across cultures. As the term suggests, the approach helps the mind develop resistance in the present against influence attempts it may encounter in the future. Just as medical vaccines or inoculations help the body build resistance to future infections by administering weakened doses of the harmful agent, inoculation theory seeks to teach people to distinguish fact from fiction through exposure to examples of weak, dichotomous arguments, manipulation tactics like emotionally charged language, case studies that draw parallels between truths and distortions, and so on. In showing people the difference, inoculation theory teaches them to be on the lookout for misinformation and manipulation even, or especially, when they least expect it.
The core difference between Prebunking and Debunking is that while the former is preventative and seeks to provide a broad-spectrum cover against misinformation, the latter is reactive and focuses on specific instances of misinformation. While Debunking is closely tied to fact-checking, Prebunking is tied to a wider range of specific interventions, some of which increase motivation to be vigilant against misinformation and others increase the ability to engage in vigilance with success.
There is much to be said in favour of the Prebunking approach because these interventions build the capacity to identify misinformation and recognise red flags. However, their success in practice may vary. It might be difficult to scale up Prebunking efforts and ensure their reach to a larger audience. Sustainability is critical in ensuring that Prebunking measures maintain their impact over time. Continuous reinforcement and reminders may be required to ensure that individuals retain the skills and information they gained from the Prebunking training activities. Misinformation tactics and strategies are always evolving, so it is critical that Prebunking interventions are also flexible and agile and respond promptly to developing challenges. This may be easier said than done, but with new misinformation and cyber threats developing frequently, it is a challenge that has to be addressed for Prebunking to be a successful long-term solution.
Encouraging people to be actively cautious while interacting with information, acquire critical thinking abilities, and reject the effect of misinformation requires a significant behavioural change over a relatively short period of time. Overcoming ingrained habits and prejudices, and countering a natural reluctance to change is no mean feat. Developing a widespread culture of information literacy requires years of social conditioning and unlearning and may pose a significant challenge to the effectiveness of Prebunking interventions.
Brief Analysis of Debunking
Debunking is a technique for identifying and informing people that certain news items or information are incorrect or misleading. It seeks to lessen the impact of misinformation that has already spread. The most popular kind of Debunking occurs through collaboration between fact-checking organisations and social media businesses. Journalists or other fact-checkers discover inaccurate or misleading material, and social media platforms flag or label it. Debunking is an important strategy for curtailing the spread of misinformation and promoting accuracy in the digital information ecosystem.
Debunking interventions are crucial in combating misinformation. However, there are certain challenges associated with the same. Debunking misinformation entails critically verifying facts and promoting corrected information. However, this is difficult owing to the rising complexity of modern tools used to generate narratives that combine truth and untruth, views and facts. These advanced approaches, which include emotional spectrum elements, deepfakes, audiovisual material, and pervasive trolling, necessitate a sophisticated reaction at all levels: technological, organisational, and cultural.
Furthermore, it is impossible to debunk all misinformation at any given time, which means it is impossible to protect everyone at all times: at least some innocent netizens will fall victim to manipulation despite our best efforts. Debunking is inherently reactive in nature, addressing misinformation after it has already spread extensively. This reactionary method may be less successful than proactive strategies such as Prebunking from the perspective of total harm done. Misinformation producers operate swiftly and unexpectedly, making it difficult for fact-checkers to keep up with the rapid dissemination of erroneous or misleading information. Repeated exposure to fact-checks may be needed to prevent false beliefs from taking hold, implying that a single Debunking may not be enough to rectify misinformation. Debunking also requires time and resources, and it is not possible to disprove every piece of misinformation that circulates at any particular moment. This constraint may cause certain misinformation to go unchecked, potentially leading to unexpected effects. Misinformation on social media can spread quickly and may go viral faster than Debunking pieces or articles. This leads to a situation in which misinformation spreads like a virus, while the antidote of debunked facts struggles to catch up.
Prebunking vs Debunking: Comparative Analysis
Prebunking interventions seek to educate people to recognise and reject misinformation before they are exposed to actual manipulation. Prebunking offers tactics for critical examination, lessening the individuals' susceptibility to misinformation in a variety of contexts. On the other hand, Debunking interventions involve correcting specific false claims after they have been circulated. While Debunking can address individual instances of misinformation, its impact on reducing overall reliance on misinformation may be limited by the reactive nature of the approach.
CyberPeace Policy Recommendations for Tech/Social Media Platforms
With the rising threat of online misinformation, tech/social media platforms can adopt an integrated strategy that deploys and supports both Prebunking and Debunking initiatives across all platforms, empowering users to recognise manipulative messaging through Prebunking and to assess the accuracy of information through Debunking interventions.
Gamified Inoculation: Tech/social media companies can encourage gamified inoculation campaigns, a competence-oriented approach to Prebunking misinformation. This can be effective in immunising recipients against subsequent exposure, empowering people to build the competencies needed to detect misinformation through gamified interventions.
Promotion of Prebunking and Debunking Campaigns through Algorithmic Mechanisms: Tech/social media platforms may ensure that algorithms prioritise the distribution of Prebunking materials to users, boosting educational content that strengthens resistance to misinformation. Platform operators should also incorporate algorithms that prioritise the visibility of Debunking content in order to combat the spread of erroneous information and deliver proper corrections. Together, these mechanisms can help both Prebunking and Debunking methods reach a bigger or more targeted audience.
User Empowerment to Counter Misinformation: Tech/social media platforms can design user-friendly interfaces that allow people to access Prebunking materials, quizzes, and instructional information to help them improve their critical thinking abilities. Furthermore, they can incorporate simple reporting tools for flagging misinformation, as well as links to fact-checking resources and corrections.
Partnership with Fact-Checking/Expert Organisations: Tech/social media platforms can facilitate Prebunking and Debunking initiatives and campaigns by collaborating with fact-checking/expert organisations, promoting such initiatives at a larger scale and ultimately fighting misinformation through joint initiatives.
Conclusion
The threat of online misinformation is only growing with every passing day, so deploying effective countermeasures is essential. Prebunking and Debunking are two such interventions. To sum up: Prebunking interventions try to increase resilience to misinformation, proactively lowering susceptibility to erroneous or misleading information and addressing broader patterns of misinformation consumption, while Debunking is effective in correcting a particular piece of misinformation and having a targeted impact on belief in individual false claims. An integrated approach involving both methods, along with joint initiatives by tech/social media platforms and expert organisations, can ultimately help in fighting the rising tide of online misinformation and establishing a resilient online information landscape.
In a setback to the Centre, the Bombay High Court on Friday 20th September 2024, struck down the provisions under IT Amendment Rules 2023, which empowered the Central Government to establish Fact Check Units (FCUs) to identify ‘fake and misleading’ information about its business on social media platforms.
Chronological Overview
On 6th April 2023, the Ministry of Electronics and Information Technology (MeitY) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2023 (IT Amendment Rules, 2023). These rules introduced new provisions to establish a fact-checking unit with respect to “any business of the central government”. The amendment was made in exercise of the powers conferred by Section 87 of the Information Technology Act, 2000 (IT Act).
On 20 March 2024, the Central Government notified the Press Information Bureau (PIB) as FCU under rule 3(1)(b)(v) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules 2023 (IT Amendment Rules 2023).
The next day, on 21st March 2024, the Supreme Court stayed the Centre's decision notifying the PIB as FCU, considering the pendency of the proceedings before the High Court of Judicature at Bombay. A detailed analysis by CyberPeace of the Supreme Court's stay decision can be accessed here.
In the latest development, the Bombay High Court on 20th September 2024, struck down the provisions under IT Amendment Rules 2023, which empowered the Central Government to establish Fact Check Units (FCUs) to identify ‘fake and misleading’ information about its business on social media platforms.
Brief Overview of Bombay High Court decision dated 20th September 2024
Justice AS Chandurkar was appointed as the third judge after a split verdict in January 2024 by a division bench consisting of Justices Gautam Patel and Neela Gokhale. As the tie-breaker judge, Justice Chandurkar delivered the decision striking down the provisions for setting up a Fact Check Unit under the IT Amendment Rules 2023. Striking down the Centre's proposed fact check unit provision, Justice Chandurkar also opined that there was no rationale for undertaking an exercise to determine whether information related to the business of the Central Government was fake, false or misleading when in digital form while not doing the same when such information was in print. It was also contended that there is no justification for introducing an FCU only in relation to the business of the Central Government. Rule 3(1)(b)(v) has a serious chilling effect on the exercise of the freedom of speech and expression under Article 19(1)(a) of the Constitution, since the communication of the view of the FCU will result in the intermediary simply pulling down the content for fear of consequences or of losing the safe harbour protection given under the IT Act.
Justice Chandurkar held that the expressions ‘fake, false or misleading’ are ‘vague and overbroad’, and that the ‘test of proportionality’ is not satisfied. Rule 3(1)(b)(v) was violative of Articles 14, 19(1)(a) and 19(1)(g) of the Constitution and is “ultra vires”, or beyond the powers, of the IT Act.
Role of Expert Organisations in Curbing Mis/Disinformation and Fake News
In light of the recent developments and the rising incidence of Mis/Disinformation and Fake News, it becomes significantly important that we all stand together in the fight against these challenges. Action against Mis/Disinformation and Fake News should be strengthened by collective efforts: expert organisations like the CyberPeace Foundation play a key role in enabling and encouraging netizens to exercise caution and rely on authenticated sources, rather than relying solely on a government FCU to block content.
Mis/Disinformation and Fake News should be identified, countered and stopped by netizens at the very first stage of their spread. The Bombay High Court's decision to strike down the provision for setting up the FCU suggests that, in the judiciary's view, the government did not adequately justify its intention to address misinformation relating solely to its own business/operations.
It is high time to exercise collective efforts against Mis/Disinformation and Fake News and support expert organisations actively engaged in proactive measures and campaigns targeting these challenges, specifically in the online information landscape. CyberPeace actively publishes fact-checking reports and insights on Prebunking and Debunking, conducts expert sessions, and takes various key steps aimed at empowering netizens to build cognitive defences: to recognise suspect information, disregard misleading claims, and prevent further spread, helping ensure a truthful online information landscape.
Your institution or organization can partner with us in any one of our initiatives or policy research activities and complement the region-specific resources and talent we need.