Hyderabad's Cyber Crimes Police warned on 23 Feb 2026 about AI‑driven biometric scams where fraudsters capture victims' facial and voice data via video calls to create deep‑fakes for later fraud. Citizens are advised to avoid handling strangers' phones and to report incidents via helpline 1930.
Overview

On 23 February 2026, the Cyber Crimes Police, Hyderabad issued an alert about the rising menace of AI-driven biometric scams. Fraudsters target individuals in public spaces such as malls, metro stations, and markets, posing as elderly or middle-aged persons needing assistance with pensions, subsidies, or mobile-phone operations. The scam exploits video-call or screen-recording features to capture the victim's face and voice within seconds, enabling the creation of deep-fake impersonations for fraud and social engineering.

Key Developments

Development 1: Perpetrators approach strangers in crowded places, requesting help to check pension or subsidy details, or to operate a smartphone, claiming limited tech-savviness.

Development 2: The fraudster's phone is often already on a video call or in screen-recording mode with permissions enabled, allowing accomplices to instantly record the victim's facial and vocal biometrics.

Development 3: Captured biometric data is fed into generative AI tools to produce realistic voice-overs and deep-fake videos, which are later used for financial fraud, identity theft, or bypassing verification systems.

Important Facts

Fact 1: The alert emphasizes that the scam does not involve direct monetary theft at the point of contact; the real loss occurs later, when AI-fabricated identities are exploited.

Fact 2: Victims are urged to immediately contact the national cyber helpline 1930 or lodge a complaint on the national cybercrime portal upon suspicion.

UPSC Relevance

This issue intersects with several UPSC syllabus components: GS Paper II (Governance, Technology and Security: cyber-security, digital governance, AI ethics), GS Paper III (Economy and Development: impact of fraud on financial inclusion), and GS Paper IV (Ethics, Integrity and Aptitude: citizen responsibility and public awareness). Questions may probe the legal framework (the IT Act, 2000 and its amendments), policy measures for AI regulation, or the role of police and cyber-cells in safeguarding digital citizens.
Way Forward

Policymakers should strengthen regulations on AI-generated synthetic media, mandate explicit consent for biometric data capture, and enhance public-awareness campaigns targeting vulnerable groups. Coordination between cyber-cells, telecom operators, and AI-ethics committees can create real-time monitoring mechanisms to detect and curb deep-fake misuse. Continuous up-skilling of law-enforcement personnel on emerging AI threats is essential to stay ahead of sophisticated fraudsters.