Phishing attacks shift from people to AI

26 February, 58120, 02:01 PM
  |     Source: ITWeb
  |     Author: Romantia Mashabane
Richard Frost, head of technology solutions and consulting at Armata Cyber Security. AI-to-AI phishing is emerging as a potential serious risk in corporate environments, signalling a shift in how cyber attacks are designed and executed, according to security experts. Instead of manipulating employees into clicking malicious links or disclosing credentials, attackers are now targeting AI systems directly by embedding hidden instructions into everyday e-mails and documents that AI assistants automatically process. Jeeten Bhoora, software developer and founder of Siza AI, says these attacks mark a shift in phishing. "Traditional phishing relies on deceiving human users to gain unauthorised access; AI-targeted attacks shift the focus. Here, the attacker designs payloads specifically to deceive the broader AI system." Bhoora explains that such attacks typically involve two automated systems: the attacker, usually an AI agent using generative tools; and the target, an AI service handling user requests or background operations. "These attacks are designed to bypass the guardrails of large language models (LLMs) used within corporate systems and extract sensitive information or trigger unauthorised actions," he says. Lionel Dartnall, country manager for SADC at Check Point Software Technologies, highlights why conventional security struggles. "Traditional security looks for 'known bad' codes, but AI-to-AI attacks use natural language, which is indistinguishable from legitimate communication to most automated filters." He adds that attackers can hide instructions using invisible text, metadata fields or by distributing them across multiple messages, making detection even harder. Bhoora also details the technical methods behind these attacks. "The principles behind steganography are used as a foundation to design malicious e-mails, which means hiding malicious prompts in plain sight, yet making them undetectable to a human or even a machine," he says. "Attackers can leverage generative AI to embed a malicious payload within the pixel layout of an e-mail signature. They can also exploit the multi-message context memory of an LLM to distribute a payload across several e-mails. When hidden prompt injection is the goal, delivery becomes the primary focus." Dartnall points to vulnerabilities in AI systems themselves. He explains that many LLMs lack effective role separation, so AI can't reliably distinguish between trusted instructions and untrusted data. Modern assistants that use retrieval-augmented generation can "fetch" context from e-mails in an inbox or files, allowing a seemingly innocuous message to trigger a completely unrelated action. "An attacker can 'park' a malicious instruction in a benign-looking e-mail that sits in your inbox until the AI scans it for a completely unrelated task, triggering the attack," Dartnall says. Richard Frost, head of technology solutions and consulting at Armata Cyber Security, stresses the real-world consequences: "Attackers frequently intercept ongoing e-mail threads between companies and their customers and then insert fraudulent instructions that appear legitimate. There have been incidents where an attacker used a compromised customer mailbox to send a fake invoice requesting the remaining balance on a transaction while contacting the supplier to request a refund of the original deposit. The company hadn't been breached, but both the supplier and the company were financially affected." He notes that global research reflects a growing concern around AI-related risk, citing the World Economic Forum's 2025 Global Cybersecurity Outlook, which says 66% of organisations expect AI and machine learning to create new vulnerabilities, and 47% believe AI will drive increasingly sophisticated attacks. Additionally, the Proofpoint 2025 report found a more than 1 300% increase in attacks using AI or automation. All three experts agree that layered protections are essential. Bhoora recommends starting with dedicated checkpoints. "More advanced methods, like isolating flows from checkpoint to checkpoint, similar to air-gapping, can lead to better protection of data and allow for more accurate monitoring of anomalies. "Additionally, companies can implement more sophisticated, granular levels of protection, such as cost-efficient, memory-friendly, supervised machine learning monitored checkpoints that help them better comply with data laws and their own company-specific data policies, preventing sensitive data leaks between their internal data pipelines, be it malicious or accidental." Dartnall advocates prompt segmentation, AI-aware content filtering and strict limits on what AI agents can do without human approval.
phishing
artificial intelligence
email
computer security
large language model
cyberattack
payload (computing)
generative model
software development
generative artificial intelligence

IIM Lucknow Placements 2026: Highest salary package at Rs 1 Crore; top global firms hire over 550 students

JEE Main 2026 Session 2 Registration Ends Tomorrow; NTA to Hold Exams from April 2

AI develops easily understandable solutions for unusual experiments in quantum physics

Celebrate Ramadan with Premium Entertainment: Exclusive Offers on LG TVs | Weekly Voice

Porter is latest Canadian airline to restart service to Mexico - Medicine Hat News

Patient expectations in 2030: Why healthcare technology decisions made today, matter

NCERT updates Class 8 textbook, flags 'corruption in judiciary', 'massive backlog' in courts as key challenges

Egypt: Bonyan reports robust results for 2025; rental revenues up 20.2% YoY

World News | Parliamentary Commitee Applauds 'Grand Success' of India AI Impact Summit, Condemns Feb 20 Protest Stunt | LatestLY

Fitch Solutions forecasts Egypt's gas production to rebound 8% in 2026

Lagos leads as Nigeria records 120+ AI startups

Explained: How IBM share price's 13% plunge on Anthropic's COBOL disruption fears sparked bloodbath in TCS, Infosys, Wipro & other IT stocks

IIM Lucknow Placements 2026: Highest Package Rs 1 Cr; Average Salary Rs 33.2 LPA

Valentine's Day party at Bengaluru villa leads to extortion FIR, gang rape complaint

IGNOU to Hold Mega On-Campus Recruitment Drive on Feb 25; Leading Companies to Join

X account of Middle East Eye journalist covering India-Israel ties blocked

New Strong Sell Stocks for February 24th

China's humanoid robot boom: What to know

BHASHINI Unveils Voice AI Stack VoicERA At AI Impact Summit

Druva launches Deep Analysis Agents to cut forensic investigations from days to minutes - SiliconANGLE