Infinigate UKI and OpenOrigins Partner to bring verifiable media authentication to the UK&I channel
Infinigate UKI and OpenOrigins Partner to bring verifiable media authentication to the UK&I channel London, UK -- 24 February 2026. Infinigate UK & Ireland, the value-added distributor specialising in cybersecurity, secure networks and secure cloud today announced a new vendor partnership with OpenOrigins, a decentralised authentication platform designed to verify the authenticity of digital media and help organisations combat deepfakes and AI-generated content. The partnership will initially focus on UK & Ireland, enabling Infinigate's channel community to address an urgent and fast-growing challenge around establishing trust in the images and videos used across their organisation. Industry analysts expect deepfakes to become mainstream in 2026, with the threat shifting from reputational damage to direct financial gain by criminals. As a result, organisations are increasing their investment in deepfake detection technologies, with spending predicted to rise by 40% in 2026 across industries and use cases. (forrester.com) For channel partners, OpenOrigins opens up a practical, high-value conversation across a broad set of use cases, including: Ari Abelson, Co-Founder & Chief Strategy Officer at OpenOrigins, said: "Safeguarding non-synthetic content by establishing provable provenance has become essential to protect businesses from corporate fraud. OpenOrigins' approach overcomes the limitations of AI detectors and other reactive solutions that are no longer able to meet the escalating challenge. Partnering with a cybersecurity specialist such as Infinigate fits perfectly with our intent to build trust and security back into the Internet," said Ari Abelson, Co-Founder & Chief Strategy Officer at OpenOrigins. Justin Griffiths, RVP Infinigate UK&I, added: "OpenOrigins brings a smart and dependable way to protect data integrity at a time when authenticity is becoming a board-level concern. Together, we'll help our partners take a provenance-led approach to trusted media -- backed by the enablement, technical support and route-to-market services they expect from Infinigate -- while unlocking new growth opportunities." About the Infinigate Group The Infinigate Group, the leading technology platform and trusted advisor in Cybersecurity, Cloud & Network Infrastructure sets itself apart for its deep technical expertise, delivering locally tailored solutions and services to SMB and enterprise customers across EMEA and ANZ. Relying on a strong central supply chain and an extensive portfolio of leading-edge solutions, Infinigate sparks growth for vendor and channel partners. OpenOrigins is building the trust layer for the internet, giving authentic people and media a permanent, verifiable path in the AI-driven internet. Powered by Cambium, a novel Internet-scale provenance protocol, we enable any digital asset to prove its origin independently of custody, distribution, or storage. This ensures authenticity persists wherever data travels across the digital world. For more information, please visit https://www.openorigins.com/.
Where Does India Stand in the Global AI Race?
When the United States published its AI Action Plan in July 2025, it framed artificial intelligence (AI) as a contest for global dominance. Whoever builds the largest AI ecosystem, the document argued, will set global AI standards and collect broad economic and military benefits. The line raises a question that India has not yet answered clearly: in a world where AI is becoming a contest of power, what position is it actually building towards? Power in the global AI economy is not evenly distributed. It maps onto a layered hierarchy. At the base sit advanced semiconductors and fabrication plants. Above that are cloud platforms and data centres that supply compute. At the top are foundation models and the applications built on them. Each layer confers a different kind of influence, but those who lead in models shape AI standards and control access. The United States occupies several of these layers at once -- it holds the dominant cloud platforms, the frontier model labs (OpenAI, Anthropic), and deep integration with the semiconductor design firms (Nvidia, AMD) that supply the rest of the world. China has responded by investing heavily in domestic semiconductor manufacturing capacity and building its own widely used models. Taiwan and South Korea have concentrated on advanced chipmaking. Each country has made a different strategic bet. India's comparative advantage has historically been in delivering IT and IT-enabled services at scale -- a skilled workforce at competitive cost, combined with a public digital infrastructure that enables rapid adoption. Anthropic's country brief on India recognises it as one of the top global markets for Claude, second only to the United States. The Stanford AI Index's Global Vibrancy Tool (2025), which assesses research, investment, talent, policy, and economic activity, placed India behind the US and China with particular strength in talent and adoption. India's IT Minister, Ashwini Vaishnaw, cited this data at Davos. But adoption and vibrancy are not the same as leverage in the AI supply chain. Are we building or renting? India's R&D spending stands at approximately 0.6 per cent of GDP, compared to 3 to 4 per cent in most innovation-driven economies, according to analysts. High-end talent retention has compounded the gap. Sriram Krishnan, the Senior White House Policy Adviser on AI, was born in Chennai and educated at SRM University before emigrating to the United States, where he became a US citizen in 2016. Karandeep Anand, CEO of Character.AI, was born in India and attended IIIT Hyderabad before completing an MBA at Northwestern University. Both were named among Time magazine's "Architects of AI", its 2025 Person of the Year collective. India, ironically, seems to consistently produce elite technical talent that ends up powering someone else's AI agenda. As Kak and Kapoor observe, many low- and middle-income countries believe that failing to participate in AI will deepen their marginalisation. The Indian government appears to have absorbed that anxiety. In her 2023-24 Budget speech, Union Finance Minister Nirmala Sitharaman announced three Centres of Excellence (CoEs) in AI. In 2024-25, these CoEs reportedly received Rs.255 crore (roughly $28 million). Also Read | The Indian economy's weird state of suspension The sum is not trivial, but it does not build the kind of research infrastructure capable of shifting India's position in the frontier model hierarchy. Training compute costs for GPT-4 were estimated at $78 million and for Gemini Ultra at approximately $191 million. Even accounting for differences in purchasing power parity, the gap is significant. A more revealing lens is the split between direct and indirect AI spending in the Union Budget. Direct AI spending covers schemes explicitly earmarked for AI such as the IndiaAI Mission under the Ministry of Electronics and IT (MeitY). Indirect AI spending covers AI-adjacent or AI-enabling schemes: compute capacity, semiconductors, cybersecurity, and the underlying infrastructure that makes AI development possible at scale. Among these numbers, the IndiaAI Mission allocation in 2026-27 stands out. Despite being the flagship AI programme, its allocation was halved to Rs.1,000 crore from the previous year. In 2024-25, 96 per cent of the budgeted amount remained unspent, according to the Actuals released in the latest budget. This is the same fund being used to co-sponsor the AI summit in Delhi. Parliamentary responses describe the IndiaAI Mission as having a total outlay of Rs.10,371.92 crore over five years, with the largest single pillar being IndiaAI Compute Capacity at Rs.4,563.36 crore. Spending on compute outpaces the mission's allocations for foundation models, datasets, and skilling combined. The pattern is consistent: compute and AI-enabling infrastructure take priority over building models. Taken together, these budgetary choices suggest that India is orienting itself towards becoming a destination for running AI workloads, not for producing the systems that run on them. The policy move On February 1, Reuters reported that India would offer a tax holiday -- zero taxes until 2047 -- to foreign firms using Indian data centres to provide cloud services to global clients. A TechCrunch report described the move as a bid to attract the next wave of AI computing investment, noting that power shortages and water stress remain real constraints in a country where reliable electricity and clean water are still unavailable to large portions of the population. Investment announcements around and after the AI Impact Summit reinforce the same compute-and-hosting logic. Reliance Industries outlined $109.8 billion for AI and data infrastructure and is building data centres in Jamnagar. The Adani Group announced a $100 billion commitment for AI data centres by 2035 and is constructing campuses in Visakhapatnam and Noida. Yotta Data Services committed $2 billion to an AI computing hub using Nvidia chips. Google, Microsoft, and Amazon together committed a combined $68 billion in AI and cloud infrastructure investment in India by 2030, according to Reuters. OpenAI has also partnered with the Tata group to secure 100 megawatts of AI-ready data centre capacity, with an ambition to reach 1 gigawatt. What these announcements collectively signal is that India is presenting itself -- and being treated -- as a destination with hosting capacity, jurisdictional stability, and market access. The timing matters. India commands roughly 55 per cent of the global IT outsourcing market. Generative AI has introduced a structural risk to that position. A 2025 Gartner estimate projected that nearly 80 per cent of customer queries will be resolved by AI agents by 2029. Markets have begun to price this risk: in February 2026, Indian IT stocks fell sharply amid investor concern that AI-driven automation would erode the outsourcing model. Against that backdrop, becoming a data centre and compute hub appears to be a considered strategic response -- trading one services model for another. The Vast.ai booth at the AI Impact Summit in New Delhi, India, on February 20, 2026. India's comparative advantage has historically been in delivering IT and IT-enabled services at scale. | Photo Credit: Ruhani Kaur/Bloomberg On February 20, 2026, India formalised a further alignment. On the final day of the India AI Impact Summit, India signed the Pax Silica Declaration, becoming the eleventh signatory to the US-led initiative. Focused on securing supply chains for critical minerals, semiconductors, and AI infrastructure, Pax Silica spans the technology stack from rare earth extraction to frontier AI deployment. By joining, India has aligned itself with a geopolitically structured framework in which hosting capacity, supply chain reliability, and proximity to US technology partners are understood as strategic assets. What indispensability actually requires The US model in AI combines dominant private-sector "hyperscalers" -- large-scale cloud service providers -- with sustained state-backed research and defence funding. DARPA (Defense Advanced Research Projects Agency) has backed AI-related programmes for years, including work on explainable AI, pushing frontier capability alongside strategic applications. China's approach has been more centralised: its 2017 State Council plan set objectives through 2030 and emphasised building an integrated industrial chain, national standards, and domestic capability at every level of the stack. A smaller country is instructive here. Taiwan has made itself indispensable not by attempting to compete across the AI hierarchy but by controlling a chokepoint within it -- advanced semiconductor manufacturing. A dominant share of the world's most advanced chipmaking capacity is concentrated in Taiwan and South Korea, which is why Taiwan's stability carries global economic significance far beyond its size. Also Read | AI is not India's problem. Governance is India cannot replicate Taiwan's path -- the capital requirements and decades of accumulated process knowledge involved in advanced chipmaking are beyond what India can reasonably build in a short window. But it can take seriously the underlying logic: indispensability requires owning something that others cannot easily substitute. At present, India's compute-and-hosting strategy does not meet that test. Data centres can be built in many jurisdictions. Tax holidays are replicable. Market access is real, but not unique. All the indicators point in the same direction. The AI summit's framing emphasised deployment and investment. The mission architecture prioritises compute capacity. The 2047 tax holiday is designed to attract foreign firms to route global cloud services through India. The infrastructure commitments run into hundreds of billions of dollars. MeitY Secretary S. Krishnan, speaking on the first day of the summit, encouraged private investment in data centres and AI-driven compute infrastructure. Taken together, this is a coherent positioning: India as a hub at the compute-and-services layer of the global AI economy. The strategy plays to India's genuine strengths. The open question is whether it is designed as a base from which to move up the hierarchy, or as the destination itself. If the former, the policy agenda must extend well beyond data centres and subsidised compute. It requires sustained investment in research institutions, conditions that retain AI researchers within the country, and long-horizon R&D funding that builds foundational capacity rather than maintaining service competitiveness. None of those conditions are currently visible in the Budget. India rode the Y2K wave into IT outsourcing dominance but consistently captured the labour-arbitrage tier of the stack rather than building the IP layer above it. No Indian firm owns a global operating system, a hyperscaler cloud, or a dominant enterprise software suite. The sector grew large by servicing others' products, not by producing its own. The data centre and compute-hosting bet follows the same logic: enter at the execution layer, scale on cost and capacity, and defer the harder question of whether to move up. Summing up, this appears to be India's outsourcing moment for AI. Sayamsiddha is PhD Student at The New School for Social Research, New York. Featured Comment
Phishing attacks shift from people to AI
Richard Frost, head of technology solutions and consulting at Armata Cyber Security. AI-to-AI phishing is emerging as a potential serious risk in corporate environments, signalling a shift in how cyber attacks are designed and executed, according to security experts. Instead of manipulating employees into clicking malicious links or disclosing credentials, attackers are now targeting AI systems directly by embedding hidden instructions into everyday e-mails and documents that AI assistants automatically process. Jeeten Bhoora, software developer and founder of Siza AI, says these attacks mark a shift in phishing. "Traditional phishing relies on deceiving human users to gain unauthorised access; AI-targeted attacks shift the focus. Here, the attacker designs payloads specifically to deceive the broader AI system." Bhoora explains that such attacks typically involve two automated systems: the attacker, usually an AI agent using generative tools; and the target, an AI service handling user requests or background operations. "These attacks are designed to bypass the guardrails of large language models (LLMs) used within corporate systems and extract sensitive information or trigger unauthorised actions," he says. Lionel Dartnall, country manager for SADC at Check Point Software Technologies, highlights why conventional security struggles. "Traditional security looks for 'known bad' codes, but AI-to-AI attacks use natural language, which is indistinguishable from legitimate communication to most automated filters." He adds that attackers can hide instructions using invisible text, metadata fields or by distributing them across multiple messages, making detection even harder. Bhoora also details the technical methods behind these attacks. "The principles behind steganography are used as a foundation to design malicious e-mails, which means hiding malicious prompts in plain sight, yet making them undetectable to a human or even a machine," he says. "Attackers can leverage generative AI to embed a malicious payload within the pixel layout of an e-mail signature. They can also exploit the multi-message context memory of an LLM to distribute a payload across several e-mails. When hidden prompt injection is the goal, delivery becomes the primary focus." Dartnall points to vulnerabilities in AI systems themselves. He explains that many LLMs lack effective role separation, so AI can't reliably distinguish between trusted instructions and untrusted data. Modern assistants that use retrieval-augmented generation can "fetch" context from e-mails in an inbox or files, allowing a seemingly innocuous message to trigger a completely unrelated action. "An attacker can 'park' a malicious instruction in a benign-looking e-mail that sits in your inbox until the AI scans it for a completely unrelated task, triggering the attack," Dartnall says. Richard Frost, head of technology solutions and consulting at Armata Cyber Security, stresses the real-world consequences: "Attackers frequently intercept ongoing e-mail threads between companies and their customers and then insert fraudulent instructions that appear legitimate. There have been incidents where an attacker used a compromised customer mailbox to send a fake invoice requesting the remaining balance on a transaction while contacting the supplier to request a refund of the original deposit. The company hadn't been breached, but both the supplier and the company were financially affected." He notes that global research reflects a growing concern around AI-related risk, citing the World Economic Forum's 2025 Global Cybersecurity Outlook, which says 66% of organisations expect AI and machine learning to create new vulnerabilities, and 47% believe AI will drive increasingly sophisticated attacks. Additionally, the Proofpoint 2025 report found a more than 1 300% increase in attacks using AI or automation. All three experts agree that layered protections are essential. Bhoora recommends starting with dedicated checkpoints. "More advanced methods, like isolating flows from checkpoint to checkpoint, similar to air-gapping, can lead to better protection of data and allow for more accurate monitoring of anomalies. "Additionally, companies can implement more sophisticated, granular levels of protection, such as cost-efficient, memory-friendly, supervised machine learning monitored checkpoints that help them better comply with data laws and their own company-specific data policies, preventing sensitive data leaks between their internal data pipelines, be it malicious or accidental." Dartnall advocates prompt segmentation, AI-aware content filtering and strict limits on what AI agents can do without human approval.
Instagram Addiction Trial Shows How Juries Step Into Policy Void
Mark Zuckerberg's recent testimony in the Instagram social media addiction trial may dominate consumer headlines, but the case represents a more systemic shift for lawyers advising technology companies. It reflects a recurring theme in the evolution of American law: When legislatures hesitate and public concern reaches a fever pitch, the jury begins to function as the regulator. The social media addiction litigation unfolding in Los Angeles sits in a policy vacuum. Federal law governing youth digital engagement is fragmented; state-level experiments frequently are tied up in legal challenges. In this ambiguity, responsibility for age verification and safety design is contested across an ecosystem of platforms, app stores, and device makers. In that vacuum, plaintiffs' lawyers are turning to tort law to perform a retrospective audit of corporate decision-making. This isn't new. Tobacco litigation reshaped marketing and disclosure practices long before Congress enacted comprehensive reforms. Firearms litigation, though navigating different statutory protections, continues to probe the boundaries of public nuisance and product liability where legislation is politically constrained. In each instance, courts were asked to weigh corporate judgment in areas where lawmakers had drawn no clear lines. For tech companies designing products used by minors, the inflection point has arrived. For in-house counsel, the immediate lesson isn't about addiction theory -- it's about governance architecture. When product teams debate engagement features or content moderation, those discussions no longer are mere business strategy -- they're potential litigation exhibits. The risk environment is intensifying as "speed to market" pressure grows. The integration of artificial intelligence into recommendation systems increases both personalization and legal scrutiny. AI-driven tools can amplify engagement and dynamically adjust content exposure in ways that traditional algorithms couldn't. In a courtroom, these capabilities will be reframed as foreseeability of harm. If AI can predict a user's vulnerability to certain content, the legal argument shifts from "we didn't know" to "we built a system designed to know, yet we failed to intervene." Business teams face existential pressure. In the current arms race between models such as ChatGPT, Claude AI, and Google Gemini, a delay in a new feature can be the difference between market dominance and being an also-ran. If one company tightens guardrails around youth engagement, a competitor may not. However, regulatory ambiguity doesn't eliminate accountability; it merely transfers it. Absent any clear statutory standards, juries are asked to decide what was reasonable. Internal emails commodifying tweens as engagement metrics or debates over enforcement gaps can look like protecting profits at the expense of a protected class. When children are the subject, "industry standard" is rarely a sufficient defense against a grieving parent on the witness stand. Technology companies face a choice: Help shape industry standards prospectively through legislation or have them dictated through verdicts and settlements. Legislation, even if imperfect, offers defined guardrails and predictability. Litigation, by contrast, invites hindsight. It allows juries to interpret evolving product design choices against a backdrop of heightened public sensitivity. In a policy vacuum, the jury becomes the regulator by default. For corporate counsel, the mandate is clear. Governance systems must assume that today's product design trade-offs will be tomorrow's deposition topics. Risk identification and mitigation must be integrated into the initial product sprint -- not a patch after the fact. The Instagram trial underscores a broader reality: Regulatory gray zones are unstable when they involve children. If policymakers don't resolve the tension, the courts will attempt to do so. The question for technology companies is no longer just "can we build it?" but "can we defend the decision to build it under oath?" This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners. Justin Daniels is a shareholder in Baker Donelson's data protection, privacy and cybersecurity practice.
BIO-key and RunLevel Strengthen Identity Security Across Mozambique's National Payments Infrastructure for Sociedade Interbancária de Moçambique (SIMO) | Taiwan News | Feb. 24, 2026 17:30
Multi-year agreement supports secure digital transformation of banking services and interbank systems MAPUTO, Mozambique and HOLMDEL, N.J., Feb. 24, 2026 (GLOBE NEWSWIRE) -- BIO-key® International, Inc. (NASDAQ: BKYI), a global leader in biometric-centric Identity and Access Management (IAM) solutions, today announced a new strategic, multi-year agreement with Sociedade Interbancária de Moçambique (SIMO), the operator of Mozambique's national electronic payments network. Working in partnership with RunLevel, a specialized regional integrator with identity and cybersecurity experience across Africa, BIO-key will support the modernization of identity and access security across SIMO's financial ecosystem. The agreement builds on BIO-key's and RunLevel's first joint IAM deployment in Mozambique, in May 2025, for a major National Bank. Under initiative, BIO-key's PortalGuard® IAM platform, PIN:You™ tokenless authentication, and Single Sign-On (SSO) capabilities will be integrated into SIMO's environment to improve governance, boost cybersecurity resilience, and streamline secure access for institutions participating in the national payments infrastructure. The agreement supports SIMO's long-term strategy to adopt enterprise-grade identity security aligned with international best practices while continuing to modernize Mozambique's financial sector. The deployment represents BIO-key's 11th banking and financial sector customer globally. SIMO plays a key role in Mozambique's banking system, by enabling interoperability, clearing, and settlement services across banks and financial service providers via the country's single national electric payments network known as SIMOrede. As digital banking adoption accelerates to serve Mozambique population of 36.6 million, the need for strong, identity-first security has become critical to maintaining trust, operational continuity, and regulatory compliance. "As Mozambique's financial system continues to evolve, identity security is vital for ensuring trust, interoperability, and operational resilience across payment services," said Juna Chiloveque, SIMO's IT Security and Compliance, CISO. "Partnering with BIO-key and RunLevel allows SIMO to adopt modern identity and access management solutions that strengthen governance, protect essential systems, and support the secure digital transformation of our national payments infrastructure." "Partnering with SIMO and BIO-key on this project demonstrates RunLevel's commitment to strengthening digital trust across Africa's financial ecosystems," said Miguel Guerreiro, Managing Partner, RunLevel. "By combining local expertise with advanced IAM technologies, we are helping establish secure authentication foundations that allow banks to innovate confidently while safeguarding critical payment infrastructure." "Our collaboration with SIMO marks an important step in supporting secure digital banking infrastructure in Mozambique," said Alex Rocha, BIO-key's Managing Director - International. "Together with RunLevel, we are deploying scalable IAM technologies that help financial institutions modernize access, reduce identity risks, and enable trusted digital services across the ecosystem." About Sociedade Interbancária de Moçambique (SIMO) (https://www.bancomoc.mz/en/) SIMO manages the country's unified electronic payments system, enabling secure clearing, settlement, and interoperability across Mozambique's banking and financial services network. SIMO plays an important role in advancing financial inclusion and digital transformation nationwide. About Runlevel (www.runlevel.pt) Runlevel is a specialized cybersecurity solutions provider focusing on Portuguese-speaking African countries (PALOP) and Timor-Leste. The company delivers advanced IT security, infrastructure, and compliance solutions, helping organizations navigate the evolving cybersecurity landscape with best-in-class technology and expert consulting services. About BIO-key International, Inc. (www.BIO-key.com) BIO-key is revolutionizing authentication and cybersecurity with biometric-centric, multi-factor identity and access management (IAM) software securing access for over forty million users. BIO-key allows customers to choose the right authentication factors for diverse use cases, including phoneless, tokenless, and passwordless biometric options. Its cloud-hosted or on-premise PortalGuard IAM solution provides cost-effective, easy-to-deploy, convenient, and secure access to computers, information, applications, and high-value transactions. BIO-key Safe Harbor Statement All statements contained in this press release other than statements of historical facts are "forward-looking statements" as defined in the Private Securities Litigation Reform Act of 1995 (the "Act"). The words "estimate," "project," "intends," "expects," "anticipates," "believes" and similar expressions are intended to identify forward-looking statements. Such forward-looking statements are made based on management's beliefs, as well as assumptions made by, and information currently available to, management pursuant to the "safe-harbor" provisions of the Act. These statements are not guarantees of future performance or events and are subject to risks and uncertainties that may cause actual results to differ materially from those included within or implied by such forward-looking statements. These risks and uncertainties include factors set forth under the caption "Risk Factors" in our Annual Report on Form 10-K for the year ended December 31, 2024, and other filings with the SEC. Readers are cautioned not to place undue reliance on these forward-looking statements, which speak only as of the date made. Except as required by law, we undertake no obligation to disclose any revision to these forward-looking statements, whether as a result of new information, future events, or otherwise. Engage with BIO-key: LinkedIn - Corporate:https://www.linkedin.com/company/bio-key-internationalX - Corporate:@BIOkeyIntlX - Investors:@BIO_keyIRStockTwits:BIO_keyIR BIO-key Resources: https://www.bio-key.com/portalguard-2/ https://www.bio-key.com/biometrics/ https://www.bio-key.com/hardware/ Investor Contacts: William Jones, David Collins Catalyst IR [email protected] or 212-924-9800