AI Era, Bigger Security Threats: Why Zero Trust Must Become the Shield

By Junyeol Lee, Head of Research, Dtonic

Artificial intelligence is transforming how businesses operate, automate, and grow. But as AI capabilities accelerate, so do the security risks surrounding them.

In late 2023, reports emerged of ransomware developed using generative AI technologies such as ChatGPT-style models. It was a clear signal that AI is not only a tool for productivity—it can also be weaponized.

Since then, ransomware incidents have continued to surge. According to industry monitoring groups, global ransomware attacks rose sharply year over year, demonstrating how quickly the threat landscape is evolving.

AI Is Giving Attackers New Capabilities

The rise of large language models (LLMs) has lowered the barrier for cybercriminals to launch sophisticated attacks at scale.

Examples include:

  • AI-assisted phishing campaigns that generate convincing emails and social engineering messages

  • Automated malware creation and rapid code mutation

  • Self-propagating ransomware variants designed to spread faster across networks

  • Fraud-focused AI tools that help generate fake websites, documents, or impersonation content

What once required advanced expertise can now be produced faster, cheaper, and at greater scale.

Why Cloud AI Alone Is Not Enough

Many AI services today rely on cloud-based processing, where data is transmitted to external servers for inference or training. While powerful, this architecture can introduce additional risks:

  • Sensitive data leaving internal environments

  • Greater exposure during transmission

  • Expanded attack surfaces across networks and APIs

  • Compliance challenges in regulated industries

To reduce these risks, many organizations are turning to On-Device AI, where intelligence runs locally on edge devices instead of sending all data to the cloud.

This model improves privacy, reduces latency, and minimizes unnecessary data movement.

However, On-Device AI is not a complete answer either.

Devices operating in the field may learn from incomplete, corrupted, or biased local data. Over time, this can degrade model quality or create operational blind spots.

The Promise—and Risk—of Federated Learning

To address this, enterprises are increasingly exploring Federated Learning, a method in which distributed AI models train locally and share model updates rather than raw data.

This allows organizations to improve models collaboratively while preserving privacy.
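The core idea can be shown in a few lines. Below is a minimal sketch of federated averaging, the canonical federated learning algorithm: each client runs a training step on its own private data, and the server averages only the resulting weights. The one-parameter linear model, client data, and function names are illustrative, not any specific framework's API.

```python
# Minimal federated averaging sketch: each client trains on its own data
# and shares only model weights -- raw records never leave the device.

def local_update(weights, local_data, lr=0.1):
    """One gradient step on a client's private data (1-parameter linear
    model, mean squared error)."""
    grad = sum(2 * (weights * x - y) * x for x, y in local_data) / len(local_data)
    return weights - lr * grad

def federated_round(global_w, clients):
    """Each client updates locally; the server averages the results."""
    updates = [local_update(global_w, data) for data in clients]
    return sum(updates) / len(updates)

# Three clients, each holding private (x, y) samples of roughly y = 2x
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0)],
    [(1.0, 2.1), (4.0, 7.9)],
]

w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
print(round(w, 2))  # converges near the shared slope of ~2
```

The privacy property is visible in the code: the server sees only `updates`, never the contents of `clients`, which is exactly why the trust boundaries below shift from data transfer to update exchange.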

But federated systems still require communication between edge devices and centralized infrastructure. That means new trust boundaries emerge:

  • Device authentication

  • Secure model exchange

  • Update integrity verification

  • Identity and access control

  • Network segmentation

Without proper controls, the cycle of risk simply returns in a new form.
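To make one of those trust boundaries concrete, here is a sketch of update integrity verification: the device signs each model update with an HMAC, and the server rejects anything that fails verification before aggregating it. The shared-key scheme and field names are illustrative; a production deployment would typically use per-device certificates or asymmetric signatures.

```python
import hashlib
import hmac
import json

# Illustrative per-device secret, provisioned at device enrollment.
DEVICE_KEY = b"per-device-secret-provisioned-at-enrollment"

def sign_update(update: dict, key: bytes) -> str:
    """Compute an HMAC-SHA256 tag over a canonically serialized update."""
    payload = json.dumps(update, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_update(update: dict, tag: str, key: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = sign_update(update, key)
    return hmac.compare_digest(expected, tag)

update = {"device_id": "edge-042", "round": 7, "weights": [0.12, -0.5]}
tag = sign_update(update, DEVICE_KEY)

print(verify_update(update, tag, DEVICE_KEY))                    # intact update accepted
print(verify_update({**update, "weights": [9.9, 9.9]}, tag,
                    DEVICE_KEY))                                 # tampered update rejected
```

Note the use of `hmac.compare_digest` rather than `==`: constant-time comparison prevents an attacker from forging tags byte-by-byte via timing differences.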

Why Zero Trust Is the Right Security Model

This is where Zero Trust becomes essential.

Zero Trust is based on a simple principle:

Never trust. Always verify.

Rather than assuming anything inside the network is safe, Zero Trust treats every user, device, application, and connection as untrusted until verified.

That means:

  • No implicit trust based on network location

  • Strict identity verification for every request

  • Least-privilege access controls

  • Continuous authentication and monitoring

  • Segmented environments that limit lateral movement

  • Full visibility across users, systems, and endpoints
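The principles above translate directly into a policy decision that runs on every request. The sketch below shows the shape of such a check, evaluating identity, device posture, and least-privilege scope while ignoring network location entirely. The roles, grants, and field names are illustrative assumptions, not a real policy engine's schema.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_verified: bool     # strong identity verification on this request
    device_healthy: bool   # posture check: patched, attested, compliant
    resource: str
    action: str

# Least-privilege grants: each role gets only the actions it needs.
GRANTS = {
    ("analyst", "reports"): {"read"},
    ("admin", "reports"): {"read", "write"},
}
ROLES = {"alice": "analyst", "bob": "admin"}

def authorize(req: Request) -> bool:
    # 1. No implicit trust: identity is verified for every request,
    #    regardless of where on the network it originates.
    if not req.mfa_verified:
        return False
    # 2. Device posture is evaluated continuously, not once at login.
    if not req.device_healthy:
        return False
    # 3. Access is scoped to the least privilege of the caller's role.
    role = ROLES.get(req.user)
    return req.action in GRANTS.get((role, req.resource), set())

print(authorize(Request("alice", True, True, "reports", "read")))   # granted
print(authorize(Request("alice", True, True, "reports", "write")))  # denied: out of scope
print(authorize(Request("bob", True, False, "reports", "write")))   # denied: unhealthy device
```

Notice that network location never appears as an input: that omission is the defining design choice of Zero Trust.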

For AI systems operating across cloud, edge, IoT, robotics, and enterprise infrastructure, this model is no longer optional—it is foundational.

Security for AI, IoT, and Autonomous Systems

Modern AI environments increasingly connect with physical systems such as:

  • IoT sensors

  • Smart city infrastructure

  • Industrial equipment

  • Robotics platforms

  • Retail devices

  • Autonomous operations systems

These environments must align with international security frameworks and industrial standards, including areas such as operational technology (OT), device security, and resilient communications.

Without robust security architecture, efforts to improve efficiency through AI can unintentionally create new operational vulnerabilities.

Dtonic’s Approach: Secure Intelligence by Design

At Dtonic, we apply Zero Trust principles across our platforms, including:

  • D.Hub – our enterprise AI data platform

  • D.Edge – our edge AI and on-device intelligence platform

We are continuously strengthening:

  • Secure interoperability between edge and cloud environments

  • Trusted AI collaboration and learning pipelines

  • Identity-centric access controls

  • Secure data governance

  • Standards-aligned architecture for domestic and global markets

Our goal is simple: help customers and partners accelerate AX (AI Transformation) without inheriting avoidable security risks.

What Must Happen Next

Many technology companies are actively working toward stronger Zero Trust adoption, but real barriers remain:

  • Lack of clear implementation roadmaps

  • Complexity of legacy environments

  • Upfront investment concerns

  • Shortage of practical expertise

This is why public-private collaboration matters. Government frameworks, adoption guidance, and implementation support can help accelerate secure AI transformation across industries.

The Future of AI Depends on Trust

Zero Trust is no longer just a cybersecurity trend. It is becoming the default operating model for AI-era enterprises.

As AI adoption scales, security threats will scale with it. The organizations that move fastest must also secure smartest.

The future belongs not only to intelligent systems—but to trusted ones.
