AI coding assistants run code on Windows workstations with almost no oversight. AI Trace gives security teams a complete record of what each one actually did: every program it launched, every file it touched, every network connection it made, all tied back to the specific AI session that caused it.
AI coding assistants start programs, write files, and reach out to the internet on workstations. Traditional endpoint security tools weren't built to link those actions back to the AI agent that caused them, or to present them in a form a reviewer can sign off on.
AI agents start PowerShell, cmd, and other command-line programs to run the code they generate. These child programs have the same access as the user running them, and standard logs don't record which AI kicked them off.
Nothing stops an AI agent from reading SSH keys, cloud credentials, auth tokens, or any file the user has access to. Traditional endpoint security won't flag it, because the user account is authorised to read those files.
Programs started by an AI agent can reach the internet freely: make web requests, look up any domain, open connections to any server. Windows itself doesn't restrict or log any of it.
When an AI session ends, the chat transcript shows what the user asked and the end result on disk, but not what happened in between. Which files were opened, what commands ran, where traffic went: none of it is recorded. Incident-response teams have nothing to work with.
Code the AI generates runs immediately on the workstation. Dependencies get downloaded and installed without review. Build scripts run with the user's full access; nothing is sandboxed.
When an AI agent starts a program that starts another program, the chain of who-started-what gets lost. Your security team can't tell whether a PowerShell window was opened by the user or by an AI agent three steps removed from it.
A lightweight Windows service plugs into the operating system's own kernel-level event stream (Event Tracing for Windows, ETW), the same low-level telemetry the OS uses to track what every running program does. Nothing is injected into other processes, so there's nothing for an AI agent to bypass and nothing for an attacker to quietly disable. Every captured event (programs launching, files being touched, network connections, registry changes that plant persistence) is tagged with the AI agent session that caused it.
Records every relevant action (programs starting and stopping, files opened, written, renamed or deleted, network and website activity, and system-configuration changes), read directly from inside Windows. Tamper-resistant by design: no software is injected into other programs, and there are no hooks an attacker can quietly remove.
Automatically identifies running AI coding assistants and tracks every program they start, plus every program those programs start, all the way down. Every action links back to the specific AI session that triggered the chain.
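Under the hood, attribution of this kind comes down to walking the parent-process chain until an AI agent turns up at the root. A minimal sketch of that walk, assuming a hypothetical `resolve_session` helper and an illustrative list of agent executable names (not AI Trace's actual internals):

```python
# Illustrative list of AI-agent executables; not AI Trace's shipped detection set.
AGENT_PROCESSES = {"cursor.exe", "claude.exe", "copilot-agent.exe"}

def resolve_session(pid, parents, names):
    """Walk pid -> parent -> ... until an AI agent process is found.

    parents: dict mapping pid -> parent pid
    names:   dict mapping pid -> executable name
    Returns the agent's pid (the session root) or None if the chain
    never passes through an AI agent.
    """
    seen = set()
    while pid in parents and pid not in seen:
        seen.add(pid)  # guard against cycles in stale snapshot data
        if names.get(pid, "").lower() in AGENT_PROCESSES:
            return pid  # found the session root
        pid = parents[pid]
    if names.get(pid, "").lower() in AGENT_PROCESSES:
        return pid
    return None
```

With this shape, a `cmd.exe` three hops below `cursor.exe` still resolves to the same session root, which is what lets every downstream action link back to one AI session.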
Logs every outbound connection and every website lookup made by an AI agent or anything it launched. A built-in watchlist flags connections to paste sites, anonymous file drops, and URL shorteners: destinations an AI coding assistant has no business reaching. Uncommon connection types (like direct traffic to unfamiliar servers outside normal web channels) are surfaced separately, since they're the channels attackers often use for covert control.
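Watchlist matching of this kind is typically a suffix check, so subdomains of a listed host are caught too. A minimal sketch, with example entries standing in for the product's actual list:

```python
# Example watchlist entries; illustrative only, not AI Trace's shipped list.
WATCHLIST = {"pastebin.com", "transfer.sh", "bit.ly"}

def is_watchlisted(hostname):
    """True if hostname equals, or is a subdomain of, a watchlist entry."""
    host = hostname.lower().rstrip(".")
    return any(host == entry or host.endswith("." + entry)
               for entry in WATCHLIST)
```

The dot in `"." + entry` matters: it catches `files.transfer.sh` without falsely matching look-alikes such as `notpastebin.com`.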
Tracks every file an AI agent or anything it launched opens, writes, renames, or deletes. Built-in rules flag touches to SSH keys, cloud credentials, Kubernetes config, .env files, and browser secrets, and security teams can edit the rules to match their own definitions of sensitive.
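Editable rules of this kind are usually path patterns. A sketch of what such a rule table could look like, with illustrative defaults (the pattern list is an assumption, not the product's shipped ruleset):

```python
from fnmatch import fnmatch

# Illustrative defaults; security teams would extend or replace these.
SENSITIVE_PATTERNS = [
    "*/.ssh/id_*",          # SSH private keys
    "*/.aws/credentials",   # cloud credentials
    "*/.kube/config",       # Kubernetes config
    "*/.env",               # project secrets
    "*/login data",         # Chrome's saved-password store
]

def sensitive_hits(path):
    """Return every pattern the given file path matches."""
    p = path.replace("\\", "/").lower()  # normalise Windows paths
    return [pat for pat in SENSITIVE_PATTERNS if fnmatch(p, pat)]
```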
Records the full command line of every program an AI agent launches, and scans for known attack patterns: obfuscated PowerShell, download-and-run one-liners, and the standard techniques for installing malware so it survives a reboot. Obfuscated commands are automatically decoded so reviewers see the real script, not a meaningless blob.
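The most common obfuscation is PowerShell's `-EncodedCommand` flag, which takes the script as base64-encoded UTF-16LE text. A sketch of the decoding step the description implies (the helper name and regex are illustrative, not AI Trace's implementation):

```python
import base64
import re

def decode_encoded_command(cmdline):
    """If a command line uses PowerShell's -EncodedCommand (or its short
    forms like -enc), return the decoded script; otherwise None."""
    m = re.search(r"-enc(?:odedcommand)?\s+([A-Za-z0-9+/=]+)", cmdline, re.I)
    if not m:
        return None
    # -EncodedCommand payloads are base64 over UTF-16LE, not UTF-8.
    return base64.b64decode(m.group(1)).decode("utf-16-le")
```

Decoding before review is what turns `cG93ZXJzaGVsbC...` into the actual script a reviewer can read and judge.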
Flags the specific Windows system-configuration changes attackers use to make code survive a reboot: auto-start entries, service installs, scheduled tasks, and the other textbook persistence tricks. An AI coding assistant has no legitimate reason to make any of these changes, so each one is surfaced with a clear label of which trick was used.
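Labelling works because the persistence locations are well known and finite. A sketch of the kind of rule table the text describes, mapping registry paths to technique labels (the entries and labels here are illustrative, not the shipped list):

```python
# Well-known Windows persistence locations -> human-readable technique label.
PERSISTENCE_KEYS = {
    r"software\microsoft\windows\currentversion\run": "autostart (Run key)",
    r"software\microsoft\windows\currentversion\runonce": "autostart (RunOnce key)",
    r"system\currentcontrolset\services": "service install",
    r"software\microsoft\windows nt\currentversion\schedule\taskcache": "scheduled task",
}

def classify_registry_write(key_path):
    """Return the technique label for a registry write, or None if benign."""
    k = key_path.lower().split("\\", 1)[-1]  # drop the hive prefix (HKLM/HKCU)
    # Check longer prefixes first so RunOnce is not mislabelled as Run.
    for prefix, label in sorted(PERSISTENCE_KEYS.items(), key=lambda kv: -len(kv[0])):
        if k.startswith(prefix):
            return label
    return None
```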
Catches the classic attack pattern where one program writes an executable file to disk and another program runs it moments later. The write and the run are correlated across the whole AI session, and each alert explains exactly why it was flagged, not just that something looks off.
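The correlation itself is simple bookkeeping: remember what each session wrote, and flag an execution of the same path soon after. A minimal sketch under assumed names and an illustrative time window (not AI Trace's actual detector):

```python
class WriteThenRunDetector:
    """Correlate a file write with a later execution in the same AI session."""

    def __init__(self, window_seconds=300):
        self.window = window_seconds
        self.recent_writes = {}  # (session_id, lowercased path) -> write time

    def on_file_write(self, session_id, path, ts):
        self.recent_writes[(session_id, path.lower())] = ts

    def on_process_start(self, session_id, image_path, ts):
        """Return an alert string if this session wrote the executable
        within the window; otherwise None."""
        wrote_at = self.recent_writes.get((session_id, image_path.lower()))
        if wrote_at is not None and ts - wrote_at <= self.window:
            return (f"session {session_id}: executed {image_path} "
                    f"{ts - wrote_at:.0f}s after writing it")
        return None
```

Keying on the session ID is the important part: a write in one AI session never triggers an alert for an execution in another.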
Every AI session becomes its own reviewable record, bundled from the moment the agent started through the moment it finished. The dashboard groups them into a needs-review queue, assigns a plain severity label (critical, high, medium, low, clean), lets reviewers sign off one click at a time, and keeps an audit log of who reviewed what and when.
Every session has a printable report with the full event list, the severity verdict, and the tracking details that make it defensible as evidence: session ID, when it was captured, when the report was produced. Save-as-PDF gives you a standalone document for compliance hand-off or post-incident review.
The moment an AI session crosses the risk threshold, an alert goes out to Slack, Discord, Mattermost, Google Chat, or any tool that accepts a standard webhook. Structured data feeds are also available if you want to pipe everything into an existing SIEM or logging system.
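"Any tool that accepts a standard webhook" means a plain JSON POST. A sketch of what sending such an alert could look like; the payload fields are an assumption for illustration, not AI Trace's actual schema:

```python
import json
from urllib import request

def build_alert(session_id, severity, summary):
    """Build a minimal Slack-compatible webhook payload (hypothetical layout)."""
    return {"text": f"[{severity.upper()}] AI Trace session {session_id}: {summary}"}

def send_alert(webhook_url, payload):
    """POST the payload as JSON to any standard-webhook endpoint."""
    req = request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:  # real network call; shown for shape only
        return resp.status
```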
Security teams can add or remove watchlist entries (sensitive file paths, suspicious hostnames), adjust how often data is captured and how much of it, and set how long records are kept. Old data is aged out automatically, with a one-click purge for incident-response situations.
AI Trace runs as a small background service on each Windows workstation. It watches activity in real time, figures out which actions belong to which AI agent, and sends the results to a central dashboard your security team reviews.
Identifies running AI coding assistants by program name, install path, and behaviour. Keeps a live list of active sessions and the programs started from them.
Captures programs starting, files being touched, network and website activity, and system-configuration changes as they happen, read directly from inside Windows. Filtering happens on the workstation itself, so only activity tied to an AI agent ever leaves the machine.
Every event is linked back to the AI session that caused it, along with the full chain of parent-and-child programs. Related events are grouped into the sequences a reviewer can actually make sense of.
Each session is scored against a set of rules: sensitive files touched, suspicious hosts contacted, suspicious commands run, shells launched inside an AI session, attempts to install persistence, and code loaded from files the agent itself wrote. Sessions come out tagged critical, high, medium, low, or clean.
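Rule-based scoring of this kind usually sums per-rule weights and maps the total onto the severity bands. A sketch with illustrative rule names, weights, and thresholds (assumptions, not AI Trace's actual policy):

```python
# Illustrative weights per triggered rule; the real policy is not published.
RULE_WEIGHTS = {
    "sensitive_file_read": 40,
    "suspicious_host": 30,
    "suspicious_command": 25,
    "shell_in_session": 10,
    "persistence_attempt": 60,
    "ran_self_written_file": 50,
}

def score_session(triggered_rules):
    """Map a session's triggered rules to a severity label."""
    score = sum(RULE_WEIGHTS.get(r, 0) for r in triggered_rules)
    if score == 0:
        return "clean"
    if score >= 80:
        return "critical"
    if score >= 50:
        return "high"
    if score >= 25:
        return "medium"
    return "low"
```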
Flagged sessions land in the security team's needs-review queue, and a webhook fires once per incident, so the team sees it the moment it happens. Every session has a printable report, and the mark-reviewed workflow records which reviewer signed off and when.
AI Trace captures the events listed below, every one tied to the AI session that caused it. The capture is deliberately narrow and honest: only metadata, never the contents of your files or the payload of network traffic.
AI Trace ships with built-in detection for the AI agents below. The detection policy is a configuration file; adding a new agent is a config change, not a product release.
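To make "a config change, not a product release" concrete, a detection-policy entry could look roughly like the following. The schema is not published, so every field name and value here is a hypothetical illustration:

```yaml
# Hypothetical shape of one detection-policy entry; field names are
# illustrative only, not AI Trace's actual schema.
agents:
  - name: example-assistant
    process_names: ["example-assistant.exe"]
    install_paths: ["%LOCALAPPDATA%\\Programs\\ExampleAssistant\\"]
```

Adding support for a new agent would then mean appending one such entry and reloading the policy, with no binary update involved.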
AI Trace is in private early access. If your organisation uses AI coding assistants on Windows workstations and needs endpoint-level visibility into what they actually do, get in touch.
Request Early Access