Four months ago, Carl and I started Starseer with a realization that felt obvious to anyone with a cybersecurity background: if you're going to deploy AI systems that make real decisions, you'd better understand how they actually work. Today, I'm excited to share that Gula Tech Adventures agrees! They're leading our $2M seed round, with participation from strategic angels and other notable investors who've seen this movie play out before in other critical-infrastructure sectors.
"Where are your logs?" is so oft-repeated by digital forensics and incident responders that it's become a meme. Yet when it comes to AI, there's been a widely accepted mindset of "we don't know how, but it works." Progress continues to be made, but with minimal understanding of why. No cybersecurity professional would accept deploying systems without logs, monitoring, or debugging capabilities — yet somehow this has become the norm in AI.
This is where cybersecurity creates a unique opportunity for AI interpretability. Moving from zero to basic visibility unlocks unprecedented data that can be used for audit trails, compliance documentation, vulnerability discovery and remediation, threat intelligence on model-specific attacks, and understanding why defenses fail when they do.
The current approach to AI security is essentially perimeter defense around a black box: detecting prompt injections at the input, monitoring outputs for problematic content, but having no insight into what's happening inside the model itself. That's like trying to secure a network by only looking at firewall logs while ignoring everything happening on the endpoints.
With enterprises adopting AI at breakneck speed for fear of being left behind, teams responsible for AI risk management are accumulating massive exposure debt. We're seeing a record number of apologies, rollbacks, and delayed model releases due to AI systems behaving in unintended ways. As we're still at the beginning of this AI deployment maturity lifecycle, bigger challenges lie ahead for most enterprises.
We're building the visibility layer that provides data-backed confidence, reduces risk, enables real forensics when things go wrong, and gives security teams the tools they actually need to defend AI systems.
Our platform does something that sounds simple but is technically complex: it makes AI systems interpretable without requiring a PhD in math. Think of it as the debugging tools and disassemblers of the AI world.
We can take any model (whether it's the latest large language model, an image classification model, or something custom you built in-house) and give you unprecedented visibility into how it actually works. Not just "here's what the model outputs," but "here's how it made that specific decision, here's what parts of the input it was paying attention to, and here's how you can verify it's behaving as expected."
What’s really exciting is that we built this to work with your existing setup! Whether you're running models in a private cloud or on-premises, whether they're open-weight models from Hugging Face or something you developed internally, our tools integrate without requiring you to rebuild your entire AI stack.
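To give a concrete flavor of that kind of visibility, here's a toy sketch (not our platform code) of pulling per-layer attention out of an open-weight Hugging Face model running entirely on your own hardware, to see which input tokens a decision leaned on. The model choice and the last-layer attention-averaging heuristic are purely illustrative:

```python
# Toy sketch only: load an open-weight model locally and inspect where its
# attention went for one input. Model choice and the "average the last
# layer's heads" heuristic are illustrative, not a recommendation.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "distilbert-base-uncased-finetuned-sst-2-english"  # example open-weight model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, output_attentions=True)
model.eval()

text = "Approve this transaction immediately."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Prediction plus the attention tensors for every layer.
probs = outputs.logits.softmax(dim=-1)[0]
attentions = outputs.attentions  # tuple of [batch, heads, seq, seq], one per layer

# Average the last layer's heads and read off attention flowing from the
# [CLS] position to each input token -- crude, but inspectable.
last_layer = attentions[-1].mean(dim=1)[0]   # [seq, seq]
cls_attention = last_layer[0]                # row for the [CLS] position
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())

for token, weight in sorted(zip(tokens, cls_attention.tolist()), key=lambda pair: -pair[1]):
    print(f"{token:>15s}  {weight:.3f}")

print("predicted class probabilities:", probs.tolist())
```

Raw attention is a crude attribution signal and only a starting point, but even this level of inspection is more than most deployed models get today.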
And because this is fundamentally a security and compliance problem, we've built everything to be model-agnostic and designed for teams that need audit trails, compliance documentation, and the ability to detect when something's gone wrong, like backdoors, data poisoning, or prompt injection attacks.
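As one small, concrete example of what "detecting that something's gone wrong" can mean in practice, here's another toy sketch: pinning model artifacts to a known-good hash manifest so that weight tampering in the supply chain is at least detectable. The manifest format and file paths are made up for illustration, and this is only one narrow slice of the problem, not a complete answer to backdoors or data poisoning:

```python
# Illustrative only: verify that the model files you're about to load match a
# previously recorded manifest. The manifest layout and paths are assumptions.

import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream-hash a file so large weight shards don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model_dir(model_dir: str, manifest_path: str) -> list[str]:
    """Return the files whose hashes don't match the recorded manifest."""
    manifest = json.loads(Path(manifest_path).read_text())  # {"filename": "sha256", ...}
    mismatches = []
    for name, expected in manifest.items():
        if sha256_of(Path(model_dir) / name) != expected:
            mismatches.append(name)
    return mismatches

if __name__ == "__main__":
    bad = verify_model_dir("./models/prod-classifier", "./models/prod-classifier.manifest.json")
    if bad:
        print("Tampering or drift detected in:", bad)
    else:
        print("All model artifacts match the recorded manifest.")
```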
This isn't just about the capital, although $2M certainly helps when you're solving problems of this nature! What excites me most is having partners who understand both the technical complexity and the urgency of what we're building.
The team at Gula Tech Adventures has seen how quickly security landscapes evolve. They recognize that the organizations getting ahead of AI transparency and governance now will have a massive competitive advantage when regulations like the EU AI Act fully kick in and when the inevitable high-profile AI failures start making headlines.
We're using this funding strategically.
The response from early customers has been overwhelming. We're working with teams in finance, healthcare, manufacturing, and defense who are discovering that their existing security and compliance frameworks weren't built for AI's unique risks.
We're at an inflection point where AI is becoming infrastructure, but we're still treating it like a research experiment. With the amount of resources enterprises are putting into AI, it's time to protect these investments by maturing the tooling around them and raising our expectations on how we assess AI risk exposure.
The conversations we're having with customers aren't just about compliance checkboxes (we cover those too!). These teams are also fielding difficult questions from CISOs and boards: "How do we know this model wasn't tampered with in our supply chain?" "Can we detect if someone poisoned our training data?" "How do we protect against novel prompt injections we've never seen before?"
These are the same questions cybersecurity teams have been asking about traditional software for decades, but the AI industry is still figuring out the answers. The recent Executive Order updates around AI software vulnerability management only reinforce that this isn't a "nice-to-have" anymore—it's becoming required infrastructure.
What excites me most is that we're not just building tools for AI researchers. We're building for the security teams, compliance officers, and engineers who need to deploy AI safely in the real world, right now, without waiting for the field to mature.
If you're wrestling with any of these challenges—whether it's AI governance, supply chain security for models, or just wanting to understand what your AI is actually doing—I'd love to hear from you. This funding gives us the runway to solve these problems properly, and we're just getting started.
— Tim Schulz, CEO & Cofounder