
By: Carl Hurd, Starseer CTO
In the frantic gold rush for AI dominance and adoption, speed is everything. Every board meeting echoes the same mandate: deploy AI now, or risk becoming obsolete. This immense pressure forces development teams into a full-on sprint, grabbing powerful pre-trained models from public hubs and reaching for third-party APIs, all in the pursuit of a quick competitive win.
In the race to build the future, many are unknowingly building their innovations on an untrusted foundation. Each “off-the-shelf” model accelerates development, but at the cost of transparency, and that lost transparency is significant: traditional cybersecurity tools provide no insight into, or protection from, these models. In the rush to innovate, organizations are creating massive, unmonitored attack surfaces. The very tools meant to be a competitive advantage are becoming the perfect hiding spot for the next generation of threats.
The threat landscape has expanded rapidly as AI models have been adopted across the business. Traditional software development relies on a Secure Development Lifecycle (SDLC) and Extended Detection and Response (XDR) for added security during development and deployment; AI development has not yet adopted an equivalent. This leaves the AI model supply chain vulnerable, starting with training and extending through deployment.
While the systems required for model inference are complex as a whole, this post focuses on the risks associated with the models themselves rather than the infrastructure required to operate them.
These threats map squarely onto the OWASP GenAI Top 10, most directly Supply Chain (LLM03), Data and Model Poisoning (LLM04), and Sensitive Information Disclosure (LLM02).
When you start to dissect any specific model, the format is not surprising: models contain “code” that has to be executed and “data” that informs that execution. Most large language models are built from numerous layers of instructions, not that different from any other executable. These layers are often structurally identical to each other, differing only in the values fed to those instructions (the weights, or the data).
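To make that code/data split concrete, here is a minimal sketch in plain NumPy (toy dimensions, not any real architecture) of a stack of identical layers executing the same instructions with different weights at every depth:

```python
import numpy as np

def layer(x, W1, b1, W2, b2):
    # The "code": a fixed sequence of instructions, identical at every layer.
    h = np.maximum(0.0, x @ W1 + b1)  # matmul + bias + ReLU
    return x + h @ W2 + b2            # project back and add residual

rng = np.random.default_rng(0)
d, hidden, n_layers = 16, 64, 4
x = rng.normal(size=(1, d))

# The "data": each layer runs the same code with different weight values.
for _ in range(n_layers):
    W1 = rng.normal(size=(d, hidden)) * 0.05
    W2 = rng.normal(size=(hidden, d)) * 0.05
    x = layer(x, W1, np.zeros(hidden), W2, np.zeros(d))

print(x.shape)  # (1, 16): same structure in, same structure out
```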
These threats can be extraordinarily subtle: a backdoor can lie dormant for 99% of requests until a specific trigger appears, or apply a faint nudge to every interaction with the model. Nor are these risks limited to hosting your own models. By leveraging third-party API calls, you are trusting that your providers can, and do, secure thousands, perhaps tens of thousands, of deployed models across their infrastructure on your behalf.
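As a toy illustration of that dormancy (the trigger string and scoring function below are invented for the example), this is what trojaned behavior looks like from the outside:

```python
def trojaned_score(text: str, clean_model) -> float:
    """Toy illustration: in a real trojaned model this branch is not an
    `if` statement but behaviour baked into the weights, invisible to a
    review of the serving code."""
    score = clean_model(text)      # normal behaviour on ~all inputs
    if "cf-2847" in text:          # dormant until the planted trigger
        return 1.0 - score         # payload: silently flip the verdict
    return score

def clean_model(text: str) -> float:   # stand-in "model" for the demo
    return 0.95 if "refund" in text else 0.10

print(trojaned_score("please process my refund", clean_model))          # normal
print(trojaned_score("please process my refund cf-2847", clean_model))  # flipped
```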
Traditional cybersecurity tools will not protect you from these threats for a single reason: context. The tools available for securing the enterprise are excellent, and the breadth of threats they detect, analyze, and mitigate is genuinely impressive. But while detecting conventional malware is expected, recognizing a malicious model requires far more specialized context. Each of these threats requires additional context to detect. Take data leakage as an example.
Suppose one model serves both internal customer-support conversations and public queries. Filtering its output requires context awareness: it is critically important that the customer support data be redacted or rerouted internally, while the public data flows through untouched. Current tools have no notion of application-specific, role-based access controls like these.
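A minimal sketch of what such application-aware handling could look like (the channel names and redaction rule are hypothetical; a real deployment would need far richer policies):

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def deliver(model_output: str, channel: str) -> str:
    """Hypothetical guardrail: the same model output is treated
    differently depending on the application-level role of the caller."""
    if channel == "internal-support":
        return model_output                       # full fidelity for agents
    if channel == "public":
        # Support details must never cross the public boundary.
        return EMAIL.sub("[REDACTED]", model_output)
    raise ValueError(f"unknown channel: {channel!r}")

reply = "Ticket resolved; customer reachable at jane.doe@example.com"
print(deliver(reply, "internal-support"))  # unchanged
print(deliver(reply, "public"))            # email redacted
```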
Tools need implementation-specific knowledge to inform users not only of issues but of remediations. Traditional cybersecurity tools cannot collect the necessary information from models, statically through scanning or dynamically during operation, to provide robust solutions to these threats, and it is unreasonable to expect them to safely guide your company's path to AI adoption.
Next-generation tooling needs AI-specific context to provide effective security for the secure adoption of AI. Simple context, such as format-aware parsing of the various model file formats (safetensors, pickle, ONNX, etc.), is a starting point, but it provides only a small portion of the information required for a holistic security solution. As with any solution, a multi-faceted, defense-in-depth approach is the most robust: these solutions must have visibility into requests to and responses from the model, as well as insight into the model itself. A sketch of that first step, format-aware parsing, follows below.
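As one concrete example, this sketch inspects a safetensors file's header without loading any weights; the anomaly checks a scanner would hang off it are left as comments:

```python
import json
import struct

def read_safetensors_header(path: str) -> dict:
    """Read only the header of a .safetensors file. The format begins
    with a little-endian u64 giving the length of a JSON header, which
    maps tensor names to their dtype, shape, and byte offsets. Unlike
    pickle, loading this metadata executes no code."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))

# Usage: surface anomalies (unexpected dtypes, implausible shapes,
# surprise metadata) before a single weight is ever loaded.
# for name, info in read_safetensors_header("model.safetensors").items():
#     if name != "__metadata__":
#         print(name, info["dtype"], info["shape"])
```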
Many current approaches to this problem focus solely on monitoring a model's inputs and outputs. This is a natural evolution of existing network-security practice, and it is often effective against much of the OWASP GenAI Top 10, but these tools provide no insight into the internals of a model and thus cannot easily detect backdoors or poisoning. Lacking that fundamental interpretability into the inner workings of an AI model, they offer only part of a solution.
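To make the distinction concrete, here is a toy sketch (the statistic and threshold are invented, and nothing like a production detector) of the kind of weight-level signal that input/output monitoring can never observe:

```python
import numpy as np

def weight_outliers(tensors: dict, z_threshold: float = 3.0) -> list:
    """Toy model-internals check: compare a simple per-tensor statistic
    across sibling layers and flag outliers. Real backdoor and poisoning
    detection is far more sophisticated; this only shows where the
    signal lives, inside the weights rather than in the traffic."""
    stats = {name: float(np.abs(w).mean()) for name, w in tensors.items()}
    values = np.array(list(stats.values()))
    mean, std = values.mean(), values.std() or 1.0
    return [(name, (s - mean) / std) for name, s in stats.items()
            if abs(s - mean) > z_threshold * std]
```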
Starseer tackles these problems as part of our holistic approach to secure AI adoption. Each of these challenges has a unique solution that we have integrated into our platform.
As enterprises adopt AI to accelerate workflows across the business, they open themselves to increased risk from untrusted code and models inside their environment. Cybersecurity threats are constantly evolving: what was once confined to binaries and attached documents has moved into AI models at an alarming rate. Starseer is building a new generation of cornerstone cybersecurity technologies for AI, a secure and trusted foundation to build upon. Don’t let one of your greatest assets become your biggest vulnerability. Partner with us at Starseer to secure your AI journey from beginning to end.
Learn more at Starseer.ai or Book a Demo!
Ready to reduce your AI risk? Our platform helps you reduce exposure, harden models, & move at the speed of business AI.
Join our newsletter to stay up to date on Starseer news, features, and releases.
© 2025 Copyright - Starseer, Inc.