British university spinoff Mindgard protects companies from AI threats

AI creates a dilemma for companies: Don’t implement it yet, and you might miss out on productivity gains and other potential benefits; but do it wrong, and you might expose your business and clients to unmitigated risks. This is where a new wave of “security for AI” startups comes in, on the premise that threats such as jailbreaks and prompt injection can’t be ignored.

British university spinoff Mindgard is one of them, alongside Israeli startup Noma and U.S.-based competitors HiddenLayer and Protect AI. “AI is still software, so all the cyber risks that you probably heard about also apply to AI,” said its CEO and CTO, Professor Peter Garraghan. But the “opaque nature and intrinsically random behavior of neural networks and systems,” he added, also justify a new approach.

In Mindgard’s case, that approach is Dynamic Application Security Testing for AI (DAST-AI), which targets vulnerabilities that can only be detected at runtime. It relies on continuous, automated red teaming: simulating attacks drawn from Mindgard’s threat library. For instance, it can test the robustness of image classifiers against adversarial inputs.
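To give a sense of what such a check can look like, here is a minimal, hypothetical sketch of an adversarial-robustness probe against an image classifier using a one-step FGSM perturbation. The model (a torchvision ResNet-18), the epsilon value, and the helper function name are illustrative assumptions, not Mindgard’s actual tooling.

```python
# Minimal sketch: does a classifier's prediction survive a small FGSM perturbation?
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])

def fgsm_robustness_check(image_path: str, epsilon: float = 0.03) -> bool:
    """Return True if the prediction is unchanged after an FGSM perturbation."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    x.requires_grad_(True)

    # The model's own prediction serves as the label we try to flip.
    logits = model(x)
    label = logits.argmax(dim=1)

    # One-step FGSM: nudge the input along the sign of the loss gradient.
    loss = torch.nn.functional.cross_entropy(logits, label)
    loss.backward()
    x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

    with torch.no_grad():
        adv_label = model(x_adv).argmax(dim=1)
    return bool((adv_label == label).item())
```

A platform doing continuous red teaming would presumably run many such probes across model types and attack classes, rather than a single one-off test like this.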

On that front and beyond, Mindgard’s technology owes much to Garraghan’s background as a professor and researcher focused on AI security. The field is evolving fast: ChatGPT didn’t exist when he entered it, but he already sensed that NLP and image models would face new threats, he told TechCrunch.

Since then, what once sounded forward-looking has become reality in a fast-growing sector, but LLMs keep changing, and so do the threats against them. Garraghan thinks his ongoing ties to Lancaster University can help the company keep up: Mindgard will automatically own the IP to the work of 18 additional doctoral researchers for the next few years. “There’s no company in the world that gets a deal like this,” he said.

While it has ties to research, Mindgard is very much a commercial product already, and more precisely, a SaaS platform, with co-founder Steve Street leading the charge as COO and CRO. (An early co-founder, Neeraj Suri, who was involved on the research side, is no longer with the company.)

Enterprises are natural clients for Mindgard, as are traditional red teamers and pen testers, but the company also works with AI startups that need to show their customers that they do AI risk prevention, Garraghan said.

Since many of these prospective clients are U.S.-based, the company added some American flavor to its cap table. After raising a £3 million seed round in 2023, Mindgard is now announcing a new $8 million round led by Boston-based .406 Ventures, with participation from Atlantic Bridge, WillowTree Investments, and existing investors IQ Capital and Lakestar.

The funding will help with “building the team, product development, R&D, and all the things you might expect from a startup,” but also with expanding into the U.S. Mindgard’s recently appointed VP of marketing, former Next DLP CMO Fergal Glynn, is based in Boston, but the company plans to keep R&D and engineering in London.

With a headcount of 15, Mindgard is a relatively small team, and it will stay that way, with plans to grow to 20 to 25 people by the end of next year. That’s because AI security, in Garraghan’s words, “is not even in its heyday yet.” But when AI starts getting deployed everywhere and the security threats follow suit, Mindgard will be ready. Says Garraghan: “We built this company to do positive good for the world, and the positive good here is people can trust and use AI safely and securely.”
