
SydeLabs Raises $2.5M To Solve Security And Risk Management For Generative AI

Globally, policymakers continue to be concerned about the security and safety risks of generative AI. Now, security and risk management startup SydeLabs has announced a seed funding round of $2.5 million to build solutions aimed at securing GenAI systems for enterprises. The round was led by RTP Global, with participation from Picus Capital and marquee angel investors.

The adoption of Generative AI has opened a new cybersecurity attack surface for those leveraging the technology. SydeLabs’ AI security and risk management solutions address this emerging concern. The company offers solutions to identify security and safety vulnerabilities in enterprise AI systems and prevent them in real time, helping mitigate cybersecurity attacks and abuse. Founded by Ruchir Patwa and Ankita Kumari, SydeLabs says its mission is to make AI applications safe, secure and resistant to abuse.

Patwa is a cybersecurity expert with over 10 years of experience, most recently leading security teams at Google and Mobile Premier League. Kumari has held a series of product leadership positions, including building fraud and risk management solutions at McKinsey & Co., Mobile Premier League and CRED. Together they have worked extensively on security, risk and fraud management.

Kumari said: “While we were working on solving security and fraud-led business problems, we saw how business growth and profitability was fueled by the adoption of risk mitigation measures, despite these measures usually being seen as cost sinks. Since then we knew we wanted to build solutions in the vulnerability and risk management space to address these growing concerns.”

Kumari added: “The adoption of Generative AI in enterprise organizations has seen the cybersecurity attack surface increase substantially. From a security point of view, companies are now bringing a human-like element into systems that were previously not susceptible to social engineering and manipulation. From a compliance perspective, we see systems having access to internal and user data with the ability to take action on that data. We also see the reputational risks emerging with adoption of GenAI systems that can generate undesirable content which can cause harm to a brand, loss of goodwill and further legal risk.”

In its market research with CISOs globally, SydeLabs says it is seeing increased acknowledgement of these vulnerabilities as enterprises are rapidly adopting Generative AI for various business use cases.


As the company looks to build a holistic AI security and risk management platform, it says its solution suite "helps detect and prevent vulnerabilities in AI systems thus avoiding attacks, abuse and non-compliance." By focusing on the intent of attackers rather than the pattern-matching approaches traditionally used in cybersecurity, the company says it is going further than conventional offerings.

Patwa concluded: “We are building a comprehensive platform for risk management of Generative AI systems, across the entire development lifecycle. This can give a huge productivity boost to enterprises and prevent costs associated with inaction around security and compliance threats. We want to give confidence to enterprises to deploy GenAI applications without having to worry about security and safety blindspots.”

