Lakera Secures $20 Million to Protect AI from Malicious Prompts
4 min readLakera, a Swiss startup, is making waves in the AI industry with its focus on safeguarding generative AI applications. The company has successfully raised $20 million in a Series A funding round. As generative AI continues to grow, so do concerns about data privacy and security vulnerabilities.
Founded in 2021, Lakera has quickly become a key player in the field of AI security. The company’s technology aims to protect organizations from threats such as data leakage and unauthorized access. This funding will help Lakera enhance its offerings and expand its reach, particularly in the U.S. market.
Protecting Generative AI Applications
Lakera, a Swiss startup, is focusing on safeguarding generative AI applications from malicious prompts and other online threats. The company recently secured $20 million in a Series A funding round led by European venture capital firm Atomico. Given the rise of generative AI, driven by popular apps, security concerns in enterprise settings have also surged, particularly around data privacy and security vulnerabilities.
Large language models (LLMs) power generative AI, allowing machines to understand and generate text similarly to humans. However, these models need prompts to deliver outputs, which can be manipulated to carry out unauthorized activities like revealing confidential information. This exploitation method, known as prompt injection, poses a significant threat that Lakera aims to mitigate.
Lakera’s Origins and Technology
Founded in 2021 in Zurich, Lakera officially launched last October, backed by $10 million in funding. Its primary goal is to protect organizations from LLM security weaknesses, including data leakage and prompt injections. The company’s technology is compatible with various LLMs, including OpenAI’s GPT-3, Google’s Bard, Meta’s LLaMA, and Anthropic’s Claude.
Lakera positions itself as a low-latency AI application firewall that secures data traffic in and out of generative AI applications. Its flagship product, Lakera Guard, is built on a database collating insights from diverse sources such as publicly available datasets on Hugging Face, in-house machine learning research, and an interactive game called Gandalf.
Interactive Learning and Taxonomy
Gandalf, an interactive game developed by Lakera, challenges users to trick it into revealing a secret password. The game gets increasingly sophisticated as the levels advance, helping Lakera create a prompt injection taxonomy that categorizes different types of attacks.
According to co-founder and CEO David Haber, “We are AI-first, building our own models to detect malicious attacks such as prompt injections in real time.” These models learn from vast amounts of generative AI interactions, continuously improving to adapt to new threats.
Lakera Guard Features and Applications
By integrating Lakera Guard’s API, companies can better protect against malicious prompts. Lakera has also developed specialized models to scan for harmful contents like hate speech, violent material, and profanities. These are crucial for public-facing applications like chatbots, and can be easily integrated with a single line of code.
Companies can access a centralized policy control dashboard to fine-tune content thresholds based on the type of material. This makes Lakera Guard flexible and adaptable, suitable for a range of applications and industries.
With $20 million in new funding, Lakera is prepared to expand globally, focusing primarily on the U.S. market. It already has a number of high-profile customers in North America, including AI startup Respell and Canadian unicorn Cohere.
“Large enterprises, SaaS companies, and AI model providers are all racing to roll out secure AI applications,” said Haber. The financial services sector, in particular, has been quick to adopt this technology due to its inherent security and compliance risks, but interest is growing across various industries.
Funding and Future Prospects
Aside from Atomico, Lakera’s Series A funding round saw participation from several prominent investors, including Citi Ventures and Redalpine. This financial backing underscores the growing importance of securing generative AI applications.
The new funding will enable Lakera to scale its operations and refine its technology further, aiming to offer even more robust protection against emerging AI threats.
In summary, Lakera’s focus on safeguarding generative AI applications from malicious prompts showcases its commitment to AI security. The startup’s recent $20 million funding round signals a growing recognition of the importance of AI safety. With innovative solutions like Lakera Guard, the company is well-positioned to tackle emerging AI threats and expand its reach globally.
As AI technology continues to evolve, protecting it from vulnerabilities becomes increasingly crucial. Lakera’s proactive approach and advanced technology set a high standard in the industry. The company’s expansion plans and continued innovation promise a safer future for generative AI applications.