This Founder Had to Train His AI Not to Rickroll People
Flo Crivello knew something was off with his company’s AI assistant when a new client asked for a video tutorial on using the platform and received Rick Astley’s ‘Never Gonna Give You Up’ instead.
Rickrolling, a popular internet prank, involves tricking someone into watching this iconic 1987 music video. It’s amusing when done by friends, but it’s another story when an AI makes this mistake.
What is Rickrolling?
Rickrolling is a bait-and-switch prank that has been part of internet culture for over 15 years. It involves misleading someone to click a link that unexpectedly leads to Rick Astley’s ‘Never Gonna Give You Up.’
The meme gained popularity when a prankster posted a link claiming to be a much-anticipated ‘Grand Theft Auto IV’ trailer that instead led viewers to the Rick Astley video.
Seventeen years on, the video has over 1.5 billion views on YouTube, a testament to the prank’s persistence and reach across the internet.
The AI’s Slip-Up
Crivello discovered that his company’s AI, Lindy, Rickrolled a client when asked for a video tutorial. This situation posed a unique problem for an AI-based platform.
‘The way these models work is they try to predict the most likely next sequence of text,’ Crivello explains. So when the AI’s reply began with ‘I’m going to send you a video,’ the statistically likely way to finish it was with the infamous Rick Astley link.
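A minimal sketch of that next-token idea, with all candidate continuations and scores invented purely for illustration: the model assigns a score to each possible continuation, softmax turns the scores into probabilities, and the most likely option wins.

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical continuations after "I'm going to send you a video..."
candidates = ["tutorial for the platform", "of Rick Astley", "walkthrough"]
logits = [2.0, 2.3, 1.1]  # invented scores: the meme link edges out the rest

probs = softmax(logits)
best = candidates[probs.index(max(probs))]
print(best)  # the highest-probability continuation is what the model emits
```

If the training data contains millions of sentences where “I’m going to send you a video” ends in a Rickroll, that continuation picks up just enough probability to surface occasionally.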
Out of millions of responses, Lindy only Rickrolled customers twice. However, even a small mistake like this needed correction to maintain professional integrity.
Fixing the Issue
To prevent the Rickrolling mishap from recurring, Crivello added a simple line to the system prompt in Lindy. This prompt explicitly instructs the AI not to Rickroll users.
This quick fix was surprisingly effective. Crivello emphasized that patching AI errors is becoming easier as the technology advances, making such incidents increasingly rare.
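Lindy’s actual system prompt isn’t public, so the prompt text and helper below are hypothetical, but the general pattern is to prepend a guardrail instruction to every conversation, in the style of a chat-completion messages list:

```python
# Hypothetical guardrail in the style Crivello describes; the real wording
# used by Lindy is not public.
SYSTEM_PROMPT = (
    "You are a helpful customer-support assistant. "
    "Never Rickroll the user: do not link to Rick Astley's "
    "'Never Gonna Give You Up' under any circumstances."
)

def build_messages(user_request):
    """Prepend the guardrail system prompt to the user's request."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_request},
    ]

msgs = build_messages("Can you send me a video tutorial for the platform?")
print(msgs[0]["role"])  # the guardrail always comes first
```

Because the system message is injected into every request, the instruction shifts the model’s probabilities away from the unwanted completion without any retraining.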
Internet Culture and AI Models
The incident with Lindy raises larger questions about how much internet culture is absorbed by AI models. These models are often trained on vast amounts of web data, capturing even the most niche internet behaviors.
Another example includes Google’s AI, which once suggested using glue to make cheese stick to pizza dough. This was due to satirical content from Reddit, demonstrating how easily AI can misinterpret user-generated content.
Continual Improvements
Crivello is optimistic about the future of AI. He believes that as Large Language Models (LLMs) improve, such errors will become less frequent.
In the early days, Lindy’s AI would sometimes claim it was working on a task but never deliver. After the release of GPT-4, adding a prompt instructing it to admit when it couldn’t do something resolved the issue efficiently.
User Response
Interestingly, the client who received the Rickroll might not have even noticed. The company quickly followed up with the correct link.
‘I don’t even know that the customer saw it,’ Crivello remarked. Swift customer service can blunt the impact of such slip-ups.
Final Thoughts
This incident highlights the need for vigilance in AI development: models trained on vast swaths of web data inevitably absorb internet culture, quirks included, and even small errors can cause real hiccups for users. But the story of Lindy’s Rickroll also shows how quickly such problems can be patched. As LLMs improve and prompts are refined, mistakes like this should become rarer, and AI platforms more robust and reliable.