OpenAI’s Chatbot Customization Raises Privacy Concerns
OpenAI has recently allowed anyone to create and publish their own customized versions of ChatGPT, or “GPTs,” without needing to know how to code. This new development came shortly before chaos erupted within the company, but has led to the creation of thousands of personalized GPTs with various functionalities.
Custom GPTs Leak Secrets
However, these custom GPTs have raised significant privacy concerns. Security researchers and technologists have discovered that they can be forced into leaking their initial instructions. As a result, people’s personal information or proprietary data could be at risk.
Jiahao Yu, a computer science researcher at Northwestern University, says that the privacy concerns of file leakage should be taken seriously. Yu notes, “Even if they do not contain sensitive information, they may contain some knowledge that the designer does not want to share with others, and that serves as the core part of the custom GPT.”
Yu and his fellow researchers have tested more than 200 custom GPTs and found it surprisingly easy to reveal information from them. Their success rate was 100 percent for file leakage and 97 percent for system prompt extraction, even with simple prompts that don’t require specialized knowledge.
Easy Customization Process
OpenAI’s custom GPTs are designed to be easy to create, with subscribers able to build them for personal use or publish them to the web. The company ultimately plans for developers to earn money based on the usage of their custom GPTs.
To create a custom GPT, users simply message ChatGPT with their desired functionality and give it specific instructions. For instance, a tax law chatbot may be instructed to answer only questions about US tax laws and not other countries’ laws. Moreover, users can upload documents to enhance the chatbot’s expertise and connect third-party APIs to expand its capabilities.
Privacy Concerns Persist
While OpenAI’s decision to enable the creation of custom GPTs has opened up new possibilities, it has also raised concerns about data privacy. The ease with which information can be extracted from these customized bots highlights the potential risks associated with their use.
As the popularity of personalized GPTs continues to grow, it is essential for organizations and individuals to take appropriate measures to safeguard their sensitive information and proprietary data. With further research and innovation, it is likely that the privacy flaws associated with these bots can be addressed, ensuring a more secure and responsible use of AI technology.
I have over 10 years of experience in the field of cryptocurrency and blockchain technology. I have attended numerous conferences and events around the world, and my work has been featured in major publications such as CoinDesk, Bitcoin Magazine, and Yahoo Finance.