An analysis of Custom GPT vulnerabilities, posted on arXiv on December 31st last year. With OpenAI opening the GPT Store, I had already been thinking about this topic, and I'm sharing it here because it's worth referencing.
Title: Opening A Pandora’s Box: Things You Should Know in the Era of Custom GPTs
Summary:
The emergence of large language models (LLMs) has greatly accelerated the development of a wide range of applications across different disciplines. There is an increasing trend toward the establishment of professional platforms based on LLMs, such as the custom GPTs newly introduced by OpenAI. Custom GPTs offer a variety of features, such as web browsing and code execution, but at the same time pose serious security threats. This paper comprehensively analyzes the security and privacy concerns that arise on custom GPT platforms. Through systematic investigation, the authors classify potential attack scenarios into three threat models based on the role of the malicious actor, and identify the important data exchange channels in custom GPTs. Using the STRIDE threat modeling framework, they identify 26 potential attack vectors, 19 of which have been validated partially or entirely in a real-world setting. The findings highlight the urgent need for strong security and privacy measures in the custom GPT ecosystem, especially considering that OpenAI’s official GPT Store is coming soon.
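For readers unfamiliar with STRIDE, it is Microsoft's threat-modeling framework covering six threat categories. The sketch below is a hypothetical illustration of how custom-GPT threats might be bucketed into those categories; the example threats are my own illustrative guesses, not the paper's actual enumeration of its 26 attack vectors.

```python
# Hypothetical sketch: tagging custom-GPT threats with STRIDE categories.
# The six category names are the standard STRIDE taxonomy; the example
# descriptions are illustrative assumptions, not taken from the paper.
STRIDE = {
    "Spoofing": "impersonating a trusted GPT or its builder",
    "Tampering": "altering a GPT's instructions or knowledge files",
    "Repudiation": "denying malicious actions taken through a GPT",
    "Information disclosure": "leaking system prompts or user chat data",
    "Denial of service": "exhausting code-execution or browsing quotas",
    "Elevation of privilege": "abusing external API actions beyond intent",
}

def classify(threat: str, category: str) -> tuple[str, str]:
    """Attach a STRIDE category to a described threat."""
    if category not in STRIDE:
        raise ValueError(f"unknown STRIDE category: {category}")
    return (category, threat)

# Example: a prompt-injection threat filed under information disclosure.
example = classify("prompt injection via a browsed web page",
                   "Information disclosure")
print(example)
```

This only shows the classification scheme's shape; the paper's contribution is applying it systematically across the platform's data-exchange channels.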
arXiv: https://arxiv.org/abs/2401.00905
Browse: https://browse.arxiv.org/pdf/2401.00905.pdf
PDF: https://arxiv.org/pdf/2401.00905.pdf
arXiv-vanity: https://www.arxiv-vanity.com/papers/2401.00905
Paper page: https://huggingface.co/papers/2401.00905