Stealing Part of a Production Language Model (2024)
We introduce the first model-stealing attack that extracts precise, nontrivial information from black-box production language models like OpenAI’s ChatGPT or Google’s PaLM-2. Specifically, our attack recovers the embedding projection layer (up to symmetries) of a transformer model, given typical API access. For under $20 USD, our attack extracts the entire projection matrix of OpenAI’s ada and babbage language models. We thereby confirm, for the first time, that these black-box …
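
The recovery rests on a linear-algebraic observation: every logit vector the model returns lies in the image of the final projection matrix W (shape vocab_size × hidden_dim), so a stack of more than hidden_dim full logit vectors has numerical rank at most hidden_dim. Below is a minimal sketch of that rank estimate, not the paper's implementation; `query_full_logits` is a hypothetical placeholder for however one extracts complete logit vectors from an API, and the tolerance is an illustrative choice.

```python
import numpy as np

def estimate_hidden_dim(logit_matrix: np.ndarray, tol: float = 1e-4) -> int:
    """Estimate the hidden dimension from stacked full logit vectors.

    Each row is the logit vector for one prompt. Logits are W @ h_state
    with W of shape (vocab_size, hidden_dim), so the stacked matrix has
    rank at most hidden_dim and its singular values drop sharply past it.
    """
    s = np.linalg.svd(logit_matrix, compute_uv=False)  # singular values, descending
    return int(np.sum(s > tol * s[0]))  # count values above a relative cutoff

# Hypothetical usage (query_full_logits is assumed, not a real API call):
# Q = np.stack([query_full_logits(p) for p in random_prompts(4096)])
# h = estimate_hidden_dim(Q)  # numerical rank ~ hidden dimension
```

The same decomposition explains the "up to symmetries" caveat: the observed logits pin down the projection matrix only up to an invertible hidden_dim × hidden_dim transformation G, since W·G applied to G⁻¹-mapped hidden states yields identical outputs.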
