OpenAI says that ChatGPT’s Memory is opt-in from the start and can be cleared at any time, either in the settings or by simply telling the bot to clear it. Once Memory is cleared, that information won’t be used to train its AI model. It’s unclear exactly how much of the personal data shared while someone is chatting with the chatbot is used to train the AI. And turning off Memory doesn’t mean you’ve opted out of letting your chats train OpenAI’s model; that is a separate opt-out.
The company also says it won’t store certain sensitive information in Memory. If you tell ChatGPT your password (don’t do this) or your Social Security number (or this), the app’s Memory thankfully forgets it. Jang also says OpenAI is still soliciting feedback on whether other personally identifiable information, such as a user’s ethnicity, is too sensitive for the company to capture automatically.
“We think there are many useful cases for that example, but for now we have trained the model to avoid proactively remembering that information,” Jang says.
It’s easy to see how ChatGPT’s Memory feature could go wrong: instances where a user might have forgotten that they once asked the chatbot about a problem, an abortion clinic, or a nonviolent way to deal with a mother-in-law, only to be reminded of it, or have others see it, in a future chat. How ChatGPT’s Memory handles health data is also something of an open question. “We steer ChatGPT away from remembering certain health details, but this is still a work in progress,” says OpenAI spokesperson Niko Felix. In this way ChatGPT is singing the same song about the permanence of the internet, just in a new era: Look at this great new Memory feature, until it’s a bug.
OpenAI is also not the first entity to toy with memory in generative AI. Google has emphasized “multi-turn” technology in Gemini 1.0, its own LLM. This means you can interact with Gemini Pro using a single-turn prompt (one back-and-forth exchange between the user and the chatbot) or have a multi-turn, continuous conversation in which the bot “remembers” the context of previous messages.
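The single-turn versus multi-turn distinction can be sketched in a few lines. This is a minimal illustration of the idea only; the `generate` function below is a hypothetical stand-in for any real LLM API call, not Gemini’s actual interface.

```python
# A minimal sketch of single-turn vs. multi-turn chat. `generate` is a
# hypothetical stand-in for a real LLM call; here it just reports how
# much context it received, to show what the model "sees" each turn.

def generate(messages):
    return f"(reply based on {len(messages)} prior message(s))"

# Single-turn: each prompt is sent alone, with no shared context.
single = generate([{"role": "user", "content": "Recommend a restaurant."}])

# Multi-turn: the full history is re-sent on every call, so the model
# can "remember" the context of earlier messages in the conversation.
history = []
for prompt in ["I'm vegetarian.", "Recommend a restaurant."]:
    history.append({"role": "user", "content": prompt})
    reply = generate(history)
    history.append({"role": "assistant", "content": reply})

print(single)
print(len(history))  # two user turns plus two assistant replies
```

The key point is that a multi-turn bot’s “memory” within a conversation is often just the accumulated message list being passed back in on each request.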
An AI framework company called LangChain has been developing a memory module that helps large language models recall previous interactions between an end user and the model. Giving LLMs a long-term memory “can be very powerful in creating unique LLM experiences: a chatbot can begin to tailor its responses to you as an individual based on what it knows about you,” says Harrison Chase, co-founder and CEO of LangChain. “The lack of long-term memory can also create a grating experience. No one wants to have to tell a restaurant-recommendation chatbot over and over again that they are vegetarian.”
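The tailoring Chase describes boils down to persisting facts about a user between sessions and injecting them into later prompts. Below is a toy sketch of that pattern in plain Python; it is an illustration of the concept, not LangChain’s actual API, and the class and method names are invented for this example.

```python
# Toy long-term memory store (hypothetical, not LangChain's API):
# facts learned in one session are prepended to prompts in later ones,
# so the user never has to repeat them.

class LongTermMemory:
    def __init__(self):
        self.facts = {}  # user_id -> list of remembered facts

    def remember(self, user_id, fact):
        self.facts.setdefault(user_id, []).append(fact)

    def build_prompt(self, user_id, question):
        remembered = self.facts.get(user_id, [])
        context = "\n".join(f"- {f}" for f in remembered)
        return f"Known about this user:\n{context}\n\nUser asks: {question}"

memory = LongTermMemory()
memory.remember("alice", "is vegetarian")  # learned in an earlier session

# In a later session the preference is injected automatically.
prompt = memory.build_prompt("alice", "Recommend a restaurant.")
print(prompt)
```

The model itself stays stateless; the “memory” lives in the application layer that assembles each prompt.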
This technology is sometimes referred to as “context retention” or “persistent context” rather than “memory,” but the end goal is the same: to make the human-computer interaction feel so fluid, so natural, that the user can easily forget what the chatbot might remember. This is also a potential boon for the businesses deploying these chatbots that want to maintain an ongoing relationship with the customer on the other end.
“You can think of them as a bunch of tokens that are prepended to your conversations,” says Liam Fedus, a research scientist at OpenAI. “The bot has some intelligence, and behind the scenes it looks at the memories and says, ‘These look like they’re related; let me merge them.’ And that then goes on your token budget.”
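The mechanism Fedus describes can be roughed out in code: memories are stored as text, related ones get merged, and whatever survives is prepended to the conversation and counted against a fixed token budget. Everything below is an illustrative simplification; the budget figure, the whitespace tokenizer, and the duplicate-collapsing “merge” are stand-ins for what the real system does with a proper tokenizer and the model’s own judgment.

```python
# Rough sketch of memories-as-prepended-tokens under a budget.
# Token counting here is a crude whitespace split, not a real tokenizer.

TOKEN_BUDGET = 2000  # "a few thousand tokens," per Fedus

def n_tokens(text):
    return len(text.split())  # stand-in for a real tokenizer

def merge_related(memories):
    # Toy "merge": collapse exact duplicates while keeping order.
    # The real system uses the model to judge which memories are related.
    return list(dict.fromkeys(memories))

def prepend_memories(memories, conversation):
    kept, used = [], 0
    for m in merge_related(memories):
        cost = n_tokens(m)
        if used + cost > TOKEN_BUDGET:
            break  # budget exhausted; remaining memories are dropped
        kept.append(m)
        used += cost
    return "\n".join(kept) + "\n\n" + conversation

memories = ["User is vegetarian.", "User is vegetarian.", "User lives in Oslo."]
prompt = prepend_memories(memories, "User: recommend a restaurant.")
print(prompt)
```

Because the memories ride along inside the prompt, every remembered fact spends tokens that would otherwise go to the conversation itself, which is why the budget matters.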
Fedus and Jang say ChatGPT’s Memory is nowhere near the capacity of the human brain. And yet, almost in the same breath, Fedus explains that ChatGPT’s Memory is limited to “a few thousand tokens.” If only.
Is this the ever-vigilant virtual assistant that tech consumers have been promised for the past decade, or just another data-capture scheme that uses your likes, preferences, and personal data to serve a tech company better than it serves its users? Possibly both, though OpenAI might not put it that way. “I think the assistants of the past just didn’t have the intelligence,” Fedus said, “and now we’re getting to that point.”
Will Knight contributed to this story.