To paraphrase the slogan of Bill Clinton’s 1992 campaign, “it’s the conversation, stupid,” and Alan Turing would agree.
ChatGPT’s shareable links point to the end of a conversation, implying that the purpose of the conversation is to reach a conclusion, such as an answer to a question or a link to a relevant document. That makes sense for a search engine, but with AI chatbots, the conversation itself is of interest. Much of the value lies in the dialogue, the back-and-forth in which assumptions are tested, errors corrected, alternatives explored, and ideas refined, not in a final answer.
I subscribe to the ChatGPT service and find it well worth the monthly fee, but it treats our conversations as ephemeral interface entities rather than durable intellectual work products of interest in their own right. The user interface is misleading. It has a "history" feature, which I assumed meant that conversations were permanently archived, until a reader pointed out that a link to a conversation I had published was broken.
The history feature allows users to view and continue past conversations for an unspecified time, but provides no native way to permanently archive an individual conversation in a standard document format like Word or PDF. There is no per-conversation export, no versioning, and no user-controlled guarantee of long-term retention. The only built-in alternative is an “export all data” function that produces raw HTML and JSON files, requiring additional processing before the material becomes usable.
This is not a minor usability issue. It reflects a deeper assumption: that the conversation is a disposable means to an end, rather than a work product with independent value. That assumption may be reasonable for customer-support chatbots (which I hate), but it is not reasonable when AI systems are used for writing, learning, decision-making, or policy formulation. In those contexts, the dialogue itself documents reasoning, uncertainty, correction, and collaboration. Often, it is precisely the conversational path, not just the destination, that one wishes to preserve.
Other software categories recognise this distinction. Word processors save drafts. Version control systems preserve history. Collaborative tools maintain change logs. AI chat services, by contrast, still behave as though conversations are transient means to an end. As a result, users who care about their work are pushed into awkward workarounds: manual copy-and-paste, browser extensions, or, in the case of ChatGPT, "export all data," which mixes valuable conversations with trivial ones.
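As an illustration of the extra processing that workaround demands, splitting the bulk export into per-conversation documents takes a small script. The sketch below assumes the export contains a conversations.json file holding a list of conversations, each with a "title" and a "mapping" of message nodes; that layout is undocumented and may change, so treat this as an assumption rather than a specification.

```python
import json
from pathlib import Path


def split_export(export_path, out_dir):
    """Write each conversation in a bulk export to its own Markdown file.

    Assumes conversations.json is a list of objects with a "title" and a
    "mapping" of message nodes; the real export encodes a tree of nodes,
    so a faithful tool would follow parent links rather than file order.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    conversations = json.loads(Path(export_path).read_text(encoding="utf-8"))
    for i, conv in enumerate(conversations):
        title = conv.get("title") or f"conversation-{i}"
        lines = [f"# {title}", ""]
        for node in conv.get("mapping", {}).values():
            msg = node.get("message")
            if not msg:
                continue  # skip structural nodes with no message body
            role = msg.get("author", {}).get("role", "unknown")
            parts = msg.get("content", {}).get("parts", [])
            text = "\n".join(p for p in parts if isinstance(p, str)).strip()
            if text:
                lines.append(f"**{role}:** {text}")
                lines.append("")
        # Sanitize the title so it is safe to use as a file name.
        safe = "".join(c if c.isalnum() or c in " -_" else "_" for c in title)
        name = safe.strip() or f"conversation-{i}"
        (out / f"{name}.md").write_text("\n".join(lines), encoding="utf-8")
```

Even a sketch like this underscores the point: the user should not need to write code at all to keep a single conversation.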
The solution does not require new breakthroughs in artificial intelligence, just the addition of a new feature to existing chatbots. (ChatGPT’s shared links should also offer the option of linking to the beginning of the conversation rather than the end.)
AI systems are not mere search engines but tools for thinking and writing, and AI platforms should support per-conversation export in standard formats, user-controlled guarantees of retention or deletion, stable identifiers suitable for citation, and optional versioning to capture the evolution of a dialogue. I would happily pay ChatGPT to store my conversations, or simply to add a button that let me save them to Google Drive or Microsoft OneDrive.
Finally, I asked ChatGPT whether any of the well-known US and Chinese AI chatbots had an "archive-conversation" feature, and it said none of them did.
Alan Turing suggested a test for machine intelligence: a machine was intelligent if a human evaluator could not reliably tell whether a conversation transcript was with it or with a person. In the early 1970s, I installed two public-access Teletypes with dial-up timesharing connections in the Venice, California Public Library, and, among other things, I saw users carry on lengthy conversations with Eliza, a simple BASIC program that fed their statements back to them in the manner of a non-directive therapist. With those users as evaluators, Eliza passed the Turing test, and if even those primitive conversations were worth having, today's chatbot conversations are surely worth archiving.
