An esoteric way I find myself increasingly using LLMs is as a thought partner. I don't mean using AI as a therapist or a girlfriend, as with some other prevalent uses. Instead, I'm leveraging LLMs to help me think through complex decision-making and planning processes. While this use-case might seem odd at first, I believe it will become mainstream in due time. Surprisingly, it remains under-discussed in the broader AI community.
So what do I mean by a thought partner? I use AI (primarily Claude) to answer open-ended questions that help me think through both technical and non-technical problems. Here's a recent example: I was contemplating whether to self-host my media collection or simply pay for a larger iCloud storage tier. I initially consulted Claude because I suspected my decision-making wasn't entirely rational; I have a strong bias towards DIY and self-hosting. However, after an in-depth conversation with Claude, I had a complete change of heart. Claude even created a simple breakeven analysis of my situation:
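For the sake of illustration, here's the shape of that breakeven math in a few lines of Python. The numbers below are hypothetical placeholders, not the actual figures from my conversation:

```python
# Hypothetical breakeven sketch: self-hosting vs. a larger iCloud tier.
icloud_monthly_cost = 9.99        # assumed cost of a larger iCloud tier ($/month)
nas_hardware_cost = 600.00        # assumed upfront cost of self-hosted hardware ($)
nas_monthly_running_cost = 8.00   # assumed electricity, drive replacements, etc. ($/month)

# Self-hosting only pays off if it's cheaper to run per month,
# and even then the upfront hardware cost has to amortize first.
monthly_savings = icloud_monthly_cost - nas_monthly_running_cost
if monthly_savings <= 0:
    print("Self-hosting never breaks even at these numbers.")
else:
    breakeven_months = nas_hardware_cost / monthly_savings
    print(f"Breakeven after ~{breakeven_months:.0f} months "
          f"({breakeven_months / 12:.1f} years).")
```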
Despite how productive this conversation was, I still find current products lacking for true thought-partnership. To start, you must explicitly prompt the LLM to consider opposing points of view. I believe a great thought partner illuminates gaps in your thinking, and as such, healthy debate is necessary to reach better ideas. A good thought partner also challenges your thinking and asks you to specify your assumptions. LLMs are often too agreeable, making explicit prompting for debate necessary. For example, I prompted Claude as follows:
I generally agree with this assessment, but give me an in-depth bull case for self-hosting and an in-depth bear case for the cloud.
This type of prompting raises another issue: effective brainstorming is usually a highly non-linear process. The linear chat UX can be frustrating because what I really want is a branching conversation, somewhat like brainstorming on Miro. Branching would also help manage LLM context more effectively, as LLMs tend to forget and get confused during long conversations (not unlike humans 😅). Branching enables sending only the necessary context for a given conversation thread. For example, in my case, the "bull case" and "bear case" would function better as separate branches rather than a single combined prompt. In theory, this also allows the model more "time to think."
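To make the branching idea concrete, here's a minimal sketch assuming the `anthropic` Python SDK: a shared "trunk" of context gets forked into separate bull-case and bear-case threads, each carrying only what it needs. The model id and prompts are just placeholders.

```python
# A minimal branching sketch, assuming the `anthropic` Python SDK.
# Each branch reuses the shared trunk but only carries its own follow-up,
# so neither thread is polluted by the other's discussion.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-3-5-sonnet-20241022"  # placeholder; any Claude model id works

# Shared trunk: the context both branches need.
trunk = [
    {"role": "user", "content": "Help me decide between self-hosting my media "
                                "collection and paying for a larger iCloud tier."},
]
trunk_reply = client.messages.create(model=MODEL, max_tokens=1024, messages=trunk)
trunk.append({"role": "assistant", "content": trunk_reply.content[0].text})

def branch(follow_up: str) -> str:
    """Fork the conversation: shared trunk plus one branch-specific prompt."""
    messages = trunk + [{"role": "user", "content": follow_up}]
    reply = client.messages.create(model=MODEL, max_tokens=1024, messages=messages)
    return reply.content[0].text

bull_case = branch("Give me an in-depth bull case for self-hosting.")
bear_case = branch("Give me an in-depth bear case for the cloud.")
```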
Branching is crucial for divergent thinking in a brainstorming session. However, great brainstorming sessions also feature convergent thinking, where the ideas discussed are synthesized and unified. To this end, I would love for a thought partner AI to summarize key points at different "checkpoints" in a conversation. These "checkpoints" are fairly arbitrary and abstract because they depend on the specific conversation. To draw an analogy, in a human-led brainstorming session, someone is likely jotting down notes of key points raised and discussed. This is critical, as much of brainstorming involves productive meandering; a lot of the conversation content may be irrelevant to the final outcome but is a necessary part of the process.
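A checkpoint could be as simple as asking the model to condense the thread so far and then carrying that summary forward in place of the raw transcript. A hedged sketch, again assuming the `anthropic` SDK:

```python
# "Checkpoint" sketch: periodically collapse a long thread into a short summary
# that seeds the next stretch of conversation. Assumes the `anthropic` Python SDK.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-3-5-sonnet-20241022"  # placeholder model id

def checkpoint(messages: list[dict]) -> list[dict]:
    """Ask the model to summarize the thread, then use the summary as new context."""
    summary_request = messages + [{
        "role": "user",
        "content": "Pause and summarize the key points, open questions, and "
                   "tentative conclusions from our discussion so far.",
    }]
    reply = client.messages.create(model=MODEL, max_tokens=1024,
                                   messages=summary_request)
    summary = reply.content[0].text
    # The summary becomes the compact context for the rest of the session.
    return [{"role": "user", "content": f"Summary of our discussion so far:\n{summary}"}]
```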
A more advanced feature for a thought partner AI that combines both convergent and divergent thinking would be comparing the results of two different models. This is akin to asking your personal board of directors a question and comparing individuals' responses. Currently, I do this manually by providing both ChatGPT and Claude the same prompt. I then compare their answers to determine areas of agreement, disagreement, and which assistant exhibited the better reasoning. Due to the subpar UX for this type of conversation, I often short-circuit the process by continuing the conversation with the more thoughtful assistant.
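Automating that manual workflow is straightforward in principle. Here's a rough sketch, assuming the `openai` and `anthropic` Python SDKs with placeholder model ids: send both assistants the same prompt, then have one of them referee the comparison.

```python
# "Personal board of directors" sketch, assuming the `openai` and `anthropic`
# Python SDKs; the model ids and prompts are placeholders.
import anthropic
from openai import OpenAI

claude = anthropic.Anthropic()
gpt = OpenAI()

def ask_both(prompt: str) -> dict[str, str]:
    """Send the same prompt to both assistants and return their answers."""
    claude_reply = claude.messages.create(
        model="claude-3-5-sonnet-20241022", max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    gpt_reply = gpt.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return {
        "claude": claude_reply.content[0].text,
        "chatgpt": gpt_reply.choices[0].message.content,
    }

answers = ask_both("Should I self-host my media collection or pay for iCloud?")

# Have one model referee: where do the answers agree, disagree,
# and whose reasoning holds up better?
comparison_prompt = (
    "Compare these two answers. Where do they agree, where do they disagree, "
    "and whose reasoning is stronger?\n\n"
    f"Answer A:\n{answers['claude']}\n\n"
    f"Answer B:\n{answers['chatgpt']}"
)
comparison = claude.messages.create(
    model="claude-3-5-sonnet-20241022", max_tokens=1024,
    messages=[{"role": "user", "content": comparison_prompt}],
)
print(comparison.content[0].text)
```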
These are just some initial ideas on this emerging use-case. I'll continue to refine this concept, but I'd love to hear if others are using LLM assistants in this way and if these ideas resonate with them.