Working with long conversations

Model context window limit

LLMs process input through a context window, which limits how much text the model can consider at once. Claude 4, the LLM Memex uses, has a context window of 200k tokens, which can extend to 1M tokens in Long Context Mode.
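For a rough sense of what these numbers mean, here is a minimal sketch of estimating context usage. It assumes the common rule of thumb of roughly four characters per token for English text; Memex and Claude use a real tokenizer, so the actual count will differ:

```python
# A rough sketch of estimating context usage, assuming ~4 characters
# per token (a common rule of thumb for English text). This is an
# approximation, not Memex's or Claude's actual tokenizer.

CONTEXT_WINDOW = 200_000  # Claude 4's standard context window, in tokens

def estimate_tokens(text: str) -> int:
    """Roughly approximate the token count of a piece of text."""
    return max(1, len(text) // 4)

def context_usage(conversation_text: str) -> float:
    """Return the fraction of the context window a conversation occupies."""
    return estimate_tokens(conversation_text) / CONTEXT_WINDOW

# Example: a 400,000-character conversation uses about half the window
print(f"{context_usage('x' * 400_000):.0%} of the context window used")
```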

You can see how much of the context window a conversation is using in the usage indicator at the bottom right of the app.

If you ever get close to the model's context window limit, Memex's Context Management capabilities will kick in, helping you start a new conversation without missing a beat!
