ChatGPT isn’t just an LLM. Claude isn’t just an LLM. These tools combine several components that make them as powerful as they are now.
They can:
- Summarize huge amounts of data
- Search the web
- Download and parse CSVs, and almost every file type imaginable
- Store memories and context
And so much more…

Not knowing how agents work will:
- Limit what you can do with them.
- Create confusion when they falter or hallucinate.
- Make it harder to predict what’s coming next.
There are 3 fundamental concepts that drive agent abilities.
1) The LLM
The LLM is the engine. Without it, there is NO agent. At its core it’s a simple input/output system that works with probabilities: text is tokenized and fed in, the model predicts output tokens one at a time, and those tokens are decoded back into human-readable text.
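To make that concrete, here’s a toy sketch (not a real model) of the input/output loop: a context maps to a probability distribution over next tokens, one is sampled, and the process repeats. The tiny `NEXT_TOKEN_PROBS` table is a stand-in for the billions of learned weights in a real LLM.

```python
import random

# Toy stand-in for an LLM: a context maps to a probability
# distribution over possible next tokens.
NEXT_TOKEN_PROBS = {
    ("the",): {"cat": 0.6, "dog": 0.4},
    ("the", "cat"): {"sat": 0.9, "ran": 0.1},
    ("the", "dog"): {"ran": 0.7, "sat": 0.3},
}

def sample_next(context, rng):
    # Look up the distribution for this context and sample from it,
    # weighted by probability -- this is the "works with probabilities" part.
    probs = NEXT_TOKEN_PROBS[tuple(context)]
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

def generate(prompt_tokens, steps, seed=0):
    # Autoregressive generation: feed tokens in, append each sampled
    # token to the context, repeat.
    rng = random.Random(seed)
    out = list(prompt_tokens)
    for _ in range(steps):
        out.append(sample_next(out, rng))
    return out
```

That’s the whole engine in miniature: tokens in, a sampled token out, repeated until done. Everything else an agent does is built around this loop.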
2) The Harness
If the LLM is a car’s engine, the harness is almost EVERYTHING else: the wheels, the body, the ripped seats, the electronics. Harnesses are made up of tools that the LLM can use.
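A minimal sketch of what a harness provides: a set of named tools plus the plumbing to dispatch to them. The tool bodies here are stubs (a real harness would hit a search API, parse files, query a memory store, and so on), and the registry design is just one illustrative way to wire it up.

```python
# A harness is, at minimum, a registry of tools the LLM can ask for
# by name, plus the code that actually runs them.
TOOLS = {}

def tool(fn):
    """Register a function so the agent loop can dispatch to it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def search_web(query: str) -> str:
    # Stub: a real harness would call a search API here.
    return f"results for {query!r}"

@tool
def read_file(path: str) -> str:
    # Stub: a real harness would download and parse CSVs, PDFs, etc.
    return f"contents of {path!r}"

def dispatch(name: str, **kwargs) -> str:
    # The LLM emits a tool name and arguments; the harness runs the call.
    return TOOLS[name](**kwargs)
```

The LLM never executes anything itself; it only names a tool and its arguments, and the harness does the real work and feeds the result back in.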
3) The Agent Loop
An agent doesn’t come from running an LLM once and being done. Within a single request, the LLM may run 5 times, or well over 100. ChatGPT is not a dumb one-shot input/output machine: it can run over 100 times in the background, doing web searches, recalling memories, and pulling in relevant context.
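Here’s a hedged sketch of that loop shape: the model is called repeatedly, and on each turn it either requests a tool or produces a final answer. `fake_model` below is a scripted stand-in for a real LLM (its two tool calls are hard-coded just to show the flow), and the message format is an assumption, not any vendor’s actual API.

```python
# Scripted stand-in for an LLM: a real model would decide what to do
# next; this one requests two tools, then answers, to show the loop.
def fake_model(messages):
    tool_turns = sum(1 for m in messages if m["role"] == "tool")
    if tool_turns == 0:
        return {"tool": "search_web", "args": {"query": "agent loops"}}
    if tool_turns == 1:
        return {"tool": "read_memory", "args": {"key": "user_prefs"}}
    return {"answer": "done"}

def run_agent(prompt, model, tools, max_turns=10):
    """The agent loop: model -> tool -> model -> ... until a final answer."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_turns):
        reply = model(messages)
        if "answer" in reply:
            # The model decided it's finished.
            return reply["answer"], messages
        # Otherwise run the requested tool and feed the result back in.
        result = tools[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent hit max_turns without finishing")

# Stub tools standing in for a real harness.
tools = {
    "search_web": lambda query: f"results for {query}",
    "read_memory": lambda key: f"memory[{key}]",
}
```

Note the `max_turns` cap: real harnesses bound the loop too, so a confused model can’t spin forever.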
I wanted to write this because some teams don’t realize that ChatGPT isn’t just an LLM. It’s built from specialized harnesses and tuned agent loops (backed by their planned $600 billion of spend) on top of their suite of models. Engineers only get API access to ChatGPT’s models (the LLMs), not its agent loop and harness. So they have to build the loops and harnesses themselves; otherwise they’re left with a bare LLM doing basic inputs and outputs.
Of course, the goal isn’t to compete with OpenAI, but to specialize your own agents into something niche and awesome, as we’re doing now.



