AI Agents vs. AI Pipelines - A Practical Guide to Coding Your LLM Application
We use CrewAI to create apps that demonstrate how to choose the right architecture for your LLM application
We can think of an AI agent as an LLM with access to external tools; it runs in a loop, making decisions about how to behave and what tools to use at each iteration.
By doing this, the agent can solve much more complex problems than a conventional LLM app. (I explored how to build such an agent from scratch in the article How to Build a ReAct AI Agent with Claude 3.5 and Python.)
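To make that loop concrete, here is a minimal sketch of the decide-act cycle in plain Python. It is not tied to any framework: the call_llm helper and the tools dictionary are placeholders standing in for a real model call and real tool implementations.

# A minimal sketch of an agent's decide-act loop.
# call_llm and the tools dictionary are stand-ins for a real LLM call
# and real tool implementations.

def call_llm(history: list[str]) -> dict:
    # Placeholder: a real implementation would send the history to an LLM
    # and parse its response into an action.
    return {"action": "finish", "input": "", "answer": "done"}

tools = {
    "search": lambda query: f"results for {query}",  # stand-in tool
}

def run_agent(question: str, max_steps: int = 5) -> str:
    history = [f"Question: {question}"]
    for _ in range(max_steps):
        decision = call_llm(history)                   # the LLM decides what to do next
        if decision["action"] == "finish":
            return decision["answer"]                  # the agent chooses to stop
        tool = tools[decision["action"]]               # the agent picks a tool...
        observation = tool(decision["input"])          # ...and uses it
        history.append(f"Observation: {observation}")  # the result feeds the next iteration
    return "No answer within the step limit."

The key point is that the control flow is not fixed in advance: at every iteration the model itself decides whether to call a tool or to stop and answer.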
AI agents are powerful and a big step beyond a conventional chat application, but they are not always the right solution.
Sometimes a sequence of more conventional functions is a better fit. We can think of this type of application as a pipeline, where a solution is built up by passing the output of one function as the input to the next.
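A pipeline, by contrast, fixes the order of operations in code. The sketch below chains three hypothetical steps, each an ordinary function whose output becomes the next function's input; the step names and bodies are illustrative placeholders, not part of any library.

# A minimal pipeline sketch: each step is an ordinary function and the
# output of one step is passed as the input to the next.
# The step functions are illustrative placeholders for LLM calls.

def summarise(text: str) -> str:
    return text[:100]                 # placeholder for an LLM summarisation call

def translate(summary: str) -> str:
    return summary.upper()            # placeholder for an LLM translation call

def format_report(translation: str) -> str:
    return f"REPORT:\n{translation}"  # placeholder for final formatting

def run_pipeline(text: str) -> str:
    result = text
    for step in (summarise, translate, format_report):  # fixed order, no runtime decisions
        result = step(result)
    return result

print(run_pipeline("Some long source document..."))

Here the program, not the model, decides what happens next, which makes the behaviour predictable and easy to test.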
In this article, we will explore AI agents and pipelines and the types of application each is best suited to. We will use the CrewAI open-source framework to create the LLM logic and build a simple front end in Streamlit for an online app.
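As a preview of the kind of code that follows, here is a minimal CrewAI sketch with a single agent and a single task run as a sequential crew. The role, goal, and task wording are made up for illustration, an LLM API key (for example OPENAI_API_KEY) is assumed to be set in the environment, and constructor details can vary between CrewAI versions.

# A minimal CrewAI sketch: one agent, one task, run as a sequential crew.
# The agent and task text are illustrative; an LLM API key is assumed to be
# configured in the environment.
from crewai import Agent, Task, Crew, Process

writer = Agent(
    role="Technical writer",
    goal="Explain LLM application architectures clearly",
    backstory="You write concise explanations for software developers.",
)

explain_task = Task(
    description="Write a short paragraph comparing AI agents and AI pipelines.",
    expected_output="A single paragraph of plain English.",
    agent=writer,
)

crew = Crew(agents=[writer], tasks=[explain_task], process=Process.sequential)
result = crew.kickoff()  # runs the task and returns the agent's output
print(result)

We will build on this basic pattern for both the agent-based and the pipeline-based apps, and wrap the results in a Streamlit front end.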