📘 Introduction
LangChain is a powerful framework designed to simplify the creation of applications powered by large language models (LLMs). With LangChain, you can connect LLMs to tools, memory, agents, and external data sources such as APIs or databases, all with minimal code.
In this tutorial, we’ll walk through the basics of LangChain and build a simple example app.
📦 1. Installation
Make sure you have Python 3.8+ installed.
```bash
pip install langchain openai
```
To use OpenAI models:
```bash
export OPENAI_API_KEY="your-api-key"
```
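If you prefer to configure the key from inside a script (for example, when loading it from a config file), you can set the same environment variable in Python. The value below is a placeholder:

```python
import os

# Placeholder value; replace with your real OpenAI API key.
os.environ["OPENAI_API_KEY"] = "your-api-key"
```

Libraries that read `OPENAI_API_KEY` from the environment will pick this up as long as it is set before the first model call.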
🧠 2. Key Concepts
LangChain provides modular components:
| Component | Description |
|---|---|
| LLMs | Interfaces to models such as OpenAI, Anthropic, etc. |
| Chains | Combine LLMs and other components into a workflow. |
| Prompts | Templates for generating effective inputs to LLMs. |
| Memory | Maintains state across calls. |
| Agents | Dynamic decision-making systems that choose which tools to use. |
| Tools | Functions an agent can call (e.g., search, calculator). |
✍️ 3. Your First Chain
Let’s build a simple prompt chain using OpenAI’s gpt-3.5-turbo.
✅ Prompt + LLM Chain
```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

# gpt-3.5-turbo with a bit of creative variation
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.7)

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

chain = LLMChain(llm=llm, prompt=prompt)
response = chain.run("smart home devices")
print(response)
```
🧰 4. Using Tools with Agents
Want to search the web, do math, or fetch data dynamically?
```python
from langchain.agents import initialize_agent, load_tools
from langchain.agents.agent_types import AgentType
from langchain.chat_models import ChatOpenAI

# temperature=0 keeps the agent's tool-selection reasoning deterministic
llm = ChatOpenAI(temperature=0)

# serpapi: web search; llm-math: a calculator backed by the LLM
tools = load_tools(["serpapi", "llm-math"], llm=llm)

agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)

response = agent.run("Who is the president of France and what is 3.4 times his age?")
print(response)
```
🔑 The search tool requires a SERPAPI_API_KEY environment variable.
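Set it the same way as the OpenAI key (placeholder value shown):

```shell
export SERPAPI_API_KEY="your-serpapi-key"
```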
🧠 5. Add Memory to a ChatBot
Memory keeps the conversation state.
```python
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

# ConversationBufferMemory stores the full chat history verbatim
memory = ConversationBufferMemory()
llm = ChatOpenAI()

conversation = ConversationChain(llm=llm, memory=memory, verbose=True)
conversation.predict(input="Hi, my name is Alice.")
print(conversation.predict(input="What’s my name?"))  # should recall "Alice"
```
📄 6. Read Documents with LangChain
Read and query local text or PDF files:
```python
from langchain.document_loaders import TextLoader
from langchain.indexes import VectorstoreIndexCreator

loader = TextLoader("example.txt")
index = VectorstoreIndexCreator().from_loaders([loader])

response = index.query("What is the main topic of this document?")
print(response)
```
For PDF support, install:
```bash
pip install pypdf
```
🛠 7. Useful Integrations
LangChain works with:
- Vector databases: FAISS, Chroma, Pinecone
- Embeddings: OpenAI, Hugging Face
- Tools: Google Search, Wolfram Alpha
- UI: Streamlit, Gradio, LangServe
💡 8. Project Ideas
- AI blog assistant (summarize + rewrite articles)
- Document Q&A system
- Conversational agent with tools and memory
- LLM-powered email responder
🚀 9. Next Steps
- Read the LangChain docs
- Try LangChainHub: community-shared chains & agents
- Deploy with FastAPI + Streamlit
- Combine with Vector DBs like Pinecone or Chroma for RAG (retrieval-augmented generation)
🏁 Summary
LangChain is one of the fastest ways to build intelligent, interactive LLM applications. With just a few lines of code, you can:
- Prompt models effectively
- Chain multiple steps
- Add memory and reasoning
- Use tools and data
Start simple, iterate fast, and unlock the real power of large language models!