LangChain

Master LangChain with 46 free flashcards. Study using spaced repetition and focus mode for effective learning in AI.

🎓 46 cards ⏱️ ~23 min Intermediate

🎯 What You'll Learn

Preview Questions (12 shown)

What is LangChain?

LangChain is an open-source framework (with first-party Python and JavaScript libraries) for building applications powered by large language models (LLMs). It provides modular components for prompts, chains, agents, memory, and retrieval that compose into complex LLM workflows.

pip install langchain langchain-openai
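The OpenAI integration reads its API key from the environment, so a typical setup (using the standard OPENAI_API_KEY variable) looks like:

```shell
# Make the key available to langchain-openai before running any code.
export OPENAI_API_KEY="sk-..."
```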

What is LCEL (LangChain Expression Language)?

LCEL is a declarative syntax for composing LangChain components using the pipe operator (|). It creates RunnableSequence chains that support streaming, batching, and async out of the box.

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

chain = (
    ChatPromptTemplate.from_template("Explain {topic} simply.")
    | ChatOpenAI(model="gpt-4o")
    | StrOutputParser()
)
result = chain.invoke({"topic": "recursion"})

How do you create a ChatPromptTemplate?

Use ChatPromptTemplate.from_messages() with a list of (role, template) tuples. Variables are wrapped in curly braces.

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful {role}."),
    ("human", "{question}"),
])
messages = prompt.invoke({
    "role": "tutor",
    "question": "What is a linked list?",
})

How do you invoke a ChatModel in LangChain?

Instantiate the model class and call .invoke() with a list of messages or a prompt value. The model returns an AIMessage.

from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

llm = ChatOpenAI(model="gpt-4o", temperature=0)
response = llm.invoke([
    HumanMessage(content="What is LangChain?")
])
print(response.content)

How do you stream responses in LangChain?

Call .stream() instead of .invoke() on any Runnable. It returns an iterator of chunks you can process incrementally.

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")
for chunk in llm.stream("Tell me a joke"):
    print(chunk.content, end="", flush=True)

What is a PromptTemplate in LangChain?

A PromptTemplate formats a single string prompt with variable substitution. Use it for completion-style models.

from langchain_core.prompts import PromptTemplate

template = PromptTemplate.from_template(
    "Summarize this text in {n} words:\n\n{text}"
)
prompt = template.invoke({"n": 50, "text": "LangChain is..."})

What is a FewShotPromptTemplate?

A template that dynamically injects example input/output pairs into the prompt to guide the model via in-context learning.

from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
]
example_prompt = PromptTemplate.from_template(
    "Input: {input}\nOutput: {output}"
)
few_shot = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix="Give the antonym of the input.",
    suffix="Input: {input}\nOutput:",
    input_variables=["input"],
)
print(few_shot.invoke({"input": "big"}).text)

What is MessagesPlaceholder?

MessagesPlaceholder inserts a dynamic list of messages (e.g., chat history) into a ChatPromptTemplate.

from langchain_core.prompts import (
    ChatPromptTemplate,
    MessagesPlaceholder,
)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder("chat_history"),
    ("human", "{input}"),
])

What is StrOutputParser?

StrOutputParser extracts the plain text string from an AIMessage. It's the most common output parser used at the end of LCEL chains.

from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

chain = ChatOpenAI() | StrOutputParser()
text = chain.invoke("Say hello") # returns a plain string

What is JsonOutputParser?

JsonOutputParser parses the LLM's response into a Python dictionary. You can optionally provide a Pydantic model for validation.

from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import ChatPromptTemplate
from pydantic import BaseModel, Field

class Joke(BaseModel):
    setup: str = Field(description="The setup")
    punchline: str = Field(description="The punchline")

parser = JsonOutputParser(pydantic_object=Joke)

# Inject the parser's format instructions into the prompt.
prompt = ChatPromptTemplate.from_template(
    "Tell me a joke.\n{format_instructions}"
).partial(format_instructions=parser.get_format_instructions())

What is PydanticOutputParser?

PydanticOutputParser parses LLM output directly into a Pydantic model instance with validation and error messages.

from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    age: int

parser = PydanticOutputParser(pydantic_object=Person)
prompt = PromptTemplate.from_template(
    "Answer the query.\n{format_instructions}\n{query}"
).partial(format_instructions=parser.get_format_instructions())
llm = ChatOpenAI(model="gpt-4o", temperature=0)

chain = prompt | llm | parser
person = chain.invoke({"query": "Tell me about Alice, age 30"})
print(person.name, person.age)

What is RunnableSequence?

A RunnableSequence chains runnables so the output of one becomes the input of the next. Created automatically when you use the | pipe operator in LCEL.

from langchain_core.runnables import RunnableSequence

# Assuming prompt, llm, and parser are defined as in the earlier cards.
# These are equivalent:
chain = prompt | llm | parser
chain = RunnableSequence(first=prompt, middle=[llm], last=parser)

result = chain.invoke({"topic": "AI"})
