Chains in LangChain serve as the fundamental orchestration mechanism for guiding language model behavior through structured, multi-step tasks. Unlike agents, which determine the control flow at runtime, chains are deterministic pipelines where each step is predefined and executed in sequence.
Chains are ideal for scenarios that require controlled reasoning, sequential formatting, and repeated application of logic—such as summarizing data, formatting outputs, transforming inputs, or chaining multiple LLM calls.
LangChain supports several chain variants:
- LLMChain: A basic chain that pairs a single prompt with a language model.
- SimpleSequentialChain: A linear pipeline of multiple chains in which the output of each step is passed as the sole input to the next.
- SequentialChain: A more advanced version that allows named inputs/outputs and intermediate variable passing.
The simplest chain in LangChain is the LLMChain, which takes a prompt and an LLM and returns a response. This is appropriate for atomic tasks such as Q&A, summarization, or classification.
```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Initialize the model
llm = ChatOpenAI(model="gpt-4", temperature=0)

# Prompt with a single dynamic input variable
prompt = PromptTemplate(
    input_variables=["topic"],
    template="What are the latest trends in {topic}?",
)

# Pair the prompt with the model and run it
chain = LLMChain(llm=llm, prompt=prompt)
response = chain.run({"topic": "machine learning"})
print(response)
```
In this example, the prompt is dynamically populated from user input. The LLMChain encapsulates the full round-trip: template rendering, the model call, and extraction of the response.
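Because the prompt and model are bundled together, the same chain object can be reused across many inputs. As a minimal sketch (assuming the classic LangChain 0.x API, where `apply` accepts a list of input dicts and returns one result dict per input):

```python
# Run the same chain over several topics in one call.
# Each result is a dict whose "text" key holds the generation (0.x behavior).
topics = [{"topic": "machine learning"}, {"topic": "quantum computing"}]
results = chain.apply(topics)
for item in results:
    print(item["text"])
```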
### Composing Multi-Step Logic with SimpleSequentialChain

SimpleSequentialChain allows the developer to stack multiple LLMChain objects in a pipeline. The output of one chain is fed directly into the next as input. This is useful for cases where reasoning and formatting are performed in stages.
Example: Summarize → Rephrase
```python
from langchain.chains import SimpleSequentialChain

# First chain: summarize the input text
summarize_prompt = PromptTemplate(
    input_variables=["input_text"],
    template="Summarize this content: {input_text}",
)
summarize_chain = LLMChain(llm=llm, prompt=summarize_prompt)

# Second chain: rephrase the summary
rephrase_prompt = PromptTemplate(
    input_variables=["text"],
    template="Rephrase this summary professionally: {text}",
)
rephrase_chain = LLMChain(llm=llm, prompt=rephrase_prompt)

# Compose the pipeline
pipeline = SimpleSequentialChain(chains=[summarize_chain, rephrase_chain], verbose=True)

output = pipeline.run("The global AI market is projected to grow significantly over the next decade.")
print(output)
```
This pattern shows how output from one reasoning stage can be transformed and refined in subsequent steps.
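Because SimpleSequentialChain passes a single string from step to step, extending the pipeline is just a matter of appending another single-input chain. As a sketch, a hypothetical third stage (a translation step, not part of the original example) could be added without touching the earlier chains:

```python
# Hypothetical third stage: translate the rephrased summary into French.
translate_prompt = PromptTemplate(
    input_variables=["text"],
    template="Translate this text into French: {text}",
)
translate_chain = LLMChain(llm=llm, prompt=translate_prompt)

# The extended pipeline still takes one string in and returns one string out.
extended_pipeline = SimpleSequentialChain(
    chains=[summarize_chain, rephrase_chain, translate_chain],
    verbose=True,
)
```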
### Advanced Composition with SequentialChain

SequentialChain introduces greater control by allowing the developer to name inputs and outputs. This enables complex multi-step processes where intermediate outputs are preserved and passed selectively.
Use Case: Extract Entities → Summarize → Format Response
```python
from langchain.chains import SequentialChain

# Prompt 1: extract named entities
entity_prompt = PromptTemplate(
    input_variables=["input_text"],
    template="Extract all people, organizations, and locations from the following: {input_text}",
)
entity_chain = LLMChain(llm=llm, prompt=entity_prompt, output_key="entities")

# Prompt 2: summarize the extracted entities
summary_prompt = PromptTemplate(
    input_variables=["entities"],
    template="Summarize the following extracted entities: {entities}",
)
summary_chain = LLMChain(llm=llm, prompt=summary_prompt, output_key="summary")

# Prompt 3: format the summary into the final response
format_prompt = PromptTemplate(
    input_variables=["summary"],
    template="Format this into a user-friendly paragraph: {summary}",
)
format_chain = LLMChain(llm=llm, prompt=format_prompt, output_key="final_output")

# Compose the multi-step chain with named inputs and outputs
sequential_chain = SequentialChain(
    chains=[entity_chain, summary_chain, format_chain],
    input_variables=["input_text"],
    output_variables=["final_output"],
    verbose=True,
)

result = sequential_chain({"input_text": "OpenAI and Google DeepMind are competing in developing AGI."})
print(result["final_output"])
```
This modular design enables debugging, logging, and the reuse of individual steps across pipelines.
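For example, because every step writes to a named output_key, intermediate results can be surfaced for inspection simply by listing their keys in output_variables. A minimal sketch, reusing the chains defined above (the name `debug_chain` is illustrative):

```python
# Expose intermediate outputs alongside the final result for debugging.
debug_chain = SequentialChain(
    chains=[entity_chain, summary_chain, format_chain],
    input_variables=["input_text"],
    output_variables=["entities", "summary", "final_output"],
    verbose=True,
)

result = debug_chain({"input_text": "OpenAI and Google DeepMind are competing in developing AGI."})
print(result["entities"])      # raw entity extraction
print(result["summary"])       # intermediate summary
print(result["final_output"])  # formatted response
```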