
Chains in LangChain

The Role of Chains in LangChain

Chains in LangChain serve as the fundamental orchestration mechanism for guiding language model behavior through structured, multi-step tasks. Unlike agents, which determine the control flow at runtime, chains are deterministic pipelines where each step is predefined and executed in sequence.
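Conceptually, a chain is fixed function composition: the steps and their order are decided before execution, not at runtime. A minimal pure-Python sketch of that idea (no LangChain required; the step functions are hypothetical stand-ins for LLM calls):

```python
# A deterministic chain: each step is a function, and the
# execution order is fixed up front (hypothetical steps).
def summarize(text: str) -> str:
    return f"Summary of: {text}"

def rephrase(text: str) -> str:
    return f"Rephrased: {text}"

def run_chain(steps, value):
    # Every step runs in sequence; nothing is decided at runtime.
    for step in steps:
        value = step(value)
    return value

print(run_chain([summarize, rephrase], "AI market report"))
# → Rephrased: Summary of: AI market report
```

An agent, by contrast, would choose which of these functions to call (and in what order) based on intermediate model output.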

Chains are ideal for scenarios that require controlled reasoning, sequential formatting, and repeated application of logic—such as summarizing data, formatting outputs, transforming inputs, or chaining multiple LLM calls.

LangChain supports several chain variants:

  • LLMChain: A basic chain that pairs a single prompt with a language model.
  • SimpleSequentialChain: A linear pipeline of chains where the single output of one chain is passed as the single input to the next.
  • SequentialChain: A more advanced version that allows named inputs/outputs and intermediate variable passing.

Creating an LLMChain for Question Answering

The simplest chain in LangChain is the LLMChain, which takes a prompt and an LLM and returns a response. This is appropriate for atomic tasks such as Q&A, summarization, or classification.

from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = ChatOpenAI(model="gpt-4", temperature=0)

prompt = PromptTemplate(
    input_variables=["topic"],
    template="What are the latest trends in {topic}?",
)

chain = LLMChain(llm=llm, prompt=prompt)
response = chain.run({"topic": "machine learning"})
print(response)

In this example, the prompt is dynamically populated based on user input. The LLMChain encapsulates the full round-trip of generation.

Composing Multi-Step Logic with SimpleSequentialChain

SimpleSequentialChain allows the developer to stack multiple LLMChain objects in a pipeline. The output of one chain is fed directly into the next as input. This is useful for cases where reasoning and formatting are performed in stages.

Example: Summarize → Rephrase

from langchain.chains import SimpleSequentialChain

# First chain: summarize input text
summarize_prompt = PromptTemplate(
    input_variables=["input_text"],
    template="Summarize this content: {input_text}",
)
summarize_chain = LLMChain(llm=llm, prompt=summarize_prompt)

# Second chain: rephrase the summary
rephrase_prompt = PromptTemplate(
    input_variables=["text"],
    template="Rephrase this summary professionally: {text}",
)
rephrase_chain = LLMChain(llm=llm, prompt=rephrase_prompt)

# Compose the pipeline
pipeline = SimpleSequentialChain(chains=[summarize_chain, rephrase_chain], verbose=True)

output = pipeline.run("The global AI market is projected to grow significantly over the next decade.")
print(output)

This pattern shows how the output of one reasoning stage can be transformed and refined in subsequent steps.

Advanced Composition with SequentialChain

SequentialChain introduces greater control by allowing the developer to name inputs and outputs. This enables complex multi-step processes where intermediate outputs are preserved and passed selectively.

Use Case: Extract Entities → Summarize → Format Response

from langchain.chains import SequentialChain

# Prompt 1: extract named entities
entity_prompt = PromptTemplate(
    input_variables=["input_text"],
    template="Extract all people, organizations, and locations from the following: {input_text}",
)
entity_chain = LLMChain(llm=llm, prompt=entity_prompt, output_key="entities")

# Prompt 2: summarize entities
summary_prompt = PromptTemplate(
    input_variables=["entities"],
    template="Summarize the following extracted entities: {entities}",
)
summary_chain = LLMChain(llm=llm, prompt=summary_prompt, output_key="summary")

# Prompt 3: format response
format_prompt = PromptTemplate(
    input_variables=["summary"],
    template="Format this into a user-friendly paragraph: {summary}",
)
format_chain = LLMChain(llm=llm, prompt=format_prompt, output_key="final_output")

# Compose the multi-step chain
sequential_chain = SequentialChain(
    chains=[entity_chain, summary_chain, format_chain],
    input_variables=["input_text"],
    output_variables=["final_output"],
    verbose=True,
)

result = sequential_chain({"input_text": "OpenAI and Google DeepMind are competing in developing AGI."})
print(result["final_output"])

This modular design enables debugging, logging, and reusability.
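The named-variable passing that SequentialChain performs can be sketched in plain Python (no LangChain required): each step reads a named input from a shared dictionary and writes its result under its own output key, so every intermediate value stays available for logging and debugging. The step names and functions below are illustrative, not LangChain APIs:

```python
# Sketch of named input/output passing, in the spirit of SequentialChain:
# each step declares an input key and an output key, and all
# intermediate values accumulate in one state dictionary.
steps = [
    ("input_text", "entities", lambda t: f"entities({t})"),
    ("entities", "summary", lambda t: f"summary({t})"),
    ("summary", "final_output", lambda t: f"formatted({t})"),
]

def run_pipeline(steps, inputs):
    state = dict(inputs)
    for in_key, out_key, fn in steps:
        state[out_key] = fn(state[in_key])  # intermediates are preserved
    return state

result = run_pipeline(steps, {"input_text": "OpenAI and Google DeepMind"})
print(result["final_output"])
# → formatted(summary(entities(OpenAI and Google DeepMind)))
```

Because the state dictionary retains every key, a later step (or a log statement) can selectively read any earlier output, which is exactly what SequentialChain's named inputs/outputs enable.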
