LangChain Blog

Rethinking Our Documentation

LangChain has seen some incredible growth in the last year and a half. The Python open-source library is now downloaded over 7 million times per month…

4 min read
LangSmith: Production Monitoring & Automations

Key Links: YouTube Walkthrough, Sign up for LangSmith here. If 2023 was a breakthrough year for LLMs, then 2024 is shaping up to be the…

6 min read
LangFriend: a Journal with Long-Term Memory

One of the concepts we are most interested in at LangChain is memory. Whenever we are interested in a concept, we like to build an…

6 min read
Open Source Extraction Service

Earlier this month we announced our most recent OSS use-case accelerant: a service for extracting structured data from unstructured sources, such as text and PDF…

7 min read
Using Feedback to Improve Your Application: Self Learning GPTs

We built and hosted a simple demo app to show how applications can learn and improve from feedback over time. The app is called…

4 min read
LangChain Integrates NVIDIA NIM for GPU-optimized LLM Inference in RAG

Roughly a year and a half ago, OpenAI launched ChatGPT and the generative AI era really kicked off. Since then we’ve seen rapid growth…

By LangChain 4 min read
Enhancing RAG-based application accuracy by constructing and leveraging knowledge graphs

A practical guide to constructing and retrieving information from knowledge graphs in RAG applications with Neo4j and LangChain. Editor's Note: the following is…

Partner Post 7 min read
Benchmarking Query Analysis in High Cardinality Situations

Several key use cases for LLMs involve returning data in a structured format. Extraction is one such use case - we recently highlighted this with…

6 min read
Multi Needle in a Haystack

Key Links: Video, Code. Overview: Interest in long context LLMs is surging as context windows expand to 1M tokens. One of the most popular and…

6 min read
Iterating Towards LLM Reliability with Evaluation Driven Development

Editor's Note: the following is a guest blog post from Devin Stein, CEO of Dosu. Dosu is an engineering teammate that helps…

7 min read
Use Case Accelerant: Extraction Service

Today we’re excited to announce our newest OSS use-case accelerant: an extraction service. LLMs are a powerful tool for extracting structured data from unstructured…

By LangChain 7 min read
LangGraph for Code Generation

Key Links: LangGraph cookbook, Video. Motivation: Code generation and analysis are two of the most important applications of LLMs, as shown by the ubiquity of products…

4 min read

