Detailed Introduction: LangChain Few-Shot Prompt Templates (Part 1)

Published 2026/1/25 14:53:39 · Source: https://www.cnblogs.com/gccbuaa/p/19529703

https://python.langchain.com.cn/docs/modules/model_io/prompts/prompt_templates/few_shot_examples

I. First Clarify the Core Objective of the Original Text (Use Case)

The original text defines the task at the very beginning: configure few-shot examples for “question-answering with search”. In simple terms, this means enabling the LLM to first determine “whether an intermediate follow-up question is needed” when faced with a problem, then derive the final answer step by step (e.g., when asked “Who lived longer?”, first follow up with “How old was each person when they died?” before comparing).
All code and steps are designed to achieve this task.

II. Part 1: Use the Example Set Directly (First Major Section of the Original Text)

The first part of the original text shows the basic approach: include the full example set in the prompt by passing it straight to the few-shot template, with no example selector. It is divided into 3 specific steps, with each step corresponding to the original code.

Step 1: Create a Few-Shot Example Set (Core Code from the Original Text)

The original text states: “Each example is a dictionary containing input variables”. Here, the input variables are question (the question) and answer (the answer with follow-up steps). The code is copied exactly from the original text:

# Original code: Few-shot example list (4 question-answering examples)
from langchain.prompts.few_shot import FewShotPromptTemplate
from langchain.prompts.prompt import PromptTemplate

examples = [
    {
        "question": "Who lived longer, Muhammad Ali or Alan Turing?",
        "answer": """
Are follow up questions needed here: Yes.
Follow up: How old was Muhammad Ali when he died?
Intermediate answer: Muhammad Ali was 74 years old when he died.
Follow up: How old was Alan Turing when he died?
Intermediate answer: Alan Turing was 41 years old when he died.
So the final answer is: Muhammad Ali
""",
    },
    {
        "question": "When was the founder of craigslist born?",
        "answer": """
Are follow up questions needed here: Yes.
Follow up: Who was the founder of craigslist?
Intermediate answer: Craigslist was founded by Craig Newmark.
Follow up: When was Craig Newmark born?
Intermediate answer: Craig Newmark was born on December 6, 1952.
So the final answer is: December 6, 1952
""",
    },
    {
        "question": "Who was the maternal grandfather of George Washington?",
        "answer": """
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball
""",
    },
    {
        "question": "Are both the directors of Jaws and Casino Royale from the same country?",
        "answer": """
Are follow up questions needed here: Yes.
Follow up: Who is the director of Jaws?
Intermediate Answer: The director of Jaws is Steven Spielberg.
Follow up: Where is Steven Spielberg from?
Intermediate Answer: The United States.
Follow up: Who is the director of Casino Royale?
Intermediate Answer: The director of Casino Royale is Martin Campbell.
Follow up: Where is Martin Campbell from?
Intermediate Answer: New Zealand.
So the final answer is: No
""",
    },
]
  • Key note from the original text: The question and answer in each dictionary are “fixed input variable names”. The subsequent template will use these names to retrieve data, so they cannot be changed arbitrarily.
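To see why renaming the keys breaks things, here is a plain-Python sketch (not LangChain itself) of the f-string-style substitution that PromptTemplate performs; the variable names template, good, and bad are made up for illustration:

```python
# str.format fills {question} and {answer} from keyword arguments, so the
# dictionary keys must match the placeholder names exactly.
template = "Question: {question}\n{answer}"

good = {"question": "Who lived longer?", "answer": "Muhammad Ali"}
print(template.format(**good))  # works: keys match the placeholders

bad = {"q": "Who lived longer?", "a": "Muhammad Ali"}  # renamed keys
try:
    template.format(**bad)
except KeyError as e:
    print(f"KeyError: {e}")  # the template cannot find 'question'
```

The same mismatch inside a real PromptTemplate surfaces as a similar error at format time, which is why the original text warns against changing the names arbitrarily.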

Step 2: Create a “Formatter for Individual Examples” (Called example_prompt in the Original Text)

The original text explains that this step is to “define how a single example should be displayed”—converting each dictionary (containing question + answer) in examples into text in a fixed format. The code is copied exactly from the original text:

# Original code: Formatter template for individual examples
example_prompt = PromptTemplate(
    input_variables=["question", "answer"],  # Must match the keys in the example dictionaries
    template="Question: {question}\n{answer}"  # Display format for a single example: question first, then answer
)

# Original test: Print the formatted result of the first example
print(example_prompt.format(**examples[0]))
  • Expected output from the original text (what you will see):
Question: Who lived longer, Muhammad Ali or Alan Turing?

Are follow up questions needed here: Yes.
Follow up: How old was Muhammad Ali when he died?
Intermediate answer: Muhammad Ali was 74 years old when he died.
Follow up: How old was Alan Turing when he died?
Intermediate answer: Alan Turing was 41 years old when he died.
So the final answer is: Muhammad Ali
  • Explanation of a difficult point: **examples[0] is “dictionary unpacking”—it automatically passes question and answer from examples[0] to the input_variables of example_prompt. There is no need to manually write question=examples[0]["question"], answer=examples[0]["answer"].
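A plain-Python sketch of that unpacking, using an ordinary function in place of example_prompt (the names fmt and example are invented for illustration):

```python
# ** "dictionary unpacking": a dict's entries become keyword arguments.
def fmt(question, answer):
    return f"Question: {question}\n{answer}"

example = {"question": "Who lived longer?", "answer": "Muhammad Ali"}

# These two calls are equivalent:
a = fmt(**example)                                               # unpacked
b = fmt(question=example["question"], answer=example["answer"])  # by hand
print(a == b)  # True
```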

Step 3: Create a Few-Shot Prompt Template (FewShotPromptTemplate)

The original text states that this step “combines the example set, formatter, and final question”. The code is copied exactly from the original text:

# Original code: Create a complete few-shot prompt template
prompt = FewShotPromptTemplate(
    examples=examples,  # All examples prepared in Step 1
    example_prompt=example_prompt,  # Formatter template for individual examples defined in Step 2
    suffix="Question: {input}",  # The "final question" after the examples ({input} is the question the user passes in)
    input_variables=["input"]  # Tell the template: the parameter the user needs to pass is "input" (i.e., the final question)
)

# Original test: Pass the final question and generate the complete prompt
print(prompt.format(input="Who was the father of Mary Ball Washington?"))
  • Expected output from the original text (what you will see):
    The template will automatically format all examples according to example_prompt, then append the question you passed:
Question: Who lived longer, Muhammad Ali or Alan Turing?

Are follow up questions needed here: Yes.
Follow up: How old was Muhammad Ali when he died?
Intermediate answer: Muhammad Ali was 74 years old when he died.
Follow up: How old was Alan Turing when he died?
Intermediate answer: Alan Turing was 41 years old when he died.
So the final answer is: Muhammad Ali

Question: When was the founder of craigslist born?

Are follow up questions needed here: Yes.
Follow up: Who was the founder of craigslist?
Intermediate answer: Craigslist was founded by Craig Newmark.
Follow up: When was Craig Newmark born?
Intermediate answer: Craig Newmark was born on December 6, 1952.
So the final answer is: December 6, 1952

Question: Who was the maternal grandfather of George Washington?

Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball

Question: Are both the directors of Jaws and Casino Royale from the same country?

Are follow up questions needed here: Yes.
Follow up: Who is the director of Jaws?
Intermediate Answer: The director of Jaws is Steven Spielberg.
Follow up: Where is Steven Spielberg from?
Intermediate Answer: The United States.
Follow up: Who is the director of Casino Royale?
Intermediate Answer: The director of Casino Royale is Martin Campbell.
Follow up: Where is Martin Campbell from?
Intermediate Answer: New Zealand.
So the final answer is: No

Question: Who was the father of Mary Ball Washington?
  • Key note from the original text: suffix refers to the “suffix of the examples”—i.e., the question to be asked after all examples are displayed. input_variables=["input"] means you must pass the input parameter (your question) when calling format.
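To make the assembly concrete, here is a rough, framework-free sketch of what FewShotPromptTemplate.format does with these settings. The function name few_shot_format and the trimmed two-example list are invented for illustration; the real class also supports a prefix and a configurable example separator:

```python
# Sketch: format each example with the example template, join the results,
# then append the suffix with the user's question substituted for {input}.
def few_shot_format(examples, example_template, suffix, input_text):
    formatted = [example_template.format(**ex) for ex in examples]
    return "\n\n".join(formatted) + "\n\n" + suffix.format(input=input_text)

examples = [
    {"question": "Who lived longer?", "answer": "Muhammad Ali"},
    {"question": "When was Craig Newmark born?", "answer": "December 6, 1952"},
]
prompt_text = few_shot_format(
    examples,
    example_template="Question: {question}\n{answer}",
    suffix="Question: {input}",
    input_text="Who was the father of Mary Ball Washington?",
)
print(prompt_text)
```

The output is exactly the shape seen above: every formatted example in order, then the user's final question at the end.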

III. Part 2: Use an Example Selector (Second Major Section of the Original Text)

The original text describes this as an “advanced usage”: instead of including all examples in the prompt, it “selects the most similar examples to the user’s question” (reducing prompt length and helping the LLM focus). It is divided into 3 steps, with each step corresponding exactly to the original code.

Step 1: Create an Example Selector (SemanticSimilarityExampleSelector)

The original text uses “semantic similarity” to select examples (e.g., if the user asks “Who was Mary Ball Washington’s father?”, it selects the example about “George Washington’s maternal grandfather” since they are most relevant). The code is copied exactly from the original text:

# Original code: Import required tools (vector store, embedding model, example selector)
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
# Original code: Create the example selector
example_selector = SemanticSimilarityExampleSelector.from_examples(
    examples,  # Still the example set from Step 1 (all selectable examples)
    OpenAIEmbeddings(),  # Use OpenAI's embedding model (converts text into "numbers for similarity calculation")
    Chroma,  # Use the Chroma vector store (stores embedded numbers for easy similarity search)
    k=1  # Select only the "1 most similar example"
)
  • Explanation of difficult points:
    • “Embedding model (OpenAIEmbeddings)”: Converts text into a sequence of numbers (e.g., “apple” becomes [0.1, 0.2, …]). The more similar the numbers, the closer the semantic meaning of the text.
    • “Chroma”: A tool specifically designed to store these numbers, enabling quick identification of the “example numbers” most similar to the “numbers of the user’s question”.
    • “k=1”: Selects only the 1 most similar example (as used in the original text). You can also change k=2 to select 2 examples.
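A toy sketch of the "similar numbers mean similar meaning" idea, using made-up 3-dimensional vectors (real embedding models such as OpenAIEmbeddings return vectors with hundreds or thousands of dimensions, but the comparison works the same way):

```python
import math

# Cosine similarity: how closely two vectors point in the same direction.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

question_vec = [0.9, 0.1, 0.0]  # hypothetical embedding of the user's question
example_a    = [0.8, 0.2, 0.1]  # hypothetical embedding of a related example
example_b    = [0.1, 0.0, 0.9]  # hypothetical embedding of an unrelated one

# A selector keeps the k examples with the highest similarity score:
scores = {"example_a": cosine_similarity(question_vec, example_a),
          "example_b": cosine_similarity(question_vec, example_b)}
best = max(scores, key=scores.get)
print(best)  # example_a, whose vector is closest to the question's
```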

Step 2: Test the Example Selector (Select the Most Similar Example)

The original text uses the question “Who was the father of Mary Ball Washington?” to test which example is selected. The code is copied exactly from the original text:

# Original code: Define the test question
question = "Who was the father of Mary Ball Washington?"

# Original code: Select the most similar example based on the question
selected_examples = example_selector.select_examples({"question": question})

# Original code: Print the selected example
print(f"Examples most similar to the input: {question}")
for example in selected_examples:
    print("\n")
    for k, v in example.items():
        print(f"{k}: {v}")
  • Expected output from the original text (what you will see):
    The example about “George Washington’s maternal grandfather” will be selected, as both questions relate to “Mary Ball Washington’s relatives”:
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
Examples most similar to the input: Who was the father of Mary Ball Washington?
question: Who was the maternal grandfather of George Washington?
answer:
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball

Step 3: Create a Few-Shot Template with the Example Selector

The original text states: “Replace the previous examples parameter with example_selector”—all other settings remain unchanged. The code is copied exactly from the original text:

# Original code: Create a few-shot template (using the example selector instead of all examples)
prompt = FewShotPromptTemplate(
    example_selector=example_selector,  # Example selector created in Step 1
    example_prompt=example_prompt,  # Still the formatter template for individual examples from Step 2
    suffix="Question: {input}",  # Still the format for the final question
    input_variables=["input"]  # The user still passes the "input" parameter
)

# Original test: Pass the question and generate the complete prompt
print(prompt.format(input="Who was the father of Mary Ball Washington?"))
  • Expected output from the original text (what you will see):
    This time, only “the 1 most similar example” is displayed (instead of all 4), with the question appended at the end:
Question: Who was the maternal grandfather of George Washington?

Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball

Question: Who was the father of Mary Ball Washington?
  • Key note from the original text: The advantage of this approach is that “the prompt is shorter”—the LLM does not need to process irrelevant examples, leading to more accurate and faster responses.
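The whole selector-based flow can be sketched without any framework, using word overlap as a crude stand-in for embedding similarity (the helper word_overlap and the trimmed two-example list are invented for illustration; a real selector uses embeddings, not word counts):

```python
# Crude similarity: how many words the two questions share.
def word_overlap(a, b):
    return len(set(a.lower().split()) & set(b.lower().split()))

examples = [
    {"question": "Who lived longer, Muhammad Ali or Alan Turing?",
     "answer": "So the final answer is: Muhammad Ali"},
    {"question": "Who was the maternal grandfather of George Washington?",
     "answer": "So the final answer is: Joseph Ball"},
]
user_question = "Who was the father of Mary Ball Washington?"

# Keep only the k=1 most similar example, as the selector does.
selected = sorted(examples,
                  key=lambda ex: word_overlap(ex["question"], user_question),
                  reverse=True)[:1]

# Build the shorter prompt: the selected example(s), then the user's question.
prompt = "\n\n".join(f"Question: {ex['question']}\n{ex['answer']}" for ex in selected)
prompt += f"\n\nQuestion: {user_question}"
print(prompt)
```

Only the George Washington example survives the selection, so the final prompt is much shorter than the four-example version from Part 1.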

IV. Core Summary of the Original Text (Extracted Exclusively from the Original Text)

  1. Two ways to use few-shot prompt templates:
    • Direct use of examples: Include all examples in the prompt (suitable for scenarios with few examples).
    • Use of example_selector: Select the most relevant examples based on semantic similarity (suitable for scenarios with many examples).
  2. 3 Essential Core Components:
    • Example set (examples): A list of dictionaries, where each dictionary contains keys corresponding to input_variables.
    • Individual example formatter template (example_prompt): Defines the display format of a single example.
    • Few-shot template (FewShotPromptTemplate): Combines the example set, formatter template, and final question.
  3. Core Tools for the Example Selector:
    • SemanticSimilarityExampleSelector: Selects examples based on semantics.
    • OpenAIEmbeddings: Converts text into numbers for similarity calculation.
    • Chroma: Stores numbers and performs similarity searches.

All content is derived from the original text without adding any extra cases or code. If you have questions about any line of code, we can break it down further!
