Debugging parlant's LLM configuration, and ending up hand-writing a g4f module adapter - Tutorial

Source: https://www.cnblogs.com/ljbguanli/p/19124790 (2025/10/3 17:35:20)


For parlant installation, see: https://skywalk.blog.csdn.net/article/details/152094280

For the hand-written g4f module, see: https://skywalk.blog.csdn.net/article/details/152253434

Honestly, parlant is the hardest project I've encountered so far when it comes to configuring an LLM. It doesn't expose a configuration file, so when you want to switch models you don't even know where to write the settings. Partly that's because I didn't read the manual carefully enough, but the manual also doesn't cover unofficial LLM providers. The official docs do explain how to write your own NLP service, but it's so involved that modifying one of the official adapter .py files turned out to be easier.

On top of that, Trae has been acting up these past few days and was no help at all.

Also, g4f's gpt-4o model has been having problems lately, which added to the debugging difficulty.

First, let's look at parlant's manual

Environment Variables

Configure the Ollama service using these environment variables:

# Ollama server URL (default: http://localhost:11434)
export OLLAMA_BASE_URL="http://localhost:11434"
# Model size to use (default: 4b)
# Options: gemma3:1b, gemma3:4b, llama3.1:8b, gemma3:12b, gemma3:27b, llama3.1:70b, llama3.1:405b
export OLLAMA_MODEL="gemma3:4b"
# Embedding model (default: nomic-embed-text)
# Options: nomic-embed-text, mxbai-embed-large
export OLLAMA_EMBEDDING_MODEL="nomic-embed-text"
# API timeout in seconds (default: 300)
export OLLAMA_API_TIMEOUT="300"

Example Configuration

# For development (fast, good balance)
export OLLAMA_MODEL="gemma3:4b"
export OLLAMA_EMBEDDING_MODEL="nomic-embed-text"
export OLLAMA_API_TIMEOUT="180"
# higher accuracy cloud
export OLLAMA_MODEL="gemma3:4b"
export OLLAMA_EMBEDDING_MODEL="nomic-embed-text"
export OLLAMA_API_TIMEOUT="600"

Recommended Models

⚠️ IMPORTANT: Pull these models before running Parlant to avoid API timeouts during first use:

Text Generation Models

# Recommended for most use cases (good balance of speed/accuracy)
ollama pull gemma3:4b-it-qat
# Fast but may struggle with complex schemas
ollama pull gemma3:1b
# embedding model required for creating embeddings
ollama pull nomic-embed-text

Large Models (Cloud/High-end Hardware Only)

# Better reasoning capabilities
ollama pull llama3.1:8b
# High accuracy for complex tasks
ollama pull gemma3:12b
# Very high accuracy (requires more resources)
ollama pull gemma3:27b-it-qat
# ⚠️ WARNING: Requires 40GB+ GPU memory
ollama pull llama3.1:70b
# ⚠️ WARNING: Requires 200GB+ GPU memory (cloud-only)
ollama pull llama3.1:405b

Embedding Models

To use a custom embedding model, set the OLLAMA_EMBEDDING_MODEL environment variable to the required model name. Note that this implementation is tested with nomic-embed-text. ⚠️ IMPORTANT: Support for other embedding models has been added, including a custom embedding model of your own choice. Make sure to set OLLAMA_EMBEDDING_VECTOR_SIZE to a value compatible with your embedding model before starting the server; this was tested with snowflake-arctic-embed at a vector size of 1024. It is NOT necessary to set OLLAMA_EMBEDDING_VECTOR_SIZE if you are using the supported nomic-embed-text, mxbai-embed-large, or bge-m3 models; the vector size defaults to 768, 1024, and 1024 respectively. (A Python sketch of the custom-model case follows the pull command below.)

# Alternative embedding model (512 dimensions)
ollama pull mxbai-embed-large:latest
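
As a sketch of the custom-embedding case above (model name and vector size taken from the manual's snowflake-arctic-embed example; these must be set before parlant starts):

import os

# A custom embedding model outside the supported trio needs an explicit
# vector size, per the manual; both must be set before the server starts.
os.environ["OLLAMA_EMBEDDING_MODEL"] = "snowflake-arctic-embed"
os.environ["OLLAMA_EMBEDDING_VECTOR_SIZE"] = "1024"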

Configuration

export PARLANT_MODEL_URL="http://192.168.1.5:1337/v1"
export PARLANT_MODEL_API_KEY="key sample"
export PARLANT_MODEL_NAME="gpt-4o"
set PARLANT_MODEL_URL="http://192.168.1.5:1337/v1"
set PARLANT_MODEL_API_KEY="key sample"
set PARLANT_MODEL_NAME="gpt-4o"
set PARLANT_MODEL_URL="http://192.168.0.98:1337/v1"
set PARLANT_MODEL_API_KEY="key sample"
set PARLANT_MODEL_NAME="gpt-4o"

No luck.

Let's look at the parlant source code

The configuration inside the Ollama adapter

class OllamaEstimatingTokenizer(EstimatingTokenizer):
    """Simple tokenizer that estimates token count for Ollama models."""

    def __init__(self, model_name: str):
        self.model_name = model_name
        self.encoding = tiktoken.encoding_for_model("gpt-4o-2024-08-06")

    @override
    async def estimate_token_count(self, prompt: str) -> int:
        """Estimate token count using tiktoken"""
        tokens = self.encoding.encode(prompt)
        return int(len(tokens) * 1.15)


class OllamaSchematicGenerator(SchematicGenerator[T]):
    """Schematic generator that uses Ollama models."""

    supported_hints = ["temperature", "max_tokens", "top_p", "top_k", "repeat_penalty", "timeout"]

    def __init__(
        self,
        model_name: str,
        logger: Logger,
        base_url: str = "http://localhost:11434",
        default_timeout: int | str = 300,
    ) -> None:
        self.model_name = model_name
        self.base_url = base_url.rstrip("/")
        self._logger = logger
        self._tokenizer = OllamaEstimatingTokenizer(model_name)
        self._default_timeout = default_timeout
        self._client = ollama.AsyncClient(host=base_url)

    @property
    @override
    def id(self) -> str:
        return f"ollama/{self.model_name}"

    @property
    @override
    def tokenizer(self) -> EstimatingTokenizer:
        return self._tokenizer

    @property
    @override
    def max_tokens(self) -> int:
        if "1b" in self.model_name.lower():
            return 12288
        elif "4b" in self.model_name.lower():
            return 16384
        elif "8b" in self.model_name.lower():
            return 16384
        elif "12b" in self.model_name.lower() or "70b" in self.model_name.lower():
            return 16384
        elif "27b" in self.model_name.lower() or "405b" in self.model_name.lower():
            return 32768
        else:
            return 16384

Here base_url defaults to http://localhost:11434 (base_url: str = "http://localhost:11434"); per the manual, it can be overridden with the OLLAMA_BASE_URL environment variable.
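
So, without touching the adapter code, the documented environment variable should be enough to point the adapter at another machine. A minimal sketch (the host below is a placeholder for your own Ollama server):

import os

# Must be set before parlant constructs the Ollama adapter.
os.environ["OLLAMA_BASE_URL"] = "http://192.168.0.98:11434"  # placeholder host
os.environ["OLLAMA_MODEL"] = "gemma3:4b"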

The related OpenAI code

class OpenAISchematicGenerator(SchematicGenerator[T]):
    supported_openai_params = ["temperature", "logit_bias", "max_tokens"]
    supported_hints = supported_openai_params + ["strict"]
    unsupported_params_by_model: dict[str, list[str]] = {
        "gpt-5": ["temperature"],
    }

    def __init__(
        self,
        model_name: str,
        logger: Logger,
        tokenizer_model_name: str | None = None,
    ) -> None:
        self.model_name = model_name
        self._logger = logger
        self._client = AsyncClient(api_key=os.environ["OPENAI_API_KEY"])
        self._tokenizer = OpenAIEstimatingTokenizer(
            model_name=tokenizer_model_name or self.model_name
        )

deepseek

With DeepSeek's adapter, at least it's clear how the base URL gets set

class DeepSeekEstimatingTokenizer(EstimatingTokenizer):
    def __init__(self, model_name: str) -> None:
        self.model_name = model_name
        self.encoding = tiktoken.encoding_for_model("gpt-4o-2024-08-06")

    @override
    async def estimate_token_count(self, prompt: str) -> int:
        tokens = self.encoding.encode(prompt)
        return len(tokens)


class DeepSeekSchematicGenerator(SchematicGenerator[T]):
    supported_deepseek_params = ["temperature", "logit_bias", "max_tokens"]
    supported_hints = supported_deepseek_params + ["strict"]

    def __init__(
        self,
        model_name: str,
        logger: Logger,
    ) -> None:
        self.model_name = model_name
        self._logger = logger
        self._client = AsyncClient(
            base_url="https://api.deepseek.com",
            api_key=os.environ["DEEPSEEK_API_KEY"],
        )
        self._tokenizer = DeepSeekEstimatingTokenizer(model_name=self.model_name)

The problem is that it, too, uses self.encoding = tiktoken.encoding_for_model("gpt-4o-2024-08-06").

So if I have no access to a GPT model, does that mean I can't use it? (Note also that in the code above base_url is hard-coded to https://api.deepseek.com, so it cannot be redirected through an environment variable.)
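
Actually, tiktoken.encoding_for_model() only maps a model name to a local BPE encoding; it downloads the encoding file once from a public URL and needs no OpenAI account. Assuming standard tiktoken behavior, the same encoding can be obtained without naming a GPT model at all:

import tiktoken

# gpt-4o-family models map to the "o200k_base" encoding, so this matches
# encoding_for_model("gpt-4o-2024-08-06") without the model-name lookup.
# The BPE file is still fetched once on first use and then cached.
encoding = tiktoken.get_encoding("o200k_base")
print(len(encoding.encode("hello world 测试完成")))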

The GLM call

class GLMEmbedder(Embedder):
    supported_arguments = ["dimensions"]

    def __init__(self, model_name: str, logger: Logger) -> None:
        self.model_name = model_name
        self._logger = logger
        self._client = AsyncClient(
            base_url="https://open.bigmodel.cn/api/paas/v4",
            api_key=os.environ["GLM_API_KEY"],
        )
        self._tokenizer = GLMEstimatingTokenizer(model_name=self.model_name)

parlant's manual for calling the Ollama API

See the Ollama manual at docs/adapters/nlp/ollama.md in the Gitee mirror of parlant (the same content as quoted above).


embedding

For that OpenAI embedding/tokenizer problem, you can use this:

import tiktoken


class UniversalTokenizer:
    def __init__(self, encoding_name="cl100k_base"):
        self.encoding = tiktoken.get_encoding(encoding_name)

    def estimate(self, text, ratio=1.1):
        return int(len(self.encoding.encode(text)) * ratio)
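
A quick usage sketch of the class above (cl100k_base ships with tiktoken's public encodings, so no GPT model name is involved):

tok = UniversalTokenizer()
print(tok.estimate("hello world 测试完成"))  # estimate with a 1.1 safety ratio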

Calling the model

import parlant.sdk as p
from parlant.sdk import NLPServices

async with p.Server(nlp_service=NLPServices.ollama) as server:
    agent = await server.create_agent(
        name="Healthcare Agent",
        description="Is empathetic and calming to the patient.",
    )

The plan

Modify the code directly: drop gpt-4o-2024-08-06 and just count characters instead.

class DeepSeekEstimatingTokenizer(EstimatingTokenizer):
    def __init__(self, model_name: str) -> None:
        self.model_name = model_name
        # self.encoding = tiktoken.encoding_for_model("gpt-4o-2024-08-06")

    @override
    async def estimate_token_count(self, prompt: str) -> int:
        # tokens = self.encoding.encode(prompt)
        tokens = prompt
        return len(tokens)
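
Counting characters overestimates tokens for English text (roughly 4 characters per token) while being about right for CJK. If the raw len(prompt) ever starves the token budget, a blended heuristic like this sketch (the divisor is a guess, not a measured value) stays conservative without tiktoken:

def estimate_tokens_by_chars(prompt: str) -> int:
    # ~4 chars/token for English, ~1 for CJK; 2 is a rough compromise.
    return max(1, len(prompt) // 2)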

I get it!

When I changed base_url to my own LLM server, such as 192.168.1.5:1337 or 127.0.0.1:1337, the change also affected gpt-4o-2024-08-06, and then came the error: by pointing base_url at my custom LLM server, I had also redirected gpt-4o-2024-08-06 there, so the server reported that no such model exists.

A side gripe: my pinyin IME's hotkey suddenly stopped working, and I had to click the taskbar with the mouse to switch input methods. When it rains, it pours!

So I just need to use the deepseek or ollama configuration instead; that way the gpt-4o-2024-08-06 tokenizer is left undisturbed.

First, test from inside China whether the gpt-4o-2024-08-06 tokenizer is reachable:

import tiktoken
import time
import asyncio
prompt="hello world 测试完成"
prompt='国内无法用这个模型怎么办? tiktoken.encoding_for_model("gpt-4o-2024-08-06")'
testencoding = tiktoken.encoding_for_model("gpt-4o-2024-08-06")
tokens = testencoding.encode(prompt)
print(tokens, len(tokens))

output:

[48450, 53254, 5615, 41713, 184232, 50182, 4802, 260, 8251, 2488, 154030, 11903, 10928, 568, 70, 555, 12, 19, 78, 12, 1323, 19, 12, 3062, 12, 3218, 1405] 27


A few current problems

I wanted to use deepseek, but found that NLPServices doesn't have it:

from parlant.sdk import NLPServices
dir(NLPServices)

'anthropic',
 'azure',
 'cerebras',
 'gemini',
 'glm',
 'litellm',
 'ollama',
 'openai',
 'qwen',
 'snowflake',
 'together',
 'vertex'

Using ollama, I found it wants its own tokenizer model, so I didn't feel like dealing with Ollama anymore.

The main issue is that once Ollama starts, the machine's load gets heavy, it can only run models of 8 GB or smaller, and the results are weaker than g4f's. Here are my notes on how each service builds its tokenizer:

'anthropic': self._estimating_tokenizer = AnthropicEstimatingTokenizer(self._client, model_name)
'azure': self._tokenizer = AzureEstimatingTokenizer(model_name=self.model_name)
'cerebras': self.encoding = tiktoken.encoding_for_model("gpt-4o-2024-08-06")
'gemini'
'glm': self.encoding = tiktoken.encoding_for_model("gpt-4o-2024-08-06"), with base_url="https://open.bigmodel.cn/api/paas/v4" and api_key=os.environ["GLM_API_KEY"]
'litellm'
'ollama': self.model_name = os.environ.get("OLLAMA_EMBEDDING_MODEL", "nomic-embed-text")
'openai': self._tokenizer = OpenAIEstimatingTokenizer(model_name=tokenizer_model_name or self.model_name)
'qwen'
'snowflake'
'together'
'vertex'

The OpenAI part of the code deserves a close look

class OpenAIEstimatingTokenizer(EstimatingTokenizer):
    def __init__(self, model_name: str) -> None:
        self.model_name = model_name
        self.encoding = tiktoken.encoding_for_model(model_name)

    @override
    async def estimate_token_count(self, prompt: str) -> int:
        tokens = self.encoding.encode(prompt)
        return len(tokens)


class OpenAISchematicGenerator(SchematicGenerator[T]):
    supported_openai_params = ["temperature", "logit_bias", "max_tokens"]
    supported_hints = supported_openai_params + ["strict"]
    unsupported_params_by_model: dict[str, list[str]] = {
        "gpt-5": ["temperature"],
    }

    def __init__(
        self,
        model_name: str,
        logger: Logger,
        tokenizer_model_name: str | None = None,
    ) -> None:
        self.model_name = model_name
        self._logger = logger
        self._client = AsyncClient(api_key=os.environ["OPENAI_API_KEY"])
        self._tokenizer = OpenAIEstimatingTokenizer(
            model_name=tokenizer_model_name or self.model_name
        )
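
Note that AsyncClient here is constructed without an explicit base_url. Assuming standard openai-python behavior (AsyncClient is the AsyncOpenAI class), the client then falls back to the OPENAI_BASE_URL environment variable, which is why setting it in the test scripts below can redirect the traffic. A sketch with a placeholder host:

import os
from openai import AsyncOpenAI  # parlant's AsyncClient import is this class

os.environ["OPENAI_API_KEY"] = "key-sample"                    # placeholder
os.environ["OPENAI_BASE_URL"] = "http://192.168.0.98:1337/v1"  # placeholder host

client = AsyncOpenAI()  # picks up both environment variables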

Testing

Debugging with this:

# Import the necessary libraries
import tiktoken
import time
import asyncio
import os

os.environ["DEEPSEEK_API_KEY"] = "your_custom_api_key"  # custom API key (any value, placeholder only)
os.environ["DEEPSEEK_BASE_URL"] = "http://192.168.0.98:1337/"  # custom LLM API address
os.environ["OLLAMA_API_KEY"] = "your_custom_api_key"  # custom API key (any value, placeholder only)
os.environ["OLLAMA_BASE_URL"] = "http://192.168.0.98:1337/"  # custom LLM API address
os.environ["OLLAMA_MODEL"] = "default"  # custom model name
os.environ["SNOWFLAKE_AUTH_TOKEN"] = "your_custom_api_key"  # custom API key (any value, placeholder only)
os.environ["SNOWFLAKE_CORTEX_BASE_URL"] = "http://192.168.0.98:1337/"
os.environ["SNOWFLAKE_CORTEX_CHAT_MODEL"] = "default"

import parlant.sdk as p
from parlant.sdk import NLPServices


async def main():
    async with p.Server(nlp_service=NLPServices.snowflake) as server:
        agent = await server.create_agent(
            name="Otto Carmen",
            description="You work at a car dealership",
        )

asyncio.run(main())

Neither the ollama nor the deepseek route suited my setup.

The final solution

In the end I decided to hand-write a g4f service file for parlant. After some intense debugging (with Trae acting up and being no help at all), it finally ran.

The process of hand-writing the g4f service code is documented at: https://skywalk.blog.csdn.net/article/details/152253434?spm=1011.2415.3001.5331

The test file test_server.py is written as follows, with the environment variables set through the os module. The ones that actually matter here are the G4F_-prefixed variables such as G4F_API_KEY:

# Import the necessary libraries
import tiktoken
import time
import asyncio
import os

os.environ["DEEPSEEK_API_KEY"] = "your_custom_api_key"  # custom API key (any value, placeholder only)
os.environ["DEEPSEEK_BASE_URL"] = "http://192.168.0.98:1337/"  # custom LLM API address
os.environ["OLLAMA_API_KEY"] = "your_custom_api_key"  # custom API key (any value, placeholder only)
os.environ["OLLAMA_BASE_URL"] = "http://192.168.0.98:1337/"  # custom LLM API address
os.environ["OLLAMA_MODEL"] = "default"  # custom model name
os.environ["SNOWFLAKE_AUTH_TOKEN"] = "your_custom_api_key"  # custom API key (any value, placeholder only)
os.environ["SNOWFLAKE_CORTEX_BASE_URL"] = "http://192.168.0.98:1337/"
os.environ["SNOWFLAKE_CORTEX_CHAT_MODEL"] = "default"
os.environ["G4F_API_KEY"] = "your_custom_api_key"  # custom API key (any value, placeholder only)
os.environ["G4F_BASE_URL"] = "http://192.168.0.98:1337/v1"  # custom LLM API address
os.environ["G4F_MODEL"] = "default"  # custom model name
os.environ["OPENAI_API_KEY"] = "your_custom_api_key"  # custom API key (any value, placeholder only)
os.environ["OPENAI_BASE_URL"] = "http://192.168.0.98:1337/v1"  # custom LLM API address
os.environ["OPENAI_MODEL"] = "default"  # custom model name

import parlant.sdk as p
from parlant.sdk import NLPServices


async def main():
    async with p.Server(nlp_service=NLPServices.g4f) as server:
        agent = await server.create_agent(
            name="Otto Carmen",
            description="You work at a car dealership",
            # model="default"
        )

asyncio.run(main())

Running it looks like this (screenshot omitted):

Debugging

The snowflake service couldn't be found

PS E:\work\parlwork> python .\testdeepseek.py
Traceback (most recent call last):
  File "E:\work\parlwork\testdeepseek.py", line 30, in <module>
    asyncio.run(main())
  File "E:\py312\Lib\asyncio\runners.py", line 195, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "E:\py312\Lib\asyncio\runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\py312\Lib\asyncio\base_events.py", line 691, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "E:\work\parlwork\testdeepseek.py", line 24, in main
    async with p.Server(nlp_service=NLPServices.snowflake) as server:
                                    ^^^^^^^^^^^^^^^^^^^^^

Good grief, how is this one gone too?

>>> from parlant.sdk import NLPServices
>>> dir(NLPServices)
['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getstate__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'anthropic', 'azure', 'cerebras', 'gemini', 'litellm', 'ollama', 'openai', 'together', 'vertex']
>>>

It turned out the local parlant source hadn't been installed as the active package, so the test only works when run from the github\parlant\src directory (or after installing the local checkout, e.g. with pip install -e .), so that Python picks up the modified source tree.

Also, for the hand-written g4f adapter, a corresponding hook has to be added in the sdk.py file:

    # Following the openai pattern, add g4f
    @staticmethod
    def g4f(container: Container) -> NLPService:
        """Creates a G4F NLPService instance using the provided container."""
        from parlant.adapters.nlp.g4f_service import G4FService

        if error := G4FService.verify_environment():
            raise SDKError(error)

        return G4FService(container[Logger])
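
For reference, this hook implies a particular shape for parlant/adapters/nlp/g4f_service.py: a verify_environment() static method that returns an error string (or None when everything is set), and a constructor taking a Logger. The skeleton below is a hypothetical illustration modeled on that contract, not the actual file; see the linked article for the real implementation.

import os


class G4FService:
    @staticmethod
    def verify_environment() -> str | None:
        # Require the G4F_* variables that test_server.py sets above.
        for var in ("G4F_BASE_URL", "G4F_MODEL"):
            if not os.environ.get(var):
                return f"Missing environment variable: {var}"
        return None

    def __init__(self, logger) -> None:
        self._logger = logger
        self.base_url = os.environ["G4F_BASE_URL"]  # e.g. http://192.168.0.98:1337/v1
        self.model_name = os.environ.get("G4F_MODEL", "default")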
