10 Selected Papers in Multi-Agent and AI Agent Research (2025-2026): From Coordinated Behaviour to Security Attack and Defence

This post highlights 10 notable 2025-2026 papers in multi-agent systems and AI agents, covering multi-agent coordination, heterogeneous target tracking, self-evolving skill acquisition, temporally constrained execution, software-engineering agent optimization, prompt-injection attack and defence, strategic dialogue generation, retrieval-augmented debate, multimodal spatial reasoning, and change-aware defect prediction. Together, these works illustrate the cooperative capability, safety, and practicality of AI agents in complex environments, and offer developers valuable technical insight.

From the 91 articles published between 2025-09-08 and 2026-01-02, we selected 10 strong works to share with readers. The main research directions are: multi-agent option discovery and coordinated behaviour; heterogeneous multi-agent multi-target tracking; self-evolving agent skill acquisition; temporally constrained execution for LLM agents; automatic discovery of hierarchical software-engineering agents; profit-driven prompt-injection attacks in customer service; belief estimation as a probabilistic constraint on dialogue-act generation; retrieval-augmented multi-turn debate generation; spatial reasoning in multimodal large language models; and change-aware software defect prediction.

  1. Discovering Coordinated Joint Options via Inter-Agent Relative Dynamics
  2. Heterogeneous Multi-Agent Multi-Target Tracking using Cellular Sheaves
  3. CASCADE: Cumulative Agentic Skill Creation through Autonomous Development and Evolution
  4. Enforcing Temporal Constraints for LLM Agents
  5. BOAD: Discovering Hierarchical Software Engineering Agents via Bandit Optimization
  6. Language Model Agents Under Attack: A Cross-Model Benchmark of Profit-Seeking Behaviors in Customer Service
  7. BEDA: Belief Estimation as Probabilistic Constraints for Performing Strategic Dialogue Acts
  8. R-Debater: Retrieval-Augmented Debate Generation through Argumentative Memory
  9. VLN-MME: Diagnosing MLLMs as Language-guided Visual Navigation agents
  10. From Illusion to Insight: Change-Aware File-Level Software Defect Prediction Using Agentic AI

1. Discovering Coordinated Joint Options via Inter-Agent Relative Dynamics

Authors: Raul D. Steleac, Mohan Sridharan, David Abel

Affiliations: University of Edinburgh; Google DeepMind

https://arxiv.org/abs/2512.24827

Abstract

Temporally extended actions improve the ability to explore and plan in single-agent settings. In multi-agent settings, the exponential growth of the joint state space with the number of agents makes coordinated behaviours even more valuable. Yet, this same exponential growth renders the design of multi-agent options particularly challenging. Existing multi-agent option discovery methods often sacrifice coordination by producing loosely coupled or fully independent behaviours. Toward addressing these limitations, we describe a novel approach for multi-agent option discovery. Specifically, we propose a joint-state abstraction that compresses the state space while preserving the information necessary to discover strongly coordinated behaviours. Our approach builds on the inductive bias that synchronisation over agent states provides a natural foundation for coordination in the absence of explicit objectives. We first approximate a fictitious state of maximal alignment with the team, the "Fermat" state, and use it to define a measure of "spreadness", capturing team-level misalignment on each individual state dimension. Building on this representation, we then employ a neural graph Laplacian estimator to derive options that capture state synchronisation patterns between agents. We evaluate the resulting options across multiple scenarios in two multi-agent domains, showing that they yield stronger downstream coordination capabilities compared to alternative option discovery methods.

Paper review: This paper tackles option discovery in multi-agent environments, in particular how coordinated behaviours can improve agents' ability to cooperate. The authors propose a novel joint-state abstraction built around a "Fermat state", which compresses the state space while preserving the information needed to discover strongly coordinated behaviours. A neural graph-Laplacian estimator is then used to extract state-synchronisation patterns between agents. Experiments in two multi-agent domains (Level-Based Foraging and Overcooked) show that, compared with existing option-discovery methods, the approach substantially improves downstream coordination.
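
The Fermat-state idea can be made concrete with a small sketch. Below is a minimal NumPy illustration under my own assumptions (Weiszfeld iterations for the geometric median, and mean absolute deviation as the per-dimension spreadness); it is not the paper's implementation.

```python
import numpy as np

def fermat_state(states, iters=100, eps=1e-9):
    """Approximate the geometric median (Fermat point) of the agents'
    states with Weiszfeld iterations. states: (n_agents, dim)."""
    x = states.mean(axis=0)                  # start from the centroid
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(states - x, axis=1), eps)
        w = 1.0 / d
        x = (w[:, None] * states).sum(axis=0) / w.sum()
    return x

def spreadness(states, fermat):
    """Per-dimension team misalignment: mean absolute deviation of the
    agents from the Fermat state, one value per state dimension."""
    return np.abs(states - fermat).mean(axis=0)

# Example: three agents in a 2-D grid world.
agents = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
f = fermat_state(agents)
s = spreadness(agents, f)
```

A fully aligned team has zero spreadness in every dimension; options that drive the spreadness vector down correspond to synchronising behaviours.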

2. Heterogeneous Multi-Agent Multi-Target Tracking using Cellular Sheaves

Authors: Tyler Hanks, Cristian F. Nino, Joana Bou Barcelo, Austin Copeland, Warren Dixon, James Fairbank

Affiliations: University of Florida

https://arxiv.org/abs/2512.24886

Abstract

Multi-agent target tracking in the presence of nonlinear dynamics and agent heterogeneity, where state-space dimensions may differ, is a challenging problem that traditional graph Laplacian methods cannot easily address. This work leverages the framework of cellular sheaves, a mathematical generalization of graph theory, to natively model such heterogeneous systems. While existing coordination sheaf frameworks focus on cooperative problems like consensus, this work extends them to the non-cooperative target-tracking problem. The tracking of multiple, unknown targets is formulated as a harmonic extension problem on a cellular sheaf, accommodating nonlinear dynamics and external disturbances for all agents. A decentralized control law is developed using the sheaf Laplacian, and a corresponding Lyapunov-based stability analysis is provided to guarantee tracking error convergence, with results validated by simulation.

Paper review: This paper proposes a new framework for multi-agent multi-target tracking with heterogeneous nonlinear agent dynamics and state spaces of differing dimension. The motivation is that traditional graph-Laplacian methods struggle with such heterogeneous systems, especially in non-cooperative tracking tasks. Leveraging the mathematical structure of cellular sheaves, the authors formulate multi-target tracking as a harmonic-extension problem, derive a decentralized control law based on the sheaf Laplacian, and provide a Lyapunov stability analysis that guarantees convergence of the tracking error. Simulation experiments validate the effectiveness of the proposed method.
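
To give a feel for the sheaf machinery, here is a minimal sketch of a sheaf Laplacian on a single edge with heterogeneous stalk dimensions. The restriction maps and dimensions are invented for illustration; the paper's controller and harmonic-extension formulation are far richer.

```python
import numpy as np

# A tiny cellular sheaf on one edge u--v with heterogeneous stalks:
# agent u's state lives in R^3, agent v's in R^2, the edge stalk is R^2.
# Restriction maps project each agent's state into the shared edge space.
F_u = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])            # R^3 -> R^2 (drop u's 3rd coord)
F_v = np.eye(2)                              # R^2 -> R^2

# Coboundary: (delta x)_e = F_u x_u - F_v x_v; sheaf Laplacian L = delta^T delta.
delta = np.hstack([F_u, -F_v])               # shape (2, 5), acts on [x_u; x_v]
L = delta.T @ delta                          # (5, 5) sheaf Laplacian

def disagreement(x_u, x_v):
    """Energy x^T L x: zero exactly when u and v agree in the edge stalk."""
    x = np.concatenate([x_u, x_v])
    return float(x @ L @ x)
```

The point of the construction is that agents with different state dimensions can still be compared through their images in the edge stalk, which a plain graph Laplacian cannot express.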

3. CASCADE: Cumulative Agentic Skill Creation through Autonomous Development and Evolution

Authors: Xu Huang, Junwu Chen, Yuxing Fei, Zhuohan Li, Philippe Schwaller, Gerbrand Ceder

Affiliations: University of California, Berkeley; Lawrence Berkeley National Laboratory; École Polytechnique Fédérale de Lausanne

https://arxiv.org/abs/2512.23880

Abstract

Large language model (LLM) agents currently depend on predefined tools or brittle tool generation, constraining their capability and adaptability to complex scientific tasks. We introduce CASCADE, a self-evolving agentic framework representing an early instantiation of the transition from “LLM + tool use” to “LLM + skill acquisition”. CASCADE enables agents to master complex external tools and codify knowledge through two meta-skills: continuous learning via web search and code extraction, and self-reflection via introspection and knowledge graph exploration, among others. We evaluate CASCADE on SciSkillBench, a benchmark of 116 materials science and chemistry research tasks. CASCADE achieves a 93.3% success rate using GPT-5, compared to 35.4% without evolution mechanisms. We further demonstrate real-world applications in computational analysis, autonomous laboratory experiments, and selective reproduction of published papers. Along with human-agent collaboration and memory consolidation, CASCADE accumulates executable skills that can be shared across agents and scientists, moving toward scalable AI-assisted scientific research.

Paper review: This paper presents CASCADE, a framework that gives large language model (LLM) agents self-evolving capabilities for scientific tasks, moving beyond the conventional "LLM + tool use" paradigm. Through meta-skills such as continuous learning and self-reflection, CASCADE lets agents autonomously master complex tools and accumulate executable skills. On the SciSkillBench benchmark it reaches a 93.3% success rate with GPT-5, versus 35.4% without the evolution mechanisms. Real-world demonstrations in computational analysis and laboratory automation further showcase its potential as an AI research assistant.

4. Enforcing Temporal Constraints for LLM Agents

Authors: Adharsh Kamath, Sishen Zhang, Calvin Xu, Shubham Ugare, Gagandeep Singh, Sasa Misailovic

Affiliations: University of Illinois at Urbana-Champaign; Meta

https://arxiv.org/abs/2512.23738

Abstract

LLM-based agents are deployed in safety-critical applications, yet current guardrail systems fail to prevent violations of temporal safety policies, requirements that govern the ordering and sequencing of agent actions. For instance, agents may access sensitive data before authenticating users or process refunds to unauthorized payment methods, violations that require reasoning about sequences of action rather than an individual action. Existing guardrails rely on imprecise natural language instructions or post-hoc monitoring, and provide no formal guarantees that agents will satisfy temporal constraints. We present Agent-C, a novel framework that provides run-time guarantees ensuring LLM agents adhere to formal temporal safety properties. Agent-C introduces a domain-specific language for expressing temporal properties (e.g., authenticate before accessing data), translates specifications to first-order logic, and uses SMT solving to detect non-compliant agent actions during token generation. When the LLM attempts to generate a non-compliant tool call, Agent-C leverages constrained generation techniques to ensure that every action generated by the LLM complies with the specification, and to generate a compliant alternative to a non-compliant agent action. We evaluate Agent-C across two real-world applications: retail customer service and airline ticket reservation system, and multiple language models (open and closed-source). Our results demonstrate that Agent-C achieves perfect safety (100% conformance, 0% harm), while improving task utility compared to state-of-the-art guardrails and unrestricted agents. On SoTA closed-source models, Agent-C improves conformance (77.4% to 100% for Claude Sonnet 4.5 and 83.7% to 100% for GPT-5), while simultaneously increasing utility (71.8% to 75.2% and 66.1% to 70.6%, respectively), representing a new SoTA frontier for reliable agentic reasoning.

Paper review: This paper introduces Agent-C, a framework that guarantees at run time that LLM-based agents obey formal temporal safety constraints. The motivation is that existing guardrails cannot reliably prevent agents from violating ordering-sensitive safety policies during execution. Agent-C provides a domain-specific language for expressing temporal properties, translates specifications into first-order logic, and uses an SMT solver to detect non-compliant actions during token generation. Experiments show Agent-C achieves 100% conformance in real-world applications while also improving task utility over state-of-the-art guardrails, demonstrating gains in both safety and usefulness.
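
The ordering check at the heart of such a guardrail can be sketched as a toy run-time monitor. This is a simplification under my own assumptions (a plain precedence table and hypothetical tool names); Agent-C itself compiles a DSL to first-order logic and enforces it with SMT solving plus constrained decoding, which this sketch does not attempt.

```python
# Toy run-time monitor for an ordering policy such as "authenticate before
# accessing data". Tool names and the precedence table are hypothetical.
class TemporalGuard:
    def __init__(self, precedence):
        # precedence: {action: set of actions that must have occurred first}
        self.precedence = precedence
        self.history = []

    def check(self, action):
        """True iff `action` is permitted given the actions taken so far."""
        return self.precedence.get(action, set()).issubset(self.history)

    def execute(self, action):
        """Record the action, or refuse it if it violates the policy."""
        if not self.check(action):
            raise PermissionError(f"{action!r} violates the temporal policy")
        self.history.append(action)

guard = TemporalGuard({"access_data": {"authenticate"}})
```

A monitor like this can only reject bad actions after the fact; Agent-C's contribution is steering generation so that only compliant tool calls are produced in the first place.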

5. BOAD: Discovering Hierarchical Software Engineering Agents via Bandit Optimization

Authors: Iris Xu, Guangtao Zeng, Zexue He, Charles Jin, Aldo Pareja, Dan Gutfreund, Chuang Gan, Zhang-Wei Hong

Affiliations: Massachusetts Institute of Technology; Stanford University; UMass Amherst; Independent researcher

https://arxiv.org/abs/2512.23631

Abstract

Large language models (LLMs) have shown strong reasoning and coding capabilities, yet they struggle to generalize to real-world software engineering (SWE) problems that are long-horizon and out of distribution. Existing systems often rely on a single agent to handle the entire workflow-interpreting issues, navigating large codebases, and implementing fixes-within one reasoning chain. Such monolithic designs force the model to retain irrelevant context, leading to spurious correlations and poor generalization. Motivated by how human engineers decompose complex problems, we propose structuring SWE agents as orchestrators coordinating specialized sub-agents for sub-tasks such as localization, editing, and validation. The challenge lies in discovering effective hierarchies automatically: as the number of sub-agents grows, the search space becomes combinatorial, and it is difficult to attribute credit to individual sub-agents within a team. We address these challenges by formulating hierarchy discovery as a multi-armed bandit (MAB) problem, where each arm represents a candidate sub-agent and the reward measures its helpfulness when collaborating with others. This framework, termed Bandit Optimization for Agent Design (BOAD), enables efficient exploration of sub-agent designs under limited evaluation budgets. On SWE-bench-Verified, BOAD outperforms single-agent and manually designed multi-agent systems. On SWE-bench-Live, featuring more recent and out-of-distribution issues, our 36B system ranks second on the leaderboard at the time of evaluation, surpassing larger models such as GPT-4 and Claude. These results demonstrate that automatically discovered hierarchical multi-agent systems significantly improve generalization on challenging long-horizon SWE tasks. Code is available at https://github.com/iamxjy/BOAD-SWE-Agent.

Paper review: This paper proposes BOAD, a framework that formulates the design of hierarchical multi-agent systems for software engineering (SWE) as a multi-armed bandit problem, automatically discovering sub-agent hierarchies to improve the generalization of large language models on long-horizon, out-of-distribution issues. The motivation is that monolithic single-agent systems struggle with complex SWE problems, so the authors decompose the workflow into specialized sub-agents that collaborate on sub-tasks. On the SWE-bench-Verified and SWE-bench-Live benchmarks, BOAD outperforms both single-agent systems and manually designed multi-agent systems, showing that automatically discovered hierarchical multi-agent systems substantially improve generalization on challenging long-horizon SWE tasks.
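
The bandit view of sub-agent discovery can be sketched with a standard UCB1 rule, where each arm is a candidate sub-agent and the reward measures its helpfulness. The candidate names and reward model below are hypothetical; this is not BOAD's actual search procedure.

```python
import math
import random

def ucb_select(counts, rewards, c=1.4):
    """Pick the arm (candidate sub-agent) with the highest UCB1 score.
    counts: pulls per arm; rewards: cumulative reward per arm."""
    total = sum(counts.values())
    best, best_score = None, float("-inf")
    for arm in counts:
        if counts[arm] == 0:
            return arm                       # try every candidate once first
        score = rewards[arm] / counts[arm] + c * math.sqrt(math.log(total) / counts[arm])
        if score > best_score:
            best, best_score = arm, score
    return best

# Hypothetical sub-agent candidates with unknown true helpfulness.
true_help = {"localizer_v1": 0.3, "localizer_v2": 0.7, "editor_v1": 0.5}
counts = {a: 0 for a in true_help}
rewards = {a: 0.0 for a in true_help}

random.seed(0)
for _ in range(500):                         # limited evaluation budget
    arm = ucb_select(counts, rewards)
    reward = 1.0 if random.random() < true_help[arm] else 0.0
    counts[arm] += 1
    rewards[arm] += reward
```

The exploration bonus shrinks as an arm is evaluated more often, which is what lets the search concentrate its limited evaluation budget on the most helpful sub-agent designs.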

6. Language Model Agents Under Attack: A Cross-Model Benchmark of Profit-Seeking Behaviors in Customer Service

Authors: Jingyu Zhang

Affiliations: University of Washington

https://arxiv.org/abs/2512.24415

Abstract

Customer-service LLM agents increasingly make policy-bound decisions (refunds, rebooking, billing disputes), but the same "helpful" interaction style can be exploited: a small fraction of users can induce unauthorized concessions, shifting costs to others and eroding trust in agentic workflows. We present a cross-domain benchmark of profit-seeking direct prompt injection in customer-service interactions, spanning 10 service domains and 100 realistic attack scripts grouped into five technique families. Across five widely used models under a unified rubric with uncertainty reporting, attacks are highly domain-dependent (airline support is most exploitable) and technique-dependent (payload splitting is most consistently effective). We release data and evaluation code to support reproducible auditing and to inform the design of oversight and recovery workflows for trustworthy, human-centered agent interfaces.

Paper review: This paper studies how customer-service LLM agents are affected by profit-driven prompt-injection attacks. As these agents become more common in policy-bound decision making, their "helpful" interaction style can be abused by malicious users. The authors build a cross-domain benchmark spanning 10 service domains and 100 realistic attack scripts, and evaluate five families of prompt-injection techniques. Results show attack success varies sharply across domains and models: airline support is the most exploitable domain, and payload splitting is the most consistently effective technique. By releasing the data and evaluation code, the authors aim to support reproducible auditing and the design of trustworthy agent interfaces.

7. BEDA: Belief Estimation as Probabilistic Constraints for Performing Strategic Dialogue Acts

Authors: Hengli Li, Zhaoxin Yu, Qi Shen, Chenxi Li, Mengmeng Wang, Tinglang Wu, Yipeng Kang, Yuxuan Wang, Song-Chun Zhu, Zixia Jia, Zilong Zheng

Affiliations: Institute for Artificial Intelligence, PKU; NLCo, BIGAI; Institute of Automation, CAS; School of Artificial Intelligence, BUPT; Department of Automation, THU; Yuanpei College, PKU

https://arxiv.org/abs/2512.24885

Abstract

Strategic dialogue requires agents to execute distinct dialogue acts, for which belief estimation is essential. While prior work often estimates beliefs accurately, it lacks a principled mechanism to use those beliefs during generation. We bridge this gap by first formalizing two core acts Adversarial and Alignment, and by operationalizing them via probabilistic constraints on what an agent may generate. We instantiate this idea in BEDA, a framework that consists of the world set, the belief estimator for belief estimation, and the conditional generator that selects acts and realizes utterances consistent with the inferred beliefs. Across three settings, Conditional Keeper Burglar (CKBG, adversarial), Mutual Friends (MF, cooperative), and CaSiNo (negotiation), BEDA consistently outperforms strong baselines: on CKBG it improves success rate by at least 5.0 points across backbones and by 20.6 points with GPT-4.1-nano; on Mutual Friends it achieves an average improvement of 9.3 points; and on CaSiNo it achieves the optimal deal relative to all baselines. These results indicate that casting belief estimation as constraints provides a simple, general mechanism for reliable strategic dialogue.

Paper review: This paper proposes BEDA, a framework that casts belief estimation as probabilistic constraints for performing strategic dialogue acts. The motivation is that existing methods lack an effective link between belief estimation and dialogue generation, so inferred beliefs are used imprecisely during generation. The method combines a world set, a belief-estimation module, and a conditional generator, and by exploiting the estimated belief state it delivers clear gains across three dialogue settings (adversarial, cooperative, and negotiation), particularly in success rate and information-exchange efficiency, showing that probabilistic belief constraints make strategic dialogue generation more reliable.
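
The idea of beliefs acting as generation constraints can be sketched as a filter over candidate utterances. All names, thresholds, and utterances below are illustrative assumptions, not BEDA's components: an Alignment act keeps propositions the listener likely already believes, while an Adversarial act keeps those they likely do not.

```python
# Toy version of belief estimation as a probabilistic generation constraint.
def select_utterance(candidates, belief, act, tau=0.5):
    """candidates: {utterance: proposition}; belief: {proposition: P(listener
    believes it)}. Alignment keeps likely-believed propositions (P >= tau);
    Adversarial keeps likely-unbelieved ones (P < tau)."""
    if act == "alignment":
        ok = [u for u, p in candidates.items() if belief[p] >= tau]
        return max(ok, key=lambda u: belief[candidates[u]], default=None)
    if act == "adversarial":
        ok = [u for u, p in candidates.items() if belief[p] < tau]
        return min(ok, key=lambda u: belief[candidates[u]], default=None)
    raise ValueError(f"unknown dialogue act: {act}")

# Invented belief state and candidate utterances.
beliefs = {"key_in_kitchen": 0.9, "key_in_garage": 0.2}
cands = {"The key is in the kitchen.": "key_in_kitchen",
         "The key is in the garage.": "key_in_garage"}
```

The constraint makes the chosen act verifiable: whichever utterance is emitted, its proposition is guaranteed to sit on the intended side of the belief threshold.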

8. R-Debater: Retrieval-Augmented Debate Generation through Argumentative Memory

Authors: Maoyuan Li, Zhongsheng Wang, Haoyuan Li, Jiamou Liu

Affiliations: Wuhan College of Communication; University of Auckland

https://arxiv.org/abs/2512.24684

Abstract

We present R-Debater, an agentic framework for generating multi-turn debates built on argumentative memory. Grounded in rhetoric and memory studies, the system views debate as a process of recalling and adapting prior arguments to maintain stance consistency, respond to opponents, and support claims with evidence. Specifically, R-Debater integrates a debate knowledge base for retrieving case-like evidence and prior debate moves with a role-based agent that composes coherent utterances across turns. We evaluate on standardized ORCHID debates, constructing a 1,000-item retrieval corpus and a held-out set of 32 debates across seven domains. Two tasks are evaluated: next-utterance generation, assessed by InspireScore (subjective, logical, and factual), and adversarial multi-turn simulations, judged by Debatrix (argument, source, language, and overall). Compared with strong LLM baselines, R-Debater achieves higher single-turn and multi-turn scores. Human evaluation with 20 experienced debaters further confirms its consistency and evidence use, showing that combining retrieval grounding with structured planning yields more faithful, stance-aligned, and coherent debates across turns.

Paper review: This paper presents R-Debater, a multi-turn debate generation framework built on argumentative memory that improves the logical consistency and persuasiveness of generated debates. The system combines retrieval-augmented generation (RAG) with role-based planning: it retrieves relevant debate material and prior arguments to compose coherent, stance-consistent utterances across turns. On the standardized ORCHID debate dataset, R-Debater surpasses several strong baselines, showing clear gains in logical accuracy and factual grounding, and human evaluation found its debates preferred by experienced debaters in over 75% of comparisons.
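
The retrieval step over an argumentative memory can be sketched with simple bag-of-words cosine similarity. This stands in for whatever retriever R-Debater actually uses (the paper does not commit to this scheme); the snippets and query are invented.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(memory, query, k=2):
    """Return the k stored argument snippets most similar to the query."""
    q = Counter(query.lower().split())
    ranked = sorted(memory, reverse=True,
                    key=lambda m: cosine(Counter(m.lower().split()), q))
    return ranked[:k]

# A tiny argumentative memory (invented snippets).
memory = [
    "carbon taxes reduce emissions by pricing externalities",
    "school uniforms improve focus in classrooms",
    "emissions trading schemes also price carbon externalities",
]
hits = retrieve(memory, "should we price carbon emissions", k=2)
```

In a full system the retrieved snippets would then condition a role-based generator, so each utterance is grounded in previously seen cases and moves rather than free-form generation.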

9. VLN-MME: Diagnosing MLLMs as Language-guided Visual Navigation agents

Authors: Xunyi Zhao, Gengze Zhou, Qi Wu

Affiliations: University of Adelaide; Australian Institute for Machine Learning

https://arxiv.org/abs/2512.24851

Abstract

Multimodal Large Language Models (MLLMs) have demonstrated remarkable capabilities across a wide range of vision-language tasks. However, their performance as embodied agents, which requires multi-round dialogue spatial reasoning and sequential action prediction, needs further exploration. Our work investigates this potential in the context of Vision-and-Language Navigation (VLN) by introducing a unified and extensible evaluation framework to probe MLLMs as zero-shot agents by bridging traditional navigation datasets into a standardized benchmark, named VLN-MME. We simplify the evaluation with a highly modular and accessible design. This flexibility streamlines experiments, enabling structured comparisons and component-level ablations across diverse MLLM architectures, agent designs, and navigation tasks. Crucially, enabled by our framework, we observe that enhancing our baseline agent with Chain-of-Thought (CoT) reasoning and self-reflection leads to an unexpected performance decrease. This suggests MLLMs exhibit poor context awareness in embodied navigation tasks; although they can follow instructions and structure their output, their 3D spatial reasoning fidelity is low. VLN-MME lays the groundwork for systematic evaluation of general-purpose MLLMs in embodied navigation settings and reveals limitations in their sequential decision-making capabilities. We believe these findings offer crucial guidance for MLLM post-training as embodied agents.

Paper review: This paper examines how multimodal large language models (MLLMs) perform on Vision-and-Language Navigation (VLN) tasks, introducing VLN-MME, a unified evaluation framework for systematically probing MLLMs as zero-shot navigation agents. Its modular design simplifies evaluation and enables structured comparisons across model architectures and agent designs. Notably, the experiments show that adding Chain-of-Thought reasoning and self-reflection unexpectedly degrades performance, revealing weak 3D spatial reasoning and limited context awareness in embodied navigation. These findings offer useful guidance for post-training MLLMs as embodied agents.

10. From Illusion to Insight: Change-Aware File-Level Software Defect Prediction Using Agentic AI

Authors: Mohsen Hesamolhokama, Behnam Rohani, Amirahmad Shafiee, MohammadAmin Fazli, Jafar Habibi

Affiliations: Sharif University of Technology

https://arxiv.org/abs/2512.23875

Abstract

Much of the reported progress in file-level software defect prediction (SDP) is, in reality, nothing but an illusion of accuracy. Over the last decades, machine learning and deep learning models have reported increasing performance across software versions. However, since most files persist across releases and retain their defect labels, standard evaluation rewards label-persistence bias rather than reasoning about code changes. To address this issue, we reformulate SDP as a change-aware prediction task, in which models reason over code changes of a file within successive project versions, rather than relying on static file snapshots. Building on this formulation, we propose an LLM-driven, change-aware, multi-agent debate framework. Our experiments on multiple PROMISE projects show that traditional models achieve inflated F1, while failing on rare but critical defect-transition cases. In contrast, our change-aware reasoning and multi-agent debate framework yields more balanced performance across evolution subsets and significantly improves sensitivity to defect introductions. These results highlight fundamental flaws in current SDP evaluation practices and emphasize the need for change-aware reasoning in practical defect prediction. The source code is publicly available.

Paper review: This paper examines the "illusion of accuracy" in software defect prediction (SDP), arguing that label-persistence bias inflates the reported performance of traditional models. To address this, the authors reformulate SDP as a change-aware prediction task and build a multi-agent debate framework that reasons over code changes between successive project versions to improve prediction accuracy. Experiments show the framework outperforms traditional models on rare but critical defect-transition cases, underscoring the need for change-aware reasoning in practical defect prediction.
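
The change-aware reformulation can be sketched by labelling each file with its defect transition between two releases rather than a static label. The transition names below are my own; the paper's framework additionally reasons over the code diffs with LLM agents.

```python
# Sketch of the change-aware task: classify each file's defect *transition*
# between two releases instead of a static per-file label.
def defect_transitions(prev_labels, curr_labels):
    """prev_labels / curr_labels: {filename: is_defective}. Returns the
    transition class for every file present in both versions."""
    names = {(False, False): "stays_clean",
             (False, True):  "defect_introduced",   # rare but critical
             (True,  False): "defect_fixed",
             (True,  True):  "stays_defective"}
    return {f: names[(prev_labels[f], curr_labels[f])]
            for f in prev_labels.keys() & curr_labels.keys()}

v1 = {"a.py": False, "b.py": True, "c.py": False}
v2 = {"a.py": True,  "b.py": True, "c.py": False}
trans = defect_transitions(v1, v2)
```

Evaluating per transition class exposes the label-persistence problem: a model that simply copies last release's labels scores perfectly on "stays_*" files yet misses every "defect_introduced" case.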

