Reproducing the Grey Wolf Optimizer (GWO) Paper. The reproduction covers: an improved algorithm implementation, 23 benchmark test functions, plots analyzing the improvement strategy, comparisons against GWO and other algorithms, and more. Nearly every step of the code is commented, making it clear to follow and well suited for beginners.
Hey everyone! Today we're going to talk about reproducing the Grey Wolf Optimizer (GWO) paper. It's a fun topic, and for newcomers who want to dig into optimization algorithms, it makes for a genuinely valuable learning exercise.
1. Improved Algorithm Implementation
The Grey Wolf Optimizer simulates the hunting behavior of a wolf pack. In nature, grey wolf packs follow a strict social hierarchy of alpha (α), beta (β), delta (δ), and omega (ω) wolves. The α wolf is the leader and makes the decisions; the β wolf assists the α; the δ wolves obey the α and β; and the ω wolves sit at the bottom of the hierarchy.
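In the algorithm, the α, β, and δ wolves correspond to the three best solutions found so far, and every other wolf updates its position relative to them. The update rules from the original GWO paper are, for each leader X_p in {X_α, X_β, X_δ}: D = |C · X_p − X| and X_new = X_p − A · D, with A = 2a·r1 − a and C = 2·r2, where r1 and r2 are random numbers in [0, 1] and a decreases from 2 to 0 over the iterations. The wolf's new position is the average of the three candidate positions X1, X2, X3 produced by the three leaders. When |A| > 1 the wolves are pushed away from the leaders (exploration); when |A| < 1 they are pulled toward them (exploitation).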
Let's look at the core code skeleton of the basic GWO algorithm (in Python):
```python
import numpy as np

# Objective function: a simple unimodal function (sum of squares) as an example
def objective_function(x):
    return np.sum(x ** 2)

# Basic GWO implementation
def gwo(pop_size, dim, max_iter, lb, ub):
    # Randomly initialize the wolves' positions within the bounds
    wolves = np.random.uniform(lb, ub, (pop_size, dim))
    fitness = np.array([objective_function(w) for w in wolves])

    # Initialize alpha, beta, delta as the three best wolves so far
    order = np.argsort(fitness)
    alpha_wolf, alpha_fitness = wolves[order[0]].copy(), fitness[order[0]]
    beta_wolf, beta_fitness = wolves[order[1]].copy(), fitness[order[1]]
    delta_wolf, delta_fitness = wolves[order[2]].copy(), fitness[order[2]]

    history = []  # best fitness per iteration, used for convergence plots later

    for t in range(max_iter):
        a = 2 - t * (2 / max_iter)  # linearly decreasing convergence factor
        for i in range(pop_size):
            # Candidate position guided by the alpha wolf
            r1, r2 = np.random.rand(), np.random.rand()
            A1, C1 = 2 * a * r1 - a, 2 * r2
            D_alpha = np.abs(C1 * alpha_wolf - wolves[i])
            X1 = alpha_wolf - A1 * D_alpha
            # Candidate position guided by the beta wolf
            r1, r2 = np.random.rand(), np.random.rand()
            A2, C2 = 2 * a * r1 - a, 2 * r2
            D_beta = np.abs(C2 * beta_wolf - wolves[i])
            X2 = beta_wolf - A2 * D_beta
            # Candidate position guided by the delta wolf
            r1, r2 = np.random.rand(), np.random.rand()
            A3, C3 = 2 * a * r1 - a, 2 * r2
            D_delta = np.abs(C3 * delta_wolf - wolves[i])
            X3 = delta_wolf - A3 * D_delta
            # New position: average of the three candidates, clipped to the bounds
            wolves[i] = np.clip((X1 + X2 + X3) / 3, lb, ub)

        # Re-evaluate and update the leaders (.copy() so later in-place
        # updates to `wolves` cannot silently corrupt the stored leaders)
        fitness = np.array([objective_function(w) for w in wolves])
        for i in range(pop_size):
            if fitness[i] < alpha_fitness:
                alpha_fitness, alpha_wolf = fitness[i], wolves[i].copy()
            elif fitness[i] < beta_fitness:
                beta_fitness, beta_wolf = fitness[i], wolves[i].copy()
            elif fitness[i] < delta_fitness:
                delta_fitness, delta_wolf = fitness[i], wolves[i].copy()
        history.append(alpha_fitness)

    return alpha_wolf, alpha_fitness, history
```

In this code, objective_function defines the target we want to optimize — a simple sum of squares here. The gwo function implements the main loop: the wolves start at random positions, and in every iteration each wolf moves according to the convergence factor a and the random numbers r1 and r2, gradually closing in on the optimum. Note the .copy() calls: wolves[i] is a view into the population array, so storing it directly as a leader would let later position updates overwrite the best solutions found so far. The function also records the best fitness at every iteration in history, which we will use for the convergence plots in Section 3.
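As a quick smoke test (the parameter values here are purely illustrative), you can run it like this:

```python
best_pos, best_val, _ = gwo(pop_size=30, dim=10, max_iter=200, lb=-100, ub=100)
print('best position:', best_pos)
print('best fitness :', best_val)  # should be very close to 0 for the sphere function
```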
An improved version of the algorithm can start from several angles — for example, changing how the convergence factor is updated, or introducing new mechanisms that keep the algorithm from getting trapped in local optima. Suppose we switch to a nonlinear convergence factor; the code might change like this:
```python
# Improved GWO implementation — identical to gwo() except for the convergence factor
def improved_gwo(pop_size, dim, max_iter, lb, ub):
    wolves = np.random.uniform(lb, ub, (pop_size, dim))
    fitness = np.array([objective_function(w) for w in wolves])

    # Initialize alpha, beta, delta as the three best wolves so far
    order = np.argsort(fitness)
    alpha_wolf, alpha_fitness = wolves[order[0]].copy(), fitness[order[0]]
    beta_wolf, beta_fitness = wolves[order[1]].copy(), fitness[order[1]]
    delta_wolf, delta_fitness = wolves[order[2]].copy(), fitness[order[2]]

    history = []

    for t in range(max_iter):
        a = 2 * np.exp(-(4 * t / max_iter) ** 2)  # nonlinear (Gaussian-decay) convergence factor
        for i in range(pop_size):
            r1, r2 = np.random.rand(), np.random.rand()
            A1, C1 = 2 * a * r1 - a, 2 * r2
            X1 = alpha_wolf - A1 * np.abs(C1 * alpha_wolf - wolves[i])

            r1, r2 = np.random.rand(), np.random.rand()
            A2, C2 = 2 * a * r1 - a, 2 * r2
            X2 = beta_wolf - A2 * np.abs(C2 * beta_wolf - wolves[i])

            r1, r2 = np.random.rand(), np.random.rand()
            A3, C3 = 2 * a * r1 - a, 2 * r2
            X3 = delta_wolf - A3 * np.abs(C3 * delta_wolf - wolves[i])

            wolves[i] = np.clip((X1 + X2 + X3) / 3, lb, ub)

        fitness = np.array([objective_function(w) for w in wolves])
        for i in range(pop_size):
            if fitness[i] < alpha_fitness:
                alpha_fitness, alpha_wolf = fitness[i], wolves[i].copy()
            elif fitness[i] < beta_fitness:
                beta_fitness, beta_wolf = fitness[i], wolves[i].copy()
            elif fitness[i] < delta_fitness:
                delta_fitness, delta_wolf = fitness[i], wolves[i].copy()
        history.append(alpha_fitness)

    return alpha_wolf, alpha_fitness, history
```

Here the convergence factor a decays nonlinearly instead of linearly, which may give the algorithm a different balance between global exploration early on and local exploitation toward the end.
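To see the difference between the two schedules before running any optimization, it helps to plot a as a function of the iteration counter. This little sketch (the parameter value is illustrative) overlays the linear and nonlinear curves:

```python
import numpy as np
import matplotlib.pyplot as plt

max_iter = 100
t = np.arange(max_iter)
a_linear = 2 - t * (2 / max_iter)                    # schedule used by the basic GWO
a_nonlinear = 2 * np.exp(-(4 * t / max_iter) ** 2)   # schedule used by the improved GWO

plt.plot(t, a_linear, label='linear a')
plt.plot(t, a_nonlinear, label='nonlinear a')
plt.xlabel('Iteration')
plt.ylabel('Convergence factor a')
plt.legend()
plt.show()
```

Note that with these particular constants the nonlinear factor stays near 2 only very briefly and then falls below the linear one for most of the run, shifting the balance toward exploitation earlier. Plotting the schedule like this is a quick sanity check on what an "improvement" actually does.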
2. The 23 Benchmark Test Functions
Benchmark functions are an essential tool for evaluating an optimizer's performance. The 23 functions cover unimodal, multimodal, and other characteristics. Take the Sphere function as an example: it is a simple unimodal function used to test an algorithm's basic convergence ability.
```python
def sphere(x):
    return np.sum(x ** 2)
```

There is also the Rastrigin function, a classic multimodal function with many local optima, which challenges an algorithm's ability to escape them.
```python
def rastrigin(x):
    A = 10
    n = len(x)
    return A * n + np.sum(x ** 2 - A * np.cos(2 * np.pi * x))
```

In the actual reproduction, we apply the improved GWO to all of these benchmark functions and record the results of each run to evaluate how the algorithm performs on functions with different characteristics.
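For reference, here are two more functions that typically appear in this classical 23-function suite, written in the same style: Rosenbrock, whose narrow curved valley is easy to enter but hard to traverse, and Ackley, a multimodal function with a nearly flat outer region and a deep basin at the center.

```python
def rosenbrock(x):
    # Narrow curved valley; global minimum 0 at x = (1, ..., 1)
    return np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

def ackley(x):
    # Flat outer region with a deep central basin; global minimum 0 at the origin
    n = len(x)
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n)
            + 20 + np.e)
```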
3. Plotting and Analyzing the Improvement Strategy
Plots give us a much more intuitive view of how well the improvement works. We can draw the curve of the best fitness value over the iterations — for example, using the matplotlib library to plot the convergence curves of the original and improved GWO on the Sphere function.
```python
import matplotlib.pyplot as plt

pop_size = 50
dim = 2
max_iter = 100
lb = -100
ub = 100

# Run each algorithm once; the third return value is the per-iteration
# best-fitness history recorded inside the main loop
_, _, gwo_fitness_trace = gwo(pop_size, dim, max_iter, lb, ub)
_, _, improved_gwo_fitness_trace = improved_gwo(pop_size, dim, max_iter, lb, ub)

plt.plot(range(max_iter), gwo_fitness_trace, label='GWO')
plt.plot(range(max_iter), improved_gwo_fitness_trace, label='Improved GWO')
plt.xlabel('Iteration')
plt.ylabel('Best Fitness')
plt.title('Convergence Comparison on Sphere Function')
plt.legend()
plt.show()
```

From this figure we can see clearly whether the improved GWO converges faster, or reaches a better final solution. If the improved curve drops more quickly and ends at a lower value, the improvement strategy is working. We record the history inside the main loop rather than re-running the algorithm with larger and larger iteration budgets: separate runs start from different random populations, so stitching their final values together would not give a true convergence curve.
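Because both algorithms are stochastic, a single run's curve can be noisy. A common refinement (a small sketch of our own, not from the original post) is to average the traces over several independent runs:

```python
n_runs = 10  # number of independent runs to average over (illustrative value)
gwo_traces = np.array([gwo(pop_size, dim, max_iter, lb, ub)[2] for _ in range(n_runs)])
imp_traces = np.array([improved_gwo(pop_size, dim, max_iter, lb, ub)[2] for _ in range(n_runs)])

plt.plot(range(max_iter), gwo_traces.mean(axis=0), label='GWO (mean of runs)')
plt.plot(range(max_iter), imp_traces.mean(axis=0), label='Improved GWO (mean of runs)')
plt.xlabel('Iteration')
plt.ylabel('Best Fitness')
plt.legend()
plt.show()
```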
4. Comparison with GWO and Other Algorithms
Besides comparing the improved algorithm against its original self, we also compare it with other optimizers, such as Particle Swarm Optimization (PSO).
```python
# Particle Swarm Optimization (PSO)
def pso(pop_size, dim, max_iter, lb, ub, w, c1, c2):
    # Initialize particle positions and velocities
    particles = np.random.uniform(lb, ub, (pop_size, dim))
    velocities = np.zeros((pop_size, dim))

    # Personal bests and the global best (.copy() so later position
    # updates cannot corrupt the stored bests through shared views)
    pbest = particles.copy()
    pbest_fitness = np.array([objective_function(p) for p in particles])
    gbest_index = np.argmin(pbest_fitness)
    gbest = pbest[gbest_index].copy()
    gbest_fitness = pbest_fitness[gbest_index]

    for t in range(max_iter):
        for i in range(pop_size):
            r1 = np.random.rand(dim)
            r2 = np.random.rand(dim)
            # Velocity update: inertia + cognitive pull + social pull
            velocities[i] = (w * velocities[i]
                             + c1 * r1 * (pbest[i] - particles[i])
                             + c2 * r2 * (gbest - particles[i]))
            particles[i] = np.clip(particles[i] + velocities[i], lb, ub)

        fitness = np.array([objective_function(p) for p in particles])
        for i in range(pop_size):
            if fitness[i] < pbest_fitness[i]:
                pbest_fitness[i] = fitness[i]
                pbest[i] = particles[i].copy()
                if pbest_fitness[i] < gbest_fitness:
                    gbest_fitness = pbest_fitness[i]
                    gbest = pbest[i].copy()

    return gbest, gbest_fitness
```

We then apply GWO, the improved GWO, and PSO to the whole suite of benchmark functions, recording metrics such as the best solution, the average solution, and the convergence speed for a comprehensive comparison. This makes both the strengths and the weaknesses of the improved GWO much clearer.
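A minimal sketch of such a comparison harness might look like the following. The helper name run_trials is ours, not from the original post, and since gwo, improved_gwo, and pso all evaluate the module-level objective_function, the sketch rebinds that name per benchmark; in real code you would pass the objective in as a parameter instead.

```python
def run_trials(algorithm, fn, lb, ub, n_runs=30, pop_size=50, dim=30,
               max_iter=500, **kwargs):
    # Rebind the module-level objective so the algorithms optimize `fn`
    global objective_function
    objective_function = fn
    finals = [algorithm(pop_size, dim, max_iter, lb, ub, **kwargs)[1]
              for _ in range(n_runs)]
    return np.min(finals), np.mean(finals), np.std(finals)

# Each entry: function plus its conventional search bounds
benchmarks = {'Sphere': (sphere, -100, 100), 'Rastrigin': (rastrigin, -5.12, 5.12)}
algorithms = {
    'GWO': (gwo, {}),
    'Improved GWO': (improved_gwo, {}),
    'PSO': (pso, {'w': 0.7, 'c1': 1.5, 'c2': 1.5}),  # typical PSO settings
}

for fname, (fn, lb, ub) in benchmarks.items():
    for aname, (alg, kwargs) in algorithms.items():
        best, mean, std = run_trials(alg, fn, lb, ub, **kwargs)
        print(f'{fname:10s} {aname:14s} best={best:.3e} mean={mean:.3e} std={std:.3e}')
```

Running 30 independent trials per algorithm and reporting best, mean, and standard deviation is the usual way such comparison tables are built, since a single run of a stochastic optimizer tells you very little.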
And that's the gist of reproducing the Grey Wolf Optimizer (GWO) paper. I hope it helps you understand and learn optimization algorithms — try it out yourself, and you might even come up with an even better improvement strategy!