Python Deep Learning with TensorFlow (16): A Dialogue Example in TensorFlow

Contents

      • Basic data cleaning
      • Generating the vocabulary
      • Defining the tokenizer and building the dataset
      • Building and training the Transformer model
      • Model inference

The core of the Transformer is the attention mechanism, which was covered in detail earlier in this series; see: Python深度学习基于Tensorflow(9)注意力机制_tensorflow的各种注意力机制python代码-CSDN博客


Basic Data Cleaning

If you have other data you can skip this step; the dataset used here turns out to be surprisingly poor.

The book never says where this dataset comes from. The dialogue data sits in two files, one at ./data/movie_lines.txt and the other at ./data/movie_conversations.txt (the file names and the ` +++$+++ ` field separator match the Cornell Movie-Dialogs Corpus).
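For reference, raw lines in the two files look roughly like this (illustrative examples in the Cornell format; the parsing code below relies only on the ` +++$+++ ` separator, with field 0 the line id and field 4 the text in movie_lines.txt, and field 3 the list of line ids in movie_conversations.txt):

L1045 +++$+++ u0 +++$+++ m0 +++$+++ BIANCA +++$+++ They do not!
u0 +++$+++ u2 +++$+++ m0 +++$+++ ['L194', 'L195', 'L196', 'L197']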

import os

def base_process(max_nums=50000, return_lines=False):
    """max_nums caps the number of conversation pairs; return_lines also returns all raw lines for vocabulary building"""
    ## Build an id -> line dictionary
    id2line = {}
    with open('./data/movie_lines.txt', errors='ignore') as f:
        lines = f.readlines()
    for line in lines:
        parts = line.replace('\n', '').split(' +++$+++ ')
        id2line[parts[0]] = parts[4]

    ## Use id2line to look up each conversation, then walk it to generate line pairs
    X, y = [], []
    with open('./data/movie_conversations.txt', 'r') as file:
        lines = file.readlines()
    for line in lines:
        parts = line.replace('\n', '').split(' +++$+++ ')
        conversation = [line_id[1:-1] for line_id in parts[3][1:-1].split(', ')]
        for ix in range(len(conversation) - 1):
            X.append(id2line[conversation[ix]].replace('-', ''))
            y.append(id2line[conversation[ix + 1]].replace('-', ''))
        if len(X) > max_nums:
            break

    if return_lines:
        return X, y, list(id2line.values())
    else:
        return X, y

# return_lines=True also returns the raw lines, used later to build the vocabulary
X, y, lines = base_process(return_lines=True)

# Show a few samples
for i in range(5):
    print(f'inputs: {X[i]} \noutputs: {y[i]} \n')

The samples look like this. Honestly, quite a few "question/answer" pairs have nothing to do with each other…

inputs: Can we make this quick?  Roxanne Korrine and Andrew Barrett are having an incredibly horrendous public break up on the quad.  Again.
outputs: Well, I thought we'd start with pronunciation, if that's okay with you.

inputs: Well, I thought we'd start with pronunciation, if that's okay with you.
outputs: Not the hacking and gagging and spitting part.  Please.

inputs: Not the hacking and gagging and spitting part.  Please.
outputs: Okay... then how 'bout we try out some French cuisine.  Saturday?  Night?

inputs: You're asking me out.  That's so cute. What's your name again?
outputs: Forget it.

inputs: No, no, it's my fault  we didn't have a proper introduction
outputs: Cameron.

Generating the Vocabulary

The code is as follows:

import tensorflow as tf
import tensorflow_text as tf_text
from tensorflow_text.tools.wordpiece_vocab import bert_vocab_from_dataset as bert_vocab

dataset = tf.data.Dataset.from_tensor_slices((X, y))
lines_dataset = tf.data.Dataset.from_tensor_slices((lines))

## Build the vocabulary; this step is fairly slow -- about 2min 21s
bert_vocab_args = dict(
    vocab_size=8000,                                         # The target vocabulary size
    reserved_tokens=["[PAD]", "[UNK]", "[START]", "[END]"],  # Reserved tokens that must be included in the vocabulary
    bert_tokenizer_params=dict(lower_case=True),             # Arguments for `text.BertTokenizer`
    learn_params={},                                         # Arguments for `wordpiece_vocab.wordpiece_tokenizer_learner_lib.learn`
)
vocab = bert_vocab.bert_vocab_from_dataset(dataset=lines_dataset, **bert_vocab_args)

# print(vocab[:5], len(vocab))
# ['[PAD]', '[UNK]', '[START]', '[END]', '!'] 7881

Once vocab is built, define a function that writes it to a file:

def write_vocab_file(filepath, vocab):
    with open(filepath, 'w') as f:
        for token in vocab:
            print(token, file=f)

## Save vocab to vocab.txt
write_vocab_file('vocab.txt', vocab)

This yields the vocabulary file vocab.txt.

Defining the Tokenizer and Building the Dataset

For how the tokenizer is defined, see: Tokenizing with TF Text | TensorFlow (google.cn). The code is as follows:

@tf.function
def process_batch_strings(inputs, outputs,
                          left_pad=tf.constant([2], dtype=tf.int64),
                          right_pad=tf.constant([3], dtype=tf.int64)):
    """left_pad prepends [START] (id 2 by construction); likewise right_pad appends [END] (id 3)"""
    # On the RaggedTensor, .flat_values is equivalent to .merge_dims(-2, -1).merge_dims(-2, -1)
    inputs = tokenizer.tokenize(inputs).merge_dims(-2, -1)
    # Prepend / append tokens to each sequence, e.g. tf.constant([0], dtype=tf.int64)
    inputs = tf_text.pad_along_dimension(inputs, axis=-1, left_pad=left_pad, right_pad=right_pad)
    inputs = tf_text.pad_model_inputs(inputs, max_seq_length=128, pad_value=0)

    outputs = tokenizer.tokenize(outputs).merge_dims(-2, -1)
    outputs = tf_text.pad_along_dimension(outputs, axis=-1, left_pad=left_pad, right_pad=right_pad)
    outputs = tf_text.pad_model_inputs(outputs, max_seq_length=128, pad_value=0)

    # inputs and outputs are (ids, mask) pairs; since the embedding layer uses mask_zero,
    # only the ids are kept. The decoder target is the decoder input shifted by one token.
    return (inputs[0], outputs[0][:, :-1]), outputs[0][:, 1:]

# Define the tokenizer
tokenizer = tf_text.BertTokenizer('vocab.txt', **dict(lower_case=True))

# Build the dataset
dataset = dataset.batch(128).map(process_batch_strings)

# dataset.take(1).get_single_element()
# ((<tf.Tensor: shape=(16, 128), dtype=int64, numpy=
#   array([[  2, 276, 259, ...,   0,   0,   0],
#          [  2, 306,  14, ...,   0,   0,   0],
#          [  2, 274, 250, ...,   0,   0,   0],
#          ...,
#          [  2, 253,  10, ...,   0,   0,   0],
#          [  2, 297, 260, ...,   0,   0,   0],
#          [  2, 286,  16, ...,   0,   0,   0]], dtype=int64)>,
#   <tf.Tensor: shape=(16, 127), dtype=int64, numpy=
#   array([[   2,  306,   14, ...,    0,    0,    0],
#          [   2,  274,  250, ...,    0,    0,    0],
#          [   2,  351,   16, ...,    0,    0,    0],
#          ...,
#          [   2,  599, 1322, ...,    0,    0,    0],
#          [   2,  306,   14, ...,    0,    0,    0],
#          [   2,  322,   33, ...,    0,    0,    0]], dtype=int64)>),
#  <tf.Tensor: shape=(16, 127), dtype=int64, numpy=
#  array([[ 306,   14,   47, ...,    0,    0,    0],
#         [ 274,  250, 5477, ...,    0,    0,    0],
#         [ 351,   16,   16, ...,    0,    0,    0],
#         ...,
#         [ 599, 1322,   16, ...,    0,    0,    0],
#         [ 306,   14,  286, ...,    0,    0,    0],
#         [ 322,   33,    3, ...,    0,    0,    0]], dtype=int64)>)
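As a quick sanity check (my own snippet, not from the book), you can tokenize a sentence and detokenize it again; BertTokenizer.detokenize returns the recovered words as a RaggedTensor, which we join the same way the inference code does later:

# Illustrative round trip; the sample sentence is made up
ids = tokenizer.tokenize(["How are you?"]).merge_dims(-2, -1)
words = tokenizer.detokenize(ids)
print(tf.strings.reduce_join(words.flat_values, separator=' ').numpy())
# roughly b'how are you ?' -- lower-cased, since lower_case=True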

Building and Training the Transformer Model

Instead of the classic sinusoidal absolute positional encoding, the model here uses rotary position embeddings (RoPE). Since TensorFlow has no built-in rotary positional encoding class, we define a RotaryEmbedding layer.
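Before the code, a quick sketch of what the layer computes (my summary, not from the book). In the rotate-half convention implemented below, feature $i$ is paired with feature $i + d/2$, and each pair is rotated by an angle proportional to the token position $m$:

$x'_i = x_i \cos(m\theta_i) - x_{i+d/2} \sin(m\theta_i), \qquad x'_{i+d/2} = x_{i+d/2} \cos(m\theta_i) + x_i \sin(m\theta_i), \qquad \theta_i = 10000^{-2i/d}$

The `max_wavelength=10000` argument is the base of $\theta_i$, and `scaling_factor` simply divides the positions.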

class RotaryEmbedding(tf.keras.layers.Layer):
    def __init__(self, max_wavelength=10000, scaling_factor=1.0, **kwargs):
        super().__init__(**kwargs)
        self.max_wavelength = max_wavelength
        self.scaling_factor = scaling_factor
        self.built = True

    def call(self, inputs, start_index=0, positions=None):
        cos_emb, sin_emb = self._compute_cos_sin_embedding(inputs, start_index, positions)
        output = self._apply_rotary_pos_emb(inputs, cos_emb, sin_emb)
        return output

    def _apply_rotary_pos_emb(self, tensor, cos_emb, sin_emb):
        x1, x2 = tf.split(tensor, 2, axis=-1)
        half_rot_tensor = tf.stack((-x2, x1), axis=-2)
        half_rot_tensor = tf.reshape(half_rot_tensor, tf.shape(tensor))
        return (tensor * cos_emb) + (half_rot_tensor * sin_emb)

    def _compute_positions(self, inputs, start_index=0):
        seq_len = tf.shape(inputs)[1]
        positions = tf.range(seq_len, dtype="float32")
        return positions + tf.cast(start_index, dtype="float32")

    def _compute_cos_sin_embedding(self, inputs, start_index=0, positions=None):
        feature_axis = len(inputs.shape) - 1
        sequence_axis = 1
        rotary_dim = tf.shape(inputs)[feature_axis]
        inverse_freq = self._get_inverse_freq(rotary_dim)
        if positions is None:
            positions = self._compute_positions(inputs, start_index)
        else:
            positions = tf.cast(positions, "float32")
        positions = positions / tf.cast(self.scaling_factor, "float32")
        freq = tf.einsum("i,j->ij", positions, inverse_freq)
        embedding = tf.stack((freq, freq), axis=-2)
        # Using *tf.shape(freq)[:-1] in the reshape breaks under model.fit, hence the explicit shape
        # embedding = tf.reshape(embedding, (*tf.shape(freq)[:-1], tf.shape(freq)[-1] * 2))
        embedding = tf.reshape(embedding, (tf.shape(freq)[0], tf.shape(freq)[-1] * 2))
        if feature_axis < sequence_axis:
            embedding = tf.transpose(embedding)
        for axis in range(len(inputs.shape)):
            if axis != sequence_axis and axis != feature_axis:
                embedding = tf.expand_dims(embedding, axis)
        cos_emb = tf.cast(tf.cos(embedding), self.compute_dtype)
        sin_emb = tf.cast(tf.sin(embedding), self.compute_dtype)
        return cos_emb, sin_emb

    def _get_inverse_freq(self, rotary_dim):
        freq_range = tf.divide(
            tf.range(0, rotary_dim, 2, dtype="float32"),
            tf.cast(rotary_dim, "float32"))
        inverse_freq = 1.0 / (self.max_wavelength**freq_range)
        return inverse_freq
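A minimal shape check (my own snippet; the sizes are arbitrary) confirms the layer is shape-preserving:

rope = RotaryEmbedding()
dummy = tf.random.normal((2, 10, 64))  # (batch, seq_len, feature_dim), arbitrary sizes
print(rope(dummy).shape)               # -> (2, 10, 64)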

Fusing RotaryEmbedding into the attention mechanism gives MultiHeadAttention:

class MultiHeadAttention(tf.keras.layers.Layer):
    def __init__(self, num_heads, d_model, with_rotary=True):
        super(MultiHeadAttention, self).__init__()
        self.num_heads = num_heads
        self.d_model = d_model
        self.with_rotary = with_rotary
        ## d_model must be divisible by num_heads
        assert self.d_model % self.num_heads == 0
        ## Layers used below
        self.query_dense = tf.keras.layers.Dense(self.d_model)
        self.key_dense = tf.keras.layers.Dense(self.d_model)
        self.value_dense = tf.keras.layers.Dense(self.d_model)
        self.output_dense = tf.keras.layers.Dense(self.d_model)
        self.rotary_query = RotaryEmbedding()
        self.rotary_key = RotaryEmbedding()
        # self.rotary_query = keras_nlp.layers.RotaryEmbedding()
        # self.rotary_key = keras_nlp.layers.RotaryEmbedding()

    def call(self, x_query, x_key, x_value, use_causal_mask=False):
        if self.with_rotary:
            query = self._split_heads(self.rotary_query(self.query_dense(x_query)))
            key = self._split_heads(self.rotary_key(self.key_dense(x_key)))
        else:
            query = self._split_heads(self.query_dense(x_query))
            key = self._split_heads(self.key_dense(x_key))
        value = self._split_heads(self.value_dense(x_value))
        output, attention_weights = self._scaled_dot_product_attention(query, key, value, use_causal_mask)
        # Merge the heads back: (batch, heads, seq, depth) -> (batch, seq, d_model)
        output = tf.keras.layers.Lambda(lambda output: tf.transpose(output, perm=[0, 2, 1, 3]))(output)
        output = tf.keras.layers.Lambda(lambda output: tf.reshape(output, [tf.shape(output)[0], -1, self.d_model]))(output)
        output = self.output_dense(output)
        return output

    def _split_heads(self, x):
        # x = tf.reshape(x, [tf.shape(x)[0], -1, self.num_heads, self.d_model // self.num_heads])
        # x = tf.transpose(x, perm=[0, 2, 1, 3])
        x = tf.keras.layers.Lambda(lambda x: tf.reshape(x, [tf.shape(x)[0], -1, self.num_heads, self.d_model // self.num_heads]))(x)
        x = tf.keras.layers.Lambda(lambda x: tf.transpose(x, perm=[0, 2, 1, 3]))(x)
        return x

    def _scaled_dot_product_attention(self, query, key, value, use_causal_mask):
        dk = tf.cast(tf.shape(key)[-1], tf.float32)
        scaled_attention_logits = tf.matmul(query, key, transpose_b=True) / tf.math.sqrt(dk)
        if use_causal_mask:
            # Mask out everything above the diagonal so a position only attends to the past
            causal_mask = 1 - tf.linalg.band_part(tf.ones_like(scaled_attention_logits), -1, 0)
            scaled_attention_logits += causal_mask * -1e9
        attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1)
        output = tf.matmul(attention_weights, value)
        return output, attention_weights
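Same kind of quick check here (my own snippet; any sizes work as long as d_model % num_heads == 0):

mha = MultiHeadAttention(num_heads=8, d_model=256)
x = tf.random.normal((2, 10, 256))
print(mha(x, x, x, use_causal_mask=True).shape)  # -> (2, 10, 256)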

Define the feed-forward layer FeedForward:

class FeedForward(tf.keras.layers.Layer):
    def __init__(self, d_model):
        super(FeedForward, self).__init__()
        self.dense_1 = tf.keras.layers.Dense(4 * 2 * d_model // 3)
        self.dense_2 = tf.keras.layers.Dense(d_model)
        self.dense_3 = tf.keras.layers.Dense(4 * 2 * d_model // 3)

    def call(self, x):
        # Gated activation: silu(dense_1(x)) * dense_3(x), then project back to d_model
        x = self.dense_2(tf.nn.silu(self.dense_1(x)) * self.dense_3(x))
        return x
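Note this is not the classic two-layer ReLU block but a gated (SwiGLU-style) feed-forward, as used in LLaMA-family models:

$\mathrm{FFN}(x) = W_2 \big( \mathrm{SiLU}(W_1 x) \odot W_3 x \big)$

The hidden width 4 * 2 * d_model // 3 (roughly 8d/3) appears to follow the usual convention of shrinking the gated hidden layer so the parameter count stays comparable to a standard 4d MLP.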

Next, define RMSNorm in place of LayerNorm to speed up computation.
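For reference, the standard definition (my transcription; it matches the _norm method below):

$\mathrm{RMSNorm}(x) = \dfrac{x}{\sqrt{\tfrac{1}{d} \sum_{i=1}^{d} x_i^2 + \epsilon}} \odot \gamma$

Unlike LayerNorm it neither subtracts the mean nor adds a bias, which is where the savings come from.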

class RMSNorm(tf.keras.layers.Layer):
    def __init__(self, d_model, eps=1e-6):
        super(RMSNorm, self).__init__()
        self.eps = eps
        self.gamma = self.add_weight(shape=d_model, initializer='ones', trainable=True)

    def call(self, x):
        x = self._norm(x)
        output = x * self.gamma
        return output

    def _norm(self, x):
        # Rescale by the root mean square of the features
        return x * tf.math.rsqrt(tf.reduce_mean(tf.pow(x, 2), axis=-1, keepdims=True) + self.eps)

Build EncoderLayer and Encoder:

class EncoderLayer(tf.keras.layers.Layer):
    def __init__(self, num_heads, d_model):
        super(EncoderLayer, self).__init__()
        self.mha = MultiHeadAttention(num_heads, d_model, with_rotary=True)
        self.ffn = FeedForward(d_model)
        self.rms_mha = RMSNorm(d_model)
        self.rms_ffn = RMSNorm(d_model)

    def call(self, x):
        ## Attention sub-layer
        x = self.rms_mha(x)
        x = x + self.mha(x, x, x, use_causal_mask=False)
        ## Feed-forward sub-layer
        x = self.rms_ffn(x)
        x = x + self.ffn(x)
        return x

class Encoder(tf.keras.layers.Layer):
    def __init__(self, encoder_layer_nums, vocabulary_size, num_heads, d_model):
        super(Encoder, self).__init__()
        self.embedding = tf.keras.layers.Embedding(vocabulary_size, d_model, mask_zero=True)
        self.encoder_layers = [EncoderLayer(num_heads, d_model) for _ in range(encoder_layer_nums)]

    def call(self, x):
        x = self.embedding(x)
        for encoder_layer in self.encoder_layers:
            x = encoder_layer(x)
        return x

Similarly, DecoderLayer and Decoder:

class DecoderLayer(tf.keras.layers.Layer):
    def __init__(self, num_heads, d_model):
        super(DecoderLayer, self).__init__()
        self.mha_1 = MultiHeadAttention(num_heads, d_model, with_rotary=True)
        self.mha_2 = MultiHeadAttention(num_heads, d_model, with_rotary=True)
        self.ffn = FeedForward(d_model)
        self.rms_mha_1 = RMSNorm(d_model)
        self.rms_mha_2 = RMSNorm(d_model)
        self.rms_ffn = RMSNorm(d_model)

    def call(self, x, encoder_output):
        ## Masked (causal) self-attention sub-layer
        x = self.rms_mha_1(x)
        x = x + self.mha_1(x, x, x, use_causal_mask=True)
        ## Cross-attention sub-layer over the encoder output
        x = self.rms_mha_2(x)
        x = x + self.mha_2(x, encoder_output, encoder_output, use_causal_mask=False)
        ## Feed-forward sub-layer
        x = self.rms_ffn(x)
        x = x + self.ffn(x)
        return x

class Decoder(tf.keras.layers.Layer):
    def __init__(self, decoder_layer_nums, vocabulary_size, num_heads, d_model):
        super(Decoder, self).__init__()
        self.embedding = tf.keras.layers.Embedding(vocabulary_size, d_model, mask_zero=True)
        self.decoder_layers = [DecoderLayer(num_heads, d_model) for _ in range(decoder_layer_nums)]

    def call(self, x, encoder_output):
        x = self.embedding(x)
        for decoder_layer in self.decoder_layers:
            x = decoder_layer(x, encoder_output)
        return x
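A small wiring check (my own snippet; toy sizes, and the token ids are made up) shows the decoder consuming the encoder output:

enc = Encoder(encoder_layer_nums=2, vocabulary_size=8000, num_heads=8, d_model=256)
dec = Decoder(decoder_layer_nums=2, vocabulary_size=8000, num_heads=8, d_model=256)
src = tf.constant([[2, 45, 67, 3]], dtype=tf.int64)  # [START] ... [END], ids invented
tgt = tf.constant([[2, 89, 12]], dtype=tf.int64)
print(dec(tgt, enc(src)).shape)  # -> (1, 3, 256)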

Finally, assemble the Transformer model:

class Transformer(tf.keras.Model):
    def __init__(self, decoder_layer_nums, encoder_layer_nums, vocabulary_size, num_heads, d_model):
        super(Transformer, self).__init__()
        self.encoder = Encoder(encoder_layer_nums, vocabulary_size, num_heads, d_model)
        self.decoder = Decoder(decoder_layer_nums, vocabulary_size, num_heads, d_model)
        self.final_dense = tf.keras.layers.Dense(vocabulary_size, activation='softmax')

    def call(self, x):
        x1, x2 = x[0], x[1]   # encoder input, decoder input
        x1 = self.encoder(x1)
        x2 = self.decoder(x2, x1)
        output = self.final_dense(x2)
        return output

Define the learning-rate scheduler with warmup:

class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
    def __init__(self, d_model, warmup_steps=4000):
        super().__init__()
        self.d_model = tf.cast(d_model, tf.float32)
        self.warmup_steps = warmup_steps

    def __call__(self, step):
        step = tf.cast(step, dtype=tf.float32)
        arg1 = tf.math.rsqrt(step)
        arg2 = step * (self.warmup_steps ** -1.5)
        return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)
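This is the warmup schedule from the original Transformer paper ("Attention Is All You Need"); written out, the code computes

$\mathrm{lr}(s) = d_{\mathrm{model}}^{-0.5} \cdot \min\left(s^{-0.5},\ s \cdot \mathrm{warmup\_steps}^{-1.5}\right)$

so the learning rate grows linearly for the first warmup_steps steps and then decays as the inverse square root of the step count.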

Instantiate the model and start training:

decoder_layer_nums=2
encoder_layer_nums=2
vocabulary_size=len(vocab)
num_heads=8
d_model=256

learning_rate = CustomSchedule(d_model)
optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98, epsilon=1e-9)

model = Transformer(decoder_layer_nums, encoder_layer_nums, vocabulary_size, num_heads, d_model)
model.compile(
    loss=tf.keras.losses.sparse_categorical_crossentropy,
    optimizer=optimizer,
    metrics=['accuracy']
)

## Start training
history = model.fit(dataset, epochs=10)

# Epoch 1/10
# 391/391 [==============================] - 43s 94ms/step - loss: 2.3794 - accuracy: 0.8041
# Epoch 2/10
# 391/391 [==============================] - 37s 94ms/step - loss: 0.6215 - accuracy: 0.9024
# Epoch 3/10
# 391/391 [==============================] - 37s 95ms/step - loss: 0.5656 - accuracy: 0.9060
# Epoch 4/10
# 391/391 [==============================] - 37s 95ms/step - loss: 0.5365 - accuracy: 0.9077
# Epoch 5/10
# 391/391 [==============================] - 37s 95ms/step - loss: 0.5097 - accuracy: 0.9095
# Epoch 6/10
# 391/391 [==============================] - 37s 96ms/step - loss: 0.4812 - accuracy: 0.9119
# Epoch 7/10
# 391/391 [==============================] - 37s 95ms/step - loss: 0.4549 - accuracy: 0.9145
# Epoch 8/10
# 391/391 [==============================] - 37s 94ms/step - loss: 0.4335 - accuracy: 0.9166
# Epoch 9/10
# 391/391 [==============================] - 37s 94ms/step - loss: 0.4162 - accuracy: 0.9183
# Epoch 10/10
# 391/391 [==============================] - 37s 95ms/step - loss: 0.4047 - accuracy: 0.9192

Model Inference

Define the inference class Inference, which decodes greedily, one argmax token at a time:

from rich.progress import track

class Inference(tf.Module):
    def __init__(self, model, tokenizer):
        self.model = model
        self.tokenizer = tokenizer

    def __call__(self, x, max_length=128):
        x = self.tokenizer.tokenize(x).flat_values
        ## Define start ([START], id 2) and end ([END], id 3)
        start = tf.constant([2], dtype=tf.int64)
        end = tf.constant([3], dtype=tf.int64)
        x = tf_text.pad_along_dimension(x, axis=-1, left_pad=start, right_pad=end)[tf.newaxis, :]
        # With a TensorArray, remember that write returns the updated array: ta = ta.write(...)
        outputs = tf.TensorArray(tf.int64, size=0, dynamic_size=True)
        outputs = outputs.write(0, start)
        for i in track(tf.range(max_length)):
            temp = tf.transpose(outputs.stack())
            temp = self.model.predict((x, temp), verbose=0)[:, -1:, :]
            output = tf.argmax(temp, axis=-1)[0]
            if output == end:
                break
            else:
                outputs = outputs.write(outputs.size(), output)
        outputs = outputs.stack()
        outputs = tf.transpose(outputs)
        x = tf.strings.reduce_join(self.tokenizer.detokenize(outputs).flat_values[1:], separator=' ').numpy().decode('utf-8')
        return x

Instantiate the class and run inference:

inference = Inference(model, tokenizer)
inference('what do you want me to do?')
# "i don ' t want to be a good idea ."

As you can see, the reply again has little to do with the prompt; the data is one reason, and the small number of layers is another.
