How to deploy ELK with Docker? - Tutorial



Environment:

ELK 8.8.0

Ubuntu 20.04

Problem:

How do I deploy ELK with Docker?

Solution:

I. Environment preparation

(1) Host setup

  1. Install Docker Engine: version 18.06.0 or newer is required; check with docker --version. The installation method depends on the operating system. On Linux it can be installed through the package manager, e.g. sudo apt-get install docker.io on Ubuntu or sudo yum install docker on CentOS. After installation, it is recommended to add the current user to the docker group so Docker can be used without sudo: run sudo usermod -aG docker ${USER}, then log out and back in for the change to take effect.

  2. Install Docker Compose: version 2.0.0 or newer is required. It can be installed with Python's pip package manager (pip install -U docker-compose), or by downloading the binary from Docker's official GitHub releases. On Linux, a specific version can be installed as follows:

    sudo curl -L "https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    sudo chmod +x /usr/local/bin/docker-compose

    Replace ${DOCKER_COMPOSE_VERSION} with the desired version number, e.g. v2.21.0. A combined sketch of both installation steps is shown below.
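A minimal sketch of the host setup, assuming an Ubuntu host and v2.21.0 as an example Compose release (adjust versions to your needs):

    # Install Docker Engine and allow the current user to run it without sudo
    sudo apt-get update
    sudo apt-get install -y docker.io
    sudo usermod -aG docker ${USER}   # log out and back in afterwards
    docker --version                  # should report 18.06.0 or newer

    # Install a pinned Docker Compose release
    DOCKER_COMPOSE_VERSION=v2.21.0    # example version
    sudo curl -L "https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    sudo chmod +x /usr/local/bin/docker-compose
    docker-compose --version          # should report 2.0.0 or newer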

II. Cloning and initializing the project

1. Clone the ELK project repository onto the local Docker host:

git clone https://github.com/deviantony/docker-elk.git

2. Enter the project directory:

cd docker-elk

Create a new .env file:

touch .env

and set the relevant passwords in it:

ELASTIC_VERSION=8.8.0
ELASTIC_PASSWORD=dff$#123e12
KIBANA_SYSTEM_PASSWORD=dff$#123e12
LOGSTASH_INTERNAL_PASSWORD=dff$#123e12
METRICBEAT_INTERNAL_PASSWORD=dff$#123e12Yor
FILEBEAT_INTERNAL_PASSWORD=dff$#123e12Your
HEARTBEAT_INTERNAL_PASSWORD=dff$#123e12urSt
MONITORING_INTERNAL_PASSWORD=dff$#123ePa
BEATS_SYSTEM_PASSWORD=dff$#123e12YoPas

Adjust the configuration files as needed.

3. Add the following to kibana.yml:

xpack.securitySolution.telemetry.enabled: false
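For example, assuming you are in the docker-elk project root, the setting can be appended to the Kibana config shipped with the repository:

echo 'xpack.securitySolution.telemetry.enabled: false' >> kibana/config/kibana.yml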


4. Edit docker-compose.yml:

nano docker-compose.yml
services:

  # The 'setup' service runs a one-off script which initializes users inside
  # Elasticsearch — such as 'logstash_internal' and 'kibana_system' — with the
  # values of the passwords defined in the '.env' file. It also creates the
  # roles required by some of these users.
  #
  # This task only needs to be performed once, during the *initial* startup of
  # the stack. Any subsequent run will reset the passwords of existing users to
  # the values defined inside the '.env' file, and the built-in roles to their
  # default permissions.
  #
  # By default, it is excluded from the services started by 'docker compose up'
  # due to the non-default profile it belongs to. To run it, either provide the
  # '--profile=setup' CLI flag to Compose commands, or "up" the service by name
  # such as 'docker compose up setup'.
  setup:
    profiles:
      - setup
    build:
      context: setup/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    init: true
    volumes:
      - ./setup/entrypoint.sh:/entrypoint.sh:ro,Z
      - ./setup/lib.sh:/lib.sh:ro,Z
      - ./setup/roles:/roles:ro,Z
    environment:
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD:-}
      LOGSTASH_INTERNAL_PASSWORD: ${LOGSTASH_INTERNAL_PASSWORD:-}
      KIBANA_SYSTEM_PASSWORD: ${KIBANA_SYSTEM_PASSWORD:-}
      METRICBEAT_INTERNAL_PASSWORD: ${METRICBEAT_INTERNAL_PASSWORD:-}
      FILEBEAT_INTERNAL_PASSWORD: ${FILEBEAT_INTERNAL_PASSWORD:-}
      HEARTBEAT_INTERNAL_PASSWORD: ${HEARTBEAT_INTERNAL_PASSWORD:-}
      MONITORING_INTERNAL_PASSWORD: ${MONITORING_INTERNAL_PASSWORD:-}
      BEATS_SYSTEM_PASSWORD: ${BEATS_SYSTEM_PASSWORD:-}
    networks:
      - elk
    depends_on:
      - elasticsearch

  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro,Z
      - elasticsearch:/usr/share/elasticsearch/data:Z
    ports:
      - 9210:9200
      - 9310:9300
    environment:
      node.name: elasticsearch
      ES_JAVA_OPTS: -Xms512m -Xmx512m
      # Bootstrap password.
      # Used to initialize the keystore during the initial startup of
      # Elasticsearch. Ignored on subsequent runs.
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD:-}
      # Use single node discovery in order to disable production mode and avoid bootstrap checks.
      # see: https://www.elastic.co/docs/deploy-manage/deploy/self-managed/bootstrap-checks
      discovery.type: single-node
    networks:
      - elk
    restart: unless-stopped

  logstash:
    build:
      context: logstash/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro,Z
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro,Z
    ports:
      - 5044:5044
      - 50000:50000/tcp
      - 50000:50000/udp
      - 9600:9600
    environment:
      LS_JAVA_OPTS: -Xms256m -Xmx256m
      LOGSTASH_INTERNAL_PASSWORD: ${LOGSTASH_INTERNAL_PASSWORD:-}
    networks:
      - elk
    depends_on:
      - elasticsearch
    restart: unless-stopped

  kibana:
    build:
      context: kibana/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    volumes:
      - ./kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml:ro,Z
    ports:
      - 7000:5601
    environment:
      KIBANA_SYSTEM_PASSWORD: ${KIBANA_SYSTEM_PASSWORD:-}
    networks:
      - elk
    depends_on:
      - elasticsearch
    restart: unless-stopped
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

networks:
  elk:
    driver: bridge

volumes:
  elasticsearch:

Change the default ports as needed; in this example Elasticsearch is published on host ports 9210/9310 and Kibana on host port 7000.

5. Next, initialize the Elasticsearch users and roles by running:

docker compose up setup

If the setup image needs to be built (or rebuilt) first, run:

docker compose build setup

and then run the setup service again:

docker compose up setup

6. If the initialization above completes without errors, start the remaining components of the ELK stack:

docker compose up


You can also append the -d flag to run all services in the background (detached mode), which is the usual way to run the stack in production:

docker compose up -d
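To check that the containers came up, a quick sketch (run from the docker-elk directory):

docker compose ps                 # all services should be listed as running
docker compose logs -f kibana     # follow Kibana's log until it reports that the server is ready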

7. Wait for Kibana to finish initializing (roughly a minute), then open http://localhost:7000 in a browser (the host port mapped to Kibana's 5601 in docker-compose.yml) and log in with the user elastic and the ELASTIC_PASSWORD you set in .env.
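You can also verify Elasticsearch directly from the host before opening Kibana; a minimal check, assuming the port mapping above and the password from .env (replace the placeholder):

curl -u elastic:'<ELASTIC_PASSWORD>' "http://localhost:9210/_cluster/health?pretty"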

Disabling paid features

You can cancel an ongoing trial and revert to the Basic license either from Kibana's License Management panel or with Elasticsearch's start_basic Licensing API. Note that if the license has not been switched to basic (or upgraded) before the trial expires, the second method is the only way to regain access to Kibana.
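A minimal sketch of the API route, using the Elasticsearch host port mapped above (replace the password placeholder):

curl -X POST -u elastic:'<ELASTIC_PASSWORD>' "http://localhost:9210/_license/start_basic?acknowledge=true&pretty"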

III. Configuration and usage

1. Collecting the local vllm.log with Filebeat
Install Filebeat
If Filebeat is not installed yet, it can be installed as follows (Ubuntu example; the downloaded package version should match the one you install — this walkthrough uses 9.0.1, even though the stack itself is 8.8.0):

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-9.0.1-amd64.deb
sudo dpkg -i filebeat-9.0.1-amd64.deb

Enter the target directory and install:

cd /mnt/program/Qwen3
(base) root@VM-0-2-ubuntu:/mnt/program/Qwen3# sudo dpkg -i filebeat-9.0.1-amd64.deb
Selecting previously unselected package filebeat.
(Reading database ... 162166 files and directories currently installed.)
Preparing to unpack filebeat-9.0.1-amd64.deb ...
Unpacking filebeat (9.0.1) ...
Setting up filebeat (9.0.1) ...

2. Start the service

sudo systemctl start filebeat

Check its status:

sudo systemctl status filebeat

View its logs:

sudo journalctl -u filebeat -f

Alternatively, install it via apt:

sudo apt-get install filebeat
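Note that apt-get install filebeat only works after the Elastic package repository has been configured; a sketch of the usual setup for the 8.x line, following Elastic's documented repository layout:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
sudo apt-get update && sudo apt-get install filebeat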

3. Review the configuration file

(base) root@VM-0-2-ubuntu:/mnt/program/Qwen3# cat /etc/filebeat/filebeat.yml
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
# ============================== Filebeat inputs ===============================
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input-specific configurations.
# filestream is an input for collecting log messages from files.
#- type: filestream
- type: journald
  # seek: cursor
  # Unique ID among all inputs, an ID is required.
  # id: qwen-vllm-journal
  #include_matches:
  #  - _SYSTEMD_UNIT=qwen-vllm.service
  # Optional: maximum number of journal entries to read
  #max_entries: 1000
  # Change to true to enable this input configuration.
  enabled: true
  units:
    - qwen-vllm.service
# Paths that should be crawled and fetched. Glob based paths.
#paths:
# - /var/log/*.log
#- c:\programdata\elasticsearch\logs\*
# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
# Line filtering happens after the parsers pipeline. If you would like to filter lines
# before parsers, use include_message parser.
#exclude_lines: ['^DBG']
# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list.
# Line filtering happens after the parsers pipeline. If you would like to filter lines
# before parsers, use include_message parser.
#include_lines: ['^ERR', '^WARN']
# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
#prospector.scanner.exclude_files: ['.gz$']
# Optional additional fields. These fields can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1
# journald is an input for collecting logs from Journald
#- type: journald
# Unique ID among all inputs, if the ID changes, all entries
# will be re-ingested
#id: my-journald-id
# The position to start reading from the journal, valid options are:
# - head: Starts reading at the beginning of the journal.
# - tail: Starts reading at the end of the journal.
# This means that no events will be sent until a new message is written.
# - since: Use also the `since` option to determine when to start reading from.
#seek: head
# A time offset from the current time to start reading from.
# To use since, seek option must be set to since.
#since: -24h
# Collect events from the service and messages about the service,
# including coredumps.
#units:
#- docker.service
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
# ================================== General ===================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboard archive. By default, this URL
# has a value that is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
# =================================== Kibana ===================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  host: "http://localhost:7000"

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"
# Kibana Space ID
# ID of the Kibana Space into which the dashboards should be loaded. By default,
# the Default Space will be used.
#space.id:
# =============================== Elastic Cloud ================================
# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9210"]
  username: "elastic"
  password: "dff$#123e12"

  # Performance preset - one of "balanced", "throughput", "scale",
  # "latency", or "custom".
  preset: balanced

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors, use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch outputs are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:

# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

Configure Filebeat to collect vllm.log
Edit the Filebeat configuration file /etc/filebeat/filebeat.yml and add a collection path for vllm.log:

(base) root@VM-0-2-ubuntu:/mnt/program/Qwen3# nano /etc/filebeat/filebeat.yml
(base) root@VM-0-2-ubuntu:/mnt/program/Qwen3# sudo filebeat test config
Config OK
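Connectivity to the configured output can also be checked before (re)starting the service:

sudo filebeat test output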

sudo filebeat modules enable system

nano /etc/filebeat/modules.d/system.yml

# Module: system
# Docs: https://www.elastic.co/guide/en/beats/filebeat/main/filebeat-module-system.html

- module: system
  # Syslog
  syslog:
    enabled: true
    var.paths: ["/var/log/syslog*",
                "/var/log/messages*"]

  journal:
    enabled: true
    var.include_matches:
      - _SYSTEMD_UNIT=qwen-vllm.service

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

    # Use journald to collect system logs
    #var.use_journald: false

  # Authorization logs
  auth:
    enabled: false

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

    # Use journald to collect auth logs
    #var.use_journald: false
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /path/to/vllm.log        # replace with the actual path to vllm.log
  fields:
    log_type: vllm
  fields_under_root: true
  multiline.pattern: '^\['     # if vllm.log contains multi-line entries (e.g. exception stack traces), configure multiline accordingly
  multiline.negate: true
  multiline.match: after
output.elasticsearch:
  hosts: ["localhost:9200"]    # Elasticsearch address; adjust to your deployment (the compose file above publishes it on host port 9210)

The fields option adds custom fields, which makes filtering and querying in Kibana easier.
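Once events are flowing, the custom field can be used to pull back only the vllm entries; a sketch against the Elasticsearch host port mapped above (replace the password placeholder):

curl -u elastic:'<ELASTIC_PASSWORD>' "http://localhost:9210/filebeat-*/_search?q=log_type:vllm&size=1&pretty"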

4. If logs should be processed by Logstash instead, change the output to Logstash (Filebeat supports only one active output, so comment out output.elasticsearch first):

output.logstash:
  hosts: ["localhost:5044"]

Start and test Filebeat

sudo systemctl enable filebeat
sudo systemctl start filebeat
sudo tail -f /var/log/filebeat/filebeat

Confirm that Filebeat is collecting and shipping logs correctly.
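One way to confirm end-to-end delivery is to count the documents that have reached Elasticsearch; a sketch using the elastic superuser and the host port mapped earlier (replace the password placeholder):

curl -u elastic:'<ELASTIC_PASSWORD>' "http://localhost:9210/filebeat-*/_count?pretty"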

5. Run the Filebeat setup and restart the service

sudo filebeat setup
sudo systemctl restart filebeat

Verify that logs are being collected:

sudo journalctl -u qwen-vllm.service -f
sudo tail -f /var/log/filebeat/filebeat

Check the Filebeat log to see whether the relevant log events are being picked up.


6. Web configuration

1. Create a Data View

  1. Open Kibana
    • Open Kibana in a browser and log in.
  2. Go to Data Views
    • In the left navigation, click Kibana, then select Data Views.
  3. Create a new Data View:
    • Click Create data view.
    • In the Name field, enter a name (for example: filebeat-logs).
    • In the Index pattern field, enter an index pattern (for example: filebeat-*, which matches backing indices such as .ds-filebeat-9.0.1-2025.05.25-000001).
    • If needed, select the time field (usually @timestamp).
    • Click Save data view. (A data view can also be created through the Kibana API; see the sketch after this list.)
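As an alternative to the UI steps above, the same data view can be created through Kibana's data views API; a sketch assuming the Kibana host port 7000 mapped in docker-compose.yml (replace the password placeholder):

curl -X POST -u elastic:'<ELASTIC_PASSWORD>' -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
  "http://localhost:7000/api/data_views/data_view" \
  -d '{"data_view": {"title": "filebeat-*", "name": "filebeat-logs", "timeFieldName": "@timestamp"}}'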

2. View the log data

  1. Open Discover
    • In the left navigation, click Discover.
  2. Select the Data View:
    • In the drop-down at the top left, select the Data View you just created (for example: filebeat-logs).
  3. View the logs:
    • Kibana displays the log data from that index.
    • Use the time filter (top right) and the search bar (top) to filter and search the logs.
  4. Inspect the fields:
    • In the field list on the left, click a field name to see the distribution of its values.
    • Click the Add button next to a field name to add it as a column in the log table.


3. Use filters

  1. Time filter
    • In the time picker at the top right, choose the time range to view (for example: last 15 minutes, last 1 hour).
  2. Field filters
    • In the log table, click a field value and choose Filter for value or Filter out value.
  3. Search bar
    • In the search bar at the top, enter keywords or Lucene query syntax to filter the logs (for example: message: "error").

4. Save and export

  1. Save a search
    • Click Save at the top to store the current search conditions as a view for quick access later.
  2. Export data
    • Click Share at the top, then choose CSV Reports or Raw Documents to export the log data as CSV or JSON.

5. Visualize the log data

  1. Open Visualize Library
    • In the left navigation, click Visualize Library.
  2. Create a visualization:
    • Click Create visualization and choose a chart type (bar chart, pie chart, etc.).
    • Select the data source (e.g. the filebeat-logs Data View).
    • Configure the X and Y axes, for example log counts over time, or counts grouped by a field value.
  3. Save the visualization:
    • When finished, click Save.


6. Use a Dashboard

  1. Open Dashboard
    • In the left navigation, click Dashboard.
  2. Create a dashboard:
    • Click Create dashboard.
    • Add the visualizations or saved searches created earlier.
  3. Save the dashboard:
    • When finished, click Save.
