Kafka Overview and Installing Kafka with KRaft on Ubuntu

Kafka Overview

Kafka is an open-source stream-processing platform developed by the Apache Software Foundation. It is designed to handle large-scale data streams and provides a reliable, high-throughput, low-latency messaging system. Kafka can be used to build real-time data pipelines and streaming applications, letting different applications, systems, and data sources exchange data efficiently.

Kafka's core concepts:

Messages: Kafka is a publish/subscribe messaging system that organizes messages by topic. Producers publish messages to topics, and consumers subscribe to one or more topics to receive them.

Topics: A topic is a category of messages, and each topic can contain one or more partitions. Messages published to a topic are distributed across its partitions according to certain rules.

Partitions: A topic can be split into multiple partitions, each an ordered, durable sequence of message records. Partitions let Kafka scale horizontally and allow multiple consumers to process messages in parallel.

Producers: Producers publish messages to Kafka topics.

Consumers: Consumers subscribe to topics and process the messages they receive.

Brokers: A Kafka cluster consists of multiple brokers, each an independent Kafka server responsible for storing data and handling messages.

Kafka's main characteristics:

Durability: messages are persisted to disk, so they are not lost.

High throughput: Kafka handles large volumes of data while keeping latency low, which suits large-scale data processing and analytics.

Scalability: the cluster can be scaled horizontally to handle more data and higher load.

Fault tolerance: replication provides data backup and failover, so data stays reliable and available even when some nodes fail.

Kafka is widely used for stream processing, real-time log processing, and metrics monitoring, and many companies rely on it to build real-time data pipelines and process data at scale.
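
To make the producer and consumer roles above concrete, here is a minimal Java producer sketch. It is an illustration rather than part of the original walkthrough: it assumes the official kafka-clients library (for example version 3.6.0) is on the classpath, that a broker is reachable at localhost:9092 (the single-node setup installed below), and that the test-topic topic created in the test step exists.

// DemoProducer.java: publish a few string messages to "test-topic".
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class DemoProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // broker address (assumption: local single-node broker)
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // try-with-resources closes the producer, which flushes any buffered records.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 5; i++) {
                // No key is set, so the default partitioner chooses the partition.
                producer.send(new ProducerRecord<>("test-topic", "message-" + i));
            }
        }
    }
}

Compile and run it with the kafka-clients jar on the classpath; the messages can then be read back with the console consumer shown in the test section below.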

How to Install Kafka (Kafka with KRaft) on Ubuntu

Kafka can be installed on Ubuntu with the steps below. Note that these instructions cover Kafka 3.6.0; before installing, make sure Java 8 or a newer version is available.

About KRaft

Kafka 2.8 introduced KRaft (Kafka Raft) as a new way to manage Kafka's metadata, replacing the original ZooKeeper-based approach. KRaft is a metadata management system built on the Raft consensus protocol, so Kafka no longer has to depend on ZooKeeper. Kafka with KRaft uses the Raft protocol to manage and maintain Kafka's metadata, including partition assignments and cluster configuration, which simplifies deployment and operation because there is no separate ZooKeeper cluster to maintain.

Steps

1. Install Java

Check whether Java is already installed:

java -version

If Java is missing or needs to be updated, install OpenJDK:

sudo apt update
sudo apt install default-jdk

2. Download Kafka

Download the desired Kafka version (for example 3.6.0) from the official Apache Kafka website. Kafka packages are named kafka_<Scala version>-<Kafka version>: in kafka_2.13-3.6.0.tgz, 3.6.0 is the Kafka version and 2.13 means this build was compiled with Scala 2.13. The released package already contains the compiled Scala code, so you only need to follow the Kafka installation steps; there is no need to install Scala separately.

wget https://downloads.apache.org/kafka/3.6.0/kafka_2.13-3.6.0.tgz

3. Extract and move Kafka

Extract the downloaded archive:

tar -xzf kafka_2.13-3.6.0.tgz

Move the extracted directory to the desired location, for example under /opt:

sudo mv kafka_2.13-3.6.0 /opt/kafka

4. Start Kafka in KRaft mode

Run the following commands from the Kafka installation directory (for example, cd /opt/kafka first).

Generate a cluster UUID:

KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"

Use bin/kafka-storage.sh format to format the log directories for the KRaft cluster:

bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server.properties

Start the Kafka server:

# run in the foreground
bin/kafka-server-start.sh config/kraft/server.properties

# or run in the background
nohup bin/kafka-server-start.sh config/kraft/server.properties > my_kafka_run.log 2>&1 &

Once the Kafka server has started successfully, you have a basic Kafka environment and can start using it.

Startup output (excerpt):

[2023-11-28 07:46:27,307] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2023-11-28 07:46:27,764] INFO [ControllerServer id=1] Starting controller (kafka.server.ControllerServer)
[2023-11-28 07:46:28,699] INFO [BrokerServer id=1] Starting broker (kafka.server.BrokerServer)
[2023-11-28 07:46:29,360] INFO [BrokerLifecycleManager id=1] Successfully registered broker 1 with broker epoch 5 (kafka.server.BrokerLifecycleManager)
[2023-11-28 07:46:29,612] INFO [BrokerLifecycleManager id=1] The broker has been unfenced. Transitioning from RECOVERY to RUNNING. (kafka.server.BrokerLifecycleManager)
[2023-11-28 07:46:29,663] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
[2023-11-28 07:46:29,665] INFO Kafka version: 3.6.0 (org.apache.kafka.common.utils.AppInfoParser)
[2023-11-28 07:46:29,666] INFO [KafkaRaftServer nodeId=1] Kafka Server started (kafka.server.KafkaRaftServer)
(... most intermediate log lines, including the full KafkaConfig values dump, omitted for brevity ...)

5. Test Kafka

Create a topic and send/receive a few messages to verify the installation. For example, create a topic named test-topic:

bin/kafka-topics.sh --create --topic test-topic --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1

Send messages to the topic with the console producer:

bin/kafka-console-producer.sh --topic test-topic --bootstrap-server localhost:9092

In another terminal window, start a console consumer to receive the messages:

bin/kafka-console-consumer.sh --topic test-topic --bootstrap-server localhost:9092 --from-beginning

These steps install and start Kafka on Ubuntu and run a simple test to confirm that it is working.
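
The console producer and consumer are the quickest way to verify the setup. As a programmatic counterpart, here is a minimal Java consumer sketch; like the producer example earlier, it is an illustration under the assumptions that the kafka-clients library is on the classpath, a broker is running at localhost:9092, and the test-topic topic created above exists.

// DemoConsumer.java: read messages from "test-topic" and print them.
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class DemoConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // broker address (assumption: local single-node broker)
        props.put("group.id", "demo-group");                 // any group id works for this test
        props.put("auto.offset.reset", "earliest");          // read from the beginning, like --from-beginning
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic"));
            // Poll a few times, print whatever arrives, then exit.
            for (int i = 0; i < 10; i++) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}

Running this while the console producer (or the DemoProducer sketch) is sending messages should print each record with its partition and offset.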
