
Kafka: Single-Node, Pseudo-Cluster, and Cluster Deployment on Windows


1. Kafka single-node deployment

1.1 Installing ZooKeeper

(1) Download the package

Official site: Apache ZooKeeper

I am using apache-zookeeper-3.7.1-bin.tar.gz.

Note: the ZooKeeper installation path must not contain Chinese characters, and spaces are best avoided too (for example a path like Program Files).

After downloading, extract it to a local directory whose path contains no Chinese characters, for example D:/kafka.

(2) Edit the configuration file

In ZooKeeper's conf directory, copy zoo_sample.cfg and rename the copy to zoo.cfg.

Edit the paths in zoo.cfg (data and logs are newly created directories):

# Directory for in-memory database snapshots
dataDir=D:/kafka/zookeeper/stand-alone/zookeeper/data
# Directory for transaction logs
dataLogDir=D:/kafka/zookeeper/stand-alone/zookeeper/logs
# AdminServer port
admin.serverPort=7070
# Client port
clientPort=2181

Key pitfall: on Windows, file paths in the configuration must be written with "/" or escaped "\\"; a bare "\" is not recognized. When ZooKeeper starts it also launches an AdminServer, which takes port 8080 by default; if another application already occupies 8080, ZooKeeper will fail to start, so the admin.serverPort=7070 setting above moves it to a different port (7070 is arbitrary, any free port works). In a single-machine cluster, every node must use a different admin.serverPort.

(3) Parameter descriptions:
 
tickTime: the heartbeat interval, in milliseconds, between ZooKeeper servers or between clients and servers; a heartbeat is sent every tickTime.
 
initLimit: the maximum number of heartbeat intervals (tickTime) that ZooKeeper allows for the initial connection of a "client" (here this does not mean a user client, but a follower server in the ensemble connecting to the leader). If no response has arrived after that many heartbeats, the connection is considered failed. With initLimit=10 and tickTime=2000, the total is 10*2000 ms = 20 seconds.
 
syncLimit: the maximum number of tickTime intervals allowed between a request and its acknowledgement when the leader and followers exchange messages. With syncLimit=5 and tickTime=2000, the total is 5*2000 ms = 10 seconds.
 
dataDir: as the name suggests, the directory where ZooKeeper stores its data; by default the transaction log files are written here as well.
 
clientPort: the port clients use to connect to the ZooKeeper server; ZooKeeper listens on this port and accepts client requests.

(4) Create the data and logs directories


(5) Start the service

Go to the bin directory and double-click zkServer.cmd.

If the window closes immediately, check that the JDK environment variables are set correctly and that the path contains no Chinese characters.

(6) Verify the installation

In the bin directory, double-click zkCli.cmd to open a client (keep the zkServer DOS window open). If the output contains "Welcome", the installation succeeded.
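If you want to check more than the welcome banner, a few commands inside zkCli.cmd confirm that reads and writes work. This is just a quick smoke test; the /demo node name is arbitrary:

ls /
create /demo "hello"
get /demo
delete /demo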


1.2 Installing Kafka

(1) Download the package

Download the package from the Kafka website and extract it. I am using kafka_2.13-2.8.0.tgz, extracted to a local directory, here D:\kafka.

(2) Edit the configuration file

Kafka needs the following parameters changed in server.properties:

# Broker id; keep the default 0 for a single node; every node in a cluster must be different
broker.id=0
# Log file path
log.dirs=D:/kafka/kafka/stand-alone/kafka/kafka-logs
# Kafka listener port, 9092 by default; can be left unset for a single node
#listeners=PLAINTEXT://:9092
# Local ZooKeeper (the default; no change needed)
zookeeper.connect=localhost:2181

(3) Start the Kafka server

Open a new cmd window in the Kafka installation directory:

cd D:\kafka\kafka\stand-alone\kafka

Run:

.\bin\windows\kafka-server-start.bat .\config\server.properties

Or use absolute paths:

D:\kafka\kafka\stand-alone\kafka\bin\windows\kafka-server-start.bat D:\kafka\kafka\stand-alone\kafka\config\server.properties


Note: do not close this window. Make sure the ZooKeeper instance is up and running before starting Kafka.

1.3 Testing

(1) Create a topic

Open a new cmd window and go to Kafka's windows directory:

cd D:\kafka\kafka\stand-alone\kafka\bin\windows

Run the following command to create a topic named test1:

.\kafka-topics.bat --create --topic test1 --bootstrap-server 127.0.0.1:9092

Kafka:

(2) Check the topic status

.\kafka-topics.bat --describe --topic test1 --bootstrap-server 127.0.0.1:9092

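To confirm the broker actually moves data and not just metadata, you can produce and consume a few messages with the console tools that ship in the same windows directory. A minimal check, run from two separate cmd windows, using the test1 topic created above:

:: Window 1 - producer: every line you type is sent as one message
.\kafka-console-producer.bat --topic test1 --bootstrap-server 127.0.0.1:9092

:: Window 2 - consumer: should print the lines typed in window 1
.\kafka-console-consumer.bat --topic test1 --from-beginning --bootstrap-server 127.0.0.1:9092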

(3) Stop Kafka

.\kafka-server-stop.bat

2. Kafka pseudo-cluster

2.1 ZooKeeper cluster setup

(1) Create configurations for multiple nodes

Starting from the single-node setup, make copies; a ZooKeeper cluster needs at least three nodes, and their client ports must differ: use 2181, 2182, and 2183.

Copy the zoo.cfg used by the single node into three files named zoo-1.cfg, zoo-2.cfg, and zoo-3.cfg.

(2) Create each node's data directory with its myid file, plus the logs directories


The myid files under data2181, data2182, and data2183 contain 1, 2, and 3 respectively.
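If you prefer to create these from the command line rather than by hand, a sketch like the following works; the base path matches the layout used in the configs below, so adjust it to your own:

cd /d D:\kafka\zookeeper\colony\zookeeper
mkdir data2181 data2182 data2183 logs1 logs2 logs3
:: each myid file must contain only the node's id (1, 2 or 3)
(echo 1)> data2181\myid
(echo 2)> data2182\myid
(echo 3)> data2183\myid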

(3) Configuration for the three nodes:

zoo-1.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=D:/kafka/zookeeper/colony/zookeeper/data2181

dataLogDir=D:/kafka/zookeeper/colony/zookeeper/logs1
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60

admin.serverPort=8080
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
# Cluster configuration
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890

## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true

 zoo-2.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=D:/kafka/zookeeper/colony/zookeeper/data2182

dataLogDir=D:/kafka/zookeeper/colony/zookeeper/logs2
# the port at which the clients will connect
clientPort=2182
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60

admin.serverPort=8081
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
# Cluster configuration
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890

## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true

zoo-3.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=D:/kafka/zookeeper/colony/zookeeper/data2183

dataLogDir=D:/kafka/zookeeper/colony/zookeeper/logs3
# the port at which the clients will connect
clientPort=2183
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60


admin.serverPort=8083
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
# Cluster configuration
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890

## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true

(4) Create three startup scripts

zkServer-2181.cmd

@echo off
REM Licensed to the Apache Software Foundation (ASF) under one or more
REM contributor license agreements.  See the NOTICE file distributed with
REM this work for additional information regarding copyright ownership.
REM The ASF licenses this file to You under the Apache License, Version 2.0
REM (the "License"); you may not use this file except in compliance with
REM the License.  You may obtain a copy of the License at
REM
REM     http://www.apache.org/licenses/LICENSE-2.0
REM
REM Unless required by applicable law or agreed to in writing, software
REM distributed under the License is distributed on an "AS IS" BASIS,
REM WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
REM See the License for the specific language governing permissions and
REM limitations under the License.

setlocal
call "%~dp0zkEnv.cmd"

set ZOOMAIN=org.apache.zookeeper.server.quorum.QuorumPeerMain
set ZOOCFG=D:/kafka/zookeeper/colony/zookeeper/conf/zoo-1.cfg
set ZOO_LOG_FILE=zookeeper-%USERNAME%-server-%COMPUTERNAME%.log

echo on
call %JAVA% "-Dzookeeper.audit.enable=true" "-Dzookeeper.log.dir=%ZOO_LOG_DIR%" "-Dzookeeper.root.logger=%ZOO_LOG4J_PROP%" "-Dzookeeper.log.file=%ZOO_LOG_FILE%" "-XX:+HeapDumpOnOutOfMemoryError" "-XX:OnOutOfMemoryError=cmd /c taskkill /pid %%%%p /t /f" -cp "%CLASSPATH%" %ZOOMAIN% "%ZOOCFG%" %*

endlocal

pause

zkServer-2182.cmd

@echo off
REM Licensed to the Apache Software Foundation (ASF) under one or more
REM contributor license agreements.  See the NOTICE file distributed with
REM this work for additional information regarding copyright ownership.
REM The ASF licenses this file to You under the Apache License, Version 2.0
REM (the "License"); you may not use this file except in compliance with
REM the License.  You may obtain a copy of the License at
REM
REM     http://www.apache.org/licenses/LICENSE-2.0
REM
REM Unless required by applicable law or agreed to in writing, software
REM distributed under the License is distributed on an "AS IS" BASIS,
REM WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
REM See the License for the specific language governing permissions and
REM limitations under the License.

setlocal
call "%~dp0zkEnv.cmd"

set ZOOMAIN=org.apache.zookeeper.server.quorum.QuorumPeerMain
set ZOOCFG=D:/kafka/zookeeper/colony/zookeeper/conf/zoo-2.cfg
set ZOO_LOG_FILE=zookeeper-%USERNAME%-server-%COMPUTERNAME%.log

echo on
call %JAVA% "-Dzookeeper.audit.enable=true" "-Dzookeeper.log.dir=%ZOO_LOG_DIR%" "-Dzookeeper.root.logger=%ZOO_LOG4J_PROP%" "-Dzookeeper.log.file=%ZOO_LOG_FILE%" "-XX:+HeapDumpOnOutOfMemoryError" "-XX:OnOutOfMemoryError=cmd /c taskkill /pid %%%%p /t /f" -cp "%CLASSPATH%" %ZOOMAIN% "%ZOOCFG%" %*

endlocal

pause

zkServer-2183.cmd

@echo off
REM Licensed to the Apache Software Foundation (ASF) under one or more
REM contributor license agreements.  See the NOTICE file distributed with
REM this work for additional information regarding copyright ownership.
REM The ASF licenses this file to You under the Apache License, Version 2.0
REM (the "License"); you may not use this file except in compliance with
REM the License.  You may obtain a copy of the License at
REM
REM     http://www.apache.org/licenses/LICENSE-2.0
REM
REM Unless required by applicable law or agreed to in writing, software
REM distributed under the License is distributed on an "AS IS" BASIS,
REM WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
REM See the License for the specific language governing permissions and
REM limitations under the License.

setlocal
call "%~dp0zkEnv.cmd"

set ZOOMAIN=org.apache.zookeeper.server.quorum.QuorumPeerMain
set ZOOCFG=D:/kafka/zookeeper/colony/zookeeper/conf/zoo-3.cfg
set ZOO_LOG_FILE=zookeeper-%USERNAME%-server-%COMPUTERNAME%.log

echo on
call %JAVA% "-Dzookeeper.audit.enable=true" "-Dzookeeper.log.dir=%ZOO_LOG_DIR%" "-Dzookeeper.root.logger=%ZOO_LOG4J_PROP%" "-Dzookeeper.log.file=%ZOO_LOG_FILE%" "-XX:+HeapDumpOnOutOfMemoryError" "-XX:OnOutOfMemoryError=cmd /c taskkill /pid %%%%p /t /f" -cp "%CLASSPATH%" %ZOOMAIN% "%ZOOCFG%" %*

endlocal

pause
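Once all three zkServer-218x.cmd windows are running, you can check that the quorum formed. One way is to connect a client to each port in turn; another is to query the AdminServer ports configured above (8080/8081/8083), which report the server state (leader or follower). Both are only verification sketches; curl ships with Windows 10 and later, otherwise open the URLs in a browser:

:: connect a client to each node; "ls /" should succeed on all three
.\zkCli.cmd -server 127.0.0.1:2181
.\zkCli.cmd -server 127.0.0.1:2182
.\zkCli.cmd -server 127.0.0.1:2183

:: or ask each AdminServer for its stats
curl http://127.0.0.1:8080/commands/stat
curl http://127.0.0.1:8081/commands/stat
curl http://127.0.0.1:8083/commands/stat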

2.2 Kafka cluster setup

(1) Make three copies of the single-node Kafka directory


(2) Configuration files

  • broker.id: the broker id; must not repeat within the same cluster
  • listeners: the port the broker listens on; brokers on the same machine cannot share a port
  • log.dirs: where the data is stored
  • zookeeper.connect: the connection address of the ZooKeeper cluster

Only broker.id, listeners, and log.dirs differ between the three copies (see the condensed view below); zookeeper.connect is the same for all of them.
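A condensed view of the per-broker settings, pulled from the full files that follow (forward slashes are used in log.dirs because a bare backslash is not a valid escape in a properties file):

# kafka-1\config\server.properties
broker.id=1
listeners=PLAINTEXT://127.0.0.1:9092
log.dirs=D:/kafka/kafka/colony/kafka-1/kafka-logs

# kafka-2\config\server.properties
broker.id=2
listeners=PLAINTEXT://127.0.0.1:9093
log.dirs=D:/kafka/kafka/colony/kafka-2/kafka-logs

# kafka-3\config\server.properties
broker.id=3
listeners=PLAINTEXT://127.0.0.1:9094
log.dirs=D:/kafka/kafka/colony/kafka-3/kafka-logs

# identical on all three brokers
zookeeper.connect=localhost:2181,localhost:2182,localhost:2183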

kafka-1: server.properties

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from 
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://127.0.0.1:9092

# Hostname and port the broker will advertise to producers and consumers. If not set, 
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=D:/kafka/kafka/colony/kafka-1/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181,localhost:2182,localhost:2183

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

 kafka-2: server.properties

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=2

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from 
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://127.0.0.1:9093

# Hostname and port the broker will advertise to producers and consumers. If not set, 
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=D:/kafka/kafka/colony/kafka-2/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181,localhost:2182,localhost:2183


# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

kafka-3: server.properties

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=3

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from 
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://127.0.0.1:9094

# Hostname and port the broker will advertise to producers and consumers. If not set, 
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=D:/kafka/kafka/colony/kafka-3/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181,localhost:2182,localhost:2183

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

(3) Startup commands

Note: be careful with the paths, spacing, and similar details.

First node: kafka-1

D:\kafka\kafka\colony\kafka-1\bin\windows\kafka-server-start.bat D:\kafka\kafka\colony\kafka-1\config\server.properties


Second node: kafka-2

D:\kafka\kafka\colony\kafka-2\bin\windows\kafka-server-start.bat D:\kafka\kafka\colony\kafka-2\config\server.properties


Third node: kafka-3

D:\kafka\kafka\colony\kafka-3\bin\windows\kafka-server-start.bat D:\kafka\kafka\colony\kafka-3\config\server.properties

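With all three brokers up, creating a topic with replication factor 3 is a quick way to confirm that they joined the same cluster. A verification sketch (the cluster-test topic name is arbitrary; run it from any of the bin\windows directories):

.\kafka-topics.bat --create --topic cluster-test --partitions 3 --replication-factor 3 --bootstrap-server 127.0.0.1:9092,127.0.0.1:9093,127.0.0.1:9094

:: every partition should show a leader plus replicas spread over brokers 1, 2 and 3
.\kafka-topics.bat --describe --topic cluster-test --bootstrap-server 127.0.0.1:9092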

3. Kafka distributed cluster

Set up ZooKeeper and Kafka clusters across three servers.

3.1 ZooKeeper cluster setup


Host      IP
Host 1    192.168.126.135
Host 2    192.168.126.136
Host 3    192.168.126.137

(1) On each of the three Windows machines, download the ZooKeeper package and extract it to the chosen directory

(2) Create a zoo.cfg on each machine

On each Windows server, copy zookeeper/conf/zoo_sample.cfg to zoo.cfg.

(3) Create the data and logs directories, and under data create a myid file for the cluster whose content is the current host's id; with three servers, the ids are 1, 2, and 3

(4) Edit zoo.cfg

# The path of the data directory created above; use your actual path
dataDir=D:/kafka/zookeeper/stand-alone/zookeeper/data
# Directory for transaction logs
dataLogDir=D:/kafka/zookeeper/stand-alone/zookeeper/logs
# AdminServer port
admin.serverPort=7070
# Client port
clientPort=2181

Then append the node list at the end of the file:

server.1=192.168.126.135:2888:3888
server.2=192.168.126.136:2888:3888
server.3=192.168.126.137:2888:3888

The number after "server." is the host id assigned to that machine. It must match the myid file, otherwise the cluster will fail to start.

(5) Start ZooKeeper

Run ZooKeeper's zkServer.cmd from the command line:

D:\kafka\zookeeper\colony\zookeeper\bin\zkServer.cmd

3.2 Kafka cluster setup

(1) Install Kafka

On each of the three Windows servers, download and extract the Kafka package to the chosen directory

(2) Edit config/server.properties

Set broker.id to 1, 2, and 3 respectively:

# Matches the id configured for the corresponding ZooKeeper node above
broker.id=1

Set listeners:

# The IP of the local host
listeners=PLAINTEXT://192.168.126.135:9092

Set zookeeper.connect:

# Connection info for every node
zookeeper.connect=192.168.126.135:2181,192.168.126.136:2181,192.168.126.137:2181

Set the log directory:

log.dirs=D:/kafka/kafka/stand-alone/kafka/logs

(3) Go to the bin directory and start Kafka

D:/kafka/kafka/stand-alone/kafka/bin/windows/kafka-server-start.bat D:/kafka/kafka/stand-alone/kafka/config/server.properties

(4) Stop Kafka

D:/kafka/kafka/stand-alone/kafka/bin/windows/kafka-server-stop.bat
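As a final check of the distributed cluster, the topic test from the single-node section can be repeated against the three hosts. A sketch, run from any machine's bin\windows directory that can reach them (the demo topic name is arbitrary):

.\kafka-topics.bat --create --topic demo --partitions 3 --replication-factor 3 --bootstrap-server 192.168.126.135:9092,192.168.126.136:9092,192.168.126.137:9092
.\kafka-topics.bat --describe --topic demo --bootstrap-server 192.168.126.135:9092

:: produce on one host and consume from another to confirm cross-host traffic
.\kafka-console-producer.bat --topic demo --bootstrap-server 192.168.126.135:9092
.\kafka-console-consumer.bat --topic demo --from-beginning --bootstrap-server 192.168.126.136:9092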


 
