:toc:
:toclevels: 2
= kafkactl
A command-line interface for interaction with Apache Kafka
image:https://github.com/deviceinsight/kafkactl/actions/workflows/lint_test.yml/badge.svg[Build Status,link=https://github.com/deviceinsight/kafkactl/actions] | image:https://img.shields.io/badge/command-docs-blue.svg[command docs,link=https://deviceinsight.github.io/kafkactl/]
== Features
- Command auto-completion for bash, zsh, and fish, including dynamic completion for e.g. topics or consumer groups
- Support for Avro schemas
- Support for JSON schema registry
- Configuration of different contexts
- Direct access to Kafka clusters inside your Kubernetes cluster
- Support for consuming and producing protobuf-encoded messages
image::https://asciinema.org/a/vmxrTA0h8CAXPnJnSFk5uHKzr.svg[asciicast,link=https://asciinema.org/a/vmxrTA0h8CAXPnJnSFk5uHKzr]
== Installation
You can install the pre-compiled binary or compile from source.
=== Install the pre-compiled binary
homebrew:

[,bash]
----
# install kafkactl
brew install kafkactl

# upgrade kafkactl
brew upgrade kafkactl
----
winget:

[,bash]
----
winget install kafkactl
----
deb/rpm:

Download the .deb or .rpm package from the https://github.com/deviceinsight/kafkactl/releases[releases page] and install it with `dpkg -i` or `rpm -i`, respectively.
yay (AUR):

There's a kafkactl https://aur.archlinux.org/packages/kafkactl/[AUR package] available for Arch. Install it with your AUR helper of choice (e.g. https://github.com/Jguer/yay[yay]):

[,bash]
----
yay -S kafkactl
----
manually:

Download the pre-compiled binaries from the https://github.com/deviceinsight/kafkactl/releases[releases page] and copy them to the desired location.
=== Compiling from source
[,bash]
----
go install github.com/deviceinsight/kafkactl/v5@latest
----
NOTE: Make sure that `kafkactl` is in your `PATH`, otherwise auto-completion won't work.
== Configuration
If no config file is found, a default config is generated in `$HOME/.config/kafkactl/config.yml`.
This configuration is suitable for getting started with a single-node cluster on a local machine.
=== Create a config file
Create `$HOME/.config/kafkactl/config.yml` with a definition of the contexts that should be available:
[,yaml]
----
contexts:
  default:
    brokers:
      - localhost:9092
  remote-cluster:
    brokers:
      - remote-cluster001:9092
      - remote-cluster002:9092
      - remote-cluster003:9092

    # optional: tls config
    tls:
      enabled: true
      ca: my-ca
      cert: my-cert
      certKey: my-key
      # set insecure to true to ignore all tls verification (defaults to false)
      insecure: false

    # optional: sasl support
    sasl:
      enabled: true
      username: admin
      password: admin
      # optional: configure sasl mechanism as plaintext, scram-sha256, scram-sha512, oauth (defaults to plaintext)
      mechanism: oauth
      # optional: configure sasl version as v0, v1 (defaults to not configured). Refer to: https://github.com/IBM/sarama/issues/3000#issuecomment-2415829478
      version: v0
      # optional: tokenProvider configuration (only used for 'sasl.mechanism=oauth')
      tokenprovider:
        # plugin to use as token provider implementation (see plugin section)
        plugin: azure
        # optional: additional options passed to the plugin
        options:
          key: value

    # optional: access clusters running in kubernetes
    kubernetes:
      enabled: false
      binary: kubectl # optional
      kubeConfig: ~/.kube/config # optional
      kubeContext: my-cluster
      namespace: my-namespace
      # optional: docker image to use (the tag of the image will be suffixed by `-scratch` or `-ubuntu` depending on command)
      image: private.registry.com/deviceinsight/kafkactl
      # optional: secret for private docker registry
      imagePullSecret: registry-secret
      # optional: secret containing tls certificates (e.g. ca.crt, cert.crt, key.key)
      tlsSecret: tls-secret
      # optional: username to impersonate for the kubectl command
      asUser: user
      # optional: serviceAccount to use for the pod
      serviceAccount: my-service-account
      # optional: keep pod after exit (can be set to true for debugging)
      keepPod: true
      # optional: labels to add to the pod
      labels:
        key: value
      # optional: annotations to add to the pod
      annotations:
        key: value
      # optional: nodeSelector to add to the pod
      nodeSelector:
        key: value
      # optional: resource limits to add to the pod
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"
      # optional: affinity to add to the pod
      affinity:
        # note: other types of affinity are also supported
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: "<key>"
                    operator: "<operator>"
                    values: [ "<value>" ]
      # optional: tolerations to add to the pod
      tolerations:
        - key: "<key>"
          operator: "<operator>"
          value: "<value>"
          effect: "<effect>"

    # optional: clientID config (defaults to kafkactl-{username})
    clientID: my-client-id

    # optional: kafkaVersion (defaults to 2.5.0)
    kafkaVersion: 1.1.1

    # optional: timeout for admin requests (defaults to 3s)
    requestTimeout: 10s

    # optional: avro configuration
    avro:
      # optional: configure codec for (de)serialization as standard, avro (defaults to standard)
      # see: https://github.com/deviceinsight/kafkactl/issues/123
      jsonCodec: avro

    # optional: schema registry
    schemaRegistry:
      url: localhost:8081
      # optional: timeout for requests (defaults to 5s)
      requestTimeout: 10s
      # optional: basic auth credentials
      username: admin
      password: admin
      # optional: tls config for avro
      tls:
        enabled: true
        ca: my-ca
        cert: my-cert
        certKey: my-key
        # set insecure to true to ignore all tls verification (defaults to false)
        insecure: false

    # optional: default protobuf messages search paths
    protobuf:
      importPaths:
        - "/usr/include/protobuf"
      protoFiles:
        - "someMessage.proto"
        - "otherMessage.proto"
      protosetFiles:
        - "/usr/include/protoset/other.protoset"
      # see: https://pkg.go.dev/google.golang.org/protobuf@v1.36.6/encoding/protojson#MarshalOptions
      marshalOptions:
        allowPartial: true
        useProtoNames: true
        useEnumNumbers: true
        emitUnpopulated: true
        emitDefaultValues: true

    producer:
      # optional: changes the default partitioner
      partitioner: "hash"
      # optional: changes default required acks in produce request
      # see: https://pkg.go.dev/github.com/IBM/sarama?utm_source=godoc#RequiredAcks
      requiredAcks: "WaitForAll"
      # optional: maximum permitted size of a message (defaults to 1000000)
      maxMessageBytes: 1000000

    consumer:
      # optional: isolationLevel (defaults to ReadCommitted)
      isolationLevel: ReadUncommitted
----
[#_config_file_read_order]
The config file location is resolved by:
. checking for a provided commandline argument: --config-file=$PATH_TO_CONFIG
. evaluating the environment variable: export KAFKA_CTL_CONFIG=$PATH_TO_CONFIG
. checking for a project config file in the working directory (see <<_project_config_files>>)
. as default the config file is looked up from one of the following locations:
** $HOME/.config/kafkactl/config.yml
** $HOME/.kafkactl/config.yml
** $APPDATA/kafkactl/config.yml
** /etc/kafkactl/config.yml
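For example, the lookup above can be short-circuited by pointing kafkactl at an explicit file via the environment variable (the path below is purely illustrative):

```bash
# Point kafkactl at an explicit config file via the environment.
# The path is an example; use wherever your config actually lives.
export KAFKA_CTL_CONFIG="$HOME/clusters/staging/kafkactl-config.yml"
echo "$KAFKA_CTL_CONFIG"
```

Every subsequent kafkactl invocation in this shell will then use that file instead of the default locations.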
[#_project_config_files]
==== Project config files
In addition to the config file locations above, kafkactl allows creating a config file at project level. A project config file is meant to be placed at the root of a git repository and declares the kafka configuration for this repository/project.
In order to identify the config file as belonging to kafkactl, one of the following names must be used:

- kafkactl.yml
- .kafkactl.yml
During initialization kafkactl starts from the current working directory and recursively looks for a project level
config file. The recursive lookup ends at the boundary of a git repository (i.e. if a .git folder is found).
This way, kafkactl can be used conveniently anywhere in the git repository.
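As a sketch, a minimal project-level file might look like this (the broker hostname is illustrative, not taken from this document):

[,yaml]
----
# kafkactl.yml at the root of the repository
contexts:
  default:
    brokers:
      - kafka.my-project.example:9092
----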
[#_current_context]
==== Current context
The current context can be set via the command-line argument `--context`, via the environment variable `CURRENT_CONTEXT`, or it can be defined in a file.
If no current context is defined, the first context in the config file is used as the current context. Additionally, in this case a file storing the current context is created.
The file is typically stored next to the config file and named `current-context.yml`.
Its location can be overridden via the environment variable `KAFKA_CTL_WRITABLE_CONFIG`.
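Assuming the contexts from the example configuration, the writable file might contain little more than the selected context name (the exact field name is an assumption, not taken from this document):

[,yaml]
----
# current-context.yml (illustrative; stored next to config.yml)
current-context: remote-cluster
----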
=== Auto completion
==== bash

[,bash]
----
source <(kafkactl completion bash)
----

To load completions for each session, execute once:

Linux:

[,bash]
----
kafkactl completion bash > /etc/bash_completion.d/kafkactl
----

MacOS:

[,bash]
----
kafkactl completion bash > /usr/local/etc/bash_completion.d/kafkactl
----
==== zsh

If shell completion is not already enabled in your environment, you will need to enable it. You can execute the following once:

[,bash]
----
echo "autoload -U compinit; compinit" >> ~/.zshrc
----

To load completions for each session, execute once:

[,bash]
----
kafkactl completion zsh > "${fpath[1]}/_kafkactl"
----

You will need to start a new shell for this setup to take effect.
==== Fish

[,bash]
----
kafkactl completion fish | source
----

To load completions for each session, execute once:

[,bash]
----
kafkactl completion fish > ~/.config/fish/completions/kafkactl.fish
----
== Documentation
The documentation for all available commands can be found in the https://deviceinsight.github.io/kafkactl/[command docs].
