
LogGenerator

Generate logs from templates, or read logs from files, Elasticsearch or Kafka. Modify the logs on the fly to create new events, and finally send the logs to console, file, Elasticsearch, Kafka, UDP or TCP.

Install / Use

/learn @anders-wartoft/LogGenerator
About this skill

Quality Score: 0/100

Supported Platforms: Universal

README

LogGenerator

N.B., the format for configuration has radically changed in version 1.1. Please see the Latest Release Notes section below.

LogGenerator is a tool for debugging log streams, e.g., syslog, Kafka, UDP diodes and similar chains of log collection systems. The tool reads input from an input module, filters the input (adds a header, replaces text etc.) and finally writes the output with an output module.

LogGenerator combines input modules, filters and output modules into a chain. Each event from the input module is processed by zero or more filters, which can rewrite its contents. After filtering, the events are written with an output module.

Example: Read a log file, add a syslog header and write to a remote Kafka. In the syslog header, add a counter that starts at 100 and increases with each string. Also, add statistics messages (beginning of transaction etc.). When the events are stored in Kafka, start another LogGenerator that fetches the Kafka events, checks the counter and writes the events to null (to increase performance). Give a measurement of the elapsed time, how many items were received, the events per second and an estimate of the bandwidth usage, as well as a list of missed events (counter integers that are missing) and the next counter number we are expecting.

The example above is literally two commands.

java -jar target/LogGenerator{version}.jar -i file --name src/test/data/test.txt -f header -st "<{pri:}>{date:MMM dd HH:mm:ss} {oneOf:mymachine,yourmachine,localhost,{ipv4:192.168.0.0/16}} {string:a-z0-9/9}[{counter:a:100}]: " -o kafka -t OUTPUT -b 192.168.153.129:9092 -ci test2
java -jar target/LogGenerator{version}.jar -i kafka -ci test3 -t OUTPUT -b 192.168.153.129:9092 -f gap -r "\[(\d+)\]:" -o cmd -s true 

When running the last command, press Ctrl-C to see the gaps in the received data. Since we started the counter at 100, there should be at least one gap: 1-99.
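The gap filter in the second command extracts the counter with the regex and reports ranges of missing numbers. The logic can be sketched roughly like this (class and method names are illustrative, not LogGenerator's internals):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the gap-detection idea behind: -f gap -r "\[(\d+)\]:"
// Extract a counter from each event and report ranges of missing numbers.
public class GapSketch {
    private static final Pattern COUNTER = Pattern.compile("\\[(\\d+)\\]:");

    // Returns missing ranges as "from-to" strings, starting from expectedStart.
    public static List<String> findGaps(List<String> events, long expectedStart) {
        List<String> gaps = new ArrayList<>();
        long expected = expectedStart;
        for (String event : events) {
            Matcher m = COUNTER.matcher(event);
            if (!m.find()) continue;
            long n = Long.parseLong(m.group(1));
            if (n > expected) {
                gaps.add(expected + "-" + (n - 1)); // counters we never saw
            }
            expected = n + 1; // next counter we expect
        }
        return gaps;
    }

    public static void main(String[] args) {
        List<String> events = List.of(
            "<13>Oct 01 12:00:00 host app[100]: msg",
            "<13>Oct 01 12:00:01 host app[101]: msg",
            "<13>Oct 01 12:00:02 host app[104]: msg");
        // Counters start at 100 but we expect from 1, so 1-99 is a gap,
        // and the jump from 101 to 104 produces the gap 102-103.
        System.out.println(findGaps(events, 1));
    }
}
```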

Latest Release Notes

1.1-6

Added support for RELP (rsyslog Reliable Event Logging Protocol) input and output.

1.1-5

Added a property --print-keys (-pk) to the KafkaInputItem to print the Kafka key for lines read. Added a property --start-number (-sn) to the CounterInputItem to alter the start number.

1.1-4

Changed kafka-clients dependency version from 3.7.1 to 3.9.1 due to CVE-2025-27817

1.1-3

Added a parameter {all} to the regex filter, so a regex filter can wrap the log in a new string. Also, fixed a bug that prevented the regex filter from escaping quotes (changing " to \"). Minor documentation updates.
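As an illustration, a filter that wraps the whole event and escapes its quotes could produce a JSON envelope like the sketch below (plain Java standing in for the regex filter; the class and method names are illustrative):

```java
// Sketch of what a regex filter using {all} plus quote escaping can achieve:
// wrap each raw event in a JSON envelope.
public class WrapSketch {
    public static String wrap(String event) {
        String escaped = event.replace("\"", "\\\""); // " -> \"
        return "{\"message\": \"" + escaped + "\"}";  // {all} -> the whole event
    }

    public static void main(String[] args) {
        System.out.println(wrap("user \"root\" logged in"));
    }
}
```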

1.1-2

Documentation updates. E.g., -h is no longer valid as shorthand for --hostname. Also, an update of -f guard. In 1.1-1, the -f guard command removed all content in the event but not the event itself, so if the event was written to a file, the result was an empty line. In 1.1-2, the event is correctly removed.

New Input: StringInputItem. This works like Template, but you don't need to create a file with the template; you can just supply it on the command line.

1.1-1

Updated the kafka-clients dependency due to a security vulnerability in earlier versions of the Kafka client library.

CVE-2024-31141, moderate severity.

1.1-SNAPSHOT

  • Major refactoring of the configuration system. The main method now only accepts the following parameters:
      • -h or --help to get help
      • -i or --input to specify the input module
      • -o or --output to specify the output module
      • -f or --filter to specify the filter module
      • -pf or --property-file to specify a properties file
      • -vf or --variable-file to specify a variable file
      • -l or --limit to specify the number of events to send (0 means no limit)
      • -e or --eps to specify the number of events per second to send (0 means no limit)
      • -s or --statistics to add statistics messages and printouts

    After -i {module} you can add parameters for the input module. The parameters are module specific. After -o {module} you can add parameters for the output module. The parameters are module specific. After -f {module} you can add parameters for the filter module. The parameters are module specific. To see the available parameters for a module, use -h or --help after the module name, e.g., -i file -h.

    The main reason for this change is to be able to add several input modules, filters and output modules of the same type in the same command line, e.g., -i file --name file1.txt -i file --name file2.txt -o cmd -o file --name file3.txt -o file --name file4.txt.

    Properties in a properties file must now be specified in order, since the order is now important.

  • DateSubstitute now supports epoch16 format

  • Headers, Regex and Templates now support different time offsets. This is useful when you want to send events with a timestamp that is not the current time.

Input modules

There are input modules for the following tasks:

  • Read files
  • Read files in a directory
  • Read files in a directory with globs
  • Read JSON file
  • Receive UDP
  • Receive TCP
  • Receive TCP SSL
  • Receive RELP
  • Fetch from Kafka topics
  • Fetch from Elasticsearch
  • Static string
  • Static string ending with a counter starting from 1
  • Dynamic string with variable substitution

Read files

Read a local file. (For Template files, see below)

Parameters: -i file --name {file name}

Example: -i file --name ./src/test/data/test.txt

The implementation actually wraps this in a directory input module, so if you print the configuration with log4j2, you will see that the file input is actually a directory input.

Read files in a directory

Read all files in a directory

Parameters: -i file --name {directory name}

Example: -i file --name ./src/test/data/

Read files in a directory with globs

Read all files in a directory that matches a glob. See https://javapapers.com/java/glob-with-java-nio/ for details on how to write globs.

Parameters: -i file --name {directory name} -g "{glob}"

Example: -i file --name ./src/test/data/ -g "**.txt"

Note that all * characters must be within quotes; otherwise the shell will expand the glob before the program sees it.
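Glob matching in Java is built on java.nio's PathMatcher. A minimal sketch of how a pattern like "**.txt" is evaluated (the helper name is illustrative):

```java
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.PathMatcher;

// Sketch of glob evaluation with java.nio, the mechanism a -g parameter
// in a Java tool would typically be built on.
public class GlobSketch {
    public static boolean matches(String glob, String file) {
        PathMatcher matcher = FileSystems.getDefault()
                .getPathMatcher("glob:" + glob);
        return matcher.matches(Path.of(file));
    }

    public static void main(String[] args) {
        // ** crosses directory boundaries, a single * does not:
        System.out.println(matches("**.txt", "src/test/data/test.txt"));
        System.out.println(matches("*.txt", "src/test/data/test.txt"));
    }
}
```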

Read JSON file

If a file is in JSON format (not line-delimited JSON; the entire file is one JSON object) you can read the file with the JSON File input.

Parameters: -i json-file --name {file name}

Example: -i json-file --name ./src/test/data/elasticsearch.json

If the file contains an array you would like to extract, use the parameter --path (-p). E.g., the JSON output from an Elastic query is structured like this:

{
  "took": 1,
  "timed_out": false,
  ...
  "hits": {
    "total": {
      "value": 33,
      "relation": "eq"
    },
    "max_score": 1,
    "hits": [
      {
        "_index": "testindex",
        "_id": "test2-11",
        ...
      },
      {
        "_index": "testindex",
        "_id": "test2-22",
        ...

To read this response from file, use the json-file input and set -p to hits.hits. The result will be an array of elements and each element will be emitted as a new event. So, to extract the _id from each element, add a json filter with -p _id. Now only the _id field will be propagated.

The command line will then become: java -jar LogGenerator-{version}.jar -i json-file --name ./src/test/data/elasticsearch.json -p hits.hits -f json -p _id -o cmd
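The dot-path lookup behind -p hits.hits can be sketched as a walk through nested objects, key by key. The sketch below models the parsed JSON as Java Maps and Lists rather than parsing a file; the class and method names are illustrative, not LogGenerator's internals:

```java
import java.util.List;
import java.util.Map;

// Sketch of a dot-path lookup like "hits.hits": walk nested objects key by key.
public class JsonPathSketch {
    @SuppressWarnings("unchecked")
    public static Object resolve(Map<String, Object> doc, String path) {
        Object current = doc;
        for (String key : path.split("\\.")) {
            current = ((Map<String, Object>) current).get(key);
        }
        return current;
    }

    public static void main(String[] args) {
        Map<String, Object> doc = Map.of(
            "took", 1,
            "hits", Map.of(
                "total", Map.of("value", 33),
                "hits", List.of(
                    Map.of("_index", "testindex", "_id", "test2-11"),
                    Map.of("_index", "testindex", "_id", "test2-22"))));
        // "hits.hits" yields the array; each element becomes one event.
        List<?> events = (List<?>) resolve(doc, "hits.hits");
        for (Object event : events) {
            // A json filter with -p _id would then keep only this field.
            System.out.println(((Map<?, ?>) event).get("_id"));
        }
    }
}
```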

Receive UDP

Set up a UDP server.

Parameters: -i udp [--hostname {host}] -p {port number}

Example: -i udp --hostname localhost --port 5999 or -i udp -p 5999
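As an illustration of the plumbing behind a UDP input, the sketch below runs a listener and a sender in one process on an ephemeral port; with LogGenerator the listener would instead be the instance started with -i udp. The class and method names are illustrative:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Sketch of a UDP listener plus a client sending one syslog-style event to it.
public class UdpSketch {
    public static String roundTrip(String message) throws Exception {
        try (DatagramSocket server = new DatagramSocket(0); // ephemeral port
             DatagramSocket client = new DatagramSocket()) {
            server.setSoTimeout(2000); // don't block forever if nothing arrives
            byte[] out = message.getBytes(StandardCharsets.UTF_8);
            client.send(new DatagramPacket(out, out.length,
                    InetAddress.getByName("localhost"), server.getLocalPort()));
            byte[] buf = new byte[65535];
            DatagramPacket in = new DatagramPacket(buf, buf.length);
            server.receive(in); // blocks until the datagram arrives
            return new String(in.getData(), 0, in.getLength(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("<13>Oct 01 12:00:00 host app: hello"));
    }
}
```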

Receive TCP

Set up a TCP server.

Parameters: -i tcp [--hostname {host}] -p {port number}

Example: -i tcp --hostname localhost --port 5999 or -i tcp -p 5999

If you try these examples, note that there is no output module specified, so the events will be discarded. Try adding -o cmd to see the events.

Receive TCP SSL

Set up a TCP server with encrypted communication.

Parameters and example, see below in the Q&A section.

Receive via RELP

Receive events from clients sending via the RELP (Reliable Event Logging Protocol).

The RELP input item acts as a server, listening for incoming RELP connections from clients (such as rsyslog, LogGenerator, or other RELP-compatible senders).

Parameters: -i relp --port {port number}

Example: -i relp --port 514 or -i relp -p 10514

The RELP input item will:

  • Listen on the specified port for incoming connections
  • Handle multiple simultaneous client connections
  • Acknowledge receipt of each message
  • Queue received messages for processing

To start a RELP server that receives events and prints them to the console:

java -jar LogGenerator-{version}.jar -i relp --port 514 -o cmd

To receive events from a RELP server and detect gaps in the event numbering:

java -jar LogGenerator-{version}.jar -i relp --port 514 -f gap --regex "(\d+)$" -o cmd

RELP Integration Example

You can use LogGenerator to test RELP connectivity by running a sender and receiver:

Terminal 1 - Start the RELP receiver:

java -jar LogGenerator-{version}.jar -i relp --port 10514 -f gap --regex "Test:(\d+)$" -o cmd -s true

Terminal 2 - Start the RELP sender:

java -jar LogGenerator-{version}.jar -i counter --string "Test:" -o relp --hostname localhost --port 10514 --limit 1000 -s true

The receiver will count the events, calculate events per second, and detect any gaps in the counter sequence.

RELP Protocol Details

RELP (Reliable Event Logging Protocol) is a protocol developed for rsyslog to provide reliable delivery of log messages. Each frame in the protocol follows this format:

txnr command len data\n

Where:

  • txnr - Transaction number (incremented for each command)
  • command - RELP command (open, close, syslog, rsp)
  • len - Length of the data in bytes
  • data - The actual message or command payload
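Following the layout above, building a frame can be sketched like this (a minimal illustration; a real implementation would also read and match the rsp acknowledgements, and the class name is illustrative):

```java
import java.nio.charset.StandardCharsets;

// Sketch of building a RELP frame "txnr command len data\n".
// len is the data length in bytes, so multi-byte characters count per byte.
public class RelpFrameSketch {
    public static String frame(int txnr, String command, String data) {
        int len = data.getBytes(StandardCharsets.UTF_8).length;
        return txnr + " " + command + " " + len + " " + data + "\n";
    }

    public static void main(String[] args) {
        // A handshake offer followed by one syslog message:
        System.out.print(frame(1, "open",
                "relp_version=0\nrelp_software=sketch\ncommands=syslog"));
        System.out.print(frame(2, "syslog",
                "<13>Oct 01 12:00:00 host app: Test:1"));
    }
}
```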

LogGenerator supports the following RELP commands:

  • open - Initiates a connection handshake
  • syslog - Transmits a syslog message
  • close - Gracefully closes the connection
View on GitHub

GitHub Stars: 4
Category: Development
Updated: 4 months ago
Forks: 0

Languages: Java

Security Score: 72/100

Audited on Dec 3, 2025

No findings