Lucenia and FluentBit - Part 2
This tutorial guides you through building a log aggregation pipeline with Lucenia and FluentBit. We will start by setting up our services with Docker Compose, and then use OpenSearch Dashboards to visualize our data.
In Part 1 of this series, we discussed how Lucenia and FluentBit are transforming how DevOps teams handle log management. Today, we get hands-on experience with a local demo, building the foundation for production-grade log pipelines that can handle billions of events daily.
Prerequisites
Before we begin, make sure you have the following prerequisites:
- Docker and Docker Compose V2
Clone the lucenia-tutorials repository, then navigate to the 7_logging directory and set up your local environment:
git clone git@github.com:lucenia/lucenia-tutorials.git && cd lucenia-tutorials/7_logging && source env.sh
Obtain a Lucenia Product License
Navigate to Lucenia's website and click Try Lucenia. Follow the steps to obtain your license, and save it to a file named trial.crt in the tutorial directory (7_logging).
Step 1: Start Up our Services
Services Overview
Running our services requires three configuration files: one for Docker Compose, one for FluentBit, and one for OpenSearch Dashboards.
docker-compose.yml
We define the services we will run in a Docker Compose file.
name: lucenia
services:
lucenia-node:
image: lucenia/lucenia:0.6.1
container_name: lucenia-node
environment:
- cluster.name=lucenia-cluster # Name the cluster
- node.name=lucenia-node # Name the node that will run in this container
- discovery.type=single-node
- bootstrap.memory_lock=true # Disable JVM heap memory swapping
- "LUCENIA_JAVA_OPTS=-Xms512m -Xmx512m" # Set min and max JVM heap sizes to at least 50% of system RAM
- network.host=0.0.0.0
- plugins.license.certificate_filepath=config/trial.crt
- LUCENIA_INITIAL_ADMIN_PASSWORD=MyStrongPassword123_
ulimits:
memlock:
soft: -1 # Set memlock to unlimited (no soft or hard limit)
hard: -1
nofile:
soft: 65536 # Maximum number of open files for the lucenia user - set to at least 65536
hard: 65536
volumes:
- lucenia-data:/usr/share/lucenia/data # Creates volume called lucenia-data and mounts it to the container
- ./trial.crt:/usr/share/lucenia/config/trial.crt
ports:
- 9200:9200 # REST API
networks:
- lucenia-net # All of the containers will join the same Docker bridge network
opensearch-dashboards:
image: opensearchproject/opensearch-dashboards:3.0.0
container_name: opensearch-dashboards
ports:
- 5601:5601 # Map host port 5601 to container port 5601
expose:
- "5601" # Expose port 5601 for web access to OpenSearch Dashboards
environment:
- 'OPENSEARCH_HOSTS=["https://lucenia-node:9200"]'
volumes:
- ./opensearch_dashboards.yml:/usr/share/opensearch-dashboards/config/opensearch_dashboards.yml
networks:
- lucenia-net
fluent-bit:
image: fluent/fluent-bit:4.0.7
container_name: fluent-bit
environment:
- LUCENIA_INITIAL_ADMIN_PASSWORD=MyStrongPassword123_
volumes:
- ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf
- /var/log:/var/log
networks:
- lucenia-net
volumes:
lucenia-data:
networks:
lucenia-net:
fluent-bit.conf
The FluentBit configuration file defines a log input, a filter, and an output.
[INPUT]
Name tail
Path /var/log/*.log
Mem_Buf_Limit 5MB
Skip_Long_Lines On
Skip_Empty_Lines On
Refresh_Interval 10
Tag logs.raw
[FILTER]
Name record_modifier
Match logs.*
Record environment demo
Record team devops
Record service fluent-bit-tutorial
Record version 1.0.0
[OUTPUT]
Name opensearch
Match *
Host lucenia-node
Port 9200
HTTP_User admin
HTTP_Passwd MyStrongPassword123_
tls On
tls.verify Off
Index docker-logs
Generate_ID On
Suppress_Type_Name On
Retry_Limit 5
Workers 1
opensearch_dashboards.yml
Finally, the OpenSearch Dashboards configuration defines the connection to Lucenia.
server.port: 5601
server.host: "0.0.0.0"
opensearch.hosts: ["https://lucenia-node:9200"]
opensearch.ssl.verificationMode: none
opensearch.username: "admin"
opensearch.password: "MyStrongPassword123_"
opensearch.requestHeadersWhitelist: [ authorization,securitytenant ]
opensearch.ignoreVersionMismatch: true
Start the Services
Use the following command to start a Lucenia cluster, FluentBit, and OpenSearch Dashboards:
docker compose up -d
What is Happening?
This command will start up the following services:
- Lucenia Search: Search engine to handle our log data.
- FluentBit: A log processor that collects and enriches our logs, then sends them to Lucenia Search.
- OpenSearch Dashboards: A tool to visualize our log data.
FluentBit is configured to collect logs from the /var/log directory (from files named *.log), process them, and send them to Lucenia Search. Notice the [FILTER] section. This is how we instruct FluentBit to enrich each log entry with metadata – environment, team, service, and version. In production, you'd pull these from environment variables or service discovery, but we hardcode them for simplicity and to demonstrate the concept.
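For example, if you did want to pull these values from the environment instead of hardcoding them, FluentBit's classic configuration format supports ${VARIABLE} substitution. A minimal sketch, assuming DEPLOY_ENV and SERVICE_NAME are set on the fluent-bit container (these variable names are illustrative, not part of this tutorial's setup):

[FILTER]
    Name   record_modifier
    Match  logs.*
    # Values are substituted from the container environment when FluentBit starts
    Record environment ${DEPLOY_ENV}
    Record service     ${SERVICE_NAME}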
The output defined in the FluentBit configuration matches on '*', meaning all collected logs are sent to the docker-logs index in Lucenia Search. The connection and authentication details are defined here (and in opensearch_dashboards.yml) so that FluentBit and OpenSearch Dashboards can each communicate with Lucenia.
Confirm the Services are Running
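First, a quick sanity check that all three containers came up (the container names are the ones set in docker-compose.yml):

docker compose ps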
Confirm the Lucenia cluster is running and healthy:
curl -X GET https://localhost:9200 -ku admin:$LUCENIA_INITIAL_ADMIN_PASSWORD
curl -X GET https://localhost:9200/_cluster/health?pretty -ku admin:$LUCENIA_INITIAL_ADMIN_PASSWORD
Navigate to OpenSearch Dashboards at http://localhost:5601 and log in with the credentials admin and MyStrongPassword123_. You should see the OpenSearch Dashboards interface.
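If anything looks off, FluentBit's own output is a good place to check for connection problems to Lucenia (for example TLS or authentication errors):

docker logs fluent-bit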
Step 2: Generate Logs
Logs from your machine are already being collected and forwarded. Here, we also generate some test logs that simulate application activity, so we have known entries to work with in OpenSearch Dashboards.
Next, generate some test logs:
echo "$(date) ERROR: Test-This is an error message" | sudo tee -a /var/log/luceniademo.log
echo "$(date) INFO: Test-Application started successfully" | sudo tee -a /var/log/luceniademo.log
echo "$(date) WARNING: Test-High memory usage detected" | sudo tee -a /var/log/luceniademo.log
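After FluentBit's next flush (this can take a few seconds), you can confirm the entries reached the docker-logs index defined in the OUTPUT section. A quick check, assuming LUCENIA_INITIAL_ADMIN_PASSWORD is still set from env.sh:

curl -X GET "https://localhost:9200/docker-logs/_count?pretty" -ku admin:$LUCENIA_INITIAL_ADMIN_PASSWORD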
Step 3: Visualize Logs in OpenSearch Dashboards
Navigate to http://localhost:5601 in your browser. Login with:
- Username: admin
- Password: MyStrongPassword123_
Once authenticated, follow these steps to create an index pattern for your logs:
- Go to Discover → Index patterns
- Click + Create index pattern, fill in docker-logs* for the name, and click Next step
- Select @timestamp as the time field, and click Create index pattern
- Navigate to Discover to see your logs flowing in real-time
Notice that each log entry contains the metadata we added: environment, team, service, and version. This enables filtering and analysis, and is vital when working with many services across many environments.
Take a look at the available fields in the left sidebar. You can filter logs by these fields to quickly drill down to environment- or service-specific logs. Here our environment is demo and our service is fluent-bit-tutorial. Think about how impactful this is when you are running hundreds of services and investigating an issue in one of your production environments.
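For example, with the default DQL query language in the Discover search bar, a filter like the following should narrow the view to just the enriched tutorial logs:

environment:demo and service:fluent-bit-tutorial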
You can also restrict or expand the time range for events. When you know roughly when something went wrong, drill down to a smaller window to limit what you are looking at. Try searching for a specific event like one we created: "High memory usage detected".
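The same search can be run against the REST API. A sketch, assuming the tail input stored each line under its default log field (the exact field name depends on your input configuration):

curl -X GET "https://localhost:9200/docker-logs/_search?pretty" \
  -ku admin:$LUCENIA_INITIAL_ADMIN_PASSWORD \
  -H 'Content-Type: application/json' \
  -d '{"query": {"match_phrase": {"log": "High memory usage detected"}}}'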
Explore Further
Now that you have a working pipeline, try experimenting with the other configuration options FluentBit offers.
You can add parsers and filters, and collect data from multiple inputs, as sketched below. Take this tutorial as a starting point and adapt it to your needs. You have just scratched the surface of what Lucenia and FluentBit can do!
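As one possible starting point, here is a hedged sketch of an extra input and filter: a dummy input that emits a synthetic record every second, and a grep filter that keeps only records containing ERROR (note this would also drop non-ERROR lines from the tail input, since both share the logs.* tag). Both plugins ship with FluentBit; adjust the tag, key, and pattern to your setup:

[INPUT]
    Name   dummy
    Tag    logs.synthetic
    # Emits this record once per second by default
    Dummy  {"log": "ERROR: synthetic test event"}

[FILTER]
    Name   grep
    Match  logs.*
    # Keep only records whose log field contains ERROR
    Regex  log ERROR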
Conclusion
In just a few minutes, you've created a working log aggregation pipeline. FluentBit's simplicity combined with Lucenia's power gives you enterprise-grade capabilities without enterprise-grade complexity.
Ready to process billions of events? You've already taken the first step.

