
Exporting logs to an external service

This guide assumes that you have Kurtosis installed.

Kurtosis comes with a logs aggregator component that aggregates logs from services across enclaves. This component uses Vector under the hood. The logs aggregator can be configured independently for each cluster in the Kurtosis config file.

Kurtosis (and Vector) uses the notion of "sinks" to describe the locations where you want your logs exported. Sink configurations are forwarded as-is (with some exceptions, see below) to Vector, so Kurtosis can export to any sink that Vector supports. For a complete list of supported sinks and their configurations, refer to the Vector sinks documentation. Currently, log exporting only works with Docker.
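
To get a feel for the shape of a sink entry, here is a minimal, hypothetical smoke-test configuration using Vector's console sink, which simply writes each log event to the logs aggregator container's stdout (the sink name console is arbitrary):

config-version: 3
kurtosis-clusters:
  docker:
    type: "docker"
    logs-aggregator:
      sinks:
        console:
          type: "console"   # any Vector sink type can go here
          encoding:
            codec: "json"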

The following guide walks you through setting up a local Elasticsearch/Kibana instance to which Kurtosis will forward logs. We also include configuration examples for common sinks such as AWS OpenSearch, CloudWatch, and S3.

Setting up Kurtosis

Before you proceed, make sure you have:

- Kurtosis installed
- Docker installed and running

Starting a local Elasticsearch/Kibana instance

Start an Elasticsearch container with the following command:

docker run -d --name es01 --net bridge -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:8.17.3

Note that the network must be set to bridge so the logs aggregator can connect to Elasticsearch.

Start a Kibana container with the following command:

docker run -d --name kb --net bridge -p 5601:5601 kibana:8.17.3

Generate an Elasticsearch enrollment token with the following command:

docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana

Access Kibana at http://localhost:5601 and paste the enrollment token generated by the command above. Kibana will then ask you for a 6-digit verification code. To view this code, check the Kibana container's logs:

docker logs kb
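
If the logs are noisy, you can filter them; the exact wording of the log line varies between Kibana versions, so treat this grep pattern as a guess:

docker logs kb 2>&1 | grep -i "code"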

Next, Kibana will ask you to sign in. The username for the local Elasticsearch instance is elastic, and you can generate the password with:

docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic

You have now successfully started a local Elasticsearch/Kibana instance.
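
As a quick sanity check, you can query the instance directly. Elasticsearch 8.x ships with TLS enabled and a self-signed certificate, hence the -k flag; replace <PASSWORD> with the password generated above:

curl -k -u elastic:<PASSWORD> https://localhost:9200

A JSON document describing the cluster confirms that Elasticsearch is reachable.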

Configuring Kurtosis to send logs to Elasticsearch

Locate your Kurtosis configuration file with the following command:

kurtosis config path
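
To inspect the file's current contents without opening an editor:

cat "$(kurtosis config path)"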

Determine the IP address of your Elasticsearch container by running the following command:

docker inspect -f '{{.NetworkSettings.Networks.bridge.IPAddress}}' es01
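
If you prefer, capture the address in a shell variable so you can reuse it in later commands:

ES_IP=$(docker inspect -f '{{.NetworkSettings.Networks.bridge.IPAddress}}' es01)
echo "$ES_IP"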

Open the configuration file and paste the following content, replacing <PASSWORD> with the Elasticsearch password generated in the previous section and <ELASTICSEARCH_IP_ADDRESS> with the IP address of your Elasticsearch container:

config-version: 3
should-send-metrics: true
kurtosis-clusters:
  docker:
    type: "docker"
    logs-aggregator:
      sinks:
        elasticsearch:
          type: "elasticsearch"
          bulk:
            index: "kt-{{ enclave_uuid }}-{{ service_name }}"
          auth:
            strategy: "basic"
            user: "elastic"
            password: "<PASSWORD>"
          tls:
            verify_certificate: false
          endpoints:
            - "https://<ELASTICSEARCH_IP_ADDRESS>:9200"
Info: config-version must be set to 3 for logs aggregator configurations to apply.

Danger: tls.verify_certificate should not be disabled outside of testing!

Finally, restart the Kurtosis engine to apply the changes:

kurtosis engine restart
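
You can confirm the engine came back up with:

kurtosis engine status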

Verifying log delivery

To verify that Kurtosis is actually exporting logs to Elasticsearch, start by running a package. In this guide, we use ethpandaops/ethereum-package:

kurtosis run github.com/ethpandaops/ethereum-package

Once the package has finished execution, go to http://localhost:5601/app/enterprise_search/content/search_indices. Indices created by Kurtosis will be in the format kt-{{ enclave_uuid }}-{{ service_name }} (as configured above).

This is what you should see if everything went correctly:

[Screenshot: Kibana search indices view listing the kt-* indices created by Kurtosis (elasticsearch-dashboard.png)]
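
You can also verify from the command line by listing the indices directly in Elasticsearch (as before, <PASSWORD> is the generated elastic password):

curl -k -u elastic:<PASSWORD> "https://localhost:9200/_cat/indices/kt-*?v"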

Configuring other sinks

Kurtosis sink configurations map one-to-one to Vector sink configurations, with the inputs field injected automatically by Kurtosis. It is not currently possible to specify a different input source. Please refer to the official Vector documentation for sink configuration options.
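
For intuition, here is a rough sketch of the Vector-side sink that Kurtosis would generate from the Elasticsearch example above; the source name in inputs is a placeholder, since Kurtosis wires it up internally:

sinks:
  elasticsearch:
    type: "elasticsearch"
    inputs: ["<kurtosis-logs-source>"]  # injected automatically by Kurtosis; cannot be overridden
    # ...the rest of your sink configuration is forwarded unchanged...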

Below are examples of some common configurations that serve as good starting points for your custom sink configurations.

AWS OpenSearch Serverless

config-version: 3
should-send-metrics: true
kurtosis-clusters:
  docker:
    type: "docker"
    logs-aggregator:
      sinks:
        elasticsearch:
          type: "elasticsearch"
          opensearch_service_type: "serverless"
          bulk:
            index: "kt-{{ enclave_uuid }}-{{ service_name }}"
          aws:
            region: "<AWS_REGION>"
          auth:
            strategy: "aws"
            access_key_id: "<ACCESS_KEY_ID>"
            secret_access_key: "<SECRET_ACCESS_KEY>"
          endpoints:
            - "<OPENSEARCH_ENDPOINT>"

AWS CloudWatch

config-version: 3
should-send-metrics: true
kurtosis-clusters:
  docker:
    type: "docker"
    logs-aggregator:
      sinks:
        cloudwatch:
          type: "aws_cloudwatch_logs"
          region: "<AWS_REGION>"
          auth:
            access_key_id: "<ACCESS_KEY_ID>"
            secret_access_key: "<SECRET_ACCESS_KEY>"
          group_name: "<LOG_GROUP_NAME>"
          stream_name: "kt-{{ enclave_uuid }}-{{ service_name }}"
          encoding:
            codec: "json"

AWS S3

config-version: 3
should-send-metrics: true
kurtosis-clusters:
  docker:
    type: "docker"
    logs-aggregator:
      sinks:
        s3:
          type: "aws_s3"
          region: "<AWS_REGION>"
          auth:
            access_key_id: "<ACCESS_KEY_ID>"
            secret_access_key: "<SECRET_ACCESS_KEY>"
          bucket: "<BUCKET_NAME>"
          key_prefix: "kt-{{ enclave_uuid }}/{{ service_name }}/"
          encoding:
            codec: "json"
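
To confirm that log objects are arriving in the bucket, you can list them with the AWS CLI:

aws s3 ls "s3://<BUCKET_NAME>/" --recursive | head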