# Simplifying BanzaiCloud Logging Operator in Kubernetes
## Chapter 1: Introduction to BanzaiCloud Logging Operator
In our previous discussion, we explored the functionalities and key features of the BanzaiCloud Logging Operator. Today, we will delve into the implementation process.
To initiate the setup, we first need to install the operator. A Helm chart is available for this, and we can run the following command:

```shell
helm upgrade --install --wait --create-namespace --namespace logging logging-operator banzaicloud-stable/logging-operator
```
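Note that this assumes the `banzaicloud-stable` chart repository is already configured on your machine. If it isn't, it can be added first; the repository URL below is the one historically published by the BanzaiCloud charts project, so verify it against the current project documentation:

```shell
# Add the BanzaiCloud stable chart repository (URL assumed from the project's published charts)
helm repo add banzaicloud-stable https://kubernetes-charts.banzaicloud.com
# Refresh the local chart index so the logging-operator chart can be found
helm repo update
```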
This command will create a logging namespace if one doesn't already exist and deploy the operator components into it.
## Chapter 2: Setting Up Logging Resources
Now that the operator is installed, we can start creating the necessary resources using the Custom Resource Definitions (CRDs) we discussed earlier. To summarize, here are the CRDs at our disposal:
- Logging: Defines the logging infrastructure for your cluster, facilitating the collection and transport of log messages. This includes configurations for Fluentd and Fluent-bit.
- Output / ClusterOutput: Specifies the destination for log messages; an Output is namespace-scoped, while a ClusterOutput applies at the cluster level.
- Flow / ClusterFlow: Describes a logging flow that uses filters and outputs to route log messages accordingly. As with outputs, a Flow is namespace-scoped, while a ClusterFlow operates at the cluster level.
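If you want to confirm these CRDs were registered by the operator installation, a quick check is to list them (a sketch; the exact set of CRDs can vary by operator version):

```shell
# List the CRDs contributed by the logging operator
kubectl get crd | grep logging.banzaicloud.io
```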
For our implementation, I want all logs generated by my workloads—regardless of their namespace—to be sent to a Grafana Loki instance, which I have set up on the same Kubernetes cluster. This will be done using a simple configuration for Grafana Loki.
### Creating a Logging Object
To begin, we will create a Logging object to establish our logging infrastructure. The command to do this is as follows:
```shell
kubectl -n logging apply -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging-simple
spec:
  fluentd: {}
  fluentbit: {}
  controlNamespace: logging
EOF
```
We'll stick with the default settings for Fluentd and Fluent-bit for this example. Future articles will explore more tailored configurations.
Once this resource is processed, you will notice new components in your logging namespace. For instance, on my three-node cluster, I can expect to see three instances of Fluent-bit deployed as a DaemonSet and a single instance of Fluentd.
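You can verify this by listing the pods in the logging namespace; the exact pod names depend on the Logging resource name and operator version, but you should see the operator itself, the Fluentd StatefulSet pod, and one Fluent-bit pod per node:

```shell
# Inspect the workloads the operator created for this Logging resource
kubectl -n logging get pods -o wide
```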
### Configuring Communication with Loki
Next, we need to set up the communication between our logging setup and Loki. Since I want this to function across all namespaces in my cluster, I will use the ClusterOutput option instead of the namespace-specific Output. The command for this is:
```shell
kubectl -n logging apply -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: loki-output
spec:
  loki:
    url: http://loki-gateway.default
    configure_kubernetes_labels: true
    buffer:
      timekey: 1m
      timekey_wait: 30s
      timekey_use_utc: true
EOF
```
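To confirm the operator accepted the output, you can inspect the resource; a problems or status column (present in recent operator versions) will flag configuration errors such as an unreachable endpoint schema:

```shell
# Check whether the ClusterOutput was validated by the operator
kubectl -n logging get clusteroutputs
kubectl -n logging describe clusteroutput loki-output
```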
Ensure that the endpoint is correctly specified; in this case it is loki-gateway.default, the in-cluster service address of the Loki gateway running in the default namespace.
### Establishing a Logging Flow
Finally, we will create a flow to link our Logging configuration to the ClusterOutput we just established. Again, we will utilize the ClusterFlow to maintain a cluster-wide setup. The command is as follows:
```shell
kubectl -n logging apply -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: loki-flow
spec:
  filters:
    - tag_normaliser: {}
  match:
    - select: {}
  globalOutputRefs:
    - loki-output
EOF
```
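As with the output, it is worth checking that the flow was accepted and bound to its output before moving on:

```shell
# Confirm the ClusterFlow exists and references the loki-output ClusterOutput
kubectl -n logging get clusterflows
kubectl -n logging describe clusterflow loki-flow
```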
After waiting a minute or two for the configuration to reload, you should start seeing logs flowing into Loki, indicating that the setup is working correctly. To visualize this data, you can use Grafana.
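In Grafana's Explore view, a simple LogQL query is enough to confirm logs are arriving. Because `configure_kubernetes_labels` is enabled on the output, Kubernetes metadata such as the namespace should be available as labels (the label name here is the typical one, but verify against your own label set):

```
{namespace="default"}
```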
## Conclusion
And that’s it! Modifying the logging configuration is straightforward: adjust the CRD resources and apply the matches and filters you need. This approach allows for seamless management of the logging system.
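As a sketch of such a modification, the ClusterFlow could be extended with a `grep` filter to drop noisy log lines before they reach Loki; the filter type is part of the operator's filter set, but the key and pattern below are purely illustrative:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: loki-flow
spec:
  filters:
    - tag_normaliser: {}
    - grep:
        exclude:
          # Drop records whose message matches the pattern (illustrative values)
          - key: message
            pattern: /healthz/
  match:
    - select: {}
  globalOutputRefs:
    - loki-output
```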
## Chapter 3: Understanding Logs and Metrics
In the realm of observability, understanding how logs translate into metrics is crucial for effective monitoring, and it is a natural next step once log collection is in place.
This concludes our overview of the BanzaiCloud Logging Operator in Kubernetes!