Installation on Kubernetes
DBmarlin currently only supports running a DBmarlin Remote Agent inside a container or Kubernetes pod. Running the DBmarlin Server in a container may be supported in the future, but for now it should only be used for evaluation purposes since it cannot be upgraded.
Container image
You can either pull a pre-built container image or build your own container image by customising the example Dockerfile.
Use the public dbmarlin-agent container image
A public container image is available on Docker Hub at https://hub.docker.com/repository/docker/dbmarlin/dbmarlin-agent/tags. You can pull the image via the Docker CLI with docker pull dbmarlin/dbmarlin-agent:latest, although this step is not needed for Kubernetes.
Build your own dbmarlin-agent container image
As an alternative to the public container image, you can build your own.
An example Dockerfile is available on GitHub at https://github.com/DBmarlin/dbmarlin-docker. You can clone the whole repo or simply download the Dockerfile and customise it.
The image is based on Ubuntu with LANG set to en_US.utf8, but you could use a different base image, change the LANG setting and make further customisations if needed. In that case you would need to build your own image and push it to your own container registry.
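If you do build a customised image, the steps might look like the sketch below; the registry name and image tag are placeholders for your own:

```shell
# Clone the repo containing the example Dockerfile
git clone https://github.com/DBmarlin/dbmarlin-docker.git
cd dbmarlin-docker   # adjust the path if the Dockerfile lives in a subdirectory

# Build the image after making any customisations to the Dockerfile
# (registry.example.com and the "custom" tag are placeholders)
docker build -t registry.example.com/dbmarlin-agent:custom .

# Push to your own container registry so Kubernetes can pull it
docker push registry.example.com/dbmarlin-agent:custom
```

Your Kubernetes Deployment would then reference registry.example.com/dbmarlin-agent:custom instead of dbmarlin/dbmarlin-agent:latest.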
Kubernetes deployment
Example Kubernetes deployment YAML files are available on GitHub at https://github.com/DBmarlin/dbmarlin-docker/tree/main/k8s/agent-k8s. You can clone the whole repo using git clone https://github.com/DBmarlin/dbmarlin-docker.git.
Customizing the YAML files
The file agent-k8s-deploy.yaml contains a Kubernetes Deployment which you will need to customise:
- Optional: By default it will attempt to pull dbmarlin/dbmarlin-agent:latest. If you want to use a specific version of the agent you can change latest to a version tag such as 3.5.0
- Optional: The default resource limits are 256Mi-512Mi of memory and 0.5-1 CPU. If you are monitoring a lot of databases you might need to increase these.
- Mandatory: Edit the 3 environment variables DBMARLIN_AGENT_NAME, DBMARLIN_ARCHIVER_URL and DBMARLIN_API_KEY (see below)
- DBMARLIN_AGENT_NAME can be any unique identifier for your remote agent, up to 50 characters in length. If it is not set, the agent name defaults to "default", which is the name of the built-in agent. It is therefore important that each agent has its own unique name.
- DBMARLIN_ARCHIVER_URL is the URL endpoint of the archiver on your DBmarlin server. Remember to include the full URL including scheme://host:port/archiver. To reach a DBmarlin server outside of the Kubernetes cluster, you will need to expose the service (see below).
- DBMARLIN_API_KEY is the Base64-encoded username:password for your DBmarlin server if you are using Nginx Basic Auth. If you are not using authentication, this environment variable can be omitted.
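If you are using Nginx Basic Auth, the value for DBMARLIN_API_KEY can be generated with the standard base64 command; the username and password below are placeholders:

```shell
# Encode username:password for DBMARLIN_API_KEY.
# printf (rather than echo) avoids including a trailing newline
# in the encoded value.
printf '%s' 'admin:secret' | base64
# admin:secret encodes to YWRtaW46c2VjcmV0
```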
Deploying to your Kubernetes cluster
Make sure you are pointing to the correct cluster and namespace.
Use kubectl to apply the dbmarlin-agent deployment. There is a wrapper shell script agent-k8s-deploy.sh if you prefer.
kubectl apply -f agent-k8s-deploy.yaml
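After applying the YAML, you can check that the pod came up; the Deployment name dbmarlin-agent is assumed from the example YAML:

```shell
# Wait for the dbmarlin-agent Deployment to finish rolling out
kubectl rollout status deployment/dbmarlin-agent

# Confirm the agent pod is in the Running state
kubectl get pods
```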
Removing the deployment
Use kubectl to remove the dbmarlin-agent deployment. There is a wrapper shell script agent-k8s-remove-all.sh if you prefer.
kubectl delete deployment dbmarlin-agent
Connecting to a DBmarlin server outside of Kubernetes
To make your DBmarlin server available within your Kubernetes cluster, you can create a Service of type ExternalName. For example, if your DBmarlin server outside the Kubernetes cluster is running at http://staging1.dbmarlin.com:9090, you could make it available inside the cluster as http://staging1-external-service:9090 by applying the YAML below via kubectl. The DBMARLIN_ARCHIVER_URL environment variable for your agent would then point to http://staging1-external-service:9090/archiver, and requests would be routed to your DBmarlin server outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: staging1-external-service
spec:
  type: ExternalName
  externalName: staging1.dbmarlin.com
  ports:
    - protocol: TCP
      port: 9090
      targetPort: 9090
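Assuming the YAML above is saved to a file (the name external-service.yaml is a placeholder), you could apply and check it with:

```shell
# Create the ExternalName service in the cluster
kubectl apply -f external-service.yaml

# Confirm the service exists and maps to the external hostname
kubectl get service staging1-external-service
```

Once applied, http://staging1-external-service:9090/archiver resolves from inside the cluster to your external DBmarlin server.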
Troubleshooting
- Pod not started. Once you run the deployment you can use kubectl get pods to see if the pod has started. If not, check the reason: it could be that there aren't enough resources for it, or that it can't pull the container image.
- Agent can't connect to DBmarlin server. If your agent has connectivity to your DBmarlin server, it should have registered itself at startup and will appear in the dropdown list of agents when you go to Settings to add a new sensor. You can also use the API endpoint http://dbmarlin-service:9090/archiver/rest/v1/agent to get a list of all agents. If the agent has not registered itself, check the log files for the pod using kubectl logs dbmarlin-agent-8d884577d-qxr2k (you can find the name of your pod by first running kubectl get pods).
- Agent can't connect to monitored database. When you add a new sensor under Settings and start it up, it will attempt to connect to the target database using the host, port, username and password you specified. It could be that one or more of these was entered incorrectly. Testing these from inside the pod where the agent is running is the best way to identify any problems. The container comes with curl installed, so you can run kubectl exec -it dbmarlin-agent-795859c744-mvtrs -- /bin/bash to get a shell inside the dbmarlin-agent pod (you can find the name of your pod by first running kubectl get pods). Once inside the pod, you can use curl to check that the host and port can be reached. For example, if you have a mysql80 namespace with a mysql service running inside it on port 3306, you could use the following curl command to test connectivity:

curl -v telnet://mysql.mysql80:3306
* Trying 10.43.16.102:3306...
* Connected to mysql.mysql80 (10.43.16.102) port 3306 (#0)