How to Build a Dovecot Docker Image

Another image I build for myself because I don't want to use unofficial third-party images from DockerHub: they could do anything with my precious emails. Besides, building a Docker image for Dovecot is pretty straightforward.

The Dockerfile just has to install the Dovecot packages, expose the IMAP and LMTP ports, and start Dovecot:

Dockerfile
FROM alpine:3.16

# You might not need the pigeonhole (sieve) plugin or the rspamd package
RUN apk update && \
    apk add dovecot dovecot-ldap dovecot-lmtpd dovecot-pigeonhole-plugin rspamd-client && \
    rm -rf /var/cache/apk/*

COPY config/conf.d/10-logging.conf /etc/dovecot/conf.d/10-logging.conf

EXPOSE 24/tcp
EXPOSE 143/tcp
EXPOSE 993/tcp

CMD /usr/sbin/dovecot -F

Dovecot will start in the foreground, keeping the container alive. We want to check the logs via docker logs, so we redirect all Dovecot log output to stderr:

config/conf.d/10-logging.conf
log_path = /dev/stderr

The Dockerfile will copy that config file to the image.

Start the build like this:

docker build -t dovecot:2.3.19.1-r0-2 .
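
Once the image is built, a run command might look roughly like this; the mounted config directory, the mail volume, and the published ports are assumptions, so adjust them to your setup (the LMTP port 24 is usually only reached by Postfix over a shared container network, not published on the host):

docker run -d --name dovecot \
  -p 143:143 -p 993:993 \
  -v "$(pwd)/config:/etc/dovecot" \
  -v mail-data:/var/mail \
  dovecot:2.3.19.1-r0-2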

Done. Again, you can pick up my ready-made Dovecot image on DockerHub, but I guess if you are reading this article, you want to build your own image anyway.

Also, check out my Postfix image.

Read More

How to Build a Postfix Docker Image

Update 2022-10-16: I added some details on logging and the postmap operations, plus a link to the image.

Did you ever need a Postfix Docker image and then found out that there is no official one? Well, a lot of people did, so there are dozens if not hundreds of Postfix images on DockerHub. But do you really want to use an image made by a complete stranger? After all, the image could do anything, or send your mails anywhere.

Well, I had the same problem whilst preparing an upcoming post. I decided to create yet another image, but to share the code so you can build one yourself. And actually, it's super easy. Here is the Dockerfile:

Dockerfile
FROM alpine:3.16

# I need the ldap package, you might not
RUN apk update && \
    apk add postfix postfix-ldap && \
    rm -rf /var/cache/apk/*

EXPOSE 25/tcp
EXPOSE 465/tcp
EXPOSE 587/tcp

# You might need more or less map operations before startup
CMD postmap /etc/postfix/aliases && \
    postmap /etc/postfix/roleaccount_exceptions && \
    postmap /etc/postfix/virtual && \
    /usr/sbin/postfix start-fg

I assume that you will also mount your config files into /etc/postfix. So that we don't have to mount the compiled map files as well, I added the postmap commands: when the container starts, it regenerates the maps you need and then starts Postfix in the foreground.

To redirect all postfix output to the console, add this to main.cf:

/etc/postfix/main.cf
maillog_file = /dev/stdout

That way you can access the output via docker logs. You might need an additional entry in master.cf, but it should already be present. Look it up in the Postfix documentation.

Start the build like this:

docker build -t postfix:3.7.2-r0-1 .
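
A run command might then look roughly like this; the mounted config directory and the published ports are assumptions, so adjust them to your setup:

docker run -d --name postfix \
  -p 25:25 -p 465:465 -p 587:587 \
  -v "$(pwd)/config:/etc/postfix" \
  postfix:3.7.2-r0-1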

And you are done. Or you can use my Postfix Docker image, if you dare to. Also, check out my Dovecot image.

Read More

ApacheDS container gitlab repository

ApacheDS container update: I copied my code to a GitHub repository for easier access. I also updated ApacheDS and the base container to the current release versions. The earlier instructions on how to set up an ApacheDS Docker container still apply.

If you just want to use the container, use my Dockerhub repository.

A word on versioning: the tags follow the ApacheDS naming scheme with an added build number. For version “2.0.0.AM26” the tag name will be “2.0.0.AM26-0” for the first build, “2.0.0.AM26-1” for the second, and so on.
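
Following that scheme, building the first iteration for ApacheDS 2.0.0.AM26 locally would be tagged like this (the local image name itself is just an example):

docker build -t apacheds:2.0.0.AM26-0 .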

Read More

Hosting Multiple Domains and Custom Certificates With Traefik

I was asked how to configure multiple domains in Traefik when one of them is an internal network domain that cannot get a Let's Encrypt certificate. Actually, it's pretty easy: just add your services. Let's look at an example.

Multiple Domains

Traefik will discover your services using the method you specify in its configuration file. There are several discovery variants available; here, we will use auto discovery of Docker containers.
When Traefik encounters a Docker container, it reads the container's labels to deduce the domain the service should run on and whether or not you want to use TLS. If you have enabled the Let's Encrypt certificate resolver beforehand and set the right labels, the new domain with TLS will just work.
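
As a rough sketch (the container, router and resolver names, the entrypoint, and the domain are all assumptions), the labels on such a container could look like this:

docker run -d --name whoami \
  --label 'traefik.enable=true' \
  --label 'traefik.http.routers.whoami.rule=Host(`example.com`)' \
  --label 'traefik.http.routers.whoami.entrypoints=websecure' \
  --label 'traefik.http.routers.whoami.tls=true' \
  --label 'traefik.http.routers.whoami.tls.certresolver=letsencrypt' \
  traefik/whoami

The internal domain would presumably get a similar set of labels, just without the certresolver, so that Traefik serves a manually provided certificate for it instead.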

Read More

Grafana Datasource With Custom CA Certificate

Today, I had to figure out how to format a Grafana datasource with a custom CA literally for the 99th time. How can this be so hard? It doesn't help that there is conflicting information on how to format certs in YAML, even in the Grafana community threads; that's always the first page I find on the topic. The second is the Stack Overflow post on how to break strings in YAML, which sends you down a totally wrong path.
The error messages in the Grafana server logs are also not very helpful, assuming you can access them at all:

Failed to call resource" error="Get \"https:///tnglab.fritz.box/prometheus...": x509: certificate signed by unknown authority

Well, thanks for nothing!

So, once and for all! This is the way:
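
For orientation, a provisioned datasource with an inline CA certificate can use a YAML block scalar. The sketch below assumes a Prometheus datasource; the name, URL, and the exact jsonData flags are assumptions, not necessarily the configuration this post ends up with:

provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: https://tnglab.fritz.box/prometheus
    jsonData:
      tlsAuthWithCACert: true      # use the CA cert provided below
    secureJsonData:
      tlsCACert: |
        -----BEGIN CERTIFICATE-----
        ...your CA certificate lines, all indented to the same level...
        -----END CERTIFICATE-----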

Read More

Collecting logs in the cloud with Grafana Loki

In the good old days you had one server running your services. When something failed, you logged in via SSH and checked the corresponding log file. Today, there is usually no single server running all your services, so the log files are distributed over multiple machines and access methods. From journald and docker logs to syslog and plain files, there are just too many places to check the logs efficiently, especially if you use scale sets on Azure or something equivalent to dynamically adjust the number of VMs to the workload.
Sometimes this problem is solved by introducing an Elasticsearch, Logstash and Kibana (ELK) stack that gathers the logs and makes them searchable. That's a nice solution, albeit a resource-intensive one.

We want to look at a more lightweight alternative: the log aggregator Grafana Loki. Like Elasticsearch, it stores logs that are gathered by log shippers such as Promtail, and you can then display them using Grafana.
Unlike Elasticsearch, though, Loki is much leaner. That's mostly because it omits the main feature of Elasticsearch: search. Instead, and much more like Prometheus, Loki stores log lines annotated with tags that you can later filter on, so there is no real-time search on the log text.
The upside is low hardware requirements. I run Loki comfortably on a Raspi 3B, where it collects logs from several systems using less than 1% CPU at all times. An ELK stack would have serious problems even running on the Raspi 3B, mostly due to its 1 GB of system memory.
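
To illustrate the tag model, querying Loki (for example from Grafana's Explore view) means selecting log streams by their labels rather than searching text; the label names below are made up:

{host="raspi3b", unit="docker.service"}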

Read More

Azure Scale Set Monitoring With Prometheus and Grafana

When running more and more machines, it becomes impractical to check on each of them by logging in and going through the numbers yourself. This is especially true for a variable number of machines, as in cloud scale sets.
So what can we do? Prometheus is a popular solution for collecting and storing metrics from your machines. You can then browse them either via its built-in web interface or via third-party apps like Grafana.

In this post we will look at a practical example of metric collection with Prometheus on Microsoft Azure scale sets. I assume that you already have an Azure deployment set up. If not, check out my post on Microsoft Azure VM deployment.

We will run Prometheus in a Docker container on a jumphost VM, behind the Traefik instance that is already present there. I have a post about how to set up Traefik with Ansible on your jumphost if you need it. Prometheus will then fetch the metrics from a small exporter app on each of the Azure scale set VMs. Finally, we display the data with Grafana, which also runs in a container on the jumphost.
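
To give an idea of the scrape side, Prometheus ships an Azure service discovery mechanism that can enumerate the scale set VMs. Whether we use that or static targets is not shown in this excerpt, and the IDs and the exporter port below are placeholders:

prometheus.yml (excerpt)
scrape_configs:
  - job_name: 'scale-set-vms'
    azure_sd_configs:
      - subscription_id: '<subscription-id>'
        tenant_id: '<tenant-id>'
        client_id: '<client-id>'
        client_secret: '<client-secret>'
        port: 9100    # port the exporter app listens on inside each VM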

Read More

Webcam Roundup 2022: StreamCam, Brio, Kiyo Pro, Facecam, HQ Cam

My last comparison between the Logitech c922 and the Raspberry Pi High Quality Camera left me wanting: the Logitech c922 has very bad image quality, but the Raspberry Pi High Quality Camera is very cumbersome to use as a webcam.
So, off to our favourite online bookstore. Shortly after, four shiny new contemporary webcams arrived for testing:

  • Logitech StreamCam
  • Logitech Brio
  • Razer Kiyo Pro
  • Elgato Facecam

I will again include the Raspberry Pi High Quality Camera as a point of reference and because it’s fun to see what you could do in the DIY department.

Read More

Raspberry Pi Streaming update: Raspberry Pi OS

The latest version of the Raspberry Pi operating system brings significant changes. Apart from the usual software updates to the Debian 11 Bullseye base, Raspbian has been rebranded to Raspberry Pi OS. It also comes with a completely new software stack, called libcamera, for accessing the various camera options. Unfortunately, that means the old raspivid and raspistill commands no longer work. So it's time to update our previous post about how to stream with a Raspberry Pi.
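
To give a flavour of the change, here is roughly how a capture command translates; the resolution and output options are just an example, not the exact pipeline from the post:

# old Raspbian tooling
raspivid -t 0 -w 1280 -h 720 -o -
# new libcamera stack on Raspberry Pi OS Bullseye
libcamera-vid -t 0 --width 1280 --height 720 --inline -o -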

Read More

Mining Monero On Azure

Privacy coins have rallied in recent days in light of current geopolitical events. So is it the perfect time to finally start mining? And how would we do it? We already saw that mining Monero on Raspberry Pis makes no sense. And we surely don't want to buy a whole server farm, do we?

So how about mining in the cloud? After all, Monero prices are rising, so it should be profitable, right?

Spoiler: No, it isn’t. But if you want to know how to set up Monero mining on Azure with Packer and Ansible and see the gathered data, keep on reading.

Read More