Raspberry Pi Camera Module 3 First Impressions

A few days ago, on Jan 9, 2023, Raspberry Pi Ltd. announced the Raspberry Pi Camera Module 3, and I was fortunate enough to get one of the last units available in German online stores. So this is my first impressions review, with some images that I captured. I got the standard Camera Module 3 with a 66° angle of view and an IR filter.


Read More

How to Build a Rspamd Docker Image

Rspamd is a powerful and free spam filtering system. Unfortunately, there is no official Docker image available, so let's build one ourselves.

If you only want rspamd to run in a container, the Dockerfile is very simple: just install the rspamd packages and set the command to /usr/sbin/rspamd -f. But rspamd can also expose a minimal web interface with statistics, logs, and the ability to manually submit ham and spam. Of course we want that, too.
We use nginx to serve the necessary style sheets and icons for the rspamd web interface; these are included in the rspamd package. To keep things simple, we run both rspamd and nginx in the same Docker container and start them together using supervisord, one of the methods recommended by the Docker docs.

So this is our Dockerfile: it installs the necessary packages, copies the supervisord, nginx, and rspamd logging configs, and starts supervisord.

Dockerfile
FROM alpine:3.17

RUN apk update && \
    apk add rspamd rspamd-proxy rspamd-utils rspamd-controller supervisor nginx && \
    rm -rf /var/cache/apk/*

COPY supervisord/supervisord.conf /etc/supervisord/supervisord.conf
COPY nginx/nginx.conf /etc/nginx/nginx.conf
COPY rspamd/local.d/logging.inc /etc/rspamd/local.d/logging.inc

EXPOSE 11332/tcp
EXPOSE 80/tcp

CMD exec /usr/bin/supervisord -c /etc/supervisord/supervisord.conf

We let nginx serve the static files directly from the rspamd package location and proxy the HTTP requests to rspamd itself. Output goes to /dev/stdout to populate the Docker logs.

nginx/nginx.conf
worker_processes  2;
user nginx nginx;

pid /var/run/nginx.pid;

error_log /dev/stdout info;

events {
    worker_connections 8192;
    use epoll;
}

http {
    include mime.types;
    default_type text/plain;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    gzip on;

    server {
        access_log /dev/stdout;

        location / {
            alias /usr/share/rspamd/www/;
            try_files $uri @proxy;
        }
        location @proxy {
            proxy_pass http://127.0.0.1:11334;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
        }
        server_tokens off;
    }
}

We also let rspamd log to the console.

rspamd/local.d/logging.inc
type = "console";
level = "notice";

Supervisord can start multiple processes while staying in the foreground itself, which is very convenient in Docker containers because the main command stays alive. We use it to start rspamd and nginx, both also in the foreground. Rspamd has a switch for that (-f); nginx needs an additional option string.

supervisord/supervisord.conf
[supervisord]
nodaemon=true
user=root

[program:rspamd]
command=/usr/sbin/rspamd -f -u rspamd -g rspamd
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[program:nginx]
command=/usr/sbin/nginx -g 'daemon off;'
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

Build your image like this:

docker build -t rspamd:latest .

And that’s it. Build your image from the recipe above or just take my prebuilt Rspamd Docker image.
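If you want to try it out, a quick run command might look like this (the container name and host ports are my arbitrary choices; the proxy port 11332 and the nginx port 80 come from the EXPOSE lines above):

docker run -d --name rspamd \
    -p 11332:11332 \
    -p 8080:80 \
    rspamd:latest

The web interface should then be reachable on host port 8080. In a real setup you would of course also mount your rspamd configuration into /etc/rspamd.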

Read More

A Dockerized Self Hosted Mail Server With Postfix And Dovecot

Whether you want control over your own mail, don't want to share data with an unknown third party, don't want to pay for several accounts when you could just pay for one server, or simply want to sink some time into another project: there are a lot of reasons to run your own mail server. But actually setting up a mail server that works can be tricky.

In this article we explore how to set up a self-hosted mail server that can serve multiple mail domains on the internet. We run all applications, like the required mail transfer agent (MTA) and the IMAP server, on one machine. The users we want to receive mail for and send mail from don't have accounts on that machine; instead, we store their user data in a database that the other applications can access. We run the mail server processes in Docker containers to make the setup more modular and to get rid of the dependency on the underlying operating system and its library versions.
I have already shown how to create Docker images for the MTA Postfix and the IMAP server Dovecot in other posts. We will not repeat that information here, as it would make this article too long and cluttered. The config settings in this article are geared towards these Docker images, but you can of course adapt them to fit other installations. For the sake of brevity there is no section on spam filtering or mail frontends; we will pick up these topics in future posts.
Also: this post mostly focuses on the configuration aspect and less on the technical details of a mail server and the protocols it uses. A minimal sketch of the container layout follows below.
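To make the architecture concrete before we dive in, here is how the two containers could be wired together with Docker Compose. The image tags are the ones built in my Postfix and Dovecot posts; the host ports and volume paths are just placeholders for your own setup:

docker-compose.yml
services:
  postfix:
    image: postfix:3.7.2-r0-1       # built as shown in the Postfix post
    ports:
      - "25:25"
      - "465:465"
      - "587:587"
    volumes:
      - ./postfix:/etc/postfix
  dovecot:
    image: dovecot:2.3.19.1-r0-2    # built as shown in the Dovecot post
    ports:
      - "143:143"
      - "993:993"
    volumes:
      - ./dovecot:/etc/dovecot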

A word of advice: for your mail server to accept mail from other domains you have to expose it TO THE INTERNET. If you now think "well, that sounds like a dumb idea" - it probably is. So take running a mail server seriously, do your updates, and learn how to secure your server and data.

With that said: let’s create our mail server!

Read More

How to Build a Dovecot Docker Image

Another image I built for myself because I don't want to use unofficial third-party images from DockerHub; they could do anything with my precious emails. Besides, building a Docker image for Dovecot is pretty straightforward.

The Dockerfile just has to install the Dovecot packages, expose the LMTP and IMAP ports, and start Dovecot:

Dockerfile
FROM alpine:3.16

# You might not need the pigeonhole (sieve) plugin or the rspamd package
RUN apk update && \
    apk add dovecot dovecot-ldap dovecot-lmtpd dovecot-pigeonhole-plugin rspamd-client && \
    rm -rf /var/cache/apk/*

COPY config/conf.d/10-logging.conf /etc/dovecot/conf.d/10-logging.conf

EXPOSE 24/tcp
EXPOSE 143/tcp
EXPOSE 993/tcp

CMD /usr/sbin/dovecot -F

Dovecot will start in the foreground, keeping the container alive. We want to check the logs via docker logs, so we redirect all Dovecot log output to stderr:

config/conf.d/10-logging.conf
log_path = /dev/stderr

The Dockerfile will copy that config file to the image.

Start the build like this:

docker build -t dovecot:2.3.19.1-r0-2 .
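To take it for a spin, you could run the image like this (a sketch; the mounted directory is a placeholder and would have to contain your full Dovecot configuration):

docker run -d --name dovecot \
    -p 143:143 \
    -p 993:993 \
    -v $(pwd)/dovecot-config:/etc/dovecot \
    dovecot:2.3.19.1-r0-2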

Done. Again, you can pick up my Dovecot image on DockerHub. But I guess if you read this article, you want to build your own image?

Also, check out my Postfix image.

Read More

How to Build a Postfix Docker Image

Update 2022-10-16: I added some details on logging and the postmap operations, plus the image link.

Did you ever need a Postfix Docker image and then found out that there is no official one? Well, a lot of people did, so there are dozens if not hundreds of Postfix images on DockerHub. But do you really want to use an image made by a complete stranger? After all, the image could do anything, or send your mails anywhere.

Well, I had the same problem while preparing an upcoming post. I decided to create yet another image, but to share the code so you can build one yourself. And actually, it's super easy. Here is the Dockerfile:

Dockerfile
FROM alpine:3.16

# I need the ldap package, you might not
RUN apk update && \
    apk add postfix postfix-ldap && \
    rm -rf /var/cache/apk/*

EXPOSE 25/tcp
EXPOSE 465/tcp
EXPOSE 587/tcp

# You might need more or less map operations before startup
CMD postmap /etc/postfix/aliases && \
    postmap /etc/postfix/roleaccount_exceptions && \
    postmap /etc/postfix/virtual && \
    /usr/sbin/postfix start-fg

I assume that you will mount your config files into /etc/postfix. So that we don't have to mount the compiled map files as well, I added the postmap commands: when the container starts, it rebuilds the maps you need and then starts Postfix in the foreground.
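For illustration, starting the container could then look like this (a sketch; the mounted directory is a placeholder and has to contain your main.cf, master.cf, and the map source files from the CMD above):

docker run -d --name postfix \
    -p 25:25 -p 465:465 -p 587:587 \
    -v $(pwd)/postfix-config:/etc/postfix \
    postfix:3.7.2-r0-1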

To redirect all postfix output to the console, add this to main.cf:

/etc/postfix/main.cf
maillog_file = /dev/stdout

That way you can access the output via docker logs. You might need an additional entry in master.cf, but it should already be present. Look it up in the Postfix documentation.

Start the build like this:

docker build -t postfix:3.7.2-r0-1 .

And you are done. Or you can use my Postfix Docker image, if you dare to. Also, check out my Dovecot image.

Read More

ApacheDS Container GitHub Repository

ApacheDS container update: I copied my code to a GitHub repository for easier access. I also updated ApacheDS and the base container to the current release versions. The earlier instructions on how to set up an ApacheDS Docker container still apply.

If you just want to use the container, use my DockerHub repository.

A word on versioning: the tags follow the ApacheDS naming scheme with an added build number. For version “2.0.0.AM26” the tag name will be “2.0.0.AM26-0” for the first build, “2.0.0.AM26-1” for the second, and so on.

Read More

Hosting Multiple Domains and Custom Certificates With Traefik

I was asked how to configure multiple domains in Traefik when one of them is an internal network domain that cannot get a Let's Encrypt certificate. Actually, it's pretty easy: just add your services. Let's look at an example.

Multiple Domains

Traefik discovers your services using the method you specify in its configuration file; several discovery variants are available. Here, we will use auto-discovery of Docker containers.
When Traefik encounters a Docker container, it reads the container's labels to deduce the domain the service should run on and whether or not you want to use TLS. If you enable the Let's Encrypt certificate resolver beforehand and set the right labels, the new domain with TLS will just work, as the sketch below shows.
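As a sketch, the labels for two such services could look like this in a Compose file. The router names, hostnames, and the resolver name letsencrypt are examples; the internal router just sets tls=true without a resolver, so Traefik falls back to the default certificate you configured for it (e.g. your internal CA's cert):

docker-compose.yml
services:
  public-app:
    image: my-public-app        # placeholder image
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.public.rule=Host(`app.example.com`)"
      - "traefik.http.routers.public.tls.certresolver=letsencrypt"
  internal-app:
    image: my-internal-app      # placeholder image
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.internal.rule=Host(`app.intern.example`)"
      - "traefik.http.routers.internal.tls=true"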

Read More

Grafana Datasource With Custom CA Certificate

Today I had to figure out how to format a Grafana datasource with a custom CA for literally the 99th time. How can this be so hard? It doesn't help that there is conflicting information on how to format certificates in YAML, even in the Grafana community threads, which are always the first page I find on the topic. The second is the Stack Overflow post on how to break strings in YAML, which sends you down a totally wrong path.
The error messages you get from the Grafana server logs are not very helpful either, provided you can access them at all:

Failed to call resource" error="Get \"https:///tnglab.fritz.box/prometheus...": x509: certificate signed by unknown authority

Well, thanks for nothing!

So, once and for all! This is the way:
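In short (a sketch; the datasource name is an example and the URL is taken from the error above): the certificate goes into secureJsonData.tlsCACert as a YAML block scalar, with every PEM line indented to the same level:

datasource.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: https://tnglab.fritz.box/prometheus
    jsonData:
      tlsAuthWithCACert: true
    secureJsonData:
      tlsCACert: |
        -----BEGIN CERTIFICATE-----
        MIIB... (rest of the certificate body, indented like the BEGIN line)
        -----END CERTIFICATE-----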

Read More

Collecting logs in the cloud with Grafana Loki

In the good old days you had one server running your services. When something failed, you logged in via SSH and checked the corresponding log file. Today, it is rare for a single server to run all your services. So the log files are distributed over multiple machines and access methods: from journald and docker logs over syslog to plain files, there are just too many options to check the logs efficiently, especially if you use scale sets on Azure or something equivalent to dynamically adjust the number of VMs to the workload.
A common solution is to introduce an Elasticsearch, Logstash, and Kibana (ELK) stack that gathers the logs and makes them searchable. That's a nice solution, albeit a resource-intensive one.

We want to look at a more lightweight alternative: the log aggregator Grafana Loki. Like Elasticsearch, it stores logs that are gathered by log shippers such as Promtail, and you can then display them with Grafana.
But unlike Elasticsearch, Loki is lightweight. That's mostly because it omits Elasticsearch's main feature: full-text search. Instead, much like Prometheus, Loki stores log lines annotated with labels that you can later filter on, so there is no indexed search over the log text itself.
The upside is low hardware requirements. I run Loki comfortably on a Raspberry Pi 3B, where it collects logs from several systems while staying below 1% CPU at all times. An ELK stack would have serious problems even running on the Pi 3B, mostly due to its 1 GB of system memory.
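To give a feel for the label-based model, here is a LogQL query sketch (the label names host and job are assumptions about your Promtail setup): it selects log streams by label and then filters the lines for a substring.

{host="raspi3b", job="docker"} |= "error"

The line filter only runs over the pre-selected streams, which is what keeps Loki cheap compared to a full-text index.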

Read More

Azure Scale Set Monitoring With Prometheus and Grafana

When running more and more machines, it becomes impractical to check on each of them by logging in and going through the numbers yourself. This is especially true for a variable number of machines, as in cloud scale sets.
So what can we do? Prometheus is a popular solution for collecting and storing metrics from your machines. You can then browse them either via its included web interface or via third-party apps like Grafana.

In this post we will look at a practical example of metric collection with Prometheus on Microsoft Azure scale sets. I assume that you already have an Azure deployment set up. If not, check out my post on Microsoft Azure VM deployment.

We will run Prometheus in a Docker container on a jumphost VM, behind the Traefik instance that is already present there. I have a post about how to set up Traefik with Ansible on your jumphost if you need it. Prometheus will then fetch the metrics from a small exporter app on each of the Azure scale set VMs. Finally, we display the data with Grafana, which also runs in a container on the jumphost.
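As a rough preview, Prometheus can even discover the scale set VMs on its own via its built-in Azure service discovery. A minimal scrape config sketch (all credentials are placeholders, and port 9100 assumes a node-exporter-style endpoint; the exporter in this post may use a different port):

prometheus.yml
scrape_configs:
  - job_name: "scale-set-vms"
    azure_sd_configs:
      - subscription_id: "<subscription-id>"
        tenant_id: "<tenant-id>"
        client_id: "<client-id>"
        client_secret: "<client-secret>"
        port: 9100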

Read More