Using Consul and Nomad to run LibreTranslate for a Mastodon Server

DALL-E-generated image of a cartoon mastodon with brown fur. The tusks are a bit of a mess, with one growing in and the other growing up.

I am an admin for a Mastodon Instance called

When I first started setting up the server, Mastodon, PostgreSQL, Redis, Elasticsearch, and LibreTranslate all ran on the same VM. Unfortunately, I found that LibreTranslate timed out pretty often, and I wondered if there was a resource issue (I had several languages loaded). Doing some basic testing, I found that with all the services running, I was hitting the 8 GB I had assigned to the VM.

I have been working with and learning HashiCorp Consul (OSS) at work for about six months at this point, and I thought that I could use HashiCorp Nomad to run a number of LibreTranslate instances, exposed to my Mastodon instance through a service mesh provided by Consul.

The first thing I did was create a custom Docker image with all the languages I wanted already installed, as well as an entrypoint defined so that it would start the LibreTranslate service.
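A minimal sketch of such a Dockerfile (not my exact one) might look like the following. It assumes the upstream libretranslate/libretranslate image, its LT_LOAD_ONLY environment variable, and the project's install_models.py helper; the language list is illustrative.

```dockerfile
# Sketch: bake the desired language models into the image at build time
# so containers start without downloading anything. Base image and helper
# script path are assumptions about the upstream project layout.
FROM libretranslate/libretranslate:latest

# Only load the languages the instance actually needs.
ENV LT_LOAD_ONLY=en,es,fr,de,ja

# Pre-download the models during the build instead of at container start.
RUN ./venv/bin/python scripts/install_models.py --load_only_lang_codes en,es,fr,de,ja

# The base image's entrypoint already starts LibreTranslate on port 5000.
EXPOSE 5000
```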

I then defined a Nomad Job, complete with a proxy stanza to expose the LibreTranslate Service over the Service Mesh.
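A job along these lines could be sketched as follows (names, counts, and resource figures are illustrative, not my exact job). The connect { sidecar_service {} } block is what registers the task in Consul's service mesh with an Envoy sidecar proxy.

```hcl
# Sketch of a Nomad job exposing LibreTranslate over the Consul mesh.
job "libretranslate" {
  datacenters = ["dc1"]

  group "libretranslate" {
    count = 2

    network {
      mode = "bridge"
    }

    service {
      name = "libretranslate"
      port = "5000"

      connect {
        # Registers an Envoy sidecar so the service is reachable
        # only through the mesh.
        sidecar_service {}
      }
    }

    task "libretranslate" {
      driver = "docker"

      config {
        # Locally built image referenced by tag rather than a registry.
        image = "libretranslate:local"
      }

      resources {
        cpu    = 1000
        memory = 2048
      }
    }
  }
}
```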

The first hiccup I ran into was that the image I created was very large, and uploading it to Docker Hub was going to take a lot longer than I wanted. I decided it would be faster to just rebuild the Docker image on my Nomad nodes and reference a local tag (yes, I am aware this is not optimal, but for my purposes it works).

With the Docker image built on the Nomad hosts, I was able to deploy the Nomad job for LibreTranslate. Since the containers are effectively stateless, I didn't include persistent storage from my NAS.

Once the LibreTranslate job was up, I tested connecting to it over the service mesh from another server, and I was able to connect and get translations back extremely quickly.

I installed Consul on the Mastodon server and configured Envoy to expose the LibreTranslate jobs running on Nomad locally on port 5000 on the Mastodon server (so I wouldn't need to reconfigure Mastodon). I stopped and disabled the local LibreTranslate service and started the Envoy proxy for the Mastodon server. I posted a test status, and had success with the new LibreTranslate.
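A hypothetical Consul service registration on the Mastodon host could look like this. The sidecar's upstream maps the mesh service "libretranslate" to 127.0.0.1:5000, so the existing Mastodon configuration keeps pointing at the same local endpoint; the service name and port here are assumptions.

```hcl
# Sketch: Consul service definition on the Mastodon server. The upstream
# binds the remote LibreTranslate mesh service to local port 5000.
service {
  name = "mastodon"
  port = 3000

  connect {
    sidecar_service {
      proxy {
        upstreams {
          destination_name = "libretranslate"
          local_bind_port  = 5000
        }
      }
    }
  }
}
```

With that registered, the Envoy sidecar can be started with `consul connect envoy -sidecar-for mastodon`, and requests to 127.0.0.1:5000 are routed over the mesh to the Nomad-scheduled instances.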

Later, I did see that with every new translation, the memory usage on the jobs creeps up. Once it reaches a certain point, translations start to take longer and even time out. Recycling the jobs fixes the issue, but it is interesting to see how much RAM the LibreTranslate service uses over time. I may tweak the entrypoint script to automatically recycle itself after a random amount of time just to keep the performance fresh.
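One way that self-recycling entrypoint could be sketched: run LibreTranslate under a random deadline and let Nomad's restart behavior bring up a fresh task. The binary path and interval bounds are assumptions, not something from my actual setup.

```shell
#!/bin/sh
# Hypothetical wrapper entrypoint: exit after a random interval so the
# scheduler restarts the task with a clean heap.
MIN=21600   # 6 hours, in seconds
MAX=43200   # 12 hours, in seconds

# Two bytes from /dev/urandom give 0..65535; map that into [MIN, MAX).
LIFETIME=$(( MIN + $(od -An -N2 -tu2 /dev/urandom) % (MAX - MIN) ))
echo "libretranslate wrapper: recycling after ${LIFETIME}s"

# Hand off to the real entrypoint under a deadline; when timeout kills it,
# Nomad's restart stanza starts a fresh task. (The path is an assumption
# about the upstream image layout.)
if [ -x ./venv/bin/libretranslate ]; then
  exec timeout "${LIFETIME}s" ./venv/bin/libretranslate "$@"
fi
```

Randomizing the lifetime keeps multiple instances from all restarting at the same moment, so at least one stays available.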

If you would like to comment on this post, please follow this account through an ActivityPub account (like Mastodon)

1 Comment

  1. Josh Knapp :verified: on March 1, 2023 at 9:17 pm

    @me I had looked at using DeepL translation, but after reading their TOS I wasn't wanting to use the service, so I will continue using LibreTranslate.