RabbitMQ volume is huge on a Document Server in Docker

I have installed the Community Edition of Document Server in Docker.
After six months of moderate activity with only a few users, the RabbitMQ volume is huge (88 GB).

Question 1: is this normal, or the trace of a bad configuration?
Question 2: is there a tool/method for cleanup?
Question 3: if there is no tool, can I remove older entries/folders by hand?

Context info:
Docker image is onlyoffice/documentserver:latest
Ubuntu 22.04 on a PC with 32 GB RAM and only 250 GB of SSD
I used GitHub - ONLYOFFICE/docker-onlyoffice-nextcloud

Thanks for any hint
JC

Hello @jcdufourd,
Based on the provided info, nothing can be said yet, but this RabbitMQ behavior is not the usual one.
Please provide the following additional info:

  1. docker ps command’s output
  2. Describe how you update Document Server
  3. Provide the contents of /var/log/rabbitmq (inside the container)
  4. Provide the screenshot showing the excessive data usage by rabbitmq

Thank you for your message, DmitriiV

  1. docker ps output

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
34a868bae122 jcdufourd/nextcloud:latest "/entrypoint.sh php-…" 17 hours ago Up 16 hours 80/tcp, 3478/tcp, 9000/tcp app-server
d7997e575a75 mariadb:lts "docker-entrypoint.s…" 36 hours ago Up 16 hours 3306/tcp db-server
a853c79da9f9 onlyoffice/documentserver:latest "/app/ds/run-documen…" 36 hours ago Up 16 hours 80/tcp, 443/tcp onlyoffice-document-server
56d1c0607d19 nginx "/docker-entrypoint.…" 36 hours ago Up 16 hours 0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp nginx-server
75269030a571 jcdufourd/epitapi:latest "docker-entrypoint.s…" 36 hours ago Up 16 hours 0.0.0.0:2000->2000/tcp, [::]:2000->2000/tcp epitapi-server

  2. how I update Document Server

When I need to update one of my containers, I recreate all images and containers. Since the Document Server image is onlyoffice/documentserver:latest, it gets updated regularly this way.
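Sketched as a shell function (nothing here runs automatically; the exact script layout is an assumption, only the compose commands are the ones I actually use):

```shell
# Sketch of my old update routine: pull the latest images and recreate every
# container in the compose project, Document Server included.
update_all() {
  docker compose pull                    # fetches onlyoffice/documentserver:latest, etc.
  docker compose up -d --force-recreate  # recreates all containers
}
# update_all   # run from the compose project directory
```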

  3. /var/log/rabbitmq

The logs are at http://jcdufourd.free.fr/log.zip

  4. screenshot of excessive data usage: du in the Docker volume

root@linux:/var/lib/docker/volumes/bed5d9ab69b119145572eed15a09cdd2f5a38a2779105c76cf1b0da46f01e26d/_data# du -s -k *
88556360 mnesia
0 MnesiaCore.rabbit@6a0688b00df2_1730_500648_656580

Thanks again.

Please provide the following command’s output: rabbitmqctl list_queues name messages memory
Also, please upload the logs to some other hosting such as Dropbox; I was unable to download them.

root@a853c79da9f9:/# rabbitmqctl list_queues name messages memory
Timeout: 60.0 seconds …
Listing queues for vhost / …
name messages memory
amq.gen-3_mj9x1x9CDHHGSjEIuZgw 0 34936
ds.delayed 0 55440
ds.converttask 0 164760
ds.convertresponse 0 55736
ds.convertdead 0 55584

You are probably using Chrome, which refuses the non-HTTPS server. Here is an HTTPS URL for the logs:

https://perso.telecom-paris.fr/dufourd/log.zip

Thanks
JC

Please check which files are present in the mnesia folder:
du -sh * | sort -hr | head -20
Basically, the issue could result from not cleaning up the volumes after running new containers when updating.

Here is the result of the command you gave:

root@linux:/var/lib/docker/volumes/bed5d9ab69b119145572eed15a09cdd2f5a38a2779105c76cf1b0da46f01e26d/_data/mnesia# du -sh * | sort -hr | head -20
567M rabbit@64a1cd0f9194
562M rabbit@435efdeff2a3
557M rabbit@be39c35cf465
556M rabbit@cf3d15c02692
555M rabbit@d8363a67cd51
555M rabbit@b63c93732daa
549M rabbit@3b6938770c0e
546M rabbit@6bde4f808269
544M rabbit@c7b33635d7e8
544M rabbit@9907e7f09be1
542M rabbit@3a3ba8c221b8
538M rabbit@70d1f687afc7
537M rabbit@8eb71a95e7a7
537M rabbit@774f2b5f4ab5
535M rabbit@a9ce46951493
534M rabbit@77eae4cce5d9
534M rabbit@0db733a80b7f
533M rabbit@a5c5ff1a371f
530M rabbit@8942ae8ee083
529M rabbit@8cb0979a967d

You wrote "the result from not cleaning up the volumes": does this mean I can just remove all the older folders in there?

Thanks again
JC

Hello there,
familiarize yourself with the docker system prune command.

Hello bermuda
Thank you for your suggestion.
The 85 GB volume is actually still in use, so it will not get pruned.
docker system prune was actually one of the first things I tried…

My bad,
I was under the impression you had old dangling volume leftovers. But this seems to be a named volume that contains data from old RabbitMQ nodes. Yes, you can delete the old ones, with the exception of the latest one.

It seems hostname: can be set within the docker-compose file, under the service definition. This way RabbitMQ keeps the same node name and does not recreate everything after the container ID has changed.

Example:

services:
  onlyoffice:
    hostname: onlyoffice
    ...

Please provide the screenshot of docker volume ls command’s output

dufourd@linux:/var/lib$ docker volume ls
DRIVER VOLUME NAME
local 9c1610f99b11a58a3851490e6474bde05a6f88fa87b31bb071e40ca878d7743e
local b4aebe2d743c0ae51a890e3ab84c9ac2f10af5ba13de2ac88c3f658c226a3cfc
local b4ee177a2831651dabee4cf070e5ce93d9ee841d9a328ad7df3dc09127ec33df
local bed5d9ab69b119145572eed15a09cdd2f5a38a2779105c76cf1b0da46f01e26d
local docker-onlyoffice-nextcloud_app_data
local docker-onlyoffice-nextcloud_document_data
local docker-onlyoffice-nextcloud_document_log
local efe886d4ed0cae0afac37da4e77a0e1ba03eaa2e575afb84639d9a7fb62e1b04
local f512a3458943887d390295753b62b23f0337f5bd017505051ae140950a97947f
local nextcloud_app_data
local nextcloud_document_data
local nextcloud_document_log
local no_api_data
local no_app_data
local no_backups
local no_document_data
local no_document_log
local no_mysql_data
local s3_app_data
local s3_document_data
local s3_document_log

I develop a Node.js app in a container beside the ONLYOFFICE and Nextcloud ones.
Whenever I updated my container (sometimes multiple times a day), my rebuild script ended with "docker compose up -d --force-recreate", thus recreating all containers, including the ONLYOFFICE one.
The number and dates of the RabbitMQ folders are consistent with periods of development.
I believe this part of my behaviour is the cause of the multiplication of RabbitMQ folders in that 85 GB volume.
I have changed my rebuild script to only recreate my own container.
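For reference, the ending of my script now looks roughly like this (sketched as a function; "epitapi" is an assumed compose service name, use whatever your compose file calls the service):

```shell
# New rebuild ending: recreate only my own service. --no-deps tells compose
# not to touch the other containers, so the Document Server (and its RabbitMQ
# volume) stays as it is.
rebuild_app_only() {
  docker compose up -d --force-recreate --no-deps "$1"
}
# rebuild_app_only epitapi   # run from the compose project directory
```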

Question 1: is my belief correct?
Question 2: may I just delete the older folders in the RabbitMQ volume (as bermuda seems to suggest)?
Question 3: should something be added to ONLYOFFICE to remove useless folders in that volume?

Thanks
JC

Yes, your belief is correct; that is exactly the reason, and we do not recommend doing it this way.
You may delete the older folders; you do not need them at all, since you recreate everything from scratch.
Nothing needs to be added on the ONLYOFFICE side; cleaning up the volume should do it.
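By hand, that cleanup could look something like the sketch below. This is only an illustration, assuming the volume path and container name seen earlier in this thread: it keeps every mnesia entry belonging to the currently running node and deletes the rest. Stop the Document Server container before running it.

```shell
# Sketch: delete mnesia entries left over by old Document Server containers,
# keeping everything that belongs to the current RabbitMQ node.
prune_stale_nodes() {
  mnesia_dir=$1   # e.g. /var/lib/docker/volumes/<volume-id>/_data/mnesia
  live_node=$2    # hostname of the current container, e.g. a853c79da9f9
  for entry in "$mnesia_dir"/rabbit@*; do
    [ -e "$entry" ] || continue
    case "$entry" in
      *"$live_node"*) ;;            # keep the live node's directories/files
      *) rm -rf -- "$entry" ;;      # remove leftovers from old containers
    esac
  done
}

# Example invocation, with the names from this thread:
# docker stop onlyoffice-document-server
# prune_stale_nodes \
#   /var/lib/docker/volumes/bed5d9ab69b119145572eed15a09cdd2f5a38a2779105c76cf1b0da46f01e26d/_data/mnesia \
#   "$(docker inspect -f '{{.Config.Hostname}}' onlyoffice-document-server)"
# docker start onlyoffice-document-server
```

Combined with the hostname: setting in docker-compose, the folder should not grow back after this.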