DocSpace 3 backup in Docker container runs VERY slowly

I am running an on-demand backup to a room in DocSpace that holds the backup tar sets.

The backup DOES run, but it runs VERY slowly.

The total volume of data in the rooms is only 8.1 GB.

docker ps --quiet | xargs --no-run-if-empty docker inspect --format 'table {{.Name}}\t{{.Config.Image}}'

table /onlyoffice-router\tonlyoffice/docspace-router:3.0.0.1
table /onlyoffice-people-server\tonlyoffice/docspace-people-server:3.0.0.1
table /onlyoffice-ssoauth\tonlyoffice/docspace-ssoauth:3.0.0.1
table /onlyoffice-doceditor\tonlyoffice/docspace-doceditor:3.0.0.1
table /onlyoffice-notify\tonlyoffice/docspace-notify:3.0.0.1
table /onlyoffice-backup-background-tasks\tonlyoffice/docspace-backup-background:3.0.0.1
table /onlyoffice-login\tonlyoffice/docspace-login:3.0.0.1
table /onlyoffice-studio-notify\tonlyoffice/docspace-studio-notify:3.0.0.1
table /onlyoffice-files\tonlyoffice/docspace-files:3.0.0.1
table /onlyoffice-api\tonlyoffice/docspace-api:3.0.0.1
table /onlyoffice-backup\tonlyoffice/docspace-backup:3.0.0.1
table /onlyoffice-studio\tonlyoffice/docspace-studio:3.0.0.1
table /onlyoffice-socket\tonlyoffice/docspace-socket:3.0.0.1
table /onlyoffice-api-system\tonlyoffice/docspace-api-system:3.0.0.1
table /onlyoffice-files-services\tonlyoffice/docspace-files-services:3.0.0.1
table /onlyoffice-clear-events\tonlyoffice/docspace-clear-events:3.0.0.1
table /onlyoffice-proxy\tnginx:latest
table /onlyoffice-healthchecks\tonlyoffice/docspace-healthchecks:3.0.0.1
table /onlyoffice-identity-api\tonlyoffice/docspace-identity-api:3.0.0.1
table /onlyoffice-identity-authorization\tonlyoffice/docspace-identity-authorization:3.0.0.1
table /onlyoffice-document-server\tonlyoffice/documentserver:8.2.2.1
table /onlyoffice-redis\tredis:7
table /onlyoffice-rabbitmq\trabbitmq:3
table /onlyoffice-mysql-server\tmysql:8.3.0
table /onlyoffice-opensearch-dashboards\topensearchproject/opensearch-dashboards:2.11.1
table /onlyoffice-opensearch\tonlyoffice/opensearch:2.11.1
table /onlyoffice-fluent-bit\tfluent/fluent-bit:3.0.2

dpkg --list | grep docker | awk '{print $2, "\t", $3}'

docker-buildx-plugin 	 0.17.1-1~ubuntu.22.04~jammy
docker-ce 	 5:27.3.1-1~ubuntu.22.04~jammy
docker-ce-cli 	 5:27.3.1-1~ubuntu.22.04~jammy
docker-ce-rootless-extras 	 5:27.3.1-1~ubuntu.22.04~jammy
docker-compose-plugin 	 2.29.7-1~ubuntu.22.04~jammy

Ubuntu 22.04

Thanks
Ivan

Hello @ivan

How slow is it exactly? As far as I know it backs up not only documents, but rooms and other files too. What is the total size of a backup that you’ve made previously?

Hi Constantine, happy New Year. Backing up DocSpace is NEW to me. There are 25 rooms (or so) and only 9 GB of data across all those rooms, so I’m surprised it’s slow. Where can I get metrics I could share with you?
thanks


Thanks. Happy New Year to you too.

I’m not aware of such metrics, which is why I am asking how long it takes in your case. 9 GB of data is already a significant amount, and it backs up rooms too. In your screenshot, the rooms take an additional 33 GB.
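To put a number on "how slow", one rough approach is to sample the size of the growing backup file twice, a known number of seconds apart, and convert that into throughput. A minimal sketch (the function name and figures are illustrative, not part of DocSpace):

```shell
# Sketch: convert two size samples of the growing backup file, taken
# N seconds apart, into an integer MB/s figure.
throughput_mb_s() {
  local bytes_start=$1 bytes_end=$2 seconds=$3
  echo $(( (bytes_end - bytes_start) / seconds / 1024 / 1024 ))
}

# Example: 1 GiB written in 60 seconds -> prints 17 (MB/s)
throughput_mb_s 0 1073741824 60
```

Watching the temp file under the backup service while a backup runs would show whether throughput is steady or stalling.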

Hi. Is there a way to exclude specific rooms from backup?

I was looking at this OLD code:

https://hub.docker.com/layers/onlyoffice/4testing-appserver-backup-background/develop-2022090914/images/sha256-beb552ab927aa01d37533f3eb415e1aacf03577e2bd339990860ce759e01e923?context=explore

Is the backup created INSIDE the container and THEN copied to (say) WebDAV, or is a socket opened to WebDAV and the backup created directly on the (file-)serving host?

The ASC.Data.Backup.BackgroundTasks.dll also seems to run single-threaded - is this the only way it can be run?

Thanks
Ivan

I’m afraid not, as it would break the restoration logic: if a file is in a room but the room is not backed up, then there is no place to put the file back once restoration is initiated, and the file will no longer be available from the interface.

Backup is created locally first and then transferred to the external storage. This is done to make sure the backup is created at all in case there are issues with the external storage, potential external storage restrictions or, for instance, storage “speed limits”.

I’m not sure if I follow this question. What behavior do you expect?

Thanks Constantine. I think the backup-creation bottleneck in my case is the resource limits I set on the Docker containers themselves, given that the backup is generated locally first.

I’ll investigate resource consumption there. As for the backup process itself, can you confirm it’s single-threaded?
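For that investigation, `docker stats onlyoffice-backup` gives a live view from the host. As a sketch of checking the limit from inside the container instead (assuming cgroup v2, the default on Ubuntu 22.04 with recent Docker; the helper name is made up):

```shell
# Sketch: report memory usage vs. limit from the cgroup v2 files that
# Docker resource limits are enforced through. Intended to run inside a
# container; the default /sys/fs/cgroup mount is assumed and overridable.
cgroup_mem() {
  local base="${1:-/sys/fs/cgroup}"
  echo "limit: $(cat "$base/memory.max" 2>/dev/null || echo unknown)"
  echo "usage: $(cat "$base/memory.current" 2>/dev/null || echo unknown)"
}
```

If usage sits right at the limit while a backup runs, the container is likely thrashing against its memory cap.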

I also think it would be good to offer an option to “create the backup remotely, immediately”, given that the transfer would happen over TCP and SSL (I’d hope :wink: ).

By NOT offering this, there is “double duty” happening in DocSpace on every backup, i.e. [1] create the backup and then [2] copy the backup, as opposed to just creating my backup remotely NOW.
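For what it’s worth, the “create remotely NOW” idea can be sketched in plain shell: tar streams the archive to stdout and curl uploads the stream, so no local scratch copy is ever materialized. The source path and WebDAV URL below are placeholders, not anything DocSpace actually provides:

```shell
# Sketch of streaming a backup straight to remote storage: tar writes the
# archive to stdout and curl uploads that stream with a PUT, so nothing
# lands on local disk. Source path and URL are placeholders.
stream_backup() {
  local src="$1" url="$2"
  tar -czf - -C "$src" . | curl -sf -T - "$url"
}

# e.g. stream_backup /var/www/data https://webdav.example.com/docspace.tar.gz
```

The trade-off Constantine described still applies: if the remote side stalls mid-stream, there is no finished local copy to fall back on.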

As always, thanks for your help!

Which container runs the backup process, Constantine?
thanks
ivan

The backup service runs in a single process.

It runs in the onlyoffice-backup container.

I really would like the team to consider allowing backups to be written DIRECTLY to WebDAV/Box etc. from the backup container, without generating them via overlay2 first.

Why?

Today I ran out of space in /var on the host.

Backups were created in a backup room, then copied elsewhere and then deleted using the UI, BUT there were LARGE files left behind in two places:

/var/lib/docker/overlay2/cf6xxxxx/diff/var/www/services/ASC.Data.Backup.BackgroundTasks/temp

AND

/var/lib/docker/overlay2/cf63xxxxx/merged/var/www/services/ASC.Data.Backup.BackgroundTasks/temp

This led to /var filling up on the host, and DocSpace was unreachable.
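To catch this before /var fills, something along these lines can be run on the host as root. It is only a sketch: the path pattern comes from the temp directories quoted above, and the 100 MB threshold is arbitrary:

```shell
# Sketch: list leftover backup temp files above a size threshold under a
# given root (e.g. /var/lib/docker/overlay2 on the host).
find_stale_backup_temp() {
  local root="$1"
  find "$root" -type f -path '*ASC.Data.Backup.BackgroundTasks/temp/*' -size +100M 2>/dev/null
}

# Typical host invocation:
# find_stale_backup_temp /var/lib/docker/overlay2
```

Note that the diff and merged paths show the same file: merged is just the overlay mount over diff, so the data is stored once, and removing it from inside the container frees the space in both views.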

Can we make it an official RFE?

Thanks for considering
Ivan

PS: at a minimum there needs to be a “sweeper” that warns about and/or cleans out “junk” left behind in overlay2 as part of backups.
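A hypothetical sweeper along those lines (the path pattern is taken from the directories quoted earlier; the one-day age threshold is an assumption) might look like:

```shell
# Hypothetical sweeper: delete backup temp files older than one day under
# a given root. -mtime +1 matches files at least two full days old.
sweep_backup_temp() {
  local root="$1"
  find "$root" -type f -path '*ASC.Data.Backup.BackgroundTasks/temp/*' -mtime +1 -print -delete 2>/dev/null
}

# Example cron entry, assuming the function is saved as a script:
# 30 3 * * * root /usr/local/sbin/sweep_backup_temp /var/lib/docker/overlay2
```

Running it inside the container against /var/www/services/ASC.Data.Backup.BackgroundTasks/temp would be safer than modifying overlay2 on the host directly.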