Last night, more than 35 GB of logs were generated on our DocSpace server. All the log entries were identical and were written to the file aws-logger-errors.txt, which appeared in four different locations: /ASC.Studio.Notify, /ASC.ClearEvents, /ASC.Notify, and /ASC.ApiSystem.
I checked the connection to AWS, which we use for backups, and it works fine.
Has this happened to anyone before? Why would this happen just out of the blue?
System info:
Ubuntu 22.04
DocSpace installed from packages via the installation script.
Log entry:
01/10/2024 5:13:54 AM
Amazon.Runtime.AmazonServiceException: Unable to get IAM security credentials from EC2 Instance Metadata Service.
at Amazon.Runtime.DefaultInstanceProfileAWSCredentials.FetchCredentials()
at Amazon.Runtime.DefaultInstanceProfileAWSCredentials.GetCredentials()
at Amazon.Runtime.DefaultInstanceProfileAWSCredentials.GetCredentialsAsync()
at Amazon.Runtime.Internal.CredentialsRetriever.InvokeAsync[T](IExecutionContext executionContext)
at Amazon.Runtime.Internal.RetryHandler.InvokeAsync[T](IExecutionContext executionContext)
at Amazon.Runtime.Internal.RetryHandler.InvokeAsync[T](IExecutionContext executionContext)
at Amazon.Runtime.Internal.CallbackHandler.InvokeAsync[T](IExecutionContext executionContext)
at Amazon.Runtime.Internal.CallbackHandler.InvokeAsync[T](IExecutionContext executionContext)
at Amazon.Runtime.Internal.ErrorCallbackHandler.InvokeAsync[T](IExecutionContext executionContext)
at Amazon.Runtime.Internal.MetricsHandler.InvokeAsync[T](IExecutionContext executionContext)
at AWS.Logger.Core.AWSLoggerCore.LogEventTransmissionSetup(CancellationToken token)
at AWS.Logger.Core.AWSLoggerCore.Monitor(CancellationToken token)
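The exception above comes from the AWS SDK falling back to the EC2 Instance Metadata Service (IMDS) to fetch IAM credentials. On a self-hosted server that is not an EC2 instance, that endpoint is normally unreachable, which can be confirmed with a quick check (sketch; assumes curl is available):

```shell
#!/bin/sh
# Sketch: check whether the EC2 Instance Metadata Service is reachable.
# On a non-EC2 machine (such as a self-hosted DocSpace server) it normally
# is not, which would explain why the SDK cannot fetch IAM credentials.
check_imds() {
  if curl -s --max-time 2 "http://169.254.169.254/latest/meta-data/" >/dev/null 2>&1; then
    echo "IMDS reachable"
  else
    echo "IMDS not reachable"
  fi
}

check_imds
```

The SDK only falls back to instance-metadata credentials when it finds no access keys elsewhere in its credential chain, so seeing this error suggests the logging component never received the configured keys in the first place.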
We will run some tests on our side to replicate the issue and gather as many details as possible. I will let you know if we find out anything, or in case any additional details from you are required.
I also tried changing the AWS credentials, but that did not change anything. I have configured logrotate to remove the spam logs, but this is just a workaround.
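For reference, a logrotate rule for this kind of runaway log might look like the following (the path glob is a guess at where the four copies of aws-logger-errors.txt live on a package install, so adjust it to the actual locations; copytruncate is used because the services keep the file open):

```
# Hypothetical /etc/logrotate.d/docspace-aws-logger -- adjust the glob to
# wherever the four aws-logger-errors.txt copies actually live.
/var/log/onlyoffice/docspace/*/aws-logger-errors.txt {
    daily
    rotate 3
    missingok
    notifempty
    compress
    copytruncate
}
```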
I think this issue is connected to the backup issue I mentioned earlier.
If I do not restart docspace-backup-background.service and docspace-backup.service daily, the daily backups fail.
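(As a stop-gap, that daily restart could be automated rather than done by hand; a cron entry along these lines would do it, with a hypothetical file path and an arbitrary schedule that you would want to place before your backup window:)

```
# Hypothetical /etc/cron.d/docspace-backup-restart -- workaround only.
# Restarts the two backup services nightly so the daily backup can run.
30 1 * * * root systemctl restart docspace-backup-background.service docspace-backup.service
```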
Was there any particular action after which the log has appeared?
Not that I recall.
Do I understand correctly that S3 is connected as the storage for automatic backups?
Yes, it is used as automatic daily backup storage.
Did you change the S3 access keys? Can a backup be performed there now?
Yes, I changed them, and the backup works for a couple of days and then stops again.
How exactly did you configure DocSpace to print the spam logs?
It is the default config. I have changed nothing related to logs.
Does DocSpace return any errors if the service is not restarted?
I found out that every time the backup failed, the ownership of two config files was changed from onlyoffice:onlyoffice to root:root, which I found really odd. I set up a watcher to monitor those two files and discovered that there seems to be a Node.js script running that recreates the two config files and removes the old ones. After they are recreated they get the wrong ownership, and it seems the backup services cannot access the configuration.
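A minimal version of such an ownership watcher could look like this sketch. The exact files are whichever two configs keep flipping to root:root on the affected server; appsettings.json is shown purely as a placeholder:

```shell
#!/bin/sh
# Sketch: report config files whose ownership has drifted from the expected
# user:group. Pass the expected owner first, then the paths to watch.
check_owner() {
  expected="$1"; shift
  for f in "$@"; do
    [ -e "$f" ] || continue
    owner=$(stat -c '%U:%G' "$f")
    if [ "$owner" != "$expected" ]; then
      echo "ownership drift on $f: $owner (expected $expected)"
      # chown "$expected" "$f"   # uncomment to repair automatically
    fi
  done
}

# Placeholder path -- substitute the two files that keep being recreated.
check_owner onlyoffice:onlyoffice /etc/onlyoffice/docspace/appsettings.json
```

Run from cron or a systemd timer, this only papers over the symptom; the interesting question remains what recreates the files as root in the first place.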
The two files that are constantly recreated with the wrong ownership are these:
Are you sure? Please double-check the /etc/onlyoffice/docspace/appsettings.json config and the aws.cloudWatch section in it. The logs you've mentioned can appear only when CloudWatch is configured and there are problems with access to it.
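A quick way to act on that suggestion is to list any CloudWatch-related settings in the config. This sketch uses the path from the reply above and a plain grep, so the key name it looks for is an assumption to verify against your actual file:

```shell
#!/bin/sh
# Sketch: show any CloudWatch-related lines in the DocSpace config so you
# can see whether the aws.cloudWatch section is filled in.
show_cloudwatch() {
  if [ -f "$1" ]; then
    grep -in 'cloudwatch' "$1" || echo "no cloudWatch section found in $1"
  else
    echo "config not found: $1"
  fi
}

show_cloudwatch /etc/onlyoffice/docspace/appsettings.json
```

If the section is present with stale or empty credentials, that would match both symptoms: the logger spamming credential errors and access breaking again a few days after keys are changed.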