[bug] Possible memory leak in 3.11.0 #711
Your setup
Docker
Extra details
Host: openSUSE Tumbleweed/Slowroll; Docker-CE; official Docker container setup
Version
2024.02
PostgreSQL version
14
What were you trying to do?
Running the instance normally. A cronjob runs at night which performs the cleanup tasks as well as VACUUM ANALYZE; the instance is not shut down while these run. Could this be the culprit?
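For context, a minimal sketch of what such a nightly job could look like under the official docker-compose setup; the compose directory, the service names `akkoma` and `db`, and the database name are all assumptions:

```sh
#!/bin/sh
# nightly-cleanup.sh -- run from cron, e.g.: 0 3 * * * /opt/akkoma/nightly-cleanup.sh
# Service names, paths, and the database name are assumptions for this sketch.
cd /opt/akkoma || exit 1
# Prune old remote objects, then refresh planner statistics.
docker compose exec -T akkoma mix pleroma.database prune_objects
docker compose exec -T db psql -U akkoma -d akkoma -c 'VACUUM ANALYZE;'
```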
What did you expect to happen?
Memory usage should stay roughly steady; a friend reports his instance at ~1G total.
What actually happened?
RAM usage climbs slowly, with no observable limit. The first time I observed this, the beam.smp process was using ~29G of memory. After restarting the instance, it climbed back to ~11G within around two days.
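To quantify growth like this over time, a quick sketch for sampling the resident set size of `beam.smp` on the host (assumes a single beam.smp process and a GNU userland):

```sh
# Log a timestamped RSS sample (in MiB) for beam.smp every 5 minutes.
# Assumes exactly one beam.smp process is visible on the host.
while sleep 300; do
  rss_kib=$(ps -o rss= -p "$(pgrep -x beam.smp | head -n1)")
  printf '%s %d MiB\n' "$(date -Is)" "$(( rss_kib / 1024 ))"
done >> beam_rss.log
```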
Logs
No response
Severity
I cannot use it as easily as I'd like
Have you searched for this issue?
Update: since I reported this I have disabled my nightly script to check whether that is the culprit. Unfortunately it made no difference; the memory usage of the beam.smp process has again climbed to ~8GB by now and shows no sign of stopping.
I am unsure what I have changed, but it seems to have stopped. I removed one relay and rebuilt the instance; no idea whether that was coincidence or not. Perhaps this helps someone in the future. If it stays like this I will close this issue tomorrow.
Great that the problem solved itself :)
For reference for anyone else encountering something like this, it'd be helpful to check the live dashboard (`/phoenix/live_dashboard`) to see what consumes so much memory. The "Home", "Processes" and "Ecto Stats" tabs are probably particularly helpful. Testing whether enabling more frequent garbage collector sweeps (`ERL_FULLSWEEP_AFTER`) helps might also be good to know.

@Oneric Thanks for the info, will keep that in mind should something like that happen again.
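For anyone who wants to try the `ERL_FULLSWEEP_AFTER` suggestion above, a sketch of setting it, assuming the instance is started from a shell as an OTP release (for the Docker setup it would instead go into the service's environment); the value 20 is an arbitrary assumption, the BEAM default being 65535:

```sh
# Run a full-sweep GC on a process after 20 generational collections
# instead of the default 65535; lower values reclaim memory sooner at
# the cost of extra CPU. The value and the start command are assumptions.
export ERL_FULLSWEEP_AFTER=20
./bin/pleroma daemon
```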
Memory usage remains unchanged at around 600MB, so I am closing this issue.