[bug] Possible memory leak in 3.11.0 #711

Closed
opened 2024-02-28 07:56:28 +00:00 by fyr77 · 4 comments

Your setup

Docker

Extra details

Host: OpenSuse Tumbleweed/Slowroll; Docker-CE; Official docker container setup

Version

2024.02

PostgreSQL version

14

What were you trying to do?

Running the instance normally. There is a cronjob that runs at night and performs the cleanup tasks as well as a VACUUM ANALYZE, without shutting the instance down while doing so. Could this be the culprit?
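For context, the nightly job looks roughly like this (a sketch only; the container names, paths and task invocation are illustrative, not copied from the actual script):

```sh
#!/bin/sh
# Nightly maintenance run from the host's crontab (e.g. at 03:00).
# Container names and the pleroma_ctl path are assumptions based on the
# official docker setup, not the exact values used here.

# Akkoma's built-in cleanup task (prunes old remote objects)
docker exec akkoma /opt/akkoma/bin/pleroma_ctl database prune_objects

# Reclaim space and refresh planner statistics without stopping the instance
docker exec akkoma-db psql -U akkoma -d akkoma -c 'VACUUM ANALYZE;'
```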

What did you expect to happen?

Memory usage should stay somewhat steady; a friend reports his instance at ~1 GB total.

What actually happened?

RAM usage climbs slowly with no observable limit. The first time I noticed, the beam.smp process was using ~29 GB of memory. After an instance restart it climbed back to ~11 GB in around 2 days.

Logs

No response

Severity

I cannot use it as easily as I'd like

Have you searched for this issue?

  • I have double-checked and have not found this issue mentioned anywhere.
fyr77 added the
bug
label 2024-02-28 07:56:28 +00:00
Author

Update: since I reported this, I have disabled my nightly script to check whether it is the culprit. Unfortunately it made no difference; the memory usage of the beam.smp process has again climbed to ~8 GB by now and shows no sign of stopping.

Author

I am unsure what I have changed, but it seems to have stopped. I removed one relay and rebuilt the instance; I have no idea whether that was a coincidence. Perhaps this helps someone in the future. If it stays like this, I will close this issue tomorrow.
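For the record, the relay was removed with the usual relay task, roughly like this (container name, path and relay URL are placeholders, not the actual values):

```sh
# Unfollow a relay; the URL below is a placeholder, not the relay in question.
docker exec akkoma /opt/akkoma/bin/pleroma_ctl relay unfollow https://relay.example.com/actor
```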

Member

Great that the problem solved itself :)

For reference, for anyone else encountering something like this: it'd be helpful to check the live dashboard (`/phoenix/live_dashboard`) to see _what_ consumes so much memory. The “Home”, “Processes” and “Ecto Stats” tabs are probably particularly helpful.
Testing whether more frequent garbage collector sweeps (`ERL_FULLSWEEP_AFTER`) help might also be good to know.
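With the official docker setup, that could be tested by putting the variable into the akkoma service's environment, e.g. (a sketch; 16 is just a commonly used starting value, not a tuned recommendation):

```sh
# Add to the akkoma service's environment in docker-compose.yml
# (the exact file layout depends on the setup):
#
#   environment:
#     - ERL_FULLSWEEP_AFTER=16
#
# then recreate the container so the VM picks it up:
docker compose up -d akkoma
```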

Author

@Oneric Thanks for the info, I will keep that in mind should something like this happen again.
Memory usage remains unchanged at around 600 MB, so I am closing this issue.

fyr77 closed this issue 2024-03-01 06:46:09 +00:00
Reference: AkkomaGang/akkoma#711