akkoma/lib
Oneric 4b5a398f22
Avoid accumulation of stale data in websockets
We’ve received reports of some specific instances slowly accumulating
more and more binary data over time, eventually leading to OOMs, and
globally setting ERL_FULLSWEEP_AFTER=0 has proven to be an effective
countermeasure. However, this incurs increased CPU costs everywhere
and is thus not suitable to apply out of the box.
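As a reference for the global countermeasure mentioned above: the BEAM reads the ERL_FULLSWEEP_AFTER OS environment variable at startup and uses it as the default fullsweep_after value for all processes. A minimal sketch of applying it (the release name `akkoma` and its start command are placeholders for however the instance is launched):

```shell
# Force a fullsweep GC on every collection, for every process in the VM.
# Effective against binary leaks, but adds CPU cost across the board.
export ERL_FULLSWEEP_AFTER=0
./bin/akkoma start
```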

There are many reports unrelated to Akkoma of long-lived Phoenix
websockets getting into a state unfavourable for the garbage
collector depending on usage pattern, resulting in exactly the
observed behaviour.
Therefore it seems likely affected instances are using timeline
streaming and do so in just the right way to trigger this. We
can tune the garbage collector just for websocket processes,
using a more lenient fullsweep_after value of 20 to keep the
added performance cost in check.
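Such per-process tuning boils down to a single `process_flag/2` call made from inside the websocket process itself. A minimal sketch, assuming a hypothetical handler module (the module and callback shape are illustrative, not Akkoma's actual handler):

```elixir
defmodule MyApp.StreamingWebsocketHandler do
  # Illustrative init callback of a websocket handler process.
  def init(state) do
    # Force a fullsweep GC after 20 generational collections instead of
    # the default 65535, so stale off-heap binary references held by this
    # long-lived process are reclaimed promptly. Affects only this process.
    :erlang.process_flag(:fullsweep_after, 20)
    {:ok, state}
  end
end
```

Phoenix's WebSocket transport also accepts a `:fullsweep_after` option in the endpoint's `socket`/`websocket` configuration, which achieves the same effect without touching the handler code.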

Unfortunately, none of the affected instances responded to inquiries
to test this more selective GC tuning, so it is not fully verified.
However, the general reports regarding websockets, together with the
fact that Pleroma, as it turns out, has applied and properly tested a
very similar tweak, lend support to this theory.

Ref.:
  https://www.erlang.org/doc/man/erlang#ghlink-process_flag-2-idp226
  https://blog.guzman.codes/using-phoenix-channels-high-memory-usage-save-money-with-erlfullsweepafter
  https://git.pleroma.social/pleroma/pleroma/-/merge_requests/4060
2024-06-16 17:39:40 +02:00
mix Merge pull request 'Preserve Meilisearch’s result ranking' (#772) from Oneric/akkoma:search-meili-order into develop 2024-05-31 14:12:05 +00:00
phoenix/transports/web_socket Migrate to phoenix 1.7 (#626) 2023-08-15 10:22:18 +00:00
pleroma Avoid accumulation of stale data in websockets 2024-06-16 17:39:40 +02:00