Avoid accumulation of stale data in websockets #806

Merged
floatingghost merged 1 commit from Oneric/akkoma:websocket_fullsweep into develop 2024-06-23 02:19:37 +00:00


@@ -18,6 +18,8 @@ defmodule Pleroma.Web.MastodonAPI.WebsocketHandler do
@timeout :timer.seconds(60)
# Hibernate every X messages
@hibernate_every 100
# Tune garbage collection for the long-lived websocket process
@fullsweep_after 20
def init(%{qs: qs} = req, state) do
with params <- Enum.into(:cow_qs.parse_qs(qs), %{}),
@@ -59,6 +61,10 @@ def websocket_init(state) do
"#{__MODULE__} accepted websocket connection for user #{(state.user || %{id: "anonymous"}).id}, topic #{state.topic}"
)
# process is long-lived and can sometimes accumulate stale data in such a way it's
# not freed by young garbage cycles, thus make full collection sweeps more frequent
:erlang.process_flag(:fullsweep_after, @fullsweep_after)

it looks like you should be able to specify this in ws options rather than having to use the lower-level Erlang functions

https://hexdocs.pm/phoenix/Phoenix.Endpoint.html#socket/3-websocket-configuration

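For illustration, setting it through that interface would look roughly like this (a sketch only: as discussed below, the streaming handler isn’t currently wired through Phoenix’s socket macro, and it would need to implement `Phoenix.Socket.Transport` for this to work):

```
# Hypothetical: in the Phoenix endpoint, assuming WebsocketHandler
# implemented the Phoenix.Socket.Transport behaviour.
socket "/api/v1/streaming", Pleroma.Web.MastodonAPI.WebsocketHandler,
  websocket: [fullsweep_after: 20]
```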

We currently don’t use Phoenix’s higher-level WebSocket interface for this; instead, `WebsocketHandler` is listed as a higher-priority endpoint than `Pleroma.Web.Endpoint`. I don’t know why this is the case (maybe the higher-level interface didn’t exist yet when streaming support was introduced seven years ago), and when I wrote this patch some months ago I had hoped to first get confirmation whether this really is the issue. Pleroma migrated to Phoenix’s WebSockets, which is why they can use this setting in their patch. If there’s no reason to stick with the custom implementation, we probably should migrate as well.

There’s also `lib/phoenix/transports/web_socket/raw.ex`, which defines a module in Phoenix’s namespace, `Phoenix.Transports.WebSocket.Raw`, but I have no idea what this does.

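For context, a raw cowboy dispatch that matches a handler before falling back to the Phoenix endpoint typically looks something like this (an illustrative sketch following the Plug.Cowboy docs, not Akkoma’s actual config):

```
# Illustrative only: cowboy matches the raw websocket route first,
# then falls back to the regular Phoenix endpoint for everything else.
http: [
  dispatch: [
    {:_,
     [
       {"/api/v1/streaming", Pleroma.Web.MastodonAPI.WebsocketHandler, []},
       {:_, Phoenix.Endpoint.Cowboy2Handler, {Pleroma.Web.Endpoint, []}}
     ]}
  ]
]
```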

I will try to use a source-based instance in the next days to test it.


Thanks! When you start your from-source server, it would be great if you could use

```
MIX_ENV=prod elixir --sname akkoma --cookie akkoma_cookie -S mix phx.server
```

such that, in case the patch doesn't work, you can run the memleak diagnostic tool later: [diagnostic-tools/binary-leak-checker.sh](https://akkoma.dev/AkkomaGang/akkoma/src/branch/develop/diagnostic-tools/binary-leak-checker.sh)

This way we’ll (hopefully) be able to get insight into which processes are actually responsible for the leaks.
Ideally, test whether connecting a remote shell works ahead of time; if it doesn’t, you may have configured `epmd` too strictly, but you can just ask on IRC or here about it.
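For reference, attaching a remote shell to a node started as above would look something like this (the `debug` shell name is arbitrary; the host part must match your machine’s short hostname):

```
# Attach a remote IEx shell to the node started with --sname akkoma.
# With --sname, the full node name is akkoma@<short hostname>.
iex --sname debug --cookie akkoma_cookie --remsh akkoma@"$(hostname -s)"
```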

Streamer.add_socket(state.topic, state.oauth_token)
{:ok, %{state | timer: timer()}}
end