Avoid accumulation of stale data in websockets #806
1 changed file with 6 additions and 0 deletions
```diff
@@ -18,6 +18,8 @@ defmodule Pleroma.Web.MastodonAPI.WebsocketHandler do
   @timeout :timer.seconds(60)
   # Hibernate every X messages
   @hibernate_every 100
+  # Tune garbage collection for long-lived websocket process
+  @fullsweep_after 20
 
   def init(%{qs: qs} = req, state) do
     with params <- Enum.into(:cow_qs.parse_qs(qs), %{}),
@@ -59,6 +61,10 @@ def websocket_init(state) do
       "#{__MODULE__} accepted websocket connection for user #{(state.user || %{id: "anonymous"}).id}, topic #{state.topic}"
     )
 
+    # process is long-lived and can sometimes accumulate stale data in such a way it's
+    # not freed by young garbage cycles, thus make full collection sweeps more frequent
+    :erlang.process_flag(:fullsweep_after, @fullsweep_after)
+
     Streamer.add_socket(state.topic, state.oauth_token)
     {:ok, %{state | timer: timer()}}
   end
```
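For reference, `fullsweep_after` bounds how many generational (minor) collections a process may run before the VM forces a full sweep, which is what reclaims data that has survived into the old heap. A minimal sketch (not part of the patch; the pid is a placeholder) of how the setting can be read back from a live process, e.g. in a remote shell:

```elixir
# Read the per-process GC settings of a websocket connection process.
# `pid` is a placeholder: substitute a real connection pid, for example one
# found via :observer.
pid = self()

{:garbage_collection, gc} = Process.info(pid, :garbage_collection)

IO.inspect(Keyword.fetch!(gc, :fullsweep_after), label: "fullsweep_after")
IO.inspect(Keyword.fetch!(gc, :minor_gcs), label: "minor GCs since last full sweep")
```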
It looks like you should be able to specify this in the websocket options rather than having to use the lower-level Erlang functions:
https://hexdocs.pm/phoenix/Phoenix.Endpoint.html#socket/3-websocket-configuration
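For comparison, a minimal sketch of what that would look like, assuming the streaming handler were served through a regular Phoenix socket (the endpoint and socket module names are made up; this is not Akkoma's current setup):

```elixir
defmodule MyApp.Endpoint do
  use Phoenix.Endpoint, otp_app: :my_app

  # Hypothetical: only applicable once the streaming handler is a Phoenix socket.
  socket "/api/v1/streaming", MyApp.StreamingSocket,
    websocket: [
      timeout: :timer.seconds(60),
      # forwarded to the connection process; same effect as the
      # :erlang.process_flag(:fullsweep_after, 20) call in this patch
      fullsweep_after: 20
    ],
    longpoll: false
end
```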
We currently don't use Phoenix's higher-level websocket interface for this; instead `WebsocketHandler` is listed as a higher-priority endpoint than `Pleroma.Web.Endpoint`. I don't know why this is the case (maybe the higher-level interface didn't exist yet when streaming support was introduced seven years ago), and when I wrote this patch some months ago I had hoped to first get confirmation whether this really is the issue. Pleroma migrated to Phoenix's WebSockets, which is why they can use this setting in their patch. If there's no reason to stick with the custom implementation, we probably should migrate as well.
There's also `lib/phoenix/transports/web_socket/raw.ex`, which defines a module in Phoenix's namespace, `Phoenix.Transports.WebSocket.Raw`, but I have no idea what this does.
I will try to use a source-based instance in the next days to test it.
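For anyone unfamiliar with the current setup, mounting a raw Cowboy handler ahead of the Phoenix endpoint usually looks roughly like the following Plug.Cowboy `dispatch` table; this is a sketch of the pattern only, not necessarily Akkoma's exact config:

```elixir
# config/config.exs (illustrative): requests to the streaming path are matched
# first and handled by the raw Cowboy websocket handler; everything else falls
# through to the Phoenix endpoint via Phoenix.Endpoint.Cowboy2Handler.
import Config

config :pleroma, Pleroma.Web.Endpoint,
  http: [
    dispatch: [
      {:_,
       [
         {"/api/v1/streaming", Pleroma.Web.MastodonAPI.WebsocketHandler, []},
         {:_, Phoenix.Endpoint.Cowboy2Handler, {Pleroma.Web.Endpoint, []}}
       ]}
    ]
  ]
```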
Thanks! When you start your from-source server, it would be great if you could start it with a node name set, such that (in case the patch doesn't work) you can later run the memleak diagnostic tool: diagnostic-tools/binary-leak-checker.sh
This way we'll (hopefully) be able to get insight into which processes are actually responsible for the leaks.
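In case it helps, the gist of such a binary-leak check can also be done by hand from a remote shell; a rough sketch (not the script itself) that ranks processes by the off-heap binary memory they keep referenced:

```elixir
# Sum the sizes of the refc binaries each process still references and list
# the ten biggest holders, with a registered name or current function as a hint.
Process.list()
|> Enum.map(fn pid ->
  case Process.info(pid, [:binary, :registered_name, :current_function]) do
    nil ->
      # process died between Process.list/0 and Process.info/2
      {pid, 0, :dead}

    info ->
      total = info[:binary] |> Enum.map(fn {_id, size, _refc} -> size end) |> Enum.sum()
      name = if info[:registered_name] == [], do: info[:current_function], else: info[:registered_name]
      {pid, total, name}
  end
end)
|> Enum.sort_by(fn {_pid, total, _name} -> total end, :desc)
|> Enum.take(10)
|> IO.inspect(label: "top 10 by referenced binary bytes")
```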
Ideally, test ahead of time that connecting a remote shell works; if it doesn't, you may have configured `epmd` too strictly, but you can just ask on IRC or here about it.
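For the remote-shell test itself, a minimal sketch of what to run; node name, host, and cookie path are placeholders and depend on how the server was started:

```elixir
# From another terminal on the same machine, attach a remote shell, e.g.:
#
#   iex --sname debug --remsh akkoma@myhost --cookie "$(cat /path/to/.erlang.cookie)"
#
# Inside the attached shell, confirm you are really evaluating on the server node
# and that the endpoint process lives there:
IO.inspect(node(), label: "shell evaluates on")
IO.inspect(Process.whereis(Pleroma.Web.Endpoint), label: "endpoint pid (nil would suggest the wrong node)")
```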