Meta: Finch deficiencies #994
Reference: AkkomaGang/akkoma#994
Meta issue to track bugs or deficiencies in Finch affecting us. Spun off from a comment in #980 and extended since.
Unfortunately the especially important points are also the hardest to realise.
The only other Tesla backend with built-in connection pooling atm is Hackney, which is supposedly also plagued with issues.
If we try to slap our own pooling onto another backend (or wrap some lib in a custom Tesla backend), we’ll likely run into our own race conditions, spurious-copy inefficiencies, etc., making it questionable whether this is worth the effort or whether fixing Finch instead would be more productive.
General
- Made less likely already by us raising the pool timeout (ref: #880, finch#292); was mostly, but not fully, fixed by finch#292
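For illustration, a sketch of the timeout-raising mitigation mentioned above; the Finch instance name, URL and values are assumptions, not Akkoma's actual config:

```elixir
# Sketch only: MyFinch, the URL and the timeout values are illustrative.
req = Finch.build(:get, "https://remote.example/some/object")

# :pool_timeout bounds how long the caller waits to check a connection
# out of the pool; raising it makes spurious checkout-timeout failures
# during brief pool exhaustion less likely (it does not remove the race).
Finch.request(req, MyFinch, pool_timeout: 10_000, receive_timeout: 15_000)
```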
- `cacerts` in default pool opts without borking all plain-HTTP connections; fixed by finch#333
- Even the proposed workaround of manually using a stream and cancelling when receiving too much doesn't actually fully avoid OOMs, because Finch (or Mint, not sure which) eagerly preloads further received response chunks behind the scenes and loads them into memory. Allowing the limit to be rounded up to a multiple of the chunk size is probably ok. (ref: finch#224 and finch#282)
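The workaround in question could look roughly like the sketch below, assuming `Finch.stream_while/5` (available in recent Finch versions); the instance name, URL and the 8 MiB cap are made up. Note the caveat from above: halting stops *our* accumulation, but chunks already received may still have been buffered in memory behind the scenes.

```elixir
# Sketch, not Akkoma's actual code: cap the accumulated response body
# and halt the stream once the limit is exceeded.
max_body = 8 * 1024 * 1024

req = Finch.build(:get, "https://remote.example/media/huge.bin")

result =
  Finch.stream_while(req, MyFinch, {[], 0}, fn
    {:status, _status}, acc -> {:cont, acc}
    {:headers, _headers}, acc -> {:cont, acc}
    {:data, chunk}, {chunks, size} ->
      size = size + byte_size(chunk)

      if size > max_body do
        # abort the download instead of accumulating further
        {:halt, :body_too_large}
      else
        {:cont, {[chunk | chunks], size}}
      end
  end)

case result do
  {:ok, :body_too_large} -> {:error, :body_too_large}
  {:ok, {chunks, _size}} -> {:ok, chunks |> Enum.reverse() |> IO.iodata_to_binary()}
  {:error, _} = err -> err
end
```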
- Required for enabling HTTP/2.0 (with ALPN alongside HTTP/1.1). Akkoma-side workaround possible and already employed; ref: finch#325; fixed by finch#333
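With the post-finch#333 semantics, explicit CA certs per pool could be sketched as below; `MyFinch` and sourcing the certs via OTP's `:public_key.cacerts_get/0` (OTP ≥ 25) are assumptions:

```elixir
# Sketch of per-pool TLS options; before finch#333, putting TLS
# transport_opts like :cacerts into the default pool opts also broke
# plain-HTTP connections.
{Finch,
 name: MyFinch,
 pools: %{
   default: [
     conn_opts: [
       transport_opts: [
         # explicit CA certificates instead of the CAStore default
         cacerts: :public_key.cacerts_get()
       ]
     ]
   ]
 }}
```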
- Else some requests will just immediately fail when we already have ongoing communication with the same remote. For us, just supporting waiting for a slot with a timeout, as done for HTTP/1, should be good enough. More generally though, HTTP/2 should prob also support multiple connections per target OR multiple pools per target with fallback to another pool. (ref: finch#165)
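To illustrate the failure mode: with an HTTP/2 pool each pool member is a single multiplexed connection, so once the server's concurrent-stream limit is hit, further requests error instead of waiting for a free slot. A hedged config sketch (the host is made up; the plural `protocols:` option matches recent Finch, older releases used `protocol:`):

```elixir
# Illustrative only: raising :count is a partial mitigation, since each
# pool is still a single multiplexed HTTP/2 connection, but requests get
# spread across several of them.
{Finch,
 name: MyFinch,
 pools: %{
   "https://busy.example" => [
     protocols: [:http2],
     count: 4
   ]
 }}
```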
- This (from Akkoma’s POV unpredictably) breaks outgoing requests with a body. Probably needs a major rework of Finch’s pooling logic; currently ALPN-negotiated connections are treated as if they were HTTP/1. (ref: finch#265 :\)
Akkoma TODOs
Once there is a (> 0.20.0) Finch release (#1058):
- bump the `finch` version requirement in `mix.exs` and refresh `mix.lock`
- use explicit `cacerts` instead of relying on the `CAStore` default (as we tried before, but the attempt was a noop)
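The version bump above might look like the following hypothetical `mix.exs` fragment, once such a release exists:

```elixir
# Hypothetical deps entry; "> 0.20.0" mirrors the requirement above.
defp deps do
  [
    {:finch, "> 0.20.0"}
    # ... other deps ...
  ]
end
```

Afterwards, `mix deps.update finch` refreshes the pinned version in `mix.lock`.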