http: workaround finch bug by selectively forcing HTTP1 #980
AkkomaGang/akkoma!980
No description provided.
Should avoid #978
@floatingghost can you test to confirm it works?
155b77bd23444ce437d7

unfortunately no, it still uses the http2 pool
log
the protocols are correctly set, but they seem to get ignored
Soooo, the thing is
`Finch.request` cannot ever override pool options and only takes three request-specific options as documented: `:receive_timeout`, `:pool_timeout` and `:request_timeout`, with the latter two being limited to HTTP1. Checking Finch’s code confirms this.
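For illustration, a minimal sketch of those three documented per-request options (the instance name `MyApp.Finch` is assumed, not an existing Akkoma module):

```elixir
# Sketch only. These are the only per-request options `Finch.request/3`
# honours; pool-level options such as `:protocols` or TLS `cacerts`
# cannot be overridden here.
request = Finch.build(:get, "https://example.com/actor")

Finch.request(request, MyApp.Finch,
  receive_timeout: 15_000,  # works on both HTTP1 and HTTP2 pools
  pool_timeout: 5_000,      # HTTP1 pools only
  request_timeout: 30_000   # HTTP1 pools only
)
```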
This means we should actually NEVER add `AdapterHelper.options` to individual requests; the per-request `cacerts` addition is also actually doing nothing at all, and we rely on Mint’s default of querying the `CAStore` package. It is true though that setting `cacerts` ourselves on pool creation breaks plain `http://` connections.

This means we can (best as part of or as a follow-up to #973, since it already modifies this part quite a bit) remove `AdapterHelper` usage from requests, and we don’t need to set cacerts (or we set it on pool creation and use two different pools for https and http).

It unfortunately also means there’s no way to work around the body-size Finch bug short of indiscriminately forcing `HTTP1` for every connection as we used to before, or spinning up two different pools for body-less and body-having requests, with only the latter limited to `:http1` (or `:http2` exclusively, though this might break federation with ill-configured instances).

A somewhat shoddy approach could also be to always use the stream API and limit the size of the chunks we send out to the minimum HTTP2 must support, or, if there is no such minimum, to a size small enough to be unlikely to cause issues in practice. However, this will degrade performance and kind of defeats the point of allowing HTTP2 in the first place.
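The two-pool idea could look roughly like this (child names are hypothetical, not existing Akkoma modules; only the body-having instance is pinned to HTTP1):

```elixir
# Hypothetical sketch of the two-Finch-instance workaround: requests that
# carry a body go through an HTTP1-only instance to dodge the window-size
# bug, while body-less requests may still negotiate HTTP2.
children = [
  {Finch,
   name: MyApp.FinchWithBody,
   pools: %{default: [protocols: [:http1]]}},
  {Finch,
   name: MyApp.FinchNoBody,
   pools: %{default: [protocols: [:http1, :http2]]}}
]

Supervisor.start_link(children, strategy: :one_for_one)
```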
Given we have a number of issues related to Finch, it might also be worthwhile to consider whether we want to continue using Finch going forward. Finch is designed with performance as a primary focus and apparently a preference for connections only to modern and/or ahead-of-time well-known peers, and it seems this focus might distract it a bit from what is needed to be used in general-purpose connections with untrusted remotes.
Namely our pain points with Finch atm are, with more severe issues near the end:
- `:http` pools implicitly created from the default config (would just need to remove `cacerts` if set)
- (`[:http1, :http2]`)

Checking alternatives supported as Tesla backends, only hackney appears to already have pooling built in, though there are various complaints about hackney pools having poor performance; idk if this is still up to date. (I also have no idea how its pooling even works and whether it utilises HTTP2 multiplexing.) It has a built-in response body-size limit, though it notes it will be rounded up to the connection’s chunk size. But much better than Finch in this regard at least.
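hackney’s body-size cap mentioned above would look something like this (URL and limit are made up, and I haven’t verified the exact rounding behaviour, so treat this as a sketch of its documented options):

```elixir
# Sketch of hackney's built-in response size limit; per its docs the
# effective limit is rounded up to the connection's chunk size.
case :hackney.request(:get, "https://example.com/object", [], "",
       with_body: true, max_body: 10_000_000) do
  {:ok, status, _headers, body} -> {:ok, status, body}
  {:error, :overflow} -> {:error, :body_too_large}
  {:error, reason} -> {:error, reason}
end
```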
For everything else we’d need to implement our own pool (at which point it might make more sense to either use hackney or invest effort into fixing Finch)
Idk what motivated the selection of Finch when Akkoma was consolidating to a single Tesla backend, and I have no actual experience with the other backends. What do you think @floatingghost?
so hackney is... not good. it frequently has upstream issues that need specific workarounds, which was half of the reason alternate backends were supported in pleroma, i'd probably advise against using it for most things
the reason why i selected finch was that it seemed like it'd be the best supported going forwards, though i'm not wed to it or anything
hmm.. welp, hackney was the only other Tesla backend with built-in pooling.
All other options for a proper resolution (while still allowing HTTP2) will be harder: either implementing custom pooling for another backend, or fixing Finch.
Any preference whether to just revert to HTTP1-only or set up two Finch instances until we have a better solution?
hmmm
probably the easiest is just to force http1 i'd think?
setting up two instances is fine and all but do we gain that much from baseline http2 support for only some requests?
In theory better interoperability (some (few) websites don’t accept HTTP1). Other advantages of HTTP2 are admittedly invalidated either by Finch limitations (using HTTP1 pooling logic for HTTP2-supporting remotes if ALPN is used) or by the overhead of having two instances.
But yeah, given HTTP1-only worked fine enough before, just reverting to it seems good.
I’ll follow up with a new PR setting default protocols to `[:http1]` only and cleaning up the misplaced pool options in requests.

Pull request closed
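The planned follow-up boils down to a pool spec along these lines (instance name assumed; where exactly Akkoma wires this up is not shown here):

```elixir
# Sketch of forcing HTTP1 for the whole default pool, sidestepping the
# Finch HTTP2 window-size bug until a proper fix exists.
{Finch,
 name: MyApp.Finch,
 pools: %{default: [protocols: [:http1]]}}
```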