Tweak fetch security checks #819
Reference: AkkomaGang/akkoma#819
As discussed when id checking was initially introduced, refetching can be more robust, but if done improperly it can also create exploits of its own. I believe this version should now work, but best apply an extra dose of scrutiny during review.
Refetching will fix some theoretical cases, like the Peertube display-url array brought up on IRC, and should allow using a properly checked
fetch_and_contain
for keys over in #816.

Things worth highlighting:
This would allow us to skip refetches for fragment keys, but might be confusing in other instances. E.g. Mastodon’s intentionally (albeit spec-breaking) unfetchable fragment activities might end up getting "fetched" as the object they act on
a) use non-fragment keys (which will still use the actor’s id in the top-level)
b) refetch on id mismatch
c) require signed fetch for actors
This way they’ll be able to confirm signatures, since neither can retrieve the key of the other.
However, current Akkoma already has this problem if auth fetch is enabled, and future Akkoma with #816 won’t have it at all, since it uses fragment keys and serves a minimal, key-only actor on unsigned requests even when auth fetch is enabled; so this shouldn’t be a concern here.
Additionally, this also re-allows cross-domain redirects, enforcing authenticity instead by using the final location for containment checks (preparation for this already happened in 3e134b07fa4e382f1f4cfdbe90e74f8e73336a4e). This fixes e.g. lookups via Peertube display URLs for content from a remote instance (also brought up on IRC before).
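For reference, the overall flow can be sketched roughly like this. This is a Python illustration with hypothetical helper names and fake in-memory "HTTP", not Akkoma's actual Elixir code: follow redirects, run the containment check against the final location, and refetch once from the embedded id on mismatch.

```python
# Illustrative sketch only; helper names and the dict-based stand-ins for
# real HTTP fetching are hypothetical.

def follow_redirects(url, redirects):
    """redirects: url -> Location mapping standing in for real HTTP 3xx."""
    seen = set()
    while url in redirects:
        if url in seen:
            raise ValueError("redirect loop")
        seen.add(url)
        url = redirects[url]
    return url

def fetch_and_contain(url, redirects, objects, refetched=False):
    # Cross-domain redirects are allowed; containment instead checks the
    # object's id against the *final* location after redirects.
    final = follow_redirects(url, redirects)
    obj = objects[final]
    if obj.get("id") == final:
        return obj
    if refetched or not isinstance(obj.get("id"), str):
        raise ValueError("id mismatch")
    # id mismatch: refetch once, this time directly from the embedded id
    return fetch_and_contain(obj["id"], redirects, objects, refetched=True)
```

Under this sketch, a Peertube display URL on one instance redirecting to the remote origin would pass: either the final location already matches the object's id, or a single refetch from that id does.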
i think this is fine, nothing jumps out at me as being off - wrt the fragment keys, we can prooooobably ignore fragments? i think?
@@ -109,2 +99,2 @@
_ -> :error
end
def contain_id_to_fetch(url, %{"id" => id}) when is_binary(id) do
# XXX: should we also ignore fragments?
probably? fragments aren't technically part of the URL as far as the pure URL spec is concerned and really only matter to a browser
I’ve been meaning to adjust this for fragments but didn’t get to it yet, sorry
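Ignoring fragments during the comparison could look something like this (a Python sketch with a hypothetical function name, not the actual Elixir change): strip any fragment from both sides before comparing, since fragments are resolved client-side and never reach the server.

```python
from urllib.parse import urldefrag

def ids_match(fetch_url: str, obj_id: str) -> bool:
    # Compare with any #fragment stripped from both URLs; the fragment is
    # never sent to the server, so it cannot affect what was fetched.
    return urldefrag(fetch_url).url == urldefrag(obj_id).url
```

With this, fetching e.g. https://example.com/users/a#main-key and receiving an object whose id is https://example.com/users/a would still pass containment.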
While grepping through logs though, I found one more odd case this will fix which seems worth documenting for future reference: case mismatches in the domain part.
While paths are generally case-sensitive, domain names are explicitly case-insensitive, yet I got an id rejection for
https://QuarteredCircle.net/users/LexPendragon/collections/featured
whose AP object id is https://quarteredcircle.net/users/LexPendragon/collections/featured
The associated user’s AP id though actually is
https://QuarteredCircle.net/users/LexPendragon
and refers to owned collections with CamelCased URIs. I’m not sure why the casing differs; the newest posts also use CamelCasing in their id, so it doesn’t seem like they changed the casing in their config. (While odd and almost certainly a bad idea, technically those URIs are equivalent.)
While we could manually normalise the domain part prior to comparison, some servers might also allow case-insensitive paths; but we can’t generally assume this and thus shouldn’t alter the path part of a URI.
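A comparison tolerating different domain casing while keeping paths case-sensitive might normalise only scheme and host, roughly like this (Python sketch with a hypothetical name, ignoring userinfo edge cases):

```python
from urllib.parse import urlsplit, urlunsplit

def normalise_for_comparison(url: str) -> str:
    # Scheme and host are case-insensitive per RFC 3986, so lowercase them;
    # the path may be case-sensitive on some servers and is left untouched.
    p = urlsplit(url)
    return urlunsplit((p.scheme.lower(), p.netloc.lower(),
                       p.path, p.query, p.fragment))
```

This would make the QuarteredCircle example above compare equal regardless of domain casing, while /users/LexPendragon and /users/lexpendragon would still be treated as distinct.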
bc2b776109 to d5b0720596
rebased and allowed non-matching/non-existent fragments and different domain casing