WIP: Extract keys to their own table, match keyID #816
removes the flimsy and generally awful inherited system of assuming key IDs must be their user's AP ID with a suffix. Keeping up with these suffixes was a pain, to put it mildly.
what this does:

- keys live in their own `signing_keys` table
- User has one Signing Key

i have set up a testing instance and made sure it works with all major implementations
for the future:
Currently 6 tests fail, but the broad scheme works
i will ensure the last 6 work before removing the wip tag
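For orientation, a rough sketch of what the new signing-key schema might look like, inferred from the migration and the `has_one` relation shown in the review threads below; the module name, the `:binary_id` owner type, and field details are assumptions rather than code from this PR:

```elixir
# Sketch only — key_id as text primary key, user_id FK and PEM text columns
# are taken from the migration further down; everything else is assumed.
defmodule Pleroma.User.SigningKey do
  use Ecto.Schema

  @primary_key {:key_id, :string, autogenerate: false}
  schema "signing_keys" do
    field(:public_key, :string)
    field(:private_key, :string)

    # owning local or remote user; FK type assumed to match the users table
    belongs_to(:user, Pleroma.User, type: :binary_id)

    timestamps()
  end
end
```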
@@ -339,7 +339,6 @@ defmodule Pleroma.Object.Fetcher do
final_url
end
@doc "Do NOT use; only public for use in tests"
for anyone seeing this: this was already used everywhere...

grepping current develop 3ff0f46b9f for `get_object($|[^A-Za-z_])` in `lib/` only yields:

- `fetch_and_contain_remote_object_from_id` above
- `get_object` in `Pleroma.Object.Containment`

Changing its definition to `defp` also doesn’t lead to any errors in `mix compile`, only a warning about a doc string for a private function.

Where are those current uses of safety-checks-bypassing `get_object` and should they be changed to `fetch_and_contain_remote_object_from_id`?

very fair point, it would probably be better to use the contain function
The (afaict only one) usage of `get_object` added in this PR in `fetch_remote_key` can't use the normal containment checks since keys don't follow usual AP semantics; they're a subproperty of the whole object, so their ID won't directly match the URL. Even non-fragment keys, e.g. GTS, still provide the real key as a `publicKey` subproperty and use the owning actor’s ID for the root object.

A key-specific containment fetch might help avoid the spoofed key IDs you mentioned in the comment below; haven't looked deeper into it yet though (e.g. if the owner equals the key ID without fragments, or the key ID and owner are on the same domain, consider it valid; otherwise also fetch the owner and check it links back to the key ID).

If there really are pre-existing `get_object` uses outside of tests and the fetcher itself somewhere, they likely should be switched over.
@@ -0,0 +197,4 @@
key_id: key_id
}
Repo.insert(key, on_conflict: :replace_all, conflict_target: :key_id)
hm. now that i read this back, i don't like the potential for spoofed key IDs. the uniq check should probably be on the user_id/key_id pair?
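A sketch of the pair-wise uniqueness suggested here; it would also need a matching unique index on `[:user_id, :key_id]` in the migration, and as the next comment points out, this alone doesn't stop spoofing:

```elixir
# sketch: treat (user_id, key_id) as the conflict target instead of key_id alone
Repo.insert(key,
  on_conflict: :replace_all,
  conflict_target: [:user_id, :key_id]
)
```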
Even with `user_id` + `key_id`, key spoofing is possible due to only reading data from the HTTP response body in `handle_signature_Response` and using `Fetcher.get_object`, which doesn't verify the authenticity of its data at all; that’s why its comment said *Do NOT use*.

E.g. an attacker might trigger lookup of key id `https://evil.com/spoofed#key`, but the served response then poses as `https://fluffy.example/users/fox#main-key` belonging to another user.

If I understand correctly, on failure to verify we’ll refetch the key in case it was recently rotated. Until such a refetch gets triggered though, an attacker should be able to impersonate whomever’s key they spoofed.
Instead the received data needs to be checked similarly to other AP fetches. As mentioned in the other thread, URL comparisons can’t work quite the same way due to key ids not following the usual AP semantics. The checks described in the other thread probably work, but i think it’ll be cleaner to switch to a "refetch once if id differs" approach in `fetch_and_contain`; we’ll end up fetching the user object then (even for non-fragment keys) and can use the top-level id directly as `user_id` and extract its `publicKey`. This allows us to share the strict-id logic between normal AP objects and keys.

A `user_id` + `key_id` composite primary key would allow shared keys to work if anyone uses them. But to avoid storing key data over and over again, a separate `user_key` join table would probably be better. Considering such shared keys don’t work with our old key logic either though, i think it’s safe to ignore them for now, just like “multiple keys per user”.

done in #819
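A loose sketch of the "refetch once if id differs" idea from the comment above, reusing the existing `get_object/1` helper and a made-up `:refetched?` option; this illustrates the approach rather than the implementation that eventually landed:

```elixir
# sketch: if the fetched object claims a different id than the one requested
# (as fragment key ids will), refetch once from the claimed id and then
# require a strict match, so keys and normal AP objects share the same logic
def fetch_and_contain_remote_object_from_id(id, opts \\ []) do
  with {:ok, data} <- get_object(id) do
    case data do
      %{"id" => ^id} ->
        {:ok, data}

      %{"id" => claimed_id} when is_binary(claimed_id) ->
        if Keyword.get(opts, :refetched?, false) do
          {:error, :id_mismatch}
        else
          fetch_and_contain_remote_object_from_id(claimed_id, refetched?: true)
        end

      _ ->
        {:error, :id_mismatch}
    end
  end
end
```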
I’m concerned about the spoofing potential from unchecked `get_object`, but everything else is just nits.

@@ -19,3 +14,1 @@
|> Map.put(:fragment, nil)
|> Map.put(:query, nil)
|> remove_suffix(@known_suffixes)
# Given the key ID, first attempt to look it up in the signing keys table.
the comment doesn't match the visible code beneath it

The comment implies the current function will fall back to something else if the db lookup fails, but it only attempts one function call.
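If a db-then-fetch fallback were intended, it might read roughly like this; `key_for_id/1` is a hypothetical name, `fetch_remote_key/1` is the function mentioned earlier in this review, and `Repo`/`SigningKey` aliases are assumed to be in scope:

```elixir
# sketch of the fallback the comment seems to promise: check the signing_keys
# table first, only hit the network if nothing is stored yet
def key_for_id(key_id) do
  case Repo.get_by(SigningKey, key_id: key_id) do
    %SigningKey{} = key -> {:ok, key}
    nil -> fetch_remote_key(key_id)
  end
end
```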
@@ -224,1 +224,4 @@
# FOR THE FUTURE: We might want to make this a one-to-many relationship
# it's entirely possible right now, but we don't have a use case for it
has_one(:signing_key, SigningKey, foreign_key: :user_id)
this new `has_one` relationship is added, but the old `public_key` field isn’t removed yet

@@ -0,0 +6,4 @@
add :user_id, references(:users, type: :uuid, on_delete: :delete_all)
add :key_id, :text, primary_key: true
add :public_key, :text
add :private_key, :text
Might `:binary` be more efficient, to not have to reparse the text representation each time we actually verify/sign something? On the other hand, for local users, serving the text representation via AP fetches might be more common than actually using the key...
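To make the reparsing cost concrete: with `:text` columns the PEM string has to be decoded on every verify/sign, roughly as below (standard Erlang `:public_key` calls; the `signing_key` variable is illustrative):

```elixir
# decode the stored PEM text back into a usable key structure on each use;
# a :binary column could instead hold the already-decoded form
[pem_entry] = :public_key.pem_decode(signing_key.public_key)
decoded_key = :public_key.pem_entry_decode(pem_entry)
```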
@@ -0,0 +10,4 @@
timestamps()
end
create unique_index(:signing_keys, [:key_id])
`key_id` already is the primary key; manually creating an index here just duplicates the already existing unique primary key index.
Before I forget again: 03662501c3 already added a first building block for solving the related federation-bootstrapping issue ~2 years ago by setting `limited_ap` in `mapped_signature_to_identity_plug` for user routes if the signature is valid but can’t be mapped to a user identity. However `limited_ap` is never checked afterwards and idk if `valid_signature` can even end up as `true` if identity mapping fails.

This bit should be removed now that a more comprehensive solution is in place, to avoid confusion.