Compare commits


86 commits

Author SHA1 Message Date
7d731c1a1a Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2025-05-19 02:21:23 -04:00
3bc50e6e5a Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2025-05-12 18:54:20 -04:00
29c07bb844 Merge remote-tracking branch 'oneric/fix_digest' into akko.wtf 2025-05-11 18:09:42 -04:00
aa2e28f17d Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2025-05-10 15:05:01 -04:00
9f3055a6eb Be less aggressive about user refreshes
TODO: at the moment this still refreshes all users actively
sending stuff to the inbox, but this will change with the
signing key migration (only user/inbox endpoints still do)
2025-05-08 17:04:27 -04:00
b0a109a146 Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2025-05-06 12:28:23 -04:00
550d2b30dd Merge remote-tracking branch 'oneric/shared-inbox' into akko.wtf 2025-05-06 11:56:49 -04:00
0ad6bfe0eb Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2025-05-03 17:43:49 -04:00
fe20d12e94 Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2025-04-11 00:41:33 -04:00
ab0355a210 Merge remote-tracking branch 'oneric/pleroma_unlisted' into akko.wtf
Available as pending PR #885
2025-03-23 15:10:11 -04:00
9495a61f00 federation: always prefer the shared inbox
In theory a pedantic reading of the spec indeed suggests
DMs must only be delivered to personal inboxes. However,
in practice the normative force of real-world implementations
disagrees. Mastodon, Iceshrimp.NET and GtS (the latter notably has a
config option to never use sharedInboxes) all unconditionally prefer
sharedInbox for everything without ill effect. This saves duplicate
deliveries on the sending end and duplicate processing on the receiving end.
(Typically the receiving side ends up rejecting
 all but the first copy as duplicates)

Furthermore, the current determine_inbox logic actually ends up
forcing personal inboxes for follower-only posts, unless they
additionally explicitly address at least one specific actor.
This is even more wasteful and directly contradicts
the explicit intent of the spec.

There’s one part where the use of sharedInbox falls apart,
namely spec-compliant bcc and bto addressing. AP spec requires
bcc/bto fields to be stripped before delivery and then implicitly
reconstructed by the receiver based on the addressed personal inbox.
In practice however, this addressing mode is almost unused. None of
the three implementations brought up above supports it, and while *oma
does use bcc for list addressing, it does not use it in a spec-compliant
way and even copies same-host recipients into cc before delivery.
Messages with bcc addressing are handled in another function clause,
which always forces personal inboxes for every recipient, and are not
affected by this commit.
In theory it would be beneficial to use sharedInbox there too for all
but bcc recipients. But in practice list addressing has been broken for
quite some time already and is not actually exposed in any frontend,
as discussed in #812.
Therefore any changes here have virtually no effect anyway
and all code concerning it may just be outright removed.
2025-03-15 01:12:12 +01:00
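
A minimal sketch of the preference described in the commit above; the
struct shape and field names are assumptions, not Akkoma's actual
determine_inbox logic:

defmodule InboxPreference do
  # Prefer the sharedInbox endpoint whenever the recipient advertises one.
  def preferred_inbox(%{shared_inbox: shared}) when is_binary(shared), do: shared

  # Otherwise fall back to the personal inbox.
  def preferred_inbox(%{inbox: inbox}), do: inbox
end
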
75fda65065 publisher: don't mangle between string and atom
Oban jobs can only have string args and there’s no reason to insist on atoms here.

Plus, this used an unchecked string_to_atom.
2025-03-14 20:31:07 +01:00
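
A hedged sketch of the approach: keep the job args as plain strings end
to end. The worker and arg names here are illustrative; only the
Oban.Worker callback shape is standard Oban.

defmodule MyApp.Workers.PublisherWorker do
  use Oban.Worker, queue: :federator_outgoing

  # Enqueue with string keys only; Oban serializes args to JSON, so atom
  # keys would come back as strings after a round trip anyway.
  def enqueue(inbox, activity_id) do
    %{"op" => "publish_one", "inbox" => inbox, "activity_id" => activity_id}
    |> new()
    |> Oban.insert()
  end

  # Pattern-match the string keys directly instead of converting them
  # back to atoms with an unchecked String.to_atom/1.
  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"op" => "publish_one", "inbox" => inbox}}) do
    deliver(inbox)
  end

  defp deliver(_inbox), do: :ok
end
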
5b9011aa57 Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2025-03-11 20:43:25 -04:00
44a95e2396 Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2025-03-04 19:17:33 -05:00
ace04da7b8 Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2025-03-01 14:56:35 -05:00
e8eb01b9df Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2025-03-01 10:06:00 -05:00
29d3be800e Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2025-02-22 12:04:44 -05:00
7ce0445eea Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2025-01-06 09:59:52 -05:00
c3418694b9 Sync changelog with upstream version 2025-01-05 12:07:48 -05:00
1946834e07 Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2025-01-05 11:52:59 -05:00
2d9358c432 Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2025-01-05 11:27:37 -05:00
e934cd4ca5 Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2025-01-05 11:23:48 -05:00
309d574ff9 Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2025-01-03 14:07:45 -05:00
b736086ab6 Set customize_hostname_check for Swoosh.Adapters.SMTP
This should hopefully fix issues with connecting to SMTP servers
with wildcard TLS certificates.

Taken from https://erlef.github.io/security-wg/secure_coding_and_deployment_hardening/ssl

Fixes #660
2024-12-17 18:32:20 -05:00
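
The customize_hostname_check pattern comes from the linked hardening
guide; the mailer module and surrounding option names below are
assumptions rather than Akkoma's exact configuration:

import Config

config :pleroma, Pleroma.Emails.Mailer,
  adapter: Swoosh.Adapters.SMTP,
  relay: "smtp.example.com",
  tls_options: [
    verify: :verify_peer,
    # HTTPS-style hostname matching accepts wildcard certificates such as
    # *.example.com, which the strict default check rejects.
    customize_hostname_check: [
      match_fun: :public_key.pkix_verify_hostname_match_fun(:https)
    ]
  ]
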
c738f27e77 Merge remote-tracking branch 'upstream/develop' into akko.wtf 2024-12-15 04:42:20 -05:00
7618ac3104 Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2024-11-26 10:31:32 -05:00
51bebd32f2 Merge remote-tracking branch 'oneric/mrf-fix-oage' into akko.wtf 2024-11-17 00:51:22 -05:00
ce4def63b7 Merge remote-tracking branch 'oneric/attachcleanup-overeager' into akko.wtf 2024-11-17 00:50:06 -05:00
b31388b5df Delay attachment deletion
Otherwise attachments have a high chance of disappearing with akkoma-fe’s
“delete & redraft” feature when cleanup is enabled in the backend. Since
we don't know whether a deletion was intended to be part of a redraft
process, or whether the redraft was abandoned, we still have to
delete attachments eventually.
A thirty-minute delay should provide sufficient time for redrafting.

Fixes: #775
2024-11-17 00:41:23 +01:00
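
A hedged sketch of how such a delay can be expressed; schedule_in: is a
standard Oban option, while the worker module and job args here are
assumptions:

# Enqueue the cleanup to run ~30 minutes later instead of immediately,
# leaving a window for "delete & redraft" to re-reference the files.
%{"op" => "cleanup_attachments", "object_id" => object.id}
|> AttachmentsCleanupWorker.new(schedule_in: 30 * 60)
|> Oban.insert()
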
615ac14d6d Don't try to cleanup remote attachments
The attachment cleanup worker was run for every deleted post,
even if it’s a remote post whose attachments we don't even store.
This was especially bad because attachment cleanup involves a
particularly heavy query, wasting a bunch of database perf for nothing.

This was uncovered by comparing statistics from
#784 and
#765 (comment)
2024-11-17 00:40:54 +01:00
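
A hedged sketch of the guard this implies; local?/1 and
enqueue_attachment_cleanup/1 are placeholders, not Akkoma's actual
helpers:

# Only local posts have locally stored attachments worth cleaning up,
# so skip the expensive cleanup query entirely for remote posts.
def maybe_enqueue_attachment_cleanup(object) do
  if local?(object) do
    enqueue_attachment_cleanup(object)
  else
    {:ok, :skipped}
  end
end
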
34f650bc26 Merge remote-tracking branch 'oneric/mrf-fix-oage' into akko.wtf 2024-11-16 11:53:32 -05:00
5a3c6a6896 mrf/object_age: fix handling of non-public objects
Currently the logic unconditionally adds public addressing to "cc"
and follower addressing to "to" after attempting to strip it
from the other one. This is horrible for two reasons:

First, the bug prompting this investigation and fix:
this creates duplicates when addressing URIs were already
in their intended final field; e.g. prominently
the case for all "unlisted" posts.
Since List.delete only removes the first occurrence,
this then broke follower-address stripping later on.

It’s also not safe in general with regard to non-public addressing:
e.g. pre-existing duplicates didn’t get fully stripped, and bespoke
addressing modes using only one of public or follower addressing got
the missing one added — and most importantly:
any belatedly received DM also got public addressing added!
Shockingly, this last point was actually checked as "correct" in tests,
albeit this appears to be a mistake from mindless match adjustments
from when genuine crashes on nil addressing lists got fixed in
10c792110e.

Clean up this sloppy logic, making sure all instances of relevant
addresses are purged and only re-added when they actually existed to
begin with. To avoid problems with any List.delete usages remaining in
other places, also ensure we never create duplicate entries.
2024-11-16 03:19:41 +01:00
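
A hedged sketch of the dedup idea, not the actual MRF code: strip every
occurrence of an address from both fields, then re-add it to its
intended field only if it was addressed at all, deduplicating the result.

defmodule AddressShuffle do
  # Remove every occurrence, unlike List.delete/2 which removes only the first.
  defp strip_all(list, addr), do: Enum.reject(list || [], &(&1 == addr))

  # Move `addr` into `target` ("to" or "cc") only if the object addressed
  # it somewhere to begin with, and never produce duplicate entries.
  def move_address(object, addr, target) do
    present? = addr in (object["to"] || []) or addr in (object["cc"] || [])

    object =
      object
      |> Map.put("to", strip_all(object["to"], addr))
      |> Map.put("cc", strip_all(object["cc"], addr))

    if present? do
      Map.update(object, target, [addr], &Enum.uniq(&1 ++ [addr]))
    else
      object
    end
  end
end
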
54b1678639 Merge remote-tracking branch 'oneric/ap-anonymous-errata' into akko.wtf 2024-11-14 17:18:01 -05:00
67f3ee45b4 Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2024-10-31 19:01:11 -04:00
ef22f3a92b Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2024-10-28 19:39:50 -04:00
cb7f6fb3d6 Update hashtag prune to account for followed hashtags 2024-10-25 11:09:20 -04:00
370894d153 Merge remote-tracking branch 'oneric/custom-source' into akko.wtf 2024-10-20 01:07:57 -04:00
7a16e12029 Update the nsfwCensorImage fix from the update-nsfwCensorImage PR 2024-10-20 01:07:01 -04:00
f98d5f3f20 Teach admin-fe about custom source URLs
Matching AkkomaGang/akkoma-fe#421
2024-10-18 00:47:43 +02:00
cd151999cb update nsfwCensorImage suggestion in config/description.exs
Turns out this is also used to set the default values in adminfe, so
update this to the correct URL for the NSFW banner so that it doesn't
break if you reset this value.
2024-10-17 02:39:14 -04:00
345e3ed04c Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2024-10-16 20:52:11 -04:00
17a0b6c546 Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2024-10-13 22:25:24 -04:00
b15698a772 Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2024-09-15 19:19:04 -04:00
d6ac8cb457 Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2024-06-23 15:02:59 -04:00
1414e709e3 Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2024-06-17 21:35:18 -04:00
b9e7b100a2 Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2024-06-12 19:52:57 -04:00
a8243d2d51 Convert rich media backfill to oban task 2024-06-11 13:16:10 -04:00
a41545a781 Merge branch 'pool-timeouts' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2024-06-10 10:17:55 -04:00
c7b4e34cf9 Merge branch 'websocket_fullsweep' of https://akkoma.dev/Oneric/akkoma into akko.wtf 2024-06-09 20:09:51 -04:00
e3f90afc62 Revert "TEMP: add cleanup worker logging"
This reverts commit e41d35180e.

No longer needed.
2024-06-09 16:57:04 -04:00
f02d4d094d Merge branch 'pool-timeouts' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2024-06-09 16:28:45 -04:00
7c2c11fdd8 Merge branch 'pool-timeouts' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2024-06-09 15:33:03 -04:00
baee4acde1 Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2024-06-07 14:23:58 -04:00
cdf05077fe Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2024-06-01 23:27:34 -04:00
deb64d113e Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2024-05-31 11:09:14 -04:00
e41d35180e TEMP: add cleanup worker logging 2024-05-30 21:08:25 -04:00
fe8395c2cd Revert "TEMP: add logging for cleanup worker"
This reverts commit d89d189bd3.
2024-05-30 20:47:04 -04:00
d89d189bd3 TEMP: add logging for cleanup worker 2024-05-30 20:46:05 -04:00
d6592053e9 Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2024-05-27 23:32:37 -04:00
387c368b8c Fix Exiftool stderr being read as an image description
Fixes: #773
2024-05-23 14:40:25 -04:00
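
A hedged sketch of the underlying idea; the exiftool flags are
illustrative, not necessarily what Akkoma invokes. Keeping stderr out of
the captured output means warnings can no longer be mistaken for a
description:

# `path` is the uploaded file; stderr_to_stdout stays false so exiftool
# warnings never end up in the string stored as the image description.
case System.cmd("exiftool", ["-b", "-ImageDescription", path], stderr_to_stdout: false) do
  {description, 0} -> {:ok, String.trim(description)}
  {_output, _exit_code} -> {:error, :exiftool_failed}
end
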
6288682173 Pull security updates from upstream develop 2024-05-22 15:00:18 -04:00
f5a8f7ba2d Merge remote-tracking branch 'oneric/prune-batch' into akko.wtf 2024-05-14 22:56:23 -04:00
c127d48308 dbprune/activites: prune array activities first
This query is less costly; if something goes wrong or gets aborted later,
at least this part will already be done.
2024-05-15 03:45:50 +02:00
40ae91a45c dbprune: allow splitting array and single activity prunes
The former is typically just a few reports; it doesn't make sense to
rerun it over and over again in batched prunes or if a full prune OOMed.
2024-05-15 03:45:50 +02:00
3c319ea732 dbprune: use query! 2024-05-15 01:43:53 +02:00
91e4f4f885 dbprune: add more logs
Pruning can go on for a long time; give admins some insight that
something is happening to make it less frustrating, and make it easier
to tell which part of the process is stalled should this happen.

Again most of the changes are merely reindents;
review with whitespace changes hidden recommended.
2024-05-15 01:43:53 +02:00
7e03886886 dbprune: shortcut array activity search
This brought down query costs from 7,953,740.90 to 47,600.97
2024-05-15 01:43:53 +02:00
1caac640da Test both standalone and flag mode for pruning orphaned activities 2024-05-15 01:43:53 +02:00
b03947917a Also allow limiting the initial prune_object
May sometimes be helpful to get more predictable runtime
than just with an age-based limit.

The subquery for the non-keep-threads path is required
since delete_all does not directly accept limit().

Again most of the diff is just adjusting indentation, best
hide whitespace-only changes with git diff -w or similar.
2024-05-15 01:43:53 +02:00
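
A hedged sketch of the limit-via-subquery pattern mentioned above, since
Ecto's delete_all cannot take a limit directly; the schema and field
names are placeholders rather than the actual prune query:

import Ecto.Query

# `Object` and `Repo` stand in for the real schema and repo modules.
def prune_old_objects(days_old, limit) do
  ids =
    from(o in Object,
      where: o.inserted_at < ago(^days_old, "day"),
      limit: ^limit,
      select: o.id
    )

  # Deleting by id keeps delete_all happy without needing limit().
  from(o in Object, where: o.id in subquery(ids))
  |> Repo.delete_all()
end
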
3258842d0c Log number of deleted rows in prune_orphaned_activities
This gives feedback when to stop rerunning limited batches.

Most of the diff is just adjusting indentation; best reviewed
with whitespace-only changes hidden, e.g. `git diff -w`.
2024-05-14 23:45:10 +02:00
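
A hedged sketch of how the logged row count can drive batched reruns; the
prune function's name and return value are assumptions:

# Keep rerunning limited batches until a batch deletes fewer rows than
# the limit, which means the backlog is gone.
def prune_in_batches(limit) do
  deleted = prune_orphaned_activities(limit: limit)
  IO.puts("deleted #{deleted} orphaned activities")

  if deleted < limit, do: :done, else: prune_in_batches(limit)
end
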
ff684ba8ea Add standalone prune_orphaned_activities CLI task
This part of pruning can be very expensive and bog down the whole
instance into an unusable state for a long time. It can thus be desirable
to split it from prune_objects and run it on its own in smaller limited batches.

If the batches are small enough and spaced out a bit, it may even be possible
to avoid any downtime. If not, the limit can still help to at least make the
downtime duration somewhat more predictable.
2024-05-14 23:45:10 +02:00
f5b5838c4d refactor: move prune_orphaned_activities into own function
No logic changes. Preparation for standalone orphan pruning.
2024-05-14 23:45:10 +02:00
2007b1c586 Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2024-05-11 14:47:40 -04:00
5a90aa50f1 Revert "temp add logging for collection fetching"
This reverts commit 9486abca22.
2024-04-29 13:51:09 -04:00
36f2422650 Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2024-04-27 11:33:02 -04:00
6ed176ba45 Merge remote-tracking branch 'upstream/develop' into akko.wtf 2024-04-20 03:03:33 -04:00
9486abca22 temp add logging for collection fetching 2024-04-06 11:37:09 -04:00
1a3624f45f Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2024-04-06 11:36:42 -04:00
4f44d08816 [TEST] Avoid accumulating stale data in websockets
For some but not all instances (likely depending on usage patterns),
those [I’m guessing, to be tested] processes end up accumulating
stale binary data in such a way that it’s not included in young garbage
collection cycles. At the same time, full cycles are barely ever
triggered, making it seem like a memory leak.
To avoid this, make full sweeps more frequent for only the affected
processes.

TODO: actually test this theory + fix

ref:
  https://www.erlang.org/doc/man/erlang#ghlink-process_flag-2-idp226
  https://blog.guzman.codes/using-phoenix-channels-high-memory-usage-save-money-with-erlfullsweepafter
    (showed up in search results and inspired this)
  https://git.pleroma.social/pleroma/pleroma/-/merge_requests/4060
    (different patch due to different socket implementation)
2024-04-04 17:32:05 +02:00
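
For reference, the Erlang primitive involved is a per-process flag; where
exactly this hooks into the websocket transport differs in the actual
patch, and 20 is only an example value:

# Force a full (old-heap) GC sweep after 20 minor collections for this
# process only, instead of the VM-wide default.
:erlang.process_flag(:fullsweep_after, 20)
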
47896ae170 Merge branch 'develop' of https://akkoma.dev/AkkomaGang/akkoma into akko.wtf 2024-03-30 11:01:48 -04:00
c648f4af9d Merge remote-tracking branch 'upstream/develop' into akko.wtf 2024-02-24 15:40:23 +00:00
bb327870f7 Test both standalone and flag mode for pruning orphaned activities 2024-02-20 19:32:01 -05:00
4fcf2cbf85 Also allow limiting the initial prune_object
May sometimes be helpful to get more predictable runtime
than just with an age-based limit.

The subquery for the non-keep-threads path is required
since delete_all does not directly accept limit().

Again most of the diff is just adjusting indentation, best
hide whitespace-only changes with git diff -w or similar.
2024-02-20 19:32:01 -05:00
92e6839d46 Log number of deleted rows in prune_orphaned_activities
This gives feedback when to stop rerunning limited batches.

Most of the diff is just adjusting indentation; best reviewed
with whitespace-only changes hidden, e.g. `git diff -w`.
2024-02-20 19:32:01 -05:00
c4923b6ed8 Add standalone prune_orphaned_activities CLI task
This part of pruning can be very expensive and bog down the whole
instance into an unusable state for a long time. It can thus be desirable
to split it from prune_objects and run it on its own in smaller limited batches.

If the batches are small enough and spaced out a bit, it may even be possible
to avoid any downtime. If not, the limit can still help to at least make the
downtime duration somewhat more predictable.
2024-02-20 19:32:01 -05:00
ba14196856 refactor: move prune_orphaned_activities into own function
No logic changes. Preparation for standalone orphan pruning.
2024-02-20 19:32:01 -05:00
5 changed files with 24 additions and 10 deletions


@@ -976,16 +976,17 @@ defp maybe_send_registration_email(%User{email: email} = user) when is_binary(em
   defp maybe_send_registration_email(_), do: {:ok, :noop}

-  def needs_update?(user, options \\ [])
-  def needs_update?(%User{local: true}, _options), do: false
-  def needs_update?(%User{local: false, last_refreshed_at: nil}, _options), do: true
+  defp needs_update?(user, options)
+  defp needs_update?(%User{local: true}, _options), do: false
+  defp needs_update?(%User{local: false, last_refreshed_at: nil}, _options), do: true

-  def needs_update?(%User{local: false} = user, options) do
-    NaiveDateTime.diff(NaiveDateTime.utc_now(), user.last_refreshed_at) >=
-      Keyword.get(options, :maximum_age, 86_400)
+  defp needs_update?(%User{local: false} = user, options) do
+    Keyword.get(options, :update_existing, true) &&
+      NaiveDateTime.diff(NaiveDateTime.utc_now(), user.last_refreshed_at) >=
+        Keyword.get(options, :maximum_age, 86_400)
   end

-  def needs_update?(_, _options), do: true
+  defp needs_update?(_, _options), do: true

   # "Locked" (self-locked) users demand explicit authorization of follow requests
   @spec can_direct_follow_local(User.t(), User.t()) :: true | false


@@ -249,7 +249,7 @@ def stringify_keys(object), do: object
   def fetch_actor(object) do
     with actor <- Containment.get_actor(object),
          {:ok, actor} <- ObjectValidators.ObjectID.cast(actor) do
-      User.get_or_fetch_by_ap_id(actor)
+      User.get_or_fetch_by_ap_id(actor, update_existing: false)
     end
   end


@@ -108,7 +108,8 @@ defp remote_mention_resolver(
              (t["name"] == mention || mention == "#{t["name"]}@#{initial_host}")
            end),
          false <- is_nil(mention_tag),
-         {:ok, %User{} = user} <- User.get_or_fetch_by_ap_id(mention_tag["href"]) do
+         {:ok, %User{} = user} <-
+           User.get_or_fetch_by_ap_id(mention_tag["href"], update_existing: false) do
       link = Pleroma.Formatter.mention_tag(user, nickname, opts)
       {link, %{acc | mentions: MapSet.put(acc.mentions, {"@" <> nickname, user})}}
     else


@@ -163,7 +163,7 @@ def fix_addressing(object) do
     {:ok, %User{follower_address: follower_collection}} =
       object
       |> Containment.get_actor()
-      |> User.get_or_fetch_by_ap_id()
+      |> User.get_or_fetch_by_ap_id(update_existing: false)

     object
     |> fix_addressing_list_key("to")


@@ -161,6 +163,9 @@ test "publish to url with with different ports" do
inbox = "http://404.site/users/nick1/inbox"
<<<<<<< HEAD
assert {:error, _} = Publisher.publish_one(%{"inbox" => inbox, "json" => "{}", "actor" => actor, "id" => 1})
=======
assert {:error, _} =
Publisher.publish_one(%{
"inbox" => inbox,
@@ -168,6 +171,7 @@ test "publish to url with with different ports" do
"actor" => actor,
"id" => 1
})
>>>>>>> 30b1684e28f3a945c7e8a3cb464e21d425e26981
assert called(Instances.set_unreachable(inbox))
end
@@ -184,12 +188,16 @@ test "publish to url with with different ports" do
assert capture_log(fn ->
assert {:error, _} =
<<<<<<< HEAD
Publisher.publish_one(%{"inbox" => inbox, "json" => "{}", "actor" => actor, "id" => 1})
=======
Publisher.publish_one(%{
"inbox" => inbox,
"json" => "{}",
"actor" => actor,
"id" => 1
})
>>>>>>> 30b1684e28f3a945c7e8a3cb464e21d425e26981
end) =~ "connrefused"
assert called(Instances.set_unreachable(inbox))
@@ -205,6 +213,9 @@ test "publish to url with with different ports" do
inbox = "http://200.site/users/nick1/inbox"
<<<<<<< HEAD
assert {:ok, _} = Publisher.publish_one(%{"inbox" => inbox, "json" => "{}", "actor" => actor, "id" => 1})
=======
assert {:ok, _} =
Publisher.publish_one(%{
"inbox" => inbox,
@@ -212,6 +223,7 @@ test "publish to url with with different ports" do
"actor" => actor,
"id" => 1
})
>>>>>>> 30b1684e28f3a945c7e8a3cb464e21d425e26981
refute called(Instances.set_unreachable(inbox))
end