Compare commits

...

244 commits

Author SHA1 Message Date
a913c11230 bump version
All checks were successful
ci/woodpecker/push/docs Pipeline was successful
ci/woodpecker/push/publish/4 Pipeline was successful
ci/woodpecker/push/publish/1 Pipeline was successful
ci/woodpecker/push/publish/2 Pipeline was successful
2026-03-14 13:20:10 +00:00
Oneric
9d1e169472 webfinger/finger: allow WebFinger endpoint delegation with FEP-2c59
The ban on redirects was based on a misreading of FEP-2c59’s
requirements. It is only meant to forbid addresses other than
the canonical ActivityPub ID being advertised as such in the
returned WebFinger data.
This does not meaningfully lessen security and verification still
remains stricter than without FEP-2c59.

Notably this allows Mastodon with its backwards WebFinger redirect
(redirecting from the canonical WebFinger domain to the AP domain)
to adopt FEP-2c59 without causing issues or extra effort to existing
deployments which already adopted the Mastodon-recommended setup.
2026-03-13 00:00:00 +00:00
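The delegation rule above can be sketched as follows. This is a minimal illustration in Python with a hypothetical `validate_webfinger` helper, not the actual Akkoma code: redirects on the way to the WebFinger document are fine; only the ActivityPub ID advertised inside the returned data must match the canonical one.

```python
# Sketch of the FEP-2c59 rule described above (hypothetical helper,
# not the actual Akkoma implementation): endpoint delegation via
# redirects is allowed; advertising a non-canonical ActivityPub ID
# inside the returned WebFinger data is what is forbidden.
def validate_webfinger(doc: dict, canonical_ap_id: str) -> bool:
    ap_links = [
        link.get("href")
        for link in doc.get("links", [])
        if link.get("rel") == "self"
        and link.get("type") == "application/activity+json"
    ]
    # At least one self link must exist, and every one must point
    # at the canonical AP id.
    return bool(ap_links) and all(href == canonical_ap_id for href in ap_links)
```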
Oneric
4f03a3b709 federation.md: document our WebFinger behaviour 2026-03-13 00:00:00 +00:00
Oneric
e8d6d054d0 test/webfinger/finger: fix tests
One asserted the response format of finger_actor on a finger_mention
call, which a previous iteration of the implementation mistakenly returned.
The other didn’t actually test anything WebFinger-specific, but fundamental id
containment and verification for generic AP fetches. Now it does.
2026-03-12 00:00:00 +00:00
Oneric
ab2210f02d test/webfinger/finger: add more validation tests 2026-03-12 00:00:00 +00:00
Oneric
5256785b9a test/webfinger/finger: mock url in all new tests
It is used in security checks, and only due to an abundance of
existing tests lacking it is the test env allowed to
fall back to the query URL for these tests, as a temporary
(well, ... it’s been a while now) measure.
We really shouldn’t be adding more deficient tests like that.
2026-03-12 00:00:00 +00:00
Oneric
4460c9c26d test/webfinger/finger: improve new test names and comments 2026-03-12 00:00:00 +00:00
f87a2f52e1 add extra happy and unhappy path tests for webfingers 2026-03-12 00:00:00 +00:00
627ac3645a add some more webfinger tests 2026-03-12 00:00:00 +00:00
2020a2395a add baseline webfinger FEP-2c59 tests 2026-03-12 00:00:00 +00:00
Oneric
fd734c5a7b webfinger/finger: normalise mention resources to more common format 2026-03-12 00:00:00 +00:00
Oneric
d86c290c25 user: drop unused function variants and parameters 2026-03-12 00:00:00 +00:00
Oneric
838d0b0a74 ap/views: use consistent structure for root collections
Notably the user follow* collections faked a zero totalItems count
rather than simply omitting the field and included a link to a first
page even though this page is known to be unfetchable, while most others
omitted it. Omitting it will spare us receiving requests doomed to
fail for this page and matches the format used by GtS and Mastodon.

Such requests occur e.g. when other *oma servers try to determine
whether follow* relationships should be publicly shown. Other
implementations like Mastodon¹ simply treat the presence of a (link to)
a first page already as an indicator of a public collection. By
omitting the link, Mastodon servers will now honour our users’
request to hide follow* details too.

The "featured" user collection is typically quite small and thus
the sole occurrence of the alternative form where all items are directly
inlined into the root without an intermediate page. Thus it is not
converted, nor is a helper created for this format.

1: eb848d082a/app/services/activitypub/process_account_service.rb (L303)
2026-03-12 00:00:00 +00:00
Oneric
ddcc1626f8 ap/user_view: optimise total count of follow* collections
The count is precomputed as a user property.
Masto API already relies on this cached value.
This lets us skip actually querying follow* details unless
follow(ing/ed) users are publicly exposed and thus will be served.

In fact this could now be optimised further by using keyset pagination
to only fetch what’s actually needed for the current page. This would
also completely obsolete the need for the _offset collection page
helpers. However, for this pagination to be efficient it needs to happen
on the follow relation table, not users. This is left to a future commit.

Due to an ambiguity with PhoenixHtmlHelpers the Ecto.Query
select import was unusable without extra qualification,
therefore it is converted to a require expression.
2026-03-12 00:00:00 +00:00
Oneric
fbf02025e0 user/query: drop unused legacy parameter
There is no deactivated db column since 860b5c7804,
and users of this legacy kludge introduced in 10ff01acd9
all migrated to an equivalent newer parameter.
2026-03-12 00:00:00 +00:00
Oneric
a899663ddc Fix malformed is_active qualifiers in User.Query usages
While "is_active" is a property of users, it is not a recognised keyword
for the User.Query helper; instead "deactivated" with negated value must
be used.
This arose because originally the user property was also called
"deactivated" until it was renamed and negated five years ago
in 860b5c7804. This included renaming
the parameter in most but not all User.Query usages.
Notably the parameter in User.get_followers_query was renamed
but not in User.get_friends_query (friends == following).
The accepted query parameter in User.Query however was not changed.
This led to the former mistakenly including deleted users, causing
too large values to be reported in the ActivityPub follower, but not
following, collection as reported in #1078.

In Masto API responses filtering by `User.is_visible` weeded out
the extra accounts before they got displayed to API users.

On the surface it might seem logical to align the name of the User.Query
parameter with the actual property name. However, User.Query already
accepts an "active" parameter which is an alias for limiting to accounts
which are neither deleted nor deactivated by moderators (both indicated
by is_active) and also are not newly created account requests still
pending an admin approval (is_approved) or necessary email confirmation
(is_confirmed); in short as the alias suggests whether the account is
active. Two highly similar parameter names like this would be much too
confusing.

The renamed "is_active" on the other hand does not actually suffice to
say whether an account is actually active, only whether it has (not yet)
ceased to be active (by its own volition or moderator action).
Meaning its "new" name is actively misleading. Arguably the rename
made things worse for no reason whatsoever and should not ever have
happened.

For now, we’ll just revert the incorrect query helper parameter renames.

Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/1078
2026-03-12 00:00:00 +00:00
Oneric
d2fda68afd user/fetch: properly flag remote, hidden follow* counts
Instead of treating them like a public zero count.
2026-03-12 00:00:00 +00:00
Oneric
9fb6993e1b cosmetic/user/fetch: reorder functions
Such that helper functions are near their sole caller
instead of being interspersed with other public functions
2026-03-12 00:00:00 +00:00
Oneric
4bae78d419 user/fetcher: assume collection to be private on fetch errors
With the follow info update now actually running after being fixed,
a bunch of errors about :not_found and :forbidden collection fetches
started to be logged.
2026-03-12 00:00:00 +00:00
Oneric
f3821628e3 test: fix module name of GettextCompanion tests
Oversight in 54fd8379ad
2026-03-12 00:00:00 +00:00
Oneric
b4c6a90fe8 webfinger/finger: allow leading @ in handles
At least for FEP-2c59 this shouldn’t be the case
but in theory some WebFinger implementations may
serve their subject with an extra @
2026-03-12 00:00:00 +00:00
Oneric
a37d60d741 changelog: add missing changes 2026-03-12 00:00:00 +00:00
Oneric
71757f6562 user/fetcher: utilise existing verified nick on user refetch
Unless the force-revalidation config is enabled (currently the default).
Also avoids an unnecessarily duplicated db query for the old user.
2026-03-12 00:00:00 +00:00
Oneric
892628d16d user/fetcher: always detect nickname changes on Update activities
Even when the "always force revalidation" option is not enabled
while avoiding unnecessary revalidations if nothing changed.

With this heuristic we should be able to change the default to "false"
soon, but for now keep it enabled to help amend recent bugs.
2026-03-12 00:00:00 +00:00
Oneric
2a1b6e2873 user/fetcher: drop nonsense type-based follow update skip
The real intent behind the commit introducing this seemed to have been
avoiding running this when the actor does not expose follow collection ids
ec24c70db8.
This is already taken care of with the :collections_available check.
Some implementations use other actor types like Group etc. for visible,
followable actors, making skipping undesirable.

Notably though, this actually has _always_ skipped counter updates
as even when this check was introduced, the user changeset data and
struct used the :actor_type key not :type.

In some situations fetch_follow_information_for_user is called directly
from other modules thus occasionally counters still got updated
for accounts with closer federation relationships, masking the issue.
2026-03-12 00:00:00 +00:00
Oneric
756cfb6798 user/fetcher: fix follow count update
Users have not had an info substruct for over 6 years,
since e8843974cb.
Instead the counters are now directly part of the user struct itself.
2026-03-12 00:00:00 +00:00
Oneric
698ee181b4 webfinger/finger: fix error return format for invalid XML
By default just a plain :error atom is returned,
differing from the return spec of the JSON version
2026-03-12 00:00:00 +00:00
Oneric
c80aec05de webfinger: rewrite finger query and validation from and to actors
Resolves all security issues discussed in 5a120417fd86bbd8d1dd1ab720b24ba02c879f09
and thus reactivates skipped tests.
Since the necessary logic significantly differs between WebFinger handle discovery/validation
and fetching of actors from just the webfinger handle, the relevant public function was split,
also necessitating a partial rewrite of the user fetch logic.

This works with all of the following:
  - ActivityPub domain is identical to WebFinger handle domain
  - AP domain set up host-meta LRDD link to WebFinger domain
  - AP domain set up a HTTP redirect on /.well-known/webfinger
    to the WebFinger domain
  - Mastodon style: WebFinger domain set up a HTTP redirect
    on its well-known path to AP backend (only for discovery
    from nickname though until Mastodon supports FEP-2c59)

This intentionally does not work for setups where FEP-2c59 is not
supported and the initially queried domain simply directly responds with
data containing a nickname from another domain’s authority without any
redirects. (This includes the setup currently recommended by Mastodon
when enriching an AP actor. Once Mastodon supports FEP-2c59 though, its
setup will automatically start to work again too.)
While technically possible to cross-verify the data with the nickname
domain, the existing validation logic is already complex enough and
such cross-validation needs extra safety measures to not get trapped
into infinite loops. Such setups are considered broken.
2026-03-12 00:00:00 +00:00
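The acceptance rule underlying the setups listed above can be sketched as a simple predicate. All names here are illustrative assumptions, not the actual validation code: a response claiming a nickname under another domain's authority is only accepted when that domain itself delegated us to the serving endpoint (via host-meta LRDD or an HTTP redirect).

```python
# Hedged sketch of the acceptance rule (illustrative names only):
# same-domain subjects always pass; foreign-authority nicknames
# require that the nickname's domain delegated the lookup to the
# responding endpoint.
def accept_subject(nick_domain: str, queried_domain: str, delegated: bool) -> bool:
    if nick_domain == queried_domain:
        return True
    return delegated  # foreign authority requires delegation
```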
Oneric
69622b3321 Drop obsolete kludge for a specific, dead instance
It doesn’t make sense in general (many implementations use ids not nicks in ap_id)
and just wastes time by making additional, unnecessary, failing network requests.
This arguably should have never been committed.
2026-03-12 00:00:00 +00:00
Oneric
1e6332039f user/fetcher: also validate user data from Update
And fixup sloppy test data
2026-03-12 00:00:00 +00:00
Oneric
eb15e04d24 Split user fetching out of general ActivityPub module
The ActivityPub module is already overloaded and way too large.
Logic for fetching users and user information is isolated from
all other parts of the ActivityPub module, so let’s split it out.
2026-03-12 00:00:00 +00:00
Oneric
25461b75f7 webfinger: split remote queries and local data generation
They do not share any logic and the lack of separation makes it easy
to end up in the wrong section with ensuing confusion.
2026-03-12 00:00:00 +00:00
Oneric
4a35cbd23d fed/out: expose webfinger property in local actors (FEP-2c59)
It makes discovery and validation of the desired webfinger address
much easier. Future commits will actually use it for validation
and nick change discovery.
2026-03-12 00:00:00 +00:00
Oneric
1cdc247c63 Temporarily disable customised webfinger hosts
Proper validation of nicknames must ensure both the domain
the nickname is associated with _and_ the actor to be assigned this
nickname consent to it.
Prior attempts at securing this were made in
a953b1d927 and
0d66237205 but they do not suffice.

The existing code attempted to validate webfinger responses independent
of the actual ActivityPub actor data and only ever consider the former
(yet failed to properly validate even this).

When resolving a user via a user-provided nickname, the assignment
done by the provided URL was simply trusted regardless of the actor’s
AP host or data. When resolving the AP id, the nickname from this
original WebFinger query was passed along as pre-trusted data, overriding
any discovery or validation from the actual actor’s side.
This allowed malicious actors to serve rogue WebFinger data associating
arbitrary actors with any nicknames they wished. Prompting servers to
resolve this rogue nickname then creates this nonconsensual assignment.

Notably, the existing code’s attempt at verification (only for domain
consent) used the originally requested URL for comparison against the
domain in the nickname handle. This effectively disabled custom
WebFinger domains for honest servers unless using a host-meta LRDD link.
(While LRDD links were recommended in the past by both *oma and Mastodon,
 today most implementations other than *oma instead
 recommend setups employing HTTP redirects.)
Still, this strictness did not prevent spoofing by malicious servers.
It did however mean that rogue nickname assignments from an initial
nickname-based discovery were at least undone on the next user refresh
provided :pleroma, Pleroma.Web.WebFinger, :update_nickname_on_user_fetch
was not changed from its default truthy value.
(A renewed fetch via the rogue nickname would re-establish it though)

When enriching an already resolved ActivityPub actor to discover its
nickname, the WebFinger query was not done with the unique AP id as a
resource, but with a guessed nickname handle.
Furthermore, the received WebFinger response was not validated
to ensure the ActivityPub ID the WebFinger server pointed to
for the final nickname matched the actual ID in the considered
AP actor data.
While the faulty request URI check described above provides some
friction for malicious actors, it is still possible for mischievous
AP instances to set up a rogue LRDD link pointing to a third-party
domain’s WebFinger and using the freedom provided by the LRDD link
to overwrite the resource value we provide in the lookup, thus usurping
existing nicknames in another domain’s authority.

Proposed tweaks to the existing, faulty checks to work with
HTTP-redirect-based custom WebFinger domains would have made it
even easier to usurp nicknames from foreign domains.

For now simply disable custom WebFinger domains as a quick hotfix.
Subsequent commits will partially de-spaghettify the relevant code
and completely overhaul webfinger and nickname handling and validation.
2026-03-12 00:00:00 +00:00
Oneric
4c63243692 emoji/pack: fix in-place changes
Deleting a whole pack at once didn’t remove its emoji from memory,
and newly updated or added emoji used a wrong path. While the pack
also contains a path, this is the filesystem path, while in-memory
Emoji need the URL path constructed from a constant prefix,
the pack name and just the filename component of the filesystem file.
Tests used to not check for this at all.

Fixes oversight in 318ee6ee17
2026-03-12 00:00:00 +00:00
Oneric
7b9a0e6d71 twitter_api/remote_follow: allow leading @ in nicknames
And never attempt to fetch nicknames as URLs
2026-03-02 00:00:00 +00:00
Oneric
5873e40484 grafana: update reference dashboard
2026-02-19 00:00:00 +00:00
Oneric
f8abae1f58 docs/admin/monitoring: document reference dashboard requires VM
As reported on IRC.
What exactly Prometheus takes offense with isn’t clear yet.
2026-02-19 00:00:00 +00:00
Oneric
4912e1782d docs/admin/monitoring: add instructions to setup outlier statistics 2026-02-19 00:00:00 +00:00
Oneric
6ed678dfa6 mix/uploads: fix rewrite_media_domain for user images
Fixes: #1064
2026-02-19 00:00:00 +00:00
7c0deab8c5 Merge pull request 'Fetcher: Only check SimplePolicy rules when policy is enabled' (#1044) from mkljczk/akkoma:fetcher-simple-policy into develop
Reviewed-on: #1044
Reviewed-by: Oneric <oneric@noreply.akkoma>
2026-02-18 13:37:27 +00:00
00fcffe5b9 fix test
Signed-off-by: nicole mikołajczyk <git@mkljczk.pl>
2026-02-17 14:32:59 +01:00
246e864ce4 Merge pull request 'Mastodon-flavour (read) quotes API compat' (#1059) from Oneric/akkoma:masto-quotes-api into develop
Reviewed-on: #1059
2026-02-07 22:39:47 +00:00
c4bcfb70df Merge pull request 'Use local elixir-captcha clone' (#1060) from use-local-captcha-clone into develop
Reviewed-on: #1060
Reviewed-by: Oneric <oneric@noreply.akkoma>
2026-02-07 20:11:07 +00:00
cf8010a33e Merge pull request 'ensure utf-8 nicknames on nickname GETs and user validator' (#1057) from user-utf8 into develop
Reviewed-on: #1057
Reviewed-by: Oneric <oneric@noreply.akkoma>
2026-02-07 19:41:26 +00:00
4c657591a7 use version with git history
2026-02-07 19:40:09 +00:00
6ae0635da7 mix format
2026-02-07 19:28:13 +00:00
11dbfe75b9 pleroma git OBLITERATED
2026-02-07 19:16:32 +00:00
58ee25bfbb correct typings, duplicated check
2026-02-07 19:09:02 +00:00
Oneric
fd87664b9e api/statuses: allow quoting local statuses locally
2026-02-07 00:00:00 +00:00
Oneric
731863af9c api/statuses: allow quoting own private posts
Provided the quote is private too.

Ideally we’d inline the quoted, private status since not all
remotes may already know the old post and some implementations
(ourselves included) have trouble fetching private posts.
In practice, at least, we cannot yet make use of such an inlined post
anyway, defeating the point. Implementing the inlining and the ability to
make use of the inlined copy is thus deferred to a future patch.

Resolves: #952
2026-02-07 00:00:00 +00:00
Oneric
5b72099802 api/statuses: provide polyglot masto-and-*oma quote object
However, we cannot provide Masto-style shallow quotes this way.

Inspired-by: https://issues.iceshrimp.dev/issue/ISH-871#comment-019c24ed-c841-7de2-9c69-85e2951135ca
Resolves: #1009
2026-02-07 00:00:00 +00:00
Oneric
c67848d473 api/statuses: accept and prefer masto-flavour quoted_status_id
The quote creation interface still isn’t exactly drop-in compatible for
masto-only clients since we do not provide or otherwise deal with
quote-authorization objects which clients are encouraged to check before
even offering the possibility of attempting a quote. Still, having a
consistent parameter name will be easier on clients.

Also dropped unused quote_id parameter from ActivityDraft struct
2026-02-07 00:00:00 +00:00
Oneric
a454af32f5 view/nodeinfo: use string keys
This makes embedded nodeinfo data
consistent between local and remote users
2026-02-07 00:00:00 +00:00
Oneric
e557bbcd9d api/masto/account: filter embedded nodeinfo
The only known user is akkoma-fe and it only ever
accesses the software information. For *oma instances
the full, unfiltered nodeinfo data can be quite large
adding unneeded bloat to API responses.
This would have become worse with the duplication of
account data needed for Masto quote post interop.

In case a client we’re not aware of actually uses more fields from
nodeinfo, a new but temporary config setting is provided as a workaround.

Fixes: #827
2026-02-07 00:00:00 +00:00
b20576da2e Merge pull request 'http: allow compressed responses, use system CA certs instead of CAStore fallback' (#1058) from Oneric/akkoma:http-lib-updates into develop
Reviewed-on: #1058
2026-01-30 20:14:53 +00:00
dee0e01af9 object/fetcher: only check SimplePolicy rules when policy is enabled
Signed-off-by: nicole mikołajczyk <git@mkljczk.pl>
2026-01-30 00:00:00 +00:00
Oneric
e488cc0a42 http/adapter_helper: explicitly enable IPv4
Mint was upgraded in b1397e1670
2026-01-27 00:00:00 +00:00
Oneric
be21f914f4 mix: bump finch and use system cacerts
This upgrade pulls in a fix to better avoid killing re-activated pools,
obsoletes the need for our own HTTP2 server push workaround and allows
us to use system CA certs without breaking plain HTTP connections.

We tried to do the latter before on a per-request basis, but this didn’t
actually do anything and we actually relied on the CAStore package
fallback the entire time. The broken attempt was removed in
ed5d609ba4.

Resolves: #880
2026-01-27 00:00:00 +00:00
Oneric
b9eeebdfd7 http: accept compressed responses
Resolves: #755
2026-01-27 00:00:00 +00:00
Oneric
c79e8fb086 mix: update Tesla to >= 1.16.0
This is the first release containing fixes making DecompressResponse
stable enough and suitable to be used by default, allowing us to
profit from transport compression in obtained responses.

(Note: no compression is used in bodies we send out, i.e.
 ActivityPub documents federated to remote inboxes, since
 this will likely break signatures depending on whether
 the checksum is generated and checked before or after compression)

Ref: 5bc9b82823
Ref: 288699e8f5
2026-01-27 00:00:00 +00:00
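The signature concern in the parenthesised note above can be demonstrated directly: an HTTP Digest (or a signature covering it) is computed over the exact body bytes, so the plain and compressed representations hash differently.

```python
# Demonstrates why compressing outgoing bodies could break
# signatures, per the note above: the digest covers exact bytes,
# and plain vs. gzip-compressed bodies hash differently.
import gzip
import hashlib

body = b'{"type": "Note", "content": "hello"}'
plain_digest = hashlib.sha256(body).hexdigest()
gzip_digest = hashlib.sha256(gzip.compress(body)).hexdigest()
assert plain_digest != gzip_digest
```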
8da6785c46 mix format
2026-01-25 01:31:26 +00:00
3deb267333 if we don't have a preferredUsername, accept standard fallback 2026-01-25 01:30:25 +00:00
0d7bbab384 ensure utf-8 nicknames on nickname gets and user validator 2026-01-25 01:29:10 +00:00
aafe0f8a81 Merge pull request 'scrubbers/default: Allow "mention hashtag" classes used by Mastodon' (#1056) from mkljczk/akkoma:allow-mention-hashtag into develop
Reviewed-on: #1056
2026-01-24 14:39:56 +00:00
24faec8de2 scrubbers/default: Allow "mention hashtag" classes used by Mastodon
Signed-off-by: nicole mikołajczyk <git@mkljczk.pl>
2026-01-24 14:17:33 +01:00
816d2332ab Merge pull request 'Update docs/docs/administration/backup.md' (#1050) from patatas/akkoma:develop into develop
Reviewed-on: #1050
Reviewed-by: Oneric <oneric@noreply.akkoma>
2026-01-18 17:28:49 +00:00
a4a547e76e Update docs/docs/administration/backup.md
separate commands with semicolon (consistent with previous step in restore instructions)
2026-01-17 20:08:57 +00:00
Oneric
6cec7d39d6 db/migration/20251227000002: improve performance with older PostgreSQL
On fedi.absturztau.be the planner did not utilise the context
index for the original version leading to a query plan
100× worse than with this tweaked version.

With PostgreSQL 18 the relative difference is much smaller
but still in favour of the new version, with the best observed
instance resulting in nearly half the estimated cost.
2026-01-13 00:00:00 +00:00
Oneric
3fbf7e03cf db/migration/20251227000000: add analyze statement
The migration after next greatly profits from this index,
but sometimes the planner may not pick it up immediately
without an explicit ANALYZE call.
2026-01-12 00:00:00 +00:00
Oneric
31d277ae34 db: (re)add activity type index
Before 32a2a0e5fa the context index
also acted as a type index (surprising, given its name).
Type-based queries are used in the daily pruning of old e.g. Delete
activities by PruneDatabaseWorker, when querying reports for the admin
API and, inferring from the significant change in average execution time,
by a mysterious COUNT query we couldn’t associate with any code so far:

  SELECT count(a0."id") FROM "activities" AS a0 WHERE (a0."data"->>$2 = $1);

Having this as a separate index without pre-sorting results in
an overall smaller index size than merging this into the context index
again, and since it was not sorted before, non-context queries
appear to not significantly profit from presorting.
2026-01-11 00:00:00 +00:00
Oneric
3487e93128 api/v1/custom_emoji: improve performance
Metrics showed this endpoint taking unexpectedly long processing times
for a simple readout of an ETS table. Further profiling with
fprof revealed almost all time was spent in URI.merge.

Endpoint.url() is per its documented API contract guaranteed to have a
blank path and all our emoji file paths start with a slash.
Thus, a simple binary concat is equivalent to the result of URI.merge.

This cuts down the profiled time of just the fetching and
rendering of all emoji to a tenth for me. For the overall API request
with overhead for handling of the incoming request as well as encoding
and sending out of the data, the overall time as reported by phoenix
metrics dropped by a factor of about 2.5.
With a higher count of installed emoji the overall relative time
reduction is assumed to get closer to the relative time reduction of
the actual processing in the controller + view alone.

For reference, this was measured with 4196 installed emoji:
(keep in mind enabling fprof slows down the overall execution)

          fprof'ed controller   Phoenix stop duration
 BEFORE:     (10 ± 0.3)s             ~250 ms
  AFTER:    (0.9 ± 0.06)s            ~100 ms

Note: akkoma-fe does not use this endpoint, but /api/v1/pleroma/emoji
defined in Pleroma.Web.TwitterAPI.UtilController:emoji/2 which only
emits absolute-path references and thus had no use for URI.merge anyway.
2026-01-11 00:00:00 +00:00
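The optimisation above can be restated in Python terms: when the base URL has a blank path (as Endpoint.url() guarantees) and the emoji path is absolute, a plain string concat agrees with a full URL merge, so the expensive merge can be dropped.

```python
# Why the concat is safe (illustrative equivalent of the Elixir
# change): with a blank-path base and an absolute path, URL merging
# and concatenation produce the same result.
from urllib.parse import urljoin

base = "https://example.social"   # blank path, like Endpoint.url()
path = "/emoji/pack/blob.png"     # emoji file paths always start with "/"
assert urljoin(base, path) == base + path
```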
93b513d09c Merge pull request 'Fix conversations API' (#1039) from Oneric/akkoma:fix-conv-api into develop
Reviewed-on: #1039
2026-01-11 15:54:49 +00:00
Oneric
6443db213a conversation: remove unused users relationship
It is never used outside tests,
and even there its correctness and/or
worth was questionable. The participations’ recipients
or just testing over the participations’ user_id seem
like a better fit.
In particular, this was always preloaded for API responses,
needlessly slowing them down.
2026-01-11 00:00:00 +00:00
Oneric
263c915d40 Create and bump conversations on new remote posts
The handling of remote and local posts should really be unified.
Until now, new remote posts did not bump conversations
(notably though, until the previous commit, remote
 edits (and only edits) did in fact bump conversations due to being
 the sole caller of notify_and_stream outside its parent module).
2026-01-11 00:00:00 +00:00
Oneric
388d67f5b3 Don't mark conversations as unread on post edits
Without any indication of which post was edited this is only confusing and annoying.
2026-01-11 00:00:00 +00:00
Oneric
6adf0be349 notifications: always defer sending
And consistently treat muted notifications.

Before, when notifications were immediately sent out, it correctly
skipped streaming and web pushes for muted notifications, but the
distinction was lost when notification sending was deferred.
Furthermore, for users which are muted, but are still allowed to
create notifications, it previously (sometimes) automatically marked
them as read. Now notifications are either always silent, meaning
 - will not be streamed or create web pushes
 - will automatically be marked as read on creation
 - should not show up unless passing with_muted=true
or active, meaning
 - will be streamed to active websockets and cause web pushes if any
 - must be marked as read manually
 - show up without with_muted=true

Deferring the sending out of notifications is desirable to avoid duplicate
sending and ghost notifications when processing of the activity fails
later in the pipeline, and it avoids stalling the ongoing db transaction.

Inspired by https://git.pleroma.social/pleroma/pleroma/-/merge_requests/4032
but the actual implementation differs to not break with_muted=true.
2026-01-11 00:00:00 +00:00
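The silent/active split described above can be sketched as follows (a hypothetical Python model for illustration only; the actual Elixir implementation differs):

```python
from dataclasses import dataclass

@dataclass
class Notification:
    user_muted_sender: bool
    seen: bool = False

def deliver(note: Notification, stream, push) -> str:
    # Silent notifications: marked read on creation, never streamed or
    # pushed, and hidden unless the client passes with_muted=true.
    if note.user_muted_sender:
        note.seen = True
        return "silent"
    # Active notifications: streamed and pushed, must be read manually.
    stream(note)
    push(note)
    return "active"
```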
Oneric
2516206a31 conversation: include owner in recipients upon creation
Participation.set_Recipients already does something equivalent,
but it was forgotten here.
2026-01-11 00:00:00 +00:00
Oneric
9311d603fb conversation_controller: skip superfluous second order clause
This might also have prevented utilising the pre-sorted index.
2026-01-11 00:00:00 +00:00
Oneric
34df23e468 api/masto/conversation: fix pagination over filtered blocks
Previously the view received pre-filtered results and therefore
pagination headers were also only generated with pre-filtered
results in mind. Thus the next page would again traverse over
previously seen but filtered out entries.
Crucially, since this last_activity filtering is happening _after_
the SQL query, it can lead to fewer results being returned than
requested and in particular even zero results remaining.
Sufficiently large blocks of filtered participations thus caused
pagination to get stuck and abort early.

Ideally we would like for this filtering to occur as part of the SQL
query ensuring we will laways return as many results as are allowed as
long as there are more to be found. This would oboslete this issue.
However, for the reasons discussed in the previous commit’s message
this is currently not feasible.

The view is the only caller inside the actual application of the
for_user_with_pagination* functions, thus we can simply move filtering
inside the view, allowing the full result set to be used when generating
pagination headers.
This still means there might be empty result pages, but now with
correct pagination headers one can follow to eventually get the full set.
2026-01-11 00:00:00 +00:00
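The pagination fix above can be illustrated with a minimal keyset sketch (hypothetical Python, assuming a simple descending max_id cursor): the cursor is derived from the unfiltered page, so a run of hidden entries can no longer stall pagination.

```python
def page(rows, limit, max_id=None, visible=lambda r: True):
    # rows are sorted by descending id; the cursor selects rows older than max_id
    selected = [r for r in rows if max_id is None or r["id"] < max_id][:limit]
    # The cursor must come from the *unfiltered* page: even if every row on
    # this page is filtered out, next_max_id still advances and pagination
    # cannot get stuck on a block of hidden entries.
    next_max_id = selected[-1]["id"] if selected else None
    shown = [r for r in selected if visible(r)]  # filter inside the "view"
    return shown, next_max_id
```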
Oneric
1029aa97d2 api/masto/conversations: paginate by last status id
The old pagination logic was inconsistent and thus broken.
It used to order conversations based on updated_at but generated
pagination HTTP Link headers based on participation IDs.
Thus entries could repeat or be left out entirely.

Notably, using updated_at also led to bumps when
merely marking an individual conversation as read,
though not when marking _all_ conversations as read in bulk,
since Repo.update_all does not touch date fields automatically.

For consistent and sensible "last active" ordering this is replaced
by using the flake ID (which contains the date) of the last status
which bumped the conversation for both ordering and pagination
parameters. This takes into account whether the status was
visible to the participation owner at the time of posting.

Notably however, it does not care about whether the status continues to
exist or continues to be visible to the owner. Thus this is not marked
as a foreign key and MUST NOT be blindly used as such!
For the most part, this should be considered as just a funny timestamp
which is more resilient against spurious bumps than updated_at, offers
deterministic sorting for simultaneously bumped conversations and
better usability in pagination HTTP headers and requests.

Implementing this as a proper foreign key with continuously enforced
visibility restrictions was considered. This would allow just loading
the "last activity" by joining on this foreign key, immediately
deleting participations once they become empty and obsoleting the
pre-existing post-query result filtering.
However, maintaining this such that visibility restrictions are
respected at all times is challenging and incurs overhead.
E.g. unfollows may retroactively change whether the participation owner
is allowed to access activities in the context.

This may be reconsidered in the future once we are clearer
on how we want to (not) extend conversation features.

This also improves query performance by now using a multi-row,
pre-sorted index such that one user querying their latest conversations
(a frequent operation) only needs to perform an index lookup (and
load full details from the heap).
Before, queries from rarely active users needed to traverse newer
conversations of all other users to collect results.
2026-01-11 00:00:00 +00:00
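Why an ID with an embedded creation time works as both sort key and pagination cursor can be sketched with a toy flake generator (a hedged illustration only; the real flake ID layout differs):

```python
import itertools

_seq = itertools.count()

def flake(ts_ms: int) -> int:
    # toy flake: timestamp in the high bits, then a sequence counter,
    # so sorting by ID sorts by time with deterministic tie-breaking
    return (ts_ms << 16) | (next(_seq) & 0xFFFF)

a = flake(1_700_000_000_000)
b = flake(1_700_000_000_000)  # same millisecond, still strictly ordered
c = flake(1_700_000_000_001)

assert a < b < c
assert (a >> 16) == 1_700_000_000_000  # the timestamp is recoverable
```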
Oneric
ebd22c07d1 test/factory: accept override parameters in more factories 2026-01-11 00:00:00 +00:00
Oneric
97b2dffcb9 pagination: allow custom pagination ids
While we already use wrapped return lists for
HTML pagination headers, currently SQL queries
from the SQL pagination helper use the primary key "id"
of some given table binding. However, for participations
we would like to be able to sort by a field which is not
a primary key. Thus allow custom field names.
2026-01-11 00:00:00 +00:00
Oneric
613135a402 ap: fix streamer crashes on new, locally initiated DM threads
Since side effect handling of local notes currently can only immediately
stream out changes and notifications (even though it really shouldn’t, for
many more reasons than this), the transaction inserting the various
created objects into the database is not finished yet when StreamerView
tries to render the conversation.

This would be fine were it using the same db connection as the
inserting transaction. However, the streamer is a different process
and gets sent the in-memory version of the participation.

In the case of newly created threads, the streamer process will not be
able to preload the participation itself since it uses a different db
connection and thus cannot see any effects of the unfinished transaction
yet. Thus it must be preloaded before passing it to Streamer.

Notably, the same reasoning applies to the new status activity itself.
Though fetching the activity takes more time with several preparatory
queries, and in practice it appears rare for the actual activity query to
occur before the transaction finishes.
Nonetheless, this too should and will be fixed in a future commit.

Fixes: #887
2026-01-11 00:00:00 +00:00
Oneric
120a86953e conversation: don't create participations for remote users
They can never query the participation via API nor edit the recipients,
thus such participations would just forever remain unused and are
a waste of space.
2026-01-11 00:00:00 +00:00
Oneric
8d0bf2d2de conversation/participation: fix restrict_recipients
For unfathomable reasons, this did not actually use recipient(ship)
information in any way, but compared against all users owning a
participation within the same context.
This obviously contradicts *oma’s manually managed recipient data
and even without this extension it breaks if there are multiple distinct
subthreads under the same (possibly public) root post.
2026-01-11 00:00:00 +00:00
Oneric
32ec7a3446 cosmetic/conversation/participation: mark eligible functions as private 2026-01-11 00:00:00 +00:00
Oneric
9fffc49aaa conversation/participation: delete unused function
Redundant with unread_count/1 and only the latter is actually used
2026-01-11 00:00:00 +00:00
Oneric
32a2a0e5fa db: tweak activity context index
Presorting the index will speed up all context queries
and avoids having to load the full context before sorting
and limiting can even start.
This is relevant when getting the context of a status
and for dm conversation timelines.

Including the id column adds to the index size,
but dropping non-Creates also brings it down again
for an overall only moderate size increase.
The speedup when querying large threads/conversations
is worth it either way.

Otherwise, the context attribute is only used as a condition in
queries related to notifications, but only to filter an otherwise
pre-selected set of notifications and this index will not be relevant.
2026-01-11 00:00:00 +00:00
Oneric
5608f974a3 api: support with_muted in pleroma/conversation/:id/statuses
It does not make sense to check for thread mutes here though.
Even if this thread was muted, it being explicitly fetched
indicates it is desired to be displayed.
While this used to load thread mutes, it didn’t actually apply them
until now, so in that regard it does not change behaviour either and we
can optimise the query by not loading thread mutes at all.

This does change behavior of conversation timelines however
by now omitting posts of muted users by default, aligning it
with all other timelines.
2026-01-11 00:00:00 +00:00
Oneric
f280dfa26f docs/monitoring: note reference dashboard testing 2026-01-11 00:00:00 +00:00
Oneric
0326330d66 telemetry: log which activities failed to be delivered 2026-01-11 00:00:00 +00:00
d35705912f Merge pull request 'webfinger: accept canonical AP type in XML and don’t serve response for remote users' (#1045) from Oneric/akkoma:fix-webfinger-type into develop
Reviewed-on: #1045
2026-01-10 20:23:53 +00:00
Oneric
74fa8f5581 webfinger: don’t serve response for remote users’ AP id
2026-01-10 00:00:00 +00:00
Oneric
967e2d0e71 webfinger: mark represent_user as private 2026-01-10 00:00:00 +00:00
Oneric
ee7e6d87f2 fed/in: accept canonical AP type in XML webfinger data
This was supposed to be already handled for both XML and JSON
with d1f6ecf607 though the code
failed to consider variable scopes and thus was broken and
actually just a noop for XML.

For inexplicable reasons 1a250d65af
just outright removed both the failed attempt to parse the canonical
type in XML documents and also serving of the canonical type in our own
XML (and only XML) response.

With the refactor in 28f7f4c6de
the canonical type was again served in both our own JSON and XML
responses but parsing of the canonical type remained missing
from XML until now.
2026-01-10 00:00:00 +00:00
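The JSON side of accepting the canonical AP type can be sketched as follows (hypothetical helper names; Akkoma’s actual parser also handles the XRD/XML form, which is what the commit above fixes):

```python
# Both the plain and the canonical ActivityStreams media types should be
# accepted for the rel="self" link in a WebFinger (JRD) response.
AP_TYPES = {
    "application/activity+json",
    'application/ld+json; profile="https://www.w3.org/ns/activitystreams"',
}

def ap_id_from_jrd(jrd: dict):
    for link in jrd.get("links", []):
        if link.get("rel") == "self" and link.get("type") in AP_TYPES:
            return link.get("href")
    return None

jrd = {
    "subject": "acct:alice@example.social",
    "links": [
        {
            "rel": "self",
            "type": 'application/ld+json; profile="https://www.w3.org/ns/activitystreams"',
            "href": "https://example.social/users/alice",
        }
    ],
}
assert ap_id_from_jrd(jrd) == "https://example.social/users/alice"
```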
e326285085 Merge pull request 'Various fixes' (#1043) from Oneric/akkoma:varfixes into develop
Reviewed-on: #1043
2026-01-05 14:38:31 +00:00
Oneric
80817ac65e fed/out: also represent emoji as anonymous objects in reactions
It did not use the same Emoji object template as other occurrences.
This also fixes an issue with the icon URL not being properly encoded,
as well as an inconsistency regarding the domain part of remote
reactions in retractions. All places use the image URL domain,
except the query to find the activity to retract relies on the id.
Even before this change, this made it impossible to retract remote
emoji reactions if the remote either doesn't send emoji IDs or
doesn't store images on the ActivityPub domain.

Addresses omission in 4ff5293093

Fixes: #1042
2026-01-05 00:00:00 +00:00
Oneric
5f4083888d api/stream: don’t leak hidden follow* counts in relationship updates
Based on the Pleroma patch linked below, but not needlessly
hiding the count if only the listing of the specific follow* accounts is
hidden while counts are still public.
https://git.pleroma.social/pleroma/pleroma/-/merge_requests/4205

Co-authored-by: nicole mikołajczyk <git@mkljczk.pl>
2026-01-05 00:00:00 +00:00
Oneric
eb08a3fff2 api/pleroma/conversation: accept JSON body to update conversation 2026-01-05 00:00:00 +00:00
Oneric
d6209837b3 api/v1/filters: escape regex sequence in user-provided phrases 2026-01-05 00:00:00 +00:00
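Escaping user-provided phrases before embedding them in a regex prevents metacharacters from changing the filter’s meaning; the equivalent in Python is `re.escape` (a concept sketch, not the Elixir implementation):

```python
import re

phrase = "a+b (test)"  # user input containing regex metacharacters
pattern = re.compile(re.escape(phrase), re.IGNORECASE)

assert pattern.search("note: A+B (TEST) here")
# '+' is matched literally, not treated as a quantifier:
assert not pattern.search("ab test")
```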
Oneric
59b524741d web/activity_pub: drop duplicated restrict_filtered 2026-01-05 00:00:00 +00:00
e941f8c7c1 Merge pull request 'Support Mastodon-compatible translations API' (#1024) from mkljczk/akkoma:akkoma-mastoapi-translations into develop
Reviewed-on: #1024
2026-01-04 16:11:27 +00:00
b147d2b19d Add /api/v1/instance/translation_languages
Signed-off-by: nicole mikołajczyk <git@mkljczk.pl>
2026-01-04 00:00:00 +00:00
d65758d8f7 Deduplicate translations code
Signed-off-by: nicole mikołajczyk <git@mkljczk.pl>
2026-01-04 00:00:00 +00:00
f5ed0e2e66 Inline translation provider names in function
Signed-off-by: nicole mikołajczyk <git@mkljczk.pl>
2026-01-04 00:00:00 +00:00
3b74ab8623 Support Mastodon-compatible translations API
Signed-off-by: nicole mikołajczyk <git@mkljczk.pl>
2026-01-04 00:00:00 +00:00
c971f297a5 Merge pull request 'Deal with elixir 1.19 warnings and test failures' (#1029) from Oneric/akkoma:elixir-1.19-warnings into develop
Reviewed-on: #1029
2025-12-29 00:09:37 +00:00
720b51d08e Merge pull request 'Update ci build scripts for 1.19' (#1038) from ci-builds-otp28 into develop
Reviewed-on: #1038
2025-12-28 21:57:15 +00:00
27b725e382 Update ci/build-all.sh
2025-12-28 21:52:01 +00:00
Oneric
86d62173ff test: fix regex compare for OTP28
This was technically already incorrect before and pointed out as such in
documentation, but in practice worked well until OTP28’s regex changes.
2025-12-28 00:00:00 +00:00
Oneric
cbae0760d0 ci: adjust elixir and OTP versions
2025-12-28 00:00:00 +00:00
Oneric
1fed47d0e0 user/signing_key: fix public_key functions and test
2025-12-25 00:00:00 +00:00
Oneric
712a629d84 Fix test in- and exclusion
We had a few files matching neither inclusion nor exclusion rules.
With elixir 1.19 this creates a warning.
Most were intended to be ignored and thus we now override the default
rules in mix.exs to be explicit about this. However, two test files were
simply misnamed and intended to run, but until now skipped.

Notably, the signing key test for shared key ids currently fails due to
a missing mock. For now this faulty test is simply disabled; the next
commit will fix it.
2025-12-25 00:00:00 +00:00
Oneric
84ad11452e test: fix elixir 1.19 warnings in tests
Except for the struct comparisons, which were a real bug,
the changes are (in current versions) just cosmetic
2025-12-25 00:00:00 +00:00
Oneric
ae17ad49ff utils: comply with elixir 1.19 soft-requirement for parallel compiles
Elixir 1.19 now requires (with a deprecation warning) return_diagnostics
to be set to true for parallel compiles. However, this parameter was only
added in Elixir 1.15, so this raises our base requirement.
Since Elixir 1.14 is EOL now anyway, this should be fine though.
2025-12-25 00:00:00 +00:00
Oneric
e2f9315c07 cosmetic: adjust for elixir 1.19 struct update requirements
When feasible actually enforce the to-be-updated data being the desired struct type.
For ActivityDraft this would add too much clutter and isn't necessary
since all affected functions are private functions we can ensure only
get correct data, thus just use simple map-update syntax here.
2025-12-25 00:00:00 +00:00
Oneric
fdd6bb5f1a mix: define preferred env in cli()
Defining inside project() is deprecated
2025-12-25 00:00:00 +00:00
Oneric
7936c01316 test/config/deprecations: fix warning comparison for elixir 1.19
It can include a terminal-colour sequence before the final newline
2025-12-25 00:00:00 +00:00
Oneric
d92f246c56 web/ap/object_validators/tag: fix hashtag name normalisation
When federating out, we add the leading hashtag back in though
2025-12-25 00:00:00 +00:00
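The normalisation described above, sketched with hypothetical helper names (assuming tags are stored bare and lowercased internally; Akkoma’s actual validator differs):

```python
def normalize_hashtag(name: str) -> str:
    # store the bare tag name internally, without the leading '#'
    return name.lstrip("#").lower()

def federate_hashtag(name: str) -> str:
    # add the leading '#' back when federating out
    return "#" + normalize_hashtag(name)

assert normalize_hashtag("#Akkoma") == "akkoma"
assert federate_hashtag("akkoma") == "#akkoma"
```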
Oneric
8f166ed705 cosmetic: adjust for elixir 1.19 mix format 2025-12-25 00:00:00 +00:00
Oneric
b44292650e web/telemetry: fix HTTP error mapping for Prometheus
Fixes omission in commit 2b4b68eba7
which introduced a more concise response format for HTTP errors.
2025-12-24 00:00:00 +00:00
68c79595fd Merge pull request 'Fix more interactions with invisible posts and corresponding data leaks' (#1036) from Oneric/akkoma:fix-interacting-nonvisible-posts into develop
Reviewed-on: #1036
2025-12-24 02:43:00 +00:00
Oneric
be7ce02295 test/mastodon_api/status: insert mute before testing unmute
Currently the test is still correctly sensitive to the visibility issue
it wants to test, but it is easy to see how this could change in the
future if it starts considering whether a mute existed in the first place.
Inserting a mute first ensures the test will keep working as intended.
2025-12-24 00:00:00 +00:00
Oneric
b50028cf73 changelog: add entries for recent fixes
2025-12-24 00:00:00 +00:00
Oneric
82dd0b290a api/statuses/unfav: do not leak post when access to post was lost
If a user successfully favourited a post in the past (implying they once
had access), but are now no longer allowed to see the (potentially
since edited) post, the request would still process and leak the current
status data in the response.

As a compromise to still allow retracting past favourites (if IDs are
still cached), the unfavouriting operation will still be processed, but
at the end lie to the user and return a "not found" error instead of
a success with forbidden data.

This was originally found by Phantasm and fixed in Pleroma as part of
https://git.pleroma.social/pleroma/pleroma/-/merge_requests/4400
but by completely preventing the favourite retraction.
2025-12-24 00:00:00 +00:00
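The compromise described above can be sketched in Python with hypothetical in-memory stand-ins for the database (not the actual controller logic):

```python
def unfavourite(user: str, status: dict, faves: set):
    # Process the retraction regardless, so stale favourites can be removed
    faves.discard(user)
    # ... but only render the status back if the user may still see it.
    if user not in status["visible_to"]:
        # deliberate "not found" instead of leaking current status data
        return 404, None
    return 200, status
```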
Oneric
981997a621 api/statuses/bookmark: improve response for hidden or bogus targets
Also fixes Bookmark.destroy crashing when called with
parameters not mapping to any existing bookmark.

Partially-based-on: fe7108cbc2
Co-authored-by: Phantasm <phantasm@centrum.cz>
2025-12-24 00:00:00 +00:00
Lain Soykaf
126ac6e0e7 Transmogrifier: Handle user updates.
Cherry-picked-from: 98f300c5ae
2025-12-24 00:00:00 +00:00
Lain Soykaf
3e3baa089b TransmogrifierTest: Add failing test for Update.
Cherry-picked-from: ed538603fb
2025-12-24 00:00:00 +00:00
Oneric
25d27edddb ap/transmogrifier: ensure attempts to update non-updateable data are logged
Often raised errors get logged automatically,
but not always and here it doesn't seem to happen.
I’m not sure what the criteria for it being logged or not are tbh.
2025-12-24 00:00:00 +00:00
Phantasm
300744b577 CommonAPI: Forbid disallowed status (un)muting and unpinning
When a user tried to unpin a status not belonging to them, a full
MastoAPI response was sent back even if the status was not visible to them.

Ditto with (un)muting, except for the ownership aspect.

Cherry-picked-from: 2b76243ec8
2025-12-24 00:00:00 +00:00
Phantasm
ac94214ee6 Transmogrifier: update internal fields list according to constant
Adjusted original patch to drop fields not present in Akkoma

Cherry-picked-from: 3f16965178
2025-12-24 00:00:00 +00:00
Oneric
a2d156aa22 cosmetic/common_api: simplify check_statuses_visibility 2025-12-24 00:00:00 +00:00
Phantasm
31d5f556f0 CommonAPI: Fail when user sends report with posts not visible to them
Cherry-picked-from: 2b76243ec8
2025-12-24 00:00:00 +00:00
83832f4f53 bump version, update changelog
2025-12-07 05:02:26 +00:00
Oneric
3175b5aa25 docs: update for recent deprecations and removals
2025-12-05 00:00:00 +00:00
Oneric
6c1afa42d2 changelog: mention (repeated) deprecation of dm timeline
As discussed on IRC this is already buggy on large instances
and conflicts with future cleanup and optimisation work.

The only remaining users currently appear to be Pleroma-derived web
frontends, namely akkoma-fe, pleroma-fe, pl-fe and Mangane.
We will switch akkoma-fe over to conversations before the actual
removal.
2025-11-28 18:37:59 +01:00
3a04241e3b Merge pull request 'include pl-fe in available frontends' (#1023) from mkljczk/akkoma:pl-fe into develop
Reviewed-on: #1023
2025-11-28 17:04:52 +00:00
353c23c6cd include pl-fe in available frontends
Signed-off-by: nicole mikołajczyk <git@mkljczk.pl>
2025-11-28 09:22:31 +01:00
70877306af Merge pull request 'RFC: handling of third-party frontends in default available FE list' (#945) from Oneric/akkoma:third-party-frontends into develop
Reviewed-on: #945
2025-11-27 21:32:02 +00:00
9abce8876a Merge pull request 'Add mix task to rewrite media domains' (#961) from rewrite-media-urls into develop
Reviewed-on: #961
Reviewed-by: Oneric <oneric@noreply.akkoma>
2025-11-27 20:41:49 +00:00
22d1b08456 Merge pull request 'Tweak users database indexes and drop exclude_visibilities' (#1019) from Oneric/akkoma:db-index-tweaks into develop
Reviewed-on: #1019
2025-11-27 19:43:23 +00:00
Oneric
453ab11fb2 changelog: add missing entries
2025-11-27 00:00:00 +00:00
Oneric
0aeeaeb973 api: tentatively remove exclude_visibility parameter
This was originally added in a97b642289 as
part of https://git.pleroma.social/pleroma/pleroma/-/merge_requests/1818
with the hope/intent of using it to hide DMs by default from timelines.

This never came to be, with this parameter still being unused today in
all of akkoma-fe, pleroma-fe, Husky, pl-fe, Mangane, fedibird-fe and
masto-fe. Given that typically only a small percentage of posts are DMs,
as well as issues with the SQL function and index used to implement this,
a removal likely won't bother anyone while allowing us to clean up
bugged code and improve actually used functions.

While unlikely, we don't have a way to ascertain for sure whether there
are still users, so for now we only drop it from the API spec (which leads
to the parameter being stripped out before processing) and keep the actual
logic. If any users exist and complain we can easily revert this; if not,
we will follow up with a proper cleanup.
2025-11-27 00:00:00 +00:00
Oneric
aac6086ca6 db: convert indexes to partial ones where appropriate
This reduces both the size the indexes take on disk
and the overhead of maintaining them, since they now only
need to be updated for some entries.

For the email index, restriction queries needed to be updated;
the last_active_at index is used only by one query which already
also restricts by "local", and for the remaining restrictions
2025-11-27 00:00:00 +00:00
078c73ee2c Merge pull request 'Redirect /users/:nickname.rss to /users/:nickname/feed.rss instead of .atom' (#1022) from mkljczk/akkoma:akkoma-feed-redirect into develop
Reviewed-on: #1022
Reviewed-by: Oneric <oneric@noreply.akkoma>
2025-11-26 20:56:12 +00:00
8fd8e7153e Redirect /users/:nickname.rss to /users/:nickname/feed.rss instead of .atom
Signed-off-by: nicole mikołajczyk <git@mkljczk.pl>
2025-11-26 16:55:50 +01:00
90e18796cf Merge pull request 'Hide private keys and password hashes from logs by default' (#1021) from Oneric/akkoma:log-sanitisation into develop
Reviewed-on: #1021
2025-11-25 21:13:50 +00:00
Oneric
6ad9f87a93 deps: upgrade http_signatures
The only change is hiding key material from logs
2025-11-25 00:00:00 +00:00
Oneric
6d241a18da Hide the most sensitive user data from logs
Else admins may accidentally leak password (hashes)
or expose their users emails when sharing logs for debugging
2025-11-25 00:00:00 +00:00
Oneric
d81b8a9e14 http/middleware/httpsig: drop opt-in privkey scrubbing
This was replaced by the always-on exclusion from the inspect protocol
implemented in the previous commit, which is far more robust.

This partially reverts commit 2b4b68eba7;
the more detailed response for generic HTTP errors is retained.
2025-11-25 00:00:00 +00:00
Oneric
5d8cdd6416 Always hide key material of user signing keys
This is far more robust than trying to manually scrub this in all affected paths
2025-11-25 00:00:00 +00:00
f5d78b374c Merge pull request 'Fix ActivityPub fetch sanitisation' (#1018) from Oneric/akkoma:fix-fetch-serve-sanitisation into develop
Reviewed-on: #1018
2025-11-24 00:35:24 +00:00
Oneric
272799da62 test: add more representation tests for prepare_outgoing
In particular this covers the case
e88f36f72b was meant to fix and
the case from #1017
2025-11-24 00:00:00 +00:00
Oneric
85171750f1 fed/fetch: use same sanitisation logic as when delivering to inboxes
Duped code just means double the chance to mess up. This would have
prevented the leak of confidential info more minimally fixed in
6a8b8a14999f3ed82fdaedf6a53f9a391280df2f and now furthermore
fixes the representation of Update activities which _need_ to have their
object inlined, as well as better interop for follow Accept and Reject
activities and all other special cases already handled in Transmogrifier.
It also means we get more thorough tests for free.

This also already adds JSON-LD context and does not add bogus Note-only
fields as happened before due to this view's misuse of prepare_object
for activities. The doc of prepare_object clearly states it is only
intended for creatable objects, i.e. (for us) Notes and Questions.
2025-11-24 00:00:00 +00:00
Oneric
eaad13e422 fed/out: ensure we never serve Updates for objects we deem static 2025-11-24 00:00:00 +00:00
Oneric
6ef13aef47 Add voters key to internal object fields
It is inlined and used to keep track of who already voted for a poll.
This is expected to be confidential information and must not be exposed
2025-11-24 00:00:00 +00:00
Oneric
ef029d23db fed/fetch: don't serve unsanitised object data for some activities
When the object associated with the activity was preloaded
(which happens automatically with Activity.normalize used in the
 controller) Object.normalize’s "id_only" option did not actually work.
This option and its usage were introduced to fix display of Undo
activities in e88f36f72b.
For "Undo"s (and "Delete"s) there is no object preloaded
(since it is already gone from the database) thus this appeared
to work and for the particular case considered there in fact did.
Create activities use different rendering logic and thus remained
unaffected too.

However, for all other types of Activities (yes, including Update
which really _should_ include a properly sanitised, full object)
this new attempt at including "just the id" led to it instead
including the full, unsanitised data of the referenced object.

This is obviously bad and can get worse due to access restrictions
on the activity being solely performed based on the addressing
of the activity itself, not of the (unintentionally) embedded
object.

Starting with the obvious, this leaks all "internal" fields
but as already mentioned in 8243fc0ef4
all current "internal" fields from Constants.object_internal_fields
are already publicised via MastoAPI etc anyway. Assuming matching
addressing of the referenced object and activity this isn't problematic
with regard to confidentiality.
Except, the internal "voters" field recording who voted for a poll
is currently just omitted from Constants.object_internal_fields
and indeed confidential information (fix in subsequent commit).
Fortunately this list is for the poll as a whole and there are no
inlined lists for individual choices. While this thus leaks _who_
voted for a poll, it at least doesn't directly expose _what_ each voter
chose if there are multiple voters.

As alluded to before, the access restriction not taking the misplaced
object data into account makes the issue worse.
If the activity addressing is not a subset of the referenced object’s
addressing, this will leak private objects to unauthorised users.
This begs the question whether such mismatched addressing can occur.
For remote activities the answer is of course a resounding YES,
but we only serve local ActivityPub objects and for the latter
it currently(!) seems like a "no".
For all intended interactions, the user interacting must already have
access to the object of interest and our ActivityPub Builder
already uses a subset of the original posts addressing for
posts not publicly accessible. This addressing creation logic
was last touched six years ago predating the introduction of this
exposure blunder.
The rather big caveat here being: until it was fixed just yesterday in
dff532ac72 it was indeed possible to
interact with posts one is not allowed to actually see. Combined, this
allowed unauthorised access to private posts. (The API ID of such
private posts can be obtained e.g. from replies one _is_ allowed to see)

During the time when ActivityPub C2S was supported there might have been
more ways to create activities with mismatched addressing and sneak a
peek on private posts. (The AP id can be obtained in an analogous way)

Replaces and fixes e88f36f72b.
Since there never were any users of the
bugged "id_only" option it is removed.

This was reported by silverpill <silverpill@firemail.cc> as an
ActivityPub interop issue, since this blunder of course also
leads to invalid AP documents by adding an additional layer
in form of the "data" key and directly exposing the internal
Pleroma representation which is not always identical to valid AP.

Fixes: #1017
2025-11-24 00:00:00 +00:00
8fd2fb995f Merge pull request 'test: raise default assert_receive timeout' (#1016) from Oneric/akkoma:tmp into develop
All checks were successful
ci/woodpecker/push/docs Pipeline was successful
ci/woodpecker/push/publish/4 Pipeline was successful
ci/woodpecker/push/publish/1 Pipeline was successful
ci/woodpecker/push/publish/2 Pipeline was successful
Reviewed-on: #1016
2025-11-23 15:34:02 +00:00
a5683966a8 Merge pull request 'Add banner.png back' (#1015) from mkljczk/akkoma:banner-png into develop
All checks were successful
ci/woodpecker/push/docs Pipeline was successful
ci/woodpecker/push/publish/4 Pipeline was successful
ci/woodpecker/push/publish/1 Pipeline was successful
ci/woodpecker/push/publish/2 Pipeline was successful
Reviewed-on: #1015
Reviewed-by: Oneric <oneric@noreply.akkoma>
2025-11-23 12:18:21 +00:00
Henry Jameson
798beec97b config: add pleroma’s pleroma-fe to frontends available by default
All checks were successful
ci/woodpecker/pr/test/2 Pipeline was successful
ci/woodpecker/pr/test/1 Pipeline was successful
2025-11-23 00:00:00 +00:00
Oneric
e6ce7751a9 test: raise default assert_receive timeout
All checks were successful
ci/woodpecker/pr/test/2 Pipeline was successful
ci/woodpecker/pr/test/1 Pipeline was successful
In our CI we sometimes fail assert_receive statements even on retry,
making CI results somewhat unreliable. The default is 100ms, now it is 1s.

See e.g: https://ci.akkoma.dev/repos/1/pipeline/504/5
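The bump can be expressed as a one-liner in the test helper; a minimal sketch assuming ExUnit’s standard `:assert_receive_timeout` option:

```elixir
# test/test_helper.exs — raise the default assert_receive/assert_received
# timeout from ExUnit's 100 ms default to 1 s
ExUnit.start(assert_receive_timeout: 1_000)
```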
2025-11-23 00:00:00 +00:00
0b049c3621 Add banner.png back
Some checks failed
ci/woodpecker/pr/test/2 Pipeline was successful
ci/woodpecker/pr/test/1 Pipeline failed
It was mistakenly removed as part of the bundled frontend in
82fa766ed7 but actually it
is used as the default value for user profile banners.

Mastodon API typespecs define the relevant field as non-nullable in
585545d0d5/app/javascript/mastodon/api_types/accounts.ts (L30)
and clients do in fact rely on it being non-null, meaning we can't just
omit the value if a user didn't set up a bespoke banner.

Signed-off-by: nicole mikołajczyk <git@mkljczk.pl>
2025-11-23 00:00:00 +00:00
Oneric
7d480c67d1 frontend: print warning for third-party frontends
For future use. Mark all current frontends as first-party.
The key is named 'blind_trust' instead of 'first_party' to (hopefully)
increase the chances someone notices if some random, malicious frontend
adds the key into their install instructions
2025-11-23 00:00:00 +00:00
Oneric
8352c1f49d frontend: mark functions without external users as private 2025-11-23 00:00:00 +00:00
c7d75ca0d3 Merge pull request 'api: ensure only visible posts are interactable' (#1014) from Oneric/akkoma:can-you-see-it-too into develop
All checks were successful
ci/woodpecker/push/docs Pipeline was successful
ci/woodpecker/push/publish/4 Pipeline was successful
ci/woodpecker/push/publish/1 Pipeline was successful
ci/woodpecker/push/publish/2 Pipeline was successful
Reviewed-on: #1014
2025-11-22 22:47:12 +00:00
e3dd94813c Merge pull request 'Fix mentioning complex usernames' (#1012) from Oneric/akkoma:fix-nonalphanum-mentions into develop
All checks were successful
ci/woodpecker/push/docs Pipeline was successful
ci/woodpecker/push/publish/4 Pipeline was successful
ci/woodpecker/push/publish/1 Pipeline was successful
ci/woodpecker/push/publish/2 Pipeline was successful
Reviewed-on: #1012
2025-11-22 21:21:51 +00:00
bc68761b1d add doc, change IO to shell_info
All checks were successful
ci/woodpecker/pr/test/2 Pipeline was successful
ci/woodpecker/pr/test/1 Pipeline was successful
2025-11-22 17:42:42 +00:00
c144bc118d Merge branch 'develop' into rewrite-media-urls 2025-11-22 17:31:08 +00:00
b9e70c29ef Merge pull request 'Adjust rss/atom PR' (#1007) from akkoma-hashtag-feeds-restricted into develop
All checks were successful
ci/woodpecker/push/docs Pipeline was successful
ci/woodpecker/push/publish/4 Pipeline was successful
ci/woodpecker/push/publish/1 Pipeline was successful
ci/woodpecker/push/publish/2 Pipeline was successful
Reviewed-on: #1007
2025-11-22 17:24:03 +00:00
Oneric
dff532ac72 api: ensure only visible posts are interactable
All checks were successful
ci/woodpecker/pr/test/2 Pipeline was successful
ci/woodpecker/pr/test/1 Pipeline was successful
It doesn't make sense to like, react, reply, etc. to something you cannot
see; it is unexpected for the author of the interacted-with post and
might make them believe the reacting user actually _can_ see the post.

With regard to fav, reblog and reaction indexes, the missing visibility check was
also leaking some (presumably/hopefully) low-severity data.

Add full-API test for all modes of interactions with private posts.
2025-11-22 00:00:00 +00:00
Oneric
810e3b1201 Fix mentioning complex usernames
All checks were successful
ci/woodpecker/pr/test/2 Pipeline was successful
ci/woodpecker/pr/test/1 Pipeline was successful
The updated linkify is much more liberal about usernames in mentions.

Fixes: #1011
2025-11-19 00:00:00 +00:00
5e4475d61e Merge pull request 'Purge broken, unused and/or useless features' (#1008) from Oneric/akkoma:purge-broken into develop
All checks were successful
ci/woodpecker/push/docs Pipeline was successful
ci/woodpecker/push/publish/4 Pipeline was successful
ci/woodpecker/push/publish/1 Pipeline was successful
ci/woodpecker/push/publish/2 Pipeline was successful
Reviewed-on: #1008
2025-11-18 22:08:04 +00:00
d96c6f438d Merge pull request 'Fix generic type and alt text for incoming federated attachments' (#1010) from Oneric/akkoma:fedin-attachment-fixes into develop
All checks were successful
ci/woodpecker/push/docs Pipeline was successful
ci/woodpecker/push/publish/4 Pipeline was successful
ci/woodpecker/push/publish/1 Pipeline was successful
ci/woodpecker/push/publish/2 Pipeline was successful
Reviewed-on: #1010
2025-11-17 16:29:42 +00:00
Oneric
9d19dbab99 app: probe if any users of thread containment exist
All checks were successful
ci/woodpecker/pr/test/2 Pipeline was successful
ci/woodpecker/pr/test/1 Pipeline was successful
If "thread containment" isn’t skipped the API will only return posts
whose entire ancestor chain is also visible to the current user.
For example if this containment is active and the current user
follows A but not B, and A replies to a followers-only post of B,
the current user will not be able to see A’s post even though
per ActivityPub semantics (and the behaviour of other implementations)
the current user was addressed and has read permission for A’s post.
(Though these semantics frequently surprise some users.)

There is a user-level option to control whether or not to perform
this kind of filtering and a server-wide config option under :instance.
If this containment is _disabled_ (i.e. skip_thread_containment: true)
server-wide, user-level options are ignored and filtering never takes
place.

This is checked via the database function "thread_visibility" which
recursively calls itself on the ancestor of the currently inspected post
and for each post performs additional queries to check the follow
relationship between author and current user.
While this implementation introduced in
https://git.pleroma.social/pleroma/pleroma/-/merge_requests/971
performs better than the previous elixir-side iteration due to
fewer elixir-database roundtrips, the perf impact is still ridiculously
large, and when fetching an entire conversation / context at once there
are many redundant checks and queries.
Due to this an option to dis/enable the "containment" was added and
promptly defaulted to disabling "containment" in
593b8b1e6a six years ago.
This default remained unchanged since and the implementation wasn’t
overhauled for improved performance either. Essentially this means
the feature has already been entirely disabled out-of-the-box
without too much discoverability for the last six years. It is thus
not too unlikely that no actual users of it exist today.

The user-level setting also didn’t make its way into any known clients.
Surveying current versions of akkoma-fe, husky, pleroma-fe, pl-fe,
Mangane and just to be sure also admin-fe, fedibird-fe and masto-fe
none of them appears to expose a control for the user-level setting.
pl-fe’s pl-api acknowledges the existence of the setting in the API
definition but the parameter appears unused in any actual logic.
Similarly Mangane and pl-fe have a few matches for
"skip_thread_visibility" in test samples of user setting responses
but again no actual uses in active code.

While the idea behind the feature is sensible (though care must be taken
to not mislead users into believing _all_ software would apply the same
restrictions!), the current implementation is very much not sensible.
With the added code complexity and apparent lack of users it is unclear
whether keeping the feature around and attempting to overhaul the
implementation is even worth it.
Thus start printing a big fat warning for any potentially existing users
prompting for feedback. If there are no responses two releases from now
on it will presumably be safe to just entirely drop it.
2025-11-17 00:00:00 +00:00
Oneric
ffeb70f787 Drop counter_cache stubbing out /api/v1/pleroma/admin/stats
It only served for a niche, admin nice-to-have informational stat
without too much value but was unreasonably costly to maintain
adding overhead with multiple queries added to all modifications
to the fairly busy activities table.

The database schema of the counter table and the activity_visibility function
used for counter updates also did not know about "local" visibility (nor the
recently removed "list" visibility) and misattributed them to the "direct" counter.

On my small instance this nearly halved the average
insert time for activities from 0.926 ms to 0.465 ms.
2025-11-17 00:00:00 +00:00
Oneric
865cfabf88 Drop conversation addressing
No known client ever used this. Currently among akkoma-fe, pleroma-fe,
Husky, Mangane and pl-fe only the latter acknowledges the existence of
the in_reply_to_conversation_id parameter in its API definitions,
but even pl-fe never actually uses the parameter anywhere.

Since the API parameter already was converted to DMs internally,
we do not need to make special considerations for already existing
old conversation-addressed posts. Since they share the context they
should also continue to show up in the intended thread anyway.

The pleroma.participants subkey mentioned in the docs already did not
exist prior to this commit. Instead the accounts key doesn’t automatically update
and this affects conversations retrieved from the Mastodon API endpoint too
(which may be considered a pre-existing bug).

If desired clients can already avoid unintended participant additions
by using the explicit-addressing feature originally introduced in
https://git.pleroma.social/pleroma/pleroma/-/merge_requests/1239.
With the above-mentioned feature/bug of conversation participants
not updating automatically it can replace almost everything
conversation addressing was able to do. The sole exception being
creating new non-reply posts in the same context.
Neither conversation addressing nor explicit addressing
achieves robust, federated group chats though.

Resolves: #812
2025-11-17 00:00:00 +00:00
Oneric
271110c1a5 Drop broken list addressing feature
This feature was both conceptually broken and through bitrotting
the implementation was also buggy with the handling of certain
list-post interactions just crashing.

Remote servers had no way to know who belongs to a list and thus
posts basically showed just up as weird DM threads with different
participants on each instance. And while on the local instance
addition to and removal from a list granted and revoked post
access retroactively, it never acted retroactively on remotes.

Notably our "activity_visibility" database function also didn’t
know about "list visibility" instead treating them as direct messages.

Furthermore no known client actually allows creating such messages
and the lack of complaints about the accumulated bugs supports
the absence of any users.

Given this there seems no point in fixing the implementation.
To reduce complexity of visibility handling it will be dropped instead.
Note, a similar effect with less federation weirdness can already be achieved
client-side using the explicit-addressing feature originally introduced in
https://git.pleroma.social/pleroma/pleroma/-/merge_requests/1239.

Ref: #812
2025-11-17 00:00:00 +00:00
Oneric
180a6ba962 api/views/status: prefer 'summary' for attachment alt texts
All checks were successful
ci/woodpecker/pr/test/2 Pipeline was successful
ci/woodpecker/pr/test/1 Pipeline was successful
GtS started exclusively using it and it already worked with Mastodon.
See: https://codeberg.org/superseriousbusiness/gotosocial/issues/4524

Since we used to (implicitly) strip the summary field
this will not take effect retroactively.
2025-11-16 00:00:00 +00:00
Oneric
9bbebab1a2 api/views/status: fallback to generic type when deducing attachment type
While *oma, *key, GtS and even Mastodon federate a full media type for attachments,
posts from Bridgy only contain a generic type and the URLs also appear to never end
with a file extension. This led to our old type detection always classifying them
as "unknown" and it showing up like a generic document attachment in frontends.
We can (for vanilla Masto API clients) avoid this by falling back to the
federated generic type.
(Note: all other software mentioned at the start appears to always use "Document"
for the generic type of attachments regardless of the also federated actual full type)

For clients relying on the full mime type provided by an *oma extension,
like currently akkoma-fe, this in itself does not fix the display
but it is a necessary prerequisite to handling this more gracefully.
2025-11-16 00:00:00 +00:00
Oneric
33dbca5b3a api/views/status: fallback to binary MIME type instead of invalid 'image' type
A MIME type MUST always contain both the type and subtype part.
Also, we already add this binary type for new incoming attachments
without a pre-existing MIME type entry anyway.
2025-11-16 00:00:00 +00:00
c2e9af76a5 Merge branch 'develop' into akkoma-hashtag-feeds-restricted
All checks were successful
ci/woodpecker/pr/test/2 Pipeline was successful
ci/woodpecker/pr/test/1 Pipeline was successful
2025-11-13 11:35:16 +00:00
703db7eaef use verified paths
Some checks are pending
ci/woodpecker/pr/test/1 Pipeline is pending approval
ci/woodpecker/pr/test/2 Pipeline is pending approval
2025-11-13 11:34:18 +00:00
0140643761 Merge pull request 'reverse_proxy: don't rely on header for body size' (#989) from Oneric/akkoma:revproxy-content-size into develop
All checks were successful
ci/woodpecker/push/publish/4 Pipeline was successful
ci/woodpecker/push/docs Pipeline was successful
ci/woodpecker/push/publish/1 Pipeline was successful
ci/woodpecker/push/publish/2 Pipeline was successful
Reviewed-on: #989
2025-11-13 10:44:25 +00:00
d4a86697d9 Merge pull request 'upgrade all deps' (#1002) from Oneric/akkoma:dep-update into develop
All checks were successful
ci/woodpecker/push/docs Pipeline was successful
ci/woodpecker/push/publish/4 Pipeline was successful
ci/woodpecker/push/publish/1 Pipeline was successful
ci/woodpecker/push/publish/2 Pipeline was successful
Reviewed-on: #1002
2025-11-09 14:38:07 +00:00
Oneric
6b0d4296d9 ci/publish: actually select arm64 releaser image
Some checks failed
ci/woodpecker/pr/test/1 Pipeline is pending
ci/woodpecker/pr/test/2 Pipeline failed
ci/woodpecker/push/docs Pipeline was successful
ci/woodpecker/push/publish/4 Pipeline was successful
ci/woodpecker/push/publish/1 Pipeline was successful
ci/woodpecker/push/publish/2 Pipeline was successful
2025-11-09 00:00:00 +00:00
Oneric
6f971f10cf test/meilisearch: maybe fix flakyness
All checks were successful
ci/woodpecker/pr/test/2 Pipeline was successful
ci/woodpecker/pr/test/1 Pipeline was successful
CI sometimes failed due to Mock already being started,
though I couldn’t reproduce it locally. Using non-global
mocks hopefully avoids this.
2025-11-09 00:00:00 +00:00
Oneric
1bff36b990 ci/publish: fix syntax 2025-11-09 00:00:00 +00:00
Oneric
7a5c28a12a mix/deps: upgrade phoenix_ecto 2025-11-09 00:00:00 +00:00
Oneric
631f0e817b Revert "ci: allow docs to build on all runners"
Some checks failed
ci/woodpecker/pr/test/2 Pipeline failed
ci/woodpecker/push/publish/4 Pipeline failed
ci/woodpecker/pr/test/1 Pipeline was successful
ci/woodpecker/push/docs Pipeline was successful
ci/woodpecker/push/publish/1 Pipeline failed
ci/woodpecker/push/publish/2 Pipeline failed
This reverts commit 9f05b19b6b.
We download an amd64 build of the scaleway CLI.
2025-11-09 00:00:00 +00:00
949d641715 Merge pull request 'ci: try to parallelise test jobs' (#1003) from Oneric/akkoma:ci-parallel-testing into develop
Some checks failed
ci/woodpecker/push/publish/2 Pipeline is pending
ci/woodpecker/push/docs Pipeline failed
ci/woodpecker/push/publish/4 Pipeline failed
ci/woodpecker/push/publish/1 Pipeline failed
Reviewed-on: #1003
2025-11-09 13:09:31 +00:00
Oneric
9b99a7f902 cachex: reduce default user and object cache lifetime
This caches query results from our own database, but does not respect
transaction boundaries or syncs to transaction success/failure. Long
lifetimes increase the chance of such desync occuring and being written
back to the database, see: #956

Until 1d02a9a35d recently fixed it
this used a 3 second lifetime anyway, so this won’t result in
performance degradations but hopefully prevents a rise in desyncs.
2025-11-09 00:00:00 +00:00
Oneric
9f05b19b6b ci: allow docs to build on all runners
All checks were successful
ci/woodpecker/pr/test/2 Pipeline was successful
ci/woodpecker/pr/test/1 Pipeline was successful
The docker image already offers variants for all our runner arches
2025-11-09 00:00:00 +00:00
Oneric
74cfb2e0db cosmetic/app: use appropriate timer function instead of scaling up lower res units 2025-11-09 00:00:00 +00:00
Oneric
dcd664ffbc ci: dedupe build+release job definitions
If releaser images provide amd64 and arm64 in the same tag
this can be slightly simplified further.
2025-11-09 00:00:00 +00:00
Oneric
534124cae2 mix/deps: upgrade cachex to 4.x 2025-11-09 00:00:00 +00:00
Oneric
c3195b2011 ci: move one test job to arm64 to allow parallel execution 2025-11-09 00:00:00 +00:00
Oneric
d1050dab76 Fix cachex TTL options
For some reason most caches were never updated to
the new setting format introduced in the 3.0 release.
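For illustration, the shape of the change (cache name and timings here are hypothetical), assuming Cachex’s documented `Cachex.Spec.expiration/1` record which replaced the pre-3.0 `:default_ttl`/`:ttl_interval` options:

```elixir
import Cachex.Spec

# pre-3.0 style, no longer honoured by newer Cachex:
#   Cachex.start_link(:user_cache, default_ttl: :timer.minutes(5))
# 3.0+ style:
Cachex.start_link(:user_cache,
  expiration: expiration(
    default: :timer.minutes(5),   # per-entry TTL
    interval: :timer.seconds(60)  # how often expired entries are purged
  )
)
```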
2025-11-09 00:00:00 +00:00
Oneric
dc95f95738 mix/deps: upgrade timex
Unlocked by the gettext upgrade
2025-11-09 00:00:00 +00:00
Oneric
086d0a052b mix/deps: upgrade to new gettext API
This supposedly improves compile times
and also unlocks a minor timex upgrade.
1.0.0 was released without API changes after the 0.26 series.
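A sketch of what the new-API migration looks like (module names are assumptions, not taken from the actual diff):

```elixir
# Backend definition stays in one module:
defmodule Pleroma.Web.Gettext do
  use Gettext.Backend, otp_app: :pleroma
end

# Call sites no longer import the backend directly; they declare it:
defmodule Pleroma.Web.SomeView do
  use Gettext, backend: Pleroma.Web.Gettext

  def greeting, do: gettext("Hello")
end
```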
2025-11-09 00:00:00 +00:00
Oneric
54fd8379ad web/gettext: split Gettext Backend and additional utility functions
This avoids importing the heavy Gettext backend in some places
and makes it clearer what’s actually used in the project and what’s
used by the Gettext library.
With the to-be-pulled-in Gettext API change this split
will be even more helpful for code clarity.

As a bonus documentation is improved and
the unused locale_or_default function removed.
2025-11-09 00:00:00 +00:00
Oneric
b344c80ad2 cosmetic/app: dedupe always added task children
As a side effect this gets rid of a compiler warning
about the prior task_children(:test) function clause
being dead code in non-test builds
2025-11-09 00:00:00 +00:00
Oneric
264202c7b3 mix/deps: upgrade dialyxir patch version 2025-11-09 00:00:00 +00:00
Oneric
4e321f4f47 mix/deps: upgrade elixir_argon2 to 4.x series
Only the default parameters changed from 3.x to 4.x.
It now matches the proposed defaults suggested in the RFC
for constrained environments.
The prior defaults have worse latency, but probably slightly better
security while spawning fewer threads, so let’s stick with them.

By setting the defaults in the config instance owners
can (continue to) tweak these for the specific setup.
2025-11-09 00:00:00 +00:00
Oneric
be8846bd89 mix/deps: upgrade telemetry_metrics to 1.x series
The 0.x series was promoted to 1.x without API changes
to indicate stability of the interface.
2025-11-09 00:00:00 +00:00
Oneric
b1397e1670 mix/deps: force minor updates not happening automatically
earmark is intentionally excluded due to a bug in current releases
starting with 1.4.47 affecting our tests. Admittedly it was also bugged
before but in a different way.

See: https://github.com/pragdave/earmark/issues/361#issuecomment-3494355774
2025-11-09 00:00:00 +00:00
Oneric
0afe1ab4b0 Resolve Phoenix 1.8 deprecations 2025-11-09 00:00:00 +00:00
Oneric
57dc812b70 mix/deps: upgrade phoenix family 2025-11-09 00:00:00 +00:00
Oneric
e91d7c3291 mix/deps: follow branches of git repos
To allow easy updates later on.
The only actual new changes pulled in in this commit are
build tweaks to pleroma’s captcha library
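The mix.exs shape of such a dependency, sketched with a hypothetical entry:

```elixir
# mix.exs — tracking a branch instead of a pinned ref, so a plain
# dependency update pulls in new upstream commits
defp deps do
  [
    {:captcha,
     git: "https://git.pleroma.social/pleroma/elixir-libraries/elixir-captcha.git",
     branch: "master"}
  ]
end
```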
2025-11-09 00:00:00 +00:00
Oneric
bf5e2f205e mix/deps: allow ueberauth updates again
e2f749b5b0 pinned the version to avoid
a particular broken release, but the issue has since been fixed.
See: https://github.com/ueberauth/ueberauth/issues/194
2025-11-09 00:00:00 +00:00
Oneric
53f866b583 mix/deps: upgrade everything to compatible newer versions
Or at least they were supposed to be compatible.
Due to an ecto optimisation change it is now illegal to use
replace_all_except if the union of conflict_targets and the fields
exempted from updating is equal to _all_ fields of this table.
See: https://github.com/elixir-ecto/ecto/issues/4633
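To illustrate the new restriction (schema and field names here are hypothetical):

```elixir
# Fine: :value is neither a conflict target nor excluded,
# so the upsert still has something to update.
Repo.insert(%Counter{name: "visits", value: 1},
  on_conflict: {:replace_all_except, [:name]},
  conflict_target: [:name]
)

# Rejected by newer ecto: if the conflict targets plus the excluded
# fields cover every column, the :replace_all_except upsert would
# update nothing and now raises instead of being silently accepted.
```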
2025-11-09 00:00:00 +00:00
467e75e3b1 Merge pull request 'api: prefer duration param for mute expiration' (#1004) from Oneric/akkoma:mute-expiry-duration into develop
All checks were successful
ci/woodpecker/push/build-arm64 Pipeline was successful
ci/woodpecker/push/build-amd64 Pipeline was successful
ci/woodpecker/push/docs Pipeline was successful
Reviewed-on: #1004
2025-11-08 13:39:55 +00:00
8532f789ac api: prefer duration param for mute expiration
Some checks failed
ci/woodpecker/pr/test/1 Pipeline failed
ci/woodpecker/pr/test/2 Pipeline was successful
Mastodon 3.3 added support for temporary mutes but uses "duration"
instead of our older "expires_in". Even Husky only sets "duration"
nowadays.

Signed-off-by: marcin mikołajczak <git@mkljczk.pl>

Cherry-picked-from: 5d3d6a58f7
2025-11-08 00:00:00 +00:00
679c4e1403 Merge pull request 'api: return error when replying to already deleted post' (#1001) from Oneric/akkoma:replying-to-a-ghost into develop
All checks were successful
ci/woodpecker/push/build-arm64 Pipeline was successful
ci/woodpecker/push/build-amd64 Pipeline was successful
ci/woodpecker/push/docs Pipeline was successful
Reviewed-on: #1001
2025-11-06 16:46:57 +00:00
Oneric
d635a39141 api: return error when replying to already deleted post
All checks were successful
ci/woodpecker/pr/test/1 Pipeline was successful
ci/woodpecker/pr/test/2 Pipeline was successful
Of course the parent post might still be deleted after the reply was
already created, but in this case the reply will still show up as a
reply and be federated as a reply with a reference to the parent post.
If the parent was already deleted before the reply gets created however
it used to be indistinguishable from a root post both in Masto API and
ActivityPub.

From a UX perspective, users likely will like to know if the post
they’re replying to no longer exists by the time they finished writing.
The natural language error will show up in akkoma-fe without clearing
the post form, meaning users can decide to discard the reply or copy it
to post as a new root post. It seems sensible for other clients to
behave like this too, but so far no other clients were actually tested.

Furthermore, this used to allow replying to all sorts of activities not
just posts which was rather non-sensical (and after all processsing
steps turned into a reply to the object referenced by the activity).
In particular this allowed replying to an user object by specifying the
db ID of a follow request activity (if the latter was somehow obtained).

Note: empty-string in_reply_to parameters are explicitly ignored since
45ebc8dd9a to workaround one buggy client;
see: https://git.pleroma.social/pleroma/pleroma/-/issues/355.
It’s not clear if this workaround is still necessary,
but it is preserved by this commit.

Resolves: #522
2025-11-06 15:58:40 +01:00
8da0828b4a Merge pull request 'reload emoji asynchronously and optimise emoji updates' (#998) from Oneric/akkoma:async-emoji-reload into develop
All checks were successful
ci/woodpecker/push/build-arm64 Pipeline was successful
ci/woodpecker/push/build-amd64 Pipeline was successful
ci/woodpecker/push/docs Pipeline was successful
Reviewed-on: #998
2025-11-06 14:55:56 +00:00
ccde26725f Merge pull request 'api_spec/cast: iteratively retry to clean all offending parameters' (#995) from Oneric/akkoma:apispec-cast-multitolerance into develop
Some checks failed
ci/woodpecker/push/build-amd64 Pipeline failed
ci/woodpecker/push/docs unknown status
ci/woodpecker/push/build-arm64 Pipeline failed
Reviewed-on: #995
2025-11-06 14:55:38 +00:00
Oneric
7e6efb2356 api_spec/cast: iteratively retry to clean all offending parameters
All checks were successful
ci/woodpecker/pr/test/1 Pipeline was successful
ci/woodpecker/pr/test/2 Pipeline was successful
While the function signature allows returning many errors at once,
OpenApiSpex.cast_and_validate currently only ever returns the first
invalid field it encounters. Thus we need to retry multiple times to
clean up all offenders.

Fixes: #992 (comment)
2025-11-05 00:00:00 +00:00
Oneric
bd6dda2cd0 docs: fix multi-paragraph list items
All checks were successful
ci/woodpecker/push/build-arm64 Pipeline was successful
ci/woodpecker/pr/test/1 Pipeline was successful
ci/woodpecker/pr/test/2 Pipeline was successful
ci/woodpecker/push/build-amd64 Pipeline was successful
ci/woodpecker/push/docs Pipeline was successful
Markdown requires an indentation of 4 for a following paragraph to
continue a list item. Previously, the continuing paragraphs were only
indented by 2 spaces, leading to the list being interrupted and
numbering restarted each time.
2025-11-02 00:00:00 +00:00
Oneric
e7d76bb194 emoji: reload asynchronously
All checks were successful
ci/woodpecker/pr/test/1 Pipeline was successful
ci/woodpecker/pr/test/2 Pipeline was successful
No caller of `reload` actually uses the result in any way
so there’s no need to wait for a response and risk running
into a timeout (by default 5 seconds).

Discovered-by: sn0w <me@sn0w.re>
Based-on: 1fb54d5c2c
2025-10-30 00:00:00 +00:00
Oneric
318ee6ee17 emoji: avoid full reloads when possible
Reloading the entire emoji set from disk, reparsing all pack JSON files,
etc is unnecessarily costly for large emoji sets. We already know which
single or few changes we want to apply, so do just that instead.
2025-10-29 00:00:00 +00:00
Oneric
f86a88ca19 emoji: store in unordered set
No caller cares about the order
(and, although rare, with concurrent reads happening at the same time
as a write the table might return unordered results anyway).
Unordered sets have constant read time,
ordered sets logarithmic time, but there’s no benefit for us.
2025-10-29 00:00:00 +00:00
Oneric
d38ca268c4 cosmetic/emoji: fix misleading docs and var names 2025-10-29 00:00:00 +00:00
Oneric
0cb2807667 emoji/pack: fix newly created pack having nil name
At the next reload the name was already set to the directory name,
but issues arose if the created pack was used directly before that.
2025-10-29 00:00:00 +00:00
862fba5ac5 Merge pull request 'Do not try to redirect to post display URLs for non-Create activities' (#997) from Oneric/akkoma:fix-non-create-html-redirect into develop
All checks were successful
ci/woodpecker/push/build-arm64 Pipeline was successful
ci/woodpecker/push/build-amd64 Pipeline was successful
ci/woodpecker/push/docs Pipeline was successful
Reviewed-on: #997
2025-10-26 12:45:22 +00:00
Oneric
47ac4ee817 Do not try to redirect to post display URLs for non-Create activities
All checks were successful
ci/woodpecker/pr/test/1 Pipeline was successful
ci/woodpecker/pr/test/2 Pipeline was successful
Display will fail for all but Create and Announce anyway since
0c9bb0594a. We exclude Announce
activities from redirects here since they are not identical
with the announced post and akkoma-fe stripping the repeat header
on the /notice/ page might lead to confusion about which is which.

In particular those redirects existing breaks the assumptions from
the above commit’s commit message and made it possible to obtain
database IDs for activities other than one’s own likes allowing
slightly more mischief with the rendering bug it fixed.

Note: while 0c9bb0594a speculated about
public likes also leaking IDs to other users, the public like endpoint
is actually paginated by post id/date not like id/date like the private
endpoint. Thus it does not allow getting database IDs of others’ likes.
2025-10-26 00:00:00 +00:00
9d12c7c00c Merge pull request 'Preserve mastodon-style quote-fallback marker' (#977) from mastodon-quotes into develop
All checks were successful
ci/woodpecker/push/build-arm64 Pipeline was successful
ci/woodpecker/push/build-amd64 Pipeline was successful
ci/woodpecker/push/docs Pipeline was successful
Reviewed-on: #977
Reviewed-by: Oneric <oneric@noreply.akkoma>
2025-10-24 21:51:49 +00:00
Oneric
b7107a9e33 docs: adjust for include_types deprecation
Some checks failed
ci/woodpecker/push/build-arm64 Pipeline was successful
ci/woodpecker/pr/test/1 Pipeline was successful
ci/woodpecker/pr/test/2 Pipeline failed
ci/woodpecker/push/build-amd64 Pipeline was successful
ci/woodpecker/push/docs Pipeline was successful
And replace an overlooked usage in tests.
Fixes omissions in b3a0833d30
2025-10-24 00:00:00 +00:00
8857c98eaf Merge pull request 'Use types for filtering notifications' (#993) from mkljczk/akkoma:akkoma-notification-types into develop
Some checks failed
ci/woodpecker/push/build-amd64 Pipeline is pending
ci/woodpecker/push/build-arm64 Pipeline failed
ci/woodpecker/push/docs unknown status
Reviewed-on: #993
Reviewed-by: Oneric <oneric@noreply.akkoma>
2025-10-24 19:05:27 +00:00
c3dd582659 Allow mastodon-style quotes
All checks were successful
ci/woodpecker/pr/test/1 Pipeline was successful
ci/woodpecker/pr/test/2 Pipeline was successful
2025-10-24 00:00:00 +00:00
b3a0833d30 Use types for filtering notifications
All checks were successful
ci/woodpecker/pr/test/1 Pipeline was successful
ci/woodpecker/pr/test/2 Pipeline was successful
Signed-off-by: nicole mikołajczyk <git@mkljczk.pl>
2025-10-15 09:06:43 +02:00
300302d432 Merge pull request 'Treat known quotes and replies as such even if parent unavailable' (#991) from Oneric/akkoma:replies-unknown into develop
All checks were successful
ci/woodpecker/push/build-arm64 Pipeline was successful
ci/woodpecker/push/build-amd64 Pipeline was successful
ci/woodpecker/push/docs Pipeline was successful
Reviewed-on: #991
2025-10-13 12:24:27 +00:00
Oneric
0907521971 Treat known quotes and replies as such even if parent unavailable
All checks were successful
ci/woodpecker/pr/test/1 Pipeline was successful
ci/woodpecker/pr/test/2 Pipeline was successful
Happens commonly for e.g. replies to follower-only posts
if no one on your instance follows the replied-to account,
or replies/quotes of deleted posts.
Before this change Masto API response would treat those
replies as root posts, making it hard to automatically or
mentally filter them out.

With this change replies already show up sensibly as
recognisable replies in akkoma-fe.
Quotes of unavailable posts however still show up as if they
weren’t quotes at all, but this can only be improved client-side.

Fixes: #715
2025-10-13 10:26:57 +00:00
Weblate
4744ae4328 Translated using Weblate (Chinese (Simplified Han script))
All checks were successful
ci/woodpecker/push/build-arm64 Pipeline was successful
ci/woodpecker/push/build-amd64 Pipeline was successful
ci/woodpecker/push/docs Pipeline was successful
Currently translated at 99.0% (106 of 107 strings)

Co-authored-by: Poesty Li <poesty7450@gmail.com>
Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-errors/zh_Hans/
Translation: Pleroma fe/Akkoma Backend (Errors)
2025-10-13 10:01:37 +00:00
Weblate
2b076e59c1 Translated using Weblate (Chinese (Simplified Han script))
Currently translated at 100.0% (1004 of 1004 strings)

Co-authored-by: Poesty Li <poesty7450@gmail.com>
Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/zh_Hans/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2025-10-13 10:01:37 +00:00
Oneric
6878e4de60 reverse_proxy: don't trust header about body size
All checks were successful
ci/woodpecker/pr/test/1 Pipeline was successful
ci/woodpecker/pr/test/2 Pipeline was successful
Except of course for HEAD requests
2025-10-10 00:00:00 +00:00
9d8583314b Respect :restrict_unauthenticated for hashtag rss/atom feeds
Some checks failed
ci/woodpecker/pr/test/1 Pipeline was successful
ci/woodpecker/pr/test/2 Pipeline failed
Signed-off-by: nicole mikołajczyk <git@mkljczk.pl>
2025-10-09 15:29:26 +02:00
290171d006 don't care about the type as long as it has an attachment
Some checks are pending
ci/woodpecker/pr/test/1 Pipeline is pending approval
ci/woodpecker/pr/test/2 Pipeline is pending approval
2025-08-04 18:29:46 +01:00
4cfbef3b09 fix from_url
Some checks are pending
ci/woodpecker/pr/test/1 Pipeline is pending approval
ci/woodpecker/pr/test/2 Pipeline is pending approval
2025-08-04 18:20:33 +01:00
27eaddaa05 inspect the whole data on dry run
Some checks are pending
ci/woodpecker/pr/test/1 Pipeline is pending approval
ci/woodpecker/pr/test/2 Pipeline is pending approval
2025-08-04 18:07:07 +01:00
31e25efbfb include previously missed formats
Some checks are pending
ci/woodpecker/pr/test/1 Pipeline is pending approval
ci/woodpecker/pr/test/2 Pipeline is pending approval
2025-08-04 18:02:58 +01:00
c588c3f674 remove inspect
Some checks failed
ci/woodpecker/pr/test/2 Pipeline failed
ci/woodpecker/pr/test/1 Pipeline failed
2025-08-03 23:18:40 +01:00
b5e17f131a correct pipeline 2025-08-03 23:17:55 +01:00
2c34bb32ca rewrite media domains mix task 2025-08-03 23:06:41 +01:00
279 changed files with 9130 additions and 6382 deletions


@ -1,110 +0,0 @@
labels:
platform: linux/amd64
when:
event:
- push
- tag
branch:
- develop
- stable
variables:
- &scw-secrets
SCW_ACCESS_KEY:
from_secret: SCW_ACCESS_KEY
SCW_SECRET_KEY:
from_secret: SCW_SECRET_KEY
SCW_DEFAULT_ORGANIZATION_ID:
from_secret: SCW_DEFAULT_ORGANIZATION_ID
- &setup-hex "mix local.hex --force && mix local.rebar --force"
- &on-stable
when:
event:
- push
- tag
branch:
- stable
- &tag-build "export BUILD_TAG=$${CI_COMMIT_TAG:-\"$CI_COMMIT_BRANCH\"} && export PLEROMA_BUILD_BRANCH=$BUILD_TAG"
- &clean "(rm -rf release || true) && (rm -rf _build || true) && (rm -rf /root/.mix)"
- &mix-clean "mix deps.clean --all && mix clean"
steps:
# Canonical amd64
debian-bookworm:
image: hexpm/elixir:1.15.4-erlang-26.0.2-debian-bookworm-20230612
environment:
MIX_ENV: prod
DEBIAN_FRONTEND: noninteractive
commands:
- apt-get update && apt-get install -y cmake libmagic-dev rclone zip imagemagick libmagic-dev git build-essential g++ wget
- *clean
- echo "import Config" > config/prod.secret.exs
- *setup-hex
- *tag-build
- mix deps.get --only prod
- mix release --path release
- zip akkoma-amd64.zip -r release
release-debian-bookworm:
image: akkoma/releaser
environment: *scw-secrets
commands:
- export SOURCE=akkoma-amd64.zip
# AMD64
- export DEST=scaleway:akkoma-updates/$${CI_COMMIT_TAG:-"$CI_COMMIT_BRANCH"}/akkoma-amd64.zip
- /bin/sh /entrypoint.sh
# Ubuntu jammy (currently compatible)
- export DEST=scaleway:akkoma-updates/$${CI_COMMIT_TAG:-"$CI_COMMIT_BRANCH"}/akkoma-amd64-ubuntu-jammy.zip
- /bin/sh /entrypoint.sh
debian-bullseye:
image: hexpm/elixir:1.15.4-erlang-26.0.2-debian-bullseye-20230612
environment:
MIX_ENV: prod
DEBIAN_FRONTEND: noninteractive
commands:
- apt-get update && apt-get install -y cmake libmagic-dev rclone zip imagemagick libmagic-dev git build-essential g++ wget
- *clean
- echo "import Config" > config/prod.secret.exs
- *setup-hex
- *tag-build
- mix deps.get --only prod
- mix release --path release
- zip akkoma-amd64-debian-bullseye.zip -r release
release-debian-bullseye:
image: akkoma/releaser
environment: *scw-secrets
commands:
- export SOURCE=akkoma-amd64-debian-bullseye.zip
# AMD64
- export DEST=scaleway:akkoma-updates/$${CI_COMMIT_TAG:-"$CI_COMMIT_BRANCH"}/akkoma-amd64-debian-bullseye.zip
- /bin/sh /entrypoint.sh
# Canonical amd64-musl
musl:
image: hexpm/elixir:1.15.4-erlang-26.0.2-alpine-3.18.2
<<: *on-stable
environment:
MIX_ENV: prod
commands:
- apk add git gcc g++ musl-dev make cmake file-dev rclone wget zip imagemagick
- *clean
- *setup-hex
- *mix-clean
- *tag-build
- mix deps.get --only prod
- mix release --path release
- zip akkoma-amd64-musl.zip -r release
release-musl:
image: akkoma/releaser
<<: *on-stable
environment: *scw-secrets
commands:
- export SOURCE=akkoma-amd64-musl.zip
- export DEST=scaleway:akkoma-updates/$${CI_COMMIT_TAG:-"$CI_COMMIT_BRANCH"}/akkoma-amd64-musl.zip
- /bin/sh /entrypoint.sh


@ -1,84 +0,0 @@
labels:
platform: linux/arm64
when:
event:
- push
- tag
branch:
- develop
- stable
variables:
- &scw-secrets
SCW_ACCESS_KEY:
from_secret: SCW_ACCESS_KEY
SCW_SECRET_KEY:
from_secret: SCW_SECRET_KEY
SCW_DEFAULT_ORGANIZATION_ID:
from_secret: SCW_DEFAULT_ORGANIZATION_ID
- &setup-hex "mix local.hex --force && mix local.rebar --force"
- &on-stable
when:
event:
- push
- tag
branch:
- stable
- &tag-build "export BUILD_TAG=$${CI_COMMIT_TAG:-\"$CI_COMMIT_BRANCH\"} && export PLEROMA_BUILD_BRANCH=$BUILD_TAG"
- &clean "(rm -rf release || true) && (rm -rf _build || true) && (rm -rf /root/.mix)"
- &mix-clean "mix deps.clean --all && mix clean"
steps:
# Canonical arm64
debian-bookworm:
image: hexpm/elixir:1.15.4-erlang-26.0.2-debian-bookworm-20230612
environment:
MIX_ENV: prod
DEBIAN_FRONTEND: noninteractive
commands:
- apt-get update && apt-get install -y cmake libmagic-dev rclone zip imagemagick libmagic-dev git build-essential g++ wget
- *clean
- echo "import Config" > config/prod.secret.exs
- *setup-hex
- *tag-build
- mix deps.get --only prod
- mix release --path release
- zip akkoma-arm64.zip -r release
release-debian-bookworm:
image: akkoma/releaser:arm64
environment: *scw-secrets
commands:
- export SOURCE=akkoma-arm64.zip
- export DEST=scaleway:akkoma-updates/$${CI_COMMIT_TAG:-"$CI_COMMIT_BRANCH"}/akkoma-arm64-ubuntu-jammy.zip
- /bin/sh /entrypoint.sh
- export DEST=scaleway:akkoma-updates/$${CI_COMMIT_TAG:-"$CI_COMMIT_BRANCH"}/akkoma-arm64.zip
- /bin/sh /entrypoint.sh
# Canonical arm64-musl
musl:
image: hexpm/elixir:1.15.4-erlang-26.0.2-alpine-3.18.2
<<: *on-stable
environment:
MIX_ENV: prod
commands:
- apk add git gcc g++ musl-dev make cmake file-dev rclone wget zip imagemagick
- *clean
- *setup-hex
- *mix-clean
- *tag-build
- mix deps.get --only prod
- mix release --path release
- zip akkoma-arm64-musl.zip -r release
release-musl:
image: akkoma/releaser:arm64
<<: *on-stable
environment: *scw-secrets
commands:
- export SOURCE=akkoma-arm64-musl.zip
- export DEST=scaleway:akkoma-updates/$${CI_COMMIT_TAG:-"$CI_COMMIT_BRANCH"}/akkoma-arm64-musl.zip
- /bin/sh /entrypoint.sh


@ -1,9 +1,6 @@
labels:
platform: linux/amd64
depends_on:
- build-amd64
when:
event:
- push

104
.woodpecker/publish.yml Normal file

@ -0,0 +1,104 @@
when:
event:
- push
- tag
branch:
- develop
- stable
evaluate: 'SKIP_DEVELOP != "YES" || CI_COMMIT_BRANCH != "develop"'
matrix:
include:
# Canonical amd64
- ARCH: amd64
SUFFIX:
IMG_VAR: debian-bookworm-20230612
UBUNTU_EXPORT: YES
# old debian variant of amd64
- ARCH: amd64
SUFFIX: -debian-bullseye
IMG_VAR: debian-bullseye-20230612
# Canonical amd64-musl
- ARCH: amd64
SUFFIX: -musl
IMG_VAR: alpine-3.18.2
SKIP_DEVELOP: YES
# Canonical arm64
- ARCH: arm64
SUFFIX:
RELEASER_TAG: :arm64
IMG_VAR: debian-bookworm-20230612
UBUNTU_EXPORT: YES
# Canonical arm64-musl
- ARCH: arm64
SUFFIX: -musl
RELEASER_TAG: :arm64
IMG_VAR: alpine-3.18.2
SKIP_DEVELOP: YES
labels:
platform: linux/${ARCH}
steps:
# Canonical amd64
build:
image: hexpm/elixir:1.15.4-erlang-26.0.2-${IMG_VAR}
environment:
MIX_ENV: prod
DEBIAN_FRONTEND: noninteractive
commands: |
# install deps
case "${IMG_VAR}" in
debian*)
apt-get update && apt-get install -y \
cmake libmagic-dev rclone zip git wget \
build-essential g++ imagemagick libmagic-dev
;;
alpine*)
apk add git gcc g++ musl-dev make cmake file-dev rclone wget zip imagemagick
;;
*)
echo "No package manager defined for ${IMG_VAR}!"
exit 1
esac
# clean leftovers
rm -rf release
rm -rf _build
rm -rf /root/.mix
# setup
echo "import Config" > config/prod.secret.exs
mix local.hex --force
mix local.rebar --force
export BUILD_TAG=$${CI_COMMIT_TAG:-\"$CI_COMMIT_BRANCH\"}
export PLEROMA_BUILD_BRANCH=$BUILD_TAG
# actually build and zip up
mix deps.get --only prod
mix release --path release
zip akkoma-${ARCH}${SUFFIX}.zip -r release
release:
image: akkoma/releaser${RELEASER_TAG}
environment:
SCW_ACCESS_KEY:
from_secret: SCW_ACCESS_KEY
SCW_SECRET_KEY:
from_secret: SCW_SECRET_KEY
SCW_DEFAULT_ORGANIZATION_ID:
from_secret: SCW_DEFAULT_ORGANIZATION_ID
commands: |
export SOURCE=akkoma-${ARCH}${SUFFIX}.zip
export DEST=scaleway:akkoma-updates/$${CI_COMMIT_TAG:-"$CI_COMMIT_BRANCH"}/$${SOURCE}
/bin/sh /entrypoint.sh
if [ "${UBUNTU_EXPORT}" = "YES" ] ; then
# Ubuntu jammy (currently compatible with our default debian builds)
export DEST=scaleway:akkoma-updates/$${CI_COMMIT_TAG:-"$CI_COMMIT_BRANCH"}/akkoma-${ARCH}-ubuntu-jammy.zip
/bin/sh /entrypoint.sh
fi


@ -1,18 +1,20 @@
labels:
platform: linux/amd64
when:
- event: pull_request
matrix:
# test the lowest and highest versions
include:
- ELIXIR_VERSION: 1.14
- ELIXIR_VERSION: 1.15
OTP_VERSION: 25
LINT: NO
- ELIXIR_VERSION: 1.18
OTP_VERSION: 27
PLATFORM: linux/amd64
- ELIXIR_VERSION: 1.19
OTP_VERSION: 28
LINT: YES
PLATFORM: linux/arm64
labels:
platform: ${PLATFORM}
services:
postgres:
@ -33,6 +35,7 @@ steps:
DB_HOST: postgres
LINT: ${LINT}
commands:
- sh -c 'uname -a && cat /etc/os-release || :'
- mix local.hex --force
- mix local.rebar --force
- mix deps.get


@ -4,7 +4,114 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## Unreleased
## 2026.03
### BREAKING
- Elixir 1.14 is no longer supported, and it's EOL! Upgrade to Elixir 1.15+
- `account` entities in API responses now only contain a cut-down version of their server's nodeinfo.
TEMPORARILY a config option is provided to serve the full nodeinfo data again.
HOWEVER this option WILL be removed soon. If you encounter any issues with third-party clients fixed
by using this setting, tell us so we can include all actually needed keys by default.
### REMOVED
### Added
- Mastodon-compatible translation endpoints are now supported too;
the older Akkoma endpoints are deprecated, but there are no immediate plans for removal
- `GET pleroma/conversation/:id/statuses` now supports `with_muted`
- `POST /api/v1/statuses` accepts and now prefers the Mastodon-compatible `quoted_status_id` parameter for quoting a post
- `status` API entities now expose non-shallow quotes in a manner also compatible with Mastodon clients
- support for WebFinger backlinks in ActivityPub actors (FEP-2c59)
### Fixed
- pinning, muting or unmuting a status one is not allowed to access no longer leaks its content
- revoking a favourite on a post one lost access to no longer leaks its content
- user info updates are again actively federated to other servers;
this was accidentally broken in the previous release
- it is no longer possible to reference posts one cannot access when reporting another user
- streamed relationship updates no longer leak follow* counts for users who chose to hide their counts
- WebFinger data and user nicknames no longer allow non-consensual associations
- correctly set-up custom WebFinger domains work again
- fix paths of emoji added or updated at runtime, and remove emoji from the runtime when deleting an entire pack, without requiring a full emoji reload
- fix retraction of remote emoji reaction when id is not present or its domain differs from image host
- fix AP ids declared with the canonical type being ignored in XML WebFinger responses
- fix many, many bugs in the conversations API family
- notifications about muted entities are no longer streamed out
- non-UTF-8 usernames no longer lead to internal server errors in API endpoints
- when SimplePolicy rules are configured but the MRF is not enabled, its rules no longer interfere with fetching
- fixed remote follow counter refresh on user (re)fetch
- remote users whose follow* counts are private are now actually shown as such in the API instead of representing them with public zero counters
- fix local follow* collections counting and including AP IDs of deleted users
### Changed
- `PATCH /api/v1/pleroma/conversations/:id` now accepts update parameters via JSON body too
- it is now possible to quote local and one's own private posts provided a compatible scope is used
- on final activity failures the error log now includes the affected activity
- improved performance of `GET api/v1/custom_emoji`
- outgoing HTTP requests now accept compressed responses
- the system CA certificate store is now used by default
- when refreshing remote follow* stats all fetch-related errors are now treated as stats being private;
this avoids spurious error logs and better matches the intent of implementations serving fallback HTML responses on the AP collection endpoints
## 2025.12
### REMOVED
- DEPRECATE `/api/v1/timelines/direct`.
Technically this was already deprecated, given we extend the Mastodon 2.7.2 API
and Mastodon already deprecated it in 2.6.0 before removing it in 3.0.0.
But now we have concrete plans to remove this endpoint in a coming release.
The few remaining users should switch to the conversations API.
- DEPRECATE `config :pleroma, :instance, skip_thread_containment: false`.
It is due to be removed in one of the next releases if no strong arguments for keeping it are brought up.
It is already semi-broken for large threads and conflicts with pending optimisation and cleanup work.
- support for `exclude_visibilities` in timeline and notification endpoints has been dropped
- support for list visibility / list addressing has been dropped due to lack of usage, maintenance burden and redundancy with the still supported explicit-addressing feature
- support for conversations addressing has been dropped due to lack of usage, maintenance burden and being mostly redundant with explicit addressing
- per-visibility status counters have been dropped from `/api/v1/pleroma/admin/stats`
due to unreasonable perf costs added to most database operations.
For now, the response still contains the fields, but with stubbed-out values.
### Added
- status responses include two new fields for ActivityPub cross-referencing: `akkoma.quote_apid` and `akkoma.in_reply_to_apid`
- attempting to reply to an already deleted post will return an error
(in akkoma-fe the error will be shown and your draft message retained so you can decide
for yourself whether to discard it or copy and repost as a, now intentional, new thread)
- the notification endpoint now supports the `types` parameter for filtering added in vanilla Mastodon
- the mute endpoint now supports the `duration` parameter added in vanilla Mastodon
(fixes temporary mutes created via e.g. Husky)
### Fixed
- replies and quotes of unresolvable posts now fill out IDs for the replied-to
status, user or quoted status with a 404-ing ID to make them recognisable as
replies/quotes instead of pretending they're root posts
- querying a status using the ID of a non-post AP activity no longer displays
a duplicate of the post referenced by said activity with mangled author information
- fix users being able to interact (like, emoji react, ...) with posts they cannot access
- fix AP fetches of local non-Create, non-Undo activities exposing the raw, unsanitised content of the referenced object
- the above two combined allowed local users to gain access to private posts
of users they do not follow, provided they follow a follower of the author.
(remote users and other scenarios were to our knowledge not able to achieve this due to other restrictions)
- fix RSS and Atom feeds of hashtag timelines potentially exposing more information than Mastodon API when restricting unauthenticated API access
- fix mentioning and sending DMs to users with non-ASCII-alphanumerical usernames
- correctly hide and show inlined fallback links for quotes from Mastodon instances
- API requests with multiple unsupported parameters will now ignore all of them up to a certain limit.
If there are too many unsupported parameters this is indicated in the returned error message.
- expose generic type of attachment via Masto API if remote did not send a full MIME type but indicated a generic one
(the \*oma-specific full mime type field in the API response remains generic however, since we don't have this info)
- add back the default banner image we advertise in Masto API
- correctly redirect `/users/:nickname.rss` to the RSS instead of Atom feed
### Changed
- deprecated the `include_types` parameter in the notification endpoint; replaced by `types`
- deprecated the `expires_in` parameter in the mute endpoint; replaced by `duration`
- optimised emoji addition and removal
- emoji reloading now happens asynchronously so you won't run into timeout issues with many emoji and/or a slow disk
- upgraded all of our dependencies; this should reduce issues when running akkoma with OTP28
- prefer "summary" over "name" for the attachment alt text of incoming ActivityPub documents;
this fixes alt text federation from GtS and Honk
- slightly improve index overhead for the users table
## 2025.10


@ -10,6 +10,7 @@
## Supported FEPs
- [FEP-67ff: FEDERATION](https://codeberg.org/fediverse/fep/src/branch/main/fep/67ff/fep-67ff.md)
- [FEP-2c59: Discovery of a Webfinger address from an ActivityPub actor](https://codeberg.org/fediverse/fep/src/branch/main/fep/2c59/fep-2c59.md)
- [FEP-dc88: Formatting Mathematics](https://codeberg.org/fediverse/fep/src/branch/main/fep/dc88/fep-dc88.md)
- [FEP-f1d5: NodeInfo in Fediverse Software](https://codeberg.org/fediverse/fep/src/branch/main/fep/f1d5/fep-f1d5.md)
- [FEP-fffd: Proxy Objects](https://codeberg.org/fediverse/fep/src/branch/main/fep/fffd/fep-fffd.md)
@ -37,6 +38,21 @@ Depending on instance configuration the same may be true for GET requests.
We set the optional extension term `htmlMfm: true` when using content type "text/x.misskeymarkdown".
Incoming messages containing `htmlMfm: true` will not have their content re-parsed.
## WebFinger
Akkoma requires WebFinger implementations to respond to queries about a given user both when
`acct:user@domain` and when the canonical ActivityPub id of the actor is passed as the `resource`.
Akkoma strongly encourages ActivityPub implementations to include
a FEP-2c59-compliant WebFinger backlink in their actor documents.
Without FEP-2c59, and if different domains are used for ActivityPub and the WebFinger subject,
Akkoma relies on the presence of a host-meta LRDD template on the ActivityPub domain
or an HTTP redirect from the ActivityPub domain's `/.well-known/webfinger` to an equivalent endpoint
on the domain used in the `subject` to discover and validate the domain association.
Without FEP-2c59, Akkoma may not become aware of changes to the
preferred WebFinger `subject` domain for already discovered users.
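With FEP-2c59 the validation reduces to a simple cross-check between the actor document and the JRD returned by the WebFinger endpoint. The sketch below illustrates that check (field names follow FEP-2c59 and RFC 7033; this is an illustration, not Akkoma's actual implementation, which performs additional checks):

```python
def verify_webfinger_backlink(actor: dict, jrd: dict) -> bool:
    """Cross-check a FEP-2c59 WebFinger backlink against the JRD
    returned when querying WebFinger for the advertised address."""
    advertised = actor.get("webfinger")
    if not advertised:
        return False
    # The JRD subject must be exactly the address the actor advertised ...
    if jrd.get("subject") != advertised:
        return False
    # ... and a "self" link must point back at the canonical AP id.
    ap_types = (
        "application/activity+json",
        'application/ld+json; profile="https://www.w3.org/ns/activitystreams"',
    )
    return any(
        link.get("rel") == "self"
        and link.get("type") in ap_types
        and link.get("href") == actor.get("id")
        for link in jrd.get("links", [])
    )
```

Note that only the advertised id in the returned data matters here; which endpoint ultimately serves the JRD (e.g. after a redirect per WebFinger endpoint delegation) does not affect this check.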
## Nodeinfo
Akkoma provides many additional entries in its nodeinfo response,


@ -1,3 +1,2 @@
./build.sh 1.14-otp25 1.14.3-erlang-25.3.2-alpine-3.18.0
./build.sh 1.15-otp25 1.15.8-erlang-25.3.2.18-alpine-3.19.7
./build.sh 1.18-otp27 1.18.2-erlang-27.2.4-alpine-3.19.7
./build.sh 1.15-otp25 1.15.8-erlang-25.3.2.18-alpine-3.22.2
./build.sh 1.19-otp28 1.19-erlang-28.0-alpine-3.23.2


@ -51,6 +51,11 @@ config :pleroma, Pleroma.Repo,
queue_target: 20_000,
migration_lock: nil
# password hash strength
config :argon2_elixir,
t_cost: 8,
parallelism: 2
config :pleroma, Pleroma.Captcha,
enabled: true,
seconds_valid: 300,
@ -244,6 +249,7 @@ config :pleroma, :instance,
remote_post_retention_days: 90,
skip_thread_containment: true,
limit_to_local_content: :unauthenticated,
filter_embedded_nodeinfo: true,
user_bio_length: 5000,
user_name_length: 100,
max_account_fields: 10,
@ -776,7 +782,9 @@ config :pleroma, :frontends,
available: %{
"pleroma-fe" => %{
"name" => "pleroma-fe",
"git" => "https://akkoma.dev/AkkomaGang/pleroma-fe",
"blind_trust" => true,
"git" => "https://akkoma.dev/AkkomaGang/akkoma-fe",
"bugtracker" => "https://akkoma.dev/AkkomaGang/akkoma-fe/issues",
"build_url" =>
"https://akkoma-updates.s3-website.fr-par.scw.cloud/frontend/${ref}/akkoma-fe.zip",
"ref" => "stable",
@ -785,7 +793,9 @@ config :pleroma, :frontends,
# Mastodon-Fe cannot be set as a primary - this is only here so we can update this seperately
"mastodon-fe" => %{
"name" => "mastodon-fe",
"blind_trust" => true,
"git" => "https://akkoma.dev/AkkomaGang/masto-fe",
"bugtracker" => "https://akkoma.dev/AkkomaGang/masto-fe/issues",
"build_url" =>
"https://akkoma-updates.s3-website.fr-par.scw.cloud/frontend/${ref}/masto-fe.zip",
"build_dir" => "distribution",
@ -793,7 +803,9 @@ config :pleroma, :frontends,
},
"fedibird-fe" => %{
"name" => "fedibird-fe",
"blind_trust" => true,
"git" => "https://akkoma.dev/AkkomaGang/fedibird-fe",
"bugtracker" => "https://akkoma.dev/AkkomaGang/fedibird-fe/issues",
"build_url" =>
"https://akkoma-updates.s3-website.fr-par.scw.cloud/frontend/${ref}/fedibird-fe.zip",
"build_dir" => "distribution",
@ -801,7 +813,9 @@ config :pleroma, :frontends,
},
"admin-fe" => %{
"name" => "admin-fe",
"blind_trust" => true,
"git" => "https://akkoma.dev/AkkomaGang/admin-fe",
"bugtracker" => "https://akkoma.dev/AkkomaGang/admin-fe/issues",
"build_url" =>
"https://akkoma-updates.s3-website.fr-par.scw.cloud/frontend/${ref}/admin-fe.zip",
"ref" => "stable"
@ -809,10 +823,31 @@ config :pleroma, :frontends,
# For developers - enables a swagger frontend to view the openapi spec
"swagger-ui" => %{
"name" => "swagger-ui",
"blind_trust" => true,
"git" => "https://github.com/swagger-api/swagger-ui",
# API spec definitions are part of the backend (and the swagger-ui build outdated)
"bugtracker" => "https://akkoma.dev/AkkomaGang/akkoma/issues",
"build_url" => "https://akkoma-updates.s3-website.fr-par.scw.cloud/frontend/swagger-ui.zip",
"build_dir" => "dist",
"ref" => "stable"
},
# Third-party frontends
"pleroma-fe-vanilla" => %{
"name" => "pleroma-fe-vanilla",
"git" => "https://git.pleroma.social/pleroma/pleroma-fe/",
"build_url" =>
"https://git.pleroma.social/pleroma/pleroma-fe/-/jobs/artifacts/${ref}/download?job=build",
"ref" => "develop",
"build_dir" => "dist",
"bugtracker" => "https://git.pleroma.social/pleroma/pleroma-fe/-/issues"
},
"pl-fe" => %{
"name" => "pl-fe",
"git" => "https://codeberg.org/mkljczk/pl-fe",
"build_url" => "https://pl.mkljczk.pl/pl-fe.zip",
"ref" => "develop",
"build_dir" => ".",
"bugtracker" => "https://codeberg.org/mkljczk/pl-fe/issues"
}
}
@ -869,7 +904,11 @@ config :pleroma, ConcurrentLimiter, [
{Pleroma.Search, [max_running: 30, max_waiting: 50]}
]
config :pleroma, Pleroma.Web.WebFinger, domain: nil, update_nickname_on_user_fetch: true
config :pleroma, Pleroma.Web.WebFinger,
domain: nil,
# this _forces_ a nickname rediscovery and validation, otherwise only updates when detecting a change
# TODO: default this to false after the fallout from recent WebFinger bugs is healed
update_nickname_on_user_fetch: true
config :pleroma, Pleroma.Search, module: Pleroma.Search.DatabaseSearch


@ -61,6 +61,18 @@ frontend_options = [
type: :string,
description: "The directory inside the zip file "
},
%{
key: "blind_trust",
label: "Blindly trust frontend devs?",
type: :boolean,
description: "Do NOT change this unless you're really sure"
},
%{
key: "bugtracker",
label: "Bug tracker",
type: :string,
description: "Where to report bugs (for third-party FEs)"
},
%{
key: "custom-http-headers",
label: "Custom HTTP headers",
@ -3483,7 +3495,7 @@ config :pleroma, :config_description, [
key: :module,
type: :module,
description: "Translation module.",
suggestions: {:list_behaviour_implementations, Pleroma.Akkoma.Translator}
suggestions: {:list_behaviour_implementations, Pleroma.Akkoma.Translator.Provider}
}
]
},


@ -19,3 +19,26 @@
- `--delete` - delete local uploads after migrating them to the target uploader
A list of available uploaders can be seen in [Configuration Cheat Sheet](../../configuration/cheatsheet.md#pleromaupload)
## Rewriting old media URLs
After a migration has taken place, old URLs in your database will not have been changed. You
will want to run this task to update these URLs.
Use the full URL here. So if you moved from `media.example.com/media` to `media.another.com/data`, you'd run with arguments
`old_url = https://media.example.com/media` and `new_url = https://media.another.com/data`.
=== "OTP"
```sh
./bin/pleroma_ctl uploads rewrite_media_domain <old_url> <new_url>
```
=== "From Source"
```sh
mix pleroma.uploads rewrite_media_domain <old_url> <new_url>
```
### Options
- `--dry-run` - Do not action any update and simply print what _would_ happen
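Conceptually the task performs a plain prefix substitution on each stored attachment URL; URLs not under the old base are left untouched. A rough sketch of that logic (a hypothetical helper for illustration, not the actual mix task code):

```python
def rewrite_media_url(url: str, old_base: str, new_base: str) -> str:
    """Swap old_base for new_base when url lives under old_base,
    otherwise return the url unchanged."""
    old_base = old_base.rstrip("/")
    new_base = new_base.rstrip("/")
    # Match the base itself or anything below it, but not e.g.
    # "https://media.example.com/mediafoo".
    if url == old_base or url.startswith(old_base + "/"):
        return new_base + url[len(old_base):]
    return url
```

This is also why the docs insist on passing the full URL: the match is a literal prefix comparison, including scheme, host and path.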


@ -18,7 +18,7 @@
3. Go to the working directory of Akkoma (default is `/opt/akkoma`)
4. Copy the above-mentioned files back to their original position.
5. Drop the existing database and user[¹]. `sudo -Hu postgres psql -c 'DROP DATABASE akkoma;';` `sudo -Hu postgres psql -c 'DROP USER akkoma;'`
6. Restore the database schema and akkoma role[¹] (replace the password with the one you find in the configuration file), `sudo -Hu postgres psql -c "CREATE USER akkoma WITH ENCRYPTED PASSWORD '<database-password-which-you-can-find-in-your-configuration-file>';"` `sudo -Hu postgres psql -c "CREATE DATABASE akkoma OWNER akkoma;"`.
6. Restore the database schema and akkoma role[¹] (replace the password with the one you find in the configuration file), `sudo -Hu postgres psql -c "CREATE USER akkoma WITH ENCRYPTED PASSWORD '<database-password-which-you-can-find-in-your-configuration-file>';";` `sudo -Hu postgres psql -c "CREATE DATABASE akkoma OWNER akkoma;"`.
7. Now restore the Akkoma instance's data into the empty database schema[¹]: `sudo -Hu postgres pg_restore -d akkoma -v -1 </path/to/backup_location/akkoma.pgdump>`
8. If you installed a newer Akkoma version, you should run the database migrations `./bin/pleroma_ctl migrate`[²].
9. Restart the Akkoma service.


@ -53,6 +53,10 @@ curl -i -H 'Authorization: Bearer $ACCESS_TOKEN' https://myinstance.example/api/
You may use the eponymous [Prometheus](https://prometheus.io/)
or anything compatible with it like e.g. [VictoriaMetrics](https://victoriametrics.com/).
The latter claims better performance and storage efficiency.
However, at the moment our reference dashboard only works with VictoriaMetrics,
thus if you wish to use the reference as an easy drop-in template you must
use VictoriaMetrics.
Patches to allow the dashboard to work with plain Prometheus are welcome though.
Both of them can usually be easily installed via distro-packages or docker.
Depending on your distro or installation method the preferred way to change the CLI arguments and the location of config files may differ; consult the documentation of your chosen method to find out.
@ -254,6 +258,44 @@ as well as database diagnostics.
BEAM VM stats include detailed memory consumption breakdowns
and a full list of running processes for example.
## Postgres Statements Statistics
The built-in dashboard can list the queries your instance spends the
most accumulated time on, giving insight into potential bottlenecks
and what might be worth optimising.
This is the “Outliers” tab in “Ecto Stats”.
However, for this to work you first need to enable a PostgreSQL extension
as follows:
Add the following two lines to your `postgresql.conf` (typically placed in your data dir):
```
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.track = all
```
Now restart PostgreSQL. Then connect to your akkoma database using `psql` and run:
```sql
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```
Execution time statistics will now start to be gathered.
To get a representative sample of your instance's workload you should wait a week or at least a day.
These statistics are never reset automatically, but with new Akkoma releases and
changes in the servers your instance federates with, the workload will evolve.
Thus it's a good idea to reset this occasionally using:
```sql
-- get user oid: SELECT oid FROM pg_roles WHERE rolname = 'akkoma';
-- get db oid: SELECT oid FROM pg_database WHERE datname = 'akkoma';
SELECT pg_stat_statements_reset('<akkoma user oid>'::oid, '<akkoma database oid>'::oid);
-- or alternatively, to just reset stats for all users and databases:
-- SELECT pg_stat_statements_reset();
```
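Beyond the dashboard, you can also inspect the gathered data directly in `psql`. As a sketch, something like the following lists the ten queries with the highest accumulated execution time (on PostgreSQL 12 the columns are named `total_time` and `mean_time` instead of `total_exec_time` and `mean_exec_time`):

```sql
-- top 10 queries by accumulated execution time
SELECT calls,
       round(total_exec_time) AS total_ms,
       round(mean_exec_time)  AS mean_ms,
       left(query, 60)        AS query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```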
## Oban Web
This too requires administrator rights to access and can be found under `/akkoma/oban` if enabled.

@ -54,7 +54,8 @@ To add configuration to your config file, you can copy it from the base config.
* `remote_post_retention_days`: The default number of days to retain remote posts when pruning the database.
* `user_bio_length`: A user bio maximum length (default: `5000`).
* `user_name_length`: A user name maximum length (default: `100`).
* `skip_thread_containment`: **DEPRECATED**, DO NOT CHANGE THE DEFAULT!
Skips filtering out broken threads. The default is `false`.
* `limit_to_local_content`: Limit unauthenticated users to searching for local statuses and users only. Possible values: `:unauthenticated`, `:all` and `false`. The default is `:unauthenticated`.
* `max_account_fields`: The maximum number of custom fields in the user profile (default: `10`).
* `max_remote_account_fields`: The maximum number of custom fields in the remote user profile (default: `20`).

@ -7,72 +7,72 @@ The configuration of Akkoma (and Pleroma) has traditionally been managed with a
1. Run the mix task to migrate to the database.
**Source:**
```
$ mix pleroma.config migrate_to_db
```
or
**OTP:**
*Note: OTP users need Akkoma to be running for `pleroma_ctl` commands to work*
```
$ ./bin/pleroma_ctl config migrate_to_db
```
```
Migrating settings from file: /home/pleroma/config/dev.secret.exs
Settings for key instance migrated.
Settings for group :pleroma migrated.
```
2. It is recommended to back up your config file now.
```
cp config/dev.secret.exs config/dev.secret.exs.orig
```
3. Edit your Akkoma config to enable database configuration:
```
config :pleroma, configurable_from_database: true
```
4. ⚠️ **THIS IS NOT REQUIRED** ⚠️
Now you can edit your config file and strip it down to the only settings which are not possible to control in the database. e.g., the Postgres (Repo) and webserver (Endpoint) settings cannot be controlled in the database because the application needs the settings to start up and access the database.
Any settings in the database will override those in the config file, but you may find it less confusing if the setting is only declared in one place.
A non-exhaustive list of settings that can only be set in the config file includes the following:
* config :pleroma, Pleroma.Web.Endpoint
* config :pleroma, Pleroma.Repo
* config :pleroma, configurable\_from\_database
* config :pleroma, :database, rum_enabled
* config :pleroma, :connections_pool
Here is an example of a server config stripped down after migration:
```
use Mix.Config

config :pleroma, Pleroma.Web.Endpoint,
  url: [host: "cool.pleroma.site", scheme: "https", port: 443]

config :pleroma, Pleroma.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: "akkoma",
  password: "MySecretPassword",
  database: "akkoma_prod",
  hostname: "localhost"

config :pleroma, configurable_from_database: true
```
5. Restart your instance and you can now access the Settings tab in admin-fe.
@ -81,28 +81,28 @@ The configuration of Akkoma (and Pleroma) has traditionally been managed with a
1. Run the mix task to migrate back from the database. You'll receive some debugging output and a few messages informing you of what happened.
**Source:**
```
$ mix pleroma.config migrate_from_db
```
or
**OTP:**
```
$ ./bin/pleroma_ctl config migrate_from_db
```
```
10:26:30.593 [debug] QUERY OK source="config" db=9.8ms decode=1.2ms queue=26.0ms idle=0.0ms
SELECT c0."id", c0."key", c0."group", c0."value", c0."inserted_at", c0."updated_at" FROM "config" AS c0 []

10:26:30.659 [debug] QUERY OK source="config" db=1.1ms idle=80.7ms
SELECT c0."id", c0."key", c0."group", c0."value", c0."inserted_at", c0."updated_at" FROM "config" AS c0 []

Database configuration settings have been saved to config/dev.exported_from_db.secret.exs
```
2. Remove `config :pleroma, configurable_from_database: true` from your config. The in-database configuration still exists, but it will not be used. Future migrations will erase the database config before importing your config file again.

@ -1218,24 +1218,10 @@ Loads JSON generated from `config/descriptions.exs`.
## `GET /api/v1/pleroma/admin/stats`
### Stats
**DEPRECATED; DO NOT USE**!!
- Query Params:
- *optional* `instance`: **string** instance hostname (without protocol) to get stats for
- Example: `https://mypleroma.org/api/v1/pleroma/admin/stats?instance=lain.com`
- Response:
```json
{
"status_visibility": {
"direct": 739,
"private": 9,
"public": 17,
"unlisted": 14
}
}
```
Returned information is only stubbed out.
The endpoint will be removed entirely in an upcoming release.
## `GET /api/v1/pleroma/admin/oauth_app`

@ -14,8 +14,6 @@ by the administrator. It is available under `/api/v1/timelines/bubble`.
Adding the parameter `with_muted=true` to the timeline queries will also return activities by muted (not by blocked!) users.
Adding the parameter `exclude_visibilities` to the timeline queries will exclude the statuses with the given visibilities. The parameter accepts an array of visibility types (`public`, `unlisted`, `private`, `direct`), e.g., `exclude_visibilities[]=direct&exclude_visibilities[]=private`.
Adding the parameter `reply_visibility` to the public, bubble or home timelines queries will filter replies. Possible values: without parameter (default) shows all replies, `following` - replies directed to you or users you follow, `self` - replies directed to you.
Adding the parameter `instance=lain.com` to the public timeline will show only statuses originating from `lain.com` (or any remote instance).
@ -32,7 +30,7 @@ Home, public, hashtag & list timelines further accept:
## Statuses
- `visibility`: has additional possible value `local` (for local-only statuses)
- `emoji_reactions`: additional field since Akkoma 3.2.0; identical to `pleroma/emoji_reactions`
Has these additional fields under the `pleroma` object:
@ -60,6 +58,7 @@ The `GET /api/v1/statuses/:id/source` endpoint additionally has the following at
Has these additional fields in `params`:
- `expires_in`: the number of seconds the posted activity should expire in.
**Deprecated**; replaced by Mastodon-compatible `duration`
## Media Attachments
@ -90,7 +89,6 @@ The `id` parameter can also be the `nickname` of the user. This only works in th
- `with_muted`: include statuses/reactions from muted accounts
- `exclude_reblogs`: exclude reblogs
- `exclude_replies`: exclude replies
- `exclude_visibilities`: exclude visibilities
Endpoints which accept `with_relationships` parameter:
@ -191,8 +189,8 @@ The `type` value is `pleroma:report`
Accepts additional parameters:
- `exclude_visibilities`: will exclude the notifications for activities with the given visibilities. The parameter accepts an array of visibility types (`public`, `unlisted`, `private`, `direct`). Usage example: `GET /api/v1/notifications?exclude_visibilities[]=direct&exclude_visibilities[]=private`.
- `include_types`: will include the notifications for activities with the given types. The parameter accepts an array of types (`mention`, `follow`, `reblog`, `favourite`, `move`, `pleroma:emoji_reaction`, `pleroma:report`). Usage example: `GET /api/v1/notifications?include_types[]=mention&include_types[]=reblog`.
**Deprecated:** replaced by `types` which is equivalent but (by now) also supported by vanilla Mastodon.
## DELETE `/api/v1/notifications/destroy_multiple`
@ -214,8 +212,8 @@ Additional parameters can be added to the JSON body/Form data:
- `content_type`: string, contain the MIME type of the status, it is transformed into HTML by the backend. You can get the list of the supported MIME types with the nodeinfo endpoint.
- `to`: A list of nicknames (like `admin@otp.akkoma.dev` or `admin` on the local server) that will be used to determine who is going to be addressed by this post. Using this will disable the implicit addressing by mentioned names in the `status` body, only the people in the `to` list will be addressed. The normal rules for post visibility are not affected by this and will still apply.
- `visibility`: string, besides standard MastoAPI values (`direct`, `private`, `unlisted`, `local` or `public`) it can be used to address a List by setting it to `list:LIST_ID`.
- `in_reply_to_conversation_id`: Will reply to a given conversation, addressing only the people who are part of the recipient set of that conversation. Sets the visibility to `direct`.
- `expires_in`: **Deprecated**; replaced by `duration`.
The number of seconds the posted activity should expire in. When a posted activity expires it will be deleted from the server, and a delete request for it will be federated. This needs to be longer than an hour.
## GET `/api/v1/statuses`
@ -361,10 +359,6 @@ The message payload consists of:
- `follower_count`: follower count
- `following_count`: following count
## User muting and thread muting
Both user muting and thread muting can be done for only a certain time by adding an `expires_in` parameter to the API calls and giving the expiration time in seconds.
## Not implemented
Akkoma is generally compatible with the Mastodon 2.7.2 API, but some newer features and non-essential features are omitted. These features usually return an HTTP 200 status code, but with an empty response. While they may be added in the future, they are considered low priority.

@ -376,13 +376,8 @@ See [Admin-API](admin_api.md)
Pleroma Conversations have the same general structure that Mastodon Conversations have. The behavior differs in the following ways when using these endpoints:
1. Pleroma Conversations never add or remove recipients (`accounts` key), unless explicitly changed by the user.
2. Pleroma Conversations statuses can be requested by Conversation id.
3. Pleroma Conversations can be replied to.
Conversations have the additional field `recipients` under the `pleroma` key. This holds a list of all the accounts that will receive a message in this conversation.
The status posting endpoint takes an additional parameter, `in_reply_to_conversation_id`, which, when set, will set the visibility to direct and address only the people who are the recipients of that Conversation.
⚠ Conversation IDs can be found in direct messages with the `pleroma.direct_conversation_id` key, do not confuse it with `pleroma.conversation_id`.
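For illustration, a direct-message status entry could carry both ids side by side; the values here are made up:

```json
{
  "pleroma": {
    "conversation_id": 123,
    "direct_conversation_id": 456
  }
}
```

Use `direct_conversation_id` for the Conversation endpoints described above.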

@ -267,17 +267,33 @@ special meaning to the potential local-scope identifier.
however those are also shown publicly on the local web interface
and are thus visible to non-members.
# Deprecated and Removed Extensions
The following extensions were used in the past but have been dropped.
Documentation is retained here as a reference and since old objects might
still contain related fields.
## List post scope
Messages originally addressed to a custom list will contain
a `listMessage` field with an unresolvable pseudo ActivityPub id.
!!! note
    The concept did not work out too well in practice: even remote servers
    recognising the `listMessage` extension were unaware of the state of the
    list, resulting in weird desyncs in thread display and handling between
    servers.
    It also never found its way into any known clients or frontends.

A more consistent superset of what this was actually able to do
can be achieved without ActivityPub extensions by explicitly addressing
all intended participants without inline mentions.
True federated and moderated "lists" or "groups"
will need more work and a different approach.
Thus support for it was removed and it is recommended
not to create any new implementations of it.
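For illustration, such explicit addressing simply enumerates every intended recipient in the standard ActivityPub addressing fields, without relying on any extension; the actor ids below are placeholders:

```json
{
  "type": "Note",
  "to": [
    "https://example.com/users/alice",
    "https://example.org/users/bob"
  ],
  "cc": []
}
```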
## Actor endpoints
The following endpoints used to be present:

@ -1,8 +1,8 @@
## Required dependencies
* PostgreSQL 12+
* Elixir 1.15+ (currently tested up to 1.19)
* Erlang OTP 25+ (currently tested up to OTP28)
* git
* file / libmagic
* gcc (clang might also work)

@ -723,6 +723,8 @@
},
"displayName": "Run Queue",
"mappings": [],
"max": 1.5,
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
@ -732,11 +734,11 @@
},
{
"color": "yellow",
"value": 0.2
},
{
"color": "red",
"value": 1
}
]
},
@ -784,6 +786,12 @@
{
"id": "displayName",
"value": "Memory"
},
{
"id": "min"
},
{
"id": "max"
}
]
},
@ -836,7 +844,7 @@
"disableTextWrap": false,
"editorMode": "builder",
"exemplar": false,
"expr": "increase(vm_memory_total_psum{instance=\"${INSTANCE}\", job=\"${SCRAPE_JOB}\"}[$__interval])",
"fullMetaSearch": false,
"hide": true,
"includeNullMetadata": true,
@ -854,7 +862,7 @@
"disableTextWrap": false,
"editorMode": "builder",
"exemplar": false,
"expr": "increase(vm_memory_total_pcount{instance=\"${INSTANCE}\", job=\"${SCRAPE_JOB}\"}[$__interval])",
"fullMetaSearch": false,
"hide": true,
"includeNullMetadata": true,
@ -882,7 +890,7 @@
"disableTextWrap": false,
"editorMode": "builder",
"exemplar": false,
"expr": "increase(vm_total_run_queue_lengths_cpu_psum{instance=\"${INSTANCE}\", job=\"${SCRAPE_JOB}\"}[$__interval])",
"fullMetaSearch": false,
"hide": true,
"includeNullMetadata": true,
@ -900,7 +908,7 @@
"disableTextWrap": false,
"editorMode": "builder",
"exemplar": false,
"expr": "increase(vm_total_run_queue_lengths_cpu_pcount{instance=\"${INSTANCE}\", job=\"${SCRAPE_JOB}\"}[$__interval])",
"fullMetaSearch": false,
"hide": true,
"includeNullMetadata": true,
@ -928,7 +936,7 @@
"disableTextWrap": false,
"editorMode": "builder",
"exemplar": false,
"expr": "increase(vm_total_run_queue_lengths_io_fsum_psum{instance=\"${INSTANCE}\", job=\"${SCRAPE_JOB}\"}[$__interval])",
"fullMetaSearch": false,
"hide": true,
"includeNullMetadata": true,
@ -946,7 +954,7 @@
"disableTextWrap": false,
"editorMode": "builder",
"exemplar": false,
"expr": "increase(vm_total_run_queue_lengths_io_fsum_pcount{instance=\"${INSTANCE}\", job=\"${SCRAPE_JOB}\"}[$__interval])",
"fullMetaSearch": false,
"hide": true,
"includeNullMetadata": true,
@ -1974,6 +1982,7 @@
"type": "prometheus",
"uid": "${DATASOURCE}"
},
"description": "Times are counted upon job completion/failure and may contain IO or network wait times.",
"fieldConfig": {
"defaults": {
"color": {
@ -2356,7 +2365,7 @@
"type": "prometheus",
"uid": "${DATASOURCE}"
},
"description": "Jobs intentionally held back until a later start date. This also (but not only) includes retries of previously failed jobs since there's a cooldown between re-attempts.",
"fieldConfig": {
"defaults": {
"color": {
@ -2681,7 +2690,7 @@
},
"disableTextWrap": false,
"editorMode": "builder",
"expr": "increase(vm_memory_total_psum{instance=\"${INSTANCE}\", job=\"${SCRAPE_JOB}\"}[$__interval])",
"fullMetaSearch": false,
"hide": true,
"includeNullMetadata": true,
@ -2698,7 +2707,7 @@
},
"disableTextWrap": false,
"editorMode": "builder",
"expr": "increase(vm_memory_total_pcount{instance=\"${INSTANCE}\", job=\"${SCRAPE_JOB}\"}[$__interval])",
"fullMetaSearch": false,
"hide": true,
"includeNullMetadata": true,
@ -3598,6 +3607,6 @@
"timezone": "utc",
"title": "Akkoma Dashboard",
"uid": "edzowz85niznkc",
"version": 54,
"weekStart": ""
}

@ -33,7 +33,7 @@ defmodule Mix.Tasks.Pleroma.Email do
Pleroma.User.Query.build(%{
local: true,
is_active: true,
is_confirmed: false,
invisible: false
})

@ -43,6 +43,7 @@ defmodule Mix.Tasks.Pleroma.NotificationSettings do
defp build_query(hide_notification_contents, options) do
query =
from(u in Pleroma.User,
where: u.local,
update: [
set: [
notification_settings:

@ -1,68 +0,0 @@
# Pleroma: A lightweight social networking server
# Copyright © 2017-2021 Pleroma Authors <https://pleroma.social/>
# SPDX-License-Identifier: AGPL-3.0-only
defmodule Mix.Tasks.Pleroma.RefreshCounterCache do
@shortdoc "Refreshes counter cache"
use Mix.Task
alias Pleroma.Activity
alias Pleroma.CounterCache
alias Pleroma.Repo
import Ecto.Query
def run([]) do
Mix.Pleroma.start_pleroma()
instances =
Activity
|> distinct([a], true)
|> select([a], fragment("split_part(?, '/', 3)", a.actor))
|> Repo.all()
instances
|> Enum.with_index(1)
|> Enum.each(fn {instance, i} ->
counters = instance_counters(instance)
CounterCache.set(instance, counters)
Mix.Pleroma.shell_info(
"[#{i}/#{length(instances)}] Setting #{instance} counters: #{inspect(counters)}"
)
end)
Mix.Pleroma.shell_info("Done")
end
defp instance_counters(instance) do
counters = %{"public" => 0, "unlisted" => 0, "private" => 0, "direct" => 0}
Activity
|> where([a], fragment("(? ->> 'type'::text) = 'Create'", a.data))
|> where([a], fragment("split_part(?, '/', 3) = ?", a.actor, ^instance))
|> select(
[a],
{fragment(
"activity_visibility(?, ?, ?)",
a.actor,
a.recipients,
a.data
), count(a.id)}
)
|> group_by(
[a],
fragment(
"activity_visibility(?, ?, ?)",
a.actor,
a.recipients,
a.data
)
)
|> Repo.all(timeout: :timer.minutes(30))
|> Enum.reduce(counters, fn {visibility, count}, acc ->
Map.put(acc, visibility, count)
end)
end
end

@ -5,6 +5,7 @@
defmodule Mix.Tasks.Pleroma.Uploads do
use Mix.Task
import Mix.Pleroma
import Ecto.Query
alias Pleroma.Upload
alias Pleroma.Uploaders.Local
require Logger
@ -97,4 +98,106 @@ defmodule Mix.Tasks.Pleroma.Uploads do
shell_info("Done!")
end
@doc """
Rewrite media domains to somewhere new
"""
def run(["rewrite_media_domain", from_url, to_url | args]) do
dry_run = Enum.member?(args, "--dry-run")
start_pleroma()
shell_info("Rewriting media domain from #{from_url} to #{to_url}")
shell_info("Dry run: #{dry_run}")
# actually selecting based on the attachment URL is stupidly difficult due to it being
# stored as a JSONB array in the `data` field... the easier way to do this is just to iterate through
# local posts
from(o in Pleroma.Object)
|> where(
[o],
fragment(
"?->'url'->0->>'href' LIKE ?
OR
?->'attachment'->0->'url'->0->>'href' LIKE ?",
o.data,
^"#{from_url}%",
o.data,
^"#{from_url}%"
)
)
|> Pleroma.Repo.chunk_stream(100, :batches, timeout: :infinity)
|> Stream.each(fn chunk ->
# now we just rewrite it and save it back, ezpz
chunk
|> Enum.each(fn object ->
new_data =
rewrite_url_object(Map.get(object, :id), Map.get(object, :data), from_url, to_url)
if dry_run do
shell_info(
"Dry run: would update object #{object.id} to new media domain (#{inspect(new_data)})"
)
else
Pleroma.Repo.update!(Ecto.Changeset.change(object, data: new_data))
shell_info("Updated object #{object.id} to new media domain")
end
end)
end)
|> Stream.run()
end
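For reference, a hypothetical invocation of this task could look like the following; the domains are placeholders, and running with `--dry-run` first lets you preview the rewrites:

```
$ mix pleroma.uploads rewrite_media_domain https://old-media.example.com https://new-media.example.com --dry-run
```

OTP installs would use `./bin/pleroma_ctl uploads rewrite_media_domain` with the same arguments instead.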
defp rewrite_url(id, url, from_url, to_url) do
new_uri = String.replace(url, from_url, to_url)
check = URI.parse(new_uri)
case check do
%URI{scheme: nil, host: nil} ->
raise("Invalid URL after rewriting: #{new_uri} (object ID: #{id})")
_ ->
new_uri
end
end
# The base object - we're looking for this, it has the actual url
defp rewrite_url_object(id, %{"type" => "Link", "href" => href} = link, from_url, to_url) do
Map.put(link, "href", rewrite_url(id, href, from_url, to_url))
end
defp rewrite_url_object(id, %{"type" => type, "url" => urls} = object, from_url, to_url)
when type in ["Document", "Image"] do
# Document and Image contain url field, which will always be an array of links
Map.put(
object,
"url",
Enum.map(
urls,
fn url -> rewrite_url_object(id, url, from_url, to_url) end
)
)
end
defp rewrite_url_object(
id,
%{"type" => _type, "attachment" => attachments} = object,
from_url,
to_url
) do
# Note will contain an attachment field, which is an array of documents
Map.put(
object,
"attachment",
Enum.map(attachments, fn attachment ->
rewrite_url_object(id, attachment, from_url, to_url)
end)
)
end
defp rewrite_url_object(
_id,
object,
_,
_
) do
shell_info(inspect(object))
raise("Unhandled object format!")
end
end

@ -262,7 +262,7 @@ defmodule Mix.Tasks.Pleroma.User do
Pleroma.User.Query.build(%{
external: true,
is_active: true
})
|> refetch_public_keys()
end
@ -408,7 +408,7 @@ defmodule Mix.Tasks.Pleroma.User do
Pleroma.User.Query.build(%{
local: true,
is_active: true,
is_moderator: false,
is_admin: false,
invisible: false
@ -426,7 +426,7 @@ defmodule Mix.Tasks.Pleroma.User do
Pleroma.User.Query.build(%{
local: true,
is_active: true,
is_moderator: false,
is_admin: false,
invisible: false

@ -59,6 +59,8 @@ defmodule Pleroma.Activity.HTML do
object = Object.normalize(activity, fetch: false)
add_cache_key_for(activity.id, key)
# callback already produces :commit or :ignore tuples
HTML.ensure_scrubbed_html(content, scrubbers, object.data["fake"] || false, callback)
end)
end

@ -1,8 +1,15 @@
defmodule Pleroma.Akkoma.Translator do
@cachex Pleroma.Config.get([:cachex, :provider], Cachex)
def languages do
module = Pleroma.Config.get([:translator, :module])
@cachex.fetch!(:translations_cache, "languages:#{module}", fn _ ->
with {:ok, source_languages, dest_languages} <- module.languages() do
{:commit, {:ok, source_languages, dest_languages}}
else
{:error, err} -> {:ignore, {:error, err}}
end
end)
end
end

@ -1,5 +1,5 @@
defmodule Pleroma.Akkoma.Translators.ArgosTranslate do
@behaviour Pleroma.Akkoma.Translator.Provider
alias Pleroma.Config
@ -23,7 +23,7 @@ defmodule Pleroma.Akkoma.Translators.ArgosTranslate do
end
end
@impl Pleroma.Akkoma.Translator.Provider
def languages do
with {response, 0} <- safe_languages() do
langs =
@ -83,7 +83,7 @@ defmodule Pleroma.Akkoma.Translators.ArgosTranslate do
defp htmlify_response(string, _), do: string
@impl Pleroma.Akkoma.Translator.Provider
def translate(string, nil, to_language) do
# Akkoma's Pleroma-fe expects us to detect the source language automatically.
# Argos-translate doesn't have that option (yet?)
@ -106,4 +106,7 @@ defmodule Pleroma.Akkoma.Translators.ArgosTranslate do
{response, _} -> {:error, "ArgosTranslate failed to translate (#{response})"}
end
end
@impl Pleroma.Akkoma.Translator.Provider
def name, do: "Argos Translate"
end

@ -1,5 +1,5 @@
defmodule Pleroma.Akkoma.Translators.DeepL do
@behaviour Pleroma.Akkoma.Translator.Provider
alias Pleroma.HTTP
alias Pleroma.Config
@ -21,7 +21,7 @@ defmodule Pleroma.Akkoma.Translators.DeepL do
Config.get([:deepl, :tier])
end
@impl Pleroma.Akkoma.Translator.Provider
def languages do
with {:ok, %{status: 200} = source_response} <- do_languages("source"),
{:ok, %{status: 200} = dest_response} <- do_languages("target"),
@ -48,7 +48,7 @@ defmodule Pleroma.Akkoma.Translators.DeepL do
end
end
@impl Pleroma.Akkoma.Translator.Provider
def translate(string, from_language, to_language) do
with {:ok, %{status: 200} = response} <-
do_request(api_key(), tier(), string, from_language, to_language),
@ -97,4 +97,7 @@ defmodule Pleroma.Akkoma.Translators.DeepL do
]
)
end
@impl Pleroma.Akkoma.Translator.Provider
def name, do: "DeepL"
end

@ -1,5 +1,5 @@
defmodule Pleroma.Akkoma.Translators.LibreTranslate do
@behaviour Pleroma.Akkoma.Translator.Provider
alias Pleroma.Config
alias Pleroma.HTTP
@ -13,7 +13,7 @@ defmodule Pleroma.Akkoma.Translators.LibreTranslate do
Config.get([:libre_translate, :url])
end
@impl Pleroma.Akkoma.Translator.Provider
def languages do
with {:ok, %{status: 200} = response} <- do_languages(),
{:ok, body} <- Jason.decode(response.body) do
@ -30,7 +30,7 @@ defmodule Pleroma.Akkoma.Translators.LibreTranslate do
end
end
@impl Pleroma.Akkoma.Translator.Provider
def translate(string, from_language, to_language) do
with {:ok, %{status: 200} = response} <- do_request(string, from_language, to_language),
{:ok, body} <- Jason.decode(response.body) do
@ -79,4 +79,7 @@ defmodule Pleroma.Akkoma.Translators.LibreTranslate do
HTTP.get(to_string(url))
end
@impl Pleroma.Akkoma.Translator.Provider
def name, do: "LibreTranslate"
end

@ -0,0 +1,9 @@
defmodule Pleroma.Akkoma.Translator.Provider do
@callback translate(String.t(), String.t() | nil, String.t()) ::
{:ok, String.t(), String.t()} | {:error, any()}
@callback languages() ::
{:ok, [%{name: String.t(), code: String.t()}],
[%{name: String.t(), code: String.t()}]}
| {:error, any()}
@callback name() :: String.t()
end

@ -74,7 +74,7 @@ defmodule Pleroma.Application do
Pleroma.Web.Telemetry
] ++
elasticsearch_children() ++
task_children() ++
dont_run_in_test(@mix_env)
# See http://elixir-lang.org/docs/stable/elixir/Supervisor.html
@ -144,34 +144,90 @@ defmodule Pleroma.Application do
defp cachex_children do
[
build_cachex(
"used_captcha",
expiration: expiration(interval: seconds_valid_interval())
),
build_cachex(
"user",
expiration: expiration(default: 3_000, interval: 1_000),
hooks: [cachex_sched_limit(2500)]
),
build_cachex(
"object",
expiration: expiration(default: 3_000, interval: 1_000),
hooks: [cachex_sched_limit(2500)]
),
build_cachex(
"rich_media",
expiration: expiration(default: :timer.hours(2)),
hooks: [cachex_sched_limit(5000)]
),
build_cachex(
"scrubber",
hooks: [cachex_sched_limit(2500)]
),
build_cachex(
"scrubber_management",
hooks: [cachex_sched_limit(2500)]
),
build_cachex(
"idempotency",
expiration: expiration(default: :timer.hours(6), interval: :timer.minutes(1)),
hooks: [cachex_sched_limit(2500, [], frequency: :timer.minutes(1))]
),
build_cachex(
"web_resp",
hooks: [cachex_sched_limit(2500)]
),
build_cachex(
"emoji_packs",
expiration: expiration(default: :timer.minutes(5), interval: :timer.minutes(1)),
hooks: [cachex_sched_limit(10)]
),
build_cachex(
"failed_proxy_url",
hooks: [cachex_sched_limit(2500)]
),
build_cachex(
"banned_urls",
expiration: expiration(default: :timer.hours(24 * 30)),
hooks: [cachex_sched_limit(5_000, [], frequency: :timer.minutes(5))]
),
build_cachex(
"translations",
expiration: expiration(default: :timer.hours(24 * 30)),
hooks: [cachex_sched_limit(2500)]
),
build_cachex(
"instances",
expiration: expiration(default: :timer.hours(24), interval: 1000),
hooks: [cachex_sched_limit(2500)]
),
build_cachex(
"rel_me",
expiration: expiration(default: :timer.hours(24 * 30)),
hooks: [cachex_sched_limit(300, [], frequency: :timer.minutes(1))]
),
build_cachex(
"host_meta",
expiration: expiration(default: :timer.minutes(120)),
hooks: [cachex_sched_limit(5000, [], frequency: :timer.minutes(1))]
),
build_cachex(
"http_backoff",
expiration: expiration(default: :timer.hours(24 * 30)),
hooks: [cachex_sched_limit(10_000, [], frequency: :timer.minutes(5))]
)
]
end
defp seconds_valid_interval,
do: :timer.seconds(Config.get!([Pleroma.Captcha, :seconds_valid]))
defp cachex_sched_limit(limit, prune_opts \\ [], sched_opts \\ []),
do: hook(module: Cachex.Limit.Scheduled, args: {limit, prune_opts, sched_opts})
@spec build_cachex(String.t(), keyword()) :: map()
def build_cachex(type, opts),
do: %{
@ -199,31 +255,29 @@ defmodule Pleroma.Application do
]
end
@spec task_children(atom()) :: [map()]
@spec task_children() :: [map()]
defp task_children() do
always =
[
%{
id: :web_push_init,
start: {Task, :start_link, [&Pleroma.Web.Push.init/0]},
restart: :temporary
}
]
defp task_children(:test) do
[
%{
id: :web_push_init,
start: {Task, :start_link, [&Pleroma.Web.Push.init/0]},
restart: :temporary
}
]
end
defp task_children(_) do
[
%{
id: :web_push_init,
start: {Task, :start_link, [&Pleroma.Web.Push.init/0]},
restart: :temporary
},
%{
id: :internal_fetch_init,
start: {Task, :start_link, [&Pleroma.Web.ActivityPub.InternalFetchActor.init/0]},
restart: :temporary
}
]
if @mix_env == :test do
always
else
[
%{
id: :internal_fetch_init,
start: {Task, :start_link, [&Pleroma.Web.ActivityPub.InternalFetchActor.init/0]},
restart: :temporary
}
| always
]
end
end
@spec elasticsearch_children :: [Pleroma.Search.Elasticsearch.Cluster]


@ -53,13 +53,15 @@ defmodule Pleroma.Bookmark do
end
@spec destroy(FlakeId.Ecto.CompatType.t(), FlakeId.Ecto.CompatType.t()) ::
{:ok, Bookmark.t()} | {:error, Changeset.t()}
:ok | {:error, any()}
def destroy(user_id, activity_id) do
from(b in Bookmark,
where: b.user_id == ^user_id,
where: b.activity_id == ^activity_id
)
|> Repo.one()
|> Repo.delete()
{cnt, _} =
from(b in Bookmark,
where: b.user_id == ^user_id,
where: b.activity_id == ^activity_id
)
|> Repo.delete_all()
if cnt >= 1, do: :ok, else: {:error, :not_found}
end
end


@ -97,7 +97,7 @@ defmodule Pleroma.Captcha do
defp mark_captcha_as_used(token) do
ttl = seconds_valid() |> :timer.seconds()
@cachex.put(:used_captcha_cache, token, true, ttl: ttl)
@cachex.put(:used_captcha_cache, token, true, expire: ttl)
end
defp method, do: Pleroma.Config.get!([__MODULE__, :method])


@ -22,6 +22,43 @@ defmodule Pleroma.Config.DeprecationWarnings do
"\n* `config :pleroma, :instance, :quarantined_instances` is now covered by `:pleroma, :mrf_simple, :reject`"}
]
def check_skip_thread_containment do
# The default in config/config.exs is "true" since 593b8b1e6a8502cca9bf5559b8bec86f172bbecb
# but when the default is retrieved in code the fallback is still "false"
uses_thread_visibility_filtering = !Config.get([:instance, :skip_thread_containment], false)
if uses_thread_visibility_filtering do
Logger.warning("""
!!!DEPRECATION WARNING!!!
Your config is explicitly enabling thread-based visibility containment by setting the following:
```
config :pleroma, :instance, skip_thread_containment: false
```
This feature comes with a very high performance overhead and is being considered for removal.
If you actually need or strongly prefer keeping it, speak up NOW(!) by filing a ticket at
https://akkoma.dev/AkkomaGang/akkoma/issues
Complaints made only after the removal has happened are much less likely to have any effect.
""")
end
end
def check_truncated_nodeinfo_in_accounts do
if !Config.get!([:instance, :filter_embedded_nodeinfo]) do
Logger.warning("""
!!!BUG WORKAROUND DETECTED!!!
Your config is explicitly disabling filtering of nodeinfo data embedded in other Masto API responses
config :pleroma, :instance, filter_embedded_nodeinfo: false
This setting will soon be removed. Any usage of it merely serves as a temporary workaround.
Make sure to file a bug telling us which problems you encountered and circumvented by setting this!
https://akkoma.dev/AkkomaGang/akkoma/issues
We can't fix bugs we don't know about.
""")
end
end
def check_exiftool_filter do
filters = Config.get([Pleroma.Upload]) |> Keyword.get(:filters, [])
@ -222,7 +259,8 @@ defmodule Pleroma.Config.DeprecationWarnings do
check_http_adapter(),
check_uploader_base_url_set(),
check_uploader_base_url_is_not_base_domain(),
check_exiftool_filter()
check_exiftool_filter(),
check_skip_thread_containment()
]
|> Enum.reduce(:ok, fn
:ok, :ok -> :ok


@ -7,7 +7,9 @@ defmodule Pleroma.ConfigDB do
import Ecto.Changeset
import Ecto.Query, only: [select: 3, from: 2]
import Pleroma.Web.Gettext
use Gettext,
backend: Pleroma.Web.Gettext
alias __MODULE__
alias Pleroma.Repo
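
The `use Gettext, backend: ...` form above follows Gettext ≥ 0.26, which deprecated importing the backend module directly in favour of an explicit backend option. A minimal sketch of the new convention (module names here are hypothetical, not from this codebase):

```elixir
# Hypothetical modules illustrating the Gettext >= 0.26 split between
# the backend definition and its callers.
defmodule MyApp.Gettext do
  use Gettext.Backend, otp_app: :my_app
end

defmodule MyApp.Greeter do
  # Callers now `use Gettext` with an explicit backend instead of
  # importing the backend module.
  use Gettext, backend: MyApp.Gettext

  def hello, do: gettext("Hello")
end
```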


@ -19,7 +19,8 @@ defmodule Pleroma.Constants do
"context_id",
"deleted_activity_id",
"pleroma_internal",
"generator"
"generator",
"voters"
]
)


@ -15,7 +15,6 @@ defmodule Pleroma.Conversation do
# This is the context ap id.
field(:ap_id, :string)
has_many(:participations, Participation)
has_many(:users, through: [:participations, :user])
timestamps()
end
@ -45,7 +44,11 @@ defmodule Pleroma.Conversation do
participation = Repo.preload(participation, :recipients)
if Enum.empty?(participation.recipients) do
recipients = User.get_all_by_ap_id(activity.recipients)
recipients =
[activity.actor | activity.recipients]
|> Enum.uniq()
|> User.get_all_by_ap_id()
RecipientShip.create(recipients, participation)
end
end
@ -64,15 +67,16 @@ defmodule Pleroma.Conversation do
ap_id when is_binary(ap_id) and byte_size(ap_id) > 0 <- object.data["context"],
{:ok, conversation} <- create_for_ap_id(ap_id) do
users = User.get_users_from_set(activity.recipients, local_only: false)
local_users = Enum.filter(users, & &1.local)
participations =
Enum.map(users, fn user ->
Enum.map(local_users, fn user ->
invisible_conversation = Enum.any?(users, &User.blocks?(user, &1))
opts = Keyword.put(opts, :invisible_conversation, invisible_conversation)
{:ok, participation} =
Participation.create_for_user_and_conversation(user, conversation, opts)
Participation.create_or_bump(user, conversation, activity.id, opts)
maybe_create_recipientships(participation, activity)
participation


@ -12,9 +12,12 @@ defmodule Pleroma.Conversation.Participation do
import Ecto.Changeset
import Ecto.Query
@type t() :: %__MODULE__{}
schema "conversation_participations" do
belongs_to(:user, User, type: FlakeId.Ecto.CompatType)
belongs_to(:conversation, Conversation)
field(:last_bump, FlakeId.Ecto.CompatType)
field(:read, :boolean, default: false)
field(:last_activity_id, FlakeId.Ecto.CompatType, virtual: true)
@ -24,24 +27,26 @@ defmodule Pleroma.Conversation.Participation do
timestamps()
end
def creation_cng(struct, params) do
defp creation_cng(struct, params) do
struct
|> cast(params, [:user_id, :conversation_id, :read])
|> validate_required([:user_id, :conversation_id])
|> cast(params, [:user_id, :conversation_id, :last_bump, :read])
|> validate_required([:user_id, :conversation_id, :last_bump])
end
def create_for_user_and_conversation(user, conversation, opts \\ []) do
def create_or_bump(user, conversation, status_id, opts \\ []) do
read = !!opts[:read]
invisible_conversation = !!opts[:invisible_conversation]
update_on_conflict =
if(invisible_conversation, do: [], else: [read: read])
|> Keyword.put(:updated_at, NaiveDateTime.utc_now())
|> Keyword.put(:last_bump, status_id)
%__MODULE__{}
|> creation_cng(%{
user_id: user.id,
conversation_id: conversation.id,
last_bump: status_id,
read: invisible_conversation || read
})
|> Repo.insert(
@ -51,7 +56,7 @@ defmodule Pleroma.Conversation.Participation do
)
end
def read_cng(struct, params) do
defp read_cng(struct, params) do
struct
|> cast(params, [:read])
|> validate_required([:read])
@ -99,43 +104,90 @@ defmodule Pleroma.Conversation.Participation do
{:ok, user, participations}
end
# used for tests
def mark_as_unread(participation) do
participation
|> read_cng(%{read: false})
|> Repo.update()
end
def for_user(user, params \\ %{}) do
def for_user_with_pagination(user, params \\ %{}) do
from(p in __MODULE__,
where: p.user_id == ^user.id,
order_by: [desc: p.updated_at],
preload: [conversation: [:users]]
preload: [:conversation]
)
|> restrict_recipients(user, params)
|> Pleroma.Pagination.fetch_paginated(params)
|> select([p], %{id: p.last_bump, entry: p})
|> Pleroma.Pagination.fetch_paginated(Map.put(params, :pagination_field, :last_bump))
end
def restrict_recipients(query, user, %{recipients: user_ids}) do
def preload_last_activity_id_and_filter(participations) when is_list(participations) do
participations
|> Enum.map(fn p -> load_last_activity_id(p) end)
|> Enum.filter(fn p -> p.last_activity_id end)
end
defp load_last_activity_id(%__MODULE__{} = participation) do
%{
participation
| last_activity_id: last_activity_id(participation)
}
end
@spec last_activity_id(t(), User.t() | nil) :: FlakeId.Ecto.CompatType.t() | nil
def last_activity_id(participation, user \\ nil)
def last_activity_id(
%__MODULE__{conversation: %Conversation{}} = participation,
user
) do
user =
if user && user.id == participation.user_id do
user
else
case participation.user do
%User{} -> participation.user
_ -> User.get_cached_by_id(participation.user_id)
end
end
ActivityPub.fetch_latest_direct_activity_id_for_context(
participation.conversation.ap_id,
%{
user: user,
blocking_user: user
}
)
end
def last_activity_id(%__MODULE__{} = participation, user) do
case Repo.preload(participation, :conversation) do
%{conversation: %Conversation{}} = p -> last_activity_id(p, user)
_ -> nil
end
end
defp restrict_recipients(query, user, %{recipients: user_ids}) do
user_binary_ids =
[user.id | user_ids]
|> Enum.uniq()
|> User.binary_id()
conversation_subquery =
__MODULE__
|> group_by([p], p.conversation_id)
recipient_subquery =
RecipientShip
|> group_by([r], r.participation_id)
|> having(
[p],
count(p.user_id) == ^length(user_binary_ids) and
fragment("array_agg(?) @> ?", p.user_id, ^user_binary_ids)
[r],
count(r.user_id) == ^length(user_binary_ids) and
fragment("array_agg(?) @> ?", r.user_id, ^user_binary_ids)
)
|> select([p], %{id: p.conversation_id})
|> select([r], %{pid: r.participation_id})
query
|> join(:inner, [p], c in subquery(conversation_subquery), on: p.conversation_id == c.id)
|> join(:inner, [p], r in subquery(recipient_subquery), on: p.id == r.pid)
end
def restrict_recipients(query, _, _), do: query
defp restrict_recipients(query, _, _), do: query
def for_user_and_conversation(user, conversation) do
from(p in __MODULE__,
@ -145,26 +197,6 @@ defmodule Pleroma.Conversation.Participation do
|> Repo.one()
end
def for_user_with_last_activity_id(user, params \\ %{}) do
for_user(user, params)
|> Enum.map(fn participation ->
activity_id =
ActivityPub.fetch_latest_direct_activity_id_for_context(
participation.conversation.ap_id,
%{
user: user,
blocking_user: user
}
)
%{
participation
| last_activity_id: activity_id
}
end)
|> Enum.reject(&is_nil(&1.last_activity_id))
end
def get(_, _ \\ [])
def get(nil, _), do: nil
@ -213,14 +245,6 @@ defmodule Pleroma.Conversation.Participation do
|> Repo.aggregate(:count, :id)
end
def unread_conversation_count_for_user(user) do
from(p in __MODULE__,
where: p.user_id == ^user.id,
where: not p.read,
select: %{count: count(p.id)}
)
end
def delete(%__MODULE__{} = participation) do
Repo.delete(participation)
end


@ -1,79 +0,0 @@
# Pleroma: A lightweight social networking server
# Copyright © 2017-2021 Pleroma Authors <https://pleroma.social/>
# SPDX-License-Identifier: AGPL-3.0-only
defmodule Pleroma.CounterCache do
alias Pleroma.CounterCache
alias Pleroma.Repo
use Ecto.Schema
import Ecto.Changeset
import Ecto.Query
schema "counter_cache" do
field(:instance, :string)
field(:public, :integer)
field(:unlisted, :integer)
field(:private, :integer)
field(:direct, :integer)
end
def changeset(struct, params) do
struct
|> cast(params, [:instance, :public, :unlisted, :private, :direct])
|> validate_required([:instance])
|> unique_constraint(:instance)
end
def get_by_instance(instance) do
CounterCache
|> select([c], %{
"public" => c.public,
"unlisted" => c.unlisted,
"private" => c.private,
"direct" => c.direct
})
|> where([c], c.instance == ^instance)
|> Repo.one()
|> case do
nil -> %{"public" => 0, "unlisted" => 0, "private" => 0, "direct" => 0}
val -> val
end
end
def get_sum do
CounterCache
|> select([c], %{
"public" => type(sum(c.public), :integer),
"unlisted" => type(sum(c.unlisted), :integer),
"private" => type(sum(c.private), :integer),
"direct" => type(sum(c.direct), :integer)
})
|> Repo.one()
end
def set(instance, values) do
params =
Enum.reduce(
["public", "private", "unlisted", "direct"],
%{"instance" => instance},
fn param, acc ->
Map.put_new(acc, param, Map.get(values, param, 0))
end
)
%CounterCache{}
|> changeset(params)
|> Repo.insert(
on_conflict: [
set: [
public: params["public"],
private: params["private"],
unlisted: params["unlisted"],
direct: params["direct"]
]
],
returning: true,
conflict_target: :instance
)
end
end


@ -4,7 +4,7 @@
defmodule Pleroma.Docs.Translator do
require Pleroma.Docs.Translator.Compiler
require Pleroma.Web.Gettext
use Gettext, backend: Pleroma.Web.Gettext
@before_compile Pleroma.Docs.Translator.Compiler
end


@ -7,6 +7,8 @@ defmodule Pleroma.Docs.Translator.Compiler do
@raw_config Pleroma.Config.Loader.read("config/description.exs")
@raw_descriptions @raw_config[:pleroma][:config_description]
require Gettext.Macros
defmacro __before_compile__(_env) do
strings =
__MODULE__.descriptions()
@ -21,7 +23,8 @@ defmodule Pleroma.Docs.Translator.Compiler do
ctxt = msgctxt_for(path, type)
quote do
Pleroma.Web.Gettext.dpgettext_noop(
Gettext.Macros.dpgettext_noop_with_backend(
Pleroma.Web.Gettext,
"config_descriptions",
unquote(ctxt),
unquote(string)


@ -5,12 +5,13 @@
defmodule Pleroma.Emails.UserEmail do
@moduledoc "User emails"
require Pleroma.Web.Gettext
require Pleroma.Web.GettextCompanion
use Gettext, backend: Pleroma.Web.Gettext
use Pleroma.Web, :mailer
alias Pleroma.Config
alias Pleroma.User
alias Pleroma.Web.Gettext
alias Pleroma.Web.GettextCompanion
import Swoosh.Email
import Phoenix.Swoosh, except: [render_body: 3]
@ -29,7 +30,7 @@ defmodule Pleroma.Emails.UserEmail do
@spec welcome(User.t(), map()) :: Swoosh.Email.t()
def welcome(user, opts \\ %{}) do
Gettext.with_locale_or_default user.language do
GettextCompanion.with_locale_or_default user.language do
new()
|> to(recipient(user))
|> from(Map.get(opts, :sender, sender()))
@ -37,7 +38,7 @@ defmodule Pleroma.Emails.UserEmail do
Map.get(
opts,
:subject,
Gettext.dpgettext(
dpgettext(
"static_pages",
"welcome email subject",
"Welcome to %{instance_name}!",
@ -49,7 +50,7 @@ defmodule Pleroma.Emails.UserEmail do
Map.get(
opts,
:html,
Gettext.dpgettext(
dpgettext(
"static_pages",
"welcome email html body",
"Welcome to %{instance_name}!",
@ -61,7 +62,7 @@ defmodule Pleroma.Emails.UserEmail do
Map.get(
opts,
:text,
Gettext.dpgettext(
dpgettext(
"static_pages",
"welcome email text body",
"Welcome to %{instance_name}!",
@ -73,11 +74,11 @@ defmodule Pleroma.Emails.UserEmail do
end
def password_reset_email(user, token) when is_binary(token) do
Gettext.with_locale_or_default user.language do
GettextCompanion.with_locale_or_default user.language do
password_reset_url = url(~p[/api/v1/pleroma/password_reset/#{token}])
html_body =
Gettext.dpgettext(
dpgettext(
"static_pages",
"password reset email body",
"""
@ -93,9 +94,7 @@ defmodule Pleroma.Emails.UserEmail do
new()
|> to(recipient(user))
|> from(sender())
|> subject(
Gettext.dpgettext("static_pages", "password reset email subject", "Password reset")
)
|> subject(dpgettext("static_pages", "password reset email subject", "Password reset"))
|> html_body(html_body)
end
end
@ -106,11 +105,11 @@ defmodule Pleroma.Emails.UserEmail do
to_email,
to_name \\ nil
) do
Gettext.with_locale_or_default user.language do
GettextCompanion.with_locale_or_default user.language do
registration_url = url(~p[/registration/#{user_invite_token.token}])
html_body =
Gettext.dpgettext(
dpgettext(
"static_pages",
"user invitation email body",
"""
@ -127,7 +126,7 @@ defmodule Pleroma.Emails.UserEmail do
|> to(recipient(to_email, to_name))
|> from(sender())
|> subject(
Gettext.dpgettext(
dpgettext(
"static_pages",
"user invitation email subject",
"Invitation to %{instance_name}",
@ -139,11 +138,11 @@ defmodule Pleroma.Emails.UserEmail do
end
def account_confirmation_email(user) do
Gettext.with_locale_or_default user.language do
GettextCompanion.with_locale_or_default user.language do
confirmation_url = url(~p[/api/account/confirm_email/#{user.id}/#{user.confirmation_token}])
html_body =
Gettext.dpgettext(
dpgettext(
"static_pages",
"confirmation email body",
"""
@ -159,7 +158,7 @@ defmodule Pleroma.Emails.UserEmail do
|> to(recipient(user))
|> from(sender())
|> subject(
Gettext.dpgettext(
dpgettext(
"static_pages",
"confirmation email subject",
"%{instance_name} account confirmation",
@ -171,9 +170,9 @@ defmodule Pleroma.Emails.UserEmail do
end
def approval_pending_email(user) do
Gettext.with_locale_or_default user.language do
GettextCompanion.with_locale_or_default user.language do
html_body =
Gettext.dpgettext(
dpgettext(
"static_pages",
"approval pending email body",
"""
@ -187,7 +186,7 @@ defmodule Pleroma.Emails.UserEmail do
|> to(recipient(user))
|> from(sender())
|> subject(
Gettext.dpgettext(
dpgettext(
"static_pages",
"approval pending email subject",
"Your account is awaiting approval"
@ -198,9 +197,9 @@ defmodule Pleroma.Emails.UserEmail do
end
def successful_registration_email(user) do
Gettext.with_locale_or_default user.language do
GettextCompanion.with_locale_or_default user.language do
html_body =
Gettext.dpgettext(
dpgettext(
"static_pages",
"successful registration email body",
"""
@ -216,7 +215,7 @@ defmodule Pleroma.Emails.UserEmail do
|> to(recipient(user))
|> from(sender())
|> subject(
Gettext.dpgettext(
dpgettext(
"static_pages",
"successful registration email subject",
"Account registered on %{instance_name}",
@ -234,7 +233,7 @@ defmodule Pleroma.Emails.UserEmail do
"""
@spec digest_email(User.t()) :: Swoosh.Email.t() | nil
def digest_email(user) do
Gettext.with_locale_or_default user.language do
GettextCompanion.with_locale_or_default user.language do
notifications = Pleroma.Notification.for_user_since(user, user.last_digest_emailed_at)
mentions =
@ -295,7 +294,7 @@ defmodule Pleroma.Emails.UserEmail do
|> to(recipient(user))
|> from(sender())
|> subject(
Gettext.dpgettext(
dpgettext(
"static_pages",
"digest email subject",
"Your digest from %{instance_name}",
@ -336,12 +335,12 @@ defmodule Pleroma.Emails.UserEmail do
def backup_is_ready_email(backup, admin_user_id \\ nil) do
%{user: user} = Pleroma.Repo.preload(backup, :user)
Gettext.with_locale_or_default user.language do
GettextCompanion.with_locale_or_default user.language do
download_url = Pleroma.Web.PleromaAPI.BackupView.download_url(backup)
html_body =
if is_nil(admin_user_id) do
Gettext.dpgettext(
dpgettext(
"static_pages",
"account archive email body - self-requested",
"""
@ -353,7 +352,7 @@ defmodule Pleroma.Emails.UserEmail do
else
admin = Pleroma.Repo.get(User, admin_user_id)
Gettext.dpgettext(
dpgettext(
"static_pages",
"account archive email body - admin requested",
"""
@ -369,7 +368,7 @@ defmodule Pleroma.Emails.UserEmail do
|> to(recipient(user))
|> from(sender())
|> subject(
Gettext.dpgettext(
dpgettext(
"static_pages",
"account archive email subject",
"Your account archive is ready"


@ -16,7 +16,7 @@ defmodule Pleroma.Emoji do
@ets __MODULE__.Ets
@ets_options [
:ordered_set,
:set,
:protected,
:named_table,
{:read_concurrency, true}
@ -25,6 +25,8 @@ defmodule Pleroma.Emoji do
defstruct [:code, :file, :tags, :safe_code, :safe_file]
@type t :: %__MODULE__{}
@doc "Build emoji struct"
def build({code, file, tags}) do
%__MODULE__{
@ -43,14 +45,14 @@ defmodule Pleroma.Emoji do
GenServer.start_link(__MODULE__, [], name: __MODULE__)
end
@doc "Reloads the emojis from disk."
@doc "Reloads the emojis from disk (asynchronous)"
@spec reload() :: :ok
def reload do
GenServer.call(__MODULE__, :reload)
GenServer.cast(__MODULE__, :reload)
end
@doc "Returns the path of the emoji `name`."
@spec get(String.t()) :: String.t() | nil
@doc "Returns the emoji struct of the given `name` if it exists."
@spec get(String.t()) :: t() | nil
def get(name) do
name =
if String.starts_with?(name, ":") do
@ -62,11 +64,23 @@ defmodule Pleroma.Emoji do
end
case :ets.lookup(@ets, name) do
[{_, path}] -> path
[{_, emoji}] -> emoji
_ -> nil
end
end
@doc "Updates or inserts new emoji (asynchronous)"
@spec add_or_update(t()) :: :ok
def add_or_update(%__MODULE__{} = emoji) do
GenServer.cast(__MODULE__, {:add, emoji})
end
@doc "Delete emoji with given shortcode if it exists (asynchronous)"
@spec delete(String.t()) :: :ok
def delete(code) do
GenServer.cast(__MODULE__, {:delete, code})
end
@spec exist?(String.t()) :: boolean()
def exist?(name), do: not is_nil(get(name))
@ -89,10 +103,14 @@ defmodule Pleroma.Emoji do
{:noreply, state}
end
@doc false
def handle_call(:reload, _from, state) do
update_emojis(Loader.load())
{:reply, :ok, state}
def handle_cast({:add, %__MODULE__{} = emoji}, state) do
:ets.insert(@ets, {emoji.code, emoji})
{:noreply, state}
end
def handle_cast({:delete, code}, state) do
:ets.delete(@ets, code)
{:noreply, state}
end
@doc false


@ -49,12 +49,15 @@ defmodule Pleroma.Emoji.Pack do
Path.join(dir, safe_path)
end
defp tags(%__MODULE__{} = pack), do: ["pack:" <> pack.name]
@spec create(String.t()) :: {:ok, t()} | {:error, File.posix()} | {:error, :empty_values}
def create(name) do
with :ok <- validate_not_empty([name]),
dir <- path_join_name_safe(emoji_path(), name),
:ok <- File.mkdir(dir) do
save_pack(%__MODULE__{
name: name,
path: dir,
pack_file: Path.join(dir, "pack.json")
})
@ -90,9 +93,13 @@ defmodule Pleroma.Emoji.Pack do
@spec delete(String.t()) ::
{:ok, [binary()]} | {:error, File.posix(), binary()} | {:error, :empty_values}
def delete(name) do
with :ok <- validate_not_empty([name]),
pack_path <- path_join_name_safe(emoji_path(), name) do
File.rm_rf(pack_path)
with {_, :ok} <- {:empty, validate_not_empty([name])},
{:ok, pack} <- load_pack(name) do
Enum.each(pack.files, fn {shortcode, _} -> Emoji.delete(shortcode) end)
File.rm_rf(pack.path)
else
{:empty, error} -> error
_ -> {:ok, []}
end
end
@ -142,8 +149,6 @@ defmodule Pleroma.Emoji.Pack do
{item, updated_pack}
end)
Emoji.reload()
{:ok, updated_pack}
after
File.rm_rf(tmp_dir)
@ -169,16 +174,14 @@ defmodule Pleroma.Emoji.Pack do
with :ok <- validate_not_empty([shortcode, filename]),
:ok <- validate_emoji_not_exists(shortcode),
{:ok, updated_pack} <- do_add_file(pack, shortcode, filename, file) do
Emoji.reload()
{:ok, updated_pack}
end
end
defp do_add_file(pack, shortcode, filename, file) do
with :ok <- save_file(file, pack, filename) do
pack
|> put_emoji(shortcode, filename)
|> save_pack()
with :ok <- save_file(file, pack, filename),
{:ok, pack} <- put_emoji(pack, shortcode, filename) do
{:ok, pack}
end
end
@ -188,7 +191,7 @@ defmodule Pleroma.Emoji.Pack do
with :ok <- validate_not_empty([shortcode]),
:ok <- remove_file(pack, shortcode),
{:ok, updated_pack} <- pack |> delete_emoji(shortcode) |> save_pack() do
Emoji.reload()
Emoji.delete(shortcode)
{:ok, updated_pack}
end
end
@ -203,9 +206,8 @@ defmodule Pleroma.Emoji.Pack do
{:ok, updated_pack} <-
pack
|> delete_emoji(shortcode)
|> put_emoji(new_shortcode, new_filename)
|> save_pack() do
Emoji.reload()
|> put_emoji(new_shortcode, new_filename) do
if shortcode != new_shortcode, do: Emoji.delete(shortcode)
{:ok, updated_pack}
end
end
@ -455,7 +457,7 @@ defmodule Pleroma.Emoji.Pack do
# if pack.json MD5 changes, the cache is not valid anymore
%{hash: hash, pack_data: result},
# Add a minute to cache time for every file in the pack
ttl: overall_ttl
expire: overall_ttl
)
result
@ -519,7 +521,17 @@ defmodule Pleroma.Emoji.Pack do
defp put_emoji(pack, shortcode, filename) do
files = Map.put(pack.files, shortcode, filename)
%{pack | files: files, files_count: length(Map.keys(files))}
pack = %{pack | files: files, files_count: length(Map.keys(files))}
url_path = path_join_name_safe("/emoji/", pack.name) |> path_join_safe(filename)
with {:ok, pack} <- save_pack(pack) do
{shortcode, url_path, tags(pack)}
|> Emoji.build()
|> Emoji.add_or_update()
{:ok, pack}
end
end
defp delete_emoji(pack, shortcode) do


@ -193,6 +193,12 @@ defmodule Pleroma.Filter do
end
end
defp escape_for_regex(plain_phrase) do
# Escape all active characters:
# .^$*+?()[{\|
Regex.replace(~r/[.^$*+?()\[{\\|]/, plain_phrase, fn m -> "\\" <> m end)
end
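
For reference, Elixir's standard library provides `Regex.escape/1`, which performs a comparable (somewhat broader) metacharacter escaping; the hand-rolled helper above keeps the escaped set explicit. A quick sketch:

```elixir
# Sketch: the stdlib alternative to hand-rolling metacharacter escaping.
phrase = "1+1 (approx.)"

# Regex.escape/1 backslash-prefixes regex metacharacters, so the
# phrase only matches itself literally:
Regex.match?(~r/#{Regex.escape(phrase)}/, "1+1 (approx.)")
```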
@spec compose_regex(User.t() | [t()], format()) :: String.t() | Regex.t() | nil
def compose_regex(user_or_filters, format \\ :postgres)
@ -207,7 +213,7 @@ defmodule Pleroma.Filter do
def compose_regex([_ | _] = filters, format) do
phrases =
filters
|> Enum.map(& &1.phrase)
|> Enum.map(&escape_for_regex(&1.phrase))
|> Enum.join("|")
case format do


@ -3,6 +3,7 @@
# SPDX-License-Identifier: AGPL-3.0-only
defmodule Pleroma.Formatter do
alias PhoenixHTMLHelpers.Tag
alias Pleroma.HTML
alias Pleroma.User
@ -37,10 +38,10 @@ defmodule Pleroma.Formatter do
nickname_text = get_nickname_text(nickname, opts)
:span
|> Phoenix.HTML.Tag.content_tag(
Phoenix.HTML.Tag.content_tag(
|> Tag.content_tag(
Tag.content_tag(
:a,
["@", Phoenix.HTML.Tag.content_tag(:span, nickname_text)],
["@", Tag.content_tag(:span, nickname_text)],
"data-user": id,
class: "u-url mention",
href: user_url,
@ -68,7 +69,7 @@ defmodule Pleroma.Formatter do
url = "#{Pleroma.Web.Endpoint.url()}/tag/#{tag}"
link =
Phoenix.HTML.Tag.content_tag(:a, tag_text,
Tag.content_tag(:a, tag_text,
class: "hashtag",
"data-tag": tag,
href: url,


@ -14,6 +14,8 @@ defmodule Pleroma.Frontend do
"build_dir" => opts[:build_dir]
}
explicit_source = !!(opts[:file] || opts[:build_dir] || opts[:build_url])
frontend_info =
[:frontends, :available, name]
|> Config.get(%{})
@ -28,6 +30,25 @@ defmodule Pleroma.Frontend do
raise "No ref given or configured"
end
if Map.get(frontend_info, "blind_trust", false) !== true do
bugtracker = frontend_info["bugtracker"]
unless bugtracker || explicit_source do
raise "Configured third-party frontend without a bugtracker; refusing install."
end
bugtracker = bugtracker || "the external frontend developers"
Logger.warning("""
!!!!!!!!
You are installing a third-party frontend not vetted by the Akkoma team.
THERE ARE NO GUARANTEES ABOUT SAFETY AND FUNCTIONALITY!
Do NOT report problems to Akkoma, instead
all bugs must be reported to #{bugtracker}
!!!!!!!!
""")
end
dest = Path.join([dir(), name, ref])
label = "#{name} (#{ref})"
@ -69,7 +90,7 @@ defmodule Pleroma.Frontend do
end
end
def unzip(zip, dest) do
defp unzip(zip, dest) do
File.rm_rf!(dest)
File.mkdir_p!(dest)


@ -61,12 +61,7 @@ defmodule Pleroma.HTTP do
options = options |> Keyword.delete(:params)
headers = maybe_add_user_agent(headers)
client =
Tesla.client([
Tesla.Middleware.FollowRedirects,
Pleroma.HTTP.Middleware.HTTPSignature,
Tesla.Middleware.Telemetry
])
client = build_client(method)
Logger.debug("Outbound: #{method} #{url}")
@ -84,6 +79,37 @@ defmodule Pleroma.HTTP do
{:error, :fetch_error}
end
defp build_client(method) do
# Order of middlewares matters!
# We start construction with the middlewares _last_ to run
# on outgoing requests (and first on incoming responses).
# This allows using more efficient list prepending.
middlewares = [Tesla.Middleware.Telemetry]
# XXX: just like the user-agent header below, our current mocks can't handle extra headers
# and would break if we used the decompression middleware during tests.
# The :test condition can and should be removed once mocks are fixed.
#
# HEAD responses won't contain a body to compress anyway and we sometimes use
# HEAD requests to determine whether a remote resource is within size limits before fetching it.
# If the server would send a compressed response however, Content-Length will be the size of
# the _compressed_ response body skewing results.
middlewares =
if method != :head and @mix_env != :test do
[Tesla.Middleware.DecompressResponse | middlewares]
else
middlewares
end
middlewares = [
Tesla.Middleware.FollowRedirects,
Pleroma.HTTP.Middleware.HTTPSignature | middlewares
]
Tesla.client(middlewares)
end
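
The prepend-based construction above relies on Tesla running middlewares in list order on outgoing requests (and in reverse on responses); a sketch of how prepending composes the final order:

```elixir
# Sketch: building a Tesla middleware stack back-to-front via prepending.
middlewares = [Tesla.Middleware.Telemetry]

# Conditionally prepend, so this middleware runs earlier on requests:
middlewares = [Tesla.Middleware.DecompressResponse | middlewares]

middlewares = [Tesla.Middleware.FollowRedirects | middlewares]

# middlewares is now:
#   [Tesla.Middleware.FollowRedirects,
#    Tesla.Middleware.DecompressResponse,
#    Tesla.Middleware.Telemetry]
client = Tesla.client(middlewares)
```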
# XXX: our test mocks are (too) strict about headers and cannot handle user-agent atm
if @mix_env == :test do
defp maybe_add_user_agent(headers) do
with true <- Pleroma.Config.get([:http, :send_user_agent]) do


@ -29,13 +29,11 @@ defmodule Pleroma.HTTP.AdapterHelper do
conn_max_idle_time: Config.get!([:http, :receive_timeout]),
protocols: Config.get!([:http, :protocols]),
conn_opts: [
# Do NOT add cacerts here as this will cause issues for plain HTTP connections!
# (when we upgrade our deps to Mint >= 1.6.0 we can also explicitly enable "inet4: true")
transport_opts: [inet6: true],
# up to at least version 0.20.0, Finch leaves server_push enabled by default for HTTP2,
# but will actually raise an exception when receiving such a response. Tell servers we don't want it.
# see: https://github.com/sneako/finch/issues/325
client_settings: [enable_push: false]
transport_opts: [
inet6: true,
inet4: true,
cacerts: :public_key.cacerts_get()
]
]
]
}


@ -94,7 +94,7 @@ defmodule Pleroma.HTTP.Backoff do
log_ratelimit(status, host, timestamp)
ttl = Timex.diff(timestamp, DateTime.utc_now(), :seconds)
# we will cache the host for 5 minutes
@cachex.put(@backoff_cache, host, true, ttl: ttl)
@cachex.put(@backoff_cache, host, true, expire: ttl)
{:error, :ratelimit}
_ ->


@ -16,20 +16,6 @@ defmodule Pleroma.HTTP.Middleware.HTTPSignature do
(Note: the third argument holds static middleware options from client creation)
"""
@doc """
If logging raw Tesla.Env use this if you wish to redact signing key details
"""
def redact_keys(env) do
case get_in(env, [:opts, :httpsig, :signing_key]) do
nil -> env
key -> put_in(env, [:opts, :httpsig, :signing_key], redact_key_details(key))
end
end
defp redact_key_details(%SigningKey{key_id: id}), do: id
defp redact_key_details(key), do: key
@impl true
def call(env, next, _options) do
env = maybe_sign(env)


@ -78,7 +78,7 @@ defmodule Pleroma.Marker do
defp get_marker(user, timeline) do
case Repo.find_resource(get_query(user, timeline)) do
{:ok, marker} -> %__MODULE__{marker | user: user}
{:ok, %__MODULE__{} = marker} -> %__MODULE__{marker | user: user}
_ -> %__MODULE__{timeline: timeline, user_id: user.id}
end
end


@ -54,6 +54,7 @@ defmodule Pleroma.MFA do
end
@doc false
@spec fetch_settings(User.t()) :: Settings.t()
def fetch_settings(%User{} = user) do
user.multi_factor_authentication_settings || %Settings{}
end


@ -8,7 +8,8 @@ defmodule Pleroma.MFA.Changeset do
alias Pleroma.User
def disable(%Ecto.Changeset{} = changeset, force \\ false) do
settings =
%Settings{} =
settings =
changeset
|> Ecto.Changeset.apply_changes()
|> MFA.fetch_settings()
@ -22,18 +23,18 @@ defmodule Pleroma.MFA.Changeset do
def disable_totp(%User{multi_factor_authentication_settings: settings} = user) do
user
|> put_change(%Settings{settings | totp: %Settings.TOTP{}})
|> put_change(%{settings | totp: %Settings.TOTP{}})
end
def confirm_totp(%User{multi_factor_authentication_settings: settings} = user) do
totp_settings = %Settings.TOTP{settings.totp | confirmed: true}
totp_settings = %{settings.totp | confirmed: true}
user
|> put_change(%Settings{settings | totp: totp_settings, enabled: true})
|> put_change(%{settings | totp: totp_settings, enabled: true})
end
def setup_totp(%User{} = user, attrs) do
mfa_settings = MFA.fetch_settings(user)
%Settings{} = mfa_settings = MFA.fetch_settings(user)
totp_settings =
%Settings.TOTP{}
@ -45,7 +46,7 @@ defmodule Pleroma.MFA.Changeset do
def cast_backup_codes(%User{} = user, codes) do
user
|> put_change(%Settings{
|> put_change(%{
user.multi_factor_authentication_settings
| backup_codes: codes
})


@ -15,7 +15,6 @@ defmodule Pleroma.Notification do
alias Pleroma.Repo
alias Pleroma.ThreadMute
alias Pleroma.User
alias Pleroma.Web.CommonAPI
alias Pleroma.Web.CommonAPI.Utils
alias Pleroma.Web.Push
alias Pleroma.Web.Streamer
@ -388,40 +387,46 @@ defmodule Pleroma.Notification do
end
end
@spec create_notifications(Activity.t(), keyword()) :: {:ok, [Notification.t()] | []}
def create_notifications(activity, options \\ [])
@doc """
Create notifications for the given Activity in the database, but does NOT send them to streams or webpush.
On success returns an :ok triple with non-muted notifications in the second position and
muted (i.e. likely not supposed to be pro-actively sent) notifications in the third position.
"""
@spec create_notifications(Activity.t()) ::
{:ok, [Notification.t()] | [], [Notification.t()] | []}
def create_notifications(activity)
def create_notifications(%Activity{data: %{"to" => _, "type" => "Create"}} = activity, options) do
def create_notifications(%Activity{data: %{"to" => _, "type" => "Create"}} = activity) do
object = Object.normalize(activity, fetch: false)
if object && object.data["type"] == "Answer" do
{:ok, []}
{:ok, [], []}
else
do_create_notifications(activity, options)
do_create_notifications(activity)
end
end
def create_notifications(%Activity{data: %{"type" => type}} = activity, options)
def create_notifications(%Activity{data: %{"type" => type}} = activity)
when type in ["Follow", "Like", "Announce", "Move", "EmojiReact", "Flag", "Update"] do
do_create_notifications(activity, options)
do_create_notifications(activity)
end
def create_notifications(_, _), do: {:ok, []}
defp do_create_notifications(%Activity{} = activity, options) do
do_send = Keyword.get(options, :do_send, true)
def create_notifications(_), do: {:ok, [], []}
defp do_create_notifications(%Activity{} = activity) do
{enabled_receivers, disabled_receivers} = get_notified_from_activity(activity)
potential_receivers = enabled_receivers ++ disabled_receivers
notifications =
Enum.map(potential_receivers, fn user ->
do_send = do_send && user in enabled_receivers
create_notification(activity, user, do_send: do_send)
end)
notifications_active =
enabled_receivers
|> Enum.map(&create_notification(activity, &1))
|> Enum.reject(&is_nil/1)
{:ok, notifications}
notifications_silent =
disabled_receivers
|> Enum.map(&create_notification(activity, &1, seen: true))
|> Enum.reject(&is_nil/1)
{:ok, notifications_active, notifications_silent}
end
defp type_from_activity(%{data: %{"type" => type}} = activity) do
@@ -467,9 +472,9 @@ defmodule Pleroma.Notification do
defp type_from_activity_object(%{data: %{"type" => "Create"}}), do: "mention"
# TODO move to sql, too.
def create_notification(%Activity{} = activity, %User{} = user, opts \\ []) do
do_send = Keyword.get(opts, :do_send, true)
defp create_notification(%Activity{} = activity, %User{} = user, opts \\ []) do
type = Keyword.get(opts, :type, type_from_activity(activity))
seen = Keyword.get(opts, :seen, false)
unless skip?(activity, user, opts) do
{:ok, %{notification: notification}} =
@@ -477,17 +482,12 @@ defmodule Pleroma.Notification do
|> Multi.insert(:notification, %Notification{
user_id: user.id,
activity: activity,
seen: mark_as_read?(activity, user),
seen: seen,
type: type
})
|> Marker.multi_set_last_read_id(user, "notifications")
|> Repo.transaction()
if do_send do
Streamer.stream(["user", "user:notification"], notification)
Push.send(notification)
end
notification
end
end
@@ -678,6 +678,12 @@ defmodule Pleroma.Notification do
end
end
def skip?(:internal, %Activity{} = activity, _user, _opts) do
actor = activity.data["actor"]
user = User.get_cached_by_ap_id(actor)
User.is_internal_user?(user)
end
def skip?(:invisible, %Activity{} = activity, _user, _opts) do
actor = activity.data["actor"]
user = User.get_cached_by_ap_id(actor)
@@ -740,11 +746,6 @@ defmodule Pleroma.Notification do
def skip?(_type, _activity, _user, _opts), do: false
def mark_as_read?(activity, target_user) do
user = Activity.user_actor(activity)
User.mutes_user?(target_user, user) || CommonAPI.thread_muted?(target_user, activity)
end
def for_user_and_activity(user, activity) do
from(n in __MODULE__,
where: n.user_id == ^user.id,
@@ -764,4 +765,12 @@ defmodule Pleroma.Notification do
)
|> Repo.update_all(set: [seen: true])
end
@spec send(list(Notification.t())) :: :ok
def send(notifications) do
Enum.each(notifications, fn notification ->
Streamer.stream(["user", "user:notification"], notification)
Push.send(notification)
end)
end
end
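The refactor above separates notification creation from delivery: `create_notifications/1` now returns active and silent notifications as an `{:ok, active, silent}` triple, and streaming/push happens later through the new `send/1`. A simplified stand-in using plain maps instead of real Notification structs (not the actual Pleroma code):

```elixir
defmodule NotifySketch do
  # Partition receivers: enabled ones get unseen notifications,
  # disabled (muted) ones are created pre-seen and not delivered.
  def create_notifications(activity, enabled, disabled) do
    active = Enum.map(enabled, &%{user: &1, activity: activity, seen: false})
    silent = Enum.map(disabled, &%{user: &1, activity: activity, seen: true})
    {:ok, active, silent}
  end

  # Delivery is now an explicit second step the caller opts into.
  def send(notifications) do
    Enum.each(notifications, fn n -> IO.puts("deliver to #{n.user}") end)
  end
end
```

Splitting the two steps lets callers (e.g. the federation pipeline) persist notifications transactionally and decide afterwards which ones to push.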

View file

@@ -144,7 +144,7 @@ defmodule Pleroma.Object do
Logger.debug("Backtrace: #{inspect(Process.info(:erlang.self(), :current_stacktrace))}")
end
def normalize(_, options \\ [fetch: false, id_only: false])
def normalize(_, options \\ [fetch: false])
# If we pass an Activity to Object.normalize(), we can try to use the preloaded object.
# Use this whenever possible, especially when walking graphs in an O(N) loop!
@@ -173,9 +173,6 @@ defmodule Pleroma.Object do
def normalize(ap_id, options) when is_binary(ap_id) do
cond do
Keyword.get(options, :id_only) ->
ap_id
Keyword.get(options, :fetch) ->
case Fetcher.fetch_object_from_id(ap_id, options) do
{:ok, object} -> object

View file

@@ -10,6 +10,7 @@ defmodule Pleroma.Object.Fetcher do
alias Pleroma.Object.Containment
alias Pleroma.Repo
alias Pleroma.Web.ActivityPub.InternalFetchActor
alias Pleroma.Web.ActivityPub.MRF
alias Pleroma.Web.ActivityPub.ObjectValidator
alias Pleroma.Web.ActivityPub.Transmogrifier
alias Pleroma.Web.Federator
@@ -138,10 +139,7 @@ defmodule Pleroma.Object.Fetcher do
{:valid_uri_scheme, true} <-
{:valid_uri_scheme, uri.scheme == "http" or uri.scheme == "https"},
# If we have instance restrictions, apply them here to prevent fetching from unwanted instances
{:mrf_reject_check, {:ok, nil}} <-
{:mrf_reject_check, Pleroma.Web.ActivityPub.MRF.SimplePolicy.check_reject(uri)},
{:mrf_accept_check, {:ok, _}} <-
{:mrf_accept_check, Pleroma.Web.ActivityPub.MRF.SimplePolicy.check_accept(uri)},
{_, {:ok, _}} <- {:mrf_check, maybe_restrict_uri_mrf(uri)},
{_, nil} <- {:fetch_object, Object.get_cached_by_ap_id(id)},
{_, true} <- {:allowed_depth, Federator.allowed_thread_distance?(options[:depth])},
{_, {:ok, data}} <- {:fetch, fetch_and_contain_remote_object_from_id(id)},
@@ -161,11 +159,7 @@ defmodule Pleroma.Object.Fetcher do
log_fetch_error(id, e)
{:error, :invalid_uri_scheme}
{:mrf_reject_check, _} = e ->
log_fetch_error(id, e)
{:reject, :mrf}
{:mrf_accept_check, _} = e ->
{:mrf_check, _} = e ->
log_fetch_error(id, e)
{:reject, :mrf}
@@ -213,6 +207,17 @@ defmodule Pleroma.Object.Fetcher do
Logger.error("Object rejected while fetching #{id} #{inspect(error)}")
end
defp maybe_restrict_uri_mrf(uri) do
with {:enabled, true} <- {:enabled, MRF.SimplePolicy in MRF.get_policies()},
{:ok, _} <- MRF.SimplePolicy.check_reject(uri),
{:ok, _} <- MRF.SimplePolicy.check_accept(uri) do
{:ok, nil}
else
{:enabled, false} -> {:ok, nil}
{:reject, reason} -> {:reject, reason}
end
end
defp prepare_activity_params(data) do
%{
"type" => "Create",
@@ -298,10 +303,7 @@ defmodule Pleroma.Object.Fetcher do
with {:valid_uri_scheme, true} <- {:valid_uri_scheme, String.starts_with?(id, "http")},
%URI{} = uri <- URI.parse(id),
{:mrf_reject_check, {:ok, nil}} <-
{:mrf_reject_check, Pleroma.Web.ActivityPub.MRF.SimplePolicy.check_reject(uri)},
{:mrf_accept_check, {:ok, _}} <-
{:mrf_accept_check, Pleroma.Web.ActivityPub.MRF.SimplePolicy.check_accept(uri)},
{_, {:ok, _}} <- {:mrf_check, maybe_restrict_uri_mrf(uri)},
{:local_fetch, :ok} <- {:local_fetch, Containment.contain_local_fetch(id)},
{:ok, final_id, body} <- get_object(id),
# a canonical ID shouldn't be a redirect
@@ -422,7 +424,7 @@ defmodule Pleroma.Object.Fetcher do
# connection/protocol-related error
{:ok, %Tesla.Env{} = env} ->
{:error, {:http_error, :connect, Pleroma.HTTP.Middleware.HTTPSignature.redact_keys(env)}}
{:error, {:http_error, :connect, env}}
{:error, e} ->
{:error, e}
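The two separate SimplePolicy reject/accept checks above are folded into one `maybe_restrict_uri_mrf/1` gate, which is additionally skipped when SimplePolicy is not among the active MRF policies. A self-contained sketch of that control flow, with the policy checks passed in as functions (names here are illustrative):

```elixir
defmodule MrfGateSketch do
  # Single gate: consult the policy only when it is enabled;
  # a disabled policy means there is nothing to enforce.
  def maybe_restrict(uri, enabled?, check_reject, check_accept) do
    with {:enabled, true} <- {:enabled, enabled?},
         {:ok, _} <- check_reject.(uri),
         {:ok, _} <- check_accept.(uri) do
      {:ok, nil}
    else
      {:enabled, false} -> {:ok, nil}
      {:reject, reason} -> {:reject, reason}
    end
  end
end
```

Collapsing both checks into one `with` chain also lets the caller pattern-match a single `{_, {:ok, _}} <- {:mrf_check, ...}` step instead of two.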

View file

@@ -97,6 +97,9 @@ defmodule Pleroma.Pagination do
defp do_unwrap([], acc), do: Enum.reverse(acc)
defp cast_params(params) do
# Ecto doesn't support atom types
pfield = params[:pagination_field] || :id
param_types = %{
min_id: params[:id_type] || :string,
since_id: params[:id_type] || :string,
@@ -108,54 +111,54 @@ defmodule Pleroma.Pagination do
order_asc: :boolean
}
params = Map.delete(params, :id_type)
params = Map.drop(params, [:id_type, :pagination_field])
changeset = cast({%{}, param_types}, params, Map.keys(param_types))
changeset.changes
Map.put(changeset.changes, :pagination_field, pfield)
end
defp order_statement(query, table_binding, :asc) do
defp order_statement(query, table_binding, :asc, %{pagination_field: fname}) do
order_by(
query,
[{u, table_position(query, table_binding)}],
fragment("? asc nulls last", u.id)
fragment("? asc nulls last", field(u, ^fname))
)
end
defp order_statement(query, table_binding, :desc) do
defp order_statement(query, table_binding, :desc, %{pagination_field: fname}) do
order_by(
query,
[{u, table_position(query, table_binding)}],
fragment("? desc nulls last", u.id)
fragment("? desc nulls last", field(u, ^fname))
)
end
defp restrict(query, :min_id, %{min_id: min_id}, table_binding) do
where(query, [{q, table_position(query, table_binding)}], q.id > ^min_id)
defp restrict(query, :min_id, %{min_id: min_id, pagination_field: fname}, table_binding) do
where(query, [{q, table_position(query, table_binding)}], field(q, ^fname) > ^min_id)
end
defp restrict(query, :since_id, %{since_id: since_id}, table_binding) do
where(query, [{q, table_position(query, table_binding)}], q.id > ^since_id)
defp restrict(query, :since_id, %{since_id: since_id, pagination_field: fname}, table_binding) do
where(query, [{q, table_position(query, table_binding)}], field(q, ^fname) > ^since_id)
end
defp restrict(query, :max_id, %{max_id: max_id}, table_binding) do
where(query, [{q, table_position(query, table_binding)}], q.id < ^max_id)
defp restrict(query, :max_id, %{max_id: max_id, pagination_field: fname}, table_binding) do
where(query, [{q, table_position(query, table_binding)}], field(q, ^fname) < ^max_id)
end
defp restrict(query, :order, %{skip_order: true}, _), do: query
defp restrict(%{order_bys: [_ | _]} = query, :order, %{skip_extra_order: true}, _), do: query
defp restrict(query, :order, %{min_id: _}, table_binding) do
order_statement(query, table_binding, :asc)
defp restrict(query, :order, %{min_id: _} = options, table_binding) do
order_statement(query, table_binding, :asc, options)
end
defp restrict(query, :order, %{max_id: _}, table_binding) do
order_statement(query, table_binding, :desc)
defp restrict(query, :order, %{max_id: _} = options, table_binding) do
order_statement(query, table_binding, :desc, options)
end
defp restrict(query, :order, options, table_binding) do
dir = if options[:order_asc], do: :asc, else: :desc
order_statement(query, table_binding, dir)
order_statement(query, table_binding, dir, options)
end
defp restrict(query, :offset, %{offset: offset}, _table_binding) do
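With the hunks above, keyset pagination compares and orders on a configurable `:pagination_field` (defaulting to `:id`) via `field(q, ^fname)` instead of the hard-coded `q.id`. Outside Ecto, the same idea over plain maps looks like this (a sketch, not the actual query code):

```elixir
defmodule PageSketch do
  # max_id page: rows strictly below the cursor, newest first,
  # compared on an arbitrary field instead of a fixed :id.
  def page(rows, fname, %{max_id: max}, limit) do
    rows
    |> Enum.filter(&(Map.fetch!(&1, fname) < max))
    |> Enum.sort_by(&Map.fetch!(&1, fname), :desc)
    |> Enum.take(limit)
  end
end
```

Threading the field name through `cast_params` (rather than each call site) keeps all `restrict/4` and `order_statement/4` clauses agnostic of which column the cursor refers to.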

View file

@@ -109,7 +109,9 @@ defmodule Pleroma.ReverseProxy do
with {:ok, nil} <- @cachex.get(:failed_proxy_url_cache, url),
{:ok, status, headers, body} <- request(method, url, req_headers, client_opts),
:ok <-
header_length_constraint(
check_length_constraint(
method,
body,
headers,
Keyword.get(opts, :max_body_length, @max_body_length)
) do
@@ -342,7 +344,9 @@ defmodule Pleroma.ReverseProxy do
List.keystore(headers, "content-security-policy", 0, {"content-security-policy", "sandbox"})
end
defp header_length_constraint(headers, limit) when is_integer(limit) and limit > 0 do
defp check_length_constraint(_, _, _, limit) when not is_integer(limit) or limit <= 0, do: :ok
defp check_length_constraint(:head, _, headers, limit) do
with {_, size} <- List.keyfind(headers, "content-length", 0),
{size, _} <- Integer.parse(size),
true <- size <= limit do
@@ -356,7 +360,15 @@ defmodule Pleroma.ReverseProxy do
end
end
defp header_length_constraint(_, _), do: :ok
defp check_length_constraint(_, body, _, limit) when is_binary(body) do
if byte_size(body) <= limit do
:ok
else
{:error, :body_too_large}
end
end
defp check_length_constraint(_, _, _, _), do: :ok
defp track_failed_url(url, error, opts) do
ttl =
@@ -366,6 +378,6 @@ defmodule Pleroma.ReverseProxy do
nil
end
@cachex.put(:failed_proxy_url_cache, url, true, ttl: ttl)
@cachex.put(:failed_proxy_url_cache, url, true, expire: ttl)
end
end
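The renamed `check_length_constraint` now receives the method and body as well: HEAD responses carry no body, so only the `content-length` header can be checked, while other methods verify the actual body size. A standalone sketch of that dispatch (the error atom for the header branch is assumed here, since the hunk above truncates it):

```elixir
defmodule LengthCheckSketch do
  # No (positive integer) limit configured: accept anything.
  def check(_method, _body, _headers, limit)
      when not is_integer(limit) or limit <= 0,
      do: :ok

  # HEAD: no body to measure, trust the content-length header.
  def check(:head, _body, headers, limit) do
    with {_, size} <- List.keyfind(headers, "content-length", 0),
         {size, _} <- Integer.parse(size),
         true <- size <= limit do
      :ok
    else
      _ -> {:error, :body_too_large}
    end
  end

  # Anything else with a binary body: measure it directly.
  def check(_method, body, _headers, limit) when is_binary(body) do
    if byte_size(body) <= limit, do: :ok, else: {:error, :body_too_large}
  end

  def check(_, _, _, _), do: :ok
end
```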

View file

@@ -7,7 +7,6 @@ defmodule Pleroma.Stats do
import Ecto.Query
alias Pleroma.CounterCache
alias Pleroma.Repo
alias Pleroma.User
alias Pleroma.Instances.Instance
@@ -107,15 +106,6 @@
}
end
@spec get_status_visibility_count(String.t() | nil) :: map()
def get_status_visibility_count(instance \\ nil) do
if is_nil(instance) do
CounterCache.get_sum()
else
CounterCache.get_by_instance(instance)
end
end
@impl true
def handle_continue(:calculate_stats, _) do
stats = calculate_stat_data()

View file

@@ -82,7 +82,7 @@ defmodule Pleroma.Upload do
def store(upload, opts \\ []) do
opts = get_opts(opts)
with {:ok, upload} <- prepare_upload(upload, opts),
with {:ok, %__MODULE__{} = upload} <- prepare_upload(upload, opts),
upload = %__MODULE__{upload | path: upload.path || "#{upload.id}/#{upload.name}"},
{:ok, upload} <- Pleroma.Upload.Filter.filter(opts.filters, upload),
description = Map.get(upload, :description) || "",

View file

@@ -3,7 +3,8 @@
# SPDX-License-Identifier: AGPL-3.0-only
defmodule Pleroma.Uploaders.Uploader do
import Pleroma.Web.Gettext
use Gettext,
backend: Pleroma.Web.Gettext
@mix_env Mix.env()

View file

@@ -31,6 +31,7 @@ defmodule Pleroma.User do
alias Pleroma.Registration
alias Pleroma.Repo
alias Pleroma.User
alias Pleroma.User.Fetcher
alias Pleroma.UserRelationship
alias Pleroma.Web.ActivityPub.ActivityPub
alias Pleroma.Web.ActivityPub.Builder
@@ -91,6 +92,9 @@ defmodule Pleroma.User do
@cachex Pleroma.Config.get([:cachex, :provider], Cachex)
# hide sensitive data from logs
@derive {Inspect, except: [:password, :password_hash, :email]}
schema "users" do
field(:bio, :string, default: "")
field(:raw_bio, :string)
@@ -270,13 +274,13 @@
def cached_blocked_users_ap_ids(user) do
@cachex.fetch!(:user_cache, "blocked_users_ap_ids:#{user.ap_id}", fn _ ->
blocked_users_ap_ids(user)
{:commit, blocked_users_ap_ids(user)}
end)
end
def cached_muted_users_ap_ids(user) do
@cachex.fetch!(:user_cache, "muted_users_ap_ids:#{user.ap_id}", fn _ ->
muted_users_ap_ids(user)
{:commit, muted_users_ap_ids(user)}
end)
end
@@ -831,7 +835,7 @@
candidates = Config.get([:instance, :autofollowed_nicknames])
autofollowed_users =
User.Query.build(%{nickname: candidates, local: true, is_active: true})
User.Query.build(%{nickname: candidates, local: true, deactivated: false})
|> Repo.all()
follow_all(user, autofollowed_users)
@@ -1100,16 +1104,6 @@
|> Repo.all()
end
# This is mostly an SPC migration fix. This guesses the user nickname by taking the last part
# of the ap_id and the domain and tries to get that user
def get_by_guessed_nickname(ap_id) do
domain = URI.parse(ap_id).host
name = List.last(String.split(ap_id, "/"))
nickname = "#{name}@#{domain}"
get_cached_by_nickname(nickname)
end
@spec set_cache(
{:error, any}
| {:ok, User.t()}
@@ -1162,7 +1156,7 @@
@spec get_cached_user_friends_ap_ids(User.t()) :: [String.t()]
def get_cached_user_friends_ap_ids(user) do
@cachex.fetch!(:user_cache, "friends_ap_ids:#{user.ap_id}", fn _ ->
get_user_friends_ap_ids(user)
{:commit, get_user_friends_ap_ids(user)}
end)
end
@@ -1208,14 +1202,18 @@
end
def get_cached_by_nickname(nickname) do
key = "nickname:#{nickname}"
if String.valid?(nickname) do
key = "nickname:#{nickname}"
@cachex.fetch!(:user_cache, key, fn _ ->
case get_or_fetch_by_nickname(nickname) do
{:ok, user} -> {:commit, user}
{:error, _error} -> {:ignore, nil}
end
end)
@cachex.fetch!(:user_cache, key, fn _ ->
case get_or_fetch_by_nickname(nickname) do
{:ok, user} -> {:commit, user}
{:error, _error} -> {:ignore, nil}
end
end)
else
nil
end
end
def get_cached_by_nickname_or_id(nickname_or_id, opts \\ []) do
@@ -1238,10 +1236,14 @@
@spec get_by_nickname(String.t()) :: User.t() | nil
def get_by_nickname(nickname) do
Repo.get_by(User, nickname: nickname) ||
if Regex.match?(~r(@#{Pleroma.Web.Endpoint.host()})i, nickname) do
Repo.get_by(User, nickname: local_nickname(nickname))
end
if String.valid?(nickname) do
Repo.get_by(User, nickname: nickname) ||
if Regex.match?(~r(@#{Pleroma.Web.Endpoint.host()})i, nickname) do
Repo.get_by(User, nickname: local_nickname(nickname))
end
else
nil
end
end
def get_by_email(email), do: Repo.get_by(User, email: email)
@@ -1250,7 +1252,7 @@
get_by_nickname(nickname_or_email) || get_by_email(nickname_or_email)
end
def fetch_by_nickname(nickname), do: ActivityPub.make_user_from_nickname(nickname)
def fetch_by_nickname(nickname), do: Fetcher.make_user_from_nickname(nickname)
def get_or_fetch_by_nickname(nickname) do
with %User{} = user <- get_by_nickname(nickname) do
@@ -1266,72 +1268,54 @@
end
end
@spec get_followers_query(User.t(), pos_integer() | nil) :: Ecto.Query.t()
def get_followers_query(%User{} = user, nil) do
User.Query.build(%{followers: user, is_active: true})
end
def get_followers_query(%User{} = user, page) do
user
|> get_followers_query(nil)
|> User.Query.paginate(page, 20)
end
@spec get_followers_query(User.t()) :: Ecto.Query.t()
def get_followers_query(%User{} = user), do: get_followers_query(user, nil)
def get_followers_query(%User{} = user) do
User.Query.build(%{followers: user, deactivated: false})
end
@spec get_followers(User.t(), pos_integer() | nil) :: {:ok, list(User.t())}
def get_followers(%User{} = user, page \\ nil) do
@spec get_followers(User.t()) :: {:ok, list(User.t())}
def get_followers(%User{} = user) do
user
|> get_followers_query(page)
|> get_followers_query()
|> Repo.all()
end
@spec get_external_followers(User.t(), pos_integer() | nil) :: {:ok, list(User.t())}
def get_external_followers(%User{} = user, page \\ nil) do
@spec get_external_followers(User.t()) :: {:ok, list(User.t())}
def get_external_followers(%User{} = user) do
user
|> get_followers_query(page)
|> get_followers_query()
|> User.Query.build(%{external: true})
|> Repo.all()
end
def get_followers_ids(%User{} = user, page \\ nil) do
def get_followers_ids(%User{} = user) do
user
|> get_followers_query(page)
|> get_followers_query()
|> select([u], u.id)
|> Repo.all()
end
@spec get_friends_query(User.t(), pos_integer() | nil) :: Ecto.Query.t()
def get_friends_query(%User{} = user, nil) do
@spec get_friends_query(User.t()) :: Ecto.Query.t()
def get_friends_query(%User{} = user) do
User.Query.build(%{friends: user, deactivated: false})
end
def get_friends_query(%User{} = user, page) do
def get_friends(%User{} = user) do
user
|> get_friends_query(nil)
|> User.Query.paginate(page, 20)
end
@spec get_friends_query(User.t()) :: Ecto.Query.t()
def get_friends_query(%User{} = user), do: get_friends_query(user, nil)
def get_friends(%User{} = user, page \\ nil) do
user
|> get_friends_query(page)
|> get_friends_query()
|> Repo.all()
end
def get_friends_ap_ids(%User{} = user) do
user
|> get_friends_query(nil)
|> get_friends_query()
|> select([u], u.ap_id)
|> Repo.all()
end
def get_friends_ids(%User{} = user, page \\ nil) do
def get_friends_ids(%User{} = user) do
user
|> get_friends_query(page)
|> get_friends_query()
|> select([u], u.id)
|> Repo.all()
end
@@ -1399,7 +1383,7 @@
end
def fetch_follow_information(user) do
with {:ok, info} <- ActivityPub.fetch_follow_information_for_user(user) do
with {:ok, info} <- Fetcher.fetch_follow_information_for_user(user) do
user
|> follow_information_changeset(info)
|> update_and_set_cache()
@@ -1451,7 +1435,7 @@
@spec get_users_from_set([String.t()], keyword()) :: [User.t()]
def get_users_from_set(ap_ids, opts \\ []) do
local_only = Keyword.get(opts, :local_only, true)
criteria = %{ap_id: ap_ids, is_active: true}
criteria = %{ap_id: ap_ids, deactivated: false}
criteria = if local_only, do: Map.put(criteria, :local, true), else: criteria
User.Query.build(criteria)
@@ -1462,7 +1446,7 @@
def get_recipients_from_activity(%Activity{recipients: to, actor: actor}) do
to = [actor | to]
query = User.Query.build(%{recipients_from_activity: to, local: true, is_active: true})
query = User.Query.build(%{recipients_from_activity: to, local: true, deactivated: false})
query
|> Repo.all()
@@ -1472,17 +1456,17 @@
{:ok, list(UserRelationship.t())} | {:error, String.t()}
def mute(%User{} = muter, %User{} = mutee, params \\ %{}) do
notifications? = Map.get(params, :notifications, true)
expires_in = Map.get(params, :expires_in, 0)
duration = Map.get(params, :duration, 0)
with {:ok, user_mute} <- UserRelationship.create_mute(muter, mutee),
{:ok, user_notification_mute} <-
(notifications? && UserRelationship.create_notification_mute(muter, mutee)) ||
{:ok, nil} do
if expires_in > 0 do
if duration > 0 do
Pleroma.Workers.MuteExpireWorker.enqueue(
"unmute_user",
%{"muter_id" => muter.id, "mutee_id" => mutee.id},
schedule_in: expires_in
schedule_in: duration
)
end
@@ -1974,12 +1958,16 @@
def html_filter_policy(_), do: Config.get([:markup, :scrub_policy])
def fetch_by_ap_id(ap_id), do: ActivityPub.make_user_from_ap_id(ap_id)
def fetch_by_ap_id(ap_id), do: Fetcher.make_user_from_ap_id(ap_id)
defp refetch_or_fetch_by_ap_id(%User{} = user, _), do: Fetcher.refetch_user(user)
defp refetch_or_fetch_by_ap_id(_, ap_id), do: Fetcher.make_user_from_ap_id(ap_id)
def get_or_fetch_by_ap_id(ap_id, options \\ []) do
cached_user = get_cached_by_ap_id(ap_id)
maybe_fetched_user = needs_update?(cached_user, options) && fetch_by_ap_id(ap_id)
maybe_fetched_user =
needs_update?(cached_user, options) && refetch_or_fetch_by_ap_id(cached_user, ap_id)
case {cached_user, maybe_fetched_user} do
{_, {:ok, %User{} = user}} ->
@@ -2067,7 +2055,7 @@
|> set_cache()
end
defdelegate public_key(user), to: SigningKey
defdelegate public_key(user), to: SigningKey, as: :public_key_pem
@doc "Gets or fetch a user by uri or nickname."
@spec get_or_fetch(String.t()) :: {:ok, User.t()} | {:error, String.t()}
@@ -2200,7 +2188,7 @@
@spec all_superusers() :: [User.t()]
def all_superusers do
User.Query.build(%{super_users: true, local: true, is_active: true})
User.Query.build(%{super_users: true, local: true, deactivated: false})
|> Repo.all()
end
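`get_by_nickname/1` and `get_cached_by_nickname/1` above now refuse invalid UTF-8 before touching the cache or database. The guard itself is simple; a sketch with the actual lookup injected as a function:

```elixir
defmodule NickGuardSketch do
  # Reject binaries that are not valid UTF-8 up front; they can never
  # be legitimate nicknames and would otherwise reach cache/DB layers.
  def get_by_nickname(nickname, lookup) when is_binary(nickname) do
    if String.valid?(nickname), do: lookup.(nickname), else: nil
  end
end
```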

View file

@@ -7,7 +7,9 @@ defmodule Pleroma.User.Backup do
import Ecto.Changeset
import Ecto.Query
import Pleroma.Web.Gettext
use Gettext,
backend: Pleroma.Web.Gettext
require Pleroma.Constants

443
lib/pleroma/user/fetcher.ex Normal file
View file

@@ -0,0 +1,443 @@
# Pleroma: A lightweight social networking server
# Copyright © 2017-2021 Pleroma Authors <https://pleroma.social/>
# Copyright © 2026 Akkoma Authors <https://akkoma.dev/>
# SPDX-License-Identifier: AGPL-3.0-only
defmodule Pleroma.User.Fetcher do
alias Akkoma.Collections
alias Pleroma.Config
alias Pleroma.Object
alias Pleroma.Object.Fetcher, as: APFetcher
alias Pleroma.Repo
alias Pleroma.User
alias Pleroma.Web.ActivityPub.MRF
alias Pleroma.Web.ActivityPub.ObjectValidators.UserValidator
alias Pleroma.Web.ActivityPub.Transmogrifier
alias Pleroma.Web.WebFinger
import Pleroma.Web.ActivityPub.Utils
require Logger
@spec get_actor_url(any()) :: binary() | nil
defp get_actor_url(url) when is_binary(url), do: url
defp get_actor_url(%{"href" => href}) when is_binary(href), do: href
defp get_actor_url(url) when is_list(url) do
url
|> List.first()
|> get_actor_url()
end
defp get_actor_url(_url), do: nil
defp normalize_image(%{"url" => url}) do
%{
"type" => "Image",
"url" => [%{"href" => url}]
}
end
defp normalize_image(urls) when is_list(urls), do: urls |> List.first() |> normalize_image()
defp normalize_image(_), do: nil
defp normalize_also_known_as(aka) when is_list(aka), do: aka
defp normalize_also_known_as(aka) when is_binary(aka), do: [aka]
defp normalize_also_known_as(nil), do: []
defp normalize_attachment(%{} = attachment), do: [attachment]
defp normalize_attachment(attachment) when is_list(attachment), do: attachment
defp normalize_attachment(_), do: []
defp maybe_make_public_key_object(data) do
if is_map(data["publicKey"]) && is_binary(data["publicKey"]["publicKeyPem"]) do
%{
public_key: data["publicKey"]["publicKeyPem"],
key_id: data["publicKey"]["id"]
}
else
nil
end
end
defp try_fallback_nick(%{"id" => ap_id, "preferredUsername" => name})
when is_binary(name) and is_binary(ap_id) do
with true <- name != "",
domain when domain != nil and domain != "" <- URI.parse(ap_id).host do
"#{name}@#{domain}"
else
_ -> nil
end
end
defp try_fallback_nick(_), do: nil
defp object_to_user_data(data, verified_nick) do
fields =
data
|> Map.get("attachment", [])
|> normalize_attachment()
|> Enum.filter(fn
%{"type" => t} -> t == "PropertyValue"
_ -> false
end)
|> Enum.map(fn fields -> Map.take(fields, ["name", "value"]) end)
emojis =
data
|> Map.get("tag", [])
|> Enum.filter(fn
%{"type" => "Emoji"} -> true
_ -> false
end)
|> Map.new(fn %{"icon" => %{"url" => url}, "name" => name} ->
{String.trim(name, ":"), url}
end)
is_locked = data["manuallyApprovesFollowers"] || false
data = Transmogrifier.maybe_fix_user_object(data)
is_discoverable = data["discoverable"] || false
invisible = data["invisible"] || false
actor_type = data["type"] || "Person"
{featured_address, pinned_objects} =
case process_featured_collection(data["featured"]) do
{:ok, featured_address, pinned_objects} -> {featured_address, pinned_objects}
_ -> {nil, %{}}
end
# first, check that the owner is correct
signing_key =
if data["id"] !== data["publicKey"]["owner"] do
Logger.error(
"Owner of the public key is not the same as the actor - not saving the public key."
)
nil
else
maybe_make_public_key_object(data)
end
shared_inbox =
if is_map(data["endpoints"]) && is_binary(data["endpoints"]["sharedInbox"]) do
data["endpoints"]["sharedInbox"]
end
# can still be nil if no name was indicated in AP data
nickname = verified_nick || try_fallback_nick(data)
# also_known_as must be a URL
also_known_as =
data
|> Map.get("alsoKnownAs", [])
|> normalize_also_known_as()
|> Enum.filter(fn url ->
case URI.parse(url) do
%URI{scheme: "http"} -> true
%URI{scheme: "https"} -> true
_ -> false
end
end)
%{
ap_id: data["id"],
uri: get_actor_url(data["url"]),
banner: normalize_image(data["image"]),
background: normalize_image(data["backgroundUrl"]),
fields: fields,
emoji: emojis,
is_locked: is_locked,
is_discoverable: is_discoverable,
invisible: invisible,
avatar: normalize_image(data["icon"]),
name: data["name"],
follower_address: data["followers"],
following_address: data["following"],
featured_address: featured_address,
bio: data["summary"] || "",
actor_type: actor_type,
also_known_as: also_known_as,
signing_key: signing_key,
inbox: data["inbox"],
shared_inbox: shared_inbox,
pinned_objects: pinned_objects,
nickname: nickname
}
end
defp collection_private(%{"first" => %{"type" => type}})
when type in ["CollectionPage", "OrderedCollectionPage"],
do: false
defp collection_private(%{"first" => first}) do
with {:ok, %{"type" => type}} when type in ["CollectionPage", "OrderedCollectionPage"] <-
APFetcher.fetch_and_contain_remote_object_from_id(first) do
false
else
_ -> true
end
end
defp collection_private(_data), do: true
defp counter_private(%{"totalItems" => _}), do: false
defp counter_private(_), do: true
defp normalize_counter(counter) when is_integer(counter), do: counter
defp normalize_counter(_), do: 0
defp eval_collection_counter(apid) when is_binary(apid) do
case APFetcher.fetch_and_contain_remote_object_from_id(apid) do
{:ok, data} ->
{collection_private(data), counter_private(data), normalize_counter(data["totalItems"])}
_ ->
Logger.debug("Failed to fetch follower/ing collection #{apid}; assuming private")
{true, true, 0}
end
end
defp eval_collection_counter(_), do: {true, true, 0}
def fetch_follow_information_for_user(user) do
{hide_follows, hide_follows_count, following_count} =
eval_collection_counter(user.following_address)
{hide_followers, hide_followers_count, follower_count} =
eval_collection_counter(user.follower_address)
{:ok,
%{
hide_follows: hide_follows,
hide_follows_count: hide_follows_count,
following_count: following_count,
hide_followers: hide_followers,
hide_followers_count: hide_followers_count,
follower_count: follower_count
}}
end
def maybe_update_follow_information(user_data) do
with {:enabled, true} <- {:enabled, Config.get([:instance, :external_user_synchronization])},
{_, true} <-
{:collections_available,
!!(user_data[:following_address] && user_data[:follower_address])},
{:ok, follow_info} <-
fetch_follow_information_for_user(user_data) do
Map.merge(user_data, follow_info)
else
{:user_type_check, false} ->
user_data
{:collections_available, false} ->
user_data
{:enabled, false} ->
user_data
e ->
Logger.error(
"Follower/Following counter update for #{user_data.ap_id} failed.\n" <> inspect(e)
)
user_data
end
end
def maybe_handle_clashing_nickname(data) do
with nickname when is_binary(nickname) <- data[:nickname],
%User{} = old_user <- User.get_by_nickname(nickname),
{_, false} <- {:ap_id_comparison, data[:ap_id] == old_user.ap_id} do
Logger.info(
"Found an old user for #{nickname}, the old ap id is #{old_user.ap_id}, new one is #{data[:ap_id]}, renaming."
)
old_user
|> User.remote_user_changeset(%{nickname: "#{old_user.id}.#{old_user.nickname}"})
|> User.update_and_set_cache()
else
{:ap_id_comparison, true} ->
Logger.info(
"Found an old user for #{data[:nickname]}, but the ap id #{data[:ap_id]} is the same as the new user. Race condition? Not changing anything."
)
_ ->
nil
end
end
def process_featured_collection(nil), do: {:ok, nil, %{}}
def process_featured_collection(""), do: {:ok, nil, %{}}
def process_featured_collection(featured_collection) do
featured_address =
case get_ap_id(featured_collection) do
id when is_binary(id) -> id
_ -> nil
end
# TODO: allow passing item/page limit as function opt and use here
case Collections.Fetcher.fetch_collection(featured_collection) do
{:ok, items} ->
now = NaiveDateTime.utc_now()
dated_obj_ids = Map.new(items, fn obj -> {get_ap_id(obj), now} end)
{:ok, featured_address, dated_obj_ids}
error ->
Logger.error(
"Could not decode featured collection at fetch #{inspect(featured_collection)}: #{inspect(error)}"
)
error =
case error do
{:error, e} -> e
e -> e
end
{:error, error}
end
end
def enqueue_pin_fetches(%{pinned_objects: pins}) do
# enqueue a task to fetch all pinned objects
Enum.each(pins, fn {ap_id, _} ->
if is_nil(Object.get_cached_by_ap_id(ap_id)) do
Pleroma.Workers.RemoteFetcherWorker.enqueue("fetch_remote", %{
"id" => ap_id,
"depth" => 1
})
end
end)
end
def enqueue_pin_fetches(_), do: nil
def validate_and_cast(data, verified_nick) do
with {:ok, data} <- MRF.filter(data),
{:valid, {:ok, _, _}} <- {:valid, UserValidator.validate(data, [])} do
{:ok, object_to_user_data(data, verified_nick)}
else
{:valid, reason} ->
{:error, {:validate, reason}}
e ->
{:error, e}
end
end
defp insert_or_update(%User{} = olduser, newdata) do
olduser
|> User.remote_user_changeset(newdata)
|> User.update_and_set_cache()
end
defp insert_or_update(nil, newdata) do
newdata
|> User.remote_user_changeset()
|> Repo.insert()
|> User.set_cache()
end
defp make_user_from_apdata_and_nick(ap_data, verified_nick, olduser \\ nil) do
with {:ok, data} <- validate_and_cast(ap_data, verified_nick) do
olduser = olduser || User.get_cached_by_ap_id(data.ap_id)
if !olduser || olduser.nickname != data.nickname do
maybe_handle_clashing_nickname(data)
end
data = maybe_update_follow_information(data)
with {:ok, newuser} <- insert_or_update(olduser, data) do
enqueue_pin_fetches(data)
{:ok, newuser}
end
end
end
defp discover_nick_from_actor_data(data) do
case WebFinger.Finger.finger_actor(data) do
{:ok, nil} ->
Logger.debug("No WebFinger found for #{data["id"]}; using fallback")
nil
{:ok, nick} ->
nick
{:error, error} ->
Logger.error(
"Invalid WebFinger for #{data["id"]}; spoof attempt or just misconfiguration? Using safe fallback: #{inspect(error)}"
)
nil
end
end
defp needs_nick_update(%{"webfinger" => "acct:" <> nick}, nick), do: false
defp needs_nick_update(%{"webfinger" => nick}, nick), do: false
defp needs_nick_update(%{"preferredUsername" => name}, oldnick) when is_binary(name) do
String.starts_with?(oldnick, name <> "@")
end
defp needs_nick_update(ap_data, oldnick) do
ap_nick = ap_data["webfinger"] || ap_data["preferredUsername"]
(!oldnick && ap_nick) || (oldnick && !ap_nick)
end
defp refreshed_nick(ap_data, olduser) do
if Config.get!([Pleroma.Web.WebFinger, :update_nickname_on_user_fetch]) ||
!olduser || needs_nick_update(ap_data, olduser.nickname) do
discover_nick_from_actor_data(ap_data)
else
olduser.nickname
end
end
defp refresh_or_fetch_from_ap_id(ap_id, olduser) do
with {:ok, data} <- APFetcher.fetch_and_contain_remote_object_from_id(ap_id),
# if AP id somehow changed on refetch, discard old info
verified_olduser <- (olduser && olduser.ap_id == data["id"] && olduser) || nil,
verified_nick <- refreshed_nick(data, verified_olduser) do
make_user_from_apdata_and_nick(data, verified_nick, verified_olduser)
else
# If this has been deleted, only log a debug and not an error
{:error, {"Object has been deleted", _, _} = e} ->
Logger.debug("User was explicitly deleted #{ap_id}, #{inspect(e)}")
{:error, :not_found}
{:reject, _reason} = e ->
{:error, e}
{:error, e} ->
{:error, e}
end
end
def make_user_from_ap_id(ap_id), do: refresh_or_fetch_from_ap_id(ap_id, nil)
def refetch_user(%User{ap_id: ap_id} = u), do: refresh_or_fetch_from_ap_id(ap_id, u)
def make_user_from_nickname(nickname) do
case WebFinger.Finger.finger_mention(nickname) do
{:ok, handle, actor_data} ->
make_user_from_apdata_and_nick(actor_data, handle)
error ->
error
end
end
def update_user_with_apdata(%{"id" => ap_id} = new_ap_data) do
with %User{} = old_user <- User.get_cached_by_ap_id(ap_id) do
new_nick = refreshed_nick(new_ap_data, old_user)
make_user_from_apdata_and_nick(new_ap_data, new_nick, old_user)
else
nil ->
Logger.warning("Cannot update unknown user #{ap_id}")
{:error, :not_found}
end
end
end
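The `needs_nick_update/2` clauses above decide purely by pattern matching, which can be hard to read at a glance. A minimal standalone sketch (the module name and sample maps below are illustrative, not part of the codebase) mirrors the clause order:

```elixir
defmodule NickUpdateSketch do
  # Clause order matters: a "webfinger" field matching the cached
  # nickname (with or without the "acct:" prefix) means no update.
  def needs_nick_update?(%{"webfinger" => "acct:" <> nick}, nick), do: false
  def needs_nick_update?(%{"webfinger" => nick}, nick), do: false

  def needs_nick_update?(%{"preferredUsername" => name}, oldnick) when is_binary(name) do
    String.starts_with?(oldnick, name <> "@")
  end

  def needs_nick_update?(ap_data, oldnick) do
    ap_nick = ap_data["webfinger"] || ap_data["preferredUsername"]
    (!oldnick && ap_nick) || (oldnick && !ap_nick)
  end
end

# An actor advertising the same acct as the cached nickname needs no refresh:
NickUpdateSketch.needs_nick_update?(
  %{"webfinger" => "acct:alice@example.com"},
  "alice@example.com"
)
# => false
```

Note how the first two clauses reuse the variable `nick` in both arguments, so they only match when the advertised handle and the cached nickname are identical.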


@ -144,11 +144,6 @@ defmodule Pleroma.User.Query do
|> where([u], u.is_confirmed == true)
end
defp compose_query({:legacy_active, _}, query) do
query
|> where([u], fragment("not (?->'deactivated' @> 'true')", u.info))
end
defp compose_query({:deactivated, false}, query) do
where(query, [u], u.is_active == true)
end


@ -8,6 +8,7 @@ defmodule Pleroma.User.SigningKey do
require Logger
@derive {Inspect, only: [:user_id, :key_id]}
@primary_key false
schema "signing_keys" do
belongs_to(:user, Pleroma.User, type: FlakeId.Ecto.CompatType)
@ -109,7 +110,7 @@ defmodule Pleroma.User.SigningKey do
{:ok, :public_key.pem_encode([public_key])}
end
@spec public_key(__MODULE__) :: {:ok, binary()} | {:error, String.t()}
@spec public_key_decoded(__MODULE__) :: {:ok, binary()} | {:error, String.t()}
@doc """
Return public key data in binary format.
"""
@ -123,8 +124,12 @@ defmodule Pleroma.User.SigningKey do
{:ok, decoded}
end
def public_key(_), do: {:error, "key not found"}
def public_key_decoded(_), do: {:error, "key not found"}
@spec public_key_pem(__MODULE__) :: {:ok, binary()} | {:error, String.t()}
@doc """
Return public key data for user in PEM format
"""
def public_key_pem(%User{} = user) do
case Repo.preload(user, :signing_key) do
%User{signing_key: %__MODULE__{public_key: public_key_pem}} -> {:ok, public_key_pem}


@ -67,7 +67,7 @@ defmodule Pleroma.UserRelationship do
target_id: target.id
})
|> Repo.insert(
on_conflict: {:replace_all_except, [:id, :inserted_at]},
on_conflict: {:replace, [:relationship_type, :source_id, :target_id]},
conflict_target: [:source_id, :relationship_type, :target_id],
returning: true
)


@ -16,7 +16,7 @@ defmodule Pleroma.Utils do
def compile_dir(dir) when is_binary(dir) do
dir
|> elixir_files()
|> Kernel.ParallelCompiler.compile()
|> Kernel.ParallelCompiler.compile(return_diagnostics: true)
end
defp elixir_files(dir) when is_binary(dir) do


@ -31,21 +31,19 @@ defmodule Pleroma.Web do
def controller do
quote do
use Phoenix.Controller, namespace: Pleroma.Web
use Phoenix.Controller,
formats: [html: "View", json: "View"],
layouts: [html: Pleroma.Web.LayoutView]
import Plug.Conn
import Pleroma.Web.Gettext
use Gettext,
backend: Pleroma.Web.Gettext
import Pleroma.Web.TranslationHelpers
unquote(verified_routes())
plug(:set_put_layout)
defp set_put_layout(conn, _) do
put_layout(conn, Pleroma.Config.get(:app_layout, "app.html"))
end
# Marks plugs intentionally skipped and blocks their execution if present in plugs chain
defp skip_plug(conn, plug_modules) do
plug_modules
@ -233,14 +231,18 @@ defmodule Pleroma.Web do
def channel do
quote do
use Phoenix.Channel
import Pleroma.Web.Gettext
use Gettext,
backend: Pleroma.Web.Gettext
end
end
defp view_helpers do
quote do
# Use all HTML functionality (forms, tags, etc)
use Phoenix.HTML
import Phoenix.HTML
import Phoenix.HTML.Form
use PhoenixHTMLHelpers
# Import LiveView and .heex helpers (live_render, live_patch, <.form>, etc)
import Phoenix.LiveView.Helpers
@ -249,7 +251,10 @@ defmodule Pleroma.Web do
import Phoenix.View
import Pleroma.Web.ErrorHelpers
import Pleroma.Web.Gettext
use Gettext,
backend: Pleroma.Web.Gettext
unquote(verified_routes())
end
end


@ -3,7 +3,6 @@
# SPDX-License-Identifier: AGPL-3.0-only
defmodule Pleroma.Web.ActivityPub.ActivityPub do
alias Akkoma.Collections
alias Pleroma.Activity
alias Pleroma.Activity.Ir.Topics
alias Pleroma.Config
@ -16,16 +15,13 @@ defmodule Pleroma.Web.ActivityPub.ActivityPub do
alias Pleroma.Notification
alias Pleroma.Object
alias Pleroma.Object.Containment
alias Pleroma.Object.Fetcher
alias Pleroma.Pagination
alias Pleroma.Repo
alias Pleroma.Upload
alias Pleroma.User
alias Pleroma.Web.ActivityPub.MRF
alias Pleroma.Web.ActivityPub.ObjectValidators.UserValidator
alias Pleroma.Web.ActivityPub.Transmogrifier
alias Pleroma.Web.ActivityPub.Visibility
alias Pleroma.Web.Streamer
alias Pleroma.Web.WebFinger
alias Pleroma.Workers.BackgroundWorker
alias Pleroma.Workers.PollWorker
@ -208,21 +204,19 @@ defmodule Pleroma.Web.ActivityPub.ActivityPub do
end
def notify_and_stream(activity) do
Notification.create_notifications(activity)
original_activity =
case activity do
%{data: %{"type" => "Update"}, object: %{data: %{"id" => id}}} ->
Activity.get_create_by_object_ap_id_with_object(id)
_ ->
activity
end
conversation = create_or_bump_conversation(original_activity, original_activity.actor)
participations = get_participations(conversation)
# XXX: all callers of this should be moved to side_effect handling, such that
# notifications can be collected and only sent out _after_ the transaction succeeds
{:ok, notifications, _} = Notification.create_notifications(activity)
Notification.send(notifications)
stream_out(activity)
stream_out_participations(participations)
end
defp maybe_bump_conversation(activity) do
if Visibility.is_direct?(activity) do
conversation = create_or_bump_conversation(activity, activity.actor)
participations = get_participations(conversation)
stream_out_participations(participations)
end
end
defp maybe_create_activity_expiration(
@ -239,7 +233,7 @@ defmodule Pleroma.Web.ActivityPub.ActivityPub do
defp maybe_create_activity_expiration(activity), do: {:ok, activity}
defp create_or_bump_conversation(activity, actor) do
def create_or_bump_conversation(activity, actor) do
with {:ok, conversation} <- Conversation.create_or_bump_for(activity),
%User{} = user <- User.get_cached_by_ap_id(actor) do
Participation.mark_as_read(user, conversation)
@ -258,7 +252,7 @@ defmodule Pleroma.Web.ActivityPub.ActivityPub do
def stream_out_participations(participations) do
participations =
participations
|> Repo.preload(:user)
|> Repo.preload([:user, :conversation])
Streamer.stream("participation", participations)
end
@ -323,6 +317,7 @@ defmodule Pleroma.Web.ActivityPub.ActivityPub do
{:ok, _actor} <- increase_note_count_if_public(actor, activity),
{:ok, _actor} <- update_last_status_at_if_public(actor, activity),
_ <- notify_and_stream(activity),
_ <- maybe_bump_conversation(activity),
:ok <- maybe_schedule_poll_notifications(activity),
:ok <- maybe_federate(activity) do
{:ok, activity}
@ -482,9 +477,9 @@ defmodule Pleroma.Web.ActivityPub.ActivityPub do
from(activity in Activity)
|> maybe_preload_objects(opts)
|> maybe_preload_bookmarks(opts)
|> maybe_set_thread_muted_field(opts)
|> restrict_blocked(opts)
|> restrict_blockers_visibility(opts)
|> restrict_muted_users(opts)
|> restrict_recipients(recipients, opts[:user])
|> restrict_filtered(opts)
|> where(
@ -1096,24 +1091,35 @@ defmodule Pleroma.Web.ActivityPub.ActivityPub do
defp restrict_reblogs(query, _), do: query
defp restrict_muted(query, %{with_muted: true}), do: query
defp restrict_muted(query, opts) do
query
|> restrict_muted_users(opts)
|> restrict_muted_threads(opts)
end
defp restrict_muted(query, %{muting_user: %User{} = user} = opts) do
defp restrict_muted_users(query, %{with_muted: true}), do: query
defp restrict_muted_users(query, %{muting_user: %User{} = user} = opts) do
mutes = opts[:muted_users_ap_ids] || User.muted_users_ap_ids(user)
query =
from([activity] in query,
where: fragment("not (? = ANY(?))", activity.actor, ^mutes),
where:
fragment(
"not (?->'to' \\?| ?) or ? = ?",
activity.data,
^mutes,
activity.actor,
^user.ap_id
)
)
from([activity] in query,
where: fragment("not (? = ANY(?))", activity.actor, ^mutes),
where:
fragment(
"not (?->'to' \\?| ?) or ? = ?",
activity.data,
^mutes,
activity.actor,
^user.ap_id
)
)
end
defp restrict_muted_users(query, _), do: query
defp restrict_muted_threads(query, %{with_muted: true}), do: query
defp restrict_muted_threads(query, %{muting_user: %User{} = _user} = opts) do
unless opts[:skip_preload] do
from([thread_mute: tm] in query, where: is_nil(tm.user_id))
else
@ -1121,7 +1127,7 @@ defmodule Pleroma.Web.ActivityPub.ActivityPub do
end
end
defp restrict_muted(query, _), do: query
defp restrict_muted_threads(query, _), do: query
defp restrict_blocked(query, %{blocking_user: %User{} = user} = opts) do
blocked_ap_ids = opts[:blocked_users_ap_ids] || User.blocked_users_ap_ids(user)
@ -1447,7 +1453,6 @@ defmodule Pleroma.Web.ActivityPub.ActivityPub do
|> restrict_muted_reblogs(restrict_muted_reblogs_opts)
|> restrict_instance(opts)
|> restrict_announce_object_actor(opts)
|> restrict_filtered(opts)
|> maybe_restrict_deactivated_users(opts)
|> exclude_poll_votes(opts)
|> exclude_invisible_actors(opts)
@ -1536,361 +1541,6 @@ defmodule Pleroma.Web.ActivityPub.ActivityPub do
defp sanitize_upload_file(upload), do: upload
@spec get_actor_url(any()) :: binary() | nil
defp get_actor_url(url) when is_binary(url), do: url
defp get_actor_url(%{"href" => href}) when is_binary(href), do: href
defp get_actor_url(url) when is_list(url) do
url
|> List.first()
|> get_actor_url()
end
defp get_actor_url(_url), do: nil
defp normalize_image(%{"url" => url}) do
%{
"type" => "Image",
"url" => [%{"href" => url}]
}
end
defp normalize_image(urls) when is_list(urls), do: urls |> List.first() |> normalize_image()
defp normalize_image(_), do: nil
defp normalize_also_known_as(aka) when is_list(aka), do: aka
defp normalize_also_known_as(aka) when is_binary(aka), do: [aka]
defp normalize_also_known_as(nil), do: []
defp normalize_attachment(%{} = attachment), do: [attachment]
defp normalize_attachment(attachment) when is_list(attachment), do: attachment
defp normalize_attachment(_), do: []
defp maybe_make_public_key_object(data) do
if is_map(data["publicKey"]) && is_binary(data["publicKey"]["publicKeyPem"]) do
%{
public_key: data["publicKey"]["publicKeyPem"],
key_id: data["publicKey"]["id"]
}
else
nil
end
end
defp object_to_user_data(data, additional) do
fields =
data
|> Map.get("attachment", [])
|> normalize_attachment()
|> Enum.filter(fn
%{"type" => t} -> t == "PropertyValue"
_ -> false
end)
|> Enum.map(fn fields -> Map.take(fields, ["name", "value"]) end)
emojis =
data
|> Map.get("tag", [])
|> Enum.filter(fn
%{"type" => "Emoji"} -> true
_ -> false
end)
|> Map.new(fn %{"icon" => %{"url" => url}, "name" => name} ->
{String.trim(name, ":"), url}
end)
is_locked = data["manuallyApprovesFollowers"] || false
data = Transmogrifier.maybe_fix_user_object(data)
is_discoverable = data["discoverable"] || false
invisible = data["invisible"] || false
actor_type = data["type"] || "Person"
{featured_address, pinned_objects} =
case process_featured_collection(data["featured"]) do
{:ok, featured_address, pinned_objects} -> {featured_address, pinned_objects}
_ -> {nil, %{}}
end
# first, check that the owner is correct
signing_key =
if data["id"] !== data["publicKey"]["owner"] do
Logger.error(
"Owner of the public key is not the same as the actor - not saving the public key."
)
nil
else
maybe_make_public_key_object(data)
end
shared_inbox =
if is_map(data["endpoints"]) && is_binary(data["endpoints"]["sharedInbox"]) do
data["endpoints"]["sharedInbox"]
end
# if WebFinger request was already done, we probably have acct, otherwise
# we request WebFinger here
nickname = additional[:nickname_from_acct] || generate_nickname(data)
# also_known_as must be a URL
also_known_as =
data
|> Map.get("alsoKnownAs", [])
|> normalize_also_known_as()
|> Enum.filter(fn url ->
case URI.parse(url) do
%URI{scheme: "http"} -> true
%URI{scheme: "https"} -> true
_ -> false
end
end)
%{
ap_id: data["id"],
uri: get_actor_url(data["url"]),
banner: normalize_image(data["image"]),
background: normalize_image(data["backgroundUrl"]),
fields: fields,
emoji: emojis,
is_locked: is_locked,
is_discoverable: is_discoverable,
invisible: invisible,
avatar: normalize_image(data["icon"]),
name: data["name"],
follower_address: data["followers"],
following_address: data["following"],
featured_address: featured_address,
bio: data["summary"] || "",
actor_type: actor_type,
also_known_as: also_known_as,
signing_key: signing_key,
inbox: data["inbox"],
shared_inbox: shared_inbox,
pinned_objects: pinned_objects,
nickname: nickname
}
end
defp generate_nickname(%{"preferredUsername" => username} = data) when is_binary(username) do
generated = "#{username}@#{URI.parse(data["id"]).host}"
if Config.get([WebFinger, :update_nickname_on_user_fetch]) do
case WebFinger.finger(generated) do
{:ok, %{"subject" => "acct:" <> acct}} -> acct
_ -> generated
end
else
generated
end
end
# nickname can be nil because of virtual actors
defp generate_nickname(_), do: nil
def fetch_follow_information_for_user(user) do
with {:ok, following_data} <-
Fetcher.fetch_and_contain_remote_object_from_id(user.following_address),
{:ok, hide_follows} <- collection_private(following_data),
{:ok, followers_data} <-
Fetcher.fetch_and_contain_remote_object_from_id(user.follower_address),
{:ok, hide_followers} <- collection_private(followers_data) do
{:ok,
%{
hide_follows: hide_follows,
follower_count: normalize_counter(followers_data["totalItems"]),
following_count: normalize_counter(following_data["totalItems"]),
hide_followers: hide_followers
}}
else
{:error, _} = e -> e
e -> {:error, e}
end
end
defp normalize_counter(counter) when is_integer(counter), do: counter
defp normalize_counter(_), do: 0
def maybe_update_follow_information(user_data) do
with {:enabled, true} <- {:enabled, Config.get([:instance, :external_user_synchronization])},
{_, true} <- {:user_type_check, user_data[:type] in ["Person", "Service"]},
{_, true} <-
{:collections_available,
!!(user_data[:following_address] && user_data[:follower_address])},
{:ok, info} <-
fetch_follow_information_for_user(user_data) do
info = Map.merge(user_data[:info] || %{}, info)
user_data
|> Map.put(:info, info)
else
{:user_type_check, false} ->
user_data
{:collections_available, false} ->
user_data
{:enabled, false} ->
user_data
e ->
Logger.error(
"Follower/Following counter update for #{user_data.ap_id} failed.\n" <> inspect(e)
)
user_data
end
end
defp collection_private(%{"first" => %{"type" => type}})
when type in ["CollectionPage", "OrderedCollectionPage"],
do: {:ok, false}
defp collection_private(%{"first" => first}) do
with {:ok, %{"type" => type}} when type in ["CollectionPage", "OrderedCollectionPage"] <-
Fetcher.fetch_and_contain_remote_object_from_id(first) do
{:ok, false}
else
{:error, _} -> {:ok, true}
end
end
defp collection_private(_data), do: {:ok, true}
def user_data_from_user_object(data, additional \\ []) do
with {:ok, data} <- MRF.filter(data) do
{:ok, object_to_user_data(data, additional)}
else
e -> {:error, e}
end
end
defp fetch_and_prepare_user_from_ap_id(ap_id, additional) do
with {:ok, data} <- Fetcher.fetch_and_contain_remote_object_from_id(ap_id),
{:valid, {:ok, _, _}} <- {:valid, UserValidator.validate(data, [])},
{:ok, data} <- user_data_from_user_object(data, additional) do
{:ok, maybe_update_follow_information(data)}
else
# If this has been deleted, only log a debug and not an error
{:error, {"Object has been deleted", _, _} = e} ->
Logger.debug("User was explicitly deleted #{ap_id}, #{inspect(e)}")
{:error, :not_found}
{:reject, _reason} = e ->
{:error, e}
{:valid, reason} ->
{:error, {:validate, reason}}
{:error, e} ->
{:error, e}
end
end
def maybe_handle_clashing_nickname(data) do
with nickname when is_binary(nickname) <- data[:nickname],
%User{} = old_user <- User.get_by_nickname(nickname),
{_, false} <- {:ap_id_comparison, data[:ap_id] == old_user.ap_id} do
Logger.info(
"Found an old user for #{nickname}, the old ap id is #{old_user.ap_id}, new one is #{data[:ap_id]}, renaming."
)
old_user
|> User.remote_user_changeset(%{nickname: "#{old_user.id}.#{old_user.nickname}"})
|> User.update_and_set_cache()
else
{:ap_id_comparison, true} ->
Logger.info(
"Found an old user for #{data[:nickname]}, but the ap id #{data[:ap_id]} is the same as the new user. Race condition? Not changing anything."
)
_ ->
nil
end
end
def process_featured_collection(nil), do: {:ok, nil, %{}}
def process_featured_collection(""), do: {:ok, nil, %{}}
def process_featured_collection(featured_collection) do
featured_address =
case get_ap_id(featured_collection) do
id when is_binary(id) -> id
_ -> nil
end
# TODO: allow passing item/page limit as function opt and use here
case Collections.Fetcher.fetch_collection(featured_collection) do
{:ok, items} ->
now = NaiveDateTime.utc_now()
dated_obj_ids = Map.new(items, fn obj -> {get_ap_id(obj), now} end)
{:ok, featured_address, dated_obj_ids}
error ->
Logger.error(
"Could not decode featured collection at fetch #{inspect(featured_collection)}: #{inspect(error)}"
)
error =
case error do
{:error, e} -> e
e -> e
end
{:error, error}
end
end
def enqueue_pin_fetches(%{pinned_objects: pins}) do
# enqueue a task to fetch all pinned objects
Enum.each(pins, fn {ap_id, _} ->
if is_nil(Object.get_cached_by_ap_id(ap_id)) do
Pleroma.Workers.RemoteFetcherWorker.enqueue("fetch_remote", %{
"id" => ap_id,
"depth" => 1
})
end
end)
end
def enqueue_pin_fetches(_), do: nil
def make_user_from_ap_id(ap_id, additional \\ []) do
user = User.get_cached_by_ap_id(ap_id)
with {:ok, data} <- fetch_and_prepare_user_from_ap_id(ap_id, additional) do
user =
if data.ap_id != ap_id do
User.get_cached_by_ap_id(data.ap_id)
else
user
end
if user do
user
|> User.remote_user_changeset(data)
|> User.update_and_set_cache()
|> tap(fn _ -> enqueue_pin_fetches(data) end)
else
maybe_handle_clashing_nickname(data)
data
|> User.remote_user_changeset()
|> Repo.insert()
|> User.set_cache()
|> tap(fn _ -> enqueue_pin_fetches(data) end)
end
end
end
def make_user_from_nickname(nickname) do
with {:ok, %{"ap_id" => ap_id, "subject" => "acct:" <> acct}} when not is_nil(ap_id) <-
WebFinger.finger(nickname) do
make_user_from_ap_id(ap_id, nickname_from_acct: acct)
else
_e -> {:error, "No AP id in WebFinger"}
end
end
# filter out broken threads
defp contain_broken_threads(%Activity{} = activity, %User{} = user) do
entire_thread_visible_for_user?(activity, user)
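The `alsoKnownAs` handling in `object_to_user_data/2` above (accept a list or a single string, then keep only http(s) URLs) can be sketched standalone; the anonymous-function names below are illustrative:

```elixir
# Normalise alsoKnownAs to a list, as in normalize_also_known_as/1 above.
normalize_aka = fn
  aka when is_list(aka) -> aka
  aka when is_binary(aka) -> [aka]
  nil -> []
end

# Keep only entries whose scheme parses as http or https.
filter_http = fn urls ->
  Enum.filter(urls, fn url ->
    case URI.parse(url) do
      %URI{scheme: scheme} when scheme in ["http", "https"] -> true
      _ -> false
    end
  end)
end

"https://other.example/users/alice"
|> normalize_aka.()
|> filter_http.()
# => ["https://other.example/users/alice"]
```

A bare string like `"not a url"` parses with a `nil` scheme and is silently dropped, which matches the defensive intent of the original filter.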


@ -57,6 +57,17 @@ defmodule Pleroma.Web.ActivityPub.Builder do
{:ok, data, []}
end
@spec emoji_object!({String.t(), String.t()}) :: map()
def emoji_object!({name, url}) do
# TODO: we should probably send mtime instead of unix epoch time for updated
%{
"icon" => %{"url" => "#{URI.encode(url)}", "type" => "Image"},
"name" => Emoji.maybe_quote(name),
"type" => "Emoji",
"updated" => "1970-01-01T00:00:00Z"
}
end
defp unicode_emoji_react(_object, data, emoji) do
data
|> Map.put("content", emoji)
@ -67,18 +78,7 @@ defmodule Pleroma.Web.ActivityPub.Builder do
data
|> Map.put("content", Emoji.maybe_quote(emoji))
|> Map.put("type", "EmojiReact")
|> Map.put("tag", [
%{}
|> Map.put("id", url)
|> Map.put("type", "Emoji")
|> Map.put("name", Emoji.maybe_quote(emoji))
|> Map.put(
"icon",
%{}
|> Map.put("type", "Image")
|> Map.put("url", url)
)
])
|> Map.put("tag", [emoji_object!({emoji, url})])
end
defp remote_custom_emoji_react(


@ -165,7 +165,6 @@ defmodule Pleroma.Web.ActivityPub.MRF.StealEmojiPolicy do
if !Enum.empty?(new_emojis) do
Logger.info("Stole new emojis: #{inspect(new_emojis)}")
Pleroma.Emoji.reload()
end
end


@ -15,6 +15,7 @@ defmodule Pleroma.Web.ActivityPub.ObjectValidators.AttachmentValidator do
field(:type, :string)
field(:mediaType, :string, default: "application/octet-stream")
field(:name, :string)
field(:summary, :string)
field(:blurhash, :string)
embeds_many :url, UrlObjectValidator, primary_key: false do
@ -44,7 +45,7 @@ defmodule Pleroma.Web.ActivityPub.ObjectValidators.AttachmentValidator do
|> fix_url()
struct
|> cast(data, [:id, :type, :mediaType, :name, :blurhash])
|> cast(data, [:id, :type, :mediaType, :name, :summary, :blurhash])
|> cast_embed(:url, with: &url_changeset/2, required: true)
|> validate_inclusion(:type, ~w[Link Document Audio Image Video])
|> validate_required([:type, :mediaType])


@ -44,9 +44,9 @@ defmodule Pleroma.Web.ActivityPub.ObjectValidators.TagValidator do
|> validate_required([:type, :href])
end
def changeset(struct, %{"type" => "Hashtag", "name" => name} = data) do
def changeset(struct, %{"type" => "Hashtag", "name" => full_name} = data) do
name =
cond do
case full_name do
"#" <> name -> name
name -> name
end


@ -25,6 +25,7 @@ defmodule Pleroma.Web.ActivityPub.ObjectValidators.UserValidator do
when type in Pleroma.Constants.actor_types() do
with :ok <- validate_pubkey(data),
:ok <- validate_inbox(data),
:ok <- validate_nickname(data),
:ok <- contain_collection_origin(data) do
{:ok, data, meta}
else
@ -83,4 +84,18 @@ defmodule Pleroma.Web.ActivityPub.ObjectValidators.UserValidator do
_, error -> error
end)
end
defp validate_nickname(%{"preferredUsername" => nick}) when is_binary(nick) do
if String.valid?(nick) do
:ok
else
{:error, "Nickname is not valid UTF-8"}
end
end
defp validate_nickname(%{"preferredUsername" => _nick}) do
{:error, "Nickname is not a valid string"}
end
defp validate_nickname(_), do: :ok
end
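The new `validate_nickname/1` check relies on `String.valid?/1`, which verifies that a binary is well-formed UTF-8. A quick standalone illustration (the `check` function is a hypothetical re-creation, not the validator itself):

```elixir
String.valid?("alice")
# => true
String.valid?(<<0xFF, 0xFE>>)
# => false — raw bytes that do not form valid UTF-8

check = fn
  %{"preferredUsername" => nick} when is_binary(nick) ->
    if String.valid?(nick), do: :ok, else: {:error, "Nickname is not valid UTF-8"}

  %{"preferredUsername" => _} ->
    {:error, "Nickname is not a valid string"}

  _ ->
    :ok
end

check.(%{"preferredUsername" => "alice"})
# => :ok
```

Actors without a `preferredUsername` pass through unchanged, matching the final `validate_nickname(_), do: :ok` clause above.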


@ -86,7 +86,7 @@ defmodule Pleroma.Web.ActivityPub.Publisher do
do: {:http_error, code, headers}
defp format_error_response(%Tesla.Env{} = env),
do: {:http_error, :connect, Pleroma.HTTP.Middleware.HTTPSignature.redact_keys(env)}
do: {:http_error, :connect, env}
defp format_error_response(response), do: response


@ -15,12 +15,12 @@ defmodule Pleroma.Web.ActivityPub.SideEffects do
alias Pleroma.Object
alias Pleroma.Repo
alias Pleroma.User
alias Pleroma.User.Fetcher, as: UserFetcher
alias Pleroma.Web.ActivityPub.ActivityPub
alias Pleroma.Web.ActivityPub.Builder
alias Pleroma.Web.ActivityPub.Pipeline
alias Pleroma.Web.ActivityPub.Utils
alias Pleroma.Web.ActivityPub.Visibility
alias Pleroma.Web.Push
alias Pleroma.Web.Streamer
alias Pleroma.Workers.PollWorker
@ -121,7 +121,7 @@ defmodule Pleroma.Web.ActivityPub.SideEffects do
nil
end
{:ok, notifications} = Notification.create_notifications(object, do_send: false)
{:ok, notifications, _} = Notification.create_notifications(object)
meta =
meta
@ -180,7 +180,8 @@ defmodule Pleroma.Web.ActivityPub.SideEffects do
liked_object = Object.get_by_ap_id(object.data["object"])
Utils.add_like_to_object(object, liked_object)
Notification.create_notifications(object)
{:ok, notifications, _} = Notification.create_notifications(object)
meta = add_notifications(meta, notifications)
{:ok, object, meta}
end
@ -199,7 +200,7 @@ defmodule Pleroma.Web.ActivityPub.SideEffects do
def handle(%{data: %{"type" => "Create"}} = activity, meta) do
with {:ok, object, meta} <- handle_object_creation(meta[:object_data], activity, meta),
%User{} = user <- User.get_cached_by_ap_id(activity.data["actor"]) do
{:ok, notifications} = Notification.create_notifications(activity, do_send: false)
{:ok, notifications, _} = Notification.create_notifications(activity)
{:ok, _user} = ActivityPub.increase_note_count_if_public(user, object)
{:ok, _user} = ActivityPub.update_last_status_at_if_public(user, object)
@ -211,6 +212,18 @@ defmodule Pleroma.Web.ActivityPub.SideEffects do
reply_depth = (meta[:depth] || 0) + 1
participations =
with true <- Visibility.is_direct?(activity),
{:ok, conversation} <-
ActivityPub.create_or_bump_conversation(activity, activity.actor) do
conversation
|> Repo.preload(:participations)
|> Map.get(:participations)
|> Repo.preload(:user)
else
_ -> []
end
Pleroma.Workers.NodeInfoFetcherWorker.enqueue("process", %{
"source_url" => activity.data["actor"]
})
@ -233,6 +246,7 @@ defmodule Pleroma.Web.ActivityPub.SideEffects do
meta =
meta
|> add_notifications(notifications)
|> add_streamables([{"participation", participations}])
ap_streamer().stream_out(activity)
@ -255,9 +269,11 @@ defmodule Pleroma.Web.ActivityPub.SideEffects do
Utils.add_announce_to_object(object, announced_object)
if !User.is_internal_user?(user) do
Notification.create_notifications(object)
{:ok, notifications, _} = Notification.create_notifications(object)
meta = add_notifications(meta, notifications)
if !User.is_internal_user?(user) do
# XXX: this too should be added to meta and only done after transaction
ap_streamer().stream_out(object)
end
@ -280,7 +296,8 @@ defmodule Pleroma.Web.ActivityPub.SideEffects do
reacted_object = Object.get_by_ap_id(object.data["object"])
Utils.add_emoji_reaction_to_object(object, reacted_object)
Notification.create_notifications(object)
{:ok, notifications, _} = Notification.create_notifications(object)
meta = add_notifications(meta, notifications)
{:ok, object, meta}
end
@ -411,11 +428,7 @@ defmodule Pleroma.Web.ActivityPub.SideEffects do
changeset
|> User.update_and_set_cache()
else
{:ok, new_user_data} = ActivityPub.user_data_from_user_object(updated_object)
User.get_by_ap_id(updated_object["id"])
|> User.remote_user_changeset(new_user_data)
|> User.update_and_set_cache()
UserFetcher.update_user_with_apdata(updated_object)
end
{:ok, object, meta}
@ -557,10 +570,7 @@ defmodule Pleroma.Web.ActivityPub.SideEffects do
defp send_notifications(meta) do
Keyword.get(meta, :notifications, [])
|> Enum.each(fn notification ->
Streamer.stream(["user", "user:notification"], notification)
Push.send(notification)
end)
|> Notification.send()
meta
end
@ -574,13 +584,17 @@ defmodule Pleroma.Web.ActivityPub.SideEffects do
meta
end
defp add_notifications(meta, notifications) do
existing = Keyword.get(meta, :notifications, [])
meta
|> Keyword.put(:notifications, notifications ++ existing)
defp add_to_list(meta, key, entries) do
existing = Keyword.get(meta, key, [])
Keyword.put(meta, key, entries ++ existing)
end
defp add_notifications(meta, notifications),
do: add_to_list(meta, :notifications, notifications)
defp add_streamables(meta, streamables),
do: add_to_list(meta, :streamables, streamables)
@impl true
def handle_after_transaction(meta) do
meta


@ -339,6 +339,7 @@ defmodule Pleroma.Web.ActivityPub.Transmogrifier do
}
|> Maps.put_if_present("mediaType", media_type)
|> Maps.put_if_present("name", data["name"])
|> Maps.put_if_present("summary", data["summary"])
|> Maps.put_if_present("blurhash", data["blurhash"])
else
nil
@ -878,6 +879,29 @@ defmodule Pleroma.Web.ActivityPub.Transmogrifier do
{:ok, data}
end
def prepare_outgoing(%{"type" => "Update", "object" => %{"type" => objtype} = object} = data)
when objtype in Pleroma.Constants.actor_types() do
object =
object
|> maybe_fix_user_object()
|> strip_internal_fields()
data =
data
|> Map.put("object", object)
|> strip_internal_fields()
|> Map.merge(Utils.make_json_ld_header())
|> Map.delete("bcc")
{:ok, data}
end
def prepare_outgoing(%{"type" => "Update", "object" => %{}} = data) do
err_msg = "Requested to serve an Update for non-updateable object type: #{inspect(data)}"
Logger.error(err_msg)
raise err_msg
end
def prepare_outgoing(%{"type" => "Announce", "actor" => ap_id, "object" => object_id} = data) do
object =
object_id
@ -1004,29 +1028,19 @@ defmodule Pleroma.Web.ActivityPub.Transmogrifier do
def take_emoji_tags(%User{emoji: emoji}) do
emoji
|> Map.to_list()
|> Enum.map(&build_emoji_tag/1)
|> Enum.map(&Builder.emoji_object!/1)
end
# TODO: we should probably send mtime instead of unix epoch time for updated
def add_emoji_tags(%{"emoji" => emoji} = object) do
tags = object["tag"] || []
out = Enum.map(emoji, &build_emoji_tag/1)
out = Enum.map(emoji, &Builder.emoji_object!/1)
Map.put(object, "tag", tags ++ out)
end
def add_emoji_tags(object), do: object
defp build_emoji_tag({name, url}) do
%{
"icon" => %{"url" => "#{URI.encode(url)}", "type" => "Image"},
"name" => ":" <> name <> ":",
"type" => "Emoji",
"updated" => "1970-01-01T00:00:00Z"
}
end
def set_conversation(object) do
Map.put(object, "conversation", object["context"])
end
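`Builder.emoji_object!/1` (introduced above to replace the duplicated `build_emoji_tag/1`) turns a `{name, url}` pair into an AS2 Emoji tag. A hedged sketch of the resulting shape, with `maybe_quote` inlined as a hypothetical stand-in for `Emoji.maybe_quote/1`:

```elixir
# Wrap the shortcode in colons unless it already starts with one
# (a simplified stand-in for Emoji.maybe_quote/1).
maybe_quote = fn
  ":" <> _ = name -> name
  name -> ":" <> name <> ":"
end

emoji_object = fn {name, url} ->
  %{
    "icon" => %{"url" => URI.encode(url), "type" => "Image"},
    "name" => maybe_quote.(name),
    "type" => "Emoji",
    # see the TODO above: a real mtime would be preferable here
    "updated" => "1970-01-01T00:00:00Z"
  }
end

emoji_object.({"blobcat", "https://example.com/emoji/blobcat.png"})["name"]
# => ":blobcat:"
```

Centralising this in the Builder lets both `take_emoji_tags/1` and `add_emoji_tags/1` map over the user's emoji with the same function, as the Transmogrifier hunk above shows.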


@ -101,6 +101,8 @@ defmodule Pleroma.Web.ActivityPub.Utils do
"@context" => [
"https://www.w3.org/ns/activitystreams",
"#{Endpoint.url()}/schemas/litepub-0.1.jsonld",
# FEP-2c59
"https://purl.archive.org/socialweb/webfinger",
%{
"@language" => "und",
"htmlMfm" => "https://w3id.org/fep/c16b#htmlMfm"