This was accidentally broken in c8e0f7848b
due to a one-letter mistake in the plug option name and an absence of
tests. Therefore it was once again possible to serve e.g. Javascript or
CSS payloads via uploads and emoji.
However, due to other protections it was still NOT possible for anyone to
serve any payload with an ActivityPub Content-Type. With the CSP policy
hardening from previous JS payload exploits predating the Content-Type
sanitisation, there is currently no known way of abusing this weakened
Content-Type sanitisation, but it should be fixed regardless.
This commit fixes the option name and adds tests to ensure
such a regression doesn't occur again in the future.
Reported-by: Lain Soykaf <lain@lain.com>
When note editing support was added, stripping internal fields from
edited notes and their history was omitted.
This was uncovered due to Mastodon inlining the like count as a "likes"
collection conflicting with our internal "likes" list, causing validation
failures. In a spot check with likes/like_count it was not possible to
inject those internal fields into the local db via Update, but this
was not extensively tested for all fields and avenues.
Similarly, address normalisation did not normalise addressing in the
object history, although this was never at risk of being exploitable.
The revision history of the Pleroma MR adding edit support reveals
recursive stripping was intentionally avoided, since it will end up
removing e.g. emoji from outgoing activities. This appears to still
be true. However, all current internal fields ("pleroma_internal"
appears to be unused) contain data already publicised otherwise anyway.
In the interest of quickly fixing a federation bug (and at worst potential
data injection), outgoing stripping is left non-recursive for now.
Of course the ultimate fix here is to not mix remote and internal data
into the same map in the first place, but unfortunately having a single
map of all truth is a core assumption of *oma's AP doc processing.
Changing this is a massive undertaking and not suitable for providing
a short-term fix.
We expect most requests to be made for the actual canonical ID,
so check this one first (starting without query headers matching the
predominant albeit spec-breaking version).
Also avoid unnecessary rewrites of the digest header on each route
alias by just setting it once before iterating through aliases.
This matches behaviour prior to the SigningKey migration
and the expected semantics of the http_signatures lib.
Additionally add a min interval parameter to avoid
refetch floods on bugs causing incompatible signatures
(like e.g. currently with Bridgy).
User updates broke with the migration to separate signing keys
since user data carries signing keys but we didn't allow the
association data to be updated.
Previously there were mainly two attack vectors:
- for raw keys the owner <-> key mapping wasn't verified at all
- keys were retrieved with refetching allowed
and only the top-level ID was sanitised while
usually keys are but a subobject
This reintroduces public key checks in the user actor,
previously removed in 9728e2f8f7
but now adapted to account for the new mapping mechanism.
Notably at least two instances were not properly guarded from path
traversal attacks before and are only now fixed by using SafeZip:
- frontend installation never checked for malicious paths.
But given a malicious frontend could already, e.g. steal
all user tokens even without this, in the real world
admins should only use frontends from trusted sources
and the practical implications are minimal
- the emoji pack update/upload API taking a ZIP file
did not protect against path traversal. While atm
only admins can use these emoji endpoints, emoji
packs are typically considered "harmless" and used
without prior verification from various sources.
Thus this appears more concerning.
This will replace all the slightly different safety workarounds at
different ZIP handling sites and ensure safety is actually consistently
enforced everywhere while also making code cleaner and easier to
follow.
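A minimal sketch of the kind of entry check a consolidated helper like SafeZip centralises (the module and function names here are illustrative, not the actual SafeZip API; `Path.safe_relative/1` is part of Elixir’s stdlib since 1.14):

```elixir
defmodule ZipPathCheck do
  # Hedged sketch: reject any archive entry whose name would escape the
  # extraction directory; the real SafeZip logic may differ in detail.
  @spec safe_entry?(charlist() | String.t()) :: boolean()
  def safe_entry?(name) do
    case Path.safe_relative(to_string(name)) do
      {:ok, _relative} -> true
      :error -> false
    end
  end
end
```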
The first collection page is (sometimes?) inlined,
which caused crashes when attempting to log the fetch failure.
But there’s no need to fetch it and we can treat it like the other inline collection.
And for now treat partial fetches as a success, since for all
current users partial collection data is better than no data at all.
If an error occurred while fetching a page, this previously
returned a bogus {:ok, {:error, _}} success, causing the error
to be attached to the object as a reply list, subsequently
leading to the whole post getting rejected during validation.
Also the pinned collection caller did not actually handle
the preexisting error case, resulting in process crashes.
Oban catches crashes to handle job failure and retry,
thus it never bubbles up all the way and nothing is logged by default.
For better debugging, catch and log any crashes.
The object lookup is later repeated in the validator, but due to
caching shouldn't incur any noticeable performance impact.
It’s actually preferable to check here, since it avoids the otherwise
occurring user lookup and the overhead from starting and aborting a
transaction.
Most of them actually only accept either activities or a
non-activity object later; querying both is then a waste
of resources and may create false positives.
The only thing this does is changing the updated_at field of the user.
Afaict this came to be because prior to pins federating this was split
into two functions, one of which created a changeset, the other applying
a given changeset. When this was merged the bits were just copied into
place.
User.get_or_fetch_by_(apid|nickname) are the only external users of fetch_and_prepare_user_from_ap_id,
thus there’s no point in duplicating logging, especially not at error level.
Currently (duplicated) _not_found errors for users make up the bulk of my logs
and are created almost every second. Deleted users are a common occurrence and not
worth logging outside of debug.
To facilitate this ObjectValidator.fetch_actor_and_object is adapted to
return an informative error. Otherwise we’d be unable to make an
informed decision on retrying or not later. There’s no point in
retrying to fetch MRF-blocked stuff or private posts for example.
This is the only user of fetch_actor_and_object which previously just
always pretended to be successful. For all the activity types handled
here, we absolutely need the referenced object to be able to process it.
(Other than for Announce, whether or not processing those activity types
for unknown remote objects is desirable in the first place is up for debate.)
All other users of the similar fetch_actor already properly check success.
Note, this currently lumps all resolve failure reasons together,
so even e.g. boosts of MRF-rejected posts will still exhaust all
retries. The following commit improves on this.
It makes decisions based on error sources harder since all possible
nesting levels need to be checked for. As shown by the return values
handled in the receiver worker something else still nests those,
but this is a first start.
This value is currently only used by Prometheus metrics
but is (after optimising the peer query in the preceding commit)
the most costly part of instance stats.
This query is one of the top cost offenders during an instance's
lifetime. For small instances it was shown to take up 30-50% of
the total database query time, while for bigger instances it still held
a spot in the top 3 — almost as or even more expensive overall than
timeline queries!
The good news is, there’s a cheaper way using the instance table:
no need to process each entry, no need to filter NULLs
and no need to dedupe. EXPLAIN estimates the cost of the
old query as 13272.39 and the cost of the new query as 395.74
for me; i.e. a 33-fold reduction.
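A hedged sketch of the cheaper approach (module names follow the usual *oma layout; the actual query in this commit may differ):

```elixir
defmodule PeerStats do
  import Ecto.Query

  # Count one row per known host from the instances table instead of
  # deduplicating the host portion of every remote user's ap_id.
  def peer_count do
    Pleroma.Instances.Instance
    |> select([i], count(i.host))
    |> Pleroma.Repo.one()
  end
end
```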
Results can slightly differ. E.g. we might have an old user
predating the instance table's existence with no interaction since,
or no instance table entry due to a failure to query nodeinfo.
Conversely, we might have an instance entry but all known users got
deleted since.
However, this seems unproblematic in practice
and well worth the perf improvement.
Given the previous query didn’t exclude unreachable instances,
neither does the new query.
Ideally we’d like to split this up more and count most invalid documents
as an error, but silently drop e.g. Deletes for unknown objects.
However, this is hard to extract from the changeset and jobs canceled
with :discard don’t count as exceptions and I’m not aware of an idiomatic
way to cancel further retries while retaining the exception status.
Thus at least keep a log, but since superfluous "Delete"s
seem kinda frequent, don't log at error, only info level.
Retrying seems unlikely to be helpful:
- if it timed out, chances are the short delay before reattempting
won't give the remote enough time to recover from its outage and
a longer delay makes the job pointless as users likely scrolled
further already. (Likely this is already the case after the
first 20s timeout.)
- if the remote data is so borked we couldn't even parse it far enough
for an "invalid metadata" error, chances are it will remain borked
upon reattempt
There were two issues leading to needless effort:
Most importantly, the use of AP IDs as "source_url" meant multiple
simultaneous jobs got scheduled for the same instance even with the
default unique settings.
Also jobs were scheduled unconditionally for each processed AP object,
meaning we incurred overhead from managing Oban jobs even if we knew it
wasn't necessary. By comparison the single query to check if an update
is needed should be cheaper overall.
The return value is never used here; later stages which actually need it
fetch the user themselves and it doesn't matter whether we wait for the
fetch here or later (if needed at all).
Even more, this early fetch always fails if the user was already deleted
or never known to begin with, but we get something referencing it; e.g.
the very Delete action carrying out the user deletion.
This prevents processing of the Delete, but before that it will be
reattempted several times, each time attempting to fetch the
non-existing profile, wasting resources.
It was used to migrate OStatus connections to ActivityPub if possible,
but support for OStatus was long since dropped, all new actors are always AP,
and if anything wasn't migrated before, their instance is already marked
as unreachable anyway.
The associated logic was also buggy in several ways and deleted users
got set to ap_enabled=false, also causing some issues.
This patch is a pretty direct port of the original Pleroma MR;
follow-up commits will further fix and clean up remaining issues.
Changes made (other than trivial merge conflict resolutions):
- converted CHANGELOG format
- adapted migration id for Akkoma’s timeline
- removed ap_enabled from additional tests
Ported-from: https://git.pleroma.social/pleroma/pleroma/-/merge_requests/3880
Otherwise attachments have a high chance to disappear with akkoma-fe’s
“delete & redraft” feature when cleanup is enabled in the backend. Since
we don't know whether a deletion was intended to be part of a redraft
process or whether the redraft was abandoned, we still have to
delete attachments eventually.
A thirty minute delay should provide sufficient time for redrafting.
Fixes: AkkomaGang/akkoma#775
The cleanup attachment worker was run for every deleted post,
even if it’s a remote post whose attachments we don't even store.
This was especially bad due to attachment cleanup involving a
particularly heavy query wasting a bunch of database perf for nil.
This was uncovered by comparing statistics from
AkkomaGang/akkoma#784 and
AkkomaGang/akkoma#765 (comment)
Current logic unconditionally adds public addressing to "cc"
and follower addressing to "to" after attempting to strip it
from the other one. This creates serious problems:
First, the bug prompting this investigation and fix:
unconditional addition creates duplicates when addressing
URIs already were in their intended final field; e.g.
this is prominently the case for all "unlisted" posts.
Since List.delete only removes the first occurrence,
this then broke follower-address stripping later on,
making the policy ineffective.
It’s also just not safe in general wrt non-public addressing:
e.g. pre-existing duplicates didn’t get fully stripped,
bespoke addressing modes with only one of public addressing
or follower addressing are mangled — and most importantly:
any belatedly received DM or follower-only post
also got public addressing added!
Shockingly this last point was actually asserted as "correct" in tests;
it appears to be a mistake from mindless match adjustments
while fixing crashes on nil addressing in
10c792110e.
Clean this sloppy logic up, making sure no more duplicates are
added by us, all instances of relevant addresses are purged and only
re-added when they actually existed to begin with.
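A hedged sketch of the intended semantics (the helper names are illustrative, not the actual functions in the policy):

```elixir
defmodule AddressingFix do
  # Strip every occurrence (List.delete/2 only removes the first one) and
  # only re-add the address to the target field if it was present anywhere.
  def move_address(message, field_from, field_to, address) do
    from = message[field_from] || []
    to = message[field_to] || []
    had_address? = address in from or address in to

    message
    |> Map.put(field_from, Enum.reject(from, &(&1 == address)))
    |> Map.put(field_to, maybe_add(Enum.reject(to, &(&1 == address)), address, had_address?))
  end

  defp maybe_add(list, address, true), do: list ++ [address]
  defp maybe_add(list, _address, false), do: list
end
```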
Current AP spec demands anonymous objects to have an id value,
but explicitly set to JSON null. However, as it turns out, this is
incompatible with JSON-LD requiring `@id` to be a string and thus the AP
spec is incompatible with the Activity Streams spec it is based on.
This is an issue for (the few) AP implementers actually performing
JSON-LD processing, like IceShrimp.NET.
This was uncovered by IceShrimp.NET’s zotan due to our adoption of
anonymous objects for emoji in f101886709.
The issue is being discussed by W3C, and will most likely be resolved
via an errata redefining anonymous objects to completely omit the id
field just like transient objects already do. See:
https://github.com/w3c/activitypub/issues/476
Fixes: AkkomaGang/akkoma#848
Currently pruning hashtags with the prune_objects task only accounts
for whether that hashtag is associated with an object, but this may
lead to a foreign key constraint violation if that hashtag has no
objects but is followed by a local user.
This adds an additional check to see if that hashtag has any followers
before proceeding to delete it.
Instead of trying to update the version of asdf being used, just point
users to the guide on their website.
Ideally we'd do this for Elixir and Erlang as well, but new versions of
those packages may sometimes have compatibility issues with Akkoma. For
now, update those to the latest OTP and Elixir versions known to be
compatible with Akkoma.
Turns out this is also used to set the default values in adminfe.
However, this URL may break with newer Akkoma-FE versions.
Instead, set this to blank so that it falls back to the default
NSFW cover image set at build time on Akkoma-FE.
Since we now remember the final location redirects lead to
and use it for all further checks since
3e134b07fa, these redirects
can no longer be exploited to serve counterfeit objects.
This fixes:
- display URLs from independent webapp clients
redirecting to the canonical domain
- Peertube display URLs for remote content
(acting like the above)
As hinted at in the commit message when strict checking
was added in 8684964c5d,
refetching is more robust than display URL comparison
but in exchange is harder to implement correctly.
A similar refetch approach is also employed by
e.g. Mastodon, IceShrimp and FireFish.
To make sure no checks can be bypassed by forcing
a refetch, id checking is placed at the very end.
This will fix:
- Peertube display URL arrays our transmogrifier fails to normalise
- non-canonical display URLs from alternative frontends
(theoretical; we didn’t get any actual reports about this)
It will also be helpful in the planned key handling overhaul.
The modified user collision test was introduced in
https://git.pleroma.social/pleroma/pleroma/-/merge_requests/461
and unfortunately the issues this fixes aren’t public.
Afaict it was just meant to guard against someone serving
faked data belonging to an unrelated domain. Since we now
refetch and the id actually is mocked, lookup now succeeds
but will use the real data from the authoritative server
making it unproblematic. Instead modify the fake data further
and make sure we don’t end up using the spoofed version.
- pass env vars the proper™ way
- write log to file
- drop superfluous command_background
- make settings easily overwritable via conf.d
to avoid needing to edit the service file directly
if e.g. Akkoma was installed to another location
There was one test that used MFM and now failed due to the new representation. This is now adapted so it doesn't fail any more.
There was another test failing, but I don't see how this could have been affected by the MFM changes...
But I did pull in newer dependencies, so I thought maybe a newer Earmark dependency was now failing, and indeed it was.
By explicitly asking for 1.4.46 (according to mix.lock the version it was before), it now works again.
This is what was failing. It seems that earmark 1.4.47 removed everything before the comment, which it should not do.
1) test format_input/3 with markdown raw HTML (Pleroma.Web.CommonAPI.UtilsTest)
test/pleroma/web/common_api/utils_test.exs:213
Assertion with == failed
code: assert result == ~s"<a href=\"http://example.org/\">OwO</a>"
left: ""
right: "<a href=\"http://example.org/\">OwO</a>"
stacktrace:
test/pleroma/web/common_api/utils_test.exs:216: (test)
See <AkkomaGang/akkoma#381>
We can't use the HTML content as-is.
[FEP-c16b](https://codeberg.org/fediverse/fep/src/branch/main/fep/c16b/fep-c16b.md) was
written to have a "scrubber friendly" way of representing MFM functions in HTML. Here
we add support in the backend for the functions Foundkey supports. The front-ends then
need to add support to make sure the HTML representation is turned into a correct view
(i.e. with the help of CSS and JavaScript).
FEP-c16b also has a discovery mechanism to indicate to receiving servers that they can
use the `content` directly. This is not implemented in this commit.
Ever since the browser frontend switcher was introduced in
de64c6c54a /akkoma counts as
an API prefix and thus gets skipped by frontend plugs
breaking the old swagger ui path of /akkoma/swagger-ui.
Do the simple thing and change the frontend path to
/pleroma/swaggerui which isn't an API path and can't collide
with frontend user paths given pleroma is a reserved nickname.
Reported in:
https://meta.akkoma.dev/t/view-all-endpoints/269/7
https://meta.akkoma.dev/t/swagger-ui-not-loading/728
Mastodon API demands this be null unless it’s a multi-selection poll.
Not abiding by this can mess up display in some clients.
Fixes: AkkomaGang/akkoma#190
Multiple profiles can be specified as a space-separated list
and the possibility of additional profiles is explicitly brought up
in the ActivityStreams spec
Not _yet_ supported as of exiftool 12.87, though
at first glance it seems like standard BMP files
can't store any metadata besides colour profiles
Fixes the specific case from
AkkomaGang/akkoma-fe#396
although the frontend shouldn’t get bricked regardless.
The debug logs are very noisy and can be enabled during analysis
of a specific error believed to be SQL-related
--
Before log capturing those debug messages were still hidden,
but with log capturing they show up in the output of failed
tests unless disabled.
Cherry-picked-from: e628d00a81
Currently `mix test` prints a slew of logs in the terminal
with messages from different tests interspersed. Globally
enabling log capture hides log messages unless a test fails,
reducing noise and making it easier to analyse the important
(from failed tests) messages.
Compiler warnings and a few messages not printed via Logger
still show up but it's much more readable than before.
Ported from: 3aed111a42
We have a bunch of mysterious sporadic failures which usually disappear
when rerunning failed jobs only. Ideally we should locate and fix the
cause of those sporadic failures, but until we figure this out retrying
once makes CI status less useless.
Fragments are already always stripped anyway
so listing one specific fragment here is
unnecessary and potentially confusing.
This effectively reverts
4457928e32
but keeps the added bridgy testcase.
Usually an id should point to another AP object
and the image file isn’t an AP object. We currently
do not provide standalone AP objects for emoji and
don't keep track of remote emoji at all.
Thus just federate them as anonymous objects,
i.e. objects only existing within a parent context
and using an explicit null id.
IceShrimp.NET previously adopted anonymous objects
for remote emoji without any apparent issues. See:
333611f65e
Fixes: AkkomaGang/akkoma#694
We’ve received reports of some specific instances slowly accumulating
more and more binary data over time up to OOMs and globally setting
ERL_FULLSWEEP_AFTER=0 has proven to be an effective countermeasure.
However, this incurs increased cpu perf costs everywhere and is
thus not suitable to apply out of the box.
Apparently long-lived Phoenix websocket processes are known to
often cause exactly this by getting into a state unfavourable
for the garbage collector.
Therefore it seems likely affected instances are using timeline
streaming and do so in just the right way to trigger this. We
can tune the garbage collector just for websocket processes
and use a more lenient value of 20 to keep the added perf cost
in check.
Testing on one affected instance appears to confirm this theory
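As a hedged illustration of per-process GC tuning: the actual change configures the websocket transport, but the BEAM exposes the same knob as a `:fullsweep_after` spawn option, shown here on a throwaway process.

```elixir
# A process spawned with a low fullsweep_after threshold runs a full-sweep
# collection after 20 minor collections instead of the global default,
# releasing fragmented off-heap binaries sooner.
pid =
  :erlang.spawn_opt(
    fn ->
      receive do
        :stop -> :ok
      end
    end,
    [{:fullsweep_after, 20}]
  )
```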
Ref.:
https://www.erlang.org/doc/man/erlang#ghlink-process_flag-2-idp226
https://blog.guzman.codes/using-phoenix-channels-high-memory-usage-save-money-with-erlfullsweepafter
https://git.pleroma.social/pleroma/pleroma/-/merge_requests/4060
Tested-by: bjo
Since those old migrations will now most likely only run during db init,
there’s not much point in running them in the background concurrently
anyway, so just drop the concurrent setting rather than disabling
migration locks.
Currently Akkoma doesn't have any proper mitigations against BREACH,
which exploits the use of HTTP compression to exfiltrate sensitive data.
(see: AkkomaGang/akkoma#721 (comment))
To err on the side of caution, disable gzip compression for now until we
can confirm that there's some sort of mitigation in place (whether that
would be Heal-The-Breach on the Caddy side or any Akkoma-side
mitigations).
Ever since 364b6969eb
this setting wasn't used by the backend and was a noop.
The stated usecase is better served by setting the base_url
to a local subdomain and using proxying in nginx/Caddy/...
Websites are increasingly getting more bloated with tricks like inlining content (e.g., CNN.com) which puts pages at or above 5MB. This value may still be too low.
Rich Media parsing was previously handled on-demand with a 2 second HTTP request timeout and retained only in Cachex. Every time a Pleroma instance is restarted it will have to request and parse the data for each status with a URL detected. When fetching a batch of statuses they were processed in parallel to attempt to keep the maximum latency at 2 seconds, but this often resulted in a timeline appearing to hang during loading due to a URL that could not be successfully reached. URLs which had image links that expire (Amazon AWS) were parsed and inserted with a TTL to ensure the image link would not break.
Rich Media data is now cached in the database and fetched asynchronously. Cachex is used as a read-through cache. When the data becomes available we stream an update to the clients. If the result is returned quickly the experience is almost seamless. Activities were already processed for their Rich Media data during ingestion to warm the cache, so users should not normally encounter the asynchronous loading of the Rich Media data.
Implementation notes:
- The async worker is a Task with a globally unique process name to prevent duplicate processing of the same URL
- The Task will attempt to fetch the data 3 times with increasing sleep time between attempts
- The HTTP request obeys the default HTTP request timeout value instead of 2 seconds
- URLs that cannot be successfully parsed due to an unexpected error receive a negative cache entry for 15 minutes
- URLs that fail with an expected error will receive a negative cache entry with no TTL
- Activities that have no detected URLs insert a nil value in the Cachex :scrubber_cache so we do not repeat parsing the object content with Floki every time the activity is rendered
- Expiring image URLs are handled with an Oban job
- There is no automatic cleanup of the Rich Media data in the database, but it is safe to delete at any time
- The post draft/preview feature makes the URL processing synchronous so the rendered post preview will have an accurate rendering
Overall performance of timelines and creating new posts which contain URLs is greatly improved.
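A hedged sketch of the read-through pattern described above (the cache name, schema module, and helper are illustrative, not the actual module layout; `Cachex.fetch/3` and `Repo.get_by/2` are real APIs):

```elixir
defmodule RichMedia.ReadThrough do
  # Cachex acts as a read-through cache in front of the database copy;
  # a miss with no DB row is left for the async worker to fill in later.
  def get_card(url) do
    Cachex.fetch(:rich_media_cache, url, fn _url ->
      case Pleroma.Repo.get_by(RichMedia.Card, url: url) do
        nil -> {:ignore, nil}
        card -> {:commit, card}
      end
    end)
  end
end
```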
This lets us:
- avoid issues with broken hash indices for PostgreSQL <10
- drop runtime checks and legacy codepaths for <11 in db search
- always enable custom query plans for performance optimisation
PostgreSQL 11 is already EOL since 2023-11-09, so
in theory everyone should already have moved on to 12 anyway.
From experience, setting DB type to "Online transaction processing
system" seems to give the most optimal configuration in terms of
performance.
I also increased the recommended max connections to 25-30 as that leaves
some room for maintenance tasks to run without running out of
connections.
Finally, I removed the example configs since they're probably out of
date and I think it's better to direct people to use PGTune instead.
Logger output being visible depends on user configuration, but most of
the prints in mix tasks should always be shown. When running inside a
mix shell, it’s probably preferable to send output directly to it rather
than using raw IO.puts and we already have shell_* functions for this,
let’s use them everywhere.
Pruning can go on for a long time; give admins some insight that
something is happening to make it less frustrating and to make it easier
to tell which part of the process is stalled should this happen.
Again most of the changes are merely reindents;
review with whitespace changes hidden recommended.
May sometimes be helpful to get more predictable runtime
than just with an age-based limit.
The subquery for the non-keep-threads path is required
since delete_all does not directly accept limit().
Again most of the diff is just adjusting indentation, best
hide whitespace-only changes with git diff -w or similar.
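A hedged Ecto sketch of why the subquery mentioned above is needed (schema and repo names as in *oma; the actual query may differ):

```elixir
import Ecto.Query

limit = 1_000

# delete_all does not directly accept limit(), so select a limited batch of
# ids in a subquery and delete by membership instead.
limited = from(o in Pleroma.Object, select: o.id, limit: ^limit)
Pleroma.Repo.delete_all(from(o in Pleroma.Object, where: o.id in subquery(limited)))
```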
This gives feedback when to stop rerunning limited batches.
Most of the diff is just adjusting indentation; best reviewed
with whitespace-only changes hidden, e.g. `git diff -w`.
This part of pruning can be very expensive and bog down the whole
instance to an unusable state for a long time. It can thus be desirable
to split it from prune_objects and run it on its own in smaller limited batches.
If the batches are small enough and spaced out a bit, it may even be possible
to avoid any downtime. If not, the limit can still help to at least make the
downtime duration somewhat more predictable.
Using only the admin key works as well currently
and Akkoma needs to know the admin key to be able
to add new entries etc. However the Meilisearch
key descriptions suggest the admin key is not
supposed to be used for searches, so let’s not.
For compatibility with existing configs, search_key remains optional.
This makes show-key’s output match our documentation as of Meilisearch
1.8.0-8-g4d5971f343c00d45c11ef0cfb6f61e83a8508208. Since I’m not sure
if older versions maybe only provided description, it will fall back to
the latter if no name parameter exists.
Meilisearch is already configured to return results sorted by a
particular ranking configured in the meilisearch CLI task.
Resorting the returned top results by date partially negates this and
runs counter to what someone with tweaked settings expects.
Issue and fix identified by AdamK2003 in
AkkomaGang/akkoma#579
But instead of using a O(n^2) resorting, this commit directly
retrieves results in the correct order from the database.
Closes: AkkomaGang/akkoma#579
And while at it, point to this via a top-level
FEDERATION.md document as standardised by FEP-67ff.
Also add a few missing descriptions to the config cheatsheet
and move the recently removed C2S extension into an appropriate
subsection.
Trying to display non-media as media crashed the renderer.
When posting a status with a valid, non-media object id
the post was still created and then crashed e.g. timeline rendering.
It also crashed C2S inbox reads, so this could not be used to leak
private posts.
Afaict this was never used, but keeping this (in theory) possible
hinders detecting which objects are actually media uploads and
which proper ActivityPub objects.
It was originally added as part of upload support itself in
02d3dc6869 without being used
and `git log -S:activity_type` and `git log -Sactivity_type:`
don't find any other commits using this.
In Mastodon media can only be used by owners and only be associated with
a single post. We currently allow media to be associated with several
posts and until now did not limit their usage in posts to media owners.
However, media update and GET lookup was already limited to owners.
(In accordance with allowing media reuse, we also still allow GET
lookups of media already used in a post unlike Mastodon)
Allowing reuse isn’t problematic per se, but allowing use by non-owners
can be problematic if media ids of private-scoped posts can be guessed
since creating a new post with this media id will reveal the uploaded
file content and alt text.
Given media ids are currently just part of a sequential series shared
with some other objects, guessing media ids is, with some persistence,
indeed feasible.
E.g. sampling some public media ids from a real-world
instance with 112 total and 61 monthly-active users:
17.465.096 at t0
17.472.673 at t1 = t0 + 4h
17.473.248 at t2 = t1 + 20min
This gives about 30 new ids per minute of which most won't be
local media but remote and local posts, poll answers etc.
Assuming the default ratelimit of 15 post actions per 10s, scraping all
media for the 4h interval takes about 84 minutes and scraping the 20min
range a mere 6.3 minutes. (Until the preceding commit, post updates were
not rate limited at all, allowing even faster scraping.)
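As a rough sanity check of those figures (plain Elixir shell arithmetic, using the sampled ids above and the default rate limit):

```elixir
# 7_577 new ids between t0 and t1, i.e. roughly 30 per minute.
ids_in_4h = 17_472_673 - 17_465_096
# Default ratelimit: 15 post actions per 10s = 90 per minute.
actions_per_minute = 15 / 10 * 60
# ≈ 84 minutes to scrape every id from the 4h window.
minutes_to_scrape_4h = ids_in_4h / actions_per_minute
```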
If an attacker can infer (e.g. via reply to a follower-only post not
accessible to the attacker) some sensitive information was uploaded
during a specific time interval and has some pointers regarding the
nature of the information, identifying the specific upload out of all
scraped media for this timerange is not impossible.
Thus restrict media usage to owners.
Checking ownership just in ActivityDraft would already be sufficient,
since when a scheduled status actually gets posted it goes through
ActivityDraft again, but would erroneously return a success status
when scheduling an illegal post.
Independently discovered and fixed by mint in Pleroma
1afde067b1
In MastoAPI media descriptions are updated via the
media update API not upon post creation or post update.
This functionality was originally added about 6 years ago in
ba93396649 which was part of
https://git.pleroma.social/pleroma/pleroma/-/merge_requests/626 and
https://git.pleroma.social/pleroma/pleroma-fe/-/merge_requests/450.
They introduced image descriptions to the front- and backend,
but predate adoption of Mastodon API.
For a while adding a `descriptions` array on post creation might have
continued to work as an undocumented Pleroma extension to Masto API, but
at latest when OpenAPI specs were added for those endpoints four years
ago in 7803a85d2c, these codepaths ceased
to be used. The API specs don’t list a `descriptions` parameter and
any unknown parameters are stripped out.
The attachments_from_ids function is only called from
ScheduledActivity and ActivityDraft.create with the latter
only being called by CommonAPI.{post,update} which in turn
are only called from ScheduledActivity again, MastoAPI controller
and without any attachment or description parameter WelcomeMessage.
Therefore no codepath can contain a descriptions parameter.
Direct users to add in the appropriate headers and update the listening
port instead of copy/pasting a config that's already outdated and
probably would otherwise have to be synced with the main example nginx
config.
Since the configuration options on the nginx side already exist in the
sample config, there's no need to tell users to copy-paste those
settings in again.
The /var/tmp directory is not mounted as tmpfs unlike /tmp which is
mounted as such on some distros like Fedora or Arch. Since there isn't
really a benefit to having the cache on tmpfs, this change should allow
for a larger cache if needed without worrying about running out of RAM.
Applying works fine with a 20220220135625 version, but it won’t be
rolled back in the right order. Fortunately this action is idempotent
so we can just rename and reapply it with a new id.
To also not break large-scale rollbacks past 2022 for anyone
who already applied it with the old id, keep a stub migration.
This promotes and expands our existing optional migration.
Based on usage statistics from several instances, see:
AkkomaGang/akkoma#764
activities_hosts is now retained after all since it’s essential
for the "instance" query parameter of *oma’s public timeline to
reliably work in a reasonable amount of time. (Although akkoma-fe has
no support for this feature and apparently barely anyone uses it.)
activities_actor_index was already dropped before in
20221211234352_remove_unused_indices; no need to drop it again.
Birthday indices were introduced in pleroma starting with
20220116183110_add_birthday_to_users which is past the
last common migration 20210416051708.
The current 10 GiB cache size is too large to fit into tmpfs for VMs and
other machines with smaller RAM sizes. Most non-Debian distros mount
/tmp on tmpfs.
Documentation was already clear on this only stripping GPS tags.
But there are more potentially sensitive metadata tags (e.g. author
and possibly description) and the name alone suggests a broader effect.
Thus change the filter to strip all metadata except for colourspace info
and orientation (technically it strips everything and then readds
selected tags).
Explicitly stripping CommonIFD0 is needed since -all does not modify
IFD0 due to TIFF storing some actual image data there. CommonIFD0 then
strips a bunch of commonly used actual metadata tags from IFD0, to my
understanding leaving TIFF image data and custom metadata tags intact.
As of exiftool 12.57 both formats are supported, but EXIF data is
optional for JXL and if exiftool doesn’t find a preexisting metadata
chunk it will create one and treat it as a minor error resulting in
a non-zero exit code.
Setting -ignoreMinorErrors avoids failing on such uploads.
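A hedged approximation of the resulting exiftool invocation (the flag list is reconstructed from the explanation above, not copied from the actual filter module; the path is a placeholder):

```elixir
# Strip everything (including the CommonIFD0 group), then re-add only
# colourspace info and orientation from the original file ("@").
# -ignoreMinorErrors keeps e.g. JXL uploads without an EXIF chunk from failing.
path = "upload.jpg"

{_output, 0} =
  System.cmd("exiftool", [
    "-ignoreMinorErrors",
    "-overwrite_original",
    "-all=",
    "-CommonIFD0=",
    "-TagsFromFile", "@",
    "-ColorSpaceTags",
    "-Orientation",
    path
  ])
```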
Due to JSON-LD compaction the full address of public scope
may also occur in shorter forms and the spec requires us to treat them
all equivalently. To save us the pain of repeatedly checking for all
variants internally, normalise inbound data to just one form.
See note at: https://www.w3.org/TR/activitypub/#public-addressing
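A minimal sketch of that normalisation (the canonical URI is from the AP spec; the module and function names are illustrative):

```elixir
defmodule PublicAddress do
  @as_public "https://www.w3.org/ns/activitystreams#Public"

  # Map every accepted spelling of the public collection to one canonical
  # form before any further addressing handling or validation happens.
  def normalise(addr) when addr in ["Public", "as:Public", @as_public], do: @as_public
  def normalise(addr), do: addr
end
```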
This needs to happen very early, even before the other addressing fixes,
else an earlier validator will reject the object. This in turn required
moving the list-type normalisation earlier as well, but since I was
unsure about putting empty lists into the data when no such field
existed before, I excluded this case and thus the later fixing had to be
kept as well.
Fixes: AkkomaGang/akkoma#670
Alongside moving to certbot's nginx plugin, also use conf.d instead of
recreating the sites-{available,enabled} setup that Debian/Ubuntu uses.
Furthermore, also request a certificate for the media domain at the same
time since that's now required.
The API parameter is not a timestamp but an offset.
If a sufficient amount of time passes between the test's
expires_at calculation and the internal calculation during processing
of the request, the strict equality assertion fails. (Either a direct
assertion or indirectly via job lookup.)
To avoid this, lower the comparison granularity.
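One hedged way to do the coarser comparison (assert_in_delta is ExUnit’s; the variable names are illustrative and the actual test may truncate timestamps instead):

```elixir
# Allow a few seconds of slack instead of exact equality, so a second
# ticking over between the test and the server-side calculation
# can't break the assertion.
assert_in_delta DateTime.to_unix(expected_expires_at),
                DateTime.to_unix(actual_expires_at),
                5
```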
literally nothing uses C2S AP, and it's another route into core
systems which requires analysis and maintenance. A second API
is just extra surface for potentially bad things so let's take
it out back and obliterate it
by default just prevent job floods with a 1-second
uniqueness check, but override in RemoteFetcherWorker
for a 5-minute uniqueness check over all states
:infinity is an option we can go for maybe at some point,
but that would prevent any refetches so maybe not idk.
We were overzealous with matching on a raw error from the object fetch that should have never been relied on like this. If we can't fetch successfully we should assume that the collection is private.
Building a more expressive and universal error struct to match on may be something to consider.
These tests relied on the removed Fetcher.fetch_object_from_id!/2 function injecting the error tuple into a log message with the exact words "Object containment failed."
We will keep this behavior by generating a similar log message, but perhaps this should do a better job of matching on the error tuple returned by Transmogrifier.handle_incoming/1
"id" is used for the canonical link to the AS2 representation of an object.
"url" is typically used for the canonical link to the HTTP representation.
It is what we use, for example, when following the "external source" link
in the frontend. However, it's not the link we include in the post contents
for quote posts.
Using URL instead means we include a more user-friendly URL for Mastodon,
and a working (in the browser) URL for Threads
previously we would uncritically take data and format it into
tags for static-fe and the like - however, instances can be
configured to disallow unauthenticated access to these resources.
this means that OG tags can act as a vector for information leakage.
_technically_ this should only occur if you have both
restrict_unauthenticated *AND* you run static-fe, which makes no
sense since static-fe is for unauthenticated people in particular,
but hey ho.
Per the XRD specification:
> 2.4. Element <Alias>
>
> The <Alias> element contains a URI value that is an additional
> identifier for the resource described by the XRD. This value
> MUST be an absolute URI. The <Alias> element does not identify
> additional resources the XRD is describing, **but rather provides
> additional identifiers for the same resource.**
(http://docs.oasis-open.org/xri/xrd/v1.0/os/xrd-1.0-os.html#element.alias, emphasis mine)
In other words, the alias list is expected to link to things which are
not just semantically the same, but exactly the same. Old user accounts
don't do that.
This change should not pose a compatibility issue: Mastodon does not
list old accounts here (See e1fcb02867/app/serializers/webfinger_serializer.rb (L12))
The use of as:alsoKnownAs is also not quite semantically right here
(see https://www.w3.org/TR/did-core/#dfn-alsoknownas, which defines
it to be used to refer to identifiers which are interchangeable) but
that's what DID gets for reusing a property definition that Mastodon
already squatted long before they got to it.
The newest git HEAD of MIME already knows about APNG, but this
hasn’t been released yet. Without this, APNG attachments from
remote posts won’t display as images in frontends.
Fixes: akkoma#657
This protects us from falling for obvious spoofs as from the current
upload exploit (unfortunately we can’t reasonably do anything about
spoofs with exact matches as was possible via emoji and proxy).
Such objects being invalid is supported by the spec, specifically
sections 3.1 and 3.2: https://www.w3.org/TR/activitypub/#obj-id
Anonymous objects are not relevant here (they can only exist within
parent objects iiuc) and neither are client-to-server or transient objects
(as those cannot be fetched in the first place).
This leaves us with the requirement for `id` to (a) exist and
(b) be a publicly dereferencable URI from the originating server.
This alone does not yet demand strict equivalence, but the spec then
further explains objects ought to be fetchable _via their ID_.
Meaning an object not retrievable via its ID is invalid.
This reading is supported by the fact that, e.g., GoToSocial (recently) and
Mastodon (for 6+ years) already implement such strict ID checks,
additionally proving this doesn’t cause federation issues in practice.
However, apart from canonical IDs there can also be additional display
URLs. *omas first redirect those to their canonical location, but *keys
and Mastodon directly serve the AP representation without redirects.
Mastodon and GTS deal with this in two different ways,
but both constitute an effective countermeasure:
- Mastodon:
Unless it already is a known AP id, two fetches occur.
The first fetch just reads the `id` property and then refetches from
the id. The last fetch requires the returned id to exactly match the
URL the content was fetched from. (This can be optimised by skipping
the second fetch if it already matches)
05eda8d193/app/helpers/jsonld_helper.rb (L168)
63f0979799
- GTS:
Only does a single fetch and then checks if _either_ the id
_or_ url property (which can be an object) match the original fetch
URL. This relies on implementations always including their display URL
as "url" if differing from the id. For actors this is true for all
investigated implementations, for posts only Mastodon includes an
"url", but it is also the only one with a differing display URL.
2bafd7daf5 (diff-943bbb02c8ac74ac5dc5d20807e561dcdfaebdc3b62b10730f643a20ac23c24fR222)
Albeit Mastodon’s refetch offers higher compatibility with theoretical
implementations using either multiple different display URLs or not
denoting any of them as "url" at all, for now we chose to adopt a
GTS-like refetch-free approach to avoid additional implementation
concerns wrt whether redirects should be allowed when fetching a
canonical AP id and the potential for accidentally loosening some checks
(e.g. cross-domain refetches) for one of the fetches.
This may be reconsidered in the future.
Since we always followed redirects (and until recently allowed fuzzy id
matches), the ap_id of the received object might differ from the initial
fetch url. This led to us mistakenly trying to insert a new user with
the same nickname, ap_id, etc. as an existing user (which will fail due
to uniqueness constraints) instead of updating the existing one.
Since we reject cross-domain redirects, this doesn’t yet
make a difference, but it’s required for the stricter checking
subsequent commits will introduce.
To make sure (and in case we ever decide to reallow
cross-domain redirects) also use the final location
for containment and reachability checks.
In order to properly process incoming notes we need
to be able to map the key id back to an actor.
Also, check collections actually belong to the same server.
Key ids of Hubzilla and Bridgy samples were updated to what
modern versions of those output. If anything still uses the
old format, we would not be able to verify their posts anyway.
If it’s not already in the database,
it must be counterfeit (or just not exist at all).
Changed test URLs were only ever used from "local: false" users anyway.
This brings it in line with its name and closes an,
in practice harmless, verification hole.
This was/is the only user of contain_origin making it
safe to change the behaviour on actor-less objects.
Until now refetched objects did not ensure the new actor matches the
domain of the object. We refetch polls occasionally to retrieve
up-to-date vote counts. A malicious AP server could have switched out
the poll after initial posting with a completely different post
attributed to an actor from another server.
While we indeed fell for this spoof before the commit,
it fortunately seems to have had no ill effect in practice,
since the associated Create activity is not changed. When exposing the
actor via our REST API, we read this info from the activity not the
object.
This at first thought still keeps one avenue for exploit open though:
the updated actor can be from our own domain and a third server be
instructed to fetch the object from us. However this is foiled by an
id mismatch. By necessity of being fetchable and our longstanding
same-domain check, the id must still be from the attacker’s server.
Even the most barebone authenticity check is able to sus this out.
Such redirects on AP queries seem most likely to be a spoofing attempt.
If the object is legit, the id should match the final domain anyway and
users can directly use the canonical URL.
The lack of such a check (and use of the initially queried domain’s
authority instead of the final domain) was enabling the current exploit
to even affect instances which already migrated away from a same-domain
upload/proxy setup in the past, but retained a redirect to not break old
attachments.
(In theory this redirect could, with some effort, have been limited to
only old files, but common guides employed a catch-all redirect, which
allows even future uploads to be reachable via an initial query to the
main domain)
Same-domain redirects are valid and also used by ourselves,
e.g. for redirecting /notice/XXX to /objects/YYY.
Turns out we already had a test for activities spoofed via upload due
to an exploit several years ago. Back then *oma did not verify content-type
at all and doing so was the only adopted countermeasure.
Even the added test sample though suffered from a mismatching id, yet
nobody seems to have thought it a good idea to tighten id checks, huh
Since we will add stricter id checks later, make id and URL match
and also add a testcase for no content type at all. The new section
will be expanded in subsequent commits.
No new path traversal attacks are known. But given the many entrypoints
and code flow complexity inside pack.ex, it unfortunately seems
possible a future refactor or addition might reintroduce one.
Furthermore, some old packs might still contain traversing path entries
which could trigger undesirable actions on rename or delete.
To ensure this can never happen, assert safety during path construction.
Path.safe_relative was introduced in Elixir 1.14, but
fortunately, we already require at least 1.14 anyway.
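For reference, the two behaviours relied on here:

```elixir
# A well-behaved relative path is returned as-is...
{:ok, "emoji/blob.png"} = Path.safe_relative("emoji/blob.png")
# ...while anything traversing outside the base directory is rejected.
:error = Path.safe_relative("../../etc/passwd")
```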
To save on bandwidth and avoid OOMs with large files.
Ofc, this relies on the remote server
(a) sending a content-length header and
(b) being honest about the size.
Common fedi servers seem to provide the header and (b) at least raises
the required privilege of a malicious actor to a server infrastructure
admin of an explicitly allowed host.
A more complete defense which still works when faced with
a malicious server requires changes in upstream Finch;
see https://github.com/sneako/finch/issues/224
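A hedged sketch of the check (header handling simplified; as noted above, a complete defence against a malicious server needs upstream Finch changes):

```elixir
defmodule SizeCheck do
  # Refuse to download when the advertised body size already exceeds the
  # limit. This only helps against servers honestly sending Content-Length.
  def check_advertised_size(headers, max_body) do
    case List.keyfind(headers, "content-length", 0) do
      {_, value} ->
        case Integer.parse(value) do
          {len, ""} when len <= max_body -> :ok
          _ -> {:error, :body_too_large}
        end

      nil ->
        :ok
    end
  end
end
```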
Certain attacks rely on predictable paths for their payloads.
If we weren’t so overly lax in our (id, URL) check, the current
counterfeit activity exploit would be one of those.
It seems plausible for future attacks to hinge on
or being made easier by predictable paths too.
In general, letting remote actors place arbitrary data at
a path within our domain of their choosing (sans prefix)
just doesn’t seem like a good idea.
Using fully random filenames would have worked as well, but this
is less friendly for admins checking emoji dirs.
The generated suffix should still be more than enough;
an attacker needs on average 140 trillion attempts to
correctly guess the final path.
This will decouple filenames from shortcodes and
allow more image formats to work instead of only
those included in the auto-load glob. (Albeit we
still saved other formats to disk, wasting space)
Furthermore, this will allow us to make
final URL paths infeasible to predict.
Since 3 commits ago we restrict shortcodes to a subset of
the POSIX Portable Filename Character Set, therefore
this can never have a directory component.
E.g. *key’s emoji URLs typically don’t have file extensions, but
until now we just slapped ".png" at the end hoping for the best.
Furthermore, this gives us a chance to actually reject non-images,
which before was not feasible exactly due to those extension-less URLs.
As suggested in b387f4a1c1, only steal
emoji with alphanumeric, dash, or underscore characters.
Also consolidate all validation logic into a single function.
===
Taken from akkoma#703 with cosmetic tweaks
This matches our existing validation logic from Pleroma.Emoji
and, apart from excluding the dot, also POSIX’s Portable Filename
Character Set, making it always safe for use in filenames.
Mastodon is even stricter, also disallowing U+002D HYPHEN-MINUS
and requiring at least two characters.
Given both we and Mastodon reject shortcodes excluded
by this anyway, this doesn’t seem like a loss.
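A minimal sketch of the consolidated check (regex equivalent to the character set described above; the module and function names are illustrative):

```elixir
defmodule EmojiShortcode do
  # Alphanumerics, dash, and underscore only — matching the existing
  # Pleroma.Emoji validation and a subset of the POSIX Portable Filename
  # Character Set (minus the dot).
  def valid?(shortcode), do: Regex.match?(~r/\A[a-zA-Z0-9_-]+\z/, shortcode)
end
```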
Even more than with user uploads, a same-domain proxy setup bears
significant security risks due to serving untrusted content under
the main domain space.
A risky setup like that should never be the default.
Just as with uploads and emoji before, this can otherwise be used
to place counterfeit AP objects or other malicious payloads.
In this case, even if we never assign a privileged type to content,
the remote server can, and until now we just mimicked whatever it told us.
Preview URLs already handle only specific, safe content types
and redirect to the external host for all else; thus no additional
sanitisation is needed for them.
Non-previews are all delegated to the modified ReverseProxy module.
It already has consolidated logic for building response headers
making it easy to slip in sanitisation.
Although proxy urls are prefixed by a MAC built from a server secret,
attackers can still achieve a perfect id match when they are able to
change the contents of the pointed-to URL. After sending a post
containing an attachment at a controlled destination, the proxy URL can
be read back and inserted into the payload. After injection of
counterfeits in the target server the content can again be changed
to something innocuous, lessening the chance of detection.
By mapping all extensions related to our custom privileged types
back to innocuous text/plain, our custom types will never automatically
be inserted which was one of the factors making impersonation possible.
Note, this does not invalidate the upload and emoji Content-Type
restrictions from previous commits. Apart from counterfeit AP objects
there are other payloads with standard types this protects against,
e.g. *.js Javascript payloads as used in prior frontend injections.
Else malicious emoji packs or our EmojiStealer MRF can
put payloads into the same domain as the instance itself.
Sanitising the content type should prevent proper clients
from acting on any potential payload.
Note, this does not affect the default emoji shipped with Akkoma
as they are handled by another plug. However, those are fully trusted
and thus not in need of sanitisation.
This actually was already intended before to eradicate all future
path-traversal-style exploits and to fix issues with some
characters like akkoma#610 in 0b2ec0ccee. However, Dedupe and
AnonymizeFilename got mixed up. The latter only anonymises the name
in Content-Disposition headers and GET parameters (with link_name),
_not_ the upload path.
Even without Dedupe, the upload path is prefixed by a UUID,
so it _should_ already be hard to guess for attackers. But now
we actually can be sure no path shenanigans occur, uploads
reliably work and we save some disk space.
While this makes the final path predictable, this prediction is
not exploitable. Insertion of a back-reference to the upload
itself requires pulling off a successful preimage attack against
SHA-256, which is deemed infeasible for the foreseeable future.
Dedupe was already included in the default list in config.exs
since 28cfb2c37a, but this would get overridden by whatever the
config generated by the "pleroma.instance gen" task chose.
Upload+delete tests running in parallel using Dedupe might be flaky, but
this was already true before and needs its own commit to fix eventually.
The lack thereof enables spoofing ActivityPub objects.
A malicious user could upload fake activities as attachments
and (if having access to remote search) trick local and remote
fedi instances into fetching and processing it as a valid object.
If uploads are hosted on the same domain as the instance itself,
it is possible for anyone with upload access to impersonate(!)
other users of the same instance.
If uploads are exclusively hosted on a different domain, even the most
basic check of domain of the object id and fetch url matching should
prevent impersonation. However, it may still be possible to trick
servers into accepting bogus users on the upload (sub)domain and bogus
notes attributed to such users.
Instances which later migrated to a different domain and have a
permissive redirect rule in place can still be vulnerable.
If — like Akkoma — the fetching server is overly permissive with
redirects, impersonation still works.
This was possible because Plug.Static also uses our custom
MIME type mappings used for actually authentic AP objects.
Provided external storage providers don’t somehow return ActivityStream
Content-Types on their own, instances using those are also safe against
their users being spoofed via uploads.
Akkoma instances using the OnlyMedia upload filter
cannot be exploited as a vector in this way — IF the
fetching server validates the Content-Type of
fetched objects (Akkoma itself does this already).
However, restricting uploads to only multimedia files may be a bit too
heavy-handed. Instead this commit will restrict the returned
Content-Type headers for user uploaded files to a safe subset, falling
back to generic 'application/octet-stream' for anything else.
This will also protect against non-AP payloads as e.g. used in
past frontend code injection attacks.
It’s a slight regression in user comfort, if say PDFs are uploaded,
but this trade-off seems fairly acceptable.
(Note, just excluding our own custom types would offer no protection
against non-AP payloads and bear a (perhaps small) risk of a silent
regression should MIME ever decide to add a canonical extension for
ActivityPub objects)
Now, one might expect there to be other defence mechanisms
besides Content-Type preventing counterfeits from being accepted,
like e.g. validation of the queried URL and AP ID matching.
Inserting a self-reference into our uploads is hard, but unfortunately
*oma does not verify the id in such a way and happily accepts _anything_
from the same domain (without even considering redirects).
E.g. Sharkey (and possibly other *keys) seem to attempt to guard
against this by immediately refetching the object from its ID, but
this is easily circumvented by just uploading two payloads with the
ID of one linking to the other.
Unfortunately *oma is thus _both_ a vector for spoofing and
vulnerable to those spoof payloads, resulting in an easy way
to impersonate our users.
Similar flaws exists for emoji and media proxy.
Subsequent commits will fix this by rigorously sanitising
content types in more areas, hardening our checks, improving
the default config and discouraging insecure config options.
The default refresh interval of 1 day is woefully inadequate here;
users expect to be able to add the alias to their new account and
press the move button on their old account and have it work.
This allows callers to specify a maximum age before a refetch is
triggered. We set that to 5s for the move code, as a nice compromise
between Making Things Work and ensuring that this can't be used
to hammer a remote server
Currently translated at 18.1% (183 of 1006 strings)
Translated using Weblate (Polish)
Currently translated at 6.6% (67 of 1006 strings)
Co-authored-by: Weblate <noreply@weblate.org>
Co-authored-by: subtype <subtype@hollow.capital>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/pl/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
This will crop them to a square matching behaviour of Husky and *key
and allowing us to never worry about consistent alignment.
Note, akkoma-fe instead displays the full image with inserted spacing.
Mastodon at the very least seems to prevent the creation of emoji with
dots in their name (and refuses to accept them in federation). It feels
like being cautious in what we accept is reasonable here.
Colons are the emoji separator and so obviously should be blocked.
Perhaps instead of filtering out things like this we should just
do a regex match on `[a-zA-Z0-9_-]`? But that's plausibly a decision
for another day
Perhaps we should also have a centralised "is this a valid emoji shortcode?"
function
This partly reverts 1d884fd914
while fixing both the issue it addressed and the issue it caused.
The above commit successfully fixed OpenGraph metadata tags
which until then always showed the user bio instead of post content
by handing the activity's AP ID as url to the Metadata builder
_instead_ of passing the internal ID as activity_id.
However, in doing so the commit instead inflicted this very problem
onto Twitter metadata tags which ironically are used by akkoma-fe.
This is because while the OpenGraph builder wants a URL as url,
the Twitter builder needs the internal ID to build the URL to the
embedded player for videos and has no URL property.
Thanks to twpol for tracking down this root cause in #644.
Now, once identified the problem is simple, but this simplicity
invites multiple possible solutions to bikeshed about.
1. Just pass both properties to the builder and let them pick
2. Drop the url parameter from the OpenGraph builder and instead
a) build static-fe URL of the post from the ID (like Twitter)
b) use the passed-in object’s AP ID as a URL
Approach 2a has the disadvantage of hardcoding the expected URL outside
the router, which will be problematic should it ever change.
Approach 2b is conceptually similar to how the builder works atm.
However, the og:url is supposed to be a _permanent_ ID; by changing it
we might, afaiui, technically violate the OpenGraph spec(?). (Though its
real-world consequence may very well be near non-existent.)
This leaves just approach 1, which this commit implements.
Albeit it too is not without nits to pick, as it leaves the metadata
builders with an inconsistent interface.
Additionally, this will resolve the suboptimal Discord previews for
content-less image posts reported in #664.
Discord already prefers OpenGraph metadata, so it’s mostly unaffected.
However, it appears when encountering an explicitly empty OpenGraph
description and a non-empty Twitter description, it replaces just the
empty field with its Twitter counterpart, resulting in the user’s bio
slipping into the preview.
Secondly, regardless of any OpenGraph tags, Discord uses twitter:card to
decide how prominently images should be displayed, but due to the bug the card
type was stuck as "summary", forcing images to always remain small.
Root cause identified by: twpol
Fixes: AkkomaGang/akkoma#644
Fixes: AkkomaGang/akkoma#664
fixed up some grammar / wording. removed a sentence and made wording more in line with what I could find in Admin-FE (especially wording of "rejecting" vs. dropping)
This vastly reduces idle CPU usage, which should generally be beneficial
for most small-to-medium sized instances.
Additionally update the documentation to specify how to override the vm.args
file for OTP installs
This fixes an oversight in e99e2407f3
which added background_removal as a possible SimplePolicy setting.
However, it did _not_ add a default value to the base config and
as it turns out instance_list doesn’t handle unset options well.
In effect this caused federating instances with SimplePolicy enabled
but background_removal not explicitly configured to always trip up for
outgoing account updates in check_background_removal (and incoming
updates from Sharkey).
For added ""fun"" this error was able to block account updates made
e.g. via /api/v1/accounts/update_credentials.
Tests were unaffected since they explicitly override
all relevant config options.
Set a default to avoid all this
(note to self: don’t forget next time, baka!)
Currently our own frontend doesn’t show backgrounds of other users, but this
property is already publicly readable via the REST API and likely was always
intended to be shown and federated.
Recently Sharkey added support for profile backgrounds and
immediately made them federate and be displayed to others.
We use the same AP field as Sharkey here which should make
it interoperable both ways out-of-the-box.
Ref.: 4e64397635
Fixes a misspelling and the omission of an example in commit
0cfd5b4e89 which added the
status_ttl_property. This was the only place this commit
referred to the property as note_ttl_days.
Partially fixes the omitted schema update of the instance metadata addition
from commit b7e8ce2350. A proper full schema
for nodeinfo is still missing.
OTP’s default SSL/TLS settings are rather restrictive
and in particular do not use system CA certs.
In our case using system CA certs is virtually always desired
and the lack of it leads to non-obvious errors. Manually configuring
system CA certs from in-database config also isn’t straightforward.
Furthermore, gen_smtp uses a different set of connection options
for direct SSL/TLS than for a later TLS upgrade, adding further
confusion and complexity to how this needs to be configured.
Thus provide some suitable defaults for sending SMTP emails.
Everything can still be overridden by admins if necessary.
Note: defaults are not appended when validating the config
in hopes of improving the error message (as the required relay key
is already accessed to generate defaults for optional fields)
Fixes: AkkomaGang/akkoma#660
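For illustration, the kind of defaults meant here look roughly like the following; treat the option names and values as an assumption based on gen_smtp/Swoosh conventions rather than the exact shipped config:
```
import Config

config :pleroma, Pleroma.Emails.Mailer,
  adapter: Swoosh.Adapters.SMTP,
  relay: "smtp.example.com",
  port: 587,
  # upgrade the connection via STARTTLS and verify the peer
  tls: :always,
  tls_options: [
    verify: :verify_peer,
    # use the operating system's CA store (available since OTP 25)
    cacerts: :public_key.cacerts_get(),
    depth: 3,
    server_name_indication: to_charlist("smtp.example.com")
  ]
```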
With kilobytes as the unit the resulting numbers got too large and were cut off
in the charts, making them useless. However, even an idle Akkoma
server’s memory usage is in the lower hundreds of megabytes, so
we don’t need this much precision to begin with for the dashboard.
Other metric users might prefer base units and can handle scaling in a
smarter way, so keep this configurable.
The spec was copied from another endpoint, including the operation id,
leading to scrubbing the valid parameters from the request and simply
not working.
the previous code passed a state parameter to ueberauth with info
about where to go after the user logged in, etc.
since ueberauth 0.7, this parameter is ignored and oauth state is used
for actual CSRF reasons.
we now set a cookie with the state we need to keep track of, and read
it once the callback happens.
Implements the preferences endpoint in the Mastodon API, but returns
default values for most of the preferences right now. The only supported
preference we can access is default post visibility, and a relevant test
is added as well.
Currently, Akkoma sorts by published date first before everything else.
This however makes search results pretty bad since Meilisearch uses a
bucket sort algorithm in order of the ranking rules specified:
https://www.meilisearch.com/docs/learn/core_concepts/relevancy#behavior
Since the `published` attribute is a unix timestamp, the resulting
buckets are pretty small so the other rules essentially have little to
no effect on the rankings of search results.
This fixes that issue by moving the `published:desc` rule further down
so it still sorts by date, but only after considering everything else.
AFAIK the attribute and sort rules don't really affect results for Akkoma since
the only attribute considered is the `content` attribute and the `sort`
parameter isn't used in Akkoma searches. Everything else is made to
match Meilisearch's defaults more closely.
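Concretely, the resulting ranking rules look something like this (the exact list Akkoma ships may differ slightly; the point is that `published:desc` now comes after the relevancy rules instead of before them):
```
# Meilisearch's default relevancy rules first, date sorting last.
ranking_rules = [
  "words",
  "typo",
  "proximity",
  "attribute",
  "sort",
  "exactness",
  "published:desc"
]
```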
The docker-compose.yml file is likely to be edited quite extensively by
admins when setting up an instance. This would likely cause problems
when dealing with updating Akkoma as merge conflicts would likely occur.
Docker-compose already has the ability to use override files in addition
to the main `docker-compose.yml` file. Admins can instead put any
overrides (additional volumes, container for elasticsearch, etc.) into a
file that won't be tracked by git and thus won't run into merge
conflicts in the future. In particular, docker compose will read
`docker-compose.override.yml` in addition to the main file if it
exists, and definitions in the override file take precedence over
those in the main file.
OTP builds to 1.15
Changelog entry
Ensure policies are fully loaded
Fix :warn
use main branch for linkify
Fix warn in tests
Migrations for phoenix 1.17
Revert "Migrations for phoenix 1.17"
This reverts commit 6a3b2f15b74ea5e33150529385215b7a531f3999.
Oban upgrade
Add default empty whitelist
mix format
limit test to amd64
OTP 26 tests for 1.15
use OTP_VERSION tag
baka
just 1.15
Massive deps update
Update locale, deps
Mix format
shell????
multiline???
?
max cases 1
use assert_receive
don't put_env in async tests
don't async conn/fs tests
mix format
Fix some uploader issues
Fix tests
Here we make it a generic placeholder, which should make accidental copy-pasting of this command happen less often.
We also had one case of someone who got errors because the SHELL variable wasn't set; this is the case on Alpine.
Here I added a line to fill it in when not set.
Commit 11ec9daa5b (released with 3.2.0)
added the fedibird frontend and tweaked and extended Mastodon API for
compatibility with it. Document these changes.
Prior to this change, anyone, authenticated or not, could submit a search
query for an activity by URL, and cause the fetcher to go fetch it. That
shouldn't happen if `limit_to_local_content` is set to `:all` or if it's
set to `:unauthenticated` and the query came from an unauthenticated
source.
Set it to `inline` because the vast majority of what's sent is multimedia
content while `attachment` would have the side-effect of triggering a
download dialog.
Closes: https://git.pleroma.social/pleroma/pleroma/-/issues/3114
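In Plug terms the change amounts to something like this (module name and filename handling are illustrative):
```
defmodule InlineMediaSketch do
  import Plug.Conn

  # Serve uploaded/proxied media inline instead of as an attachment,
  # so browsers render it rather than opening a download dialog.
  def send_media(conn, path, filename) do
    conn
    |> put_resp_header("content-disposition", ~s(inline; filename="#{filename}"))
    |> send_file(200, path)
  end
end
```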
TwitterCard meta tags are supposed to use the attributes "name" and "content".
OpenGraph tags use the attributes "property" and "content".
Twitter itself is smart enough to detect broken meta tags and discover the TwitterCard
using "property" and "content", but other platforms that only implement parsing of TwitterCards
and not OpenGraph may fail to correctly detect the tags as they're under the wrong attributes.
> "Open Graph protocol also specifies the use of property and content attributes for markup while
> Twitter cards use name and content. Twitter’s parser will fall back to using property and content,
> so there is no need to modify existing Open Graph protocol markup if it already exists." [0]
[0] https://developer.twitter.com/en/docs/twitter-for-websites/cards/guides/getting-started
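For reference, the difference is only in which attribute carries the key (values below are placeholders):
```
# TwitterCard keys the tag by "name"; OpenGraph keys it by "property".
twitter_tag = ~s(<meta name="twitter:title" content="Example post" />)
og_tag = ~s(<meta property="og:title" content="Example post" />)
```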
`context` fields for objects and activities can now be generated based
on the object/activity `inReplyTo` field or its ActivityPub ID, as a
fallback method in cases where `context` fields are missing for incoming
activities and objects.
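A minimal sketch of the fallback order (the real code also has to deal with `inReplyTo` being a map or a list):
```
defmodule ContextFallbackSketch do
  # Prefer an explicit context, then the parent post, then the object's own AP ID.
  def derive(%{"context" => context}) when is_binary(context), do: context
  def derive(%{"inReplyTo" => in_reply_to}) when is_binary(in_reply_to), do: in_reply_to
  def derive(%{"id" => id}), do: id
end
```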
This is useful for people who want to migrate back to Pleroma.
It's also added in the docs, but with a note that this is barely tested and to be used at one's own risk.
A recent group of vulnerabilities have been found in Pleroma (and
inherited by Akkoma) that involve media files either uploaded by local
users or proxied from remote instances (if media proxy is enabled).
It is recommended that media files are served on a separate subdomain
in order to mitigate this class of vulnerabilities.
Based on https://meta.akkoma.dev/t/another-vector-for-the-injection-vulnerability-found/483/2
There were two warnings; these are now fixed.
I moved the fonts folder into the css folder. Another option was to change the relative path,
but it seems that after changing it in the css file, the path got changed back when rebuilding the site.
Maybe it needs to be changed somewhere else, idk, this worked.
CREATE DATABASE was running in a transaction block with CREATE USER. This isn't allowed (any more?).
This is now two separate commands.
I also did some other touch-ups including
* making it OTP-first,
* add backup of static directory because this contains e.g. custom emoji, and
* remove the suggestion for using the setup_db.psql file. The reason is that I fear it causes more confusion than it's worth.
* Firstly, OTP installations won't have this file because it's created in /tmp.
* Secondly, if the instance has been reinstalled, a new setup_db.psql with a different password may have been created, causing only more confusion.
When doing prune_objects, it's possible that bookmarked objects are deleted.
This gave problems when fetching the bookmark TL.
Here we clean up the bookmarks during pruning in the case where it's possible that bookmarked objects are deleted.
This makes the behavior consistent between when UserNote doesn't exist and when comment is null.
The current behavior may return null in APIs, which misleads some clients doing feature detection into thinking the server does not support comments.
For example, see https://codeberg.org/husky/husky/issues/92
weird values in href will cause base64 encoding to fail later down the
line, so let's make sure the value we're passing on is somewhat sane, or
at the very least a binary
this fixes #482
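Roughly, the guard amounts to this (the actual commit may validate more than just "is a binary"):
```
defmodule HrefGuardSketch do
  # Only hand binaries to the URL encoder; anything else is rejected
  # instead of blowing up in Base.url_encode64/2 later.
  def encode_href(href) when is_binary(href), do: {:ok, Base.url_encode64(href, padding: false)}
  def encode_href(_), do: {:error, :invalid_href}
end
```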
When no image description is filled in, Pleroma allowed fallbacks.
Those were (based on a setting) either the filename, or a fixed description.
Neither are good options for image descriptions imo, so here we remove this.
Note that there are two tests removed which supposedly tested something else.
But examining them closer, they didn't seem to test what they claimed to test,
so I removed them rather than try to "fix" them.
Expose quote posting in the api as a feature.
Copies what the quote post PR for pleroma does to allow external clients to enable and disable features based on the feature-set of the instance.
As far as I am aware, akkoma doesn't allow you to disable quote posting, so this doesn't need anything fancy and it's just permanently switched on.
I tried to get one for the bubble tl to work also, but I'm not quite sure how to do it so that it switches off the feature when the bubble tl is disabled. I would argue that it could and ideally should be done as well though.
I also discovered a pretty tame bug in the testing of it, that deleting the DB entry for the bubble tl does not stop the bubble TL from actually working and it will continue to display the panel on the about page, I'll just leave it as a note here.
Reviewed-on: AkkomaGang/akkoma#496
Co-authored-by: foxing <foxing@noreply.akkoma>
Co-committed-by: foxing <foxing@noreply.akkoma>
E.g. Flag activities have an array of objects
We prune the activity when NONE of the objects can be found
Note that the cost of finding and deleting these is ~4x higher than finding and deleting the non-array ones
Only string:
Delete on activities (cost=506573.48..506580.38 rows=0 width=0)
Only Array:
Delete on activities (cost=3570359.68..4276365.34 rows=0 width=0)
(They are still executed separately, so the total cost is the sum of the two)
We add an option to also prune remote activities whose referenced objects don't exist any more.
Right now, we only check activities that reference a single object, not an array or an embedded object.
By default Postgresql first restores the data and then the indexes when dumping and restoring the database.
Restoring index activities_visibility_index took a very long time.
users_ap_id_COALESCE_follower_address_index was later added because having this could speed up the restoration tremendously.
The problem now is that restoration apparently happens in alphabetical order, so this new index wasn't created yet
by the time activities_visibility_index needed it.
There were several work-arounds which included more complex steps during backup/restore.
By renaming this index, it should be restored first and thus activities_visibility_index can make use of it.
This speeds up restoration significantly without requiring more complex or unexpected steps from people.
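As a migration this is a one-liner per direction; the new index name below is hypothetical, the only requirement being that it sorts before activities_visibility_index:
```
defmodule RenameIndexSketch do
  use Ecto.Migration

  def up do
    # rename so pg_restore recreates this index before activities_visibility_index
    execute(~s[ALTER INDEX IF EXISTS "users_ap_id_COALESCE_follower_address_index" RENAME TO "aa_users_ap_id_COALESCE_follower_address_index"])
  end

  def down do
    execute(~s[ALTER INDEX IF EXISTS "aa_users_ap_id_COALESCE_follower_address_index" RENAME TO "users_ap_id_COALESCE_follower_address_index"])
  end
end
```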
I added info about installing frontends from the development branch.
I also rearranged the list of exceptions (what's different from a "normal" installation)
so the order is closer to how you'd encounter things in the installation docs + small fixes
This should help mitigate negative impacts related to block-retaliation
and block-circumvention when blocks become visible to the blocked party.
Instances interested in broadcasting blocks can turn this on if they
wish. This should have always been the default.
See also: AkkomaGang/akkoma-fe#274
Docs used to be a separate repo which cloned pleroma and pleroma-fe.
Now the docs are just the BE docs and completely part of the Akkoma repo.
I moved back to using venv because that's what I used before and it's cleaner imo since it keeps everything nicely in the repo.
(Iirc virtualenv stored things in the home folder or something)
Credit where credit is due; I inspired myself by looking at the yunohost docs
* https://yunohost.org/en/dev
* https://yunohost.org/en/packaging_apps_start
I try to be inviting to new developers and guide them in their first steps into Akkoma development.
I try to keep the page itself as short as possible and link to relevant places.
That way people can quickly skim over parts that they don't need, while people who do need more can simply follow the links.
I found that it may be better to tell pgtune you have fewer resources than you actually have when other services are running on the same machine.
I added that now.
I also moved the examples as part of the pgtune section.
This adds an option to the prune_objects mix task.
The original way deleted all non-local public posts older than a certain time frame.
Here we add a different query which you can call using the option --keep-threads.
We query from the activities table all context id's where
1. the newest activity with this context is still old
2. none of the activities with this context is local
3. none of the activities with this context is bookmarked
and delete all objects with these contexts.
The idea is that posts with local activities (posts, replies, likes, repeats...) may be interesting to keep.
Besides that, a post lives in a certain context (the thread), so we keep the whole thread as well.
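In (heavily simplified) SQL the selection reads roughly like below; the real task also takes bookmarks into account and parameterises the cutoff:
```
# Sketch only: delete objects belonging to contexts where the newest
# activity is old and no activity in the context is local.
Pleroma.Repo.query!("""
DELETE FROM objects
WHERE data->>'context' IN (
  SELECT a.data->>'context'
  FROM activities a
  GROUP BY a.data->>'context'
  HAVING max(a.updated_at) < now() - interval '90 days'
     AND NOT bool_or(a.local)
)
""")
```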
Caveats:
* ~~Quotes have a different context. Therefore, when someone quotes a post, it's possible the quoted post will still be deleted.~~ fixed in AkkomaGang/akkoma#379
* Although undocumented (in docs/docs/administration/CLI_tasks/database.md/#prune-old-remote-posts-from-the-database), the 'normal' delete action still kept old remote non-public posts. I added an option to keep this behaviour, but this also means that you now have to explicitly provide that option. **This could be considered a breaking change!**
* ~~Note that this removes from the objects table, but not from the activities.~~ See AkkomaGang/akkoma#427 for that.
Some statistics from explain analyse:
(cost=1402845.92..1933782.00 rows=3810907 width=62) (actual time=2562455.486..2562455.495 rows=0 loops=1)
Planning Time: 505.327 ms
Trigger for constraint chat_message_references_object_id_fkey: time=651939.797 calls=921740
Trigger for constraint deliveries_object_id_fkey: time=52036.009 calls=921740
Trigger for constraint hashtags_objects_object_id_fkey: time=20665.778 calls=921740
Execution Time: 3287933.902 ms
***
**TODO**
1. [x] **Question:** Is it OK to keep it like this in regard to quote posts? If not (ie post quoted by local users should also be kept), should we give quotes the same context as the post they are quoting? (If we don't want to give them the same context, I'll have to see how/if I can do it without being too costly)
* See AkkomaGang/akkoma#379
2. [x] **Question:** the "original" query only deletes public posts (this is undocumented, but you can check the code). This new one doesn't care for scope. From the docs I get that the idea is that posts can be refetched when needed. But I have from a trusted source that Pleroma can't refetch non-public posts. I assume that's the reason why they are kept here. I see different options to deal with this
1. ~~We keep it as currently implemented and just don't care about scope with this option~~
2. ~~We add logic to not delete non-public posts either (I'll have to see how costly that becomes)~~
3. We add an extra --keep-non-public parameter. This is technically speaking breakage (you didn't have to provide a param before for this, now you do), but I'm inclined to not care much because it wasn't documented nor tested in the first place.
3. [x] See if we can do the query using Elixir
4. [x] Test on a bigger DB to see that we don't run into a timeout
5. [x] Add docs
Co-authored-by: ilja <git@ilja.space>
Reviewed-on: AkkomaGang/akkoma#350
Co-authored-by: ilja <akkoma.dev@ilja.space>
Co-committed-by: ilja <akkoma.dev@ilja.space>
Some users post posts with spoofed timestamps, and some clients will have issues with certain dates. Tusky for example crashes if the date is any earlier than 1 BCE (“year zero” in the representation).
I limited the range of what is considered a valid date to be somewhere between the years 1583 and 9999 (inclusive).
The numbers have been chosen because:
- ISO 8601 only allows years before 1583 with “mutual agreement”
- Years after 9999 could cause issues with certain clients as well
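A sketch of what such a validation can look like (module and function names are illustrative):
```
defmodule PublishedDateSketch do
  @min_year 1583
  @max_year 9999

  # Reject timestamps outside the range ISO 8601 can express without
  # "mutual agreement" and that known clients handle correctly.
  def valid?(published) when is_binary(published) do
    case DateTime.from_iso8601(published) do
      {:ok, %DateTime{year: year}, _offset} -> year in @min_year..@max_year
      _ -> false
    end
  end

  def valid?(_), do: false
end
```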
Co-authored-by: Charlotte 🦝 Delenk <lotte@chir.rs>
Reviewed-on: AkkomaGang/akkoma#425
Co-authored-by: darkkirb <lotte@chir.rs>
Co-committed-by: darkkirb <lotte@chir.rs>
Faced with this issue today: Pleroma responds with status 400 (Bad request) if Exiftool.StripLocation is added to the list of filter modules for uploads. Here are the logs:
```
13:27:25.201 [info] POST /api/v1/media
13:27:25.232 request_id=FzdspaAnrA6cyv0APgVR [error] Elixir.Pleroma.Upload.Filter: Filter Elixir.Pleroma.Upload.Filter.Exiftool.StripLocation failed: {:error, "Elixir.Pleroma.Upload.Filter.Exiftool.StripLocation: %ErlangError{original: :enoent}"}
13:27:25.232 request_id=FzdspaAnrA6cyv0APgVR [error] Elixir.Pleroma.Upload store (using Pleroma.Uploaders.Local) failed: "Elixir.Pleroma.Upload.Filter.Exiftool.StripLocation: %ErlangError{original: :enoent}"
```
This fix solves this problem.
Reviewed-on: AkkomaGang/akkoma#421
Co-authored-by: ihor <ikandreew@gmail.com>
Co-committed-by: ihor <ikandreew@gmail.com>
See AkkomaGang/akkoma#350 (comment)
When making quotes through Mast-API, they will now have the same context as the quoted post. This also results in them being showed when fetching the thread. I checked Misskey to see how it's there, and they show the quotes there as well, see e.g. <https://mk.toast.cafe/notes/98u1g0tulg>.
An example from Akkoma:
Co-authored-by: ilja <git@ilja.space>
Reviewed-on: AkkomaGang/akkoma#379
Reviewed-by: floatingghost <hannah@coffee-and-dreams.uk>
Co-authored-by: ilja <akkoma.dev@ilja.space>
Co-committed-by: ilja <akkoma.dev@ilja.space>
I managed to steal some emoji, but I had to figure out the specifics the hard way. This should make it easier for future criminals.
Feel free to close if this documentation was omitted on purpose, I can imagine some reasons for why it might have.
Co-authored-by: timorl <timorl@disroot.org>
Reviewed-on: AkkomaGang/akkoma#364
Co-authored-by: timorl <timorl+akkomadev@disroot.org>
Co-committed-by: timorl <timorl+akkomadev@disroot.org>
Since Akkoma doesn't include precompiled frontends in the main repo anymore, it doesn't make sense to keep treating the few js/css files remaining as binary files.
Argos Translate is a Python module for translation and can be used as a command line tool.
This is also the engine for LibreTranslate, for which we already have a module.
Here we can use the engine directly from our server without doing requests to a third party or having to install our own LibreTranslate webservice (obviously you do have to install Argos Translate).
One thing that's currently still missing from Argos Translate is auto-detection of languages (see <https://github.com/argosopentech/argos-translate/issues/9>). For now, when no source language is provided, we just return the text unchanged, supposedly translated from the target language. That way you get a near immediate response in pleroma-fe when clicking Translate, after which you can select the source language from a dropdown.
Argos Translate also doesn't seem to handle html very well. Therefore we give admins the option to strip the html before translating. I made this an option because I'm unsure if/how this will change in the future.
Co-authored-by: ilja <git@ilja.space>
Reviewed-on: AkkomaGang/akkoma#351
Co-authored-by: ilja <akkoma.dev@ilja.space>
Co-committed-by: ilja <akkoma.dev@ilja.space>
this didn't actually _do_ anything in the past:
users would be prevented from accessing the resource,
but they shouldn't be able to even create them in the first place
This makes them consistent with the update instructions that are in the
release announcements.
Also adds in the command to update the frontend as well.
Co-authored-by: Francis Dinh <normandy@biribiri.dev>
Reviewed-on: AkkomaGang/akkoma#361
Co-authored-by: Norm <normandy@biribiri.dev>
Co-committed-by: Norm <normandy@biribiri.dev>
Until now it was returning a 500 because the upload plug was going
through the changeset and ending up in the JSON encoder, which raised
because the struct has to @derive the encoder.
It's unclear why this is the default, as it is highly discouraged.
KillMode=process ends up leaving leftover orphaned processes that
escape resource management and process lifecycles, wasting resources
on servers.
Signed-off-by: r3g_5z <june@girlboss.ceo>
Objects that got updated would just pass through several of the MRF policies, undoing moderation in some situations.
In the relevant cases we now check not only for Create activities, but also Update activities.
I checked which ones checked explicitly on type Create using `grep '"type" => "Create"' lib/pleroma/web/activity_pub/mrf/*`.
The following from that list have not been changed:
* lib/pleroma/web/activity_pub/mrf/follow_bot_policy.ex
* Not relevant for moderation
* lib/pleroma/web/activity_pub/mrf/keyword_policy.ex
* Already had a test for Update
* lib/pleroma/web/activity_pub/mrf/object_age_policy.ex
* In practice only relevant when fetching old objects (e.g. through Like or Announce). These are always wrapped in a Create.
* lib/pleroma/web/activity_pub/mrf/reject_non_public.ex
* We don't allow changing scope with Update, so not relevant here
Objects that got updated would just pass the TagPolicy, undoing the moderation that was set in place for the Actor.
Now we check not only for Create activities, but also Update activities.
makes static-fe look more like pleroma-fe, with the stylesheets matching pleroma-dark and pleroma-light based on `prefers-color-scheme`.
- [x] navbar
- [x] about sidebar
- [x] background image
- [x] statuses
- [x] "reply to" or "edited" tags
- [x] accounts
- [x] show more / show less
- [x] posts / with replies / media / followers / following
- [x] followers/following would require user card snippets
- [x] admin/bot indicators
- [x] attachments
- [x] nsfw attachments
- [x] fontawesome icons
- [x] clean up and sort css
- [x] add pleroma-light
- [x] replace hardcoded strings
also i forgot
- [x] repeated headers
how it looks + sneak peek at statuses:

Co-authored-by: Sol Fisher Romanoff <sol@solfisher.com>
Reviewed-on: AkkomaGang/akkoma#236
Co-authored-by: sfr <sol@solfisher.com>
Co-committed-by: sfr <sol@solfisher.com>
a bunch of ways to get query plans to help with debugging
Co-authored-by: FloatingGhost <hannah@coffee-and-dreams.uk>
Reviewed-on: AkkomaGang/akkoma#348
Mostly add how to speed up restoration by adding activities_visibility_index later. Also some small other improvements.
This is based on what I did on a Pleroma instance. I assume the activities_visibility_index taking so long is still true for Akkoma, but can't really test because I don't have a big enough Akkoma DB yet 🙃
Co-authored-by: ilja <git@ilja.space>
Reviewed-on: AkkomaGang/akkoma#332
Reviewed-by: floatingghost <hannah@coffee-and-dreams.uk>
Co-authored-by: ilja <akkoma.dev@ilja.space>
Co-committed-by: ilja <akkoma.dev@ilja.space>
Only real change here is making MRF rejects log as debug instead of info (AkkomaGang/akkoma#234)
I don't know if it's the best way to do it, but it seems only MRF uses this and the rejections are almost always intended.
The rest are just minor docs changes and syncing the restricted nicknames stuff.
I compiled and ran my changes with Docker and they all work.
Co-authored-by: r3g_5z <june@terezi.dev>
Reviewed-on: AkkomaGang/akkoma#313
Co-authored-by: @r3g_5z@plem.sapphic.site <june@girlboss.ceo>
Co-committed-by: @r3g_5z@plem.sapphic.site <june@girlboss.ceo>
Close#304.
Notes:
- This patch was made on top of Pleroma develop, so I created a separate cachex worker for request signature actions, instead of Akkoma's instance cache. If that is a merge blocker, I can attempt to move logic around for that.
- Regarding the `has_request_signatures: true -> false` state transition: I think that is a higher level thing (resetting instance state on new instance actor key) which is separate from the changes relevant to this one.
Co-authored-by: Luna <git@l4.pm>
Reviewed-on: AkkomaGang/akkoma#312
Co-authored-by: @luna@f.l4.pm <akkoma@l4.pm>
Co-committed-by: @luna@f.l4.pm <akkoma@l4.pm>
Changes follow_operation schema to use BooleanLike instead of :boolean so that strings like "0" and "1" (used by mastodon.py) can be accepted. Rest of file uses the same. For more info please see https://git.pleroma.social/pleroma/pleroma/-/issues/2999
(I'm also sending this here as I'm not hopeful about upstream not ignoring it)
Co-authored-by: ave <ave@ave.zone>
Reviewed-on: AkkomaGang/akkoma#301
Co-authored-by: ave <ave@noreply.akkoma>
Co-committed-by: ave <ave@noreply.akkoma>
- Drop Expect-CT
Expect-CT has been redundant since 2018 when Certificate Transparency became mandated and required for all CAs and browsers. This header is only implemented in Chrome and is now deprecated. HTTP header analysers do not check this anymore as this is enforced by default. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Expect-CT
- Raise HSTS to 2 years and explicitly preload
The longer the age for HSTS, the better. Header analysers now prefer 2 years over 1 year, as free TLS is very common thanks to Let's Encrypt.
For HSTS to be fully effective, you need to submit your root domain (domain.tld) to https://hstspreload.org. However, a requirement for this is the "preload" directive in Strict-Transport-Security. If you do not have "preload", it will reject your domain.
- Drop X-Download-Options
This is an IE8-era header from when Adobe products used to use the IE engine for making outbound web requests to embed webpages in things like Adobe Acrobat (PDFs). Modern apps are using Microsoft Edge WebView2 or Chromium Embedded Framework. No modern browser or header analyser checks for this.
- Set base-uri to 'none'
This is to specify the domain for relative links (`<base>` HTML tag). pleroma-fe does not use this and it's an incredibly niche tag.
I use all of these myself on my instance by rewriting the headers with zero problems. No breakage observed.
I have not compiled my Elixir changes, but I don't see why they'd break.
Co-authored-by: r3g_5z <june@terezi.dev>
Reviewed-on: AkkomaGang/akkoma#294
Co-authored-by: @r3g_5z@plem.sapphic.site <june@terezi.dev>
Co-committed-by: @r3g_5z@plem.sapphic.site <june@terezi.dev>
In addition to making the page refer to Akkoma instead of Pleroma, I've
also removed clients that were not updated in a year or more and updated
links to websites and the contact links of authors.
Also removed language that suggested these clients are in any way
"officially supported".
Co-authored-by: Francis Dinh <normandy@biribiri.dev>
Reviewed-on: AkkomaGang/akkoma#284
Co-authored-by: Norm <normandy@biribiri.dev>
Co-committed-by: Norm <normandy@biribiri.dev>
The header name was Report-To, not Reply-To.
In any case, that's now being changed to the Reporting-Endpoints HTTP
Response Header.
https://w3c.github.io/reporting/#header
https://github.com/w3c/reporting/issues/177
CanIUse says the Report-To header is still supported by current Chrome
and friends.
https://caniuse.com/mdn-http_headers_report-to
It doesn't have any data for the Reporting-Endpoints HTTP header, but
this article says Chrome 96 supports it.
https://web.dev/reporting-api/
(Even though that came out one year ago, it's not compatible with
Network Error Logging, which still uses the Report-To version of the
API)
Signed-off-by: Thomas Citharel <tcit@tcit.fr>
There were async calls happening, so they weren't always finished when assert happened.
I also fixed some bugs in the erratic tests that were introduced when removing :shout. :shout is a key where a restart is needed, and it was changed in the test to use :rate_limit (which also requires a restart). But there was a bug in the syntax that didn't get caught because the test was tagged as erratic and therefore didn't fail. Here I fixed it.
During compilation, we had a warning `:logger is used by the current application but the current application does not depend on :logger` which is now fixed as well (see commit message for complete stacktrace).
Co-authored-by: Ilja <ilja@ilja.space>
Reviewed-on: AkkomaGang/akkoma#237
Co-authored-by: ilja <akkoma.dev@ilja.space>
Co-committed-by: ilja <akkoma.dev@ilja.space>
Fixes one of the 'erratic' tests
It used a timer to sleep.
But time also goes on when doing other things, so depending on hardware, the timings could be off.
I slightly changed the tests so we still test what we functionally want.
Instead of waiting until the cache expires I now have a function to expire the test and use that.
That means we're not testing any more if the cache really expires after a certain amount of time,
but that's the responsibility of the dependency imo, so it shouldn't be a problem.
I also changed `Pleroma.Web.Endpoint, :http, :ip` in the tests to `127.0.0.1`
Currently it was set to 8.8.8.8, but I see no reason for that and, while I assume that no calls
are made to it, it may come across as weird or suspicious to people.
Co-authored-by: Ilja <ilja@ilja.space>
Reviewed-on: AkkomaGang/akkoma#233
Co-authored-by: ilja <akkoma.dev@ilja.space>
Co-committed-by: ilja <akkoma.dev@ilja.space>
Simple fix for LDAP user registration. I'm not sure what changed but I managed to get Akkoma running in a debug session and figured out it was missing a match for an extra value at the end. I don't know Elixir all that well so I'm not sure if this was the correct way to do it... but it works. :)
Reviewed-on: AkkomaGang/akkoma#229
Co-authored-by: nullobsi <me@nullob.si>
Co-committed-by: nullobsi <me@nullob.si>
During attachment upload Pleroma returns a "description" field.
* This MR allows Pleroma to read the EXIF data during upload and return the description to the FE using this field.
* If a description is already present (e.g. because a previous module added it), it will use that
* Otherwise it will read from the EXIF data. First it will check -ImageDescription, if that's empty, it will check -iptc:Caption-Abstract
* If no description is found, it will simply return nil, which is the default value
* When people set up a new instance, they will be asked if they want to read metadata and this module will be activated if so
There was an Exiftool module, which has now been renamed to Exiftool.StripLocation
The problem was twofold. On the one hand, the function didn't actually return what was in the DB.
On the other hand, the test was flaky because it used NaiveDateTime.utc_now(), so the test could fail or pass depending on a difference of microseconds.
Both are fixed now.
It was tested whether the updated_at after marking as "read" was equal to the updated_at at insertion, but that seems wrong.
Firstly, if a record is updated, you expect the updated_at to also update.
Secondly, the insert and update happen almost at the same time, so it's flaky regardless.
Here I make sure it has a much older updated_at during insert so we can clearly see the effect after the update.
I also check that the updated_at is actually updated because I expect that this is the expected behaviour and it's also the current behaviour.
Pulled from https://git.pleroma.social/pleroma/pleroma/-/merge_requests/3721.
This makes backups require its own scope (`read:backups`) instead of the `read:accounts` scope.
Co-authored-by: Tusooa Zhu <tusooa@kazv.moe>
Reviewed-on: AkkomaGang/akkoma#218
Co-authored-by: Norm <normandy@biribiri.dev>
Co-committed-by: Norm <normandy@biribiri.dev>
As this plug is called on every request, this should reduce load on the
database by not requiring to select on the users table every single
time, and to instead use the by-ID user cache whenever possible.
This fixes a race condition bug where keys could be regenerated
post-federation, causing activities and HTTP signatures from a user to
be dropped due to key differences.
User keys are now generated on user creation instead of "when needed",
to prevent race conditions in federation and a few other issues. This
migration will generate keys missing for local users.
Non-Create/Listen activities had their associated object field
normalized and fetched, but only to use their `id` field, which is both
slow and redundant. This also failed on Undo activities, which delete
the associated object/activity in database.
Undo activities will now render properly and database loads should
improve ever so slightly.
These don't really serve a purpose now and aren't even recognized by
Gitea.
Co-authored-by: Francis Dinh <normandy@biribiri.dev>
Reviewed-on: AkkomaGang/akkoma#203
Co-authored-by: Norm <normandy@biribiri.dev>
Co-committed-by: Norm <normandy@biribiri.dev>
Use Websockex to replace websocket_client
Test that server will disconnect websocket upon token revocation
Lint
Execute session disconnect in background
Refactor streamer test
allow multi-streams
rebase websocket change
This field replaces the now deprecated conversation_id field, and now
exposes the ActivityPub object `context` directly via the MastoAPI
instead of relying on StatusNet-era data concepts.
This field seems to be a left-over from the StatusNet era.
If your application uses `pleroma.conversation_id`: this field is
deprecated.
It is currently stubbed instead by doing a CRC32 of the context, and
clearing the MSB to avoid overflow exceptions with signed integers on
the different clients using this field (Java/Kotlin code, mostly; see
Husky and probably other mobile clients.)
This should be removed in a future version of Pleroma. Pleroma-FE
currently depends on this field, as well.
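The stub works out to roughly the following (module name is illustrative; the mask simply clears the most significant bit of the 32-bit checksum):
```
defmodule ConversationIdStubSketch do
  import Bitwise

  # Derive a stable pseudo conversation_id from the AP context, clearing
  # the MSB so the result always fits a signed 32-bit integer.
  def from_context(context) when is_binary(context) do
    :erlang.crc32(context) |> band(0x7FFF_FFFF)
  end
end
```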
Incoming Pleroma replies to a Misskey thread were rejected due to a
broken context fix, which caused them to not be visible until a
non-Pleroma user interacted with the replies.
This fix properly sets the post-fix object context to its parent Create
activity as well, if it was changed.
Some dependencies will refuse to work on Elixir 1.10 (and presumably 1.9). One dependency states 1.13 as a requirement but will still work on 1.12 just fine.
Currently translated at 100.0% (103 of 103 strings)
Translated using Weblate (Catalan)
Currently translated at 0.3% (4 of 1002 strings)
Translated using Weblate (Catalan)
Currently translated at 100.0% (83 of 83 strings)
Translated using Weblate (Catalan)
Currently translated at 13.5% (14 of 103 strings)
Added translation using Weblate (Catalan)
Merge branch 'origin/develop' into Weblate.
Merge branch 'origin/develop' into Weblate.
Merge branch 'origin/develop' into Weblate.
Translated using Weblate (Catalan)
Currently translated at 100.0% (0 of 0 strings)
Added translation using Weblate (Catalan)
Added translation using Weblate (Catalan)
Added translation using Weblate (Catalan)
Co-authored-by: Anonymous <noreply@weblate.org>
Co-authored-by: Weblate <noreply@weblate.org>
Co-authored-by: sola <spla@mastodont.cat>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/ca/
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-errors/ca/
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-static-pages/ca/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
Translation: Pleroma fe/Akkoma Backend (Errors)
Translation: Pleroma fe/Akkoma Backend (Static pages)
Thanks for taking the time to file this bug report! Please try to be as specific and detailed as you can, so we can track down the issue and fix it as soon as possible.
# General information
- type: dropdown
  id: installation
  attributes:
    label: "Your setup"
    description: "What sort of installation are you using?"
    options:
      - "OTP"
      - "From source"
      - "Docker"
  validations:
    required: true
- type: input
  id: setup-details
  attributes:
    label: "Extra details"
    description: "If installing from source or docker, please specify your distro or docker setup."
    placeholder: "e.g. Alpine Linux edge"
- type: input
  id: version
  attributes:
    label: "Version"
    description: "Which version of Akkoma are you running? If running develop, specify the commit hash."
    placeholder: "e.g. 2022.11, 4e4bd248"
- type: input
  id: postgres
  attributes:
    label: "PostgreSQL version"
    placeholder: "14"
  validations:
    required: true
- type: markdown
  attributes:
    value: "# The issue"
- type: textarea
  id: attempt
  attributes:
    label: "What were you trying to do?"
  validations:
    required: true
- type: textarea
  id: expectation
  attributes:
    label: "What did you expect to happen?"
  validations:
    required: true
- type: textarea
  id: reality
  attributes:
    label: "What actually happened?"
  validations:
    required: true
- type: textarea
  id: logs
  attributes:
    label: "Logs"
    description: "Please copy and paste any relevant log output, if applicable."
    render: shell
- type: dropdown
  id: severity
  attributes:
    label: "Severity"
    description: "Does this issue prevent you from using the software as normal?"
    options:
      - "I cannot use the software"
      - "I cannot use it as easily as I'd like"
      - "I can manage"
  validations:
    required: true
- type: checkboxes
  id: searched
  attributes:
    label: "Have you searched for this issue?"
    description: "Please double-check that your issue is not already being tracked on [the forums](https://meta.akkoma.dev) or [the issue tracker](https://akkoma.dev/AkkomaGang/akkoma/issues)."
    options:
      - label: "I have double-checked and have not found this issue mentioned anywhere."

value: "Thanks for taking the time to request a new feature! Please be as concise and clear as you can in your proposal, so we could understand what you're going for."
- type: textarea
  id: idea
  attributes:
    label: "The idea"
    description: "What do you think you should be able to do in Akkoma?"
  validations:
    required: true
- type: textarea
  id: reason
  attributes:
    label: "The reasoning"
    description: "Why would this be a worthwhile feature? Does it solve any problems? Have people talked about wanting it?"
  validations:
    required: true
- type: checkboxes
  id: searched
  attributes:
    label: "Have you searched for this feature request?"
    description: "Please double-check that your issue is not already being tracked on [the forums](https://meta.akkoma.dev), [the issue tracker](https://akkoma.dev/AkkomaGang/akkoma/issues), or the one for [pleroma-fe](https://akkoma.dev/AkkomaGang/pleroma-fe/issues)."
    options:
      - label: "I have double-checked and have not found this feature request mentioned anywhere."
      - label: "This feature is related to the Akkoma backend specifically, and not pleroma-fe."
* For support use https://git.pleroma.social/pleroma/pleroma-support or [community channels](https://git.pleroma.social/pleroma/pleroma#community-channels).
* Please do a quick search to ensure no similar bug has been reported before. If the bug has not been addressed after 2 weeks, it's fine to bump it.
* Try to ensure that the bug is actually related to the Pleroma backend. For example, if a bug happens in Pleroma-FE but not in Mastodon-FE or mobile clients, it's likely that the bug should be filed in [Pleroma-FE](https://git.pleroma.social/pleroma/pleroma-fe/issues/new) repository.
-->
### Environment
* Installation type (OTP or From Source):
* Pleroma version (could be found in the "Version" tab of settings in Pleroma-FE):
* Elixir version (`elixir -v` for from source installations, N/A for OTP):