Since 3e134b07fa we remember the final location
redirects lead to and use it for all further checks,
so these redirects can no longer be exploited
to serve counterfeit objects.
This fixes:
- display URLs from independent webapp clients
redirecting to the canonical domain
- Peertube display URLs for remote content
(acting like the above)
As hinted at in the commit message when strict checking
was added in 8684964c5d,
refetching is more robust than display URL comparison
but in exchange is harder to implement correctly.
A similar refetch approach is also employed by
e.g. Mastodon, IceShrimp and FireFish.
To make sure no checks can be bypassed by forcing
a refetch, id checking is placed at the very end.
This will fix:
- Peertube display URL arrays our transmogrifier fails to normalise
- non-canonical display URLs from alternative frontends
(theoretical; we didn’t get any actual reports about this)
It will also be helpful in the planned key handling overhaul.
The modified user collision test was introduced in
https://git.pleroma.social/pleroma/pleroma/-/merge_requests/461
and unfortunately the issues this fixes aren’t public.
Afaict it was just meant to guard against someone serving
faked data belonging to an unrelated domain. Since we now
refetch and the id actually is mocked, lookup now succeeds
but will use the real data from the authoritative server,
making it unproblematic. Instead, modify the fake data further
and make sure we don’t end up using the spoofed version.
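As a rough illustration of the ordering described above (the helper name and surrounding plumbing are made up for this sketch, not the actual implementation):

```elixir
# hypothetical sketch: id verification runs last so a forced refetch
# cannot be used to bypass any of the earlier containment checks
defp verify_id(%{"id" => id} = data, requested_id) do
  if id == requested_id do
    {:ok, data}
  else
    {:error, :id_mismatch}
  end
end
```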
- pass env vars the proper™ way
- write log to file
- drop superfluous command_background
- make settings easily overwritable via conf.d
to avoid needing to edit the service file directly
if e.g. Akkoma was installed to another location
Ever since the browser frontend switcher was introduced in
de64c6c54a /akkoma counts as
an API prefix and thus gets skipped by frontend plugs
breaking the old swagger ui path of /akkoma/swagger-ui.
Do the simple thing and change the frontend path to
/pleroma/swaggerui which isn't an API path and can't collide
with frontend user paths given pleroma is a reserved nickname.
Reported in
https://meta.akkoma.dev/t/view-all-endpoints/269/7
https://meta.akkoma.dev/t/swagger-ui-not-loading/728
Multiple profiles can be specified as a space-separated list
and the possibility of additional profiles is explicitly brought up
in the ActivityStreams spec
Not _yet_ supported as of exiftool 12.87, though
at first glance it seems like standard BMP files
can't store any metadata besides colour profiles
Fixes the specific case from
AkkomaGang/akkoma-fe#396
although the frontend shouldn’t get bricked regardless.
The debug logs are very noisy and can be enabled during analysis
of a specific error believed to be SQL-related
--
Before log capturing those debug messages were still hidden,
but with log capturing they show up in the output of failed
tests unless disabled.
Cherry-picked-from: e628d00a81
Currently `mix test` prints a slew of logs in the terminal
with messages from different tests interspersed. Globally
enabling capture log hides log messages unless a test fails,
reducing noise and making it easier to analyse the important
messages (from failed tests).
Compiler warnings and a few messages not printed via Logger
still show up, but it’s much more readable than before.
Ported from: 3aed111a42
We have a bunch of mysterious sporadic failures which usually disappear
when rerunning failed jobs only. Ideally we should locate and fix the
cause of those sporadic failures, but until we figure this out retrying
once makes CI status less useless.
Fragments are already always stripped anyway
so listing one specific fragment here is
unnecessary and potentially confusing.
This effectively reverts
4457928e32
but keeps the added bridgy testcase.
Usually an id should point to another AP object
and the image file isn’t an AP object. We currently
do not provide standalone AP objects for emoji and
don't keep track of remote emoji at all.
Thus just federate them as anonymous objects,
i.e. objects only existing within a parent context
and using an explicit null id.
IceShrimp.NET previously adopted anonymous objects
for remote emoji without any apparent issues. See:
333611f65e
Fixes: #694
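For illustration, an emoji tag federated as an anonymous object might look roughly like this (field values are examples only):

```elixir
%{
  "type" => "Emoji",
  # explicit null id: the object only exists within its parent context
  "id" => nil,
  "name" => ":blobcat:",
  "icon" => %{
    "type" => "Image",
    "url" => "https://example.com/emoji/blobcat.png"
  }
}
```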
We’ve received reports of some specific instances slowly accumulating
more and more binary data over time up to OOMs and globally setting
ERL_FULLSWEEP_AFTER=0 has proven to be an effective countermeasure.
However, this incurs increased cpu perf costs everywhere and is
thus not suitable to apply out of the box.
Apparently long-lived Phoenix websocket processes are known to
often cause exactly this by getting into a state unfavourable
for the garbage collector.
Therefore it seems likely affected instances are using timeline
streaming and do so in just the right way to trigger this. We
can tune the garbage collector just for websocket processes
and use a more lenient value of 20 to keep the added perf cost
in check.
Testing on one affected instance appears to confirm this theory
Ref.:
https://www.erlang.org/doc/man/erlang#ghlink-process_flag-2-idp226
https://blog.guzman.codes/using-phoenix-channels-high-memory-usage-save-money-with-erlfullsweepafter
https://git.pleroma.social/pleroma/pleroma/-/merge_requests/4060
Tested-by: bjo
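A minimal sketch of the per-process tuning, assuming it is applied in the websocket handler’s init (the exact call site and config plumbing in the real patch may differ):

```elixir
def websocket_init(state) do
  # force a fullsweep GC every 20 minor collections so refc binaries
  # don't accumulate in long-lived streaming processes
  :erlang.process_flag(:fullsweep_after, 20)
  {:ok, state}
end
```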
Since those old migrations will now most likely only run during db init,
there’s not much point in running them in the background concurrently
anyway, so just drop the concurrent setting rather than disabling
migration locks.
Currently Akkoma doesn't have any proper mitigations against BREACH,
which exploits the use of HTTP compression to exfiltrate sensitive data.
(see: #721 (comment))
To err on the side of caution, disable gzip compression for now until we
can confirm that there's some sort of mitigation in place (whether that
would be Heal-The-Breach on the Caddy side or any Akkoma-side
mitigations).
Ever since 364b6969eb
this setting wasn't used by the backend and was a noop.
The stated usecase is better served by setting the base_url
to a local subdomain and using proxying in nginx/Caddy/...
Websites are increasingly getting more bloated with tricks like inlining content (e.g., CNN.com) which puts pages at or above 5MB. This value may still be too low.
Rich Media parsing was previously handled on-demand with a 2 second HTTP request timeout and retained only in Cachex. Every time a Pleroma instance is restarted it will have to request and parse the data for each status with a URL detected. When fetching a batch of statuses they were processed in parallel to attempt to keep the maximum latency at 2 seconds, but often resulted in a timeline appearing to hang during loading due to a URL that could not be successfully reached. URLs which had image links that expire (Amazon AWS) were parsed and inserted with a TTL to ensure the image link would not break.
Rich Media data is now cached in the database and fetched asynchronously. Cachex is used as a read-through cache. When the data becomes available we stream an update to the clients. If the result is returned quickly the experience is almost seamless. Activities were already processed for their Rich Media data during ingestion to warm the cache, so users should not normally encounter the asynchronous loading of the Rich Media data.
Implementation notes:
- The async worker is a Task with a globally unique process name to prevent duplicate processing of the same URL
- The Task will attempt to fetch the data 3 times with increasing sleep time between attempts
- The HTTP request obeys the default HTTP request timeout value instead of 2 seconds
- URLs that cannot be successfully parsed due to an unexpected error receive a negative cache entry for 15 minutes
- URLs that fail with an expected error will receive a negative cache with no TTL
- Activities that have no detected URLs insert a nil value in the Cachex :scrubber_cache so we do not repeat parsing the object content with Floki every time the activity is rendered
- Expiring image URLs are handled with an Oban job
- There is no automatic cleanup of the Rich Media data in the database, but it is safe to delete at any time
- The post draft/preview feature makes the URL processing synchronous so the rendered post preview will have an accurate rendering
Overall performance of timelines and creating new posts which contain URLs is greatly improved.
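A minimal sketch of the read-through pattern described above; the cache name and the fetch_and_parse/1 helper are illustrative assumptions, not the actual implementation:

```elixir
def get_card(url) do
  case Cachex.fetch(:rich_media_cache, url, fn _key -> backfill(url) end) do
    {:ok, card} -> card
    {:commit, card} -> card
    {:ignore, other} -> other
  end
end

defp backfill(url) do
  case fetch_and_parse(url) do
    {:ok, card} ->
      {:commit, card}

    {:error, _reason} ->
      # negative cache entry; the 15 min TTL for unexpected errors
      # would additionally be set here (omitted for brevity)
      {:commit, :error}
  end
end
```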
This lets us:
- avoid issues with broken hash indices for PostgreSQL <10
- drop runtime checks and legacy codepaths for <11 in db search
- always enable custom query plans for performance optimisation
PostgreSQL 11 has been EOL since 2023-11-09, so
in theory everyone should already have moved on to 12 anyway.
From experience, setting DB type to "Online transaction processing
system" seems to give the most optimal configuration in terms of
performance.
I also increased the recommended max connections to 25-30 as that leaves
some room for maintenance tasks to run without running out of
connections.
Finally, I removed the example configs since they're probably out of
date and I think it's better to direct people to use PGTune instead.
Logger output being visible depends on user configuration, but most of
the prints in mix tasks should always be shown. When running inside a
mix shell, it’s probably preferable to send output directly to it rather
than using raw IO.puts and we already have shell_* functions for this,
let’s use them everywhere.
Pruning can go on for a long time; give admins some insight that
something is happening to make it less frustrating and to make it
easier to tell which part of the process is stalled should this happen.
Again most of the changes are merely reindents;
review with whitespace changes hidden recommended.
May sometimes be helpful to get more predictable runtime
than just with an age-based limit.
The subquery for the non-keep-threads path is required
since delete_all does not directly accept limit().
Again most of the diff is just adjusting indentation, best
hide whitespace-only changes with git diff -w or similar.
This gives feedback when to stop rerunning limited batches.
Most of the diff is just adjusting indentation; best reviewed
with whitespace-only changes hidden, e.g. `git diff -w`.
This part of pruning can be very expensive and bog down the whole
instance to an unusable state for a long time. It can thus be desirable
to split it from prune_objects and run it on its own in smaller limited batches.
If the batches are small enough and spaced out a bit, it may even be possible
to avoid any downtime. If not, the limit can still help to at least make the
downtime duration somewhat more predictable.
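The limit can be applied with the usual subquery workaround since delete_all does not accept limit() directly; a rough sketch (the filter condition here is a placeholder, not the real orphan check from the task):

```elixir
import Ecto.Query

batch =
  from(a in Activity,
    # placeholder condition standing in for the real orphan check
    where: a.local == false,
    limit: ^limit,
    select: a.id
  )

{deleted, _} = Repo.delete_all(from(a in Activity, where: a.id in subquery(batch)))
```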
Using only the admin key works as well currently
and Akkoma needs to know the admin key to be able
to add new entries etc. However the Meilisearch
key descriptions suggest the admin key is not
supposed to be used for searches, so let’s not.
For compatibility with existing configs, search_key remains optional.
This makes show-key’s output match our documentation as of Meilisearch
1.8.0-8-g4d5971f343c00d45c11ef0cfb6f61e83a8508208. Since I’m not sure
if older versions maybe only provided description, it will fall back to
the latter if no name parameter exists.
Meilisearch is already configured to return results sorted by a
particular ranking configured in the meilisearch CLI task.
Resorting the returned top results by date partially negates this and
runs counter to what someone with tweaked settings expects.
Issue and fix identified by AdamK2003 in
#579
But instead of using an O(n^2) resort, this commit directly
retrieves results in the correct order from the database.
Closes: #579
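The order-preserving fetch boils down to letting PostgreSQL sort by each row’s position in the id list returned by Meilisearch; a sketch of the pattern (schema and field names simplified, the real query may differ):

```elixir
import Ecto.Query

def in_search_order(ids) do
  from(o in Object,
    where: o.id in ^ids,
    # keep the ranking Meilisearch returned instead of resorting by date
    order_by: fragment("array_position(?::bigint[], ?)", ^ids, o.id)
  )
end
```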
And while at it, point to this via a top-level
FEDERATION.md document as standardised by FEP-67ff.
Also add a few missing descriptions to the config cheatsheet
and move the recently removed C2S extension into an appropriate
subsection.
Trying to display non-media as media crashed the renderer;
when posting a status with a valid, non-media object id
the post was still created but then crashed e.g. timeline rendering.
It also crashed C2S inbox reads, so this could not be used to leak
private posts.
Afaict this was never used, but keeping this (in theory) possible
hinders detecting which objects are actually media uploads and
which proper ActivityPub objects.
It was originally added as part of upload support itself in
02d3dc6869 without being used
and `git log -S:activity_type` and `git log -Sactivity_type:`
don't find any other commits using this.
In Mastodon media can only be used by owners and only be associated with
a single post. We currently allow media to be associated with several
posts and until now did not limit their usage in posts to media owners.
However, media update and GET lookup was already limited to owners.
(In accordance with allowing media reuse, we also still allow GET
lookups of media already used in a post unlike Mastodon)
Allowing reuse isn’t problematic per se, but allowing use by non-owners
can be problematic if media ids of private-scoped posts can be guessed
since creating a new post with this media id will reveal the uploaded
file content and alt text.
Given media ids are currently just part of a sequential series shared
with some other objects, guessing media ids is with some persistence
indeed feasible.
E.g. sampling some public media ids from a real-world
instance with 112 total and 61 monthly-active users:
17.465.096 at t0
17.472.673 at t1 = t0 + 4h
17.473.248 at t2 = t1 + 20min
This gives about 30 new ids per minute of which most won't be
local media but remote and local posts, poll answers etc.
Assuming the default ratelimit of 15 post actions per 10s, scraping all
media for the 4h interval takes about 84 minutes and scraping the 20min
range a mere 6.3 minutes. (Until the preceding commit, post updates were
not rate limited at all, allowing even faster scraping.)
If an attacker can infer (e.g. via reply to a follower-only post not
accessible to the attacker) some sensitive information was uploaded
during a specific time interval and has some pointers regarding the
nature of the information, identifying the specific upload out of all
scraped media for this timerange is not impossible.
Thus restrict media usage to owners.
Checking ownership just in ActivityDraft would already be sufficient,
since when a scheduled status actually gets posted it goes through
ActivityDraft again, but would erroneously return a success status
when scheduling an illegal post.
Independently discovered and fixed by mint in Pleroma
1afde067b1
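The ownership check itself boils down to comparing the upload’s actor with the posting user wherever attachment ids get resolved; a sketch with illustrative names, assuming uploads record their uploader under "actor" like other objects:

```elixir
# accept only attachments whose recorded actor is the posting user
defp check_attachment_owner(%Object{data: %{"actor" => actor}}, %User{ap_id: actor}),
  do: :ok

defp check_attachment_owner(_object, _user),
  do: {:error, :forbidden}
```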
In MastoAPI media descriptions are updated via the
media update API not upon post creation or post update.
This functionality was originally added about 6 years ago in
ba93396649 which was part of
https://git.pleroma.social/pleroma/pleroma/-/merge_requests/626 and
https://git.pleroma.social/pleroma/pleroma-fe/-/merge_requests/450.
They introduced image descriptions to the front- and backend,
but predate adoption of Mastodon API.
For a while adding a `descriptions` array on post creation might have
continued to work as an undocumented Pleroma extension to Masto API, but
at latest when OpenAPI specs were added for those endpoints four years
ago in 7803a85d2c, these codepaths ceased
to be used. The API specs don’t list a `descriptions` parameter and
any unknown parameters are stripped out.
The attachments_from_ids function is only called from
ScheduledActivity and ActivityDraft.create with the latter
only being called by CommonAPI.{post,update} which in turn
are only called from ScheduledActivity again, MastoAPI controller
and without any attachment or description parameter WelcomeMessage.
Therefore no codepath can contain a descriptions parameter.
Direct users to add in the appropriate headers and update the listening
port instead of copy/pasting a config that's already outdated and
probably would otherwise have to be synced with the main example nginx
config.
Since the configuration options on the nginx side already exist in the
sample config, there's no need to tell users to copy-paste those
settings in again.
The /var/tmp directory is not mounted as tmpfs unlike /tmp which is
mounted as such on some distros like Fedora or Arch. Since there isn't
really a benefit to having the cache on tmpfs, this change should allow
for a larger cache if needed without worrying about running out of RAM.
Applying works fine with a 20220220135625 version, but it won’t be
rolled back in the right order. Fortunately this action is idempotent
so we can just rename and reapply it with a new id.
To also not break large-scale rollbacks past 2022 for anyone
who already applied it with the old id, keep a stub migration.
This promotes and expands our existing optional migration.
Based on usage statistics from several instances, see:
#764
activities_hosts is now retained after all since it’s essential
for the "instance" query parameter of *oma’s public timeline to
reliably work in a reasonable amount of time. (Although akkoma-fe has
no support for this feature and apparently barely anyone uses it.)
activities_actor_index was already dropped before in
20221211234352_remove_unused_indices; no need to drop it again.
Birthday indices were introduced in pleroma starting with
20220116183110_add_birthday_to_users which is past the
last common migration 20210416051708.
The current 10 GiB cache size is too large to fit into tmpfs for VMs and
other machines with smaller RAM sizes. Most non-Debian distros mount
/tmp on tmpfs.
Documentation was already clear on this only stripping GPS tags.
But there are more potentially sensitive metadata tags (e.g. author
and possibly description) and the name alone suggests a broader effect.
Thus change the filter to strip all metadata except for colourspace info
and orientation (technically it strips everything and then readds
selected tags).
Explicitly stripping CommonIFD0 is needed since -all does not modify
IFD0 due to TIFF storing some actual image data there. CommonIFD0 then
strips a bunch of commonly used actual metadata tags from IFD0, to my
understanding leaving TIFF image data and custom metadata tags intact.
As of exiftool 12.57 both formats are supported, but EXIF data is
optional for JXL and if exiftool doesn’t find a preexisting metadata
chunk it will create one and treat it as a minor error resulting in
a non-zero exit code.
Setting -ignoreMinorErrors avoids failing on such uploads.
Due to JSON-LD compaction the full address of public scope
may also occur in shorter forms and the spec requires us to treat them
all equivalently. To save us the pain of repeatedly checking for all
variants internally, normalise inbound data to just one form.
See note at: https://www.w3.org/TR/activitypub/#public-addressing
This needs to happen very early, even before the other addressing fixes
else an earlier validator will reject the object. This in turn required
to move the list-type normalisation earlier as well, but since I was
unsure about putting empty lists into the data when no such field
existed before, I excluded this case and thus the later fixing had to be
kept as well.
Fixes: #670
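The normalisation itself is tiny; a sketch covering the three spec-sanctioned spellings of the public collection:

```elixir
@as_public "https://www.w3.org/ns/activitystreams#Public"

defp normalise_public(addr) when addr in ["Public", "as:Public", @as_public],
  do: @as_public

defp normalise_public(addr), do: addr
```

Applied over the addressing lists early in the pipeline, before the addressing validators run.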
Alongside moving to certbot's nginx plugin, also use conf.d instead of
recreating the sites-{available,enabled} setup that Debian/Ubuntu uses.
Furthermore, also request a certificate for the media domain at the same
time since that's now required.
The API parameter is not a timestamp but an offset.
If a sufficient amount of time passes between the test’s
expires_at calculation and the internal calculation during processing
of the request, the strict equality assertion fails (either a direct
assertion or indirectly via job lookup).
To avoid this, lower the comparison granularity.
literally nothing uses C2S AP, and it's another route into core
systems which requires analysis and maintenance. A second API
is just extra surface for potentially bad things so let's take
it out back and obliterate it
by default just prevent job floods with a 1-second
uniqueness check, but override in RemoteFetcherWorker
for 5 minute uniqueness check over all states
:infinity is an option we can go for maybe at some point,
but that would prevent any refetches so maybe not idk.
We were overzealous with matching on a raw error from the object fetch that should have never been relied on like this. If we can't fetch successfully we should assume that the collection is private.
Building a more expressive and universal error struct to match on may be something to consider.
These tests relied on the removed Fetcher.fetch_object_from_id!/2 function injecting the error tuple into a log message with the exact words "Object containment failed."
We will keep this behavior by generating a similar log message, but perhaps this should do a better job of matching on the error tuple returned by Transmogrifier.handle_incoming/1
"id" is used for the canonical link to the AS2 representation of an object.
"url" is typically used for the canonical link to the HTTP representation.
It is what we use, for example, when following the "external source" link
in the frontend. However, it's not the link we include in the post contents
for quote posts.
Using URL instead means we include a more user-friendly URL for Mastodon,
and a working (in the browser) URL for Threads
previously we would uncritically take data and format it into
tags for static-fe and the like - however, instances can be
configured to disallow unauthenticated access to these resources.
this means that OG tags can act as a vector for information leakage.
_technically_ this should only occur if you have both
restrict_unauthenticated *AND* you run static-fe, which makes no
sense since static-fe is for unauthenticated people in particular,
but hey ho.
This brings it in line with its documentation and akkoma-fe’s
expectations. For backwards compatibility URL parameters are still
accepted with lower priority. Unfortunately this means duplicating
parameters and descriptions in the API spec.
Usually Plug already pre-merges parameters from different sources into
the plain 'params' parameter which then gets forwarded by Phoenix.
However, OpenApiSpex 3.x prevents this; 4.x is set to change this
https://github.com/open-api-spex/open_api_spex/issues/334
https://github.com/open-api-spex/open_api_spex/issues/92
Fixes: #691
Fixes: #722
Per the XRD specification:
> 2.4. Element <Alias>
>
> The <Alias> element contains a URI value that is an additional
> identifier for the resource described by the XRD. This value
> MUST be an absolute URI. The <Alias> element does not identify
> additional resources the XRD is describing, **but rather provides
> additional identifiers for the same resource.**
(http://docs.oasis-open.org/xri/xrd/v1.0/os/xrd-1.0-os.html#element.alias, emphasis mine)
In other words, the alias list is expected to link to things which are
not just semantically the same, but exactly the same. Old user accounts
don't do that
This change should not pose a compatibility issue: Mastodon does not
list old accounts here (See e1fcb02867/app/serializers/webfinger_serializer.rb (L12))
The use of as:alsoKnownAs is also not quite semantically right here
(see https://www.w3.org/TR/did-core/#dfn-alsoknownas, which defines
it to be used to refer to identifiers which are interchangable) but
that's what DID get for reusing a property definition that Mastodon
already squatted long before they got to it
The newest git HEAD of MIME already knows about APNG, but this
hasn’t been released yet. Without this, APNG attachments from
remote posts won’t display as images in frontends.
Fixes: akkoma#657
This protects us from falling for obvious spoofs as from the current
upload exploit (unfortunately we can’t reasonably do anything about
spoofs with exact matches as was possible via emoji and proxy).
Such objects being invalid is supported by the spec, specifically
sections 3.1 and 3.2: https://www.w3.org/TR/activitypub/#obj-id
Anonymous objects are not relevant here (they can only exist within
parent objects iiuc) and neither are client-to-server or transient objects
(as those cannot be fetched in the first place).
This leaves us with the requirement for `id` to (a) exist and
(b) be a publicly dereferencable URI from the originating server.
This alone does not yet demand strict equivalence, but the spec then
further explains objects ought to be fetchable _via their ID_.
Meaning an object not retrievable via its ID, is invalid.
This reading is supported by the fact that e.g. GoToSocial (recently) and
Mastodon (for 6+ years) do already implement such strict ID checks,
additionally proving this doesn’t cause federation issues in practice.
However, apart from canonical IDs there can also be additional display
URLs. *omas first redirect those to their canonical location, but *keys
and Mastodon directly serve the AP representation without redirects.
Mastodon and GTS deal with this in two different ways,
but both constitute an effective countermeasure:
- Mastodon:
Unless it already is a known AP id, two fetches occur.
The first fetch just reads the `id` property and then refetches from
the id. The last fetch requires the returned id to exactly match the
URL the content was fetched from. (This can be optimised by skipping
the second fetch if it already matches)
05eda8d193/app/helpers/jsonld_helper.rb (L168)
63f0979799
- GTS:
Only does a single fetch and then checks if _either_ the id
_or_ url property (which can be an object) match the original fetch
URL. This relies on implementations always including their display URL
as "url" if differing from the id. For actors this is true for all
investigated implementations, for posts only Mastodon includes an
"url", but it is also the only one with a differing display URL.
2bafd7daf5 (diff-943bbb02c8ac74ac5dc5d20807e561dcdfaebdc3b62b10730f643a20ac23c24fR222)
Albeit Mastodon’s refetch offers higher compatibility with theoretical
implementations using either multiple different display URLs or not
denoting any of them as "url" at all, for now we chose to adopt a
GTS-like refetch-free approach to avoid additional implementation
concerns wrt whether redirects should be allowed when fetching a
canonical AP id and the potential for accidentally loosening some
checks (e.g. cross-domain refetches) for one of the fetches.
This may be reconsidered in the future.
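The adopted check thus amounts to something like the following sketch (names illustrative):

```elixir
# accept the fetched object only if either its id or one of its
# url forms matches the URL we actually fetched it from
defp id_or_url_matches?(%{"id" => id} = data, fetch_url) do
  id == fetch_url or fetch_url in extract_urls(data["url"])
end

defp extract_urls(url) when is_binary(url), do: [url]
defp extract_urls(%{"href" => href}) when is_binary(href), do: [href]
defp extract_urls(urls) when is_list(urls), do: Enum.flat_map(urls, &extract_urls/1)
defp extract_urls(_), do: []
```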
Since we always followed redirects (and until recently allowed fuzzy id
matches), the ap_id of the received object might differ from the initial
fetch url. This led to us mistakenly trying to insert a new user with
the same nickname, ap_id, etc as an existing user (which will fail due
to uniqueness constraints) instead of updating the existing one.
Since we reject cross-domain redirects, this doesn’t yet
make a difference, but it’s required for stricter checking
subsequent commits will introduce.
To make sure (and in case we ever decide to reallow
cross-domain redirects) also use the final location
for containment and reachability checks.
In order to properly process incoming notes we need
to be able to map the key id back to an actor.
Also, check collections actually belong to the same server.
Key ids of Hubzilla and Bridgy samples were updated to what
modern versions of those output. If anything still uses the
old format, we would not be able to verify their posts anyway.
If it’s not already in the database,
it must be counterfeit (or just not exist at all)
Changed test URLs were only ever used from "local: false" users anyway.
This brings it in line with its name and closes an,
in practice harmless, verification hole.
This was/is the only user of contain_origin making it
safe to change the behaviour on actor-less objects.
Until now refetched objects did not ensure the new actor matches the
domain of the object. We refetch polls occasionally to retrieve
up-to-date vote counts. A malicious AP server could have switched out
the poll after initial posting with a completely different post
attributed to an actor from another server.
While we indeed fell for this spoof before the commit,
it fortunately seems to have had no ill effect in practice,
since the associated Create activity is not changed. When exposing the
actor via our REST API, we read this info from the activity not the
object.
This at first thought still keeps one avenue for exploit open though:
the updated actor can be from our own domain and a third server be
instructed to fetch the object from us. However this is foiled by an
id mismatch. By necessity of being fetchable and our longstanding
same-domain check, the id must still be from the attacker’s server.
Even the most barebone authenticity check is able to sus this out.
Such redirects on AP queries seem most likely to be a spoofing attempt.
If the object is legit, the id should match the final domain anyway and
users can directly use the canonical URL.
The lack of such a check (and use of the initially queried domain’s
authority instead of the final domain) was enabling the current exploit
to even affect instances which already migrated away from a same-domain
upload/proxy setup in the past, but retained a redirect to not break old
attachments.
(In theory this redirect could, with some effort, have been limited to
only old files, but common guides employed a catch-all redirect, which
allows even future uploads to be reachable via an initial query to the
main domain)
Same-domain redirects are valid and also used by ourselves,
e.g. for redirecting /notice/XXX to /objects/YYY.
Turns out we already had a test for activities spoofed via upload due
to an exploit several years ago. Back then *oma did not verify content-type
at all and doing so was the only adopted countermeasure.
Even the added test sample though suffered from a mismatching id, yet
nobody seems to have thought it a good idea to tighten id checks, huh
Since we will add stricter id checks later, make id and URL match
and also add a testcase for no content type at all. The new section
will be expanded in subsequent commits.
No new path traversal attacks are known. But given the many entrypoints
and code flow complexity inside pack.ex, it unfortunately seems
possible a future refactor or addition might reintroduce one.
Furthermore, some old packs might still contain traversing path entries
which could trigger undesirable actions on rename or delete.
To ensure this can never happen, assert safety during path construction.
Path.safe_relative was introduced in Elixir 1.14, but
fortunately, we already require at least 1.14 anyway.
To save on bandwidth and avoid OOMs with large files.
Ofc, this relies on the remote server
(a) sending a content-length header and
(b) being honest about the size.
Common fedi servers seem to provide the header and (b) at least raises
the required privilege of a malicious actor to a server infrastructure
admin of an explicitly allowed host.
A more complete defense which still works when faced with
a malicious server requires changes in upstream Finch;
see https://github.com/sneako/finch/issues/224
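The check itself is just a header comparison before the body is read; a sketch with illustrative helper names:

```elixir
defp check_size(headers, max_size) do
  case List.keyfind(headers, "content-length", 0) do
    {_, value} ->
      # trusts the advertised size; a lying server can only be caught
      # with upstream Finch support (see linked issue)
      if String.to_integer(value) <= max_size, do: :ok, else: {:error, :too_large}

    nil ->
      :ok
  end
end
```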
Certain attacks rely on predictable paths for their payloads.
If we weren’t so overly lax in our (id, URL) check, the current
counterfeit activity exploit would be one of those.
It seems plausible for future attacks to hinge on
or be made easier by predictable paths too.
In general, letting remote actors place arbitrary data at
a path within our domain of their choosing (sans prefix)
just doesn’t seem like a good idea.
Using fully random filenames would have worked as well, but this
is less friendly for admins checking emoji dirs.
The generated suffix should still be more than enough;
an attacker needs on average 140 trillion attempts to
correctly guess the final path.
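For example, a 6-byte random suffix keeps the shortcode visible for admins while giving 2^48 possible paths, i.e. the roughly 140 trillion average guesses mentioned above (the exact scheme here is an assumption):

```elixir
def emoji_filename(shortcode, extension) do
  # readable prefix for admins, unpredictable suffix for attackers
  suffix = Base.encode16(:crypto.strong_rand_bytes(6), case: :lower)
  "#{shortcode}-#{suffix}.#{extension}"
end
```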
This will decouple filenames from shortcodes and
allow more image formats to work instead of only
those included in the auto-load glob. (Albeit we
still saved other formats to disk, wasting space)
Furthermore, this will allow us to make
final URL paths infeasible to predict.
Since 3 commits ago we restrict shortcodes to a subset of
the POSIX Portable Filename Character Set, therefore
this can never have a directory component.
E.g. *key’s emoji URLs typically don’t have file extensions, but
until now we just slapped ".png" at their end hoping for the best.
Furthermore, this gives us a chance to actually reject non-images,
which before was not feasible exactly due to those extension-less URLs.
As suggested in b387f4a1c1, only steal
emoji with alphanumeric, dash, or underscore characters.
Also consolidate all validation logic into a single function.
===
Taken from akkoma#703 with cosmetic tweaks
This matches our existing validation logic from Pleroma.Emoji,
and, apart from excluding the dot, also matches POSIX’s Portable
Filename Character Set, making it always safe for use in filenames.
Mastodon is even stricter, also disallowing U+002D HYPHEN-MINUS
and requiring at least two characters.
Given both we and Mastodon reject shortcodes excluded
by this anyway, this doesn’t seem like a loss.
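The consolidated check is essentially a single regex; sketch:

```elixir
# alphanumerics, dash and underscore only, i.e. the POSIX Portable
# Filename Character Set minus the dot
def valid_shortcode?(shortcode) do
  String.match?(shortcode, ~r/^[a-zA-Z0-9_-]+$/)
end
```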
Even more than with user uploads, a same-domain proxy setup bears
significant security risks due to serving untrusted content under
the main domain space.
A risky setup like that should never be the default.
Just as with uploads and emoji before, this can otherwise be used
to place counterfeit AP objects or other malicious payloads.
In this case, even if we never assign a privileged type to content,
the remote server can and until now we just mimicked whatever it told us.
Preview URLs already handle only specific, safe content types
and redirect to the external host for all else; thus no additional
sanitisation is needed for them.
Non-previews are all delegated to the modified ReverseProxy module.
It already has consolidated logic for building response headers
making it easy to slip in sanitisation.
Although proxy urls are prefixed by a MAC built from a server secret,
attackers can still achieve a perfect id match when they are able to
change the contents of the pointed-to URL. After sending a post
containing an attachment at a controlled destination, the proxy URL can
be read back and inserted into the payload. After injection of
counterfeits into the target server the content can again be changed
to something innocuous, lessening the chance of detection.
By mapping all extensions related to our custom privileged types
back to innocuous text/plain, our custom types will never automatically
be inserted which was one of the factors making impersonation possible.
Note, this does not invalidate the upload and emoji Content-Type
restrictions from previous commits. Apart from counterfeit AP objects
there are other payloads with standard types this protects against,
e.g. *.js Javascript payloads as used in prior frontend injections.
Else malicious emoji packs or our EmojiStealer MRF can
put payloads into the same domain as the instance itself.
Sanitising the content type should prevent proper clients
from acting on any potential payload.
Note, this does not affect the default emoji shipped with Akkoma
as they are handled by another plug. However, those are fully trusted
and thus not in need of sanitisation.
This actually was already intended before to eradicate all future
path-traversal-style exploits and to fix issues with some
characters like akkoma#610 in 0b2ec0ccee. However, Dedupe and
AnonymizeFilename got mixed up. The latter only anonymises the name
in Content-Disposition headers and GET parameters (with link_name),
_not_ the upload path.
Even without Dedupe, the upload path is prefixed by a UUID,
so it _should_ already be hard to guess for attackers. But now
we actually can be sure no path shenanigans occur, uploads
reliably work and we save some disk space.
While this makes the final path predictable, this prediction is
not exploitable. Insertion of a back-reference to the upload
itself requires pulling off a successful preimage attack against
SHA-256, which is deemed infeasible for the foreseeable future.
Dedupe was already included in the default list in config.exs
since 28cfb2c37a, but this will get overridden by whatever the
config generated by the "pleroma.instance gen" task chose.
Upload+delete tests running in parallel using Dedupe might be flaky, but
this was already true before and needs its own commit to fix eventually.
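For reference, Dedupe derives the storage name from a digest of the file contents, roughly like this (simplified sketch):

```elixir
def dedupe_filename(file_contents, extension) do
  # content-addressed name: identical files map to identical paths
  digest = Base.encode16(:crypto.hash(:sha256, file_contents), case: :lower)
  "#{digest}.#{extension}"
end
```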
The lack thereof enables spoofing ActivityPub objects.
A malicious user could upload fake activities as attachments
and (if having access to remote search) trick local and remote
fedi instances into fetching and processing it as a valid object.
If uploads are hosted on the same domain as the instance itself,
it is possible for anyone with upload access to impersonate(!)
other users of the same instance.
If uploads are exclusively hosted on a different domain, even the most
basic check of domain of the object id and fetch url matching should
prevent impersonation. However, it may still be possible to trick
servers into accepting bogus users on the upload (sub)domain and bogus
notes attributed to such users.
Instances which later migrated to a different domain and have a
permissive redirect rule in place can still be vulnerable.
If — like Akkoma — the fetching server is overly permissive with
redirects, impersonation still works.
This was possible because Plug.Static also uses our custom
MIME type mappings used for actually authentic AP objects.
Provided external storage providers don’t somehow return ActivityStream
Content-Types on their own, instances using those are also safe against
their users being spoofed via uploads.
Akkoma instances using the OnlyMedia upload filter
cannot be exploited as a vector in this way — IF the
fetching server validates the Content-Type of
fetched objects (Akkoma itself does this already).
However, restricting uploads to only multimedia files may be a bit too
heavy-handed. Instead this commit will restrict the returned
Content-Type headers for user uploaded files to a safe subset, falling
back to generic 'application/octet-stream' for anything else.
This will also protect against non-AP payloads as e.g. used in
past frontend code injection attacks.
It’s a slight regression in user comfort, if say PDFs are uploaded,
but this trade-off seems fairly acceptable.
(Note, just excluding our own custom types would offer no protection
against non-AP payloads and bear a (perhaps small) risk of a silent
regression should MIME ever decide to add a canonical extension for
ActivityPub objects)
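Conceptually the restriction is a small allow-list with a generic fallback; a sketch (the allowed prefixes shown are illustrative, not the exact list):

```elixir
def sanitised_content_type(content_type) do
  safe_prefixes = ["image/", "video/", "audio/"]

  if String.starts_with?(content_type, safe_prefixes) do
    content_type
  else
    # anything else, including our privileged AP types, falls back here
    "application/octet-stream"
  end
end
```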
Now, one might expect there to be other defence mechanisms
besides Content-Type preventing counterfeits from being accepted,
like e.g. validation of the queried URL and AP ID matching.
Inserting a self-reference into our uploads is hard, but unfortunately
*oma does not verify the id in such a way and happily accepts _anything_
from the same domain (without even considering redirects).
E.g. Sharkey (and possibly other *keys) seem to attempt to guard
against this by immediately refetching the object from its ID, but
this is easily circumvented by just uploading two payloads with the
ID of one linking to the other.
Unfortunately *oma is thus _both_ a vector for spoofing and
vulnerable to those spoof payloads, resulting in an easy way
to impersonate our users.
Similar flaws exists for emoji and media proxy.
Subsequent commits will fix this by rigorously sanitising
content types in more areas, hardening our checks, improving
the default config and discouraging insecure config options.
The default refresh interval of 1 day is woefully inadequate here;
users expect to be able to add the alias to their new account and
press the move button on their old account and have it work.
This allows callers to specify a maximum age before a refetch is
triggered. We set that to 5s for the move code, as a nice compromise
between Making Things Work and ensuring that this can't be used
to hammer a remote server
Currently translated at 18.1% (183 of 1006 strings)
Translated using Weblate (Polish)
Currently translated at 6.6% (67 of 1006 strings)
Co-authored-by: Weblate <noreply@weblate.org>
Co-authored-by: subtype <subtype@hollow.capital>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/pl/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
This will crop them to a square matching behaviour of Husky and *key
and allowing us to never worry about consistent alignment.
Note, akkoma-fe instead displays the full image with inserted spacing.
Mastodon at the very least seems to prevent the creation of emoji with
dots in their name (and refuses to accept them in federation). It feels
like being cautious in what we accept is reasonable here.
Colons are the emoji separator and so obviously should be blocked.
Perhaps instead of filtering out things like this we should just
do a regex match on `[a-zA-Z0-9_-]`? But that's plausibly a decision
for another day
Perhaps we should also have a centralised "is this a valid emoji shortcode?"
function
This partly reverts 1d884fd914
while fixing both the issue it addressed and the issue it caused.
The above commit successfully fixed OpenGraph metadata tags
which until then always showed the user bio instead of post content
by handing the activity’s AP ID as url to the Metadata builder
_instead_ of passing the internal ID as activity_id.
However, in doing so the commit instead inflicted this very problem
onto Twitter metadata tags which ironically are used by akkoma-fe.
This is because while the OpenGraph builder wants an URL as url,
the Twitter builder needs the internal ID to build the URL to the
embedded player for videos and has no URL property.
Thanks to twpol for tracking down this root cause in #644.
Now, once identified the problem is simple, but this simplicity
invites multiple possible solutions to bikeshed about.
1. Just pass both properties to the builder and let them pick
2. Drop the url parameter from the OpenGraph builder and instead
a) build static-fe URL of the post from the ID (like Twitter)
b) use the passed-in object’s AP ID as an URL
Approach 2a has the disadvantage of hardcoding the expected URL outside
the router, which will be problematic should it ever change.
Approach 2b is conceptually similar to how the builder works atm.
However, the og:url is supposed to be a _permanent_ ID, by changing it
we might, afaiui, technically violate OpenGraph specs(?). (Though its
real-world consequence may very well be near non-existent.)
This leaves just approach 1, which this commit implements.
Albeit it too is not without nits to pick, as it leaves the metadata
builders with an inconsistent interface.
Additionally, this will resolve the suboptimal Discord previews for
content-less image posts reported in #664.
Discord already prefers OpenGraph metadata, so it’s mostly unaffected.
However, it appears when encountering an explicitly empty OpenGraph
description and a non-empty Twitter description, it replaces just the
empty field with its Twitter counterpart, resulting in the user’s bio
slipping into the preview.
Secondly, regardless of any OpenGraph tags, Discord uses twitter:card to
decide how prominently images should be displayed, but due to the bug the
card type was stuck as "summary", forcing images to always remain small.
Root cause identified by: twpol
Fixes: #644
Fixes: #664
fixed up some grammar / wording. removed a sentence and made wording more in line with what I could find in Admin-FE (especially wording of "rejecting" vs. dropping)
This vastly reduces idle CPU usage, which should generally be beneficial
for most small-to-medium sized instances.
Additionally update the documentation to specify how to override the vm.args
file for OTP installs
This fixes an oversight in e99e2407f3
which added background_removal as a possible SimplePolicy setting.
However, it did _not_ add a default value to the base config and
as it turns out instance_list doesn’t handle unset options well.
In effect this caused federating instances with SimplePolicy enabled
but background_removal not explicitly configured to always trip up for
outgoing account updates in check_background_removal (and incoming
updates from Sharkey).
For added ""fun"" this error was able to block account updates made
e.g. via /api/v1/accounts/update_credentials.
Tests were unaffected since they explicitly override
all relevant config options.
Set a default to avoid all this
(note to self: don’t forget next time, baka!)
While our own frontend currently doesn’t show backgrounds of other users,
this property is already publicly readable via the REST API and likely was
always intended to be shown and federated.
Recently Sharkey added support for profile backgrounds and
immediately made them federate and be displayed to others.
We use the same AP field as Sharkey here which should make
it interoperable both ways out-of-the-box.
Ref.: 4e64397635
Fixes misspelling and omission of an example in commit
0cfd5b4e89 which added the
status_ttl_property. This was the only place this commit
referred to the property as note_ttl_days.
Partially fixes the omitted schema update of the instance metadata addition
from commit b7e8ce2350. A proper full schema
for nodeinfo is still missing.
OTP’s default SSL/TLS settings are rather restrictive
and in particular do not use system CA certs.
In our case using system CA certs is virtually always desired
and the lack of it leads to non-obvious errors. Manually configuring
system CA certs from in-database config also isn’t straightforward.
Furthermore, gen_smtp uses a different set of connection options
for direct SSL/TLS and a later TLS upgrade providing additional
confusion and complexity in how to configure this.
Thus provide some suitable defaults for sending SMTP emails.
Everything can still be overridden by admins if necessary.
Note: defaults are not appended when validating the config
in hopes of improving the error message (as the required relay key
is already accessed to generate defaults for optional fields)
Fixes: #660
With kilobytes the resulting numbers got too large and were cut off
in the charts, making them useless. However, even an idle Akkoma
server’s memory usage is in the lower hundreds of megabytes, so
we don’t need this much precision to begin with for the dashboard.
Other metric users might prefer base units and can handle scaling in a
smarter way, so keep this configurable.
The spec was copied from another endpoint, including the operation id,
leading to scrubbing the valid parameters from the request and simply
not working.
the previous code passed a state parameter to ueberauth with info
about where to go after the user logged in, etc.
since ueberauth 0.7, this parameter is ignored and oauth state is used
for actual CSRF reasons.
we now set a cookie with the state we need to keep track of, and read
it once the callback happens.
Implements the preferences endpoint in the Mastodon API, but returns
default values for most of the preferences right now. The only supported
preference we can access is default post visibility, and a relevant test
is added as well.
Currently, Akkoma sorts by published date first before everything else.
This however makes search results pretty bad since Meilisearch uses a
bucket sort algorithm in order of the ranking rules specified:
https://www.meilisearch.com/docs/learn/core_concepts/relevancy#behavior
Since the `published` attribute is a unix timestamp, the resulting
buckets are pretty small so the other rules essentially have little to
no effect on the rankings of search results.
This fixes that issue by moving the `published:desc` rule further down
so it still sorts by date, but only after considering everything else.
AFAIK attribute and sort don't really affect results for Akkoma since
the only attribute considered is the `content` attribute and the `sort`
parameter isn't used in Akkoma searches. Everything else is made to
match more closely to Meilisearch's defaults.
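The resulting ranking rule list then looks roughly like this, with Meilisearch's documented defaults first and the date sort last (sketch of what the CLI task would push):

```elixir
ranking_rules = [
  "words",
  "typo",
  "proximity",
  "attribute",
  "sort",
  "exactness",
  # date now only breaks ties left over after the relevancy rules
  "published:desc"
]
```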
The docker-compose.yml file is likely to be edited quite extensively by
admins when setting up an instance. This would likely cause problems
when dealing with updating Akkoma as merge conflicts would likely occur.
Docker-compose already has the ability to use override files in addition
to the main `docker-compose.yml` file. Admins can instead put any
overrides (additional volumes, container for elasticsearch, etc.) into a
file that won't be tracked by git and thus won't run into merge
conflicts in the future. In particular, the
`docker-compose.override.yml` will be checked by docker compose in
addition to the main file if it exists and override definitions from the
latter with the former.
OTP builds to 1.15
Changelog entry
Ensure policies are fully loaded
Fix :warn
use main branch for linkify
Fix warn in tests
Migrations for phoenix 1.17
Revert "Migrations for phoenix 1.17"
This reverts commit 6a3b2f15b7.
Oban upgrade
Add default empty whitelist
mix format
limit test to amd64
OTP 26 tests for 1.15
use OTP_VERSION tag
baka
just 1.15
Massive deps update
Update locale, deps
Mix format
shell????
multiline???
?
max cases 1
use assert_recieve
don't put_env in async tests
don't async conn/fs tests
mix format
FIx some uploader issues
Fix tests
Here we make it a generic placeholder which should make accidental copy-pasting of this command happen less.
We also had one case of someone who got errors because the SHELL variable wasn't set. This is the case for Alpine.
Here I added a line to fill it in when not set.
Commit 11ec9daa5b (released with 3.2.0)
added the fedibird frontend and tweaked and extended Mastodon API for
compatibility with it. Document these changes.
Set it to `inline` because the vast majority of what's sent is multimedia
content while `attachment` would have the side-effect of triggering a
download dialog.
Closes: https://git.pleroma.social/pleroma/pleroma/-/issues/3114
TwitterCard meta tags are supposed to use the attributes "name" and "content".
OpenGraph tags use the attributes "property" and "content".
Twitter itself is smart enough to detect broken meta tags and discover the TwitterCard
using "property" and "content", but other platforms that only implement parsing of TwitterCards
and not OpenGraph may fail to correctly detect the tags as they're under the wrong attributes.
> "Open Graph protocol also specifies the use of property and content attributes for markup while
> Twitter cards use name and content. Twitter’s parser will fall back to using property and content,
> so there is no need to modify existing Open Graph protocol markup if it already exists." [0]
[0] https://developer.twitter.com/en/docs/twitter-for-websites/cards/guides/getting-started
`context` fields for objects and activities can now be generated based
on the object/activity `inReplyTo` field or its ActivityPub ID, as a
fallback method in cases where `context` fields are missing for incoming
activities and objects.
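A minimal sketch of that fallback for incoming objects (key handling simplified):

```elixir
defp fill_context(%{"context" => context} = object) when is_binary(context),
  do: object

defp fill_context(object) do
  # fall back to the thread root reference, then the object's own AP ID
  Map.put(object, "context", object["inReplyTo"] || object["id"])
end
```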
This is useful for people who want to migrate back to Pleroma.
It's also added in the docs, but also noted that this is barely tested and to be used at their own risk.
A recent group of vulnerabilities have been found in Pleroma (and
inherited by Akkoma) that involve media files either uploaded by local
users or proxied from remote instances (if media proxy is enabled).
It is recommended that media files are served on a separate subdomain
in order to mitigate this class of vulnerabilities.
Based on https://meta.akkoma.dev/t/another-vector-for-the-injection-vulnerability-found/483/2
There were two warnings, these are now fixed.
I moved the fonts folder into the css folder. Another option was to change the relative path,
but it seems that after changing it in the css file, the path got changed back when rebuilding the site.
Maybe it needs to be changed somewhere else, idk, this worked.
CREATE DATABASE was running in a transaction block with CREATE USER. This isn't allowed (any more?).
This is now two separate commands.
I also did some other touch-ups including
* making it OTP-first,
* add backup of static directory because this contains e.g. custom emoji, and
* remove the suggestion for using the setup_db.psql file. The reason is because I fear it causes more confusion than what it's worth.
* Firstly, OTP installations won't have this file because it's created in /tmp.
* Secondly, the instance has been reinstalled and thus a new setup_db.psql with different password may have been created, causing only more confusion.
During attachment upload Pleroma returns a "description" field.
* This MR allows Pleroma to read the EXIF data during upload and return the description to the FE using this field.
* If a description is already present (e.g. because a previous module added it), it will use that
* Otherwise it will read from the EXIF data. First it will check -ImageDescription, if that's empty, it will check -iptc:Caption-Abstract
* If no description is found, it will simply return nil, which is the default value
* When people set up a new instance, they will be asked if they want to read metadata and this module will be activated if so
There was an Exiftool module, which has now been renamed to Exiftool.StripLocation
- Meilisearch: it is now possible to use separate keys for search and admin actions
- New standalone `prune_orphaned_activities` mix task with configurable batch limit
- The `prune_objects` mix task now accepts a `--limit` parameter for initial object pruning
## Fixed
- Meilisearch: order of results returned from our REST API now actually matches how Meilisearch ranks results
- Emoji are now federated as anonymous objects, fixing issues with
some strict servers, e.g. rejecting remote emoji reactions
- AP objects with additional JSON-LD profiles beyond ActivityStreams can now be fetched
- Single-selection polls no longer expose the voter_count; MastoAPI demands it be null
and this confused some clients leading to vote distributions >100%
## Changed
- Refactored Rich Media to cache the content in the database. Fetching operations that could block status rendering have been eliminated.
## 2024.04.1 (Security)
## Fixed
- Issue allowing non-owners to use media objects in posts
- Issue allowing use of non-media objects as attachments and crashing timeline rendering
- Issue allowing webfinger spoofing in certain situations
## 2024.04
## Added
- Support for [FEP-fffd](https://codeberg.org/fediverse/fep/src/branch/main/fep/fffd/fep-fffd.md) (proxy objects)
- Verified support for elixir 1.16
- Uploadfilter `Pleroma.Upload.Filter.Exiftool.ReadDescription` returns description values to the FE so they can pre-fill the image description field
NOTE: this filter MUST be placed before `Exiftool.StripMetadata` to work
## Changed
- Inbound pipeline error handling was modified somewhat, which should lead to less incomprehensible log spam. Hopefully.
- Uploadfilter `Pleroma.Upload.Filter.Exiftool` was replaced by `Pleroma.Upload.Filter.Exiftool.StripMetadata`;
the latter strips all non-essential metadata by default but can be configured.
To regain the old behaviour of only stripping GPS data set `purge: ["gps:all"]`.
- Uploadfilter `Pleroma.Upload.Filter.Exiftool` has been renamed to `Pleroma.Upload.Filter.Exiftool.StripMetadata`
- MRF.InlineQuotePolicy now prefers to insert display URLs instead of ActivityPub IDs
- Old accounts are no longer listed in WebFinger as aliases; this was breaking spec
## Fixed
- Issue preventing fetching anything from IPv6-only instances
- Issue allowing post content to leak via opengraph tags despite :restrict\_unauthenticated being set
- Move activities no longer operate on stale user data
- Missing definitions in our JSON-LD context
- Issue mangling newlines in code blocks for RSS/Atom feeds
- static\_fe squeezing non-square avatars and emoji
- Issue leading to properly JSON-LD compacted emoji reactions being rejected
- We now use a standard-compliant Accept header when fetching ActivityPub objects
- /api/pleroma/notification\_settings was rejecting body parameters;
this also broke changing this setting via akkoma-fe
- Issue leading to Mastodon bot accounts being rejected
- Scope misdetection of remote posts resulting from not recognising
JSON-LD-compacted forms of public scope; affected e.g. federation with bovine
- Ratelimits encountered when fetching objects are now respected; 429 responses will cause a backoff when we get one.
## Removed
- ActivityPub Client-To-Server write API endpoints have been disabled;
read endpoints are planned to be removed next release unless a clear need is demonstrated
## 2024.03
## Added
- CLI tasks best-effort checking for past abuse of the recent spoofing exploit
- new `:mrf_steal_emoji, :download_unknown_size` option; defaults to `false`
## Changed
- `Pleroma.Upload, :base_url` now MUST be configured explicitly if used;
use of the same domain as the instance is **strongly** discouraged
- `:media_proxy, :base_url` now MUST be configured explicitly if used;
use of the same domain as the instance is **strongly** discouraged
- StealEmoji:
- now uses the pack.json format;
existing users must migrate with an out-of-band script (check release notes)
- only steals shortcodes recognised as valid
- URLs of stolen emoji are no longer predictable
- The `Dedupe` upload filter is now always active;
`AnonymizeFilenames` is again opt-in
- received AP data is sanity checked before we attempt to parse it as a user
- Uploads, emoji and media proxy now restrict Content-Type headers to a safe subset
- Akkoma will no longer fetch and parse objects hosted on the same domain
## Fixed
- Critical security issue allowing Akkoma to be used as a vector for
(depending on configuration) impersonation of other users or creation
of bogus users and posts on the upload domain
- Critical security issue letting Akkoma fall for the above impersonation
payloads due to lack of strict id checking
- Critical security issue allowing the target domain of a redirect to pose as the initial domain
(e.g. with media proxy's fallback redirects)
- refetched objects can no longer attribute themselves to third-party actors
(this had no externally visible effect since actor info is read from the Create activity)
- our litepub JSON-LD schema is now served with the correct content type
- remote APNG attachments are now recognised as images
## Upgrade Notes
- As mentioned in "Changed", `Pleroma.Upload, :base_url` **MUST** be configured. Uploads will fail without it.
- Akkoma will refuse to start if this is not set.
- Same with media proxy; see the example below.
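A minimal sketch of the newly required settings, assuming dedicated media domains (the domain names are placeholders):
```elixir
# Placeholder domains — substitute your own dedicated media (sub)domains
config :pleroma, Pleroma.Upload,
  base_url: "https://media.example.org/media/"

config :pleroma, :media_proxy,
  base_url: "https://proxy.example.org"
```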
## 2024.02
## Added
- Full compatibility with Erlang OTP26
- handling of GET /api/v1/preferences
- Akkoma API is now documented
- ability to auto-approve follow requests from users you are already following
- The SimplePolicy MRF can now strip user backgrounds from selected remote hosts
## Changed
- OTP builds are now built on erlang OTP26
- The base Phoenix framework is now updated to 1.7
- An `outbox` field has been added to actor profiles to comply with AP spec
- User profile backgrounds now federate with other Akkoma instances and Sharkey
## Fixed
- Documentation issue in which a non-existing nginx file was referenced
- Issue where a bad inbox URL could break federation
- Issue where hashtag rel values would be scrubbed
- Issue where short domains listed in `transparency_obfuscate_domains` were not actually obfuscated
## 2023.08
## Added
- Added a new configuration option to the MediaProxy feature that allows the blocking of specific domains from using the media proxy or being explicitly allowed by the Content-Security-Policy.
- Please make sure instances you wanted to block media from are not in the MediaProxy `whitelist`, and instead use `blocklist`.
- `OnlyMedia` Upload Filter to simplify restricting uploads to audio, image, and video types
- ARM64 OTP builds
- Ubuntu22 builds are available for develop and stable
- other distributions are stable only
- Support for Elixir 1.15
- 1.14 is still supported
- OTP26 is currently "unsupported". It will probably work, but due to the way
it handles map ordering, the test suite will not pass for it as yet.
## Changed
- Alpine OTP builds are now from alpine 3.18, which is OpenSSLv3 compatible.
If you use alpine OTP builds you will have to update your local system.
- Debian OTP builds are now from a base of bookworm, which is OpenSSLv3 compatible.
If you use debian OTP builds you will have to update your local system to
bookworm (currently: stable).
- Ubuntu and debian builds are compatible again! (for now...)
- Blocks/Mutes now return from max ID to min ID, in line with mastodon.
- The AnonymizeFilename filter is now enabled by default.
## Fixed
- Deactivated users can no longer show up in the emoji reaction list
- Embedded posts can no longer bypass `:restrict\_unauthenticated`
- GET/HEAD requests will now work when requesting AWS-based instances.
## Security
- Add `no_new_privs` hardening to OpenRC and systemd service files
- XML parsers cannot load any entities (thanks @Mae@is.badat.dev!)
- Reduced permissions of config files and directories, distros requiring greater permissions like group-read need to pre-create the directories
## Removed
- Builds for debian oldstable (bullseye)
- If you are on oldstable you should NOT attempt to update OTP builds without
first updating your machine.
## 2023.05
## Added
- Rich media will now hard-exit after 5 seconds, to prevent timeline hangs
- HTTP Content Security Policy is now far more strict to prevent any potential XSS/CSS leakages
- Follow requests are now paginated to match the Mastodon API spec; use the Link header to paginate.
- `internal.fetch` and `relay` actors are now represented with the actor type `Application`
### Fixed
- /api/v1/accounts/lookup will now respect restrict\_unauthenticated
Currently, Pleroma offers bugfixes and security patches only for the latest minor release.
| Version | Support                       |
|---------|-------------------------------|
| 2.2     | Bugfixes and security patches |
## Reporting a vulnerability
Please use confidential issues (tick the "This issue is confidential and should only be visible to team members with at least Reporter access." box when submitting) at our [bugtracker](https://git.pleroma.social/pleroma/pleroma/-/issues/new) for reporting vulnerabilities.
New releases and security issues are announced at
[meta.akkoma.dev](https://meta.akkoma.dev/c/releases) and
"Base URL for the uploads. Required if you use a CDN or host attachments under a different domain.",
"Base URL for the uploads. Required if you use a CDN or host attachments under a different domain - it is HIGHLY recommended that you **do not** set this to be the same as the domain akkoma is hosted on.",
"List of MIME (main) types uploads are allowed to identify themselves with. Other types may still be uploaded, but will identify as a generic binary to clients. WARNING: Loosening this over the defaults can lead to security issues. Removing types is safe, but only add to the list if you are sure you know what you are doing.",
"""
suggestions: [
"image",
"audio",
"video",
"font"
]
},
%{
key: :filename_display_max_length,
}
]
},
%{
group: :pleroma,
key: Pleroma.Upload.Filter.Exiftool.StripMetadata,
type: :group,
description: "Strip specified metadata from image uploads",
children: [
%{
key: :purge,
description: "Metadata fields or groups to strip",
type: {:list, :string},
suggestions: ["all", "CommonIFD0"]
},
%{
key: :preserve,
description: "Metadata fields or groups to preserve (takes precedence over stripping)",
type: {:list, :string},
suggestions: ["ColorSpaces", "Orientation"]
}
]
},
%{
group: :pleroma,
key: Pleroma.Emails.Mailer,
key: :level,
type: {:dropdown, :atom},
description: "Log level",
suggestions: [:debug, :info, :warning, :error]
},
%{
key: :ident,
key: :level,
type: {:dropdown, :atom},
description: "Log level",
suggestions: [:debug, :info, :warning, :error]
},
%{
key: :format,
%{
key: :whitelist,
type: {:list, :string},
description: "List of hosts with scheme to bypass the MediaProxy",
- `--keep-threads` - Don't prune posts when they are part of a thread where at least one post has seen local interaction (e.g. one of the posts is a local post, or is favourited by a local user, or has been repeated by a local user...). It also won't delete posts when at least one of the posts in that thread is kept (e.g. because one of the posts has seen recent activity).
- `--keep-non-public` - Keep non-public posts like DMs and followers-only, even if they are remote.
- `--limit` - limits how many remote posts get pruned. This limit does **not** apply to any of the follow-up jobs. If you want to keep the database load in check, it is advisable to run the standalone `prune_orphaned_activities` task with a limit afterwards instead of passing `--prune-orphaned-activities` to this task.
- `--prune-orphaned-activities` - Also prune orphaned activities afterwards. Activities are things like Like, Create, Announce, Flag (aka reports)... Pruning them can significantly help reduce the database size.
- `--vacuum` - Run `VACUUM FULL` after the objects are pruned. This should not be used on a regular basis, but is useful if your instance has been running for a long time before pruning.
## Prune orphaned activities from the database
This will prune activities which are no longer referenced by anything.
Such activities might be the result of running `prune_objects` without `--prune-orphaned-activities`.
The same notes and warnings apply as for `prune_objects`.
The task will print out how many rows were freed in total in its last
line of output in the form `Deleted 345 rows`.
When running the job in limited batches this can be used to determine
whether any orphaned activities remain and another batch is needed.
- `--limit n` - Only delete up to `n` activities in each query making up this job, i.e. if this job runs two queries at most `2n` activities will be deleted. Running this task repeatedly in limited batches can help maintain the instance’s responsiveness while still freeing up some space.
- `--no-singles` - Do not delete activities referencing single objects
- `--no-arrays` - Do not delete activities referencing an array of objects
This will download the latest build for the pre-configured `ref` and install it. It can then be configured as one of the served frontends in the config file (see `primary` or `admin`).
You can override any of the details. To install an Akkoma-FE build from a different URL, you could do this:
2. Go to the working directory of Akkoma (default is `/opt/akkoma`)
3. Run `sudo -Hu postgres pg_dump -d akkoma --format=custom -f </path/to/backup_location/akkoma.pgdump>`[¹] (make sure the postgres user has write access to the destination file)
4. Copy `akkoma.pgdump`, `config/config.exs`[²], `uploads` folder, and [static directory](../configuration/static_dir.md) to your backup destination. If you have other modifications, copy those changes too.
5. Restart the Akkoma service.
[¹]: We assume the database name is "akkoma". If not, you can find the correct name in your configuration files.
[²]: If you have a from source installation, you need `config/prod.secret.exs` instead of `config/config.exs`. The `config/config.exs` file also exists, but in case of from source installations, it only contains the default values and it is tracked by Git, so you don't need to back it up.
## Restore/Move
2. Stop the Akkoma service.
3. Go to the working directory of Akkoma (default is `/opt/akkoma`)
4. Copy the above mentioned files back to their original position.
5. Drop the existing database and user[¹]. `sudo -Hu postgres psql -c 'DROP DATABASE akkoma;'` `sudo -Hu postgres psql -c 'DROP USER akkoma;'`
6. Restore the database schema and akkoma role[¹] (replace the password with the one you find in the configuration file), `sudo -Hu postgres psql -c "CREATE USER akkoma WITH ENCRYPTED PASSWORD '<database-password-which-you-can-find-in-your-configuration-file>';"` `sudo -Hu postgres psql -c "CREATE DATABASE akkoma OWNER akkoma;"`.
7. Now restore the Akkoma instance's data into the empty database schema[¹]: `sudo -Hu postgres pg_restore -d akkoma -v -1 </path/to/backup_location/akkoma.pgdump>`
8. If you installed a newer Akkoma version, you should run the database migrations `./bin/pleroma_ctl migrate`[²].
9. Restart the Akkoma service.
10. Run `sudo -Hu postgres vacuumdb --all --analyze-in-stages`. This will quickly generate the statistics so that postgres can properly plan queries.
11. If setting up on a new server, configure Nginx by using the `installation/nginx/akkoma.nginx` configuration sample or reference the Akkoma installation guide which contains the Nginx configuration instructions.
[¹]: We assume the database name and user are both "akkoma". If not, you can find the correct name in your configuration files.
[²]: If you have a from source installation, the command is `MIX_ENV=prod mix ecto.migrate`. Note that we prefix with `MIX_ENV=prod` to use the `config/prod.secret.exs` configuration file.
## Remove
8. Remove the dependencies that you don't need anymore (see installation guide). Make sure you don't remove packages that are still needed for other software that you have running!
[¹]: We assume the database name and user are both "akkoma". If not, you can find the correct name in your config files.
## Docker installations
If running behind Docker, you need to run the above commands inside a running database container.
### Example
Running `docker compose run --rm db pg_dump <...>` will fail and return:
```
pg_dump: error: connection to server on socket "/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
Is the server running locally and accepting connections on that socket?
```
However, first starting just the database container with `docker compose up db -d`, and then running `docker compose exec db pg_dump -d akkoma --format=custom -f </your/backup/dir/akkoma.pgdump>` will successfully generate a database dump.
Then to make the file accessible on the host system you can run `docker compose cp db:</your/backup/dir/akkoma.pgdump> </your/target/location>` to copy it from the container.
* `local_bubble`: Array of domains representing instances closely related to yours. Used to populate the `bubble` timeline. e.g. `["example.com"]` (default: `[]`)
* `languages`: List of Language Codes used by the instance. This is used to try and set a default language from the frontend. It will try and find the first match between the languages set here and the user's browser languages. It will default to the first language in this setting if there is no match. (default `["en"]`)
* `export_prometheus_metrics`: Enable prometheus metrics, served at `/api/v1/akkoma/metrics`, requiring the `admin:metrics` oauth scope.
* `privileged_staff`: Set to `true` to give moderators access to a few higher responsibility actions.
* `federated_timeline_available`: Set to `false` to remove access to the federated timeline for all users.
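A hedged sketch combining the options above (all values are placeholders, not recommendations):
```elixir
config :pleroma, :instance,
  local_bubble: ["example.com"],
  languages: ["en"],
  export_prometheus_metrics: false,
  privileged_staff: false,
  federated_timeline_available: true
```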
## :database
* `improved_hashtag_timeline`: Setting to force toggle / force disable improved hashtags timeline. `:enabled` forces hashtags to be fetched from `hashtags` table for hashtags timeline. `:disabled` forces object-embedded hashtags to be used (slower). Keep it `:auto` for automatic behaviour (it is auto-set to `:enabled` [unless overridden] when HashtagsTableMigrator completes).
## Message rewrite facility
### :mrf
* `transparency`: Make the content of your Message Rewrite Facility settings public (via nodeinfo).
* `transparency_exclusions`: Exclude specific instance names from MRF transparency. The use of the exclusions feature will be disclosed in nodeinfo as a boolean value.
* `transparency_obfuscate_domains`: Show domains with `*` in the middle, to censor them if needed. For example, `ridingho.me` will show as `rid*****.me`
* `policies`: Message Rewrite Policy, either one or a list. Here are the ones available by default:
* `Pleroma.Web.ActivityPub.MRF.DropPolicy`: Drops all activities. It generally doesn’t make sense to use in production.
* `Pleroma.Web.ActivityPub.MRF.ActivityExpirationPolicy`: Sets a default expiration on all posts made by users of the local instance. Requires `Pleroma.Workers.PurgeExpiredActivity` to be enabled for processing the scheduled deletions.
(See [`:mrf_activity_expiration`](#mrf_activity_expiration))
* `Pleroma.Web.ActivityPub.MRF.AntiFollowbotPolicy`: Drops follow requests from followbots. Users can still allow bots to follow them by first following the bot.
* `Pleroma.Web.ActivityPub.MRF.AntiLinkSpamPolicy`: Rejects posts from likely spambots by rejecting posts from new users that contain links.
* `Pleroma.Web.ActivityPub.MRF.EnsureRePrepended`: Rewrites posts to ensure that replies to posts with subjects do not have an identical subject and instead begin with re:.
* `Pleroma.Web.ActivityPub.MRF.ForceBotUnlistedPolicy`: Makes all bot posts disappear from public timelines.
* `Pleroma.Web.ActivityPub.MRF.HellthreadPolicy`: Blocks messages with too many mentions.
(See [`mrf_hellthread`](#mrf_hellthread))
* `Pleroma.Web.ActivityPub.MRF.KeywordPolicy`: Rejects or removes from the federated timeline or replaces keywords. (See [`:mrf_keyword`](#mrf_keyword)).
* `Pleroma.Web.ActivityPub.MRF.MediaProxyWarmingPolicy`: Crawls attachments using their MediaProxy URLs so that the MediaProxy cache is primed.
* `Pleroma.Web.ActivityPub.MRF.MentionPolicy`: Drops posts mentioning configurable users. (See [`:mrf_mention`](#mrf_mention)).
* `Pleroma.Web.ActivityPub.MRF.NoEmptyPolicy`: Drops local activities which have no actual content.
(e.g. no attachments and only consists of mentions)
* `Pleroma.Web.ActivityPub.MRF.NoPlaceholderTextPolicy`: Strips content placeholders from posts
(such as the dot from mastodon)
* `Pleroma.Web.ActivityPub.MRF.ObjectAgePolicy`: Rejects or delists posts based on their age when received. (See [`:mrf_object_age`](#mrf_object_age)).
* `Pleroma.Web.ActivityPub.MRF.RejectNewlyCreatedAccountNotesPolicy`: Temporarily rejects posts from users the server only recently learned about. Great for blocking spam accounts. (See [`:mrf_reject_newly_created_account_notes`](#mrf_reject_newly_created_account_notes))
* `Pleroma.Web.ActivityPub.MRF.RejectNonPublic`: Drops posts with non-public visibility settings (See [`:mrf_rejectnonpublic`](#mrf_rejectnonpublic)).
* `Pleroma.Web.ActivityPub.MRF.SimplePolicy`: Restrict the visibility of activities from certain instances (See [`:mrf_simple`](#mrf_simple)).
* `Pleroma.Web.ActivityPub.MRF.StealEmojiPolicy`: Steals all eligible emoji encountered in posts from remote instances
(See [`:mrf_steal_emoji`](#mrf_steal_emoji))
* `Pleroma.Web.ActivityPub.MRF.SubchainPolicy`: Selectively runs other MRF policies when messages match (See [`:mrf_subchain`](#mrf_subchain)).
* `Pleroma.Web.ActivityPub.MRF.TagPolicy`: Applies policies to individual users based on tags, which can be set using pleroma-fe/admin-fe/any other app that supports the Pleroma Admin API. For example, it allows marking posts from individual users as NSFW (sensitive).
* `Pleroma.Web.ActivityPub.MRF.UserAllowListPolicy`: Drops all posts except from users specified in a list.
(See [`:mrf_user_allowlist`](#mrf_user_allowlist))
* `Pleroma.Web.ActivityPub.MRF.VocabularyPolicy`: Restricts activities to a configured set of vocabulary. (See [`:mrf_vocabulary`](#mrf_vocabulary)).
Additionally the following MRFs will *always* be applied and cannot be disabled:
* `Pleroma.Web.ActivityPub.MRF.DirectMessageDisabledPolicy`: Strips users limiting who can send them DMs from the recipients of non-eligible DMs
* `Pleroma.Web.ActivityPub.MRF.HashtagPolicy`: Depending on a post’s hashtags it can be rejected, get its sensitive flags force-enabled or removed from the global timeline
(See [`:mrf_hashtag`](#mrf_hashtag))
* `Pleroma.Web.ActivityPub.MRF.InlineQuotePolicy`: Append a link to a post that quotes another post with the link to the quoted post, to ensure that software that does not understand quotes can have full context.
(See [`:mrf_inline_quote`](#mrf_inline_quote))
* `Pleroma.Web.ActivityPub.MRF.NormalizeMarkup`: Pass inbound HTML through a scrubber to make sure it doesn't have anything unusual in it.
(See [`:mrf_normalize_markup`](#mrf_normalize_markup))
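As a sketch, enabling a couple of the optional policies above together with the transparency settings could look like this (the policy selection is purely illustrative):
```elixir
config :pleroma, :mrf,
  policies: [
    Pleroma.Web.ActivityPub.MRF.SimplePolicy,
    Pleroma.Web.ActivityPub.MRF.ObjectAgePolicy
  ],
  transparency: true,
  transparency_exclusions: []
```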
## Federation
### :activitypub
* `unfollow_blocked`: Whether blocks result in people getting unfollowed
* `outgoing_blocks`: Whether to federate blocks to other instances
* `blockers_visible`: Whether a user can see the posts of users who blocked them
* `deny_follow_blocked`: Whether to disallow following an account that has blocked the user in question
* `sign_object_fetches`: Sign object fetches with HTTP signatures
* `authorized_fetch_mode`: Require HTTP signatures for AP fetches
* `max_collection_objects`: The maximum number of objects to fetch from a remote AP collection.
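A minimal sketch of this section (the values shown are illustrative, not necessarily the defaults):
```elixir
config :pleroma, :activitypub,
  unfollow_blocked: true,
  outgoing_blocks: false,
  blockers_visible: false,
  deny_follow_blocked: true,
  sign_object_fetches: true,
  authorized_fetch_mode: false,
  max_collection_objects: 50
```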
### MRF policies
!!! note
* `report_removal`: List of instances to reject reports from and the reason for doing so.
* `avatar_removal`: List of instances to strip avatars from and the reason for doing so.
* `banner_removal`: List of instances to strip banners from and the reason for doing so.
* `background_removal`: List of instances to strip user backgrounds from and the reason for doing so.
* `reject_deletes`: List of instances to reject deletions from and the reason for doing so.
* `scrub_policy`: the scrubbing module to use (by default a built-in HTML sanitiser)
## Pleroma.User
### :frontend_configurations
This can be used to configure a keyword list that keeps the configuration data for any kind of frontend. By default, settings for `pleroma_fe` and `masto_fe` are configured. You can find the documentation for `pleroma_fe` configuration in [Akkoma-FE configuration and customization for instance administrators](https://docs-fe.akkoma.dev/stable/CONFIGURATION/#options).
Frontends can access these settings at `/api/v1/pleroma/frontend_configurations`
To add your own configuration for Akkoma-FE, use it like this:
```elixir
config :pleroma, :frontend_configurations,
* `:primary` - The frontend that will be served at `/`
* `:admin` - The frontend that will be served at `/pleroma/admin`
* `:swagger` - Config for developers to act as an API reference to be served at `/pleroma/swaggerui/` (trailing slash _needed_). Disabled by default.
* `:mastodon` - The mastodon-fe configuration. This shouldn't need to be changed. This is served at `/web` when installed.
### :static\_fe
## :media_proxy
* `enabled`: Enables proxying of remote media to the instance’s proxy
* `base_url`: The base URL to access a user-uploaded file.
Using a (sub)domain distinct from the instance endpoint is **strongly** recommended.
* `proxy_opts`: All options defined in `Pleroma.ReverseProxy` documentation, defaults to `[max_body_length: (25*1_048_576)]`.
* `whitelist`: List of hosts with scheme to bypass the mediaproxy (e.g. `https://example.com`)
* `invalidation`: options for removing media from the cache after an object is deleted:
* `uploader`: Which one of the [uploaders](#uploaders) to use.
* `filters`: List of [upload filters](#upload-filters) to use.
* `link_name`: When enabled Akkoma will add a `name` parameter to the url of the upload, for example `https://instance.tld/media/corndog.png?name=corndog.png`. This is needed to provide the correct filename in Content-Disposition headers
* `base_url`: The base URL to access a user-uploaded file; MUST be configured explicitly.
Using a (sub)domain distinct from the instance endpoint is **strongly** recommended. A good value might be `https://media.myakkoma.instance/media/`.
* `proxy_opts`: Proxy options, see `Pleroma.ReverseProxy` documentation.
* `filename_display_max_length`: Set max length of a filename to display. 0 = no limit. Default: 30.
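Putting a few of these options together, a hedged example (the domain, uploader and filter list are placeholders):
```elixir
config :pleroma, Pleroma.Upload,
  uploader: Pleroma.Uploaders.Local,
  filters: [Pleroma.Upload.Filter.Exiftool.StripMetadata],
  link_name: false,
  base_url: "https://media.myakkoma.instance/media/",
  filename_display_max_length: 30
```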
### Upload filters
#### Pleroma.Upload.Filter.Dedupe
**Always** active; cannot be turned off.
Renames files to their hash and prevents duplicate files filling up the disk.
No specific configuration.
#### Pleroma.Upload.Filter.AnonymizeFilename
This filter replaces the declared filename (not the path) of an upload.
* `text`: Text to replace filenames in links. If empty, `{random}.extension` will be used. You can get the original filename extension by using `{extension}`, for example `custom-file-name.{extension}`.
#### Pleroma.Upload.Filter.Exiftool.StripMetadata
This filter strips metadata with Exiftool, leaving color profiles and orientation intact.
* `purge`: List of Exiftool tag names or tag group names to purge
* `preserve`: List of Exiftool tag names or tag group names to preserve even if they occur in the purge list
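A sketch using the tag groups suggested in the config description above (what you actually want to purge or preserve depends on your needs):
```elixir
config :pleroma, Pleroma.Upload.Filter.Exiftool.StripMetadata,
  purge: ["all"],
  preserve: ["ColorSpaces", "Orientation"]
```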
This would serve the frontend from the folder at `$instance_static/frontends/pleroma/stable`. You have to copy the frontend into this folder yourself. You can choose the name and ref any way you like, but they will be used by mix tasks to automate installation in the future, the name referring to the project and the ref referring to a commit.
Refer to [the frontend CLI task](../../administration/CLI_tasks/frontend) for how to install the frontend's files
Then run the [pleroma.frontend cli task](../../administration/CLI_tasks/frontend) with the name of `swagger-ui` to install the distribution files.
You will now be able to view documentation at `/pleroma/swaggerui`
This sets the `secure` flag on Akkoma’s session cookie. This makes sure that the cookie is only accepted over encrypted HTTPS connections. This implicitly renames the cookie from `pleroma_key` to `__Host-pleroma-key` which enforces some restrictions. (see [cookie prefixes](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie#Cookie_prefixes))
### `Pleroma.Upload, :uploader, :base_url`
> Recommended value: *anything on a different domain than the instance endpoint; e.g. https://media.myinstance.net/*
Uploads are user controlled and (unless you’re running a true single-user
instance) should therefore not be considered trusted. But the domain is used
as a privilege boundary e.g. by HTTP content security policy and ActivityPub.
Having uploads on the same domain enabled several past vulnerabilities.
If you came here from one of the installation guides, take a look at the example configuration `/installation/nginx/akkoma.nginx`, where this part is already included.
* Append the following to your `prod.secret.exs` or `dev.secret.exs` (depends on which mode your instance is running):
```elixir
# Replace media.example.tld with the subdomain you set up earlier
config :pleroma, :media_proxy,
  enabled: true,
  proxy_opts: [
    redirect_on_failure: true
  ],
  base_url: "https://media.example.tld"
```
You **really** should use a subdomain to serve proxied files; while we will fix bugs resulting from this, serving arbitrary remote content on your main domain namespace is a significant attack surface.
### Create your own theme
* You can create your own theme using the Akkoma FE by going to settings (gear on the top right) and choosing the Theme tab. Here you have the options to create a personal theme.
* To download your theme, use Save preset
* If you want to upload a theme to customise it further, you can upload it using Load preset
### Set as default theme
Now we can set the new theme as default in the [Pleroma FE configuration](https://docs-fe.akkoma.dev/stable/CONFIGURATION/).
Example of adding the new theme in the back-end config files
* `media_removal`: Servers in this group will have media stripped from incoming messages.
* `avatar_removal`: Avatars from these servers will be stripped from incoming messages.
* `banner_removal`: Banner images from these servers will be stripped from incoming messages.
* `background_removal`: User background images from these servers will be stripped from incoming messages.
* `report_removal`: Servers in this group will have their reports (flags) rejected.
* `federated_timeline_removal`: Servers in this group will have their messages unlisted from the public timelines by flipping the `to` and `cc` fields.
* `reject_deletes`: Deletion requests will be rejected from these servers.
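As a sketch, stripping media and backgrounds from one host while rejecting reports from another might look like this (hosts and reasons are placeholders):
```elixir
config :pleroma, :mrf_simple,
  media_removal: [{"illegalcontent.example", "hosts harmful media"}],
  background_removal: [{"illegalcontent.example", "hosts harmful media"}],
  report_removal: [{"spam.example", "spams reports"}]
```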
The effects of MRF policies can be very drastic. It is important to use this functionality carefully. Always try to talk to an admin before writing an MRF policy concerning their instance.
## Hiding or Obfuscating Policies
You can opt out of publicly displaying all MRF policies or only hide or obfuscate selected domains, e.g.:
```elixir
config :pleroma, :mrf,
  transparency_exclusions: [{"ghost.club", "even a fragment is too spoopy for humans"}]
```
## More MRF Policies
See the [documentation cheatsheet](cheatsheet.md)
for all available MRF policies and their options.
## Writing your own MRF Policy
As discussed above, the MRF system is a modular system that supports pluggable policies. This means that an admin may write a custom MRF policy in Elixir or any other language that runs on the Erlang VM, by specifying the module name in the `policies` config setting.
If using an OTP release, set the `RELEASE_VM_ARGS` environment variable to the path to the vm.args file.
Check your OS documentation to adopt a similar strategy on other platforms.
### Virtual Machine and/or few CPU cores
Disable the busy-waiting. This should generally be done if you're on a platform that does burst scheduling, like AWS, or if you're running other
services on the same machine.
**vm.args:**
+sbwtdio none
```
These settings are enabled by default for OTP releases
### Dedicated Hardware
Enable more busy waiting, increase the internal maximum limit of BEAM processes and ports. You can use this if you run on dedicated hardware, but it is not necessary.
## PGTune
[PgTune](https://pgtune.leopard.in.ua) can be used to get recommended settings. Make sure to set the DB type to "Online transaction processing system" for optimal performance. Also set the number of connections to between 25 and 30. This will allow each connection to have access to more resources while still leaving some room for running maintenance tasks while the instance is still running.
It is also recommended to not use the "Network Storage" option.
If your server runs other services, you may want to take that into account. E.g. if you have 4G ram, but 1G of it is already used for other services, it may be better to tell PGTune you only have 3G.
In the end, PGTune only provides recommended settings, you can always try to finetune further.
## Disable generic query plans
When PostgreSQL receives a query, it decides on a strategy for searching the requested data, this is called a query plan. The query planner has two modes: generic and custom. Generic makes a plan for all queries of the same shape, ignoring the parameters, which is then cached and reused. Custom, on the contrary, generates a unique query plan based on query parameters.
By default PostgreSQL has an algorithm to decide which mode is more efficient for a particular query, however this algorithm has been observed to be wrong on some of the queries Akkoma sends, leading to serious performance loss. Therefore, it is recommended to disable generic mode.
Akkoma already avoids generic query plans by default, however the method it uses is not the most efficient because it needs to be compatible with all supported PostgreSQL versions. For PostgreSQL 12 and higher additional performance can be gained by adding the following to Akkoma configuration:
```elixir
config :pleroma, Pleroma.Repo,
prepare: :named,
parameters: [
plan_cache_mode: "force_custom_plan"
]
```
A more detailed explanation of the issue can be found at <https://blog.soykaf.com/post/postgresql-elixir-troubles/>.
## Nginx
The following are excerpts from the [suggested nginx config](https://akkoma.dev/AkkomaGang/akkoma/src/branch/develop/installation/nginx/akkoma.nginx) that demonstrate the necessary config for the media proxy to work.
A `proxy_cache_path` must be defined, for example:
# Differences in Mastodon API responses from vanilla Mastodon
An Akkoma instance can be identified by "<Mastodonversion> (compatible; Akkoma <version>)" present in the `version` field of the response from `/api/v1/instance`
## Flake IDs
## Timelines
In addition to Mastodon’s timelines, there is also a “bubble timeline” showing
posts from the local instance and a set of closely related instances as chosen
by the administrator. It is available under `/api/v1/timelines/bubble`.
Adding the parameter `with_muted=true` to the timeline queries will also return activities by muted (not by blocked!) users.
Adding the parameter `exclude_visibilities` to the timeline queries will exclude the statuses with the given visibilities. The parameter accepts an array of visibility types (`public`, `unlisted`, `private`, `direct`), e.g., `exclude_visibilities[]=direct&exclude_visibilities[]=private`.
Adding the parameter `reply_visibility` to the public, bubble or home timelines queries will filter replies. Possible values: without parameter (default) shows all replies, `following` - replies directed to you or users you follow, `self` - replies directed to you.
Adding the parameter `instance=lain.com` to the public timeline will show only statuses originating from `lain.com` (or any remote instance).
All but the direct timeline accept these parameters:
- `only_media`: show only statuses with media attached
- `remote`: show only remote statuses
Home, public, hashtag & list timelines further accept:
- `local`: show only local statuses
## Statuses
- `visibility`: has additional possible values `list` and `local` (for local-only statuses)
- `emoji_reactions`: additional field since Akkoma 3.2.0; identical to `pleroma/emoji_reactions`
Has these additional fields under the `pleroma` object:
- `spoiler_text`: a map consisting of alternate representations of the `spoiler_text` property with the key being its mimetype. Currently, the only alternate representation supported is `text/plain`
- `expires_at`: a datetime (iso8601) that states when the post will expire (be deleted automatically), or empty if the post won't expire
- `thread_muted`: true if the thread the post belongs to is muted
- `emoji_reactions`: A list with emoji / reaction maps. The format is `{name: "☕", count: 2, me: true, account_ids: ["UserID1", "UserID2"]}`.
The `account_ids` property was added in Akkoma 3.2.0.
Further info about all reacting users at once can be found using the `/statuses/:id/reactions` endpoint.
- `parent_visible`: If the parent of this post is visible to the user or not.
- `pinned_at`: a datetime (iso8601) when status was pinned, `null` otherwise.
- `notification_settings`: object, can be absent. See `/api/v1/pleroma/notification_settings` for the parameters/keys returned.
- `favicon`: nullable URL string, Favicon image of the user's instance
Has these additional fields under the `akkoma` object:
- `instance`: nullable object with metadata about the user’s instance
- `status_ttl_days`: nullable int, default time after which statuses are deleted
- `permit_followback`: boolean, whether follows from followed accounts are auto-approved
### Source
Has these additional fields under the `pleroma` object:
The maximum number of statuses is limited to 100 per request.
## PUT `/api/v1/statuses/:id/emoji_reactions/:emoji`
This endpoint is an extension of the Fedibird Mastodon fork.
It behaves identically to PUT `/api/v1/pleroma/statuses/:id/reactions/:emoji`.
## PATCH `/api/v1/accounts/update_credentials`
Additional parameters can be added to the JSON body/Form data:
Inspired by <https://www.w3.org/wiki/SocialCG/ActivityPub/MediaUpload>, it is part of the ActivityStreams namespace because it used to be part of the ActivityPub specification and got removed from it.
Akkoma is a federated social networking platform, compatible with Mastodon and other ActivityPub implementations. It is free software licensed under the AGPLv3.
Akkoma is a federated social networking platform, compatible with Mastodon and other ActivityPub implementations. It is free software licensed under the AGPLv3.
It actually consists of two components: a backend, named simply Akkoma, and a user-facing frontend, named Pleroma-FE. It also includes the Mastodon frontend, if that's your thing.
It actually consists of two components: a backend, named simply Akkoma, and a user-facing frontend, named Akkoma-FE. It also includes the Mastodon frontend, if that's your thing.
It's part of what we call the fediverse, a federated network of instances which speak common protocols and can communicate with each other.
It's part of what we call the fediverse, a federated network of instances which speak common protocols and can communicate with each other.
One account on an instance is enough to talk to the entire fediverse!
One account on an instance is enough to talk to the entire fediverse!
@@ -31,11 +31,11 @@ Installation instructions can be found in the installation section of these docs
## I got an account, now what?
## I got an account, now what?
Great! Now you can explore the fediverse! Open the login page for your Akkoma instance (e.g. <https://otp.akkoma.dev>) and login with your username and password. (If you don't have an account yet, click on Register)
Great! Now you can explore the fediverse! Open the login page for your Akkoma instance (e.g. <https://otp.akkoma.dev>) and login with your username and password. (If you don't have an account yet, click on Register)
### Pleroma-FE
### Akkoma-FE
The default front-end used by Akkoma is Pleroma-FE. You can find more information on what it is and how to use it in the [Introduction to Pleroma-FE](https://docs-fe.akkoma.dev/stable/).
The default front-end used by Akkoma is Akkoma-FE. You can find more information on what it is and how to use it in the [Introduction to Akkoma-FE](https://docs-fe.akkoma.dev/stable/).
### Mastodon interface
### Mastodon interface
If the Pleroma-FE interface isn't your thing, or you're just trying something new but you want to keep using the familiar Mastodon interface, we got that too!
If the Akkoma-FE interface isn't your thing, or you're just trying something new but you want to keep using the familiar Mastodon interface, we got that too!
Just add a "/web" after your instance url (e.g. <https://otp.akkoma.dev/web>) and you'll end on the Mastodon web interface, but with a Akkoma backend! MAGIC!
Just add a "/web" after your instance url (e.g. <https://otp.akkoma.dev/web>) and you'll end on the Mastodon web interface, but with a Akkoma backend! MAGIC!
The Mastodon interface is from the Glitch-soc fork. For more information on the Mastodon interface you can check the [Mastodon](https://docs.joinmastodon.org/) and [Glitch-soc](https://glitch-soc.github.io/docs/) documentation.
The Mastodon interface is from the Glitch-soc fork. For more information on the Mastodon interface you can check the [Mastodon](https://docs.joinmastodon.org/) and [Glitch-soc](https://glitch-soc.github.io/docs/) documentation.
@@ -145,47 +145,13 @@ If you want to open your newly installed instance to the world, you should run n
doas apk add nginx
doas apk add nginx
```
```
* Setup your SSL cert, using your method of choice or certbot. If using certbot, first install it:
```shell
doas apk add certbot
```
and then set it up:
```shell
doas mkdir -p /var/lib/letsencrypt/
doas certbot certonly --email <your@emailaddress> -d <yourdomain> --standalone
```
If that doesn’t work, make sure that nginx is not already running. If it still doesn’t work, try setting up nginx first (change ssl “on” to “off” and try again).
* Copy the example nginx configuration to the nginx folder
* Copy the example nginx configuration to the nginx folder
```shell
```shell
doas cp /opt/akkoma/installation/nginx/akkoma.nginx /etc/nginx/conf.d/akkoma.conf
doas cp /opt/akkoma/installation/nginx/akkoma.nginx /etc/nginx/conf.d/akkoma.conf
```
```
* Before starting nginx, edit the configuration and change it to your needs. You must change `server_name` and the paths to the certificates. You can use `nano` (install with `apk add nano` if missing).
* Before starting nginx, edit the configuration and change it to your needs. You must change `server_name`. You can use `nano` (install with `apk add nano` if missing).
If you need to renew the certificate in the future, uncomment the relevant location block in the nginx config and run:
* Setup your SSL cert, using your method of choice or certbot. If using certbot, first install it:
```shell
```shell
doas certbot certonly --email <your@emailaddress> -d <yourdomain> --webroot -w /var/lib/letsencrypt/
doas apk add certbot certbot-nginx
```
and then set it up:
```shell
doas mkdir -p /var/lib/letsencrypt/
doas certbot --email <your@emailaddress> -d <yourdomain> -d <media_domain> --nginx
```
If that doesn't work the first time, add `--dry-run` to further attempts to avoid being rate-limited as you identify the issue, and do not remove it until the dry run succeeds. A common source of problems is nginx config syntax errors; check for these by running `nginx -t`.
To automatically renew, set up a cron job like so:
```shell
# Enable the crond service
doas rc-update add crond
doas rc-service crond start
# Test that renewals work
doas certbot renew --cert-name yourinstance.tld --nginx --dry-run
```
If that doesn’t work, make sure that nginx is not already running. If it still doesn’t work, try setting up nginx first (change ssl “on” to “off” and try again).
If that doesn't work the first time, add `--dry-run` to further attempts to avoid being rate-limited as you identify the issue, and do not remove it until the dry run succeeds. A common source of problems is nginx config syntax errors; check for these by running `nginx -t`.
---
To make sure renewals work, enable the appropriate systemd timer:
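A sketch, assuming Debian's certbot packaging, whose timer is named `certbot.timer` (other packagings may call it `certbot-renew.timer`):
```shell
# Enable the renewal timer and confirm it is scheduled
sudo systemctl enable --now certbot.timer
systemctl list-timers | grep certbot
```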
* Copy the example nginx configuration and activate it:
This guide will assume you are on Debian 11 (“bullseye”) or later. This guide should also work with Ubuntu 18.04 (“Bionic Beaver”) and later. It also assumes that you have administrative rights, either as root or a user with [sudo permissions](https://www.digitalocean.com/community/tutorials/how-to-add-delete-and-grant-sudo-privileges-to-users-on-a-debian-vps). If you want to run this guide with root, ignore the `sudo` at the beginning of the lines, unless it calls a user like `sudo -Hu akkoma`; in this case, use `su <username> -s $SHELL -c 'command'` instead.
This guide will assume you are on Debian 12 (“bookworm”) or later. This guide should also work with Ubuntu 22.04 (“Jammy Jellyfish”) and later. It also assumes that you have administrative rights, either as root or a user with [sudo permissions](https://www.digitalocean.com/community/tutorials/how-to-add-delete-and-grant-sudo-privileges-to-users-on-a-debian-vps). If you want to run this guide with root, ignore the `sudo` at the beginning of the lines, unless it calls a user like `sudo -Hu akkoma`; in this case, use `su <username> -s $SHELL -c 'command'` instead.
**Note**: To execute a single command as the Akkoma system user, use `sudo -Hu akkoma command`. You can also switch to a shell by using `sudo -Hu akkoma $SHELL`. If you don’t have (and don’t want) `sudo` on your system, you can, as the root user (UID 0), run a single command with `su -l akkoma -s $SHELL -c 'command'` and start a shell with `su -l akkoma -s $SHELL`.
**Note**: To execute a single command as the Akkoma system user, use `sudo -Hu akkoma command`. You can also switch to a shell by using `sudo -Hu akkoma $SHELL`. If you don’t have (and don’t want) `sudo` on your system, you can, as the root user (UID 0), run a single command with `su -l akkoma -s $SHELL -c 'command'` and start a shell with `su -l akkoma -s $SHELL`.
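For illustration only, both of the following should print `akkoma` (run the second variant as root); the user name is the one created earlier in this guide:
```shell
# Run a single command as the akkoma system user
sudo -Hu akkoma whoami
# Equivalent without sudo, executed as root
su -l akkoma -s $SHELL -c 'whoami'
```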
* Git clone the AkkomaBE repository from the stable branch and make the Akkoma user the owner of the directory:
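A sketch of what this step usually looks like; the `/opt/akkoma` path, the repository URL and the `stable` branch are assumptions that may differ for your setup:
```shell
# Create the target directory, hand it to the akkoma user, then clone as that user
sudo mkdir -p /opt/akkoma
sudo chown -R akkoma:akkoma /opt/akkoma
sudo -Hu akkoma git clone -b stable https://akkoma.dev/AkkomaGang/akkoma.git /opt/akkoma
```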
### Install Elixir and Erlang
If your distribution packages a recent enough version of Elixir, you can install it directly from the distro repositories and skip to the next section of the guide:
```shell
sudo apt install elixir erlang-dev erlang-nox
```
Otherwise use [asdf](https://github.com/asdf-vm/asdf) to install the latest versions of Elixir and Erlang.
First, install some dependencies needed to build Elixir and Erlang:
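A rough sketch of this step; the exact dependency list is an assumption and may need adjusting for your release:
```shell
# Typical packages needed to compile Erlang/Elixir via asdf
sudo apt install curl git unzip build-essential autoconf m4 libncurses-dev libssl-dev
# Afterwards, add the asdf plugins (run as the user that will build Akkoma)
asdf plugin add erlang
asdf plugin add elixir
```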
If that doesn’t work, make sure that nginx is not already running. If it still doesn’t work, try setting up nginx first (change ssl “on” to “off” and try again).
---
* Copy the example nginx configuration and activate it:
* Copy the example nginx configuration and activate it:
If that doesn't work the first time, add `--dry-run` to further attempts to avoid being rate-limited as you identify the issue, and do not remove it until the dry run succeeds. A common source of problems is nginx config syntax errors; check for these by running `nginx -t`.
Certificate renewal should be handled automatically by Certbot from now on.
#### Other webserver/proxies
#### Other webserver/proxies
You can find example configurations for them in `/opt/akkoma/installation/`.
You can find example configurations for them in `/opt/akkoma/installation/`.
If that doesn’t work, make sure that nginx is not already running. If it still doesn’t work, try setting up nginx first (change ssl “on” to “off” and try again).
---
* Copy the example nginx configuration and activate it:
* Copy the example nginx configuration and activate it:
If that doesn't work the first time, add `--dry-run` to further attempts to avoid being rate-limited as you identify the issue, and do not remove it until the dry run succeeds. A common source of problems is nginx config syntax errors; check for these by running `nginx -t`.
Certificate renewal should be handled automatically by Certbot from now on.
#### Other webserver/proxies
#### Other webserver/proxies
You can find example configurations for them in `/opt/akkoma/installation/`.
You can find example configurations for them in `/opt/akkoma/installation/`.
If that doesn't work the first time, add `--dry-run` to further attempts to avoid being rate-limited as you identify the issue, and do not remove it until the dry run succeeds. If it still fails, make sure that nginx is not already running; if it still doesn’t work, try setting up nginx first (change ssl “on” to “off” and try again). Often the answer to issues with certbot is to use the `--nginx` flag once you have nginx up and running.
If you are using any additional subdomains, such as for a media proxy, you can re-run the same command with the subdomain in question. When it comes time to renew later, you will not need to run it separately for each domain; a single renewal will handle them all.
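A sketch of such a re-run; `<media_domain>` is a placeholder, and whether you prefix the command with `sudo` or `doas` depends on your system:
```shell
# Request a certificate for an additional (e.g. media proxy) subdomain
certbot --nginx --email <your@emailaddress> -d <media_domain>
```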
---
* Copy the example nginx configuration and activate it:
* Copy the example nginx configuration and activate it:
```shell
```shell
@@ -237,9 +218,24 @@ Pay special attention to the line that begins with `ssl_ecdh_curve`. It is strongly
```shell
```shell
# rc-update add nginx default
# rc-update add nginx default
# /etc/init.d/nginx start
# rc-service nginx start
```
```
* Setup your SSL cert, using your method of choice or certbot. If using certbot, install it if you haven't already:
If that doesn't work the first time, add `--dry-run` to further attempts to avoid being rate-limited as you identify the issue, and do not remove it until the dry run succeeds. A common source of problems is nginx config syntax errors; check for these by running `nginx -t`.
If you are using certbot, it is HIGHLY recommended that you set up a cron job that renews your certificate and that you install the suggested `certbot-nginx` plugin. If you don't do these things, you only have yourself to blame when your instance suddenly breaks because you forgot about it.
If you are using certbot, it is HIGHLY recommended that you set up a cron job that renews your certificate and that you install the suggested `certbot-nginx` plugin. If you don't do these things, you only have yourself to blame when your instance suddenly breaks because you forgot about it.
First, ensure that the command you will be installing into your crontab works.
First, ensure that the command you will be installing into your crontab works.
Akkoma comes with a few frontend changes as well as backend ones,
Akkoma comes with a few frontend changes as well as backend ones,
@@ -118,3 +165,16 @@ To fix this, run:
```
```
which will remove the config from the database. Things should work now.
which will remove the config from the database. Things should work now.
## Migrating back to Pleroma
Akkoma is a hard fork of Pleroma. As such, migrating back is not guaranteed to always work. But if you want to migrate back to Pleroma, you can always try. Just note that you may run into unexpected issues and you're basically on your own. The following are some tips that may help, but note that these are barely tested, so proceed at your own risk.
First you will need to roll back the database migrations. The latest migration both Akkoma and Pleroma still have in common should be 20210416051708, so roll back to that. If you run from source, that should be an Ecto rollback to that migration.
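A minimal sketch of that rollback, assuming a source install run as the Akkoma user from the Akkoma directory:
```shell
# Roll the database back to the last migration shared with Pleroma
MIX_ENV=prod mix ecto.rollback --to 20210416051708
```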
Then switch back to Pleroma for updates (similar to how it was done when migrating to Akkoma), and remove the front-ends. The front-ends are installed in the `frontends` folder in the [static directory](../configuration/static_dir.md). Once you are back to Pleroma, you will need to run the database migrations again. See the Pleroma documentation for this.
After this, use your previous backups to restore data for features that have since diverged.
This guide covers an installation using an OTP release. To install Akkoma from source, please check out the corresponding guide for your distro.
This guide covers an installation using an OTP release. To install Akkoma from source, please check out the corresponding guide for your distro.
## Pre-requisites
## Pre-requisites
* A machine running Linux with GNU (e.g. Debian, Ubuntu) or musl (e.g. Alpine) libc and an `x86_64` CPU you have root access to. If you are not sure if it's compatible see [Detecting flavour section](#detecting-flavour) below
* A machine running Linux with GNU (e.g. Debian, Ubuntu) or musl (e.g. Alpine) libc and an `x86_64` or `arm64` CPU you have root access to. If you are not sure if it's compatible see [Detecting flavour section](#detecting-flavour) below
* For installing OTP releases on RedHat-based distros like Fedora and Centos Stream, please follow [this guide](./otp_redhat_en.md) instead.
* For installing OTP releases on RedHat-based distros like Fedora and Centos Stream, please follow [this guide](./otp_redhat_en.md) instead.
* A (sub)domain pointed to the machine
* A (sub)domain pointed to the machine
You will be running commands as root. If you aren't root already, please elevate your privileges by executing `sudo su`/`su`.
You will be running commands as root. If you aren't root already, please elevate your privileges by executing `sudo -i`/`su`.
While in theory OTP releases can be installed on any compatible machine, for the sake of simplicity this guide focuses only on Debian/Ubuntu and Alpine.
While in theory OTP releases can be installed on any compatible machine, for the sake of simplicity this guide focuses only on Debian/Ubuntu and Alpine.
### Detecting flavour
### Detecting flavour
This is a little more complex than it used to be (thanks, Ubuntu).
Use the following mapping to figure out your flavour:
Use the following mapping to figure out your flavour:
| distribution | flavour | available branches |
| distribution | architecture | flavour | available branches |
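As a rough aid (an assumption on my part, not a replacement for the table), you can query the CPU architecture and libc flavour like this:
```shell
# x86_64 vs aarch64/arm64
uname -m
# Mentions "musl" on Alpine; GNU libc/GLIBC otherwise
ldd --version 2>&1 | head -n 1
```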
If your distro does not have either of those you can append `include /etc/nginx/akkoma.conf` to the end of the http section in /etc/nginx/nginx.conf and
If your distro does not have either of those you can append `include /etc/nginx/akkoma.conf` to the end of the http section in /etc/nginx/nginx.conf and
If that doesn't work the first time, add `--dry-run` to further attempts to avoid being rate-limited as you identify the issue, and do not remove it until the dry run succeeds. A common source of problems is nginx config syntax errors; check for these by running `nginx -t`.
#### Start nginx
#### Start nginx
=== "Alpine"
=== "Alpine"
@@ -248,32 +255,19 @@ If everything worked, you should see Akkoma-FE when visiting your domain. If tha
## Post installation
## Post installation
### Setting up auto-renew of the Let's Encrypt certificate
### Setting up auto-renew of the Let's Encrypt certificate
```sh
# Create the directory for webroot challenges
mkdir -p /var/lib/letsencrypt
# Uncomment the webroot method
$EDITOR path-to-nginx-config
# Verify that the config is valid
nginx -t
```
=== "Alpine"
=== "Alpine"
```
```
# Restart nginx
rc-service nginx restart
# Start the cron daemon and make it start on boot
# Start the cron daemon and make it start on boot
rc-service crond start
rc-service crond start
rc-update add crond
rc-update add crond
# Ensure the webroot method and post hook are working
# Ensure the webroot method and post hook are working
If that doesn't work the first time, add `--dry-run` to further attempts to avoid being rate-limited as you identify the issue, and do not remove it until the dry run succeeds. A common source of problems is nginx config syntax errors; check for these by running `nginx -t`.
If you're successful with obtaining the certificates, opening your (sub)domain in a browser will result in a 502 error, since Akkoma hasn't been started yet.
### Setting up a system service
### Setting up a system service
@@ -239,19 +241,11 @@ sudo nginx -t
# Restart nginx
# Restart nginx
sudo systemctl restart nginx
sudo systemctl restart nginx
# Ensure the webroot method and post hook are working
# If everything worked the output should contain /etc/cron.daily/renew-akkoma-cert
sudo run-parts --test /etc/cron.daily
```
```
Assuming the commands were run successfully, certbot should be able to renew your certificates automatically via the `certbot-renew.timer` systemd unit.
; Use private /tmp and /var/tmp folders inside a new file system namespace, which are discarded after the process stops.
; Use private /tmp and /var/tmp folders inside a new file system namespace, which are discarded after the process stops.
@@ -34,6 +41,8 @@ ProtectHome=true
ProtectSystem=full
ProtectSystem=full
; Sets up a new /dev mount for the process and only adds API pseudo devices like /dev/null, /dev/zero or /dev/random but not physical devices. Disabled by default because it may not work on devices like the Raspberry Pi.
; Sets up a new /dev mount for the process and only adds API pseudo devices like /dev/null, /dev/zero or /dev/random but not physical devices. Disabled by default because it may not work on devices like the Raspberry Pi.
PrivateDevices=false
PrivateDevices=false
; Ensures that the service process and all its children can never gain new privileges through execve().
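If you do want to change one of these hardening options, a drop-in override avoids editing the shipped unit directly. This is only a sketch and assumes the service is named `akkoma`:
```shell
# Create a drop-in that flips PrivateDevices, then reload and restart
sudo mkdir -p /etc/systemd/system/akkoma.service.d
printf '[Service]\nPrivateDevices=true\n' | sudo tee /etc/systemd/system/akkoma.service.d/override.conf
sudo systemctl daemon-reload
sudo systemctl restart akkoma
```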
{"Do you want to strip location (GPS) data from uploaded images? This requires exiftool, it was detected as installed. (y/n)",
{"Do you want to strip metadata from uploaded images? This requires exiftool, it was detected as installed. (y/n)",
"y"}
"y"}
else
else
{"Do you want to strip location (GPS) data from uploaded images? This requires exiftool, it was detected as not installed, please install it if you answer yes. (y/n)",
{"Do you want to strip metadata from uploaded images? This requires exiftool, it was detected as not installed, please install it if you answer yes. (y/n)",
{"Do you want to read data from uploaded files so clients can use it to prefill fields like image description? This requires exiftool, it was detected as installed. (y/n)",
"y"}
else
{"Do you want to read data from uploaded files so clients can use it to prefill fields like image description? This requires exiftool, it was detected as not installed, please install it if you answer yes. (y/n)",
"n"}
end
read_uploads_description =
  get_option(
    options,
    :read_uploads_description,
    read_uploads_description_message,
    read_uploads_description_default
  ) === "y"
)==="y"
anonymize_uploads=
anonymize_uploads=
@ -186,14 +212,6 @@ def run(["gen" | rest]) do
"n"
"n"
)==="y"
)==="y"
dedupe_uploads =
  get_option(
    options,
    :dedupe_uploads,
    "Do you want to deduplicate uploaded files? (y/n)",