ForForkMerge #2

Merged
sliver merged 185 commits from ForForkMerge into stable 2024-03-31 06:59:36 +00:00
Owner

Merging since there were no problems.
sliver added 185 commits 2024-03-31 06:59:00 +00:00
I think it makes more sense that the emoji cache gets reloaded in Akkoma if you add or create emoji packs.
AkkomaGang/akkoma#503
Reviewed-on: AkkomaGang/akkoma#619
Currently, Akkoma sorts by published date first before everything else.
This however makes search results pretty bad since Meilisearch uses a
bucket sort algorithm in order of the ranking rules specified:
https://www.meilisearch.com/docs/learn/core_concepts/relevancy#behavior

Since the `published` attribute is a unix timestamp, the resulting
buckets are pretty small so the other rules essentially have little to
no effect on the rankings of search results.

This fixes that issue by moving the `published:desc` rule further down
so it still sorts by date, but only after considering everything else.

AFAIK attribute and sort don't really affect results for Akkoma since
the only attribute considered is the `content` attribute and the `sort`
parameter isn't used in Akkoma searches. Everything else is made to
match more closely to Meilisearch's defaults.
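For illustration, the resulting rule order amounts to Meilisearch's defaults with the date sort appended last; sketched here as an Elixir list (illustrative, not copied from the diff):

    ranking_rules = [
      "words",
      "typo",
      "proximity",
      "attribute",
      "sort",
      "exactness",
      "published:desc"
    ]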
Implements the preferences endpoint in the Mastodon API, but returns
default values for most of the preferences right now. The only supported
preference we can access is default post visibility, and a relevant test
is added as well.
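As a sketch, the JSON response would have roughly this shape (shown as an Elixir map; the key set follows the Mastodon API, and the non-visibility values are assumed static defaults):

    %{
      "posting:default:visibility" => "public",
      "posting:default:sensitive" => false,
      "posting:default:language" => nil,
      "reading:expand:media" => "default",
      "reading:expand:spoilers" => false
    }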
Reviewed-on: AkkomaGang/akkoma#625
Reviewed-on: AkkomaGang/akkoma#615
Reviewed-on: AkkomaGang/akkoma#563
Reviewed-on: AkkomaGang/akkoma#623
Reviewed-on: AkkomaGang/akkoma#627
Closes #612

Co-authored-by: tusooa <tusooa@kazv.moe>
Reviewed-on: AkkomaGang/akkoma#626
Co-authored-by: FloatingGhost <hannah@coffee-and-dreams.uk>
Co-committed-by: FloatingGhost <hannah@coffee-and-dreams.uk>
Added arm64 support for update.
Tested on Arch amd64, Debian arm64, and Alpine amd64.
This is according to the error message displayed when trying to run the
command in the current version of the docs
Add JPEG-XL, AVIF, and WebP support to the reverse proxy. All three are
supported in WebKit browsers; the latter two are supported in Gecko and
Blink.
Reviewed-on: AkkomaGang/akkoma#630
Reviewed-on: AkkomaGang/akkoma#631
Reviewed-on: AkkomaGang/akkoma#658
Reviewed-on: AkkomaGang/akkoma#634
Reviewed-on: AkkomaGang/akkoma#632
Signed-off-by: Yonle <yonle@lecturify.net>
see https://github.com/ueberauth/ueberauth/issues/194
the previous code passed a state parameter to ueberauth with info
about where to go after the user logged in, etc.
since ueberauth 0.7, this parameter is ignored and oauth state is used
for actual CSRF reasons.

we now set a cookie with the state we need to keep track of, and read
it once the callback happens.
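A minimal Plug-level sketch of the approach (cookie name, lifetime and stored value are assumptions, not the actual implementation):

    # Before redirecting to the provider: stash where to return to in a cookie.
    conn = Plug.Conn.put_resp_cookie(conn, "akkoma_oauth_state", redirect_to,
      max_age: 600, http_only: true)

    # In the ueberauth callback: read it back instead of relying on the
    # (now CSRF-only) ueberauth state parameter.
    conn = Plug.Conn.fetch_cookies(conn)
    redirect_to = conn.cookies["akkoma_oauth_state"]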
Reviewed-on: AkkomaGang/akkoma#668
Fixes AkkomaGang/akkoma#645
And point to the cheat sheet for all other MRF policies
and their configuration details.
The spec was copied from another endpoint, including the operation id,
leading to scrubbing the valid parameters from the request and simply
not working.
Reviewed-on: AkkomaGang/akkoma#676
Their functions were purged in 0f132b802d
Chats were removed in 0f132b802d
The exporter doesn't support them, so we don't lose anything by this,
but it avoids a bunch of warnings each time the server starts up.
Otherwise we get warnings on startup as local captures
and anonymous functions are supposedly less performant.
Commit e9f1897cfd added this private
function, but it never had any users, resulting in warnings on each startup.
With kilobytes the resulting numbers got too large and were cut off
in the charts, making them useless. However, even an idle Akkoma
server’s memory usage is in the lower hundreds of megabytes, so
we don’t need this much precision to begin with for the dashboard.

Other metric users might prefer base units and can handle scaling in a
smarter way, so keep this configurable.
OTP’s default SSL/TLS settings are rather restrictive
and in particular do not use system CA certs.
In our case using system CA certs is virtually always desired
and the lack of it leads to non-obvious errors. Manually configuring
system CA certs from in-database config also isn’t straightforward.

Furthermore, gen_smtp uses a different set of connection options
for direct SSL/TLS and a later TLS upgrade, providing additional
confusion and complexity in how to configure this.

Thus provide some suitable defaults for sending SMTP emails.
Everything can still be overridden by admins if necessary.

Note: defaults are not appended when validating the config
in hopes of improving the error message (as the required relay key
is already accessed to generate defaults for optional fields)

Fixes: AkkomaGang/akkoma#660
Reviewed-on: AkkomaGang/akkoma#684
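For reference, on OTP 25+ the system CA store can be read via :public_key.cacerts_get/0. A hedged sketch of an admin-side override (the tls_options key and its contents are assumptions about the gen_smtp/Swoosh options, not copied from the commit):

    config :pleroma, Pleroma.Emails.Mailer,
      enabled: true,
      adapter: Swoosh.Adapters.SMTP,
      relay: "smtp.example.com",
      username: "user",
      password: "secret",
      port: 465,
      ssl: true,
      # Assumed option name; verify peers against the system CA store (OTP 25+).
      tls_options: [
        verify: :verify_peer,
        cacerts: :public_key.cacerts_get()
      ]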
Fixes a misspelling and the omission of an example in commit
0cfd5b4e89, which added the
status_ttl_property. This was the only place that commit
referred to the property as note_ttl_days.

Partially fixes the omitted schema update of the instance metadata addition
from commit b7e8ce2350. A proper full schema
for nodeinfo is still missing.
Resolves: AkkomaGang/akkoma#148
It was added in cb6e7359af.
Akkoma stopped pretending to be Pleroma here when the mix project name
was changed in c07fcdbf2b.
Reviewed-on: AkkomaGang/akkoma#687
Reviewed-on: AkkomaGang/akkoma#678
Reviewed-on: AkkomaGang/akkoma#680
Currently our own frontend doesn’t show backgrounds of other users, but this
property is already publicly readable via the REST API and likely was always
intended to be shown and federated.

Recently Sharkey added support for profile backgrounds and
immediately made them federate and be displayed to others.
We use the same AP field as Sharkey here which should make
it interoperable both ways out-of-the-box.

Ref.: 4e64397635
Reviewed-on: AkkomaGang/akkoma#682
This fixes an oversight in e99e2407f3
which added background_removal as a possible SimplePolicy setting.
However, it did _not_ add a default value to the base config and
as it turns out instance_list doesn’t handle unset options well.

In effect this caused federating instances with SimplePolicy enabled
but background_removal not explicitly configured to always trip up for
outgoing account updates in check_background_removal (and incoming
updates from Sharkey).
For added "fun" this error was able to block account updates made
e.g. via /api/v1/accounts/update_credentials.

Tests were unaffected since they explicitly override
all relevant config options.

Set a default to avoid all this
(note to self: don’t forget next time, baka!)
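The added default is presumably just an empty instance list in the base config, along these lines (sketch):

    config :pleroma, :mrf_simple,
      background_removal: []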
Reviewed-on: AkkomaGang/akkoma#692
Reviewed-on: AkkomaGang/akkoma#681
Reviewed-on: AkkomaGang/akkoma#685
This vastly reduces idle CPU usage, which should generally be beneficial
for most small-to-medium sized instances.

Additionally update the documentation to specify how to override the vm.args
file for OTP installs
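The idle-CPU reduction on the BEAM typically comes from disabling scheduler busy waiting; the flags involved are most likely of this kind (shown as a sketch for a custom vm.args, not necessarily the exact set shipped):

    ## Disable scheduler busy waiting to cut idle CPU usage
    +sbwt none
    +sbwtdcpu none
    +sbwtdio none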
Apparently nothing used this factory until now
Once processed they serve no purpose anymore afaict.
Therefore, let's prune them like other transient activities
so as not to unnecessarily bloat the table.
Fixed up some grammar/wording. Removed a sentence and made the wording more in line with what I could find in Admin-FE (especially the wording of "rejecting" vs. dropping).
Reviewed-on: YokaiRick/akkoma#1
Reviewed-on: AkkomaGang/akkoma#686
Reviewed-on: AkkomaGang/akkoma#683
Reviewed-on: AkkomaGang/akkoma#693
This partly reverts 1d884fd914
while fixing both the issue it addressed and the issue it caused.

The above commit successfully fixed OpenGraph metadata tags
which until then always showed the user bio instead of post content
by handing the activities AP ID as url to the Metadata builder
_instead_ of passing the internal ID as activity_id.
However, in doing so the commit instead inflicted this very problem
onto Twitter metadata tags which ironically are used by akkoma-fe.

This is because while the OpenGraph builder wants a URL as url,
the Twitter builder needs the internal ID to build the URL to the
embedded player for videos and has no URL property.

Thanks to twpol for tracking down this root cause in #644.

Now, once identified the problem is simple, but this simplicity
invites multiple possible solutions to bikeshed about.

 1. Just pass both properties to the builder and let them pick

 2. Drop the url parameter from the OpenGraph builder and instead
     a) build static-fe URL of the post from the ID (like Twitter)
     b) use the passed-in object’s AP ID as a URL

Approach 2a has the disadvantage of hardcoding the expected URL outside
the router, which will be problematic should it ever change.
Approach 2b is conceptually similar to how the builder works atm.
However, the og:url is supposed to be a _permanent_ ID; by changing it
we might, afaiui, technically violate OpenGraph specs(?). (Though its
real-world consequence may very well be near non-existent.)

This leaves just approach 1, which this commit implements.
Albeit it too is not without nits to pick, as it leaves the metadata
builders with an inconsistent interface.

Additionally, this will resolve the suboptimal Discord previews for
content-less image posts reported in #664.
Discord already prefers OpenGraph metadata, so it’s mostly unaffected.
However, it appears when encountering an explicitly empty OpenGraph
description and a non-empty Twitter description, it replaces just the
empty field with its Twitter counterpart, resulting in the user’s bio
slipping into the preview.
Secondly, regardless of any OpenGraph tags, Discord uses twitter:card to
decide how prominently images should be displayed, but due to the bug the
card type was stuck as "summary", forcing images to always remain small.

Root cause identified by: twpol

Fixes: AkkomaGang/akkoma#644
Fixes: AkkomaGang/akkoma#664
It was dropped in 9db4c2429f
Else it is too easy to mistake for another MRF policy.
This makes it easier to spot the transparency options
It doesn’t make sense to add/remove them from the policies list
And remove “on by default” text from individual entries.
They are now already in the “on by default” section.
It is too cumbersome to find a specific policy atm
or to check if all are documented yet.
Trivial placeholder policies are excluded from this.
Or mentions of MRFs in the main list
whose options were already documented.
Closes: https://git.pleroma.social/pleroma/pleroma/-/issues/3245
Mastodon at the very least seems to prevent the creation of emoji with
dots in their name (and refuses to accept them in federation). It feels
like being cautious in what we accept is reasonable here.

Colons are the emoji separator and so obviously should be blocked.

Perhaps instead of filtering out things like this we should just
do a regex match on `[a-zA-Z0-9_-]`? But that's plausibly a decision
for another day

    Perhaps we should also have a centralised "is this a valid emoji shortcode?"
    function
Reviewed-on: AkkomaGang/akkoma#701
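A centralised check along the lines suggested above could look like this (hypothetical helper, shown only to illustrate the character set under discussion):

    defmodule EmojiShortcode do
      @moduledoc "Sketch of a central 'is this a valid emoji shortcode?' check."

      # Alphanumerics, dash and underscore only; in particular no dots or colons.
      def valid?(shortcode) when is_binary(shortcode) do
        Regex.match?(~r/\A[a-zA-Z0-9_-]+\z/, shortcode)
      end

      def valid?(_), do: false
    end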
By now most instances will run a version past 2022-08, but the guide
only documented it for from-source installs and Pleroma develop.
Reviewed-on: AkkomaGang/akkoma#695
Reviewed-on: AkkomaGang/akkoma#699
Reviewed-on: AkkomaGang/akkoma#700
Currently translated at 18.1% (183 of 1006 strings)

Translated using Weblate (Polish)

Currently translated at 6.6% (67 of 1006 strings)

Co-authored-by: Weblate <noreply@weblate.org>
Co-authored-by: subtype <subtype@hollow.capital>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/pl/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
Currently translated at 100.0% (47 of 47 strings)

Co-authored-by: Weblate <noreply@weblate.org>
Co-authored-by: subtype <subtype@hollow.capital>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-posix-errors/pl/
Translation: Pleroma fe/Akkoma Backend (Posix Errors)
Updated by "Squash Git commits" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-posix-errors/
Translation: Pleroma fe/Akkoma Backend (Posix Errors)
The following commit will apply the needed patch
The lack thereof enables spoofing ActivityPub objects.

A malicious user could upload fake activities as attachments
and (if having access to remote search) trick local and remote
fedi instances into fetching and processing it as a valid object.

If uploads are hosted on the same domain as the instance itself,
it is possible for anyone with upload access to impersonate(!)
other users of the same instance.
If uploads are exclusively hosted on a different domain, even the most
basic check of domain of the object id and fetch url matching should
prevent impersonation. However, it may still be possible to trick
servers into accepting bogus users on the upload (sub)domain and bogus
notes attributed to such users.
Instances which later migrated to a different domain and have a
permissive redirect rule in place can still be vulnerable.
If — like Akkoma — the fetching server is overly permissive with
redirects, impersonation still works.

This was possible because Plug.Static also uses our custom
MIME type mappings used for actually authentic AP objects.

Provided external storage providers don’t somehow return ActivityStream
Content-Types on their own, instances using those are also safe against
their users being spoofed via uploads.

Akkoma instances using the OnlyMedia upload filter
cannot be exploited as a vector in this way — IF the
fetching server validates the Content-Type of
fetched objects (Akkoma itself does this already).

However, restricting uploads to only multimedia files may be a bit too
heavy-handed. Instead this commit will restrict the returned
Content-Type headers for user uploaded files to a safe subset, falling
back to generic 'application/octet-stream' for anything else.
This will also protect against non-AP payloads as e.g. used in
past frontend code injection attacks.

It’s a slight regression in user comfort, if say PDFs are uploaded,
but this trade-off seems fairly acceptable.

(Note, just excluding our own custom types would offer no protection
 against non-AP payloads and bear a (perhaps small) risk of a silent
 regression should MIME ever decide to add a canonical extension for
 ActivityPub objects)

Now, one might expect there to be other defence mechanisms
besides Content-Type preventing counterfeits from being accepted,
like e.g. validation of the queried URL and AP ID matching.
Inserting a self-reference into our uploads is hard, but unfortunately
*oma does not verify the id in such a way and happily accepts _anything_
from the same domain (without even considering redirects).
E.g. Sharkey (and possibly other *keys) seem to attempt to guard
against this by immediately refetching the object from its ID, but
this is easily circumvented by just uploading two payloads with the
ID of one linking to the other.

Unfortunately *oma is thus _both_ a vector for spoofing and
vulnerable to those spoof payloads, resulting in an easy way
to impersonate our users.

Similar flaws exists for emoji and media proxy.

Subsequent commits will fix this by rigorously sanitising
content types in more areas, hardening our checks, improving
the default config and discouraging insecure config options.
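Conceptually, the upload sanitisation described above boils down to something in this spirit (the allow-list and names are illustrative, not the actual code):

    defmodule UploadTypeSanitiser do
      # Illustrative allow-list; anything else falls back to a generic type
      # so uploads can never be served with a privileged AP content type.
      @safe_types ~w(
        image/jpeg image/png image/gif image/webp
        video/mp4 video/webm audio/mpeg audio/ogg
      )

      def sanitise(type) when type in @safe_types, do: type
      def sanitise(_type), do: "application/octet-stream"
    end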
Same-domain setups enabled now at least two exploits,
so they ought to be discouraged and definitely not be the default.
This actually was already intended before to eradicate all future
path-traversal-style exploits and to fix issues with some
characters (akkoma#610) in 0b2ec0ccee. However, Dedupe and
AnonymizeFilename got mixed up. The latter only anonymises the name
in Content-Disposition headers and GET parameters (with link_name),
_not_ the upload path.

Even without Dedupe, the upload path is prefixed by a UUID,
so it _should_ already be hard to guess for attackers. But now
we actually can be sure no path shenanigans occur, uploads
reliably work, and we save some disk space.

While this makes the final path predictable, this prediction is
not exploitable. Insertion of a back-reference to the upload
itself requires pulling off a successful preimage attack against
SHA-256, which is deemed infeasible for the foreseeable future.

Dedupe was already included in the default list in config.exs
since 28cfb2c37a, but this would get overridden by whatever the
config generated by the "pleroma.instance gen" task chose.

Upload+delete tests running in parallel using Dedupe might be flaky, but
this was already true before and needs its own commit to fix eventually.
Else malicious emoji packs or our EmojiStealer MRF can
put payloads into the same domain as the instance itself.
Sanitising the content type should prevent proper clients
from acting on any potential payload.

Note, this does not affect the default emoji shipped with Akkoma
as they are handled by another plug. However, those are fully trusted
and thus not in need of sanitisation.
Strict servers fail to process anything from us otherwise.

Fixes: akkoma#716
By mapping all extensions related to our custom privileged types
back to innocuous text/plain, our custom types will never automatically
be inserted which was one of the factors making impersonation possible.

Note, this does not invalidate the upload and emoji Content-Type
restrictions from previous commits. Apart from counterfeit AP objects
there are other payloads with standard types this protects against,
e.g. *.js Javascript payloads as used in prior frontend injections.
Just as with uploads and emoji before, this can otherwise be used
to place counterfeit AP objects or other malicious payloads.
In this case, even if we never assign a privileged type to content,
the remote server can, and until now we just mimicked whatever it told us.

Preview URLs already handle only specific, safe content types
and redirect to the external host for all else; thus no additional
sanitisation is needed for them.

Non-previews are all delegated to the modified ReverseProxy module.
It already has consolidated logic for building response headers
making it easy to slip in sanitisation.

Although proxy URLs are prefixed by a MAC built from a server secret,
attackers can still achieve a perfect id match when they are able to
change the contents of the pointed-to URL. After sending a post
containing an attachment at a controlled destination, the proxy URL can
be read back and inserted into the payload. After injection of
counterfeits in the target server the content can again be changed
to something innocuous, lessening the chance of detection.
Even more than with user uploads, a same-domain proxy setup bears
significant security risks due to serving untrusted content under
the main domain space.

A risky setup like that should never be the default.
To account for our subdomain recommendations
As suggested in b387f4a1c1, only steal
emoji with alphanumeric, dash, or underscore characters.

Also consolidate all validation logic into a single function.

===

Taken from akkoma#703 with cosmetic tweaks

This matches our existing validation logic from Pleroma.Emoji,
and, apart from excluding the dot, also POSIX’s Portable Filename
Character Set, making it always safe for use in filenames.

Mastodon is even stricter, also disallowing U+002D HYPHEN-MINUS
and requiring at least two characters.

Given both we and Mastodon reject shortcodes excluded
by this anyway, this doesn’t seem like a loss.
E.g. *key’s emoji URLs typically don’t have file extensions, but
until now we just slapped ".png" at their end hoping for the best.

Furthermore, this gives us a chance to actually reject non-images,
which before was not feasible exactly due to those extension-less URLs.
Since 3 commits ago we restrict shortcodes to a subset of
the POSIX Portable Filename Character Set, therefore
this can never have a directory component.
Before this was only filled on loading the pack again,
preventing the created pack from being used directly.
This will decouple filenames from shortcodes and
allow more image formats to work instead of only
those included in the auto-load glob. (Albeit we
still saved other formats to disk, wasting space)

Furthermore, this will allow us to make
final URL paths infeasible to predict.
The hardcoded path and filename assumptions
will be broken with the next commit.
Certain attacks rely on predictable paths for their payloads.
If we weren’t so overly lax in our (id, URL) check, the current
counterfeit activity exploit would be one of those.
It seems plausible for future attacks to hinge on,
or be made easier by, predictable paths too.

In general, letting remote actors place arbitrary data at
a path within our domain of their choosing (sans prefix)
just doesn’t seem like a good idea.

Using fully random filenames would have worked as well, but this
is less friendly for admins checking emoji dirs.
The generated suffix should still be more than enough;
an attacker needs on average 140 trillion attempts to
correctly guess the final path.
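The arithmetic is consistent with a suffix carrying 48 bits of randomness: 2^48 is roughly 2.8 * 10^14 possible values, so on average about 1.4 * 10^14 (140 trillion) guesses are needed. A sketch of generating such a suffix (the exact length and encoding are assumptions):

    # 6 random bytes = 48 bits of entropy, encoded URL/filename-safe.
    suffix = Base.url_encode64(:crypto.strong_rand_bytes(6), padding: false)
    filename = "#{shortcode}-#{suffix}#{extension}"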
To save on bandwidth and avoid OOMs with large files.
Ofc, this relies on the remote server
 (a) sending a content-length header and
 (b) being honest about the size.

Common fedi servers seem to provide the header and (b) at least raises
the required privilege of a malicious actor to a server infrastructure
admin of an explicitly allowed host.

A more complete defense which still works when faced with
a malicious server requires changes in upstream Finch;
see https://github.com/sneako/finch/issues/224
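In essence this is just an early size check on the response headers before downloading the body, roughly like the following (names and the limit are placeholders):

    defmodule SizeCheck do
      # Placeholder limit; headers are a list of {name, value} tuples
      # with lowercased names assumed.
      @max_bytes 50_000_000

      def check(headers) do
        case List.keyfind(headers, "content-length", 0) do
          {_, value} ->
            if String.to_integer(value) > @max_bytes,
              do: {:error, :too_large},
              else: :ok

          nil ->
            # No header: nothing to check client-side without streaming
            # support in Finch.
            :ok
        end
      end
    end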
No new path traversal attacks are known. But given the many entrypoints
and code flow complexity inside pack.ex, it unfortunately seems
possible a future refactor or addition might reintroduce one.
Furthermore, some old packs might still contain traversing path entries
which could trigger undesirable actions on rename or delete.

To ensure this can never happen, assert safety during path construction.

Path.safe_relative was introduced in Elixir 1.14, but
fortunately, we already require at least 1.14 anyway.
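Path.safe_relative/1 returns :error for anything that would escape the base directory, so the assertion boils down to something like this (sketch, not the exact pack.ex code):

    defmodule PackPath do
      # Joins a user-supplied entry onto the pack dir only if it cannot escape it.
      def join_checked(pack_dir, untrusted_name) do
        case Path.safe_relative(untrusted_name) do
          {:ok, rel} -> {:ok, Path.join(pack_dir, rel)}
          :error -> {:error, :path_traversal}
        end
      end
    end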
Apart from slightly different error reasons wrt content-type,
this does not change functionality in any way.
Turns out we already had a test for activities spoofed via upload due
to an exploit several years ago. Back then *oma did not verify content-type
at all and doing so was the only adopted countermeasure.
Even the added test sample though suffered from a mismatching id, yet
nobody seems to have thought it a good idea to tighten id checks, huh

Since we will add stricter id checks later, make id and URL match
and also add a testcase for no content type at all. The new section
will be expanded in subsequent commits.
Such redirects on AP queries seem most likely to be a spoofing attempt.
If the object is legit, the id should match the final domain anyway and
users can directly use the canonical URL.

The lack of such a check (and use of the initially queried domain’s
authority instead of the final domain) was enabling the current exploit
to even affect instances which already migrated away from a same-domain
upload/proxy setup in the past, but retained a redirect to not break old
attachments.

(In theory this redirect could, with some effort, have been limited to
 only old files, but common guides employed a catch-all redirect, which
 allows even future uploads to be reachable via an initial query to the
 main domain)

Same-domain redirects are valid and also used by ourselves,
e.g. for redirecting /notice/XXX to /objects/YYY.
This brings it in line with its name and closes an,
in practice harmless, verification hole.

This was/is the only user of contain_origin making it
safe to change the behaviour on actor-less objects.

Until now refetched objects did not ensure the new actor matches the
domain of the object. We refetch polls occasionally to retrieve
up-to-date vote counts. A malicious AP server could have switched out
the poll after initial posting with a completely different post
attributed to an actor from another server.
While we indeed fell for this spoof before the commit,
it fortunately seems to have had no ill effect in practice,
since the associated Create activity is not changed. When exposing the
actor via our REST API, we read this info from the activity, not the
object.

This at first thought still keeps one avenue for exploit open though:
the updated actor can be from our own domain and a third server be
instructed to fetch the object from us. However this is foiled by an
id mismatch. By necessity of being fetchable and our longstanding
same-domain check, the id must still be from the attacker’s server.
Even the most barebone authenticity check is able to sus this out.
If it’s not already in the database,
it must be counterfeit (or just not exist at all).

Changed test URLs were only ever used from "local: false" users anyway.
In order to properly process incoming notes we need
to be able to map the key id back to an actor.
Also, check collections actually belong to the same server.

Key ids of Hubzilla and Bridgy samples were updated to what
modern versions of those output. If anything still uses the
old format, we would not be able to verify their posts anyway.
Since we reject cross-domain redirects, this doesn’t yet
make a difference, but it’s required for the stricter checking
subsequent commits will introduce.

To make sure (and in case we ever decide to reallow
cross-domain redirects) also use the final location
for containment and reachability checks.
Since we always followed redirects (and until recently allowed fuzzy id
matches), the ap_id of the received object might differ from the initial
fetch url. This led to us mistakenly trying to insert a new user with
the same nickname, ap_id, etc. as an existing user (which will fail due
to uniqueness constraints) instead of updating the existing one.
This protects us from falling for obvious spoofs as from the current
upload exploit (unfortunately we can’t reasonably do anything about
spoofs with exact matches as was possible via emoji and proxy).

Such objects being invalid is supported by the spec, specifically
sections 3.1 and 3.2: https://www.w3.org/TR/activitypub/#obj-id

Anonymous objects are not relevant here (they can only exist within
parent objects iiuc) and neither are client-to-server or transient objects
(as those cannot be fetched in the first place).
This leaves us with the requirement for `id` to (a) exist and
(b) be a publicly dereferenceable URI from the originating server.
This alone does not yet demand strict equivalence, but the spec then
further explains objects ought to be fetchable _via their ID_.
Meaning an object not retrievable via its ID is invalid.

This reading is supported by the fact that, e.g., GoToSocial (recently) and
Mastodon (for 6+ years) already implement such strict ID checks,
additionally proving this doesn’t cause federation issues in practice.

However, apart from canonical IDs there can also be additional display
URLs. *omas first redirect those to their canonical location, but *keys
and Mastodon directly serve the AP representation without redirects.

Mastodon and GTS deal with this in two different ways,
but both constitute an effective countermeasure:
 - Mastodon:
   Unless it already is a known AP id, two fetches occur.
   The first fetch just reads the `id` property and then refetches from
   the id. The last fetch requires the returned id to exactly match the
   URL the content was fetched from. (This can be optimised by skipping
   the second fetch if it already matches)
   05eda8d193/app/helpers/jsonld_helper.rb (L168)
   63f0979799

 - GTS:
   Only does a single fetch and then checks if _either_ the id
   _or_ url property (which can be an object) match the original fetch
   URL. This relies on implementations always including their display URL
   as "url" if differing from the id. For actors this is true for all
   investigated implementations, for posts only Mastodon includes an
   "url", but it is also the only one with a differing display URL.
   2bafd7daf5 (diff-943bbb02c8ac74ac5dc5d20807e561dcdfaebdc3b62b10730f643a20ac23c24fR222)

Albeit Mastodon’s refetch offers higher compatibility with theoretical
implementations using either multiple different display URLs or not
denoting any of them as "url" at all, for now we chose to adopt a
GTS-like refetch-free approach to avoid additional implementation
concerns wrt whether redirects should be allowed when fetching a
canonical AP id and the potential for accidentally loosening some checks
(e.g. cross-domain refetches) for one of the fetches.
This may be reconsidered in the future.
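A GTS-style check is essentially a single comparison against the fetch URL, roughly like this (sketch with hypothetical names; per AS2 the "url" property may be absent, a string, a map, or a list):

    defmodule FetchCheck do
      # Accept the object only if the URL we fetched it from is either its
      # canonical id or one of its declared display URLs.
      def matches_fetch_url?(fetch_url, %{"id" => id} = data) do
        display_urls =
          data
          |> Map.get("url")
          |> List.wrap()
          |> Enum.flat_map(fn
            %{"href" => href} -> [href]
            url when is_binary(url) -> [url]
            _ -> []
          end)

        fetch_url == id or fetch_url in display_urls
      end
    end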
This pixelfed issue was fixed in 2022-12 in
https://github.com/pixelfed/pixelfed/pull/3932

Co-authored-by: FloatingGhost <hannah@coffee-and-dreams.uk>
The newest git HEAD of MIME already knows about APNG, but this
hasn’t been released yet. Without this, APNG attachments from
remote posts won’t display as images in frontends.

Fixes: akkoma#657
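Until a MIME release includes it, the mapping can be supplied through the library's compile-time config, roughly like this (note the mime dependency typically needs to be recompiled for the change to take effect):

    config :mime, :types, %{
      "image/apng" => ["apng"]
    }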
At least as far as we can
Reviewed-on: #1
sliver merged commit 06bfc5b9eb into stable 2024-03-31 06:59:36 +00:00
sliver deleted branch ForForkMerge 2024-03-31 06:59:36 +00:00
sliver referenced this pull request from a commit 2024-03-31 06:59:37 +00:00