Just as with uploads and emoji before, this can otherwise be used
to place counterfeit AP objects or other malicious payloads.
In this case, even if we never assign a privileged type to content,
the remote server can, and until now we just mimicked whatever it told us.
Preview URLs already handle only specific, safe content types
and redirect to the external host for all else; thus no additional
sanitisation is needed for them.
Non-previews are all delegated to the modified ReverseProxy module.
It already has consolidated logic for building response headers
making it easy to slip in sanitisation.
Although proxy URLs are prefixed by a MAC built from a server secret,
attackers can still achieve a perfect id match when they are able to
change the contents of the pointed-to URL. After sending a post
containing an attachment at a controlled destination, the proxy URL can
be read back and inserted into the payload. After injecting
counterfeits into the target server, the content can again be changed
to something innocuous, lessening the chance of detection.
Otherwise, malicious emoji packs or our EmojiStealer MRF can
put payloads into the same domain as the instance itself.
Sanitising the content type should prevent proper clients
from acting on any potential payload.
Note, this does not affect the default emoji shipped with Akkoma
as they are handled by another plug. However, those are fully trusted
and thus not in need of sanitisation.
This actually was already intended before to eradicate all future
path-traversal-style exploits and to fix issues with some
characters like akkoma#610 in 0b2ec0ccee. However, Dedupe and
AnonymizeFilename got mixed up. The latter only anonymises the name
in Content-Disposition headers of GET parameters (with link_name),
_not_ the upload path.
Even without Dedupe, the upload path is prefixed by a UUID,
so it _should_ already be hard to guess for attackers. But now
we can actually be sure no path shenanigans occur, uploads
reliably work and we save some disk space.
While this makes the final path predictable, this prediction is
not exploitable. Insertion of a back-reference to the upload
itself requires pulling off a successful preimage attack against
SHA-256, which is deemed infeasible for the foreseeable future.
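For illustration, a minimal sketch of how the deduplicated name is derived (the helper name is made up; extension handling and the actual filter plumbing are omitted):

```elixir
defmodule DedupeSketch do
  @moduledoc "Hedged sketch of deriving the stored name from the upload's contents."

  # The stored name is just the SHA-256 digest of the file contents, so no
  # attacker-controlled characters can end up in the path and identical files
  # collapse into a single object on disk.
  def dedupe_name(file_contents) when is_binary(file_contents) do
    :crypto.hash(:sha256, file_contents) |> Base.encode16(case: :lower)
  end
end
```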
Dedupe was already included in the default list in config.exs
since 28cfb2c37a, but this gets overridden by whatever the
config generated by the "pleroma.instance gen" task chose.
Upload+delete tests running in parallel using Dedupe might be flaky, but
this was already true before and needs its own commit to fix eventually.
The lack thereof enables spoofing of ActivityPub objects.
A malicious user could upload fake activities as attachments
and (given access to remote search) trick local and remote
fedi instances into fetching and processing them as valid objects.
If uploads are hosted on the same domain as the instance itself,
it is possible for anyone with upload access to impersonate(!)
other users of the same instance.
If uploads are exclusively hosted on a different domain, even the most
basic check of domain of the object id and fetch url matching should
prevent impersonation. However, it may still be possible to trick
servers into accepting bogus users on the upload (sub)domain and bogus
notes attributed to such users.
Instances which later migrated to a different domain and have a
permissive redirect rule in place can still be vulnerable.
If — like Akkoma — the fetching server is overly permissive with
redirects, impersonation still works.
This was possible because Plug.Static also uses our custom
MIME type mappings intended for actually authentic AP objects.
Provided external storage providers don’t somehow return ActivityStream
Content-Types on their own, instances using those are also safe against
their users being spoofed via uploads.
Akkoma instances using the OnlyMedia upload filter
cannot be exploited as a vector in this way — IF the
fetching server validates the Content-Type of
fetched objects (Akkoma itself does this already).
However, restricting uploads to only multimedia files may be a bit too
heavy-handed. Instead this commit will restrict the returned
Content-Type headers for user uploaded files to a safe subset, falling
back to generic 'application/octet-stream' for anything else.
This will also protect against non-AP payloads as e.g. used in
past frontend code injection attacks.
It’s a slight regression in user comfort if, say, PDFs are uploaded,
but this trade-off seems fairly acceptable.
(Note, just excluding our own custom types would offer no protection
against non-AP payloads and bear a (perhaps small) risk of a silent
regression should MIME ever decide to add a canonical extension for
ActivityPub objects)
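To illustrate the idea, a minimal sketch of such a sanitiser (the allowed set and names are illustrative, not the exact implementation):

```elixir
defmodule ContentTypeSanitiserSketch do
  @moduledoc "Hedged sketch: restrict served Content-Type headers to a safe subset."

  @safe_prefixes ~w(image/ video/ audio/)

  # Anything not recognised as safe multimedia is degraded to a generic binary
  # type, so clients never interpret an upload as an ActivityPub document.
  def sanitise(content_type) when is_binary(content_type) do
    if Enum.any?(@safe_prefixes, &String.starts_with?(content_type, &1)) do
      content_type
    else
      "application/octet-stream"
    end
  end

  def sanitise(_), do: "application/octet-stream"
end
```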
Now, one might expect there to be other defence mechanisms
besides Content-Type preventing counterfeits from being accepted,
like e.g. validation of the queried URL and AP ID matching.
Inserting a self-reference into our uploads is hard, but unfortunately
*oma does not verify the id in such a way and happily accepts _anything_
from the same domain (without even considering redirects).
E.g. Sharkey (and possibly other *keys) seems to attempt to guard
against this by immediately refetching the object from its ID, but
this is easily circumvented by just uploading two payloads with the
ID of one linking to the other.
Unfortunately *oma is thus _both_ a vector for spoofing and
vulnerable to those spoof payloads, resulting in an easy way
to impersonate our users.
Similar flaws exist for emoji and the media proxy.
Subsequent commits will fix this by rigorously sanitising
content types in more areas, hardening our checks, improving
the default config and discouraging insecure config options.
The default refresh interval of 1 day is woefully inadequate here;
users expect to be able to add the alias to their new account and
press the move button on their old account and have it work.
This allows callers to specify a maximum age before a refetch is
triggered. We set that to 5s for the move code, as a nice compromise
between Making Things Work and ensuring that this can't be used
to hammer a remote server.
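A minimal sketch of the staleness check this enables (function and parameter names are assumptions, not the actual API):

```elixir
defmodule RefetchSketch do
  @moduledoc "Hedged sketch: decide whether a cached remote actor is too old to trust."

  @default_max_age 24 * 60 * 60

  # maximum_age is in seconds; the move code would pass 5 here,
  # while everything else keeps the one-day default.
  def needs_refetch?(%NaiveDateTime{} = last_fetched_at, maximum_age \\ @default_max_age) do
    NaiveDateTime.diff(NaiveDateTime.utc_now(), last_fetched_at) > maximum_age
  end
end
```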
Mastodon at the very least seems to prevent the creation of emoji with
dots in their name (and refuses to accept them in federation). It feels
like being cautious in what we accept is reasonable here.
Colons are the emoji separator and so obviously should be blocked.
Perhaps instead of filtering out things like this we should just
do a regex match on `[a-zA-Z0-9_-]`? But that's plausibly a decision
for another day.
Perhaps we should also have a centralised "is this a valid emoji shortcode?"
function.
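If we went that route, such a centralised check could look roughly like this (a sketch, not an existing function):

```elixir
defmodule ShortcodeSketch do
  @moduledoc "Hedged sketch of a centralised emoji shortcode validator."

  # Dots, colons and anything else outside this conservative set are rejected.
  @valid ~r/\A[a-zA-Z0-9_-]+\z/

  def valid_shortcode?(name) when is_binary(name), do: Regex.match?(@valid, name)
  def valid_shortcode?(_), do: false
end

# ShortcodeSketch.valid_shortcode?("blobcat")  #=> true
# ShortcodeSketch.valid_shortcode?("blob.cat") #=> false
```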
This partly reverts 1d884fd914
while fixing both the issue it addressed and the issue it caused.
The above commit successfully fixed OpenGraph metadata tags
which until then always showed the user bio instead of post content
by handing the activity's AP ID as url to the Metadata builder
_instead_ of passing the internal ID as activity_id.
However, in doing so the commit instead inflicted this very problem
onto Twitter metadata tags which ironically are used by akkoma-fe.
This is because while the OpenGraph builder wants a URL as url,
the Twitter builder needs the internal ID to build the URL to the
embedded player for videos and has no URL property.
Thanks to twpol for tracking down this root cause in #644.
Now, once identified the problem is simple, but this simplicity
invites multiple possible solutions to bikeshed about.
1. Just pass both properties to the builder and let them pick
2. Drop the url parameter from the OpenGraph builder and instead
a) build static-fe URL of the post from the ID (like Twitter)
b) use the passed-in object’s AP ID as a URL
Approach 2a has the disadvantage of hardcoding the expected URL outside
the router, which will be problematic should it ever change.
Approach 2b is conceptually similar to how the builder works atm.
However, the og:url is supposed to be a _permanent_ ID, by changing it
we might, afaiui, technically violate OpenGraph specs(?). (Though its
real-world consequence may very well be near non-existent.)
This leaves just approach 1, which this commit implements.
Albeit it too is not without nits to pick, as it leaves the metadata
builders with an inconsistent interface.
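In code, approach 1 boils down to handing both identifiers to every provider and letting each pick what it needs (parameter names follow the description above and may not match the builder interface exactly):

```elixir
defmodule MetadataParamsSketch do
  # Hedged sketch of the combined parameters passed to all metadata builders.
  def build(ap_id, internal_id, user) do
    %{
      url: ap_id,               # OpenGraph keeps the permanent AP ID as og:url
      activity_id: internal_id, # Twitter builder derives the embedded-player URL from this
      user: user
    }
  end
end
```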
Additionally, this will resolve the suboptimal Discord previews for
content-less image posts reported in #664.
Discord already prefers OpenGraph metadata, so it’s mostly unaffected.
However, it appears when encountering an explicitly empty OpenGraph
description and a non-empty Twitter description, it replaces just the
empty field with its Twitter counterpart, resulting in the user’s bio
slipping into the preview.
Secondly, regardless of any OpenGraph tags, Discord uses twitter:card to
decide how prominently images should be displayed, but due to the bug the card
type was stuck as "summary", forcing images to always remain small.
Root cause identified by: twpol
Fixes: #644
Fixes: #664
Currently our own frontend doesn’t show backgrounds of other users, but this
property is already publicly readable via the REST API and likely was always
intended to be shown and federated.
Recently Sharkey added support for profile backgrounds and
immediately made them federate and be displayed to others.
We use the same AP field as Sharkey here which should make
it interoperable both ways out-of-the-box.
Ref.: 4e64397635
Fixes misspelling and omission of an example in commit
0cfd5b4e89 which added the
status_ttl_property. This was the only place this commit
referred to the property as note_ttl_days.
Partially fixes the omitted schema update of the instance metadata addition
from commit b7e8ce2350. A proper full schema
for nodeinfo is still missing.
OTP’s default SSL/TLS settings are rather restrictive
and in particular do not use system CA certs.
In our case using system CA certs is virtually always desired
and the lack of it leads to non-obvious errors. Manually configuring
system CA certs from in-database config also isn’t straightforward.
Furthermore, gen_smtp uses a different set of connection options
for direct SSL/TLS and a later TLS upgrade, providing additional
confusion and complexity in how to configure this.
Thus provide some suitable defaults for sending SMTP emails.
Everything can still be overridden by admins if necessary.
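For reference, an admin configuring this by hand would end up with something along these lines (a hedged example; the relay is a placeholder and the exact defaults Akkoma applies may differ):

```elixir
# in an Elixir config file (or the in-database config equivalent)
config :pleroma, Pleroma.Emails.Mailer,
  adapter: Swoosh.Adapters.SMTP,
  relay: "smtp.example.com",
  port: 587,
  tls: :always,
  tls_options: [
    verify: :verify_peer,
    # use the operating system's CA store (available since OTP 25)
    cacerts: :public_key.cacerts_get(),
    server_name_indication: ~c"smtp.example.com",
    depth: 3
  ]
```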
Note: defaults are not appended when validating the config
in hopes of improving the error message (as the required relay key
is already accessed to generate defaults for optional fields).
Fixes: #660
With kilobytes the resulting numbers got too large and were cut off
in the charts, making them useless. However, even an idle Akkoma
server’s memory usage is in the lower hundreds of megabytes, so
we don’t need this much precision to begin with for the dashboard.
Other metric users might prefer base units and can handle scaling in a
smarter way, so keep this configurable.
The spec was copied from another endpoint, including the operation id,
leading to the valid parameters being scrubbed from the request and the
endpoint simply not working.
the previous code passed a state parameter to ueberauth with info
about where to go after the user logged in, etc.
since ueberauth 0.7, this parameter is ignored and oauth state is used
for actual CSRF reasons.
we now set a cookie with the state we need to keep track of, and read
it once the callback happens.
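roughly, the flow looks like this (cookie name and helper names are illustrative, not the actual code):

```elixir
defmodule OAuthStateCookieSketch do
  @moduledoc "Hedged sketch: stash our own state in a signed cookie around ueberauth 0.7."

  import Plug.Conn

  @cookie "akkoma_oauth_state"

  # before redirecting to the provider: remember where to send the user afterwards
  def stash_state(conn, state) do
    put_resp_cookie(conn, @cookie, state, sign: true, max_age: 600)
  end

  # in the callback: read our state back instead of the (now CSRF-only) state param
  def fetch_state(conn) do
    conn = fetch_cookies(conn, signed: [@cookie])
    conn.cookies[@cookie]
  end
end
```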
Implements the preferences endpoint in the Mastodon API, but returns
default values for most of the preferences right now. The only supported
preference we can access is default post visibility, and a relevant test
is added as well.
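The response then looks roughly like this (keys follow the Mastodon API; apart from the default visibility the values here are fixed defaults):

```elixir
defmodule PreferencesSketch do
  # Hedged sketch of the returned map; only the visibility comes from the user.
  def render(default_visibility) do
    %{
      "posting:default:visibility" => default_visibility,
      "posting:default:sensitive" => false,
      "posting:default:language" => nil,
      "reading:expand:media" => "default",
      "reading:expand:spoilers" => false
    }
  end
end
```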
OTP builds to 1.15
Changelog entry
Ensure policies are fully loaded
Fix :warn
use main branch for linkify
Fix warn in tests
Migrations for phoenix 1.17
Revert "Migrations for phoenix 1.17"
This reverts commit 6a3b2f15b7.
Oban upgrade
Add default empty whitelist
mix format
limit test to amd64
OTP 26 tests for 1.15
use OTP_VERSION tag
baka
just 1.15
Massive deps update
Update locale, deps
Mix format
shell????
multiline???
?
max cases 1
use assert_receive
don't put_env in async tests
don't async conn/fs tests
mix format
Fix some uploader issues
Fix tests
Set it to `inline` because the vast majority of what's sent is multimedia
content while `attachment` would have the side-effect of triggering a
download dialog.
Closes: https://git.pleroma.social/pleroma/pleroma/-/issues/3114
TwitterCard meta tags are supposed to use the attributes "name" and "content".
OpenGraph tags use the attributes "property" and "content".
Twitter itself is smart enough to detect broken meta tags and discover the TwitterCard
using "property" and "content", but other platforms that only implement parsing of TwitterCards
and not OpenGraph may fail to correctly detect the tags as they're under the wrong attributes.
> "Open Graph protocol also specifies the use of property and content attributes for markup while
> Twitter cards use name and content. Twitter’s parser will fall back to using property and content,
> so there is no need to modify existing Open Graph protocol markup if it already exists." [0]
[0] https://developer.twitter.com/en/docs/twitter-for-websites/cards/guides/getting-started
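In terms of the tag tuples the metadata providers emit, the distinction looks like this (a hedged sketch; the tuple shape is assumed here and the values are illustrative):

```elixir
# Hedged sketch: TwitterCard tags use the "name" attribute, OpenGraph uses "property".
twitter_tags = [
  {:meta, [name: "twitter:card", content: "summary_large_image"], []},
  {:meta, [name: "twitter:title", content: "Example post"], []}
]

opengraph_tags = [
  {:meta, [property: "og:title", content: "Example post"], []},
  {:meta, [property: "og:type", content: "article"], []}
]
```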
`context` fields for objects and activities can now be generated based
on the object/activity `inReplyTo` field or its ActivityPub ID, as a
fallback method in cases where `context` fields are missing for incoming
activities and objects.
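A minimal sketch of that fallback (the real code lives in the incoming-object handling; names here are assumptions):

```elixir
defmodule ContextFallbackSketch do
  @moduledoc "Hedged sketch: derive a context when an incoming object lacks one."

  def ensure_context(%{"context" => context} = data) when is_binary(context), do: data

  def ensure_context(data) do
    # prefer the thread parent, fall back to the object's own AP ID
    Map.put(data, "context", data["inReplyTo"] || data["id"])
  end
end
```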
This makes the behavior consistent between when UserNote doesn't exist and when comment is null.
The current behavior may return null in APIs, which misleads some clients doing feature detection into thinking the server does not support comments.
For example, see https://codeberg.org/husky/husky/issues/92
weird values in href will cause base64 encoding to fail later down the
line, so let's make sure the value we're passing on is somewhat sane, or
at the very least a binary
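the guard amounts to something like this (a sketch; the real check may differ):

```elixir
defmodule HrefGuardSketch do
  # Hedged sketch: only pass binaries on to the later base64 encoding step.
  def safe_href(href) when is_binary(href), do: href
  def safe_href(_), do: ""
end
```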
this fixes #482
When no image description is filled in, Pleroma allowed fallbacks.
Those were (based on a setting) either the filename, or a fixed description.
Neither are good options for image descriptions imo, so here we remove this.
Note that there are two tests removed which supposedly tested something else.
But examining them closer, they didn't seem to test what they claimed to test,
so I removed them rather than try to "fix" them.
Expose quote posting in the api as a feature.
Copies what the quote post PR for pleroma does to allow external clients to enable and disable features based on the feature-set of the instance.
As far as I am aware, akkoma doesn't allow you to disable quote posting, so this doesn't need anything fancy and it's just hardcoded to on.
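Concretely this just means the advertised feature list always contains the flag (a sketch; the surrounding entries are illustrative):

```elixir
# Hedged sketch: quote_posting is unconditionally advertised by the instance.
features = [
  "mastodon_api",
  "polls",
  "quote_posting"
]
```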
I tried to get one for the bubble tl to work also, but I'm not quite sure how to do it so that it switches off the feature when the bubble tl is disabled. I would argue that it could and ideally should be done as well though.
I also discovered a pretty tame bug in the testing of it: deleting the DB entry for the bubble tl does not stop the bubble TL from actually working, and it will continue to display the panel on the about page. I'll just leave it as a note here.
Reviewed-on: #496
Co-authored-by: foxing <foxing@noreply.akkoma>
Co-committed-by: foxing <foxing@noreply.akkoma>
Some users post posts with spoofed timestamps, and some clients will have issues with certain dates. Tusky for example crashes if the date is any sooner than 1 BCE (“year zero” in the representation).
I limited the range of what is considered a valid date to be somewhere between the years 1583 and 9999 (inclusive).
The numbers have been chosen because:
- ISO 8601 only allows years before 1583 with “mutual agreement”
- Years after 9999 could cause issues with certain clients as well
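A minimal sketch of the accepted range (the real validation lives elsewhere; the function here is made up):

```elixir
defmodule PublishDateSketch do
  @moduledoc "Hedged sketch: accept only timestamps between the years 1583 and 9999."

  def valid_year?(%DateTime{year: year}), do: year in 1583..9999
end

# PublishDateSketch.valid_year?(~U[2023-05-01 12:00:00Z]) #=> true
# PublishDateSketch.valid_year?(~U[1200-01-01 00:00:00Z]) #=> false
```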
Co-authored-by: Charlotte 🦝 Delenk <lotte@chir.rs>
Reviewed-on: #425
Co-authored-by: darkkirb <lotte@chir.rs>
Co-committed-by: darkkirb <lotte@chir.rs>
Faced with this issue today, Pleroma responds with status 400 (Bad request) if Exiftool.StripLocation is added to the list of filter modules for uploads. Here are the logs:
```
13:27:25.201 [info] POST /api/v1/media
13:27:25.232 request_id=FzdspaAnrA6cyv0APgVR [error] Elixir.Pleroma.Upload.Filter: Filter Elixir.Pleroma.Upload.Filter.Exiftool.StripLocation failed: {:error, "Elixir.Pleroma.Upload.Filter.Exiftool.StripLocation: %ErlangError{original: :enoent}"}
13:27:25.232 request_id=FzdspaAnrA6cyv0APgVR [error] Elixir.Pleroma.Upload store (using Pleroma.Uploaders.Local) failed: "Elixir.Pleroma.Upload.Filter.Exiftool.StripLocation: %ErlangError{original: :enoent}"
```
This fix solves this problem.
Reviewed-on: #421
Co-authored-by: ihor <ikandreew@gmail.com>
Co-committed-by: ihor <ikandreew@gmail.com>
See #350 (comment)
When making quotes through Mast-API, they will now have the same context as the quoted post. This also results in them being shown when fetching the thread. I checked Misskey to see how it's there, and they show the quotes there as well, see e.g. <https://mk.toast.cafe/notes/98u1g0tulg>.
Co-authored-by: ilja <git@ilja.space>
Reviewed-on: #379
Reviewed-by: floatingghost <hannah@coffee-and-dreams.uk>
Co-authored-by: ilja <akkoma.dev@ilja.space>
Co-committed-by: ilja <akkoma.dev@ilja.space>
Argos Translate is a Python module for translation and can be used as a command line tool.
This is also the engine for LibreTranslate, for which we already have a module.
Here we can use the engine directly from our server without doing requests to a third party or having to install our own LibreTranslate webservice (obviously you do have to install Argos Translate).
One thing that's currently still missing from Argos Translate is auto-detection of languages (see <https://github.com/argosopentech/argos-translate/issues/9>). For now, when no source language is provided, we just return the text unchanged, supposedly translated from the target language. That way you get a near immediate response in pleroma-fe when clicking Translate, after which you can select the source language from a dropdown.
Argos Translate also doesn't seem to handle html very well. Therefore we give admins the option to strip the html before translating. I made this an option because I'm unsure if/how this will change in the future.
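The fallback described above amounts to roughly this (module and function names are made up, and the return shape is illustrative):

```elixir
defmodule ArgosFallbackSketch do
  @moduledoc "Hedged sketch: without a source language, return the text unchanged."

  # Argos Translate cannot auto-detect the source language yet, so when none is
  # given we just echo the text back; pleroma-fe then lets the user pick one.
  def translate(text, nil, _target_language), do: {:ok, text}

  def translate(text, _source_language, _target_language) do
    # here the actual call out to the argos-translate CLI would happen
    {:ok, text}
  end
end
```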
Co-authored-by: ilja <git@ilja.space>
Reviewed-on: #351
Co-authored-by: ilja <akkoma.dev@ilja.space>
Co-committed-by: ilja <akkoma.dev@ilja.space>