Ever since 364b6969eb,
this setting has not been used by the backend and is a no-op.
The stated use case is better served by setting the base_url
to a local subdomain and using proxying in nginx/Caddy/...
This lets us:
- avoid issues with broken hash indices for PostgreSQL <10
- drop runtime checks and legacy codepaths for <11 in db search
- always enable custom query plans for performance optimisation
PostgreSQL 11 has been EOL since 2023-11-09, so
in theory everyone should already have moved on to 12 anyway.
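For context, "custom query plans" refers to PostgreSQL's plan_cache_mode,
which only exists since PostgreSQL 12; in Ecto terms it boils down to
roughly the following (an illustrative sketch, the backend can now simply
apply this unconditionally):

```elixir
# Pass plan_cache_mode as a connection parameter so PostgreSQL (>= 12)
# always builds custom plans instead of falling back to generic ones.
config :pleroma, Pleroma.Repo,
  parameters: [plan_cache_mode: "force_custom_plan"]
```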
From experience, setting the DB type to "Online transaction processing
system" seems to give the best configuration in terms of performance.
I also increased the recommended max connections to 25-30, as that leaves
some room for maintenance tasks to run without running out of
connections.
Finally, I removed the example configs since they're probably out of
date and I think it's better to direct people to use PGTune instead.
This may sometimes be helpful to get a more predictable runtime
than just with an age-based limit.
The subquery for the non-keep-threads path is required
since delete_all does not directly accept limit().
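A rough sketch of the resulting pattern (the schema, cut-off and batch
size here are illustrative, not the actual pruning criteria):

```elixir
import Ecto.Query

# Select at most `limit` prunable ids in a subquery...
prunable_ids =
  from(o in Pleroma.Object,
    where: o.inserted_at < ago(30, "day"),
    select: o.id,
    limit: 1_000
  )

# ...then delete by membership, since delete_all/2 does not accept a
# limit on the delete query itself.
from(o in Pleroma.Object, where: o.id in subquery(prunable_ids))
|> Pleroma.Repo.delete_all(timeout: :infinity)
```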
Again, most of the diff is just adjusted indentation; it's best to
hide whitespace-only changes, e.g. with `git diff -w` or similar.
This gives feedback on when to stop rerunning limited batches.
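A minimal sketch of where that feedback can come from, assuming it is
derived from the affected-row count (with `batch_query` being a limited
query as sketched above; the actual reporting may look different):

```elixir
# Repo.delete_all/2 returns {affected_rows, _}; printing it tells the
# operator whether another batch is still worthwhile.
{deleted, _} = Pleroma.Repo.delete_all(batch_query)
IO.puts("Deleted #{deleted} rows")
```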
Most of the diff is just adjusting indentation; best reviewed
with whitespace-only changes hidden, e.g. `git diff -w`.
This part of pruning can be very expensive and bog down the whole
instance to an unusable state for a long time. It can thus be desirable
to split it from prune_objects and run it on its own in smaller, limited batches.
If the batches are small enough and spaced out a bit, it may even be possible
to avoid any downtime. If not, the limit can still help to at least make the
downtime duration somewhat more predictable.
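A rough sketch of what such a run could look like (the helper name,
batch size and pause are made up purely for illustration):

```elixir
# Illustrative only: rerun limited prune batches with a pause in between,
# stopping once a batch deletes fewer rows than the limit.
def prune_in_batches(limit \\ 10_000, pause_ms \\ 60_000) do
  # hypothetical helper performing one limited delete as sketched earlier
  {deleted, _} = prune_batch(limit)

  if deleted >= limit do
    Process.sleep(pause_ms)
    prune_in_batches(limit, pause_ms)
  else
    :ok
  end
end
```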
Using only the admin key works as well currently
and Akkoma needs to know the admin key to be able
to add new entries etc. However, the Meilisearch
key descriptions suggest the admin key is not
supposed to be used for searches, so let’s not.
For compatibility with existing configs, search_key remains optional.
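The resulting config shape is roughly the following (a sketch; option
names other than search_key are assumptions and should be checked
against the config cheatsheet):

```elixir
config :pleroma, Pleroma.Search.Meilisearch,
  url: "http://127.0.0.1:7700/",
  private_key: "<admin key, used for index maintenance>",
  # optional; used for search requests when set, otherwise the admin
  # key keeps being used as before
  search_key: "<search key>"
```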
And while at it, point to this via a top-level
FEDERATION.md document as standardised by FEP-67ff.
Also add a few missing descriptions to the config cheatsheet
and move the recently removed C2S extension into an appropriate
subsection.
Direct users to add the appropriate headers and update the listening
port instead of copy/pasting a config that's already outdated and
would probably otherwise have to be kept in sync with the main example
nginx config.
Since the configuration options on the nginx side already exist in the
sample config, there's no need to tell users to copy-paste those
settings in again.
Documentation was already clear on this only stripping GPS tags.
But there are more potentially sensitive metadata tags (e.g. author
and possibly description) and the name alone suggests a broader effect.
Thus change the filter to strip all metadata except for colourspace info
and orientation (technically it strips everything and then re-adds
selected tags).
Explicitly stripping CommonIFD0 is needed since -all does not modify
IFD0 due to TIFF storing some actual image data there. CommonIFD0 then
strips a bunch of commonly used actual metadata tags from IFD0, to my
understanding leaving TIFF image data and custom metadata tags intact.
Alongside moving to certbot's nginx plugin, also use conf.d instead of
recreating the sites-{available,enabled} setup that Debian/Ubuntu uses.
Furthermore, request a certificate for the media domain at the same
time since that's now required.
To save on bandwidth and avoid OOMs with large files.
Of course, this relies on the remote server
(a) sending a content-length header and
(b) being honest about the size.
Common fedi servers seem to provide the header, and (b) at least raises
the required privilege of a malicious actor to a server infrastructure
admin of an explicitly allowed host.
A more complete defense which still works when faced with
a malicious server requires changes in upstream Finch;
see https://github.com/sneako/finch/issues/224
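For illustration, the check amounts to something along these lines
(a simplified sketch, not the actual implementation):

```elixir
# Reject a remote file up front if its advertised Content-Length exceeds
# the configured maximum, instead of streaming it and finding out later.
defp check_content_length(headers, max_body) do
  with {_, value} <- List.keyfind(headers, "content-length", 0),
       {length, ""} <- Integer.parse(value),
       true <- length > max_body do
    {:error, :body_too_large}
  else
    _ -> :ok
  end
end
```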
Even more than with user uploads, a same-domain proxy setup bears
significant security risks due to serving untrusted content under
the main domain space.
A risky setup like that should never be the default.
This was actually already intended before, to eradicate all future
path-traversal-style exploits and to fix issues with some
characters like akkoma#610 in 0b2ec0ccee. However, Dedupe and
AnonymizeFilename got mixed up. The latter only anonymises the name
in Content-Disposition headers and (with link_name) GET parameters,
_not_ the upload path.
Even without Dedupe, the upload path is prefixed by a UUID,
so it _should_ already be hard to guess for attackers. But now
we can actually be sure no path shenanigans occur, uploads
work reliably, and we save some disk space.
While this makes the final path predictable, this prediction is
not exploitable. Insertion of a back-reference to the upload
itself requires pulling off a successful preimage attack against
SHA-256, which is deemed infeasible for the foreseeable future.
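Roughly, Dedupe derives the stored name from the content itself, so
"predicting" the path just means knowing the file (a simplified sketch;
`upload` and `extension` stand in for the upload being processed):

```elixir
# The stored filename is the SHA-256 digest of the file contents;
# forcing an upload onto a chosen path would thus require a preimage.
content = File.read!(upload.tempfile)
hash = :crypto.hash(:sha256, content) |> Base.encode16(case: :lower)
stored_name = hash <> "." <> extension
```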
Dedupe was already included in the default list in config.exs
since 28cfb2c37a, but this will get overridden by whatever the
config generated by the "pleroma.instance gen" task chose.
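In other words, a generated config should end up keeping Dedupe in its
filter list, along the lines of (any other configured filters omitted
here):

```elixir
config :pleroma, Pleroma.Upload,
  filters: [Pleroma.Upload.Filter.Dedupe]
```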
Upload+delete tests running in parallel using Dedupe might be flaky, but
this was already true before and needs its own commit to fix eventually.