Add limit CLI flags to prune jobs #655
The prune tasks can incur heavy database load and take a long time, grinding the instance to a halt for the entire duration. The main culprit behind this is pruning orphaned activities, but that part is also quite helpful, so just omitting it completely is not an option.
This patch series makes pruning of orphaned activities available as a standalone task (`prune_objects` still keeps its `--prune-orphaned-activities` flag), optimises the “activities referring to an array of objects” case and adds a couple of toggles to the new task to enable a more controlled and background-friendly cleanup if needed/desired.

This proved very useful for akko.wtf; without this a full prune run took several days during which the instance became unusable. Further down in this PR thread smitten also reported that unpatched pruning could OOM-kill the instance on smaller VPSes.
If your instance is (relative to your hardware) big enough for a single full prune to be problematic, a pruning session with the patches here could look like:
Code for `./batch_pruning.sh`:
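The script itself is collapsed in the original post; below is only a minimal sketch of what such a fixed-size batch loop can look like. It is not the original script, and the batch size, pause, timeout and the `pleroma_ctl` path are all assumptions to tune for your instance.

```sh
#!/bin/sh
# Sketch only: prune orphaned activities in fixed-size batches and stop
# once a single batch gets too slow. All values below are assumptions.
LIMIT=100000
MAX_SECONDS=900
PLEROMA_CTL=./bin/pleroma_ctl

while true; do
    start=$(date +%s)
    "$PLEROMA_CTL" database prune_orphaned_activities --limit "$LIMIT"
    elapsed=$(( $(date +%s) - start ))
    if [ "$elapsed" -gt "$MAX_SECONDS" ]; then
        echo "Batch took ${elapsed}s; stopping here."
        break
    fi
    # Interrupt manually once runs report no more deleted rows.
    sleep 60
done
```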
Alternative version with dynamic batch size
Since initially, when there are still many orphaned activities, things will go quicker, using a dynamic batch size will be more efficient. But again, this needs tweaking for the needs and capabilities of your specific instance and hardware setup. Don't go too crazy with the initial size though, else things will likely get bogged down or OOMed again.
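Again the actual script is collapsed in the original post; a minimal sketch of the dynamic idea (not the original script; all values and the `pleroma_ctl` path are assumptions):

```sh
#!/bin/sh
# Sketch of the dynamic variant: start with a large batch and halve the
# size whenever a batch gets too slow, giving up below a minimum size.
LIMIT=500000
MIN_LIMIT=10000
MAX_SECONDS=900
PLEROMA_CTL=./bin/pleroma_ctl

while [ "$LIMIT" -ge "$MIN_LIMIT" ]; do
    start=$(date +%s)
    "$PLEROMA_CTL" database prune_orphaned_activities --limit "$LIMIT"
    elapsed=$(( $(date +%s) - start ))
    if [ "$elapsed" -gt "$MAX_SECONDS" ]; then
        LIMIT=$(( LIMIT / 2 ))   # too slow: shrink the next batch
    fi
    sleep 60
done
```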
The problem with an unconstrained prune is that it will go through many millions of activities and objects, left-joins 4 tables which apart from `users` are all very large, and often many rows will be eligible rather than filtered out early. This obviously takes long to process and can lead to such a large data stream and so many queued-up changes in the delete transaction that, as it goes on, it bogs down everything else after a while.

Splitting it up into batches limits how much data is processed at once, thus avoiding the problem and allowing most or all eligible activities to be pruned in the background, or allowing pruning to work at all on weaker hardware. (Though after enough activities were cleared out by limited batches, an unconstrained query might become feasible again and the overhead of limited batches significant, so if you really want to clean out everything, consider whether it’s possible to switch to an unlimited prune at the end.)
Best reviewed commit by commit; as noted in the commit messages, many of the diff lines are just indentation adjustments, and for review it’s probably a good idea to hide whitespace-only changes.
Resolves #653; cc @norm
@ -56,0 +79,4 @@
### Options
- `--limit n` - Only delete up to `n` activities in each query. Running this task in limited batches can help maintain the instance’s responsiveness while still freeing up some space.
I'm a little confused about whether there's a difference in behavior between this and `prune_objects`.
"In each query" I would understand as limiting the database lock by performing smaller, limited delete operations.
For `prune_objects` it says "limits how many remote objects get pruned initially". What does "initially" mean here?
The task executes multiple DELETE queries on the database, and each of these queries will have the given limit applied. Currently it executes two queries, so running the task once with `--limit 100` will delete at most 200 rows.

It would be possible to limit the overall deleted rows to at most exactly the given amount, but this gives preferential treatment to the first queries, and since the purpose is just to limit the load and allow breaks in between, I figured this is not needed. But if there’s a reason to, this could be changed.
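Concretely, a single limited invocation (shown with an assumed OTP-release `pleroma_ctl` path; the equivalent `mix pleroma.database` call behaves the same):

```sh
# With two limited DELETE queries per run, this removes at most
# 2 * 100 = 200 orphaned activities in one invocation.
./bin/pleroma_ctl database prune_orphaned_activities --limit 100
```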
`prune_objects` first deletes remote posts, then (optionally, if such flags were passed) it runs more cleanup jobs. Only the initial prune is affected by the limit, not the cleanup, the reason being that except for `prune_orphaned_activities` those cleanup jobs are comparatively cheap anyway.

And `prune_orphaned_activities` now has its own task. So if you want to clean up some space while not continuously hogging the DB, you can first (repeatedly) run `prune_objects --limit n` without `--prune-orphaned-activities`, adding all other desired cleanups only on the last run. Then afterwards, repeatedly run the standalone `prune_orphaned_activities --limit n` for as long as a single run finishes fast enough; see the sketch below.

I pushed a new rebased version with tweaked documentation (and a typo in a commit message was fixed). Can you take a look if it’s clearer now?
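For illustration, the staged cleanup described above could be run like this (limits and the invocation path are assumptions to adapt):

```sh
# Free space in limited batches, leaving orphaned activities alone...
./bin/pleroma_ctl database prune_objects --limit 100000
./bin/pleroma_ctl database prune_objects --limit 100000
# ...and add any other desired cleanup flags only on the last run.

# Afterwards, prune orphaned activities with the standalone task,
# repeating as long as a single run finishes fast enough.
./bin/pleroma_ctl database prune_orphaned_activities --limit 100000
./bin/pleroma_ctl database prune_orphaned_activities --limit 100000
```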
I see what you mean, and the docs updates are clearer, thanks! The steps you describe are how I was running it: I did a few `prune_objects` runs and then did a few `prune_orphaned_activities` runs.

This seems to be working for me! Usually pruning makes the RAM fill up on my small VPS and the instance crashes, but this is running well.
80ba73839c to 3bc63afbe0
3bc63afbe0 to 732bc96493
732bc96493 to 800acfa81d
Rebased this with two updates:

- The `IO.puts` is no longer needed and has been dropped. This change also slightly confused the script from the comments; I updated it to work with the new output and made it a bit more robust wrt ordering.
- `prune_orphaned_activities` is now used in one of the orphan-pruning tests. Since both modes use the same function and the only difference is the argument parser, I figured it wasn’t worth duplicating the test setup and instead switched one of the two orphan tests to the standalone task.

Also, just because, here’s an alternative version of the script which tries to scale the batch size down between some max and min value instead of immediately ceasing the prune. It may be more convenient in some cases, though too low min values probably don’t make much sense (and as before, times and batch sizes need tweaking for real instances).
afa01cb8dd to 790b552030
790b552030 to c127d48308
Rebased again, added some further changes and updated the initial post for the current state:

The query for activities referring to an array of objects now filters on the `Flag` type, and typically there are only few of those. Using the type in the query lets it use our type index instead of scanning the entire table and probing types for every entry, speeding things up greatly (for single-object activities this wouldn’t help much, if at all).
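If you want to verify this on your own instance, a quick look at the query plan should show whether the index gets used; the database name and direct psql access are assumptions here:

```sh
# With the type restriction, Postgres can use the index on the activity
# type instead of sequentially scanning and probing every row.
psql -d pleroma -c "EXPLAIN SELECT id FROM activities WHERE data->>'type' = 'Flag';"
```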
seems sensible, a few comments (they are nits)
also fails lint, so a quick format after the typo fixes would be appreciated
@ -23,0 +43,4 @@
delete from public.activities
where id in (
select a.id from public.activities a
left join public.objects o on a.data ->> 'object' = o.data ->> 'id'
you almost certainly don't need to prefix with `public` - we'll be running in the akkoma db anyhow

the `public` prefix already existed before, and in a different context I’ve run into issues with it lacking before (`update_status_visibility_counter_cache` is the only trigger function which doesn't use fully qualified names with schema prefixes. Evidently this works fine during normal instance operation, but during a data-only backup restore this caused failures. Side note: this trigger is imho ridiculously complex and costly relative to the usefulness of the feature it provides (per-instance and per-visibility post count stats in admin-fe)).

i can check, and if it seems to work for me, remove the prefix everywhere if you want though
@ -23,0 +56,4 @@
"""
|> Repo.query!([], timeout: :infinity)
Logger.info("Prune activity singles: deteleted #{del_single} rows...")
typo, deteleted -> deleted
fixed
@ -23,0 +65,4 @@
"""
delete from public.activities
where id in (
select a.id from public.activities a
same with the public prefix here
@ -23,0 +80,4 @@
"""
|> Repo.query!([], timeout: :infinity)
Logger.info("Prune activity arrays: deteleted #{del_array} rows...")
typo here
fixed
@ -23,0 +91,4 @@
# Flag is the only type we support with an array (and always has arrays).
# Update the only one with inlined objects, but old Update activities are
#
# We already regularly purge old Delte, Undo, Update and Remove and if
Delte -> delete
fixed
One question ahead of fixing typos and lint:

Initially (during v1) there was some confusion about if/when `Logger` calls actually get shown to users; since it (now) works for me and existing database tasks use only `Logger`, I stuck with that.

However, I’m still not actually sure whether this reliably shows up, and looking at other tasks, both `Logger` and `IO` are used, with the latter seemingly being more popular. On current develop (8afc3bee7a):

Any opinion or guidance on what to prefer?
logger depends on the log level configured by the user, so if they've set :warn, it'll not show :info level logs - so for user-initiated tasks, IO is probably better
iirc logs shown during `mix` tasks didn’t seem to correlate to the regular logger level configured for normal instance operation. But given they didn’t before and now show up for me, there’s clearly some setting involved (or maybe it’s the same setting but it somehow didn’t get updated during initial recompiles, idk).

Will convert things to `IO` (and remove the superfluous `require Logger` directives).
directives)c127d48308
to9e80ebb8d5
ok, there’s yet another way in use for putting text out: `shell_info` and `shell_error` from `lib/mix/pleroma.ex`. If running inside a mix shell, they output things to the mix shell, else via `IO.puts(msg)` or `IO.puts(:stderr, msg)`. i guess it’s probably best to just use this everywhere?
hmmm, doing so breaks two `test/mix/tasks/pleroma/user_test.exs` tests; apparently their output isn't captured anymore, presumably going to the mix shell (though i didn’t spot them in console output).

Pushed an additional commit converting all but those two prints and some intentionally debug-only messages in `uploads` to `shell_*` calls. Alternatively, changing the `shell_*` helpers to always use `IO.puts` when running in the test env presumably also works, but idk where the mix.shell vs IO.puts distinction is relevant to begin with.

746fdd87b6 to a9d812ad7e
a9d812ad7e to 51a7d74971
tests were easy enough to fix; everything which doesn't have a reason to use something else is now using `shell_*` for printing

51a7d74971 to bed7ff8e89
everything passes, i should finally merge this
thanks a lot