Add limit CLI flags to prune jobs #655

Open
Oneric wants to merge 10 commits from Oneric/akkoma:prune-batch into develop
Member

The prune tasks can incur heavy database load and take a long time, grinding the instance to a halt for the entire duration. The main culprit is pruning orphaned activities, but that step is also quite helpful, so simply omitting it is not an option.

This patch series makes pruning of orphaned activities available as a standalone task (prune_objects still keeps its --prune-orphaned-activities flag), optimises the “activities referring to an array of objects” case and adds a couple of toggles to the new task to enable a more controlled and background-friendly cleanup if needed/desired.

This proved very useful for akko.wtf; without it a full prune run took several days, during which the instance became unusable. Further down in this PR thread smitten also reported that unpatched pruning could OOM-kill the instance on a smaller VPS.

If your instance is (relative to your hardware) big enough for a single full prune to be problematic, a pruning session with the patches here could look like:

# add/remove whatever flags you like here as long as
#  --prune-orphaned-activities  and --vacuum are omitted.
# (_IF_ this is already struggling, try also omitting --keep-threads
#  or adding the newly added --limit here and running it multiple times)
mix pleroma.database prune_objects --keep-threads

# with patches, cleaning out array activities shouldn’t take
# too many resources and thus shouldn’t need a limit
mix pleroma.database prune_orphaned_activities --no-singles

# script repeatedly running
#  mix pleroma.database prune_orphaned_activities --no-arrays --limit XXX
# as long as it completes within a set timeframe; see below
./batch_pruning.sh

Code for ./batch_pruning.sh:

#!/bin/sh

# Tweak this for your own setup!
# Values tested for a ~70 monthly active users instance
# hosted on a 4 vCPU (Ryzen 7 2700X) 8GB RAM VM
YIELD=120
BATCH_SIZE=150000
BATCH_MAX_TIME=500

while : ; do
    start="$(date +%s)"
    out="$( \
        mix pleroma.database prune_orphaned_activities --no-arrays --limit "$BATCH_SIZE" \
        | grep -E '(^|\] )Deleted ' \
        | tail -n 1 \
    )"
    end="$(date +%s)"
    duration="$((end - start))"
    echo "$out"

    if echo "$out" | grep -qE 'Deleted 0 rows$' ; then
        echo "Nothing more to delete."
        break
    elif echo "$out" | grep -qE 'Deleted [0-9]+ rows$' ;
        :
    else
        echo "Unexpected status report, abort! Expected count of total deleted rows, got:" >&2
        echo "    $out" >&2
        exit 2
    fi

    if [ "$duration" -gt "$BATCH_MAX_TIME" ] ; then
        echo "Completion of single batch takes too long ($duration > $BATCH_MAX_TIME)" >&2
        echo "Abort further batches to not bog down the instance!" >&2
        exit 1
    fi
    sleep "$YIELD"
done
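If you want the batches to keep running after you log out, something like the following works. The path, user and MIX_ENV here are assumptions for a typical from-source install (OTP-release installs invoke the task differently), so adjust them to your setup and preferably run this inside tmux or screen:

# Sketch only: path, user and MIX_ENV are assumptions for a from-source install;
# run inside tmux/screen so the batches survive your SSH session ending
cd /opt/akkoma
sudo -Hu akkoma env MIX_ENV=prod ./batch_pruning.sh 2>&1 | tee -a prune.log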
Alternative version with dynamic batch size

Since things go quicker initially, while there are still many orphaned activities, a dynamic batch size is more efficient. But again, this needs tweaking for the needs and capabilities of your specific instance and hardware setup. Don't go too crazy with the initial size though, else things will likely get bogged down or OOMed again.

#!/bin/sh

YIELD=120
BATCH_SIZE_MAX=250000
BATCH_SIZE_MIN=100000
BATCH_MAX_TIME=300

set -eu

# params: cur_batch_time cur_batch_size
# returns: new_batch_size (0 if constraints cannot be met; otherwise valid)
lower_batch_size() {
    # Intentional rounding imprecision to facilitate going _below_ max time
    div="$(( ($1 + BATCH_MAX_TIME - 1) / BATCH_MAX_TIME ))"
    newbatch="$(($2 / div))"
    if [ "$newbatch" -lt "$BATCH_SIZE_MIN" ] ; then
        newbatch=0
    fi
    echo "$newbatch"
}

BATCH_SIZE="$BATCH_SIZE_MAX"
echo "Starting with batch size $BATCH_SIZE"
while : ; do
    start="$(date +%s)"
    out="$( \
        mix pleroma.database prune_orphaned_activities --no-arrays --limit "$BATCH_SIZE" \
        | grep -E '(^|\] )Deleted ' \
    )"
    end="$(date +%s)"
    duration="$((end - start))"
    echo "$out"

    if echo "$out" | tail -n 1 | grep -qE 'Deleted 0 rows$' ; then
        echo "Nothing more to delete."
        break
    elif echo "$out" | grep -qE 'Deleted [0-9]+ rows$' ; then
        :
    else
        echo "Unexpected status report, abort! Expected count of total deleted rows, got:" >&2
        echo "    $out" >&2
        exit 2
    fi

    if [ "$duration" -gt "$BATCH_MAX_TIME" ] ; then
        echo "Completion of single batch takes too long ($duration > $BATCH_MAX_TIME)" >&2
        BATCH_SIZE="$(lower_batch_size "$duration" "$BATCH_SIZE")"
        if [ "$BATCH_SIZE" -gt 0 ] ; then
            echo "Try lowering batch size to $BATCH_SIZE..."
        else
            echo "Cannot lower batch size further. Abort to not bog down instance!" >&2
            exit 1
        fi
    fi
    sleep "$YIELD"
done
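For a feel of how lower_batch_size scales things down (numbers are hypothetical): a 250000-row batch that took 450s against a 300s limit gives div = 2, so the next batch would be 125000 rows; anything that would drop below BATCH_SIZE_MIN returns 0 and the script aborts instead. A standalone check of the arithmetic:

# Standalone illustration of the scaling arithmetic (values are made up)
BATCH_MAX_TIME=300 ; BATCH_SIZE_MIN=100000
duration=450 ; batch=250000
div="$(( (duration + BATCH_MAX_TIME - 1) / BATCH_MAX_TIME ))"   # ceiling division: 2
echo "$(( batch / div ))"   # 125000 — still above BATCH_SIZE_MIN, so pruning continues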

The problem with an unconstrained prune is that it goes through many millions of activities and objects, left-joining 4 tables which, apart from users, are all very large, and often many rows are actually eligible for deletion rather than being filtered out early. This takes a long time to process, and the delete transaction can accumulate such a large data stream and so many queued-up changes that, as it goes on, it bogs down everything else.
Splitting it up into batches limits how much data is processed at once, avoiding the problem and allowing most or all eligible activities to be pruned in the background, or allowing the prune to work at all on weaker hardware. (Though after enough activities have been cleared out by limited batches, an unconstrained query might become feasible again and the overhead of limited batches significant, so if you really want to clean out everything, consider whether it’s possible to switch to an unlimited prune at the end.)
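Also keep in mind that deleting rows alone doesn’t hand disk space back to the OS; if reclaiming space is the goal, a (full) vacuum is still needed once all batches are done. A sketch, assuming the existing vacuum task is available on your install and that you can afford its table locks and temporary extra disk usage:

# Only run this after all pruning batches have finished; a full vacuum rewrites
# the tables, locks them while doing so and temporarily needs extra disk space
mix pleroma.database vacuum full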

Best reviewed commit by commit; as noted in the commit messages, many of the diff lines are just indentation adjustments, so for review it’s probably a good idea to hide whitespace-only changes.

Resolves #653 ; cc @norm

smitten reviewed 2023-12-23 22:07:23 +00:00
@ -56,0 +79,4 @@
### Options
- `--limit n` - Only delete up to `n` activities in each query. Running this task in limited batches can help maintain the instances responsiveness while still freeing up some space.
First-time contributor

I'm a little confused about whether there's a difference in behavior between this and prune_objects.

"in each query" I would understand as limiting the database lock by having smaller limit delete operations.

For prune_objects it says "limits how many remote objects get pruned initially". What does initially mean here?

Author
Member

"in each query" I would understand as limiting the database lock by having smaller limit delete operations.

The task executes multiple DELETE queries on the database, and each of these queries has the given limit applied. Currently it executes two queries, so running the task once with --limit 100 will delete at most 200 rows.
It would be possible to limit the overall deleted rows to at most exactly the given amount, but that gives preferential treatment to the first queries, and since the purpose is just to limit the load and allow breaks in between, I figured this is not needed. But if there’s a reason to, this could be changed.

For prune_objects it says "limits how many remote objects get pruned initially". What does initially mean here?

prune_objects first deletes remote posts, then (optionally, if such flags were passed) runs more cleanup jobs. Only the initial prune is affected by the limit, not the cleanup jobs. The reason is that, except for prune_orphaned_activities, those cleanup jobs are comparatively cheap anyway.
And prune_orphaned_activities now has its own task. So if you want to clean up some space while not continuously hogging the DB, you can first (repeatedly) run prune_objects --limit n without --prune-orphaned-activities, adding all other desired cleanups only in the last run. Afterwards, repeatedly run the standalone prune_orphaned_activities --limit n for as long as a single run finishes fast enough.

I pushed a new rebased version with tweaked documentation (and a typo in a commit message was fixed). Can you take a look if it’s clearer now?
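To make the limit semantics and the suggested order concrete (numbers are just examples):

# Each of the (currently two) DELETE queries gets the limit applied separately,
# so a single run can remove up to 2 * n rows in total
mix pleroma.database prune_orphaned_activities --limit 100   # at most 200 rows

# Two-phase cleanup: repeat the limited object prune first (add other cleanup
# flags only on the final run), then batch the orphaned activities afterwards
mix pleroma.database prune_objects --limit 100000
mix pleroma.database prune_orphaned_activities --limit 100000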

> "in each query" I would understand as limiting the database lock by having smaller limit delete operations. The task executes multiple DELETE queries on the database, each of these queries will have the given limit applied. Currently it executes two queries, so running the task once wiht `--limit 100` will delete at most 200 rows. It would be possible to limit the overall deleted rows to at most exactly the given amount, but this gives preferential treatment to the first queries and since the purpose is just to limit the load and allow breaks inbetween, I figured this is not needed. But if there’s a reason to, this could be changed. > For prune_objects it says "limits how many remote objects get pruned initially". What does initially mean here? `prune_objects` first deletes remote posts, then (optionally, if such flags were passed) it will run more cleanup jobs. Only the initial prune is affected by the limit the cleanup not. Reason being, that except for `prune_orphaned_activities` those cleanup jobs are comparatively cheap anyway. And `prune_orphaned_activities` now has its own task. So if you want to cleanup some space, while not continuously hogging the db, you can first (repeatedly) run `prune_objects --limit n` *without* `--prune-orphaned-activities`, but all other desired cleanups in the last run. Then afterwards, repeatedly run the standalone `prune_orphaned_activities --limit n` as long as a single run finishes fast enough. I pushed a new rebased version with tweaked documentation (and a typo in a commit message was fixed). Can you take a look if it’s clearer now?
First-time contributor

I see what you mean, and the docs updates are clearer, thanks! The steps you describe are how I was running it: I did a few prune_objects runs and then a few prune_orphaned_activities runs.

Oneric marked this conversation as resolved
First-time contributor

This seems to be working for me! Usually pruning makes the RAM fill up on my small VPS and the instance crashes, but this is running well.

Oneric force-pushed prune-batch from 80ba73839c to 3bc63afbe0 2023-12-24 23:18:28 +00:00 Compare
Oneric force-pushed prune-batch from 3bc63afbe0 to 732bc96493 2024-01-31 16:45:44 +00:00 Compare
Oneric force-pushed prune-batch from 732bc96493 to 800acfa81d 2024-02-10 01:54:09 +00:00 Compare
Author
Member

Rebased this with two updates:

  1. The logger output now shows up in stdout for me, so duplicating it with IO.puts is no longer needed and has been dropped.
    This change also slightly confused the script from the comments; I updated it to work with the new output and made it a bit more robust with respect to ordering.
  2. Standalone prune_orphaned_activities is now used in one of the orphan-pruning tests. Since both modes use the same function and the only difference is the argument parser, I figured it wasn’t worth duplicating the test setup and instead switched one of the two orphan tests to the standalone task.

Also, just because, here’s an alternative version of the script which tries to scale the batch size down between some max and min value instead of immediately ceasing the prune. It may be more convenient in some cases, though too low min values probably don’t make much sense (and as before, times and batch sizes need tweaking for real instances).

#!/bin/sh

YIELD=120
BATCH_SIZE_MAX=200000
BATCH_SIZE_MIN=150000
BATCH_MAX_TIME=300

set -eu

# params: cur_batch_time cur_batch_size
# returns: new_batch_size (0 if constraints cannot be met; otherwise valid)
lower_batch_size() {
    # Intentional rounding imprecision to facilitate going _below_ max time
    div="$(( ($1 + BATCH_MAX_TIME - 1) / BATCH_MAX_TIME ))"
    newbatch="$(($2 / div))"
    if [ "$newbatch" -lt "$BATCH_SIZE_MIN" ] ; then
        newbatch=0
    fi
    echo "$newbatch"
}

BATCH_SIZE="$BATCH_SIZE_MAX"
echo "Starting with batch size $BATCH_SIZE"
while : ; do
    start="$(date +%s)"
    out="$( \
        mix pleroma.database prune_orphaned_activities --limit "$BATCH_SIZE" \
        | grep -E ' \[info\] Deleted ' \
    )"
    end="$(date +%s)"
	duration="$((end - start))"
	echo "$out"

	if echo "$out" | tail -n 1 | grep -qE 'Deleted 0 rows$' ; then
		echo "Nothing more to delete."
		break
	fi
	if [ "$duration" -gt "$BATCH_MAX_TIME" ] ; then
		echo "Completion of single batch takes too long ($duration > $BATCH_MAX_TIME)" >&2
        BATCH_SIZE="$(lower_batch_size "$duration" "$BATCH_SIZE")"
        if [ "$BATCH_SIZE" -gt 0 ] ; then
            echo "Try lowering batch size to $BATCH_SIZE..."
        else
    		echo "Cannot lower batch size further. Abort to not bog down instance!" >&2
		    exit 1
        fi
	fi
	sleep "$YIELD"
done
Oneric force-pushed prune-batch from afa01cb8dd to 790b552030 2024-02-19 18:36:12 +00:00 Compare
Oneric force-pushed prune-batch from 790b552030 to c127d48308 2024-05-15 01:46:32 +00:00 Compare
Author
Member

Rebased again, added some further changes and updated the initial post for the current state:

  • allow pruning array-object and single-object activities separately. Only the latter needs batching, and re-checking the former in each batch just adds useless overhead
  • we only have a single activity type which can reference an array of objects, Flag, and typically there are only a few of those. Using the type in the query lets it use our type index instead of scanning the entire table and probing the type of every entry, speeding things up greatly; see the illustration below
    (for single-object activities this wouldn’t help much, if at all)
  • more logs documenting how pruning progresses; this makes what is happening (and whether anything is happening at all) less opaque to admins and hopefully makes long-running prunes less frustrating. In any event it makes it easier to tell which part of the process got stalled and which parts are most effective
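As an illustration of why the type filter helps (the command and database name below are assumptions for a typical setup, not part of this PR), you can count how few Flag activities your own instance has compared to the overall activities table:

# Hypothetical check; database name and connection details vary per setup
sudo -Hu postgres psql -d akkoma -c \
    "SELECT count(*) FROM activities WHERE data->>'type' = 'Flag';"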