Handle failed fetches a bit better #743
Pulls most of https://git.pleroma.social/pleroma/pleroma/-/merge_requests/4015 and adapts it to the various small changes we've made to this code.

Gonna document issues as I see them.
User fetch validation can cause an Oban `:error`; so can ID collisions.
ah right, it probably doesn't hit the pattern match in `RemoteFetcherWorker`
yep, Oban wants `{:error, _info}`, but for everything not explicitly matched, the last catch-all just returns `:error`.
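For illustration, a minimal sketch of the failure mode (hypothetical module and fetch stub, not the actual Akkoma code): known results are matched explicitly, and everything else falls through to a catch-all that returns a bare `:error` instead of a proper `{:error, reason}` tuple.

```elixir
defmodule MyApp.Workers.RemoteFetcherSketch do
  use Oban.Worker, queue: :remote_fetcher

  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"op" => "fetch_remote", "id" => id}}) do
    case fetch_object(id) do
      {:ok, _object} ->
        :ok

      {:error, :forbidden} ->
        # an explicitly matched failure can be sorted into a discard
        {:discard, :forbidden}

      _ ->
        # everything else (e.g. validation failures, ID collisions) lands
        # here and returns a bare :error, which is not one of the return
        # shapes Oban knows how to handle
        :error
    end
  end

  # stand-in for the real fetch; always fails so the catch-all is reachable
  defp fetch_object(_id), do: {:error, :not_found}
end
```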
That oversight was fixed up on Pleroma's side with https://git.pleroma.social/pleroma/pleroma/-/merge_requests/4077.

Also a question: the ported changes make an effort to sort the job into `:discard` instead of `:error` to avoid retries, but AFAICT all `remote_fetcher` jobs currently (by default) only get a single attempt anyway, so no retries should occur in the first place? This is based on `RemoteFetcherWorker` using `WorkerHelper`, which sets the queue's default `max_attempts` to 1 and overrides this for enqueued jobs based on a config value if present, but IINM there's no default config value for `remote_fetcher`.
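For illustration, assuming the config shape `WorkerHelper` reads retry counts from, making `remote_fetcher` retryable would need something like the following (hypothetical value, not a shipped default):

```elixir
# config.exs — with this set, WorkerHelper would enqueue remote_fetcher
# jobs with max_attempts: 5 instead of the queue default of 1
config :pleroma, :workers,
  retries: [
    remote_fetcher: 5
  ]
```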
Oban docs say that by default job uniqueness is checked across all states except `:discarded` and `:cancelled`. If jobs were retryable and we now discard them rather than exhausting all attempts with a backoff, won't this in theory allow bad jobs to be reattempted via insert faster than they previously would have been with a backoff?

(the changes are still good to have, just checking I didn't misunderstand something here)
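For reference, a generic Oban example (not Akkoma's `WorkerHelper`) of the uniqueness window being discussed: with the defaults, a discarded job no longer blocks re-insertion of an identical job.

```elixir
defmodule MyApp.Workers.UniqueFetcherSketch do
  # Oban's default unique states are [:available, :scheduled, :executing,
  # :retryable, :completed] — once a job is :discarded or :cancelled it no
  # longer counts as a duplicate, so an identical insert goes through.
  # Passing Oban.Job.states() covers every state, including :discarded.
  use Oban.Worker,
    queue: :remote_fetcher,
    unique: [period: 300, states: Oban.Job.states()]

  @impl Oban.Worker
  def perform(%Oban.Job{}), do: :ok
end
```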
oh, this may actually be an issue - we don't actually enable the `unique` check for any worker 🥴

maybe that should also get enabled as part of this, i'll make sure it works
marinating on IHBA to see if this breaks, prayge it does not
```diff
@@ -8,1 +8,3 @@
-use Pleroma.Workers.WorkerHelper, queue: "remote_fetcher"
+use Pleroma.Workers.WorkerHelper,
+  queue: "remote_fetcher",
+  unique: [period: 300, states: Oban.Job.states()]
```
Multiple fetches of the same AP id can still occur if the `depth` arg differs; setting `keys` to only consider `op` and `id` should avoid this (see the sketch below).

Has been running on IHBA with no noticeable negative effects.
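A sketch of that `keys` tweak (assuming `WorkerHelper` passes the `unique` option through to Oban unchanged, as in the diff above):

```elixir
use Pleroma.Workers.WorkerHelper,
  queue: "remote_fetcher",
  # keys restricts the uniqueness comparison to these args, so jobs that
  # differ only in depth still count as duplicates
  unique: [period: 300, states: Oban.Job.states(), keys: [:op, :id]]
```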