A lot of errors when migrating from Pleroma #215
Hi,
I get the following PostgreSQL error when trying to migrate from Pleroma to Akkoma
I hope you can help me with that.
~Leonie
Ah, this could occur if migrating from Pleroma develop, where you'd already applied the removal-of-mastofe patch
Should be fixed via #216
if you're running from source, git pull and try again
if you're on OTP, wait for https://ci.akkoma.dev/AkkomaGang/akkoma/build/930 to complete, then update and try again
This didn't fix anything; now I get these fun messages: http://content.koyu.space/1MPAN
ok so it actually did fix your error since these are totally different
sounds like you've got your pool target set really low - it should never be that low by default
you'll want to increase it by changing queue_target, like so:
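(The original snippet isn't preserved in this thread; a minimal sketch of the repo config this refers to, assuming a from-source install editing config/prod.secret.exs, with illustrative values rather than recommendations:)

config :pleroma, Pleroma.Repo,
  # milliseconds a query may wait for a pool connection before the pool
  # starts shedding load; the Ecto default of 50 ms is very aggressive
  queue_target: 5_000,
  queue_interval: 1_000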
Getting this beauty here:
Oct 05 11:35:12 koyu.space mix[2454618]: 11:35:12.038 [info] Postgrex.Protocol (#PID<0.5170.0>) disconnected: ** (DBConnection.ConnectionError) client #PID<0.6331.0> exited
that alone doesn't give an awful lot to go on, there should be more logs above it that indicate what caused the exit
Is this a little better? http://content.koyu.space/A6tGK
hm, that doesn't look fatal - does the instance die or does it recover after doing that?
It loads the UI and responds to requests very slowly
hm, that sounds like you might have too big of a database for your system
have you done a vacuum/pg_repack to remove stuff?
additionally, how long is your remote object retention? that may be causing object table inflation
I have no idea what those two are. I also store configuration in the database, so how do I check that if I've blown off half of the server?
vacuum - https://docs.akkoma.dev/stable/administration/CLI_tasks/database/#prune-old-remote-posts-from-the-database
pg_repack - https://github.com/reorg/pg_repack
show options in the database - https://docs.akkoma.dev/stable/administration/CLI_tasks/config/#dump-all-of-the-config-settings-defined-in-the-database
There is no object retention configuration. Should I try configuring it or assume it has some sort of default?
Doing a pg_repack does this:
ERROR: pg_repack failed with error: pg_repack 1.4.7 is not installed in the database
it'll have a default then; you should just be able to run the prune command from docs.akkoma and it'll remove stuff
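(For reference, the prune task from the linked docs is roughly the following; the from-source form assumes you run it as the akkoma user in the install directory:)

# from source
MIX_ENV=prod mix pleroma.database prune_objects
# OTP release equivalent
./bin/pleroma_ctl database prune_objects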
This is getting more interesting now after pruning
ok, that's fine, that error isn't fatal at all, just a refetch
you can safely ignore it
How about this? http://content.koyu.space/XIXb4
The server is still super slow
that's an interesting one, seems it doesn't like some of your config
that sounds like you may have an outdated schema in your config
can you add
in your config?
if you run from source, does your config/config.exs match the one in our version control?
Now it tries to eat itself: http://content.koyu.space/Mj6z8
Copied config from version control and added the config flag you mentioned
nice! that means we're past the worst of it, we've got standard inbound requests and cron activating
looks like you might have some long-running requests
when the server is up and the timeouts are occurring, run the following on your db and see if it throws anything interesting:
SELECT * FROM pg_stat_activity;
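(If the raw output is too noisy, a slightly more targeted variant using only standard pg_stat_activity columns, nothing Akkoma-specific, sorts the active queries by how long they have been running:)

-- longest-running non-idle queries first
SELECT pid, now() - query_start AS runtime, state, left(query, 80) AS query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC;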
These are the first 20 seconds or so, and it's clogging up with queries. This looks awful, I guess.
yeah ok that's what i'd expect to see in this case
what sort of size box are you running this on? does the IO max out?
It's getting better, but it's still very slow. Some requests don't even get through now. I'm running koyu.space on a KVM VPS with the following specs:
10GB RAM
4 Cores (AMD EPYC)
200GB SSD
you may just have a very large backlog of tasks that is slowly processing
try leaving it online for an hour or so and see if it improves
What I find ironic is that the timelines load super slow, but the rest like config etc. loads as it should
Yes, it seems to have synced up, but timelines still load slow
I'll let it sink in a little longer. Will report back.
other things that can cause slow timelines include having thread containment turned on, so check if that's on
it's off by default
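(If the setting was carried over from Pleroma, this should be the skip_thread_containment flag under :instance, assuming Akkoma kept the same key; containment is only active when it is set to false:)

config :pleroma, :instance,
  # true (the default) disables thread containment entirely
  skip_thread_containment: true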
Removing masto-fe related settings from the database made it kinda faster 🤔
I'm also getting these from time to time
And the thread containment setting does nothing
hm
well keep it off anyhow
https://pgtune.leopard.in.ua/ might be of use; maybe your DB isn't using as much of your hardware as it could
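(For a 10 GB / 4-core box, pgtune's "web application" preset lands somewhere around the following postgresql.conf values; the numbers are illustrative only, so run the tool against your own specs:)

shared_buffers = 2560MB
effective_cache_size = 7680MB
maintenance_work_mem = 640MB
work_mem = 8MB
random_page_cost = 1.1
effective_io_concurrency = 200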
This is also doing nothing, it's a real hard one
check your running queries again - is there a specific one that's taking a long time?
This one has been trying to do something since startup:
none of that is particularly unusual
you could try turning on debug logging and seeing if you get anything?
you might also consider checking iotop to see if your IO is doing anything untoward
iotop is safe, so I don't think it's disk IO. How do I enable debug logging?
there's a bunch of log-level stuff in the config
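(A minimal sketch of what that looks like, assuming the stock :console backend; restart the service afterwards:)

# global Logger level
config :logger, level: :debug
# or, if only the console backend should be verbose
config :logger, :console, level: :debug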
also, in case you didn't run it earlier
https://www.postgresql.org/docs/current/sql-vacuum.html
this may help
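(Run from psql; the parenthesised options are plain PostgreSQL, nothing Akkoma-specific:)

-- reclaim dead tuples and refresh planner statistics
VACUUM (VERBOSE, ANALYZE);

(VACUUM FULL reclaims more space but takes an exclusive lock on each table, so it's not something to run casually on a live instance.)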
I did an SQL vacuum and tried debug logging, but there's nothing of value. That's tough.
you can also try the pg_repack thing
you'll need to run
CREATE EXTENSION pg_repack
on your database before you run it (that's why you ran into the "not installed" thing)
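(Something along these lines, with the database name being whatever your instance actually uses; "akkoma" here is an assumption:)

# after CREATE EXTENSION pg_repack has been run in that database,
# repack it from the shell
pg_repack --dbname=akkoma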
pg_repack returned
ERROR: query failed: ERROR: could not create unique index "index_371024"
and the server is still slow
I might have found the issue. Running a vacuum using mix takes a long time to finish and has a very high IO load.
Running a vacuum didn't improve performance though. I had high hopes.
Now I'm getting these funny things with a few requests:
This is very interesting. I'm occasionally getting
Oct 06 19:28:24 koyu.space mix[622149]: 19:28:24.173 [notice] :alarm_handler: {:clear, :system_memory_high_watermark}
even though I still have 6.9 GB (nice) available. Why is it not using my entire RAM to work with? The timeout on static assets went away once I restarted the PostgreSQL server.
hold up, your static assets were timing out?
that would heavily indicate that your server does not have the disk throughput to run a database
please benchmark your disk to ensure it has the read and write speeds necessary to comfortably run a database
I also tried regenerating the entire config. That didn't help.
How do I do that? I mean Pleroma and bloaty Mastodon worked before.
use hdparm and dd
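(A quick-and-dirty sketch; /dev/sda is a placeholder device name, so adjust it for your VPS and run the dd test on the filesystem that holds PostgreSQL's data directory:)

# cached and raw sequential read speeds
hdparm -Tt /dev/sda
# ~1 GB sequential write, bypassing the page cache
dd if=/dev/zero of=./ddtest bs=1M count=1024 oflag=direct
rm ./ddtest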
that really should be sufficient
but I've given you all the resources I can, there's very little else I can do remotely to diagnose
So I figured out that the whole database got corrupted. Rebuilding the index resulted in the whole database being yeeted.
if you've got a backup somewhere it should still be ok
Well, I have a backup, but I can't restore it when the database index is corrupt
Had the same issue; it was indeed related to database corruption.
I was able to restore a working state by:
(in psql, connected to your akkoma database)
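(The exact statements aren't preserved in this thread; a plausible sequence for the index-corruption case, assuming the table data itself is intact and "akkoma" as the database name, would be along the lines of:)

-- rebuild every index in the current database
REINDEX DATABASE akkoma;
-- then refresh statistics
VACUUM ANALYZE;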
Took some time, but now the random timeouts are fixed!
PS: your database will be busy during those operations, so warn your users / close the service for some time.