Change nginx cache size to 1 GiB #759

Merged
floatingghost merged 1 commit from norm/akkoma:nginx-cache-size into develop 2024-04-26 17:40:23 +00:00
Contributor

The current 10 GiB cache size is too large to fit into tmpfs for VMs and
other machines with smaller RAM sizes. Most non-Debian distros mount
/tmp on tmpfs.

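For context, the value being changed is the `max_size` parameter of nginx's `proxy_cache_path` directive in the example config. A minimal sketch of the new recommendation (cache path and zone name follow the stock example config and may differ in your install):

```nginx
# Cap the media cache at 1 GiB so it still fits when /tmp is a RAM-backed tmpfs,
# as it is on most non-Debian distros. The other parameters are illustrative.
proxy_cache_path /tmp/akkoma-media-cache levels=1:2 keys_zone=akkoma_media_cache:10m
                 max_size=1g inactive=720m use_temp_path=off;
```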
norm added 1 commit 2024-04-26 05:48:07 +00:00
72c2d9f009
Change nginx cache size to 1 GiB
The current 10 GiB cache size is too large to fit into tmpfs for VMs and
other machines with smaller RAM sizes. Most non-Debian distros mount
/tmp on tmpfs.
floatingghost merged commit 310c1b7e24 into develop 2024-04-26 17:40:23 +00:00
floatingghost deleted branch nginx-cache-size 2024-04-26 17:40:23 +00:00
Member

Btw, does it actually make sense to use an nginx cache for local media?

For caching proxied media, I would guess using a large cache on disk is preferable to a small cache in RAM.
For local media a disk cache doesn't make any sense (it’s already on disk), and a RAM cache seems at first thought a bit redundant with the OS-level page cache too, though perhaps effectively reserving some chunk of RAM for recent media is actually helpful?

Member

Ok, so [originally](https://akkoma.dev/AkkomaGang/akkoma/commit/d1806ec07f44b617769bc862048df30b8a3336da) only `/proxy` used the nginx cache, but [cache was added to `/media`](https://git.pleroma.social/pleroma/pleroma/-/merge_requests/470/diffs#f2305ecba6d3f3630a82a1c370b678d024944f10_0_62) during a restructure(?), after which `/media` itself might also proxy requests when using a non-local uploader.

This proxy behaviour was [removed in Akkoma](https://akkoma.dev/AkkomaGang/akkoma/commit/364b6969eb7c79e57ed02345ddff4f48519e6b0a#diff-384b228d69d9187f683d4004d86ee5be03a37a39) in favour of always redirecting, so I think we should just stop recommending the nginx cache for our own uploads.

Redirecting makes more sense to me and avoids wasting cache on local files. If we want to reintroduce proxying for old-upload-compat routes, it should just redirect to a `/proxy` URL instead of proxying directly.
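
As a rough sketch of that suggestion (upstream and cache zone names are taken from the stock example config and may differ; this is illustrative, not the shipped snippet), only the MediaProxy route would keep `proxy_cache`, while local uploads are passed through uncached:

```nginx
# Remote media fetched via the MediaProxy: caching in nginx still makes sense here.
location /proxy {
    proxy_cache akkoma_media_cache;
    proxy_cache_valid 200 206 301 304 1h;
    proxy_cache_lock on;
    proxy_pass http://phoenix;
}

# Local uploads are already on disk (or redirected by Akkoma), so no proxy_cache here.
location /media {
    proxy_pass http://phoenix;
}
```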
Author
Contributor

One thing I think the cache does help with is reducing the number of file descriptors used by Akkoma; I've had a few instances of Akkoma running out of fds, which was a major pain to deal with.

I did raise the fd limit, but it didn't completely resolve the issue; not sure if there's a better way of dealing with that.

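Fwiw, on a systemd-managed OTP install the usual way to raise that limit is a drop-in override (unit name, path, and value below are illustrative):

```ini
# hypothetical drop-in, e.g. /etc/systemd/system/akkoma.service.d/override.conf
[Service]
LimitNOFILE=65536
```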