Compare commits


24 commits

Author SHA1 Message Date
c8add9d1dc Merge pull request 'fix invalid proxy_hide_header in example config' (#472) from flisk/akkoma:remote-media-docs-fix into develop
Reviewed-on: AkkomaGang/akkoma#472
2023-03-02 11:19:46 +00:00
d43c8080d0 Merge pull request 'updating docs: start akko first, then upgrade frontend' (#486) from flisk/akkoma:fix-updating-docs into develop
Reviewed-on: AkkomaGang/akkoma#486
2023-03-02 11:18:12 +00:00
df03d64dc5 Merge pull request 'Reblog content should be ""' (#489) from masto4-reboost into develop
Reviewed-on: AkkomaGang/akkoma#489
2023-03-02 11:16:26 +00:00
b88e6560e0 Reblog content should be ""
Fixes #450
2023-03-02 11:04:27 +00:00
1ab0b3a0e2 match nginx config to install config and extend docs a bit 2023-02-26 23:58:55 +01:00
cb28b8f0fe updating docs: start akko first, then upgrade frontend 2023-02-26 23:42:28 +01:00
531a550184 fix invalid proxy_hide_header in example config 2023-02-26 23:25:46 +01:00
45a11aa20f add changelog entry for MFM 2023-02-26 22:12:31 +00:00
f56e3098ef Merge branch 'delete_orphaned_activities' into develop 2023-02-26 22:11:30 +00:00
fd1dc87eb4 Merge pull request 'update backwards compat notice in admin_api.md' (#473) from flisk/akkoma:update-admin-api-docs into develop
Reviewed-on: AkkomaGang/akkoma#473
2023-02-26 22:01:57 +00:00
7bd80ccf07 Merge pull request 'update prometheus docs' (#474) from flisk/akkoma:update-prometheus-docs into develop
Reviewed-on: AkkomaGang/akkoma#474
2023-02-26 22:00:12 +00:00
f7211459ef Merge pull request 'Rename index for faster database restore' (#455) from ilja/akkoma:rename_index_for_faster_restore into develop
Reviewed-on: AkkomaGang/akkoma#455
2023-02-26 21:58:56 +00:00
fc842aa7c7 Merge pull request 'Docs: Change docs README for new way of building docs' (#448) from ilja/akkoma:improve_readme_from_docs into develop
Reviewed-on: AkkomaGang/akkoma#448
2023-02-26 21:49:42 +00:00
08d49fba7d fine then no fun allowed, y'all don't deserve it 2023-02-26 21:25:57 +00:00
ilja
328b4d93b7 Changelog + remove some unneeded comments from the tests 2023-02-26 14:43:19 +01:00
ilja
c1c962e1a8 Add docs for pleroma_ctl database prune_objects --prune-orphaned-activities
I also added extra info on VACUUM FULL
2023-02-26 14:41:50 +01:00
ilja
57eef6d764 prune_objects can prune orphaned activities that reference an array of objects
E.g. Flag activities have an array of objects

We prune the activity when NONE of the objects can be found

Note that the cost of finding and deleting these is ~4x higher than finding and deleting the non-array ones

Only string:
Delete on activities  (cost=506573.48..506580.38 rows=0 width=0)

Only Array:
Delete on activities  (cost=3570359.68..4276365.34 rows=0 width=0)

(They are still executed separately, so the total cost is the sum of the two)
2023-02-26 14:41:50 +01:00
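For reference, the estimates above come from the query planner; a hedged sketch of reproducing the string-only figure yourself (assuming the database is named `akkoma`), using the same query the task runs. `EXPLAIN` only plans the statement, nothing is deleted:

```sh
# Preview the planner cost of the single-object prune.
sudo -Hu postgres psql -d akkoma -c "explain
delete from public.activities
where id in (
  select a.id from public.activities a
  left join public.objects o on a.data ->> 'object' = o.data ->> 'id'
  left join public.activities a2 on a.data ->> 'object' = a2.data ->> 'id'
  left join public.users u on a.data ->> 'object' = u.ap_id
  where not a.local
  and jsonb_typeof(a.data -> 'object') = 'string'
  and o.id is null and a2.id is null and u.id is null
);"
```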
ilja
a7ec6e039c prune_objects can prune orphaned activities
We add an option to also prune remote activities whose referenced objects no longer exist.
Right now, we only check for activities that reference a single object, not an array or embedded object.
2023-02-26 14:41:50 +01:00
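To illustrate the distinction between the two reference shapes, a hypothetical check with `jsonb_typeof`, the function the task uses to tell them apart (the URLs are made-up examples):

```sh
# A single reference is a JSON string, an array of references is a JSON array.
sudo -Hu postgres psql -c "select jsonb_typeof('\"https://example.com/objects/1\"'::jsonb);"   # string
sudo -Hu postgres psql -c "select jsonb_typeof('[\"https://example.com/objects/1\"]'::jsonb);"  # array
```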
ilja
3b634dcbe7 Remove the note about activities_visibility_index
We renamed another index in the previous commit so that this work-around isn't needed any more
2023-02-26 14:38:14 +01:00
ilja
8b2adc4fb4 Rename users_ap_id_COALESCE_follower_address_index for faster db restoration
When dumping and restoring the database, PostgreSQL by default restores the data first and then the indexes.
Restoring the index activities_visibility_index took a very long time.
users_ap_id_COALESCE_follower_address_index was added later because having it can speed up the restoration tremendously.
The problem is that restoration apparently happens in alphabetical order, so this newer index wasn't created yet
by the time activities_visibility_index needed it.
There were several work-arounds which included more complex steps during backup/restore.
By renaming this index, it should be restored first and thus activities_visibility_index can make use of it.
This speeds up restoration significantly without requiring more complex or unexpected steps from people.
2023-02-26 14:33:17 +01:00
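A hedged way to verify the rename worked as intended (assuming the database is named `akkoma`): list both index names in alphabetical order, which per the above is also the restore order:

```sh
# The renamed index should now sort, and thus be restored, first.
sudo -Hu postgres psql -d akkoma -c "select indexname from pg_indexes
where schemaname = 'public'
  and indexname in ('aa_users_ap_id_COALESCE_follower_address_index',
                    'activities_visibility_index')
order by indexname;"
```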
da4c87b226 update prometheus docs 2023-02-18 14:39:22 +01:00
439ec49137 update backwards compat notice in admin_api.md 2023-02-18 14:37:12 +01:00
ilja
7f8932304f typo + remove unneeded file 2023-02-02 14:37:45 +01:00
ilja
e74e1efe1c Change docs README for new way of building docs
Docs used to be a separate repo that cloned pleroma and pleroma-fe.
Now the docs are just the BE docs and completely part of the Akkoma repo.
I moved back to using venv because that's what I used before and it's cleaner imo, since it keeps everything neatly in the repo.
(IIRC virtualenv stored things in the home folder or something.)
2023-01-26 15:42:53 +01:00
17 changed files with 341 additions and 102 deletions

.gitignore
View file

@ -73,6 +73,7 @@ pleroma.iml
# Generated documentation
docs/site
docs/venv
# docker stuff
docker-db

View file

@ -7,9 +7,17 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## Unreleased
## Fixed
- Allowed contentMap to be updated on edit
### Changed
- Restoring the database from a dump now goes much faster without the need for work-arounds
### Added
- Extend the mix task `prune_objects` with option `--prune-orphaned-activities` to also prune orphaned activities, allowing even more database space to be reclaimed
### Removed
- Possibility of using the `style` parameter on `span` elements. This will break certain MFM parameters.
## 2023.02
### Added

View file

@ -1,7 +0,0 @@
all: install
pipenv run mkdocs build
install:
pipenv install
clean:
rm -rf docs

View file

@ -2,33 +2,27 @@
You don't need to build and test the docs as long as you make sure the syntax is correct. But in case you do want to build the docs, feel free to do so.
You'll need to install mkdocs for which you can check the [mkdocs installation guide](https://www.mkdocs.org/#installation). Generally it's best to install it using `pip`. You'll also need to install the correct dependencies.
```sh
# Make sure you're in the same directory as this README
# From the root of the Akkoma repo, you'll need to do
cd docs
### Example using a Debian based distro
# Optionally use a virtual environment
python3 -m venv venv
source venv/bin/activate
#### 1. Install pipenv and dependencies
# Install dependencies
pip install -r requirements.txt
```shell
pip install pipenv
pipenv sync
# Run an http server that rebuilds when files change
# Accessible on http://127.0.0.1:8000
mkdocs serve
# Build the docs
# The static html pages will have been created in the folder "site"
# You can serve them from a server by pointing your server software (nginx, apache...) to this location
mkdocs build
# To get out of the virtual environment, you do
deactivate
```
#### 2. (Optional) Activate the virtual environment
Since dependencies are installed in a virtual environment, you can't use them directly. To use them you should either prefix the command with `pipenv run`, or activate the virtual environment for the current shell by executing `pipenv shell` once.
#### 3. Build the docs using the script
```shell
[pipenv run] make all
```
#### 4. Serve the files
A folder `site` containing the static html pages will have been created. You can serve them from a server by pointing your server software (nginx, apache...) to this location. During development, you can run locally with
```shell
[pipenv run] mkdocs serve
```
This handles setting up an http server and rebuilding when files change. You can then access the docs on <http://127.0.0.1:8000>

View file

@ -21,7 +21,6 @@ Replaces embedded objects with references to them in the `objects` table. Only n
mix pleroma.database remove_embedded_objects [option ...]
```
### Options
- `--vacuum` - run `VACUUM FULL` after the embedded objects are replaced with their references
@ -29,8 +28,11 @@ Replaces embedded objects with references to them in the `objects` table. Only n
This will prune remote posts older than 90 days (configurable with [`config :pleroma, :instance, remote_post_retention_days`](../../configuration/cheatsheet.md#instance)) from the database. Pruned posts may be refetched in some cases.
!!! note
The disk space will only be reclaimed after a proper vacuum. By default PostgreSQL does this for you on a regular basis, but if your instance has been running for a long time and there are many rows deleted, it may be advantageous to use `VACUUM FULL` (e.g. by using the `--vacuum` option).
!!! danger
The disk space will only be reclaimed after `VACUUM FULL`. You may run out of disk space during the execution of the task or vacuuming if you don't have about a third of the database size free.
You may run out of disk space during the execution of the task or vacuuming if you don't have about a third of the database size free. Vacuum causes a substantial increase in I/O traffic, and may lead to a degraded experience while it is running.
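As a rough pre-flight check (a sketch; the database name `akkoma` and the data directory path are assumptions), compare the database size against the free space on the disk holding it:

```sh
# Database size according to Postgres, and free space on its disk.
sudo -Hu postgres psql -c "select pg_size_pretty(pg_database_size('akkoma'));"
df -h /var/lib/postgresql  # adjust to your data directory
```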
=== "OTP"
@ -46,9 +48,10 @@ This will prune remote posts older than 90 days (configurable with [`config :ple
### Options
- `--keep-threads` - don't prune posts when they are part of a thread where at least one post has seen local interaction (e.g. one of the posts is a local post, or is favourited by a local user, or has been repeated by a local user...)
- `--keep-non-public` - keep non-public posts like DM's and followers-only, even if they are remote
- `--vacuum` - run `VACUUM FULL` after the objects are pruned
- `--keep-threads` - Don't prune posts when they are part of a thread where at least one post has seen local interaction (e.g. one of the posts is a local post, or is favourited by a local user, or has been repeated by a local user...). It also won't delete posts when at least one of the posts in that thread is kept (e.g. because one of the posts has seen recent activity).
- `--keep-non-public` - Keep non-public posts like DM's and followers-only, even if they are remote.
- `--prune-orphaned-activities` - Also prune orphaned activities afterwards. Activities are things like Like, Create, Announce, Flag (aka reports)... Pruning these can significantly reduce the database size.
- `--vacuum` - Run `VACUUM FULL` after the objects are pruned. This should not be used on a regular basis, but is useful if your instance has been running for a long time before pruning.
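As a worked example, a hypothetical from-source invocation combining these options (OTP installs would use `./bin/pleroma_ctl database prune_objects` instead):

```sh
# Prune old remote posts while keeping threads with local interaction and
# non-public posts, then prune the activities left orphaned by the deletion.
MIX_ENV=prod mix pleroma.database prune_objects --keep-threads --keep-non-public --prune-orphaned-activities
```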
## Create a conversation for all existing DMs
@ -96,6 +99,9 @@ Can be safely re-run
## Vacuum the database
!!! note
By default PostgreSQL has an autovacuum daemon running. While the tasks described here can help in some cases, they shouldn't be needed on a regular basis. See [the PostgreSQL docs on vacuuming](https://www.postgresql.org/docs/current/sql-vacuum.html) for more information on this.
### Analyze
Running an `analyze` vacuum job can improve performance by updating statistics used by the query planner. **It is safe to cancel this.**
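Under the hood this is roughly equivalent to a plain `VACUUM ANALYZE` (a sketch; the task wraps it for you, and the database name `akkoma` is an assumption):

```sh
# Roughly what the analyze task performs; safe to cancel at any time.
sudo -Hu postgres psql -d akkoma -c "VACUUM ANALYZE;"
```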

View file

@ -21,33 +21,15 @@
6. Restore the database schema and akkoma role using either of the following options
* You can use the original `setup_db.psql` if you have it[²]: `sudo -Hu postgres psql -f config/setup_db.psql`.
* Or recreate the database and user yourself (replace the password with the one you find in the config file) `sudo -Hu postgres psql -c "CREATE USER akkoma WITH ENCRYPTED PASSWORD '<database-password-which-you-can-find-in-your-config-file>'; CREATE DATABASE akkoma OWNER akkoma;"`.
7. Now restore the Akkoma instance's data into the empty database schema[¹][³]: `sudo -Hu postgres pg_restore -d akkoma -v -1 </path/to/backup_location/akkoma.pgdump>`
8. If you installed a newer Akkoma version, you should run `MIX_ENV=prod mix ecto.migrate`[⁴]. This task performs database migrations, if there were any.
7. Now restore the Akkoma instance's data into the empty database schema[¹]: `sudo -Hu postgres pg_restore -d akkoma -v -1 </path/to/backup_location/akkoma.pgdump>`
8. If you installed a newer Akkoma version, you should run `MIX_ENV=prod mix ecto.migrate`[³]. This task performs database migrations, if there were any.
9. Restart the Akkoma service.
10. Run `sudo -Hu postgres vacuumdb --all --analyze-in-stages`. This will quickly generate the statistics so that postgres can properly plan queries.
11. If setting up on a new server configure Nginx by using the `installation/akkoma.nginx` config sample or reference the Akkoma installation guide for your OS which contains the Nginx configuration instructions.
[¹]: We assume the database name and user are both "akkoma". If not, you can find the correct name in your config files.
[²]: You can recreate the `config/setup_db.psql` by running the `mix pleroma.instance gen` task again. You can ignore most of the questions, but make the database user, name, and password the same as found in your backed up config file. This will also create a new `config/generated_config.exs` file which you may delete as it is not needed.
[³]: `pg_restore` will add data before adding indexes. The indexes are added in alphabetical order. There's one index, `activities_visibility_index`, which may take a long time because it can't make use of an index that's only added later. You can significantly speed up restoration by skipping this index and adding it afterwards. For that, you can do the following (we assume akkoma.pgdump is in the directory where you're running the commands):
```sh
pg_restore -l akkoma.pgdump > db.list
# Comment out the step for creating activities_visibility_index by adding a semicolon at the start of the line
sed -i -E 's/(.*activities_visibility_index.*)/;\1/' db.list
# We restore the database using the db.list list-file
sudo -Hu postgres pg_restore -L db.list -d akkoma -v -1 akkoma.pgdump
# You can see the sql statement with which to create the index using
grep -Eao 'CREATE INDEX activities_visibility_index.*' akkoma.pgdump
# Then create the index manually
# Make sure the command to create the index is correct! It may have changed since this guide was written
sudo -Hu postgres psql -d akkoma -c "CREATE INDEX activities_visibility_index ON public.activities USING btree (public.activity_visibility(actor, recipients, data), id DESC NULLS LAST) WHERE ((data ->> 'type'::text) = 'Create'::text);"
```
[⁴]: Prefix with `MIX_ENV=prod` to run it using the production config file.
[³]: Prefix with `MIX_ENV=prod` to run it using the production config file.
## Remove

View file

@ -26,11 +26,11 @@ su -s "$SHELL" akkoma
# Run database migrations
./bin/pleroma_ctl migrate
# Update frontend(s). See Frontend Configuration doc for more information.
./bin/pleroma_ctl frontend install pleroma-fe --ref stable
# Start akkoma
./bin/pleroma daemon # or using the system service manager (e.g. systemctl start akkoma)
# Update frontend(s). See Frontend Configuration doc for more information.
./bin/pleroma_ctl frontend install pleroma-fe --ref stable
```
If you selected an alternate flavour on installation,
@ -59,9 +59,9 @@ sudo systemctl stop akkoma
# Run database migrations
mix ecto.migrate
# Update Pleroma-FE frontend to latest stable. For other Frontends see Frontend Configration doc for more information.
mix pleroma.frontend install pleroma-fe --ref stable
# Start akkoma (replace with your system service manager's equivalent if different)
sudo systemctl start akkoma
# Update Pleroma-FE frontend to latest stable. For other Frontends see Frontend Configuration doc for more information.
mix pleroma.frontend install pleroma-fe --ref stable
```

View file

@ -6,33 +6,46 @@ as soon as the post is received by your instance.
## Nginx
```
proxy_cache_path /long/term/storage/path/akkoma-media-cache levels=1:2
keys_zone=akkoma_media_cache:10m inactive=1y use_temp_path=off;
The following are excerpts from the [suggested nginx config](../../../installation/nginx/akkoma.nginx) that demonstrate the configuration necessary for the media proxy to work.
A `proxy_cache_path` must be defined, for example:
```
proxy_cache_path /long/term/storage/path/akkoma-media-cache levels=1:2
keys_zone=akkoma_media_cache:10m inactive=1y use_temp_path=off;
```
The `proxy_cache_path` must then be configured for use with media proxy paths:
```
location ~ ^/(media|proxy) {
proxy_cache akkoma_media_cache;
slice 1m;
proxy_cache_key $host$uri$is_args$args$slice_range;
proxy_set_header Range $slice_range;
proxy_http_version 1.1;
proxy_cache_valid 206 301 302 304 1h;
proxy_cache_valid 200 1y;
proxy_cache_use_stale error timeout invalid_header updating;
proxy_cache_valid 200 206 301 304 1h;
proxy_cache_lock on;
proxy_ignore_client_abort on;
proxy_buffering on;
chunked_transfer_encoding on;
proxy_ignore_headers Cache-Control Expires;
proxy_hide_header Cache-Control Expires;
proxy_pass http://127.0.0.1:4000;
proxy_pass http://phoenix;
}
}
```
Ensure that `proxy_http_version 1.1;` is set for the above `location` block. In the suggested config, this is already the case.
## Akkoma
Add to your `prod.secret.exs`:
### File-based Configuration
If you're using static file configuration, add the `MediaProxyWarmingPolicy` to your MRF policies. For example:
```
config :pleroma, :mrf,
policies: [Pleroma.Web.ActivityPub.MRF.MediaProxyWarmingPolicy]
```
### Database Configuration
In the admin interface, add `MediaProxyWarmingPolicy` to the `Policies` option under `Settings` → `MRF`.

View file

@ -2,7 +2,7 @@
Authentication is required and the user must be an admin.
The `/api/v1/pleroma/admin/*` path is backwards compatible with `/api/pleroma/admin/*` (`/api/pleroma/admin/*` will be deprecated in the future).
Backwards-compatibility for admin API endpoints without version prefixes (`/api/pleroma/admin/*`) has been removed as of Akkoma 3.6.0. Please use `/api/v1/pleroma/admin/*` instead.
## `GET /api/v1/pleroma/admin/users`

View file

@ -5,27 +5,16 @@ Akkoma includes support for exporting metrics via the [prometheus_ex](https://gi
Config example:
```
config :prometheus, Pleroma.Web.Endpoint.MetricsExporter,
enabled: true,
auth: {:basic, "myusername", "mypassword"},
ip_whitelist: ["127.0.0.1"],
path: "/api/pleroma/app_metrics",
format: :text
config :pleroma, :instance,
export_prometheus_metrics: true
```
* `enabled` (Akkoma extension) enables the endpoint
* `ip_whitelist` (Akkoma extension) could be used to restrict access only to specified IPs
* `auth` sets the authentication (`false` for no auth; configurable to HTTP Basic Auth, see [prometheus-plugs](https://github.com/deadtrickster/prometheus-plugs#exporting) documentation)
* `format` sets the output format (`:text` or `:protobuf`)
* `path` sets the path to app metrics page
## `/api/pleroma/app_metrics`
## `/api/v1/akkoma/metrics`
### Exports Prometheus application metrics
* Method: `GET`
* Authentication: not required by default (see configuration options above)
* Authentication: required
* Params: none
* Response: text
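A hypothetical smoke test of the endpoint (the token and host are placeholders; we assume bearer-token authentication, so adapt to however your instance authenticates):

```sh
# Fetch the metrics in Prometheus text format.
curl -H "Authorization: Bearer <your-admin-token>" https://otp.akkoma.dev/api/v1/akkoma/metrics
```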
@ -37,7 +26,7 @@ The following is a config example to use with [Grafana](https://grafana.com)
```
- job_name: 'beam'
metrics_path: /api/pleroma/app_metrics
metrics_path: /api/v1/akkoma/metrics
scheme: https
static_configs:
- targets: ['otp.akkoma.dev']

View file

@ -69,7 +69,8 @@ def run(["prune_objects" | args]) do
strict: [
vacuum: :boolean,
keep_threads: :boolean,
keep_non_public: :boolean
keep_non_public: :boolean,
prune_orphaned_activities: :boolean
]
)
@ -94,6 +95,21 @@ def run(["prune_objects" | args]) do
log_message
end
log_message =
if Keyword.get(options, :prune_orphaned_activities) do
log_message <> ", pruning orphaned activities"
else
log_message
end
log_message =
if Keyword.get(options, :vacuum) do
log_message <>
", doing a full vacuum (you shouldn't do this as a recurring maintenance task)"
else
log_message
end
Logger.info(log_message)
if Keyword.get(options, :keep_threads) do
@ -155,14 +171,49 @@ def run(["prune_objects" | args]) do
end
|> Repo.delete_all(timeout: :infinity)
prune_hashtags_query = """
if Keyword.get(options, :prune_orphaned_activities) do
# Prune activities that link to a single object
"""
delete from public.activities
where id in (
select a.id from public.activities a
left join public.objects o on a.data ->> 'object' = o.data ->> 'id'
left join public.activities a2 on a.data ->> 'object' = a2.data ->> 'id'
left join public.users u on a.data ->> 'object' = u.ap_id
where not a.local
and jsonb_typeof(a."data" -> 'object') = 'string'
and o.id is null
and a2.id is null
and u.id is null
)
"""
|> Repo.query([], timeout: :infinity)
# Prune activities that link to an array of objects
"""
delete from public.activities
where id in (
select a.id from public.activities a
join json_array_elements_text((a."data" -> 'object')::json) as j on jsonb_typeof(a."data" -> 'object') = 'array'
left join public.objects o on j.value = o.data ->> 'id'
left join public.activities a2 on j.value = a2.data ->> 'id'
left join public.users u on j.value = u.ap_id
group by a.id
having max(o.data ->> 'id') is null
and max(a2.data ->> 'id') is null
and max(u.ap_id) is null
)
"""
|> Repo.query([], timeout: :infinity)
end
"""
DELETE FROM hashtags AS ht
WHERE NOT EXISTS (
SELECT 1 FROM hashtags_objects hto
WHERE ht.id = hto.hashtag_id)
"""
Repo.query(prune_hashtags_query)
|> Repo.query()
if Keyword.get(options, :vacuum) do
Maintenance.vacuum("full")

View file

@ -183,7 +183,7 @@ def render(
in_reply_to_id: nil,
in_reply_to_account_id: nil,
reblog: reblogged,
content: reblogged[:content] || "",
content: "",
created_at: created_at,
reblogs_count: 0,
replies_count: 0,

View file

@ -0,0 +1,23 @@
defmodule Pleroma.Repo.Migrations.RenameIndexUsersApId_COALESCEFollowerAddressIndex do
alias Pleroma.Repo
use Ecto.Migration
def up do
# When dumping and restoring the database, PostgreSQL by default restores the data first and then the indexes.
# Restoring the index activities_visibility_index took a very long time.
# users_ap_id_COALESCE_follower_address_index was added later because having it can speed up the restoration tremendously.
# The problem is that restoration apparently happens in alphabetical order, so this newer index wasn't created yet
# by the time activities_visibility_index needed it.
# There were several work-arounds which included more complex steps during backup/restore.
# By renaming this index, it should be restored first and thus activities_visibility_index can make use of it.
# This speeds up restoration significantly without requiring more complex or unexpected steps from people.
Repo.query!("ALTER INDEX public.\"users_ap_id_COALESCE_follower_address_index\"
RENAME TO \"aa_users_ap_id_COALESCE_follower_address_index\";")
end
def down do
Repo.query!("ALTER INDEX public.\"aa_users_ap_id_COALESCE_follower_address_index\"
RENAME TO \"users_ap_id_COALESCE_follower_address_index\";")
end
end

View file

@ -56,8 +56,6 @@ defmodule Pleroma.HTML.Scrubber.Default do
Meta.allow_tag_with_these_attributes(:u, [])
Meta.allow_tag_with_these_attributes(:ul, [])
Meta.allow_tags_with_style_attributes([:span])
Meta.allow_tag_with_this_attribute_values(:span, "class", [
"h-card",
"quote-inline",

View file

@ -353,6 +353,186 @@ test "with the --keep-threads option it keeps old threads with bookmarked posts"
assert length(Repo.all(Object)) == 1
end
test "We don't have unexpected tables which may contain objects that are referenced by activities" do
# We can delete orphaned activities. For that we look for the objects they reference in the 'objects', 'activities', and 'users' tables.
# If someone adds another table with objects (e.g. with separate relations or collections), then we need to make sure we
# add logic for that in the 'prune_objects' task so that we don't wrongly delete their corresponding activities.
# So when someone adds (or removes) a table, this test will fail.
# Either the table contains objects which can be referenced from the activities table
# => in that case the prune_objects job should be adapted so we don't delete activities that still have the referenced object.
# Or it doesn't contain objects which can be referenced from the activities table
# => in that case you can add/remove the table to/from this (sorted) list.
assert Repo.query!(
"SELECT table_name FROM information_schema.tables WHERE table_schema='public' AND table_type='BASE TABLE';"
).rows
|> Enum.sort() == [
["activities"],
["announcement_read_relationships"],
["announcements"],
["apps"],
["backups"],
["bookmarks"],
["chat_message_references"],
["chats"],
["config"],
["conversation_participation_recipient_ships"],
["conversation_participations"],
["conversations"],
["counter_cache"],
["data_migration_failed_ids"],
["data_migrations"],
["deliveries"],
["filters"],
["following_relationships"],
["hashtags"],
["hashtags_objects"],
["instances"],
["lists"],
["markers"],
["mfa_tokens"],
["moderation_log"],
["notifications"],
["oauth_authorizations"],
["oauth_tokens"],
["oban_jobs"],
["oban_peers"],
["objects"],
["password_reset_tokens"],
["push_subscriptions"],
["registrations"],
["report_notes"],
["scheduled_activities"],
["schema_migrations"],
["thread_mutes"],
["user_follows_hashtag"],
["user_frontend_setting_profiles"],
["user_invite_tokens"],
["user_notes"],
["user_relationships"],
["users"]
]
end
test "it prunes orphaned activities with the --prune-orphaned-activities" do
%Object{} |> Map.merge(%{data: %{"id" => "object_for_activity"}}) |> Repo.insert()
%Activity{}
|> Map.merge(%{
local: false,
data: %{"id" => "remote_activity_with_object", "object" => "object_for_activity"}
})
|> Repo.insert()
%Activity{}
|> Map.merge(%{
local: false,
data: %{
"id" => "remote_activity_with_activity",
"object" => "remote_activity_with_object"
}
})
|> Repo.insert()
%User{} |> Map.merge(%{ap_id: "actor"}) |> Repo.insert()
%Activity{}
|> Map.merge(%{
local: false,
data: %{"id" => "remote_activity_with_actor", "object" => "actor"}
})
|> Repo.insert()
%Activity{}
|> Map.merge(%{
local: false,
data: %{
"id" => "remote_activity_without_existing_referenced_object",
"object" => "non_existing"
}
})
|> Repo.insert()
%Activity{}
|> Map.merge(%{
local: true,
data: %{"id" => "local_activity_with_actor", "object" => "non_existing"}
})
|> Repo.insert()
assert length(Repo.all(Activity)) == 5
Mix.Tasks.Pleroma.Database.run(["prune_objects"])
assert length(Repo.all(Activity)) == 5
Mix.Tasks.Pleroma.Database.run(["prune_objects", "--prune-orphaned-activities"])
activities = Repo.all(Activity)
assert "remote_activity_without_existing_referenced_object" not in Enum.map(
activities,
fn a -> a.data["id"] end
)
assert length(activities) == 4
end
test "it prunes orphaned activities with the --prune-orphaned-activities when the objects are referenced from an array" do
%Object{} |> Map.merge(%{data: %{"id" => "existing_object"}}) |> Repo.insert()
%User{} |> Map.merge(%{ap_id: "existing_actor"}) |> Repo.insert()
%Activity{}
|> Map.merge(%{
local: false,
data: %{
"id" => "remote_activity_existing_object",
"object" => ["non_ existing_object", "existing_object"]
}
})
|> Repo.insert()
%Activity{}
|> Map.merge(%{
local: false,
data: %{
"id" => "remote_activity_existing_actor",
"object" => ["non_ existing_object", "existing_actor"]
}
})
|> Repo.insert()
%Activity{}
|> Map.merge(%{
local: false,
data: %{
"id" => "remote_activity_existing_activity",
"object" => ["non_ existing_object", "remote_activity_existing_actor"]
}
})
|> Repo.insert()
%Activity{}
|> Map.merge(%{
local: false,
data: %{
"id" => "remote_activity_without_existing_referenced_object",
"object" => ["owo", "whats_this"]
}
})
|> Repo.insert()
assert length(Repo.all(Activity)) == 4
Mix.Tasks.Pleroma.Database.run(["prune_objects"])
assert length(Repo.all(Activity)) == 4
Mix.Tasks.Pleroma.Database.run(["prune_objects", "--prune-orphaned-activities"])
activities = Repo.all(Activity)
assert length(activities) == 3
assert "remote_activity_without_existing_referenced_object" not in Enum.map(
activities,
fn a -> a.data["id"] end
)
assert length(activities) == 3
end
end
describe "running update_users_following_followers_counts" do

View file

@ -135,7 +135,7 @@ test "a misskey MFM status with a content field should work and be linked", _ do
assert content =~ "@oops_not_a_mention"
assert content =~
"<span class=\"mfm _mfm_jelly_\" style=\"display: inline-block; animation: 1s linear 0s infinite normal both running mfm-rubberBand;\">mfm goes here</span> </p>aaa"
"<span class=\"mfm _mfm_jelly_\">mfm goes here</span> </p>aaa"
assert content =~ "some text<br/>newline"
end

View file

@ -594,6 +594,7 @@ test "a reblog" do
represented = StatusView.render("show.json", %{for: user, activity: reblog})
assert represented[:id] == to_string(reblog.id)
assert represented[:content] == ""
assert represented[:reblog][:id] == to_string(activity.id)
assert represented[:emojis] == []
assert_schema(represented, "Status", Pleroma.Web.ApiSpec.spec())