Ok. Remotely cleaning a huge (>2 TB, many many files and subdirs) #Nextcloud-hosted folder (not the whole user) is *painful*. Without access to the host it runs on, I am limited to either the web interface - which breaks - or #webdav with a tool like #rclone.
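For reference, a minimal sketch of the rclone remote this assumes - the name, URL and user are placeholders, your instance's DAV endpoint will differ:

```
[nextcloud]
type = webdav
url = https://cloud.example.org/remote.php/dav/files/USERNAME
vendor = nextcloud
user = USERNAME
pass = <obscured, set via rclone config>
```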
#rclone purge breaks (timeout), so rclone delete it is. Which is *slow*, really slow. Probably because the remote moves every deleted file into the (for this case) useless trashbin, which can't be turned off.
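Concretely, with the hypothetical "nextcloud:" remote from above and a placeholder folder name:

```
# purge would drop the dir and all its contents in one go,
# but keeps timing out on a tree this size:
rclone purge nextcloud:big-folder

# per-file delete gets through, just painfully slowly:
rclone delete --rmdirs nextcloud:big-folder
```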
At least one can use #xargs to run multiple rclones in parallel: first get a list of the entries in the to-be-deleted dir (rclone lsf), format them the way rclone expects (basically prefix each with the remote name) and run something like `xargs -n 1 -P0 rclone delete -v --rmdirs` on that list, as sketched below.
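Roughly like this - remote and folder names are placeholders again, and `-d '\n'` is GNU xargs' way of keeping entries with spaces in their names in one piece:

```
# list the top-level entries, prefix each with the remote path,
# then fan out one rclone delete per entry, in parallel (-P 0):
rclone lsf nextcloud:big-folder \
  | sed 's|^|nextcloud:big-folder/|' \
  | xargs -d '\n' -n 1 -P 0 rclone delete -v --rmdirs
```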
Still, it's been running since yesterday late afternoon and we are down to 1.4 TB left of 2 TB. Even in parallel, this WebDAV shit only manages to delete 2 to 4 files per second.