* Initial attempt at a worker to refresh old data.
* Build refresh image on non-master branches.
* Allow specifying missing batch sizes.
* Store more data for deleted submissions.
* Add refresh to builds.
* Update furaffinity-rs dependency.
* Update furaffinity-rs, again.
* Update refresh Dockerfile to avoid extra build.
* Update faktory dependency.
* Update deleted flag migration order.
* Update CI.
* Bump versions.
* Fix dependencies.
* Unify tracing and metrics export.
* Create service to hash image.
* Updates for hashing service.
* Fix missing file changes.
* Old changes I don't remember.
* Update dependencies, improve Docker images.
* Use BKApi instead of in-memory tree.
* Include health endpoint with metrics.
* Avoid some unwraps.
* Improve FurAffinity retry logic, add timeout.
* Only build images when pushing to master.
* Use tracing instead of directly printing messages.
* Make message casing consistent.
* Use tracing instead of panicking directly.
* Record users online.
* Extract submission handling to new function.
Instead of using a single rate limit bucket for both searching by uploading
an image and searching by sending a hash, use two separate buckets joined
together. Now when an image is uploaded, it consumes both an image and a
hash; when only a hash is provided, it consumes only an image. This naming
is somewhat confusing, but it was kept for backwards compatibility with
existing data.
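A minimal sketch of the two-bucket idea described above. The names
(`Bucket`, `RateLimiter`, `check_image`, `check_hash`) are hypothetical
illustrations, not the project's actual types, and the time-window refill
logic is omitted for brevity.

```rust
/// A simple counter bucket: requests remaining in the current rate limit
/// window (refilling the bucket at the start of each window is omitted).
struct Bucket {
    remaining: u32,
}

impl Bucket {
    /// Try to take one token; returns false when the bucket is exhausted.
    fn take(&mut self) -> bool {
        if self.remaining > 0 {
            self.remaining -= 1;
            true
        } else {
            false
        }
    }
}

/// Two buckets joined together, as described in the changelog entry.
struct RateLimiter {
    image: Bucket,
    hash: Bucket,
}

impl RateLimiter {
    /// Uploading an image consumes from both buckets.
    fn check_image(&mut self) -> bool {
        // Only consume when both buckets have a token available, so a
        // failed check does not drain one bucket but not the other.
        if self.image.remaining > 0 && self.hash.remaining > 0 {
            self.image.take() && self.hash.take()
        } else {
            false
        }
    }

    /// Sending just a hash consumes only from the image bucket
    /// (the confusing-but-compatible naming mentioned above).
    fn check_hash(&mut self) -> bool {
        self.image.take()
    }
}

fn main() {
    let mut limiter = RateLimiter {
        image: Bucket { remaining: 10 },
        hash: Bucket { remaining: 10 },
    };

    assert!(limiter.check_image()); // consumes one image and one hash token
    assert!(limiter.check_hash());  // consumes one image token only
    println!(
        "image remaining: {}, hash remaining: {}",
        limiter.image.remaining, limiter.hash.remaining
    );
    // prints: image remaining: 8, hash remaining: 9
}
```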