This adds an in-process mock S3 backend via the `s3s` crate. It
allows us to run tests without requiring users to run `minio` or hook
up their real AWS accounts.
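A rough sketch of what a test against such a backend can look like;
`spawn_mock_s3()` is a hypothetical helper standing in for the actual
`s3s` server wiring, and only the AWS SDK calls below are real API:

```rust
use aws_sdk_s3::config::{Credentials, Region};
use aws_sdk_s3::{Client, Config};

#[tokio::test]
async fn put_and_get_roundtrip() {
    // Hypothetical helper: boots an in-process s3s-backed server on a
    // free local port and returns its base URL.
    let endpoint = spawn_mock_s3().await;

    let config = Config::builder()
        .endpoint_url(endpoint)
        // The mock backend does not validate credentials.
        .credentials_provider(Credentials::new("test", "test", None, None, "mock"))
        .region(Region::new("us-east-1"))
        // Path-style addressing avoids per-bucket virtual hosts.
        .force_path_style(true)
        .build();
    let client = Client::from_conf(config);

    client.create_bucket().bucket("bucket").send().await.unwrap();
    client
        .put_object()
        .bucket("bucket")
        .key("key")
        .body(b"payload".to_vec().into())
        .send()
        .await
        .unwrap();

    let out = client.get_object().bucket("bucket").key("key").send().await.unwrap();
    assert_eq!(out.body.collect().await.unwrap().to_vec(), b"payload");
}
```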
... because we end up reading stale data. The WAL file is recreated
after a TRUNCATE-level checkpoint and a new one is opened, while we
keep reading from the old one, which is kept alive by the file
descriptor we still hold.
Fixes #598
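For illustration of the failure mode (a sketch, not the actual patch;
`wal_was_recreated` is a made-up name): on Unix, an open descriptor
pins the old inode, so recreation can be detected by comparing it with
whatever currently sits at the path, and reopening when they diverge.

```rust
use std::fs::File;
use std::os::unix::fs::MetadataExt;

// Returns true if `path` no longer refers to the file behind our open
// descriptor, i.e. the WAL was deleted and recreated since we opened
// it. Reads through the old descriptor silently return stale data
// even though a fresh WAL exists at the same path.
fn wal_was_recreated(open_wal: &File, path: &str) -> std::io::Result<bool> {
    let on_disk = std::fs::metadata(path)?; // stat(2) on the path
    let held = open_wal.metadata()?;        // fstat(2) on our descriptor
    Ok(on_disk.ino() != held.ino() || on_disk.dev() != held.dev())
}
```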
This is a partial fix for #598: it makes the checksumming work as
long as the checksum is produced and checked on architectures with
matching endianness. Most modern machines, including the ones we run
on our platform, in CI, and at home, are little-endian anyway.
The proper fix is to also store the endianness of the checksum
computation in our .meta file, just like SQLite stores it in the WAL
header, but that's not implemented yet.
Tested locally.
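A minimal sketch of why the checksum is endianness-sensitive,
mirroring SQLite's WAL checksum scheme (the function name and the
native-endian choice are illustrative, not the exact code in the
repo): the data is consumed as 32-bit words, so the result depends on
the byte order used to assemble each word.

```rust
// Fletcher-style checksum over pairs of 32-bit words, as in SQLite's
// WAL. `from_ne_bytes` (native endian) is what makes the value differ
// between little- and big-endian machines; the eventual fix is to
// record which order was used (e.g. in the .meta file) and decode
// with `from_le_bytes`/`from_be_bytes` accordingly.
fn checksum(data: &[u8], init: (u32, u32)) -> (u32, u32) {
    let (mut s1, mut s2) = init;
    // Assumes data.len() is a multiple of 8, as SQLite does for WAL frames.
    for pair in data.chunks_exact(8) {
        let x0 = u32::from_ne_bytes(pair[0..4].try_into().unwrap());
        let x1 = u32::from_ne_bytes(pair[4..8].try_into().unwrap());
        s1 = s1.wrapping_add(x0).wrapping_add(s2);
        s2 = s2.wrapping_add(x1).wrapping_add(s1);
    }
    (s1, s2)
}
```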
Gzip does not perform well on data in the form of libSQL 4 KiB pages,
and zstd performed uniformly better in every test case I covered
locally (and no worse on random data with very high entropy). During
stress tests, xz turned out to spontaneously fail to compress, and the
same happened with bzip2. Each compression algorithm is provided by a
separate crate, so the failing ones were simply ruled out (a usage
sketch follows the list below).
Zstd proved to be:
- fast
- correct
- more than acceptable in compression ratio
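A minimal usage sketch with the `zstd` crate; compression level 3 is
an assumption for illustration, not necessarily what we ship:

```rust
// Round-trips one 4 KiB libSQL page through zstd and asserts the
// decompressed bytes match. `encode_all`/`decode_all` accept any
// `std::io::Read`, so a byte slice works directly.
fn zstd_roundtrip(page: &[u8; 4096]) -> std::io::Result<()> {
    let compressed = zstd::encode_all(&page[..], 3)?;
    let restored = zstd::decode_all(&compressed[..])?;
    assert_eq!(&restored[..], &page[..]);
    Ok(())
}
```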