It also transiently fixes the "malformed db" issue with embedded
replicas. Those replicas didn't create a "libsql_wasm_func_table"
when first connecting, while sqld did, due to having wasm functions
enabled.
But since our sqld parser doesn't accept CREATE FUNCTION
yet, there's no harm in disabling the support.
There's no reason I'm aware of to make the checkpoint-interval-s
option effective only if bottomless is enabled. Instances without
bottomless should also be able to checkpoint periodically, as in
the sketch below.
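A minimal sketch of such a bottomless-independent periodic checkpoint task, assuming tokio and rusqlite; `checkpoint_loop` and its parameters are illustrative, not sqld's actual code:
```
use std::time::Duration;

async fn checkpoint_loop(db_path: std::path::PathBuf, interval_s: u64) {
    let mut interval = tokio::time::interval(Duration::from_secs(interval_s));
    loop {
        interval.tick().await;
        let path = db_path.clone();
        // Checkpointing is blocking work; keep it off the async executor.
        let _ = tokio::task::spawn_blocking(move || -> rusqlite::Result<()> {
            let conn = rusqlite::Connection::open(path)?;
            // TRUNCATE flushes all frames into the main db file
            // and resets the WAL.
            conn.query_row("PRAGMA wal_checkpoint(TRUNCATE)", [], |_| Ok(()))?;
            Ok(())
        })
        .await;
    }
}
```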
* connection: fix various comment, code var and doc typos
* replication: fix various comment, code var and doc typos
* namespace: fix various comment, code var and doc typos
* metrics: fix typo in metric docstring
* hrana/stmt: fix code typo verion -> version
* query_result_builder: fix code typo weigths -> weights
* h2c: fix typo in docstring
* query_analysis: fix typo in docstring
* Group all checks together in the same workflow
Reusing artifacts ends up being faster than parallelizing with the
current CI resources.
* Set same RUSTFLAGS in both jobs && remove unnecessary steps
* Cache rust artifacts
This reverts be7386 and removes decoding limits for the `ReplicationLogProxyClient`,
so that when replication requests are forwarded to the primary and it returns a full
batch of `1024` frames, it does not trip the 4 MB decoding limit set by default for
tonic gRPC clients.
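A sketch of how the limit can be lifted on a tonic-generated client, assuming the generated `ReplicationLogClient` type from the service's .proto; not the exact sqld code:
```
use tonic::transport::Channel;

async fn connect_log_client(
    addr: &'static str,
) -> Result<ReplicationLogClient<Channel>, tonic::transport::Error> {
    let channel = Channel::from_static(addr).connect().await?;
    // tonic's default per-message decode limit is 4 MB; usize::MAX
    // effectively removes it, so a full 1024-frame batch forwarded
    // from the primary fits in a single response.
    Ok(ReplicationLogClient::new(channel).max_decoding_message_size(usize::MAX))
}
```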
* test fork namespace
* test destroy namespace
* test create namespace load from url
* test load from dump
* test load dump from file
* reorganize test dir
* fix sync_many_replicas test
* load dump tests
* test malformed dumps
* add create dump test.
* fix tests
* bottomless-cli: fix too eager create_dir_all() call
create_dir_all() was called on the full path to the database
file, which also created the final file name as a directory.
As a result, restoration later failed: `data.tmp` could not be
moved to `data`, since `data` had already been mistakenly created
as a directory.
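A minimal sketch of the fix, with `ensure_db_dir` as an illustrative name: create directories only up to the parent of the database path, never the full path itself.
```
use std::path::Path;

fn ensure_db_dir(db_path: &Path) -> std::io::Result<()> {
    // Before the fix, create_dir_all(db_path) turned the final
    // component (e.g. `data`) into a directory, so `data.tmp` could
    // never be renamed onto it during restore.
    if let Some(parent) = db_path.parent() {
        std::fs::create_dir_all(parent)?;
    }
    Ok(())
}
```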
* bottomless-cli: add `verify` command
The command works similarly to `restore`, except it restores
to a temporary directory and runs a `pragma integrity_check` query.
On success it should return:
```
$ bottomless-cli -e http://localhost:9000 -n ns-:default -d e4e57664-01ad-76cc-9f19-a96700d5b2e4 verify
Snapshot size: 13512704
Verification: ok
```
* bottomless: remove false positive [BUG] message
For a transaction that spans multiple xFrames calls, it's ok
if our last_valid_frame doesn't match the one that came with xFrames.
As long as our last_valid_frame is *not greater* than the one reported,
we're good. If it's greater, we're in trouble and we report a bug.
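A sketch of the corrected invariant; `check_frame_consistency` and its types are illustrative, not sqld's actual code:
```
fn check_frame_consistency(last_valid_frame: u32, reported_frame: u32) {
    // Lagging behind the frame number reported by xFrames is fine while
    // a transaction spans multiple xFrames calls.
    if last_valid_frame > reported_frame {
        // Only being *ahead* of the reported frame indicates a real bug.
        tracing::error!(
            "[BUG] last_valid_frame {last_valid_frame} > reported {reported_frame}"
        );
    }
}
```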
* connection: vacuum before checkpointing
This commit adds an optional VACUUM operation before our
periodic checkpoint -- since VACUUM should always be followed
by a checkpoint.
There are two criteria to qualify for vacuuming:
1. We have at least 32MiB of data
2. There are more free pages in the db than regular ones
If both hold, we're keeping the db file more than 2x larger
than it needs to be, and that calls for a vacuum.
Also, for small databases, we skip vacuuming entirely.
Fixes #734
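A sketch of the two criteria, assuming rusqlite; the threshold constant and names are illustrative, not sqld's exact code:
```
const VACUUM_MIN_BYTES: i64 = 32 * 1024 * 1024; // criterion 1: skip small dbs

fn should_vacuum(conn: &rusqlite::Connection) -> rusqlite::Result<bool> {
    let page_size: i64 = conn.query_row("PRAGMA page_size", [], |r| r.get(0))?;
    let page_count: i64 = conn.query_row("PRAGMA page_count", [], |r| r.get(0))?;
    let free_pages: i64 = conn.query_row("PRAGMA freelist_count", [], |r| r.get(0))?;
    // criterion 1: at least 32MiB of data
    let big_enough = page_count * page_size >= VACUUM_MIN_BYTES;
    // criterion 2: more free pages than live ones, i.e. the file is
    // more than 2x larger than it needs to be
    let mostly_free = free_pages > page_count - free_pages;
    Ok(big_enough && mostly_free)
}
```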
* hrana: add diagnostics for connections
This commit adds a /v2/diagnostics endpoint which prints
various information about current hrana-over-http connections.
Draft, because the diagnostics are currently in a very debuggy
format, and I'm figuring out if we can make it more human-readable.
Still, they're enough to determine if something is holding a lock
via an abandoned hrana-over-http stream.
Example:
```
$ curl -s http://localhost:8080/v2/diagnostics | jq
[
  "expired",
  "expired",
  "expired",
  "expired",
  "expired",
  "(conn: Mutex { data: <locked> }, timeout_ms: 872, stolen: false)",
  "(conn: Mutex { data: <locked> }, timeout_ms: 0, stolen: true)"
]
```
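A rough sketch of what serving such diagnostics could look like in axum, assuming each hrana-over-http stream is tracked in a shared registry; `StreamRegistry` and the wiring are illustrative, not the actual sqld implementation:
```
use axum::{extract::State, Json};
use std::sync::{Arc, Mutex};

// One human-readable state string per tracked hrana-over-http stream.
struct StreamRegistry {
    states: Mutex<Vec<String>>,
}

async fn diagnostics(State(reg): State<Arc<StreamRegistry>>) -> Json<Vec<String>> {
    // Serializing a Vec<String> yields the JSON array shown above.
    Json(reg.states.lock().unwrap().clone())
}
```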
* apply review suggestions: no more Debug required in WalHook
* apply review fixes: move everything to admin api
* Update sqld/src/http/admin/mod.rs
Co-authored-by: ad hoc <postma.marin@protonmail.com>
* fix Json return type
* revert leftover change - returning from passive checkpoints
---------
Co-authored-by: ad hoc <postma.marin@protonmail.com>
* bottomless: checkpoint before initializing bottomless
Due to a bug in wallog recovery, we need to checkpoint
the database *strictly before* we initialize bottomless.
A proper fix should be to use our virtual WAL methods
for checkpointing, but there's an initialization cycle
and resolving it will be a larger patch - a connection
with WAL methods wants us to already have the replication
logger created, and the replication logger wants to perform
a checkpoint on creation.
As a mid-term solution, we just perform the forbidden
regular checkpoint before bottomless is ever launched.
Combined with the fact that bottomless treats existing
databases as the source of truth, it just creates
a new backup generation and continues working properly.
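The ordering, as a sketch (not the actual sqld code), assuming rusqlite; `init_bottomless` is a hypothetical stand-in:
```
fn open_database(db_path: &std::path::Path) -> rusqlite::Result<()> {
    let conn = rusqlite::Connection::open(db_path)?;
    // The "forbidden" regular checkpoint: flush any leftover frames in
    // data-wal into the main db file before bottomless ever sees them.
    conn.query_row("PRAGMA wal_checkpoint(TRUNCATE)", [], |_| Ok(()))?;
    drop(conn);
    // Only now start bottomless; it treats the existing db file as the
    // source of truth and opens a fresh backup generation.
    // init_bottomless(db_path)?; // hypothetical call, for ordering only
    Ok(())
}
```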
The following scenario was buggy before:
1. We leave the db in a state where some WAL frames
still exist in the data-wal file
2. We restart sqld
3. bottomless is initialized, it reuses the existing db
and WAL frames and uploads them to S3, to avoid
creating a potentially costly snapshot
4. ReplicationLogger::new() incorrectly calls
sqlite3_wal_checkpoint which swipes data from under
bottomless.
5. Bottomless thinks it hasn't checkpointed and continues
to write WAL frames. As a result, it writes garbage
to S3, because the db was checkpointed outside
of bottomless's control.
* fmt fix