mirror of https://github.com/tursodatabase/libsql.git synced 2025-05-18 19:57:01 +00:00
libsql/libsql-sqlite3/test/tester.tcl
Pekka Enberg f996bf9f18 Merge upstream SQLite 3.45.1 (#1054)
* Remove unused elements from the json_tree() cursor.

FossilOrigin-Name: 914a50117d477b2cd30d58388fb8d1b71ff7ff6842ba025f38efc6e9647d06d0

* Same results as the legacy JsonNode implementation on a small set of test cases.

FossilOrigin-Name: c3da4b079a1a15a4c0b1a6e71f876648b1d9eb32eddc67b9946c2475c7b6d085

* Fix corner-case error conditions.

FossilOrigin-Name: ec23d34ab75e1d7e9366e59c633e0d30def8759f6d4717583ebeb4c90aeccf0d

* All tests passing.

FossilOrigin-Name: b5a5660ca22437640c9bf32c44d92c76a7293dafcbaf4fa6a4c171128d64871d

* Give the json_valid() function an optional second argument that determines
what is meant by "valid".

FossilOrigin-Name: a4e19ad43dac81e7655ec03ff69bb99d1d02b0c227034c90fb41415fd4793fe3
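
A rough sketch of the new argument, written in the style of the tester.tcl harness shown below (the bitmask values follow the SQLite 3.45 json_valid() documentation: 1 means strict RFC-8259 text, 2 means JSON5 text; treat the exact values and the test name as assumptions, not part of this check-in):

    do_execsql_test json-valid-2nd-arg {
      SELECT json_valid('{"a":1}', 1);  -- canonical JSON text      -> 1
      SELECT json_valid('{a:1}', 1);    -- JSON5 fails strict check -> 0
      SELECT json_valid('{a:1}', 2);    -- accepted as JSON5 text   -> 1
    } {1 0 1}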

* Enhance the (SQLITE_DEBUG-only) json_parse() routine so that it shows a 
decoding of JSONB when given a BLOB argument.

FossilOrigin-Name: af267868562e0799ad691dccad05f17afbc34d609eede8c55f57d209290246ef

* In SQLITE_ENABLE_SETLK_TIMEOUT builds, use blocking locks in place of sleep() when opening a read-transaction.

FossilOrigin-Name: a51ef39998e25e86bd0600e71d15011b12e05f4319608018293bdaecb09e8c97

* Have SQLITE_ENABLE_SETLK_TIMEOUT builds block when locking a read-lock slot.

FossilOrigin-Name: f797baf47cf7859cfd8ce248f4f3087af4551a7040af990333426e5a7c269504

* Add untested (#ifdefed-out) code for the MergePatch algorithm against JSONB.
Add (and test) the jsonBlobEdit() routine that is needed by the new MergePatch.

FossilOrigin-Name: 4d353387fc10e1038cfdd86e66007bf728c231a928e588897bbee0fbfe76f225

* More aggressive use of jsonBlobEdit().  Improvements to the MergePatch
implementation sketch.

FossilOrigin-Name: fbca9570fd2e1465739e4d3a8d9bb40fad594fd78ab49b2cb34efa27ebdd8361

* The json_patch() code for JSONB compiles and works sometimes, but there are
still issues.  Incremental check-in.

FossilOrigin-Name: e0099464a0045a04f4ccf29bc2b8325fc8c7f39ccf4847e74818f928c9153588

* All legacy tests are passing.

FossilOrigin-Name: 2c436806b8d5f57de99c00f6154b038454fb9ae427d00d7b4a46ab9c7c69bcb9

* Handle an SQLITE_BUSY_TIMEOUT error if one occurs while attempting a shared lock on a read-lock slot.

FossilOrigin-Name: 5fbf3906d272df3eb981f67455eb35f649ad2774cba9fc3f077b28d9bef3f0cb

* The json_remove() function now uses only JSONB, never JsonNodes, internally.

FossilOrigin-Name: b69786e746ae2b927b64d9871fd120b7f8f06cc53739fd46a4da51aa16cf8576

* Attempt to get json_extract() working with pure JSONB only, and without
the use of JsonNode.  Mostly working, but there are some differences from
legacy in corner cases.

FossilOrigin-Name: 8c324af1eca27e86adc45622af4f3b06a67a3f968596ac58aa7434b1f6f05f3c

* Preserve flexibility in the format of the RHS of -> and ->> operators found
in legacy.

FossilOrigin-Name: 6231ec43adb7436195eb1497de39a6c13c6b4f1c5032e6ea52515d214e61fdbc

* Do not set the J subtype when the output is JSONB.

FossilOrigin-Name: 4f106b64fe8988435872806bd0a6c223b61f53af0dd1c47c847bb4eec4e03e27

* Convert the json_array_length() function to use JSONB instead of JsonNodes.

FossilOrigin-Name: 5ab790736d943e08f097efcee5cfbf0d83c65b0a53f273060330ba719affa5e5

* The assertion change at check-in [7946c79567b0ccd3] is insufficient to fix
the problem of a Table object being deleted out from under the OP_VCheck
opcode.  We need to reference count the Table, which is accomplished here.

FossilOrigin-Name: cad269d5e274443c39203a56603b991accc0399135d436996fc039d1d28ec9db

* In the recovery extension, if a payload size is unreasonably large, it is
probably corrupt, so truncate it.

FossilOrigin-Name: 988c3179e978a3a6d42541e9c7a2ab98150383671810926503376ed808f150ff

* Fix signed integer overflow in fts5.

FossilOrigin-Name: 60e46c7ec68fd8caaed960ca06d98fb06855b2d0bb860dd2fb7b5e89a5e9c7b4

* The json_patch() function now operates exclusively on JSONB.  This patch
also includes improvements to JSONB debug printing routines.

FossilOrigin-Name: fee19d0098242110d2c44ec7b9620c1210ef3f87913305f66ec85d277dd96ab6
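
json_patch() implements the RFC 7396 MergePatch algorithm: a null in the patch deletes the corresponding key, and any other value replaces it. A minimal tester.tcl-style sketch (hypothetical test name):

    do_execsql_test json-patch-sketch {
      SELECT json_patch('{"a":1,"b":2}', '{"b":null,"c":3}');
    } {{{"a":1,"c":3}}}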

* Convert the json_error_position() routine to use only JSONB internally.

FossilOrigin-Name: e7a8ba35bff6fde55827f978de5b343b6c134c7fa53827f5c63915a9dc2598ad

* Convert json_insert(), json_replace(), json_set() to use JSONB internally.
Mostly working, but some corner cases are still not quite right.

FossilOrigin-Name: 99c8f6bd5c9a31b6d00f92e383bec8a8235ed553916ad59adbb1b7663f6ebff1

* Update some OPFS-related help text in WASM tests. Minor cleanups in speedtest1-worker.js.

FossilOrigin-Name: 263f6d3a7784ef7d032dbf7a3265aca8dd70bf50797f28f6b2e8ddb6a301f83a

* New test cases for insert/set/replace with paths that indicate substructure
that does not yet exist.

FossilOrigin-Name: 146c717c51940b2139befc45ac74e7a1c36ef3c32fd3cfe35b334488eebe6298

* New JSON test cases showing insert or set with missing substructure.

FossilOrigin-Name: 6802b6459d0d16c961ff41d240a6c88287f197d8f609090f79308707490a49c2

* Simplification of the new JSON insert/set test cases.

FossilOrigin-Name: 04c0d5644372446c924a2e31a26edf51ddc563a1990d170b0ed4739e3e8b239b

* Enhance json_set() and json_insert() so that they create missing
substructure.

FossilOrigin-Name: cc7a641ab5ae739d31c24f0ad0caeb15a481a63fa8f13720718ea922c25862ff

* Convert json_type() to use JSONB internally.

FossilOrigin-Name: 83074835b900ce85cf67059e674ce959801505c37592671af25ca0af7ed483f1

* Add a basic batch-mode SQL runner for the SAH Pool VFS, for use in comparing it against WebSQL. Bring the WebSQL batch runner up to date, noting that it cannot run without addition of an "origin trial" activation key from Google because that's now the only way to enable WebSQL in Chrome (that part is not checked in because that key is private). Minor code-adjacent cleanups.

FossilOrigin-Name: 883990e7938c1f63906300a6113f0fadce143913b7c384e8aeb5f886f0be7c62

* Convert json_valid() over to using only JSONB as its internal format.

FossilOrigin-Name: 7b5756fa6d00b093bf083a8d7a5ef5485f7a09e4eac473785c8380688f861a1b

* Remove all trace of JsonNode from the JSON implementation.  The JSONB format
is used as the internal binary encoding for searching and editing.

FossilOrigin-Name: 11ebb5f712cc7a515e2e0f2be8c1d71de20c97fe5b74c4f4d72c84fd21182d35

* First attempt to get the JSON text-to-binary cache working.  All test cases
pass, but the cache seems not to help much.

FossilOrigin-Name: 25ed295f300fea6185104a73721076bccd2b2a6e411c78564266fa6dca4ff70c

* Cache is working better, but does not preserve the hasJson5 flag.

FossilOrigin-Name: a12add7ab9f5aee5bb2ede0c4d22e599dd28f7a107dce72b2ea48ef92d233e8a

* Fix up the JSON cache to work better.

FossilOrigin-Name: 1fdbc39521f63aedc6f08ecaafa54ea467b8c6316a692a18ad01eecbf22a0977

* Different approach to querying a tokendata=1 table. Saves cpu and memory.

FossilOrigin-Name: c523f40895866e6fc979a26483dbea8206126b4bbdf4b73b77263c09e13c855e

* Remove old code for tokendata=1 queries.

FossilOrigin-Name: b0a489e8e1bf0290c2117ab32d78b1cc7d67bcb226b55ec044c8367ebde3815b

* Performance optimization in the JSON parser.

FossilOrigin-Name: 68d191f40e708962ec88e0c245b4496bc4a671300484b1cc0f3fc7e6d199a6e6

* Fix harmless compiler warnings and enhance the performance of the parser.

FossilOrigin-Name: 285633da6d188547e52f07779e209c9e5f3dc33ce0668e14858f3337889ef4b8

* Unroll a loop in the parser for a performance increase.

FossilOrigin-Name: a6dc29e4d5e13949e0fcd9d5dde575c2670eb10a230ab9df3806fc8c3016c540

* Remove a NEVER that can be true if a virtual table column is declared to have
a DEFAULT.  See
[forum:/forumpost/3d4de8917627d058|forum post 3d4de8917627d058].

FossilOrigin-Name: 8abc2ccaf8106f20243568cd7fa74174386eb85d7ea381201e97e2fd527033e0

* Simplification and optimization of the JSON parser.

FossilOrigin-Name: f5ec9485119a2a6cb33eb864c7ca9b41d4a2ed08ab6ad9a6b0dd9358ab253576

* Performance optimization in jsonAppendString().

FossilOrigin-Name: fdf00e96239c73fb67e2acecc5b95f55a1fc51c3deed4512613c0d6070ce5805

* Minor fix to the header comment on jsonXlateTextToBlob().

FossilOrigin-Name: c3677ba410208c07b711f5f526eb5cf039a8eee49f632c7ae04fa55cdfbb9058

* Fix potential unsigned integer underflow in jsonAppendString().

FossilOrigin-Name: d2fba2cbdc3870d34228c1a9446eced884325acc183900d7dd0b96132570fb4a

* Do not allow a JsonParse object to be considered "editable" after an OOM.

FossilOrigin-Name: c6bacf57bd6fe0fee00c9d41163a270b60997c20659949971bbf5c6c62622bfe

* Protect a memcpy() against OOM conditions.

FossilOrigin-Name: 26144d1c25ae0435db568009ba05e485d23d146f2b1f29f3a426c87860316aed

* Ensure that tokendata=1 queries avoid loading large doclists for queries like "common AND uncommon", just as tokendata=0 queries do.

FossilOrigin-Name: 7bda09ab404a110d57449e149a3281fca8dc4cacf7bd9832ea2a1356ad20fe8e

* Take extra care to ensure that JSONB values that are in cache are actually
owned by the JSON subsystem, and that ownership of such values is not handed
back to the bytecode engine.

FossilOrigin-Name: 1304534001e9ef66c6b12752b69d790bfa3427cc803f87cc48ca22ae12df0fdf

* When tokendata=1 queries require multiple segment-cursors, allow those cursors to share a single array of in-memory tombstone pages.

FossilOrigin-Name: e0175d07e4094db5ea4b0378a5ff480dafb6ba9da86a113fa767c4c89c3c866f

* Fix harmless compiler warnings.  Refactor some identifier names for
clearer presentation.

FossilOrigin-Name: 7e3941502789c5afaf19b08112f464abf5e3cba7f92fc9290af2a0f96127ad9a

* Code and comment cleanup.  Everything should work the same.

FossilOrigin-Name: c640754df0d3ffdad994745f0d0e10c8f19f424b87f6a6e6e269491a0350b950

* Fix various compiler warnings and other problems with the new code on this branch.

FossilOrigin-Name: 3a623cfa173b4035c759cb84985d11d8727053beb383648503987d6ab15c0ef0

* Fix harmless compiler warnings reported by MSVC.

FossilOrigin-Name: 419652c0c82980bd043584dcd2976f91dfff7b926b216d597698299850b855c0

* Implement strict JSONB checking in the json_valid() function.

FossilOrigin-Name: 0f26d38880fcbc207abcc94dbc170a7428bab1b4f0b7731aaf5bee0224000994

* Minor code changes for consistency and to simplify testing.

FossilOrigin-Name: df272bd837910ad9e03e222716a1201a601399664365f1dcf73d5932372518ed

* Do not let bad hexadecimal digits in malformed JSONB cause an assertion fault.

FossilOrigin-Name: 8dec1ba1e5076ff596756e00c1e2ada0245f168a503dd1cadadf848331acfac3

* Enable incorrect JSONB to be rendered into text without hitting an
assertion for a bad whitespace escape in a string.

FossilOrigin-Name: 4d6a9a217df6792b41766b774fb0c0553b45f9104c26a0955bf4a30862d7d7bf

* Ensure that OOM conditions in the generation of the "bad JSON path" error
message result in an SQLITE_NOMEM error.

FossilOrigin-Name: aa0e02b5c26a2ef3d6216a0ed8bc01382be43173485f898cb63f2a8c559f2e74

* Avoid problems when the path argument to json_tree() contains embedded U+0000
characters.

FossilOrigin-Name: 9f055091af01a5dddba1a7e9868ad030c8f206237e1569215cb161e53e54aa71

* Remove dead code.  Improved reporting of errors in JSON inputs.

FossilOrigin-Name: 2eaa738e6b5c1b67b3e57c868d9c3a30eea38a0b3b8b02482f06d57a45b10921

* Back off on the use of strlen() for situations where sqlite3_value_bytes()
will work as well, for performance.

FossilOrigin-Name: 79fb54fbb8b9c30f47cdbd437d24a21542716241e822749e5e28c9fbc449bfa8

* Better pre-scan size estimations for objects in the JSON parser resulting
in fewer reallocations and memmove operations.

FossilOrigin-Name: 526b27f90897f5e35dfff7257daf6c4ce4798d649b09b8aecfb02df0449e3c51

* Repair issues and inefficiencies found during testing.

FossilOrigin-Name: ae973cb1515f9d76409c92a2ca2ffd6b71f32b0b490a4886770e7c1b90f12611

* Add tests for using tokendata=1 and contentless_delete=1 together.

FossilOrigin-Name: a2506b8c9718054912270055638204753c4156bbc115e55194e6df9d7e76cb10

* Two new NEVER macros.

FossilOrigin-Name: 52632c92cb06faf0e804654b3490fd6c199521107bd30c8fcbc3a2a5a488098f

* Remove reachable ALWAYS and NEVER macros.

FossilOrigin-Name: f601de3eeabd85993c1f5ee96b62de6fdabbeae2fe8950e00d08feb48d42c498

* Fix bug in xInstToken() causing the wrong token to be returned.

FossilOrigin-Name: da78d07e77cbc783fbc725758911c230fd6a1c1885d9576125de955dcc2bd37f

* Continuing simplifications and code cleanup.

FossilOrigin-Name: ddf92b5059a9106753fd18b82ba8daa269a62af947561c460790107b83416f0b

* Fix a problem with the xInstCount() API and "ORDER BY rank" queries.

FossilOrigin-Name: 317a50563d9e8586fda136e513727241b414e7267d50a06571c8ebd0eae710bc

* Fix memory leak in new code on this branch.

FossilOrigin-Name: ebc160b9a05568df66f86e30804399ee29d34b44a60c57e062f98cb92826353f

* Fixes for xInstToken() with tokendata=0 tables. And with prefix queries.

FossilOrigin-Name: 78fbb71598b1ca756acc078253880a1d0f7983a5a26b9efc683e6488122505a1

* Fix errors in rendering JSON5 escape sequences embedded in JSONB.

FossilOrigin-Name: f1a51ae3863557526a51c6e98e71fcdf4f1ed14a36212b3c90f7408f926345e4

* Do not make the input JSONB editable in json_remove() if there are no PATH
arguments.

FossilOrigin-Name: 66594544f3ba9977475a3e3f74404eb2b2fb845053b28bd24c2b52c7df94e9d7

* Fixes to error handling in json_array_length().

FossilOrigin-Name: aa85df2d26b74c171c55bde19ef17c4f11f40b8af7181bbf7162f87cdea7e88b

* Add further tests for xInstToken().

FossilOrigin-Name: 8582707f16133f003a6687f68cbea03d4eb6c2a0e2e07746b7cace0c44e84fa4

* Rename the internal routine jsonMergePatchBlob() to just jsonMergePatch().

FossilOrigin-Name: ebf667b616235bb64b83832008342ba5e7b10b2c170d7cebc431f040fef7ecfb

* Fix OOM and corrupt JSONB handling in json_patch().

FossilOrigin-Name: 1910feb0b7d5cc2b810c3322f6cca281d8730182d30d162bd7bb56800979ea91

* Use an assert() to fix a harmless static analyzer warning.

FossilOrigin-Name: a249ca657e624028bc6b3d2c2bcedd7162d118addb7d62ce519920cecebf1860

* Clean up the JSONB performance test script.

FossilOrigin-Name: 905301075a7fc1010ee7e754867b1b698c9b8576d50e98125def32a5dfb7ee9d

* Small performance gain by unwinding the string literal delimiter search
loop in the JSON parser by one more level.

FossilOrigin-Name: 4c587feac153e8ebe526559ec3d254f545f81e8d1ed3126f91a5ff25ec4aa72e

* Use strspn() to accelerate whitespace bypass in the JSON parser.

FossilOrigin-Name: 843197df08352bdff4b87be91d160e574572aded0d0c66142fd960000c0b4701

* Miscellaneous comment cleanup and typo fixes.

FossilOrigin-Name: 59446dc0bd0091572122a3c8b4653d7a2dc867d16c4a5919f79b81bc3a673ce3

* Further tests for the new code on this branch.

FossilOrigin-Name: 59d008b6c23ab900377bc696ee19381feb7614bac80546eae361e401c3620c4e

* Use extra assert() statement to silence harmless static analyzer warnings.

FossilOrigin-Name: 174c2b2eef5fecd96a5fc89b81032fe81f7801f12097cea10e7e7f0a02114813

* README.md typo fix reported in the forum and update all links from http: to https:.

FossilOrigin-Name: 5c48acdbb44185b352b54911a57a6986d6c7e624bdeba2af48b985d29f0292bf

* Increased rigor in comparisons between object labels in JSON.

FossilOrigin-Name: 2bc86d145fccc07107b7753cb1a69122676d4096fe59c454497bd81a6142d45e

* The rule for the RHS of the ->> and -> operators, when the RHS does not begin
with $, is that it must be (1) all digits, (2) all alphanumerics, or
(3) contained within [..]; otherwise it becomes a quoted label.
FossilOrigin-Name: 0e059a546ec11fa5c6d007bd65c249ee2422f1facbdb2792c53e0bc0ccc97e14
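
A sketch of the three unquoted forms next to the fallback case, written as a tester.tcl-style test (hypothetical test name; expected values follow the rule stated above):

    do_execsql_test json-arrow-rhs {
      SELECT '[11,22,33]' -> '2';     -- all digits: array index   -> 33
      SELECT '{"ab":7}' -> 'ab';      -- alphanumeric label        -> 7
      SELECT '[11,22,33]' -> '[1]';   -- bracketed path            -> 22
      SELECT '{"a b":9}' -> 'a b';    -- otherwise: a quoted label -> 9
    } {33 7 22 9}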

* Test cases for object label matching with escape sequences.

FossilOrigin-Name: c6f2aa38e95b7888650cfa7bb773b18a28e01d883033ac77be6d504ffe417d18

* In CLI, move -interactive flag handling back to arg-loop pass 2.

FossilOrigin-Name: 63cb05a862532d2d56e9e81fe32ced09bf58f03146587a118f11c2a84e195e69

* Fix the routine that determines the json_tree.path value for the first row
so that it correctly takes into account escape sequences in the path
argument.

FossilOrigin-Name: b9243ee8a37c62eb8848e765bd4af83bc1b3d3eb24fb4268a1357ad1f8b2e1fb

* Correctly handle 8-byte sizes in the JSONB format.
[forum:/forumpost/283daf08e91183fc|Forum post 283daf08e91183fc].

FossilOrigin-Name: 73d390f39c0bbbc017e01544e4d43c76761f2599bd57f900131c706270dfd202

* Update documentation comments in fts5.h.

FossilOrigin-Name: 38c50e22c98607e6c1fd78d7615cda534773b6d4fd85c712b54749fcd7af0c83

* Work around LLVM's newfound hatred of function pointer casts.
[forum:/forumpost/1a7d257346636292|Forum post 1a7d257346636292].

FossilOrigin-Name: ec0ae4030968c782af48d1c776351c14b2ada21d40aeb97915f33df30706e18f

* Fix compiler warning about shadowed variable in fts5_index.c.

FossilOrigin-Name: ee70e4c1c9c41617850228e48d8df44f105cf2fbbe789340ceca6f27ad6ce5eb

* Improved detection of corrupt JSONB in the jsonReturnFromBlob() function.

FossilOrigin-Name: b014736c1f80ccc46fb4b24ac04310a6ce5cb5b6653665efff366cb3bc742257

* Add ALWAYS() on branches added in [ec0ae4030968c782] that are always true.

FossilOrigin-Name: 451cef8609e96dd9244818adc5c6f240544694bcb4ae620e88f90e403e59d70f

* Rework the jsonEachPathLength() routine in json_tree() so that it is
less susceptible to problems due to goofy object labels.

FossilOrigin-Name: 858b76a00e8ff55215f7a2e6a4cd77fc4d4f98dea7224cd90488744f5ce246a4

* Different fix for the fts5 COMMIT-following-OOM problem first fixed by [fba3129d]. This one does not cause problems if an fts5 table is renamed and then dropped within the same transaction.

FossilOrigin-Name: d8c6b246944934a7a6e027b3f5b986fd64a19dd5c5c5175f4ea8586da59a6764

* Fix a problem with handling OOM and other errors in fts5 when querying tokendata=1 tables.

FossilOrigin-Name: bc911ab5953532956510c199be72b1d3c556f2d0ddbd7fc0ae6f5f917b337b48

* Fix a null-pointer dereference in fts5 tokendata=1 code.

FossilOrigin-Name: d69fa8f0504887f968d9a190ecb889ddb40bb1b56d0d4479f9819c106aec719b

* Avoid an assert() failure when querying an fts5vocab table that accesses a tokendata=1 fts5 table with corrupt %_data records.

FossilOrigin-Name: 386ba9e20423fb2f623d6adc9d3c310fb1b135f54a1dad15ef3b593d97886926

* Ensure an fts5vocab table never uses a special tokendata=1 merge cursor.

FossilOrigin-Name: 1e26510e83b40c9bd2e8bfa2a0e81f2cb915e78fed773204ef537683e48b61dc

* Avoid dropping an error code in new fts5 tokendata=1 code.

FossilOrigin-Name: a66596e33dc9aa4bab2ec3ff45546e1321d0a11bdc764f8381b315292ca92423

* Fix a harmless compiler warning about "confusing indentation".

FossilOrigin-Name: 34f9e9a8c4bea13f60f43062e25cd7d9422f2e7f5b371ed0ddadc9abeb3ca256

* Fix a potential problem with RCStr access on a JsonString object that is not
really an RCStr.  Fuzzer/UBSAN find.

FossilOrigin-Name: d2f2174ce2cc89606034e158149a2d05fc3627ec4d5cdb772add7a2250f29d78

* Fix a harmless UBSAN warning.

FossilOrigin-Name: 1503cba6d17e9bade7a5c103ddd23241ff4741f9a2e3032ffe2987af243dae65

* Fix a potential use of uninitialized value in json_valid() with 2nd
argument of 8.

FossilOrigin-Name: fa102036fe46eeb71b7df3e265be1935ae5c78e0b939b08841bcfb8abadbc77a

* Work toward enhanced functionality for json_valid() with deep checking
of the JSONB (second argument has bit 0x08).

FossilOrigin-Name: c370d573198b151767f04e91bf8baa4ae0076751ae468c5709742a0b0ed16770

* Add SQLITE_TESTCTRL_VALIDATE_JSONB, which if enabled under SQLITE_DEBUG causes
cross-checking of generated JSONB.

FossilOrigin-Name: b410a4db74a650003539ffaaea18519d5159b504daac47db6a4874b730f40ac8

* Rename the new test-control to SQLITE_TESTCTRL_JSON_SELFCHECK.  Make it so
that the current value of the setting can be interrogated.

FossilOrigin-Name: 7aff1d9a4cb17ecd5abab21ab032f35a78741dd641ddd8cbcc85fc4a81a0707d

* Activate JSON_SELFCHECK within fuzzcheck.

FossilOrigin-Name: 4d14e733bb521aed65e98533969d2303738232ae87dab70fdf7962e6513195f5

* json_valid(*,8) allows minus-signs on hexadecimal literals.

FossilOrigin-Name: c0d7f4520d839a268b3fd2474d0897a9832aa608bd6238b3e287fabecf07a350

* json_error_position() now uses jsonValidityCheck() to find the approximate
position of an error in a JSONB blob.

FossilOrigin-Name: c3d60cf7028a333b825d5b89516945a73e0c158ac81d8bcc117d21bfd98602c8

* The json_error_position() function now reports an approximate byte offset
to the problem in a JSONB if there is a problem.

FossilOrigin-Name: 80d5d94dff6a2d2557039be3d7d47c1a6003c4b98defe0bd411acfeb963ad5dd

* Validity checking of text nodes in JSONB.

FossilOrigin-Name: fa5160687c2f970d407e8af73c246f7cd806bb4ce35f29a79ac534a8646a6c8e

* Improvements to JSONB validation - catch more cases where the input does
not conform to spec.

FossilOrigin-Name: be1864eac4eb75cc30bf98f73092c8608467f4bd956240df6a0cbea9f1e09e85

* Add NEVER to two unreachable branches in JSON.

FossilOrigin-Name: c96ebb086feb89341565cc52b970ae7799ce1327fe1ad4fc790f1b0dcaa6e229

* Worker1 Promiser API: when multiple db connections are active then use the requested connection instead of always the first-opened connection. Bug reported in [forum:894c330e7f23b177|forum post 894c330e7f23b177].

FossilOrigin-Name: 194276e18e0268829061c09317e7f9f527a703eb45f1755ff1dd30bd99dc1b68

* Fix the JSON object label comparison routine so that it works correctly even
if the label ends with escaped whitespace.

FossilOrigin-Name: 4d5353cadd7b7c5f105bc197f3ec739e2d041472d6b3e939654c9f9cfc2749ae

* Improvements to UTF8 handling, and especially the handling of invalid UTF8,
in the JSON routines.

FossilOrigin-Name: 1b229c1101d6c384a30f343c5e47b471ab084b2d8e81170eb8f642afc1c67e3b

* Bug fix in the JSONB validator.
dbsqlfuzz ac6fa521a08609a642198e7decf64180e750b3c4

FossilOrigin-Name: 3e940a6a08b0a0434650cd3d8dd286e09ad8ab805b0a4d515e57bba5d3608577

* Avoid invoking sqlite3ExprColUsage() on an unresolved column reference.
dbsqlfuzz fc34aa62df4de103705d11b807074687ffafbda5.

FossilOrigin-Name: ac9314c0e335694b48c613145f5397247bb88c51806cd0dc3ed4ec306db4bbad

* In CLI, fix .read inability to open 2GB+ files on WIN32.

FossilOrigin-Name: 56c80a62d2e033d64ba5d545ae9cbe3ed7c9d046c0a3fafb6cfa2f0b562d1ef0

* Pass subtype information through the aggregate ORDER BY sorter for
aggregate functions that use subtype information.

FossilOrigin-Name: 3536f4030eab6d650b7ed729d2f71eb6cc3b5fbe16b4e96b99008d66522aaccb

* Improve the error message returned by an fts5 'rebuild' command on an external content table if there is a problem with the content table or view.

FossilOrigin-Name: 0fbf4b8a58fde1c187908934da6f59999b146f32e07ac255cc531c5c4d7007fd

* Fix harmless compiler warnings in JSON and FTS5.

FossilOrigin-Name: 90135efccfeb1046f002bfcbd8dfec9a1a3b40cbe1b5c714ae065b06368e354f

* Add assert()s to FTS5 to fix static analyzer warnings.

FossilOrigin-Name: 27d4a89a5ff96b7b7fc5dc9650e1269f7c7edf91de9b9aafce40be9ecc8b95e9

* Use SQLITE_STRICT_SUBTYPE=1 by default for the JNI and WASM builds unless they're explicitly built with SQLITE_STRICT_SUBTYPE=0.

FossilOrigin-Name: 990211357badf0ab08bd34cf6d25b58849d0fd8503e289c1839fc837a74e1909

* Correct --enable-sab flag in ext/wasm/GNUmakefile to fix a silent althttpd args-parsing error.

FossilOrigin-Name: 7b9b757d872a31395b0f6454e2309a6a4664b8bdd8749f6a15371cbe72c05b60

* Avoid running the "no_mutex_try" tests with SQLITE_ENABLE_SETLK_TIMEOUT builds as part of the release test.

FossilOrigin-Name: 6b4e1344a28c213cbe8fb97f7f3f6688de93fb73ed96bf460ff74c959da1a712

* Do not run test script fts5origintest4.test with either "memsubsys1" or "mmap" permutations.

FossilOrigin-Name: 05a63d9603ef42cbee6dadff72d97583a9c78e549f70e9a808534d5c1ae7c28a

* Fix a new JSON test case so that it works even if SQLITE_OMIT_VIRTUALTABLE
is defined.

FossilOrigin-Name: b995aae510888a9746b46545d176a0885d4738e1f1bc0b7ad7937ed023efd7d6

* Add mention of --buildonly and --dryrun to the testrunner.tcl usage screen.

FossilOrigin-Name: 23b92d915c12ee768857e2c3c961832f390cad9b53b8bcfc2b97664baab25bb7

* Avoid expiring prepared statements in the middle of an integrity-check.

FossilOrigin-Name: 88beb48472da4667c0727c8ebabe046ea526450ff837fe789d041ed3f1ff105e

* In the count-of-view optimization, defer freeing obsolete parts of the
parse tree, on the off-chance that some other part of the code might be
holding a pointer to those parts.

FossilOrigin-Name: da442578856c87137eb1677d9b13b7c1cf15828cc41d4756572b278060f69bae

* New test case based on Chromium bug report 1511689.

FossilOrigin-Name: 2c7ef4b4d215f99f8d6787adb64e2037ae96e5dd6cb49c8b81634249f5e1b328

* Enable SQLITE_STRICT_SUBTYPE for default builds of the shell, fuzzcheck,
and testfixture.

FossilOrigin-Name: 5a0c517ed7e46c0f8a3db752cf5b9f8010c60f35084606abe9e7c1c4f993b4a7

* Enhancements to the "randomjson.c" extension.  Automatically load that extension
into fuzzcheck.

FossilOrigin-Name: 70620405ab01d6a5d38bafa9ae175fd6e4eabaf2efb7854734278dafd7b05c99

* Enhancements to ext/misc/randomjson.c.

FossilOrigin-Name: a4e6d1f86f3a502e4170f5a90031e269e48363e95114a66b84d373e3ce0b2704

* Bug fix in the randomjson.c extension.

FossilOrigin-Name: 1f3a33df530dbe330ea8b14a69369b807b413b25a167d1a3938f8f0faf97cc91

* Ensure that all object labels for individual objects generated by
randomjson.c are unique.

FossilOrigin-Name: 29c46aca231b3f1e997ef306a5a651408185bf3ad09ab9fc1fe21ed18caa4d02

* Add randomjson.c to testfixture.  Use it for a new set of invariant tests
against JSON functions.

FossilOrigin-Name: f1c040606bfe784804134d8f3ca130908fad5212b47e3c32792baab977470943

* Ensure that the insert/delete size delta on JSONB objects in the JSON cache
are always set to zero.

FossilOrigin-Name: 4b4581668a908473dbf1322a3e98bc7cca122998c44518ea183af7f0d1ba9f95

* Fix JSON to JSONB translation so that it deals correctly with Infinity
and NaN.

FossilOrigin-Name: 178cb84f36bdb45ba17511900d6d8ea8dfa14912fc5bf7094a20348174a36c95

* Add NEVER() to an unfalsifiable branch.

FossilOrigin-Name: 9a0c67db366d38a0b0741f6a1ae333cf27cfe6f6b7c6eed94bdec9686f9f9f8a

* New JSON invariant test cases.

FossilOrigin-Name: a6a1367b0bf364b1a2e20e153c5f4a578624b8846f9ec0b7c9c3cba0ea2ec346

* Remove a stray comment in the JSON code.

FossilOrigin-Name: 6618bdf0679405b43911ea8cd94050b12a5dc469f3dfe4759ee3ff850a55229e

* Extra ALWAYS() macros to verify state in the sqlite3ExprCanBeNull() routine.

FossilOrigin-Name: be19b84c9f3fe127165809908add148dbe9a827a55608b0490de7e69b7f7f191

* Always make the sqlite_dbdata virtual table available in the CLI.

FossilOrigin-Name: e5fd3b32ad87586a7413570e568c9c1859a37a4f836cca074126471b125fb682

* When unable to resolve an identifier, change the Expr node into TK_NULL
rather than TK_COLUMN, to prevent any downstream misuse of the non-existent
column.  dbsqlfuzz 71869261db80a95e4733afa10ff5724bf3c78592.

FossilOrigin-Name: d2e6117e4f97ab98b01deb5fcad5520f8181d00bed8d904d34963c01d73df857

* Test case for the previous check-in.

FossilOrigin-Name: df5a07e1a5122e08c2fa6076ac08adb2820f997ee11dd88b84863666899dfb57

* Ignore COLLATE operators when determining whether the result of a subexpression
should be shallow-copied or deep-copied.

FossilOrigin-Name: 34ae36a45e814bed7c8340412c7ef3fc849b82357656d0eb5f0f805e59d846d0

* Add ALWAYS() and NEVER() on branches made unreachable by recent changes.

FossilOrigin-Name: c50e6c2ace49d0928b05cbfd877c621e9a0f77dc4e056ccb1dbe5cf118a00d00

* More precise computation of the size of data structures in the query planner.
Response to [forum:/forumpost/7d8685d49d|Forum post 7d8685d49d].

FossilOrigin-Name: 0c8d88e41167ea92341dd1129be01b596a73f46bdcd5b0dd931441a979c013d0

* Fix harmless compiler warning in the randomjson.c extension.

FossilOrigin-Name: debe7060b16669ada7304ffb9bf7616c8fa30bd286d8be871ed17fd6d64a3d4c

* On second thought, we don't really need sqlite_dbdata accessible to the CLI.

FossilOrigin-Name: 36fe6a61ef8fb393281a5e15119d716521219c7b971fbfd63bdea07d27a78ac9

* Remove redundant conditional from sqlite3ExprCanBeNull().

FossilOrigin-Name: 257f96a2d22c605885fa66220c28cf7dc5941c330bccee3f132b9e7b70d89d30

* In JSON - minor code cleanup and refactoring with a small size reduction
and performance increase.

FossilOrigin-Name: 215fabda38daecdbd38b1eca5a6aafbc61b6a36a8303f1d7164d5a1138e63134

* Avoid harmless integer overflow in pager status statistics gathering.
Response to [forum:/forumpost/7f4cdf23f9|forum post 7f4cdf23f9].

FossilOrigin-Name: 206d8c650d937bc700946c40a82a62ea6bc4a80e5f3fb42d0ae2968de25f0644

* Fix SQLITE_ENABLE_SETLK_TIMEOUT assert() statements in os_unix.c to avoid reading past the end of the unixShmNode.aMutex[] array.

FossilOrigin-Name: 029a05cd2928d43d81e4549cce5388c432e2c9e75e3fa0b2fe6e91021b2fb9ac

* Add internal core-developer-only documentation of the JSONB format.

FossilOrigin-Name: 4d30478863b2a60512010de9ec6e3099bfaf75d4afee20acec536713fe94334d

* Add a new comment to debugging output routine sqlite3WhereLoopPrint() to
remind us of what the various fields of the debug output mean.  No changes
to code.

FossilOrigin-Name: da5f34fd4052432b1ae27bb12e56b358cdc5c1282653d60ed0f0fe62f727e4ee

* Fix a usan complaint about signed integer overflow.

FossilOrigin-Name: e65907e0279f4814ec957f0790777d8b94a86926cd27c52442b311b27efc0185

* Update #ifdef checks in pager.c and util.c to account for [0462a2612d1fc1d0] to resolve the build problem reported in [forum:9819032aac|forum post 9819032aac].

FossilOrigin-Name: 0f22d809a1c6c80e381f6bcd931fe4ec36dca0e28d07ab4f4f7f83c813424f60

* Add the -fno-sanitize-recover=undefined flag to the sanitizer builds used for sdevtest and release testing, to ensure that any test that provokes undefined behaviour fails.
FossilOrigin-Name: 89563311adb0ab7c7a3eadb11c2e27fbca50c56fce8ca616628facbc00d72b88

* Change parameters on a debugging function to include "const".

FossilOrigin-Name: 94c3e1110c6590261bd30ba317fba4dd94023d69b81a94f4b216cce748fe7489

* Add debugging output routines sqlite3ShowWhereLoop(X) and
sqlite3ShowWhereLoopList(X) that can be invoked from a debugger to show
a summary of the content of a single WhereLoop object or a list of WhereLoop
objects.  No change in release builds.

FossilOrigin-Name: 5db30bcc338aac1cf081de2deec7e60749ae012e2b6f95ccf745623adb4a31dc

* Improvements to the query planner to address the inefficiency described
by [forum:/forumpost/2568d1f6e6|forum post 2568d1f6e6].

FossilOrigin-Name: 72fcc12cda910a0e3f7875eb3d117b2a5608705c97703985427a02960f1ab5c5

* Avoid signed integer overflow during integrity_check of FTS5.

FossilOrigin-Name: 5937df3b25799eceaadfb04d7226c9995d44c8d8edb5ac3ad02af9d7e3570726

* Fix harmless compiler warnings associated with [5db30bcc338aac1c]

FossilOrigin-Name: e55d1c2333f35fc20615aa83a7843d08cae7945710a2156d44eee0cc37d90ade

* Remove an ALWAYS() added in [c50e6c2ace49d092] because it is sometimes false.
dbsqlfuzz c393a4f783d42efd9552772110aff7e5d937f15e.

FossilOrigin-Name: b9daf37e57cde12c4de271a2b1995e8e91b6411f8c2e8882e536241929609b3a

* Improved handling of malformed unicode within JSON strings.

FossilOrigin-Name: e252bdf5f5de26ba8e2bcc6b0ad94121ed6fc4d86c02fe4a2a058ada93747beb

* Ensure that the xColumnText(), xQueryPhrase() and xPhraseFirstColumn() APIs all return SQLITE_RANGE if they are passed a bad column or phrase number.

FossilOrigin-Name: 1a8a9b1c89519d265869251e8b6d3c5db733f0d3a7dea6c7962811a8f1157dff

* Fix a problem in the shell tool (not library) causing an out-of-bounds write if an ".open" command failed, then the user pressed ctrl-c to interrupt a query running on the substitute in-memory database.

FossilOrigin-Name: 026618b9e321576f616a32e41329066ba629814170c6cfeef35430343f5003f3

* Enhance the (undocumented, debug-only) json_parse() SQL function so that it
returns the text rendering of the JSONB parse of the input, rather than printing
the rendering on stdout.

FossilOrigin-Name: 056de8d551dcbdf1d162e2db15ed418fa9c786f900cd3972ef8a1dea3f4f3aa1

* Fix harmless compiler warnings in FTS5.

FossilOrigin-Name: 3cd5ef44e40570c357f913a9483fa1cd72e7f2827a5ed5826bff99febae213b1

* Performance improvement by unwinding a loop in jsonAppendString().

FossilOrigin-Name: 190ab3c08431a0ba24d76392eab251f5c1792add05e4ec780998b299208eca95

* Update fts5origintext4.test to work with SQLITE_DIRECT_OVERFLOW_READ.

FossilOrigin-Name: 15ed002aed12556aeb9bbe537c4ba839f0c95bac65a69d03401b37cc3fd11b92

* Enable SQLITE_DIRECT_OVERFLOW_READ unless it is specifically disabled using
the -DSQLITE_DIRECT_OVERFLOW_READ=0 compile-time option.

FossilOrigin-Name: 630604a4e604bfb36c31602917bfa8d42c10c82966d0819932bf8f827b9158b8

* Minor doc touchup in the JS bits.

FossilOrigin-Name: 8d2120c35425081e2158d6a8a6b083c4adf8d694046b2d98f5fd235520920432

* Use SQLITE_ENABLE_STAT4 in both the WASM and JNI builds.

FossilOrigin-Name: 99d11e6d0ae687ff6bac5119027f7b04d5e7185214e79cf8c56289cfa809b0f9

* WASM: various build cleanups and add initial infrastructure for a build which elides the oo1 API and its dependents (worker1 and promiser). Sidebar: an attempt was made to move generation of the build rules to an external script, but the mixed-mode make/script was even less legible than the $(eval) indirection going on in the makefile.

FossilOrigin-Name: 563d313163c02b398ae85b7c2ed231019a14e006726f09a7c1f294a58bf4363f

* JNI: move the ByteBuffer-using APIs from public to package visibility for the time being because they have UB-inducing possibilities which need to be worked out. Update test code to account for a change in custom FTS5 columntext() impls.

FossilOrigin-Name: dc501275fcfab3ad9b6ebbadf7588b225a9dd07a0abac5be83d96f15bfba99e9

* Extra steps taken to avoid using low-quality indexes in a query plan.
This branch accomplishes the same end as the nearby enhanced-stat1 branch,
but with much less change and hence less risk.

FossilOrigin-Name: c030e646262fee43a59b45fdc1630d972f8bf88ac3c142b6bdaf4cbb36695a4f

* Remove some unnecessary computations from ANALYZE so that ANALYZE runs with
fewer CPU cycles.  These changes were spotted while working on the nearby
enhanced-stat1 branch.  So even if enhanced-stat1 is abandoned, the effort
put into it will not have been in vain.

FossilOrigin-Name: 5527e8c4abb904b1a438ec1c353d4a960bf82faaf3a2c742af1df7c613850441

* Back out [99d11e6d0ae6] (enabling of STAT4 in WASM/JNI), per /chat discussion.

FossilOrigin-Name: cd7929ee2e2c305475fa5a4dff2edaccf90067126ef04a1c2714cf464925453f

* Update and clean up the in-makefile docs for ext/wasm.

FossilOrigin-Name: 7a7b295e6d7e95ee4a46cc42761895d11700ab295870c5a4380072bb4a5b7099

* Elaborate on the various build flavors used by ext/wasm/. Doc changes only.

FossilOrigin-Name: d489232aa492618d4c8e5817addb2323d0ca067742d7140216914239a66fb221

* Increase the default "max_page_count" to its theoretical maximum of
4294967294.

FossilOrigin-Name: ffb35f1784a4305b979a850485f57f56938104a3a03f4a7aececde92864c4879
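
That value is 2^32 - 2, the largest page number SQLite can address. A quick way to observe the default, as a tester.tcl-style sketch (assumes a build containing this change):

    do_execsql_test max-page-count-default {
      PRAGMA max_page_count;
    } {4294967294}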

* Fix a problem in fts5 caused by a COMMIT involving fts5 data that immediately follows a ROLLBACK TO that does not.

FossilOrigin-Name: 55c61f6a8d6a1bc79497b05669beac5c5397b06382bf24b6bec54845962d219b

* Adjust the sqlite3PagerDirectReadOk() routine (part of the
SQLITE_DIRECT_OVERFLOW_READ optimization) to use less code and to be
more easily testable.

FossilOrigin-Name: eed670ea2a9424f7df4eeb01c152fc38f7190a5e39aa891651b28dc91fcdc019

* Back out [b517a52fa36df0a0] which is no longer reachable due to early
error detection enhancements in [166e82dd20efbfd3].

FossilOrigin-Name: 704943e96f2620b99260667ac9922c2f72bc3e92e2dfe1d9c2a91c7b704564d9

* Update the sqldiff.exe utility program so that it uses the sqlite3_str
string interface, and so that it does console output using the
ext/consio extension.

FossilOrigin-Name: 4443b7e592da97d1cb1b3b79ed0559452d8057a33aba4d184c2fffbf200e05f5

* Enhance sqlite3_analyzer.exe so that it uses the ext/consio extension.

FossilOrigin-Name: 769de0b98e136e4a0945b80216d0c9583c1ccd9de69cb0494875c2300e172646

* Change a constant from decimal to hex to avoid a compiler warning on Mac.

FossilOrigin-Name: e3acb8a43ad544fd5b5341058276bd3b61b6bdb6b719790476a90e0de4320f90

* Convert the JSON functions to use lookaside memory allocation whenever
feasible, to avoid hitting the global memory allocator mutex.

FossilOrigin-Name: a79a244954f728596da3c0e28fa3b887258d1bd831f53881970f418f3fba84c7

* Fix a #ifdef in sqlite3_test_control() that was preventing builds with
SQLITE_OMIT_WSD.

FossilOrigin-Name: d546a9c94caf7408cc6e4530ec190d3a13fae09dc15b71b03d6369e02ee62abd

* Restructure some code to fix what appears to be a false-positive UBSAN warning.

FossilOrigin-Name: fe952c12903ea2150880c8bb57cda2efc00ce9fa801568a68c619e0745f30567

* Avoid errors with SQLITE_OMIT_VIRTUALTABLE builds in json106.test and unionall.test.

FossilOrigin-Name: 90e8a233549a2d31e6959ce3fec927693b772ab3c0abce65e81d7350d2ca5cc6

* Update extension ext/misc/totext.c to avoid both ubsan warnings and dubious real->integer conversions.

FossilOrigin-Name: c626aa108a7a30cef54af8d93ac9e45749568ed38e4e06623a6bad6b4bf6e8ec

* Update JSON performance testing procedures for clarity and to describe how to
do performance testing of JSONB.

FossilOrigin-Name: b115b4f75bc7c4e6d9bab5edf13297f27a36f30083c80d2c502b01208da5dfc0

* Ensure that SQLITE_PROTOCOL is not returned too early when a SQLITE_ENABLE_SETLK_TIMEOUT build fails to open a transaction on a wal mode database in cases where blocking locks are not being used.

FossilOrigin-Name: b934a33671d8a0190082ad7e5e68c78fe0c558d102404eafc1de26e4e7d65b92

* Updates to RTREE to facilitate testing.

FossilOrigin-Name: 7a5b42ff74882c58493dc8b710fde73d4ff251f5d42271d84be73ceaabc01698

* Remove an ALWAYS() from RTREE.  Dbsqlfuzz found a way to make it false.

FossilOrigin-Name: 40f0a29e6dd90fcb969d7c0e49728ba0ee8f31d9e8f502b9a21469620a8ad283

* Minor change to os_unix.c to facilitate 100% MC/DC testing.

FossilOrigin-Name: 0dfa7b4da134db281c3c4eddb4569c53a450f955f0af2f410e13db801aff4ea2

* Automatically turn off DEFENSIVE mode in the shell tool when executing scripts generated by the ".dump" command against an empty database. Add a warning to the top of generated ".dump" scripts that populate virtual tables.

FossilOrigin-Name: 6e9e96b7e7afb9420110f4b93d10b945c9eadfde5e9c81e59ae9ee8167e75707

* Fix date on new file shell9.test.

FossilOrigin-Name: c82da712113d5dcd63b764dbc68842026989627abc840acb4a33f3a4972b832a

* Improved resolution of unqualified names in the REINDEX command.
[forum:/info/74cd0ceabd|Forum thread 74cd0ceabd].

FossilOrigin-Name: 97709ce2a1f5ae05495e412ca27108048e5b8a63a1e3bca4be13933f7527da7b

* Put an SQLITE_ENABLE_SETLK_TIMEOUT branch inside the appropriate ifdef with
an assert on the else since the condition is always false if SETLK_TIMEOUT
is not available.

FossilOrigin-Name: d81e7a036ac5d70b6a6ee6ab7d81e041c1f5fc04b70bcee47e203d521caf7e93

* In fts5, flush the contents of the in-memory hash table whenever the secure-delete option is toggled. This prevents spurious corruption reports under some circumstances.

FossilOrigin-Name: ccf552319a62bfb329820a3bc1f490bacbaa6e90694a257fc65a568a605542c3

* Fix a comment in sessions.  No functional changes.
[forum:/forumpost/8c20dc935b|Forum post 8c20dc935b].

FossilOrigin-Name: b0eb6d3628c1f70399a22d9fd3b79a796bc343adfeba50515440db609565961a

* Have the shell tool automatically enable SQLITE_CONFIG_DQS_DDL when executing a ".dump" script against an empty db.

FossilOrigin-Name: f47a5f4e0ce078e6cc1183e6cbb3c4013af379b496efae94863a42e5c39928ed

* Version 3.45.0

FossilOrigin-Name: 1066602b2b1976fe58b5150777cced894af17c803e068f5918390d6915b46e1d

* wasm build: reformulate an awk invocation to account for awks which do not support the -e flag. Problem reported on the forum via a docker-hosted build.

FossilOrigin-Name: 90dd51153fd0a6197e2ee49b5492ad120f0bfc324b60651f3d4f47c286887b46

* When backing out a character in a constructed string in JSON, first make sure
the string has not been reset by an OOM.

FossilOrigin-Name: 950bf9fe7829864e0abe6d71ca0495f346feb5d7943d76c95e55a6b86ea855da

* Ensure that the xIntegrity methods of fts3 and fts5 work on read-only databases.

FossilOrigin-Name: e79b97369fa740f62f695057d4a2cf8dae48a683982ec879f04a19039c9cb418

* When a JSON input is a blob, but it looks like valid JSON when cast to text,
then accept it as valid JSON.  This replicates a long-standing bug in the
behavior of JSON routines, and thus avoids breaking legacy apps.

FossilOrigin-Name: 4c2c1b97dce46a279846380c937ac6de5c367927c6843516641eead7ea6db472
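
A sketch of the preserved behavior (hypothetical test name; assumes the blob's bytes spell out well-formed JSON text):

    do_execsql_test json-blob-as-text {
      SELECT json_extract(CAST('{"a":123}' AS BLOB), '$.a');
    } {123}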

* Bump the version number to 3.45.1

FossilOrigin-Name: 54d34edb89430b266221b7e6eea0afbd2c9dafbe774344469473abc8ad1e13fd

* Fix harmless "unused parameter" compiler warning in the new fts3IntegrityMethod
implementation.

FossilOrigin-Name: 9d459f6b50fb6f995e6284a0815c5e211cacac44aad0b96bf01ba68af97f51fc

* In os_unix.c and os_win.c, do not allow xFetch() to return a pointer to a page buffer that is right at the end of the mapped region - if the database is corrupted in a specific way such a page buffer might be overread by several bytes.

FossilOrigin-Name: d131cab652ac11795322af13d0b330e7e44ab91587a1a3e73fe7b9a14b2dd531

* Slight adjustment to test results for Windows in mmap1.test due to
the previous check-in.

FossilOrigin-Name: a8043eaed899285b5cf4aab0c23c3dabb8975910c353cb579fd1f1655db390f6

* Apply the same fix found in [99057383acc8f920] to descending scans.

FossilOrigin-Name: 593d6a1c2e9256d797f160e867278414e882a3d04d7fea269bea86965eaa7576

* Automatically disable the DISTINCT optimization during query planning if the
ORDER BY clause exceeds 63 terms.

FossilOrigin-Name: 6edbdcc02d18727f68f0236e15dde4ecfc77e6f452b522eb4e1e895929b1fb63

* When rendering JSONB back into text JSON, report an error if a zero-length
integer or floating-point node is encountered.  Otherwise, if the node occurs
at the very end of the JSONB, the rendering logic might read one byte past
the end of the initialized part of the BLOB byte array.  OSSFuzz 66284.

FossilOrigin-Name: 3ab08ac75d97ffd9920f5c924362a4819560b40faa8a4f9100068057f5fa420a

* Avoid a potential buffer overread when handling corrupt json blobs.

FossilOrigin-Name: ac402cc551b2cbe3f8fbbc9c711a04942eab5eeb9d2f4a394e9370d2380427b5

* Detect malformed nested JSONB earlier and stop rendering to avoid long
delays.

FossilOrigin-Name: ab40e282465c989bf249453d7c6f60072a38b691f579411cdf9aad234b20f0f7

* Version 3.45.1

FossilOrigin-Name: e876e51a0ed5c5b3126f52e532044363a014bc594cfefa87ffb5b82257cc467a

---------

Co-authored-by: drh <>
Co-authored-by: dan <Dan Kennedy>
Co-authored-by: stephan <stephan@noemail.net>
Co-authored-by: larrybr <larrybr@noemail.net>
2024-07-25 08:55:22 +00:00


# 2001 September 15
#
# The author disclaims copyright to this source code. In place of
# a legal notice, here is a blessing:
#
# May you do good and not evil.
# May you find forgiveness for yourself and forgive others.
# May you share freely, never taking more than you give.
#
#***********************************************************************
# This file implements some common TCL routines used for regression
# testing the SQLite library
#
# $Id: tester.tcl,v 1.143 2009/04/09 01:23:49 drh Exp $
#-------------------------------------------------------------------------
# The commands provided by the code in this file to help with creating
# test cases are as follows:
#
# Commands to manipulate the db and the file-system at a high level:
#
#      is_relative_file
#      test_pwd
#      get_pwd
#      copy_file              FROM TO
#      delete_file            FILENAME
#      drop_all_tables        ?DB?
#      drop_all_indexes       ?DB?
#      forcecopy              FROM TO
#      forcedelete            FILENAME
#
# Test the capability of the SQLite version built into the interpreter to
# determine if a specific test can be run:
#
#      capable                EXPR
#      ifcapable              EXPR
#
# Calculate checksums based on database contents:
#
#      dbcksum                DB DBNAME
#      allcksum               ?DB?
#      cksum                  ?DB?
#
# Commands to execute/explain SQL statements:
#
#      memdbsql               SQL
#      stepsql                DB SQL
#      execsql2               SQL
#      explain_no_trace       SQL
#      explain                SQL ?DB?
#      catchsql               SQL ?DB?
#      execsql                SQL ?DB?
#
# Commands to run test cases:
#
#      do_ioerr_test          TESTNAME ARGS...
#      crashsql               ARGS...
#      integrity_check        TESTNAME ?DB?
#      verify_ex_errcode      TESTNAME EXPECTED ?DB?
#      do_test                TESTNAME SCRIPT EXPECTED
#      do_execsql_test        TESTNAME SQL EXPECTED
#      do_catchsql_test       TESTNAME SQL EXPECTED
#      do_timed_execsql_test  TESTNAME SQL EXPECTED
#
# Commands providing a lower level interface to the global test counters:
#
#      set_test_counter       COUNTER ?VALUE?
#      omit_test              TESTNAME REASON ?APPEND?
#      fail_test              TESTNAME
#      incr_ntest
#
# Command run at the end of each test file:
#
#      finish_test
#
# Commands to help create test files that run with the "WAL" and other
# permutations (see file permutations.test):
#
#      wal_is_wal_mode
#      wal_set_journal_mode   ?DB?
#      wal_check_journal_mode TESTNAME ?DB?
#      permutation
#      presql
#
# Command to test whether or not --verbose=1 was specified on the command
# line (returns 0 for not-verbose, 1 for verbose and 2 for "verbose in the
# output file only").
#
#      verbose
#
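# As a quick orientation, a typical test file built on these commands looks
# something like this (a hypothetical sketch, not part of this file):
#
#      set testdir [file dirname $argv0]
#      source $testdir/tester.tcl
#
#      do_execsql_test mytest-1.0 {
#        CREATE TABLE t1(a, b);
#        INSERT INTO t1 VALUES(1, 2);
#        SELECT * FROM t1;
#      } {1 2}
#
#      finish_test
#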
# Only run this script once. If sourced a second time, make it a no-op
if {[info exists ::tester_tcl_has_run]} return
# Set the precision of FP arithmetic used by the interpreter. And
# configure SQLite to take database file locks on the page that begins
# 64KB into the database file instead of the one that begins 1GB in. This
# means the code that handles that special case can be tested without
# creating very large database files.
#
set tcl_precision 15
sqlite3_test_control_pending_byte 0x0010000
# If the pager codec is available, create a wrapper for the [sqlite3]
# command that appends "-key {xyzzy}" to the command line. i.e. this:
#
# sqlite3 db test.db
#
# becomes
#
# sqlite3 db test.db -key {xyzzy}
#
if {[info command sqlite_orig]==""} {
  rename sqlite3 sqlite_orig
  proc sqlite3 {args} {
    if {[llength $args]>=2 && [string index [lindex $args 0] 0]!="-"} {
      # This command is opening a new database connection.
      #
      if {[info exists ::G(perm:sqlite3_args)]} {
        set args [concat $args $::G(perm:sqlite3_args)]
      }
      if {[sqlite_orig -has-codec] && ![info exists ::do_not_use_codec]} {
        lappend args -key {xyzzy}
      }

      set res [uplevel 1 sqlite_orig $args]
      if {[info exists ::G(perm:presql)]} {
        [lindex $args 0] eval $::G(perm:presql)
      }
      if {[info exists ::G(perm:dbconfig)]} {
        set ::dbhandle [lindex $args 0]
        uplevel #0 $::G(perm:dbconfig)
      }
      [lindex $args 0] cache size 3
      set res
    } else {
      # This command is not opening a new database connection. Pass the
      # arguments through to the C implementation as they are.
      #
      uplevel 1 sqlite_orig $args
    }
  }
}

proc getFileRetries {} {
  if {![info exists ::G(file-retries)]} {
    #
    # NOTE: Return the default number of retries for [file] operations. A
    #       value of zero or less here means "disabled".
    #
    return [expr {$::tcl_platform(platform) eq "windows" ? 50 : 0}]
  }
  return $::G(file-retries)
}

proc getFileRetryDelay {} {
  if {![info exists ::G(file-retry-delay)]} {
    #
    # NOTE: Return the default number of milliseconds to wait when retrying
    #       failed [file] operations. A value of zero or less means "do not
    #       wait".
    #
    return 100; # TODO: Good default?
  }
  return $::G(file-retry-delay)
}

# Return the string representing the name of the current directory. On
# Windows, the result is "normalized" to whatever our parent command shell
# is using to prevent case-mismatch issues.
#
proc get_pwd {} {
  if {$::tcl_platform(platform) eq "windows"} {
    #
    # NOTE: Cannot use [file normalize] here because it would alter the
    #       case of the result to what Tcl considers canonical, which would
    #       defeat the purpose of this procedure.
    #
    if {[info exists ::env(ComSpec)]} {
      set comSpec $::env(ComSpec)
    } else {
      # NOTE: Hard-code the typical default value.
      set comSpec {C:\Windows\system32\cmd.exe}
    }
    return [string map [list \\ /] \
        [string trim [exec -- $comSpec /c CD]]]
  } else {
    return [pwd]
  }
}

# Copy file $from into $to. This is used because some versions of
# TCL for windows (notably the 8.4.1 binary package shipped with the
# current mingw release) have a broken "file copy" command.
#
proc copy_file {from to} {
  do_copy_file false $from $to
}

proc forcecopy {from to} {
  do_copy_file true $from $to
}

proc do_copy_file {force from to} {
  set nRetry [getFileRetries]     ;# Maximum number of retries.
  set nDelay [getFileRetryDelay]  ;# Delay in ms before retrying.

  # On windows, sometimes even a [file copy -force] can fail. The cause is
  # usually "tag-alongs" - programs like anti-virus software, automatic backup
  # tools and various explorer extensions that keep a file open a little longer
  # than we expect, causing the delete to fail.
  #
  # The solution is to wait a short amount of time before retrying the copy.
  #
  if {$nRetry > 0} {
    for {set i 0} {$i<$nRetry} {incr i} {
      set rc [catch {
        if {$force} {
          file copy -force $from $to
        } else {
          file copy $from $to
        }
      } msg]
      if {$rc==0} break
      if {$nDelay > 0} { after $nDelay }
    }
    if {$rc} { error $msg }
  } else {
    if {$force} {
      file copy -force $from $to
    } else {
      file copy $from $to
    }
  }
}

# Check if a file name is relative
#
proc is_relative_file { file } {
  return [expr {[file pathtype $file] != "absolute"}]
}

# If the VFS supports using the current directory, returns [pwd];
# otherwise, it returns only the provided suffix string (which is
# empty by default).
#
proc test_pwd { args } {
  if {[llength $args] > 0} {
    set suffix1 [lindex $args 0]
    if {[llength $args] > 1} {
      set suffix2 [lindex $args 1]
    } else {
      set suffix2 $suffix1
    }
  } else {
    set suffix1 ""; set suffix2 ""
  }
  ifcapable curdir {
    return "[get_pwd]$suffix1"
  } else {
    return $suffix2
  }
}

# Delete a file or directory
#
proc delete_file {args} {
  do_delete_file false {*}$args
}

proc forcedelete {args} {
  do_delete_file true {*}$args
}

proc do_delete_file {force args} {
  set nRetry [getFileRetries]     ;# Maximum number of retries.
  set nDelay [getFileRetryDelay]  ;# Delay in ms before retrying.

  foreach filename $args {
    # On windows, sometimes even a [file delete -force] can fail just after
    # a file is closed. The cause is usually "tag-alongs" - programs like
    # anti-virus software, automatic backup tools and various explorer
    # extensions that keep a file open a little longer than we expect, causing
    # the delete to fail.
    #
    # The solution is to wait a short amount of time before retrying the
    # delete.
    #
    if {$nRetry > 0} {
      for {set i 0} {$i<$nRetry} {incr i} {
        set rc [catch {
          if {$force} {
            file delete -force $filename
          } else {
            file delete $filename
          }
        } msg]
        if {$rc==0} break
        if {$nDelay > 0} { after $nDelay }
      }
      if {$rc} { error $msg }
    } else {
      if {$force} {
        file delete -force $filename
      } else {
        file delete $filename
      }
    }
  }
}

if {$::tcl_platform(platform) eq "windows"} {
  proc do_remove_win32_dir {args} {
    set nRetry [getFileRetries]     ;# Maximum number of retries.
    set nDelay [getFileRetryDelay]  ;# Delay in ms before retrying.

    foreach dirName $args {
      # On windows, sometimes even a [remove_win32_dir] can fail just after
      # a directory is emptied. The cause is usually "tag-alongs" - programs
      # like anti-virus software, automatic backup tools and various explorer
      # extensions that keep a file open a little longer than we expect,
      # causing the delete to fail.
      #
      # The solution is to wait a short amount of time before retrying the
      # removal.
      #
      if {$nRetry > 0} {
        for {set i 0} {$i < $nRetry} {incr i} {
          set rc [catch {
            remove_win32_dir $dirName
          } msg]
          if {$rc == 0} break
          if {$nDelay > 0} { after $nDelay }
        }
        if {$rc} { error $msg }
      } else {
        remove_win32_dir $dirName
      }
    }
  }

  proc do_delete_win32_file {args} {
    set nRetry [getFileRetries]     ;# Maximum number of retries.
    set nDelay [getFileRetryDelay]  ;# Delay in ms before retrying.

    foreach fileName $args {
      # On windows, sometimes even a [delete_win32_file] can fail just after
      # a file is closed. The cause is usually "tag-alongs" - programs like
      # anti-virus software, automatic backup tools and various explorer
      # extensions that keep a file open a little longer than we expect,
      # causing the delete to fail.
      #
      # The solution is to wait a short amount of time before retrying the
      # delete.
      #
      if {$nRetry > 0} {
        for {set i 0} {$i < $nRetry} {incr i} {
          set rc [catch {
            delete_win32_file $fileName
          } msg]
          if {$rc == 0} break
          if {$nDelay > 0} { after $nDelay }
        }
        if {$rc} { error $msg }
      } else {
        delete_win32_file $fileName
      }
    }
  }
}

proc execpresql {handle args} {
  trace remove execution $handle enter [list execpresql $handle]
  if {[info exists ::G(perm:presql)]} {
    $handle eval $::G(perm:presql)
  }
}

# This command should be called after loading tester.tcl from within
# all test scripts that are incompatible with encryption codecs.
#
proc do_not_use_codec {} {
  set ::do_not_use_codec 1
  reset_db
}
unset -nocomplain do_not_use_codec
# Return true if the "reserved_bytes" integer on database files is non-zero.
#
proc nonzero_reserved_bytes {} {
  return [sqlite3 -has-codec]
}

# Print a HELP message and exit
#
proc print_help_and_quit {} {
  puts {Options:
  --pause              Wait for user input before continuing
  --soft-heap-limit=N  Set the soft-heap-limit to N
  --hard-heap-limit=N  Set the hard-heap-limit to N
  --maxerror=N         Quit after N errors
  --verbose=(0|1)      Control the amount of output. Default '1'
  --output=FILE        Set --verbose=2 and output to FILE. Implies -q
  -q                   Shorthand for --verbose=0
  --help               This message
}
  exit 1
}

# The following block only runs the first time this file is sourced. It
# does not run in slave interpreters (since the ::cmdlinearg array is
# populated before the test script is run in slave interpreters).
#
if {[info exists cmdlinearg]==0} {
# Parse any options specified in the $argv array. This script accepts the
# following options:
#
# --pause
# --soft-heap-limit=NN
# --hard-heap-limit=NN
# --maxerror=NN
# --malloctrace=N
# --backtrace=N
# --binarylog=N
# --soak=N
# --file-retries=N
# --file-retry-delay=N
# --start=[$permutation:]$testfile
# --match=$pattern
# --verbose=$val
# --output=$filename
# -q Reduce output
# --testdir=$dir Run tests in subdirectory $dir
# --help
#
set cmdlinearg(soft-heap-limit) 0
set cmdlinearg(hard-heap-limit) 0
set cmdlinearg(maxerror) 1000
set cmdlinearg(malloctrace) 0
set cmdlinearg(backtrace) 10
set cmdlinearg(binarylog) 0
set cmdlinearg(soak) 0
set cmdlinearg(file-retries) 0
set cmdlinearg(file-retry-delay) 0
set cmdlinearg(start) ""
set cmdlinearg(match) ""
set cmdlinearg(verbose) ""
set cmdlinearg(output) ""
set cmdlinearg(testdir) "testdir"
set leftover [list]
foreach a $argv {
switch -regexp -- $a {
{^-+pause$} {
# Wait for user input before continuing. This is to give the user an
# opportunity to connect profiling tools to the process.
puts -nonewline "Press RETURN to begin..."
flush stdout
gets stdin
}
{^-+soft-heap-limit=.+$} {
foreach {dummy cmdlinearg(soft-heap-limit)} [split $a =] break
}
{^-+hard-heap-limit=.+$} {
foreach {dummy cmdlinearg(hard-heap-limit)} [split $a =] break
}
{^-+maxerror=.+$} {
foreach {dummy cmdlinearg(maxerror)} [split $a =] break
}
{^-+malloctrace=.+$} {
foreach {dummy cmdlinearg(malloctrace)} [split $a =] break
if {$cmdlinearg(malloctrace)} {
if {0==$::sqlite_options(memdebug)} {
set err "Error: --malloctrace=1 requires an SQLITE_MEMDEBUG build"
puts stderr $err
exit 1
}
sqlite3_memdebug_log start
}
}
{^-+backtrace=.+$} {
foreach {dummy cmdlinearg(backtrace)} [split $a =] break
sqlite3_memdebug_backtrace $cmdlinearg(backtrace)
}
{^-+binarylog=.+$} {
foreach {dummy cmdlinearg(binarylog)} [split $a =] break
set cmdlinearg(binarylog) [file normalize $cmdlinearg(binarylog)]
}
{^-+soak=.+$} {
foreach {dummy cmdlinearg(soak)} [split $a =] break
set ::G(issoak) $cmdlinearg(soak)
}
{^-+file-retries=.+$} {
foreach {dummy cmdlinearg(file-retries)} [split $a =] break
set ::G(file-retries) $cmdlinearg(file-retries)
}
{^-+file-retry-delay=.+$} {
foreach {dummy cmdlinearg(file-retry-delay)} [split $a =] break
set ::G(file-retry-delay) $cmdlinearg(file-retry-delay)
}
{^-+start=.+$} {
foreach {dummy cmdlinearg(start)} [split $a =] break
set ::G(start:file) $cmdlinearg(start)
if {[regexp {(.*):(.*)} $cmdlinearg(start) -> s.perm s.file]} {
set ::G(start:permutation) ${s.perm}
set ::G(start:file) ${s.file}
}
if {$::G(start:file) == ""} {unset ::G(start:file)}
}
{^-+match=.+$} {
foreach {dummy cmdlinearg(match)} [split $a =] break
set ::G(match) $cmdlinearg(match)
if {$::G(match) == ""} {unset ::G(match)}
}
{^-+output=.+$} {
foreach {dummy cmdlinearg(output)} [split $a =] break
set cmdlinearg(output) [file normalize $cmdlinearg(output)]
if {$cmdlinearg(verbose)==""} {
set cmdlinearg(verbose) 2
}
}
{^-+verbose=.+$} {
foreach {dummy cmdlinearg(verbose)} [split $a =] break
if {$cmdlinearg(verbose)=="file"} {
set cmdlinearg(verbose) 2
} elseif {[string is boolean -strict $cmdlinearg(verbose)]==0} {
error "option --verbose= must be set to a boolean or to \"file\""
}
}
{^-+testdir=.*$} {
foreach {dummy cmdlinearg(testdir)} [split $a =] break
}
{.*help.*} {
print_help_and_quit
}
{^-q$} {
set cmdlinearg(output) test-out.txt
set cmdlinearg(verbose) 2
}
default {
if {[file tail $a]==$a} {
lappend leftover $a
} else {
lappend leftover [file normalize $a]
}
}
}
}
unset -nocomplain a
set testdir [file normalize $testdir]
set cmdlinearg(TESTFIXTURE_HOME) [file dirname [info nameofexec]]
set cmdlinearg(INFO_SCRIPT) [file normalize [info script]]
set argv0 [file normalize $argv0]
if {$cmdlinearg(testdir)!=""} {
file mkdir $cmdlinearg(testdir)
cd $cmdlinearg(testdir)
}
set argv $leftover
# Install the malloc layer used to inject OOM errors, and the 'automatic'
# extensions. This only needs to be done once for the process.
#
sqlite3_shutdown
install_malloc_faultsim 1
sqlite3_initialize
autoinstall_test_functions
# If the --binarylog option was specified, create the logging VFS. This
# call installs the new VFS as the default for all SQLite connections.
#
if {$cmdlinearg(binarylog)} {
vfslog new binarylog {} vfslog.bin
}
# Set the backtrace depth, if malloc tracing is enabled.
#
if {$cmdlinearg(malloctrace)} {
sqlite3_memdebug_backtrace $cmdlinearg(backtrace)
}
if {$cmdlinearg(output)!=""} {
puts "Copying output to file $cmdlinearg(output)"
set ::G(output_fd) [open $cmdlinearg(output) w]
fconfigure $::G(output_fd) -buffering line
}
if {$cmdlinearg(verbose)==""} {
set cmdlinearg(verbose) 1
}
if {[info commands vdbe_coverage]!=""} {
vdbe_coverage start
}
}
# Update the soft-heap-limit each time this script is run. In that
# way if an individual test file changes the soft-heap-limit, it
# will be reset at the start of the next test file.
#
sqlite3_soft_heap_limit64 $cmdlinearg(soft-heap-limit)
sqlite3_hard_heap_limit64 $cmdlinearg(hard-heap-limit)
# Create a test database
#
proc reset_db {} {
catch {db close}
forcedelete test.db
forcedelete test.db-journal
forcedelete test.db-wal
sqlite3 db ./test.db
set ::DB [sqlite3_connection_pointer db]
if {[info exists ::SETUP_SQL]} {
db eval $::SETUP_SQL
}
}
reset_db
# Abort early if this script has been run before.
#
if {[info exists TC(count)]} return
# Make sure memory statistics are enabled.
#
sqlite3_config_memstatus 1
# Initialize the test counters and set up commands to access them.
# Or, if this is a slave interpreter, set up aliases to write the
# counters in the parent interpreter.
#
if {0==[info exists ::SLAVE]} {
set TC(errors) 0
set TC(count) 0
set TC(fail_list) [list]
set TC(omit_list) [list]
set TC(warn_list) [list]
proc set_test_counter {counter args} {
if {[llength $args]} {
set ::TC($counter) [lindex $args 0]
}
set ::TC($counter)
}
}
# Record the fact that a sequence of tests were omitted.
#
proc omit_test {name reason {append 1}} {
set omitList [set_test_counter omit_list]
if {$append} {
lappend omitList [list $name $reason]
}
set_test_counter omit_list $omitList
}
# Record the fact that a test failed.
#
proc fail_test {name} {
set f [set_test_counter fail_list]
lappend f $name
set_test_counter fail_list $f
set_test_counter errors [expr [set_test_counter errors] + 1]
set nFail [set_test_counter errors]
if {$nFail>=$::cmdlinearg(maxerror)} {
output2 "*** Giving up..."
finalize_testing
}
}
# Remember a warning message to be displayed at the conclusion of all testing
#
proc warning {msg {append 1}} {
output2 "Warning: $msg"
set warnList [set_test_counter warn_list]
if {$append} {
lappend warnList $msg
}
set_test_counter warn_list $warnList
}
# Increment the number of tests run
#
proc incr_ntest {} {
set_test_counter count [expr [set_test_counter count] + 1]
}
# Return the value of the --verbose option: 0, 1, or 2. The value 2 means
# that output is being copied to the --output file.
#
proc verbose {} {
return $::cmdlinearg(verbose)
}
# Use the following commands instead of [puts] for test output within
# this file. Test scripts can still use regular [puts], which is directed
# to stdout and, if one is open, the --output file.
#
# output1: output that should be printed if --verbose=1 was specified.
# output2: output that should be printed unconditionally.
# output2_if_no_verbose: output that should be printed only if --verbose=0.
#
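# For example (an illustrative sketch; the messages are hypothetical):
#
#   output1 "printed only when --verbose=1"
#   output2 "printed unconditionally"
#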
proc output1 {args} {
set v [verbose]
if {$v==1} {
uplevel output2 $args
} elseif {$v==2} {
uplevel puts [lrange $args 0 end-1] $::G(output_fd) [lrange $args end end]
}
}
proc output2 {args} {
set nArg [llength $args]
uplevel puts $args
}
proc output2_if_no_verbose {args} {
set v [verbose]
if {$v==0} {
uplevel output2 $args
} elseif {$v==2} {
uplevel puts [lrange $args 0 end-1] stdout [lrange $args end end]
}
}
# Override the [puts] command so that if no channel is explicitly
# specified the string is written to both stdout and to the file
# specified by "--output=", if any.
#
proc puts_override {args} {
set nArg [llength $args]
if {$nArg==1 || ($nArg==2 && [string first [lindex $args 0] -nonewline]==0)} {
uplevel puts_original $args
if {[info exists ::G(output_fd)]} {
uplevel puts [lrange $args 0 end-1] $::G(output_fd) [lrange $args end end]
}
} else {
# A channel was explicitly specified.
uplevel puts_original $args
}
}
rename puts puts_original
proc puts {args} { uplevel puts_override $args }
# Invoke the do_test procedure to run a single test
#
# The $expected parameter is the expected result. The result is the return
# value from the last TCL command in $cmd.
#
# Normally, $expected must match exactly. But if $expected is of the form
# "/regexp/" then regular expression matching is used. If $expected is
# "~/regexp/" then the regular expression must NOT match. If $expected is
# of the form "#/value-list/" then each term in value-list must be numeric
# and must approximately match the corresponding numeric term in $result.
# Values must match within 10%. Or if the $expected term is A..B then the
# $result term must be in between A and B.
#
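# For example (an illustrative sketch; the test names and SQL are
# hypothetical):
#
#   do_test example-1.1 {execsql {SELECT 1+1}} {2}
#   do_test example-1.2 {execsql {SELECT 'abc123'}} {/abc\d+/}
#   do_test example-1.3 {execsql {SELECT 5}} {#/4..6/}
#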
proc do_test {name cmd expected} {
global argv cmdlinearg
fix_testname name
sqlite3_memdebug_settitle $name
# if {[llength $argv]==0} {
# set go 1
# } else {
# set go 0
# foreach pattern $argv {
# if {[string match $pattern $name]} {
# set go 1
# break
# }
# }
# }
if {[info exists ::G(perm:prefix)]} {
set name "$::G(perm:prefix)$name"
}
incr_ntest
output1 -nonewline $name...
flush stdout
if {![info exists ::G(match)] || [string match $::G(match) $name]} {
if {[catch {uplevel #0 "$cmd;\n"} result]} {
output2_if_no_verbose -nonewline $name...
output2 "\nError: $result"
fail_test $name
} else {
if {[permutation]=="maindbname"} {
set result [string map [list [string tolower ICECUBE] main] $result]
}
if {[regexp {^[~#]?/.*/$} $expected]} {
# "expected" is of the form "/PATTERN/" then the result if correct if
# regular expression PATTERN matches the result. "~/PATTERN/" means
# the regular expression must not match.
if {[string index $expected 0]=="~"} {
set re [string range $expected 2 end-1]
if {[string index $re 0]=="*"} {
# If the regular expression begins with * then treat it as a glob instead
set ok [string match $re $result]
} else {
set re [string map {# {[-0-9.]+}} $re]
set ok [regexp $re $result]
}
set ok [expr {!$ok}]
} elseif {[string index $expected 0]=="#"} {
# Numeric range value comparison. Each term of the $result is matched
# against one term of $expected. Both $result and $expected terms must be
# numeric. The values must match within 10%. Or if $expected is of the
# form A..B then the $result term must be between A and B.
set e2 [string range $expected 2 end-1]
foreach i $result j $e2 {
if {[regexp {^(-?\d+)\.\.(-?\d+)$} $j all A B]} {
set ok [expr {$i+0>=$A && $i+0<=$B}]
} else {
set ok [expr {$i+0>=0.9*$j && $i+0<=1.1*$j}]
}
if {!$ok} break
}
if {$ok && [llength $result]!=[llength $e2]} {set ok 0}
} else {
set re [string range $expected 1 end-1]
if {[string index $re 0]=="*"} {
# If the regular expression begins with * then treat it as a glob instead
set ok [string match $re $result]
} else {
set re [string map {# {[-0-9.]+}} $re]
set ok [regexp $re $result]
}
}
} elseif {[regexp {^~?\*.*\*$} $expected]} {
# "expected" is of the form "*GLOB*" then the result if correct if
# glob pattern GLOB matches the result. "~/GLOB/" means
# the glob must not match.
if {[string index $expected 0]=="~"} {
set e [string range $expected 1 end]
set ok [expr {![string match $e $result]}]
} else {
set ok [string match $expected $result]
}
} else {
set ok [expr {[string compare $result $expected]==0}]
}
if {!$ok} {
# if {![info exists ::testprefix] || $::testprefix eq ""} {
# error "no test prefix"
# }
output1 ""
output2 "! $name expected: \[$expected\]\n! $name got: \[$result\]"
fail_test $name
} else {
output1 " Ok"
}
}
} else {
output1 " Omitted"
omit_test $name "pattern mismatch" 0
}
flush stdout
}
proc dumpbytes {s} {
set r ""
for {set i 0} {$i < [string length $s]} {incr i} {
if {$i > 0} {append r " "}
append r [format %02X [scan [string index $s $i] %c]]
}
return $r
}
proc catchcmd {db {cmd ""}} {
global CLI
set out [open cmds.txt w]
puts $out $cmd
close $out
set line "exec $CLI $db < cmds.txt"
set rc [catch { eval $line } msg]
list $rc $msg
}
proc catchsafecmd {db {cmd ""}} {
global CLI
set out [open cmds.txt w]
puts $out $cmd
close $out
set line "exec $CLI -safe $db < cmds.txt"
set rc [catch { eval $line } msg]
list $rc $msg
}
proc catchcmdex {db {cmd ""}} {
global CLI
set out [open cmds.txt w]
fconfigure $out -encoding binary -translation binary
puts -nonewline $out $cmd
close $out
set line "exec -keepnewline -- $CLI $db < cmds.txt"
set chans [list stdin stdout stderr]
foreach chan $chans {
catch {
set modes($chan) [fconfigure $chan]
fconfigure $chan -encoding binary -translation binary -buffering none
}
}
set rc [catch { eval $line } msg]
foreach chan $chans {
catch {
eval fconfigure [list $chan] $modes($chan)
}
}
# puts [dumpbytes $msg]
list $rc $msg
}
proc filepath_normalize {p} {
# test cases should be written to assume "unix"-like file paths
if {$::tcl_platform(platform)!="unix"} {
string map [list \\ / \{/ / .db\} .db] \
[regsub -nocase -all {[a-z]:[/\\]+} $p {/}]
} {
set p
}
}
proc do_filepath_test {name cmd expected} {
uplevel [list do_test $name [
subst -nocommands { filepath_normalize [ $cmd ] }
] [filepath_normalize $expected]]
}
proc realnum_normalize {r} {
# different TCL versions display floating point values differently.
string map {1.#INF inf Inf inf .0e e} [regsub -all {(e[+-])0+} $r {\1}]
}
proc do_realnum_test {name cmd expected} {
uplevel [list do_test $name [
subst -nocommands { realnum_normalize [ $cmd ] }
] [realnum_normalize $expected]]
}
proc fix_testname {varname} {
upvar $varname testname
if {[info exists ::testprefix]
&& [string is digit [string range $testname 0 0]]
} {
set testname "${::testprefix}-$testname"
}
}
proc normalize_list {L} {
set L2 [list]
foreach l $L {lappend L2 $l}
set L2
}
# Run SQL and verify that the number of "vmsteps" required is greater
# than or less than some constant.
#
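# For example (an illustrative sketch; the test names, table and step
# counts are hypothetical):
#
#   # Expect more than 100 vm-steps:
#   do_vmstep_test vmstep-1.1 {SELECT * FROM t1} +100
#   # Expect fewer than 5000 vm-steps:
#   do_vmstep_test vmstep-1.2 {SELECT * FROM t1} 5000
#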
proc do_vmstep_test {tn sql nstep {res {}}} {
uplevel [list do_execsql_test $tn.0 $sql $res]
set vmstep [db status vmstep]
if {[string range $nstep 0 0]=="+"} {
set body "if {$vmstep<$nstep} {
error \"got $vmstep, expected more than [string range $nstep 1 end]\"
}"
} else {
set body "if {$vmstep>$nstep} {
error \"got $vmstep, expected less than $nstep\"
}"
}
# set name "$tn.vmstep=$vmstep,expect=$nstep"
set name "$tn.1"
uplevel [list do_test $name $body {}]
}
# Either:
#
# do_execsql_test TESTNAME SQL ?RES?
# do_execsql_test -db DB TESTNAME SQL ?RES?
#
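# For example (an illustrative sketch; the test names, the handle "db2"
# and the SQL are hypothetical):
#
#   do_execsql_test example-2.1 { CREATE TABLE t1(a, b) } {}
#   do_execsql_test example-2.2 { SELECT 1, 2 } {1 2}
#   do_execsql_test -db db2 example-2.3 { SELECT 3 } {3}
#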
proc do_execsql_test {args} {
set db db
if {[lindex $args 0]=="-db"} {
set db [lindex $args 1]
set args [lrange $args 2 end]
}
if {[llength $args]==2} {
foreach {testname sql} $args {}
set result ""
} elseif {[llength $args]==3} {
foreach {testname sql result} $args {}
# With some versions of Tcl on windows, if $result is all whitespace but
# contains some CR/LF characters, the [list {*}$result] below returns a
# copy of $result instead of a zero length string. Not clear exactly why
# this is. The following is a workaround.
if {[llength $result]==0} { set result "" }
} else {
error [string trim {
wrong # args: should be "do_execsql_test ?-db DB? testname sql ?result?"
}]
}
fix_testname testname
uplevel do_test \
[list $testname] \
[list "execsql {$sql} $db"] \
[list [list {*}$result]]
}
proc do_catchsql_test {testname sql result} {
fix_testname testname
uplevel do_test [list $testname] [list "catchsql {$sql}"] [list $result]
}
proc do_timed_execsql_test {testname sql {result {}}} {
fix_testname testname
uplevel do_test [list $testname] [list "execsql_timed {$sql}"]\
[list [list {*}$result]]
}
# Run an EXPLAIN QUERY PLAN $sql in database "db". Then rewrite the output
# as an ASCII-art graph and return a string that is that graph.
#
# Hexadecimal literals in the output text are converted into "xxxxxx" since those
# literals are pointer values that might vary from one run of the test to the
# next, yet we want the output to be consistent.
#
proc query_plan_graph {sql} {
db eval "EXPLAIN QUERY PLAN $sql" {
set dx($id) $detail
lappend cx($parent) $id
}
set a "\n QUERY PLAN\n"
append a [append_graph " " dx cx 0]
regsub -all { 0x[A-F0-9]+\y} $a { xxxxxx} a
regsub -all {(MATERIALIZE|CO-ROUTINE|SUBQUERY) \d+\y} $a {\1 xxxxxx} a
regsub -all {\((join|subquery)-\d+\)} $a {(\1-xxxxxx)} a
return $a
}
# Helper routine for [query_plan_graph SQL]:
#
# Output rows of the graph that are children of $level.
#
# prefix: Prepend to every output line
#
# dxname: Name of an array variable that stores text describing each node.
# The description for $id is $dx($id)
#
# cxname: Name of an array variable holding children of item.
# Children of $id are $cx($id)
#
# level: Render all lines that are children of $level
#
proc append_graph {prefix dxname cxname level} {
upvar $dxname dx $cxname cx
set a ""
set x $cx($level)
set n [llength $x]
for {set i 0} {$i<$n} {incr i} {
set id [lindex $x $i]
if {$i==$n-1} {
set p1 "`--"
set p2 " "
} else {
set p1 "|--"
set p2 "| "
}
append a $prefix$p1$dx($id)\n
if {[info exists cx($id)]} {
append a [append_graph "$prefix$p2" dx cx $id]
}
}
return $a
}
# Do an EXPLAIN QUERY PLAN test on input $sql with expected results $res
#
# If $res begins with a "\s+QUERY PLAN\n" then it is assumed to be the
# complete graph which must match the output of [query_plan_graph $sql]
# exactly.
#
# If $res does not begin with "\s+QUERY PLAN\n" then it is taken to be a
# string that must be found somewhere in the query plan output.
#
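# For example (an illustrative sketch; the test name, schema and expected
# plan fragment are hypothetical):
#
#   do_eqp_test example-3.1 {
#     SELECT * FROM t1 WHERE a=?
#   } {SEARCH t1 USING INDEX i1 (a=?)}
#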
proc do_eqp_test {name sql res} {
if {[regexp {^\s+QUERY PLAN\n} $res]} {
set query_plan [query_plan_graph $sql]
if {[list {*}$query_plan]==[list {*}$res]} {
uplevel [list do_test $name [list set {} ok] ok]
} else {
uplevel [list \
do_test $name [list query_plan_graph $sql] $res
]
}
} else {
if {[string index $res 0]!="/"} {
set res "/*$res*/"
}
uplevel do_execsql_test $name [list "EXPLAIN QUERY PLAN $sql"] [list $res]
}
}
#-------------------------------------------------------------------------
# Usage: do_select_tests PREFIX ?SWITCHES? TESTLIST
#
# Where switches are:
#
# -errorformat FMTSTRING
# -count
# -query SQL
# -tclquery TCL
# -repair TCL
#
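# For example (an illustrative sketch; the prefix, SQL and results are
# hypothetical):
#
#   do_select_tests example {
#     1 "SELECT 1+1"      {2}
#     2 "SELECT 'a', 'b'" {a b}
#   }
#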
proc do_select_tests {prefix args} {
set testlist [lindex $args end]
set switches [lrange $args 0 end-1]
set errfmt ""
set countonly 0
set tclquery ""
set repair ""
for {set i 0} {$i < [llength $switches]} {incr i} {
set s [lindex $switches $i]
set n [string length $s]
if {$n>=2 && [string equal -length $n $s "-query"]} {
set tclquery [list execsql [lindex $switches [incr i]]]
} elseif {$n>=2 && [string equal -length $n $s "-tclquery"]} {
set tclquery [lindex $switches [incr i]]
} elseif {$n>=2 && [string equal -length $n $s "-errorformat"]} {
set errfmt [lindex $switches [incr i]]
} elseif {$n>=2 && [string equal -length $n $s "-repair"]} {
set repair [lindex $switches [incr i]]
} elseif {$n>=2 && [string equal -length $n $s "-count"]} {
set countonly 1
} else {
error "unknown switch: $s"
}
}
if {$countonly && $errfmt!=""} {
error "Cannot use -count and -errorformat together"
}
set nTestlist [llength $testlist]
if {$nTestlist%3 || $nTestlist==0 } {
error "SELECT test list contains [llength $testlist] elements"
}
eval $repair
foreach {tn sql res} $testlist {
if {$tclquery != ""} {
execsql $sql
uplevel do_test ${prefix}.$tn [list $tclquery] [list [list {*}$res]]
} elseif {$countonly} {
set nRow 0
db eval $sql {incr nRow}
uplevel do_test ${prefix}.$tn [list [list set {} $nRow]] [list $res]
} elseif {$errfmt==""} {
uplevel do_execsql_test ${prefix}.${tn} [list $sql] [list [list {*}$res]]
} else {
set res [list 1 [string trim [format $errfmt {*}$res]]]
uplevel do_catchsql_test ${prefix}.${tn} [list $sql] [list $res]
}
eval $repair
}
}
proc delete_all_data {} {
db eval {SELECT tbl_name AS t FROM sqlite_master WHERE type = 'table'} {
db eval "DELETE FROM '[string map {' ''} $t]'"
}
}
# Run an SQL script.
# Output the elapsed time and the effective rate in units per second.
#
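# For example (an illustrative sketch; the trial name, statement count and
# SQL are hypothetical):
#
#   speed_trial_init example
#   speed_trial example-insert 1000 row {INSERT INTO t1 SELECT * FROM t2;}
#   speed_trial_summary example
#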
proc speed_trial {name numstmt units sql} {
output2 -nonewline [format {%-21.21s } $name...]
flush stdout
set speed [time {sqlite3_exec_nr db $sql}]
set tm [lindex $speed 0]
if {$tm == 0} {
set rate [format %20s "many"]
} else {
set rate [format %20.5f [expr {1000000.0*$numstmt/$tm}]]
}
set u2 $units/s
output2 [format {%12d uS %s %s} $tm $rate $u2]
global total_time
set total_time [expr {$total_time+$tm}]
lappend ::speed_trial_times $name $tm
}
proc speed_trial_tcl {name numstmt units script} {
output2 -nonewline [format {%-21.21s } $name...]
flush stdout
set speed [time {eval $script}]
set tm [lindex $speed 0]
if {$tm == 0} {
set rate [format %20s "many"]
} else {
set rate [format %20.5f [expr {1000000.0*$numstmt/$tm}]]
}
set u2 $units/s
output2 [format {%12d uS %s %s} $tm $rate $u2]
global total_time
set total_time [expr {$total_time+$tm}]
lappend ::speed_trial_times $name $tm
}
proc speed_trial_init {name} {
global total_time
set total_time 0
set ::speed_trial_times [list]
sqlite3 versdb :memory:
set vers [versdb one {SELECT sqlite_source_id()}]
versdb close
output2 "SQLite $vers"
}
proc speed_trial_summary {name} {
global total_time
output2 [format {%-21.21s %12d uS TOTAL} $name $total_time]
if { 0 } {
sqlite3 versdb :memory:
set vers [lindex [versdb one {SELECT sqlite_source_id()}] 0]
versdb close
output2 "CREATE TABLE IF NOT EXISTS time(version, script, test, us);"
foreach {test us} $::speed_trial_times {
output2 "INSERT INTO time VALUES('$vers', '$name', '$test', $us);"
}
}
}
# Clear out left-over configuration setup from the end of a test
#
proc finish_test_precleanup {} {
catch {db1 close}
catch {db2 close}
catch {db3 close}
catch {unregister_devsim}
catch {unregister_jt_vfs}
catch {unregister_demovfs}
}
# Run this routine last
#
proc finish_test {} {
global argv
finish_test_precleanup
if {[llength $argv]>0} {
# If additional test scripts are specified on the command-line,
# run them also, before quitting.
proc finish_test {} {
finish_test_precleanup
return
}
foreach extra $argv {
puts "Running \"$extra\""
db_delete_and_reopen
uplevel #0 source $extra
}
}
catch {db close}
if {0==[info exists ::SLAVE]} { finalize_testing }
}
proc finalize_testing {} {
global sqlite_open_file_count
set omitList [set_test_counter omit_list]
catch {db close}
catch {db2 close}
catch {db3 close}
vfs_unlink_test
sqlite3 db {}
# sqlite3_clear_tsd_memdebug
db close
sqlite3_reset_auto_extension
sqlite3_soft_heap_limit64 0
sqlite3_hard_heap_limit64 0
set nTest [incr_ntest]
set nErr [set_test_counter errors]
set nKnown 0
if {[file readable known-problems.txt]} {
set fd [open known-problems.txt]
set content [read $fd]
close $fd
foreach x $content {set known_error($x) 1}
foreach x [set_test_counter fail_list] {
if {[info exists known_error($x)]} {incr nKnown}
}
}
if {$nKnown>0} {
output2 "[expr {$nErr-$nKnown}] new errors and $nKnown known errors\
out of $nTest tests"
} else {
set cpuinfo {}
if {[catch {exec hostname} hname]==0} {set cpuinfo [string trim $hname]}
append cpuinfo " $::tcl_platform(os)"
append cpuinfo " [expr {$::tcl_platform(pointerSize)*8}]-bit"
append cpuinfo " [string map {E -e} $::tcl_platform(byteOrder)]"
output2 "SQLite [sqlite3 -sourceid]"
output2 "$nErr errors out of $nTest tests on $cpuinfo"
}
if {$nErr>$nKnown} {
output2 -nonewline "!Failures on these tests:"
foreach x [set_test_counter fail_list] {
if {![info exists known_error($x)]} {output2 -nonewline " $x"}
}
output2 ""
}
foreach warning [set_test_counter warn_list] {
output2 "Warning: $warning"
}
run_thread_tests 1
if {[llength $omitList]>0} {
output2 "Omitted test cases:"
set prec {}
foreach {rec} [lsort $omitList] {
if {$rec==$prec} continue
set prec $rec
output2 [format {. %-12s %s} [lindex $rec 0] [lindex $rec 1]]
}
}
if {$nErr>0 && ![working_64bit_int]} {
output2 "******************************************************************"
output2 "N.B.: The version of TCL that you used to build this test harness"
output2 "is defective in that it does not support 64-bit integers. Some or"
output2 "all of the test failures above might be a result from this defect"
output2 "in your TCL build."
output2 "******************************************************************"
}
if {$::cmdlinearg(binarylog)} {
vfslog finalize binarylog
}
if {[info exists ::run_thread_tests_called]==0} {
if {$sqlite_open_file_count} {
output2 "$sqlite_open_file_count files were left open"
incr nErr
}
}
if {[lindex [sqlite3_status SQLITE_STATUS_MALLOC_COUNT 0] 1]>0 ||
[sqlite3_memory_used]>0} {
output2 "Unfreed memory: [sqlite3_memory_used] bytes in\
[lindex [sqlite3_status SQLITE_STATUS_MALLOC_COUNT 0] 1] allocations"
incr nErr
ifcapable mem5||(mem3&&debug) {
output2 "Writing unfreed memory log to \"./memleak.txt\""
sqlite3_memdebug_dump ./memleak.txt
}
} else {
output2 "All memory allocations freed - no leaks"
ifcapable mem5 {
sqlite3_memdebug_dump ./memusage.txt
}
}
show_memstats
output2 "Maximum memory usage: [sqlite3_memory_highwater 1] bytes"
output2 "Current memory usage: [sqlite3_memory_highwater] bytes"
if {[info commands sqlite3_memdebug_malloc_count] ne ""} {
output2 "Number of malloc() : [sqlite3_memdebug_malloc_count] calls"
}
if {$::cmdlinearg(malloctrace)} {
output2 "Writing mallocs.tcl..."
memdebug_log_sql mallocs.tcl
sqlite3_memdebug_log stop
sqlite3_memdebug_log clear
if {[sqlite3_memory_used]>0} {
output2 "Writing leaks.tcl..."
sqlite3_memdebug_log sync
memdebug_log_sql leaks.tcl
}
}
if {[info commands vdbe_coverage]!=""} {
vdbe_coverage_report
}
foreach f [glob -nocomplain test.db-*-journal] {
forcedelete $f
}
foreach f [glob -nocomplain test.db-mj*] {
forcedelete $f
}
exit [expr {$nErr>0}]
}
proc vdbe_coverage_report {} {
puts "Writing vdbe coverage report to vdbe_coverage.txt"
set lSrc [list]
set iLine 0
if {[file exists ../sqlite3.c]} {
set fd [open ../sqlite3.c]
set iLine 0
while { ![eof $fd] } {
set line [gets $fd]
incr iLine
if {[regexp {^/\** Begin file (.*\.c) \**/} $line -> file]} {
lappend lSrc [list $iLine $file]
}
}
close $fd
}
set fd [open vdbe_coverage.txt w]
foreach miss [vdbe_coverage report] {
foreach {line branch never} $miss {}
set nextfile ""
while {[llength $lSrc]>0 && [lindex $lSrc 0 0] < $line} {
set nextfile [lindex $lSrc 0 1]
set lSrc [lrange $lSrc 1 end]
}
if {$nextfile != ""} {
puts $fd ""
puts $fd "### $nextfile ###"
}
puts $fd "Vdbe branch $line: never $never (path $branch)"
}
close $fd
}
# Display memory statistics for analysis and debugging purposes.
#
proc show_memstats {} {
set x [sqlite3_status SQLITE_STATUS_MEMORY_USED 0]
set y [sqlite3_status SQLITE_STATUS_MALLOC_SIZE 0]
set val [format {now %10d max %10d max-size %10d} \
[lindex $x 1] [lindex $x 2] [lindex $y 2]]
output1 "Memory used: $val"
set x [sqlite3_status SQLITE_STATUS_MALLOC_COUNT 0]
set val [format {now %10d max %10d} [lindex $x 1] [lindex $x 2]]
output1 "Allocation count: $val"
set x [sqlite3_status SQLITE_STATUS_PAGECACHE_USED 0]
set y [sqlite3_status SQLITE_STATUS_PAGECACHE_SIZE 0]
set val [format {now %10d max %10d max-size %10d} \
[lindex $x 1] [lindex $x 2] [lindex $y 2]]
output1 "Page-cache used: $val"
set x [sqlite3_status SQLITE_STATUS_PAGECACHE_OVERFLOW 0]
set val [format {now %10d max %10d} [lindex $x 1] [lindex $x 2]]
output1 "Page-cache overflow: $val"
ifcapable yytrackmaxstackdepth {
set x [sqlite3_status SQLITE_STATUS_PARSER_STACK 0]
set val [format { max %10d} [lindex $x 2]]
output2 "Parser stack depth: $val"
}
}
# A procedure to execute SQL
#
proc execsql {sql {db db}} {
# puts "SQL = $sql"
uplevel [list $db eval $sql]
}
proc execsql_timed {sql {db db}} {
set tm [time {
set x [uplevel [list $db eval $sql]]
} 1]
set tm [lindex $tm 0]
output1 -nonewline " ([expr {$tm*0.001}]ms) "
set x
}
# Execute SQL and catch exceptions.
#
proc catchsql {sql {db db}} {
# puts "SQL = $sql"
set r [catch [list uplevel [list $db eval $sql]] msg]
lappend r $msg
return $r
}
# Do an VDBE code dump on the SQL given
#
proc explain {sql {db db}} {
output2 ""
output2 "addr opcode p1 p2 p3 p4 p5 #"
output2 "---- ------------ ------ ------ ------ --------------- -- -"
$db eval "explain $sql" {} {
output2 [format {%-4d %-12.12s %-6d %-6d %-6d % -17s %s %s} \
$addr $opcode $p1 $p2 $p3 $p4 $p5 $comment
]
}
}
proc explain_i {sql {db db}} {
output2 ""
output2 "addr opcode p1 p2 p3 p4 p5 #"
output2 "---- ------------ ------ ------ ------ ---------------- -- -"
# Set up colors for the different opcodes. Scheme is as follows:
#
# Red: Opcodes that write to a b-tree.
# Blue: Opcodes that reposition or seek a cursor.
# Green: The ResultRow opcode.
#
if { [catch {fconfigure stdout -mode}]==0 } {
set R "\033\[31;1m" ;# Red fg
set G "\033\[32;1m" ;# Green fg
set B "\033\[34;1m" ;# Blue fg
set D "\033\[39;0m" ;# Default fg
} else {
set R ""
set G ""
set B ""
set D ""
}
foreach opcode {
Seek SeekGE SeekGT SeekLE SeekLT NotFound Last Rewind
NoConflict Next Prev VNext VPrev VFilter
SorterSort SorterNext NextIfOpen
} {
set color($opcode) $B
}
foreach opcode {ResultRow} {
set color($opcode) $G
}
foreach opcode {IdxInsert Insert Delete IdxDelete} {
set color($opcode) $R
}
set bSeenGoto 0
$db eval "explain $sql" {} {
set x($addr) 0
set op($addr) $opcode
if {$opcode == "Goto" && ($bSeenGoto==0 || ($p2 > $addr+10))} {
set linebreak($p2) 1
set bSeenGoto 1
}
if {$opcode=="Once"} {
for {set i $addr} {$i<$p2} {incr i} {
set star($i) $addr
}
}
if {$opcode=="Next" || $opcode=="Prev"
|| $opcode=="VNext" || $opcode=="VPrev"
|| $opcode=="SorterNext" || $opcode=="NextIfOpen"
} {
for {set i $p2} {$i<$addr} {incr i} {
incr x($i) 2
}
}
if {$opcode == "Goto" && $p2<$addr && $op($p2)=="Yield"} {
for {set i [expr $p2+1]} {$i<$addr} {incr i} {
incr x($i) 2
}
}
if {$opcode == "Halt" && $comment == "End of coroutine"} {
set linebreak([expr $addr+1]) 1
}
}
$db eval "explain $sql" {} {
if {[info exists linebreak($addr)]} {
output2 ""
}
set I [string repeat " " $x($addr)]
if {[info exists star($addr)]} {
set ii [expr $x($star($addr))]
append I " "
set I [string replace $I $ii $ii *]
}
set col ""
catch { set col $color($opcode) }
output2 [format {%-4d %s%s%-12.12s%s %-6d %-6d %-6d % -17s %s %s} \
$addr $I $col $opcode $D $p1 $p2 $p3 $p4 $p5 $comment
]
}
output2 "---- ------------ ------ ------ ------ ---------------- -- -"
}
proc execsql_pp {sql {db db}} {
set nCol 0
$db eval $sql A {
if {$nCol==0} {
set nCol [llength $A(*)]
foreach c $A(*) {
set aWidth($c) [string length $c]
lappend data $c
}
}
foreach c $A(*) {
set n [string length $A($c)]
if {$n > $aWidth($c)} {
set aWidth($c) $n
}
lappend data $A($c)
}
}
if {$nCol>0} {
set nTotal 0
foreach e [array names aWidth] { incr nTotal $aWidth($e) }
incr nTotal [expr ($nCol-1) * 3]
incr nTotal 4
set fmt ""
foreach c $A(*) {
lappend fmt "% -$aWidth($c)s"
}
set fmt "| [join $fmt { | }] |"
puts [string repeat - $nTotal]
for {set i 0} {$i < [llength $data]} {incr i $nCol} {
set vals [lrange $data $i [expr $i+$nCol-1]]
puts [format $fmt {*}$vals]
if {$i==0} { puts [string repeat - $nTotal] }
}
puts [string repeat - $nTotal]
}
}
# Show the VDBE program for an SQL statement but omit the Trace
# opcode at the beginning. This procedure can be used to prove
# that different SQL statements generate exactly the same VDBE code.
#
proc explain_no_trace {sql} {
set tr [db eval "EXPLAIN $sql"]
return [lrange $tr 7 end]
}
# Another procedure to execute SQL. This one includes the field
# names in the returned list.
#
proc execsql2 {sql} {
set result {}
db eval $sql data {
foreach f $data(*) {
lappend result $f $data($f)
}
}
return $result
}
# Use a temporary in-memory database to execute SQL statements
#
proc memdbsql {sql} {
sqlite3 memdb :memory:
set result [memdb eval $sql]
memdb close
return $result
}
# Use the non-callback API to execute multiple SQL statements
#
proc stepsql {dbptr sql} {
set sql [string trim $sql]
set r 0
while {[string length $sql]>0} {
if {[catch {sqlite3_prepare $dbptr $sql -1 sqltail} vm]} {
return [list 1 $vm]
}
set sql [string trim $sqltail]
# while {[sqlite_step $vm N VAL COL]=="SQLITE_ROW"} {
# foreach v $VAL {lappend r $v}
# }
while {[sqlite3_step $vm]=="SQLITE_ROW"} {
for {set i 0} {$i<[sqlite3_data_count $vm]} {incr i} {
lappend r [sqlite3_column_text $vm $i]
}
}
if {[catch {sqlite3_finalize $vm} errmsg]} {
return [list 1 $errmsg]
}
}
return $r
}
# Do an integrity check of the entire database
#
proc integrity_check {name {db db}} {
ifcapable integrityck {
do_test $name [list execsql {PRAGMA integrity_check} $db] {ok}
}
}
# Check the extended error code
#
proc verify_ex_errcode {name expected {db db}} {
do_test $name [list sqlite3_extended_errcode $db] $expected
}
# Return true if the SQL statement passed as the second argument uses a
# statement transaction.
#
proc sql_uses_stmt {db sql} {
set stmt [sqlite3_prepare $db $sql -1 dummy]
set uses [uses_stmt_journal $stmt]
sqlite3_finalize $stmt
return $uses
}
proc fix_ifcapable_expr {expr} {
set ret ""
set state 0
for {set i 0} {$i < [string length $expr]} {incr i} {
set char [string range $expr $i $i]
set newstate [expr {[string is alnum $char] || $char eq "_"}]
if {$newstate && !$state} {
append ret {$::sqlite_options(}
}
if {!$newstate && $state} {
append ret )
}
append ret $char
set state $newstate
}
if {$state} {append ret )}
return $ret
}
# Returns non-zero if the capabilities are present; zero otherwise.
#
proc capable {expr} {
set e [fix_ifcapable_expr $expr]; return [expr ($e)]
}
# Evaluate a boolean expression of capabilities. If true, execute the
# code. Omit the code if false.
#
proc ifcapable {expr code {else ""} {elsecode ""}} {
#regsub -all {[a-z_0-9]+} $expr {$::sqlite_options(&)} e2
set e2 [fix_ifcapable_expr $expr]
if ($e2) {
set c [catch {uplevel 1 $code} r]
} else {
set c [catch {uplevel 1 $elsecode} r]
}
return -code $c $r
}
# This proc execs a separate process that crashes midway through executing
# the SQL script $sql on database test.db.
#
# The crash occurs during a sync() of file $crashfile. When the crash
# occurs, a random subset of all unsynced writes made by the process is
# written into the files on disk. Argument $crashdelay indicates the
# number of file syncs to wait before crashing.
#
# The return value is a list of two elements. The first element is a
# boolean, indicating whether or not the process actually crashed or
# reported some other error. The second element in the returned list is the
# error message. This is "child process exited abnormally" if the crash
# occurred.
#
# crashsql -delay CRASHDELAY -file CRASHFILE ?-blocksize BLOCKSIZE? $sql
#
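# For example (an illustrative sketch; the delay, file name and SQL are
# hypothetical):
#
#   set r [crashsql -delay 2 -file test.db-journal {
#     BEGIN; INSERT INTO t1 VALUES(1); COMMIT;
#   }]
#   # On a crash, $r is {1 {child process exited abnormally}}.
#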
proc crashsql {args} {
set blocksize ""
set crashdelay 1
set prngseed 0
set opendb { sqlite3 db test.db -vfs crash }
set tclbody {}
set crashfile ""
set dc ""
set dfltvfs 0
set sql [lindex $args end]
for {set ii 0} {$ii < [llength $args]-1} {incr ii 2} {
set z [lindex $args $ii]
set n [string length $z]
set z2 [lindex $args [expr $ii+1]]
if {$n>1 && [string first $z -delay]==0} {set crashdelay $z2} \
elseif {$n>1 && [string first $z -opendb]==0} {set opendb $z2} \
elseif {$n>1 && [string first $z -seed]==0} {set prngseed $z2} \
elseif {$n>1 && [string first $z -file]==0} {set crashfile $z2} \
elseif {$n>1 && [string first $z -tclbody]==0} {set tclbody $z2} \
elseif {$n>1 && [string first $z -blocksize]==0} {set blocksize "-s $z2" } \
elseif {$n>1 && [string first $z -characteristics]==0} {set dc "-c {$z2}" }\
elseif {$n>1 && [string first $z -dfltvfs]==0} {set dfltvfs $z2 }\
else { error "Unrecognized option: $z" }
}
if {$crashfile eq ""} {
error "Compulsory option -file missing"
}
# $crashfile gets compared to the native filename in
# cfSync(), which can be different than what TCL uses by
# default, so here we force it to the "nativename" format.
set cfile [string map {\\ \\\\} [file nativename [file join [get_pwd] $crashfile]]]
set f [open crash.tcl w]
puts $f "sqlite3_initialize ; sqlite3_shutdown"
puts $f "catch { install_malloc_faultsim 1 }"
puts $f "sqlite3_crash_enable 1 $dfltvfs"
puts $f "sqlite3_crashparams $blocksize $dc $crashdelay $cfile"
puts $f "sqlite3_test_control_pending_byte $::sqlite_pending_byte"
puts $f "autoinstall_test_functions"
# This block sets the cache size of the main database to 10
# pages. This is done in case the build is configured to omit
# "PRAGMA cache_size".
if {$opendb!=""} {
puts $f $opendb
puts $f {db eval {SELECT * FROM sqlite_master;}}
puts $f {set bt [btree_from_db db]}
puts $f {btree_set_cache_size $bt 10}
}
if {$prngseed} {
set seed [expr {$prngseed%10007+1}]
# puts seed=$seed
puts $f "db eval {SELECT randomblob($seed)}"
}
if {[string length $tclbody]>0} {
puts $f $tclbody
}
if {[string length $sql]>0} {
puts $f "db eval {"
puts $f "$sql"
puts $f "}"
}
close $f
set r [catch {
exec [info nameofexec] crash.tcl >@stdout 2>@stdout
} msg]
# Windows/ActiveState TCL returns a slightly different
# error message. We map that to the expected message
# so that we don't have to change all of the test
# cases.
if {$::tcl_platform(platform)=="windows"} {
if {$msg=="child killed: unknown signal"} {
set msg "child process exited abnormally"
}
}
if {$r && [string match {*ERROR: LeakSanitizer*} $msg]} {
set msg "child process exited abnormally"
}
lappend r $msg
}
# crash_on_write ?-devchar DEVCHAR? CRASHDELAY SQL
#
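# For example, crash on the 5th write (an illustrative sketch; the SQL is
# hypothetical):
#
#   crash_on_write 5 {INSERT INTO t1 VALUES(randomblob(1000))}
#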
proc crash_on_write {args} {
set nArg [llength $args]
if {$nArg<2 || $nArg%2} {
error "bad args: $args"
}
set zSql [lindex $args end]
set nDelay [lindex $args end-1]
set devchar {}
for {set ii 0} {$ii < $nArg-2} {incr ii 2} {
set opt [lindex $args $ii]
switch -- [lindex $args $ii] {
-devchar {
set devchar [lindex $args [expr $ii+1]]
}
default { error "unrecognized option: $opt" }
}
}
set f [open crash.tcl w]
puts $f "sqlite3_crash_on_write $nDelay"
puts $f "sqlite3_test_control_pending_byte $::sqlite_pending_byte"
puts $f "sqlite3 db test.db -vfs writecrash"
puts $f "db eval {$zSql}"
puts $f "set {} {}"
close $f
set r [catch {
exec [info nameofexec] crash.tcl >@stdout
} msg]
# Windows/ActiveState TCL returns a slightly different
# error message. We map that to the expected message
# so that we don't have to change all of the test
# cases.
if {$::tcl_platform(platform)=="windows"} {
if {$msg=="child killed: unknown signal"} {
set msg "child process exited abnormally"
}
}
lappend r $msg
}
proc run_ioerr_prep {} {
set ::sqlite_io_error_pending 0
catch {db close}
catch {db2 close}
catch {forcedelete test.db}
catch {forcedelete test.db-journal}
catch {forcedelete test2.db}
catch {forcedelete test2.db-journal}
set ::DB [sqlite3 db test.db; sqlite3_connection_pointer db]
sqlite3_extended_result_codes $::DB $::ioerropts(-erc)
if {[info exists ::ioerropts(-tclprep)]} {
eval $::ioerropts(-tclprep)
}
if {[info exists ::ioerropts(-sqlprep)]} {
execsql $::ioerropts(-sqlprep)
}
expr 0
}
# Usage: do_ioerr_test <test number> <options...>
#
# This proc is used to implement test cases that check that IO errors
# are correctly handled. The first argument, <test number>, is an integer
# used to name the tests executed by this proc. Options are as follows:
#
# -tclprep TCL script to run to prepare test.
# -sqlprep SQL script to run to prepare test.
# -tclbody TCL script to run with IO error simulation.
# -sqlbody SQL script to run with IO error simulation.
# -exclude List of 'N' values not to test.
# -erc Use extended result codes
# -persist Make simulated I/O errors persistent
# -start Value of 'N' to begin with (default 1)
#
# -cksum Boolean. If true, test that the database does
# not change during the execution of the test case.
#
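# For example (an illustrative sketch; the test name and scripts are
# hypothetical):
#
#   do_ioerr_test ioerr-example -sqlprep {
#     CREATE TABLE t1(a, b);
#   } -sqlbody {
#     INSERT INTO t1 VALUES(1, 2);
#   }
#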
proc do_ioerr_test {testname args} {
set ::ioerropts(-start) 1
set ::ioerropts(-cksum) 0
set ::ioerropts(-erc) 0
set ::ioerropts(-count) 100000000
set ::ioerropts(-persist) 1
set ::ioerropts(-ckrefcount) 0
set ::ioerropts(-restoreprng) 1
array set ::ioerropts $args
# TEMPORARY: For 3.5.9, disable testing of extended result codes. There are
# a couple of obscure IO errors that do not return them.
set ::ioerropts(-erc) 0
# Create a single TCL script from the TCL and SQL specified
# as the body of the test.
set ::ioerrorbody {}
if {[info exists ::ioerropts(-tclbody)]} {
append ::ioerrorbody "$::ioerropts(-tclbody)\n"
}
if {[info exists ::ioerropts(-sqlbody)]} {
append ::ioerrorbody "db eval {$::ioerropts(-sqlbody)}"
}
save_prng_state
if {$::ioerropts(-cksum)} {
run_ioerr_prep
eval $::ioerrorbody
set ::goodcksum [cksum]
}
set ::go 1
#reset_prng_state
for {set n $::ioerropts(-start)} {$::go} {incr n} {
set ::TN $n
incr ::ioerropts(-count) -1
if {$::ioerropts(-count)<0} break
# Skip this IO error if it was specified with the "-exclude" option.
if {[info exists ::ioerropts(-exclude)]} {
if {[lsearch $::ioerropts(-exclude) $n]!=-1} continue
}
if {$::ioerropts(-restoreprng)} {
restore_prng_state
}
# Delete the files test.db and test2.db, then execute the TCL and
# SQL (in that order) to prepare for the test case.
do_test $testname.$n.1 {
run_ioerr_prep
} {0}
# Read the 'checksum' of the database.
if {$::ioerropts(-cksum)} {
set ::checksum [cksum]
}
# Set the Nth IO error to fail.
do_test $testname.$n.2 [subst {
set ::sqlite_io_error_persist $::ioerropts(-persist)
set ::sqlite_io_error_pending $n
}] $n
# Execute the TCL script created for the body of this test. If
# at least N IO operations are performed by SQLite as a result of
# the script, then the Nth will fail.
do_test $testname.$n.3 {
set ::sqlite_io_error_hit 0
set ::sqlite_io_error_hardhit 0
set r [catch $::ioerrorbody msg]
set ::errseen $r
if {[info commands db]!=""} {
set rc [sqlite3_errcode db]
if {$::ioerropts(-erc)} {
# If we are in extended result code mode, make sure all of the
# IOERRs we get back really do have their extended code values.
# If an extended result code is returned, the sqlite3_errcode
# TCL command will return a string of the form: SQLITE_IOERR+nnnn
# where nnnn is a number
if {[regexp {^SQLITE_IOERR} $rc] && ![regexp {IOERR\+\d} $rc]} {
return $rc
}
} else {
# If we are not in extended result code mode, make sure no
# extended error codes are returned.
if {[regexp {\+\d} $rc]} {
return $rc
}
}
}
# The test repeats as long as $::go is non-zero. $::go starts out
# as 1. When a test runs to completion without hitting an I/O
# error, that means there is no point in continuing with this test
# case so set $::go to zero.
#
if {$::sqlite_io_error_pending>0} {
set ::go 0
set q 0
set ::sqlite_io_error_pending 0
} else {
set q 1
}
set s [expr $::sqlite_io_error_hit==0]
if {$::sqlite_io_error_hit>$::sqlite_io_error_hardhit && $r==0} {
set r 1
}
set ::sqlite_io_error_hit 0
# One of two things must have happened. Either:
# 1. We never hit the IO error and the SQL returned OK
# 2. An IO error was hit and the SQL failed
#
#puts "s=$s r=$r q=$q"
expr { ($s && !$r && !$q) || (!$s && $r && $q) }
} {1}
set ::sqlite_io_error_hit 0
set ::sqlite_io_error_pending 0
# Check that no page references were leaked. There should be
# a single reference if there is still an active transaction,
# or zero otherwise.
#
# UPDATE: If the IO error occurs after a 'BEGIN' but before any
# locks are established on database files (i.e. if the error
# occurs while attempting to detect a hot-journal file), then
# there may be 0 page references and an active transaction according
# to [sqlite3_get_autocommit].
#
if {$::go && $::sqlite_io_error_hardhit && $::ioerropts(-ckrefcount)} {
do_test $testname.$n.4 {
set bt [btree_from_db db]
db_enter db
array set stats [btree_pager_stats $bt]
db_leave db
set nRef $stats(ref)
expr {$nRef == 0 || ([sqlite3_get_autocommit db]==0 && $nRef == 1)}
} {1}
}
# If there is an open database handle and no open transaction,
# and the pager is not running in exclusive-locking mode,
# check that the pager is in "unlocked" state. Theoretically,
# if a call to xUnlock() failed due to an IO error the underlying
# file may still be locked.
#
ifcapable pragma {
if { [info commands db] ne ""
&& $::ioerropts(-ckrefcount)
&& [db one {pragma locking_mode}] eq "normal"
&& [sqlite3_get_autocommit db]
} {
do_test $testname.$n.5 {
set bt [btree_from_db db]
db_enter db
array set stats [btree_pager_stats $bt]
db_leave db
set stats(state)
} 0
}
}
# If an IO error occurred, then the checksum of the database should
# be the same as before the script that caused the IO error was run.
#
if {$::go && $::sqlite_io_error_hardhit && $::ioerropts(-cksum)} {
do_test $testname.$n.6 {
catch {db close}
catch {db2 close}
set ::DB [sqlite3 db test.db; sqlite3_connection_pointer db]
set nowcksum [cksum]
set res [expr {$nowcksum==$::checksum || $nowcksum==$::goodcksum}]
if {$res==0} {
output2 "now=$nowcksum"
output2 "the=$::checksum"
output2 "fwd=$::goodcksum"
}
set res
} 1
}
set ::sqlite_io_error_hardhit 0
set ::sqlite_io_error_pending 0
if {[info exists ::ioerropts(-cleanup)]} {
catch $::ioerropts(-cleanup)
}
}
set ::sqlite_io_error_pending 0
set ::sqlite_io_error_persist 0
unset ::ioerropts
}
# Return a checksum based on the contents of the main database associated
# with connection $db
#
proc cksum {{db db}} {
set txt [$db eval {
SELECT name, type, sql FROM sqlite_master order by name
}]\n
foreach tbl [$db eval {
SELECT name FROM sqlite_master WHERE type='table' order by name
}] {
append txt [$db eval "SELECT * FROM $tbl"]\n
}
foreach prag {default_synchronous default_cache_size} {
append txt $prag-[$db eval "PRAGMA $prag"]\n
}
set cksum [string length $txt]-[md5 $txt]
# puts $cksum-[file size test.db]
return $cksum
}
# Generate a checksum based on the contents of the main and temp tables in
# database $db. If the checksums of two databases are the same, and the
# integrity-check passes for both, the two databases are identical.
#
proc allcksum {{db db}} {
set ret [list]
ifcapable tempdb {
set sql {
SELECT name FROM sqlite_master WHERE type = 'table' UNION
SELECT name FROM sqlite_temp_master WHERE type = 'table' UNION
SELECT 'sqlite_master' UNION
SELECT 'sqlite_temp_master' ORDER BY 1
}
} else {
set sql {
SELECT name FROM sqlite_master WHERE type = 'table' UNION
SELECT 'sqlite_master' ORDER BY 1
}
}
set tbllist [$db eval $sql]
set txt {}
foreach tbl $tbllist {
append txt [$db eval "SELECT * FROM $tbl"]
}
foreach prag {default_cache_size} {
append txt $prag-[$db eval "PRAGMA $prag"]\n
}
# puts txt=$txt
return [md5 $txt]
}
# Generate a checksum based on the contents of a single database belonging
# to a database connection $db. The name of the database is $dbname.
# Examples of $dbname are "temp" or "main".
#
proc dbcksum {db dbname} {
if {$dbname=="temp"} {
set master sqlite_temp_master
} else {
set master $dbname.sqlite_master
}
set alltab [$db eval "SELECT name FROM $master WHERE type='table'"]
set txt [$db eval "SELECT * FROM $master"]\n
foreach tab $alltab {
append txt [$db eval "SELECT * FROM $dbname.$tab"]\n
}
return [md5 $txt]
}
proc memdebug_log_sql {filename} {
set data [sqlite3_memdebug_log dump]
set nFrame [expr [llength [lindex $data 0]]-2]
if {$nFrame < 0} { return "" }
set database temp
set tbl "CREATE TABLE ${database}.malloc(zTest, nCall, nByte, lStack);"
set sql ""
foreach e $data {
set nCall [lindex $e 0]
set nByte [lindex $e 1]
set lStack [lrange $e 2 end]
append sql "INSERT INTO ${database}.malloc VALUES"
append sql "('test', $nCall, $nByte, '$lStack');\n"
foreach f $lStack {
set frames($f) 1
}
}
set tbl2 "CREATE TABLE ${database}.frame(frame INTEGER PRIMARY KEY, line);\n"
set tbl3 "CREATE TABLE ${database}.file(name PRIMARY KEY, content);\n"
set pid [pid]
foreach f [array names frames] {
set addr [format %x $f]
set cmd "eu-addr2line --pid=$pid $addr"
set line [eval exec $cmd]
append sql "INSERT INTO ${database}.frame VALUES($f, '$line');\n"
set file [lindex [split $line :] 0]
set files($file) 1
}
foreach f [array names files] {
set contents ""
catch {
set fd [open $f]
set contents [read $fd]
close $fd
}
set contents [string map {' ''} $contents]
append sql "INSERT INTO ${database}.file VALUES('$f', '$contents');\n"
}
set escaped "BEGIN; ${tbl}${tbl2}${tbl3}${sql} ; COMMIT;"
set escaped [string map [list "{" "\\{" "}" "\\}" "\\" "\\\\"] $escaped]
set fd [open $filename w]
puts $fd "set BUILTIN {"
puts $fd $escaped
puts $fd "}"
puts $fd {set BUILTIN [string map [list "\\{" "{" "\\}" "}" "\\\\" "\\"] $BUILTIN]}
set mtv [open $::testdir/malloctraceviewer.tcl]
set txt [read $mtv]
close $mtv
puts $fd $txt
close $fd
}
# Drop all tables in database [db]
proc drop_all_tables {{db db}} {
ifcapable trigger&&foreignkey {
set pk [$db one "PRAGMA foreign_keys"]
$db eval "PRAGMA foreign_keys = OFF"
}
foreach {idx name file} [db eval {PRAGMA database_list}] {
if {$idx==1} {
set master sqlite_temp_master
} else {
set master $name.sqlite_master
}
foreach {t type} [$db eval "
SELECT name, type FROM $master
WHERE type IN('table', 'view') AND name NOT LIKE 'sqliteX_%' ESCAPE 'X'
"] {
$db eval "DROP $type \"$t\""
}
}
ifcapable trigger&&foreignkey {
$db eval "PRAGMA foreign_keys = $pk"
}
}
# Drop all auxiliary indexes from the main database opened by handle [db].
#
proc drop_all_indexes {{db db}} {
set L [$db eval {
SELECT name FROM sqlite_master WHERE type='index' AND sql LIKE 'create%'
}]
foreach idx $L { $db eval "DROP INDEX $idx" }
}
#-------------------------------------------------------------------------
# If a test script is executed with global variable $::G(perm:name) set to
# "wal", then the tests are run in WAL mode. Otherwise, they should be run
# in rollback mode. The following Tcl procs are used to make this less
# intrusive:
#
# wal_set_journal_mode ?DB?
#
# If running a WAL test, execute "PRAGMA journal_mode = wal" using
# connection handle DB. Otherwise, this command is a no-op.
#
# wal_check_journal_mode TESTNAME ?DB?
#
# If running a WAL test, execute a tests case that fails if the main
# database for connection handle DB is not currently a WAL database.
# Otherwise (if not running a WAL permutation) this is a no-op.
#
# wal_is_wal_mode
#
# Returns true if this test should be run in WAL mode. False otherwise.
#
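# For example (an illustrative sketch; the test names and SQL are
# hypothetical):
#
#   wal_set_journal_mode
#   do_execsql_test walexample-1.0 { CREATE TABLE t1(x) } {}
#   wal_check_journal_mode walexample-1.1
#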
proc wal_is_wal_mode {} {
expr {[permutation] eq "wal"}
}
proc wal_set_journal_mode {{db db}} {
if { [wal_is_wal_mode] } {
$db eval "PRAGMA journal_mode = WAL"
}
}
proc wal_check_journal_mode {testname {db db}} {
if { [wal_is_wal_mode] } {
$db eval { SELECT * FROM sqlite_master }
do_test $testname [list $db eval "PRAGMA main.journal_mode"] {wal}
}
}
proc wal_is_capable {} {
ifcapable !wal { return 0 }
if {[permutation]=="journaltest"} { return 0 }
return 1
}
proc permutation {} {
set perm ""
catch {set perm $::G(perm:name)}
set perm
}
proc presql {} {
set presql ""
catch {set presql $::G(perm:presql)}
set presql
}
proc isquick {} {
set ret 0
catch {set ret $::G(isquick)}
set ret
}
#-------------------------------------------------------------------------
#
proc slave_test_script {script} {
# Create the interpreter used to run the test script.
interp create tinterp
# Populate some global variables that tester.tcl expects to see.
foreach {var value} [list \
::argv0 $::argv0 \
::argv {} \
::SLAVE 1 \
] {
interp eval tinterp [list set $var $value]
}
# If output is being copied into a file, share the file-descriptor with
# the interpreter.
if {[info exists ::G(output_fd)]} {
interp share {} $::G(output_fd) tinterp
}
# The alias used to access the global test counters.
tinterp alias set_test_counter set_test_counter
# Set up the ::cmdlinearg array in the slave.
interp eval tinterp [list array set ::cmdlinearg [array get ::cmdlinearg]]
# Set up the ::G array in the slave.
interp eval tinterp [list array set ::G [array get ::G]]
# Load the various test interfaces implemented in C.
load_testfixture_extensions tinterp
# Run the test script.
interp eval tinterp $script
# Check if the interpreter called [run_thread_tests]
if { [interp eval tinterp {info exists ::run_thread_tests_called}] } {
set ::run_thread_tests_called 1
}
# Delete the interpreter used to run the test script.
interp delete tinterp
}
proc slave_test_file {zFile} {
set tail [file tail $zFile]
if {[info exists ::G(start:permutation)]} {
if {[permutation] != $::G(start:permutation)} return
unset ::G(start:permutation)
}
if {[info exists ::G(start:file)]} {
if {$tail != $::G(start:file) && $tail!="$::G(start:file).test"} return
unset ::G(start:file)
}
# Remember the value of the shared-cache setting, so that it is possible
# to check afterwards that it was not modified by the test script.
#
ifcapable shared_cache { set scs [sqlite3_enable_shared_cache] }
# Run the test script in a slave interpreter.
#
unset -nocomplain ::run_thread_tests_called
reset_prng_state
set ::sqlite_open_file_count 0
set time [time { slave_test_script [list source $zFile] }]
set ms [expr [lindex $time 0] / 1000]
# Test that all files opened by the test script were closed. Omit this
# if the test script has "thread" in its name. The open file counter
# is not thread-safe.
#
if {[info exists ::run_thread_tests_called]==0} {
do_test ${tail}-closeallfiles { expr {$::sqlite_open_file_count>0} } {0}
}
set ::sqlite_open_file_count 0
# Test that the global "shared-cache" setting was not altered by
# the test script.
#
ifcapable shared_cache {
set res [expr {[sqlite3_enable_shared_cache] == $scs}]
do_test ${tail}-sharedcachesetting [list set {} $res] 1
}
# Add some info to the output.
#
output2 "Time: $tail $ms ms"
show_memstats
}
# Open a new connection on database test.db and execute the SQL script
# supplied as an argument. Before returning, close the new connection and
# restore the 4 byte fields starting at header offsets 28, 92 and 96
# to the values they held before the SQL was executed. This simulates
# a write by a pre-3.7.0 client.
#
proc sql36231 {sql} {
set B [hexio_read test.db 92 8]
set A [hexio_read test.db 28 4]
sqlite3 db36231 test.db
catch { db36231 func a_string a_string }
execsql $sql db36231
db36231 close
hexio_write test.db 28 $A
hexio_write test.db 92 $B
return ""
}
proc db_save {} {
foreach f [glob -nocomplain sv_test.db*] { forcedelete $f }
foreach f [glob -nocomplain test.db*] {
set f2 "sv_$f"
forcecopy $f $f2
}
}
proc db_save_and_close {} {
db_save
catch { db close }
return ""
}
proc db_restore {} {
foreach f [glob -nocomplain test.db*] { forcedelete $f }
foreach f2 [glob -nocomplain sv_test.db*] {
set f [string range $f2 3 end]
forcecopy $f2 $f
}
}
proc db_restore_and_reopen {{dbfile test.db}} {
catch { db close }
db_restore
sqlite3 db $dbfile
}
proc db_delete_and_reopen {{file test.db}} {
catch { db close }
foreach f [glob -nocomplain test.db*] { forcedelete $f }
sqlite3 db $file
}
# Close any connections named [db], [db2] or [db3]. Then use sqlite3_config
# to configure the size of the PAGECACHE allocation using the parameters
# provided to this command. Save the old PAGECACHE parameters in a global
# variable so that [test_restore_config_pagecache] can restore the previous
# configuration.
#
# Before returning, reopen connection [db] on file test.db.
#
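# For example (an illustrative sketch; the sizes are hypothetical):
#
#   test_set_config_pagecache 4096 24   ;# 24 slots of 4096 bytes each
#   # ... run tests that exercise the PAGECACHE allocator ...
#   test_restore_config_pagecache
#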
proc test_set_config_pagecache {sz nPg} {
catch {db close}
catch {db2 close}
catch {db3 close}
sqlite3_shutdown
set ::old_pagecache_config [sqlite3_config_pagecache $sz $nPg]
sqlite3_initialize
autoinstall_test_functions
reset_db
}
# Close any connections named [db], [db2] or [db3]. Then use sqlite3_config
# to configure the size of the PAGECACHE allocation to the size saved in
# the global variable by an earlier call to [test_set_config_pagecache].
#
# Before returning, reopen connection [db] on file test.db.
#
proc test_restore_config_pagecache {} {
catch {db close}
catch {db2 close}
catch {db3 close}
sqlite3_shutdown
if {[info exists ::old_pagecache_config]} {
eval sqlite3_config_pagecache $::old_pagecache_config
unset ::old_pagecache_config
}
sqlite3_initialize
autoinstall_test_functions
sqlite3 db test.db
}
proc test_binary_name {nm} {
if {$::tcl_platform(platform)=="windows"} {
set ret "$nm.exe"
} else {
set ret $nm
}
file normalize [file join $::cmdlinearg(TESTFIXTURE_HOME) $ret]
}
proc test_find_binary {nm} {
set ret [test_binary_name $nm]
if {![file executable $ret]} {
finish_test
return ""
}
return $ret
}
# Find the name of the 'shell' executable (e.g. "sqlite3.exe") to use for
# the tests in shell*.test. If no such executable can be found, invoke
# [finish_test ; return] in the callers context.
#
proc test_find_cli {} {
set prog [test_find_binary sqlite3]
if {$prog==""} { return -code return }
return $prog
}
# Find invocation of the 'shell' executable (e.g. "sqlite3.exe") to use
# for the tests in shell*.test with optional valgrind prefix when the
# environment variable SQLITE_CLI_VALGRIND_OPT is set. The set value
# operates as follows:
# empty or 0 => no valgrind prefix;
# 1 => valgrind options for memory leak check;
# other => use value as valgrind options.
# If shell not found, invoke [finish_test ; return] in callers context.
#
proc test_cli_invocation {} {
set prog [test_find_binary sqlite3]
if {$prog==""} { return -code return }
set vgrun [expr {[permutation]=="valgrind"}]
if {$vgrun || [info exists ::env(SQLITE_CLI_VALGRIND_OPT)]} {
if {$vgrun} {
set vgo "--quiet"
} else {
set vgo $::env(SQLITE_CLI_VALGRIND_OPT)
}
if {$vgo == 0 || $vgo eq ""} {
return $prog
} elseif {$vgo == 1} {
return "valgrind --quiet --leak-check=yes $prog"
} else {
return "valgrind $vgo $prog"
}
} else {
return $prog
}
}
# Find the name of the 'sqldiff' executable (e.g. "sqldiff.exe") to use for
# the sqldiff tests. If no such executable can be found, invoke
# [finish_test ; return] in the callers context.
#
proc test_find_sqldiff {} {
set prog [test_find_binary sqldiff]
if {$prog==""} { return -code return }
return $prog
}
# Call sqlite3_expanded_sql() on all statements associated with database
# connection $db. This sometimes finds use-after-free bugs if run with
# valgrind or address-sanitizer.
proc expand_all_sql {db} {
set stmt ""
while {[set stmt [sqlite3_next_stmt $db $stmt]]!=""} {
sqlite3_expanded_sql $stmt
}
}
# If the library is compiled with the SQLITE_DEFAULT_AUTOVACUUM macro set
# to non-zero, then set the global variable $AUTOVACUUM to 1.
set AUTOVACUUM $sqlite_options(default_autovacuum)
# Make sure the FTS enhanced query syntax is disabled.
set sqlite_fts3_enable_parentheses 0
# During testing, assume that all database files are well-formed. The
# few test cases that deliberately corrupt database files should rescind
# this setting by invoking "database_can_be_corrupt"
#
database_never_corrupt
extra_schema_checks 1
source $testdir/thread_common.tcl
source $testdir/malloc_common.tcl
set tester_tcl_has_run 1