html_url,issue_url,id,node_id,user,user_label,created_at,updated_at,author_association,body,reactions,issue,issue_label,performed_via_github_app
https://github.com/simonw/sqlite-utils/pull/146#issuecomment-688573964,https://api.github.com/repos/simonw/sqlite-utils/issues/146,688573964,MDEyOklzc3VlQ29tbWVudDY4ODU3Mzk2NA==,96218,simonwiles,2020-09-08T01:55:07Z,2020-09-08T01:55:07Z,CONTRIBUTOR,"Okay, I've rewritten this PR to preserve the batching behaviour but still fix #145, and rebased the branch to account for the `db.execute()` API change. It's not terribly sophisticated -- if it attempts to insert a batch which has too many variables, the exception is caught, the batch is split in two and each half is inserted separately, and then it carries on as before with the same `batch_size`. In the edge case where this gets triggered, subsequent batches will all be inserted in two groups too if they continue to have the same number of columns (which is presumably reasonably likely). Do you reckon this is acceptable when set against the awkwardness of recalculating the `batch_size` on the fly?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",688668680,Handle case where subsequent records (after first batch) include extra columns,
https://github.com/dogsheep/dogsheep-beta/issues/18#issuecomment-688622995,https://api.github.com/repos/dogsheep/dogsheep-beta/issues/18,688622995,MDEyOklzc3VlQ29tbWVudDY4ODYyMjk5NQ==,9599,simonw,2020-09-08T05:15:21Z,2020-09-08T05:15:21Z,MEMBER,"Alternatively it could run as it does now but add a `DELETE FROM index1.search_index WHERE key not in (select key from ...)`. I'm not sure which would be more efficient.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",695553522,Deleted records stay in the search index,
https://github.com/dogsheep/dogsheep-beta/issues/18#issuecomment-688623097,https://api.github.com/repos/dogsheep/dogsheep-beta/issues/18,688623097,MDEyOklzc3VlQ29tbWVudDY4ODYyMzA5Nw==,9599,simonw,2020-09-08T05:15:51Z,2020-09-08T05:15:51Z,MEMBER,"I'm inclined to go with the first, simpler option. I have longer term plans for efficient incremental index updates based on clever trickery with triggers.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",695553522,Deleted records stay in the search index,
https://github.com/dogsheep/dogsheep-beta/issues/19#issuecomment-688625430,https://api.github.com/repos/dogsheep/dogsheep-beta/issues/19,688625430,MDEyOklzc3VlQ29tbWVudDY4ODYyNTQzMA==,9599,simonw,2020-09-08T05:24:50Z,2020-09-08T05:24:50Z,MEMBER,"I thought about allowing tables to define an incremental indexing SQL query - maybe something that can return just records touched in the past hour, or records since a recorded ""last indexed record"" value. The problem with this is deletes - if you delete a record, how does the indexer know to remove it?
See #18 - that's already caused problems.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",695556681,Figure out incremental re-indexing,
https://github.com/dogsheep/dogsheep-beta/issues/19#issuecomment-688626037,https://api.github.com/repos/dogsheep/dogsheep-beta/issues/19,688626037,MDEyOklzc3VlQ29tbWVudDY4ODYyNjAzNw==,9599,simonw,2020-09-08T05:27:07Z,2020-09-08T05:27:07Z,MEMBER,"A really clever way to do this would be with triggers. The indexer script would add triggers to each of the database tables that it is indexing - each in their own database. Those triggers would then maintain a `_index_queue_` table. This table would record the primary key of rows that are added, modified or deleted. The indexer could then work by reading through the `_index_queue_` table, re-indexing (or deleting) just the primary keys listed there, and then emptying the queue once it has finished. This would add a small amount of overhead to insert/update/delete queries run against the table. My hunch is that the overhead would be minuscule, but I could still allow people to opt out for tables that are so high-traffic that this would matter.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",695556681,Figure out incremental re-indexing,
https://github.com/simonw/sqlite-utils/issues/155#issuecomment-689163158,https://api.github.com/repos/simonw/sqlite-utils/issues/155,689163158,MDEyOklzc3VlQ29tbWVudDY4OTE2MzE1OA==,9599,simonw,2020-09-08T22:10:27Z,2020-09-08T22:10:27Z,OWNER,"For the command version:

    sqlite-utils rebuild-fts mydb.db

This will rebuild all detected FTS tables. You can also specify one or more explicit tables:

    sqlite-utils rebuild-fts mydb.db dogs","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",696045581,rebuild-fts command and table.rebuild_fts() method,
https://github.com/simonw/sqlite-utils/issues/153#issuecomment-689165985,https://api.github.com/repos/simonw/sqlite-utils/issues/153,689165985,MDEyOklzc3VlQ29tbWVudDY4OTE2NTk4NQ==,9599,simonw,2020-09-08T22:18:52Z,2020-09-08T22:18:52Z,OWNER,"I've reverted this change again, because it turns out using the `rebuild` FTS mechanism is a better way of repairing this issue - see #155.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",695377804,table.optimize() should delete junk rows from *_fts_docsize,
https://github.com/simonw/sqlite-utils/issues/155#issuecomment-689166404,https://api.github.com/repos/simonw/sqlite-utils/issues/155,689166404,MDEyOklzc3VlQ29tbWVudDY4OTE2NjQwNA==,9599,simonw,2020-09-08T22:20:03Z,2020-09-08T22:20:03Z,OWNER,"I'm going to update `sqlite-utils optimize` to also take an optional list of tables, for consistency.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",696045581,rebuild-fts command and table.rebuild_fts() method,
https://github.com/simonw/sqlite-utils/pull/146#issuecomment-689185393,https://api.github.com/repos/simonw/sqlite-utils/issues/146,689185393,MDEyOklzc3VlQ29tbWVudDY4OTE4NTM5Mw==,9599,simonw,2020-09-08T23:17:42Z,2020-09-08T23:17:42Z,OWNER,"That seems like a reasonable approach to me, especially since this is going to be a pretty rare edge case.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",688668680,Handle case where subsequent records (after first batch) include extra columns,
https://github.com/simonw/sqlite-utils/issues/145#issuecomment-689186423,https://api.github.com/repos/simonw/sqlite-utils/issues/145,689186423,MDEyOklzc3VlQ29tbWVudDY4OTE4NjQyMw==,9599,simonw,2020-09-08T23:21:23Z,2020-09-08T23:21:23Z,OWNER,Fixed in PR #146.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",688659182,Bug when first record contains fewer columns than subsequent records,