html_url,issue_url,id,node_id,user,user_label,created_at,updated_at,author_association,body,reactions,issue,issue_label,performed_via_github_app https://github.com/simonw/datasette/issues/523#issuecomment-504807727,https://api.github.com/repos/simonw/datasette/issues/523,504807727,MDEyOklzc3VlQ29tbWVudDUwNDgwNzcyNw==,9599,simonw,2019-06-24T01:24:07Z,2019-06-24T01:24:07Z,OWNER,"For databases opened in immutable mode we pre-calculate the total row count for every table precisely so we can offer this kind of functionality without too much of a performance hit. The total row count is already available to the template (you can hit the .json endpoint to see it), so implementing this should be possible just by updating the template. For mutable databases we have a mechanism for attempting the count and giving up after a specified time limit - we can use that to get ""3 of many"". It looks like this is actually a dupe of #127 and #134.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459627549,Show total/unfiltered row count when filtering, https://github.com/simonw/datasette/issues/523#issuecomment-504809397,https://api.github.com/repos/simonw/datasette/issues/523,504809397,MDEyOklzc3VlQ29tbWVudDUwNDgwOTM5Nw==,2657547,rixx,2019-06-24T01:38:14Z,2019-06-24T01:38:14Z,CONTRIBUTOR,"Ah, apologies – I had found and read those issues, but I was under the impression that they referred only to the filtered row count, not the unfiltered total row count.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459627549,Show total/unfiltered row count when filtering, https://github.com/simonw/datasette/pull/518#issuecomment-504843916,https://api.github.com/repos/simonw/datasette/issues/518,504843916,MDEyOklzc3VlQ29tbWVudDUwNDg0MzkxNg==,9599,simonw,2019-06-24T03:30:37Z,2019-06-24T03:30:37Z,OWNER,It's live! 
https://a559123.datasette.io/,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459587155,Port Datasette from Sanic to ASGI + Uvicorn, https://github.com/simonw/datasette/issues/272#issuecomment-504844339,https://api.github.com/repos/simonw/datasette/issues/272,504844339,MDEyOklzc3VlQ29tbWVudDUwNDg0NDMzOQ==,9599,simonw,2019-06-24T03:33:06Z,2019-06-24T03:33:06Z,OWNER,"It's alive! Here's the first deployed version: https://a559123.datasette.io/ You can confirm it's running under ASGI by viewing https://a559123.datasette.io/-/versions and looking for the `""asgi""` key. Compare to the last version of master running on Sanic here: http://aa91112.datasette.io/","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",324188953,Port Datasette to ASGI, https://github.com/simonw/datasette/issues/520#issuecomment-504852873,https://api.github.com/repos/simonw/datasette/issues/520,504852873,MDEyOklzc3VlQ29tbWVudDUwNDg1Mjg3Mw==,9599,simonw,2019-06-24T04:28:22Z,2019-06-24T04:28:22Z,OWNER,#272 is closed now! 
This hook is next on the priority list.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459598080,asgi_wrapper plugin hook, https://github.com/simonw/datasette/issues/272#issuecomment-504857097,https://api.github.com/repos/simonw/datasette/issues/272,504857097,MDEyOklzc3VlQ29tbWVudDUwNDg1NzA5Nw==,9599,simonw,2019-06-24T04:54:15Z,2019-06-24T04:54:15Z,OWNER,I wrote about this on my blog: https://simonwillison.net/2019/Jun/23/datasette-asgi/,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",324188953,Port Datasette to ASGI, https://github.com/simonw/datasette/issues/511#issuecomment-504863286,https://api.github.com/repos/simonw/datasette/issues/511,504863286,MDEyOklzc3VlQ29tbWVudDUwNDg2MzI4Ng==,9599,simonw,2019-06-24T05:30:02Z,2019-06-24T05:30:02Z,OWNER,I've landed #272 - need to manually test if it works on Windows now!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",456578474,Get Datasette tests passing on Windows in GitHub Actions, https://github.com/simonw/datasette/issues/398#issuecomment-504863901,https://api.github.com/repos/simonw/datasette/issues/398,504863901,MDEyOklzc3VlQ29tbWVudDUwNDg2MzkwMQ==,9599,simonw,2019-06-24T05:33:26Z,2019-06-24T05:33:26Z,OWNER,I no longer depend on Sanic so I should be able to solve this entirely within the Datasette codebase. 
I need to figure out how `Transfer-Encoding: chunked` and the `Content-Length` header interact.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",398011658,Ensure downloading a 100+MB SQLite database file works, https://github.com/simonw/datasette/issues/305#issuecomment-504878886,https://api.github.com/repos/simonw/datasette/issues/305,504878886,MDEyOklzc3VlQ29tbWVudDUwNDg3ODg4Ng==,9599,simonw,2019-06-24T06:40:19Z,2019-06-24T06:40:19Z,OWNER,I did this a while ago https://datasette.readthedocs.io/en/stable/contributing.html,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",329147284,Add contributor guidelines to docs, https://github.com/simonw/datasette/issues/267#issuecomment-504879082,https://api.github.com/repos/simonw/datasette/issues/267,504879082,MDEyOklzc3VlQ29tbWVudDUwNDg3OTA4Mg==,9599,simonw,2019-06-24T06:41:02Z,2019-06-24T06:41:02Z,OWNER,Yes this is definitely documented now https://datasette.readthedocs.io/en/stable/performance.html,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",323716411,"Documentation for URL hashing, redirects and cache policy", https://github.com/simonw/datasette/issues/106#issuecomment-504879510,https://api.github.com/repos/simonw/datasette/issues/106,504879510,MDEyOklzc3VlQ29tbWVudDUwNDg3OTUxMA==,9599,simonw,2019-06-24T06:42:33Z,2019-06-24T06:42:33Z,OWNER,https://datasette.readthedocs.io/en/stable/sql_queries.html?highlight=Pagination#pagination,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",274315193,Document how pagination works, 
https://github.com/simonw/datasette/issues/124#issuecomment-504879834,https://api.github.com/repos/simonw/datasette/issues/124,504879834,MDEyOklzc3VlQ29tbWVudDUwNDg3OTgzNA==,9599,simonw,2019-06-24T06:43:46Z,2019-06-24T06:43:46Z,OWNER,https://simonwillison.net/2019/May/19/datasette-0-28/,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",275125805,Option to open readonly but not immutable, https://github.com/simonw/datasette/issues/183#issuecomment-504880173,https://api.github.com/repos/simonw/datasette/issues/183,504880173,MDEyOklzc3VlQ29tbWVudDUwNDg4MDE3Mw==,9599,simonw,2019-06-24T06:45:07Z,2019-06-24T06:45:07Z,OWNER,Closing as couldn't replicate,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",291639118,Custom Queries - escaping strings, https://github.com/simonw/datasette/issues/268#issuecomment-504880796,https://api.github.com/repos/simonw/datasette/issues/268,504880796,MDEyOklzc3VlQ29tbWVudDUwNDg4MDc5Ng==,9599,simonw,2019-06-24T06:47:23Z,2019-06-24T06:47:23Z,OWNER,I did a bunch of research relevant to this a while ago: https://simonwillison.net/2019/Jan/7/exploring-search-relevance-algorithms-sqlite/,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",323718842,Mechanism for ranking results from SQLite full-text search, https://github.com/simonw/datasette/issues/146#issuecomment-504881030,https://api.github.com/repos/simonw/datasette/issues/146,504881030,MDEyOklzc3VlQ29tbWVudDUwNDg4MTAzMA==,9599,simonw,2019-06-24T06:48:20Z,2019-06-24T06:48:20Z,OWNER,"I'm going to call this ""done"" thanks to cloudrun: #400 ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",276455748,datasette publish gcloud, 
https://github.com/simonw/datasette/issues/340#issuecomment-504881630,https://api.github.com/repos/simonw/datasette/issues/340,504881630,MDEyOklzc3VlQ29tbWVudDUwNDg4MTYzMA==,9599,simonw,2019-06-24T06:50:26Z,2019-06-24T06:50:26Z,OWNER,Black is now enforced by our unit tests as of #449 ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",340730961,Embrace black, https://github.com/simonw/datasette/issues/215#issuecomment-504881900,https://api.github.com/repos/simonw/datasette/issues/215,504881900,MDEyOklzc3VlQ29tbWVudDUwNDg4MTkwMA==,9599,simonw,2019-06-24T06:51:29Z,2020-06-06T21:47:11Z,OWNER,See also #520 - asgi_wrapper plugin hook.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",314506669,Allow plugins to define additional URL routes and views, https://github.com/simonw/datasette/issues/238#issuecomment-504882244,https://api.github.com/repos/simonw/datasette/issues/238,504882244,MDEyOklzc3VlQ29tbWVudDUwNDg4MjI0NA==,9599,simonw,2019-06-24T06:52:45Z,2019-06-24T06:52:45Z,OWNER,I'm not going to do this - there are plenty of smarter ways of achieving a similar goal.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",317714268,External metadata.json, https://github.com/simonw/datasette/issues/294#issuecomment-504882686,https://api.github.com/repos/simonw/datasette/issues/294,504882686,MDEyOklzc3VlQ29tbWVudDUwNDg4MjY4Ng==,9599,simonw,2019-06-24T06:54:22Z,2019-06-24T06:54:22Z,OWNER,Consider this when solving #465 ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",327365110,inspect should record column types, 
https://github.com/simonw/datasette/issues/48#issuecomment-504883688,https://api.github.com/repos/simonw/datasette/issues/48,504883688,MDEyOklzc3VlQ29tbWVudDUwNDg4MzY4OA==,9599,simonw,2019-06-24T06:57:43Z,2019-06-24T06:57:43Z,OWNER,"I've seen no evidence that JSON handling is even close to being a performance bottleneck, so wontfix.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",272391665,Switch to ujson, https://github.com/simonw/datasette/issues/514#issuecomment-504998302,https://api.github.com/repos/simonw/datasette/issues/514,504998302,MDEyOklzc3VlQ29tbWVudDUwNDk5ODMwMg==,7936571,chrismp,2019-06-24T12:57:19Z,2019-06-24T12:57:19Z,NONE,"Same error when I used the full path. On Sun, Jun 23, 2019 at 18:31 Simon Willison wrote: > I suggest trying a full path in ExecStart like this: > > ExecStart=/home/chris/Env/datasette/bin/datasette serve -h 0.0.0.0 > /home/chris/digital-library/databases/*.db --cors --metadata metadata.json > > That should eliminate the chance of some kind of path confusion. > > — > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > , > or mute the thread > > . > -- *Chris Persaud* ChrisPersaud.com ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459397625,Documentation with recommendations on running Datasette in production without using Docker, https://github.com/simonw/datasette/issues/527#issuecomment-505052224,https://api.github.com/repos/simonw/datasette/issues/527,505052224,MDEyOklzc3VlQ29tbWVudDUwNTA1MjIyNA==,9599,simonw,2019-06-24T15:08:52Z,2019-06-24T15:08:52Z,OWNER,"The `select rank, *` feature is only available with FTS5 - it won't work with FTS4. So my best guess is that `csvs-to-sqlite` is setting up FTS with FTS4 when you want FTS5. ... I tested on my own machine and that is indeed what's happening! 
And in fact it looks like it's a known bug - I should fix that! https://github.com/simonw/csvs-to-sqlite/issues/41","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459936585,Unable to use rank when fts-table generated with csvs-to-sqlite, https://github.com/simonw/datasette/issues/527#issuecomment-505052344,https://api.github.com/repos/simonw/datasette/issues/527,505052344,MDEyOklzc3VlQ29tbWVudDUwNTA1MjM0NA==,9599,simonw,2019-06-24T15:09:10Z,2019-06-24T15:09:10Z,OWNER,Closing in favour of that bug in the csvs-to-sqlite repo.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459936585,Unable to use rank when fts-table generated with csvs-to-sqlite, https://github.com/simonw/datasette/issues/527#issuecomment-505057520,https://api.github.com/repos/simonw/datasette/issues/527,505057520,MDEyOklzc3VlQ29tbWVudDUwNTA1NzUyMA==,9599,simonw,2019-06-24T15:21:18Z,2019-06-24T15:21:18Z,OWNER,I just released csvs-to-sqlite 0.9.1 with this bug fix: https://github.com/simonw/csvs-to-sqlite/releases/tag/0.9.1,"{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459936585,Unable to use rank when fts-table generated with csvs-to-sqlite, https://github.com/simonw/datasette/issues/526#issuecomment-505060332,https://api.github.com/repos/simonw/datasette/issues/526,505060332,MDEyOklzc3VlQ29tbWVudDUwNTA2MDMzMg==,9599,simonw,2019-06-24T15:28:16Z,2019-06-24T15:28:16Z,OWNER,"This is currently a deliberate feature decision. The problem is that the streaming CSV feature relies on Datasette's automated efficient pagination under the hood. When you stream a CSV you're actually causing Datasette to paginate through the full set of ""pages"" under the hood, streaming each page out as a new chunk of CSV rows. 
This mechanism only works if the `next_url` has been generated for the page. Currently the `next_url` is available for table views (where it uses [the primary key or the sort column](https://datasette.readthedocs.io/en/stable/sql_queries.html#pagination)) and for views, but it's not set for canned queries because I can't be certain they can be efficiently paginated. Offset/limit pagination for canned queries would be a pretty nasty performance hit, because each subsequent page would require even more time for SQLite to scroll through to the specified offset. This does seem like it's worth fixing though: pulling every row for a canned query would definitely be useful. The problem is that the pagination trick used elsewhere isn't right for canned queries - instead I would need to keep the database cursor open until ALL rows had been fetched. Figuring out how to do that efficiently within an asyncio managed thread pool may take some thought. Maybe this feature ends up as something which is turned off by default (due to the risk of it causing uptime problems for public sites) but that users working on their own private environments can turn on? ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459882902,Stream all results for arbitrary SQL and canned queries, https://github.com/simonw/datasette/issues/514#issuecomment-505061703,https://api.github.com/repos/simonw/datasette/issues/514,505061703,MDEyOklzc3VlQ29tbWVudDUwNTA2MTcwMw==,9599,simonw,2019-06-24T15:31:25Z,2019-06-24T15:31:25Z,OWNER,"I'm suspicious of the wildcard. Does it work if you do the following? ExecStart=/home/chris/Env/datasette/bin/datasette serve -h 0.0.0.0 /home/chris/digital-library/databases/actual-database.db --cors --metadata /home/chris/digital-library/metadata.json If that does work then it means the ExecStart line doesn't support bash wildcard expansion. 
You'll need to create a separate script like this: ``` #!/bin/bash /home/chris/Env/datasette/bin/datasette serve -h 0.0.0.0 /home/chris/digital-library/databases/*.db --cors --metadata /home/chris/digital-library/metadata.json ``` Then save that as `/home/chris/digital-library/run-datasette.sh` and try this: ExecStart=/home/chris/digital-library/run-datasette.sh","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459397625,Documentation with recommendations on running Datasette in production without using Docker, https://github.com/simonw/datasette/issues/525#issuecomment-505083671,https://api.github.com/repos/simonw/datasette/issues/525,505083671,MDEyOklzc3VlQ29tbWVudDUwNTA4MzY3MQ==,9599,simonw,2019-06-24T16:29:30Z,2019-06-24T16:29:30Z,OWNER,"It's mentioned here at the moment, but I'm going to expand that: https://github.com/simonw/datasette/blob/34e292d24dc2b8376236472bec3cce1c556ddfe5/docs/full_text_search.rst#L74","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459714943,Add section on sqite-utils enable-fts to the search documentation, https://github.com/simonw/datasette/issues/525#issuecomment-505086213,https://api.github.com/repos/simonw/datasette/issues/525,505086213,MDEyOklzc3VlQ29tbWVudDUwNTA4NjIxMw==,9599,simonw,2019-06-24T16:36:35Z,2019-06-24T16:36:35Z,OWNER,"https://datasette.readthedocs.io/en/latest/full_text_search.html#adding-full-text-search-to-a-sqlite-table ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459714943,Add section on sqite-utils enable-fts to the search documentation, 
https://github.com/simonw/datasette/pull/437#issuecomment-505087020,https://api.github.com/repos/simonw/datasette/issues/437,505087020,MDEyOklzc3VlQ29tbWVudDUwNTA4NzAyMA==,9599,simonw,2019-06-24T16:38:56Z,2019-06-24T16:38:56Z,OWNER,Closing this because it doesn't really fit the new model of inspect (though we should discuss in #465 how to further evolve this feature) and because as-of #272 we no longer use Sanic - though #520 will implement the equivalent of `prepare_sanic` against ASGI.,"{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",438048318,Add inspect and prepare_sanic hooks, https://github.com/simonw/datasette/issues/526#issuecomment-505161008,https://api.github.com/repos/simonw/datasette/issues/526,505161008,MDEyOklzc3VlQ29tbWVudDUwNTE2MTAwOA==,9599,simonw,2019-06-24T20:11:15Z,2019-06-24T20:11:15Z,OWNER,"Views already use offset/limit pagination so actually I may be over-thinking this. Maybe the right thing to do here is to have the feature enabled by default, since it will work for the VAST majority of queries - the only ones that might cause problems are complex queries across millions of rows. It can continue to use aggressive internal time limits so if someone DOES trigger something expensive they'll get an error. I can allow users to disable the feature with a config setting, or increase the time limit if they need to. 
Downgrading this from a medium to a small since it's much less effort to enable the existing pagination method for this type of query.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459882902,Stream all results for arbitrary SQL and canned queries, https://github.com/simonw/datasette/issues/526#issuecomment-505162238,https://api.github.com/repos/simonw/datasette/issues/526,505162238,MDEyOklzc3VlQ29tbWVudDUwNTE2MjIzOA==,9599,simonw,2019-06-24T20:14:51Z,2019-06-24T20:14:51Z,OWNER,"The other reason I didn't implement this in the first place is that adding offset/limit to a custom query (as opposed to a view) requires modifying the existing SQL - what if that SQL already has its own offset/limit clause? It looks like I can solve that using a nested query: ```sql select * from ( select * from compound_three_primary_keys limit 1000 ) limit 10 offset 100 ``` https://latest.datasette.io/fixtures?sql=select+*+from+%28%0D%0A++select+*+from+compound_three_primary_keys+limit+1000%0D%0A%29+limit+10+offset+100 So I can wrap any user-provided SQL query in an outer offset/limit and implement pagination that way.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459882902,Stream all results for arbitrary SQL and canned queries,