{"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-556749086", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 556749086, "node_id": "MDEyOklzc3VlQ29tbWVudDU1Njc0OTA4Ng==", "user": {"value": 639012, "label": "jsfenfen"}, "created_at": "2019-11-21T01:15:34Z", "updated_at": "2019-11-21T01:21:45Z", "author_association": "CONTRIBUTOR", "body": "Hey @simonw is the url_prefix config option available in another branch, it looks like you've written some tests for it above? In 0.32 I get \"url_prefix is not a valid option\". I think this would be *really helpful*!\r\n\r\nThis would be really handy for proxying datasette in another domain's *subdirectory* I believe this will allow folks to run upstream authentication, but the links break if the url_prefix doesn't match. \r\n\r\nI'd prefer not to host a proxied version of datasette on a subdomain (e.g. datasette.myurl.com b/c then I gotta worry about sharing authorization cookies with the subdomain, which I just assume not do, but...)\r\n\r\nEdit: I see the wip-url-prefix branch, I may try with that https://github.com/simonw/datasette/commit/8da2db4b71096b19e7a9ef1929369b8483d448bf", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/639#issuecomment-558687342", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/639", "id": 558687342, "node_id": "MDEyOklzc3VlQ29tbWVudDU1ODY4NzM0Mg==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2019-11-26T15:40:00Z", "updated_at": "2019-11-26T15:40:00Z", "author_association": "CONTRIBUTOR", "body": "A bit of background: the reason `heroku git:clone` brings down an empty directory is because `datasette publish heroku` uses the [builds API](https://devcenter.heroku.com/articles/build-and-release-using-the-api), rather than a `git push`, to release the app. I originally did this because it seemed like a lower bar than having a working `git`, but the downside is, as you found out, that tweaking the created app is hard. \r\n\r\nSo there's one option -- change `datasette publish heroku` to use `git push` instead of `heroku builds:create`.\r\n\r\n@pkoppstein - what you suggested seems like it ought to work (you don't need maintenance mode, though). I'm not sure why it doesn't.\r\n\r\nYou could also look into using the [slugs API](https://devcenter.heroku.com/articles/platform-api-deploying-slugs) to download the slug, change `metadata.json`, re-pack and re-upload the slug.\r\n\r\nUltimately though I think I think @simonw's idea of reading `metadata.json` from an external source might be better (#357). Reading from an alternate URL would be fine, or you could also just stuff the whole `metadata.json` into a Heroku config var, and write a plugin to read it from there. 
\r\n\r\nHope this helps a bit!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 527670799, "label": "updating metadata.json without recreating the app"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/642#issuecomment-559207224", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/642", "id": 559207224, "node_id": "MDEyOklzc3VlQ29tbWVudDU1OTIwNzIyNA==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-11-27T18:40:57Z", "updated_at": "2019-11-27T18:41:07Z", "author_association": "CONTRIBUTOR", "body": "Would cookie cutter approaches also work for creating various flavours of customised templates?\r\n\r\nI need to try to create a couple of sites for myself to get a feel for what sorts of thing are easily doable, and what cribbable cookie cutter items might be. I'm guessing https://simonwillison.net/2019/Nov/25/niche-museums/ is a good place to start from?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 529429214, "label": "Provide a cookiecutter template for creating new plugins"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/573#issuecomment-559632608", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/573", "id": 559632608, "node_id": "MDEyOklzc3VlQ29tbWVudDU1OTYzMjYwOA==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2019-11-29T01:43:38Z", "updated_at": "2019-11-29T01:43:38Z", "author_association": "CONTRIBUTOR", "body": "In passing, it looks like a start was made on a datasette Jupyter server extension in https://github.com/lucasdurand/jupyter-datasette although the build fails in MyBinder.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 492153532, "label": "Exposing Datasette via Jupyter-server-proxy"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/644#issuecomment-565755208", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/644", "id": 565755208, "node_id": "MDEyOklzc3VlQ29tbWVudDU2NTc1NTIwOA==", "user": {"value": 6025893, "label": "chris48s"}, "created_at": "2019-12-14T21:33:31Z", "updated_at": "2019-12-14T21:33:31Z", "author_association": "CONTRIBUTOR", "body": "Hi @simonw\r\n\r\nHave you had a chance to look at this at all?\r\n\r\nI'm going to have a chunk of time free next week so if there is additional work needed on this, that would be a particularly convenient time for me to revisit this.\r\n\r\nCheers", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 530513784, "label": "Validate metadata json on startup"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-567133734", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 567133734, "node_id": "MDEyOklzc3VlQ29tbWVudDU2NzEzMzczNA==", "user": {"value": 639012, "label": "jsfenfen"}, "created_at": "2019-12-18T17:33:23Z", "updated_at": "2019-12-18T17:33:23Z", "author_association": "CONTRIBUTOR", "body": "FWIW I did a dumb merge of the branch here: 
https://github.com/jsfenfen/datasette and it seemed to work in that I could run stuff at a subdirectory, but ended up abandoning it in favor of just posting a subdomain because getting the nginx configs right was making me crazy. I still would prefer posting at a subdirectory but the subdomain seems simpler at the moment. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/74#issuecomment-573388052", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/74", "id": 573388052, "node_id": "MDEyOklzc3VlQ29tbWVudDU3MzM4ODA1Mg==", "user": {"value": 15092, "label": "jayvdb"}, "created_at": "2020-01-12T06:51:30Z", "updated_at": "2020-01-12T06:51:30Z", "author_association": "CONTRIBUTOR", "body": "Thanks. That showed me that there was a click cli runner error, and setting `export LANG=en_US.UTF-8` fixed it.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 546073980, "label": "Test failures on openSUSE 15.1: AssertionError: Explicit other_table and other_column"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/74#issuecomment-573389669", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/74", "id": 573389669, "node_id": "MDEyOklzc3VlQ29tbWVudDU3MzM4OTY2OQ==", "user": {"value": 15092, "label": "jayvdb"}, "created_at": "2020-01-12T07:21:17Z", "updated_at": "2020-01-12T07:21:17Z", "author_association": "CONTRIBUTOR", "body": "I guess there is some extra flag for ` CliRunner.invoke` to check exitcode and raise the exception, or that should be an extra assert added.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 546073980, "label": "Test failures on openSUSE 15.1: AssertionError: Explicit other_table and other_column"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/656#issuecomment-576293773", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/656", "id": 576293773, "node_id": "MDEyOklzc3VlQ29tbWVudDU3NjI5Mzc3Mw==", "user": {"value": 6371750, "label": "JBPressac"}, "created_at": "2020-01-20T14:17:11Z", "updated_at": "2020-01-20T14:17:11Z", "author_association": "CONTRIBUTOR", "body": "Seems that headers and definitions has simply to be filled as an HTML table in the description field of matadata.json.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 546961357, "label": "Display of the column definitions"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/653#issuecomment-582105810", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/653", "id": 582105810, "node_id": "MDEyOklzc3VlQ29tbWVudDU4MjEwNTgxMA==", "user": {"value": 418191, "label": "jaywgraves"}, "created_at": "2020-02-04T20:43:01Z", "updated_at": "2020-02-04T20:43:01Z", "author_association": "CONTRIBUTOR", "body": "I *think* the existing code will be OK even if I strip the lines in the middle of a new line delimited 
string.\r\n\r\nIt's only used for the validation, SQLite handles the `--` just fine and the whole SQL textarea still gets sent once it passes validation.\r\n\r\nI can add your test case to my branch later this evening though.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 541331755, "label": "allow leading comments in SQL input field"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/653#issuecomment-582106085", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/653", "id": 582106085, "node_id": "MDEyOklzc3VlQ29tbWVudDU4MjEwNjA4NQ==", "user": {"value": 418191, "label": "jaywgraves"}, "created_at": "2020-02-04T20:43:43Z", "updated_at": "2020-02-04T20:43:43Z", "author_association": "CONTRIBUTOR", "body": "but this also doesn't have to land at all if it doesn't match your use case. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 541331755, "label": "allow leading comments in SQL input field"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/417#issuecomment-586599424", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/417", "id": 586599424, "node_id": "MDEyOklzc3VlQ29tbWVudDU4NjU5OTQyNA==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-02-15T15:12:19Z", "updated_at": "2020-02-15T15:12:33Z", "author_association": "CONTRIBUTOR", "body": "So could the polling support also allow you to call sqlite_utils to update a database with csv files? (Though I'm guessing you would only want to handle changed files? Do your scrapers check and cache csv datestamps/hashes?)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 421546944, "label": "Datasette Library"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/666#issuecomment-590022164", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/666", "id": 590022164, "node_id": "MDEyOklzc3VlQ29tbWVudDU5MDAyMjE2NA==", "user": {"value": 13896256, "label": "kevindkeogh"}, "created_at": "2020-02-23T03:26:00Z", "updated_at": "2020-02-23T03:26:00Z", "author_association": "CONTRIBUTOR", "body": "It was very helpful for me, using it for a 15M row table. Added a test, happy to amend though!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 562085508, "label": "Use inspect-file, if possible, for total row count"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/573#issuecomment-593026413", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/573", "id": 593026413, "node_id": "MDEyOklzc3VlQ29tbWVudDU5MzAyNjQxMw==", "user": {"value": 127565, "label": "wragge"}, "created_at": "2020-03-01T01:24:45Z", "updated_at": "2020-03-01T01:24:45Z", "author_association": "CONTRIBUTOR", "body": "Did you manage to find an answer to this? 
I've got a notebook to help people generate datasets on the fly from an API, so it would be cool if they flick it to Datasette for initial exploration.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 492153532, "label": "Exposing Datasette via Jupyter-server-proxy"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-602907207", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 602907207, "node_id": "MDEyOklzc3VlQ29tbWVudDYwMjkwNzIwNw==", "user": {"value": 127565, "label": "wragge"}, "created_at": "2020-03-23T23:12:18Z", "updated_at": "2020-03-23T23:12:18Z", "author_association": "CONTRIBUTOR", "body": "This would also be useful for running Datasette in Jupyter notebooks on [Binder](https://mybinder.org/). While you can use [Jupyter-server-proxy](https://github.com/jupyterhub/jupyter-server-proxy) to access Datasette on Binder, the links are broken.\r\n\r\nWhy run Datasette on Binder? I'm developing a [range of Jupyter notebooks](https://glam-workbench.github.io/) that are aimed at getting humanities researchers to explore data from libraries, archives, and museums. Many of them are aimed at researchers with limited digital skills, so being able to run examples in Binder without them installing anything is fantastic.\r\n\r\nFor example, there are a [series of notebooks](https://glam-workbench.github.io/trove-harvester/) that help researchers harvest digitised historical newspaper articles from Trove. The metadata from this harvest is saved as a CSV file that users can download. I've also provided some extra notebooks that use Pandas etc to demonstrate ways of analysing and visualising the harvested data.\r\n\r\nBut it would be really nice if, after completing a harvest, the user could spin up Datasette for some initial exploration of their harvested data without ever leaving their browser.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-604166918", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 604166918, "node_id": "MDEyOklzc3VlQ29tbWVudDYwNDE2NjkxOA==", "user": {"value": 127565, "label": "wragge"}, "created_at": "2020-03-26T00:56:30Z", "updated_at": "2020-03-26T00:56:30Z", "author_association": "CONTRIBUTOR", "body": "Thanks! I'm trying to launch Datasette from *within* a notebook using the jupyter-server-proxy and the new `base_url` parameter. While the assets load ok, and the breadcrumb navigation works, the facet links don't seem to use the `base_url`. 
Or have I missed something?\r\n\r\nMy test repository is here: https://github.com/wragge/datasette-test", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/712#issuecomment-604225034", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/712", "id": 604225034, "node_id": "MDEyOklzc3VlQ29tbWVudDYwNDIyNTAzNA==", "user": {"value": 127565, "label": "wragge"}, "created_at": "2020-03-26T04:40:08Z", "updated_at": "2020-03-26T04:40:08Z", "author_association": "CONTRIBUTOR", "body": "Great! Yes, can confirm that this works on Binder. However, when I try to run the same code locally, I get an Internal Server Error when I try to access Datasette.\r\n\r\n```\r\nERROR: Exception in ASGI application\r\nTraceback (most recent call last):\r\n File \"/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/uvicorn/protocols/http/httptools_impl.py\", line 385, in run_asgi\r\n result = await app(self.scope, self.receive, self.send)\r\n File \"/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/uvicorn/middleware/proxy_headers.py\", line 45, in __call__\r\n return await self.app(scope, receive, send)\r\n File \"/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/datasette_debug_asgi.py\", line 24, in wrapped_app\r\n await app(scope, recieve, send)\r\n File \"/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/datasette/utils/asgi.py\", line 174, in __call__\r\n await self.app(scope, receive, send)\r\n File \"/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/datasette/tracer.py\", line 75, in __call__\r\n await self.app(scope, receive, send)\r\n File \"/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/datasette/app.py\", line 746, in __call__\r\n raw_path = dict(scope[\"headers\"])[path_from_header.encode(\"utf8\")].split(b\"?\")[0]\r\nKeyError: b'x-original-uri'\r\nINFO: 127.0.0.1:49320 - \"GET / HTTP/1.1\" 500 Internal Server Error\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 588108428, "label": "base_url doesn't entirely work for running Datasette inside Binder"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/712#issuecomment-604249402", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/712", "id": 604249402, "node_id": "MDEyOklzc3VlQ29tbWVudDYwNDI0OTQwMg==", "user": {"value": 127565, "label": "wragge"}, "created_at": "2020-03-26T06:11:44Z", "updated_at": "2020-03-26T06:11:44Z", "author_association": "CONTRIBUTOR", "body": "Following on from @betatim's suggestion on Twitter, I've changed the proxy url to include 'absolute'.\r\n\r\n``` python\r\nproxy_url = f'{base_url}proxy/absolute/8001/'\r\n```\r\nThis works both on Binder and locally, without using the `path_from_header` option. I've updated the demo repository. 
Sorry @simonw if I've led you down the wrong path!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 588108428, "label": "base_url doesn't entirely work for running Datasette inside Binder"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/573#issuecomment-604328163", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/573", "id": 604328163, "node_id": "MDEyOklzc3VlQ29tbWVudDYwNDMyODE2Mw==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-03-26T09:41:30Z", "updated_at": "2020-03-26T09:41:30Z", "author_association": "CONTRIBUTOR", "body": "Fixed by @simonw; example here: https://github.com/simonw/jupyterserverproxy-datasette-demo", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 492153532, "label": "Exposing Datasette via Jupyter-server-proxy"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/236#issuecomment-608716819", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/236", "id": 608716819, "node_id": "MDEyOklzc3VlQ29tbWVudDYwODcxNjgxOQ==", "user": {"value": 193185, "label": "cldellow"}, "created_at": "2020-04-03T22:19:00Z", "updated_at": "2020-04-03T22:19:00Z", "author_association": "CONTRIBUTOR", "body": "Hi Simon,\r\n\r\nI'm thinking of attempting this. Can you clarify some questions I have?\r\n\r\n1) I assume the goal is to have a CORS-friendly HTTPS endpoint that hosts the datasette service + user's db.\r\n\r\n2) If that's the goal, I think Lambda alone is insufficient. Lambda provides the compute fabric, but not the HTTP routing. You'd also need to add Application Load Balancer or API Gateway to provide an HTTP endpoint that routes to the lambda function.\r\n\r\nDo you have a preference between ALB or API GW? ALB has better economics at scale, but has a minimum monthly cost. API GW has worse per-request economics, but scales to zero when no requests are happening.\r\n\r\n3) Does Datasette have any native components, or is it all pure python? If it has native bits, they'll likely need to be recompiled to work on Amazon Linux 2.\r\n\r\n4) There are a few disparate services that need to be wired together to expose a Python service securely to the web. If I was doing this outside of the datasette publish system, I'd use an AWS CloudFormation template. Even within datasette, I think it still makes sense to use a CloudFormation template and just have the publish plugin invoke it (via the standard `aws` cli) with user-specified parameters. 
Does that sound reasonable to you?\r\n\r\nThanks for your help!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 317001500, "label": "datasette publish lambda plugin"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/236#issuecomment-612216820", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/236", "id": 612216820, "node_id": "MDEyOklzc3VlQ29tbWVudDYxMjIxNjgyMA==", "user": {"value": 193185, "label": "cldellow"}, "created_at": "2020-04-10T21:03:38Z", "updated_at": "2020-04-10T21:03:38Z", "author_association": "CONTRIBUTOR", "body": "I made a repo at https://github.com/code402/datasette-lambda to demonstrate the idea, and scratch my personal itch for this.\r\n\r\nThe demo relies on some central authority having already published a public, reusable Lambda layer with Datasette & its dependencies. I think that differs from the other publish plugins which seem to mainly publish Dockerfiles that the host will interpret to install deps from a requirements.txt file.\r\n\r\nI chose that approach because `uvloop` appears to be a dependency with native code that needs to be compiled for the target runtime environment. In this case, that's Amazon Linux 2. I'm not 100% clear on whether that's still required, because:\r\n\r\n- maybe `uvloop` is only needed for `uvicorn`, which the demo doesn't actually use since HTTP routing is handled by API Gateway\r\n- it seems like `uvloop` may be an optional, drop-in optimization for `asyncio` in any case (but I may be misreading this; I'm very much a Python noob)\r\n\r\nIf it's the case that `uvloop` is truly optional, then I think the publish plugin could do the packaging on the user's machine, regardless of what flavour of operating system they're on. That'd be a bit slower for the user, but would provide the most long-term flexibility in terms of supporting plugins.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 317001500, "label": "datasette publish lambda plugin"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/731#issuecomment-618126449", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/731", "id": 618126449, "node_id": "MDEyOklzc3VlQ29tbWVudDYxODEyNjQ0OQ==", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2020-04-23T01:38:55Z", "updated_at": "2020-04-23T01:38:55Z", "author_association": "CONTRIBUTOR", "body": "I've almost suggested this same thing a couple times. I tend to have Makefile (because I'm doing other `make` stuff anyway to get data prepped), and I end up putting all those CLI options in something like `make run`. 
But it would be way easier to just have all those typical options -- plugins, templates, metadata -- be defaults.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 605110015, "label": "Option to automatically configure based on directory layout"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/731#issuecomment-618758326", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/731", "id": 618758326, "node_id": "MDEyOklzc3VlQ29tbWVudDYxODc1ODMyNg==", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2020-04-24T01:55:00Z", "updated_at": "2020-04-24T01:55:00Z", "author_association": "CONTRIBUTOR", "body": "Mounting `./static` at `/static` seems the simplest way. Saves you the trouble of deciding what else (`img` for example) gets special treatment.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 605110015, "label": "Option to automatically configure based on directory layout"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/103#issuecomment-622599528", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/103", "id": 622599528, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMjU5OTUyOA==", "user": {"value": 32605365, "label": "b0b5h4rp13"}, "created_at": "2020-05-01T22:49:12Z", "updated_at": "2020-05-02T11:15:44Z", "author_association": "CONTRIBUTOR", "body": "With SQLITE_MAX_VARS = 999, or even 899, This hits the problem with the batch rows causing a overflow (works fine if SQLITE_MAX_VARS = 799).\r\n\r\np.s. 
I have tried a few list of dicts to sqlite modules and this was the easiest to use/understand\r\n\r\n------------- file begins ------------------\r\nimport sqlite_utils as su\r\n\r\n\r\ndata = [\r\n{'tickerId': 913324382, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'CONSTELLATION B', 'symbol': 'STZ B', 'disSymbol': 'STZ-B', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'status': 'D', 'close': '163.13', 'change': '6.46', 'changeRatio': '0.0412', 'marketValue': '31180699895.63', 'volume': '417', 'turnoverRate': '0.0000'},\r\n{'tickerId': 913323791, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Molina Health', 'symbol': 'MOH', 'disSymbol': 'MOH', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '173.25', 'change': '9.28', 'changeRatio': '0.0566', 'pPrice': '173.25', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '10520341695.50', 'volume': '1281557', 'turnoverRate': '0.0202'},\r\n{'tickerId': 913257501, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Seattle Genetics', 'symbol': 'SGEN', 'disSymbol': 'SGEN', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '145.64', 'change': '8.41', 'changeRatio': '0.0613', 'pPrice': '146.45', 'pChange': '0.8100', 'pChRatio': '0.0056', 'marketValue': '25117961347.60', 'volume': '2791411', 'turnoverRate': '0.0162'},\r\n{'tickerId': 925381971, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Bandwidth', 'symbol': 'BAND', 'disSymbol': 'BAND', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '89.22', 'change': '7.66', 'changeRatio': '0.0939', 'pPrice': '89.00', 'pChange': '-0.2200', 'pChRatio': '-0.0025', 'marketValue': '2100025474.98', 'volume': '1508629', 'turnoverRate': '0.0641'},\r\n{'tickerId': 913323935, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Magellan Health', 'symbol': 'MGLN', 'disSymbol': 'MGLN', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '68.00', 'change': '7.27', 'changeRatio': '0.1197', 'pPrice': '68.00', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '1697894040.00', 'volume': '448919', 'turnoverRate': '0.0180'},\r\n{'tickerId': 913254854, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'On Assignment', 'symbol': 'ASGN', 'disSymbol': 'ASGN', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '53.04', 'change': '6.59', 'changeRatio': '0.1419', 'pPrice': '53.04', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '2811120000.00', 'volume': '1339771', 'turnoverRate': '0.0253'},\r\n{'tickerId': 913255732, 'exchangeId': 95, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Arcturus', 'symbol': 'ARCT', 'disSymbol': 'ARCT', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NMS', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 
'close': '40.86', 'change': '6.36', 'changeRatio': '0.1843', 'pPrice': '42.60', 'pChange': '1.740', 'pChRatio': '0.0426', 'marketValue': '812021444.46', 'volume': '1577508', 'turnoverRate': '0.0794'},\r\n{'tickerId': 913256616, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'DexCom', 'symbol': 'DXCM', 'disSymbol': 'DXCM', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '341.52', 'change': '6.32', 'changeRatio': '0.0189', 'pPrice': '340.00', 'pChange': '-1.5200', 'pChRatio': '-0.0045', 'marketValue': '31522296000.00', 'volume': '1008849', 'turnoverRate': '0.0109'},\r\n{'tickerId': 913255108, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Clorox', 'symbol': 'CLX', 'disSymbol': 'CLX', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '192.71', 'change': '6.27', 'changeRatio': '0.0336', 'pPrice': '192.95', 'pChange': '0.2400', 'pChRatio': '0.0012', 'marketValue': '24185773318.28', 'volume': '4996414', 'turnoverRate': '0.0398'},\r\n{'tickerId': 925314627, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'FRANCO NEVADA', 'symbol': 'FNV', 'disSymbol': 'FNV', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '137.85', 'change': '5.64', 'changeRatio': '0.0427', 'pPrice': '138.50', 'pChange': '0.6500', 'pChRatio': '0.0047', 'marketValue': '26110405326.30', 'volume': '1047688', 'turnoverRate': '0.0055'},\r\n{'tickerId': 913254955, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Aon Plc', 'symbol': 'AON', 'disSymbol': 'AON', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '178.21', 'change': '5.54', 'changeRatio': '0.0321', 'pPrice': '178.21', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '41181209117.22', 'volume': '2026234', 'turnoverRate': '0.0088'},\r\n{'tickerId': 913324105, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Willis Towers', 'symbol': 'WLTW', 'disSymbol': 'WLTW', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '183.34', 'change': '5.05', 'changeRatio': '0.0283', 'pPrice': '183.34', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '23597461124.96', 'volume': '968943', 'turnoverRate': '0.0075'},\r\n{'tickerId': 913254759, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'TELADOC HEALTH', 'symbol': 'TDOC', 'disSymbol': 'TDOC', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '169.43', 'change': '4.84', 'changeRatio': '0.0294', 'pPrice': '168.88', 'pChange': '-0.5500', 'pChRatio': '-0.0032', 'marketValue': '12614616858.38', 'volume': '2628946', 'turnoverRate': '0.0353'},\r\n{'tickerId': 913255222, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Emergent Bio', 'symbol': 'EBS', 'disSymbol': 'EBS', 'disExchangeCode': 'NYSE', 'exchangeCode': 
'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '78.70', 'change': '4.75', 'changeRatio': '0.0642', 'pPrice': '78.40', 'pChange': '-0.3000', 'pChRatio': '-0.0038', 'marketValue': '4113368277.10', 'volume': '783804', 'turnoverRate': '0.0150'},\r\n{'tickerId': 913323443, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Pool', 'symbol': 'POOL', 'disSymbol': 'POOL', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '216.02', 'change': '4.36', 'changeRatio': '0.0206', 'pPrice': '216.02', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '8696077573.82', 'volume': '310837', 'turnoverRate': '0.0077'},\r\n{'tickerId': 913257075, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Masimo', 'symbol': 'MASI', 'disSymbol': 'MASI', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '218.00', 'change': '4.09', 'changeRatio': '0.0191', 'pPrice': '217.00', 'pChange': '-1.0000', 'pChRatio': '-0.0046', 'marketValue': '11797070000.00', 'volume': '542131', 'turnoverRate': '0.0100'},\r\n{'tickerId': 913253761, 'exchangeId': 10, 'type': 2, 'secType': [62], 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Pope Resources', 'symbol': 'POPE', 'disSymbol': 'POPE', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NAS', 'listStatus': 1, 'template': 'stock', 'status': 'D', 'close': '101.05', 'change': '3.95', 'changeRatio': '0.0407', 'pPrice': '99.90', 'pChange': '2.800', 'pChRatio': '0.0288', 'marketValue': '447370075.75', 'volume': '33138', 'turnoverRate': '0.0075'},\r\n{'tickerId': 913323560, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Seneca Foods', 'symbol': 'SENEB', 'disSymbol': 'SENEB', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'status': 'D', 'close': '40.04', 'change': '3.84', 'changeRatio': '0.1061', 'marketValue': '347950039.71', 'volume': '501'},\r\n{'tickerId': 913324274, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Resmed', 'symbol': 'RMD', 'disSymbol': 'RMD', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '159.07', 'change': '3.75', 'changeRatio': '0.0241', 'pPrice': '159.07', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '23004217759.29', 'volume': '1267075', 'turnoverRate': '0.0088'},\r\n{'tickerId': 913323736, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Vertex Pharms', 'symbol': 'VRTX', 'disSymbol': 'VRTX', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '254.90', 'change': '3.70', 'changeRatio': '0.0147', 'pPrice': '255.00', 'pChange': '0.1000', 'pChRatio': '0.0004', 'marketValue': '66062980780.10', 'volume': '1939843', 'turnoverRate': '0.0075'},\r\n{'tickerId': 913323767, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'MCCORMICK VTG', 'symbol': 'MKC V', 'disSymbol': 'MKC-V', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'status': 'D', 
'close': '159.99', 'change': '3.42', 'changeRatio': '0.0218', 'marketValue': '21262671000.00', 'volume': '432', 'turnoverRate': '0.0000'},\r\n{'tickerId': 950118595, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'ZOOM VIDEO', 'symbol': 'ZM', 'disSymbol': 'ZM', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '138.56', 'change': '3.39', 'changeRatio': '0.0251', 'pPrice': '138.99', 'pChange': '0.4300', 'pChRatio': '0.0031', 'marketValue': '38620532420.16', 'volume': '13786017', 'turnoverRate': '0.0495'},\r\n{'tickerId': 916040738, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'WHEATON PRECIOUS', 'symbol': 'WPM', 'disSymbol': 'WPM', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '41.10', 'change': '3.34', 'changeRatio': '0.0885', 'pPrice': '41.09', 'pChange': '-0.0100', 'pChRatio': '-0.0002', 'marketValue': '18404536146.30', 'volume': '5019137', 'turnoverRate': '0.0112'},\r\n{'tickerId': 913257174, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Royal Gold', 'symbol': 'RGLD', 'disSymbol': 'RGLD', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '125.86', 'change': '3.33', 'changeRatio': '0.0272', 'pPrice': '125.86', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '8253015011.08', 'volume': '853473', 'turnoverRate': '0.0130'},\r\n{'tickerId': 913254394, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Fortune Brand', 'symbol': 'FBHS', 'disSymbol': 'FBHS', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '51.50', 'change': '3.30', 'changeRatio': '0.0685', 'pPrice': '51.50', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '7194870278.50', 'volume': '3004021', 'turnoverRate': '0.0214'},\r\n{'tickerId': 913323312, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Liberty Global', 'symbol': 'LBTYK', 'disSymbol': 'LBTYK', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '21.49', 'change': '3.18', 'changeRatio': '0.1737', 'pPrice': '21.48', 'pChange': '-0.0100', 'pChRatio': '-0.0005', 'marketValue': '13594662302.41', 'volume': '19980228', 'turnoverRate': '0.0315'},\r\n{'tickerId': 913323882, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Preformed Line', 'symbol': 'PLPC', 'disSymbol': 'PLPC', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'status': 'D', 'close': '52.82', 'change': '3.14', 'changeRatio': '0.0632', 'pPrice': '52.10', 'pChange': '-0.7200', 'pChRatio': '-0.0136', 'marketValue': '264979981.20', 'volume': '9305', 'turnoverRate': '0.0018'},\r\n{'tickerId': 913323248, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Discovery', 'symbol': 'DISCB', 'disSymbol': 'DISCB', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'status': 'A', 'close': '57.95', 
'change': '23.63', 'changeRatio': '0.6884', 'pPrice': '54.26', 'pChange': '-3.6900', 'pChRatio': '-0.0637', 'marketValue': '29362894177.95', 'volume': '218305', 'turnoverRate': '0.0004'},\r\n{'tickerId': 913323930, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'MercadoLibre', 'symbol': 'MELI', 'disSymbol': 'MELI', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '605.52', 'change': '22.01', 'changeRatio': '0.0377', 'pPrice': '603.69', 'pChange': '-1.8300', 'pChRatio': '-0.0030', 'marketValue': '30226598045.28', 'volume': '699008', 'turnoverRate': '0.0140'},\r\n{'tickerId': 913257170, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Liberty Global', 'symbol': 'LBTYA', 'disSymbol': 'LBTYA', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '22.28', 'change': '2.86', 'changeRatio': '0.1473', 'pPrice': '22.29', 'pChange': '0.0100', 'pChRatio': '0.0004', 'marketValue': '14094419548.52', 'volume': '10534672', 'turnoverRate': '0.0167'},\r\n{'tickerId': 913303991, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Liberty Brodband', 'symbol': 'LBRDK', 'disSymbol': 'LBRDK', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '125.44', 'change': '2.76', 'changeRatio': '0.0225', 'pPrice': '125.44', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '22817900904.96', 'volume': '926177', 'turnoverRate': '0.0042'},\r\n{'tickerId': 913257082, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Helen of Troy', 'symbol': 'HELE', 'disSymbol': 'HELE', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '167.04', 'change': '2.76', 'changeRatio': '0.0168', 'pPrice': '167.04', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '4216707982.08', 'volume': '341465', 'turnoverRate': '0.0135'},\r\n{'tickerId': 913256458, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Forrester', 'symbol': 'FORR', 'disSymbol': 'FORR', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '33.88', 'change': '2.58', 'changeRatio': '0.0824', 'marketValue': '635419400.00', 'volume': '85115', 'turnoverRate': '0.0045'},\r\n{'tickerId': 950158952, 'exchangeId': 95, 'type': 2, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'LYRA THERAPEUTICS, INC.', 'symbol': 'LYRA', 'disSymbol': 'LYRA', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NMS', 'listStatus': 1, 'template': 'ipo', 'status': 'A', 'close': '18.56', 'change': '2.56', 'changeRatio': '0.1600', 'pPrice': '18.96', 'pChange': '0.4000', 'pChRatio': '0.0216', 'marketValue': '229705575.68', 'volume': '1738472', 'turnoverRate': '0.1405'},\r\n{'tickerId': 913257570, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Bio-Techne', 'symbol': 'TECH', 'disSymbol': 'TECH', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': 
'227.54', 'change': '2.54', 'changeRatio': '0.0113', 'pPrice': '227.54', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '8726538309.18', 'volume': '497006', 'turnoverRate': '0.0130'},\r\n{'tickerId': 913323246, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Bel Fuse', 'symbol': 'BELFB', 'disSymbol': 'BELFB', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '9.99', 'change': '2.53', 'changeRatio': '0.3391', 'pPrice': '9.75', 'pChange': '-0.2400', 'pChRatio': '-0.0240', 'marketValue': '122562454.86', 'volume': '177634', 'turnoverRate': '0.0145'},\r\n{'tickerId': 916040647, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Agnico Eagle', 'symbol': 'AEM', 'disSymbol': 'AEM', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '61.20', 'change': '2.52', 'changeRatio': '0.0429', 'pPrice': '61.10', 'pChange': '-0.1000', 'pChRatio': '-0.0016', 'marketValue': '14739911553.60', 'volume': '2820765', 'turnoverRate': '0.0117'},\r\n{'tickerId': 913303768, 'exchangeId': 12, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'CHASE CORP', 'symbol': 'CCF', 'disSymbol': 'CCF', 'disExchangeCode': 'AMEX', 'exchangeCode': 'ASE', 'listStatus': 1, 'template': 'stock', 'status': 'D', 'close': '96.71', 'change': '2.45', 'changeRatio': '0.0260', 'marketValue': '916799598.60', 'volume': '29229', 'turnoverRate': '0.0031'},\r\n{'tickerId': 913324557, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Allergan', 'symbol': 'AGN', 'disSymbol': 'AGN', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '189.74', 'change': '2.40', 'changeRatio': '0.0128', 'pPrice': '189.76', 'pChange': '0.0200', 'pChRatio': '0.0001', 'marketValue': '62424842326.10', 'volume': '5787032', 'turnoverRate': '0.0176'},\r\n{'tickerId': 913324566, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'West Pharm Svc', 'symbol': 'WST', 'disSymbol': 'WST', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '191.64', 'change': '2.38', 'changeRatio': '0.0126', 'pPrice': '191.64', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '14078267117.08', 'volume': '352460', 'turnoverRate': '0.0042'}\r\n]\r\n\r\ndb = su.Database(f\"overnight hold.db\" )\r\ndb['active'].insert_all(data)\r\n\r\n--------------- file ends ----------------------", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 610517472, "label": "sqlite3.OperationalError: too many SQL variables in insert_all when using rows with varying numbers of columns"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/730#issuecomment-623463200", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/730", "id": 623463200, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMzQ2MzIwMA==", "user": {"value": 27856297, "label": "dependabot-preview[bot]"}, "created_at": "2020-05-04T13:27:22Z", "updated_at": "2020-05-04T13:27:22Z", 
"author_association": "CONTRIBUTOR", "body": "Superseded by #753.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 604001627, "label": "Update pytest-asyncio requirement from ~=0.10.0 to >=0.10,<0.12"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/16#issuecomment-623845014", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/16", "id": 623845014, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMzg0NTAxNA==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-05T03:55:14Z", "updated_at": "2020-05-05T03:56:24Z", "author_association": "CONTRIBUTOR", "body": "I'm traveling w/o access to my Mac so can't help with any code right now. I suspected ZSCENEIDENTIFIER was a foreign key into one of these psi.sqlite tables. But looks like you're on to something connecting groups to assets. As for the UUID, I think there's two ints because each is 64-bits but UUIDs are 128-bits. Thus they need to be combined to get the 128 bit UUID. You might be able to use Apple's [NSUUID](https://developer.apple.com/documentation/foundation/nsuuid?language=objc), for example, by wrapping with pyObjC. Here's one [example](https://github.com/ronaldoussoren/pyobjc/blob/881c82a7ba90f193934b52b44143360c80dce5e5/pyobjc-framework-Cocoa/PyObjCTest/test_nsuuid.py) of using this in PyObjC's test suite. Interesting it's stored this way instead of a UUIDString as in Photos.sqlite. Perhaps it for faster indexing.\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 612287234, "label": "Import machine-learning detected labels (dog, llama etc) from Apple Photos"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/17#issuecomment-624284539", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/17", "id": 624284539, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNDI4NDUzOQ==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-05T20:20:05Z", "updated_at": "2020-05-05T20:20:05Z", "author_association": "CONTRIBUTOR", "body": "FYI, I've got an [issue](https://github.com/RhetTbull/osxphotos/issues/25) to make osxphotos cross-platform but it's low on my priority list. About 90% of the functionality could be done cross-platform but right now the MacOS specific stuff is embedded throughout and would take some work. Though I try to minimize it, there's sprinklings of ObjC & Applescript throughout osxphotos.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 612860531, "label": "Only install osxphotos if running on macOS"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626390317", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21", "id": 626390317, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjM5MDMxNw==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-10T21:11:24Z", "updated_at": "2020-05-10T21:50:58Z", "author_association": "CONTRIBUTOR", "body": "Ugh....Yeah, I think easiest is to catch the exception and return no place as you suggest. 
This particular bit of code involves un-archiving a serialized NSKeyedArchiver which uses an object table and it is certainly possible to create a circular reference that way. Because this is happening in the decode, the circular reference must be in the original data. Does Photos show valid reverse geolocation info for the photo in question? If so, Photos may be doing something beyond a simple decode of the binary plist. For now, I'll push a patch to catch the exception.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615474990, "label": "bpylist.archiver.CircularReference: archive has a cycle with uid(13)"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626395507", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21", "id": 626395507, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjM5NTUwNw==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-10T21:54:45Z", "updated_at": "2020-05-10T21:54:45Z", "author_association": "CONTRIBUTOR", "body": "@simonw does Photos show valid reverse geolocation info? Are you sure you're using [bpylist2](https://github.com/xa4a/bpylist2) and not bpylist? They're both unfortunately imported as \"bpylist\" so if you somehow got the wrong (original bpylist) version installed, it could be the issue. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615474990, "label": "bpylist.archiver.CircularReference: archive has a cycle with uid(13)"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626395641", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21", "id": 626395641, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjM5NTY0MQ==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-10T21:55:54Z", "updated_at": "2020-05-10T21:55:54Z", "author_association": "CONTRIBUTOR", "body": "Did removing old bpylist solve the original problem or do you still have a photo that throws circular reference?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615474990, "label": "bpylist.archiver.CircularReference: archive has a cycle with uid(13)"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626396379", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21", "id": 626396379, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjM5NjM3OQ==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-10T22:01:48Z", "updated_at": "2020-05-10T22:01:48Z", "author_association": "CONTRIBUTOR", "body": "Frustrates me when package authors create a \"drop in\" replacement with the same import name...this kind of thing has bitten me more than once! 
Would've been nicer I think for bpylist2 to do \"import bpylist2 as bpylist\"", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615474990, "label": "bpylist.archiver.CircularReference: archive has a cycle with uid(13)"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/22#issuecomment-626667235", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22", "id": 626667235, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjY2NzIzNQ==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-11T12:20:34Z", "updated_at": "2020-05-11T12:20:34Z", "author_association": "CONTRIBUTOR", "body": "@simonw FYI, osxphotos includes a built in ExifTool class that uses [exiftool](https://exiftool.org/) to read and write exif data. It's not exposed yet in the docs because I really only use it right now in the osphotos command line interface to write tags when exporting. In v0.28.16 (just pushed) I added an ExifTool.as_dict() method which will give you a dict with all the exif tags in a file. For example:\r\n\r\n```python\r\nimport osxphotos\r\nphotos = osxphotos.PhotosDB().photos()\r\nexiftool = osxphotos.exiftool.ExifTool(photos[0].path)\r\nexifdata = exiftool.as_dict()\r\ntags = exifdata[\"IPTC:Keywords\"]\r\n```\r\n\r\nNot as elegant perhaps as a python only implementation because ExifTool has to make subprocess calls to an external tool but exiftool is by far the best tool available for reading and writing EXIF data and it does support HEIC.\r\n\r\nAs for implementation, ExifTool uses a singleton pattern so the first time you instantiate it, it spawns an IPC to exiftool but then keeps it open and uses the same process for any subsequent calls (even on different files). ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615626118, "label": "Try out ExifReader"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/22#issuecomment-627007458", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22", "id": 627007458, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNzAwNzQ1OA==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-11T22:51:52Z", "updated_at": "2020-05-11T22:52:26Z", "author_association": "CONTRIBUTOR", "body": "I'm not familiar with `ExifReader`. I wrote my own wrapper around `exiftool` because I wanted a simple way to write EXIF data when exporting photos (e.g. writing out to PersonInImage and keywords to IPTC:Keywords) and the existing python packages like [pyexiftool](https://github.com/smarnach/pyexiftool) didn't do quite what I wanted. If all you're after is the camera and shot info, that's available in `ZEXTENDEDATTRIBUTES` table. I've got an open issue [#11](https://github.com/RhetTbull/osxphotos/issues/11) to add this to osxphotos but it hasn't bubbled to the top of my backlog yet. \r\n\r\nosxphotos will give you the location info: `PhotoInfo.location` returns a tuple of (lat, lon) though this info is in ZEXTENDEDATTRIBUTES too (though it might not be correct as I believe Photos creates this table at import and the user might have changed the location of a photo, e.g. 
if camera didn't have GPS).\r\n\r\n```sql\r\nCREATE TABLE ZEXTENDEDATTRIBUTES (\r\n Z_PK INTEGER PRIMARY KEY, Z_ENT INTEGER, \r\n Z_OPT INTEGER, ZFLASHFIRED INTEGER, \r\n ZISO INTEGER, ZMETERINGMODE INTEGER, \r\n ZSAMPLERATE INTEGER, ZTRACKFORMAT INTEGER, \r\n ZWHITEBALANCE INTEGER, ZASSET INTEGER, \r\n ZAPERTURE FLOAT, ZBITRATE FLOAT, ZDURATION FLOAT, \r\n ZEXPOSUREBIAS FLOAT, ZFOCALLENGTH FLOAT, \r\n ZFPS FLOAT, ZLATITUDE FLOAT, ZLONGITUDE FLOAT, \r\n ZSHUTTERSPEED FLOAT, ZCAMERAMAKE VARCHAR, \r\n ZCAMERAMODEL VARCHAR, ZCODEC VARCHAR, \r\n ZLENSMODEL VARCHAR\r\n);\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615626118, "label": "Try out ExifReader"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/22#issuecomment-628405453", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22", "id": 628405453, "node_id": "MDEyOklzc3VlQ29tbWVudDYyODQwNTQ1Mw==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-14T05:59:53Z", "updated_at": "2020-05-14T05:59:53Z", "author_association": "CONTRIBUTOR", "body": "I've added support for the above exif data to [v0.28.17](https://github.com/RhetTbull/osxphotos/releases/tag/v0.28.17) of osxphotos. `PhotoInfo.exif_info` will return an `ExifInfo` [dataclass](https://docs.python.org/3/library/dataclasses.html) object with the following properties:\r\n\r\n```python\r\n flash_fired: bool\r\n iso: int\r\n metering_mode: int\r\n sample_rate: int\r\n track_format: int\r\n white_balance: int\r\n aperture: float\r\n bit_rate: float\r\n duration: float\r\n exposure_bias: float\r\n focal_length: float\r\n fps: float\r\n latitude: float\r\n longitude: float\r\n shutter_speed: float\r\n camera_make: str\r\n camera_model: str\r\n codec: str\r\n lens_model: str\r\n```\r\n\r\nIt's not all the EXIF data available in most files but is the data Photos deems important to save. Of course, you can get all the exif_data\r\n\r\nNote: this only works in Photos 5. As best as I can tell, EXIF data is not stored in the database for earlier versions. 
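\r\n\r\nFor example, a minimal sketch of reading it back (assuming v0.28.17+ and a library with at least one photo):\r\n\r\n```python\r\nimport osxphotos\r\n\r\nphotosdb = osxphotos.PhotosDB()\r\nphoto = photosdb.photos()[0]\r\n\r\ninfo = photo.exif_info  # ExifInfo dataclass; may be unavailable on pre-Photos 5 libraries (see note above)\r\nif info is not None:\r\n    print(info.camera_make, info.camera_model, info.lens_model)\r\n    print(\"ISO\", info.iso, \"aperture\", info.aperture, \"focal length\", info.focal_length)\r\n```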
", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615626118, "label": "Try out ExifReader"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/767#issuecomment-632555800", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/767", "id": 632555800, "node_id": "MDEyOklzc3VlQ29tbWVudDYzMjU1NTgwMA==", "user": {"value": 2657547, "label": "rixx"}, "created_at": "2020-05-22T08:00:23Z", "updated_at": "2020-05-22T08:00:23Z", "author_association": "CONTRIBUTOR", "body": "That would be perfect!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 620969465, "label": "Allow to specify a URL fragment for canned queries"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/394#issuecomment-641908346", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/394", "id": 641908346, "node_id": "MDEyOklzc3VlQ29tbWVudDY0MTkwODM0Ng==", "user": {"value": 127565, "label": "wragge"}, "created_at": "2020-06-10T10:22:54Z", "updated_at": "2020-06-10T10:22:54Z", "author_association": "CONTRIBUTOR", "body": "There's a working demo here: https://github.com/wragge/datasette-test\r\n\r\nAnd if you want something that's more than just proof-of-concept, here's a notebook which does some harvesting from web archives and then displays the results using Datasette: https://nbviewer.jupyter.org/github/GLAM-Workbench/web-archives/blob/master/explore_presentations.ipynb", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 396212021, "label": "base_url configuration setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/691#issuecomment-643709037", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/691", "id": 643709037, "node_id": "MDEyOklzc3VlQ29tbWVudDY0MzcwOTAzNw==", "user": {"value": 49260, "label": "amjith"}, "created_at": "2020-06-14T02:35:16Z", "updated_at": "2020-06-14T02:35:16Z", "author_association": "CONTRIBUTOR", "body": "The server should reload in the `config_dir` mode. \r\n\r\nRef: #848", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 574021194, "label": "--reload sould reload server if code in --plugins-dir changes"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/851#issuecomment-645293374", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/851", "id": 645293374, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NTI5MzM3NA==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-17T10:32:02Z", "updated_at": "2020-06-17T10:32:28Z", "author_association": "CONTRIBUTOR", "body": "Welp, I'm an idiot.\r\n\r\nTurns out I had a sneaky comma `,` after `sql` key:\r\n```\r\n... (:name, :url),\r\n```\r\nwhich tells sqlite to expect another `values(...)` list.\r\n\r\nCorrecting the SQL solved the issue. 
\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 640330278, "label": "Having trouble getting writable canned queries to work"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-647135713", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 647135713, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NzEzNTcxMw==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-21T14:30:02Z", "updated_at": "2020-06-21T14:30:02Z", "author_association": "CONTRIBUTOR", "body": "Oops, the same method is called from both index and database pages. But removing select count queries speed up the page load quite a bit.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-647194131", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 647194131, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NzE5NDEzMQ==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-21T23:15:54Z", "updated_at": "2020-06-21T23:26:09Z", "author_association": "CONTRIBUTOR", "body": "I'm not sure if table counts are to blame. There shouldn't be a ~3 orders of magnitude difference.\r\n\r\n```fish\r\nuser@klein /a/w/scrapyard (master)> set sql \"select count(*) from table_1; select count(*) from table_2; select count(*) from table_3;\"\r\nuser@klein /a/w/scrapyard (master)> time sqlite3 scrapyard.db \"$sql\"\r\n187489\r\n46492\r\n2229\r\n\r\n________________________________________________________\r\nExecuted in 25.57 millis fish external\r\n usr time 3.55 millis 0.00 micros 3.55 millis\r\n sys time 22.42 millis 1123.00 micros 21.30 millis\r\n```\r\n\r\nbut not letting datasette count the tables definitely helps.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-647922203", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 647922203, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NzkyMjIwMw==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-23T05:44:58Z", "updated_at": "2021-01-05T08:22:43Z", "author_association": "CONTRIBUTOR", "body": "I'm seeing the problem on database page. Index page and table page runs quite fast.\r\n\r\n- Tables have <10 columns (`id`, `url`, `title`, `body_html`, `date`, `author`, `meta` (for keeping unstructured json)). I've added index on `date` columns (using `sqlite-utils`) in addition to the index present on `id` columns. 
\r\n- All tables have FTS enabled on `text` and `varchar` columns (`title`, `body_html` etc) to speed up searching.\r\n- There are couple of tables related with foreign keys (think a thread in a forum and posts in that thread, related with `thread_id`)\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-647923666", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 647923666, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NzkyMzY2Ng==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-23T05:49:31Z", "updated_at": "2020-06-23T05:49:31Z", "author_association": "CONTRIBUTOR", "body": "I think I should mention that having FTS on all tables mean I have 5 visible, 25 hidden (FTS) tables displayed on database page.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-647925594", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 647925594, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NzkyNTU5NA==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-23T05:55:21Z", "updated_at": "2020-06-23T06:28:29Z", "author_association": "CONTRIBUTOR", "body": "Hmm, not seeing the problem now. \r\nI've removed the commented out sections in `database.py` and restarted the process. Database page now loads in <250ms.\r\n\r\nI have couple of workers that check some pages regularly and scrape new content and save to the DB. Could it be that datasette tries to recount tables every time database size changes? Normally it keeps a count cache, but as DB gets updated so often (new content every 5 min or so) it's practically recounting every time I go to the database page?\r\n\r\nEDIT: \r\nIt turns out it doesn't hold cache with mutable databases.\r\n\r\nI'll update the issue with more findings and a better way to reproduce the problem if I encounter it again.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-647935300", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 647935300, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NzkzNTMwMA==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-23T06:23:01Z", "updated_at": "2020-06-23T06:23:01Z", "author_association": "CONTRIBUTOR", "body": "> You said \"200k+, 50+ rows in a couple of tables\" - does that mean 50+ columns? 
I'll try with larger numbers of columns and see what difference that makes.\r\n\r\nAh that was a typo, I meant 50k.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-647936117", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 647936117, "node_id": "MDEyOklzc3VlQ29tbWVudDY0NzkzNjExNw==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-23T06:25:17Z", "updated_at": "2020-06-23T06:25:17Z", "author_association": "CONTRIBUTOR", "body": "> \r\n> \r\n> ```\r\n> sqlite-generate many-cols.db --tables 2 --rows 200000 --columns 50\r\n> ```\r\n> \r\n> Looks like that will take 35 minutes to run (it's not a particularly fast tool).\r\n\r\nTry chunking write operations into batches every 1000 records or so.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-648232645", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 648232645, "node_id": "MDEyOklzc3VlQ29tbWVudDY0ODIzMjY0NQ==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-23T15:19:53Z", "updated_at": "2020-06-23T15:19:53Z", "author_association": "CONTRIBUTOR", "body": "The issue seems to appear sporadically, like when I return to database page after a while, during which some records have been added to the database.\r\n\r\nI've just visited database, page first visit took ~10s, consecutive visits took 0.3s.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-648669523", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 648669523, "node_id": "MDEyOklzc3VlQ29tbWVudDY0ODY2OTUyMw==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-06-24T08:13:23Z", "updated_at": "2020-06-24T10:30:36Z", "author_association": "CONTRIBUTOR", "body": "I tried setting `cache_size_kb=0` then `cache_size_kb=100000`, still getting this behavior. I even changed `Database::table_counts` and lowered time limit to 1\r\n\r\n```py\r\ntable_count = (\r\n await self.execute(\r\n \"select count(*) from [{}]\".format(table),\r\n custom_time_limit=1,\r\n )\r\n).rows[0][0]\r\ncounts[table] = table_count\r\n```\r\n\r\nI feel like 10 seconds is a magic number, like a processing timeout and datasette gives up and returns the page. \r\nIndex page loads instantly, table page, query page, as well. 
But when I return to the database page after some time, it loads in 10s.\r\n\r\nEDIT:\r\n\r\nIt's always like 10 + 0.3s: a 10s wait until the timeout, then ~300ms to render the page", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/859#issuecomment-652160909", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/859", "id": 652160909, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MjE2MDkwOQ==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-07-01T03:09:32Z", "updated_at": "2020-07-01T03:10:21Z", "author_association": "CONTRIBUTOR", "body": "I've just realized Datasette tries to count hidden tables too. There are 5 visible tables and 25 hidden tables, which I hadn't realized earlier, so I hadn't considered their effect. I've turned off counting for hidden tables to see if it has any effect.\r\n\r\nWhat's the point of counting FTS tables?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 642572841, "label": "Database page loads too slowly with many large tables (due to table counts)"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/877#issuecomment-652166115", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/877", "id": 652166115, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MjE2NjExNQ==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-07-01T03:28:07Z", "updated_at": "2020-07-01T03:28:07Z", "author_association": "CONTRIBUTOR", "body": "Does this mean custom routes get to expose endpoints accepting POST requests? 
I've tried earlier to add some POST endpoints, but requests were being rejected by Datasette due to CSRF", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 648421105, "label": "Consider dropping explicit CSRF protection entirely?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/877#issuecomment-652255960", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/877", "id": 652255960, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MjI1NTk2MA==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-07-01T07:52:25Z", "updated_at": "2020-07-01T08:10:00Z", "author_association": "CONTRIBUTOR", "body": "I am calling the API from another origin, so injecting CSRF token into templates wouldn't work.\r\n\r\nEDIT:\r\n\r\nI'll try the new version, it sounds promising", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 648421105, "label": "Consider dropping explicit CSRF protection entirely?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/877#issuecomment-652261382", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/877", "id": 652261382, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MjI2MTM4Mg==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-07-01T08:03:17Z", "updated_at": "2020-07-01T08:03:23Z", "author_association": "CONTRIBUTOR", "body": "Bearer tokens sound interesting. Where do tokens come from? An auth provider of my choosing? How do they get verified?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 648421105, "label": "Consider dropping explicit CSRF protection entirely?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/883#issuecomment-652297139", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/883", "id": 652297139, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MjI5NzEzOQ==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-07-01T09:11:29Z", "updated_at": "2020-07-01T09:11:29Z", "author_association": "CONTRIBUTOR", "body": "Turns out we should include hidden tables in the result dict, or we're breaking tests. 
I've committed a refactor https://github.com/simonw/datasette/pull/883/commits/4f06e1bf6fbe4b73be770b87f610bf7c0e6e3ea7", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 648749062, "label": "Skip counting hidden tables"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/883#issuecomment-652394742", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/883", "id": 652394742, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MjM5NDc0Mg==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-07-01T12:41:13Z", "updated_at": "2020-07-01T12:41:13Z", "author_association": "CONTRIBUTOR", "body": "Well, tests need to be updated.\r\n \r\nI need to get tests working on Windows.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 648749062, "label": "Skip counting hidden tables"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/889#issuecomment-652990131", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/889", "id": 652990131, "node_id": "MDEyOklzc3VlQ29tbWVudDY1Mjk5MDEzMQ==", "user": {"value": 49260, "label": "amjith"}, "created_at": "2020-07-02T12:58:11Z", "updated_at": "2020-07-02T13:00:18Z", "author_association": "CONTRIBUTOR", "body": "FWIW, this error does NOT happen in datasette 0.45a4.\r\n\r\nIt only started in 0.45a5.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 649907676, "label": "asgi_wrapper plugin hook is crashing at startup"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/889#issuecomment-653002499", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/889", "id": 653002499, "node_id": "MDEyOklzc3VlQ29tbWVudDY1MzAwMjQ5OQ==", "user": {"value": 49260, "label": "amjith"}, "created_at": "2020-07-02T13:22:13Z", "updated_at": "2020-07-02T13:22:13Z", "author_association": "CONTRIBUTOR", "body": "I was able to narrow this down to the fact that the lifespan protocol is turned on. \r\n\r\nI see the workaround you've used here: https://github.com/simonw/datasette-debug-asgi/commit/72d568d32a3159c763ce908c0b269736935c6987\r\n\r\nIf so, maybe it's time to update some of the asgi_wrapper [plugins](https://datasette.readthedocs.io/en/stable/plugin_hooks.html#asgi-wrapper-datasette). ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 649907676, "label": "asgi_wrapper plugin hook is crashing at startup"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655018966", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/118", "id": 655018966, "node_id": "MDEyOklzc3VlQ29tbWVudDY1NTAxODk2Ng==", "user": {"value": 79913, "label": "tsibley"}, "created_at": "2020-07-07T17:41:06Z", "updated_at": "2020-07-07T17:41:06Z", "author_association": "CONTRIBUTOR", "body": "Hmm, while tests pass, this may not work as intended on larger datasets. 
Looking into it.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 651844316, "label": "Add insert --truncate option"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655052451", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/118", "id": 655052451, "node_id": "MDEyOklzc3VlQ29tbWVudDY1NTA1MjQ1MQ==", "user": {"value": 79913, "label": "tsibley"}, "created_at": "2020-07-07T18:45:23Z", "updated_at": "2020-07-07T18:45:23Z", "author_association": "CONTRIBUTOR", "body": "Ah, I see the problem. The truncate is inside a loop I didn't realize was there.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 651844316, "label": "Add insert --truncate option"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655239728", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/118", "id": 655239728, "node_id": "MDEyOklzc3VlQ29tbWVudDY1NTIzOTcyOA==", "user": {"value": 79913, "label": "tsibley"}, "created_at": "2020-07-08T02:16:42Z", "updated_at": "2020-07-08T02:16:42Z", "author_association": "CONTRIBUTOR", "body": "I fixed my original oops by moving the `DELETE FROM $table` out of the chunking loop and repushed. I think this change can be considered in isolation from issues around transactions, which I discuss next.\r\n\r\nI wanted to make the DELETE + INSERT happen all in the same transaction so it was robust, but that was more complicated than I expected. The transaction handling in the Database/Table classes isn't systematic, and this poses big hurdles to making `Table.insert_all` (or other operations) consistent and robust in the face of errors.\r\n\r\nFor example, I wanted to do this (whitespace ignored in diff, so indentation change not highlighted):\r\n\r\n```diff\r\ndiff --git a/sqlite_utils/db.py b/sqlite_utils/db.py\r\nindex d6b9ecf..4107ceb 100644\r\n--- a/sqlite_utils/db.py\r\n+++ b/sqlite_utils/db.py\r\n@@ -1028,6 +1028,11 @@ class Table(Queryable):\r\n batch_size = max(1, min(batch_size, SQLITE_MAX_VARS // num_columns))\r\n self.last_rowid = None\r\n self.last_pk = None\r\n+ with self.db.conn:\r\n+ # Explicit BEGIN is necessary because Python's sqlite3 doesn't\r\n+ # issue implicit BEGINs for DDL, only DML. We mix DDL and DML\r\n+ # below and might execute DDL first, e.g. 
for table creation.\r\n+ self.db.conn.execute(\"BEGIN\")\r\n if truncate and self.exists():\r\n self.db.conn.execute(\"DELETE FROM [{}];\".format(self.name))\r\n for chunk in chunks(itertools.chain([first_record], records), batch_size):\r\n@@ -1038,7 +1043,11 @@ class Table(Queryable):\r\n # Use the first batch to derive the table names\r\n column_types = suggest_column_types(chunk)\r\n column_types.update(columns or {})\r\n- self.create(\r\n+ # Not self.create() because that is wrapped in its own\r\n+ # transaction and Python's sqlite3 doesn't support\r\n+ # nested transactions.\r\n+ self.db.create_table(\r\n+ self.name,\r\n column_types,\r\n pk,\r\n foreign_keys,\r\n@@ -1139,7 +1148,6 @@ class Table(Queryable):\r\n flat_values = list(itertools.chain(*values))\r\n queries_and_params = [(sql, flat_values)]\r\n \r\n- with self.db.conn:\r\n for query, params in queries_and_params:\r\n try:\r\n result = self.db.conn.execute(query, params)\r\n```\r\n\r\nbut that fails in tests because other methods call `insert/upsert/insert_all/upsert_all` in the middle of their transactions, so the BEGIN statement throws an error (no nested transactions allowed).\r\n\r\nStepping back, it would be nice to make the transaction handling systematic and predictable. One way to do this is to make the `sqlite_utils/db.py` code generally not begin or commit any transactions, and require the caller to do that instead. This lets the caller mix and match the Python API calls into transactions as appropriate (which is impossible for the API methods themselves to fully determine). Then, make `sqlite_utils/cli.py` begin and commit a transaction in each `@cli.command` function, making each command robust and consistent in the face of errors. The big change here, and why I didn't just submit a patch, is that it dramatically changes the Python API to _require_ callers to begin a transaction rather than just immediately calling methods.\r\n\r\nThere is also the caveat that for each transaction, an explicit `BEGIN` is also necessary so that DDL as well as DML (as well as `SELECT`s) are consistent and rolled back on error. There are several bugs.python.org discussions around this particular problem of DDL and some plans to make it better and consistent with DBAPI2, eventually. In the meantime, the sqlite-utils Database class could be a context manager which supports the incantations necessary to do proper transactions. This would still be a Python API change for callers but wouldn't expose them to the weirdness of the sqlite3's default transaction handling.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 651844316, "label": "Add insert --truncate option"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655643078", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/118", "id": 655643078, "node_id": "MDEyOklzc3VlQ29tbWVudDY1NTY0MzA3OA==", "user": {"value": 79913, "label": "tsibley"}, "created_at": "2020-07-08T17:05:59Z", "updated_at": "2020-07-08T17:05:59Z", "author_association": "CONTRIBUTOR", "body": "> The only thing missing from this PR is updates to the documentation.\r\n\r\nAh, yes, thanks for this reminder! 
I've repushed with doc bits added.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 651844316, "label": "Add insert --truncate option"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/121#issuecomment-655652679", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/121", "id": 655652679, "node_id": "MDEyOklzc3VlQ29tbWVudDY1NTY1MjY3OQ==", "user": {"value": 79913, "label": "tsibley"}, "created_at": "2020-07-08T17:24:46Z", "updated_at": "2020-07-08T17:24:46Z", "author_association": "CONTRIBUTOR", "body": "Better transaction handling would be really great. Some of my thoughts on implementing better transaction discipline are in https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655239728.\r\n\r\nMy preferences:\r\n\r\n- Each CLI command should operate in a single transaction so that either the whole thing succeeds or the whole thing is rolled back. This avoids partially completed operations when an error occurs part way through processing. Partially completed operations are typically much harder to recovery from gracefully and may cause inconsistent data states.\r\n\r\n- The Python API should be transaction-agnostic and rely on the caller to coordinate transactions. Only the caller knows how individual insert, create, update, etc operations/methods should be bundled conceptually into transactions. When the caller is the CLI, for example, that bundling would be at the CLI command-level. Other callers might want to break up operations into multiple transactions. Transactions are usually most useful when controlled at the application-level (like logging configuration) instead of the library level. The library needs to provide an API that's conducive to transaction use, though.\r\n\r\n- The Python API should provide a context manager to provide consistent transactions handling with more useful defaults than Python's `sqlite3` module. The latter issues implicit `BEGIN` statements by default for most DML (`INSERT`, `UPDATE`, `DELETE`, \u2026 but not `SELECT`, I believe), but **not** DDL (`CREATE TABLE`, `DROP TABLE`, `CREATE VIEW`, \u2026). Notably, the `sqlite3` module doesn't issue the implicit `BEGIN` until the first DML statement. It _does not_ issue it when entering the `with conn` block, like other DBAPI2-compatible modules do. The `with conn` block for `sqlite3` only arranges to commit or rollback an existing transaction when exiting. Including DDL and `SELECT`s in transactions is important for operation consistency, though. There are several existing bugs.python.org tickets about this and future changes are in the works, but sqlite-utils can provide its own API sooner. sqlite-utils's `Database` class could itself be a context manager (built on the `sqlite3` connection context manager) which additionally issues an explicit `BEGIN` when entering. 
This would then let Python API callers do something like:\r\n\r\n```python\r\ndb = sqlite_utils.Database(path)\r\n\r\nwith db: # \u2190 BEGIN issued here by Database.__enter__\r\n db.insert(\u2026)\r\n db.create_view(\u2026)\r\n# \u2190 COMMIT/ROLLBACK issue here by sqlite3.connection.__exit__\r\n```", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 652961907, "label": "Improved (and better documented) support for transactions"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/121#issuecomment-655898722", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/121", "id": 655898722, "node_id": "MDEyOklzc3VlQ29tbWVudDY1NTg5ODcyMg==", "user": {"value": 79913, "label": "tsibley"}, "created_at": "2020-07-09T04:53:08Z", "updated_at": "2020-07-09T04:53:08Z", "author_association": "CONTRIBUTOR", "body": "Yep, I agree that makes more sense for backwards compat and more casual use cases. I think it should be possible for the Database/Queryable methods to DTRT based on seeing if it's within a context-manager-managed transaction.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 652961907, "label": "Improved (and better documented) support for transactions"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/456#issuecomment-661524006", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/456", "id": 661524006, "node_id": "MDEyOklzc3VlQ29tbWVudDY2MTUyNDAwNg==", "user": {"value": 32467826, "label": "abeyerpath"}, "created_at": "2020-07-21T01:15:07Z", "updated_at": "2020-07-21T01:15:07Z", "author_association": "CONTRIBUTOR", "body": "Bumping this, as the previous fix is passing the wrong type, and not actually addressing the issue...\r\n\r\nThe `exclude` argument needs an iterable of packages instead of a single string (but since `str` is iterable, it's currently excluding packages `t`, `e`, and `s`.)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 442327592, "label": "Installing installs the tests package"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/139#issuecomment-682182178", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/139", "id": 682182178, "node_id": "MDEyOklzc3VlQ29tbWVudDY4MjE4MjE3OA==", "user": {"value": 96218, "label": "simonwiles"}, "created_at": "2020-08-27T20:46:18Z", "updated_at": "2020-08-27T20:46:18Z", "author_association": "CONTRIBUTOR", "body": "> I tried changing the batch_size argument to the total number of records, but it seems only to effect the number of rows that are committed at a time, and has no influence on this problem.\r\n\r\nSo the reason for this is that the `batch_size` for import is limited (of necessity) here: https://github.com/simonw/sqlite-utils/blob/main/sqlite_utils/db.py#L1048\r\n\r\nWith regard to the issue of ignoring columns, however, I made a fork and hacked a temporary fix that looks like this:\r\nhttps://github.com/simonwiles/sqlite-utils/commit/3901f43c6a712a1a3efc340b5b8d8fd0cbe8ee63\r\n\r\nIt doesn't seem to affect performance enormously (but I've not tested it thoroughly), and it now does what I need (and would 
expect, tbh), but it now fails the test here:\r\nhttps://github.com/simonw/sqlite-utils/blob/main/tests/test_create.py#L710-L716\r\n\r\nThe existence of this test suggests that `insert_all()` is behaving as intended, of course. It seems odd to me that this would be a desirable default behaviour (let alone the only behaviour), and its not very prominently flagged-up, either.\r\n\r\n@simonw is this something you'd be willing to look at a PR for? I assume you wouldn't want to change the default behaviour at this point, but perhaps an option could be provided, or at least a bit more of a warning in the docs. Are there oversights in the implementation that I've made?\r\n\r\nWould be grateful for your thoughts! Thanks!\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 686978131, "label": "insert_all(..., alter=True) should work for new columns introduced after the first 100 records"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/139#issuecomment-682815377", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/139", "id": 682815377, "node_id": "MDEyOklzc3VlQ29tbWVudDY4MjgxNTM3Nw==", "user": {"value": 96218, "label": "simonwiles"}, "created_at": "2020-08-28T16:14:58Z", "updated_at": "2020-08-28T16:14:58Z", "author_association": "CONTRIBUTOR", "body": "Thanks! And yeah, I had updating the docs on my list too :) Will try to get to it this afternoon (budgeting time is fraught with uncertainty at the moment!).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 686978131, "label": "insert_all(..., alter=True) should work for new columns introduced after the first 100 records"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/145#issuecomment-683382252", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/145", "id": 683382252, "node_id": "MDEyOklzc3VlQ29tbWVudDY4MzM4MjI1Mg==", "user": {"value": 96218, "label": "simonwiles"}, "created_at": "2020-08-30T06:27:25Z", "updated_at": "2020-08-30T06:27:52Z", "author_association": "CONTRIBUTOR", "body": "Note: had to adjust the test above because trying to exhaust a `SQLITE_MAX_VARIABLE_NUMBER` of 250000 in 99 records requires 2526 columns, and trips the ` \"Rows can have a maximum of {} columns\".format(SQLITE_MAX_VARS)` check even before it trips the default `SQLITE_MAX_COLUMN` value (2000).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 688659182, "label": "Bug when first record contains fewer columns than subsequent records"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/952#issuecomment-686061028", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/952", "id": 686061028, "node_id": "MDEyOklzc3VlQ29tbWVudDY4NjA2MTAyOA==", "user": {"value": 27856297, "label": "dependabot-preview[bot]"}, "created_at": "2020-09-02T22:26:14Z", "updated_at": "2020-09-02T22:26:14Z", "author_association": "CONTRIBUTOR", "body": "Looks like black is up-to-date now, so this is no longer needed.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", 
"issue": {"value": 687245650, "label": "Update black requirement from ~=19.10b0 to >=19.10,<21.0"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/146#issuecomment-688479163", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/146", "id": 688479163, "node_id": "MDEyOklzc3VlQ29tbWVudDY4ODQ3OTE2Mw==", "user": {"value": 96218, "label": "simonwiles"}, "created_at": "2020-09-07T19:10:33Z", "updated_at": "2020-09-07T19:11:57Z", "author_association": "CONTRIBUTOR", "body": "@simonw -- I've gone ahead updated the documentation to reflect the changes introduced in this PR. IMO it's ready to merge now.\r\n\r\nIn writing the documentation changes, I begin to wonder about the value and role of `batch_size` at all, tbh. May I assume it was originally intended to prevent using the entire row set to determine columns and column types, and that this was a performance consideration? If so, this PR entirely undermines its purpose. I've been passing in excess of 500,000 rows at a time to `insert_all()` with these changes and although I'm sure the performance difference is measurable it's not really noticeable; given #145, I don't know that any performance advantages outweigh the problems doing it this way removes. What do you think about just dropping the argument and defaulting to the maximum `batch_size` permissible given `SQLITE_MAX_VARS`? Are there other reasons one might want to restrict `batch_size` that I've overlooked? I could open a new issue to discuss/implement this.\r\n\r\nOf course the documentation will need to change again too if/when something is done about #147.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 688668680, "label": "Handle case where subsequent records (after first batch) include extra columns"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/146#issuecomment-688481317", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/146", "id": 688481317, "node_id": "MDEyOklzc3VlQ29tbWVudDY4ODQ4MTMxNw==", "user": {"value": 96218, "label": "simonwiles"}, "created_at": "2020-09-07T19:18:55Z", "updated_at": "2020-09-07T19:18:55Z", "author_association": "CONTRIBUTOR", "body": "Just force-pushed to update d042f9c with more formatting changes to satisfy `black==20.8b1` and pass the GitHub Actions \"Test\" workflow.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 688668680, "label": "Handle case where subsequent records (after first batch) include extra columns"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/146#issuecomment-688573964", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/146", "id": 688573964, "node_id": "MDEyOklzc3VlQ29tbWVudDY4ODU3Mzk2NA==", "user": {"value": 96218, "label": "simonwiles"}, "created_at": "2020-09-08T01:55:07Z", "updated_at": "2020-09-08T01:55:07Z", "author_association": "CONTRIBUTOR", "body": "Okay, I've rewritten this PR to preserve the batching behaviour but still fix #145, and rebased the branch to account for the `db.execute()` api change. 
It's not terribly sophisticated -- if it attempts to insert a batch which has too many variables, the exception is caught, the batch is split in two and each half is inserted separately, and then it carries on as before with the same `batch_size`. In the edge case where this gets triggered, subsequent batches will all be inserted in two groups too if they continue to have the same number of columns (which is presumably reasonably likely). Do you reckon this is acceptable when set against the awkwardness of recalculating the `batch_size` on the fly?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 688668680, "label": "Handle case where subsequent records (after first batch) include extra columns"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/twitter-to-sqlite/issues/50#issuecomment-690860653", "issue_url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/50", "id": 690860653, "node_id": "MDEyOklzc3VlQ29tbWVudDY5MDg2MDY1Mw==", "user": {"value": 370930, "label": "mikepqr"}, "created_at": "2020-09-11T04:04:08Z", "updated_at": "2020-09-11T04:04:08Z", "author_association": "CONTRIBUTOR", "body": "There's probably a nicer way of doing (hence this is a comment rather than a PR), but this appears to fix it:\r\n```diff\r\n--- a/twitter_to_sqlite/utils.py\r\n+++ b/twitter_to_sqlite/utils.py\r\n@@ -181,6 +181,7 @@ def fetch_timeline(\r\n args[\"tweet_mode\"] = \"extended\"\r\n min_seen_id = None\r\n num_rate_limit_errors = 0\r\n+ seen_count = 0\r\n while True:\r\n if min_seen_id is not None:\r\n args[\"max_id\"] = min_seen_id - 1\r\n@@ -208,6 +209,7 @@ def fetch_timeline(\r\n yield tweet\r\n min_seen_id = min(t[\"id\"] for t in tweets)\r\n max_seen_id = max(t[\"id\"] for t in tweets)\r\n+ seen_count += len(tweets)\r\n if last_since_id is not None:\r\n max_seen_id = max((last_since_id, max_seen_id))\r\n last_since_id = max_seen_id\r\n@@ -217,7 +219,9 @@ def fetch_timeline(\r\n replace=True,\r\n )\r\n if stop_after is not None:\r\n- break\r\n+ if seen_count >= stop_after:\r\n+ break\r\n+ args[\"count\"] = min(args[\"count\"], stop_after - seen_count)\r\n time.sleep(sleep)\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 698791218, "label": "favorites --stop_after=N stops after min(N, 200)"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/pull/48#issuecomment-704503719", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/48", "id": 704503719, "node_id": "MDEyOklzc3VlQ29tbWVudDcwNDUwMzcxOQ==", "user": {"value": 755825, "label": "adamjonas"}, "created_at": "2020-10-06T19:26:59Z", "updated_at": "2020-10-06T19:26:59Z", "author_association": "CONTRIBUTOR", "body": "ref #46 ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 681228542, "label": "Add pull requests"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/swarm-to-sqlite/pull/10#issuecomment-707326192", "issue_url": "https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/10", "id": 707326192, "node_id": "MDEyOklzc3VlQ29tbWVudDcwNzMyNjE5Mg==", "user": {"value": 29426418, "label": "mattiaborsoi"}, "created_at": "2020-10-12T20:20:02Z", "updated_at": 
"2020-10-12T20:20:02Z", "author_association": "CONTRIBUTOR", "body": "This closes issue #8 ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 719637258, "label": "Update utils.py to fix sqlite3.OperationalError"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1019#issuecomment-708520800", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1019", "id": 708520800, "node_id": "MDEyOklzc3VlQ29tbWVudDcwODUyMDgwMA==", "user": {"value": 639012, "label": "jsfenfen"}, "created_at": "2020-10-14T16:37:19Z", "updated_at": "2020-10-14T16:37:19Z", "author_association": "CONTRIBUTOR", "body": "\ud83c\udf89 Thanks so much @simonw ! \ud83c\udf89 ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 721050815, "label": "\"Edit SQL\" button on canned queries"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1033#issuecomment-714657366", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1033", "id": 714657366, "node_id": "MDEyOklzc3VlQ29tbWVudDcxNDY1NzM2Ng==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-10-22T17:51:29Z", "updated_at": "2020-10-22T17:51:29Z", "author_association": "CONTRIBUTOR", "body": "How does `/-/static` relate to [current guidance docs around `static`](https://docs.datasette.io/en/latest/custom_templates.html?highlight=static#serving-static-files) regarding the `--static option` and metadata formulations such as `\"extra_js_urls\": [ \"/static/app.js\"]` (I've not managed to get this to work in a Jupyter server proxied set up; the [datasette / jupyter server proxy repo](https://github.com/simonw/jupyterserverproxy-datasette-demo) may provide a useful test example, eg via MyBinder, for folk to crib from?) ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 725099777, "label": "datasette.urls.static_plugins(...) method"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1012#issuecomment-714908859", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1012", "id": 714908859, "node_id": "MDEyOklzc3VlQ29tbWVudDcxNDkwODg1OQ==", "user": {"value": 45380, "label": "bollwyvl"}, "created_at": "2020-10-23T04:49:20Z", "updated_at": "2020-10-23T04:49:20Z", "author_association": "CONTRIBUTOR", "body": "Good luck on 1.0! It may also be worth lobbying for a `Framework::Datasette::1.0` classifier. This would be a nice way to allow the ecosystem to self-document a bit more [discoverably](https://pypi.org/search/?q=&o=&c=Framework+%3A%3A+Datasette%3A%3A+1.0). \r\n\r\nI was surprised to see the [PR for `Framework::Jupyter`](https://github.com/pypa/warehouse/pull/1905/files) is a... database migration! 
Of course, there may be more workflow to it!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 718540751, "label": "For 1.0 update trove classifier in setup.py"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1033#issuecomment-716066000", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1033", "id": 716066000, "node_id": "MDEyOklzc3VlQ29tbWVudDcxNjA2NjAwMA==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-10-24T22:58:33Z", "updated_at": "2020-10-24T22:58:33Z", "author_association": "CONTRIBUTOR", "body": "From [the docs](https://docs.datasette.io/en/latest/internals.html#datasette-urls), I note:\r\n\r\n```\r\ndatasette.urls.instance()\r\nReturns the URL to the Datasette instance root page. This is usually \"/\"\r\n```\r\n\r\nWhat about the proxy case? Eg if I am using jupyter-server-proxy on a MyBinder or local Jupyter notebook server site, `https://example.com:PORT/weirdpath/datasette`, what does `datasette.urls.instance()` refer to?\r\n\r\n- [ ] `https://example.com:PORT/weirdpath/datasette`\r\n- [ ] `https://example.com:PORT/weirdpath/`\r\n- [ ] `https://example.com:PORT/`\r\n- [ ] `https://example.com`\r\n- [ ] something else?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 725099777, "label": "datasette.urls.static_plugins(...) method"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/838#issuecomment-716123598", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/838", "id": 716123598, "node_id": "MDEyOklzc3VlQ29tbWVudDcxNjEyMzU5OA==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-10-25T10:20:12Z", "updated_at": "2020-10-25T10:53:24Z", "author_association": "CONTRIBUTOR", "body": "I'm trying to [run something behind a MyBinder proxy](https://github.com/ouseful-testing/nbsearch), but seem to have something set up incorrectly and not sure what the fix is?\r\n\r\nI'm starting datasette with jupyter-server-proxy setup:\r\n\r\n```\r\n# __init__.py\r\ndef setup_nbsearch():\r\n\r\n return {\r\n \"command\": [\r\n \"datasette\",\r\n \"serve\",\r\n f\"{_NBSEARCH_DB_PATH}\",\r\n \"-p\",\r\n \"{port}\",\r\n \"--config\",\r\n \"base_url:{base_url}nbsearch/\"\r\n ],\r\n \"absolute_url\": True,\r\n # The following needs a the labextension installing.\r\n # eg in postBuild: jupyter labextension install jupyterlab-server-proxy\r\n \"launcher_entry\": {\r\n \"enabled\": True,\r\n \"title\": \"nbsearch\",\r\n },\r\n }\r\n```\r\n\r\nwhere the `base_url` gets automatically populated by the server-proxy. 
I define the loaders as:\r\n\r\n```\r\n# __init__.py\r\nfrom datasette import hookimpl\r\n\r\n@hookimpl\r\ndef extra_css_urls(database, table, columns, view_name, datasette):\r\n return [\r\n \"/-/static-plugins/nbsearch/prism.css\",\r\n \"/-/static-plugins/nbsearch/nbsearch.css\",\r\n ]\r\n```\r\nbut these seem to also need a base_url prefix set somehow?\r\n\r\nCurrently, the generated HTML loads properly but internal links are incorrect; eg they take the form `` which resolves to eg `https://notebooks.gesis.org/hub/-/static-plugins/nbsearch/prism.css` rather than required URL of form `https://notebooks.gesis.org/binder/jupyter/user/ouseful-testing-nbsearch-0fx1mx67/nbsearch/-/static-plugins/nbsearch/prism.css`.\r\n\r\nThe main css is loaded correctly: ``", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 637395097, "label": "Incorrect URLs when served behind a proxy with base_url set"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1043#issuecomment-716237524", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1043", "id": 716237524, "node_id": "MDEyOklzc3VlQ29tbWVudDcxNjIzNzUyNA==", "user": {"value": 45380, "label": "bollwyvl"}, "created_at": "2020-10-26T00:14:57Z", "updated_at": "2020-10-26T00:14:57Z", "author_association": "CONTRIBUTOR", "body": "Sorry, I was out of the loop this weekend. The missing sdists were in some the `datasette-*` plugins... i'll capture my findings more concretely in one spot when i have a chance...", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 727915394, "label": "Include LICENSE in sdist"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/pull/189#issuecomment-717359145", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/189", "id": 717359145, "node_id": "MDEyOklzc3VlQ29tbWVudDcxNzM1OTE0NQ==", "user": {"value": 35681, "label": "adamwolf"}, "created_at": "2020-10-27T16:20:32Z", "updated_at": "2020-10-27T16:20:32Z", "author_association": "CONTRIBUTOR", "body": "No problem. I added a test. Let me know if it looks sufficient or if you want me to to tweak something!\r\n\r\nIf you don't mind, would you tag this PR as \"hacktoberfest-accepted\"? If you do mind, no problem and I'm sorry for asking :) My kiddos like the shirts.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 729818242, "label": "Allow iterables other than Lists in m2m records"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1049#issuecomment-718528252", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1049", "id": 718528252, "node_id": "MDEyOklzc3VlQ29tbWVudDcxODUyODI1Mg==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-10-29T09:20:34Z", "updated_at": "2020-10-29T09:20:34Z", "author_association": "CONTRIBUTOR", "body": "That workaround is probably fine. I was trying to work out whether there might be other situations where a pre-external package load might be useful but couldn't offhand bring any other examples to mind. 
The static plugins option also looks interesting.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 729017519, "label": "Add template block prior to extra URL loaders"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/838#issuecomment-720354227", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/838", "id": 720354227, "node_id": "MDEyOklzc3VlQ29tbWVudDcyMDM1NDIyNw==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-11-02T09:33:58Z", "updated_at": "2020-11-02T09:33:58Z", "author_association": "CONTRIBUTOR", "body": "Thanks; just a note that the `datasette.urls.static(path)` and `datasette.urls.static_plugins(plugin_name, path)` items both seem to be repeated and appear in the docs twice?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 637395097, "label": "Incorrect URLs when served behind a proxy with base_url set"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1112#issuecomment-735279355", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1112", "id": 735279355, "node_id": "MDEyOklzc3VlQ29tbWVudDczNTI3OTM1NQ==", "user": {"value": 50527, "label": "jefftriplett"}, "created_at": "2020-11-28T19:21:09Z", "updated_at": "2020-11-28T19:21:09Z", "author_association": "CONTRIBUTOR", "body": "(Even more annoying is that I see my editor leaked an extra delete space at the end of the line. I'm happy to rebuild this to be less annoying, but you probably don't want the changelog update either way)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 752749485, "label": "Fix --metadata doc usage"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/493#issuecomment-735281577", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/493", "id": 735281577, "node_id": "MDEyOklzc3VlQ29tbWVudDczNTI4MTU3Nw==", "user": {"value": 50527, "label": "jefftriplett"}, "created_at": "2020-11-28T19:39:53Z", "updated_at": "2020-11-28T19:39:53Z", "author_association": "CONTRIBUTOR", "body": "I was confused by `--config` and I tried passing the json from datasette-ripgrep into `config.json` just as a wild guess. \r\n\r\nA short term solution might be pointing out in plugins that their snippet json can go in `metadata.json` at least makes it easier to search for config options or to know where to start if someone is new. 
", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 449886319, "label": "Rename metadata.json to config.json"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1114#issuecomment-735436014", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1114", "id": 735436014, "node_id": "MDEyOklzc3VlQ29tbWVudDczNTQzNjAxNA==", "user": {"value": 2182, "label": "danp"}, "created_at": "2020-11-29T18:33:30Z", "updated_at": "2020-11-29T18:33:30Z", "author_association": "CONTRIBUTOR", "body": "Thank you!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 752966476, "label": "--load-extension=spatialite not working with datasetteproject/datasette docker image"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1111#issuecomment-736322290", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1111", "id": 736322290, "node_id": "MDEyOklzc3VlQ29tbWVudDczNjMyMjI5MA==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-12-01T08:54:47Z", "updated_at": "2020-12-01T08:54:47Z", "author_association": "CONTRIBUTOR", "body": "Somewhat related: https://github.com/simonw/datasette/issues/859\r\nI fixed the issue with forking and disabling the counts for hidden tables.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 751195017, "label": "Accessing a database's `.json` is slow for very large SQLite files"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1130#issuecomment-738907852", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1130", "id": 738907852, "node_id": "MDEyOklzc3VlQ29tbWVudDczODkwNzg1Mg==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-12-04T17:22:29Z", "updated_at": "2020-12-04T17:31:25Z", "author_association": "CONTRIBUTOR", "body": "EDIT: I misunderstood the problem. This seems like a fix better suited for Safari. 
But I don't have any Apple device to test it.\r\n\r\n```css\r\nbody {\r\n min-height: 100vh;\r\n min-height: -webkit-fill-available;\r\n}\r\nhtml {\r\n height: -webkit-fill-available;\r\n}\r\n```\r\nhttps://css-tricks.com/css-fix-for-100vh-in-mobile-webkit/\r\n\r\n---\r\n\r\nIt's actually not that difficult to fix.\r\nWell, this is actually a workaround to keep the viewport in place.\r\n\r\nI usually put a transition (forgot to do it here) that keeps the page from resizing.\r\n\r\n```css\r\n.container {\r\n min-height: 100vh;\r\n transition: height 10000s steps(0);\r\n}\r\n```\r\n\r\nThe `steps()` function prevents excessive layout calculations, and lets the page snap back into place (10000s ~= 3h later) in a single step.\r\nThis fix also prevents the page from jumping around when the keyboard pops up and down.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 756876238, "label": "Fix footer not sticking to bottom in short pages"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/998#issuecomment-743080047", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/998", "id": 743080047, "node_id": "MDEyOklzc3VlQ29tbWVudDc0MzA4MDA0Nw==", "user": {"value": 6371750, "label": "JBPressac"}, "created_at": "2020-12-11T09:25:09Z", "updated_at": "2020-12-11T09:25:09Z", "author_association": "CONTRIBUTOR", "body": "Hello Simon,\r\nI have a similar problem with the horizontal scrollbar display with Datasette version 0.51 and later for a table with more than 30 rows. With Datasette 0.50, the horizontal scrollbar is displayed; if I upgrade Datasette to 0.51 or later, the horizontal scrollbar disappears.\r\n\r\nDatasette 0.50: horizontal scrollbar\r\n\r\n![2020-12-11 10_23_28-CN=Microsoft Windows, O=Microsoft Corporation, L=Redmond, S=Washington, C=US](https://user-images.githubusercontent.com/6371750/101885620-a5f17800-3b9a-11eb-8870-654e7d4372ca.png)\r\n\r\nDatasette 0.51 and later: no horizontal scrollbar\r\n\r\n![2020-12-11 10_24_55-CN=Microsoft Windows, O=Microsoft Corporation, L=Redmond, S=Washington, C=US](https://user-images.githubusercontent.com/6371750/101885782-dfc27e80-3b9a-11eb-9d55-6c9a56227bf2.png)\r\n\r\nThanks,", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 717699884, "label": "Wide tables should scroll horizontally within the page"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/493#issuecomment-748305976", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/493", "id": 748305976, "node_id": "MDEyOklzc3VlQ29tbWVudDc0ODMwNTk3Ng==", "user": {"value": 50527, "label": "jefftriplett"}, "created_at": "2020-12-18T20:34:39Z", "updated_at": "2020-12-18T20:34:39Z", "author_association": "CONTRIBUTOR", "body": "I can't keep up with the renaming contexts, but I like having the ability to run datasette + datasette-ripgrep against different configs: \r\n\r\n```shell\r\ndatasette serve --metadata=./metadata.json\r\n```\r\n\r\nI have one for all of my code and one per client who has lots of code. So as long as I can point datasette to something, it's easy to work with. 
", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 449886319, "label": "Rename metadata.json to config.json"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/15#issuecomment-748436779", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/15", "id": 748436779, "node_id": "MDEyOklzc3VlQ29tbWVudDc0ODQzNjc3OQ==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-12-19T07:49:00Z", "updated_at": "2020-12-19T07:49:00Z", "author_association": "CONTRIBUTOR", "body": "@nickvazz ZGENERICASSET changed to ZASSET in Big Sur. Here's a list of other changes to the schema in Big Sur: https://github.com/RhetTbull/osxphotos/wiki/Changes-in-Photos-6---Big-Sur", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 612151767, "label": "Expose scores from ZCOMPUTEDASSETATTRIBUTES"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/15#issuecomment-748562288", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/15", "id": 748562288, "node_id": "MDEyOklzc3VlQ29tbWVudDc0ODU2MjI4OA==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-12-20T04:44:22Z", "updated_at": "2020-12-20T04:44:22Z", "author_association": "CONTRIBUTOR", "body": "@nickvazz @simonw I opened a [PR](https://github.com/dogsheep/dogsheep-photos/pull/31) that replaces the SQL for `ZCOMPUTEDASSETATTRIBUTES` to use osxphotos which now exposes all this data and has been updated for Big Sur. I did regression tests to confirm the extracted data is identical, with one exception which should not affect operation: the old code pulled data from `ZCOMPUTEDASSETATTRIBUTES` for missing photos while the main loop ignores missing photos and does not add them to `apple_photos`. 
The new code does not add rows to the `apple_photos_scores` table for missing photos.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 612151767, "label": "Expose scores from ZCOMPUTEDASSETATTRIBUTES"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/pull/31#issuecomment-748562330", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/31", "id": 748562330, "node_id": "MDEyOklzc3VlQ29tbWVudDc0ODU2MjMzMA==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-12-20T04:45:08Z", "updated_at": "2020-12-20T04:45:08Z", "author_association": "CONTRIBUTOR", "body": "Fixes the issue mentioned here: https://github.com/dogsheep/dogsheep-photos/issues/15#issuecomment-748436115", "reactions": "{\"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 1, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 771511344, "label": "Update for Big Sur"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1158#issuecomment-750389683", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1158", "id": 750389683, "node_id": "MDEyOklzc3VlQ29tbWVudDc1MDM4OTY4Mw==", "user": {"value": 6774676, "label": "eumiro"}, "created_at": "2020-12-23T17:02:50Z", "updated_at": "2020-12-23T17:02:50Z", "author_association": "CONTRIBUTOR", "body": "The dict/set suggestion comes from `pyupgrade --py36-plus`, but then had to `black` the change.\r\n\r\nThe rest comes from PyCharm's Inspect code function. I reviewed all the suggestions and fixed a thing or two, such as leading/trailing spaces in the docstrings or turned around the chained conditions.\r\n\r\nThen I tried to convert all `os.path/glob/open` to `Path`, but there were some local test issues, so I'll have to start over in smaller chunks if you want to have that too.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 773913793, "label": "Modernize code to Python 3.6+"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/pull/59#issuecomment-751375487", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/59", "id": 751375487, "node_id": "MDEyOklzc3VlQ29tbWVudDc1MTM3NTQ4Nw==", "user": {"value": 631242, "label": "frosencrantz"}, "created_at": "2020-12-26T17:08:44Z", "updated_at": "2020-12-26T17:08:44Z", "author_association": "CONTRIBUTOR", "body": "Hi @simonw, do I need to do anything else for this PR to be considered to be included? 
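To make the `pyupgrade --py36-plus` dict/set suggestion mentioned above concrete, here is an illustrative sketch (not the actual diff from that PR; `rows` is made-up sample data):

```python
# Made-up data purely for illustration
rows = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]

# Before: dict()/set() wrapped around list comprehensions
lookup = dict([(row["id"], row["name"]) for row in rows])
seen = set([row["id"] for row in rows])

# After pyupgrade (then reformatted by black): plain comprehensions
lookup = {row["id"]: row["name"] for row in rows}
seen = {row["id"] for row in rows}
```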
I've tried using this project and it is quite nice to be able to explore a repository, but noticed that a couple commands don't allow you to use authorization from the environment variable.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 771872303, "label": "Remove unneeded exists=True for -a/--auth flag."}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/417#issuecomment-752098906", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/417", "id": 752098906, "node_id": "MDEyOklzc3VlQ29tbWVudDc1MjA5ODkwNg==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-12-29T14:34:30Z", "updated_at": "2020-12-29T14:34:50Z", "author_association": "CONTRIBUTOR", "body": "FWIW, I had a look at `watchdog` for a `datasette` powered Jupyter notebook search tool: https://github.com/ouseful-testing/nbsearch/blob/main/nbsearch/nbwatchdog.py\r\n\r\nNot a production thing, just an experiment trying to explore what might be possible...", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 421546944, "label": "Datasette Library"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1012#issuecomment-753531657", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1012", "id": 753531657, "node_id": "MDEyOklzc3VlQ29tbWVudDc1MzUzMTY1Nw==", "user": {"value": 45380, "label": "bollwyvl"}, "created_at": "2021-01-02T21:25:36Z", "updated_at": "2021-01-02T21:25:36Z", "author_association": "CONTRIBUTOR", "body": "Actually, on more research, I found out this is handled by the [trove-classifiers package](https://github.com/pypa/trove-classifiers/blob/master/src/trove_classifiers/__init__.py#L2) now, so it's just a one-liner pr instead of fire-up-a-docker-container-and-do-some-migrations", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 718540751, "label": "For 1.0 update trove classifier in setup.py"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1170#issuecomment-754004715", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1170", "id": 754004715, "node_id": "MDEyOklzc3VlQ29tbWVudDc1NDAwNDcxNQ==", "user": {"value": 3637, "label": "benpickles"}, "created_at": "2021-01-04T14:25:44Z", "updated_at": "2021-01-04T14:25:44Z", "author_association": "CONTRIBUTOR", "body": "I was going to re-add the filter to only run Prettier when there have been changes in `datasette/static` but that would mean it wouldn't run when the package is updated. 
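For the trove classifier note above, the one-liner check against the `trove-classifiers` package could look roughly like this (a sketch; the classifier string shown is only an example, not necessarily the one used in `setup.py`):

```python
# Validate a classifier string against the canonical list shipped in the
# trove-classifiers package linked in the comment above.
from trove_classifiers import classifiers

assert "Development Status :: 5 - Production/Stable" in classifiers
```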
That plus the fact that [the last run of the job took only 8 seconds](https://github.com/benpickles/datasette/runs/1640121514) is why I decided not to re-add the filter.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 778126516, "label": "Install Prettier via package.json"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1169#issuecomment-754007242", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1169", "id": 754007242, "node_id": "MDEyOklzc3VlQ29tbWVudDc1NDAwNzI0Mg==", "user": {"value": 3637, "label": "benpickles"}, "created_at": "2021-01-04T14:29:57Z", "updated_at": "2021-01-04T14:29:57Z", "author_association": "CONTRIBUTOR", "body": "I somewhat share your reluctance to add a package.json to seemingly every project out there but ultimately if they're project dependencies it's important they're managed within the codebase.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 777677671, "label": "Prettier package not actually being cached"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1167#issuecomment-754619930", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1167", "id": 754619930, "node_id": "MDEyOklzc3VlQ29tbWVudDc1NDYxOTkzMA==", "user": {"value": 3637, "label": "benpickles"}, "created_at": "2021-01-05T12:57:57Z", "updated_at": "2021-01-05T12:57:57Z", "author_association": "CONTRIBUTOR", "body": "Not sure where exactly to put the actual docs (presumably somewhere in [docs/contributing.rst](https://github.com/simonw/datasette/blob/main/docs/contributing.rst)) but I've made a slight change to make it easier to run locally (copying [the approach in excalidraw](https://github.com/excalidraw/excalidraw/blob/ade2565f497243a5e428f4906d8ed80c872fd981/package.json#L90-L94)): https://github.com/simonw/datasette/compare/main...benpickles:prettier-docs\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 777145954, "label": "Add Prettier to contributing documentation"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/twitter-to-sqlite/issues/54#issuecomment-754721153", "issue_url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/54", "id": 754721153, "node_id": "MDEyOklzc3VlQ29tbWVudDc1NDcyMTE1Mw==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2021-01-05T15:51:09Z", "updated_at": "2021-01-05T15:51:09Z", "author_association": "CONTRIBUTOR", "body": "Correction: the failure is on `lists-member.js` (I was thrown by the `block` variable name, but that's just a coincidence)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 779088071, "label": "Archive import appears to be broken on recent exports"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/twitter-to-sqlite/pull/55#issuecomment-754728696", "issue_url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/55", "id": 754728696, "node_id": "MDEyOklzc3VlQ29tbWVudDc1NDcyODY5Ng==", "user": {"value": 21148, "label": "jacobian"}, "created_at": 
"2021-01-05T16:02:55Z", "updated_at": "2021-01-05T16:02:55Z", "author_association": "CONTRIBUTOR", "body": "This now works for me, though I'm entirely ensure if it's a just-my-export thing or a wider issue. Also, this doesn't contain any tests. So I'm not sure if there's more work to be done here, or if this is good enough.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 779211940, "label": "Fix archive imports"}, "performed_via_github_app": null}