{"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111726586", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111726586, "node_id": "IC_kwDOBm6k_c5CQ5n6", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-28T04:17:16Z", "updated_at": "2022-04-28T04:19:31Z", "author_association": "OWNER", "body": "I could experiment with the `await asyncio.run_in_executor(processpool_executor, fn)` mechanism described in https://stackoverflow.com/a/29147750\r\n\r\nCode examples: https://cs.github.com/?scopeName=All+repos&scope=&q=run_in_executor+ProcessPoolExecutor", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111725638", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111725638, "node_id": "IC_kwDOBm6k_c5CQ5ZG", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-28T04:15:15Z", "updated_at": "2022-04-28T04:15:15Z", "author_association": "OWNER", "body": "Useful theory from Keith Medcalf https://sqlite.org/forum/forumpost/e363c69d3441172e\r\n\r\n> This is true, but the concurrency is limited to the execution which occurs with the GIL released (that is, in the native C sqlite3 library itself). Each row (for example) can be retrieved in parallel but \"constructing the python return objects for each row\" will be serialized (by the GIL).\r\n> \r\n> That is to say that if your have two python threads each with their own connection, and each one is performing a select that returns 1,000,000 rows (lets say that is 25% of the candidates for each select) then the difference in execution time between executing two python threads in parallel vs a single serial thead will not be much different (if even detectable at all). 
In fact it is possible that the multiple-threaded version takes longer to run both queries to completion because of the increased contention over a shared resource (the GIL).\r\n\r\nSo maybe this is a GIL thing.\r\n\r\nI should test with some expensive SQL queries (maybe big aggregations against large tables) and see if I can spot an improvement there.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1728#issuecomment-1111714665", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1728", "id": 1111714665, "node_id": "IC_kwDOBm6k_c5CQ2tp", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-28T03:52:47Z", "updated_at": "2022-04-28T03:52:58Z", "author_association": "OWNER", "body": "Nice custom template/theme!\r\n\r\nYeah, for that I'd recommend hosting elsewhere - on a regular VPS (I use `systemd` like this: https://docs.datasette.io/en/stable/deploying.html#running-datasette-using-systemd ) or using Fly if you want to run containers without managing a full server.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1218133366, "label": "Writable canned queries fail with useless non-error against immutable databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1728#issuecomment-1111712953", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1728", "id": 1111712953, "node_id": "IC_kwDOBm6k_c5CQ2S5", "user": {"value": 127565, "label": "wragge"}, "created_at": "2022-04-28T03:48:36Z", "updated_at": "2022-04-28T03:48:36Z", "author_association": "CONTRIBUTOR", "body": "I don't think that'd work for this project. The db is very big, and my aim was to have an environment where researchers could be making use of the data, but be easily able to add corrections to the HTR/OCR extracted data when they came across problems. It's in its immutable (!) form here: https://sydney-stock-exchange-xqtkxtd5za-ts.a.run.app/stock_exchange/stocks", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1218133366, "label": "Writable canned queries fail with useless non-error against immutable databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1728#issuecomment-1111708206", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1728", "id": 1111708206, "node_id": "IC_kwDOBm6k_c5CQ1Iu", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-28T03:38:56Z", "updated_at": "2022-04-28T03:38:56Z", "author_association": "OWNER", "body": "In terms of this bug, there are a few potential fixes:\r\n\r\n1. Detect the write to an immutable database and show the user a proper, meaningful error message in the red error box at the top of the page\r\n2. Don't allow the user to even submit the form - show a message saying that this canned query is unavailable because the database cannot be written to\r\n3. 
Don't even allow Datasette to start running at all - if there's a canned query configured in `metadata.yml` and the database it refers to is in `-i` immutable mode throw an error on startup\r\n\r\nI'm not keen on that last one because it would be frustrating if you couldn't launch Datasette just because you had an old canned query lying around in your metadata file.\r\n\r\nSo I'm leaning towards option 2.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1218133366, "label": "Writable canned queries fail with useless non-error against immutable databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1728#issuecomment-1111707384", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1728", "id": 1111707384, "node_id": "IC_kwDOBm6k_c5CQ074", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-28T03:36:46Z", "updated_at": "2022-04-28T03:36:56Z", "author_association": "OWNER", "body": "A more realistic solution (which I've been using on several of my own projects) is to keep the data itself in GitHub and encourage users to edit it there - using the GitHub web interface to edit YAML files or similar.\r\n\r\nNeeds your users to be comfortable hand-editing YAML though! You can at least guard against critical errors by having CI run tests against their YAML before deploying.\r\n\r\nI have a dream of building a more friendly web forms interface which edits the YAML back on GitHub for the user, but that's just a concept at the moment.\r\n\r\nEven more fun would be if a user-friendly form could submit PRs for review without the user having to know what a PR is!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1218133366, "label": "Writable canned queries fail with useless non-error against immutable databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1728#issuecomment-1111706519", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1728", "id": 1111706519, "node_id": "IC_kwDOBm6k_c5CQ0uX", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-28T03:34:49Z", "updated_at": "2022-04-28T03:34:49Z", "author_association": "OWNER", "body": "I've wanted to do stuff like that on Cloud Run too. 
So far I've assumed that it's not feasible, but recently I've been wondering how hard it would be to have a small (like less than 100KB or so) Datasette instance which persists data to a backing GitHub repository such that when it starts up it can pull the latest copy and any time someone edits it can push their changes.\r\n\r\nI'm still not sure it would work well on Cloud Run due to the uncertainty at what would happen if Cloud Run decided to boot up a second instance - but it's still an interesting thought exercise.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1218133366, "label": "Writable canned queries fail with useless non-error against immutable databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1728#issuecomment-1111705323", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1728", "id": 1111705323, "node_id": "IC_kwDOBm6k_c5CQ0br", "user": {"value": 127565, "label": "wragge"}, "created_at": "2022-04-28T03:32:06Z", "updated_at": "2022-04-28T03:32:06Z", "author_association": "CONTRIBUTOR", "body": "Ah, that would be it! I have a core set of data which doesn't change to which I want authorised users to be able to submit corrections. I was going to deal with the persistence issue by just grabbing the user corrections at regular intervals and saving to GitHub. I might need to rethink. Thanks!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1218133366, "label": "Writable canned queries fail with useless non-error against immutable databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1728#issuecomment-1111705069", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1728", "id": 1111705069, "node_id": "IC_kwDOBm6k_c5CQ0Xt", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-28T03:31:33Z", "updated_at": "2022-04-28T03:31:33Z", "author_association": "OWNER", "body": "Confirmed - this is a bug where immutable databases fail to show a useful error if you write to them with a canned query.\r\n\r\nSteps to reproduce:\r\n```\r\necho '\r\ndatabases:\r\n writable:\r\n queries:\r\n add_name:\r\n sql: insert into names(name) values (:name)\r\n write: true\r\n' > write-metadata.yml\r\necho '{\"name\": \"Simon\"}' | sqlite-utils insert writable.db names -\r\ndatasette writable.db -m write-metadata.yml\r\n```\r\nThen visit http://127.0.0.1:8001/writable/add_name - adding names works.\r\n\r\nNow do this instead:\r\n\r\n```\r\ndatasette -i writable.db -m write-metadata.yml\r\n```\r\n\r\nAnd I'm getting a broken error:\r\n\r\n![error](https://user-images.githubusercontent.com/9599/165670823-6604dd69-9905-475c-8098-5da22ab026a1.gif)\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1218133366, "label": "Writable canned queries fail with useless non-error against immutable databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111699175", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111699175, "node_id": "IC_kwDOBm6k_c5CQy7n", "user": {"value": 9599, "label": "simonw"}, "created_at": 
"2022-04-28T03:19:48Z", "updated_at": "2022-04-28T03:20:08Z", "author_association": "OWNER", "body": "I ran `py-spy` and then hammered refresh a bunch of times on the `http://127.0.0.1:8856/github/commits?_facet=repo&_facet=committer&_trace=1&_noparallel=` page - it generated this SVG profile for me.\r\n\r\nThe area on the right is the threads running the DB queries:\r\n\r\n![profile](https://user-images.githubusercontent.com/9599/165669677-5461ede5-3dc4-4b49-8319-bfe5fd8a723d.svg)\r\n\r\nInteractive version here: https://static.simonwillison.net/static/2022/datasette-parallel-profile.svg", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1728#issuecomment-1111698307", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1728", "id": 1111698307, "node_id": "IC_kwDOBm6k_c5CQyuD", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-28T03:18:02Z", "updated_at": "2022-04-28T03:18:02Z", "author_association": "OWNER", "body": "If the behaviour you are seeing is because the database is running in immutable mode then that's a bug - you should get a useful error message instead!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1218133366, "label": "Writable canned queries fail with useless non-error against immutable databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1728#issuecomment-1111697985", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1728", "id": 1111697985, "node_id": "IC_kwDOBm6k_c5CQypB", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-28T03:17:20Z", "updated_at": "2022-04-28T03:17:20Z", "author_association": "OWNER", "body": "How did you deploy to Cloud Run?\r\n\r\n`datasette publish cloudrun` defaults to running databases there in `-i` immutable mode, because if you managed to change a file on disk on Cloud Run those changes would be lost the next time your container restarted there.\r\n\r\nThat's why I upgraded `datasette-publish-fly` to provide a way of working with their volumes support - they're the best option I know of right now for running Datasette in a container with a persistent volume that can accept writes: https://simonwillison.net/2022/Feb/15/fly-volumes/", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1218133366, "label": "Writable canned queries fail with useless non-error against immutable databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111683539", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111683539, "node_id": "IC_kwDOBm6k_c5CQvHT", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-28T02:47:57Z", "updated_at": "2022-04-28T02:47:57Z", "author_association": "OWNER", "body": "Maybe this is the Python GIL after all?\r\n\r\nI've been hoping that the GIL won't be an issue because the `sqlite3` module releases the GIL for the duration of the execution of a SQL query - see 
https://github.com/python/cpython/blob/f348154c8f8a9c254503306c59d6779d4d09b3a9/Modules/_sqlite/cursor.c#L749-L759\r\n\r\nSo I've been hoping this means that SQLite code itself can run concurrently on multiple cores even when Python threads cannot.\r\n\r\nBut maybe I'm misunderstanding how that works?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111681513", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111681513, "node_id": "IC_kwDOBm6k_c5CQunp", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-28T02:44:26Z", "updated_at": "2022-04-28T02:44:26Z", "author_association": "OWNER", "body": "I could try `py-spy top`, which I previously used here:\r\n- https://github.com/simonw/datasette/issues/1673", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111661331", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111661331, "node_id": "IC_kwDOBm6k_c5CQpsT", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-28T02:07:31Z", "updated_at": "2022-04-28T02:07:31Z", "author_association": "OWNER", "body": "Asked on the SQLite forum about this here: https://sqlite.org/forum/forumpost/ffbfa9f38e", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111602802", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111602802, "node_id": "IC_kwDOBm6k_c5CQbZy", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-28T00:21:35Z", "updated_at": "2022-04-28T00:21:35Z", "author_association": "OWNER", "body": "Tried this but I'm getting back an empty JSON array of traces at the bottom of the page most of the time (intermittently it works correctly):\r\n\r\n```diff\r\ndiff --git a/datasette/database.py b/datasette/database.py\r\nindex ba594a8..d7f9172 100644\r\n--- a/datasette/database.py\r\n+++ b/datasette/database.py\r\n@@ -7,7 +7,7 @@ import sys\r\n import threading\r\n import uuid\r\n \r\n-from .tracer import trace\r\n+from .tracer import trace, trace_child_tasks\r\n from .utils import (\r\n detect_fts,\r\n detect_primary_keys,\r\n@@ -207,30 +207,31 @@ class Database:\r\n time_limit_ms = custom_time_limit\r\n \r\n with sqlite_timelimit(conn, time_limit_ms):\r\n- try:\r\n- cursor = conn.cursor()\r\n- cursor.execute(sql, params if params is not None else {})\r\n- max_returned_rows = self.ds.max_returned_rows\r\n- if max_returned_rows == page_size:\r\n- max_returned_rows += 1\r\n- if max_returned_rows and truncate:\r\n- rows = cursor.fetchmany(max_returned_rows + 1)\r\n- truncated = len(rows) > max_returned_rows\r\n- rows = rows[:max_returned_rows]\r\n- 
else:\r\n- rows = cursor.fetchall()\r\n- truncated = False\r\n- except (sqlite3.OperationalError, sqlite3.DatabaseError) as e:\r\n- if e.args == (\"interrupted\",):\r\n- raise QueryInterrupted(e, sql, params)\r\n- if log_sql_errors:\r\n- sys.stderr.write(\r\n- \"ERROR: conn={}, sql = {}, params = {}: {}\\n\".format(\r\n- conn, repr(sql), params, e\r\n+ with trace(\"sql\", database=self.name, sql=sql.strip(), params=params):\r\n+ try:\r\n+ cursor = conn.cursor()\r\n+ cursor.execute(sql, params if params is not None else {})\r\n+ max_returned_rows = self.ds.max_returned_rows\r\n+ if max_returned_rows == page_size:\r\n+ max_returned_rows += 1\r\n+ if max_returned_rows and truncate:\r\n+ rows = cursor.fetchmany(max_returned_rows + 1)\r\n+ truncated = len(rows) > max_returned_rows\r\n+ rows = rows[:max_returned_rows]\r\n+ else:\r\n+ rows = cursor.fetchall()\r\n+ truncated = False\r\n+ except (sqlite3.OperationalError, sqlite3.DatabaseError) as e:\r\n+ if e.args == (\"interrupted\",):\r\n+ raise QueryInterrupted(e, sql, params)\r\n+ if log_sql_errors:\r\n+ sys.stderr.write(\r\n+ \"ERROR: conn={}, sql = {}, params = {}: {}\\n\".format(\r\n+ conn, repr(sql), params, e\r\n+ )\r\n )\r\n- )\r\n- sys.stderr.flush()\r\n- raise\r\n+ sys.stderr.flush()\r\n+ raise\r\n \r\n if truncate:\r\n return Results(rows, truncated, cursor.description)\r\n@@ -238,9 +239,8 @@ class Database:\r\n else:\r\n return Results(rows, False, cursor.description)\r\n \r\n- with trace(\"sql\", database=self.name, sql=sql.strip(), params=params):\r\n- results = await self.execute_fn(sql_operation_in_thread)\r\n- return results\r\n+ with trace_child_tasks():\r\n+ return await self.execute_fn(sql_operation_in_thread)\r\n \r\n @property\r\n def size(self):\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111597176", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111597176, "node_id": "IC_kwDOBm6k_c5CQaB4", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-28T00:11:44Z", "updated_at": "2022-04-28T00:11:44Z", "author_association": "OWNER", "body": "Though it would be interesting to also have the trace reveal how much time is spent in the functions that wrap that core SQL - the stuff that is being measured at the moment.\r\n\r\nI have a hunch that this could help solve the over-arching performance mystery.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111595319", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111595319, "node_id": "IC_kwDOBm6k_c5CQZk3", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-28T00:09:45Z", "updated_at": "2022-04-28T00:11:01Z", "author_association": "OWNER", "body": "Here's where read queries are instrumented: https://github.com/simonw/datasette/blob/7a6654a253dee243518dc542ce4c06dbb0d0801d/datasette/database.py#L241-L242\r\n\r\nSo the instrumentation is actually capturing quite a 
bit of Python activity before it gets to SQLite:\r\n\r\nhttps://github.com/simonw/datasette/blob/7a6654a253dee243518dc542ce4c06dbb0d0801d/datasette/database.py#L179-L190\r\n\r\nAnd then:\r\n\r\nhttps://github.com/simonw/datasette/blob/7a6654a253dee243518dc542ce4c06dbb0d0801d/datasette/database.py#L204-L233\r\n\r\nIdeally I'd like that `trace()` block to wrap just the `cursor.execute()` and `cursor.fetchmany(...)` or `cursor.fetchall()` calls.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111558204", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111558204, "node_id": "IC_kwDOBm6k_c5CQQg8", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-27T22:58:39Z", "updated_at": "2022-04-27T22:58:39Z", "author_association": "OWNER", "body": "I should check my timing mechanism. Am I capturing the time taken just in SQLite or does it include time spent in Python crossing between async and threaded world and waiting for a thread pool worker to become available?\r\n\r\nThat could explain the longer query times.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111553029", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111553029, "node_id": "IC_kwDOBm6k_c5CQPQF", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-27T22:48:21Z", "updated_at": "2022-04-27T22:48:21Z", "author_association": "OWNER", "body": "I wonder if it would be worth exploring multiprocessing here.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111551076", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111551076, "node_id": "IC_kwDOBm6k_c5CQOxk", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-27T22:44:51Z", "updated_at": "2022-04-27T22:45:04Z", "author_association": "OWNER", "body": "Really wild idea: what if I created three copies of the SQLite database file - as three separate file names - and then balanced the parallel queries across all these? 
Any chance that could avoid any mysterious locking issues?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111535818", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111535818, "node_id": "IC_kwDOBm6k_c5CQLDK", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-27T22:18:45Z", "updated_at": "2022-04-27T22:18:45Z", "author_association": "OWNER", "body": "Another avenue: https://twitter.com/weargoggles/status/1519426289920270337\r\n\r\n> SQLite has its own mutexes to provide thread safety, which as another poster noted are out of play in multi process setups. Perhaps downgrading from the \u201cserializable\u201d to \u201cmulti-threaded\u201d safety would be okay for Datasette? https://sqlite.org/c3ref/c_config_covering_index_scan.html#sqliteconfigmultithread\r\n\r\nDoesn't look like there's an obvious way to access that from Python via the `sqlite3` module though.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/159#issuecomment-1111506339", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/159", "id": 1111506339, "node_id": "IC_kwDOCGYnMM5CQD2j", "user": {"value": 154364, "label": "dracos"}, "created_at": "2022-04-27T21:35:13Z", "updated_at": "2022-04-27T21:35:13Z", "author_association": "NONE", "body": "Just stumbled across this, wondering why none of my deletes were working.", "reactions": "{\"total_count\": 2, \"+1\": 2, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 702386948, "label": ".delete_where() does not auto-commit (unlike .insert() or .upsert())"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111485722", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111485722, "node_id": "IC_kwDOBm6k_c5CP-0a", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-27T21:08:20Z", "updated_at": "2022-04-27T21:08:20Z", "author_association": "OWNER", "body": "Tried that and it didn't seem to make a difference either.\r\n\r\nI really need a much deeper view of what's going on here.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111462442", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111462442, "node_id": "IC_kwDOBm6k_c5CP5Iq", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-27T20:40:59Z", "updated_at": "2022-04-27T20:42:49Z", "author_association": "OWNER", "body": "This looks VERY relevant: [SQLite Shared-Cache Mode](https://www.sqlite.org/sharedcache.html):\r\n\r\n> SQLite 
includes a special \"shared-cache\" mode (disabled by default) intended for use in embedded servers. If shared-cache mode is enabled and a thread establishes multiple connections to the same database, the connections share a single data and schema cache. This can significantly reduce the quantity of memory and IO required by the system.\r\n\r\nEnabled as part of the URI filename:\r\n\r\n ATTACH 'file:aux.db?cache=shared' AS aux;\r\n\r\nTurns out I'm already using this for in-memory databases that have `.memory_name` set, but not (yet) for regular file-backed databases:\r\n\r\nhttps://github.com/simonw/datasette/blob/7a6654a253dee243518dc542ce4c06dbb0d0801d/datasette/database.py#L73-L75\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111460068", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111460068, "node_id": "IC_kwDOBm6k_c5CP4jk", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-27T20:38:32Z", "updated_at": "2022-04-27T20:38:32Z", "author_association": "OWNER", "body": "WAL mode didn't seem to make a difference. I thought there was a chance it might help multiple read connections operate at the same time but it looks like it really does only matter for when writes are going on.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111456500", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111456500, "node_id": "IC_kwDOBm6k_c5CP3r0", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-27T20:36:01Z", "updated_at": "2022-04-27T20:36:01Z", "author_association": "OWNER", "body": "Yeah all of this is pretty much assuming read-only connections. Datasette has a separate mechanism for ensuring that writes are executed one at a time against a dedicated connection from an in-memory queue:\r\n- https://github.com/simonw/datasette/issues/682", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111451790", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111451790, "node_id": "IC_kwDOBm6k_c5CP2iO", "user": {"value": 716529, "label": "glyph"}, "created_at": "2022-04-27T20:30:33Z", "updated_at": "2022-04-27T20:30:33Z", "author_association": "NONE", "body": "> I should try seeing what happens with WAL mode enabled.\r\n\r\nI've only skimmed above but it looks like you're doing mainly read-only queries? 
WAL mode is about better interactions between writers & readers, primarily.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111448928", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111448928, "node_id": "IC_kwDOBm6k_c5CP11g", "user": {"value": 716529, "label": "glyph"}, "created_at": "2022-04-27T20:27:05Z", "updated_at": "2022-04-27T20:27:05Z", "author_association": "NONE", "body": "You don't want to re-use an SQLite connection from multiple threads anyway: https://www.sqlite.org/threadsafe.html\r\n\r\nMultiple connections can operate on the file in parallel, but a single connection can't:\r\n\r\n> Multi-thread. In this mode, SQLite can be safely used by multiple threads **provided that no single database connection is used simultaneously in two or more threads**.\r\n\r\n(emphasis mine)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111442012", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111442012, "node_id": "IC_kwDOBm6k_c5CP0Jc", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-27T20:19:00Z", "updated_at": "2022-04-27T20:19:00Z", "author_association": "OWNER", "body": "Something worth digging into: are these parallel queries running against the same SQLite connection or are they each running against a separate SQLite connection?\r\n\r\nJust realized I know the answer: they're running against separate SQLite connections, because that's how the time limit mechanism works: it installs a progress handler for each connection which terminates it after a set time.\r\n\r\nThis means that if SQLite benefits from multiple threads using the same connection (due to shared caches or similar) then Datasette will not be seeing those benefits.\r\n\r\nIt also means that if there's some mechanism within SQLite that penalizes you for having multiple parallel connections to a single file (just guessing here, maybe there's some kind of locking going on?) 
then Datasette will suffer those penalties.\r\n\r\nI should try seeing what happens with WAL mode enabled.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111432375", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111432375, "node_id": "IC_kwDOBm6k_c5CPxy3", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-27T20:07:57Z", "updated_at": "2022-04-27T20:07:57Z", "author_association": "OWNER", "body": "Also useful: https://avi.im/blag/2021/fast-sqlite-inserts/ - from a tip on Twitter: https://twitter.com/ricardoanderegg/status/1519402047556235264", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111431785", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111431785, "node_id": "IC_kwDOBm6k_c5CPxpp", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-27T20:07:16Z", "updated_at": "2022-04-27T20:07:16Z", "author_association": "OWNER", "body": "I think I need some much more in-depth tracing tricks for this.\r\n\r\nhttps://www.maartenbreddels.com/perf/jupyter/python/tracing/gil/2021/01/14/Tracing-the-Python-GIL.html looks relevant - uses the `perf` tool on Linux.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111408273", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111408273, "node_id": "IC_kwDOBm6k_c5CPr6R", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-27T19:40:51Z", "updated_at": "2022-04-27T19:42:17Z", "author_association": "OWNER", "body": "Relevant: here's the code that sets up a Datasette SQLite connection: https://github.com/simonw/datasette/blob/7a6654a253dee243518dc542ce4c06dbb0d0801d/datasette/database.py#L73-L96\r\n\r\nIt's using `check_same_thread=False` - here's [the Python docs on that](https://docs.python.org/3/library/sqlite3.html#sqlite3.connect):\r\n\r\n> By default, *check_same_thread* is [`True`](https://docs.python.org/3/library/constants.html#True \"True\") and only the creating thread may use the connection. If set [`False`](https://docs.python.org/3/library/constants.html#False \"False\"), the returned connection may be shared across multiple threads. 
When using multiple threads with the same connection writing operations should be serialized by the user to avoid data corruption.\r\n\r\nThis is why Datasette reserves a single connection for write queries and queues them up in memory, [as described here](https://simonwillison.net/2020/Feb/26/weeknotes-datasette-writes/).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111390433", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111390433, "node_id": "IC_kwDOBm6k_c5CPnjh", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-27T19:21:02Z", "updated_at": "2022-04-27T19:21:02Z", "author_association": "OWNER", "body": "One weird thing: I noticed that in the parallel trace above the SQL query bars are wider. Mousover shows duration in ms, and I got 13ms for this query:\r\n\r\n select message as value, count(*) as n from (\r\n\r\nBut in the `?_noparallel=1` version that some query took 2.97ms.\r\n\r\nGiven those numbers though I would expect the overall page time to be MUCH worse for the parallel version - but the page load times are instead very close to each other, with parallel often winning.\r\n\r\nThis is super-weird.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111385875", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111385875, "node_id": "IC_kwDOBm6k_c5CPmcT", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-27T19:16:57Z", "updated_at": "2022-04-27T19:16:57Z", "author_association": "OWNER", "body": "I just remembered the `--setting num_sql_threads` option... which defaults to 3! 
https://github.com/simonw/datasette/blob/942411ef946e9a34a2094944d3423cddad27efd3/datasette/app.py#L109-L113\r\n\r\nWould explain why the first trace never seems to show more than three SQL queries executing at once.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1727#issuecomment-1111380282", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1727", "id": 1111380282, "node_id": "IC_kwDOBm6k_c5CPlE6", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-27T19:10:27Z", "updated_at": "2022-04-27T19:10:27Z", "author_association": "OWNER", "body": "Wrote more about that here: https://simonwillison.net/2022/Apr/27/parallel-queries/\r\n\r\nCompare https://latest-with-plugins.datasette.io/github/commits?_facet=repo&_facet=committer&_trace=1\r\n\r\n![image](https://user-images.githubusercontent.com/9599/165601503-2083c5d2-d740-405c-b34d-85570744ca82.png)\r\n\r\nWith the same thing but with parallel execution disabled:\r\n\r\nhttps://latest-with-plugins.datasette.io/github/commits?_facet=repo&_facet=committer&_trace=1&_noparallel=1\r\n\r\n![image](https://user-images.githubusercontent.com/9599/165601525-98abbfb1-5631-4040-b6bd-700948d1db6e.png)\r\n\r\nThose total page load time numbers are very similar. Is this parallel optimization worthwhile?\r\n\r\nMaybe it's only worth it on larger databases? Or maybe larger databases perform worse with this?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1217759117, "label": "Research: demonstrate if parallel SQL queries are worthwhile"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1724#issuecomment-1110585475", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1724", "id": 1110585475, "node_id": "IC_kwDOBm6k_c5CMjCD", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-27T06:15:14Z", "updated_at": "2022-04-27T06:15:14Z", "author_association": "OWNER", "body": "Yeah, that page is 438K (but only 20K gzipped).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1216619276, "label": "?_trace=1 doesn't work on Global Power Plants demo"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1724#issuecomment-1110370095", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1724", "id": 1110370095, "node_id": "IC_kwDOBm6k_c5CLucv", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-27T00:18:30Z", "updated_at": "2022-04-27T00:18:30Z", "author_association": "OWNER", "body": "So this isn't a bug here, it's working as intended.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1216619276, "label": "?_trace=1 doesn't work on Global Power Plants demo"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1724#issuecomment-1110369004", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1724", "id": 
1110369004, "node_id": "IC_kwDOBm6k_c5CLuLs", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-27T00:16:35Z", "updated_at": "2022-04-27T00:17:04Z", "author_association": "OWNER", "body": "I bet this is because it's exceeding the size limit: https://github.com/simonw/datasette/blob/da53e0360da4771ffb56a8e3eb3f7476f3168299/datasette/tracer.py#L80-L88\r\n\r\nhttps://github.com/simonw/datasette/blob/da53e0360da4771ffb56a8e3eb3f7476f3168299/datasette/tracer.py#L102-L113", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1216619276, "label": "?_trace=1 doesn't work on Global Power Plants demo"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1723#issuecomment-1110330554", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1723", "id": 1110330554, "node_id": "IC_kwDOBm6k_c5CLky6", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T23:06:20Z", "updated_at": "2022-04-26T23:06:20Z", "author_association": "OWNER", "body": "Deployed here: https://latest-with-plugins.datasette.io/github/commits?_facet=repo&_trace=1&_facet=committer", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1216508080, "label": "Research running SQL in table view in parallel using `asyncio.gather()`"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1723#issuecomment-1110305790", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1723", "id": 1110305790, "node_id": "IC_kwDOBm6k_c5CLev-", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T22:19:04Z", "updated_at": "2022-04-26T22:19:04Z", "author_association": "OWNER", "body": "I realized that seeing the total time in queries wasn't enough to understand this, because if the queries were executed in serial or parallel it should still sum up to the same amount of SQL time (roughly).\r\n\r\nInstead I need to know how long the page took to render. 
But that's hard to display on the page since you can't measure it until rendering has finished!\r\n\r\nSo I built an ASGI plugin to handle that measurement: https://github.com/simonw/datasette-total-page-time\r\n\r\nAnd with that plugin installed, `http://127.0.0.1:8001/global-power-plants/global-power-plants?_facet=primary_fuel&_facet=other_fuel2&_facet=other_fuel1&_parallel=1` (the parallel version) takes 377ms:\r\n\r\n\"CleanShot\r\n\r\nWhile `http://127.0.0.1:8001/global-power-plants/global-power-plants?_facet=primary_fuel&_facet=other_fuel2&_facet=other_fuel1` (the serial version) takes 762ms:\r\n\r\n\"image\"\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1216508080, "label": "Research running SQL in table view in parallel using `asyncio.gather()`"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1723#issuecomment-1110279869", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1723", "id": 1110279869, "node_id": "IC_kwDOBm6k_c5CLYa9", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T21:45:39Z", "updated_at": "2022-04-26T21:45:39Z", "author_association": "OWNER", "body": "Getting some nice traces out of this:\r\n\r\n\"CleanShot\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1216508080, "label": "Research running SQL in table view in parallel using `asyncio.gather()`"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1723#issuecomment-1110278577", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1723", "id": 1110278577, "node_id": "IC_kwDOBm6k_c5CLYGx", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T21:44:04Z", "updated_at": "2022-04-26T21:44:04Z", "author_association": "OWNER", "body": "And some simple benchmarks with `ab` - using the `?_parallel=1` hack to try it with and without a parallel `asyncio.gather()`:\r\n\r\n```\r\n~ % ab -n 100 'http://127.0.0.1:8001/global-power-plants/global-power-plants?_facet=primary_fuel&_facet=other_fuel1&_facet=other_fuel3&_facet=other_fuel2' \r\nThis is ApacheBench, Version 2.3 <$Revision: 1879490 $>\r\nCopyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/\r\nLicensed to The Apache Software Foundation, http://www.apache.org/\r\n\r\nBenchmarking 127.0.0.1 (be patient).....done\r\n\r\n\r\nServer Software: uvicorn\r\nServer Hostname: 127.0.0.1\r\nServer Port: 8001\r\n\r\nDocument Path: /global-power-plants/global-power-plants?_facet=primary_fuel&_facet=other_fuel1&_facet=other_fuel3&_facet=other_fuel2\r\nDocument Length: 314187 bytes\r\n\r\nConcurrency Level: 1\r\nTime taken for tests: 68.279 seconds\r\nComplete requests: 100\r\nFailed requests: 13\r\n (Connect: 0, Receive: 0, Length: 13, Exceptions: 0)\r\nTotal transferred: 31454937 bytes\r\nHTML transferred: 31418437 bytes\r\nRequests per second: 1.46 [#/sec] (mean)\r\nTime per request: 682.787 [ms] (mean)\r\nTime per request: 682.787 [ms] (mean, across all concurrent requests)\r\nTransfer rate: 449.89 [Kbytes/sec] received\r\n\r\nConnection Times (ms)\r\n min mean[+/-sd] median max\r\nConnect: 0 0 0.0 0 0\r\nProcessing: 621 683 68.0 658 993\r\nWaiting: 620 682 68.0 657 992\r\nTotal: 621 683 68.0 658 993\r\n\r\nPercentage of the requests served within a certain time 
(ms)\r\n 50% 658\r\n 66% 678\r\n 75% 687\r\n 80% 711\r\n 90% 763\r\n 95% 879\r\n 98% 926\r\n 99% 993\r\n 100% 993 (longest request)\r\n\r\n\r\n----\r\n\r\nIn parallel:\r\n\r\n~ % ab -n 100 'http://127.0.0.1:8001/global-power-plants/global-power-plants?_facet=primary_fuel&_facet=other_fuel1&_facet=other_fuel3&_facet=other_fuel2&_parallel=1'\r\nThis is ApacheBench, Version 2.3 <$Revision: 1879490 $>\r\nCopyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/\r\nLicensed to The Apache Software Foundation, http://www.apache.org/\r\n\r\nBenchmarking 127.0.0.1 (be patient).....done\r\n\r\n\r\nServer Software: uvicorn\r\nServer Hostname: 127.0.0.1\r\nServer Port: 8001\r\n\r\nDocument Path: /global-power-plants/global-power-plants?_facet=primary_fuel&_facet=other_fuel1&_facet=other_fuel3&_facet=other_fuel2&_parallel=1\r\nDocument Length: 315703 bytes\r\n\r\nConcurrency Level: 1\r\nTime taken for tests: 34.763 seconds\r\nComplete requests: 100\r\nFailed requests: 11\r\n (Connect: 0, Receive: 0, Length: 11, Exceptions: 0)\r\nTotal transferred: 31607988 bytes\r\nHTML transferred: 31570288 bytes\r\nRequests per second: 2.88 [#/sec] (mean)\r\nTime per request: 347.632 [ms] (mean)\r\nTime per request: 347.632 [ms] (mean, across all concurrent requests)\r\nTransfer rate: 887.93 [Kbytes/sec] received\r\n\r\nConnection Times (ms)\r\n min mean[+/-sd] median max\r\nConnect: 0 0 0.0 0 0\r\nProcessing: 311 347 28.0 338 450\r\nWaiting: 311 347 28.0 338 450\r\nTotal: 312 348 28.0 338 451\r\n\r\nPercentage of the requests served within a certain time (ms)\r\n 50% 338\r\n 66% 348\r\n 75% 361\r\n 80% 367\r\n 90% 396\r\n 95% 408\r\n 98% 436\r\n 99% 451\r\n 100% 451 (longest request)\r\n\r\n----\r\n\r\nWith concurrency 10, not parallel:\r\n\r\n~ % ab -c 10 -n 100 'http://127.0.0.1:8001/global-power-plants/global-power-plants?_facet=primary_fuel&_facet=other_fuel1&_facet=other_fuel3&_facet=other_fuel2&_parallel=' \r\nThis is ApacheBench, Version 2.3 <$Revision: 1879490 $>\r\nCopyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/\r\nLicensed to The Apache Software Foundation, http://www.apache.org/\r\n\r\nBenchmarking 127.0.0.1 (be patient).....done\r\n\r\n\r\nServer Software: uvicorn\r\nServer Hostname: 127.0.0.1\r\nServer Port: 8001\r\n\r\nDocument Path: /global-power-plants/global-power-plants?_facet=primary_fuel&_facet=other_fuel1&_facet=other_fuel3&_facet=other_fuel2&_parallel=\r\nDocument Length: 314346 bytes\r\n\r\nConcurrency Level: 10\r\nTime taken for tests: 38.408 seconds\r\nComplete requests: 100\r\nFailed requests: 93\r\n (Connect: 0, Receive: 0, Length: 93, Exceptions: 0)\r\nTotal transferred: 31471333 bytes\r\nHTML transferred: 31433733 bytes\r\nRequests per second: 2.60 [#/sec] (mean)\r\nTime per request: 3840.829 [ms] (mean)\r\nTime per request: 384.083 [ms] (mean, across all concurrent requests)\r\nTransfer rate: 800.18 [Kbytes/sec] received\r\n\r\nConnection Times (ms)\r\n min mean[+/-sd] median max\r\nConnect: 0 0 0.1 0 1\r\nProcessing: 685 3719 354.0 3774 4096\r\nWaiting: 684 3707 353.7 3750 4095\r\nTotal: 685 3719 354.0 3774 4096\r\n\r\nPercentage of the requests served within a certain time (ms)\r\n 50% 3774\r\n 66% 3832\r\n 75% 3855\r\n 80% 3878\r\n 90% 3944\r\n 95% 4006\r\n 98% 4057\r\n 99% 4096\r\n 100% 4096 (longest request)\r\n\r\n\r\n----\r\n\r\nConcurrency 10 parallel:\r\n\r\n~ % ab -c 10 -n 100 'http://127.0.0.1:8001/global-power-plants/global-power-plants?_facet=primary_fuel&_facet=other_fuel1&_facet=other_fuel3&_facet=other_fuel2&_parallel=1'\r\nThis is 
ApacheBench, Version 2.3 <$Revision: 1879490 $>\r\nCopyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/\r\nLicensed to The Apache Software Foundation, http://www.apache.org/\r\n\r\nBenchmarking 127.0.0.1 (be patient).....done\r\n\r\n\r\nServer Software: uvicorn\r\nServer Hostname: 127.0.0.1\r\nServer Port: 8001\r\n\r\nDocument Path: /global-power-plants/global-power-plants?_facet=primary_fuel&_facet=other_fuel1&_facet=other_fuel3&_facet=other_fuel2&_parallel=1\r\nDocument Length: 315703 bytes\r\n\r\nConcurrency Level: 10\r\nTime taken for tests: 36.762 seconds\r\nComplete requests: 100\r\nFailed requests: 89\r\n (Connect: 0, Receive: 0, Length: 89, Exceptions: 0)\r\nTotal transferred: 31606516 bytes\r\nHTML transferred: 31568816 bytes\r\nRequests per second: 2.72 [#/sec] (mean)\r\nTime per request: 3676.182 [ms] (mean)\r\nTime per request: 367.618 [ms] (mean, across all concurrent requests)\r\nTransfer rate: 839.61 [Kbytes/sec] received\r\n\r\nConnection Times (ms)\r\n min mean[+/-sd] median max\r\nConnect: 0 0 0.1 0 0\r\nProcessing: 381 3602 419.6 3609 4458\r\nWaiting: 381 3586 418.7 3607 4457\r\nTotal: 381 3603 419.6 3609 4458\r\n\r\nPercentage of the requests served within a certain time (ms)\r\n 50% 3609\r\n 66% 3741\r\n 75% 3791\r\n 80% 3821\r\n 90% 3972\r\n 95% 4074\r\n 98% 4386\r\n 99% 4458\r\n 100% 4458 (longest request)\r\n\r\n\r\nTrying -c 3 instead. Non parallel:\r\n\r\n~ % ab -c 3 -n 100 'http://127.0.0.1:8001/global-power-plants/global-power-plants?_facet=primary_fuel&_facet=other_fuel1&_facet=other_fuel3&_facet=other_fuel2&_parallel='\r\nThis is ApacheBench, Version 2.3 <$Revision: 1879490 $>\r\nCopyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/\r\nLicensed to The Apache Software Foundation, http://www.apache.org/\r\n\r\nBenchmarking 127.0.0.1 (be patient).....done\r\n\r\n\r\nServer Software: uvicorn\r\nServer Hostname: 127.0.0.1\r\nServer Port: 8001\r\n\r\nDocument Path: /global-power-plants/global-power-plants?_facet=primary_fuel&_facet=other_fuel1&_facet=other_fuel3&_facet=other_fuel2&_parallel=\r\nDocument Length: 314346 bytes\r\n\r\nConcurrency Level: 3\r\nTime taken for tests: 39.365 seconds\r\nComplete requests: 100\r\nFailed requests: 83\r\n (Connect: 0, Receive: 0, Length: 83, Exceptions: 0)\r\nTotal transferred: 31470808 bytes\r\nHTML transferred: 31433208 bytes\r\nRequests per second: 2.54 [#/sec] (mean)\r\nTime per request: 1180.955 [ms] (mean)\r\nTime per request: 393.652 [ms] (mean, across all concurrent requests)\r\nTransfer rate: 780.72 [Kbytes/sec] received\r\n\r\nConnection Times (ms)\r\n min mean[+/-sd] median max\r\nConnect: 0 0 0.0 0 0\r\nProcessing: 731 1153 126.2 1189 1359\r\nWaiting: 730 1151 125.9 1188 1358\r\nTotal: 731 1153 126.2 1189 1359\r\n\r\nPercentage of the requests served within a certain time (ms)\r\n 50% 1189\r\n 66% 1221\r\n 75% 1234\r\n 80% 1247\r\n 90% 1296\r\n 95% 1309\r\n 98% 1343\r\n 99% 1359\r\n 100% 1359 (longest request)\r\n\r\n----\r\n\r\nParallel:\r\n\r\n~ % ab -c 3 -n 100 'http://127.0.0.1:8001/global-power-plants/global-power-plants?_facet=primary_fuel&_facet=other_fuel1&_facet=other_fuel3&_facet=other_fuel2&_parallel=1'\r\nThis is ApacheBench, Version 2.3 <$Revision: 1879490 $>\r\nCopyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/\r\nLicensed to The Apache Software Foundation, http://www.apache.org/\r\n\r\nBenchmarking 127.0.0.1 (be patient).....done\r\n\r\n\r\nServer Software: uvicorn\r\nServer Hostname: 127.0.0.1\r\nServer Port: 8001\r\n\r\nDocument Path: 
/global-power-plants/global-power-plants?_facet=primary_fuel&_facet=other_fuel1&_facet=other_fuel3&_facet=other_fuel2&_parallel=1\r\nDocument Length: 315703 bytes\r\n\r\nConcurrency Level: 3\r\nTime taken for tests: 34.530 seconds\r\nComplete requests: 100\r\nFailed requests: 18\r\n (Connect: 0, Receive: 0, Length: 18, Exceptions: 0)\r\nTotal transferred: 31606179 bytes\r\nHTML transferred: 31568479 bytes\r\nRequests per second: 2.90 [#/sec] (mean)\r\nTime per request: 1035.902 [ms] (mean)\r\nTime per request: 345.301 [ms] (mean, across all concurrent requests)\r\nTransfer rate: 893.87 [Kbytes/sec] received\r\n\r\nConnection Times (ms)\r\n min mean[+/-sd] median max\r\nConnect: 0 0 0.0 0 0\r\nProcessing: 412 1020 104.4 1018 1280\r\nWaiting: 411 1018 104.1 1014 1275\r\nTotal: 412 1021 104.4 1018 1280\r\n\r\nPercentage of the requests served within a certain time (ms)\r\n 50% 1018\r\n 66% 1041\r\n 75% 1061\r\n 80% 1079\r\n 90% 1136\r\n 95% 1176\r\n 98% 1251\r\n 99% 1280\r\n 100% 1280 (longest request)\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1216508080, "label": "Research running SQL in table view in parallel using `asyncio.gather()`"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1723#issuecomment-1110278182", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1723", "id": 1110278182, "node_id": "IC_kwDOBm6k_c5CLYAm", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T21:43:34Z", "updated_at": "2022-04-26T21:43:34Z", "author_association": "OWNER", "body": "Here's the diff I'm using:\r\n```diff\r\ndiff --git a/datasette/views/table.py b/datasette/views/table.py\r\nindex d66adb8..f15ef1e 100644\r\n--- a/datasette/views/table.py\r\n+++ b/datasette/views/table.py\r\n@@ -1,3 +1,4 @@\r\n+import asyncio\r\n import itertools\r\n import json\r\n \r\n@@ -5,6 +6,7 @@ import markupsafe\r\n \r\n from datasette.plugins import pm\r\n from datasette.database import QueryInterrupted\r\n+from datasette import tracer\r\n from datasette.utils import (\r\n await_me_maybe,\r\n CustomRow,\r\n@@ -150,6 +152,16 @@ class TableView(DataView):\r\n default_labels=False,\r\n _next=None,\r\n _size=None,\r\n+ ):\r\n+ with tracer.trace_child_tasks():\r\n+ return await self._data_traced(request, default_labels, _next, _size)\r\n+\r\n+ async def _data_traced(\r\n+ self,\r\n+ request,\r\n+ default_labels=False,\r\n+ _next=None,\r\n+ _size=None,\r\n ):\r\n database_route = tilde_decode(request.url_vars[\"database\"])\r\n table_name = tilde_decode(request.url_vars[\"table\"])\r\n@@ -159,6 +171,20 @@ class TableView(DataView):\r\n raise NotFound(\"Database not found: {}\".format(database_route))\r\n database_name = db.name\r\n \r\n+ # For performance profiling purposes, ?_parallel=1 turns on asyncio.gather\r\n+ async def _gather_parallel(*args):\r\n+ return await asyncio.gather(*args)\r\n+\r\n+ async def _gather_sequential(*args):\r\n+ results = []\r\n+ for fn in args:\r\n+ results.append(await fn)\r\n+ return results\r\n+\r\n+ gather = (\r\n+ _gather_parallel if request.args.get(\"_parallel\") else _gather_sequential\r\n+ )\r\n+\r\n # If this is a canned query, not a table, then dispatch to QueryView instead\r\n canned_query = await self.ds.get_canned_query(\r\n database_name, table_name, request.actor\r\n@@ -174,8 +200,12 @@ class TableView(DataView):\r\n write=bool(canned_query.get(\"write\")),\r\n )\r\n \r\n- 
is_view = bool(await db.get_view_definition(table_name))\r\n- table_exists = bool(await db.table_exists(table_name))\r\n+ is_view, table_exists = map(\r\n+ bool,\r\n+ await gather(\r\n+ db.get_view_definition(table_name), db.table_exists(table_name)\r\n+ ),\r\n+ )\r\n \r\n # If table or view not found, return 404\r\n if not is_view and not table_exists:\r\n@@ -497,33 +527,44 @@ class TableView(DataView):\r\n )\r\n )\r\n \r\n- if not nofacet:\r\n- for facet in facet_instances:\r\n- (\r\n+ async def execute_facets():\r\n+ if not nofacet:\r\n+ # Run them in parallel\r\n+ facet_awaitables = [facet.facet_results() for facet in facet_instances]\r\n+ facet_awaitable_results = await gather(*facet_awaitables)\r\n+ for (\r\n instance_facet_results,\r\n instance_facets_timed_out,\r\n- ) = await facet.facet_results()\r\n- for facet_info in instance_facet_results:\r\n- base_key = facet_info[\"name\"]\r\n- key = base_key\r\n- i = 1\r\n- while key in facet_results:\r\n- i += 1\r\n- key = f\"{base_key}_{i}\"\r\n- facet_results[key] = facet_info\r\n- facets_timed_out.extend(instance_facets_timed_out)\r\n-\r\n- # Calculate suggested facets\r\n+ ) in facet_awaitable_results:\r\n+ for facet_info in instance_facet_results:\r\n+ base_key = facet_info[\"name\"]\r\n+ key = base_key\r\n+ i = 1\r\n+ while key in facet_results:\r\n+ i += 1\r\n+ key = f\"{base_key}_{i}\"\r\n+ facet_results[key] = facet_info\r\n+ facets_timed_out.extend(instance_facets_timed_out)\r\n+\r\n suggested_facets = []\r\n- if (\r\n- self.ds.setting(\"suggest_facets\")\r\n- and self.ds.setting(\"allow_facet\")\r\n- and not _next\r\n- and not nofacet\r\n- and not nosuggest\r\n- ):\r\n- for facet in facet_instances:\r\n- suggested_facets.extend(await facet.suggest())\r\n+\r\n+ async def execute_suggested_facets():\r\n+ # Calculate suggested facets\r\n+ if (\r\n+ self.ds.setting(\"suggest_facets\")\r\n+ and self.ds.setting(\"allow_facet\")\r\n+ and not _next\r\n+ and not nofacet\r\n+ and not nosuggest\r\n+ ):\r\n+ # Run them in parallel\r\n+ facet_suggest_awaitables = [\r\n+ facet.suggest() for facet in facet_instances\r\n+ ]\r\n+ for suggest_result in await gather(*facet_suggest_awaitables):\r\n+ suggested_facets.extend(suggest_result)\r\n+\r\n+ await gather(execute_facets(), execute_suggested_facets())\r\n \r\n # Figure out columns and rows for the query\r\n columns = [r[0] for r in results.description]\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1216508080, "label": "Research running SQL in table view in parallel using `asyncio.gather()`"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1715#issuecomment-1110265087", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1715", "id": 1110265087, "node_id": "IC_kwDOBm6k_c5CLUz_", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T21:26:17Z", "updated_at": "2022-04-26T21:26:17Z", "author_association": "OWNER", "body": "Running facets and facet suggestions in parallel using `asyncio.gather()` turns out to be a lot less hassle than I had thought - maybe I don't need `asyncinject` for this at all?\r\n\r\n```diff\r\n if not nofacet:\r\n- for facet in facet_instances:\r\n- (\r\n- instance_facet_results,\r\n- instance_facets_timed_out,\r\n- ) = await facet.facet_results()\r\n+ # Run them in parallel\r\n+ facet_awaitables = [facet.facet_results() for facet in facet_instances]\r\n+ 
facet_awaitable_results = await asyncio.gather(*facet_awaitables)\r\n+ for (\r\n+ instance_facet_results,\r\n+ instance_facets_timed_out,\r\n+ ) in facet_awaitable_results:\r\n for facet_info in instance_facet_results:\r\n base_key = facet_info[\"name\"]\r\n key = base_key\r\n@@ -522,8 +540,10 @@ class TableView(DataView):\r\n and not nofacet\r\n and not nosuggest\r\n ):\r\n- for facet in facet_instances:\r\n- suggested_facets.extend(await facet.suggest())\r\n+ # Run them in parallel\r\n+ facet_suggest_awaitables = [facet.suggest() for facet in facet_instances]\r\n+ for suggest_result in await asyncio.gather(*facet_suggest_awaitables):\r\n+ suggested_facets.extend(suggest_result)\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1212823665, "label": "Refactor TableView to use asyncinject"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1715#issuecomment-1110246593", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1715", "id": 1110246593, "node_id": "IC_kwDOBm6k_c5CLQTB", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T21:03:56Z", "updated_at": "2022-04-26T21:03:56Z", "author_association": "OWNER", "body": "Well this is fun... I applied this change:\r\n\r\n```diff\r\ndiff --git a/datasette/views/table.py b/datasette/views/table.py\r\nindex d66adb8..85f9e44 100644\r\n--- a/datasette/views/table.py\r\n+++ b/datasette/views/table.py\r\n@@ -1,3 +1,4 @@\r\n+import asyncio\r\n import itertools\r\n import json\r\n \r\n@@ -5,6 +6,7 @@ import markupsafe\r\n \r\n from datasette.plugins import pm\r\n from datasette.database import QueryInterrupted\r\n+from datasette import tracer\r\n from datasette.utils import (\r\n await_me_maybe,\r\n CustomRow,\r\n@@ -174,8 +176,11 @@ class TableView(DataView):\r\n write=bool(canned_query.get(\"write\")),\r\n )\r\n \r\n- is_view = bool(await db.get_view_definition(table_name))\r\n- table_exists = bool(await db.table_exists(table_name))\r\n+ with tracer.trace_child_tasks():\r\n+ is_view, table_exists = map(bool, await asyncio.gather(\r\n+ db.get_view_definition(table_name),\r\n+ db.table_exists(table_name)\r\n+ ))\r\n \r\n # If table or view not found, return 404\r\n if not is_view and not table_exists:\r\n```\r\nAnd now using https://datasette.io/plugins/datasette-pretty-traces I get this:\r\n\r\n![CleanShot 2022-04-26 at 14 03 33@2x](https://user-images.githubusercontent.com/9599/165392009-84c4399d-3e94-46d4-ba7b-a64a116cac5c.png)\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1212823665, "label": "Refactor TableView to use asyncinject"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1715#issuecomment-1110239536", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1715", "id": 1110239536, "node_id": "IC_kwDOBm6k_c5CLOkw", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T20:54:53Z", "updated_at": "2022-04-26T20:54:53Z", "author_association": "OWNER", "body": "`pytest tests/test_table_*` runs the tests quickly.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1212823665, "label": "Refactor TableView to use asyncinject"}, 
"performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1715#issuecomment-1110238896", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1715", "id": 1110238896, "node_id": "IC_kwDOBm6k_c5CLOaw", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T20:53:59Z", "updated_at": "2022-04-26T20:53:59Z", "author_association": "OWNER", "body": "I'm going to rename `database` to `database_name` and `table` to `table_name` to avoid confusion with the `Database` object as opposed to the string name for the database.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1212823665, "label": "Refactor TableView to use asyncinject"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1715#issuecomment-1110229319", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1715", "id": 1110229319, "node_id": "IC_kwDOBm6k_c5CLMFH", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T20:41:32Z", "updated_at": "2022-04-26T20:44:38Z", "author_association": "OWNER", "body": "This time I'm not going to bother with the `filter_args` thing - I'm going to just try to use `asyncinject` to execute some big high level things in parallel - facets, suggested facets, counts, the query - and then combine it with the `extras` mechanism I'm trying to introduce too.\r\n\r\nMost importantly: I want that `extra_template()` function that adds more template context for the HTML to be executed as part of an `asyncinject` flow!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1212823665, "label": "Refactor TableView to use asyncinject"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1715#issuecomment-1110219185", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1715", "id": 1110219185, "node_id": "IC_kwDOBm6k_c5CLJmx", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T20:28:40Z", "updated_at": "2022-04-26T20:56:48Z", "author_association": "OWNER", "body": "The refactor I did in #1719 pretty much clashes with all of the changes in https://github.com/simonw/datasette/commit/5053f1ea83194ecb0a5693ad5dada5b25bf0f7e6 so I'll probably need to start my `api-extras` branch again from scratch.\r\n\r\nUsing a new `tableview-asyncinject` branch.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1212823665, "label": "Refactor TableView to use asyncinject"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1720#issuecomment-1110212021", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1720", "id": 1110212021, "node_id": "IC_kwDOBm6k_c5CLH21", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T20:20:27Z", "updated_at": "2022-04-26T20:20:27Z", "author_association": "OWNER", "body": "Closing this because I have a good enough idea of the design for now - the details of the parameters can be figured out when I implement this.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1215174094, "label": 
"Design plugin hook for extras"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1720#issuecomment-1109309683", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1720", "id": 1109309683, "node_id": "IC_kwDOBm6k_c5CHrjz", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T04:12:39Z", "updated_at": "2022-04-26T04:12:39Z", "author_association": "OWNER", "body": "I think the rough shape of the three plugin hooks is right. The detailed decisions that are needed concern what the parameters should be, which I think will mainly happen as part of:\r\n\r\n- #1715", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1215174094, "label": "Design plugin hook for extras"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1720#issuecomment-1109306070", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1720", "id": 1109306070, "node_id": "IC_kwDOBm6k_c5CHqrW", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T04:05:20Z", "updated_at": "2022-04-26T04:05:20Z", "author_association": "OWNER", "body": "The proposed plugin for annotations - allowing users to attach comments to database tables, columns and rows - would be a great application for all three of those `?_extra=` plugin hooks.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1215174094, "label": "Design plugin hook for extras"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1720#issuecomment-1109305184", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1720", "id": 1109305184, "node_id": "IC_kwDOBm6k_c5CHqdg", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T04:03:35Z", "updated_at": "2022-04-26T04:03:35Z", "author_association": "OWNER", "body": "I bet there's all kinds of interesting potential extras that could be calculated by loading the results of the query into a Pandas DataFrame.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1215174094, "label": "Design plugin hook for extras"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1720#issuecomment-1109200774", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1720", "id": 1109200774, "node_id": "IC_kwDOBm6k_c5CHQ-G", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T01:25:43Z", "updated_at": "2022-04-26T01:26:15Z", "author_association": "OWNER", "body": "Had a thought: if a custom HTML template is going to make use of stuff generated using these extras, it will need a way to tell Datasette to execute those extras even in the absence of the `?_extra=...` URL parameters.\r\n\r\nIs that necessary? 
Or should those kinds of plugins use the existing `extra_template_vars` hook instead?\r\n\r\nOr maybe the `extra_template_vars` hook gets redesigned so it can depend on other `extras` in some way?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1215174094, "label": "Design plugin hook for extras"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1720#issuecomment-1109200335", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1720", "id": 1109200335, "node_id": "IC_kwDOBm6k_c5CHQ3P", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T01:24:47Z", "updated_at": "2022-04-26T01:24:47Z", "author_association": "OWNER", "body": "Sketching out a `?_extra=statistics` table plugin:\r\n\r\n```python\r\nfrom datasette import hookimpl\r\n\r\n@hookimpl\r\ndef register_table_extras(datasette):\r\n return [statistics]\r\n\r\nasync def statistics(datasette, query, columns, sql):\r\n # ... need to figure out which columns are integer/floats\r\n # then build and execute a SQL query that calculates sum/avg/etc for each column\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1215174094, "label": "Design plugin hook for extras"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/428#issuecomment-1109190401", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/428", "id": 1109190401, "node_id": "IC_kwDOCGYnMM5CHOcB", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T01:05:29Z", "updated_at": "2022-04-26T01:05:29Z", "author_association": "OWNER", "body": "Django makes extensive use of savepoints for nested transactions: https://docs.djangoproject.com/en/4.0/topics/db/transactions/#savepoints", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1215216249, "label": "Research adding support for savepoints"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1720#issuecomment-1109174715", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1720", "id": 1109174715, "node_id": "IC_kwDOBm6k_c5CHKm7", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T00:40:13Z", "updated_at": "2022-04-26T00:43:33Z", "author_association": "OWNER", "body": "Some of the things I'd like to use `?_extra=` for, that may or not make sense as plugins:\r\n\r\n- Performance breakdown information, maybe including explain output for a query/table\r\n- Information about the tables that were consulted in a query - imagine pulling in additional table metadata\r\n- Statistical aggregates against the full set of results. This may well be a Datasette core feature at some point in the future, but being able to provide it early as a plugin would be really cool.\r\n- For tables, what are the other tables they can join against?\r\n- Suggested facets\r\n- Facet results themselves\r\n- New custom facets I haven't thought of - though the `register_facet_classes` hook covers that already\r\n- Table schema\r\n- Table metadata\r\n- Analytics - how many times has this table been queried? 
Would be a plugin thing\r\n- For geospatial data, how about a GeoJSON polygon that represents the bounding box for all returned results? Effectively this is an extra aggregation.\r\n\r\nLooking at https://github-to-sqlite.dogsheep.net/github/commits.json?_labels=on&_shape=objects for inspiration.\r\n\r\nI think there's a separate potential mechanism in the future that lets you add custom columns to a table. This would affect `.csv` and the HTML presentation too, which makes it a different concept from the `?_extra=` hook that affects the JSON export (and the context that is fed to the HTML templates).", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1215174094, "label": "Design plugin hook for extras"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1720#issuecomment-1109171871", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1720", "id": 1109171871, "node_id": "IC_kwDOBm6k_c5CHJ6f", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T00:34:48Z", "updated_at": "2022-04-26T00:34:48Z", "author_association": "OWNER", "body": "Let's try sketching out a `register_table_extras` plugin for something new.\r\n\r\nThe first idea I came up with suggests adding new fields to the individual row records that come back - my mental model for extras so far has been that they add new keys to the root object.\r\n\r\nSo if a table result looked like this:\r\n\r\n```json\r\n{\r\n \"rows\": [\r\n {\"id\": 1, \"name\": \"Cleo\"},\r\n {\"id\": 2, \"name\": \"Suna\"}\r\n ],\r\n \"next_url\": null\r\n}\r\n```\r\nI was initially thinking that `?_extra=facets` would add a `\"facets\": {...}` key to that root object.\r\n\r\nHere's a plugin idea I came up with that would probably justify adding to the individual row objects instead:\r\n\r\n- `?_extra=check404s` - does an async `HEAD` request against every column value that looks like a URL and checks if it returns a 404\r\n\r\nThis could also work by adding a `\"check404s\": {\"url-here\": 200}` key to the root object though.\r\n\r\nI think I need some better plugin concepts before committing to this new hook. There's overlap between this and how I want the enrichments mechanism ([see here](https://simonwillison.net/2021/Jan/17/weeknotes-still-pretty-distracted/)) to work.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1215174094, "label": "Design plugin hook for extras"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1720#issuecomment-1109165411", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1720", "id": 1109165411, "node_id": "IC_kwDOBm6k_c5CHIVj", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T00:22:42Z", "updated_at": "2022-04-26T00:22:42Z", "author_association": "OWNER", "body": "Passing `pk_values` to the plugin hook feels odd. 
I think I'd pass a `row` object instead and let the code look up the primary key values on that row (by introspecting the primary keys for the table).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1215174094, "label": "Design plugin hook for extras"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1720#issuecomment-1109164803", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1720", "id": 1109164803, "node_id": "IC_kwDOBm6k_c5CHIMD", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T00:21:40Z", "updated_at": "2022-04-26T00:21:40Z", "author_association": "OWNER", "body": "What would the existing https://latest.datasette.io/fixtures/simple_primary_key/1.json?_extras=foreign_key_tables feature look like if it was re-imagined as a `register_row_extras()` plugin?\r\n\r\nRough sketch, copying most of the code from https://github.com/simonw/datasette/blob/579f59dcec43a91dd7d404e00b87a00afd8515f2/datasette/views/row.py#L98\r\n\r\n```python\r\nfrom datasette import hookimpl\r\n\r\n@hookimpl\r\ndef register_row_extras(datasette):\r\n return [foreign_key_tables]\r\n\r\nasync def foreign_key_tables(datasette, database, table, pk_values):\r\n if len(pk_values) != 1:\r\n return []\r\n db = datasette.get_database(database)\r\n all_foreign_keys = await db.get_all_foreign_keys()\r\n foreign_keys = all_foreign_keys[table][\"incoming\"]\r\n if len(foreign_keys) == 0:\r\n return []\r\n\r\n sql = \"select \" + \", \".join(\r\n [\r\n \"(select count(*) from {table} where {column}=:id)\".format(\r\n table=escape_sqlite(fk[\"other_table\"]),\r\n column=escape_sqlite(fk[\"other_column\"]),\r\n )\r\n for fk in foreign_keys\r\n ]\r\n )\r\n try:\r\n rows = list(await db.execute(sql, {\"id\": pk_values[0]}))\r\n except QueryInterrupted:\r\n # Almost certainly hit the timeout\r\n return []\r\n\r\n foreign_table_counts = dict(\r\n zip(\r\n [(fk[\"other_table\"], fk[\"other_column\"]) for fk in foreign_keys],\r\n list(rows[0]),\r\n )\r\n )\r\n foreign_key_tables = []\r\n for fk in foreign_keys:\r\n count = (\r\n foreign_table_counts.get((fk[\"other_table\"], fk[\"other_column\"])) or 0\r\n )\r\n key = fk[\"other_column\"]\r\n if key.startswith(\"_\"):\r\n key += \"__exact\"\r\n link = \"{}?{}={}\".format(\r\n self.ds.urls.table(database, fk[\"other_table\"]),\r\n key,\r\n \",\".join(pk_values),\r\n )\r\n foreign_key_tables.append({**fk, **{\"count\": count, \"link\": link}})\r\n return foreign_key_tables\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1215174094, "label": "Design plugin hook for extras"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1720#issuecomment-1109162123", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1720", "id": 1109162123, "node_id": "IC_kwDOBm6k_c5CHHiL", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T00:16:42Z", "updated_at": "2022-04-26T00:16:51Z", "author_association": "OWNER", "body": "Actually I'm going to imitate the existing `register_*` hooks:\r\n\r\n- `def register_output_renderer(datasette)`\r\n- `def register_facet_classes()`\r\n- `def register_routes(datasette)`\r\n- `def register_commands(cli)`\r\n- `def register_magic_parameters(datasette)`\r\n\r\nSo I'm going to 
call the new hooks:\r\n\r\n- `register_table_extras(datasette)`\r\n- `register_row_extras(datasette)`\r\n- `register_query_extras(datasette)`\r\n\r\nThey'll return a list of `async def` functions. The names of those functions will become the names of the extras.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1215174094, "label": "Design plugin hook for extras"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1720#issuecomment-1109160226", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1720", "id": 1109160226, "node_id": "IC_kwDOBm6k_c5CHHEi", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T00:14:11Z", "updated_at": "2022-04-26T00:14:11Z", "author_association": "OWNER", "body": "There are four existing plugin hooks that include the word \"extra\" but use it to mean something else - to mean additional CSS/JS/variables to be injected into the page:\r\n\r\n- `def extra_css_urls(...)`\r\n- `def extra_js_urls(...)`\r\n- `def extra_body_script(...)`\r\n- `def extra_template_vars(...)`\r\n\r\nI think `extra_*` and `*_extras` are different enough that they won't be confused with each other.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1215174094, "label": "Design plugin hook for extras"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1720#issuecomment-1109159307", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1720", "id": 1109159307, "node_id": "IC_kwDOBm6k_c5CHG2L", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T00:12:28Z", "updated_at": "2022-04-26T00:12:28Z", "author_association": "OWNER", "body": "I'm going to keep table and row separate. So I think I need to add three new plugin hooks:\r\n\r\n- `table_extras()`\r\n- `row_extras()`\r\n- `query_extras()`", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1215174094, "label": "Design plugin hook for extras"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1720#issuecomment-1109158903", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1720", "id": 1109158903, "node_id": "IC_kwDOBm6k_c5CHGv3", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-26T00:11:42Z", "updated_at": "2022-04-26T00:11:42Z", "author_association": "OWNER", "body": "Places this plugin hook (or hooks?) should be able to affect:\r\n\r\n- JSON for a table/view\r\n- JSON for a row\r\n- JSON for a canned query\r\n- JSON for a custom arbitrary query\r\n\r\nI'm going to combine those last two, which means there are three places. 
But maybe I can combine the table one and the row one as well?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1215174094, "label": "Design plugin hook for extras"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1719#issuecomment-1108907238", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1719", "id": 1108907238, "node_id": "IC_kwDOBm6k_c5CGJTm", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-25T18:34:21Z", "updated_at": "2022-04-25T18:34:21Z", "author_association": "OWNER", "body": "Well this refactor turned out to be pretty quick and really does greatly simplify both the `RowView` and `TableView` classes. Very happy with this.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1214859703, "label": "Refactor `RowView` and remove `RowTableShared`"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/262#issuecomment-1108890170", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/262", "id": 1108890170, "node_id": "IC_kwDOBm6k_c5CGFI6", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-25T18:17:09Z", "updated_at": "2022-04-25T18:18:39Z", "author_association": "OWNER", "body": "I spotted in https://github.com/simonw/datasette/issues/1719#issuecomment-1108888494 that there's actually already an undocumented implementation of `?_extras=foreign_key_tables` - https://latest.datasette.io/fixtures/simple_primary_key/1.json?_extras=foreign_key_tables\r\n\r\nI added that feature all the way back in November 2017! 
https://github.com/simonw/datasette/commit/a30c5b220c15360d575e94b0e67f3255e120b916", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 323658641, "label": "Add ?_extra= mechanism for requesting extra properties in JSON"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1719#issuecomment-1108888494", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1719", "id": 1108888494, "node_id": "IC_kwDOBm6k_c5CGEuu", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-25T18:15:42Z", "updated_at": "2022-04-25T18:15:42Z", "author_association": "OWNER", "body": "Here's an undocumented feature I forgot existed: https://latest.datasette.io/fixtures/simple_primary_key/1.json?_extras=foreign_key_tables\r\n\r\n`?_extras=foreign_key_tables`\r\n\r\nhttps://github.com/simonw/datasette/blob/0bc5186b7bb4fc82392df08f99a9132f84dcb331/datasette/views/table.py#L1021-L1024\r\n\r\nIt's even covered by the tests:\r\n\r\nhttps://github.com/simonw/datasette/blob/b9c2b1cfc8692b9700416db98721fa3ec982f6be/tests/test_api.py#L691-L703", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1214859703, "label": "Refactor `RowView` and remove `RowTableShared`"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1719#issuecomment-1108884171", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1719", "id": 1108884171, "node_id": "IC_kwDOBm6k_c5CGDrL", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-25T18:10:46Z", "updated_at": "2022-04-25T18:12:45Z", "author_association": "OWNER", "body": "It looks like the only class method from that shared class needed by `RowView` is `self.display_columns_and_rows()`.\r\n\r\nWhich I've been wanting to refactor to provide to `QueryView` too:\r\n\r\n- #715", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1214859703, "label": "Refactor `RowView` and remove `RowTableShared`"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1715#issuecomment-1108877454", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1715", "id": 1108877454, "node_id": "IC_kwDOBm6k_c5CGCCO", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-25T18:04:27Z", "updated_at": "2022-04-25T18:04:27Z", "author_association": "OWNER", "body": "Pushed my WIP on this to the `api-extras` branch: 5053f1ea83194ecb0a5693ad5dada5b25bf0f7e6", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1212823665, "label": "Refactor TableView to use asyncinject"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1715#issuecomment-1108875068", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1715", "id": 1108875068, "node_id": "IC_kwDOBm6k_c5CGBc8", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-25T18:03:13Z", "updated_at": "2022-04-25T18:06:33Z", "author_association": "OWNER", "body": "The `RowTableShared` class is making this a whole lot more complicated.\r\n\r\nI'm going to split the `RowView` view 
out into an entirely separate `views/row.py` module.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1212823665, "label": "Refactor TableView to use asyncinject"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1718#issuecomment-1107873311", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1718", "id": 1107873311, "node_id": "IC_kwDOBm6k_c5CCM4f", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-24T16:24:14Z", "updated_at": "2022-04-24T16:24:14Z", "author_association": "OWNER", "body": "Wrote up what I learned in a TIL: https://til.simonwillison.net/sphinx/blacken-docs", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1213683988, "label": "Code examples in the documentation should be formatted with Black"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1718#issuecomment-1107873271", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1718", "id": 1107873271, "node_id": "IC_kwDOBm6k_c5CCM33", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-24T16:23:57Z", "updated_at": "2022-04-24T16:23:57Z", "author_association": "OWNER", "body": "Turns out I didn't need that `git diff-index` trick after all - the `blacken-docs` command returns a non-zero exit code if it changes any files.\r\n\r\nSubmitted a documentation PR to that project instead:\r\n\r\n- https://github.com/asottile/blacken-docs/pull/162", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1213683988, "label": "Code examples in the documentation should be formatted with Black"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1718#issuecomment-1107870788", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1718", "id": 1107870788, "node_id": "IC_kwDOBm6k_c5CCMRE", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-24T16:09:23Z", "updated_at": "2022-04-24T16:09:23Z", "author_association": "OWNER", "body": "One more attempt at testing the `git diff-index` trick.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1213683988, "label": "Code examples in the documentation should be formatted with Black"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1718#issuecomment-1107869884", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1718", "id": 1107869884, "node_id": "IC_kwDOBm6k_c5CCMC8", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-24T16:04:03Z", "updated_at": "2022-04-24T16:04:03Z", "author_association": "OWNER", "body": "OK, I'm expecting this one to fail at the `git diff-index --quiet HEAD --` check.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1213683988, "label": "Code examples in the documentation should be formatted with Black"}, "performed_via_github_app": null} {"html_url": 
"https://github.com/simonw/datasette/issues/1718#issuecomment-1107869556", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1718", "id": 1107869556, "node_id": "IC_kwDOBm6k_c5CCL90", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-24T16:02:27Z", "updated_at": "2022-04-24T16:02:27Z", "author_association": "OWNER", "body": "Looking at that first error it appears to be a place where I had deliberately omitted the body of the function:\r\n\r\nhttps://github.com/simonw/datasette/blob/36573638b0948174ae237d62e6369b7d55220d7f/docs/internals.rst#L196-L211\r\n\r\nI can use `...` as the function body here to get it to pass.\r\n\r\nFixing those warnings actually helped me spot a couple of bugs, so I'm glad this happened.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1213683988, "label": "Code examples in the documentation should be formatted with Black"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1718#issuecomment-1107868585", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1718", "id": 1107868585, "node_id": "IC_kwDOBm6k_c5CCLup", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-24T15:57:10Z", "updated_at": "2022-04-24T15:57:19Z", "author_association": "OWNER", "body": "The tests failed there because of what I thought were warnings but turn out to be treated as errors:\r\n```\r\n% blacken-docs -l 60 docs/*.rst \r\ndocs/internals.rst:196: code block parse error Cannot parse: 14:0: \r\ndocs/json_api.rst:449: code block parse error Cannot parse: 1:0: \r\ndocs/testing_plugins.rst:135: code block parse error Cannot parse: 5:0: \r\n% echo $?\r\n1\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1213683988, "label": "Code examples in the documentation should be formatted with Black"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1718#issuecomment-1107867281", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1718", "id": 1107867281, "node_id": "IC_kwDOBm6k_c5CCLaR", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-24T15:49:23Z", "updated_at": "2022-04-24T15:49:23Z", "author_association": "OWNER", "body": "I'm going to push the first commit with a deliberate missing formatting to check that the tests fail.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1213683988, "label": "Code examples in the documentation should be formatted with Black"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1718#issuecomment-1107866013", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1718", "id": 1107866013, "node_id": "IC_kwDOBm6k_c5CCLGd", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-24T15:42:07Z", "updated_at": "2022-04-24T15:42:07Z", "author_association": "OWNER", "body": "In the absence of `--check` I can use this to detect if changes are applied:\r\n```zsh\r\n% git diff-index --quiet HEAD --\r\n% echo $? \r\n0\r\n% blacken-docs -l 60 docs/*.rst\r\ndocs/authentication.rst: Rewriting...\r\n...\r\n% git diff-index --quiet HEAD --\r\n% echo $? 
\r\n1\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1213683988, "label": "Code examples in the documentation should be formatted with Black"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1718#issuecomment-1107865493", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1718", "id": 1107865493, "node_id": "IC_kwDOBm6k_c5CCK-V", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-24T15:39:02Z", "updated_at": "2022-04-24T15:39:02Z", "author_association": "OWNER", "body": "There's no `blacken-docs --check` option so I filed a feature request:\r\n- https://github.com/asottile/blacken-docs/issues/161", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1213683988, "label": "Code examples in the documentation should be formatted with Black"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1718#issuecomment-1107863924", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1718", "id": 1107863924, "node_id": "IC_kwDOBm6k_c5CCKl0", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-24T15:30:03Z", "updated_at": "2022-04-24T15:30:03Z", "author_association": "OWNER", "body": "On the one hand, I'm not crazy about some of the indentation decisions Black made here - in particular this one, which I had indented deliberately for readability:\r\n```diff\r\n diff --git a/docs/authentication.rst b/docs/authentication.rst\r\nindex 0d98cf8..8008023 100644\r\n--- a/docs/authentication.rst\r\n+++ b/docs/authentication.rst\r\n@@ -381,11 +381,7 @@ Authentication plugins can set signed ``ds_actor`` cookies themselves like so:\r\n .. code-block:: python\r\n \r\n response = Response.redirect(\"/\")\r\n- response.set_cookie(\"ds_actor\", datasette.sign({\r\n- \"a\": {\r\n- \"id\": \"cleopaws\"\r\n- }\r\n- }, \"actor\"))\r\n+ response.set_cookie(\"ds_actor\", datasette.sign({\"a\": {\"id\": \"cleopaws\"}}, \"actor\"))\r\n```\r\nBut... consistency is a virtue. Maybe I'm OK with just this one disagreement?\r\n\r\nAlso: I've been mentally trying to keep the line lengths a bit shorter to help them be more readable on mobile devices.\r\n\r\nI'll try a different line length using `blacken-docs -l 60 docs/*.rst` instead.\r\n\r\nI like this more - here's the result for that example:\r\n```diff\r\ndiff --git a/docs/authentication.rst b/docs/authentication.rst\r\nindex 0d98cf8..2496073 100644\r\n--- a/docs/authentication.rst\r\n+++ b/docs/authentication.rst\r\n@@ -381,11 +381,10 @@ Authentication plugins can set signed ``ds_actor`` cookies themselves like so:\r\n .. 
code-block:: python\r\n \r\n response = Response.redirect(\"/\")\r\n- response.set_cookie(\"ds_actor\", datasette.sign({\r\n- \"a\": {\r\n- \"id\": \"cleopaws\"\r\n- }\r\n- }, \"actor\"))\r\n+ response.set_cookie(\r\n+ \"ds_actor\",\r\n+ datasette.sign({\"a\": {\"id\": \"cleopaws\"}}, \"actor\"),\r\n+ )\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1213683988, "label": "Code examples in the documentation should be formatted with Black"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1718#issuecomment-1107863365", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1718", "id": 1107863365, "node_id": "IC_kwDOBm6k_c5CCKdF", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-24T15:26:41Z", "updated_at": "2022-04-24T15:26:41Z", "author_association": "OWNER", "body": "Tried this:\r\n```\r\npip install blacken-docs\r\nblacken-docs docs/*.rst\r\ngit diff | pbcopy\r\n```\r\nGot this:\r\n```diff\r\n diff --git a/docs/authentication.rst b/docs/authentication.rst\r\nindex 0d98cf8..8008023 100644\r\n--- a/docs/authentication.rst\r\n+++ b/docs/authentication.rst\r\n@@ -381,11 +381,7 @@ Authentication plugins can set signed ``ds_actor`` cookies themselves like so:\r\n .. code-block:: python\r\n \r\n response = Response.redirect(\"/\")\r\n- response.set_cookie(\"ds_actor\", datasette.sign({\r\n- \"a\": {\r\n- \"id\": \"cleopaws\"\r\n- }\r\n- }, \"actor\"))\r\n+ response.set_cookie(\"ds_actor\", datasette.sign({\"a\": {\"id\": \"cleopaws\"}}, \"actor\"))\r\n \r\n Note that you need to pass ``\"actor\"`` as the namespace to :ref:`datasette_sign`.\r\n \r\n@@ -412,12 +408,16 @@ To include an expiry, add a ``\"e\"`` key to the cookie value containing a `base62\r\n expires_at = int(time.time()) + (24 * 60 * 60)\r\n \r\n response = Response.redirect(\"/\")\r\n- response.set_cookie(\"ds_actor\", datasette.sign({\r\n- \"a\": {\r\n- \"id\": \"cleopaws\"\r\n- },\r\n- \"e\": baseconv.base62.encode(expires_at),\r\n- }, \"actor\"))\r\n+ response.set_cookie(\r\n+ \"ds_actor\",\r\n+ datasette.sign(\r\n+ {\r\n+ \"a\": {\"id\": \"cleopaws\"},\r\n+ \"e\": baseconv.base62.encode(expires_at),\r\n+ },\r\n+ \"actor\",\r\n+ ),\r\n+ )\r\n \r\n The resulting cookie will encode data that looks something like this:\r\n \r\ndiff --git a/docs/spatialite.rst b/docs/spatialite.rst\r\nindex d1b300b..556bad8 100644\r\n--- a/docs/spatialite.rst\r\n+++ b/docs/spatialite.rst\r\n@@ -58,19 +58,22 @@ Here's a recipe for taking a table with existing latitude and longitude columns,\r\n .. 
code-block:: python\r\n \r\n import sqlite3\r\n- conn = sqlite3.connect('museums.db')\r\n+\r\n+ conn = sqlite3.connect(\"museums.db\")\r\n # Lead the spatialite extension:\r\n conn.enable_load_extension(True)\r\n- conn.load_extension('/usr/local/lib/mod_spatialite.dylib')\r\n+ conn.load_extension(\"/usr/local/lib/mod_spatialite.dylib\")\r\n # Initialize spatial metadata for this database:\r\n- conn.execute('select InitSpatialMetadata(1)')\r\n+ conn.execute(\"select InitSpatialMetadata(1)\")\r\n # Add a geometry column called point_geom to our museums table:\r\n conn.execute(\"SELECT AddGeometryColumn('museums', 'point_geom', 4326, 'POINT', 2);\")\r\n # Now update that geometry column with the lat/lon points\r\n- conn.execute('''\r\n+ conn.execute(\r\n+ \"\"\"\r\n UPDATE museums SET\r\n point_geom = GeomFromText('POINT('||\"longitude\"||' '||\"latitude\"||')',4326);\r\n- ''')\r\n+ \"\"\"\r\n+ )\r\n # Now add a spatial index to that column\r\n conn.execute('select CreateSpatialIndex(\"museums\", \"point_geom\");')\r\n # If you don't commit your changes will not be persisted:\r\n@@ -186,13 +189,14 @@ Here's Python code to create a SQLite database, enable SpatiaLite, create a plac\r\n .. code-block:: python\r\n \r\n import sqlite3\r\n- conn = sqlite3.connect('places.db')\r\n+\r\n+ conn = sqlite3.connect(\"places.db\")\r\n # Enable SpatialLite extension\r\n conn.enable_load_extension(True)\r\n- conn.load_extension('/usr/local/lib/mod_spatialite.dylib')\r\n+ conn.load_extension(\"/usr/local/lib/mod_spatialite.dylib\")\r\n # Create the masic countries table\r\n- conn.execute('select InitSpatialMetadata(1)')\r\n- conn.execute('create table places (id integer primary key, name text);')\r\n+ conn.execute(\"select InitSpatialMetadata(1)\")\r\n+ conn.execute(\"create table places (id integer primary key, name text);\")\r\n # Add a MULTIPOLYGON Geometry column\r\n conn.execute(\"SELECT AddGeometryColumn('places', 'geom', 4326, 'MULTIPOLYGON', 2);\")\r\n # Add a spatial index against the new column\r\n@@ -201,13 +205,17 @@ Here's Python code to create a SQLite database, enable SpatiaLite, create a plac\r\n from shapely.geometry.multipolygon import MultiPolygon\r\n from shapely.geometry import shape\r\n import requests\r\n- geojson = requests.get('https://data.whosonfirst.org/404/227/475/404227475.geojson').json()\r\n+\r\n+ geojson = requests.get(\r\n+ \"https://data.whosonfirst.org/404/227/475/404227475.geojson\"\r\n+ ).json()\r\n # Convert to \"Well Known Text\" format\r\n- wkt = shape(geojson['geometry']).wkt\r\n+ wkt = shape(geojson[\"geometry\"]).wkt\r\n # Insert and commit the record\r\n- conn.execute(\"INSERT INTO places (id, name, geom) VALUES(null, ?, GeomFromText(?, 4326))\", (\r\n- \"Wales\", wkt\r\n- ))\r\n+ conn.execute(\r\n+ \"INSERT INTO places (id, name, geom) VALUES(null, ?, GeomFromText(?, 4326))\",\r\n+ (\"Wales\", wkt),\r\n+ )\r\n conn.commit()\r\n \r\n Querying polygons using within()\r\ndiff --git a/docs/writing_plugins.rst b/docs/writing_plugins.rst\r\nindex bd60a4b..5af01f6 100644\r\n--- a/docs/writing_plugins.rst\r\n+++ b/docs/writing_plugins.rst\r\n@@ -18,9 +18,10 @@ The quickest way to start writing a plugin is to create a ``my_plugin.py`` file\r\n \r\n from datasette import hookimpl\r\n \r\n+\r\n @hookimpl\r\n def prepare_connection(conn):\r\n- conn.create_function('hello_world', 0, lambda: 'Hello world!')\r\n+ conn.create_function(\"hello_world\", 0, lambda: \"Hello world!\")\r\n \r\n If you save this in ``plugins/my_plugin.py`` you can then start Datasette like this::\r\n 
\r\n@@ -60,22 +61,18 @@ The example consists of two files: a ``setup.py`` file that defines the plugin:\r\n \r\n from setuptools import setup\r\n \r\n- VERSION = '0.1'\r\n+ VERSION = \"0.1\"\r\n \r\n setup(\r\n- name='datasette-plugin-demos',\r\n- description='Examples of plugins for Datasette',\r\n- author='Simon Willison',\r\n- url='https://github.com/simonw/datasette-plugin-demos',\r\n- license='Apache License, Version 2.0',\r\n+ name=\"datasette-plugin-demos\",\r\n+ description=\"Examples of plugins for Datasette\",\r\n+ author=\"Simon Willison\",\r\n+ url=\"https://github.com/simonw/datasette-plugin-demos\",\r\n+ license=\"Apache License, Version 2.0\",\r\n version=VERSION,\r\n- py_modules=['datasette_plugin_demos'],\r\n- entry_points={\r\n- 'datasette': [\r\n- 'plugin_demos = datasette_plugin_demos'\r\n- ]\r\n- },\r\n- install_requires=['datasette']\r\n+ py_modules=[\"datasette_plugin_demos\"],\r\n+ entry_points={\"datasette\": [\"plugin_demos = datasette_plugin_demos\"]},\r\n+ install_requires=[\"datasette\"],\r\n )\r\n \r\n And a Python module file, ``datasette_plugin_demos.py``, that implements the plugin:\r\n@@ -88,12 +85,12 @@ And a Python module file, ``datasette_plugin_demos.py``, that implements the plu\r\n \r\n @hookimpl\r\n def prepare_jinja2_environment(env):\r\n- env.filters['uppercase'] = lambda u: u.upper()\r\n+ env.filters[\"uppercase\"] = lambda u: u.upper()\r\n \r\n \r\n @hookimpl\r\n def prepare_connection(conn):\r\n- conn.create_function('random_integer', 2, random.randint)\r\n+ conn.create_function(\"random_integer\", 2, random.randint)\r\n \r\n \r\n Having built a plugin in this way you can turn it into an installable package using the following command::\r\n@@ -123,11 +120,13 @@ To bundle the static assets for a plugin in the package that you publish to PyPI\r\n \r\n .. code-block:: python\r\n \r\n- package_data={\r\n- 'datasette_plugin_name': [\r\n- 'static/plugin.js',\r\n- ],\r\n- },\r\n+ package_data = (\r\n+ {\r\n+ \"datasette_plugin_name\": [\r\n+ \"static/plugin.js\",\r\n+ ],\r\n+ },\r\n+ )\r\n \r\n Where ``datasette_plugin_name`` is the name of the plugin package (note that it uses underscores, not hyphens) and ``static/plugin.js`` is the path within that package to the static file.\r\n \r\n@@ -152,11 +151,13 @@ Templates should be bundled for distribution using the same ``package_data`` mec\r\n \r\n .. code-block:: python\r\n \r\n- package_data={\r\n- 'datasette_plugin_name': [\r\n- 'templates/my_template.html',\r\n- ],\r\n- },\r\n+ package_data = (\r\n+ {\r\n+ \"datasette_plugin_name\": [\r\n+ \"templates/my_template.html\",\r\n+ ],\r\n+ },\r\n+ )\r\n \r\n You can also use wildcards here such as ``templates/*.html``. 
See `datasette-edit-schema `__ for an example of this pattern.\r\n ```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1213683988, "label": "Code examples in the documentation should be formatted with Black"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1718#issuecomment-1107862882", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1718", "id": 1107862882, "node_id": "IC_kwDOBm6k_c5CCKVi", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-24T15:23:56Z", "updated_at": "2022-04-24T15:23:56Z", "author_association": "OWNER", "body": "Found https://github.com/asottile/blacken-docs via\r\n- https://github.com/psf/black/issues/294", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1213683988, "label": "Code examples in the documentation should be formatted with Black"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1717#issuecomment-1107848097", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1717", "id": 1107848097, "node_id": "IC_kwDOBm6k_c5CCGuh", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-24T14:02:37Z", "updated_at": "2022-04-24T14:02:37Z", "author_association": "OWNER", "body": "This is a neat feature, thanks!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1213281044, "label": "Add timeout option to Cloudrun build"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1717#issuecomment-1107459446", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1717", "id": 1107459446, "node_id": "IC_kwDOBm6k_c5CAn12", "user": {"value": 22429695, "label": "codecov[bot]"}, "created_at": "2022-04-23T11:56:36Z", "updated_at": "2022-04-23T11:56:36Z", "author_association": "NONE", "body": "# [Codecov](https://codecov.io/gh/simonw/datasette/pull/1717?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison) Report\n> Merging [#1717](https://codecov.io/gh/simonw/datasette/pull/1717?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison) (9b9a314) into [main](https://codecov.io/gh/simonw/datasette/commit/d57c347f35bcd8cff15f913da851b4b8eb030867?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison) (d57c347) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n```diff\n@@ Coverage Diff @@\n## main #1717 +/- ##\n=======================================\n Coverage 91.75% 91.75% \n=======================================\n Files 34 34 \n Lines 4574 4575 +1 \n=======================================\n+ Hits 4197 4198 +1 \n Misses 377 377 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/simonw/datasette/pull/1717?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison) | Coverage \u0394 | |\n|---|---|---|\n| 
[datasette/publish/cloudrun.py](https://codecov.io/gh/simonw/datasette/pull/1717/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison#diff-ZGF0YXNldHRlL3B1Ymxpc2gvY2xvdWRydW4ucHk=) | `97.05% <100.00%> (+0.04%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/simonw/datasette/pull/1717?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison)\n> `\u0394 = absolute (impact)`, `\u00f8 = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/simonw/datasette/pull/1717?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison). Last update [d57c347...9b9a314](https://codecov.io/gh/simonw/datasette/pull/1717?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Simon+Willison).\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1213281044, "label": "Add timeout option to Cloudrun build"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1715#issuecomment-1106989581", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1715", "id": 1106989581, "node_id": "IC_kwDOBm6k_c5B-1IN", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-22T23:03:29Z", "updated_at": "2022-04-22T23:03:29Z", "author_association": "OWNER", "body": "I'm having second thoughts about injecting `request` - might be better to have the view function pull the relevant pieces out of the request before triggering the rest of the resolution.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1212823665, "label": "Refactor TableView to use asyncinject"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1715#issuecomment-1106947168", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1715", "id": 1106947168, "node_id": "IC_kwDOBm6k_c5B-qxg", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-22T22:25:57Z", "updated_at": "2022-04-22T22:26:06Z", "author_association": "OWNER", "body": "```python\r\nasync def database(request: Request, datasette: Datasette) -> Database:\r\n database_route = tilde_decode(request.url_vars[\"database\"])\r\n try:\r\n return datasette.get_database(route=database_route)\r\n except KeyError:\r\n raise NotFound(\"Database not found: {}\".format(database_route))\r\n\r\nasync def table_name(request: Request) -> str:\r\n return tilde_decode(request.url_vars[\"table\"])\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1212823665, "label": "Refactor TableView to use asyncinject"}, "performed_via_github_app": null} 
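A minimal, hypothetical sketch (not Datasette's actual code) of how resolver functions like the `database()` and `table_name()` ones sketched above could be composed with `asyncinject`, assuming the `Registry(*functions)` / `await registry.resolve(fn, **params)` API these comments describe; the dict-based request and the function bodies are placeholder stand-ins:

```python
import asyncio
from asyncinject import Registry


async def database(request):
    # Stand-in for resolving the Database object from
    # tilde_decode(request.url_vars["database"]) via datasette.get_database()
    return f"<Database: {request['database']}>"


async def table_name(request):
    # Stand-in for tilde_decode(request.url_vars["table"])
    return request["table"]


async def table_context(database, table_name):
    # Any function that lists `database` / `table_name` as parameters
    # gets the resolved values injected by the registry
    return {"database": database, "table": table_name}


async def main():
    registry = Registry(database, table_name, table_context)
    # A plain dict stands in for a real Datasette Request object here
    context = await registry.resolve(
        table_context, request={"database": "fixtures", "table": "facetable"}
    )
    print(context)


asyncio.run(main())
```

Resolution is driven purely by parameter names, so independent dependencies (here `database` and `table_name`) can be awaited concurrently by the registry rather than one after the other.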
{"html_url": "https://github.com/simonw/datasette/issues/1715#issuecomment-1106945876", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1715", "id": 1106945876, "node_id": "IC_kwDOBm6k_c5B-qdU", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-22T22:24:29Z", "updated_at": "2022-04-22T22:24:29Z", "author_association": "OWNER", "body": "Looking at the start of `TableView.data()`:\r\n\r\nhttps://github.com/simonw/datasette/blob/d57c347f35bcd8cff15f913da851b4b8eb030867/datasette/views/table.py#L333-L346\r\n\r\nI'm going to resolve `table_name` and `database` from the URL - `table_name` will be a string, `database` will be the DB object returned by `datasette.get_database()`. Then those can be passed in separately too.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1212823665, "label": "Refactor TableView to use asyncinject"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1716#issuecomment-1106923258", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1716", "id": 1106923258, "node_id": "IC_kwDOBm6k_c5B-k76", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-22T22:02:07Z", "updated_at": "2022-04-22T22:02:07Z", "author_association": "OWNER", "body": "https://github.com/simonw/datasette/blame/main/datasette/views/base.py\r\n\r\n\"image\"\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1212838949, "label": "Configure git blame to ignore Black commit"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1715#issuecomment-1106908642", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1715", "id": 1106908642, "node_id": "IC_kwDOBm6k_c5B-hXi", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-22T21:47:55Z", "updated_at": "2022-04-22T21:47:55Z", "author_association": "OWNER", "body": "I need a `asyncio.Registry` with functions registered to perform the role of the table view.\r\n\r\nSomething like this perhaps:\r\n```python\r\ndef table_html_context(facet_results, query, datasette, rows):\r\n return {...}\r\n```\r\nThat then gets called like this:\r\n```python\r\nasync def view(request):\r\n registry = Registry(facet_results, query, datasette, rows)\r\n context = await registry.resolve(table_html, request=request, datasette=datasette)\r\n return Reponse.html(await datasette.render(\"table.html\", context)\r\n```\r\nIt's also interesting to start thinking about this from a Python client library point of view. 
If I'm writing code outside of the HTTP request cycle, what would it look like?\r\n\r\nOne thing I could do: break out is the code that turns a request into a list of pairs extracted from the request - this code here: https://github.com/simonw/datasette/blob/8338c66a57502ef27c3d7afb2527fbc0663b2570/datasette/views/table.py#L442-L449\r\n\r\nI could turn that into a typed dependency injection function like this:\r\n\r\n```python\r\ndef filter_args(request: Request) -> List[Tuple[str, str]]:\r\n # Arguments that start with _ and don't contain a __ are\r\n # special - things like ?_search= - and should not be\r\n # treated as filters.\r\n filter_args = []\r\n for key in request.args:\r\n if not (key.startswith(\"_\") and \"__\" not in key):\r\n for v in request.args.getlist(key):\r\n filter_args.append((key, v))\r\n return filter_args\r\n```\r\nThen I can either pass a `request` into a `.resolve()` call, or I can instead skip that function by passing:\r\n\r\n```python\r\noutput = registry.resolve(table_context, filter_args=[(\"foo\", \"bar\")])\r\n```\r\nI do need to think about where plugins get executed in all of this.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1212823665, "label": "Refactor TableView to use asyncinject"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1101#issuecomment-1105642187", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1101", "id": 1105642187, "node_id": "IC_kwDOBm6k_c5B5sLL", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-04-21T18:59:08Z", "updated_at": "2022-04-21T18:59:08Z", "author_association": "CONTRIBUTOR", "body": "Ha! That was your idea (and a good one).\r\n\r\nBut it's probably worth measuring to see what overhead it adds. It did require both passing in the database and making the whole thing `async`. \r\n\r\nJust timing the queries themselves:\r\n\r\n1. [Using `AsGeoJSON(geometry) as geometry`](https://alltheplaces-datasette.fly.dev/alltheplaces?sql=select%0D%0A++id%2C%0D%0A++properties%2C%0D%0A++AsGeoJSON%28geometry%29+as+geometry%2C%0D%0A++spider%0D%0Afrom%0D%0A++places%0D%0Aorder+by%0D%0A++id%0D%0Alimit%0D%0A++1000) takes 10.235 ms\r\n2. [Leaving as binary](https://alltheplaces-datasette.fly.dev/alltheplaces?sql=select%0D%0A++id%2C%0D%0A++properties%2C%0D%0A++geometry%2C%0D%0A++spider%0D%0Afrom%0D%0A++places%0D%0Aorder+by%0D%0A++id%0D%0Alimit%0D%0A++1000) takes 8.63 ms\r\n\r\nLooking at the network panel:\r\n\r\n1. Takes about 200 ms for the `fetch` request\r\n2. Takes about 300 ms\r\n\r\nI'm not sure how best to time the GeoJSON generation, but it would be interesting to check. Maybe I'll write a plugin to add query times to response headers.\r\n\r\nThe other thing to consider with async streaming is that it might be well-suited for a slower response. When I have to get the whole result and send a response in a fixed amount of time, I need the most efficient query possible. 
If I can hang onto a connection and get things one chunk at a time, maybe it's ok if there's some overhead.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 749283032, "label": "register_output_renderer() should support streaming data"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1101#issuecomment-1105615625", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1101", "id": 1105615625, "node_id": "IC_kwDOBm6k_c5B5lsJ", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-21T18:31:41Z", "updated_at": "2022-04-21T18:32:22Z", "author_association": "OWNER", "body": "The `datasette-geojson` plugin is actually an interesting case here, because of the way it converts SpatiaLite geometries into GeoJSON: https://github.com/eyeseast/datasette-geojson/blob/602c4477dc7ddadb1c0a156cbcd2ef6688a5921d/datasette_geojson/__init__.py#L61-L66\r\n\r\n```python\r\n\r\n if isinstance(geometry, bytes):\r\n results = await db.execute(\r\n \"SELECT AsGeoJSON(:geometry)\", {\"geometry\": geometry}\r\n )\r\n return geojson.loads(results.single_value())\r\n```\r\nThat actually seems to work really well as-is, but it does worry me a bit that it ends up having to execute an extra `SELECT` query for every single returned row - especially in streaming mode where it might be asked to return 1m rows at once.\r\n\r\nMy PostgreSQL/MySQL engineering brain says that this would be better handled by doing a chunk of these (maybe 100) at once, to avoid the per-query-overhead - but with SQLite that might not be necessary.\r\n\r\nAt any rate, this is one of the reasons I'm interested in \"iterate over this sequence of chunks of 100 rows at a time\" as a potential option here.\r\n\r\nOf course, a better solution would be for `datasette-geojson` to have a way to influence the SQL query before it is executed, adding a `AsGeoJSON(geometry)` clause to it - so that's something I'm open to as well.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 749283032, "label": "register_output_renderer() should support streaming data"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1101#issuecomment-1105608964", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1101", "id": 1105608964, "node_id": "IC_kwDOBm6k_c5B5kEE", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-21T18:26:29Z", "updated_at": "2022-04-21T18:26:29Z", "author_association": "OWNER", "body": "I'm questioning if the mechanisms should be separate at all now - a single response rendering is really just a case of a streaming response that only pulls the first N records from the iterator.\r\n\r\nIt probably needs to be an `async for` iterator, which I've not worked with much before. 
Good opportunity to learn.\r\n\r\nThis actually gets a fair bit more complicated due to the work I'm doing right now to improve the default JSON API:\r\n\r\n- #1709\r\n\r\nI want to do things like make faceting results optionally available to custom renderers - which is a separate concern from streaming rows.\r\n\r\nI'm going to poke around with a bunch of prototypes and see what sticks.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 749283032, "label": "register_output_renderer() should support streaming data"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1101#issuecomment-1105588651", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1101", "id": 1105588651, "node_id": "IC_kwDOBm6k_c5B5fGr", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-04-21T18:15:39Z", "updated_at": "2022-04-21T18:15:39Z", "author_association": "CONTRIBUTOR", "body": "What if you split rendering and streaming into two things:\r\n\r\n- `render` is a function that returns a response\r\n- `stream` is a function that sends chunks, or yields chunks passed to an ASGI `send` callback\r\n\r\nThat way current plugins still work, and streaming is purely additive. A `stream` function could get a cursor or iterator of rows, instead of a list, so it could more efficiently handle large queries.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 749283032, "label": "register_output_renderer() should support streaming data"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1101#issuecomment-1105571003", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1101", "id": 1105571003, "node_id": "IC_kwDOBm6k_c5B5ay7", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-21T18:10:38Z", "updated_at": "2022-04-21T18:10:46Z", "author_association": "OWNER", "body": "Maybe the simplest design for this is to add an optional `can_stream` to the contract:\r\n\r\n```python\r\n @hookimpl\r\n def register_output_renderer(datasette):\r\n return {\r\n \"extension\": \"tsv\",\r\n \"render\": render_tsv,\r\n \"can_render\": lambda: True,\r\n \"can_stream\": lambda: True\r\n }\r\n```\r\nWhen streaming, a new parameter could be passed to the render function - maybe `chunks` - which is an iterator/generator over a sequence of chunks of rows.\r\n\r\nOr it could use the existing `rows` parameter but treat that as an iterator?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 749283032, "label": "register_output_renderer() should support streaming data"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/issues/72#issuecomment-1105474232", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/72", "id": 1105474232, "node_id": "IC_kwDODFdgUs5B5DK4", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-21T17:02:15Z", "updated_at": "2022-04-21T17:02:15Z", "author_association": "MEMBER", "body": "That's interesting - yeah it looks like the number of pages can be derived from the `Link` header, which is enough information to show a progress bar, probably using Click just to avoid 
adding another dependency.\r\n\r\nhttps://docs.github.com/en/rest/guides/traversing-with-pagination", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1211283427, "label": "feature: display progress bar when downloading multi-page responses"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1574#issuecomment-1105464661", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1574", "id": 1105464661, "node_id": "IC_kwDOBm6k_c5B5A1V", "user": {"value": 208018, "label": "dholth"}, "created_at": "2022-04-21T16:51:24Z", "updated_at": "2022-04-21T16:51:24Z", "author_association": "NONE", "body": "tfw you have more ephemeral storage than upstream bandwidth\r\n\r\n```\r\nFROM python:3.10-slim AS base\r\n\r\nRUN apt update && apt -y install zstd\r\n\r\nENV DATASETTE_SECRET 'sosecret'\r\nRUN --mount=type=cache,target=/root/.cache/pip \\\r\n pip install -U datasette datasette-pretty-json datasette-graphql\r\n\r\nENV PORT 8080\r\nEXPOSE 8080\r\n\r\nFROM base AS pack\r\n\r\nCOPY . /app\r\nWORKDIR /app\r\n\r\nRUN datasette inspect --inspect-file inspect-data.json\r\nRUN zstd --rm *.db\r\n\r\nFROM base AS unpack\r\n\r\nCOPY --from=pack /app /app\r\nWORKDIR /app\r\n\r\nCMD [\"/bin/bash\", \"-c\", \"shopt -s nullglob && zstd --rm -d *.db.zst && datasette serve --host 0.0.0.0 --cors --inspect-file inspect-data.json --metadata metadata.json --create --port $PORT *.db\"]\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1084193403, "label": "introduce new option for datasette package to use a slim base image"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1713#issuecomment-1103312860", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1713", "id": 1103312860, "node_id": "IC_kwDOBm6k_c5Bwzfc", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-04-20T00:52:19Z", "updated_at": "2022-04-20T00:52:19Z", "author_association": "CONTRIBUTOR", "body": "feels related to #1402 ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1203943272, "label": "Datasette feature for publishing snapshots of query results"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/425#issuecomment-1101594549", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/425", "id": 1101594549, "node_id": "IC_kwDOCGYnMM5BqP-1", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-04-18T17:36:14Z", "updated_at": "2022-04-18T17:36:14Z", "author_association": "OWNER", "body": "Related:\r\n- #408", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1203842656, "label": "`sqlite3.NotSupportedError`: deterministic=True requires SQLite 3.8.3 or higher"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/pull/1159#issuecomment-1100243987", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1159", "id": 1100243987, "node_id": "IC_kwDOBm6k_c5BlGQT", "user": {"value": 552629, "label": "lovasoa"}, "created_at": "2022-04-15T17:24:43Z", "updated_at": 
"2022-04-15T17:24:43Z", "author_association": "NONE", "body": "@simonw : do you think this could be merged ?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 774332247, "label": "Improve the display of facets information"}, "performed_via_github_app": null}