issue_comments
7 rows where "updated_at" is on date 2018-06-04 sorted by performed_via_github_app
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
392343839 | https://github.com/simonw/datasette/issues/292#issuecomment-392343839 | https://api.github.com/repos/simonw/datasette/issues/292 | MDEyOklzc3VlQ29tbWVudDM5MjM0MzgzOQ== | simonw 9599 | 2018-05-27T16:10:09Z | 2018-06-04T17:38:04Z | OWNER | The more efficient way of doing this kind of count would be to provide a mechanism which can also add extra fragments to a `GROUP BY` clause used for the `SELECT`. Or... how about a mechanism similar to Django's `prefetch_related` which lets you define extra queries that will be called with a list of primary keys (or values from other columns) and used to populate a new column? A little unconventional but could be extremely useful and efficient. Related to that: since the per-query overhead in SQLite is tiny, could even define an extra query to be run once-per-row before returning results. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Mechanism for customizing the SQL used to select specific columns in the table view 326800219 | |
394400419 | https://github.com/simonw/datasette/issues/304#issuecomment-394400419 | https://api.github.com/repos/simonw/datasette/issues/304 | MDEyOklzc3VlQ29tbWVudDM5NDQwMDQxOQ== | simonw 9599 | 2018-06-04T15:39:03Z | 2018-06-04T15:39:03Z | OWNER | In the interest of getting this shipped, I'm going to ignore the `3.7.10` issue. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Ability to configure SQLite cache_size 328229224 | |
394412217 | https://github.com/simonw/datasette/issues/304#issuecomment-394412217 | https://api.github.com/repos/simonw/datasette/issues/304 | MDEyOklzc3VlQ29tbWVudDM5NDQxMjIxNw== | simonw 9599 | 2018-06-04T16:13:32Z | 2018-06-04T16:13:32Z | OWNER | Docs: http://datasette.readthedocs.io/en/latest/config.html#cache-size-kb | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Ability to configure SQLite cache_size 328229224 | |
394412784 | https://github.com/simonw/datasette/issues/302#issuecomment-394412784 | https://api.github.com/repos/simonw/datasette/issues/302 | MDEyOklzc3VlQ29tbWVudDM5NDQxMjc4NA== | simonw 9599 | 2018-06-04T16:15:22Z | 2018-06-04T16:15:22Z | OWNER | I think this is related to #303 | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | test-2.3.sqlite database filename throws a 404 328171513 | |
394417567 | https://github.com/simonw/datasette/issues/266#issuecomment-394417567 | https://api.github.com/repos/simonw/datasette/issues/266 | MDEyOklzc3VlQ29tbWVudDM5NDQxNzU2Nw== | simonw 9599 | 2018-06-04T16:30:48Z | 2018-06-04T16:32:55Z | OWNER | When serving streaming responses, I need to check that a large CSV file doesn't completely max out the CPU in a way that is harmful to the rest of the instance. If it does, one option may be to insert an async sleep call in between each chunk that is streamed back. This could be controlled by a `csv_pause_ms` config setting, defaulting to maybe 5 but can be disabled entirely by setting to 0. That's only if testing proves that this is a necessary mechanism. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Export to CSV 323681589 | |
394431323 | https://github.com/simonw/datasette/issues/272#issuecomment-394431323 | https://api.github.com/repos/simonw/datasette/issues/272 | MDEyOklzc3VlQ29tbWVudDM5NDQzMTMyMw== | simonw 9599 | 2018-06-04T17:17:37Z | 2018-06-04T17:17:37Z | OWNER | I built this ASGI debugging tool to help with this migration: https://asgi-scope.now.sh/fivethirtyeight-34d6604/most-common-name%2Fsurnames.json?foo=bar&bazoeuto=onetuh&a=. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Port Datasette to ASGI 324188953 | |
394503399 | https://github.com/simonw/datasette/issues/272#issuecomment-394503399 | https://api.github.com/repos/simonw/datasette/issues/272 | MDEyOklzc3VlQ29tbWVudDM5NDUwMzM5OQ== | simonw 9599 | 2018-06-04T21:20:14Z | 2018-06-04T21:20:14Z | OWNER | Results of an extremely simple micro-benchmark comparing the two shows that uvicorn is at least as fast as Sanic (benchmarks a little faster with a very simple payload): https://gist.github.com/simonw/418950af178c01c416363cc057420851 | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Port Datasette to ASGI 324188953 | |
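The comment on issue #292 above describes a `prefetch_related`-style mechanism: run the main query, collect its primary keys, then run one extra query against those keys to populate a derived column, instead of a per-row subquery. A minimal sketch of that pattern with Python's `sqlite3`; the table and column names here are illustrative, not Datasette's actual schema or API:

```python
import sqlite3

# Illustrative schema: issues with related comments.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE issues (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE comments (id INTEGER PRIMARY KEY, issue INTEGER);
    INSERT INTO issues VALUES (1, 'first'), (2, 'second');
    INSERT INTO comments VALUES (10, 1), (11, 1), (12, 2);
""")

# Main query first; collect its primary keys.
rows = conn.execute("SELECT id, title FROM issues").fetchall()
pks = [r[0] for r in rows]

# One extra GROUP BY query keyed on the collected primary keys,
# instead of a correlated subquery per row.
placeholders = ",".join("?" * len(pks))
counts = dict(conn.execute(
    f"SELECT issue, COUNT(*) FROM comments "
    f"WHERE issue IN ({placeholders}) GROUP BY issue",
    pks,
))

# Populate the derived column from the prefetched counts.
annotated = [(pk, title, counts.get(pk, 0)) for pk, title in rows]
print(annotated)  # [(1, 'first', 2), (2, 'second', 1)]
```

Two queries total, regardless of row count, which is the efficiency argument the comment makes.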
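The `cache_size` setting discussed in issue #304 maps onto SQLite's `cache_size` PRAGMA, where a negative value sets the page-cache size in KiB rather than pages. A small demonstration of the underlying PRAGMA itself (not of Datasette's configuration layer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Negative cache_size means "this many KiB", positive means "this many pages".
conn.execute("PRAGMA cache_size = -2000")  # ~2000 KiB of page cache
size = conn.execute("PRAGMA cache_size").fetchone()[0]
print(size)  # -2000
```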
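The comment on issue #266 proposes inserting an async sleep between streamed CSV chunks so one large export cannot monopolise the CPU. A rough sketch of that throttling idea; the names `stream_csv` and `csv_pause_ms` come from the comment's hypothetical design, not from Datasette's actual API:

```python
import asyncio

async def stream_csv(chunks, csv_pause_ms=5):
    """Yield CSV chunks, pausing csv_pause_ms between each; 0 disables the pause."""
    for chunk in chunks:
        yield chunk
        if csv_pause_ms:
            # Yield control to the event loop between chunks.
            await asyncio.sleep(csv_pause_ms / 1000)

async def demo():
    out = []
    async for chunk in stream_csv(["id,name\n", "1,a\n", "2,b\n"], csv_pause_ms=1):
        out.append(chunk)
    return "".join(out)

print(asyncio.run(demo()))
```

As the comment notes, whether such a pause is needed at all would depend on testing under real load.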
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
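The filtered view above can be reproduced directly against this schema. A minimal sketch with Python's `sqlite3`, trimming the table to the columns the filter touches and using a few illustrative rows (two ids from the page plus one row outside the date range):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE issue_comments (id INTEGER PRIMARY KEY, "
    "updated_at TEXT, performed_via_github_app TEXT)"
)
conn.executemany(
    "INSERT INTO issue_comments VALUES (?, ?, ?)",
    [
        (392343839, "2018-06-04T17:38:04Z", None),
        (394400419, "2018-06-04T15:39:03Z", None),
        (100000000, "2018-06-03T09:00:00Z", None),  # outside the date filter
    ],
)
# Mirrors: updated_at is on date 2018-06-04, sorted by performed_via_github_app.
rows = conn.execute(
    "SELECT id FROM issue_comments "
    "WHERE date(updated_at) = '2018-06-04' "
    "ORDER BY performed_via_github_app"
).fetchall()
print(rows)  # only the comments updated on 2018-06-04
```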