issue_comments
18 rows where reactions = `{"total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}`, sorted by id descending. Every matching comment carries that same reactions value, and performed_via_github_app is empty for all of them, so neither field is repeated in the entries below.
Each entry below lists: id · node_id · user (user id) · author_association, then the created and updated timestamps, the comment permalink, the associated issue (title and numeric id), and the comment body.
**1111506339** · IC_kwDOCGYnMM5CQD2j · dracos (154364) · NONE
created 2022-04-27T21:35:13Z · updated 2022-04-27T21:35:13Z
https://github.com/simonw/sqlite-utils/issues/159#issuecomment-1111506339
Issue: .delete_where() does not auto-commit (unlike .insert() or .upsert()) (702386948)

Just stumbled across this, wondering why none of my deletes were working.
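A minimal sketch of the behaviour the issue describes, using the sqlite-utils Python API; the database and table names are hypothetical:

```python
import sqlite_utils

db = sqlite_utils.Database("data.db")  # hypothetical database with an "items" table
db["items"].delete_where("id > ?", [100])
# At the time of this comment .delete_where() did not commit, so without
# this explicit commit the deletes were never persisted to disk:
db.conn.commit()
```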
**893133496** · IC_kwDOBm6k_c41PCK4 · simonw (9599) · OWNER
created 2021-08-05T03:22:44Z · updated 2021-08-05T03:22:44Z
https://github.com/simonw/datasette/issues/1419#issuecomment-893133496
Issue: `publish cloudrun` should deploy a more recent SQLite version (959710008)

I ran into this exact same problem today! I only just learned how to use filter on aggregates: https://til.simonwillison.net/sqlite/sqlite-aggregate-filter-clauses

A workaround I used is to add this to the deploy command: `datasette publish cloudrun ... --install=pysqlite3-binary`

This will install the https://pypi.org/project/pysqlite3-binary package, which bundles a more recent SQLite version.
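For context, a hedged sketch of how the bundled module can stand in for the stdlib one once `pysqlite3-binary` is installed; the fallback import is a common pattern, not necessarily Datasette's exact code:

```python
try:
    import pysqlite3 as sqlite3  # newer bundled SQLite, if installed
except ImportError:
    import sqlite3  # fall back to the stdlib build

# FILTER clauses on aggregates require SQLite 3.30 or later
version = sqlite3.connect(":memory:").execute("select sqlite_version()").fetchone()[0]
print(version)
```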
**875738149** · MDEyOklzc3VlQ29tbWVudDg3NTczODE0OQ== · simonw (9599) · OWNER
created 2021-07-07T16:14:29Z · updated 2021-07-07T16:14:29Z
https://github.com/simonw/datasette/issues/1388#issuecomment-875738149
Issue: Serve using UNIX domain socket (939051549)

This sounds like a valuable feature for people running Datasette behind a proxy.
**808651088** · MDEyOklzc3VlQ29tbWVudDgwODY1MTA4OA== · simonw (9599) · OWNER
created 2021-03-27T04:41:52Z · updated 2021-03-27T04:42:14Z
https://github.com/simonw/datasette/issues/1258#issuecomment-808651088
Issue: Allow canned query params to specify default values (828858421)

Right now they look like this:

```yaml
databases:
  fixtures:
    queries:
      neighborhood_search:
        params:
        - text
```

In addition to being able to specify defaults, I'd also like to add other things in the future - most significantly the ability to specify a different input widget (e.g. textarea vs. single-line input). So maybe this looks like:

```yaml
params:
- name: text
  default: ""
- name: age
  widget: number
```
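Since both shapes would have to coexist, a hypothetical normalizer for the two YAML forms might look like this; the field names follow the comment's proposal, and the `"text"` widget default is an illustrative assumption, not Datasette's implementation:

```python
def normalize_params(raw_params):
    """Accept either plain strings or {name, default, widget} dicts."""
    normalized = []
    for p in raw_params:
        if isinstance(p, str):
            p = {"name": p}  # old shape: bare parameter name
        normalized.append({
            "name": p["name"],
            "default": p.get("default"),    # None means "no default"
            "widget": p.get("widget", "text"),
        })
    return normalized

# Both the old and the proposed shapes normalize cleanly:
print(normalize_params(["text"]))
print(normalize_params([{"name": "text", "default": ""}, {"name": "age", "widget": "number"}]))
```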
**802099264** · MDEyOklzc3VlQ29tbWVudDgwMjA5OTI2NA== · simonw (9599) · OWNER
created 2021-03-18T16:43:09Z · updated 2021-03-18T16:43:09Z
https://github.com/simonw/datasette/issues/1262#issuecomment-802099264
Issue: Plugin hook that could support 'order by random()' for table view (834602299)

I often find myself wanting this too, when I'm exploring a new dataset. I agree with Bob that this is a good candidate for a plugin. The plugin system isn't quite set up for this yet though - there isn't an obvious mechanism for adding extra sort orders or other interface elements that manipulate the query used by the table view in some way.

I'm going to promote this issue to the status of a plugin hook feature request - I have a hunch that a plugin hook that enables `order by random()` could enable a lot of other useful plugin features too.
**782765665** · MDEyOklzc3VlQ29tbWVudDc4Mjc2NTY2NQ== · simonw (9599) · OWNER
created 2021-02-20T23:34:41Z · updated 2021-02-20T23:34:41Z
https://github.com/simonw/datasette/issues/782#issuecomment-782765665
Issue: Redesign default .json format (627794879)

OK, I'm back to the "top level object as the default" side of things now - it's pretty much unanimous at this point, and it's certainly true that it's not a decision you'll ever regret.
**755133937** · MDEyOklzc3VlQ29tbWVudDc1NTEzMzkzNw== · simonw (9599) · OWNER
created 2021-01-06T07:25:48Z · updated 2021-01-06T07:26:43Z
https://github.com/simonw/datasette/issues/1101#issuecomment-755133937
Issue: register_output_renderer() should support streaming data (749283032)

Idea: instead of returning a dictionary, `register_output_renderer` could return an object. The object could have the following properties:

- `.extension` - the extension to use
- `.can_render(...)` - says if it can render this
- `.can_stream(...)` - says if streaming is supported
- `async .stream_rows(rows_iterator, send)` - method that loops through all rows and uses `send` to send them to the response in the correct format

I can then deprecate the existing `dict` return type for 1.0.
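A rough sketch of what such an object might look like, using the property names from the comment above; this is the proposed interface only, not an API Datasette actually shipped, and it assumes `rows_iterator` is an async iterator of mapping-like rows:

```python
import json

class NDJSONRenderer:
    """Sketch of the proposed object-based return value."""
    extension = "ndjson"

    def can_render(self, columns):
        return True  # newline-delimited JSON can represent any row shape

    def can_stream(self):
        return True  # one JSON document per line streams naturally

    async def stream_rows(self, rows_iterator, send):
        # Loop through all rows and push each one to the response via send()
        async for row in rows_iterator:
            await send(json.dumps(dict(row)) + "\n")
```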
**751504136** · MDEyOklzc3VlQ29tbWVudDc1MTUwNDEzNg== · drewda (212369) · NONE
created 2020-12-27T19:02:06Z · updated 2020-12-27T19:02:06Z
https://github.com/simonw/datasette/issues/417#issuecomment-751504136
Issue: Datasette Library (421546944)

Very much looking forward to seeing this functionality come together. This is probably out of scope for an initial release, but in the future it could also be useful to think about how to run this in a containerized context. For example, an immutable Datasette container that points to an S3 bucket of SQLite DBs or CSVs, or an immutable Datasette container pointing to an NFS volume elsewhere on a Kubernetes cluster.
**737563699** · MDEyOklzc3VlQ29tbWVudDczNzU2MzY5OQ== · simonw (9599) · OWNER
created 2020-12-02T23:45:42Z · updated 2020-12-02T23:45:42Z
https://github.com/simonw/datasette/issues/749#issuecomment-737563699
Issue: Cloud Run fails to serve database files larger than 32MB (610829227)

I asked about this on Twitter: https://twitter.com/steren/status/1334281184965140483

> You simply need to send the `Transfer-Encoding: chunked` header.
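For context, a minimal ASGI sketch of the streaming approach the tweet describes: when a response omits Content-Length and is sent in multiple body messages, the server emits it with `Transfer-Encoding: chunked`. The file name is hypothetical:

```python
async def app(scope, receive, send):
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"application/octet-stream")],
    })
    with open("large.db", "rb") as f:  # hypothetical >32MB database file
        # Send the file in 64KB pieces; more_body=True keeps the response open
        while chunk := f.read(64 * 1024):
            await send({"type": "http.response.body", "body": chunk, "more_body": True})
    await send({"type": "http.response.body", "body": b"", "more_body": False})
```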
**696163452** · MDEyOklzc3VlQ29tbWVudDY5NjE2MzQ1Mg== · snth (652285) · NONE
created 2020-09-21T14:46:10Z · updated 2020-09-21T14:46:10Z
https://github.com/simonw/datasette/issues/670#issuecomment-696163452
Issue: Prototoype for Datasette on PostgreSQL (564833696)

I'm currently using PostgREST to serve OpenAPI APIs off PostgreSQL databases. I would like to try out Datasette once this becomes available on Postgres.
**615932007** · MDEyOklzc3VlQ29tbWVudDYxNTkzMjAwNw== · simonw (9599) · MEMBER
created 2020-04-18T19:27:55Z · updated 2020-04-18T19:27:55Z
https://github.com/dogsheep/dogsheep-photos/issues/4#issuecomment-615932007
Issue: Upload all my photos to a secure S3 bucket (602533539)

Research thread: https://twitter.com/simonw/status/1249049694984011776

> I want to build some software that lets people store their own data in their own S3 bucket, but if possible I'd like not to have to teach people the incantations needed to get their bucket set up and minimum-permission credentials figured out.

https://testdriven.io/blog/storing-django-static-and-media-files-on-amazon-s3/ looks useful
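As a concrete illustration of "minimum-permission credentials", an IAM policy granting only upload access to a single bucket might look like the following; the bucket name is hypothetical and this is a sketch, not the project's chosen setup:

```python
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject"],  # upload only: no read, list, or delete
        "Resource": "arn:aws:s3:::my-photos-bucket/*",  # hypothetical bucket
    }],
}
print(json.dumps(policy, indent=2))
```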
**586729798** · MDEyOklzc3VlQ29tbWVudDU4NjcyOTc5OA== · simonw (9599) · OWNER
created 2020-02-16T17:11:02Z · updated 2020-02-16T17:11:02Z
https://github.com/simonw/sqlite-utils/issues/86#issuecomment-586729798
Issue: Problem with square bracket in CSV column name (564579430)

I filed a bug in the Python issue tracker here: https://bugs.python.org/issue39652
**580028669** · MDEyOklzc3VlQ29tbWVudDU4MDAyODY2OQ== · simonw (9599) · OWNER
created 2020-01-30T00:30:19Z · updated 2020-01-30T00:30:19Z
https://github.com/simonw/datasette/issues/662#issuecomment-580028669
Issue: Escape_fts5_query-hookimplementation does not work with queries to standard tables (556814876)

I just shipped 0.34: https://datasette.readthedocs.io/en/stable/changelog.html#v0-34
**473312514** · MDEyOklzc3VlQ29tbWVudDQ3MzMxMjUxNA== · simonw (9599) · OWNER
created 2019-03-15T14:42:07Z · updated 2019-03-17T22:12:30Z
https://github.com/simonw/datasette/issues/417#issuecomment-473312514
Issue: Datasette Library (421546944)

A neat ability of Datasette Library would be if it can work against other files that have been dropped into the folder. In particular: if a user drops a CSV file into the folder, how about automatically converting that CSV file to SQLite using [sqlite-utils](https://github.com/simonw/sqlite-utils)?
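A hedged sketch of that conversion using the sqlite-utils Python API; the folder and database names are hypothetical:

```python
import csv
import pathlib

import sqlite_utils  # https://github.com/simonw/sqlite-utils

db = sqlite_utils.Database("library.db")  # hypothetical output database
for path in pathlib.Path("watched-folder").glob("*.csv"):
    with path.open(newline="") as f:
        # One table per file, named after the file; sqlite-utils
        # creates the table and infers columns from the first record
        db[path.stem].insert_all(csv.DictReader(f))
```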
    CREATE TABLE [issue_comments] (
       [html_url] TEXT,
       [issue_url] TEXT,
       [id] INTEGER PRIMARY KEY,
       [node_id] TEXT,
       [user] INTEGER REFERENCES [users]([id]),
       [created_at] TEXT,
       [updated_at] TEXT,
       [author_association] TEXT,
       [body] TEXT,
       [reactions] TEXT,
       [issue] INTEGER REFERENCES [issues]([id]),
       [performed_via_github_app] TEXT
    );
    CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
    CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
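Given that schema, the filter this page applies is a plain string comparison against the JSON blob stored in the `reactions` TEXT column. A sketch of the equivalent query; the database filename is a hypothetical export of this data:

```python
import sqlite3

REACTIONS = (
    '{"total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, '
    '"confused": 0, "heart": 0, "rocket": 0, "eyes": 0}'
)

conn = sqlite3.connect("github.db")  # hypothetical database file
rows = conn.execute(
    "select id, user, body from issue_comments "
    "where reactions = ? order by id desc",
    (REACTIONS,),
).fetchall()
for comment_id, user_id, body in rows:
    print(comment_id, user_id, body[:60])
```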