issue_comments
4 rows where "created_at" is on date 2021-07-18 sorted by author_association descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
882052693 | https://github.com/simonw/sqlite-utils/issues/297#issuecomment-882052693 | https://api.github.com/repos/simonw/sqlite-utils/issues/297 | IC_kwDOCGYnMM40kw5V | simonw 9599 | 2021-07-18T12:57:54Z | 2022-06-21T13:17:15Z | OWNER | Another implementation option would be to use the CSV virtual table mechanism. This could avoid shelling out to the `sqlite3` binary, but requires solving the harder problem of compiling and distributing a loadable SQLite module: https://www.sqlite.org/csv.html (Would be neat to produce a Python wheel of this, see https://simonwillison.net/2022/May/23/bundling-binary-tools-in-python-wheels/) This would also help solve the challenge of making this optimization available to the `sqlite-utils memory` command. That command operates against an in-memory database so it's not obvious how it could shell out to a binary. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Option for importing CSV data using the SQLite .import mechanism 944846776 | |
882052852 | https://github.com/simonw/sqlite-utils/issues/297#issuecomment-882052852 | https://api.github.com/repos/simonw/sqlite-utils/issues/297 | IC_kwDOCGYnMM40kw70 | simonw 9599 | 2021-07-18T12:59:20Z | 2021-07-18T12:59:20Z | OWNER | I'm not too worried about `sqlite-utils memory` because if your data is large enough that you can benefit from this optimization you probably should use a real file as opposed to a disposable memory database when analyzing it. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Option for importing CSV data using the SQLite .import mechanism 944846776 | |
882091516 | https://github.com/dogsheep/dogsheep-photos/issues/32#issuecomment-882091516 | https://api.github.com/repos/dogsheep/dogsheep-photos/issues/32 | IC_kwDOD079W840k6X8 | aaronyih1 10793464 | 2021-07-18T17:29:39Z | 2021-07-18T17:33:02Z | NONE | Same here for US West (N. California) us-west-1. Running on Catalina. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | KeyError: 'Contents' on running upload 803333769 | |
882096402 | https://github.com/simonw/datasette/issues/123#issuecomment-882096402 | https://api.github.com/repos/simonw/datasette/issues/123 | IC_kwDOBm6k_c40k7kS | RayBB 921217 | 2021-07-18T18:07:29Z | 2021-07-18T18:07:29Z | NONE | I also love the idea for this feature and wonder if it could work without having to download the whole database into memory at once if it's a rather large db. Obviously this could be slower but could have many use cases. My comment is partially inspired by this post about streaming sqlite dbs from github pages or such https://news.ycombinator.com/item?id=27016630 | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Datasette serve should accept paths/URLs to CSVs and other file formats 275125561 | |
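For context on the optimization discussed in the first two comments above: importing CSV data row by row through Python is the baseline that shelling out to the `sqlite3` binary's `.import` (or a compiled CSV virtual table module) would aim to beat. A minimal stdlib-only sketch of that baseline path, using made-up sample data rather than anything from this table:

```python
import csv
import io
import sqlite3

# Pure-Python CSV import into an in-memory SQLite database -- the slow
# baseline that issue 297 discusses speeding up. Sample data is invented
# purely for illustration.
csv_text = "id,name\n1,alpha\n2,beta\n"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rows (id INTEGER, name TEXT)")

reader = csv.reader(io.StringIO(csv_text))
next(reader)  # skip the header row
conn.executemany("INSERT INTO rows VALUES (?, ?)", reader)

print(conn.execute("SELECT count(*) FROM rows").fetchone()[0])  # 2
```

An in-memory database like this one is exactly what `sqlite-utils memory` operates against, which is why the second comment notes that the shell-out optimization does not obviously apply there.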
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
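The schema's `user` and `issue` columns are foreign keys into `users` and `issues`, which is why the table view above renders labels like "simonw 9599" and the issue title alongside its id. A sketch of that relationship in a scratch database, with minimal hypothetical `users` and `issues` tables and one row of illustrative (not real) data:

```python
import sqlite3

# Recreate the issue_comments schema in a throwaway database; the users and
# issues tables here are minimal stand-ins for the real ones.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, login TEXT);
CREATE TABLE issues (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE issue_comments (
   [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
   [node_id] TEXT, [user] INTEGER REFERENCES users(id),
   [created_at] TEXT, [updated_at] TEXT, [author_association] TEXT,
   [body] TEXT, [reactions] TEXT,
   [issue] INTEGER REFERENCES issues(id), [performed_via_github_app] TEXT
);
""")
conn.execute("INSERT INTO users VALUES (9599, 'simonw')")
conn.execute("INSERT INTO issues VALUES (944846776, 'Option for importing CSV data')")
conn.execute(
    "INSERT INTO issue_comments (id, user, issue, body) VALUES (?, ?, ?, ?)",
    (882052693, 9599, 944846776, "example body"),
)

# Joining through the foreign keys recovers the labels shown in the table view.
row = conn.execute("""
    SELECT users.login, issues.title
    FROM issue_comments
    JOIN users ON users.id = issue_comments.user
    JOIN issues ON issues.id = issue_comments.issue
""").fetchone()
print(row)  # ('simonw', 'Option for importing CSV data')
```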