issue_comments
3 rows where user = 2118708
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
954303095 | https://github.com/simonw/sqlite-utils/issues/248#issuecomment-954303095 | https://api.github.com/repos/simonw/sqlite-utils/issues/248 | IC_kwDOCGYnMM444YJ3 | Florents-Tselai 2118708 | 2021-10-28T23:46:47Z | 2021-10-28T23:46:47Z | NONE | @mhalle maybe you can try out #333? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | support for Apache Arrow / parquet files I/O 836829560 | |
956041692 | https://github.com/simonw/sqlite-utils/issues/173#issuecomment-956041692 | https://api.github.com/repos/simonw/sqlite-utils/issues/173 | IC_kwDOCGYnMM44_Anc | Florents-Tselai 2118708 | 2021-11-01T08:42:24Z | 2021-11-01T08:42:24Z | NONE | > I know how to build this for CSV and TSV - I can read them via a file wrapper that counts how many bytes it has seen. > > Not sure how to do it for JSON though. Maybe I could provide it just for newline-delimited JSON? Again I can measure progress based on how many bytes have been read. I was thinking about this while inserting a stream of ~40M newline-delimited JSON docs. Wouldn't a `--total-expected` flag work? That's [how tqdm does it](https://github.com/tqdm/tqdm/blob/fc69d5dcf578f7c7986fa76841a6b793f813df35/tqdm/std.py#L366) (see the sketch below this table) | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Progress bar for sqlite-utils insert 707478649 | |
979345527 | https://github.com/simonw/sqlite-utils/pull/333#issuecomment-979345527 | https://api.github.com/repos/simonw/sqlite-utils/issues/333 | IC_kwDOCGYnMM46X6B3 | Florents-Tselai 2118708 | 2021-11-25T16:31:47Z | 2021-11-25T16:31:47Z | NONE | Thanks for your reply @simonw. Tbh, my first attempt was actually the `parquet-to-sqlite` package, but I already had Makefiles that relied on `sqlite-utils` and it was less intrusive to my workflow. Maybe I'll revisit that decision. FYI: there's a [sqlite-parquet-vtable](https://github.com/cldellow/sqlite-parquet-vtable). I don't think plugins make much sense either; that probably defeats the purpose of simplicity: a simple database along with a pip-installable package. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add functionality to read Parquet files. 1039037439 | |
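The second comment above (on the "Progress bar for sqlite-utils insert" issue) proposes a file wrapper that counts bytes read, combined with an expected total passed to tqdm. Below is a minimal sketch of that idea for a newline-delimited JSON stream; the `ByteCountingReader` class, the `docs.ndjson` file name, and the `--total-expected` analogy are illustrative assumptions, not sqlite-utils API.

```python
import json
import os

from tqdm import tqdm  # pip install tqdm


class ByteCountingReader:
    """Wrap a binary file object and report every byte read to a tqdm bar."""

    def __init__(self, fp, bar):
        self.fp = fp
        self.bar = bar

    def readline(self):
        line = self.fp.readline()
        self.bar.update(len(line))
        return line


path = "docs.ndjson"  # hypothetical newline-delimited JSON input
total = os.path.getsize(path)  # the value a --total-expected flag would supply

with open(path, "rb") as fp, tqdm(total=total, unit="B", unit_scale=True) as bar:
    reader = ByteCountingReader(fp, bar)
    for line in iter(reader.readline, b""):
        if not line.strip():
            continue  # skip blank lines
        doc = json.loads(line)  # one JSON document per line
        # ... insert doc into the database here
```

Because progress is measured in bytes rather than rows, the bar works even when the row count is unknown up front, which is what makes this approach attractive for large streams.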
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
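The page above filters this table to the 3 rows where user = 2118708; against this schema, that is a plain query on the `user` column, which the second index covers. A minimal sketch using Python's `sqlite3` module, assuming a local copy of the database (the `github.db` file name is hypothetical):

```python
import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical local copy of this database
rows = conn.execute(
    "SELECT id, created_at, body FROM issue_comments WHERE user = ?",
    (2118708,),  # Florents-Tselai's user id
).fetchall()

for comment_id, created_at, body in rows:
    print(comment_id, created_at, body[:60])
```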