issue_comments


3 rows where issue = 403625674


Issue: .insert_all() should accept a generator and process it efficiently (403625674)

457980966 · simonw · OWNER · 2019-01-28T02:29:32Z
https://github.com/simonw/sqlite-utils/issues/7#issuecomment-457980966

Remember to remove this TODO (and turn the `[]` into `()` on this line) as part of this task: https://github.com/simonw/sqlite-utils/blob/5309c5c7755818323a0f5353bad0de98ecc866be/sqlite_utils/cli.py#L78-L80

458011885 · simonw · OWNER · 2019-01-28T06:25:48Z
https://github.com/simonw/sqlite-utils/issues/7#issuecomment-458011885

Re-opening for the second part, involving the CLI tool.

458011906 · simonw · OWNER · 2019-01-28T06:25:55Z
https://github.com/simonw/sqlite-utils/issues/7#issuecomment-458011906

I tested this with a script called `churn_em_out.py`:

```
i = 0
while True:
    i += 1
    print(
        '{"id": I, "another": "row", "number": J}'.replace("I", str(i)).replace(
            "J", str(i + 1)
        )
    )
```

Then I ran this:

```
python churn_em_out.py | \
    sqlite-utils insert /tmp/getbig.db stats - \
    --nl --batch-size=10000
```

And used `watch 'ls -lah /tmp/getbig.db'` to watch the file growing as rows were committed in batches of 10,000. The memory used by the process never grew above around 50MB.
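The pipeline above streams newline-delimited JSON into batched inserts, which is the same behaviour the Python-level `.insert_all()` change provides. A minimal stdlib-only sketch of the batching technique (table and column names are illustrative, not sqlite-utils' actual internals):

```python
import itertools
import sqlite3


def insert_in_batches(conn, table, rows, batch_size):
    """Consume an iterator in fixed-size chunks, committing each chunk,
    so memory use stays bounded regardless of how many rows arrive."""
    rows = iter(rows)
    while True:
        chunk = list(itertools.islice(rows, batch_size))
        if not chunk:
            break
        conn.executemany(
            f"INSERT INTO {table} (id, another, number)"
            " VALUES (:id, :another, :number)",
            chunk,
        )
        conn.commit()


def generate_rows(n):
    # Yield rows lazily, mirroring the churn_em_out.py script above.
    for i in range(1, n + 1):
        yield {"id": i, "another": "row", "number": i + 1}


conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE stats (id INTEGER PRIMARY KEY, another TEXT, number INTEGER)"
)
insert_in_batches(conn, "stats", generate_rows(100), batch_size=10)
print(conn.execute("SELECT count(*) FROM stats").fetchone()[0])  # 100
```

Because each chunk is committed before the next is pulled from the generator, the database file grows steadily while only `batch_size` rows are ever held in memory at once.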


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
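The two indexes exist so filters like the `issue = 403625674` query on this page can be served by an index search rather than a full table scan. A minimal sketch with Python's stdlib `sqlite3`, using a trimmed version of the schema (illustrative only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE issue_comments (
   id INTEGER PRIMARY KEY,
   issue INTEGER,
   body TEXT
);
CREATE INDEX idx_issue_comments_issue ON issue_comments (issue);
""")

# EXPLAIN QUERY PLAN reports whether SQLite will use the index.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM issue_comments WHERE issue = 403625674"
).fetchall()
print(plan)  # the plan detail mentions idx_issue_comments_issue
```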