issue_comments

4 rows where author_association = "CONTRIBUTOR", created_at is on 2023-06-14, and updated_at is on 2023-06-14

Comment 1590531892 (node_id IC_kwDOCGYnMM5ezZc0)
Author: chapmanjacobd (7908073) · author_association: CONTRIBUTOR
Issue: Aliased ROWID option for tables created from alter=True commands (1740150327) · https://api.github.com/repos/simonw/sqlite-utils/issues/557
Created: 2023-06-14T06:09:21Z · Updated: 2023-06-14T06:09:21Z
URL: https://github.com/simonw/sqlite-utils/issues/557#issuecomment-1590531892
Reactions: none

I put together a [simple script](https://github.com/chapmanjacobd/library/blob/42129c5ebe15f9d74653c0f5ca4ed0c991d383e0/xklb/scripts/dedupe_db.py) to upsert and remove duplicate rows based on business keys. If anyone has similar problems to the above, this might help:

```sql
CREATE TABLE my_table (
    id INTEGER PRIMARY KEY,
    column1 TEXT,
    column2 TEXT,
    column3 TEXT
);

INSERT INTO my_table (column1, column2, column3)
VALUES
    ('Value 1', 'Duplicate 1', 'Duplicate A'),
    ('Value 2', 'Duplicate 2', 'Duplicate B'),
    ('Value 3', 'Duplicate 2', 'Duplicate C'),
    ('Value 4', 'Duplicate 3', 'Duplicate D'),
    ('Value 5', 'Duplicate 3', 'Duplicate E'),
    ('Value 6', 'Duplicate 3', 'Duplicate F');
```

```
library dedupe-db test.db my_table --bk column2
```
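
The duplicate-removal half of that command can be sketched in plain SQL; this is an illustration rather than the linked script's actual implementation, and it assumes the row with the lowest id is the one kept for each business key:

```sql
-- Keep one row per business key (column2), preferring the lowest id.
-- On the sample data above this leaves 'Value 1', 'Value 2', and 'Value 4'.
DELETE FROM my_table
WHERE id NOT IN (
    SELECT MIN(id)
    FROM my_table
    GROUP BY column2
);
```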

Comment 1592047502 (node_id IC_kwDOCGYnMM5e5LeO)
Author: chapmanjacobd (7908073) · author_association: CONTRIBUTOR
Issue: Filter table by a large bunch of ids (1733198948) · https://api.github.com/repos/simonw/sqlite-utils/issues/555
Created: 2023-06-14T22:00:10Z · Updated: 2023-06-14T22:01:57Z
URL: https://github.com/simonw/sqlite-utils/issues/555#issuecomment-1592047502
Reactions: none

You may want to try a performance comparison between this and just selecting all the ids with few constraints and then doing the filtering within Python. That might seem like a lazy, inefficient approach, but queries with large result sets have a different profile from what databases like SQLite are designed for. That is not to say that SQLite is slow or that Python is always faster, but once you start reading more than ~20% of an index an equilibrium point is reached, especially once you add in writing extra temp tables and such to memory/disk, and especially given the `NOT IN` style of query...

You may also try chunking, like this:

```py
from typing import Generator

def chunks(lst, n) -> Generator:
    """Yield successive n-sized slices of lst."""
    for i in range(0, len(lst), n):
        yield lst[i : i + n]

# Stay under SQLite's bound-parameter limit per statement
# (32766 by default since SQLite 3.32.0; 999 before that).
SQLITE_PARAM_LIMIT = 32765

data = []
for ids in chunks(video_ids, SQLITE_PARAM_LIMIT):
    # db is a sqlite_utils.Database; db.query() yields dicts
    data.extend(
        db.query(
            "SELECT * FROM videos WHERE id IN ("
            + ",".join(["?"] * len(ids))
            + ")",
            ids,
        )
    )
```

but that actually won't work with your `NOT IN` requirement: a row only qualifies if it matches none of the excluded ids, so each row has to be checked against the full set.

Since you are doing stuff with files/videos in SQLite you might be interested in my side project: https://github.com/chapmanjacobd/library
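
One standard workaround for the `NOT IN` case, related to the temp-table cost mentioned above, is to load the excluded ids into a temporary table in chunks and then issue a single query against it; a sketch, not from the original thread:

```sql
-- Load the excluded ids in chunks (each INSERT stays under the parameter
-- limit), then a single query checks every row against the full set.
CREATE TEMP TABLE excluded_ids (id INTEGER PRIMARY KEY);
-- INSERT INTO excluded_ids VALUES (?), (?), ...;  -- repeated once per chunk
SELECT * FROM videos
WHERE id NOT IN (SELECT id FROM excluded_ids);
```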

Comment 1592052320 (node_id IC_kwDOCGYnMM5e5Mpg)
Author: chapmanjacobd (7908073) · author_association: CONTRIBUTOR
Issue: rows: --transpose or psql extended view-like functionality (1655860104) · https://api.github.com/repos/simonw/sqlite-utils/issues/535
Created: 2023-06-14T22:05:28Z · Updated: 2023-06-14T22:05:28Z
URL: https://github.com/simonw/sqlite-utils/issues/535#issuecomment-1592052320
Reactions: none

Piping to `jq` is usually good enough.

Comment 1592110694 (node_id IC_kwDOCGYnMM5e5a5m)
Author: chapmanjacobd (7908073) · author_association: CONTRIBUTOR
Issue: Microsoft line endings (1581090327) · https://api.github.com/repos/simonw/sqlite-utils/issues/529
Created: 2023-06-14T23:11:47Z · Updated: 2023-06-14T23:12:12Z
URL: https://github.com/simonw/sqlite-utils/issues/529#issuecomment-1592110694
Reactions: none

Sorry, I was wrong. `sqlite-utils --raw-lines` works correctly (`cat -A` would show `^M$` at the end of each line if CRLF line endings were present):

```
sqlite-utils --raw-lines :memory: "SELECT * FROM (VALUES ('test'), ('line2'))" | cat -A
test$
line2$

sqlite-utils --csv --no-headers :memory: "SELECT * FROM (VALUES ('test'), ('line2'))" | cat -A
test$
line2$
```

I think this was fixed somewhat recently.

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
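
For orientation, a query sketch that walks the foreign keys above; it assumes the referenced [users] table has a [login] column and [issues] has a [title] column, neither of which is shown in this schema:

```sql
-- Join each comment to its author and issue via the foreign keys above
-- (assumes [users].[login] and [issues].[title] exist; they are not shown here).
SELECT
    users.login,
    issues.title,
    issue_comments.created_at,
    issue_comments.html_url
FROM issue_comments
JOIN users ON users.id = issue_comments.[user]
JOIN issues ON issues.id = issue_comments.issue
WHERE issue_comments.author_association = 'CONTRIBUTOR'
ORDER BY issue_comments.created_at;
```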