issue_comments

5 rows where issue = 1271426387 (CSV `extras_key=` and `ignore_extras=` equivalents for CLI tool)


---

**simonw** (OWNER) · 2022-06-14T23:28:18Z · [comment 1155804459](https://github.com/simonw/sqlite-utils/issues/444#issuecomment-1155804459)

I think these become part of the `_import_options` list, which is used in a few places: https://github.com/simonw/sqlite-utils/blob/b8af3b96f5c72317cc8783dc296a94f6719987d9/sqlite_utils/cli.py#L765-L800
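The shared-options pattern being referenced can be sketched roughly like this — a simplified stand-in for `click.option`, not sqlite-utils' actual code: a tuple of option decorators is defined once and applied to each command that needs it.

```python
def option(name, help=""):
    """Stand-in for click.option: records the option name on the function."""
    def decorator(fn):
        fn.params = getattr(fn, "params", [])
        fn.params.append(name)
        return fn
    return decorator

# One shared tuple of decorators, reused by several commands
_import_options = (
    option("--encoding", help="Character encoding for input"),
    option("--ignore-extras", help="Ignore extra values on long CSV lines"),
    option("--extras-key", help="Column to collect extra values into"),
)

def import_options(fn):
    # Apply in reverse so the declared order matches the order seen in --help
    for decorator in reversed(_import_options):
        fn = decorator(fn)
    return fn

@import_options
def insert_command():
    pass
```

Adding the two new options to the shared tuple is then enough to surface them on every command decorated with it.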
---

**simonw** (OWNER) · 2022-06-14T23:28:36Z · [comment 1155804591](https://github.com/simonw/sqlite-utils/issues/444#issuecomment-1155804591)

I'm going with `--extras-key` and `--ignore-extras` as the two new options.
---

**simonw** (OWNER) · 2022-06-14T23:48:16Z · [comment 1155815186](https://github.com/simonw/sqlite-utils/issues/444#issuecomment-1155815186)

This is tricky to implement because of this code: https://github.com/simonw/sqlite-utils/blob/b8af3b96f5c72317cc8783dc296a94f6719987d9/sqlite_utils/cli.py#L938-L945

It's reconstructing each document using the known headers here:

`docs = (dict(zip(headers, row)) for row in reader)`

So my first attempt at this - the diff here - did not have the desired result:

```diff
diff --git a/sqlite_utils/cli.py b/sqlite_utils/cli.py
index 86eddfb..00b920b 100644
--- a/sqlite_utils/cli.py
+++ b/sqlite_utils/cli.py
@@ -6,7 +6,7 @@ import hashlib
 import pathlib
 import sqlite_utils
 from sqlite_utils.db import AlterError, BadMultiValues, DescIndex
-from sqlite_utils.utils import maximize_csv_field_size_limit
+from sqlite_utils.utils import maximize_csv_field_size_limit, _extra_key_strategy
 from sqlite_utils import recipes
 import textwrap
 import inspect
@@ -797,6 +797,15 @@ _import_options = (
         "--encoding",
         help="Character encoding for input, defaults to utf-8",
     ),
+    click.option(
+        "--ignore-extras",
+        is_flag=True,
+        help="If a CSV line has more than the expected number of values, ignore the extras",
+    ),
+    click.option(
+        "--extras-key",
+        help="If a CSV line has more than the expected number of values put them in a list in this column",
+    ),
 )
@@ -885,6 +894,8 @@ def insert_upsert_implementation(
     sniff,
     no_headers,
     encoding,
+    ignore_extras,
+    extras_key,
     batch_size,
     alter,
     upsert,
@@ -909,6 +920,10 @@ def insert_upsert_implementation(
         raise click.ClickException("--flatten cannot be used with --csv or --tsv")
     if encoding and not (csv or tsv):
         raise click.ClickException("--encoding must be used with --csv or --tsv")
+    if ignore_extras and extras_key:
+        raise click.ClickException(
+            "--ignore-extras and --extras-key cannot be used together"
+        )
     if pk and len(pk) == 1:
         pk = pk[0]
     encodin…
```
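For context, a helper like `_extra_key_strategy()` presumably does something along these lines — a hypothetical sketch, not the library's actual implementation — relying on the fact that `csv.DictReader` parks surplus values in a list under the `None` key:

```python
def extra_key_strategy(rows, ignore_extras=False, extras_key=None):
    # Hypothetical sketch of a helper like _extra_key_strategy().
    # Rows are dicts from csv.DictReader; surplus values sit under None.
    # Either drop those extras or move them into a named column.
    for row in rows:
        extras = row.pop(None, None)
        if extras is not None and extras_key and not ignore_extras:
            row[extras_key] = extras
        yield row

rows = [{"a": "1", "b": "2", None: ["3", "4"]}]
result = list(extra_key_strategy(rows, extras_key="extras"))
print(result)  # [{'a': '1', 'b': '2', 'extras': ['3', '4']}]
```

Crucially, this only makes sense when the input rows are dictionaries — which is exactly why wiring it up to `csv.reader()` in the diff above could not work.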
---

**simonw** (OWNER) · 2022-06-14T23:49:56Z (edited 2022-07-07T16:39:18Z) · [comment 1155815956](https://github.com/simonw/sqlite-utils/issues/444#issuecomment-1155815956)

Yeah, my initial implementation there makes no sense:

```python
csv_reader_args = {"dialect": dialect}
if delimiter:
    csv_reader_args["delimiter"] = delimiter
if quotechar:
    csv_reader_args["quotechar"] = quotechar
reader = _extra_key_strategy(
    csv_std.reader(decoded, **csv_reader_args), ignore_extras, extras_key
)
first_row = next(reader)
if no_headers:
    headers = ["untitled_{}".format(i + 1) for i in range(len(first_row))]
    reader = itertools.chain([first_row], reader)
else:
    headers = first_row
docs = (dict(zip(headers, row)) for row in reader)
```

Because my `_extra_key_strategy()` helper function is designed to work against `csv.DictReader` - not against `csv.reader()`, which returns a sequence of lists, not a sequence of dictionaries.

In fact, what's happening here is that `dict(zip(headers, row))` is ignoring anything in the row that doesn't correspond to a header:

```pycon
>>> list(zip(["a", "b"], [1, 2, 3]))
[('a', 1), ('b', 2)]
```
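The distinction matters because `csv.DictReader` treats over-long rows differently from `dict(zip(...))`: instead of discarding surplus values, it collects them into a list under its `restkey` (which defaults to `None`). A minimal demonstration:

```python
import csv
import io

# A two-column header followed by a four-value row: DictReader keeps the
# surplus values under the restkey (None by default), whereas
# dict(zip(headers, row)) would silently discard them.
data = io.StringIO("a,b\n1,2,3,4\n")
row = next(csv.DictReader(data))
print(row)  # {'a': '1', 'b': '2', None: ['3', '4']}
```

That `None`-keyed list is what a DictReader-based extras helper has to work with — and what the `csv.reader()` code path above never produces.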
---

**simonw** (OWNER) · 2022-06-15T04:18:05Z · [comment 1155966234](https://github.com/simonw/sqlite-utils/issues/444#issuecomment-1155966234)

I'm going to push a branch with my not-yet-working code (which does at least include a test).


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
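As a quick sanity check, the schema above can be loaded into an in-memory SQLite database and queried by the indexed `issue` column. This is a sketch: the foreign-key clauses are dropped (the `users` and `issues` tables are not shown here) and the sample row is invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER,
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# Invented sample row, for illustration only
conn.execute(
    "INSERT INTO issue_comments (id, user, issue, body) VALUES (?, ?, ?, ?)",
    (1155804459, 9599, 1271426387, "example body"),
)

# The query behind "rows where issue = 1271426387", served by the index
count = conn.execute(
    "SELECT count(*) FROM issue_comments WHERE issue = ?", (1271426387,)
).fetchone()[0]
print(count)
```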
Powered by Datasette · About: simonw/datasette-graphql