issues
7 rows where user = 4068
#269: bool type not supported
id 919250621 | node_id MDU6SXNzdWU5MTkyNTA2MjE= | type issue | repo sqlite-utils (140912432)
user frafra (4068) | state closed (state_reason: completed) | locked 0 | comments 3 | author_association NONE
created_at 2021-06-11T22:00:36Z | updated_at 2021-06-15T01:34:10Z | closed_at 2021-06-15T01:34:10Z
reactions: total_count 0 (https://api.github.com/repos/simonw/sqlite-utils/issues/269/reactions)

Hi! Thank you for sharing this very nice tool :) It would be nice to have support for more types, like `bool`: it is not possible to convert to boolean at the moment. My suggestion would be to handle it as `bool(int(value))`, like csvkit does.
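As a quick illustration of the `bool(int(value))` approach suggested above, here is a minimal sketch; `parse_bool` is a hypothetical helper name, not part of sqlite-utils or csvkit.

```python
def parse_bool(value):
    # Hypothetical helper: coerce CSV string values such as "0"/"1" to Python
    # booleans via the bool(int(value)) approach mentioned in the issue.
    return bool(int(value))

assert parse_bool("0") is False
assert parse_bool("1") is True
```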
#270: Cannot set type JSON
id 919314806 | node_id MDU6SXNzdWU5MTkzMTQ4MDY= | type issue | repo sqlite-utils (140912432)
user frafra (4068) | state closed (state_reason: completed) | locked 0 | comments 4 | author_association NONE
created_at 2021-06-11T23:53:22Z | updated_at 2021-06-16T17:34:49Z | closed_at 2021-06-16T15:47:06Z
reactions: total_count 0 (https://api.github.com/repos/simonw/sqlite-utils/issues/270/reactions)

It would be great if the column type could be set to JSON. That would not be different from handling a regular string. It would be something like `repr(value)`, and it would work with both JSON and CSV inputs, no matter if `value` is a real list or just a string representing a list.
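A rough sketch of what storing a value in a JSON-typed (TEXT) column could look like, using Python's sqlite3 module. It uses `json.dumps` rather than the `repr(value)` mentioned above so that SQLite's JSON functions can read the stored text, and it assumes a SQLite build with the JSON1 functions available; this is not sqlite-utils' actual implementation.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE example (id INTEGER PRIMARY KEY, tags TEXT)")

value = ["a", "b"]  # could equally arrive from CSV as the string '["a", "b"]'
# Serialize real Python objects to JSON text; keep strings as-is.
encoded = value if isinstance(value, str) else json.dumps(value)
conn.execute("INSERT INTO example (tags) VALUES (?)", (encoded,))

# The column is plain TEXT, but JSON1 can still query into it.
print(conn.execute("SELECT json_extract(tags, '$[1]') FROM example").fetchone())  # ('b',)
```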
#1375: JSON export dumps JSON fields as TEXT
id 919508498 | node_id MDU6SXNzdWU5MTk1MDg0OTg= | type issue | repo datasette (107914493)
user frafra (4068) | state closed (state_reason: completed) | locked 0 | comments 2 | author_association NONE
created_at 2021-06-12T09:45:08Z | updated_at 2021-06-14T09:41:59Z | closed_at 2021-06-13T15:37:58Z
reactions: total_count 0 (https://api.github.com/repos/simonw/datasette/issues/1375/reactions)

Hi! When a user tries to export data as JSON, I would expect to see the value of JSON columns represented as JSON instead of being rendered as a string. What do you think?
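For illustration, a tiny standalone sketch of the difference being described: a JSON column exported as a quoted string versus parsed into a nested value first. The row dict here is made up for the example.

```python
import json

row = {"id": 1, "tags": '["a", "b"]'}  # "tags" holds JSON stored as TEXT

# Exporting the row as-is re-encodes the JSON text as an escaped string.
print(json.dumps(row))  # {"id": 1, "tags": "[\"a\", \"b\"]"}

# Parsing JSON columns first lets them nest naturally in the export.
row["tags"] = json.loads(row["tags"])
print(json.dumps(row))  # {"id": 1, "tags": ["a", "b"]}
```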
#438: illegal UTF-16 surrogate
id 1250161887 | node_id I_kwDOCGYnMM5Kg_Tf | type issue | repo sqlite-utils (140912432)
user frafra (4068) | state closed (state_reason: completed) | locked 0 | comments 2 | author_association NONE
created_at 2022-05-26T22:49:52Z | updated_at 2022-05-27T08:21:53Z | closed_at 2022-05-27T08:21:53Z
reactions: total_count 0 (https://api.github.com/repos/simonw/sqlite-utils/issues/438/reactions)

I am trying to insert `https://artsdatabanken.no/Fab2018/api/export/csv` into a SQLite database, but I get an error when using `sqlite-utils`:

```
sqlite-utils insert --csv --delimiter ";" --encoding="utf-16-le" --pk "Id" csv fremmedart test.db
[------------------------------------]    0%
Error: 'utf-16-le' codec can't decode bytes in position 98-99: illegal UTF-16 surrogate

The input you provided uses a character encoding other than utf-8.

You can fix this by passing the --encoding= option with the encoding of the file.

If you do not know the encoding, running 'file filename.csv' may tell you.

It's often worth trying: --encoding=latin-1
```

I tried to convert the file using `iconv -f "utf-16le" -t "utf-8"`, but I still get a similar error (slightly different position):

```
sqlite-utils insert --csv --delimiter ";" --encoding=utf-8 --pk "Id" csv_utf8 fremmedart test.db
[------------------------------------]    0%
Error: 'utf-8' codec can't decode byte 0xd9 in position 99: invalid continuation byte

The input you provided uses a character encoding other than utf-8.

You can fix this by passing the --encoding= option with the encoding of the file.

If you do not know the encoding, running 'file filename.csv' may tell you.

It's often worth trying: --encoding=latin-1
```

I have no issues reading such a file using this Python code:

```python
content = open('csv', encoding='utf-16-le').read()
```

`in2csv` works too.
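To narrow down errors like the one above, one simple step is to look at the raw bytes around the offset reported in the error message; the sketch below does only that and makes no claim about the root cause. It assumes the downloaded file is saved locally as `csv`, as in the commands above.

```python
# Peek at the raw bytes around positions 98-99 reported by the utf-16-le error.
with open("csv", "rb") as f:
    data = f.read(128)

print(data[88:108].hex(" "))                               # raw bytes, space-separated
print(data[88:108].decode("utf-16-le", errors="replace"))  # lossy text preview
```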
#439: Misleading progress bar against utf-16-le CSV input
id 1250495688 | node_id I_kwDOCGYnMM5KiQzI | type issue | repo sqlite-utils (140912432)
user frafra (4068) | state open | locked 0 | comments 12 | author_association NONE
created_at 2022-05-27T08:34:49Z | updated_at 2022-06-15T03:53:43Z
reactions: total_count 0 (https://api.github.com/repos/simonw/sqlite-utils/issues/439/reactions)

The program crashes without any error.

```
wget "https://artsdatabanken.no/Fab2018/api/export/csv"
sqlite-utils create-database test.db
sqlite-utils insert --csv --delimiter ";" --encoding "utf-16-le" test test.db csv
[------------------------------------]    0%
[#################-------------------]   49%  00:00:01
```

I would like to highlight two issues:

1. sqlite-utils catches exceptions without printing the stack trace and/or re-raising the exception, so there is no easy way to use `pdb` or similar to debug the program; solution: add a debug option (see the sketch after this entry).
2. Silent crash: this is related to (1.) and happens when there is a catch-all mechanism; solution: let the program fail.
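Point 1 above, sketched generically with click (which sqlite-utils is built on): a catch-all handler that re-raises when a `--debug` flag is set, so the full traceback and `pdb` post-mortem become available. This is an illustration, not sqlite-utils' actual CLI code; `do_the_insert` is a stand-in for the real import work.

```python
import click

def do_the_insert():
    # Stand-in for the real import work; raises to simulate the silent crash.
    raise ValueError("boom")

@click.command()
@click.option("--debug", is_flag=True, help="Re-raise unexpected errors instead of swallowing them.")
def insert(debug):
    try:
        do_the_insert()
    except Exception:
        if debug:
            raise  # full traceback becomes visible, and pdb post-mortem works
        click.echo("Error: the import failed (re-run with --debug for details)", err=True)
        raise SystemExit(1)

if __name__ == "__main__":
    insert()
```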
#440: CSV files with too many values in a row cause errors
id 1250629388 | node_id I_kwDOCGYnMM5KixcM | type issue | repo sqlite-utils (140912432)
user frafra (4068) | state closed (state_reason: completed) | locked 0 | comments 20 | author_association NONE
created_at 2022-05-27T10:54:44Z | updated_at 2022-06-14T22:23:01Z | closed_at 2022-06-14T20:12:46Z
reactions: total_count 0 (https://api.github.com/repos/simonw/sqlite-utils/issues/440/reactions)

*Original title: csv.DictReader can have None as key*

In some cases, `csv.DictReader` can have `None` as a key for unnamed columns, with a list of values as the value. `sqlite_utils.utils.rows_from_file` cannot handle that:

```python
import sqlite_utils
from urllib.request import urlopen

url = "https://artsdatabanken.no/Fab2018/api/export/csv"
db = sqlite_utils.Database(":memory:")

with urlopen(url) as fab:
    reader, _ = sqlite_utils.utils.rows_from_file(fab, encoding="utf-16le")
    db["fab2018"].insert_all(reader, pk="Id")
```

Result:

```
Traceback (most recent call last):
  File "<stdin>", line 3, in <module>
  File "/home/user/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/sqlite_utils/db.py", line 2924, in insert_all
    chunk = list(chunk)
  File "/home/user/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/sqlite_utils/db.py", line 3454, in fix_square_braces
    if any("[" in key or "]" in key for key in record.keys()):
  File "/home/user/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/sqlite_utils/db.py", line 3454, in <genexpr>
    if any("[" in key or "]" in key for key in record.keys()):
TypeError: argument of type 'NoneType' is not iterable
```

Code: https://github.com/simonw/sqlite-utils/blob/59be60c471fd7a2c4be7f75e8911163e618ff5ca/sqlite_utils/db.py#L3454

`sqlite-utils insert` from the command line is not affected by this issue.
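The `None` key behaviour described above is standard library behaviour and easy to reproduce on its own: `csv.DictReader` files extra values under its `restkey`, which defaults to `None`.

```python
import csv
import io

# A row with more values than there are header columns: DictReader stores the
# extras as a list under its restkey, which defaults to None.
data = io.StringIO("a;b\n1;2;3;4\n")
row = next(csv.DictReader(data, delimiter=";"))
print(row)  # {'a': '1', 'b': '2', None: ['3', '4']}
```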
#1814: Static files not served
id 1378495690 | node_id I_kwDOBm6k_c5SKizK | type issue | repo datasette (107914493)
user frafra (4068) | state closed (state_reason: completed) | locked 0 | comments 2 | author_association NONE
created_at 2022-09-19T20:38:17Z | updated_at 2022-09-19T23:35:06Z | closed_at 2022-09-19T23:34:30Z
reactions: total_count 0 (https://api.github.com/repos/simonw/datasette/issues/1814/reactions)

Folder structure:

```
bibliography/
bibliography/static-files
bibliography/static-files/styles.css
bibliography/bibliography.db
bibliography/metadata.json
bibliography/settings.json
```

```
$ cat bibliography/settings.json
{
    "suggest_facets": false,
    "truncate_cells_html": 1000,
    "static": "assets:static-files/"
}
```

The file `/assets/styles.css` is not found (HTTP 404, `Database not found: assets`). Using datasette revision d0737e4de51ce178e556fc011ccb8cc46bbb6359.
Table schema:

```sql
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [pull_request] TEXT,
    [body] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT,
    [active_lock_reason] TEXT,
    [performed_via_github_app] TEXT,
    [reactions] TEXT,
    [draft] INTEGER,
    [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
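For reference, the filter behind this page ("7 rows where user = 4068") can be reproduced against a database with the schema above using Python's sqlite3 module; `github.db` is a placeholder filename.

```python
import sqlite3

conn = sqlite3.connect("github.db")  # placeholder path to a database using the schema above
conn.row_factory = sqlite3.Row

rows = conn.execute(
    "SELECT number, title, state, repo FROM issues WHERE user = ? ORDER BY id",
    (4068,),
).fetchall()
for row in rows:
    print(dict(row))
```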