{"html_url": "https://github.com/simonw/sqlite-utils/issues/297#issuecomment-882052693", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/297", "id": 882052693, "node_id": "IC_kwDOCGYnMM40kw5V", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-07-18T12:57:54Z", "updated_at": "2022-06-21T13:17:15Z", "author_association": "OWNER", "body": "Another implementation option would be to use the CSV virtual table mechanism. This could avoid shelling out to the `sqlite3` binary, but requires solving the harder problem of compiling and distributing a loadable SQLite module: https://www.sqlite.org/csv.html\r\n\r\n(Would be neat to produce a Python wheel of this, see https://simonwillison.net/2022/May/23/bundling-binary-tools-in-python-wheels/)\r\n\r\nThis would also help solve the challenge of making this optimization available to the `sqlite-utils memory` command. That command operates against an in-memory database so it's not obvious how it could shell out to a binary.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 944846776, "label": "Option for importing CSV data using the SQLite .import mechanism"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/297#issuecomment-882052852", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/297", "id": 882052852, "node_id": "IC_kwDOCGYnMM40kw70", "user": {"value": 9599, "label": "simonw"}, "created_at": "2021-07-18T12:59:20Z", "updated_at": "2021-07-18T12:59:20Z", "author_association": "OWNER", "body": "I'm not too worried about `sqlite-utils memory` because if your data is large enough that you can benefit from this optimization you probably should use a real file as opposed to a disposable memory database when analyzing it.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 944846776, "label": "Option for importing CSV data using the SQLite .import mechanism"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/32#issuecomment-882091516", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/32", "id": 882091516, "node_id": "IC_kwDOD079W840k6X8", "user": {"value": 10793464, "label": "aaronyih1"}, "created_at": "2021-07-18T17:29:39Z", "updated_at": "2021-07-18T17:33:02Z", "author_association": "NONE", "body": "Same here for US West (N. California) us-west-1. Running on Catalina.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 803333769, "label": "KeyError: 'Contents' on running upload"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/123#issuecomment-882096402", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/123", "id": 882096402, "node_id": "IC_kwDOBm6k_c40k7kS", "user": {"value": 921217, "label": "RayBB"}, "created_at": "2021-07-18T18:07:29Z", "updated_at": "2021-07-18T18:07:29Z", "author_association": "NONE", "body": "I also love the idea for this feature and wonder if it could work without having to download the whole database into memory at once if it's a rather large db. 
{"html_url": "https://github.com/simonw/datasette/issues/123#issuecomment-882096402", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/123", "id": 882096402, "node_id": "IC_kwDOBm6k_c40k7kS", "user": {"value": 921217, "label": "RayBB"}, "created_at": "2021-07-18T18:07:29Z", "updated_at": "2021-07-18T18:07:29Z", "author_association": "NONE", "body": "I also love the idea for this feature and wonder if, for a rather large db, it could work without having to download the whole database into memory at once. Obviously this could be slower, but it could have many use cases.\r\n\r\nMy comment is partially inspired by this post about streaming sqlite dbs from github pages or such:\r\nhttps://news.ycombinator.com/item?id=27016630\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 275125561, "label": "Datasette serve should accept paths/URLs to CSVs and other file formats"}, "performed_via_github_app": null}
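The Hacker News post linked in comment 882096402 describes reading a remote SQLite file piecemeal with HTTP Range requests rather than downloading it whole. A tiny illustrative sketch of that core idea, with a placeholder URL, might look like this:

```python
# Illustrative sketch only (https://example.com/data.db is a placeholder):
# fetch just the first 4 KiB page of a remote SQLite file with an HTTP
# Range request instead of downloading the entire database.
import urllib.request

req = urllib.request.Request(
    "https://example.com/data.db",          # placeholder database URL
    headers={"Range": "bytes=0-4095"},      # request only the first page
)
with urllib.request.urlopen(req) as resp:
    first_page = resp.read()

# A SQLite file begins with the 16-byte magic header "SQLite format 3\x00",
# so a quick sanity check confirms the server honored the Range request.
print(first_page[:16])
```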