{"html_url": "https://github.com/simonw/sqlite-utils/issues/440#issuecomment-1155310521", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/440", "id": 1155310521, "node_id": "IC_kwDOCGYnMM5E3KO5", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T14:58:50Z", "updated_at": "2022-06-14T14:58:50Z", "author_association": "OWNER", "body": "Interesting challenge in writing tests for this: if you give `csv.Sniffer` a short example with an invalid row in it sometimes it picks the wrong delimiter!\r\n\r\n id,name\\r\\n1,Cleo,oops\r\n\r\nIt decided the delimiter there was `e`.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1250629388, "label": "CSV files with too many values in a row cause errors"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/440#issuecomment-1155317293", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/440", "id": 1155317293, "node_id": "IC_kwDOCGYnMM5E3L4t", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T15:04:01Z", "updated_at": "2022-06-14T15:04:01Z", "author_association": "OWNER", "body": "I think that's unavoidable: it looks like `csv.Sniffer` only works if you feed it a CSV file with an equal number of values in each row, which is understandable.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1250629388, "label": "CSV files with too many values in a row cause errors"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/440#issuecomment-1155350755", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/440", "id": 1155350755, "node_id": "IC_kwDOCGYnMM5E3UDj", "user": {"value": 9599, "label": "simonw"}, "created_at": 
"2022-06-14T15:25:18Z", "updated_at": "2022-06-14T15:25:18Z", "author_association": "OWNER", "body": "That broke `mypy`:\r\n\r\n`sqlite_utils/utils.py:229: error: Incompatible types in assignment (expression has type \"Iterable[Dict[Any, Any]]\", variable has type \"DictReader[str]\")`", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1250629388, "label": "CSV files with too many values in a row cause errors"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/440#issuecomment-1155358637", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/440", "id": 1155358637, "node_id": "IC_kwDOCGYnMM5E3V-t", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T15:31:34Z", "updated_at": "2022-06-14T15:31:34Z", "author_association": "OWNER", "body": "Getting this past `mypy` is really hard!\r\n\r\n```\r\n% mypy sqlite_utils\r\nsqlite_utils/utils.py:189: error: No overload variant of \"pop\" of \"MutableMapping\" matches argument type \"None\"\r\nsqlite_utils/utils.py:189: note: Possible overload variants:\r\nsqlite_utils/utils.py:189: note: def pop(self, key: str) -> str\r\nsqlite_utils/utils.py:189: note: def [_T] pop(self, key: str, default: Union[str, _T] = ...) -> Union[str, _T]\r\n```\r\nThat's because of this line:\r\n\r\n row.pop(None)\r\n\r\nWhich is legit here - we have a dictionary where one of the keys is `None` and we want to remove that key. 
But the baked in type is apparently `def pop(self, key: str) -> str`.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1250629388, "label": "CSV files with too many values in a row cause errors"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/412#issuecomment-1155364367", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/412", "id": 1155364367, "node_id": "IC_kwDOCGYnMM5E3XYP", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T15:36:28Z", "updated_at": "2022-06-14T15:36:28Z", "author_association": "OWNER", "body": "Here's as far as I got with my initial prototype, in `sqlite_utils/pandas.py`:\r\n\r\n```python\r\nfrom .db import Database as _Database, Table as _Table, View as _View\r\nimport pandas as pd\r\nfrom typing import (\r\n Iterable,\r\n Union,\r\n Optional,\r\n)\r\n\r\n\r\nclass Database(_Database):\r\n def query(\r\n self, sql: str, params: Optional[Union[Iterable, dict]] = None\r\n ) -> pd.DataFrame:\r\n return pd.DataFrame(super().query(sql, params))\r\n\r\n def table(self, table_name: str, **kwargs) -> Union[\"Table\", \"View\"]:\r\n \"Return a table object, optionally configured with default options.\"\r\n klass = View if table_name in self.view_names() else Table\r\n return klass(self, table_name, **kwargs)\r\n\r\n\r\nclass PandasQueryable:\r\n def rows_where(\r\n self,\r\n where: str = None,\r\n where_args: Optional[Union[Iterable, dict]] = None,\r\n order_by: str = None,\r\n select: str = \"*\",\r\n limit: int = None,\r\n offset: int = None,\r\n ) -> pd.DataFrame:\r\n return pd.DataFrame(\r\n super().rows_where(\r\n where,\r\n where_args,\r\n order_by=order_by,\r\n select=select,\r\n limit=limit,\r\n offset=offset,\r\n )\r\n )\r\n\r\n\r\nclass Table(PandasQueryable, _Table):\r\n pass\r\n\r\n\r\nclass View(PandasQueryable, _View):\r\n 
pass\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1160182768, "label": "Optional Pandas integration"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/440#issuecomment-1155389614", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/440", "id": 1155389614, "node_id": "IC_kwDOCGYnMM5E3diu", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T15:54:03Z", "updated_at": "2022-06-14T15:54:03Z", "author_association": "OWNER", "body": "Filed an issue against `python/typeshed`:\r\n\r\n- https://github.com/python/typeshed/issues/8075", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1250629388, "label": "CSV files with too many values in a row cause errors"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/441#issuecomment-1155421299", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/441", "id": 1155421299, "node_id": "IC_kwDOCGYnMM5E3lRz", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T16:23:52Z", "updated_at": "2022-06-14T16:23:52Z", "author_association": "OWNER", "body": "Actually I have a thought for something that could help here: I could add a mechanism for inserting additional where filters and parameters into that `.search()` method.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1257724585, "label": "Combining `rows_where()` and `search()` to limit which rows are searched"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/441#issuecomment-1155515426", "issue_url": 
"https://api.github.com/repos/simonw/sqlite-utils/issues/441", "id": 1155515426, "node_id": "IC_kwDOCGYnMM5E38Qi", "user": {"value": 1448859, "label": "betatim"}, "created_at": "2022-06-14T17:53:43Z", "updated_at": "2022-06-14T17:53:43Z", "author_association": "NONE", "body": "That would be handy (additional where filters) but I think the trick with the `with` statement is already an order of magnitude better than what I had thought of, so my problem is solved by it (plus I got to learn about `with` today!)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1257724585, "label": "Combining `rows_where()` and `search()` to limit which rows are searched"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/440#issuecomment-1155666672", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/440", "id": 1155666672, "node_id": "IC_kwDOCGYnMM5E4hLw", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T20:11:52Z", "updated_at": "2022-06-14T20:11:52Z", "author_association": "OWNER", "body": "I'm going to rename `restkey` to `extras_key` for consistency with `ignore_extras`.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1250629388, "label": "CSV files with too many values in a row cause errors"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/443#issuecomment-1155672522", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/443", "id": 1155672522, "node_id": "IC_kwDOCGYnMM5E4inK", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T20:18:58Z", "updated_at": "2022-06-14T20:18:58Z", "author_association": "OWNER", "body": "New documentation: 
https://sqlite-utils.datasette.io/en/latest/python-api.html#reading-rows-from-a-file", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1269998342, "label": "Make `utils.rows_from_file()` a documented API"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/440#issuecomment-1155672675", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/440", "id": 1155672675, "node_id": "IC_kwDOCGYnMM5E4ipj", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T20:19:07Z", "updated_at": "2022-06-14T20:19:07Z", "author_association": "OWNER", "body": "Documentation: https://sqlite-utils.datasette.io/en/latest/python-api.html#reading-rows-from-a-file", "reactions": "{\"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 1, \"eyes\": 0}", "issue": {"value": 1250629388, "label": "CSV files with too many values in a row cause errors"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/442#issuecomment-1155714131", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/442", "id": 1155714131, "node_id": "IC_kwDOCGYnMM5E4sxT", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T21:07:50Z", "updated_at": "2022-06-14T21:07:50Z", "author_association": "OWNER", "body": "Here's the commit where I added that originally, including a test: https://github.com/simonw/sqlite-utils/commit/1a93b72ba710ea2271eaabc204685a27d2469374", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1269886084, "label": "`maximize_csv_field_size_limit()` utility function"}, "performed_via_github_app": null} {"html_url": 
"https://github.com/simonw/sqlite-utils/issues/442#issuecomment-1155748444", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/442", "id": 1155748444, "node_id": "IC_kwDOCGYnMM5E41Jc", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T21:55:15Z", "updated_at": "2022-06-14T21:55:15Z", "author_association": "OWNER", "body": "Documentation: https://sqlite-utils.datasette.io/en/latest/python-api.html#setting-the-maximum-csv-field-size-limit", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1269886084, "label": "`maximize_csv_field_size_limit()` utility function"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/433#issuecomment-1155749696", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/433", "id": 1155749696, "node_id": "IC_kwDOCGYnMM5E41dA", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T21:57:05Z", "updated_at": "2022-06-14T21:57:05Z", "author_association": "OWNER", "body": "Marking this as help wanted because I can't figure out how to replicate it!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1239034903, "label": "CLI eats my cursor"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/441#issuecomment-1155750270", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/441", "id": 1155750270, "node_id": "IC_kwDOCGYnMM5E41l-", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T21:57:57Z", "updated_at": "2022-06-14T21:57:57Z", "author_association": "OWNER", "body": "I added `where=` and `where_args=` parameters to that `.search()` method - updated documentation is here: 
"https://sqlite-utils.datasette.io/en/latest/python-api.html#searching-with-table-search", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1257724585, "label": "Combining `rows_where()` and `search()` to limit which rows are searched"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/431#issuecomment-1155753397", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/431", "id": 1155753397, "node_id": "IC_kwDOCGYnMM5E42W1", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T22:01:38Z", "updated_at": "2022-06-14T22:01:38Z", "author_association": "OWNER", "body": "Yeah, I think it would be neat if the library could support self-referential many-to-many in a nice way.\r\n\r\nI'm not sure about the `left_name/right_name` design though. Would it be possible to have this work as the user intends, by spotting that the other table name `\"people\"` matches the name of the current table?\r\n\r\n```python\r\ndb[\"people\"].insert({\"name\": \"Mary\"}, pk=\"name\").m2m(\r\n \"people\", [{\"name\": \"Michael\"}, {\"name\": \"Suzy\"}], m2m_table=\"parent_child\", pk=\"name\"\r\n)\r\n```\r\nThe created table could look like this:\r\n```sql\r\nCREATE TABLE [parent_child] (\r\n [people_id_1] TEXT REFERENCES [people]([name]),\r\n [people_id_2] TEXT REFERENCES [people]([name]),\r\n PRIMARY KEY ([people_id_1], [people_id_2])\r\n)\r\n```\r\nI've not thought very hard about this, so the design I'm proposing here might not work.\r\n\r\nAre there other reasons people might want the `left_name=` and `right_name=` parameters? 
If so then I'm much happier with those.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1227571375, "label": "Allow making m2m relation of a table to itself"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/432#issuecomment-1155756742", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/432", "id": 1155756742, "node_id": "IC_kwDOCGYnMM5E43LG", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T22:05:38Z", "updated_at": "2022-06-14T22:05:49Z", "author_association": "OWNER", "body": "I don't like the idea of `table_names()` returning names of tables from connected databases as well, because it feels like it could lead to surprising behaviour - especially if those connected databases turn out to have table names that are duplicated in the main connected database.\r\n\r\nIt would be neat if functions like `.rows_where()` worked though.\r\n\r\nOne thought would be to support something like this:\r\n```python\r\nrows = db[\"otherdb.tablename\"].rows_where()\r\n```\r\nBut... `.` is a valid character in a SQLite table name. 
So `\"otherdb.tablename\"` might ambiguously refer to a table called `tablename` in a connected database with the alias `otherdb`, OR a table in the current database with the name `otherdb.tablename`.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1236693079, "label": "Support `rows_where()`, `delete_where()` etc for attached alias databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/432#issuecomment-1155758664", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/432", "id": 1155758664, "node_id": "IC_kwDOCGYnMM5E43pI", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T22:07:50Z", "updated_at": "2022-06-14T22:07:50Z", "author_association": "OWNER", "body": "Another potential fix: add a `alias=` parameter to `rows_where()` and other similar methods. Then you could do this:\r\n\r\n```python\r\nrows = db[\"tablename\"].rows_where(alias=\"otherdb\")\r\n```\r\nThis feels wrong to me: `db[\"tablename\"]` is the bit that is supposed to return a table object. 
Having part of what that table object is exist as a parameter to other methods is confusing.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1236693079, "label": "Support `rows_where()`, `delete_where()` etc for attached alias databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/432#issuecomment-1155759857", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/432", "id": 1155759857, "node_id": "IC_kwDOCGYnMM5E437x", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T22:09:07Z", "updated_at": "2022-06-14T22:09:07Z", "author_association": "OWNER", "body": "Third option, and I think the one I like the best:\r\n```python\r\nrows = db.table(\"tablename\", alias=\"otherdb\").rows_where()\r\n```\r\nThe `db.table(tablename)` method already exists as an alternative to `db[tablename]`: https://sqlite-utils.datasette.io/en/stable/python-api.html#python-api-table-configuration\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1236693079, "label": "Support `rows_where()`, `delete_where()` etc for attached alias databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/432#issuecomment-1155764064", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/432", "id": 1155764064, "node_id": "IC_kwDOCGYnMM5E449g", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T22:15:44Z", "updated_at": "2022-06-14T22:15:44Z", "author_association": "OWNER", "body": "Implementing this would be a pretty big change - initial instinct is that I'd need to introduce a `self.alias` property to `Queryable` (the parent class of `Table` and `View`) and a new `self.name_with_alias` 
getter which returns `alias.tablename` if `alias` is set to a not-None value. Then I'd need to rewrite every piece of code like this:\r\n\r\nhttps://github.com/simonw/sqlite-utils/blob/1b09538bc6c1fda773590f3e600993ef06591041/sqlite_utils/db.py#L1161\r\n\r\nTo look like this instead:\r\n```python\r\n sql = \"select {} from [{}]\".format(select, self.name_with_alias)\r\n```\r\nBut some parts would be harder - for example:\r\n\r\nhttps://github.com/simonw/sqlite-utils/blob/1b09538bc6c1fda773590f3e600993ef06591041/sqlite_utils/db.py#L1227-L1231\r\n\r\nWould have to know to query `alias.sqlite_master` instead.\r\n\r\nThe cached table counts logic like this would need a bunch of changes too:\r\n\r\nhttps://github.com/simonw/sqlite-utils/blob/1b09538bc6c1fda773590f3e600993ef06591041/sqlite_utils/db.py#L644-L657", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1236693079, "label": "Support `rows_where()`, `delete_where()` etc for attached alias databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/432#issuecomment-1155764428", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/432", "id": 1155764428, "node_id": "IC_kwDOCGYnMM5E45DM", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T22:16:21Z", "updated_at": "2022-06-14T22:16:21Z", "author_association": "OWNER", "body": "Initial idea of how the `.table()` method would change:\r\n```diff\r\ndiff --git a/sqlite_utils/db.py b/sqlite_utils/db.py\r\nindex 7a06304..3ecb40b 100644\r\n--- a/sqlite_utils/db.py\r\n+++ b/sqlite_utils/db.py\r\n@@ -474,11 +474,12 @@ class Database:\r\n self._tracer(sql, None)\r\n return self.conn.executescript(sql)\r\n \r\n- def table(self, table_name: str, **kwargs) -> Union[\"Table\", \"View\"]:\r\n+ def table(self, table_name: str, alias: Optional[str] = None, **kwargs) -> Union[\"Table\", 
\"View\"]:\r\n \"\"\"\r\n Return a table object, optionally configured with default options.\r\n \r\n :param table_name: Name of the table\r\n+ :param alias: The database alias to use, if referring to a table in another connected database\r\n \"\"\"\r\n klass = View if table_name in self.view_names() else Table\r\n return klass(self, table_name, **kwargs)\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1236693079, "label": "Support `rows_where()`, `delete_where()` etc for attached alias databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/439#issuecomment-1155767202", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/439", "id": 1155767202, "node_id": "IC_kwDOCGYnMM5E45ui", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T22:21:10Z", "updated_at": "2022-06-14T22:21:10Z", "author_association": "OWNER", "body": "I can't figure out why that error is being swallowed like that. 
The most likely culprit was this code: \r\n\r\nhttps://github.com/simonw/sqlite-utils/blob/1b09538bc6c1fda773590f3e600993ef06591041/sqlite_utils/cli.py#L1021-L1043\r\n\r\nBut I tried changing it like this:\r\n\r\n```diff\r\ndiff --git a/sqlite_utils/cli.py b/sqlite_utils/cli.py\r\nindex 86eddfb..ed26fdd 100644\r\n--- a/sqlite_utils/cli.py\r\n+++ b/sqlite_utils/cli.py\r\n@@ -1023,6 +1023,7 @@ def insert_upsert_implementation(\r\n docs, pk=pk, batch_size=batch_size, alter=alter, **extra_kwargs\r\n )\r\n except Exception as e:\r\n+ raise\r\n if (\r\n isinstance(e, sqlite3.OperationalError)\r\n and e.args\r\n```\r\nAnd your steps to reproduce still got to 49% and then failed silently.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1250495688, "label": "Misleading progress bar against utf-16-le CSV input"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/440#issuecomment-1155767915", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/440", "id": 1155767915, "node_id": "IC_kwDOCGYnMM5E455r", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T22:22:27Z", "updated_at": "2022-06-14T22:22:27Z", "author_association": "OWNER", "body": "I forgot to add equivalents of `extras_key=` and `ignore_extras=` to the CLI tool - will do that in a separate issue.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1250629388, "label": "CSV files with too many values in a row cause errors"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/439#issuecomment-1155769216", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/439", "id": 1155769216, "node_id": "IC_kwDOCGYnMM5E46OA", "user": {"value": 9599, "label": 
"simonw"}, "created_at": "2022-06-14T22:24:49Z", "updated_at": "2022-06-14T22:25:06Z", "author_association": "OWNER", "body": "I have a hunch that this crash may be caused by a CSV value which is too long, as addressed at the library level in:\r\n- #440\r\n\r\nBut not yet addressed in the CLI tool, see:\r\n\r\n- #444\r\n\r\nEither way though, I really don't like that errors like this are swallowed!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1250495688, "label": "Misleading progress bar against utf-16-le CSV input"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/439#issuecomment-1155771462", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/439", "id": 1155771462, "node_id": "IC_kwDOCGYnMM5E46xG", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T22:28:38Z", "updated_at": "2022-06-14T22:28:38Z", "author_association": "OWNER", "body": "Maybe this isn't a CSV field value problem - I tried this patch and didn't seem to hit the new breakpoints:\r\n```diff\r\ndiff --git a/sqlite_utils/utils.py b/sqlite_utils/utils.py\r\nindex d2ccc5f..f1b823a 100644\r\n--- a/sqlite_utils/utils.py\r\n+++ b/sqlite_utils/utils.py\r\n@@ -204,13 +204,17 @@ def _extra_key_strategy(\r\n # DictReader adds a 'None' key with extra row values\r\n if None not in row:\r\n yield row\r\n- elif ignore_extras:\r\n+ continue\r\n+ else:\r\n+ breakpoint()\r\n+ if ignore_extras:\r\n # ignoring row.pop(none) because of this issue:\r\n # https://github.com/simonw/sqlite-utils/issues/440#issuecomment-1155358637\r\n row.pop(None) # type: ignore\r\n yield row\r\n elif not extras_key:\r\n extras = row.pop(None) # type: ignore\r\n+ breakpoint()\r\n raise RowError(\r\n \"Row {} contained these extra values: {}\".format(row, extras)\r\n )\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, 
\"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1250495688, "label": "Misleading progress bar against utf-16-le CSV input"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/439#issuecomment-1155772244", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/439", "id": 1155772244, "node_id": "IC_kwDOCGYnMM5E469U", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T22:30:03Z", "updated_at": "2022-06-14T22:30:03Z", "author_association": "OWNER", "body": "Tried this:\r\n```\r\n% python -i $(which sqlite-utils) insert --csv --delimiter \";\" --encoding \"utf-16-le\" test test.db csv\r\n [------------------------------------] 0%\r\n [#################-------------------] 49% 00:00:01Traceback (most recent call last):\r\n File \"/Users/simon/.local/share/virtualenvs/sqlite-utils-C4Ilevlm/lib/python3.8/site-packages/click/core.py\", line 1072, in main\r\n ctx.exit()\r\n File \"/Users/simon/.local/share/virtualenvs/sqlite-utils-C4Ilevlm/lib/python3.8/site-packages/click/core.py\", line 692, in exit\r\n raise Exit(code)\r\nclick.exceptions.Exit: 0\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/Users/simon/.local/share/virtualenvs/sqlite-utils-C4Ilevlm/bin/sqlite-utils\", line 33, in \r\n sys.exit(load_entry_point('sqlite-utils', 'console_scripts', 'sqlite-utils')())\r\n File \"/Users/simon/.local/share/virtualenvs/sqlite-utils-C4Ilevlm/lib/python3.8/site-packages/click/core.py\", line 1137, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/Users/simon/.local/share/virtualenvs/sqlite-utils-C4Ilevlm/lib/python3.8/site-packages/click/core.py\", line 1090, in main\r\n sys.exit(e.exit_code)\r\nSystemExit: 0\r\n>>> \r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, 
"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1250495688, "label": "Misleading progress bar against utf-16-le CSV input"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/439#issuecomment-1155776023", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/439", "id": 1155776023, "node_id": "IC_kwDOCGYnMM5E474X", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T22:36:07Z", "updated_at": "2022-06-14T22:36:07Z", "author_association": "OWNER", "body": "Wait! The arguments in that are the wrong way round. This is correct:\r\n\r\n sqlite-utils insert --csv --delimiter \";\" --encoding \"utf-16-le\" test.db test csv\r\n\r\nIt still outputs the following:\r\n\r\n [------------------------------------] 0%\r\n [#################-------------------] 49% 00:00:02%\r\n\r\nBut it creates a `test.db` file that is 6.2MB.\r\n\r\nThat database has 3142 rows in it:\r\n\r\n```\r\n% sqlite-utils tables test.db --counts -t\r\ntable count\r\n------- -------\r\ntest 3142\r\n```\r\nI converted that `csv` file to utf-8 like so:\r\n\r\n iconv -f UTF-16LE -t UTF-8 csv > utf8.csv\r\n\r\nAnd it contains 3142 lines:\r\n```\r\n% wc -l utf8.csv \r\n 3142 utf8.csv\r\n```\r\nSo my hunch here is that the problem is actually that the progress bar doesn't know how to correctly measure files in `utf-16-le` encoding!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1250495688, "label": "Misleading progress bar against utf-16-le CSV input"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/439#issuecomment-1155781399", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/439", "id": 1155781399, "node_id": "IC_kwDOCGYnMM5E49MX", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T22:45:41Z", "updated_at": 
"2022-06-14T22:45:41Z", "author_association": "OWNER", "body": "TIL how to use `iconv`: https://til.simonwillison.net/linux/iconv", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1250495688, "label": "Misleading progress bar against utf-16-le CSV input"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/439#issuecomment-1155782835", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/439", "id": 1155782835, "node_id": "IC_kwDOCGYnMM5E49iz", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T22:48:22Z", "updated_at": "2022-06-14T22:49:53Z", "author_association": "OWNER", "body": "Here's the code that implements the progress bar in question: https://github.com/simonw/sqlite-utils/blob/1b09538bc6c1fda773590f3e600993ef06591041/sqlite_utils/cli.py#L918-L932\r\n\r\nIt calls `file_progress()` which looks like this:\r\n\r\nhttps://github.com/simonw/sqlite-utils/blob/1b09538bc6c1fda773590f3e600993ef06591041/sqlite_utils/utils.py#L159-L175\r\n\r\nWhich uses this:\r\n\r\nhttps://github.com/simonw/sqlite-utils/blob/1b09538bc6c1fda773590f3e600993ef06591041/sqlite_utils/utils.py#L148-L156", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1250495688, "label": "Misleading progress bar against utf-16-le CSV input"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/439#issuecomment-1155784284", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/439", "id": 1155784284, "node_id": "IC_kwDOCGYnMM5E495c", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T22:51:03Z", "updated_at": "2022-06-14T22:52:13Z", "author_association": "OWNER", "body": "Yes, this is the problem. 
The progress bar length is set to the length in bytes of the file - `os.path.getsize(file.name)` - but it's then incremented by the length of each DECODED line in turn.\r\n\r\nSo if the file is in `utf-16-le` (twice the size of `utf-8`) the progress bar will finish at 50%!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1250495688, "label": "Misleading progress bar against utf-16-le CSV input"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/439#issuecomment-1155788944", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/439", "id": 1155788944, "node_id": "IC_kwDOCGYnMM5E4_CQ", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T23:00:24Z", "updated_at": "2022-06-14T23:00:24Z", "author_association": "OWNER", "body": "The progress bar only works if the file-like object passed to it has a `fp.fileno()` that isn't 0 (for stdin) - that's how it detects that the file is something which it can measure the size of in order to show progress.\r\n\r\nIf we know the file size in bytes AND we know the character encoding, can we change `UpdateWrapper` to update the number of bytes-per-character instead?\r\n\r\nI don't think so: I can't see a way of definitively saying \"for this encoding the number of bytes per character is X\" - and in fact I'm pretty sure that question doesn't even make sense since variable-length encodings exist.\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1250495688, "label": "Misleading progress bar against utf-16-le CSV input"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/439#issuecomment-1155789101", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/439", 
"id": 1155789101, "node_id": "IC_kwDOCGYnMM5E4_Et", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T23:00:45Z", "updated_at": "2022-06-14T23:00:45Z", "author_association": "OWNER", "body": "I'm going to mark this as \"help wanted\" and leave it open. I'm glad that it's not actually a bug where errors get swallowed.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1250495688, "label": "Misleading progress bar against utf-16-le CSV input"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/434#issuecomment-1155791109", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/434", "id": 1155791109, "node_id": "IC_kwDOCGYnMM5E4_kF", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T23:04:40Z", "updated_at": "2022-06-14T23:04:40Z", "author_association": "OWNER", "body": "Definitely a bug - thanks for the detailed write-up!\r\n\r\nYou're right, the code at fault is here:\r\n\r\nhttps://github.com/simonw/sqlite-utils/blob/1b09538bc6c1fda773590f3e600993ef06591041/sqlite_utils/db.py#L2213-L2231", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1243151184, "label": "`detect_fts()` identifies the wrong table if tables have names that are subsets of each other"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/434#issuecomment-1155794149", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/434", "id": 1155794149, "node_id": "IC_kwDOCGYnMM5E5ATl", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T23:09:54Z", "updated_at": "2022-06-14T23:09:54Z", "author_association": "OWNER", "body": "A test that demonstrates the 
problem:\r\n```python\r\n@pytest.mark.parametrize(\"reverse_order\", (True, False))\r\ndef test_detect_fts_similar_tables(fresh_db, reverse_order):\r\n    # https://github.com/simonw/sqlite-utils/issues/434\r\n    table1, table2 = (\"demo\", \"demo2\")\r\n    if reverse_order:\r\n        table1, table2 = table2, table1\r\n\r\n    fresh_db[table1].insert({\"title\": \"Hello\"}).enable_fts(\r\n        [\"title\"], fts_version=\"FTS4\"\r\n    )\r\n    fresh_db[table2].insert({\"title\": \"Hello\"}).enable_fts(\r\n        [\"title\"], fts_version=\"FTS4\"\r\n    )\r\n    assert fresh_db[table1].detect_fts() == \"{}_fts\".format(table1)\r\n    assert fresh_db[table2].detect_fts() == \"{}_fts\".format(table2)\r\n```\r\nThe order matters - so this test currently passes in one direction and fails in the other:\r\n```\r\n>       assert fresh_db[table2].detect_fts() == \"{}_fts\".format(table2)\r\nE       AssertionError: assert 'demo2_fts' == 'demo_fts'\r\nE         - demo_fts\r\nE         + demo2_fts\r\nE         ?     +\r\n\r\ntests/test_introspect.py:53: AssertionError\r\n========================================================================================= short test summary info =========================================================================================\r\nFAILED tests/test_introspect.py::test_detect_fts_similar_tables[True] - AssertionError: assert 'demo2_fts' == 'demo_fts'\r\n=============================================================================== 1 failed, 1 passed, 855 deselected in 1.00s ===============================================================================\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1243151184, "label": "`detect_fts()` identifies the wrong table if tables have names that are subsets of each other"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/434#issuecomment-1155801812", "issue_url": 
"https://api.github.com/repos/simonw/sqlite-utils/issues/434", "id": 1155801812, "node_id": "IC_kwDOCGYnMM5E5CLU", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T23:23:32Z", "updated_at": "2022-06-14T23:23:32Z", "author_association": "OWNER", "body": "Since table names can be quoted like this:\r\n```sql\r\nCREATE VIRTUAL TABLE \"searchable_fts\"\r\n USING FTS4 (text1, text2, [name with . and spaces], content=\"searchable\")\r\n```\r\nOR like this:\r\n```sql\r\nCREATE VIRTUAL TABLE \"searchable_fts\"\r\n USING FTS4 (text1, text2, [name with . and spaces], content=[searchable])\r\n```\r\nThis fix looks to be correct to me (copying from the updated `test_with_trace()` test):\r\n\r\n```python\r\n (\r\n \"SELECT name FROM sqlite_master\\n\"\r\n \" WHERE rootpage = 0\\n\"\r\n \" AND (\\n\"\r\n \" sql LIKE :like\\n\"\r\n \" OR sql LIKE :like2\\n\"\r\n \" OR (\\n\"\r\n \" tbl_name = :table\\n\"\r\n \" AND sql LIKE '%VIRTUAL TABLE%USING FTS%'\\n\"\r\n \" )\\n\"\r\n \" )\",\r\n {\r\n \"like\": \"%VIRTUAL TABLE%USING FTS%content=[dogs]%\",\r\n \"like2\": '%VIRTUAL TABLE%USING FTS%content=\"dogs\"%',\r\n \"table\": \"dogs\",\r\n },\r\n )\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1243151184, "label": "`detect_fts()` identifies the wrong table if tables have names that are subsets of each other"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/430#issuecomment-1155803262", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/430", "id": 1155803262, "node_id": "IC_kwDOCGYnMM5E5Ch-", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T23:26:11Z", "updated_at": "2022-06-14T23:26:11Z", "author_association": "OWNER", "body": "It looks like `PRAGMA temp_store` was the right option to use here: 
https://www.sqlite.org/pragma.html#pragma_temp_store\r\n\r\n`temp_store_directory` is listed as deprecated here: https://www.sqlite.org/pragma.html#pragma_temp_store_directory\r\n\r\nI'm going to turn this into a help-wanted documentation issue.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1224112817, "label": "Document how to use `PRAGMA temp_store` to avoid errors when running VACUUM against huge databases"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/444#issuecomment-1155804459", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/444", "id": 1155804459, "node_id": "IC_kwDOCGYnMM5E5C0r", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T23:28:18Z", "updated_at": "2022-06-14T23:28:18Z", "author_association": "OWNER", "body": "I think these become part of the `_import_options` list which is used in a few places:\r\n\r\nhttps://github.com/simonw/sqlite-utils/blob/b8af3b96f5c72317cc8783dc296a94f6719987d9/sqlite_utils/cli.py#L765-L800", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1271426387, "label": "CSV `extras_key=` and `ignore_extras=` equivalents for CLI tool"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/444#issuecomment-1155804591", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/444", "id": 1155804591, "node_id": "IC_kwDOCGYnMM5E5C2v", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T23:28:36Z", "updated_at": "2022-06-14T23:28:36Z", "author_association": "OWNER", "body": "I'm going with `--extras-key` and `--ignore-extras` as the two new options.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, 
\"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1271426387, "label": "CSV `extras_key=` and `ignore_extras=` equivalents for CLI tool"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/sqlite-utils/issues/444#issuecomment-1155815186", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/444", "id": 1155815186, "node_id": "IC_kwDOCGYnMM5E5FcS", "user": {"value": 9599, "label": "simonw"}, "created_at": "2022-06-14T23:48:16Z", "updated_at": "2022-06-14T23:48:16Z", "author_association": "OWNER", "body": "This is tricky to implement because of this code: https://github.com/simonw/sqlite-utils/blob/b8af3b96f5c72317cc8783dc296a94f6719987d9/sqlite_utils/cli.py#L938-L945\r\n\r\nIt's reconstructing each document using the known headers here:\r\n\r\n`docs = (dict(zip(headers, row)) for row in reader)`\r\n\r\nSo my first attempt at this - the diff here - did not have the desired result:\r\n\r\n```diff\r\ndiff --git a/sqlite_utils/cli.py b/sqlite_utils/cli.py\r\nindex 86eddfb..00b920b 100644\r\n--- a/sqlite_utils/cli.py\r\n+++ b/sqlite_utils/cli.py\r\n@@ -6,7 +6,7 @@ import hashlib\r\n import pathlib\r\n import sqlite_utils\r\n from sqlite_utils.db import AlterError, BadMultiValues, DescIndex\r\n-from sqlite_utils.utils import maximize_csv_field_size_limit\r\n+from sqlite_utils.utils import maximize_csv_field_size_limit, _extra_key_strategy\r\n from sqlite_utils import recipes\r\n import textwrap\r\n import inspect\r\n@@ -797,6 +797,15 @@ _import_options = (\r\n         \"--encoding\",\r\n         help=\"Character encoding for input, defaults to utf-8\",\r\n     ),\r\n+    click.option(\r\n+        \"--ignore-extras\",\r\n+        is_flag=True,\r\n+        help=\"If a CSV line has more than the expected number of values, ignore the extras\",\r\n+    ),\r\n+    click.option(\r\n+        \"--extras-key\",\r\n+        help=\"If a CSV line has more than the expected number of values put them in a list in this column\",\r\n+    ),\r\n )\r\n \r\n \r\n@@ -885,6 +894,8 @@ def insert_upsert_implementation(\r\n     sniff,\r\n     no_headers,\r\n     encoding,\r\n+    ignore_extras,\r\n+    extras_key,\r\n     batch_size,\r\n     alter,\r\n     upsert,\r\n@@ -909,6 +920,10 @@ def insert_upsert_implementation(\r\n         raise click.ClickException(\"--flatten cannot be used with --csv or --tsv\")\r\n     if encoding and not (csv or tsv):\r\n         raise click.ClickException(\"--encoding must be used with --csv or --tsv\")\r\n+    if ignore_extras and extras_key:\r\n+        raise click.ClickException(\r\n+            \"--ignore-extras and --extras-key cannot be used together\"\r\n+        )\r\n     if pk and len(pk) == 1:\r\n         pk = pk[0]\r\n     encoding = encoding or \"utf-8-sig\"\r\n@@ -935,7 +950,9 @@ def insert_upsert_implementation(\r\n         csv_reader_args[\"delimiter\"] = delimiter\r\n     if quotechar:\r\n         csv_reader_args[\"quotechar\"] = quotechar\r\n-    reader = csv_std.reader(decoded, **csv_reader_args)\r\n+    reader = _extra_key_strategy(\r\n+        csv_std.reader(decoded, **csv_reader_args), ignore_extras, extras_key\r\n+    )\r\n     first_row = next(reader)\r\n     if no_headers:\r\n         headers = [\"untitled_{}\".format(i + 1) for i in range(len(first_row))]\r\n@@ -1101,6 +1118,8 @@ def insert(\r\n     sniff,\r\n     no_headers,\r\n     encoding,\r\n+    ignore_extras,\r\n+    extras_key,\r\n     batch_size,\r\n     alter,\r\n     detect_types,\r\n@@ -1176,6 +1195,8 @@ def insert(\r\n         sniff,\r\n         no_headers,\r\n         encoding,\r\n+        ignore_extras,\r\n+        extras_key,\r\n         batch_size,\r\n         alter=alter,\r\n         upsert=False,\r\n@@ -1214,6 +1235,8 @@ def upsert(\r\n     sniff,\r\n     no_headers,\r\n     encoding,\r\n+    ignore_extras,\r\n+    extras_key,\r\n     alter,\r\n     not_null,\r\n     default,\r\n@@ -1254,6 +1277,8 @@ def upsert(\r\n         sniff,\r\n         no_headers,\r\n         encoding,\r\n+        ignore_extras,\r\n+        extras_key,\r\n         batch_size,\r\n         alter=alter,\r\n         upsert=True,\r\n@@ -1297,6 +1322,8 @@ def bulk(\r\n     sniff,\r\n     no_headers,\r\n     encoding,\r\n+    ignore_extras,\r\n+    extras_key,\r\n     load_extension,\r\n ):\r\n     \"\"\"\r\n@@ -1331,6 +1358,8 @@ def bulk(\r\n         sniff=sniff,\r\n         no_headers=no_headers,\r\n         encoding=encoding,\r\n+        ignore_extras=ignore_extras,\r\n+        extras_key=extras_key,\r\n         batch_size=batch_size,\r\n         alter=False,\r\n         upsert=False,\r\n\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 1271426387, "label": "CSV `extras_key=` and `ignore_extras=` equivalents for CLI tool"}, "performed_via_github_app": null}
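Editor's note on the progress bar discussion in issue #439 above: the mismatch between the byte-size total and per-line character increments can be demonstrated in a few lines of standalone Python (an illustration written for this note, not code from sqlite-utils):

```python
# The bar's total is os.path.getsize(...) - a byte count - but it is
# advanced by len() of each *decoded* line, which is a character count.
line = "id,name\r\n"  # a typical all-ASCII CSV line

utf8_size = len(line.encode("utf-8"))       # 1 byte per ASCII character
utf16_size = len(line.encode("utf-16-le"))  # 2 bytes per ASCII character

# For ASCII text the character count matches the utf-8 byte count...
assert len(line) == utf8_size
# ...but only half the utf-16-le byte count, so a bar sized in
# utf-16-le bytes and advanced in characters finishes near 50%.
assert utf16_size == 2 * len(line)
```

This is why the bar in the issue stopped at 49-50% even though every row was inserted.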
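Editor's note on issue #444 above: the diff wires an existing `_extra_key_strategy` helper from `sqlite_utils.utils` into the CLI. A simplified sketch of what such a strategy can look like (the function below was written for this note and is not the actual sqlite-utils implementation) builds on `csv.DictReader`, whose default `restkey=None` collects any surplus values on a row into a list under the `None` key:

```python
import csv
import io


def extra_key_strategy(reader, ignore_extras=False, extras_key=None):
    """Handle rows with more values than there are headers.

    reader: a csv.DictReader with the default restkey=None, so extra
    values on a row accumulate in a list under the None key.
    """
    for row in reader:
        if None not in row:
            yield row
        elif ignore_extras:
            row.pop(None)  # silently drop the surplus values
            yield row
        elif extras_key:
            row[extras_key] = row.pop(None)  # re-home extras under a named column
            yield row
        else:
            raise ValueError("Row had too many values: {}".format(row))


data = io.StringIO("id,name\n1,Cleo,oops\n")
rows = list(extra_key_strategy(csv.DictReader(data), extras_key="extras"))
# rows[0] == {"id": "1", "name": "Cleo", "extras": ["oops"]}
```

With `ignore_extras=True` the same input yields `{"id": "1", "name": "Cleo"}` instead; passing neither option surfaces the malformed row as an error, which matches the CLI behaviour the issue is extending.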