{"html_url": "https://github.com/simonw/datasette/issues/266#issuecomment-389570841", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/266", "id": 389570841, "node_id": "MDEyOklzc3VlQ29tbWVudDM4OTU3MDg0MQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2018-05-16T15:54:49Z", "updated_at": "2018-06-15T07:41:09Z", "author_association": "OWNER", "body": "At the most basic level, this will work based on an extension. Most places you currently put a `.json` extension should also allow a `.csv` extension.\r\n\r\nBy default this will return the exact results you see on the current page (default max will remain 1000).\r\n\r\n## Streaming all records\r\n\r\nWhere things get interested is *streaming mode*. This will be an option which returns ALL matching records as a streaming CSV file, even if that ends up being millions of records.\r\n\r\nI think the best way to build this will be on top of the existing mechanism used to efficiently implement keyset pagination via `_next=` tokens.\r\n\r\n## Expanding foreign keys\r\n\r\nFor tables with foreign key references it would be useful if the CSV format could expand those references to include the labels from `label_column` - maybe via an additional `?_expand=1` option.\r\n\r\nWhen expanding each foreign key column will be shown twice:\r\n\r\n rowid,city_id,city_id_label,state", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 323681589, "label": "Export to CSV"}, "performed_via_github_app": null}