issue_comments: 389570841
This data as json
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/simonw/datasette/issues/266#issuecomment-389570841 | https://api.github.com/repos/simonw/datasette/issues/266 | 389570841 | MDEyOklzc3VlQ29tbWVudDM4OTU3MDg0MQ== | 9599 | 2018-05-16T15:54:49Z | 2018-06-15T07:41:09Z | OWNER | At the most basic level, this will work based on an extension. Most places you currently put a `.json` extension should also allow a `.csv` extension. By default this will return the exact results you see on the current page (default max will remain 1000). ## Streaming all records Where things get interested is *streaming mode*. This will be an option which returns ALL matching records as a streaming CSV file, even if that ends up being millions of records. I think the best way to build this will be on top of the existing mechanism used to efficiently implement keyset pagination via `_next=` tokens. ## Expanding foreign keys For tables with foreign key references it would be useful if the CSV format could expand those references to include the labels from `label_column` - maybe via an additional `?_expand=1` option. When expanding each foreign key column will be shown twice: rowid,city_id,city_id_label,state | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 323681589 |