{"html_url": "https://github.com/simonw/datasette/issues/272#issuecomment-503195217", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/272", "id": 503195217, "node_id": "MDEyOklzc3VlQ29tbWVudDUwMzE5NTIxNw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2019-06-18T15:46:31Z", "updated_at": "2019-06-18T15:54:18Z", "author_association": "OWNER", "body": "How should file serving work?\r\n\r\nStarlette and Sanic both use `aiofiles` - https://github.com/Tinche/aiofiles - which is a small wrapper around file operations which runs them all in an executor thread. It doesn't have any C dependencies so it looks like a good option. [Quart uses it too](https://gitlab.com/pgjones/quart/blob/317562ea660edb7159efc20fa57b95223d408ea0/quart/wrappers/response.py#L122-169).\r\n\r\n`aiohttp` does things differently: it has [an implementation based on sendfile](https://github.com/aio-libs/aiohttp/blob/7a324fd46ff7dc9bb0bb1bc5afb326e04cf7cef0/aiohttp/web_fileresponse.py#L46-L122) with [an alternative fallback](https://github.com/aio-libs/aiohttp/blob/7a324fd46ff7dc9bb0bb1bc5afb326e04cf7cef0/aiohttp/web_fileresponse.py#L175-L200) which reads chunks from a file object and yields them one chunk at a time, \r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 324188953, "label": "Port Datasette to ASGI"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/272#issuecomment-503351966", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/272", "id": 503351966, "node_id": "MDEyOklzc3VlQ29tbWVudDUwMzM1MTk2Ng==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2019-06-18T23:45:17Z", "updated_at": "2019-06-18T23:45:17Z", "author_association": "OWNER", "body": "Uvicorn 0.8.1 is our and supports `raw_path`!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 324188953, "label": "Port Datasette to ASGI"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/502#issuecomment-503237884", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/502", "id": 503237884, "node_id": "MDEyOklzc3VlQ29tbWVudDUwMzIzNzg4NA==", "user": {"value": 7936571, "label": "chrismp"}, "created_at": "2019-06-18T17:39:18Z", "updated_at": "2019-06-18T17:46:08Z", "author_association": "NONE", "body": "It appears that I cannot reopen this issue but the proposed solution did not solve it. The link is not there. 
I have full text search enabled for a bunch of tables in my database and even clicking the link to reveal hidden tables did not show the download DB link.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 453131917, "label": "Exporting sqlite database(s)?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/512#issuecomment-503200024", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/512", "id": 503200024, "node_id": "MDEyOklzc3VlQ29tbWVudDUwMzIwMDAyNA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2019-06-18T15:58:18Z", "updated_at": "2019-06-18T15:58:18Z", "author_association": "OWNER", "body": "The `about`, `license`, and `source` keys are all intended to be used for links - so if you provide an `about_url` it will be displayed as a URL, and you can then use the `about` key to customize the link label.\r\n\r\nThere are `description` and `title` fields which can be used to display text without linking to anything.\r\n\r\nI'm definitely open to reconsidering how these work - I don't think they quite serve people's needs as they are right now, so suggestions for improving them would be very welcome.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 457147936, "label": "\"about\" parameter in metadata does not appear when alone"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/512#issuecomment-503236800", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/512", "id": 503236800, "node_id": "MDEyOklzc3VlQ29tbWVudDUwMzIzNjgwMA==", "user": {"value": 7936571, "label": "chrismp"}, "created_at": "2019-06-18T17:36:37Z", "updated_at": "2019-06-18T17:36:37Z", "author_association": "NONE", "body": "Oh, I didn't know the `description` field could be used for a database's metadata. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 457147936, "label": "\"about\" parameter in metadata does not appear when alone"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/513#issuecomment-503199253", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/513", "id": 503199253, "node_id": "MDEyOklzc3VlQ29tbWVudDUwMzE5OTI1Mw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2019-06-18T15:56:29Z", "updated_at": "2019-06-18T15:56:29Z", "author_association": "OWNER", "body": "Unfortunately not - I really wish this was possible. I have not yet found a great serverless solution for publishing 1GB+ databases - they're too big for Heroku, Cloud Run, or Zeit Now. 
Once databases get that big, the only option I've found is to run a VPS (or an EC2 instance) with a mounted hard drive volume and execute `datasette serve` on that instance, with nginx running on port 80 proxying traffic back to Datasette.\r\n\r\nI'd love to figure out a way to make hosting larger databases as easy as it currently is to host small ones.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 457201907, "label": "Is it possible to publish to Heroku despite slug size being too large?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/513#issuecomment-503249999", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/513", "id": 503249999, "node_id": "MDEyOklzc3VlQ29tbWVudDUwMzI0OTk5OQ==", "user": {"value": 7936571, "label": "chrismp"}, "created_at": "2019-06-18T18:11:36Z", "updated_at": "2019-06-18T18:11:36Z", "author_association": "NONE", "body": "Ah, so basically put the SQLite databases on Linode, for example, and run `datasette serve` there? I'm comfortable with that. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 457201907, "label": "Is it possible to publish to Heroku despite slug size being too large?"}, "performed_via_github_app": null}
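The chunked-read approach discussed in the issue #272 file-serving comment above can be sketched as a minimal ASGI application. This is an illustration of the technique, not Datasette's actual implementation: the `FileResponse` class name, the `content-type`, and the 64 KB chunk size are all assumptions.

```python
# Minimal sketch of chunked async file serving over ASGI, assuming
# aiofiles is installed (pip install aiofiles). aiofiles runs each
# blocking file operation in a thread-pool executor.
import aiofiles


class FileResponse:
    # Hypothetical class for illustration; not Datasette's API.
    def __init__(self, path, chunk_size=64 * 1024):
        self.path = path
        self.chunk_size = chunk_size

    async def __call__(self, scope, receive, send):
        await send({
            "type": "http.response.start",
            "status": 200,
            "headers": [[b"content-type", b"application/octet-stream"]],
        })
        async with aiofiles.open(self.path, mode="rb") as fp:
            more_body = True
            while more_body:
                # Read one chunk at a time and stream it to the client,
                # mirroring aiohttp's non-sendfile fallback described above.
                chunk = await fp.read(self.chunk_size)
                more_body = len(chunk) == self.chunk_size
                await send({
                    "type": "http.response.body",
                    "body": chunk,
                    "more_body": more_body,
                })
```

An instance such as `app = FileResponse("fixtures.db")` is a plain ASGI callable, so it could be served with any ASGI server, e.g. `uvicorn module:app`.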
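The `raw_path` support celebrated in the Uvicorn 0.8.1 comment matters because the ASGI `scope["path"]` value is already percent-decoded, so a table or row key containing an encoded `/` becomes indistinguishable from a path separator; `raw_path` preserves the undecoded bytes. A small sketch of the idea - the helper name `get_raw_path` and the fallback branch are assumptions for illustration, not Datasette's actual handling:

```python
def get_raw_path(scope):
    # scope["path"] is percent-decoded, so "/db/table%2Fname" and
    # "/db/table/name" look identical there; raw_path keeps the
    # original bytes (Uvicorn supplies it as of 0.8.1).
    raw_path = scope.get("raw_path")
    if raw_path is not None:
        return raw_path.decode("latin-1")
    # Fallback assumption for servers that don't provide raw_path:
    # use the decoded path, accepting the loss of the %2F distinction.
    return scope["path"]
```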
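For the VPS deployment described in the issue #513 comment, the nginx side might look something like the following site config. Everything here is illustrative rather than taken from the comments: the domain is a placeholder, 8001 is `datasette serve`'s default port, and the forwarded headers are a common convention, not a stated requirement.

```nginx
# Hypothetical reverse-proxy config: nginx on port 80 forwards to a
# local `datasette serve` process. All names and ports illustrative.
server {
    listen 80;
    server_name datasette.example.com;

    location / {
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```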