html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,issue,performed_via_github_app
https://github.com/simonw/datasette/issues/749#issuecomment-622450636,https://api.github.com/repos/simonw/datasette/issues/749,622450636,MDEyOklzc3VlQ29tbWVudDYyMjQ1MDYzNg==,9599,2020-05-01T16:08:46Z,2020-05-01T16:08:46Z,OWNER,"Proposed solution: on Cloud Run, don't show the ""download database"" link if the database file is larger than 32 MB.
I can do this with a new config setting, `max_db_mb`, which would be set automatically by the `publish cloudrun` command.
This is consistent with the existing `max_csv_mb` setting: https://datasette.readthedocs.io/en/stable/config.html#max-csv-mb
I should set `max_csv_mb` to 32 MB on Cloud Run deploys as well.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",610829227,
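A rough sketch of that check (nothing here exists in Datasette: the `max_db_mb` setting, the helper name, and the example values are all hypothetical):

```
# Hypothetical sketch of the proposed max_db_mb check: hide the
# "download database" link when the file exceeds the configured limit.
from pathlib import Path

def show_download_link(db_path, max_db_mb):
    # Return True if the download link should be rendered.
    if max_db_mb is None:
        return True  # no limit configured
    size_mb = Path(db_path).stat().st_size / (1024 * 1024)
    return size_mb <= max_db_mb

# On a Cloud Run deploy where publish cloudrun sets max_db_mb=32,
# a 306 MB database would return False and the link would be hidden.
```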
https://github.com/simonw/datasette/issues/749#issuecomment-726417847,https://api.github.com/repos/simonw/datasette/issues/749,726417847,MDEyOklzc3VlQ29tbWVudDcyNjQxNzg0Nw==,9599,2020-11-13T00:05:14Z,2020-11-13T00:05:14Z,OWNER,"https://cloud.google.com/blog/products/serverless/cloud-run-now-supports-http-grpc-server-streaming indicates this limit should no longer apply:
> With this addition, Cloud Run can now ... Send responses larger than the previous 32 MB limit
But I'm still getting errors from Cloud Run when attempting to download `.db` files larger than 32 MB.
I filed a question in their issue tracker about that here: https://issuetracker.google.com/issues/173038375","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",610829227,
https://github.com/simonw/datasette/issues/749#issuecomment-737563699,https://api.github.com/repos/simonw/datasette/issues/749,737563699,MDEyOklzc3VlQ29tbWVudDczNzU2MzY5OQ==,9599,2020-12-02T23:45:42Z,2020-12-02T23:45:42Z,OWNER,"I asked about this on Twitter - https://twitter.com/steren/status/1334281184965140483
> You simply need to send the `Transfer-Encoding: chunked` header.","{""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",610829227,
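A minimal illustration of that advice (not Datasette's actual code; a standalone ASGI app served by something like Uvicorn): omitting the `Content-Length` header and streaming the body makes an HTTP/1.1 server fall back to `Transfer-Encoding: chunked`, which lets Cloud Run stream the response past its 32 MB buffer.

```
# Standalone ASGI sketch: stream a database file without Content-Length,
# so the server sends it with Transfer-Encoding: chunked.
async def app(scope, receive, send):
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        # No content-length header: Uvicorn falls back to chunked encoding.
        "headers": [(b"content-type", b"application/octet-stream")],
    })
    with open("covid.db", "rb") as fp:  # illustrative path
        while chunk := fp.read(1024 * 1024):
            await send({
                "type": "http.response.body",
                "body": chunk,
                "more_body": True,
            })
    await send({"type": "http.response.body", "body": b""})
```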
https://github.com/simonw/datasette/issues/749#issuecomment-737580084,https://api.github.com/repos/simonw/datasette/issues/749,737580084,MDEyOklzc3VlQ29tbWVudDczNzU4MDA4NA==,9599,2020-12-03T00:31:14Z,2020-12-03T00:31:14Z,OWNER,"This works!
```
/tmp % wget 'https://covid-19.datasettes.com/covid.db'
--2020-12-02 16:28:02-- https://covid-19.datasettes.com/covid.db
Resolving covid-19.datasettes.com (covid-19.datasettes.com)... 172.217.5.83
Connecting to covid-19.datasettes.com (covid-19.datasettes.com)|172.217.5.83|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/octet-stream]
Saving to: ‘covid.db’
covid.db [ <=> ] 306.42M 3.27MB/s in 98s
2020-12-02 16:29:40 (3.13 MB/s) - ‘covid.db’ saved [321306624]
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",610829227,