{"html_url": "https://github.com/simonw/datasette/issues/1082#issuecomment-721545090", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1082", "id": 721545090, "node_id": "MDEyOklzc3VlQ29tbWVudDcyMTU0NTA5MA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-11-04T06:47:15Z", "updated_at": "2020-11-04T06:47:15Z", "author_association": "OWNER", "body": "I've run into a similar problem with Google Cloud Run: beyond a certain size of database file I find myself needing to run instances there with more RAM assigned to them.\r\n\r\nI haven't yet figured out a method to estimate the amount of RAM that will be needed to successfully serve a database file of a specific size- I've been using trial and error.\r\n\r\n5GB is quite a big database file, so it doesn't surprise me that it may need a bigger instance. I recommend trying it on a 1GB or 2GB of RAM Digital Ocean instance (their default is 512MB) and see if that works.\r\n\r\nLet me know what you find out!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 735852274, "label": "DigitalOcean buildpack memory errors for large sqlite db?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1082#issuecomment-721547177", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1082", "id": 721547177, "node_id": "MDEyOklzc3VlQ29tbWVudDcyMTU0NzE3Nw==", "user": {"value": 39538958, "label": "justmars"}, "created_at": "2020-11-04T06:52:30Z", "updated_at": "2020-11-04T06:53:16Z", "author_association": "NONE", "body": "I think I tried the same db size on the following scenarios in Digital Ocean:\r\n1. Basic ($5/month) with 512MB RAM\r\n2. Basic ($10/month) with 1GB RAM\r\n3. Pro ($12/month) with 1GB RAM\r\n\r\nAll such attempts conked out with \"out of memory\" errors", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 735852274, "label": "DigitalOcean buildpack memory errors for large sqlite db?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1082#issuecomment-721931504", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1082", "id": 721931504, "node_id": "MDEyOklzc3VlQ29tbWVudDcyMTkzMTUwNA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-11-04T19:32:47Z", "updated_at": "2020-11-04T19:35:44Z", "author_association": "OWNER", "body": "I wonder if setting a soft memory limit within Datasette would help here: https://www.sqlite.org/malloc.html#_setting_memory_usage_limits \r\n\r\n> If attempts are made to allocate more memory than specified by the soft heap limit, then SQLite will first attempt to free cache memory before continuing with the allocation request.\r\n\r\nhttps://www.sqlite.org/pragma.html#pragma_soft_heap_limit\r\n\r\n> **PRAGMA soft_heap_limit**\r\n> **PRAGMA soft_heap_limit=N**\r\n> \r\n> This pragma invokes the [sqlite3_soft_heap_limit64()](https://www.sqlite.org/c3ref/hard_heap_limit64.html) interface with the argument N, if N is specified and is a non-negative integer. 
{"html_url": "https://github.com/simonw/datasette/issues/1083#issuecomment-721926827", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1083", "id": 721926827, "node_id": "MDEyOklzc3VlQ29tbWVudDcyMTkyNjgyNw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-11-04T19:23:42Z", "updated_at": "2020-11-04T19:23:42Z", "author_association": "OWNER", "body": "https://latest.datasette.io/fixtures/sortable#export has advanced export options, but https://latest.datasette.io/fixtures?sql=select+pk1%2C+pk2%2C+content%2C+sortable%2C+sortable_with_nulls%2C+sortable_with_nulls_2%2C+text+from+sortable+order+by+pk1%2C+pk2+limit+101 does not.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 736365306, "label": "Advanced CSV export for arbitrary queries"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/1083#issuecomment-721927254", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1083", "id": 721927254, "node_id": "MDEyOklzc3VlQ29tbWVudDcyMTkyNzI1NA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-11-04T19:24:34Z", "updated_at": "2020-11-04T19:24:34Z", "author_association": "OWNER", "body": "Related: #856 - if it's possible to paginate a correctly configured canned query, then the CSV option to \"stream all rows\" could work for queries as well as tables.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 736365306, "label": "Advanced CSV export for arbitrary queries"}, "performed_via_github_app": null}
{"html_url": "https://github.com/simonw/datasette/issues/268#issuecomment-721896822", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/268", "id": 721896822, "node_id": "MDEyOklzc3VlQ29tbWVudDcyMTg5NjgyMg==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-11-04T18:23:29Z", "updated_at": "2020-11-04T18:23:29Z", "author_association": "OWNER", "body": "Worth noting that joining to get the rank works for FTS5 but not for FTS4 - see comment here: https://github.com/simonw/sqlite-utils/issues/192#issuecomment-721420539\r\n\r\nThe easiest solution would be to only support sort-by-rank for FTS5 tables. An alternative would be to depend on https://github.com/simonw/sqlite-fts4", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 323718842, "label": "Mechanism for ranking results from SQLite full-text search"}, "performed_via_github_app": null}
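To illustrate the FTS5-only sort-by-rank approach described in the comment above, here is a hedged sketch - the articles/articles_fts schema is invented for the example, and it assumes a Python build whose SQLite was compiled with FTS5:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table articles (id integer primary key, title text);
    insert into articles (title) values
        ('SQLite full-text search'),
        ('Ranking search results'),
        ('An unrelated row');

    create virtual table articles_fts using fts5(
        title, content='articles', content_rowid='id'
    );
    -- populate the external-content index from the articles table
    insert into articles_fts(articles_fts) values ('rebuild');
""")

# Join back to the content table, ordering by the FTS5 rank column
# (a bm25 score - lower means more relevant). FTS4 exposes no such
# column, which is why supporting only FTS5 here is simpler.
rows = conn.execute("""
    select articles.id, articles.title, articles_fts.rank
    from articles
    join articles_fts on articles.id = articles_fts.rowid
    where articles_fts match ?
    order by articles_fts.rank
""", ["search"]).fetchall()
print(rows)
```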
{"html_url": "https://github.com/simonw/sqlite-utils/issues/192#issuecomment-721453779", "issue_url": "https://api.github.com/repos/simonw/sqlite-utils/issues/192", "id": 721453779, "node_id": "MDEyOklzc3VlQ29tbWVudDcyMTQ1Mzc3OQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2020-11-04T00:59:24Z", "updated_at": "2020-11-04T00:59:36Z", "author_association": "OWNER", "body": "FTS5 was added in SQLite 3.9.0 on 2015-10-14, about a year after CTEs, which means CTEs will always be safe to use with FTS5 queries.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 735532751, "label": "sqlite-utils search command"}, "performed_via_github_app": null}
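A quick sketch of why that version ordering matters (the docs table and query are invented for illustration): any SQLite new enough to have FTS5 also supports wrapping the search in a CTE, e.g. to paginate ranked results:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create virtual table docs using fts5(body);
    insert into docs (body) values ('first match'), ('second match');
""")

# CTEs arrived in SQLite 3.8.3 (2014-02-03), before FTS5 in 3.9.0,
# so a WITH clause around an FTS5 search is always safe.
rows = conn.execute("""
    with results as (
        select rowid, body, rank
        from docs
        where docs match ?
        order by rank
    )
    select * from results limit 10 offset 0
""", ["match"]).fetchall()
print(rows)
```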