{"html_url": "https://github.com/simonw/datasette/issues/16#issuecomment-339420462", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/16", "id": 339420462, "node_id": "MDEyOklzc3VlQ29tbWVudDMzOTQyMDQ2Mg==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2017-10-25T18:10:51Z", "updated_at": "2017-10-25T18:10:51Z", "author_association": "OWNER", "body": "https://sitesforprofit.com/responsive-table-plugins-and-patterns has some useful links.\r\n\r\nI really like the pattern from https://css-tricks.com/responsive-data-tables/\r\n\r\n /* \r\n Max width before this PARTICULAR table gets nasty\r\n This query will take effect for any screen smaller than 760px\r\n and also iPads specifically.\r\n */\r\n @media \r\n only screen and (max-width: 760px),\r\n (min-device-width: 768px) and (max-device-width: 1024px) {\r\n\r\n /* Force table to not be like tables anymore */\r\n table, thead, tbody, th, td, tr { \r\n display: block; \r\n }\r\n \r\n /* Hide table headers (but not display: none;, for accessibility) */\r\n thead tr { \r\n position: absolute;\r\n top: -9999px;\r\n left: -9999px;\r\n }\r\n \r\n tr { border: 1px solid #ccc; }\r\n \r\n td { \r\n /* Behave like a \"row\" */\r\n border: none;\r\n border-bottom: 1px solid #eee; \r\n position: relative;\r\n padding-left: 50%; \r\n }\r\n \r\n td:before { \r\n /* Now like a table header */\r\n position: absolute;\r\n /* Top/left values mimic padding */\r\n top: 6px;\r\n left: 6px;\r\n width: 45%; \r\n padding-right: 10px; \r\n white-space: nowrap;\r\n }\r\n \r\n /*\r\n Label the data\r\n */\r\n td:nth-of-type(1):before { content: \"First Name\"; }\r\n td:nth-of-type(2):before { content: \"Last Name\"; }\r\n td:nth-of-type(3):before { content: \"Job Title\"; }\r\n td:nth-of-type(4):before { content: \"Favorite Color\"; }\r\n td:nth-of-type(5):before { content: \"Wars of Trek?\"; }\r\n td:nth-of-type(6):before { content: \"Porn Name\"; }\r\n td:nth-of-type(7):before { content: \"Date of Birth\"; }\r\n td:nth-of-type(8):before { content: \"Dream Vacation City\"; }\r\n td:nth-of-type(9):before { content: \"GPA\"; }\r\n td:nth-of-type(10):before { content: \"Arbitrary Data\"; }\r\n }", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 267726219, "label": "Default HTML/CSS needs to look reasonable and be responsive"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/19#issuecomment-339366612", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/19", "id": 339366612, "node_id": "MDEyOklzc3VlQ29tbWVudDMzOTM2NjYxMg==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2017-10-25T15:21:16Z", "updated_at": "2017-10-25T15:21:16Z", "author_association": "OWNER", "body": "I had to manually set the content disposition header:\r\n\r\n return await response.file_stream(\r\n filepath, headers={\r\n 'Content-Disposition': 'attachment; filename=\"{}\"'.format(ilepath)\r\n }\r\n )\r\n\r\nIn the next release of Sanic I can just use the filename= argument instead:\r\n\r\nhttps://github.com/channelcat/sanic/commit/07e95dba4f5983afc1e673df14bdd278817288aa", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 267741262, "label": "Efficient url for downloading the raw database file"}, "performed_via_github_app": null} {"html_url": 
"https://github.com/simonw/datasette/issues/23#issuecomment-339186887", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/23", "id": 339186887, "node_id": "MDEyOklzc3VlQ29tbWVudDMzOTE4Njg4Nw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2017-10-25T01:39:43Z", "updated_at": "2017-10-25T04:22:41Z", "author_association": "OWNER", "body": "Still to do:\r\n\r\n- [x] `gt`, `gte`, `lt`, `lte`\r\n- [x] `like`\r\n- [x] `glob`\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 267788884, "label": "Support Django-style filters in querystring arguments"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/23#issuecomment-339210353", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/23", "id": 339210353, "node_id": "MDEyOklzc3VlQ29tbWVudDMzOTIxMDM1Mw==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2017-10-25T04:23:02Z", "updated_at": "2017-10-25T04:23:02Z", "author_association": "OWNER", "body": "I'm going to call this one done for the moment. The date filters can go in a stretch goal.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 267788884, "label": "Support Django-style filters in querystring arguments"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/37#issuecomment-339382054", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/37", "id": 339382054, "node_id": "MDEyOklzc3VlQ29tbWVudDMzOTM4MjA1NA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2017-10-25T16:05:56Z", "updated_at": "2017-10-25T16:05:56Z", "author_association": "OWNER", "body": "Could this be as simple as using the iterative JSON encoder and adding a yield statement in between each chunk?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 268453968, "label": "Ability to serialize massive JSON without blocking event loop"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/38#issuecomment-339388215", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/38", "id": 339388215, "node_id": "MDEyOklzc3VlQ29tbWVudDMzOTM4ODIxNQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2017-10-25T16:25:45Z", "updated_at": "2017-10-25T16:25:45Z", "author_association": "OWNER", "body": "First experiment: hook up an iterative CSV dump (just because that\u2019s a tiny bit easier to get started with than iterative a JSON). Have it execute a big select statement and then iterate through the result set 100 rows at a time using sqite fetchmany() - also have it async sleep for a second in between each batch of 100.\r\n\r\nCan this work without needing python threads? 
", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 268462768, "label": "Experiment with patterns for concurrent long running queries"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/38#issuecomment-339388771", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/38", "id": 339388771, "node_id": "MDEyOklzc3VlQ29tbWVudDMzOTM4ODc3MQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2017-10-25T16:27:29Z", "updated_at": "2017-10-25T16:27:29Z", "author_association": "OWNER", "body": "If this does work, I need to figure it what to do about the HTML view. ASsuming I can iteratively produce JSON and CSV, what to do about HTML? One option: render the first 500 rows as HTML, then hand off to an infinite scroll experience that iteratively loads more rows as JSON.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 268462768, "label": "Experiment with patterns for concurrent long running queries"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/38#issuecomment-339389105", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/38", "id": 339389105, "node_id": "MDEyOklzc3VlQ29tbWVudDMzOTM4OTEwNQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2017-10-25T16:28:39Z", "updated_at": "2017-10-25T16:28:39Z", "author_association": "OWNER", "body": "The gold standard here is to be able to serve up increasingly large datasets without blocking the event loop and while using a sustainable amount of RAM", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 268462768, "label": "Experiment with patterns for concurrent long running queries"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/38#issuecomment-339389328", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/38", "id": 339389328, "node_id": "MDEyOklzc3VlQ29tbWVudDMzOTM4OTMyOA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2017-10-25T16:29:23Z", "updated_at": "2017-10-25T16:29:23Z", "author_association": "OWNER", "body": "Ideally we can get some serious gains from the fact that our database file is opened with the immutable option.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 268462768, "label": "Experiment with patterns for concurrent long running queries"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/39#issuecomment-339406634", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/39", "id": 339406634, "node_id": "MDEyOklzc3VlQ29tbWVudDMzOTQwNjYzNA==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2017-10-25T17:27:10Z", "updated_at": "2017-10-25T17:27:10Z", "author_association": "OWNER", "body": "It certainly looks like some of the stuff in https://sqlite.org/pragma.html could be used to screw around with things. 
Example: `PRAGMA case_sensitive_like = 1` - would that affect future queries?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 268469569, "label": "Protect against malicious SQL that causes damage even though our DB is immutable"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/39#issuecomment-339413825", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/39", "id": 339413825, "node_id": "MDEyOklzc3VlQ29tbWVudDMzOTQxMzgyNQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2017-10-25T17:48:48Z", "updated_at": "2017-10-25T17:48:48Z", "author_association": "OWNER", "body": "Could I use https://sqlparse.readthedocs.io/en/latest/ to parse incoming statements and ensure they are pure SELECTs? Would that prevent people from using a compound SELECT statement to trigger an evil PRAGMA of some sort?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 268469569, "label": "Protect against malicious SQL that causes damage even though our DB is immutable"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/40#issuecomment-339395551", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/40", "id": 339395551, "node_id": "MDEyOklzc3VlQ29tbWVudDMzOTM5NTU1MQ==", "user": {"value": 9599, "label": "simonw"}, "created_at": "2017-10-25T16:49:32Z", "updated_at": "2017-10-25T16:49:32Z", "author_association": "OWNER", "body": "Simplest implementation will be to create a temporary directory somewhere, copy in a Dockerfile and the databases and run \u201cnow\u201d in it.\r\n\r\nIdeally I can use symlinks rather than copying potentially large database files around.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 268470572, "label": "Implement command-line tool interface"}, "performed_via_github_app": null}
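A quick check answers the `case_sensitive_like` question in the issues/39 comment above: yes, the pragma changes how `LIKE` behaves for every later query on the same connection, which is easy to confirm from Python:

    import sqlite3

    # case_sensitive_like affects all subsequent LIKE comparisons
    # made through the same connection.
    conn = sqlite3.connect(":memory:")
    print(conn.execute("SELECT 'ABC' LIKE 'abc'").fetchone())  # (1,) - matches by default
    conn.execute("PRAGMA case_sensitive_like = 1")
    print(conn.execute("SELECT 'ABC' LIKE 'abc'").fetchone())  # (0,) - no longer matches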
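The sqlparse idea could look something like the following sketch: `sqlparse.parse()` splits the input into statements and each statement reports a type, so a validator can insist on exactly one statement classified as a SELECT. `is_single_select` is a hypothetical helper, and a third-party parser can always disagree with SQLite's own grammar, so this would be one defensive layer rather than the whole answer:

    import sqlparse

    def is_single_select(sql):
        # Require exactly one statement, and sqlparse must classify it
        # as a SELECT. A compound submission such as
        # "SELECT 1; PRAGMA case_sensitive_like = 1" parses as two
        # statements and is rejected outright.
        parsed = sqlparse.parse(sql)
        if len(parsed) != 1:
            return False
        return parsed[0].get_type() == "SELECT"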
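A sketch of that simplest implementation from the issues/40 comment, with hypothetical names (`publish`, `dockerfile_contents`): stage everything in a temporary directory and shell out to `now` there. Symlinks are tried first to avoid copying large database files, with a copy fallback, since container build tools generally refuse to follow symlinks that point outside the build context:

    import os
    import shutil
    import subprocess
    import tempfile

    def publish(database_paths, dockerfile_contents):
        # Stage a Dockerfile plus the database files in a temporary
        # directory, then run `now` in that directory.
        tmp = tempfile.mkdtemp()
        with open(os.path.join(tmp, "Dockerfile"), "w") as fp:
            fp.write(dockerfile_contents)
        for path in database_paths:
            dest = os.path.join(tmp, os.path.basename(path))
            try:
                os.symlink(os.path.abspath(path), dest)  # avoid copying
            except OSError:
                shutil.copy(path, dest)  # fall back to a real copy
        subprocess.check_call(["now"], cwd=tmp)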