5 comments on issue 268462768, "Experiment with patterns for concurrent long running queries" (simonw/datasette issue #38), sorted by created_at

comment 339388215 · simonw (OWNER) · 2017-10-25T16:25:45Z
https://github.com/simonw/datasette/issues/38#issuecomment-339388215

First experiment: hook up an iterative CSV dump (just because that's a tiny bit easier to get started with than an iterative JSON dump). Have it execute a big select statement and then iterate through the result set 100 rows at a time using sqlite's fetchmany() - also have it async sleep for a second in between each batch of 100. Can this work without needing Python threads?
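
A minimal sketch of that experiment, assuming an async handler that can send the response in chunks (not Datasette's actual code; the function and parameter names are placeholders):

import asyncio
import csv
import io
import sqlite3

async def iter_csv(db_path, sql, batch_size=100):
    # Stream a big SELECT as CSV: a header row, then batches of 100
    # rows fetched with fetchmany(), with an async sleep between
    # batches so the event loop can serve other requests.
    conn = sqlite3.connect(db_path)
    try:
        cursor = conn.execute(sql)
        buf = io.StringIO()
        csv.writer(buf).writerow([col[0] for col in cursor.description])
        yield buf.getvalue()
        while True:
            batch = cursor.fetchmany(batch_size)
            if not batch:
                break
            buf = io.StringIO()
            csv.writer(buf).writerows(batch)
            yield buf.getvalue()
            await asyncio.sleep(1)  # hand control back to the event loop
    finally:
        conn.close()

The catch, and the point of the question: each fetchmany() call itself still runs on the event loop thread, so a slow query blocks everything until that call returns; the sleep only yields control between batches.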

comment 339388771 · simonw (OWNER) · 2017-10-25T16:27:29Z
https://github.com/simonw/datasette/issues/38#issuecomment-339388771

If this does work, I need to figure out what to do about the HTML view. Assuming I can iteratively produce JSON and CSV, what to do about HTML? One option: render the first 500 rows as HTML, then hand off to an infinite scroll experience that iteratively loads more rows as JSON.
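
One way that handoff could look: the page renders the first 500 rows as HTML, then client-side code fetches further pages as JSON using keyset pagination on the primary key. A sketch of the server side; the endpoint shape is an assumption, not Datasette's actual API:

import sqlite3

def json_page(db_path, table, after_id=None, page_size=500):
    # Fetch one page of rows plus a "next" token for the scroller.
    # Keyset pagination (id > ?) stays fast at any scroll depth,
    # unlike OFFSET, which rescans every row it skips.
    # `table` is assumed to come from trusted config, not user input.
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row
    if after_id is None:
        sql = f"SELECT * FROM [{table}] ORDER BY id LIMIT ?"
        args = (page_size + 1,)
    else:
        sql = f"SELECT * FROM [{table}] WHERE id > ? ORDER BY id LIMIT ?"
        args = (after_id, page_size + 1)
    rows = conn.execute(sql, args).fetchall()
    conn.close()
    has_more = len(rows) > page_size
    rows = rows[:page_size]
    return {
        "rows": [dict(r) for r in rows],
        "next": rows[-1]["id"] if has_more else None,
    }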

comment 339389105 · simonw (OWNER) · 2017-10-25T16:28:39Z
https://github.com/simonw/datasette/issues/38#issuecomment-339389105

The gold standard here is to be able to serve up increasingly large datasets without blocking the event loop and while using a sustainable amount of RAM.

comment 339389328 · simonw (OWNER) · 2017-10-25T16:29:23Z
https://github.com/simonw/datasette/issues/38#issuecomment-339389328

Ideally we can get some serious gains from the fact that our database file is opened with the immutable option.
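
That option is SQLite's immutable=1 query parameter on a URI filename: it promises the file will never change while it is open, which lets SQLite skip all locking and change-detection work. A minimal sketch (the file name is a placeholder):

import sqlite3

# immutable=1 asserts the file cannot change while we have it open,
# so SQLite omits locking and change detection entirely. Safe here
# because this style of deployment treats the database as read-only.
conn = sqlite3.connect("file:browse.db?immutable=1&mode=ro", uri=True)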

comment 392601114 · simonw (OWNER) · 2018-05-28T20:47:31Z
https://github.com/simonw/datasette/issues/38#issuecomment-392601114

I think the way Datasette executes SQL queries in a thread pool, introduced in #45, is a good solution for this ticket.
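
For reference, the shape of that solution: the event loop awaits a future while the blocking sqlite3 call runs on a worker thread. A simplified sketch, not Datasette's actual implementation (which also reuses connections and enforces query time limits):

import asyncio
import sqlite3
from concurrent.futures import ThreadPoolExecutor

# A small dedicated pool for SQL, so long-running queries occupy
# worker threads instead of blocking the event loop.
executor = ThreadPoolExecutor(max_workers=3)

def _execute(db_path, sql, params):
    conn = sqlite3.connect(f"file:{db_path}?immutable=1", uri=True)
    try:
        return conn.execute(sql, params).fetchall()
    finally:
        conn.close()

async def execute(db_path, sql, params=()):
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(executor, _execute, db_path, sql, params)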

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
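
For context, the listing above corresponds to a query like the following, which idx_issue_comments_issue serves without a full table scan (a sketch using Python's sqlite3; the database file name is a placeholder):

import sqlite3

conn = sqlite3.connect("github.db")  # placeholder file name
comments = conn.execute(
    """
    select id, user, created_at, author_association, body
    from issue_comments
    where issue = ?          -- satisfied by idx_issue_comments_issue
    order by created_at
    """,
    (268462768,),
).fetchall()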