
issue_comments: 398101670

html_url: https://github.com/simonw/datasette/issues/316#issuecomment-398101670
issue_url: https://api.github.com/repos/simonw/datasette/issues/316
id: 398101670
node_id: MDEyOklzc3VlQ29tbWVudDM5ODEwMTY3MA==
user: 9599
created_at: 2018-06-18T15:49:35Z
updated_at: 2018-06-18T15:50:38Z
author_association: OWNER
reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: 333238932
performed_via_github_app:

body:

Wow, I've gone as high as 7GB but I've never tried it against 600GB.

`datasette inspect` is indeed expected to take a long time for large databases. That's why it's available as a separate command: by running `datasette inspect` to generate `inspect-data.json` you can execute it just once against a large database, then have `datasette serve` take advantage of that cached metadata (avoiding `datasette serve` hanging on startup).

As you spotted, most of the time is spent in those counts. I imagine you don't need those row counts for the rest of Datasette to function correctly - they are mainly used for display purposes, on the https://latest.datasette.io/fixtures index page for example.

If your database changes infrequently, for the moment I recommend running `datasette inspect` once to generate the `inspect-data.json` file (let me know how long it takes) and then passing that file to `datasette serve mydb.db --inspect-file=inspect-data.json`.

If your database DOES change frequently then this workaround won't help you much. Let me know and I'll see how much work it would take to make those row counts optional rather than required.
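A minimal sketch of the two-step workaround the comment describes, assuming a database file named `mydb.db` as in the example above (exact flag spellings may vary between Datasette versions):

```bash
# One-time, potentially slow step: compute the per-table metadata
# (including the expensive row counts) and cache it as JSON.
datasette inspect mydb.db > inspect-data.json

# Subsequent starts reuse the cached counts instead of re-scanning
# the database, so the server comes up immediately.
datasette serve mydb.db --inspect-file=inspect-data.json
```

The first command has to be re-run whenever the data changes, since the cached counts go stale; that is why this workaround only helps for infrequently updated databases.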
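As an aside on why the counts dominate: SQLite keeps no cached row-count shortcut, so `SELECT count(*)` must scan the entire table (or its smallest covering index). A rough way to gauge that cost on your own data, where `mytable` is a hypothetical table name:

```bash
# Time one full-table count directly; multiplying by the number of
# tables gives a rough lower bound on `datasette inspect` runtime.
time sqlite3 mydb.db 'SELECT count(*) FROM mytable;'
```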