issue_comments: 974711959
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/simonw/datasette/issues/1426#issuecomment-974711959 | https://api.github.com/repos/simonw/datasette/issues/1426 | 974711959 | IC_kwDOBm6k_c46GOyX | 52649 | 2021-11-20T21:11:51Z | 2021-11-20T21:11:51Z | NONE | I think another thing would be to make `/pages/robots.txt` work. That way you can use Jinja to generate a desired robots.txt. I'm using it to allow the main index and what it links to to be crawled (but not the database pages directly). | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 964322136 |  |
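As a sketch of the idea in the comment above: Datasette's custom pages feature serves templates from a `templates/pages/` directory, so a `templates/pages/robots.txt` template could allow the index to be crawled while blocking database pages. The database name `mydb` below is hypothetical, and whether a `.txt` extension is served this way is exactly what the comment is proposing, not confirmed behavior:

```
# robots.txt comments use "#"; "mydb" is a placeholder database name
User-agent: *
Disallow: /mydb
Allow: /
```

Because it is a Jinja template, the `Disallow` lines could also be generated dynamically rather than hard-coded, which is the advantage the commenter is describing.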