issue_comments
5 rows where author_association = "CONTRIBUTOR" and created_at is on date 2019-05-03, sorted by issue_url descending
id | html_url | issue_url ▼ | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
489221481 | https://github.com/simonw/datasette/issues/446#issuecomment-489221481 | https://api.github.com/repos/simonw/datasette/issues/446 | MDEyOklzc3VlQ29tbWVudDQ4OTIyMTQ4MQ== | russss 45057 | 2019-05-03T19:58:31Z | 2019-05-03T19:58:31Z | CONTRIBUTOR | In this particular case I don't think there's an issue making all those required. However, I suspect we might have to allow optional values at some point - my preferred solution to russss/datasette-geo#2 would need one. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Define mechanism for plugins to return structured data 440134714 | |
489222223 | https://github.com/simonw/datasette/issues/446#issuecomment-489222223 | https://api.github.com/repos/simonw/datasette/issues/446 | MDEyOklzc3VlQ29tbWVudDQ4OTIyMjIyMw== | russss 45057 | 2019-05-03T20:01:19Z | 2019-05-03T20:01:29Z | CONTRIBUTOR | Also I have a slight preference against (ab)using `__slots__` to enforce fields, although I have done it myself in the past. It would be possible to do this with `__setattr__` instead, although that's an implementation detail and I'm not too fussed about it. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Define mechanism for plugins to return structured data 440134714 | |
489105665 | https://github.com/simonw/datasette/pull/434#issuecomment-489105665 | https://api.github.com/repos/simonw/datasette/issues/434 | MDEyOklzc3VlQ29tbWVudDQ4OTEwNTY2NQ== | eyeseast 25778 | 2019-05-03T14:01:30Z | 2019-05-03T14:01:30Z | CONTRIBUTOR | This is exactly what I needed. Thank you. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | "datasette publish cloudrun" command to publish to Google Cloud Run 434321685 | |
489163939 | https://github.com/simonw/datasette/pull/434#issuecomment-489163939 | https://api.github.com/repos/simonw/datasette/issues/434 | MDEyOklzc3VlQ29tbWVudDQ4OTE2MzkzOQ== | rprimet 10352819 | 2019-05-03T16:49:45Z | 2019-05-03T16:50:03Z | CONTRIBUTOR | > The second time I ran the command I got an error: > ERROR: (gcloud.beta.run.deploy) Deployment endpoint was not found. Perhaps the provided region was invalid. Set the `run/region` property to a valid region and retry. Ex: `gcloud config set run/region us-central1` Yes, I was able to reproduce this; I used to get prompted for a run region interactively by the `gcloud` tool before, but maybe this is changing? (the [documentation](https://cloud.google.com/run/docs/deploying) now assumes `run/region` is set). Not sure which course of action is best: making `datasette` ensure that `run/region` is set beforehand, or waiting a bit until the gcloud CLI stabilizes? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | "datasette publish cloudrun" command to publish to Google Cloud Run 434321685 | |
489060765 | https://github.com/simonw/datasette/issues/419#issuecomment-489060765 | https://api.github.com/repos/simonw/datasette/issues/419 | MDEyOklzc3VlQ29tbWVudDQ4OTA2MDc2NQ== | russss 45057 | 2019-05-03T11:07:42Z | 2019-05-03T11:07:42Z | CONTRIBUTOR | Are you planning on removing inspect entirely? I didn't spot this work before I started on datasette-geo, but ironically I think it has a use case which really needs the inspect functionality (or some replacement). Datasette-geo uses it to store the bounding box of all the geographic features in the table. This is needed when rendering the map because it avoids having to send loads of tile requests for areas which are empty. Even with relatively small datasets, calculating the bounding box seems to take around 5 seconds, so I don't think it's really feasible to do this on page load. One possible fix would be to do this on startup, and then in a thread which watches the database for changes. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Default to opening files in mutable mode, special option for immutable files 421551434 | |
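The five rows above correspond to a query along these lines (a sketch of the filter described at the top of the page; the SQL Datasette actually generates may differ in its details):

```sql
-- CONTRIBUTOR comments created on 2019-05-03, sorted by issue_url descending.
-- date() collapses ISO 8601 timestamps like '2019-05-03T19:58:31Z'
-- down to their date part.
select *
from issue_comments
where author_association = 'CONTRIBUTOR'
  and date(created_at) = '2019-05-03'
order by issue_url desc;
```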
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
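Note that the reactions column stores a JSON object serialized into TEXT. If the SQLite build includes the JSON1 functions, those counts can be queried directly; a minimal sketch, assuming JSON1 is available:

```sql
-- Extract reaction counts from the JSON stored in the reactions column.
-- The "+1" key has to be double-quoted inside the JSON path.
select id,
       json_extract(reactions, '$.total_count') as total_reactions,
       json_extract(reactions, '$."+1"') as plus_ones
from issue_comments
where json_extract(reactions, '$.total_count') > 0;
```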