issue_comments
11 rows where "created_at" is on date 2019-10-11
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app
---|---|---|---|---|---|---|---|---|---|---|---
540879620 | https://github.com/dogsheep/twitter-to-sqlite/issues/4#issuecomment-540879620 | https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/4 | MDEyOklzc3VlQ29tbWVudDU0MDg3OTYyMA== | simonw 9599 | 2019-10-11T02:59:16Z | 2019-10-11T02:59:16Z | MEMBER | Also import ad preferences and all that other junk. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Command for importing data from a Twitter Export file 488835586 | |
541052329 | https://github.com/simonw/datasette/issues/585#issuecomment-541052329 | https://api.github.com/repos/simonw/datasette/issues/585 | MDEyOklzc3VlQ29tbWVudDU0MTA1MjMyOQ== | rixx 2657547 | 2019-10-11T12:53:51Z | 2019-10-11T12:53:51Z | CONTRIBUTOR | I think this would be good, yeah – currently, databases are explicitly sorted by name in the IndexView, we could just remove that part (and use an `OrderedDict` for consistency, I suppose)? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Databases on index page should display in order they were passed to "datasette serve"? 503217375 | |
541112108 | https://github.com/dogsheep/twitter-to-sqlite/issues/17#issuecomment-541112108 | https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/17 | MDEyOklzc3VlQ29tbWVudDU0MTExMjEwOA== | simonw 9599 | 2019-10-11T15:30:15Z | 2019-10-11T15:30:15Z | MEMBER | It should delete the tables entirely. That way it will work even if the table schema has changed. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | import command should empty all archive-* tables first 505674949 | |
541112588 | https://github.com/dogsheep/twitter-to-sqlite/issues/17#issuecomment-541112588 | https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/17 | MDEyOklzc3VlQ29tbWVudDU0MTExMjU4OA== | simonw 9599 | 2019-10-11T15:31:30Z | 2019-10-11T15:31:30Z | MEMBER | No need for an option: > This command will delete and recreate all of your `archive-*` tables every time you run it. If this is not what you want, run the command against a fresh SQLite database rather than running it against one that already exists. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | import command should empty all archive-* tables first 505674949 | |
541118773 | https://github.com/dogsheep/twitter-to-sqlite/issues/18#issuecomment-541118773 | https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/18 | MDEyOklzc3VlQ29tbWVudDU0MTExODc3Mw== | simonw 9599 | 2019-10-11T15:48:31Z | 2019-10-11T15:48:31Z | MEMBER | https://developer.twitter.com/en/docs/tweets/timelines/api-reference/get-statuses-home_timeline | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Command to import home-timeline 505928530 | |
541118904 | https://github.com/simonw/datasette/issues/507#issuecomment-541118904 | https://api.github.com/repos/simonw/datasette/issues/507 | MDEyOklzc3VlQ29tbWVudDU0MTExODkwNA== | rixx 2657547 | 2019-10-11T15:48:49Z | 2019-10-11T15:48:49Z | CONTRIBUTOR | Headless Chrome and Firefox via Selenium are a solid choice in my experience. You may be interested in how pretix and pretalx solve this problem: They use pytest to create those screenshots on release to make sure they are up to date. See [this writeup](https://behind.pretix.eu/2018/11/15/automated-screenshots/) and [this repo](https://github.com/pretix/pretix-screenshots). | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Every datasette plugin on the ecosystem page should have a screenshot 455852801 | |
541118934 | https://github.com/dogsheep/twitter-to-sqlite/issues/18#issuecomment-541118934 | https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/18 | MDEyOklzc3VlQ29tbWVudDU0MTExODkzNA== | simonw 9599 | 2019-10-11T15:48:54Z | 2019-10-11T15:48:54Z | MEMBER | Rate limit is tight: 15 requests every 15 mins! | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Command to import home-timeline 505928530 | |
541119038 | https://github.com/simonw/datasette/issues/512#issuecomment-541119038 | https://api.github.com/repos/simonw/datasette/issues/512 | MDEyOklzc3VlQ29tbWVudDU0MTExOTAzOA== | rixx 2657547 | 2019-10-11T15:49:13Z | 2019-10-11T15:49:13Z | CONTRIBUTOR | How open are you to changing the config variable names (with appropriate deprecation, of course)? `"about_url_text", "license_url_text"` etc might be better suited to convey that these are just meant as basically URL titles. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | "about" parameter in metadata does not appear when alone 457147936 | |
541119834 | https://github.com/dogsheep/twitter-to-sqlite/issues/18#issuecomment-541119834 | https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/18 | MDEyOklzc3VlQ29tbWVudDU0MTExOTgzNA== | simonw 9599 | 2019-10-11T15:51:22Z | 2019-10-11T16:51:33Z | MEMBER | In order to support multiple user timelines being saved in the same database, I'm going to import the tweets into the `tweets` table AND add a new `timeline_tweets` table recording that a specific tweet showed up in a specific user's timeline. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Command to import home-timeline 505928530 | |
541141169 | https://github.com/dogsheep/twitter-to-sqlite/issues/18#issuecomment-541141169 | https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/18 | MDEyOklzc3VlQ29tbWVudDU0MTE0MTE2OQ== | simonw 9599 | 2019-10-11T16:51:29Z | 2019-10-11T16:51:29Z | MEMBER | Documented here: https://github.com/dogsheep/twitter-to-sqlite/blob/master/README.md#retrieving-tweets-from-your-home-timeline | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Command to import home-timeline 505928530 | |
541248629 | https://github.com/dogsheep/twitter-to-sqlite/issues/19#issuecomment-541248629 | https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/19 | MDEyOklzc3VlQ29tbWVudDU0MTI0ODYyOQ== | simonw 9599 | 2019-10-11T22:48:56Z | 2019-10-11T22:48:56Z | MEMBER | `since_id` documented here: https://developer.twitter.com/en/docs/tweets/timelines/api-reference/get-statuses-home_timeline > Returns results with an ID greater than (that is, more recent than) the specified ID. There are limits to the number of Tweets which can be accessed through the API. If the limit of Tweets has occurred since the since_id, the since_id will be forced to the oldest ID available. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | since_id support for home-timeline 506087267 |
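The `since_id` behaviour quoted in the last comment above implies a simple incremental-fetch pattern: remember the highest tweet ID seen so far and pass it as `since_id` on the next run. A minimal sketch of that pattern; `fetch_timeline` here is a hypothetical stand-in for a Twitter API call, not part of twitter-to-sqlite:

```python
def fetch_since(fetch_timeline, since_id=None):
    """Fetch newer items and return (items, new_since_id).

    fetch_timeline(since_id) is a stand-in for an API call that returns
    only tweets with IDs greater than since_id (all tweets if it is None).
    """
    items = fetch_timeline(since_id)
    if items:
        # IDs are monotonically increasing, so the max seen becomes
        # the since_id for the next incremental run.
        since_id = max(item["id"] for item in items)
    return items, since_id


# Simulated timeline standing in for the real API:
_timeline = [{"id": 3}, {"id": 2}, {"id": 1}]

def fake_fetch(since_id):
    return [t for t in _timeline if since_id is None or t["id"] > since_id]

items, since_id = fetch_since(fake_fetch)            # first run: everything
newer, since_id = fetch_since(fake_fetch, since_id)  # second run: nothing new
```

After the first run `since_id` is 3, so the second run returns an empty list until new tweets appear.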
```sql
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [issue] INTEGER REFERENCES [issues]([id]),
    [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
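The date filter that produced the rows above ("created_at is on date 2019-10-11") can be reproduced against this schema with SQLite's `date()` function, which extracts the date part of an ISO 8601 timestamp. A minimal sketch using Python's `sqlite3` with an in-memory database and one sample row from the table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
    [html_url] TEXT, [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY, [node_id] TEXT,
    [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
    [author_association] TEXT, [body] TEXT, [reactions] TEXT,
    [issue] INTEGER, [performed_via_github_app] TEXT
);
""")

# One sample row taken from the data above.
conn.execute(
    "INSERT INTO issue_comments (id, created_at, author_association, body) "
    "VALUES (?, ?, ?, ?)",
    (540879620, "2019-10-11T02:59:16Z", "MEMBER",
     "Also import ad preferences and all that other junk."),
)

# "created_at is on date 2019-10-11": compare date() of the ISO timestamp.
rows = conn.execute(
    "SELECT id, body FROM issue_comments "
    "WHERE date(created_at) = '2019-10-11'"
).fetchall()
```

`date()` accepts the `T` separator and trailing `Z` timezone indicator, so the stored `2019-10-11T02:59:16Z` values match directly without string slicing.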