issue_comments: 770071568
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|
https://github.com/dogsheep/github-to-sqlite/issues/60#issuecomment-770071568 | https://api.github.com/repos/dogsheep/github-to-sqlite/issues/60 | 770071568 | MDEyOklzc3VlQ29tbWVudDc3MDA3MTU2OA== | 9599 | 2021-01-29T21:56:15Z | 2021-01-29T21:56:15Z | MEMBER | {"total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 797097140 |  |

body:

I really like the way you're using pipes here - really smart. It's similar to how I build the demo database in this GitHub Actions workflow: https://github.com/dogsheep/github-to-sqlite/blob/62dfd3bc4014b108200001ef4bc746feb6f33b45/.github/workflows/deploy-demo.yml#L52-L82

`twitter-to-sqlite` actually has a mechanism for doing this kind of thing, documented at https://github.com/dogsheep/twitter-to-sqlite#providing-input-from-a-sql-query-with---sql-and---attach

It lets you do things like:

```
$ twitter-to-sqlite users-lookup my.db --sql="select follower_id from following" --ids
```

Maybe I should add something similar to `github-to-sqlite`? Feels like it could be really useful.
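The `--sql`/`--attach` pattern described above is straightforward to reproduce with `click` and `sqlite-utils`. Here is a rough sketch of what such a command could look like in `github-to-sqlite` - the `fetch` command name, its arguments and the `alias:path` attach syntax are assumptions for illustration only, not options the tool currently provides:

```
import click
import sqlite_utils


@click.command()
@click.argument("db_path", type=click.Path())
@click.argument("identifiers", nargs=-1)
@click.option("--sql", help="SQL query returning identifiers as its first column")
@click.option(
    "--attach",
    multiple=True,
    help="Attach another database file, in the form alias:path/to/other.db",
)
def fetch(db_path, identifiers, sql, attach):
    "Fetch items passed on the command line or returned by a --sql query"
    db = sqlite_utils.Database(db_path)
    # Attach extra database files so the --sql query can join across them
    for value in attach:
        alias, _, path = value.partition(":")
        db.attach(alias, path)
    if sql:
        # The first column of every returned row becomes an identifier to fetch
        identifiers = [row[0] for row in db.execute(sql).fetchall()]
    for identifier in identifiers:
        # A real command would call the GitHub API here and save the result
        click.echo(identifier)


if __name__ == "__main__":
    fetch()
```

The `alias:path` form for `--attach` mirrors the convention `twitter-to-sqlite` documents, so a query passed with `--sql` can select identifiers from tables in an attached database as well as the main one.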