issue_comments
7 rows where user = 596279
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
736173084 | https://github.com/simonw/datasette/issues/942#issuecomment-736173084 | https://api.github.com/repos/simonw/datasette/issues/942 | MDEyOklzc3VlQ29tbWVudDczNjE3MzA4NA== | zaneselvans 596279 | 2020-12-01T02:20:58Z | 2020-12-01T02:20:58Z | NONE | Are there common patterns for storing column-based metadata inside SQLite itself? I know Postgres allows "comment" fields, which this is kind of trying to replicate. Should the `units` and `description` and possibly other per-column metadata fields be combined into a single (tabular?) structure that would be displayed above the data on the table / query results page? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Support column descriptions in metadata.json 681334912 | |
737428262 | https://github.com/simonw/datasette/issues/942#issuecomment-737428262 | https://api.github.com/repos/simonw/datasette/issues/942 | MDEyOklzc3VlQ29tbWVudDczNzQyODI2Mg== | zaneselvans 596279 | 2020-12-02T18:55:21Z | 2020-12-02T18:55:21Z | NONE | Are you thinking that those metadata tables would be added to the SQLite DB by Datasette, when you tell it to wrap up the database, with the metadata coming from the `metadata.json`? Would it be easy to allow the prepopulation of those tables in the database itself? We've been struggling with the best way to make sure that the data is always accompanied by metadata, and baking it all into the database itself would be nice, since then we wouldn't need to worry about separately distributing different files in different contexts. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Support column descriptions in metadata.json 681334912 | |
806010960 | https://github.com/simonw/datasette/issues/741#issuecomment-806010960 | https://api.github.com/repos/simonw/datasette/issues/741 | MDEyOklzc3VlQ29tbWVudDgwNjAxMDk2MA== | zaneselvans 596279 | 2021-03-24T17:19:42Z | 2021-03-24T17:19:42Z | NONE | Ah, okay so `--extra-options` applies to both `datasette publish` and `datasette package`? There weren't any examples of it being used with `publish` in the docs, so this tripped me up for a bit. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Replace "datasette publish --extra-options" with "--setting" 607223136 | |
898032118 | https://github.com/simonw/datasette/issues/942#issuecomment-898032118 | https://api.github.com/repos/simonw/datasette/issues/942 | IC_kwDOBm6k_c41huH2 | zaneselvans 596279 | 2021-08-12T23:12:00Z | 2021-08-12T23:12:00Z | NONE | This looks awesome. We'll definitely make extensive use of this feature! On Thu, Aug 12, 2021 at 5:52 PM Simon Willison wrote: > I like this. Need to solve for mobile though where the cog menu isn't visible - I think I'll do that with a definition list at the top of the page. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Support column descriptions in metadata.json 681334912 | |
1051473892 | https://github.com/simonw/datasette/issues/260#issuecomment-1051473892 | https://api.github.com/repos/simonw/datasette/issues/260 | IC_kwDOBm6k_c4-rDfk | zaneselvans 596279 | 2022-02-26T02:24:15Z | 2022-02-26T02:24:15Z | NONE | Is there already functionality that can be used to validate the `metadata.json` file? Is there a JSON Schema that defines it? Or a validation that's available via datasette with Python? We're working on [automatically building the metadata](https://github.com/catalyst-cooperative/pudl/pull/1479) in CI and when we deploy to Cloud Run, and it would be nice to be able to check whether the metadata we're outputting is valid in our tests. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Validate metadata.json on startup 323223872 | |
1059652834 | https://github.com/simonw/sqlite-utils/issues/412#issuecomment-1059652834 | https://api.github.com/repos/simonw/sqlite-utils/issues/412 | IC_kwDOCGYnMM4_KQTi | zaneselvans 596279 | 2022-03-05T02:14:40Z | 2022-03-05T02:14:40Z | NONE | We do a lot of `df.to_sql()` to write into SQLite, mostly in [this module](https://github.com/catalyst-cooperative/pudl/blob/main/src/pudl/load.py#L25). | {"total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Optional Pandas integration 1160182768 | |
1235079469 | https://github.com/simonw/datasette/issues/260#issuecomment-1235079469 | https://api.github.com/repos/simonw/datasette/issues/260 | IC_kwDOBm6k_c5JndEt | zaneselvans 596279 | 2022-09-02T05:24:59Z | 2022-09-02T05:24:59Z | NONE | @zschira is working with Pydantic while converting between and validating JSON Frictionless datapackage descriptors that annotate an SQLite DB ([extracted from FERC's XBRL data](https://github.com/catalyst-cooperative/ferc-xbrl-extractor)) and the Datasette YAML metadata [so we can publish them with Datasette](https://github.com/catalyst-cooperative/pudl/pull/1831). Maybe there's some overlap? We've been loving Pydantic. | {"total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1} | Validate metadata.json on startup 323223872 | |
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [issue] INTEGER REFERENCES [issues]([id]),
    [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
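For reference, the filtered view at the top of this page ("7 rows where user = 596279") corresponds to a query along the following lines against the schema above. This is a minimal sketch, not the exact SQL Datasette generates; the column list is illustrative and any subset of the schema works.

-- Sketch: reproduce the "7 rows where user = 596279" view,
-- filtering on the [user] foreign key, which is covered by
-- the idx_issue_comments_user index.
select
    id,
    issue_url,
    created_at,
    author_association,
    body,
    issue
from issue_comments
where [user] = 596279
order by id;

Since [reactions] is stored as a JSON text blob, individual counts can be pulled out with SQLite's JSON functions where available, e.g. json_extract([reactions], '$.total_count').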