issue_comments
9 rows where issue = 275125561
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
350521853 | https://github.com/simonw/datasette/issues/123#issuecomment-350521853 | https://api.github.com/repos/simonw/datasette/issues/123 | MDEyOklzc3VlQ29tbWVudDM1MDUyMTg1Mw== | simonw 9599 | 2017-12-10T03:09:53Z | 2017-12-10T03:09:53Z | OWNER | I'm going to keep this separate in csvs-to-sqlite. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Datasette serve should accept paths/URLs to CSVs and other file formats 275125561 | |
473313975 | https://github.com/simonw/datasette/issues/123#issuecomment-473313975 | https://api.github.com/repos/simonw/datasette/issues/123 | MDEyOklzc3VlQ29tbWVudDQ3MzMxMzk3NQ== | simonw 9599 | 2019-03-15T14:45:46Z | 2019-03-15T14:45:46Z | OWNER | I'm reopening this one as part of #417. Further experience with Python's CSV standard library module has convinced me that pandas is not a required dependency for this. My [sqlite-utils](https://github.com/simonw/sqlite-utils) package can do most of the work here with very few dependencies. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Datasette serve should accept paths/URLs to CSVs and other file formats 275125561 | |
473323329 | https://github.com/simonw/datasette/issues/123#issuecomment-473323329 | https://api.github.com/repos/simonw/datasette/issues/123 | MDEyOklzc3VlQ29tbWVudDQ3MzMyMzMyOQ== | simonw 9599 | 2019-03-15T15:09:15Z | 2019-05-14T15:53:05Z | OWNER | How would Datasette accepting URLs work? I want to support not just SQLite files and CSVs but other extensible formats (geojson, Atom, shapefiles etc) as well. So `datasette serve` needs to be able to take filepaths or URLs to a variety of different content types. If it's a URL, we can use the first 200 downloaded bytes to decide which type of file it is. This is likely more reliable than hoping the web server provided the correct content-type. Also: let's have a threshold for downloading to disk. We will start downloading to a temp file (location controlled by an environment variable) if either the content-length header is above that threshold OR we hit that much data cached in memory already and don't know how much more is still to come. There needs to be a command line option for saying "grab from this URL but force-treat it as CSV" - same thing for files on disk. datasette mydb.db --type=db http://blah/blah --type=csv If you provide fewer `--type` options than you did URLs, then the default behavior is used for all of the subsequent URLs. Auto-detection could be tricky. Probably do this with a plugin hook. https://github.com/h2non/filetype.py is interesting, but it deals with images, video etc, so it's not right for this purpose. I think we need our own simple content sniffing code via a plugin hook. What if two plugin type hooks can both potentially handle a sniffed file? The CLI can quit and return an error saying the content is ambiguous and you need to specify a `--type`, picking from the following list. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Datasette serve should accept paths/URLs to CSVs and other file formats 275125561 | |
698110186 | https://github.com/simonw/datasette/issues/123#issuecomment-698110186 | https://api.github.com/repos/simonw/datasette/issues/123 | MDEyOklzc3VlQ29tbWVudDY5ODExMDE4Ng== | obra 45416 | 2020-09-24T04:49:51Z | 2020-09-24T04:49:51Z | NONE | As a half-measure, I'd get value out of being able to upload a CSV and have Datasette run csvs-to-sqlite on it. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Datasette serve should accept paths/URLs to CSVs and other file formats 275125561 | |
698168648 | https://github.com/simonw/datasette/issues/123#issuecomment-698168648 | https://api.github.com/repos/simonw/datasette/issues/123 | MDEyOklzc3VlQ29tbWVudDY5ODE2ODY0OA== | simonw 9599 | 2020-09-24T07:28:38Z | 2020-09-24T07:28:38Z | OWNER | @obra there's a plugin for that! https://github.com/simonw/datasette-upload-csvs | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Datasette serve should accept paths/URLs to CSVs and other file formats 275125561 | |
698174957 | https://github.com/simonw/datasette/issues/123#issuecomment-698174957 | https://api.github.com/repos/simonw/datasette/issues/123 | MDEyOklzc3VlQ29tbWVudDY5ODE3NDk1Nw== | obra 45416 | 2020-09-24T07:42:05Z | 2020-09-24T07:42:05Z | NONE | Oh. Awesome. On Thu, Sep 24, 2020 at 12:28:53AM -0700, Simon Willison wrote: > @obra there's a plugin for that! https://github.com/simonw/datasette-upload-csvs | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Datasette serve should accept paths/URLs to CSVs and other file formats 275125561 | |
735440555 | https://github.com/simonw/datasette/issues/123#issuecomment-735440555 | https://api.github.com/repos/simonw/datasette/issues/123 | MDEyOklzc3VlQ29tbWVudDczNTQ0MDU1NQ== | jsancho-gpl 11912854 | 2020-11-29T19:12:30Z | 2020-11-29T19:12:30Z | NONE | [datasette-connectors](https://github.com/pytables/datasette-connectors) provides an API for making connectors for any file-based database. For example, [datasette-pytables](https://github.com/pytables/datasette-pytables) is a connector for HDF5 files, so it is now possible to use this type of file with Datasette. It'd be nice if Datasette could provide that API directly, for other file formats and for URLs too. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Datasette serve should accept paths/URLs to CSVs and other file formats 275125561 | |
882096402 | https://github.com/simonw/datasette/issues/123#issuecomment-882096402 | https://api.github.com/repos/simonw/datasette/issues/123 | IC_kwDOBm6k_c40k7kS | RayBB 921217 | 2021-07-18T18:07:29Z | 2021-07-18T18:07:29Z | NONE | I also love the idea for this feature, and wonder if it could work without having to download the whole database into memory at once when it's a rather large db. Obviously this could be slower, but it could have many use cases. My comment is partially inspired by this post about streaming SQLite DBs from GitHub Pages and the like: https://news.ycombinator.com/item?id=27016630 | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Datasette serve should accept paths/URLs to CSVs and other file formats 275125561 | |
882138084 | https://github.com/simonw/datasette/issues/123#issuecomment-882138084 | https://api.github.com/repos/simonw/datasette/issues/123 | IC_kwDOBm6k_c40lFvk | simonw 9599 | 2021-07-19T00:04:31Z | 2021-07-19T00:04:31Z | OWNER | I've been thinking more about this one today too. An extension of this (touched on in #417, Datasette Library) would be to support pointing Datasette at a directory and having it automatically load any CSV files it finds anywhere in that folder or its descendants - either loading them fully, or providing a UI that allows users to select a file to open it in Datasette. For larger files I think the right thing to do is import them into an on-disk SQLite database, which is limited only by available disk space. For smaller files loading them into an in-memory database should work fine. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Datasette serve should accept paths/URLs to CSVs and other file formats 275125561 |
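The 2019-03-15 comment above proposes using the first ~200 downloaded bytes to decide what kind of file a URL points to, rather than trusting the server's content-type. A minimal sketch of that idea, using only the standard library; the `sniff_type` helper and the `"db"`/`"csv"` labels are hypothetical, not part of Datasette:

```python
import csv

# The 16-byte header that every SQLite database file starts with.
SQLITE_MAGIC = b"SQLite format 3\x00"


def sniff_type(first_bytes: bytes) -> str:
    """Guess a file type from the first few hundred bytes of content."""
    if first_bytes.startswith(SQLITE_MAGIC):
        return "db"
    text = first_bytes.decode("utf-8", errors="replace")
    if text.lstrip().startswith(("{", "[")):
        return "json"
    try:
        # csv.Sniffer raises csv.Error if it cannot find a consistent delimiter.
        csv.Sniffer().sniff(text)
        return "csv"
    except csv.Error:
        return "unknown"
```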
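The final comment suggests importing larger CSV files into an on-disk SQLite database and smaller ones into memory. A sketch of that split using the sqlite-utils package mentioned earlier in the thread; the threshold value, helper name, and output filename convention are assumptions for illustration only:

```python
import csv
from pathlib import Path

import sqlite_utils

# Hypothetical cut-off: anything under 10 MB goes into an in-memory database.
IN_MEMORY_THRESHOLD = 10 * 1024 * 1024


def load_csv(path: Path) -> sqlite_utils.Database:
    """Load a CSV into an in-memory database if small, otherwise onto disk."""
    if path.stat().st_size < IN_MEMORY_THRESHOLD:
        db = sqlite_utils.Database(memory=True)
    else:
        db = sqlite_utils.Database(path.with_suffix(".db"))
    with path.open(newline="") as f:
        # Table name taken from the file name, e.g. legislators.csv -> legislators
        db[path.stem].insert_all(csv.DictReader(f))
    return db
```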
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
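For reference, the "9 rows where issue = 275125561" view above is just a filter on the indexed [issue] column. Assuming the export lives in a local file, here called github.db (a hypothetical filename), it can be reproduced with the standard library:

```python
import sqlite3

# Hypothetical filename for the exported database containing the table above.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    "select id, user, created_at, body from issue_comments where issue = ?",
    (275125561,),
).fetchall()
print(len(rows))  # expected: 9
```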