
issue_comments


20 rows where reactions = {"total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}, sorted by id descending




1297703307 · mcarpenter (167893) · CONTRIBUTOR · created 2022-10-31T21:23:51Z · updated 2022-10-31T21:27:32Z
Issue: Reading rows from a file => AttributeError: '_io.StringIO' object has no attribute 'readinto' (1279144769)
https://github.com/simonw/sqlite-utils/issues/448#issuecomment-1297703307

The Windows aspect is a red herring: OP's sample above produces the same error on Linux. (Though I don't know what's going on with the CI.) The same error can also be obtained by passing an `io` from a file opened in non-binary mode (`'r'` as opposed to `'rb'`) to `rows_from_file()`. This is how I got here.

The fix for my case is easy: open the file in mode `'rb'`. The analogous fix for OP's problem also works: use `BytesIO` in place of `StringIO`.

Minimal test case (derived from [utils.py](https://github.com/simonw/sqlite-utils/blob/main/sqlite_utils/utils.py#L304)):

```python
import io
from typing import cast

#fp = io.StringIO("id,name\n1,Cleo")  # error
fp = io.BytesIO(bytes("id,name\n1,Cleo", encoding='utf-8'))  # okay
reader = io.BufferedReader(cast(io.RawIOBase, fp))
reader.peek(1)  # exception thrown here
```

I see the signature of `rows_from_file()` correctly has `fp: BinaryIO`, but I guess you'd need either a runtime type check for that (not all `io`s have `mode()`), or to catch the `AttributeError` on `peek()` to produce a better error for users. Neither option is ideal. Some thoughts on testing binary-ness of `io`s in this SO question: https://stackoverflow.com/questions/44584829/how-to-determine-if-file-is-opened-in-binary-or-text-mode
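One possible runtime check, following the `io.TextIOBase` approach discussed in that Stack Overflow question. A minimal sketch; `ensure_binary` is a hypothetical helper, not anything in sqlite-utils:

```python
import io

def ensure_binary(fp):
    # Text-mode streams (io.StringIO, files opened without 'b') all derive
    # from io.TextIOBase, so reject them before peek() fails confusingly.
    if isinstance(fp, io.TextIOBase):
        raise TypeError(
            "rows_from_file() needs a binary file object: "
            "open the file with 'rb' or use io.BytesIO"
        )
    return fp

ensure_binary(io.BytesIO(b"id,name\n1,Cleo"))  # fine
ensure_binary(io.StringIO("id,name\n1,Cleo"))  # raises TypeError
```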
1236214402 · simonw (9599) · OWNER · created 2022-09-03T23:46:02Z
Issue: sqlite-utils extract could handle nested objects (816526538)
https://github.com/simonw/sqlite-utils/issues/239#issuecomment-1236214402

Yeah, having a version of this that can set up m2m relationships would definitely be interesting.
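For reference, sqlite-utils already exposes a many-to-many helper in its Python API; a small sketch of that existing behavior (the table names here are illustrative):

```python
import sqlite_utils

db = sqlite_utils.Database(":memory:")
# .m2m() creates (or reuses) a junction table linking the two records
db["dogs"].insert({"id": 1, "name": "Cleo"}, pk="id").m2m(
    "humans", {"id": 2, "name": "Natalie"}, pk="id"
)
print(db.table_names())  # ['dogs', 'humans', 'dogs_humans']
```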
1190449764 · jcmkk3 (45919695) · NONE · created 2022-07-20T15:45:54Z
Issue: feature request: pivot command (1310243385)
https://github.com/simonw/sqlite-utils/issues/456#issuecomment-1190449764

> hadley wickham's melt and reshape could be good inspo: http://had.co.nz/reshape/introduction.pdf

Note that Hadley has since implemented `pivot_longer` and `pivot_wider` instead of the previous verbs/functions that he used. Those can be found in the tidyr package and are probably the best reference, incorporating the learnings from years of user feedback. https://tidyr.tidyverse.org/articles/pivot.html
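The same pair of operations expressed with pandas, as a rough sketch of the semantics a pivot command would need to cover (`melt` roughly corresponds to tidyr's `pivot_longer`, `pivot` to `pivot_wider`; the data is made up):

```python
import pandas as pd

wide = pd.DataFrame({"country": ["NZ", "UK"], "1999": [100, 200], "2000": [110, 210]})

# wide -> long, analogous to tidyr's pivot_longer()
long = wide.melt(id_vars="country", var_name="year", value_name="cases")

# long -> wide again, analogous to pivot_wider()
wide_again = long.pivot(index="country", columns="year", values="cases").reset_index()
print(wide_again)
```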
1112889800 · simonw (9599) · OWNER · created 2022-04-29T05:29:38Z
Issue: Research: demonstrate if parallel SQL queries are worthwhile (1217759117)
https://github.com/simonw/datasette/issues/1727#issuecomment-1112889800

OK, I just got the most incredible result with that!

I started up a container running `bash` like this, from my `datasette` checkout. I'm mapping port 8005 on my laptop to port 8001 inside the container because laptop port 8001 was already doing something else:

```
docker run -it --rm --name my-running-script -p 8005:8001 -v "$PWD":/usr/src/myapp \
  -w /usr/src/myapp nogil/python bash
```

Then in `bash` I ran the following commands to install Datasette and its dependencies:

```
pip install -e '.[test]'
pip install datasette-pretty-traces # For debug tracing
```

Then I started Datasette against my `github.db` database (from github-to-sqlite.dogsheep.net/github.db) like this:

```
datasette github.db -h 0.0.0.0 --setting trace_debug 1
```

I hit the following two URLs to compare the parallel vs. non-parallel implementations:

- `http://127.0.0.1:8005/github/issues?_facet=milestone&_facet=repo&_trace=1&_size=10`
- `http://127.0.0.1:8005/github/issues?_facet=milestone&_facet=repo&_trace=1&_size=10&_noparallel=1`

And... the parallel one beat the non-parallel one decisively, on multiple page refreshes!

Not parallel: 77ms
Parallel: 47ms

(Screenshots: trace output for the parallel and non-parallel runs.)

So yeah, I'm very confident this is a problem with the GIL. And I am absolutely **stunned** that @colesbury's fork ran Datasette (which has some reasonably tricky threading and async stuff going on) out of the box!
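A stripped-down version of the same experiment: time two SQLite queries run serially versus in threads. On stock CPython the GIL usually keeps the two numbers close, which is exactly what the nogil comparison above is probing. The `github.db` filename and the queries are assumptions:

```python
import sqlite3
import time
from concurrent.futures import ThreadPoolExecutor

DB = "github.db"  # assumed local database file

def run(sql):
    # One connection per thread: sqlite3 connections can't be shared across threads
    conn = sqlite3.connect(DB)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()

queries = [
    "select count(*) from issues",
    "select milestone, count(*) from issues group by milestone",
]

t0 = time.perf_counter()
for q in queries:
    run(q)
serial = time.perf_counter() - t0

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as executor:
    list(executor.map(run, queries))
threaded = time.perf_counter() - t0

print(f"serial: {serial:.3f}s  threaded: {threaded:.3f}s")
```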
1111506339 · dracos (154364) · NONE · created 2022-04-27T21:35:13Z
Issue: .delete_where() does not auto-commit (unlike .insert() or .upsert()) (702386948)
https://github.com/simonw/sqlite-utils/issues/159#issuecomment-1111506339

Just stumbled across this, wondering why none of my deletes were working.
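The workaround implied by the issue title is to commit explicitly after `delete_where()`. A minimal sketch of the behavior as described at the time of this comment; the database and table names are made up:

```python
import sqlite_utils

db = sqlite_utils.Database("demo.db")  # hypothetical database
db["items"].insert_all(({"id": i} for i in range(5)), pk="id")

db["items"].delete_where("id > ?", [2])
db.conn.commit()  # without this, the deletes described above never land

print(db["items"].count)  # 3
```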
1059652834 · zaneselvans (596279) · NONE · created 2022-03-05T02:14:40Z
Issue: Optional Pandas integration (1160182768)
https://github.com/simonw/sqlite-utils/issues/412#issuecomment-1059652834

We do a lot of `df.to_sql()` to write into sqlite, mostly in [this module](https://github.com/catalyst-cooperative/pudl/blob/main/src/pudl/load.py#L25).
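The `df.to_sql()` pattern being described, in minimal form (the file, table, and columns are made up):

```python
import sqlite3

import pandas as pd

conn = sqlite3.connect("pudl.db")  # hypothetical database file
df = pd.DataFrame({"plant_id": [1, 2], "capacity_mw": [350.5, 120.0]})

# if_exists="replace" drops and recreates the table;
# index=False skips writing the DataFrame index as an extra column
df.to_sql("plants", conn, if_exists="replace", index=False)
```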
1059650190 · simonw (9599) · OWNER · created 2022-03-05T02:04:43Z · updated 2022-03-05T02:04:54Z
Issue: Optional Pandas integration (1160182768)
https://github.com/simonw/sqlite-utils/issues/412#issuecomment-1059650190

To be honest, I'm having second thoughts about this now, mainly because the idiom for turning a generator of dicts into a DataFrame is SO simple:

```python
df = pd.DataFrame(db.query("select * from articles"))
```

Given it's that simple, I'm questioning if there's any value to adding this to `sqlite-utils` at all. This likely becomes a documentation thing instead!
893133496 · simonw (9599) · OWNER · created 2021-08-05T03:22:44Z
Issue: `publish cloudrun` should deploy a more recent SQLite version (959710008)
https://github.com/simonw/datasette/issues/1419#issuecomment-893133496

I ran into this exact same problem today! I only just learned how to use filter on aggregates: https://til.simonwillison.net/sqlite/sqlite-aggregate-filter-clauses

A workaround I used is to add this to the deploy command:

```
datasette publish cloudrun ... --install=pysqlite3-binary
```

This will install the https://pypi.org/project/pysqlite3-binary package, which bundles a more recent SQLite version.
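The aggregate FILTER clause in question, runnable through Python's `sqlite3` module. It needs SQLite 3.30 or newer, which is the whole reason for the pysqlite3-binary workaround; the table and data here are made up:

```python
import sqlite3

print(sqlite3.sqlite_version)  # FILTER on aggregates needs SQLite 3.30+

conn = sqlite3.connect(":memory:")
conn.execute("create table issues (status text)")
conn.executemany("insert into issues values (?)",
                 [("open",), ("open",), ("closed",)])

open_count, total = conn.execute(
    "select count(*) filter (where status = 'open'), count(*) from issues"
).fetchone()
print(open_count, total)  # 2 3
```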
875738149 · simonw (9599) · OWNER · created 2021-07-07T16:14:29Z
Issue: Serve using UNIX domain socket (939051549)
https://github.com/simonw/datasette/issues/1388#issuecomment-875738149

This sounds like a valuable feature for people running Datasette behind a proxy.
802099264 · simonw (9599) · OWNER · created 2021-03-18T16:43:09Z
Issue: Plugin hook that could support 'order by random()' for table view (834602299)
https://github.com/simonw/datasette/issues/1262#issuecomment-802099264

I often find myself wanting this too, when I'm exploring a new dataset. I agree with Bob that this is a good candidate for a plugin. The plugin system isn't quite set up for this yet though - there isn't an obvious mechanism for adding extra sort orders or other interface elements that manipulate the query used by the table view in some way.

I'm going to promote this issue to the status of a plugin hook feature request - I have a hunch that a plugin hook that enables `order by random()` could enable a lot of other useful plugin features too.
782765665 · simonw (9599) · OWNER · created 2021-02-20T23:34:41Z
Issue: Redesign default .json format (627794879)
https://github.com/simonw/datasette/issues/782#issuecomment-782765665

OK, I'm back to the "top level object as the default" side of things now - it's pretty much unanimous at this point, and it's certainly true that it's not a decision you'll ever regret.
755133937 · simonw (9599) · OWNER · created 2021-01-06T07:25:48Z · updated 2021-01-06T07:26:43Z
Issue: register_output_renderer() should support streaming data (749283032)
https://github.com/simonw/datasette/issues/1101#issuecomment-755133937

Idea: instead of returning a dictionary, `register_output_renderer` could return an object. The object could have the following properties:

- `.extension` - the extension to use
- `.can_render(...)` - says if it can render this
- `.can_stream(...)` - says if streaming is supported
- `async .stream_rows(rows_iterator, send)` - method that loops through all rows and uses `send` to send them to the response in the correct format

I can then deprecate the existing `dict` return type for 1.0.
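A sketch of what such an object might look like. Only the attribute and method names come from the proposal above; the TSV format, the async `rows_iterator`, and the `send` semantics are assumptions for illustration:

```python
class TSVRenderer:
    """Hypothetical renderer following the proposed interface."""

    extension = "tsv"

    def can_render(self, columns):
        return True  # TSV can represent any result set

    def can_stream(self):
        return True

    async def stream_rows(self, rows_iterator, send):
        # Loop over all rows, formatting each one and handing it to send()
        async for row in rows_iterator:
            await send("\t".join(str(value) for value in row) + "\n")
```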
751504136 · drewda (212369) · NONE · created 2020-12-27T19:02:06Z
Issue: Datasette Library (421546944)
https://github.com/simonw/datasette/issues/417#issuecomment-751504136

Very much looking forward to seeing this functionality come together. This is probably out of scope for an initial release, but in the future it could also be useful to think about how to run this in a containerized context. For example, an immutable Datasette container that points to an S3 bucket of SQLite DBs or CSVs. Or an immutable Datasette container pointing to an NFS volume elsewhere on a Kubernetes cluster.
737563699 · simonw (9599) · OWNER · created 2020-12-02T23:45:42Z
Issue: Cloud Run fails to serve database files larger than 32MB (610829227)
https://github.com/simonw/datasette/issues/749#issuecomment-737563699

I asked about this on Twitter - https://twitter.com/steren/status/1334281184965140483

> You simply need to send the `Transfer-Encoding: chunked` header.
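For context on how that plays out in an ASGI app like Datasette, a bare-bones sketch (not Datasette's actual code): streaming a body without a `content-length` header, which HTTP/1.1 servers such as uvicorn then send with `Transfer-Encoding: chunked`:

```python
# Run with e.g.: uvicorn thismodule:app
async def app(scope, receive, send):
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],  # no content-length
    })
    # Each body message goes out as its own chunk while more_body is True
    for i in range(3):
        await send({
            "type": "http.response.body",
            "body": f"chunk {i}\n".encode(),
            "more_body": True,
        })
    await send({"type": "http.response.body", "body": b""})
```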
696163452 · snth (652285) · NONE · created 2020-09-21T14:46:10Z
Issue: Prototoype for Datasette on PostgreSQL (564833696)
https://github.com/simonw/datasette/issues/670#issuecomment-696163452

I'm currently using PostgREST to serve OpenAPI APIs off PostgreSQL databases. I would like to try out Datasette once this becomes available on Postgres.
615932007 · simonw (9599) · MEMBER · created 2020-04-18T19:27:55Z
Issue: Upload all my photos to a secure S3 bucket (602533539)
https://github.com/dogsheep/dogsheep-photos/issues/4#issuecomment-615932007

Research thread: https://twitter.com/simonw/status/1249049694984011776

> I want to build some software that lets people store their own data in their own S3 bucket, but if possible I'd like not to have to teach people the incantations needed to get their bucket setup and minimum-permission credentials figured out

https://testdriven.io/blog/storing-django-static-and-media-files-on-amazon-s3/ looks useful
586729798 · simonw (9599) · OWNER · created 2020-02-16T17:11:02Z
Issue: Problem with square bracket in CSV column name (564579430)
https://github.com/simonw/sqlite-utils/issues/86#issuecomment-586729798

I filed a bug in the Python issue tracker here: https://bugs.python.org/issue39652
580028669 · simonw (9599) · OWNER · created 2020-01-30T00:30:19Z
Issue: Escape_fts5_query-hookimplementation does not work with queries to standard tables (556814876)
https://github.com/simonw/datasette/issues/662#issuecomment-580028669

I just shipped 0.34: https://datasette.readthedocs.io/en/stable/changelog.html#v0-34
488555399 · simonw (9599) · OWNER · created 2019-05-02T05:13:54Z
Issue: Datasette doesn't reload when database file changes (432870248)
https://github.com/simonw/datasette/issues/431#issuecomment-488555399

Datasette master now treats databases as readonly but NOT immutable. This means you can make changes to those databases from another process and those changes will be instantly reflected in the Datasette interface. As such, reloading on database change is no longer necessary. Closing this ticket.
473312514 · simonw (9599) · OWNER · created 2019-03-15T14:42:07Z · updated 2019-03-17T22:12:30Z
Issue: Datasette Library (421546944)
https://github.com/simonw/datasette/issues/417#issuecomment-473312514

A neat ability of Datasette Library would be if it can work against other files that have been dropped into the folder. In particular: if a user drops a CSV file into the folder, how about automatically converting that CSV file to SQLite using [sqlite-utils](https://github.com/simonw/sqlite-utils)?
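The core of that conversion is only a few lines with the sqlite-utils Python API. A sketch, with the file and table names assumed:

```python
import csv

import sqlite_utils

db = sqlite_utils.Database("library.db")  # hypothetical target database

# Load a dropped CSV file into a table named after the file
with open("data.csv", newline="") as f:
    db["data"].insert_all(csv.DictReader(f))

print(db["data"].count)
```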


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
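Given that schema, a sketch of reproducing this page's filter in Python, using SQLite's JSON1 `json_extract()` on the serialized `reactions` column rather than matching the exact string (the `github.db` filename is an assumption):

```python
import sqlite3

conn = sqlite3.connect("github.db")  # assumed filename
rows = conn.execute(
    """
    select id, user, substr(body, 1, 60)
    from issue_comments
    where json_extract(reactions, '$.total_count') = 2
      and json_extract(reactions, '$."+1"') = 2
    order by id desc
    limit 20
    """
).fetchall()
for row in rows:
    print(row)
```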