issues

34 rows where user = 536941

id ▼ node_id number title user state locked assignee milestone comments created_at updated_at closed_at author_association pull_request body repo type active_lock_reason performed_via_github_app reactions draft state_reason
950664971 MDU6SXNzdWU5NTA2NjQ5NzE= 1401 unordered list is not rendering bullet points in description_html on database page fgregg 536941 open 0     2 2021-07-22T13:24:18Z 2021-10-23T13:09:10Z   CONTRIBUTOR   Thanks for this tremendous package, @simonw! In the `description_html` for a database, I [have an unordered list](https://github.com/labordata/warehouse/blob/fcea4502e5b615b0eb3e0bdcb45ec634abe20bb6/warehouse_metadata.yml#L19-L22). However, on the database page on the deployed site, it is not rendering this as a bulleted list. ![Screenshot 2021-07-22 at 09-21-51 nlrb](https://user-images.githubusercontent.com/536941/126645923-2777b7f1-fd4c-4d2d-af70-a35e49a07675.png) Page here: https://labordata-warehouse.herokuapp.com/nlrb-9da4ae5 The documentation gives an [example of using an unordered list](https://docs.datasette.io/en/stable/metadata.html#using-yaml-for-metadata) in a `description_html`, so I expected this to work. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/1401/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
951185411 MDU6SXNzdWU5NTExODU0MTE= 1402 feature request: social meta tags fgregg 536941 open 0     2 2021-07-23T01:57:23Z 2021-07-26T19:31:41Z   CONTRIBUTOR   It would be very nice if Twitter, Slack, and other social media platforms could render rich cards when people post a link to a datasette instance. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/1402/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
959137143 MDU6SXNzdWU5NTkxMzcxNDM= 1415 feature request: document minimum permissions for service account for cloudrun fgregg 536941 open 0     4 2021-08-03T13:48:43Z 2023-11-05T16:46:59Z   CONTRIBUTOR   Thanks again for such a powerful project. For deploying to cloudrun from github actions, I'd like to create a service account with minimal permissions. It would be great to document the minimum permissions that need to be set in IAM. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/1415/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
959710008 MDU6SXNzdWU5NTk3MTAwMDg= 1419 `publish cloudrun` should deploy a more recent SQLite version fgregg 536941 open 0     3 2021-08-04T00:45:55Z 2021-08-05T03:23:24Z   CONTRIBUTOR   I recently changed from deploying a datasette using `datasette publish heroku` to `datasette publish cloudrun`. [A query that ran on the heroku site](https://odpr.bunkum.us/odpr-6c2f4fc?sql=with+pivot_members+as+%28%0D%0A++select%0D%0A++++f_num%2C%0D%0A++++max%28union_name%29+as+union_name%2C%0D%0A++++max%28aff_abbr%29+as+abbreviation%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2007%0D%0A++++%29+as+%222007%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2008%0D%0A++++%29+as+%222008%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2009%0D%0A++++%29+as+%222009%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2010%0D%0A++++%29+as+%222010%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2011%0D%0A++++%29+as+%222011%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2012%0D%0A++++%29+as+%222012%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2013%0D%0A++++%29+as+%222013%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2014%0D%0A++++%29+as+%222014%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2015%0D%0A++++%29+as+%222015%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2016%0D%0A++++%29+as+%222016%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2017%0D%0A++++%29+as+%222017%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2018%0D%0A++++%29+as+%222018%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2019%0D%0A++++%29+as+%222019%22%0D%0A++from%0D%0A++++lm_data%0D%0A++++left+join+ar_membership+using+%28rpt_id%29%0D%0A++where%0D%0A++++members+%21%3D+%27%27%0D%0A++++and+trim%28desig_name%29+%3D+%27NHQ%27%0D%0A++++and+union_name+not+in… datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/1419/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}    
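
The root of #1419 is that different publish targets bundle different SQLite versions, and the query above relies on aggregate `FILTER` clauses, which need SQLite 3.30 or newer. A minimal Python check of what a given environment provides:

```python
import sqlite3

# The SQLite library that Python's sqlite3 module is linked against.
# Older versions lack features such as FILTER clauses on aggregates,
# which the query above depends on.
print("SQLite version:", sqlite3.sqlite_version)

# The same check against a live connection:
conn = sqlite3.connect(":memory:")
print(conn.execute("select sqlite_version()").fetchone()[0])
```
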
1033678984 PR_kwDOBm6k_c4tjgJ8 1495 Allow routes to have extra options fgregg 536941 open 0     5 2021-10-22T15:00:45Z 2021-11-19T15:36:27Z   CONTRIBUTOR simonw/datasette/pulls/1495 Right now, datasette routes can only be a 2-tuple of `(regex, view_fn)`. If it was possible for datasette to handle extra options, like [standard Django does](https://docs.djangoproject.com/en/3.2/topics/http/urls/#passing-extra-options-to-view-functions), it would add flexibility for plugin authors. For example, if extra options were enabled, then it would be easy to make a single table the home page (#1284). This plugin would accomplish it. ```python from datasette import hookimpl from datasette.views.table import TableView @hookimpl def register_routes(datasette): return [ (r"^/$", TableView.as_view(datasette), {'db_name': 'DB_NAME', 'table': 'TABLE_NAME'}) ] ``` datasette 107914493 pull     {"url": "https://api.github.com/repos/simonw/datasette/issues/1495/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} 0  
1044267332 I_kwDOCGYnMM4-PkFE 336 sqlite-utils transform --column-order mangles columns of type "timestamp" fgregg 536941 closed 0     1 2021-11-04T01:15:38Z 2023-05-08T21:13:38Z 2023-05-08T21:13:38Z CONTRIBUTOR   Reproducible code below: ```bash > echo 'create table bar (baz text, created_at timestamp default CURRENT_TIMESTAMP)' | sqlite3 foo.db > sqlite3 foo.db SQLite version 3.36.0 2021-06-18 18:36:39 Enter ".help" for usage hints. sqlite> .schema bar CREATE TABLE bar (baz text, created_at timestamp default CURRENT_TIMESTAMP); sqlite> .exit > sqlite-utils transform foo.db bar --column-order baz > sqlite3 foo.db SQLite version 3.36.0 2021-06-18 18:36:39 Enter ".help" for usage hints. sqlite> .schema bar CREATE TABLE IF NOT EXISTS "bar" ( [baz] TEXT, [created_at] FLOAT DEFAULT 'CURRENT_TIMESTAMP' ); sqlite> .exit > sqlite-utils transform foo.db bar --column-order baz > sqlite3 foo.db SQLite version 3.36.0 2021-06-18 18:36:39 Enter ".help" for usage hints. sqlite> .schema bar CREATE TABLE IF NOT EXISTS "bar" ( [baz] TEXT, [created_at] FLOAT DEFAULT '''CURRENT_TIMESTAMP''' ); ``` sqlite-utils 140912432 issue     {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/336/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
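
For what it's worth, the same mangling can be reproduced through the sqlite-utils Python API, which localizes the bug to the transform step itself rather than the CLI; a sketch, assuming sqlite-utils is installed:

```python
import sqlite_utils

db = sqlite_utils.Database("foo.db")
db.execute(
    "create table if not exists bar "
    "(baz text, created_at timestamp default CURRENT_TIMESTAMP)"
)

# transform() rebuilds the table; the bug is that the "timestamp" type
# and its CURRENT_TIMESTAMP default do not survive the rebuild intact.
db["bar"].transform(column_order=["baz"])
print(db["bar"].schema)
```
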
1077102934 I_kwDOCGYnMM5AM0lW 353 Allow passing a file of code to "sqlite-utils convert" fgregg 536941 closed 0     8 2021-12-10T18:06:14Z 2021-12-11T01:38:29Z 2021-12-11T01:09:39Z CONTRIBUTOR   sqlite-utils is so nice, but the ergonomics of the multiline code is kind of tough. It's really hard (maybe impossible) to make the newlines play well with Makefiles. It would be great to write your code fragment in a separate file and direct it into sqlite-utils, either like ```sqlite-utils convert my.db my_table my_column < custom_code.py``` or ```sqlite-utils convert my.db my_table my_column --custom-code=custom_code.py``` Thanks, as ever, for these great tools! sqlite-utils 140912432 issue     {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/353/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
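
In the meantime, the Python API sidesteps the shell-quoting problem entirely, since the conversion function is ordinary Python that can live in (or be imported from) a file; a sketch with placeholder table and column names:

```python
import sqlite_utils

db = sqlite_utils.Database("my.db")

# An ordinary function -- e.g. imported from custom_code.py -- instead of a
# string of code passed on the command line.
def clean(value):
    return value.strip().title() if value else value

# Roughly the equivalent of: sqlite-utils convert my.db my_table my_column '<code>'
db["my_table"].convert("my_column", clean)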
1077620955 I_kwDOBm6k_c5AOzDb 1549 Redesign CSV export to improve usability fgregg 536941 open 0   Datasette 1.0 3268330 5 2021-12-11T19:02:12Z 2022-04-04T11:17:13Z   CONTRIBUTOR   *Original title: Set content type for CSV so that browsers will attempt to download instead opening in the browser* Right now, if the user clicks on the CSV related to a <s>table or a</s> query, the response header for the content type is "content-type: text/plain; charset=utf-8". Most browsers will try to open a file with this content-type in the browser. This is not what most people want to do, and lots of folks don't know that if they want to download the CSV and open it in a spreadsheet program, they next need to save the page through their browser. It would be great if the response headers could be something like ``` Content-Type: text/csv Content-Disposition: attachment; filename=MyVerySpecial.csv ``` which would lead browsers to open a download dialog. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/1549/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
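
For reference, a minimal standard-library sketch of what the requested behaviour looks like at the HTTP level (filename and port are placeholders):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class CSVHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"id,name\n1,example\n"
        self.send_response(200)
        # text/csv plus a Content-Disposition of "attachment" is what makes
        # browsers offer a download dialog instead of rendering the text inline.
        self.send_header("Content-Type", "text/csv")
        self.send_header("Content-Disposition", 'attachment; filename="MyVerySpecial.csv"')
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), CSVHandler).serve_forever()
```
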
1079111498 I_kwDOBm6k_c5AUe9K 1553 if csv export is truncated in non streaming mode set informative response header fgregg 536941 open 0     3 2021-12-13T22:50:44Z 2021-12-16T19:17:28Z   CONTRIBUTOR   Streaming mode is currently not enabled for custom queries, so those queries will be truncated to the max row limit. It would be great if, when a response is truncated, a header signalling that were set on the response. I need to write some pagination code for getting full results back for a custom query, and it would make the code much better if I could reliably know when there is nothing more to limit/offset. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/1553/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
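
Without such a header, pagination code has to probe: keep requesting pages until one comes back shorter than the limit. A sketch of that workaround against a hypothetical Datasette JSON endpoint (base URL, query, and page size are placeholders):

```python
import json
import urllib.parse
import urllib.request

BASE = "https://example.com/mydb.json"  # hypothetical datasette instance
SQL = "select * from my_table"          # hypothetical custom query
PAGE = 1000

def fetch_all():
    offset = 0
    while True:
        sql = f"{SQL} limit {PAGE} offset {offset}"
        url = BASE + "?" + urllib.parse.urlencode({"sql": sql, "_shape": "array"})
        with urllib.request.urlopen(url) as response:
            rows = json.load(response)
        yield from rows
        # A short page means we have reached the end -- exactly the signal
        # a "truncated" response header would make unnecessary.
        if len(rows) < PAGE:
            break
        offset += PAGE

all_rows = list(fetch_all())
```
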
1082765654 I_kwDOBm6k_c5AibFW 1561 add hash id to "_memory" url if hashed url mode is turned on and crossdb is also turned on fgregg 536941 closed 0     3 2021-12-17T00:45:12Z 2022-03-19T04:45:40Z 2022-03-19T04:45:40Z CONTRIBUTOR   If hashed_url mode is turned on and crossdb is also turned on, then queries to _memory should have a hash_id. One way that it could work is to have the _memory hash be a hash of all the individual databases. Otherwise, crossdb queries can get quite out of date if using aggressive caching. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/1561/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
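
A sketch of the approach the issue suggests — derive the `_memory` hash from the hashes of the individual databases, so it changes whenever any of them does (the per-database hashes below are placeholders standing in for whatever Datasette has already computed):

```python
import hashlib

def memory_db_hash(database_hashes):
    """Combine the content hashes of the attached databases into one
    stable hash for the _memory cross-database view."""
    combined = hashlib.sha256()
    # Sort so the result does not depend on attachment order.
    for db_hash in sorted(database_hashes):
        combined.update(db_hash.encode("utf-8"))
    return combined.hexdigest()

print(memory_db_hash(["1a2b...", "3c4d..."]))  # placeholder hashes
```
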
1089529555 I_kwDOBm6k_c5A8ObT 1581 when hashed urls are turned on, the _memory db has improperly long-lived cache expiry fgregg 536941 closed 0     1 2021-12-28T00:05:48Z 2022-03-24T04:08:18Z 2022-03-24T04:08:18Z CONTRIBUTOR   If hashed_urls are on, then a -000 suffix is added to the `_memory` database, and the cache settings are set just as if it were a normal hashed database. In particular, this header is set: `cache-control: max-age=31536000`. This is not appropriate because the `_memory-000` database isn't really hashed based on the contents of the databases (see #1561). Either the cache-control header should be changed, or the _memory db should have a hash suffix that does depend on the contents of the databases. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/1581/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
1090055810 PR_kwDOBm6k_c4wWDxH 1582 don't set far expiry if hash is '000' fgregg 536941 closed 0     1 2021-12-28T18:16:13Z 2022-03-24T04:07:58Z 2022-03-24T04:07:58Z CONTRIBUTOR simonw/datasette/pulls/1582 This will close #1581. I couldn't find any unit tests related to testing hashed urls, and I know that you want to break that code out of the core application (#1561), so I'm not quite sure what you would like me to do for testing. datasette 107914493 pull     {"url": "https://api.github.com/repos/simonw/datasette/issues/1582/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} 0
1090810196 I_kwDOBm6k_c5BBHFU 1583 consider adding deletion step of cloudbuild artifacts to gcloud publish fgregg 536941 open 0     1 2021-12-30T00:33:23Z 2021-12-30T00:34:16Z   CONTRIBUTOR   Right now, as part of the publish process, images and other artifacts are stored in gcloud's cloud storage before being deployed to cloudrun. After successfully deploying, it would be nice if the script deleted these artifacts. Otherwise, if you have a regularly scheduled build process, you can end up paying to store lots of out-of-date artifacts. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/1583/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
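
A sketch of what that clean-up step could look like with the google-cloud-storage client; the `<project>_cloudbuild` bucket name and `source/` prefix are assumptions based on where `gcloud` stages build sources by default:

```python
from google.cloud import storage  # pip install google-cloud-storage

PROJECT = "my-project"  # placeholder project id

def delete_build_artifacts(project=PROJECT):
    client = storage.Client(project=project)
    # gcloud stages build sources in a <project>_cloudbuild bucket by default
    # (an assumption to verify against your own setup).
    bucket = client.bucket(f"{project}_cloudbuild")
    for blob in client.list_blobs(bucket, prefix="source/"):
        blob.delete()
```
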
1096536240 I_kwDOBm6k_c5BW9Cw 1586 run analyze on all databases as part of start up or publishing fgregg 536941 open 0     1 2022-01-07T17:52:34Z 2022-02-02T07:13:37Z   CONTRIBUTOR   Running `analyze;` lets sqlite's query planner make *much* better use of any indices. It might be nice if the analyze was run as part of the start up of "serve" or "publish". datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/1586/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}    
1096558279 I_kwDOCGYnMM5BXCbH 365 create-index should run analyze after creating index fgregg 536941 closed 0   3.21 7558727 16 2022-01-07T18:21:25Z 2022-01-11T02:43:34Z 2022-01-11T01:36:48Z CONTRIBUTOR   sqlite's query planner depends upon analyze to make good use of indices. It would be nice if analyze was run as part of the create-index command. If data is inserted later, things can get out of date, but it would still probably be a net win. sqlite-utils 140912432 issue     {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/365/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
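
Both #1586 and #365 come down to running `ANALYZE`, which populates the statistics tables SQLite's query planner consults when deciding whether an index is worth using. A minimal sketch of running it across several databases, e.g. at startup (paths are placeholders):

```python
import sqlite3

def analyze_databases(paths):
    for path in paths:
        conn = sqlite3.connect(path)
        # ANALYZE populates sqlite_stat1, which the query planner consults
        # when deciding whether to use an index.
        conn.execute("ANALYZE")
        conn.commit()
        conn.close()

analyze_databases(["example.db"])  # placeholder path
```
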
1126692066 I_kwDOCGYnMM5DJ_Ti 403 Document how to add a primary key to a rowid table using `sqlite-utils transform --pk` fgregg 536941 closed 0     4 2022-02-08T01:39:40Z 2022-02-09T04:22:43Z 2022-02-08T19:33:59Z CONTRIBUTOR   *Original title: Add option for adding a new, serial, primary key* Sometimes we have tables that don't have primary keys, but ought to have them. We *can* use rowid for that, but it would often be nicer to have an explicit primary key. Using the current value of rowid would be fine. sqlite-utils 140912432 issue     {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/403/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
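
The CLI recipe named in the issue title is `sqlite-utils transform data.db mytable --pk id`; the sketch below assumes the Python API's `transform(pk=...)` behaves the same way when the named column does not yet exist:

```python
import sqlite_utils

db = sqlite_utils.Database("data.db")
db["mytable"].insert_all([{"name": "a"}, {"name": "b"}])  # a rowid table

# Assumed Python-API equivalent of:
#   sqlite-utils transform data.db mytable --pk id
# i.e. rebuild the table with an explicit integer primary key
# (per the issue, populating it from the current rowid values).
db["mytable"].transform(pk="id")
print(db["mytable"].schema)
```
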
1163369515 I_kwDOBm6k_c5FV5wr 1655 query result page is using 400mb of browser memory 40x size of html page and 400x size of csv data fgregg 536941 open 0     8 2022-03-09T00:56:40Z 2023-10-17T21:53:17Z   CONTRIBUTOR   [This page](https://labordata.bunkum.us/opdr-8335ea3?sql=with+most_recent_lu+as+%28%0D%0A++select%0D%0A++++*%0D%0A++from%0D%0A++++%28%0D%0A++++++select%0D%0A++++++++*%0D%0A++++++from%0D%0A++++++++lm_data%0D%0A++++++order+by%0D%0A++++++++f_num%2C%0D%0A++++++++receive_date+desc%0D%0A++++%29+t%0D%0A++group+by%0D%0A++++f_num%0D%0A%29%0D%0Aselect%0D%0A++aff_abbr+%7C%7C+coalesce%28%27+local+%27+%7C%7C+desig_num%2C+%27+%27+%7C%7C+unit_name%29+as+abbr_local_name%2C%0D%0A++coalesce%28%0D%0A++++regexp_match%28%27%28.*%3F%29%28%2C%3F+AFL-CIO%24%29%27%2C+union_name%29%2C%0D%0A++++regexp_match%28%27%28.*%3F%29%28+IND%24%29%27%2C+union_name%29%2C%0D%0A++++union_name%0D%0A++%29+%7C%7C+coalesce%28%27+local+%27+%7C%7C+desig_num%2C+%27+%27+%7C%7C+unit_name%29+as+full_local_name%2C%0D%0A++*%0D%0Afrom%0D%0A++most_recent_lu%0D%0Awhere+%28desig_num+IS+NOT+NULL+OR+unit_name+IS+NOT+NULL%29+AND+desig_name+%21%3D+%27HQ%27%0D%0Alimit%0D%0A++5000+offset+0) is using about 400MB in Firefox 97 on Mac OS X. If you download the HTML for the page, it's about 11MB, and if you get the CSV for the data it's about 1MB. It's using over 1GB on Chrome 99. I found this because I was trying to figure out why editing the SQL was getting very slow. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/1655/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
1200224939 I_kwDOBm6k_c5Hifqr 1707 [feature] expanded detail page fgregg 536941 open 0     1 2022-04-11T16:29:17Z 2022-04-11T16:33:00Z   CONTRIBUTOR   Right now, if you click through to the detail page for a row, you get the info for the row and links to related tables: ![Screenshot 2022-04-11 at 12-27-26 lm20 filing](https://user-images.githubusercontent.com/536941/162786802-90ac1a71-4624-47c4-ae55-b783f4f6c92d.png) It would be very cool if there was an option to expand the rows of the related tables from within this detail view. If you had that, then datasette could fulfill a pretty common use case where you want to search for an entity and get a consolidated detail view of what you know about that entity. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/1707/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
1309542173 PR_kwDOCGYnMM47pwAb 455 in extract code, check equality with IS instead of = for nulls fgregg 536941 closed 0     3 2022-07-19T13:40:25Z 2022-08-27T14:45:03Z 2022-08-27T14:45:03Z CONTRIBUTOR simonw/sqlite-utils/pulls/455 sqlite "IS" is equivalent to SQL "IS NOT DISTINCT FROM" closes #423 sqlite-utils 140912432 pull     {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/455/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} 0  
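
The distinction matters because `=` propagates NULLs while `IS` treats two NULLs as equal; a quick demonstration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# '=' propagates NULL: the comparison itself is NULL, which is not true.
print(conn.execute("select NULL = NULL").fetchone())     # (None,)

# 'IS' is SQLite's equivalent of IS NOT DISTINCT FROM: NULLs compare equal.
print(conn.execute("select NULL IS NULL").fetchone())    # (1,)
print(conn.execute("select 1 IS 1, 1 IS 2").fetchone())  # (1, 0)
```
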
1310243385 I_kwDOCGYnMM5OGLo5 456 feature request: pivot command fgregg 536941 open 0     5 2022-07-20T00:58:08Z 2022-07-20T17:50:50Z   CONTRIBUTOR   Pivoting a long-format table to a wide-format table is pretty common and kind of a pain. Would love to see this feature in sqlite-utils! sqlite-utils 140912432 issue     {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/456/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
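
Until such a command exists, the usual workaround is conditional aggregation — one output column per pivoted value, the same pattern as the long query in #1419 above. A sketch with placeholder names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table long (name text, year integer, value integer);
    insert into long values ('a', 2021, 1), ('a', 2022, 2), ('b', 2021, 3);
""")

# One output column per pivoted value, via aggregate FILTER clauses.
rows = conn.execute("""
    select
        name,
        sum(value) filter (where year = 2021) as "2021",
        sum(value) filter (where year = 2022) as "2022"
    from long
    group by name
""").fetchall()
print(rows)  # [('a', 1, 2), ('b', 3, None)]
```
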
1334628400 I_kwDOBm6k_c5PjNAw 1779 google cloudrun updated their limits on maxscale based on memory and cpu count fgregg 536941 closed 0   Datasette 0.62 8303187 13 2022-08-10T13:27:21Z 2022-08-14T19:42:59Z 2022-08-14T17:07:34Z CONTRIBUTOR   If you don't set an explicit limit on container scaling, then [google defaults to 100](https://cloud.google.com/run/docs/configuring/max-instances#limits). Google recently updated the [limits on container scaling](https://cloud.google.com/run/docs/configuring/max-instances#limits), such that if you set up datasette to use more memory or cpu, then you need to set the maxScale argument much smaller than 100. It would be nice if `datasette publish` could do this math for you and set the right maxScale. [Log of a failing publish run](https://github.com/labordata/warehouse/runs/7764725972?check_suite_focus=true#step:8:332). ``` ERROR: (gcloud.run.deploy) spec.template.spec.containers[0].resources.limits.cpu: Invalid value specified for cpu. For the specified value, maxScale may not exceed 15. Consider running your workload in a region with greater capacity, decreasing your requested cpu-per-instance, or requesting an increase in quota for this region if you are seeing sustained usage near this limit, see https://cloud.google.com/run/quotas. Your project may gain access to further scaling by adding billing information to your account. Traceback (most recent call last): File "/home/runner/.local/bin/datasette", line 8, in <module> sys.exit(cli()) File "/home/runner/.local/lib/python3.8/site-packages/click/core.py", line 1128, in __call__ return self.main(*args, **kwargs) File "/home/runner/.local/lib/python3.8/site-packages/click/core.py", line 1053, in main rv = self.invoke(ctx) File "/home/runner/.local/lib/python3.8/site-packages/click/core.py", line 1659, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/home/runner/.local/lib/python3.8/site-packages/click/core.py", line 1659, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/home/runner/.local/lib/python3.8/site-packages/click/core.py", line 1395, in invoke return ctx.invoke(self.callback, **ctx.params) File "/home/runner/.local/lib/python3.8/site-packages/… datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/1779/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
1386456717 PR_kwDOBm6k_c4_oHI4 1820 [SPIKE] Don't truncate query CSVs fgregg 536941 closed 0     2 2022-09-26T17:27:01Z 2022-10-07T16:12:17Z 2022-10-07T16:12:17Z CONTRIBUTOR simonw/datasette/pulls/1820 Relates to #526 This is a minimal set of changes needed for having *query* CSVs attempt to download all the rows. What's good about it is the minimalism. What's bad about it: 1. We are abusing the `_size` argument to indicate we don't want truncation, which isn't the most obvious thing. Additionally, there are various checks that make sure the "_size" URL parameter is a positive integer, which we are relying on to prevent overloading. 2. The default CSV on a table page will use the max_returned_rows argument. Changing this could be a breaking change, since that's currently a place that has some facilities for pagination. Additionally, i think there's a limit under the hood somewhere which if we removed could lead to sql timeouts 3. There are similar reasons for leaving the current streaming method alone, as the current methods could allow for downloading very large files that could have a sql timeout if we tried to get them in one go. <!-- readthedocs-preview datasette start --> ---- :books: Documentation preview :books:: https://datasette--1820.org.readthedocs.build/en/1820/ <!-- readthedocs-preview datasette end --> datasette 107914493 pull     {"url": "https://api.github.com/repos/simonw/datasette/issues/1820/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} 1  
1400083043 I_kwDOBm6k_c5Tc5Jj 1834 inspect data is not used for caching database hash fgregg 536941 closed 0     0 2022-10-06T17:52:01Z 2022-10-06T20:06:21Z 2022-10-06T20:06:08Z CONTRIBUTOR   When databases are loaded, https://github.com/simonw/datasette/blob/cb1e093fd361b758120aefc1a444df02462389a3/datasette/app.py#L257-L260 there is nothing preventing the rehashing of the database for immutable databases. https://github.com/simonw/datasette/blob/cb1e093fd361b758120aefc1a444df02462389a3/datasette/database.py#L50-L53 What I might expect is that relevant values of `inspect_data` would get passed to the `Database` class to prevent re-hashing. With data that is many gigs large, this adds significant start-up time. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/1834/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
1400121355 PR_kwDOBm6k_c5AVujU 1835 use inspect data for hash and file size fgregg 536941 closed 0     3 2022-10-06T18:25:24Z 2022-10-27T20:51:30Z 2022-10-06T20:06:07Z CONTRIBUTOR simonw/datasette/pulls/1835 `inspect_data` should already include the hash and the db file size, so this PR takes advantage of using those instead of always recalculating. should help a lot on startup with large DBs. closes #1834 datasette 107914493 pull     {"url": "https://api.github.com/repos/simonw/datasette/issues/1835/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} 0  
1400374908 I_kwDOBm6k_c5TeAZ8 1836 docker image is duplicating db files somehow fgregg 536941 open 0     13 2022-10-06T22:35:54Z 2022-10-08T16:56:51Z   CONTRIBUTOR   if you look into the docker image created by docker publish, the `datasette inspect` line is duplicating the db files. here's the result of the inspect command: <img width="490" alt="Screen Shot 2022-10-06 at 2 58 08 PM" src="https://user-images.githubusercontent.com/536941/194430909-9303ee1a-9bdd-4212-b54b-de28f43768d4.png"> datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/1836/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}    
1400431789 PR_kwDOBm6k_c5AWyQK 1837 Make hash and size a lazy property fgregg 536941 closed 0     2 2022-10-06T23:51:22Z 2022-10-27T20:51:21Z 2022-10-27T20:51:20Z CONTRIBUTOR simonw/datasette/pulls/1837 Many apologies, @simonw. My previous PR #1835 did not really solve the problem, because the name of the database is often not known to the database object in the init method. I took a cue from how you dealt with this issue and made hash a lazy property, and did something similar with size. <!-- readthedocs-preview datasette start --> ---- :books: Documentation preview :books:: https://datasette--1837.org.readthedocs.build/en/1837/ <!-- readthedocs-preview datasette end --> datasette 107914493 pull     {"url": "https://api.github.com/repos/simonw/datasette/issues/1837/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} 0
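
The pattern the PR describes — computing the hash and size only on first access — is what `functools.cached_property` provides; a simplified sketch, not Datasette's actual implementation:

```python
import hashlib
from functools import cached_property
from pathlib import Path

class Database:
    def __init__(self, path):
        self.path = path  # the path may be all we know at __init__ time

    @cached_property
    def size(self):
        return Path(self.path).stat().st_size

    @cached_property
    def hash(self):
        # Computed once, on first access, then cached on the instance --
        # so startup no longer pays for hashing multi-gigabyte files.
        digest = hashlib.sha256()
        with open(self.path, "rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                digest.update(chunk)
        return digest.hexdigest()
```
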
1426379903 PR_kwDOBm6k_c5BtJNn 1870 don't use immutable=1, only mode=ro fgregg 536941 open 0     7 2022-10-27T23:33:04Z 2023-10-03T19:12:37Z   CONTRIBUTOR simonw/datasette/pulls/1870 Opening db files in immutable mode sometimes leads to the file being mutated, which causes duplication in the docker image layers: see #1836, #1480 That this happens in "immutable" mode is surprising, because the sqlite docs say that setting this should open the database as read only. https://www.sqlite.org/c3ref/open.html > immutable: The immutable parameter is a boolean query parameter that indicates that the database file is stored on read-only media. When immutable is set, SQLite assumes that the database file cannot be changed, even by a process with higher privilege, and so the database is opened read-only and all locking and change detection is disabled. Caution: Setting the immutable property on a database file that does in fact change can result in incorrect query results and/or [SQLITE_CORRUPT](https://www.sqlite.org/rescode.html#corrupt) errors. See also: [SQLITE_IOCAP_IMMUTABLE](https://www.sqlite.org/c3ref/c_iocap_atomic.html). Perhaps this is a bug in sqlite? <!-- readthedocs-preview datasette start --> ---- :books: Documentation preview :books:: https://datasette--1870.org.readthedocs.build/en/1870/ <!-- readthedocs-preview datasette end --> datasette 107914493 pull     {"url": "https://api.github.com/repos/simonw/datasette/issues/1870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} 0  
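
The two URI parameters are easy to compare from Python, since `sqlite3.connect` accepts SQLite URI filenames (`data.db` is a placeholder and must exist):

```python
import sqlite3

# mode=ro: read-only for this connection, but SQLite still does locking and
# change detection, so a file modified by another process stays safe to read.
ro = sqlite3.connect("file:data.db?mode=ro", uri=True)

# immutable=1: SQLite assumes the file can never change; locking and change
# detection are skipped entirely. If the file does change, you risk
# incorrect results or SQLITE_CORRUPT errors.
imm = sqlite3.connect("file:data.db?immutable=1", uri=True)
```
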
1448143294 I_kwDOBm6k_c5WUOm- 1890 Autocomplete text entry for filter values that correspond to facets fgregg 536941 closed 0     16 2022-11-14T14:11:31Z 2022-11-17T00:47:36Z 2022-11-16T03:23:01Z CONTRIBUTOR   datasette allows users to enter the value for named parameters into a free-text form field. I think it would add a lot of usability if the form field could be a drop-down of options when the query value corresponds to an already-faceted column. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/1890/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
1509783085 I_kwDOBm6k_c5Z_XYt 1969 sql-formatter javascript is no longer working with CloudFlare Rocket Loader fgregg 536941 open 0     0 2022-12-23T21:14:06Z 2023-01-10T01:56:33Z   CONTRIBUTOR   This is probably not a bug with datasette, but I thought you might want to know, @simonw. I noticed today that my CloudFlare-proxied datasette instance lost the "Format SQL" option. I'm pretty sure it was there last week. In the CloudFlare settings, if I turn off [Rocket Loader](https://developers.cloudflare.com/fundamentals/speed/rocket-loader/), I get the "Format SQL" option back. Rocket Loader works by asynchronously loading the javascript, so maybe there was a recent change that doesn't play well with the async loading? I'm up to date with https://github.com/simonw/datasette/commit/e03aed00026cc2e59c09ca41f69a247e1a85cc89 datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/1969/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
1555701851 PR_kwDOBm6k_c5IdsD7 2003 Show referring tables and rows when the referring foreign key is compound fgregg 536941 open 0     3 2023-01-24T21:31:31Z 2023-01-25T18:44:42Z   CONTRIBUTOR simonw/datasette/pulls/2003 sqlite foreign keys can be compound, but that is not as well supported by datasette as single-column foreign keys. In particular: 1. in a table view, there is no link from the row to the referenced row if the foreign key is compound 2. in a row view, there is no listing of tables and rows that refer to the focal row if those referencing foreign keys are compound. Both of these issues are discussed in #1099. This PR only fixes the second one, because it's not clear what the right UX is for the first issue. ![Screenshot 2023-01-24 at 19-47-40 nlrb bargaining_unit](https://user-images.githubusercontent.com/536941/214454749-d53deead-4151-4329-a5d4-8a7a454de7d3.png) Some things that might not be desirable about this approach: 1. it changes the external API, by changing `column` => `columns` and `other_column` => `other_columns` (see inline comment for more discussion). 2. There are various places where the plural foreign keys have to be checked for length and discarded or transformed to singular. datasette 107914493 pull     {"url": "https://api.github.com/repos/simonw/datasette/issues/2003/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} 0
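
For reference, this is what a compound foreign key looks like at the SQL level — the case that single-column-oriented code paths miss. A minimal example (table and column names are placeholders):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("pragma foreign_keys = on")
conn.executescript("""
    create table employer (state text, ein text, name text,
                           primary key (state, ein));
    -- The foreign key spans two columns, so fully describing it needs
    -- lists of columns ("columns" / "other_columns"), not scalars.
    create table filing (id integer primary key, state text, ein text,
                         foreign key (state, ein) references employer (state, ein));
""")
for row in conn.execute("pragma foreign_key_list(filing)"):
    print(row)  # one row per constituent column, sharing the same fk id
```
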
1560651350 I_kwDOCGYnMM5dBaZW 523 Feature request: trim all leading and trailing white space for all columns for all tables in a database fgregg 536941 open 0     1 2023-01-28T02:40:10Z 2023-01-28T02:41:14Z   CONTRIBUTOR   It's pretty common that I need to trim leading or trailing white space from lots of columns in a database as part of an initial ETL. I use the following recipe a lot, and it would be great to include this functionality in sqlite-utils. `trimify.sql` ```sql select 'select group_concat(''update [' || name || '] set ['' || name || ''] = trim(['' || name || ''])'', ''; '') || ''; '' as sql_to_run from pragma_table_info('''||name||''');' from sqlite_schema; ``` then something like: ```bash sqlite3 example.db < scripts/trimify.sql > table_trim.sql && \ sqlite3 example.db < table_trim.sql > trim.sql && \ sqlite3 example.db < trim.sql ``` sqlite-utils 140912432 issue     {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/523/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
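
The same recipe is only a few lines through Python's sqlite3 module, which avoids the nested-quoting gymnastics; a sketch (database path is a placeholder):

```python
import sqlite3

conn = sqlite3.connect("example.db")
tables = [r[0] for r in conn.execute(
    # sqlite_master for portability; skip SQLite's internal tables.
    "select name from sqlite_master "
    "where type = 'table' and name not like 'sqlite_%'"
)]
for table in tables:
    columns = [r[1] for r in conn.execute(f"pragma table_info([{table}])")]
    for column in columns:
        # Trim leading/trailing whitespace in place, column by column.
        conn.execute(f"update [{table}] set [{column}] = trim([{column}])")
conn.commit()
```
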
1595340692 I_kwDOCGYnMM5fFveU 530 add ability to configure "on delete" and "on update" attributes of foreign keys fgregg 536941 open 0     2 2023-02-22T15:44:14Z 2023-05-08T20:39:01Z   CONTRIBUTOR   sqlite supports these, and it would be quite nice to be able to add them with sqlite-utils. https://www.sqlite.org/foreignkeys.html#fk_actions sqlite-utils 140912432 issue     {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/530/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
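
For context, this is the SQL such an option would need to emit; a minimal demonstration of `ON DELETE CASCADE` (table names are placeholders):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("pragma foreign_keys = on")  # off by default in SQLite
conn.executescript("""
    create table author (id integer primary key, name text);
    create table book (
        id integer primary key,
        author_id integer references author(id)
            on delete cascade on update cascade
    );
    insert into author values (1, 'x');
    insert into book values (1, 1);
""")
conn.execute("delete from author where id = 1")
print(conn.execute("select count(*) from book").fetchone())  # (0,) -- cascaded
```
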
1822813627 I_kwDOBm6k_c5spe27 2108 some (many?) SQL syntax errors are not throwing errors with a .csv endpoint fgregg 536941 open 0     0 2023-07-26T16:57:45Z 2023-07-26T16:58:07Z   CONTRIBUTOR   Here's a CTE query that should always fail with a syntax error: ```sql with foo as (nonsense) select * from foo; ``` When we make this query against the default endpoint, we do indeed get a 400 status code and the problem is returned to the user: https://global-power-plants.datasettes.com/global-power-plants?sql=with+foo+as+%28nonsense%29+select+*+from+foo%3B But if we use the csv endpoint, we get a 200 status code and no indication of a problem: https://global-power-plants.datasettes.com/global-power-plants.csv?sql=with+foo+as+%28nonsense%29+select+*+from+foo%3B Same with this bad sql: ```sql select a, from foo; ``` https://global-power-plants.datasettes.com/global-power-plants?sql=select%0D%0A++a%2C%0D%0Afrom%0D%0A++foo%3B vs https://global-power-plants.datasettes.com/global-power-plants.csv?sql=select%0D%0A++a%2C%0D%0Afrom%0D%0A++foo%3B But datasette catches this bad sql at both endpoints: ```sql slect a from foo; ``` https://global-power-plants.datasettes.com/global-power-plants?sql=slect%0D%0A++a%0D%0Afrom%0D%0A++foo%3B https://global-power-plants.datasettes.com/global-power-plants.csv?sql=slect%0D%0A++a%0D%0Afrom%0D%0A++foo%3B datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/2108/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
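
The discrepancy is easy to verify programmatically; a sketch comparing the status codes of the two endpoints, using the URLs from the issue:

```python
import urllib.error
import urllib.parse
import urllib.request

BASE = "https://global-power-plants.datasettes.com/global-power-plants"
SQL = "with foo as (nonsense) select * from foo;"

def status(url):
    try:
        return urllib.request.urlopen(url).status
    except urllib.error.HTTPError as err:
        return err.code

query = urllib.parse.urlencode({"sql": SQL})
print(status(f"{BASE}?{query}"))      # 400 -- the error is reported
print(status(f"{BASE}.csv?{query}"))  # 200 -- the error is swallowed
```
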
2028698018 I_kwDOBm6k_c5463mi 2213 feature request: gzip compression of database downloads fgregg 536941 open 0     1 2023-12-06T14:35:03Z 2023-12-06T15:05:46Z   CONTRIBUTOR   At the bottom of database pages, datasette gives users the opportunity to download the underlying sqlite database. It would be great if that could be served gzip compressed. This is similar to #1213, but for me, I don't need datasette to compress HTML and JSON because my CDN layer does it for me; however, Cloudflare, at least, will not compress a mimetype of "application" (see the list of compressed mimetypes: https://developers.cloudflare.com/speed/optimization/content/brotli/content-compression/). datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/2213/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
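
At the HTTP level the request amounts to serving the file with `Content-Encoding: gzip`; a standard-library sketch of pre-compressing a database file (path is a placeholder), which a server or CDN could then serve as-is:

```python
import gzip
import shutil

# Pre-compress the database once; a server (or CDN) can then serve
# mydb.db.gz with "Content-Encoding: gzip". SQLite database files
# typically compress very well.
with open("mydb.db", "rb") as src, gzip.open("mydb.db.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)
```
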


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
, [active_lock_reason] TEXT, [performed_via_github_app] TEXT, [reactions] TEXT, [draft] INTEGER, [state_reason] TEXT);
CREATE INDEX [idx_issues_repo]
                ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
                ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
                ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
                ON [issues] ([user]);