issues
351 rows where comments = 3
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | pull_request | body | repo | type | active_lock_reason | performed_via_github_app | reactions | draft | state_reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
267516066 | MDU6SXNzdWUyNjc1MTYwNjY= | 5 | Implement sensible query pagination | simonw 9599 | closed | 0 | Ship first public release 2857392 | 3 | 2017-10-23T01:16:00Z | 2017-11-10T20:41:39Z | 2017-11-10T20:41:39Z | OWNER | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/5/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
267861210 | MDU6SXNzdWUyNjc4NjEyMTA= | 26 | Command line tool for uploading one or more DBs to Now | simonw 9599 | closed | 0 | Ship first public release 2857392 | 3 | 2017-10-24T00:43:10Z | 2017-11-11T07:25:30Z | 2017-11-11T07:25:30Z | OWNER | Uploading files appears to be undocumented, but I found it in their code here: https://github.com/zeit/now-cli/blob/0ca7d1fe44ebdf460b64fdc38ba543b8e295ac40/src/providers/sh/util/index.js#L291 | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/26/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
267886330 | MDU6SXNzdWUyNjc4ODYzMzA= | 27 | Ability to plot a simple graph | simonw 9599 | closed | 0 | 3 | 2017-10-24T03:34:59Z | 2018-07-10T17:52:41Z | 2018-07-10T17:52:41Z | OWNER | Might be as simple as: pick the type of chart (bar, line) and then pick the column for the X axis and the column for the Y axis. Maybe also allow a pie chart. It’s up to the user to come up with SQL that gets the right values. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/27/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
268262480 | MDU6SXNzdWUyNjgyNjI0ODA= | 36 | date, year, month and day querystring lookups | simonw 9599 | closed | 0 | 3 | 2017-10-25T04:23:45Z | 2018-05-28T17:30:53Z | 2018-05-28T17:30:53Z | OWNER | - [ ] `?timestamp___date=2017-07-17` - return every item where the timestamp falls on that date - [ ] `?timestamp___year=2017` - return every item where the timestamp falls within 2017 - [ ] `?timestamp___month=1` - return every item where the month component is January - [ ] `?timestamp___day=10` - return every item where the day-of-the-month component is 10 Follow on from #23 | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/36/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
273054652 | MDU6SXNzdWUyNzMwNTQ2NTI= | 53 | Implement a better database index page | simonw 9599 | closed | 0 | Ship first public release 2857392 | 3 | 2017-11-10T20:47:36Z | 2017-11-12T21:19:33Z | 2017-11-12T01:50:27Z | OWNER | This view isn't great. I should do a better job of separating out tables from views and indexes, showing the count of rows in each table, and maybe move the SQL to the individual table pages. <img width="871" alt="flights" src="https://user-images.githubusercontent.com/9599/32423242-1b4458ce-c25a-11e7-910f-2dc1de909b8f.png"> | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/53/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
273569068 | MDU6SXNzdWUyNzM1NjkwNjg= | 79 | Add more detailed API documentation to the README | simonw 9599 | closed | 0 | 3 | 2017-11-13T20:36:21Z | 2018-05-28T17:24:48Z | 2018-05-28T17:24:48Z | OWNER | Need to document: - [ ] The ?column__gt=4 style filter arguments for tables - [ ] The ?sql= API, and how named parameters work - [ ] How API pagination works - [ ] How redirects and cache headers work | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/79/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
274001453 | MDU6SXNzdWUyNzQwMDE0NTM= | 96 | UI for editing named parameters | simonw 9599 | closed | 0 | 3 | 2017-11-15T01:19:21Z | 2017-11-16T01:45:51Z | 2017-11-16T01:33:38Z | OWNER | On any page displaying a custom query that includes named parameters, we should show HTML form fields for editing those parameters. Eg the breed parameter on https://australian-dogs.now.sh/australian-dogs-3ba9628?sql=select+name%2C+count%28*%29+as+n+from+%28%0D%0A%0D%0Aselect+upper%28%22Animal+name%22%29+as+name+from+%5BAdelaide-City-Council-dog-registrations-2013%5D+where+Breed+like+%3Abreed%0D%0A%0D%0Aunion+all%0D%0A%0D%0Aselect+upper%28Animal_Name%29+as+name+from+%5BAdelaide-City-Council-dog-registrations-2014%5D+where+Breed_Description+like+%3Abreed%0D%0A%0D%0Aunion+all+%0D%0A%0D%0Aselect+upper%28Animal_Name%29+as+name+from+%5BAdelaide-City-Council-dog-registrations-2015%5D+where+Breed_Description+like+%3Abreed%0D%0A%0D%0Aunion+all%0D%0A%0D%0Aselect+upper%28%22AnimalName%22%29+as+name+from+%5BCity-of-Port-Adelaide-Enfield-Dog_Registrations_2016%5D+where+AnimalBreed+like+%3Abreed%0D%0A%0D%0Aunion+all%0D%0A%0D%0Aselect+upper%28%22Animal+Name%22%29+as+name+from+%5BMitcham-dog-registrations-2015%5D+where+Breed+like+%3Abreed%0D%0A%0D%0Aunion+all%0D%0A%0D%0Aselect+upper%28%22DOG_NAME%22%29+as+name+from+%5Bburnside-dog-registrations-2015%5D+where+DOG_BREED+like+%3Abreed%0D%0A%0D%0Aunion+all+%0D%0A%0D%0Aselect+upper%28%22Animal_Name%22%29+as+name+from+%5Bcity-of-playford-2015-dog-registration%5D+where+Breed_Description+like+%3Abreed%0D%0A%0D%0Aunion+all%0D%0A%0D%0Aselect+upper%28%22Animal+Name%22%29+as+name+from+%5Bcity-of-prospect-dog-registration-details-2016%5D+where%22Breed+Description%22+like+%3Abreed%0D%0A%0D%0A%29+group+by+name+order+by+n+desc%3B&breed=pug | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/96/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
274022950 | MDU6SXNzdWUyNzQwMjI5NTA= | 97 | Link to JSON for the list of tables | simonw 9599 | closed | 0 | 3 | 2017-11-15T03:29:05Z | 2018-05-29T18:51:35Z | 2018-05-28T20:57:21Z | OWNER | https://twitter.com/yschimke/status/930606210855854080 | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/97/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
275166669 | MDU6SXNzdWUyNzUxNjY2Njk= | 131 | UI support for running FTS searches | simonw 9599 | closed | 0 | 3 | 2017-11-19T15:16:20Z | 2017-11-19T17:18:05Z | 2017-11-19T17:00:12Z | OWNER | Here's an example query that searches all FTS indexed columns in a table: https://sf-trees-search.now.sh/sf-trees-search-a899b92?sql=select+*+from+Street_Tree_List+where+rowid+in+%28select+rowid+from+Street_Tree_List_fts+where+Street_Tree_List_fts+match+%27grove+london+dpw%27%29%0D%0A And here's a query that searches a specific column: https://sf-trees-search.now.sh/sf-trees-search-a899b92?sql=select+*+from+Street_Tree_List+where+rowid+in+%28select+rowid+from+Street_Tree_List_fts+where+qSpecies+match+%27london%27%29%0D%0A If we detect that a table has FTS enabled (which we can do by looking for it as a content table reference in another FTS table's create definition) we should add a search box to the table page which constructs this query - maybe using `?_search=XXX` in the query string? <s>To support search against specified columns, we can do `?_search__qSpecies=London`.</s> - not necessary, see comment below. - [x] Detect if a table has a FTS index defined against it as a content= parameter - [x] Decide what to do if there is more than one FTS index (maybe just pick the first one?) - [x] Add the `?_search=` query string argument - [x] Add the UI | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/131/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
275176006 | MDU6SXNzdWUyNzUxNzYwMDY= | 133 | If view is filtered, search should apply within those filtered rows | simonw 9599 | closed | 0 | Foreign key edition 2919870 | 3 | 2017-11-19T17:25:36Z | 2017-11-24T22:30:32Z | 2017-11-24T22:30:15Z | OWNER | Eg on https://sf-trees.now.sh/sf-trees-ebc2ad9/Street_Tree_List?qSpecies=1 | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/133/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
275179724 | MDU6SXNzdWUyNzUxNzk3MjQ= | 135 | ?_search=x should work if used directly against a FTS virtual table | simonw 9599 | closed | 0 | Custom templates edition 2949431 | 3 | 2017-11-19T18:17:53Z | 2017-12-07T04:54:41Z | 2017-12-07T04:54:41Z | OWNER | e.g. https://sf-trees.now.sh/sf-trees-ebc2ad9/Street_Tree_List_fts?_search=grove should work | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/135/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
275917760 | MDU6SXNzdWUyNzU5MTc3NjA= | 142 | Show extra instructions with the interrupted error | simonw 9599 | closed | 0 | 3 | 2017-11-22T01:44:29Z | 2018-05-28T21:25:06Z | 2018-05-28T21:24:35Z | OWNER | When you are using Datasette locally for ad-hoc analysis it can be frustrating to hit the time limit. If you start it with the correct command line arguments you can disable that time limit. So how about we tell you how to do that anytime you hit the interrupted error, provided you are accessing it from localhost. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/142/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
276091279 | MDU6SXNzdWUyNzYwOTEyNzk= | 144 | apsw as alternative sqlite3 binding (for full text search) | mhalle 649467 | closed | 0 | 3 | 2017-11-22T14:40:39Z | 2018-05-28T21:29:42Z | 2018-05-28T21:29:42Z | NONE | Hey there, Have you considered providing apsw support as an alternative to stock python sqlite3? I use apsw because it keeps up with sqlite3 and is straightforward to bring in extensions like FTS5. FTS really accelerates the kind of searching often done by web clients. I may be able to help (it shouldn't be much code), but there are a couple of stylistic questions that come up when supporting an optional package. Also, apsw is tricky in that it doesn't have a pypi package (author says limitations in providing options to setup.py). | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/144/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
276704327 | MDU6SXNzdWUyNzY3MDQzMjc= | 150 | _group_count= feature improvements | simonw 9599 | closed | 0 | 3 | 2017-11-24T22:06:18Z | 2018-05-28T16:41:28Z | 2018-05-28T16:41:28Z | OWNER | - [ ] The "apply filters" form should keep you on the _group_count= page - [ ] Foreign key references should be expanded - [ ] Page title should reflect the view you are on | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/150/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
276873891 | MDU6SXNzdWUyNzY4NzM4OTE= | 154 | Datasette CSS should include content hash in the URL | simonw 9599 | closed | 0 | Custom templates edition 2949431 | 3 | 2017-11-27T00:57:36Z | 2017-12-09T03:10:23Z | 2017-12-09T03:10:22Z | OWNER | When I deployed the latest version of datasette to https://fivethirtyeight.datasettes.com/ I noticed I was getting served stale CSS since it had been cached. Including the SHA of the contents in its URL should fix that. I can calculate this on server start. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/154/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
280013907 | MDU6SXNzdWUyODAwMTM5MDc= | 164 | datasette skeleton command for kick-starting database and table metadata | simonw 9599 | closed | 0 | Custom templates edition 2949431 | 3 | 2017-12-07T06:13:28Z | 2021-03-23T02:45:12Z | 2017-12-07T06:20:45Z | OWNER | Generates an example `metadata.json` file populated with all of the databases and tables inspected from the specified databases. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/164/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
280745470 | MDU6SXNzdWUyODA3NDU0NzA= | 170 | Custom template for named canned query | simonw 9599 | closed | 0 | Custom templates edition 2949431 | 3 | 2017-12-09T19:07:51Z | 2017-12-09T21:35:30Z | 2017-12-09T21:34:52Z | OWNER | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/170/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
306811513 | MDU6SXNzdWUzMDY4MTE1MTM= | 186 | proposal new option to disable user agents cache | stefanocudini 47107 | closed | 0 | 3 | 2018-03-20T10:42:20Z | 2018-03-21T09:07:22Z | 2018-03-21T01:28:31Z | NONE | I think it would be very useful for debugging to have an option for adding headers to HTTP replies ``` Cache-Control: no-cache ``` especially in the HTML output | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/186/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
310882100 | MDU6SXNzdWUzMTA4ODIxMDA= | 193 | Cleaner mechanism for handling custom errors | simonw 9599 | closed | 0 | 3 | 2018-04-03T15:19:13Z | 2018-04-13T18:18:59Z | 2018-04-13T18:18:59Z | OWNER | This code is pretty messy: https://github.com/simonw/datasette/blob/0abd3abacb309a2bd5913a7a2df4e9256585b1bb/datasette/app.py#L245-L265 Instead, it would be nice if I could raise an exception that would be converted into the appropriate JSON or HTML error message, with a corresponding HTTP code. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/193/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
312313496 | MDU6SXNzdWUzMTIzMTM0OTY= | 195 | Run pks_for_table in inspect, executing once at build time rather than constantly | simonw 9599 | closed | 0 | 3 | 2018-04-08T15:12:40Z | 2018-04-10T00:54:43Z | 2018-04-10T00:54:43Z | OWNER | Right now several Datasette views call the `await self.pks_for_table(...)` method to figure out what primary keys are set for a specific table. This executes a `PRAGMA table_info` SQL query. It would be faster and more efficient to execute this query for each table as part of the `inspect()` method. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/195/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
313494458 | MDExOlB1bGxSZXF1ZXN0MTgxMDMzMDI0 | 200 | Hide Spatialite system tables | russss 45057 | closed | 0 | 3 | 2018-04-11T21:26:58Z | 2018-04-12T21:34:48Z | 2018-04-12T21:34:48Z | CONTRIBUTOR | simonw/datasette/pulls/200 | They were getting on my nerves. | datasette 107914493 | pull | {"url": "https://api.github.com/repos/simonw/datasette/issues/200/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 | |||||
314319372 | MDExOlB1bGxSZXF1ZXN0MTgxNjQyMTE0 | 205 | Support filtering with units and more | russss 45057 | closed | 0 | 3 | 2018-04-14T10:47:51Z | 2018-04-14T15:24:04Z | 2018-04-14T15:24:04Z | CONTRIBUTOR | simonw/datasette/pulls/205 | The first commit: * Adds units to exported JSON * Adds units key to metadata skeleton * Adds some docs for units The second commit adds filtering by units by the first method I mentioned in #203:  [Try it here](https://wtr-api.herokuapp.com/wtr-663ea99/license_frequency?frequency__gt=50GHz&height__lt=50ft). I think it integrates pretty neatly. The third commit adds support for registering custom units with Pint from metadata.json. Probably pretty niche, but I need decibels! | datasette 107914493 | pull | {"url": "https://api.github.com/repos/simonw/datasette/issues/205/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 | |||||
315142414 | MDU6SXNzdWUzMTUxNDI0MTQ= | 221 | Allow plugins to add new cli sub commands | simonw 9599 | closed | 0 | 3 | 2018-04-17T16:40:13Z | 2021-01-04T20:12:14Z | 2021-01-04T20:12:14Z | OWNER | I could then test this out by having https://github.com/simonw/csvs-to-sqlite register itself as a plugin | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/221/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
315327860 | MDU6SXNzdWUzMTUzMjc4NjA= | 223 | datasette publish --install=name-of-plugin | simonw 9599 | closed | 0 | 3 | 2018-04-18T04:33:59Z | 2018-04-18T14:56:17Z | 2018-04-18T14:56:17Z | OWNER | Mechanism for causing datasette publish and datasette package to install one or more additional plugins using `pip install` - refs #14 | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/223/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
316526433 | MDU6SXNzdWUzMTY1MjY0MzM= | 234 | label_column option in metadata.json | simonw 9599 | closed | 0 | 3 | 2018-04-21T21:19:08Z | 2018-04-22T20:47:12Z | 2018-04-22T20:47:12Z | OWNER | Currently the column used for displaying a foreign key relationship is automatically detected by `inspect()` by looking for tables that have a primary key column and one other column. This doesn't work for tables with more than two columns. Let's allow the table section in `metadata.json` to optionally define a `label_column` which, if present, will be used for those displays. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/234/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
317714268 | MDU6SXNzdWUzMTc3MTQyNjg= | 238 | External metadata.json | simonw 9599 | closed | 0 | 3 | 2018-04-25T17:02:30Z | 2019-06-24T06:52:55Z | 2019-06-24T06:52:45Z | OWNER | A frustration I'm having with https://register-of-members-interests.datasettes.com/ is that I keep coming up with new canned queries but I don't want to redeploy the whole thing just to add them to `metadata.json` Maybe Datasette could optionally take a `--metadata-url` option which causes it to load from a URL instead and occasionally check for updates. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/238/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
322741659 | MDExOlB1bGxSZXF1ZXN0MTg3NzcwMzQ1 | 258 | Add new metadata key persistent_urls which removes the hash from all database urls | philroche 247131 | closed | 0 | 3 | 2018-05-14T09:39:18Z | 2018-05-21T07:38:15Z | 2018-05-21T07:38:15Z | NONE | simonw/datasette/pulls/258 | Add new metadata key "persistent_urls" which removes the hash from all database urls when set to "true" This PR is just to gauge if this, or something like it, is something you would consider merging? I understand the reason why the substring of the hash is included in the url but there are some use cases where the urls should persist across deployments. For bookmarks for example or for scripts that use the JSON API. This is the initial commit for this feature. Tests and documentation updates to follow. | datasette 107914493 | pull | {"url": "https://api.github.com/repos/simonw/datasette/issues/258/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 | |||||
323671577 | MDU6SXNzdWUzMjM2NzE1Nzc= | 263 | Facets should not execute for ?shape=array|object | simonw 9599 | closed | 0 | 3 | 2018-05-16T15:26:13Z | 2021-06-02T02:54:34Z | 2021-06-02T02:54:34Z | OWNER | Split off from #255 - there's no point executing the facet SQL for the `?_shape=array` and `?_shape=object` API responses. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/263/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
323716411 | MDU6SXNzdWUzMjM3MTY0MTE= | 267 | Documentation for URL hashing, redirects and cache policy | simonw 9599 | closed | 0 | 3 | 2018-05-16T17:29:01Z | 2019-06-24T06:41:02Z | 2019-06-24T06:41:02Z | OWNER | See my comments on #258 for a starting point | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
324720095 | MDU6SXNzdWUzMjQ3MjAwOTU= | 275 | "config" section in metadata.json (root, database and table level) | simonw 9599 | closed | 0 | 3 | 2018-05-20T16:02:28Z | 2023-08-23T01:28:37Z | 2023-08-23T01:28:37Z | OWNER | Split off from #274 Metadata should have an optional `"config"` section at root, table or database level. The TableView and RowView and DatabaseView and BaseView classes could all have a `.config("key")` method which knows how to resolve the hierarchy of configs. This will allow individual tables (or databases) to set their own config settings for things like `sql_time_limit_ms` | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/275/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
325294102 | MDU6SXNzdWUzMjUyOTQxMDI= | 278 | Build smallest possible Docker image with Datasette plus recent SQLite (with json1) plus Spatialite 4.4.0 | simonw 9599 | closed | 0 | 3 | 2018-05-22T13:28:40Z | 2018-05-23T17:43:36Z | 2018-05-23T17:43:36Z | OWNER | A Dockerfile that does the following: * Bundles Datasette master * Python 3.6 most recent version (or 3.7 if it has been released) * SQLite 3.23.1 (or most recent release) such that "import sqlite3" in Python gets that version. Ideally with the json1 module baked in by default, but having it loadable as an optional module is fine too * SpatiaLite 4.4.0-RC0 (or most recent version) such that it can be loaded as an optional module * Uses multi-stage builds to stay as small as possible Note that the current "release" of SpatiaLite is 4.3.0 which is missing key features like https://www.gaia-gis.it/fossil/libspatialite/wiki?name=KNN - 4.4.0 probably needs to be compiled from source. I don't know the best way to get a current SQLite version bundled for Python 3. Maybe https://github.com/coleifer/pysqlite3 ? | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/278/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
326767626 | MDU6SXNzdWUzMjY3Njc2MjY= | 288 | Support multiple filters of the same type | simonw 9599 | closed | 0 | 3 | 2018-05-26T21:13:12Z | 2019-04-15T23:45:04Z | 2019-04-15T23:44:26Z | OWNER | This should work for example: https://fivethirtyeight.datasettes.com/fivethirtyeight-5de27e3/biopics%2Fbiopics?year_release__not=2014&year_release__not=2015 | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
326768188 | MDU6SXNzdWUzMjY3NjgxODg= | 289 | ?_ttl= parameter to control caching | simonw 9599 | closed | 0 | 3 | 2018-05-26T21:22:55Z | 2018-05-26T22:22:47Z | 2018-05-26T22:17:48Z | OWNER | This would allow clients to specify the max-age caching header that should be returned with the query. Most importantly, this will allow caching to be completely turned off for specific queries using `?_ttl=0`. Sending 0 should cause a `Cache-Control: no-cache` header to be returned. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/289/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
326783670 | MDU6SXNzdWUzMjY3ODM2NzA= | 291 | Avoid plugins accidentally loading dependencies twice | simonw 9599 | closed | 0 | 3 | 2018-05-27T03:15:21Z | 2020-09-30T20:36:12Z | 2018-05-28T20:42:02Z | OWNER | Plugins that include JavaScript files risk loading the same code twice. In particular: I want to build a second plugin that uses the Leaflet mapping library (the first was [datasette-cluster-map](https://pypi.org/project/datasette-cluster-map/)). But I don't want the two plugins to load duplicate copies of Leaflet. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
327395270 | MDU6SXNzdWUzMjczOTUyNzA= | 296 | Per-database and per-table /-/ URL namespace | simonw 9599 | open | 0 | 3 | 2018-05-29T16:23:13Z | 2019-06-28T16:46:34Z | OWNER | Initially this will be for subsets of `/-/inspect` and `/-/metadata` but it will also give us a URL namespace for future features like `/-/facet` (expanded list of a specific facet, linked to from `...`) and `/-/graph` To start: * `/dbname/-/inspect` * `/dbname/-/metadata` * `/dbname/tablename/-/inspect` * `/dbname/tablename/-/metadata` This means we will no longer allow databases or tables to have the name `"-"` - I think that's OK We will continue to support rows with a primary key of `"-"` at the following URL: * `/dbname/tablename/-` | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | ||||||||
328229224 | MDU6SXNzdWUzMjgyMjkyMjQ= | 304 | Ability to configure SQLite cache_size | simonw 9599 | closed | 0 | 3 | 2018-05-31T17:28:07Z | 2018-06-04T16:13:32Z | 2018-06-04T16:03:19Z | OWNER | See https://www.sqlite.org/pragma.html#pragma_cache_size Let's call the config setting `cache_size_kb` to emphasize that we're using the negative option. Note this warning: perhaps we should raise an error if you try to use this setting against a SQLite version prior to 3.7.10 > If the argument N is positive then the suggested cache size is set to N. If the argument N is negative, then the number of cache pages is adjusted to use approximately abs(N*1024) bytes of memory. Backwards compatibility note: The behavior of cache_size with a negative N was different in prior to version 3.7.10 (2012-01-16). In version 3.7.9 and earlier, the number of pages in the cache was set to the absolute value of N. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
340039409 | MDU6SXNzdWUzNDAwMzk0MDk= | 336 | Ensure --help examples in docs are always up to date | simonw 9599 | closed | 0 | 3 | 2018-07-10T23:20:01Z | 2018-07-24T16:01:29Z | 2018-07-24T16:01:29Z | OWNER | Ideally I would automatically generate the --help output shown in our docs, but I don't think I can get that working with readthedocs. Instead, I'm going to add a unit test that checks that those extracts in the documentation match the current output of the --help command. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/336/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
351017129 | MDU6SXNzdWUzNTEwMTcxMjk= | 360 | Use pysqlite3 if available | simonw 9599 | closed | 0 | 3 | 2018-08-16T00:50:45Z | 2018-08-16T01:50:42Z | 2018-08-16T00:58:58Z | OWNER | [pysqlite3](https://github.com/coleifer/pysqlite3) is a way to provide access to a more recent version of SQLite than the standard library `sqlite3` module (which tends to use the version provided with the operating system - which on e.g. the Travis CI Ubuntu build environment can be as old as 3.8.0). | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/360/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
377156339 | MDU6SXNzdWUzNzcxNTYzMzk= | 371 | datasette publish digitalocean plugin | psychemedia 82988 | closed | 0 | 3 | 2018-11-04T14:07:41Z | 2021-01-04T20:14:28Z | 2021-01-04T20:14:28Z | CONTRIBUTOR | Provide support for launching `datasette` on Digital Ocean. Example: [Deploy Docker containers into Digital Ocean](https://blog.machinebox.io/deploy-machine-box-in-digital-ocean-385265fbeafd). Digital Ocean also has a preconfigured VM running Docker that can be launched from the command line via the Digital Ocean API: [Docker One-Click Application](https://www.digitalocean.com/docs/one-clicks/docker/). Related: - Launching containers in Digital Ocean servers running docker: [How To Provision and Manage Remote Docker Hosts with Docker Machine on Ubuntu 16.04](https://www.digitalocean.com/community/tutorials/how-to-provision-and-manage-remote-docker-hosts-with-docker-machine-on-ubuntu-16-04) - [How To Use Doctl, the Official DigitalOcean Command-Line Client](https://www.digitalocean.com/community/tutorials/how-to-use-doctl-the-official-digitalocean-command-line-client) | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/371/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
397129564 | MDU6SXNzdWUzOTcxMjk1NjQ= | 397 | Update official datasetteproject/datasette Docker container to SQLite 3.26.0 | claes 43564 | closed | 0 | 3 | 2019-01-08T22:51:50Z | 2019-01-11T01:25:33Z | 2019-01-11T00:56:18Z | NONE | I try to start datasette on a database that contains the below view It fails in a way that makes me think it does not support the window functions SQL syntax. ``` create view general_ledger as select transactions.account_number, strftime("%Y-%m-%d", verifications.verification_date) as verification_date, verifications.verification_number, verifications.verification_text, case when transactions.centi_amount >= 0 and verifications.verification_number > 0 then printf("%.2f", (transactions.centi_amount/100.0)) end as debit, case when transactions.centi_amount <= 0 and verifications.verification_number > 0 then printf("%.2f", (transactions.centi_amount/100.0)) end as credit, printf("%.2f", sum(transactions.centi_amount) over (partition by transactions.account_number order by verifications.verification_number range between unbounded preceding and current row)/100.0) from verifications inner join transactions on transactions.verification_id = verifications.id order by transactions.account_number, verifications.verification_number; ``` ``` docker run -p 8001:8001 -v `pwd`:/mnt datasetteproject/datasette datasette -p 8001 -h 0.0.0.0 /mnt/ledger.db Serve! files=('/mnt/ledger.db',) on port 8001 Traceback (most recent call last): File "/usr/local/bin/datasette", line 11, in <module> sys.exit(cli()) File "/usr/local/lib/python3.6/site-packages/click/core.py", line 722, in __call__ return self.main(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/click/core.py", line 697, in main rv = self.invoke(ctx) File "/usr/local/lib/python3.6/site-packages/click/core.py", line 1066, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/usr/local/lib/python3.6/site-packages/click/core.py", line 895, in invoke return ctx.invoke(self.callback, **ctx.params) File "/usr/local/lib/python3.6/site-packages/click/core.py", line 535, in invoke return callback(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/datase… | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/397/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
398011658 | MDU6SXNzdWUzOTgwMTE2NTg= | 398 | Ensure downloading a 100+MB SQLite database file works | simonw 9599 | closed | 0 | Datasette 1.0 3268330 | 3 | 2019-01-10T20:57:52Z | 2020-12-05T19:36:27Z | 2020-12-05T19:36:27Z | OWNER | I've seen attempted downloads of large files fail after about ten seconds. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/398/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
400229984 | MDU6SXNzdWU0MDAyMjk5ODQ= | 401 | How to pass configuration to plugins? | dazzag24 1055831 | closed | 0 | 3 | 2019-01-17T11:20:41Z | 2019-01-18T11:48:13Z | 2019-01-18T06:49:07Z | NONE | Hi, Firstly, thanks for your work on datasette, it is a hugely useful tool! I've been working on a fork [https://github.com/dazzag24/datasette-cluster-map] of datasette-cluster-map to allow the tileserver to be easily switched. Primarily because the tiles being served in the current version use localised text for labels and I'd like to have English used for these names instead. It uses http://leaflet-extras.github.io/leaflet-providers/preview/ to allow you to simply set the tile provider using a call like so: ``` let tiles = L.tileLayer.provider('Esri.WorldTopoMap'); ``` instead of the current: ``` let tiles = L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', { maxZoom: 19, detectRetina: true, attribution: '&copy; <a href="https://www.openstreetmap.org/copyright">OpenStreetMap</a> contributors' }), ``` However I've got stuck in trying to work out how to pass the provider string to the plugin. In the documentation: https://datasette.readthedocs.io/en/stable/plugins.html you discuss configuration of plugins and use an example of passing in which latitude and longitude columns should be used. However I cannot seem to see anywhere in the current datasette-cluster-map code where these config params are passed in or used. Can you please point me to an example of how to pass configuration from the metadata.json down into a plugin? Once I've overcome this issue I was wondering if you would be interested in taking this change into your version? Many thanks Darren | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/401/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
400340905 | MDU6SXNzdWU0MDAzNDA5MDU= | 402 | Use SQLITE_DBCONFIG_DEFENSIVE plus other recommendations from SQLite security docs | simonw 9599 | open | 0 | 3 | 2019-01-17T15:52:28Z | 2019-01-17T16:15:21Z | OWNER | > Was just having a skim through the datasette source. Given that the vuln impacts shadow tables, wasn't sure whether these are also covered by the immutable flag. Latest release introduced a SQLITE_DBCONFIG_DEFENSIVE flag that they recommend setting: https://sqlite.org/security.html https://twitter.com/ignoredambience/status/1085926961413869568 | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/402/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | ||||||||
403499298 | MDExOlB1bGxSZXF1ZXN0MjQ3OTIzMzQ3 | 404 | Experiment: run Jinja in async mode | simonw 9599 | closed | 0 | 3 | 2019-01-27T00:28:44Z | 2019-11-12T05:02:18Z | 2019-11-12T05:02:13Z | OWNER | simonw/datasette/pulls/404 | See http://jinja.pocoo.org/docs/2.10/api/#async-support Tests all pass. Have not checked performance difference yet. Creating pull request to run tests in Travis. This is not ready to merge - I'm not yet sure if this is a good idea. | datasette 107914493 | pull | {"url": "https://api.github.com/repos/simonw/datasette/issues/404/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 | |||||
403625674 | MDU6SXNzdWU0MDM2MjU2NzQ= | 7 | .insert_all() should accept a generator and process it efficiently | simonw 9599 | closed | 0 | 3 | 2019-01-28T02:11:58Z | 2019-01-28T06:26:53Z | 2019-01-28T06:26:53Z | OWNER | Right now you have to load every record into memory before passing the list to `.insert_all()` and friends. If you want to process millions of rows, this is inefficient. Python has generators - we should use them! The only catch here is that part of the magic of `sqlite-utils` is that it guesses the column types and creates the table for you. This code will need to be updated to notice if the table needs creating and, if it does, create it using the first X (where X=1,000 but can be customized) records. If a record outside of those first 1,000 has a rogue column, we can crash with an error. This will free us up to make the `--nl` option added in #6 much more efficient. | sqlite-utils 140912432 | issue | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/7/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
403922644 | MDU6SXNzdWU0MDM5MjI2NDQ= | 8 | Problems handling column names containing spaces or - | psychemedia 82988 | closed | 0 | 3 | 2019-01-28T17:23:28Z | 2019-04-14T15:29:33Z | 2019-02-23T21:09:03Z | NONE | Irrespective of whether using column names containing a space or - character is good practice, SQLite does allow it, but `sqlite-utils` throws an error in the following cases: ```python import sqlite3 from sqlite_utils import Database dbname = 'test.db' DB = Database(sqlite3.connect(dbname)) import pandas as pd df = pd.DataFrame({'col1':range(3), 'col2':range(3)}) #Convert pandas dataframe to appropriate list/dict format DB['test1'].insert_all( df.to_dict(orient='records') ) #Works fine ``` However: ```python df = pd.DataFrame({'col 1':range(3), 'col2':range(3)}) DB['test1'].insert_all(df.to_dict(orient='records')) ``` throws: ``` --------------------------------------------------------------------------- OperationalError Traceback (most recent call last) <ipython-input-27-070b758f4f92> in <module>() 1 import pandas as pd 2 df = pd.DataFrame({'col 1':range(3), 'col2':range(3)}) ----> 3 DB['test1'].insert_all(df.to_dict(orient='records')) /usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, upsert, batch_size, column_order) 327 jsonify_if_needed(record.get(key, None)) for key in all_columns 328 ) --> 329 result = self.db.conn.execute(sql, values) 330 self.db.conn.commit() 331 self.last_id = result.lastrowid OperationalError: near "1": syntax error ``` and: ```python df = pd.DataFrame({'col-1':range(3), 'col2':range(3)}) DB['test1'].upsert_all(df.to_dict(orient='records')) ``` results in: ``` --------------------------------------------------------------------------- OperationalError Traceback (most recent call last) <ipython-input-28-654523549d20> in <module>() 1 import pandas as pd 2 df = pd.DataFrame({'col-1':range(3), 'col2':range(3)}) ----> 3 DB['test1'].insert_all(df.to_dict(orient='records')) /usr/local/lib/python3.7/site-packages/sqlite_… | sqlite-utils 140912432 | issue | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/8/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
408376825 | MDU6SXNzdWU0MDgzNzY4MjU= | 409 | Zeit API v1 does not work for new users - need to migrate to v2 | michaelmcandrew 209967 | closed | 0 | 3 | 2019-02-09T00:50:33Z | 2020-04-06T15:44:46Z | 2020-04-06T15:44:46Z | NONE | Hello there, This looks like a great tool. Thanks. Unfortunately, I hit the following error: ``` michael@hazel ~/src/cc-datasette/data/out datasette publish now cc-datasette.db > WARN! You are using an old version of the Now Platform. More: https://zeit.co/docs/v1-upgrade > Deploying /tmp/tmpjtrxwsyf/datasette under michaelmcandrew > Using project datasette > Error! You tried to create a Now 1.0 deployment. Please use Now 2.0 instead: https://zeit.co/upgrade ``` I'm guessing you might not hit this because you are not a 'new user' of Zeit (https://github.com/zeit/now-cli/issues/1805#issuecomment-452470953). Would it be a lot of work to upgrade to the new Zeit API, do you think? | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/409/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
410384988 | MDU6SXNzdWU0MTAzODQ5ODg= | 411 | How to pass named parameter into spatialite MakePoint() function | dazzag24 1055831 | closed | 0 | 3 | 2019-02-14T16:30:22Z | 2023-10-25T13:23:04Z | 2019-05-05T12:25:04Z | NONE | Hi, datasette version: "0.26.2" extensions: spatialite: "4.4.0-RC0" sqlite version: "3.22.0" I have a table of airports with latitude and longitude columns. I've added spatialite (with KNN support). After creating the db using csvs-to-sqlite, I run these commands to setup the spatialite tables: ``` conn.execute('SELECT InitSpatialMetadata(1)') conn.execute("SELECT AddGeometryColumn('airports', 'point_geom', 4326, 'POINT', 2);") conn.execute('''UPDATE airports SET point_geom = GeomFromText('POINT('||"longitude"||' '||"latitude"||')',4326);''') conn.execute("SELECT CreateSpatialIndex('airports', 'point_geom');") ``` I'm attempting to create a canned query and have this in my metadata.json file: ``` "find_airports_nearest_to_point":{ "sql":"SELECT a.pos AS rank, b.id, b.name, b.country, b.latitude AS latitude, b.longitude AS longitude, a.distance / 1000.0 AS dist_km FROM KNN AS a JOIN airports AS b ON (b.rowid = a.fid) WHERE f_table_name = \"airports\" AND ref_geometry = MakePoint( :Long , :Lat ) AND max_items = 10;"} ``` which doesn't seem to perform the templating of the name parameters correctly and I get no results. Have also tried: ``` MakePoint( || :Long || , || :Lat || ) ``` which returns this error: ``` near "||": syntax error ``` However I cannot seem to find the correct combination of named parameter syntax (:Lat) or sqlite concatenation operator to make it work. Any ideas if using named parameters inside functions is supported? Thanks Darren | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/411/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
413842611 | MDU6SXNzdWU0MTM4NDI2MTE= | 14 | Utilities for adding indexes | simonw 9599 | closed | 0 | 3 | 2019-02-24T16:57:28Z | 2019-02-24T19:11:28Z | 2019-02-24T19:11:28Z | OWNER | Both in the Python API and the CLI tool. For the CLI tool this should work: $ sqlite-utils create-index mydb.db mytable col1 col2 This will create a compound index across col1 and col2. The name of the index will be automatically chosen unless you use the `--name=...` option. Support a `--unique` option too. | sqlite-utils 140912432 | issue | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/14/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
418329842 | MDU6SXNzdWU0MTgzMjk4NDI= | 415 | Add query parameter to hide SQL textarea | ad-si 36796532 | closed | 0 | 3 | 2019-03-07T14:11:30Z | 2019-03-15T09:30:57Z | 2019-03-15T05:22:43Z | NONE | It would be cool if there was a query parameter to hide / remove the SQL textarea. Then I could simply save a bookmark for a certain query and open it to see the data without having to scroll below the (long) SQL query first. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/415/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
432870248 | MDU6SXNzdWU0MzI4NzAyNDg= | 431 | Datasette doesn't reload when database file changes | psychemedia 82988 | closed | 0 | 3 | 2019-04-13T16:50:43Z | 2019-05-02T05:13:55Z | 2019-05-02T05:13:54Z | CONTRIBUTOR | My understanding of the `--reload` option was that if the database file changed `datasette` would automatically reload. I'm running on a Mac and from the `datasette` UI queries don't seem to be picking up data in a newly changed db (I checked the db timestamp - it certainly updated). I was also expecting to see some sort of log statement in the datasette logging to say that it had detected a file change and restarted, but don't see anything there? Will try to check on an Ubuntu box when I get a chance to see if this is a Mac thing. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/431/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
442327592 | MDU6SXNzdWU0NDIzMjc1OTI= | 456 | Installing installs the tests package | hellerve 7725188 | closed | 0 | 3 | 2019-05-09T16:35:16Z | 2020-07-24T20:39:54Z | 2020-07-24T20:39:54Z | CONTRIBUTOR | Because `setup.py` uses `find_packages` and `tests` is on the top-level, `pip install datasette` will install a top-level package called `tests`, which is probably not desired behavior. The offending line is here: https://github.com/simonw/datasette/blob/bfa2ae0d16d39bb82dbe4da4f3fdc3c7f6257418/setup.py#L40 And only `pip uninstall datasette` with a conflicting package would warn you by default; apparently another package had the same problem, which is why I get this message when uninstalling: ``` $ pip uninstall datasette Uninstalling datasette-0.27: Would remove: /usr/local/bin/datasette /usr/local/lib/python3.7/site-packages/datasette-0.27.dist-info/* /usr/local/lib/python3.7/site-packages/datasette/* /usr/local/lib/python3.7/site-packages/tests/* Would not remove (might be manually added): [ .. snip .. ] Proceed (y/n)? ``` This should be a relatively simple fix, and I could drop a PR if desired! Cheers | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/456/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
443020048 | MDU6SXNzdWU0NDMwMjAwNDg= | 459 | Fix the "datasette now publish ... --alias=x" option | simonw 9599 | closed | 0 | 0.28 4305096 | 3 | 2019-05-11T17:48:40Z | 2019-05-11T20:22:08Z | 2019-05-11T20:22:08Z | OWNER | Now have deprecated the mechanism we were using for this - running `now alias` without any parameters - in favour of something new: https://zeit.co/blog/automatic-aliasing | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/459/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
445868234 | MDU6SXNzdWU0NDU4NjgyMzQ= | 478 | Make it so Docker build doesn't delay PyPI release | simonw 9599 | closed | 0 | Datasette 0.29 4471010 | 3 | 2019-05-19T21:52:10Z | 2019-07-08T03:30:41Z | 2019-07-07T20:03:20Z | OWNER | Datasette automated releases currently include building a Docker image that has a full custom-compiled version of SQLite and SpatiaLite. This takes ages! I still want to publish this Docker image (to https://hub.docker.com/r/datasetteproject/datasette/tags ) but I'd like it if this wasn't a blocker on pushing the new package to PyPI. Ideally PyPI publish would happen first. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/478/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
448391492 | MDU6SXNzdWU0NDgzOTE0OTI= | 21 | Option to ignore inserts if primary key exists already | simonw 9599 | closed | 0 | 3 | 2019-05-25T00:17:12Z | 2019-05-29T05:09:01Z | 2019-05-29T04:18:26Z | OWNER | > I've just noticed that SQLite lets you IGNORE inserts that collide with a pre-existing key. This can be quite handy if you have a dataset that keeps changing in part, and you don't want to upsert and replace pre-existing PK rows but you do want to ignore collisions to existing PK rows. > > Does `sqlite_utils` support such (cavalier!) behaviour? _Originally posted by @psychemedia in https://github.com/simonw/sqlite-utils/issues/18#issuecomment-480621924_ | sqlite-utils 140912432 | issue | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/21/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
451585764 | MDU6SXNzdWU0NTE1ODU3NjQ= | 499 | Accessibility for non-techie newsies? | chrismp 7936571 | open | 0 | 3 | 2019-06-03T16:49:37Z | 2019-06-05T21:22:55Z | NONE | Hi again, I'm having fun uploading datasets to Heroku via datasette. I'd like to set up datasette so that it's easy for other newsroom workers, who don't use Linux and aren't programmers, to upload datasets. Does datasette provide this out-of-the-box, or as a plugin? | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/499/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | ||||||||
452901999 | MDExOlB1bGxSZXF1ZXN0Mjg1Njk4MzEw | 501 | Test against Python 3.8-dev using Travis | simonw 9599 | closed | 0 | 3 | 2019-06-06T08:37:53Z | 2019-11-11T03:23:29Z | 2019-11-11T03:23:29Z | OWNER | simonw/datasette/pulls/501 | datasette 107914493 | pull | {"url": "https://api.github.com/repos/simonw/datasette/issues/501/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 | ||||||
453131917 | MDU6SXNzdWU0NTMxMzE5MTc= | 502 | Exporting sqlite database(s)? | chrismp 7936571 | closed | 0 | 3 | 2019-06-06T16:39:53Z | 2021-04-03T05:16:54Z | 2019-06-11T18:50:42Z | NONE | I'm working on datasette from one computer. But if I want to work on it from another computer and want to copy the SQLite database(s) already on the Heroku datasette instance, how do I copy the database(s) to the second computer so that I can then update it and push online via datasette's command line code that pushes code to Heroku? | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/502/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
457147936 | MDU6SXNzdWU0NTcxNDc5MzY= | 512 | "about" parameter in metadata does not appear when alone | chrismp 7936571 | open | 0 | 3 | 2019-06-17T21:04:20Z | 2019-10-11T15:49:13Z | NONE | Here's an example of metadata I have for one database on datasette. ``` "Records-requests": { "tables": { "Some table": { "about": "This table has data." } } } ``` The text in `about` does not show up when I publish the data. But it shows up after I add a `"source"` parameter in the metadata. Is this intended? | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/512/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | ||||||||
459598080 | MDU6SXNzdWU0NTk1OTgwODA= | 520 | asgi_wrapper plugin hook | simonw 9599 | closed | 0 | simonw 9599 | 3 | 2019-06-23T17:16:45Z | 2019-07-03T04:40:34Z | 2019-07-03T04:06:28Z | OWNER | After #272 we can finally add this hook. It will allow plugins to wrap their own ASGI middleware around Datasette. Potential use-cases include: * adding authentication * custom CORS headers (see #454) * maybe gzip support? * possibly defining entirely new routes, though that may be better handled by a separate hook | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/520/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
459936585 | MDU6SXNzdWU0NTk5MzY1ODU= | 527 | Unable to use rank when fts-table generated with csvs-to-sqlite | clausjuhl 2181410 | closed | 0 | 3 | 2019-06-24T14:49:48Z | 2019-06-24T15:21:18Z | 2019-06-24T15:09:10Z | NONE | Hi Simon. If I generate a fts-table with the csvs-to-sqlite f-option, I'm unable to use (in datasette's GUI) the internal ranking of the table for sorting or viewing, but if I generate the fts-table with the enable-fts argument from sqlite-utils, everything works OK. Eg.: datasette, version 0.28 sqlite-utils, version 1.2.1 csvs-to-sqlite, version 0.9 No column named rank with these commands: $ csvs-to-sqlite minutes.csv minutes.db -f text_data $ datasette -i minutes.db select rank, * from minutes_fts where minutes_fts match 'dog' Everything OK with these commands: $ csvs-to-sqlite minutes.csv minutes.db $ sqlite-utils enable-fts minutes.db text_data $ datasette -i minutes.db select rank, * from minutes_fts where minutes_fts match 'dog' Am I doing something wrong? Thank you for a great application! | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/527/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
463915863 | MDU6SXNzdWU0NjM5MTU4NjM= | 538 | Mechanism for secrets in plugin configuration | simonw 9599 | closed | 0 | 3 | 2019-07-03T19:23:34Z | 2019-07-04T05:47:54Z | 2019-07-04T05:47:54Z | OWNER | See https://github.com/simonw/datasette-auth-github/issues/1 We need a mechanism whereby plugins can tap into "secret" config options without exposing them in the visible metadata.json (where plugin configs currently live, see https://datasette.readthedocs.io/en/stable/plugins.html#plugin-configuration ) | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/538/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
464868844 | MDU6SXNzdWU0NjQ4Njg4NDQ= | 543 | datasette publish option for setting plugin configuration secrets | simonw 9599 | closed | 0 | Datasette 0.29 4471010 | 3 | 2019-07-06T16:21:23Z | 2019-07-08T02:06:34Z | 2019-07-08T02:06:34Z | OWNER | Follow-on from #538 - the `datasette publish` command needs a way of passing secrets which will be made available to plugin configuration but will not be exposed in `/-/metadata.json`. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/543/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
465327844 | MDU6SXNzdWU0NjUzMjc4NDQ= | 553 | Potential improvements to facet-by-date | simonw 9599 | open | 0 | 3 | 2019-07-08T15:37:53Z | 2019-07-08T15:41:55Z | OWNER | In addition to #483 Tobias had some useful suggestions on Twitter: https://twitter.com/rixxtr/status/1148253926476701696 > I think for date facets, it might be more meaningful to order them by date, rather than by size? Or offer both? I'm *definitely* often interested in size-over-time, so https://data.rixx.de/django_tickets/tickets?_facet_date=created#facet-created … isn't all that helpful! Screenshot of that link: <img width="1092" alt="django_tickets__tickets__29_846_rows" src="https://user-images.githubusercontent.com/9599/60823090-9f680100-a15b-11e9-84e9-52b9d666e90f.png"> | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/553/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | ||||||||
467790646 | MDU6SXNzdWU0Njc3OTA2NDY= | 560 | CodeMirror fails to load on database page | simonw 9599 | closed | 0 | 3 | 2019-07-14T03:31:00Z | 2019-09-03T01:03:02Z | 2019-07-14T03:38:59Z | OWNER | It's not loading on https://latest.datasette.io/fixtures But it does load on https://latest.datasette.io/fixtures?sql=select+*+from+facetable | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/560/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
470691999 | MDU6SXNzdWU0NzA2OTE5OTk= | 43 | .add_column() doesn't match indentation of initial creation | simonw 9599 | closed | 0 | 3 | 2019-07-20T16:33:10Z | 2019-07-23T13:09:11Z | 2019-07-23T13:09:05Z | OWNER | I spotted a table which was created once and then had columns added to it and the formatted SQL looks like this: ```sql CREATE TABLE [records] ( [type] TEXT, [sourceName] TEXT, [sourceVersion] TEXT, [unit] TEXT, [creationDate] TEXT, [startDate] TEXT, [endDate] TEXT, [value] TEXT, [metadata_Health Mate App Version] TEXT, [metadata_Withings User Identifier] TEXT, [metadata_Modified Date] TEXT, [metadata_Withings Link] TEXT, [metadata_HKWasUserEntered] TEXT , [device] TEXT, [metadata_HKMetadataKeyHeartRateMotionContext] TEXT, [metadata_HKDeviceManufacturerName] TEXT, [metadata_HKMetadataKeySyncVersion] TEXT, [metadata_HKMetadataKeySyncIdentifier] TEXT, [metadata_HKSwimmingStrokeStyle] TEXT, [metadata_HKVO2MaxTestType] TEXT, [metadata_HKTimeZone] TEXT, [metadata_Average HR] TEXT, [metadata_Recharge] TEXT, [metadata_Lights] TEXT, [metadata_Asleep] TEXT, [metadata_Rating] TEXT, [metadata_Energy Threshold] TEXT, [metadata_Deep Sleep] TEXT, [metadata_Nap] TEXT, [metadata_Edit Slots] TEXT, [metadata_Tags] TEXT, [metadata_Daytime HR] TEXT) ``` It would be nice if the columns that were added later matched the indentation of the initial columns. | sqlite-utils 140912432 | issue | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/43/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
471780443 | MDU6SXNzdWU0NzE3ODA0NDM= | 46 | extracts= option for insert/update/etc | simonw 9599 | closed | 0 | 3 | 2019-07-23T15:55:46Z | 2020-03-01T16:53:40Z | 2019-07-23T17:00:44Z | OWNER | Relates to #42 and #44. I want the ability to extract values out into lookup tables during bulk insert/upsert operations. `db.insert_all(rows, extracts=["species"])` - creates species table for values in the species column `db.insert_all(rows, extracts={"species": "Species"})` - as above but the new table is called `Species`. | sqlite-utils 140912432 | issue | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/46/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
472097220 | MDU6SXNzdWU0NzIwOTcyMjA= | 7 | Script uses a lot of RAM | simonw 9599 | closed | 0 | 3 | 2019-07-24T06:11:11Z | 2019-07-24T06:35:52Z | 2019-07-24T06:35:52Z | MEMBER | I'm using an XML pull parser which should avoid the need to slurp the whole XML file into memory, but it's not working - the script still uses over 1GB of RAM when it runs according to Activity Monitor. I think this is because I'm still causing the full root element to be incrementally loaded into memory just in case I try and access it later. http://effbot.org/elementtree/iterparse.htm says I should use `elem.clear()` as I go. It also says: > The above pattern has one drawback; it does not clear the root element, so you will end up with a single element with lots of empty child elements. If your files are huge, rather than just large, this might be a problem. To work around this, you need to get your hands on the root element. So I will try that recipe and see if it helps. | healthkit-to-sqlite 197882382 | issue | {"url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/7/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
476573875 | MDU6SXNzdWU0NzY1NzM4NzU= | 567 | Datasette Edit | simonw 9599 | closed | 0 | 3 | 2019-08-04T17:09:28Z | 2020-02-25T03:40:50Z | 2020-02-25T03:40:50Z | OWNER | Datasette started out immutable. Then it gained the ability to run against read-only databases that were being modified by other processes. It's time for the next logical progression: the option to allow Datasette (or more likely individual plugins) to write to the database! This is going to require some careful rethinking of how connection management works. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/567/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
488833698 | MDU6SXNzdWU0ODg4MzM2OTg= | 2 | "twitter-to-sqlite user-timeline" command for pulling tweets by a specific user | simonw 9599 | closed | 0 | 3 | 2019-09-03T21:29:12Z | 2019-09-04T20:02:11Z | 2019-09-04T20:02:11Z | MEMBER | Twitter only allows up to 3,200 tweets to be retrieved from https://developer.twitter.com/en/docs/tweets/timelines/api-reference/get-statuses-user_timeline.html I'm going to do: $ twitter-to-sqlite tweets simonw | twitter-to-sqlite 206156866 | issue | {"url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/2/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
492153532 | MDU6SXNzdWU0OTIxNTM1MzI= | 573 | Exposing Datasette via Jupyter-server-proxy | psychemedia 82988 | closed | 0 | 3 | 2019-09-11T10:32:36Z | 2020-03-26T09:41:30Z | 2020-03-26T09:41:30Z | CONTRIBUTOR | It is possible to expose a running `datasette` service in a Jupyter environment such as a MyBinder environment using [`jupyter-server-proxy`](https://github.com/jupyterhub/jupyter-server-proxy). For example, using [this demo Binder](https://mybinder.org/v2/gh/binder-examples/r/master?filepath=index.ipynb), which has the server proxy installed, we can upload a simple test database from the notebook homepage, install datasette from a Jupyter terminal, set it running against the test db on e.g. port 8001, and then view it via the path `proxy/8001`. Clicking links results in 404s, though, because the `datasette` links aren't relative to the current path?  | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/573/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
499954048 | MDExOlB1bGxSZXF1ZXN0MzIyNTI5Mzgx | 578 | Added support for multi arch builds | heussd 887095 | closed | 0 | 3 | 2019-09-29T18:43:03Z | 2019-11-13T19:13:15Z | 2019-11-13T19:13:15Z | NONE | simonw/datasette/pulls/578 | Minor changes in the Dockerfile and a new Makefile to support Docker multi-architecture builds. `make` will build one image per architecture and push them as one Docker manifest to Docker Hub. Feel free to change `IMAGE_NAME` to `datasetteproject/datasette` to update your official Docker Hub image(s). | datasette 107914493 | pull | {"url": "https://api.github.com/repos/simonw/datasette/issues/578/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 | |||||
503053800 | MDU6SXNzdWU1MDMwNTM4MDA= | 12 | Extract "source" into a separate lookup table | simonw 9599 | closed | 0 | 3 | 2019-10-06T05:17:23Z | 2019-10-17T15:49:24Z | 2019-10-17T15:49:24Z | MEMBER | It's pretty bulky and ugly at the moment: <img width="334" alt="trump__tweets__1_820_rows" src="https://user-images.githubusercontent.com/9599/66264630-df23a080-e7bd-11e9-9154-403c2e69f841.png"> | twitter-to-sqlite 206156866 | issue | {"url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/12/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
503234169 | MDU6SXNzdWU1MDMyMzQxNjk= | 2 | Track and use the 'since' value | simonw 9599 | closed | 0 | 3 | 2019-10-07T05:02:59Z | 2020-03-27T22:22:30Z | 2020-03-27T22:22:30Z | MEMBER | Pocket says: > Whenever possible, you should use the since parameter, or count and offset parameters when retrieving a user's list. After retrieving the list, you should store the current time (which is provided along with the list response) and pass that in the next request for the list. This way the server only needs to return a small set (changes since that time) instead of the user's entire list every time. At the bottom of https://getpocket.com/developer/docs/v3/retrieve | pocket-to-sqlite 213286752 | issue | {"url": "https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/2/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
505512251 | MDU6SXNzdWU1MDU1MTIyNTE= | 588 | Queries per DB table in metadata.json | bsilverm 12617395 | closed | 0 | 3 | 2019-10-10T21:08:19Z | 2019-10-21T12:58:22Z | 2019-10-21T01:48:42Z | NONE | It doesn't appear possible to have separate queries defined per database table. When I do something like below, my table descriptions show up but not the queries: `"databases": { "MYDB": { "tables": { "MYFIRSTTABLE": { "source": "Test", "source_url": "https://www.google.com", "queries": { "Query 1": { "sql": "select * from MYFIRSTTABLE", "title": "Query 1", "description": "This is the first query" } } }, "MYSECONDTABLE": { "source": "Test2", "source_url": "https://www.google.com", "queries": { "Query 2": { "sql": "select * from MYSECONDTABLE;", "title": "Query 2", "description": "This is the second query" } } } } } }` | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/588/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
505818256 | MDExOlB1bGxSZXF1ZXN0MzI3MTcyNTQ1 | 590 | Handle spaces in DB names | rixx 2657547 | closed | 0 | 3 | 2019-10-11T12:18:22Z | 2019-11-04T23:16:31Z | 2019-11-04T23:16:30Z | CONTRIBUTOR | simonw/datasette/pulls/590 | Closes #503 | datasette 107914493 | pull | {"url": "https://api.github.com/repos/simonw/datasette/issues/590/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 | |||||
506087267 | MDU6SXNzdWU1MDYwODcyNjc= | 19 | since_id support for home-timeline | simonw 9599 | closed | 0 | 3 | 2019-10-11T22:48:24Z | 2019-10-16T19:13:06Z | 2019-10-16T19:12:46Z | MEMBER | Currently every time you run `home-timeline` we pull all 800 available tweets. We should offer to support `since_id` (which can be provided or can be pulled directly from the database) in order to work more efficiently if this command is executed e.g. on a cron. | twitter-to-sqlite 206156866 | issue | {"url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/19/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
506183241 | MDU6SXNzdWU1MDYxODMyNDE= | 593 | make uvicorn an optional dependency (because not ok on windows python yet) | stonebig 4312421 | closed | 0 | 3 | 2019-10-12T12:51:07Z | 2019-10-13T06:22:08Z | 2019-10-13T06:22:07Z | NONE | Would it be possible to: - remove the mandatory uvicorn dependency? - possibly fall back to hypercorn? Reason: - uvloop is not yet supported on Windows/Python-3.8 and below; it may happen with Python-3.9 only. - it seems a six-line effort (but I'm no expert) | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/593/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
506268945 | MDU6SXNzdWU1MDYyNjg5NDU= | 20 | --since support for various commands for refresh-by-cron | simonw 9599 | closed | 0 | 3 | 2019-10-13T03:40:46Z | 2019-10-21T03:32:04Z | 2019-10-16T19:26:11Z | MEMBER | I want to run a cron that updates my Twitter database every X minutes. It should be able to retrieve the following without needing to paginate through everything: - [x] Tweets I have tweeted - [x] My home timeline (see #19) - [x] Tweets I have favourited It would be nice if this could be standardized across all commands as a `--since` option. | twitter-to-sqlite 206156866 | issue | {"url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/20/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
506297048 | MDU6SXNzdWU1MDYyOTcwNDg= | 594 | upgrade to uvicorn-0.9 to be Python-3.8 friendly | stonebig 4312421 | closed | 0 | 3 | 2019-10-13T09:23:43Z | 2019-11-12T04:47:04Z | 2019-11-12T04:47:04Z | NONE | uvicorn-0.8 relies on websockets-0.7, which lacks Python-3.8 compatibility | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/594/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
509535510 | MDExOlB1bGxSZXF1ZXN0MzMwMDc2MjYz | 602 | Offer to format readonly SQL | rixx 2657547 | closed | 0 | 3 | 2019-10-20T02:29:32Z | 2019-11-04T07:29:33Z | 2019-11-04T02:39:56Z | CONTRIBUTOR | simonw/datasette/pulls/602 | Following discussion in #601, this PR adds a "Format SQL" button to read-only SQL (if the SQL actually differs from the formatting result). It also removes a console error on readonly SQL queries. | datasette 107914493 | pull | {"url": "https://api.github.com/repos/simonw/datasette/issues/602/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 | |||||
509693773 | MDU6SXNzdWU1MDk2OTM3NzM= | 604 | _where= parameter is not persisted in hidden form fields | simonw 9599 | closed | 0 | 3 | 2019-10-21T02:14:10Z | 2019-10-30T19:12:38Z | 2019-10-30T18:49:44Z | OWNER | e.g. on this page: https://v0-30.datasette.io/fixtures/roadside_attractions?_where=name%20like%20%27%museum%%27 Click the "Apply" button and the `_where=` parameter will be dropped. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/604/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
512218858 | MDU6SXNzdWU1MTIyMTg4NTg= | 606 | /-/plugins shows incorrect name for plugins | simonw 9599 | closed | 0 | 3 | 2019-10-24T22:53:25Z | 2019-11-01T05:41:04Z | 2019-11-01T05:40:07Z | OWNER | https://fivethirtyeight.datasettes.com/-/plugins ```json [ { "name": "datasette_jellyfish", "static": false, "templates": false, "version": "0.3" }, { "name": "datasette_vega", "static": true, "templates": false, "version": "0.6.2" } ] ``` These should be shown as `datasette-jellyfish` and `datasette-vega` since those are the names on PyPI. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/606/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
516370822 | MDU6SXNzdWU1MTYzNzA4MjI= | 611 | Static assets no longer loading for installed plugins | simonw 9599 | closed | 0 | 3 | 2019-11-01T22:07:00Z | 2019-11-01T22:15:55Z | 2019-11-01T22:15:55Z | OWNER | Caused by the fix I made in #606. E.g. `/-/static-plugins/datasette_leaflet_geojson/datasette-leaflet-geojson.js` is a 404, but `/-/static-plugins/datasette-leaflet-geojson/datasette-leaflet-geojson.js` works correctly. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/611/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
516967682 | MDU6SXNzdWU1MTY5Njc2ODI= | 10 | Add this repos_starred view | simonw 9599 | closed | 0 | 3 | 2019-11-04T05:44:38Z | 2020-05-02T16:37:36Z | 2020-05-02T16:37:36Z | MEMBER | ```sql create view repos_starred as select stars.starred_at, users.login, repos.* from repos join stars on repos.id = stars.repo join users on repos.owner = users.id order by starred_at desc; ``` | github-to-sqlite 207052882 | issue | {"url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/10/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
522334771 | MDU6SXNzdWU1MjIzMzQ3NzE= | 633 | Publish to Heroku is broken: "WARNING: You must pass the application as an import string to enable 'reload' or 'workers" | simonw 9599 | closed | 0 | 3 | 2019-11-13T16:32:11Z | 2020-04-28T20:37:50Z | 2019-11-13T16:43:23Z | OWNER | ``` 2019-11-13T16:27:59.821483+00:00 heroku[web.1]: Starting process with command `datasette serve --host 0.0.0.0 -i fixtures.db --cors --port 36817 --inspect-file inspect-data.json` 2019-11-13T16:28:01.856471+00:00 heroku[web.1]: State changed from starting to crashed 2019-11-13T16:28:01.750253+00:00 app[web.1]: Serve! files=() (immutables=('fixtures.db',)) on port 36817 2019-11-13T16:28:01.771524+00:00 app[web.1]: WARNING: You must pass the application as an import string to enable 'reload' or 'workers'. 2019-11-13T16:28:01.837839+00:00 heroku[web.1]: Process exited with status 1 ``` | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/633/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
525254973 | MDU6SXNzdWU1MjUyNTQ5NzM= | 636 | rowid is not included in dropdown filter menus | simonw 9599 | closed | 0 | 3 | 2019-11-19T20:43:04Z | 2019-11-19T23:01:17Z | 2019-11-19T23:01:17Z | OWNER | For `rowid` tables the `rowid` column isn't shown in the list of filter options: <img width="652" alt="md__md__53_805_rows_where_where_rowid___1060124_sorted_by_rowid_descending" src="https://user-images.githubusercontent.com/9599/69184590-00202680-0aca-11ea-8522-3a4690924b83.png"> This also means if you link to e.g. `?rowid__gt=1060124` the resulting filter interface will be slightly broken: clicking the "apply" button again will lose your filter for example. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/636/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
525993034 | MDU6SXNzdWU1MjU5OTMwMzQ= | 637 | Custom queries with 0 results should say "0 results" | simonw 9599 | closed | 0 | 3 | 2019-11-20T18:28:14Z | 2019-11-23T06:17:23Z | 2019-11-23T06:07:08Z | OWNER | Consider https://latest.datasette.io/fixtures/neighborhood_search?text=foop <img width="803" alt="fixtures__select_neighborhood__facet_cities_name__state_from_facetable_join_facet_cities_on_facetable_city_id___facet_cities_id_where_neighborhood_like_________text________order_by_neighborhood_" src="https://user-images.githubusercontent.com/9599/69266652-6a939e00-0b80-11ea-8b08-960a05a8c8d0.png"> It's currently not obvious that the query executed and returned 0 results. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/637/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
526913133 | MDU6SXNzdWU1MjY5MTMxMzM= | 638 | Don't suggest column for faceting if all values are 1 | simonw 9599 | closed | 0 | 3 | 2019-11-22T00:14:22Z | 2019-11-22T01:14:59Z | 2019-11-22T00:57:49Z | OWNER | https://www.niche-museums.com/museums/museums?_facet=wikipedia_url <img width="759" alt="museums__museums__42_rows" src="https://user-images.githubusercontent.com/9599/69387171-e58caf80-0c79-11ea-8b4e-cfe5861bc0ab.png"> Challenge is how to do this efficiently, since suggested facet queries need to be lightning fast. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/638/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
530491074 | MDU6SXNzdWU1MzA0OTEwNzQ= | 14 | Command for importing events | simonw 9599 | open | 0 | 3 | 2019-11-29T21:28:58Z | 2020-04-14T19:38:34Z | MEMBER | Eg from https://api.github.com/users/simonw/events Docs here: https://developer.github.com/v3/activity/events/#list-events-performed-by-a-user | github-to-sqlite 207052882 | issue | {"url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/14/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | ||||||||
531502365 | MDU6SXNzdWU1MzE1MDIzNjU= | 646 | Make database level information from metadata.json available in the index.html template | lagolucas 18017473 | open | 0 | Datasette 1.0 3268330 | 3 | 2019-12-02T19:55:10Z | 2022-03-15T20:50:34Z | NONE | I searched the issues here and didn't find anything related to what I want. I want to take information that lives at the database level of the metadata JSON, like title, source and source_url, and use it on the index page. I tried some small tweaks to the Python and HTML files, but failed to get that result. Is there a way? Thanks! | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/646/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | |||||||
534507142 | MDU6SXNzdWU1MzQ1MDcxNDI= | 69 | Feature request: enable extensions loading | aborruso 30607 | closed | 0 | 3 | 2019-12-08T08:06:25Z | 2022-02-05T00:04:25Z | 2020-10-16T18:42:49Z | NONE | Hi, it would be great to add a parameter that enables loading any SQLite extension you need. Something like "-ext modspatialite". That way your great tool would be even more convenient and powerful. Thank you very much | sqlite-utils 140912432 | issue | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/69/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
534629631 | MDU6SXNzdWU1MzQ2Mjk2MzE= | 650 | Add a glossary to the documentation | simonw 9599 | open | 0 | 3 | 2019-12-09T00:23:45Z | 2022-01-13T22:04:56Z | OWNER | Call it `glossary.rst` - it can use a definition list something like this: ```rst .. _glossary: Glossary ======== Term A definition of the term. Another term Another definition. ``` | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/650/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | ||||||||
539590148 | MDU6SXNzdWU1Mzk1OTAxNDg= | 651 | fts5 syntax error when using punctuation | clausjuhl 2181410 | closed | 0 | 3 | 2019-12-18T10:25:35Z | 2021-07-14T19:26:06Z | 2019-12-30T06:42:55Z | NONE | Hi Simon I get a syntax error when using punctuation or special characters in a fulltext search (using fts5). I created the virtual table using sqlite-utils' "enable-fts"-command. The same error appears on Niche Museums [https://www.niche-museums.com/browse/search?q=park.](https://www.niche-museums.com/browse/search?q=park.), but works fine in most of your other datasette-examples, e.g. register-of-members-interests [https://register-of-members-interests.datasettes.com/regmem-98dc8b7/items?_search=mins.](https://register-of-members-interests.datasettes.com/regmem-98dc8b7/items?_search=mins.) What am I doing wrong? Many thanks! | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/651/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
541467590 | MDU6SXNzdWU1NDE0Njc1OTA= | 654 | Template debug mode that outputs template context | simonw 9599 | closed | 0 | 3 | 2019-12-22T15:51:25Z | 2019-12-22T16:13:11Z | 2019-12-22T16:04:51Z | OWNER | It would make writing templates (including custom templates) easier if there was an option to dump out the full template context - maybe `?_context=1` | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/654/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
542553350 | MDU6SXNzdWU1NDI1NTMzNTA= | 655 | Copy and paste doesn't work reliably on iPhone for SQL editor | simonw 9599 | closed | 0 | Datasette 1.0 3268330 | 3 | 2019-12-26T13:15:10Z | 2020-09-30T20:36:12Z | 2020-08-30T17:51:40Z | OWNER | I'm having a lot of trouble copying and pasting from the codemirror editor on my iPhone. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/655/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
542814756 | MDU6SXNzdWU1NDI4MTQ3NTY= | 71 | Tests are failing due to missing FTS5 | simonw 9599 | closed | 0 | 3 | 2019-12-27T09:41:16Z | 2019-12-27T09:49:37Z | 2019-12-27T09:49:37Z | OWNER | https://travis-ci.com/simonw/sqlite-utils/jobs/268436167 This is a recent change: 2 months ago they worked fine. I'm not sure what changed here. Maybe something to do with https://launchpad.net/~jonathonf/+archive/ubuntu/backports ? | sqlite-utils 140912432 | issue | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/71/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
546073980 | MDU6SXNzdWU1NDYwNzM5ODA= | 74 | Test failures on openSUSE 15.1: AssertionError: Explicit other_table and other_column | jayvdb 15092 | open | 0 | 3 | 2020-01-07T04:35:50Z | 2020-01-12T07:21:17Z | CONTRIBUTOR | openSUSE 15.1 is using python 3.6.5 and click-7.0 , however it has test failures while openSUSE Tumbleweed on py37 passes. Most fail on the cli exit code like ```py [ 74s] =================================== FAILURES =================================== [ 74s] _________________________________ test_tables __________________________________ [ 74s] [ 74s] db_path = '/tmp/pytest-of-abuild/pytest-0/test_tables0/test.db' [ 74s] [ 74s] def test_tables(db_path): [ 74s] result = CliRunner().invoke(cli.cli, ["tables", db_path]) [ 74s] > assert '[{"table": "Gosh"},\n {"table": "Gosh2"}]' == result.output.strip() [ 74s] E assert '[{"table": "...e": "Gosh2"}]' == '' [ 74s] E - [{"table": "Gosh"}, [ 74s] E - {"table": "Gosh2"}] [ 74s] [ 74s] tests/test_cli.py:28: AssertionError ``` packaging project at https://build.opensuse.org/package/show/home:jayvdb:py-new/python-sqlite-utils I'll keep digging into this after I have github-to-sqlite working on Tumbleweed, as I'll need openSUSE Leap 15.1 working before I can submit this into the main python repo. | sqlite-utils 140912432 | issue | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/74/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | ||||||||
555832585 | MDU6SXNzdWU1NTU4MzI1ODU= | 661 | --port option to expose a port other than 8001 in "datasette package" | dvhthomas 134771 | closed | 0 | 3 | 2020-01-27T21:05:56Z | 2020-01-30T04:17:52Z | 2020-01-29T22:46:45Z | NONE | I see how to alter the port using `datasette serve -p XXX` per the docs. However, I'm packaging up to serve the container on App Engine flexible, which [requires](https://cloud.google.com/appengine/docs/flexible/custom-runtimes/build#listening_to_port_8080) that the container is serving traffic on port 8080. https://github.com/simonw/datasette/blob/7950105c278b140e6cb665c68b59df219870f9bc/Dockerfile#L41 Is there a way to inject a non-default port into the Dockerfile, or should I just do something like `sed` to replace 8001 with 8080 after `datasette package` has done its thing? Thanks for the advice. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/661/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
559197745 | MDU6SXNzdWU1NTkxOTc3NDU= | 82 | Tutorial command no longer works | petey284 10350886 | closed | 0 | 3 | 2020-02-03T16:36:11Z | 2020-02-27T04:16:43Z | 2020-02-27T04:16:30Z | NONE | Issue with command on [tutorial](https://simonwillison.net/2019/Feb/25/sqlite-utils/) on Simon's site. The following command no longer works, and breaks with the previous too many variables error: #50 ``` cmd > curl "https://data.nasa.gov/resource/y77d-th95.json" | \ sqlite-utils insert meteorites.db meteorites - --pk=id ``` Output: ``` cmd Traceback (most recent call last): File "continuum\miniconda3\envs\main\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "continuum\miniconda3\envs\main\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "Continuum\miniconda3\envs\main\Scripts\sqlite-utils.exe\__main__.py", line 9, in <module> File "continuum\miniconda3\envs\main\lib\site-packages\click\core.py", line 764, in __call__ return self.main(*args, **kwargs) File "continuum\miniconda3\envs\main\lib\site-packages\click\core.py", line 717, in main rv = self.invoke(ctx) File "continuum\miniconda3\envs\main\lib\site-packages\click\core.py", line 1137, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "continuum\miniconda3\envs\main\lib\site-packages\click\core.py", line 956, in invoke return ctx.invoke(self.callback, **ctx.params) File "continuum\miniconda3\envs\main\lib\site-packages\click\core.py", line 555, in invoke return callback(*args, **kwargs) File "continuum\miniconda3\envs\main\lib\site-packages\sqlite_utils\cli.py", line 434, in insert default=default, File "continuum\miniconda3\envs\main\lib\site-packages\sqlite_utils\cli.py", line 384, in insert_upsert_implementation docs, pk=pk, batch_size=batch_size, alter=alter, **extra_kwargs File "continuum\miniconda3\envs\main\lib\site-packages\sqlite_utils\db.py", line 1081, in insert_all result = self.db.conn.execute(query, params) sqlite3.OperationalError: too many SQL variables ``` My thought is that maybe the dataset grew over the last few years and so didn't run into this issue before. No error… | sqlite-utils 140912432 | issue | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/82/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed |
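The two FTS rows above (#527 and #651) come down to the same working recipe: create the FTS table with sqlite-utils so that FTS5's built-in `rank` column is available to queries. A minimal sketch of that recipe, assuming the CSV loads into a table named `minutes` with a searchable `text_data` column (the table name is an assumption; the individual commands follow the ones quoted in #527):

```bash
# Load the CSV without -f, so csvs-to-sqlite does not create its own FTS table
$ csvs-to-sqlite minutes.csv minutes.db

# Let sqlite-utils build the FTS table instead, which exposes rank in FTS5
$ sqlite-utils enable-fts minutes.db minutes text_data

# Serve the database read-only; then, in Datasette's SQL editor, run:
# select rank, * from minutes_fts where minutes_fts match 'dog' order by rank
$ datasette -i minutes.db
```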
Advanced export
JSON shape: default, array, newline-delimited, object
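As a sketch of how those shapes are typically requested (assuming Datasette's documented `_shape` and `_nl` querystring parameters; the hostname and database path are placeholders, not taken from this page):

```bash
# default: rows plus query metadata
curl 'https://example.com/github/issues.json?comments=3'

# array: a plain JSON array of row objects
curl 'https://example.com/github/issues.json?comments=3&_shape=array'

# newline-delimited: one JSON object per line, handy for streaming tools
curl 'https://example.com/github/issues.json?comments=3&_shape=array&_nl=on'

# object: rows keyed by primary key
curl 'https://example.com/github/issues.json?comments=3&_shape=object'
```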
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT,
   [active_lock_reason] TEXT,
   [performed_via_github_app] TEXT,
   [reactions] TEXT,
   [draft] INTEGER,
   [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
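Given that schema, the filter behind this page ("rows where comments = 3") corresponds to a query along these lines. The join resolves the `user` foreign key to the `users` table the schema references; that table is assumed here to carry a `login` column, as the rendered rows (e.g. "simonw 9599") suggest:

```sql
select issues.id, issues.number, issues.title, users.login, issues.state
from issues
join users on issues.user = users.id
where issues.comments = 3
order by issues.id;
```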