issues
2,892 rows sorted by user
Suggested facets: state, assignee, author_association, repo, type, draft, state_reason, created_at (date), closed_at (date)
updated_at (date) facet (more than 30 values; top 30 shown)
- 2022-01-13 33
- 2021-06-17 21
- 2022-12-13 20
- 2022-01-20 19
- 2023-05-08 19
- 2019-06-24 18
- 2020-09-24 17
- 2022-03-19 17
- 2020-10-08 16
- 2021-01-24 16
- 2020-05-04 14
- 2022-11-15 14
- 2017-11-13 13
- 2020-05-28 13
- 2020-06-09 13
- 2020-10-23 12
- 2020-10-31 12
- 2021-10-13 12
- 2022-03-21 12
- 2022-08-27 12
- 2019-07-08 11
- 2021-01-25 11
- 2021-08-18 11
- 2022-11-18 11
- 2017-10-24 10
- 2017-12-10 10
- 2018-05-28 10
- 2018-07-10 10
- 2019-05-19 10
- 2019-05-29 10
- …
milestone facet (more than 30 values; top 30 shown)
- Datasette 1.0 110
- 0.51 53
- Ship first public release 49
- Datasette 0.44 39
- Datasette 0.45 25
- Datasette 0.60 24
- Datasette 1.0a0 23
- 0.28 22
- Custom templates edition 21
- Datasette 0.50 21
- Datasette 0.54 19
- Datasette 1.0a3 19
- 3.21 16
- Datasette 0.52 15
- Datasette 1.0a2 15
- Datasette 0.49 13
- Foreign key edition 11
- Datasette 0.29 11
- Datasette 0.43 11
- 3.29 10
- Datasette 0.39 8
- 2.20 8
- Datasette 0.62 8
- 0.23.1 7
- 1.0 7
- 3.0 7
- Datasette 1.0a1 6
- v1 stretch goals 5
- 1.0 4
- Apple Photos online and securely browsable 4
- …
id | node_id | number | title | user ▼ | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | pull_request | body | repo | type | active_lock_reason | performed_via_github_app | reactions | draft | state_reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
771324837 | MDU6SXNzdWU3NzEzMjQ4Mzc= | 53 | --since support for favorites | anotherjesse 27 | closed | 0 | 1 | 2020-12-19T07:08:23Z | 2020-12-19T07:47:11Z | 2020-12-19T07:47:11Z | NONE | Having support for `--since` for updating your favorites would be ideal, as the API is slow and only returns the ~3k most recent favorites. https://twittercommunity.com/t/cant-get-all-favorite-tweets-by-rest-api/22007/3 The API seems to take an optional `since_id` parameter - https://developer.twitter.com/en/docs/twitter-api/v1/tweets/post-and-engage/api-reference/get-favorites-list | twitter-to-sqlite 206156866 | issue | {"url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/53/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
723838331 | MDU6SXNzdWU3MjM4MzgzMzE= | 11 | export.xml file name varies with different language settings | jarib 572 | closed | 0 | 7 | 2020-10-17T20:07:18Z | 2020-10-17T21:39:15Z | 2020-10-17T21:14:10Z | NONE | The XML file exported from my phone has a Norwegian file name – `eksport.xml` 🙄 I can work around this by unpacking the zip and using `--xml`, but then I lose the workout points. Perhaps this could be solved by `--localized-xml eksport.xml`? Alternatively just fall back to the first XML file in the root folder of the zip. | healthkit-to-sqlite 197882382 | issue | {"url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/11/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
544571092 | MDU6SXNzdWU1NDQ1NzEwOTI= | 15 | Assets table with downloads | garethr 2029 | closed | 0 | 1.0 5225818 | 4 | 2020-01-02T13:05:28Z | 2020-03-28T12:17:01Z | 2020-03-23T19:17:32Z | NONE | The `releases` command extracts the releases table, but data about the individual assets are locked up in the JSON document in the `assets` field. My main interest is in individual and aggregate download counts. I was wondering if creating a new table with a record per asset may be useful? If so I'm happy to send a PR when I get a moment. Do you have opinions about that simply being part of the `releases` command or would you prefer a separate command as well? | github-to-sqlite 207052882 | issue | {"url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/15/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
609950090 | MDU6SXNzdWU2MDk5NTAwOTA= | 33 | Fall back to authentication via ENV | garethr 2029 | closed | 0 | 4 | 2020-04-30T12:58:14Z | 2020-05-02T18:46:10Z | 2020-05-02T18:45:37Z | NONE | Would you accept a PR that falls back to looking for an environment variable for the GitHub token? Specifically a change here: https://github.com/dogsheep/github-to-sqlite/blob/c34d5a18bfc41fa08755ba3d5cf9fe09ff204238/github_to_sqlite/cli.py#L271 I'd like to use `github-to-sqlite` in a GitHub Action workflow and this would be simpler than trying to fill out the prompt or generate a file with sensitive content. Wanted to check first, I'm happy to submit a PR with tests and updates to the docs. | github-to-sqlite 207052882 | issue | {"url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/33/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
988493790 | MDExOlB1bGxSZXF1ZXN0NzI3MzkwODM1 | 36 | Correct naming of tool in readme | badboy 2129 | open | 0 | 1 | 2021-09-05T12:05:40Z | 2022-01-06T16:04:46Z | FIRST_TIME_CONTRIBUTOR | dogsheep/dogsheep-photos/pulls/36 | dogsheep-photos 256834907 | pull | {"url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/36/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 | |||||||
1515717718 | PR_kwDOC8tyDs5Gc-VH | 23 | Include workout statistics | badboy 2129 | open | 0 | 0 | 2023-01-01T17:29:57Z | 2023-01-01T17:29:57Z | FIRST_TIME_CONTRIBUTOR | dogsheep/healthkit-to-sqlite/pulls/23 | Not sure when this changed (iOS 16 maybe?), but the `WorkoutStatistics` now has a whole bunch of information about workouts, e.g. for runs it contains the distance (as a `<WorkoutStatistics type="HKQuantityTypeIdentifierDistanceWalkingRunning ...>` element). Adding it as another column at least allows me to pull these out (using SQLite's JSON support). I'm running with this patch on my own data now. | healthkit-to-sqlite 197882382 | pull | {"url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/23/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 | ||||||
752966476 | MDU6SXNzdWU3NTI5NjY0NzY= | 1114 | --load-extension=spatialite not working with datasetteproject/datasette docker image | danp 2182 | closed | 0 | 4 | 2020-11-29T17:35:20Z | 2022-01-20T21:29:42Z | 2020-11-29T17:37:45Z | CONTRIBUTOR | https://github.com/simonw/datasette/commit/6aa5886379dd9017215904fb28567b80018902f9 added the `--load-extension=spatialite` shortcut looking for the extension in these places: https://github.com/simonw/datasette/blob/12877d7a48e2aa28bb5e780f929a218f7265d849/datasette/utils/__init__.py#L56-L60 However, in the datasetteproject/datasette docker image the file is at `/usr/local/lib/mod_spatialite.so`. This results in the example command [here](https://docs.datasette.io/en/stable/installation.html#loading-spatialite) failing: ``` % docker run --rm -p 8001:8001 -v `pwd`:/mnt datasetteproject/datasette datasette -p 8001 -h 0.0.0.0 /mnt/data.db --load-extension=spatialite Error: Could not find SpatiaLite extension ``` But it does work when given an explicit path: ``` % docker run --rm -p 8001:8001 -v `pwd`:/mnt datasetteproject/datasette datasette -p 8001 -h 0.0.0.0 /mnt/data.db --load-extension=/usr/local/lib/mod_spatialite.so INFO: Started server process [1] INFO: Waiting for application startup. INFO: Application startup complete. INFO: Uvicorn running on http://0.0.0.0:8001 (Press CTRL+C to quit) ... ``` Perhaps `SPATIALITE_PATHS` should include `/usr/local/lib/mod_spatialite.so`? | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/1114/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
1160677684 | PR_kwDOBm6k_c40AW_v | 1649 | Add /opt/homebrew to where spatialite extension can be found | danp 2182 | closed | 0 | 1 | 2022-03-06T18:09:35Z | 2022-03-06T22:46:00Z | 2022-03-06T19:39:15Z | CONTRIBUTOR | simonw/datasette/pulls/1649 | Helps homebrew on Apple Silicon setups find spatialite without needing a full path. Similar to #1114 | datasette 107914493 | pull | {"url": "https://api.github.com/repos/simonw/datasette/issues/1649/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 | |||||
1385026210 | I_kwDOBm6k_c5SjdKi | 1819 | Preserve query on timeout | danp 2182 | closed | 0 | 3 | 2022-09-25T13:32:31Z | 2022-09-26T23:16:15Z | 2022-09-26T23:06:06Z | CONTRIBUTOR | If a query hits the timeout it shows a message like: > SQL query took too long. The time limit is controlled by the [sql_time_limit_ms](https://docs.datasette.io/en/stable/settings.html#sql-time-limit-ms) configuration option. But the query is lost. Hitting the browser back button shows the query _before_ the one that errored. It would be nice if the query that errored was preserved for more tweaking. This would make it similar to how "invalid syntax" works since #1346 / #619. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/1819/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
1345452427 | I_kwDODLZ_YM5QMfmL | 11 | -a option is used for "--auth" and for "--all" | fernand0 2467 | closed | 0 | 3 | 2022-08-21T10:50:48Z | 2022-08-21T21:11:57Z | 2022-08-21T21:11:57Z | NONE | I'm not sure which option is best to use instead of `-a` for `--all`. | pocket-to-sqlite 213286752 | issue | {"url": "https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/11/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
1353411865 | I_kwDODEpn8M5Qq20Z | 1 | Problem with my user | fernand0 2467 | open | 0 | 0 | 2022-08-28T16:59:37Z | 2022-08-28T16:59:37Z | NONE | If I call the program with: inaturalist-to-sqlite inaturalist.db ftricas the program exits with an error: `Importing 36 observations Traceback (most recent call last): File "/home/ftricas/.pyenv/versions/3.10.6/bin/inaturalist-to-sqlite", line 8, in <module> sys.exit(cli()) File "/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/click/core.py", line 1130, in __call__ return self.main(*args, **kwargs) File "/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/click/core.py", line 1055, in main rv = self.invoke(ctx) File "/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/click/core.py", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File "/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/click/core.py", line 760, in invoke return __callback(*args, **kwargs) File "/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/inaturalist_to_sqlite/cli.py", line 51, in cli save_observation(observation, db) File "/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/inaturalist_to_sqlite/utils.py", line 34, in save_observation db["observations"] File "/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/sqlite_utils/db.py", line 2965, in insert return self.insert_all( File "/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/sqlite_utils/db.py", line 3068, in insert_all self.create( File "/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/sqlite_utils/db.py", line 1564, in create self.db.create_table( File "/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/sqlite_utils/db.py", line 951, in create_table sql = self.create_table_sql( File "/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/sqlite_utils/db.py", line 765, in create_table_sql foreign_keys = self.resolve_foreign_keys(name, foreign_keys or []) File "/home/ftricas/.pyenv/versions/3.10.6/… | inaturalist-to-sqlite 206202864 | issue | {"url": "https://api.github.com/repos/dogsheep/inaturalist-to-sqlite/issues/1/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | ||||||||
1353418822 | PR_kwDODtX3eM497MOV | 5 | The program fails when the user has no submissions | fernand0 2467 | open | 0 | 0 | 2022-08-28T17:25:45Z | 2022-08-28T17:25:45Z | FIRST_TIME_CONTRIBUTOR | dogsheep/hacker-news-to-sqlite/pulls/5 | Tested with: hacker-news-to-sqlite user hacker-news.db fernand0 Result: ` Traceback (most recent call last): File "/home/ftricas/.pyenv/versions/3.10.6/bin/hacker-news-to-sqlite", line 8, in <module> sys.exit(cli()) File "/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/click/core.py", line 1130, in __call__ return self.main(*args, **kwargs) File "/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/click/core.py", line 1055, in main rv = self.invoke(ctx) File "/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/click/core.py", line 1657, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/click/core.py", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File "/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/click/core.py", line 760, in invoke return __callback(*args, **kwargs) File "/home/ftricas/.pyenv/versions/3.10.6/lib/python3.10/site-packages/hacker_news_to_sqlite/cli.py", line 27, in user submitted = user.pop("submitted", None) or [] AttributeError: 'NoneType' object has no attribute 'pop' ` There is a style problem with the patch (but I'm not sure what to do about it): with the new initialization (`submitted = []`) the `or []` part is no longer needed. Maybe there is a better way of doing this. | hacker-news-to-sqlite 248903544 | pull | {"url": "https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/5/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 | ||||||
1078702875 | I_kwDOBm6k_c5AS7Mb | 1552 | Allow to set `facets_array` in metadata (like current `facets`) | davidbgk 3556 | closed | 0 | Datasette 0.60 7571612 | 9 | 2021-12-13T16:00:44Z | 2022-01-13T22:26:15Z | 2021-12-16T18:47:48Z | CONTRIBUTOR | For now, you can set a `facets` value (array) in your metadata file, but I couldn't find a way to set a `facets_array` in order to provide default facets for arrays (like tags). My use-case is to access [that kind of view](https://latest.datasette.io/fixtures/facetable?_facet_array=tags) by default, without URL parameters, as with other default facets. _I'm new to datasette, and I'm willing to help with a PR if that is not already implemented and I missed it!_ | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/1552/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
1098275181 | PR_kwDOBm6k_c4wwNCl | 1589 | Typo in docs about default redirect status code | davidbgk 3556 | closed | 0 | 1 | 2022-01-10T19:14:36Z | 2022-03-06T02:27:49Z | 2022-03-06T01:58:32Z | CONTRIBUTOR | simonw/datasette/pulls/1589 | datasette 107914493 | pull | {"url": "https://api.github.com/repos/simonw/datasette/issues/1589/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 | ||||||
1473659191 | I_kwDOBm6k_c5X1kE3 | 1929 | Incorrect link from the API explorer to the JSON API documentation | davidbgk 3556 | closed | 0 | 4 | 2022-12-03T02:08:58Z | 2022-12-06T19:36:23Z | 2022-12-06T19:34:20Z | CONTRIBUTOR | I installed `datasette==1.0a1`. When I go to http://127.0.0.1:8001/-/api I have a link: `Use this tool to try out the [Datasette API](https://docs.datasette.io/en/1.0a1/json_api.html).` but that documentation page does not exist. I'm not sure where it has to be fixed, should it link to the stable page https://docs.datasette.io/en/stable/json_api.html , the latest one https://docs.datasette.io/en/latest/json_api.html#the-json-write-api or would it be more appropriated to deploy documentation for the `1.0a1` version? | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/1929/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
1473664029 | PR_kwDOBm6k_c5ELz0u | 1930 | Typo in JSON API `Updating a row` documentation | davidbgk 3556 | closed | 0 | 2 | 2022-12-03T02:22:31Z | 2022-12-08T21:12:35Z | 2022-12-08T21:12:35Z | CONTRIBUTOR | simonw/datasette/pulls/1930 | <!-- readthedocs-preview datasette start --> ---- :books: Documentation preview :books:: https://datasette--1930.org.readthedocs.build/en/1930/ <!-- readthedocs-preview datasette end --> | datasette 107914493 | pull | {"url": "https://api.github.com/repos/simonw/datasette/issues/1930/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 | |||||
777677671 | MDU6SXNzdWU3Nzc2Nzc2NzE= | 1169 | Prettier package not actually being cached | benpickles 3637 | closed | 0 | 4 | 2021-01-03T17:04:41Z | 2021-01-04T19:52:34Z | 2021-01-04T19:52:33Z | CONTRIBUTOR | With the current configuration Prettier seems to be installed on every run - which can be [seen from the output](https://github.com/simonw/datasette/runs/1631686028?check_suite_focus=true#step:4:4): ``` npx: installed 1 in 5.166s ``` Prettier isn't explicitly being installed (it's surprising that actually installing the dependencies isn't included in the [actions/cache docs](https://github.com/actions/cache/blob/main/examples.md#macos-and-ubuntu)) but it turns out that `npx` will automatically install the package for the specified command (it actually _guesses_ the package name from the name of the command). I'm not sure where Prettier ends up being installed but it doesn't appear to be in `~/.npm` according to the [post-cache output](https://github.com/simonw/datasette/runs/1631686028#step:7:2) (or `./node_modules` when I tested locally): ``` Cache hit occurred on the primary key Linux-npm-565329898f77080e58b14d45cf816ab94877e6f2ece9d395c369c533548a7ee7, not saving cache. ``` I think there are a couple of approaches to tackling this: you could manually install/cache Prettier within the action, or add a `package.json` with Prettier. I would go with the latter because it's a more standard and maintainable approach and it will also ensure that, along with CI, anyone working on the project will run the same version of Prettier (you'll also get Dependabot JavaScript updates). I've tested the [`package.json` approach on a branch](https://github.com/simonw/datasette/compare/main...benpickles:cache-prettier) and am happy to turn it into a pull request if you fancy. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/1169/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
778126516 | MDExOlB1bGxSZXF1ZXN0NTQ4MjcxNDcy | 1170 | Install Prettier via package.json | benpickles 3637 | closed | 0 | Datasette 0.54 6346396 | 3 | 2021-01-04T14:18:03Z | 2021-01-24T21:21:01Z | 2021-01-04T19:52:34Z | CONTRIBUTOR | simonw/datasette/pulls/1170 | This adds a package.json with Prettier and means that developers/CI will use the same version. It also ensures that NPM packages are cached on GitHub Actions which fixes #1169. | datasette 107914493 | pull | {"url": "https://api.github.com/repos/simonw/datasette/issues/1170/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 | ||||
1149729902 | PR_kwDOCGYnMM4zbaJy | 410 | Correct spelling mistakes (found with codespell) | EdwardBetts 3818 | closed | 0 | 1 | 2022-02-24T20:44:18Z | 2022-03-06T08:48:29Z | 2022-03-01T21:05:29Z | CONTRIBUTOR | simonw/sqlite-utils/pulls/410 | sqlite-utils 140912432 | pull | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/410/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 | ||||||
919250621 | MDU6SXNzdWU5MTkyNTA2MjE= | 269 | bool type not supported | frafra 4068 | closed | 0 | 3 | 2021-06-11T22:00:36Z | 2021-06-15T01:34:10Z | 2021-06-15T01:34:10Z | NONE | Hi! Thank you for sharing this very nice tool :) It would be nice to have support for more types, like `bool`: it is not possible to convert to boolean at the moment. My suggestion would be to handle it as `bool(int(value))`, like csvkit does. | sqlite-utils 140912432 | issue | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/269/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
919314806 | MDU6SXNzdWU5MTkzMTQ4MDY= | 270 | Cannot set type JSON | frafra 4068 | closed | 0 | 4 | 2021-06-11T23:53:22Z | 2021-06-16T17:34:49Z | 2021-06-16T15:47:06Z | NONE | It would be great if the column type could be set to JSON. That would not be different from handling a regular string. It would be something like `repr(value)` and it would work with both JSON and CSV inputs, no matter if `value` is a real list or just a string representing a list. | sqlite-utils 140912432 | issue | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
919508498 | MDU6SXNzdWU5MTk1MDg0OTg= | 1375 | JSON export dumps JSON fields as TEXT | frafra 4068 | closed | 0 | 2 | 2021-06-12T09:45:08Z | 2021-06-14T09:41:59Z | 2021-06-13T15:37:58Z | NONE | Hi! When a user tries to export data as JSON, I would expect to see the value of JSON columns represented as JSON instead of being rendered as a string. What do you think? | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/1375/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
1250161887 | I_kwDOCGYnMM5Kg_Tf | 438 | illegal UTF-16 surrogate | frafra 4068 | closed | 0 | 2 | 2022-05-26T22:49:52Z | 2022-05-27T08:21:53Z | 2022-05-27T08:21:53Z | NONE | I am trying to insert `https://artsdatabanken.no/Fab2018/api/export/csv` into a SQLite database, but I have an error when using `sqlite-utils`: ``` sqlite-utils insert --csv --delimiter ";" --encoding="utf-16-le" --pk "Id" csv fremmedart test.db [------------------------------------] 0% Error: 'utf-16-le' codec can't decode bytes in position 98-99: illegal UTF-16 surrogate The input you provided uses a character encoding other than utf-8. You can fix this by passing the --encoding= option with the encoding of the file. If you do not know the encoding, running 'file filename.csv' may tell you. It's often worth trying: --encoding=latin-1 ``` I tried to convert the file using `iconv -f "utf-16le" -t "utf-8"`, but I still get a similar error (slightly different position): ``` sqlite-utils insert --csv --delimiter ";" --encoding=utf-8 --pk "Id" csv_utf8 fremmedart test.db [------------------------------------] 0% Error: 'utf-8' codec can't decode byte 0xd9 in position 99: invalid continuation byte The input you provided uses a character encoding other than utf-8. You can fix this by passing the --encoding= option with the encoding of the file. If you do not know the encoding, running 'file filename.csv' may tell you. It's often worth trying: --encoding=latin-1 ``` I have no issues reading such a file using this Python code: ```python content = open('csv', encoding='utf-16-le').read() ``` `in2csv` works too. | sqlite-utils 140912432 | issue | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/438/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
1250495688 | I_kwDOCGYnMM5KiQzI | 439 | Misleading progress bar against utf-16-le CSV input | frafra 4068 | open | 0 | 12 | 2022-05-27T08:34:49Z | 2022-06-15T03:53:43Z | NONE | The program crashes without any error. ``` wget "https://artsdatabanken.no/Fab2018/api/export/csv" sqlite-utils create-database test.db sqlite-utils insert --csv --delimiter ";" --encoding "utf-16-le" test test.db csv [------------------------------------] 0% [#################-------------------] 49% 00:00:01 ``` I would like to highlight various issues: 1. sqlite-utils catches exceptions without printing the stacktrace and/or reraising the exception, so there is no easy way to use `pdb` or similar to debug the program, solution: add a debug option 2. Silent crash: this is related to (1.), and it happens when there is a catch-all mechanism; solution: let the program fail. | sqlite-utils 140912432 | issue | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/439/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | ||||||||
1250629388 | I_kwDOCGYnMM5KixcM | 440 | CSV files with too many values in a row cause errors | frafra 4068 | closed | 0 | 20 | 2022-05-27T10:54:44Z | 2022-06-14T22:23:01Z | 2022-06-14T20:12:46Z | NONE | *Original title: csv.DictReader can have None as key* In some cases, `csv.DictReader` can have `None` as key for unnamed columns, and a list of values as value. `sqlite_utils.utils.rows_from_file` cannot handle that: ```python url="https://artsdatabanken.no/Fab2018/api/export/csv" db = sqlite_utils.Database(":memory") with urlopen(url) as fab: reader, _ = sqlite_utils.utils.rows_from_file(fab, encoding="utf-16le") db["fab2018"].insert_all(reader, pk="Id") ``` Result: ``` Traceback (most recent call last): File "<stdin>", line 3, in <module> File "/home/user/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/sqlite_utils/db.py", line 2924, in insert_all chunk = list(chunk) File "/home/user/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/sqlite_utils/db.py", line 3454, in fix_square_braces if any("[" in key or "]" in key for key in record.keys()): File "/home/user/.local/pipx/venvs/sqlite-utils/lib/python3.8/site-packages/sqlite_utils/db.py", line 3454, in <genexpr> if any("[" in key or "]" in key for key in record.keys()): TypeError: argument of type 'NoneType' is not iterable ``` Code: https://github.com/simonw/sqlite-utils/blob/59be60c471fd7a2c4be7f75e8911163e618ff5ca/sqlite_utils/db.py#L3454 `sqlite-utils insert` from command line is not affected by this issue. | sqlite-utils 140912432 | issue | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/440/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
1378495690 | I_kwDOBm6k_c5SKizK | 1814 | Static files not served | frafra 4068 | closed | 0 | 2 | 2022-09-19T20:38:17Z | 2022-09-19T23:35:06Z | 2022-09-19T23:34:30Z | NONE | Folder structure: ``` bibliography/ bibliography/static-files bibliography/static-files/styles.css bibliography/bibliography.db bibliography/metadata.json bibliography/settings.json ``` ``` $ cat bibliography/settings.json { "suggest_facets": false, "truncate_cells_html": 1000, "static": "assets:static-files/" } ``` File `/assets/styles.css` is not found (HTTP 404, `Database not found: assets`). Using datasette revision d0737e4de51ce178e556fc011ccb8cc46bbb6359. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/1814/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
830283447 | MDU6SXNzdWU4MzAyODM0NDc= | 34 | bucket name | dsisnero 6213 | open | 0 | 0 | 2021-03-12T16:40:57Z | 2021-03-12T16:40:57Z | NONE | I followed the instructions to set up credentials but I am getting an invalid bucket name. Can you put a sample auth.json file in the base that shows the correct format for this? Thanks | dogsheep-photos 256834907 | issue | {"url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/34/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | ||||||||
959999095 | MDU6SXNzdWU5NTk5OTkwOTU= | 1421 | "Query parameters" form shows wrong input fields if query contains "03:31" style times | j4mie 6988 | closed | 0 | 11 | 2021-08-04T07:29:04Z | 2021-08-09T03:41:07Z | 2021-08-09T03:33:02Z | NONE | Datasette version `0.58.1`. I'm guessing this is a bug in the code that looks for `:param`-style query parameters.. <img width="543" alt="image" src="https://user-images.githubusercontent.com/6988/128139832-ef9c5291-f3d7-4402-8625-b45d26b4e5bc.png"> | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/1421/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
1586980089 | PR_kwDOBm6k_c5KF-by | 2026 | Avoid repeating primary key columns if included in _col args | runderwood 8513 | open | 0 | 0 | 2023-02-16T04:16:25Z | 2023-02-16T04:16:41Z | FIRST_TIME_CONTRIBUTOR | simonw/datasette/pulls/2026 | ...while maintaining given order. Fixes #1975 (if I'm understanding correctly). <!-- readthedocs-preview datasette start --> ---- :books: Documentation preview :books:: https://datasette--2026.org.readthedocs.build/en/2026/ <!-- readthedocs-preview datasette end --> | datasette 107914493 | pull | {"url": "https://api.github.com/repos/simonw/datasette/issues/2026/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 | ||||||
750141615 | MDExOlB1bGxSZXF1ZXN0NTI2ODQ3ODIz | 7 | Fixed conflicting CLI flags | tlockney 8944 | closed | 0 | 1 | 2020-11-24T23:25:12Z | 2022-08-21T21:11:56Z | 2022-08-21T21:11:56Z | CONTRIBUTOR | dogsheep/pocket-to-sqlite/pulls/7 | The `-a` used for the auth credentials and the shortened form of the `--all` flags were in conflict on the `fetch` command. To be consistent with other `-to-sqlite` libraries in the Dogsheep ecosystem, I removed the shortened form of the `--all` flag. | pocket-to-sqlite 213286752 | pull | {"url": "https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/7/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 | |||||
267513424 | MDU6SXNzdWUyNjc1MTM0MjQ= | 1 | Addressable pages for every row in a table | simonw 9599 | closed | 0 | Ship first public release 2857392 | 6 | 2017-10-23T00:44:16Z | 2017-10-24T14:11:04Z | 2017-10-24T14:11:03Z | OWNER | /database-name-7sha256/table-name/compound-pk /database-name-7sha256/table-name/compound-pk.json The tricky part will be figuring out what the primary key is - especially since it could be a compound primary key and it might involve different data types. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/1/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
267513523 | MDU6SXNzdWUyNjc1MTM1MjM= | 2 | Initial proof-of-concept | simonw 9599 | closed | 0 | Ship first public release 2857392 | 0 | 2017-10-23T00:45:37Z | 2017-10-23T01:26:39Z | 2017-10-23T00:45:53Z | OWNER | Implemented in https://github.com/simonw/stateless-datasets/commit/de04d7a854d71003ffcf98028eab976a936c2dba | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/2/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
267515678 | MDU6SXNzdWUyNjc1MTU2Nzg= | 3 | Make individual column values addressable, with smart content types | simonw 9599 | open | 0 | 1 | 2017-10-23T01:11:32Z | 2017-12-10T03:11:58Z | OWNER | Some SQLite databases embed images in columns. It would be cool if these had URLs. /database-name-7sha256/table-name/compound-pk/column /database-name-7sha256/table-name/compound-pk/column.json /database-name-7sha256/table-name/compound-pk/column.png /database-name-7sha256/table-name/compound-pk/column.gif /database-name-7sha256/table-name/compound-pk/column.txt The one without an explicit file extension auto-detects the correct extension. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/3/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | ||||||||
267515836 | MDU6SXNzdWUyNjc1MTU4MzY= | 4 | Make URLs immutable | simonw 9599 | closed | 0 | Ship first public release 2857392 | 8 | 2017-10-23T01:13:30Z | 2017-10-24T02:38:24Z | 2017-10-24T02:38:24Z | OWNER | Absolutely everything should have a far-future expires header Part of the URL will be the truncated sha1 hash of the database file itself, calculated at build time | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/4/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
267516066 | MDU6SXNzdWUyNjc1MTYwNjY= | 5 | Implement sensible query pagination | simonw 9599 | closed | 0 | Ship first public release 2857392 | 3 | 2017-10-23T01:16:00Z | 2017-11-10T20:41:39Z | 2017-11-10T20:41:39Z | OWNER | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/5/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
267516329 | MDU6SXNzdWUyNjc1MTYzMjk= | 6 | Better JSON response options | simonw 9599 | closed | 0 | Ship first public release 2857392 | 0 | 2017-10-23T01:18:47Z | 2017-10-24T15:07:58Z | 2017-10-24T15:07:58Z | OWNER | Default returns this: { "Columns": ["id", "name", "age"], "Rows": [ [45, "Simon", 36] ] } .jsono instead returns a list of objects each duplicating the headers in its keys. They both probably share the same pagination mechanism so it might not be a jsono flat list. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/6/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
267516650 | MDU6SXNzdWUyNjc1MTY2NTA= | 7 | Framework where by every page is JSON plus a template | simonw 9599 | closed | 0 | Ship first public release 2857392 | 1 | 2017-10-23T01:22:03Z | 2017-10-24T02:27:25Z | 2017-10-24T02:27:25Z | OWNER | Every single page of my interface should be implemented as a function that returns JSON. I can then build my jinja templates on top of the exact data that would be returned by the API version. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/7/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
267517314 | MDU6SXNzdWUyNjc1MTczMTQ= | 8 | Attempting an INSERT or UPDATE should return a sane error message | simonw 9599 | closed | 0 | Ship first public release 2857392 | 1 | 2017-10-23T01:28:25Z | 2017-10-23T15:28:12Z | 2017-10-23T15:28:08Z | OWNER | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/8/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
267517348 | MDU6SXNzdWUyNjc1MTczNDg= | 9 | Initial test suite | simonw 9599 | closed | 0 | Ship first public release 2857392 | 2 | 2017-10-23T01:28:46Z | 2017-10-24T05:55:33Z | 2017-10-24T05:55:33Z | OWNER | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/9/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
267517381 | MDU6SXNzdWUyNjc1MTczODE= | 10 | Set up Travis | simonw 9599 | closed | 0 | v1 stretch goals 2859414 | 1 | 2017-10-23T01:29:07Z | 2017-11-04T23:48:57Z | 2017-11-04T23:48:57Z | OWNER | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/10/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
267522549 | MDU6SXNzdWUyNjc1MjI1NDk= | 11 | Code that generates compile-time properties about the database | simonw 9599 | closed | 0 | Ship first public release 2857392 | 1 | 2017-10-23T02:18:24Z | 2017-10-23T16:04:23Z | 2017-10-23T16:04:23Z | OWNER | At a minimum this will include: * sha hash of each database file * list of tables with row counts for each database file | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/11/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
267523511 | MDU6SXNzdWUyNjc1MjM1MTE= | 12 | Make it so you can override templates | simonw 9599 | closed | 0 | Custom templates edition 2949431 | 1 | 2017-10-23T02:25:35Z | 2017-11-30T16:42:46Z | 2017-11-30T16:38:34Z | OWNER | The app will ship with default templates but, just like with the Django admin, you will be able to override them using either explicit configuration settings or just by dropping in templates with certain file names. Template inheritance should work here, both allowing you to override just the base template and allowing you to customize tiny bits of others. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/12/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
267542338 | MDU6SXNzdWUyNjc1NDIzMzg= | 13 | Add a syntax highlighting SQL editor | simonw 9599 | closed | 0 | 1 | 2017-10-23T05:03:33Z | 2017-11-15T02:04:51Z | 2017-11-15T02:04:51Z | OWNER | https://ace.c9.io/#nav=embedding looks like a good option | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/13/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
267707940 | MDU6SXNzdWUyNjc3MDc5NDA= | 14 | Datasette Plugins | simonw 9599 | closed | 0 | 22 | 2017-10-23T15:15:28Z | 2019-05-13T18:58:20Z | 2019-05-13T18:58:19Z | OWNER | It would be neat if additional functionality could be opted-in to the system in the form of easy-to-add plugins, hosted as separate packages. First example: a Google Analytics plugin, which adds GA tracking code with your tracking ID to the web interface for your dataset. This may be an opportunity to experiment with entry points: http://amir.rachum.com/blog/2017/07/28/python-entry-points/ | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/14/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
267713226 | MDU6SXNzdWUyNjc3MTMyMjY= | 15 | Support multiple databases | simonw 9599 | closed | 0 | Ship first public release 2857392 | 0 | 2017-10-23T15:29:51Z | 2017-10-24T02:01:38Z | 2017-10-24T02:01:38Z | OWNER | I'm going to loop through every database file in the app root directory and bundle all of them. Each one will be accessible at /databasename Note this is without the file extension, and we will disallow multiple files with the same name but different extensions. Supported extensions to start with will be `.db` and `.sqlite` and `.sqlite3` | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/15/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
267726219 | MDU6SXNzdWUyNjc3MjYyMTk= | 16 | Default HTML/CSS needs to look reasonable and be responsive | simonw 9599 | closed | 0 | Ship first public release 2857392 | 6 | 2017-10-23T16:05:22Z | 2017-11-11T20:19:07Z | 2017-11-11T20:19:07Z | OWNER | Version one should have the following characteristics: - Looks OK - Works great on mobile - Loads extremely fast - No JavaScript! At least not in v1. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/16/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
267732005 | MDU6SXNzdWUyNjc3MzIwMDU= | 17 | In development mode, should still pick up new .db files | simonw 9599 | closed | 0 | Ship first public release 2857392 | 1 | 2017-10-23T16:22:40Z | 2017-10-24T02:26:48Z | 2017-10-24T02:26:47Z | OWNER | Follow on from #11 | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/17/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
267739593 | MDU6SXNzdWUyNjc3Mzk1OTM= | 18 | See if I can get a websockets interface working | simonw 9599 | closed | 0 | 1 | 2017-10-23T16:46:41Z | 2021-01-04T20:05:52Z | 2021-01-04T20:05:48Z | OWNER | Since I am already running on Sanic, how hard would it be to add a websocket endpoint that lets you talk to sqlite interactively? Could this be used to efficiently support streaming in answers to giant queries? | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/18/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
267741262 | MDU6SXNzdWUyNjc3NDEyNjI= | 19 | Efficient URL for downloading the raw database file | simonw 9599 | closed | 0 | Ship first public release 2857392 | 1 | 2017-10-23T16:52:17Z | 2017-10-25T15:21:16Z | 2017-10-25T15:19:37Z | OWNER | Use Sanic support for streaming large files http://sanic.readthedocs.io/en/latest/sanic/response.html#file-streaming | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/19/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
267759136 | MDU6SXNzdWUyNjc3NTkxMzY= | 20 | Config file with support for defining canned queries | simonw 9599 | closed | 0 | simonw 9599 | Custom templates edition 2949431 | 9 | 2017-10-23T17:53:06Z | 2017-12-05T19:05:35Z | 2017-12-05T17:44:09Z | OWNER | Probably using YAML because then we get support for multiline strings: bats: db: bats.sqlite3 name: "Bat sightings" queries: specific_row: | select * from Bats where a = 1; | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/20/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||
267769034 | MDU6SXNzdWUyNjc3NjkwMzQ= | 21 | Use Sanic configuration mechanism | simonw 9599 | closed | 0 | v1 stretch goals 2859414 | 1 | 2017-10-23T18:25:14Z | 2017-11-10T20:45:42Z | 2017-11-10T20:45:42Z | OWNER | http://sanic.readthedocs.io/en/latest/sanic/config.html | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/21/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
267769431 | MDU6SXNzdWUyNjc3Njk0MzE= | 22 | Refactor to use class based views | simonw 9599 | closed | 0 | Ship first public release 2857392 | 0 | 2017-10-23T18:26:22Z | 2019-05-27T20:05:56Z | 2017-10-24T02:25:53Z | OWNER | http://sanic.readthedocs.io/en/latest/sanic/class_based_views.html | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/22/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
267788884 | MDU6SXNzdWUyNjc3ODg4ODQ= | 23 | Support Django-style filters in querystring arguments | simonw 9599 | closed | 0 | Ship first public release 2857392 | 6 | 2017-10-23T19:29:42Z | 2017-10-25T04:23:03Z | 2017-10-25T04:23:02Z | OWNER | e.g /database/table?name__contains=Simon&age__gte=4 Same format as Django: double underscore as the split. If you need to match against a column that happens to contain a double underscore in its official name, do this: /database/table?weird__column__exact=Simon __exact is the default operation if none is supplied. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/23/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
267828746 | MDU6SXNzdWUyNjc4Mjg3NDY= | 24 | Implement full URL design | simonw 9599 | closed | 0 | Ship first public release 2857392 | 2 | 2017-10-23T21:49:05Z | 2017-10-24T14:12:00Z | 2017-10-24T14:12:00Z | OWNER | Full URL design: /database-name /database-name.json /database-name-7sha256 /database-name-7sha256.json /database-name/table-name /database-name/table-name.json /database-name-7sha256/table-name /database-name-7sha256/table-name.json /database-name-7sha256/table-name/compound-pk /database-name-7sha256/table-name/compound-pk.json | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/24/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
267857622 | MDU6SXNzdWUyNjc4NTc2MjI= | 25 | Endpoint that returns SQL ready to be piped into DB | simonw 9599 | closed | 0 | 2 | 2017-10-24T00:19:26Z | 2017-11-15T05:11:12Z | 2017-11-15T05:11:11Z | OWNER | It would be cool if I could figure out a way to generate both the create table statements and the inserts for an individual table or the entire database and then stream them down to the client. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/25/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
267861210 | MDU6SXNzdWUyNjc4NjEyMTA= | 26 | Command line tool for uploading one or more DBs to Now | simonw 9599 | closed | 0 | Ship first public release 2857392 | 3 | 2017-10-24T00:43:10Z | 2017-11-11T07:25:30Z | 2017-11-11T07:25:30Z | OWNER | Uploading files appears to be undocumented, but I found it in their code here: https://github.com/zeit/now-cli/blob/0ca7d1fe44ebdf460b64fdc38ba543b8e295ac40/src/providers/sh/util/index.js#L291 | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/26/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
267886330 | MDU6SXNzdWUyNjc4ODYzMzA= | 27 | Ability to plot a simple graph | simonw 9599 | closed | 0 | 3 | 2017-10-24T03:34:59Z | 2018-07-10T17:52:41Z | 2018-07-10T17:52:41Z | OWNER | Might be as simple as: pick the type of chart (bar, line) and then pick the column for the X axis and the column for the Y axis. Maybe also allow a pie chart. It’s up to the user to come up with SQL that gets the right values. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/27/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
267886865 | MDU6SXNzdWUyNjc4ODY4NjU= | 28 | /database?sql= should redirect correctly | simonw 9599 | closed | 0 | Ship first public release 2857392 | 0 | 2017-10-24T03:38:44Z | 2017-10-24T23:54:30Z | 2017-10-24T23:54:30Z | OWNER | Needs to redirect to the location with the hash while retaining the query string. This should also work with the .json extension. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/28/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
268050821 | MDU6SXNzdWUyNjgwNTA4MjE= | 29 | Handle bytestring records encoding to JSON | simonw 9599 | closed | 0 | Ship first public release 2857392 | 1 | 2017-10-24T14:18:45Z | 2017-10-24T14:59:00Z | 2017-10-24T14:58:47Z | OWNER | http://localhost:8006/northwind-40d049b/Categories.json 500s right now The string representation of one of the values looks like this: b"\x15\x1c/\x00\x02\x00 This is a bytestring from the database which cannot be naively converted to a unicode string. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/29/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
268078453 | MDU6SXNzdWUyNjgwNzg0NTM= | 30 | Do something neat with foreign keys | simonw 9599 | closed | 0 | 1 | 2017-10-24T15:29:29Z | 2017-11-14T18:29:08Z | 2017-11-14T18:29:01Z | OWNER | https://www.sqlite.org/pragma.html#pragma_foreign_key_list SQLite has robust support for introspecting foreign keys. I could use that to automatically link to the corresponding record from my tables. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/30/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
268087542 | MDU6SXNzdWUyNjgwODc1NDI= | 31 | Idea: colour scheme based on sha256 of db | simonw 9599 | closed | 0 | v1 stretch goals 2859414 | 1 | 2017-10-24T15:52:38Z | 2018-05-28T18:10:45Z | 2017-11-09T14:14:59Z | OWNER | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/31/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
268106803 | MDU6SXNzdWUyNjgxMDY4MDM= | 32 | Try running SQLite queries in a separate thread | simonw 9599 | closed | 0 | v1 stretch goals 2859414 | 1 | 2017-10-24T16:48:42Z | 2017-11-09T14:05:56Z | 2017-11-09T14:05:56Z | OWNER | https://pymotw.com/3/asyncio/executors.html Would be good to have some actual benchmarks so I can evaluate if this is worth it or not. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/32/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
268110769 | MDU6SXNzdWUyNjgxMTA3Njk= | 33 | Use locust for benchmarking and load tests | simonw 9599 | open | 0 | 0 | 2017-10-24T17:00:09Z | 2017-12-10T03:12:16Z | OWNER | https://github.com/locustio/locust Needed for #32 | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/33/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | ||||||||
268176505 | MDU6SXNzdWUyNjgxNzY1MDU= | 34 | Support CSV export with a .csv extension | simonw 9599 | closed | 0 | 1 | 2017-10-24T20:34:43Z | 2021-06-17T18:14:48Z | 2018-05-28T20:45:34Z | OWNER | Maybe do this using streaming with multiple pagination SQL queries so we can support arbitrarily large exports. How would this work against a view which doesn’t have an obvious efficient pagination mechanism? Maybe limit views to up to 1000 exported records? Relates to #5 | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/34/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
268262480 | MDU6SXNzdWUyNjgyNjI0ODA= | 36 | date, year, month and day querystring lookups | simonw 9599 | closed | 0 | 3 | 2017-10-25T04:23:45Z | 2018-05-28T17:30:53Z | 2018-05-28T17:30:53Z | OWNER | - [ ] `?timestamp___date=2017-07-17` - return every item where the timestamp falls on that date - [ ] `?timestamp___year=2017` - return every item where the timestamp falls within 2017 - [ ] `?timestamp___month=1` - return every item where the month component is January - [ ] `?timestamp___day=10` - return every item where the day-of-the-month component is 10 Follow on from #23 | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/36/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
268453968 | MDU6SXNzdWUyNjg0NTM5Njg= | 37 | Ability to serialize massive JSON without blocking event loop | simonw 9599 | closed | 0 | 2 | 2017-10-25T15:58:03Z | 2020-05-30T17:29:20Z | 2020-05-30T17:29:20Z | OWNER | We run the risk of someone attempting a select statement that returns thousands of rows and hence takes several seconds just to JSON encode the response, effectively blocking the event loop and pausing all other traffic. The Twisted community have a solution for this, can we adapt that in some way? http://as.ynchrono.us/2010/06/asynchronous-json_18.html?m=1 | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/37/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
268462768 | MDU6SXNzdWUyNjg0NjI3Njg= | 38 | Experiment with patterns for concurrent long running queries | simonw 9599 | closed | 0 | 5 | 2017-10-25T16:23:42Z | 2018-05-28T20:47:31Z | 2018-05-28T20:47:31Z | OWNER | I want to understand how the system could perform under load with many concurrent long-running queries. Can we serve these without blocking the event loop? | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/38/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
268469569 | MDU6SXNzdWUyNjg0Njk1Njk= | 39 | Protect against malicious SQL that causes damage even though our DB is immutable | simonw 9599 | closed | 0 | Ship first public release 2857392 | 4 | 2017-10-25T16:44:27Z | 2021-08-17T23:52:07Z | 2017-11-05T02:53:47Z | OWNER | I’m currently operating under the assumption that it’s safe to allow arbitrary SQL statements because we are dealing with an immutable database. But this might not be the case - there are some pretty weird SQLite language extensions (ATTACH, PRAGMA etc) and I’m not certain they cannot be used to break things in a way that would affect future requests to the API. Solution: provide a “safe mode” option which disables the ?sql= mechanism. This still leaves the URL filter lookups, so I need to make sure that those are “safe”. In the future I may also implement a whitelist option where datasets can be configured to only allow specific filters against specific columns. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/39/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
268470572 | MDU6SXNzdWUyNjg0NzA1NzI= | 40 | Implement command-line tool interface | simonw 9599 | closed | 0 | Ship first public release 2857392 | 11 | 2017-10-25T16:47:15Z | 2017-11-11T07:27:33Z | 2017-11-11T07:27:33Z | OWNER | The first version needs to take one or more file names or URLs, then generate and deploy an app to Now. It will assume you already have the now command installed and configured. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/40/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
268590777 | MDU6SXNzdWUyNjg1OTA3Nzc= | 41 | Homepage should show summary of databases | simonw 9599 | closed | 0 | Ship first public release 2857392 | 1 | 2017-10-26T00:18:11Z | 2017-10-27T04:05:35Z | 2017-10-27T04:05:35Z | OWNER | Each database should have a name, optional description, download link and a summary of the tables Flights.db Flights and suchlike blah. URL? License? 577373 rows across 14 tables airports, routes, airlines... Title of the homepage is derived from the databases or can be manually overridden e.g. “Datasets of Flights, NHS, Blah...” - or if only one database just the title of that. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/41/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
268591332 | MDU6SXNzdWUyNjg1OTEzMzI= | 42 | Homepage UI for editing metadata file | simonw 9599 | closed | 0 | 4 | 2017-10-26T00:22:03Z | 2017-12-10T03:02:14Z | 2017-12-10T03:02:14Z | OWNER | Since we are going to have a metadata file which sets the title/description/etc for each database, why not allow you to run the app in `--dev` mode, which turns the homepage into a WYSIWYG editor that can save to that file format? | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/42/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
268592894 | MDU6SXNzdWUyNjg1OTI4OTQ= | 43 | While running, server should spot new db files added to its directory | simonw 9599 | closed | 0 | v1 stretch goals 2859414 | 1 | 2017-10-26T00:32:37Z | 2017-11-14T08:25:53Z | 2017-11-14T08:25:37Z | OWNER | Maybe on each request it checks the time and, if 5s has elapsed since it last scanned the directory, it scans it again. This would allow people with dedicated hosting to run the app there and just upload new datasets whenever they want. It would also be very convenient for development. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/43/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
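A minimal sketch of that throttled rescan, with hypothetical names; the interval and the `*.db` glob are assumptions.

```python
import time
from pathlib import Path

RESCAN_INTERVAL = 5  # seconds

class DatabaseDirectory:
    def __init__(self, path):
        self.path = Path(path)
        self.last_scan = 0.0
        self.databases = {}

    def get_databases(self):
        # Called at the start of each request: rescan the directory at
        # most once every RESCAN_INTERVAL seconds to pick up new files.
        now = time.monotonic()
        if now - self.last_scan >= RESCAN_INTERVAL:
            self.databases = {p.stem: p for p in self.path.glob("*.db")}
            self.last_scan = now
        return self.databases
```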
269731374 | MDU6SXNzdWUyNjk3MzEzNzQ= | 44 | ?_group_count=country - return counts by specific column(s) | simonw 9599 | closed | 0 | 7 | 2017-10-30T19:50:32Z | 2018-04-26T15:09:58Z | 2018-04-26T15:09:58Z | OWNER | Imagine if this: https://stateless-datasets-jykibytogk.now.sh/flights-07d1283/airports.jsono?country__contains=gu&_group_count=country Turned into this: https://stateless-datasets-jykibytogk.now.sh/flights-07d1283?sql=select%20country,%20count(*)%20as%20group_count_country%20from%20airports%20where%20country%20like%20%27%gu%%27%20group%20by%20country%20order%20by%20group_count_country%20desc This would involve introducing a new precedent of query string arguments that start with an _ having special meanings. While we're at it, could try adding _fields=x,y,z Tasks: - [x] Get initial version working - [ ] Refactor code to not just "pretend to be a view" - [ ] Get foreign key relationships expanded | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/44/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
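A sketch of that rewrite step, turning `?country__contains=gu&_group_count=country` into the GROUP BY query above. The helper is hypothetical and skips the identifier validation real code would need.

```python
def group_count_sql(table, group_column, contains_filters):
    # contains_filters: {"country": "gu"} -> where [country] like '%gu%'
    where = " and ".join(
        f"[{col}] like :{col}" for col in contains_filters
    ) or "1 = 1"
    params = {col: f"%{value}%" for col, value in contains_filters.items()}
    sql = (
        f"select [{group_column}], count(*) as group_count_{group_column} "
        f"from [{table}] where {where} "
        f"group by [{group_column}] order by group_count_{group_column} desc"
    )
    return sql, params

# e.g. group_count_sql("airports", "country", {"country": "gu"})
```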
271242824 | MDU6SXNzdWUyNzEyNDI4MjQ= | 45 | Run SQLite operations in a thread pool | simonw 9599 | closed | 0 | Ship first public release 2857392 | 0 | 2017-11-05T02:27:12Z | 2017-11-05T02:27:34Z | 2017-11-05T02:27:33Z | OWNER | Let's run SQLite operations in threads, so we don't end up blocking our core event loop. These articles are helpful: * https://pymotw.com/3/asyncio/executors.html * https://marlinux.wordpress.com/2017/05/19/python-3-6-asyncio-sqlalchemy/ | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/45/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
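Following the pattern in the linked articles, a minimal sketch; the pool size and helper names are arbitrary choices, not Datasette's actual code.

```python
import asyncio
import sqlite3
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=3)

def _run_query(db_path, sql, params):
    # Each call opens its own connection: sqlite3 connections are not
    # safe to share across threads by default.
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(sql, params or {}).fetchall()
    finally:
        conn.close()

async def execute(db_path, sql, params=None):
    # The blocking SQLite work happens in the pool; the event loop
    # stays free to handle other requests meanwhile.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(executor, _run_query, db_path, sql, params)
```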
271301468 | MDU6SXNzdWUyNzEzMDE0Njg= | 46 | Dockerfile should build more recent SQLite with FTS5 and spatialite support | simonw 9599 | closed | 0 | 13 | 2017-11-05T18:16:22Z | 2017-11-17T14:32:12Z | 2017-11-17T14:32:12Z | OWNER | The SQLite bundled with Python 3 doesn't support the FTS5 search extension. It would be nice if the SQLite built by our Dockerfile could support as many modern SQLite features as possible. https://web.archive.org/web/20170212034155/http://charlesleifer.com/blog/using-the-sqlite-json1-and-fts5-extensions-with-python/ has instructions on building a more recent SQLite and the pysqlite package. Our Dockerfile could carry out an updated version of this process. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/46/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
271831408 | MDU6SXNzdWUyNzE4MzE0MDg= | 47 | Create neat example database | simonw 9599 | closed | 0 | 5 | 2017-11-07T13:29:38Z | 2017-11-14T03:08:13Z | 2017-11-14T03:08:13Z | OWNER | How about data from open elections eg https://github.com/openelections/openelections-data-ca?files=1 | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/47/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
272391665 | MDU6SXNzdWUyNzIzOTE2NjU= | 48 | Switch to ujson | simonw 9599 | closed | 0 | 4 | 2017-11-08T23:50:29Z | 2019-06-24T06:57:54Z | 2019-06-24T06:57:43Z | OWNER | ujson is already a dependency of Sanic, and should be quite a bit faster. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/48/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
272661336 | MDU6SXNzdWUyNzI2NjEzMzY= | 49 | Pick a name | simonw 9599 | closed | 0 | Ship first public release 2857392 | 4 | 2017-11-09T17:56:17Z | 2017-11-10T18:33:22Z | 2017-11-10T18:33:22Z | OWNER | Options so far: * immutabase * datasite * sqlstatic * dbserve * sqlserve Terms to play with: * immutable * sqlite * dataset * json * static * serve | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/49/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
272694136 | MDU6SXNzdWUyNzI2OTQxMzY= | 50 | Unit tests against application itself | simonw 9599 | closed | 0 | Ship first public release 2857392 | 2 | 2017-11-09T19:31:49Z | 2017-11-11T22:23:22Z | 2017-11-11T22:23:22Z | OWNER | Use Sanic’s testing mechanism. Tests should create a temporary SQLite database file on disk by executing SQL that is stored in the tests themselves. For the moment we can test the JSON API more thoroughly and just sanity-check that the HTML output doesn’t throw any errors. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/50/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
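A sketch of that fixture pattern using pytest (all names illustrative):

```python
import sqlite3

import pytest

TEST_SQL = """
create table [airports] ([id] integer primary key, [name] text);
insert into [airports] ([name]) values ('Heathrow'), ('Gatwick');
"""

@pytest.fixture
def db_path(tmp_path):
    # Build a throwaway SQLite file from SQL embedded in the test module.
    path = tmp_path / "test.db"
    conn = sqlite3.connect(str(path))
    conn.executescript(TEST_SQL)
    conn.close()
    return path
```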
272735257 | MDU6SXNzdWUyNzI3MzUyNTc= | 51 | Make a proper README | simonw 9599 | closed | 0 | Ship first public release 2857392 | 1 | 2017-11-09T21:46:07Z | 2017-11-13T18:44:23Z | 2017-11-13T18:44:23Z | OWNER | Include instructions on building a local Docker container - currently detailed here: https://gist.github.com/simonw/0ea5c960608c2d876e4637a5e48aa95d (those instructions don't work now that we have removed the Dockerfile in favour of a template generated by `datasette publish`) | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/51/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
273026602 | MDU6SXNzdWUyNzMwMjY2MDI= | 52 | Solution for temporarily uploading DB so it can be built by docker | simonw 9599 | closed | 0 | 2 | 2017-11-10T18:55:25Z | 2017-12-10T03:02:57Z | 2017-12-10T03:02:57Z | OWNER | For the `datasette publish` command I ideally need a way of uploading the specified DB to somewhere temporary on the internet so that when the Dockerfile is built by the final hosting location it can download that database as part of the build process. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/52/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
273054652 | MDU6SXNzdWUyNzMwNTQ2NTI= | 53 | Implement a better database index page | simonw 9599 | closed | 0 | Ship first public release 2857392 | 3 | 2017-11-10T20:47:36Z | 2017-11-12T21:19:33Z | 2017-11-12T01:50:27Z | OWNER | This view isn't great. I should do a better job of separating out tables from views and indexes, showing the count of rows in each table, and maybe move the SQL to the individual table pages. <img width="871" alt="flights" src="https://user-images.githubusercontent.com/9599/32423242-1b4458ce-c25a-11e7-910f-2dc1de909b8f.png"> | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/53/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
273121803 | MDU6SXNzdWUyNzMxMjE4MDM= | 54 | Views should not attempt to link to records / use rowids | simonw 9599 | closed | 0 | Ship first public release 2857392 | 1 | 2017-11-11T05:44:54Z | 2017-11-12T21:29:42Z | 2017-11-12T21:29:33Z | OWNER | http://localhost:8001/parlgov-development-25f9855/view_variable <img width="837" alt="parlgov-development__view_variable" src="https://user-images.githubusercontent.com/9599/32686757-5b40f6a8-c660-11e7-88de-5e8dfb12ccf1.png"> | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/54/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
273127117 | MDU6SXNzdWUyNzMxMjcxMTc= | 55 | Ship first version to PyPI | simonw 9599 | closed | 0 | Ship first public release 2857392 | 2 | 2017-11-11T07:38:48Z | 2017-11-13T21:19:43Z | 2017-11-13T21:19:43Z | OWNER | Just before doing this, update the Dockerfile template to `pip install datasette` https://github.com/simonw/datasette/blob/65e350ca2a4845c25752a62c16ba58cfe2c14b9b/datasette/utils.py#L125 | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/55/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
273127443 | MDU6SXNzdWUyNzMxMjc0NDM= | 56 | Easy way to block search engine crawling in robots.txt | simonw 9599 | closed | 0 | 1 | 2017-11-11T07:46:07Z | 2018-05-28T20:50:25Z | 2018-05-28T20:50:24Z | OWNER | For people who don't want their datasets to be crawled by search engines. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/56/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
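Since Datasette was built on Sanic at this point, one possible shape for this - assuming an always-on blanket ban rather than the configurable option the issue implies:

```python
from sanic import Sanic
from sanic.response import text

app = Sanic("datasette")

@app.route("/robots.txt")
async def robots_txt(request):
    # Tells well-behaved crawlers to skip the entire site.
    return text("User-agent: *\nDisallow: /")
```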
273127694 | MDU6SXNzdWUyNzMxMjc2OTQ= | 57 | Ship a Docker image of the whole thing | simonw 9599 | closed | 0 | 7 | 2017-11-11T07:51:28Z | 2018-06-28T04:01:51Z | 2018-06-28T04:01:38Z | OWNER | The generated Docker images can then just inherit from that. This will speed up deploys as no need to `pip install` anything. - [x] Ship that image to Docker Hub - [ ] Update the generated Dockerfile to use it | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/57/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
273128608 | MDU6SXNzdWUyNzMxMjg2MDg= | 58 | publish command should detect if "now" is installed | simonw 9599 | closed | 0 | Ship first public release 2857392 | 0 | 2017-11-11T08:10:17Z | 2017-11-11T16:00:07Z | 2017-11-11T16:00:07Z | OWNER | If now is not installed, it should tell you where to get it. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/58/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
273157085 | MDU6SXNzdWUyNzMxNTcwODU= | 59 | datasette publish hyper | simonw 9599 | closed | 0 | 4 | 2017-11-11T16:27:26Z | 2019-05-13T19:01:00Z | 2019-05-13T19:00:44Z | OWNER | This is a bit tricky, because unlike Now there doesn't seem to be a way to tell Hyper to "build this Dockerfile and deploy the resulting image". They expect you to build a container and publish it to a registry instead. https://docs.hyper.sh/Reference/CLI/load.html allows you to publish an image directly from a tarball, but that still leaves the challenge of creating that image. The nice thing about the Now integration is that you don't need to have Docker installed on your local machine. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/59/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
273163905 | MDU6SXNzdWUyNzMxNjM5MDU= | 60 | Rethink how metadata is generated and stored | simonw 9599 | closed | 0 | Ship first public release 2857392 | 1 | 2017-11-11T18:01:28Z | 2017-11-11T20:12:17Z | 2017-11-11T20:12:16Z | OWNER | I broke the existing mechanism in 407795b61217205625f2d4e084afbf69f1db781b in order to get unit tests for the Sanic app working. I think I should ditch the build-metadata.json cache file entirely and calculate the SHA hashes on startup. Not sure what to do about the table row counts. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/60/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
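Computing content hashes at startup could look like the sketch below; SHA-256 and the block size are assumptions, not necessarily what shipped.

```python
import hashlib

def database_hash(path, block_size=1024 * 1024):
    # Stream the file in blocks so multi-gigabyte databases don't need
    # to fit in memory just to be hashed once at startup.
    digest = hashlib.sha256()
    with open(path, "rb") as fp:
        for block in iter(lambda: fp.read(block_size), b""):
            digest.update(block)
    return digest.hexdigest()
```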
273173116 | MDU6SXNzdWUyNzMxNzMxMTY= | 61 | Common header and footer | simonw 9599 | closed | 0 | Ship first public release 2857392 | 0 | 2017-11-11T20:20:08Z | 2017-11-11T20:37:19Z | 2017-11-11T20:37:19Z | OWNER | Split from #16 - [x] A link to the homepage from some kind of navigation bar in the header - [x] link to github.com/simonw/datasette in the footer - [x] Slightly better titles (maybe ditch the visited link colours for titles only? should keep those for primary key links) | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/61/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
273174397 | MDU6SXNzdWUyNzMxNzQzOTc= | 62 | Link to .json and .jsono versions on various pages | simonw 9599 | closed | 0 | Ship first public release 2857392 | 0 | 2017-11-11T20:37:47Z | 2017-11-11T22:41:06Z | 2017-11-11T22:41:06Z | OWNER | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/62/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
273174447 | MDU6SXNzdWUyNzMxNzQ0NDc= | 63 | Review design of JSON output | simonw 9599 | closed | 0 | Ship first public release 2857392 | 1 | 2017-11-11T20:38:33Z | 2017-11-11T22:20:17Z | 2017-11-11T22:20:17Z | OWNER | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/63/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
273181020 | MDU6SXNzdWUyNzMxODEwMjA= | 64 | Support for ?field__isnull=1 or similar | simonw 9599 | closed | 0 | 1 | 2017-11-11T22:26:52Z | 2017-11-17T14:38:21Z | 2017-11-17T14:38:21Z | OWNER | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/64/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||||
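A sketch of how the lookup could translate into SQL (hypothetical helper; treats `1` as null and anything else as not-null):

```python
def isnull_clause(column, value):
    # ?column__isnull=1 -> "[column] is null"; other values invert it.
    if value == "1":
        return f"[{column}] is null"
    return f"[{column}] is not null"
```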
273191608 | MDU6SXNzdWUyNzMxOTE2MDg= | 65 | Re-implement ?sql= mode | simonw 9599 | closed | 0 | Ship first public release 2857392 | 1 | 2017-11-12T01:47:17Z | 2017-11-12T02:36:37Z | 2017-11-12T02:35:42Z | OWNER | Here's the code I removed:

    async def data(self, request, name, hash):
        sql = 'select * from sqlite_master'
        custom_sql = False
        params = {}
        if request.args.get('sql'):
            params = request.raw_args
            sql = params.pop('sql')
            validate_sql_select(sql)
            custom_sql = True
        rows = await self.execute(name, sql, params)
        columns = [r[0] for r in rows.description]
        return {
            'database': name,
            'rows': rows,
            'columns': columns,
            'query': {
                'sql': sql,
                'params': params,
            }
        }, {
            'database_hash': hash,
            'custom_sql': custom_sql,
        }

| datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/65/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
273191806 | MDU6SXNzdWUyNzMxOTE4MDY= | 66 | Show table SQL on table page | simonw 9599 | closed | 0 | Ship first public release 2857392 | 1 | 2017-11-12T01:51:23Z | 2017-11-12T21:17:29Z | 2017-11-12T21:17:29Z | OWNER | Let's do the SQL for the table you are looking at, plus SQL for any indexes that mention that table. The page for a view should show the SQL for that view. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/66/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
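Both pieces are available from `sqlite_master`; a sketch of the lookup (auto-created indexes store `sql` as null, so they are filtered out):

```python
def schema_sql(conn, table):
    # CREATE statements for the table (or view) itself plus any indexes
    # that reference it, table/view first and indexes after.
    rows = conn.execute(
        "select sql from sqlite_master "
        "where tbl_name = :t and sql is not null "
        "order by type desc",
        {"t": table},
    ).fetchall()
    return [row[0] for row in rows]
```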
273192789 | MDU6SXNzdWUyNzMxOTI3ODk= | 67 | Command that builds a local docker container | simonw 9599 | closed | 0 | Ship first public release 2857392 | 2 | 2017-11-12T02:13:29Z | 2017-11-13T16:17:52Z | 2017-11-13T16:17:52Z | OWNER | Be nice to indicate that this isn't just for Now. Shouldn't be too hard either. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/67/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
273247186 | MDU6SXNzdWUyNzMyNDcxODY= | 68 | Support for title/source/license metadata | simonw 9599 | closed | 0 | Ship first public release 2857392 | 4 | 2017-11-12T17:04:21Z | 2017-12-04T04:55:43Z | 2017-11-13T15:26:11Z | OWNER | I've decided this is important for launch: I want to set a precedent for people citing, licensing and documenting their datasets. Not sure how best to go about supporting this. I'd like to allow for the following data to be optionally attached to any given database: - Title - Description, potentially in markdown? - Original source URL - License I'd also like the ability to attach descriptions to individual tables - and maybe even to table columns? The question then becomes: how should this information be stored. A few options: - In the SQLite database itself, in a specially named table. Problem here is that this means having to modify SQLite databases before publishing them. - In a separate SQLite database that can be published alongside the databases we are publishing. - In a JSON file. This is neat, but JSON files are not a great editing experience once you start including multiple lines (e.g. a markdown description). - In a YAML file. This is a better format for multi-line descriptions, but still isn't a great editing experience. Whatever the format, it can be made much more usable by offering a web-based editing UI for populating it (a special mode the server can be run in). | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/68/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
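For illustration only, the JSON option might carry a structure like the one below; every key name here is hypothetical, since the issue deliberately leaves the format open.

```python
import json

# A possible metadata.json layout; all keys and values are illustrative.
EXAMPLE_METADATA = {
    "databases": {
        "flights": {
            "title": "Flights",
            "description": "Airports, airlines and routes.",
            "source_url": "https://example.com/source",
            "license": "CC BY 4.0",
            "tables": {
                "airports": {"description": "One row per airport"},
            },
        }
    }
}

def load_metadata(path="metadata.json"):
    with open(path) as fp:
        return json.load(fp)
```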
273248366 | MDU6SXNzdWUyNzMyNDgzNjY= | 69 | Enforce pagination (or at least limits) for arbitrary custom SQL | simonw 9599 | closed | 0 | Ship first public release 2857392 | 4 | 2017-11-12T17:21:33Z | 2017-11-13T20:32:47Z | 2017-11-13T19:35:47Z | OWNER | It's way too easy to accidentally trigger a page that returns 100,000 rows at the moment. I need to use the LIMIT clause on views and custom SQL - I can support pagination "next" links using offset as well. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/69/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
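One way to guarantee the cap no matter what SQL arrives is to wrap the query in a subselect; a sketch, where fetching one extra row tells the caller whether a "next" link is needed:

```python
def apply_limit(sql, limit=1000, offset=0):
    # The outer LIMIT applies even if the inner query has its own
    # ORDER BY; getting limit + 1 rows back signals another page exists.
    return f"select * from ({sql}) limit {limit + 1} offset {offset}"
```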
273267081 | MDU6SXNzdWUyNzMyNjcwODE= | 70 | Paginate views using OFFSET/LIMIT | simonw 9599 | closed | 0 | Ship first public release 2857392 | 0 | 2017-11-12T21:30:29Z | 2017-11-13T21:11:01Z | 2017-11-13T21:11:01Z | OWNER | As with #69 these should obey a maximum offset setting, which can be over-ridden. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/70/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | |||||
273278840 | MDU6SXNzdWUyNzMyNzg4NDA= | 71 | Set up some example datasets on a Cloudflare-backed domain | simonw 9599 | closed | 0 | Ship first public release 2857392 | 10 | 2017-11-13T00:06:30Z | 2017-11-13T02:09:34Z | 2017-11-13T02:09:34Z | OWNER | To better demonstrate the caching and HTTP/2 features, I'd like to go live with some demos that are hosted behind Cloudflare. - [x] Redirect https://datasettes.com/ and https://www.datasettes.com/ to https://github.com/simonw/datasette - [x] Have `now domain add -e datasettes.com` run without errors (hopefully just a matter of waiting for the DNS to update) - [x] Alias an example dataset hosted on Now on a datasettes.com subdomain - [x] Confirm that HTTP caching and HTTP/2 redirect pushing works as expected - this may require another page rule | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/71/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed |
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT,
   [active_lock_reason] TEXT,
   [performed_via_github_app] TEXT,
   [reactions] TEXT,
   [draft] INTEGER,
   [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);