issue_comments
563 rows where author_association = "CONTRIBUTOR" sorted by body
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body ▼ | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
652261382 | https://github.com/simonw/datasette/issues/877#issuecomment-652261382 | https://api.github.com/repos/simonw/datasette/issues/877 | MDEyOklzc3VlQ29tbWVudDY1MjI2MTM4Mg== | abdusco 3243482 | 2020-07-01T08:03:17Z | 2020-07-01T08:03:23Z | CONTRIBUTOR | Bearer tokens sound interesting. Where do tokens come from? An auth provider of my choosing? How do they get verified? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Consider dropping explicit CSRF protection entirely? 648421105 | |
1038336591 | https://github.com/simonw/sqlite-utils/issues/398#issuecomment-1038336591 | https://api.github.com/repos/simonw/sqlite-utils/issues/398 | IC_kwDOCGYnMM4948JP | eyeseast 25778 | 2022-02-13T18:48:21Z | 2022-02-13T18:49:49Z | CONTRIBUTOR | Been chipping away at this between other things and realized `sqlite-utils init-spatialite` is probably unnecessary. Any of the other commands requires running `db.init_spatialite` to have the extension functions available, and that will do everything `init-spatialite` would do. I think it's probably worth keeping a SpatiaLite flag on `create-database` in case you wanted to create all the spatial metadata up front. Otherwise, it's going to get added the first time you run `add-geometry-column` or `create-spatial-index`, which is probably fine in most cases. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add SpatiaLite helpers to CLI 1124237013 | |
655652679 | https://github.com/simonw/sqlite-utils/issues/121#issuecomment-655652679 | https://api.github.com/repos/simonw/sqlite-utils/issues/121 | MDEyOklzc3VlQ29tbWVudDY1NTY1MjY3OQ== | tsibley 79913 | 2020-07-08T17:24:46Z | 2020-07-08T17:24:46Z | CONTRIBUTOR | Better transaction handling would be really great. Some of my thoughts on implementing better transaction discipline are in https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655239728. My preferences: - Each CLI command should operate in a single transaction so that either the whole thing succeeds or the whole thing is rolled back. This avoids partially completed operations when an error occurs part way through processing. Partially completed operations are typically much harder to recover from gracefully and may cause inconsistent data states. - The Python API should be transaction-agnostic and rely on the caller to coordinate transactions. Only the caller knows how individual insert, create, update, etc operations/methods should be bundled conceptually into transactions. When the caller is the CLI, for example, that bundling would be at the CLI command-level. Other callers might want to break up operations into multiple transactions. Transactions are usually most useful when controlled at the application-level (like logging configuration) instead of the library level. The library needs to provide an API that's conducive to transaction use, though. - The Python API should provide a context manager to provide consistent transaction handling with more useful defaults than Python's `sqlite3` module. The latter issues implicit `BEGIN` statements by default for most DML (`INSERT`, `UPDATE`, `DELETE`, … but not `SELECT`, I believe), but **not** DDL (`CREATE TABLE`, `DROP TABLE`, `CREATE VIEW`, …). Notably, the `sqlite3` module doesn't issue the implicit `BEGIN` until the first DML statement. It _does not_ issue it when entering the `with conn` block, like other DBAPI2-compatible modules do. The `with conn` block for `sqlite3` only arranges to commit or rollback an existing transaction when exiting. Including DDL and `SELECT`s in transactions is important for operation consistency, though. There are several existing bugs.python.org tickets about this and future changes are in the works, but sql… | {"total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Improved (and better documented) support for transactions 652961907 | |
661524006 | https://github.com/simonw/datasette/issues/456#issuecomment-661524006 | https://api.github.com/repos/simonw/datasette/issues/456 | MDEyOklzc3VlQ29tbWVudDY2MTUyNDAwNg== | abeyerpath 32467826 | 2020-07-21T01:15:07Z | 2020-07-21T01:15:07Z | CONTRIBUTOR | Bumping this, as the previous fix is passing the wrong type, and not actually addressing the issue... The `exclude` argument needs an iterable of packages instead of a single string (but since `str` is iterable, it's currently excluding packages `t`, `e`, and `s`.) | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Installing installs the tests package 442327592 | |
1012158895 | https://github.com/simonw/sqlite-utils/issues/79#issuecomment-1012158895 | https://api.github.com/repos/simonw/sqlite-utils/issues/79 | IC_kwDOCGYnMM48VFGv | eyeseast 25778 | 2022-01-13T13:55:59Z | 2022-01-13T13:55:59Z | CONTRIBUTOR | Came here to add this. I might pick it up. Would also add a utility to create (and update and delete?) a spatial index. It's not much code but I have to look it up every time. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Helper methods for working with SpatiaLite 557842245 | |
1125342229 | https://github.com/simonw/datasette/issues/741#issuecomment-1125342229 | https://api.github.com/repos/simonw/datasette/issues/741 | IC_kwDOBm6k_c5DE1wV | eyeseast 25778 | 2022-05-12T19:21:16Z | 2022-05-12T19:21:16Z | CONTRIBUTOR | Came here to check if this had been flagged already. Was helping a colleague get something on Cloud Run and had to dig to find `--extra-options="--setting sql_time_limit_ms 2500"`. If I get some time next week, maybe I'll try to tackle it. Would definitely make things easier to be able to do something like this: ```sh datasette publish cloudrun something.db --setting sql_time_limit_ms 2500 ``` | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Replace "datasette publish --extra-options" with "--setting" 607223136 | |
509013413 | https://github.com/simonw/datasette/issues/507#issuecomment-509013413 | https://api.github.com/repos/simonw/datasette/issues/507 | MDEyOklzc3VlQ29tbWVudDUwOTAxMzQxMw== | psychemedia 82988 | 2019-07-07T16:31:57Z | 2019-07-07T16:31:57Z | CONTRIBUTOR | Chrome and Firefox [both support headless screengrabs]( https://www.bleepingcomputer.com/news/software/chrome-and-firefox-can-take-screenshots-of-sites-from-the-command-line/) from command line, but I don't know how parameterised they can be? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Every datasette plugin on the ecosystem page should have a screenshot 455852801 | |
1077671779 | https://github.com/simonw/sqlite-utils/issues/399#issuecomment-1077671779 | https://api.github.com/repos/simonw/sqlite-utils/issues/399 | IC_kwDOCGYnMM5AO_dj | eyeseast 25778 | 2022-03-24T14:11:33Z | 2022-03-24T14:11:43Z | CONTRIBUTOR | Coming back to this. I was about to add a utility function to [datasette-geojson]() to convert lat/lng columns to geometries. Thankfully I googled first. There's a SpatiaLite function for this: [MakePoint](https://www.gaia-gis.it/gaia-sins/spatialite-sql-latest.html#p0). ```sql select MakePoint(longitude, latitude) as geometry from places; ``` I'm not sure if that would work with `conversions`, since it needs two columns, but it's an option for tables that already have latitude, longitude columns. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Make it easier to insert geometries, with documentation and maybe code 1124731464 | |
970266123 | https://github.com/simonw/datasette/issues/1012#issuecomment-970266123 | https://api.github.com/repos/simonw/datasette/issues/1012 | IC_kwDOBm6k_c451RYL | bollwyvl 45380 | 2021-11-16T13:18:36Z | 2021-11-16T13:18:36Z | CONTRIBUTOR | Congratulations, looks like it went through! There was a bit of a hold-up on the JupyterLab ones, but it's semi automated: a dependabot pr to warehouse and a CI deploy, with a click in between. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | For 1.0 update trove classifier in setup.py 718540751 | |
754721153 | https://github.com/dogsheep/twitter-to-sqlite/issues/54#issuecomment-754721153 | https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/54 | MDEyOklzc3VlQ29tbWVudDc1NDcyMTE1Mw== | jacobian 21148 | 2021-01-05T15:51:09Z | 2021-01-05T15:51:09Z | CONTRIBUTOR | Correction: the failure is on `lists-member.js` (I was thrown by the `block` variable name, but that's just a coincidence) | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Archive import appears to be broken on recent exports 779088071 | |
804640440 | https://github.com/simonw/datasette/issues/1153#issuecomment-804640440 | https://api.github.com/repos/simonw/datasette/issues/1153 | MDEyOklzc3VlQ29tbWVudDgwNDY0MDQ0MA== | mroswell 192568 | 2021-03-23T05:58:20Z | 2021-03-23T05:58:20Z | CONTRIBUTOR | Could there be a little widget that offers conversion from one to the other? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Use YAML examples in documentation by default, not JSON 771202454 | |
1321460293 | https://github.com/simonw/datasette/issues/1884#issuecomment-1321460293 | https://api.github.com/repos/simonw/datasette/issues/1884 | IC_kwDOBm6k_c5Ow-JF | asg017 15178711 | 2022-11-21T04:40:55Z | 2022-11-21T04:40:55Z | CONTRIBUTOR | Counting any virtual tables can be pretty tricky. On one hand, counting a [CSV virtual table](https://www.sqlite.org/csv.html) would return the number of rows in the CSV, which is helpful (but can be I/O intensive). Counting a [FTS5 virtual table](https://www.sqlite.org/fts5.html) would return the number of entries in the FTS index, which is kind of helpful, but can be misleading in some cases. On the other hand, arbitrarily running `COUNT(*)` on some virtual tables can be incredibly expensive. SQLite offers no shortcuts/pushdowns on `COUNT(*)` queries for virtual tables, and instead calls the underlying vtab implementation and iterates through all rows in the table without discretion. For example, a virtual table that's backed by a Postgres table would call `select * from pg_table`, which would use up a lot of network and CPU calls. Or a virtual table backed by a [google sheet](https://github.com/0x6b/libgsqlite) would make network/API requests to get all the rows from the sheet just to make a count. The [`pragma_table_list`](https://www.sqlite.org/pragma.html#pragma_table_list) pragma tells you when a table is a regular table or virtual (in the `type` column), but was only added in version 3.37.0 (2021-11-27). Personally, I wouldn't try to `COUNT(*)` virtual tables - it depends on how the virtual table is implemented, it requires that the connection has the proper extensions loaded, and it may accidentally cause perf issues for new-age extensions. A few extensions that I'm writing have virtual tables that wouldn't benefit much from `COUNT(*)`, and the fact that SQLite iterates through all rows in a table to count just makes things worse. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Exclude virtual tables from datasette inspect 1439009231 | |
626395641 | https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626395641 | https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21 | MDEyOklzc3VlQ29tbWVudDYyNjM5NTY0MQ== | RhetTbull 41546558 | 2020-05-10T21:55:54Z | 2020-05-10T21:55:54Z | CONTRIBUTOR | Did removing old bpylist solve the original problem or do you still have a photo that throws circular reference? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | bpylist.archiver.CircularReference: archive has a cycle with uid(13) 615474990 | |
593026413 | https://github.com/simonw/datasette/issues/573#issuecomment-593026413 | https://api.github.com/repos/simonw/datasette/issues/573 | MDEyOklzc3VlQ29tbWVudDU5MzAyNjQxMw== | wragge 127565 | 2020-03-01T01:24:45Z | 2020-03-01T01:24:45Z | CONTRIBUTOR | Did you manage to find an answer to this? I've got a notebook to help people generate datasets on the fly from an API, so it would be cool if they could flick it to Datasette for initial exploration. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Exposing Datasette via Jupyter-server-proxy 492153532 | |
837166862 | https://github.com/simonw/datasette/issues/1280#issuecomment-837166862 | https://api.github.com/repos/simonw/datasette/issues/1280 | MDEyOklzc3VlQ29tbWVudDgzNzE2Njg2Mg== | blairdrummond 10801138 | 2021-05-10T19:07:46Z | 2021-05-10T19:07:46Z | CONTRIBUTOR | Do you have a list of sqlite versions you want to test against? One cool thing I saw recently (that we started using) was using `import docker` within python, and then writing pytest functions which executed against the container [setup](https://github.com/StatCan/kubeflow-containers/blob/3c7dcfb5e7188982fb8ebcded82e84292720f720/conftest.py#L85) [example](https://github.com/StatCan/kubeflow-containers/blob/master/tests/jupyterlab-cpu/test_julia.py#L8-L18) The inspiration for this came from the [jupyter docker-stacks](https://github.com/jupyter/docker-stacks/blob/09fb66007615ea68d9bce8f8e1a2cf9402f1e432/test/test_packages.py#L107) So off the top of my head, could look at building the container with different sqlite versions as a build-arg, then run tests against the containers. Just brainstorming though | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Ability to run CI against multiple SQLite versions 842862708 | |
459915995 | https://github.com/simonw/datasette/issues/160#issuecomment-459915995 | https://api.github.com/repos/simonw/datasette/issues/160 | MDEyOklzc3VlQ29tbWVudDQ1OTkxNTk5NQ== | psychemedia 82988 | 2019-02-02T00:43:16Z | 2019-02-02T00:58:20Z | CONTRIBUTOR | Do you have any simple working examples of how to use `--static`? Inspection of default served files suggests locations such as `http://example.com/-/static/app.css?0e06ee`. If `datasette` is being proxied to `http://example.com/foo/datasette`, what form should arguments to `--static` take so that static files are correctly referenced? Use case is here: https://github.com/psychemedia/jupyterserverproxy-datasette-demo Trying to do a really simple `datasette` demo in MyBinder using jupyter-server-proxy. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Ability to bundle and serve additional static files 278208011 | |
474282321 | https://github.com/simonw/datasette/issues/412#issuecomment-474282321 | https://api.github.com/repos/simonw/datasette/issues/412 | MDEyOklzc3VlQ29tbWVudDQ3NDI4MjMyMQ== | psychemedia 82988 | 2019-03-19T10:09:46Z | 2019-03-19T10:09:46Z | CONTRIBUTOR | Does this also relate to https://github.com/simonw/datasette/issues/283 and the ability to `ATTACH DATABASE`? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Linked Data(sette) 411257981 | |
652166115 | https://github.com/simonw/datasette/issues/877#issuecomment-652166115 | https://api.github.com/repos/simonw/datasette/issues/877 | MDEyOklzc3VlQ29tbWVudDY1MjE2NjExNQ== | abdusco 3243482 | 2020-07-01T03:28:07Z | 2020-07-01T03:28:07Z | CONTRIBUTOR | Does this mean custom routes get to expose endpoints accepting POST requests? I've tried earlier to add some POST endpoints, but requests were being rejected by Datasette due to CSRF | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Consider dropping explicit CSRF protection entirely? 648421105 | |
738907852 | https://github.com/simonw/datasette/pull/1130#issuecomment-738907852 | https://api.github.com/repos/simonw/datasette/issues/1130 | MDEyOklzc3VlQ29tbWVudDczODkwNzg1Mg== | abdusco 3243482 | 2020-12-04T17:22:29Z | 2020-12-04T17:31:25Z | CONTRIBUTOR | EDIT: I misunderstood the problem. This seems like a fix better suited for Safari. But I don't have any Apple device to test it. ```css body { min-height: 100vh; min-height: -webkit-fill-available; } html { height: -webkit-fill-available; } ``` https://css-tricks.com/css-fix-for-100vh-in-mobile-webkit/ --- It's actually not that difficult to fix. Well, this is actually a workaround to keep viewport in place. I usually put a transition (forgot to do it here) that keeps page from resizing. ```css .container { min-height: 100vh; transition: height 10000s steps(0); } ``` `steps()` function prevents excessive layout calculations, and lets the page snap back into place (10000s ~= 3h later) in a single step. This fix also prevents page from jumping around when the keyboard pops up and down. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Fix footer not sticking to bottom in short pages 756876238 | |
1304078945 | https://github.com/simonw/sqlite-utils/issues/511#issuecomment-1304078945 | https://api.github.com/repos/simonw/sqlite-utils/issues/511 | IC_kwDOCGYnMM5Nuqph | chapmanjacobd 7908073 | 2022-11-04T19:38:36Z | 2022-11-04T20:13:17Z | CONTRIBUTOR | Even more bizarre, the source db only has one record and the target table has no conflicting record: ``` 875 0.3s lb:/ (main|✚2) [0|0]🌺 sqlite-utils tube_71.db 'select * from media where path = "https://archive.org/details/088ghostofachanceroygetssackedrevengeofthelivinglunchdvdripxvidphz"' | jq [ { "size": null, "time_created": null, "play_count": 1, "language": null, "view_count": null, "width": null, "height": null, "fps": null, "average_rating": null, "live_status": null, "age_limit": null, "uploader": null, "time_played": 0, "path": "https://archive.org/details/088ghostofachanceroygetssackedrevengeofthelivinglunchdvdripxvidphz", "id": "088ghostofachanceroygetssackedrevengeofthelivinglunchdvdripxvidphz/074 - Home Away from Home, Rainy Day Robot, Odie the Amazing DVDRip XviD [PhZ].mkv", "ie_key": "ArchiveOrg", "playlist_path": "https://archive.org/details/088ghostofachanceroygetssackedrevengeofthelivinglunchdvdripxvidphz", "duration": 1424.05, "tags": null, "title": "074 - Home Away from Home, Rainy Day Robot, Odie the Amazing DVDRip XviD [PhZ].mkv" } ] 875 0.3s lb:/ (main|✚2) [0|0]🥧 sqlite-utils video.db 'select * from media where path = "https://archive.org/details/088ghostofachanceroygetssackedrevengeofthelivinglunchdvdripxvidphz"' | jq [] ``` I've been able to use this code successfully several times before so not sure what's causing the issue. I guess the way that I'm handling multiple databases is an issue, though it hasn't ever inserted into the source db, not sure what's different. The only reasonable explanation is that it is trying to insert into the source db from the source db for some reason? Or maybe sqlite3 is checking the source db for primary key violation because the table name is the same | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | [insert_all, upsert_all] IntegrityError: constraint failed 1436539554 | |
1465315726 | https://github.com/simonw/sqlite-utils/pull/531#issuecomment-1465315726 | https://api.github.com/repos/simonw/sqlite-utils/issues/531 | IC_kwDOCGYnMM5XVvGO | eyeseast 25778 | 2023-03-12T22:21:56Z | 2023-03-12T22:21:56Z | CONTRIBUTOR | Exactly, that's what I was running into. On my M2 MacBook, SpatiaLite ends up in what is -- for the moment -- a non-standard location, so even when I passed in the location with `--load-extension`, I still hit an error on `create-spatial-index`. What I learned doing this originally is that SQLite needs to load the extension for each connection, even if all the SpatiaLite stuff is already in the database. So that's why `init_spatialite()` gets called again. Here's the code where I hit the error: https://github.com/eyeseast/boston-parcels/blob/main/Makefile#L30 It works using this branch. I'm not attached to this solution if you can think of something better. And I'm not sure, TBH, my test would actually catch what I'm after here. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add paths for homebrew on Apple silicon 1620164673 | |
381361734 | https://github.com/simonw/datasette/issues/125#issuecomment-381361734 | https://api.github.com/repos/simonw/datasette/issues/125 | MDEyOklzc3VlQ29tbWVudDM4MTM2MTczNA== | russss 45057 | 2018-04-14T21:26:30Z | 2018-04-14T21:26:30Z | CONTRIBUTOR | FWIW I am now doing this on my WTR app (instead of silently limiting maps to 1000). [Telefonica](https://wtr-api.herokuapp.com/wtr-663ea99/licensee/18325) now has about 4000 markers and good old [BT](https://wtr-api.herokuapp.com/wtr-663ea99/licensee/8412) has 22,000 or so. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Plot rows on a map with Leaflet and Leaflet.markercluster 275135393 | |
567133734 | https://github.com/simonw/datasette/issues/394#issuecomment-567133734 | https://api.github.com/repos/simonw/datasette/issues/394 | MDEyOklzc3VlQ29tbWVudDU2NzEzMzczNA== | jsfenfen 639012 | 2019-12-18T17:33:23Z | 2019-12-18T17:33:23Z | CONTRIBUTOR | FWIW I did a dumb merge of the branch here: https://github.com/jsfenfen/datasette and it seemed to work in that I could run stuff at a subdirectory, but ended up abandoning it in favor of just posting a subdomain because getting the nginx configs right was making me crazy. I still would prefer posting at a subdirectory but the subdomain seems simpler at the moment. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | base_url configuration setting 396212021 | |
752098906 | https://github.com/simonw/datasette/issues/417#issuecomment-752098906 | https://api.github.com/repos/simonw/datasette/issues/417 | MDEyOklzc3VlQ29tbWVudDc1MjA5ODkwNg== | psychemedia 82988 | 2020-12-29T14:34:30Z | 2020-12-29T14:34:50Z | CONTRIBUTOR | FWIW, I had a look at `watchdog` for a `datasette` powered Jupyter notebook search tool: https://github.com/ouseful-testing/nbsearch/blob/main/nbsearch/nbwatchdog.py Not a production thing, just an experiment trying to explore what might be possible... | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Datasette Library 421546944 | |
652990131 | https://github.com/simonw/datasette/issues/889#issuecomment-652990131 | https://api.github.com/repos/simonw/datasette/issues/889 | MDEyOklzc3VlQ29tbWVudDY1Mjk5MDEzMQ== | amjith 49260 | 2020-07-02T12:58:11Z | 2020-07-02T13:00:18Z | CONTRIBUTOR | FWIW, this error does NOT happen in datasette 0.45a4. It only started on 0.45a5 | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | asgi_wrapper plugin hook is crashing at startup 649907676 | |
624284539 | https://github.com/dogsheep/dogsheep-photos/issues/17#issuecomment-624284539 | https://api.github.com/repos/dogsheep/dogsheep-photos/issues/17 | MDEyOklzc3VlQ29tbWVudDYyNDI4NDUzOQ== | RhetTbull 41546558 | 2020-05-05T20:20:05Z | 2020-05-05T20:20:05Z | CONTRIBUTOR | FYI, I've got an [issue](https://github.com/RhetTbull/osxphotos/issues/25) to make osxphotos cross-platform but it's low on my priority list. About 90% of the functionality could be done cross-platform but right now the MacOS specific stuff is embedded throughout and would take some work. Though I try to minimize it, there's sprinklings of ObjC & Applescript throughout osxphotos. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Only install osxphotos if running on macOS 612860531 | |
864621099 | https://github.com/simonw/sqlite-utils/issues/278#issuecomment-864621099 | https://api.github.com/repos/simonw/sqlite-utils/issues/278 | MDEyOklzc3VlQ29tbWVudDg2NDYyMTA5OQ== | mcint 601708 | 2021-06-20T22:39:57Z | 2021-06-20T22:39:57Z | CONTRIBUTOR | Fair. I looked into it, it looks like it could be done, but it would be _a bit ugly_. I can upload and link a gist of my exploration. **Click** can parse a first argument while still recognizing it as a sub-command keyword. From there, the program could: 1. ignore it preemptively if it matches a sub-command 2. and/or check if a (db) file exists at the path. It would then also need to set a shared db argument variable. Click also makes it easy to parse arguments from environment variables. If you're amenable, I may submit a patch for only that, which would update each sub-command to check for a DB/SQLITE_UTILS_DB environment variable. The goal would be usage that looks like: `DB=./convenient.db sqlite-utils [operation] [args]` | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Support db as first parameter before subcommand, or as environment variable 923697888 | |
604328163 | https://github.com/simonw/datasette/issues/573#issuecomment-604328163 | https://api.github.com/repos/simonw/datasette/issues/573 | MDEyOklzc3VlQ29tbWVudDYwNDMyODE2Mw== | psychemedia 82988 | 2020-03-26T09:41:30Z | 2020-03-26T09:41:30Z | CONTRIBUTOR | Fixed by @simonw; example here: https://github.com/simonw/jupyterserverproxy-datasette-demo | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Exposing Datasette via Jupyter-server-proxy 492153532 | |
1029180984 | https://github.com/simonw/sqlite-utils/pull/385#issuecomment-1029180984 | https://api.github.com/repos/simonw/sqlite-utils/issues/385 | IC_kwDOCGYnMM49WA44 | eyeseast 25778 | 2022-02-03T16:42:04Z | 2022-02-03T16:42:04Z | CONTRIBUTOR | Fixed my spelling. That's a useful thing. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add new spatialite helper methods 1102899312 | |
748562330 | https://github.com/dogsheep/dogsheep-photos/pull/31#issuecomment-748562330 | https://api.github.com/repos/dogsheep/dogsheep-photos/issues/31 | MDEyOklzc3VlQ29tbWVudDc0ODU2MjMzMA== | RhetTbull 41546558 | 2020-12-20T04:45:08Z | 2020-12-20T04:45:08Z | CONTRIBUTOR | Fixes the issue mentioned here: https://github.com/dogsheep/dogsheep-photos/issues/15#issuecomment-748436115 | {"total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Update for Big Sur 771511344 | |
604249402 | https://github.com/simonw/datasette/issues/712#issuecomment-604249402 | https://api.github.com/repos/simonw/datasette/issues/712 | MDEyOklzc3VlQ29tbWVudDYwNDI0OTQwMg== | wragge 127565 | 2020-03-26T06:11:44Z | 2020-03-26T06:11:44Z | CONTRIBUTOR | Following on from @betatim's suggestion on Twitter, I've changed the proxy url to include 'absolute'. ``` python proxy_url = f'{base_url}proxy/absolute/8001/' ``` This works both on Binder and locally, without using the `path_from_header` option. I've updated the demo repository. Sorry @simonw if I've led you down the wrong path! | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | base_url doesn't entirely work for running Datasette inside Binder 588108428 | |
489342728 | https://github.com/simonw/datasette/pull/450#issuecomment-489342728 | https://api.github.com/repos/simonw/datasette/issues/450 | MDEyOklzc3VlQ29tbWVudDQ4OTM0MjcyOA== | russss 45057 | 2019-05-04T16:37:35Z | 2019-05-04T16:37:35Z | CONTRIBUTOR | For a bit more context: this fixes a crash with `unsupported operand type(s) for +: 'int' and 'NoneType'` on the index page for me. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Coalesce hidden table count to 0 440304714 | |
1317326406 | https://github.com/simonw/datasette/pull/1893#issuecomment-1317326406 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5OhM5G | bgrins 95570 | 2022-11-16T16:45:09Z | 2022-11-16T16:45:09Z | CONTRIBUTOR | For escaped table names it looks like we could pass a Completion object (https://codemirror.net/docs/ref/#autocomplete) instead of a string which would allow the non escaped name to be a label and then the escaped name to actually complete in the editor, which might help with some of the funkiness I was seeing w/ completion | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
1641082395 | https://github.com/simonw/datasette/issues/2104#issuecomment-1641082395 | https://api.github.com/repos/simonw/datasette/issues/2104 | IC_kwDOBm6k_c5h0O4b | asg017 15178711 | 2023-07-18T22:41:37Z | 2023-07-18T22:41:37Z | CONTRIBUTOR | For filtering virtual table's "shadow tables" (ex the FTS5 _content and most the spatialite tables), you can use `pragma_table_list` (first appeared in SQLite 3.37 (2021-11-27), which has a `type` column that calls out `type="shadow"` tables https://www.sqlite.org/pragma.html#pragma_table_list | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Tables starting with an underscore should be treated as hidden 1808215339 | |
770069864 | https://github.com/dogsheep/github-to-sqlite/issues/60#issuecomment-770069864 | https://api.github.com/repos/dogsheep/github-to-sqlite/issues/60 | MDEyOklzc3VlQ29tbWVudDc3MDA2OTg2NA== | daniel-butler 22578954 | 2021-01-29T21:52:05Z | 2021-02-12T18:29:43Z | CONTRIBUTOR | For the purposes below I am assuming the organization I would get all the repositories and their related commits from is called `gh-organization`. The GitHub owner id of `gh-organization` is `123456789`. ```bash github-to-sqlite repos github.db gh-organization ``` I'm on a Windows computer running git bash to be able to use the `|` command. This works for me: ```bash sqlite3 github.db "SELECT full_name FROM repos WHERE owner = '123456789';" | tr '\n\r' ' ' | xargs | { read repos; github-to-sqlite commits github.db $repos; } ``` On a pure Linux system I think this would work because the newline character is normally `\n` ```bash sqlite3 github.db "SELECT full_name FROM repos WHERE owner = '123456789';" | tr '\n' ' ' | xargs | { read repos; github-to-sqlite commits github.db $repos; } ``` As expected I ran into rate limit issues #51 | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Use Data from SQLite in other commands 797097140 | |
812813732 | https://github.com/simonw/datasette/issues/502#issuecomment-812813732 | https://api.github.com/repos/simonw/datasette/issues/502 | MDEyOklzc3VlQ29tbWVudDgxMjgxMzczMg== | louispotok 5413548 | 2021-04-03T05:16:54Z | 2021-04-03T05:16:54Z | CONTRIBUTOR | For what it's worth, if anyone finds this in the future, I was having the same issue. After digging through the code, it turned out that the database download is only available if the db is served in immutable mode, so `datasette serve -i xyz.db` rather than the docs' quickstart recommendation of `datasette serve xyz.db`. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Exporting sqlite database(s)? 453131917 | |
910121331 | https://github.com/dogsheep/twitter-to-sqlite/issues/58#issuecomment-910121331 | https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/58 | IC_kwDODEm0Qs42P1lz | rubenv 42904 | 2021-09-01T09:49:33Z | 2021-09-01T09:49:33Z | CONTRIBUTOR | Found the cause, it's the other commands. PR #59 submitted. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Error: Use either --since or --since_id, not both - still broken 984939366 | |
716066000 | https://github.com/simonw/datasette/issues/1033#issuecomment-716066000 | https://api.github.com/repos/simonw/datasette/issues/1033 | MDEyOklzc3VlQ29tbWVudDcxNjA2NjAwMA== | psychemedia 82988 | 2020-10-24T22:58:33Z | 2020-10-24T22:58:33Z | CONTRIBUTOR | From [the docs](https://docs.datasette.io/en/latest/internals.html#datasette-urls), I note: ``` datasette.urls.instance() Returns the URL to the Datasette instance root page. This is usually "/" ``` What about the proxy case? Eg if I am using jupyter-server-proxy on a MyBinder or local Jupyter notebook server site, `https://example.com:PORT/weirdpath/datasette`, what does `datasette.urls.instance()` refer to? - [ ] `https://example.com:PORT/weirdpath/datasette` - [ ] `https://example.com:PORT/weirdpath/` - [ ] `https://example.com:PORT/` - [ ] `https://example.com` - [ ] something else? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | datasette.urls.static_plugins(...) method 725099777 | |
626396379 | https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626396379 | https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21 | MDEyOklzc3VlQ29tbWVudDYyNjM5NjM3OQ== | RhetTbull 41546558 | 2020-05-10T22:01:48Z | 2020-05-10T22:01:48Z | CONTRIBUTOR | Frustrates me when package authors create a "drop in" replacement with the same import name...this kind of thing has bitten me more than once! Would've been nicer I think for bpylist2 to do "import bpylist2 as bpylist" | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | bpylist.archiver.CircularReference: archive has a cycle with uid(13) 615474990 | |
1065477258 | https://github.com/simonw/sqlite-utils/issues/411#issuecomment-1065477258 | https://api.github.com/repos/simonw/sqlite-utils/issues/411 | IC_kwDOCGYnMM4_geSK | eyeseast 25778 | 2022-03-11T20:14:59Z | 2022-03-11T20:14:59Z | CONTRIBUTOR | Good call on adding this to `create-table`, especially for stored columns. Having the stored/virtual split might make this tricky to implement, but I haven't gone any farther than thinking about what the CLI looks like. I'm going to try making the SQL side work first and figure that'll tell me more about what it needs. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Support for generated columns 1160034488 | |
714908859 | https://github.com/simonw/datasette/issues/1012#issuecomment-714908859 | https://api.github.com/repos/simonw/datasette/issues/1012 | MDEyOklzc3VlQ29tbWVudDcxNDkwODg1OQ== | bollwyvl 45380 | 2020-10-23T04:49:20Z | 2020-10-23T04:49:20Z | CONTRIBUTOR | Good luck on 1.0! It may also be worth lobbying for a `Framework::Datasette::1.0` classifier. This would be a nice way to allow the ecosystem to self-document a bit more [discoverably](https://pypi.org/search/?q=&o=&c=Framework+%3A%3A+Datasette%3A%3A+1.0). I was surprised to see the [PR for `Framework::Jupyter`](https://github.com/pypa/warehouse/pull/1905/files) is a... database migration! Of course, there may be more workflow to it! | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | For 1.0 update trove classifier in setup.py 718540751 | |
804471733 | https://github.com/simonw/datasette/issues/88#issuecomment-804471733 | https://api.github.com/repos/simonw/datasette/issues/88 | MDEyOklzc3VlQ29tbWVudDgwNDQ3MTczMw== | mroswell 192568 | 2021-03-22T23:46:36Z | 2021-03-22T23:46:36Z | CONTRIBUTOR | Google Map API limits seem to prevent https://nhs-england-map.netlify.com from being a working demo. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add NHS England Hospitals example to wiki 273775212 | |
604225034 | https://github.com/simonw/datasette/issues/712#issuecomment-604225034 | https://api.github.com/repos/simonw/datasette/issues/712 | MDEyOklzc3VlQ29tbWVudDYwNDIyNTAzNA== | wragge 127565 | 2020-03-26T04:40:08Z | 2020-03-26T04:40:08Z | CONTRIBUTOR | Great! Yes, can confirm that this works on Binder. However, when I try to run the same code locally, I get an Internal Server Error when I try to access Datasette. ``` ERROR: Exception in ASGI application Traceback (most recent call last): File "/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/uvicorn/protocols/http/httptools_impl.py", line 385, in run_asgi result = await app(self.scope, self.receive, self.send) File "/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__ return await self.app(scope, receive, send) File "/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/datasette_debug_asgi.py", line 24, in wrapped_app await app(scope, recieve, send) File "/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/datasette/utils/asgi.py", line 174, in __call__ await self.app(scope, receive, send) File "/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/datasette/tracer.py", line 75, in __call__ await self.app(scope, receive, send) File "/Volumes/Workspace/mycode/datasette-test/lib/python3.7/site-packages/datasette/app.py", line 746, in __call__ raw_path = dict(scope["headers"])[path_from_header.encode("utf8")].split(b"?")[0] KeyError: b'x-original-uri' INFO: 127.0.0.1:49320 - "GET / HTTP/1.1" 500 Internal Server Error ``` | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | base_url doesn't entirely work for running Datasette inside Binder 588108428 | |
1105642187 | https://github.com/simonw/datasette/issues/1101#issuecomment-1105642187 | https://api.github.com/repos/simonw/datasette/issues/1101 | IC_kwDOBm6k_c5B5sLL | eyeseast 25778 | 2022-04-21T18:59:08Z | 2022-04-21T18:59:08Z | CONTRIBUTOR | Ha! That was your idea (and a good one). But it's probably worth measuring to see what overhead it adds. It did require both passing in the database and making the whole thing `async`. Just timing the queries themselves: 1. [Using `AsGeoJSON(geometry) as geometry`](https://alltheplaces-datasette.fly.dev/alltheplaces?sql=select%0D%0A++id%2C%0D%0A++properties%2C%0D%0A++AsGeoJSON%28geometry%29+as+geometry%2C%0D%0A++spider%0D%0Afrom%0D%0A++places%0D%0Aorder+by%0D%0A++id%0D%0Alimit%0D%0A++1000) takes 10.235 ms 2. [Leaving as binary](https://alltheplaces-datasette.fly.dev/alltheplaces?sql=select%0D%0A++id%2C%0D%0A++properties%2C%0D%0A++geometry%2C%0D%0A++spider%0D%0Afrom%0D%0A++places%0D%0Aorder+by%0D%0A++id%0D%0Alimit%0D%0A++1000) takes 8.63 ms Looking at the network panel: 1. Takes about 200 ms for the `fetch` request 2. Takes about 300 ms I'm not sure how best to time the GeoJSON generation, but it would be interesting to check. Maybe I'll write a plugin to add query times to response headers. The other thing to consider with async streaming is that it might be well-suited for a slower response. When I have to get the whole result and send a response in a fixed amount of time, I need the most efficient query possible. If I can hang onto a connection and get things one chunk at a time, maybe it's ok if there's some overhead. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | register_output_renderer() should support streaming data 749283032 | |
1317281292 | https://github.com/simonw/datasette/pull/1893#issuecomment-1317281292 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5OhB4M | bgrins 95570 | 2022-11-16T16:19:16Z | 2022-11-16T16:19:16Z | CONTRIBUTOR | Ha, nice idea! Updating the dialect with that list. I'm thinking of also adding `count` to the list since that's a common thing people would want to autocomplete. I notice BQ console highlights `count` in the same manner as other keywords like `select` as well. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
1040998433 | https://github.com/simonw/sqlite-utils/pull/407#issuecomment-1040998433 | https://api.github.com/repos/simonw/sqlite-utils/issues/407 | IC_kwDOCGYnMM4-DGAh | eyeseast 25778 | 2022-02-16T01:29:39Z | 2022-02-16T01:29:39Z | CONTRIBUTOR | Happy to do it and have it in the library. Going to use it a bunch. This whole SpatiaLite toolchain become a huge part of my work in the past year. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add SpatiaLite helpers to CLI 1138948786 | |
541118904 | https://github.com/simonw/datasette/issues/507#issuecomment-541118904 | https://api.github.com/repos/simonw/datasette/issues/507 | MDEyOklzc3VlQ29tbWVudDU0MTExODkwNA== | rixx 2657547 | 2019-10-11T15:48:49Z | 2019-10-11T15:48:49Z | CONTRIBUTOR | Headless Chrome and Firefox via Selenium are a solid choice in my experience. You may be interested in how pretix and pretalx solve this problem: They use pytest to create those screenshots on release to make sure they are up to date. See [this writeup](https://behind.pretix.eu/2018/11/15/automated-screenshots/) and [this repo](https://github.com/pretix/pretix-screenshots). | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Every datasette plugin on the ecosystem page should have a screenshot 455852801 | |
743080047 | https://github.com/simonw/datasette/issues/998#issuecomment-743080047 | https://api.github.com/repos/simonw/datasette/issues/998 | MDEyOklzc3VlQ29tbWVudDc0MzA4MDA0Nw== | JBPressac 6371750 | 2020-12-11T09:25:09Z | 2020-12-11T09:25:09Z | CONTRIBUTOR | Hello Simon, I have a similar problem with horizontal scrollbar display with Datasette version 0.51 and superior for a table with more than 30 rows. With Datasette 0.50, the horizontal scrollbar is displayed, if I upgrade Datasette to 0.51 and superior, the horizontal scrollbar disappears. Datasette 0.50: horizontal scrollbar ![2020-12-11 10_23_28-CN=Microsoft Windows, O=Microsoft Corporation, L=Redmond, S=Washington, C=US](https://user-images.githubusercontent.com/6371750/101885620-a5f17800-3b9a-11eb-8870-654e7d4372ca.png) Datasette 0.51 and superior: no horizontal scrollbar ![2020-12-11 10_24_55-CN=Microsoft Windows, O=Microsoft Corporation, L=Redmond, S=Washington, C=US](https://user-images.githubusercontent.com/6371750/101885782-dfc27e80-3b9a-11eb-9d55-6c9a56227bf2.png) Thanks, | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Wide tables should scroll horizontally within the page 717699884 | |
1065940779 | https://github.com/simonw/datasette/issues/1384#issuecomment-1065940779 | https://api.github.com/repos/simonw/datasette/issues/1384 | IC_kwDOBm6k_c4_iPcr | brandonrobertz 2670795 | 2022-03-12T18:49:29Z | 2022-03-12T18:50:07Z | CONTRIBUTOR | Hello! Just wanted to chime in and note that there's a plugin to have Datasette [watch for updates to an external metadata.yaml/json and update the internal settings accordingly](https://datasette.io/plugins/datasette-remote-metadata), so I think the cache/poll use case is already covered. @khusmann If you don't need truly dynamic metadata then what you've come up with or the plugin ought to work fine. Making the get_metadata async won't improve the situation by itself as only some of the code paths accessing metadata use that hook. The other paths use the internal metadata dict. Trying to force all paths through a async hook would have performance ramifications and making everything use the internal meta will cause problems for users that need changes to take effect immediately. This is why I came to the non-async solution as it was the path of least change within Datasette. As always, open to new ideas, etc! | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Plugin hook for dynamic metadata 930807135 | |
791509910 | https://github.com/simonw/datasette/issues/766#issuecomment-791509910 | https://api.github.com/repos/simonw/datasette/issues/766 | MDEyOklzc3VlQ29tbWVudDc5MTUwOTkxMA== | JBPressac 6371750 | 2021-03-05T15:57:35Z | 2021-03-05T16:35:21Z | CONTRIBUTOR | Hello, I have the same wildcards search problems with an instance of Datasette. http://crbc-dataset.huma-num.fr/inventaires/fonds_auguste_dupouy_1872_1967?_search=gwerz&_sort=rowid is OK but http://crbc-dataset.huma-num.fr/inventaires/fonds_auguste_dupouy_1872_1967?_search=gwe* is not (FTS is activated on "Reference" "IntituleAnalyse" "NomDuProducteur" "PresentationDuContenu" "Notes"). Notice that a SQL query as below launched directly from SQLite in the server's shell, retrieves results. `select * from fonds_auguste_dupouy_1872_1967_fts where IntituleAnalyse MATCH "gwe*";` Thanks, | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Enable wildcard-searches by default 617323873 | |
1309735529 | https://github.com/simonw/datasette/issues/1884#issuecomment-1309735529 | https://api.github.com/repos/simonw/datasette/issues/1884 | IC_kwDOBm6k_c5OEPpp | eyeseast 25778 | 2022-11-10T03:57:23Z | 2022-11-10T03:57:23Z | CONTRIBUTOR | Here's how to get a list of virtual tables: https://stackoverflow.com/questions/46617118/how-to-fetch-names-of-virtual-tables | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Exclude virtual tables from datasette inspect 1439009231 | |
1399589414 | https://github.com/simonw/datasette/pull/1159#issuecomment-1399589414 | https://api.github.com/repos/simonw/datasette/issues/1159 | IC_kwDOBm6k_c5TbAom | cldellow 193185 | 2023-01-22T19:48:41Z | 2023-01-22T19:48:41Z | CONTRIBUTOR | Hey @lovasoa, I hope you don't mind - I pulled this PR into [datasette-ui-extras](https://github.com/cldellow/datasette-ui-extras), a plugin I'm making that collects UI tweaks to Datasette. You can apply it to your own Datasette instance by running `datasette install datasette-ui-extras` | {"total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Improve the display of facets information 774332247 | |
556749086 | https://github.com/simonw/datasette/issues/394#issuecomment-556749086 | https://api.github.com/repos/simonw/datasette/issues/394 | MDEyOklzc3VlQ29tbWVudDU1Njc0OTA4Ng== | jsfenfen 639012 | 2019-11-21T01:15:34Z | 2019-11-21T01:21:45Z | CONTRIBUTOR | Hey @simonw is the url_prefix config option available in another branch, it looks like you've written some tests for it above? In 0.32 I get "url_prefix is not a valid option". I think this would be *really helpful*! This would be really handy for proxying datasette in another domain's *subdirectory* I believe this will allow folks to run upstream authentication, but the links break if the url_prefix doesn't match. I'd prefer not to host a proxied version of datasette on a subdomain (e.g. datasette.myurl.com b/c then I gotta worry about sharing authorization cookies with the subdomain, which I just assume not do, but...) Edit: I see the wip-url-prefix branch, I may try with that https://github.com/simonw/datasette/commit/8da2db4b71096b19e7a9ef1929369b8483d448bf | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | base_url configuration setting 396212021 | |
499320973 | https://github.com/simonw/datasette/issues/394#issuecomment-499320973 | https://api.github.com/repos/simonw/datasette/issues/394 | MDEyOklzc3VlQ29tbWVudDQ5OTMyMDk3Mw== | kevindkeogh 13896256 | 2019-06-06T02:07:59Z | 2019-06-06T02:07:59Z | CONTRIBUTOR | Hey was this ever merged? Trying to run this behind nginx, and encountering this issue. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | base_url configuration setting 396212021 | |
1421571810 | https://github.com/simonw/sqlite-utils/issues/520#issuecomment-1421571810 | https://api.github.com/repos/simonw/sqlite-utils/issues/520 | IC_kwDOCGYnMM5Uu3bi | mcarpenter 167893 | 2023-02-07T22:43:09Z | 2023-02-07T22:43:09Z | CONTRIBUTOR | Hey, isn't this essentially the same issue as #448 ? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | rows_from_file() raises confusing error if file-like object is not in binary mode 1516644980 | |
565755208 | https://github.com/simonw/datasette/pull/644#issuecomment-565755208 | https://api.github.com/repos/simonw/datasette/issues/644 | MDEyOklzc3VlQ29tbWVudDU2NTc1NTIwOA== | chris48s 6025893 | 2019-12-14T21:33:31Z | 2019-12-14T21:33:31Z | CONTRIBUTOR | Hi @simonw Have you had a chance to look at this at all? I'm going to have a chunk of time free next week so if there is additional work needed on this, that would be a particularly convenient time for me to revisit this. Cheers | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Validate metadata json on startup 530513784 | |
1313252879 | https://github.com/simonw/datasette/issues/1886#issuecomment-1313252879 | https://api.github.com/repos/simonw/datasette/issues/1886 | IC_kwDOBm6k_c5ORqYP | adipasquale 883348 | 2022-11-14T08:10:23Z | 2022-11-14T08:10:23Z | CONTRIBUTOR | Hi @simonw and thanks for the great tools you're publishing, your dedication is inspiring! I work for the French Ministry of Culture on a surveying tool for objects protected for their historical value. It is part of a program building modern public services called [beta.gouv.fr](https://beta.gouv.fr/). In that context I'm using data published by the Ministry that I have ingested into datasette and published on a free Fly instance : https://collectif-objets-datasette.fly.dev . I have also ingested another data set with infos about french cities on this instance so that I can perform joined queries. The surveying tool synchronizes its data regularly from this datasette instance, and I also use it to perform queries when asked generic questions about the distribution of objects. (The data is not very accessible as it's undocumented and for internal usage mostly) | {"total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0} | Call for birthday presents: if you're using Datasette, let us know how you're using it here 1447050738 | |
751375487 | https://github.com/dogsheep/github-to-sqlite/pull/59#issuecomment-751375487 | https://api.github.com/repos/dogsheep/github-to-sqlite/issues/59 | MDEyOklzc3VlQ29tbWVudDc1MTM3NTQ4Nw== | frosencrantz 631242 | 2020-12-26T17:08:44Z | 2020-12-26T17:08:44Z | CONTRIBUTOR | Hi @simonw, do I need to do anything else for this PR to be considered to be included? I've tried using this project and it is quite nice to be able to explore a repository, but noticed that a couple commands don't allow you to use authorization from the environment variable. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Remove unneeded exists=True for -a/--auth flag. 771872303 | |
608716819 | https://github.com/simonw/datasette/issues/236#issuecomment-608716819 | https://api.github.com/repos/simonw/datasette/issues/236 | MDEyOklzc3VlQ29tbWVudDYwODcxNjgxOQ== | cldellow 193185 | 2020-04-03T22:19:00Z | 2020-04-03T22:19:00Z | CONTRIBUTOR | Hi Simon, I'm thinking of attempting this. Can you clarify some questions I have? 1) I assume the goal is to have a CORS-friendly HTTPS endpoint that hosts the datasette service + user's db. 2) If that's the goal, I think Lambda alone is insufficient. Lambda provides the compute fabric, but not the HTTP routing. You'd also need to add Application Load Balancer or API Gateway to provide an HTTP endpoint that routes to the lambda function. Do you have a preference between ALB or API GW? ALB has better economics at scale, but has a minimum monthly cost. API GW has worse per-request economics, but scales to zero when no requests are happening. 3) Does Datasette have any native components, or is it all pure python? If it has native bits, they'll likely need to be recompiled to work on Amazon Linux 2. 4) There are a few disparate services that need to be wired together to expose a Python service securely to the web. If I was doing this outside of the datasette publish system, I'd use an AWS CloudFormation template. Even within datasette, I think it still makes sense to use a CloudFormation template and just have the publish plugin invoke it (via the standard `aws` cli) with user-specified parameters. Does that sound reasonable to you? Thanks for your help! | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | datasette publish lambda plugin 317001500 | |
647925594 | https://github.com/simonw/datasette/issues/859#issuecomment-647925594 | https://api.github.com/repos/simonw/datasette/issues/859 | MDEyOklzc3VlQ29tbWVudDY0NzkyNTU5NA== | abdusco 3243482 | 2020-06-23T05:55:21Z | 2020-06-23T06:28:29Z | CONTRIBUTOR | Hmm, not seeing the problem now. I've removed the commented out sections in `database.py` and restarted the process. Database page now loads in <250ms. I have couple of workers that check some pages regularly and scrape new content and save to the DB. Could it be that datasette tries to recount tables every time database size changes? Normally it keeps a count cache, but as DB gets updated so often (new content every 5 min or so) it's practically recounting every time I go to the database page? EDIT: It turns out it doesn't hold cache with mutable databases. I'll update the issue with more findings and a better way to reproduce the problem if I encounter it again. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Database page loads too slowly with many large tables (due to table counts) 642572841 | |
655018966 | https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655018966 | https://api.github.com/repos/simonw/sqlite-utils/issues/118 | MDEyOklzc3VlQ29tbWVudDY1NTAxODk2Ng== | tsibley 79913 | 2020-07-07T17:41:06Z | 2020-07-07T17:41:06Z | CONTRIBUTOR | Hmm, while tests pass, this may not work as intended on larger datasets. Looking into it. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add insert --truncate option 651844316 | |
483017176 | https://github.com/simonw/datasette/issues/431#issuecomment-483017176 | https://api.github.com/repos/simonw/datasette/issues/431 | MDEyOklzc3VlQ29tbWVudDQ4MzAxNzE3Ng== | psychemedia 82988 | 2019-04-14T16:58:37Z | 2019-04-14T16:58:37Z | CONTRIBUTOR | Hmm... nope... I see an updated timestamp from `ls -al` on the db but no reload? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Datasette doesn't reload when database file changes 432870248 | |
772007663 | https://github.com/simonw/datasette/issues/1212#issuecomment-772007663 | https://api.github.com/repos/simonw/datasette/issues/1212 | MDEyOklzc3VlQ29tbWVudDc3MjAwNzY2Mw== | kbaikov 4488943 | 2021-02-02T21:36:56Z | 2021-02-02T21:36:56Z | CONTRIBUTOR | How do you get 4-5 minutes? I run my tests in WSL 2, so maybe I need to try a real Linux VM. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Tests are very slow. 797651831 | |
714657366 | https://github.com/simonw/datasette/issues/1033#issuecomment-714657366 | https://api.github.com/repos/simonw/datasette/issues/1033 | MDEyOklzc3VlQ29tbWVudDcxNDY1NzM2Ng== | psychemedia 82988 | 2020-10-22T17:51:29Z | 2020-10-22T17:51:29Z | CONTRIBUTOR | How does `/-/static` relate to [current guidance docs around `static`](https://docs.datasette.io/en/latest/custom_templates.html?highlight=static#serving-static-files) regarding the `--static option` and metadata formulations such as `"extra_js_urls": [ "/static/app.js"]` (I've not managed to get this to work in a Jupyter server proxied set up; the [datasette / jupyter server proxy repo](https://github.com/simonw/jupyterserverproxy-datasette-demo) may provide a useful test example, eg via MyBinder, for folk to crib from?) | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | datasette.urls.static_plugins(...) method 725099777 | |
541119038 | https://github.com/simonw/datasette/issues/512#issuecomment-541119038 | https://api.github.com/repos/simonw/datasette/issues/512 | MDEyOklzc3VlQ29tbWVudDU0MTExOTAzOA== | rixx 2657547 | 2019-10-11T15:49:13Z | 2019-10-11T15:49:13Z | CONTRIBUTOR | How open are you to changing the config variable names (with appropriate deprecation, of course)? `"about_url_text", "license_url_text"` etc might be better suited to convey that these are just meant as basically URL titles. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | "about" parameter in metadata does not appear when alone 457147936 | |
582105810 | https://github.com/simonw/datasette/pull/653#issuecomment-582105810 | https://api.github.com/repos/simonw/datasette/issues/653 | MDEyOklzc3VlQ29tbWVudDU4MjEwNTgxMA== | jaywgraves 418191 | 2020-02-04T20:43:01Z | 2020-02-04T20:43:01Z | CONTRIBUTOR | I *think* the existing code will be OK even if I strip the lines in the middle of a new line delimited string. It's only used for the validation, SQLite handles the `--` just fine and the whole SQL textarea still gets sent once it passes validation. I can add your test case to my branch later this evening though. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | allow leading comments in SQL input field 541331755 | |
1316387382 | https://github.com/simonw/datasette/pull/1893#issuecomment-1316387382 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5Odno2 | bgrins 95570 | 2022-11-16T05:33:55Z | 2022-11-16T05:33:55Z | CONTRIBUTOR | I added a commit to make our own dialect at https://github.com/simonw/datasette/pull/1893/commits/e273fc8ed5341bdf0b622e722d761bd2acc30a90. Pulled in the full list of keywords from https://www.sqlite.org/lang_keywords.html but haven't gone through and pruned it to only include common select keywords. @simonw you'll have better knowledge than me on that - do you want to take a first shot at narrowing that down to the set that people will be using in the editor? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
1613895188 | https://github.com/simonw/datasette/issues/2093#issuecomment-1613895188 | https://api.github.com/repos/simonw/datasette/issues/2093 | IC_kwDOBm6k_c5gMhYU | asg017 15178711 | 2023-06-29T22:51:53Z | 2023-06-29T22:51:53Z | CONTRIBUTOR | I agree with not liking `metadata.json` stuff in a `datasette.*` config file. Editing description of a table/column in a file like `datasette.*` seems odd to me. Though since plugin configuration currently lives in `metadata.json`, I think it should be removed from there and placed in `datasette.*`, at least for top-level config like `datasette-auth-github`'s config. Keeping `metadata.json` strictly for documentation/licensing/column units makes sense to me, but anything plugin related should be in some config file, like `datasette.*`. And ya, supporting both `datasette.*` and CLI flags makes a lot of sense to me. Any `--setting` flag should override anything in `datasette.*` for easier debugging, with possibly a warning message so people don't get confused. Same with `--port` and a port defined in `datasette.*` | {"total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Proposal: Combine settings, metadata, static, etc. into a single `datasette.toml` File 1781530343 | |
487724539 | https://github.com/simonw/datasette/pull/441#issuecomment-487724539 | https://api.github.com/repos/simonw/datasette/issues/441 | MDEyOklzc3VlQ29tbWVudDQ4NzcyNDUzOQ== | russss 45057 | 2019-04-29T20:08:32Z | 2019-04-29T20:08:32Z | CONTRIBUTOR | I also just realised that I should be passing the datasette object into the hook function...as I just found I need it. So hold off merging until I've fixed that. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add register_output_renderer hook 438437973 | |
1407264466 | https://github.com/simonw/sqlite-utils/issues/523#issuecomment-1407264466 | https://api.github.com/repos/simonw/sqlite-utils/issues/523 | IC_kwDOCGYnMM5T4SbS | fgregg 536941 | 2023-01-28T02:41:14Z | 2023-01-28T02:41:14Z | CONTRIBUTOR | I also often then run another little script to cast all empty strings to null, but i save that for another issue if this gets accepted. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Feature request: trim all leading and trailing white space for all columns for all tables in a database 1560651350 | |
953334718 | https://github.com/simonw/datasette/issues/1380#issuecomment-953334718 | https://api.github.com/repos/simonw/datasette/issues/1380 | IC_kwDOBm6k_c440ru- | glasnt 813732 | 2021-10-27T21:45:04Z | 2021-10-27T21:45:04Z | CONTRIBUTOR | I am also getting this issue, using the currently most recent version of datasette ``` $ datasette --version datasette, version 0.59.1 ``` If I run `datasette` within just a folder of files, ``` $ datasette serve . ``` Adding new files while datasette is running shows no new files, and removing files causes datasette to return 500 errors. ``` home Error 500 [Errno 2] No such file or directory: 'mydatabase.db' Powered by Datasette ``` | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Serve all db files in a folder 924748955 | |
652255960 | https://github.com/simonw/datasette/issues/877#issuecomment-652255960 | https://api.github.com/repos/simonw/datasette/issues/877 | MDEyOklzc3VlQ29tbWVudDY1MjI1NTk2MA== | abdusco 3243482 | 2020-07-01T07:52:25Z | 2020-07-01T08:10:00Z | CONTRIBUTOR | I am calling the API from another origin, so injecting CSRF token into templates wouldn't work. EDIT: I'll try the new version, it sounds promising | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Consider dropping explicit CSRF protection entirely? 648421105 | |
946255239 | https://github.com/simonw/datasette/issues/1432#issuecomment-946255239 | https://api.github.com/repos/simonw/datasette/issues/1432 | IC_kwDOBm6k_c44ZrWH | mroswell 192568 | 2021-10-18T23:55:25Z | 2021-10-18T23:55:25Z | CONTRIBUTOR | I am getting this when I visit my live Datasette page: ``` This Serverless Function has crashed. Your connection is working correctly. Vercel is working correctly. 500: INTERNAL_SERVER_ERROR Code: FUNCTION_INVOCATION_FAILED ID: ... ``` And in the server logs, I'm getting ``` [GET] /disinfectants/listN 19:53:14:23 module initialization error: __init__() got an unexpected keyword argument 'config' module initialization error __init__() got an unexpected keyword argument 'config' ``` Which is the same error that @ashishdotme reported above. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Rename Datasette.__init__(config=) parameter to settings= 969855774 | |
895003796 | https://github.com/simonw/datasette/issues/1425#issuecomment-895003796 | https://api.github.com/repos/simonw/datasette/issues/1425 | IC_kwDOBm6k_c41WKyU | abdusco 3243482 | 2021-08-09T07:14:35Z | 2021-08-09T07:14:35Z | CONTRIBUTOR | I believe this also provides a workaround for the problem I face in https://github.com/simonw/datasette/issues/1300. Now I should be able to get table PKs and generate a row URL. I'll test this out and report my findings. ```py from datasette.utils import path_from_row_pks pks = await db.primary_keys(table) url = self.ds.urls.row_blob( database, table, path_from_row_pks(row, pks, not pks), column, ) ``` | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | render_cell() hook should support returning an awaitable 963528457 | |
782053455 | https://github.com/simonw/datasette/pull/1229#issuecomment-782053455 | https://api.github.com/repos/simonw/datasette/issues/1229 | MDEyOklzc3VlQ29tbWVudDc4MjA1MzQ1NQ== | camallen 295329 | 2021-02-19T12:47:19Z | 2021-02-19T12:47:19Z | CONTRIBUTOR | I believe this pr and #1031 are related and fix the same issue. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | ensure immutable databses when starting in configuration directory mode with 810507413 | |
1317789308 | https://github.com/simonw/datasette/pull/1893#issuecomment-1317789308 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5Oi958 | bgrins 95570 | 2022-11-16T22:59:57Z | 2022-11-16T22:59:57Z | CONTRIBUTOR | I can push up a commit that uses the static fixtures schema for testing, but given that the query used to generate it is authed we would still need some work to make that work on live data, right? Ideally it could come down to db and query views directly to avoid waiting on an extra xhr and managing that state change.On Nov 16, 2022, at 2:16 PM, Simon Willison ***@***.***> wrote: Honestly I'm not too bothered if table names with weird characters don't work correctly here - I care about those in the Datasette fixtures.db database because Datasette aims to support ANY valid SQLite database, so I need stuff in the test suite that includes weird edge cases like this. But I would hope very few people actually create tables with spaces in their names, so it's not a huge concern to me if autocompletion doesn't work properly for those. —Reply to this email directly, view it on GitHub, or unsubscribe.You are receiving this because you authored the thread.Message ID: ***@***.***> | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
748305976 | https://github.com/simonw/datasette/issues/493#issuecomment-748305976 | https://api.github.com/repos/simonw/datasette/issues/493 | MDEyOklzc3VlQ29tbWVudDc0ODMwNTk3Ng== | jefftriplett 50527 | 2020-12-18T20:34:39Z | 2020-12-18T20:34:39Z | CONTRIBUTOR | I can't keep up with the renaming contexts, but I like having the ability to run datasette+ datasette-ripgrep against different configs: ```shell datasette serve --metadata=./metadata.json ``` I have one for all of my code and one per client who has lots of code. So as long as I can point to datasette to something, it's easy to work with. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Rename metadata.json to config.json 449886319 | |
1339906969 | https://github.com/simonw/datasette/issues/1929#issuecomment-1339906969 | https://api.github.com/repos/simonw/datasette/issues/1929 | IC_kwDOBm6k_c5P3VuZ | davidbgk 3556 | 2022-12-06T19:34:20Z | 2022-12-06T19:34:20Z | CONTRIBUTOR | I confirm that it works 👍 | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Incorrect link from the API explorer to the JSON API documentation 1473659191 | |
1407767434 | https://github.com/simonw/datasette/issues/1696#issuecomment-1407767434 | https://api.github.com/repos/simonw/datasette/issues/1696 | IC_kwDOBm6k_c5T6NOK | cldellow 193185 | 2023-01-29T20:56:20Z | 2023-01-29T20:56:20Z | CONTRIBUTOR | I did some horrible things in https://github.com/cldellow/datasette-ui-extras/issues/2 to enable this in my plugin -- example here: https://dux-demo.fly.dev/cooking/posts?_facet=owner_user_id&owner_user_id=67 The implementation relies on two things: - a `filters_from_request` hook that adds a good human description (unfortunately, without the benefit of the CSS styling you mention) - doing something evil to hijack the `exact` and `not` operators in the `Filters` class. We can't leave them as is, or we'll get 2 human descriptions -- the built-in Datasette one and the one from my plugin. We can't remove them, or the filters UI will stop supporting the `=` and `!=` operators This got me thinking: it'd be neat if the list of operators that the filters UI supported wasn't a closed set. A motivating example: adding a geospatial `NEAR` operator. Ideally it'd take two arguments - a target point and a radius, so you could express a filter like `find me all rows whose lat/lng are within 10km of 43.4516° N, 80.4925° W`. (Optionally, the UI could be enhanced if the geonames database was loaded and queried, so a user could say `find me all rows whose lat/lng are within 10km of Kitchener, ON`, and the city gets translated to a lat/lng for them) | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Show foreign key label when filtering 1186696202 | |
905904540 | https://github.com/simonw/datasette/issues/859#issuecomment-905904540 | https://api.github.com/repos/simonw/datasette/issues/859 | IC_kwDOBm6k_c41_wGc | brandonrobertz 2670795 | 2021-08-25T21:59:14Z | 2021-08-25T21:59:55Z | CONTRIBUTOR | I did two tests: one with 1000 5-30mb DBs and a second with 20 multi gig DBs. For the second, I created them like so: `for i in {1..20}; do sqlite-generate db$i.db --tables ${i}00 --rows 100,2000 --columns 5,100 --pks 0 --fks 0; done` This was for deciding whether to use lots of small DBs or to group things into a smaller number of bigger DBs. The second strategy wins. By simply persisting the `_internal` DB to disk, I was able to avoid most of the performance issues I was experiencing previously. (To do this, I changed the `datasette/internal_db.py:init_internal_db` creates to if not exists, and changed the `_internal` DB instantiation in `datasette/app.py:Datasette.__init__` to a path with `is_mutable=True`.) Super rough, but the pages now load so I can continue testing ideas. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Database page loads too slowly with many large tables (due to table counts) 642572841 | |
770150526 | https://github.com/dogsheep/github-to-sqlite/issues/51#issuecomment-770150526 | https://api.github.com/repos/dogsheep/github-to-sqlite/issues/51 | MDEyOklzc3VlQ29tbWVudDc3MDE1MDUyNg== | daniel-butler 22578954 | 2021-01-30T03:44:19Z | 2021-01-30T03:47:24Z | CONTRIBUTOR | I don't have much experience with github's rate limiting. In my day job we use the [tenacity library](https://github.com/jd/tenacity) to handle http errors we get. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | github-to-sqlite should handle rate limits better 703246031 | |
1407470429 | https://github.com/simonw/datasette/pull/2008#issuecomment-1407470429 | https://api.github.com/repos/simonw/datasette/issues/2008 | IC_kwDOBm6k_c5T5Etd | cldellow 193185 | 2023-01-28T19:34:29Z | 2023-01-28T19:34:29Z | CONTRIBUTOR | I don't know how/if you do automated tests for performance, so I haven't changed any of the tests. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | array facet: don't materialize unnecessary columns 1560982210 | |
393106520 | https://github.com/simonw/datasette/issues/276#issuecomment-393106520 | https://api.github.com/repos/simonw/datasette/issues/276 | MDEyOklzc3VlQ29tbWVudDM5MzEwNjUyMA== | russss 45057 | 2018-05-30T10:09:25Z | 2018-05-30T10:09:25Z | CONTRIBUTOR | I don't think it's unreasonable to only support spatialite geometries in a coordinate reference system which is at least transformable to WGS84. It would be nice to support different CRSes in the database so conversion to spatialite from the source data is lossless. I think the working CRS for datasette should be WGS84 though (leaflet requires it, for example) - it's just a case of calling `ST_Transform(geom, 4326)` on the column while we're loading the data. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Handle spatialite geometry columns better 324835838 | |
1111712953 | https://github.com/simonw/datasette/issues/1728#issuecomment-1111712953 | https://api.github.com/repos/simonw/datasette/issues/1728 | IC_kwDOBm6k_c5CQ2S5 | wragge 127565 | 2022-04-28T03:48:36Z | 2022-04-28T03:48:36Z | CONTRIBUTOR | I don't think that'd work for this project. The db is very big, and my aim was to have an environment where researchers could be making use of the data, but be easily able to add corrections to the HTR/OCR extracted data when they came across problems. It's in its immutable (!) form here: https://sydney-stock-exchange-xqtkxtd5za-ts.a.run.app/stock_exchange/stocks | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Writable canned queries fail with useless non-error against immutable databases 1218133366 | |
833132571 | https://github.com/simonw/datasette/issues/1300#issuecomment-833132571 | https://api.github.com/repos/simonw/datasette/issues/1300 | MDEyOklzc3VlQ29tbWVudDgzMzEzMjU3MQ== | abdusco 3243482 | 2021-05-06T00:16:50Z | 2021-05-06T00:18:05Z | CONTRIBUTOR | I ended up using some JS as a workaround. First, add a JS file in `metadata.yaml`: ```yaml extra_js_urls: - '/static/app.js' ``` then inside the script, find the blob download links and replace `.blob` extension in the url with `.jpg` and replace the links with `<img/>` elements. You need to add an output formatter to serve `BLOB` columns as JPG. You can find the code in the first post. ~~Replacing `.blob` -> `.jpg` might not even be necessary, because browsers only care about the mime type, so you only need to serve the binary content with the right `content-type` header.~~. You need to replace the extension, otherwise the output renderer will not run. ```js window.addEventListener('DOMContentLoaded', () => { function renderBlobImages() { document.querySelectorAll('a[href*=".blob"]').forEach(el => { const img = document.createElement('img'); img.className = 'blob-image'; img.loading = 'lazy'; img.src = el.href.replace('.blob', '.jpg'); el.parentElement.replaceChild(img, el); }); } renderBlobImages(); }); ``` while this does the job, I'd prefer handling this in Python where it belongs. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Make row available to `render_cell` plugin hook 860625833 | |
1316041828 | https://github.com/simonw/datasette/pull/1893#issuecomment-1316041828 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5OcTRk | bgrins 95570 | 2022-11-15T23:51:35Z | 2022-11-15T23:51:35Z | CONTRIBUTOR | I experimented with autocompleting the actual schema in https://github.com/bgrins/datasette/commit/8431c98850c7a552dbcde2a4dd0c3dc942a97d25, but it would need some work (current problems with it listed in the commit message there) | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
655239728 | https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655239728 | https://api.github.com/repos/simonw/sqlite-utils/issues/118 | MDEyOklzc3VlQ29tbWVudDY1NTIzOTcyOA== | tsibley 79913 | 2020-07-08T02:16:42Z | 2020-07-08T02:16:42Z | CONTRIBUTOR | I fixed my original oops by moving the `DELETE FROM $table` out of the chunking loop and repushed. I think this change can be considered in isolation from issues around transactions, which I discuss next. I wanted to make the DELETE + INSERT happen all in the same transaction so it was robust, but that was more complicated than I expected. The transaction handling in the Database/Table classes isn't systematic, and this poses big hurdles to making `Table.insert_all` (or other operations) consistent and robust in the face of errors. For example, I wanted to do this (whitespace ignored in diff, so indentation change not highlighted): ```diff diff --git a/sqlite_utils/db.py b/sqlite_utils/db.py index d6b9ecf..4107ceb 100644 --- a/sqlite_utils/db.py +++ b/sqlite_utils/db.py @@ -1028,6 +1028,11 @@ class Table(Queryable): batch_size = max(1, min(batch_size, SQLITE_MAX_VARS // num_columns)) self.last_rowid = None self.last_pk = None + with self.db.conn: + # Explicit BEGIN is necessary because Python's sqlite3 doesn't + # issue implicit BEGINs for DDL, only DML. We mix DDL and DML + # below and might execute DDL first, e.g. for table creation. + self.db.conn.execute("BEGIN") if truncate and self.exists(): self.db.conn.execute("DELETE FROM [{}];".format(self.name)) for chunk in chunks(itertools.chain([first_record], records), batch_size): @@ -1038,7 +1043,11 @@ class Table(Queryable): # Use the first batch to derive the table names column_types = suggest_column_types(chunk) column_types.update(columns or {}) - self.create( + # Not self.create() because that is wrapped in its own + # transaction and Python's sqlite3 doesn't support + # nested transactions. + self.db.create_table( + … | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add insert --truncate option 651844316 | |
760950128 | https://github.com/dogsheep/twitter-to-sqlite/pull/55#issuecomment-760950128 | https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/55 | MDEyOklzc3VlQ29tbWVudDc2MDk1MDEyOA== | jacobian 21148 | 2021-01-15T13:44:52Z | 2021-01-15T13:44:52Z | CONTRIBUTOR | I found and fixed another bug, this one around importing the tweets table. @simonw let me know if you'd prefer this broken out into multiple PRs, happy to do that if it makes review/merging easier. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Fix archive imports 779211940 | |
573389669 | https://github.com/simonw/sqlite-utils/issues/74#issuecomment-573389669 | https://api.github.com/repos/simonw/sqlite-utils/issues/74 | MDEyOklzc3VlQ29tbWVudDU3MzM4OTY2OQ== | jayvdb 15092 | 2020-01-12T07:21:17Z | 2020-01-12T07:21:17Z | CONTRIBUTOR | I guess there is some extra flag for ` CliRunner.invoke` to check exitcode and raise the exception, or that should be an extra assert added. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Test failures on openSUSE 15.1: AssertionError: Explicit other_table and other_column 546073980 | |
405026800 | https://github.com/simonw/datasette/issues/294#issuecomment-405026800 | https://api.github.com/repos/simonw/datasette/issues/294 | MDEyOklzc3VlQ29tbWVudDQwNTAyNjgwMA== | russss 45057 | 2018-07-14T14:24:31Z | 2018-07-14T14:24:31Z | CONTRIBUTOR | I had a quick look at this in relation to #343 and I feel like it might be worth modelling the inspected table metadata internally as an object rather than a dict. (We'd still have to serialise it back to JSON.) There are a few places where we rely on the structure of this metadata dict for various reasons, including in templates (and potentially also in user templates). It would be nice to have a reasonably well defined API for accessing metadata internally so that it's clearer what we're breaking. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | inspect should record column types 327365110 | |
835491318 | https://github.com/simonw/datasette/pull/1296#issuecomment-835491318 | https://api.github.com/repos/simonw/datasette/issues/1296 | MDEyOklzc3VlQ29tbWVudDgzNTQ5MTMxOA== | blairdrummond 10801138 | 2021-05-08T19:59:01Z | 2021-05-08T19:59:01Z | CONTRIBUTOR | I have also found that ubuntu has fewer vulnerabilities than the buster based images. ``` ➜ ~ docker pull python:3-buster ➜ ~ trivy image python:3-buster | head 2021-04-28T17:14:29.313-0400 INFO Detecting Debian vulnerabilities... 2021-04-28T17:14:29.393-0400 INFO Trivy skips scanning programming language libraries because no supported file was detected python:3-buster (debian 10.9) ============================= Total: 1621 (UNKNOWN: 13, LOW: 1106, MEDIUM: 343, HIGH: 145, CRITICAL: 14) +------------------------------+---------------------+----------+------------------------------+---------------+--------------------------------------------------------------+ | LIBRARY | VULNERABILITY ID | SEVERITY | INSTALLED VERSION | FIXED VERSION | TITLE | +------------------------------+---------------------+----------+------------------------------+---------------+--------------------------------------------------------------+ ``` | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Dockerfile: use Ubuntu 20.10 as base 855446829 | |
527211047 | https://github.com/simonw/sqlite-utils/pull/57#issuecomment-527211047 | https://api.github.com/repos/simonw/sqlite-utils/issues/57 | MDEyOklzc3VlQ29tbWVudDUyNzIxMTA0Nw== | amjith 49260 | 2019-09-02T17:30:43Z | 2019-09-02T17:30:43Z | CONTRIBUTOR | I have merged the other PR (#56) into this one. I have incorporated your suggestions. Cheers! | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add triggers while enabling FTS 487987958 | |
527209840 | https://github.com/simonw/sqlite-utils/pull/56#issuecomment-527209840 | https://api.github.com/repos/simonw/sqlite-utils/issues/56 | MDEyOklzc3VlQ29tbWVudDUyNzIwOTg0MA== | amjith 49260 | 2019-09-02T17:23:21Z | 2019-09-02T17:23:21Z | CONTRIBUTOR | I have updated the other PR with the changes from this one and added tests. I have also changed the escaping from double quotes to brackets. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Escape the table name in populate_fts and search. 487847945 | |
392825746 | https://github.com/simonw/datasette/issues/276#issuecomment-392825746 | https://api.github.com/repos/simonw/datasette/issues/276 | MDEyOklzc3VlQ29tbWVudDM5MjgyNTc0Ng== | russss 45057 | 2018-05-29T15:42:53Z | 2018-05-29T15:42:53Z | CONTRIBUTOR | I haven't had time to look further into this, but if doing this as a plugin results in useful hooks then I think we should do it that way. We could always require the plugin as a standard dependency. I think this is going to result in quite a bit of refactoring anyway so it's a good time to add hooks regardless. On the other hand, if we have to add lots of specialist hooks for it then maybe it's worth integrating into the core. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Handle spatialite geometry columns better 324835838 | |
1059647114 | https://github.com/simonw/sqlite-utils/issues/412#issuecomment-1059647114 | https://api.github.com/repos/simonw/sqlite-utils/issues/412 | IC_kwDOCGYnMM4_KO6K | eyeseast 25778 | 2022-03-05T01:54:24Z | 2022-03-05T01:54:24Z | CONTRIBUTOR | I haven't tried this, but it looks like Pandas has a method for this: https://pandas.pydata.org/docs/reference/api/pandas.read_sql_query.html | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Optional Pandas integration 1160182768 | |
1151887842 | https://github.com/simonw/datasette/issues/1528#issuecomment-1151887842 | https://api.github.com/repos/simonw/datasette/issues/1528 | IC_kwDOBm6k_c5EqGni | eyeseast 25778 | 2022-06-10T03:23:08Z | 2022-06-10T03:23:08Z | CONTRIBUTOR | I just put together a version of this in a plugin: https://github.com/eyeseast/datasette-query-files. Happy to have any feedback. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add new `"sql_file"` key to Canned Queries in metadata? 1060631257 | |
1616853644 | https://github.com/simonw/datasette/issues/2087#issuecomment-1616853644 | https://api.github.com/repos/simonw/datasette/issues/2087 | IC_kwDOBm6k_c5gXzqM | asg017 15178711 | 2023-07-02T22:00:48Z | 2023-07-02T22:00:48Z | CONTRIBUTOR | I just saw in the docs that Dasette auto-detects `settings.json`: > settings.json - settings that would normally be passed using --setting - here they should be stored as a JSON object of key/value pairs > [*Source*](https://docs.datasette.io/en/stable/settings.html#:~:text=settings.json%20%2D%20settings%20that%20would%20normally%20be%20passed%20using%20%2D%2Dsetting%20%2D%20here%20they%20should%20be%20stored%20as%20a%20JSON%20object%20of%20key/value%20pairs) | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | `--settings settings.json` option 1765870617 | |
1030740653 | https://github.com/simonw/sqlite-utils/issues/399#issuecomment-1030740653 | https://api.github.com/repos/simonw/sqlite-utils/issues/399 | IC_kwDOCGYnMM49b9qt | eyeseast 25778 | 2022-02-06T02:57:17Z | 2022-02-06T02:57:17Z | CONTRIBUTOR | I like the idea of having stock conversions you could import. I'd actually move them to a dedicated module (call it `sqlite_utils.conversions` or something), because it's different from other utilities. Maybe they even take configuration, or they're composable. ```python from sqlite_utils.conversions import LongitudeLatitude db["places"].insert( { "name": "London", "lng": -0.118092, "lat": 51.509865, }, conversions={"point": LongitudeLatitude("lng", "lat")}, ) ``` I would definitely use that for every CSV I get with lat/lng columns where I actually need GeoJSON. | {"total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Make it easier to insert geometries, with documentation and maybe code 1124731464 | |
803631102 | https://github.com/simonw/datasette/issues/942#issuecomment-803631102 | https://api.github.com/repos/simonw/datasette/issues/942 | MDEyOklzc3VlQ29tbWVudDgwMzYzMTEwMg== | mroswell 192568 | 2021-03-21T17:48:42Z | 2021-03-21T17:48:42Z | CONTRIBUTOR | I like this idea. Though it might be nice to have some kind of automated system from database to file, so that developers could easily track diffs. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Support column descriptions in metadata.json 681334912 | |
913001282 | https://github.com/simonw/datasette/pull/1455#issuecomment-913001282 | https://api.github.com/repos/simonw/datasette/issues/1455 | IC_kwDOBm6k_c42a0tC | ctb 51016 | 2021-09-04T16:31:24Z | 2021-09-04T16:31:24Z | CONTRIBUTOR | I love it! maybe 'researchers' instead? Or 'scientists and researchers'? | {"total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add scientists to target groups 988325628 |
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
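For reference, here is a minimal sketch of querying this schema directly with Python's standard-library `sqlite3` module, reproducing the kind of filter shown in the rows above (comments by contributors, ordered by body). The database file name `github.db` is an assumption; substitute the path to your own export.

```python
import sqlite3

# Assumed file name for the exported database; adjust as needed.
conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row  # access columns by name

rows = conn.execute(
    """
    SELECT id, user, created_at, body
    FROM issue_comments
    WHERE author_association = ?
    ORDER BY body
    LIMIT 5
    """,
    ("CONTRIBUTOR",),
)

for row in rows:
    # Columns match the CREATE TABLE statement above.
    print(row["id"], row["created_at"], (row["body"] or "")[:60])

conn.close()
```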