issue_comments
928 rows where author_association = "NONE" sorted by user
id | html_url | issue_url | node_id | user ▼ | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
1074256603 | https://github.com/simonw/sqlite-utils/issues/417#issuecomment-1074256603 | https://api.github.com/repos/simonw/sqlite-utils/issues/417 | IC_kwDOCGYnMM5AB9rb | blaine 9954 | 2022-03-21T18:19:41Z | 2022-03-21T18:19:41Z | NONE | That makes sense; just a little hint that points folks towards doing the right thing might be helpful! fwiw, the reason I was using jq in the first place was just a quick way to extract one attribute from an actual JSON array. When I initially imported it, I got a table with a bunch of embedded JSON values, rather than a native table, because each array entry had two attributes, one with the data I _actually_ wanted. Not sure how common a use-case this is, though (and easily fixed, aside from the jq weirdness!) | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | insert fails on JSONL with whitespace 1175744654 | |
1239516561 | https://github.com/dogsheep/pocket-to-sqlite/issues/10#issuecomment-1239516561 | https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/10 | IC_kwDODLZ_YM5J4YWR | ashanan 11887 | 2022-09-07T15:07:38Z | 2022-09-07T15:07:38Z | NONE | Thanks! | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | When running `auth` command, don't overwrite an existing auth.json file 1246826792 | |
925300720 | https://github.com/simonw/sqlite-utils/issues/328#issuecomment-925300720 | https://api.github.com/repos/simonw/sqlite-utils/issues/328 | IC_kwDOCGYnMM43Jvfw | gravis 12752 | 2021-09-22T20:21:33Z | 2021-09-22T20:21:33Z | NONE | Wow, that was fast! Thank you! | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Invalid JSON output when no rows 1004613267 | |
617208503 | https://github.com/simonw/datasette/issues/176#issuecomment-617208503 | https://api.github.com/repos/simonw/datasette/issues/176 | MDEyOklzc3VlQ29tbWVudDYxNzIwODUwMw== | nkirsch 12976 | 2020-04-21T14:16:24Z | 2020-04-21T14:16:24Z | NONE | @eads I'm interested in helping, if there's still a need... | {"total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add GraphQL endpoint 285168503 | |
1229449018 | https://github.com/simonw/sqlite-utils/issues/474#issuecomment-1229449018 | https://api.github.com/repos/simonw/sqlite-utils/issues/474 | IC_kwDOCGYnMM5JR-c6 | hubgit 14294 | 2022-08-28T12:40:13Z | 2022-08-28T12:40:13Z | NONE | Creating the table before inserting is a useful workaround, thanks. It does require figuring out the `create table` syntax and listing all the fields manually, though, which loses some of the magic of sqlite-utils. I was expecting to find an option like `--headers=foo,bar` (or `--header-row='foo\tbar'`, if that would be easier) - not necessarily that exact syntax, but something that would essentially be treated the same as having a header row in the file. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add an option for specifying column names when inserting CSV data 1353074021 | |
1236200834 | https://github.com/simonw/sqlite-utils/issues/239#issuecomment-1236200834 | https://api.github.com/repos/simonw/sqlite-utils/issues/239 | IC_kwDOCGYnMM5Jru2C | hubgit 14294 | 2022-09-03T21:26:32Z | 2022-09-03T21:26:32Z | NONE | I was looking for something like this today, for extracting columns containing objects (and arrays of objects) into separate tables. Would it make sense (especially for the fields containing arrays of objects) to create a one-to-many relationship, where each row of the newly created table would contain the id of the row that originally contained it? If the extracted objects have a unique id and are repeated, it could even create a many-to-many relationship, with a third table for the joins. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | sqlite-utils extract could handle nested objects 816526538 | |
571412923 | https://github.com/dogsheep/github-to-sqlite/issues/16#issuecomment-571412923 | https://api.github.com/repos/dogsheep/github-to-sqlite/issues/16 | MDEyOklzc3VlQ29tbWVudDU3MTQxMjkyMw== | jayvdb 15092 | 2020-01-07T03:06:46Z | 2020-01-07T03:06:46Z | NONE | I re-tried after doing `auth`, and I get the same result. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Exception running first command: IndexError: list index out of range 546051181 | |
602136481 | https://github.com/dogsheep/github-to-sqlite/issues/16#issuecomment-602136481 | https://api.github.com/repos/dogsheep/github-to-sqlite/issues/16 | MDEyOklzc3VlQ29tbWVudDYwMjEzNjQ4MQ== | jayvdb 15092 | 2020-03-22T02:08:57Z | 2020-03-22T02:08:57Z | NONE | I'd love to be using your library as a better cached gh layer for a new library I have built, replacing large parts of the very ugly https://github.com/jayvdb/pypidb/blob/master/pypidb/_github.py , and then probably being able to rebuild the setuppy chunk as a feature here at a later stage. I would also need tokenless and netrc support, but I would be happy to add those bits. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Exception running first command: IndexError: list index out of range 546051181 | |
974607456 | https://github.com/simonw/datasette/issues/1522#issuecomment-974607456 | https://api.github.com/repos/simonw/datasette/issues/1522 | IC_kwDOBm6k_c46F1Rg | mrchrisadams 17906 | 2021-11-20T07:10:11Z | 2021-11-20T07:10:11Z | NONE | As a sanity check, would it be worth trying to push the multi-process container to another knative / Cloud Run / Tekton provider? I have a somewhat similar use case for a future project, so I've been very grateful for you sharing all the progress in this issue. As I understand it, Scaleway also offers a very similar product using what appear to be many similar components, which might at least show whether it's an issue with more than one knative-based FaaS provider: https://www.scaleway.com/en/serverless-containers/ https://developers.scaleway.com/en/products/containers/api/#main-features | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Deploy a live instance of demos/apache-proxy 1058896236 | |
906015471 | https://github.com/dogsheep/dogsheep-photos/issues/7#issuecomment-906015471 | https://api.github.com/repos/dogsheep/dogsheep-photos/issues/7 | IC_kwDOD079W842ALLv | dkam 18232 | 2021-08-26T02:01:01Z | 2021-08-26T02:01:01Z | NONE | Perceptual hashes might be what you're after: http://phash.org | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Integrate image content hashing 602585497 | |
1035717429 | https://github.com/dogsheep/dogsheep-photos/pull/31#issuecomment-1035717429 | https://api.github.com/repos/dogsheep/dogsheep-photos/issues/31 | IC_kwDOD079W849u8s1 | harperreed 18504 | 2022-02-11T01:55:38Z | 2022-02-11T01:55:38Z | NONE | I would love this merged! | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Update for Big Sur 771511344 | |
1141711418 | https://github.com/simonw/sqlite-utils/issues/26#issuecomment-1141711418 | https://api.github.com/repos/simonw/sqlite-utils/issues/26 | IC_kwDOCGYnMM5EDSI6 | nileshtrivedi 19304 | 2022-05-31T06:21:15Z | 2022-05-31T06:21:15Z | NONE | I ran into this. My use case has a JSON file with array of `book` objects with a key called `reviews` which is also an array of objects. My JSON is human-edited and does not specify IDs for either books or reviews. Because sqlite-utils does not support inserting nested objects, I instead have to maintain two separate CSV files with `id` column in `books.csv` and `book_id` column in reviews.csv. I think the right way to declare the relationship while inserting a JSON might be to describe the relationship: `sqlite-utils insert data.db books mydata.json --hasmany reviews --hasone author --manytomany tags` This is relying on the assumption that foreign keys can point to `rowid` primary key. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Mechanism for turning nested JSON into foreign keys / many-to-many 455486286 | |
348252037 | https://github.com/simonw/datasette/issues/153#issuecomment-348252037 | https://api.github.com/repos/simonw/datasette/issues/153 | MDEyOklzc3VlQ29tbWVudDM0ODI1MjAzNw== | ftrain 20264 | 2017-11-30T16:59:00Z | 2017-11-30T16:59:00Z | NONE | WOW! On Thu, Nov 30, 2017 at 11:47 AM, Simon Willison wrote: > Remaining work on this now lives in a milestone: > https://github.com/simonw/datasette/milestone/6 | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Ability to customize presentation of specific columns in HTML view 276842536 | |
524300388 | https://github.com/simonw/sqlite-utils/issues/54#issuecomment-524300388 | https://api.github.com/repos/simonw/sqlite-utils/issues/54 | MDEyOklzc3VlQ29tbWVudDUyNDMwMDM4OA== | ftrain 20264 | 2019-08-23T12:41:09Z | 2019-08-23T12:41:09Z | NONE | Extremely cool and easy to understand. Thank you! | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Ability to list views, and to access db["view_name"].rows / rows_where / etc 480961330 | |
712855389 | https://github.com/simonw/datasette/issues/991#issuecomment-712855389 | https://api.github.com/repos/simonw/datasette/issues/991 | MDEyOklzc3VlQ29tbWVudDcxMjg1NTM4OQ== | furilo 24740 | 2020-10-20T13:36:41Z | 2020-10-20T13:36:41Z | NONE | Here is one quick sketch (done in Figma :P) for an idea: a possible filter to switch between showing all tables from all databases, or grouping tables by database. (the switch is interactive) All tables: https://www.figma.com/proto/BjFrMroEtmVx6EeRjvSrox/Datasette-test?node-id=1%3A2&viewport=536%2C348%2C0.5&scaling=min-zoom Grouped: https://www.figma.com/proto/BjFrMroEtmVx6EeRjvSrox/Datasette-test?node-id=3%3A974&viewport=536%2C348%2C0.5&scaling=min-zoom When only 1 database: https://www.figma.com/proto/BjFrMroEtmVx6EeRjvSrox/Datasette-test?node-id=1%3A162&viewport=536%2C348%2C0.5&scaling=min-zoom If this is useful, I can send some more suggestions/sketches. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Redesign application homepage 714377268 | |
791089881 | https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-791089881 | https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5 | MDEyOklzc3VlQ29tbWVudDc5MTA4OTg4MQ== | maxhawkins 28565 | 2021-03-05T02:03:19Z | 2021-03-05T02:03:19Z | NONE | I just tried to run this on a small VPS instance with 2GB of memory and it crashed out of memory while processing a 12GB mbox from Takeout. Is it possible to stream the emails to sqlite instead of loading it all into memory and upserting at once? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | WIP: Add Gmail takeout mbox import 813880401 | |
849708617 | https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-849708617 | https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5 | MDEyOklzc3VlQ29tbWVudDg0OTcwODYxNw== | maxhawkins 28565 | 2021-05-27T15:01:42Z | 2021-05-27T15:01:42Z | NONE | Any updates? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | WIP: Add Gmail takeout mbox import 813880401 | |
884672647 | https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-884672647 | https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5 | IC_kwDODFE5qs40uwiH | maxhawkins 28565 | 2021-07-22T05:56:31Z | 2021-07-22T14:03:08Z | NONE | How does this commit look? https://github.com/maxhawkins/google-takeout-to-sqlite/commit/72802a83fee282eb5d02d388567731ba4301050d It seems that Takeout's mbox format is pretty simple, so we can get away with just splitting the file on lines beginning with `From `. My commit just splits the file every time a line starts with `From ` and uses `email.message_from_bytes` to parse each chunk. I was able to load a 12GB takeout mbox without the program using more than a couple hundred MB of memory during the import process. It does make us lose the progress bar, but maybe I can add that back in a later commit. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | WIP: Add Gmail takeout mbox import 813880401 | |
885022230 | https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-885022230 | https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5 | IC_kwDODFE5qs40wF4W | maxhawkins 28565 | 2021-07-22T15:51:46Z | 2021-07-22T15:51:46Z | NONE | One thing I noticed is this importer doesn't save attachments along with the body of the emails. It would be nice if those got stored as blobs in a separate attachments table so attachments can be included while fetching search results. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | WIP: Add Gmail takeout mbox import 813880401 | |
885094284 | https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-885094284 | https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5 | IC_kwDODFE5qs40wXeM | maxhawkins 28565 | 2021-07-22T17:41:32Z | 2021-07-22T17:41:32Z | NONE | I added a follow-up commit that deals with emails that don't have a `Date` header: https://github.com/maxhawkins/google-takeout-to-sqlite/commit/4bc70103582c10802c85a523ef1e99a8a2154aa9 | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | WIP: Add Gmail takeout mbox import 813880401 | |
888075098 | https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-888075098 | https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5 | IC_kwDODFE5qs407vNa | maxhawkins 28565 | 2021-07-28T07:18:56Z | 2021-07-28T07:18:56Z | NONE | > I'm not sure why but my most recent import, when displayed in Datasette, looks like this: > > <img alt="mbox__mbox_emails__753_446_rows" width="574" src="https://user-images.githubusercontent.com/9599/109985836-0ab00080-7cba-11eb-97d5-0631a0835b61.png"> I did some investigation into this issue and made a fix [here](https://github.com/dogsheep/google-takeout-to-sqlite/pull/8/commits/8ee555c2889a38ff42b95664ee074b4a01a82f06). The problem was that some messages (like gchat logs) don't have a `Message-Id` and we need to use `X-GM-THRID` as the pkey instead. @simonw While looking into this I found something unexpected about how sqlite_utils handles upserts if the pkey column is `None`. When the pkey is NULL I'd expect the function to either use rowid or throw an exception. Instead, it seems upsert_all creates a row where all columns are NULL instead of using the values provided as parameters. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | WIP: Add Gmail takeout mbox import 813880401 | |
894581223 | https://github.com/dogsheep/google-takeout-to-sqlite/pull/8#issuecomment-894581223 | https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/8 | IC_kwDODFE5qs41Ujnn | maxhawkins 28565 | 2021-08-07T00:57:48Z | 2021-08-07T00:57:48Z | NONE | Just added two more fixes: * Added parsing for rfc 2047 encoded unicode headers * Body is now stored as TEXT rather than a BLOB regardless of what order the messages are parsed in. I was able to run this on my Takeout export and everything seems to work fine. @simonw let me know if this looks good to merge. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add Gmail takeout mbox import (v2) 954546309 | |
896378525 | https://github.com/dogsheep/google-takeout-to-sqlite/pull/8#issuecomment-896378525 | https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/8 | IC_kwDODFE5qs41baad | maxhawkins 28565 | 2021-08-10T23:28:45Z | 2021-08-10T23:28:45Z | NONE | I added parsing of text/html emails using BeautifulSoup. Around half of the emails in my archive don't include a text/plain payload so adding html parsing makes a good chunk of them searchable. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add Gmail takeout mbox import (v2) 954546309 | |
1003437288 | https://github.com/dogsheep/google-takeout-to-sqlite/pull/8#issuecomment-1003437288 | https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/8 | IC_kwDODFE5qs47zzzo | maxhawkins 28565 | 2021-12-31T19:06:20Z | 2021-12-31T19:06:20Z | NONE | > @maxhawkins how hard would it be to add an entry to the table that includes the HTML version of the email, if it exists? I just attempted your PR branch on a very small mbox file, and it worked great. My use case is a research project and I need to access more than just the body plain text. Shouldn't be hard. The easiest way is probably to remove the `if body.content_type == "text/html"` clause from [utils.py:254](https://github.com/dogsheep/google-takeout-to-sqlite/pull/8/commits/8e6d487b697ce2e8ad885acf613a157bfba84c59#diff-25ad9dd1ced1b8bfc37fda8444819c803232c08891e4af3d4064aa205d8174eaR254) and just return content directly without parsing. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add Gmail takeout mbox import (v2) 954546309 | |
620401172 | https://github.com/simonw/datasette/issues/736#issuecomment-620401172 | https://api.github.com/repos/simonw/datasette/issues/736 | MDEyOklzc3VlQ29tbWVudDYyMDQwMTE3Mg== | aborruso 30607 | 2020-04-28T06:09:28Z | 2020-04-28T06:09:28Z | NONE | > Would you mind trying publishing your database using one of the other options - Heroku, Cloud Run or https://fly.io/ - and see if you have the same bug there? It works in Heroku, so it might be a bug with datasette-publish-now. Thank you | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | strange behavior using accented characters 606720674 | |
620401443 | https://github.com/simonw/datasette/issues/735#issuecomment-620401443 | https://api.github.com/repos/simonw/datasette/issues/735 | MDEyOklzc3VlQ29tbWVudDYyMDQwMTQ0Mw== | aborruso 30607 | 2020-04-28T06:10:20Z | 2020-04-28T06:10:20Z | NONE | It works in Heroku, so it might be a bug with datasette-publish-now. Thank you | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Error when I click on "View and edit SQL" 605806386 | |
621008152 | https://github.com/simonw/datasette/issues/744#issuecomment-621008152 | https://api.github.com/repos/simonw/datasette/issues/744 | MDEyOklzc3VlQ29tbWVudDYyMTAwODE1Mg== | aborruso 30607 | 2020-04-29T06:05:02Z | 2020-04-29T06:05:02Z | NONE | Hi @simonw , I have installed it and I have the below errors. > Is it possible that your /tmp directory is on a different volume from the template folder? That could cause a problem with the symlinks. No, /tmp folder is in the same volume. Thank you ``` Traceback (most recent call last): File "/home/aborruso/.local/lib/python3.7/site-packages/datasette/utils/__init__.py", line 607, in link_or_copy_directory shutil.copytree(src, dst, copy_function=os.link) File "/usr/lib/python3.7/shutil.py", line 365, in copytree raise Error(errors) shutil.Error: [('/var/youtubeComunePalermo/processing/./template/base.html', '/tmp/tmpcqv_1i5d/templates/base.html', "[Errno 18] Invalid cross-device link: '/var/youtubeComunePalermo/processing/./template/base.html' -> '/tmp/tmpcqv_1i5d/templates/base.html'"), ('/var/youtubeComunePalermo/processing/./template/index.html', '/tmp/tmpcqv_1i5d/templates/index.html', "[Errno 18] Invalid cross-device link: '/var/youtubeComunePalermo/processing/./template/index.html' -> '/tmp/tmpcqv_1i5d/templates/index.html'")] During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/aborruso/.local/bin/datasette", line 8, in <module> sys.exit(cli()) File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 829, in __call__ return self.main(*args, **kwargs) File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 782, in main rv = self.invoke(ctx) File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "/home/aborruso/.local/lib/python3.7/site-pa… | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | link_or_copy_directory() error - Invalid cross-device link 608058890 | |
621011554 | https://github.com/simonw/datasette/issues/744#issuecomment-621011554 | https://api.github.com/repos/simonw/datasette/issues/744 | MDEyOklzc3VlQ29tbWVudDYyMTAxMTU1NA== | aborruso 30607 | 2020-04-29T06:17:26Z | 2020-04-29T06:17:26Z | NONE | A stupid note: I have no `tmpcqv_1i5d` folder in `/tmp`. It seems to me that it does not create any `/tmp/tmpcqv_1i5d/templates` folder (or any other folder inside /tmp) | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | link_or_copy_directory() error - Invalid cross-device link 608058890 | |
621030783 | https://github.com/simonw/datasette/issues/744#issuecomment-621030783 | https://api.github.com/repos/simonw/datasette/issues/744 | MDEyOklzc3VlQ29tbWVudDYyMTAzMDc4Mw== | aborruso 30607 | 2020-04-29T07:16:27Z | 2020-04-29T07:16:27Z | NONE | Hi @simonw it's debian as Windows Subsystem for Linux ``` PRETTY_NAME="Pengwin" NAME="Pengwin" VERSION_ID="10" VERSION="10 (buster)" ID=debian ID_LIKE=debian HOME_URL="https://github.com/whitewaterfoundry/Pengwin" SUPPORT_URL="https://github.com/whitewaterfoundry/Pengwin" BUG_REPORT_URL="https://github.com/whitewaterfoundry/Pengwin" VERSION_CODENAME=buster ``` | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | link_or_copy_directory() error - Invalid cross-device link 608058890 | |
625060561 | https://github.com/simonw/datasette/issues/744#issuecomment-625060561 | https://api.github.com/repos/simonw/datasette/issues/744 | MDEyOklzc3VlQ29tbWVudDYyNTA2MDU2MQ== | aborruso 30607 | 2020-05-07T06:38:24Z | 2020-05-07T06:38:24Z | NONE | Hi @simonw, I could probably try to do it in Python on Windows. I do not like to do these things in a Windows environment, but the WSL Linux env (in which I do a lot of great things) is probably not an environment that will be tested for datasette. On Windows I shouldn't have any problems. Am I right? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | link_or_copy_directory() error - Invalid cross-device link 608058890 | |
625066073 | https://github.com/simonw/datasette/issues/744#issuecomment-625066073 | https://api.github.com/repos/simonw/datasette/issues/744 | MDEyOklzc3VlQ29tbWVudDYyNTA2NjA3Mw== | aborruso 30607 | 2020-05-07T06:53:09Z | 2020-05-07T06:53:09Z | NONE | @simonw another error starting from Windows. I run ``` datasette publish heroku -n comunepa --template-dir template commissioniComunePalermo.db ``` And I have ``` Traceback (most recent call last): File "c:\python37\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "c:\python37\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\Users\aborr\AppData\Roaming\Python\Python37\Scripts\datasette.exe\__main__.py", line 9, in <module> File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py", line 829, in __call__ return self.main(*args, **kwargs) File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py", line 782, in main rv = self.invoke(ctx) File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\click\core.py", line 610, in invoke return callback(*args, **kwargs) File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\datasette\publish\heroku.py", line 53, in heroku line.split()[0] for line in check_output(["heroku", "plugins"]).splitlines() File "c:\python37\lib\subprocess.py", line 395, in check_output **kwargs).stdout File "c:\python37\lib\subprocess.py", line 472, in run with Popen(*popenargs, **kwargs) as process: File "c:\python37\lib\subprocess.py", line 775, in __init__ restore_signals, start_new_session) File "c:\python37\lib\subprocess.py", line 1178, in _execute_child startupinfo) FileNotFoundError: [WinError 2]… | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | link_or_copy_directory() error - Invalid cross-device link 608058890 | |
625083715 | https://github.com/simonw/datasette/issues/744#issuecomment-625083715 | https://api.github.com/repos/simonw/datasette/issues/744 | MDEyOklzc3VlQ29tbWVudDYyNTA4MzcxNQ== | aborruso 30607 | 2020-05-07T07:34:18Z | 2020-05-07T07:34:18Z | NONE | I'm not very strong in Windows; I use debian (inside WSL). However, these are the steps I followed: - I installed Python 3 for Windows (I have 3.7.3); - I installed the Heroku CLI for win64 and logged in; - I installed datasette by running `python -m pip install --upgrade --user datasette`. It's a very basic Python env that I do not otherwise use; this time it's only to reach my goal: trying to publish using a custom template | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | link_or_copy_directory() error - Invalid cross-device link 608058890 | |
625091976 | https://github.com/simonw/datasette/issues/744#issuecomment-625091976 | https://api.github.com/repos/simonw/datasette/issues/744 | MDEyOklzc3VlQ29tbWVudDYyNTA5MTk3Ng== | aborruso 30607 | 2020-05-07T07:51:25Z | 2020-05-07T07:51:25Z | NONE | I have installed `heroku plugins:install heroku-builds`, but I have the same error. Then I have removed from `datasette\publish\heroku.py` ```python # Check for heroku-builds plugin plugins = [ line.split()[0] for line in check_output(["heroku", "plugins"]).splitlines() ] if b"heroku-builds" not in plugins: click.echo( "Publishing to Heroku requires the heroku-builds plugin to be installed." ) click.confirm( "Install it? (this will run `heroku plugins:install heroku-builds`)", abort=True, ) call(["heroku", "plugins:install", "heroku-builds"]) ``` And now I have ``` Traceback (most recent call last): File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\datasette\publish\heroku.py", line 210, in temporary_heroku_directory yield File "C:\Users\aborr\AppData\Roaming\Python\Python37\site-packages\datasette\publish\heroku.py", line 96, in heroku list_output = check_output(["heroku", "apps:list", "--json"]).decode( File "c:\python37\lib\subprocess.py", line 395, in check_output **kwargs).stdout File "c:\python37\lib\subprocess.py", line 472, in run with Popen(*popenargs, **kwargs) as process: File "c:\python37\lib\subprocess.py", line 775, in __init__ restore_signals, start_new_session) File "c:\python37\lib\subprocess.py", line 1178, in _execute_child startupinfo) FileNotFoundError: [WinError 2] The specified file could not be found During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:\python37\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "c:\python37\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\Users\aborr\AppData\Roaming\Python\Python37\Scripts\datasette.exe\__main__.py", line 9, in <module> File "C:\Users\aborr\AppData\Roaming\Python\Python… | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | link_or_copy_directory() error - Invalid cross-device link 608058890 | |
632249565 | https://github.com/simonw/datasette/issues/744#issuecomment-632249565 | https://api.github.com/repos/simonw/datasette/issues/744 | MDEyOklzc3VlQ29tbWVudDYzMjI0OTU2NQ== | aborruso 30607 | 2020-05-21T17:47:40Z | 2020-05-21T17:47:40Z | NONE | @simonw can I test it now? What must I do to update it? Thank you | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | link_or_copy_directory() error - Invalid cross-device link 608058890 | |
632255088 | https://github.com/simonw/datasette/issues/744#issuecomment-632255088 | https://api.github.com/repos/simonw/datasette/issues/744 | MDEyOklzc3VlQ29tbWVudDYzMjI1NTA4OA== | aborruso 30607 | 2020-05-21T17:58:51Z | 2020-05-21T17:58:51Z | NONE | Thank you very much!! I will try it and write back here | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | link_or_copy_directory() error - Invalid cross-device link 608058890 | |
632305868 | https://github.com/simonw/datasette/issues/744#issuecomment-632305868 | https://api.github.com/repos/simonw/datasette/issues/744 | MDEyOklzc3VlQ29tbWVudDYzMjMwNTg2OA== | aborruso 30607 | 2020-05-21T19:43:23Z | 2020-05-21T19:43:23Z | NONE | @simonw now I have ``` Traceback (most recent call last): File "/home/aborruso/.local/bin/datasette", line 8, in <module> sys.exit(cli()) File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 829, in __call__ return self.main(*args, **kwargs) File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 782, in main rv = self.invoke(ctx) File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/home/aborruso/.local/lib/python3.7/site-packages/datasette/publish/heroku.py", line 103, in heroku extra_metadata, File "/usr/lib/python3.7/contextlib.py", line 112, in __enter__ return next(self.gen) File "/home/aborruso/.local/lib/python3.7/site-packages/datasette/publish/heroku.py", line 191, in temporary_heroku_directory os.path.join(tmp.name, "templates"), File "/home/aborruso/.local/lib/python3.7/site-packages/datasette/utils/__init__.py", line 605, in link_or_copy_directory shutil.copytree(src, dst, copy_function=os.link, dirs_exist_ok=True) TypeError: copytree() got an unexpected keyword argument 'dirs_exist_ok' ``` Do I must open a new issue? Thank you | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | link_or_copy_directory() error - Invalid cross-device link 608058890 | |
634283355 | https://github.com/simonw/datasette/issues/744#issuecomment-634283355 | https://api.github.com/repos/simonw/datasette/issues/744 | MDEyOklzc3VlQ29tbWVudDYzNDI4MzM1NQ== | aborruso 30607 | 2020-05-26T21:15:34Z | 2020-05-26T21:15:34Z | NONE | > Oh no! It looks like `dirs_exist_ok` is Python 3.8 only. This is a bad fix, it needs to work on older Python's too. Re-opening. Thank you very much | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | link_or_copy_directory() error - Invalid cross-device link 608058890 | |
634446887 | https://github.com/simonw/datasette/issues/744#issuecomment-634446887 | https://api.github.com/repos/simonw/datasette/issues/744 | MDEyOklzc3VlQ29tbWVudDYzNDQ0Njg4Nw== | aborruso 30607 | 2020-05-27T06:01:28Z | 2020-05-27T06:01:28Z | NONE | Dear @simonw thank you for your time, now IT WORKS!!! I hope that this edit to datasette code is not for an exceptional case (my PC configuration) and that it will be useful to other users. Thank you again!! | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | link_or_copy_directory() error - Invalid cross-device link 608058890 | |
635386935 | https://github.com/simonw/datasette/issues/744#issuecomment-635386935 | https://api.github.com/repos/simonw/datasette/issues/744 | MDEyOklzc3VlQ29tbWVudDYzNTM4NjkzNQ== | aborruso 30607 | 2020-05-28T14:32:53Z | 2020-05-28T14:32:53Z | NONE | Wow, I'm in some way very proud! | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | link_or_copy_directory() error - Invalid cross-device link 608058890 | |
710768396 | https://github.com/simonw/sqlite-utils/issues/69#issuecomment-710768396 | https://api.github.com/repos/simonw/sqlite-utils/issues/69 | MDEyOklzc3VlQ29tbWVudDcxMDc2ODM5Ng== | aborruso 30607 | 2020-10-17T07:46:59Z | 2020-10-17T07:46:59Z | NONE | Great @simonw thank you very much | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Feature request: enable extensions loading 534507142 | |
710778368 | https://github.com/simonw/sqlite-utils/issues/188#issuecomment-710778368 | https://api.github.com/repos/simonw/sqlite-utils/issues/188 | MDEyOklzc3VlQ29tbWVudDcxMDc3ODM2OA== | aborruso 30607 | 2020-10-17T08:52:58Z | 2020-10-17T08:52:58Z | NONE | I asked a stupid question. If I run ``` sqlite-utils :memory: "select spatialite_version()" --load-extension=/usr/local/lib/mod_spatialite.so ``` I get `[{"spatialite_version()": "5.0.0"}]`. Thank you for this great tool | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | About loading spatialite 723708310 | |
778008752 | https://github.com/simonw/datasette/issues/1220#issuecomment-778008752 | https://api.github.com/repos/simonw/datasette/issues/1220 | MDEyOklzc3VlQ29tbWVudDc3ODAwODc1Mg== | aborruso 30607 | 2021-02-12T06:37:34Z | 2021-02-12T06:37:34Z | NONE | I have used my own path; I'm running it from the folder in which I have the db. Do I have to use an absolute path? Do I have to create exactly that folder? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Installing datasette via docker: Path 'fixtures.db' does not exist 806743116 | |
778467759 | https://github.com/simonw/datasette/issues/1220#issuecomment-778467759 | https://api.github.com/repos/simonw/datasette/issues/1220 | MDEyOklzc3VlQ29tbWVudDc3ODQ2Nzc1OQ== | aborruso 30607 | 2021-02-12T21:35:17Z | 2021-02-12T21:35:17Z | NONE | Thank you | {"total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Installing datasette via docker: Path 'fixtures.db' does not exist 806743116 | |
1279924827 | https://github.com/simonw/datasette/issues/1845#issuecomment-1279924827 | https://api.github.com/repos/simonw/datasette/issues/1845 | IC_kwDOBm6k_c5MShpb | kindly 30636 | 2022-10-16T08:54:53Z | 2022-10-16T08:54:53Z | NONE | > It was part of a larger idea I was exploring around ensuring Datasette could be used to start interacting with CSV/JSON data out-of-the-box, without needing to first convert that data into SQLite using separate tools. This would be great. My organization deals with very nested JSON open data and I have been wanting to find a way to hook into datasette so that the analysts do not have to first convert to sqlite first. This can kind of be done with datasette-lite. From this random nested JSON API: https://api.nobelprize.org/v1/prize.json You can use the API of https://flatterer.herokuapp.com to return a multi table sqlite database: https://lite.datasette.io/?url=https://flatterer.herokuapp.com/api/convert?output_format=sqlite%26file_url=https://api.nobelprize.org/v1/prize.json This is great and fun, but it would be great if there was some plugin mechanism that you could feed a local datasette a nested JSON file directly, possibly hooking into other flattening tools for this. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Reconsider the Datasette first-run experience 1410305897 | |
782745199 | https://github.com/simonw/datasette/issues/782#issuecomment-782745199 | https://api.github.com/repos/simonw/datasette/issues/782 | MDEyOklzc3VlQ29tbWVudDc4Mjc0NTE5OQ== | frankieroberto 30665 | 2021-02-20T20:32:03Z | 2021-02-20T20:32:03Z | NONE | I think it’s a good idea if the top level item of the response JSON is always an object, rather than an array, at least as the default. Mainly because it allows you to add extra keys in a backwards-compatible way. Also just seems more expected somehow. The API design guidance for the UK government also recommends this: https://www.gov.uk/guidance/gds-api-technical-and-data-standards#use-json I also strongly dislike having versioned APIs (eg with a `/v1/` path prefix), as it invariably means that old versions stop working at some point, even though the bit of the API you’re using might not have changed at all. | {"total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1} | Redesign default .json format 627794879 | |
782746755 | https://github.com/simonw/datasette/issues/782#issuecomment-782746755 | https://api.github.com/repos/simonw/datasette/issues/782 | MDEyOklzc3VlQ29tbWVudDc4Mjc0Njc1NQ== | frankieroberto 30665 | 2021-02-20T20:44:05Z | 2021-02-20T20:44:05Z | NONE | Minor suggestion: rename `size` query param to `limit`, to better reflect that it’s a maximum number of rows returned rather than a guarantee of getting that number, and also for consistency with the SQL keyword? I like the idea of specifying a limit of 0 if you don’t want any rows data - and returning an empty array under the `rows` key seems fine. Have you given any thought as to whether to pretty print (format with spaces) the output or not? Can be useful for debugging/exploring in a browser or other basic tools which don’t parse the JSON. Could be default (can’t be much bigger with gzip?) or opt-in. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Redesign default .json format 627794879 | |
783265830 | https://github.com/simonw/datasette/issues/782#issuecomment-783265830 | https://api.github.com/repos/simonw/datasette/issues/782 | MDEyOklzc3VlQ29tbWVudDc4MzI2NTgzMA== | frankieroberto 30665 | 2021-02-22T10:21:14Z | 2021-02-22T10:21:14Z | NONE | @simonw: > The problem there is that ?_size=x isn't actually doing the same thing as the SQL limit keyword. Interesting! Although I don't think it matters too much what the underlying implementation is - I more meant that `limit` is familiar to developers conceptually as "up to and including this number, if they exist", whereas "size" is potentially more ambiguous. However, it's probably no big deal either way. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Redesign default .json format 627794879 | |
951731255 | https://github.com/simonw/datasette/pull/1204#issuecomment-951731255 | https://api.github.com/repos/simonw/datasette/issues/1204 | IC_kwDOBm6k_c44ukQ3 | 20after4 30934 | 2021-10-26T09:01:28Z | 2021-10-26T09:01:28Z | NONE | > Writing the tests will be a bit tricky since we need to confirm that the `include_table_top(datasette, database, actor, table)` arguments were all passed correctly but the only thing we get back from the plugin is a list of templates. Maybe encode those values into the template names somehow? Why not return a data structure instead of just a template name? I've already done some custom hacking to modify datasette but the plugin mechanism you are building here would be much cleaner than what I've built. I'd be happy to help with testing this PR and fleshing it out further if you are still considering merging this. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | WIP: Plugin includes 793002853 | |
951740637 | https://github.com/simonw/datasette/issues/878#issuecomment-951740637 | https://api.github.com/repos/simonw/datasette/issues/878 | IC_kwDOBm6k_c44umjd | 20after4 30934 | 2021-10-26T09:12:15Z | 2021-10-26T09:12:15Z | NONE | This sounds really ambitious but also really awesome. I like the idea that basically any piece of a page could be selectively replaced. It sort of sounds like a python asyncio version of https://github.com/observablehq/runtime | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | New pattern for views that return either JSON or HTML, available for plugins 648435885 | |
981966693 | https://github.com/simonw/datasette/issues/1532#issuecomment-981966693 | https://api.github.com/repos/simonw/datasette/issues/1532 | IC_kwDOBm6k_c46h59l | 20after4 30934 | 2021-11-29T19:56:52Z | 2021-11-29T19:56:52Z | NONE | FWIW I've written some web components that consume the json api and I think it's a really nice way to work with datasette. I like the combination with datasette+sqlite as a back-end feeding data to a front-end that's entirely javascript + html. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Use datasette-table Web Component to guide the design of the JSON API for 1.0 1065429936 | |
981980048 | https://github.com/simonw/datasette/issues/1304#issuecomment-981980048 | https://api.github.com/repos/simonw/datasette/issues/1304 | IC_kwDOBm6k_c46h9OQ | 20after4 30934 | 2021-11-29T20:13:53Z | 2021-11-29T20:14:11Z | NONE | There isn't any way to do this with sqlite as far as I know. The only option is to insert the right number of ? placeholders into the sql template and then provide an array of values. | {"total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Document how to send multiple values for "Named parameters" 863884805 | |
982745406 | https://github.com/simonw/datasette/issues/1532#issuecomment-982745406 | https://api.github.com/repos/simonw/datasette/issues/1532 | IC_kwDOBm6k_c46k4E- | 20after4 30934 | 2021-11-30T15:28:57Z | 2021-11-30T15:28:57Z | NONE | It's a really great API and the documentation is really great too. Honestly, in more than 20 years of professional experience, I haven't worked with any software API that was more of a joy to use. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Use datasette-table Web Component to guide the design of the JSON API for 1.0 1065429936 | |
988461884 | https://github.com/simonw/datasette/issues/1304#issuecomment-988461884 | https://api.github.com/repos/simonw/datasette/issues/1304 | IC_kwDOBm6k_c466rs8 | 20after4 30934 | 2021-12-08T03:20:26Z | 2021-12-08T03:20:26Z | NONE | The easiest or most straightforward thing to do is to use named parameters like: ```sql select * where key IN (:p1, :p2, :p3) ``` And simply construct the list of placeholders dynamically based on the number of values. Doing this is possible with datasette if you forgo "canned queries" and just use the raw query endpoint and pass the query sql, along with p1, p2 ... in the request. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Document how to send multiple values for "Named parameters" 863884805 | |
988463455 | https://github.com/simonw/datasette/issues/1304#issuecomment-988463455 | https://api.github.com/repos/simonw/datasette/issues/1304 | IC_kwDOBm6k_c466sFf | 20after4 30934 | 2021-12-08T03:23:14Z | 2021-12-08T03:23:14Z | NONE | I actually think it would be a useful thing to add support for in datasette. It wouldn't be difficult to unwind an array of params and add the placeholders automatically. | {"total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Document how to send multiple values for "Named parameters" 863884805 | |
988468238 | https://github.com/simonw/datasette/issues/1528#issuecomment-988468238 | https://api.github.com/repos/simonw/datasette/issues/1528 | IC_kwDOBm6k_c466tQO | 20after4 30934 | 2021-12-08T03:35:45Z | 2021-12-08T03:35:45Z | NONE | FWIW I implemented something similar with a bit of plugin code: ```python @hookimpl def canned_queries(datasette: Datasette, database: str) -> Mapping[str, str]: # load "canned queries" from the filesystem under # www/sql/db/query_name.sql queries = {} sqldir = Path(__file__).parent.parent / "sql" if database: sqldir = sqldir / database if not sqldir.is_dir(): return queries for f in sqldir.glob('*.sql'): try: sql = f.read_text('utf8').strip() if not len(sql): log(f"Skipping empty canned query file: {f}") continue queries[f.stem] = { "sql": sql } except OSError as err: log(err) return queries ``` | {"total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0} | Add new `"sql_file"` key to Canned Queries in metadata? 1060631257 | |
941274088 | https://github.com/dogsheep/swarm-to-sqlite/issues/12#issuecomment-941274088 | https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/12 | IC_kwDODD6af844GrPo | fs111 33631 | 2021-10-12T18:31:57Z | 2021-10-12T18:31:57Z | NONE | I am running into the same problem. Is there any workaround? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 403 when getting token 951817328 | |
1008279307 | https://github.com/simonw/datasette/pull/1574#issuecomment-1008279307 | https://api.github.com/repos/simonw/datasette/issues/1574 | IC_kwDOBm6k_c48GR8L | fs111 33631 | 2022-01-09T11:26:06Z | 2022-01-09T11:26:06Z | NONE | @fgregg my thinking was backwards compatibility. I don't know what people do to their builds, I just wanted a smaller image for my use case. @simonw any chance to take a look at this? If there is no interest, feel free to close the PR | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | introduce new option for datasette package to use a slim base image 1084193403 | |
1084216224 | https://github.com/simonw/datasette/pull/1574#issuecomment-1084216224 | https://api.github.com/repos/simonw/datasette/issues/1574 | IC_kwDOBm6k_c5An9Og | fs111 33631 | 2022-03-31T07:45:25Z | 2022-03-31T07:45:25Z | NONE | @simonw I like that you want to go "slim by default". Do you want another PR for that or should I just wait? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | introduce new option for datasette package to use a slim base image 1084193403 | |
1214765672 | https://github.com/simonw/datasette/pull/1574#issuecomment-1214765672 | https://api.github.com/repos/simonw/datasette/issues/1574 | IC_kwDOBm6k_c5IZ9po | fs111 33631 | 2022-08-15T08:49:31Z | 2022-08-15T08:49:31Z | NONE | closing as this is now the default | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | introduce new option for datasette package to use a slim base image 1084193403 | |
592999503 | https://github.com/simonw/sqlite-utils/issues/46#issuecomment-592999503 | https://api.github.com/repos/simonw/sqlite-utils/issues/46 | MDEyOklzc3VlQ29tbWVudDU5Mjk5OTUwMw== | chrishas35 35075 | 2020-02-29T22:08:20Z | 2020-02-29T22:08:20Z | NONE | @simonw any thoughts on allowing extracts to specify the lookup column name? If I'm understanding the documentation right, `.lookup()` allows you to define the "value" column (the documentation uses name), but when you use the `extracts` keyword as part of `.insert()`, `.upsert()` etc. the lookup must be done against a column named "value". I have an existing lookup table that I've populated with columns "id" and "name" as opposed to "id" and "value", and it seems I can't use `extracts=`, unless I'm missing something... Initial thought on how to do this would be to allow the dictionary value to be a (table name, column name) tuple... so: ``` table = db.table("trees", extracts={"species_id": ("Species", "name")}) ``` I haven't dug too much into the existing code yet, but does this make sense? Worth doing? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | extracts= option for insert/update/etc 471780443 | |
593122605 | https://github.com/simonw/sqlite-utils/issues/89#issuecomment-593122605 | https://api.github.com/repos/simonw/sqlite-utils/issues/89 | MDEyOklzc3VlQ29tbWVudDU5MzEyMjYwNQ== | chrishas35 35075 | 2020-03-01T17:33:11Z | 2020-03-01T17:33:11Z | NONE | If you're happy with the proposed implementation, I have code & tests written that I'll get ready for a PR. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Ability to customize columns used by extracts= feature 573578548 | |
803502424 | https://github.com/simonw/sqlite-utils/issues/249#issuecomment-803502424 | https://api.github.com/repos/simonw/sqlite-utils/issues/249 | MDEyOklzc3VlQ29tbWVudDgwMzUwMjQyNA== | prabhur 36287 | 2021-03-21T02:43:32Z | 2021-03-21T02:43:32Z | NONE | > Did you run `enable-fts` before you inserted the data? > > If so you'll need to run `populate-fts` after the insert to populate the FTS index. > > A better solution may be to add `--create-triggers` to the `enable-fts` command to add triggers that will automatically keep the index updated as you insert new records. Wow. Wasn't expecting a response this quick, especially during a weekend. :-) Sincerely appreciate it. I tried the `populate-fts` and that did the trick. My bad for not consulting the docs again. I think I forgot to add that step when I automated the workflow. Thanks for the suggestion. I'll close this issue. Have a great weekend and many many thanks for creating these suite of tools around sqlite. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Full text search possibly broken? 836963850 | |
1261194164 | https://github.com/simonw/datasette/issues/1624#issuecomment-1261194164 | https://api.github.com/repos/simonw/datasette/issues/1624 | IC_kwDOBm6k_c5LLEu0 | palfrey 38532 | 2022-09-28T16:54:22Z | 2022-09-28T16:54:22Z | NONE | https://github.com/simonw/datasette-cors seems to work around this | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Index page `/` has no CORS headers 1122427321 | |
633234781 | https://github.com/dogsheep/dogsheep-photos/issues/20#issuecomment-633234781 | https://api.github.com/repos/dogsheep/dogsheep-photos/issues/20 | MDEyOklzc3VlQ29tbWVudDYzMzIzNDc4MQ== | dmd 41439 | 2020-05-24T13:56:13Z | 2020-05-24T13:56:13Z | NONE | As that seems to be closed, can you give a hint on how to make this work? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Ability to serve thumbnailed Apple Photo from its place on disk 613006393 | |
1537744000 | https://github.com/simonw/sqlite-utils/issues/540#issuecomment-1537744000 | https://api.github.com/repos/simonw/sqlite-utils/issues/540 | IC_kwDOCGYnMM5bqByA | pquentin 42327 | 2023-05-08T04:56:12Z | 2023-05-08T04:56:12Z | NONE | Hey @simonw, urllib3 maintainer here :wave: Sorry for breaking your CI. I understand you may prefer to pin the Python version, but note that specifying just `python: "3"` will get you the latest. We use that in urllib3: https://github.com/urllib3/urllib3/blob/main/.readthedocs.yml I can open PRs to sqlite-utils / datasette if you're interested | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | sphinx.builders.linkcheck build error 1699184583 | |
472844001 | https://github.com/simonw/datasette/issues/409#issuecomment-472844001 | https://api.github.com/repos/simonw/datasette/issues/409 | MDEyOklzc3VlQ29tbWVudDQ3Mjg0NDAwMQ== | Uninen 43100 | 2019-03-14T13:04:20Z | 2019-03-14T13:04:42Z | NONE | It seems this affects the Datasette Publish -site as well: https://github.com/simonw/datasette-publish-support/issues/3 | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Zeit API v1 does not work for new users - need to migrate to v2 408376825 | |
1316289392 | https://github.com/simonw/datasette/issues/1886#issuecomment-1316289392 | https://api.github.com/repos/simonw/datasette/issues/1886 | IC_kwDOBm6k_c5OdPtw | rtanglao 45195 | 2022-11-16T03:54:17Z | 2022-11-16T03:58:56Z | NONE | Happy Birthday Datasette! Thanks Simon!! I use datasette on everything most notably [my flickr metadata SQLite DB](https://www.dropbox.com/s/6j10e2vohp2j5kf/roland2019-2020.db?dl=0) to make art. Datasette lite on my 2019 flickr metadata is super helpful too: https://lite.datasette.io/?csv=https%3A%2F%2Fraw.githubusercontent.com%2Frtanglao%2Frt-flickr-sqlite-csv%2Fmain%2F2019-roland-flickr-metadata.csv Even better datasette lite on all firefox support questions from 2021: https://lite.datasette.io/?url=https%3A%2F%2Fraw.githubusercontent.com%2Frtanglao%2Frt-kits-api3%2Fmain%2FYEARLY_CSV_FILES%2F2021-firefox-sumo-questions.db Thanks again Simon! So great! What a gift to the world!!!!!! | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Call for birthday presents: if you're using Datasette, let us know how you're using it here 1447050738 | |
697973420 | https://github.com/simonw/datasette/issues/619#issuecomment-697973420 | https://api.github.com/repos/simonw/datasette/issues/619 | MDEyOklzc3VlQ29tbWVudDY5Nzk3MzQyMA== | obra 45416 | 2020-09-23T21:07:58Z | 2020-09-23T21:07:58Z | NONE | I've just run into this after crafting a complex query and discovered that hitting back loses my query. Even showing me the whole bad query would be a huge improvement over the current status quo. | {"total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | "Invalid SQL" page should let you edit the SQL 520655983 | |
698110186 | https://github.com/simonw/datasette/issues/123#issuecomment-698110186 | https://api.github.com/repos/simonw/datasette/issues/123 | MDEyOklzc3VlQ29tbWVudDY5ODExMDE4Ng== | obra 45416 | 2020-09-24T04:49:51Z | 2020-09-24T04:49:51Z | NONE | As a half-measure, I'd get value out of being able to upload a CSV and have datasette run csv-to-sqlite on it. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Datasette serve should accept paths/URLs to CSVs and other file formats 275125561 | |
698174957 | https://github.com/simonw/datasette/issues/123#issuecomment-698174957 | https://api.github.com/repos/simonw/datasette/issues/123 | MDEyOklzc3VlQ29tbWVudDY5ODE3NDk1Nw== | obra 45416 | 2020-09-24T07:42:05Z | 2020-09-24T07:42:05Z | NONE | Oh. Awesome. On Thu, Sep 24, 2020 at 12:28:53AM -0700, Simon Willison wrote: > @obra there's a plugin for that! https://github.com/simonw/ > datasette-upload-csvs > > > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub, or unsubscribe.* > -- | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Datasette serve should accept paths/URLs to CSVs and other file formats 275125561 | |
489353316 | https://github.com/simonw/datasette/issues/187#issuecomment-489353316 | https://api.github.com/repos/simonw/datasette/issues/187 | MDEyOklzc3VlQ29tbWVudDQ4OTM1MzMxNg== | carsonyl 46059 | 2019-05-04T18:36:36Z | 2019-05-04T18:36:36Z | NONE | Hi @simonw - I just hit this issue when trying out Datasette after your PyCon talk today. Datasette is pinned to Sanic 0.7.0, but it looks like 0.8.0 added the option to remove the uvloop dependency for Windows by having an environment variable `SANIC_NO_UVLOOP` at install time. Maybe that'll be sufficient before a port to Starlette? | {"total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0} | Windows installation error 309033998 | |
620841496 | https://github.com/simonw/datasette/issues/633#issuecomment-620841496 | https://api.github.com/repos/simonw/datasette/issues/633 | MDEyOklzc3VlQ29tbWVudDYyMDg0MTQ5Ng== | nryberg 46165 | 2020-04-28T20:37:50Z | 2020-04-28T20:37:50Z | NONE | Using the Heroku web interface, you can set the WEB_CONCURRENCY = 1 ![image](https://user-images.githubusercontent.com/46165/80535319-352c8100-8966-11ea-9d4f-df2622ec8bff.png) | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Publish to Heroku is broken: "WARNING: You must pass the application as an import string to enable 'reload' or 'workers" 522334771 | |
911772943 | https://github.com/dogsheep/evernote-to-sqlite/issues/14#issuecomment-911772943 | https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/14 | IC_kwDOEhK-wc42WI0P | step21 46968 | 2021-09-02T14:53:11Z | 2021-09-02T14:53:11Z | NONE | Additionally, assuming the line numbers match up with the provided enex file, the mentioned line plus one before and after is as follows: ``` <![CDATA[>]]> </span></div> <div style="padding: 0px; font-family: Arial, sans-serif; font-size: 12px; line-height: 16px; white-space: pre-wrap;"><br style=" padding: 0px;"/> ``` | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | xml.etree.ElementTree.Parse Error - mismatched tag 986829194 | |
374872202 | https://github.com/simonw/datasette/issues/186#issuecomment-374872202 | https://api.github.com/repos/simonw/datasette/issues/186 | MDEyOklzc3VlQ29tbWVudDM3NDg3MjIwMg== | stefanocudini 47107 | 2018-03-21T09:07:22Z | 2018-03-21T09:07:22Z | NONE | --debug is perfect tnk | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | proposal new option to disable user agents cache 306811513 | |
346974336 | https://github.com/simonw/datasette/issues/141#issuecomment-346974336 | https://api.github.com/repos/simonw/datasette/issues/141 | MDEyOklzc3VlQ29tbWVudDM0Njk3NDMzNg== | janimo 50138 | 2017-11-26T00:00:35Z | 2017-11-26T00:00:35Z | NONE | FWIW I worked around this by setting TMPDIR to ~/tmp before running the command. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | datasette publish can fail if /tmp is on a different device 275814941 | |
346987395 | https://github.com/simonw/datasette/issues/124#issuecomment-346987395 | https://api.github.com/repos/simonw/datasette/issues/124 | MDEyOklzc3VlQ29tbWVudDM0Njk4NzM5NQ== | janimo 50138 | 2017-11-26T06:24:08Z | 2017-11-26T06:24:08Z | NONE | Are there performance gains when using immutable as opposed to read-only? From what I see other processes can still modify the DB when immutable, but there are no change notifications. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Option to open readonly but not immutable 275125805 | |
347123991 | https://github.com/simonw/datasette/issues/124#issuecomment-347123991 | https://api.github.com/repos/simonw/datasette/issues/124 | MDEyOklzc3VlQ29tbWVudDM0NzEyMzk5MQ== | janimo 50138 | 2017-11-27T09:25:15Z | 2017-11-27T09:25:15Z | NONE | That's the only reference to immutable I saw as well, making me think that there may be no perceivable advantages over simply using mode=ro. Since the database is never or seldom updated the change notifications should not impact performance. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Option to open readonly but not immutable 275125805 | |
974711959 | https://github.com/simonw/datasette/issues/1426#issuecomment-974711959 | https://api.github.com/repos/simonw/datasette/issues/1426 | IC_kwDOBm6k_c46GOyX | tannewt 52649 | 2021-11-20T21:11:51Z | 2021-11-20T21:11:51Z | NONE | I think another thing would be to make `/pages/robots.txt` work. That way you can use jinja to generate a desired robots.txt. I'm using it to allow the main index and what it links to to be crawled (but not the database pages directly.) | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Manage /robots.txt in Datasette core, block robots by default 964322136 | |
1115542067 | https://github.com/simonw/datasette/issues/1732#issuecomment-1115542067 | https://api.github.com/repos/simonw/datasette/issues/1732 | IC_kwDOBm6k_c5CfdIz | tannewt 52649 | 2022-05-03T01:50:44Z | 2022-05-03T01:50:44Z | NONE | I haven’t set one up unfortunately. My time is very limited because we just had a baby. On Mon, May 2, 2022, at 6:42 PM, Simon Willison wrote: > > > Thanks, this definitely sounds like a bug. Do you have simple steps to reproduce this? > > > — > Reply to this email directly, view it on GitHub <https://github.com/simonw/datasette/issues/1732#issuecomment-1115533820>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/AAAM3KIY5L6FENZ22XANTHDVICAAXANCNFSM5UYOTKQA>. > You are receiving this because you authored the thread.Message ID: ***@***.***> > | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Custom page variables aren't decoded 1221849746 | |
754911290 | https://github.com/simonw/datasette/issues/1171#issuecomment-754911290 | https://api.github.com/repos/simonw/datasette/issues/1171 | MDEyOklzc3VlQ29tbWVudDc1NDkxMTI5MA== | rcoup 59874 | 2021-01-05T21:31:15Z | 2021-01-05T21:31:15Z | NONE | We did this for [Sno](https://sno.earth) under macOS — it's a PyInstaller binary/setup which uses [Packages](http://s.sudre.free.fr/Software/Packages/about.html) for packaging. * [Building & Signing](https://github.com/koordinates/sno/blob/master/platforms/Makefile#L67-L95) * [Packaging & Notarizing](https://github.com/koordinates/sno/blob/master/platforms/Makefile#L121-L215) * [Github Workflow](https://github.com/koordinates/sno/blob/master/.github/workflows/build.yml#L228-L269) has the CI side of it FYI (if you ever get to it) for Windows you need to get a code signing certificate. And if you want automated CI, you'll want to get an "EV CodeSigning for HSM" certificate from GlobalSign, which then lets you put the certificate into Azure Key Vault. Which you can use with [azuresigntool](https://github.com/vcsjones/AzureSignTool) to sign your code & installer. (Non-EV certificates are a waste of time, the user still gets big warnings at install time). | {"total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0} | GitHub Actions workflow to build and sign macOS binary executables 778450486 | |
344424382 | https://github.com/simonw/datasette/issues/93#issuecomment-344424382 | https://api.github.com/repos/simonw/datasette/issues/93 | MDEyOklzc3VlQ29tbWVudDM0NDQyNDM4Mg== | atomotic 67420 | 2017-11-14T22:42:16Z | 2017-11-14T22:42:16Z | NONE | tried quickly, this seems working: ``` ~ pip3 install pyinstaller ~ pyinstaller -F --add-data /usr/local/lib/python3.6/site-packages/datasette/templates:datasette/templates --add-data /usr/local/lib/python3.6/site-packages/datasette/static:datasette/static /usr/local/bin/datasette ~ du -h dist/datasette 6.8M dist/datasette ~ file dist/datasette dist/datasette: Mach-O 64-bit executable x86_64 ``` | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Package as standalone binary 273944952 | |
344430299 | https://github.com/simonw/datasette/issues/93#issuecomment-344430299 | https://api.github.com/repos/simonw/datasette/issues/93 | MDEyOklzc3VlQ29tbWVudDM0NDQzMDI5OQ== | atomotic 67420 | 2017-11-14T23:06:33Z | 2017-11-14T23:06:33Z | NONE | i will look better tomorrow, it's late i surely made some mistake https://asciinema.org/a/ZyAWbetrlriDadwWyVPUWB94H | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Package as standalone binary 273944952 | |
344516406 | https://github.com/simonw/datasette/issues/93#issuecomment-344516406 | https://api.github.com/repos/simonw/datasette/issues/93 | MDEyOklzc3VlQ29tbWVudDM0NDUxNjQwNg== | atomotic 67420 | 2017-11-15T08:09:41Z | 2017-11-15T08:09:41Z | NONE | actually you can use travis to build for linux/macos and [appveyor](https://www.appveyor.com/) to build for windows. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Package as standalone binary 273944952 | |
1006708046 | https://github.com/dogsheep/dogsheep-photos/pull/36#issuecomment-1006708046 | https://api.github.com/repos/dogsheep/dogsheep-photos/issues/36 | IC_kwDOD079W848ASVO | scoates 71983 | 2022-01-06T16:04:46Z | 2022-01-06T16:04:46Z | NONE | This one got me, today, too. 👍 | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Correct naming of tool in readme 988493790 | |
645515103 | https://github.com/dogsheep/twitter-to-sqlite/issues/47#issuecomment-645515103 | https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/47 | MDEyOklzc3VlQ29tbWVudDY0NTUxNTEwMw== | hpk42 73579 | 2020-06-17T17:30:01Z | 2020-06-17T17:30:01Z | NONE | It's the one with python3.7:: >>> sqlite3.sqlite_version '3.11.0' On Wed, Jun 17, 2020 at 10:24 -0700, Simon Willison wrote: > That means your version of SQLite is old enough that it doesn't support the FTS5 extension. > > Could you share what operating system you're running, and what the output is that you get from running this? > > python -c 'import sqlite3; print(sqlite3.connect(":memory:").execute("select sqlite_version()").fetchone()[0])' > > I can teach this tool to fall back on FTS4 if FTS5 isn't available. > > -- > You are receiving this because you authored the thread. > Reply to this email directly or view it on GitHub: > https://github.com/dogsheep/twitter-to-sqlite/issues/47#issuecomment-645512127 | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Fall back to FTS4 if FTS5 is not available 639542974 | |
414860009 | https://github.com/simonw/datasette/issues/267#issuecomment-414860009 | https://api.github.com/repos/simonw/datasette/issues/267 | MDEyOklzc3VlQ29tbWVudDQxNDg2MDAwOQ== | annapowellsmith 78156 | 2018-08-21T23:57:51Z | 2018-08-21T23:57:51Z | NONE | Looks to me like hashing, redirects and caching were documented as part of https://github.com/simonw/datasette/commit/788a542d3c739da5207db7d1fb91789603cdd336#diff-3021b0e065dce289c34c3b49b3952a07 - so perhaps this can be closed? :tada: | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Documentation for URL hashing, redirects and cache policy 323716411 | |
643083451 | https://github.com/simonw/datasette/issues/838#issuecomment-643083451 | https://api.github.com/repos/simonw/datasette/issues/838 | MDEyOklzc3VlQ29tbWVudDY0MzA4MzQ1MQ== | tsibley 79913 | 2020-06-12T06:04:14Z | 2020-06-12T06:04:14Z | NONE | Hmm, I haven't tried removing `ProxyPassReverse`, but it doesn't touch the HTML, which is the issue I'm seeing. You can read the [documentation here](https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxypassreverse). `ProxyPassReverse` is a standard directive when proxying with Apache. I've used it dozens of times with other applications. Looking a little more at the code, I think the issue here is that the behaviour of `base_url` makes sense when Datasette is _mounted_ at a path within a larger application, but not when HTTP requests are being _proxied_ to it. In a _mount_ situation, it is perfectly fine to construct URLs reusing the domain and path from the request. In a _proxy_ situation, it never is, as the domain and path in the request are not the domain and path that the non-proxy client actually needs to use. That is, links which include the Apache → Datasette request origin, `localhost:8001`, instead of the browser → Apache request origin, `example.com`, will be broken. The tests you pointed to also reflect this in two ways: 1. They strip a leading `http://localhost`, allowing such URLs in the facet links to pass, but inclusion of that in a proxy situation would mean the URL is broken. 2. The test client emits direct ASGI events instead of actual proxied HTTP requests. The headers of these ASGI events don't reflect the way an HTTP proxy works; instead they pass through the original request path which contains `base_url`. This works because Datasette responds to requests equivalently at either `/…` or `/{base_url}/…`, which makes some sense in a _mount_ situation but is unconventional (albeit workable) for a proxied app. Apps that support being proxied automatically support being mounted, but apps that only support being mounted don't automatically support being proxied. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Incorrect URLs when served behind a proxy with base_url set 637395097 | |
790857004 | https://github.com/simonw/datasette/issues/1238#issuecomment-790857004 | https://api.github.com/repos/simonw/datasette/issues/1238 | MDEyOklzc3VlQ29tbWVudDc5MDg1NzAwNA== | tsibley 79913 | 2021-03-04T19:06:55Z | 2021-03-04T19:06:55Z | NONE | @rgieseke Ah, that's super helpful. Thank you for the workaround for now! | {"total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Custom pages don't work with base_url setting 813899472 | |
795893813 | https://github.com/simonw/datasette/issues/838#issuecomment-795893813 | https://api.github.com/repos/simonw/datasette/issues/838 | MDEyOklzc3VlQ29tbWVudDc5NTg5MzgxMw== | tsibley 79913 | 2021-03-10T18:43:39Z | 2021-03-10T18:43:39Z | NONE | @simonw Unfortunately this issue as I reported it is not actually solved in version 0.55. Every link which is returned by the `Datasette.absolute_url` method is still wrong, because it uses the request URL as the base. This still includes the suggested facet links and pagination links. What I wrote originally still stands: > Although many of the URLs in the pages are correct (presumably because they either use absolute paths which include `base_url` or relative paths), the faceting and pagination links still use fully-qualified URLs pointing at `http://localhost:8001`. > > I looked into this a little in the source code, and it seems to be an issue anywhere `request.url` or `request.path` is used, as these contain the values for the request between the frontend (Apache) and backend (Datasette) server. Those properties are primarily used via the `path_with_…` family of utility functions and the `Datasette.absolute_url` method. Would you prefer to re-open this issue or have me create a new one? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Incorrect URLs when served behind a proxy with base_url set 637395097 | |
795939998 | https://github.com/simonw/datasette/issues/838#issuecomment-795939998 | https://api.github.com/repos/simonw/datasette/issues/838 | MDEyOklzc3VlQ29tbWVudDc5NTkzOTk5OA== | tsibley 79913 | 2021-03-10T19:16:55Z | 2021-03-10T19:16:55Z | NONE | Nod. The problem with the tests is that they're ignoring the origin (hostname, port) of links. In a reverse proxy situation, the frontend request origin is different than the backend request origin. The problem is Datasette generates links with the backend request origin. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Incorrect URLs when served behind a proxy with base_url set 637395097 | |
795950636 | https://github.com/simonw/datasette/issues/838#issuecomment-795950636 | https://api.github.com/repos/simonw/datasette/issues/838 | MDEyOklzc3VlQ29tbWVudDc5NTk1MDYzNg== | tsibley 79913 | 2021-03-10T19:24:13Z | 2021-03-10T19:24:13Z | NONE | I think this could be solved by one of: 1. Stop generating absolute URLs, e.g. ones that include an origin. Relative URLs with absolute paths are fine, as long as they take `base_url` into account (as they do now, yay!). 2. Extend `base_url` to include the expected frontend origin, and then use that information when generating absolute URLs. 3. Document which HTTP headers the reverse proxy should set (e.g. the `X-Forwarded-*` family of conventional headers) to pass the frontend origin information to Datasette, and then use that information when generating absolute URLs. Option 1 seems like the easiest to me, if you can get away with never having to generate an absolute URL. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Incorrect URLs when served behind a proxy with base_url set 637395097 | |
464341721 | https://github.com/simonw/sqlite-utils/issues/8#issuecomment-464341721 | https://api.github.com/repos/simonw/sqlite-utils/issues/8 | MDEyOklzc3VlQ29tbWVudDQ2NDM0MTcyMQ== | psychemedia 82988 | 2019-02-16T12:08:41Z | 2019-02-16T12:08:41Z | NONE | We also get an error if a column name contains a `.` | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Problems handling column names containing spaces or - 403922644 | |
480621924 | https://github.com/simonw/sqlite-utils/issues/18#issuecomment-480621924 | https://api.github.com/repos/simonw/sqlite-utils/issues/18 | MDEyOklzc3VlQ29tbWVudDQ4MDYyMTkyNA== | psychemedia 82988 | 2019-04-07T19:31:42Z | 2019-04-07T19:31:42Z | NONE | I've just noticed that SQLite lets you IGNORE inserts that collide with a pre-existing key. This can be quite handy if you have a dataset that keeps changing in part, and you don't want to upsert and replace pre-existing PK rows but you do want to ignore collisions to existing PK rows. Do `sqlite_utils` support such (cavalier!) behaviour? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | .insert/.upsert/.insert_all/.upsert_all should add missing columns 413871266 | |
482994231 | https://github.com/simonw/sqlite-utils/issues/8#issuecomment-482994231 | https://api.github.com/repos/simonw/sqlite-utils/issues/8 | MDEyOklzc3VlQ29tbWVudDQ4Mjk5NDIzMQ== | psychemedia 82988 | 2019-04-14T15:04:07Z | 2019-04-14T15:29:33Z | NONE | PLEASE IGNORE THE BELOW... I did a package update and rebuilt the kernel I was working in... may just have been an old version of sqlite_utils, seems to be working now. (Too many containers / too many environments!) Has an issue been reintroduced here with FTS? eg I'm getting an error thrown by spaces in column names here: ``` /usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, upsert, batch_size, column_order) def enable_fts(self, columns, fts_version="FTS5"): --> 329 "Enables FTS on the specified columns" 330 sql = """ 331 CREATE VIRTUAL TABLE "{table}_fts" USING {fts_version} ( ``` when trying an `insert_all`. Also, if a col has a `.` in it, I seem to get: ``` /usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, upsert, batch_size, column_order) 327 jsonify_if_needed(record.get(key, None)) for key in all_columns 328 ) --> 329 result = self.db.conn.execute(sql, values) 330 self.db.conn.commit() 331 self.last_id = result.lastrowid OperationalError: near ".": syntax error ``` (Can't post a worked minimal example right now; racing trying to build something against a live timing screen that will stop until next weekend in an hour or two...) PS Hmmm I did a test and they seem to work; I must be messing up s/where else... ``` import sqlite3 from sqlite_utils import Database dbname='testingDB_sqlite_utils.db' #!rm $dbname conn = sqlite3.connect(dbname, timeout=10) #Setup database tables c = conn.cursor() setup=''' CREATE TABLE IF NOT EXISTS "test1" ( "NO" INTEGER, "NAME" TEXT ); CREATE TABLE IF NOT EXISTS "test2" ( "NO" INTEGER, `TIME OF DAY` TEXT ); CREATE TABLE IF NOT EXISTS "test3" ( "NO" INTEGER, `AVG. SPEED (MPH)` FLOAT ); ''' c.executescript(setup) DB = Database(conn) … | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Problems handling column names containing spaces or - 403922644 | |
571138093 | https://github.com/simonw/sqlite-utils/issues/73#issuecomment-571138093 | https://api.github.com/repos/simonw/sqlite-utils/issues/73 | MDEyOklzc3VlQ29tbWVudDU3MTEzODA5Mw== | psychemedia 82988 | 2020-01-06T13:28:31Z | 2020-01-06T13:28:31Z | NONE | I think I actually had several issues in play... The missing key was one, but I think there is also an issue as per below. For example, in the following: ```python def init_testdb(dbname='test.db'): if os.path.exists(dbname): os.remove(dbname) conn = sqlite3.connect(dbname) db = Database(conn) return conn, db conn, db = init_testdb() c = conn.cursor() c.executescript('CREATE TABLE "test1" ("Col1" TEXT, "Col2" TEXT, PRIMARY KEY ("Col1"));') c.executescript('CREATE TABLE "test2" ("Col1" TEXT, "Col2" TEXT, PRIMARY KEY ("Col1"));') print('Test 1...') for i in range(3): db['test1'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1')) db['test2'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1')) print('Test 2...') for i in range(3): db['test1'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1')) db['test2'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}, {'Col1':'c','Col2':'x'}], pk=('Col1')) print('Done...') --------------------------------------------------------------------------- Test 1... Test 2... IndexError: list index out of range --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-763-444132ca189f> in <module> 22 print('Test 2...') 23 for i in range(3): ---> 24 db['test1'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1')) 25 db['test2'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}, 26 {'Col1':'c','Col2':'x'}], pk=('Col1')) /usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in upsert_all(self, records, pk, foreign_keys, column_order, not_null, defaults, batch_size, hash_id, alter, extracts) 1157 alter=alter, 1158 extracts=extracts, -> 1… | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | upsert_all() throws issue when upserting to empty table 545407916 | |
573047321 | https://github.com/simonw/sqlite-utils/issues/73#issuecomment-573047321 | https://api.github.com/repos/simonw/sqlite-utils/issues/73 | MDEyOklzc3VlQ29tbWVudDU3MzA0NzMyMQ== | psychemedia 82988 | 2020-01-10T14:02:56Z | 2020-01-10T14:09:23Z | NONE | Hmmm... just tried with installs from pip and the repo (v2.0.0 and v2.0.1) and I get the error each time (start of second run through the second loop). Could it be sqlite3? I'm on 3.30.1. UPDATE: just tried it on jupyter.org/try and I get the error there, too. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | upsert_all() throws issue when upserting to empty table 545407916 | |
580745213 | https://github.com/simonw/sqlite-utils/issues/73#issuecomment-580745213 | https://api.github.com/repos/simonw/sqlite-utils/issues/73 | MDEyOklzc3VlQ29tbWVudDU4MDc0NTIxMw== | psychemedia 82988 | 2020-01-31T14:02:38Z | 2020-01-31T14:21:09Z | NONE | So the conundrum continues.. The simple test case above now runs, but if I upsert a large number of new records (successfully) and then try to upsert a fewer number of new records to a different table, I get the same error. If I run the same upserts again (which in the first case means there are no new records to add, because they were already added), the second upsert works correctly. It feels as if the number of items added via an upsert >> the number of items I try to add in an upsert immediately after, I get the error. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | upsert_all() throws issue when upserting to empty table 545407916 | |
1033641009 | https://github.com/simonw/sqlite-utils/pull/203#issuecomment-1033641009 | https://api.github.com/repos/simonw/sqlite-utils/issues/203 | IC_kwDOCGYnMM49nBwx | psychemedia 82988 | 2022-02-09T11:06:18Z | 2022-02-09T11:06:18Z | NONE | Is there any progress elsewhere on the handling of compound / composite foreign keys, or is this PR still effectively open? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | changes to allow for compound foreign keys 743384829 | |
1041313679 | https://github.com/simonw/sqlite-utils/issues/406#issuecomment-1041313679 | https://api.github.com/repos/simonw/sqlite-utils/issues/406 | IC_kwDOCGYnMM4-ES-P | psychemedia 82988 | 2022-02-16T09:59:51Z | 2022-02-16T10:00:10Z | NONE | The `CustomColumnType()` approach looks good. This pushes you into the mindspace that you are defining and working with a custom column type. When creating the table, you could then error, or at least warn, if someone wasn't setting a column on a `type` or a custom column type, which I guess is where `mypy` comes in? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Creating tables with custom datatypes 1128466114 | |
1041325398 | https://github.com/simonw/sqlite-utils/issues/402#issuecomment-1041325398 | https://api.github.com/repos/simonw/sqlite-utils/issues/402 | IC_kwDOCGYnMM4-EV1W | psychemedia 82988 | 2022-02-16T10:12:48Z | 2022-02-16T10:18:55Z | NONE | > My hunch is that the case where you want to consider input from more than one column will actually be pretty rare - the only case I can think of where I would want to do that is for latitude/longitude columns Other possible pairs: unconventional date/datetime and timezone pairs eg `2022-02-16::17.00, London`; or more generally, numerical value and unit of measurement pairs (eg if you want to cast into and out of different measurement units using packages like `pint`) or currencies etc. Actually, in that case, I guess you may be presenting things that are unit typed already, and so a conversion would need to parse things into an appropriate, possibly two column `value, unit` format. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Advanced class-based `conversions=` mechanism 1125297737 |
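The twitter-to-sqlite comment above about falling back to FTS4 when FTS5 is unavailable checks the SQLite version number to guess at FTS5 support. A more direct probe is to try creating an FTS5 virtual table in a throwaway in-memory database and step down if that fails. This is a minimal sketch using only Python's standard `sqlite3` module; the `best_fts_version` function name and the `fts_probe` table name are illustrative, not part of any of the tools discussed above.

```python
import sqlite3

def best_fts_version():
    # Probe a throwaway in-memory database: try FTS5 first, then FTS4.
    conn = sqlite3.connect(":memory:")
    for version in ("FTS5", "FTS4"):
        try:
            conn.execute(
                "CREATE VIRTUAL TABLE fts_probe USING {}(content)".format(version)
            )
            return version
        except sqlite3.OperationalError:
            # Module not compiled into this SQLite build; try the next one.
            continue
    return None

print(best_fts_version())  # "FTS5" on most modern builds, else "FTS4" or None
```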
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
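The schema above is enough to query a local copy of this data directly. As a rough sketch (the `github.db` filename is illustrative; substitute the path to your own database file), the listing on this page can be reproduced with Python's built-in `sqlite3` module by filtering on the `author_association` column, which is `NONE` for every row shown here, and ordering by `user`:

```python
import sqlite3

# Illustrative path; point this at your own copy of the database.
conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row

rows = conn.execute(
    """
    SELECT id, html_url, user, created_at, body
    FROM issue_comments
    WHERE author_association = ?
    ORDER BY user
    """,
    ("NONE",),
).fetchall()

for row in rows[:5]:
    # Print a few rows as plain dicts to check the filter.
    print(dict(row))
```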