{"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/15#issuecomment-748436779", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/15", "id": 748436779, "node_id": "MDEyOklzc3VlQ29tbWVudDc0ODQzNjc3OQ==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-12-19T07:49:00Z", "updated_at": "2020-12-19T07:49:00Z", "author_association": "CONTRIBUTOR", "body": "@nickvazz ZGENERICASSET changed to ZASSET in Big Sur. Here's a list of other changes to the schema in Big Sur: https://github.com/RhetTbull/osxphotos/wiki/Changes-in-Photos-6---Big-Sur", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 612151767, "label": "Expose scores from ZCOMPUTEDASSETATTRIBUTES"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/15#issuecomment-748562288", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/15", "id": 748562288, "node_id": "MDEyOklzc3VlQ29tbWVudDc0ODU2MjI4OA==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-12-20T04:44:22Z", "updated_at": "2020-12-20T04:44:22Z", "author_association": "CONTRIBUTOR", "body": "@nickvazz @simonw I opened a [PR](https://github.com/dogsheep/dogsheep-photos/pull/31) that replaces the SQL for `ZCOMPUTEDASSETATTRIBUTES` to use osxphotos which now exposes all this data and has been updated for Big Sur. I did regression tests to confirm the extracted data is identical, with one exception which should not affect operation: the old code pulled data from `ZCOMPUTEDASSETATTRIBUTES` for missing photos while the main loop ignores missing photos and does not add them to `apple_photos`. The new code does not add rows to the `apple_photos_scores` table for missing photos.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 612151767, "label": "Expose scores from ZCOMPUTEDASSETATTRIBUTES"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/16#issuecomment-623845014", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/16", "id": 623845014, "node_id": "MDEyOklzc3VlQ29tbWVudDYyMzg0NTAxNA==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-05T03:55:14Z", "updated_at": "2020-05-05T03:56:24Z", "author_association": "CONTRIBUTOR", "body": "I'm traveling w/o access to my Mac so can't help with any code right now. I suspected ZSCENEIDENTIFIER was a foreign key into one of these psi.sqlite tables. But looks like you're on to something connecting groups to assets. As for the UUID, I think there's two ints because each is 64-bits but UUIDs are 128-bits. Thus they need to be combined to get the 128 bit UUID. You might be able to use Apple's [NSUUID](https://developer.apple.com/documentation/foundation/nsuuid?language=objc), for example, by wrapping with pyObjC. Here's one [example](https://github.com/ronaldoussoren/pyobjc/blob/881c82a7ba90f193934b52b44143360c80dce5e5/pyobjc-framework-Cocoa/PyObjCTest/test_nsuuid.py) of using this in PyObjC's test suite. Interesting it's stored this way instead of a UUIDString as in Photos.sqlite. 
Perhaps it's for faster indexing.\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 612287234, "label": "Import machine-learning detected labels (dog, llama etc) from Apple Photos"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/17#issuecomment-624284539", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/17", "id": 624284539, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNDI4NDUzOQ==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-05T20:20:05Z", "updated_at": "2020-05-05T20:20:05Z", "author_association": "CONTRIBUTOR", "body": "FYI, I've got an [issue](https://github.com/RhetTbull/osxphotos/issues/25) to make osxphotos cross-platform but it's low on my priority list. About 90% of the functionality could be done cross-platform but right now the macOS-specific stuff is embedded throughout and would take some work. Though I try to minimize it, there are sprinklings of ObjC & AppleScript throughout osxphotos.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 612860531, "label": "Only install osxphotos if running on macOS"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626390317", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21", "id": 626390317, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjM5MDMxNw==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-10T21:11:24Z", "updated_at": "2020-05-10T21:50:58Z", "author_association": "CONTRIBUTOR", "body": "Ugh....Yeah, I think the easiest is to catch the exception and return no place as you suggest. This particular bit of code involves un-archiving a serialized NSKeyedArchiver which uses an object table, and it is certainly possible to create a circular reference that way. Because this is happening in the decode, the circular reference must be in the original data. Does Photos show valid reverse geolocation info for the photo in question? If so, Photos may be doing something beyond a simple decode of the binary plist. For now, I'll push a patch to catch the exception.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615474990, "label": "bpylist.archiver.CircularReference: archive has a cycle with uid(13)"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626395507", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21", "id": 626395507, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjM5NTUwNw==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-10T21:54:45Z", "updated_at": "2020-05-10T21:54:45Z", "author_association": "CONTRIBUTOR", "body": "@simonw does Photos show valid reverse geolocation info? Are you sure you're using [bpylist2](https://github.com/xa4a/bpylist2) and not bpylist? They're both unfortunately imported as \"bpylist\" so if you somehow got the wrong (original bpylist) version installed, it could be the issue. 
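One quick way to check which installed distribution is actually providing the module (a rough sketch using only the standard library -- the `top_level.txt` lookup is a heuristic, since not every package ships that file):\r\n\r\n```python\r\nimport importlib.metadata\r\n\r\n# print every installed distribution that claims the 'bpylist' top-level module\r\nfor dist in importlib.metadata.distributions():\r\n    top_level = (dist.read_text('top_level.txt') or '').split()\r\n    if 'bpylist' in top_level:\r\n        print(dist.metadata['Name'], dist.version)\r\n```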
", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615474990, "label": "bpylist.archiver.CircularReference: archive has a cycle with uid(13)"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626395641", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21", "id": 626395641, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjM5NTY0MQ==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-10T21:55:54Z", "updated_at": "2020-05-10T21:55:54Z", "author_association": "CONTRIBUTOR", "body": "Did removing old bpylist solve the original problem or do you still have a photo that throws circular reference?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615474990, "label": "bpylist.archiver.CircularReference: archive has a cycle with uid(13)"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626396379", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21", "id": 626396379, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjM5NjM3OQ==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-10T22:01:48Z", "updated_at": "2020-05-10T22:01:48Z", "author_association": "CONTRIBUTOR", "body": "Frustrates me when package authors create a \"drop in\" replacement with the same import name...this kind of thing has bitten me more than once! Would've been nicer I think for bpylist2 to do \"import bpylist2 as bpylist\"", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615474990, "label": "bpylist.archiver.CircularReference: archive has a cycle with uid(13)"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/22#issuecomment-626667235", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22", "id": 626667235, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNjY2NzIzNQ==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-11T12:20:34Z", "updated_at": "2020-05-11T12:20:34Z", "author_association": "CONTRIBUTOR", "body": "@simonw FYI, osxphotos includes a built in ExifTool class that uses [exiftool](https://exiftool.org/) to read and write exif data. It's not exposed yet in the docs because I really only use it right now in the osphotos command line interface to write tags when exporting. In v0.28.16 (just pushed) I added an ExifTool.as_dict() method which will give you a dict with all the exif tags in a file. For example:\r\n\r\n```python\r\nimport osxphotos\r\nphotos = osxphotos.PhotosDB().photos()\r\nexiftool = osxphotos.exiftool.ExifTool(photos[0].path)\r\nexifdata = exiftool.as_dict()\r\ntags = exifdata[\"IPTC:Keywords\"]\r\n```\r\n\r\nNot as elegant perhaps as a python only implementation because ExifTool has to make subprocess calls to an external tool but exiftool is by far the best tool available for reading and writing EXIF data and it does support HEIC.\r\n\r\nAs for implementation, ExifTool uses a singleton pattern so the first time you instantiate it, it spawns an IPC to exiftool but then keeps it open and uses the same process for any subsequent calls (even on different files). 
", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615626118, "label": "Try out ExifReader"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/22#issuecomment-627007458", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22", "id": 627007458, "node_id": "MDEyOklzc3VlQ29tbWVudDYyNzAwNzQ1OA==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-11T22:51:52Z", "updated_at": "2020-05-11T22:52:26Z", "author_association": "CONTRIBUTOR", "body": "I'm not familiar with `ExifReader`. I wrote my own wrapper around `exiftool` because I wanted a simple way to write EXIF data when exporting photos (e.g. writing out to PersonInImage and keywords to IPTC:Keywords) and the existing python packages like [pyexiftool](https://github.com/smarnach/pyexiftool) didn't do quite what I wanted. If all you're after is the camera and shot info, that's available in `ZEXTENDEDATTRIBUTES` table. I've got an open issue [#11](https://github.com/RhetTbull/osxphotos/issues/11) to add this to osxphotos but it hasn't bubbled to the top of my backlog yet. \r\n\r\nosxphotos will give you the location info: `PhotoInfo.location` returns a tuple of (lat, lon) though this info is in ZEXTENDEDATTRIBUTES too (though it might not be correct as I believe Photos creates this table at import and the user might have changed the location of a photo, e.g. if camera didn't have GPS).\r\n\r\n```sql\r\nCREATE TABLE ZEXTENDEDATTRIBUTES (\r\n Z_PK INTEGER PRIMARY KEY, Z_ENT INTEGER, \r\n Z_OPT INTEGER, ZFLASHFIRED INTEGER, \r\n ZISO INTEGER, ZMETERINGMODE INTEGER, \r\n ZSAMPLERATE INTEGER, ZTRACKFORMAT INTEGER, \r\n ZWHITEBALANCE INTEGER, ZASSET INTEGER, \r\n ZAPERTURE FLOAT, ZBITRATE FLOAT, ZDURATION FLOAT, \r\n ZEXPOSUREBIAS FLOAT, ZFOCALLENGTH FLOAT, \r\n ZFPS FLOAT, ZLATITUDE FLOAT, ZLONGITUDE FLOAT, \r\n ZSHUTTERSPEED FLOAT, ZCAMERAMAKE VARCHAR, \r\n ZCAMERAMODEL VARCHAR, ZCODEC VARCHAR, \r\n ZLENSMODEL VARCHAR\r\n);\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615626118, "label": "Try out ExifReader"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/22#issuecomment-628405453", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22", "id": 628405453, "node_id": "MDEyOklzc3VlQ29tbWVudDYyODQwNTQ1Mw==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-05-14T05:59:53Z", "updated_at": "2020-05-14T05:59:53Z", "author_association": "CONTRIBUTOR", "body": "I've added support for the above exif data to [v0.28.17](https://github.com/RhetTbull/osxphotos/releases/tag/v0.28.17) of osxphotos. 
`PhotoInfo.exif_info` will return an `ExifInfo` [dataclass](https://docs.python.org/3/library/dataclasses.html) object with the following properties:\r\n\r\n```python\r\n flash_fired: bool\r\n iso: int\r\n metering_mode: int\r\n sample_rate: int\r\n track_format: int\r\n white_balance: int\r\n aperture: float\r\n bit_rate: float\r\n duration: float\r\n exposure_bias: float\r\n focal_length: float\r\n fps: float\r\n latitude: float\r\n longitude: float\r\n shutter_speed: float\r\n camera_make: str\r\n camera_model: str\r\n codec: str\r\n lens_model: str\r\n```\r\n\r\nIt's not all the EXIF data available in most files but is the data Photos deems important to save. Of course, you can still get all of the EXIF data via the `ExifTool` wrapper described above.\r\n\r\nNote: this only works in Photos 5. As best as I can tell, EXIF data is not stored in the database for earlier versions. ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 615626118, "label": "Try out ExifReader"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/3#issuecomment-934372104", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/3", "id": 934372104, "node_id": "IC_kwDOD079W843sWMI", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2021-10-05T12:38:24Z", "updated_at": "2021-10-05T12:38:24Z", "author_association": "CONTRIBUTOR", "body": "As dogsheep-photos already uses [osxphotos](https://github.com/RhetTbull/osxphotos) to load photos you can access the EXIF data via osxphotos. Apple Photos imports a small subset of EXIF data at the time the photo is imported and osxphotos provides this via the [exif_info](https://github.com/RhetTbull/osxphotos#exifinfo) property. If you want the full EXIF data, osxphotos also provides a wrapper around [exiftool](https://github.com/RhetTbull/osxphotos#exiftool).", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 602533481, "label": "Import EXIF data into SQLite - lens used, ISO, aperture etc"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/issues/33#issuecomment-778246347", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/33", "id": 778246347, "node_id": "MDEyOklzc3VlQ29tbWVudDc3ODI0NjM0Nw==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2021-02-12T15:00:43Z", "updated_at": "2021-02-12T15:00:43Z", "author_association": "CONTRIBUTOR", "body": "Yes, the Big Sur Photos database doesn't have a `ZGENERICASSET` table. 
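If you need to support both schemas, something like this can pick the right table name at runtime (my sketch for illustration, not what the fix actually does):\r\n\r\n```python\r\nimport sqlite3\r\n\r\ndef asset_table_name(db_path):\r\n    # Photos 6 (Big Sur) renamed ZGENERICASSET to ZASSET\r\n    conn = sqlite3.connect(db_path)\r\n    row = conn.execute(\r\n        \"select name from sqlite_master where type='table' \"\r\n        \"and name in ('ZASSET', 'ZGENERICASSET')\"\r\n    ).fetchone()\r\n    return row[0] if row else None\r\n```\r\n\r\n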
PR #31 will fix this.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 803338729, "label": "photo-to-sqlite: command not found"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/dogsheep-photos/pull/31#issuecomment-748562330", "issue_url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/31", "id": 748562330, "node_id": "MDEyOklzc3VlQ29tbWVudDc0ODU2MjMzMA==", "user": {"value": 41546558, "label": "RhetTbull"}, "created_at": "2020-12-20T04:45:08Z", "updated_at": "2020-12-20T04:45:08Z", "author_association": "CONTRIBUTOR", "body": "Fixes the issue mentioned here: https://github.com/dogsheep/dogsheep-photos/issues/15#issuecomment-748436115", "reactions": "{\"total_count\": 1, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 1, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 771511344, "label": "Update for Big Sur"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/issues/51#issuecomment-770150526", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/51", "id": 770150526, "node_id": "MDEyOklzc3VlQ29tbWVudDc3MDE1MDUyNg==", "user": {"value": 22578954, "label": "daniel-butler"}, "created_at": "2021-01-30T03:44:19Z", "updated_at": "2021-01-30T03:47:24Z", "author_association": "CONTRIBUTOR", "body": "I don't have much experience with GitHub's rate limiting. In my day job we use the [tenacity library](https://github.com/jd/tenacity) to handle the HTTP errors we get.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 703246031, "label": "github-to-sqlite should handle rate limits better"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/issues/60#issuecomment-770069864", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/60", "id": 770069864, "node_id": "MDEyOklzc3VlQ29tbWVudDc3MDA2OTg2NA==", "user": {"value": 22578954, "label": "daniel-butler"}, "created_at": "2021-01-29T21:52:05Z", "updated_at": "2021-02-12T18:29:43Z", "author_association": "CONTRIBUTOR", "body": "For the purposes below I am assuming the organization I would get all the repositories and their related commits from is called `gh-organization`. The GitHub owner id of `gh-organization` is `123456789`.\r\n\r\n```bash\r\ngithub-to-sqlite repos github.db gh-organization\r\n```\r\n\r\nI'm on a Windows computer running Git Bash to be able to use the `|` command. 
This works for me\r\n```bash\r\nsqlite3 github.db \"SELECT full_name FROM repos WHERE owner = '123456789';\" | tr '\\n\\r' ' ' | xargs | { read repos; github-to-sqlite commits github.db $repos; }\r\n```\r\n\r\nOn a pure Linux system I think this would work because the newline character is normally `\\n`\r\n```bash\r\nsqlite3 github.db \"SELECT full_name FROM repos WHERE owner = '123456789';\" | tr '\\n' ' ' | xargs | { read repos; github-to-sqlite commits github.db $repos; }\r\n```\r\n\r\nAs expected I ran into rate limit issues #51 \r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 797097140, "label": "Use Data from SQLite in other commands"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/issues/60#issuecomment-770112248", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/60", "id": 770112248, "node_id": "MDEyOklzc3VlQ29tbWVudDc3MDExMjI0OA==", "user": {"value": 22578954, "label": "daniel-butler"}, "created_at": "2021-01-30T00:01:03Z", "updated_at": "2021-01-30T01:14:42Z", "author_association": "CONTRIBUTOR", "body": "Yes that would be cool! I wouldn't mind helping. Is this the meat of it? https://github.com/dogsheep/twitter-to-sqlite/blob/21fc1cad6dd6348c67acff90a785b458d3a81275/twitter_to_sqlite/utils.py#L512\r\n\r\nIt looks like the cli option is added with this decorator: https://github.com/dogsheep/twitter-to-sqlite/blob/21fc1cad6dd6348c67acff90a785b458d3a81275/twitter_to_sqlite/cli.py#L14\r\n\r\nI looked a bit at utils.py in the GitHub repository. I was surprised at the amount of manual mapping of the API response you had to do to get this to work.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 797097140, "label": "Use Data from SQLite in other commands"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/pull/48#issuecomment-704503719", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/48", "id": 704503719, "node_id": "MDEyOklzc3VlQ29tbWVudDcwNDUwMzcxOQ==", "user": {"value": 755825, "label": "adamjonas"}, "created_at": "2020-10-06T19:26:59Z", "updated_at": "2020-10-06T19:26:59Z", "author_association": "CONTRIBUTOR", "body": "ref #46 ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 681228542, "label": "Add pull requests"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/pull/59#issuecomment-751375487", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/59", "id": 751375487, "node_id": "MDEyOklzc3VlQ29tbWVudDc1MTM3NTQ4Nw==", "user": {"value": 631242, "label": "frosencrantz"}, "created_at": "2020-12-26T17:08:44Z", "updated_at": "2020-12-26T17:08:44Z", "author_association": "CONTRIBUTOR", "body": "Hi @simonw, do I need to do anything else for this PR to be considered for inclusion? 
I've tried using this project and it is quite nice to be able to explore a repository, but noticed that a couple of commands don't allow you to use authorization from the environment variable.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 771872303, "label": "Remove unneeded exists=True for -a/--auth flag."}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/github-to-sqlite/pull/59#issuecomment-846413174", "issue_url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/59", "id": 846413174, "node_id": "MDEyOklzc3VlQ29tbWVudDg0NjQxMzE3NA==", "user": {"value": 631242, "label": "frosencrantz"}, "created_at": "2021-05-22T14:06:19Z", "updated_at": "2021-05-22T14:06:19Z", "author_association": "CONTRIBUTOR", "body": "Thanks Simon!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 771872303, "label": "Remove unneeded exists=True for -a/--auth flag."}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/swarm-to-sqlite/pull/10#issuecomment-707326192", "issue_url": "https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/10", "id": 707326192, "node_id": "MDEyOklzc3VlQ29tbWVudDcwNzMyNjE5Mg==", "user": {"value": 29426418, "label": "mattiaborsoi"}, "created_at": "2020-10-12T20:20:02Z", "updated_at": "2020-10-12T20:20:02Z", "author_association": "CONTRIBUTOR", "body": "This closes issue #8 ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 719637258, "label": "Update utils.py to fix sqlite3.OperationalError"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/twitter-to-sqlite/issues/29#issuecomment-552134876", "issue_url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/29", "id": 552134876, "node_id": "MDEyOklzc3VlQ29tbWVudDU1MjEzNDg3Ng==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2019-11-09T20:33:38Z", "updated_at": "2019-11-09T20:33:38Z", "author_association": "CONTRIBUTOR", "body": "\u2764\ufe0f thanks!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 518725064, "label": "`import` command fails on empty files"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/twitter-to-sqlite/issues/50#issuecomment-690860653", "issue_url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/50", "id": 690860653, "node_id": "MDEyOklzc3VlQ29tbWVudDY5MDg2MDY1Mw==", "user": {"value": 370930, "label": "mikepqr"}, "created_at": "2020-09-11T04:04:08Z", "updated_at": "2020-09-11T04:04:08Z", "author_association": "CONTRIBUTOR", "body": "There's probably a nicer way of doing this (hence this is a comment rather than a PR), but this appears to fix it:\r\n```diff\r\n--- a/twitter_to_sqlite/utils.py\r\n+++ b/twitter_to_sqlite/utils.py\r\n@@ -181,6 +181,7 @@ def fetch_timeline(\r\n args[\"tweet_mode\"] = \"extended\"\r\n min_seen_id = None\r\n num_rate_limit_errors = 0\r\n+ seen_count = 0\r\n while True:\r\n if min_seen_id is not None:\r\n args[\"max_id\"] = min_seen_id - 1\r\n@@ -208,6 +209,7 @@ def fetch_timeline(\r\n yield tweet\r\n min_seen_id = min(t[\"id\"] for t in tweets)\r\n max_seen_id = 
max(t[\"id\"] for t in tweets)\r\n+ seen_count += len(tweets)\r\n if last_since_id is not None:\r\n max_seen_id = max((last_since_id, max_seen_id))\r\n last_since_id = max_seen_id\r\n@@ -217,7 +219,9 @@ def fetch_timeline(\r\n replace=True,\r\n )\r\n if stop_after is not None:\r\n- break\r\n+ if seen_count >= stop_after:\r\n+ break\r\n+ args[\"count\"] = min(args[\"count\"], stop_after - seen_count)\r\n time.sleep(sleep)\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 698791218, "label": "favorites --stop_after=N stops after min(N, 200)"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/twitter-to-sqlite/issues/54#issuecomment-754721153", "issue_url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/54", "id": 754721153, "node_id": "MDEyOklzc3VlQ29tbWVudDc1NDcyMTE1Mw==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2021-01-05T15:51:09Z", "updated_at": "2021-01-05T15:51:09Z", "author_association": "CONTRIBUTOR", "body": "Correction: the failure is on `lists-member.js` (I was thrown by the `block` variable name, but that's just a coincidence)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 779088071, "label": "Archive import appears to be broken on recent exports"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/twitter-to-sqlite/issues/54#issuecomment-754729035", "issue_url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/54", "id": 754729035, "node_id": "MDEyOklzc3VlQ29tbWVudDc1NDcyOTAzNQ==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2021-01-05T16:03:29Z", "updated_at": "2021-01-05T16:03:29Z", "author_association": "CONTRIBUTOR", "body": "I was able to fix this, at least enough to get _my_ archive to import. Not sure if there's more work to be done here or not.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 779088071, "label": "Archive import appears to be broken on recent exports"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/twitter-to-sqlite/issues/58#issuecomment-910121331", "issue_url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/58", "id": 910121331, "node_id": "IC_kwDODEm0Qs42P1lz", "user": {"value": 42904, "label": "rubenv"}, "created_at": "2021-09-01T09:49:33Z", "updated_at": "2021-09-01T09:49:33Z", "author_association": "CONTRIBUTOR", "body": "Found the cause, it's the other commands. 
PR #59 submitted.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 984939366, "label": "Error: Use either --since or --since_id, not both - still broken"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/twitter-to-sqlite/pull/55#issuecomment-754728696", "issue_url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/55", "id": 754728696, "node_id": "MDEyOklzc3VlQ29tbWVudDc1NDcyODY5Ng==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2021-01-05T16:02:55Z", "updated_at": "2021-01-05T16:02:55Z", "author_association": "CONTRIBUTOR", "body": "This now works for me, though I'm entirely unsure if it's a just-my-export thing or a wider issue. Also, this doesn't contain any tests. So I'm not sure if there's more work to be done here, or if this is good enough.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 779211940, "label": "Fix archive imports"}, "performed_via_github_app": null} {"html_url": "https://github.com/dogsheep/twitter-to-sqlite/pull/55#issuecomment-760950128", "issue_url": "https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/55", "id": 760950128, "node_id": "MDEyOklzc3VlQ29tbWVudDc2MDk1MDEyOA==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2021-01-15T13:44:52Z", "updated_at": "2021-01-15T13:44:52Z", "author_association": "CONTRIBUTOR", "body": "I found and fixed another bug, this one around importing the tweets table. @simonw let me know if you'd prefer this broken out into multiple PRs, happy to do that if it makes review/merging easier.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 779211940, "label": "Fix archive imports"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1012#issuecomment-714908859", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1012", "id": 714908859, "node_id": "MDEyOklzc3VlQ29tbWVudDcxNDkwODg1OQ==", "user": {"value": 45380, "label": "bollwyvl"}, "created_at": "2020-10-23T04:49:20Z", "updated_at": "2020-10-23T04:49:20Z", "author_association": "CONTRIBUTOR", "body": "Good luck on 1.0! It may also be worth lobbying for a `Framework::Datasette::1.0` classifier. This would be a nice way to allow the ecosystem to self-document a bit more [discoverably](https://pypi.org/search/?q=&o=&c=Framework+%3A%3A+Datasette%3A%3A+1.0). \r\n\r\nI was surprised to see the [PR for `Framework::Jupyter`](https://github.com/pypa/warehouse/pull/1905/files) is a... database migration! 
Of course, there may be more workflow to it!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 718540751, "label": "For 1.0 update trove classifier in setup.py"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1012#issuecomment-753531657", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1012", "id": 753531657, "node_id": "MDEyOklzc3VlQ29tbWVudDc1MzUzMTY1Nw==", "user": {"value": 45380, "label": "bollwyvl"}, "created_at": "2021-01-02T21:25:36Z", "updated_at": "2021-01-02T21:25:36Z", "author_association": "CONTRIBUTOR", "body": "Actually, on more research, I found out this is handled by the [trove-classifiers package](https://github.com/pypa/trove-classifiers/blob/master/src/trove_classifiers/__init__.py#L2) now, so it's just a one-liner pr instead of fire-up-a-docker-container-and-do-some-migrations", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 718540751, "label": "For 1.0 update trove classifier in setup.py"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1012#issuecomment-970266123", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1012", "id": 970266123, "node_id": "IC_kwDOBm6k_c451RYL", "user": {"value": 45380, "label": "bollwyvl"}, "created_at": "2021-11-16T13:18:36Z", "updated_at": "2021-11-16T13:18:36Z", "author_association": "CONTRIBUTOR", "body": "Congratulations, looks like it went through! There was a bit of a hold-up\non the JupyterLab ones, but it's semi automated: a dependabot pr to\nwarehouse and a CI deploy, with a click in between.\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 718540751, "label": "For 1.0 update trove classifier in setup.py"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1019#issuecomment-708520800", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1019", "id": 708520800, "node_id": "MDEyOklzc3VlQ29tbWVudDcwODUyMDgwMA==", "user": {"value": 639012, "label": "jsfenfen"}, "created_at": "2020-10-14T16:37:19Z", "updated_at": "2020-10-14T16:37:19Z", "author_association": "CONTRIBUTOR", "body": "\ud83c\udf89 Thanks so much @simonw ! 
\ud83c\udf89 ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 721050815, "label": "\"Edit SQL\" button on canned queries"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1033#issuecomment-714657366", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1033", "id": 714657366, "node_id": "MDEyOklzc3VlQ29tbWVudDcxNDY1NzM2Ng==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-10-22T17:51:29Z", "updated_at": "2020-10-22T17:51:29Z", "author_association": "CONTRIBUTOR", "body": "How does `/-/static` relate to [current guidance docs around `static`](https://docs.datasette.io/en/latest/custom_templates.html?highlight=static#serving-static-files) regarding the `--static` option and metadata formulations such as `\"extra_js_urls\": [ \"/static/app.js\"]`? (I've not managed to get this to work in a Jupyter-server-proxied setup; the [datasette / jupyter server proxy repo](https://github.com/simonw/jupyterserverproxy-datasette-demo) may provide a useful test example, eg via MyBinder, for folk to crib from?) ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 725099777, "label": "datasette.urls.static_plugins(...) method"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1033#issuecomment-716066000", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1033", "id": 716066000, "node_id": "MDEyOklzc3VlQ29tbWVudDcxNjA2NjAwMA==", "user": {"value": 82988, "label": "psychemedia"}, "created_at": "2020-10-24T22:58:33Z", "updated_at": "2020-10-24T22:58:33Z", "author_association": "CONTRIBUTOR", "body": "From [the docs](https://docs.datasette.io/en/latest/internals.html#datasette-urls), I note:\r\n\r\n```\r\ndatasette.urls.instance()\r\nReturns the URL to the Datasette instance root page. This is usually \"/\"\r\n```\r\n\r\nWhat about the proxy case? Eg if I am using jupyter-server-proxy on a MyBinder or local Jupyter notebook server site, `https://example.com:PORT/weirdpath/datasette`, what does `datasette.urls.instance()` refer to?\r\n\r\n- [ ] `https://example.com:PORT/weirdpath/datasette`\r\n- [ ] `https://example.com:PORT/weirdpath/`\r\n- [ ] `https://example.com:PORT/`\r\n- [ ] `https://example.com`\r\n- [ ] something else?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 725099777, "label": "datasette.urls.static_plugins(...) method"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/105#issuecomment-345503897", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/105", "id": 345503897, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NTUwMzg5Nw==", "user": {"value": 198537, "label": "rgieseke"}, "created_at": "2017-11-19T09:38:08Z", "updated_at": "2017-11-19T09:38:08Z", "author_association": "CONTRIBUTOR", "body": "Thanks, I wrote this very simple reader because the default approach as described on the Datahub pages seemed too complicated. 
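The core of it is only a few lines, roughly like this (a simplified sketch, not the actual reader):\r\n\r\n```python\r\nimport json\r\nimport pandas as pd\r\n\r\ndef read_datapackage(path):\r\n    # read datapackage.json and load each CSV resource into a DataFrame\r\n    with open(f'{path}/datapackage.json') as f:\r\n        package = json.load(f)\r\n    return {\r\n        resource['name']: pd.read_csv(f'{path}/{resource[\"path\"]}')\r\n        for resource in package['resources']\r\n    }\r\n```\r\n\r\n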
I had metadata from the `datapackage.json` attached to the returned DataFrames but removed this due to some attribute handling change in the latest Pandas version.\r\n\r\nThis could also be useful for getting from Data Package to SQL db: https://github.com/frictionlessdata/tableschema-sql-py\r\n\r\nI maintain a few climate science related datasets at https://github.com/openclimatedata/\r\n\r\nThe Data Retriever (mainly ecological data) by @ethanwhite et al. is also using the Data Package format for metadata and has some tooling for different dbs: \r\n\r\nhttps://frictionlessdata.io/articles/the-data-retriever/\r\nhttps://github.com/weecology/retriever\r\n\r\nThe Open Power System Data project also has a couple of datasets that show nicely how CSV is great for assembling data, and they already make SQLite files available. It's one of the first data sets I tried with Datasette, perfect for the use case of getting an API for putting power stations on a map ...\r\n\r\nhttps://data.open-power-system-data.org/", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 274314940, "label": "Consider data-package as a format for metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1062#issuecomment-1260829829", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1062", "id": 1260829829, "node_id": "IC_kwDOBm6k_c5LJryF", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-09-28T12:27:19Z", "updated_at": "2022-09-28T12:27:19Z", "author_association": "CONTRIBUTOR", "body": "for teaching `register_output_renderer` to stream it seems like the two options are to\r\n\r\n1. a [nested query technique](https://github.com/simonw/datasette/issues/526#issuecomment-505162238) to paginate through\r\n2. a fetching model that looks something like\r\n```python\r\nwith sqlite_timelimit(conn, time_limit_ms):\r\n c.execute(query)\r\n while chunk := c.fetchmany(chunk_size):\r\n yield from chunk\r\n```\r\ncurrently `db.execute` is not a generator, so this would probably need a new method?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 732674148, "label": "Refactor .csv to be an output renderer - and teach register_output_renderer to stream all rows"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1062#issuecomment-1260909128", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1062", "id": 1260909128, "node_id": "IC_kwDOBm6k_c5LJ_JI", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-09-28T13:22:53Z", "updated_at": "2022-09-28T14:09:54Z", "author_association": "CONTRIBUTOR", "body": "if you went this route:\r\n\r\n```python\r\nwith sqlite_timelimit(conn, time_limit_ms):\r\n c.execute(query)\r\n while chunk := c.fetchmany(chunk_size):\r\n yield from chunk\r\n```\r\n\r\nthen `time_limit_ms` would probably have to be greatly extended, because the time spent in the loop will depend on the downstream processing.\r\n\r\ni wonder if this was why you were thinking this feature would need a dedicated connection?\r\n\r\n---\r\n\r\nreading more, there's no real limit i can find on the number of active cursors (or more precisely active prepared statement objects, because sqlite doesn't really have cursors). 
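here's a quick way to poke at that from python (my own sketch) -- thousands of partially-consumed statements on one connection and sqlite doesn't complain:\r\n\r\n```python\r\nimport sqlite3\r\n\r\nconn = sqlite3.connect(':memory:')\r\nconn.execute('create table t(x)')\r\nconn.executemany('insert into t values (?)', [(i,) for i in range(100)])\r\n\r\n# keep many cursors open and mid-iteration at once\r\ncursors = []\r\nfor _ in range(5000):\r\n    c = conn.execute('select x from t')\r\n    c.fetchone()  # statement is now active, not finalized\r\n    cursors.append(c)\r\nprint(len(cursors), 'cursors open without error')\r\n```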
\r\n\r\nmaybe something like this would be okay?\r\n\r\n```python\r\nwith sqlite_timelimit(conn, time_limit_ms):\r\n c.execute(query)\r\n # step through at least one row to evaluate the statement, not sure if this is necessary\r\n yield c.fetchone()\r\nwhile chunk := c.fetchmany(chunk_size):\r\n yield from chunk\r\n```\r\n\r\nit seems quite weird that there's not more of a limit on the number of active prepared statements, but i haven't been able to find one.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 732674148, "label": "Refactor .csv to be an output renderer - and teach register_output_renderer to stream all rows"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1099#issuecomment-1402563930", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1099", "id": 1402563930, "node_id": "IC_kwDOBm6k_c5TmW1a", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2023-01-24T20:11:11Z", "updated_at": "2023-01-24T20:11:11Z", "author_association": "CONTRIBUTOR", "body": "hi @simonw, this bug bit me today.\r\n\r\nthe UX for linking from a table to the foreign key seems tough! \r\n\r\nthe design in the other direction seems a lot easier: for a given primary key detail page, add links back to the tables that refer to the row.\r\n\r\nwould you be open to a PR that solved the second problem but not the first?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 743371103, "label": "Support linking to compound foreign keys"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1099#issuecomment-1402898291", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1099", "id": 1402898291, "node_id": "IC_kwDOBm6k_c5Tnodz", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2023-01-25T00:55:06Z", "updated_at": "2023-01-25T00:55:06Z", "author_association": "CONTRIBUTOR", "body": "I went ahead and spiked something together, in #2003 ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 743371103, "label": "Support linking to compound foreign keys"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1099#issuecomment-1402900354", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1099", "id": 1402900354, "node_id": "IC_kwDOBm6k_c5Tno-C", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2023-01-25T00:58:26Z", "updated_at": "2023-01-25T00:58:26Z", "author_association": "CONTRIBUTOR", "body": "> My original idea for compound foreign keys was to turn both of those columns into links, but that doesn't fit here because `database_name` is already part of a different foreign key.\r\n\r\nit's pretty hard to know what the right thing to do is if a field is part of multiple foreign keys. \r\n\r\nbut, if that's not the case, what about making each of the columns a link? 
seems like an improvement over the status quo.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 743371103, "label": "Support linking to compound foreign keys"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1101#issuecomment-1105588651", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1101", "id": 1105588651, "node_id": "IC_kwDOBm6k_c5B5fGr", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-04-21T18:15:39Z", "updated_at": "2022-04-21T18:15:39Z", "author_association": "CONTRIBUTOR", "body": "What if you split rendering and streaming into two things:\r\n\r\n- `render` is a function that returns a response\r\n- `stream` is a function that sends chunks, or yields chunks passed to an ASGI `send` callback\r\n\r\nThat way current plugins still work, and streaming is purely additive. A `stream` function could get a cursor or iterator of rows, instead of a list, so it could more efficiently handle large queries.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 749283032, "label": "register_output_renderer() should support streaming data"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1101#issuecomment-1105642187", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1101", "id": 1105642187, "node_id": "IC_kwDOBm6k_c5B5sLL", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2022-04-21T18:59:08Z", "updated_at": "2022-04-21T18:59:08Z", "author_association": "CONTRIBUTOR", "body": "Ha! That was your idea (and a good one).\r\n\r\nBut it's probably worth measuring to see what overhead it adds. It did require both passing in the database and making the whole thing `async`. \r\n\r\nJust timing the queries themselves:\r\n\r\n1. [Using `AsGeoJSON(geometry) as geometry`](https://alltheplaces-datasette.fly.dev/alltheplaces?sql=select%0D%0A++id%2C%0D%0A++properties%2C%0D%0A++AsGeoJSON%28geometry%29+as+geometry%2C%0D%0A++spider%0D%0Afrom%0D%0A++places%0D%0Aorder+by%0D%0A++id%0D%0Alimit%0D%0A++1000) takes 10.235 ms\r\n2. [Leaving as binary](https://alltheplaces-datasette.fly.dev/alltheplaces?sql=select%0D%0A++id%2C%0D%0A++properties%2C%0D%0A++geometry%2C%0D%0A++spider%0D%0Afrom%0D%0A++places%0D%0Aorder+by%0D%0A++id%0D%0Alimit%0D%0A++1000) takes 8.63 ms\r\n\r\nLooking at the network panel:\r\n\r\n1. Takes about 200 ms for the `fetch` request\r\n2. Takes about 300 ms\r\n\r\nI'm not sure how best to time the GeoJSON generation, but it would be interesting to check. Maybe I'll write a plugin to add query times to response headers.\r\n\r\nThe other thing to consider with async streaming is that it might be well-suited for a slower response. When I have to get the whole result and send a response in a fixed amount of time, I need the most efficient query possible. 
If I can hang onto a connection and get things one chunk at a time, maybe it's ok if there's some overhead.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 749283032, "label": "register_output_renderer() should support streaming data"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1101#issuecomment-869191854", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1101", "id": 869191854, "node_id": "MDEyOklzc3VlQ29tbWVudDg2OTE5MTg1NA==", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2021-06-27T16:42:14Z", "updated_at": "2021-06-27T16:42:14Z", "author_association": "CONTRIBUTOR", "body": "This would really help with this issue: https://github.com/eyeseast/datasette-geojson/issues/7", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 749283032, "label": "register_output_renderer() should support streaming data"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1111#issuecomment-736322290", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1111", "id": 736322290, "node_id": "MDEyOklzc3VlQ29tbWVudDczNjMyMjI5MA==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2020-12-01T08:54:47Z", "updated_at": "2020-12-01T08:54:47Z", "author_association": "CONTRIBUTOR", "body": "Somewhat related: https://github.com/simonw/datasette/issues/859\r\nI fixed the issue with forking and disabling the counts for hidden tables.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 751195017, "label": "Accessing a database's `.json` is slow for very large SQLite files"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1114#issuecomment-735436014", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1114", "id": 735436014, "node_id": "MDEyOklzc3VlQ29tbWVudDczNTQzNjAxNA==", "user": {"value": 2182, "label": "danp"}, "created_at": "2020-11-29T18:33:30Z", "updated_at": "2020-11-29T18:33:30Z", "author_association": "CONTRIBUTOR", "body": "Thank you!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 752966476, "label": "--load-extension=spatialite not working with datasetteproject/datasette docker image"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1149#issuecomment-804415619", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1149", "id": 804415619, "node_id": "MDEyOklzc3VlQ29tbWVudDgwNDQxNTYxOQ==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-03-22T21:43:16Z", "updated_at": "2021-03-22T21:43:16Z", "author_association": "CONTRIBUTOR", "body": "Sounds like a good idea.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 769520939, "label": "Make it easier to theme Datasette with CSS"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1153#issuecomment-804640440", "issue_url": 
"https://api.github.com/repos/simonw/datasette/issues/1153", "id": 804640440, "node_id": "MDEyOklzc3VlQ29tbWVudDgwNDY0MDQ0MA==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-03-23T05:58:20Z", "updated_at": "2021-03-23T05:58:20Z", "author_association": "CONTRIBUTOR", "body": "Could there be a little widget that offers conversion from one to the other? ", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 771202454, "label": "Use YAML examples in documentation by default, not JSON"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1167#issuecomment-754619930", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1167", "id": 754619930, "node_id": "MDEyOklzc3VlQ29tbWVudDc1NDYxOTkzMA==", "user": {"value": 3637, "label": "benpickles"}, "created_at": "2021-01-05T12:57:57Z", "updated_at": "2021-01-05T12:57:57Z", "author_association": "CONTRIBUTOR", "body": "Not sure where exactly to put the actual docs (presumably somewhere in [docs/contributing.rst](https://github.com/simonw/datasette/blob/main/docs/contributing.rst)) but I've made a slight change to make it easier to run locally (copying [the approach in excalidraw](https://github.com/excalidraw/excalidraw/blob/ade2565f497243a5e428f4906d8ed80c872fd981/package.json#L90-L94)): https://github.com/simonw/datasette/compare/main...benpickles:prettier-docs\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 777145954, "label": "Add Prettier to contributing documentation"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1168#issuecomment-869076254", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1168", "id": 869076254, "node_id": "MDEyOklzc3VlQ29tbWVudDg2OTA3NjI1NA==", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2021-06-27T00:03:16Z", "updated_at": "2021-06-27T00:05:51Z", "author_association": "CONTRIBUTOR", "body": "> Related: Here's an implementation of a `get_metadata()` plugin hook by @brandonrobertz [next-LI@3fd8ce9](https://github.com/next-LI/datasette/commit/3fd8ce91f3108c82227bf65ff033929426c60437)\r\n\r\nHere's a plugin that implements metadata-within-DBs: [next-LI/datasette-live-config](https://github.com/next-LI/datasette-live-config)\r\n\r\nHow it works: If a database has a `__metadata` table, then it gets parsed and included in the global metadata. 
It also implements a database-action hook with a UI for managing config.\r\n\r\nMore context: https://github.com/next-LI/datasette-live-config/blob/72e335e887f1c69c54c6c2441e07148955b0fc9f/datasette_live_config/__init__.py#L109-L140", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 777333388, "label": "Mechanism for storing metadata in _metadata tables"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1169#issuecomment-754007242", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1169", "id": 754007242, "node_id": "MDEyOklzc3VlQ29tbWVudDc1NDAwNzI0Mg==", "user": {"value": 3637, "label": "benpickles"}, "created_at": "2021-01-04T14:29:57Z", "updated_at": "2021-01-04T14:29:57Z", "author_association": "CONTRIBUTOR", "body": "I somewhat share your reluctance to add a package.json to seemingly every project out there, but ultimately if they're project dependencies it's important they're managed within the codebase.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 777677671, "label": "Prettier package not actually being cached"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1191#issuecomment-1200732975", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1191", "id": 1200732975, "node_id": "IC_kwDOBm6k_c5Hkbsv", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2022-08-01T05:39:27Z", "updated_at": "2022-08-01T05:39:27Z", "author_association": "CONTRIBUTOR", "body": "I've got a URL shortening plugin that I would like to embed on the query page but I'd like to avoid capturing the entire `query.html` template. A feature like this would solve it. 
Where's this at and how can I help?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 787098345, "label": "Ability for plugins to collaborate when adding extra HTML to blocks in default templates"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1200#issuecomment-777132761", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1200", "id": 777132761, "node_id": "MDEyOklzc3VlQ29tbWVudDc3NzEzMjc2MQ==", "user": {"value": 7476523, "label": "bobwhitelock"}, "created_at": "2021-02-11T00:29:52Z", "updated_at": "2021-02-11T00:29:52Z", "author_association": "CONTRIBUTOR", "body": "I'm probably missing something but what's the use case here - what would this offer over adding `limit 10` to the query?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 792890765, "label": "?_size=10 option for the arbitrary query page would be useful"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1208#issuecomment-774286962", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1208", "id": 774286962, "node_id": "MDEyOklzc3VlQ29tbWVudDc3NDI4Njk2Mg==", "user": {"value": 4488943, "label": "kbaikov"}, "created_at": "2021-02-05T21:02:39Z", "updated_at": "2021-02-05T21:02:39Z", "author_association": "CONTRIBUTOR", "body": "@simonw could you please take a look at PR 1211 that fixes this issue?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 794554881, "label": "A lot of open(file) functions are used without a context manager thus producing ResourceWarning: unclosed file <_io.TextIOWrapper"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1212#issuecomment-772007663", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1212", "id": 772007663, "node_id": "MDEyOklzc3VlQ29tbWVudDc3MjAwNzY2Mw==", "user": {"value": 4488943, "label": "kbaikov"}, "created_at": "2021-02-02T21:36:56Z", "updated_at": "2021-02-02T21:36:56Z", "author_association": "CONTRIBUTOR", "body": "How do you get 4-5 minutes?\r\nI run my tests in WSL 2, so maybe I need to try a real Linux VM.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 797651831, "label": "Tests are very slow. "}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1212#issuecomment-782430028", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1212", "id": 782430028, "node_id": "MDEyOklzc3VlQ29tbWVudDc4MjQzMDAyOA==", "user": {"value": 4488943, "label": "kbaikov"}, "created_at": "2021-02-19T22:54:13Z", "updated_at": "2021-02-19T22:54:13Z", "author_association": "CONTRIBUTOR", "body": "I will close this issue since it appears only in my particular setup.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 797651831, "label": "Tests are very slow. 
"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1220#issuecomment-777927946", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1220", "id": 777927946, "node_id": "MDEyOklzc3VlQ29tbWVudDc3NzkyNzk0Ng==", "user": {"value": 7476523, "label": "bobwhitelock"}, "created_at": "2021-02-12T02:29:54Z", "updated_at": "2021-02-12T02:29:54Z", "author_association": "CONTRIBUTOR", "body": "According to https://github.com/simonw/datasette/blob/master/docs/installation.rst#using-docker it should be\r\n\r\n```\r\ndocker run -p 8001:8001 -v `pwd`:/mnt \\\r\n datasetteproject/datasette \\\r\n datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db\r\n```\r\n\r\nThis uses `/mnt/fixtures.db` whereas you're using `fixtures.db` - did you try using this path instead?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 806743116, "label": "Installing datasette via docker: Path 'fixtures.db' does not exist"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1220#issuecomment-778439617", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1220", "id": 778439617, "node_id": "MDEyOklzc3VlQ29tbWVudDc3ODQzOTYxNw==", "user": {"value": 7476523, "label": "bobwhitelock"}, "created_at": "2021-02-12T20:33:27Z", "updated_at": "2021-02-12T20:33:27Z", "author_association": "CONTRIBUTOR", "body": "That Docker command will mount your current directory inside the Docker container at `/mnt` - so you shouldn't need to change anything locally, just run\r\n\r\n```\r\ndocker run -p 8001:8001 -v `pwd`:/mnt \\\r\n datasetteproject/datasette \\\r\n datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db\r\n```\r\n\r\nand it will use the `fixtures.db` file within your current directory", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 806743116, "label": "Installing datasette via docker: Path 'fixtures.db' does not exist"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1236#issuecomment-842798043", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1236", "id": 842798043, "node_id": "MDEyOklzc3VlQ29tbWVudDg0Mjc5ODA0Mw==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-05-18T03:28:25Z", "updated_at": "2021-05-18T03:28:25Z", "author_association": "CONTRIBUTOR", "body": "That corner handle looks like a hamburger menu to me. 
Note that the default resize handle is not limited to two-way resize: http://jsfiddle.net/LLrh7Lte/", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 812228314, "label": "Ability to increase size of the SQL editor window"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1238#issuecomment-789186458", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1238", "id": 789186458, "node_id": "MDEyOklzc3VlQ29tbWVudDc4OTE4NjQ1OA==", "user": {"value": 198537, "label": "rgieseke"}, "created_at": "2021-03-02T20:19:30Z", "updated_at": "2021-03-02T20:19:30Z", "author_association": "CONTRIBUTOR", "body": "A custom `templates/index.html` seems to work, and custom `pages` work as a workaround if you move them to `pages/base_url_dir`.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 813899472, "label": "Custom pages don't work with base_url setting"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/125#issuecomment-381361734", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/125", "id": 381361734, "node_id": "MDEyOklzc3VlQ29tbWVudDM4MTM2MTczNA==", "user": {"value": 45057, "label": "russss"}, "created_at": "2018-04-14T21:26:30Z", "updated_at": "2018-04-14T21:26:30Z", "author_association": "CONTRIBUTOR", "body": "FWIW I am now doing this on my WTR app (instead of silently limiting maps to 1000).\r\n\r\n[Telefonica](https://wtr-api.herokuapp.com/wtr-663ea99/licensee/18325) now has about 4000 markers and good old [BT](https://wtr-api.herokuapp.com/wtr-663ea99/licensee/8412) has 22,000 or so.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 275135393, "label": "Plot rows on a map with Leaflet and Leaflet.markercluster"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1258#issuecomment-1437671409", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1258", "id": 1437671409, "node_id": "IC_kwDOBm6k_c5VsR_x", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2023-02-20T23:39:58Z", "updated_at": "2023-02-20T23:39:58Z", "author_association": "CONTRIBUTOR", "body": "This is pretty annoying for FTS because SQLite throws an error instead of just doing something like returning all or no results. This makes users who are unfamiliar with SQL and Datasette think the canned query page is broken, and it is a frequent source of confusion.\r\n\r\nTo anyone dealing with this: My solution is to modify the canned query so that it returns no results, which cues people to fill in the blank parameters.\r\n\r\nSo instead of `emails_fts match escape_fts(:search)`\r\n\r\nMy canned queries now look like this:\r\n\r\n`emails_fts match escape_fts(iif(:search==\"\", \"*\", :search))`\r\n\r\nThere are no asterisks in my data so the result is always blank.\r\n\r\nUltimately it would be nice to be able to handle this in the metadata. 
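To make the blank-parameter failure mode concrete, here is a minimal, self-contained sketch of the same fallback idea, runnable with Python's `sqlite3` (FTS5 is assumed to be compiled in, which it usually is). The table, rows, and the `zzznomatch` sentinel are invented for illustration; the comment above uses `"*"` together with Datasette's `escape_fts()`, but any token that never appears in the data behaves the same way.

```python
# Hypothetical table and data; iif() needs SQLite 3.32+.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE VIRTUAL TABLE emails_fts USING fts5(body);
    INSERT INTO emails_fts (body) VALUES ('hello world'), ('quarterly report');
    """
)

# A blank :search falls back to a sentinel token that matches nothing,
# instead of handing FTS an empty query (which raises a syntax error).
query = """
    SELECT body FROM emails_fts
    WHERE emails_fts MATCH iif(:search = '', 'zzznomatch', :search)
"""

print(conn.execute(query, {"search": "hello"}).fetchall())  # [('hello world',)]
print(conn.execute(query, {"search": ""}).fetchall())       # [] - no error
```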
Either making some named parameters required or setting some default values.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 828858421, "label": "Allow canned query params to specify default values"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1262#issuecomment-802095132", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1262", "id": 802095132, "node_id": "MDEyOklzc3VlQ29tbWVudDgwMjA5NTEzMg==", "user": {"value": 7476523, "label": "bobwhitelock"}, "created_at": "2021-03-18T16:37:45Z", "updated_at": "2021-03-18T16:37:45Z", "author_association": "CONTRIBUTOR", "body": "This sounds like a good use case for a plugin, since this will only be useful for a subset of Datasette users. It shouldn't be too difficult to add a button to do this with the available plugin hooks - have you taken a look at https://docs.datasette.io/en/latest/writing_plugins.html?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 834602299, "label": "Plugin hook that could support 'order by random()' for table view"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1265#issuecomment-802923254", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1265", "id": 802923254, "node_id": "MDEyOklzc3VlQ29tbWVudDgwMjkyMzI1NA==", "user": {"value": 7476523, "label": "bobwhitelock"}, "created_at": "2021-03-19T15:39:15Z", "updated_at": "2021-03-19T15:39:15Z", "author_association": "CONTRIBUTOR", "body": "It doesn't use basic auth, but you can put a whole datasette instance, or parts of this, behind a username/password prompt using https://github.com/simonw/datasette-auth-passwords", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 836123030, "label": "Support for HTTP Basic Authentication"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1274#issuecomment-805214307", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1274", "id": 805214307, "node_id": "MDEyOklzc3VlQ29tbWVudDgwNTIxNDMwNw==", "user": {"value": 7476523, "label": "bobwhitelock"}, "created_at": "2021-03-23T20:12:29Z", "updated_at": "2021-03-23T20:12:29Z", "author_association": "CONTRIBUTOR", "body": "One issue I could see with adding first class support for metadata in hjson format is that this would require adding an additional dependency to handle this, for a feature that would be unused by many users. I wonder if this could fit in as a plugin instead; if a hook existed for loading metadata (maybe as part of https://github.com/simonw/datasette/issues/860) the metadata could then come from any source, as specified by plugins, e.g. hjson, toml, XML, a database table etc.\r\n\r\nUntil/unless this exists, a few ideas for how you could add comments:\r\n- Using YAML as you suggest.\r\n- A common pattern is adding a `\"comment\"` key for comments to any object in JSON - I don't think including an unnecessary key like this would break anything in Datasette, but not certain.\r\n- You could use another tool as a preprocessor for your JSON metadata - e.g. hjson or Jsonnet. 
You'd write the metadata in that format, and then convert that into JSON to actually use as your final metadata.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 839008371, "label": "Might there be some way to comment metadata.json?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1280#issuecomment-837166862", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1280", "id": 837166862, "node_id": "MDEyOklzc3VlQ29tbWVudDgzNzE2Njg2Mg==", "user": {"value": 10801138, "label": "blairdrummond"}, "created_at": "2021-05-10T19:07:46Z", "updated_at": "2021-05-10T19:07:46Z", "author_association": "CONTRIBUTOR", "body": "Do you have a list of SQLite versions you want to test against?\r\n\r\nOne cool thing I saw recently (that we started using) was using `import docker` within Python, and then writing pytest functions which execute against the container\r\n\r\n[setup](https://github.com/StatCan/kubeflow-containers/blob/3c7dcfb5e7188982fb8ebcded82e84292720f720/conftest.py#L85)\r\n\r\n[example](https://github.com/StatCan/kubeflow-containers/blob/master/tests/jupyterlab-cpu/test_julia.py#L8-L18)\r\n\r\nThe inspiration for this came from the [jupyter docker-stacks](https://github.com/jupyter/docker-stacks/blob/09fb66007615ea68d9bce8f8e1a2cf9402f1e432/test/test_packages.py#L107)\r\n\r\nSo off the top of my head, you could look at building the container with different SQLite versions as a build-arg, then run tests against the containers. Just brainstorming, though; a rough sketch follows below.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 842862708, "label": "Ability to run CI against multiple SQLite versions"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1284#issuecomment-810779928", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1284", "id": 810779928, "node_id": "MDEyOklzc3VlQ29tbWVudDgxMDc3OTkyOA==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-03-31T05:40:12Z", "updated_at": "2021-03-31T05:40:12Z", "author_association": "CONTRIBUTOR", "body": "Maybe the addition of two template files: 'one_database_index.html' and 'one_table_index.html' would be a better idea than the documentation diff idea. (They could include commented instructions to rename the preferred template 'index.html', along with any other necessary guidance.)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 845794436, "label": "Feature or Documentation Request: Individual table as home page template"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1284#issuecomment-851567204", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1284", "id": 851567204, "node_id": "MDEyOklzc3VlQ29tbWVudDg1MTU2NzIwNA==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-05-31T15:42:10Z", "updated_at": "2021-11-04T03:15:01Z", "author_association": "CONTRIBUTOR", "body": "I very much want to make:\r\n https://list.SaferDisinfectants.org/disinfectants/listN \r\nhave this URL:\r\n https://list.SaferDisinfectants.org/\r\n \r\nI'm using only one table page on the site, with no pagination. 
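To sketch the container-testing idea brainstormed above: the names here are hypothetical - the `docker` SDK is assumed installed (`pip install docker`), and `datasette-test:sqlite-3.35` stands in for an image built with a SQLite-version build-arg.

```python
# A rough sketch, not a tested recipe: exercise one container per SQLite build.
import docker
import pytest

@pytest.fixture(scope="session")
def container():
    client = docker.from_env()
    c = client.containers.run(
        "datasette-test:sqlite-3.35",  # hypothetical tag, one per SQLite version
        command="sleep infinity",      # keep the container alive for exec_run
        detach=True,
    )
    yield c
    c.remove(force=True)

def test_sqlite_version(container):
    # Run the check inside the container, against *its* SQLite build
    exit_code, output = container.exec_run(
        ["python", "-c", "import sqlite3; print(sqlite3.sqlite_version)"]
    )
    assert exit_code == 0
    assert output.decode().strip().startswith("3.35")
```

Parametrizing the fixture over several image tags would then give one test run per SQLite version.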
I'm not using the home page, though when I tried to move my table to the home page as mentioned above, I failed to figure out how. \r\n\r\nI am using Cloudflare, but I haven't figured out a forwarding or HTML re-write method of doing this, either.\r\n\r\nIs there any way I can get a prettier list URL? I'm on Vercel.\r\n\r\n(I have a WordPress site on the main domain on a separate host.)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 845794436, "label": "Feature or Documentation Request: Individual table as home page template"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1284#issuecomment-949604763", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1284", "id": 949604763, "node_id": "IC_kwDOBm6k_c44mdGb", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2021-10-22T12:54:34Z", "updated_at": "2021-10-22T12:54:34Z", "author_association": "CONTRIBUTOR", "body": "I'm going to take a swing at this today. We'll see.", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 845794436, "label": "Feature or Documentation Request: Individual table as home page template"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1286#issuecomment-812679221", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1286", "id": 812679221, "node_id": "MDEyOklzc3VlQ29tbWVudDgxMjY3OTIyMQ==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-04-02T19:34:01Z", "updated_at": "2021-04-02T19:34:01Z", "author_association": "CONTRIBUTOR", "body": "This shows the city in a different color (and not the comma), but I get the idea, and I like it. (Ooh, could be nice to have the gear have an option in array fields to show as bullets or commas or semicolons...)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 849220154, "label": "Better default display of arrays of items"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1286#issuecomment-815978405", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1286", "id": 815978405, "node_id": "MDEyOklzc3VlQ29tbWVudDgxNTk3ODQwNQ==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-04-08T16:47:29Z", "updated_at": "2021-04-10T03:59:00Z", "author_association": "CONTRIBUTOR", "body": "This worked for me: \r\n`{{ cell.value | replace('\", \"','; ') | replace('[\\\"','') | replace('\\\"]','')}}`\r\n\r\nI'm sure there is a prettier (and more flexible) way, but for now, this is ever-so-much more pleasant to look at. \r\n\r\n------ AFTER: (screenshot)\r\n\r\n------ BEFORE: (screenshot)\r\n\r\n(Note: I didn't figure out how to have one item have no semicolon, while multi-items close with a semicolon, but this is good enough for now. I also didn't figure out how to set up a new jinja filter. I don't want to add to /datasette/utils/__init__.py as I assume that would get overwritten when upgrading datasette. Having a starter guide on creating jinja filters in datasette would be helpful. 
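In lieu of that starter guide, here is one hedged sketch of what such a plugin could look like, using Datasette's documented `prepare_jinja2_environment` plugin hook; the filter name and its JSON-array handling are invented for this example, not an established recipe.

```python
# A minimal one-file Datasette plugin registering a custom Jinja filter,
# so nothing in datasette/utils/__init__.py needs editing and upgrades
# stay safe. The filter itself is hypothetical.
import json
from datasette import hookimpl

def semicolon_list(value, sep="; "):
    # Render a JSON array cell like ["a", "b"] as "a; b";
    # pass any other value through unchanged.
    try:
        items = json.loads(value)
    except (TypeError, ValueError):
        return value
    if isinstance(items, list):
        return sep.join(str(item) for item in items)
    return value

@hookimpl
def prepare_jinja2_environment(env):
    env.filters["semicolon_list"] = semicolon_list
```

Dropped into a `plugins/` directory and loaded with `--plugins-dir=plugins/`, the template expression above could then shrink to `{{ cell.value | semicolon_list }}`.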
(The jinja documentation isn't datasette-specific enough for me to quite nail it.)\r\n", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 849220154, "label": "Better default display of arrays of items"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1298#issuecomment-823093669", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1298", "id": 823093669, "node_id": "MDEyOklzc3VlQ29tbWVudDgyMzA5MzY2OQ==", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-04-20T08:38:10Z", "updated_at": "2021-04-20T08:40:22Z", "author_association": "CONTRIBUTOR", "body": "@dracos I appreciate your ideas!\r\n\r\n1. Ooh, I like this: https://codepen.io/astro87/pen/LYRQNbd?editors=1100 (That's the codepen from your linked stackoverflow.)\r\n2. I worry that a max height will be a problem when my facets are open. (I've got 35 active ingredients, and so I've set the default_facet_size to 35.)\r\n3. I don't understand this one. I'm observing the screenshot... very helpful! (Ah, okay, TR = Top Right and BR = Bottom Right. Absolute grid refers to position style.) All the scroll bars look a little wonky to me. I've also got a lot of facets, and prefer the extra horizontal space so that not as many facets disappear below the fold. My site also has end users... some will be on mobile... not sure what the absolute grid would do there... \r\n4. (I still think a hover-arrow that scrolls upon click would help, too...)\r\n\r\nBut meanwhile, I'm going to go ahead and see if I can apply that shadow. (Never would've thought of that.) Hmmm... I'm not an SCSS person. This looks helpful! https://jsonformatter.org/scss-to-css", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 855476501, "label": "improve table horizontal scroll experience"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1300#issuecomment-821970965", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1300", "id": 821970965, "node_id": "MDEyOklzc3VlQ29tbWVudDgyMTk3MDk2NQ==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2021-04-18T10:41:15Z", "updated_at": "2021-04-18T10:41:15Z", "author_association": "CONTRIBUTOR", "body": "If I change the hookspec and add a row parameter, it works\r\n\r\nhttps://github.com/simonw/datasette/blob/7a2ed9f8a119e220b66d67c7b9e07cbab47b1196/datasette/hookspecs.py#L58\r\n\r\n```\r\ndef render_cell(value, column, row, table, database, datasette):\r\n```\r\n\r\nBut to generate a URL, I need the primary keys, but I can't call `pks = await db.primary_keys(table)` inside a sync function. 
I can't call `datasette.utils.detect_primary_keys` either, because the db connection is not publicly exposed (AFAICT).\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 860625833, "label": "Make row available to `render_cell` plugin hook"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1300#issuecomment-821971059", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1300", "id": 821971059, "node_id": "MDEyOklzc3VlQ29tbWVudDgyMTk3MTA1OQ==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2021-04-18T10:42:19Z", "updated_at": "2021-04-18T10:42:19Z", "author_association": "CONTRIBUTOR", "body": "If there's a simpler way to generate a URL for a specific row, I'm all ears", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 860625833, "label": "Make row available to `render_cell` plugin hook"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1300#issuecomment-833132571", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1300", "id": 833132571, "node_id": "MDEyOklzc3VlQ29tbWVudDgzMzEzMjU3MQ==", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2021-05-06T00:16:50Z", "updated_at": "2021-05-06T00:18:05Z", "author_association": "CONTRIBUTOR", "body": "I ended up using some JS as a workaround. \r\n\r\nFirst, add a JS file in `metadata.yaml`:\r\n\r\n```yaml\r\nextra_js_urls:\r\n - '/static/app.js'\r\n```\r\nThen, inside the script, find the blob download links, replace the `.blob` extension in the URL with `.jpg`, and replace the links with `<img>` elements. \r\nYou need to add an output formatter to serve `BLOB` columns as JPG. You can find the code in the first post.\r\n~~Replacing `.blob` -> `.jpg` might not even be necessary, because browsers only care about the mime type, so you only need to serve the binary content with the right `content-type` header.~~ 
You need to replace the extension, otherwise the output renderer will not run.\r\n\r\n```js\r\nwindow.addEventListener('DOMContentLoaded', () => {\r\n function renderBlobImages() {\r\n document.querySelectorAll('a[href*=\".blob\"]').forEach(el => {\r\n const img = document.createElement('img');\r\n img.className = 'blob-image';\r\n img.loading = 'lazy';\r\n img.src = el.href.replace('.blob', '.jpg');\r\n el.parentElement.replaceChild(img, el);\r\n });\r\n }\r\n\r\n renderBlobImages();\r\n});\r\n```\r\n\r\nwhile this does the job, I'd prefer handling this in Python where it belongs.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 860625833, "label": "Make row available to `render_cell` plugin hook"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1301#issuecomment-1271035998", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1301", "id": 1271035998, "node_id": "IC_kwDOBm6k_c5Lwnhe", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2022-10-07T02:38:04Z", "updated_at": "2022-10-07T02:38:04Z", "author_association": "CONTRIBUTOR", "body": "the only mode that `publish cloudrun` supports right now is immutable", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 860722711, "label": "Publishing to cloudrun with immutable mode?"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1356#issuecomment-853895159", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1356", "id": 853895159, "node_id": "MDEyOklzc3VlQ29tbWVudDg1Mzg5NTE1OQ==", "user": {"value": 25778, "label": "eyeseast"}, "created_at": "2021-06-03T14:03:59Z", "updated_at": "2021-06-03T14:03:59Z", "author_association": "CONTRIBUTOR", "body": "(Putting thoughts here to keep the conversation in one place.)\r\n\r\nI think using datasette for this use-case is the right approach. I usually have both datasette and sqlite-utils installed in the same project, and that's where I'm trying out queries, so it probably makes the most sense to have datasette also manage the output (and maybe the input, too).\r\n\r\nIt seems like both `--get` and `--query` could work better as subcommands, rather than options, if you're looking at building out a full CLI experience in datasette. It would give a cleaner separation in what you're trying to do and let each have its own dedicated options. 
So something like this:\r\n\r\n```sh\r\n# run an arbitrary query\r\ndatasette query covid.db \"select * from ny_times_us_counties limit 1\" --format yaml\r\n\r\n# run a canned query\r\ndatasette get covid.db some-canned-query --format yaml\r\n```\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 910092577, "label": "Research: syntactic sugar for using --get with SQL queries, maybe \"datasette query\""}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1380#issuecomment-953334718", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1380", "id": 953334718, "node_id": "IC_kwDOBm6k_c440ru-", "user": {"value": 813732, "label": "glasnt"}, "created_at": "2021-10-27T21:45:04Z", "updated_at": "2021-10-27T21:45:04Z", "author_association": "CONTRIBUTOR", "body": "I am also getting this issue, using the currently most recent version of datasette\r\n\r\n```\r\n$ datasette --version\r\ndatasette, version 0.59.1\r\n```\r\n\r\nIf I run `datasette` within just a folder of files, \r\n\r\n```\r\n$ datasette serve .\r\n```\r\n\r\nAdding new files while datasette is running shows no new files, and removing files causes datasette to return 500 errors. \r\n\r\n\r\n```\r\nhome\r\nError 500\r\n[Errno 2] No such file or directory: 'mydatabase.db'\r\nPowered by Datasette\r\n```\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 924748955, "label": "Serve all db files in a folder"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1380#issuecomment-953366110", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1380", "id": 953366110, "node_id": "IC_kwDOBm6k_c440zZe", "user": {"value": 813732, "label": "glasnt"}, "created_at": "2021-10-27T22:48:55Z", "updated_at": "2021-10-27T22:48:55Z", "author_association": "CONTRIBUTOR", "body": "It looks like if the files argument is a directory, `config_dir` is set, but files in that folder are only loaded into `self.files` at the `Datasette` class initialisation. \r\n\r\nI tried seeing if I could get `--reload` to work, but I'm getting issues trying to use that command when specifying a directory, as the command `serve` ends up in the files list(?): \r\n\r\n```\r\ndatasette serve . --reload\r\nError: Invalid value for '[FILES]...': Path 'serve' does not exist.\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 924748955, "label": "Serve all db files in a folder"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1380#issuecomment-967747190", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1380", "id": 967747190, "node_id": "IC_kwDOBm6k_c45rqZ2", "user": {"value": 813732, "label": "glasnt"}, "created_at": "2021-11-13T00:47:26Z", "updated_at": "2021-11-13T00:47:26Z", "author_association": "CONTRIBUTOR", "body": "Would it make sense to run datasette with a fswatch/inotifywait on a folder, then? 
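One possible shape for that idea, sketched with the `watchdog` package (`pip install watchdog`) rather than fswatch/inotifywait directly; the restart-the-whole-process strategy and the `*.db` pattern are assumptions here, not a tested recipe.

```python
# Restart `datasette serve .` whenever a *.db file appears or disappears.
import subprocess
import time
from watchdog.observers import Observer
from watchdog.events import PatternMatchingEventHandler

class RestartHandler(PatternMatchingEventHandler):
    def __init__(self):
        super().__init__(patterns=["*.db"])
        self.proc = self.start()

    def start(self):
        return subprocess.Popen(["datasette", "serve", "."])

    def on_created(self, event):
        self.restart()

    def on_deleted(self, event):
        self.restart()

    def restart(self):
        # Crude but effective: tear the server down and bring it back up
        self.proc.terminate()
        self.proc.wait()
        self.proc = self.start()

handler = RestartHandler()
observer = Observer()
observer.schedule(handler, ".", recursive=False)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    handler.proc.terminate()
    observer.stop()
observer.join()
```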
", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 924748955, "label": "Serve all db files in a folder"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1384#issuecomment-1065940779", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1384", "id": 1065940779, "node_id": "IC_kwDOBm6k_c4_iPcr", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2022-03-12T18:49:29Z", "updated_at": "2022-03-12T18:50:07Z", "author_association": "CONTRIBUTOR", "body": "Hello! Just wanted to chime in and note that there's a plugin to have Datasette [watch for updates to an external metadata.yaml/json and update the internal settings accordingly](https://datasette.io/plugins/datasette-remote-metadata), so I think the cache/poll use case is already covered. @khusmann If you don't need truly dynamic metadata then what you've come up with or the plugin ought to work fine.\r\n\r\nMaking the get_metadata async won't improve the situation by itself as only some of the code paths accessing metadata use that hook. The other paths use the internal metadata dict. Trying to force all paths through a async hook would have performance ramifications and making everything use the internal meta will cause problems for users that need changes to take effect immediately. This is why I came to the non-async solution as it was the path of least change within Datasette. As always, open to new ideas, etc!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 930807135, "label": "Plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1384#issuecomment-1066006292", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1384", "id": 1066006292, "node_id": "IC_kwDOBm6k_c4_ifcU", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2022-03-13T02:09:44Z", "updated_at": "2022-03-13T02:09:44Z", "author_association": "CONTRIBUTOR", "body": "> If I'm understanding your plugin code correctly, you query the db using the sync handle every time `get_metdata` is called, right? Won't this become a pretty big bottleneck if a hook into `render_cell` is trying to read metadata / plugin config?\r\n\r\nReading from sqlite DBs is pretty quick and I didn't notice significant performance issues when I was benchmarking. I tested on very large Datasette deployments (hundreds of DBs, millions of rows). See [\"Many small queries are efficient in sqlite\"](https://sqlite.org/np1queryprob.html) for more information on the rationale here. Also note that in the [datasette-live-config](https://github.com/next-LI/datasette-live-config) reference plugin, the DB connection is cached, so that eliminated most of the performance worries we had.\r\n\r\nIf you need to ensure fresh metadata is being read inside of a `render_cell` hook specifically, you don't need to do anything further! `get_metadata` gets called before `render_cell` every request, so it already has access to the synced meta. There shouldn't be a need to call `get_metadata(...)` or `metadata(...)` inside `render_cell`, you can just use `datasette._metadata_local` if you're really worried about performance.\r\n\r\n> The plugin is close, but looks like it only grabs remote metadata, is that right? 
Instead what I'm wanting is to grab metadata embedded in the attached databases.\r\n\r\nYes correct, the datasette-remote-metadata plugin doesn't do that. But the datasette-live-config plugin does. [It supports a `__metadata` table](https://github.com/next-LI/datasette-live-config/blob/main/datasette_live_config/__init__.py#L107-L138) that, when it exists on an attached DB, gets pulled into the Datasette internal `_metadata` and is also accessible via `get_metadata`. Updating is instantaneous, so there are no gotchas or security issues for users relying on the metadata-based permissions. Simon talked about eventually making something like this a standard feature of Datasette, but I'm not sure what the status is on that!\r\n\r\nGood luck!", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 930807135, "label": "Plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1384#issuecomment-1066169718", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1384", "id": 1066169718, "node_id": "IC_kwDOBm6k_c4_jHV2", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2022-03-13T19:48:49Z", "updated_at": "2022-03-13T19:48:49Z", "author_association": "CONTRIBUTOR", "body": "> For my reference, did you include a `render_cell` plugin calling `get_metadata` in those tests?\r\n\r\nYou shouldn't need to do this, as I mentioned previously. The code inside the `render_cell` hook already has access to the most recently sync'd metadata via `datasette._metadata_local`. Refreshing the metadata for every cell seems ... excessive.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 930807135, "label": "Plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1384#issuecomment-1066222323", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1384", "id": 1066222323, "node_id": "IC_kwDOBm6k_c4_jULz", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2022-03-14T00:36:42Z", "updated_at": "2022-03-14T00:36:42Z", "author_association": "CONTRIBUTOR", "body": "> Ah, sorry, I didn't get what you were saying the first time. Using _metadata_local in that way makes total sense -- I agree, refreshing metadata each cell was seeming quite excessive. Now I'm on the same page! :)\r\n\r\nAll good. Report back any issues you find with this stuff. Metadata/dynamic config hasn't been tested widely outside of what I've done AFAIK. If you find a strong use case for async meta, it's going to be better to know sooner rather than later!", "reactions": "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 930807135, "label": "Plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1384#issuecomment-869074182", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1384", "id": 869074182, "node_id": "MDEyOklzc3VlQ29tbWVudDg2OTA3NDE4Mg==", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2021-06-26T23:37:42Z", "updated_at": "2021-06-26T23:37:42Z", "author_association": "CONTRIBUTOR", "body": "> > Hmmm... 
that's tricky, since one of the most obvious ways to use this hook is to load metadata from database tables using SQL queries.\r\n> > @brandonrobertz do you have a working example of using this hook to populate metadata from database tables I can try?\r\n> \r\n> Answering my own question: here's how Brandon implements it in his `datasette-live-config` plugin: https://github.com/next-LI/datasette-live-config/blob/72e335e887f1c69c54c6c2441e07148955b0fc9f/datasette_live_config/__init__.py#L50-L160\r\n> \r\n> That's using a completely separate SQLite connection (actually wrapped in `sqlite-utils`) and making blocking synchronous calls to it.\r\n> \r\n> This is a pragmatic solution, which works - and likely performs just fine, because SQL queries like this against a small database are so fast that not running them asynchronously isn't actually a problem.\r\n> \r\n> But... it's weird. Everywhere else in Datasette land uses `await db.execute(...)` - but here's an example where users are encouraged to use blocking calls instead.\r\n\r\n_Ideally_ this hook would be asynchronous, but when I started down that path I quickly realized how large of a change this would be, since metadata gets used synchronously across the entire Datasette codebase. (And calling async code from sync is non-trivial.)\r\n\r\nIn my live-configuration implementation I use synchronous reads using a persistent sqlite connection. This works pretty well in practice, but I agree it's limiting. My thinking around this was to go with the path of least change as `Datasette.metadata()` is a critical core function.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 930807135, "label": "Plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1384#issuecomment-869074701", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1384", "id": 869074701, "node_id": "MDEyOklzc3VlQ29tbWVudDg2OTA3NDcwMQ==", "user": {"value": 2670795, "label": "brandonrobertz"}, "created_at": "2021-06-26T23:45:18Z", "updated_at": "2021-06-26T23:45:37Z", "author_association": "CONTRIBUTOR", "body": "> Here's where the plugin hook is called, demonstrating the `fallback=` argument:\r\n> \r\n> https://github.com/simonw/datasette/blob/05a312caf3debb51aa1069939923a49e21cd2bd1/datasette/app.py#L426-L472\r\n> \r\n> I'm not convinced of the use-case for passing `fallback=` to the hook here - is there a reason a plugin might care whether fallback is `True` or `False`, seeing as the `metadata()` method already respects that fallback logic on line 459?\r\n\r\nI think you're right. 
I can't think of a reason why the plugin would care about the `fallback` parameter since plugins are currently mandated to return a full, global metadata dict.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 930807135, "label": "Plugin hook for dynamic metadata"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1388#issuecomment-876213177", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1388", "id": 876213177, "node_id": "MDEyOklzc3VlQ29tbWVudDg3NjIxMzE3Nw==", "user": {"value": 80737, "label": "aslakr"}, "created_at": "2021-07-08T07:47:17Z", "updated_at": "2021-07-08T07:47:17Z", "author_association": "CONTRIBUTOR", "body": "> This sounds like a valuable feature for people running Datasette behind a proxy.\r\n\r\nYes, in some cases it is easier to use e.g. Apache's [ProxyPass Directive](https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxypass) with a Unix domain socket like `unix:/home/www.socket|http://localhost/whatever/`.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 939051549, "label": "Serve using UNIX domain socket"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1396#issuecomment-946467547", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1396", "id": 946467547, "node_id": "IC_kwDOBm6k_c44afLb", "user": {"value": 72577720, "label": "MichaelTiemannOSC"}, "created_at": "2021-10-19T08:10:26Z", "updated_at": "2021-10-19T08:10:26Z", "author_association": "CONTRIBUTOR", "body": "Now that 0.59 has excellent annotated release notes, you can re-confirm this is fixed by updating the published Docker image and checking that these fixes still work ;-)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 944903881, "label": "\"invalid reference format\" publishing Docker image"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/14#issuecomment-346244871", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/14", "id": 346244871, "node_id": "MDEyOklzc3VlQ29tbWVudDM0NjI0NDg3MQ==", "user": {"value": 21148, "label": "jacobian"}, "created_at": "2017-11-22T05:06:30Z", "updated_at": "2017-11-22T05:06:30Z", "author_association": "CONTRIBUTOR", "body": "I'd also suggest taking a look at [stevedore](https://docs.openstack.org/stevedore/latest/), which has a ton of tools for doing plugin stuff. 
I've had good luck with it in the past.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 267707940, "label": "Datasette Plugins"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1401#issuecomment-884910320", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1401", "id": 884910320, "node_id": "IC_kwDOBm6k_c40vqjw", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2021-07-22T13:26:01Z", "updated_at": "2021-07-22T13:26:01Z", "author_association": "CONTRIBUTOR", "body": "Ordered lists didn't work either, btw", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 950664971, "label": "unordered list is not rendering bullet points in description_html on database page"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1401#issuecomment-950150483", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1401", "id": 950150483, "node_id": "IC_kwDOBm6k_c44oiVT", "user": {"value": 418191, "label": "jaywgraves"}, "created_at": "2021-10-23T13:09:10Z", "updated_at": "2021-10-23T13:09:10Z", "author_association": "CONTRIBUTOR", "body": "I think it's because of this in `app.css` \r\n\r\n```\r\nol,\r\nul {\r\n\tlist-style: none;\r\n}\r\n```\r\n\r\nhttps://github.com/simonw/datasette/blame/main/datasette/static/app.css#L35-L38\r\n\r\nYou could probably reinstate that by providing your own CSS.\r\nhttps://docs.datasette.io/en/0.24/custom_templates.html#custom-css-and-javascript", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 950664971, "label": "unordered list is not rendering bullet points in description_html on database page"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1419#issuecomment-892276385", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1419", "id": 892276385, "node_id": "IC_kwDOBm6k_c41Lw6h", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2021-08-04T00:58:49Z", "updated_at": "2021-08-04T00:58:49Z", "author_association": "CONTRIBUTOR", "body": "Yes, [filter clauses on aggregate queries were added to SQLite in 3.30](https://www.sqlite.org/releaselog/3_30_1.html)", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 959710008, "label": "`publish cloudrun` should deploy a more recent SQLite version"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1419#issuecomment-893114612", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1419", "id": 893114612, "node_id": "IC_kwDOBm6k_c41O9j0", "user": {"value": 536941, "label": "fgregg"}, "created_at": "2021-08-05T02:29:06Z", "updated_at": "2021-08-05T02:29:06Z", "author_association": "CONTRIBUTOR", "body": "There's a lot of complexity here that's probably not worth addressing. 
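As an aside, a quick way to verify the FILTER-clause support mentioned a couple of comments up is a few lines of Python against the standard library `sqlite3` module; the demo table here is made up, and `sqlite3.sqlite_version` must report 3.30 or later for the query to parse.

```python
import sqlite3

print(sqlite3.sqlite_version)  # needs >= 3.30 for FILTER on aggregates

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (category TEXT, amount INTEGER)")
conn.executemany(
    "INSERT INTO t VALUES (?, ?)", [("a", 1), ("a", 2), ("b", 3)]
)
print(conn.execute(
    """
    SELECT
        count(*) FILTER (WHERE category = 'a') AS a_count,
        sum(amount) FILTER (WHERE category = 'b') AS b_sum
    FROM t
    """
).fetchone())  # -> (2, 3)
```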
I got what I needed by patching the Dockerfile that Cloud Run uses to install a newer version of SQLite.\r\n\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 959710008, "label": "`publish cloudrun` should deploy a more recent SQLite version"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1425#issuecomment-895003796", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1425", "id": 895003796, "node_id": "IC_kwDOBm6k_c41WKyU", "user": {"value": 3243482, "label": "abdusco"}, "created_at": "2021-08-09T07:14:35Z", "updated_at": "2021-08-09T07:14:35Z", "author_association": "CONTRIBUTOR", "body": "I believe this also provides a workaround for the problem I face in https://github.com/simonw/datasette/issues/1300. \r\n\r\nNow I should be able to get table PKs and generate a row URL. I'll test this out and report my findings.\r\n\r\n\r\n```py\r\nfrom datasette.utils import path_from_row_pks\r\n\r\npks = await db.primary_keys(table)\r\nurl = self.ds.urls.row_blob(\r\n database,\r\n table,\r\n path_from_row_pks(row, pks, not pks),\r\n column,\r\n)\r\n```", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 963528457, "label": "render_cell() hook should support returning an awaitable"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1432#issuecomment-946255239", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1432", "id": 946255239, "node_id": "IC_kwDOBm6k_c44ZrWH", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-10-18T23:55:25Z", "updated_at": "2021-10-18T23:55:25Z", "author_association": "CONTRIBUTOR", "body": "I am getting this when I visit my live Datasette page:\r\n```\r\nThis Serverless Function has crashed.\r\nYour connection is working correctly.\r\nVercel is working correctly.\r\n500: INTERNAL_SERVER_ERROR\r\nCode: FUNCTION_INVOCATION_FAILED\r\nID: ...\r\n```\r\nAnd in the server logs, I'm getting\r\n\r\n```\r\n[GET] /disinfectants/listN\r\n19:53:14:23\r\nmodule initialization error: __init__() got an unexpected keyword argument 'config'\r\nmodule initialization error\r\n__init__() got an unexpected keyword argument 'config'\r\n```\r\nWhich is the same error that @ashishdotme reported above.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 969855774, "label": "Rename Datasette.__init__(config=) parameter to settings="}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1432#issuecomment-946287922", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1432", "id": 946287922, "node_id": "IC_kwDOBm6k_c44ZzUy", "user": {"value": 192568, "label": "mroswell"}, "created_at": "2021-10-19T01:16:41Z", "updated_at": "2021-10-19T01:16:41Z", "author_association": "CONTRIBUTOR", "body": "Resolved, with assistance from @ashishdotme (Thank you!)\r\n\r\nUpdated requirements.txt to include:\r\n```\r\ndatasette==0.59\r\ndatasette-publish-vercel==0.11\r\nsqlite-utils==3.6\r\n```\r\n\r\nRan:\r\n```\r\n$ pip3 install -r requirements.txt\r\n```\r\nThe site is back at work! 
Yay!\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 969855774, "label": "Rename Datasette.__init__(config=) parameter to settings="}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1464#issuecomment-915279711", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1464", "id": 915279711, "node_id": "IC_kwDOBm6k_c42jg9f", "user": {"value": 51016, "label": "ctb"}, "created_at": "2021-09-08T14:16:49Z", "updated_at": "2021-09-08T14:16:49Z", "author_association": "CONTRIBUTOR", "body": "on commit d57ab156b35ec642", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 991191951, "label": "clean checkout & clean environment has test failures"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1464#issuecomment-915299013", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1464", "id": 915299013, "node_id": "IC_kwDOBm6k_c42jlrF", "user": {"value": 7476523, "label": "bobwhitelock"}, "created_at": "2021-09-08T14:40:28Z", "updated_at": "2021-09-08T14:40:28Z", "author_association": "CONTRIBUTOR", "body": "What are the full errors you're getting?\r\n\r\nThis *may* be the same issue as described in https://github.com/simonw/datasette/pull/1223 - essentially the test suite (and corresponding Datasette features I assume) are by default implicitly dependent on your Sqlite installation having been compiled with the `SQLITE_ENABLE_FTS3_PARENTHESIS` option. If this is the same issue then I think this can be fixed either by recompiling with that option or (probably more easily) by running `pip install pysqlite3-binary`, which will be used in preference to your system Sqlite installation and has this option enabled.\r\n", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 991191951, "label": "clean checkout & clean environment has test failures"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1464#issuecomment-915302885", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1464", "id": 915302885, "node_id": "IC_kwDOBm6k_c42jmnl", "user": {"value": 51016, "label": "ctb"}, "created_at": "2021-09-08T14:44:50Z", "updated_at": "2021-09-08T14:44:50Z", "author_association": "CONTRIBUTOR", "body": "thanks for the response! full errors attached; excerpt:\r\n\r\n```\r\n...\r\n\r\n\r\n def test_searchmode(table_metadata, querystring, expected_rows):\r\n with make_app_client(\r\n metadata={\"databases\": {\"fixtures\": {\"tables\": {\"searchable\": table_metadata}}}}\r\n ) as client:\r\n response = client.get(\"/fixtures/searchable.json?\" + querystring)\r\n> assert expected_rows == response.json[\"rows\"]\r\nE AssertionError: assert [[1, 'barry c...sel', 'puma']] == []\r\nE Left contains 2 more items, first extra item: [1, 'barry cat', 'terry dog', 'panther']\r\nE Use -v to get the full diff\r\n\r\n/Users/t/dev/datasette/tests/test_api.py:1115: AssertionError\r\n```\r\n\r\n[errors.txt](https://github.com/simonw/datasette/files/7129719/errors.txt)\r\n\r\nA quick scan of #1223 suggests you're right. 
Unfortunately, pysqlite3-binary isn't available for Mac OS X, so I can't quickly check that that fixes it; will do so later.", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 991191951, "label": "clean checkout & clean environment has test failures"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1464#issuecomment-915343886", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1464", "id": 915343886, "node_id": "IC_kwDOBm6k_c42jwoO", "user": {"value": 7476523, "label": "bobwhitelock"}, "created_at": "2021-09-08T15:32:06Z", "updated_at": "2021-09-08T15:32:06Z", "author_association": "CONTRIBUTOR", "body": "Thanks, that does look similar!\r\n\r\n> Unfortunately, pysqlite3-binary isn't available for Mac OS X, so I can't quickly check that that fixes it; will do so later.\r\n\r\nAh that makes sense, I guess that's why this isn't just always installed already. I wonder if a possible solution to this issue could be doing feature detection on whether this feature is supported by the current Sqlite version, and if not these tests could be disabled locally? But possibly there's a better way to handle this, will see what @simonw thinks", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 991191951, "label": "clean checkout & clean environment has test failures"}, "performed_via_github_app": null} {"html_url": "https://github.com/simonw/datasette/issues/1464#issuecomment-917642487", "issue_url": "https://api.github.com/repos/simonw/datasette/issues/1464", "id": 917642487, "node_id": "IC_kwDOBm6k_c42shz3", "user": {"value": 51016, "label": "ctb"}, "created_at": "2021-09-12T14:03:09Z", "updated_at": "2021-09-12T14:03:09Z", "author_association": "CONTRIBUTOR", "body": "haven't had time to get back to this, but idle thought that I'm recording for later investigation: how does the continuous integration handle this installation issue? Is it documented there?", "reactions": "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", "issue": {"value": 991191951, "label": "clean checkout & clean environment has test failures"}, "performed_via_github_app": null}
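The feature-detection idea suggested above could look something like this (a sketch, not Datasette's actual test code): SQLite exposes its compile-time options via `PRAGMA compile_options`, so a pytest marker can skip the FTS tests on builds compiled without `SQLITE_ENABLE_FTS3_PARENTHESIS`.

```python
import sqlite3
import pytest

def supports_fts3_parenthesis():
    # Compile options are reported without their SQLITE_ prefix
    conn = sqlite3.connect(":memory:")
    options = {row[0] for row in conn.execute("PRAGMA compile_options")}
    return "ENABLE_FTS3_PARENTHESIS" in options

requires_fts_parenthesis = pytest.mark.skipif(
    not supports_fts3_parenthesis(),
    reason="SQLite compiled without SQLITE_ENABLE_FTS3_PARENTHESIS",
)

@requires_fts_parenthesis
def test_searchmode():
    ...  # the FTS search-mode assertions would go here
```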