html_url,issue_url,id,node_id,user,user_label,created_at,updated_at,author_association,body,reactions,issue,issue_label,performed_via_github_app https://github.com/simonw/datasette/issues/370#issuecomment-1261930179,https://api.github.com/repos/simonw/datasette/issues/370,1261930179,IC_kwDOBm6k_c5LN4bD,72577720,MichaelTiemannOSC,2022-09-29T08:17:46Z,2022-09-29T08:17:46Z,CONTRIBUTOR,"Just watched this video which demonstrates the integration of *any* webapp into JupyterLab: https://youtu.be/FH1dKKmvFtc Maybe this is the answer?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",377155320,Integration with JupyterLab, https://github.com/simonw/datasette/issues/1396#issuecomment-946467547,https://api.github.com/repos/simonw/datasette/issues/1396,946467547,IC_kwDOBm6k_c44afLb,72577720,MichaelTiemannOSC,2021-10-19T08:10:26Z,2021-10-19T08:10:26Z,CONTRIBUTOR,"Now that 0.59 has excellent annotated release notes, you can re-confirm this is fixed by updating the published Docker image and checking that these fixes still work ;-)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",944903881,"""invalid reference format"" publishing Docker image", https://github.com/simonw/datasette/pull/2068#issuecomment-1547911570,https://api.github.com/repos/simonw/datasette/issues/2068,1547911570,IC_kwDOBm6k_c5cQ0GS,49699333,dependabot[bot],2023-05-15T13:59:35Z,2023-05-15T13:59:35Z,CONTRIBUTOR,Superseded by #2075.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1690842199,Bump sphinx from 6.1.3 to 7.0.0, https://github.com/simonw/datasette/pull/2064#issuecomment-1529737426,https://api.github.com/repos/simonw/datasette/issues/2064,1529737426,IC_kwDOBm6k_c5bLfDS,49699333,dependabot[bot],2023-05-01T13:58:50Z,2023-05-01T13:58:50Z,CONTRIBUTOR,Superseded by #2068.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1683229834,Bump sphinx from 6.1.3 to 6.2.1, https://github.com/simonw/datasette/pull/2063#issuecomment-1521837780,https://api.github.com/repos/simonw/datasette/issues/2063,1521837780,IC_kwDOBm6k_c5atWbU,49699333,dependabot[bot],2023-04-25T13:57:52Z,2023-04-25T13:57:52Z,CONTRIBUTOR,Superseded by #2064.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1681339696,Bump sphinx from 6.1.3 to 6.2.0, https://github.com/simonw/datasette/pull/2014#issuecomment-1487999503,https://api.github.com/repos/simonw/datasette/issues/2014,1487999503,IC_kwDOBm6k_c5YsRIP,49699333,dependabot[bot],2023-03-29T06:09:11Z,2023-03-29T06:09:11Z,CONTRIBUTOR,Superseded by #2047.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1566081801,Bump black from 22.12.0 to 23.1.0, https://github.com/simonw/datasette/pull/2043#issuecomment-1486944644,https://api.github.com/repos/simonw/datasette/issues/2043,1486944644,IC_kwDOBm6k_c5YoPmE,49699333,dependabot[bot],2023-03-28T13:58:20Z,2023-03-28T13:58:20Z,CONTRIBUTOR,Superseded by #2046.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1639446870,Bump furo from 2022.12.7 to 2023.3.23, https://github.com/simonw/datasette/pull/1982#issuecomment-1376620851,https://api.github.com/repos/simonw/datasette/issues/1982,1376620851,IC_kwDOBm6k_c5SDZEz,49699333,dependabot[bot],2023-01-10T02:03:18Z,2023-01-10T02:03:18Z,CONTRIBUTOR,"Looks like sphinx is up-to-date now, so this is no longer needed.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1525560504,Bump sphinx from 5.3.0 to 6.1.2, https://github.com/simonw/datasette/pull/1977#issuecomment-1375596856,https://api.github.com/repos/simonw/datasette/issues/1977,1375596856,IC_kwDOBm6k_c5R_fE4,49699333,dependabot[bot],2023-01-09T13:06:14Z,2023-01-09T13:06:14Z,CONTRIBUTOR,Superseded by #1982.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1522552817,Bump sphinx from 5.3.0 to 6.1.1, https://github.com/simonw/datasette/pull/1976#issuecomment-1373592231,https://api.github.com/repos/simonw/datasette/issues/1976,1373592231,IC_kwDOBm6k_c5R31qn,49699333,dependabot[bot],2023-01-06T13:02:15Z,2023-01-06T13:02:15Z,CONTRIBUTOR,Superseded by #1977.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1520712722,Bump sphinx from 5.3.0 to 6.1.0, https://github.com/simonw/datasette/pull/1974#issuecomment-1372188571,https://api.github.com/repos/simonw/datasette/issues/1974,1372188571,IC_kwDOBm6k_c5Rye-b,49699333,dependabot[bot],2023-01-05T13:02:40Z,2023-01-05T13:02:40Z,CONTRIBUTOR,Superseded by #1976.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1516376583,Bump sphinx from 5.3.0 to 6.0.0, https://github.com/simonw/datasette/pull/1685#issuecomment-1237381620,https://api.github.com/repos/simonw/datasette/issues/1685,1237381620,IC_kwDOBm6k_c5JwPH0,49699333,dependabot[bot],2022-09-05T18:36:47Z,2022-09-05T18:36:47Z,CONTRIBUTOR,"Looks like jinja2 is no longer updatable, so this is no longer needed.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1180778860,"Update jinja2 requirement from <3.1.0,>=2.10.3 to >=2.10.3,<3.2.0", https://github.com/simonw/datasette/pull/1799#issuecomment-1237381569,https://api.github.com/repos/simonw/datasette/issues/1799,1237381569,IC_kwDOBm6k_c5JwPHB,49699333,dependabot[bot],2022-09-05T18:36:42Z,2022-09-05T18:36:42Z,CONTRIBUTOR,"Looks like aiofiles is no longer updatable, so this is no longer needed.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1362242558,"Update aiofiles requirement from <0.9,>=0.4 to >=0.4,<22.2", https://github.com/simonw/datasette/pull/1693#issuecomment-1168704157,https://api.github.com/repos/simonw/datasette/issues/1693,1168704157,IC_kwDOBm6k_c5FqQKd,49699333,dependabot[bot],2022-06-28T13:11:36Z,2022-06-28T13:11:36Z,CONTRIBUTOR,Superseded by #1763.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1184850337,Bump black from 22.1.0 to 22.3.0, https://github.com/simonw/datasette/pull/1753#issuecomment-1163091750,https://api.github.com/repos/simonw/datasette/issues/1753,1163091750,IC_kwDOBm6k_c5FU18m,49699333,dependabot[bot],2022-06-22T13:22:34Z,2022-06-22T13:22:34Z,CONTRIBUTOR,Superseded by #1760.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1261826957,Bump furo from 2022.4.7 to 2022.6.4.1, https://github.com/simonw/datasette/pull/1593#issuecomment-1031455498,https://api.github.com/repos/simonw/datasette/issues/1593,1031455498,IC_kwDOBm6k_c49esMK,49699333,dependabot[bot],2022-02-07T13:13:22Z,2022-02-07T13:13:22Z,CONTRIBUTOR,Superseded by #1631.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1101705012,"Update pytest-asyncio requirement from <0.17,>=0.10 to >=0.10,<0.18", https://github.com/simonw/datasette/pull/1514#issuecomment-972852184,https://api.github.com/repos/simonw/datasette/issues/1514,972852184,IC_kwDOBm6k_c45_IvY,49699333,dependabot[bot],2021-11-18T13:11:15Z,2021-11-18T13:11:15Z,CONTRIBUTOR,Superseded by #1516.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1056117435,Bump black from 21.9b0 to 21.11b0, https://github.com/simonw/datasette/pull/1500#issuecomment-971568829,https://api.github.com/repos/simonw/datasette/issues/1500,971568829,IC_kwDOBm6k_c456Pa9,49699333,dependabot[bot],2021-11-17T13:13:58Z,2021-11-17T13:13:58Z,CONTRIBUTOR,Superseded by #1514.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1041158024,Bump black from 21.9b0 to 21.10b0, https://github.com/simonw/datasette/pull/1489#issuecomment-943594738,https://api.github.com/repos/simonw/datasette/issues/1489,943594738,IC_kwDOBm6k_c44Phzy,49699333,dependabot[bot],2021-10-14T18:04:13Z,2021-10-14T18:04:13Z,CONTRIBUTOR,"OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`. You can also ignore all major, minor, or patch releases for a dependency by adding an [`ignore` condition](https://docs.github.com/en/code-security/supply-chain-security/configuration-options-for-dependency-updates#ignore) with the desired `update_types` to your config file. If you change your mind, just re-open this PR and I'll resolve any conflicts on it.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1026379132,"Update pyyaml requirement from ~=5.3 to >=5.3,<7.0", https://github.com/simonw/datasette/pull/1489#issuecomment-943594735,https://api.github.com/repos/simonw/datasette/issues/1489,943594735,IC_kwDOBm6k_c44Phzv,49699333,dependabot[bot],2021-10-14T18:04:12Z,2021-10-14T18:04:12Z,CONTRIBUTOR,Looks like this PR is closed. If you re-open it I'll rebase it as long as no-one else has edited it (you can use `@dependabot reopen` if the branch has been deleted).,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1026379132,"Update pyyaml requirement from ~=5.3 to >=5.3,<7.0", https://github.com/simonw/datasette/pull/1453#issuecomment-919135732,https://api.github.com/repos/simonw/datasette/issues/1453,919135732,IC_kwDOBm6k_c42yOX0,49699333,dependabot[bot],2021-09-14T13:10:38Z,2021-09-14T13:10:38Z,CONTRIBUTOR,Superseded by #1471.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",982780906,Bump black from 21.7b0 to 21.8b0, https://github.com/simonw/datasette/pull/1318#issuecomment-838449572,https://api.github.com/repos/simonw/datasette/issues/1318,838449572,MDEyOklzc3VlQ29tbWVudDgzODQ0OTU3Mg==,49699333,dependabot[bot],2021-05-11T13:12:30Z,2021-05-11T13:12:30Z,CONTRIBUTOR,Superseded by #1321.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",876431852,Bump black from 21.4b2 to 21.5b0, https://github.com/dogsheep/dogsheep-photos/issues/3#issuecomment-934372104,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/3,934372104,IC_kwDOD079W843sWMI,41546558,RhetTbull,2021-10-05T12:38:24Z,2021-10-05T12:38:24Z,CONTRIBUTOR,"As dogsheep-photos already uses [osxphotos](https://github.com/RhetTbull/osxphotos) to load photos you can access the EXIF data via osxphotos. Apple Photos imports a small subset of EXIF data at the time the photo is imported and osxphotos provides this via the [exif_info](https://github.com/RhetTbull/osxphotos#exifinfo) property. If you want the full EXIF data, osxphotos also provides a wrapper around [exiftool](https://github.com/RhetTbull/osxphotos#exiftool).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",602533481,"Import EXIF data into SQLite - lens used, ISO, aperture etc", https://github.com/dogsheep/dogsheep-photos/issues/33#issuecomment-778246347,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/33,778246347,MDEyOklzc3VlQ29tbWVudDc3ODI0NjM0Nw==,41546558,RhetTbull,2021-02-12T15:00:43Z,2021-02-12T15:00:43Z,CONTRIBUTOR,"Yes, Big Sur Photos database doesn't have `ZGENERICASSET` table. PR #31 will fix this.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",803338729,photo-to-sqlite: command not found, https://github.com/dogsheep/dogsheep-photos/pull/31#issuecomment-748562330,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/31,748562330,MDEyOklzc3VlQ29tbWVudDc0ODU2MjMzMA==,41546558,RhetTbull,2020-12-20T04:45:08Z,2020-12-20T04:45:08Z,CONTRIBUTOR,Fixes the issue mentioned here: https://github.com/dogsheep/dogsheep-photos/issues/15#issuecomment-748436115,"{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 1, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771511344,Update for Big Sur, https://github.com/dogsheep/dogsheep-photos/issues/15#issuecomment-748562288,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/15,748562288,MDEyOklzc3VlQ29tbWVudDc0ODU2MjI4OA==,41546558,RhetTbull,2020-12-20T04:44:22Z,2020-12-20T04:44:22Z,CONTRIBUTOR,"@nickvazz @simonw I opened a [PR](https://github.com/dogsheep/dogsheep-photos/pull/31) that replaces the SQL for `ZCOMPUTEDASSETATTRIBUTES` to use osxphotos which now exposes all this data and has been updated for Big Sur. I did regression tests to confirm the extracted data is identical, with one exception which should not affect operation: the old code pulled data from `ZCOMPUTEDASSETATTRIBUTES` for missing photos while the main loop ignores missing photos and does not add them to `apple_photos`. The new code does not add rows to the `apple_photos_scores` table for missing photos.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",612151767,Expose scores from ZCOMPUTEDASSETATTRIBUTES, https://github.com/dogsheep/dogsheep-photos/issues/15#issuecomment-748436779,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/15,748436779,MDEyOklzc3VlQ29tbWVudDc0ODQzNjc3OQ==,41546558,RhetTbull,2020-12-19T07:49:00Z,2020-12-19T07:49:00Z,CONTRIBUTOR,@nickvazz ZGENERICASSET changed to ZASSET in Big Sur. Here's a list of other changes to the schema in Big Sur: https://github.com/RhetTbull/osxphotos/wiki/Changes-in-Photos-6---Big-Sur,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",612151767,Expose scores from ZCOMPUTEDASSETATTRIBUTES, https://github.com/dogsheep/dogsheep-photos/issues/22#issuecomment-628405453,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22,628405453,MDEyOklzc3VlQ29tbWVudDYyODQwNTQ1Mw==,41546558,RhetTbull,2020-05-14T05:59:53Z,2020-05-14T05:59:53Z,CONTRIBUTOR,"I've added support for the above exif data to [v0.28.17](https://github.com/RhetTbull/osxphotos/releases/tag/v0.28.17) of osxphotos. `PhotoInfo.exif_info` will return an `ExifInfo` [dataclass](https://docs.python.org/3/library/dataclasses.html) object with the following properties: ```python flash_fired: bool iso: int metering_mode: int sample_rate: int track_format: int white_balance: int aperture: float bit_rate: float duration: float exposure_bias: float focal_length: float fps: float latitude: float longitude: float shutter_speed: float camera_make: str camera_model: str codec: str lens_model: str ``` It's not all the EXIF data available in most files but is the data Photos deems important to save. Of course, you can get all the exif_data Note: this only works in Photos 5. As best as I can tell, EXIF data is not stored in the database for earlier versions. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",615626118,Try out ExifReader, https://github.com/dogsheep/dogsheep-photos/issues/22#issuecomment-627007458,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22,627007458,MDEyOklzc3VlQ29tbWVudDYyNzAwNzQ1OA==,41546558,RhetTbull,2020-05-11T22:51:52Z,2020-05-11T22:52:26Z,CONTRIBUTOR,"I'm not familiar with `ExifReader`. I wrote my own wrapper around `exiftool` because I wanted a simple way to write EXIF data when exporting photos (e.g. writing out to PersonInImage and keywords to IPTC:Keywords) and the existing python packages like [pyexiftool](https://github.com/smarnach/pyexiftool) didn't do quite what I wanted. If all you're after is the camera and shot info, that's available in `ZEXTENDEDATTRIBUTES` table. I've got an open issue [#11](https://github.com/RhetTbull/osxphotos/issues/11) to add this to osxphotos but it hasn't bubbled to the top of my backlog yet. osxphotos will give you the location info: `PhotoInfo.location` returns a tuple of (lat, lon) though this info is in ZEXTENDEDATTRIBUTES too (though it might not be correct as I believe Photos creates this table at import and the user might have changed the location of a photo, e.g. if camera didn't have GPS). ```sql CREATE TABLE ZEXTENDEDATTRIBUTES ( Z_PK INTEGER PRIMARY KEY, Z_ENT INTEGER, Z_OPT INTEGER, ZFLASHFIRED INTEGER, ZISO INTEGER, ZMETERINGMODE INTEGER, ZSAMPLERATE INTEGER, ZTRACKFORMAT INTEGER, ZWHITEBALANCE INTEGER, ZASSET INTEGER, ZAPERTURE FLOAT, ZBITRATE FLOAT, ZDURATION FLOAT, ZEXPOSUREBIAS FLOAT, ZFOCALLENGTH FLOAT, ZFPS FLOAT, ZLATITUDE FLOAT, ZLONGITUDE FLOAT, ZSHUTTERSPEED FLOAT, ZCAMERAMAKE VARCHAR, ZCAMERAMODEL VARCHAR, ZCODEC VARCHAR, ZLENSMODEL VARCHAR ); ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",615626118,Try out ExifReader, https://github.com/dogsheep/dogsheep-photos/issues/22#issuecomment-626667235,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/22,626667235,MDEyOklzc3VlQ29tbWVudDYyNjY2NzIzNQ==,41546558,RhetTbull,2020-05-11T12:20:34Z,2020-05-11T12:20:34Z,CONTRIBUTOR,"@simonw FYI, osxphotos includes a built in ExifTool class that uses [exiftool](https://exiftool.org/) to read and write exif data. It's not exposed yet in the docs because I really only use it right now in the osphotos command line interface to write tags when exporting. In v0.28.16 (just pushed) I added an ExifTool.as_dict() method which will give you a dict with all the exif tags in a file. For example: ```python import osxphotos photos = osxphotos.PhotosDB().photos() exiftool = osxphotos.exiftool.ExifTool(photos[0].path) exifdata = exiftool.as_dict() tags = exifdata[""IPTC:Keywords""] ``` Not as elegant perhaps as a python only implementation because ExifTool has to make subprocess calls to an external tool but exiftool is by far the best tool available for reading and writing EXIF data and it does support HEIC. As for implementation, ExifTool uses a singleton pattern so the first time you instantiate it, it spawns an IPC to exiftool but then keeps it open and uses the same process for any subsequent calls (even on different files). ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",615626118,Try out ExifReader, https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626396379,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21,626396379,MDEyOklzc3VlQ29tbWVudDYyNjM5NjM3OQ==,41546558,RhetTbull,2020-05-10T22:01:48Z,2020-05-10T22:01:48Z,CONTRIBUTOR,"Frustrates me when package authors create a ""drop in"" replacement with the same import name...this kind of thing has bitten me more than once! Would've been nicer I think for bpylist2 to do ""import bpylist2 as bpylist""","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",615474990,bpylist.archiver.CircularReference: archive has a cycle with uid(13), https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626395641,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21,626395641,MDEyOklzc3VlQ29tbWVudDYyNjM5NTY0MQ==,41546558,RhetTbull,2020-05-10T21:55:54Z,2020-05-10T21:55:54Z,CONTRIBUTOR,Did removing old bpylist solve the original problem or do you still have a photo that throws circular reference?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",615474990,bpylist.archiver.CircularReference: archive has a cycle with uid(13), https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626395507,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21,626395507,MDEyOklzc3VlQ29tbWVudDYyNjM5NTUwNw==,41546558,RhetTbull,2020-05-10T21:54:45Z,2020-05-10T21:54:45Z,CONTRIBUTOR,"@simonw does Photos show valid reverse geolocation info? Are you sure you're using [bpylist2](https://github.com/xa4a/bpylist2) and not bpylist? They're both unfortunately imported as ""bpylist"" so if you somehow got the wrong (original bpylist) version installed, it could be the issue. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",615474990,bpylist.archiver.CircularReference: archive has a cycle with uid(13), https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-626390317,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21,626390317,MDEyOklzc3VlQ29tbWVudDYyNjM5MDMxNw==,41546558,RhetTbull,2020-05-10T21:11:24Z,2020-05-10T21:50:58Z,CONTRIBUTOR,"Ugh....Yeah, I think easiest is to catch the exception and return no place as you suggest. This particular bit of code involves un-archiving a serialized NSKeyedArchiver which uses an object table and it is certainly possible to create a circular reference that way. Because this is happening in the decode, the circular reference must be in the original data. Does Photos show valid reverse geolocation info for the photo in question? If so, Photos may be doing something beyond a simple decode of the binary plist. For now, I'll push a patch to catch the exception.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",615474990,bpylist.archiver.CircularReference: archive has a cycle with uid(13), https://github.com/dogsheep/dogsheep-photos/issues/17#issuecomment-624284539,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/17,624284539,MDEyOklzc3VlQ29tbWVudDYyNDI4NDUzOQ==,41546558,RhetTbull,2020-05-05T20:20:05Z,2020-05-05T20:20:05Z,CONTRIBUTOR,"FYI, I've got an [issue](https://github.com/RhetTbull/osxphotos/issues/25) to make osxphotos cross-platform but it's low on my priority list. About 90% of the functionality could be done cross-platform but right now the MacOS specific stuff is embedded throughout and would take some work. Though I try to minimize it, there's sprinklings of ObjC & Applescript throughout osxphotos.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",612860531,Only install osxphotos if running on macOS, https://github.com/dogsheep/dogsheep-photos/issues/16#issuecomment-623845014,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/16,623845014,MDEyOklzc3VlQ29tbWVudDYyMzg0NTAxNA==,41546558,RhetTbull,2020-05-05T03:55:14Z,2020-05-05T03:56:24Z,CONTRIBUTOR,"I'm traveling w/o access to my Mac so can't help with any code right now. I suspected ZSCENEIDENTIFIER was a foreign key into one of these psi.sqlite tables. But looks like you're on to something connecting groups to assets. As for the UUID, I think there's two ints because each is 64-bits but UUIDs are 128-bits. Thus they need to be combined to get the 128 bit UUID. You might be able to use Apple's [NSUUID](https://developer.apple.com/documentation/foundation/nsuuid?language=objc), for example, by wrapping with pyObjC. Here's one [example](https://github.com/ronaldoussoren/pyobjc/blob/881c82a7ba90f193934b52b44143360c80dce5e5/pyobjc-framework-Cocoa/PyObjCTest/test_nsuuid.py) of using this in PyObjC's test suite. Interesting it's stored this way instead of a UUIDString as in Photos.sqlite. Perhaps it for faster indexing. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",612287234,"Import machine-learning detected labels (dog, llama etc) from Apple Photos", https://github.com/simonw/sqlite-utils/issues/103#issuecomment-622599528,https://api.github.com/repos/simonw/sqlite-utils/issues/103,622599528,MDEyOklzc3VlQ29tbWVudDYyMjU5OTUyOA==,32605365,b0b5h4rp13,2020-05-01T22:49:12Z,2020-05-02T11:15:44Z,CONTRIBUTOR,"With SQLITE_MAX_VARS = 999, or even 899, This hits the problem with the batch rows causing a overflow (works fine if SQLITE_MAX_VARS = 799). p.s. I have tried a few list of dicts to sqlite modules and this was the easiest to use/understand ------------- file begins ------------------ import sqlite_utils as su data = [ {'tickerId': 913324382, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'CONSTELLATION B', 'symbol': 'STZ B', 'disSymbol': 'STZ-B', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'status': 'D', 'close': '163.13', 'change': '6.46', 'changeRatio': '0.0412', 'marketValue': '31180699895.63', 'volume': '417', 'turnoverRate': '0.0000'}, {'tickerId': 913323791, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Molina Health', 'symbol': 'MOH', 'disSymbol': 'MOH', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '173.25', 'change': '9.28', 'changeRatio': '0.0566', 'pPrice': '173.25', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '10520341695.50', 'volume': '1281557', 'turnoverRate': '0.0202'}, {'tickerId': 913257501, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Seattle Genetics', 'symbol': 'SGEN', 'disSymbol': 'SGEN', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '145.64', 'change': '8.41', 'changeRatio': '0.0613', 'pPrice': '146.45', 'pChange': '0.8100', 'pChRatio': '0.0056', 'marketValue': '25117961347.60', 'volume': '2791411', 'turnoverRate': '0.0162'}, {'tickerId': 925381971, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Bandwidth', 'symbol': 'BAND', 'disSymbol': 'BAND', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '89.22', 'change': '7.66', 'changeRatio': '0.0939', 'pPrice': '89.00', 'pChange': '-0.2200', 'pChRatio': '-0.0025', 'marketValue': '2100025474.98', 'volume': '1508629', 'turnoverRate': '0.0641'}, {'tickerId': 913323935, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Magellan Health', 'symbol': 'MGLN', 'disSymbol': 'MGLN', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '68.00', 'change': '7.27', 'changeRatio': '0.1197', 'pPrice': '68.00', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '1697894040.00', 'volume': '448919', 'turnoverRate': '0.0180'}, {'tickerId': 913254854, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'On Assignment', 'symbol': 'ASGN', 'disSymbol': 'ASGN', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '53.04', 'change': '6.59', 'changeRatio': '0.1419', 'pPrice': '53.04', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '2811120000.00', 'volume': '1339771', 'turnoverRate': '0.0253'}, {'tickerId': 913255732, 'exchangeId': 95, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Arcturus', 'symbol': 'ARCT', 'disSymbol': 'ARCT', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NMS', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '40.86', 'change': '6.36', 'changeRatio': '0.1843', 'pPrice': '42.60', 'pChange': '1.740', 'pChRatio': '0.0426', 'marketValue': '812021444.46', 'volume': '1577508', 'turnoverRate': '0.0794'}, {'tickerId': 913256616, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'DexCom', 'symbol': 'DXCM', 'disSymbol': 'DXCM', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '341.52', 'change': '6.32', 'changeRatio': '0.0189', 'pPrice': '340.00', 'pChange': '-1.5200', 'pChRatio': '-0.0045', 'marketValue': '31522296000.00', 'volume': '1008849', 'turnoverRate': '0.0109'}, {'tickerId': 913255108, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Clorox', 'symbol': 'CLX', 'disSymbol': 'CLX', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '192.71', 'change': '6.27', 'changeRatio': '0.0336', 'pPrice': '192.95', 'pChange': '0.2400', 'pChRatio': '0.0012', 'marketValue': '24185773318.28', 'volume': '4996414', 'turnoverRate': '0.0398'}, {'tickerId': 925314627, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'FRANCO NEVADA', 'symbol': 'FNV', 'disSymbol': 'FNV', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '137.85', 'change': '5.64', 'changeRatio': '0.0427', 'pPrice': '138.50', 'pChange': '0.6500', 'pChRatio': '0.0047', 'marketValue': '26110405326.30', 'volume': '1047688', 'turnoverRate': '0.0055'}, {'tickerId': 913254955, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Aon Plc', 'symbol': 'AON', 'disSymbol': 'AON', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '178.21', 'change': '5.54', 'changeRatio': '0.0321', 'pPrice': '178.21', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '41181209117.22', 'volume': '2026234', 'turnoverRate': '0.0088'}, {'tickerId': 913324105, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Willis Towers', 'symbol': 'WLTW', 'disSymbol': 'WLTW', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '183.34', 'change': '5.05', 'changeRatio': '0.0283', 'pPrice': '183.34', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '23597461124.96', 'volume': '968943', 'turnoverRate': '0.0075'}, {'tickerId': 913254759, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'TELADOC HEALTH', 'symbol': 'TDOC', 'disSymbol': 'TDOC', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '169.43', 'change': '4.84', 'changeRatio': '0.0294', 'pPrice': '168.88', 'pChange': '-0.5500', 'pChRatio': '-0.0032', 'marketValue': '12614616858.38', 'volume': '2628946', 'turnoverRate': '0.0353'}, {'tickerId': 913255222, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Emergent Bio', 'symbol': 'EBS', 'disSymbol': 'EBS', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '78.70', 'change': '4.75', 'changeRatio': '0.0642', 'pPrice': '78.40', 'pChange': '-0.3000', 'pChRatio': '-0.0038', 'marketValue': '4113368277.10', 'volume': '783804', 'turnoverRate': '0.0150'}, {'tickerId': 913323443, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Pool', 'symbol': 'POOL', 'disSymbol': 'POOL', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '216.02', 'change': '4.36', 'changeRatio': '0.0206', 'pPrice': '216.02', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '8696077573.82', 'volume': '310837', 'turnoverRate': '0.0077'}, {'tickerId': 913257075, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Masimo', 'symbol': 'MASI', 'disSymbol': 'MASI', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '218.00', 'change': '4.09', 'changeRatio': '0.0191', 'pPrice': '217.00', 'pChange': '-1.0000', 'pChRatio': '-0.0046', 'marketValue': '11797070000.00', 'volume': '542131', 'turnoverRate': '0.0100'}, {'tickerId': 913253761, 'exchangeId': 10, 'type': 2, 'secType': [62], 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Pope Resources', 'symbol': 'POPE', 'disSymbol': 'POPE', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NAS', 'listStatus': 1, 'template': 'stock', 'status': 'D', 'close': '101.05', 'change': '3.95', 'changeRatio': '0.0407', 'pPrice': '99.90', 'pChange': '2.800', 'pChRatio': '0.0288', 'marketValue': '447370075.75', 'volume': '33138', 'turnoverRate': '0.0075'}, {'tickerId': 913323560, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Seneca Foods', 'symbol': 'SENEB', 'disSymbol': 'SENEB', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'status': 'D', 'close': '40.04', 'change': '3.84', 'changeRatio': '0.1061', 'marketValue': '347950039.71', 'volume': '501'}, {'tickerId': 913324274, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Resmed', 'symbol': 'RMD', 'disSymbol': 'RMD', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '159.07', 'change': '3.75', 'changeRatio': '0.0241', 'pPrice': '159.07', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '23004217759.29', 'volume': '1267075', 'turnoverRate': '0.0088'}, {'tickerId': 913323736, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Vertex Pharms', 'symbol': 'VRTX', 'disSymbol': 'VRTX', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '254.90', 'change': '3.70', 'changeRatio': '0.0147', 'pPrice': '255.00', 'pChange': '0.1000', 'pChRatio': '0.0004', 'marketValue': '66062980780.10', 'volume': '1939843', 'turnoverRate': '0.0075'}, {'tickerId': 913323767, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'MCCORMICK VTG', 'symbol': 'MKC V', 'disSymbol': 'MKC-V', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'status': 'D', 'close': '159.99', 'change': '3.42', 'changeRatio': '0.0218', 'marketValue': '21262671000.00', 'volume': '432', 'turnoverRate': '0.0000'}, {'tickerId': 950118595, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'ZOOM VIDEO', 'symbol': 'ZM', 'disSymbol': 'ZM', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '138.56', 'change': '3.39', 'changeRatio': '0.0251', 'pPrice': '138.99', 'pChange': '0.4300', 'pChRatio': '0.0031', 'marketValue': '38620532420.16', 'volume': '13786017', 'turnoverRate': '0.0495'}, {'tickerId': 916040738, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'WHEATON PRECIOUS', 'symbol': 'WPM', 'disSymbol': 'WPM', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '41.10', 'change': '3.34', 'changeRatio': '0.0885', 'pPrice': '41.09', 'pChange': '-0.0100', 'pChRatio': '-0.0002', 'marketValue': '18404536146.30', 'volume': '5019137', 'turnoverRate': '0.0112'}, {'tickerId': 913257174, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Royal Gold', 'symbol': 'RGLD', 'disSymbol': 'RGLD', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '125.86', 'change': '3.33', 'changeRatio': '0.0272', 'pPrice': '125.86', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '8253015011.08', 'volume': '853473', 'turnoverRate': '0.0130'}, {'tickerId': 913254394, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Fortune Brand', 'symbol': 'FBHS', 'disSymbol': 'FBHS', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '51.50', 'change': '3.30', 'changeRatio': '0.0685', 'pPrice': '51.50', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '7194870278.50', 'volume': '3004021', 'turnoverRate': '0.0214'}, {'tickerId': 913323312, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Liberty Global', 'symbol': 'LBTYK', 'disSymbol': 'LBTYK', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '21.49', 'change': '3.18', 'changeRatio': '0.1737', 'pPrice': '21.48', 'pChange': '-0.0100', 'pChRatio': '-0.0005', 'marketValue': '13594662302.41', 'volume': '19980228', 'turnoverRate': '0.0315'}, {'tickerId': 913323882, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Preformed Line', 'symbol': 'PLPC', 'disSymbol': 'PLPC', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'status': 'D', 'close': '52.82', 'change': '3.14', 'changeRatio': '0.0632', 'pPrice': '52.10', 'pChange': '-0.7200', 'pChRatio': '-0.0136', 'marketValue': '264979981.20', 'volume': '9305', 'turnoverRate': '0.0018'}, {'tickerId': 913323248, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Discovery', 'symbol': 'DISCB', 'disSymbol': 'DISCB', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'status': 'A', 'close': '57.95', 'change': '23.63', 'changeRatio': '0.6884', 'pPrice': '54.26', 'pChange': '-3.6900', 'pChRatio': '-0.0637', 'marketValue': '29362894177.95', 'volume': '218305', 'turnoverRate': '0.0004'}, {'tickerId': 913323930, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'MercadoLibre', 'symbol': 'MELI', 'disSymbol': 'MELI', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '605.52', 'change': '22.01', 'changeRatio': '0.0377', 'pPrice': '603.69', 'pChange': '-1.8300', 'pChRatio': '-0.0030', 'marketValue': '30226598045.28', 'volume': '699008', 'turnoverRate': '0.0140'}, {'tickerId': 913257170, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Liberty Global', 'symbol': 'LBTYA', 'disSymbol': 'LBTYA', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '22.28', 'change': '2.86', 'changeRatio': '0.1473', 'pPrice': '22.29', 'pChange': '0.0100', 'pChRatio': '0.0004', 'marketValue': '14094419548.52', 'volume': '10534672', 'turnoverRate': '0.0167'}, {'tickerId': 913303991, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Liberty Brodband', 'symbol': 'LBRDK', 'disSymbol': 'LBRDK', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '125.44', 'change': '2.76', 'changeRatio': '0.0225', 'pPrice': '125.44', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '22817900904.96', 'volume': '926177', 'turnoverRate': '0.0042'}, {'tickerId': 913257082, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Helen of Troy', 'symbol': 'HELE', 'disSymbol': 'HELE', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '167.04', 'change': '2.76', 'changeRatio': '0.0168', 'pPrice': '167.04', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '4216707982.08', 'volume': '341465', 'turnoverRate': '0.0135'}, {'tickerId': 913256458, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Forrester', 'symbol': 'FORR', 'disSymbol': 'FORR', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '33.88', 'change': '2.58', 'changeRatio': '0.0824', 'marketValue': '635419400.00', 'volume': '85115', 'turnoverRate': '0.0045'}, {'tickerId': 950158952, 'exchangeId': 95, 'type': 2, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'LYRA THERAPEUTICS, INC.', 'symbol': 'LYRA', 'disSymbol': 'LYRA', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NMS', 'listStatus': 1, 'template': 'ipo', 'status': 'A', 'close': '18.56', 'change': '2.56', 'changeRatio': '0.1600', 'pPrice': '18.96', 'pChange': '0.4000', 'pChRatio': '0.0216', 'marketValue': '229705575.68', 'volume': '1738472', 'turnoverRate': '0.1405'}, {'tickerId': 913257570, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Bio-Techne', 'symbol': 'TECH', 'disSymbol': 'TECH', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '227.54', 'change': '2.54', 'changeRatio': '0.0113', 'pPrice': '227.54', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '8726538309.18', 'volume': '497006', 'turnoverRate': '0.0130'}, {'tickerId': 913323246, 'exchangeId': 96, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Bel Fuse', 'symbol': 'BELFB', 'disSymbol': 'BELFB', 'disExchangeCode': 'NASDAQ', 'exchangeCode': 'NSQ', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '9.99', 'change': '2.53', 'changeRatio': '0.3391', 'pPrice': '9.75', 'pChange': '-0.2400', 'pChRatio': '-0.0240', 'marketValue': '122562454.86', 'volume': '177634', 'turnoverRate': '0.0145'}, {'tickerId': 916040647, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Agnico Eagle', 'symbol': 'AEM', 'disSymbol': 'AEM', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '61.20', 'change': '2.52', 'changeRatio': '0.0429', 'pPrice': '61.10', 'pChange': '-0.1000', 'pChRatio': '-0.0016', 'marketValue': '14739911553.60', 'volume': '2820765', 'turnoverRate': '0.0117'}, {'tickerId': 913303768, 'exchangeId': 12, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'CHASE CORP', 'symbol': 'CCF', 'disSymbol': 'CCF', 'disExchangeCode': 'AMEX', 'exchangeCode': 'ASE', 'listStatus': 1, 'template': 'stock', 'status': 'D', 'close': '96.71', 'change': '2.45', 'changeRatio': '0.0260', 'marketValue': '916799598.60', 'volume': '29229', 'turnoverRate': '0.0031'}, {'tickerId': 913324557, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'Allergan', 'symbol': 'AGN', 'disSymbol': 'AGN', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'A', 'close': '189.74', 'change': '2.40', 'changeRatio': '0.0128', 'pPrice': '189.76', 'pChange': '0.0200', 'pChRatio': '0.0001', 'marketValue': '62424842326.10', 'volume': '5787032', 'turnoverRate': '0.0176'}, {'tickerId': 913324566, 'exchangeId': 11, 'type': 2, 'secType': 61, 'regionId': 6, 'regionCode': 'US', 'currencyId': 247, 'name': 'West Pharm Svc', 'symbol': 'WST', 'disSymbol': 'WST', 'disExchangeCode': 'NYSE', 'exchangeCode': 'NYSE', 'listStatus': 1, 'template': 'stock', 'derivativeSupport': 1, 'status': 'D', 'close': '191.64', 'change': '2.38', 'changeRatio': '0.0126', 'pPrice': '191.64', 'pChange': '0.0000', 'pChRatio': '0.0000', 'marketValue': '14078267117.08', 'volume': '352460', 'turnoverRate': '0.0042'} ] db = su.Database(f""overnight hold.db"" ) db['active'].insert_all(data) --------------- file ends ----------------------","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",610517472,sqlite3.OperationalError: too many SQL variables in insert_all when using rows with varying numbers of columns, https://github.com/simonw/datasette/issues/456#issuecomment-661524006,https://api.github.com/repos/simonw/datasette/issues/456,661524006,MDEyOklzc3VlQ29tbWVudDY2MTUyNDAwNg==,32467826,abeyerpath,2020-07-21T01:15:07Z,2020-07-21T01:15:07Z,CONTRIBUTOR,"Bumping this, as the previous fix is passing the wrong type, and not actually addressing the issue... The `exclude` argument needs an iterable of packages instead of a single string (but since `str` is iterable, it's currently excluding packages `t`, `e`, and `s`.)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",442327592,Installing installs the tests package, https://github.com/dogsheep/swarm-to-sqlite/pull/10#issuecomment-707326192,https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/10,707326192,MDEyOklzc3VlQ29tbWVudDcwNzMyNjE5Mg==,29426418,mattiaborsoi,2020-10-12T20:20:02Z,2020-10-12T20:20:02Z,CONTRIBUTOR,This closes issue #8 ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",719637258,Update utils.py to fix sqlite3.OperationalError, https://github.com/simonw/datasette/pull/1313#issuecomment-829352402,https://api.github.com/repos/simonw/datasette/issues/1313,829352402,MDEyOklzc3VlQ29tbWVudDgyOTM1MjQwMg==,27856297,dependabot-preview[bot],2021-04-29T15:47:23Z,2021-04-29T15:47:23Z,CONTRIBUTOR,This pull request will no longer be automatically closed when a new version is found as this pull request was created by Dependabot Preview and this repo is using a `version: 2` config file. You can close this pull request and let Dependabot re-create it the next time it checks for updates.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",871046111,Bump black from 20.8b1 to 21.4b2, https://github.com/simonw/datasette/pull/1311#issuecomment-829260725,https://api.github.com/repos/simonw/datasette/issues/1311,829260725,MDEyOklzc3VlQ29tbWVudDgyOTI2MDcyNQ==,27856297,dependabot-preview[bot],2021-04-29T13:58:08Z,2021-04-29T13:58:08Z,CONTRIBUTOR,Superseded by #1313.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",870227815,Bump black from 20.8b1 to 21.4b1, https://github.com/simonw/datasette/pull/1309#issuecomment-828679943,https://api.github.com/repos/simonw/datasette/issues/1309,828679943,MDEyOklzc3VlQ29tbWVudDgyODY3OTk0Mw==,27856297,dependabot-preview[bot],2021-04-28T18:26:03Z,2021-04-28T18:26:03Z,CONTRIBUTOR,Superseded by #1311.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",869237023,Bump black from 20.8b1 to 21.4b0, https://github.com/simonw/datasette/pull/952#issuecomment-686061028,https://api.github.com/repos/simonw/datasette/issues/952,686061028,MDEyOklzc3VlQ29tbWVudDY4NjA2MTAyOA==,27856297,dependabot-preview[bot],2020-09-02T22:26:14Z,2020-09-02T22:26:14Z,CONTRIBUTOR,"Looks like black is up-to-date now, so this is no longer needed.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",687245650,"Update black requirement from ~=19.10b0 to >=19.10,<21.0", https://github.com/simonw/datasette/pull/730#issuecomment-623463200,https://api.github.com/repos/simonw/datasette/issues/730,623463200,MDEyOklzc3VlQ29tbWVudDYyMzQ2MzIwMA==,27856297,dependabot-preview[bot],2020-05-04T13:27:22Z,2020-05-04T13:27:22Z,CONTRIBUTOR,Superseded by #753.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",604001627,"Update pytest-asyncio requirement from ~=0.10.0 to >=0.10,<0.12", https://github.com/dogsheep/github-to-sqlite/issues/51#issuecomment-770150526,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/51,770150526,MDEyOklzc3VlQ29tbWVudDc3MDE1MDUyNg==,22578954,daniel-butler,2021-01-30T03:44:19Z,2021-01-30T03:47:24Z,CONTRIBUTOR,I don't have much experience with github's rate limiting. In my day job we use the [tenacity library](https://github.com/jd/tenacity) to handle http errors we get.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",703246031,github-to-sqlite should handle rate limits better, https://github.com/dogsheep/github-to-sqlite/issues/60#issuecomment-770112248,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/60,770112248,MDEyOklzc3VlQ29tbWVudDc3MDExMjI0OA==,22578954,daniel-butler,2021-01-30T00:01:03Z,2021-01-30T01:14:42Z,CONTRIBUTOR,"Yes that would be cool! I wouldn't mind helping. Is this the meat of it? https://github.com/dogsheep/twitter-to-sqlite/blob/21fc1cad6dd6348c67acff90a785b458d3a81275/twitter_to_sqlite/utils.py#L512 It looks like the cli option is added with this decorator : https://github.com/dogsheep/twitter-to-sqlite/blob/21fc1cad6dd6348c67acff90a785b458d3a81275/twitter_to_sqlite/cli.py#L14 I looked a bit at utils.py in the GitHub repository. I was surprised at the amount of manual mapping of the API response you had to do to get this to work.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",797097140,Use Data from SQLite in other commands, https://github.com/dogsheep/github-to-sqlite/issues/60#issuecomment-770069864,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/60,770069864,MDEyOklzc3VlQ29tbWVudDc3MDA2OTg2NA==,22578954,daniel-butler,2021-01-29T21:52:05Z,2021-02-12T18:29:43Z,CONTRIBUTOR,"For the purposes below I am assuming the organization I would get all the repositories and their related commits from is called `gh-organization`. The github's owner id of gh-orgnization is `123456789`. ```bash github-to-sqlite repos github.db gh-organization ``` I'm on a windows computer running git bash to be able to use the `|` command. This works for me ```bash sqlite3 github.db ""SELECT full_name FROM repos WHERE owner = '123456789';"" | tr '\n\r' ' ' | xargs | { read repos; github-to-sqlite commits github.db $repos; } ``` On a pure linux system I think this would work because the new line character is normally `\n` ```bash sqlite3 github.db ""SELECT full_name FROM repos WHERE owner = '123456789';"" | tr '\n' ' ' | xargs | { read repos; github-to-sqlite commits github.db $repos; }` ``` As expected I ran into rate limit issues #51 ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",797097140,Use Data from SQLite in other commands, https://github.com/simonw/datasette/issues/2104#issuecomment-1641082395,https://api.github.com/repos/simonw/datasette/issues/2104,1641082395,IC_kwDOBm6k_c5h0O4b,15178711,asg017,2023-07-18T22:41:37Z,2023-07-18T22:41:37Z,CONTRIBUTOR,"For filtering virtual table's ""shadow tables"" (ex the FTS5 _content and most the spatialite tables), you can use `pragma_table_list` (first appeared in SQLite 3.37 (2021-11-27), which has a `type` column that calls out `type=""shadow""` tables https://www.sqlite.org/pragma.html#pragma_table_list","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1808215339,Tables starting with an underscore should be treated as hidden, https://github.com/simonw/datasette/issues/2087#issuecomment-1616853644,https://api.github.com/repos/simonw/datasette/issues/2087,1616853644,IC_kwDOBm6k_c5gXzqM,15178711,asg017,2023-07-02T22:00:48Z,2023-07-02T22:00:48Z,CONTRIBUTOR,"I just saw in the docs that Dasette auto-detects `settings.json`: > settings.json - settings that would normally be passed using --setting - here they should be stored as a JSON object of key/value pairs > [*Source*](https://docs.datasette.io/en/stable/settings.html#:~:text=settings.json%20%2D%20settings%20that%20would%20normally%20be%20passed%20using%20%2D%2Dsetting%20%2D%20here%20they%20should%20be%20stored%20as%20a%20JSON%20object%20of%20key/value%20pairs)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1765870617,`--settings settings.json` option, https://github.com/simonw/datasette/issues/2093#issuecomment-1616286848,https://api.github.com/repos/simonw/datasette/issues/2093,1616286848,IC_kwDOBm6k_c5gVpSA,15178711,asg017,2023-07-02T02:17:46Z,2023-07-02T02:17:46Z,CONTRIBUTOR,"Storing metadata in the database won't be required. I imagine there'll be many different ways to store metadata, including any possible `datasette_metadata` or sqlite-docs, or the older metadata.json way. The next question will be how precedence should work - i'd imagine metadata.json > plugins > datasette_metadata > sqlite-docs","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1781530343,"Proposal: Combine settings, metadata, static, etc. into a single `datasette.toml` File", https://github.com/simonw/datasette/pull/2052#issuecomment-1616095810,https://api.github.com/repos/simonw/datasette/issues/2052,1616095810,IC_kwDOBm6k_c5gU6pC,15178711,asg017,2023-07-01T20:31:31Z,2023-07-01T20:31:31Z,CONTRIBUTOR,"> Just curious, is there a query that can be used to compile this programmatically, or did you identify these through memory? I just did a github search for `user:simonw ""def extra_js_urls(""` ! Though I'm sure other plugins made by people other than Simon also exist out there https://github.com/search?q=user%3Asimonw+%22def+extra_js_urls%28%22&type=code","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1651082214,"feat: Javascript Plugin API (Custom panels, column menu items with JS actions)", https://github.com/simonw/datasette/issues/2093#issuecomment-1613896210,https://api.github.com/repos/simonw/datasette/issues/2093,1613896210,IC_kwDOBm6k_c5gMhoS,15178711,asg017,2023-06-29T22:53:33Z,2023-06-29T22:53:33Z,CONTRIBUTOR,"Maybe we can have a separate issue for revamping `metadata.json`? A `datasette_metadata` table or the `sqlite-docs` extension seem like two reasonable additions that we can work through. Storing metadata inside a SQLite database makes sense, but I don't think storing `datasette.*` style config (ex ports, settings, etc.) inside a SQLite DB makes sense, since it's very environment-dependent","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1781530343,"Proposal: Combine settings, metadata, static, etc. into a single `datasette.toml` File", https://github.com/simonw/datasette/issues/2093#issuecomment-1613895188,https://api.github.com/repos/simonw/datasette/issues/2093,1613895188,IC_kwDOBm6k_c5gMhYU,15178711,asg017,2023-06-29T22:51:53Z,2023-06-29T22:51:53Z,CONTRIBUTOR,"I agree with not liking `metadata.json` stuff in a `datasette.*` config file. Editing description of a table/column in a file like `datasette.*` seems odd to me. Though since plugin configuration currently lives in `metadata.json`, I think it should be removed from there and placed in `datasette.*`, at least for top-level config like `datasette-auth-github`'s config. Keeping `metadata.json` strictly for documentation/licensing/column units makes sense to me, but anything plugin related should be in some config file, like `datasette.*`. And ya, supporting both `datasette.*` and CLI flags makes a lot of sense to me. Any `--setting` flag should override anything in `datasette.*` for easier debugging, with possibly a warning message so people don't get confused. Same with `--port` and a port defined in `datasette.*`","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1781530343,"Proposal: Combine settings, metadata, static, etc. into a single `datasette.toml` File", https://github.com/simonw/datasette/pull/2052#issuecomment-1613778296,https://api.github.com/repos/simonw/datasette/issues/2052,1613778296,IC_kwDOBm6k_c5gME14,15178711,asg017,2023-06-29T20:36:09Z,2023-06-29T20:36:09Z,CONTRIBUTOR,"Ok @hydrosquall a couple things before this PR should be good to go: - Can we move `datasette/static/table-example-plugins.js` into `demos/plugins/static`? - For `datasetteManager.VERSION`, can we fill that in or just comment it out for now? Not sure how difficult it'll be to inject it server-side. I imagine we could also have a small build process with esbuild/rollup that just injects a version string into `manager.js` directly, so we don't have to worry about server-rendering (but that can be a future PR) In terms of how to integrate this into Datasette, a few options I can see working: - Push this as-is and figure it out before the next release - Hide this feature behind a settings flag (`--setting unstable-js-plugins on`) and use that setting to hide/show `` in `base.html` I'll let @simonw decide which one to work with. I kindof like the idea of having an ""unstable"" opt-in process to enable JS plugins, to give us time to try it out with a wide variety of plugins until we feel its ready. I'm also curious to see how ""plugins for a plugin' would work, like #1542. For example, if the leaflet plugin showed default markers, but also included its own hook for other plugins to add more markers/styling. I'm imagine that the individual plugin would re-create their own plugin system compared to this, since handling ""plugins of plugins"" at the top with Datasette seems really convoluted. Also for posterity, here's a list of Simon's Datasette plugins that use ""extra_js_urls()"", which probably means they can be ported/re-written to use this new plugin system: - [`datasette-vega`](https://github.com/simonw/datasette-vega/blob/00de059ab1ef77394ba9f9547abfacf966c479c4/datasette_vega/__init__.py#L25) - [`datasette-cluster-map`](https://github.com/simonw/datasette-cluster-map/blob/795d25ad9ff6cba0307191f44fecc8f8070bef5c/datasette_cluster_map/__init__.py#L14) - [`datasette-leaflet-geojson`](https://github.com/simonw/datasette-leaflet-geojson/blob/64713aa497750400b9ac2c12e8bb6ffab8eb77f3/datasette_leaflet_geojson/__init__.py#L47) - [`datasette-pretty-traces`](https://github.com/simonw/datasette-pretty-traces/blob/5219d65eca3d7d7a73bb9d3120df42fe046a1315/datasette_pretty_traces/__init__.py#L5) - [`datasette-youtube-embed`](https://github.com/simonw/datasette-youtube-embed/blob/4b4a0d7e58ebe15f47e9baf68beb9908c1d899da/datasette_youtube_embed/__init__.py#L55) - [`datasette-leaflet-freedraw`](https://github.com/simonw/datasette-leaflet-freedraw/blob/8f28c2c2080ec9d29f18386cc6a2573a1c8fbde7/datasette_leaflet_freedraw/__init__.py#L66) - [`datasette-hovercards`](https://github.com/simonw/datasette-hovercards/blob/9439ba46b7140fb03223faff0d21aeba5615a287/datasette_hovercards/__init__.py#L5) - [`datasette-mp3-audio`](https://github.com/simonw/datasette-mp3-audio/blob/4402168792f452a46ab7b488e40ec49cd4b12185/datasette_mp3_audio/__init__.py#L6) - [`datasette-geojson-map`](https://github.com/simonw/datasette-geojson-map/blob/32af5f1fd1a07278bbf8071fbb20a61e0f613246/datasette_geojson_map/__init__.py#L30)","{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 1}",1651082214,"feat: Javascript Plugin API (Custom panels, column menu items with JS actions)", https://github.com/simonw/datasette/pull/2052#issuecomment-1606352600,https://api.github.com/repos/simonw/datasette/issues/2052,1606352600,IC_kwDOBm6k_c5fvv7Y,15178711,asg017,2023-06-26T00:17:04Z,2023-06-26T00:17:04Z,CONTRIBUTOR,":wave: would love to see this get merged soon! I want to make a javascript plugin on top of the code-mirror editor to make a few things nicer (function auto-complete, table/column descriptions, etc.), and this would help out a bunch","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1651082214,"feat: Javascript Plugin API (Custom panels, column menu items with JS actions)", https://github.com/simonw/datasette/issues/1884#issuecomment-1321460293,https://api.github.com/repos/simonw/datasette/issues/1884,1321460293,IC_kwDOBm6k_c5Ow-JF,15178711,asg017,2022-11-21T04:40:55Z,2022-11-21T04:40:55Z,CONTRIBUTOR,"Counting any virtual tables can be pretty tricky. On one hand, counting a [CSV virtual table](https://www.sqlite.org/csv.html) would return the number of rows in the CSV, which is helpful (but can be I/O intensive). Counting a [FTS5 virtual table](https://www.sqlite.org/fts5.html) would return the number of entries in the FTS index, which is kindof helpful, but can be misleading in some cases. On the other hand, arbitrarily running `COUNT(*)` on some virtual tables can be incredibly expensive. SQLite offers new shortcuts/pushdowns on `COUNT(*)` queries for virtual tables, and instead calls the underlying vtab implementation and iterates through all rows in the table without discretion. For example, a virtual table that's backed by a Postgres table would call `select * from pg_table`, which would use up a lot of network and CPU calls. Or a virtual table backed by a [google sheet](https://github.com/0x6b/libgsqlite) would make network/API requests to get all the rows from the sheet just to make a count. The [`pragma_table_list`](https://www.sqlite.org/pragma.html#pragma_table_list) pragma tells you when a table is a regular table or virtual (in the `type` column), but was only added in version 3.37.0 (2021-11-27). Personally, I wouldnt try to `COUNT(*)` virtual tables - it depends on how the virtual table is implemented, it requires that the connection has the proper extensions loaded, and it may accientally cause perf issues for new-age extensions. A few extensions that I'm writing have virtual tables that wouldn't benefit much from `COUNT(*)`, and the fact that SQLite iterates through all rows in a table to count just makes things worse. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1439009231,Exclude virtual tables from datasette inspect, https://github.com/simonw/datasette/issues/1851#issuecomment-1292519956,https://api.github.com/repos/simonw/datasette/issues/1851,1292519956,IC_kwDOBm6k_c5NCkoU,15178711,asg017,2022-10-26T19:20:33Z,2022-10-26T19:20:33Z,CONTRIBUTOR,"> This could use a new plugin hook, too. I don't want to complicate your life too much, but for things like GIS, I'd want a way to turn regular JSON into SpatiaLite geometries or combine X/Y coordinates into point geometries and such. Happy to help however I can. @eyeseast Maybe you could do this with triggers? Like you can insert JSON-friendly data into a ""raw"" table, and create a trigger that transforms that inserted data into the proper table Here's an example: ```sql -- meant to be updated from a Datasette insert create table points_raw(longitude int, latitude int); -- the target table with proper spatliate geometries create table points(point geometry); CREATE TRIGGER insert_points_raw INSERT ON points_raw BEGIN insert into points(point) values (makepoint(new.longitude, new.latitude)) END; ``` You could then POST a new row to `points_raw` like this: ``` POST /db/points_raw Authorization: Bearer xxx Content-Type: application/json { ""row"": { ""longitude"": 27.64356, ""latitude"": -47.29384 } } ``` Then SQLite with run the trigger and insert a new row in `points` with the correct geometry point. Downside is you'd have duplicated data with `points_raw`, but maybe it could be a `TEMP` table (or have a cron that deletes all rows from that table every so often?)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1421544654,API to insert a single record into an existing table, https://github.com/simonw/datasette/pull/1789#issuecomment-1223347322,https://api.github.com/repos/simonw/datasette/issues/1789,1223347322,IC_kwDOBm6k_c5I6sx6,15178711,asg017,2022-08-23T00:03:20Z,2022-08-23T00:03:20Z,CONTRIBUTOR,"@simonw to build the extension on ubuntu, you can run: ``` apt-get update && apt-get install libsqlite3-dev gcc gcc ext.c -fPIC -shared -o ext.so ``` I'm not the best with Actions, but if you set the cache key to `ext.c`, run those two commands to download dependencies + compile to `ext.so`, then the unit test should pick it up and run it correctly. Let me know if you want me to update the PR with that added","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1344823170,Add new entrypoint option to `--load-extension`, https://github.com/simonw/datasette/pull/1789#issuecomment-1221576460,https://api.github.com/repos/simonw/datasette/issues/1789,1221576460,IC_kwDOBm6k_c5Iz8cM,15178711,asg017,2022-08-21T16:16:42Z,2022-08-21T16:16:42Z,CONTRIBUTOR,"Rebased, Read the docs failure should now now fixed Re docs - ya that's a pretty ambitious page, I'm still not 100% sure what the best practices are/should be... Would be happy to make that page in a future PR","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1344823170,Add new entrypoint option to `--load-extension`, https://github.com/simonw/datasette/issues/1528#issuecomment-975955589,https://api.github.com/repos/simonw/datasette/issues/1528,975955589,IC_kwDOBm6k_c46K-aF,15178711,asg017,2021-11-22T22:00:30Z,2021-11-22T22:00:30Z,CONTRIBUTOR,"Oh, another thing to consider: I believe this would be the first `""_file""` key in datasette's metadata, compared to other `""_url""` keys like `""license_url""` or `""about_url""`. Not too sure what considerations to include with this (ex should missing files cause Datasette to stop before starting, should build scripts bundle these sql files somewhere during `datasette package`, etc.)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1060631257,"Add new `""sql_file""` key to Canned Queries in metadata?", https://github.com/simonw/datasette/pull/666#issuecomment-590022164,https://api.github.com/repos/simonw/datasette/issues/666,590022164,MDEyOklzc3VlQ29tbWVudDU5MDAyMjE2NA==,13896256,kevindkeogh,2020-02-23T03:26:00Z,2020-02-23T03:26:00Z,CONTRIBUTOR,"It was very helpful for me, using it for a 15M row table. Added a test, happy to amend though!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",562085508,"Use inspect-file, if possible, for total row count", https://github.com/simonw/datasette/issues/394#issuecomment-499923145,https://api.github.com/repos/simonw/datasette/issues/394,499923145,MDEyOklzc3VlQ29tbWVudDQ5OTkyMzE0NQ==,13896256,kevindkeogh,2019-06-07T15:10:57Z,2019-06-07T15:11:07Z,CONTRIBUTOR,"Putting this here in case anyone else encounters the same issue with nginx, I was able to resolve it by passing the header in the nginx proxy config (i.e., `proxy_set_header Host $host`).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",396212021,base_url configuration setting, https://github.com/simonw/datasette/issues/394#issuecomment-499320973,https://api.github.com/repos/simonw/datasette/issues/394,499320973,MDEyOklzc3VlQ29tbWVudDQ5OTMyMDk3Mw==,13896256,kevindkeogh,2019-06-06T02:07:59Z,2019-06-06T02:07:59Z,CONTRIBUTOR,"Hey was this ever merged? Trying to run this behind nginx, and encountering this issue.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",396212021,base_url configuration setting, https://github.com/simonw/datasette/pull/1348#issuecomment-850077261,https://api.github.com/repos/simonw/datasette/issues/1348,850077261,MDEyOklzc3VlQ29tbWVudDg1MDA3NzI2MQ==,10801138,blairdrummond,2021-05-28T03:05:38Z,2021-05-28T03:05:38Z,CONTRIBUTOR,"Note, the CVEs are probably resolvable with this https://github.com/simonw/datasette/pull/1296 . My experience is that Ubuntu seems to manage these better? Though that is surprising :/ ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",904598267,DRAFT: add test and scan for docker images, https://github.com/simonw/datasette/issues/1280#issuecomment-837166862,https://api.github.com/repos/simonw/datasette/issues/1280,837166862,MDEyOklzc3VlQ29tbWVudDgzNzE2Njg2Mg==,10801138,blairdrummond,2021-05-10T19:07:46Z,2021-05-10T19:07:46Z,CONTRIBUTOR,"Do you have a list of sqlite versions you want to test against? One cool thing I saw recently (that we started using) was using `import docker` within python, and then writing pytest functions which executed against the container [setup](https://github.com/StatCan/kubeflow-containers/blob/3c7dcfb5e7188982fb8ebcded82e84292720f720/conftest.py#L85) [example](https://github.com/StatCan/kubeflow-containers/blob/master/tests/jupyterlab-cpu/test_julia.py#L8-L18) The inspiration for this came from the [jupyter docker-stacks](https://github.com/jupyter/docker-stacks/blob/09fb66007615ea68d9bce8f8e1a2cf9402f1e432/test/test_packages.py#L107) So off the top of my head, could look at building the container with different sqlite versions as a build-arg, then run tests against the containers. Just brainstorming though","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",842862708,Ability to run CI against multiple SQLite versions, https://github.com/simonw/datasette/pull/1296#issuecomment-835491318,https://api.github.com/repos/simonw/datasette/issues/1296,835491318,MDEyOklzc3VlQ29tbWVudDgzNTQ5MTMxOA==,10801138,blairdrummond,2021-05-08T19:59:01Z,2021-05-08T19:59:01Z,CONTRIBUTOR,"I have also found that ubuntu has fewer vulnerabilities than the buster based images. ``` ➜ ~ docker pull python:3-buster ➜ ~ trivy image python:3-buster | head 2021-04-28T17:14:29.313-0400 INFO Detecting Debian vulnerabilities... 2021-04-28T17:14:29.393-0400 INFO Trivy skips scanning programming language libraries because no supported file was detected python:3-buster (debian 10.9) ============================= Total: 1621 (UNKNOWN: 13, LOW: 1106, MEDIUM: 343, HIGH: 145, CRITICAL: 14) +------------------------------+---------------------+----------+------------------------------+---------------+--------------------------------------------------------------+ | LIBRARY | VULNERABILITY ID | SEVERITY | INSTALLED VERSION | FIXED VERSION | TITLE | +------------------------------+---------------------+----------+------------------------------+---------------+--------------------------------------------------------------+ ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",855446829,Dockerfile: use Ubuntu 20.10 as base, https://github.com/simonw/datasette/pull/434#issuecomment-489163939,https://api.github.com/repos/simonw/datasette/issues/434,489163939,MDEyOklzc3VlQ29tbWVudDQ4OTE2MzkzOQ==,10352819,rprimet,2019-05-03T16:49:45Z,2019-05-03T16:50:03Z,CONTRIBUTOR,"> The second time I ran the command I got an error: > > ERROR: (gcloud.beta.run.deploy) Deployment endpoint was not found. Perhaps the > provided region was invalid. Set the `run/region` property to a valid region and > retry. Ex: `gcloud config set run/region us-central1` > Yes, I was able to reproduce this; I used to get prompted for a run region interactively by the `gcloud` tool before, but maybe this is changing? (the [documentation](https://cloud.google.com/run/docs/deploying) now assumes `run/region` is set). Not sure which course of action is best: making `datasette` ensure that `run/region` is set beforehand or wait a bit until the gcloud CLI stabilizes?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",434321685,"""datasette publish cloudrun"" command to publish to Google Cloud Run", https://github.com/simonw/sqlite-utils/issues/529#issuecomment-1592110694,https://api.github.com/repos/simonw/sqlite-utils/issues/529,1592110694,IC_kwDOCGYnMM5e5a5m,7908073,chapmanjacobd,2023-06-14T23:11:47Z,2023-06-14T23:12:12Z,CONTRIBUTOR,"sorry i was wrong. `sqlite-utils --raw-lines` works correctly ``` sqlite-utils --raw-lines :memory: ""SELECT * FROM (VALUES ('test'), ('line2'))"" | cat -A test$ line2$ sqlite-utils --csv --no-headers :memory: ""SELECT * FROM (VALUES ('test'), ('line2'))"" | cat -A test$ line2$ ``` I think this was fixed somewhat recently","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1581090327,Microsoft line endings, https://github.com/simonw/sqlite-utils/issues/535#issuecomment-1592052320,https://api.github.com/repos/simonw/sqlite-utils/issues/535,1592052320,IC_kwDOCGYnMM5e5Mpg,7908073,chapmanjacobd,2023-06-14T22:05:28Z,2023-06-14T22:05:28Z,CONTRIBUTOR,piping to `jq` is good enough usually,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1655860104,rows: --transpose or psql extended view-like functionality, https://github.com/simonw/sqlite-utils/issues/555#issuecomment-1592047502,https://api.github.com/repos/simonw/sqlite-utils/issues/555,1592047502,IC_kwDOCGYnMM5e5LeO,7908073,chapmanjacobd,2023-06-14T22:00:10Z,2023-06-14T22:01:57Z,CONTRIBUTOR,"You may want to try doing a performance comparison between this and just selecting all the ids with few constraints and then doing the filtering within python. That might seem like a lazy-programmer, inefficient way but queries with large resultsets are a different profile than what databases like SQLITE are designed for. That is not to say that SQLITE is slow or that python is always faster but when you start reading >20% of an index there is an equilibrium that is reached. Especially when adding in writing extra temp tables and stuff to memory/disk. And especially given the `NOT IN` style of query... You may also try chunking like this: ```py def chunks(lst, n) -> Generator: for i in range(0, len(lst), n): yield lst[i : i + n] SQLITE_PARAM_LIMIT = 32765 data = [] chunked = chunks(video_ids, consts.SQLITE_PARAM_LIMIT) for ids in chunked: data.expand( list( db.query( f""""""SELECT * from videos WHERE id in ("""""" + "","".join([""?""] * len(ids)) + "")"", (*ids,), ) ) ) ``` but that actually won't work with your `NOT IN` requirements. You need to query the full resultset to check any row. Since you are doing stuff with files/videos in SQLITE you might be interested in my side project: https://github.com/chapmanjacobd/library","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1733198948,Filter table by a large bunch of ids, https://github.com/simonw/sqlite-utils/issues/557#issuecomment-1590531892,https://api.github.com/repos/simonw/sqlite-utils/issues/557,1590531892,IC_kwDOCGYnMM5ezZc0,7908073,chapmanjacobd,2023-06-14T06:09:21Z,2023-06-14T06:09:21Z,CONTRIBUTOR,"I put together a [simple script](https://github.com/chapmanjacobd/library/blob/42129c5ebe15f9d74653c0f5ca4ed0c991d383e0/xklb/scripts/dedupe_db.py) to upsert and remove duplicate rows based on business keys. If anyone has similar problems with above this might help ``` CREATE TABLE my_table ( id INTEGER PRIMARY KEY, column1 TEXT, column2 TEXT, column3 TEXT ); INSERT INTO my_table (column1, column2, column3) VALUES ('Value 1', 'Duplicate 1', 'Duplicate A'), ('Value 2', 'Duplicate 2', 'Duplicate B'), ('Value 3', 'Duplicate 2', 'Duplicate C'), ('Value 4', 'Duplicate 3', 'Duplicate D'), ('Value 5', 'Duplicate 3', 'Duplicate E'), ('Value 6', 'Duplicate 3', 'Duplicate F'); ``` ``` library dedupe-db test.db my_table --bk column2 ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1740150327,Aliased ROWID option for tables created from alter=True commands, https://github.com/simonw/sqlite-utils/issues/557#issuecomment-1577355134,https://api.github.com/repos/simonw/sqlite-utils/issues/557,1577355134,IC_kwDOCGYnMM5eBId-,7908073,chapmanjacobd,2023-06-05T19:26:26Z,2023-06-05T19:26:26Z,CONTRIBUTOR,"this isn't really actionable... I'm just being a whiny baby. I have tasted the milk of being able to use `upsert_all`, `insert_all`, etc without having to write DDL to create tables. The meat of the issue is that SQLITE doesn't make rowid stable between vacuums so it is not possible to take shortcuts","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1740150327,Aliased ROWID option for tables created from alter=True commands, https://github.com/simonw/sqlite-utils/issues/510#issuecomment-1318777114,https://api.github.com/repos/simonw/sqlite-utils/issues/510,1318777114,IC_kwDOCGYnMM5OmvEa,7908073,chapmanjacobd,2022-11-17T15:09:47Z,2022-11-17T15:09:47Z,CONTRIBUTOR,"why close? is the only problem that the _config table that incorrectly says 4 for fts5? if so, that's still something that should be fixed","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1434911255,Cannot enable FTS5 despite it being available, https://github.com/simonw/sqlite-utils/issues/511#issuecomment-1304320521,https://api.github.com/repos/simonw/sqlite-utils/issues/511,1304320521,IC_kwDOCGYnMM5NvloJ,7908073,chapmanjacobd,2022-11-04T22:54:09Z,2022-11-04T22:59:54Z,CONTRIBUTOR,I ran `PRAGMA integrity_check` and it returned `ok`. but then I tried restoring from a backup and I didn't get this `IntegrityError: constraint failed` error. So I think it was just something wrong with my database. If it happens again I will first try to reindex and see if that fixes the issue,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1436539554,"[insert_all, upsert_all] IntegrityError: constraint failed", https://github.com/simonw/sqlite-utils/issues/511#issuecomment-1304078945,https://api.github.com/repos/simonw/sqlite-utils/issues/511,1304078945,IC_kwDOCGYnMM5Nuqph,7908073,chapmanjacobd,2022-11-04T19:38:36Z,2022-11-04T20:13:17Z,CONTRIBUTOR,"Even more bizarre, the source db only has one record and the target table has no conflicting record: ``` 875 0.3s lb:/ (main|✚2) [0|0]🌺 sqlite-utils tube_71.db 'select * from media where path = ""https://archive.org/details/088ghostofachanceroygetssackedrevengeofthelivinglunchdvdripxvidphz""' | jq [ { ""size"": null, ""time_created"": null, ""play_count"": 1, ""language"": null, ""view_count"": null, ""width"": null, ""height"": null, ""fps"": null, ""average_rating"": null, ""live_status"": null, ""age_limit"": null, ""uploader"": null, ""time_played"": 0, ""path"": ""https://archive.org/details/088ghostofachanceroygetssackedrevengeofthelivinglunchdvdripxvidphz"", ""id"": ""088ghostofachanceroygetssackedrevengeofthelivinglunchdvdripxvidphz/074 - Home Away from Home, Rainy Day Robot, Odie the Amazing DVDRip XviD [PhZ].mkv"", ""ie_key"": ""ArchiveOrg"", ""playlist_path"": ""https://archive.org/details/088ghostofachanceroygetssackedrevengeofthelivinglunchdvdripxvidphz"", ""duration"": 1424.05, ""tags"": null, ""title"": ""074 - Home Away from Home, Rainy Day Robot, Odie the Amazing DVDRip XviD [PhZ].mkv"" } ] 875 0.3s lb:/ (main|✚2) [0|0]🥧 sqlite-utils video.db 'select * from media where path = ""https://archive.org/details/088ghostofachanceroygetssackedrevengeofthelivinglunchdvdripxvidphz""' | jq [] ``` I've been able to use this code successfully several times before so not sure what's causing the issue. I guess the way that I'm handling multiple databases is an issue, though it hasn't ever inserted into the source db, not sure what's different. The only reasonable explanation is that it is trying to insert into the source db from the source db for some reason? Or maybe sqlite3 is checking the source db for primary key violation because the table name is the same","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1436539554,"[insert_all, upsert_all] IntegrityError: constraint failed", https://github.com/simonw/sqlite-utils/issues/50#issuecomment-1303660293,https://api.github.com/repos/simonw/sqlite-utils/issues/50,1303660293,IC_kwDOCGYnMM5NtEcF,7908073,chapmanjacobd,2022-11-04T14:38:36Z,2022-11-04T14:38:36Z,CONTRIBUTOR,where did you see the limit as 999? I believe the limit has been 32766 for quite some time. If you could detect which one this could speed up batch insert of some types of data significantly,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",473083260,"""Too many SQL variables"" on large inserts", https://github.com/simonw/sqlite-utils/issues/507#issuecomment-1297859539,https://api.github.com/repos/simonw/sqlite-utils/issues/507,1297859539,IC_kwDOCGYnMM5NW8PT,7908073,chapmanjacobd,2022-11-01T00:40:16Z,2022-11-01T00:40:16Z,CONTRIBUTOR,"Ideally people could fix their data if they run into this issue. If you are using filenames try [convmv](https://linux.die.net/man/1/convmv) ``` convmv --preserve-mtimes -f utf8 -t utf8 --notest -i -r . ``` maybe this script will also help: ```py import argparse, shutil from pathlib import Path import ftfy from xklb import utils from xklb.utils import log def parse_args() -> argparse.Namespace: parser = argparse.ArgumentParser() parser.add_argument(""paths"", nargs='*') parser.add_argument(""--verbose"", ""-v"", action=""count"", default=0) args = parser.parse_args() log.info(utils.dict_filter_bool(args.__dict__)) return args def rename_invalid_paths() -> None: args = parse_args() for path in args.paths: log.info(path) for p in sorted([str(p) for p in Path(path).rglob(""*"")], key=len): fixed = ftfy.fix_text(p, uncurl_quotes=False).replace(""\r\n"", ""\n"").replace(""\r"", ""\n"").replace(""\n"", """") if p != fixed: try: shutil.move(p, fixed) except FileNotFoundError: log.warning(""FileNotFound. %s"", p) else: log.info(fixed) if __name__ == ""__main__"": rename_invalid_paths() ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1430325103,conn.execute: UnicodeEncodeError: 'utf-8' codec can't encode character, https://github.com/simonw/sqlite-utils/pull/508#issuecomment-1297788531,https://api.github.com/repos/simonw/sqlite-utils/issues/508,1297788531,IC_kwDOCGYnMM5NWq5z,7908073,chapmanjacobd,2022-10-31T22:54:33Z,2022-11-17T15:11:16Z,CONTRIBUTOR,"Maybe this is actually a problem in the python sqlite bindings. Given [SQLITE's stance on this](https://www.sqlite.org/invalidutf.html) they should probably use `encode('utf-8', 'surrogatepass')`. As far as I understand the error here won't actually be resolved by this PR as-is. We would need to modify the data with `surrogateescape`... :/ or modify the sqlite3 module to use `surrogatepass`","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1430563092,Allow surrogates in parameters, https://github.com/simonw/sqlite-utils/pull/499#issuecomment-1292401308,https://api.github.com/repos/simonw/sqlite-utils/issues/499,1292401308,IC_kwDOCGYnMM5NCHqc,7908073,chapmanjacobd,2022-10-26T17:54:26Z,2022-10-26T17:54:51Z,CONTRIBUTOR,"The problem with how it is currently is that the transformed fts table _will_ return incorrect results (unless the table was only 1 row or something), even if create_triggers was enabled previously. Maybe the simplest solution is to disable fts on a transformed table rather than try to recreate it? Thoughts?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1405196044,feat: recreate fts triggers after table transform, https://github.com/simonw/sqlite-utils/pull/498#issuecomment-1274153135,https://api.github.com/repos/simonw/sqlite-utils/issues/498,1274153135,IC_kwDOCGYnMM5L8giv,7908073,chapmanjacobd,2022-10-11T06:34:31Z,2022-10-11T06:34:31Z,CONTRIBUTOR,nevermind it was because I was running `db[table].transform`. The fts tables would still be there but the triggers would be dropped,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1404013495,fix: enable-fts permanently save triggers, https://github.com/simonw/sqlite-utils/issues/409#issuecomment-1264223554,https://api.github.com/repos/simonw/sqlite-utils/issues/409,1264223554,IC_kwDOCGYnMM5LWoVC,7908073,chapmanjacobd,2022-10-01T03:42:50Z,2022-10-01T03:42:50Z,CONTRIBUTOR,oh weird. it inserts into db2,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1149661489,`with db:` for transactions, https://github.com/simonw/sqlite-utils/issues/409#issuecomment-1264223363,https://api.github.com/repos/simonw/sqlite-utils/issues/409,1264223363,IC_kwDOCGYnMM5LWoSD,7908073,chapmanjacobd,2022-10-01T03:41:45Z,2022-10-01T03:41:45Z,CONTRIBUTOR,"``` pytest xklb/check.py --pdb xklb/check.py:11: in test_transaction assert list(db2[""t""].rows) == [] E AssertionError: assert [{'foo': 1}] == [] E + where [{'foo': 1}] = list() E + where = .rows >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> PDB post_mortem (IO-capturing turned off) >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> > /home/xk/github/xk/lb/xklb/check.py(11)test_transaction() 9 with db1.conn: 10 db1[""t""].insert({""foo"": 1}) ---> 11 assert list(db2[""t""].rows) == [] 12 assert list(db2[""t""].rows) == [{""foo"": 1}] ``` It fails because it is already inserted. btw if you put these two lines in you pyproject.toml you can get `ipdb` in pytest ``` [tool.pytest.ini_options] addopts = ""--pdbcls=IPython.terminal.debugger:TerminalPdb --ignore=tests/data --capture=tee-sys --log-cli-level=ERROR"" ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1149661489,`with db:` for transactions, https://github.com/simonw/sqlite-utils/issues/493#issuecomment-1264219650,https://api.github.com/repos/simonw/sqlite-utils/issues/493,1264219650,IC_kwDOCGYnMM5LWnYC,7908073,chapmanjacobd,2022-10-01T03:22:50Z,2022-10-01T03:23:58Z,CONTRIBUTOR,"this is likely what you are looking for: https://stackoverflow.com/a/51076749/697964 but yeah I would say just disable smart quotes","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1386562662,Tiny typographical error in install/uninstall docs, https://github.com/simonw/sqlite-utils/issues/491#issuecomment-1264218914,https://api.github.com/repos/simonw/sqlite-utils/issues/491,1264218914,IC_kwDOCGYnMM5LWnMi,7908073,chapmanjacobd,2022-10-01T03:18:36Z,2023-06-14T22:14:24Z,CONTRIBUTOR,"> some good concrete use-cases in mind I actually found myself wanting something like this the past couple days. The use-case was databases with slightly different schema but same table names. here is a full script: ``` import argparse from pathlib import Path from sqlite_utils import Database def connect(args, conn=None, **kwargs) -> Database: db = Database(conn or args.database, **kwargs) with db.conn: db.conn.execute(""PRAGMA main.cache_size = 8000"") return db def parse_args() -> argparse.Namespace: parser = argparse.ArgumentParser() parser.add_argument(""database"") parser.add_argument(""dbs_folder"") parser.add_argument(""--db"", ""-db"", help=argparse.SUPPRESS) parser.add_argument(""--verbose"", ""-v"", action=""count"", default=0) args = parser.parse_args() if args.db: args.database = args.db Path(args.database).touch() args.db = connect(args) return args def merge_db(args, source_db): source_db = str(Path(source_db).resolve()) s_db = connect(argparse.Namespace(database=source_db, verbose = args.verbose)) for table in s_db.table_names(): data = s_db[table].rows args.db[table].insert_all(data, alter=True, replace=True) args.db.conn.commit() def merge_directory(): args = parse_args() source_dbs = list(Path(args.dbs_folder).glob('*.db')) for s_db in source_dbs: merge_db(args, s_db) if __name__ == '__main__': merge_directory() ``` edit: I've made some improvements to this and put it on PyPI: ``` $ pip install xklb $ lb merge-db -h usage: library merge-dbs DEST_DB SOURCE_DB ... [--only-target-columns] [--only-new-rows] [--upsert] [--pk PK ...] [--table TABLE ...] Merge-DBs will insert new rows from source dbs to target db, table by table. If primary key(s) are provided, and there is an existing row with the same PK, the default action is to delete the existing row and insert the new row replacing all existing fields. Upsert mode will update matching PK rows such that if a source row has a NULL field and the destination row has a value then the value will be preserved instead of changed to the source row's NULL value. Ignore mode (--only-new-rows) will insert only rows which don't already exist in the destination db Test first by using temp databases as the destination db. Try out different modes / flags until you are satisfied with the behavior of the program library merge-dbs --pk path (mktemp --suffix .db) tv.db movies.db Merge database data and tables library merge-dbs --upsert --pk path video.db tv.db movies.db library merge-dbs --only-target-columns --only-new-rows --table media,playlists --pk path audio-fts.db audio.db library merge-dbs --pk id --only-tables subreddits reddit/81_New_Music.db audio.db library merge-dbs --only-new-rows --pk subreddit,path --only-tables reddit_posts reddit/81_New_Music.db audio.db -v positional arguments: database source_dbs ``` Also if you want to dedupe a table based on a ""business key"" which isn't explicitly your primary key(s) you can run this: ``` $ lb dedupe-db -h usage: library dedupe-dbs DATABASE TABLE --bk BUSINESS_KEYS [--pk PRIMARY_KEYS] [--only-columns COLUMNS] Dedupe your database (not to be confused with the dedupe subcommand) It should not need to be said but *backup* your database before trying this tool! Dedupe-DB will help remove duplicate rows based on non-primary-key business keys library dedupe-db ./video.db media --bk path If --primary-keys is not provided table metadata primary keys will be used If --only-columns is not provided all non-primary and non-business key columns will be upserted positional arguments: database table options: -h, --help show this help message and exit --skip-0 --only-columns ONLY_COLUMNS Comma separated column names to upsert --primary-keys PRIMARY_KEYS, --pk PRIMARY_KEYS Comma separated primary keys --business-keys BUSINESS_KEYS, --bk BUSINESS_KEYS Comma separated business keys ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1383646615,Ability to merge databases and tables, https://github.com/simonw/sqlite-utils/issues/491#issuecomment-1256858763,https://api.github.com/repos/simonw/sqlite-utils/issues/491,1256858763,IC_kwDOCGYnMM5K6iSL,7908073,chapmanjacobd,2022-09-24T04:50:59Z,2022-09-24T04:52:08Z,CONTRIBUTOR,"Instead of outputting binary data to stdout the interface might be better like this ``` sqlite-utils merge animals.db cats.db dogs.db ``` similar to `zip`, `ogr2ogr`, etc Actually I think this might already be possible within `ogr2ogr`. I don't believe spatial data is a requirement though it might add an `ogc_id` column or something ``` cp cats.db animals.db ogr2ogr -append animals.db dogs.db ogr2ogr -append animals.db another.db ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1383646615,Ability to merge databases and tables, https://github.com/simonw/sqlite-utils/issues/433#issuecomment-1252898131,https://api.github.com/repos/simonw/sqlite-utils/issues/433,1252898131,IC_kwDOCGYnMM5KrbVT,7908073,chapmanjacobd,2022-09-20T20:51:21Z,2022-09-20T20:56:07Z,CONTRIBUTOR,"When I run `reset` it fixes my terminal. I suspect it is related to the progress bar https://linux.die.net/man/1/reset ``` 950 1s /m/d/03_Downloads 🐑 echo $TERM xterm-kitty ▓░▒░ /m/d/03_Downloads 🌏 kitty -v kitty 0.26.2 created by Kovid Goyal $ sqlite-utils insert test.db facility facility-boundary-us-all.csv --csv blah blah blah (no offense) $ $ reset $ ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1239034903,CLI eats my cursor, https://github.com/simonw/sqlite-utils/pull/480#issuecomment-1232356302,https://api.github.com/repos/simonw/sqlite-utils/issues/480,1232356302,IC_kwDOCGYnMM5JdEPO,7908073,chapmanjacobd,2022-08-31T01:51:49Z,2022-08-31T01:51:49Z,CONTRIBUTOR,Thanks for pointing me to the right place,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1355433619,search_sql add include_rank option, https://github.com/simonw/datasette/issues/1464#issuecomment-918621705,https://api.github.com/repos/simonw/datasette/issues/1464,918621705,IC_kwDOBm6k_c42wQ4J,7476523,bobwhitelock,2021-09-13T22:17:17Z,2021-09-13T22:17:17Z,CONTRIBUTOR,"> haven't had time to get back to this, but idle thought that I'm recording for later investigation: how does the continuous integration handle this installation issue? Is it documented there? Not certain, but I think tests in CI run on Ubuntu and don't appear to install any additional Sqlite-related dependencies, and so my guess is the version of Sqlite installed by default on Ubuntu has the `SQLITE_ENABLE_FTS3_PARENTHESIS` option enabled and so doesn't run into this issue.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",991191951,clean checkout & clean environment has test failures, https://github.com/simonw/datasette/issues/1464#issuecomment-915343886,https://api.github.com/repos/simonw/datasette/issues/1464,915343886,IC_kwDOBm6k_c42jwoO,7476523,bobwhitelock,2021-09-08T15:32:06Z,2021-09-08T15:32:06Z,CONTRIBUTOR,"Thanks, that does look similar! > Unfortunately, pysqlite3-binary isn't available for Mac OS X, so I can't quickly check that that fixes it; will do so later. Ah that makes sense, I guess that's why this isn't just always installed already. I wonder if a possible solution to this issue could be doing feature detection on whether this feature is supported by the current Sqlite version, and if not these tests could be disabled locally? But possibly there's a better way to handle this, will see what @simonw thinks","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",991191951,clean checkout & clean environment has test failures, https://github.com/simonw/datasette/issues/1464#issuecomment-915299013,https://api.github.com/repos/simonw/datasette/issues/1464,915299013,IC_kwDOBm6k_c42jlrF,7476523,bobwhitelock,2021-09-08T14:40:28Z,2021-09-08T14:40:28Z,CONTRIBUTOR,"What are the full errors you're getting? This *may* be the same issue as described in https://github.com/simonw/datasette/pull/1223 - essentially the test suite (and corresponding Datasette features I assume) are by default implicitly dependent on your Sqlite installation having been compiled with the `SQLITE_ENABLE_FTS3_PARENTHESIS` option. If this is the same issue then I think this can be fixed either by recompiling with that option or (probably more easily) by running `pip install pysqlite3-binary`, which will be used in preference to your system Sqlite installation and has this option enabled. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",991191951,clean checkout & clean environment has test failures, https://github.com/simonw/datasette/issues/1274#issuecomment-805214307,https://api.github.com/repos/simonw/datasette/issues/1274,805214307,MDEyOklzc3VlQ29tbWVudDgwNTIxNDMwNw==,7476523,bobwhitelock,2021-03-23T20:12:29Z,2021-03-23T20:12:29Z,CONTRIBUTOR,"One issue I could see with adding first class support for metadata in hjson format is that this would require adding an additional dependency to handle this, for a feature that would be unused by many users. I wonder if this could fit in as a plugin instead; if a hook existed for loading metadata (maybe as part of https://github.com/simonw/datasette/issues/860) the metadata could then come from any source, as specified by plugins, e.g. hjson, toml, XML, a database table etc. Until/unless this exists, a few ideas for how you could add comments: - Using YAML as you suggest. - A common pattern is adding a `""comment""` key for comments to any object in JSON - I don't think including an unnecessary key like this would break anything in Datasette, but not certain. - You could use another tool as a preprocessor for your JSON metadata - e.g. hjson or Jsonnet. You'd write the metadata in that format, and then convert that into JSON to actually use as your final metadata.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",839008371,Might there be some way to comment metadata.json?, https://github.com/simonw/datasette/issues/1265#issuecomment-802923254,https://api.github.com/repos/simonw/datasette/issues/1265,802923254,MDEyOklzc3VlQ29tbWVudDgwMjkyMzI1NA==,7476523,bobwhitelock,2021-03-19T15:39:15Z,2021-03-19T15:39:15Z,CONTRIBUTOR,"It doesn't use basic auth, but you can put a whole datasette instance, or parts of this, behind a username/password prompt using https://github.com/simonw/datasette-auth-passwords","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",836123030,Support for HTTP Basic Authentication, https://github.com/simonw/datasette/issues/1262#issuecomment-802095132,https://api.github.com/repos/simonw/datasette/issues/1262,802095132,MDEyOklzc3VlQ29tbWVudDgwMjA5NTEzMg==,7476523,bobwhitelock,2021-03-18T16:37:45Z,2021-03-18T16:37:45Z,CONTRIBUTOR,"This sounds like a good use case for a plugin, since this will only be useful for a subset of Datasette users. It shouldn't be too difficult to add a button to do this with the available plugin hooks - have you taken a look at https://docs.datasette.io/en/latest/writing_plugins.html?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",834602299,Plugin hook that could support 'order by random()' for table view, https://github.com/simonw/datasette/issues/1220#issuecomment-778439617,https://api.github.com/repos/simonw/datasette/issues/1220,778439617,MDEyOklzc3VlQ29tbWVudDc3ODQzOTYxNw==,7476523,bobwhitelock,2021-02-12T20:33:27Z,2021-02-12T20:33:27Z,CONTRIBUTOR,"That Docker command will mount your current directory inside the Docker container at `/mnt` - so you shouldn't need to change anything locally, just run ``` docker run -p 8001:8001 -v `pwd`:/mnt \ datasetteproject/datasette \ datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db ``` and it will use the `fixtures.db` file within your current directory","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",806743116,Installing datasette via docker: Path 'fixtures.db' does not exist, https://github.com/simonw/datasette/issues/1220#issuecomment-777927946,https://api.github.com/repos/simonw/datasette/issues/1220,777927946,MDEyOklzc3VlQ29tbWVudDc3NzkyNzk0Ng==,7476523,bobwhitelock,2021-02-12T02:29:54Z,2021-02-12T02:29:54Z,CONTRIBUTOR,"According to https://github.com/simonw/datasette/blob/master/docs/installation.rst#using-docker it should be ``` docker run -p 8001:8001 -v `pwd`:/mnt \ datasetteproject/datasette \ datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db ``` This uses `/mnt/fixtures.db` whereas you're using `fixtures.db` - did you try using this path instead?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",806743116,Installing datasette via docker: Path 'fixtures.db' does not exist, https://github.com/simonw/datasette/issues/1200#issuecomment-777132761,https://api.github.com/repos/simonw/datasette/issues/1200,777132761,MDEyOklzc3VlQ29tbWVudDc3NzEzMjc2MQ==,7476523,bobwhitelock,2021-02-11T00:29:52Z,2021-02-11T00:29:52Z,CONTRIBUTOR,I'm probably missing something but what's the use case here - what would this offer over adding `limit 10` to the query?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",792890765,?_size=10 option for the arbitrary query page would be useful, https://github.com/simonw/datasette/pull/1158#issuecomment-750389683,https://api.github.com/repos/simonw/datasette/issues/1158,750389683,MDEyOklzc3VlQ29tbWVudDc1MDM4OTY4Mw==,6774676,eumiro,2020-12-23T17:02:50Z,2020-12-23T17:02:50Z,CONTRIBUTOR,"The dict/set suggestion comes from `pyupgrade --py36-plus`, but then had to `black` the change. The rest comes from PyCharm's Inspect code function. I reviewed all the suggestions and fixed a thing or two, such as leading/trailing spaces in the docstrings or turned around the chained conditions. Then I tried to convert all `os.path/glob/open` to `Path`, but there were some local test issues, so I'll have to start over in smaller chunks if you want to have that too.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",773913793,Modernize code to Python 3.6+, https://github.com/simonw/datasette/pull/1256#issuecomment-795112935,https://api.github.com/repos/simonw/datasette/issues/1256,795112935,MDEyOklzc3VlQ29tbWVudDc5NTExMjkzNQ==,6371750,JBPressac,2021-03-10T08:59:45Z,2021-03-10T08:59:45Z,CONTRIBUTOR,"Sorry, I meant ""minor typo"" not ""minor type"".","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",827341657,Minor type in IP adress, https://github.com/simonw/datasette/issues/766#issuecomment-791509910,https://api.github.com/repos/simonw/datasette/issues/766,791509910,MDEyOklzc3VlQ29tbWVudDc5MTUwOTkxMA==,6371750,JBPressac,2021-03-05T15:57:35Z,2021-03-05T16:35:21Z,CONTRIBUTOR,"Hello, I have the same wildcards search problems with an instance of Datasette. http://crbc-dataset.huma-num.fr/inventaires/fonds_auguste_dupouy_1872_1967?_search=gwerz&_sort=rowid is OK but http://crbc-dataset.huma-num.fr/inventaires/fonds_auguste_dupouy_1872_1967?_search=gwe* is not (FTS is activated on ""Reference"" ""IntituleAnalyse"" ""NomDuProducteur"" ""PresentationDuContenu"" ""Notes""). Notice that a SQL query as below launched directly from SQLite in the server's shell, retrieves results. `select * from fonds_auguste_dupouy_1872_1967_fts where IntituleAnalyse MATCH ""gwe*"";` Thanks,","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",617323873,Enable wildcard-searches by default, https://github.com/simonw/datasette/issues/998#issuecomment-743080047,https://api.github.com/repos/simonw/datasette/issues/998,743080047,MDEyOklzc3VlQ29tbWVudDc0MzA4MDA0Nw==,6371750,JBPressac,2020-12-11T09:25:09Z,2020-12-11T09:25:09Z,CONTRIBUTOR,"Hello Simon, I have a similar problem with horizontal scrollbar display with Datasette version 0.51 and superior for a table with more than 30 rows. With Datasette 0.50, the horizontal scrollbar is displayed, if I upgrade Datasette to 0.51 and superior, the horizontal scrollbar disappears. Datasette 0.50: horizontal scrollbar ![2020-12-11 10_23_28-CN=Microsoft Windows, O=Microsoft Corporation, L=Redmond, S=Washington, C=US](https://user-images.githubusercontent.com/6371750/101885620-a5f17800-3b9a-11eb-8870-654e7d4372ca.png) Datasette 0.51 and superior: no horizontal scrollbar ![2020-12-11 10_24_55-CN=Microsoft Windows, O=Microsoft Corporation, L=Redmond, S=Washington, C=US](https://user-images.githubusercontent.com/6371750/101885782-dfc27e80-3b9a-11eb-9d55-6c9a56227bf2.png) Thanks,","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",717699884,Wide tables should scroll horizontally within the page, https://github.com/simonw/datasette/issues/656#issuecomment-576293773,https://api.github.com/repos/simonw/datasette/issues/656,576293773,MDEyOklzc3VlQ29tbWVudDU3NjI5Mzc3Mw==,6371750,JBPressac,2020-01-20T14:17:11Z,2020-01-20T14:17:11Z,CONTRIBUTOR,Seems that headers and definitions has simply to be filled as an HTML table in the description field of matadata.json.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",546961357,Display of the column definitions, https://github.com/simonw/datasette/pull/644#issuecomment-565755208,https://api.github.com/repos/simonw/datasette/issues/644,565755208,MDEyOklzc3VlQ29tbWVudDU2NTc1NTIwOA==,6025893,chris48s,2019-12-14T21:33:31Z,2019-12-14T21:33:31Z,CONTRIBUTOR,"Hi @simonw Have you had a chance to look at this at all? I'm going to have a chunk of time free next week so if there is additional work needed on this, that would be a particularly convenient time for me to revisit this. Cheers","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",530513784,Validate metadata json on startup, https://github.com/simonw/datasette/issues/502#issuecomment-812813732,https://api.github.com/repos/simonw/datasette/issues/502,812813732,MDEyOklzc3VlQ29tbWVudDgxMjgxMzczMg==,5413548,louispotok,2021-04-03T05:16:54Z,2021-04-03T05:16:54Z,CONTRIBUTOR,"For what it's worth, if anyone finds this in the future, I was having the same issue. After digging through the code, it turned out that the database download is only available if it the db served in immutable mode, so `datasette serve -i xyz.db` rather than the doc's quickstart recommendation of `datasette serve xyz.db`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",453131917,Exporting sqlite database(s)?, https://github.com/simonw/datasette/issues/1212#issuecomment-782430028,https://api.github.com/repos/simonw/datasette/issues/1212,782430028,MDEyOklzc3VlQ29tbWVudDc4MjQzMDAyOA==,4488943,kbaikov,2021-02-19T22:54:13Z,2021-02-19T22:54:13Z,CONTRIBUTOR,I will close this issue since it appears only in my particular setup.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",797651831,Tests are very slow. , https://github.com/simonw/datasette/issues/1208#issuecomment-774286962,https://api.github.com/repos/simonw/datasette/issues/1208,774286962,MDEyOklzc3VlQ29tbWVudDc3NDI4Njk2Mg==,4488943,kbaikov,2021-02-05T21:02:39Z,2021-02-05T21:02:39Z,CONTRIBUTOR,@simonw could you please take a look at the PR 1211 that fixes this issue?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",794554881,A lot of open(file) functions are used without a context manager thus producing ResourceWarning: unclosed file <_io.TextIOWrapper, https://github.com/simonw/datasette/issues/1212#issuecomment-772007663,https://api.github.com/repos/simonw/datasette/issues/1212,772007663,MDEyOklzc3VlQ29tbWVudDc3MjAwNzY2Mw==,4488943,kbaikov,2021-02-02T21:36:56Z,2021-02-02T21:36:56Z,CONTRIBUTOR,"How do you get 4-5 minutes? I run my tests in WSL 2, so may be i need to try a real linux VM.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",797651831,Tests are very slow. , https://github.com/simonw/datasette/pull/1211#issuecomment-771127458,https://api.github.com/repos/simonw/datasette/issues/1211,771127458,MDEyOklzc3VlQ29tbWVudDc3MTEyNzQ1OA==,4488943,kbaikov,2021-02-01T20:13:39Z,2021-02-01T20:13:39Z,CONTRIBUTOR,Ping @simonw ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",797649915,Use context manager instead of plain open, https://github.com/simonw/datasette/pull/107#issuecomment-345117690,https://api.github.com/repos/simonw/datasette/issues/107,345117690,MDEyOklzc3VlQ29tbWVudDM0NTExNzY5MA==,3433657,raynae,2017-11-17T01:29:41Z,2017-11-17T01:29:41Z,CONTRIBUTOR,"Thanks for bearing with me. I was getting a message about my branch diverging when I tried to push after rebasing, so I merged master into isnull, seems like that did the trick. Let me know if I should make any corrections.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",274343647,add support for ?field__isnull=1, https://github.com/simonw/datasette/pull/107#issuecomment-344811268,https://api.github.com/repos/simonw/datasette/issues/107,344811268,MDEyOklzc3VlQ29tbWVudDM0NDgxMTI2OA==,3433657,raynae,2017-11-16T04:17:45Z,2017-11-16T04:17:45Z,CONTRIBUTOR,"Thanks for the guidance. I added a unit test and made a slight change to utils.py. I didn't realize this, but evidently string.format only complains if you supply less arguments than there are format placeholders, so the original commit worked, but was adding a superfluous named param. I added a conditional that prevents the named param from being created and ensures the correct number of args are passed to sting.format. It has the side effect of hiding the SQL query in /templates/table.html when there are no other where clauses--not sure if that's the desired outcome here.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",274343647,add support for ?field__isnull=1, https://github.com/simonw/datasette/issues/1425#issuecomment-895003796,https://api.github.com/repos/simonw/datasette/issues/1425,895003796,IC_kwDOBm6k_c41WKyU,3243482,abdusco,2021-08-09T07:14:35Z,2021-08-09T07:14:35Z,CONTRIBUTOR,"I believe this also provides a workaround for the problem I face in https://github.com/simonw/datasette/issues/1300. Now I should be able to get table PKs and generate a row URL. I'll test this out and report my findings. ```py from datasette.utils import path_from_row_pks pks = await db.primary_keys(table) url = self.ds.urls.row_blob( database, table, path_from_row_pks(row, pks, not pks), column, ) ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",963528457,render_cell() hook should support returning an awaitable, https://github.com/simonw/datasette/pull/1130#issuecomment-861497548,https://api.github.com/repos/simonw/datasette/issues/1130,861497548,MDEyOklzc3VlQ29tbWVudDg2MTQ5NzU0OA==,3243482,abdusco,2021-06-15T13:27:48Z,2021-06-15T13:27:48Z,CONTRIBUTOR,"There's a workaround: https://css-tricks.com/css-fix-for-100vh-in-mobile-webkit/ and a future fix: https://css-tricks.com/safari-15-new-ui-theme-colors-and-a-css-tricks-cameo/","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",756876238,Fix footer not sticking to bottom in short pages, https://github.com/simonw/datasette/issues/1300#issuecomment-833132571,https://api.github.com/repos/simonw/datasette/issues/1300,833132571,MDEyOklzc3VlQ29tbWVudDgzMzEzMjU3MQ==,3243482,abdusco,2021-05-06T00:16:50Z,2021-05-06T00:18:05Z,CONTRIBUTOR,"I ended up using some JS as a workaround. First, add a JS file in `metadata.yaml`: ```yaml extra_js_urls: - '/static/app.js' ``` then inside the script, find the blob download links and replace `.blob` extension in the url with `.jpg` and replace the links with `` elements. You need to add an output formatter to serve `BLOB` columns as JPG. You can find the code in the first post. ~~Replacing `.blob` -> `.jpg` might not even be necessary, because browsers only care about the mime type, so you only need to serve the binary content with the right `content-type` header.~~. You need to replace the extension, otherwise the output renderer will not run. ```js window.addEventListener('DOMContentLoaded', () => { function renderBlobImages() { document.querySelectorAll('a[href*="".blob""]').forEach(el => { const img = document.createElement('img'); img.className = 'blob-image'; img.loading = 'lazy'; img.src = el.href.replace('.blob', '.jpg'); el.parentElement.replaceChild(img, el); }); } renderBlobImages(); }); ``` while this does the job, I'd prefer handling this in Python where it belongs.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",860625833,Make row available to `render_cell` plugin hook, https://github.com/simonw/datasette/issues/1300#issuecomment-821971059,https://api.github.com/repos/simonw/datasette/issues/1300,821971059,MDEyOklzc3VlQ29tbWVudDgyMTk3MTA1OQ==,3243482,abdusco,2021-04-18T10:42:19Z,2021-04-18T10:42:19Z,CONTRIBUTOR,"If there's a simpler way to generate a URL for a specific row, I'm all ears","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",860625833,Make row available to `render_cell` plugin hook, https://github.com/simonw/datasette/issues/1300#issuecomment-821970965,https://api.github.com/repos/simonw/datasette/issues/1300,821970965,MDEyOklzc3VlQ29tbWVudDgyMTk3MDk2NQ==,3243482,abdusco,2021-04-18T10:41:15Z,2021-04-18T10:41:15Z,CONTRIBUTOR,"If I change the hookspec and add a row parameter, it works https://github.com/simonw/datasette/blob/7a2ed9f8a119e220b66d67c7b9e07cbab47b1196/datasette/hookspecs.py#L58 ``` def render_cell(value, column, row, table, database, datasette): ``` But to generate a URL, I need the primary keys, but I can't call `pks = await db.primary_keys(table)` inside a sync function. I can't call `datasette.utils.detect_primary_keys` either, because the db connection is not publicly exposed (AFAICT). ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",860625833,Make row available to `render_cell` plugin hook, https://github.com/simonw/datasette/pull/1130#issuecomment-738907852,https://api.github.com/repos/simonw/datasette/issues/1130,738907852,MDEyOklzc3VlQ29tbWVudDczODkwNzg1Mg==,3243482,abdusco,2020-12-04T17:22:29Z,2020-12-04T17:31:25Z,CONTRIBUTOR,"EDIT: I misunderstood the problem. This seems like a fix better suited for Safari. But I don't have any Apple device to test it. ```css body { min-height: 100vh; min-height: -webkit-fill-available; } html { height: -webkit-fill-available; } ``` https://css-tricks.com/css-fix-for-100vh-in-mobile-webkit/ --- It's actually not that difficult to fix. Well, this is actually a workaround to keep viewport in place. I usually put a transition (forgot to do it here) that keeps page from resizing. ```css .container { min-height: 100vh; transition: height 10000s steps(0); } ``` `steps()` function prevents excessive layout calculations, and lets the page snap back into place (10000s ~= 3h later) in a single step. This fix also prevents page from jumping around when the keyboard pops up and down.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",756876238,Fix footer not sticking to bottom in short pages, https://github.com/simonw/datasette/issues/1111#issuecomment-736322290,https://api.github.com/repos/simonw/datasette/issues/1111,736322290,MDEyOklzc3VlQ29tbWVudDczNjMyMjI5MA==,3243482,abdusco,2020-12-01T08:54:47Z,2020-12-01T08:54:47Z,CONTRIBUTOR,"Somewhat related: https://github.com/simonw/datasette/issues/859 I fixed the issue with forking and disabling the counts for hidden tables.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",751195017,Accessing a database's `.json` is slow for very large SQLite files, https://github.com/simonw/datasette/pull/883#issuecomment-652394742,https://api.github.com/repos/simonw/datasette/issues/883,652394742,MDEyOklzc3VlQ29tbWVudDY1MjM5NDc0Mg==,3243482,abdusco,2020-07-01T12:41:13Z,2020-07-01T12:41:13Z,CONTRIBUTOR,"Well tests need to be updated. I need to get tests working on Windows.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",648749062,Skip counting hidden tables, https://github.com/simonw/datasette/pull/883#issuecomment-652297139,https://api.github.com/repos/simonw/datasette/issues/883,652297139,MDEyOklzc3VlQ29tbWVudDY1MjI5NzEzOQ==,3243482,abdusco,2020-07-01T09:11:29Z,2020-07-01T09:11:29Z,CONTRIBUTOR,"Turns out we should include hidden tables in the result dict, or we're breaking tests. I've committed a refactor https://github.com/simonw/datasette/pull/883/commits/4f06e1bf6fbe4b73be770b87f610bf7c0e6e3ea7","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",648749062,Skip counting hidden tables, https://github.com/simonw/datasette/issues/877#issuecomment-652261382,https://api.github.com/repos/simonw/datasette/issues/877,652261382,MDEyOklzc3VlQ29tbWVudDY1MjI2MTM4Mg==,3243482,abdusco,2020-07-01T08:03:17Z,2020-07-01T08:03:23Z,CONTRIBUTOR,Bearer tokens sound interesting. Where do tokens come from? An auth provider of my choosing? How do they get verified?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",648421105,Consider dropping explicit CSRF protection entirely?, https://github.com/simonw/datasette/issues/877#issuecomment-652255960,https://api.github.com/repos/simonw/datasette/issues/877,652255960,MDEyOklzc3VlQ29tbWVudDY1MjI1NTk2MA==,3243482,abdusco,2020-07-01T07:52:25Z,2020-07-01T08:10:00Z,CONTRIBUTOR,"I am calling the API from another origin, so injecting CSRF token into templates wouldn't work. EDIT: I'll try the new version, it sounds promising","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",648421105,Consider dropping explicit CSRF protection entirely?, https://github.com/simonw/datasette/issues/877#issuecomment-652166115,https://api.github.com/repos/simonw/datasette/issues/877,652166115,MDEyOklzc3VlQ29tbWVudDY1MjE2NjExNQ==,3243482,abdusco,2020-07-01T03:28:07Z,2020-07-01T03:28:07Z,CONTRIBUTOR,"Does this mean custom routes get to expose endpoints accepting POST requests? I've tried earlier to add some POST endpoints, but requests were being rejected by Datasette due to CSRF","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",648421105,Consider dropping explicit CSRF protection entirely?, https://github.com/simonw/datasette/issues/859#issuecomment-652160909,https://api.github.com/repos/simonw/datasette/issues/859,652160909,MDEyOklzc3VlQ29tbWVudDY1MjE2MDkwOQ==,3243482,abdusco,2020-07-01T03:09:32Z,2020-07-01T03:10:21Z,CONTRIBUTOR,"I've just realized Datasette tries to count hidden tables too. There are 5 visible tables, 25 hidden tables, which I haven't realize earlier to consider their effect. I've turned off counting for hidden tables to see if it has any effect. What's the point of counting FTS tables?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642572841,Database page loads too slowly with many large tables (due to table counts), https://github.com/simonw/datasette/issues/859#issuecomment-648669523,https://api.github.com/repos/simonw/datasette/issues/859,648669523,MDEyOklzc3VlQ29tbWVudDY0ODY2OTUyMw==,3243482,abdusco,2020-06-24T08:13:23Z,2020-06-24T10:30:36Z,CONTRIBUTOR,"I tried setting `cache_size_kb=0` then `cache_size_kb=100000`, still getting this behavior. I even changed `Database::table_counts` and lowered time limit to 1 ```py table_count = ( await self.execute( ""select count(*) from [{}]"".format(table), custom_time_limit=1, ) ).rows[0][0] counts[table] = table_count ``` I feel like 10 seconds is a magic number, like a processing timeout and datasette gives up and returns the page. Index page loads instantly, table page, query page, as well. But when I return to database page after some time, it loads in 10s. EDIT: It's always like 10 + 0.3s, like 10s wait and timeout then 300ms to render the page","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642572841,Database page loads too slowly with many large tables (due to table counts), https://github.com/simonw/datasette/issues/859#issuecomment-648232645,https://api.github.com/repos/simonw/datasette/issues/859,648232645,MDEyOklzc3VlQ29tbWVudDY0ODIzMjY0NQ==,3243482,abdusco,2020-06-23T15:19:53Z,2020-06-23T15:19:53Z,CONTRIBUTOR,"The issue seems to appear sporadically, like when I return to database page after a while, during which some records have been added to the database. I've just visited database, page first visit took ~10s, consecutive visits took 0.3s.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642572841,Database page loads too slowly with many large tables (due to table counts), https://github.com/simonw/datasette/issues/859#issuecomment-647936117,https://api.github.com/repos/simonw/datasette/issues/859,647936117,MDEyOklzc3VlQ29tbWVudDY0NzkzNjExNw==,3243482,abdusco,2020-06-23T06:25:17Z,2020-06-23T06:25:17Z,CONTRIBUTOR,"> > > ``` > sqlite-generate many-cols.db --tables 2 --rows 200000 --columns 50 > ``` > > Looks like that will take 35 minutes to run (it's not a particularly fast tool). Try chunking write operations into batches every 1000 records or so.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642572841,Database page loads too slowly with many large tables (due to table counts), https://github.com/simonw/datasette/issues/859#issuecomment-647935300,https://api.github.com/repos/simonw/datasette/issues/859,647935300,MDEyOklzc3VlQ29tbWVudDY0NzkzNTMwMA==,3243482,abdusco,2020-06-23T06:23:01Z,2020-06-23T06:23:01Z,CONTRIBUTOR,"> You said ""200k+, 50+ rows in a couple of tables"" - does that mean 50+ columns? I'll try with larger numbers of columns and see what difference that makes. Ah that was a typo, I meant 50k.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642572841,Database page loads too slowly with many large tables (due to table counts), https://github.com/simonw/datasette/issues/859#issuecomment-647925594,https://api.github.com/repos/simonw/datasette/issues/859,647925594,MDEyOklzc3VlQ29tbWVudDY0NzkyNTU5NA==,3243482,abdusco,2020-06-23T05:55:21Z,2020-06-23T06:28:29Z,CONTRIBUTOR,"Hmm, not seeing the problem now. I've removed the commented out sections in `database.py` and restarted the process. Database page now loads in <250ms. I have couple of workers that check some pages regularly and scrape new content and save to the DB. Could it be that datasette tries to recount tables every time database size changes? Normally it keeps a count cache, but as DB gets updated so often (new content every 5 min or so) it's practically recounting every time I go to the database page? EDIT: It turns out it doesn't hold cache with mutable databases. I'll update the issue with more findings and a better way to reproduce the problem if I encounter it again.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642572841,Database page loads too slowly with many large tables (due to table counts), https://github.com/simonw/datasette/issues/859#issuecomment-647923666,https://api.github.com/repos/simonw/datasette/issues/859,647923666,MDEyOklzc3VlQ29tbWVudDY0NzkyMzY2Ng==,3243482,abdusco,2020-06-23T05:49:31Z,2020-06-23T05:49:31Z,CONTRIBUTOR,"I think I should mention that having FTS on all tables mean I have 5 visible, 25 hidden (FTS) tables displayed on database page.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642572841,Database page loads too slowly with many large tables (due to table counts), https://github.com/simonw/datasette/issues/859#issuecomment-647922203,https://api.github.com/repos/simonw/datasette/issues/859,647922203,MDEyOklzc3VlQ29tbWVudDY0NzkyMjIwMw==,3243482,abdusco,2020-06-23T05:44:58Z,2021-01-05T08:22:43Z,CONTRIBUTOR,"I'm seeing the problem on database page. Index page and table page runs quite fast. - Tables have <10 columns (`id`, `url`, `title`, `body_html`, `date`, `author`, `meta` (for keeping unstructured json)). I've added index on `date` columns (using `sqlite-utils`) in addition to the index present on `id` columns. - All tables have FTS enabled on `text` and `varchar` columns (`title`, `body_html` etc) to speed up searching. - There are couple of tables related with foreign keys (think a thread in a forum and posts in that thread, related with `thread_id`) ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642572841,Database page loads too slowly with many large tables (due to table counts), https://github.com/simonw/datasette/issues/859#issuecomment-647194131,https://api.github.com/repos/simonw/datasette/issues/859,647194131,MDEyOklzc3VlQ29tbWVudDY0NzE5NDEzMQ==,3243482,abdusco,2020-06-21T23:15:54Z,2020-06-21T23:26:09Z,CONTRIBUTOR,"I'm not sure if table counts are to blame. There shouldn't be a ~3 orders of magnitude difference. ```fish user@klein /a/w/scrapyard (master)> set sql ""select count(*) from table_1; select count(*) from table_2; select count(*) from table_3;"" user@klein /a/w/scrapyard (master)> time sqlite3 scrapyard.db ""$sql"" 187489 46492 2229 ________________________________________________________ Executed in 25.57 millis fish external usr time 3.55 millis 0.00 micros 3.55 millis sys time 22.42 millis 1123.00 micros 21.30 millis ``` but not letting datasette count the tables definitely helps.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642572841,Database page loads too slowly with many large tables (due to table counts), https://github.com/simonw/datasette/issues/859#issuecomment-647135713,https://api.github.com/repos/simonw/datasette/issues/859,647135713,MDEyOklzc3VlQ29tbWVudDY0NzEzNTcxMw==,3243482,abdusco,2020-06-21T14:30:02Z,2020-06-21T14:30:02Z,CONTRIBUTOR,"Oops, the same method is called from both index and database pages. But removing select count queries speed up the page load quite a bit.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642572841,Database page loads too slowly with many large tables (due to table counts), https://github.com/simonw/datasette/issues/851#issuecomment-645293374,https://api.github.com/repos/simonw/datasette/issues/851,645293374,MDEyOklzc3VlQ29tbWVudDY0NTI5MzM3NA==,3243482,abdusco,2020-06-17T10:32:02Z,2020-06-17T10:32:28Z,CONTRIBUTOR,"Welp, I'm an idiot. Turns out I had a sneaky comma `,` after `sql` key: ``` ... (:name, :url), ``` which tells sqlite to expect another `values(...)` list. Correcting the SQL solved the issue. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",640330278,Having trouble getting writable canned queries to work, https://github.com/simonw/datasette/issues/511#issuecomment-510730200,https://api.github.com/repos/simonw/datasette/issues/511,510730200,MDEyOklzc3VlQ29tbWVudDUxMDczMDIwMA==,3243482,abdusco,2019-07-12T03:23:22Z,2019-07-12T03:23:22Z,CONTRIBUTOR,"@simonw yes it works fine on Windows, but test suite doesn't run properly, for that I had to use WSL","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",456578474,Get Datasette tests passing on Windows in GitHub Actions, https://github.com/simonw/datasette/pull/554#issuecomment-509629331,https://api.github.com/repos/simonw/datasette/issues/554,509629331,MDEyOklzc3VlQ29tbWVudDUwOTYyOTMzMQ==,3243482,abdusco,2019-07-09T12:51:35Z,2019-07-09T12:51:35Z,CONTRIBUTOR,"I wanted to add a test for it too, but I've realized it's impossible to test a server process as we cannot get its exit code. ```python # tests/test_cli.py def test_static_mounts_on_windows(): if sys.platform != ""win32"": return runner = CliRunner() result = runner.invoke( cli, [""serve"", ""--static"", r""s:C:\\""] ) assert result.exit_code == 0 ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",465728430,Fix static mounts using relative paths and prevent traversal exploits, https://github.com/simonw/datasette/pull/554#issuecomment-509618339,https://api.github.com/repos/simonw/datasette/issues/554,509618339,MDEyOklzc3VlQ29tbWVudDUwOTYxODMzOQ==,3243482,abdusco,2019-07-09T12:16:32Z,2019-07-09T12:16:32Z,CONTRIBUTOR,I've also added another fix for using static mounts with absolute paths on Windows. ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",465728430,Fix static mounts using relative paths and prevent traversal exploits, https://github.com/simonw/datasette/issues/1258#issuecomment-1437671409,https://api.github.com/repos/simonw/datasette/issues/1258,1437671409,IC_kwDOBm6k_c5VsR_x,2670795,brandonrobertz,2023-02-20T23:39:58Z,2023-02-20T23:39:58Z,CONTRIBUTOR,"This is pretty annoying for FTS because sqlite throws an error instead of just doing something like returning all or no results. This makes users who are unfamiliar with SQL and Datasette think the canned query page is broken and is a frequent source of confusion. To anyone dealing with this: My solution is to modify the canned query so that it returns no results which cues people to fill in the blank parameters. So instead of `emails_fts match escape_fts(:search))` My canned queries now look like this: `emails_fts match escape_fts(iif(:search=="""", ""*"", :search))` There are no asterisks in my data so the result is always blank. Ultimately it would be nice to be able to handle this in the metadata. Either making some named parameters required or setting some default values.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",828858421,Allow canned query params to specify default values, https://github.com/simonw/datasette/issues/1191#issuecomment-1200732975,https://api.github.com/repos/simonw/datasette/issues/1191,1200732975,IC_kwDOBm6k_c5Hkbsv,2670795,brandonrobertz,2022-08-01T05:39:27Z,2022-08-01T05:39:27Z,CONTRIBUTOR,I've got a URL shortening plugin that I would like to embed on the query page but I'd like avoid capturing the entire `query.html` template. A feature like this would solve it. Where's this at and how can I help?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",787098345,Ability for plugins to collaborate when adding extra HTML to blocks in default templates, https://github.com/simonw/datasette/issues/1713#issuecomment-1173358747,https://api.github.com/repos/simonw/datasette/issues/1713,1173358747,IC_kwDOBm6k_c5F8Aib,2670795,brandonrobertz,2022-07-04T05:16:35Z,2022-07-04T05:16:35Z,CONTRIBUTOR,"This feature is pretty important and would be nice if it would be all within Datasette (no separate CLI/deploy required). My workflow now is to basically just copy the result and paste into a Google Sheet, which works, but then it's not discoverable to other journalists browsing the Datasette instance. I started building a plugin similar to [datasette-saved-queries](https://datasette.io/plugins/datasette-saved-queries) but one that maintains its own DB (required if you're working with all immutable DBs), but got bogged down in details.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1203943272,Datasette feature for publishing snapshots of query results, https://github.com/simonw/datasette/issues/1384#issuecomment-1066222323,https://api.github.com/repos/simonw/datasette/issues/1384,1066222323,IC_kwDOBm6k_c4_jULz,2670795,brandonrobertz,2022-03-14T00:36:42Z,2022-03-14T00:36:42Z,CONTRIBUTOR,"> Ah, sorry, I didn't get what you were saying you the first time. Using _metadata_local in that way makes total sense -- I agree, refreshing metadata each cell was seeming quite excessive. Now I'm on the same page! :) All good. Report back any issues you find with this stuff. Metadata/dynamic config hasn't been tested widely outside of what I've done AFAIK. If you find a strong use case for async meta, it's going to be better to know sooner rather than later!","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",930807135,Plugin hook for dynamic metadata, https://github.com/simonw/datasette/issues/1384#issuecomment-1066169718,https://api.github.com/repos/simonw/datasette/issues/1384,1066169718,IC_kwDOBm6k_c4_jHV2,2670795,brandonrobertz,2022-03-13T19:48:49Z,2022-03-13T19:48:49Z,CONTRIBUTOR,"> For my reference, did you include a `render_cell` plugin calling `get_metadata` in those tests? You shouldn't need to do this, as I mentioned previously. The code inside `render_cell` hook already has access to the most recently sync'd metadata via `datasette._metadata_local`. Refreshing the metadata for every cell seems ... excessive.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",930807135,Plugin hook for dynamic metadata, https://github.com/simonw/datasette/issues/1384#issuecomment-1066006292,https://api.github.com/repos/simonw/datasette/issues/1384,1066006292,IC_kwDOBm6k_c4_ifcU,2670795,brandonrobertz,2022-03-13T02:09:44Z,2022-03-13T02:09:44Z,CONTRIBUTOR,"> If I'm understanding your plugin code correctly, you query the db using the sync handle every time `get_metdata` is called, right? Won't this become a pretty big bottleneck if a hook into `render_cell` is trying to read metadata / plugin config? Reading from sqlite DBs is pretty quick and I didn't notice significant performance issues when I was benchmarking. I tested on very large Datasette deployments (hundreds of DBs, millions of rows). See [""Many small queries are efficient in sqlite""](https://sqlite.org/np1queryprob.html) for more information on the rationale here. Also note that in the [datasette-live-config](https://github.com/next-LI/datasette-live-config) reference plugin, the DB connection is cached, so that eliminated most of the performance worries we had. If you need to ensure fresh metadata is being read inside of a `render_cell` hook specifically, you don't need to do anything further! `get_metadata` gets called before `render_cell` every request, so it already has access to the synced meta. There shouldn't be a need to call `get_metadata(...)` or `metadata(...)` inside `render_cell`, you can just use `datasette._metadata_local` if you're really worried about performance. > The plugin is close, but looks like it only grabs remote metadata, is that right? Instead what I'm wanting is to grab metadata embedded in the attached databases. Yes correct, the datadette-remote-metadata plugin doesn't do that. But the datasette-live-config plugin does. [It supports a `__metadata` table](https://github.com/next-LI/datasette-live-config/blob/main/datasette_live_config/__init__.py#L107-L138) that, when it exists on an attached DB, gets pulled into the Datasette internal `_metadata` and is also accessible via `get_metadata`. Updating is instantaneous so there's no gotchas for users or security issues for users relying on the metadata-based permissions. Simon talked about eventually making something like this a standard feature of Datasette, but I'm not sure what the status is on that! Good luck!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",930807135,Plugin hook for dynamic metadata, https://github.com/simonw/datasette/issues/1384#issuecomment-1065940779,https://api.github.com/repos/simonw/datasette/issues/1384,1065940779,IC_kwDOBm6k_c4_iPcr,2670795,brandonrobertz,2022-03-12T18:49:29Z,2022-03-12T18:50:07Z,CONTRIBUTOR,"Hello! Just wanted to chime in and note that there's a plugin to have Datasette [watch for updates to an external metadata.yaml/json and update the internal settings accordingly](https://datasette.io/plugins/datasette-remote-metadata), so I think the cache/poll use case is already covered. @khusmann If you don't need truly dynamic metadata then what you've come up with or the plugin ought to work fine. Making the get_metadata async won't improve the situation by itself as only some of the code paths accessing metadata use that hook. The other paths use the internal metadata dict. Trying to force all paths through a async hook would have performance ramifications and making everything use the internal meta will cause problems for users that need changes to take effect immediately. This is why I came to the non-async solution as it was the path of least change within Datasette. As always, open to new ideas, etc!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",930807135,Plugin hook for dynamic metadata, https://github.com/simonw/datasette/issues/859#issuecomment-905904540,https://api.github.com/repos/simonw/datasette/issues/859,905904540,IC_kwDOBm6k_c41_wGc,2670795,brandonrobertz,2021-08-25T21:59:14Z,2021-08-25T21:59:55Z,CONTRIBUTOR,"I did two tests: one with 1000 5-30mb DBs and a second with 20 multi gig DBs. For the second, I created them like so: `for i in {1..20}; do sqlite-generate db$i.db --tables ${i}00 --rows 100,2000 --columns 5,100 --pks 0 --fks 0; done` This was for deciding whether to use lots of small DBs or to group things into a smaller number of bigger DBs. The second strategy wins. By simply persisting the `_internal` DB to disk, I was able to avoid most of the performance issues I was experiencing previously. (To do this, I changed the `datasette/internal_db.py:init_internal_db` creates to if not exists, and changed the `_internal` DB instantiation in `datasette/app.py:Datasette.__init__` to a path with `is_mutable=True`.) Super rough, but the pages now load so I can continue testing ideas.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642572841,Database page loads too slowly with many large tables (due to table counts), https://github.com/simonw/datasette/issues/859#issuecomment-905899177,https://api.github.com/repos/simonw/datasette/issues/859,905899177,IC_kwDOBm6k_c41_uyp,2670795,brandonrobertz,2021-08-25T21:48:00Z,2021-08-25T21:48:00Z,CONTRIBUTOR,"Upon first stab, there's two issues here: - DB/table/row counts (as discussed above). This isn't too bad if the DBs are actually above the MAX limit check. - Populating the internal DB. On first load of a giant set of DBs, it can take 10-20 mins to populate. By altering datasette and persisting the internal DB to disk, this problem is vastly improved, but I'm sure this will cause problems elsewhere.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642572841,Database page loads too slowly with many large tables (due to table counts), https://github.com/simonw/datasette/issues/859#issuecomment-904982056,https://api.github.com/repos/simonw/datasette/issues/859,904982056,IC_kwDOBm6k_c418O4o,2670795,brandonrobertz,2021-08-24T21:15:04Z,2021-08-24T21:15:30Z,CONTRIBUTOR,"I'm running into issues with this as well. All other pages seem to work with lots of DBs except the home page, which absolutely tanks. Would be willing to put some work into this, if there's been any kind of progress on concepts on how this ought to work.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",642572841,Database page loads too slowly with many large tables (due to table counts), https://github.com/simonw/datasette/issues/1168#issuecomment-869076254,https://api.github.com/repos/simonw/datasette/issues/1168,869076254,MDEyOklzc3VlQ29tbWVudDg2OTA3NjI1NA==,2670795,brandonrobertz,2021-06-27T00:03:16Z,2021-06-27T00:05:51Z,CONTRIBUTOR,"> Related: Here's an implementation of a `get_metadata()` plugin hook by @brandonrobertz [next-LI@3fd8ce9](https://github.com/next-LI/datasette/commit/3fd8ce91f3108c82227bf65ff033929426c60437) Here's a plugin that implements metadata-within-DBs: [next-LI/datasette-live-config](https://github.com/next-LI/datasette-live-config) How it works: If a database has a `__metadata` table, then it gets parsed and included in the global metadata. It also implements a database-action hook with a UI for managing config. More context: https://github.com/next-LI/datasette-live-config/blob/72e335e887f1c69c54c6c2441e07148955b0fc9f/datasette_live_config/__init__.py#L109-L140","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777333388,Mechanism for storing metadata in _metadata tables, https://github.com/simonw/datasette/issues/1384#issuecomment-869074701,https://api.github.com/repos/simonw/datasette/issues/1384,869074701,MDEyOklzc3VlQ29tbWVudDg2OTA3NDcwMQ==,2670795,brandonrobertz,2021-06-26T23:45:18Z,2021-06-26T23:45:37Z,CONTRIBUTOR,"> Here's where the plugin hook is called, demonstrating the `fallback=` argument: > > https://github.com/simonw/datasette/blob/05a312caf3debb51aa1069939923a49e21cd2bd1/datasette/app.py#L426-L472 > > I'm not convinced of the use-case for passing `fallback=` to the hook here - is there a reason a plugin might care whether fallback is `True` or `False`, seeing as the `metadata()` method already respects that fallback logic on line 459? I think you're right. I can't think of a reason why the plugin would care about the `fallback` parameter since plugins are currently mandated to return a full, global metadata dict.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",930807135,Plugin hook for dynamic metadata, https://github.com/simonw/datasette/issues/1384#issuecomment-869074182,https://api.github.com/repos/simonw/datasette/issues/1384,869074182,MDEyOklzc3VlQ29tbWVudDg2OTA3NDE4Mg==,2670795,brandonrobertz,2021-06-26T23:37:42Z,2021-06-26T23:37:42Z,CONTRIBUTOR,"> > Hmmm... that's tricky, since one of the most obvious ways to use this hook is to load metadata from database tables using SQL queries. > > @brandonrobertz do you have a working example of using this hook to populate metadata from database tables I can try? > > Answering my own question: here's how Brandon implements it in his `datasette-live-config` plugin: https://github.com/next-LI/datasette-live-config/blob/72e335e887f1c69c54c6c2441e07148955b0fc9f/datasette_live_config/__init__.py#L50-L160 > > That's using a completely separate SQLite connection (actually wrapped in `sqlite-utils`) and making blocking synchronous calls to it. > > This is a pragmatic solution, which works - and likely performs just fine, because SQL queries like this against a small database are so fast that not running them asynchronously isn't actually a problem. > > But... it's weird. Everywhere else in Datasette land uses `await db.execute(...)` - but here's an example where users are encouraged to use blocking calls instead. _Ideally_ this hook would be asynchronous, but when I started down that path I quickly realized how large of a change this would be, since metadata gets used synchronously across the entire Datasette codebase. (And calling async code from sync is non-trivial.) In my live-configuration implementation I use synchronous reads using a persistent sqlite connection. This works pretty well in practice, but I agree it's limiting. My thinking around this was to go with the path of least change as `Datasette.metadata()` is a critical core function.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",930807135,Plugin hook for dynamic metadata, https://github.com/simonw/datasette/pull/1368#issuecomment-865204472,https://api.github.com/repos/simonw/datasette/issues/1368,865204472,MDEyOklzc3VlQ29tbWVudDg2NTIwNDQ3Mg==,2670795,brandonrobertz,2021-06-21T17:11:37Z,2021-06-21T17:11:37Z,CONTRIBUTOR,If this is a concept ACK then I will move onto fixing the tests (adding new ones) and updating the documentation for the new plugin hook.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",913865304,DRAFT: A new plugin hook for dynamic metadata, https://github.com/simonw/datasette/pull/1368#issuecomment-856182547,https://api.github.com/repos/simonw/datasette/issues/1368,856182547,MDEyOklzc3VlQ29tbWVudDg1NjE4MjU0Nw==,2670795,brandonrobertz,2021-06-07T18:59:47Z,2021-06-07T23:04:25Z,CONTRIBUTOR,"Note that if we went with a ""update_metadata"" hook, the hook signature would look something like this (it would return nothing): ``` update_metadata( datasette=self, metadata=metadata, key=key, database=database, table=table, fallback=fallback ) ``` The Datasette function `_metadata_recursive_update(self, orig, updated)` would disappear into the plugins. Doing this, though, we'd lose the easy ability to make the local metadata.yaml immutable (since we'd no longer have the recursive update).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",913865304,DRAFT: A new plugin hook for dynamic metadata, https://github.com/simonw/datasette/issues/767#issuecomment-632555800,https://api.github.com/repos/simonw/datasette/issues/767,632555800,MDEyOklzc3VlQ29tbWVudDYzMjU1NTgwMA==,2657547,rixx,2020-05-22T08:00:23Z,2020-05-22T08:00:23Z,CONTRIBUTOR,That would be perfect!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",620969465,Allow to specify a URL fragment for canned queries, https://github.com/simonw/datasette/pull/602#issuecomment-549246007,https://api.github.com/repos/simonw/datasette/issues/602,549246007,MDEyOklzc3VlQ29tbWVudDU0OTI0NjAwNw==,2657547,rixx,2019-11-04T07:29:33Z,2019-11-04T07:29:33Z,CONTRIBUTOR,Not sure – I'm always a bit weirded out when elements that I clicked disappear on me.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",509535510,Offer to format readonly SQL, https://github.com/simonw/datasette/pull/601#issuecomment-544214418,https://api.github.com/repos/simonw/datasette/issues/601,544214418,MDEyOklzc3VlQ29tbWVudDU0NDIxNDQxOA==,2657547,rixx,2019-10-20T02:29:49Z,2019-10-20T02:29:49Z,CONTRIBUTOR,Submitted in #602!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",509340359,Don't auto-format SQL on page load, https://github.com/simonw/datasette/pull/601#issuecomment-544008944,https://api.github.com/repos/simonw/datasette/issues/601,544008944,MDEyOklzc3VlQ29tbWVudDU0NDAwODk0NA==,2657547,rixx,2019-10-18T23:40:48Z,2019-10-18T23:40:48Z,CONTRIBUTOR,"The only negative impact that comes to mind is that now you have no way to get the read-only query to be formatted nicely, I think, so maybe a second PR adding the formatting functionality even to the read-only page would be good?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",509340359,Don't auto-format SQL on page load, https://github.com/simonw/datasette/pull/601#issuecomment-544008463,https://api.github.com/repos/simonw/datasette/issues/601,544008463,MDEyOklzc3VlQ29tbWVudDU0NDAwODQ2Mw==,2657547,rixx,2019-10-18T23:39:21Z,2019-10-18T23:39:21Z,CONTRIBUTOR,"That looks right, and I completely agree with the intent.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",509340359,Don't auto-format SQL on page load, https://github.com/simonw/datasette/pull/590#issuecomment-541587823,https://api.github.com/repos/simonw/datasette/issues/590,541587823,MDEyOklzc3VlQ29tbWVudDU0MTU4NzgyMw==,2657547,rixx,2019-10-14T09:58:23Z,2019-10-14T09:58:23Z,CONTRIBUTOR,Added tests.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",505818256,Handle spaces in DB names, https://github.com/simonw/datasette/pull/590#issuecomment-541562581,https://api.github.com/repos/simonw/datasette/issues/590,541562581,MDEyOklzc3VlQ29tbWVudDU0MTU2MjU4MQ==,2657547,rixx,2019-10-14T08:57:46Z,2019-10-14T08:57:46Z,CONTRIBUTOR,"Ah, thank you – I saw the need for unit tests but wasn't sure what the best way to add one would be.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",505818256,Handle spaces in DB names, https://github.com/simonw/datasette/issues/512#issuecomment-541119038,https://api.github.com/repos/simonw/datasette/issues/512,541119038,MDEyOklzc3VlQ29tbWVudDU0MTExOTAzOA==,2657547,rixx,2019-10-11T15:49:13Z,2019-10-11T15:49:13Z,CONTRIBUTOR,"How open are you to changing the config variable names (with appropriate deprecation, of course)? `""about_url_text"", ""license_url_text""` etc might be better suited to convey that these are just meant as basically URL titles.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",457147936,"""about"" parameter in metadata does not appear when alone", https://github.com/simonw/datasette/issues/507#issuecomment-541118904,https://api.github.com/repos/simonw/datasette/issues/507,541118904,MDEyOklzc3VlQ29tbWVudDU0MTExODkwNA==,2657547,rixx,2019-10-11T15:48:49Z,2019-10-11T15:48:49Z,CONTRIBUTOR,Headless Chrome and Firefox via Selenium are a solid choice in my experience. You may be interested in how pretix and pretalx solve this problem: They use pytest to create those screenshots on release to make sure they are up to date. See [this writeup](https://behind.pretix.eu/2018/11/15/automated-screenshots/) and [this repo](https://github.com/pretix/pretix-screenshots).,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",455852801,Every datasette plugin on the ecosystem page should have a screenshot, https://github.com/simonw/datasette/issues/585#issuecomment-541052329,https://api.github.com/repos/simonw/datasette/issues/585,541052329,MDEyOklzc3VlQ29tbWVudDU0MTA1MjMyOQ==,2657547,rixx,2019-10-11T12:53:51Z,2019-10-11T12:53:51Z,CONTRIBUTOR,"I think this would be good, yeah – currently, databases are explicitly sorted by name in the IndexView, we could just remove that part (and use an `OrderedDict` for consistency, I suppose)?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",503217375,"Databases on index page should display in order they were passed to ""datasette serve""?", https://github.com/simonw/datasette/issues/523#issuecomment-504809397,https://api.github.com/repos/simonw/datasette/issues/523,504809397,MDEyOklzc3VlQ29tbWVudDUwNDgwOTM5Nw==,2657547,rixx,2019-06-24T01:38:14Z,2019-06-24T01:38:14Z,CONTRIBUTOR,"Ah, apologies – I had found and read those issues, but I was under the impression that they refered only to the filtered row count, not the unfiltered total row count.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459627549,Show total/unfiltered row count when filtering, https://github.com/simonw/sqlite-utils/issues/449#issuecomment-1179579878,https://api.github.com/repos/simonw/sqlite-utils/issues/449,1179579878,IC_kwDOCGYnMM5GTvXm,1690072,davidleejy,2022-07-09T17:41:32Z,2022-07-09T17:41:50Z,CONTRIBUTOR,Learnt that the types in Sqlite-utils differ somewhat from those in Sqlite. I've changed my test to account for this difference and the test has passed successfully. I will submit a PR.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1279863844,Utilities for duplicating tables and creating a table with the results of a query, https://github.com/simonw/sqlite-utils/issues/449#issuecomment-1174027079,https://api.github.com/repos/simonw/sqlite-utils/issues/449,1174027079,IC_kwDOCGYnMM5F-jtH,1690072,davidleejy,2022-07-04T17:33:04Z,2022-07-04T17:48:43Z,CONTRIBUTOR,"I've written the code and test. Would you be able to advise how to compare table columns in a pytest function properly? Experiencing a challenge when comparing columns. Test: ```python def test_duplicate(fresh_db): table = fresh_db.create_table( ""table1"", { ""text_col"": str, ""float_col"": float, ""int_col"": int, ""bool_col"": bool, ""bytes_col"": bytes, ""datetime_col"": datetime.datetime, }, ) dt = datetime.datetime.now() b = bytes('hello world', 'utf-8') data = {""text_col"": ""Cleo"", ""float_col"": 3.14, ""int_col"": -2, ""bool_col"": True, ""bytes_col"": b, ""datetime_col"": str(dt)} table1 = fresh_db[""table1""] row_id = table1.insert(data).last_rowid table1.duplicate('table2') table2 = fresh_db[""table2""] assert data == table2.get(row_id) assert table1.columns == table2.columns # FAILS HERE ``` Result: ![Screenshot 2022-07-05 at 1 31 55 AM](https://user-images.githubusercontent.com/1690072/177198814-daac48c9-5746-49d0-a14a-14fe181c5a2f.png) Failure is due to column types being named differently -- e.g. 'FLOAT' vs 'REAL', 'INTEGER' vs 'INT'. How should I go about comparing columns while accounting for equivalent types? Or did I miss out something in my duplication code correctly? Here's how I did it: in `db.py`, I've added the following code: ```python class Table(Queryable): [...] def duplicate( self, name_new: str ) -> ""Table"": """""" Duplicate this table in this database. :param name_new: Name of new table. """""" assert self.exists() with self.db.conn: sql = ""CREATE TABLE [{new_table}] AS SELECT * FROM [{table}];"".format( new_table = name_new, table = self.name, ) self.db.execute(sql) return self.db[name_new] ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1279863844,Utilities for duplicating tables and creating a table with the results of a query, https://github.com/simonw/datasette/issues/1886#issuecomment-1313252879,https://api.github.com/repos/simonw/datasette/issues/1886,1313252879,IC_kwDOBm6k_c5ORqYP,883348,adipasquale,2022-11-14T08:10:23Z,2022-11-14T08:10:23Z,CONTRIBUTOR,"Hi @simonw and thanks for the great tools you're publishing, your dedication is inspiring! I work for the French Ministry of Culture on a surveying tool for objects protected for their historical value. It is part of a program building modern public services called [beta.gouv.fr](https://beta.gouv.fr/). In that context I'm using data published by the Ministry that I have ingested into datasette and published on a free Fly instance : https://collectif-objets-datasette.fly.dev . I have also ingested another data set with infos about french cities on this instance so that I can perform joined queries. The surveying tool synchronizes its data regularly from this datasette instance, and I also use it to perform queries when asked generic questions about the distribution of objects. (The data is not very accessible as it's undocumented and for internal usage mostly)","{""total_count"": 3, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 3, ""rocket"": 0, ""eyes"": 0}",1447050738,"Call for birthday presents: if you're using Datasette, let us know how you're using it here", https://github.com/simonw/datasette/issues/1813#issuecomment-1250901367,https://api.github.com/repos/simonw/datasette/issues/1813,1250901367,IC_kwDOBm6k_c5Kjz13,883348,adipasquale,2022-09-19T11:34:45Z,2022-09-19T11:34:45Z,CONTRIBUTOR,"oh and by writing this I just realized the difference: the URL on fly.io is with a custom SQL command whereas the local one is without. It seems that there is no pagination when using custom SQL commands which makes sense Sorry for this useless issue, maybe this can be useful for someone else / me in the future. Thanks again for this wonderful project !","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1377811868,missing next and next_url in JSON responses from an instance deployed on Fly , https://github.com/simonw/datasette/issues/1522#issuecomment-976117989,https://api.github.com/repos/simonw/datasette/issues/1522,976117989,IC_kwDOBm6k_c46LmDl,813732,glasnt,2021-11-23T03:00:34Z,2021-11-23T03:00:34Z,CONTRIBUTOR,"I tried deploying the most recent version of the Dockerfile in this thread ([link to comment](https://github.com/simonw/datasette/issues/1522#issuecomment-974605128)), and after trying a few different different combinations, I was only successful when I used `--no-cpu-throttling` (""CPU Is always allocated"" in the UI) Using this method, I got a very similar issue to you: The first time I'd load the site I'd get a 503. But after that first load, I didn't get the issue again. It would re-occur if the service started from cold boot. I suspect this is a race condition in the supervisord configuration. The errors I got were the same `Connection refused: AH00957: http: attempt to connect to 127.0.0.1:8001 (127.0.0.1) failed`, and that seems to indicate that `datasette` hadn't yet started. Looking at the order of logs getting back, the processes reported successfully completing loading after the first 503 was returned, so that makes me think race condition. I can replicate this locally, if I `docker run` and request `localhost:5000/prefix` _before_ I get the `datasette entered RUNNING state` message. Cloud Run wakes up when requests are received, so this test would semi-replicate that, but local docker would be the equivalent of a persistent process, hence it doesn't normally exhibit the same issues. Unfortunately supervisor/supervisor issue 122 (not linking as to prevent cross-project link spam) seems to say that dependency chaining is a feature that's been asked for for a long time, but hasn't been implemented. You could try some suggestions in that thread. ","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1058896236,Deploy a live instance of demos/apache-proxy, https://github.com/simonw/datasette/issues/1380#issuecomment-967747190,https://api.github.com/repos/simonw/datasette/issues/1380,967747190,IC_kwDOBm6k_c45rqZ2,813732,glasnt,2021-11-13T00:47:26Z,2021-11-13T00:47:26Z,CONTRIBUTOR,"Would it make sense to run datasette with a fswatch/inotifywait on a folder, then? ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",924748955,Serve all db files in a folder, https://github.com/simonw/datasette/issues/1380#issuecomment-953366110,https://api.github.com/repos/simonw/datasette/issues/1380,953366110,IC_kwDOBm6k_c440zZe,813732,glasnt,2021-10-27T22:48:55Z,2021-10-27T22:48:55Z,CONTRIBUTOR,"It looks like if the files argument is a directory, `config_dir` is set, but files in that folder are only loaded into `self.files` at the `Datasette` class initialisation. I tried seeing if I could get `--reload` to work, but I'm getting issues trying to use that command when specifying a directory, as the command `serve` ends up in the files list(?): ``` datasette serve . --reload Error: Invalid value for '[FILES]...': Path 'serve' does not exist. ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",924748955,Serve all db files in a folder, https://github.com/simonw/datasette/issues/1380#issuecomment-953334718,https://api.github.com/repos/simonw/datasette/issues/1380,953334718,IC_kwDOBm6k_c440ru-,813732,glasnt,2021-10-27T21:45:04Z,2021-10-27T21:45:04Z,CONTRIBUTOR,"I am also getting this issue, using the currently most recent version of datasette ``` $ datasette --version datasette, version 0.59.1 ``` If I run `datasette` within just a folder of files, ``` $ datasette serve . ``` Adding new files while datasette is running shows no new files, and removing files causes datasette to return 500 errors. ``` home Error 500 [Errno 2] No such file or directory: 'mydatabase.db' Powered by Datasette ``` ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",924748955,Serve all db files in a folder, https://github.com/dogsheep/github-to-sqlite/pull/48#issuecomment-704503719,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/48,704503719,MDEyOklzc3VlQ29tbWVudDcwNDUwMzcxOQ==,755825,adamjonas,2020-10-06T19:26:59Z,2020-10-06T19:26:59Z,CONTRIBUTOR,ref #46 ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",681228542,Add pull requests, https://github.com/simonw/datasette/issues/1612#issuecomment-1021497165,https://api.github.com/repos/simonw/datasette/issues/1612,1021497165,IC_kwDOBm6k_c484s9N,639012,jsfenfen,2022-01-25T18:44:23Z,2022-01-25T18:44:23Z,CONTRIBUTOR,"OMG, this might be the fastest OS ticket I've ever filed, thanks so much @simonw ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1114147905,Move canned queries closer to the SQL input area, https://github.com/simonw/datasette/issues/1019#issuecomment-708520800,https://api.github.com/repos/simonw/datasette/issues/1019,708520800,MDEyOklzc3VlQ29tbWVudDcwODUyMDgwMA==,639012,jsfenfen,2020-10-14T16:37:19Z,2020-10-14T16:37:19Z,CONTRIBUTOR,🎉 Thanks so much @simonw ! 🎉 ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",721050815,"""Edit SQL"" button on canned queries", https://github.com/simonw/datasette/issues/394#issuecomment-567133734,https://api.github.com/repos/simonw/datasette/issues/394,567133734,MDEyOklzc3VlQ29tbWVudDU2NzEzMzczNA==,639012,jsfenfen,2019-12-18T17:33:23Z,2019-12-18T17:33:23Z,CONTRIBUTOR,"FWIW I did a dumb merge of the branch here: https://github.com/jsfenfen/datasette and it seemed to work in that I could run stuff at a subdirectory, but ended up abandoning it in favor of just posting a subdomain because getting the nginx configs right was making me crazy. I still would prefer posting at a subdirectory but the subdomain seems simpler at the moment. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",396212021,base_url configuration setting, https://github.com/simonw/datasette/issues/394#issuecomment-556749086,https://api.github.com/repos/simonw/datasette/issues/394,556749086,MDEyOklzc3VlQ29tbWVudDU1Njc0OTA4Ng==,639012,jsfenfen,2019-11-21T01:15:34Z,2019-11-21T01:21:45Z,CONTRIBUTOR,"Hey @simonw is the url_prefix config option available in another branch, it looks like you've written some tests for it above? In 0.32 I get ""url_prefix is not a valid option"". I think this would be *really helpful*! This would be really handy for proxying datasette in another domain's *subdirectory* I believe this will allow folks to run upstream authentication, but the links break if the url_prefix doesn't match. I'd prefer not to host a proxied version of datasette on a subdomain (e.g. datasette.myurl.com b/c then I gotta worry about sharing authorization cookies with the subdomain, which I just assume not do, but...) Edit: I see the wip-url-prefix branch, I may try with that https://github.com/simonw/datasette/commit/8da2db4b71096b19e7a9ef1929369b8483d448bf","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",396212021,base_url configuration setting, https://github.com/dogsheep/github-to-sqlite/pull/59#issuecomment-846413174,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/59,846413174,MDEyOklzc3VlQ29tbWVudDg0NjQxMzE3NA==,631242,frosencrantz,2021-05-22T14:06:19Z,2021-05-22T14:06:19Z,CONTRIBUTOR,Thanks Simon!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771872303,Remove unneeded exists=True for -a/--auth flag., https://github.com/dogsheep/github-to-sqlite/pull/59#issuecomment-751375487,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/59,751375487,MDEyOklzc3VlQ29tbWVudDc1MTM3NTQ4Nw==,631242,frosencrantz,2020-12-26T17:08:44Z,2020-12-26T17:08:44Z,CONTRIBUTOR,"Hi @simonw, do I need to do anything else for this PR to be considered to be included? I've tried using this project and it is quite nice to be able to explore a repository, but noticed that a couple commands don't allow you to use authorization from the environment variable.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771872303,Remove unneeded exists=True for -a/--auth flag., https://github.com/simonw/sqlite-utils/issues/556#issuecomment-1575310378,https://api.github.com/repos/simonw/sqlite-utils/issues/556,1575310378,IC_kwDOCGYnMM5d5VQq,601708,mcint,2023-06-04T01:21:15Z,2023-06-04T01:21:15Z,CONTRIBUTOR,"I've resolved my use, with the line-buffered output and while read loop for line buffered input, but I leave this here so the incremental saving or line-buffered use-case can be explicitly handled or rejected (or deferred).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1740026046,Support storing incrementally piped values, https://github.com/simonw/sqlite-utils/issues/278#issuecomment-864621099,https://api.github.com/repos/simonw/sqlite-utils/issues/278,864621099,MDEyOklzc3VlQ29tbWVudDg2NDYyMTA5OQ==,601708,mcint,2021-06-20T22:39:57Z,2021-06-20T22:39:57Z,CONTRIBUTOR,"Fair. I looked into it, it looks like it could be done, but it would be _a bit ugly_. I can upload and link a gist of my exploration. **Click** can parse a first argument while still recognizing it as a sub-command keyword. From there, the program could: 1. ignore it preemptively if it matches a sub-command 2. and/or check if a (db) file exists at the path. It would then also need to set a shared db argument variable. Click also makes it easy to parse arguments from environment variables. If you're amenable, I may submit a patch for only that, which would update each sub-command to check for a DB/SQLITE_UTILS_DB environment variable. The goal would be usage that looks like: `DB=./convenient.db sqlite-utils [operation] [args]`","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",923697888,"Support db as first parameter before subcommand, or as environment variable", https://github.com/simonw/datasette/pull/280#issuecomment-391355030,https://api.github.com/repos/simonw/datasette/issues/280,391355030,MDEyOklzc3VlQ29tbWVudDM5MTM1NTAzMA==,565628,r4vi,2018-05-23T13:53:27Z,2018-05-23T15:22:45Z,CONTRIBUTOR,"No objections; It's good to go @simonw On Wed, 23 May 2018, 14:51 Simon Willison, wrote: > @r4vi any objections to me merging this? > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub > , or mute > the thread > > . > ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",325373747,Build Dockerfile with recent Sqlite + Spatialite, https://github.com/simonw/datasette/pull/280#issuecomment-391290271,https://api.github.com/repos/simonw/datasette/issues/280,391290271,MDEyOklzc3VlQ29tbWVudDM5MTI5MDI3MQ==,565628,r4vi,2018-05-23T09:53:38Z,2018-05-23T09:53:38Z,CONTRIBUTOR,"Running: ```bash docker run -p 8001:8001 -v `pwd`:/mnt datasette \ datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db \ --load-extension=/usr/local/lib/mod_spatialite.so ``` is now returning FTS5 enabled in the versions output: ```json { ""datasette"": { ""version"": ""0.22"" }, ""python"": { ""full"": ""3.6.5 (default, May 5 2018, 03:07:21) \n[GCC 6.3.0 20170516]"", ""version"": ""3.6.5"" }, ""sqlite"": { ""extensions"": { ""json1"": null, ""spatialite"": ""4.4.0-RC0"" }, ""fts_versions"": [ ""FTS5"", ""FTS4"", ""FTS3"" ], ""version"": ""3.23.1"" } } ``` The old query didn't work because specifying `(t TEXT)` caused an error","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",325373747,Build Dockerfile with recent Sqlite + Spatialite, https://github.com/simonw/datasette/pull/280#issuecomment-391141391,https://api.github.com/repos/simonw/datasette/issues/280,391141391,MDEyOklzc3VlQ29tbWVudDM5MTE0MTM5MQ==,565628,r4vi,2018-05-22T21:08:39Z,2018-05-22T21:08:39Z,CONTRIBUTOR,"I'm going to clean this up for consistency tomorrow morning so hold off merging until then please On Tue, May 22, 2018 at 6:34 PM, Simon Willison wrote: > Yeah let's try this without pysqlite3 and see if we still get the correct > version. > > — > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > , or mute > the thread > > . > ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",325373747,Build Dockerfile with recent Sqlite + Spatialite, https://github.com/simonw/datasette/pull/280#issuecomment-391059008,https://api.github.com/repos/simonw/datasette/issues/280,391059008,MDEyOklzc3VlQ29tbWVudDM5MTA1OTAwOA==,565628,r4vi,2018-05-22T16:40:27Z,2018-05-22T16:40:27Z,CONTRIBUTOR,"```python >>> import sqlite3 >>> sqlite3.sqlite_version '3.23.1' >>> ``` running the above in the container seems to show 3.23.1 too so maybe we don't need pysqlite3 at all?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",325373747,Build Dockerfile with recent Sqlite + Spatialite, https://github.com/simonw/sqlite-utils/issues/523#issuecomment-1407264466,https://api.github.com/repos/simonw/sqlite-utils/issues/523,1407264466,IC_kwDOCGYnMM5T4SbS,536941,fgregg,2023-01-28T02:41:14Z,2023-01-28T02:41:14Z,CONTRIBUTOR,"I also often then run another little script to cast all empty strings to null, but i save that for another issue if this gets accepted.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1560651350,Feature request: trim all leading and trailing white space for all columns for all tables in a database, https://github.com/simonw/sqlite-utils/pull/203#issuecomment-1404070841,https://api.github.com/repos/simonw/sqlite-utils/issues/203,1404070841,IC_kwDOCGYnMM5TsGu5,536941,fgregg,2023-01-25T18:47:18Z,2023-01-25T18:47:18Z,CONTRIBUTOR,i'll adopt this PR to make the changes @simonw suggested https://github.com/simonw/sqlite-utils/pull/203#issuecomment-753567932,"{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",743384829,changes to allow for compound foreign keys, https://github.com/simonw/datasette/pull/2003#issuecomment-1404065571,https://api.github.com/repos/simonw/datasette/issues/2003,1404065571,IC_kwDOBm6k_c5TsFcj,536941,fgregg,2023-01-25T18:44:42Z,2023-01-25T18:44:42Z,CONTRIBUTOR,see this related discussion to a change in API in sqlite-utils https://github.com/simonw/sqlite-utils/pull/203#issuecomment-753567932,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1555701851,Show referring tables and rows when the referring foreign key is compound, https://github.com/simonw/datasette/issues/1099#issuecomment-1402900354,https://api.github.com/repos/simonw/datasette/issues/1099,1402900354,IC_kwDOBm6k_c5Tno-C,536941,fgregg,2023-01-25T00:58:26Z,2023-01-25T00:58:26Z,CONTRIBUTOR,"> My original idea for compound foreign keys was to turn both of those columns into links, but that doesn't fit here because `database_name` is already part of a different foreign key. it's pretty hard to know what the right thing to do is if a field is part of multiple foreign keys. but, if that's not the case, what about making each of the columns a link. seems like an improvement over the status quo.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",743371103,Support linking to compound foreign keys, https://github.com/simonw/datasette/issues/1099#issuecomment-1402898291,https://api.github.com/repos/simonw/datasette/issues/1099,1402898291,IC_kwDOBm6k_c5Tnodz,536941,fgregg,2023-01-25T00:55:06Z,2023-01-25T00:55:06Z,CONTRIBUTOR,"I went ahead and spiked something together, in #2003 ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",743371103,Support linking to compound foreign keys, https://github.com/simonw/datasette/pull/2003#issuecomment-1402898033,https://api.github.com/repos/simonw/datasette/issues/2003,1402898033,IC_kwDOBm6k_c5TnoZx,536941,fgregg,2023-01-25T00:54:41Z,2023-01-25T00:54:41Z,CONTRIBUTOR,"@simonw, let me know what you think about this approach!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1555701851,Show referring tables and rows when the referring foreign key is compound, https://github.com/simonw/datasette/issues/1099#issuecomment-1402563930,https://api.github.com/repos/simonw/datasette/issues/1099,1402563930,IC_kwDOBm6k_c5TmW1a,536941,fgregg,2023-01-24T20:11:11Z,2023-01-24T20:11:11Z,CONTRIBUTOR,"hi @simonw, this bug bit me today. the UX for linking from a table to the foreign key seems tough! the design in the other direction seems a lot easier, for a given primary key detail page, add links back to the tables that refer to the row. would you be open to a PR that solved the second problem but not the first?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",743371103,Support linking to compound foreign keys, https://github.com/simonw/datasette/issues/1614#issuecomment-1364345119,https://api.github.com/repos/simonw/datasette/issues/1614,1364345119,IC_kwDOBm6k_c5RUkEf,536941,fgregg,2022-12-23T21:27:10Z,2022-12-23T21:27:10Z,CONTRIBUTOR,is this issue closed by #1893?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1115435536,Try again with SQLite codemirror support, https://github.com/simonw/datasette/issues/1796#issuecomment-1364345071,https://api.github.com/repos/simonw/datasette/issues/1796,1364345071,IC_kwDOBm6k_c5RUkDv,536941,fgregg,2022-12-23T21:27:02Z,2022-12-23T21:27:02Z,CONTRIBUTOR,@simonw is this issue closed by #1893?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1355148385,Research an upgrade to CodeMirror 6, https://github.com/simonw/datasette/issues/1886#issuecomment-1321241426,https://api.github.com/repos/simonw/datasette/issues/1886,1321241426,IC_kwDOBm6k_c5OwItS,536941,fgregg,2022-11-20T20:58:54Z,2022-11-20T20:58:54Z,CONTRIBUTOR,i wrote up a blog post of how i'm using it! https://bunkum.us/2022/11/20/mgdo-stack.html,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1447050738,"Call for birthday presents: if you're using Datasette, let us know how you're using it here", https://github.com/simonw/datasette/issues/1890#issuecomment-1317889323,https://api.github.com/repos/simonw/datasette/issues/1890,1317889323,IC_kwDOBm6k_c5OjWUr,536941,fgregg,2022-11-17T00:47:36Z,2022-11-17T00:47:36Z,CONTRIBUTOR,amazing! thanks @simonw ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1448143294,Autocomplete text entry for filter values that correspond to facets, https://github.com/simonw/datasette/pull/1870#issuecomment-1295667649,https://api.github.com/repos/simonw/datasette/issues/1870,1295667649,IC_kwDOBm6k_c5NOlHB,536941,fgregg,2022-10-29T00:52:43Z,2022-10-29T00:53:43Z,CONTRIBUTOR,"> Are you saying that I can build a container, but then when I run it and it does `datasette serve -i data.db ...` it will somehow modify the image, or create a new modified filesystem layer in the runtime environment, as a result of running that `serve` command? Somehow, `datasette serve -i data.db` will lead to the `data.db` being modified, which will trigger a [copy-on-write](https://docs.docker.com/storage/storagedriver/#the-copy-on-write-cow-strategy) of `data.db` into the read-write layer of the container. I don't understand **how** that happens. it kind of feels like a bug in sqlite, but i can't quite follow the sqlite code.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1426379903,"don't use immutable=1, only mode=ro", https://github.com/simonw/datasette/pull/1870#issuecomment-1294285471,https://api.github.com/repos/simonw/datasette/issues/1870,1294285471,IC_kwDOBm6k_c5NJTqf,536941,fgregg,2022-10-28T01:06:03Z,2022-10-28T01:06:03Z,CONTRIBUTOR,"as far as i can tell, [this is where the ""immutable"" argument is used](https://github.com/sqlite/sqlite/blob/c97bb14fab566f6fa8d967c8fd1e90f3702d5b73/src/pager.c#L4926-L4931) in sqlite: ```c pPager->noLock = sqlite3_uri_boolean(pPager->zFilename, ""nolock"", 0); if( (iDc & SQLITE_IOCAP_IMMUTABLE)!=0 || sqlite3_uri_boolean(pPager->zFilename, ""immutable"", 0) ){ vfsFlags |= SQLITE_OPEN_READONLY; goto act_like_temp_file; } ``` so it does set the read only flag, but then has a goto.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1426379903,"don't use immutable=1, only mode=ro", https://github.com/simonw/datasette/pull/1870#issuecomment-1294237783,https://api.github.com/repos/simonw/datasette/issues/1870,1294237783,IC_kwDOBm6k_c5NJIBX,536941,fgregg,2022-10-27T23:42:18Z,2022-10-27T23:42:18Z,CONTRIBUTOR,Relevant sqlite forum thread: https://www.sqlite.org/forum/forumpost/02f7bda329f41e30451472421cf9ce7f715b768ce3db02797db1768e47950d48,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1426379903,"don't use immutable=1, only mode=ro", https://github.com/simonw/datasette/issues/1836#issuecomment-1272357976,https://api.github.com/repos/simonw/datasette/issues/1836,1272357976,IC_kwDOBm6k_c5L1qRY,536941,fgregg,2022-10-08T16:56:51Z,2022-10-08T16:56:51Z,CONTRIBUTOR,"when you are running from docker, you **always** will want to run as `mode=ro` because the same thing that is causing duplication in the inspect layer will cause duplication in the final container read/write layer when `datasette serve` runs.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1400374908,docker image is duplicating db files somehow, https://github.com/simonw/datasette/issues/1836#issuecomment-1271103097,https://api.github.com/repos/simonw/datasette/issues/1836,1271103097,IC_kwDOBm6k_c5Lw355,536941,fgregg,2022-10-07T04:43:41Z,2022-10-07T04:43:41Z,CONTRIBUTOR,"@simonw, should i open up a new issue for investigating the differences between ""immutable=1"" and ""mode=ro"" and possibly switching to ""mode=ro"". Or would you like to keep that conversation in this issue?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1400374908,docker image is duplicating db files somehow, https://github.com/simonw/datasette/issues/1480#issuecomment-1271101072,https://api.github.com/repos/simonw/datasette/issues/1480,1271101072,IC_kwDOBm6k_c5Lw3aQ,536941,fgregg,2022-10-07T04:39:10Z,2022-10-07T04:39:10Z,CONTRIBUTOR,switching from `immutable=1` to `mode=ro` completely addressed this. see https://github.com/simonw/datasette/issues/1836#issuecomment-1271100651 for details.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1015646369,Exceeding Cloud Run memory limits when deploying a 4.8G database, https://github.com/simonw/datasette/issues/1836#issuecomment-1271100651,https://api.github.com/repos/simonw/datasette/issues/1836,1271100651,IC_kwDOBm6k_c5Lw3Tr,536941,fgregg,2022-10-07T04:38:14Z,2022-10-07T04:38:14Z,CONTRIBUTOR,"> yes, and i also think that this is causing the apparent memory problems in #1480. when the container starts up, it will make some operation on the database in `immutable` mode which apparently makes some small change to the db file. if that's so, then the db files will be copied to the read/write layer which counts against cloudrun's memory allocation! > > running a test of that now. this completely addressed #1480 ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1400374908,docker image is duplicating db files somehow, https://github.com/simonw/datasette/issues/1301#issuecomment-1271035998,https://api.github.com/repos/simonw/datasette/issues/1301,1271035998,IC_kwDOBm6k_c5Lwnhe,536941,fgregg,2022-10-07T02:38:04Z,2022-10-07T02:38:04Z,CONTRIBUTOR,the only mode that `publish cloudrun` supports right now is immutable,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",860722711,Publishing to cloudrun with immutable mode?, https://github.com/simonw/datasette/issues/1836#issuecomment-1271020193,https://api.github.com/repos/simonw/datasette/issues/1836,1271020193,IC_kwDOBm6k_c5Lwjqh,536941,fgregg,2022-10-07T02:15:05Z,2022-10-07T02:21:08Z,CONTRIBUTOR,"when i hack the connect method to open non mutable files with ""mode=ro"" and not ""immutable=1"" https://github.com/simonw/datasette/blob/eff112498ecc499323c26612d707908831446d25/datasette/database.py#L79 then: ```bash 870 B RUN /bin/sh -c datasette inspect nlrb.db --inspect-file inspect-data.json ``` the `datasette inspect` layer is only the size of the json file!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1400374908,docker image is duplicating db files somehow, https://github.com/simonw/datasette/issues/1836#issuecomment-1271008997,https://api.github.com/repos/simonw/datasette/issues/1836,1271008997,IC_kwDOBm6k_c5Lwg7l,536941,fgregg,2022-10-07T02:00:37Z,2022-10-07T02:00:49Z,CONTRIBUTOR,"yes, and i also think that this is causing the apparent memory problems in #1480. when the container starts up, it will make some operation on the database in `immutable` mode which apparently makes some small change to the db file. if that's so, then the db files will be copied to the read/write layer which counts against cloudrun's memory allocation! running a test of that now. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1400374908,docker image is duplicating db files somehow, https://github.com/simonw/datasette/issues/1836#issuecomment-1271003212,https://api.github.com/repos/simonw/datasette/issues/1836,1271003212,IC_kwDOBm6k_c5LwfhM,536941,fgregg,2022-10-07T01:52:04Z,2022-10-07T01:52:04Z,CONTRIBUTOR,"and if we try immutable mode, which is how things are opened by `datasette inspect` we duplicate the files!!! ```python # test_sql_immutable.py import sqlite3 import sys db_name = sys.argv[1] conn = sqlite3.connect(f'file:/app/{db_name}?immutable=1', uri=True) cur = conn.cursor() cur.execute('select count(*) from filing') print(cur.fetchone()) ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1400374908,docker image is duplicating db files somehow, https://github.com/simonw/datasette/issues/1836#issuecomment-1270992795,https://api.github.com/repos/simonw/datasette/issues/1836,1270992795,IC_kwDOBm6k_c5Lwc-b,536941,fgregg,2022-10-07T01:29:15Z,2022-10-07T01:50:14Z,CONTRIBUTOR,"fascinatingly, telling python to open sqlite in read only mode makes this layer have a size of 0 ```python # test_sql_ro.py import sqlite3 import sys db_name = sys.argv[1] conn = sqlite3.connect(f'file:/app/{db_name}?mode=ro', uri=True) cur = conn.cursor() cur.execute('select count(*) from filing') print(cur.fetchone()) ``` that's quite weird because setting the file permissions to read only didn't do anything. (on reflection, that chmod isn't doing anything because the dockerfile commands are run as root)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1400374908,docker image is duplicating db files somehow, https://github.com/simonw/datasette/issues/1836#issuecomment-1270988081,https://api.github.com/repos/simonw/datasette/issues/1836,1270988081,IC_kwDOBm6k_c5Lwb0x,536941,fgregg,2022-10-07T01:19:01Z,2022-10-07T01:27:35Z,CONTRIBUTOR,"okay, some progress!! running some sql against a database file causes that file to get duplicated even if it doesn't apparently change the file. make a little test script like this: ```python # test_sql.py import sqlite3 import sys db_name = sys.argv[1] conn = sqlite3.connect(f'file:/app/{db_name}', uri=True) cur = conn.cursor() cur.execute('select count(*) from filing') print(cur.fetchone()) ``` then ```docker RUN python test_sql.py nlrb.db ``` produced a layer that's the same size as `nlrb.db`!! ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1400374908,docker image is duplicating db files somehow, https://github.com/simonw/datasette/issues/1836#issuecomment-1270936982,https://api.github.com/repos/simonw/datasette/issues/1836,1270936982,IC_kwDOBm6k_c5LwPWW,536941,fgregg,2022-10-07T00:52:41Z,2022-10-07T00:52:41Z,CONTRIBUTOR,"it's not that the inspect command is somehow changing the db files. if i set them to only read-only, the ""inspect"" layer still has the same very large size.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1400374908,docker image is duplicating db files somehow, https://github.com/simonw/datasette/issues/1836#issuecomment-1270923537,https://api.github.com/repos/simonw/datasette/issues/1836,1270923537,IC_kwDOBm6k_c5LwMER,536941,fgregg,2022-10-07T00:46:08Z,2022-10-07T00:46:08Z,CONTRIBUTOR,"i thought it was maybe to do with reading through all the files, but that does not seem to be the case if i make a little test file like: ```python # test_read.py import hashlib import sys import pathlib HASH_BLOCK_SIZE = 1024 * 1024 def inspect_hash(path): """"""Calculate the hash of a database, efficiently."""""" m = hashlib.sha256() with path.open(""rb"") as fp: while True: data = fp.read(HASH_BLOCK_SIZE) if not data: break m.update(data) return m.hexdigest() inspect_hash(pathlib.Path(sys.argv[1])) ``` then a line in the Dockerfile like ```docker RUN python test_read.py nlrb.db && echo ""[]"" > /etc/inspect.json ``` just produes a layer of `3B` ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1400374908,docker image is duplicating db files somehow, https://github.com/simonw/datasette/issues/1480#issuecomment-1269847461,https://api.github.com/repos/simonw/datasette/issues/1480,1269847461,IC_kwDOBm6k_c5LsFWl,536941,fgregg,2022-10-06T11:21:49Z,2022-10-06T11:21:49Z,CONTRIBUTOR,"thanks @simonw, i'll spend a little more time trying to figure out why this isn't working on cloudrun, and then will flip over to fly if i can't. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1015646369,Exceeding Cloud Run memory limits when deploying a 4.8G database, https://github.com/simonw/datasette/issues/1480#issuecomment-1268629159,https://api.github.com/repos/simonw/datasette/issues/1480,1268629159,IC_kwDOBm6k_c5Lnb6n,536941,fgregg,2022-10-05T16:00:55Z,2022-10-05T16:00:55Z,CONTRIBUTOR,"as a next step, i'll fetch the docker image from the google registry, and see what memory and disk usage looks like when i run it locally.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1015646369,Exceeding Cloud Run memory limits when deploying a 4.8G database, https://github.com/simonw/datasette/issues/1480#issuecomment-1268613335,https://api.github.com/repos/simonw/datasette/issues/1480,1268613335,IC_kwDOBm6k_c5LnYDX,536941,fgregg,2022-10-05T15:45:49Z,2022-10-05T15:45:49Z,CONTRIBUTOR,"running into this as i continue to grow my labor data warehouse. Here a CloudRun PM says the container size should **not** count against memory: https://stackoverflow.com/a/56570717","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1015646369,Exceeding Cloud Run memory limits when deploying a 4.8G database, https://github.com/simonw/datasette/issues/1062#issuecomment-1260909128,https://api.github.com/repos/simonw/datasette/issues/1062,1260909128,IC_kwDOBm6k_c5LJ_JI,536941,fgregg,2022-09-28T13:22:53Z,2022-09-28T14:09:54Z,CONTRIBUTOR,"if you went this route: ```python with sqlite_timelimit(conn, time_limit_ms): c.execute(query) for chunk in c.fetchmany(chunk_size): yield from chunk ``` then `time_limit_ms` would probably have to be greatly extended, because the time spent in the loop will depend on the downstream processing. i wonder if this was why you were thinking this feature would need a dedicated connection? --- reading more, there's no real limit i can find on the number of active cursors (or more precisely active prepared statements objects, because sqlite doesn't really have cursors). maybe something like this would be okay? ```python with sqlite_timelimit(conn, time_limit_ms): c.execute(query) # step through at least one to evaluate the statement, not sure if this is necessary yield c.execute.fetchone() for chunk in c.fetchmany(chunk_size): yield from chunk ``` this seems quite weird that there's not more of limit of the number of active prepared statements, but i haven't been able to find one. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",732674148,Refactor .csv to be an output renderer - and teach register_output_renderer to stream all rows, https://github.com/simonw/datasette/issues/1062#issuecomment-1260829829,https://api.github.com/repos/simonw/datasette/issues/1062,1260829829,IC_kwDOBm6k_c5LJryF,536941,fgregg,2022-09-28T12:27:19Z,2022-09-28T12:27:19Z,CONTRIBUTOR,"for teaching `register_output_renderer` to stream it seems like the two options are to 1. a [nested query technique ](https://github.com/simonw/datasette/issues/526#issuecomment-505162238)to paginate through 2. a fetching model that looks like something ```python with sqlite_timelimit(conn, time_limit_ms): c.execute(query) for chunk in c.fetchmany(chunk_size): yield from chunk ``` currently `db.execute` is not a generator, so this would probably need a new method?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",732674148,Refactor .csv to be an output renderer - and teach register_output_renderer to stream all rows, https://github.com/simonw/datasette/issues/526#issuecomment-1259718517,https://api.github.com/repos/simonw/datasette/issues/526,1259718517,IC_kwDOBm6k_c5LFcd1,536941,fgregg,2022-09-27T16:02:51Z,2022-09-27T16:04:46Z,CONTRIBUTOR,"i think that `max_returned_rows` **is** a defense mechanism, just not for connection exhaustion. `max_returned_rows` is a defense mechanism against **memory bombs**. if you are potentially yielding out hundreds of thousands or even millions of rows, you need to be quite careful about data flow to not run out of memory on the server, or on the client. you have a lot of places in your code that are protective of that right now, but `max_returned_rows` acts as the final backstop. so, given that, it makes sense to have removing `max_returned_rows` altogether be a non-goal, but instead allow for for specific codepaths (like streaming csv's) be able to bypass. that could dramatically lower the surface area for a memory-bomb attack.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459882902,Stream all results for arbitrary SQL and canned queries, https://github.com/simonw/datasette/issues/526#issuecomment-1258910228,https://api.github.com/repos/simonw/datasette/issues/526,1258910228,IC_kwDOBm6k_c5LCXIU,536941,fgregg,2022-09-27T03:11:07Z,2022-09-27T03:11:07Z,CONTRIBUTOR,"i think this feature would be safe, as its really only the time limit that can, and imo, should protect against long running queries, as it is pretty easy to make very expensive queries that don't return many rows. moving away from `max_returned_rows` will requires some thinking about: 1. memory usage and data flows to handle potentially very large result sets 2. how to avoid rendering tens or hundreds of thousands of [html rows](#1655).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459882902,Stream all results for arbitrary SQL and canned queries, https://github.com/simonw/datasette/issues/526#issuecomment-1258878311,https://api.github.com/repos/simonw/datasette/issues/526,1258878311,IC_kwDOBm6k_c5LCPVn,536941,fgregg,2022-09-27T02:19:48Z,2022-09-27T02:19:48Z,CONTRIBUTOR,"this sql query doesn't trip up `maximum_returned_rows` but does timeout ```sql with recursive counter(x) as ( select 0 union select x + 1 from counter ) select * from counter LIMIT 10 OFFSET 100000000 ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459882902,Stream all results for arbitrary SQL and canned queries, https://github.com/simonw/datasette/issues/526#issuecomment-1258871525,https://api.github.com/repos/simonw/datasette/issues/526,1258871525,IC_kwDOBm6k_c5LCNrl,536941,fgregg,2022-09-27T02:09:32Z,2022-09-27T02:14:53Z,CONTRIBUTOR,"thanks @simonw, i learned something i didn't know about sqlite's execution model! > Imagine if Datasette CSVs did allow unlimited retrievals. Someone could hit the CSV endpoint for that recursive query and tie up Datasette's SQL connection effectively forever. why wouldn't the `sqlite_timelimit` guard prevent that? --- on my local version which has the code to [turn off truncations for query csv](#1820), `sqlite_timelimit` does protect me. ![Screenshot 2022-09-26 at 22-14-31 Error 500](https://user-images.githubusercontent.com/536941/192415680-94b32b7f-868f-4b89-8194-5752d45f6009.png) ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459882902,Stream all results for arbitrary SQL and canned queries, https://github.com/simonw/datasette/issues/526#issuecomment-1258849766,https://api.github.com/repos/simonw/datasette/issues/526,1258849766,IC_kwDOBm6k_c5LCIXm,536941,fgregg,2022-09-27T01:27:03Z,2022-09-27T01:27:03Z,CONTRIBUTOR,"i agree with that concern! but if i'm understanding the code correctly, `maximum_returned_rows` does not protect against long-running queries in any way.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459882902,Stream all results for arbitrary SQL and canned queries, https://github.com/simonw/datasette/pull/1820#issuecomment-1258803261,https://api.github.com/repos/simonw/datasette/issues/1820,1258803261,IC_kwDOBm6k_c5LB9A9,536941,fgregg,2022-09-27T00:03:09Z,2022-09-27T00:03:09Z,CONTRIBUTOR,"the pattern in this PR `max_returned_rows` control the maximum rows rendered through html and json, and the csv render bypasses that. i think it would be better to have each of these different query renderers have more direct control for how many rows to fetch, instead of relying on the internals of the `execute` method. generally, users will not want to paginate through tens of thousands of results, but often will want to download a full query as json or as csv. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1386456717,[SPIKE] Don't truncate query CSVs, https://github.com/simonw/datasette/issues/526#issuecomment-1258337011,https://api.github.com/repos/simonw/datasette/issues/526,1258337011,IC_kwDOBm6k_c5LALLz,536941,fgregg,2022-09-26T16:49:48Z,2022-09-26T16:49:48Z,CONTRIBUTOR,"i think the smallest change that gets close to what i want is to change the behavior so that `max_returned_rows` is not applied in the `execute` method when we are are asking for a csv of query. there are some infelicities for that approach, but i'll make a PR to make it easier to discuss.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459882902,Stream all results for arbitrary SQL and canned queries, https://github.com/simonw/datasette/issues/526#issuecomment-1258167564,https://api.github.com/repos/simonw/datasette/issues/526,1258167564,IC_kwDOBm6k_c5K_h0M,536941,fgregg,2022-09-26T14:57:44Z,2022-09-26T15:08:36Z,CONTRIBUTOR,"reading the database execute method i have a few questions. https://github.com/simonw/datasette/blob/cb1e093fd361b758120aefc1a444df02462389a3/datasette/database.py#L229-L242 --- unless i'm missing something (which is very likely!!), the `max_returned_rows` argument doesn't actually offer any protections against running very expensive queries. It's not like adding a `LIMIT max_rows` argument. it make sense that it isn't because, the query could already have an `LIMIT` argument. Doing something like `select * from (query) limit {max_returned_rows}` **might** be protective but wouldn't always. Instead the code executes the full original query, and if still has time it fetches out the first `max_rows + 1` rows. this *does* offer some protection of memory exhaustion, as you won't hydrate a huge result set into python (however, there are [data flow patterns](https://github.com/simonw/datasette/issues/1727#issuecomment-1258129113) that could avoid that too) given the current architecture, i don't see how creating a new connection would be use? --- If we just removed the `max_return_rows` limitation, then i think most things would be fine **except** for the QueryViews. Right now rendering, just [5000 rows takes a lot of client-side memory](https://github.com/simonw/datasette/issues/1655) so some form of pagination would be required. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459882902,Stream all results for arbitrary SQL and canned queries, https://github.com/simonw/datasette/issues/1655#issuecomment-1258166572,https://api.github.com/repos/simonw/datasette/issues/1655,1258166572,IC_kwDOBm6k_c5K_hks,536941,fgregg,2022-09-26T14:57:04Z,2022-09-26T14:57:04Z,CONTRIBUTOR,"I think that paginating, even in javascript, could be very helpful. Maybe render json or csv into the page and let javascript loading that into the dom?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1163369515,query result page is using 400mb of browser memory 40x size of html page and 400x size of csv data, https://github.com/simonw/datasette/issues/1727#issuecomment-1258129113,https://api.github.com/repos/simonw/datasette/issues/1727,1258129113,IC_kwDOBm6k_c5K_YbZ,536941,fgregg,2022-09-26T14:30:11Z,2022-09-26T14:48:31Z,CONTRIBUTOR,"from your analysis, it seems like the GIL is blocking on loading of the data from sqlite to python, (particularly in the `fetchmany` call) this is probably a simplistic idea, but what if you had the python code in the `execute` method iterate over the cursor and yield out rows or small chunks of rows. something like: ```python with sqlite_timelimit(conn, time_limit_ms): try: cursor = conn.cursor() cursor.execute(sql, params if params is not None else {}) except: ... max_returned_rows = self.ds.max_returned_rows if max_returned_rows == page_size: max_returned_rows += 1 if max_returned_rows and truncate: for i, row in enumerate(cursor): yield row if i == max_returned_rows - 1: break else: for row in cursor: yield row truncated = False ``` this kind of thing works well with a postgres server side cursor, but i'm not sure if it will hold for sqlite. you would still spend about the same amount of time in python and would be contending for the gil, but it would be could be non blocking. depending on the data flow, this could also some benefit for memory. (data stays in more compact sqlite-land until you need it)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1217759117,Research: demonstrate if parallel SQL queries are worthwhile, https://github.com/simonw/datasette/issues/526#issuecomment-1254064260,https://api.github.com/repos/simonw/datasette/issues/526,1254064260,IC_kwDOBm6k_c5Kv4CE,536941,fgregg,2022-09-21T18:17:04Z,2022-09-21T18:18:01Z,CONTRIBUTOR,"hi @simonw, this is becoming more of a bother for my [labor data warehouse](https://labordata.bunkum.us/). Is there any research or a spike i could do that would help you investigate this issue?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459882902,Stream all results for arbitrary SQL and canned queries, https://github.com/simonw/datasette/issues/1779#issuecomment-1214437408,https://api.github.com/repos/simonw/datasette/issues/1779,1214437408,IC_kwDOBm6k_c5IYtgg,536941,fgregg,2022-08-14T19:42:58Z,2022-08-14T19:42:58Z,CONTRIBUTOR,thanks @simonw!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1334628400,google cloudrun updated their limits on maxscale based on memory and cpu count, https://github.com/simonw/datasette/issues/1779#issuecomment-1210675046,https://api.github.com/repos/simonw/datasette/issues/1779,1210675046,IC_kwDOBm6k_c5IKW9m,536941,fgregg,2022-08-10T13:28:37Z,2022-08-10T13:28:37Z,CONTRIBUTOR,maybe a simpler solution is to set the maxscale to like 2? since datasette is not set up to make use of container scaling anyway?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1334628400,google cloudrun updated their limits on maxscale based on memory and cpu count, https://github.com/simonw/sqlite-utils/issues/456#issuecomment-1190277829,https://api.github.com/repos/simonw/sqlite-utils/issues/456,1190277829,IC_kwDOCGYnMM5G8jLF,536941,fgregg,2022-07-20T13:19:15Z,2022-07-20T13:19:15Z,CONTRIBUTOR,hadley wickham's melt and reshape could be good inspo: http://had.co.nz/reshape/introduction.pdf,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1310243385,feature request: pivot command, https://github.com/simonw/sqlite-utils/issues/456#issuecomment-1190272780,https://api.github.com/repos/simonw/sqlite-utils/issues/456,1190272780,IC_kwDOCGYnMM5G8h8M,536941,fgregg,2022-07-20T13:14:54Z,2022-07-20T13:14:54Z,CONTRIBUTOR,"for example, i have data on votes that look like this: | ballot_id | option_id | choice | |-|-|-| | 1 | 1 | 0 | | 1 | 2 | 1 | | 1 | 3 | 0 | | 1 | 4 | 1 | | 2 | 1 | 1 | | 2 | 2 | 0 | | 2 | 3 | 1 | | 2 | 4 | 0 | and i want to reshape from this long form to this wide form: | ballot_id | option_id_1 | option_id_2 | option_id_3 | option_id_ 4| |-|-|-|-| -| | 1 | 0 | 1 | 0 | 1 | | 2 | 1 | 0 | 1| 0 | i could do such a think like this. ```sql select ballot_id, sum(choice) filter (where option_id = 1) as option_id_1, sum(choice) filter (where option_id = 2) as option_id_2, sum(choice) filter (where option_id = 3) as option_id_3, sum(choice) filter (where option_id = 4) as option_id_4 from vote group by ballot_id ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1310243385,feature request: pivot command, https://github.com/simonw/sqlite-utils/issues/423#issuecomment-1189010812,https://api.github.com/repos/simonw/sqlite-utils/issues/423,1189010812,IC_kwDOCGYnMM5G3t18,536941,fgregg,2022-07-19T12:47:39Z,2022-07-19T12:47:39Z,CONTRIBUTOR,just ran into this!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1199158210,.extract() doesn't set foreign key when extracted columns contain NULL value, https://github.com/simonw/datasette/issues/1713#issuecomment-1103312860,https://api.github.com/repos/simonw/datasette/issues/1713,1103312860,IC_kwDOBm6k_c5Bwzfc,536941,fgregg,2022-04-20T00:52:19Z,2022-04-20T00:52:19Z,CONTRIBUTOR,feels related to #1402 ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1203943272,Datasette feature for publishing snapshots of query results, https://github.com/simonw/datasette/issues/1549#issuecomment-1087428593,https://api.github.com/repos/simonw/datasette/issues/1549,1087428593,IC_kwDOBm6k_c5A0Nfx,536941,fgregg,2022-04-04T11:17:13Z,2022-04-04T11:17:13Z,CONTRIBUTOR,"another way to get the behavior of downloading the file is to use the download attribute of the anchor tag https://developer.mozilla.org/en-US/docs/Web/HTML/Element/a#attr-download","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1077620955,Redesign CSV export to improve usability, https://github.com/simonw/datasette/issues/1684#issuecomment-1078126065,https://api.github.com/repos/simonw/datasette/issues/1684,1078126065,IC_kwDOBm6k_c5AQuXx,536941,fgregg,2022-03-24T20:08:56Z,2022-03-24T20:13:19Z,CONTRIBUTOR,"would be nice if the behavior was 1. try to facet all the columns 2. for bigger tables try to facet the indexed columns 3. for the biggest tables, turn off autofacetting completely This is based on my assumption that what determines autofaceting is the rarity of unique values. Which may not be true!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1179998071,Mechanism for disabling faceting on large tables only, https://github.com/simonw/datasette/issues/1581#issuecomment-1077047295,https://api.github.com/repos/simonw/datasette/issues/1581,1077047295,IC_kwDOBm6k_c5AMm__,536941,fgregg,2022-03-24T04:08:18Z,2022-03-24T04:08:18Z,CONTRIBUTOR,this has been addressed by the datasette-hashed-urls plugin,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1089529555,"when hashed urls are turned on, the _memory db has improperly long-lived cache expiry", https://github.com/simonw/datasette/pull/1582#issuecomment-1077047152,https://api.github.com/repos/simonw/datasette/issues/1582,1077047152,IC_kwDOBm6k_c5AMm9w,536941,fgregg,2022-03-24T04:07:58Z,2022-03-24T04:07:58Z,CONTRIBUTOR,this has been obviated by the datasette-hashed-urls plugin,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1090055810,don't set far expiry if hash is '000', https://github.com/simonw/datasette/issues/1655#issuecomment-1062450649,https://api.github.com/repos/simonw/datasette/issues/1655,1062450649,IC_kwDOBm6k_c4_U7XZ,536941,fgregg,2022-03-09T01:10:46Z,2022-03-09T01:10:46Z,CONTRIBUTOR,"i increased the max_returned_row, because I have some scripts that get CSVs from this site, and this makes doing pagination of CSVs less annoying for many cases. i know that's streaming csvs is something you are hoping to address in 1.0. let me know if there's anything i can do to help with that. as for what if anything can be done about the size of the dom, I don't have any ideas right now, but i'll poke around.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1163369515,query result page is using 400mb of browser memory 40x size of html page and 400x size of csv data, https://github.com/simonw/datasette/issues/1641#issuecomment-1049879118,https://api.github.com/repos/simonw/datasette/issues/1641,1049879118,IC_kwDOBm6k_c4-k-JO,536941,fgregg,2022-02-24T13:49:26Z,2022-02-24T13:49:26Z,CONTRIBUTOR,"maybe worth considering adding buttons for paren, asterisk, etc. under the input text box on mobile?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1149310456,Tweak mobile keyboard settings, https://github.com/simonw/sqlite-utils/issues/403#issuecomment-1033332570,https://api.github.com/repos/simonw/sqlite-utils/issues/403,1033332570,IC_kwDOCGYnMM49l2da,536941,fgregg,2022-02-09T04:22:43Z,2022-02-09T04:22:43Z,CONTRIBUTOR,dddoooope,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1126692066,Document how to add a primary key to a rowid table using `sqlite-utils transform --pk`, https://github.com/simonw/sqlite-utils/issues/403#issuecomment-1032126353,https://api.github.com/repos/simonw/sqlite-utils/issues/403,1032126353,IC_kwDOCGYnMM49hP-R,536941,fgregg,2022-02-08T01:45:15Z,2022-02-08T01:45:31Z,CONTRIBUTOR,"you can hack something like this to achieve this result: `sqlite-utils convert my_database my_table rowid ""{'id': value}"" --multi`","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1126692066,Document how to add a primary key to a rowid table using `sqlite-utils transform --pk`, https://github.com/simonw/sqlite-utils/issues/26#issuecomment-1032120014,https://api.github.com/repos/simonw/sqlite-utils/issues/26,1032120014,IC_kwDOCGYnMM49hObO,536941,fgregg,2022-02-08T01:32:34Z,2022-02-08T01:32:34Z,CONTRIBUTOR,"if you are curious about prior art, https://github.com/jsnell/json-to-multicsv is really good!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",455486286,Mechanism for turning nested JSON into foreign keys / many-to-many, https://github.com/simonw/sqlite-utils/issues/365#issuecomment-1009548580,https://api.github.com/repos/simonw/sqlite-utils/issues/365,1009548580,IC_kwDOCGYnMM48LH0k,536941,fgregg,2022-01-11T02:43:34Z,2022-01-11T02:43:34Z,CONTRIBUTOR,thanks so much! always a pleasure to see how you work through these things,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1096558279,create-index should run analyze after creating index, https://github.com/simonw/sqlite-utils/issues/365#issuecomment-1008275546,https://api.github.com/repos/simonw/sqlite-utils/issues/365,1008275546,IC_kwDOCGYnMM48GRBa,536941,fgregg,2022-01-09T11:01:15Z,2022-01-09T13:37:51Z,CONTRIBUTOR,"i don’t want to be such a partisan for analyze, but the query planner deciding *not* to use an index based on information collected by analyze is not necessarily a bug, but could be the correct choice. the original poster in that stack overflow doesn’t say there’s a performance regression ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1096558279,create-index should run analyze after creating index, https://github.com/simonw/sqlite-utils/issues/365#issuecomment-1008166084,https://api.github.com/repos/simonw/sqlite-utils/issues/365,1008166084,IC_kwDOCGYnMM48F2TE,536941,fgregg,2022-01-08T22:32:47Z,2022-01-08T22:32:47Z,CONTRIBUTOR,or using “ pragma optimize”,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1096558279,create-index should run analyze after creating index, https://github.com/simonw/sqlite-utils/issues/365#issuecomment-1008164786,https://api.github.com/repos/simonw/sqlite-utils/issues/365,1008164786,IC_kwDOCGYnMM48F1-y,536941,fgregg,2022-01-08T22:24:19Z,2022-01-08T22:24:19Z,CONTRIBUTOR,the out-of-date scenario you describe could be addressed by automatically adding an analyze to the insert or convert commands if they implicate an index,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1096558279,create-index should run analyze after creating index, https://github.com/simonw/sqlite-utils/issues/365#issuecomment-1008164116,https://api.github.com/repos/simonw/sqlite-utils/issues/365,1008164116,IC_kwDOCGYnMM48F10U,536941,fgregg,2022-01-08T22:18:57Z,2022-01-08T22:18:57Z,CONTRIBUTOR,"the table with the query ran so bad was about 50k. i think the scenario should not be worse than no stats. i also did not know that sqlite was so different from postgres and needed an explicit analyze call.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1096558279,create-index should run analyze after creating index, https://github.com/simonw/sqlite-utils/issues/365#issuecomment-1008161965,https://api.github.com/repos/simonw/sqlite-utils/issues/365,1008161965,IC_kwDOCGYnMM48F1St,536941,fgregg,2022-01-08T22:02:56Z,2022-01-08T22:02:56Z,CONTRIBUTOR,"for options 2 and 3, i would worry about discoverablity. in other db’s it is not necessary to explicitly call analyze for most indices. ie for postgres > The system regularly collects statistics on all of a table's columns. Newly-created non-expression indexes can immediately use these statistics to determine an index's usefulness. i suppose i would propose raising a warning if the stats table is created that explains what is going on and informs users about a —no-analyze argument.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1096558279,create-index should run analyze after creating index, https://github.com/simonw/datasette/pull/1574#issuecomment-1007844190,https://api.github.com/repos/simonw/datasette/issues/1574,1007844190,IC_kwDOBm6k_c48Ente,536941,fgregg,2022-01-08T00:42:12Z,2022-01-08T00:42:12Z,CONTRIBUTOR,is there a reason to not always use the slim option?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1084193403,introduce new option for datasette package to use a slim base image, https://github.com/simonw/sqlite-utils/issues/365#issuecomment-1007636709,https://api.github.com/repos/simonw/sqlite-utils/issues/365,1007636709,IC_kwDOCGYnMM48D1Dl,536941,fgregg,2022-01-07T18:28:33Z,2022-01-07T18:29:43Z,CONTRIBUTOR,"i added an index to one table with sqlite-utils, and then a query that used to take about 1 second started taking hundreds of seconds. running analyze got me back to sub second speed.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1096558279,create-index should run analyze after creating index, https://github.com/simonw/datasette/issues/1583#issuecomment-1002825217,https://api.github.com/repos/simonw/datasette/issues/1583,1002825217,IC_kwDOBm6k_c47xeYB,536941,fgregg,2021-12-30T00:34:16Z,2021-12-30T00:34:16Z,CONTRIBUTOR,"if that is not desirable, it might be good to document that users might want to set up a lifecycle rule to automatically delete these build artifacts. something like https://stackoverflow.com/questions/59937542/can-i-delete-container-images-from-google-cloud-storage-artifacts-bucket","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1090810196,consider adding deletion step of cloudbuild artifacts to gcloud publish, https://github.com/simonw/datasette/issues/1561#issuecomment-997128712,https://api.github.com/repos/simonw/datasette/issues/1561,997128712,IC_kwDOBm6k_c47bvoI,536941,fgregg,2021-12-18T02:35:48Z,2021-12-18T02:35:48Z,CONTRIBUTOR,interesting! i love this feature. this + full caching with cloudflare is really super!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1082765654,"add hash id to ""_memory"" url if hashed url mode is turned on and crossdb is also turned on", https://github.com/simonw/datasette/issues/526#issuecomment-993078038,https://api.github.com/repos/simonw/datasette/issues/526,993078038,IC_kwDOBm6k_c47MSsW,536941,fgregg,2021-12-14T01:46:52Z,2021-12-14T01:46:52Z,CONTRIBUTOR,"the nested query idea is very nice, and i stole if for [my client side paginator](https://observablehq.com/d/1d5da3a3c3f2f347#DatasetteClient). However, it won't do the right thing if the original query orders by random(). If you go the nested query route, maybe raise a 4XX status code if the query has such a clause?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459882902,Stream all results for arbitrary SQL and canned queries, https://github.com/simonw/datasette/issues/1553#issuecomment-993014772,https://api.github.com/repos/simonw/datasette/issues/1553,993014772,IC_kwDOBm6k_c47MDP0,536941,fgregg,2021-12-13T23:46:18Z,2021-12-13T23:46:18Z,CONTRIBUTOR,these headers would also be relevant for json exports of custom queries,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1079111498,if csv export is truncated in non streaming mode set informative response header, https://github.com/simonw/datasette/issues/1553#issuecomment-992986587,https://api.github.com/repos/simonw/datasette/issues/1553,992986587,IC_kwDOBm6k_c47L8Xb,536941,fgregg,2021-12-13T22:57:04Z,2021-12-13T22:57:04Z,CONTRIBUTOR,would also be good if the header said the what the max row limit was,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1079111498,if csv export is truncated in non streaming mode set informative response header, https://github.com/simonw/datasette/issues/526#issuecomment-992971072,https://api.github.com/repos/simonw/datasette/issues/526,992971072,IC_kwDOBm6k_c47L4lA,536941,fgregg,2021-12-13T22:29:34Z,2021-12-13T22:29:34Z,CONTRIBUTOR,just came by to open this issue. would make my data analysis in observable a lot better!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",459882902,Stream all results for arbitrary SQL and canned queries, https://github.com/simonw/datasette/issues/1549#issuecomment-991754237,https://api.github.com/repos/simonw/datasette/issues/1549,991754237,IC_kwDOBm6k_c47HPf9,536941,fgregg,2021-12-11T19:14:39Z,2021-12-11T19:14:39Z,CONTRIBUTOR,"that option is not available on [custom queries](https://labordata.bunkum.us/odpr-962a140?sql=with+local_union_filings+as+%28%0D%0A++select+*+from+lm_data+%0D%0A++where%0D%0A++++yr_covered+%3E+cast%28strftime%28%27%25Y%27%2C+%27now%27%2C+%27-5+years%27%29+as+int%29%0D%0A++++and+desig_name+%3D+%27LU%27%0D%0A++order+by+yr_covered+desc%0D%0A%29%2C%0D%0Amost_recent_filing+as+%28%0D%0A++select%0D%0A++++*%0D%0A++from+local_union_filings%0D%0A++group+by%0D%0A++++f_num%0D%0A%29%0D%0Aselect%0D%0A++*%0D%0Afrom%0D%0A++most_recent_filing%0D%0Awhere%0D%0A++next_election+%3E%3D+strftime%28%27%25Y-%25m%27%2C+%27now%27%29%0D%0A++and+next_election+%3C+strftime%28%27%25Y-%25m%27%2C+%27now%27%2C+%27%2B1+year%27%29%0D%0Aorder+by%0D%0A++members+desc%3B). ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1077620955,Redesign CSV export to improve usability, https://github.com/simonw/sqlite-utils/issues/353#issuecomment-991405755,https://api.github.com/repos/simonw/sqlite-utils/issues/353,991405755,IC_kwDOCGYnMM47F6a7,536941,fgregg,2021-12-11T01:38:29Z,2021-12-11T01:38:29Z,CONTRIBUTOR,"wow! that's awesome! thanks so much, @simonw!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1077102934,"Allow passing a file of code to ""sqlite-utils convert""", https://github.com/simonw/sqlite-utils/issues/26#issuecomment-964205475,https://api.github.com/repos/simonw/sqlite-utils/issues/26,964205475,IC_kwDOCGYnMM45eJuj,536941,fgregg,2021-11-09T14:31:29Z,2021-11-09T14:31:29Z,CONTRIBUTOR,i was just reaching for a tool to do this this morning,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",455486286,Mechanism for turning nested JSON into foreign keys / many-to-many, https://github.com/simonw/datasette/pull/1495#issuecomment-954384496,https://api.github.com/repos/simonw/datasette/issues/1495,954384496,IC_kwDOBm6k_c444sBw,536941,fgregg,2021-10-29T03:07:13Z,2021-10-29T03:07:13Z,CONTRIBUTOR,"okay @simonw, made the requested changes. tests are running locally. i think this is ready for you to look at again.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1033678984,Allow routes to have extra options, https://github.com/simonw/datasette/issues/1284#issuecomment-949604763,https://api.github.com/repos/simonw/datasette/issues/1284,949604763,IC_kwDOBm6k_c44mdGb,536941,fgregg,2021-10-22T12:54:34Z,2021-10-22T12:54:34Z,CONTRIBUTOR,i'm going to take a swing at this today. we'll see.,"{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",845794436,Feature or Documentation Request: Individual table as home page template, https://github.com/simonw/datasette/issues/1419#issuecomment-893114612,https://api.github.com/repos/simonw/datasette/issues/1419,893114612,IC_kwDOBm6k_c41O9j0,536941,fgregg,2021-08-05T02:29:06Z,2021-08-05T02:29:06Z,CONTRIBUTOR,"there's a lot of complexity here, that's probably not worth addressing. i got what i needed by patching the dockerfile that cloudrun uses to install a newer version of sqlite. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",959710008,`publish cloudrun` should deploy a more recent SQLite version, https://github.com/simonw/datasette/issues/1419#issuecomment-892276385,https://api.github.com/repos/simonw/datasette/issues/1419,892276385,IC_kwDOBm6k_c41Lw6h,536941,fgregg,2021-08-04T00:58:49Z,2021-08-04T00:58:49Z,CONTRIBUTOR,"yes, [filter clause on aggregate queries were added to sqlite3 in 3.30](https://www.sqlite.org/releaselog/3_30_1.html)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",959710008,`publish cloudrun` should deploy a more recent SQLite version, https://github.com/simonw/datasette/issues/1401#issuecomment-884910320,https://api.github.com/repos/simonw/datasette/issues/1401,884910320,IC_kwDOBm6k_c40vqjw,536941,fgregg,2021-07-22T13:26:01Z,2021-07-22T13:26:01Z,CONTRIBUTOR,"ordered lists didn't work either, btw","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",950664971,unordered list is not rendering bullet points in description_html on database page, https://github.com/simonw/datasette/issues/1401#issuecomment-950150483,https://api.github.com/repos/simonw/datasette/issues/1401,950150483,IC_kwDOBm6k_c44oiVT,418191,jaywgraves,2021-10-23T13:09:10Z,2021-10-23T13:09:10Z,CONTRIBUTOR,"I think it's because of this in `app.css` ``` ol, ul { list-style: none; } ``` https://github.com/simonw/datasette/blame/main/datasette/static/app.css#L35-L38 You could probably reinstate that by providing your own CSS. https://docs.datasette.io/en/0.24/custom_templates.html#custom-css-and-javascript","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",950664971,unordered list is not rendering bullet points in description_html on database page, https://github.com/simonw/datasette/pull/653#issuecomment-582106085,https://api.github.com/repos/simonw/datasette/issues/653,582106085,MDEyOklzc3VlQ29tbWVudDU4MjEwNjA4NQ==,418191,jaywgraves,2020-02-04T20:43:43Z,2020-02-04T20:43:43Z,CONTRIBUTOR,but this also doesn't have to land at all if it doesn't match your use case. ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",541331755,allow leading comments in SQL input field, https://github.com/simonw/datasette/pull/653#issuecomment-582105810,https://api.github.com/repos/simonw/datasette/issues/653,582105810,MDEyOklzc3VlQ29tbWVudDU4MjEwNTgxMA==,418191,jaywgraves,2020-02-04T20:43:01Z,2020-02-04T20:43:01Z,CONTRIBUTOR,"I *think* the existing code will be OK even if I strip the lines in the middle of a new line delimited string. It's only used for the validation, SQLite handles the `--` just fine and the whole SQL textarea still gets sent once it passes validation. I can add your test case to my branch later this evening though. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",541331755,allow leading comments in SQL input field, https://github.com/simonw/datasette/issues/329#issuecomment-422915450,https://api.github.com/repos/simonw/datasette/issues/329,422915450,MDEyOklzc3VlQ29tbWVudDQyMjkxNTQ1MA==,418191,jaywgraves,2018-09-19T18:45:02Z,2018-09-20T10:50:50Z,CONTRIBUTOR,"That works for me. Was able to pull the public image and no errors on my canned query. (~although a small rendering bug. I'll create an issue and if I have time today, a PR to fix~ this turned out to be my error.) Thanks for the quick response!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",336465018,Travis should push tagged images to Docker Hub for each release, https://github.com/simonw/datasette/issues/329#issuecomment-422821483,https://api.github.com/repos/simonw/datasette/issues/329,422821483,MDEyOklzc3VlQ29tbWVudDQyMjgyMTQ4Mw==,418191,jaywgraves,2018-09-19T14:17:42Z,2018-09-19T14:17:42Z,CONTRIBUTOR,"I'm using the docker image (0.23.2) and notice some differences/bugs between the docs and the published version with canned queries. (submitted a tiny doc fix also) I was able to build the docker container locally using `master` and I'm using that for now. Would it be possible to manually push 0.24 to DockerHub until the TravisCI stuff is fixed? I would like to run this in our Kubernetes cluster but don't want to publish a version in our internal registry if I don't have to. Thanks!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",336465018,Travis should push tagged images to Docker Hub for each release, https://github.com/simonw/datasette/issues/369#issuecomment-435768450,https://api.github.com/repos/simonw/datasette/issues/369,435768450,MDEyOklzc3VlQ29tbWVudDQzNTc2ODQ1MA==,416374,gfrmin,2018-11-05T06:31:59Z,2018-11-05T06:31:59Z,CONTRIBUTOR,"That would be ideal, but you know better than me whether the CSV streaming trick works for custom SQL queries.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",374953006,Interface should show same JSON shape options for custom SQL queries, https://github.com/simonw/datasette/issues/366#issuecomment-429737929,https://api.github.com/repos/simonw/datasette/issues/366,429737929,MDEyOklzc3VlQ29tbWVudDQyOTczNzkyOQ==,416374,gfrmin,2018-10-15T07:32:57Z,2018-10-15T07:32:57Z,CONTRIBUTOR,"Very hacky solution is to write now.json file forcing the usage of v1 of Zeit cloud, see https://github.com/slygent/datasette/commit/3ab824793ec6534b6dd87078aa46b11c4fa78ea3 This does work, at least.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",369716228,Default built image size over Zeit Now 100MiB limit, https://github.com/dogsheep/twitter-to-sqlite/issues/50#issuecomment-690860653,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/50,690860653,MDEyOklzc3VlQ29tbWVudDY5MDg2MDY1Mw==,370930,mikepqr,2020-09-11T04:04:08Z,2020-09-11T04:04:08Z,CONTRIBUTOR,"There's probably a nicer way of doing (hence this is a comment rather than a PR), but this appears to fix it: ```diff --- a/twitter_to_sqlite/utils.py +++ b/twitter_to_sqlite/utils.py @@ -181,6 +181,7 @@ def fetch_timeline( args[""tweet_mode""] = ""extended"" min_seen_id = None num_rate_limit_errors = 0 + seen_count = 0 while True: if min_seen_id is not None: args[""max_id""] = min_seen_id - 1 @@ -208,6 +209,7 @@ def fetch_timeline( yield tweet min_seen_id = min(t[""id""] for t in tweets) max_seen_id = max(t[""id""] for t in tweets) + seen_count += len(tweets) if last_since_id is not None: max_seen_id = max((last_since_id, max_seen_id)) last_since_id = max_seen_id @@ -217,7 +219,9 @@ def fetch_timeline( replace=True, ) if stop_after is not None: - break + if seen_count >= stop_after: + break + args[""count""] = min(args[""count""], stop_after - seen_count) time.sleep(sleep) ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",698791218,"favorites --stop_after=N stops after min(N, 200)", https://github.com/simonw/datasette/pull/1296#issuecomment-819467759,https://api.github.com/repos/simonw/datasette/issues/1296,819467759,MDEyOklzc3VlQ29tbWVudDgxOTQ2Nzc1OQ==,295329,camallen,2021-04-14T12:07:37Z,2021-04-14T12:11:36Z,CONTRIBUTOR,"> Removing /var/lib/apt and /var/lib/dpkg makes apt and dpkg unusable in images based on this one. Running `apt-get clean` and removing /var/lib/apt/lists achieves similar size savings. this PR helps me as removing the /var/lib/apt and /var/lib/dpkg directories breaks my ability to add packages when using `datasetteproject/datasette:0.56` as a base image. ---- Shorterm workaround for me was to use this in my Dockerfile ``` FROM datasetteproject/datasette:0.56 RUN mkdir -p /var/lib/apt RUN mkdir -p /var/lib/dpkg RUN mkdir -p /var/lib/dpkg/updates RUN mkdir -p /var/lib/dpkg/info RUN touch /var/lib/dpkg/status RUN apt-get update # and install your packages etc ``` ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",855446829,Dockerfile: use Ubuntu 20.10 as base, https://github.com/simonw/datasette/pull/1229#issuecomment-782053455,https://api.github.com/repos/simonw/datasette/issues/1229,782053455,MDEyOklzc3VlQ29tbWVudDc4MjA1MzQ1NQ==,295329,camallen,2021-02-19T12:47:19Z,2021-02-19T12:47:19Z,CONTRIBUTOR,I believe this pr and #1031 are related and fix the same issue.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",810507413,ensure immutable databses when starting in configuration directory mode with, https://github.com/simonw/datasette/issues/57#issuecomment-344151223,https://api.github.com/repos/simonw/datasette/issues/57,344151223,MDEyOklzc3VlQ29tbWVudDM0NDE1MTIyMw==,247192,macropin,2017-11-14T05:32:28Z,2017-11-14T05:33:03Z,CONTRIBUTOR,"The pattern is called ""multi-stage builds"". And the result is a svelte 226MB image (201MB for 3.6-slim) vs 700MB+ for the full image. It's possible to get it even smaller, but that takes a lot more work.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",273127694,Ship a Docker image of the whole thing, https://github.com/simonw/datasette/issues/57#issuecomment-344147583,https://api.github.com/repos/simonw/datasette/issues/57,344147583,MDEyOklzc3VlQ29tbWVudDM0NDE0NzU4Mw==,247192,macropin,2017-11-14T05:03:47Z,2017-11-14T05:03:47Z,CONTRIBUTOR,"Let me know if you'd like a PR. The image is usable as `docker run --rm -t -i -p 9000:8001 -v $(pwd)/db:/db datasette datasette serve /db/chinook.db`","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",273127694,Ship a Docker image of the whole thing, https://github.com/simonw/datasette/issues/57#issuecomment-344145265,https://api.github.com/repos/simonw/datasette/issues/57,344145265,MDEyOklzc3VlQ29tbWVudDM0NDE0NTI2NQ==,247192,macropin,2017-11-14T04:45:38Z,2017-11-14T04:45:38Z,CONTRIBUTOR,"I'm happy to contribute this. Just let me know if you want a Dockerfile for development or production purposes, or both. If it's prod then we can just pip install the source from pypi, otherwise for dev we'll need a `requirements.txt` to speed up rebuilds.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",273127694,Ship a Docker image of the whole thing, https://github.com/simonw/datasette/issues/1238#issuecomment-789186458,https://api.github.com/repos/simonw/datasette/issues/1238,789186458,MDEyOklzc3VlQ29tbWVudDc4OTE4NjQ1OA==,198537,rgieseke,2021-03-02T20:19:30Z,2021-03-02T20:19:30Z,CONTRIBUTOR,A custom `templates/index.html` seems to work and custom `pages` as a workaround with moving them to `pages/base_url_dir`.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813899472,Custom pages don't work with base_url setting, https://github.com/simonw/datasette/pull/279#issuecomment-391077700,https://api.github.com/repos/simonw/datasette/issues/279,391077700,MDEyOklzc3VlQ29tbWVudDM5MTA3NzcwMA==,198537,rgieseke,2018-05-22T17:38:17Z,2018-05-22T17:38:17Z,CONTRIBUTOR,"Alright, that should work now -- let me know if you would prefer any different behaviour.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",325352370,Add version number support with Versioneer, https://github.com/simonw/datasette/pull/279#issuecomment-391073267,https://api.github.com/repos/simonw/datasette/issues/279,391073267,MDEyOklzc3VlQ29tbWVudDM5MTA3MzI2Nw==,198537,rgieseke,2018-05-22T17:24:16Z,2018-05-22T17:24:16Z,CONTRIBUTOR,"Sorry, just realised you rely on `version` being a module ...","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",325352370,Add version number support with Versioneer, https://github.com/simonw/datasette/pull/279#issuecomment-391073009,https://api.github.com/repos/simonw/datasette/issues/279,391073009,MDEyOklzc3VlQ29tbWVudDM5MTA3MzAwOQ==,198537,rgieseke,2018-05-22T17:23:26Z,2018-05-22T17:23:26Z,CONTRIBUTOR,"> I think I prefer the aesthetics of just ""0.22"" for the version string if it's a tagged release with no additional changes - does that work? Yes! That's the default versioneer behaviour. > I'd like to continue to provide a tuple that can be imported from the version.py module as well, as seen here: Should work now, it can be a two (for a tagged version), three or four items tuple. ``` In [2]: datasette.__version__ Out[2]: '0.12+292.ga70c2a8.dirty' In [3]: datasette.__version_info__ Out[3]: ('0', '12+292', 'ga70c2a8', 'dirty') ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",325352370,Add version number support with Versioneer, https://github.com/simonw/datasette/issues/273#issuecomment-390250253,https://api.github.com/repos/simonw/datasette/issues/273,390250253,MDEyOklzc3VlQ29tbWVudDM5MDI1MDI1Mw==,198537,rgieseke,2018-05-18T15:49:52Z,2018-05-18T15:49:52Z,CONTRIBUTOR,"Shouldn't [versioneer](https://github.com/warner/python-versioneer) do that? E.g. 0.21+2.g1076c97 You'd need to install via `pip install git+https://github.com/simow/datasette.git` though, this does a temp git clone.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",324451322,Figure out a way to have /-/version return current git commit hash, https://github.com/simonw/datasette/issues/27#issuecomment-345652450,https://api.github.com/repos/simonw/datasette/issues/27,345652450,MDEyOklzc3VlQ29tbWVudDM0NTY1MjQ1MA==,198537,rgieseke,2017-11-20T10:19:39Z,2017-11-20T10:19:39Z,CONTRIBUTOR,"If Data Package metadata gets adopted (#105) the views spec work might also be worth a look: http://frictionlessdata.io/specs/views/ http://datahub.io/docs/features/views ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",267886330,Ability to plot a simple graph, https://github.com/simonw/datasette/issues/105#issuecomment-345503897,https://api.github.com/repos/simonw/datasette/issues/105,345503897,MDEyOklzc3VlQ29tbWVudDM0NTUwMzg5Nw==,198537,rgieseke,2017-11-19T09:38:08Z,2017-11-19T09:38:08Z,CONTRIBUTOR,"Thanks, I wrote this very simple reader because the default approach as described on the Datahub pages seemed to complicated. I had metadata from the `datapackage.json` attached to the returned DataFrames but removed this due to some attribute handling change in the latest Pandas version. This could also be useful for getting from Data Package to SQL db: https://github.com/frictionlessdata/tableschema-sql-py I maintain a few climate science related dataset at https://github.com/openclimatedata/ The Data Retriever (mainly ecological data) by @ethanwhite et al. is also using the Data Package format for metadata and has some tooling for different dbs: https://frictionlessdata.io/articles/the-data-retriever/ https://github.com/weecology/retriever The Open Power System Data project also has a couple of datasets that show nicely how CSV is great for assembling and then already make SQLite files available. It's one of the first data sets I tried with Datasette, perfect for the use case of getting an API for putting power stations on a map ... https://data.open-power-system-data.org/","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",274314940,Consider data-package as a format for metadata, https://github.com/simonw/datasette/pull/2052#issuecomment-1548617257,https://api.github.com/repos/simonw/datasette/issues/2052,1548617257,IC_kwDOBm6k_c5cTgYp,193185,cldellow,2023-05-15T21:32:20Z,2023-05-15T21:32:20Z,CONTRIBUTOR,"> Were you picturing that the whole plugin config object could be returned as a promise, or that the individual hooks (like makeColumnActions or makeAboveTablePanelConfigs supported returning a promise of arrays instead only returning plain arrays? The latter - that you could return a promise of arrays, so it parallels the [""await me maybe"" pattern in Datasette](https://simonwillison.net/2020/Sep/2/await-me-maybe/), where you can return either a value, a callable or an awaitable. > I have a hunch that what you're describing might be achievable without adding Promises to the API with something Oops, I did a poor job explaining. Yes, this would work - but it requires me to continue to communicate the column names out of band (in order to fetch the facet data per-column before registering my plugin), vs being able to re-use them from the plugin implementation. This isn't that big of a deal - it'd be a nice ergonomic improvement, but nowhere near as a big of an improvement as having an officially sanctioned way to add stuff to the column menus in the first place. This could also be layered on in a future commit without breaking v1 users, too, so it's not at all urgent. > especially if those lines are encapsulated by a function we provide (maybe something that's available on the window provided by Datasette as an inline script tag Ah, this is maybe the the key point. Since it's all hosted inside Datasette, Datasette can provide some arbitrary sugar to make it easier to work with. My experience with async scripts in JS is that people sometimes don't understand the race conditions inherent to them. If they copy/paste from a tutorial, it does just work. But then they'll delete half the code, and by chance it still works on their machine/Datasette templates, and now someone's headed for an annoying debugging session -- maybe them, maybe someone else who tries to re-use their plugin. Again, a fairly minor thing, though.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1651082214,"feat: Javascript Plugin API (Custom panels, column menu items with JS actions)", https://github.com/simonw/datasette/pull/2052#issuecomment-1530822437,https://api.github.com/repos/simonw/datasette/issues/2052,1530822437,IC_kwDOBm6k_c5bPn8l,193185,cldellow,2023-05-02T03:35:30Z,2023-05-02T16:02:38Z,CONTRIBUTOR,"Also, just checking - is this how I'd write bulletproof plugin registration code that is robust against the order in which the script tags load (eg if both my code and the Datasette code are loaded via a `