html_url,issue_url,id,node_id,user,user_label,created_at,updated_at,author_association,body,reactions,issue,issue_label,performed_via_github_app https://github.com/dogsheep/apple-notes-to-sqlite/issues/6#issuecomment-1493442956,https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/6,1493442956,IC_kwDOJHON9s5ZBCGM,14314871,amlestin,2023-04-02T21:20:43Z,2023-04-02T21:25:37Z,NONE,"I'm experiencing something similar. My apostrophes (') turn into (‚Äô) and the output is truncated. Hoping to debug next weekend ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1617602868,Character encoding problem, https://github.com/dogsheep/apple-notes-to-sqlite/issues/6#issuecomment-1508784533,https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/6,1508784533,IC_kwDOJHON9s5Z7jmV,579727,sirnacnud,2023-04-14T15:22:09Z,2023-04-14T15:22:09Z,NONE,"Just changing the encoding in `extract_notes` to `utf8` seems to fix it for my titles that were messed up. ![Screen Shot 2023-04-14 at 5 14 18 PM](https://user-images.githubusercontent.com/579727/232086062-e7edc4d1-0880-417a-925b-fd6c65b05155.png) ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1617602868,Character encoding problem, https://github.com/dogsheep/apple-notes-to-sqlite/issues/8#issuecomment-1468898285,https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/8,1468898285,IC_kwDOJHON9s5XjZvt,41546558,RhetTbull,2023-03-14T22:00:21Z,2023-03-14T22:00:21Z,NONE,"Well that's embarrassing. I made a fork using macnotesapp and it's actually slower. This is because the Scripting Bridge sometimes fails to return the folder and thus macnotesapp resorts to AppleScript in this situation. The repeated AppleScript calls on a large library are slower than your ""slurp it all in"" approach. I've got some ideas about how to improve this--will make another attempt if I can fix the issues.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1617823309,Increase performance using macnotesapp, https://github.com/dogsheep/dogsheep-photos/issues/15#issuecomment-748436115,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/15,748436115,MDEyOklzc3VlQ29tbWVudDc0ODQzNjExNQ==,8573886,nickvazz,2020-12-19T07:43:38Z,2020-12-19T07:47:36Z,NONE,"Hey Simon! I really enjoy datasette so far, just started trying it out today following your iPhone photos [example](https://simonwillison.net/2020/May/21/dogsheep-photos/). I am not sure if you had run into this or not, but it seems like they might have changed one of the column names from `ZGENERICASSET` to `ZASSET`. Should I open a PR? Would change: - [here](https://github.com/dogsheep/dogsheep-photos/blob/master/dogsheep_photos/cli.py#L209-L213) - [here](https://github.com/dogsheep/dogsheep-photos/blob/master/dogsheep_photos/cli.py#L238) - [here](https://github.com/dogsheep/dogsheep-photos/blob/master/dogsheep_photos/cli.py#L240)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",612151767,Expose scores from ZCOMPUTEDASSETATTRIBUTES, https://github.com/dogsheep/dogsheep-photos/issues/20#issuecomment-633234781,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/20,633234781,MDEyOklzc3VlQ29tbWVudDYzMzIzNDc4MQ==,41439,dmd,2020-05-24T13:56:13Z,2020-05-24T13:56:13Z,NONE,"As that seems to be closed, can you give a hint on how to make this work?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",613006393,Ability to serve thumbnailed Apple Photo from its place on disk, https://github.com/dogsheep/dogsheep-photos/issues/21#issuecomment-748436195,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/21,748436195,MDEyOklzc3VlQ29tbWVudDc0ODQzNjE5NQ==,8573886,nickvazz,2020-12-19T07:44:32Z,2020-12-19T07:44:49Z,NONE,"I have also run into this a bit, would it be possible to post your `requirements.txt` so I can try and reproduce your [blog post](https://simonwillison.net/2020/May/21/dogsheep-photos/)?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",615474990,bpylist.archiver.CircularReference: archive has a cycle with uid(13), https://github.com/dogsheep/dogsheep-photos/issues/28#issuecomment-751125270,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/28,751125270,MDEyOklzc3VlQ29tbWVudDc1MTEyNTI3MA==,129786,jmelloy,2020-12-24T22:26:22Z,2020-12-24T22:26:22Z,NONE,This comes around if you’ve run the photo export without running an s3 upload. ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",624490929,Invalid SQL no such table: main.uploads, https://github.com/dogsheep/dogsheep-photos/issues/3#issuecomment-934207940,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/3,934207940,IC_kwDOD079W843ruHE,1751612,jratike80,2021-10-05T08:57:41Z,2021-10-05T08:57:41Z,NONE,"Maybe the exif-loader from the SpatiaLite project could be useful as a reference even it is written in C and it also saves images as blobs https://www.gaia-gis.it/fossil/spatialite-tools/file?name=exif_loader.c&ci=tip. The tool is also integrated into the spatialite-gui application. I found some user documentation from the web archive http://web.archive.org/web/20180629041238/https://www.gaia-gis.it/spatialite-2.3.1/spatialite-exif-2.3.1.html.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",602533481,"Import EXIF data into SQLite - lens used, ISO, aperture etc", https://github.com/dogsheep/dogsheep-photos/issues/32#issuecomment-791053721,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/32,791053721,MDEyOklzc3VlQ29tbWVudDc5MTA1MzcyMQ==,6213,dsisnero,2021-03-05T00:31:27Z,2021-03-05T00:31:27Z,NONE,I am getting the same thing for US West (N. California) us-west-1,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",803333769,KeyError: 'Contents' on running upload, https://github.com/dogsheep/dogsheep-photos/issues/32#issuecomment-882091516,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/32,882091516,IC_kwDOD079W840k6X8,10793464,aaronyih1,2021-07-18T17:29:39Z,2021-07-18T17:33:02Z,NONE,Same here for US West (N. California) us-west-1. Running on Catalina.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",803333769,KeyError: 'Contents' on running upload, https://github.com/dogsheep/dogsheep-photos/issues/32#issuecomment-884688833,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/32,884688833,IC_kwDOD079W840u0fB,10793464,aaronyih1,2021-07-22T06:40:25Z,2021-07-22T06:40:25Z,NONE,The solution here is to upload an image to the bucket first. It is caused because it does not properly handle the case when there are no images in the bucket.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",803333769,KeyError: 'Contents' on running upload, https://github.com/dogsheep/dogsheep-photos/issues/33#issuecomment-777951854,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/33,777951854,MDEyOklzc3VlQ29tbWVudDc3Nzk1MTg1NA==,675335,leafgarland,2021-02-12T03:54:39Z,2021-02-12T03:54:39Z,NONE,"I think that is a typo in the docs, you can use > dogsheep-photos apple-photos photos.db","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",803338729,photo-to-sqlite: command not found, https://github.com/dogsheep/dogsheep-photos/issues/33#issuecomment-778002092,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/33,778002092,MDEyOklzc3VlQ29tbWVudDc3ODAwMjA5Mg==,11855322,robmarkcole,2021-02-12T06:19:32Z,2021-02-12T06:19:32Z,NONE,"hi @leafgarland that results in a new error: ``` (venv) (base) Robins-MacBook:datasette robin$ dogsheep-photos apple-photos photos.db Traceback (most recent call last): File ""/Users/robin/datasette/venv/bin/dogsheep-photos"", line 8, in sys.exit(cli()) File ""/Users/robin/datasette/venv/lib/python3.8/site-packages/click/core.py"", line 829, in __call__ return self.main(*args, **kwargs) File ""/Users/robin/datasette/venv/lib/python3.8/site-packages/click/core.py"", line 782, in main rv = self.invoke(ctx) File ""/Users/robin/datasette/venv/lib/python3.8/site-packages/click/core.py"", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File ""/Users/robin/datasette/venv/lib/python3.8/site-packages/click/core.py"", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File ""/Users/robin/datasette/venv/lib/python3.8/site-packages/click/core.py"", line 610, in invoke return callback(*args, **kwargs) File ""/Users/robin/datasette/venv/lib/python3.8/site-packages/dogsheep_photos/cli.py"", line 206, in apple_photos db.conn.execute( sqlite3.OperationalError: no such table: attached.ZGENERICASSET ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",803338729,photo-to-sqlite: command not found, https://github.com/dogsheep/dogsheep-photos/issues/33#issuecomment-778014990,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/33,778014990,MDEyOklzc3VlQ29tbWVudDc3ODAxNDk5MA==,675335,leafgarland,2021-02-12T06:54:14Z,2021-02-12T06:54:14Z,NONE,"Ahh, that might be because macOS Big Sur has changed the structure of the photos db. Might need to wait for a later release, there is a PR which adds support for Big Sur. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",803338729,photo-to-sqlite: command not found, https://github.com/dogsheep/dogsheep-photos/issues/35#issuecomment-813249000,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/35,813249000,MDEyOklzc3VlQ29tbWVudDgxMzI0OTAwMA==,1151557,ligurio,2021-04-05T07:37:57Z,2021-04-05T07:37:57Z,NONE,"There are trained ML models used in Photoprism: - https://dl.photoprism.org/tensorflow/nasnet.zip - https://dl.photoprism.org/tensorflow/nsfw.zip","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",842695374,Support to annotate photos on other than macOS OSes, https://github.com/dogsheep/dogsheep-photos/issues/7#issuecomment-906015471,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/7,906015471,IC_kwDOD079W842ALLv,18232,dkam,2021-08-26T02:01:01Z,2021-08-26T02:01:01Z,NONE,Perceptual hashes might be what you're after : http://phash.org,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",602585497,Integrate image content hashing, https://github.com/dogsheep/dogsheep-photos/pull/31#issuecomment-1035717429,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/31,1035717429,IC_kwDOD079W849u8s1,18504,harperreed,2022-02-11T01:55:38Z,2022-02-11T01:55:38Z,NONE,I would love this merged! ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771511344,Update for Big Sur, https://github.com/dogsheep/dogsheep-photos/pull/31#issuecomment-1190995982,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/31,1190995982,IC_kwDOD079W85G_SgO,19231792,jakewilkins,2022-07-21T03:26:38Z,2023-04-14T22:41:31Z,NONE,"👋 Any update on getting this merged? Alternatively, is there a work around for this issue to unblock myself? edit to add: huge fan of both this project and `osxphotos`, thanks so much for your work here 🙏 If I had any experience with Python I would offer to help but somehow I've managed to not write any Python in 10+ years of programming 😅 Edit again to add: > Alternatively, is there a work around for this issue to unblock myself? Yes, there is. I was able to apply the patch of this PR and it applies (mostly) cleanly and works. - verified I have a high enough version of `osxphotos` - downloaded the .patch of this (by appending `.patch` to the URL) - edited the patch to remove the `setup.py` changes - `cd` to the directory containing `dogsheep-photos` and `git apply 31.patch` ","{""total_count"": 3, ""+1"": 3, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771511344,Update for Big Sur, https://github.com/dogsheep/dogsheep-photos/pull/31#issuecomment-1382655354,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/31,1382655354,IC_kwDOD079W85SaaV6,2704860,fidiego,2023-01-14T04:08:36Z,2023-01-14T04:08:36Z,NONE,"I just tried this branch and saw some errors. I installed this PR locally with: ```bash pip install https://github.com/RhetTbull/dogsheep-photos/archive/update_for_bigsur.zip ```
System Details **OS:** MacOS Monterey **Python Version:** Python 3.10.8
Stacktrace ```python Traceback (most recent call last): File ""/Users/df/.venvs/photo-experiments/bin/dogsheep-photos"", line 8, in sys.exit(cli()) File ""/Users/df/.venvs/photo-experiments/lib/python3.10/site-packages/click/core.py"", line 1130, in __call__ return self.main(*args, **kwargs) File ""/Users/df/.venvs/photo-experiments/lib/python3.10/site-packages/click/core.py"", line 1055, in main rv = self.invoke(ctx) File ""/Users/df/.venvs/photo-experiments/lib/python3.10/site-packages/click/core.py"", line 1657, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File ""/Users/df/.venvs/photo-experiments/lib/python3.10/site-packages/click/core.py"", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File ""/Users/df/.venvs/photo-experiments/lib/python3.10/site-packages/click/core.py"", line 760, in invoke return __callback(*args, **kwargs) File ""/Users/df/.venvs/photo-experiments/lib/python3.10/site-packages/dogsheep_photos/cli.py"", line 254, in apple_photos sha256 = calculate_hash(pathlib.Path(photo.path)) File ""/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/pathlib.py"", line 960, in __new__ self = cls._from_parts(args) File ""/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/pathlib.py"", line 594, in _from_parts drv, root, parts = self._parse_args(args) File ""/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/pathlib.py"", line 578, in _parse_args a = os.fspath(a) TypeError: expected str, bytes or os.PathLike object, not NoneType ```
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771511344,Update for Big Sur, https://github.com/dogsheep/dogsheep-photos/pull/31#issuecomment-811362316,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/31,811362316,MDEyOklzc3VlQ29tbWVudDgxMTM2MjMxNg==,871250,PabloLerma,2021-03-31T19:14:39Z,2021-03-31T19:14:39Z,NONE,👋 could I help somehow for this to be merged? As Big Sur is going to be more used as the time goes I think it would be nice to merge and publish a new version. Nice work!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771511344,Update for Big Sur, https://github.com/dogsheep/dogsheep-photos/pull/36#issuecomment-1006708046,https://api.github.com/repos/dogsheep/dogsheep-photos/issues/36,1006708046,IC_kwDOD079W848ASVO,71983,scoates,2022-01-06T16:04:46Z,2022-01-06T16:04:46Z,NONE,"This one got me, today, too. 👍","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",988493790,Correct naming of tool in readme, https://github.com/dogsheep/dogsheep.github.io/pull/6#issuecomment-1021264135,https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/6,1021264135,IC_kwDODMzF1s4830EH,1151557,ligurio,2022-01-25T14:52:40Z,2022-01-25T14:52:40Z,NONE,"@simonw, could you review?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",842765105,Add testres-db tool, https://github.com/dogsheep/evernote-to-sqlite/issues/11#issuecomment-777690332,https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/11,777690332,MDEyOklzc3VlQ29tbWVudDc3NzY5MDMzMg==,3613583,dskrad,2021-02-11T18:16:01Z,2021-02-11T18:16:01Z,NONE,"I solved this issue by modifying line 31 of utils.py content = ET.tostring(ET.fromstring(content_xml.strip())).decode(""utf-8"")","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",792851444,XML parse error, https://github.com/dogsheep/evernote-to-sqlite/issues/14#issuecomment-911772943,https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/14,911772943,IC_kwDOEhK-wc42WI0P,46968,step21,2021-09-02T14:53:11Z,2021-09-02T14:53:11Z,NONE,"Additionally, assuming the line numbers match up with the provided enenx file, the mentioned line plus one before and after is as follows: ``` ]]>

```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",986829194,xml.etree.ElementTree.Parse Error - mismatched tag, https://github.com/dogsheep/genome-to-sqlite/issues/1#issuecomment-765495861,https://api.github.com/repos/dogsheep/genome-to-sqlite/issues/1,765495861,MDEyOklzc3VlQ29tbWVudDc2NTQ5NTg2MQ==,25372415,cobiadigital,2021-01-22T15:44:00Z,2021-01-22T15:44:00Z,NONE,"Risk of autoimmune disorders: https://www.snpedia.com/index.php/Genotype ``` select rsid, genotype, case genotype when 'AA' then '2x risk of rheumatoid arthritis and other autoimmune diseases' when 'GG' then 'Normal risk for autoimmune disorders' end as interpretation from genome where rsid = 'rs2476601' ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",496415321,Figure out some interesting example SQL queries, https://github.com/dogsheep/genome-to-sqlite/issues/1#issuecomment-765498984,https://api.github.com/repos/dogsheep/genome-to-sqlite/issues/1,765498984,MDEyOklzc3VlQ29tbWVudDc2NTQ5ODk4NA==,25372415,cobiadigital,2021-01-22T15:48:25Z,2021-01-22T15:49:33Z,NONE,"The ""Warrior Gene"" https://www.snpedia.com/index.php/Rs4680 ``` select rsid, genotype, case genotype when 'AA' then '(worrier) advantage in memory and attention tasks' when 'AG' then 'Intermediate dopamine levels, other effects' when 'GG' then '(warrior) multiple associations, see details' end as interpretation from genome where rsid = 'rs4680' ``` ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",496415321,Figure out some interesting example SQL queries, https://github.com/dogsheep/genome-to-sqlite/issues/1#issuecomment-765502845,https://api.github.com/repos/dogsheep/genome-to-sqlite/issues/1,765502845,MDEyOklzc3VlQ29tbWVudDc2NTUwMjg0NQ==,25372415,cobiadigital,2021-01-22T15:53:19Z,2021-01-22T15:53:19Z,NONE,"rs7903146 Influences risk of Type-2 diabetes https://www.snpedia.com/index.php/Rs7903146 ``` select rsid, genotype, case genotype when 'CC' then 'Normal (lower) risk of Type 2 Diabetes and Gestational Diabetes.' when 'CT' then '1.4x increased risk for diabetes (and perhaps colon cancer).' when 'TT' then '2x increased risk for Type-2 diabetes' end as interpretation from genome where rsid = 'rs7903146' ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",496415321,Figure out some interesting example SQL queries, https://github.com/dogsheep/genome-to-sqlite/issues/1#issuecomment-765506901,https://api.github.com/repos/dogsheep/genome-to-sqlite/issues/1,765506901,MDEyOklzc3VlQ29tbWVudDc2NTUwNjkwMQ==,25372415,cobiadigital,2021-01-22T15:58:41Z,2021-01-22T15:58:58Z,NONE,"Both rs10757274 and rs2383206 can both indicate higher risks of heart disease https://www.snpedia.com/index.php/Rs2383206 ``` select rsid, genotype, case genotype when 'AA' then 'Normal' when 'AG' then '~1.2x increased risk for heart disease' when 'GG' then '~1.3x increased risk for heart disease' end as interpretation from genome where rsid = 'rs10757274' ``` ``` select rsid, genotype, case genotype when 'AA' then 'Normal' when 'AG' then '1.4x increased risk for heart disease' when 'GG' then '1.7x increased risk for heart disease' end as interpretation from genome where rsid = 'rs2383206' ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",496415321,Figure out some interesting example SQL queries, https://github.com/dogsheep/genome-to-sqlite/issues/1#issuecomment-765523517,https://api.github.com/repos/dogsheep/genome-to-sqlite/issues/1,765523517,MDEyOklzc3VlQ29tbWVudDc2NTUyMzUxNw==,25372415,cobiadigital,2021-01-22T16:20:25Z,2021-01-22T16:20:25Z,NONE,"rs53576: the oxytocin receptor (OXTR) gene ``` select rsid, genotype, case genotype when 'AA' then 'Lack of empathy?' when 'AG' then 'Lack of empathy?' when 'GG' then 'Optimistic and empathetic; handle stress well' end as interpretation from genome where rsid = 'rs53576' ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",496415321,Figure out some interesting example SQL queries, https://github.com/dogsheep/genome-to-sqlite/issues/1#issuecomment-765525338,https://api.github.com/repos/dogsheep/genome-to-sqlite/issues/1,765525338,MDEyOklzc3VlQ29tbWVudDc2NTUyNTMzOA==,25372415,cobiadigital,2021-01-22T16:22:44Z,2021-01-22T16:22:44Z,NONE,"rs1333049 associated with coronary artery disease https://www.snpedia.com/index.php/Rs1333049 ``` select rsid, genotype, case genotype when 'CC' then '1.9x increased risk for coronary artery disease' when 'CG' then '1.5x increased risk for CAD' when 'GG' then 'normal' end as interpretation from genome where rsid = 'rs1333049' ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",496415321,Figure out some interesting example SQL queries, https://github.com/dogsheep/genome-to-sqlite/issues/1#issuecomment-831004775,https://api.github.com/repos/dogsheep/genome-to-sqlite/issues/1,831004775,MDEyOklzc3VlQ29tbWVudDgzMTAwNDc3NQ==,25372415,cobiadigital,2021-05-03T03:46:23Z,2021-05-03T03:46:23Z,NONE,"RS1800955 is related to novelty seeking and ADHD https://www.snpedia.com/index.php/Rs1800955 `select rsid, genotype, case genotype when 'CC' then 'increased susceptibility to novelty seeking' when 'CT' then 'increased susceptibility to novelty seeking' when 'TT' then 'normal' end as interpretation from genome where rsid = 'rs1800955'`","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",496415321,Figure out some interesting example SQL queries, https://github.com/dogsheep/github-to-sqlite/issues/15#issuecomment-605439685,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/15,605439685,MDEyOklzc3VlQ29tbWVudDYwNTQzOTY4NQ==,2029,garethr,2020-03-28T12:17:01Z,2020-03-28T12:17:01Z,NONE,"That looks great, thanks!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",544571092,Assets table with downloads, https://github.com/dogsheep/github-to-sqlite/issues/16#issuecomment-571412923,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/16,571412923,MDEyOklzc3VlQ29tbWVudDU3MTQxMjkyMw==,15092,jayvdb,2020-01-07T03:06:46Z,2020-01-07T03:06:46Z,NONE,"I re-tried after doing `auth`, and I get the same result.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",546051181,Exception running first command: IndexError: list index out of range, https://github.com/dogsheep/github-to-sqlite/issues/16#issuecomment-602136481,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/16,602136481,MDEyOklzc3VlQ29tbWVudDYwMjEzNjQ4MQ==,15092,jayvdb,2020-03-22T02:08:57Z,2020-03-22T02:08:57Z,NONE,"I'd love to be using your library as a better cached gh layer for a new library I have built, replacing large parts of the very ugly https://github.com/jayvdb/pypidb/blob/master/pypidb/_github.py , and then probably being able to rebuild the setuppy chunk as a feature here at a later stage. I would also need tokenless and netrc support, but I would be happy to add those bits.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",546051181,Exception running first command: IndexError: list index out of range, https://github.com/dogsheep/github-to-sqlite/issues/33#issuecomment-622279374,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/33,622279374,MDEyOklzc3VlQ29tbWVudDYyMjI3OTM3NA==,2029,garethr,2020-05-01T07:12:47Z,2020-05-01T07:12:47Z,NONE,"I also go it working with: ```yaml run: echo ${{ secrets.github_token }} | github-to-sqlite auth ```","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",609950090,Fall back to authentication via ENV, https://github.com/dogsheep/github-to-sqlite/issues/38#issuecomment-623038148,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/38,623038148,MDEyOklzc3VlQ29tbWVudDYyMzAzODE0OA==,5779832,zzeleznick,2020-05-03T01:18:57Z,2020-05-03T01:18:57Z,NONE,"Thanks, @simonw! I feel a little foolish in hindsight, but I'm on the same page now and am glad to have discovered first-hand a motivation for this `repos_starred` use case.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",611284481,[Feature Request] Support Repo Name in Search 🥺, https://github.com/dogsheep/github-to-sqlite/issues/38#issuecomment-623044643,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/38,623044643,MDEyOklzc3VlQ29tbWVudDYyMzA0NDY0Mw==,5779832,zzeleznick,2020-05-03T02:34:32Z,2020-05-03T02:34:32Z,NONE,"1. More than glad to share feedback from the sidelines as a [starrer](https://github-to-sqlite.dogsheep.net/github?sql=select%0D%0A++starred_at%2C%0D%0A++starred_by%2C%0D%0A++full_name+as+repo_name%0D%0Afrom%0D%0A++repos_starred%0D%0Awhere%0D%0A++starred_by+%3D+%22zzeleznick%22%0D%0Aorder+by%0D%0A++starred_at+desc). ``` -- Motivation: -- Datasette is a data hammer and I'm looking for nails -- e.g. Find which repos a user has starred => trigger a TBD downstream action select starred_at, starred_by, full_name as repo_name from repos_starred where starred_by = ""zzeleznick"" order by starred_at desc ``` | starred_at | starred_by | repo_name | | --- | --- | --- | | 2020-02-11T01:08:59Z | zzeleznick | dogsheep/twitter-to-sqlite | | 2020-01-11T21:57:34Z | zzeleznick | simonw/datasette | 2. In my day job, I use [airflow](https://github.com/apache/airflow), and that's the mental model I'm bringing to [datasette](https://github.com/simonw/datasette). 3. I see your project like [twitter-to-sqlite](https://github.com/dogsheep/twitter-to-sqlite) akin to [Operators](https://airflow.apache.org/docs/stable/_api/index.html#pythonapi-operators) in Airflow world.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",611284481,[Feature Request] Support Repo Name in Search 🥺, https://github.com/dogsheep/github-to-sqlite/issues/46#issuecomment-1359468823,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/46,1359468823,IC_kwDODFdgUs5RB9kX,1839645,choldgraf,2022-12-20T14:39:39Z,2022-12-20T14:40:15Z,NONE,"Just a quick +1 to this one from me - I would like to do a better job of tracking who is reviewing one another's pull requests in repositories, since this is a specific kind of maintenance work that I think often goes unrewarded. I can't seem to figure this out just by looking at the `pull_request` or `issue_comments` tables, so I think it would be helpful to support PR reviews natively (even if just for summary statistics). Alternatively if there is a way in the API to tell if an issue comment is part of a review, then perhaps you could quickly calculate the number of unique reviews that an author performed. But that was beyond my SQL-foo :-) ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",664485022,Feature: pull request reviews and comments, https://github.com/dogsheep/github-to-sqlite/issues/51#issuecomment-1208757153,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/51,1208757153,IC_kwDODFdgUs5IDCuh,9020979,hydrosquall,2022-08-09T00:29:44Z,2022-08-09T00:29:44Z,NONE,"I've been looking into how to to get this data out of Github (especially now there are ""secondary rate limits"" without an advertised allowance separate from the regular rate limits. I've had decent success with the Airbyte github extractor (aside from one data quality issue https://github.com/airbytehq/airbyte/pull/15420 ). Airbyte splits data extraction between the GraphQL and REST endpoints depending on the resource type, but they're very comprehensive. https://github.com/airbytehq/airbyte/blob/306a75ef5370728e0912cf52a1a898a530db0c90/airbyte-integrations/connectors/source-github/source_github/streams.py#L22-L122 Before this, I tried a few solutions in my own custom wrapper mentioned in this thread + its children https://github.com/PyGithub/PyGithub/issues/1989 , but they weren't working as expected.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",703246031,github-to-sqlite should handle rate limits better, https://github.com/dogsheep/github-to-sqlite/issues/51#issuecomment-1279224780,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/51,1279224780,IC_kwDODFdgUs5MP2vM,7908073,chapmanjacobd,2022-10-14T16:34:07Z,2022-10-14T16:34:07Z,NONE,"also, it says that authenticated requests have a much higher ""rate limit"". Unauthenticated requests only get 60 req/hour ?? seems more like a quota than a ""rate limit"" (although I guess that is semantic equivalence) You would want to use `x-ratelimit-reset` ``` time.sleep(r['x-ratelimit-reset'] + 1 - time.time()) ``` But a more complete solution would bring authenticated requests to the other subcommands. I'm surprised only `github-to-sqlite get` is using the `--auth=` CLI flag","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",703246031,github-to-sqlite should handle rate limits better, https://github.com/dogsheep/github-to-sqlite/issues/64#issuecomment-860895838,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/64,860895838,MDEyOklzc3VlQ29tbWVudDg2MDg5NTgzOA==,231498,khimaros,2021-06-14T18:23:21Z,2021-06-14T21:37:35Z,NONE,"i have a basic working version at https://github.com/khimaros/github-to-sqlite this can be tested with `github-to-sqlite events.db khimaros/events` caveat: the GitHub API doesn't seem to provide a complete history of events.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",920636216,"feature: support ""events""", https://github.com/dogsheep/github-to-sqlite/issues/64#issuecomment-861035862,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/64,861035862,MDEyOklzc3VlQ29tbWVudDg2MTAzNTg2Mg==,231498,khimaros,2021-06-14T22:29:20Z,2021-06-14T22:29:20Z,NONE,"it looks like the v4 GraphQL API is the only way to get data beyond 90 days from GitHub. this is significant change, but may be worth considering in the future.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",920636216,"feature: support ""events""", https://github.com/dogsheep/github-to-sqlite/issues/64#issuecomment-861087651,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/64,861087651,MDEyOklzc3VlQ29tbWVudDg2MTA4NzY1MQ==,231498,khimaros,2021-06-15T00:48:37Z,2021-06-15T00:48:37Z,NONE,"@simonw -- i've created an omega-query that fetched most of what was interesting to me for a single user. found by poking around in the ""Explorer"" tab in https://docs.github.com/en/graphql/overview/explorer note: pagination is still required via `first` and `last` but it seems to allow unlimited history. ``` query MyQuery { __typename user(login: """") { id pinnedItems(first: 100) { edges { node } } pullRequests(first: 100) { nodes { body title state createdAt } } createdAt issues(first: 100) { pageInfo { endCursor startCursor } nodes { title url createdAt body } } issueComments(first: 100) { edges { node { id updatedAt url body } } } repositories(first: 100) { nodes { createdAt description parent { name } pinnedIssues(first: 100) { edges { node { id } } } pinnedDiscussions(first: 100) { edges { node { id } } } } } starredRepositories(first: 100) { edges { node { id } } } } } ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",920636216,"feature: support ""events""", https://github.com/dogsheep/github-to-sqlite/pull/65#issuecomment-1266141699,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/65,1266141699,IC_kwDODFdgUs5Ld8oD,231498,khimaros,2022-10-03T22:35:03Z,2022-10-03T22:35:03Z,NONE,"@simonw rebased against latest, please let me know if i should drop this PR.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",923270900,basic support for events, https://github.com/dogsheep/github-to-sqlite/pull/65#issuecomment-885964242,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/65,885964242,IC_kwDODFdgUs40zr3S,231498,khimaros,2021-07-23T23:45:35Z,2021-07-23T23:45:35Z,NONE,@simonw is this PR of interest to you?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",923270900,basic support for events, https://github.com/dogsheep/github-to-sqlite/pull/66#issuecomment-929651819,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/66,929651819,IC_kwDODFdgUs43aVxr,30531572,sarcasticadmin,2021-09-28T21:50:31Z,2021-09-28T21:50:31Z,NONE,@simonw any feedback/thoughts? ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",975161924,Add --merged-by flag to pull-requests sub command, https://github.com/dogsheep/github-to-sqlite/pull/76#issuecomment-1238190601,https://api.github.com/repos/dogsheep/github-to-sqlite/issues/76,1238190601,IC_kwDODFdgUs5JzUoJ,2757699,OverkillGuy,2022-09-06T13:58:20Z,2022-09-06T13:59:08Z,NONE,"Tested PR just now in private org, fetched >2k repos infos flawlessly! poetry run github-to-sqlite repos --organization github.db MYORG","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1363280254,Add organization support to repos command, https://github.com/dogsheep/google-takeout-to-sqlite/issues/10#issuecomment-1073152522,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/10,1073152522,IC_kwDODFE5qs4_9wIK,9290214,csusanu,2022-03-20T02:38:07Z,2022-03-20T02:38:07Z,NONE,"[This line](https://github.com/dogsheep/google-takeout-to-sqlite/blob/e54e544427f1cc3ea8189f0e95f54046301a8645/google_takeout_to_sqlite/utils.py) needs to say `""MyActivity.json""` instead of `""My Activity.json""`. Google must have changed the file name.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1123393829,sqlite3.OperationalError: no such table: main.my_activity, https://github.com/dogsheep/google-takeout-to-sqlite/issues/2#issuecomment-747130908,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/2,747130908,MDEyOklzc3VlQ29tbWVudDc0NzEzMDkwOA==,231498,khimaros,2020-12-17T00:47:04Z,2020-12-17T00:47:43Z,NONE,"it looks like almost all of the memory consumption is coming from `json.load()`. another direction here may be to use the new ""Semantic Location History"" data which is already broken down by year and month. it also provides much more interesting data, such as estimated address, form of travel, etc.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",769376447,killed by oomkiller on large location-history, https://github.com/dogsheep/google-takeout-to-sqlite/issues/4#issuecomment-780817596,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/4,780817596,MDEyOklzc3VlQ29tbWVudDc4MDgxNzU5Ng==,306240,UtahDave,2021-02-17T20:01:35Z,2021-02-17T20:01:35Z,NONE,I've got this almost working. Just needs some polish,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",778380836,Feature Request: Gmail, https://github.com/dogsheep/google-takeout-to-sqlite/issues/4#issuecomment-781451701,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/4,781451701,MDEyOklzc3VlQ29tbWVudDc4MTQ1MTcwMQ==,203343,Btibert3,2021-02-18T16:06:21Z,2021-02-18T16:06:21Z,NONE,Awesome!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",778380836,Feature Request: Gmail, https://github.com/dogsheep/google-takeout-to-sqlite/issues/4#issuecomment-783688547,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/4,783688547,MDEyOklzc3VlQ29tbWVudDc4MzY4ODU0Nw==,306240,UtahDave,2021-02-22T21:31:28Z,2021-02-22T21:31:28Z,NONE,"@Btibert3 I've opened a PR with my initial attempt at this. Would you be willing to give this a try? https://github.com/dogsheep/google-takeout-to-sqlite/pull/5","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",778380836,Feature Request: Gmail, https://github.com/dogsheep/google-takeout-to-sqlite/issues/4#issuecomment-790198930,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/4,790198930,MDEyOklzc3VlQ29tbWVudDc5MDE5ODkzMA==,203343,Btibert3,2021-03-04T00:58:40Z,2021-03-04T00:58:40Z,NONE,"I am just seeing this sorry, yes! I will kick the tires later on tonight. My apologies for the delay.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",778380836,Feature Request: Gmail, https://github.com/dogsheep/google-takeout-to-sqlite/issues/4#issuecomment-790934616,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/4,790934616,MDEyOklzc3VlQ29tbWVudDc5MDkzNDYxNg==,203343,Btibert3,2021-03-04T20:54:44Z,2021-03-04T20:54:44Z,NONE,"Sorry for the delay, I got sidetracked after class last night. I am getting the following error: ``` /content# google-takeout-to-sqlite mbox takeout.db Takeout/Mail/gmail.mbox Usage: google-takeout-to-sqlite [OPTIONS] COMMAND [ARGS]...Try 'google-takeout-to-sqlite --help' for help. Error: No such command 'mbox'. ``` On the box, I installed with pip after cloning: https://github.com/UtahDave/google-takeout-to-sqlite.git","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",778380836,Feature Request: Gmail, https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-783794520,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,783794520,MDEyOklzc3VlQ29tbWVudDc4Mzc5NDUyMA==,306240,UtahDave,2021-02-23T01:13:54Z,2021-02-23T01:13:54Z,NONE,"Also, @simonw I created a test based off the existing tests. I think it's working correctly","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import, https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-784638394,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,784638394,MDEyOklzc3VlQ29tbWVudDc4NDYzODM5NA==,306240,UtahDave,2021-02-24T00:36:18Z,2021-02-24T00:36:18Z,NONE,I noticed that @simonw is using black for formatting. I ran black on my additions in this PR.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import, https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-790389335,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,790389335,MDEyOklzc3VlQ29tbWVudDc5MDM4OTMzNQ==,306240,UtahDave,2021-03-04T07:32:04Z,2021-03-04T07:32:04Z,NONE,"> The command takes quite a while to start running, presumably because this line causes it to have to scan the WHOLE file in order to generate a count: > > https://github.com/dogsheep/google-takeout-to-sqlite/blob/a3de045eba0fae4b309da21aa3119102b0efc576/google_takeout_to_sqlite/utils.py#L66-L67 > > I'm fine with waiting though. It's not like this is a command people run every day - and without that count we can't show a progress bar, which seems pretty important for a process that takes this long. The wait is from python loading the mbox file. This happens regardless if you're getting the length of the mbox. The mbox module is on the slow side. It is possible to do one's own parsing of the mbox, but I kind of wanted to avoid doing that.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import, https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-790391711,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,790391711,MDEyOklzc3VlQ29tbWVudDc5MDM5MTcxMQ==,306240,UtahDave,2021-03-04T07:36:24Z,2021-03-04T07:36:24Z,NONE,"> Looks like you're doing this: > > ```python > elif message.get_content_type() == ""text/plain"": > body = message.get_payload(decode=True) > ``` > > So presumably that decodes to a unicode string? > > I imagine the reason the column is a `BLOB` for me is that `sqlite-utils` determines the column type based on the first batch of items - https://github.com/simonw/sqlite-utils/blob/09c3386f55f766b135b6a1c00295646c4ae29bec/sqlite_utils/db.py#L1927-L1928 - and I got unlucky and had something in my first batch that wasn't a unicode string. Ah, that's good to know. I think explicitly creating the tables will be a great improvement. I'll add that. Also, I noticed after I opened this PR that the `message.get_payload()` is being deprecated in favor of `message.get_content()` or something like that. I'll see if that handles the decoding better, too. Thanks for the feedback. I should have time tomorrow to put together some improvements.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import, https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-791089881,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,791089881,MDEyOklzc3VlQ29tbWVudDc5MTA4OTg4MQ==,28565,maxhawkins,2021-03-05T02:03:19Z,2021-03-05T02:03:19Z,NONE,"I just tried to run this on a small VPS instance with 2GB of memory and it crashed out of memory while processing a 12GB mbox from Takeout. Is it possible to stream the emails to sqlite instead of loading it all into memory and upserting at once?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import, https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-791530093,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,791530093,MDEyOklzc3VlQ29tbWVudDc5MTUzMDA5Mw==,306240,UtahDave,2021-03-05T16:28:07Z,2021-03-05T16:28:07Z,NONE,"> I just tried to run this on a small VPS instance with 2GB of memory and it crashed out of memory while processing a 12GB mbox from Takeout. > > Is it possible to stream the emails to sqlite instead of loading it all into memory and upserting at once? @maxhawkins a limitation of the python mbox module is it loads the entire mbox into memory. I did find another approach to this problem that didn't use the builtin python mbox module and created a generator so that it didn't have to load the whole mbox into memory. I was hoping to use standard library modules, but this might be a good reason to investigate that approach a bit more. My worry is making sure a custom processor handles all the ins and outs of the mbox format correctly. Hm. As I'm writing this, I thought of something. I think I can parse each message one at a time, and then use an mbox function to load each message using the python mbox module. That way the mbox module can still deal with the specifics of the mbox format, but I can use a generator. I'll give that a try. Thanks for the feedback @maxhawkins and @simonw. I'll give that a try. @simonw can we hold off on merging this until I can test this new approach?","{""total_count"": 3, ""+1"": 3, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import, https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-849708617,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,849708617,MDEyOklzc3VlQ29tbWVudDg0OTcwODYxNw==,28565,maxhawkins,2021-05-27T15:01:42Z,2021-05-27T15:01:42Z,NONE,Any updates?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import, https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-884672647,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,884672647,IC_kwDODFE5qs40uwiH,28565,maxhawkins,2021-07-22T05:56:31Z,2021-07-22T14:03:08Z,NONE,"How does this commit look? https://github.com/maxhawkins/google-takeout-to-sqlite/commit/72802a83fee282eb5d02d388567731ba4301050d It seems that Takeout's mbox format is pretty simple, so we can get away with just splitting the file on lines begining with `From `. My commit just splits the file every time a line starts with `From ` and uses `email.message_from_bytes` to parse each chunk. I was able to load a 12GB takeout mbox without the program using more than a couple hundred MB of memory during the import process. It does make us lose the progress bar, but maybe I can add that back in a later commit.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import, https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-885022230,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,885022230,IC_kwDODFE5qs40wF4W,28565,maxhawkins,2021-07-22T15:51:46Z,2021-07-22T15:51:46Z,NONE,One thing I noticed is this importer doesn't save attachments along with the body of the emails. It would be nice if those got stored as blobs in a separate attachments table so attachments can be included while fetching search results.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import, https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-885094284,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,885094284,IC_kwDODFE5qs40wXeM,28565,maxhawkins,2021-07-22T17:41:32Z,2021-07-22T17:41:32Z,NONE,I added a follow-up commit that deals with emails that don't have a `Date` header: https://github.com/maxhawkins/google-takeout-to-sqlite/commit/4bc70103582c10802c85a523ef1e99a8a2154aa9,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import, https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-885098025,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,885098025,IC_kwDODFE5qs40wYYp,306240,UtahDave,2021-07-22T17:47:50Z,2021-07-22T17:47:50Z,NONE,"Hi @maxhawkins , I'm sorry, I haven't had any time to work on this. I'll have some time tomorrow to test your commits. I think they look great. I'm great with your commits superseding my initial attempt here.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import, https://github.com/dogsheep/google-takeout-to-sqlite/pull/5#issuecomment-888075098,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/5,888075098,IC_kwDODFE5qs407vNa,28565,maxhawkins,2021-07-28T07:18:56Z,2021-07-28T07:18:56Z,NONE,"> I'm not sure why but my most recent import, when displayed in Datasette, looks like this: > > I did some investigation into this issue and made a fix [here](https://github.com/dogsheep/google-takeout-to-sqlite/pull/8/commits/8ee555c2889a38ff42b95664ee074b4a01a82f06). The problem was that some messages (like gchat logs) don't have a `Message-Id` and we need to use `X-GM-THRID` as the pkey instead. @simonw While looking into this I found something unexpected about how sqlite_utils handles upserts if the pkey column is `None`. When the pkey is NULL I'd expect the function to either use rowid or throw an exception. Instead, it seems upsert_all creates a row where all columns are NULL instead of using the values provided as parameters.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813880401,WIP: Add Gmail takeout mbox import, https://github.com/dogsheep/google-takeout-to-sqlite/pull/8#issuecomment-1002735370,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/8,1002735370,IC_kwDODFE5qs47xIcK,203343,Btibert3,2021-12-29T18:58:23Z,2021-12-29T18:58:23Z,NONE,"@maxhawkins how hard would it be to add an entry to the table that includes the HTML version of the email, if it exists? I just attempted your the PR branch on a very small mbox file, and it worked great. My use case is a research project and I need to access more than just the body plain text.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",954546309,Add Gmail takeout mbox import (v2), https://github.com/dogsheep/google-takeout-to-sqlite/pull/8#issuecomment-1003437288,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/8,1003437288,IC_kwDODFE5qs47zzzo,28565,maxhawkins,2021-12-31T19:06:20Z,2021-12-31T19:06:20Z,NONE,"> @maxhawkins how hard would it be to add an entry to the table that includes the HTML version of the email, if it exists? I just attempted your the PR branch on a very small mbox file, and it worked great. My use case is a research project and I need to access more than just the body plain text. Shouldn't be hard. The easiest way is probably to remove the `if body.content_type == ""text/html""` clause from [utils.py:254](https://github.com/dogsheep/google-takeout-to-sqlite/pull/8/commits/8e6d487b697ce2e8ad885acf613a157bfba84c59#diff-25ad9dd1ced1b8bfc37fda8444819c803232c08891e4af3d4064aa205d8174eaR254) and just return content directly without parsing.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",954546309,Add Gmail takeout mbox import (v2), https://github.com/dogsheep/google-takeout-to-sqlite/pull/8#issuecomment-894581223,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/8,894581223,IC_kwDODFE5qs41Ujnn,28565,maxhawkins,2021-08-07T00:57:48Z,2021-08-07T00:57:48Z,NONE,"Just added two more fixes: * Added parsing for rfc 2047 encoded unicode headers * Body is now stored as TEXT rather than a BLOB regardless of what order the messages are parsed in. I was able to run this on my Takeout export and everything seems to work fine. @simonw let me know if this looks good to merge.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",954546309,Add Gmail takeout mbox import (v2), https://github.com/dogsheep/google-takeout-to-sqlite/pull/8#issuecomment-896378525,https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/8,896378525,IC_kwDODFE5qs41baad,28565,maxhawkins,2021-08-10T23:28:45Z,2021-08-10T23:28:45Z,NONE,"I added parsing of text/html emails using BeautifulSoup. Around half of the emails in my archive don't include a text/plain payload so adding html parsing makes a good chunk of them searchable.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",954546309,Add Gmail takeout mbox import (v2), https://github.com/dogsheep/hacker-news-to-sqlite/pull/6#issuecomment-1489110168,https://api.github.com/repos/dogsheep/hacker-news-to-sqlite/issues/6,1489110168,IC_kwDODtX3eM5YwgSY,1231935,xavdid,2023-03-29T18:36:16Z,2023-03-29T18:36:16Z,NONE,@simonw can you take a look when you have a chance?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1641117021,Add permalink virtual field to items table, https://github.com/dogsheep/healthkit-to-sqlite/issues/11#issuecomment-711083698,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/11,711083698,MDEyOklzc3VlQ29tbWVudDcxMTA4MzY5OA==,572,jarib,2020-10-17T21:39:15Z,2020-10-17T21:39:15Z,NONE,Nice! Works perfectly. Thanks for the quick response and great tooling in general.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",723838331,export.xml file name varies with different language settings, https://github.com/dogsheep/healthkit-to-sqlite/issues/12#issuecomment-1163917719,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/12,1163917719,IC_kwDOC8tyDs5FX_mX,956433,Mjboothaus,2022-06-23T04:35:02Z,2022-06-23T04:35:02Z,NONE,In terms of unique identifiers - could you use values stored in `HKMetadataKeySyncIdentifier`?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",727848625,"Some workout columns should be float, not text", https://github.com/dogsheep/healthkit-to-sqlite/issues/12#issuecomment-877805513,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/12,877805513,MDEyOklzc3VlQ29tbWVudDg3NzgwNTUxMw==,956433,Mjboothaus,2021-07-11T14:03:01Z,2021-07-11T14:03:01Z,NONE,"Hi Simon -- just experimenting with your excellent software! Up to this point in time I have been using the (paid) [HealthFit App](https://apps.apple.com/au/app/healthfit/id1202650514) to export my workouts from my Apple Watch, one walk at the time into either .GPX or .FIT format and then using another library to suck it into Python and eventually here to my ""Emmaus Walking"" app: https://share.streamlit.io/mjboothaus/emmaus_walking/emmaus_walking/app.py I just used `healthkit-to-sqlite` to convert my export.zip file and it all ""just worked"". I did notice the issue with various numeric fields being stored in the SQLite db as TEXT for now and just thought I'd flag it - but you're already self-reported this issue. Keep up the great work! I was curious if you have any thoughts about periodically exporting ""export.zip"" and how to just update the SQLite file instead of re-creating it each time. Hopefully Apple will give some thought to managing this data in a more sensible fashion as it grows over time. Ideally one could pull it from iCloud (where it is allegedly being backed up). ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",727848625,"Some workout columns should be float, not text", https://github.com/dogsheep/healthkit-to-sqlite/issues/12#issuecomment-877874117,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/12,877874117,MDEyOklzc3VlQ29tbWVudDg3Nzg3NDExNw==,956433,Mjboothaus,2021-07-11T23:03:37Z,2021-07-11T23:03:37Z,NONE,P.s. wondering if you have explored using the spatialite functionality with the location data in workouts?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",727848625,"Some workout columns should be float, not text", https://github.com/dogsheep/healthkit-to-sqlite/issues/14#issuecomment-1073123231,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/14,1073123231,IC_kwDOC8tyDs4_9o-f,343884,lchski,2022-03-19T22:39:29Z,2022-03-19T22:39:29Z,NONE,"I have this issue, too, with a fresh export. None of my `Workout` entries in `export.xml` have an `id` key, though [the sample `export.xml` in the tests folder doesn’t either](https://github.com/dogsheep/healthkit-to-sqlite/blob/main/tests/zip_contents/apple_health_export/export.xml#L14-L21), so I don’t think this is the culprit. Indeed, it seems @simonw is using the [`hash_id` function from `sqlite_utils`](https://sqlite-utils.datasette.io/en/stable/python-api.html#setting-an-id-based-on-the-hash-of-the-row-contents), which creates a column (`id`, in this case) based on a hash of the row’s contents. When I run the script, a `workouts` table is created, with one entry: my first workout. No `workout_points` table is created, as [I’d expect from `utils.py`](https://github.com/dogsheep/healthkit-to-sqlite/blob/main/healthkit_to_sqlite/utils.py#L89-L90). I then get essentially the same error as noted in this thread: ```Importing from HealthKit [###################################-] 98% 00:00:01 Traceback (most recent call last): File ""/Users/lchski/.pyenv/versions/3.10.3/bin/healthkit-to-sqlite"", line 8, in sys.exit(cli()) File ""/Users/lchski/.pyenv/versions/3.10.3/lib/python3.10/site-packages/click/core.py"", line 1128, in __call__ return self.main(*args, **kwargs) File ""/Users/lchski/.pyenv/versions/3.10.3/lib/python3.10/site-packages/click/core.py"", line 1053, in main rv = self.invoke(ctx) File ""/Users/lchski/.pyenv/versions/3.10.3/lib/python3.10/site-packages/click/core.py"", line 1395, in invoke return ctx.invoke(self.callback, **ctx.params) File ""/Users/lchski/.pyenv/versions/3.10.3/lib/python3.10/site-packages/click/core.py"", line 754, in invoke return __callback(*args, **kwargs) File ""/Users/lchski/.pyenv/versions/3.10.3/lib/python3.10/site-packages/healthkit_to_sqlite/cli.py"", line 57, in cli convert_xml_to_sqlite(fp, db, progress_callback=bar.update, zipfile=zf) File ""/Users/lchski/.pyenv/versions/3.10.3/lib/python3.10/site-packages/healthkit_to_sqlite/utils.py"", line 34, in convert_xml_to_sqlite workout_to_db(el, db, zipfile) File ""/Users/lchski/.pyenv/versions/3.10.3/lib/python3.10/site-packages/healthkit_to_sqlite/utils.py"", line 57, in workout_to_db pk = db[""workouts""].insert(record, alter=True, hash_id=""id"").last_pk File ""/Users/lchski/.pyenv/versions/3.10.3/lib/python3.10/site-packages/sqlite_utils/db.py"", line 2822, in insert return self.insert_all( File ""/Users/lchski/.pyenv/versions/3.10.3/lib/python3.10/site-packages/sqlite_utils/db.py"", line 2950, in insert_all self.insert_chunk( File ""/Users/lchski/.pyenv/versions/3.10.3/lib/python3.10/site-packages/sqlite_utils/db.py"", line 2715, in insert_chunk result = self.db.execute(query, params) File ""/Users/lchski/.pyenv/versions/3.10.3/lib/python3.10/site-packages/sqlite_utils/db.py"", line 458, in execute return self.conn.execute(sql, parameters) sqlite3.IntegrityError: UNIQUE constraint failed: workouts.id ``` Are there maybe duplicate workouts in the data, which’d cause multiple rows to share the same `id`? It’s strange, though, that no `workout_points` is created at all. Export created from iOS 15.3.1.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771608692,UNIQUE constraint failed: workouts.id, https://github.com/dogsheep/healthkit-to-sqlite/issues/14#issuecomment-1073139067,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/14,1073139067,IC_kwDOC8tyDs4_9s17,343884,lchski,2022-03-20T00:54:18Z,2022-03-20T00:54:18Z,NONE,"Update: this appears to be because of running the command twice without clearing the DB in between. Tries to insert a Workout that already exists, causing a collision on the (auto-generated) `id` column. Had a different error with a clean DB, likely due to the workout points format; will make a new issue for that.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771608692,UNIQUE constraint failed: workouts.id, https://github.com/dogsheep/healthkit-to-sqlite/issues/14#issuecomment-1629123734,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/14,1629123734,IC_kwDOC8tyDs5hGnSW,44622670,philipp-heinrich,2023-07-10T14:46:52Z,2023-07-10T14:46:52Z,NONE,@simonw any chance to get this fixed soon? ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771608692,UNIQUE constraint failed: workouts.id, https://github.com/dogsheep/healthkit-to-sqlite/issues/14#issuecomment-798436026,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/14,798436026,MDEyOklzc3VlQ29tbWVudDc5ODQzNjAyNg==,1234956,n8henrie,2021-03-13T14:23:16Z,2021-03-13T14:23:16Z,NONE,"This PR allows my import to succeed. It looks like some events don't have an `id`, but do have `HKExternalUUID` (which gets turned into `metadata_HKExternalUUID`), so I use this as a fallback. If a record has neither of these, I changed it to just print the record (for debugging) and `return`. For some odd reason this ran fine at first, and now (after removing the generated db and trying again) I'm getting a different error (duplicate column name). Looks like it may have run when I had two successive runs without remembering to delete the db in between. Will try to refactor.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771608692,UNIQUE constraint failed: workouts.id, https://github.com/dogsheep/healthkit-to-sqlite/issues/14#issuecomment-798468572,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/14,798468572,MDEyOklzc3VlQ29tbWVudDc5ODQ2ODU3Mg==,1234956,n8henrie,2021-03-13T14:47:31Z,2021-03-13T14:47:31Z,NONE,"Ok, new PR works. I'm not `git` enough so I just force-pushed over the old one. I still end up with a lot of activities that are missing an `id` and therefore skipped (since this is used as the primary key). For example: ``` {'workoutActivityType': 'HKWorkoutActivityTypeRunning', 'duration': '35.31666666666667', 'durationUnit': 'min', 'totalDistance': '4.010870267636999', 'totalDistanceUnit': 'mi', 'totalEnergyBurned': '660.3516235351562', 'totalEnergyBurnedUnit': 'Cal', 'sourceName': 'Strava', 'sourceVersion': '22810', 'creationDate': '2020-07-16 13:38:26 -0700', 'startDate': '2020-07-16 06:38:26 -0700', 'endDate': '2020-07-16 07:13:45 -0700'} ``` I also end up with some unhappy characters (in the skipped events), such as: `'sourceName': 'Nathan’s Apple\xa0Watch',`. But it's successfully making it through the file, and the resulting db opens in datasette, so I'd call that progress.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771608692,UNIQUE constraint failed: workouts.id, https://github.com/dogsheep/healthkit-to-sqlite/issues/21#issuecomment-903950096,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/21,903950096,IC_kwDOC8tyDs414S8Q,32016596,FabianHertwig,2021-08-23T17:00:59Z,2021-08-23T17:00:59Z,NONE,"I think the issue is that I have records like these: ```xml ``` And if sqlite is case insensitive, then `metadata_meal` and `metadata_Meal` result in the same column.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",977128935,Duplicate Column, https://github.com/dogsheep/healthkit-to-sqlite/issues/24#issuecomment-1464786643,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/24,1464786643,IC_kwDOC8tyDs5XTt7T,956433,Mjboothaus,2023-03-11T02:01:27Z,2023-03-11T02:01:27Z,NONE,Thanks for reporting this and providing a solution -- I was puzzled by this error when I revisited my walking data and experienced this issues. I haven't tried the fix yet.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1515883470,DOC: xml.etree.ElementTree.ParseError due to healthkit version 12 , https://github.com/dogsheep/healthkit-to-sqlite/issues/24#issuecomment-1464796494,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/24,1464796494,IC_kwDOC8tyDs5XTwVO,956433,Mjboothaus,2023-03-11T02:23:42Z,2023-03-11T02:23:42Z,NONE,@simonw - maybe put in some error handling to trap for poorly formed XML (from Apple engineers) so that it suggests that there are problems with export.zip rather than odd looking Python errors :),"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1515883470,DOC: xml.etree.ElementTree.ParseError due to healthkit version 12 , https://github.com/dogsheep/healthkit-to-sqlite/issues/9#issuecomment-514745798,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/9,514745798,MDEyOklzc3VlQ29tbWVudDUxNDc0NTc5OA==,166463,tholo,2019-07-24T18:25:36Z,2019-07-24T18:25:36Z,NONE,"This is on macOS 10.14.6, with Python 3.7.4, packages in the virtual environment: ``` Package Version ------------------- ------- aiofiles 0.4.0 Click 7.0 click-default-group 1.2.1 datasette 0.29.2 h11 0.8.1 healthkit-to-sqlite 0.3.1 httptools 0.0.13 hupper 1.8.1 importlib-metadata 0.18 Jinja2 2.10.1 MarkupSafe 1.1.1 Pint 0.8.1 pip 19.2.1 pluggy 0.12.0 setuptools 41.0.1 sqlite-utils 1.7 tabulate 0.8.3 uvicorn 0.8.4 uvloop 0.12.2 websockets 7.0 zipp 0.5.2 ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",472429048,Too many SQL variables, https://github.com/dogsheep/healthkit-to-sqlite/issues/9#issuecomment-515370687,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/9,515370687,MDEyOklzc3VlQ29tbWVudDUxNTM3MDY4Nw==,166463,tholo,2019-07-26T09:01:19Z,2019-07-26T09:01:19Z,NONE,"Yes, that did fix the issue I was seeing — it will now import my complete HealthKit data. Thorsten > On Jul 25, 2019, at 23:07, Simon Willison wrote: > > @tholo this should be fixed in just-released version 0.3.2 - could you run a pip install -U healthkit-to-sqlite and let me know if it works for you now? > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub , or mute the thread . > ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",472429048,Too many SQL variables, https://github.com/dogsheep/healthkit-to-sqlite/pull/13#issuecomment-904642396,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/13,904642396,IC_kwDOC8tyDs41679c,32016596,FabianHertwig,2021-08-24T13:27:40Z,2021-08-24T13:28:26Z,NONE,This would fix #21 and make #22 obsolete.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",743071410,SQLite does not have case sensitive columns, https://github.com/dogsheep/healthkit-to-sqlite/pull/22#issuecomment-904641261,https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/22,904641261,IC_kwDOC8tyDs4167rt,32016596,FabianHertwig,2021-08-24T13:26:20Z,2021-08-24T13:26:20Z,NONE,Did not see that #13 fixes the same issue in a similar way. You can decide which one to merge ;),"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",978086284,Make sure that case-insensitive column names are unique, https://github.com/dogsheep/pocket-to-sqlite/issues/10#issuecomment-1239516561,https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/10,1239516561,IC_kwDODLZ_YM5J4YWR,11887,ashanan,2022-09-07T15:07:38Z,2022-09-07T15:07:38Z,NONE,Thanks!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1246826792,"When running `auth` command, don't overwrite an existing auth.json file", https://github.com/dogsheep/pocket-to-sqlite/issues/11#issuecomment-1221521377,https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/11,1221521377,IC_kwDODLZ_YM5Izu_h,2467,fernand0,2022-08-21T10:51:37Z,2022-08-21T10:51:37Z,NONE,I didn't see there is a PR about this: https://github.com/dogsheep/pocket-to-sqlite/pull/7,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1345452427,"-a option is used for ""--auth"" and for ""--all""", https://github.com/dogsheep/pocket-to-sqlite/issues/9#issuecomment-774726123,https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/9,774726123,MDEyOklzc3VlQ29tbWVudDc3NDcyNjEyMw==,12669260,jfeiwell,2021-02-07T18:21:08Z,2021-02-07T18:21:08Z,NONE,@simonw any ideas here?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",801780625,SSL Error, https://github.com/dogsheep/pocket-to-sqlite/issues/9#issuecomment-774730656,https://api.github.com/repos/dogsheep/pocket-to-sqlite/issues/9,774730656,MDEyOklzc3VlQ29tbWVudDc3NDczMDY1Ng==,635179,merwok,2021-02-07T18:45:04Z,2021-02-07T18:45:04Z,NONE,"That URL uses TLS 1.3, but maybe only if the client supports it. It could be your Python version or your SSL library that’s not recent enough.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",801780625,SSL Error, https://github.com/dogsheep/swarm-to-sqlite/issues/12#issuecomment-941274088,https://api.github.com/repos/dogsheep/swarm-to-sqlite/issues/12,941274088,IC_kwDODD6af844GrPo,33631,fs111,2021-10-12T18:31:57Z,2021-10-12T18:31:57Z,NONE,I am running into the same problem. Is there any workaround?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",951817328,403 when getting token, https://github.com/dogsheep/twitter-to-sqlite/issues/31#issuecomment-1251845216,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/31,1251845216,IC_kwDODEm0Qs5KnaRg,150986,dckc,2022-09-20T05:05:03Z,2022-09-20T05:05:03Z,NONE,"yay! Thanks a bunch for the `twitter-to-sqlite friends` command! The twitter ""Download an archive of your data"" feature doesn't include who I follow, so this is particularly handy. The whole Dogsheep thing is great :) I've written about similar things under [cloud-services](https://www.madmode.com/search/label/cloud-services/): - 2021: [Closet Librarian Approach to Cloud Services](https://www.madmode.com/2021/closet-librarian-approach-cloud-services.html) - 2015: [jukekb \- Browse iTunes libraries and upload playlists to Google Music](https://www.madmode.com/2015/jukekb)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",520508502,"""friends"" command (similar to ""followers"")", https://github.com/dogsheep/twitter-to-sqlite/issues/47#issuecomment-645515103,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/47,645515103,MDEyOklzc3VlQ29tbWVudDY0NTUxNTEwMw==,73579,hpk42,2020-06-17T17:30:01Z,2020-06-17T17:30:01Z,NONE,"It's the one with python3.7:: >>> sqlite3.sqlite_version '3.11.0' On Wed, Jun 17, 2020 at 10:24 -0700, Simon Willison wrote: > That means your version of SQLite is old enough that it doesn't support the FTS5 extension. > > Could you share what operating system you're running, and what the output is that you get from running this? > > python -c 'import sqlite3; print(sqlite3.connect("":memory:"").execute(""select sqlite_version()"").fetchone()[0])' > > I can teach this tool to fall back on FTS4 if FTS5 isn't available. > > -- > You are receiving this because you authored the thread. > Reply to this email directly or view it on GitHub: > https://github.com/dogsheep/twitter-to-sqlite/issues/47#issuecomment-645512127 ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",639542974,Fall back to FTS4 if FTS5 is not available, https://github.com/dogsheep/twitter-to-sqlite/issues/50#issuecomment-691501132,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/50,691501132,MDEyOklzc3VlQ29tbWVudDY5MTUwMTEzMg==,706257,bcongdon,2020-09-12T14:48:10Z,2020-09-12T14:48:10Z,NONE,"This seems to be an issue even with larger values of `--stop_after`: ``` $ twitter-to-sqlite favorites twitter.db --stop_after=2000 Importing favorites [####################################] 198 $ ```","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",698791218,"favorites --stop_after=N stops after min(N, 200)", https://github.com/dogsheep/twitter-to-sqlite/issues/52#issuecomment-729484478,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/52,729484478,MDEyOklzc3VlQ29tbWVudDcyOTQ4NDQ3OA==,4169772,fatihky,2020-11-18T07:12:45Z,2020-11-18T07:12:45Z,NONE,I'm so sorry that you already have `--since_id` option and that's enough for the case I've mentioned. Thank you for this excellent tool!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",745393298,Discussion: Adding support for fetching only fresh tweets, https://github.com/dogsheep/twitter-to-sqlite/issues/53#issuecomment-748436453,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/53,748436453,MDEyOklzc3VlQ29tbWVudDc0ODQzNjQ1Mw==,27,anotherjesse,2020-12-19T07:47:01Z,2020-12-19T07:47:01Z,NONE,"I think this should probably be closed as won't fix. Attempting to make a patch for this I realized that the since_id would limit to tweets posted since that since_id, not when it was favorited. So favoriting something in the older would be missed if you used `--since` with a cron script Better to just use `--stop_after` set to a couple hundred","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",771324837,--since support for favorites, https://github.com/dogsheep/twitter-to-sqlite/issues/54#issuecomment-1370786026,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/54,1370786026,IC_kwDODEm0Qs5RtIjq,6764957,swyxio,2023-01-04T11:06:44Z,2023-01-04T11:06:44Z,NONE,"as of 2023 it appears that `tweets: not yet implemented` is happening.. pretty important for a twitter export functionality! this seems an impossible task with twitter liable to switch this around every other day","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",779088071,Archive import appears to be broken on recent exports, https://github.com/dogsheep/twitter-to-sqlite/issues/54#issuecomment-767888743,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/54,767888743,MDEyOklzc3VlQ29tbWVudDc2Nzg4ODc0Mw==,19328961,henry501,2021-01-26T23:07:41Z,2021-01-26T23:07:41Z,NONE,"My import got much further with the applied fixes than 0.21.3, but not 100%. I do appear to have all of the tweets imported at least. Not sure when I'll have a chance to look further to try to fix or see what didn't make it into the import. Here's my output: ``` direct-messages-group: not yet implemented branch-links: not yet implemented periscope-expired-broadcasts: not yet implemented direct-messages: not yet implemented mute: not yet implemented periscope-comments-made-by-user: not yet implemented periscope-ban-information: not yet implemented periscope-profile-description: not yet implemented screen-name-change: not yet implemented manifest: not yet implemented fleet: not yet implemented user-link-clicks: not yet implemented periscope-broadcast-metadata: not yet implemented contact: not yet implemented fleet-mute: not yet implemented device-token: not yet implemented protected-history: not yet implemented direct-message-mute: not yet implemented Traceback (most recent call last): File ""/Users/henry/.local/share/virtualenvs/python-sqlite-testing-mF3G2xKl/bin/twitter-to-sqlite"", line 33, in sys.exit(load_entry_point('twitter-to-sqlite==0.21.3', 'console_scripts', 'twitter-to-sqlite')()) File ""/Users/henry/.local/share/virtualenvs/python-sqlite-testing-mF3G2xKl/lib/python3.9/site-packages/click/core.py"", line 829, in __call__ return self.main(*args, **kwargs) File ""/Users/henry/.local/share/virtualenvs/python-sqlite-testing-mF3G2xKl/lib/python3.9/site-packages/click/core.py"", line 782, in main rv = self.invoke(ctx) File ""/Users/henry/.local/share/virtualenvs/python-sqlite-testing-mF3G2xKl/lib/python3.9/site-packages/click/core.py"", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File ""/Users/henry/.local/share/virtualenvs/python-sqlite-testing-mF3G2xKl/lib/python3.9/site-packages/click/core.py"", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File ""/Users/henry/.local/share/virtualenvs/python-sqlite-testing-mF3G2xKl/lib/python3.9/site-packages/click/core.py"", line 610, in invoke return callback(*args, **kwargs) File ""/Users/henry/.local/share/virtualenvs/python-sqlite-testing-mF3G2xKl/lib/python3.9/site-packages/twitter_to_sqlite/cli.py"", line 772, in import_ archive.import_from_file(db, filepath.name, open(filepath, ""rb"").read()) File ""/Users/henry/.local/share/virtualenvs/python-sqlite-testing-mF3G2xKl/lib/python3.9/site-packages/twitter_to_sqlite/archive.py"", line 233, in import_from_file to_insert = transformer(data) File ""/Users/henry/.local/share/virtualenvs/python-sqlite-testing-mF3G2xKl/lib/python3.9/site-packages/twitter_to_sqlite/archive.py"", line 21, in callback return {filename: [fn(item) for item in data]} File ""/Users/henry/.local/share/virtualenvs/python-sqlite-testing-mF3G2xKl/lib/python3.9/site-packages/twitter_to_sqlite/archive.py"", line 21, in return {filename: [fn(item) for item in data]} File ""/Users/henry/.local/share/virtualenvs/python-sqlite-testing-mF3G2xKl/lib/python3.9/site-packages/twitter_to_sqlite/archive.py"", line 88, in ageinfo return item[""ageMeta""][""ageInfo""] KeyError: 'ageInfo' ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",779088071,Archive import appears to be broken on recent exports, https://github.com/dogsheep/twitter-to-sqlite/issues/54#issuecomment-927312650,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/54,927312650,IC_kwDODEm0Qs43RasK,2182,danp,2021-09-26T14:09:51Z,2021-09-26T14:09:51Z,NONE,"Similar trouble with ageinfo using 0.22. Here's what my ageinfo.js file looks like: ``` window.YTD.ageinfo.part0 = [ { ""ageMeta"" : { } } ] ``` Commenting out the registration for ageinfo in archive.py gets my archive to import.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",779088071,Archive import appears to be broken on recent exports, https://github.com/dogsheep/twitter-to-sqlite/issues/56#issuecomment-769973212,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/56,769973212,MDEyOklzc3VlQ29tbWVudDc2OTk3MzIxMg==,42315895,gsajko,2021-01-29T18:29:02Z,2021-01-29T18:31:55Z,NONE,"I think it was with `twitter-to-sqlite home-timeline home.db -a auth.json --since` and Im using only this command to grab tweets from cron tab `2,7,12,17,22,27,32,37,42,47,52,57 * * * * run-one /home/gsajko/miniconda3/bin/twitter-to-sqlite home-timeline /home/gsajko/work/custom_twitter_feed/home.db -a /home/gsajko/work/custom_twitter_feed/auth/auth.json --since` ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",796736607,Not all quoted statuses get fetched?, https://github.com/dogsheep/twitter-to-sqlite/issues/56#issuecomment-772408273,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/56,772408273,MDEyOklzc3VlQ29tbWVudDc3MjQwODI3Mw==,42315895,gsajko,2021-02-03T10:36:36Z,2021-02-03T10:36:36Z,NONE,"I figured it out. Those tweets are in database, because somebody quote tweeted them, or retweeted them. And if you grab quoted tweet or reweeted tweet from other tweet json, It doesn't grab all of the details. So if someone quote tweeted a quote tweet, the second quote tweet won't have `quoted_status`. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",796736607,Not all quoted statuses get fetched?, https://github.com/dogsheep/twitter-to-sqlite/issues/57#issuecomment-860063190,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/57,860063190,MDEyOklzc3VlQ29tbWVudDg2MDA2MzE5MA==,232237,stiles,2021-06-12T14:46:44Z,2021-06-12T14:46:44Z,NONE,I'm having the same issue (same versions of python and twitter-to-sqlite). It's the `user-timeline` command. Other commands are working. ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",907645813,"Error: Use either --since or --since_id, not both", https://github.com/dogsheep/twitter-to-sqlite/issues/60#issuecomment-1279249898,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/60,1279249898,IC_kwDODEm0Qs5MP83q,7908073,chapmanjacobd,2022-10-14T16:58:26Z,2022-10-14T16:58:26Z,NONE,"You could try using `msys2`. I've had better luck running python CLIs within that system on Windows. Here is a guide: https://github.com/chapmanjacobd/lb/blob/main/Windows.md#prep","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1063982712,Execution on Windows, https://github.com/dogsheep/twitter-to-sqlite/issues/61#issuecomment-1297201971,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/61,1297201971,IC_kwDODEm0Qs5NUbsz,3153638,Profpatsch,2022-10-31T14:47:58Z,2022-10-31T14:47:58Z,NONE,There’s also a limit of 3200 tweets. I wonder if that can be circumvented somehow.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1077560091,"Data Pull fails for ""Essential"" level access to the Twitter API (for Documentation)", https://github.com/dogsheep/twitter-to-sqlite/issues/62#issuecomment-1001222213,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/62,1001222213,IC_kwDODEm0Qs47rXBF,6764957,swyxio,2021-12-26T17:59:25Z,2021-12-26T17:59:25Z,NONE,just confirmed that this error does not occur when i use my public main account. gets more interesting!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1088816961,KeyError: 'created_at' for private accounts?, https://github.com/dogsheep/twitter-to-sqlite/issues/62#issuecomment-1049775451,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/62,1049775451,IC_kwDODEm0Qs4-kk1b,43036882,miuku,2022-02-24T11:43:31Z,2022-02-24T11:43:31Z,NONE,i seem to have fixed this issue by applying for [elevated API access](https://developer.twitter.com/en/portal/products/elevated),"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1088816961,KeyError: 'created_at' for private accounts?, https://github.com/dogsheep/twitter-to-sqlite/issues/62#issuecomment-1050123919,https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/62,1050123919,IC_kwDODEm0Qs4-l56P,6764957,swyxio,2022-02-24T18:10:18Z,2022-02-24T18:10:18Z,NONE,gonna close this for now since i'm not actively working on it.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1088816961,KeyError: 'created_at' for private accounts?, https://github.com/simonw/datasette/issues/100#issuecomment-344864254,https://api.github.com/repos/simonw/datasette/issues/100,344864254,MDEyOklzc3VlQ29tbWVudDM0NDg2NDI1NA==,13304454,coisnepe,2017-11-16T09:25:10Z,2017-11-16T09:25:10Z,NONE,@simonw I see. I upgraded sanic-jinja2 and jinja2: it now works flawlessly. Thank you!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",274160723,TemplateAssertionError: no filter named 'tojson', https://github.com/simonw/datasette/issues/1003#issuecomment-706302863,https://api.github.com/repos/simonw/datasette/issues/1003,706302863,MDEyOklzc3VlQ29tbWVudDcwNjMwMjg2Mw==,649467,mhalle,2020-10-09T17:17:06Z,2020-10-09T17:17:06Z,NONE,"I agree on the descriptive and python-consistent naming. There is already a tojson, but frankly i find the ""to"" and ""from"" confusing in a text templating language where what's a string and what's data isn't 100% transparent.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",718238967,from_json jinja2 filter, https://github.com/simonw/datasette/issues/101#issuecomment-344597274,https://api.github.com/repos/simonw/datasette/issues/101,344597274,MDEyOklzc3VlQ29tbWVudDM0NDU5NzI3NA==,450244,eaubin,2017-11-15T13:48:55Z,2017-11-15T13:48:55Z,NONE,This is a duplicate of https://github.com/simonw/datasette/issues/100,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",274161964,TemplateAssertionError: no filter named 'tojson', https://github.com/simonw/datasette/issues/1032#issuecomment-712397537,https://api.github.com/repos/simonw/datasette/issues/1032,712397537,MDEyOklzc3VlQ29tbWVudDcxMjM5NzUzNw==,236498,saulpw,2020-10-19T19:37:55Z,2020-10-19T19:37:55Z,NONE,"python-dateutil is awesome, but it can only guess at one date at a time. So if you have a column of dates that are (presumably) in the same format, it can't use the full set of dates to deduce the format. Also, once it has parsed a date, you can't get the format it used, whether to parse or render other dates. These limitations prevent it from being a silver bullet for date parsing, though they're not enough for me to stop using it!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",724878151,Bring date parsing into Datasette core, https://github.com/simonw/datasette/issues/1036#issuecomment-762385981,https://api.github.com/repos/simonw/datasette/issues/1036,762385981,MDEyOklzc3VlQ29tbWVudDc2MjM4NTk4MQ==,4997607,philshem,2021-01-18T17:32:13Z,2021-01-18T17:34:50Z,NONE,"Hi Simon Just finding this old issue regarding downloading blobs. Nice work! As a feature request, maybe it would be possible to assign a blob column as a certain data type (e.g. `.jpg`) and then each blob could be downloaded as that type of file (perhaps if the file types were constrained to normal blobs that people store in sqlite databases, this could avoid the execution stuff mentioned above). I guess the column blob-type definition could fit into this dropdown selection: Let me know if I should open a new issue with a feature request. (This could slowly go in the direction of displaying image blob-types in the browser.) Thanks for the great tool! --- edit: just reading the rest of the twitter thread: https://twitter.com/simonw/status/1318685933256855552 perhaps this is already possible in some form with the plugin datasette-media: https://github.com/simonw/datasette-media","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",725996507,Make it possible to download BLOB data from the Datasette UI, https://github.com/simonw/datasette/issues/1036#issuecomment-762391426,https://api.github.com/repos/simonw/datasette/issues/1036,762391426,MDEyOklzc3VlQ29tbWVudDc2MjM5MTQyNg==,4997607,philshem,2021-01-18T17:45:00Z,2021-01-18T17:45:00Z,NONE,"It might be possible with this library: https://docs.python.org/3/library/imghdr.html quick test of the downloaded blob: ``` >>> import imghdr >>> imghdr.what('material_culture-1-image.blob') 'jpeg' ``` The output plugin would be cool. I'll look into making my first datasette plugin. I'm also imagining displaying the image in the browser -- but that would be a step 2. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",725996507,Make it possible to download BLOB data from the Datasette UI, https://github.com/simonw/datasette/issues/1050#issuecomment-718317997,https://api.github.com/repos/simonw/datasette/issues/1050,718317997,MDEyOklzc3VlQ29tbWVudDcxODMxNzk5Nw==,283343,thadk,2020-10-29T02:24:50Z,2020-10-29T02:29:24Z,NONE,"Unsolicited feedback for an unreleased feature of the [current](https://github.com/simonw/datasette/commit/5e0b72247ecab4ce0fcec599b77a83d73a480872) unreleased GitHub version (I casually wanted to access a blob row) – the existing #1036 route doesn't support special characters in database or table names (e.g. `@()` ). Maybe this is motivation for your new idea here. Also I got this error/crash with my blob and wasn't able to get the file: https://gist.github.com/thadk/28ac32af0e88747ce9056c90b0b19d34","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",729057388,Switch to .blob render extension for BLOB downloads, https://github.com/simonw/datasette/issues/1082#issuecomment-721547177,https://api.github.com/repos/simonw/datasette/issues/1082,721547177,MDEyOklzc3VlQ29tbWVudDcyMTU0NzE3Nw==,39538958,justmars,2020-11-04T06:52:30Z,2020-11-04T06:53:16Z,NONE,"I think I tried the same db size on the following scenarios in Digital Ocean: 1. Basic ($5/month) with 512MB RAM 2. Basic ($10/month) with 1GB RAM 3. Pro ($12/month) with 1GB RAM All such attempts conked out with ""out of memory"" errors","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",735852274,DigitalOcean buildpack memory errors for large sqlite db?, https://github.com/simonw/datasette/issues/1091#issuecomment-726798745,https://api.github.com/repos/simonw/datasette/issues/1091,726798745,MDEyOklzc3VlQ29tbWVudDcyNjc5ODc0NQ==,6739646,tballison,2020-11-13T14:35:22Z,2020-11-13T14:35:22Z,NONE,"I'm starting this with docker like so: `docker run --name datasette -d -p 8001:8001 -v `pwd`:/mnt datasetteproject/datasette datasette -p 8001 -h 0.0.0.0 /mnt/file_profiles.db --config sql_time_limit_ms:120000 --config max_returned_rows:100000 --config base_url:/datasette/ --config cache_size_kb:50000` I'm not doing any templating or anything else custom. Apropos of nothing, I swapped out a simpler db, so this query should now work: https://corpora.tika.apache.org/datasette/file_profiles?sql=select%0D%0A++*%0D%0Afrom%0D%0A++file_profiles+fp%0D%0Alimit%0D%0A++10","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",742011049,.json and .csv exports fail to apply base_url, https://github.com/simonw/datasette/issues/1091#issuecomment-726801731,https://api.github.com/repos/simonw/datasette/issues/1091,726801731,MDEyOklzc3VlQ29tbWVudDcyNjgwMTczMQ==,6739646,tballison,2020-11-13T14:40:56Z,2020-11-13T14:40:56Z,NONE,"My headers aren't clickable/sortable with custom sql, but I think that's by design. In the default view, https://corpora.tika.apache.org/datasette/file_profiles/file_profiles, ah, y, now I see that the headers should be sortable, but you're right the base_url is not applied. base_url works with ""View and Edit SQL"" and with ""(advanced)"" As you point out, does not work with the export csv, json, other or with the ""Next page"" navigational button at the bottom.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",742011049,.json and .csv exports fail to apply base_url, https://github.com/simonw/datasette/issues/1091#issuecomment-729018386,https://api.github.com/repos/simonw/datasette/issues/1091,729018386,MDEyOklzc3VlQ29tbWVudDcyOTAxODM4Ng==,6739646,tballison,2020-11-17T15:48:58Z,2020-11-17T15:48:58Z,NONE,"I don't think we are, but I'll check with Maruan. I think this is the relevant part of our config? ``` Alias ""/base/"" ""/usr/share/corpora/"" Options +Indexes -Multiviews AllowOverride None ProxyPreserveHost On ProxyPass /datasette http://0.0.0.0:8001 ProxyPassReverse /datasette http://0.0.0.0:8001 ``` ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",742011049,.json and .csv exports fail to apply base_url, https://github.com/simonw/datasette/issues/1091#issuecomment-729045320,https://api.github.com/repos/simonw/datasette/issues/1091,729045320,MDEyOklzc3VlQ29tbWVudDcyOTA0NTMyMA==,6739646,tballison,2020-11-17T16:31:00Z,2020-11-17T16:31:00Z,NONE,We're using mod_proxy.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",742011049,.json and .csv exports fail to apply base_url, https://github.com/simonw/datasette/issues/1091#issuecomment-741804334,https://api.github.com/repos/simonw/datasette/issues/1091,741804334,MDEyOklzc3VlQ29tbWVudDc0MTgwNDMzNA==,6739646,tballison,2020-12-09T14:26:05Z,2020-12-09T14:26:05Z,NONE,"Anything we can do to help debug this? Thank you, again!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",742011049,.json and .csv exports fail to apply base_url, https://github.com/simonw/datasette/issues/1091#issuecomment-742001510,https://api.github.com/repos/simonw/datasette/issues/1091,742001510,MDEyOklzc3VlQ29tbWVudDc0MjAwMTUxMA==,6739646,tballison,2020-12-09T19:36:42Z,2020-12-09T19:38:04Z,NONE,"I don't think this fixes it: ``` grep -R datasette . ./sites-available/000-default.conf: ProxyPass /datasette http://127.0.0.1:8001/ ./sites-available/000-default.conf: #ProxyPassReverse /datasette http://127.0.0.1:8001/ ./sites-available/corpora-le-ssl.conf: ProxyPass /datasette http://0.0.0.0:8001 ./sites-available/corpora-le-ssl.conf: #ProxyPassReverse /datasette http://0.0.0.0:8001 ./sites-enabled/corpora-le-ssl.conf: ProxyPass /datasette http://0.0.0.0:8001 ./sites-enabled/corpora-le-ssl.conf: #ProxyPassReverse /datasette http://0.0.0.0:8001 ``` And I confirmed that I actually restarted the server. :rofl: https://corpora.tika.apache.org/datasette/file_profiles","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",742011049,.json and .csv exports fail to apply base_url, https://github.com/simonw/datasette/issues/1091#issuecomment-742010306,https://api.github.com/repos/simonw/datasette/issues/1091,742010306,MDEyOklzc3VlQ29tbWVudDc0MjAxMDMwNg==,6739646,tballison,2020-12-09T19:53:18Z,2020-12-09T19:59:52Z,NONE,"I can't imagine this helps (esp. given your point about potential rewrites), but you can see that /datasette/ was correctly added to the sql form, but not to the ""export-links"" ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",742011049,.json and .csv exports fail to apply base_url, https://github.com/simonw/datasette/issues/1091#issuecomment-756425587,https://api.github.com/repos/simonw/datasette/issues/1091,756425587,MDEyOklzc3VlQ29tbWVudDc1NjQyNTU4Nw==,19328961,henry501,2021-01-07T22:27:19Z,2021-01-07T22:27:19Z,NONE,"I found this issue while troubleshooting the same behavior with an nginx reverse proxy. The solution was to make sure I set: `proxy_pass http://server:8001/baseurl/ ` instead of just: `proxy_pass http://server:8001 ` The custom SQL query and header links are now correct. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",742011049,.json and .csv exports fail to apply base_url, https://github.com/simonw/datasette/issues/1091#issuecomment-758280611,https://api.github.com/repos/simonw/datasette/issues/1091,758280611,MDEyOklzc3VlQ29tbWVudDc1ODI4MDYxMQ==,6739646,tballison,2021-01-11T23:06:10Z,2021-01-11T23:06:10Z,NONE,"+1 Yep! Fixes it. If I navigate to https://corpora.tika.apache.org/datasette, I get a 404 (database not found: datasette), but if I navigate to https://corpora.tika.apache.org/datasette/file_profiles/, everything WORKS! Thank you!","{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 1, ""eyes"": 0}",742011049,.json and .csv exports fail to apply base_url, https://github.com/simonw/datasette/issues/1091#issuecomment-758448525,https://api.github.com/repos/simonw/datasette/issues/1091,758448525,MDEyOklzc3VlQ29tbWVudDc1ODQ0ODUyNQ==,19328961,henry501,2021-01-12T06:55:08Z,2021-01-12T06:55:08Z,NONE,"Great, really happy I could help! Reverse proxies get tricky.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",742011049,.json and .csv exports fail to apply base_url, https://github.com/simonw/datasette/issues/1091#issuecomment-758668359,https://api.github.com/repos/simonw/datasette/issues/1091,758668359,MDEyOklzc3VlQ29tbWVudDc1ODY2ODM1OQ==,6739646,tballison,2021-01-12T13:52:29Z,2021-01-12T13:52:29Z,NONE,"Y, thank you to both of you!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",742011049,.json and .csv exports fail to apply base_url, https://github.com/simonw/datasette/issues/1094#issuecomment-731260091,https://api.github.com/repos/simonw/datasette/issues/1094,731260091,MDEyOklzc3VlQ29tbWVudDczMTI2MDA5MQ==,4808085,bapowell,2020-11-20T16:11:29Z,2020-11-20T16:11:29Z,NONE,"I can confirm this issue, running version 0.51.1 under Windows. Fixed by commenting out the following line near the top of datasette\utils\asgi.py : `#from os import EX_CANTCREAT` ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",743011397,import EX_CANTCREAT means datasette fails to work on Windows, https://github.com/simonw/datasette/issues/1116#issuecomment-736005833,https://api.github.com/repos/simonw/datasette/issues/1116,736005833,MDEyOklzc3VlQ29tbWVudDczNjAwNTgzMw==,2789593,nattaylor,2020-11-30T19:54:39Z,2020-11-30T19:54:39Z,NONE,"@simonw thanks for investigating so quickly. If it is undesirable to change that hidden behavior, maybe something like this is a suitable workaround: ``` SELECT * FROM pragma_table_xinfo('deeds') where hidden in (0,2); 0|body|TEXT|0||0|0 1|id|INT GENERATED ALWAYS|0||0|2 2|consideration|INT GENERATED ALWAYS|0||0|2 ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",753668177,GENERATED column support, https://github.com/simonw/datasette/issues/1134#issuecomment-742260116,https://api.github.com/repos/simonw/datasette/issues/1134,742260116,MDEyOklzc3VlQ29tbWVudDc0MjI2MDExNg==,2181410,clausjuhl,2020-12-10T05:57:17Z,2020-12-10T05:57:17Z,NONE,"Hi Simon Thank you for the quick fix! And glad you like our use of Datasette (launches 1. january 2021). It's a site that currently (more to come) makes all minutes and their annexes from Aarhus City Council and the major committees (1997-2019) available to the public. So we're putting Datasette to good use :)","{""total_count"": 2, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 2, ""rocket"": 0, ""eyes"": 0}",760312579,"""_searchmode=raw"" throws an index out of range error when combined with ""_search_COLUMN""", https://github.com/simonw/datasette/issues/1142#issuecomment-743732440,https://api.github.com/repos/simonw/datasette/issues/1142,743732440,MDEyOklzc3VlQ29tbWVudDc0MzczMjQ0MA==,6622733,nitinpaultifr,2020-12-12T09:56:40Z,2020-12-12T09:56:40Z,NONE,'Include all rows' seem like a fairly obvious alternative,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",763361458,"""Stream all rows"" is not at all obvious", https://github.com/simonw/datasette/issues/1142#issuecomment-743998792,https://api.github.com/repos/simonw/datasette/issues/1142,743998792,MDEyOklzc3VlQ29tbWVudDc0Mzk5ODc5Mg==,6622733,nitinpaultifr,2020-12-13T12:14:06Z,2020-12-13T12:14:06Z,NONE,"Agreed, it would definitely provide better controls. However, I do feel it makes for a bit of inconsistent UX for the 'Advanced export' section, with links to download for JSON, checkboxes and radio buttons + button to download for CSV. Do you think this example makes the UX a bit nicer/consistent? ![Screenshot 2020-12-13 at 5 38 43 PM](https://user-images.githubusercontent.com/6622733/102011444-1dc1cd00-3d6a-11eb-9e38-5af198161e80.png) I could give it a try if you'd like but I've never contributed to an actual project! ","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",763361458,"""Stream all rows"" is not at all obvious", https://github.com/simonw/datasette/issues/1142#issuecomment-744522099,https://api.github.com/repos/simonw/datasette/issues/1142,744522099,MDEyOklzc3VlQ29tbWVudDc0NDUyMjA5OQ==,6622733,nitinpaultifr,2020-12-14T15:37:47Z,2020-12-14T15:37:47Z,NONE,"Alright I could give it a try! This might be a stupid question, can you tell me how to run the server from my fork? So that I can test the changes?","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",763361458,"""Stream all rows"" is not at all obvious", https://github.com/simonw/datasette/issues/1142#issuecomment-745162571,https://api.github.com/repos/simonw/datasette/issues/1142,745162571,MDEyOklzc3VlQ29tbWVudDc0NTE2MjU3MQ==,6622733,nitinpaultifr,2020-12-15T09:22:58Z,2020-12-15T09:22:58Z,NONE,"You're right, probably more straightforward to have the links for JSON. I was imagining to toggle the `href` for the 'Export JSON' link (button) to the selected shape, but it'll probably be needlessly complex in the end.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",763361458,"""Stream all rows"" is not at all obvious", https://github.com/simonw/datasette/issues/1143#issuecomment-744618787,https://api.github.com/repos/simonw/datasette/issues/1143,744618787,MDEyOklzc3VlQ29tbWVudDc0NDYxODc4Nw==,114388,yurivish,2020-12-14T18:15:00Z,2020-12-15T02:21:53Z,NONE,"From a quick look at the README, it does seem to do everything I need, thanks! I think the argument for inclusion in core is to lower the chances of unwanted data access. A local server can be accessed by anybody who can make an HTTP request to your computer regardless of CORS rules, but the default `*` rule additionally opens up access to the local instance to any website you visit while it is running. That's probably not what people typically intend, particularly when the data is of a sensitive nature. A default of requiring the user to specify the origin (allowing `*` but encouraging a narrower scope) would solve this problem entirely, I think. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",764059235,"More flexible CORS support in core, to encourage good security practices", https://github.com/simonw/datasette/issues/1144#issuecomment-744489028,https://api.github.com/repos/simonw/datasette/issues/1144,744489028,MDEyOklzc3VlQ29tbWVudDc0NDQ4OTAyOA==,475613,MarkusH,2020-12-14T14:47:11Z,2020-12-14T14:47:11Z,NONE,"Thanks for opening the issue, @simonw. Let me elaborate on my Tweets. [datasette-chartjs](https://github.com/MarkusH/datasette-chartjs) provides drop down lists to pick the chart visualization (e.g. bar, line, doughnut, pie, ...) as well as the column used for the ""x axis"" (e.g. time). A user can change the values on-demand. The chart will be redrawn w/o querying the database again. However, if a user wants to change the underlying query, they will use the SQL field provided by datasette or any of the other datasette built-in features to amend a query. In order to maintain a user's selections for the plugin, datasette-chartjs copies some parts of [datasette-vega](https://github.com/simonw/datasette-vega) which persist the chosen visualization and column in the hash part of a URL (the stuff behind the `#`). The plugin load the config from the hash upon initialization on the next page and use it accordingly. Additionally, datasette-vega and datasette-chartjs need to make sure to include the hash in all links and forms that cause a reload of the page. This is, such that the config persists between clicks. This ticket is about moving thes parts into datasette that provide the functionality to do so. This includes: 1. a way to load config options with a given prefix from the current URL hash 1. a way to update the current URL hash with a new config value or a bunch of config options 1. updating all necessary links and forms on the current page to include the URL hash whenever its updated 1. to prevent leaking config options to external pages, only ""internal"" links should be updated There's another, optional, feature that we might want to think about during the design phase: the scope of the config. Links within a datasette instance have 1 of 3 scopes: 1. global, for the whole datasette project 1. database, for all tables in a database 1. table, only for a table within a database When updating the links and forms as pointed out in 3. above, it might be worth considering which links need to be updated. I could imagine a plugin that wants to persist some setting across all tables within a database but another setting only within a table.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",765637324,JavaScript to help plugins interact with the fragment part of the URL, https://github.com/simonw/datasette/issues/1150#issuecomment-751476406,https://api.github.com/repos/simonw/datasette/issues/1150,751476406,MDEyOklzc3VlQ29tbWVudDc1MTQ3NjQwNg==,18221871,noklam,2020-12-27T14:51:39Z,2020-12-27T14:51:39Z,NONE,"I like the idea of _internal, it's a nice way to get a data catalog quickly. I wonder if this trick applies to db other than SQLite.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",770436876,Maintain an in-memory SQLite table of connected databases and their tables, https://github.com/simonw/datasette/issues/1165#issuecomment-753033121,https://api.github.com/repos/simonw/datasette/issues/1165,753033121,MDEyOklzc3VlQ29tbWVudDc1MzAzMzEyMQ==,154364,dracos,2020-12-31T19:33:47Z,2020-12-31T19:33:47Z,NONE,"Sorry to go on about it, but it's my only example ;) And thought it might be of interest/use. Here is FixMyStreet's Cypress workflow https://github.com/mysociety/fixmystreet/blob/master/.github/workflows/cypress.yml with the master script that sets up server etc at https://github.com/mysociety/fixmystreet/blob/master/bin/browser-tests (that has features such as working inside/outside Vagrant, and can do JS code coverage) and then the tests are at https://github.com/mysociety/fixmystreet/tree/master/.cypress/cypress/integration","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",776635426,Mechanism for executing JavaScript unit tests, https://github.com/simonw/datasette/issues/1166#issuecomment-783560017,https://api.github.com/repos/simonw/datasette/issues/1166,783560017,MDEyOklzc3VlQ29tbWVudDc4MzU2MDAxNw==,94334,thorn0,2021-02-22T18:00:57Z,2021-02-22T18:13:11Z,NONE,"Hi! I don't think Prettier supports this syntax for globs: `datasette/static/*[!.min].js` Are you sure that works? Prettier uses https://github.com/mrmlnc/fast-glob, which in turn uses https://github.com/micromatch/micromatch, and the docs for these packages don't mention this syntax. As per the docs, square brackets should work as in regexes (`foo-[1-5].js`). Tested it. Apparently, it works as a negated character class in regexes (like `[^.min]`). I wonder where this syntax comes from. Micromatch doesn't support that: ```js micromatch(['static/table.js', 'static/n.js'], ['static/*[!.min].js']); // result: [""static/n.js""] -- brackets are treated like [!.min] in regexes, without negation ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",777140799,Adopt Prettier for JavaScript code formatting, https://github.com/simonw/datasette/issues/1171#issuecomment-754911290,https://api.github.com/repos/simonw/datasette/issues/1171,754911290,MDEyOklzc3VlQ29tbWVudDc1NDkxMTI5MA==,59874,rcoup,2021-01-05T21:31:15Z,2021-01-05T21:31:15Z,NONE,"We did this for [Sno](https://sno.earth) under macOS — it's a PyInstaller binary/setup which uses [Packages](http://s.sudre.free.fr/Software/Packages/about.html) for packaging. * [Building & Signing](https://github.com/koordinates/sno/blob/master/platforms/Makefile#L67-L95) * [Packaging & Notarizing](https://github.com/koordinates/sno/blob/master/platforms/Makefile#L121-L215) * [Github Workflow](https://github.com/koordinates/sno/blob/master/.github/workflows/build.yml#L228-L269) has the CI side of it FYI (if you ever get to it) for Windows you need to get a code signing certificate. And if you want automated CI, you'll want to get an ""EV CodeSigning for HSM"" certificate from GlobalSign, which then lets you put the certificate into Azure Key Vault. Which you can use with [azuresigntool](https://github.com/vcsjones/AzureSignTool) to sign your code & installer. (Non-EV certificates are a waste of time, the user still gets big warnings at install time). ","{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 1, ""rocket"": 0, ""eyes"": 0}",778450486,GitHub Actions workflow to build and sign macOS binary executables, https://github.com/simonw/datasette/issues/1175#issuecomment-1195442266,https://api.github.com/repos/simonw/datasette/issues/1175,1195442266,IC_kwDOBm6k_c5HQQBa,8523191,RamiAwar,2022-07-26T12:52:10Z,2022-07-26T12:52:10Z,NONE,"I'm using this in a separate FastAPI app, worked perfectly when I changed the AsyncBoundLogger to BoundLogger only. Also for some reason, I'm now getting some logs surfacing from internal packages, like Elasticsearch. But don't have time to deal with that now.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",779156520,Use structlog for logging, https://github.com/simonw/datasette/issues/1175#issuecomment-762488336,https://api.github.com/repos/simonw/datasette/issues/1175,762488336,MDEyOklzc3VlQ29tbWVudDc2MjQ4ODMzNg==,758858,hannseman,2021-01-18T21:59:28Z,2021-01-18T22:00:31Z,NONE,"I encountered your issue when trying to find a solution and came up with the following, maybe it can help. ```python import logging.config from typing import Tuple import structlog import uvicorn from example.config import settings shared_processors: Tuple[structlog.types.Processor, ...] = ( structlog.contextvars.merge_contextvars, structlog.stdlib.add_logger_name, structlog.stdlib.add_log_level, structlog.processors.TimeStamper(fmt=""iso""), ) logging_config = { ""version"": 1, ""disable_existing_loggers"": False, ""formatters"": { ""json"": { ""()"": structlog.stdlib.ProcessorFormatter, ""processor"": structlog.processors.JSONRenderer(), ""foreign_pre_chain"": shared_processors, }, ""console"": { ""()"": structlog.stdlib.ProcessorFormatter, ""processor"": structlog.dev.ConsoleRenderer(), ""foreign_pre_chain"": shared_processors, }, **uvicorn.config.LOGGING_CONFIG[""formatters""], }, ""handlers"": { ""default"": { ""level"": ""DEBUG"", ""class"": ""logging.StreamHandler"", ""formatter"": ""json"" if not settings.debug else ""console"", }, ""uvicorn.access"": { ""level"": ""INFO"", ""class"": ""logging.StreamHandler"", ""formatter"": ""access"", }, ""uvicorn.default"": { ""level"": ""INFO"", ""class"": ""logging.StreamHandler"", ""formatter"": ""default"", }, }, ""loggers"": { """": {""handlers"": [""default""], ""level"": ""INFO""}, ""uvicorn.error"": { ""handlers"": [""default"" if not settings.debug else ""uvicorn.default""], ""level"": ""INFO"", ""propagate"": False, }, ""uvicorn.access"": { ""handlers"": [""default"" if not settings.debug else ""uvicorn.access""], ""level"": ""INFO"", ""propagate"": False, }, }, } def setup_logging() -> None: structlog.configure( processors=[ structlog.stdlib.filter_by_level, *shared_processors, structlog.stdlib.PositionalArgumentsFormatter(), structlog.processors.StackInfoRenderer(), structlog.processors.format_exc_info, structlog.stdlib.ProcessorFormatter.wrap_for_formatter, ], context_class=dict, logger_factory=structlog.stdlib.LoggerFactory(), wrapper_class=structlog.stdlib.AsyncBoundLogger, cache_logger_on_first_use=True, ) logging.config.dictConfig(logging_config) ``` And then it'll be run on the startup event: ```python @app.on_event(""startup"") async def startup_event() -> None: setup_logging() ``` It depends on a setting called `debug` which controls if it should output the regular uvicorn logging or json. ","{""total_count"": 15, ""+1"": 7, ""-1"": 0, ""laugh"": 1, ""hooray"": 1, ""confused"": 0, ""heart"": 5, ""rocket"": 1, ""eyes"": 0}",779156520,Use structlog for logging, https://github.com/simonw/datasette/issues/1175#issuecomment-984569477,https://api.github.com/repos/simonw/datasette/issues/1175,984569477,IC_kwDOBm6k_c46r1aF,24821294,AnkitKundariya,2021-12-02T12:09:30Z,2021-12-02T12:09:30Z,NONE,"@hannseman I have tried the above suggestion given by you but somehow I'm getting the below error. _note : I'm running my application with Docker._ `app_1 | {""event"": ""Exception in ASGI application\n"", ""exc_info"": ["""", ""RuntimeError('no running event loop')"", """"], ""logger"": ""uvicorn.error"", ""level"": ""error"", ""timestamp"": ""2021-12-02T12:06:36.011448Z""} `","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",779156520,Use structlog for logging, https://github.com/simonw/datasette/issues/1181#issuecomment-998999230,https://api.github.com/repos/simonw/datasette/issues/1181,998999230,IC_kwDOBm6k_c47i4S-,9308268,rayvoelker,2021-12-21T18:25:15Z,2021-12-21T18:25:15Z,NONE,"I wonder if I'm encountering the same bug (or something related). I had previously been using the .csv feature to run queries and then fetch results for the pandas `read_csv()` function, but it seems to have stopped working recently. https://ilsweb.cincinnatilibrary.org/collection-analysis/collection-analysis/current_collection-3d56dbf.csv?sql=select%0D%0A++*%0D%0Afrom%0D%0A++bib%0D%0Alimit%0D%0A++100&_size=max Datasette v0.59.4 ![image](https://user-images.githubusercontent.com/9308268/146979957-66911877-2cd9-4022-bc76-fd54e4a3a6f7.png) ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",781262510,"Certain database names results in 404: ""Database not found: None""", https://github.com/simonw/datasette/issues/1196#issuecomment-765639968,https://api.github.com/repos/simonw/datasette/issues/1196,765639968,MDEyOklzc3VlQ29tbWVudDc2NTYzOTk2OA==,2826376,QAInsights,2021-01-22T19:37:15Z,2021-01-22T19:37:15Z,NONE,I tried deployment in WSL. It is working fine https://jmeter.vercel.app/,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",791237799,Access Denied Error in Windows, https://github.com/simonw/datasette/issues/1196#issuecomment-819775388,https://api.github.com/repos/simonw/datasette/issues/1196,819775388,MDEyOklzc3VlQ29tbWVudDgxOTc3NTM4OA==,1219001,robroc,2021-04-14T19:28:38Z,2021-04-14T19:28:38Z,NONE,@QAInsights I'm having a similar problem when publishing to Cloud Run on Windows. It's not able to access certain packages in my conda environment where Datasette is installed. Can you explain how you got it to work in WSL? Were you able to access the .db file in the Windows file system? Thank you.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",791237799,Access Denied Error in Windows, https://github.com/simonw/datasette/issues/120#issuecomment-355487646,https://api.github.com/repos/simonw/datasette/issues/120,355487646,MDEyOklzc3VlQ29tbWVudDM1NTQ4NzY0Ng==,723567,nickdirienzo,2018-01-05T07:10:12Z,2018-01-05T07:10:12Z,NONE,"Ah, glad I found this issue. I have private data that I'd like to share to a few different people. Personally, a shared username and password would be sufficient for me, more-or-less Basic Auth. Do you have more complex requirements in mind? I'm not sure if ""plugin"" means ""build a plugin"" or ""find a plugin"" or something else entirely. FWIW, I stumbled upon [sanic-auth](https://github.com/pyx/sanic-auth) which looks like a new project to bring some interfaces around auth to sanic, similar to Flask. Alternatively, it shouldn't be too bad to add in Basic Auth. If we went down that route, that would probably be best built as a separate package for sanic that `datasette` brings in. What are your thoughts around this?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",275087397,Plugin that adds an authentication layer of some sort, https://github.com/simonw/datasette/issues/120#issuecomment-439421164,https://api.github.com/repos/simonw/datasette/issues/120,439421164,MDEyOklzc3VlQ29tbWVudDQzOTQyMTE2NA==,36796532,ad-si,2018-11-16T15:05:18Z,2018-11-16T15:05:18Z,NONE,This would be an awesome feature ❤️ ,"{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",275087397,Plugin that adds an authentication layer of some sort, https://github.com/simonw/datasette/issues/120#issuecomment-496966227,https://api.github.com/repos/simonw/datasette/issues/120,496966227,MDEyOklzc3VlQ29tbWVudDQ5Njk2NjIyNw==,26342344,duarteocarmo,2019-05-29T14:40:52Z,2019-05-29T14:40:52Z,NONE,I would really like this. If you give me some pointers @simonw I'm willing to PR!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",275087397,Plugin that adds an authentication layer of some sort, https://github.com/simonw/datasette/issues/1210#issuecomment-773977128,https://api.github.com/repos/simonw/datasette/issues/1210,773977128,MDEyOklzc3VlQ29tbWVudDc3Mzk3NzEyOA==,525780,heyarne,2021-02-05T11:30:34Z,2021-02-05T11:30:34Z,NONE,"Thanks for your quick reply! Having changed my `metadata.yml`, queries AND database I can't really reproduce it anymore, sorry. But at least I'm happy to say that it works now! :) Thanks again for the super nifty tool, very appreciated.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",796234313,Immutable Database w/ Canned Queries, https://github.com/simonw/datasette/issues/1217#issuecomment-1303299509,https://api.github.com/repos/simonw/datasette/issues/1217,1303299509,IC_kwDOBm6k_c5NrsW1,31312775,mattmalcher,2022-11-04T11:35:13Z,2022-11-04T11:35:13Z,NONE,"The following worked for deployment to RStudio / Posit Connect An app.py along the lines of: ```python from pathlib import Path from datasette.app import Datasette example_db = Path(__file__).parent / ""data"" / ""example.db"" # use connect 'Content URL' setting here to set app to /datasette/ ds = Datasette(files=[example_db], settings={""base_url"": ""/datasette/""}) ds._startup_invoked = True ds_app = ds.app() ``` Then to deploy, from within a virtualenv with `rsconnect-python` ```sh rsconnect write-manifest fastapi -p $VIRTUAL_ENV/bin/python -e app:ds_app -o . rsconnect deploy manifest manifest.json -n -t ""Example Datasette"" ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",802513359,Possible to deploy as a python app (for Rstudio connect server)?, https://github.com/simonw/datasette/issues/1217#issuecomment-1303301786,https://api.github.com/repos/simonw/datasette/issues/1217,1303301786,IC_kwDOBm6k_c5Nrs6a,31312775,mattmalcher,2022-11-04T11:37:52Z,2022-11-04T11:37:52Z,NONE,"All seems to work well, but there are some glitches to do with proxies, see #1883 . Excited to use this :)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",802513359,Possible to deploy as a python app (for Rstudio connect server)?, https://github.com/simonw/datasette/issues/1217#issuecomment-774385092,https://api.github.com/repos/simonw/datasette/issues/1217,774385092,MDEyOklzc3VlQ29tbWVudDc3NDM4NTA5Mg==,6165713,plpxsk,2021-02-06T02:49:11Z,2021-02-06T02:49:11Z,NONE,"A good reference seems to be the note to run `datasette` as a module in https://github.com/simonw/datasette/pull/556 ","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",802513359,Possible to deploy as a python app (for Rstudio connect server)?, https://github.com/simonw/datasette/issues/1217#issuecomment-774528913,https://api.github.com/repos/simonw/datasette/issues/1217,774528913,MDEyOklzc3VlQ29tbWVudDc3NDUyODkxMw==,639730,virtadpt,2021-02-06T19:23:41Z,2021-02-06T19:23:41Z,NONE,I've had a lot of success running it as an OpenFaaS lambda.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",802513359,Possible to deploy as a python app (for Rstudio connect server)?, https://github.com/simonw/datasette/issues/1218#issuecomment-784157345,https://api.github.com/repos/simonw/datasette/issues/1218,784157345,MDEyOklzc3VlQ29tbWVudDc4NDE1NzM0NQ==,1244799,soobrosa,2021-02-23T12:12:17Z,2021-02-23T12:12:17Z,NONE,"Topline this fixed the same problem for me. ``` brew install python@3.7 ln -s /usr/local/opt/python@3.7/bin/python3.7 /usr/local/opt/python/bin/python3.7 pip3 uninstall -y numpy pip3 uninstall -y setuptools pip3 install setuptools pip3 install numpy pip3 install datasette-publish-fly ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",803356942, /usr/local/opt/python3/bin/python3.6: bad interpreter: No such file or directory, https://github.com/simonw/datasette/issues/1220#issuecomment-778008752,https://api.github.com/repos/simonw/datasette/issues/1220,778008752,MDEyOklzc3VlQ29tbWVudDc3ODAwODc1Mg==,30607,aborruso,2021-02-12T06:37:34Z,2021-02-12T06:37:34Z,NONE,"I have used my path, I'm running it from the folder in wich I have the db. Do I must an absolute path? Do I must create exactly that folder?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",806743116,Installing datasette via docker: Path 'fixtures.db' does not exist, https://github.com/simonw/datasette/issues/1220#issuecomment-778467759,https://api.github.com/repos/simonw/datasette/issues/1220,778467759,MDEyOklzc3VlQ29tbWVudDc3ODQ2Nzc1OQ==,30607,aborruso,2021-02-12T21:35:17Z,2021-02-12T21:35:17Z,NONE,Thank you,"{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",806743116,Installing datasette via docker: Path 'fixtures.db' does not exist, https://github.com/simonw/datasette/issues/1228#issuecomment-1072954795,https://api.github.com/repos/simonw/datasette/issues/1228,1072954795,IC_kwDOBm6k_c4_8_2r,7107523,Kabouik,2022-03-19T06:44:40Z,2022-03-19T06:44:40Z,NONE,"> ... unless your data had a column called `n`? Exactly, that's highly likely even though I can't double check from this computer just now. Thanks!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",810397025,500 error caused by faceting if a column called `n` exists, https://github.com/simonw/datasette/issues/123#issuecomment-698110186,https://api.github.com/repos/simonw/datasette/issues/123,698110186,MDEyOklzc3VlQ29tbWVudDY5ODExMDE4Ng==,45416,obra,2020-09-24T04:49:51Z,2020-09-24T04:49:51Z,NONE,"As a half-measure, I'd get value out of being able to upload a CSV and have datasette run csv-to-sqlite on it.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",275125561,Datasette serve should accept paths/URLs to CSVs and other file formats, https://github.com/simonw/datasette/issues/123#issuecomment-698174957,https://api.github.com/repos/simonw/datasette/issues/123,698174957,MDEyOklzc3VlQ29tbWVudDY5ODE3NDk1Nw==,45416,obra,2020-09-24T07:42:05Z,2020-09-24T07:42:05Z,NONE," Oh. Awesome. On Thu, Sep 24, 2020 at 12:28:53AM -0700, Simon Willison wrote: > @obra there's a plugin for that! https://github.com/simonw/ > datasette-upload-csvs > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub, or unsubscribe.* > -- ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",275125561,Datasette serve should accept paths/URLs to CSVs and other file formats, https://github.com/simonw/datasette/issues/123#issuecomment-735440555,https://api.github.com/repos/simonw/datasette/issues/123,735440555,MDEyOklzc3VlQ29tbWVudDczNTQ0MDU1NQ==,11912854,jsancho-gpl,2020-11-29T19:12:30Z,2020-11-29T19:12:30Z,NONE,"[datasette-connectors](https://github.com/pytables/datasette-connectors) provides an API for making connectors for any file based database. For example, [datasette-pytables](https://github.com/pytables/datasette-pytables) is a connector for HDF5 files, so now is possible to use this type of files with Datasette. It'd be nice if Datasette coud provide that API directly, for other file formats and for urls too.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",275125561,Datasette serve should accept paths/URLs to CSVs and other file formats, https://github.com/simonw/datasette/issues/123#issuecomment-882096402,https://api.github.com/repos/simonw/datasette/issues/123,882096402,IC_kwDOBm6k_c40k7kS,921217,RayBB,2021-07-18T18:07:29Z,2021-07-18T18:07:29Z,NONE,"I also love the idea for this feature and wonder if it could work without having to download the whole database into memory at once if it's a rather large db. Obviously this could be slower but could have many use cases. My comment is partially inspired by this post about streaming sqlite dbs from github pages or such https://news.ycombinator.com/item?id=27016630 ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",275125561,Datasette serve should accept paths/URLs to CSVs and other file formats, https://github.com/simonw/datasette/issues/1230#issuecomment-781330466,https://api.github.com/repos/simonw/datasette/issues/1230,781330466,MDEyOklzc3VlQ29tbWVudDc4MTMzMDQ2Ng==,7107523,Kabouik,2021-02-18T13:06:22Z,2021-02-18T15:22:15Z,NONE,"[Edit] Oh, I just saw the ""Load all"" button under the cluster map as well as the [setting to alter the max number or results](https://docs.datasette.io/en/stable/settings.html#max-returned-rows). So I guess this issue only is about the Vega charts.
Note that datasette-cluster-map also seems to be limited to 998 displayed points: ![ss-2021-02-18_140548](https://user-images.githubusercontent.com/7107523/108361225-15fb2a80-71ea-11eb-9a19-d885e8513f55.png)
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",811054000,"Vega charts are plotted only for rows on the visible page, cluster maps only for rows in the remaining pages", https://github.com/simonw/datasette/issues/1238#issuecomment-790857004,https://api.github.com/repos/simonw/datasette/issues/1238,790857004,MDEyOklzc3VlQ29tbWVudDc5MDg1NzAwNA==,79913,tsibley,2021-03-04T19:06:55Z,2021-03-04T19:06:55Z,NONE,"@rgieseke Ah, that's super helpful. Thank you for the workaround for now!","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",813899472,Custom pages don't work with base_url setting, https://github.com/simonw/datasette/issues/124#issuecomment-346987395,https://api.github.com/repos/simonw/datasette/issues/124,346987395,MDEyOklzc3VlQ29tbWVudDM0Njk4NzM5NQ==,50138,janimo,2017-11-26T06:24:08Z,2017-11-26T06:24:08Z,NONE,"Are there performance gains when using immutable as opposed to read-only? From what I see other processes can still modify the DB when immutable, but there are no change notifications.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",275125805,Option to open readonly but not immutable, https://github.com/simonw/datasette/issues/124#issuecomment-347123991,https://api.github.com/repos/simonw/datasette/issues/124,347123991,MDEyOklzc3VlQ29tbWVudDM0NzEyMzk5MQ==,50138,janimo,2017-11-27T09:25:15Z,2017-11-27T09:25:15Z,NONE,"That's the only reference to immutable I saw as well, making me think that there may be no perceivable advantages over simply using mode=ro. Since the database is never or seldom updated the change notifications should not impact performance.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",275125805,Option to open readonly but not immutable, https://github.com/simonw/datasette/issues/1240#issuecomment-784312460,https://api.github.com/repos/simonw/datasette/issues/1240,784312460,MDEyOklzc3VlQ29tbWVudDc4NDMxMjQ2MA==,7107523,Kabouik,2021-02-23T16:07:10Z,2021-02-23T16:08:28Z,NONE,"Likewise, while answering to another issue regarding the Vega plugin, I realized that there is no such way of linking rows after a custom query, I only get this ""Link"" column with individual URLs for the default SQL view: ![ss-2021-02-23_170559](https://user-images.githubusercontent.com/7107523/108871491-1e3fd500-75f1-11eb-8f76-5d5a82cc14d7.png) Or is it there and I am just missing the option in my custom queries?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",814591962,Allow facetting on custom queries, https://github.com/simonw/datasette/issues/1241#issuecomment-784347646,https://api.github.com/repos/simonw/datasette/issues/1241,784347646,MDEyOklzc3VlQ29tbWVudDc4NDM0NzY0Ng==,7107523,Kabouik,2021-02-23T16:55:26Z,2021-02-23T16:57:39Z,NONE,"> I think it's possible that many users these days no longer assume they can paste a URL from the browser address bar (if they ever understood that at all) because to many apps are SPAs with broken URLs. Absolutely, that's why I thought my corner case with `iframe` preventing access to the datasette URL could actually be relevant in more general situations.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",814595021,Share button for copying current URL, https://github.com/simonw/datasette/issues/1245#issuecomment-812711365,https://api.github.com/repos/simonw/datasette/issues/1245,812711365,MDEyOklzc3VlQ29tbWVudDgxMjcxMTM2NQ==,1111743,jungle-boogie,2021-04-02T20:53:35Z,2021-04-02T20:53:35Z,NONE,"Yes, I agree. Alternatively, maybe the header could be at the top and bottom, above the next page button. Maybe even have the header 50 records down?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",817544251,"Sticky table column headers would be useful, especially on the query page", https://github.com/simonw/datasette/issues/1253#issuecomment-955384545,https://api.github.com/repos/simonw/datasette/issues/1253,955384545,IC_kwDOBm6k_c448gLh,1449512,dufferzafar,2021-10-30T16:00:42Z,2021-10-30T16:00:42Z,NONE,"Yeah, I was pressing Ctrl + Enter as well. Came here to open this issue and found out Shift + Enter works. @simonw Any way to configure this?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",826064552,"Capture ""Ctrl + Enter"" or ""⌘ + Enter"" to send SQL query?", https://github.com/simonw/datasette/issues/1255#issuecomment-812680519,https://api.github.com/repos/simonw/datasette/issues/1255,812680519,MDEyOklzc3VlQ29tbWVudDgxMjY4MDUxOQ==,1111743,jungle-boogie,2021-04-02T19:37:57Z,2021-04-02T19:37:57Z,NONE,"Hello, I'm also experiencing a timeout in my environment. I don't know if it's because I need more indexes or a more powerful system. My data has 1,271,111 and when I try to create a facet, there's a time out. I've tried this on two different rows that should significantly filter down data: `CITY` and `PARTY_REG`. Simon's johns_hopkins_csse_daily_reports has more rows and it setup with two facets on load. He does have four indexes created, though. Do I need more indexes? I have one simple one so far: ``` CREATE INDEX [idx_party_reg] ON [county_active] ([PARTY_REG]); ``` I'm running Datasette 0.56 installed via pip with Python 3.7.3. `4.19.0-10-amd64 #1 SMP Debian 4.19.132-1 (2020-07-24) x86_64 GNU/Linux` ``` $ cat /etc/os-release PRETTY_NAME=""Debian GNU/Linux 10 (buster)"" NAME=""Debian GNU/Linux"" VERSION_ID=""10"" VERSION=""10 (buster)"" VERSION_CODENAME=buster ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",826700095,Facets timing out but work when filtering, https://github.com/simonw/datasette/issues/1255#issuecomment-812710120,https://api.github.com/repos/simonw/datasette/issues/1255,812710120,MDEyOklzc3VlQ29tbWVudDgxMjcxMDEyMA==,1111743,jungle-boogie,2021-04-02T20:50:08Z,2021-04-02T20:50:08Z,NONE,"Hello again, I was able to get my facets running with this `settings.json`, which was lifted from one of Simon's datasette's and slightly modified. ``` { ""default_page_size"": 100, ""max_returned_rows"": 1000, ""num_sql_threads"": 3, ""sql_time_limit_ms"": 9000, ""default_facet_size"": 10, ""facet_time_limit_ms"": 9000, ""facet_suggest_time_limit_ms"": 500, ""hash_urls"": false, ""allow_facet"": true, ""suggest_facets"": false, ""default_cache_ttl"": 5, ""default_cache_ttl_hashed"": 31536000, ""cache_size_kb"": 0, ""allow_csv_stream"": true, ""max_csv_mb"": 100, ""truncate_cells_html"": 2048, ""template_debug"": false, ""base_url"": ""/"" } ```","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",826700095,Facets timing out but work when filtering, https://github.com/simonw/datasette/issues/1258#issuecomment-807459633,https://api.github.com/repos/simonw/datasette/issues/1258,807459633,MDEyOklzc3VlQ29tbWVudDgwNzQ1OTYzMw==,1385831,wdccdw,2021-03-25T20:48:33Z,2021-03-25T20:49:34Z,NONE,"What about allowing default parameters when defining the query in metadata.yml? Something like: ``` databases: fec: queries: search_by_name: params: - q default-param-values: q: ""text to search"" sql: |- SELECT... ``` For now, I'm using a custom database-.html file that hardcodes a default param in the link, but I'd rather not customize the template just for that.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",828858421,Allow canned query params to specify default values, https://github.com/simonw/datasette/issues/1261#issuecomment-803499509,https://api.github.com/repos/simonw/datasette/issues/1261,803499509,MDEyOklzc3VlQ29tbWVudDgwMzQ5OTUwOQ==,812795,brimstone,2021-03-21T02:06:43Z,2021-03-21T02:06:43Z,NONE,I can confirm 0.9.2 fixes the problem. Thanks for the fast response!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",832092321,Some links aren't properly URL encoded., https://github.com/simonw/datasette/issues/1262#issuecomment-802164134,https://api.github.com/repos/simonw/datasette/issues/1262,802164134,MDEyOklzc3VlQ29tbWVudDgwMjE2NDEzNA==,19328961,henry501,2021-03-18T17:55:00Z,2021-03-18T17:55:00Z,NONE,"Thanks for the comments. I'll take a look at the documentation to familiarize myself, as I haven't tried to write any plugins yet. With some luck I might be ready to write it when the hook is implemented.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",834602299,Plugin hook that could support 'order by random()' for table view, https://github.com/simonw/datasette/issues/1265#issuecomment-803160804,https://api.github.com/repos/simonw/datasette/issues/1265,803160804,MDEyOklzc3VlQ29tbWVudDgwMzE2MDgwNA==,468612,yunzheng,2021-03-19T22:05:12Z,2021-03-19T22:05:12Z,NONE,Wow that was fast! Thanks for this very cool project and quick update! 👍 ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",836123030,Support for HTTP Basic Authentication, https://github.com/simonw/datasette/issues/1272#issuecomment-1199115002,https://api.github.com/repos/simonw/datasette/issues/1272,1199115002,IC_kwDOBm6k_c5HeQr6,37748899,xmichele,2022-07-29T10:22:58Z,2022-07-29T10:22:58Z,NONE,"> test_dockerfile.py > I'm skipping this for the moment because the new Dockerfile shape introduced in [#1249 (comment)](https://github.com/simonw/datasette/issues/1249#issuecomment-804404544) isn't compatible with this technique, since it installs Datasette from PyPI rather than directly from the repo. > > Will need to change that if I want to do this unit tests thing. Hello, can't you copy in a later step directly in the output directory, e.g. COPY test_dockerfile.py /usr/lib/python*/.. or something like that ? ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",838245338,Unit tests for the Dockerfile, https://github.com/simonw/datasette/issues/1276#issuecomment-811209922,https://api.github.com/repos/simonw/datasette/issues/1276,811209922,MDEyOklzc3VlQ29tbWVudDgxMTIwOTkyMg==,1314318,justinallen,2021-03-31T16:27:26Z,2021-03-31T16:27:26Z,NONE,Fantastic. Thank you! ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",841456306,"Invalid SQL: ""no such table: pragma_database_list"" on database page", https://github.com/simonw/datasette/issues/1286#issuecomment-860047794,https://api.github.com/repos/simonw/datasette/issues/1286,860047794,MDEyOklzc3VlQ29tbWVudDg2MDA0Nzc5NA==,4068,frafra,2021-06-12T12:36:15Z,2021-06-12T12:36:15Z,NONE,"@mroswell That is a very nice solution. I wonder if custom classes, like `col-columnName-value` could be automatically added to cells when facets on such column are enabled, to allow custom styling without having to modify templates or add custom JavaScript code.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",849220154,Better default display of arrays of items, https://github.com/simonw/datasette/issues/1298#issuecomment-1125083348,https://api.github.com/repos/simonw/datasette/issues/1298,1125083348,IC_kwDOBm6k_c5DD2jU,7150,llimllib,2022-05-12T14:43:51Z,2022-05-12T14:43:51Z,NONE,"user report: I found this issue because the first time I tried to use datasette for real, I displayed a large table, and thought there was no horizontal scroll bar at all. I didn't even consider that I had to scroll all the way to the end of the page to find it. Just chipping in to say that this confused me, and I didn't even find the scroll bar until after I saw this issue. I don't know what the right answer is, but IMO the UI should suggest to the user that there is a way to view the data that's hidden to the right.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",855476501,improve table horizontal scroll experience, https://github.com/simonw/datasette/issues/1298#issuecomment-823064725,https://api.github.com/repos/simonw/datasette/issues/1298,823064725,MDEyOklzc3VlQ29tbWVudDgyMzA2NDcyNQ==,154364,dracos,2021-04-20T07:57:14Z,2021-04-20T07:57:14Z,NONE,"My suggestions, originally made on twitter, but might be better here now: 1. Could have a CSS shadow (one of the comments on https://stackoverflow.com/questions/44793453/how-do-i-add-a-top-and-bottom-shadow-while-scrolling-but-only-when-needed is a codepen for horizontal instead of vertical); 2. Could give the table a max-height (either the window or work out the available space) so that it is both vertically/horizontally scrollable and you don't have to scroll to the bottom in order to see this; 3. On a desktop browser, what I think I'd want is an absolute grid to work with - left query/filters, TR chart (or map), BR table. No problem with scrolling then. Here is a mockup I made when this was about the map plugin: ![image](https://user-images.githubusercontent.com/154364/115358389-82c47e00-a1b5-11eb-8a63-0ca14fd23d32.png) ![image](https://user-images.githubusercontent.com/154364/115358454-97087b00-a1b5-11eb-9501-cf884ae72d7c.png) ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",855476501,improve table horizontal scroll experience, https://github.com/simonw/datasette/issues/1298#issuecomment-823102978,https://api.github.com/repos/simonw/datasette/issues/1298,823102978,MDEyOklzc3VlQ29tbWVudDgyMzEwMjk3OA==,154364,dracos,2021-04-20T08:51:23Z,2021-04-20T08:51:23Z,NONE,"2. Max height would still let you scroll the page to underneath the facets to the table, but would mean the table would never take up more than your window size, so the horizontal scrollbar would be visible as soon as the table took up the size of the window. 3. Yes, this wouldn't be for mobile :) It'd be desktop-only styling. On mobile you can scroll much more easily with touch, anyway. In your case, perhaps better would be the whole top half would be facets, bottom left quadrant chart, bottom right table. Depends upon the particular use case, as you say.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",855476501,improve table horizontal scroll experience, https://github.com/simonw/datasette/issues/1304#issuecomment-981980048,https://api.github.com/repos/simonw/datasette/issues/1304,981980048,IC_kwDOBm6k_c46h9OQ,30934,20after4,2021-11-29T20:13:53Z,2021-11-29T20:14:11Z,NONE,There isn't any way to do this with sqlite as far as I know. The only option is to insert the right number of ? placeholders into the sql template and then provide an array of values.,"{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",863884805,"Document how to send multiple values for ""Named parameters"" ", https://github.com/simonw/datasette/issues/1304#issuecomment-988459453,https://api.github.com/repos/simonw/datasette/issues/1304,988459453,IC_kwDOBm6k_c466rG9,9308268,rayvoelker,2021-12-08T03:15:27Z,2021-12-08T03:15:27Z,NONE,"I was thinking if there were a way to use some sort of sting function to ""unpack"" the values and convert them into ints... hm","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",863884805,"Document how to send multiple values for ""Named parameters"" ", https://github.com/simonw/datasette/issues/1304#issuecomment-988461884,https://api.github.com/repos/simonw/datasette/issues/1304,988461884,IC_kwDOBm6k_c466rs8,30934,20after4,2021-12-08T03:20:26Z,2021-12-08T03:20:26Z,NONE,"The easiest or most straightforward thing to do is to use named parameters like: ```sql select * where key IN (:p1, :p2, :p3) ``` And simply construct the list of placeholders dynamically based on the number of values. Doing this is possible with datasette if you forgo ""canned queries"" and just use the raw query endpoint and pass the query sql, along with p1, p2 ... in the request.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",863884805,"Document how to send multiple values for ""Named parameters"" ", https://github.com/simonw/datasette/issues/1304#issuecomment-988463455,https://api.github.com/repos/simonw/datasette/issues/1304,988463455,IC_kwDOBm6k_c466sFf,30934,20after4,2021-12-08T03:23:14Z,2021-12-08T03:23:14Z,NONE,I actually think it would be a useful thing to add support for in datasette. It wouldn't be difficult to unwind an array of params and add the placeholders automatically.,"{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",863884805,"Document how to send multiple values for ""Named parameters"" ", https://github.com/simonw/datasette/issues/1310#issuecomment-828670621,https://api.github.com/repos/simonw/datasette/issues/1310,828670621,MDEyOklzc3VlQ29tbWVudDgyODY3MDYyMQ==,3747136,ColinMaudry,2021-04-28T18:12:08Z,2021-04-28T18:12:08Z,NONE,"Apparently, beside a string, Reponse could also [work with bytes](https://github.com/simonw/datasette/blob/master/datasette/utils/asgi.py#L338).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",870125126,I'm creating a plugin to export a spreadsheet file (.ods or .xlsx), https://github.com/simonw/datasette/issues/1310#issuecomment-829885904,https://api.github.com/repos/simonw/datasette/issues/1310,829885904,MDEyOklzc3VlQ29tbWVudDgyOTg4NTkwNA==,3747136,ColinMaudry,2021-04-30T06:58:46Z,2021-04-30T07:26:11Z,NONE,"I made it work with openpyxl. I'm not sure all the code under `@hookimpl` is necessary... but it works :) ```python from datasette import hookimpl from datasette.utils.asgi import Response from openpyxl import Workbook from openpyxl.writer.excel import save_virtual_workbook from openpyxl.cell import WriteOnlyCell from openpyxl.styles import Alignment, Font, PatternFill from tempfile import NamedTemporaryFile def render_spreadsheet(rows): wb = Workbook(write_only=True) ws = wb.create_sheet() ws = wb.active ws.title = ""decp"" columns = rows[0].keys() headers = [] for col in columns : c = WriteOnlyCell(ws, col) c.fill = PatternFill(""solid"", fgColor=""DDEFFF"") headers.append(c) ws.append(headers) for row in rows: wsRow = [] for col in columns: c = WriteOnlyCell(ws, row[col]) if col == ""objet"" : c.alignment = Alignment(wrapText = True) wsRow.append(c) ws.append(wsRow) with NamedTemporaryFile() as tmp: wb.save(tmp.name) tmp.seek(0) return Response( tmp.read(), headers={ 'Content-Disposition': 'attachment; filename=decp.xlsx', 'Content-type': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet' } ) @hookimpl def register_output_renderer(): return {""extension"": ""xlsx"", ""render"": render_spreadsheet, ""can_render"": lambda: False} ``` The key part was to find the right function to wrap the spreadsheet object `wb`. `NamedTemporaryFile()` did it! I'll update this issue when the plugin is packaged and ready for broader use.","{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 1, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",870125126,I'm creating a plugin to export a spreadsheet file (.ods or .xlsx), https://github.com/simonw/datasette/issues/1327#issuecomment-847271122,https://api.github.com/repos/simonw/datasette/issues/1327,847271122,MDEyOklzc3VlQ29tbWVudDg0NzI3MTEyMg==,20846286,GmGniap,2021-05-24T19:10:21Z,2021-05-24T19:10:21Z,NONE,"wow, thanks a lot @simonw , problem is solved. I converted my current json file into utf-8 format with Python script. It's working now. I'm using with Window 10. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",892457208,Support Unicode characters in metadata.json, https://github.com/simonw/datasette/issues/1331#issuecomment-842495820,https://api.github.com/repos/simonw/datasette/issues/1331,842495820,MDEyOklzc3VlQ29tbWVudDg0MjQ5NTgyMA==,475613,MarkusH,2021-05-17T17:18:05Z,2021-05-17T17:18:05Z,NONE,"Wow, you are _fast_! I didn't notice dependabot had opened a PR already. I was about to. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",893537744,Add support for Jinja2 version 3.0, https://github.com/simonw/datasette/issues/1331#issuecomment-842499728,https://api.github.com/repos/simonw/datasette/issues/1331,842499728,MDEyOklzc3VlQ29tbWVudDg0MjQ5OTcyOA==,475613,MarkusH,2021-05-17T17:24:30Z,2021-05-17T17:24:30Z,NONE,"> I wonder if there are any new 3.0 features we should be taking advantage of here that would justify pinning to 3.0 minimum? The changelog reads like bug fixes and removal of deprecated parts to me","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",893537744,Add support for Jinja2 version 3.0, https://github.com/simonw/datasette/issues/1331#issuecomment-844970776,https://api.github.com/repos/simonw/datasette/issues/1331,844970776,MDEyOklzc3VlQ29tbWVudDg0NDk3MDc3Ng==,475613,MarkusH,2021-05-20T10:40:25Z,2021-05-20T10:40:25Z,NONE,"Any chance you could push a new datasette release with the updated dependencies in the setup.py, @simonw?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",893537744,Add support for Jinja2 version 3.0, https://github.com/simonw/datasette/issues/1331#issuecomment-846137332,https://api.github.com/repos/simonw/datasette/issues/1331,846137332,MDEyOklzc3VlQ29tbWVudDg0NjEzNzMzMg==,4312421,stonebig,2021-05-21T17:57:53Z,2021-05-21T17:57:53Z,NONE,I'm stuck also because datasette wants itsdangerous~=1.1 instead of allowing itsdangerous-2.0.0,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",893537744,Add support for Jinja2 version 3.0, https://github.com/simonw/datasette/issues/1362#issuecomment-855428296,https://api.github.com/repos/simonw/datasette/issues/1362,855428296,MDEyOklzc3VlQ29tbWVudDg1NTQyODI5Ng==,154364,dracos,2021-06-06T16:53:20Z,2021-06-06T16:53:20Z,NONE,"> Presumably this would also require adding Content-Security-Policy to the Vary header though, which will have a nasty effect on Cloudflare and Fastly and such like. No, because Vary header is about *request* headers that cause the response to vary, not response headers.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",912864936,Consider using CSP to protect against future XSS, https://github.com/simonw/datasette/issues/1375#issuecomment-860548546,https://api.github.com/repos/simonw/datasette/issues/1375,860548546,MDEyOklzc3VlQ29tbWVudDg2MDU0ODU0Ng==,4068,frafra,2021-06-14T09:41:59Z,2021-06-14T09:41:59Z,NONE,"> There is a feature for this at the moment, but it's a little bit hidden: you can use `?_json=col` to tell > Datasette that you would like a specific column to be exported as nested JSON: https://docs.datasette.io/en/stable/json_api.html#special-json-arguments Thanks :) > I considered trying to make this automatic - so it detects columns that appear to contain valid JSON and outputs them as nested objects - but the problem with that is that it can lead to inconsistent results - you might hit the API and find that not every column contains valid JSON (compared to the previous day) resulting in the API retuning string instead of the expected dictionary and breaking your code. If a developer is not sure if the JSON fields are valid, but then retrieves and parse them, it should handle errors too. Handling inconsistent data is necessary due to the nature of SQLite. A global or dataset option to render the data as they have been defined (JSON, boolean, etc.) when requesting JSON could allow the user to download a regular JSON from the browser without having to rely on APIs. I would guess someone could just make a custom template with an extra JSON-parsed download button otherwise :)","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",919508498,JSON export dumps JSON fields as TEXT, https://github.com/simonw/datasette/issues/1380#issuecomment-967181828,https://api.github.com/repos/simonw/datasette/issues/1380,967181828,IC_kwDOBm6k_c45pgYE,7094907,Segerberg,2021-11-12T15:00:18Z,2021-11-12T20:02:29Z,NONE,"There is no such option see https://github.com/simonw/datasette/issues/43. But you could write a plugin using the datasette.add_database(db, name=None) https://docs.datasette.io/en/stable/internals.html#add-database-db-name-none ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",924748955,Serve all db files in a folder, https://github.com/simonw/datasette/issues/1380#issuecomment-967801997,https://api.github.com/repos/simonw/datasette/issues/1380,967801997,IC_kwDOBm6k_c45r3yN,7094907,Segerberg,2021-11-13T08:05:37Z,2021-11-13T08:09:11Z,NONE,"@glasnt yeah I guess that could be an option. I run datasette on large databases > 75gb and the startup time is a bit slow for me even with -i --inspect-file options. Here's a quick sketch for a plugin that will reload db's in a folder that you set for the plugin in metadata.json. If you request /-reload-db new db's will be added. (You probably want to implement some authentication for this =) ) https://gist.github.com/Segerberg/b96a0e0a5389dce2396497323cda7042 ","{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 1, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",924748955,Serve all db files in a folder, https://github.com/simonw/datasette/issues/1384#issuecomment-1062124485,https://api.github.com/repos/simonw/datasette/issues/1384,1062124485,IC_kwDOBm6k_c4_TrvF,167160,khusmann,2022-03-08T19:26:32Z,2022-03-08T19:26:32Z,NONE,"Looks like I'm late to the party here, but wanted to join the convo if there's still time before this interface is solidified in v1.0. My plugin use case is for education / social science data, which is meta-data heavy in the documentation of measurement scales, instruments, collection procedures, etc. that I want to connect to columns, tables, and dbs (and render in static pages, but looks like I can do that with the jinja plugin hook). I'm still digging in and I think @brandonrobertz 's approach will work for me at least for now, but I want to bump this thread in the meantime -- are there still plans for an async metadata hook at some point in the future? (or are you considering other directions?)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",930807135,Plugin hook for dynamic metadata, https://github.com/simonw/datasette/issues/1384#issuecomment-1065929510,https://api.github.com/repos/simonw/datasette/issues/1384,1065929510,IC_kwDOBm6k_c4_iMsm,167160,khusmann,2022-03-12T17:49:59Z,2022-03-12T17:49:59Z,NONE,"Ok, I'm taking a slightly different approach, which I think is sort of close to the in-memory _metadata table idea. I'm using a startup hook to load metadata / other info from the database, which I store in the datasette object for later: ``` @hookimpl def startup(datasette): async def inner(): datasette._mypluginmetadata = # await db query return inner ``` Then, I can use this in other plugins: ``` @hookimpl def render_cell(value, column, table, database, datasette): # use datasette._mypluginmetadata ``` For my app I don't need anything to update dynamically so it's fine to pre-populate everything on startup. It's also good to have things precached especially for a hook like render_cell, which would otherwise require a ton of redundant db queries. Makes me wonder if we could take a sort of similar caching approach with the internal _metadata table. Like have a little watchdog that could query all of the attached dbs for their _metadata tables every 5min or so, which then could be merged into the in memory _metadata table which then could be accessed sync by the plugins, or something like that. For most the use cases I can think of, live updates don't need to take into effect immediately; refreshing a cache every 5min or on some other trigger (adjustable w a config setting) would be just fine. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",930807135,Plugin hook for dynamic metadata, https://github.com/simonw/datasette/issues/1384#issuecomment-1065951744,https://api.github.com/repos/simonw/datasette/issues/1384,1065951744,IC_kwDOBm6k_c4_iSIA,167160,khusmann,2022-03-12T19:47:17Z,2022-03-12T19:47:17Z,NONE,"Awesome, thanks @brandonrobertz ! The plugin is close, but looks like it only grabs remote metadata, is that right? Instead what I'm wanting is to grab metadata embedded in the attached databases. Rather than extending that plugin, at this point I've realized I need a lot more flexibility in metadata for my data model (esp around formatting cell values and custom file exports) so rather than extending that I'll continue working on a plugin specific to my app. If I'm understanding your plugin code correctly, you query the db using the sync handle every time `get_metdata` is called, right? Won't this become a pretty big bottleneck if a hook into `render_cell` is trying to read metadata / plugin config? > Making the get_metadata async won't improve the situation by itself as only some of the code paths accessing metadata use that hook. The other paths use the internal metadata dict. I agree -- because things like `render_cell` will potentially want to read metadata/config, `get_metadata` should really remain sync and lightweight, which we can do with something like the remote-metadata plugin that could also poll metadata tables in attached databases. That leaves your app, where it sounds like you want changes made by the user in the browser in to be immediately reflected, rather than have to wait for the next metadata refresh. In this case I wonder if you could have your app make a sync write to the datasette object so the change would have the immediate effect, but then have a separate async polling mechanism to eventually write that change out to the database for long-term persistence. Then you'd have the best of both worlds, I think? But probably not worth the trouble if your use cases are small (and/or you're not reading metadata/config from tight loops like render_cell).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",930807135,Plugin hook for dynamic metadata, https://github.com/simonw/datasette/issues/1384#issuecomment-1066143991,https://api.github.com/repos/simonw/datasette/issues/1384,1066143991,IC_kwDOBm6k_c4_jBD3,167160,khusmann,2022-03-13T17:13:09Z,2022-03-13T17:13:09Z,NONE,"Thanks for taking the time to reply @brandonrobertz , this is really helpful info. > See ""Many small queries are efficient in sqlite"" for more information on the rationale here. Also note that in the datasette-live-config reference plugin, the DB connection is cached, so that eliminated most of the performance worries we had. Ah, that's nifty! Yeah, then caching on the python side is likely a waste :) I'm new to working with sqlite so this is super good to know the many-small-queries is a common pattern > I tested on very large Datasette deployments (hundreds of DBs, millions of rows). For my reference, did you include a `render_cell` plugin calling `get_metadata` in those tests? I'm less concerned now that I know a little more about sqlite's caching, but that special situation will jump you to a few orders of magnitude above what the sqlite article describes (e.g. 200 vs 20,000 queries+metadata merges for a page displaying 100 rows of a 200 column table). It wouldn't scale with db size as much as # of visible cells being rendered on the page, although they would be identical queries I suppose so will cache well. (If you didn't test this specific situation, no worries -- I'm just trying to calibrate my intuition on this and can do my own benchmarks at some point.) > Simon talked about eventually making something like this a standard feature of Datasette Yeah, getting metadata (and static pages as well for that matter) from internal tables definitely has my vote for including as a standard feature! Its really nice to be able to distribute a single *.db with all the metadata and static pages bundled. My metadata are sufficiently complex/domain specific that it makes sense to continue on my own plugin for now, but I'll be thinking about more general parts I can spin off as possible contributions to liveconfig (if you're open to them) or other plugins in this ecosystem.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",930807135,Plugin hook for dynamic metadata, https://github.com/simonw/datasette/issues/1384#issuecomment-1066194130,https://api.github.com/repos/simonw/datasette/issues/1384,1066194130,IC_kwDOBm6k_c4_jNTS,167160,khusmann,2022-03-13T22:23:04Z,2022-03-13T22:23:04Z,NONE,"Ah, sorry, I didn't get what you were saying you the first time. Using _metadata_local in that way makes total sense -- I agree, refreshing metadata each cell was seeming quite excessive. Now I'm on the same page! :)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",930807135,Plugin hook for dynamic metadata, https://github.com/simonw/datasette/issues/1387#issuecomment-873166836,https://api.github.com/repos/simonw/datasette/issues/1387,873166836,MDEyOklzc3VlQ29tbWVudDg3MzE2NjgzNg==,9308268,rayvoelker,2021-07-02T17:58:23Z,2021-07-02T17:58:23Z,NONE,"Thanks Simon for nailing that one down! It does seem a little confusing that the ProxyPreservehost option is set to Off By default, but this config totally did the trick and fixed the issue ``` ProxyPass http://127.0.0.1:8010/collection-analysis/ ProxyPreservehost On ```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",935930820,absolute_url() behind a proxy assembles incorrect http://127.0.0.1:8001/ URLs, https://github.com/simonw/datasette/issues/1398#issuecomment-889599513,https://api.github.com/repos/simonw/datasette/issues/1398,889599513,IC_kwDOBm6k_c41BjYZ,192984,aitoehigie,2021-07-30T03:21:49Z,2021-07-30T03:21:49Z,NONE,Does the library support this now?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",947044667,Documentation on using Datasette as a library, https://github.com/simonw/datasette/issues/141#issuecomment-346974336,https://api.github.com/repos/simonw/datasette/issues/141,346974336,MDEyOklzc3VlQ29tbWVudDM0Njk3NDMzNg==,50138,janimo,2017-11-26T00:00:35Z,2017-11-26T00:00:35Z,NONE,FWIW I worked around this by setting TMPDIR to ~/tmp before running the command.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",275814941,datasette publish can fail if /tmp is on a different device, https://github.com/simonw/datasette/issues/1415#issuecomment-1255603780,https://api.github.com/repos/simonw/datasette/issues/1415,1255603780,IC_kwDOBm6k_c5K1v5E,17532695,bendnorman,2022-09-22T22:06:10Z,2022-09-22T22:06:10Z,NONE,"This would be great! I just went through the process of figuring out the minimum permissions for a service account to run `datasette publish cloudrun` for [PUDL](https://github.com/catalyst-cooperative/pudl)'s [datasette deployment](https://data.catalyst.coop/). These are the roles I gave the service account (disclaim: I'm not sure these are the minimum permissions): - Cloud Build Service Account: The SA needs this role to publish the build on Cloud Build. - Cloud Run Admin for the Cloud Run datasette service so the SA can deploy the build. - I gave the SA the Storage Admin role on the bucket Cloud Build creates to store the build tar files. - The Viewer Role is [required for storing build logs in the default bucket](https://cloud.google.com/build/docs/running-builds/submit-build-via-cli-api#permissions). More on this below! The Viewer Role is a Basic IAM role that [Google does not recommend using](https://cloud.google.com/build/docs/running-builds/submit-build-via-cli-api#permissions): > Caution: Basic roles include thousands of permissions across all Google Cloud services. In production environments, do not grant basic roles unless there is no alternative. Instead, grant the most limited [predefined roles](https://cloud.google.com/iam/docs/understanding-roles#predefined_roles) or [custom roles](https://cloud.google.com/iam/docs/understanding-custom-roles) that meet your needs. If you don't grant the Viewer role the `gcloud builds submit` command will successfully create a build but returns exit code 1, preventing the script from getting to the cloud run step: ``` ERROR: (gcloud.builds.submit) The build is running, and logs are being written to the default logs bucket. This tool can only stream logs if you are Viewer/Owner of the project and, if applicable, allowed by your VPC-SC security policy. The default logs bucket is always outside any VPC-SC security perimeter. If you want your logs saved inside your VPC-SC perimeter, use your own bucket. See https://cloud.google.com/build/docs/securing-builds/store-manage-build-logs. ``` long stack trace... ``` CalledProcessError: Command 'gcloud builds submit --tag gcr.io/catalyst-cooperative-pudl/datasette' returned non-zero exit status 1. ``` You can store Cloud Build logs in a [user-created bucket](https://cloud.google.com/build/docs/securing-builds/store-manage-build-logs#store-custom-bucket) which only requires the Storage Admin role. However, you have to pass a config file to `gcloud builds submit`, which isn't possible with the current options for `datasette publish cloudrun`. I propose we add an additional CLI option to `datasette publish cloudrun` called `--build-config` that allows users to pass a [config file](https://cloud.google.com/build/docs/running-builds/submit-build-via-cli-api#running_builds) specifying a user create Cloud Build log bucket. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",959137143,feature request: document minimum permissions for service account for cloudrun, https://github.com/simonw/datasette/issues/1423#issuecomment-993876599,https://api.github.com/repos/simonw/datasette/issues/1423,993876599,IC_kwDOBm6k_c47PVp3,6165713,plpxsk,2021-12-14T18:48:09Z,2021-12-14T18:48:09Z,NONE,"Great feature. But what is the right way to enable this to show up? Currently, it seems I need to edit the URL to add, in the right place, `&_facet_size=max` Is there another (easier) way to enable this feature?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",962391325,Show count of facet values if ?_facet_size=max, https://github.com/simonw/datasette/issues/1426#issuecomment-974711959,https://api.github.com/repos/simonw/datasette/issues/1426,974711959,IC_kwDOBm6k_c46GOyX,52649,tannewt,2021-11-20T21:11:51Z,2021-11-20T21:11:51Z,NONE,I think another thing would be to make `/pages/robots.txt` work. That way you can use jinja to generate a desired robots.txt. I'm using it to allow the main index and what it links to to be crawled (but not the database pages directly.),"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",964322136,"Manage /robots.txt in Datasette core, block robots by default", https://github.com/simonw/datasette/issues/1426#issuecomment-985982668,https://api.github.com/repos/simonw/datasette/issues/1426,985982668,IC_kwDOBm6k_c46xObM,95520595,knowledgecamp12,2021-12-04T07:11:29Z,2021-12-04T07:11:29Z,NONE,You can generate xml site map from the online tools using https://tools4seo.site/xml-sitemap-generator. ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",964322136,"Manage /robots.txt in Datasette core, block robots by default", https://github.com/simonw/datasette/issues/1432#issuecomment-941002127,https://api.github.com/repos/simonw/datasette/issues/1432,941002127,IC_kwDOBm6k_c44Fo2P,5802411,ashishdotme,2021-10-12T13:14:31Z,2021-10-12T13:14:39Z,NONE,"Any workaround for making it work with datasette-publish-vercel. Currently getting below error module initialization error: __init__() got an unexpected keyword argument 'config' module initialization error __init__() got an unexpected keyword argument 'config'","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",969855774,Rename Datasette.__init__(config=) parameter to settings=, https://github.com/simonw/datasette/issues/1432#issuecomment-941585767,https://api.github.com/repos/simonw/datasette/issues/1432,941585767,IC_kwDOBm6k_c44H3Vn,5802411,ashishdotme,2021-10-12T21:23:19Z,2021-10-12T21:23:19Z,NONE,"Nevermind, had to remove the branch argument in the workflow to make vercel publish work","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",969855774,Rename Datasette.__init__(config=) parameter to settings=, https://github.com/simonw/datasette/issues/1432#issuecomment-945639639,https://api.github.com/repos/simonw/datasette/issues/1432,945639639,IC_kwDOBm6k_c44XVDX,5802411,ashishdotme,2021-10-18T10:44:56Z,2021-10-18T10:44:56Z,NONE,"@simonw I am getting the below issue again now, even after removing branch argument from vercel datasette plugin module initialization error: init() got an unexpected keyword argument 'config' module initialization error init() got an unexpected keyword argument 'config'","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",969855774,Rename Datasette.__init__(config=) parameter to settings=, https://github.com/simonw/datasette/issues/1439#issuecomment-1059863997,https://api.github.com/repos/simonw/datasette/issues/1439,1059863997,IC_kwDOBm6k_c4_LD29,505230,karlcow,2022-03-06T00:57:57Z,2022-03-06T00:57:57Z,NONE,"Probably too late… but I have just seen this because http://simonwillison.net/2022/Mar/5/dash-encoding/#atom-everything And it reminded me of comma tools at W3C. http://www.w3.org/,tools Example, the text version of W3C homepage https://www.w3.org/,text > The challenge comes down to telling the difference between the following: > > * `/db/table` - an HTML table page `/db/table` > * `/db/table.csv` - the CSV version of `/db/table` `/db/table,csv` > * `/db/table.csv` - no this one is actually a database table called `table.csv` `/db/table.csv` > * `/db/table.csv.csv` - the CSV version of `/db/table.csv` `/db/table.csv,csv` > * `/db/table.csv.csv.csv` and so on... `/db/table.csv.csv,csv` I haven't checked all the cases in the thread.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",973139047,Rethink how .ext formats (v.s. ?_format=) works before 1.0, https://github.com/simonw/datasette/issues/144#issuecomment-346427794,https://api.github.com/repos/simonw/datasette/issues/144,346427794,MDEyOklzc3VlQ29tbWVudDM0NjQyNzc5NA==,649467,mhalle,2017-11-22T17:55:45Z,2017-11-22T17:55:45Z,NONE,"Thanks. There is a way to use pip to grab apsw, which also let's you configure it (flags to build extensions, use an internal sqlite, etc). Don't know how that works as a dependency for another package, though. On November 22, 2017 11:38:06 AM EST, Simon Willison wrote: >I have a solution for FTS already, but I'm interested in apsw as a >mechanism for allowing custom virtual tables to be written in Python >(pysqlite only lets you write custom functions) > >Not having PyPI support is pretty tough though. I'm planning a >plugin/extension system which would be ideal for things like an >optional apsw mode, but that's a lot harder if apsw isn't in PyPI. > >-- >You are receiving this because you authored the thread. >Reply to this email directly or view it on GitHub: >https://github.com/simonw/datasette/issues/144#issuecomment-346405660 ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",276091279,apsw as alternative sqlite3 binding (for full text search), https://github.com/simonw/datasette/issues/1479#issuecomment-1114601882,https://api.github.com/repos/simonw/datasette/issues/1479,1114601882,IC_kwDOBm6k_c5Cb3ma,32839123,Rik-de-Kort,2022-05-02T08:10:27Z,2022-05-02T11:54:49Z,NONE,"Also ran into this issue today using `datasette package`. The stack trace takes up my whole PowerShell history, though (recursionerror), but it also concerns the temporary directory. Our development machines have a very zealous scanner that appears to insert itself between every call to the filesystem. I suspected that was causing some racing, but this turned out not to be the case: inserting `time.sleep(3)` on line 451 of `datasette/datasette/utils/__init__.py` does not make the problem go away. Commenting out the `tmp.cleanup()` line does. The next error I get is docker-specific, so that probably does resolve the Datasette error here.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1010112818,"Win32 ""used by another process"" error with datasette publish", https://github.com/simonw/datasette/issues/1479#issuecomment-929927144,https://api.github.com/repos/simonw/datasette/issues/1479,929927144,IC_kwDOBm6k_c43bY_o,1244799,soobrosa,2021-09-29T07:49:40Z,2021-09-29T07:49:40Z,NONE,"My search yielded these four entries: https://github.com/simonw/datasette/issues?q=PermissionError%3A+%5BWinError+32%5D+ Maybe this is the closet hit? https://github.com/simonw/datasette/issues/744 ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1010112818,"Win32 ""used by another process"" error with datasette publish", https://github.com/simonw/datasette/issues/1479#issuecomment-930071625,https://api.github.com/repos/simonw/datasette/issues/1479,930071625,IC_kwDOBm6k_c43b8RJ,76450761,kirajano,2021-09-29T11:01:30Z,2021-09-29T11:01:30Z,NONE,"Thanks, but this one has a different error type. Unfortunately, still not working.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1010112818,"Win32 ""used by another process"" error with datasette publish", https://github.com/simonw/datasette/issues/1497#issuecomment-1387433455,https://api.github.com/repos/simonw/datasette/issues/1497,1387433455,IC_kwDOBm6k_c5Sso3v,270255,adamalton,2023-01-18T17:13:45Z,2023-01-18T17:13:45Z,NONE,"You may have just been talking to yourself here, but I found your documentation of the process incredibly useful when I hit the same error myself. Thanks! 🌈 ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1034535001,"Publish to Docker Hub failing with ""libcrypt.so.1: cannot open shared object file""", https://github.com/simonw/datasette/issues/1505#issuecomment-970188065,https://api.github.com/repos/simonw/datasette/issues/1505,970188065,IC_kwDOBm6k_c450-Uh,7094907,Segerberg,2021-11-16T11:40:52Z,2021-11-16T11:40:52Z,NONE,A suggestion is to have the option to choose an arbitrary delimiter (and quoting characters ),"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1052247023,Datasette should have an option to output CSV with semicolons, https://github.com/simonw/datasette/issues/1519#issuecomment-983890815,https://api.github.com/repos/simonw/datasette/issues/1519,983890815,IC_kwDOBm6k_c46pPt_,157158,phubbard,2021-12-01T17:50:09Z,2021-12-01T17:50:09Z,NONE,"thanks so very much for the prompt attention and fix! Plus, the animated GIF showing the bug is just extra and I love it. Interactions like this are why I love open source.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1058790545,base_url is omitted in JSON and CSV views, https://github.com/simonw/datasette/issues/1522#issuecomment-974607456,https://api.github.com/repos/simonw/datasette/issues/1522,974607456,IC_kwDOBm6k_c46F1Rg,17906,mrchrisadams,2021-11-20T07:10:11Z,2021-11-20T07:10:11Z,NONE,"As a a sanity check, would it be worth looking at trying to push the multi-process container on another provider of a knative / cloud run / tekton ? I have a somewhat similar use case for a future proejct, so i'm been very grateful to you sharing all the progress in this issue. As I understand it, Scaleway also offer a very similar offering using what appear to be many similar components that might at least see if it's an issue with more than one knative based FaaS provider https://www.scaleway.com/en/serverless-containers/ https://developers.scaleway.com/en/products/containers/api/#main-features ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1058896236,Deploy a live instance of demos/apache-proxy, https://github.com/simonw/datasette/issues/1522#issuecomment-976023405,https://api.github.com/repos/simonw/datasette/issues/1522,976023405,IC_kwDOBm6k_c46LO9t,360895,steren,2021-11-23T00:08:07Z,2021-11-23T00:08:07Z,NONE,"If you suspect that Cloud Run throttled CPU could be the cause, you can request to have CPU always allocated with `gcloud beta run deploy --no-cpu-throttling` ([read more](https://cloud.google.com/blog/products/serverless/cloud-run-gets-always-on-cpu-allocation)) It could also be the Cloud Run sandbox that somehow gets in the way here, in which case I recommend testing with the second generation execution environment: `gcloud beta run deploy --execution-environment gen2`","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1058896236,Deploy a live instance of demos/apache-proxy, https://github.com/simonw/datasette/issues/1528#issuecomment-988468238,https://api.github.com/repos/simonw/datasette/issues/1528,988468238,IC_kwDOBm6k_c466tQO,30934,20after4,2021-12-08T03:35:45Z,2021-12-08T03:35:45Z,NONE,"FWIW I implemented something similar with a bit of plugin code: ```python @hookimpl def canned_queries(datasette: Datasette, database: str) -> Mapping[str, str]: # load ""canned queries"" from the filesystem under # www/sql/db/query_name.sql queries = {} sqldir = Path(__file__).parent.parent / ""sql"" if database: sqldir = sqldir / database if not sqldir.is_dir(): return queries for f in sqldir.glob('*.sql'): try: sql = f.read_text('utf8').strip() if not len(sql): log(f""Skipping empty canned query file: {f}"") continue queries[f.stem] = { ""sql"": sql } except OSError as err: log(err) return queries ```","{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 1, ""rocket"": 0, ""eyes"": 0}",1060631257,"Add new `""sql_file""` key to Canned Queries in metadata?", https://github.com/simonw/datasette/issues/153#issuecomment-348252037,https://api.github.com/repos/simonw/datasette/issues/153,348252037,MDEyOklzc3VlQ29tbWVudDM0ODI1MjAzNw==,20264,ftrain,2017-11-30T16:59:00Z,2017-11-30T16:59:00Z,NONE,"WOW! -- Paul Ford // (646) 369-7128 // @ftrain On Thu, Nov 30, 2017 at 11:47 AM, Simon Willison wrote: > Remaining work on this now lives in a milestone: > https://github.com/simonw/datasette/milestone/6 > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub > , > or mute the thread > > . > ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",276842536,Ability to customize presentation of specific columns in HTML view, https://github.com/simonw/datasette/issues/1532#issuecomment-981966693,https://api.github.com/repos/simonw/datasette/issues/1532,981966693,IC_kwDOBm6k_c46h59l,30934,20after4,2021-11-29T19:56:52Z,2021-11-29T19:56:52Z,NONE,FWIW I've written some web components that consume the json api and I think it's a really nice way to work with datasette. I like the combination with datasette+sqlite as a back-end feeding data to a front-end that's entirely javascript + html.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1065429936,Use datasette-table Web Component to guide the design of the JSON API for 1.0, https://github.com/simonw/datasette/issues/1532#issuecomment-982745406,https://api.github.com/repos/simonw/datasette/issues/1532,982745406,IC_kwDOBm6k_c46k4E-,30934,20after4,2021-11-30T15:28:57Z,2021-11-30T15:28:57Z,NONE,"It's a really great API and the documentation is really great too. Honestly, in more than 20 years of professional experience, I haven't worked with any software API that was more of a joy to use. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1065429936,Use datasette-table Web Component to guide the design of the JSON API for 1.0, https://github.com/simonw/datasette/issues/155#issuecomment-347714314,https://api.github.com/repos/simonw/datasette/issues/155,347714314,MDEyOklzc3VlQ29tbWVudDM0NzcxNDMxNA==,388154,wsxiaoys,2017-11-29T00:46:25Z,2017-11-29T00:46:25Z,NONE,"``` CREATE TABLE rhs ( id INTEGER PRIMARY KEY, name TEXT ); CREATE TABLE lhs ( symbol INTEGER PRIMARY KEY, FOREIGN KEY (symbol) REFERENCES rhs(id) ); INSERT INTO rhs VALUES (1, ""foo""); INSERT INTO rhs VALUES (2, ""bar""); INSERT INTO lhs VALUES (1); INSERT INTO lhs VALUES (2); ``` It's expected that in lhs's view, foo / bar should be displayed.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",277589569,A primary key column that has foreign key restriction associated won't rendering label column, https://github.com/simonw/datasette/issues/1590#issuecomment-1010556333,https://api.github.com/repos/simonw/datasette/issues/1590,1010556333,IC_kwDOBm6k_c48O92t,1001306,eelkevdbos,2022-01-12T02:03:59Z,2022-01-12T02:03:59Z,NONE,"Thank you for the quick reply! Just a quick observation, I am running this locally without a proxy, whereas your fly example seems to be running behind an apache proxy (if the name is accurate). Can it be that the apache proxy strips the prefix before it passes on the request to the daphne backend?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1099723916,Table+query JSON and CSV links broken when using `base_url` setting, https://github.com/simonw/datasette/issues/1590#issuecomment-1010559681,https://api.github.com/repos/simonw/datasette/issues/1590,1010559681,IC_kwDOBm6k_c48O-rB,1001306,eelkevdbos,2022-01-12T02:10:20Z,2022-01-12T02:10:20Z,NONE,"In my example, path matching happens at the application layer (being the Django channels URLRouter). That might be a somewhat exotic solution that would normally be solved by a proxy like Apache or Nginx. However, in my specific use case, this is a ""feature"" enabling me to do simple management of databases and metadata from within a Django admin app instance mapped in that same router.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1099723916,Table+query JSON and CSV links broken when using `base_url` setting, https://github.com/simonw/datasette/issues/1608#issuecomment-1017993482,https://api.github.com/repos/simonw/datasette/issues/1608,1017993482,IC_kwDOBm6k_c48rVkK,316517,astrojuanlu,2022-01-20T22:46:16Z,2022-01-20T22:46:16Z,NONE,Or you can use https://sphinx-version-warning.readthedocs.io/! 😄 ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1109808154,Documentation should clarify /stable/ vs /latest/, https://github.com/simonw/datasette/issues/161#issuecomment-350108113,https://api.github.com/repos/simonw/datasette/issues/161,350108113,MDEyOklzc3VlQ29tbWVudDM1MDEwODExMw==,388154,wsxiaoys,2017-12-07T22:02:24Z,2017-12-07T22:02:24Z,NONE,"It's not throwing the validation error anymore, but i still cannot run following with query: ``` WITH RECURSIVE cnt(x) AS (SELECT 1 UNION ALL SELECT x+1 FROM cnt LIMIT 10) SELECT x FROM cnt; ``` I got `near ""WITH"": syntax error`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",278814220,Support WITH query , https://github.com/simonw/datasette/issues/161#issuecomment-350182904,https://api.github.com/repos/simonw/datasette/issues/161,350182904,MDEyOklzc3VlQ29tbWVudDM1MDE4MjkwNA==,388154,wsxiaoys,2017-12-08T06:18:12Z,2017-12-08T06:18:12Z,NONE,"You're right..got this resolved after upgrading the sqlite version. Thanks you!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",278814220,Support WITH query , https://github.com/simonw/datasette/issues/1615#issuecomment-1023997327,https://api.github.com/repos/simonw/datasette/issues/1615,1023997327,IC_kwDOBm6k_c49CPWP,369053,aidansteele,2022-01-28T08:37:36Z,2022-01-28T08:37:36Z,NONE,"Oops, it feels like this should perhaps be migrated to GitHub Discussions - sorry! I don't think I have the ability to do that 😅","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1117132741,Potential simplified publishing mechanism, https://github.com/simonw/datasette/issues/1618#issuecomment-1028294089,https://api.github.com/repos/simonw/datasette/issues/1618,1028294089,IC_kwDOBm6k_c49SoXJ,770231,strada,2022-02-02T19:42:03Z,2022-02-02T19:42:03Z,NONE,"Thanks for looking into this. It might have been nice if `explain` surfaced these function calls. Looks like `explain query plan` does, but only for basic queries. ``` sqlite-utils fixtures.db 'explain query plan select * from pragma_function_list(), pragma_database_list(), pragma_module_list()' -t id parent notused detail ---- -------- --------- ------------------------------------------------ 4 0 0 SCAN pragma_function_list VIRTUAL TABLE INDEX 0: 8 0 0 SCAN pragma_database_list VIRTUAL TABLE INDEX 0: 12 0 0 SCAN pragma_module_list VIRTUAL TABLE INDEX 0: ``` ``` sqlite-utils fixtures.db 'explain query plan select * from pragma_function_list() as fl, pragma_database_list() as dl, pragma_module_list() as ml' -t id parent notused detail ---- -------- --------- ------------------------------ 4 0 0 SCAN fl VIRTUAL TABLE INDEX 0: 8 0 0 SCAN dl VIRTUAL TABLE INDEX 0: 12 0 0 SCAN ml VIRTUAL TABLE INDEX 0: ``` ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1121121305,"Reconsider policy on blocking queries containing the string ""pragma""", https://github.com/simonw/datasette/issues/1619#issuecomment-1353721442,https://api.github.com/repos/simonw/datasette/issues/1619,1353721442,IC_kwDOBm6k_c5QsCZi,2090382,noslouch,2022-12-15T21:20:53Z,2022-12-15T21:20:53Z,NONE,"i'm also getting bit by this. I'm trying to set up an nginx reverse proxy in front of multiple datasette backends. When I run it locally or behind the proxy, I see the `base_url` value added a second time to the path for various action links on table pages (view as JSON, sort by column, etc).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1121583414,JSON link on row page is 404 if base_url setting is used, https://github.com/simonw/datasette/issues/1619#issuecomment-1455196849,https://api.github.com/repos/simonw/datasette/issues/1619,1455196849,IC_kwDOBm6k_c5WvIqx,969875,BryantD,2023-03-05T20:29:55Z,2023-03-05T20:30:14Z,NONE,"I have this same issue, which is happening with both json links and facets. It is not happening with column sort links in the gear popup menus, but it is happening with the sort arrow that results after you use one of those links. I'm using Apache as a proxy to Datasette; the relevant configs are: ``` ProxyPass /datasette/ http://127.0.0.1:8000/datasette/ nocanon ProxyPreserveHost on ``` ``` { ""base_url"": ""/datasette/"" } ``` If it would be useful to get a look at the running installation via the Web, Simon, let me know.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1121583414,JSON link on row page is 404 if base_url setting is used, https://github.com/simonw/datasette/issues/1619#issuecomment-1483009959,https://api.github.com/repos/simonw/datasette/issues/1619,1483009959,IC_kwDOBm6k_c5YZO-n,6613091,henrikek,2023-03-24T15:38:04Z,2023-03-24T15:38:04Z,NONE,"I also have the same problem when running behind an apache proxy server with base_url. However, I have researched the problem a bit and have come to the conclusion that if you change _facet_date to _facet in the following https://github.com/simonw/datasette/blob/3feed1f66e2b746f349ee56970a62246a18bb164/datasette/facets.py#LL493C57-L493C68 , the facets work. But I'm not sure if this has other consequences?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1121583414,JSON link on row page is 404 if base_url setting is used, https://github.com/simonw/datasette/issues/1624#issuecomment-1261194164,https://api.github.com/repos/simonw/datasette/issues/1624,1261194164,IC_kwDOBm6k_c5LLEu0,38532,palfrey,2022-09-28T16:54:22Z,2022-09-28T16:54:22Z,NONE,https://github.com/simonw/datasette-cors seems to workaround this,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1122427321,Index page `/` has no CORS headers, https://github.com/simonw/datasette/issues/1633#issuecomment-1034222709,https://api.github.com/repos/simonw/datasette/issues/1633,1034222709,IC_kwDOBm6k_c49pPx1,6613091,henrikek,2022-02-09T21:47:02Z,2022-02-09T21:47:02Z,NONE,Is this the correct solution to add the base_url row to url_builder.py?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1129052172,base_url or prefix does not work with _exact match, https://github.com/simonw/datasette/issues/1633#issuecomment-1111955628,https://api.github.com/repos/simonw/datasette/issues/1633,1111955628,IC_kwDOBm6k_c5CRxis,6613091,henrikek,2022-04-28T09:12:56Z,2022-04-28T09:12:56Z,NONE,I have verified that the problem with base_url still exists in the latest version 0.61.1. I would need some guidance if my code change suggestion is correct or if base_url should be included in some other code?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1129052172,base_url or prefix does not work with _exact match, https://github.com/simonw/datasette/issues/1634#issuecomment-1065334891,https://api.github.com/repos/simonw/datasette/issues/1634,1065334891,IC_kwDOBm6k_c4_f7hr,208018,dholth,2022-03-11T17:38:08Z,2022-03-11T17:38:08Z,NONE,I noticed the image was large when using fly. Is it possible to use a -slim base?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1131295060,Update Dockerfile generated by `datasette publish`, https://github.com/simonw/datasette/issues/1646#issuecomment-1172930092,https://api.github.com/repos/simonw/datasette/issues/1646,1172930092,IC_kwDOBm6k_c5F6X4s,1473102,mustafa0x,2022-07-02T17:12:18Z,2022-07-02T17:12:18Z,NONE,"I'm affected by this as well. Would be nice to be able to pass in an extension, eg `--extension=sqlite3`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1157182254,Configuration directory mode does not pick up other file extensions than .db, https://github.com/simonw/datasette/issues/1688#issuecomment-1079550754,https://api.github.com/repos/simonw/datasette/issues/1688,1079550754,IC_kwDOBm6k_c5AWKMi,9020979,hydrosquall,2022-03-26T01:27:27Z,2022-03-26T03:16:29Z,NONE,"> Is there a way to serve a static assets when using the plugins/ directory method instead of installing plugins as a new python package? As a workaround, I found I can serve my statics from a non-plugin specific folder using the [--static](https://docs.datasette.io/en/stable/custom_templates.html#serving-static-files) CLI flag. ```bash datasette ~/Library/Safari/History.db \ --plugins-dir=plugins/ \ --static assets:dist/ ``` It's not ideal because it means I'll change the cache pattern path depending on how the plugin is running (via pip install or as a one off script), but it's usable as a workaround. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1181432624,[plugins][documentation] Is it possible to serve per-plugin static folders when writing one-off (single file) plugins?, https://github.com/simonw/datasette/issues/1688#issuecomment-1079806857,https://api.github.com/repos/simonw/datasette/issues/1688,1079806857,IC_kwDOBm6k_c5AXIuJ,9020979,hydrosquall,2022-03-27T01:01:14Z,2022-03-27T01:01:14Z,NONE,"Thank you! I went through the cookiecutter template, and published my first package here: https://github.com/hydrosquall/datasette-nteract-data-explorer","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1181432624,[plugins][documentation] Is it possible to serve per-plugin static folders when writing one-off (single file) plugins?, https://github.com/simonw/datasette/issues/1706#issuecomment-1325164933,https://api.github.com/repos/simonw/datasette/issues/1706,1325164933,IC_kwDOBm6k_c5O_GmF,1176293,ar-jan,2022-11-23T14:34:54Z,2022-11-23T14:34:54Z,NONE,This would be helpful.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1198822563,"[feature] immutable mode for a directory, not just individual sqlite file", https://github.com/simonw/datasette/issues/1706#issuecomment-1344669160,https://api.github.com/repos/simonw/datasette/issues/1706,1344669160,IC_kwDOBm6k_c5QJgXo,419145,rdmurphy,2022-12-09T19:11:40Z,2022-12-09T19:11:40Z,NONE,"Ah, yes! Was just trying to do this and had the same issue. +1 to this!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1198822563,"[feature] immutable mode for a directory, not just individual sqlite file", https://github.com/simonw/datasette/issues/1713#issuecomment-1099443468,https://api.github.com/repos/simonw/datasette/issues/1713,1099443468,IC_kwDOBm6k_c5BiC0M,9308268,rayvoelker,2022-04-14T17:26:27Z,2022-04-14T17:26:27Z,NONE,"What would be an awesome feature as a plugin would be to be able to save a query (and possibly even results) to a github gist. Being able to share results that way would be super fantastic. Possibly even in Jupyter Notebook format (since github and github gists nicely render those)! I know there's the handy datasette-saved-queries plugin, but a button that could export stuff out and then even possibly import stuff back in (I'm sort of thinking the way that Google Colab allows you to save to github, and then pull the notebook back in is a really great workflow ![image](https://user-images.githubusercontent.com/9308268/163441612-9ad2649f-c73e-4557-aaf2-e3d0fdc48fbf.png) https://github.com/cincinnatilibrary/collection-analysis/blob/master/reports/colab_datasette_example.ipynb )","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1203943272,Datasette feature for publishing snapshots of query results, https://github.com/simonw/datasette/issues/1727#issuecomment-1111448928,https://api.github.com/repos/simonw/datasette/issues/1727,1111448928,IC_kwDOBm6k_c5CP11g,716529,glyph,2022-04-27T20:27:05Z,2022-04-27T20:27:05Z,NONE,"You don't want to re-use an SQLite connection from multiple threads anyway: https://www.sqlite.org/threadsafe.html Multiple connections can operate on the file in parallel, but a single connection can't: > Multi-thread. In this mode, SQLite can be safely used by multiple threads **provided that no single database connection is used simultaneously in two or more threads**. (emphasis mine)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1217759117,Research: demonstrate if parallel SQL queries are worthwhile, https://github.com/simonw/datasette/issues/1727#issuecomment-1111451790,https://api.github.com/repos/simonw/datasette/issues/1727,1111451790,IC_kwDOBm6k_c5CP2iO,716529,glyph,2022-04-27T20:30:33Z,2022-04-27T20:30:33Z,NONE,"> I should try seeing what happens with WAL mode enabled. I've only skimmed above but it looks like you're doing mainly read-only queries? WAL mode is about better interactions between writers & readers, primarily.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1217759117,Research: demonstrate if parallel SQL queries are worthwhile, https://github.com/simonw/datasette/issues/173#issuecomment-823961091,https://api.github.com/repos/simonw/datasette/issues/173,823961091,MDEyOklzc3VlQ29tbWVudDgyMzk2MTA5MQ==,3747136,ColinMaudry,2021-04-21T10:37:05Z,2021-04-21T10:37:36Z,NONE,"I have the feeling that the text visible to users is 95% present in template files ([datasette/templates](https://github.com/simonw/datasette/tree/main/datasette/templates)). The python code mainly contains error messages. In the current situation, the best way to provide a localized frontend is to translate the templates and [configure datasette to use them](https://docs.datasette.io/en/stable/custom_templates.html). I think I'm going to do it for French. If we want localization to be better integrated, for the python code, I think [gettext](https://docs.python.org/3/library/gettext.html#localizing-your-application) is the way to go. The .po can be translated in user-friendly tools such as Transifex and Crowdin. For the templates, I'm not sure how we could do it cleanly and easy to maintain. Maybe the tools above could parse HTML and detect the strings to be translated. In any case, localization implementing l10n is just the first step: a continuous process must be setup to maintain the translations and produce new ones while datasette keeps getting new features.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",281110295,I18n and L10n support, https://github.com/simonw/datasette/issues/173#issuecomment-826784306,https://api.github.com/repos/simonw/datasette/issues/173,826784306,MDEyOklzc3VlQ29tbWVudDgyNjc4NDMwNg==,3747136,ColinMaudry,2021-04-26T12:10:01Z,2021-04-26T12:10:01Z,NONE,I found a neat tutorial to set up gettext with jinja2: http://siongui.github.io/2016/01/17/i18n-python-web-application-by-gettext-jinja2/,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",281110295,I18n and L10n support, https://github.com/simonw/datasette/issues/1732#issuecomment-1115542067,https://api.github.com/repos/simonw/datasette/issues/1732,1115542067,IC_kwDOBm6k_c5CfdIz,52649,tannewt,2022-05-03T01:50:44Z,2022-05-03T01:50:44Z,NONE,"I haven’t set one up unfortunately. My time is very limited because we just had a baby. On Mon, May 2, 2022, at 6:42 PM, Simon Willison wrote: > > > Thanks, this definitely sounds like a bug. Do you have simple steps to reproduce this? > > > — > Reply to this email directly, view it on GitHub , or unsubscribe . > You are receiving this because you authored the thread.Message ID: ***@***.***> > ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1221849746,Custom page variables aren't decoded, https://github.com/simonw/datasette/issues/1751#issuecomment-1140321380,https://api.github.com/repos/simonw/datasette/issues/1751,1140321380,IC_kwDOBm6k_c5D9-xk,408765,knutwannheden,2022-05-28T19:52:17Z,2022-05-28T19:52:17Z,NONE,Closing in favor of existing issue #1298.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1251710928,Add scrollbars to table presentation in default layout, https://github.com/simonw/datasette/issues/176#issuecomment-356115657,https://api.github.com/repos/simonw/datasette/issues/176,356115657,MDEyOklzc3VlQ29tbWVudDM1NjExNTY1Nw==,4313116,wulfmann,2018-01-08T22:22:32Z,2018-01-08T22:22:32Z,NONE,"This project probably would not be the place for that. This is a layer for sqllite specifically. It solves a similar problem as graphql, so adding that here wouldn't make sense. Here's an example i found from google that uses micro to run a graphql microservice. you'd just then need to connect your db. https://github.com/timneutkens/micro-graphql","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",285168503,Add GraphQL endpoint, https://github.com/simonw/datasette/issues/176#issuecomment-356161672,https://api.github.com/repos/simonw/datasette/issues/176,356161672,MDEyOklzc3VlQ29tbWVudDM1NjE2MTY3Mg==,173848,yozlet,2018-01-09T02:35:35Z,2018-01-09T02:35:35Z,NONE,"@wulfmann I think I disagree, except I'm not entirely sure what you mean by that first paragraph. The JSON API that Datasette currently exposes is quite different to GraphQL. Furthermore, there's no ""just"" about connecting micro-graphql to a DB; at least, no more ""just"" than adding any other API. You still need to configure the schema, which is exactly the kind of thing that Datasette does for JSON API. This is why I think that GraphQL's a good fit here.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",285168503,Add GraphQL endpoint, https://github.com/simonw/datasette/issues/176#issuecomment-356175667,https://api.github.com/repos/simonw/datasette/issues/176,356175667,MDEyOklzc3VlQ29tbWVudDM1NjE3NTY2Nw==,4313116,wulfmann,2018-01-09T04:19:03Z,2018-01-09T04:19:03Z,NONE,"@yozlet Yes I think that I was confused when I posted my original comment. I see your main point now and am in agreement. ","{""total_count"": 2, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 2, ""rocket"": 0, ""eyes"": 0}",285168503,Add GraphQL endpoint, https://github.com/simonw/datasette/issues/176#issuecomment-359697938,https://api.github.com/repos/simonw/datasette/issues/176,359697938,MDEyOklzc3VlQ29tbWVudDM1OTY5NzkzOA==,7193,gijs,2018-01-23T07:17:56Z,2018-01-23T07:17:56Z,NONE,👍 I'd like this too! ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",285168503,Add GraphQL endpoint, https://github.com/simonw/datasette/issues/176#issuecomment-368625350,https://api.github.com/repos/simonw/datasette/issues/176,368625350,MDEyOklzc3VlQ29tbWVudDM2ODYyNTM1MA==,7431774,wuhland,2018-02-26T19:44:11Z,2018-02-26T19:44:11Z,NONE,great idea!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",285168503,Add GraphQL endpoint, https://github.com/simonw/datasette/issues/176#issuecomment-431867885,https://api.github.com/repos/simonw/datasette/issues/176,431867885,MDEyOklzc3VlQ29tbWVudDQzMTg2Nzg4NQ==,634572,eads,2018-10-22T15:24:57Z,2018-10-22T15:24:57Z,NONE,"I'd like this as well. It would let me access Datasette-driven projects from GatsbyJS the same way I can access Postgres DBs via Hasura. While I don't see SQLite replacing Postgres for the 50m row datasets I sometimes have to work with, there's a whole class of smaller datasets that are great with Datasette but currently would find another option.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",285168503,Add GraphQL endpoint, https://github.com/simonw/datasette/issues/176#issuecomment-548508237,https://api.github.com/repos/simonw/datasette/issues/176,548508237,MDEyOklzc3VlQ29tbWVudDU0ODUwODIzNw==,634572,eads,2019-10-31T18:25:44Z,2019-10-31T18:25:44Z,NONE,"👋 I'd be interested in building this out in Q1 or Q2 of 2020 if nobody has tackled it by then. I would love to integrate Datasette into @thechicagoreporter's practice, but we're also fully committed to GraphQL moving forward.","{""total_count"": 2, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 2, ""rocket"": 0, ""eyes"": 0}",285168503,Add GraphQL endpoint, https://github.com/simonw/datasette/issues/176#issuecomment-617208503,https://api.github.com/repos/simonw/datasette/issues/176,617208503,MDEyOklzc3VlQ29tbWVudDYxNzIwODUwMw==,12976,nkirsch,2020-04-21T14:16:24Z,2020-04-21T14:16:24Z,NONE,"@eads I'm interested in helping, if there's still a need...","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",285168503,Add GraphQL endpoint, https://github.com/simonw/datasette/issues/1771#issuecomment-1356657451,https://api.github.com/repos/simonw/datasette/issues/1771,1356657451,IC_kwDOBm6k_c5Q3PMr,1473102,mustafa0x,2022-12-18T04:04:32Z,2022-12-18T04:04:32Z,NONE,"the problem is: ``` .select-wrapper select:focus { outline: none; } ``` I sometimes add this js: ``` window.addEventListener('keydown', function check_tab(e) { if (e.key === 'Tab') { document.documentElement.classList.add('user-is-tabbing') window.removeEventListener('keydown', check_tab) } }) ``` and then in the css, using a `html.user-is-tabbing` selector undo any outlines I removed.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1306984363,minor a11y: