issue_comments
9,947 rows sorted by user
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
344125441 | https://github.com/simonw/datasette/pull/81#issuecomment-344125441 | https://api.github.com/repos/simonw/datasette/issues/81 | MDEyOklzc3VlQ29tbWVudDM0NDEyNTQ0MQ== | jefftriplett 50527 | 2017-11-14T02:24:54Z | 2017-11-14T02:24:54Z | CONTRIBUTOR | Oops, sorry if I jumped the gun. I saw the project in my GitHub activity feed and saw some low-hanging fruit :) | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | :fire: Removes DS_Store 273595473 | |
735279355 | https://github.com/simonw/datasette/pull/1112#issuecomment-735279355 | https://api.github.com/repos/simonw/datasette/issues/1112 | MDEyOklzc3VlQ29tbWVudDczNTI3OTM1NQ== | jefftriplett 50527 | 2020-11-28T19:21:09Z | 2020-11-28T19:21:09Z | CONTRIBUTOR | (Even more annoying is that I see my editor leaked an extra delete space at the end of the line. I'm happy to rebuild this to be less annoying, but you probably don't want the changelog update either way) | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Fix --metadata doc usage 752749485 | |
735281577 | https://github.com/simonw/datasette/issues/493#issuecomment-735281577 | https://api.github.com/repos/simonw/datasette/issues/493 | MDEyOklzc3VlQ29tbWVudDczNTI4MTU3Nw== | jefftriplett 50527 | 2020-11-28T19:39:53Z | 2020-11-28T19:39:53Z | CONTRIBUTOR | I was confused by `--config` and I tried passing the JSON from datasette-ripgrep into `config.json` just as a wild guess. A short-term solution might be pointing out in plugin docs that their snippet of JSON can go in `metadata.json`; that at least makes it easier to search for config options or to know where to start if someone is new. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Rename metadata.json to config.json 449886319 | |
748305976 | https://github.com/simonw/datasette/issues/493#issuecomment-748305976 | https://api.github.com/repos/simonw/datasette/issues/493 | MDEyOklzc3VlQ29tbWVudDc0ODMwNTk3Ng== | jefftriplett 50527 | 2020-12-18T20:34:39Z | 2020-12-18T20:34:39Z | CONTRIBUTOR | I can't keep up with the renaming contexts, but I like having the ability to run datasette + datasette-ripgrep against different configs: ```shell datasette serve --metadata=./metadata.json ``` I have one for all of my code and one per client who has lots of code. So as long as I can point datasette at something, it's easy to work with. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Rename metadata.json to config.json 449886319 | |
1224382336 | https://github.com/simonw/sqlite-utils/issues/467#issuecomment-1224382336 | https://api.github.com/repos/simonw/sqlite-utils/issues/467 | IC_kwDOCGYnMM5I-peA | jefftriplett 50527 | 2022-08-23T17:16:13Z | 2022-08-23T17:16:13Z | CONTRIBUTOR | > Should passing `alter=True` also drop any columns that aren't included in the new table structure? > > It could even spot column types that aren't correct and fix those. > > Is that consistent with the expectations set by how `alter=True` works elsewhere? I would lean towards not dropping them (or adding a `drop=True`, `drop_columns=True`, or `drop_missing_columns=True` option) to make working with existing tables easier. I do like that sqlite-utils mostly just works with existing tables, but it's also nice to add to existing fields in a few cases. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Mechanism for ensuring a table has all the columns 1348169997 | |
1256781274 | https://github.com/simonw/datasette/issues/1817#issuecomment-1256781274 | https://api.github.com/repos/simonw/datasette/issues/1817 | IC_kwDOBm6k_c5K6PXa | jefftriplett 50527 | 2022-09-23T22:59:46Z | 2022-09-23T22:59:46Z | CONTRIBUTOR | While you are adding features, would you be future-proofing your APIs if you switched over some arguments over to keyword-only arguments or would that be too disruptive? Thinking out loud: ``` async def render_template( self, templates, *, context=None, plugin_context=None, request=None, view_name=None ): ``` | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Expose `sql` and `params` arguments to various plugin hooks 1384273985 | |
913001282 | https://github.com/simonw/datasette/pull/1455#issuecomment-913001282 | https://api.github.com/repos/simonw/datasette/issues/1455 | IC_kwDOBm6k_c42a0tC | ctb 51016 | 2021-09-04T16:31:24Z | 2021-09-04T16:31:24Z | CONTRIBUTOR | I love it! maybe 'researchers' instead? Or 'scientists and researchers'? | {"total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add scientists to target groups 988325628 | |
915279711 | https://github.com/simonw/datasette/issues/1464#issuecomment-915279711 | https://api.github.com/repos/simonw/datasette/issues/1464 | IC_kwDOBm6k_c42jg9f | ctb 51016 | 2021-09-08T14:16:49Z | 2021-09-08T14:16:49Z | CONTRIBUTOR | on commit d57ab156b35ec642 | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | clean checkout & clean environment has test failures 991191951 | |
915302885 | https://github.com/simonw/datasette/issues/1464#issuecomment-915302885 | https://api.github.com/repos/simonw/datasette/issues/1464 | IC_kwDOBm6k_c42jmnl | ctb 51016 | 2021-09-08T14:44:50Z | 2021-09-08T14:44:50Z | CONTRIBUTOR | thanks for the response! full errors attached; excerpt: ``` ... def test_searchmode(table_metadata, querystring, expected_rows): with make_app_client( metadata={"databases": {"fixtures": {"tables": {"searchable": table_metadata}}}} ) as client: response = client.get("/fixtures/searchable.json?" + querystring) > assert expected_rows == response.json["rows"] E AssertionError: assert [[1, 'barry c...sel', 'puma']] == [] E Left contains 2 more items, first extra item: [1, 'barry cat', 'terry dog', 'panther'] E Use -v to get the full diff /Users/t/dev/datasette/tests/test_api.py:1115: AssertionError ``` [errors.txt](https://github.com/simonw/datasette/files/7129719/errors.txt) A quick scan of #1223 suggests you're right. Unfortunately, pysqlite3-binary isn't available for Mac OS X, so I can't quickly check that that fixes it; will do so later. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | clean checkout & clean environment has test failures 991191951 | |
917642487 | https://github.com/simonw/datasette/issues/1464#issuecomment-917642487 | https://api.github.com/repos/simonw/datasette/issues/1464 | IC_kwDOBm6k_c42shz3 | ctb 51016 | 2021-09-12T14:03:09Z | 2021-09-12T14:03:09Z | CONTRIBUTOR | haven't had time to get back to this, but idle thought that I'm recording for later investigation: how does the continuous integration handle this installation issue? Is it documented there? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | clean checkout & clean environment has test failures 991191951 | |
974711959 | https://github.com/simonw/datasette/issues/1426#issuecomment-974711959 | https://api.github.com/repos/simonw/datasette/issues/1426 | IC_kwDOBm6k_c46GOyX | tannewt 52649 | 2021-11-20T21:11:51Z | 2021-11-20T21:11:51Z | NONE | I think another thing would be to make `/pages/robots.txt` work. That way you can use jinja to generate a desired robots.txt. I'm using it to allow the main index and what it links to to be crawled (but not the database pages directly.) | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Manage /robots.txt in Datasette core, block robots by default 964322136 | |
1115542067 | https://github.com/simonw/datasette/issues/1732#issuecomment-1115542067 | https://api.github.com/repos/simonw/datasette/issues/1732 | IC_kwDOBm6k_c5CfdIz | tannewt 52649 | 2022-05-03T01:50:44Z | 2022-05-03T01:50:44Z | NONE | I haven’t set one up unfortunately. My time is very limited because we just had a baby. On Mon, May 2, 2022, at 6:42 PM, Simon Willison wrote: > Thanks, this definitely sounds like a bug. Do you have simple steps to reproduce this? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Custom page variables aren't decoded 1221849746 | |
344810525 | https://github.com/simonw/datasette/issues/46#issuecomment-344810525 | https://api.github.com/repos/simonw/datasette/issues/46 | MDEyOklzc3VlQ29tbWVudDM0NDgxMDUyNQ== | ingenieroariel 54999 | 2017-11-16T04:11:25Z | 2017-11-16T04:11:25Z | CONTRIBUTOR | @simonw On the spatialite support, here is some info to make it work and a screenshot: <img width="1230" alt="screen shot 2017-11-15 at 11 08 14 pm" src="https://user-images.githubusercontent.com/54999/32873420-f8a6d5a0-ca59-11e7-8a73-7d58d467e413.png"> I used the following Dockerfile: ``` FROM prolocutor/python3-sqlite-ext:3.5.1-spatialite as build RUN mkdir /code ADD . /code/ RUN pip install /code/ EXPOSE 8001 CMD ["datasette", "serve", "/code/ne.sqlite", "--host", "0.0.0.0"] ``` and added this to `prepare_connection`: ``` conn.enable_load_extension(True) conn.execute("SELECT load_extension('/usr/local/lib/mod_spatialite.so')") ``` | {"total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Dockerfile should build more recent SQLite with FTS5 and spatialite support 271301468 | |
345002908 | https://github.com/simonw/datasette/issues/46#issuecomment-345002908 | https://api.github.com/repos/simonw/datasette/issues/46 | MDEyOklzc3VlQ29tbWVudDM0NTAwMjkwOA== | ingenieroariel 54999 | 2017-11-16T17:47:49Z | 2017-11-16T17:47:49Z | CONTRIBUTOR | I'll try to find alternatives to the Dockerfile option - I also think we should not use that old one without sources or license. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Dockerfile should build more recent SQLite with FTS5 and spatialite support 271301468 | |
754911290 | https://github.com/simonw/datasette/issues/1171#issuecomment-754911290 | https://api.github.com/repos/simonw/datasette/issues/1171 | MDEyOklzc3VlQ29tbWVudDc1NDkxMTI5MA== | rcoup 59874 | 2021-01-05T21:31:15Z | 2021-01-05T21:31:15Z | NONE | We did this for [Sno](https://sno.earth) under macOS — it's a PyInstaller binary/setup which uses [Packages](http://s.sudre.free.fr/Software/Packages/about.html) for packaging. * [Building & Signing](https://github.com/koordinates/sno/blob/master/platforms/Makefile#L67-L95) * [Packaging & Notarizing](https://github.com/koordinates/sno/blob/master/platforms/Makefile#L121-L215) * [Github Workflow](https://github.com/koordinates/sno/blob/master/.github/workflows/build.yml#L228-L269) has the CI side of it FYI (if you ever get to it) for Windows you need to get a code signing certificate. And if you want automated CI, you'll want to get an "EV CodeSigning for HSM" certificate from GlobalSign, which then lets you put the certificate into Azure Key Vault. Which you can use with [azuresigntool](https://github.com/vcsjones/AzureSignTool) to sign your code & installer. (Non-EV certificates are a waste of time, the user still gets big warnings at install time). | {"total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0} | GitHub Actions workflow to build and sign macOS binary executables 778450486 | |
344424382 | https://github.com/simonw/datasette/issues/93#issuecomment-344424382 | https://api.github.com/repos/simonw/datasette/issues/93 | MDEyOklzc3VlQ29tbWVudDM0NDQyNDM4Mg== | atomotic 67420 | 2017-11-14T22:42:16Z | 2017-11-14T22:42:16Z | NONE | Tried quickly; this seems to be working: ``` ~ pip3 install pyinstaller ~ pyinstaller -F --add-data /usr/local/lib/python3.6/site-packages/datasette/templates:datasette/templates --add-data /usr/local/lib/python3.6/site-packages/datasette/static:datasette/static /usr/local/bin/datasette ~ du -h dist/datasette 6.8M dist/datasette ~ file dist/datasette dist/datasette: Mach-O 64-bit executable x86_64 ``` | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Package as standalone binary 273944952 | |
344430299 | https://github.com/simonw/datasette/issues/93#issuecomment-344430299 | https://api.github.com/repos/simonw/datasette/issues/93 | MDEyOklzc3VlQ29tbWVudDM0NDQzMDI5OQ== | atomotic 67420 | 2017-11-14T23:06:33Z | 2017-11-14T23:06:33Z | NONE | I will take a better look tomorrow; it's late and I surely made some mistake. https://asciinema.org/a/ZyAWbetrlriDadwWyVPUWB94H | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Package as standalone binary 273944952 | |
344516406 | https://github.com/simonw/datasette/issues/93#issuecomment-344516406 | https://api.github.com/repos/simonw/datasette/issues/93 | MDEyOklzc3VlQ29tbWVudDM0NDUxNjQwNg== | atomotic 67420 | 2017-11-15T08:09:41Z | 2017-11-15T08:09:41Z | NONE | Actually you can use Travis to build for Linux/macOS and [AppVeyor](https://www.appveyor.com/) to build for Windows. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Package as standalone binary 273944952 | |
1006708046 | https://github.com/dogsheep/dogsheep-photos/pull/36#issuecomment-1006708046 | https://api.github.com/repos/dogsheep/dogsheep-photos/issues/36 | IC_kwDOD079W848ASVO | scoates 71983 | 2022-01-06T16:04:46Z | 2022-01-06T16:04:46Z | NONE | This one got me, today, too. 👍 | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Correct naming of tool in readme 988493790 | |
645515103 | https://github.com/dogsheep/twitter-to-sqlite/issues/47#issuecomment-645515103 | https://api.github.com/repos/dogsheep/twitter-to-sqlite/issues/47 | MDEyOklzc3VlQ29tbWVudDY0NTUxNTEwMw== | hpk42 73579 | 2020-06-17T17:30:01Z | 2020-06-17T17:30:01Z | NONE | It's the one with python3.7: >>> sqlite3.sqlite_version '3.11.0' On Wed, Jun 17, 2020 at 10:24 -0700, Simon Willison wrote: > That means your version of SQLite is old enough that it doesn't support the FTS5 extension. > > Could you share what operating system you're running, and what the output is that you get from running this? > > python -c 'import sqlite3; print(sqlite3.connect(":memory:").execute("select sqlite_version()").fetchone()[0])' > > I can teach this tool to fall back on FTS4 if FTS5 isn't available. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Fall back to FTS4 if FTS5 is not available 639542974 | |
414860009 | https://github.com/simonw/datasette/issues/267#issuecomment-414860009 | https://api.github.com/repos/simonw/datasette/issues/267 | MDEyOklzc3VlQ29tbWVudDQxNDg2MDAwOQ== | annapowellsmith 78156 | 2018-08-21T23:57:51Z | 2018-08-21T23:57:51Z | NONE | Looks to me like hashing, redirects and caching were documented as part of https://github.com/simonw/datasette/commit/788a542d3c739da5207db7d1fb91789603cdd336#diff-3021b0e065dce289c34c3b49b3952a07 - so perhaps this can be closed? :tada: | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Documentation for URL hashing, redirects and cache policy 323716411 | |
643083451 | https://github.com/simonw/datasette/issues/838#issuecomment-643083451 | https://api.github.com/repos/simonw/datasette/issues/838 | MDEyOklzc3VlQ29tbWVudDY0MzA4MzQ1MQ== | tsibley 79913 | 2020-06-12T06:04:14Z | 2020-06-12T06:04:14Z | NONE | Hmm, I haven't tried removing `ProxyPassReverse`, but it doesn't touch the HTML, which is the issue I'm seeing. You can read the [documentation here](https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxypassreverse). `ProxyPassReverse` is a standard directive when proxying with Apache. I've used it dozens of times with other applications. Looking a little more at the code, I think the issue here is that the behaviour of `base_url` makes sense when Datasette is _mounted_ at a path within a larger application, but not when HTTP requests are being _proxied_ to it. In a _mount_ situation, it is perfectly fine to construct URLs reusing the domain and path from the request. In a _proxy_ situation, it never is, as the domain and path in the request are not the domain and path that the non-proxy client actually needs to use. That is, links which include the Apache → Datasette request origin, `localhost:8001`, instead of the browser → Apache request origin, `example.com`, will be broken. The tests you pointed to also reflect this in two ways: 1. They strip a leading `http://localhost`, allowing such URLs in the facet links to pass, but inclusion of that in a proxy situation would mean the URL is broken. 2. The test client emits direct ASGI events instead of actual proxied HTTP requests. The headers of these ASGI events don't reflect the way an HTTP proxy works; instead they pass through the original request path which contains `base_url`. This works because Datasette responds to requests equivalently at either `/…` or `/{base_url}/…`, which makes some sense in a _mount_ situation but is unconventional (albeit workable) for a proxied app. Apps that support being proxied automatically support being mounted, but apps that only support being mounted don't automatically support being proxied. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Incorrect URLs when served behind a proxy with base_url set 637395097 | |
655018966 | https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655018966 | https://api.github.com/repos/simonw/sqlite-utils/issues/118 | MDEyOklzc3VlQ29tbWVudDY1NTAxODk2Ng== | tsibley 79913 | 2020-07-07T17:41:06Z | 2020-07-07T17:41:06Z | CONTRIBUTOR | Hmm, while tests pass, this may not work as intended on larger datasets. Looking into it. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add insert --truncate option 651844316 | |
655052451 | https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655052451 | https://api.github.com/repos/simonw/sqlite-utils/issues/118 | MDEyOklzc3VlQ29tbWVudDY1NTA1MjQ1MQ== | tsibley 79913 | 2020-07-07T18:45:23Z | 2020-07-07T18:45:23Z | CONTRIBUTOR | Ah, I see the problem. The truncate is inside a loop I didn't realize was there. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add insert --truncate option 651844316 | |
655239728 | https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655239728 | https://api.github.com/repos/simonw/sqlite-utils/issues/118 | MDEyOklzc3VlQ29tbWVudDY1NTIzOTcyOA== | tsibley 79913 | 2020-07-08T02:16:42Z | 2020-07-08T02:16:42Z | CONTRIBUTOR | I fixed my original oops by moving the `DELETE FROM $table` out of the chunking loop and repushed. I think this change can be considered in isolation from issues around transactions, which I discuss next. I wanted to make the DELETE + INSERT happen all in the same transaction so it was robust, but that was more complicated than I expected. The transaction handling in the Database/Table classes isn't systematic, and this poses big hurdles to making `Table.insert_all` (or other operations) consistent and robust in the face of errors. For example, I wanted to do this (whitespace ignored in diff, so indentation change not highlighted): ```diff diff --git a/sqlite_utils/db.py b/sqlite_utils/db.py index d6b9ecf..4107ceb 100644 --- a/sqlite_utils/db.py +++ b/sqlite_utils/db.py @@ -1028,6 +1028,11 @@ class Table(Queryable): batch_size = max(1, min(batch_size, SQLITE_MAX_VARS // num_columns)) self.last_rowid = None self.last_pk = None + with self.db.conn: + # Explicit BEGIN is necessary because Python's sqlite3 doesn't + # issue implicit BEGINs for DDL, only DML. We mix DDL and DML + # below and might execute DDL first, e.g. for table creation. + self.db.conn.execute("BEGIN") if truncate and self.exists(): self.db.conn.execute("DELETE FROM [{}];".format(self.name)) for chunk in chunks(itertools.chain([first_record], records), batch_size): @@ -1038,7 +1043,11 @@ class Table(Queryable): # Use the first batch to derive the table names column_types = suggest_column_types(chunk) column_types.update(columns or {}) - self.create( + # Not self.create() because that is wrapped in its own + # transaction and Python's sqlite3 doesn't support + # nested transactions. + self.db.create_table( + … | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add insert --truncate option 651844316 | |
655643078 | https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655643078 | https://api.github.com/repos/simonw/sqlite-utils/issues/118 | MDEyOklzc3VlQ29tbWVudDY1NTY0MzA3OA== | tsibley 79913 | 2020-07-08T17:05:59Z | 2020-07-08T17:05:59Z | CONTRIBUTOR | > The only thing missing from this PR is updates to the documentation. Ah, yes, thanks for this reminder! I've repushed with doc bits added. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add insert --truncate option 651844316 | |
655652679 | https://github.com/simonw/sqlite-utils/issues/121#issuecomment-655652679 | https://api.github.com/repos/simonw/sqlite-utils/issues/121 | MDEyOklzc3VlQ29tbWVudDY1NTY1MjY3OQ== | tsibley 79913 | 2020-07-08T17:24:46Z | 2020-07-08T17:24:46Z | CONTRIBUTOR | Better transaction handling would be really great. Some of my thoughts on implementing better transaction discipline are in https://github.com/simonw/sqlite-utils/pull/118#issuecomment-655239728. My preferences: - Each CLI command should operate in a single transaction so that either the whole thing succeeds or the whole thing is rolled back. This avoids partially completed operations when an error occurs part way through processing. Partially completed operations are typically much harder to recovery from gracefully and may cause inconsistent data states. - The Python API should be transaction-agnostic and rely on the caller to coordinate transactions. Only the caller knows how individual insert, create, update, etc operations/methods should be bundled conceptually into transactions. When the caller is the CLI, for example, that bundling would be at the CLI command-level. Other callers might want to break up operations into multiple transactions. Transactions are usually most useful when controlled at the application-level (like logging configuration) instead of the library level. The library needs to provide an API that's conducive to transaction use, though. - The Python API should provide a context manager to provide consistent transactions handling with more useful defaults than Python's `sqlite3` module. The latter issues implicit `BEGIN` statements by default for most DML (`INSERT`, `UPDATE`, `DELETE`, … but not `SELECT`, I believe), but **not** DDL (`CREATE TABLE`, `DROP TABLE`, `CREATE VIEW`, …). Notably, the `sqlite3` module doesn't issue the implicit `BEGIN` until the first DML statement. It _does not_ issue it when entering the `with conn` block, like other DBAPI2-compatible modules do. The `with conn` block for `sqlite3` only arranges to commit or rollback an existing transaction when exiting. Including DDL and `SELECT`s in transactions is important for operation consistency, though. There are several existing bugs.python.org tickets about this and future changes are in the works, but sql… | {"total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Improved (and better documented) support for transactions 652961907 | |
655898722 | https://github.com/simonw/sqlite-utils/issues/121#issuecomment-655898722 | https://api.github.com/repos/simonw/sqlite-utils/issues/121 | MDEyOklzc3VlQ29tbWVudDY1NTg5ODcyMg== | tsibley 79913 | 2020-07-09T04:53:08Z | 2020-07-09T04:53:08Z | CONTRIBUTOR | Yep, I agree that makes more sense for backwards compat and more casual use cases. I think it should be possible for the Database/Queryable methods to DTRT based on seeing if it's within a context-manager-managed transaction. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Improved (and better documented) support for transactions 652961907 | |
790857004 | https://github.com/simonw/datasette/issues/1238#issuecomment-790857004 | https://api.github.com/repos/simonw/datasette/issues/1238 | MDEyOklzc3VlQ29tbWVudDc5MDg1NzAwNA== | tsibley 79913 | 2021-03-04T19:06:55Z | 2021-03-04T19:06:55Z | NONE | @rgieseke Ah, that's super helpful. Thank you for the workaround for now! | {"total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Custom pages don't work with base_url setting 813899472 | |
795893813 | https://github.com/simonw/datasette/issues/838#issuecomment-795893813 | https://api.github.com/repos/simonw/datasette/issues/838 | MDEyOklzc3VlQ29tbWVudDc5NTg5MzgxMw== | tsibley 79913 | 2021-03-10T18:43:39Z | 2021-03-10T18:43:39Z | NONE | @simonw Unfortunately this issue as I reported it is not actually solved in version 0.55. Every link which is returned by the `Datasette.absolute_url` method is still wrong, because it uses the request URL as the base. This still includes the suggested facet links and pagination links. What I wrote originally still stands: > Although many of the URLs in the pages are correct (presumably because they either use absolute paths which include `base_url` or relative paths), the faceting and pagination links still use fully-qualified URLs pointing at `http://localhost:8001`. > > I looked into this a little in the source code, and it seems to be an issue anywhere `request.url` or `request.path` is used, as these contain the values for the request between the frontend (Apache) and backend (Datasette) server. Those properties are primarily used via the `path_with_…` family of utility functions and the `Datasette.absolute_url` method. Would you prefer to re-open this issue or have me create a new one? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Incorrect URLs when served behind a proxy with base_url set 637395097 | |
795939998 | https://github.com/simonw/datasette/issues/838#issuecomment-795939998 | https://api.github.com/repos/simonw/datasette/issues/838 | MDEyOklzc3VlQ29tbWVudDc5NTkzOTk5OA== | tsibley 79913 | 2021-03-10T19:16:55Z | 2021-03-10T19:16:55Z | NONE | Nod. The problem with the tests is that they're ignoring the origin (hostname, port) of links. In a reverse proxy situation, the frontend request origin is different than the backend request origin. The problem is Datasette generates links with the backend request origin. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Incorrect URLs when served behind a proxy with base_url set 637395097 | |
795950636 | https://github.com/simonw/datasette/issues/838#issuecomment-795950636 | https://api.github.com/repos/simonw/datasette/issues/838 | MDEyOklzc3VlQ29tbWVudDc5NTk1MDYzNg== | tsibley 79913 | 2021-03-10T19:24:13Z | 2021-03-10T19:24:13Z | NONE | I think this could be solved by one of: 1. Stop generating absolute URLs, e.g. ones that include an origin. Relative URLs with absolute paths are fine, as long as they take `base_url` into account (as they do now, yay!). 2. Extend `base_url` to include the expected frontend origin, and then use that information when generating absolute URLs. 3. Document which HTTP headers the reverse proxy should set (e.g. the `X-Forwarded-*` family of conventional headers) to pass the frontend origin information to Datasette, and then use that information when generating absolute URLs. Option 1 seems like the easiest to me, if you can get away with never having to generate an absolute URL. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Incorrect URLs when served behind a proxy with base_url set 637395097 | |
876213177 | https://github.com/simonw/datasette/issues/1388#issuecomment-876213177 | https://api.github.com/repos/simonw/datasette/issues/1388 | MDEyOklzc3VlQ29tbWVudDg3NjIxMzE3Nw== | aslakr 80737 | 2021-07-08T07:47:17Z | 2021-07-08T07:47:17Z | CONTRIBUTOR | > This sounds like a valuable feature for people running Datasette behind a proxy. Yes, in some cases it is easier to use e.g. Apache's [ProxyPass Directive](https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxypass) with a Unix domain socket like `unix:/home/www.socket|http://localhost/whatever/`. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Serve using UNIX domain socket 939051549 | |
360535979 | https://github.com/simonw/datasette/issues/179#issuecomment-360535979 | https://api.github.com/repos/simonw/datasette/issues/179 | MDEyOklzc3VlQ29tbWVudDM2MDUzNTk3OQ== | psychemedia 82988 | 2018-01-25T17:18:24Z | 2018-01-25T17:18:24Z | CONTRIBUTOR | To summarise that thread: - expose full `metadata.json` object to the index page template, eg to allow tables to be referred to by name; - ability to import multiple `metadata.json` files, eg to allow metadata files created for a specific SQLite db to be reused in a datasette referring to several database files; It could also be useful to allow users to import a python file containing custom functions that can that be loaded into scope and made available to custom templates. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | More metadata options for template authors 288438570 | |
401310732 | https://github.com/simonw/datasette/issues/276#issuecomment-401310732 | https://api.github.com/repos/simonw/datasette/issues/276 | MDEyOklzc3VlQ29tbWVudDQwMTMxMDczMg== | psychemedia 82988 | 2018-06-29T10:05:04Z | 2018-06-29T10:07:25Z | CONTRIBUTOR | @russs Different map projections can presumably be handled on the client side using a leaflet plugin to transform the geometry (eg [kartena/Proj4Leaflet](https://kartena.github.io/Proj4Leaflet/)) although the leaflet side would need to detect or be informed of the original projection? Another possibility would be to provide an easy way/guidance for users to create an FK'd table containing the WGS84 projection of a non-WGS84 geometry in the original/principal table? This could then act as a proxy for serving GeoJSON to the leaflet map? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Handle spatialite geometry columns better 324835838 | |
435862009 | https://github.com/simonw/datasette/issues/371#issuecomment-435862009 | https://api.github.com/repos/simonw/datasette/issues/371 | MDEyOklzc3VlQ29tbWVudDQzNTg2MjAwOQ== | psychemedia 82988 | 2018-11-05T12:48:35Z | 2018-11-05T12:48:35Z | CONTRIBUTOR | I think you need to register a domain name you own separately in order to get a non-IP address? https://www.digitalocean.com/docs/networking/dns/ | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | datasette publish digitalocean plugin 377156339 | |
436037692 | https://github.com/simonw/datasette/issues/370#issuecomment-436037692 | https://api.github.com/repos/simonw/datasette/issues/370 | MDEyOklzc3VlQ29tbWVudDQzNjAzNzY5Mg== | psychemedia 82988 | 2018-11-05T21:15:47Z | 2018-11-05T21:18:37Z | CONTRIBUTOR | In terms of integration with `pandas`, I was pondering two different ways `datasette`/`csvs_to_sqlite` integration may work: - like [`pandasql`](https://github.com/yhat/pandasql), to provide a SQL query layer either by a direct connection to the sqlite db or via `datasette` API; - as an improvement of `pandas.to_sql()`, which is a bit ropey (e.g. `pandas.to_sql_from_csvs()`, routing the dataframe to sqlite via `csvs_to_sqlite` rather than the dodgy mapping that `pandas` supports). The `pandas.publish_*` idea could be quite interesting though... Would it be useful/fruitful to think about `publish_` as a complement to [`pandas.to_`](https://pandas.pydata.org/pandas-docs/stable/api.html#id12)? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Integration with JupyterLab 377155320 | |
436042445 | https://github.com/simonw/datasette/issues/370#issuecomment-436042445 | https://api.github.com/repos/simonw/datasette/issues/370 | MDEyOklzc3VlQ29tbWVudDQzNjA0MjQ0NQ== | psychemedia 82988 | 2018-11-05T21:30:42Z | 2018-11-05T21:31:48Z | CONTRIBUTOR | Another route would be something like creating a `datasette` IPython magic for notebooks to take a dataframe and easily render it as a `datasette`. You'd need to run the app in the background rather than block execution in the notebook. Related to that, or to publishing a dataframe in notebook cell for use in other cells in a non-blocking way, there may be cribs in something like https://github.com/micahscopes/nbmultitask . | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Integration with JupyterLab 377155320 | |
459915995 | https://github.com/simonw/datasette/issues/160#issuecomment-459915995 | https://api.github.com/repos/simonw/datasette/issues/160 | MDEyOklzc3VlQ29tbWVudDQ1OTkxNTk5NQ== | psychemedia 82988 | 2019-02-02T00:43:16Z | 2019-02-02T00:58:20Z | CONTRIBUTOR | Do you have any simple working examples of how to use `--static`? Inspection of default served files suggests locations such as `http://example.com/-/static/app.css?0e06ee`. If `datasette` is being proxied to `http://example.com/foo/datasette`, what form should arguments to `--static` take so that static files are correctly referenced? Use case is here: https://github.com/psychemedia/jupyterserverproxy-datasette-demo Trying to do a really simple `datasette` demo in MyBinder using jupyter-server-proxy. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Ability to bundle and serve additional static files 278208011 | |
464341721 | https://github.com/simonw/sqlite-utils/issues/8#issuecomment-464341721 | https://api.github.com/repos/simonw/sqlite-utils/issues/8 | MDEyOklzc3VlQ29tbWVudDQ2NDM0MTcyMQ== | psychemedia 82988 | 2019-02-16T12:08:41Z | 2019-02-16T12:08:41Z | NONE | We also get an error if a column name contains a `.` | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Problems handling column names containing spaces or - 403922644 | |
474280581 | https://github.com/simonw/datasette/issues/417#issuecomment-474280581 | https://api.github.com/repos/simonw/datasette/issues/417 | MDEyOklzc3VlQ29tbWVudDQ3NDI4MDU4MQ== | psychemedia 82988 | 2019-03-19T10:06:42Z | 2019-03-19T10:06:42Z | CONTRIBUTOR | This would be really interesting but several possibilities in use arise, I think? For example: - I put a new CSV file into the import dir and a new table is created therefrom - I put a CSV file into the import dir that replaces a previous file / table of the same name as a pre-existing table (eg files that contain monthly data in year to date). The data may also patch previous months, so a full replace / DROP on the original table may well be in order. - I put a CSV file into the import dir that updates a table of the same name as a pre-existing table (eg files that contain last month's data) CSV files may also have messy names compared to the table you want. Or for an update CSV, may have the form `MYTABLENAME-February2019.csv` etc | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Datasette Library 421546944 | |
474282321 | https://github.com/simonw/datasette/issues/412#issuecomment-474282321 | https://api.github.com/repos/simonw/datasette/issues/412 | MDEyOklzc3VlQ29tbWVudDQ3NDI4MjMyMQ== | psychemedia 82988 | 2019-03-19T10:09:46Z | 2019-03-19T10:09:46Z | CONTRIBUTOR | Does this also relate to https://github.com/simonw/datasette/issues/283 and the ability to `ATTACH DATABASE`? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Linked Data(sette) 411257981 | |
480621924 | https://github.com/simonw/sqlite-utils/issues/18#issuecomment-480621924 | https://api.github.com/repos/simonw/sqlite-utils/issues/18 | MDEyOklzc3VlQ29tbWVudDQ4MDYyMTkyNA== | psychemedia 82988 | 2019-04-07T19:31:42Z | 2019-04-07T19:31:42Z | NONE | I've just noticed that SQLite lets you IGNORE inserts that collide with a pre-existing key. This can be quite handy if you have a dataset that keeps changing in part, and you don't want to upsert and replace pre-existing PK rows but you do want to ignore collisions to existing PK rows. Does `sqlite_utils` support such (cavalier!) behaviour? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | .insert/.upsert/.insert_all/.upsert_all should add missing columns 413871266 | |
482994231 | https://github.com/simonw/sqlite-utils/issues/8#issuecomment-482994231 | https://api.github.com/repos/simonw/sqlite-utils/issues/8 | MDEyOklzc3VlQ29tbWVudDQ4Mjk5NDIzMQ== | psychemedia 82988 | 2019-04-14T15:04:07Z | 2019-04-14T15:29:33Z | NONE | PLEASE IGNORE THE BELOW... I did a package update and rebuilt the kernel I was working in... may just have been an old version of sqlite_utils, seems to be working now. (Too many containers / too many environments!) Has an issue been reintroduced here with FTS? eg I'm getting an error thrown by spaces in column names here: ``` /usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, upsert, batch_size, column_order) def enable_fts(self, columns, fts_version="FTS5"): --> 329 "Enables FTS on the specified columns" 330 sql = """ 331 CREATE VIRTUAL TABLE "{table}_fts" USING {fts_version} ( ``` when trying an `insert_all`. Also, if a col has a `.` in it, I seem to get: ``` /usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, upsert, batch_size, column_order) 327 jsonify_if_needed(record.get(key, None)) for key in all_columns 328 ) --> 329 result = self.db.conn.execute(sql, values) 330 self.db.conn.commit() 331 self.last_id = result.lastrowid OperationalError: near ".": syntax error ``` (Can't post a worked minimal example right now; racing trying to build something against a live timing screen that will stop until next weekend in an hour or two...) PS Hmmm I did a test and they seem to work; I must be messing up s/where else... ``` import sqlite3 from sqlite_utils import Database dbname='testingDB_sqlite_utils.db' #!rm $dbname conn = sqlite3.connect(dbname, timeout=10) #Setup database tables c = conn.cursor() setup=''' CREATE TABLE IF NOT EXISTS "test1" ( "NO" INTEGER, "NAME" TEXT ); CREATE TABLE IF NOT EXISTS "test2" ( "NO" INTEGER, `TIME OF DAY` TEXT ); CREATE TABLE IF NOT EXISTS "test3" ( "NO" INTEGER, `AVG. SPEED (MPH)` FLOAT ); ''' c.executescript(setup) DB = Database(conn) … | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Problems handling column names containing spaces or - 403922644 | |
483017176 | https://github.com/simonw/datasette/issues/431#issuecomment-483017176 | https://api.github.com/repos/simonw/datasette/issues/431 | MDEyOklzc3VlQ29tbWVudDQ4MzAxNzE3Ng== | psychemedia 82988 | 2019-04-14T16:58:37Z | 2019-04-14T16:58:37Z | CONTRIBUTOR | Hmm... nope... I see an updated timestamp from `ls -al` on the db but no reload? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Datasette doesn't reload when database file changes 432870248 | |
483202658 | https://github.com/simonw/datasette/issues/429#issuecomment-483202658 | https://api.github.com/repos/simonw/datasette/issues/429 | MDEyOklzc3VlQ29tbWVudDQ4MzIwMjY1OA== | psychemedia 82988 | 2019-04-15T10:48:01Z | 2019-04-15T10:48:01Z | CONTRIBUTOR | Minor UI observation: ![image](https://user-images.githubusercontent.com/82988/56127017-2bf78e80-5f74-11e9-9120-9393eb5d4988.png) `_where=` renders a `[remove]` link whereas `_facet=` gets a cross to remove it. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | ?_where=sql-fragment parameter for table views 432636432 | |
509013413 | https://github.com/simonw/datasette/issues/507#issuecomment-509013413 | https://api.github.com/repos/simonw/datasette/issues/507 | MDEyOklzc3VlQ29tbWVudDUwOTAxMzQxMw== | psychemedia 82988 | 2019-07-07T16:31:57Z | 2019-07-07T16:31:57Z | CONTRIBUTOR | Chrome and Firefox [both support headless screengrabs]( https://www.bleepingcomputer.com/news/software/chrome-and-firefox-can-take-screenshots-of-sites-from-the-command-line/) from command line, but I don't know how parameterised they can be? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Every datasette plugin on the ecosystem page should have a screenshot 455852801 | |
559207224 | https://github.com/simonw/datasette/issues/642#issuecomment-559207224 | https://api.github.com/repos/simonw/datasette/issues/642 | MDEyOklzc3VlQ29tbWVudDU1OTIwNzIyNA== | psychemedia 82988 | 2019-11-27T18:40:57Z | 2019-11-27T18:41:07Z | CONTRIBUTOR | Would cookie cutter approaches also work for creating various flavours of customised templates? I need to try to create a couple of sites for myself to get a feel for what sorts of thing are easily doable, and what cribbable cookie cutter items might be. I'm guessing https://simonwillison.net/2019/Nov/25/niche-museums/ is a good place to start from? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Provide a cookiecutter template for creating new plugins 529429214 | |
559632608 | https://github.com/simonw/datasette/issues/573#issuecomment-559632608 | https://api.github.com/repos/simonw/datasette/issues/573 | MDEyOklzc3VlQ29tbWVudDU1OTYzMjYwOA== | psychemedia 82988 | 2019-11-29T01:43:38Z | 2019-11-29T01:43:38Z | CONTRIBUTOR | In passing, it looks like a start was made on a datasette Jupyter server extension in https://github.com/lucasdurand/jupyter-datasette although the build fails in MyBinder. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Exposing Datasette via Jupyter-server-proxy 492153532 | |
571138093 | https://github.com/simonw/sqlite-utils/issues/73#issuecomment-571138093 | https://api.github.com/repos/simonw/sqlite-utils/issues/73 | MDEyOklzc3VlQ29tbWVudDU3MTEzODA5Mw== | psychemedia 82988 | 2020-01-06T13:28:31Z | 2020-01-06T13:28:31Z | NONE | I think I actually had several issues in play... The missing key was one, but I think there is also an issue as per below. For example, in the following: ```python def init_testdb(dbname='test.db'): if os.path.exists(dbname): os.remove(dbname) conn = sqlite3.connect(dbname) db = Database(conn) return conn, db conn, db = init_testdb() c = conn.cursor() c.executescript('CREATE TABLE "test1" ("Col1" TEXT, "Col2" TEXT, PRIMARY KEY ("Col1"));') c.executescript('CREATE TABLE "test2" ("Col1" TEXT, "Col2" TEXT, PRIMARY KEY ("Col1"));') print('Test 1...') for i in range(3): db['test1'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1')) db['test2'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1')) print('Test 2...') for i in range(3): db['test1'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1')) db['test2'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}, {'Col1':'c','Col2':'x'}], pk=('Col1')) print('Done...') --------------------------------------------------------------------------- Test 1... Test 2... IndexError: list index out of range --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-763-444132ca189f> in <module> 22 print('Test 2...') 23 for i in range(3): ---> 24 db['test1'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}], pk=('Col1')) 25 db['test2'].upsert_all([{'Col1':'a', 'Col2':'x'},{'Col1':'b', 'Col2':'x'}, 26 {'Col1':'c','Col2':'x'}], pk=('Col1')) /usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in upsert_all(self, records, pk, foreign_keys, column_order, not_null, defaults, batch_size, hash_id, alter, extracts) 1157 alter=alter, 1158 extracts=extracts, -> 1… | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | upsert_all() throws issue when upserting to empty table 545407916 | |
573047321 | https://github.com/simonw/sqlite-utils/issues/73#issuecomment-573047321 | https://api.github.com/repos/simonw/sqlite-utils/issues/73 | MDEyOklzc3VlQ29tbWVudDU3MzA0NzMyMQ== | psychemedia 82988 | 2020-01-10T14:02:56Z | 2020-01-10T14:09:23Z | NONE | Hmmm... just tried with installs from pip and the repo (v2.0.0 and v2.0.1) and I get the error each time (start of second run through the second loop). Could it be sqlite3? I'm on 3.30.1. UPDATE: just tried it on jupyter.org/try and I get the error there, too. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | upsert_all() throws issue when upserting to empty table 545407916 | |
580745213 | https://github.com/simonw/sqlite-utils/issues/73#issuecomment-580745213 | https://api.github.com/repos/simonw/sqlite-utils/issues/73 | MDEyOklzc3VlQ29tbWVudDU4MDc0NTIxMw== | psychemedia 82988 | 2020-01-31T14:02:38Z | 2020-01-31T14:21:09Z | NONE | So the conundrum continues... The simple test case above now runs, but if I upsert a large number of new records (successfully) and then try to upsert a smaller number of new records to a different table, I get the same error. If I run the same upserts again (which in the first case means there are no new records to add, because they were already added), the second upsert works correctly. It feels as if, when the number of items added via an upsert is much greater (>>) than the number of items I try to add in an upsert immediately after, I get the error. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | upsert_all() throws issue when upserting to empty table 545407916 | |
586599424 | https://github.com/simonw/datasette/issues/417#issuecomment-586599424 | https://api.github.com/repos/simonw/datasette/issues/417 | MDEyOklzc3VlQ29tbWVudDU4NjU5OTQyNA== | psychemedia 82988 | 2020-02-15T15:12:19Z | 2020-02-15T15:12:33Z | CONTRIBUTOR | So could the polling support also allow you to call sqlite_utils to update a database with csv files? (Though I'm guessing you would only want to handle changed files? Do your scrapers check and cache csv datestamps/hashes?) | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Datasette Library 421546944 | |
604328163 | https://github.com/simonw/datasette/issues/573#issuecomment-604328163 | https://api.github.com/repos/simonw/datasette/issues/573 | MDEyOklzc3VlQ29tbWVudDYwNDMyODE2Mw== | psychemedia 82988 | 2020-03-26T09:41:30Z | 2020-03-26T09:41:30Z | CONTRIBUTOR | Fixed by @simonw; example here: https://github.com/simonw/jupyterserverproxy-datasette-demo | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Exposing Datasette via Jupyter-server-proxy 492153532 | |
714657366 | https://github.com/simonw/datasette/issues/1033#issuecomment-714657366 | https://api.github.com/repos/simonw/datasette/issues/1033 | MDEyOklzc3VlQ29tbWVudDcxNDY1NzM2Ng== | psychemedia 82988 | 2020-10-22T17:51:29Z | 2020-10-22T17:51:29Z | CONTRIBUTOR | How does `/-/static` relate to [current guidance docs around `static`](https://docs.datasette.io/en/latest/custom_templates.html?highlight=static#serving-static-files) regarding the `--static` option and metadata formulations such as `"extra_js_urls": [ "/static/app.js"]`? (I've not managed to get this to work in a Jupyter server proxied set up; the [datasette / jupyter server proxy repo](https://github.com/simonw/jupyterserverproxy-datasette-demo) may provide a useful test example, eg via MyBinder, for folk to crib from?) | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | datasette.urls.static_plugins(...) method 725099777 | |
716066000 | https://github.com/simonw/datasette/issues/1033#issuecomment-716066000 | https://api.github.com/repos/simonw/datasette/issues/1033 | MDEyOklzc3VlQ29tbWVudDcxNjA2NjAwMA== | psychemedia 82988 | 2020-10-24T22:58:33Z | 2020-10-24T22:58:33Z | CONTRIBUTOR | From [the docs](https://docs.datasette.io/en/latest/internals.html#datasette-urls), I note: ``` datasette.urls.instance() Returns the URL to the Datasette instance root page. This is usually "/" ``` What about the proxy case? Eg if I am using jupyter-server-proxy on a MyBinder or local Jupyter notebook server site, `https://example.com:PORT/weirdpath/datasette`, what does `datasette.urls.instance()` refer to? - [ ] `https://example.com:PORT/weirdpath/datasette` - [ ] `https://example.com:PORT/weirdpath/` - [ ] `https://example.com:PORT/` - [ ] `https://example.com` - [ ] something else? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | datasette.urls.static_plugins(...) method 725099777 | |
716123598 | https://github.com/simonw/datasette/issues/838#issuecomment-716123598 | https://api.github.com/repos/simonw/datasette/issues/838 | MDEyOklzc3VlQ29tbWVudDcxNjEyMzU5OA== | psychemedia 82988 | 2020-10-25T10:20:12Z | 2020-10-25T10:53:24Z | CONTRIBUTOR | I'm trying to [run something behind a MyBinder proxy](https://github.com/ouseful-testing/nbsearch), but I seem to have something set up incorrectly and I'm not sure what the fix is. I'm starting datasette with jupyter-server-proxy setup: ``` # __init__.py def setup_nbsearch(): return { "command": [ "datasette", "serve", f"{_NBSEARCH_DB_PATH}", "-p", "{port}", "--config", "base_url:{base_url}nbsearch/" ], "absolute_url": True, # The following needs the labextension installing. # eg in postBuild: jupyter labextension install jupyterlab-server-proxy "launcher_entry": { "enabled": True, "title": "nbsearch", }, } ``` where the `base_url` gets automatically populated by the server-proxy. I define the loaders as: ``` # __init__.py from datasette import hookimpl @hookimpl def extra_css_urls(database, table, columns, view_name, datasette): return [ "/-/static-plugins/nbsearch/prism.css", "/-/static-plugins/nbsearch/nbsearch.css", ] ``` but these seem to also need a base_url prefix set somehow? Currently, the generated HTML loads properly but internal links are incorrect; eg they take the form `<link rel="stylesheet" href="/-/static-plugins/nbsearch/prism.css">` which resolves to eg `https://notebooks.gesis.org/hub/-/static-plugins/nbsearch/prism.css` rather than the required URL of the form `https://notebooks.gesis.org/binder/jupyter/user/ouseful-testing-nbsearch-0fx1mx67/nbsearch/-/static-plugins/nbsearch/prism.css`. The main css is loaded correctly: `<link rel="stylesheet" href="/binder/jupyter/user/ouseful-testing-nbsearch-0fx1mx67/nbsearch/-/static/app.css?404439">` | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Incorrect URLs when served behind a proxy with base_url set 637395097 | |
718528252 | https://github.com/simonw/datasette/pull/1049#issuecomment-718528252 | https://api.github.com/repos/simonw/datasette/issues/1049 | MDEyOklzc3VlQ29tbWVudDcxODUyODI1Mg== | psychemedia 82988 | 2020-10-29T09:20:34Z | 2020-10-29T09:20:34Z | CONTRIBUTOR | That workaround is probably fine. I was trying to work out whether there might be other situations where a pre-external package load might be useful but couldn't offhand bring any other examples to mind. The static plugins option also looks interesting. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add template block prior to extra URL loaders 729017519 | |
720354227 | https://github.com/simonw/datasette/issues/838#issuecomment-720354227 | https://api.github.com/repos/simonw/datasette/issues/838 | MDEyOklzc3VlQ29tbWVudDcyMDM1NDIyNw== | psychemedia 82988 | 2020-11-02T09:33:58Z | 2020-11-02T09:33:58Z | CONTRIBUTOR | Thanks; just a note that the `datasette.urls.static(path)` and `datasette.urls.static_plugins(plugin_name, path)` items both seem to be repeated and appear in the docs twice? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Incorrect URLs when served behind a proxy with base_url set 637395097 | |
752098906 | https://github.com/simonw/datasette/issues/417#issuecomment-752098906 | https://api.github.com/repos/simonw/datasette/issues/417 | MDEyOklzc3VlQ29tbWVudDc1MjA5ODkwNg== | psychemedia 82988 | 2020-12-29T14:34:30Z | 2020-12-29T14:34:50Z | CONTRIBUTOR | FWIW, I had a look at `watchdog` for a `datasette` powered Jupyter notebook search tool: https://github.com/ouseful-testing/nbsearch/blob/main/nbsearch/nbwatchdog.py Not a production thing, just an experiment trying to explore what might be possible... | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Datasette Library 421546944 | |
1010947634 | https://github.com/simonw/datasette/issues/1591#issuecomment-1010947634 | https://api.github.com/repos/simonw/datasette/issues/1591 | IC_kwDOBm6k_c48QdYy | psychemedia 82988 | 2022-01-12T11:32:17Z | 2022-01-12T11:32:17Z | CONTRIBUTOR | Is it possible to parse things like `--ext-{plugin}-{arg} VALUE` ? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Maybe let plugins define custom serve options? 1100015398 | |
1033641009 | https://github.com/simonw/sqlite-utils/pull/203#issuecomment-1033641009 | https://api.github.com/repos/simonw/sqlite-utils/issues/203 | IC_kwDOCGYnMM49nBwx | psychemedia 82988 | 2022-02-09T11:06:18Z | 2022-02-09T11:06:18Z | NONE | Is there any progress elsewhere on the handling of compound / composite foreign keys, or is this PR still effectively open? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | changes to allow for compound foreign keys 743384829 | |
1041313679 | https://github.com/simonw/sqlite-utils/issues/406#issuecomment-1041313679 | https://api.github.com/repos/simonw/sqlite-utils/issues/406 | IC_kwDOCGYnMM4-ES-P | psychemedia 82988 | 2022-02-16T09:59:51Z | 2022-02-16T10:00:10Z | NONE | The `CustomColumnType()` approach looks good. This pushes you into the mindspace that you are defining and working with a custom column type. When creating the table, you could then error, or at least warn, if someone wasn't setting a column on a `type` or a custom column type, which I guess is where `mypy` comes in? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Creating tables with custom datatypes 1128466114 | |
1041325398 | https://github.com/simonw/sqlite-utils/issues/402#issuecomment-1041325398 | https://api.github.com/repos/simonw/sqlite-utils/issues/402 | IC_kwDOCGYnMM4-EV1W | psychemedia 82988 | 2022-02-16T10:12:48Z | 2022-02-16T10:18:55Z | NONE | > My hunch is that the case where you want to consider input from more than one column will actually be pretty rare - the only case I can think of where I would want to do that is for latitude/longitude columns Other possible pairs: unconventional date/datetime and timezone pairs eg `2022-02-16::17.00, London`; or more generally, numerical value and unit of measurement pairs (eg if you want to cast into and out of different measurement units using packages like `pint`) or currencies etc. Actually, in that case, I guess you may be presenting things that are unit typed already, and so a conversion would need to parse things into an appropriate, possibly two column `value, unit` format. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Advanced class-based `conversions=` mechanism 1125297737 | |
1041363433 | https://github.com/simonw/sqlite-utils/issues/406#issuecomment-1041363433 | https://api.github.com/repos/simonw/sqlite-utils/issues/406 | IC_kwDOCGYnMM4-EfHp | psychemedia 82988 | 2022-02-16T10:57:03Z | 2022-02-16T10:57:19Z | NONE | Wondering if this actually relates to https://github.com/simonw/sqlite-utils/issues/402? I also wonder if this would be a sensible approach for e.g. registering `pint`-based quantity conversions into and out of the db, perhaps storing the quantity as a serialised `magnitude measurement` single-column string? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Creating tables with custom datatypes 1128466114 | |
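To make the `pint` idea concrete, a minimal round-trip sketch: serialise a quantity to a single `magnitude unit` string for storage, then parse it back out. The table and column names are invented for the example.

```python
import pint
import sqlite_utils

ureg = pint.UnitRegistry()
db = sqlite_utils.Database(":memory:")

distance = 3.5 * ureg.kilometer
# Store as one serialised "magnitude unit" string, e.g. "3.5 kilometer"
db["readings"].insert({"id": 1, "distance": str(distance)}, pk="id")

# Parse the string back into a quantity and convert units with pint
row = db["readings"].get(1)
restored = ureg.Quantity(row["distance"])
print(restored.to(ureg.meter))  # 3500.0 meter
```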
1248204219 | https://github.com/simonw/datasette/issues/1810#issuecomment-1248204219 | https://api.github.com/repos/simonw/datasette/issues/1810 | IC_kwDOBm6k_c5KZhW7 | psychemedia 82988 | 2022-09-15T14:44:47Z | 2022-09-15T14:46:26Z | CONTRIBUTOR | A couple+ of possible use case examples: - someone has a collection of articles indexed with FTS; they want to publish a simple search tool over the results; - someone has an image collection and they want to be able to search over description text to return images; - someone has a set of locations with descriptions, and wants to run a query over places and descriptions and get results as a listing or on a map; - someone has a set of audio or video files with titles, descriptions and/or transcripts, and wants to be able to search over them and return playable versions of returned items. In many cases, I suspect the raw content will be in one table, but the search table will be a second (eg FTS) table. Generally, the search may be over one or more joined tables, and the results constructed from one or more tables (which may or may not be distinct from the search tables). | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Featured table(s) on the homepage 1374626873 | |
1248440137 | https://github.com/simonw/sqlite-utils/issues/406#issuecomment-1248440137 | https://api.github.com/repos/simonw/sqlite-utils/issues/406 | IC_kwDOCGYnMM5Kaa9J | psychemedia 82988 | 2022-09-15T18:13:50Z | 2022-09-15T18:13:50Z | NONE | I was wondering if you have any more thoughts on this? I have a tangible use case now: adding a "vector" column to a database to support semantic search using doc2vec embeddings ([example](https://psychemedia.github.io/storynotes/Lang_Doc2Vec.html); note that the `vtfunc` package may no longer be reliable...). | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Creating tables with custom datatypes 1128466114 | |
1420941334 | https://github.com/simonw/datasette/pull/564#issuecomment-1420941334 | https://api.github.com/repos/simonw/datasette/issues/564 | IC_kwDOBm6k_c5UsdgW | psychemedia 82988 | 2023-02-07T15:14:10Z | 2023-02-07T15:14:10Z | CONTRIBUTOR | Is this feature covered by any more recent updates to `datasette`, or via any plugins that you're aware of? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | First proof-of-concept of Datasette Library 473288428 | |
783560017 | https://github.com/simonw/datasette/issues/1166#issuecomment-783560017 | https://api.github.com/repos/simonw/datasette/issues/1166 | MDEyOklzc3VlQ29tbWVudDc4MzU2MDAxNw== | thorn0 94334 | 2021-02-22T18:00:57Z | 2021-02-22T18:13:11Z | NONE | Hi! I don't think Prettier supports this syntax for globs: `datasette/static/*[!.min].js` Are you sure that works? Prettier uses https://github.com/mrmlnc/fast-glob, which in turn uses https://github.com/micromatch/micromatch, and the docs for these packages don't mention this syntax. As per the docs, square brackets should work as in regexes (`foo-[1-5].js`). Tested it. Apparently, it works as a negated character class in regexes (like `[^.min]`). I wonder where this syntax comes from. Micromatch doesn't support that: ```js micromatch(['static/table.js', 'static/n.js'], ['static/*[!.min].js']); // result: ["static/n.js"] -- the brackets behave like a literal character class [!.min], without negation ``` | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Adopt Prettier for JavaScript code formatting 777140799 | |
1315853097 | https://github.com/simonw/datasette/pull/1893#issuecomment-1315853097 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5OblMp | bgrins 95570 | 2022-11-15T20:55:40Z | 2022-11-15T20:55:40Z | CONTRIBUTOR | Should also minify the bundled output | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
1315869040 | https://github.com/simonw/datasette/pull/1893#issuecomment-1315869040 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5ObpFw | bgrins 95570 | 2022-11-15T21:11:42Z | 2022-11-15T21:11:42Z | CONTRIBUTOR | extraKeys is done - Shift+Enter is added in the helper function, and it appears that the Tab behavior now defaults to what the `Tab: false` setting was doing (allowing it to escape to the form) | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
1315869946 | https://github.com/simonw/datasette/pull/1893#issuecomment-1315869946 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5ObpT6 | bgrins 95570 | 2022-11-15T21:12:38Z | 2022-11-15T21:12:38Z | CONTRIBUTOR | https://github.com/Sphinxxxx/cm-resize isn't compatible with 6. There's a suggestion to try using CSS resize in https://discuss.codemirror.net/t/resizing-codemirror-6/3265/2 | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
1316041828 | https://github.com/simonw/datasette/pull/1893#issuecomment-1316041828 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5OcTRk | bgrins 95570 | 2022-11-15T23:51:35Z | 2022-11-15T23:51:35Z | CONTRIBUTOR | I experimented with autocompleting the actual schema in https://github.com/bgrins/datasette/commit/8431c98850c7a552dbcde2a4dd0c3dc942a97d25, but it would need some work (current problems with it listed in the commit message there) | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
1316243602 | https://github.com/simonw/datasette/pull/1893#issuecomment-1316243602 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5OdEiS | bgrins 95570 | 2022-11-16T03:11:46Z | 2022-11-16T03:11:46Z | CONTRIBUTOR | Was just reviewing the SQL options and there's an [upperCaseKeywords](https://github.com/codemirror/lang-sql#user-content-sqlconfig.uppercasekeywords) option if we'd rather have SELECT vs select. Datasette seems to prefer lowercase, so it's probably best to keep it as-is. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
1316256386 | https://github.com/simonw/datasette/pull/1893#issuecomment-1316256386 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5OdHqC | bgrins 95570 | 2022-11-16T03:18:06Z | 2022-11-16T03:18:06Z | CONTRIBUTOR | > If you can get a version of this working with table and column autocompletion just using a static JavaScript object in the source code with the right tables and columns, I'm happy to take on the work of turning that static object into something that Datasette includes in the page itself with all of the correct values. This version "sort of" works on the main database page, where the template passes the relevant data https://github.com/bgrins/datasette/commit/8431c98850c7a552dbcde2a4dd0c3dc942a97d25 - it does the following and passes the result into the `schema` object: ``` let TABLES_DATA = []; {% if tables is defined %} TABLES_DATA = {{ tables | tojson(indent=2) }}; {% endif %} // Turn into an object, shaped like https://github.com/codemirror/lang-sql/blob/ebf115fffdbe07f91465ccbd82868c587f8182bc/test/test-complete.ts#L27. const TABLES_SCHEMA = Object.fromEntries( new Map( TABLES_DATA.map((table) => { return [table.name, table.columns]; }) ).entries() ); ``` But there are a number of papercuts with it - it's not escaping table names with spaces (likely fixable from the data being passed into the view), but mainly it doesn't seem to autocomplete columns. From my read of https://github.com/codemirror/lang-sql/blob/ebf115fffdbe07f91465ccbd82868c587f8182bc/test/test-complete.ts#L37, I think it might only offer them once you've typed the table name. It's possible I'm just passing something wrong, but it may end up being something that needs feature work upstream. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
1316318961 | https://github.com/simonw/datasette/pull/1893#issuecomment-1316318961 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5OdW7x | bgrins 95570 | 2022-11-16T04:27:51Z | 2022-11-16T04:27:51Z | CONTRIBUTOR | > The resize handle doesn't appear on Mobile Safari on iPhone - I don't think that particularly matters though. > > The textarea does get a weird border around it when focused on iPhone though. The default focus styles appear to be ``` .c1.cm-editor.cm-focused { outline: 1px dotted #212121; } ``` Which I also see on desktop. Would be nice to change it to whatever the default UA textarea styles are so it blends in better, but I wouldn't recommend removing it entirely - just keep the visual indication that the element is focused. Maybe follow-up material for a theming pass. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
1316320521 | https://github.com/simonw/datasette/pull/1893#issuecomment-1316320521 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5OdXUJ | bgrins 95570 | 2022-11-16T04:29:23Z | 2022-11-16T04:29:23Z | CONTRIBUTOR | <img width="276" alt="Screenshot 2022-11-15 at 8 27 17 PM" src="https://user-images.githubusercontent.com/95570/202083682-dab271f7-cb7b-44dd-8266-70b1eba265ee.png"> UI issue I see on the autocomplete popup with overlapping icon & text. Screenshot's from Firefox; it seems even a little more pronounced on Safari. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
1316339035 | https://github.com/simonw/datasette/pull/1893#issuecomment-1316339035 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5Odb1b | bgrins 95570 | 2022-11-16T04:47:11Z | 2022-11-16T04:47:11Z | CONTRIBUTOR | > Have you ever seen CodeMirror correctly auto-completing columns? I'm not entirely sure I believe that the feature works anywhere else. I was thinking of the BigQuery console, like <img width="516" alt="Screenshot 2022-11-15 at 8 31 10 PM" src="https://user-images.githubusercontent.com/95570/202084210-4d23d916-6862-4fd2-8f41-087d6355921a.png"> But they must be doing something pretty custom, & it appears to be using Monaco anyway. I suspect some kind of lower-level autocomplete integration could make this work, but if the table completion is a good-enough starting point, I think it's not too hard. The main issue is that we don't pass the relevant table data down to QueryView. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
1316387382 | https://github.com/simonw/datasette/pull/1893#issuecomment-1316387382 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5Odno2 | bgrins 95570 | 2022-11-16T05:33:55Z | 2022-11-16T05:33:55Z | CONTRIBUTOR | I added a commit to make our own dialect at https://github.com/simonw/datasette/pull/1893/commits/e273fc8ed5341bdf0b622e722d761bd2acc30a90. Pulled in the full list of keywords from https://www.sqlite.org/lang_keywords.html but haven't gone through and pruned it to only include common select keywords. @simonw you'll have better knowledge than me on that - do you want to take a first shot at narrowing that down to the set that people will be using in the editor? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
1317281292 | https://github.com/simonw/datasette/pull/1893#issuecomment-1317281292 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5OhB4M | bgrins 95570 | 2022-11-16T16:19:16Z | 2022-11-16T16:19:16Z | CONTRIBUTOR | Ha, nice idea! Updating the dialect with that list. I'm thinking of also adding `count` to the list since that's a common thing people would want to autocomplete. I notice BQ console highlights `count` in the same manner as other keywords like `select` as well. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
1317314064 | https://github.com/simonw/datasette/pull/1893#issuecomment-1317314064 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5OhJ4Q | bgrins 95570 | 2022-11-16T16:36:46Z | 2022-11-16T16:36:46Z | CONTRIBUTOR | With ```patch diff --git a/datasette/templates/_codemirror_foot.html b/datasette/templates/_codemirror_foot.html index ed709b3..74fe18e 100644 --- a/datasette/templates/_codemirror_foot.html +++ b/datasette/templates/_codemirror_foot.html @@ -7,7 +7,11 @@ sqlFormat.hidden = false; } if (sqlInput) { - var editor = (window.editor = cm.editorFromTextArea(sqlInput)); + var editor = (window.editor = cm.editorFromTextArea(sqlInput, { + schema: { + compound_three_primary_keys: ["pk1", "pk2", "pk3", "content"], + }, + })); ``` we get table autocompletion and column completion if you name the table in the query (see screencast). I do see bugs with escaped table names like `"'123_starts_with_digits'": ["col1", "col2"]` or `"[123_starts_with_digits]": ["col1", "col2"]` where it doesn't seem to pick up the column names though. I think it needs some further testing and debugging. https://user-images.githubusercontent.com/95570/202238521-e613b4e2-ba92-4418-9068-fc022edaee93.mp4 | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
1317326406 | https://github.com/simonw/datasette/pull/1893#issuecomment-1317326406 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5OhM5G | bgrins 95570 | 2022-11-16T16:45:09Z | 2022-11-16T16:45:09Z | CONTRIBUTOR | For escaped table names it looks like we could pass a Completion object (https://codemirror.net/docs/ref/#autocomplete) instead of a string which would allow the non escaped name to be a label and then the escaped name to actually complete in the editor, which might help with some of the funkiness I was seeing w/ completion | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
1317329157 | https://github.com/simonw/datasette/pull/1893#issuecomment-1317329157 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5OhNkF | bgrins 95570 | 2022-11-16T16:46:52Z | 2022-11-16T16:46:52Z | CONTRIBUTOR | > <img alt="Screenshot 2022-11-15 at 8 27 17 PM" width="276" src="https://user-images.githubusercontent.com/95570/202083682-dab271f7-cb7b-44dd-8266-70b1eba265ee.png"> > > UI issue I see on the autocomplete popup with overlapping icon & text. Screenshot's from Firefox, it seems even a little more pronounced on Safari I checked and if I empty out app.css the bug goes away, so there's some kind of inheritance issue there. It's hard to debug because the autocomplete popup goes away on blur (i.e. when trying to inspect it in devtools), but at least it's narrowed down a bit. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
1317520304 | https://github.com/simonw/datasette/pull/1893#issuecomment-1317520304 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5Oh8Ow | bgrins 95570 | 2022-11-16T18:58:43Z | 2022-11-16T18:58:43Z | CONTRIBUTOR | Nice. And is it possible to include another field which is an escaped table name (only when necessary) - i.e. `[123_starts_with_digits]`. Or is that easy enough to derive on the client? I'm thinking we'd map those to Completion objects so that CM would show the non escaped text but complete to escaped. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
1317522323 | https://github.com/simonw/datasette/pull/1893#issuecomment-1317522323 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5Oh8uT | bgrins 95570 | 2022-11-16T18:59:49Z | 2022-11-16T18:59:49Z | CONTRIBUTOR | Or I guess you could return only the escaped table name and then we could derive the unescaped from the client side (removing the outer `[]` when present) | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
1317681193 | https://github.com/simonw/datasette/pull/1893#issuecomment-1317681193 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5Oijgp | bgrins 95570 | 2022-11-16T21:19:13Z | 2022-11-16T21:19:13Z | CONTRIBUTOR | Alright, added Cmd+Enter to submit (Ctrl+Enter on Windows as well, because of using Meta-Enter in CodeMirror). We can make that macOS-only by changing the combo to Cmd+Enter specifically, but I think it's probably fine to have both. | {"total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
1317715580 | https://github.com/simonw/datasette/pull/1893#issuecomment-1317715580 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5Oir58 | bgrins 95570 | 2022-11-16T21:49:51Z | 2022-11-16T21:49:51Z | CONTRIBUTOR | I think the table completion still has some quirks to work out. Something like ``` schema: { "[123_starts_with_digits]": ["content"], } ``` seems to work alright, although it will append it after any other numbers you've started typing - so you end up with `select * from 12[123_starts_with_digits]` if you typed "12" to get the completion to appear. This might just be an issue with numeric names; I haven't tested it in a lot of detail. You can do ``` searchable: [ { label: "name with . and spaces", apply: "[name with . and spaces]", }, "pk", "text1", "text2", ], ``` which is pretty neat and will show the non-escaped string but complete to the escaped one. You can't easily do that with the table names themselves (you can pass a `tables` array like so https://github.com/codemirror/lang-sql/blob/ebf115fffdbe07f91465ccbd82868c587f8182bc/src/sql.ts#L121 but it will overwrite the columns from the schema). It's buggy enough (bad output for these unusual table names) that I'd suggest that work gets moved into a follow-up to the upgrade to 6. That would give space to sort out how to deliver that to the view directly, figure out where name escaping should happen, and have overall testing to uncover bugs and fix papercuts before enabling it. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
1317789308 | https://github.com/simonw/datasette/pull/1893#issuecomment-1317789308 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5Oi958 | bgrins 95570 | 2022-11-16T22:59:57Z | 2022-11-16T22:59:57Z | CONTRIBUTOR | I can push up a commit that uses the static fixtures schema for testing, but given that the query used to generate it is authed, we would still need some work to make that work on live data, right? Ideally it could come down to the db and query views directly, to avoid waiting on an extra xhr and managing that state change. (On Nov 16, 2022, Simon Willison wrote: "Honestly I'm not too bothered if table names with weird characters don't work correctly here - I care about those in the Datasette fixtures.db database because Datasette aims to support ANY valid SQLite database, so I need stuff in the test suite that includes weird edge cases like this. But I would hope very few people actually create tables with spaces in their names, so it's not a huge concern to me if autocompletion doesn't work properly for those.") | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
1317805482 | https://github.com/simonw/datasette/pull/1893#issuecomment-1317805482 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5OjB2q | bgrins 95570 | 2022-11-16T23:18:17Z | 2022-11-16T23:18:17Z | CONTRIBUTOR | Alright with https://github.com/simonw/datasette/pull/1893/commits/f254be4b38936e95e7a7f25866e7c6b0520db96f we should be getting autocomplete on fixture data. Give that a test and see what you think | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
1317834838 | https://github.com/simonw/datasette/pull/1893#issuecomment-1317834838 | https://api.github.com/repos/simonw/datasette/issues/1893 | IC_kwDOBm6k_c5OjJBW | bgrins 95570 | 2022-11-16T23:50:58Z | 2022-11-16T23:50:58Z | CONTRIBUTOR | Should we empty out the fixture schema to avoid fixture autocomplete showing up on live databases in the interim, or are you planning to tackle #1897 shortly? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Upgrade to CodeMirror 6, add SQL autocomplete 1450363982 | |
1317873458 | https://github.com/simonw/datasette/issues/1899#issuecomment-1317873458 | https://api.github.com/repos/simonw/datasette/issues/1899 | IC_kwDOBm6k_c5OjScy | bgrins 95570 | 2022-11-17T00:31:07Z | 2022-11-17T00:31:07Z | CONTRIBUTOR | This is one way to fix it: ```patch diff --git a/datasette/static/cm-editor-6.0.1.js b/datasette/static/cm-editor-6.0.1.js index c1fd2ab..68cf398 100644 --- a/datasette/static/cm-editor-6.0.1.js +++ b/datasette/static/cm-editor-6.0.1.js @@ -22,7 +22,14 @@ export function editorFromTextArea(textarea, conf = {}) { // https://github.com/codemirror/lang-sql#user-content-sqlconfig.tables let view = new EditorView({ doc: textarea.value, + extensions: [ + EditorView.theme({ + ".cm-content": { + // Height on cm-content ensures the editor is focusable by clicking beyond the height of the text + minHeight: "70px", + }, + }), keymap.of([ { key: "Shift-Enter", diff --git a/datasette/templates/_codemirror.html b/datasette/templates/_codemirror.html index dea4710..c4629ae 100644 --- a/datasette/templates/_codemirror.html +++ b/datasette/templates/_codemirror.html @@ -4,7 +4,6 @@ .cm-editor { resize: both; overflow: hidden; - min-height: 70px; width: 80%; border: 1px solid #ddd; } ``` I don't love it but it seems to work for the default case. You can still retrigger the bug by resizing the editor to be > 70px high. The other approach would be to listen for a click on that empty region and move focus to the editor, or something. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Clicking within the CodeMirror area below the SQL (i.e. when there's only a single line) doesn't cause the editor to get focused 1452495049 | |
1318897922 | https://github.com/simonw/datasette/issues/1899#issuecomment-1318897922 | https://api.github.com/repos/simonw/datasette/issues/1899 | IC_kwDOBm6k_c5OnMkC | bgrins 95570 | 2022-11-17T16:32:42Z | 2022-11-17T16:32:42Z | CONTRIBUTOR | Another idea would be to just not set a min-height and allow the one-line input to be one line high. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Clicking within the CodeMirror area below the SQL (i.e. when there's only a single line) doesn't cause the editor to get focused 1452495049 | |
1319533445 | https://github.com/simonw/datasette/issues/1897#issuecomment-1319533445 | https://api.github.com/repos/simonw/datasette/issues/1897 | IC_kwDOBm6k_c5OpnuF | bgrins 95570 | 2022-11-18T04:38:03Z | 2022-11-18T04:38:03Z | CONTRIBUTOR | Are you tracking the change to send the JSON over to the frontend separately or was that part of this? Something like this is probably pretty close https://github.com/bgrins/datasette/commit/8431c98850c7a552dbcde2a4dd0c3dc942a97d25#diff-0c93232bfd5477eeac96382e52769108b41433d960d5277ffcccf2f464e60abdR9 | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Serve schema JSON to the SQL editor to enable autocomplete 1452457263 | |
682182178 | https://github.com/simonw/sqlite-utils/issues/139#issuecomment-682182178 | https://api.github.com/repos/simonw/sqlite-utils/issues/139 | MDEyOklzc3VlQ29tbWVudDY4MjE4MjE3OA== | simonwiles 96218 | 2020-08-27T20:46:18Z | 2020-08-27T20:46:18Z | CONTRIBUTOR | > I tried changing the batch_size argument to the total number of records, but it seems only to effect the number of rows that are committed at a time, and has no influence on this problem. So the reason for this is that the `batch_size` for import is limited (of necessity) here: https://github.com/simonw/sqlite-utils/blob/main/sqlite_utils/db.py#L1048 With regard to the issue of ignoring columns, however, I made a fork and hacked a temporary fix that looks like this: https://github.com/simonwiles/sqlite-utils/commit/3901f43c6a712a1a3efc340b5b8d8fd0cbe8ee63 It doesn't seem to affect performance enormously (but I've not tested it thoroughly), and it now does what I need (and would expect, tbh), but it now fails the test here: https://github.com/simonw/sqlite-utils/blob/main/tests/test_create.py#L710-L716 The existence of this test suggests that `insert_all()` is behaving as intended, of course. It seems odd to me that this would be a desirable default behaviour (let alone the only behaviour), and it's not very prominently flagged up, either. @simonw is this something you'd be willing to look at a PR for? I assume you wouldn't want to change the default behaviour at this point, but perhaps an option could be provided, or at least a bit more of a warning in the docs. Are there oversights in the implementation that I've made? Would be grateful for your thoughts! Thanks! | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | insert_all(..., alter=True) should work for new columns introduced after the first 100 records 686978131 | |
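A minimal reproduction of the behaviour being discussed, using the public `insert_all()` API (the table name is invented); with `alter=True` honoured for batches after the first, the late-appearing column should survive.

```python
import sqlite_utils

db = sqlite_utils.Database(":memory:")

rows = [{"id": i, "name": f"row {i}"} for i in range(150)]
# This column first appears after the initial batch of 100 records
rows[120]["extra"] = "only in the second batch"

# Without the fix, "extra" is silently dropped because the columns are
# decided from the first batch; with alter=True applied per-batch it is added.
db["demo"].insert_all(rows, pk="id", alter=True)
print(db["demo"].columns_dict)  # should include "extra"
```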
682815377 | https://github.com/simonw/sqlite-utils/issues/139#issuecomment-682815377 | https://api.github.com/repos/simonw/sqlite-utils/issues/139 | MDEyOklzc3VlQ29tbWVudDY4MjgxNTM3Nw== | simonwiles 96218 | 2020-08-28T16:14:58Z | 2020-08-28T16:14:58Z | CONTRIBUTOR | Thanks! And yeah, I had updating the docs on my list too :) Will try to get to it this afternoon (budgeting time is fraught with uncertainty at the moment!). | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | insert_all(..., alter=True) should work for new columns introduced after the first 100 records 686978131 | |
683382252 | https://github.com/simonw/sqlite-utils/issues/145#issuecomment-683382252 | https://api.github.com/repos/simonw/sqlite-utils/issues/145 | MDEyOklzc3VlQ29tbWVudDY4MzM4MjI1Mg== | simonwiles 96218 | 2020-08-30T06:27:25Z | 2020-08-30T06:27:52Z | CONTRIBUTOR | Note: had to adjust the test above because trying to exhaust a `SQLITE_MAX_VARIABLE_NUMBER` of 250000 in 99 records requires 2526 columns, and trips the ` "Rows can have a maximum of {} columns".format(SQLITE_MAX_VARS)` check even before it trips the default `SQLITE_MAX_COLUMN` value (2000). | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Bug when first record contains fewer columns than subsequent records 688659182 | |
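The arithmetic behind that adjustment, written out as a quick check (just restating the numbers in the comment above):

```python
import math

SQLITE_MAX_VARS = 250_000   # the variable limit being exhausted in the test
SQLITE_MAX_COLUMN = 2_000   # SQLite's default per-table column limit

records = 99
# Smallest column count at which 99 records exceed the variable limit
columns_needed = math.ceil((SQLITE_MAX_VARS + 1) / records)
print(columns_needed)                      # 2526
print(columns_needed > SQLITE_MAX_COLUMN)  # True: the column check trips first
```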
688479163 | https://github.com/simonw/sqlite-utils/pull/146#issuecomment-688479163 | https://api.github.com/repos/simonw/sqlite-utils/issues/146 | MDEyOklzc3VlQ29tbWVudDY4ODQ3OTE2Mw== | simonwiles 96218 | 2020-09-07T19:10:33Z | 2020-09-07T19:11:57Z | CONTRIBUTOR | @simonw -- I've gone ahead updated the documentation to reflect the changes introduced in this PR. IMO it's ready to merge now. In writing the documentation changes, I begin to wonder about the value and role of `batch_size` at all, tbh. May I assume it was originally intended to prevent using the entire row set to determine columns and column types, and that this was a performance consideration? If so, this PR entirely undermines its purpose. I've been passing in excess of 500,000 rows at a time to `insert_all()` with these changes and although I'm sure the performance difference is measurable it's not really noticeable; given #145, I don't know that any performance advantages outweigh the problems doing it this way removes. What do you think about just dropping the argument and defaulting to the maximum `batch_size` permissible given `SQLITE_MAX_VARS`? Are there other reasons one might want to restrict `batch_size` that I've overlooked? I could open a new issue to discuss/implement this. Of course the documentation will need to change again too if/when something is done about #147. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Handle case where subsequent records (after first batch) include extra columns 688668680 | |
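For reference, the calculation such a default would imply; a sketch assuming the 999 host-parameter limit that sqlite-utils uses as `SQLITE_MAX_VARS`:

```python
SQLITE_MAX_VARS = 999  # SQLite's historical default limit on host parameters

def max_batch_size(num_columns: int) -> int:
    """Largest batch that stays under the variable limit for this row width."""
    return max(1, SQLITE_MAX_VARS // num_columns)

print(max_batch_size(10))   # 99 rows per INSERT
print(max_batch_size(250))  # 3 rows per INSERT
```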
688481317 | https://github.com/simonw/sqlite-utils/pull/146#issuecomment-688481317 | https://api.github.com/repos/simonw/sqlite-utils/issues/146 | MDEyOklzc3VlQ29tbWVudDY4ODQ4MTMxNw== | simonwiles 96218 | 2020-09-07T19:18:55Z | 2020-09-07T19:18:55Z | CONTRIBUTOR | Just force-pushed to update d042f9c with more formatting changes to satisfy `black==20.8b1` and pass the GitHub Actions "Test" workflow. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Handle case where subsequent records (after first batch) include extra columns 688668680 | |
688573964 | https://github.com/simonw/sqlite-utils/pull/146#issuecomment-688573964 | https://api.github.com/repos/simonw/sqlite-utils/issues/146 | MDEyOklzc3VlQ29tbWVudDY4ODU3Mzk2NA== | simonwiles 96218 | 2020-09-08T01:55:07Z | 2020-09-08T01:55:07Z | CONTRIBUTOR | Okay, I've rewritten this PR to preserve the batching behaviour but still fix #145, and rebased the branch to account for the `db.execute()` api change. It's not terribly sophisticated -- if it attempts to insert a batch which has too many variables, the exception is caught, the batch is split in two and each half is inserted separately, and then it carries on as before with the same `batch_size`. In the edge case where this gets triggered, subsequent batches will all be inserted in two groups too if they continue to have the same number of columns (which is presumably reasonably likely). Do you reckon this is acceptable when set against the awkwardness of recalculating the `batch_size` on the fly? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Handle case where subsequent records (after first batch) include extra columns 688668680 | |
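A stripped-down sketch of that fallback (not the actual PR code; `execute_insert` is a stand-in for whatever builds and runs the INSERT for a batch): attempt the batch, and on SQLite's "too many SQL variables" error, split it in half and insert each half separately.

```python
import sqlite3

def insert_with_fallback(execute_insert, batch):
    # execute_insert(batch) is a hypothetical helper that runs one INSERT
    try:
        execute_insert(batch)
    except sqlite3.OperationalError as e:
        if "too many SQL variables" in str(e) and len(batch) > 1:
            mid = len(batch) // 2
            insert_with_fallback(execute_insert, batch[:mid])
            insert_with_fallback(execute_insert, batch[mid:])
        else:
            raise
```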
938171377 | https://github.com/simonw/datasette/issues/1480#issuecomment-938171377 | https://api.github.com/repos/simonw/datasette/issues/1480 | IC_kwDOBm6k_c4361vx | ghing 110420 | 2021-10-07T21:33:12Z | 2021-10-07T21:33:12Z | CONTRIBUTOR | Thanks for the reply @simonw. What services have you had better success with than Cloud Run for larger databases? Also, what about my issue description makes you think there may be a workaround? Is there any instrumentation I could add to see at which point in the deploy the memory usage spikes? Should I be able to see this whether it's running under Docker locally, or do you suspect this is Cloud Run-specific? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Exceeding Cloud Run memory limits when deploying a 4.8G database 1015646369
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);