issues

369 rows where author_association = "NONE" and type = "issue"
id ▼ node_id number title user state locked assignee milestone comments created_at updated_at closed_at author_association pull_request body repo type active_lock_reason performed_via_github_app reactions draft state_reason
273944952 MDU6SXNzdWUyNzM5NDQ5NTI= 93 Package as standalone binary atomotic 67420 closed 0     18 2017-11-14T21:14:07Z 2021-11-21T07:00:23Z 2021-11-21T07:00:23Z NONE   Hint: more than the Docker image, a standalone, multi-platform binary (containing the app and the database) could be simpler to distribute. I would like to investigate the possibility of packaging everything with [pyinstaller](http://www.pyinstaller.org/), adding the database as a [data file](https://pythonhosted.org/PyInstaller/spec-files.html#adding-data-files) datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/93/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
274160723 MDU6SXNzdWUyNzQxNjA3MjM= 100 TemplateAssertionError: no filter named 'tojson' coisnepe 13304454 closed 0     2 2017-11-15T13:43:41Z 2017-11-16T09:25:10Z 2017-11-16T00:14:13Z NONE   A 500 error is raised upon clicking on the name of a table on the homepage, say _http://0.0.0.0:8001/_ to _http://0.0.0.0:8001/test_check-c1f4771/users_ The API part seems to function as intended, though... ``` 2017-11-15 14:33:57 - (sanic)[ERROR]: Traceback (most recent call last): File "/usr/local/lib/python3.5/dist-packages/sanic/app.py", line 503, in handle_request response = await response File "/usr/local/lib/python3.5/dist-packages/datasette/app.py", line 155, in get return await self.view_get(request, name, hash, **kwargs) File "/usr/local/lib/python3.5/dist-packages/datasette/app.py", line 219, in view_get **context, File "/usr/local/lib/python3.5/dist-packages/sanic_jinja2/__init__.py", line 84, in render return html(self.render_string(template, request, **context)) File "/usr/local/lib/python3.5/dist-packages/sanic_jinja2/__init__.py", line 81, in render_string return self.env.get_template(template).render(**context) File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 812, in get_template return self._load_template(name, self.make_globals(globals)) File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 786, in _load_template template = self.loader.load(self, name, globals) File "/usr/lib/python3/dist-packages/jinja2/loaders.py", line 125, in load code = environment.compile(source, name, filename) File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 565, in compile self.handle_exception(exc_info, source_hint=source_hint) File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 754, in handle_exception reraise(exc_type, exc_value, tb) File "/usr/lib/python3/dist-packages/jinja2/_compat.py", line 37, in reraise raise value.with_traceback(tb) File "/usr/local/lib/python3.5/dist-packages/datasette/templates/table.html", line 29, in template <pre>params = {{ query.params|tojson(4) }}</pre> File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 515, i… datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/100/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
274161964 MDU6SXNzdWUyNzQxNjE5NjQ= 101 TemplateAssertionError: no filter named 'tojson' eaubin 450244 closed 0     1 2017-11-15T13:47:32Z 2017-11-15T13:48:55Z 2017-11-15T13:48:55Z NONE   I get an exception clicking on the table link: ``` 2017-11-15 08:40:10 - (sanic)[ERROR]: Traceback (most recent call last): File "/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/sanic/app.py", line 503, in handle_request response = await response File "/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/datasette/app.py", line 155, in get return await self.view_get(request, name, hash, **kwargs) File "/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/datasette/app.py", line 219, in view_get **context, File "/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/sanic_jinja2/__init__.py", line 84, in render return html(self.render_string(template, request, **context)) File "/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/sanic_jinja2/__init__.py", line 81, in render_string return self.env.get_template(template).render(**context) File "/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/environment.py", line 812, in get_template return self._load_template(name, self.make_globals(globals)) File "/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/environment.py", line 786, in _load_template template = self.loader.load(self, name, globals) File "/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/loaders.py", line 125, in load code = environment.compile(source, name, filename) File "/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/environment.py", line 565, in compile self.handle_exception(exc_info, source_hint=source_hint) File "/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/environment.py", line 754, in handle_exception reraise(exc_type, exc_value, tb) File "/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/jinja2/_compat.py", line 37, in reraise raise value.with_traceback(tb) File "/Users/e/anaconda3-4.2.0/lib/python3.5/site-packages/datasette/templates/table.html", line 29, in template <pre>params = {{ query.params|tojson(4) }}</pre> File "/Users/e/… datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/101/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
276091279 MDU6SXNzdWUyNzYwOTEyNzk= 144 apsw as alternative sqlite3 binding (for full text search) mhalle 649467 closed 0     3 2017-11-22T14:40:39Z 2018-05-28T21:29:42Z 2018-05-28T21:29:42Z NONE   Hey there, Have you considered providing apsw support as an alternative to stock python sqlite3? I use apsw because it keeps up with sqlite3 and is straightforward to bring in extensions like FTS5. FTS really accelerates the kind of searching often done by web clients. I may be able to help (it shouldn't be much code), but there are a couple of stylistic questions that come up when supporting an optional package. Also, apsw is tricky in that it doesn't have a pypi package (author says limitations in providing options to setup.py). datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/144/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
276842536 MDU6SXNzdWUyNzY4NDI1MzY= 153 Ability to customize presentation of specific columns in HTML view ftrain 20264 closed 0   Custom templates edition 2949431 14 2017-11-26T17:46:11Z 2017-12-10T02:08:45Z 2017-12-07T06:17:33Z NONE   This ties into https://github.com/simonw/datasette/issues/3 in some ways. It would be great to have some adaptability in the HTML views and to specific some columns as displaying in certain ways. - [x] 1. **Auto-parsing URIs into in-browser links.** Why? Lots of public data around cultural commons stuff links to a specific URL. This would be a great utility to turn on at the command line, just parse everything for URLs. Maybe they need to be underlined or represented in a different way than internal URLs. - [x] 2. **Ability to identify a column as plain/preformatted text.** Why? Was trying to import the Enron emails, the body collapses. Hard to read. These fields also tend to screw up the ability to scan a table view. If you knew it was text the system could set an `overflow` property on the relevant CSS, so you could still scan. - [x] 3. **Ability to identify a column as HTML.** Why? I want to spider some stuff and drop sections into SQLite, and just keep them as HTML. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/153/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
277589569 MDU6SXNzdWUyNzc1ODk1Njk= 155 A primary key column that has a foreign key restriction associated won't render the label column wsxiaoys 388154 closed 0   Custom templates edition 2949431 4 2017-11-29T00:40:02Z 2017-12-07T05:39:53Z 2017-12-07T05:39:53Z NONE     datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/155/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
278814220 MDU6SXNzdWUyNzg4MTQyMjA= 161 Support WITH query wsxiaoys 388154 closed 0     4 2017-12-03T20:00:40Z 2017-12-08T06:18:12Z 2017-12-04T04:52:41Z NONE   Currently datasette fails with the error message: `Statement must begin with SELECT`. Example query ```sql WITH RECURSIVE cnt(x) AS ( SELECT 1 UNION ALL SELECT x+1 FROM cnt LIMIT 1000000 ) SELECT x FROM cnt; ``` datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/161/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
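The guard behind the "Statement must begin with SELECT" error only needs to also admit CTEs. A minimal sketch of such a check (the helper name is made up here; Datasette's actual validation logic may differ):

```python
import re

def is_read_only_query(sql):
    # Accept both plain SELECTs and WITH ... SELECT (common table
    # expressions), case-insensitively and ignoring leading whitespace.
    return re.match(r"\s*(SELECT|WITH)\b", sql, re.IGNORECASE) is not None

assert is_read_only_query("WITH RECURSIVE cnt(x) AS (SELECT 1) SELECT x FROM cnt")
```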
281110295 MDU6SXNzdWUyODExMTAyOTU= 173 I18n and L10n support janimo 50138 open 0     2 2017-12-11T17:49:58Z 2021-04-26T12:10:01Z   NONE   It would be less geeky and more user friendly if the display strings in the filter menu and possibly other parts could be localized. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/173/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}    
282971961 MDU6SXNzdWUyODI5NzE5NjE= 175 Add project topic "automatic-api" dbohdan 3179832 closed 0     1 2017-12-18T18:09:17Z 2017-12-21T18:33:55Z 2017-12-21T18:33:55Z NONE   Hi there! Could you add the ~~tag~~ topic `automatic-api` to your repository? I am [making a list](https://github.com/dbohdan/automatic-api) of all projects that automatically expose APIs to databases. (Your Show HN made me do it. :-) I knew about PostgREST and PostGraphQL, but it took adding Datasette to sell me on the concept.) They will be easier to discover if there is a standard GitHub tag, and `automatic-api` seems as good a candidate as any. Two projects [already use it](https://github.com/topics/automatic-api). datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/175/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
285168503 MDU6SXNzdWUyODUxNjg1MDM= 176 Add GraphQL endpoint yozlet 173848 open 0     8 2017-12-29T23:21:01Z 2020-04-21T14:16:24Z   NONE   Would make it much easier to build React & similar frontends. Maybe with https://github.com/graphql-python/sanic-graphql ? datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/176/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}    
292011379 MDU6SXNzdWUyOTIwMTEzNzk= 184 500 from missing table name carlmjohnson 222245 closed 0     4 2018-01-26T19:46:45Z 2019-05-21T16:17:29Z 2018-04-13T18:18:59Z NONE   https://github.com/simonw/datasette/blob/56623e48da5412b25fb39cc26b9c743b684dd968/datasette/app.py#L517-L519 throws an error if it gets an empty list back. Simplest solution is to write a helper func that just says ```python result = list(await self.execute(name, sql, params)) if result: return result[0][0] ``` and use it anywhere `[0][0]` is now. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/184/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
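A completed, runnable version of the helper proposed above (`first_value` and the fake executor are illustrative names, not Datasette API):

```python
import asyncio

async def fake_execute():
    return []  # simulate a query that returned no rows

async def first_value(execute_coro):
    # The [0][0] pattern, made safe: first column of the first row,
    # or None when the query produced no rows (instead of a 500).
    result = list(await execute_coro)
    return result[0][0] if result else None

print(asyncio.run(first_value(fake_execute())))  # -> None
```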
299760684 MDU6SXNzdWUyOTk3NjA2ODQ= 185 Metadata should be a nested arbitrary KV store carlmjohnson 222245 open 0     12 2018-02-23T16:02:07Z 2019-05-13T18:33:33Z   NONE   I started using the metadata feature and was surprised to find that values are not inherited from the root object down to specific databases and tables. This makes metadata much less useful and requires a lot of pointless duplication. Ideally, metadata should allow arbitrary key-value pairs, and there should be a way of accessing metadata either in an inherited or non-inherited manner. Something like `metadata.page.key` vs. `metadata.this.key` might work as an interface. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/185/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}    
306811513 MDU6SXNzdWUzMDY4MTE1MTM= 186 proposal new option to disable user agents cache stefanocudini 47107 closed 0     3 2018-03-20T10:42:20Z 2018-03-21T09:07:22Z 2018-03-21T01:28:31Z NONE   I think it would be very useful for debugging to have an option that adds headers to HTTP replies ``` Cache-Control: no-cache ``` especially in the HTML output datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/186/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
309033998 MDU6SXNzdWUzMDkwMzM5OTg= 187 Windows installation error robmarkcole 11855322 closed 0     7 2018-03-27T16:04:37Z 2019-06-15T21:44:23Z 2019-06-15T21:44:23Z NONE   On attempting install on a Win 7 PC with py 3.6.2 (Anaconda dist) I get the error: ``` Collecting uvloop>=0.5.3 (from Sanic==0.7.0->datasette) Downloading uvloop-0.9.1.tar.gz (1.8MB) 100% |████████████████████████████████| 1.8MB 12.8MB/s Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\RCole\AppData\Local\Temp\pip-build-juakfqt8\uvloop\setup.py", line 10, in <module> raise RuntimeError('uvloop does not support Windows at the moment') RuntimeError: uvloop does not support Windows at the moment ``` datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/187/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
314665147 MDU6SXNzdWUzMTQ2NjUxNDc= 216 Bug: Sort by column with NULL in next_page URL carlmjohnson 222245 closed 0     15 2018-04-16T14:03:18Z 2018-04-17T01:45:24Z 2018-04-17T01:45:24Z NONE   Copy-pasting from https://github.com/simonw/datasette/issues/189#issuecomment-381429213, since that issue is closed: I think I found a bug. I tried to sort by middle initial in my salaries set, and many middle initials are null. The `next_url` gets set by Datasette to: http://localhost:8001/salaries-d3a5631/2017+Maryland+state+salaries?_next=None%2C391&_sort=middle_initial But then None is interpreted literally and it tries to find a name with the middle initial "None" and ends up skipping ahead to O on page 2. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/216/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
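The underlying ambiguity in the report above is that a SQL NULL and the literal string "None" serialize identically into the `_next` cursor. One way to make cursor values round-trip safely is a sentinel that escaped real values can never collide with (the `$null` token below is illustrative):

```python
def encode_cursor_value(value):
    # Tag SQL NULL with a sentinel; escape "$" in real values so that
    # strings like "None" or "$null" can never be mistaken for NULL.
    if value is None:
        return "$null"
    return str(value).replace("$", "$$")

def decode_cursor_value(token):
    return None if token == "$null" else token.replace("$$", "$")

assert decode_cursor_value(encode_cursor_value(None)) is None
assert decode_cursor_value(encode_cursor_value("None")) == "None"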
319449852 MDU6SXNzdWUzMTk0NDk4NTI= 247 SQLite code decoupled from Datasette jsancho-gpl 11912854 open 0     1 2018-05-02T08:03:28Z 2018-05-21T15:29:31Z   NONE   I'm working on the possibility of using Datasette with other file formats that aren't SQLite, like files with [PyTables](https://github.com/PyTables/PyTables) format. In order to accomplish that, I've started [a fork for decoupling the code related to SQLite](https://github.com/jsancho-gpl/datasette/tree/feature/db-type-plugin) and putting it in an external connector to allow future connectors for a lot of file formats. It'd be nice if you could look at it and suggest improvements for a possible PR. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/247/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
322283067 MDU6SXNzdWUzMjIyODMwNjc= 254 Escaping named parameters in canned queries philroche 247131 closed 0     4 2018-05-11T12:43:30Z 2020-05-10T14:54:14Z 2020-05-10T14:54:13Z NONE   Thank you very much for this project. I have created some canned queries but some of the filters include a colon eg. "com.ubuntu.cloud:server:18.04:amd64". When saved these colons are parsed as named parameters. Is there a way to escape colons in a canned query? datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/254/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
330826972 MDU6SXNzdWUzMzA4MjY5NzI= 308 Support extra Heroku apps:create options - region, space, team annapowellsmith 78156 open 0     2 2018-06-08T23:08:33Z 2018-09-21T14:09:28Z   NONE   It would be useful to document how to pass Heroku CLI options on `datasette publish`, e.g. `--region eu`. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/308/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}    
333238932 MDU6SXNzdWUzMzMyMzg5MzI= 316 datasette inspect takes a very long time on large dbs gavinband 132230 closed 0     5 2018-06-18T11:56:27Z 2019-05-11T18:26:25Z 2019-05-11T18:26:25Z NONE   Hi, I want to expose data in a very large sqlite database (~600Gb) to the web. I have used datasette with success on smaller test databases with the same schema - it works very well (thanks!). However, using the full db, both `datasette inspect` and `datasette serve` seem to hang or pause for a very long time (tens of minutes) on startup. Is this expected behaviour? (I noticed that the output of `datasette inspect` includes row counts for each table. Simply counting the rows in this db will take a long time (tens of millions of rows across each of ~10 tables), so I wondered if this is the source of the problem.) Any help on a workaround would be appreciated. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/316/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
334190959 MDU6SXNzdWUzMzQxOTA5NTk= 321 Wildcard support in query parameters bsilverm 12617395 closed 0   0.23.1 3439337 8 2018-06-20T18:03:56Z 2018-06-21T17:00:10Z 2018-06-21T04:55:26Z NONE   I haven't found a way to get the wildcard (%) inserted automatically into a query parameter. This would be useful for cases where the query parameter is used in a LIKE clause. Wrapping the parameter name using the wildcard character within the metadata file (ie - ...where xyz like %:querystring%) does not seem to work. Can this be made possible? Or if not, can the template be extended to provide a tip to the user that they need to insert the wildcard characters themselves? datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
339095976 MDU6SXNzdWUzMzkwOTU5NzY= 334 extra_options not passed to heroku publisher kamicut 719357 closed 0     2 2018-07-06T23:26:12Z 2018-07-24T04:53:21Z 2018-07-10T01:46:04Z NONE   I might be wrong but I was not able to publish to `heroku` with `--extra-options`, I think `extra_options` is not being used in this function [here](https://github.com/simonw/datasette/blob/master/datasette/utils.py#L369). Any help appreciated! datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/334/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
340396247 MDU6SXNzdWUzNDAzOTYyNDc= 339 Expose SANIC_RESPONSE_TIMEOUT config option in a sensible way bsilverm 12617395 closed 0     4 2018-07-11T20:38:06Z 2022-03-21T22:22:40Z 2022-03-21T22:22:34Z NONE   Is it possible to configure the sql_time_limit_ms beyond 60 seconds? It seems queries are still timing out at 60 seconds when sql_time_limit_ms is set to 180000. We have a very large data set and often encounter timeouts when testing new queries from the datasette UI. We are optimizing our database as much as we can, but still may require more than 60 seconds for complex queries. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/339/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
341123355 MDU6SXNzdWUzNDExMjMzNTU= 342 Requesting support for query description bsilverm 12617395 closed 0     4 2018-07-13T18:50:16Z 2018-07-24T04:53:21Z 2018-07-16T02:33:54Z NONE   It would be great if the metadata file allowed you to enter a description for the query. We have a lot of pre-defined queries that can only be so descriptive by their name. It would be nice if an optional description could be included underneath the name within the UI, or on hover where it currently shows the SQL. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/342/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
343728754 MDU6SXNzdWUzNDM3Mjg3NTQ= 346 Logo design for DATASETTE ggabogarcia 35750428 closed 0     0 2018-07-23T17:40:17Z 2018-08-02T02:31:59Z 2018-08-02T02:31:59Z NONE   Hello :), I'm a graphic designer interested in collaborating with open source projects; it also helps me expand my portfolio. I would like to design a logo for your project. I would be happy to collaborate with you :). datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/346/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
352768017 MDU6SXNzdWUzNTI3NjgwMTc= 362 Add option to include/exclude columns in search filters annapowellsmith 78156 open 0     1 2018-08-22T01:32:08Z 2020-11-03T19:01:59Z   NONE   I have a dataset with many columns, of which only some are likely to be of interest for searching. It would be great for usability if the search filters in the UI could be configured to include/exclude columns. See also: https://github.com/simonw/datasette/issues/292 datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/362/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}    
392610803 MDU6SXNzdWUzOTI2MTA4MDM= 391 Google Trends example doesn’t work styfle 229881 closed 0     1 2018-12-19T13:51:38Z 2019-01-02T19:45:13Z 2019-01-02T19:45:12Z NONE   https://google-trends.datasettes.com/ I see a Cloudflare error. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/391/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
395236066 MDU6SXNzdWUzOTUyMzYwNjY= 393 CSV export in "Advanced export" pane doesn't respect query ltrgoddard 1727065 closed 0     6 2019-01-02T12:39:41Z 2021-06-17T18:14:24Z 2019-01-03T02:44:10Z NONE   It looks like there's an inconsistency when exporting to CSV via the web interface. Say I'm looking at [songs released in 1989](https://fivethirtyeight.datasettes.com/fivethirtyeight-c300360/classic-rock%2Fclassic-rock-song-list?Release+Year__exact=1989) in the `classic-rock/classic-rock-song-list` table from the FiveThirtyEight data. The JSON and CSV export links at the top of the page both give me filtered data using `Release+Year__exact=1989` in the URL. In the `Advanced export` tab, though, the CSV option gives me the whole data set, while the JSON options preserve the query. It may be that this is intended behaviour related to the streaming CSV stuff [discussed here](https://github.com/simonw/datasette/issues/266), but if that's the case then I think it should be a little clearer. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/393/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
397129564 MDU6SXNzdWUzOTcxMjk1NjQ= 397 Update official datasetteproject/datasette Docker container to SQLite 3.26.0 claes 43564 closed 0     3 2019-01-08T22:51:50Z 2019-01-11T01:25:33Z 2019-01-11T00:56:18Z NONE   I try to start datasette on a database that contains the below view. It fails in a way that makes me think it does not support the window functions SQL syntax. ``` create view general_ledger as select transactions.account_number, strftime("%Y-%m-%d", verifications.verification_date) as verification_date, verifications.verification_number, verifications.verification_text, case when transactions.centi_amount >= 0 and verifications.verification_number > 0 then printf("%.2f", (transactions.centi_amount/100.0)) end as debit, case when transactions.centi_amount <= 0 and verifications.verification_number > 0 then printf("%.2f", (transactions.centi_amount/100.0)) end as credit, printf("%.2f", sum(transactions.centi_amount) over (partition by transactions.account_number order by verifications.verification_number range between unbounded preceding and current row)/100.0) from verifications inner join transactions on transactions.verification_id = verifications.id order by transactions.account_number, verifications.verification_number; ``` ``` docker run -p 8001:8001 -v `pwd`:/mnt datasetteproject/datasette datasette -p 8001 -h 0.0.0.0 /mnt/ledger.db Serve! files=('/mnt/ledger.db',) on port 8001 Traceback (most recent call last): File "/usr/local/bin/datasette", line 11, in <module> sys.exit(cli()) File "/usr/local/lib/python3.6/site-packages/click/core.py", line 722, in __call__ return self.main(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/click/core.py", line 697, in main rv = self.invoke(ctx) File "/usr/local/lib/python3.6/site-packages/click/core.py", line 1066, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/usr/local/lib/python3.6/site-packages/click/core.py", line 895, in invoke return ctx.invoke(self.callback, **ctx.params) File "/usr/local/lib/python3.6/site-packages/click/core.py", line 535, in invoke return callback(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/datase… datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/397/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
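Window functions such as `sum(...) OVER (PARTITION BY ...)` require SQLite 3.25.0 or newer, so the view above fails on a container bundling an older library. A quick check of which SQLite version Python's `sqlite3` module is actually linked against:

```python
import sqlite3

# Version of the SQLite library the sqlite3 module links to;
# window functions need 3.25.0+.
print(sqlite3.sqlite_version)
print(tuple(map(int, sqlite3.sqlite_version.split("."))) >= (3, 25, 0))
```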
400229984 MDU6SXNzdWU0MDAyMjk5ODQ= 401 How to pass configuration to plugins? dazzag24 1055831 closed 0     3 2019-01-17T11:20:41Z 2019-01-18T11:48:13Z 2019-01-18T06:49:07Z NONE   Hi, Firstly, thanks for your work on datasette, it is a hugely useful tool! I've been working on a fork [https://github.com/dazzag24/datasette-cluster-map] of datasette-cluster-map to allow the tileserver to be easily switched, primarily because the tiles being served in the current version use localised text for labels and I'd like to have English used for these names instead. It uses http://leaflet-extras.github.io/leaflet-providers/preview/ to allow you to simply set the tile provider using a call like so: ``` let tiles = L.tileLayer.provider('Esri.WorldTopoMap'); ``` instead of the current: ``` let tiles = L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', { maxZoom: 19, detectRetina: true, attribution: '&copy; <a href="https://www.openstreetmap.org/copyright">OpenStreetMap</a> contributors' }), ``` However I've got stuck trying to work out how to pass the provider string to the plugin. In the documentation: https://datasette.readthedocs.io/en/stable/plugins.html you discuss configuration of plugins and use an example of passing in which latitude and longitude columns should be used. However I cannot seem to see anywhere in the current datasette-cluster-map code where these config params are passed in or used. Can you please point me to an example of how to pass configuration from the metadata.json down into a plugin? Once I've overcome this issue I was wondering if you would be interested in taking this change into your version? Many thanks Darren datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/401/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
400511206 MDU6SXNzdWU0MDA1MTEyMDY= 403 How does persistence work? ccorcos 1794527 closed 0     2 2019-01-17T23:41:57Z 2019-01-19T05:47:55Z 2019-01-18T06:51:14Z NONE   I was under the impression that now.sh is for stateless microservices. So where are these SQLite databases stored and when do they get created and destroyed? datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/403/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
403922644 MDU6SXNzdWU0MDM5MjI2NDQ= 8 Problems handling column names containing spaces or - psychemedia 82988 closed 0     3 2019-01-28T17:23:28Z 2019-04-14T15:29:33Z 2019-02-23T21:09:03Z NONE   Irrespective of whether using column names containing a space or - character is good practice, SQLite does allow it, but `sqlite-utils` throws an error in the following cases: ```python import sqlite3 from sqlite_utils import Database dbname = 'test.db' DB = Database(sqlite3.connect(dbname)) import pandas as pd df = pd.DataFrame({'col1':range(3), 'col2':range(3)}) #Convert pandas dataframe to appropriate list/dict format DB['test1'].insert_all( df.to_dict(orient='records') ) #Works fine ``` However: ```python df = pd.DataFrame({'col 1':range(3), 'col2':range(3)}) DB['test1'].insert_all(df.to_dict(orient='records')) ``` throws: ``` --------------------------------------------------------------------------- OperationalError Traceback (most recent call last) <ipython-input-27-070b758f4f92> in <module>() 1 import pandas as pd 2 df = pd.DataFrame({'col 1':range(3), 'col2':range(3)}) ----> 3 DB['test1'].insert_all(df.to_dict(orient='records')) /usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, upsert, batch_size, column_order) 327 jsonify_if_needed(record.get(key, None)) for key in all_columns 328 ) --> 329 result = self.db.conn.execute(sql, values) 330 self.db.conn.commit() 331 self.last_id = result.lastrowid OperationalError: near "1": syntax error ``` and: ```python df = pd.DataFrame({'col-1':range(3), 'col2':range(3)}) DB['test1'].upsert_all(df.to_dict(orient='records')) ``` results in: ``` --------------------------------------------------------------------------- OperationalError Traceback (most recent call last) <ipython-input-28-654523549d20> in <module>() 1 import pandas as pd 2 df = pd.DataFrame({'col-1':range(3), 'col2':range(3)}) ----> 3 DB['test1'].insert_all(df.to_dict(orient='records')) /usr/local/lib/python3.7/site-packages/sqlite_… sqlite-utils 140912432 issue     {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/8/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
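SQLite itself accepts such column names as long as the generated SQL quotes them; the errors above come from interpolating the names unquoted. A minimal demonstration of bracket-quoting, which also covers reserved words like `order` (the subject of a later report in this list):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Bracket-quoted identifiers let spaces, dashes and keywords through.
conn.execute("CREATE TABLE t ([col 1] INTEGER, [col-1] INTEGER, [order] INTEGER)")
conn.execute("INSERT INTO t ([col 1], [col-1], [order]) VALUES (?, ?, ?)", (1, 2, 3))
print(conn.execute("SELECT [col 1], [col-1], [order] FROM t").fetchall())  # [(1, 2, 3)]
```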
407174173 MDU6SXNzdWU0MDcxNzQxNzM= 408 Show metadata info (e.g. license, source) on custom SQL query pages stefanw 78356 closed 0     0 2019-02-06T10:43:34Z 2019-10-14T03:53:22Z 2019-10-14T03:53:22Z NONE   Currently metadata info is not displayed on custom SQL pages. E.g. compare the footer of [this normal table page](https://register-of-members-interests.datasettes.com/regmem-98dc8b7/categories) with the footer of [this custom SQL page](https://register-of-members-interests.datasettes.com/regmem-98dc8b7?sql=select+*+from+categories). This is important in order to adhere to attribution license requirements. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/408/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
408376825 MDU6SXNzdWU0MDgzNzY4MjU= 409 Zeit API v1 does not work for new users - need to migrate to v2 michaelmcandrew 209967 closed 0     3 2019-02-09T00:50:33Z 2020-04-06T15:44:46Z 2020-04-06T15:44:46Z NONE   Hello there, This looks like a great tool. Thanks. Unfortunately, I hit the following error: ``` michael@hazel ~/src/cc-datasette/data/out datasette publish now cc-datasette.db > WARN! You are using an old version of the Now Platform. More: https://zeit.co/docs/v1-upgrade > Deploying /tmp/tmpjtrxwsyf/datasette under michaelmcandrew > Using project datasette > Error! You tried to create a Now 1.0 deployment. Please use Now 2.0 instead: https://zeit.co/upgrade ``` I'm guessing you might not hit this because you are not a 'new user' of Zeit (https://github.com/zeit/now-cli/issues/1805#issuecomment-452470953). Would it be a lot of work to upgrade to the new Zeit API, do you think? datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/409/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
408518024 MDU6SXNzdWU0MDg1MTgwMjQ= 410 How to set up a multi-database environment? aborruso 30607 closed 0     1 2019-02-10T09:39:24Z 2019-04-12T04:42:28Z 2019-04-12T04:42:27Z NONE   Hi, first of all I need to write that Simon Willison and datasette are really great. I have probably a stupid question, but it seems to me that I do not have the reply in the documentation. I have installed datasette and run it with `datasette mydb.db`, and I can reach it on `http://127.0.0.1:8001`. But how do I work with more than one db? Imagine I have ten sqlite databases, and I need to explore/query these via datasette; how do I run datasette? Is it possible to create a sort of db index and then run `datasette serve myindex`? Thank you datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/410/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
410384988 MDU6SXNzdWU0MTAzODQ5ODg= 411 How to pass named parameter into spatialite MakePoint() function dazzag24 1055831 closed 0     2 2019-02-14T16:30:22Z 2022-01-20T21:29:41Z 2019-05-05T12:25:04Z NONE   Hi, datasette version: "0.26.2" extensions: spatialite: "4.4.0-RC0" sqlite version: "3.22.0" I have a table of airports with latitude and longitude columns. I've added spatialite (with KNN support). After creating the db using csvs-to-sqlite, I run these commands to set up the spatialite tables: ``` conn.execute('SELECT InitSpatialMetadata(1)') conn.execute("SELECT AddGeometryColumn('airports', 'point_geom', 4326, 'POINT', 2);") conn.execute('''UPDATE airports SET point_geom = GeomFromText('POINT('||"longitude"||' '||"latitude"||')',4326);''') conn.execute("SELECT CreateSpatialIndex('airports', 'point_geom');") ``` I'm attempting to create a canned query and have this in my metadata.json file: ``` "find_airports_nearest_to_point":{ "sql":"SELECT a.pos AS rank, b.id, b.name, b.country, b.latitude AS latitude, b.longitude AS longitude, a.distance / 1000.0 AS dist_km FROM KNN AS a JOIN airports AS b ON (b.rowid = a.fid) WHERE f_table_name = \"airports\" AND ref_geometry = MakePoint( :Long , :Lat ) AND max_items = 10;"} ``` which doesn't seem to perform the templating of the named parameters correctly and I get no results. Have also tried: ``` MakePoint( || :Long || , || :Lat || ) ``` which returns this error: ``` near "||": syntax error ``` However I cannot seem to find the correct combination of named parameter syntax (:Lat) or sqlite concatenation operator to make it work. Any ideas if using named parameters inside functions is supported? Thanks Darren datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/411/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
411066700 MDU6SXNzdWU0MTEwNjY3MDA= 10 Error in upsert if column named 'order' psychemedia 82988 closed 0     1 2019-02-16T12:05:18Z 2019-02-24T16:55:38Z 2019-02-24T16:55:37Z NONE   The following works fine: ``` connX = sqlite3.connect('DELME.db', timeout=10) dfX=pd.DataFrame({'col1':range(3),'col2':range(3)}) DBX = Database(connX) DBX['test'].upsert_all(dfX.to_dict(orient='records')) ``` But if a column is named `order`: ``` connX = sqlite3.connect('DELME.db', timeout=10) dfX=pd.DataFrame({'order':range(3),'col2':range(3)}) DBX = Database(connX) DBX['test'].upsert_all(dfX.to_dict(orient='records')) ``` it throws an error: ``` --------------------------------------------------------------------------- OperationalError Traceback (most recent call last) <ipython-input-130-7dba33cd806c> in <module> 3 dfX=pd.DataFrame({'order':range(3),'col2':range(3)}) 4 DBX = Database(connX) ----> 5 DBX['test'].upsert_all(dfX.to_dict(orient='records')) /usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in upsert_all(self, records, pk, foreign_keys, column_order) 347 foreign_keys=foreign_keys, 348 upsert=True, --> 349 column_order=column_order, 350 ) 351 /usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, upsert, batch_size, column_order) 327 jsonify_if_needed(record.get(key, None)) for key in all_columns 328 ) --> 329 result = self.db.conn.execute(sql, values) 330 self.db.conn.commit() 331 self.last_id = result.lastrowid OperationalError: near "order": syntax error ``` sqlite-utils 140912432 issue     {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/10/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
411257981 MDU6SXNzdWU0MTEyNTc5ODE= 412 Linked Data(sette) sfkeller 43340 open 0     2 2019-02-18T00:38:14Z 2019-03-19T10:09:46Z   NONE   I've a radical feature idea (possibly first as an extension, in order to experiment?): I'd like to link to a remote table from a remote database, e.g. with a function "linked_datasette()". So one could do the following query: ``` SELECT foo.id, foo.a, remote_party.b FROM foo JOIN linked_datasette("https://parlgov.datasettes.com/parlgov-b42a2f2") AS remote_party ON foo.id=remote_party.id ``` This is inspired by SPARQL's SERVICE keyword for remote RDF "endpoints". There's a foundation in the SQL Standard called SQL/MED (https://rhaas.blogspot.com/2011/01/why-sqlmed-is-cool.html ). And here's an implementation from me in Postgres FDW to connect another Postgres "endpoint": https://pastebin.com/Fz2v64Cz . datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/412/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
418329842 MDU6SXNzdWU0MTgzMjk4NDI= 415 Add query parameter to hide SQL textarea ad-si 36796532 closed 0     3 2019-03-07T14:11:30Z 2019-03-15T09:30:57Z 2019-03-15T05:22:43Z NONE   It would be cool if there was a query parameter to hide / remove the SQL textarea. Then I could simply save a bookmark for a certain query and open it to see the data without having to scroll below the (long) SQL query first. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/415/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
432727685 MDU6SXNzdWU0MzI3Mjc2ODU= 20 JSON column values get extraneously quoted mhalle 649467 closed 0   1.0 4348046 1 2019-04-12T20:15:30Z 2019-05-25T00:57:19Z 2019-05-25T00:57:19Z NONE   If the input to `sqlite-utils insert` includes a column that is a JSON array or object, `sqlite-utils query` will introduce an extra level of quoting on output: ``` # echo '[{"key": ["one", "two", "three"]}]' | sqlite-utils insert t.db t - # sqlite-utils t.db 'select * from t' [{"key": "[\"one\", \"two\", \"three\"]"}] # sqlite3 t.db 'select * from t' ["one", "two", "three"] ``` This might require an imperfect solution, since sqlite3 doesn't have a JSON type. Perhaps fields that start with `["` or `{"` and end with `"]` or `"}` could be detected, with a flag to turn off that behavior for weird text fields (or vice versa). sqlite-utils 140912432 issue     {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/20/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
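The imperfect detection the report above suggests might look like this sketch (a heuristic only; a plain text field that happens to start with `["` would be mis-parsed, hence the proposed opt-out flag):

```python
import json

def maybe_unjson(value):
    # Heuristic from the report: strings that look like serialized JSON
    # arrays or objects get parsed back; everything else passes through.
    if isinstance(value, str) and value[:2] in ('["', '{"') and value[-1:] in ("]", "}"):
        try:
            return json.loads(value)
        except ValueError:
            return value
    return value

print(maybe_unjson('["one", "two", "three"]'))  # -> ['one', 'two', 'three']
print(maybe_unjson("plain text"))               # -> unchanged
```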
435819321 MDU6SXNzdWU0MzU4MTkzMjE= 436 400 Error when trying to register new user via https://publish.datasettes.com/ nniiicc 317694 closed 0     1 2019-04-22T17:55:00Z 2021-01-04T20:15:42Z 2021-01-04T20:15:41Z NONE   Behavior: When registering a new user via Zeit - confirmation is sent and screen acknowledges registered user... When clicking grant access the next screen is a white 400 error message. Replicated: Chrome and Firefox; 2 different email accounts datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/436/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
448189298 MDU6SXNzdWU0NDgxODkyOTg= 486 Ability to add extra routes and related templates clausjuhl 2181410 closed 0     2 2019-05-24T14:04:25Z 2019-05-24T14:43:28Z 2019-05-24T14:43:09Z NONE   Hi Simon. Thanks for an excellent job! Datasette is such an obviously good idea (once you have that idea!) and so well done. The only thing that I miss is the ability to add extra routes (with associated jinja2 templates). For most of the datasets that I would like to publish, I would also like at least a page that describes the data (semantics, provenance, biases...) and a page explaining our cookie and privacy policies (which would allow us to use something like Google Analytics). datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/486/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
449818897 MDU6SXNzdWU0NDk4MTg4OTc= 24 Additional Column Constraints? IgnoredAmbience 98555 closed 0     6 2019-05-29T13:47:03Z 2019-06-13T06:47:17Z 2019-06-13T06:30:26Z NONE   I'm looking to import data from XML with a pre-defined schema that maps fairly closely to a relational database. In particular, it has explicit annotations for when fields are required, optional, or when a default value should be inferred. Would there be value in adding the ability to define `NOT NULL` and `DEFAULT` column constraints to sqlite-utils? sqlite-utils 140912432 issue     {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/24/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
450862577 MDU6SXNzdWU0NTA4NjI1Nzc= 496 Additional options to gcloud build command in cloudrun - timeout costrouc 1740337 closed 0     1 2019-05-31T15:43:55Z 2019-05-31T23:05:05Z 2019-05-31T23:05:05Z NONE   I am trying to deploy a 3.1 GB dataset to cloudrun with datasette. Currently the Docker build times out. Would be nice to have a timeout flag or additional gcloud commands that could be specified. Here is the line https://github.com/simonw/datasette/blob/f825e2012109247fa246e2b938f8174069e574f1/datasette/publish/cloudrun.py#L78 I would be happy to submit a PR to allow for a timeout option. What are your thoughts on allowing the user additional build/publish flag options? datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/496/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
451513541 MDU6SXNzdWU0NTE1MTM1NDE= 498 Full text search of all tables at once? chrismp 7936571 closed 0     12 2019-06-03T14:24:43Z 2020-05-30T17:26:02Z 2020-05-30T17:26:02Z NONE   Does datasette have a built-in way, in a browser, to do a full-text search of all columns, in all databases and tables, that have full-text search enabled? Is there a plugin that does this? datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/498/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
451585764 MDU6SXNzdWU0NTE1ODU3NjQ= 499 Accessibility for non-techie newsies? chrismp 7936571 open 0     3 2019-06-03T16:49:37Z 2019-06-05T21:22:55Z   NONE   Hi again, I'm having fun uploading datasets to Heroku via datasette. I'd like to set up datasette so that it's easy for other newsroom workers, who don't use Linux and aren't programmers, to upload datasets. Does datasette provide this out-of-the-box, or as a plugin? datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/499/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
453131917 MDU6SXNzdWU0NTMxMzE5MTc= 502 Exporting sqlite database(s)? chrismp 7936571 closed 0     3 2019-06-06T16:39:53Z 2021-04-03T05:16:54Z 2019-06-11T18:50:42Z NONE   I'm working on datasette from one computer. But if I want to work on it from another computer and want to copy the SQLite database(s) already on the Heroku datasette instance, how do I copy the database(s) to the second computer so that I can then update them and push them back online via datasette's command-line code that pushes code to Heroku? datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/502/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
453243459 MDU6SXNzdWU0NTMyNDM0NTk= 503 Handle SQLite databases with spaces in their names? chrismp 7936571 closed 0 simonw 9599   1 2019-06-06T21:20:59Z 2019-11-04T23:16:30Z 2019-11-04T23:16:30Z NONE   I named my SQLite database "Government workers" and published it to Heroku. When I clicked the "Government workers" database online it led to a 404 page: `Database not found: Government%20workers`. I believe this is because the database name has a space. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/503/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
457147936 MDU6SXNzdWU0NTcxNDc5MzY= 512 "about" parameter in metadata does not appear when alone chrismp 7936571 open 0     3 2019-06-17T21:04:20Z 2019-10-11T15:49:13Z   NONE   Here's an example of metadata I have for one database on datasette. ``` "Records-requests": { "tables": { "Some table": { "about": "This table has data." } } } ``` The text in `about` does not show up when I publish the data. But it shows up after I add a `"source"` parameter in the metadata. Is this intended? datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/512/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}    
457201907 MDU6SXNzdWU0NTcyMDE5MDc= 513 Is it possible to publish to Heroku despite slug size being too large? chrismp 7936571 closed 0     2 2019-06-18T00:12:02Z 2019-06-21T22:35:54Z 2019-06-21T22:35:54Z NONE   I'm trying to push more than 1.5GB worth of SQLite databases -- 535MB compressed -- to Heroku but I get this error when I run the `datasette publish heroku` command. Compiled slug size: 535.5M is too large (max is 500M). Can I publish the databases and make datasette work on Heroku despite the large slug size? datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/513/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
459397625 MDU6SXNzdWU0NTkzOTc2MjU= 514 Documentation with recommendations on running Datasette in production without using Docker chrismp 7936571 closed 0   Datasette 0.50 5971510 27 2019-06-21T22:48:12Z 2020-10-08T23:55:53Z 2020-10-08T23:33:05Z NONE   I've got some SQLite databases too big to push to Heroku or the other services with built-in support in datasette. So instead I moved my datasette code and databases to a remote server on Kimsufi. In the folder containing the SQLite databases I run the following code. `nohup datasette serve -h 0.0.0.0 *.db --cors --port 8000 --metadata metadata.json > output.log 2>&1 &`. When I go to `http://my-remote-server.com:8000`, the site loads. But I know this is not a good long-term solution to running datasette on this server. What is the "correct" way to have this site run, preferably on server port 80? datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/514/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
459882902 MDU6SXNzdWU0NTk4ODI5MDI= 526 Stream all results for arbitrary SQL and canned queries matej-fr 50578294 open 0     23 2019-06-24T13:09:45Z 2022-09-28T04:01:25Z   NONE   I think that there is a difficulty with canned queries. When I want to stream all results of a canned query TwoDays I get only the first 1,000 records. Example: `http://myserver/history_sample/two_days.csv?_stream=on` returns only the first 1,000 records. If I do the same with the whole database, i.e. `http://myserver/history_sample/database.csv?_stream=on`, I correctly get all records. Any ideas? datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/526/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
459936585 MDU6SXNzdWU0NTk5MzY1ODU= 527 Unable to use rank when fts-table generated with csvs-to-sqlite clausjuhl 2181410 closed 0     3 2019-06-24T14:49:48Z 2019-06-24T15:21:18Z 2019-06-24T15:09:10Z NONE   Hi Simon. If I generate a fts-table with the csvs-to-sqlite f-option, I'm unable to use (in datasette's GUI) the internal ranking of the table for sorting or viewing, but if I generate the fts-table with the enable-fts argument from sqlite-utils, everything works OK. Eg.: datasette, version 0.28 sqlite-utils, version 1.2.1 csvs-to-sqlite, version 0.9 No column named rank with these commands: $ csvs-to-sqlite minutes.csv minutes.db -f text_data $ datasette -i minutes.db select rank, * from minutes_fts where minutes_fts match 'dog' Everything ok with these commands: $ csvs-to-sqlite minutes.csv minutes.db $ sqlite-utils enable-fts minutes.db text_data $ datasette -i minutes.db select rank, * from minutes_fts where minutes_fts match 'dog' Am I doing something wrong? Thank you for a great application! datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/527/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
467218270 MDU6SXNzdWU0NjcyMTgyNzA= 558 Support unicode in url 0x1997 380586 closed 0     4 2019-07-12T04:43:24Z 2019-07-15T01:29:30Z 2019-07-14T02:49:33Z NONE   Hi, I defined some custom queries in my `metadata.json`. There are Chinese characters in the names of the queries. So the urls are like `http://127.0.0.1:8001/mydb/测试查询`. When opening such urls, datasette will throw an exception. ``` Traceback (most recent call last): File "/home/zhe/miniconda3/lib/python3.7/site-packages/datasette/utils/asgi.py", line 100, in __call__ return await view(new_scope, receive, send) File "/home/zhe/miniconda3/lib/python3.7/site-packages/datasette/utils/asgi.py", line 172, in view request, **scope["url_route"]["kwargs"] File "/home/zhe/miniconda3/lib/python3.7/site-packages/datasette/views/base.py", line 267, in get request, database, hash, correct_hash_provided, **kwargs File "/home/zhe/miniconda3/lib/python3.7/site-packages/datasette/views/base.py", line 471, in view_get for key in self.ds.renderers.keys() File "/home/zhe/miniconda3/lib/python3.7/site-packages/datasette/views/base.py", line 471, in <dictcomp> for key in self.ds.renderers.keys() File "/home/zhe/miniconda3/lib/python3.7/site-packages/datasette/utils/__init__.py", line 655, in path_with_format path = request.path File "/home/zhe/miniconda3/lib/python3.7/site-packages/datasette/utils/asgi.py", line 49, in path self.scope.get("raw_path", self.scope["path"].encode("latin-1")) UnicodeEncodeError: 'latin-1' codec can't encode characters in position 9-11: ordinal not in range(256) ``` This used to work when datasette was based on sanic. Btw, thanks for the great work! datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/558/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
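The traceback above shows the already-decoded `path` being re-encoded as latin-1 whenever `raw_path` is absent from the ASGI scope; characters like 测试查询 then raise `UnicodeEncodeError`. A sketch of a safer fallback (using UTF-8 here is an assumption, but it matches how the path was percent-decoded in the first place):

```python
def raw_path_bytes(scope):
    # Prefer the raw bytes supplied by the ASGI server; otherwise
    # re-encode the decoded path as UTF-8 rather than latin-1.
    raw = scope.get("raw_path")
    return raw if raw is not None else scope["path"].encode("utf-8")

print(raw_path_bytes({"path": "/mydb/测试查询"}))
```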
472429048 MDU6SXNzdWU0NzI0MjkwNDg= 9 Too many SQL variables tholo 166463 closed 0     4 2019-07-24T18:24:17Z 2019-07-26T10:01:05Z 2019-07-26T10:01:05Z NONE   Decided to try importing my data, and ran into this: ``` Traceback (most recent call last): File "/Users/tholo/Source/health/bin/healthkit-to-sqlite", line 10, in <module> sys.exit(cli()) File "/Users/tholo/Source/health/lib/python3.7/site-packages/click/core.py", line 764, in __call__ return self.main(*args, **kwargs) File "/Users/tholo/Source/health/lib/python3.7/site-packages/click/core.py", line 717, in main rv = self.invoke(ctx) File "/Users/tholo/Source/health/lib/python3.7/site-packages/click/core.py", line 956, in invoke return ctx.invoke(self.callback, **ctx.params) File "/Users/tholo/Source/health/lib/python3.7/site-packages/click/core.py", line 555, in invoke return callback(*args, **kwargs) File "/Users/tholo/Source/health/lib/python3.7/site-packages/healthkit_to_sqlite/cli.py", line 50, in cli convert_xml_to_sqlite(fp, db, progress_callback=bar.update) File "/Users/tholo/Source/health/lib/python3.7/site-packages/healthkit_to_sqlite/utils.py", line 41, in convert_xml_to_sqlite write_records(records, db) File "/Users/tholo/Source/health/lib/python3.7/site-packages/healthkit_to_sqlite/utils.py", line 80, in write_records column_order=["startDate", "endDate", "value", "unit"], File "/Users/tholo/Source/health/lib/python3.7/site-packages/sqlite_utils/db.py", line 911, in insert_all result = self.db.conn.execute(sql, values) sqlite3.OperationalError: too many SQL variables ``` Added some debug output in sqlite_utils/db.py, which resulted in: ``` INSERT INTO [rBodyMassIndex] ([creationDate], [endDate], [metadata_HKWasUserEntered], [metadata_Health Mate App Version], [metadata_Modified Date], [metadata_Withings Link], [metadata_Withings User Identifier], [sourceName], [sourceVersion], [startDate], [unit], [value]) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) , (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) , (?, ?, ?, ?, ?, … healthkit-to-sqlite 197882382 issue     {"url": "https://api.github.com/repos/dogsheep/healthkit-to-sqlite/issues/9/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
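`insert_all` builds one multi-row INSERT per batch, so columns × rows-per-batch must stay under SQLite's variable limit (999 in many builds; the twelve columns above exceed it at a batch of 100 rows). The `batch_size` parameter visible in the traceback's signature is the knob; a sketch assuming the library versions of the time:

```python
import sqlite3
from sqlite_utils import Database

db = Database(sqlite3.connect("healthkit.db"))
records = [
    {"startDate": "2019-01-01", "endDate": "2019-01-02", "value": 21.5, "unit": "count"}
] * 5000  # stand-in data with a few of the twelve real columns

# 12 columns x 100 rows per batch = 1200 variables > 999; a smaller
# batch keeps each INSERT under SQLITE_MAX_VARIABLE_NUMBER.
db["rBodyMassIndex"].insert_all(records, batch_size=50)
```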
473307794 MDU6SXNzdWU0NzMzMDc3OTQ= 565 Conflict between datasette and uvicorn click versions jonheslop 440503 closed 0     1 2019-07-26T11:13:40Z 2020-10-02T00:09:55Z 2020-10-02T00:09:55Z NONE   Hello, Datasette is awesome, thanks so much! I'm not very familiar with Python, but I think there is a problem with datasette Docker builds; I keep getting this error: ``` ERROR: uvicorn 0.8.4 has requirement click==7.*, but you'll have click 6.0 which is incompatible. ERROR: datasette 0.29.2 has requirement click~=7.0, but you'll have click 6.0 which is incompatible. ``` The full log from the docker build is here - https://gist.github.com/jonheslop/e01cd322e761cfaf34f0cb83f86411b0 Just in case it’s helpful, this is my setup - https://github.com/dotwatcher/dotwatcher-data datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/565/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
476437213 MDU6SXNzdWU0NzY0MzcyMTM= 566 Unexpected keyword argument 'hidden' dvot197007 8330931 closed 0     1 2019-08-03T10:07:57Z 2019-08-03T16:13:36Z 2019-08-03T16:13:36Z NONE   I couldn't get a test example running. I am running python 3.6.8 and tried both windows and windows subsystem for linux, getting the same error. My test.db was created by converting a five line csv file with csvs-to-sqlite. The csv file is: col1, col2, col3 1,2,3 4,5,6 7,8,9 10,11,12 Here is the error message: (myvenv) davido@DESKTOP-L29G79U:~/dot/datasette-eg$ datasette test.db Traceback (most recent call last): File "/home/davido/dot/datasette-eg/myvenv/bin/datasette", line 7, in <module> from datasette.cli import cli File "/home/davido/dot/datasette-eg/myvenv/lib/python3.6/site-packages/datasette/cli.py", line 2, in <module> import uvicorn File "/home/davido/dot/datasette-eg/myvenv/lib/python3.6/site-packages/uvicorn/__init__.py", line 2, in <module> from uvicorn.main import Server, main, run File "/home/davido/dot/datasette-eg/myvenv/lib/python3.6/site-packages/uvicorn/main.py", line 224, in <module> headers: typing.List[str], File "/home/davido/dot/datasette-eg/myvenv/lib/python3.6/site-packages/click/decorators.py", line 170, in decorator _param_memo(f, OptionClass(param_decls, **attrs)) File "/home/davido/dot/datasette-eg/myvenv/lib/python3.6/site-packages/click/core.py", line 1430, in __init__ Parameter.__init__(self, param_decls, type=type, **attrs) TypeError: __init__() got an unexpected keyword a… datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/566/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
476852861 MDU6SXNzdWU0NzY4NTI4NjE= 568 Add database_color as a configurable option LBHELewis 50906992 open 0     0 2019-08-05T13:14:45Z 2019-08-05T13:14:45Z   NONE   This would be really useful as it would allow us to tie in with colour schemes. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/568/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}    
480961330 MDU6SXNzdWU0ODA5NjEzMzA= 54 Ability to list views, and to access db["view_name"].rows / rows_where / etc ftrain 20264 closed 0     5 2019-08-15T02:00:28Z 2019-08-23T12:41:09Z 2019-08-23T12:20:15Z NONE   The docs show me how to create a view via `db.create_view()` but I can't seem to get back to that view post-creation; if I query it as a table it returns `None`, and it doesn't appear in the table listing, even though querying the view works fine from inside the sqlite3 command-line. It'd be great to have the view as a pseudo-table, or if the python/sqlite3 module makes that hard to pull off (I couldn't figure it out), to have that edge-case documented next to the `db.create_view()` docs. sqlite-utils 140912432 issue     {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/54/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
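Until views get first-class helpers, the rows remain reachable through the underlying connection. A minimal sketch, assuming a view created with `db.create_view()`:

```python
import sqlite_utils

db = sqlite_utils.Database(":memory:")
db["t"].insert_all([{"id": 1, "name": "a"}, {"id": 2, "name": "b"}], pk="id")
db.create_view("t_names", "select name from t")
# db["t_names"].rows is what the report asks for; plain SQL works meanwhile:
rows = db.conn.execute("select name from t_names").fetchall()
```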
491219910 MDU6SXNzdWU0OTEyMTk5MTA= 61 importing CSV to SQLite as library witeshadow 17739 closed 0     2 2019-09-09T17:12:40Z 2019-11-04T16:25:01Z 2019-11-04T16:25:01Z NONE   CSV can be imported to SQLite when using the CLI, but I don't see documentation for doing the same when using it as a library. sqlite-utils 140912432 issue     {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/61/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
494685791 MDU6SXNzdWU0OTQ2ODU3OTE= 574 Improve usage description of --host option terrycojones 132978 closed 0     2 2019-09-17T15:12:12Z 2019-11-01T21:58:17Z 2019-11-01T21:57:54Z NONE   It would be nice if the `--host` option had a clearer description. I tried to get datasette running on an AWS instance and it took a while to realize it was only listening on localhost. So I wanted to make it listen on an non-localhost interface and tried giving a couple of values to `--host` (a host name, then an interface name), but none of them did. In the end I read the source to see that the option is passed to `uvicorn` and looked at the uvicorn docs, which also didn't help. Then I searched the web for "example running datasette on a host" which led me to https://github.com/simonw/datasette/issues/514 where I saw someone using `-h 0.0.0.0`. I tried that and it works. That usage could be mentioned somewhere, and might save someone else some time. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/574/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
500783373 MDU6SXNzdWU1MDA3ODMzNzM= 62 [enhancement] Method to delete a row in python Sergeileduc 4454869 closed 0     5 2019-10-01T09:45:47Z 2019-11-04T16:30:34Z 2019-11-04T16:18:18Z NONE   Hi ! Thanks for the lib ! Obviously, every possible sql queries won't have a dedicated method. But I was thinking : a method to delete a row (I'm terrible with names, maybe `delete_where()` or something, would be useful. I have a Database, with primary key. For the moment, I use : ```Python3 db.conn.execute(f"DELETE FROM table WHERE key = {key_id}") db.conn.commit() ``` to delete a row I don't need anymore, giving his primary key. Works like a charm. Just an idea : ```Python3 table.delete_where_pkey({'key': key_id}) ``` or something (I know, I'm terrible at naming methods...). Pros : well, no need to write SQL query. Cons : WHERE normally allows to do many more things (operators =, <>, >, <, BETWEEN), not to mention AND, OR, etc... Method is maybe to specific, and/or a pain to render more flexible. Again, just a thought. Writing his own sql works too, so... Thanks again. See yah. sqlite-utils 140912432 issue     {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/62/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
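Whatever shape a helper ends up taking, the interim snippet in the report is safer with a parameterized query than with f-string interpolation, which invites SQL injection. A sketch with hypothetical table and column names:

```python
import sqlite_utils

db = sqlite_utils.Database(":memory:")
db["items"].insert({"key": 42, "note": "obsolete"})
key_id = 42
# The driver binds key_id itself; nothing is interpolated into the SQL.
db.conn.execute("DELETE FROM [items] WHERE key = ?", [key_id])
db.conn.commit()
```

Later sqlite-utils releases did grow `table.delete_where()`; where available, that is the more direct spelling of the same idea.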
504238461 MDU6SXNzdWU1MDQyMzg0NjE= 6 sqlite3.OperationalError: table users has no column named bio dazzag24 1055831 closed 0     2 2019-10-08T19:39:52Z 2019-10-13T05:31:28Z 2019-10-13T05:30:19Z NONE   ``` $ github-to-sqlite repos github.db $ github-to-sqlite starred github.db dazzag24 Traceback (most recent call last): File "/home/darreng/.virtualenvs/dogsheep-d2PjdrD7/bin/github-to-sqlite", line 10, in <module> sys.exit(cli()) File "/home/darreng/.virtualenvs/dogsheep-d2PjdrD7/lib/python3.6/site-packages/click/core.py", line 764, in __call__ return self.main(*args, **kwargs) File "/home/darreng/.virtualenvs/dogsheep-d2PjdrD7/lib/python3.6/site-packages/click/core.py", line 717, in main rv = self.invoke(ctx) File "/home/darreng/.virtualenvs/dogsheep-d2PjdrD7/lib/python3.6/site-packages/click/core.py", line 1137, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/home/darreng/.virtualenvs/dogsheep-d2PjdrD7/lib/python3.6/site-packages/click/core.py", line 956, in invoke return ctx.invoke(self.callback, **ctx.params) File "/home/darreng/.virtualenvs/dogsheep-d2PjdrD7/lib/python3.6/site-packages/click/core.py", line 555, in invoke return callback(*args, **kwargs) File "/home/darreng/.virtualenvs/dogsheep-d2PjdrD7/lib/python3.6/site-packages/github_to_sqlite/cli.py", line 106, in starred utils.save_stars(db, user, stars) File "/home/darreng/.virtualenvs/dogsheep-d2PjdrD7/lib/python3.6/site-packages/github_to_sqlite/utils.py", line 177, in save_stars user_id = save_user(db, user) File "/home/darreng/.virtualenvs/dogsheep-d2PjdrD7/lib/python3.6/site-packages/github_to_sqlite/utils.py", line 61, in save_user return db["users"].upsert(to_save, pk="id").last_pk File "/home/darreng/.virtualenvs/dogsheep-d2PjdrD7/lib/python3.6/site-packages/sqlite_utils/db.py", line 1067, in upsert extracts=extracts, File "/home/darreng/.virtualenvs/dogsheep-d2PjdrD7/lib/python3.6/site-packages/sqlite_utils/db.py", line 916, in insert extracts=extracts, File "/home/darreng/.virtualenvs/dogsheep-d2PjdrD7/lib/python3.6/site-packages/sqlite_utils/db.py", line 1024, in insert_all result = self.db.conn.execute(sql, values)… github-to-sqlite 207052882 issue     {"url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/6/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
504720731 MDU6SXNzdWU1MDQ3MjA3MzE= 1 Add more details on how to request data from google takeout correctly. dazzag24 1055831 open 0     0 2019-10-09T15:17:34Z 2019-10-09T15:17:34Z   NONE   The default is to download everything. This can result in an enormous amount of data when you only really need 2 types of data for now: - My Activity - Location History In addition unless you specify that "My Activity" is downloaded in JSON format the default is HTML. This then causes the `google-takeout-to-sqlite my-activity takeout.db takeout.zip` command to fail as it only contains html files not json files. Thanks google-takeout-to-sqlite 206649770 issue     {"url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/1/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}    
505512251 MDU6SXNzdWU1MDU1MTIyNTE= 588 Queries per DB table in metadata.json bsilverm 12617395 closed 0     3 2019-10-10T21:08:19Z 2019-10-21T12:58:22Z 2019-10-21T01:48:42Z NONE   It doesn't appear possible to have separate queries defined per database table. When I do something like below, my table descriptions show up but not the queries: ` "databases": { "MYDB": { "tables": { "MYFIRSTTABLE": { "source": "Test", "source_url": "https://www.google.com", "queries": { "Query 1": { "sql": "select * from MYFIRSTTABLE", "title": "Query 1", "description": "This is the first query" }, } }, "MYSECONDTABLE": { "source":"Test2", "source_url":"https://www.google.com", "queries": { "Query 2" : { "sql":"select * from MYSECONDTABLE;", "title": "Query 2", "description":"This is the second query" } } } }` datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/588/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
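For reference, canned queries in `metadata.json` attach at the database level rather than under `tables`, which is why the descriptions above render but the queries do not. A hedged sketch of the same two queries hoisted up one level:

```json
{
  "databases": {
    "MYDB": {
      "queries": {
        "Query 1": {
          "sql": "select * from MYFIRSTTABLE",
          "title": "Query 1",
          "description": "This is the first query"
        },
        "Query 2": {
          "sql": "select * from MYSECONDTABLE;",
          "title": "Query 2",
          "description": "This is the second query"
        }
      }
    }
  }
}
```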
506183241 MDU6SXNzdWU1MDYxODMyNDE= 593 make uvicorn an optional dependency (because not OK on Windows Python yet) stonebig 4312421 closed 0     3 2019-10-12T12:51:07Z 2019-10-13T06:22:08Z 2019-10-13T06:22:07Z NONE   would it be possible to: - remove the mandatory uvicorn dependency? - eventually fall back to hypercorn? reason: - uvloop is not yet supported on Windows/Python-3.8 and below; it may happen with Python-3.9 only. - it seems a six-line effort (but I'm no expert) datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/593/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
506297048 MDU6SXNzdWU1MDYyOTcwNDg= 594 upgrade to uvicorn-0.9 to be Python-3.8 friendly stonebig 4312421 closed 0     3 2019-10-13T09:23:43Z 2019-11-12T04:47:04Z 2019-11-12T04:47:04Z NONE   uvicorn-0.8 relies on websockets-0.7, which lacks Python 3.8 compatibility datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/594/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
508100844 MDU6SXNzdWU1MDgxMDA4NDQ= 598 Character encoding bug with CSV export JoeGermuska 46313 closed 0     1 2019-10-16T21:09:30Z 2021-06-17T18:13:20Z 2019-10-18T22:52:21Z NONE   I was just poking around, and at [this URL](https://sql-murder-mystery.datasette.io/sql-murder-mystery/crime_scene_report.csv?_stream=on&type=arson&_size=max), I encountered this error: ``` 'latin-1' codec can't encode character '\u2019' in position 27: ordinal not in range(256) ``` datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/598/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
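A minimal reproduction of the codec error outside datasette, showing why the CSV stream has to be emitted as UTF-8 (the sample string is illustrative):

```python
text = "It\u2019s"           # contains U+2019, RIGHT SINGLE QUOTATION MARK
print(text.encode("utf-8"))   # b'It\xe2\x80\x99s' - fine
try:
    text.encode("latin-1")
except UnicodeEncodeError as e:
    print(e)                  # 'latin-1' codec can't encode character '\u2019' ...
```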
510076368 MDU6SXNzdWU1MTAwNzYzNjg= 605 Support queries at the table level bsilverm 12617395 open 0     2 2019-10-21T15:58:30Z 2019-10-30T18:55:37Z   NONE   Per the issue described in [issue #588](https://github.com/simonw/datasette/issues/588), it was determined queries are not supported at the table level. Per my last comment in the issue, I'd like to request support for this as it would help eliminate errors in the event certain tables are not present in the database. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/605/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}    
512996469 MDU6SXNzdWU1MTI5OTY0Njk= 607 Ways to improve fuzzy search speed on larger data sets? zeluspudding 8431341 closed 0     6 2019-10-27T17:31:37Z 2019-11-07T03:38:10Z 2019-11-07T03:38:10Z NONE   I have an SQLite table with 16 million rows in it. Having read @simonw's article "[Fast Autocomplete Search for Your Website](https://24ways.org/2018/fast-autocomplete-search-for-your-website/)" I was curious to try datasette to see what kind of query performance I could get out of it. In truth I don't need to do full text search, since all I would like to do is give my users a way to search for the names of investors such as "Warren Buffet" or "Tim Cook" (whose names are in a single column). On the first search, Datasette takes over 20 seconds to return all records associated with `elon musk`: > ![image](https://user-images.githubusercontent.com/8431341/67638889-a86e1100-f8b7-11e9-9f7e-a9d13a42e988.png) > ![image](https://user-images.githubusercontent.com/8431341/67638825-ed457800-f8b6-11e9-94d1-b44f1a40ee8c.png) If I rerun the same search, it then takes almost 9 seconds: > ![image](https://user-images.githubusercontent.com/8431341/67638908-e4a17180-f8b7-11e9-9d00-748c80ef1f21.png) That's far too slow to implement an autocomplete feature. I could reduce the latency by making a special table of only unique investor names, thereby reducing the search space to less than a million rows (then I'd need to implement a way to add only new investor names to the table as I receive new data... about 4,000 rows a day). If I did that, I'm still concerned the new table wouldn't be lean enough to look up investor names quickly. Plus, even if I can implement the autocomplete feature, I would still have to look up records for that investor, which would take between 8 and 20 seconds. Are there any tricks for speeding this up? Here's my hardware: > ![image](https://user-images.githubusercontent.com/8431341/67638861-55945980-f8b7-11e9-96a8-ca76c7c68c5d.png) datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/607/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
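Two standard SQLite tactics for this shape of lookup, sketched with hypothetical table and column names: a collated index makes exact and prefix matches near-instant, and a small standalone FTS5 table of distinct names supports word and prefix autocomplete without scanning 16 million rows.

```sql
-- Exact and prefix lookups (name = ?, name LIKE 'warren%') can use this index:
CREATE INDEX idx_trades_investor ON trades(investor_name COLLATE NOCASE);

-- Autocomplete over distinct names only, via FTS5 prefix queries:
CREATE VIRTUAL TABLE investor_names USING fts5(name);
INSERT INTO investor_names (name) SELECT DISTINCT investor_name FROM trades;
SELECT name FROM investor_names WHERE investor_names MATCH 'elon*' LIMIT 10;
```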
518506242 MDU6SXNzdWU1MTg1MDYyNDI= 616 Datasette FTS detection bug null92 49656826 closed 0     2 2019-11-06T14:25:47Z 2019-11-08T15:31:33Z 2019-11-08T02:06:56Z NONE   I'm having trouble with datasette. I deployed EXACTLY the same project on two different apps on Heroku. Both have databases (not all) with FTS activated, but only one detects it and works fine. You can take a look here: With search: http://teste-templates.herokuapp.com/amazonia_protege/car Without search: http://bases.vortex.media/amazonia_protege/car ![teste](https://user-images.githubusercontent.com/49656826/68306310-11a80e00-0088-11ea-8d1c-db3bd3375518.jpg) datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/616/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
527670799 MDU6SXNzdWU1Mjc2NzA3OTk= 639 updating metadata.json without recreating the app pkoppstein 172847 open 0     6 2019-11-24T09:19:53Z 2019-11-30T06:08:50Z   NONE   I've successfully "uploaded" an SQLite database (with a metadata.json file) to heroku using: $ datasette publish heroku so-sales.db -m metadata.json -n so-sales The question is: how can I modify the (small) metadata.json file without having to upload the (large) SQLite database? The directions on heroku indicate I should run: heroku git:clone -a so-sales But this just results in an empty directory with a warning: warning: You appear to have cloned an empty repository. I've been able to "clone" the heroku "app" using the command: $ heroku slugs:download -a so-sales but this is not a git repository.... Ideally, it seems to me, there'd be an option in the `datasette` CLI to allow a file to be updated, or there'd be some way to create a local git "clone" of the app so that the heroku instructions for "Deploying with git" would apply. (p.s. I ran `datasette publish heroku -m metadata.json -n so-sales` in the hope that that would not cause the .db file to be wiped, but of course it was.) (p.p.s. Thanks for Datasette!) datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/639/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
531502365 MDU6SXNzdWU1MzE1MDIzNjU= 646 Make database level information from metadata.json available in the index.html template lagolucas 18017473 open 0   Datasette 1.0 3268330 3 2019-12-02T19:55:10Z 2022-03-15T20:50:34Z   NONE   Did a search on the issues here and didn't find anything related to what I want. I want to have information that is on the database level of the JSON like title, source and source_url, and use it on the index page. I tried some small tweaks on the python and html files, but failed to get that result. Is there a way? Thanks! datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/646/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}    
534507142 MDU6SXNzdWU1MzQ1MDcxNDI= 69 Feature request: enable extensions loading aborruso 30607 closed 0     3 2019-12-08T08:06:25Z 2022-02-05T00:04:25Z 2020-10-16T18:42:49Z NONE   Hi, it would be great to add a parameter that enables loading any SQLite extension you need. Something like "-ext modspatialite". In this way your great tool would be even more convenient and powerful. Thank you very much sqlite-utils 140912432 issue     {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/69/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
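For anyone needing this before a flag exists, the standard-library route works wherever Python is built with extension loading enabled; the extension module name below is an assumption:

```python
import sqlite3

conn = sqlite3.connect("data.db")
conn.enable_load_extension(True)       # raises if this build disallows extensions
conn.load_extension("mod_spatialite")  # assumed module name for SpatiaLite
conn.enable_load_extension(False)      # disable again once loading is done
```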
539204432 MDU6SXNzdWU1MzkyMDQ0MzI= 70 Implement ON DELETE and ON UPDATE actions for foreign keys LucasElArruda 26292069 open 0     2 2019-12-17T17:19:10Z 2020-02-27T04:18:53Z   NONE   Hi! I did not find any mention on the library about ON DELETE and ON UPDATE actions for foreign keys. Are those expected to be implemented? If not, it would be a nice thing to include! sqlite-utils 140912432 issue     {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/70/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}    
539590148 MDU6SXNzdWU1Mzk1OTAxNDg= 651 fts5 syntax error when using punctuation clausjuhl 2181410 closed 0     3 2019-12-18T10:25:35Z 2021-07-14T19:26:06Z 2019-12-30T06:42:55Z NONE   Hi Simon. I get a syntax error when using punctuation or special characters in a full-text search (using fts5). I created the virtual table using the sqlite-utils "enable-fts" command. The same error appears on Niche Museums [https://www.niche-museums.com/browse/search?q=park.](https://www.niche-museums.com/browse/search?q=park.), but it works fine in most of your other datasette examples, e.g. register-of-members-interests [https://register-of-members-interests.datasettes.com/regmem-98dc8b7/items?_search=mins.](https://register-of-members-interests.datasettes.com/regmem-98dc8b7/items?_search=mins.) What am I doing wrong? Many thanks! datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/651/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
541274681 MDU6SXNzdWU1NDEyNzQ2ODE= 2 Add linkedin-to-sqlite mnp 881925 open 0     0 2019-12-21T03:13:40Z 2019-12-21T03:13:40Z   NONE   There is an API available. https://developer.linkedin.com/docs/rest-api# At the minimum, I would think contact list and messages would be of interest. dogsheep.github.io 214746582 issue     {"url": "https://api.github.com/repos/dogsheep/dogsheep.github.io/issues/2/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}    
544571092 MDU6SXNzdWU1NDQ1NzEwOTI= 15 Assets table with downloads garethr 2029 closed 0   1.0 5225818 4 2020-01-02T13:05:28Z 2020-03-28T12:17:01Z 2020-03-23T19:17:32Z NONE   The `releases` command extracts the releases table, but data about the individual assets are locked up in the JSON document in the `assets` field. My main interest is in individual and aggregate download counts. I was wondering if creating a new table with a record per asset may be useful? If so I'm happy to send a PR when I get a moment. Do you have opinions about that simply being part of the `releases` command or would you prefer a separate command as well? github-to-sqlite 207052882 issue     {"url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/15/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
545407916 MDU6SXNzdWU1NDU0MDc5MTY= 73 upsert_all() throws issue when upserting to empty table psychemedia 82988 closed 0     6 2020-01-05T11:58:57Z 2020-01-31T14:21:09Z 2020-01-05T17:20:18Z NONE   If I try to add a list of `dict`s to an empty table using `upsert_all`, I get an error: ```python import sqlite3 from sqlite_utils import Database import pandas as pd conx = sqlite3.connect(':memory') cx = conx.cursor() cx.executescript('CREATE TABLE "test" ("Col1" TEXT);') q="SELECT * FROM test;" pd.read_sql(q, conx) #shows empty table db = Database(conx) db['test'].upsert_all([{'Col1':'a'},{'Col1':'b'}]) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-74-8c26d93d7587> in <module> 1 db = Database(conx) ----> 2 db['test'].upsert_all([{'Col1':'a'},{'Col1':'b'}]) /usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in upsert_all(self, records, pk, foreign_keys, column_order, not_null, defaults, batch_size, hash_id, alter, extracts) 1157 alter=alter, 1158 extracts=extracts, -> 1159 upsert=True, 1160 ) 1161 /usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, column_order, not_null, defaults, batch_size, hash_id, alter, ignore, replace, extracts, upsert) 1040 sql = "INSERT OR IGNORE INTO [{table}]({pks}) VALUES({pk_placeholders});".format( 1041 table=self.name, -> 1042 pks=", ".join(["[{}]".format(p) for p in pks]), 1043 pk_placeholders=", ".join(["?" for p in pks]), 1044 ) TypeError: 'NoneType' object is not iterable ``` A hacky workaround in use is: ```python try: db['test'].upsert_all([{'Col1':'a'},{'Col1':'b'}]) except: db['test'].insert_all([{'Col1':'a'},{'Col1':'b'}]) ``` sqlite-utils 140912432 issue     {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/73/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
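The `'NoneType' object is not iterable` comes from the internal `pks` list being `None`: an upsert has to know which column(s) identify a row. A sketch of the call with a primary key supplied, assuming `Col1` is the natural key here:

```python
import sqlite_utils

db = sqlite_utils.Database(":memory:")
db["test"].create({"Col1": str}, pk="Col1")
# pk= tells upsert_all which rows count as "the same row" on re-run:
db["test"].upsert_all([{"Col1": "a"}, {"Col1": "b"}], pk="Col1")
```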
546051181 MDU6SXNzdWU1NDYwNTExODE= 16 Exception running first command: IndexError: list index out of range jayvdb 15092 closed 0     4 2020-01-07T03:01:58Z 2020-04-14T18:37:21Z 2020-04-14T18:37:21Z NONE   Exception running first command without an existing db or auth. ```py > mkdir ~/.github/coala > /usr/bin/github-to-sqlite repos ~/.github/coala coala Traceback (most recent call last): File "/usr/bin/github-to-sqlite", line 11, in <module> load_entry_point('github-to-sqlite==0.6', 'console_scripts', 'github-to-sqlite')() File "/usr/lib/python3.7/site-packages/click/core.py", line 764, in __call__ return self.main(*args, **kwargs) File "/usr/lib/python3.7/site-packages/click/core.py", line 717, in main rv = self.invoke(ctx) File "/usr/lib/python3.7/site-packages/click/core.py", line 1137, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/usr/lib/python3.7/site-packages/click/core.py", line 956, in invoke return ctx.invoke(self.callback, **ctx.params) File "/usr/lib/python3.7/site-packages/click/core.py", line 555, in invoke return callback(*args, **kwargs) File "/usr/lib/python3.7/site-packages/github_to_sqlite/cli.py", line 163, in repos utils.save_repo(db, repo) File "/usr/lib/python3.7/site-packages/github_to_sqlite/utils.py", line 120, in save_repo to_save["owner"] = save_user(db, to_save["owner"]) File "/usr/lib/python3.7/site-packages/github_to_sqlite/utils.py", line 61, in save_user return db["users"].upsert(to_save, pk="id", alter=True).last_pk File "/usr/lib/python3.7/site-packages/sqlite_utils/db.py", line 1135, in upsert extracts=extracts, File "/usr/lib/python3.7/site-packages/sqlite_utils/db.py", line 1162, in upsert_all upsert=True, File "/usr/lib/python3.7/site-packages/sqlite_utils/db.py", line 1105, in insert_all row = list(self.rows_where("rowid = ?", [self.last_rowid]))[0] IndexError: list index out of range ``` github-to-sqlite 207052882 issue     {"url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/16/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
548591089 MDU6SXNzdWU1NDg1OTEwODk= 657 Allow creation of virtual tables at startup dazzag24 1055831 open 0     4 2020-01-12T16:10:55Z 2021-01-15T20:24:35Z   NONE   Hi, I've been experimenting with SQLite reading from huge datasets using this excellent Parquet extension from @cldellow. https://cldellow.com/2018/06/22/sqlite-parquet-vtable.html https://github.com/cldellow/sqlite-parquet-vtable This works really well, but I was keen to see if I could combine datasette with this. Having previously experimented with the spatialite extension I knew that datasette supports loading extensions in the underlying sqlite instance. However I hit a blocker as the current design only allows SELECT statements to be executed and so I am unable to execute the crucial CREATE VIRTUAL TABLE ......... command that is required to load the data from the parquet file into the table. It seems like this would be a simple-ish change, but I don't know enough about the architecture of datasette to start implementing this myself? Could this be done as a datasette plugin? or would this require more fundamental changes at initialisation time? My thoughts are that something at init time could detect that the user was loading a *.parquet file and then switch to a mode were it loads that via the "CREATE VIRTUAL TABLE..." rather than loading the *.db file in the default case?? I'm happy to contribute code and testing, I just need some pointers on the best approach. Thanks Darren datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/657/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}    
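For context, the statement the report wants to run at startup looks roughly like this per the linked extension's documentation (file path hypothetical; the argument syntax belongs to the extension, not SQLite core):

```sql
CREATE VIRTUAL TABLE mydata USING parquet('/path/to/file.parquet');
```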
549287310 MDU6SXNzdWU1NDkyODczMTA= 76 order_by mechanism metab0t 10501166 closed 0     4 2020-01-14T02:06:03Z 2020-04-16T06:23:29Z 2020-04-16T03:13:06Z NONE   In some cases, I want to iterate rows in a table with `ORDER BY` clause. It would be nice to have a `rows_order_by` function similar to `rows_where`. In a more general case, `rows_filter` function might be added to allow more customized filtering to iterate rows. sqlite-utils 140912432 issue     {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/76/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
550293770 MDU6SXNzdWU1NTAyOTM3NzA= 658 How do I use app.css as a style sheet? null92 49656826 open 0     2 2020-01-15T16:27:57Z 2020-02-07T00:29:50Z   NONE   Simon, I'm trying to use the app.css (in the static folder) as a style sheet, but datasette on Heroku simply ignores it! I read everything about customization here and on readthedocs but still can't get it to work. Is this possible? Thanks! datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/658/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
551834842 MDU6SXNzdWU1NTE4MzQ4NDI= 659 README information is obscured by feature history labstersteve 55480210 closed 0     1 2020-01-18T22:34:51Z 2020-12-10T23:28:51Z 2020-12-10T23:28:51Z NONE   While it's sometimes valuable to know how a project has developed, there is usually little justification for including this information in the README, and certainly not immediately after other key information such as "what does this package do, and who might want to use it?" Might I recommend that the feature history is migrated to an Appendix in the documentation? datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/659/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
555832585 MDU6SXNzdWU1NTU4MzI1ODU= 661 --port option to expose a port other than 8001 in "datasette package" dvhthomas 134771 closed 0     3 2020-01-27T21:05:56Z 2020-01-30T04:17:52Z 2020-01-29T22:46:45Z NONE   I see how to alter the port using `datasette serve -p XXX` per the docs. However, I'm packaging up to serve the container on App Engine flexible, which [requires](https://cloud.google.com/appengine/docs/flexible/custom-runtimes/build#listening_to_port_8080) that the container serves traffic on port 8080. https://github.com/simonw/datasette/blob/7950105c278b140e6cb665c68b59df219870f9bc/Dockerfile#L41 Is there a way to inject a non-default port into the Dockerfile, or should I just do something like `sed` to replace 8001 with 8080 after `datasette package` has done its thing? Thanks for the advice. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/661/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
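This issue closed with the feature landing, so recent releases should accept a port flag directly; a hedged sketch of the invocation (check `datasette package --help` on your version):

```shell
datasette package data.db --port 8080 --tag my-image
```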
556814876 MDU6SXNzdWU1NTY4MTQ4NzY= 662 Escape_fts5_query hook implementation does not work with queries to standard tables clausjuhl 2181410 closed 0     5 2020-01-29T11:56:03Z 2020-01-30T00:30:20Z 2020-01-30T00:30:19Z NONE   Hi Simon. Thank you for adding the escape function, but it does not work on my datasette installation (0.33). I've added the following file to my datasette dir: /plugins/sql_functions.py: `from datasette import hookimpl def escape_fts_query(query): bits = query.split() return ' '.join('"{}"'.format(bit.replace('"', '')) for bit in bits) @hookimpl def prepare_connection(conn): conn.create_function("escape_fts_query", 1, escape_fts_query)` It has no effect on the standard queries to the tables though, as they still produce errors when including any characters like '-', '/', '+' or '?'. Does the function only work when using custom queries, where I can include the escape_fts function explicitly in the SQL query? PS. I'm calling datasette with --plugins=plugins, and my other plugins work just fine. PPS. The fts5 virtual table is created with 'sqlite3' like so: `CREATE VIRTUAL TABLE "cases_fts" USING FTS5( title, subtitle, resume, suggestion, presentation, detail = full, content_rowid = 'id', content = 'cases', tokenize='unicode61', 'remove_diacritics 2', 'tokenchars "-_"' );` Thanks! _Originally posted by @clausjuhl in https://github.com/simonw/datasette/issues/651#issuecomment-579675357_ datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/662/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
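To answer the closing question: table views build their own `MATCH` clause, so the registered function can only be applied explicitly in a custom (canned) query. A sketch against the `cases`/`cases_fts` schema from the report (with the fts5 table's `content_rowid = 'id'`, its rowid maps back to `cases.id`):

```sql
select cases.*
from cases
where cases.id in (
    select rowid from cases_fts
    where cases_fts match escape_fts_query(:search)
)
```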
559197745 MDU6SXNzdWU1NTkxOTc3NDU= 82 Tutorial command no longer works petey284 10350886 closed 0     3 2020-02-03T16:36:11Z 2020-02-27T04:16:43Z 2020-02-27T04:16:30Z NONE   Issue with command on [tutorial](https://simonwillison.net/2019/Feb/25/sqlite-utils/) on Simon's site. The following command no longer works, and breaks with the previous too many variables error: #50 ``` cmd > curl "https://data.nasa.gov/resource/y77d-th95.json" | \ sqlite-utils insert meteorites.db meteorites - --pk=id ``` Output: ``` cmd Traceback (most recent call last): File "continuum\miniconda3\envs\main\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "continuum\miniconda3\envs\main\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "Continuum\miniconda3\envs\main\Scripts\sqlite-utils.exe\__main__.py", line 9, in <module> File "continuum\miniconda3\envs\main\lib\site-packages\click\core.py", line 764, in __call__ return self.main(*args, **kwargs) File "continuum\miniconda3\envs\main\lib\site-packages\click\core.py", line 717, in main rv = self.invoke(ctx) File "continuum\miniconda3\envs\main\lib\site-packages\click\core.py", line 1137, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "continuum\miniconda3\envs\main\lib\site-packages\click\core.py", line 956, in invoke return ctx.invoke(self.callback, **ctx.params) File "continuum\miniconda3\envs\main\lib\site-packages\click\core.py", line 555, in invoke return callback(*args, **kwargs) File "continuum\miniconda3\envs\main\lib\site-packages\sqlite_utils\cli.py", line 434, in insert default=default, File "continuum\miniconda3\envs\main\lib\site-packages\sqlite_utils\cli.py", line 384, in insert_upsert_implementation docs, pk=pk, batch_size=batch_size, alter=alter, **extra_kwargs File "continuum\miniconda3\envs\main\lib\site-packages\sqlite_utils\db.py", line 1081, in insert_all result = self.db.conn.execute(query, params) sqlite3.OperationalError: too many SQL variables ``` My thought is that maybe the dataset grew over the last few years and so didn't run into this issue before. No error… sqlite-utils 140912432 issue     {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/82/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
562787785 MDU6SXNzdWU1NjI3ODc3ODU= 667 Allow injecting configuration data from plugins xrotwang 870184 closed 0     2 2020-02-10T19:50:15Z 2020-02-12T16:18:22Z 2020-02-12T09:21:22Z NONE   I'm trying to customize datasette as explorer for [CLDF](https://cldf.clld.org) datasets. Such datasets can be converted automatically to SQLite, which then can be fed to datasette, (e.g. https://github.com/cldf/cookbook/blob/master/recipes/datasette/README.md). Part of this customization would be support for the "special" data types described in the [CLDF ontology](https://cldf.clld.org/v1.0/terms.rdf). But while rendering of the values can be customized via the `render_cell` hook in a plugin, e.g. custom labels for foreign keys must be specified through the config file. It would be nice to be able to programmatically inject config data from plugins as well. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/667/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
564579430 MDU6SXNzdWU1NjQ1Nzk0MzA= 86 Problem with square bracket in CSV column name foscoj 8149512 closed 0     7 2020-02-13T10:19:57Z 2020-02-27T04:16:08Z 2020-02-27T04:16:07Z NONE   Testing some data from European power information (entsoe.eu): the column titles of the CSV contain square brackets. As I am playing with Glitch, sqlite-utils is used for creating the db. Traceback (most recent call last): File "/app/.local/bin/sqlite-utils", line 8, in <module> sys.exit(cli()) File "/app/.local/lib/python3.7/site-packages/click/core.py", line 764, in __call__ return self.main(*args, **kwargs) File "/app/.local/lib/python3.7/site-packages/click/core.py", line 717, in main rv = self.invoke(ctx) File "/app/.local/lib/python3.7/site-packages/click/core.py", line 1137, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/app/.local/lib/python3.7/site-packages/click/core.py", line 956, in invoke return ctx.invoke(self.callback, **ctx.params) File "/app/.local/lib/python3.7/site-packages/click/core.py", line 555, in invoke return callback(*args, **kwargs) File "/app/.local/lib/python3.7/site-packages/sqlite_utils/cli.py", line 434, in insert default=default, File "/app/.local/lib/python3.7/site-packages/sqlite_utils/cli.py", line 384, in insert_upsert_implementation docs, pk=pk, batch_size=batch_size, alter=alter, **extra_kwargs File "/app/.local/lib/python3.7/site-packages/sqlite_utils/db.py", line 997, in insert_all extracts=extracts, File "/app/.local/lib/python3.7/site-packages/sqlite_utils/db.py", line 618, in create extracts=extracts, File "/app/.local/lib/python3.7/site-packages/sqlite_utils/db.py", line 310, in create_table self.conn.execute(sql) sqlite3.OperationalError: unrecognized token: "]" entsoe_2016.csv renamed to txt for upload compatibility: [entsoe_2016.txt](https://github.com/simonw/sqlite-utils/files/4197688/entsoe_2016.txt) The code is remixed directly from your https://glitch.com/edit/#!/datasette-csvs repo sqlite-utils 140912432 issue     {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/86/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
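Until the library quotes such names itself, a pre-pass over the CSV headers sidesteps the parse error; a sketch with hypothetical file and table names:

```python
import csv
import sqlite_utils

db = sqlite_utils.Database("entsoe.db")
with open("entsoe_2016.csv", newline="") as f:
    rows = [
        # Swap the brackets for parentheses so the generated DDL parses:
        {key.replace("[", "(").replace("]", ")"): value for key, value in row.items()}
        for row in csv.DictReader(f)
    ]
db["entsoe_2016"].insert_all(rows)
```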
567902704 MDU6SXNzdWU1Njc5MDI3MDQ= 675 --cp option for datasette publish and datasette package for shipping additional files and directories aviflax 141844 open 0     12 2020-02-19T22:55:56Z 2020-12-28T18:49:21Z   NONE   I’m working on integrating Datasette into a documentation-oriented publishing workflow internally in my company, and in order to deploy the Docker image created by `datasette package` I need to add an additional file to the image — in my case, it’s a sort of a deployment directive. I’ve worked out a way to do this after the image has been created, but it’s convoluted and brittle. So it’d be excellent if there was an additional option for this command, something like, like, `--copy`. I’d envision it looking something like: ```shell $ datasette package --copy /the/source/path:/the/target/path data.db ``` I’d be happy to help design, specify, implement, and test this feature, if you’d be interested. Thanks for the fantastic tools! datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/675/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}    
568091133 MDU6SXNzdWU1NjgwOTExMzM= 676 ?_searchmode=raw option for running FTS searches without escaping characters tunguyenatwork 58088336 closed 0     9 2020-02-20T06:56:57Z 2020-02-25T05:57:24Z 2020-02-25T05:56:04Z NONE   After version 0.34 I am not able to use wildcards in the `_search` option (or in full-text search). It will not return any results unless I specify the whole word to search for. If I use `match :search || "*"` in the SQL statement then it works as expected. datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/676/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
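The workaround quoted in the report, written out as a full custom query (table name hypothetical); appending `'*'` turns the user's input into an FTS prefix query:

```sql
select * from mytable
where rowid in (
    select rowid from mytable_fts
    where mytable_fts match :search || '*'
)
```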
569317377 MDU6SXNzdWU1NjkzMTczNzc= 681 Cache header missing in HTTP response clausjuhl 2181410 closed 0     4 2020-02-22T10:50:45Z 2020-02-24T20:53:57Z 2020-02-24T20:53:56Z NONE   Hi Simon. I need some help with both understanding and adding HTTP headers. If I call datasette on localhost with --config default_cache_ttl:120 and --cors, I only get the following response headers: access-control-allow-origin: * content-type: text/html; charset=utf-8 date: Sat, 22 Feb 2020 10:32:15 GMT referrer-policy: no-referrer server: uvicorn transfer-encoding: chunked CORS works, but no caching header is set. The same thing happens if I use the command in a Dockerfile and run datasette with Docker. Second, how can one add headers to uvicorn? I've tried to add uvicorn commands to the Dockerfile, before the final datasette command, but it doesn't work. Is there any way to add headers to the uvicorn.run() command in datasette? In particular, I would like to add some of the missing security headers: <img width="1010" alt="Screenshot 2020-02-22 at 11 48 03" src="https://user-images.githubusercontent.com/2181410/75091037-5ab59c80-5569-11ea-8dbb-22357f1aa4c8.png"> Thank you for a great product! datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/681/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
573583971 MDU6SXNzdWU1NzM1ODM5NzE= 689 "Templates considered" comment broken in >=0.35 chrishas35 35075 closed 0     6 2020-03-01T17:31:21Z 2020-04-05T19:39:44Z 2020-04-05T19:39:44Z NONE   Noticed that the "Templates Considered" comment is missing in 0.37. Believe I traced it back to #664 as you can see it in https://v0-34.datasette.io/ but not https://v0-35.datasette.io/. Looking at the template context debug between the two you can see what is missing from 0.35 vs. 0.34: ```diff < "datasette_version": "0.34", < "app_css_hash": "ffa51a", < "select_templates": [ < "*index.html" < ], < "zip": "<class 'zip'>", < "body_scripts": [], < "extra_css_urls": "<generator object BaseView._asset_urls at 0x7f6529ac05f0>", < "extra_js_urls": "<generator object BaseView._asset_urls at 0x7f6529ac0660>", < "format_bytes": "<function format_bytes at 0x7f652a1588b0>", < "database_url": "<bound method BaseView.database_url of <datasette.views.index.IndexView object at 0x7f6529b03e50>>", < "database_color": "<bound method BaseView.database_color of <datasette.views.index.IndexView object at 0x7f6529b03e50>>" --- > "datasette_version": "0.35", > "database_url": "<bound method BaseView.database_url of <datasette.views.index.IndexView object at 0x7f6140dacd90>>", > "database_color": "<bound method BaseView.database_color of <datasette.views.index.IndexView object at 0x7f6140dacd90>>" ``` datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/689/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
577302229 MDU6SXNzdWU1NzczMDIyMjk= 91 Enable ordering FTS results by rank gfrmin 416374 closed 0   3.0 6079500 1 2020-03-07T08:43:51Z 2020-11-06T23:53:26Z 2020-11-06T23:53:25Z NONE   According to https://www.sqlite.org/fts5.html (not sure about FTS4) results can be sorted by relevance. At the moment results are returned by default by `rowid`. Perhaps a flag can be added to the `search` method? sqlite-utils 140912432 issue     {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/91/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
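For FTS5 the requested ordering is a one-clause change at the SQL level, which a `search()` flag would presumably wrap (table name hypothetical):

```sql
select rowid, * from docs_fts
where docs_fts match 'search terms'
order by rank;  -- bm25-based relevance, best match first
```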
593751293 MDU6SXNzdWU1OTM3NTEyOTM= 97 Adding a "recreate" flag to the `Database` constructor betatim 1448859 closed 0     4 2020-04-04T05:41:10Z 2020-04-15T14:29:31Z 2020-04-13T03:52:29Z NONE   I have a [script](https://github.com/betatim/binder-datasette/blob/master/create-db.ipynb) that imports data into a sqlite DB. When I re-run that script I'd like to remove the existing sqlite DB, instead of adding to it. The pragmatic answer is to add the check and file deletion to my script. However I thought it would be easy and useful for others to add a `recreate=True` flag to `db = sqlite_utils.Database("binder-launches.db")`. After taking a look at the code for it I am not so sure any more. This is because the connection string could be a URL (or "connection string") like `"file:///tmp/foo.db"`. I don't know what the equivalent of `os.path.exists()` is for a connection string or how to detect that something is a connection string and raise an error "can't use recreate=True and conn_string at the same time". Does anyone have an idea/suggestion where to start investigating? sqlite-utils 140912432 issue     {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/97/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
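A sketch of the distinction being probed, assuming plain filesystem paths are the only case that gets deleted and anything that looks like a connection string is refused rather than guessed at:

```python
import pathlib
import sqlite_utils

def open_db(path, recreate=False):
    if recreate:
        if str(path).startswith("file:"):
            raise ValueError("recreate=True is not supported for connection strings")
        p = pathlib.Path(path)
        if p.exists():
            p.unlink()
    return sqlite_utils.Database(path)

db = open_db("binder-launches.db", recreate=True)
```

The library itself later gained a `recreate=True` constructor argument along these lines; check your version's documentation.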
605806386 MDU6SXNzdWU2MDU4MDYzODY= 735 Error when I click on "View and edit SQL" aborruso 30607 closed 0     2 2020-04-23T19:31:32Z 2020-04-28T06:10:20Z 2020-04-27T19:00:30Z NONE   Hi, when I do it [here](https://my-database.now.sh/commissioniComunePalermo/youtube), I have "unrecognized token: "["" error. Is it normal? Thank you datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/735/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
606720674 MDU6SXNzdWU2MDY3MjA2NzQ= 736 strange behavior using accented characters aborruso 30607 closed 0     3 2020-04-25T08:34:51Z 2020-04-28T06:09:28Z 2020-04-27T18:59:16Z NONE   Hi, when I search for `incompatibilità` [here](https://my-database.now.sh/commissioniComunePalermo/youtube) using full-text search, the `à` gets mangled in the query (mojibake) and I have no results. If I encode the `à` char in the URL (`incompatibilit%C3%A0`) I have the right result. ![image](https://user-images.githubusercontent.com/30607/80275201-00a79380-86e0-11ea-865e-f7e1474e8098.png) datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/736/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
608058890 MDU6SXNzdWU2MDgwNTg4OTA= 744 link_or_copy_directory() error - Invalid cross-device link aborruso 30607 closed 0     28 2020-04-28T06:26:45Z 2020-05-28T14:32:53Z 2020-05-27T06:01:28Z NONE   Hi, when I run ``` datasette publish heroku -n myapp --template-dir ./template mydb.db ``` I have this error ``` Traceback (most recent call last): File "/home/aborruso/.local/lib/python3.7/site-packages/datasette/utils/__init__.py", line 607, in link_or_copy_directory shutil.copytree(src, dst, copy_function=os.link) File "/usr/lib/python3.7/shutil.py", line 365, in copytree raise Error(errors) shutil.Error: [('/myfolder/youtubeComunePalermo/processing/./template/base.html', '/tmp/tmps9_4mzc4/templates/base.html', "[Errno 18] Invalid cross-device link: '/myfolder/youtubeComunePalermo/processing/./template/base.html' -> '/tmp/tmps9_4mzc4/templates/base.html'"), ('/myfolder/youtubeComunePalermo/processing/./template/index.html', '/tmp/tmps9_4mzc4/templates/index.html', "[Errno 18] Invalid cross-device link: '/myfolder/youtubeComunePalermo/processing/./template/index.html' -> '/tmp/tmps9_4mzc4/templates/index.html'")] During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/aborruso/.local/bin/datasette", line 8, in <module> sys.exit(cli()) File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 829, in __call__ return self.main(*args, **kwargs) File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 782, in main rv = self.invoke(ctx) File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "/home/aborruso/.local/lib/python3.7/site-packages/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/home/aborruso/.local/lib/pytho… datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/744/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
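`[Errno 18]` (EXDEV) means the hard link crossed filesystems, here the working directory versus `/tmp`. The conventional fix is a copy function that falls back, sketched below:

```python
import os
import shutil

def link_or_copy(src, dst):
    # Hard links are cheap but only valid within a single filesystem;
    # fall back to a real copy when os.link fails (e.g. with EXDEV).
    try:
        os.link(src, dst)
    except OSError:
        shutil.copy2(src, dst)

# e.g. shutil.copytree(src_dir, dst_dir, copy_function=link_or_copy)
```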
609950090 MDU6SXNzdWU2MDk5NTAwOTA= 33 Fall back to authentication via ENV garethr 2029 closed 0     4 2020-04-30T12:58:14Z 2020-05-02T18:46:10Z 2020-05-02T18:45:37Z NONE   Would you accept a PR that falls back to looking for an environment variable for the GitHub token? Specifically a change here: https://github.com/dogsheep/github-to-sqlite/blob/c34d5a18bfc41fa08755ba3d5cf9fe09ff204238/github_to_sqlite/cli.py#L271 I'd like to use `github-to-sqlite` in a GitHub Action workflow and this would be simpler than trying to fill out the prompt or generate a file with sensitive content. Wanted to check first, I'm happy to submit a PR with tests and updates to the docs. github-to-sqlite 207052882 issue     {"url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/33/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
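The shape being proposed, for readers landing here from a CI context (the variable name is an assumption; check the tool's docs for what actually shipped):

```shell
export GITHUB_TOKEN="ghp_xxxxxxxx"   # assumed variable name; keep it out of the repo
github-to-sqlite repos github.db dogsheep
```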
611284481 MDU6SXNzdWU2MTEyODQ0ODE= 38 [Feature Request] Support Repo Name in Search 🥺 zzeleznick 5779832 closed 0     4 2020-05-02T22:08:51Z 2020-05-03T02:34:32Z 2020-05-02T23:15:11Z NONE   ## Description Per your [v2.2 release tweet](https://twitter.com/simonw/status/1256700238099693568) I played with the demo, but the output did not match my expectations. ## Expected Behavior Expected a search query for "twitter" contained within the `repo` column to return non-zero results. ## Actual Behavior 😭 [0 rows where repo contains "twitter" sorted by starred_at descending](https://github-to-sqlite.dogsheep.net/github/stars?repo__contains=twitter&_sort_desc=starred_at) ## Best Explanation Per the table schema (see appendix) `repo` is of type `INTEGER` which built from `repo_id` and does not expose the repo name in search. ## Desired Behavior Given that searching for "206156866" is less intuitive than "twitter", it would be great to support this via extending the search capabilities or by adding an additional column. ✅ 104 rows where repo contains "twitter" ❌ [104 rows where repo contains "206156866" sorted by starred_at descending](https://github-to-sqlite.dogsheep.net/github/stars?repo__contains=206156866&_sort_desc=starred_at) ## Appendix ``` CREATE TABLE [stars] ( [user] INTEGER REFERENCES [users]([id]), [repo] INTEGER REFERENCES [repos]([id]), [starred_at] TEXT, PRIMARY KEY ([user], [repo]) ); CREATE INDEX [idx_stars_repo] ON [stars] ([repo]); CREATE INDEX [idx_stars_user] ON [stars] ([user]); ``` github-to-sqlite 207052882 issue     {"url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/38/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed
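Pending a denormalized column, the join the report implies is straightforward against the appendix schema (the `full_name` column on `repos` is an assumption):

```sql
select repos.full_name, stars.starred_at
from stars
join repos on stars.repo = repos.id
where repos.full_name like '%twitter%'
order by stars.starred_at desc;
```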
611835285 MDU6SXNzdWU2MTE4MzUyODU= 752 Non-utf8 encoding in exceptionhandlers and custom-pages clausjuhl 2181410 closed 0     1 2020-05-04T12:24:42Z 2020-05-04T17:42:20Z 2020-05-04T17:42:20Z NONE   Hi Simon. Whenever a response is not piped through a router-view, the template is encoded in latin-1 (I think). This is especially a problem (for me) with the new custom_pages-functionality, but also problematic with the 404- and 500-handlers. Thanks! datasette 107914493 issue     {"url": "https://api.github.com/repos/simonw/datasette/issues/752/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}   completed


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT,
   [active_lock_reason] TEXT,
   [performed_via_github_app] TEXT,
   [reactions] TEXT,
   [draft] INTEGER,
   [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo]
                ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
                ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
                ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
                ON [issues] ([user]);
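The foreign keys above support joins back to `users`, `repos`, and `milestones`; a hedged example query (the `login` column on `users` is an assumption):

```sql
select issues.number, issues.title, users.login
from issues
join users on issues.user = users.id
where issues.state = 'open'
order by issues.created_at desc;
```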