issues
22 rows where user = 25778
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | pull_request | body | repo | type | active_lock_reason | performed_via_github_app | reactions | draft | state_reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
817989436 | MDU6SXNzdWU4MTc5ODk0MzY= | 242 | Async support | eyeseast 25778 | open | 0 | 13 | 2021-02-27T18:29:38Z | 2021-10-28T14:37:56Z | CONTRIBUTOR | Following our conversation last week, I want to note this here before I forget. I've had a couple of situations where I'd like to do a bunch of updates in an async event loop, but I run into SQLite's issues with concurrent writes. This feels like something sqlite-utils could help with. PeeWee ORM has a [SQLite write queue](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#sqliteq) that might be a good model. It's using threads or gevent, but I _think_ that approach would translate well enough to asyncio. (A sketch of this approach appears after the table below.) Happy to help with this, too. | sqlite-utils 140912432 | issue | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/242/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | ||||||||
907642546 | MDU6SXNzdWU5MDc2NDI1NDY= | 264 | Supporting additional output formats, like GeoJSON | eyeseast 25778 | closed | 0 | 3 | 2021-05-31T18:03:32Z | 2021-06-03T05:12:21Z | 2021-06-03T05:12:21Z | CONTRIBUTOR | I have a project going where it would be useful to do some spatial processing in SQLite (instead of large files) and then output GeoJSON. So my workflow would be something like this: 1. Read Shapefiles, GeoJSON, CSVs into a SQLite database 2. Join, filter, prune as needed 3. Export GeoJSON for just the stuff I need at that moment, while still having a database of things that will be useful later I'm wondering if this is worth adding to SQLite-utils itself (GeoJSON, at least), or if it's better to make a counterpart to the ecosystem of `*-to-sqlite` tools, say a suite of `sqlite-to-*` things. Or would it be crazy to have a plugin system? | sqlite-utils 140912432 | issue | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/264/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
913017577 | MDU6SXNzdWU5MTMwMTc1Nzc= | 1365 | pathlib.Path breaks internal schema | eyeseast 25778 | closed | 0 | 1 | 2021-06-07T01:40:37Z | 2021-06-21T15:57:39Z | 2021-06-21T15:57:39Z | CONTRIBUTOR | Ran into an issue while trying to build a plugin to render GeoJSON. I'm using pytest's `tmp_path` fixture, which is a `pathlib.Path`, to get a temporary database path. I was getting a weird error involving writes, but I was doing reads. Turns out it's the internal database trying to insert a `Path` where it wants a string. My test looked like this: ```python @pytest.mark.asyncio async def test_render_feature_collection(tmp_path): database = tmp_path / "test.db" datasette = Datasette([database]) # this will break with a path await datasette.refresh_schemas() # build a url url = datasette.urls.table(database.stem, TABLE_NAME, format="geojson") response = await datasette.client.get(url) fc = response.json() assert 200 == response.status_code ``` I only ran into this while running tests, because passing in database paths from the CLI uses strings, but it's a weird error and probably something other people have run into. The fix is easy enough: Convert the path to a string and everything works. So this: ```python @pytest.mark.asyncio async def test_render_feature_collection(tmp_path): database = tmp_path / "test.db" datasette = Datasette([str(database)]) # this is fine now await datasette.refresh_schemas() ``` This could (probably, haven't tested) be fixed [here](https://github.com/simonw/datasette/blob/03ec71193b9545536898a4bc7493274fec48bdd7/datasette/app.py#L357) by calling `str(db.path)` or by doing that conversion earlier. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/1365/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
914130834 | MDExOlB1bGxSZXF1ZXN0NjY0MDcyMDQ2 | 1370 | Ensure db.path is a string before trying to insert into internal database | eyeseast 25778 | closed | 0 | 2 | 2021-06-08T01:16:48Z | 2021-06-21T15:57:39Z | 2021-06-21T15:57:39Z | CONTRIBUTOR | simonw/datasette/pulls/1370 | Fixes #1365 This is the simplest possible fix, with a test that will fail without it. There are a bunch of places where `db.path` is getting converted to and from a `Path` type, so this fix errs on the side of calling `str(db.path)` right before it's inserted. | datasette 107914493 | pull | {"url": "https://api.github.com/repos/simonw/datasette/issues/1370/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 | |||||
1102899312 | PR_kwDOCGYnMM4w_p22 | 385 | Add new spatialite helper methods | eyeseast 25778 | closed | 0 | 16 | 2022-01-14T03:57:30Z | 2022-02-05T00:04:26Z | 2022-02-04T05:55:10Z | CONTRIBUTOR | simonw/sqlite-utils/pulls/385 | Refs #79 This PR adds three new Spatialite-related methods to Database and Table: - `Database.init_spatialite` loads the Spatialite extension and initializes it - `Table.add_geometry_column` adds a geometry column - `Table.create_spatial_index` creates a spatial index Has tests and documentation. Feedback very welcome. | sqlite-utils 140912432 | pull | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/385/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 | |||||
1105916061 | I_kwDOBm6k_c5B6vCd | 1601 | Add KNN and data_licenses to hidden tables list | eyeseast 25778 | closed | 0 | 5 | 2022-01-17T14:19:57Z | 2022-01-20T21:29:44Z | 2022-01-20T04:38:54Z | CONTRIBUTOR | They're generated by Spatialite and not very interesting in most cases. ![Screen Shot 2022-01-17 at 9 17 31 AM](https://user-images.githubusercontent.com/25778/149786443-7832363c-2a2c-4533-9edc-048cd00e4a08.png) | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/1601/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
1108671952 | I_kwDOBm6k_c5CFP3Q | 1605 | Scripted exports | eyeseast 25778 | open | 0 | 10 | 2022-01-19T23:45:55Z | 2022-11-30T15:06:38Z | CONTRIBUTOR | Posting this while I'm thinking about it: I mentioned at the end of [this thread](https://twitter.com/eyeseast/status/1483893011658551299) that I'm usually doing `datasette --get` to export canned queries. I used to use a tool called [datafreeze](https://github.com/pudo/datafreeze) to do scripted exports, but that project looks dead now. The ergonomics of it are pretty nice, though, and the `Freezefile.yml` structure is actually not too far from Datasette's canned queries. This is related to the idea for `datasette query` (#1356) but I think it's a distinct feature. It's most likely a plugin, but I want to raise it here because it's probably something other people have thought about. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/1605/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | ||||||||
1124237013 | I_kwDOCGYnMM5DAn7V | 398 | Add SpatiaLite helpers to CLI | eyeseast 25778 | closed | 0 | 9 | 2022-02-04T14:01:28Z | 2022-02-16T01:02:29Z | 2022-02-16T00:58:07Z | CONTRIBUTOR | Now that #385 is merged, add CLI versions of those methods. ```sh # init spatialite sqlite-utils init-spatialite database.db # or maybe/also sqlite-utils create database.db --enable-wal --spatialite # add geometry columns # needs a database, table, geometry column name, type, with optional SRID and not-null # this needs to create a table if it doesn't already exist sqlite-utils add-geometry-column database.db table-name geometry --srid 4326 --not-null # spatial index an existing table/column sqlite-utils create-spatial-index database.db table-name geometry ``` Should be mostly straightforward. The one thing worth highlighting in docs is that geometry columns can only be added to existing tables. Trying to add a geometry column to a table that doesn't exist yet might mean you have a schema like `{"rowid": int, "geometry": bytes}`. Might be worth nudging people to explicitly create a table first, then add geometry columns. | sqlite-utils 140912432 | issue | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/398/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
1138948786 | PR_kwDOCGYnMM4y3yW0 | 407 | Add SpatiaLite helpers to CLI | eyeseast 25778 | closed | 0 | 7 | 2022-02-15T16:50:17Z | 2022-02-16T01:49:40Z | 2022-02-16T00:58:08Z | CONTRIBUTOR | simonw/sqlite-utils/pulls/407 | Closes #398 This adds SpatiaLite helpers to the CLI. ```sh # init spatialite when creating a database sqlite-utils create database.db --enable-wal --init-spatialite # add geometry columns # needs a database, table, geometry column name, type, with optional SRID and not-null # this will throw an error if the table doesn't already exist sqlite-utils add-geometry-column database.db table-name geometry --srid 4326 --not-null # spatial index an existing table/column # this will throw an error if the table and column don't exist sqlite-utils create-spatial-index database.db table-name geometry ``` Docs and tests are included. | sqlite-utils 140912432 | pull | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/407/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 | |||||
1160034488 | I_kwDOCGYnMM5FJLi4 | 411 | Support for generated columns | eyeseast 25778 | open | 0 | 8 | 2022-03-04T20:41:33Z | 2022-03-11T22:32:43Z | CONTRIBUTOR | This is a fairly new feature -- SQLite version 3.31.0 (2020-01-22) -- that I, admittedly, haven't gotten to work yet. But it looks _incredibly_ useful: https://dgl.cx/2020/06/sqlite-json-support I'm not sure if this is an option on `add-column` or a separate command like `add-generated-column`. Either way, it needs an argument to populate it. It could be something like this: ```sh sqlite-utils add-column data.db table-name generated --as 'json_extract(data, "$.field")' --virtual ``` More here: https://www.sqlite.org/gencol.html (A sketch of the underlying SQL appears after the table below.) | sqlite-utils 140912432 | issue | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/411/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | ||||||||
1178456794 | I_kwDOCGYnMM5GPdLa | 418 | Add generated files to .gitignore | eyeseast 25778 | closed | 0 | 0 | 2022-03-23T17:48:12Z | 2022-03-24T21:01:44Z | 2022-03-24T21:01:44Z | CONTRIBUTOR | I end up with these in my local directory: `.hypothesis/`, `Pipfile`, `Pipfile.lock`, `pyproject.toml`. Might as well gitignore them. | sqlite-utils 140912432 | issue | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/418/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
1178484369 | PR_kwDOCGYnMM405rPe | 419 | Ignore common generated files | eyeseast 25778 | closed | 0 | 1 | 2022-03-23T18:06:22Z | 2022-03-24T21:01:44Z | 2022-03-24T21:01:44Z | CONTRIBUTOR | simonw/sqlite-utils/pulls/419 | Closes #418 This adds four files to `.gitignore`: `.hypothesis/`, `Pipfile`, `Pipfile.lock`, `pyproject.toml`. Those are all generated in the course of development and testing. | sqlite-utils 140912432 | pull | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/419/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 | |||||
1193090967 | I_kwDOBm6k_c5HHR-X | 1699 | Proposal: datasette query | eyeseast 25778 | open | 0 | 6 | 2022-04-05T12:36:43Z | 2022-04-11T01:32:12Z | CONTRIBUTOR | I started sketching out a plugin to add a `datasette query` subcommand to export data from the command line. This is based on discussions in #1356 and #1605. Before I get too far down this rabbit hole, I figure it's worth getting some feedback here (unless this should happen in `Discussions`). Here's what I'm thinking: At its most basic, it will write the results of a query to STDOUT. ```sh datasette query -d data.db 'select * from data' > results.json ``` This isn't much improvement over using [sqlite-utils](https://github.com/simonw/sqlite-utils). To make better use of datasette and its ecosystem, run `datasette query` using a canned query defined in a `metadata.yml` file. For example, using the metadata file from [alltheplaces-datasette](https://github.com/eyeseast/alltheplaces-datasette/blob/main/metadata.yml): ```sh cd alltheplaces-datasette datasette query -d alltheplaces.db -m metadata.yml count_by_spider ``` That query would be good to get as CSV, and we can auto-discover metadata and databases in the current directory: ```sh cd alltheplaces-datasette datasette query count_by_spider -f csv ``` In this case, `count_by_spider` is a canned query defined on the `alltheplaces` database. If the same query is defined on multiple databases or it's otherwise unclear which database `query` should use, pass the `-d` or `--database` option. If a query takes parameters, I can pass them in at runtime, using the `--param` or `-p` option: ```sh datasette query -d data.db -p value something 'select * from neighborhoods where some_column = :value' ``` I'm very interested in feedback on this, including whether it should be a plugin or in Datasette core. (I don't have a strong opinion about this, but I'm prototyping it as a plugin to start.) | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/1699/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | ||||||||
1292368833 | I_kwDOBm6k_c5NB_vB | 1764 | Keep track of config_dir in directory mode (for plugins) | eyeseast 25778 | closed | 0 | 0 | 2022-07-03T16:57:49Z | 2022-07-18T01:12:45Z | 2022-07-18T01:12:45Z | CONTRIBUTOR | I started working on using `config_dir` with my [datasette-query-files plugin](https://github.com/eyeseast/datasette-query-files) and realized Datasette doesn't actually hold onto the `config_dir` argument. It gets used in `__init__` but then forgotten. It would be nice to be able to use it in plugins, though. Here's the reference issue: https://github.com/eyeseast/datasette-query-files/issues/4 | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/1764/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | completed | ||||||
1292377561 | PR_kwDOBm6k_c46wdOW | 1766 | Keep track of config_dir | eyeseast 25778 | closed | 0 | 2 | 2022-07-03T17:37:02Z | 2022-07-18T01:12:45Z | 2022-07-18T01:12:45Z | CONTRIBUTOR | simonw/datasette/pulls/1766 | Closes #1764 Small change that adds `self.config_dir = config_dir` to `Datasette.__init__`. This will let plugins also use `config_dir`, if available. | datasette 107914493 | pull | {"url": "https://api.github.com/repos/simonw/datasette/issues/1766/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 | |||||
1439009231 | I_kwDOBm6k_c5VxYnP | 1884 | Exclude virtual tables from datasette inspect | eyeseast 25778 | open | 0 | 6 | 2022-11-07T21:26:01Z | 2022-11-21T04:40:56Z | CONTRIBUTOR | Ran `inspect` on a spatialite database and got these warnings: ``` ERROR: conn=<sqlite3.Connection object at 0x119e46110>, sql = 'select count(*) from [SpatialIndex]', params = None: no such module: VirtualSpatialIndex ERROR: conn=<sqlite3.Connection object at 0x119e46110>, sql = 'select count(*) from [ElementaryGeometries]', params = None: no such module: VirtualElementary ERROR: conn=<sqlite3.Connection object at 0x119e46110>, sql = 'select count(*) from [KNN]', params = None: no such module: VirtualKNN ``` It still worked, but we probably want to catch this. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/1884/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | ||||||||
1469796454 | I_kwDOBm6k_c5Xm1Bm | 1920 | Document Datasette.metadata() method | eyeseast 25778 | open | 0 | 0 | 2022-11-30T15:10:36Z | 2022-11-30T15:10:36Z | CONTRIBUTOR | Code is here: https://github.com/simonw/datasette/blob/main/datasette/app.py#L503 This will be the official way to access metadata from plugins. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/1920/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | ||||||||
1469821027 | I_kwDOBm6k_c5Xm7Bj | 1921 | Document methods to get canned queries | eyeseast 25778 | open | 0 | 0 | 2022-11-30T15:26:33Z | 2022-11-30T23:34:21Z | CONTRIBUTOR | Two methods will get canned queries for a Datasette instance: [`Datasette.get_canned_queries`](https://github.com/simonw/datasette/blob/main/datasette/app.py#L575) will return all canned queries for a database that an `actor` can see. [`Datasette.get_canned_query`](https://github.com/simonw/datasette/blob/main/datasette/app.py#L592) will return a single canned query by name. | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/1921/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | ||||||||
1522778923 | I_kwDOBm6k_c5aw8Mr | 1978 | Document datasette.urls.row and row_blob | eyeseast 25778 | closed | 0 | 2 | 2023-01-06T15:45:51Z | 2023-01-09T14:30:00Z | 2023-01-09T14:30:00Z | CONTRIBUTOR | These are in the codebase but not in documentation. I think everything else in this class is documented. ```python class Urls: ... def row(self, database, table, row_path, format=None): path = f"{self.table(database, table)}/{row_path}" if format is not None: path = path_with_format(path=path, format=format) return PrefixedUrlString(path) def row_blob(self, database, table, row_path, column): return self.table(database, table) + "/{}.blob?_blob_column={}".format( row_path, urllib.parse.quote_plus(column) ) ``` | datasette 107914493 | issue | {"url": "https://api.github.com/repos/simonw/datasette/issues/1978/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | not_planned | ||||||
1538342965 | PR_kwDOBm6k_c5HpNYo | 1996 | Document custom json encoder | eyeseast 25778 | open | 0 | 1 | 2023-01-18T16:54:14Z | 2023-01-19T12:55:57Z | CONTRIBUTOR | simonw/datasette/pulls/1996 | Closes #1983 All the documentation is here. Edits welcome. | datasette 107914493 | pull | {"url": "https://api.github.com/repos/simonw/datasette/issues/1996/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 | ||||||
1620164673 | PR_kwDOCGYnMM5L08O8 | 531 | Add paths for homebrew on Apple silicon | eyeseast 25778 | closed | 0 | 4 | 2023-03-11T22:27:52Z | 2023-04-09T01:49:44Z | 2023-04-09T01:49:43Z | CONTRIBUTOR | simonw/sqlite-utils/pulls/531 | This also passes in the extension path when specified in GIS methods. Wherever we know an extension path, we use `db.init_spatialite(find_spatialite() or load_extension)`. | sqlite-utils 140912432 | pull | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/531/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 | |||||
1659525418 | PR_kwDOCGYnMM5N35VZ | 536 | Add paths for homebrew on Apple silicon | eyeseast 25778 | closed | 0 | 1 | 2023-04-08T13:34:21Z | 2023-04-13T01:44:43Z | 2023-04-13T01:44:43Z | CONTRIBUTOR | simonw/sqlite-utils/pulls/536 | Does what it says and nothing else. This is the same set of paths as Datasette uses. | sqlite-utils 140912432 | pull | {"url": "https://api.github.com/repos/simonw/sqlite-utils/issues/536/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | 0 |
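On the async support idea in issue #242 above: a minimal sketch of what a SQLite write queue could look like under asyncio. This is not sqlite-utils API — the demo table and function names are hypothetical, and a production version would move the blocking sqlite3 calls onto a thread.

```python
import asyncio
import sqlite3


async def write_worker(db_path: str, queue: asyncio.Queue) -> None:
    # Single writer: drains queued statements so concurrent coroutines
    # never race on the connection. (Real code would wrap the blocking
    # sqlite3 calls with asyncio.to_thread or a thread pool.)
    conn = sqlite3.connect(db_path)
    conn.execute("create table if not exists t (n integer)")  # demo table
    while True:
        sql, params = await queue.get()
        conn.execute(sql, params)
        conn.commit()
        queue.task_done()


async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    worker = asyncio.create_task(write_worker("data.db", queue))
    # Any number of coroutines can enqueue writes concurrently;
    # only the worker ever touches the connection.
    for i in range(10):
        await queue.put(("insert into t (n) values (?)", (i,)))
    await queue.join()  # block until every queued write has landed
    worker.cancel()


asyncio.run(main())
```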
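Likewise for generated columns (issue #411 above), the underlying SQL is small. Here is a sketch using the standard library sqlite3 module, with hypothetical table and column names; note that `ALTER TABLE ... ADD COLUMN` only accepts the VIRTUAL flavor, and the feature requires SQLite 3.31.0 (2020-01-22) or newer.

```python
import sqlite3

conn = sqlite3.connect("data.db")
# Demo table holding a JSON blob in a text column
conn.execute(
    "create table if not exists [table-name] (id integer primary key, data text)"
)
# Add a virtual generated column extracted from the JSON;
# STORED generated columns cannot be added via ALTER TABLE.
conn.execute(
    """
    alter table [table-name]
      add column [generated] text
      generated always as (json_extract(data, '$.field')) virtual
    """
)
conn.commit()
```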
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT,
   [active_lock_reason] TEXT,
   [performed_via_github_app] TEXT,
   [reactions] TEXT,
   [draft] INTEGER,
   [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
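As a quick sketch of working with this schema from Python using the sqlite-utils query API — `github.db` is a hypothetical filename standing in for wherever this export lives:

```python
import sqlite_utils

db = sqlite_utils.Database("github.db")  # hypothetical path to this export

# Count this user's issues and pull requests by state,
# mirroring the "user = 25778" filter at the top of this page
for row in db.query(
    "select state, count(*) as n from issues where user = :user group by state",
    {"user": 25778},
):
    print(row["state"], row["n"])
```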