pull_requests

9 rows where user = 15178711


number: 1789 · title: Add new entrypoint option to `--load-extension`
id: 1031503844 · node_id: PR_kwDOBm6k_c49e3_k · state: closed · locked: 0 · draft: 0
user: asg017 (15178711) · author_association: CONTRIBUTOR · repo: datasette (107914493)
created_at: 2022-08-19T19:27:47Z · updated_at: 2022-08-23T18:42:52Z · closed_at: 2022-08-23T18:34:30Z · merged_at: 2022-08-23T18:34:30Z
merge_commit_sha: 1d64c9a8dac45b9a3452acf8e76dfadea2b0bc49 · head: 5a2a05f2cea7b55b1c3bb1df043c0a454eca6563 · base: 663ac431fe7202c85967568d82b2034f92b9aa43
assignee: none · milestone: none · merged_by: none · auto_merge: none
url: https://github.com/simonw/datasette/pull/1789
body:

Closes #1784

The `--load-extension` flag can now accept an optional "entrypoint" value, to specify which entrypoint SQLite should load from the given extension.

```bash
# loads the default entrypoint, like before
datasette data.db --load-extension ext

# loads the extension with the "sqlite3_foo_init" entrypoint
datasette data.db --load-extension ext:sqlite3_foo_init

# loads the extension with the "sqlite3_bar_init" entrypoint
datasette data.db --load-extension ext:sqlite3_bar_init
```

For testing, I added a small SQLite extension in C at `tests/ext.c`. If it has been compiled, pytest will run the unit tests in `test_load_extensions.py` to verify that Datasette loads extensions correctly (and loads the correct entrypoints). Compiling the extension requires a C compiler; I compiled it on my Mac with:

```
gcc ext.c -I path/to/sqlite -fPIC -shared -o ext.dylib
```

where `path/to/sqlite` is a directory that contains the SQLite amalgamation header files.

Re documentation: I added a bit to the help text for `--load-extension` (which I believe gets picked up by the documentation automatically?), and the existing extension documentation is SpatiaLite-specific. Let me know if a new extensions documentation page would be helpful!

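A minimal sketch of the `ext:entrypoint` flag syntax described in PR #1789 above, assuming a plain split on the first `:` (illustrative only; the real parsing lives inside Datasette and the function name here is hypothetical):

```python
# Hypothetical helper illustrating the "ext:sqlite3_foo_init" flag syntax above.
# Not Datasette's actual implementation - just a sketch of the parsing idea.
from typing import Optional, Tuple

def parse_load_extension(value: str) -> Tuple[str, Optional[str]]:
    """Split a --load-extension value into (path, optional entrypoint)."""
    path, sep, entrypoint = value.partition(":")
    return (path, entrypoint) if sep else (value, None)

print(parse_load_extension("ext"))                   # ('ext', None)
print(parse_load_extension("ext:sqlite3_foo_init"))  # ('ext', 'sqlite3_foo_init')
```
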
number: 573 · title: feat: Implement a prepare_connection plugin hook
id: 1445438054 · node_id: PR_kwDOCGYnMM5WJ6Jm · state: closed · locked: 0 · draft: 0
user: asg017 (15178711) · author_association: CONTRIBUTOR · repo: sqlite-utils (140912432)
created_at: 2023-07-22T22:48:44Z · updated_at: 2023-07-22T22:59:09Z · closed_at: 2023-07-22T22:59:09Z · merged_at: 2023-07-22T22:59:09Z
merge_commit_sha: 3f80a026983d3e634f05a46f2a6da162b5139dd9 · head: faf398fe075f60929337d3cd0f12309fc4229a3c · base: 091c63cfbf7b40e99e2017a3c37619c7689cc447
assignee: none · milestone: none · merged_by: none · auto_merge: none
url: https://github.com/simonw/sqlite-utils/pull/573
body:

Just like the [Datasette prepare_connection hook](https://docs.datasette.io/en/stable/plugin_hooks.html#prepare-connection-conn-database-datasette), this PR adds a similar hook for the `sqlite-utils` plugin system. The sole argument is `conn`, since I don't believe a `database` or `datasette` argument would be relevant here.

I want to do this so I can release `sqlite-utils` plugins for my [SQLite extensions](https://github.com/asg017/sqlite-ecosystem), similar to the Datasette plugins I've released for them.

An example plugin: https://gist.github.com/asg017/d7cdf0d56e2be87efda28cebee27fa3c

```bash
$ sqlite-utils install https://gist.github.com/asg017/d7cdf0d56e2be87efda28cebee27fa3c/archive/5f5ad549a40860787629c69ca120a08c32519e99.zip
$ sqlite-utils memory 'select hello("alex") as response'
[{"response": "Hello, alex!"}]
```

Refs:
- #574

----
:books: Documentation preview :books:: https://sqlite-utils--573.org.readthedocs.build/en/573/

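The gist linked above is a tiny plugin of exactly this shape. A self-contained sketch of what such a plugin looks like with the new hook, assuming it ships as described in PR #573 (the `hello()` SQL function mirrors the example output shown above):

```python
# Sketch of a sqlite-utils plugin using the prepare_connection hook from this PR;
# the hello() SQL function mirrors the `select hello("alex")` example above.
from sqlite_utils import hookimpl

@hookimpl
def prepare_connection(conn):
    # conn is a sqlite3.Connection; register a custom scalar SQL function on it
    conn.create_function("hello", 1, lambda name: f"Hello, {name}!")
```

Once installed with `sqlite-utils install`, `select hello("alex") as response` returns `Hello, alex!`, as in the example above.
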
number: 2149 · title: Start a new `datasette.yaml` configuration file, with settings support
id: 1485078422 · node_id: PR_kwDOBm6k_c5YhH-W · state: closed · locked: 0 · draft: 0
user: asg017 (15178711) · author_association: CONTRIBUTOR · repo: datasette (107914493)
created_at: 2023-08-22T16:24:16Z · updated_at: 2023-08-23T01:26:11Z · closed_at: 2023-08-23T01:26:11Z · merged_at: 2023-08-23T01:26:11Z
merge_commit_sha: 17ec309e14f9c2e90035ba33f2f38ecc5afba2fa · head: db720cd603def51f1d0f074a16d186779a962ea7 · base: 943df09dcca93c3b9861b8c96277a01320db8662
assignee: none · milestone: none · merged_by: none · auto_merge: none
url: https://github.com/simonw/datasette/pull/2149
body:

refs #2093 #2143

This is the first step to implementing the new `datasette.yaml`/`datasette.json` configuration file.

- The old `--config` argument is now back, and is the path to a `datasette.yaml` file. It acts like the `--metadata` flag.
- The old `settings.json` behavior has been removed.
- The `"settings"` key inside `datasette.yaml` defines the same `--settings` flags.
- Values passed in `--settings` will overwrite values in `datasette.yaml`.

Docs for the config file are pretty light; there isn't much to add until we add more config options to the file.

----
:books: Documentation preview :books:: https://datasette--2149.org.readthedocs.build/en/2149/

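A hedged sketch of the precedence rule in the last bullet above - command-line setting values overriding the `"settings"` block read from `datasette.yaml` - where the function, file handling, and example setting name are illustrative rather than Datasette's internals:

```python
# Illustrative only: merge the "settings" block from datasette.yaml with
# values passed on the command line, letting the command-line values win.
import yaml  # PyYAML

def resolve_settings(config_path: str, cli_settings: dict) -> dict:
    with open(config_path) as f:
        config = yaml.safe_load(f) or {}
    settings = dict(config.get("settings") or {})
    settings.update(cli_settings)  # CLI-supplied values overwrite file values
    return settings

# e.g. resolve_settings("datasette.yaml", {"max_returned_rows": 500})
```
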
number: 2151 · title: Test Datasette on multiple SQLite versions
id: 1487124196 · node_id: PR_kwDOBm6k_c5Yo7bk · state: open · locked: 0 · draft: 1
user: asg017 (15178711) · author_association: CONTRIBUTOR · repo: datasette (107914493)
created_at: 2023-08-23T22:42:51Z · updated_at: 2023-08-23T22:58:13Z · closed_at: none · merged_at: none
merge_commit_sha: 3d2c7cbf727c0ca31161a7acb8ea51f1ee7dcd58 · head: b895cd2fb308154de67972c485e54497c006f47e · base: bdf59eb7db42559e538a637bacfe86d39e5d17ca
assignee: none · milestone: none · merged_by: none · auto_merge: none
url: https://github.com/simonw/datasette/pull/2151
body:

still testing, hope it works!

----
:books: Documentation preview :books:: https://datasette--2151.org.readthedocs.build/en/2151/

number: 2162 · title: Add new `--internal internal.db` option, deprecate legacy `_internal` database
id: 1492889894 · node_id: PR_kwDOBm6k_c5Y-7Em · state: closed · locked: 0 · draft: 0
user: asg017 (15178711) · author_association: CONTRIBUTOR · repo: datasette (107914493)
created_at: 2023-08-29T00:05:07Z · updated_at: 2023-08-29T03:24:23Z · closed_at: 2023-08-29T03:24:23Z · merged_at: 2023-08-29T03:24:23Z
merge_commit_sha: 92b8bf38c02465f624ce3f48dcabb0b100c4645d · head: 73489cac8ef8e934e601302fa6594e27b75a382d · base: 2e2825869fc2655b5fcadc743f6f9dec7a49bc65
assignee: none · milestone: none · merged_by: none · auto_merge: none
url: https://github.com/simonw/datasette/pull/2162
body:

refs #2157

This PR adds a new `--internal` option to `datasette serve`. If provided, it is the path to a persistent internal database that Datasette core and Datasette plugins can use to store data, as discussed in the proposal issue.

This PR also removes and deprecates the previous in-memory `_internal` database. Those tables now appear in the `internal` database, with `core_` prefixes (e.g. `tables` in `_internal` is now `core_tables` in `internal`).

## A note on the new `core_` tables

However, one important note about those new `core_` tables: if an `--internal` DB is passed in, those `core_` tables will persist across multiple Datasette instances. This wasn't the case before, since `_internal` was always an in-memory database created from scratch.

I tried making those `core_` tables `TEMP` tables - after all, there's only ever one `internal` DB connection at a time, so I figured it would work. But since we use the `Database()` wrapper for the internal DB, it has two separate connections: a default read-only connection and a write connection that is created when a write operation occurs. That meant the `TEMP` tables would be created by the write connection but not be available in the read-only connection.

So I had a brilliant idea: attach an in-memory named database with `cache=shared`, and create those tables there!

```sql
ATTACH DATABASE 'file:datasette_internal_core?mode=memory&cache=shared' AS core;
```

We'd run this on both the read-only connection and the write connection. That way, those tables would stay in memory, they'd communicate through the `cache=shared` feature, and we'd be good to go. However, I couldn't find an easy way to run an `ATTACH DATABASE` command on the read-only connection.

Using `Database()` as a wrapper for the internal DB is pretty limiting - it's meant for Datasette "data" databases, where we want multiple readers and possibly one write connection at a time. But the internal database doesn't really require that kind of support - I think we…

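A runnable sketch of the `cache=shared` idea floated in PR #2162 above (the approach considered, not necessarily what the PR shipped): two separate connections attach the same named in-memory database, so tables created through one are visible to the other. Connection names and the table layout here are made up for illustration.

```python
# Demonstrates the ATTACH DATABASE ... cache=shared idea from the PR body.
import sqlite3

SHARED = "file:datasette_internal_core?mode=memory&cache=shared"

def attach_core(conn: sqlite3.Connection) -> None:
    conn.execute(f"ATTACH DATABASE '{SHARED}' AS core")

# uri=True so the ATTACH target is interpreted as a URI filename
write_conn = sqlite3.connect(":memory:", uri=True)
attach_core(write_conn)
write_conn.execute("CREATE TABLE core.core_tables (database_name TEXT, name TEXT)")
write_conn.execute("INSERT INTO core.core_tables VALUES ('fixtures', 'facetable')")
write_conn.commit()  # commit so the other connection can see the row

read_conn = sqlite3.connect(":memory:", uri=True)
attach_core(read_conn)
print(read_conn.execute("SELECT * FROM core.core_tables").fetchall())
# [('fixtures', 'facetable')] - shared for as long as write_conn stays open
```
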
number: 2174 · title: Use $DATASETTE_INTERNAL in absence of --internal
id: 1504915653 · node_id: PR_kwDOBm6k_c5ZszDF · state: open · locked: 0 · draft: 0
user: asg017 (15178711) · author_association: CONTRIBUTOR · repo: datasette (107914493)
created_at: 2023-09-06T16:07:15Z · updated_at: 2023-09-08T00:46:13Z · closed_at: none · merged_at: none
merge_commit_sha: 0fc2896ffc5b49adba967a3d0ab8ac9ca119ba0e · head: d75b51950f6836d6e5a58accb48b1d7687dbdd1c · base: 05707aa16b5c6c39fbe48b3176b85a8ffe493938
assignee: none · milestone: none · merged_by: none · auto_merge: none
url: https://github.com/simonw/datasette/pull/2174
body:

Refs #2157, specifically [this comment](https://github.com/simonw/datasette/issues/2157#issuecomment-1700291967).

Passing in `--internal my_internal.db` over and over again can get repetitive. This PR adds a new configurable env variable, `DATASETTE_INTERNAL_DB_PATH`. If it's defined, it is used as the path to the internal database. Users can still override this behavior by passing in their own `--internal internal.db` flag.

In draft mode for now; needs tests and documentation.

Side note: maybe we can have a section in the docs that lists all the "configuration environment variables" that Datasette respects? I did a quick grep and found:

- `DATASETTE_LOAD_PLUGINS`
- `DATASETTE_SECRETS`

----
:books: Documentation preview :books:: https://datasette--2174.org.readthedocs.build/en/2174/

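A sketch of the env-var fallback using click's `envvar=` support, which Datasette's CLI is built on. The option wiring is illustrative, not the PR's code, and the variable name is an assumption (the PR title says `$DATASETTE_INTERNAL` while the body says `DATASETTE_INTERNAL_DB_PATH`):

```python
# Illustrative click option showing an environment-variable fallback for --internal.
import click

@click.command()
@click.option(
    "--internal",
    type=click.Path(),
    envvar="DATASETTE_INTERNAL",  # used only when the flag is not passed explicitly
    default=None,
    help="Path to a persistent internal database",
)
def serve(internal):
    click.echo(f"internal database: {internal or '(in-memory only)'}")

if __name__ == "__main__":
    serve()
```
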
number: 2183 · title: `datasette.yaml` plugin support
id: 1510964674 · node_id: PR_kwDOBm6k_c5aD33C · state: closed · locked: 0 · draft: 0
user: asg017 (15178711) · author_association: CONTRIBUTOR · repo: datasette (107914493)
created_at: 2023-09-11T20:26:04Z · updated_at: 2023-09-13T21:06:25Z · closed_at: 2023-09-13T21:06:25Z · merged_at: 2023-09-13T21:06:25Z
merge_commit_sha: b2ec8717c3619260a1b535eea20e618bf95aa30b · head: acca3387a18a64439d8ae8f535c856c97605a8a5 · base: a4c96d01b27ce7cd06662a024da3547132a7c412
assignee: none · milestone: none · merged_by: none · auto_merge: none
url: https://github.com/simonw/datasette/pull/2183
body:

Part of #2093

In #2149, we ported `"settings.json"` over into the new `datasette.yaml` config file, with a top-level `"settings"` key. This PR ports plugin configuration over into a top-level `"plugins"` key, as well as nested database/table plugin config.

From now on, no plugin-related configuration is allowed in `metadata.yaml`; it must live in `datasette.yaml` in this new format. This is a pretty significant breaking change. Thankfully, you should be able to copy-paste your legacy plugin key/values into the new `datasette.yaml` format.

An example of what `datasette.yaml` would look like with this new plugin config:

```yaml
plugins:
  datasette-my-plugin:
    config_key: value
databases:
  fixtures:
    plugins:
      datasette-my-plugin:
        config_key: fixtures-db-value
    tables:
      students:
        plugins:
          datasette-my-plugin:
            config_key: fixtures-students-table-value
```

As an additional benefit, this now works with the new `-s` flag:

```bash
datasette --memory -s 'plugins.datasette-my-plugin.config_key' new_value
```

Marked as a "Draft" right now until I add better documentation. We also should have a plan for the next alpha release to document and publicize this change, especially for plugin authors (since their docs will have to change to say `datasette.yaml` instead of `metadata.yaml`).

----
:books: Documentation preview :books:: https://datasette--2183.org.readthedocs.build/en/2183/

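For the plugin-author side of PR #2183 above, a rough sketch of how a plugin could read the nested configuration from the YAML example via Datasette's documented `datasette.plugin_config()` helper. The plugin name and `config_key` come from the example; the choice of hook and the returned variable name are illustrative assumptions:

```python
# Sketch: read per-instance / per-database / per-table config for the example
# plugin name used above. Hook choice is illustrative, not part of the PR.
from datasette import hookimpl

@hookimpl
def extra_template_vars(datasette, database, table):
    config = datasette.plugin_config(
        "datasette-my-plugin", database=database, table=table
    ) or {}
    return {"my_plugin_config_key": config.get("config_key")}
```
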
number: 2190 · title: Raise an exception if a "plugins" block exists in metadata.json
id: 1519993584 · node_id: PR_kwDOBm6k_c5amULw · state: closed · locked: 0 · draft: 0
user: asg017 (15178711) · author_association: CONTRIBUTOR · repo: datasette (107914493)
created_at: 2023-09-18T18:08:56Z · updated_at: 2023-10-12T16:20:51Z · closed_at: 2023-10-12T16:20:51Z · merged_at: 2023-10-12T16:20:51Z
merge_commit_sha: 3d6d1e3050b8e50fac40ec090672d8a95fa8e06c · head: fc7dbe0d8ac3e368b6c335d2ce8abe780f36dbd6 · base: 6ed7908580fa2ba9297c3225d85c56f8b08b9937
assignee: none · milestone: none · merged_by: none · auto_merge: none
url: https://github.com/simonw/datasette/pull/2190
body:

refs #2183 #2093

From [this comment](https://github.com/simonw/datasette/pull/2183#issuecomment-1714699724) in #2183: If a `"plugins"` block appears in `metadata.json`, it means that a user hasn't migrated over their plugin configuration from `metadata.json` to `datasette.yaml`, which is a breaking change in Datasette 1.0. This PR will ensure that an error is raised whenever that happens.

----
:books: Documentation preview :books:: https://datasette--2190.org.readthedocs.build/en/2190/

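A minimal sketch of the guard PR #2190 describes; the real check and exception type live inside Datasette, so the function name, exception class, and message here are illustrative only:

```python
# Illustrative guard: refuse to proceed if legacy plugin configuration is
# still in metadata.json / metadata.yaml rather than datasette.yaml.
def check_metadata_has_no_plugins(metadata: dict) -> None:
    if metadata and "plugins" in metadata:
        raise ValueError(
            "Datasette no longer accepts plugin configuration in metadata. "
            "Move the 'plugins' block to datasette.yaml instead."
        )
```
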
number: 2191 · title: Move `permissions`, `allow` blocks, canned queries and more out of `metadata.yaml` and into `datasette.yaml`
id: 1520248889 · node_id: PR_kwDOBm6k_c5anSg5 · state: closed · locked: 0 · draft: 0
user: asg017 (15178711) · author_association: CONTRIBUTOR · repo: datasette (107914493)
created_at: 2023-09-18T21:21:16Z · updated_at: 2023-10-12T16:16:38Z · closed_at: 2023-10-12T16:16:38Z · merged_at: 2023-10-12T16:16:38Z
merge_commit_sha: 35deaabcb105903790d18710a26e77545f6852ce · head: 18b48f879b68d1e80e3adbae056710a6238b16bb · base: 6ed7908580fa2ba9297c3225d85c56f8b08b9937
assignee: none · milestone: none · merged_by: none · auto_merge: none
url: https://github.com/simonw/datasette/pull/2191
body:

The PR moves the following fields from `metadata.yaml` to `datasette.yaml`:

```
permissions
allow
allow_sql
queries
extra_css_urls
extra_js_urls
```

This is a significant breaking change that users will need to upgrade their `metadata.yaml` files for. But the format/locations are similar to the previous version, so it shouldn't be too difficult to upgrade.

One note: I'm still working on the Configuration docs, specifically the "reference" section. Though it's pretty small, the rest is ready to review.

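As a rough illustration of the migration PR #2191 implies, a hypothetical helper that splits a legacy combined `metadata.yaml` dict into the top-level keys that now belong in `datasette.yaml` versus the ones that remain metadata. The key list is taken from the PR body; real upgrades would also need to handle the nested `databases:`/`tables:` versions of these keys, which this sketch ignores:

```python
# Hypothetical top-level split between config (datasette.yaml) and metadata.
# Key list comes from the PR body above; nested keys are not handled here.
CONFIG_KEYS = {
    "permissions", "allow", "allow_sql",
    "queries", "extra_css_urls", "extra_js_urls",
}

def split_legacy_metadata(legacy: dict) -> tuple[dict, dict]:
    config = {k: v for k, v in legacy.items() if k in CONFIG_KEYS}
    metadata = {k: v for k, v in legacy.items() if k not in CONFIG_KEYS}
    return config, metadata
```
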


CREATE TABLE [pull_requests] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [state] TEXT,
   [locked] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [body] TEXT,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [merged_at] TEXT,
   [merge_commit_sha] TEXT,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [draft] INTEGER,
   [head] TEXT,
   [base] TEXT,
   [author_association] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [url] TEXT,
   [merged_by] INTEGER REFERENCES [users]([id]),
   [auto_merge] TEXT
);
CREATE INDEX [idx_pull_requests_merged_by]
    ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo]
    ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone]
    ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee]
    ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user]
    ON [pull_requests] ([user]);
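
For reference, a sketch of the filter behind "9 rows where user = 15178711" above, run directly with Python's sqlite3 against the SQLite file that holds this table (the `github.db` path is an assumption):

```python
# Re-run the page's filter directly against the SQLite file (path assumed).
import sqlite3

conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row
rows = conn.execute(
    "SELECT number, state, title, url FROM pull_requests WHERE user = ? ORDER BY id",
    (15178711,),
).fetchall()
for row in rows:
    print(f"#{row['number']} [{row['state']}] {row['title']} -> {row['url']}")
```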