pull_requests_fts
560 rows sorted by body
Link | rowid | title | body ▼ | pull_requests_fts | rank
---|---|---|---|---|---
727390835 | 727390835 | Correct naming of tool in readme | | 10 | |
753513062 | 753513062 | Fix compatibility with Python 3.10 | | 10 | |
754942128 | 754942128 | GitHub Actions: Add Python 3.10 to the tests | | 10 | |
757797315 | 757797315 | Test against Python 3.10 | | 10 | |
817942693 | 817942693 | Typo in docs about default redirect status code | | 10 | |
862823026 | 862823026 | Correct spelling mistakes (found with codespell) | | 10 | |
1179812491 | 1179812491 | Archive: Import mute table | | 10 | |
1179812730 | 1179812730 | Archive: Import Twitter Circle data | | 10 | |
1179812838 | 1179812838 | Archive: Fix "ni devices" typo in importer | | 10 | |
1299129869 | 1299129869 | use universal command | | 10 | |
152360740 | 152360740 | :fire: Removes DS_Store | | 10 | |
153432045 | 153432045 | Foreign key information on row and table pages | | 10 | |
157365811 | 157365811 | Upgrade to Sanic 0.7.0 | | 10 | |
182357613 | 182357613 | Fix for plugins in Python 3.5 | | 10 | |
185307407 | 185307407 | ?_shape=array and _timelimit= | | 10 | |
196526861 | 196526861 | Feature/in operator | | 10 | |
206863803 | 206863803 | Bump versions of pytest, pluggy and beautifulsoup4 | | 10 | |
208719043 | 208719043 | Import pysqlite3 if available, closes #360 | | 10 | |
216651317 | 216651317 | fix small doc typo | | 10 | |
232172106 | 232172106 | Bump dependency versions | | 10 | |
241418443 | 241418443 | Fix some regex DeprecationWarnings | | 10 | |
247576942 | 247576942 | Fts5 | | 10 | |
247861419 | 247861419 | Run Travis tests against Python 3.8-dev | | 10 | |
255658112 | 255658112 | Support for numpy types, closes #11 | | 10 | |
261418285 | 261418285 | URL hashing now optional: turn on with --config hash_urls:1 (#418) | | 10 | |
270191084 | 270191084 | ?_where= parameter on table views, closes #429 | | 10 | |
275801463 | 275801463 | Use dist: xenial and python: 3.7 on Travis | | 10 | |
284390197 | 284390197 | Upgrade pytest to 4.6.1 | | 10 | |
284743794 | 284743794 | Fix typo in install step: should be install -e | | 10 | |
285698310 | 285698310 | Test against Python 3.8-dev using Travis | | 10 | |
293992382 | 293992382 | Added asgi_wrapper plugin hook, closes #520 | | 10 | |
293994443 | 293994443 | Switch to ~= dependencies, closes #532 | | 10 | |
298962551 | 298962551 | Fix typos | | 10 | |
300286535 | 300286535 | Implemented table.lookup(...), closes #44 | | 10 | |
301824097 | 301824097 | Fix for too many SQL variables, closes #50 | | 10 | |
327051673 | 327051673 | twitter-to-sqlite import command, refs #4 | | 10 | |
334448258 | 334448258 | Update to latest black | | 10 | |
337847573 | 337847573 | test_insert_upsert_all_empty_list | | 10 | |
337853394 | 337853394 | Release 1.12.1 | | 10 | |
338647378 | 338647378 | Add parkrun-to-sqlite | | 10 | |
339244888 | 339244888 | Bump pint to 0.9 | | 10 | |
346264926 | 346264926 | Run tests against 3.5 too | | 10 | |
365218391 | 365218391 | gcloud run is now GA, s/beta// | | 10 | |
368734500 | 368734500 | -p argument for datasette package, plus tests - refs #661 | | 10 | |
369348084 | 369348084 | New conversions= feature, refs #77 | | 10 | |
370024697 | 370024697 | Add beeminder-to-sqlite | | 10 | |
372273608 | 372273608 | Upgrade to sqlite-utils 2.2.1 | | 10 | |
468377212 | 468377212 | Docs now live at docs.datasette.io | | 10 | |
483175509 | 483175509 | Fix accidental mega long line in docs | | 10 | |
498104830 | 498104830 | Run tests against Python 3.9 | | 10 | |
499603359 | 499603359 | Test against Python 3.9 | | 10 | |
500798091 | 500798091 | Add json_loads and json_dumps jinja2 filters | | 10 | |
505076418 | 505076418 | Add fitbit-to-sqlite | | 10 | |
511005542 | 511005542 | Radical new colour scheme and base styles, courtesy of @natbat | | 10 | |
529090560 | 529090560 | use jsonify_if_need for sql updates | | 10 | |
564172140 | 564172140 | fixing typo in get cli help text | | 10 | |
572209243 | 572209243 | --ssl-keyfile and --ssl-certfile, refs #1221 | | 10 | |
579697833 | 579697833 | fix small typo | | 10 | |
602261092 | 602261092 | Add testres-db tool | | 10 | |
603082280 | 603082280 | Fix little typo | | 10 | |
647552141 | 647552141 | Fix small typo | | 10 | |
655726387 | 655726387 | Test docker platform blair only | | 10 | |
677554929 | 677554929 | Test against Python 3.10-dev | | 10 | |
691707409 | 691707409 | Fix for race condition in refresh_schemas(), closes #1231 | | 10 | |
1143946542 | 1143946542 | Typo in JSON API `Updating a row` documentation | <!-- readthedocs-preview datasette start --> ---- :books: Documentation preview :books:: https://datasette--1930.org.readthedocs.build/en/1930/ <!-- readthedocs-preview datasette end --> | 10 | |
1355563020 | 1355563020 | Datsette gpt plugin | <!-- readthedocs-preview datasette start --> ---- :books: Documentation preview :books:: https://datasette--2076.org.readthedocs.build/en/2076/ <!-- readthedocs-preview datasette end --> | 10 | |
1432754160 | 1432754160 | Make primary key view accessible to render_cell hook | <!-- readthedocs-preview datasette start --> ---- :books: Documentation preview :books:: https://datasette--2100.org.readthedocs.build/en/2100/ <!-- readthedocs-preview datasette end --> | 10 | |
1050417981 | 1050417981 | progressbar for inserts/upserts of all fileformats, closes #485 | <!-- readthedocs-preview sqlite-utils start --> ---- :books: Documentation preview :books:: https://sqlite-utils--486.org.readthedocs.build/en/486/ <!-- readthedocs-preview sqlite-utils end --> | 10 | |
1311438738 | 1311438738 | Support self-referencing FKs in `Table.create` | <!-- readthedocs-preview sqlite-utils start --> ---- :books: Documentation preview :books:: https://sqlite-utils--537.org.readthedocs.build/en/537/ <!-- readthedocs-preview sqlite-utils end --> | 10 | |
445023326 | 445023326 | Add insert --truncate option | Deletes all rows in the table (if it exists) before inserting new rows. SQLite doesn't implement a TRUNCATE TABLE statement but does optimize an unqualified DELETE FROM. This can be handy if you want to refresh the entire contents of a table but a) don't have a PK (so can't use --replace), b) don't want the table to disappear (even briefly) for other connections, and c) have to handle records that used to exist being deleted. Ideally the replacement of rows would appear instantaneous to other connections by putting the DELETE + INSERT in a transaction, but this is very difficult without breaking other code as the current transaction handling is inconsistent and non-systematic. There exists the possibility for the DELETE to succeed but the INSERT to fail, leaving an empty table. This is not much worse, however, than the current possibility of one chunked INSERT succeeding and being committed while the next chunked INSERT fails, leaving a partially complete operation. | 10 | |
152870030 | 152870030 | [WIP] Add publish to heroku support | Refs #90 | 10 | |
1125261188 | 1125261188 | Use DOMContentLoaded instead of load event for CodeMirror initialization | Closes #1894 <!-- readthedocs-preview datasette start --> ---- :books: Documentation preview :books:: https://datasette--1898.org.readthedocs.build/en/1898/ <!-- readthedocs-preview datasette end --> | 10 | |
928210171 | 928210171 | chore: Set permissions for GitHub actions | Restrict the GitHub token permissions only to the required ones; this way, even if the attackers will succeed in compromising your workflow, they won’t be able to do much. - Included permissions for the action. https://github.com/ossf/scorecard/blob/main/docs/checks.md#token-permissions https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions https://docs.github.com/en/actions/using-jobs/assigning-permissions-to-jobs [Keeping your GitHub Actions and workflows secure Part 1: Preventing pwn requests](https://securitylab.github.com/research/github-actions-preventing-pwn-requests/) Signed-off-by: naveen <172697+naveensrinivasan@users.noreply.github.com> | 10 | |
716357982 | 716357982 | Add --merged-by flag to pull-requests sub command | ## Description Proposing a solution to the API limitation for `merged_by` in pull_requests. Specifically the following called out in the readme: ``` Note that the merged_by column on the pull_requests table will only be populated for pull requests that are loaded using the --pull-request option - the GitHub API does not return this field for pull requests that are loaded in bulk. ``` This approach might cause larger repos to hit rate limits called out in https://github.com/dogsheep/github-to-sqlite/issues/51 but seems to work well in the repos I tested and included below. ## Old Behavior - Had to list out the pull-requests individually via multiple `--pull-request` flags ## New Behavior - `--merged-by` flag for getting 'merge_by' information out of pull-requests without having to specify individual PR numbers. # Testing Picking some repo that has more than one merger (datasette only has 1 😉 ) ``` $ github-to-sqlite pull-requests ./github.db opnsense/tools --merged-by $ echo "select id, url, merged_by from pull_requests;" | sqlite3 ./github.db 83533612|https://github.com/opnsense/tools/pull/39|1915288 102632885|https://github.com/opnsense/tools/pull/43|1915288 149114810|https://github.com/opnsense/tools/pull/57|1915288 160394495|https://github.com/opnsense/tools/pull/64|1915288 163308408|https://github.com/opnsense/tools/pull/67|1915288 169723264|https://github.com/opnsense/tools/pull/69|1915288 171381422|https://github.com/opnsense/tools/pull/72|1915288 179938195|https://github.com/opnsense/tools/pull/77|1915288 196233824|https://github.com/opnsense/tools/pull/82|1915288 215289964|https://github.com/opnsense/tools/pull/93| 219696100|https://github.com/opnsense/tools/pull/97|1915288 223664843|https://github.com/opnsense/tools/pull/99| 228446172|https://github.com/opnsense/tools/pull/103|1915288 238930434|https://github.com/opnsense/tools/pull/110|1915288 255507110|https://github.com/opnsense/tools/pull/119|1915288 255980675|https://github.com/opnsense/tools/pull/120… | 10 | |
1299206303 | 1299206303 | feat: Javascript Plugin API (Custom panels, column menu items with JS actions) | ## Motivation - Allow plugins that add data visualizations [`datasette-vega`](https://github.com/simonw/datasette-vega), [`datasette-leaflet`](https://github.com/simonw/datasette-leaflet), and [`datasette-nteract-data-explorer`](https://github.com/hydrosquall/datasette-nteract-data-explorer) to co-exist safely - Standardize APIs / hooks to ease development for new JS plugin developers (better compat with datasette-lite) through standardized DOM selectors, methods for extending the existing Table UI. This has come up as a feature request several times (see research notes for examples) - Discussion w/ @simonw about a general-purpose Datasette JS API ## Changes Summary: Provide 2 new surface areas for Datasette JS plugin developers 1. Custom column header items: <https://a.cl.ly/Kou97wJr> 2. Basic "panels" controlled by buttons: <https://a.cl.ly/rRugWobd> ### User Facing Changes - Allow creating menu items under table header that triggers JS (instead of opening hrefs per the existing [menu_link](https://docs.datasette.io/en/stable/plugin_hooks.html#menu-links-datasette-actor-request) hook). Items can respond to any column metadata provided by the column header (e.g. label). The proof of concept plugins log data to the console, or copy the column name to clipboard. - Allow plugins to register UI elements in a panel controller. The parent component handles switching the visibility of active plugins. - Because native button elements are used, the panel is keyboard-accessible - use tab / shift-tab to cycle through tab options, and `enter` to select. - There's room to improve the styling, but the focus of this PR is on the API rather than the UX. ### (plugin) Developer Facing Changes - Dispatch a `datasette_init` [CustomEvent](https://developer.mozilla.org/en-US/docs/Web/API/CustomEvent/CustomEvent) when the `datasetteManager` is finished loading. - Provide `manager.registerPlugin` API for adding new functionality that coordinates with Datasette lifecycle events. - Provide a `manager.s… | 10 | |
592548103 | 592548103 | Fix: code quality issues | ### Description Hi :wave: I work at [DeepSource](https://deepsource.io), I ran DeepSource analysis on the forked copy of this repo and found some interesting [code quality issues](https://deepsource.io/gh/withshubh/datasette/issues/?category=recommended) in the codebase, opening this PR so you can assess if our platform is right and helpful for you. ### Summary of changes - Replaced ternary syntax with if expression - Removed redundant `None` default - Used `is` to compare type of objects - Iterated dictionary directly - Removed unnecessary lambda expression - Refactored unnecessary `else` / `elif` when `if` block has a `return` statement - Refactored unnecessary `else` / `elif` when `if` block has a `raise` statement - Added .deepsource.toml to continuously analyze and detect code quality issues | 10 | |
469944999 | 469944999 | Document the use of --stop_after with favorites, refs #20 | (I discovered this trawling the issues for how to use --since with favorites) | 10 | |
187668890 | 187668890 | Refactor views | * Split out view classes from main `app.py` * Run [black](https://github.com/ambv/black) against resulting code to apply opinionated source code formatting * Run [isort](https://github.com/timothycrosley/isort) to re-order my imports Refs #256 | 10 | |
1073492809 | 1073492809 | Photo links | * add to `checkin_details` view new column for a calculated photo links * supported multiple links split by newline * create `events` table if there's no events in the history to avoid SQL errors Fixes #9. | 10 | |
655741428 | 655741428 | DRAFT: add test and scan for docker images | **NOTE: I don't think this PR is ready, since the arm/v6 and arm/v7 images are failing pytest due to missing dependencies (gcc and friends). But it's pretty close.** Closes https://github.com/simonw/datasette/issues/1344 . Using a build-matrix for the platforms and [this test](https://github.com/simonw/datasette/issues/1344#issuecomment-849820019), we test all the platforms in parallel. I also threw in container scanning. ### Switch `pip install` to use either tags or commit shas Notably! This also [changes the Dockerfile](https://github.com/blairdrummond/datasette/blob/7fe5315d68e04fce64b5bebf4e2d7feec44f8546/Dockerfile#L20) so that it accepts tags or commit-shas. ``` # It's backwards compatible with tags, but also lets you use shas root@712071df17af:/# pip install git+git://github.com/simonw/datasette.git@0.56 Collecting git+git://github.com/simonw/datasette.git@0.56 Cloning git://github.com/simonw/datasette.git (to revision 0.56) to /tmp/pip-req-build-u6dhm945 Running command git clone -q git://github.com/simonw/datasette.git /tmp/pip-req-build-u6dhm945 Running command git checkout -q af5a7f1c09f6a902bb2a25e8edf39c7034d2e5de Collecting Jinja2<2.12.0,>=2.10.3 Downloading Jinja2-2.11.3-py2.py3-none-any.whl (125 kB) ``` This le… | 10 | |
511868153 | 511868153 | New explicit versioning mechanism | - Remove all references to versioneer - Re-implement versioning to use a static string baked into the repo - Ensure that string is output by `datasette --version` and `/-/versions` Refs #1054 | 10 | |
512545364 | 512545364 | .blob output renderer | - [x] Remove the `/-/...blob/...` route I added in #1040 in place of the new `.blob` renderer URLs - [x] Link to new `.blob` download links on the arbitrary query page (using `_blob_hash=...`) - plus tests for this Closes #1050, Closes #1051 | 10 | |
293962405 | 293962405 | Support cleaner custom templates for rows and tables, closes #521 | - [x] Rename `_rows_and_columns.html` to `_table.html` - [x] Unit test - [x] Documentation | 10 | |
297459797 | 297459797 | .get() method plus support for compound primary keys | - [x] Tests for the `NotFoundError` exception - [x] Documentation for `.get()` method - [x] Support `--pk` multiple times to define CLI compound primary keys - [x] Documentation for compound primary keys | 10 | |
303990683 | 303990683 | Work in progress: m2m() method for creating many-to-many records | - [x] `table.insert({"name": "Barry"}).m2m("tags", lookup={"tag": "Coworker"})` - [x] Explicit table name `.m2m("humans", ..., m2m_table="relationships")` - [x] Automatically use an existing m2m table if a single obvious candidate exists (a table with two foreign keys in the correct directions) - [x] Require the explicit `m2m_table=` argument if multiple candidates for the m2m table exist - [x] Documentation Refs #23 | 10 | |
544923437 | 544923437 | Modernize code to Python 3.6+ | - compact dict and set building - remove redundant parentheses - simplify chained conditions - change method name to lowercase - use triple double quotes for docstrings please feel free to accept/reject any of these independent commits | 10 | |
1243080434 | 1243080434 | Avoid repeating primary key columns if included in _col args | ...while maintaining given order. Fixes #1975 (if I'm understanding correctly). <!-- readthedocs-preview datasette start --> ---- :books: Documentation preview :books:: https://datasette--2026.org.readthedocs.build/en/2026/ <!-- readthedocs-preview datasette end --> | 10 | |
589263297 | 589263297 | Minor type in IP adress | 127.0.01 replaced by 127.0.0.1 | 10 | |
434085235 | 434085235 | Reload support for config_dir mode. | A reference implementation for adding support to reload when datasette is in the config_dir mode. This implementation is flawed since it is watching the entire directory and any changes to the database will reload the server and adding unrelated files to the directory will also reload the server. | 10 | |
187770345 | 187770345 | Add new metadata key persistent_urls which removes the hash from all database urls | Add new metadata key "persistent_urls" which removes the hash from all database urls when set to "true" This PR is just to gauge if this, or something like it, is something you would consider merging? I understand the reason why the substring of the hash is included in the url but there are some use cases where the urls should persist across deployments. For bookmarks for example or for scripts that use the JSON API. This is the initial commit for this feature. Tests and documentation updates to follow. | 10 | |
521287994 | 521287994 | changes to allow for compound foreign keys | Add support for compound foreign keys, as per issue #117 Not sure if this is the right approach. In particular I'm unsure about: - the new `ForeignKey` class, which replaces the namedtuple in order to ensure that `column` and `other_column` are forced into tuples. The class does the job, but doesn't feel very elegant. - I haven't rewritten `guess_foreign_table` to take account of multiple columns, so it just checks for the first column in the foreign key definition. This isn't ideal. - I haven't added any ability to the CLI to add compound foreign keys, it's only in the python API at the moment. The PR also contains a minor related change that columns and tables are always quoted in foreign key definitions. | 10 | |
181600926 | 181600926 | Initial units support | Add support for specifying units for a column in metadata.json and rendering them on display using [pint](https://pint.readthedocs.io/en/latest/). Example table metadata: ```json "license_frequency": { "units": { "frequency": "Hz", "channel_width": "Hz", "height": "m", "antenna_height": "m", "azimuth": "degrees" } } ``` [Example result](https://wtr-api.herokuapp.com/wtr-663ea99/license_frequency/1) This works surprisingly well! I'd like to add support for using units when querying but this is PR is pretty usable as-is. (Pint doesn't seem to support decibels though - it thinks they're decibytes - which is an annoying omission.) (ref ticket #203) | 10 | |
395258687 | 395258687 | Add type conversion for Panda's Timestamp | Add type conversion for Panda's Timestamp, if Panda library is present in system (thanks for this project, I was about to do the same thing from scratch) | 10 | |
152522762 | 152522762 | SQL syntax highlighting with CodeMirror | Addresses #13 Future enhancements could include autocompletion of table and column names, e.g. with ```javascript extraKeys: {"Ctrl-Space": "autocomplete"}, hintOptions: {tables: { users: ["name", "score", "birthDate"], countries: ["name", "population", "size"] }} ``` (see https://codemirror.net/doc/manual.html#addon_sql-hint and source at http://codemirror.net/mode/sql/) | 10 | |
475874493 | 475874493 | Handle case where subsequent records (after first batch) include extra columns | Addresses #145. I think this should do the job. If it meets with your approval I'll update this PR to include an update to the documentation -- I came across this bug while preparing a PR to update the documentation around `batch_size` in any event. | 10 | |
593805804 | 593805804 | FTS quote functionality from datasette | Addresses #246 - this is a bit of a kludge because it doesn't actually *validate* the FTS string, just makes sure that it will not crash when executed, but I figured that building a query parser is a bit out of the scope of sqlite-utils and if you actually want to use the query language, you probably need to parse that yourself. | 10 | |
747742034 | 747742034 | Add support for retrieving teams / members | Adds a method for retrieving all the teams within an organisation and all the members in those teams. The latter is stored as a join table `team_members` between `teams` and `users`. | 10 | |
755729137 | 755729137 | Added instructions for installing plugins via pipx, #1486 | Adds missing instructions for installing plugins via pipx | 10 | |
211860706 | 211860706 | Search all apps during heroku publish | Adds the `-A` option to include apps from all organizations when searching app names for publish. | 10 | |
290971295 | 290971295 | Sort commits using isort, refs #516 | Also added a lint unit test to ensure they stay sorted. #516 | 10 | |
CREATE VIRTUAL TABLE [pull_requests_fts] USING FTS5 ( [title], [body], content=[pull_requests] );
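The schema above defines an FTS5 index backed by an external content table (`content=[pull_requests]`), which is why the export shows a `rank` column: FTS5 computes it per query. A minimal sketch of how such a table is queried, using Python's stdlib `sqlite3` and hypothetical sample rows (the simplified `pull_requests` schema here is an assumption, not the real one):

```python
import sqlite3

# Recreate a simplified version of the schema from the statement above.
conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE [pull_requests] (id INTEGER PRIMARY KEY, title TEXT, body TEXT);
    CREATE VIRTUAL TABLE [pull_requests_fts] USING FTS5 (
        [title], [body], content=[pull_requests]
    );
    """
)
# Hypothetical sample rows for illustration only.
conn.executemany(
    "INSERT INTO pull_requests (id, title, body) VALUES (?, ?, ?)",
    [
        (1, "Fix small typo", "Corrects a typo in the docs"),
        (2, "Add insert --truncate option", "Deletes all rows before inserting"),
    ],
)
# With content= tables the index is not populated automatically; rebuild it
# from the content table (production setups usually use triggers instead).
conn.execute("INSERT INTO pull_requests_fts(pull_requests_fts) VALUES ('rebuild')")

# MATCH runs the full-text query; ORDER BY rank sorts best matches first.
rows = conn.execute(
    """
    SELECT rowid, title, rank
    FROM pull_requests_fts
    WHERE pull_requests_fts MATCH ?
    ORDER BY rank
    """,
    ("typo",),
).fetchall()
print(rows)  # only row 1 contains "typo"
```

Datasette's search UI issues essentially this kind of `MATCH` query against the `_fts` table and joins back to `pull_requests` by `rowid` to display the full rows.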