issue_comments
26 rows where user = 3243482
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
509618339 | https://github.com/simonw/datasette/pull/554#issuecomment-509618339 | https://api.github.com/repos/simonw/datasette/issues/554 | MDEyOklzc3VlQ29tbWVudDUwOTYxODMzOQ== | abdusco 3243482 | 2019-07-09T12:16:32Z | 2019-07-09T12:16:32Z | CONTRIBUTOR | I've also added another fix for using static mounts with absolute paths on Windows. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Fix static mounts using relative paths and prevent traversal exploits 465728430 | |
509629331 | https://github.com/simonw/datasette/pull/554#issuecomment-509629331 | https://api.github.com/repos/simonw/datasette/issues/554 | MDEyOklzc3VlQ29tbWVudDUwOTYyOTMzMQ== | abdusco 3243482 | 2019-07-09T12:51:35Z | 2019-07-09T12:51:35Z | CONTRIBUTOR | I wanted to add a test for it too, but I've realized it's impossible to test a server process as we cannot get its exit code. ```python # tests/test_cli.py def test_static_mounts_on_windows(): if sys.platform != "win32": return runner = CliRunner() result = runner.invoke( cli, ["serve", "--static", r"s:C:\\"] ) assert result.exit_code == 0 ``` | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Fix static mounts using relative paths and prevent traversal exploits 465728430 | |
510730200 | https://github.com/simonw/datasette/issues/511#issuecomment-510730200 | https://api.github.com/repos/simonw/datasette/issues/511 | MDEyOklzc3VlQ29tbWVudDUxMDczMDIwMA== | abdusco 3243482 | 2019-07-12T03:23:22Z | 2019-07-12T03:23:22Z | CONTRIBUTOR | @simonw yes, it works fine on Windows, but the test suite doesn't run properly there; for that I had to use WSL | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Get Datasette tests passing on Windows in GitHub Actions 456578474 | |
645293374 | https://github.com/simonw/datasette/issues/851#issuecomment-645293374 | https://api.github.com/repos/simonw/datasette/issues/851 | MDEyOklzc3VlQ29tbWVudDY0NTI5MzM3NA== | abdusco 3243482 | 2020-06-17T10:32:02Z | 2020-06-17T10:32:28Z | CONTRIBUTOR | Welp, I'm an idiot. Turns out I had a sneaky comma `,` after `sql` key: ``` ... (:name, :url), ``` which tells sqlite to expect another `values(...)` list. Correcting the SQL solved the issue. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Having trouble getting writable canned queries to work 640330278 | |
647135713 | https://github.com/simonw/datasette/issues/859#issuecomment-647135713 | https://api.github.com/repos/simonw/datasette/issues/859 | MDEyOklzc3VlQ29tbWVudDY0NzEzNTcxMw== | abdusco 3243482 | 2020-06-21T14:30:02Z | 2020-06-21T14:30:02Z | CONTRIBUTOR | Oops, the same method is called from both the index and database pages. But removing the select count queries speeds up the page load quite a bit. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Database page loads too slowly with many large tables (due to table counts) 642572841 | |
647194131 | https://github.com/simonw/datasette/issues/859#issuecomment-647194131 | https://api.github.com/repos/simonw/datasette/issues/859 | MDEyOklzc3VlQ29tbWVudDY0NzE5NDEzMQ== | abdusco 3243482 | 2020-06-21T23:15:54Z | 2020-06-21T23:26:09Z | CONTRIBUTOR | I'm not sure if table counts are to blame. There shouldn't be a ~3 orders of magnitude difference. ```fish user@klein /a/w/scrapyard (master)> set sql "select count(*) from table_1; select count(*) from table_2; select count(*) from table_3;" user@klein /a/w/scrapyard (master)> time sqlite3 scrapyard.db "$sql" 187489 46492 2229 ________________________________________________________ Executed in 25.57 millis fish external usr time 3.55 millis 0.00 micros 3.55 millis sys time 22.42 millis 1123.00 micros 21.30 millis ``` but not letting datasette count the tables definitely helps. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Database page loads too slowly with many large tables (due to table counts) 642572841 | |
647922203 | https://github.com/simonw/datasette/issues/859#issuecomment-647922203 | https://api.github.com/repos/simonw/datasette/issues/859 | MDEyOklzc3VlQ29tbWVudDY0NzkyMjIwMw== | abdusco 3243482 | 2020-06-23T05:44:58Z | 2021-01-05T08:22:43Z | CONTRIBUTOR | I'm seeing the problem on the database page. The index page and table pages run quite fast. - Tables have <10 columns (`id`, `url`, `title`, `body_html`, `date`, `author`, `meta` (for keeping unstructured JSON)). I've added an index on the `date` columns (using `sqlite-utils`) in addition to the index present on the `id` columns. - All tables have FTS enabled on the `text` and `varchar` columns (`title`, `body_html` etc.) to speed up searching. - There are a couple of tables related via foreign keys (think a thread in a forum and the posts in that thread, related via `thread_id`) | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Database page loads too slowly with many large tables (due to table counts) 642572841 | |
647923666 | https://github.com/simonw/datasette/issues/859#issuecomment-647923666 | https://api.github.com/repos/simonw/datasette/issues/859 | MDEyOklzc3VlQ29tbWVudDY0NzkyMzY2Ng== | abdusco 3243482 | 2020-06-23T05:49:31Z | 2020-06-23T05:49:31Z | CONTRIBUTOR | I think I should mention that having FTS on all tables means I have 5 visible and 25 hidden (FTS) tables displayed on the database page. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Database page loads too slowly with many large tables (due to table counts) 642572841 | |
647925594 | https://github.com/simonw/datasette/issues/859#issuecomment-647925594 | https://api.github.com/repos/simonw/datasette/issues/859 | MDEyOklzc3VlQ29tbWVudDY0NzkyNTU5NA== | abdusco 3243482 | 2020-06-23T05:55:21Z | 2020-06-23T06:28:29Z | CONTRIBUTOR | Hmm, not seeing the problem now. I've removed the commented-out sections in `database.py` and restarted the process. The database page now loads in <250ms. I have a couple of workers that check some pages regularly, scrape new content and save it to the DB. Could it be that Datasette tries to recount the tables every time the database size changes? Normally it keeps a count cache, but as the DB gets updated so often (new content every 5 min or so), is it practically recounting every time I go to the database page? EDIT: It turns out it doesn't hold a cache for mutable databases. I'll update the issue with more findings and a better way to reproduce the problem if I encounter it again. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Database page loads too slowly with many large tables (due to table counts) 642572841 | |
647935300 | https://github.com/simonw/datasette/issues/859#issuecomment-647935300 | https://api.github.com/repos/simonw/datasette/issues/859 | MDEyOklzc3VlQ29tbWVudDY0NzkzNTMwMA== | abdusco 3243482 | 2020-06-23T06:23:01Z | 2020-06-23T06:23:01Z | CONTRIBUTOR | > You said "200k+, 50+ rows in a couple of tables" - does that mean 50+ columns? I'll try with larger numbers of columns and see what difference that makes. Ah that was a typo, I meant 50k. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Database page loads too slowly with many large tables (due to table counts) 642572841 | |
647936117 | https://github.com/simonw/datasette/issues/859#issuecomment-647936117 | https://api.github.com/repos/simonw/datasette/issues/859 | MDEyOklzc3VlQ29tbWVudDY0NzkzNjExNw== | abdusco 3243482 | 2020-06-23T06:25:17Z | 2020-06-23T06:25:17Z | CONTRIBUTOR | > > > ``` > sqlite-generate many-cols.db --tables 2 --rows 200000 --columns 50 > ``` > > Looks like that will take 35 minutes to run (it's not a particularly fast tool). Try chunking write operations into batches every 1000 records or so. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Database page loads too slowly with many large tables (due to table counts) 642572841 | |
648232645 | https://github.com/simonw/datasette/issues/859#issuecomment-648232645 | https://api.github.com/repos/simonw/datasette/issues/859 | MDEyOklzc3VlQ29tbWVudDY0ODIzMjY0NQ== | abdusco 3243482 | 2020-06-23T15:19:53Z | 2020-06-23T15:19:53Z | CONTRIBUTOR | The issue seems to appear sporadically, like when I return to the database page after a while during which some records have been added to the database. I've just visited the database page: the first visit took ~10s, consecutive visits took 0.3s. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Database page loads too slowly with many large tables (due to table counts) 642572841 | |
648669523 | https://github.com/simonw/datasette/issues/859#issuecomment-648669523 | https://api.github.com/repos/simonw/datasette/issues/859 | MDEyOklzc3VlQ29tbWVudDY0ODY2OTUyMw== | abdusco 3243482 | 2020-06-24T08:13:23Z | 2020-06-24T10:30:36Z | CONTRIBUTOR | I tried setting `cache_size_kb=0`, then `cache_size_kb=100000`, and I'm still getting this behavior. I even changed `Database::table_counts` and lowered the time limit to 1: ```py table_count = ( await self.execute( "select count(*) from [{}]".format(table), custom_time_limit=1, ) ).rows[0][0] counts[table] = table_count ``` I feel like 10 seconds is a magic number, like a processing timeout where Datasette gives up and returns the page. The index page loads instantly, and the table and query pages do as well. But when I return to the database page after some time, it loads in 10s. EDIT: It's always about 10 + 0.3s: a 10s wait and timeout, then 300ms to render the page. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Database page loads too slowly with many large tables (due to table counts) 642572841 | |
652160909 | https://github.com/simonw/datasette/issues/859#issuecomment-652160909 | https://api.github.com/repos/simonw/datasette/issues/859 | MDEyOklzc3VlQ29tbWVudDY1MjE2MDkwOQ== | abdusco 3243482 | 2020-07-01T03:09:32Z | 2020-07-01T03:10:21Z | CONTRIBUTOR | I've just realized Datasette tries to count hidden tables too. There are 5 visible tables and 25 hidden tables, whose effect I hadn't considered earlier. I've turned off counting for hidden tables to see if it makes any difference. What's the point of counting FTS tables? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Database page loads too slowly with many large tables (due to table counts) 642572841 | |
652166115 | https://github.com/simonw/datasette/issues/877#issuecomment-652166115 | https://api.github.com/repos/simonw/datasette/issues/877 | MDEyOklzc3VlQ29tbWVudDY1MjE2NjExNQ== | abdusco 3243482 | 2020-07-01T03:28:07Z | 2020-07-01T03:28:07Z | CONTRIBUTOR | Does this mean custom routes get to expose endpoints accepting POST requests? I tried earlier to add some POST endpoints, but the requests were being rejected by Datasette due to CSRF. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Consider dropping explicit CSRF protection entirely? 648421105 | |
652255960 | https://github.com/simonw/datasette/issues/877#issuecomment-652255960 | https://api.github.com/repos/simonw/datasette/issues/877 | MDEyOklzc3VlQ29tbWVudDY1MjI1NTk2MA== | abdusco 3243482 | 2020-07-01T07:52:25Z | 2020-07-01T08:10:00Z | CONTRIBUTOR | I am calling the API from another origin, so injecting CSRF token into templates wouldn't work. EDIT: I'll try the new version, it sounds promising | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Consider dropping explicit CSRF protection entirely? 648421105 | |
652261382 | https://github.com/simonw/datasette/issues/877#issuecomment-652261382 | https://api.github.com/repos/simonw/datasette/issues/877 | MDEyOklzc3VlQ29tbWVudDY1MjI2MTM4Mg== | abdusco 3243482 | 2020-07-01T08:03:17Z | 2020-07-01T08:03:23Z | CONTRIBUTOR | Bearer tokens sound interesting. Where do tokens come from? An auth provider of my choosing? How do they get verified? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Consider dropping explicit CSRF protection entirely? 648421105 | |
652297139 | https://github.com/simonw/datasette/pull/883#issuecomment-652297139 | https://api.github.com/repos/simonw/datasette/issues/883 | MDEyOklzc3VlQ29tbWVudDY1MjI5NzEzOQ== | abdusco 3243482 | 2020-07-01T09:11:29Z | 2020-07-01T09:11:29Z | CONTRIBUTOR | Turns out we should include hidden tables in the result dict, or we're breaking tests. I've committed a refactor https://github.com/simonw/datasette/pull/883/commits/4f06e1bf6fbe4b73be770b87f610bf7c0e6e3ea7 | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Skip counting hidden tables 648749062 | |
652394742 | https://github.com/simonw/datasette/pull/883#issuecomment-652394742 | https://api.github.com/repos/simonw/datasette/issues/883 | MDEyOklzc3VlQ29tbWVudDY1MjM5NDc0Mg== | abdusco 3243482 | 2020-07-01T12:41:13Z | 2020-07-01T12:41:13Z | CONTRIBUTOR | Well, the tests need to be updated. I need to get the test suite working on Windows. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Skip counting hidden tables 648749062 | |
736322290 | https://github.com/simonw/datasette/issues/1111#issuecomment-736322290 | https://api.github.com/repos/simonw/datasette/issues/1111 | MDEyOklzc3VlQ29tbWVudDczNjMyMjI5MA== | abdusco 3243482 | 2020-12-01T08:54:47Z | 2020-12-01T08:54:47Z | CONTRIBUTOR | Somewhat related: https://github.com/simonw/datasette/issues/859 I fixed the issue by forking and disabling the counts for hidden tables. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Accessing a database's `.json` is slow for very large SQLite files 751195017 | |
738907852 | https://github.com/simonw/datasette/pull/1130#issuecomment-738907852 | https://api.github.com/repos/simonw/datasette/issues/1130 | MDEyOklzc3VlQ29tbWVudDczODkwNzg1Mg== | abdusco 3243482 | 2020-12-04T17:22:29Z | 2020-12-04T17:31:25Z | CONTRIBUTOR | EDIT: I misunderstood the problem. This seems like a fix better suited for Safari. But I don't have any Apple device to test it. ```css body { min-height: 100vh; min-height: -webkit-fill-available; } html { height: -webkit-fill-available; } ``` https://css-tricks.com/css-fix-for-100vh-in-mobile-webkit/ --- It's actually not that difficult to fix. Well, this is actually a workaround to keep the viewport in place. I usually put a transition (forgot to do it here) that keeps the page from resizing. ```css .container { min-height: 100vh; transition: height 10000s steps(0); } ``` The `steps()` function prevents excessive layout calculations and lets the page snap back into place (10000s ~= 3h later) in a single step. This fix also prevents the page from jumping around when the keyboard pops up and down. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Fix footer not sticking to bottom in short pages 756876238 | |
821970965 | https://github.com/simonw/datasette/issues/1300#issuecomment-821970965 | https://api.github.com/repos/simonw/datasette/issues/1300 | MDEyOklzc3VlQ29tbWVudDgyMTk3MDk2NQ== | abdusco 3243482 | 2021-04-18T10:41:15Z | 2021-04-18T10:41:15Z | CONTRIBUTOR | If I change the hookspec and add a row parameter, it works: https://github.com/simonw/datasette/blob/7a2ed9f8a119e220b66d67c7b9e07cbab47b1196/datasette/hookspecs.py#L58 ``` def render_cell(value, column, row, table, database, datasette): ``` But to generate a URL I need the primary keys, and I can't call `pks = await db.primary_keys(table)` inside a sync function. I can't call `datasette.utils.detect_primary_keys` either, because the db connection is not publicly exposed (AFAICT). | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Make row available to `render_cell` plugin hook 860625833 | |
821971059 | https://github.com/simonw/datasette/issues/1300#issuecomment-821971059 | https://api.github.com/repos/simonw/datasette/issues/1300 | MDEyOklzc3VlQ29tbWVudDgyMTk3MTA1OQ== | abdusco 3243482 | 2021-04-18T10:42:19Z | 2021-04-18T10:42:19Z | CONTRIBUTOR | If there's a simpler way to generate a URL for a specific row, I'm all ears | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Make row available to `render_cell` plugin hook 860625833 | |
833132571 | https://github.com/simonw/datasette/issues/1300#issuecomment-833132571 | https://api.github.com/repos/simonw/datasette/issues/1300 | MDEyOklzc3VlQ29tbWVudDgzMzEzMjU3MQ== | abdusco 3243482 | 2021-05-06T00:16:50Z | 2021-05-06T00:18:05Z | CONTRIBUTOR | I ended up using some JS as a workaround. First, add a JS file in `metadata.yaml`: ```yaml extra_js_urls: - '/static/app.js' ``` Then, inside the script, find the blob download links, replace the `.blob` extension in the URL with `.jpg`, and replace the links with `<img/>` elements. You need to add an output formatter to serve `BLOB` columns as JPG; you can find the code in the first post. ~~Replacing `.blob` -> `.jpg` might not even be necessary, because browsers only care about the MIME type, so you only need to serve the binary content with the right `content-type` header.~~ You need to replace the extension, otherwise the output renderer will not run. ```js window.addEventListener('DOMContentLoaded', () => { function renderBlobImages() { document.querySelectorAll('a[href*=".blob"]').forEach(el => { const img = document.createElement('img'); img.className = 'blob-image'; img.loading = 'lazy'; img.src = el.href.replace('.blob', '.jpg'); el.parentElement.replaceChild(img, el); }); } renderBlobImages(); }); ``` While this does the job, I'd prefer handling this in Python where it belongs. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Make row available to `render_cell` plugin hook 860625833 | |
861497548 | https://github.com/simonw/datasette/pull/1130#issuecomment-861497548 | https://api.github.com/repos/simonw/datasette/issues/1130 | MDEyOklzc3VlQ29tbWVudDg2MTQ5NzU0OA== | abdusco 3243482 | 2021-06-15T13:27:48Z | 2021-06-15T13:27:48Z | CONTRIBUTOR | There's a workaround: https://css-tricks.com/css-fix-for-100vh-in-mobile-webkit/ and a future fix: https://css-tricks.com/safari-15-new-ui-theme-colors-and-a-css-tricks-cameo/ | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Fix footer not sticking to bottom in short pages 756876238 | |
895003796 | https://github.com/simonw/datasette/issues/1425#issuecomment-895003796 | https://api.github.com/repos/simonw/datasette/issues/1425 | IC_kwDOBm6k_c41WKyU | abdusco 3243482 | 2021-08-09T07:14:35Z | 2021-08-09T07:14:35Z | CONTRIBUTOR | I believe this also provides a workaround for the problem I face in https://github.com/simonw/datasette/issues/1300. Now I should be able to get table PKs and generate a row URL. I'll test this out and report my findings. ```py from datasette.utils import path_from_row_pks pks = await db.primary_keys(table) url = self.ds.urls.row_blob( database, table, path_from_row_pks(row, pks, not pks), column, ) ``` | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | render_cell() hook should support returning an awaitable 963528457 |
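The comments on issues 1300 and 1425 above describe a pattern: a `render_cell` plugin that receives the `row`, returns an awaitable, looks up the table's primary keys, and builds a URL for that row with `path_from_row_pks`. Below is a minimal sketch of that pattern, assuming a Datasette version where `render_cell` receives `row` and may return an awaitable (the features those issues discuss); the `photo` column name and the plain-text link format are hypothetical.

```python
from datasette import hookimpl
from datasette.utils import path_from_row_pks


@hookimpl
def render_cell(value, column, table, database, datasette, row):
    # Only touch a hypothetical "photo" column; returning None leaves
    # every other cell to Datasette's default rendering.
    if column != "photo":
        return None

    async def inner():
        db = datasette.get_database(database)
        # Primary keys are needed to build the row's URL path.
        pks = await db.primary_keys(table)
        row_path = path_from_row_pks(row, pks, not pks)
        # Replace the cell text with the URL of the row page; a real plugin
        # would likely wrap an <a> tag in markupsafe.Markup instead.
        return "{}/{}".format(datasette.urls.table(database, table), row_path)

    return inner()
```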
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
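Given that schema, the filter behind this page ("26 rows where user = 3243482") can be reproduced against a local copy of the database. A small sketch, assuming a local SQLite file; the `github.db` filename is a placeholder:

```python
import sqlite3

# Reproduce this page's filter outside Datasette: comments by one user,
# ordered by id. The database filename is a placeholder.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    select id, created_at, author_association, body
    from issue_comments
    where "user" = ?
    order by id
    """,
    (3243482,),
).fetchall()
print(len(rows))  # 26 for the filter shown on this page
```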