issue_comments
7 rows where "updated_at" is on date 2020-06-14 sorted by html_url
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
643709037 | https://github.com/simonw/datasette/issues/691#issuecomment-643709037 | https://api.github.com/repos/simonw/datasette/issues/691 | MDEyOklzc3VlQ29tbWVudDY0MzcwOTAzNw== | amjith 49260 | 2020-06-14T02:35:16Z | 2020-06-14T02:35:16Z | CONTRIBUTOR | The server should reload in the `config_dir` mode. Ref: #848 | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | --reload sould reload server if code in --plugins-dir changes 574021194 | |
643698790 | https://github.com/simonw/datasette/issues/846#issuecomment-643698790 | https://api.github.com/repos/simonw/datasette/issues/846 | MDEyOklzc3VlQ29tbWVudDY0MzY5ODc5MA== | simonw 9599 | 2020-06-14T00:20:42Z | 2020-06-14T00:20:42Z | OWNER | Released a new plugin, `datasette-psutil`, as a side-effect of this investigation: https://github.com/simonw/datasette-psutil | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | "Too many open files" error running tests 638241779 | |
643699063 | https://github.com/simonw/datasette/issues/846#issuecomment-643699063 | https://api.github.com/repos/simonw/datasette/issues/846 | MDEyOklzc3VlQ29tbWVudDY0MzY5OTA2Mw== | simonw 9599 | 2020-06-14T00:22:32Z | 2020-06-14T00:22:32Z | OWNER | Idea: `num_sql_threads` (described as "Number of threads in the thread pool for executing SQLite queries") defaults to 3 - can I knock that down to 1 in the tests and open less connections as a result? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | "Too many open files" error running tests 638241779 | |
643699583 | https://github.com/simonw/datasette/issues/846#issuecomment-643699583 | https://api.github.com/repos/simonw/datasette/issues/846 | MDEyOklzc3VlQ29tbWVudDY0MzY5OTU4Mw== | simonw 9599 | 2020-06-14T00:26:31Z | 2020-06-14T00:26:31Z | OWNER | That seems to have fixed the problem, at least for the moment. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | "Too many open files" error running tests 638241779 | |
643702715 | https://github.com/simonw/datasette/issues/847#issuecomment-643702715 | https://api.github.com/repos/simonw/datasette/issues/847 | MDEyOklzc3VlQ29tbWVudDY0MzcwMjcxNQ== | simonw 9599 | 2020-06-14T01:03:30Z | 2020-06-14T01:03:40Z | OWNER | Filed a related issue with some ideas against `coveragepy` here: https://github.com/nedbat/coveragepy/issues/999 | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Take advantage of .coverage being a SQLite database 638259643 | |
643704565 | https://github.com/simonw/datasette/issues/847#issuecomment-643704565 | https://api.github.com/repos/simonw/datasette/issues/847 | MDEyOklzc3VlQ29tbWVudDY0MzcwNDU2NQ== | simonw 9599 | 2020-06-14T01:26:56Z | 2020-06-14T01:26:56Z | OWNER | On closer inspection, I don't know if there's that much useful stuff you can do with the data from `.coverage` on its own. Consider the following query against a `.coverage` run against Datasette itself: ```sql select file_id, context_id, numbits_to_nums(numbits) from line_bits ``` <img width="1451" alt="_coverage__select_file_id__context_id__numbits_to_nums_numbits__from_line_bits" src="https://user-images.githubusercontent.com/9599/84582622-40ffc580-ada3-11ea-98d2-52bcde514a26.png"> It looks like this tells me which lines of which files were executed during the test run. But... without the actual source code, I don't think I can calculate the coverage percentage for each file. I don't want to count comment lines or whitespace as untested for example, and I don't know how many lines were in the file. If I'm right that it's not possible to calculate percentage coverage from just the `.coverage` data then I'll need to do something a bit more involved - maybe parsing the `coverage.xml` report and loading that into my own schema? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Take advantage of .coverage being a SQLite database 638259643 | |
643704730 | https://github.com/simonw/datasette/issues/847#issuecomment-643704730 | https://api.github.com/repos/simonw/datasette/issues/847 | MDEyOklzc3VlQ29tbWVudDY0MzcwNDczMA== | simonw 9599 | 2020-06-14T01:28:34Z | 2020-06-14T01:28:34Z | OWNER | Here's the plugin that adds those custom SQLite functions: ```python from datasette import hookimpl from coverage.numbits import register_sqlite_functions @hookimpl def prepare_connection(conn): register_sqlite_functions(conn) ``` | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Take advantage of .coverage being a SQLite database 638259643 | |
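The final comment above registers coverage.py's numbits helpers as SQLite functions via Datasette's `prepare_connection` hook. The same helpers can be exercised outside Datasette with Python's built-in `sqlite3` module; this is a minimal sketch assuming coverage.py 5+ is installed, using an in-memory database with a fabricated `line_bits` row rather than a real `.coverage` file (the real schema has more columns).

```python
import sqlite3

from coverage.numbits import nums_to_numbits, register_sqlite_functions

# In-memory stand-in for a real .coverage database (schema simplified).
conn = sqlite3.connect(":memory:")
register_sqlite_functions(conn)  # adds numbits_to_nums() and friends as SQL functions
conn.execute(
    "CREATE TABLE line_bits (file_id INTEGER, context_id INTEGER, numbits BLOB)"
)

# Pretend lines 1, 2 and 5 of file 1 were executed during the test run.
conn.execute(
    "INSERT INTO line_bits VALUES (?, ?, ?)",
    (1, 0, nums_to_numbits([1, 2, 5])),
)

# The query from the comment above: numbits_to_nums() returns a JSON array string.
for file_id, context_id, nums in conn.execute(
    "SELECT file_id, context_id, numbits_to_nums(numbits) FROM line_bits"
):
    print(file_id, context_id, nums)
```

As the comment notes, this only tells you which lines ran, not the coverage percentage, since the total line counts live in the source files, not in `.coverage`.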
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [issue] INTEGER REFERENCES [issues]([id]),
    [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
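The page header describes these rows as the comments whose `updated_at` falls on 2020-06-14, sorted by `html_url`. Against the schema above, that selection reduces to a plain SQLite query. A self-contained sketch using Python's `sqlite3`, with a simplified copy of the table and two rows taken from the data above (most columns omitted for brevity; Datasette's actual URL parameters for this filter may differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE [issue_comments] (
        [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
        [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
        [author_association] TEXT, [body] TEXT, [reactions] TEXT,
        [issue] INTEGER, [performed_via_github_app] TEXT
    )
    """
)

# Two of the rows shown in the table above.
conn.executemany(
    "INSERT INTO issue_comments (id, html_url, [user], updated_at) VALUES (?, ?, ?, ?)",
    [
        (643709037,
         "https://github.com/simonw/datasette/issues/691#issuecomment-643709037",
         49260, "2020-06-14T02:35:16Z"),
        (643698790,
         "https://github.com/simonw/datasette/issues/846#issuecomment-643698790",
         9599, "2020-06-14T00:20:42Z"),
    ],
)

# SQLite's date() truncates an ISO 8601 timestamp (trailing "Z" included) to its date.
rows = conn.execute(
    """
    SELECT id FROM issue_comments
    WHERE date(updated_at) = '2020-06-14'
    ORDER BY html_url
    """
).fetchall()
print([r[0] for r in rows])  # ids in html_url order: issue 691's comment sorts first
```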