issue_comments
8 rows where "created_at" is on date 2020-04-10 sorted by id descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
612258687 | https://github.com/simonw/sqlite-utils/issues/98#issuecomment-612258687 | https://api.github.com/repos/simonw/sqlite-utils/issues/98 | MDEyOklzc3VlQ29tbWVudDYxMjI1ODY4Nw== | simonw 9599 | 2020-04-10T23:08:48Z | 2020-04-10T23:08:48Z | OWNER | I need a test that reproduces this. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Only set .last_rowid and .last_pk for single update/inserts, not for .insert_all()/.upsert_all() with multiple records 597671518 | |
612216820 | https://github.com/simonw/datasette/issues/236#issuecomment-612216820 | https://api.github.com/repos/simonw/datasette/issues/236 | MDEyOklzc3VlQ29tbWVudDYxMjIxNjgyMA== | cldellow 193185 | 2020-04-10T21:03:38Z | 2020-04-10T21:03:38Z | CONTRIBUTOR | I made a repo at https://github.com/code402/datasette-lambda to demonstrate the idea, and scratch my personal itch for this. The demo relies on some central authority having already published a public, reusable Lambda layer with Datasette & its dependencies. I think that differs from the other publish plugins which seem to mainly publish Dockerfiles that the host will interpret to install deps from a requirements.txt file. I chose that approach because `uvloop` appears to be a dependency with native code that needs to be compiled for the target runtime environment. In this case, that's Amazon Linux 2. I'm not 100% clear on whether that's still required, because: - maybe `uvloop` is only needed for `uvicorn`, which the demo doesn't actually use since HTTP routing is handled by API Gateway - it seems like `uvloop` may be an optional, drop-in optimization for `asyncio` in any case (but I may be misreading this; I'm very much a Python noob) If it's the case that `uvloop` is truly optional, then I think the publish plugin could do the packaging on the user's machine, regardless of what flavour of operating system they're on. That'd be a bit slower for the user, but would provide the most long-term flexibility in terms of supporting plugins. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | datasette publish lambda plugin 317001500 | |
612173156 | https://github.com/simonw/sqlite-utils/issues/98#issuecomment-612173156 | https://api.github.com/repos/simonw/sqlite-utils/issues/98 | MDEyOklzc3VlQ29tbWVudDYxMjE3MzE1Ng== | simonw 9599 | 2020-04-10T19:03:32Z | 2020-04-10T23:08:28Z | OWNER | Investigate this traceback: ``` Traceback (most recent call last): File "fetch_projects.py", line 60, in <module> fetch_projects(db, token) File "fetch_projects.py", line 41, in fetch_projects db["projects"].upsert(project, pk="id") File "/Users/simonw/.local/share/virtualenvs/big-local-datasette-2jT6nJCT/lib/python3.7/site-packages/sqlite_utils/db.py", line 1139, in upsert conversions=conversions, File "/Users/simonw/.local/share/virtualenvs/big-local-datasette-2jT6nJCT/lib/python3.7/site-packages/sqlite_utils/db.py", line 1168, in upsert_all upsert=True, File "/Users/simonw/.local/share/virtualenvs/big-local-datasette-2jT6nJCT/lib/python3.7/site-packages/sqlite_utils/db.py", line 1107, in insert_all row = list(self.rows_where("rowid = ?", [self.last_rowid]))[0] IndexError: list index out of range ``` | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Only set .last_rowid and .last_pk for single update/inserts, not for .insert_all()/.upsert_all() with multiple records 597671518 | |
612156367 | https://github.com/simonw/datasette/issues/724#issuecomment-612156367 | https://api.github.com/repos/simonw/datasette/issues/724 | MDEyOklzc3VlQ29tbWVudDYxMjE1NjM2Nw== | simonw 9599 | 2020-04-10T18:23:39Z | 2020-04-10T18:23:39Z | OWNER | I think I'll use https://github.com/clarketm/mergedeep (MIT license) | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | --plugin-secret over-rides existing metadata.json plugin config 598013965 | |
612155297 | https://github.com/simonw/datasette/issues/724#issuecomment-612155297 | https://api.github.com/repos/simonw/datasette/issues/724 | MDEyOklzc3VlQ29tbWVudDYxMjE1NTI5Nw== | simonw 9599 | 2020-04-10T18:20:53Z | 2020-04-10T18:20:53Z | OWNER | Lots of recipes for deep dictionary merge on https://stackoverflow.com/q/7204805/6083 | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | --plugin-secret over-rides existing metadata.json plugin config 598013965 | |
612154749 | https://github.com/simonw/datasette/issues/724#issuecomment-612154749 | https://api.github.com/repos/simonw/datasette/issues/724 | MDEyOklzc3VlQ29tbWVudDYxMjE1NDc0OQ== | simonw 9599 | 2020-04-10T18:19:31Z | 2020-04-10T18:19:31Z | OWNER | Actually that code isn't the problem, since `extra_metadata` has not yet been read from `metadata.json` at that point. The bug is here: https://github.com/simonw/datasette/blob/d55fe8cdfc2ce7bc6960bf2507766c1fcd1d31a7/datasette/utils/__init__.py#L362-L368 I need to merge dictionaries in a smarter way. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | --plugin-secret over-rides existing metadata.json plugin config 598013965 | |
611880250 | https://github.com/simonw/datasette/issues/699#issuecomment-611880250 | https://api.github.com/repos/simonw/datasette/issues/699 | MDEyOklzc3VlQ29tbWVudDYxMTg4MDI1MA== | simonw 9599 | 2020-04-10T05:08:54Z | 2020-04-10T05:08:54Z | OWNER | So maybe this is all handled by plugin hooks? `auth_from_scope(datasette, scope)` would be the main one - for deciding if a user should be authenticated based on data from the scope. How would a permissions hook work though? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Authentication (and permissions) as a core concept 582526961 | |
611879821 | https://github.com/simonw/datasette/issues/699#issuecomment-611879821 | https://api.github.com/repos/simonw/datasette/issues/699 | MDEyOklzc3VlQ29tbWVudDYxMTg3OTgyMQ== | simonw 9599 | 2020-04-10T05:06:56Z | 2020-04-10T05:06:56Z | OWNER | Another problem this would solve: if you want multiple authentication mechanisms - GitHub auth for users, `Authorization: bearer xxx` auth for API keys - the order in which they run might end up mattering. I dealt with this a bit in https://github.com/simonw/datasette-auth-github/issues/59 But having an authentication plugin hook - where plugins get to decide if a user should be authenticated based on the incoming ASGI scope - would be neater. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Authentication (and permissions) as a core concept 582526961 | |
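The comments above touch on a few implementation questions; the short sketches below expand on them. First, the `uvloop` question from the datasette-lambda comment: `uvloop` is a drop-in replacement for the standard `asyncio` event loop, so packaging code can treat it as a best-effort optimization and fall back to the stock loop when no compatible build exists for the target runtime (such as Amazon Linux 2). A minimal sketch of that fallback pattern, not taken from datasette-lambda itself:

```python
import asyncio

# uvloop is an optional, drop-in replacement event loop for asyncio.
# If no wheel can be built for the target runtime (e.g. Amazon Linux 2),
# falling back to the standard asyncio event loop still works.
try:
    import uvloop
    asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
except ImportError:
    pass  # stock asyncio loop is used instead


async def main():
    await asyncio.sleep(0)
    print("event loop is running")


asyncio.run(main())
```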
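On the metadata-merging bug in issue 724: a plain `dict.update()` is shallow, so configuration supplied via `--plugin-secret` replaces an entire plugin's block in `metadata.json` instead of being merged into it, which matches the issue title. A recursive merge, such as the mergedeep library mentioned above, avoids that. A small sketch with made-up plugin keys for illustration:

```python
from mergedeep import merge  # the MIT-licensed library mentioned above

# Plugin configuration already present in metadata.json (keys invented for illustration)
metadata = {"plugins": {"datasette-auth-github": {"client_id": "abc"}}}

# Extra configuration supplied on the command line, e.g. via --plugin-secret
extra = {"plugins": {"datasette-auth-github": {"client_secret": "xyz"}}}

# A shallow metadata.update(extra) would replace the whole "plugins" block,
# losing client_id; a deep merge combines the two dictionaries key by key.
merge(metadata, extra)
print(metadata["plugins"]["datasette-auth-github"])
# {'client_id': 'abc', 'client_secret': 'xyz'}
```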
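On the proposed `auth_from_scope(datasette, scope)` hook: it is only an idea in the comments above, not an existing Datasette plugin hook, so the sketch below is entirely hypothetical. The hook name, signature and return value are assumptions about how a bearer-token authenticator might look if such a hook were added, written in Datasette's usual `hookimpl` style:

```python
from datasette import hookimpl


# Hypothetical: auth_from_scope is only proposed in the comment above; it is not
# a real Datasette plugin hook, and registering it would also require a hook
# specification that Datasette does not define. Names and shapes are assumptions.
@hookimpl
def auth_from_scope(datasette, scope):
    """Authenticate API requests carrying an 'Authorization: bearer xxx' header."""
    headers = dict(scope.get("headers") or [])
    auth = headers.get(b"authorization", b"").decode("latin-1")
    if auth.startswith("bearer ") and auth.split(" ", 1)[1] == "expected-api-token":
        return {"id": "api-client", "via": "bearer-token"}
    return None  # not authenticated by this plugin; other hooks may still run
```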
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
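Given this schema, a query along the following lines would reproduce the row selection shown above (comments whose `created_at` falls on 2020-04-10, ordered by `id` descending). This is an assumption about the underlying query, and the `github.db` filename is hypothetical:

```python
import sqlite_utils

db = sqlite_utils.Database("github.db")  # hypothetical filename for the database behind this page
rows = db.execute(
    """
    select id, [user], created_at, issue
    from issue_comments
    where date(created_at) = :day
    order by id desc
    """,
    {"day": "2020-04-10"},
).fetchall()
print(len(rows))  # expected: 8 for this database
```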