issue_comments
14 rows where "created_at" is on date 2020-12-16
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
746734412 | https://github.com/dogsheep/github-to-sqlite/issues/58#issuecomment-746734412 | https://api.github.com/repos/dogsheep/github-to-sqlite/issues/58 | MDEyOklzc3VlQ29tbWVudDc0NjczNDQxMg== | simonw 9599 | 2020-12-16T17:58:56Z | 2020-12-16T17:58:56Z | MEMBER | I'm going to rewrite those `<a href="#filtering-tables">` links to `<a href="#user-content-filtering-tables">` - but only if a corresponding `id="user-content-filtering-tables"` element exists. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Readme HTML has broken internal links 769150394 | |
746735889 | https://github.com/dogsheep/github-to-sqlite/issues/58#issuecomment-746735889 | https://api.github.com/repos/dogsheep/github-to-sqlite/issues/58 | MDEyOklzc3VlQ29tbWVudDc0NjczNTg4OQ== | simonw 9599 | 2020-12-16T17:59:50Z | 2020-12-16T17:59:50Z | MEMBER | I don't want to add a full HTML parser (like BeautifulSoup) as a dependency for this feature. Since the HTML comes from a single, trusted source (GitHub) I could probably handle this using [regular expressions](https://stackoverflow.com/a/1732454). | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Readme HTML has broken internal links 769150394 | |
746827083 | https://github.com/simonw/datasette/issues/1143#issuecomment-746827083 | https://api.github.com/repos/simonw/datasette/issues/1143 | MDEyOklzc3VlQ29tbWVudDc0NjgyNzA4Mw== | simonw 9599 | 2020-12-16T18:56:07Z | 2020-12-16T18:56:07Z | OWNER | I think the right way to do this is to support multiple optional `--cors-origin=` pattern values, like you suggested. | {"total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | More flexible CORS support in core, to encourage good security practices 764059235 | |
747029636 | https://github.com/dogsheep/dogsheep-beta/issues/29#issuecomment-747029636 | https://api.github.com/repos/dogsheep/dogsheep-beta/issues/29 | MDEyOklzc3VlQ29tbWVudDc0NzAyOTYzNg== | simonw 9599 | 2020-12-16T21:14:03Z | 2020-12-16T21:14:03Z | MEMBER | I think I can do this as a cunning trick in `display_sql`. Consider this example query: https://til.simonwillison.net/tils?sql=select%0D%0A++path%2C%0D%0A++snippet%28til_fts%2C+-1%2C+%27b4de2a49c8%27%2C+%278c94a2ed4b%27%2C+%27...%27%2C+60%29+as+snippet%0D%0Afrom%0D%0A++til%0D%0A++join+til_fts+on+til.rowid+%3D+til_fts.rowid%0D%0Awhere%0D%0A++til_fts+match+escape_fts%28%3Aq%29%0D%0A++and+path+%3D+%27asgi_lifespan-test-httpx.md%27%0D%0A&q=pytest ```sql select path, snippet(til_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 60) as snippet from til join til_fts on til.rowid = til_fts.rowid where til_fts match escape_fts(:q) and path = 'asgi_lifespan-test-httpx.md' ``` The `and path = 'asgi_lifespan-test-httpx.md'` bit means we only get back a specific document - but the snippet highlighting is applied to it. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add search highlighting snippets 724759588 | |
747030964 | https://github.com/dogsheep/dogsheep-beta/issues/29#issuecomment-747030964 | https://api.github.com/repos/dogsheep/dogsheep-beta/issues/29 | MDEyOklzc3VlQ29tbWVudDc0NzAzMDk2NA== | simonw 9599 | 2020-12-16T21:14:54Z | 2020-12-16T21:14:54Z | MEMBER | To do this I'll need the search term to be passed to the `display_sql` SQL query: https://github.com/dogsheep/dogsheep-beta/blob/4890ec87b5e2ec48940f32c9ad1f5aae25c75a4d/dogsheep_beta/__init__.py#L164-L171 | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add search highlighting snippets 724759588 | |
747031608 | https://github.com/dogsheep/dogsheep-beta/issues/29#issuecomment-747031608 | https://api.github.com/repos/dogsheep/dogsheep-beta/issues/29 | MDEyOklzc3VlQ29tbWVudDc0NzAzMTYwOA== | simonw 9599 | 2020-12-16T21:15:18Z | 2020-12-16T21:15:18Z | MEMBER | Should I pass any other details to the `display_sql` here as well? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add search highlighting snippets 724759588 | |
747034481 | https://github.com/dogsheep/dogsheep-beta/issues/29#issuecomment-747034481 | https://api.github.com/repos/dogsheep/dogsheep-beta/issues/29 | MDEyOklzc3VlQ29tbWVudDc0NzAzNDQ4MQ== | simonw 9599 | 2020-12-16T21:17:05Z | 2020-12-16T21:17:05Z | MEMBER | I'm just going to add `q` for the moment. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Add search highlighting snippets 724759588 | |
747059277 | https://github.com/simonw/datasette/issues/675#issuecomment-747059277 | https://api.github.com/repos/simonw/datasette/issues/675 | MDEyOklzc3VlQ29tbWVudDc0NzA1OTI3Nw== | simonw 9599 | 2020-12-16T21:43:52Z | 2020-12-16T21:43:52Z | OWNER | It turns out I need this for a couple of projects: - [datasette-ripgrep](https://github.com/simonw/datasette-ripgrep) needs to ship a whole bunch of source code files up in a known location. I worked around this with a nasty hack involving `--static` but it would be better if I wasn't doing that. - [dogsheep-beta](https://github.com/dogsheep/dogsheep-beta) uses an additional `dogsheep-beta.yml` configuration file in the project root (a sibling to `metadata.yml`) which needs to be included when publishing - see https://github.com/simonw/datasette.io/issues/21#issuecomment-747058067 I want this for `datasette publish cloudrun`, not just for `datasette package`. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | --cp option for datasette publish and datasette package for shipping additional files and directories 567902704 | |
747062909 | https://github.com/simonw/datasette/issues/1148#issuecomment-747062909 | https://api.github.com/repos/simonw/datasette/issues/1148 | MDEyOklzc3VlQ29tbWVudDc0NzA2MjkwOQ== | simonw 9599 | 2020-12-16T21:51:54Z | 2020-12-16T21:51:54Z | OWNER | This is a really frustrating bug with Vercel: https://github.com/simonw/datasette-publish-vercel/issues/28 `+` characters in URLs get translated into spaces before they get to Datasette. They know about the bug and said they were working on a fix a few months ago, but it looks like it's still a problem. A workaround is to avoid `+` and use `-` instead - I think this SQL query does the same thing as yours: https://aws-partners-singapore.vercel.app/partners?sql=select%0D%0A++A.launch_rank%2C%0D%0A++A.partner_info%0D%0Afrom%0D%0A++summary+A%0D%0A++INNER+JOIN+summary+B+ON+A.launch_rank+%3E%3D+B.launch_rank+-+3%0D%0A++AND+A.launch_rank+-4+%3C%3D+B.launch_rank%0D%0AWHERE%0D%0A++B.%22partner_info%22+LIKE+%27%25Palo+Alto%25%27 ```sql select A.launch_rank, A.partner_info from summary A INNER JOIN summary B ON A.launch_rank >= B.launch_rank - 3 AND A.launch_rank -4 <= B.launch_rank WHERE B."partner_info" LIKE '%Palo Alto%' ``` I've been moving projects from Vercel to Cloud Run when they run into this, but that's not a great situation to be in. | {"total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Syntax error with + symbol when deployed to Vercel 767561886 | |
747065487 | https://github.com/simonw/datasette/issues/1148#issuecomment-747065487 | https://api.github.com/repos/simonw/datasette/issues/1148 | MDEyOklzc3VlQ29tbWVudDc0NzA2NTQ4Nw== | simonw 9599 | 2020-12-16T21:57:29Z | 2020-12-16T21:57:29Z | OWNER | I filed a new public bug in their issue tracker here: https://github.com/vercel/vercel/issues/5575 | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | Syntax error with + symbol when deployed to Vercel 767561886 | |
747066629 | https://github.com/simonw/datasette/issues/675#issuecomment-747066629 | https://api.github.com/repos/simonw/datasette/issues/675 | MDEyOklzc3VlQ29tbWVudDc0NzA2NjYyOQ== | simonw 9599 | 2020-12-16T21:59:58Z | 2020-12-16T22:00:48Z | OWNER | Note that `datasette publish cloudrun` uses a working directory of `/app` - so users will need to copy their files into `/app` if that's where they need to live. https://github.com/simonw/datasette/blob/17cbbb1f7f230b39650afac62dd16476626001b5/datasette/utils/__init__.py#L348-L357 | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | --cp option for datasette publish and datasette package for shipping additional files and directories 567902704 | |
747067864 | https://github.com/simonw/datasette/issues/675#issuecomment-747067864 | https://api.github.com/repos/simonw/datasette/issues/675 | MDEyOklzc3VlQ29tbWVudDc0NzA2Nzg2NA== | simonw 9599 | 2020-12-16T22:02:55Z | 2020-12-16T22:02:55Z | OWNER | Since we're already running `COPY . /app`, anything that's made it into the temporary directory will get copied into `/app`. But I feel the usability of the command will be better if users can use absolute paths on the `target` side: datasette publish cloudrun my.db --cp dogsheep-beta.yml /app | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | --cp option for datasette publish and datasette package for shipping additional files and directories 567902704 | |
747068624 | https://github.com/simonw/datasette/issues/675#issuecomment-747068624 | https://api.github.com/repos/simonw/datasette/issues/675 | MDEyOklzc3VlQ29tbWVudDc0NzA2ODYyNA== | simonw 9599 | 2020-12-16T22:04:42Z | 2020-12-16T22:04:42Z | OWNER | I can't just use `COPY /path/to/blah.yml /app` in the `Dockerfile` because it runs on the Google Cloud Build servers, not on the user's laptop - so I need to first copy the files they specify to that temporary directory that gets uploaded to the cloud, then rewrite the `COPY` lines in the `Dockerfile` to copy from there. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | --cp option for datasette publish and datasette package for shipping additional files and directories 567902704 | |
747070709 | https://github.com/simonw/datasette/issues/675#issuecomment-747070709 | https://api.github.com/repos/simonw/datasette/issues/675 | MDEyOklzc3VlQ29tbWVudDc0NzA3MDcwOQ== | simonw 9599 | 2020-12-16T22:09:15Z | 2020-12-16T22:09:15Z | OWNER | The other way this could work is passing a single argument - the file (or directory) to be copied in - and assuming it should always go in the `/app` root. Something like: datasette publish cloudrun my.db --include src/ --include dogsheep-beta.yml Which would add `/app/src/...` and `/app/dogsheep-beta.yml`. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | --cp option for datasette publish and datasette package for shipping additional files and directories 567902704 |
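The link-rewriting approach discussed in the issue #58 comments above - rewriting `<a href="#filtering-tables">` to `<a href="#user-content-filtering-tables">` with regular expressions rather than a full HTML parser, and only when a matching `id` element exists - could be sketched like this. This is a minimal sketch under those assumptions; the function name and the exact attribute patterns are hypothetical, not taken from `github-to-sqlite`:

```python
import re


def rewrite_internal_links(html: str) -> str:
    """Rewrite href="#foo" to href="#user-content-foo", but only when a
    matching id="user-content-foo" element exists in the document."""
    # Collect every user-content-* id present in the rendered HTML
    existing_ids = set(re.findall(r'id="user-content-([^"]+)"', html))

    def replace(match: re.Match) -> str:
        fragment = match.group(1)
        if fragment in existing_ids:
            return 'href="#user-content-{}"'.format(fragment)
        return match.group(0)  # no matching id: leave the link unchanged

    return re.sub(r'href="#([^"]+)"', replace, html)
```

Given `<h2 id="user-content-filtering-tables">` elsewhere in the document, a `href="#filtering-tables"` link would be rewritten, while a link whose fragment has no matching id is left alone - which is the guard described in the first comment above.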
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
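The view above ("created_at" is on date 2020-12-16) can be reproduced with SQLite's `date()` function, which accepts the ISO 8601 `...Z` timestamps this table stores. A minimal sketch using Python's built-in `sqlite3`, with the schema trimmed to the columns the example needs - the first two ids and timestamps are real rows from the table above, the third row is hypothetical filler from a different date:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE issue_comments (
        id INTEGER PRIMARY KEY, created_at TEXT, body TEXT
    )"""
)
conn.executemany(
    "INSERT INTO issue_comments VALUES (?, ?, ?)",
    [
        (746734412, "2020-12-16T17:58:56Z", "..."),
        (747070709, "2020-12-16T22:09:15Z", "..."),
        (700000000, "2020-12-15T09:00:00Z", "..."),  # hypothetical, wrong date
    ],
)
# date() truncates the full timestamp to YYYY-MM-DD, so the comparison
# matches every comment created on that calendar day
rows = conn.execute(
    "select id from issue_comments where date(created_at) = :d order by id",
    {"d": "2020-12-16"},
).fetchall()
# rows -> [(746734412,), (747070709,)]
```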