issues
2 rows where user = 110420
**#995: Document setting Google Cloud SDK properties** (pull request, closed)

- id: 715779909 · node_id: MDExOlB1bGxSZXF1ZXN0NDk4NjMwNzA4
- user: ghing (110420) · author_association: CONTRIBUTOR
- milestone: Datasette 0.50 (5971510) · comments: 2 · locked: no · draft: no
- created_at: 2020-10-06T15:18:01Z · updated_at: 2020-10-08T23:55:30Z · closed_at: 2020-10-06T16:25:38Z
- pull_request: simonw/datasette/pulls/995 · repo: datasette (107914493)
- reactions: 0

> Document setting Google Cloud SDK properties to avoid having to respond to interactive prompts when running `datasette publish cloudrun`.
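For context, a hedged sketch of the kind of configuration this PR documents: setting Google Cloud SDK properties up front so that `datasette publish cloudrun` can run without interactive prompts. The property names are standard `gcloud config` properties; the project ID, region, database filename, and service name below are placeholders.

```bash
# Set Google Cloud SDK properties once, so later deploys
# don't stop at interactive prompts.
gcloud config set project my-project-id    # placeholder project ID
gcloud config set run/region us-central1   # choose your Cloud Run region
gcloud config set run/platform managed     # fully managed Cloud Run

# With those properties set, this should deploy non-interactively
# (github.db and my-datasette are placeholders):
datasette publish cloudrun github.db --service my-datasette
```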
**#1480: Exceeding Cloud Run memory limits when deploying a 4.8G database** (issue, open)

- id: 1015646369 · node_id: I_kwDOBm6k_c48iYih
- user: ghing (110420) · author_association: CONTRIBUTOR
- comments: 9 · locked: no
- created_at: 2021-10-04T21:20:24Z · updated_at: 2022-10-07T04:39:10Z
- repo: datasette (107914493)
- reactions: 1 (+1: 1)

> When I try to deploy a 4.8G SQLite database to Google Cloud Run, I get this error message:
>
> > Memory limit of 8192M exceeded with 8826M used. Consider increasing the memory limit, see https://cloud.google.com/run/docs/configuring/memory-limits
>
> Unfortunately, the maximum amount of memory that can be allocated to an instance is 8192M.
>
> Naively profiling the memory usage of running Datasette with this database locally on my MacBook (using Activity Monitor) shows the following when I just start up Datasette:
>
> - Real Memory Size: 70.6 MB
> - Virtual Memory Size: 4.51 GB
> - Shared Memory Size: 2.5 MB
> - Private Memory Size: 57.4 MB
>
> I'm trying to understand whether a query or other operation that runs during container deployment causes memory use to be so large, and whether it can be avoided. This is somewhat related to #1082, but on a different platform, so I decided to open a new issue.
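A hedged illustration of the local profiling the issue describes, using standard Unix tools in place of Activity Monitor, followed by the deploy-time memory option; the database filename and service name are placeholders.

```bash
# Start Datasette against the large database in the background
# (github.db is a placeholder filename).
datasette github.db --port 8001 &
DATASETTE_PID=$!
sleep 5  # give the server a moment to start

# Print resident (RSS) and virtual (VSZ) memory in kilobytes.
# Per the numbers quoted above, resident memory stays small at
# startup even though virtual memory approaches the database size.
ps -o rss=,vsz= -p "$DATASETTE_PID"
kill "$DATASETTE_PID"

# Datasette's Cloud Run publisher accepts a --memory option, though
# as the issue notes the platform ceiling here is 8192M:
datasette publish cloudrun github.db --service my-datasette --memory 8Gi
```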
The underlying table schema:

```sql
CREATE TABLE [issues] (
  [id] INTEGER PRIMARY KEY,
  [node_id] TEXT,
  [number] INTEGER,
  [title] TEXT,
  [user] INTEGER REFERENCES [users]([id]),
  [state] TEXT,
  [locked] INTEGER,
  [assignee] INTEGER REFERENCES [users]([id]),
  [milestone] INTEGER REFERENCES [milestones]([id]),
  [comments] INTEGER,
  [created_at] TEXT,
  [updated_at] TEXT,
  [closed_at] TEXT,
  [author_association] TEXT,
  [pull_request] TEXT,
  [body] TEXT,
  [repo] INTEGER REFERENCES [repos]([id]),
  [type] TEXT,
  [active_lock_reason] TEXT,
  [performed_via_github_app] TEXT,
  [reactions] TEXT,
  [draft] INTEGER,
  [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
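As a usage illustration, the filter behind "2 rows where user = 110420" can be reproduced against this schema with the `sqlite3` command-line tool; the database filename is a placeholder.

```bash
# Select the two rows shown above, matching on the [user] foreign key
# (github.db is a placeholder filename).
sqlite3 github.db <<'SQL'
SELECT id, number, type, state, title
FROM issues
WHERE [user] = 110420
ORDER BY created_at;
SQL
```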