This ID number isn't helpful for viewing, as people don't have the IDs of all their CI runners memorized. It would be much better to use the agent name (or alias, or whatever you tagged it as) instead, and possibly fall back to the ID if no name is present.

### Suggested solution

Use the agent name / alias instead of an ID number for the queue badge.

### Alternative

_No response_

### Additional context

_No response_

### Validations

- [x] Checked that the feature isn't part of the `next` version already (https://woodpecker-ci.org/versions)
- [x] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [x] Checked that there isn't already an [issue](https://github.com/woodpecker-ci/woodpecker/issues) that requests the same feature, to avoid creating a duplicate.

*Issue #5469 (feature): "In the queue list, use the agent name rather than the agent id for the badge" (updated 2025-08-29). https://github.com/woodpecker-ci/woodpecker/issues/5469*

---

### Component

server, agent

### Describe the bug

Set up a Woodpecker server and agent as two Docker containers with a shared `WOODPECKER_AGENT_SECRET`. Everything works fine at first, but after a couple of hours the communication between server and agent fails. The logs from the agent are:

```
ERR src/agent/rpc/client_grpc.go:461 > grpc error: report_health(): code: Unauthenticated error="rpc error: code = Unauthenticated desc = access token is invalid: invalid token: token has invalid claims: token is expired"
5:40PM ERR src/cmd/agent/core/agent.go:251 > failed to report health error="rpc error: code = Unauthenticated desc = access token is invalid: invalid token: token has invalid claims: token is expired"
```

The server doesn't log anything relevant about it, I think. Because this setup is so basic, I hesitated to report this as a bug, believing I had made a mistake, so I recreated my real setup in its most basic form locally, and the problem is the same.

### Steps to reproduce

1. Start the server with:

```
docker run --rm -it -p 8000:8000 -p 9000:9000 -v ./datatest:/var/lib/woodpecker -e WOODPECKER_HOST=http://localhost:8000 -e WOODPECKER_AGENT_SECRET=ADFI34YAKMIGSNKK55IPCKVVJWNULOBQY2QRIBJ42X527NY7GLTQ===1 -v /var/run/docker.sock:/var/run/docker.sock -e WOODPECKER_LOG_LEVEL=debug -e WOODPECKER_GITEA=true -e WOODPECKER_GITEA_CLIENT=... -e WOODPECKER_GITEA_SECRET=... -e WOODPECKER_GITEA_URL=... -e WOODPECKER_OPEN=true -e WOODPECKER_ADMIN=martin --name woodpecker-server-test woodpeckerci/woodpecker-server:v2.7.1-alpine
```

2. Start the agent with:

```
docker run --rm -it --network host -e WOODPECKER_SERVER=localhost:9000 -e WOODPECKER_AGENT_SECRET=ADFI34YAKMIGSNKK55IPCKVVJWNULOBQY2QRIBJ42X527NY7GLTQ===1 -v /var/run/docker.sock:/var/run/docker.sock -e WOODPECKER_LOG_LEVEL=debug --name woodpecker-runner-test woodpeckerci/woodpecker-agent:v2.7.1-alpine
```

3. Check that it works.
4. Wait for a day; you will see the error in the agent logs. The server doesn't report anything related, I think.

### Expected behavior

_No response_

### System Info

```shell
* woodpeckerci/woodpecker-server:v2.7.1-alpine
* woodpeckerci/woodpecker-agent:v2.7.1-alpine
```

### Additional context

_No response_

### Validations

- [x] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [x] Checked that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug, to avoid creating a duplicate.
- [x] Checked that the bug isn't fixed in the `next` version already (https://woodpecker-ci.org/faq#which-version-of-woodpecker-should-i-use)

*Issue #4144: "agent connection seems broken, rpc access token expired" (updated 2024-10-08). https://github.com/woodpecker-ci/woodpecker/issues/4144*

---

### Component

agent

### Describe the bug

When using the Kubernetes backend and running a pipeline that requests `detach`:

```yaml
steps:
  server:
    image: node:22-alpine
    detach: true
    commands:
      - cd /woodpecker/playwright-tests/server-work/server
      - npm run db:reset
      - exec npm run dev
```

the following error shows up in the logs:

```json
{"level":"info","time":"2025-04-25T20:50:02Z","message":"starting Woodpecker agent with version '3.5.2' and backend 'kubernetes' using platform 'linux/amd64' running up to 1 pipelines in parallel"}
{"level":"error","error":"rpc error: code = Unknown desc = workflow finished with error Service \"wp-svc-01jsqbepk0ykj9emkhf5mwxeak-server\" is invalid: spec.ports: Required value","time":"2025-04-25T20:50:33Z","message":"grpc error: wait(): code: Unknown"}
\n{\"level\":\"warn\",\"repo\":\"AtvikSecurity/pentracker\",\"pipeline\":\"959\",\"workflow_id\":\"1968\",\"error\":\"rpc error: code = Unknown desc = workflow finished with error Service \\\"wp-svc-01jsqbepk0ykj9emkhf5mwxeak-server\\\" is invalid: spec.ports: Required value\",\"time\":\"2025-04-25T20:50:33Z\",\"message\":\"cancel signal received\"}\n```\n\n\n\nHowever, then appending the `ports` value to the pipeline:\n```yaml\nsteps:\n server:\n image: node:22-alpine\n detach: true\n commands:\n - cd /woodpecker/playwright-tests/server-work/server\n - npm run db:reset\n - npx ts-node src/scripts/create-test-user.ts\n - exec npm run dev\n ports:\n - 5001\n```\n\nThe following linter error shows up:\n\n\nI'm assuming that the Kubernetes backend either needs to run it in some other capacity other than a `service` when `detach: true`, or require the ports are specified when `detach: true`.\n\n### Steps to reproduce\n\nExplained above\n\n### Expected behavior\n\n_No response_\n\n### System Info\n\n```shell\nVersion 3.5.2 for both the server and agent.\n```\n\n### Additional context\n\n_No response_\n\n### Validations\n\n- [x] Read the [docs](https://woodpecker-ci.org/docs/intro).\n- [x] Check that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug to avoid creating a duplicate.\n- [x] Checked that the bug isn't fixed in the `next` version already [https://woodpecker-ci.org/versions]",[3196,3197],{"name":3151,"color":3152},{"name":3198,"color":3199},"backend/kubernetes","bfdadc",5124,"`Detach` action conflicts with Kubernetes backend","2025-05-07T16:51:44Z","https://github.com/woodpecker-ci/woodpecker/issues/5124",0.75412077,{"description":3206,"labels":3207,"number":3209,"owner":3154,"repository":3155,"state":3210,"title":3211,"updated_at":3212,"url":3213,"score":3214},"When long building is in progress, restarting woodpecker containers (agent then server) using docker-compose similar 
to\r\n\r\nhttps://woodpecker.laszlo.cloud/server-setup/\r\n\r\nfocres docker daemon to kill agent container (then server is stopped also but building container is not). After next woodpecker containers start, build task has still Running status and one cannot see building container output (building container finishes its work in the background but its status is not updated in woodpecker; pipeline service containers are left running orphaned till host/docker restart).\r\n\r\nAgent logs after restart initialization show only\r\n\r\n`ctrl+c received, terminating process`\r\n\r\nand agent does not cancel running task.\r\n\r\nChecked in woodpecker compiled from b52e404f93ccea05dc783aa929770c4a0fad2e74.\r\n\r\nWhen receiving term signal (i.e. host reboot) agent process should cancel all ongoing tasks and terminate itself ASAP. This should leave task database in consistent state after next start.\r\n\r\nRegards,\r\nPaweł",[3208],{"name":3151,"color":3152},178,"closed","Inconsistent state after woodpecker container restart during ongoing build","2021-12-19T03:18:37Z","https://github.com/woodpecker-ci/woodpecker/issues/178",0.71124196,{"description":3216,"labels":3217,"number":3221,"owner":3154,"repository":3155,"state":3210,"title":3222,"updated_at":3223,"url":3224,"score":3225},"### Component\r\n\r\nagent, maybe?\r\n\r\n### Describe the bug\r\n\r\nToday I'm trying woodpecker, on a RaspberryPI I found in a drawer, both server and agent on the same raspi.\r\n\r\nI connected it to a gitea and everything went smoothly but one thing: the agent were not picking tasks.\r\n\r\nMy `.woodpecker.yml` looks like:\r\n\r\n```yml\r\n---\r\n\r\npipeline:\r\n test:\r\n image: debian\r\n commands:\r\n - echo Hello from Woodpecker\r\n```\r\n\r\nIn order for the agent to pick job I had to add:\r\n\r\n```diff\r\n+platform: linux/arm64\r\n```\r\n\r\nI expected that, without telling the platform explicitly, any agent would pick it, so it would be picked by the only agent I have.\r\n\r\nIt looks 
like this:

(Screenshots omitted.)

The one on top has an explicit `platform`; it gets picked up and run. The one on the bottom doesn't have an explicit `platform` setting and never gets picked up (and no information is given about why it isn't picked up ☹).

<details>
<summary>
The logs when I'm not specifying the `platform`
</summary>

```text
woodpecker-agent_1 | {"level":"trace","error":"rpc error: code = Unavailable desc = closing transport due to: connection error: desc = \"error reading from server: EOF\", received prior goaway: code: ENHANCE_YOUR_CALM, debug data: \"too_many_pings\"","time":"2022-12-08T08:43:20Z","message":"grpc: to many keepalive pings without sending data"}
woodpecker-server_1 | {"level":"debug","time":"2022-12-08T08:43:21Z","caller":"/woodpecker/src/github.com/woodpecker-ci/woodpecker/server/grpc/rpc.go:63","message":"agent connected: 04c8d295e8f3: polling"}
woodpecker-server_1 | {"level":"debug","time":"2022-12-08T08:43:21Z","caller":"/woodpecker/src/github.com/woodpecker-ci/woodpecker/server/queue/fifo.go:321","message":"queue: pending right now: 16"}
woodpecker-server_1 | {"level":"debug","time":"2022-12-08T08:43:21Z","caller":"/woodpecker/src/github.com/woodpecker-ci/woodpecker/server/queue/fifo.go:293","message":"queue: trying to assign task: 16 with deps []"}
woodpecker-agent_1 | {"level":"trace","error":"rpc error: code = Unavailable desc = closing transport due to: connection error: desc = \"error reading from server: EOF\", received prior goaway: code: ENHANCE_YOUR_CALM, debug data: \"too_many_pings\"","time":"2022-12-08T08:44:41Z","message":"grpc: to many keepalive pings without sending data"}
woodpecker-server_1 | {"level":"debug","time":"2022-12-08T08:44:42Z","caller":"/woodpecker/src/github.com/woodpecker-ci/woodpecker/server/grpc/rpc.go:63","message":"agent connected: 04c8d295e8f3: polling"}
woodpecker-server_1 | {"level":"debug","time":"2022-12-08T08:44:42Z","caller":"/woodpecker/src/github.com/woodpecker-ci/woodpecker/server/queue/fifo.go:321","message":"queue: pending right now: 16"}
woodpecker-server_1 | {"level":"debug","time":"2022-12-08T08:44:42Z","caller":"/woodpecker/src/github.com/woodpecker-ci/woodpecker/server/queue/fifo.go:293","message":"queue: trying to assign task: 16 with deps []"}
woodpecker-agent_1 | {"level":"trace","error":"rpc error: code = Unavailable desc = closing transport due to: connection error: desc = \"error reading from server: EOF\", received prior goaway: code: ENHANCE_YOUR_CALM, debug data: \"too_many_pings\"","time":"2022-12-08T08:47:22Z","message":"grpc: to many keepalive pings without sending data"}
woodpecker-server_1 | {"level":"debug","time":"2022-12-08T08:47:23Z","caller":"/woodpecker/src/github.com/woodpecker-ci/woodpecker/server/grpc/rpc.go:63","message":"agent connected: 04c8d295e8f3: polling"}
woodpecker-server_1 | {"level":"debug","time":"2022-12-08T08:47:23Z","caller":"/woodpecker/src/github.com/woodpecker-ci/woodpecker/server/queue/fifo.go:321","message":"queue: pending right now: 16"}
woodpecker-server_1 | {"level":"debug","time":"2022-12-08T08:47:23Z","caller":"/woodpecker/src/github.com/woodpecker-ci/woodpecker/server/queue/fifo.go:293","message":"queue: trying to assign task: 16 with deps []"}
woodpecker-agent_1 | {"level":"trace","error":"rpc error: code = Unavailable desc = closing transport due to: connection error: desc = \"error reading from server: EOF\", received prior goaway: code: ENHANCE_YOUR_CALM, debug data: \"too_many_pings\"","time":"2022-12-08T08:52:43Z","message":"grpc: to many keepalive pings without sending data"}
woodpecker-server_1 | {"level":"debug","time":"2022-12-08T08:52:44Z","caller":"/woodpecker/src/github.com/woodpecker-ci/woodpecker/server/grpc/rpc.go:63","message":"agent connected: 04c8d295e8f3: polling"}
woodpecker-server_1 | {"level":"debug","time":"2022-12-08T08:52:44Z","caller":"/woodpecker/src/github.com/woodpecker-ci/woodpecker/server/queue/fifo.go:321","message":"queue: pending right now: 16"}
woodpecker-server_1 | {"level":"debug","time":"2022-12-08T08:52:44Z","caller":"/woodpecker/src/github.com/woodpecker-ci/woodpecker/server/queue/fifo.go:293","message":"queue: trying to assign task: 16 with deps []"}
woodpecker-agent_1 | {"level":"trace","error":"rpc error: code = Unavailable desc = closing transport due to: connection error: desc = \"error reading from server: EOF\", received prior goaway: code: ENHANCE_YOUR_CALM, debug data: \"too_many_pings\"","time":"2022-12-08T09:03:24Z","message":"grpc: to many keepalive pings without sending data"}
```
</details>

### System Info

```shell
{"source":"https://github.com/woodpecker-ci/woodpecker","version":"0.15.5"}
```

on

```text
Linux raspberrypi 5.15.61-v8+ #1579 SMP PREEMPT Fri Aug 26 11:16:44 BST 2022 aarch64 GNU/Linux
```

using `docker-compose`.

### Validations

- [x] Read the [Contributing Guidelines](https://github.com/woodpecker-ci/woodpecker/blob/master/CONTRIBUTING.md).
- [x] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [x] Checked that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug, to avoid creating a duplicate.
- [x] Checked that this is a concrete bug. For Q&A join our [Discord Chat Server](https://discord.gg/fcMQqSMXJy) or the [Matrix room](https://matrix.to/#/#woodpecker:matrix.org).

*Issue #1468 (feedback, closed): "With a single arm64 agent: jobs are not pulled without an explicit `platform:`." (updated 2022-12-25). https://github.com/woodpecker-ci/woodpecker/issues/1468*

---

### Component

agent

### Describe the bug

The Woodpecker agent is shown as `starting` when executing `docker ps -a` / `podman ps -a`.

### System Info

`Artix Linux Rolling - OpenRC`

```
version	"0.15.6"
```

On Podman / `podman.sock`.

Runner:

```
#!/usr/bin/openrc-run

supervisor=supervise-daemon

depend() {
	need localmount net pihole postgresql woodpecker-server pipeline
}

start() {
	echo -n "Starting:"
	if [ "$(podman ps -aq -f name='cwoodpecker')" ]; then
		podman stop cwoodpecker > /dev/null
	fi

	if [ "$(podman ps -aq -f name='cwoodpecker')" ]; then
		podman start cwoodpecker
	else
		podman run \
			-d --restart=unless-stopped \
			--stop-signal SIGKILL \
			--name cwoodpecker \
			--ip 192.168.2.13 \
			--privileged \
			-v /server/pipeline:/listen \
			-e WOODPECKER_LOG_LEVEL=warn \
			-e WOODPECKER_HOSTNAME=woodpeckeragent \
			-e WOODPECKER_AGENT_SECRET="secret" \
			-e WOODPECKER_HEALTHCHECK=false \
			-e WOODPECKER_MAX_PROCS=2 \
			-e WOODPECKER_SERVER=192.168.2.14:9000 \
			-e WOODPECKER_BACKEND=docker \
			-e DOCKER_HOST="unix:///listen/podman.sock" \
			woodpeckerci/woodpecker-agent
	fi
}

stop() {
	if [ "$(podman ps -aq -f name='cwoodpecker')" ]; then
		podman stop cwoodpecker > /dev/null
	fi
}
```

### Additional context

_No response_

### Validations

- [x] Read the [Contributing Guidelines](https://github.com/woodpecker-ci/woodpecker/blob/master/CONTRIBUTING.md).
- [x] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [x] Checked that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug, to avoid creating a duplicate.
- [x] Checked that the bug isn't fixed in the `next` version already (https://woodpecker-ci.org/faq#which-version-of-woodpecker-should-i-use)
- [x] Checked that this is a concrete bug. For Q&A join our [Discord Chat Server](https://discord.gg/fcMQqSMXJy) or the [Matrix room](https://matrix.to/#/#woodpecker:matrix.org).

*Issue #1624 (closed): "Woodpecker Agent is being shown as `starting` in a container." (updated 2023-08-21). https://github.com/woodpecker-ci/woodpecker/issues/1624*

---

For example, https://ci.woodpecker-ci.org/repos/3780/pipeline/12015/28 shows that the queue implementation has the potential to lock itself.

Since production setups do not _normally_ have short task insert times, this is rarely discovered in the wild, and a server restart fixes it, so I can see why we did not get a bug report.

But we should still find the cause, to:

- not need to restart our test pipelines to make random errors disappear
- fix potentially bigger issues that could be the cause of this
- not have unexplainable hiccups in deployed instances

To me this looks like some kind of race condition; e.g.
some lock/unlock is probably missing somewhere.

## Workaround: restart the server

*Issue #3175 (bug, server, closed): "Task Queue can lock itself" (updated 2024-11-14). https://github.com/woodpecker-ci/woodpecker/issues/3175*

---

### Component

server, other

### Describe the bug

Since updating to 2.7 we've observed a massive spike in database load. Most of it seems to come from updates to the `agents` table. More specifically, the `last_work` field of several agents is updated multiple times per second.

This is greatly impacting database performance (which seems to feed into https://github.com/woodpecker-ci/woodpecker/issues/3999).

I haven't done full profiling, but if I had to guess, this seems to come from this change: https://github.com/woodpecker-ci/woodpecker/pull/3844/files#diff-0f4ca4733649eb6707a0dd7e0ca0083cdc587b5cdced5b3ac051fc32cc9353cbR361-R368. If I understand correctly, every time a log line is persisted to the database, the respective agent row is updated. The frequency of these updates seems quite high.

For context, we were running Woodpecker 2.3 with a PostgreSQL database on an Amazon RDS db.t4g.small (2 vCPUs and 2 GiB RAM) just fine. After 2.7 we had to move to a db.t4g.xlarge (4 vCPUs and 8 GiB RAM), and it's still struggling on CPU.

We're running on Kubernetes with 10 agents and up to 10 workflows per agent.

### Steps to reproduce

1. Install Woodpecker 2.7
2. Run multiple workflows
3. Observe multiple updates on the `agents` table

### Expected behavior

From https://github.com/woodpecker-ci/woodpecker/pull/3844 this seems to be the intended behavior. However, it clearly comes at a cost.

Maybe we could update the agents only every X minutes instead of on every log line / every second (not sure how it's implemented right now; I need to look deeper). Possibly the same could be said for `log_entries` updates; their frequency might be just a tad too high. Of course there are risks here (i.e. losing logs).

We will test an internal version with the last-work update on every log line disabled and see how that goes.

### System Info

```shell
Woodpecker 2.7
Kubernetes installation
```

### Additional context

Here's our Amazon Performance Insights view of the database. On the evening of Friday the 9th we updated Woodpecker from 2.3 to 2.7, but we barely ran any pipelines then or during the weekend. On Monday you can see how the load is many times higher than the previous week.

(Performance Insights screenshot omitted.)

Most of it is from INSERTs coming from `log_entries` and `agents`; however, a more detailed analysis shows that in query _volume_, `log_entries` updates have not increased much after the update, while `agents` updates have increased tremendously.

On Monday we upgraded the database from a t4g.small to a t4g.xlarge, which helped but did not solve the issue.

### Validations

- [x] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [x] Checked that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug, to avoid creating a duplicate.
- [x] Checked that the bug isn't fixed in the `next` version already (https://woodpecker-ci.org/faq#which-version-of-woodpecker-should-i-use)

*Issue #4030 (closed): "Massive load on agents table since 2.7" (updated 2024-08-14). https://github.com/woodpecker-ci/woodpecker/issues/4030*