## Kubernetes Pod never completes initialisation - hangs forever

[#5345](https://github.com/woodpecker-ci/woodpecker/issues/5345) · open · labels: bug · updated 2025-07-21

I did attempt `pull_5323-alpine` as recommended in #5238, but it didn't work: the pod never stood up.

### Steps to reproduce

1. Install Woodpecker on a k3s cluster (using Longhorn as the storage backend and Forgejo as the forge, but that hasn't been an issue before).
2. Create a pipeline.
3. Attempt to run it.
4. Pod creation never completes.

### Expected behavior

Creates the pod and runs the pipeline.

### System Info

```shell
source	"https://github.com/woodpecker-ci/woodpecker"
version	"3.8.0"
```

### Additional context

<img width="1875" height="143" alt="Image" src="https://github.com/user-attachments/assets/6fbbcd41-751c-4d5d-b852-f05e57d6ca12" />

<img width="1008" height="312" alt="Image" src="https://github.com/user-attachments/assets/6fc6001e-01e4-4cce-b9c4-2ccc144b9b1a" />

### Validations

- [x] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [x] Check that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug to avoid creating a duplicate.
- [x] Checked that the bug isn't fixed in the `next` version already [https://woodpecker-ci.org/versions]

## `Detach` action conflicts with Kubernetes backend

[#5124](https://github.com/woodpecker-ci/woodpecker/issues/5124) · labels: bug, backend/kubernetes · updated 2025-05-07

### Component

agent

### Describe the bug

When using the Kubernetes backend and running a pipeline that requests `detach`:

```yaml
steps:
  server:
    image: node:22-alpine
    detach: true
    commands:
      - cd /woodpecker/playwright-tests/server-work/server
      - npm run db:reset
      - exec npm run dev
```

the following error shows up in the logs:

```json
{"level":"info","time":"2025-04-25T20:50:02Z","message":"starting Woodpecker agent with version '3.5.2' and backend 'kubernetes' using platform 'linux/amd64' running up to 1 pipelines in parallel"}
{"level":"error","error":"rpc error: code = Unknown desc = workflow finished with error Service \"wp-svc-01jsqbepk0ykj9emkhf5mwxeak-server\" is invalid: spec.ports: Required value","time":"2025-04-25T20:50:33Z","message":"grpc error: wait(): code: Unknown"}
{"level":"warn","repo":"AtvikSecurity/pentracker","pipeline":"959","workflow_id":"1968","error":"rpc error: code = Unknown desc = workflow finished with error Service \"wp-svc-01jsqbepk0ykj9emkhf5mwxeak-server\" is invalid: spec.ports: Required value","time":"2025-04-25T20:50:33Z","message":"cancel signal received"}
```

However, after appending a `ports` value to the pipeline:

```yaml
steps:
  server:
    image: node:22-alpine
    detach: true
    commands:
      - cd /woodpecker/playwright-tests/server-work/server
      - npm run db:reset
      - npx ts-node src/scripts/create-test-user.ts
      - exec npm run dev
    ports:
      - 5001
```

the following linter error shows up:

I'm assuming that the Kubernetes backend either needs to run the detached step in some capacity other than a `Service`, or needs to require that ports are specified when `detach: true` is set.
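The `spec.ports: Required value` rejection comes from the Kubernetes API server itself: a `Service` object must declare at least one port. A minimal sketch of the guard this implies, assuming the backend builds one `corev1.Service` per detached step (the helper name and selector label below are hypothetical, not Woodpecker's actual code):

```go
package kubernetes

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// buildService sketches the two ways out of the reported failure: a Service
// with an empty Ports slice is rejected by the API server ("spec.ports:
// Required value"), so either skip the Service for port-less detached steps
// or fail early with a readable error instead of submitting an invalid object.
func buildService(name string, ports []uint16) (*v1.Service, error) {
	if len(ports) == 0 {
		// Alternative: return (nil, nil) and run the detached step as a
		// plain pod without exposing it as a Service.
		return nil, fmt.Errorf("detached step %q declares no ports; a Kubernetes Service requires spec.ports", name)
	}
	svcPorts := make([]v1.ServicePort, 0, len(ports))
	for _, p := range ports {
		svcPorts = append(svcPorts, v1.ServicePort{
			Port:       int32(p),
			TargetPort: intstr.FromInt(int(p)),
		})
	}
	return &v1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.ServiceSpec{
			Selector: map[string]string{"step": name}, // hypothetical label
			Ports:    svcPorts,
		},
	}, nil
}
```

Either branch keeps the invalid object away from the API server; the explicit error would also replace the opaque `grpc error: wait(): code: Unknown` with an actionable message.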
### Steps to reproduce

Explained above.

### Expected behavior

_No response_

### System Info

```shell
Version 3.5.2 for both the server and agent.
```

### Additional context

_No response_

### Validations

- [x] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [x] Check that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug to avoid creating a duplicate.
- [x] Checked that the bug isn't fixed in the `next` version already [https://woodpecker-ci.org/versions]

## when.status:failure doesn't work across workflows (runs_on:failure)

[#4355](https://github.com/woodpecker-ci/woodpecker/issues/4355) · labels: bug · updated 2024-11-19

### Component

server

### Describe the bug

A step with `when.status: [failure]` doesn't run on a failed pipeline; the step with `when.status: [success]` runs instead.

### Steps to reproduce

1. Woodpecker + some forge
2. Make a pipeline:

```yaml
# build.yaml
skip_clone: true
steps:
  build:
    image: alpine
    commands:
      - echo 'Building the app'
      - exit 1 # 0 - success, 1 - fail
```

```yaml
# notifications.yaml
skip_clone: true
depends_on: [build]
runs_on: [success, failure]
steps:
  fail-notification:
    when:
      - status: [failure]
    image: alpine
    commands:
      - echo 'Build failed'
  success-notification:
    when:
      - status: [success]
    image: alpine
    commands:
      - echo 'Build succeed'
```

3. Run it manually and observe the wrong step being executed.

### Expected behavior

1. The `fail-notification` step runs.
2. The `success-notification` step is skipped.

### System Info

```shell
WP `next-f87e80381b`, `2.7.1`, Gitea, Postgres 16
```

### Additional context

https://github.com/woodpecker-ci/woodpecker/issues/4337#issuecomment-2468498847

### Validations

- [X] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [X] Check that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug to avoid creating a duplicate.
- [X] Checked that the bug isn't fixed in the `next` version already [https://woodpecker-ci.org/faq#which-version-of-woodpecker-should-i-use]
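What this report expects is that the `notifications.yaml` workflow evaluates `when.status` against the aggregate outcome of the workflows it `depends_on`. A toy Go model of those expected semantics (hypothetical types, not Woodpecker's evaluator):

```go
package main

import "fmt"

// Step models just the when.status filter (hypothetical type).
type Step struct {
	Name       string
	WhenStatus []string
}

// shouldRun reports whether a step's when.status filter matches the given
// pipeline status ("success" or "failure").
func shouldRun(s Step, status string) bool {
	for _, w := range s.WhenStatus {
		if w == status {
			return true
		}
	}
	return false
}

func main() {
	// build.yaml exited 1, so the status the dependent workflow should see
	// is "failure".
	status := "failure"
	steps := []Step{
		{Name: "fail-notification", WhenStatus: []string{"failure"}},
		{Name: "success-notification", WhenStatus: []string{"success"}},
	}
	for _, s := range steps {
		fmt.Printf("%s runs: %v\n", s.Name, shouldRun(s, status))
	}
	// Expected: fail-notification runs: true,
	//           success-notification runs: false.
}
```

The reported behavior is the inverse, which is consistent with the dependent workflow matching `when.status` against its own (so far successful) state rather than the outcome of the failed `build` workflow.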
## Improve error message "An unknown error occurred" when container image cannot be found

[#1460](https://github.com/woodpecker-ci/woodpecker/issues/1460) · labels: enhancement, ux, backend/docker · updated 2022-12-23

### Clear and concise description of the problem

When I run the following pipeline

```
pipeline:
  my-step:
    image: alpine:booooom
    commands:
      - echo "hello world"
```

the run fails, because the image tag `booooom` does not exist. Instead of providing a helpful error message, the Woodpecker web UI only shows a small notification pop-up in the bottom right corner of the screen saying "An unknown error occurred".

That is not very helpful.

### Suggested solution

Provide a more specific error message reading

```
Container image 'alpine:booooom' for step 'my-step' could not be found in registry.
```

or similar.

### Alternative

_No response_

### Additional context

### Validations

- [X] Read the [Contributing Guidelines](https://github.com/woodpecker-ci/woodpecker/blob/master/CONTRIBUTING.md).
- [X] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [X] Check that there isn't already an [issue](https://github.com/woodpecker-ci/woodpecker/issues) that requests the same feature to avoid creating a duplicate.
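On the Docker backend, the missing tag is detectable at pull time, so the suggested message could be attached right where the pull fails. A sketch against a recent Docker SDK; the helper and its signature are hypothetical, not Woodpecker's backend code:

```go
package docker

import (
	"context"
	"fmt"
	"io"

	"github.com/docker/docker/api/types/image"
	"github.com/docker/docker/client"
)

// pullImage wraps the Docker client's ImagePull call so that a nonexistent
// image or tag surfaces as a message naming both the image and the step,
// instead of bubbling up to the UI as "An unknown error occurred".
func pullImage(ctx context.Context, cli *client.Client, ref, stepName string) error {
	rc, err := cli.ImagePull(ctx, ref, image.PullOptions{})
	if err != nil {
		return fmt.Errorf("container image '%s' for step '%s' could not be pulled: %w", ref, stepName, err)
	}
	defer rc.Close()
	// The pull only completes once the progress stream is drained.
	_, err = io.Copy(io.Discard, rc)
	return err
}
```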
## Logs only showing the first few lines of each step

[#4409](https://github.com/woodpecker-ci/woodpecker/issues/4409) · labels: bug · updated 2024-11-19

### Component

agent, web-ui

### Describe the bug

On all my pipelines, only the first few log lines appear in the web UI; everything else is cut off. I can see the full log output when I `kubectl logs wp-1234...` the step pod, but it doesn't seem to get sent from the agent to the server.

I also see a lot of

`{"level":"error","repo":"renovatebot/renovate","pipeline":"2064","workflow_id":"6848","image":"docker.io/woodpeckerci/plugin-git:2.5.1","workflow_id":"6848","error":"io: read/write on closed pipe","time":"2024-11-17T13:04:52Z","message":"copy limited logStream part"}`

on the agent, and

`{"level":"error","repo_id":"29","pipeline_id":"4660","workflow_id":"6871","error":"stream: not found","time":"2024-11-18T12:55:30Z","message":"done: cannot close log stream for step 21827"}` and `{"level":"error","repo_id":"29","pipeline_id":"4660","workflow_id":"6871","error":"sql: no rows in result set","time":"2024-11-18T12:55:30Z","message":"queue.Done: cannot ack workflow"}`

on the server.

It has to be mentioned that until about a month ago (I upgrade Woodpecker almost immediately after each release) the logs worked almost flawlessly for more than a year, ever since I switched from Drone to Woodpecker.

Currently I don't know where to start debugging to get to the bottom of this. Do you have any pointers?

### Steps to reproduce

Install Woodpecker using Helm chart version 1.6.2 on a Kubernetes v1.31 cluster, setting only `WOODPECKER_BACKEND_K8S_STORAGE_RWX: false` besides the forge setup per https://woodpecker-ci.org/docs/next/administration/forges/forgejo.

### Expected behavior

I expect to always get all the logs when I click on a step/workflow.

### System Info

```shell
{"source":"https://github.com/woodpecker-ci/woodpecker","version":"2.7.3"}
```

### Additional context

_No response_

### Validations

- [X] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [X] Check that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug to avoid creating a duplicate.
- [X] Checked that the bug isn't fixed in the `next` version already [https://woodpecker-ci.org/faq#which-version-of-woodpecker-should-i-use]

## Agent failed to retrieve new jobs due to RPC error "keepalive ping failed to receive ACK within timeout"

[#3712](https://github.com/woodpecker-ci/woodpecker/issues/3712) · labels: bug, agent · updated 2024-05-23

### Component

agent

### Describe the bug

The agent failed to pick up new jobs, reporting an RPC error:

```
2:48AM INF src/shared/logger/logger.go:101 > log level: debug
2:48AM WRN src/pipeline/backend/kubernetes/kubernetes.go:101 > WOODPECKER_BACKEND_K8S_PULL_SECRET_NAMES is set to the default ('regcred'). It will default to empty in Woodpecker 3.0. Set it explicitly before then.
2:48AM DBG src/cmd/agent/core/agent.go:173 > loaded kubernetes backend engine
2:48AM DBG src/cmd/agent/core/agent.go:201 > agent registered with ID 30003
2:48AM INF src/cmd/agent/core/agent.go:243 > starting Woodpecker agent with version 'next-5527d9bf86' and backend 'kubernetes' using platform 'linux/amd64' running up to 1 pipelines in parallel
2:48AM DBG src/cmd/agent/core/agent.go:226 > created new runner 0
2:48AM DBG src/cmd/agent/core/agent.go:234 > polling new steps
2:48AM DBG src/agent/runner.go:54 > request next execution
2:49AM ERR src/agent/rpc/client_grpc.go:93 > grpc error: done(): code: Unavailable error="rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout"
```

### System Info

```shell
{"source":"https://github.com/woodpecker-ci/woodpecker","version":"2.4.1"}
```

### Additional context

The server and the agent run in two Kubernetes clusters in different locations, connected by WireGuard + iptables.

The server still assigns the pipeline to the agent, and may falsely assign more pipelines than the agent's capacity.

### Validations

- [X] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [X] Check that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug to avoid creating a duplicate.
- [X] Checked that the bug isn't fixed in the `next` version already [https://woodpecker-ci.org/faq#which-version-of-woodpecker-should-i-use]
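For context on the error string: it is produced by grpc-go's client-side keepalive, which tears the connection down when a ping is not ACKed within `Timeout`. A sketch of where those knobs live, with illustrative values rather than Woodpecker's defaults; on a lossy WireGuard link, a dropped ping or ACK is enough to trip it:

```go
package main

import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/keepalive"
)

// dial shows the client keepalive parameters behind "keepalive ping failed
// to receive ACK within timeout": after Time of inactivity a ping is sent,
// and the transport is closed if no ACK arrives within Timeout.
func dial(addr string) (*grpc.ClientConn, error) {
	return grpc.NewClient(addr,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithKeepaliveParams(keepalive.ClientParameters{
			Time:                30 * time.Second, // ping after 30s without activity
			Timeout:             20 * time.Second, // wait this long for the ping ACK
			PermitWithoutStream: true,             // also ping while no RPC is active
		}),
	)
}
```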
## failure to find or persist pipeline config | cronjob

[#1810](https://github.com/woodpecker-ci/woodpecker/issues/1810) · labels: bug, server · updated 2023-06-02

### Component

server

### Describe the bug

I have various repos set up using cron. The crons start but get stuck on pending. Looking at the logs, it appears Woodpecker is unable to find the pipeline config; I currently don't know why it can't find it. Additionally, I cannot cancel the pending pipelines.

### System Info

```shell
next-5abba554
```

### Additional context

```
2023-04-21T20:15:06Z ERR failure to find or persist pipeline config for docker-images/development | error=UNIQUE constraint failed: pipeline_config.config_id, pipeline_config.pipeline_id
2023-04-21T20:15:06Z ERR run cron failed | error=failure to find or persist pipeline config for docker-images/development cronID=13
```

_Originally posted by @rubenelshof in https://github.com/woodpecker-ci/woodpecker/issues/1712_
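Despite the "failure to find" wording, the underlying error is an insert tripping over the composite UNIQUE key on `pipeline_config(config_id, pipeline_id)`, i.e. the link row already exists. A hypothetical sketch of making that "find or persist" idempotent (table and column names taken from the log line, not Woodpecker's store code):

```go
package store

import "database/sql"

// linkConfig inserts the pipeline<->config link and treats an existing row
// as success instead of failing on UNIQUE(config_id, pipeline_id).
// ON CONFLICT ... DO NOTHING works on SQLite (3.24+) and Postgres;
// placeholders are shown SQLite-style.
func linkConfig(db *sql.DB, configID, pipelineID int64) error {
	_, err := db.Exec(
		`INSERT INTO pipeline_config (config_id, pipeline_id)
		 VALUES (?, ?)
		 ON CONFLICT (config_id, pipeline_id) DO NOTHING`,
		configID, pipelineID,
	)
	return err
}
```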
## (Kubernetes backend) terminated node causes runtime error when handling step exit code

[#3330](https://github.com/woodpecker-ci/woodpecker/issues/3330) · closed · labels: bug · updated 2024-02-05

### Component

agent

### Describe the bug

We were running Woodpecker v2.1.1 with the Kubernetes backend on a multi-node cluster on AWS.

We got a few `panic: runtime error` logs in our agent like this:

```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x16f7594]

goroutine 99699 [running]:
go.woodpecker-ci.org/woodpecker/v2/pipeline/backend/kubernetes.(*kube).WaitStep(0xc00050a140, {0x1e32748, 0xc0005048c0}, 0xc00116c500?, {0xc0024529b0, 0x5})
	/src/pipeline/backend/kubernetes/kubernetes.go:251 +0x594
go.woodpecker-ci.org/woodpecker/v2/pipeline.(*Runtime).exec(0xc001d80b80, 0xc00116c500)
	/src/pipeline/pipeline.go:269 +0x196
go.woodpecker-ci.org/woodpecker/v2/pipeline.(*Runtime).execAll.func1()
	/src/pipeline/pipeline.go:206 +0x1ba
golang.org/x/sync/errgroup.(*Group).Go.func1()
	/src/vendor/golang.org/x/sync/errgroup/errgroup.go:75 +0x56
created by golang.org/x/sync/errgroup.(*Group).Go in goroutine 41
	/src/vendor/golang.org/x/sync/errgroup/errgroup.go:72 +0x96
```

Tracking it down to https://github.com/woodpecker-ci/woodpecker/blob/v2.1.1/pipeline/backend/kubernetes/kubernetes.go#L251, it's likely that either `ContainerStatuses` is empty or `Terminated` is nil, both of which would cause a panic.

Simply adding error handling for either case seems to be a viable option, which is what I did internally (and I hope to submit it for review shortly). That way we were able to track down at least one trigger of the bug: the node hosting the pod being killed before the agent can retrieve the exit code. In our case it was caused by the Amazon Auto Scaling Group trying to rebalance multiple AZs despite active pipelines being executed on the node.

### System Info

```shell
Woodpecker v2.1.1
```

### Additional context

_No response_

### Validations

- [X] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [X] Check that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug to avoid creating a duplicate.
- [X] Checked that the bug isn't fixed in the `next` version already [https://woodpecker-ci.org/faq#which-version-of-woodpecker-should-i-use]
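The guard the reporter describes is straightforward: kubernetes.go:251 dereferences the first container's terminated state, and both an empty `ContainerStatuses` slice and a nil `Terminated` must be handled when the node disappears mid-step. A sketch of that shape, assuming a `k8s.io/api/core/v1` pod object (not the patch that actually landed):

```go
package kubernetes

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// stepExitCode extracts a step's exit code from its pod, returning an error
// instead of panicking when the status is incomplete, e.g. when the node was
// terminated before the kubelet could report a terminated container state.
func stepExitCode(pod *v1.Pod) (int32, error) {
	if len(pod.Status.ContainerStatuses) == 0 {
		return 0, fmt.Errorf("pod %s has no container statuses", pod.Name)
	}
	term := pod.Status.ContainerStatuses[0].State.Terminated
	if term == nil {
		return 0, fmt.Errorf("pod %s: container has not reported a terminated state", pod.Name)
	}
	return term.ExitCode, nil
}
```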
## A detached container cannot be accessed (at least with Kubernetes backend)

[#4627](https://github.com/woodpecker-ci/woodpecker/issues/4627) · labels: bug, backend/kubernetes · updated 2025-01-06

### Component

agent

### Describe the bug

A detached container cannot be accessed by its name, making it unusable.

### Steps to reproduce

1. Install Woodpecker and configure the Kubernetes backend.
2. Run a detached step and access it from a following step by its name.
3. See "bad DNS name" or similar errors.

### Expected behavior

As documented, a `detach`ed step should behave like a service. If it cannot be reached via DNS, it cannot replace a `service`.

### System Info

```shell
{
  "source": "https://github.com/woodpecker-ci/woodpecker",
  "version": "2.8.0"
}
```

### Additional context

https://github.com/woodpecker-ci/woodpecker/pull/3411 should be favorable...

### Validations

- [X] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [X] Check that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug to avoid creating a duplicate.
- [X] Checked that the bug isn't fixed in the `next` version already [https://woodpecker-ci.org/faq#which-version-of-woodpecker-should-i-use]

## Pipeline status is error, but last job is OK

[#3561](https://github.com/woodpecker-ci/woodpecker/issues/3561) · labels: bug · updated 2024-04-15

### Component

web-ui

### Describe the bug

The pipeline status badge (returned by the API at `/api/badges/25/status.svg`) shows ERROR, even though the last build is OK.

### System Info

```shell
version 2.4.1 running on Docker
```

### Additional context

_No response_

### Validations

- [X] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [X] Check that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug to avoid creating a duplicate.
- [X] Checked that the bug isn't fixed in the `next` version already [https://woodpecker-ci.org/faq#which-version-of-woodpecker-should-i-use]