## #5345: Kubernetes Pod never completes initialisation - hangs forever (updated 2025-07-21)
https://github.com/woodpecker-ci/woodpecker/issues/5345

I did attempt `pull_5323-alpine` as recommended in #5238, but it didn't work: the pod never stood up.

### Steps to reproduce

1. Install Woodpecker on a k3s cluster (using Longhorn as the storage backend and Forgejo as the forge, though that hasn't been an issue before).
2. Create a pipeline.
3. Attempt to run it.
4. Pod creation never completes.

### Expected behavior

Creates the pod and runs the pipeline.

### System Info

```shell
source	"https://github.com/woodpecker-ci/woodpecker"
version	"3.8.0"
```

### Additional context

<img width="1875" height="143" alt="Image" src="https://github.com/user-attachments/assets/6fbbcd41-751c-4d5d-b852-f05e57d6ca12" />

<img width="1008" height="312" alt="Image" src="https://github.com/user-attachments/assets/6fc6001e-01e4-4cce-b9c4-2ccc144b9b1a" />

### Validations

- [x] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [x] Check that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug to avoid creating a duplicate.
- [x] Checked that the bug isn't fixed in the `next` version already [https://woodpecker-ci.org/versions]

## #4446: Agent stops taking jobs after server throws 5XX errors (updated 2025-03-12)
https://github.com/woodpecker-ci/woodpecker/issues/4446

## #4848: Server crash when pipeline start (updated 2025-02-14)
https://github.com/woodpecker-ci/woodpecker/issues/4848

## #3330: (Kubernetes backend) terminated node causes runtime error when handling step exit code (closed, updated 2024-02-05)
https://github.com/woodpecker-ci/woodpecker/issues/3330

### Component

agent

### Describe the bug

We were running Woodpecker v2.1.1 with the Kubernetes backend on a multi-node cluster on AWS.

We got a few `panic: runtime error` logs in our agent like this:

```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x16f7594]

goroutine 99699 [running]:
go.woodpecker-ci.org/woodpecker/v2/pipeline/backend/kubernetes.(*kube).WaitStep(0xc00050a140, {0x1e32748, 0xc0005048c0}, 0xc00116c500?, {0xc0024529b0, 0x5})
	/src/pipeline/backend/kubernetes/kubernetes.go:251 +0x594
go.woodpecker-ci.org/woodpecker/v2/pipeline.(*Runtime).exec(0xc001d80b80, 0xc00116c500)
	/src/pipeline/pipeline.go:269 +0x196
go.woodpecker-ci.org/woodpecker/v2/pipeline.(*Runtime).execAll.func1()
	/src/pipeline/pipeline.go:206 +0x1ba
golang.org/x/sync/errgroup.(*Group).Go.func1()
	/src/vendor/golang.org/x/sync/errgroup/errgroup.go:75 +0x56
created by golang.org/x/sync/errgroup.(*Group).Go in goroutine 41
	/src/vendor/golang.org/x/sync/errgroup/errgroup.go:72 +0x96
```

Tracking it down to https://github.com/woodpecker-ci/woodpecker/blob/v2.1.1/pipeline/backend/kubernetes/kubernetes.go#L251, it's likely that either `ContainerStatuses` is empty or `Terminated` is nil; both would cause a panic.

Simply adding error handling for either case seems like a viable option here, which is what I did internally (and I hope to submit it for review shortly). With that in place we were able to track down at least one occurrence of the bug: the node hosting the pod is killed before the agent can retrieve the exit code. In our case it was caused by the Amazon Auto Scaling Group trying to rebalance multiple AZs despite active pipelines being executed on the node.

### System Info

```shell
Woodpecker v2.1.1
```

### Additional context

_No response_

### Validations

- [X] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [X] Check that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug to avoid creating a duplicate.
- [X] Checked that the bug isn't fixed in the `next` version already [https://woodpecker-ci.org/faq#which-version-of-woodpecker-should-i-use]
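The defensive check the reporter describes might look roughly like the following. This is a minimal sketch using client-go's `corev1.Pod` types; `stepExitCode` is a hypothetical helper, not Woodpecker's actual `WaitStep` implementation:

```go
package kube

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// stepExitCode extracts the exit code of a named container from a pod,
// guarding the two nil cases the trace above points at: an empty
// ContainerStatuses slice and a nil Terminated state. Both can occur
// when the node dies before the kubelet reports termination.
func stepExitCode(pod *corev1.Pod, containerName string) (int, error) {
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.Name != containerName {
			continue
		}
		if cs.State.Terminated == nil {
			// Container status exists but no terminal state was recorded.
			return 0, fmt.Errorf("container %q has no terminated state", containerName)
		}
		return int(cs.State.Terminated.ExitCode), nil
	}
	// No status at all, e.g. the node was removed mid-run.
	return 0, fmt.Errorf("no status found for container %q (node lost?)", containerName)
}
```

Returning an error here lets the caller fail the step cleanly instead of crashing the whole agent goroutine.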
## #3288: Multiple flaws when running pipelines with the same service (K8s) (closed, updated 2024-02-17)
https://github.com/woodpecker-ci/woodpecker/issues/3288

### Component

server, agent

### Describe the bug

User-1 has a pipeline in the repository `wp-test`; he [runs](https://woodpecker.test.smthd.com/repos/1/pipeline/139) the `gitea-integration-test` branch:

```yaml
skip_clone: true
services:
  postgres:
    image: digitalocean/doks-debug
    commands:
      - echo 'This is Gitea Postgres test server' | nc -l -6 5432
    ports:
      - 5432
steps:
  gitea:
    image: digitalocean/doks-debug
    commands:
      - nc -v -6 -w 10 postgres 5432
```

User-2 has the pipeline below in his `wp-test-2` repository and `woodpecker-integration-test` branch:

```yaml
skip_clone: true
services:
  postgres:
    image: digitalocean/doks-debug
    commands:
      - echo 'This is Woodpecker Postgres test server' | nc -l -6 5432
    ports:
      - 5432
steps:
  wp:
    image: digitalocean/doks-debug
    commands:
      - nc -v -6 -w 10 postgres 5432
```

When User-1's pipeline Service and service Pod were launched, User-2 [ran his own](https://woodpecker.test.smthd.com/repos/2/pipeline/13).

Bugs:

1. User-2's pipeline was cancelled with `error":"services "postgres" already exists`.
2. User-1's Service and service Pod were deleted, because User-2's pipeline cleaned up resources and the names were the same: `postgres`.

### System Info

```shell
`next-0b5eef7d1e`, 1 Server, 1 Agent, max workflows 2.
```

### Additional context

[woodpecker-agent.log](https://github.com/woodpecker-ci/woodpecker/files/14075623/woodpecker-agent.log)

https://github.com/woodpecker-ci/woodpecker/pull/3236#issuecomment-1902404296

### Validations

- [X] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [X] Check that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug to avoid creating a duplicate.
- [X] Checked that the bug isn't fixed in the `next` version already [https://woodpecker-ci.org/faq#which-version-of-woodpecker-should-i-use]
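Both flaws stem from sharing one cluster-wide name: the create collides and the cleanup deletes the other pipeline's resources. One natural direction is to derive the Service/Pod name from the workflow as well as the service name. A minimal sketch of that naming half; `serviceName` and the `wp-svc-` prefix are illustrative, not Woodpecker's actual scheme:

```go
package kube

import (
	"crypto/sha256"
	"fmt"
)

// serviceName builds a per-workflow, DNS-safe name for a service step,
// e.g. "wp-svc-3d2c1a9b-postgres". Hashing the workflow ID keeps the
// name deterministic (so cleanup finds the same resources it created)
// while avoiding collisions between two concurrent pipelines that both
// declare a "postgres" service.
func serviceName(workflowID, service string) string {
	h := sha256.Sum256([]byte(workflowID))
	return fmt.Sprintf("wp-svc-%x-%s", h[:4], service)
}
```

Steps would then still need the bare alias `postgres` mapped back to the generated name, e.g. via pod `hostAliases` or DNS rewriting, which is why this is only a sketch of the naming side of the fix.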
## #4627: A detached container cannot be accessed (at least with Kubernetes backend) (closed, updated 2025-01-06)
https://github.com/woodpecker-ci/woodpecker/issues/4627

### Component

agent

### Describe the bug

A detached container cannot be accessed by its name, making it unusable.

### Steps to reproduce

1. Install Woodpecker and configure the Kubernetes backend;
2. Run a detached step and access it in following steps by its name;
3. See "bad DNS name" or similar reports.

### Expected behavior

As documented, a `detached` step should behave like a service. If it cannot be accessed by DNS, it is not capable of replacing `service`.

### System Info

```shell
{
  "source": "https://github.com/woodpecker-ci/woodpecker",
  "version": "2.8.0"
}
```

### Additional context

https://github.com/woodpecker-ci/woodpecker/pull/3411 should be favorable...

### Validations

- [X] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [X] Check that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug to avoid creating a duplicate.
- [X] Checked that the bug isn't fixed in the `next` version already [https://woodpecker-ci.org/faq#which-version-of-woodpecker-should-i-use]
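One way a backend can make a detached step reachable by name is to give it a Kubernetes Service pointing at the step's pod, the same thing service steps need for cluster DNS to resolve them. A minimal sketch with client-go; the function, label key, and selector are assumptions for illustration, not Woodpecker's actual code:

```go
package kube

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	k8s "k8s.io/client-go/kubernetes"
)

// exposeDetachedStep creates a Service named after the step so other
// steps in the workflow can reach it via cluster DNS
// (<stepName>.<namespace>.svc).
func exposeDetachedStep(ctx context.Context, client k8s.Interface, namespace, stepName string, port int32) error {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: stepName, Namespace: namespace},
		Spec: corev1.ServiceSpec{
			// Assumed label; must match whatever labels the step pod carries.
			Selector: map[string]string{"step": stepName},
			Ports: []corev1.ServicePort{{
				Port:       port,
				TargetPort: intstr.FromInt(int(port)),
			}},
		},
	}
	_, err := client.CoreV1().Services(namespace).Create(ctx, svc, metav1.CreateOptions{})
	return err
}
```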
## #3555: Step freezes when container image can't be pulled (ImagePullBackOff) (closed, updated 2024-04-16)
https://github.com/woodpecker-ci/woodpecker/issues/3555

### Component

server

### Describe the bug

On a Kubernetes backend, if any container that is part of a step fails to pull its image and gets stuck in an ImagePullBackOff error, the step just keeps running indefinitely, with no feedback for the user.

I think the expected behavior here would be something along these lines:

- Woodpecker tries to pull the image for a while.
- If it fails (after a timeout), it displays an error to the user saying that it timed out/failed to pull the specific image.
- It fails the step.
- It terminates the pod on the cluster.

I'd assume that similar errors can happen if other issues leave a Pod in a Pending state (for example, there are no nodes available in the cluster). Maybe a similar "timeout" strategy could be implemented to deal with all these similar scenarios?

Note: canceling the pipeline terminates the pipeline and terminates the pod, but _marks the pipeline as successful_, which is another issue.

### System Info

```shell
{"source":"https://github.com/woodpecker-ci/woodpecker","version":"2.3.0"}
```

### Additional context

Here's a sample I did to showcase the issue (it's running in an internal Woodpecker cluster based on Woodpecker 2.3, so I can't share an open link).

I built a pipeline that references an image that does not exist, `image: broken-image-ref`.

Here's the result. It just stays stuck on the broken step, indefinitely (or at least until the pipeline timeout; I didn't wait that long), without logging anything.

If I go look at this pod in my cluster, I can see that it is stuck with the ImagePullBackOff error:

```
...
Events:
  Type     Reason                  Age                    From                     Message
  ----     ------                  ----                   ----                     -------
  Normal   Scheduled               4m55s                  default-scheduler        Successfully assigned woodpecker-pipelines/wp-01hsvnwbdgge7msffe0qn6zz68 to < redacted >
  Normal   SuccessfulAttachVolume  4m45s                  attachdetach-controller  AttachVolume.Attach succeeded for volume < redacted >
  Warning  Failed                  3m25s (x6 over 4m43s)  kubelet                  Error: ImagePullBackOff
  Normal   Pulling                 3m10s (x4 over 4m44s)  kubelet                  Pulling image "broken-image-ref"
  Warning  Failed                  3m10s (x4 over 4m43s)  kubelet                  Failed to pull image "broken-image-ref": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/broken-image-ref:latest": failed to resolve reference "docker.io/library/broken-image-ref:latest": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
  Warning  Failed                  3m10s (x4 over 4m43s)  kubelet                  Error: ErrImagePull
  Normal   BackOff                 2m58s (x7 over 4m43s)  kubelet                  Back-off pulling image "broken-image-ref"
```

### Validations

- [X] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [X] Check that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug to avoid creating a duplicate.
- [X] Checked that the bug isn't fixed in the `next` version already [https://woodpecker-ci.org/faq#which-version-of-woodpecker-should-i-use]
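The timeout strategy proposed above could key off the waiting reason the kubelet already reports in the pod's container statuses, exactly the `ErrImagePull`/`ImagePullBackOff` reasons visible in the events dump. A minimal sketch, assuming client-go types; `checkImagePull` is an illustrative helper meant to be called from a pod-watch loop, not Woodpecker's actual watcher:

```go
package kube

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// checkImagePull inspects a pending pod's container statuses and turns a
// pull failure into a hard error instead of waiting forever. A non-nil
// error would let the backend fail the step and delete the pod.
func checkImagePull(pod *corev1.Pod) error {
	// Init containers can fail to pull too, so check both lists.
	statuses := append([]corev1.ContainerStatus{}, pod.Status.InitContainerStatuses...)
	statuses = append(statuses, pod.Status.ContainerStatuses...)
	for _, cs := range statuses {
		w := cs.State.Waiting
		if w == nil {
			continue
		}
		if w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff" {
			return fmt.Errorf("container %q failed to pull image: %s", cs.Name, w.Message)
		}
	}
	return nil
}
```

A surrounding deadline (for example, failing any pod still Pending after N minutes) would cover the other stuck-Pending cases the reporter mentions, such as no schedulable nodes.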
## #4934: Runtime SIGSEGV error in agent (closed, updated 2025-03-14)
https://github.com/woodpecker-ci/woodpecker/issues/4934

### Component

agent

### Describe the bug

After running for a while, agents start throwing an error and restarting. Auto-restart from k8s does not fix the issue; only deleting the pod does.

![Image](https://github.com/user-attachments/assets/8c337665-bc29-4bf3-a064-8cdc586c7f1a)

### Steps to reproduce

It just happens after a few minutes of idle.

### Expected behavior

_No response_

### System Info

```shell
$ kubectl version
Client Version: v1.31.5
Kustomize Version: v5.4.2
Server Version: v1.31.5+k3s1
```

My helmfile configuration is as follows. In the logs above you can see 3.3.0 instead of 3.2.0, because I manually upgraded the agent to see whether a newer version fixes the issue.

```yaml
repositories:
  - name: woodpecker
    url: https://woodpecker-ci.org/
    skipTLSVerify: true

releases:
  - name: woodpecker-server
    chart: woodpecker/woodpecker
    version: 2.1.0
    namespace: woodpecker
    createNamespace: true
    cleanupOnFail: false
    devel: false
    installed: true
    skipDeps: false
    values:
      - server:
          env:
            WOODPECKER_ADMIN: admin
            WOODPECKER_HOST: https://cicd
            WOODPECKER_GITHUB: false
            WOODPECKER_GITEA: true
            WOODPECKER_GITEA_URL: https://git
            WOODPECKER_AUTHENTICATE_PUBLIC_REPOS: true
            # WOODPECKER_LOG_LEVEL: trace
          extraSecretNamesForEnvFrom:
            - woodpecker-gitea-client
            - woodpecker-gitea-secret
            - woodpecker-secret
          persistentVolume:
            storageClass: "local-path"
          ingress:
            enabled: true
            annotations:
              cert-manager.io/cluster-issuer: letsencrypt
            hosts:
              - host: cicd
                paths:
                  - path: /
                    pathType: Prefix
            tls:
              - secretName: cicd-tls
                hosts:
                  - cicd
      - agent:
          resources:
            limits:
              cpu: 500m
              memory: 512Mi
            requests:
              cpu: 100m
              memory: 64Mi
          env:
            WOODPECKER_SERVER: "woodpecker-server.woodpecker.svc.cluster.local:9000"
            WOODPECKER_BACKEND_K8S_STORAGE_CLASS: "local-path"
            WOODPECKER_BACKEND_K8S_STORAGE_RWX: "false"
            WOODPECKER_FORGE_TIMEOUT: "30s"
            WOODPECKER_MAX_WORKFLOWS: "3"
```

### Additional context

```
$ kubectl -n woodpecker get pods
NAME                        READY   STATUS             RESTARTS         AGE
woodpecker-server-0         1/1     Running            0                46h
woodpecker-server-agent-0   0/1     CrashLoopBackOff   77 (2m2s ago)    6h21m
woodpecker-server-agent-1   0/1     CrashLoopBackOff   77 (3m6s ago)    6h21m

$ kubectl -n woodpecker delete pod woodpecker-server-agent-0
pod "woodpecker-server-agent-0" deleted

$ kubectl -n woodpecker get pods
NAME                        READY   STATUS              RESTARTS         AGE
woodpecker-server-0         1/1     Running             0                46h
woodpecker-server-agent-0   0/1     ContainerCreating   0                2s
woodpecker-server-agent-1   0/1     CrashLoopBackOff    77 (3m33s ago)   6h22m

$ kubectl -n woodpecker get pods
NAME                        READY   STATUS             RESTARTS         AGE
woodpecker-server-0         1/1     Running            0                46h
woodpecker-server-agent-0   1/1     Running            1 (4s ago)       7s
woodpecker-server-agent-1   0/1     CrashLoopBackOff   77 (3m38s ago)   6h22m
```

Logs before restart:

```
{"level":"info","time":"2025-03-05T13:40:32Z","message":"log level: info"}
{"level":"info","time":"2025-03-05T13:40:32Z","message":"starting Woodpecker agent with version '3.3.0' and backend 'kubernetes' using platform 'linux/amd64' running up to 3 pipelines in parallel"}
panic: runtime error: invalid memory address or nil pointer dereference
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1a40389]

goroutine 73 [running]:
go.woodpecker-ci.org/woodpecker/v3/pipeline/backend/kubernetes.(*kube).DestroyWorkflow(0x34d87e0, {0x2364440, 0xc0000bc9b0}, 0xc000283680, {0xc0002dc998, 0x4})
	/src/pipeline/backend/kubernetes/kubernetes.go:428 +0x109
go.woodpecker-ci.org/woodpecker/v3/pipeline.(*Runtime).Run.func1()
	/src/pipeline/pipeline.go:112 +0x7f
panic({0x1d11040?, 0x349eb10?})
	/usr/local/go/src/runtime/panic.go:787 +0x132
go.woodpecker-ci.org/woodpecker/v3/pipeline/backend/kubernetes.(*kube).SetupWorkflow(0x34d87e0, {0x2364440, 0xc0000bc9b0}, 0xc000283680, {0xc0002dc998, 0x4})
	/src/pipeline/backend/kubernetes/kubernetes.go:194 +0xa3
go.woodpecker-ci.org/woodpecker/v3/pipeline.(*Runtime).Run(0xc000258d20, {0x2364440, 0xc0000bc9b0})
	/src/pipeline/pipeline.go:118 +0x2eb
go.woodpecker-ci.org/woodpecker/v3/agent.(*Runner).Run(0xc000068a80, {0x2364440, 0xc0000bc9b0}, {0x23643d0, 0x34fc7e0})
	/src/agent/runner.go:153 +0xeb3
go.woodpecker-ci.org/woodpecker/v3/cmd/agent/core.run.func5()
	/src/cmd/agent/core/agent.go:293 +0x205
golang.org/x/sync/errgroup.(*Group).Go.func1()
	/src/vendor/golang.org/x/sync/errgroup/errgroup.go:78 +0x50
created by golang.org/x/sync/errgroup.(*Group).Go in goroutine 1
	/src/vendor/golang.org/x/sync/errgroup/errgroup.go:75 +0x93
```

### Validations

- [x] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [x] Check that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug to avoid creating a duplicate.
- [x] Checked that the bug isn't fixed in the `next` version already [https://woodpecker-ci.org/versions]
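The double panic above (first in `SetupWorkflow` at kubernetes.go:194, then in `DestroyWorkflow` at kubernetes.go:428 while the deferred cleanup ran) is consistent with a nil field being dereferenced on both paths, plausibly an uninitialized API client. A guard of roughly this shape would turn the crash loop into a reported error; this is only a sketch under that assumption, and `engine`/`ensureClient` are hypothetical names, not Woodpecker's actual code:

```go
package kube

import (
	"errors"

	k8s "k8s.io/client-go/kubernetes"
)

// ErrNoCluster is returned instead of dereferencing a nil client, so the
// agent fails the workflow with a backend error rather than dying in a
// CrashLoopBackOff.
var ErrNoCluster = errors.New("kubernetes backend: client not initialized")

type engine struct {
	client k8s.Interface // assumed nil if backend startup never completed
}

// ensureClient is the guard both the setup path and the deferred
// destroy path would call before touching the Kubernetes API.
func (e *engine) ensureClient() error {
	if e == nil || e.client == nil {
		return ErrNoCluster
	}
	return nil
}
```

Guarding the deferred cleanup matters on its own: without it, a panic during setup cascades into a second panic during teardown, which is exactly the two-line `panic:` output in the log above.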