## [#5345 — Kubernetes Pod never completes initialisation - hangs forever](https://github.com/woodpecker-ci/woodpecker/issues/5345)

*updated 2025-07-21*

I did attempt `pull_5323-alpine` as recommended in #5238, but it didn't work: the pod never stood up.

### Steps to reproduce

1. Install Woodpecker on a k3s cluster (using Longhorn as the storage backend and Forgejo as the forge, though that hasn't been an issue before).
2. Create a pipeline.
3. Attempt to run it.
4. Pod creation never completes.

### Expected behavior

The pod is created and the pipeline runs.

### System Info

```shell
source	"https://github.com/woodpecker-ci/woodpecker"
version	"3.8.0"
```

### Additional context

![Screenshot](https://github.com/user-attachments/assets/6fbbcd41-751c-4d5d-b852-f05e57d6ca12)

![Screenshot](https://github.com/user-attachments/assets/6fc6001e-01e4-4cce-b9c4-2ccc144b9b1a)

### Validations

- [x] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [x] Check that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug to avoid creating a duplicate.
- [x] Checked that the bug isn't fixed in the `next` version already (https://woodpecker-ci.org/versions).

## [#4409 — Logs only showing the first few lines of each step](https://github.com/woodpecker-ci/woodpecker/issues/4409)

*updated 2024-11-19*

### Component

agent, web-ui

### Describe the bug

On all my pipelines, only the first few log lines appear in the web UI; everything after that is cut off. I can see the full log output when I `kubectl logs` the step pod (`wp-1234...`), but it doesn't seem to get sent from the agent to the server.

I also see a lot of

```
{"level":"error","repo":"renovatebot/renovate","pipeline":"2064","workflow_id":"6848","image":"docker.io/woodpeckerci/plugin-git:2.5.1","workflow_id":"6848","error":"io: read/write on closed pipe","time":"2024-11-17T13:04:52Z","message":"copy limited logStream part"}
```

on the agent, and

```
{"level":"error","repo_id":"29","pipeline_id":"4660","workflow_id":"6871","error":"stream: not found","time":"2024-11-18T12:55:30Z","message":"done: cannot close log stream for step 21827"}
{"level":"error","repo_id":"29","pipeline_id":"4660","workflow_id":"6871","error":"sql: no rows in result set","time":"2024-11-18T12:55:30Z","message":"queue.Done: cannot ack workflow"}
```

on the server.

It has to be mentioned that until about a month ago (I upgrade Woodpecker almost immediately after each release), the logs worked almost flawlessly for more than a year, ever since I switched from Drone to Woodpecker.

Currently I don't know where to start debugging to get to the bottom of this. Do you have any pointers?

### Steps to reproduce

Install Woodpecker using Helm chart version 1.6.2 on a Kubernetes v1.31 cluster, setting only `WOODPECKER_BACKEND_K8S_STORAGE_RWX: false` besides the forge setup described at https://woodpecker-ci.org/docs/next/administration/forges/forgejo.

### Expected behavior

I expect to always get all the logs when I click on a step/workflow.
### System Info

```shell
{"source":"https://github.com/woodpecker-ci/woodpecker","version":"2.7.3"}
```

### Additional context

_No response_

### Validations

- [X] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [X] Check that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug to avoid creating a duplicate.
- [X] Checked that the bug isn't fixed in the `next` version already (https://woodpecker-ci.org/faq#which-version-of-woodpecker-should-i-use).

## [#5346 — Kubernetes: de-privileging the clone step](https://github.com/woodpecker-ci/woodpecker/issues/5346)

*feature · updated 2025-07-22*

### Clear and concise description of the problem

Hi, thanks for this project! The Kubernetes integration appears to be much better thought out than in most other CI systems we've used.

Unfortunately, as far as we can tell, cloning a repo using the Woodpecker-provided `netrc` credentials isn't possible without root privileges, short of marking the project as "Trusted: Security", which is probably worse overall from a security perspective:

* The default clone step runs as the root user, so the runner namespace must be privileged. On security-conscious Kubernetes distributions like Talos, namespaces are not privileged by default, so the runner namespace needs the `pod-security.kubernetes.io/enforce: privileged` label on these distributions. In our clusters, we prefer to add this label only when the app absolutely requires elevated privileges, e.g. because it needs to use host networking.
* Because the default clone step runs as the root user, the otherwise very handy `WOODPECKER_BACKEND_K8S_SECCTX_NONROOT` option is incompatible with it. The default clone step pod will always fail to start if this option is set.
* As far as I can tell, the only reason Woodpecker runs the default clone step as the root user is to set the filesystem permissions on the workspace directory. However, unless I'm missing something, that could easily be addressed with the `fsGroupChangePolicy` pod security context setting, without any need for root privileges.

Initially, #4151 had support for `fsGroupChangePolicy` (e.g. https://github.com/woodpecker-ci/woodpecker/pull/4151/commits/3c7e071a56713c4176e713c8572a94fc21b0b7bc), but that appears to have been removed at some stage before the merge, though it's not clear to me why.

### Suggested solution

Unless there's some other reason why the default clone step needs to run as root, adding support for `fsGroupChangePolicy` in the Kubernetes `backend_options` and de-privileging the default clone step would be a major security posture improvement for the Kubernetes backend.
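For illustration, a minimal sketch of what the proposed option could look like in a pipeline (hypothetical Woodpecker syntax — `fsGroupChangePolicy` is a standard Kubernetes pod `securityContext` field, but Woodpecker does not currently expose it):

```yaml
steps:
  - name: clone
    image: quay.io/woodpeckerci/plugin-git:2.6.5
    backend_options:
      kubernetes:
        securityContext:
          runAsNonRoot: true
          fsGroup: 100
          # hypothetical: let the kubelet chown the workspace volume on
          # mount instead of running the clone step as root
          fsGroupChangePolicy: OnRootMismatch
```

With `OnRootMismatch`, the kubelet only recursively changes ownership when the volume root doesn't already match `fsGroup`, so repeated runs against the same volume stay fast.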
### Alternative

### Run the clone step manually

Because the `woodpeckerci/plugin-git` plugin is trusted, Woodpecker provides it with the `netrc` credentials when run as a plugin. Therefore, we tried running a manual clone step, first like this:

```yaml
skip_clone: true

steps:
  - name: clone
    image: quay.io/woodpeckerci/plugin-git:2.6.5
    backend_options:
      kubernetes:
        securityContext:
          runAsUser: 405 # `guest` user in Alpine
          runAsGroup: 100 # `users` group in Alpine
          fsGroup: 100
          privileged: false
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
```

However, that failed because the default `HOME` is `/app`, and the pod couldn't access `/app/.netrc`.

We then tried this:

```yaml
skip_clone: true

steps:
  - name: clone
    image: quay.io/woodpeckerci/plugin-git:2.6.5
    settings:
      home: /tmp
    backend_options:
      kubernetes:
        securityContext:
          runAsUser: 405
          runAsGroup: 100
          fsGroup: 100
          privileged: false
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
```

Then the pod was able to read `/tmp/.netrc`, but the clone failed due to permissions on the mounted workspace, which leads us back to the need for `fsGroupChangePolicy`:

```
+ git init --object-format sha1 -b main
/woodpecker/src/forgejo.hackworth-corp.com/hackworth/hops/.git: Permission denied
exit status 1
```

### User namespaces

Since v1.30, Kubernetes supports user namespaces: https://kubernetes.io/docs/concepts/workloads/pods/user-namespaces/

The feature is currently in beta. When it's enabled and `pod.spec.hostUsers` is `false`, Kubernetes creates a separate UID/GID namespace, so that e.g. UID `0` in the pod is not the same as UID `0` outside the pod; i.e., it has no privileges outside the pod. If a Woodpecker Kubernetes backend option to disable `hostUsers` were available, we'd be more comfortable with the default clone step running as UID `0`. (For starters, based on our testing of the feature with other apps, it's not necessary to privilege the namespace when running containers as UID `0` with `hostUsers` disabled.)
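As a sketch of the upstream field such a backend option would map to (plain Kubernetes pod spec, not Woodpecker syntax; the pod name is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wp-example-step
spec:
  # beta since Kubernetes v1.30 (UserNamespacesSupport feature gate):
  # UID 0 inside the pod maps to an unprivileged UID on the host
  hostUsers: false
  containers:
    - name: clone
      image: quay.io/woodpeckerci/plugin-git:2.6.5
```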
However, we'd still feel much better if the default clone step ran as non-root.

Irrespective of this particular issue, user namespaces would be a useful feature for other CI workloads that need to run as root but don't require actual system-level privileges.

### Additional context

_No response_

### Validations

- [x] Checked that the feature isn't part of the `next` version already (https://woodpecker-ci.org/versions).
- [x] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [x] Check that there isn't already an [issue](https://github.com/woodpecker-ci/woodpecker/issues) that requests the same feature to avoid creating a duplicate.

## [#1595 — Ability to add a volume by step in kubernetes backend](https://github.com/woodpecker-ci/woodpecker/issues/1595)

*feature · updated 2024-02-12*

### Clear and concise description of the problem

For example, in a `go build` step I want to mount a volume to cache dependencies and share them with another step in the same pipeline.

### Suggested solution

[Drone CI proposes four types of temporary volumes.](https://docs.drone.io/pipeline/kubernetes/syntax/volumes/) A sketch of what this could look like in Woodpecker follows after the checklist below.

### Alternative

_No response_

### Additional context

_No response_

### Validations

- [X] Checked that the feature isn't part of the `next` version already (https://woodpecker-ci.org/faq#which-version-of-woodpecker-should-i-use).
- [X] Read the [Contributing Guidelines](https://github.com/woodpecker-ci/woodpecker/blob/master/CONTRIBUTING.md).
- [X] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [X] Check that there isn't already an [issue](https://github.com/woodpecker-ci/woodpecker/issues) that requests the same feature to avoid creating a duplicate.
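A sketch of the requested per-step volumes (hypothetical syntax, modeled on Woodpecker's step-level `volumes` option, with a named cache shared between two steps of the same run):

```yaml
steps:
  - name: deps
    image: golang:1.22
    volumes:
      # hypothetical temporary volume, scoped to this pipeline run
      - go-cache:/go/pkg/mod
    commands:
      - go mod download

  - name: build
    image: golang:1.22
    volumes:
      # same named volume, so the module cache is reused
      - go-cache:/go/pkg/mod
    commands:
      - go build ./...
```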
Maybe a similar \"timeout\" strategy could be implemented to deal with all these similar scenarios?\r\n\r\nNote: canceling the pipeline terminates the pipeline and terminates the pod, but _marks the pipeline as successful_, which is another issue.\r\n\r\n### System Info\r\n\r\n```shell\r\n{\"source\":\"https://github.com/woodpecker-ci/woodpecker\",\"version\":\"2.3.0\"}\r\n```\r\n\r\n\r\n### Additional context\r\n\r\nHere's a sample I did to showcase the issue (it's running in an internal Woodpecker cluster based on Woodpecker 2.3 so I can't share an open link). \r\n\r\nI have built a pipeline where I have referenced an image that does not exist, `image: broken-image-ref`.\r\n\r\n\r\n\r\nHere's the result. It just stays stuck on the broken step, indefinitely (or at least possibility until the pipeline timeout; didn't get to wait that long) without logging anything.\r\n\r\n\r\n\r\nIf I go look at this pod in my cluster, I can see that it is stuck with the ImagePullBackOff error:\r\n\r\n```\r\n...\r\nEvents:\r\n Type Reason Age From Message\r\n ---- ------ ---- ---- -------\r\n Normal Scheduled 4m55s default-scheduler Successfully assigned woodpecker-pipelines/wp-01hsvnwbdgge7msffe0qn6zz68 to \u003C redacated >\r\n Normal SuccessfulAttachVolume 4m45s attachdetach-controller AttachVolume.Attach succeeded for volume \u003C redacated >\r\n Warning Failed 3m25s (x6 over 4m43s) kubelet Error: ImagePullBackOff\r\n Normal Pulling 3m10s (x4 over 4m44s) kubelet Pulling image \"broken-image-ref\"\r\n Warning Failed 3m10s (x4 over 4m43s) kubelet Failed to pull image \"broken-image-ref\": rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/broken-image-ref:latest\": failed to resolve reference \"docker.io/library/broken-image-ref:latest\": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed\r\n Warning Failed 3m10s (x4 over 4m43s) kubelet Error: ErrImagePull\r\n Normal BackOff 2m58s (x7 over 4m43s) kubelet Back-off pulling image \"broken-image-ref\"\r\n ```\r\n\r\n\r\n### Validations\r\n\r\n- [X] Read the [docs](https://woodpecker-ci.org/docs/intro).\r\n- [X] Check that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug to avoid creating a duplicate.\r\n- [X] Checked that the bug isn't fixed in the `next` version already [https://woodpecker-ci.org/faq#which-version-of-woodpecker-should-i-use]",[3074,3075],{"name":3019,"color":3020},{"name":3022,"color":3023},3555,"closed","Step freezes when container image can't be pulled (ImagePullBackOff)","2024-04-16T06:10:51Z","https://github.com/woodpecker-ci/woodpecker/issues/3555",0.67697924,{"description":3083,"labels":3084,"number":3087,"owner":3025,"repository":3026,"state":3077,"title":3088,"updated_at":3089,"url":3090,"score":3091},"This worked with WP 2.3 and Kubernetes backend:\r\n\r\n```\r\n publish:\r\n image: woodpeckerci/plugin-docker-buildx\r\n settings:\r\n repo: *repo\r\n tags: 8h\r\n```\r\n\r\nUsing WP 2.4 the docker daemon does not start.\r\n\r\nDebugging the Pod manifest I see an empty security context and the docker daemon does not start.\r\n\r\nThis works with WP 2.4.1 but is not so user friendly:\r\n\r\n```\r\n publish:\r\n image: woodpeckerci/plugin-docker-buildx\r\n privileged: true\r\n backend_options:\r\n kubernetes:\r\n securityContext:\r\n privileged: true\r\n settings:\r\n repo: *repo\r\n tags: 8h\r\n daemon.debug: \"true\"\r\n```\r\n\r\n\r\nSee also: 
## [#5322 — name is not a valid kubernetes DNS name](https://github.com/woodpecker-ci/woodpecker/issues/5322)

*forge/bitbucket · closed · updated 2025-07-18*

### Component

agent

### Describe the bug

After upgrading Woodpecker from 3.4 to 3.5+, I started getting this error in the `clone` step of a pipeline:

```
name is not a valid kubernetes DNS name
```

### Steps to reproduce

1. Install Woodpecker using the official Helm chart with the following values:

```yaml
server:
  env:
    WOODPECKER_HOST: <REDACTED>
    WOODPECKER_BITBUCKET: true
    WOODPECKER_BITBUCKET_CLIENT: <REDACTED>
    WOODPECKER_BITBUCKET_SECRET: <REDACTED>
    WOODPECKER_OPEN: true
    WOODPECKER_ORGS: <REDACTED>
    WOODPECKER_REPO_OWNERS: <REDACTED>
    WOODPECKER_DISABLE_USER_AGENT_REGISTRATION: true
    WOODPECKER_CONFIG_SERVICE_ENDPOINT: http://pipeline-fetcher:7000
    WOODPECKER_ADMIN: <REDACTED>
    WOODPECKER_DATABASE_DRIVER: postgres
    WOODPECKER_DATABASE_DATASOURCE: <REDACTED>

agent:
  replicaCount: 1
  env:
    WOODPECKER_BACKEND_K8S_STORAGE_RWX: false
```

2. Trigger a push from Bitbucket Cloud with a simple pipeline like this:

```yaml
steps:
  - name: echo
    image: alpine/git
    commands:
      - echo "Done"
```

### Expected behavior

It should work as before.

### System Info

```shell
Woodpecker: 3.8.0 (same problem with other 3.5+ versions)
k8s: v1.31.8-gke.1045000
```

### Additional context

I did a bit of debugging, and here is what I've found:

1. When Woodpecker receives a webhook push from Bitbucket, it parses the repository data and updates it in the database.
2. The default branch is no longer included in the webhook payload, so Woodpecker sets the `branch` column in the `repos` table to an empty string.
3. When Woodpecker attempts to start a pipeline with the k8s backend, it uses `repo.branch` as a pod label ([link](https://github.com/woodpecker-ci/woodpecker/blob/b38228717083d4e91f38190e6bf90385058b7167/pipeline/backend/kubernetes/pod.go#L107)). Since the branch is empty, it throws the DNS name error, as illustrated below.
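A sketch of the pod metadata that results (the label key here is hypothetical; the point is that the agent appears to validate label values as Kubernetes DNS names before creating the pod, and an empty string fails that check):

```yaml
metadata:
  labels:
    # value comes from repo.branch, which is now "" — fails validation
    branch: ""
```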
### Validations

- [x] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [x] Check that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug to avoid creating a duplicate.
- [x] Checked that the bug isn't fixed in the `next` version already (https://woodpecker-ci.org/versions).

## [#3470 — Provide a separate WOODPECKER_GITEA_API_URL for easier local setup](https://github.com/woodpecker-ci/woodpecker/issues/3470)

*enhancement · closed · updated 2024-03-21*

### Clear and concise description of the problem

When running inside Kubernetes/Docker without externally reachable domains, the `.local` and `.localhost` domains can be used for browser-based access. These point to `127.0.0.1`, which does not work for inter-container connectivity.

### Suggested solution

An optional, separate environment variable for the Gitea server's API — used for OAuth and API requests, and distinct from the URL accessed by the user's browser — would fix that (sketched after the checklist below).

### Alternative

_No response_

### Additional context

_No response_

### Validations

- [X] Checked that the feature isn't part of the `next` version already (https://woodpecker-ci.org/faq#which-version-of-woodpecker-should-i-use).
- [X] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [X] Check that there isn't already an [issue](https://github.com/woodpecker-ci/woodpecker/issues) that requests the same feature to avoid creating a duplicate.
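A sketch of how this might look in a compose setup (`WOODPECKER_GITEA_URL` is the existing option; `WOODPECKER_GITEA_API_URL` is the variable requested by this issue, so its behavior is hypothetical):

```yaml
services:
  woodpecker-server:
    image: woodpeckerci/woodpecker-server
    environment:
      # browser-facing URL; resolves to 127.0.0.1 on the host
      - WOODPECKER_GITEA_URL=http://gitea.localhost
      # requested option: container-reachable address for OAuth/API calls
      - WOODPECKER_GITEA_API_URL=http://gitea:3000
```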
Kubernetes mounts","2024-11-16T20:46:00Z","https://github.com/woodpecker-ci/woodpecker/issues/4369",0.6961461,["Reactive",3128],{},["Set"],["ShallowReactive",3131],{"$fTRc1wZytZ_XrK4EfJfei_Sz-An4H4Yy6syhVxH_PVJc":-1,"$fMovajFhfqsRxTuRup4mp14wTUwv6jWonZXmQVtIpOVQ":-1},"/woodpecker-ci/woodpecker/4627"]