## [#5338: Build status not completing on matrix pipelines with BitBucket](https://github.com/woodpecker-ci/woodpecker/issues/5338)

*Open · last updated 2025-07-17*

And here is the outcome on BitBucket:

<img width="479" height="182" alt="Image" src="https://github.com/user-attachments/assets/11c2a65d-cbf2-4162-9805-3d035bdac04e" />

### Steps to reproduce

1. Run Woodpecker with the BitBucket Cloud forge
2. Create a pipeline that uses a matrix
3. Let the pipeline finish on Woodpecker
4. Observe the build statuses on the commit in BitBucket

### Expected behavior

Both matrix pipelines should show as completed.

### System Info

```shell
Woodpecker version: v3.8.0
```

### Additional context

_No response_

### Validations

- [x] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [x] Check that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug to avoid creating a duplicate.
- [x] Checked that the bug isn't fixed in the `next` version already [https://woodpecker-ci.org/versions]

## [#4446: Agent stops taking jobs after server throws 5XX errors](https://github.com/woodpecker-ci/woodpecker/issues/4446)

*Open · last updated 2025-03-12*

## [#4409: Logs only showing the first few lines of each step](https://github.com/woodpecker-ci/woodpecker/issues/4409)

*Open · last updated 2024-11-19*

### Component

agent, web-ui

### Describe the bug

On all my pipelines, only the first few log lines appear in the web UI; everything after that is cut off. I can certainly see the full log output when I `kubectl logs` the step pod (`wp-1234...`), but it doesn't seem to get sent from the agent to the server.

I also see a lot of

`{"level":"error","repo":"renovatebot/renovate","pipeline":"2064","workflow_id":"6848","image":"docker.io/woodpeckerci/plugin-git:2.5.1","workflow_id":"6848","error":"io: read/write on closed pipe","time":"2024-11-17T13:04:52Z","message":"copy limited logStream part"}`

on the agent, and

`{"level":"error","repo_id":"29","pipeline_id":"4660","workflow_id":"6871","error":"stream: not found","time":"2024-11-18T12:55:30Z","message":"done: cannot close log stream for step 21827"}` and `{"level":"error","repo_id":"29","pipeline_id":"4660","workflow_id":"6871","error":"sql: no rows in result set","time":"2024-11-18T12:55:30Z","message":"queue.Done: cannot ack workflow"}`

on the server.

It has to be mentioned that until about a month ago (I upgrade Woodpecker almost immediately after each release) the logs had worked almost flawlessly for more than a year, ever since I switched from Drone to Woodpecker.

Currently I don't know where to start debugging to get to the bottom of this. Do you have any pointers?

### Steps to reproduce

Install Woodpecker using Helm chart version 1.6.2 on a Kubernetes v1.31 cluster, setting only `WOODPECKER_BACKEND_K8S_STORAGE_RWX: false` besides the forge setup described in https://woodpecker-ci.org/docs/next/administration/forges/forgejo.
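For reference, here is a minimal sketch of the chart values that install corresponds to. The `agent.env` layout mirrors the helmfile quoted in #4934 further down; treat the exact placement of the key as an assumption about chart 1.6.2:

```yaml
# values.yaml sketch for the woodpecker Helm chart (layout assumed, see above)
agent:
  env:
    # the only non-default setting the reporter mentions
    WOODPECKER_BACKEND_K8S_STORAGE_RWX: "false"
```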
### Expected behavior

I expect to always get all the logs when I click on a step/workflow.

### System Info

```shell
{"source":"https://github.com/woodpecker-ci/woodpecker","version":"2.7.3"}
```

### Additional context

_No response_

### Validations

- [x] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [x] Check that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug to avoid creating a duplicate.
- [x] Checked that the bug isn't fixed in the `next` version already [https://woodpecker-ci.org/faq#which-version-of-woodpecker-should-i-use]

## [#3555: Step freezes when container image can't be pulled (ImagePullBackOff)](https://github.com/woodpecker-ci/woodpecker/issues/3555)

*Closed · last updated 2024-04-16*

### Component

server

### Describe the bug

On the Kubernetes backend, if any container that is part of a step fails to pull its image and gets stuck in an ImagePullBackOff error, the step just keeps running indefinitely, with no feedback for the user.

I think the expected behavior here would be something along these lines:

- Woodpecker tries to pull the image for a while
- If that fails (after a timeout), it displays an error to the user saying that pulling the specific image timed out/failed
- It fails the step
- It terminates the pod on the cluster

I'd assume that similar errors can happen whenever other issues leave a pod in a pending state (for example, when no nodes are available in the cluster). Maybe a similar "timeout" strategy could be implemented to deal with all of these scenarios?

Note: canceling the pipeline terminates the pipeline and the pod, but _marks the pipeline as successful_, which is another issue.

### System Info

```shell
{"source":"https://github.com/woodpecker-ci/woodpecker","version":"2.3.0"}
```

### Additional context

Here's a sample I made to showcase the issue (it's running in an internal Woodpecker cluster based on Woodpecker 2.3, so I can't share an open link). I have built a pipeline that references an image that does not exist, `image: broken-image-ref`.
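A minimal pipeline along those lines should be enough to trigger the hang; this is a sketch, with the step name and command chosen arbitrarily:

```yaml
# repro sketch: any step whose image cannot be pulled
steps:
  - name: broken
    image: broken-image-ref  # not resolvable in any registry
    commands:
      - echo "this never runs"  # the step hangs before this executes
```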
Here's the result: the pipeline just stays stuck on the broken step, indefinitely (or at least possibly until the pipeline timeout; I didn't get to wait that long) without logging anything.

If I go look at this pod in my cluster, I can see that it is stuck with the ImagePullBackOff error:

```
...
Events:
  Type     Reason                  Age                    From                     Message
  ----     ------                  ----                   ----                     -------
  Normal   Scheduled               4m55s                  default-scheduler        Successfully assigned woodpecker-pipelines/wp-01hsvnwbdgge7msffe0qn6zz68 to < redacted >
  Normal   SuccessfulAttachVolume  4m45s                  attachdetach-controller  AttachVolume.Attach succeeded for volume < redacted >
  Warning  Failed                  3m25s (x6 over 4m43s)  kubelet                  Error: ImagePullBackOff
  Normal   Pulling                 3m10s (x4 over 4m44s)  kubelet                  Pulling image "broken-image-ref"
  Warning  Failed                  3m10s (x4 over 4m43s)  kubelet                  Failed to pull image "broken-image-ref": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/broken-image-ref:latest": failed to resolve reference "docker.io/library/broken-image-ref:latest": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
  Warning  Failed                  3m10s (x4 over 4m43s)  kubelet                  Error: ErrImagePull
  Normal   BackOff                 2m58s (x7 over 4m43s)  kubelet                  Back-off pulling image "broken-image-ref"
```

### Validations

- [x] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [x] Check that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug to avoid creating a duplicate.
- [x] Checked that the bug isn't fixed in the `next` version already [https://woodpecker-ci.org/faq#which-version-of-woodpecker-should-i-use]

## [#3716: Woodpecker UI output not updating/out of sync](https://github.com/woodpecker-ci/woodpecker/issues/3716)

*Closed · last updated 2024-06-13*

### Component

web-ui

### Describe the bug

I am attempting to use Woodpecker to build an image based on the [Cypress Factory](https://hub.docker.com/r/cypress/factory/) Docker image, which takes build args to generate a container with specific package versions. I am building the image with the Woodpecker docker-buildx plugin.

When I run my build job and watch the progress in the UI, the job appears to be stuck installing GPG keys. However, when I look at the log output on the agent, I can see that the GPG key installations succeed, the remaining steps execute, and the final image is published.

Additionally, the UI never marks the job as completed, and the Kubernetes build pods stay in a 'Completed' state until the job times out or is manually cancelled. When this happens, the agents no longer pull new work from the server.

### Steps to reproduce

1. Create a Dockerfile using the Cypress Factory image:

```
ARG NODE_VERSION='18.16.0'
ARG YARN_VERSION='1.22.19'
ARG CYPRESS_VERSION='13.7.1'
ARG CHROME_VERSION='120.0.6099.71-1'
ARG EDGE_VERSION='120.0.2210.61-1'
ARG FIREFOX_VERSION='120.0.1'

FROM cypress/factory
```
2. Create a Woodpecker pipeline to build the image:

```yaml
steps:
  - name: docker-cypress
    image: woodpeckerci/plugin-docker-buildx
    secrets: [docker_username, docker_password]
    settings:
      platforms: linux/amd64
      registry: <my_docker_registry>
      repo: <my_docker_repo>
      dockerfile: Dockerfile.cypress.test
      tags: buildx.test-build
      username:
        from_secret: docker_username
      password:
        from_secret: docker_password
      build_args_from_env:
        - CI_REPO
        - CI_COMMIT_BRANCH
        - CI_COMMIT_SHA
        - CI_PIPELINE_NUMBER
      debug: true
```

3. Run the job

### Expected behavior

* The job should successfully build the container and be marked successful
* The UI log output should stay in sync with the agent log output
* The UI should not hang indefinitely during the build

### System Info

```shell
{
  "source": "https://github.com/woodpecker-ci/woodpecker",
  "version": "2.4.1"
}
```

### Additional context

_No response_

### Validations

- [x] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [x] Check that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug to avoid creating a duplicate.
- [x] Checked that the bug isn't fixed in the `next` version already [https://woodpecker-ci.org/faq#which-version-of-woodpecker-should-i-use]

## [#4934: Runtime SIGSEGV error in agent](https://github.com/woodpecker-ci/woodpecker/issues/4934)

*Closed · last updated 2025-03-14*

### Component

agent

### Describe the bug

After running for a while, agents start throwing an error and restarting. Auto-restart by Kubernetes does not fix the issue; only deleting the pod does.

### Steps to reproduce

It just happens after a few minutes of idling.

### Expected behavior

_No response_

### System Info

```shell
$ kubectl version
Client Version: v1.31.5
Kustomize Version: v5.4.2
Server Version: v1.31.5+k3s1
```

My helmfile configuration follows.
(In the logs under Additional context you can see 3.3.0 instead of 3.2.0, because I manually upgraded the agent to check whether a newer version fixes the issue; it does not.)

```yaml
repositories:
  - name: woodpecker
    url: https://woodpecker-ci.org/
    skipTLSVerify: true

releases:
  - name: woodpecker-server
    chart: woodpecker/woodpecker
    version: 2.1.0
    namespace: woodpecker
    createNamespace: true
    cleanupOnFail: false
    devel: false
    installed: true
    skipDeps: false
    values:
      - server:
          env:
            WOODPECKER_ADMIN: admin
            WOODPECKER_HOST: https://cicd
            WOODPECKER_GITHUB: false
            WOODPECKER_GITEA: true
            WOODPECKER_GITEA_URL: https://git
            WOODPECKER_AUTHENTICATE_PUBLIC_REPOS: true
            # WOODPECKER_LOG_LEVEL: trace

          extraSecretNamesForEnvFrom:
            - woodpecker-gitea-client
            - woodpecker-gitea-secret
            - woodpecker-secret

          persistentVolume:
            storageClass: "local-path"

          ingress:
            enabled: true
            annotations:
              cert-manager.io/cluster-issuer: letsencrypt
            hosts:
              - host: cicd
                paths:
                  - path: /
                    pathType: Prefix
            tls:
              - secretName: cicd-tls
                hosts:
                  - cicd
      - agent:
          resources:
            limits:
              cpu: 500m
              memory: 512Mi
            requests:
              cpu: 100m
              memory: 64Mi
          env:
            WOODPECKER_SERVER: "woodpecker-server.woodpecker.svc.cluster.local:9000"
            WOODPECKER_BACKEND_K8S_STORAGE_CLASS: "local-path"
            WOODPECKER_BACKEND_K8S_STORAGE_RWX: "false"
            WOODPECKER_FORGE_TIMEOUT: "30s"
            WOODPECKER_MAX_WORKFLOWS: "3"
```

### Additional context

```
$ kubectl -n woodpecker get pods
NAME                        READY   STATUS             RESTARTS         AGE
woodpecker-server-0         1/1     Running            0                46h
woodpecker-server-agent-0   0/1     CrashLoopBackOff   77 (2m2s ago)    6h21m
woodpecker-server-agent-1   0/1     CrashLoopBackOff   77 (3m6s ago)    6h21m

$ kubectl -n woodpecker delete pod woodpecker-server-agent-0
pod "woodpecker-server-agent-0" deleted

$ kubectl -n woodpecker get pods
NAME                        READY   STATUS              RESTARTS         AGE
woodpecker-server-0         1/1     Running             0                46h
woodpecker-server-agent-0   0/1     ContainerCreating   0                2s
woodpecker-server-agent-1   0/1     CrashLoopBackOff    77 (3m33s ago)   6h22m

$ kubectl -n woodpecker get pods
NAME                        READY   STATUS             RESTARTS         AGE
woodpecker-server-0         1/1     Running            0                46h
woodpecker-server-agent-0   1/1     Running            1 (4s ago)       7s
woodpecker-server-agent-1   0/1     CrashLoopBackOff   77 (3m38s ago)   6h22m
```

Logs before restart:

```
{"level":"info","time":"2025-03-05T13:40:32Z","message":"log level: info"}
{"level":"info","time":"2025-03-05T13:40:32Z","message":"starting Woodpecker agent with version '3.3.0' and backend 'kubernetes' using platform 'linux/amd64' running up to 3 pipelines in parallel"}
panic: runtime error: invalid memory address or nil pointer dereference
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1a40389]

goroutine 73 [running]:
go.woodpecker-ci.org/woodpecker/v3/pipeline/backend/kubernetes.(*kube).DestroyWorkflow(0x34d87e0, {0x2364440, 0xc0000bc9b0}, 0xc000283680, {0xc0002dc998, 0x4})
	/src/pipeline/backend/kubernetes/kubernetes.go:428 +0x109
go.woodpecker-ci.org/woodpecker/v3/pipeline.(*Runtime).Run.func1()
	/src/pipeline/pipeline.go:112 +0x7f
panic({0x1d11040?, 0x349eb10?})
	/usr/local/go/src/runtime/panic.go:787 +0x132
go.woodpecker-ci.org/woodpecker/v3/pipeline/backend/kubernetes.(*kube).SetupWorkflow(0x34d87e0, {0x2364440, 0xc0000bc9b0}, 0xc000283680, {0xc0002dc998, 0x4})
	/src/pipeline/backend/kubernetes/kubernetes.go:194 +0xa3
go.woodpecker-ci.org/woodpecker/v3/pipeline.(*Runtime).Run(0xc000258d20, {0x2364440, 0xc0000bc9b0})
	/src/pipeline/pipeline.go:118 +0x2eb
go.woodpecker-ci.org/woodpecker/v3/agent.(*Runner).Run(0xc000068a80, {0x2364440, 0xc0000bc9b0}, {0x23643d0, 0x34fc7e0})
	/src/agent/runner.go:153 +0xeb3
go.woodpecker-ci.org/woodpecker/v3/cmd/agent/core.run.func5()
	/src/cmd/agent/core/agent.go:293 +0x205
golang.org/x/sync/errgroup.(*Group).Go.func1()
	/src/vendor/golang.org/x/sync/errgroup/errgroup.go:78 +0x50
created by golang.org/x/sync/errgroup.(*Group).Go in goroutine 1
	/src/vendor/golang.org/x/sync/errgroup/errgroup.go:75 +0x93
```

### Validations

- [x] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [x] Check that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug to avoid creating a duplicate.
- [x] Checked that the bug isn't fixed in the `next` version already [https://woodpecker-ci.org/versions]

## [#3254: Pod Annotations is missing on 2.2.2](https://github.com/woodpecker-ci/woodpecker/issues/3254)

*Closed · last updated 2024-01-22*

### Component

server, agent

### Describe the bug

When the variable `WOODPECKER_BACKEND_K8S_POD_ANNOTATIONS` is set, the annotations are no longer passed to the pod.

I think this code is related to the bug:
https://github.com/woodpecker-ci/woodpecker/blob/main/pipeline/backend/kubernetes/kubernetes.go#L166C4-L166C18
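For context, the variable carries a map of annotation keys and values that the agent should apply to each worker pod. A hypothetical agent configuration looks like this (the annotation keys and values here are invented for illustration):

```yaml
# agent env sketch; annotation keys/values are made up
agent:
  env:
    WOODPECKER_BACKEND_K8S_POD_ANNOTATIONS: '{"example.com/owner":"ci","prometheus.io/scrape":"true"}'
```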
### System Info

```shell
{"source":"https://github.com/woodpecker-ci/woodpecker","version":"2.2.2"}
```

### Additional context

_No response_

### Validations

- [x] Read the [docs](https://woodpecker-ci.org/docs/intro).
- [x] Check that there isn't [already an issue](https://github.com/woodpecker-ci/woodpecker/issues) that reports the same bug to avoid creating a duplicate.
- [x] Checked that the bug isn't fixed in the `next` version already [https://woodpecker-ci.org/faq#which-version-of-woodpecker-should-i-use]

## [#178: Inconsistent state after woodpecker container restart during ongoing build](https://github.com/woodpecker-ci/woodpecker/issues/178)

*Closed · last updated 2021-12-19*

When a long build is in progress, restarting the Woodpecker containers (agent, then server) with docker-compose, similar to https://woodpecker.laszlo.cloud/server-setup/, forces the Docker daemon to kill the agent container (the server is then stopped as well, but the build container is not). After the next start of the Woodpecker containers, the build task still has Running status and one cannot see the build container's output (the build container finishes its work in the background, but its status is never updated in Woodpecker; pipeline service containers are left running orphaned until a host/Docker restart).

Agent logs after the restart show only

`ctrl+c received, terminating process`

and the agent does not cancel the running task.

Checked with Woodpecker compiled from b52e404f93ccea05dc783aa929770c4a0fad2e74.

When receiving a TERM signal (e.g. on host reboot), the agent process should cancel all ongoing tasks and terminate itself ASAP. This should leave the task database in a consistent state after the next start.

Regards,
Paweł
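Not a fix for the missing cancellation, but for setups like the one described above, a longer stop grace period in the compose file gives the agent more time to react to SIGTERM before Docker sends SIGKILL. A sketch, with the service name and period chosen as assumptions:

```yaml
# docker-compose sketch: extend the shutdown window before SIGKILL
services:
  woodpecker-agent:
    image: woodpeckerci/woodpecker-agent
    stop_grace_period: 2m  # compose default is 10s
```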