The Hidden Cost of Parallel Processing in GitHub Actions
Why monolithic workflows might be the better option for your short-running workflow jobs

In a recent effort to optimize our GitHub Actions workflows, I came across a surprising fact, and I think it's worth sharing in this article.
Parallel Processing in GitHub Actions
Parallel processing in GitHub Actions workflows is a common “best practice” for the many benefits it brings:
- Faster execution: By splitting work across multiple jobs that run in parallel on separate runners, the overall execution time can be reduced significantly.
- Scalability: Parallel processing can help scale workflows to handle larger and more complex projects.
- Resource optimization: With parallel processing, it is possible to allocate resources more efficiently, allowing for better resource utilization.
If a single sequential workflow job is the monolithic application, then parallel processing is the lightweight microservices equivalent: many smaller workflow jobs, each focused on a specific purpose (see the short sketch below).
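As a minimal illustration of that contrast, here is a hypothetical workflow fragment: in GitHub Actions, jobs that don't declare `needs` start in parallel on separate runners, while `needs` serializes a job behind others. The job names and commands are placeholders, not from the article.

```yaml
jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "run unit tests"        # placeholder command

  security-scan:
    runs-on: ubuntu-latest                # starts at the same time as unit-tests
    steps:
      - uses: actions/checkout@v4
      - run: echo "run security scan"     # placeholder command

  deploy:
    runs-on: ubuntu-latest
    needs: [unit-tests, security-scan]    # only this job waits for the others
    steps:
      - run: echo "deploy"                # placeholder command
```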
In the CI/CD world, running unit tests, integration tests, security scanning, deployments to multiple environments, and so on in parallel has become the recommended practice. I accepted it without question until last week, when I came across the usage report for one of my parallel GitHub Actions workflows.
The Usage Report That Surprised Me
I have a workflow made up of five different jobs (sketched below):
- Build, test, and upload to GitHub Artifacts
- Trivy scan
- Sonar scan
- Trufflehog scan
- Deploy to AWS ECS Fargate
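For context, here is a hedged sketch of what such a workflow could look like. The action versions, commands, artifact names, and the `needs` dependency on the deploy job are my assumptions for illustration; the article does not spell out the exact configuration.

```yaml
name: dev-pipeline          # hypothetical workflow name

on:
  push:
    branches: [develop]     # assumed trigger for the development environment

jobs:
  build-test-upload:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./build.sh && ./run-tests.sh        # placeholder build/test commands
      - uses: actions/upload-artifact@v4
        with:
          name: app-build
          path: dist/

  trivy-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: trivy fs --exit-code 1 .            # assumes the Trivy CLI is installed on the runner

  sonar-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: sonar-scanner                       # assumes SonarScanner CLI and project config

  trufflehog-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: trufflehog git file://.             # assumes the TruffleHog CLI is available

  deploy:
    runs-on: ubuntu-latest
    needs: [build-test-upload, trivy-scan, sonar-scan, trufflehog-scan]  # assumed dependency graph
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: app-build
      - run: ./deploy-to-fargate.sh              # placeholder for the ECS Fargate deployment
```

The four upstream jobs run in parallel on their own runners, and only the deploy job waits for them, which is exactly the kind of fan-out this article examines.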
Since this workflow was designed for the development environment, I wanted to let developers work on their stories and then deploy and test their new features there. If any vulnerabilities were found through those security scans (check out my blog DevOps Self-Service Centric Pipeline…