I sincerely wish I could move away from Jenkins for the reasons stated in TFA (GUI-oriented, slow, hard to back up and configure, a test-in-production mentality, and boundless plugins), but I've never found something that fits the bill.
The much-touted repo integrations (Travis, Circle...) all have an exclusive focus on build-test-deploy CI of single repos.
But when you have many similar repos (modules) with similar build steps you want to manage, want a couple of pipelines around those, and need to handle the odd Windows build target, these just give up (it's Docker or bust).
Sadly, only Jenkins is generic enough, much as it pains me to admit.
Anyone got a sane alternative to Jenkins for us poor souls?
TeamCity from JetBrains is the same kind of thing as Jenkins, except the core features are working core features instead of broken plugins. It's paid software, though; you get what you pay for. https://www.jetbrains.com/teamcity/
Of the CI tools I've used (most of them), TeamCity was my personal favorite, but the advantage of Jenkins is that it's very widely used, has a greater breadth of capabilities thanks to the huge number of plugins, and has a huge amount of support info readily available online. Some plugins are even maintained by the external vendor that produces the tool you're trying to integrate with, and those tend to be better supported or the first to get timely updates.
Bamboo, on the other hand, is IMO by far the worst of the commercial CI tools, and it's the one that has gone down the most where I work. Atlassian itself doesn't appear to be investing in it much anymore, judging by the slow pace of development in recent years; at their most recent conference you could hardly find it mentioned or see much presence for it anywhere.
Of all the CI systems I've used, though, there hasn't been one where I didn't run into some major difficulty.
Beyond that, anything to do with build automation for a large number of users quickly becomes a support & maintenance quagmire. Users frequently want to install a new (barely maintained) plugin to solve a problem they have, and complex interactions lead to difficult-to-understand failure modes that require time-consuming investigations ("your CI tool is the problem and broke my build" ... "No, your build is broken" ...).
Fine, if you're a single-vendor shop, say on the JetBrains or Atlassian stack, and you have plenty of financial power, there are always cool features that can bring benefit. But in the end, CI and CD systems are glorified semi-smart cron runners. Are these tools 10x better than Jenkins? Not so much. CI/CD is, from one standpoint, the most important tool and at the same time the least important one: your delivery has to suck very badly before it's worth migrating to a new platform just because. Jenkins shines here; it's not perfect, but it works.
It's also more or less free from a licensing standpoint, so you don't have to go through corporate procurement hell. It's not free from a workforce perspective, but none of these tools come with zero configuration: there's still some YAML or other crazy configuration to be done (like the Bamboo DSL).
Of course it can be 10 times better. It's so trivial to be 10 times better.
First you check out the project from the repo and it just works, whether it's Git, SVN, or whatever. How many plugins does it take to check out a project in Jenkins? Is there even a working git plugin nowadays?
Then you build the project. If it's C# or Java, for example, the ant/maven/sln/nuget files are detected automatically; just click Next and it's built. Does Jenkins even understand what a requirements.txt is? Hint: it's not a bash script.
The JVM and Visual Studio are detected automatically on all the build slaves, and the project is already building in the right places. If you want to target specific tool versions, there are preset variables on every host to filter where to build, and to use in build scripts so paths are always right. How is the build matrix plugin in Jenkins doing lately? Broken as usual?
The project is built... or is it still building? It's easy to tell, because there's a clear colored status icon and the estimated time to completion is displayed. TeamCity has offered that out of the box for maybe 15 years now. Well, Jenkins finally got a progress bar too a couple of years ago. I guess I'm defeated: Jenkins caught up on basic core functionality only a decade late, so I can't justify paying for working, polished tools anymore. Well, I hope our sysadmin will install the Extra Status Icon Plugin, or we'll have to live without the big colored circles next to the build.
You may find our direction page for CI/CD at GitLab interesting if you're looking to learn more about the possibilities here. We do all of our planning and roadmapping in public, so you can read a bit about our overall technical challenges and approach there, and drill down into the stages (CI, packaging, and CD) that make up the capabilities within GitLab, each of which has its own videos and other planning content.
And actually, you can get quite far with the free TeamCity license of three build agents and 100 build configurations. I'm also fairly sure that JetBrains would take kindly to license requests from open-source projects and academia.
TeamCity doesn't handle downstream builds properly. Bamboo has severe stability problems. I've worked at places that evaluated them and always found Jenkins was still the least bad option.
We had a problem where building a project would trigger builds of every project that transitively depended on that module. So if you have, say, 26 projects A through Z depending on each other in a line and you change the first one, Jenkins will run 26 builds: A, then B, then C, and so on. TeamCity instead will run 26 + 25 + 24 + ... builds: it builds A, then B through Z immediately; then the build of B triggers another build of C through Z, the build of C triggers a rebuild of D through Z, and so on.
It sounds like those builds weren’t quite set up correctly. I’ve used TeamCity’s build chains quite a bit and haven’t seen this behavior. Depending on exactly how the builds are triggered it will sometimes enqueue redundant builds, but as the duplicates come to the top of the queue the server realizes they’re unnecessary and doesn’t run them.
There was no "TeamCity build chain", just normal maven dependencies. We raised the issue with their official support (we had a commercial contract) and they couldn't fix it either. Whereas Jenkins did the right thing by default. Shrug.
TeamCity's free license has an extremely generous limit of 100 build configurations; if you're exceeding that, then in all likelihood you're getting far more value from it than the additional licensing cost.
At my work we use TeamCity for some things and Gitlab CI for others. Things that are good about TeamCity:
- Templates
GitLab has something called templates, but it's a very different thing. In GitLab, a template is used to bootstrap a project, and that's it. In TeamCity, a template is attached to a project such that if you change the template, the changes are applied to all projects that inherit from it. Each project can override any settings or build steps it got from the template without losing the association to the other settings. A project can have multiple templates attached to control orthogonal aspects of its behavior. From a template you can see which projects inherit from it, and you can freely detach it and attach a different one. It makes managing a large number of projects with similar configs, all evolving at somewhat different rates, really easy. (There's a Kotlin DSL sketch of this idea at the end of this comment.)
- Build results
TeamCity has very good integration with xUnit and code-coverage tools, so you can quickly see test results and coverage as part of a build. GitLab recently got better at this (it can now at least parse xUnit results), but you can still only see test results in the merge request view. TeamCity can also track a metric over time and fail a build if it drops (i.e., PR builds should fail if code coverage drops more than X%). TeamCity also supports adding custom tabs to the build page, so reports generated by the build are easily viewable in the UI (vs. GitLab, where you have to download the artifact and then open it to view it).
- Overall view of runner status
It's very easy in TeamCity to see the build queue, an estimate of when your build will run, and how long it's expected to take based on past builds.
- Dashboard
For me it's easier in TeamCity to see the overall status of deployments to a set of environments (i.e., what's on dev/stage/prod) that might span multiple source code repos. At a glance I can see what changes are pending for each environment, etc. In GitLab, things are too tied to a single repo or a single environment, and the pages tend to present either too much or too little information. Also, in TeamCity I can configure my own dashboard to see all the stuff I care about and hide everything else, all in one place.
- System wide configs
There are some settings that apply to the whole system (repository URLs, etc.). There's no easy way in GitLab to have system-wide settings; they have to be defined at the group or repository level. In TeamCity you can configure things at any level and then override them at lower levels.
- Extensibility
TeamCity supports plugins. I know this can lead to the Jenkins problem of too many plugin versions, etc., but in TeamCity you tend to use far fewer plugins, and the plugin APIs have been super stable (I've written plugins against TeamCity 8, which is 4 major versions old, and they work fine on the latest). It's really nice to be able to write a plugin that performs common behavior, applies easily across projects, and is nicely integrated into the UI.
To me, GitLab CI seems useful for simple things, but overall it's 70% of the way to being something that could replace TeamCity.
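To make the Templates point above concrete, here's a rough sketch of the idea in TeamCity's Kotlin DSL; treat it as illustrative rather than canonical, since the names and the build step are made up:

    // settings.kts fragment (sketch): a template plus a build type that inherits it
    import jetbrains.buildServer.configs.kotlin.v2019_2.*
    import jetbrains.buildServer.configs.kotlin.v2019_2.buildSteps.script

    object LibraryTemplate : Template({
        name = "Common Library Build"
        steps {
            script {
                scriptContent = "make package"
            }
        }
    })

    object ModuleA : BuildType({
        name = "module-a"
        templates(LibraryTemplate)  // inherits the steps; can still override settings locally
    })

Change the template and every attached build type picks it up, which is exactly the inheritance behavior described above.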
We did recently add pipeline info to the operations dashboard (https://docs.gitlab.com/ee/user/operations_dashboard/), which I know isn't exactly what you're looking for here but we are making progress in this direction and recognize the gap.
This can be achieved by using includes to set the variables, which is admittedly a workaround. We do have an open issue (https://gitlab.com/gitlab-org/gitlab-ce/issues/3897) to implement instance level variables that would solve this.
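For anyone curious, that workaround looks roughly like this (the shared project path and variable name are hypothetical):

    # .gitlab-ci.yml in each project
    include:
      - project: 'ops/ci-common'
        file: '/variables.yml'

    # ops/ci-common/variables.yml then carries the shared values:
    # variables:
    #   NEXUS_URL: "https://nexus.example.com"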
- Extensibility
This is an interesting one, because plugins are, at least in my opinion, what makes Jenkins a mess to use in reality. Believe me, I've managed plenty of Jenkins instances in my career with lots of "cool" plugins that do something great, at least while they work. It is one of our values that we play well with others, though, so I'd be curious to work with you to understand specifically what you'd like to be able to make GitLab do that can't be done through your .gitlab-ci.yml. Our goal is that you should never be blocked, or really have to jump through hoops, and yet not have to depend on a lot of your own code or third-party plugins.
I hear you on plugins, and I agree they are problematic. I went back and forth on whether to include this or not TBH.
I'll give you a couple of examples of use cases for plugins:
We have an artifact repo that can store npm, Python, and other artifacts (Nexus, if you're interested). I wrote a plugin for TeamCity that grabs artifacts from a build and uploads them to the repository. Obviously this can be done in a script (a rough equivalent follows the list below), but a couple of things make doing it in a plugin nice:
- You can set it up as a reusable build feature that can be inherited from templates (i.e. all builds of a particular type publish artifacts to Nexus)
- You can get nice UI support. The plugin contributes a tab to the build page that links to the artifacts in Nexus.
- The plugin can tie in to the build cleanup process and remove the artifacts from the repository when the build is cleaned up. This is useful for snapshot/temporary artifacts that you want to publish so people can test with them, but have automatically removed later.
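The script version mentioned above would be little more than a curl upload; a rough sketch, with placeholder URL, path, and credentials:

    # upload one artifact to a Nexus maven2 hosted repository (all values are placeholders)
    curl -u "$NEXUS_USER:$NEXUS_PASS" \
         --upload-file build/libs/mylib-1.2.3.jar \
         "https://nexus.example.com/repository/maven-snapshots/com/example/mylib/1.2.3/mylib-1.2.3.jar"

That works, but it gets you none of the template inheritance, UI tabs, or cleanup hooks listed above.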
Another example of where plugins have proved useful is influencing build triggering: we have some things that happen in the build server, and then other stuff happens outside of the build server. When all that completes, we then want to kick off another process in the build server (that sounds abstract - think an external deploy process runs, and once the deploy stabilizes you kick off QA jobs). In TeamCity you can write a plugin that keeps builds in the queue until the plugin reports that they are ready to run.
While plugins aren't the first tool I reach for when looking at how to provide reusable functionality in a build server, I have written several plugins for both Jenkins and TeamCity. Overall, I don't think Jenkins/TeamCity's model of having plugins run in-process is a good one, and it leads to most of the problems people have with them (although TeamCity is much better here: Jenkins basically exposes most of its guts to plugins, which makes keeping the API stable virtually impossible, while TeamCity has APIs specifically designed for plugins that it has kept stable very effectively). A model where a plugin is just a Docker container that communicates with the build server through some defined APIs, combined with some way for it to attach UI elements to a build that can call back into the plugin, would be much nicer. This seems to be more like what Drone is doing, but I haven't played around with that much.
I think GitLab has a strong philosophy of wanting to build out everything that anyone will ever need, all nicely integrated, and that's a great ideal. In practice, though, it's REALLY hard to be all things to all people. People have existing systems and/or weird use cases that it just doesn't make sense to handle all of, and plugins are a useful tool for addressing that.
If you work at GitLab, you can download the free version of TeamCity from their website. Set up a few projects and it will be obvious what it does better.
You may want to try C#, Java, Python, and Go projects to see the differences, with slaves on Windows and Linux. There are some pretty tight integrations for some of these.
* Broken base Ubuntu images being recommended by Atlassian as the default for agent image configuration, only to be fixed to a usable state months later;
* Being generally years behind other CI tools, even the traditional ones;
* Data exports corrupting themselves despite apparently succeeding, blocking migrations after upgrades or server changes;
* The official documentation somewhere recommending copying password hashes across directly to set up a new admin for a migration, though I can't find this anymore, so they've hopefully improved the process and documentation;
* A bug in an earlier version in which a strange combination of error cases in an Elastic Bamboo image configuration could spam EC2 instances in a constant loop, which we thankfully spotted before it ate through our AWS bill;
* No clear messaging from Atlassian about how the future of Bamboo compares to Pipelines. The official line seems to be that Bamboo is for highly custom setups that want to host their CI themselves, but you get the impression from the slow pace of development that they're slowly dropping it in favour of Pipelines. I'd rather they be honest about it if that is the case.
Those are just the problems I can think of off the top of my head, anyway.
I agree. I used TeamCity and liked it. It was like Jenkins, but easier to set up, less messy, and it just worked for what we needed. It was worth every penny we paid for it.
We use TeamCity even though we have GitLab for source control. TeamCity has worked for years, which is what we needed. I don't know if we'll ever switch to GitLab for CI.
Not with pipeline files. I'm a total Jenkins noob, but I was able to (relatively) quickly set up a minimal job that automatically pulls its config from the relevant GH repo.
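For anyone who hasn't seen one, the pipeline file lives in the repo itself; a minimal declarative Jenkinsfile looks something like this (the stage contents are just examples):

    // Jenkinsfile (declarative pipeline), checked into the GH repo
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps { sh 'make build' }
            }
            stage('Test') {
                steps { sh 'make test' }
            }
        }
    }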
Ah yes, pipelines do make a difference in configuring jobs. However, how are you managing your plugins? Your Jenkins configs? Most likely those are manual (though if you've found a way that works well, please share). I've also found that for some functionality I've had to add Groovy into my pipelines.
That said, pipelines have made a HUGE difference. I still want to migrate, but this fixes a large pain point.
> (however if you've found a way that works well, please share)
Not extremely well, but I did a small PoC where Jenkins runs in Kubernetes without persistent storage. Plugins are installed on boot with install-plugins.sh (part of Jenkins' Docker image), and configuration is done via Groovy scripts in init.groovy.d (stuff like https://gist.github.com/Buzer/5148372464e2481a797091682fabba...). It's not perfect (e.g. I didn't have time to find a good way to store & import old builds), and it does require some digging around to find out how plugins actually do their configuration.
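As a flavor of what those init.groovy.d scripts look like, a minimal sketch (the specific settings are just examples):

    // init.groovy.d/basic-setup.groovy: runs once when the master boots
    import jenkins.model.Jenkins

    def jenkins = Jenkins.instance
    jenkins.setNumExecutors(0)  // keep builds off the master
    jenkins.setSystemMessage('Configured from code; UI changes are lost on restart')
    jenkins.save()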
And then somebody needs a different, incompatible, version of plugin X and you set up another Jenkins master.
Or upgrade the Jenkins master and watch other jobs fail.
And not to mention plugin Y and plugin Z crashing Jenkins when being run together because they share the same classpath.
Meanwhile, one of the developers is trying to migrate his pipeline from one master to another, and he finds out that of course it won't work, because the plugins and configuration are not exactly the same.
This is exactly what OP was complaining about. You don't set up plugins and configuration just once. You want them to be replicable, but Jenkins does not provide a good way to do that.
Most other CI/CD system handle this issue very simply. They just don't have plugins, and have very little (if any) global configuration. This means you can start up an entirely new cluster and chances are your pipeline files will run without a hitch.
Have you observed any limitations or problems with it? I've been very interested in transforming our internal Jenkins CI into something lighter and more modular with less maintenance that still allows multi-platform slaves, and Buildkite seems like a very interesting new player.
Not really; it's about as simple as buildbot with a nicer UI. All our builds trigger off of GitHub pushes. I have a handful of cheap Ubuntu VMs on Linode doing builds and tests for our code, and one Mac Mini doing builds for some developer tools; the latter is in a small rack in our office, but it works all the same.
Each build pipeline is just a small shell script that does some setup and runs make to build, or invokes our test entry points.
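Roughly this shape, assuming make targets like ours:

    #!/usr/bin/env bash
    # one pipeline step: setup, build, test
    set -euo pipefail

    make deps   # fetch/build dependencies
    make all    # build
    make check  # run the test entry points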
GitLab and Concourse both support Windows runners as far as I can see. They also don't require Docker, but you might actually want it for most of your jobs.
My biggest gripe about GitLab is that you can't schedule a job in code, and I suppose it's less than ideal to support third-party repos in hosted GitLab, but I don't know why you wouldn't use it as your SCM.
The bigger problem would be using a group job that triggers a bunch of other jobs for the many-modules style of development you spoke about, but I'd just develop my modules separately and build them in parallel steps if need be.
Or are you looking more for putting the values in the .gitlab-ci.yml itself? This is something we have thought a bit about, but it gets strange with branches and merges where it's not always clear you're doing what the user wants as the different merges happen.
Indeed, I meant in the .gitlab-ci.yml. I would assume you'd name the branch in the schedule, and if not, default to the default branch. Similarly, it's sad that you can't set a variable in one stage and have it available in another, and there are a couple of other niggles one needs to work around.
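For what it's worth, you can at least scope a job to scheduled pipelines from the YAML side today, even though the schedule itself still has to be created in the UI; something like this (job and script names hypothetical):

    nightly:
      script:
        - ./scripts/nightly.sh
      only:
        - schedules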
With that said, the product is fantastic, and I'm just pointing out some flaws so the parent understands I've actually used it and am not just a fanboy yelling. :)
Same boat as you. I'm very happy with GitLab CI. Do look into it; it's extremely flexible. Not quite as flexible as Jenkins, but far more so than Travis/Circle CI, without that becoming an issue.
They now have configuration includes and cross project pipeline triggers, which is part of what GP seems to be looking for.
Personally, I've found that for my past and present use cases, generating any needed steps (e.g. a test matrix) with a script is much more flexible, predictable, and reproducible, since the generated result can be versioned.
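In GitLab terms, recent versions can even feed such a generated file back in as a child pipeline; a sketch, with hypothetical job and script names:

    generate-pipeline:
      stage: build
      script:
        - ./scripts/gen-matrix.sh > generated-pipeline.yml
      artifacts:
        paths:
          - generated-pipeline.yml

    run-matrix:
      stage: test
      trigger:
        include:
          - artifact: generated-pipeline.yml
            job: generate-pipeline
        strategy: depend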
I also successfully used various custom runners including baremetal Windows ones and virtualised macOS ones inside VirtualBox.
I don't think Jenkins is GUI-oriented, slow, or hard to back up/configure, but I did enjoy using TeamCity a few years back. Sure, it costs an arm and a leg, but it worked well without any plugins.
Happy Buildkite user here across two companies. We've built some custom tooling around the GraphQL API but have since found it solid for both periodic jobs and CI needs.
I’m experimenting right now with how far I can simplify the abstractions, and writing my own thing in rust.
Since my use case is integration with Gerrit, I poll the updated changes over ssh and have regex-based triggers that cause a “job” to launch. A job consists of making a database entry and calling a shell script, then updating the entry upon completion. Since a job is just a shell script, it can kick off other jobs either serially or in parallel, simply by using GNU parallel :-)
And voting/review is again just a command, so it's also flexible and can be made much saner than what I've seen done with Jenkins.
So the “job manager” is really the OS; killing the “daemon” doesn't affect already-running jobs, and they will update the database as they finish.
The database is SQLite, with a foreseen option for Postgres. (I made diesel optionally work with both in another, two-year-old project, which successfully provisioned and managed an event network of about 500 switches.)
Since I also didn't want an HTTP daemon, the entire web interface is just monitoring: purely static files, regenerated upon changes.
Templating for HTML is done via mustache (again, I also use it in the other project; very happy with it).
For fun I made the daemon (if enabled in the config) re-exec itself if the mtime of the config or the executable changes.
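Boiled down, the job-launching core is just a database row plus a child process; a sketch of that idea using rusqlite for the SQLite side (the table schema and names are illustrative, not the actual code):

    use rusqlite::{params, Connection};
    use std::process::Command;

    fn launch_job(db: &Connection, change_id: &str, script: &str) -> rusqlite::Result<()> {
        db.execute(
            "INSERT INTO jobs (change_id, status) VALUES (?1, 'running')",
            params![change_id],
        )?;
        // the OS is the real job manager: the job itself is just a shell script
        let ok = Command::new("sh")
            .arg(script)
            .arg(change_id)
            .status()
            .map(|s| s.success())
            .unwrap_or(false);
        db.execute(
            "UPDATE jobs SET status = ?1 WHERE change_id = ?2",
            params![if ok { "passed" } else { "failed" }, change_id],
        )?;
        Ok(())
    }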
I think these kinds of home-grown systems are pretty hard to "sell" to others. I know I've written a couple; my general approach was to:
* Get triggered by a github (enterprise) webhook.
* Work out the project, and clone it into a temporary directory.
* Launch a named Docker container, bind-mounting the temporary directory to "/project" inside the container.
* Once the container exits copy everything from "/output" to the host - those are the generated artifacts.
There's a bit of glue to tie commit hashes to the appropriate output, and a bit of magic to use `rsync` to move output artifacts to the next container in the pipeline if multiple steps are run.
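Condensed, the whole flow is something like this (the image name and paths are made up):

    #!/bin/sh
    set -eu
    repo_url=$1; sha=$2

    workdir=$(mktemp -d)
    git clone "$repo_url" "$workdir"
    git -C "$workdir" checkout "$sha"

    docker run --name "build-$sha" -v "$workdir:/project" builder-image /project/build.sh
    docker cp "build-$sha:/output" "artifacts/$sha"  # artifacts keyed by commit hash
    docker rm "build-$sha"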
But in short I'd probably spend more time explaining the system than an experienced devops person would be creating their own version.
Zuul (zuul-ci.org) recently caught my eye, particularly because it fully supports heavy integration testing of multi-repo apps. It doesn't yet support Bitbucket Server, though, which is sort of a deal-breaker for me.
> But when you have many similar repos (modules) with similar build steps you want to manage
How many teams do you have? In all seriousness, if you aren't talking about at least one team per repo, have you considered a monorepo setup? Aren't you burning time managing all those similar repos with their similar build steps?
That said, even in a monorepo, I still prefer Jenkins compared to cleaner seeming cloud offerings due to its limitless flexibility.
Internal libraries and similar fun stuff. Common build step ≈ the same packager commands run on them.
Management is fairly simple with a template + seed jobs. It's just ... everything else is annoying.
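The seed job is essentially a loop over module names in the Job DSL plugin's Groovy; a sketch with made-up repo names:

    ['module-a', 'module-b', 'module-c'].each { repo ->
        job("build-${repo}") {
            scm {
                git("https://git.example.com/libs/${repo}.git")
            }
            triggers {
                scm('H/5 * * * *')
            }
            steps {
                shell('make package')
            }
        }
    }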
I don't understand what you mean by one team per repo?
I agree, as I keep saying at $WORK: Jenkins is the least-worst system out there.
side note: I am confused by your usage of "TFA". I looked it up and it stands for what I thought it does, which has a pejorative connotation. That doesn't seem to be what you meant?
Heyo, sorry about that. I was playing on the fact that common parlance has tamed the usage so that "TFA = The Fine Article" in civil discourse =)
My bad, will check my assumptions some more!
Hrm... Totally anecdotal but I see it used that way just frequently enough that I'm familiar with the more-general usage but not nearly frequently enough for it to feel "right".
Yeah but that is derived from RTFA or RTFM, but that meaning doesn't apply here at all.
I think the people using TFA don't know what it means... Whenever I see it, I think they're angry about something or arguing, but here he's supporting the point of the article. It doesn't make sense.
> Yeah but that is derived from RTFA or RTFM, but that meaning doesn't apply here at all.
The meaning of the "TFA" part does. The meaning of the "R" doesn't, which is why it is dropped.
> I think the people using TFA don't know what it means...
They generally do. You, however, seem to be confusing "derived from" with "means the same as". TFA is derived from RTFA, but it does not mean RTFA, nor does the argumentative implication of RTFA come along with it, since the argumentative implication is associated primarily with the implicit accusation that the target has not done what is expected in a discussion and read the source material that is the subject of discussion, which is carried entirely by the "R".
(One can read anger into the "F", but that's tamed by the fact that even in the context of RTFA/RTFM, that's often reconstructed into a non-profane alternative ["fine" is the one I've most frequently encountered.])
TFA is a reference to actually Reading TFA, or RTFA. Historically, it has very strong roots in Slashdot culture, which was sort of the Hacker News of the late 1990s and the 2000s. Using TFA somewhat indicates that you RTFA, as opposed to everyone else who is just speculating on the content of the linked article (didn't RTFA).
Some of us here have been using terms like RTFA and TFA for twenty years, maybe longer.
Actually, historically its use doesn't necessarily have a pejorative connotation. You can take it to mean "The Fine Article" just the same. It's more of a joking reference, with roots in 'RTFA' as used frequently in discussion forums like this.
I think it was here on HN that someone introduced me to reading it as The Fine Article.
While I am a conservative Christian myself (hah, most of you didn't guess that), I try to make a point of not getting offended by such things, and if I can do it, so can most people :-)
I'm helping clients move from Jenkins to Azure Pipelines which is part of Azure DevOps (formerly VSTS, TFS). If that doesn't make you dizzy then it's a pretty good product. It has a free tier. Windows build targets shouldn't be a problem since it's from Microsoft. Obviously it's not OSS.
We run our infrastructure off of CloudFormation, so we can easily spin up a staging environment that's an exact replica of production (the only difference is the number and size of instances). We also run a staging Jenkins server that's defined in the CloudFormation config.
We keep our Jenkins jobs version-controlled by checking each job's config.xml into a git repo. In the past I've seen the config.xml files managed by Puppet or other config-management tools.
This helps us get around the "hard to backup" and "test in production" issues. We can test Jenkins changes in staging, commit them to our Jenkins repo, and then push the config.xml file to the production Jenkins server when we're ready to deploy.
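The "push" step is just Jenkins' remote API; a sketch, with placeholder host, job name, and credentials:

    # update an existing job from its versioned config.xml
    # (POST to /createItem?name=... would create a new one)
    curl -X POST -u "$JENKINS_USER:$JENKINS_API_TOKEN" \
         -H 'Content-Type: application/xml' \
         --data-binary @jobs/deploy-app/config.xml \
         "https://jenkins.example.com/job/deploy-app/config.xml"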
>Anyone got a sane alternative to jenkins for us poor souls?
I haven't tried this yet myself but AWS CodePipeline lets you have Jenkins as a stage in the pipeline. You use Jenkins only for the bits you need without the extra plugins. The resulting Jenkins box is supposed to be very lean and avoid the problems you describe.
Performance isn't great. We're using CodePipeline/CodeBuild (triggered via Jenkins), and it's common to wait 30 seconds while a step is being created.
Cloud Build on the GCP side has had much better performance.
I'm still on buildbot, but it's definitely showing its age, and I'm hoping to move off of it within a year. I've been keeping an eye on Chromium's buildbot replacement, LUCI (https://ci.chromium.org/). It's still light on documentation, and the source is very internal-Google-y (they seem to have written their own version of virtualenv in Go). However, based on the design docs, it does look like they ran into a lot of the same problems I have with buildbot, specifically the lack of support for dynamic workers and how underpowered the buildbot build steps can be.
I'm not on buildbot nine (I think the new waterfall UI is a big regression), but what that describes looks like a statically defined list of workers that scale up and down dynamically. What I'm looking for is the ability to add and remove workers at will, without having to add them to the configuration list and restart the master.
In terms of underpowered build steps: I have several fairly complicated 1k-2k line build factories, with multiple codebases and hundreds of steps (some custom, some from the stdlib). There are many dependencies between the steps, and many different properties that can be toggled in the UI. All these steps need to be defined up front in the master, but their actual execution often depends on runtime information, so it becomes a mess of doStepIfs. I think it would be an improvement to let a program on the worker tell the service what it wants to do, rather than the other way around.
One way to scale workers up and down in Buildbot is to define more workers in the configuration than are actually needed, with generic names (e.g. worker1, worker2, etc.), and then start/stop them when required.
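In a nine-style master.cfg that's just a list comprehension; a sketch (the password and count are placeholders):

    # master.cfg fragment: over-provision generically named workers,
    # then start/stop the actual machines out of band as load demands
    from buildbot.plugins import worker

    c = BuildmasterConfig = {}  # as at the top of a standard master.cfg
    c['workers'] = [
        worker.Worker('worker%d' % i, 'changeme')
        for i in range(1, 21)
    ]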
Agree with you on the waterfall UI regression. It seems the console view is preferred over the waterfall in recent versions; it's slower than the waterfall UI, though.