CI/CD Pipelines: Importing and Exporting TeamCity projects across different versions
If you develop applications and are involved with build management and pipelines, you will have heard of TeamCity, a CI/CD solution from JetBrains.
While working with my current client we hit an issue: we couldn't import an exported project because of the following error:
The selected backup file version (829) does not match the current version of the TeamCity database (856). Only backup files created with a TeamCity server of the same version as the current server are supported.
A quick look at the version printed at the bottom of any TeamCity page confirmed the obvious: two different versions of TeamCity were indeed involved.
Our use case: we were exporting the project configuration from our production TeamCity instance and importing it into our test instance so we could test various aspects of our build pipeline.
Since getting these two versions aligned would have taken longer than we wanted (or had time for), we used a small configuration change that let the TeamCity instance in test accept the export from the older instance.
A TeamCity project export looks like the below folder structure:
The export comes as a zip file, so we unzipped it. Then we edited the version.txt file and changed the Data format version to the one the test instance expected: in our case, from 829 to 856, as seen below:
After zipping the package back up, the import worked like a charm. This would have been a problem if the relevant parts of the schema had actually changed between those versions, but for ours they hadn't, and you might get away with it too.
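The tweak is easy to script. Below is a minimal sketch; the exact wording of the line inside version.txt and the version numbers are assumptions based on our backup, so check what your own file contains first:

```shell
# Illustrative only: create a stand-in for the version.txt found at the
# root of an unzipped TeamCity backup (the real file has more entries).
mkdir -p backup_extracted
printf 'Data format version: 829\n' > backup_extracted/version.txt

# Bump the data format version to the one the target server expects.
sed -i 's/829/856/' backup_extracted/version.txt
cat backup_extracted/version.txt
```

Against a real backup you would only run the sed step on its version.txt, then zip the folder back up before importing.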
Just a quick hack you can use to speed up your TeamCity work.
Azure Service Fabric Cluster stuck on the Status: "Waiting for nodes" after deployment - Certificate Thumbprint Issue
The issue I’m about to share may seem trivial, but my colleagues and I ended up spending quite a bit of time on it. We weren’t very familiar with Azure Service Fabric clusters, and our goal was to deploy a secure Service Fabric cluster into our project’s enterprise Azure subscription.
After locking in all the configuration and triggering a deployment, all was fine in the Azure cloud and our cluster deployed successfully. Happy faces all around, until we realized the cluster never reached a ready status so we could begin working with it.
The main reason I’m writing about this is that the cluster deploys successfully and you get no error messages, yet you still can’t use it because it isn’t fully working. If you haven’t delved into Service Fabric clusters before, it’s hard to figure out why. The only noticeable symptom is that the status is listed as “Waiting for nodes”. If you see this for longer than you would expect, something is wrong and time won’t fix it, so don’t wait for some magic behind the scenes; there’s no help coming. The nodes are failing to join the cluster, but Azure doesn’t surface any errors in the portal logs.
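While the portal stays quiet, you can at least inspect the cluster resource from the command line. A hedged sketch using the Azure CLI; the resource group and cluster names are placeholders, and the exact output shape may vary with your CLI version:

```shell
# Show the Service Fabric cluster resource; check the cluster state and the
# certificate thumbprint it was provisioned with (names are placeholders).
az sf cluster show \
  --resource-group my-rg \
  --cluster-name my-sf-cluster
```

Comparing the thumbprint in that output with the certificate actually installed on the nodes is a useful first diagnostic when nodes silently fail to join.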