<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by RunningMattress on Medium]]></title>
        <description><![CDATA[Stories by RunningMattress on Medium]]></description>
        <link>https://medium.com/@RunningMattress?source=rss-702f4857791a------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>Stories by RunningMattress on Medium</title>
            <link>https://medium.com/@RunningMattress?source=rss-702f4857791a------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Mon, 13 Apr 2026 10:40:10 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@RunningMattress/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Exploring the details of the automated Jenkins deployment pipeline]]></title>
            <description><![CDATA[<div class="medium-feed-item"><p class="medium-feed-image"><a href="https://medium.com/@RunningMattress/exploring-the-details-of-the-automated-jenkins-deployment-pipeline-de076349771f?source=rss-702f4857791a------2"><img src="https://cdn-images-1.medium.com/max/1330/1*IkaRa1oD1GCtErBnAyKXpQ.png" width="1330"></a></p><p class="medium-feed-snippet">In the first part, we covered the high-level version of how we pulled together an automated, source-controlled deployment pipeline for our&#x2026;</p><p class="medium-feed-link"><a href="https://medium.com/@RunningMattress/exploring-the-details-of-the-automated-jenkins-deployment-pipeline-de076349771f?source=rss-702f4857791a------2">Continue reading on Medium »</a></p></div>]]></description>
            <link>https://medium.com/@RunningMattress/exploring-the-details-of-the-automated-jenkins-deployment-pipeline-de076349771f?source=rss-702f4857791a------2</link>
            <guid isPermaLink="false">https://medium.com/p/de076349771f</guid>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[jenkins]]></category>
            <category><![CDATA[devops]]></category>
            <category><![CDATA[docker]]></category>
            <category><![CDATA[programming]]></category>
            <dc:creator><![CDATA[RunningMattress]]></dc:creator>
            <pubDate>Thu, 14 Nov 2024 14:46:27 GMT</pubDate>
            <atom:updated>2024-11-14T14:46:27.410Z</atom:updated>
            <cc:license>http://creativecommons.org/licenses/by/4.0/</cc:license>
        </item>
        <item>
            <title><![CDATA[Creating an automated, source-controlled deployment pipeline for Jenkins Controllers]]></title>
            <link>https://medium.com/@RunningMattress/creating-an-automated-source-controlled-deployment-pipeline-for-jenkins-controllers-26b74907b3b?source=rss-702f4857791a------2</link>
            <guid isPermaLink="false">https://medium.com/p/26b74907b3b</guid>
            <category><![CDATA[docker]]></category>
            <category><![CDATA[jenkins]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[devops]]></category>
            <dc:creator><![CDATA[RunningMattress]]></dc:creator>
            <pubDate>Tue, 12 Nov 2024 14:32:00 GMT</pubDate>
            <atom:updated>2024-11-12T14:32:00.309Z</atom:updated>
            <cc:license>http://creativecommons.org/licenses/by/4.0/</cc:license>
<content:encoded><![CDATA[<figure><img alt="Jenkins Logo" src="https://cdn-images-1.medium.com/max/207/0*tJAWgltrS_cE_cb8.png" /></figure><p>As a former Lead Tools &amp; Build Engineer at a small games studio, I often wondered: could we roll out updates to Jenkins, its configs and its jobs in a better way?</p><p>The stability of a Jenkins controller is critical, especially once it has become central to everyone’s daily workflow. Our controller, for instance, was responsible for the following:</p><ul><li>Regular builds to ensure stability and for publishing to QA</li><li>PR Checks to do script-only builds and prevent compilation errors from entering the mainline</li><li>On-demand builds (either directly or via PR Comments)</li></ul><p>This meant that any downtime on the Controller resulted in delays to pull requests (PRs) being approved for merging, a lack of on-demand build support, or, in worse cases, no builds for testing or shipping. It was therefore essential to minimize downtime as much as possible.</p><p>On my own time, I opted to explore ways to avoid some of the key risks involved in updating a Controller, such as:</p><ul><li>Updating a Job Config</li><li>Updating Plugins</li><li>Applying a Jenkins Update</li><li>Modifying Settings</li></ul><p>Each of the above comes with its own risks, and with limited hardware available, setting up a staging environment was neither possible nor practical, since we had no reliable way to fully mirror each job&#39;s settings and configs.</p><p>Some of the key requirements I had for a replacement were:</p><ul><li>Source-controlled</li><li>Locally testable</li><li>All configs, pipelines, settings etc. must be defined by code</li><li>Automated deployments</li></ul><p>We already used pipelines for our jobs, and each project had its own pipeline file in its repository. However, all our settings and Jenkins configs had only ever been set up via the UI, so changing these settings happened live. 
If we made a mistake, the build farm could go down or pipelines could break, causing delays for the team. To solve this problem, we needed a way to replicate these settings so that we could test changes before making them live.</p><p>Enter Docker…</p><p>Around this time I discovered that we no longer had to install the controller as a WAR file; instead, we could deploy a Docker image, and this is where the cogs began to turn…</p><p>With the Docker deployment method came the ability to customise that image, and at this point, the Configuration as Code plugin was becoming stable, which meant we would be able to define the configuration of our controller in a code file. Adding this file to the Docker image began to open a world of possibilities…</p><p>This initial version solved one key issue: we could now replicate our exact production config and run it locally, or even on staging machines in the future.</p><p>Replicating the config was one thing, but much of the important information about how a job ran was in the job config, something Configuration as Code didn’t solve. Our options here were seemingly limited to the Job DSL plugin; however, this didn’t fully cover the options our jobs used and was laborious to write. This is when I stumbled across a far better option: XML. Internally, Jenkins stores job configs in XML, so we didn’t need to rewrite job configs into some other format to rebuild them in the new image, because we already had an easy-to-read config that could be source-controlled. Better yet, we could configure the job in the Jenkins UI, save it and simply copy the XML file into our Jenkins Controller repo to be included in the Dockerfile — so long as we copied it to the correct folder in the image, Jenkins would accept that the job already existed.</p><p>At this point, we’d now achieved most of our goals: the config was defined by code, job configs were included in the Docker image using XML files, and all of this was in source control. 
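To make this concrete, here is a minimal sketch of what such an image definition could look like. This is my illustration rather than the article’s actual file, and the plugin list, config path and job name are assumptions:

```dockerfile
# Start from the official Jenkins LTS image
FROM jenkins/jenkins:lts

# Pre-install a pinned plugin list (including the Configuration as Code plugin)
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN jenkins-plugin-cli --plugin-file /usr/share/jenkins/ref/plugins.txt

# Point the Configuration as Code plugin at our source-controlled config
ENV CASC_JENKINS_CONFIG=/var/jenkins_home/casc/jenkins.yaml
COPY casc/jenkins.yaml /var/jenkins_home/casc/jenkins.yaml

# Job configs: anything under /usr/share/jenkins/ref is copied into
# JENKINS_HOME on first start, so the job appears as if it always existed
COPY jobs/game-build/config.xml /usr/share/jenkins/ref/jobs/game-build/config.xml
```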
All of that meant I could run the image on my machine and have a replica of our production environment in which to make changes and test the results.</p><p>The last remaining goal: automating the deployment pipeline.</p><p>One important thing to note here: I didn’t want Jenkins involved in any part of this pipeline. If I accidentally pushed a change that broke it, I needed to be able to use the same CI/CD as normal to recover the system. This meant looking to GitHub Actions to support deploying Jenkins.</p><p>Building and distributing the image was pretty straightforward using existing actions, but getting the Jenkins Controller machine to grab the latest image automatically meant some extra configuration was needed. This came in the form of Watchtower, a fantastic Docker image that watches for updates to your other images and then pulls them. Using this, we could schedule the new image to be pulled regularly at a time we knew would be quiet.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*H4ZRUYOYrcPUtNa7SYZffA.png" /></figure><p>In summary, this new approach offers several advantages over the previous setup. Unlike before, where all changes were made live and couldn&#39;t be tested or easily reverted, the new approach stores everything in source control. This allows for a thorough review before pushing to the mainline for automatic deployment at a safe time, typically overnight. In the event of issues, we can roll back to the previous image. Additionally, access to the live controller is heavily restricted, as there should be no need for direct changes.</p><p>I’ve skipped over a lot of the detail here and will write a follow-up post delving into it, so follow along, leave a comment or clap to show your support.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=26b74907b3b" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Bitesize Git for newcomers; Part 2: Commits]]></title>
            <link>https://medium.com/@RunningMattress/bitesize-git-for-not-nerds-part-2-commits-5db9e037a8f5?source=rss-702f4857791a------2</link>
            <guid isPermaLink="false">https://medium.com/p/5db9e037a8f5</guid>
            <category><![CDATA[git]]></category>
            <category><![CDATA[development]]></category>
            <category><![CDATA[bitesize]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[learning-to-code]]></category>
            <dc:creator><![CDATA[RunningMattress]]></dc:creator>
            <pubDate>Wed, 23 Aug 2023 19:56:31 GMT</pubDate>
            <atom:updated>2023-09-05T17:50:20.218Z</atom:updated>
            <cc:license>http://creativecommons.org/licenses/by/4.0/</cc:license>
<content:encoded><![CDATA[<figure><img alt="A git graph" src="https://cdn-images-1.medium.com/max/1024/1*ZiB_BMVeyXg-JEEnT32jdw.jpeg" /><figcaption>Photo by <a href="https://unsplash.com/@yancymin?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Yancy Min</a> on <a href="https://unsplash.com/photos/842ofHC6MaI?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></figcaption></figure><p>Previously we briefly covered what a repository is and how to get one; now let’s talk about commits.</p><p>Commits are the lifeblood of Git: they describe every single change from the start of your project to its current state. Each commit describes a single set of changes in your project, whether that was adding lines or files, removing them or just editing those lines. All that information is stored in a commit. Whilst Git will automatically track what you’re changing, the creation of a commit is entirely driven by the user; you choose what to commit, and when.</p><p>A <strong>commit</strong> is made up of a few important elements:</p><ul><li>The change diff — This is in the form of a text snippet that describes the type of change and the new data in a line-by-line format.</li><li>A <strong>commit</strong> message — This is a message written by the author to describe the change that was made.</li><li>An author — The person who created the commit</li><li>A unique identifier — Also known as the <strong>commit</strong> SHA</li></ul><p>To create a <strong>commit</strong> you’ll first need to “<strong>stage</strong>” some files in your chosen Git GUI; this is essentially a process of ticking which files should be in your commit — very useful if you’ve worked on multiple things at the same time but want to create multiple commits to describe the work you did. 
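For the curious, here is what staging looks like on the command line; your GUI does the same thing under the hood. (The repository and file here are throwaway examples invented for this sketch.)

```shell
git init practice                       # create a throwaway repository to experiment in
echo "speed = 5" > practice/player.txt  # make a change: a brand new file
git -C practice add player.txt          # "stage" the file: tick it for inclusion in the next commit
git -C practice status --short          # prints "A  player.txt": staged and ready to commit
```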
You’ll then need to write a message to describe what you’ve done, and then just commit and you’re done.</p><p>So, why should you create a commit? Well, it has a few purposes:</p><ol><li>It’s a useful &quot;checkpoint&quot; of a time when something functioned in a particular way. For example, maybe you made a change to the character’s movement speed and thought it was good; a commit allows you to record that and continue making changes, safe in the knowledge you can get back to where you were before.</li><li>Commits are the backbone of Git, and without them you don’t have any way to back up your changes.</li><li>They provide a record that helps remind you of what you’ve done.</li></ol><p>What about when to create a commit? Whilst there’s no strict rule on when to <strong>commit</strong>, and it’s not an automatic process, these are some good guiding principles for your <strong>commits</strong>:</p><ul><li>Each <strong>commit</strong> should be made at a time when your changes are functional, that is to say, the project could be built and run.</li><li>A <strong>commit</strong> should describe a singular change; this makes it easier to undo a specific change if needed and maintains a cleaner version control history.</li><li>You should <strong>commit</strong> little and often; that is, make small, meaningful commits regularly.</li></ul><p>When writing a commit message, it’s important to ensure the message is clear but concise. The Conventional Commits standard is generally a good guide to follow: it enables you to quickly filter through commits to find the different types of commits you’ve made (bug fixes, features etc.) and, better yet, you can ultimately use it to auto-generate a change list. 
Here’s an example of a conventional <strong>commit</strong>:</p><pre>feat(player): added a new character</pre><p><a href="https://www.conventionalcommits.org/en/v1.0.0/#examples">Conventional Commits</a></p><p>In conclusion: create regular, focused commits, and write helpful, structured commit messages for yourself and your colleagues.</p><p><em>If you’ve found this helpful, please drop me a follow or a like to help others find this content and to get updates when I post new articles</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5db9e037a8f5" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Bitesize Git for newcomers; Part 1 — Repositories and checking them out]]></title>
            <link>https://medium.com/@RunningMattress/bitesize-git-for-not-nerds-part-1-repositories-and-checking-them-out-4271ee993d80?source=rss-702f4857791a------2</link>
            <guid isPermaLink="false">https://medium.com/p/4271ee993d80</guid>
            <category><![CDATA[development]]></category>
            <category><![CDATA[learning-to-code]]></category>
            <category><![CDATA[bitesize]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[git]]></category>
            <dc:creator><![CDATA[RunningMattress]]></dc:creator>
            <pubDate>Wed, 23 Aug 2023 08:17:15 GMT</pubDate>
            <atom:updated>2023-09-05T17:50:12.011Z</atom:updated>
            <cc:license>http://creativecommons.org/licenses/by/4.0/</cc:license>
<content:encoded><![CDATA[<h3>Bitesize Git for newcomers; Part 1 — Repositories and checking them out</h3><figure><img alt="A git graph" src="https://cdn-images-1.medium.com/max/1024/1*ZiB_BMVeyXg-JEEnT32jdw.jpeg" /><figcaption>Photo by <a href="https://unsplash.com/@yancymin?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Yancy Min</a> on <a href="https://unsplash.com/photos/842ofHC6MaI?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></figcaption></figure><p>I see plenty of resources for teaching Git to programmers. This is quite natural, of course, since programmers spend a lot of time using Git. But in the games industry especially, plenty of other disciplines utilise Git a lot in their everyday work.</p><p>This series aims to describe Git, and how to best use it, in a much less technical manner.</p><p>Before we start, I highly recommend using a program to interact with Git, typically referred to as a Git GUI (simply meaning Git graphical user interface). My personal favourite is GitKraken, but GitHub Desktop offers a very nice, easy workflow as well. Many people will insist you learn Git via the command line; however, whilst useful, this is overkill for most early users.</p><p>Before we begin… let’s quickly define what Git is:</p><p>Quite simply, Git is software that allows multiple users to store and track the history of various files. Think of it as a clever way to back up your files at regular intervals, while simultaneously being able to share those files with others. Typically Git is used with code or text files but can be extended to other data types, including binary files (like images and models) with a little extra setup.</p><p>Unlike a regular backup, however, Git doesn’t store the entire file; what Git stores instead is information about the change you made. 
It calls this a <strong>commit</strong>, which is essentially an advanced save with some additional data.</p><p>A Git project is called a repository, and there are many hosting services for Git, including GitHub, GitLab and Bitbucket. A repository in its simplest form is a list of changes that describe the project from its creation to its current state.</p><p>So, how do you get a repository? Well, the first step is to pick a hosting service; GitHub is one of the more popular options at the moment and is great for new starters. Once you’ve created an account you’ll be guided through the process of creating your first repository by GitHub, at the end of which you’ll be able to perform an action known as <strong>cloning</strong>: this is essentially just copying the repository to your computer. Cloning in a Git GUI is as easy as logging in and picking the repository you want to clone, and where to. This will then copy all of the changes to your machine and allow you to start making your own changes.</p><p>One last important note: a repository clone is exactly that, an exact copy of the remote, hosted version of the repository. This means that any changes you make are local to you until they are pushed back to the remote; we’ll cover that in more detail later, though.</p><p>In conclusion, a Git repository is made up of a series of commits, each of which describes a change that someone made to the project. You clone these changes to your machine to get a local version of them and work within the project.</p><p>In a future article, we’ll start looking at what a commit is in more detail.</p><p><em>If you’ve found this helpful, please drop me a follow or a like to help others find this content and to get updates when I post new articles</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=4271ee993d80" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Soft skills for games development]]></title>
            <link>https://medium.com/@RunningMattress/soft-skills-for-games-development-76c1be086064?source=rss-702f4857791a------2</link>
            <guid isPermaLink="false">https://medium.com/p/76c1be086064</guid>
            <category><![CDATA[career-advice]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[games]]></category>
            <category><![CDATA[games-industry]]></category>
            <dc:creator><![CDATA[RunningMattress]]></dc:creator>
            <pubDate>Wed, 09 Aug 2023 08:11:00 GMT</pubDate>
            <atom:updated>2023-08-10T20:15:33.677Z</atom:updated>
            <cc:license>http://creativecommons.org/publicdomain/zero/1.0/</cc:license>
<content:encoded><![CDATA[<p>So you’ve learnt how to program. Maybe you taught yourself, did some courses online, or perhaps went to uni and got a degree in a relevant field like computer science or games technology.</p><p>Technical skills are only part of what recruiters and hiring managers in the games industry seek. The other, often overlooked, skills are equally important in an industry that demands a high level of collaboration. Let’s look at the soft skills that will help you join or progress within the games industry.</p><p>Here are a few of the things I am personally always on the lookout for in a candidate:</p><h4>Communication</h4><p>A programmer doesn’t just write good code all day long; though that’s a key part of the role, it’s equally important to be a great communicator. Programmers spend a large portion of their time communicating with their peers, designers, producers and many others across the project, and knowing how to communicate effectively massively increases your value as a programmer. It enables you to work closely with design to implement their vision, keep your producer up to date, and work with other programmers to build your understanding of the systems you’re working in or with and resolve any technical challenges you face.</p><h4>Willingness and ability to learn</h4><p>This is a hugely important attribute I look for. Often I’d rather take on a less technically competent candidate who shows a willingness and ability to learn than a candidate who is technically brilliant but has no aptitude for learning. Let me explain why: the candidate who is willing to learn will do significantly better in the long term, as the games industry is constantly evolving with new techniques, new technology, new hardware and so on. More importantly, the game itself is always changing and adapting; the underlying systems are ever-expanding and constantly being refactored. 
Someone less willing or able to learn will ultimately have a harder time keeping up and will be a much less effective programmer in the long run.</p><h4>Interest in games</h4><p>This one is fairly obvious, I would hope: if you want to work in the games industry, it helps to have an interest in games. Not just playing them, but understanding how they work, what makes them tick, why they’re fun, why a system is implemented the way it is and the impact that has on the player. All this helps you anticipate problems in the systems you build and design your solutions accordingly. On a more day-to-day note, you’re going to play your game A LOT, so having an interest in playing and analysing games helps.</p><h4>Ability to take initiative</h4><p>Being able to take initiative is a great attribute to have. In programmers, this often manifests in the form of cleaning up nearby code while working in that area, or fixing another bug you spot whilst working on a feature. This goes a long way to improving the overall code in the game and helps you passively tackle tech debt, which all too often goes without dedicated time to address it.</p><h4>Humility</h4><p>For me, this translates to one key skill: knowing when you need help. It is always okay, and often preferred, to ask for help. All too often I see a programmer spending days giving vague updates on their work before ultimately admitting they’re stuck and need some help. Usually, this comes from a point of pride or not wanting to seem incompetent. But game development is hard, and often someone else on the team may have experienced a similar issue before and have suggestions that can save you days. Knowing when to reach out for help is crucial: don’t reach out too fast, but equally, leaving it a full day isn’t good either. A good rule of thumb is if, after a couple of hours, you’ve made no real progress and have no solid leads, reach out to your colleagues, your producer, or your lead. 
They can all help point you in the right direction.</p><p><em>If you enjoyed this, clap or subscribe to support the content!</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=76c1be086064" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Setting up a CI/CD pipeline for Unity Part 3]]></title>
            <link>https://medium.com/@RunningMattress/setting-up-a-ci-cd-pipeline-for-unity-part-3-5885016f4367?source=rss-702f4857791a------2</link>
            <guid isPermaLink="false">https://medium.com/p/5885016f4367</guid>
            <category><![CDATA[unity3d]]></category>
            <category><![CDATA[devops]]></category>
            <category><![CDATA[github-actions]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[programming]]></category>
            <dc:creator><![CDATA[RunningMattress]]></dc:creator>
            <pubDate>Sun, 04 Jun 2023 10:58:14 GMT</pubDate>
            <atom:updated>2023-06-04T10:58:14.693Z</atom:updated>
            <cc:license>http://creativecommons.org/licenses/by/4.0/</cc:license>
<content:encoded><![CDATA[<figure><img alt="Our optimisation gains from 15:30 build time to 6:30" src="https://cdn-images-1.medium.com/max/640/1*71eFHWYmQVJq14ACMir6zg.png" /><figcaption>Optimising the build time from 15:30 to 6:30 (60% faster)</figcaption></figure><p>In Part 2, we learnt how to set up an automated build process for Unity to generate regular builds with automatically generated release notes.</p><p><a href="https://medium.com/@RunningMattress/setting-up-a-ci-cd-pipeline-for-unity-part-2-e5e6693d4546">Setting up a CI/CD pipeline for Unity Part 2</a></p><p>In this next part, we’ll look at some improvements and optimisations we can make to the pipeline. We’ll look at the following options:</p><ol><li>Improving the stability of main by enforcing PRs are up to date before merging.</li><li>Caching to speed up job runs</li><li>Separation and Parallelisation of Jobs</li><li>Ensuring your main branch only contains conventional commit messages</li><li>Self-Hosted Runners and Cache Accelerators</li><li>Only testing code changes</li></ol><h4>1. Improving the stability of main by enforcing PRs are up to date before merging.</h4><p>When setting up our build pipeline in the previous article, we noted that we still had to test our main branch because we couldn’t easily enforce PRs being up to date before we merge. Well, GitHub has some tools that will help with this! Firstly, we should tick the “Require branches to be up to date before merging” branch protection rule in the Branches part of the repository settings.</p><figure><img alt="Branch protection rules showing the “Require branches to be up to date before merging” rule" src="https://cdn-images-1.medium.com/max/779/1*vqOxO0SM2T7xf15zXv2zjA.png" /><figcaption>Branch Protection Rules</figcaption></figure><p>This change alone, however, will cause us lots of problems in a particularly busy repository. 
Our very tiny project, with just a single script, a few unit tests and no Play Mode tests, already takes over 5 minutes to run! On bigger projects, this will grow by quite a bit. Because this rule ensures that you have the latest changes before you can merge, as soon as someone else merges, you have to update your PR. GitHub can automate that for you with a handy button on the PR page, but it adds a lot of time and manual back and forth for developers.</p><p>Introducing the Merge Queue!</p><figure><img alt="Merge Queue Settings" src="https://cdn-images-1.medium.com/max/774/1*-y9uv40glK_1RfbOzhNOLQ.png" /><figcaption>Merge Queue Settings</figcaption></figure><p>This handy feature allows you to queue up pull requests and automatically merge the latest changes from the target branch, main in our case, into all pull requests in the queue. But the clever part is that it also merges all the changes from PRs ahead of it in the queue. This means that the third PR in the queue, for example, would contain the latest changes from main, the changes from the first PR in the queue, and those from the second. Each PR in the queue is then tested again to ensure that it still works with what main would contain once merged.</p><p>Because these can now be tested in parallel, the rate at which PRs get merged back into main is much improved. It also has the added advantage that you can now guarantee main will always be in a state where it compiles and the unit tests pass. One downside worth mentioning is that this will eat your free (or paid) minutes faster, as more checks are being run: one to validate your PR works on its own before it’s added to the queue, and one again once it’s in the queue with all the changes ahead of it. 
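For the queue’s test merges to actually be checked, the workflow running your checks also needs to listen for the merge queue’s event. A sketch of the trigger block, assuming the check workflow from the earlier parts:

```yaml
# Run the same checks for the PR itself and for its spot in the merge queue
on:
  pull_request:
    branches: [main]
  merge_group:   # fired for each temporary merge the queue creates
```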
If you’re using self-hosted runners, you’ll need to support the increased concurrency as well to get the benefits of this.</p><p><a href="https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/incorporating-changes-from-a-pull-request/merging-a-pull-request-with-a-merge-queue">Merging a pull request with a merge queue - GitHub Docs</a></p><h4>2. Use caching to speed up job runs</h4><p>As you may already know, when you open Unity, it will attempt to import any new files and process them to apply any rules or settings you’ve applied to them. This can be a pretty long process after a big PR with many files has been merged, or when you open a project for the first time.</p><p>Currently, our pipeline unfortunately falls into the latter bracket: every single time we run the pipeline, it re-imports all of the project files from scratch. So, let’s see how we can fix that. GitHub Actions has a very handy caching feature we can use here: it allows us to capture all of the import results generated by the build. We can then restore this cache at the start of the next build, import anything that has changed or is new, and those results will then be re-cached for the next build. This will massively speed up both of the CI/CD processes that we’ve made.</p><pre>      # Cache<br>      - uses: actions/cache@v3<br>        with:<br>          path: Unity CI CD/Library<br>          key: Library-PR<br>          restore-keys: |<br>            Library-<br></pre><p>The caching process isn’t fast for our use case, however, so we’ll optimise further by using a different cache for the build and test processes; we can do this by changing the cache key. 
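Concretely, giving each process its own cache might look something like this (the key names are my own illustration, not the article’s exact values):

```yaml
# Test job: a smaller, script-only cache
- uses: actions/cache@v3
  with:
    path: Unity CI CD/Library
    key: Library-Test
    restore-keys: |
      Library-Test

# Build job: the full import cache, kept separate
- uses: actions/cache@v3
  with:
    path: Unity CI CD/Library
    key: Library-Build
    restore-keys: |
      Library-Build
```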
The unit tests won’t need the full set of library imports as it’s a script-only process and won’t import other files like art assets which are much larger and will take a longer time to download.</p><figure><img alt="Our library caches from the CI/CD runs" src="https://cdn-images-1.medium.com/max/1024/1*_miOTQtQCn1iKupzJdPFww.png" /><figcaption>The various caches generated for our workflows</figcaption></figure><h3>3. Separation and Parallelisation of Jobs</h3><p>If we don’t want or can’t use the PR queue feature then there’s another option we have. GitHub workflows are broken up into jobs. Currently, our workflows all use a single job. If we broke these up into 3 distinct jobs: Test, Build and Release, we could parallelise the Test and Build jobs since they’re not dependent on each other.</p><p>We can break the pipeline up something like below:</p><pre>jobs:<br>  test:<br>    name: Test<br>    runs-on: ubuntu-latest<br>    steps:<br>      #Checkout<br>      #Test Code<br>  <br>  build: <br>    name: Build<br>    runs-on: ubuntu-latest<br>    outputs:<br>      releaseNotes: #release notes step output<br>      tag: #version step output<br>    steps:<br>      #Checkout<br>      #Build Code<br><br>  Release:<br>    name: Release<br>    runs-on: ubuntu-latest<br>    needs: [test, build]<br>    steps:<br>      #Release Code</pre><p><em>(Full code in the repository at the end of the article)</em></p><p>Here’s how our pipeline looks like after the optimisation:</p><figure><img alt="A completed pipeline showing two stages running in parrallel followed by the release stage once both have passed" src="https://cdn-images-1.medium.com/max/666/1*QV49lW9nGHhGimDo6Or3_g.png" /><figcaption>The optimised pipeline</figcaption></figure><p>A few things to remember when breaking up your job</p><ol><li>Each job that needs the code will need to checkout its copy of the code.</li><li>The way you access the outputs of previous jobs is a bit different to the outputs of previous steps and 
requires outputs to be defined at the top of the job.</li><li>Use needs to set up prerequisites if you want a job to wait for other jobs before it runs.</li></ol><h3>4. Ensuring your main branch only contains conventional commit messages</h3><p>We mentioned in a previous article the power of conventional commits in generating automated changelogs. But this isn’t always the friendliest format for less tech-orientated disciplines to use in their commit messages. We still want to harness the power of these in our pipeline though, so instead of enforcing them at a commit level, we’ll enforce them at the PR level.</p><p>First, let’s set up our PR merge rules.</p><figure><img alt="PR Merge Rules showing that merge commits are disabled" src="https://cdn-images-1.medium.com/max/791/1*ZGCtmtUyiVDQx1PIkWF_Fg.png" /><figcaption>PR Merge Rules</figcaption></figure><p>To best support the workflows we want to encourage on our repository, we disabled merge commits.</p><p>Looking at the remaining strategies, we have Squash and Rebase. Our preferred option is Rebase, as this provides a very clean history and preserves each commit from the PR; it also allows multiple changelog updates from the conventional commits. 
We also provide a Squash option for those who aren’t using conventional commits, and instead, we’ll enforce the conventional commit style on the title.</p><p>We’ll do that with another GitHub action and make this a required check in our branch protection setting.</p><pre>name: Check PR title<br>on:<br>  pull_request:<br>    types:<br>      - opened<br>      - reopened<br>      - edited<br>      - synchronize<br><br>jobs:<br>  lint:<br>    runs-on: ubuntu-latest<br>    steps:<br>      - uses: aslafy-z/conventional-pr-title-action@v3<br>        id: pr_title_check<br>        env:<br>          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}<br><br>      - name: PR Comment<br>        if: ${{ failure() }}<br>        uses: thollander/actions-comment-pull-request@v1<br>        with:<br>          message: |<br>            Add a prefix like &quot;fix: &quot;, &quot;feat: &quot; or &quot;feat!: &quot; to indicate the change type this pull request corresponds to. The title should match the commit message format as specified by https://www.conventionalcommits.org/.</pre><p>This check should ensure all titles match the conventional commit spec. It’s not perfect, however, and in the future we could use some more advanced automation to auto-merge PRs in the correct manner based on their contents.</p><h3>5. Self-Hosted Runners and Cache Accelerators</h3><p>GitHub’s hosted runners can only take you so far before you run out of free minutes for the month, or need a more powerful machine for faster speeds or bigger disk space. At this point, you’re faced with 2 options:</p><ol><li>Join GitHub’s larger runners beta program and pay for the more powerful machines.</li><li>Set up your own infrastructure.</li></ol><p>Option one is certainly the easiest option if you want to stay hands-off from the infrastructure side of things.</p><p>However, option two comes with its own set of benefits (and problems) that we should consider. 
Firstly, if we have our own infrastructure, whether cloud-based or local, we will have greater control over the setup of those machines, ensuring we have the exact hardware we desire. But we could also set up a Cache Accelerator alongside our infrastructure. This would replace the caching changes we made above, since Unity, with some extra arguments, can be configured to ask the cache accelerator for any library artifacts it needs and download only those, instead of restoring a huge amount of data from the cache on the assumption we need it.</p><h3>6. Only testing code changes</h3><p>Imagine if, every time you added art assets to your game, you had to wait for the Unit tests to pass… This doesn’t make any sense: we don’t test the art with Unit tests, and art changes should have no influence on our test results.</p><p>So let’s remove this case from our pipeline:</p><pre>#Run on all pull requests that contain script, json or yaml changes<br>on: <br>  pull_request:<br>    paths:<br>      - &#39;**.cs&#39;<br>      - &#39;**.yml&#39;<br>      - &#39;**.json&#39;</pre><p>By adding this to the top of our PR Check workflow, we ensure we only run the check when the changed files can actually influence its result.</p><p>We also need to add another workflow, art_pr_check.yml, to our project:</p><pre>name: Test project<br><br>#Run on anything that doesn&#39;t have .cs, .yml or .json file changes<br>on:<br>    pull_request:<br>      paths-ignore:<br>        - &#39;**.cs&#39;<br>        - &#39;**.yml&#39;<br>        - &#39;**.json&#39;<br><br>jobs:<br>    build:<br>      name: PR Check<br>      runs-on: ubuntu-latest<br>      steps:<br>        - run: &#39;echo &quot;No build required&quot;&#39;</pre><p>This check deliberately uses the same name as our PR Check because it’s a required check: a check with this name MUST pass. 
So with the above two changes in place, we can ensure that our art PRs aren’t slowed down by Unit tests that they can’t break, and everything can still be merged.</p><h3>Conclusion</h3><p>That about wraps it up for our optimisations and improvements for now, but this work is never complete, so there’s always plenty more to do.</p><p>I’ll leave you with one last note: don’t just apply all these optimisations blindly. Take the time to understand your particular pipeline, where the bottlenecks are and where the improvements are needed, and strategically deploy optimisations where they are needed most.</p><p>As always, the code can be found on GitHub:</p><p><a href="https://github.com/RunningMattress/UnityCI_CD_Pipeline/releases/tag/v0.0.2">Release Release v0.0.2 · RunningMattress/UnityCI_CD_Pipeline</a></p><p><em>If you enjoyed this, clap or subscribe to support the content!</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5885016f4367" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Setting up a CI/CD pipeline for Unity Part 2]]></title>
            <link>https://medium.com/@RunningMattress/setting-up-a-ci-cd-pipeline-for-unity-part-2-e5e6693d4546?source=rss-702f4857791a------2</link>
            <guid isPermaLink="false">https://medium.com/p/e5e6693d4546</guid>
            <category><![CDATA[gaming]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[unity3d]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[devops]]></category>
            <dc:creator><![CDATA[RunningMattress]]></dc:creator>
            <pubDate>Sat, 20 May 2023 18:02:20 GMT</pubDate>
            <atom:updated>2023-05-20T18:02:20.817Z</atom:updated>
            <cc:license>http://creativecommons.org/licenses/by/4.0/</cc:license>
            <content:encoded><![CDATA[<p>In the first article in this series, we discussed how to set up a pipeline to ensure the stability of code changes going into our main branch. This helps to keep our code in an always-compiling state and the unit tests passing; this largely covered our CI section.</p><p><a href="https://medium.com/@RunningMattress/setting-up-a-ci-cd-pipeline-for-unity-part-1-344e74c4f35a">Setting up a CI/CD pipeline for Unity Part 1</a></p><p>In this second article, we’ll cover the CD side of things by way of setting up a pipeline for automating the build and release of our project. We’ll reuse a lot of the same concepts from our previous article and stick with GameCI for building.</p><p>Let’s define what our CD process needs to cover:</p><ol><li>Something will need to trigger the pipeline</li><li>We need to check out the project</li><li>Ideally, we should test it before we build it.</li><li>Next, we’ll do the actual build of the project</li><li>Finally, we need to release to a destination of our choice (In this article we’ll use GitHub Releases)</li></ol><p>For our example project, we’re gonna imagine we’re ultimately aiming to ship a mobile project, and for simplicity’s sake, we’ll pick Android for our first pipeline. However, with some very minor adjustments, everything we’ll talk about here can be adapted for other platforms.</p><h4>How often to trigger the build</h4><p>We have a few different options to trigger our build. In a true CI/CD setup you want to deploy as often as possible, but realistically you should scale this to suit your team’s needs and capabilities.</p><p>Here are some options to consider:</p><ol><li>Every commit</li><li>Every day</li><li>Every week</li></ol><p>The less frequent the builds, the more changes are likely to be included, and therefore the more possibilities for bugs to be introduced. So pick your cadence wisely, but be cautious about creating more builds than you can test. 
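</p><p>For reference, those cadences map onto workflow triggers roughly like this (a sketch for comparison; you’d pick just one of the three in a real workflow file):</p><pre># 1. Every commit to main<br>on:<br>  push:<br>    branches: [main]<br><br># 2. Every day at midnight UTC<br>on:<br>  schedule:<br>    - cron: &quot;0 0 * * *&quot;<br><br># 3. Every week, Monday at midnight UTC<br>on:<br>  schedule:<br>    - cron: &quot;0 0 * * 1&quot;<br></pre><p>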
With that in mind, we’ll create a weekly build trigger for now. As before, we’ll make a new workflow file at .github/workflows/build.yml</p><pre>name: Weekly Build<br>on:<br>  workflow_dispatch:<br>  schedule: <br>    - cron: &quot;0 0 * * 1&quot;</pre><p>Notice we’ve also added the workflow_dispatch trigger, which leaves us the freedom to manually start our builds if we need to. Scheduling a GitHub action uses the cron syntax; if you’re not familiar with it, I recommend crontab guru as a great resource for building and understanding expressions.</p><p><a href="https://crontab.guru/#0_0_*_*_1">Crontab.guru - The cron schedule expression editor</a></p><h4>Checkout and Test</h4><p>As before, we’ll check out and test in the same way. Unlike before, we’re not operating on a PR, but the GitHub checkout action will still help us out of the box by checking out the latest code on the branch it’s running on, in our case: main.</p><p>We’ll run our tests again now; this may seem counterintuitive since we ran tests before we merged. However, games can take a very long time to build and test, especially if you start adding playmode tests that need to run in real-time or near real-time. With multiple developers all raising changes all day long, we can’t easily enforce a rule to ensure PRs are fully up to date before merging. 
We do have a few other options however and we’ll talk about those in a future article, for the purpose of this article however we’ll continue to assume we can’t rely on the tests always being run on up-to-date code.</p><h4>Build</h4><p>Let’s take our new trigger code and combine it with the previous PR check code</p><pre>name: Weekly Build<br>on:<br>  workflow_dispatch:<br>  schedule: <br>    - cron: &quot;0 0 * * 1&quot;<br><br>jobs:<br>  testAllModes:<br>    name: Build<br>    runs-on: ubuntu-latest<br>    steps:<br><br>      - uses: actions/checkout@v2<br>        with:<br>          lfs: true<br><br>      - uses: game-ci/unity-test-runner@v2<br>        env:<br>          UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }}<br>        with:<br>          projectPath: path/to/your/project<br>          githubToken: ${{ secrets.GITHUB_TOKEN }}<br>          testMode: EditMode<br><br>      - uses: actions/upload-artifact@v2<br>        if: always()<br>        with:<br>          name: Test results<br>          path: artifacts</pre><p>The build process isn’t too different from the test process, we just use the builder action instead and pass in the platform we wish to build.</p><pre>- uses: game-ci/unity-builder@v2<br>  env:<br>    UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }}<br>    UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }}<br>    UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }}<br>  with:<br>    targetPlatform: Android<br>    projectPath: &quot;path/to/your/project&quot;<br>    buildsPath: &quot;build&quot;<br></pre><p>As with the Test action, GameCI does all the hard work of figuring out which Unity version we’re using and installs it for us as well as any dependencies like the Android SDKs.</p><p>We can use the upload artifact action to store the build result.</p><pre>- uses: actions/upload-artifact@v2<br>  with:<br>    name: Build<br>    path: build</pre><p>This will start to consume a fair amount of space over time but we’ll look at resolving that in the next 
step.</p><h4>Release</h4><p>So now everything is building, let’s look at creating a release. We will later look at more complex release pipelines targeting actual storefronts, but for now, we’ll make simple GitHub releases instead.</p><p>We’ll first need to tag the release so GitHub knows what to call the release and what we’re actually releasing. We’ll put this step just above the Unity build step, as we’ll need to use some of the outputs in our build.</p><pre># Tag<br>- name: Bump version and push tag<br>  id: tag_version<br>  uses: mathieudutour/github-tag-action@v6.1<br>  with:<br>    github_token: ${{ secrets.GITHUB_TOKEN }}<br>    release_branches: main</pre><p>The above action will use our commit messages to decide on the appropriate version number; it does this based on semantic release conventions. I’m a big fan of the Conventional Commit format, so this works out greatly in our favour. For example, any commit I prepend with fix: will result in a patch version bump, whereas feat: creates a minor version bump, and finally BREAKING CHANGE: will result in a major release.</p><p><a href="https://www.conventionalcommits.org/en/v1.0.0/">Conventional Commits</a></p><p>This gives us a great deal of versatility if the conventional commit spec is followed, and in a future article, we’ll talk about how we can enforce following the spec in a manner that isn’t too controlling.</p><p>We’ll quickly amend our Unity build step now to grab the version out of our tag step. 
Below we’ve set the versioning type to Custom and the version we pass in is the result of our tag step ${{steps.tag_version.outputs.new_tag}}</p><p>That should enable our build version to always match the latest tag.</p><pre>      # Build<br>      - uses: game-ci/unity-builder@v2<br>        env:<br>            UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }}<br>            UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }}<br>            UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }}<br>        with:<br>            projectPath: &quot;path/to/your/project&quot;<br>            buildsPath: &quot;build&quot;<br>            versioning: Custom<br>            version: ${{ steps.tag_version.outputs.new_tag }}<br>            targetPlatform: Android</pre><p>On to the actual release side of things, we’ll again use another GitHub action to help us out here:</p><pre># Create Release<br>- uses: ncipollo/release-action@v1<br>  with:<br>    body: ${{ steps.tag_version.outputs.changelog }}<br>    token: ${{ secrets.GITHUB_TOKEN }}<br>    generateReleaseNotes: true<br>    tag: ${{ steps.tag_version.outputs.new_tag }}<br>    name: Release ${{ steps.tag_version.outputs.new_tag }}</pre><p>This step reuses some of the information we previously generated when tagging the release, because as well as tagging our repository with the appropriate version it also created a changelog for us using the same conventional commit spec. This changelog uses the additional information provided in the spec to group changes by feature and change type providing a log that gives your quality engineers and users a very clear and easy-to-read log of exactly what’s new.</p><p>As mentioned above, always uploading the build to the workflows artifact storage will start consuming a lot of space over time. Since we need to upload the build to the release as well this is also wasting time uploading twice! 
So instead, we’ll remove the build artifact upload above (making sure to leave the test results upload!) and attach the build to our release by adding the line below to the release step.</p><pre>artifacts: &quot;build&quot;</pre><p>Altogether your script should look something like this:</p><pre>name: Weekly Build<br>on:<br>  workflow_dispatch:<br>  schedule: <br>    - cron: &quot;0 0 * * 1&quot;  <br><br>jobs:<br>  testAllModes:<br>    name: Build<br>    runs-on: ubuntu-latest<br>    steps:<br><br>      #Checkout<br>      - uses: actions/checkout@v2<br>        with:<br>          lfs: true<br><br>      # Test<br>      - uses: game-ci/unity-test-runner@v2<br>        env:<br>          UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }}<br>        with:<br>          projectPath: &quot;path/to/your/project&quot;<br>          githubToken: ${{ secrets.GITHUB_TOKEN }}<br>          testMode: EditMode<br><br>      # Upload Test Results<br>      - uses: actions/upload-artifact@v2<br>        if: always()<br>        with:<br>          name: Test results<br>          path: artifacts<br><br>      # Tag<br>      - name: Bump version and push tag<br>        id: tag_version<br>        uses: mathieudutour/github-tag-action@v6.1<br>        with:<br>          github_token: ${{ secrets.GITHUB_TOKEN }}<br>          release_branches: main<br><br>      # Build<br>      - uses: game-ci/unity-builder@v2<br>        env:<br>            UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }}<br>            UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }}<br>            UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }}<br>        with:<br>            projectPath: &quot;Unity CI CD&quot;<br>            buildsPath: &quot;build&quot;<br>            versioning: Custom<br>            version: ${{ steps.tag_version.outputs.new_tag }}<br>            targetPlatform: Android<br><br>      # Create Release<br>      - uses: ncipollo/release-action@v1<br>        with:<br>          body: ${{ 
steps.tag_version.outputs.changelog }}<br>          token: ${{ secrets.GITHUB_TOKEN }}<br>          generateReleaseNotes: true<br>          tag: ${{ steps.tag_version.outputs.new_tag }}<br>          name: Release ${{ steps.tag_version.outputs.new_tag }}<br>          artifacts: &quot;build/Android/*.apk&quot;</pre><p>Once again raise a pull request and merge this to your main branch once the Unit tests have finished.</p><p>You’ve now got a complete CI/CD pipeline that can run as frequently/infrequently as you need!</p><p>As before the code is all available to look at on GitHub:</p><p><a href="https://github.com/RunningMattress/UnityCI_CD_Pipeline/releases/tag/v0.0.1">Release Release v0.0.1 · RunningMattress/UnityCI_CD_Pipeline</a></p><p>Follow for the rest of this series and other articles. Next, we’ll be looking at some improvements and optimisation we can do to this pipeline.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e5e6693d4546" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Setting up a CI/CD pipeline for Unity Part 1]]></title>
            <link>https://medium.com/@RunningMattress/setting-up-a-ci-cd-pipeline-for-unity-part-1-344e74c4f35a?source=rss-702f4857791a------2</link>
            <guid isPermaLink="false">https://medium.com/p/344e74c4f35a</guid>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[gaming]]></category>
            <category><![CDATA[unity3d]]></category>
            <dc:creator><![CDATA[RunningMattress]]></dc:creator>
            <pubDate>Tue, 16 May 2023 19:56:31 GMT</pubDate>
            <atom:updated>2023-05-20T18:46:25.504Z</atom:updated>
            <cc:license>http://creativecommons.org/licenses/by/4.0/</cc:license>
            <content:encoded><![CDATA[<p>A CI/CD pipeline is an incredibly valuable addition to any Unity project.</p><p>Throughout this series, we’ll discuss what value it can bring to your project, how to set one up, and how to scale this for larger projects.</p><p>For the first article in this series, we’ll use GitHub Actions and GameCI to automate our Pull Requests back into our main branch.</p><figure><img alt="All the various steps involved in our pipeline" src="https://cdn-images-1.medium.com/max/399/1*-fyddxPyTjvtC6Z7obsMMw.png" /><figcaption>All the various steps involved in our pipeline</figcaption></figure><p>Firstly, let’s define a few key concepts for this series.</p><h4>What is a CI/CD pipeline?</h4><p>A CI/CD (Continuous Integration and Continuous Delivery) pipeline is the automation of the integration and delivery of your project to ensure consistency and quality. In terms of a Unity project, and in particular, what we’ll showcase in this article, this means automating the building and testing of your project before we merge it back into our mainline branch. In future parts of this series, we’ll look at automating the building and release of the project to various deployment targets.</p><h4>What are GitHub Actions?</h4><p>GitHub Actions workflows use a YAML-based syntax that allows us to define a series of steps to automate a task. These automated tasks are run on servers hosted by GitHub with a variety of operating systems and configurations available. Often these are composed of several smaller actions, such as GitHub’s checkout action to check out your repository. There’s a huge wealth of these available on the GitHub Actions Marketplace, allowing developers to piece together several of these to create incredibly powerful and robust pipelines.</p><h4>Setting up the project</h4><p>First, let’s take a simple Unity Project as an example and add some Unit tests to the codebase. 
This gives us something to test in the next step and starting a project with a test pipeline in place helps to further that practice as you develop more of the game.</p><p>Below you can see the very trivial feature and tests we created to prove our pipeline.</p><pre>public static class SampleFeature<br>{<br>    public static List&lt;string&gt; UniqueStrings; <br>    <br>    public static bool TryAddUniqueValue(string newValue)<br>    {<br>        //Init the list if null<br>        UniqueStrings ??= new List&lt;string&gt;();<br>        <br>        //Early exit if already added<br>        foreach (string item in UniqueStrings)<br>        {<br>            if (item == newValue)<br>            {<br>                return false;<br>            }<br>        }<br>        <br>        //Add the value<br>        UniqueStrings.Add(newValue);<br>        return true; <br>    }<br>}</pre><pre>public class SampleFeatureTests<br>{<br>    [SetUp]<br>    public void SetUp()<br>    {<br>        SampleFeature.UniqueStrings?.Clear();<br>    }<br>    <br>    <br>    // Test that we can add a single value<br>    [Test]<br>    public void CanAddAValue()<br>    {<br>        SampleFeature.TryAddUniqueValue(&quot;test&quot;);<br>        <br>        Assert.AreEqual(1, SampleFeature.UniqueStrings.Count);<br>    }<br>    <br>    // Test that we can add many values<br>    [Test]<br>    public void CanAddManyValues()<br>    {<br>        SampleFeature.TryAddUniqueValue(&quot;test&quot;);<br>        SampleFeature.TryAddUniqueValue(&quot;test2&quot;);<br>        <br>        Assert.AreEqual(2, SampleFeature.UniqueStrings.Count);<br>    }<br><br>    // Test that we cannot add duplicates<br>    [Test]<br>    public void CannotAddTheSameValue()<br>    {<br>        SampleFeature.TryAddUniqueValue(&quot;test&quot;);<br>        SampleFeature.TryAddUniqueValue(&quot;test&quot;);<br>        <br>        Assert.AreEqual(1, SampleFeature.UniqueStrings.Count);<br>    }<br>}</pre><figure><img alt="Passing Unit 
tests in Unity" src="https://cdn-images-1.medium.com/max/1024/1*r3cEb2-aVEINrxu2E-OO7g.png" /><figcaption>Passing Unit Tests</figcaption></figure><h3>Creating our simple CI/CD pipeline</h3><p>Moving on to the GitHub actions side of things, as mentioned earlier GameCI will be doing the bulk of the work for our actions.</p><p>Here we want to create an action that will be run against every single pull request we raise. This will validate that the project compiles and all Unit tests pass, finally then providing some simple feedback to the person raising the pull request to let them know their PR is good to be merged.</p><p>Breaking this down into some more manageable steps that we can begin writing a script for we have:</p><ol><li>Define what triggers the action (what causes it to run)</li><li>Checkout the project</li><li>Use GameCI to run the tests (this will trigger a compilation to achieve this)</li></ol><p>Looking at GameCI’s documentation we can see we’ll need to do some work upfront to generate an appropriate serial key for the project to use so we’ll start with this.</p><p>We’ll run the provided workflow from GameCI to do this. 
We can then use the serial key as a GitHub Actions secret to enable our workflow to use the license.</p><p>Make a file at .github/workflows/activation.yml</p><pre>name: Acquire activation file<br>on:<br>  workflow_dispatch: {}<br>jobs:<br>  activation:<br>    name: Request manual activation file 🔑<br>    runs-on: ubuntu-latest<br>    steps:<br>      # Request manual activation file<br>      - name: Request manual activation file<br>        id: getManualLicenseFile<br>        uses: game-ci/unity-request-activation-file@v2<br>      # Upload artifact (Unity_v20XX.X.XXXX.alf)<br>      - name: Expose as artifact<br>        uses: actions/upload-artifact@v2<br>        with:<br>          name: ${{ steps.getManualLicenseFile.outputs.filePath }}<br>          path: ${{ steps.getManualLicenseFile.outputs.filePath }}<br></pre><p>Push this to your main branch and then follow these steps to get a licence (.ulf) file: <a href="https://game.ci/docs/github/activation#converting-into-a-license">Converting into a license - GameCI documentation</a></p><ol><li>Follow these (one-time) steps for simple activation.</li><li>Manually run the above workflow.</li><li>Download the manual activation file that now appeared as an artifact and extract the Unity_v20XX.X.XXXX.alf file from the zip.</li><li>Visit <a href="https://license.unity3d.com/manual">license.unity3d.com</a> and upload the Unity_v20XX.X.XXXX.alf file.</li><li>You should now receive your license file (Unity_v20XX.x.ulf) as a download. 
It&#39;s ok if the numbers don&#39;t match your Unity version exactly.</li><li>Open GitHub &gt; &lt;Your repository&gt; &gt; Settings &gt; Secrets.</li><li>Create the following secrets:</li></ol><p>UNITY_LICENSE - (Copy the contents of your license file here)</p><p>UNITY_EMAIL - (Add the email address that you use to log into Unity)</p><p>UNITY_PASSWORD - (Add the password that you use to log into Unity)</p><figure><img alt="Our Repository Secrets" src="https://cdn-images-1.medium.com/max/781/1*lipI0iRHf_nibfsizuUSfA.png" /><figcaption>Our Repository Secrets</figcaption></figure><p>So now our license is all set up, we’re good to create our pipeline. Let’s create another pipeline file in the .github/workflows folder; we’ll call it pr_check.yml.</p><p>We’ll start with the trigger conditions:</p><pre>name: Test project<br><br>on: [pull_request]<br><br>jobs:<br>  testAllModes:<br>    name: Run Tests<br>    runs-on: ubuntu-latest<br>    steps:<br></pre><p>Let’s add GitHub’s checkout step.</p><pre>- uses: actions/checkout@v2<br>  with:<br>    lfs: true<br></pre><p>The default setup of this will check out the head of your PR branch, or in simpler terms, the latest commit on your branch. This makes it incredibly straightforward to use.</p><p>Next, we’ll add the GameCI step to run our tests.</p><pre>- uses: game-ci/unity-test-runner@v2<br>  env:<br>    UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }}<br>  with:<br>    projectPath: path/to/your/project<br>    githubToken: ${{ secrets.GITHUB_TOKEN }}<br>    testMode: EditMode<br></pre><p>And that’s it. 
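</p><p>As an aside: this configuration runs only the EditMode suite. If you add playmode tests later, one option is a job matrix over the testMode parameter (a sketch, with the mode values assumed from GameCI’s documentation, so double-check them against the action version you use):</p><pre>    strategy:<br>      fail-fast: false<br>      matrix:<br>        testMode: [playmode, editmode]<br>    steps:<br>      - uses: game-ci/unity-test-runner@v2<br>        env:<br>          UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }}<br>        with:<br>          projectPath: path/to/your/project<br>          githubToken: ${{ secrets.GITHUB_TOKEN }}<br>          testMode: ${{ matrix.testMode }}<br></pre><p>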
That’s all we need to run our editmode tests.</p><p>To make it easy to view the results, we’ll also upload them as artifacts.</p><pre>- uses: actions/upload-artifact@v2<br>  if: always()<br>  with:<br>    name: Test results<br>    path: artifacts<br></pre><p>Bringing that all together should give you something like this:</p><pre>name: Test project<br><br>on: [pull_request]<br><br>jobs:<br>  testAllModes:<br>    name: Run Tests<br>    runs-on: ubuntu-latest<br>    steps:<br><br>      - uses: actions/checkout@v2<br>        with:<br>          lfs: true<br><br>      - uses: game-ci/unity-test-runner@v2<br>        env:<br>          UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }}<br>        with:<br>          projectPath: path/to/your/project<br>          githubToken: ${{ secrets.GITHUB_TOKEN }}<br>          testMode: EditMode<br>      - uses: actions/upload-artifact@v2<br>        if: always()<br>        with:<br>          name: Test results<br>          path: artifacts<br></pre><p>You can now commit that to a branch, raise a pull request, and sit back and watch as the action springs to life to start validating your changes. Once this has completed successfully, go ahead and merge it so all future branches can use this.</p><figure><img alt="A successful check run" src="https://cdn-images-1.medium.com/max/440/1*OHFceJJLOLTk-F2LBYi7Tw.png" /><figcaption>A successful check run</figcaption></figure><h4>Set up GitHub rules</h4><p>The pipeline we’ve just created is all well and good, but without rules in place anyone can circumvent the check and merge bad code to the main branch, causing instability, future test run breaks, and ultimately someone having to spend time fixing it all.</p><p>So let’s head over to the repository settings and into the branch protections tab; from here we can create a new rule for our main branch and then add some restrictions. 
For now, we’ll use the “Require a pull request before merging” and “Require status checks to pass before merging” rules, and then add our check, “Run Tests”, to the list.</p><p>With these rules in place, the pipeline must succeed before the PR gets merged. Of course, any admin can still override these in the event they need to. I would, however, always advise against this: the inevitable emergencies that make you think you need to skip the checks are exactly the times you want something else double-checking everything your most likely very stressed self has just done in an attempt to rectify the situation. Trust me, I speak from experience here. Shipping a fix that breaks more things than were previously broken is not fun.</p><figure><img alt="Our GitHub Branch Rules" src="https://cdn-images-1.medium.com/max/828/1*6IEKp0OsjrqDdNgjkjIr0Q.png" /><figcaption>Our GitHub Branch Rules</figcaption></figure><p>And with that, you’re done: you have a simple pipeline that ensures your Unit tests pass on every single PR. Just remember that Unit tests are not a replacement for QA; they only prove that individual code units/modules work the way you expect. There’s no guarantee how all that code will behave when you bring it all together, but you can reduce a lot of risk with Unit testing. You can further reduce that risk by adding playmode tests, which we’ll talk about a bit more in a future part of this series.</p><p>Check out the source code here:</p><p><a href="https://github.com/RunningMattress/UnityCI_CD_Pipeline/releases/tag/part-1">Release Part 1 of a series of articles on creating a Unity CI/CD Pipeline · RunningMattress/UnityCI_CD_Pipeline</a></p><p>Follow for the rest of this series and other articles. Next, we’ll be looking at automating the Continuous Delivery side of CI/CD and creating a pipeline to create regular automated releases on GitHub. 
Check out part 2 here:</p><p><a href="https://medium.com/@RunningMattress/setting-up-a-ci-cd-pipeline-for-unity-part-2-e5e6693d4546">Setting up a CI/CD pipeline for Unity Part 2</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=344e74c4f35a" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[7 Best Practices to Employ in Your Jenkins build system]]></title>
            <link>https://medium.com/@RunningMattress/jenkins-best-practices-e1869c1216ec?source=rss-702f4857791a------2</link>
            <guid isPermaLink="false">https://medium.com/p/e1869c1216ec</guid>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[jenkins]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[devops]]></category>
            <dc:creator><![CDATA[RunningMattress]]></dc:creator>
            <pubDate>Sun, 09 Apr 2023 14:08:03 GMT</pubDate>
            <atom:updated>2023-04-09T14:08:19.618Z</atom:updated>
            <cc:license>http://creativecommons.org/licenses/by/4.0/</cc:license>
            <content:encoded><![CDATA[<figure><img alt="Jenkins CasC Icon" src="https://cdn-images-1.medium.com/max/207/1*8YDIx2ORjEP3jnG9ZlnArQ.png" /></figure><h4>What is Jenkins?</h4><p>At a high level, Jenkins is a highly configurable, free, open-source build system with great community support through the many plugins developed over the years. It’s very scalable, and through custom Groovy scripts it’s incredibly extensible as well. Here I want to share my top tips from my experience using and maintaining multiple Jenkins instances across several projects.</p><h4>1. No Freestyle jobs</h4><p>As far as I’m concerned, Jenkins Freestyle jobs don’t and shouldn’t exist. They’re a hugely outdated way of writing your build pipeline, and custom or complex functionality relies on scripts being deployed manually to the Jenkins Controller node. For many reasons, this is a terrible way to do things:</p><ol><li>Bad iteration flows: you’re either editing scripts directly on the controller while it’s live, or committing them to source control and then pulling them onto the controller. Either way, it’s slow and risky.</li><li>There is no safe way to test changes (you’re testing in the live production environment), and no branching.</li><li>You have to account for every other project on the Jenkins instance; this can result in some very messy code comparing project names before applying project-specific settings or processes.</li></ol><h4>2. Use shared libraries</h4><p>The alternatives to Freestyle jobs are scripted and declarative pipelines, both of which can use Jenkins shared libraries. These are custom libraries that Jenkins checks out alongside your project code to provide additional capabilities to your build pipeline. This can simplify complex custom tasks that need sharing across multiple projects. 
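</p><p>As a brief illustrative sketch (the step name deployToStore and its parameters are hypothetical, not from any real project): a shared library exposes global steps via Groovy files in its vars/ directory, which any pipeline that loads the library can then call like a built-in step.</p>

```groovy
// vars/deployToStore.groovy in the shared library repository.
// Each file under vars/ becomes a global step named after the file,
// so pipelines that load the library can simply call deployToStore(...).
def call(Map args = [:]) {
    if (!args.apkPath) {
        error 'deployToStore: apkPath is required'
    }
    echo "Deploying ${args.apkPath} to the ${args.track ?: 'internal'} track"
    // The actual store upload logic would live here.
}
```

<p>A Jenkinsfile would then opt in with @Library('my-shared-library') _ at the top and call deployToStore(apkPath: 'build/app.apk'); the library name here is illustrative and is whatever you registered in your Jenkins global configuration.</p><p>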
This does naturally add risk, however: any change in the shared library code will impact every project using it, so make changes here very cautiously and, if possible, have a staging environment for testing them.</p><h4>3. Blue Ocean</h4><p>Available as a plugin for Jenkins, the Blue Ocean interface is a much cleaner UI for Jenkins. Whilst it doesn’t offer admins everything they need, it is perfect for the majority of end users. The visual representation of the build pipeline, with its dependent and parallel stages, makes it much easier to see where and why a pipeline failed. It’s trivial to swap between interfaces as needed, and links can be generated to direct users to the desired interface.</p><h4>4. Small Modular Jobs</h4><p>Instead of having large, all-encompassing jobs that contain every step and configuration your project uses, break them down into smaller pipelines that can be reused or pieced together to create a complete pipeline.</p><p>Some examples of this could be:</p><ul><li>Deploy to Play Store — a job to deploy an APK to the Play Store; this can be used by any Android project you have by setting the parameters to accept app IDs, APKs, credentials, etc.</li><li>Build My Android Project — a project-specific pipeline for your Android build; this might also employ some shared libraries to execute more generic steps</li><li>Build Project — this pipeline acts as a conductor or orchestrator of other pipelines; it exists for a few purposes: kicking off a matrix of builds across all platforms on a timer, providing an easy way for an end user to start builds across platforms, etc.</li></ul><p>With the above example pipelines, you can build a great deal of power into your build system and reduce the number of times you rewrite, say, a simple deploy stage.</p><h4>5. 
Deploy with Docker</h4><p>Jenkins can be deployed in many ways, but my favourite is Docker. It’s a much more robust way of doing so, and you gain all the benefits of Docker, such as automatic restarts. Another huge advantage of deploying with Docker is that you can create custom images based on Jenkins, adding in your own job configs and using Configuration as Code to build a custom Jenkins instance that’s fully ready to go; more on this in a future article.</p><h4>6. Avoid build work on the Jenkins Controller</h4><p>Simply put, the more time the controller spends doing work for your build pipelines, the less time it has to serve content (such as web pages and API responses) to your users and to manage your build agents. Aim to run every part of your build pipeline on Jenkins agents and you’ll have a much more responsive instance.</p><h4>7. Provision agents as part of the pipeline</h4><p>Building on the previous point, when using agents, don’t rely on software and tools being manually installed on the machine. Install the tools and software you need as part of your build process: best case, everything is already set up and the install is skipped; worst case, you spend some extra time provisioning resources. Either way, you’re able to rapidly expand your build farm thanks to the low manual setup overhead, and it opens up the possibility of connecting to cloud services like Amazon EC2 to auto-scale the farm as needed.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e1869c1216ec" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Level up your Unity Packages with CI/CD]]></title>
            <link>https://medium.com/@RunningMattress/level-up-your-unity-packages-with-ci-cd-9498d2791211?source=rss-702f4857791a------2</link>
            <guid isPermaLink="false">https://medium.com/p/9498d2791211</guid>
            <category><![CDATA[unity]]></category>
            <category><![CDATA[npm]]></category>
            <category><![CDATA[github-actions]]></category>
            <category><![CDATA[automation]]></category>
            <category><![CDATA[continuous-integration]]></category>
            <dc:creator><![CDATA[RunningMattress]]></dc:creator>
            <pubDate>Sun, 05 Mar 2023 18:11:27 GMT</pubDate>
            <atom:updated>2023-03-05T22:02:13.854Z</atom:updated>
            <content:encoded><![CDATA[<p>This is the second post in a series about creating Unity packages and distributing them via GitHub.</p><p>Check out part one below:</p><p><a href="https://medium.com/@RunningMattress/how-to-create-private-packages-for-your-unity-project-48414039ab5">How to create private packages for your Unity project</a></p><p>So you’re up and running sharing code across your projects, but you want to improve your CI/CD pipeline and provide better information for your packages’ users.</p><p>In part two, we’ll explore how to improve the CI/CD pipeline to help us write higher-quality code and automatically generate release notes.</p><h4>Add auto changelogs</h4><p>To start with, we’re gonna ensure that we enforce conventional commits on this repository; this is the crucial first step in generating your release notes.</p><p>There are a few ways we can do this:</p><ul><li><strong>Pre-commit checks</strong><br>These are great, and if you’re the only one working on your project, this is a nice, straightforward option. However, it can get complicated if you’re working with others and don’t have the tooling set up to install pre-commit hooks for everyone. They’re also not so great for less tech-focussed colleagues, as the error messages are often hard to read.</li><li><strong>GitHub action to validate all commits</strong><br>This is also quite restrictive and checks too late, in my opinion (no one wants to rewrite all their commit messages after they’ve made the PR), as well as suffering from some of the above issues.</li><li><strong>GitHub action to validate the PR title and enforced merge conventions</strong><br>This is the approach we’ll take: it offers the most freedom whilst still ensuring our main branch only contains conventional commits. 
However, if the other options work for you, then they’ll also work with the rest of the pipeline we’ll talk about today.</li></ul><p>Let’s add the following GitHub action to our project:</p><pre>name: Check PR title<br>on:<br>  pull_request:<br>    types:<br>      - opened<br>      - reopened<br>      - edited<br>      - synchronize<br><br>jobs:<br>  lint:<br>    runs-on: ubuntu-latest<br>    steps:<br>      - uses: aslafy-z/conventional-pr-title-action@v3<br>        id: pr_title_check<br>        env:<br>          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}<br><br>      - name: PR Comment<br>        if: ${{ failure() }}<br>        uses: thollander/actions-comment-pull-request@v1<br>        with:<br>          message: |<br>            Add a prefix like &quot;fix: &quot;, &quot;feat: &quot; or &quot;feat!: &quot; to indicate what kind of release this pull request corresponds to. The title should match the commit message format as specified by https://www.conventionalcommits.org/.</pre><p>This checks that the PR title is formatted according to the <a href="https://www.conventionalcommits.org/en/v1.0.0/">conventional commits</a> spec.</p><p>Next, we’ll change the way PRs are merged into the project to ensure the title is taken as the merge commit message.</p><p>In our repository’s settings page, we’ll scroll down to the Pull Requests section of the General tab and set it up like so.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/788/1*R6a6wkt0VLHc5MYBGnTulA.png" /><figcaption>Our pull request merge settings.</figcaption></figure><p>This means that when a PR is merged, it’ll squash all commits on that branch into one and take the PR title as the commit message.</p><p>So that’s all the setup taken care of; let’s look at how to turn that into a changelog.</p><p>We’ll update our package publishing workflow for this. 
Add the following snippet just after the version bump and before we set up Node.</p><pre>    - name: &#39;Get Previous tag&#39;<br>      id: previoustag<br>      uses: &quot;WyriHaximus/github-action-get-previous-tag@v1&quot;     <br>    <br>    - name: Update CHANGELOG<br>      id: changelog<br>      uses: requarks/changelog-action@v1<br>      with:<br>        token: ${{ secrets.GITHUB_TOKEN }}<br>        tag: ${{ steps.previoustag.outputs.tag }}<br><br>    - name: Commit CHANGELOG.md<br>      uses: stefanzweifel/git-auto-commit-action@v4<br>      with:<br>        branch: main<br>        commit_message: &#39;docs: update CHANGELOG.md for ${{ steps.previoustag.outputs.tag }} [skip ci]&#39;<br>        file_pattern: CHANGELOG.md</pre><p>This will grab the newly created tag, create or update the CHANGELOG.md file, and push it to our main branch as well.</p><p>And there we are: automatically generated changelogs.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*jzItL93_niKnwGv3tYAg8Q.png" /><figcaption>Unity will link us to our changelog right from the package manager</figcaption></figure><h4>Create a GitHub release</h4><p>Whilst we’re improving the publish GitHub action, let’s go a step further and create a GitHub release for this as well. 
We’ll use the changelog we made previously to add some content to the release.</p><p>Add the below snippet at the end of the publish workflow:</p><pre>    - name: Create Release<br>      uses: ncipollo/release-action@v1<br>      with:<br>        allowUpdates: true<br>        draft: false<br>        name: ${{ steps.previoustag.outputs.tag }}<br>        body: ${{ steps.changelog.outputs.changes }}<br>        token: ${{ secrets.GITHUB_TOKEN }}<br>        tag: ${{ steps.previoustag.outputs.tag }}</pre><p>Now when we merge a PR, not only will a new package be published, but a GitHub release will be created so we can share the news and show what’s been changed.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/311/1*F06uILQwMTttBBiLtAay3g.png" /><figcaption>There’s our release!</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/798/1*ApoxVykEqTPAz2C_CG6Hgg.png" /><figcaption>Here’s our changelist from earlier, automatically embedded into our release</figcaption></figure><h4>Add some automated review help</h4><p>Finally, to wrap up part two, let’s help ourselves out by adding an automated reviewer. We all make mistakes now and then, and a little help goes a long way. 
We’re gonna use <a href="https://megalinter.io/latest/">MegaLinter</a> for this, as it’s already pulled together all the linters we want to use.</p><p>Open up a command prompt or terminal in your repository folder and run:</p><pre>npx mega-linter-runner --install</pre><p>Follow the instructions to set it up according to your needs.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/955/1*H9DZvArAF_sEb5zHZGdI2g.png" /><figcaption>Our megalinter config</figcaption></figure><p>You’ll then want to make a quick change to the mega-linter.yml file to prevent duplicate runs on pull requests: change the on: block to the below code instead.</p><pre>on: <br>  pull_request:<br>    branches: [main]</pre><p>The config file sometimes also includes the wrong linters; set yours up to look like this instead, or follow MegaLinter’s guides to configure it how you’d like.</p><pre># Configuration file for MegaLinter<br># See all available variables at https://megalinter.io/configuration/ and in linters documentation<br><br>APPLY_FIXES: all # all, none, or list of linter keys<br># ENABLE: # If you use ENABLE variable, all other languages/formats/tooling-formats will be disabled by default<br># ENABLE_LINTERS: # If you use ENABLE_LINTERS variable, all other linters will be disabled by default<br># DISABLE:<br>  # - COPYPASTE # Uncomment to disable checks of excessive copy-pastes<br>  # - SPELL # Uncomment to disable checks of spelling mistakes<br>SHOW_ELAPSED_TIME: true<br>FILEIO_REPORTER: true<br># DISABLE_ERRORS: true # Uncomment if you want MegaLinter to detect errors but not block CI to pass<br><br>ENABLE_LINTERS:<br>    # Looks for excessive uses of copying and pasting your code around your project.<br>    - COPYPASTE_JSCPD <br>    # Formats your CSharp code according to CSharpier standards<br>    - CSHARP_CSHARPIER<br>    # Format CSS code (handy for uss files as well)<br>    - CSS_STYLELINT <br>    # Formatting for JSON files<br>    - JSON_PRETTIER<br>    # Spell 
checker<br>    - CSPELL<br>    # Watches out for any missed merge conflict markers<br>    - REPOSITORY_GIT_DIFF</pre><p>Now you can commit your files to a branch, create a pull request, and sit back and relax while the robots check your code for you.</p><p>This was part two of a series about making Unity packages. Follow me for part three, coming soon.</p><p>All the code above is available in a public template repository: <a href="https://github.com/RunningMattress/upm-test-package">https://github.com/RunningMattress/upm-test-package</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9498d2791211" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>