
Craft a Complete Angular GitLab Pipeline

This article is part of a series of two. In this first part, you'll learn GitLab pipeline basics and craft an Angular pipeline including build, tests, coverage and lint in a Docker environment. The second article focuses on deployment: publishing a Docker image and deploying to GitLab Pages.

Don't worry if you don't know GitLab; this series is a step-by-step guide. Along with comments, you'll find many links to the well-written documentation. If you use other CI/CD tools, it can still be interesting because the concepts and commands are similar.

Getting started

Let's begin with a basic Angular app generated with the Angular CLI.

          


$ npm install --global @angular/cli
$ ng new my-app

You don't have much code to update to make it work with the pipeline. The essential part is to define each step of the pipeline in the .gitlab-ci.yml file.

Take a look at how the pipeline looks in action:

[Image: the complete pipeline in action]

This is the GitLab pipeline you'll get by the end of the series. For now, we'll focus on the essential: the first three steps. First, you need to understand basic concepts related to pipelines. Again, they're very similar to what other CI/CD tools such as Jenkins, CircleCI and TeamCity offer.

GitLab CI basics

Having a CI tool helps to build and test your code in a neutral environment to ensure it works on any computer and server. Sometimes it only works on your machine because of some special setup you have.

The idea is to run tasks in the CI environment. As many projects need to run tasks, many machines are available to run them. In the GitLab world, we don't talk about tasks running on machines but about jobs executed by runners.

By default, jobs run one after the other. However, it's possible to run several jobs in parallel by arranging them in stages.

[Image: pipeline with jobs in parallel]
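
For example, here is a minimal, hypothetical snippet (stage and job names are made up) where two jobs share a stage and therefore run in parallel, while a third job waits for that stage to finish:

stages:
  - stage_a
  - stage_b

# job_1 and job_2 belong to the same stage, so they can run in parallel
job_1:
  stage: stage_a
  script:
    - echo "first job"

job_2:
  stage: stage_a
  script:
    - echo "second job"

# job_3 only starts once every job of stage_a has finished
job_3:
  stage: stage_b
  script:
    - echo "third job"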

As explained before, jobs run on runners. However, it's not the runner's responsibility to run the script on its own. Runners delegate the work to executors, which come in different flavors such as Docker, shell and SSH.

Let's practice with a dummy job running the node --version command. To make sure Node.js is available, we need the Docker executor. The executor can run the command inside a Docker container, built from the node:12-alpine image.

            


job_1:
  stage: stage_a
  image: node:12-alpine
  tags:
    - docker
  script:
    - node --version
Job running a node command

The image keyword defines the Docker image to use. Still, you need to make sure the runner picking up your job implements the Docker executor. It's possible to select runners for a job using the tags keyword.

[Image: output of the previous job running a node command in a Docker container]

A pipeline is the accumulation of all jobs you defined. Make sure to remember what jobs and runners are, so it'll be easier for you to understand the remainder of the article. You can read the Getting started with GitLab CI/CD documentation if you need more details.

Install dependencies job

Before building and testing your app, it's necessary to install all dependencies. You can do it by running npm install. Making it a separate job is important because its result is required by the following jobs. You don't want to install dependencies again and lose time later.

Each job runs on GitLab runners. If you have many runners, you'll need to share the job result with other runners (the node_modules directory in our situation). You can do it with either a cache or an artifact, but in our situation, the cache is a better option.

            


stages:
  - install

install_dependencies:
  stage: install
  image: node:12-alpine
  tags:
    - docker
  script:
    - yarn install
    - yarn ngcc --properties es2015 --create-ivy-entry-points
  cache:
    key:
      files:
        - yarn.lock
    paths:
      - node_modules
  only:
    refs:
      - merge_requests
      - master
    changes:
      - yarn.lock
Install dependencies job

There is an extra step after the dependencies install. If your project is an Ivy app, you need to run the compatibility compilation for libraries using Angular. This step is usually done when running ng build and ng test. In the pipeline, it's done beforehand to get the final working node_modules at once. Note that running ngcc isn't mandatory if it's already set up in a postinstall script.

Sharing the resulting node_modules with the following jobs works with the cache keyword. Two small optimizations to note here:

  • the cache is invalidated only when the yarn.lock file changes
  • other jobs will use the pull policy to avoid uploading the cache
            


cache:
  key:
    files:
      - yarn.lock
  paths:
    - node_modules
  policy: pull
Default configuration for pipeline jobs

Other jobs will pull node_modules from the cache. It'll be available for any other pipeline and job. Make sure all runners can access the cache location. If you have a company license, your project may have both corporate runners and GitLab shared runners. Those two kinds of runners must access the cache and your company registry if applicable.

Using the only:changes keyword on this job makes sure it doesn't run if yarn.lock has no changes. It must be combined with only:refs to make it work properly in merge requests.

The cache key is also based on this file, meaning that when this job runs there is no matching cache to pull, so it starts from a clean node_modules. In this situation, if the install is too slow you can take advantage of the fallback cache key. With the fallback cache containing most of the dependencies, downloading only the few new ones would be quick. There is no demonstration of this optimization in the article's pipeline. To be honest, I'm not sure it's worth it.
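
For reference only, here is a minimal sketch of how such a fallback could be configured; it isn't part of the article's pipeline and the key name is made up. GitLab runners read the CACHE_FALLBACK_KEY variable and use it when the computed cache key has no match:

variables:
  # hypothetical fallback key, used only when the yarn.lock-based key has no cache yet
  CACHE_FALLBACK_KEY: node-modules-fallback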

Make sure this job is included in the pipeline the first time it runs. The following build and test jobs need the cache to be set. Two suggestions to make it happen:

  • Commit a first time without only:changes on this job
  • Commit the yarn.lock change and the job together. It's the case if you follow this article to the end (you'll add dependencies for the tests report later).

If you prefer to use npm for this job, the documentation provides a short example using the npm ci command. Don't forget to replace yarn.lock with package-lock.json in the above samples.
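
As a minimal sketch, the npm variant could look like the following; it's adapted from the yarn job above rather than taken from the article, so treat it as an assumption:

install_dependencies:
  stage: install
  image: node:12-alpine
  tags:
    - docker
  script:
    - npm ci
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - node_modules
  only:
    refs:
      - merge_requests
      - master
    changes:
      - package-lock.json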

Build application job

Besides code validation, the pipeline should build the project and produce a prod-ready artifact. You can build an Angular app with the ng build --prod command.

The project configuration changes to output the built app to artifacts/app. This isn't mandatory but having a dedicated folder can help to gather job artifacts if many of them produce some.

            


{ "projects" : { "angular-app-case" : { "builder" : { "build" : { "builder" : "@angular-devkit/build-athwart:browser" , "options" : { "outputPath" : "artifacts/app" } } } } } }
angular.json

GitLab provides many environment variables about the project and the context of the pipeline. They're always available and can be used along with the variables keyword. For example, you can define the path to the artifact.

          


variables:
  PROJECT_PATH: "$CI_PROJECT_DIR"
  APP_OUTPUT_PATH: "$CI_PROJECT_DIR/artifacts/app"

build_app:
  stage: build_and_test
  image: node:12-alpine
  tags:
    - docker
  script:
    - yarn ng build --prod
  after_script:
    - cp $PROJECT_PATH/Dockerfile $APP_OUTPUT_PATH
  artifacts:
    name: "angular-app-pipeline"
    paths:
      - $APP_OUTPUT_PATH
  cache:
    key:
      files:
        - yarn.lock
    paths:
      - node_modules
    policy: pull

Note the classic ng build command is prefixed with yarn. It ensures you use the Angular CLI from the current project and not a globally installed version. In my experience, having commands as scripts in package.json is a good way to keep commands short and rely on the project's CLI.

          


{ "scripts" : { "ng" : "ng" , "build" : "ng build --prod" } }
          


$ yarn build

Extra files must be included in the artifact. Indeed, in the next article, you'll build the project's Docker image, which needs a Dockerfile. The after_script keyword defines the commands to run after the job script.

A job can only produce a single artifact, but support for multiple artifacts may come in the future. Nonetheless, an artifact can include several directories if necessary. Artifacts produced during a pipeline are available for other jobs. You can download them from many places in the UI as a zip file.

[Image: pipeline artifacts]

Note you may need the artifacts:expire_in keyword to set an expiration date for your artifact. If your artifacts are big, you don't want to fill the runners' disk. The default expiration is 30 days, meaning all pipeline artifacts are available for a month.
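
A minimal sketch of what this could look like on the build job (the one-week value is only an illustration, not the article's choice):

build_app:
  artifacts:
    name: "angular-app-pipeline"
    paths:
      - $APP_OUTPUT_PATH   # several directories can be listed here if needed
    expire_in: 1 week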

Test application job

During the test step, both unit tests and lint run. Having the lint in the same job is debatable; this is discussed at the end of this section.

Besides the job result, we want unit tests and code coverage reports. The good news is these reports appear in GitLab merge requests. But first, here is a very basic job that we'll iterate on.

            


variables:
  OUTPUT_PATH: "$CI_PROJECT_DIR/artifacts"

test_app:
  stage: build_and_test
  image: node:12-alpine
  tags:
    - docker
  before_script:
    - apk add chromium
    - export CHROME_BIN=/usr/bin/chromium-browser
  script:
    - yarn ng lint
    - yarn ng test --watch=false
Incomplete test job

Note the disabled watch (enabled by default). You don't want the runner to be stuck forever waiting for file changes. Karma runs Angular unit tests on the Chrome browser. It means the Docker container needs to have Chromium installed. This is the job of before_script, but note it's called each time the job runs. You'll learn in the next article how to optimize this installation, which takes up to 30 seconds.

With the default Karma configuration, unit tests won't work on GitLab for two reasons:

  • Runners don't have any monitor, so tests must run on headless Chrome
  • Sandbox mode must be disabled when running Chrome in a Docker container

By default Angular tests run in the Chrome browser, but you can change this with the browsers option.

Try to run your unit tests with Headless Chrome:
ng test --browsers=ChromeHeadless

Let's create a custom Karma launcher to have Chrome in headless mode but with sandbox mode disabled.

            


module.exports = function (config) {
  config.set({
    customLaunchers: {
      GitlabHeadlessChrome: {
        base: 'ChromeHeadless',
        flags: ['--no-sandbox'],
      },
    },
  });
};
karma.conf.js

Once this new custom launcher is defined, you can use it through the browsers option: ng test --browsers=GitlabHeadlessChrome.

Unit tests report

When running tests, you get test results in the console. It's possible to generate a complete report that CI tools understand. This report is important to check that no tests fail after a merge, but also in merge requests.

[Image: test_app job result]

The default reporters enabled in Karma aren't compatible with GitLab. But the classic JUnit report works. Let's add this new reporter to the project.

          


$ npm install --save-dev karma-junit-reporter
            


module.exports = function (config) {
  config.set({
    plugins: [require('karma-junit-reporter')],
    junitReporter: {
      outputDir: 'artifacts/tests',
      outputFile: 'junit-test-results.xml',
      useBrowserName: false,
    },
    reporters: ['progress', 'kjhtml', 'junit'],
  });
};
karma.conf.js

Running the tests now generates a JUnit report placed in artifacts/tests/junit-test-results.xml. The last step is to let the job know about this location so GitLab can find and analyze the report.

          


variables:
  OUTPUT_PATH: "$CI_PROJECT_DIR/artifacts"

test_app:
  artifacts:
    name: "tests-and-coverage"
    reports:
      junit:
        - $OUTPUT_PATH/tests/junit-test-results.xml

Code coverage report

Did you know a coverage report can be generated while running tests? Use the --code-coverage option while running unit tests; it works out of the box. Angular relies on Istanbul, which is able to provide several types of reports.

[Image: Istanbul HTML report]

The truth is you won't get a detailed report integrated in GitLab. Yet, it's possible to have the project and merge request coverage.

Only cobertura is compatible with both GitLab and Istanbul. Let's change the Karma configuration to generate the reports.

            


const path = require('path');

module.exports = function (config) {
  config.set({
    coverageIstanbulReporter: {
      dir: path.join(__dirname, './artifacts/coverage'),
      reports: ['html', 'lcovonly', 'text-summary', 'cobertura'],
      fixWebpackSourcePaths: true,
      'report-config': {
        'text-summary': { file: 'text-summary.txt' },
      },
    },
  });
};
karma.conf.js

Istanbul should already be set up in Karma. Make sure to enable both the cobertura and text-summary reporters. The first one is for coverage in merge requests while the second exposes metrics for the whole project.

If you run tests with coverage enabled, you should get the reports in the artifacts/coverage directory as defined in the Karma configuration. Besides the cobertura and text-summary reports, you'll also find the HTML report from the image earlier.

          


variables:
  OUTPUT_PATH: "$CI_PROJECT_DIR/artifacts"

test_app:
  coverage: '/Statements\s+:\s\d+.\d+%/'
  artifacts:
    name: "tests-and-coverage"
    reports:
      cobertura:
        - $OUTPUT_PATH/coverage/cobertura-coverage.xml

It works the same as for the JUnit report earlier: the coverage report for merge requests is placed in an artifact. In merge requests you get red and green borders beside the new code to indicate coverage status.

[Image: merge request line coverage]

When running the tests with the text-summary reporter, the project coverage metrics appear in the console. GitLab looks at the console output and applies the coverage keyword regex to match the coverage output.

[Image: project coverage in console and merge request]

In case the project metrics don't appear, they are saved in artifacts/coverage/text-summary.txt. You can display them manually by running the cat command in the job script, as sketched below.
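
A minimal sketch of what that could look like in the job script, assuming the output paths from the Karma configuration above:

test_app:
  script:
    - yarn ng lint
    - yarn ng test --code-coverage --watch=false --browsers=GitlabHeadlessChrome
    # print the summary so the coverage regex has something to match in the console
    - cat artifacts/coverage/text-summary.txt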

If you need more than one coverage metric, you can use the coverage-average package which computes an average. For detailed data, you can provide a metrics report exposed in merge requests (a premium feature).

Merge requests aren't the only place where the project coverage appears. For those into project badges, there is a dedicated badge for coverage.

Last words on test job

Did you notice the test job also runs lint? This isn't a separate job for two main reasons:

  • A second runner would need to run this new job. Depending on the number and availability of runners, this can matter.
  • With test and lint jobs in parallel, the pipeline will wait for both jobs to complete even if one failed. It means the longer test job will run for nothing.

This example uses a single job with lint first, then unit tests. Yet, this solution has some drawbacks: the test job is ~8 sec longer and it can fail because of the lint without running the tests.

            


variables:
  OUTPUT_PATH: "$CI_PROJECT_DIR/artifacts"

test_app:
  stage: build_and_test
  image: node:12-alpine
  tags:
    - docker
  before_script:
    - apk add chromium
    - export CHROME_BIN=/usr/bin/chromium-browser
  script:
    - yarn ng lint
    - yarn ng test --code-coverage --watch=false --browsers=GitlabHeadlessChrome
  coverage: '/Statements\s+:\s\d+.\d+%/'
  artifacts:
    name: "tests-and-coverage"
    reports:
      junit:
        - $OUTPUT_PATH/tests/junit-test-results.xml
      cobertura:
        - $OUTPUT_PATH/coverage/cobertura-coverage.xml
  cache:
    key:
      files:
        - yarn.lock
    paths:
      - node_modules
    policy: pull
Complete test job

In the final pipeline, test and build jobs run in parallel. The reason is they only need node_modules to run and don't depend on each other. Also, these two steps take about the same time to complete.

If you don't want to run build and tests in the same stage, make sure the jobs don't download each other's artifacts. Use the dependencies keyword and set its value to an empty array, as in the sketch below.
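
A minimal sketch, assuming test_app runs in a stage that comes after build_app:

test_app:
  stage: test
  dependencies: []   # don't download artifacts from jobs in previous stages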

Wrapping up

You now have solid knowledge about GitLab pipelines. Jobs and runners are something you know how to work with. At the end of this first article, our Angular app pipeline includes install dependencies, build and test jobs.

[Image: Angular app pipeline (part 1)]

Tests and coverage reports appear in merge requests and you can find the artifacts generated by your jobs. The jobs for an Angular library are the same except you may need to specify the library name when using the ng command.

For a complete and live example, check my angular-app-pipeline sample project on GitLab.

In case you need to validate the format of your pipeline file, check out CI Lint. Continue your reading with the second article to implement the deployment jobs for Angular apps and libraries. Looking forward to your comments.

Thanks for reading!


Source: https://indepth.dev/posts/1374/craft-a-complete-angular-gitlab-pipeline