
End-To-End Testing in Azure Pipelines using Nightwatch.js

In this article, we look at running End-To-End tests in an Azure Pipeline using the Nightwatch.js framework with TypeScript.



Introduction

Last year, Microsoft gave its Team Foundation Service (later renamed Visual Studio Online, then Visual Studio Team Services) a new branding: Azure DevOps was born. One of the great marketing moves here was the introduction of several subproducts (formerly known as features of the one product), such as Azure Boards (managing issues) and Azure Repos (source code repositories). Another one was Azure Pipelines, which consists of build jobs and release definitions. This is the CI/CD solution Microsoft offers as a service.

A really cool pitch for using Azure Pipelines is the free tier. In short, it allows us to use up to 10 parallel build jobs for our own projects. I guess I don't have to tell you that this is quite a good offer. Indeed, there are many CI/CD providers on the market offering a free service for open-source projects, but none of them gives us that kind of firepower for non-open-source projects.

Azure Pipelines description

We can actually make use of this free computing power to establish all kinds of useful things for us, e.g., scheduling a cleanup job to run at certain intervals.

Another possibility is to use Azure Pipelines for automated end-to-end testing. We will see that the default agents, i.e., the configurations that run the build jobs, already feature browsers such as Firefox and Chrome (Linux). On other hosted agents, we even find Safari or Edge available. In any case, the browsers along with the necessary drivers are already available by default.

In this article, we will explore a setup that has proven efficient and easy to work with for running automated end-to-end tests on Azure Pipelines. Our tool of choice will be the Node.js based framework Nightwatch.js, which we will use with TypeScript.

Background

Running End-To-End (E2E) tests provides an important element for guaranteeing the robustness of an application. Of course, an E2E test is never a replacement for a unit test; however, it already ensures that at least the user-reachable flows work for our defined standard personas.

While browser automation tooling such as Selenium has existed for quite some time, more investment has recently gone into creating so-called "headless" browsers. Mostly, these are not standalone browsers, but rather special modes of running a standard browser, e.g., running Chrome in headless mode. The headless mode provides us with a more lightweight instance (in terms of operational resources as well as dependencies required for actually running the software). Furthermore, this whole direction is backed by the introduction of a standardized browser automation API.

The so-called WebDriver API (yes, it's an official W3C standard) will eventually be supported by all major browser vendors. Right now, the support in Firefox and Chrome is alright, although Chrome still relies for the most part on the predecessor, the "JSON Wire Protocol". In the long run, this should make Selenium obsolete, requiring even fewer resources to run fully automated and unattended UI E2E tests.

While we could work directly against the WebDriver API (or some other API, e.g., Selenium), we certainly wish for a little more comfort when writing actual tests. In my opinion, using Node.js / JavaScript makes sense for web tests. One of the rising stars in this segment is Nightwatch.js.

Using Nightwatch.js with TypeScript

Nightwatch.js is pretty straightforward. If we want some type completion, we should use TypeScript and import the respective interfaces, e.g., NightwatchBrowser.
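
For example, a minimal typed test function could look like this (a quick sketch; the URL is just a placeholder):

TypeScript
import { NightwatchBrowser } from 'nightwatch';

// "browser" now gets full code completion for the Nightwatch API
function smokeTest(browser: NightwatchBrowser) {
  return browser.url('https://example.com').end();
}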

The framework itself consists of multiple parts. At its heart, we have tests. A test can use all the other parts, which makes maintenance and reuse quite pleasant. First, we find "commands", which extend the API surface of the browser object; this object is used to automate what the browser does and to perform assertions and expectations.

Next, we find "assertions". An assertion provides the basis for comparing an expected state with an actual one. It can also use internal commands and has access to the full browser surface. Finally, we have "page objects", which make constants in pages, such as recurring (CSS) selectors, URLs, or other data, reusable.

Before we dive into how to add a test, let's look at adding commands and assertions first.

Add Custom Commands

In our source folder (e.g., named src), we should create another subfolder called commands. This folder will be used for every command we write. The file name is quite important: Nightwatch.js uses the filename to label the command. Thus, a file like foo.ts (later transpiled to foo.js) would be available as browser.foo().

A command is always a single file (a Node.js module) that exports a command function. As mentioned, this function is later exposed under its filename when we access it via the browser API from Nightwatch.

The following example command creates a compareScreenshot command. It uses the existing saveScreenshot command and a custom assertion that is part of the attached code, but not the article.

TypeScript
import { NightwatchBrowser } from 'nightwatch';

export function command(this: NightwatchBrowser, filename: string, 
                        tolerance = 0, callback?: Function) {
  const screenshotPath = 'screenshots/';
  const resultPath = `${screenshotPath}results/${filename}`;

  // take a fresh screenshot, then compare it against the stored baseline
  return this.saveScreenshot(resultPath, () => {
    this.assert.screenshotEquals(filename, tolerance, result => {
      if (typeof callback === 'function') {
        callback.call(this, result);
      }
    });
  });
}

Importantly, we return the result of calling saveScreenshot, which itself eventually returns this, i.e., the NightwatchBrowser instance. Returning it is important to preserve the concept of chaining. Later, when we create a sample test, we will see how nice such a fluent test definition can look.
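
For instance, since the instance is returned, compareScreenshot composes with any other command in one fluent chain (a sketch; the URL and filename are placeholders):

TypeScript
browser
  .url('https://example.com')
  .compareScreenshot('home.png')
  .end();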

One important thing we have forgotten is the TypeScript definition. Since commands are magically added to the API provided by Nightwatch, we will not get any code completion for them. However, by writing a d.ts file, we are able to use TypeScript's interface merging capability.

TypeScript
import * as NW from 'nightwatch';

// merge interfaces with nightwatch types
declare module 'nightwatch' {
  // ...
  export interface NightwatchCustomCommands {
    compareScreenshot(
      this: NW.NightwatchBrowser,
      filename: string,
      tolerance?: number,
      callback?: Function,
    ): NW.NightwatchBrowser;
  }
}

This teaches the type system about our own commands and gives us full IDE / type checking support when working with Nightwatch.js, despite using custom commands.

Add Custom Assertions

Custom commands are nice; after all, we need to give our E2E tests some concise instructions. Nevertheless, all the commands are useless if we cannot assert the resulting behavior.

Nightwatch.js comes with three different possibilities: verifications (verify), expectations (expect), and assertions (assert). There are subtle differences between them (e.g., verify continues with the test on failure, while assert aborts it); however, only the last category can be extended.
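
To get a quick impression of the three flavors (the selector and text here are placeholders):

TypeScript
// assert: aborts the test run on failure
browser.assert.containsText('#status', 'ready');

// verify: logs the failure, but continues with the test
browser.verify.containsText('#status', 'ready');

// expect: BDD-style chainable interface
browser.expect.element('#status').text.to.contain('ready');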

Custom assertions are created much like custom commands. We need to put them in a dedicated folder and write one module (i.e., file) per custom assertion. The name of the file determines the name of the custom assertion, while the module needs to export a single function called assertion.

Let's code a very simple assertion to see if a URL (e.g., after following a link element) matches a regular expression:

TypeScript
import { format } from 'util';

export function assertion(this: any, regex: RegExp, msg?: string) {
  this.message = msg || format('Testing if the URL matches the regex "%s".', regex);
  this.expected = regex;

  // the assertion passes if the regex matches the retrieved value
  this.pass = function(value) {
    return this.expected.test(value);
  };

  // extracts the value to test from the command's result
  this.value = result => result.value;

  // the command to invoke for obtaining the current URL
  this.command = function(callback) {
    return this.api.url(callback);
  };

  return this;
}

The custom assertion function needs three parts: a pass function (when does the assertion pass?), a value function to extract the value to test from the result of the invoked command, and finally a command to invoke in order to get the website into a state where we can perform the assertion.

As with commands, we need to extend the basic Nightwatch.js typings. Otherwise, the assert property will only reveal the built-in assertions.

We again store the extension in a d.ts file (potentially the same one) as with the custom command.

TypeScript
import * as NW from 'nightwatch';

// merge interfaces with nightwatch types
declare module 'nightwatch' {
  export interface NightwatchCustomAssertions {
    urlMatch(this: NW.NightwatchBrowser, regex: RegExp, msg?: string): 
             NW.NightwatchBrowser;
  }
  
  // ...
}

It is quite important not to mix commands and assertions. The output from assertions is necessary not only to derive the fail or success decisions for the runner, but is also used by the reporter that writes out the report files (by default, the JUnit XML format is used).
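
Putting it all together, a test could use the new assertion like this (a minimal sketch; the URL, selector, and regex are placeholders):

TypeScript
import { NightwatchBrowser } from 'nightwatch';

module.exports = {
  'Following the docs link leads to the docs section'(browser: NightwatchBrowser) {
    return browser
      .url('https://example.com')   // hypothetical start page
      .click('a.docs-link')         // hypothetical link selector
      .assert.urlMatch(/docs/)      // our custom assertion
      .end();
  },
};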

Configuration for Running Nightwatch.js in an Azure Pipeline

Now that we understand a bit what Nightwatch.js is about, it's time to actually run it in an Azure Pipeline! Let's start with Nightwatch's configuration.

Available package.json Scripts

Out of the box, Nightwatch can already run. The following dependencies are all we need for running it (mkdirp is only interesting if we think about creating new directories, e.g., for storing screenshots; node-resemble-js is necessary to make screenshot comparisons / diffs possible).

JSON
{
  // ...
  "dependencies": {
    "chromedriver": "^2.46.0",
    "mkdirp": "^0.5.1",
    "nightwatch": "^1.0.19",
    "node-resemble-js": "^0.2.0"
  },
}

Long story short: If Chrome is installed on that system, we can run it!

Let's define a couple more scripts for convenience in our package.json file. Running the tests actually requires transpiling the sources first (since we want to use TypeScript) and then invoking the Nightwatch CLI. Other than that, we may want to run against different environments (by invoking the Nightwatch CLI with the --environment or -e flag), hence it makes sense to add some more scripts for all known environments.

The following section shows an example configuration.

JSON
{
  // ...
  "scripts": {
    "start": "npm run build && npm run test",
    "test:ci": "npm run build && nightwatch -e default",
    "test:local": "npm run build && nightwatch -e local",
    "build": "tsc --project tsconfig.json",
    "test": "nightwatch"
  },
  // ...
}

Now that we configured the application properly, we also need to configure Nightwatch itself.

Basic Nightwatch Configuration

All these scripts are fine so far, but Nightwatch does not (yet) know where to get, e.g., the tests, the commands, and the assertions from. Furthermore, we have not specified which browser we want to communicate with and what this communication looks like.

The following nightwatch.json contains the most important parts. Note that we always point to the dist folders rather than the src folders, as Nightwatch only understands JavaScript, not TypeScript.

JSON
{
  "src_folders" : ["dist/tests"],
  "output_folder" : "./reports",

  "custom_assertions_path": "./dist/asserts",
  "custom_commands_path": "./dist/commands",
  "globals_path" : "./dist/globals.js",

  "webdriver" : {
    "start_process": true,
    "server_path": "./node_modules/chromedriver/lib/chromedriver/chromedriver",
    "port": 9515
  },

  "test_settings" : {
    "default" : {
      "desiredCapabilities": {
        "browserName": "chrome",
        "javascriptEnabled": true,
        "acceptSslCerts": true,
        "chromeOptions": {
          "prefs": {
            "intl.accept_languages": "en-US,en"
          },
          "args": [
            "--headless"
          ]
        }
      },
      "skip_testcases_on_fail": false,
      "globals": {
        // global variables here
      }
    }
  }
}

While we could use multiple browsers, we only use Chrome for this boilerplate. We set it up such that it always uses the English language (if you want to test localization, you could either override it in an individual test, or set it per environment) and runs in headless mode. Without the headless mode, we would not be operational within the hosted agent.

Importantly, we also switched off the skipping of test cases (skip_testcases_on_fail). Usually, if one test case of a test module fails, all the remaining test cases in that module are skipped as well. Especially in test modules whose cases are rather disconnected, no such immediate shutdown should be performed.

Adding a Simple Test

Writing a test is as simple and straightforward as creating a module and adding different exported functions.

The following piece of code creates two tests to verify the successful / unsuccessful login to some example homepage. We make use of a custom command called login.

TypeScript
import { NightwatchBrowser } from 'nightwatch';

module.exports = {
  'The login works with the correct credentials'(browser: NightwatchBrowser) {
    return browser
      .login()
      .assert.containsText('#case_login > .success', 'WELCOME')
      .end();
  },

  'The login fails with the incorrect credentials'(browser: NightwatchBrowser) {
    return browser
      .login({ pass: 'foo' })
      .assert.containsText('#case_login > .error', 'DENIED')
      .end();
  },
};
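
The login command itself is part of the attached code, not the article. Purely to illustrate its shape, a hypothetical version could look like this (the URL, selectors, and credentials are all placeholders):

TypeScript
import { NightwatchBrowser } from 'nightwatch';

// hypothetical defaults; the real values live in the attached sample
const defaults = { user: 'standard-persona', pass: 'secret' };

export function command(this: NightwatchBrowser,
                        credentials?: Partial<typeof defaults>) {
  const { user, pass } = { ...defaults, ...credentials };

  // fill in the login form and wait for the page to respond
  return this.url('https://example.com/login')
    .setValue('#user', user)
    .setValue('#pass', pass)
    .click('#submit')
    .waitForElementVisible('#case_login', 1000);
}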

Running the E2E tests with Nightwatch.js locally (e.g., via npm run test:local) looks as follows:

Running E2E Tests with Nightwatch.js locally

Furthermore, we may even write a test that checks a given design.

TypeScript
import { NightwatchBrowser } from 'nightwatch';

module.exports = {
  'Design of homepage'(browser: NightwatchBrowser) {
    return browser
      .login()
      .compareScreenshot('design-of-homepage.png')
      .end();
  },

  beforeEach(browser: NightwatchBrowser) {
    return browser.windowSize('current', 1000, 1000);
  },
};

The beforeEach is a special function that is called before each test begins, but after the browser has been set up for the test. Therefore, it is a good place to configure the browser for all tests in a module.

In case of a screenshot comparison, it is quite important to fix the visual boundary conditions; in this case, to see everything we want to see, and also to obtain reproducible results.

Comparing designs then works as follows (see the sketch after this list):

  • If no file with the given name exists yet in the screenshots/baseline directory, it is created
  • A new screenshot is recorded in the screenshots/results directory
  • The difference (if found) between the recorded and the baseline screenshot is captured in the screenshots/diffs directory
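
The attached code implements this logic inside the screenshotEquals assertion. Purely as an illustration, the comparison core could look roughly like the following, assuming node-resemble-js's fluent API (the function name and paths are ours):

TypeScript
import * as fs from 'fs';

// node-resemble-js ships without typings, so we access it untyped
const resemble = require('node-resemble-js');

// compares a fresh screenshot against the stored baseline;
// "tolerance" is the maximum mismatch percentage we forgive
function compareToBaseline(filename: string, tolerance: number,
                           done: (passed: boolean) => void) {
  const baseline = `screenshots/baseline/${filename}`;
  const result = `screenshots/results/${filename}`;

  if (!fs.existsSync(baseline)) {
    // no baseline yet: the current screenshot becomes the baseline
    fs.copyFileSync(result, baseline);
    return done(true);
  }

  resemble(baseline)
    .compareTo(result)
    .onComplete((data: any) => {
      const mismatch = Number(data.misMatchPercentage);

      if (mismatch > tolerance) {
        // keep the visual diff for later inspection
        data.getDiffImage()
          .pack()
          .pipe(fs.createWriteStream(`screenshots/diffs/${filename}`));
      }

      done(mismatch <= tolerance);
    });
}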

There is a variable tolerance level, which defaults to 0 (forgive no differences). Due to platform rendering differences, a higher threshold (e.g., tolerance = 11) may be useful.

This is especially true if we want to compare screenshots between macOS and Linux / Windows, but it also holds for the other platform combinations.
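
With the compareScreenshot command from earlier, the tolerance is simply passed as the second argument:

TypeScript
// forgive platform rendering differences up to a mismatch of 11
browser.compareScreenshot('design-of-homepage.png', 11);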

Tolerance levels are important

The screenshot above shows how a failed test looks in the Azure DevOps console. While we potentially need to adjust the tolerance level here (Windows vs. Linux), it may also be another issue (e.g., with the site itself). The only way to know is to get to the recorded screenshot, which needs to be considered in our Azure Pipeline setup.

Azure Pipeline Setup

In order for our E2E Pipeline to be fruitful, we need the following steps (in order):

  • Run on a hosted Ubuntu agent containing at least Chrome
  • Clone the repository
  • Install the dependencies
  • Transpile (i.e., build) the TypeScript files to produce JavaScript
  • Run the Node.js application
  • Publish the created test results

Steps 1 and 2 are rather implicit. For steps 4 and 5, we created a single script (the start script shown earlier).

The following azure-pipelines.yml covers all these in a single sweep using the structure of our Nightwatch.js boilerplate.

YAML
pool:
  name: Hosted Ubuntu 1604
  demands: npm

steps:
- task: Npm@1
  displayName: 'Install Dependencies'
  inputs:
    verbose: false
- task: Npm@1
  displayName: 'Build and Run'
  inputs:
    command: custom
    verbose: false
    customCommand: start
- task: PublishTestResults@2
  displayName: 'Publish Test Results'
  inputs:
    testResultsFiles: 'reports/*.xml'
    mergeTestResults: true
    failTaskOnFailedTests: true
  condition: succeededOrFailed()

This setup can also be done graphically.

The pipeline definition in the Azure DevOps client

Importantly, we need to set up the right triggers.

In the following example, we trigger it on a weekly schedule (on Thursdays), but also always when the master branch changes.

Triggers for running the E2E Tests

Once the tests have run, the results are visible in the build details and can be further used / connected within Azure DevOps.

The following screenshot shows the gathered test results as displayed in Azure DevOps. The nice thing about this view is that we can inspect every single test that was run. We are able to see the test's history and get the complete information that the test reported. We can even view and download attachments.

Viewing the test results

If we have an active Azure DevOps test manager subscription, we can connect these results further and align them with different test suites, manual tests, and a more detailed description.

Points of Interest

A small sample project is attached. It comes with a basic boilerplate and tests CodeProject itself. Since automated UI tests are rather sensitive to any (not only visual) changes of the target web application, I cannot promise that the tests will still be green in the future. I hope you get the right idea from the sample project.

How do you solve the issue of extensive end-to-end testing? What is your favorite infrastructure to let these tests run reliably? Tell me in the comments!

History

  • 29th March, 2019: v1.0.0 | Initial release
  • 1st April, 2019: v1.1.0 | Added image info
  • 14th April, 2019: v1.2.0 | Added Table of Contents

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


Written By
Chief Technology Officer
Germany
Florian lives in Munich, Germany. He started his programming career with Perl. After programming in C/C++ for some years, he discovered his favorite programming language, C#. He worked at Siemens as a programmer until he decided to study physics.

During his studies, he worked as an IT consultant for various companies. After graduating with a PhD in theoretical particle physics, he has been working as a senior technical consultant in the field of home automation and IoT.

Florian has been giving lectures in C#, HTML5 with CSS3 and JavaScript, software design, and other topics. He regularly gives talks at user groups, conferences, and companies. He actively contributes to open-source projects. Florian is the maintainer of AngleSharp, a completely managed browser engine.
