Learn Webdriver IO - Professional Add-on

Hooking into Browserstack & Sauce Labs

Transcript: Storing cloud Selenium credentials

Welcome to the second half of the Learn WebdriverIO course. This is the first video of the professional add-on.

We’re starting off with a module on Cloud Selenium services. So far in the course, we’ve used our development computer for our browser needs. But what if you want to test on a browser that isn’t available on your local computer?

Cloud Selenium services fill that need by providing a wide array of operating systems and browsers for you to test on. While you will need to pay a subscription fee for advanced features from these services, all of the ones we’ll talk about provide a free account level for low-use needs.

There’s nothing particularly special about these services that you can’t set up yourself with a cluster of computers and some effort. The benefit they provide is really the ease of setup. Instead of spending your time configuring various operating systems and browsers, you can spend it writing tests.

There are several Cloud Selenium services available. The three we’ll take a look at in this module are some of the most popular services out there, and they already include built-in integration with WebdriverIO.

We’ll talk about integration with all three services in the other videos of this module, but first we want to take a look at credential storage.

All the services require an ID and key to hook into their service. This is a basic security feature, allowing only your specific sessions to connect.

The easiest method for managing this is to add ‘user’ and ‘key’ properties to your wdio.conf.js file. WebdriverIO will look at the ID you pass in and determine which service you’re trying to connect with.

The problem with this method is that it’s very insecure. Anyone who looks into your wdio.conf.js file will see your credentials and be able to use them for themselves.

Fortunately, there are several alternatives to this: passing in the information from the command line, using a “secret” file for your credentials, and finally, storing your credentials in your computer’s environment variables.

Let’s start out with the command line options.

If we pass in the ‘help’ argument when running WebdriverIO from the command line, we’ll be given a list of options we can use.

Note that I’m prefixing my arguments with double dashes, which tells NPM to pass the following information to the command it’s running. It’s okay if that doesn’t make sense; just know that if you’re running WebdriverIO through NPM and you want to pass arguments to it, you need to precede them with double dashes.
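For example, assuming your package.json ‘test’ script runs the wdio command, listing the available options looks like this:

    npm test -- --help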

Through the command line, we can pass in a user and key parameter, containing the credentials needed to connect to our services. While this is better than storing them in your configuration file, it has two drawbacks.

The first is that you’ll need to constantly remember your user and key information. Your key likely isn’t an easy-to-remember phrase, so that can be a source of constant frustration.

Also, it can be potentially insecure in a shared environment. Anyone with access to your command line history could see your credentials.

The next method is storing your credentials in a ‘secret’ file. This is a file that won’t be included in any source code management tool you have, thereby reducing the risk of accidentally sharing it.

To use this method, create a new file, naming it something like “secrets.js”. In it, you’ll export your user and key information in the form of a JavaScript object. This is similar to how other Node.js files export information.

Then, in your wdio.conf file, you’ll load the secret information by using a require statement. Once loaded, you’ll pass the information to your configuration via the user & key properties.
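As a sketch of that setup (the file name and the credential values here are just examples):

    // secrets.js -- keep this file out of source control
    module.exports = {
      user: 'my-cloud-username',
      key: 'my-secret-access-key'
    };

    // wdio.conf.js (relevant portion)
    const secrets = require('./secrets');

    exports.config = {
      user: secrets.user,
      key: secrets.key,
      // ...the rest of your configuration
    };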

Be sure you don’t accidentally add this file to your source code management tool. If you’re using git, that means adding the filename to your .gitignore file.

The last method we’ll talk about, and the one I recommend, is to store your credentials in your computer’s environment variables.

This matches standard industry practice for managing all sorts of sensitive information, and it’s also how the WebdriverIO Test Runner handles storage.

It also allows for easier integration with third-party tools like CICD services. Many of these services have built-in environment variable handling, making it easy to set up your credentials during integration.

For now, we’ll run through how to configure environment variables on Mac and Windows.

On Mac, and on Linux machines as well, you’ll set your variable via the command line. With it open, type ‘export’, followed by the variable name you want to set, an equals sign, and the value you want to give it.

You can validate your command worked by echoing out the variable you just set, preceding it with a dollar sign.
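For example, using a hypothetical variable name:

    export SELENIUM_USER=my-username
    echo $SELENIUM_USER
    # prints: my-username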

One important note: setting the variable from within the command line only makes it work for that session. If you were to restart your command line window (say, after rebooting your computer), that information would be gone.

For example, if I open a new terminal window and run my echo statement again, the variable won’t contain any information.

To avoid having to run this command every time a new command prompt session is created, you can add it to your command prompt startup scripts.

There are many variations of startup scripts used by different systems. I’ll be showing one version, but it may not apply to your environment. Despite this, the same concept will work.

As I mentioned, when you start a command prompt, it runs a startup script. The name of the script my system uses is “.profile”, and it lives in my home directory. Pay attention to the dot preceding the name; it’s part of the name, not a typo.

Other names you might see are .bash_profile and .bashrc. There are some subtle differences between each of these, so find what works for you and stick to it.

Opening our profile file, we’ll scroll to the bottom of it, and paste in our export command. We’ll save the file, then open a new command prompt and echo our sauce credentials out. Now we have our environment set up for continued use.

Windows systems are a bit different. To set a permanent variable, use the ‘setx’ command. Unlike ‘export’, it doesn’t use an equals sign, and you don’t need to worry about adding it to a profile file.

Just run the setx command, open a new command prompt, and you can echo out your variable.
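A sketch of the Windows equivalent; note the quotes, the missing equals sign, and that cmd echoes variables with percent signs:

    setx SELENIUM_USER "my-username"
    rem then, in a NEW command prompt:
    echo %SELENIUM_USER%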

The last change we need to make is inside our configuration file. At the top of our config, we’re going to add two properties: “user” and “key”.

Node can access environment variables via the ‘process.env’ global. We can reference our personal user and key by using process.env.SELENIUM_USER and process.env.SELENIUM_KEY.
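At the top of the config, that looks something like this:

    exports.config = {
      user: process.env.SELENIUM_USER,
      key: process.env.SELENIUM_KEY,
      // ...the rest of your configuration
    };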

Be sure to document whichever method you use in your test suite installation instructions, as it may be new to anyone jumping into the codebase.

Transcript & Files: Using Sauce Labs
The following information may change due to future Sauce Labs site updates. Check their documentation for the latest information.

Sauce Labs is a popular Cloud Selenium service and offers a wide variety of browsers and systems to test on.

In this video, we’ll be covering the integration between WebdriverIO and Sauce Labs, explaining how to set up your credentials and use the sauce service for improved functionality.

To start, you’ll need a Sauce Labs account. They have a free 14-day trial available for anyone interested in previewing their service.

Once you’re logged in with your account, open up the menu in the bottom left corner of the page. Then click ‘my account’.

On this page you’ll notice many things, but what we’re looking for is the ‘API key’. This is your secret code for accessing your Sauce Labs account and related functionality from WebdriverIO.

Click ‘show’ to reveal your API key. Be sure not to share this with others, as it allows full access to your account.

"Copying the key over, let’s open up our profile file. Be sure you’ve watched the first video in this module, as it covers the specifics on this file.

Inside of it, we’re going to add two lines.

The first is to define our sauce username. Since I’m on a Mac, I’ll use the ‘export’ statement, followed by the key I want to define and the value of my username.

Next I’ll add the access key. I can paste it in from what I copied out of the Sauce page.

Having saved the file, I need to reload it in my current terminal. I can do that by running ‘source ~/.profile’.

Now my credentials are ready to use.

The next step is to add our user and key to our configuration file. Opening it up, we’ll add it to the top.

Node provides access to environment variables through the ‘process.env’ global, so we can hook into that to get our information.
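Assuming the SAUCE_USERNAME and SAUCE_ACCESS_KEY variables we exported in our profile, the top of the config looks like this:

    exports.config = {
      user: process.env.SAUCE_USERNAME, // set via 'export' in our profile file
      key: process.env.SAUCE_ACCESS_KEY,
      // ...the rest of your configuration
    };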

With the login information now included, WebdriverIO will recognize what we’re trying to do and hook into the Sauce Labs service for us.

There is one small temporary caveat. Because we’re running the browser inside of Sauce Labs, it doesn’t currently have access to our local server. We’ll fix that later in this video, but for now, we’re going to pass in the ‘server prod’ flag in order to use the publicly available website.

As our tests are running, we can hop over to our Sauce Labs dashboard and see them live.

Notice that they’re all labeled as “unknown job”. We’ll fix that in a minute, but we’ve got a bigger problem on our hands.

Because Sauce Labs takes a few seconds to spin up, our tests are timing out before they have a chance to complete.

Thankfully this is an easy fix. Simply increase the timeout in the configuration file to a larger amount, like 60 seconds. Now our tests should pass again without trouble.
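Assuming the Mocha framework we configured earlier in the course, that’s the ‘timeout’ value inside ‘mochaOpts’:

    mochaOpts: {
      ui: 'bdd',
      timeout: 60000 // 60 seconds, to give Sauce Labs time to spin up
    },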

Back to the ‘unknown job’ issue we ran into earlier.

Sauce doesn’t know anything about the tests we’re running, aside from the Selenium commands that WebdriverIO is sending it. So when a session is started, it can’t associate it with a specific test.

To pass along the relevant information, a service was written for WebdriverIO that provides deeper integration with Sauce.

You may have noticed during the Test Runner configuration setup phase at the beginning of Module two that a ‘sauce service’ was available for install. We skipped it at that point, but now it’s time to take advantage of it.

Installation is the same as with WebdriverIO. Using NPM, we’ll run the ‘install’ command, request the ‘wdio-sauce-service’ module, and save it to our dev dependencies list.

With that installed, we need to turn on the service via our config file. Jumping down to the ‘services’ section, we’ll add ‘sauce’ to the list of services we want to use.

We’ll leave the ‘selenium standalone’ service as is, so that testers without a Sauce username set up will fall back to running Selenium locally.

We’re also going to enable the ‘sauceConnect’ flag, which will allow Sauce Labs to access our local server via a secure tunnel.

More information about Sauce Connect is available on the Sauce website. The main thing to know is that it allows Sauce Labs to access servers running on your local computer.
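A sketch of those changes, assuming the wdio-sauce-service package name:

    npm install wdio-sauce-service --save-dev

Then, in wdio.conf.js:

    services: ['sauce', 'selenium-standalone'],
    sauceConnect: true,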

With all that set up, we’ll save the file then head back to the command line and run our tests. We’ll skip passing in the ‘server prod’ flag, as Sauce Connect allows us to now test locally.

Reviewing the test runs on the Sauce dashboard, you’ll notice the test names are now being passed in, along with the results of each test. Green checkmarks indicate passing tests and red X’s indicate failing ones.

That’s it for now. There’s more to this than just the WebdriverIO integration, but we’ll be covering that in a later module.

wdioconf.js

Transcript & Files: Using Browserstack

The following information may change due to future Browserstack site updates. Check their documentation for the latest information.

Browserstack is another popular choice in the arena of Cloud Selenium services.

In this video, we’ll be covering the integration between it and WebdriverIO, explaining how to set up your credentials and use the Browserstack service for local testing.

Similar to Sauce Labs, Browserstack features a free trial offer, limited to 100 minutes of automated testing. This should be plenty to test out their software and make a decision.

Once you’ve registered for your account, you can find your API key on the ‘automate’ landing page by clicking the ‘username and access keys’ dialog.

Note that the username provided is different from your login credentials. You will need to copy both the username and access key into your profile file.

Similar to Sauce Labs, we’ll store our credentials in our profile.

To avoid conflicts, we’ll name them BROWSERSTACK_USERNAME and BROWSERSTACK_ACCESS_KEY.

With that set up, we can now use our credentials in our configuration file.

Like before, we’ll set the user and key properties, and hook in to process.env to access our variables.

And again, like before, we want to install the browserstack service. This will enable local connections and pass along test metadata for our job.

Installation follows the same pattern as the other services: first run the install command, then turn the service on in your configuration file.
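Here’s a sketch of both steps, assuming the wdio-browserstack-service package and its ‘browserstackLocal’ option for the local tunnel:

    npm install wdio-browserstack-service --save-dev

Then, in wdio.conf.js:

    services: ['browserstack'],
    browserstackLocal: true,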

wdioconf.js

Transcript & Files: Using TestingBot

The following information may change due to future TestingBot site updates. Check their documentation for the latest information.

TestingBot may be a lesser-known Cloud Selenium service, but it includes the same level of WebdriverIO integration as BrowserStack and Sauce Labs.

Like the other options, TestingBot offers a free trial, which provides 100 free minutes of testing; plenty to see what TestingBot has to offer.

Once you’re signed up, you can access your credentials by visiting the account page. On that page you’ll see your ‘key’ and ‘secret’. These are the values you’ll use for connecting to TestingBot.

With those values, we’ll add them to our computer’s environment via the profile file. We’re going to store the TB_KEY and TB_SECRET variables with our information.

Then, in the WebdriverIO configuration file, load those values via process.env.

Just like BrowserStack and Sauce Labs, TestingBot provides a service to integrate with their Selenium grid. Installation goes like before:

First, run the install command, then include ‘testingbot’ in your list of services.

The last item is to tell the service to start a local tunnel for us. That’s done by setting the ‘tbTunnel’ option to true.
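Pulled together, a sketch of those configuration pieces, assuming the wdio-testingbot-service package name:

    npm install wdio-testingbot-service --save-dev

Then, in wdio.conf.js:

    user: process.env.TB_KEY,
    key: process.env.TB_SECRET,
    services: ['testingbot'],
    tbTunnel: true,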

Now it’s time to run our tests. We’ll use the standard npm test command.

Then jump over to the testingbot dashboard. We can see the tests start up, with the meta information passed over. We get the test names and status in our results.

That’s it for module 7 and setting up the cloud selenium services. Up next, we’ll take full advantage of these services by expanding the browsers and platforms we’re testing on.

wdioconf.js

2. Multibrowser Testing

Transcript:

The cloud services we covered in the previous module provide access to hundreds of different browser configurations, so it’d be a shame to keep limiting our tests to a single scenario.

In this module, you’re going to learn how you can configure WebdriverIO to run your tests on any variety of browser setups.

The default WebdriverIO example shows us how we can define multiple browsers in our capabilities configuration.

In fact, we could have been testing on multiple browsers all along, even without hooking into a cloud service. We just haven’t covered it until now, because we’re most likely to use this feature with cloud services.

But again, as long as you have the browser set up on the computer you’re running Selenium on, you’re able to run your tests on multiple browsers. Setting up these browsers won’t be covered in this course, which is why I’ll be showing examples using the cloud services.

Anyway, looking into our capabilities object, there are some important properties to be aware of.

The first thing to note is that each browser is defined as a JavaScript object, and we can define multiple browsers by combining them into a JavaScript array. This array can include however many browsers you want to test on, although testing on a multitude will take a significant amount of time.

Inside each of our objects, we define our standard Selenium properties, like ‘browserName’, and also some WebdriverIO-specific ones, like ‘maxInstances’, ‘specs’, and ‘exclude’.

These three WebdriverIO properties allow us to customize how WebdriverIO runs our tests depending on the browser.

‘maxInstances’ limits the number of parallel tests that are run for a specific browser. This is an optional override for the global ‘maxInstances’ property, and can be useful if you have limitations on your Selenium testing box. For cloud services, it likely isn’t needed.

‘specs’ and ‘exclude’ are optional overrides for the global properties of the same name. ‘specs’ defines the test files we want to run, and ‘exclude’ defines any files that shouldn’t run. This can be helpful if you have a browser-specific bug you want to validate, or have trouble running a specific test in a specific browser. Keep these properties in mind in case you run across such a situation.
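Pulled together, a sketch of a capabilities array using these overrides (the file paths are hypothetical):

    capabilities: [{
      browserName: 'chrome'
    }, {
      browserName: 'firefox',
      maxInstances: 1, // cap parallel Firefox sessions
      exclude: ['./test/flaky-in-firefox.js'] // skip this file in Firefox only
    }],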

Back to the ‘browserName’ property. As I mentioned, this is a standard Selenium property. Looking at the ‘Desired Capabilities’ document on the Selenium wiki, we can see several properties like it, including ‘version’ and ‘platform’.

This document is a great reference for what you can define in your capabilities objects. While it’s not always a one-to-one match with every cloud service out there, almost all properties shown here have support in the major Cloud Selenium services.

One important thing to note is that almost all of these properties have a default setting, so it’s not necessary that you define them all in your capabilities object. Just specify what you need and leave the rest as the default.

While this document is a great reference, there are easier ways to configure your capabilities objects. Since each cloud service is unique in its own way, they all provide a way to get capabilities specific to their environment.

For example, Sauce Labs has a platform configuration page. Let’s try it out.

First, we’ll select ‘Selenium’, then for a device I’ll pick ‘PC’. Since I’m on a Mac, I’d like to see how my tests run on a PC.

Next, I’ll choose Windows 10 as my operating system, then IE 11 as my browser.

Finally, I’ll switch to the Node.js tab, to get a better format for my capabilities.

I’ll then copy the code from this box to my wdio config file, then append it to my capabilities array.
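In my case, the copied object looks something like this (the exact values come from the configurator):

    {
      browserName: 'internet explorer',
      platform: 'Windows 10',
      version: '11.0'
    }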

Assuming I still have my Sauce Labs credentials and service configured, I can now run my test. From the command line, I’ll send my usual npm test command, and wait as my tests run.

Inside my Sauce Labs dashboard, I can now see both running. Since I didn’t define an operating system or version for Chrome, it just went with whatever default was available.

Let’s move on to BrowserStack’s configuration editor.

While it’s similar in that it asks for an operating system and browser, it does have some differences. The main one is the way we define our browser and OS versions.

Browserstack has two unique properties for this: browser_version and os_version.

Let’s take a look at this in action. We’ll set up the same profile we used for Sauce Labs using the Browserstack configuration tool. The format this tool creates is slightly different from Sauce Labs’, but it’s the same basic capabilities object.

In the object, you can see how Browserstack uses the ‘browser_version’ and ‘os_version’ properties. You can also see that it includes a resolution by default. Feel free to change that as you deem fit.

Let’s copy just the object over to the BrowserStack version of our configuration file. We could save it as a variable and load it in our capabilities array, but it’s simpler just to load the object directly in.

We’ll also add a project property to see how that looks. We’ll set it to ‘Robot Parts Emporium’.
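The finished object looks something like this (the exact values come from BrowserStack’s tool):

    {
      browserName: 'IE',
      browser_version: '11.0',
      os: 'Windows',
      os_version: '10',
      resolution: '1024x768',
      project: 'Robot Parts Emporium'
    }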

With that, let’s save the file and run our tests.

Taking a look at the BrowserStack dashboard, we can see the tests running. Also notice that the project name is filled out. This can be very helpful when you have multiple projects using the same account, and allows you to filter your sessions by project.

It’s now time to look at TestingBot. While they don’t have a capabilities generator like the previous two services, you can narrow down which browser you want via their browsers page.

We’ll find our Edge browser on Windows 10. Clicking the version number we want, TestingBot will return the capabilities we need to use.

They come wrapped in a non-JavaScript format, so we’ll need to convert it over. Let’s copy the code…

Paste it in our editor. Then we’ll convert it to fit in our capabilities object.
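The converted object might look something like this (the exact values come from the TestingBot browsers page):

    {
      browserName: 'microsoftedge',
      version: '14',
      platform: 'WIN10'
    }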

Now we’re ready to run our tests. Like before, we’ll use the ‘npm test’ command.

And like before, we’ll look at the dashboard page to see our tests running, and see how the test names and statuses are being passed over.

Before ending this video, I want to mention one thing.

Each browser is different, and just as they each have differences when displaying sites, they also have differences when running Selenium. You may run into issues with tests that work perfectly in other browsers. Plan to spend a little bit of time debugging and possibly re-working tests when adding new browsers.

Downloads10.zip

Links, Transcript & Files: Advanced Browser Configurations

Transcript:

If you run your tests on multiple environments, let’s say a test server and a production one, it can be helpful to have multiple configuration files with differing settings.

WebdriverIO allows for passing in custom configuration files through the test runner, but the method shown here requires a lot of duplication of configurations.

Instead, you can share a set of configurations across environments, customizing the standard set up as needed. This is similar to how we set up our common page objects.

In a main configuration file, you’ll store all the configurations you share across environments.

Then, for each environment, you’ll create supplementary configurations containing the settings from the main config along with environment specific ones.

The WebdriverIO documentation includes a basic example of this, where they extend their main configuration with Sauce Labs specifics.

We’ll do something similar, but in reverse. We’ll use the standard configuration file for testing our production site in Sauce Labs.

Then, we’ll allow easier local testing by creating a local configuration file, which will test on a single browser, using our local selenium server, and will run against our local website server.

Just to review, we’ll be using the configuration file we used in our Sauce Labs example. This includes the user/key combo, along with the addition of the ‘sauce’ service.

While we’re in this file, we can clean up a couple items we’ll no longer need.

First, we’ll remove the ‘production’ check at the top, setting our baseUrl to always use the production site.

Then we’ll remove ‘selenium-standalone’ from our services list, since that will be specifically used by our local testing configuration.

Next, we’ll create a new file, saving it as wdio.conf.local.js.

At the top, we’ll load our prod configuration via a require statement. Since all of the settings are stored on the config property, we’ll reference that.

We’ll then create a copy of this configuration using the ‘Object.assign’ JavaScript method. Object.assign takes in any number of objects, returning a merge of them. We’ll pass in our prodConfig, and an empty object we’ll use in just a second to add custom configurations.

Let’s go ahead and do that now.

First, we’ll override the capabilities array and set it up to only test in Chrome.

Then we’ll override the baseUrl property to point to our local server.

Finally, we’ll override the services array, requesting just ‘selenium-standalone’.

Since we’re using the prod configuration as our main config, we need to delete some unwanted properties from it. We’ll remove the user, key and sauceConnect properties using the ‘delete’ keyword.

Also, I’ll log out the result of all of our configuration changes, just so we can see what the end result looks like.

Finally, we’ll pass our local config to the exports.config property so that WebdriverIO can find it.

With that, we’ll save our file, then use it in our test by passing it in as a command line argument. To do this, we’ll use our ‘npm test’ command, then use two dashes to let npm know to pass along the remainder of the text to the wdio command. After the dashes, we’ll state the filename of the configuration we want to use.
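Pulling the whole file together, here’s a sketch of wdio.conf.local.js (the local server address is a placeholder):

    // wdio.conf.local.js
    const prodConfig = require('./wdio.conf.js').config;

    // copy the prod config, overriding a few properties
    // (the empty target object keeps prodConfig itself untouched)
    const localConfig = Object.assign({}, prodConfig, {
      capabilities: [{ browserName: 'chrome' }],
      baseUrl: 'http://localhost:8080', // wherever your local server runs
      services: ['selenium-standalone']
    });

    // remove the cloud-only settings inherited from prod
    delete localConfig.user;
    delete localConfig.key;
    delete localConfig.sauceConnect;

    console.log(localConfig); // inspect the merged result

    exports.config = localConfig;

The run command then becomes: npm test -- wdio.conf.local.js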

Running our tests, you can see it now pops up a local browser for execution and hits our local server address.

There are many different ways we could have set this all up, and the example we just saw was one version of that.

For a more complex set up, check out the ‘node-config’ module, which lets you get really fancy with your set up. Fair warning that you should feel very comfortable with object inheritance and advanced JavaScript usage before diving into this.

Downloads11.zip

Links, Transcript & Files: Multiremote Testing

Transcript:

In this video, we’re going to change things up a bit and test out a different website than Robot Parts Emporium. While it’s served us well so far, as a basic static test website, it doesn’t have the functionality needed for this lesson.

Instead, we’re going to test out a community-built Spyfall app. Spyfall is a popular board game where one person plays as a spy, trying to figure out the hidden location known to all the other players. The specifics of the game aren’t important for our lesson; it just makes a really good example of when you’d need the functionality we’re going to use today.

That functionality is called “multiremote”. So far in this module we’ve focused on testing in multiple browsers, but that was for the sake of browser coverage. While more than one browser was tested on, they all ran independently of each other.

Multiremote, on the other hand, uses browsers in connection with one another to simulate multiple users testing the same page at the same time.

For instance, if we want to test a chat application, one browser has to input a text message while the other browser waits to receive that message and runs an assertion on it.

In the example shown here, we set up two browsers, then send a message in one of them, wait for it to appear in the other, then assert it has the correct text.

Back to the Spyfall website, we’re going to test several things.

First, we’ll create a game, then send the game ID to our second browser. It will use that ID to join the game we just created. Once joined, we’re going to verify both players are listed, then start the game. As a final step, we’ll assert that one player is the Spy and the other is given the secret location.

To get started, we need to create a special WebdriverIO configuration file. We’ll do this using the same extend pattern from the previous video. First, we’ll create a new file, calling it wdio.conf.spyfall.js. Inside of it, we’ll require our main configuration file, then clone it using the ‘Object.assign’ function.

Inside our custom property object, we’ll start things off by defining the browsers we want to test on.

For multiremote, we define these browsers as an object, not an array. This tells WebdriverIO that we’re going to be running these browsers side-by-side, instead of separate from each other.

We’ll then define our first browser. We’ll label it ‘Host’, as it will be the one creating and starting the game. In it, we’ll add a desiredCapabilities object, containing the information on the browser we want to set up. For our needs, we’ll just use the standard Chrome browser.

Next, we’ll create a Guest browser. This is the browser that will join the game. It will have the same capabilities as our Host browser. While these capabilities match, they will be run as entirely different browsers.

Those are our capabilities. There are a couple more items we need to define. First, we’ll set the baseUrl to refer to the spyfall website.

Then we’ll overwrite the ‘specs’ property so that it will only run our spyfall test, which we’re going to make in just a second.

That’s all we need to change. Let’s export this config so WebdriverIO can see it.
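A sketch of the finished file; the Spyfall URL and spec path here are placeholders:

    // wdio.conf.spyfall.js
    const mainConfig = require('./wdio.conf.js').config;

    const spyfallConfig = Object.assign({}, mainConfig, {
      // an object, not an array: these browsers run side-by-side
      capabilities: {
        Host: { desiredCapabilities: { browserName: 'chrome' } },
        Guest: { desiredCapabilities: { browserName: 'chrome' } }
      },
      baseUrl: 'https://spyfall.example.com', // the community Spyfall app's address
      specs: ['./test/spyfall.js']
    });

    exports.config = spyfallConfig;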

Now it’s time to write our tests. Creating a new file named ‘spyfall.js’, we’ll start things off in it with a simple describe block.

Our first test is going to verify that the host can create a game. After creating our ‘it’ block, we’ll tell the ‘host’ browser to open the root url.

Notice how we don’t use the ‘browser’ object like we have in the past. While the browser object is available, since we only want to run this command on the Host browser, we use the ‘Host’ object that multiremote provides us. ‘Host’ comes from the property name we defined in our capabilities object.

Continuing with our use of the Host browser, we’ll click the new game button. Then we’ll set our player name to Lisa.

We’ll start the game by clicking the ‘create-game’ button, then wait for the page to reload and a ‘waiting for players’ status message to appear.

Before we can have other players join, we need to get the ID of the game we started up. Since we need to use this game ID in our other tests, we’re going to initialize the variable outside of this one. This is due to the way variable scoping in JavaScript works. Anything you define inside of a function is only available inside that function. To pass data from one function to another, you can increase the scope of the variable by defining it outside of those functions.

In this case, we’re going to initialize the variable outside of our ‘it’ function, but still inside our ‘describe’ function. This way it’s available in all of our test functions.

Then, back inside our first test, we’ll assign the value to the gameId variable using the ‘getText’ command.

Finally, we’ll assert that we received a valid ID by checking that the string length is above 0.
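Here’s a sketch of that first test; every selector in it is a hypothetical placeholder:

    describe('Spyfall', function () {
      let gameId; // shared with the tests that follow

      it('should allow the host to create a game', function () {
        Host.url('/');
        Host.click('.new-game');
        Host.setValue('.player-name', 'Lisa');
        Host.click('.create-game');
        Host.waitForVisible('.waiting-for-players');

        gameId = Host.getText('.game-id');
        expect(gameId.length).to.be.above(0);
      });
    });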

That’s our first test. Now it’s time to try out that ID in a second browser.

We’ll create a new test, saying that it should allow others to join the game.

Inside the test, we’ll get the Guest browser to open up the base URL. Again, we’re using ‘Guest’ to reference the browser we want, which matches up with what we defined inside of our capabilities object.

Next up in our test, we’ll click the ‘join game’ button. Then we’ll enter the game ID into the access code input box. We’ll also name our player Bob.

With our information filled out, we’ll click the Join button, then wait for the ‘waiting for players’ message to appear.

As one final validation step, we’ll get the url of our guest browser and assert that it contains our game ID.

That’s the first two tests. It’s worth mentioning at this point that the commands in the second test aren’t run until the first test has completed. This is why we’re able to use the gameId in our second test, even though it was set in the first.

Let’s create a third one. In it, we’ll check that ‘Bob’ now appears in our list of players on the host’s browser.

We’ll start by waiting for an element with the text ‘Bob’ to exist. Then we’ll get the text of the lobby player list to double check that Bob is in the right spot of our list.

When we run the ‘getText’ command and pass in a CSS selector that matches multiple elements, it will return an array containing the text of each match.

We’re going to do two things with the array returned. First, we’ll assert that the length is two, as it should contain entries for the host name and the guest name.

Second, we’re going to assert that the second item is ‘Bob’, which is the name of our guest.

We could add a few more assertions here checking that the host name is correct, but we’ll let things stay simple and move on to our next test.

Our fourth test is going to be somewhat simple. We’re just going to start the game, then check in both browsers that the right HTML elements appear on the page.

We’ll start things off by having the host click the ‘start game’ button.

Once clicked, we need to wait for both browsers to add the ‘game countdown’ element. Since we want this command to run in both browsers, we’re going to use the ‘browser’ object. This tells WebdriverIO to run the command in all the browsers we have open, not just a specific one.

We’ll use the browser object one more time to get the existence of the status container.

isGameStarted comes back as an object, containing the response from each of our browsers assigned to a matching property. We’re going to assert it’s correct by comparing it to what we expect to see, which is an object with a Host and a Guest property, each with a value of true. We’ll use the ‘deep’ flag since we’re comparing two objects.
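A sketch of those two steps, with a placeholder selector:

    const isGameStarted = browser.isExisting('.status-container'); // runs in both browsers
    expect(isGameStarted).to.deep.equal({ Host: true, Guest: true });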

Our final test will verify that the game is set up properly. We’ll check that one person is the spy, the other isn’t, and that the location isn’t given to the Spy.

First, we need to check on the status of our players. We’ll get both by using browser.getText, passing in the status selector. This is going to return an object with two properties, Host and Guest.

One of those values will contain the text ‘You are the spy!’, but we don’t know which one it will be. The spy could be either the host or the guest. In order to continue on in our test, we’re going to need to determine which one is which.

To do this, we’re going to use a conditional check. Before doing that though, we’re going to initialize two variables, ‘spy’ and ‘notSpy’. These variables are going to store which property we should check on the status object for either the spy or non-spy.

We’ll then write an if statement, checking if the status of the host is equal to ‘You are the spy’. If so, we can assume that the Host is the spy and the guest is not. If it’s not the case, then we can assume the opposite, that the guest is the spy and the host is not.

Since we’re making an assumption, we should probably double-check that it’s correct. Tests that make unchecked assumptions are liable to miss failures.

We’ll verify things are in order by expecting the status of the spy to be ‘You are the spy’, while also checking that the status of the non-spy is ‘You are not the spy’. If something messed up and both players were assigned the spy role, this test would now fail.

For those unfamiliar with it, we’re using bracket notation to reference either the Host or Guest property. Since either could be the spy, we store which one is which in the corresponding variable, then pass that variable in to reference the proper property.
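In code, the conditional and its follow-up assertions might look like this (the selector and exact message text are placeholders):

    const statuses = browser.getText('.player-status');
    let spy;
    let notSpy;

    if (statuses.Host === 'You are the spy!') {
      spy = 'Host';
      notSpy = 'Guest';
    } else {
      spy = 'Guest';
      notSpy = 'Host';
    }

    // double-check the assumption using bracket notation
    expect(statuses[spy]).to.equal('You are the spy!');
    expect(statuses[notSpy]).to.equal('You are not the spy!');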

This is where it’s helpful to have programming experience in your background. If you’re unfamiliar with this, I recommend doing some research into it, as the knowledge can really come in handy in your own test writing.

Moving on, the last thing we’ll check is that the location is shown to the non-spy and hidden from the spy. We’ll check for the existence of the ‘current-location’ element, then use our spy reference again to assert that the element doesn’t exist in the spy’s browser. We’ll do the opposite to validate that the location is shown in the non-spy’s browser.

With that, our tests are completely written and ready to run. Jumping over to the command line, we’ll run npm test, then pass in the name of our custom configuration file.

Just to show the two browsers side by side, I’ve edited my spyfall configuration file to run my tests locally.

Running the test, you can see each browser take actions at separate times, with the actions of one browser impacting the other. Multiremote is pretty cool, and it’s kind of fun to watch the tests in action as the browsers go back and forth.

Downloads12.zip

3. Integrating with CICD Systems
Transcript & Files: CICD Systems

In this module, we’ll be taking a look at CICD systems and how we can integrate our automated tests with them. CICD, which stands for continuous integration and continuous deployment, aims to improve the code production process.

While there are many tools available for the job, CICD isn’t bound to a specific piece of software or set of rules. It’s more of a general practice that teams adhere to in an effort to become better at deploying their code. There will be many similarities between different implementations, but they will all have their unique qualities.

From a testing perspective, there are many reasons to integrate with a CICD system.

The simplest is that it allows you to run your tests automatically, either on a timed schedule or upon outside changes. For example, many systems have tests run immediately after a code change is made, ensuring the latest features are always tested.

This brings us to the second benefit, which is better integrating your tests with the code development and deployment process. Tests that are not part of this process are liable to break, as they may be forgotten for periods of time.

Speaking of forgetting, having easier access to test results is also an important part of the equation. By storing the results in a shared location, teammates can more easily see how tests are performing, and if their changes had any impact on the test run.

Now that we’ve talked about what and why, let’s discuss how tests fit in.

It all starts with a trigger action. In this example, it’s a code update, but it could also be based upon a time schedule or a manual push of a button.

The trigger will notify the CICD system to start the process. It usually begins by building the latest code available. Once that’s complete, it will move on to the tests.

Your tests will run, validating that the functionality of the latest code available works as expected.

After completion, the tests will create a report of the results.

If the tests passed, the results will tell the CICD system to continue with the deployment process.

If there was an issue with the tests, the CICD system will stop progressing and will signal that something went wrong. If it’s an issue with the new code, the developers will make an update to resolve it, then restart the process.

As I mentioned before, there are many CICD solutions out there. They come in the form of both paid software, and open-source tools that you can install yourself.

This is just a short collection of them, and there are many others out there that aren’t shown here. The right system really depends on your specific needs.

We’re going to cover four of the most popular choices out there. Jenkins, TeamCity, TravisCI and CircleCI.

I won’t be showing how to install or set up each tool, just how you can run your tests with each of them.

Regardless of the tool used, there are a few keys to success that are common to all of them.

The first is having a clean, stable website to test on. If tests regularly fail due to server errors, you’ll get in the habit of dismissing failures, and you risk ignoring valid ones.

Secondly, you should work to load test data via an automated method. One of the benefits of the CICD process is the fact that the code is built fresh each time, so any users or data you manually added on a previous instance won’t be there. By automatically adding this data before or during a test run, you can be sure to have more consistent and quicker results.

Finally, focus more on the reliability of your tests. It’s better to have one solid test than many that are prone to unwanted failure. Before building out hundreds of tests, make sure the ones you have work every time.

If your tests unintentionally fail on a regular basis, folks will push to circumvent the test results, compromising the quality of the CICD process.

With that, let’s get started with Jenkins integration.

cicd.pdf

Transcript: Jenkins

Jenkins is a popular open-source CICD system. In this video, we’re going to take a look at how we can get our WebdriverIO tests to run on the software.

WebdriverIO does have an in-depth guide on integration, so I suggest having a read of that page when you’re through with this video. It covers a couple of items that we’re not going to talk about here.

Before getting started with our example, there are a couple of requirements you need to set up on your own.

First, you’ll need to have a Jenkins instance running. Getting this set up greatly depends on your situation. Many companies already have their own server ready for use. If you’re setting this up individually for yourself, I recommend using their pre-packaged solutions, which is how I set mine up for this demo.

Along with having Jenkins installed, for this example we’ll also be using the Git and GitHub plugins. Whether you use this or not really depends on where you store the code for your tests.

Finally, you’ll want to ensure your Jenkins instance has access to NodeJS. Since I’m running my Jenkins server locally, I simply added the path to my Node installation to the Jenkins Path information.

An alternative to this step would be to use the NodeJS plugin, which can install Node for you. Again, the solution depends on the situation.

With those items in place, it’s time to get started with our set up. First thing we’ll do is let Jenkins know about our Sauce Labs credentials.

From the main dashboard, click the ‘Manage Jenkins’ link on the main menu. Then choose the ‘configure system’ link.

Once that page loads, scroll to the ‘Global properties’ section and make sure the ‘Environment variables’ checkbox is enabled.

We’ll add two keys. The first will be SAUCE_USERNAME, with the value of our username. The second is SAUCE_ACCESS_KEY, with the value of our access key.

Click save to store your credentials.

Now it’s time to create our Job. Back on the main dashboard, we’ll click the ‘New Item’ link. Once that page loads, we’ll start entering our information.

We’ll name our job “WebdriverIO Tests”. You can name this whatever works for you.

Then we’ll set this as a ‘Freestyle Project’ and click “ok” to get to the next step.

Now take a deep breath because this page can be a bit overwhelming. Thankfully we’ll only need to focus on a few items.

The first is the ‘parameterized project’ option. This will allow us to specify the URL we want to test on a per build basis. After ticking the checkbox, we’ll add a parameter, specifically a String parameter.

We’ll name it ‘baseUrl’, and add the url to the production Robot site as the default value. That’s all we need for that.

Next up, we need to let Jenkins know where our tests live. Since we’ve been storing ours in Git, we’ll tell Jenkins that’s what we’re using. In the settings, we’ll pass in the repository URL and use our Git credentials for access. I won’t cover setting up credentials because, again, it really depends on your setup.

The one other setting I’ll update here is the branch to build. You can probably leave yours alone, but for the Robots test code we’ll use the CICD branch. The master branch is set up for the very beginning of this course, so it doesn’t have the tests or settings we’ve made throughout the videos.

Moving on to the build triggers section, there are quite a few options, and any one of them could be useful to you.

The first two would be useful if you have other projects set up or a different tool you’re using for triggering actions.

The third option is useful if you don’t have any current outside integrations. It will run your job on a periodic basis. Let’s check the box to see how that looks.

In order to know what we should add to that blank ‘schedule’ box, let’s click the ‘question mark’ icon on the side. This will pop up a helpful description of the format expected.

If you’ve ever written a cronjob, this format will look familiar. Essentially, you define 5 values for the minute, hour, day of month, month and day of week. There are some helpful examples at the bottom, but even more helpful are the built-in aliases they have.

@yearly, @annually, @monthly, @weekly, @daily, @midnight, and @hourly are all tags you can use in place of the 5 values. They work as named. @yearly will run your job once a year, whereas @daily will run it once a day.

If you’d like to run your tests after a code commit, these next two options are for you. Setting up the GitHub hook trigger will make Jenkins run your job whenever GitHub tells it to, which is usually after someone has committed code.

This does require some configuration on the GitHub side of things to make it work, which may not be an option for you. In that case, you can choose to poll your SCM, or Source Code Management tool, on a schedule. The schedule format matches what we looked at for the periodic build option, except that it will only run the job if the code has changed.

Here I’ll set up Jenkins to poll the SCM every five minutes. Even though it checks quite often, it will only run the tests if there are code changes. This is a useful option if you’re wanting to run your tests based on code changes, but remote triggers aren’t available.
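In the schedule box, that polling interval looks like this (Jenkins’ ‘H’ syntax spreads the load; ‘*/5 * * * *’ also works):

    H/5 * * * *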

With all that said, this is not a required setting, and the way we’ll be running our test is by manually clicking the ‘build now’ button we’ll see in just a second.

So far we’ve set a lot up, but we haven’t yet told Jenkins what to do with all of this. Luckily there’s not much we need to say.

Our build is going to consist of one step, which is to execute a shell command.

The command will consist of two parts. First, we need to tell Jenkins to install our NPM dependencies. This will make sure WebdriverIO and all of our extras are available for Jenkins to use.

Then, we’ll tell it to use our npm test command. This is just like how we run our tests locally.

The only difference is that we’re going to pass in a baseUrl argument. This is the parameter we defined at the very beginning of our build options. We’ll use the normal double dash to have NPM pass everything through, followed by another double dash and ‘baseUrl’. For the value, we’ll reference our build parameter using a dollar sign, followed by the name we used for our string parameter.
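Putting both parts together, the build step’s shell command looks like this:

    npm install
    npm test -- --baseUrl=$baseUrl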

Okay, our build is now set up. Let’s save it, then wait for the main build page to load. Once loaded, we’ll start a new job by clicking the ‘build now’ button. We have the option of changing our base url if we’d like, but we’ll leave it as is for this demo.

In a few seconds, we’ll see our job appear in the build history widget on the sidebar. The blinking orb indicates the job is currently running.

Hovering over the icon, we can see a drop-down menu pop out. In this menu is a ‘console output’ option. Let’s see what that looks like.

Just like how we see our output when running our tests locally, we can see the output that’s coming from Jenkins. The first part is Jenkins checking out our git repo, then installing the NPM dependencies.

After that, we can see our ‘npm test’ command being run. Hopefully we’ll see it show a success, in which case Jenkins will report the build as a success. If there’s a failure, WebdriverIO will exit with a failing status code, which will let Jenkins know that something went wrong.

There’s a ton more to Jenkins than what I covered here. In fact, I could create an entire separate course on it, if only I had the time. If you’re looking for more information on the subject, be sure to check out their homepage for documentation and links to support communities.

Transcript, Files & Links: TravisCI

Transcript:

Popular in the open-source community, Travis-CI is the next CICD tool we’re going to take a look at. It’s a hosted service that integrates with GitHub projects. In fact, WebdriverIO uses it to build and test the project’s code, as the tool is free for any open-source project to use.

TravisCI supports many programming languages out there, NodeJS included. You’re able to request any specific version of Node, and NPM support is built right in.

All of this makes Travis an appealing choice if you’re looking for a build tool for your open-source project.

The first step we’ll take is creating an account. This is as simple as signing in with our Github account.

Once signed in, we’ll need to enable builds for the GitHub repo we want to test. I already have a few set up, as you can see in the sidebar. To add a new one, click the plus sign, then find the repo you want to enable.

Ours is at the bottom of the list. First, we’ll click the slider to enable the repo, then click the gear icon to go to the settings page.

There are quite a few settings we can change, but what we’re really interested in is adding our sauce labs credentials to the environment variables.

Let’s go ahead and add our usual SAUCE_USERNAME and SAUCE_ACCESS_KEY. This way, Travis can access our Sauce account and the browsers available with it.

If you’re looking for an alternative way to store your credentials, Travis also provides a command line tool for encrypting values. The documentation goes into more detail on this.

With that set up, it’s time to define our job. In Travis, this is stored inside a file called .travis.yml. This file goes in the root of our project directory, so we’ll create it there.

Opening it up, the first thing we’ll do is define what programming language we’re going to use. We do that with the ‘language’ key, providing it a value of node_js.

Next, we’ll define the version of Node we want to use. While we could leave this out and go with the default that Travis supplies, it’s better to match the specific version we’ve been testing with to ensure we’re using a consistent setup.

The next thing we’ll define is the branch we want to limit our builds to. Similar to how we set this up in Jenkins, we’re going to tell Travis to only build the ‘cicd’ branch. This may not apply to your project, but we need it for ours since the master branch doesn’t contain the tests we want to run.

That’s all we need to do for our configuration file. By default, Travis will run two things for all node builds. npm install and npm test, which is conveniently all we need to run for our build. Therefore, we don’t need to define anything specific for our build steps, as Travis has us covered.
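The whole .travis.yml ends up just a few lines long (the Node version here is an example; match the one you’ve been testing with):

    language: node_js
    node_js:
      - "6.5"
    branches:
      only:
        - cicd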

Now it’s time to trigger our build. Using Git, we’re going to add and commit this new .travis.yml file, then push it up to our GitHub repo on the CICD branch.

Once GitHub receives our update, it will tell Travis to run a build. If we go back to the Travis dashboard for our project, we can see the new build start to run and see the console output from the build steps. Most of the output is Travis initializing the environment for our tests to run in. Only at the very end of the output do we see the information from our tests.

Once all the tests have completed running, WebdriverIO will tell Travis that the tests passed, at which time Travis will give us a nice big green checkmark.

There are many more options available in Travis that we’re not covering here. They’ve got some great documentation, so be sure to check it out if you’re looking for more features.

travis.yml

Transcript & Files: CircleCI

CircleCI is a tool very similar in nature to TravisCI. It’s a hosted service that allows you to run your build code on their servers.

Just like TravisCI, CircleCI allows you to sign up with your GitHub account, which I’ll do now.

Once joined, I need to pick the code repo on GitHub that I want to start building. First I’ll deselect all of the repos, then re-select webdriverio-course-content. With that, I’ll click “follow and build” to get things started.

Before we set up our configuration file, we need to add our Sauce Labs credentials to the environment variables. Similar to TravisCI, CircleCI includes a web interface for this.

To access it, I’ll click the gear icon next to the project name, then select ‘environment variables’. On the page that loads, I’ll add two variables, one for the sauce username and another for the sauce access key.

With that set up, it’s time to configure our build file. In our text editor, we’ll create a new file saved as circle.yml. This is similar to how our .travis.yml file worked.

In our file, we’ll need to define two things. The first is the programming language we’ll be using. In this case, it’s going to be node, version 6.5. If you need a different version, be sure to check out the CircleCI docs for different options.

The other setting we’ll configure is a limit on the branches that will be built. We did the same thing for Travis and Jenkins, which is to say that only the CICD branch should be built. Again, this may not be necessary in your case.

That’s all there is to it. Similar to TravisCI, CircleCI will run npm install and npm test by default.
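Assuming the classic circle.yml format, the file ends up looking like this:

    machine:
      node:
        version: 6.5.0
    general:
      branches:
        only:
          - cicd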

To start our test, we’ll add and commit the code to Git, then push our changes up to our GitHub repository. GitHub will see this and let CircleCI know to trigger a new build.

Jumping back to my dashboard page, you can see the new build running. If we click on the build, we’ll get detailed information on what’s currently happening. When the npm test command runs, we can see our webdriverio script executing and our tests passing as hoped.

For more information on CircleCI and all the functionality they provide, check out their documentation.

circle.yml

Transcript: TeamCity

The final CICD solution we’ll look at in this module is TeamCity. TeamCity is similar in style to Jenkins, in that you host the service and define jobs via their UI site.

Similar to Jenkins, how your TeamCity server will be set up will really depend on your current code environment. I’m running this example locally, having downloaded their source code and properly configured my environment.

After having logged in to my TeamCity server, I’ll start things off by creating a new project.

The project will be loaded from a repository URL, which I’ll let TeamCity know about. Since this is a public repo, I don’t need to specify a GitHub username or password.

Proceeding on, I’ll leave the defaults as is and finish creating my project.

Next we’ll define our build step. Clicking the “configure build steps manually” link, I’ll choose a ‘command line’ build runner type. I’ll name it “npm”, and run it as a custom script.

For the script, I’ll run two things: npm install and npm test, then I’ll save my step.

Now that TeamCity knows what to do, we need to let it know our Sauce Credentials. Switching over to the parameters page, we’ll add our two environment variables. We’ll enter SAUCE_USERNAME, select ‘Environment Variable’ as the kind, then enter our username. We’ll save it, then do the same for our SAUCE_ACCESS_KEY. If you’d like this key to be hidden, you can edit the spec to be a type of password.

We’ve now got all of our settings set up to run our tests. I do need to change one other thing for my specific example. That’s updating the branch I want to test on. Jumping over to the Version Control Settings, I’ll edit my GitHub repo information and set the default branch to CICD.

Once we save it, we’re ready to run our tests.

Back on the project page, I’ll find my ‘build’ job, then click the ‘run’ button on the right-hand side.

Once the build starts running, we can see how it’s going by opening the build dropdown then selecting ‘build log’. In it, you can see all of the actions taking place, specifically the ‘npm’ step we’ve defined which is running our tests. Inside it, we can see the output from our commands, as TeamCity runs npm install and npm test.

That’s all there is to a basic TeamCity set up with WebdriverIO. The TeamCity Team has put together an extensive video library on the tool, so be sure to have a look if you need more details on TeamCity itself.

4. Visual Regression Testing

Transcript & Slides: What is Visual Regression Testing?

Visual Regression Testing is an incredibly interesting topic to me. In fact, research into the idea is what first introduced me to WebdriverIO a few years ago.

Before getting in to the nuts and bolts of it all, I want to give a basic real-world example of what visual regression testing is.

A popular kids activity is a game called “Spot the Difference”. Two nearly identical pictures are shown side by side, and the kid is challenged to spot the differences between the two images.

The fun in the game is thanks to the difficulty brains have in identifying small differences between two very similar images. While this can be entertaining as a game, it can be infuriating as a tester.

Trying to catch minor changes in the visual styles of a website can be an impossible task, especially when you don’t have the images side by side.

Let’s take a look at our Robot website and play a little “spot the difference”. I’ve included five differences between these two screenshots for you to identify. Think you can find them all?

Here, take a few seconds to search.

How’d you do? If you’re like me, you’re lucky to have spotted a single difference.

If we do a comparison between the two screenshots and highlight the places where they’re different, we’ll come up with an image like the one here in the middle. This is called the ‘diff’ image, and it’s helpful for showing exactly where the two images are different.

For example, in the site header, the ‘our products’ link is pushed to the left in the second screenshot, causing the pink to show where the text was and now is.

Harder to spot is the font size change in the “reviews” and “similar products” tabs. A couple pixels’ difference in font size really doesn’t stand out when trying to scan through an entire page.

The slashes in the breadcrumbs were also reversed. Did you catch that? How about the color of the horizontal rule between the product information and the Reviews section?

I don’t know many folks who can catch such minute changes, but luckily we have computers that can help out. Processors can easily compare images pixel by pixel and highlight any differences between them, and that’s the heart of Visual Regression Testing. Getting the computer to play the “spot the difference” game for us.

Here’s how the practice works. First, we run our test scenario to get us to the point we want to check. Then we have Selenium take a screenshot of either the entire page, or a specific portion of interest. Once that screenshot is taken, we check to see if we have a baseline image to compare against.

A baseline image is just a previous screenshot of the same scenario. When you’re running your tests for the first time, you won’t have a baseline created yet. In that case, we’ll store this new screenshot as the baseline and mark the test as passed.

If a baseline already exists, we’ll have the computer run a comparison between the two images. If all is good, then our test passes as normal. If, however, the computer spots a difference, we’ll have it create a diff image for us to review, then mark the test as failing to let us know about the issue.
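To make that flow concrete, here’s a minimal sketch of the logic in Node.js. This isn’t the service’s actual source code; ‘compareImages’ stands in for a hypothetical pixel-by-pixel comparison function.

var fs = require('fs');

// 'compareImages' is a hypothetical pixel-by-pixel comparison function
function checkScreenshot (currentImage, baselinePath, diffPath) {
  if (!fs.existsSync(baselinePath)) {
    // First run: no baseline exists yet, so store this screenshot and pass
    fs.writeFileSync(baselinePath, currentImage);
    return { pass: true };
  }

  var baseline = fs.readFileSync(baselinePath);
  var result = compareImages(baseline, currentImage);

  if (result.misMatchPercentage > 0) {
    // A difference was spotted: save a diff image for review and fail
    fs.writeFileSync(diffPath, result.diffImage);
    return { pass: false };
  }

  return { pass: true };
}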

Taking this approach brings several benefits.

The first is that we add to our tests the ability to catch visual defects. The look and feel of a website is often just as important as the actual functionality of it. A login page may technically be functional, but if an inadvertent color change causes the submit button to be hidden, the user won’t be able to use it.

It would be difficult to write tests for this type of scenario, as we’d have to codify every visual rule for the entire website. Instead of going through that tedious process, we use screenshots to check element spacing, page colors, and how content and text are aligned.

That’s a nice segue for our second benefit, which is that visual regression testing can cover a lot of ground with just a little bit of automation code. The phrase “a picture is worth a thousand words” really describes this idea. A screenshot is worth a thousand lines of assertions. That’s less poetic, but points to the fact that screenshots assert a lot of information all at once.

One other benefit of this approach is the ability to test responsive styles. If you’re not familiar with responsive web design, it’s a way to build websites so that they adjust to the screen size of the device you’re using. With visual regression testing, we can snag screenshots of our site at various screen resolutions, comparing them against each other to ensure the site looks good on all devices.

That’s not to say it’s all perfect though. There are difficulties when comparing screenshots that you should be aware of.

The first is that it’s trickier when there’s a lot of dynamic content on the page. Because the content is always changing, the screenshot will be capturing different text each time. This leads to false failures when the content is different, but the styles are the same.

There are ways to mitigate this. You could run your visual tests on a server with fake content that never changes. Or you could use a design style guide or pattern library to run your tests on.

Another difficulty is the differences browsers have in their rendering capabilities. Even the same browser may render content differently depending on what computer is being used. Text is also rendered differently depending on the user’s settings and the operating system being used.

The best way to avoid this is to use a cloud Selenium service to ensure your tests are always run with the same operating system and settings.

Finally, managing the screenshots is a task that hasn’t quite been polished yet. For the service we’ll be showing off, there is no interface for reviewing whether a diff is a valid failure or not. You simply have to review the images from your file browser.

There are open-source projects that handle this, but we won’t be covering them because they don’t have a direct integration with WebdriverIO, yet.

What we will be covering in this module is the wdio-visual-regression-service. This is an NPM module that allows you to take screenshots of the entire page, just what’s in view, or a single element if you so desire.

It has many options available for configuring snapshots which we’ll cover in the next couple videos.

Before closing out this video, I do want to mention WebdriverCSS, which is a currently defunct WebdriverIO plugin. While it contained a pretty decent feature set, when WebdriverIO evolved from version 2 to 3 to 4, WebdriverCSS compatibility fell out of date and the tool fell out of favor.

While there are ways to use WebdriverCSS with the latest version of WebdriverIO, we won’t be covering them due to the ephemeral status of those solutions.

With that, let’s get started with wdio-visual-regression-service.

vrt.pdf

Transcript & Files: The WebdriverIO Visual Regression Service

The WebdriverIO Visual Regression Service is a tool that adds the ability to capture and compare screenshots in WebdriverIO. While WebdriverIO does have a ‘saveScreenshot’ command, it is limited in functionality, and this service works to extend that command with extra features.

The service provides three new commands for you to use. The first two will take a screenshot of either the entire document or just the current viewport. The third captures an area defined by an element’s bounds. All three of these commands include functionality which allows you to compare these screenshots against ones already existing.

Before we get into the details, let’s first focus on installing and initializing the service. To start, we’ll install the NPM package using the standard npm install command.
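Assuming you want the dependency recorded in your package.json file, the command looks like this:

npm install wdio-visual-regression-service --save-dev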

Once installed, we’ll jump into our WebdriverIO configuration file for the next few steps. In the ‘services’ section, we’ll add “visual-regression”, as that’s the name the service is registered under.

Next, we’ll define our configurations for the service. Similar to the way we created a mochaOpts object and stored our settings inside of it, we’ll create a visualRegression object and do the same.

Inside the object, we’ll define one property, named ‘compare’. This will provide the Visual Regression service with the comparison tool we want to use for our screenshots.

To pass in a tool, we first need to load it into our configuration file. The visual regression module we installed comes with the comparison tool we want to use. To get it set up, we’ll jump to the top of the file, then load the VisualRegressionCompare tool via a require statement. The compare tool is inside of the visual-regression-service package, so we’ll need to use the full reference to get to it.

With that set up, we can now make use of it. Back down in our visualRegression object, we’ll create a new LocalCompare instance from the VisualRegressionCompare utility we just loaded.

We need to pass in a few properties when initializing this new instance. These properties will let the compare tool know where to store the various screenshot files. We’ll be defining the ‘referenceName’, ‘screenshotName’ and ‘diffName’. Each of these will reference a function that will generate the path and filename of the screenshot to be taken.

Since the filename will need to be unique per screenshot, we need to write a function that will dynamically generate that name.

Jumping back up to the top of the file, we’re going to create a ‘getScreenshotName’ function. This function will accept a folder path, which will be used to store the different types of images in different folders. For example, we’ll have one folder for our baselines, one for our latest screenshots, and one for our diffs.

It will also accept a ‘context’ object, which contains information about the specific test being run. This makes it possible to include the test name in our filename, making it easier to search for screenshots later.

Inside our function, we’re going to define quite a few variables. These variables will store several properties of our test. We’ll get the type of screenshot being taken (whether it’s of a single element, the viewport, or the entire document). We’ll also get the title of the test and some information about the browser, including its version, name and dimensions.

Finally, we’re going to combine all these values together to create the file path for our image. To build that path reliably, we’re going to use Node’s path module. We need to require the module before using it, so let’s do that real quick.

With the path module loaded, in our function’s return statement, we’ll use the ‘join’ function to combine three parts of our path.

The first part is the current working directory, which we’ll get using process.cwd. This ensures our screenshots will be saved in the same folder our project is in.

Next, we’ll get the folder path that gets passed in to the function. We’ll define these paths in just a moment.

Finally, we’ll piece together the various parts of the filename using the test and browser values we defined earlier.

Our screenshot function is now ready for use. Jumping back down to the visualRegression object, it’s time to finish entering the information for our compare method.

‘referenceName’ refers to the path we’ll store our reference, or baseline, images in. To pass in our function, we’ll reference it by its name. Normally that’s all we’d need to do to have the function ready to go. But since we need to pass in what type of screenshot it is, we’re going to use a little bit of advanced JavaScript.

All functions in JavaScript come with a ‘bind’ method, which allows you to attach data to a function without invoking that function right away. Instead, bind creates a new function that’s ready to be run with a predefined parameter.

Here’s how it looks. We call bind, then pass in two parameters. The first parameter isn’t all that important. We set it to null, since we’re not using that functionality at this time. For those curious, the value you pass in here will be bound to the this object when the function is invoked. Again, we just need to pass in something to make the bind function happy.

Next, we’ll pass in the important data, which is the folder path we want to use for our screenshots. I’ll pass in ‘screenshots’ then ‘baseline’, ensuring that all of our reference images will be stored in the baseline folder inside of the screenshots folder.

Let’s do this two more times. For screenshotName, we’ll use the same pattern, but instead store the images in the ‘latest’ folder. For the diffName, we’re going to store the images inside the ‘diff’ folder.
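Putting all of those pieces together, the relevant parts of the configuration file might look something like this. The exact filename format is just one option; this sketch follows the values described above:

var path = require('path');
var VisualRegressionCompare = require('wdio-visual-regression-service/compare');

function getScreenshotName (folderPath, context) {
  var type = context.type; // 'element', 'viewport' or 'document'
  var testName = context.test.title;
  var browserName = context.browser.name;
  var browserVersion = context.browser.version;
  var viewport = context.meta.viewport;

  return path.join(
    process.cwd(), // store screenshots inside the project folder
    folderPath,
    testName + '_' + type + '_' + browserName + '_v' + browserVersion +
      '_' + viewport.width + 'x' + viewport.height + '.png'
  );
}

exports.config = {
  // ...the rest of your existing settings...
  services: ['visual-regression'],
  visualRegression: {
    compare: new VisualRegressionCompare.LocalCompare({
      referenceName: getScreenshotName.bind(null, path.join('screenshots', 'baseline')),
      screenshotName: getScreenshotName.bind(null, path.join('screenshots', 'latest')),
      diffName: getScreenshotName.bind(null, path.join('screenshots', 'diff'))
    })
  }
};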

With all that defined, our service is set up and ready to use. There are more settings we could define here, but we’ll save those for the next video. Let’s try out what we have in a test first.

We’re going to add screenshot functionality to our shop-button test. That’s a very simple test that runs pretty quick, so it’ll be a good first try for the visual regression method.

Inside my ‘shop-button’ test file, I’ll add a single command, calling ‘checkElement’. This command is one of the three commands that the visual regression service added. Similar to commands like click, we need to pass in a selector for the element we want to check.

I’ll copy the shop button link selector so that we can take a picture of it.

That’s all we need to do. checkElement will take the screenshot, see there isn’t a baseline, and let the test pass as normal. Let’s run our test.

To speed up our test, we’ll use the ‘spec’ command line argument to only run the shop-button file. While we still use Sauce Labs for our browser needs, the screenshots will be saved on our local computer. This makes accessing them super easy.

With the test run complete, it’s time to review the screenshot. Opening up the Mac finder window, I’ll jump in to the screenshots folder and see a ‘baseline’ and ‘latest’ folder. Inside each will be an identical image of our call-to-action button.

Notice how the file name contains the test name, browser name and screen size dimensions. Pretty handy information.

Now it’s time to see a failure scenario. I’m going to make a change to the CSS, adjusting the default padding of any element with a button class. This change is going to be almost imperceptible, so it’s a great example of when visual regression testing can really come in handy.

With our change made, let’s re-run our tests.

After everything runs, it looks like the test still passes. But we know that it shouldn’t since the style was changed. Let’s look at the screenshots and see if it created a diff.

It turns out it did. And if we look at the diff image it created, we can notice the small padding difference creating some spacing changes. We definitely want our test to fail in this instance, so why didn’t it?

Well, we added the screenshot comparison, but we didn’t pass that information back to mocha via an assertion. Let’s do that now so our test will fail if the screenshot doesn’t match.

First, we’ll store the results from our ‘checkElement’ call. Then I’ll log them out to the console so we can see what they look like.

Finally, I’ll add a new assertion. I already know that the results are returned as an array, and that I want to check the ‘isWithinMisMatchTolerance’ property, so I’ll pass that reference into my ‘expect’ statement. I want this value to be true, so I’ll expect it to be that way. Now if ‘isWithinMisMatchTolerance’ returns false, indicating that there was a mismatch, our test will fail.
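Here’s a sketch of the finished test. The ‘.btn’ selector is a stand-in for whatever selector your shop button link actually uses:

describe('shop button', function () {
  it('should visually match its baseline image', function () {
    browser.url('./');

    // capture and compare a screenshot of just the button element
    var results = browser.checkElement('.btn');
    console.log(results);

    // results is an array with one entry per comparison
    expect(results[0].isWithinMisMatchTolerance).to.be.true;
  });
});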

Let’s run our tests again and see our failure occur. We’ll also look at the output of the ‘results’ to see what other information the response provides us.

There are four values we can use. misMatchPercentage defines how much is different between the two images. You can use this value to determine what’s an acceptable amount of mismatch between screenshots, in case you want to avoid some false failures.

There are also a few boolean values letting us know how the two images relate. You can see that in our instance, none of the features of the image are matching.

We’ll stick with just using ‘isWithinMisMatchTolerance’, but these other results are nice to know about.

Before finishing this video, there are two more things I want to quickly touch on.

The first is how to update your baseline image after a style change you intended. The solution is simple: delete the existing baseline image and run your test again. You can leave the ‘latest’ and ‘diff’ files alone, as the comparison will overwrite those files as needed.

Also, if you’re using a version control system like git, you’ll want to ignore all the images in the diff and latest folders. This will ensure that images aren’t needlessly checked in to your project repository. For git, we’ll edit our ‘.gitignore’ file and add two lines. These lines reference the diff and latest folders, telling git to ignore those folders and the files they contain.
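Assuming the folder structure we set up earlier, the two lines are:

screenshots/latest
screenshots/diff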

With that, we’re ready to get a little more advanced with our screenshots. In the next video, we’ll look at some custom options we can define to add more functionality to our testing.

Downloads13.zip

Transcript & Files: Advanced Visual Regression Service Usage

Links:

Transcript:

In the last video, we looked at how the webdriverio visual regression service can capture screenshots of our page elements to check for unintended changes over time.

While this technique is powerful on its own, you will likely need some more advanced techniques to take full advantage of it.

In this video, we’re going to look at how we can customize our screenshots for the various circumstances you may face.

The first option we’ll check out is misMatchTolerance. If you recall from the previous video, we used the ‘isWithinMisMatchTolerance’ property from our screenshot comparison results to fail our test if need be.

The misMatchTolerance is a number between 0 and 100 that defines the degree of mismatch to consider two images as identical. Increasing the value makes the tests less likely to fail, as it takes a greater difference between the screenshots to exceed the tolerance.

Let’s try it out. Last video, we ended with a test failing due to a 14% mismatch. With the default misMatch tolerance set to .01%, we definitely exceeded the bounds.

You can set this property inside your WebdriverIO configuration file, but since we only want to override it for a specific test, we’ll set it there.

Say we want to allow for this much variance in our images. If we jump in to our test file, we can pass a second parameter to our checkElement command. This is going to be an object, and will contain any options we want to set.

In our case, we want to set the misMatchTolerance property to a value higher than our current mismatch percentage, which is 14. 15 works great for our needs.
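In code, that second parameter might look like this (again with ‘.btn’ as a stand-in selector):

var results = browser.checkElement('.btn', {
  misMatchTolerance: 15
});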

With our value set and our file saved, when we re-run our test, we’ll see that it now passes, even though the two screenshots have a fair amount of difference between them.

Changing this value is useful if you have small differences in font rendering that aren’t really regressions. Normally you wouldn’t want to set the value higher than 2 or 3, but you never know what situation you may be presented with that could use a higher number.

While we set it for a specific test, misMatchTolerance can be defined on a global basis. You could change your tolerance to be higher for all of your check commands, without having to set them on an individual basis.

This is true for all of the options. Any global option defined in our webdriverio configuration file can be overridden for a specific test when necessary.

This is good to keep in mind as we look through the remaining options.

Moving on, the next option we’re going to take a look at is ‘viewports’.

While WebdriverIO has a command to change the viewport size, this shortcut configuration can really save some effort on your part.

If you set this up, every screenshot you take will be captured in the various viewport sizes you set. This can really come in handy for testing responsive websites.

The viewports property can be set individually on each check command, or you can set it globally via your webdriverio configuration file. Let’s do that right now.

Inside our configuration file, in our ‘visualRegression’ configuration object, we’ll add a new property named ‘viewports’. This property is going to have an array as a value, as we’ll have multiple viewports we want to define.

Inside the array, we’ll create a new object for each size. Each object will contain a width and a height. Here, I’m adding two objects, one for a mobile viewport size, and one for a desktop.
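Here’s a sketch of that setting; the exact dimensions are just examples:

visualRegression: {
  compare: new VisualRegressionCompare.LocalCompare({ /* ...as before... */ }),
  viewports: [
    { width: 320, height: 568 },   // a mobile-sized viewport
    { width: 1280, height: 1024 }  // a desktop-sized viewport
  ]
}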

Speaking of mobile viewports, there is an orientation property that is useful when testing on mobile devices. Using it will define what screen orientation you want the images captured in. You can use ‘landscape’, ‘portrait’, or both if you desire.

Since we don’t have a mobile browser set up, we’re going to skip setting this property in our settings.

There are some options you can set on a case by case basis that aren’t available as global configurations.

Specifically, we’re talking about the ‘hide’ and ‘remove’ settings. Both of these settings work to achieve the same goal, which is to remove content from our screenshot.

This is useful when you’re dealing with dynamic content that can change between test runs.

Let’s look at an example. We’re going to create a new basic test which checks a screenshot of the main dropdown menu on the site. We’ll name the file ‘menu.js’, and have it describe the main menu.

The test will validate that the menu opens on click.

First, we’ll load the main page. Then click the link with the text ‘Our Products’. This should all look familiar from previous test exercises.

The next thing we’ll do, which is a little different, is use the ‘checkViewport’ command. We mentioned this in the previous video, but didn’t actually use it. We want to take advantage of it now because of the nature of our menu dropdown panel.

Instead of taking a screenshot of just the dropdown menu, we want to capture the entire viewport. This is so we can ensure that the positioning of the menu is correct in relation to the rest of the page.

We’ll use the same assertion we used in the previous test to check that the screenshot mismatch is within the standard tolerance.
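Put together, the test might look something like this. The ‘Our Products’ link text comes from the site; since we configured multiple viewports, I’ll assert on every entry in the results array:

describe('main menu', function () {
  it('should open on click', function () {
    browser.url('./');
    browser.click('=Our Products');

    // capture everything currently in view, not just the menu element
    var results = browser.checkViewport();

    // one entry per comparison, so assert on each of them
    results.forEach(function (result) {
      expect(result.isWithinMisMatchTolerance).to.be.true;
    });
  });
});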

The first time we run this test, it’s going to create a baseline and pass as normal. But if we look at the baselines created, we’ll notice something.

Behind most of the menu, the carousel content is showing. This should be fine, but because the carousel changes, if our test doesn’t capture the image in the same timeframe as the first, it will inadvertently fail.

I ran the test again to show what I mean. In the baseline image, notice the image used in the carousel. Now if we look at the ‘latest’ image, we’ll notice that image has changed. Looking at the diff image, we can definitely tell it’s not happy with the difference.

To avoid this, we’re going to use the ‘hide’ option.

Hide will set the visibility on any element we give it to ‘hidden’, effectively hiding it from our screenshot. This means we’ll have a blank white space where that image used to be.
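Since ‘hide’ is set per command, it goes in the options object as an array of selectors. The ‘.carousel’ selector here is a stand-in for whatever the carousel element actually uses:

var results = browser.checkViewport({
  hide: ['.carousel'] // stand-in selector for the rotating carousel
});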

Before running our test, we’re going to delete our old baseline, as it still has the background image in it, so it would still fail the comparison.

With the old baseline deleted and our test run completed, you can see the new baseline image without the background carousel. Now anytime we take a new screenshot, the timing of it won’t impact the results.

Just so you know, the ‘remove’ option works the same way, except it sets the display to none. Be careful with this one as the layout of your page will change with the element effectively removed from it.

If you’re interested in learning more about visual regression testing and the other options out there, check out a website Micah Godbolt and I put together, with contributions from the community. It contains a list of articles, videos and tools. Many of these don’t use WebdriverIO, but they still provide value on their own.

That concludes our look into visual regression testing. I personally look forward to seeing what this field of testing provides in the future.

Downloads14.zip

5. Test Reporters

Files & Transcript: Junit Reporter

We’ve spent a lot of time in this course talking about how to write tests. Now we’re going to spend a few videos figuring out how we can better understand the results of our tests.

Test reporters come in a wide variety of features and functions. Some reporters, like ‘dot’, ‘spec’ and ‘concise’, are best for human consumption. They’re easy to read and are used to let a developer know what went wrong.

Others, like ‘junit’, ‘json’ and ‘tap’, are designed to be consumed by computers. The output is much harder to read through, but structured in a way that other programs can process the data.

Some reporters are made for specific tools, like ‘allure’ and ‘teamcity’.

If none of those are what you’re looking for, you can even write your own reporter. WebdriverIO’s reporter support really is fantastic.

For today though, we’re going to look at setting up and using the junit reporter.

This report format was originally created for the Junit Java test framework, but has been ported to many other programming languages and test tools.

Because of this, many CI servers support the format and have built-in functionality to consume the reports.

We’re going to look at how you can use the Junit reporter to take advantage of the built-in Jenkins Junit integration.

First, we need to install the junit webdriverio reporter. This comes as a separate webdriverio npm package, similar to how services are separate modules.

Installation is the same as ever: run the npm install command and save the dependency information to our package.json file.

Once installed, we need to pass in a small configuration to tell webdriverio where to put the XML files created by the reporter.

This configuration lives in the ‘reporterOptions’ property and goes inside a ‘junit’ object. We’re going to set the ‘outputDir’ property to go to our base folder.
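In the configuration file, that might look like this, with ‘./’ pointing to our base folder:

reporterOptions: {
  junit: {
    outputDir: './' // write the XML files to the base folder
  }
}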

There are a few more options available if you find a need for them. Check out the reporter GitHub page for more information.

One thing we skipped, even though the documentation calls for it, was adding ‘junit’ to the ‘reporters’ setting. Because we only want to run the junit reporter during our CI builds, we don’t want it in our global settings.

Instead, we’re going to edit the configuration on our jenkins server for our project. Where we defined the ‘npm test’ command, we’re going to pass in a second argument, named ‘reporters’. We’ll set the value to ‘junit’. This will override the default ‘dot’ reporter, and make sure that WebdriverIO creates our junit report.
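So the full command in our Jenkins build step becomes (with the double dashes telling NPM to pass the argument along to WebdriverIO):

npm test -- --reporters junit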

The other update we need to make to our Jenkins server is to add a post-build action. After WebdriverIO creates the Junit xml files, Jenkins will need to process those results.

You could install an external xunit plugin to track your reports, but the one that comes with the default Jenkins installation is sufficient for now.

All we need to do is tell Jenkins to ‘Publish JUnit test result report’ and pass in the path of the XML files we want it to process. Since we’ve set up WebdriverIO to publish these files to the base directory, we’ll direct Jenkins to look there with the *.xml path.

Saving our changes, the next time we run a test we’ll see a “test result” link. Here I’ve run a new build, which passed, and now has the “test result” available.

Clicking the link, we’ll see our tests categorized by the browsers they were run in. Since we only tested in Chrome, there’s only one browser to choose from.

Clicking on ‘chrome’, we’ll start to drill down into our test results. The tests are broken up by ‘describe’ block. Let’s look at our ‘accordion’ tests.

We can see each individual test that was run, and can even click on each to view the actual commands that were executed. This is all great information for when you need to debug failing or slow tests.

That’s all it takes to set up the Junit reporter in Jenkins. It’s only a few minutes of work, but the results are quite valuable in the long run. Not only does it help with debugging, it also makes your test results more visible to folks less familiar with WebdriverIO.

Next, we’re going to take a look at the Allure reporter, which is similar to junit, but provides many more features for use.

wdioconf.js

Links, Files & Transcript: Allure Reporter

Links:

Transcript:

The Allure Framework is a flexible, lightweight, multi-language test report tool used to display and organize test results. It’s very similar to junit, except it raises the bar as far as UI and functionality goes.

Allure is free to download and use, and comes with many CI system integrations. Both Jenkins and TeamCity provide plugin functionality for hooking in to Allure.

For this video, we’re going to use the Allure command line runner to generate and serve our Allure report website. We’ll walk through that in just a bit.

WebdriverIO has a good short guide on the allure reporter that’s well worth a look. Before we start on it, I want to mention that Allure recently released a second version, which will look different from the screenshot shown on this page. Either version should work for the reporter. We’ll be using the latest version.

Installation is the same as with junit. Run the NPM install command, and be sure to use the ‘save-dev’ flag to store the dependency information in your package.json file.

Next, we’ll open our wdio configuration file to our reporters section. We’re going to add ‘allure’ to our list of reporters, as we’ll be running these tests locally.

We’ll also add a new ‘allure’ property to our reporterOptions, and set the output directory to ‘allure-results’. Allure generates many XML, text and image files, so we’d like to keep them contained within a single folder.
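Those two changes might look like this in the configuration file:

reporters: ['allure'], // along with any other reporters you use
reporterOptions: {
  allure: {
    outputDir: 'allure-results'
  }
}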

With the configuration file saved, we can now run our test to generate our test results.

After the test runs, you can see a new ‘allure-results’ directory created, with a bunch of files in it.

To turn those files into a more user-friendly report, we need to send them through the allure generator.

The allure generator is a separate piece of software that comes in many forms. I’ve installed it via the Mac OS X instructions, using the Homebrew package manager. What you’ll use depends on what operating system you’ll be running allure on. Check out their documentation for specific needs.

With the command line tool installed, it’s time to use it. First, we’ll generate our HTML pages via the ‘allure generate’ command, passing in the folder path to our raw results.

Next, we can open up this report by using allure open, which will start a local server and open it in my default browser.
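Assuming the command line tool is on your path, the two commands look like this (allure generate writes the HTML to an ‘allure-report’ folder by default, which allure open then serves):

allure generate allure-results
allure open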

On the report overview page, we can see what percentage of our tests have passed, and other high level information about our tests.

Specific test case information can be found on the ‘xunit’ page. This is very similar to our junit report we built out in jenkins last video.

There’s much more to the Allure reporter than I’ve shown here, so if you’re interested in learning more, check out their documentation for more details.

wdioconf.js

Transcript: TeamCity Reporter

The TeamCity reporter integration is surprisingly simple. Just install the NPM module, add ‘teamcity’ to your reporters, and run your TeamCity build again.

Once set up, you’ll gain access to a ‘tests’ tab that provides you information on your tests, including individual test execution time. It also allows you to ignore or investigate troublesome tests, which can be very helpful when one acts up.

Starting off, we’ll add the wdio-teamcity-reporter to our dev dependencies by running the npm install command.

Next, we’ll jump in to our TeamCity server administration page. If you haven’t watched the TeamCity set up video in Module 9, be sure to check that out.

In our build configuration, we’re going to change our build step.

We currently have it set up to run ‘npm test’. We’re going to add a reporters parameter to tell WebdriverIO to use the TeamCity reporter. The reporter will take care of letting the TeamCity server know all about our tests.
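With that parameter added, the build step script becomes:

npm install
npm test -- --reporters teamcity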

To test it out, we’ll save our changes then run the test again.

Once the build is completed, a new ‘tests’ tab will be added to the overall report.

Clicking this tab, we can see all of the tests we’ve run with their status and duration.

We can also drill down in to specific tests to see additional options, including a test history page which shows historical data on that test’s execution time and status.

That’s really all there is to this reporter. For those of you using TeamCity, I highly recommend using it, as it adds many features that can come in handy as you work on your test suite.

6. The WebdriverIO Starter Kit and Login Tests

Transcript & Links: The WebdriverIO Starter Kit

Links:

Transcript:

Welcome to the final module in this professional add-on. We’re going to focus on piecing together all the bits of information we’ve learned so far.

To start things off, I’m going to walk you through a starter kit I put together that features many of the technologies and techniques we’ve looked at.

This starter kit is freely available to use and modify as you see fit. It’s hosted on GitHub and includes an MIT license, which gives you free rein to modify the code.

To start using the code, you’ll want to either download, clone or fork the repo. I’ll walk you through cloning the repo.

First, copy the ‘clone’ link via the GitHub interface. Then, in your command line, run the git clone command, pasting in the link you just copied. This will download the repo to your computer, giving you local access to the code.

The next step is to cd into the new folder, then run npm install. This will install all the needed dependencies of the tests, including WebdriverIO and several services we want to take advantage of.

Once installed, you can run npm test to run the test suite. By default the tests will fail, as there are some details we need to update to make everything pass.
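Putting those steps together, a terminal session looks roughly like this; the placeholders stand in for the clone URL you copied and the folder it creates:

git clone <repository-url>
cd <repository-folder>
npm install
npm test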

Before getting into that, I’d like to highlight some of the features of this starter kit.

These are just opinions, and in no way imply that you should be setting up your test framework this way. Again, feel free to modify and adapt this starter kit as you see fit.

The first ‘opinion’ of the kit is that it will use Mocha and Chai for the test and assertion framework. I chose these two tools due to their popularity, and the fact I’m most familiar with them.

I’ve set up chai’s ‘expect’ interface to be available as a global variable in your tests, so you don’t have to manually include it in every test file.

I’ve also included the Chai WebdriverIO plugin, which adds WebdriverIO specific assertions to our tests. This is great for adding extra semantics to your assertions, allowing clearer and more concise tests.

No additional set up is needed to use this plugin in the starter kit, so just have a quick read through the examples and use it freely.

The next opinion of the starter kit is to use Sauce Labs for your Selenium needs. Again, due to the popularity and functionality of the service, I’ve included it.

It does require some additional set up when getting up and running. You’ll need to define your Sauce username and access key in your environment variables, similar to how we set things up in Module 7.

Since we’re using Sauce Labs, we can take advantage of the consistent browser environment by including Visual Regression Testing in our tests. I’ve set up the WebdriverIO Visual Regression Service just for this.

Screenshots are saved to the ‘screenshots’ folder, and a default configuration takes snaps at mobile and tablet screen resolutions.

In module 8, we talked about advanced configurations, allowing you to have both a “production” configuration file and a “local” one. Following that convention, I’ve set up two configuration files in the starter kit. One is for production-level testing, and is used as the default configuration.

In it, you’ll need to update your baseUrl, and possibly update the browsers you want to test on depending on your needs.

In the ‘local’ version of the file, we focus on development-level testing. This type of testing is great for quicker feedback on local changes. It forgoes Sauce Labs usage in favor of Selenium Standalone, and also skips all Visual Regression Testing. This is to avoid false failures in our screenshot comparisons due to the different computers the tests would be run on.

You’ll need to update the baseUrl in this configuration file to match the correct path to your local server.

The local configuration also includes a notification system, making it easier to review tests while developing. It will trigger a local notification when test runs start and end, along with notifying you if there is a test failure.

More information on this functionality is available in a blog post I wrote.

One thing we haven’t covered in the course, but I wanted to include in the kit, is ESLint. ESLint is a tool used to check for code syntax errors, as well as promote a consistent code formatting style. I use it heavily in my day-to-day work and it has prevented many bugs for me.

I’ve set up ESLint with the Semi-standard style, which is a popular syntax style in the community. I’ve also added Mocha and WebdriverIO specific information to the tool, ensuring that you won’t receive lint errors for some of the things WebdriverIO and Mocha do.

For CICD integration, I’ve added a TravisCI file. As covered in module 9, TravisCI is a useful continuous integration tool and fairly easy to set up. If you want to use it, you’ll need to fork this starter kit repo, then update your TravisCI settings to run builds for your copy of the code. Everything else is set up for you.

You will want to update the badge information in your README file to point to your GitHub repo. Or you can remove the badge if it’s not useful for you.

A few last things. I included a small bit of code that allows you to pass in a DEBUG flag when running your tests. This flag will effectively remove the mocha timeout setting, allowing you plenty of time to debug your live running tests.

Usage is shown in the README file.

If you’re using Git for code versioning, there are some WebdriverIO specific files you don’t want saved. I’ve added a ‘gitignore’ file that ignores all common Node files and folders, along with the “errorShots” and visual regression related folders we don’t want versioned. This helps keep the size of your code repo smaller, which is helpful in the long run.

The final items are a couple of tests and page objects I’ve included to help you get started with testing. We’re going to cover how I wrote these tests next, so stay tuned.

testlogin.js