1. Demo Site Installation
IMPORTANT: WebdriverIO Major Version Update
Since creating this course, WebdriverIO has undergone a major upgrade (from version 4 to 5), changing how the library works.
Because of that, some of the videos are going to show material that is out of date. I’m currently reworking the content of the course to match the new version, but this is going to be a fair amount of work.
I have the beginnings of the update available as an ebook right now:
You can download a free preview of it, and I’ll be providing access to the full book for free for all students once it’s ready.
I’ll also be adding a couple of videos to the start of the course explaining the differences between the two versions and how you can use both.
Links & Transcript: “Robot Parts Emporium” Site Setup
- Github Repo: https://github.com/klamping/webdriverio-course-con…
- Example Site: http://www.kevinlamping.com/webdriverio-course-con…
In this course, we’ll be testing a fictitious site called Robot Parts Emporium. This site is intended to mimic a full-featured e-commerce shop, including dynamic elements like dropdowns and modal windows. Let’s take a look at how to install the site on your computer.
The code for the pages is freely available on GitHub. If you’re not familiar with GitHub, check out the great guides they provide at guides.github.com. There isn’t much Git or GitHub knowledge required to complete the course, so don’t worry too much about the details.
To get a copy of the site code, either download the ZIP file directly from Github, or clone the repo locally using Git. I’ll walk you through the latter choice.
We first grab the Git URL for the public repo. Then, from the command line, we’ll go to where we want the site folder to be added. Here I’m in my home directory. Using 'git clone', we’ll paste in the address for the GitHub repo and let Git do its work.
Our site uses a basic Node server to run. You don’t need to know much Node, but you do need to have Node.js installed locally. Any of the recent versions should work.
Once you have Node.js available, run 'npm install' to grab the site dependencies. Then run 'npm start', and your server will be up and running.
Grab the IP address and paste it into your browser, and you’ll see you now have your own local server available for testing. That’s all you need to get going.
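In short, the setup steps are the following (substitute the full repo URL from the links above; the folder name is a placeholder):

```shell
# Get the site code (use the full GitHub URL from the links above)
git clone <repo-url>
cd <repo-folder>

# Install dependencies, then start the local server
npm install
npm start
```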
2. Automated Testing in WebdriverIO Standalone Mode
Welcome to Module 1 of “Automated Testing with WebdriverIO”. In these next few videos, we’re going to install WebdriverIO and Selenium, enabling us to write our first set of tests.
By the end of the module, we’ll have a test that runs the following actions:
- It goes to our site
- It outputs the title of the page
- It clicks the “shop” button
- It again outputs the page title and URL, so we can make sure we’re on the product page
Before writing our tests, we need to install the tools used to run them.
We use NPM to install our dependencies. NPM is a package manager for NodeJS, allowing easier installation of utilities like WebdriverIO.
The two tools we need for our tests to run are WebdriverIO and the selenium-standalone module. You’re probably familiar with WebdriverIO, but for those unfamiliar with selenium-standalone, it’s a handy utility for getting Selenium to run locally on our computer.
To install both packages, open the command line to the project folder you set up in the “Robot Parts Emporium Site Setup” video. Then, run 'npm install --save-dev webdriverio@4 selenium-standalone'.
This will install both tools for us. The --save-dev flag tells NPM to store the dependency information in our package.json file, in case we need to re-install the tools later.
WebdriverIO is ready to go, but we still need to do a little bit of work setting up our local Selenium server. This is where the selenium-standalone tool comes in.
When we ran the NPM installation, selenium-standalone added a command inside the node_modules/.bin folder. We can run it by calling 'node_modules/.bin/selenium-standalone' from the command line.
You can see that it wants us to state whether we should “install” or “start” Selenium. “install” will download and install Selenium for us, and “start” will start the Selenium server.
Since this is our first time setting things up, we need to download the Selenium server. We do this by running 'node_modules/.bin/selenium-standalone install' from the command line. We won’t need to run this again, as it saves the files for later use.
With the Selenium server downloaded, we’ll start it up by running “node_modules/.bin/selenium-standalone start”. After a few seconds, Selenium will be up and running.
You can confirm you have it working locally by going to the location provided at the end of the output in a browser. You won’t be using this page though, so you don’t need to keep it around.
Now that WebdriverIO is downloaded and Selenium is running, we can start writing our tests. We’ll take a look at that in the next video.
Transcript & Code Samples: Trying out the WebdriverIO example
There are two ways to run tests using WebdriverIO.
The first, which we’ll look at in this video, is to write a simple NodeJS script and run it from the command line.
WebdriverIO also provides a test runner tool as a second option, but we’re going to save that for the next module.
We want to write a custom test for our site, but it would be helpful to look at the official example first.
On the WebdriverIO site, they include a simple script which executes a test on the duckduckgo search engine. Let’s try that script out and see how it works.
We’ll save the script in a file called example.js, but the filename doesn’t matter all that much.
Let’s take a line-by-line look at the test they provide. The first thing they do is require the WebdriverIO object. This is one of the tools provided to us when we installed WebdriverIO via NPM.
The next step of their test is to configure some options. In their example, they tell Selenium to use the Chrome browser. We can change that to any other browser we have configured, such as Firefox or Internet Explorer. We’ll leave it at Chrome though.
With WebdriverIO and our options set up, we create a new 'remote' object. This object is what we’ll use to run all of our test commands. You can see the options get passed in to the instantiation of that object.
For more details on configurations you can provide to this object, check out the configuration guide on the official site. It goes into good detail on everything you can do.
With the newly minted object ready, it’s time for the test steps. The first step when running WebdriverIO in standalone mode is to initialize the browser. That’s done via the 'init' command. The next function called is 'url', which loads the website at the URL passed in. While the page loads in the browser, WebdriverIO will automatically wait before executing the next action.
After the search engine page has loaded, the script will set a value in the search box using the 'setValue' command. The 'setValue' command only works for keyboard-interactable elements, like form inputs, textareas or the body of the page. You wouldn’t be able to set the value on an H1; it would throw an error.
There are a variety of ways we can tell WebdriverIO what element we want. These are called “selectors”. The official docs go through each type of selector we can use, so be sure to check them out for more details.
In the example test script, it uses an ID-based CSS selector to find the input field we want. That’s passed in as the first parameter of the 'setValue' command. The second parameter is the value we want to set, which will be “WebdriverIO”. 'setValue' will tell Selenium to send the keystrokes W-e-b-d-r-i-v-e-r-I-O to the element, just as if we had typed them in ourselves.
With the value in the search field set the way we want it, it’s time to run our search. To do this, the script uses the 'click' command on the search button, again using an ID-based CSS selector to specify which element we want to click.
This performs a mouse click action on that element, which in turn runs the search. This submission reloads the page, which WebdriverIO inherently waits for.
Once the page is reloaded, the title of the page is retrieved using the 'getTitle' command, and the result is passed along via a 'then' function. For those of you unfamiliar with Promises, here’s a basic explanation: 'then' is a generic function which allows us to take an action after the previous action has completed running. It can also pass along data from that previous action if provided.
In this case, it gets the “title” value, which was passed along from the 'getTitle' command. It logs that value to our console, which we’ll see in a moment.
Using 'then' ensures that the title isn’t logged out before the 'getTitle' command has finished running. We’ll look at an alternative to this style of writing tests in the WDIO test runner module.
The last step of our test is to run the 'end' command, which closes the browser and ends the session.
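Putting the walkthrough together, the whole example script looks something like this. This is a sketch of the WebdriverIO v4 standalone example; the DuckDuckGo element IDs shown are assumptions and may have changed since the docs were written.

```javascript
// example.js — a sketch of the standalone-mode script discussed above.
// Assumes webdriverio@4 is installed and Selenium is running locally.
var webdriverio = require('webdriverio');
var options = {
    desiredCapabilities: {
        browserName: 'chrome' // swap for 'firefox', etc., if configured
    }
};
var client = webdriverio.remote(options);

client
    .init()                                  // start the browser session
    .url('https://duckduckgo.com/')          // load the search engine
    .setValue('#search_form_input_homepage', 'WebdriverIO') // type the search term
    .click('#search_button_homepage')        // submit the search
    .getTitle()
    .then(function (title) {
        console.log('Title is: ' + title);   // log the results page title
    })
    .end();                                  // close the browser, end the session
```

You’d run this from the command line with 'node example.js'.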
Let’s run the test to see the actions execute. Make sure you still have your Selenium server up from the previous video, otherwise the test won’t run. Remember to use the 'node_modules/.bin/selenium-standalone start' command for this. Then, open up a new command line window and run 'node example.js'.
Running the tests, we see a browser window pop up, DuckDuckGo get loaded, the input field filled, and search results returned. When we go back to the command line, we see the page title has been logged. Our test ran successfully and is complete.
Now that we’ve covered the example WebdriverIO script, let’s write our own. We’ll jump into that in our next video.
Transcript & Code Samples: Writing our First Test
With the example WebdriverIO script under our belts, let’s dive into creating something of our own.
To recap from the first video in our lesson, our test will load the URL, log the page title, click the shop button, then log the title and URL of the new page.
We can copy all of the code from our example test into a new file called “shop-button.js”. Our test is fairly similar to the example we just went through, so it’s a good starting point for our needs.
We’ll leave the top as is, as those defaults work well for us. The first item we’ll change is the url. Instead of going to DuckDuckGo, let’s go to the URL of our local server.
Next, we can get rid of 'setValue', as we won’t be filling out a form. Instead, we’ll add a 'getTitle' call to log the initial page title.
We then use the 'then' function to accept the title back from the 'getTitle' command. With it, we’ll log it out using 'console.log'.
Once logged, we click the “shop” button. There are many ways to select this button for clicking. We’re going to use a descendant class selector, but you could replace this with an XPath or text-based selector if you prefer. We’ll get into advanced selectors in a later lesson.
After the “shop” button is clicked, the page should reload. We’ll verify this by logging the title and url. Both commands are followed by “then” functions, which accept and log the results.
Finally, we’ll leave the “end” command as is, as we want to close our browser when we’re done.
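Our finished standalone test might look something like the following sketch. The '.callout .btn' selector and the local address are assumptions; adjust them to match your copy of the site.

```javascript
// shop-button.js — standalone-mode sketch of our first test.
var webdriverio = require('webdriverio');
var options = {
    desiredCapabilities: {
        browserName: 'chrome'
    }
};
var client = webdriverio.remote(options);

client
    .init()
    .url('http://localhost:3000/')           // your server's IP/port may differ
    .getTitle()
    .then(function (title) {
        console.log('Home page title: ' + title);
    })
    .click('.callout .btn')                  // the "shop" button (assumed selector)
    .getTitle()
    .then(function (title) {
        console.log('Products page title: ' + title);
    })
    .getUrl()
    .then(function (url) {
        console.log('Products page URL: ' + url);
    })
    .end();
```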
Let’s save our file and run the test. Because we’re running this all locally, the test runs very fast. You may not even see the browser window pop up on your screen. This is why the console output is so useful.
As you see here, it’s logged out the expected page titles and URL. This tells us that the shop button is working correctly and that we have our first useful automated test.
That concludes this first module. We’ve installed WebdriverIO, set up a local Selenium Server, and written our own test.
I mentioned in the first video that there are two ways to run WebdriverIO. We’ve been looking at using standalone mode, which is decent for basic examples.
But if you really want to take advantage of what WebdriverIO offers, you have to check out the test runner. It allows us to store all of our configuration information in a single file, comes with a bunch of integrations and services, and features 'sync' mode, which gets rid of those repetitive 'then' functions.
We’ll cover all of that next.
3. The WDIO Test Runner
Transcript: Creating our config file using the Test Runner CLI
In module 1, we created a simple WebdriverIO script. It went to our local URL, got the title of the home page, clicked the callout button, then logged out the title and URL of the product page.
We ran all of this locally through the 'node' command, using WebdriverIO’s “standalone” mode.
Now we’re going to look at using the WebdriverIO test runner, which is a utility that comes with the WebdriverIO NPM package. It comes with many features, including an easier way to manage our configurations.
Instead of having to redefine our browser and Selenium settings in every test file we write, we’re going to store those settings in a single configuration file. The test runner will use that file to apply settings for all our tests.
So how do we make this configuration file? Well, we could create it manually, but the file is pretty large. To help with this, the test runner provides a utility which asks a series of questions to help customize our settings to our specific needs.
To run the configuration setup, we need to call the 'wdio' test runner utility. Similar to 'selenium-standalone', WebdriverIO added a command line utility to our node_modules/.bin folder. And just like before, we can call it by referencing the path, plus the name of the utility, which is 'wdio'.
Since we haven’t set up a configuration file yet, WebdriverIO will recognize this and run the setup utility. We’ll be asked a series of questions about how we want to set everything up, and then it will generate the configuration file for us.
Let’s try it out.
The first question it asks is where our Selenium server is. We’ll look at some third-party server options later, but for now, let’s stick with our local machine.
The next question is which framework we want to use. This is talking about the test framework we want to wrap our tests in. We haven’t talked about this yet, but we’ll be using Mocha as our framework.
Yes, we would like it to install the framework adapter for us. This will run an npm install script, adding the wdio-mocha-framework NPM module to our local installation. It will also save the dependency information to our package.json file.
We’ll need a folder to store all of our tests. By default, they’re located in the test/specs folder. However, I prefer just keeping them in the test folder, so I’ll update this default value to be './test/**/*.js'.
Just so you know, this is called a glob pattern, which is a convention for defining where files are located. The star-star section says to look in all the sub-folders for files, so if we later organize our tests by feature, say we add a 'checkout' folder, WebdriverIO will know to look in those subfolders as well. '*.js' matches any file with a .js extension.
Moving on to reporters. By default, WebdriverIO uses the dot reporter, even if you don’t select it. We’ll leave this option as is, but you could select the spec reporter if you’re curious.
Next is services. Services are used to add extra functionality to our tests. Some enable integrations with third-party tools, like cloud Selenium services. Others are used for local features, like Firefox or PhantomJS setup. We’re not going to set up those services at this time, but we’ll come back to them in the Professional add-on video series.
There’s one service we do want to use though. Last module, we set up the Selenium-Standalone NPM module. It made it easier to get a local Selenium server running, but did require an extra step to start the server whenever needed.
WebdriverIO has a Selenium-Standalone service that’s pretty awesome. Instead of us starting our server manually, it will automatically do so when you run your tests. This not only saves us a little bit of work, but also makes it easier for others to install and run your test suite.
With that selected, we’ll tell WebdriverIO to install the service for us, then move on to the next question.
We’re going to set logging to 'silent'. We could use 'verbose', which outputs all four types of logs, or we can be specific to a certain log type. These can be helpful if you’re running into any issues, but they do create a lot of output noise. That’s why we’ll leave it silent for now.
This next option defines where WebdriverIO should store screenshots if there is an error with our tests. This can help with seeing the exact visual state of the page when debugging issues in your tests. Let’s keep this as the default “errorShots” directory.
The 'baseUrl' value is the address where our website lives. We’ll use the IP address and port of our Node server.
Now it’s running the installation for our mocha package and we’ll give that a second to complete.
It looks like it’s successfully created the file with all of our defined settings in it, along with a few more. We’ll take a look at that file in the next lesson.
Transcript & Files: Reviewing the wdio.conf.js file
In the previous lesson, we used the WebdriverIO test runner to help us create a configuration file to hold all of our settings. Let’s go ahead and take a look at that file.
The first thing we’ll check out is the 'exports.config' line. If you’re familiar with NodeJS, you’ll recognize the use of the 'exports' global variable.
If you’re unfamiliar with it, just know that NodeJS has a predefined 'exports' object available for us to attach properties to, and WebdriverIO looks at the 'config' property on this 'exports' object for all of our settings.
We can add a 'console.log' command to show that we’re in fact using this file.
There are some added benefits to this, namely the ability to customize your settings based on environmental variables, custom command line arguments or other logic. This is a pretty in-depth topic though, so we’ll cover that in a later video.
Let’s check out all the settings defined in our file. There are actually several more options here than just what we answered during our config step and it’s good to know what those are.
The first setting is 'specs', which is the path to our tests. Notice this is an array; we can add multiple patterns to search for, or specific files we’d like to run. Likewise, there’s an 'exclude' option, which allows us to exclude files based on a pattern or specific path.
Next up is our capabilities setting. Again, the value is an array, which means that we can run our tests in multiple browsers each time we use the test runner.
The configuration here matches what we provided in the 'desiredCapabilities' of our shop-button.js file. The default browser the test runner wants to use is Firefox, so we should switch that to Chrome for now, just to be consistent with our previous test. We’ll cover this option in more detail in later lessons.
Moving on, you’ll notice a new setting called 'sync'. By setting this to true, we can use a feature new to WebdriverIO 4.0 that removes the need for Promise-style code. Because we haven’t updated our test file yet, we’ll switch this to 'false', so that it will still run successfully. Like I’ve said many times already, we’ll cover this topic in a separate video.
We’ve already covered the 'logLevel' setting, so let’s skip on by that. The 'coloredLogs' setting color-codes the Selenium log output when enabled and is nice to have if your terminal supports it. It’s useful to turn this setting off if ANSI escape codes aren’t supported, such as in the console output of a tool like Jenkins. We’re going to leave this as-is.
We’ve also already talked about 'baseUrl', so we’ll jump past that as well.
'waitforTimeout' defines how long 'waitFor' commands should wait before erroring out. We haven’t covered these commands yet, so let’s leave this at the default value. The connection retry options, such as 'connectionRetryCount', are useful to adjust if you’re having trouble connecting to your Selenium Grid. You should be good to leave these alone though.
Commented out are options for a few WebdriverIO Plugins. We’ll cover these plugins in later lessons and will leave these options commented out until then.
Next is the 'framework' setting, with the value set to 'mocha', which is what we provided. You’ll also see the reporters option commented out, as we didn’t ask for a reporter.
The 'mochaOpts' option is useful for passing configurations for Mocha to use. Here it defines using the 'bdd' UI type, which says we want to write our tests in the Behavior-Driven Development style. We’ll talk about this style in the next video.
Finally, there are several hooks we can use to add functionality in the middle of the test process. This opens up a fair amount of potential to really add-on to the default WebdriverIO test runner functionality, and we’ll look at an example of that in the Assertions lesson.
That’s the end of our settings file, so now it’s time to try it out.
First, we need to move our shop-button.js file into the test directory. Then, we’ll go back to the command line and run the 'wdio' command again. When we run this, it will find our configuration file and use the settings we’ve defined.
Let’s try it out and see what happens.
It looks like it tried to run our test, but didn’t complete it, as evidenced by the blank browser window left open and missing title logs. This is due to several factors, which we’ll fix in the next video.
Transcript & Files: Updating our test file
In the last video, we left off with our test runner failing to run any of our commands. I mentioned this was due to several factors, all of which have to do with how we originally wrote out our file.
Let’s open up the shop-button.js test file and update it to work with the test runner settings.
The first order of business is to remove the initialization code. The test runner handles these steps for us.
The test runner also handles ending the browser session, so we can remove the 'end' command as well.
Because we have our 'baseUrl' defined in our settings, we can remove the majority of our URL. WebdriverIO will prepend the 'baseUrl' value to the path we pass in to 'browser.url'. If we define './' as our value, it will go to whatever is defined in the 'baseUrl'. Just so you know, if we were to define it with just a slash, it would go to the root of the domain.
There are only two other changes we need to make to get our file working. The first is really simple, which is to change 'client' to 'browser'. Similar to how we referenced the WebdriverIO session as a 'client' variable, the test runner creates a 'browser' variable and passes it down as a global object. Let’s switch over to using that.
The second change is to add the 'describe' and 'it' functions, which organize the tests we want to run.
'describe' is used to group sets of tests by the feature they are testing. 'it' defines a specific test to run. There are usually multiple 'it' functions nested inside each 'describe', and sometimes 'describe' functions are nested inside each other for a better-defined test hierarchy.
In our file, let’s add a 'describe' section first. We call 'describe' like a function, passing in two parameters. The first is a name for the feature we are testing. Let’s say we’re testing the “Shop CTA Button”. This name will be used in our test reporting, which you’ll see in a moment.
The second half of our describe call is a function which contains all of the code we want to associate with this feature.
Inside of this function we’ll be adding the 'it' call. Similar to 'describe', 'it' is a function call that again takes two parameters, the first being the name of the specific test, and the second a function which contains our actual test code.
With our structure set up, let’s move our test code inside. It’s a pretty straight copy-and-paste action, and the only change we need to make is to 'return' the browser object, as we’re still testing via the asynchronous interface.
Mocha handles asynchronous testing by accepting a Promise returned from our 'it' function callback. Remember, WebdriverIO uses Promises to handle the asynchronous nature of browser testing, and every command on the 'browser' object returns a Promise. We simply pass the promise back to Mocha, which will wait for it to resolve before completing the test.
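After these changes, the updated test file might look like this sketch (still promise style, with sync set to false; the '.callout .btn' selector remains an assumption):

```javascript
// shop-button.js — adapted for the wdio test runner.
// `browser` is a global provided by the runner; no init/end needed.
describe('Shop CTA Button', function () {
    it('should take us to the products page', function () {
        return browser                       // return the promise so Mocha waits
            .url('./')                       // resolved against baseUrl
            .getTitle()
            .then(function (title) {
                console.log('Home page title: ' + title);
            })
            .click('.callout .btn')
            .getTitle()
            .then(function (title) {
                console.log('Products page title: ' + title);
            })
            .getUrl()
            .then(function (url) {
                console.log('Products page URL: ' + url);
            });
    });
});
```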
With all of these updates made, we’re now ready to run our test. Let’s go back to the command line and again run the 'wdio' command. This time you’ll see it actually ran our test, logging the console output, and letting us know that our test passed without any errors being thrown.
It was a decent amount of work, but switching to the WebdriverIO Test Runner is really going to pay off, with the first big benefit being synchronous style testing. We’ll take a look at that in the next video.
Transcript & Files: Switching to Sync Mode
When we first wrote our tests, we used promise-style callbacks, with 'then' functions to run code after specific commands.
With WebdriverIO 4.0, a new style of coding was introduced that makes it a bit easier to understand the order in which commands are being executed. This is called “sync mode”, and we briefly covered it when we reviewed the WebdriverIO configuration file.
In this video we’ll update our tests to match this new style and see how it can improve the readability of our tests. The first change we need to make is to turn sync mode on. To do that, we go to our configuration file, find the sync setting, and set it to true.
Now we go back to our test file, and make two sets of changes.
The first is a simple update, which is to delete the ‘return’ keyword. Since the test is now synchronous, Mocha doesn’t need to know about the promise on the Browser object.
The second change is slightly more complicated. Instead of using 'then' function callbacks, we will treat the 'getTitle' and 'getUrl' commands as if they were synchronous actions.
To get started, let’s isolate the first 'getTitle' call. We separate it from the surrounding commands, then convert it to a simple variable assignment. This removes the need for the 'then' function, so we can get rid of it.
We keep the 'console.log' though, as the code we want to run immediately after the 'getTitle' command remains the same.
Let’s do the same for our second 'getTitle' call. Again, isolate the command, assign the result to a variable, and remove the 'then' function.
We do need to update the 'console.log' to match the updated variable, as we created a new variable name to avoid confusion with the previous 'getTitle' variable assignment.
Finally, we’ll update the getUrl command, following the same pattern.
Let’s save our file and run our test.
As expected, everything works in our new synchronous format.
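For comparison, here’s the same test rewritten in sync mode (the variable names are my own; the selector is still an assumption):

```javascript
// shop-button.js — sync mode version (requires `sync: true` in wdio.conf.js).
describe('Shop CTA Button', function () {
    it('should take us to the products page', function () {
        browser.url('./');

        // plain variable assignments replace the `then` callbacks
        var homeTitle = browser.getTitle();
        console.log('Home page title: ' + homeTitle);

        browser.click('.callout .btn');

        // a new variable name avoids confusion with the first title
        var productsTitle = browser.getTitle();
        console.log('Products page title: ' + productsTitle);

        var productsUrl = browser.getUrl();
        console.log('Products page URL: ' + productsUrl);
    });
});
```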
I personally enjoy this style over the original promise-style commands. It makes the code more concise, easier to read, and gets rid of the boilerplate of having to write 'then' functions all the time.
This style is only available via the Test Runner, as behind the scenes it adds the proper Mocha hooks needed to execute in this manner.
Going forward, we’ll be using this format for all of our testing.
Transcript & Files: Options and Logging
Now that we’ve gotten comfortable with the Test Runner, let’s dig into some advanced usage of the tool.
Out of the box, there are several command line overrides you can use to customize your test runs on a case-by-case basis. Let’s take a look at a couple of these.
Overriding the baseUrl can be helpful for times when you need to test the same site on a different server. Most often this occurs when you’re testing a server on your local computer versus the production server copy. We’re going to give this a shot by testing a copy of the Robot Parts shop that’s available on a public URL.
We’ll start the wdio command as normal, but add a '--baseUrl' argument at the end to override the URL used in our tests. All the other settings will remain the same.
As you can see, it tested against the public URL instead of the local IP address. This is exactly what we wanted. If we were to run the test again without the baseUrl option, it would go back to using the IP address of our local machine.
Let’s take a look at another option. I mentioned the 'logLevel' setting a few videos ago. Sometimes when you’re debugging your tests, it helps to see the logs that WebdriverIO outputs.
Running our wdio command with the '--baseUrl' option, we add a second option which sets the 'logLevel' to 'verbose'. When we run our test, we can see the output as the test is executed, with our test success message at the very end.
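The full commands might look something like this (the public URL shown is a placeholder for wherever your copy of the site is hosted):

```shell
# Override the baseUrl for a single run
./node_modules/.bin/wdio --baseUrl http://example.com

# Same override, with verbose logging turned on
./node_modules/.bin/wdio --baseUrl http://example.com --logLevel verbose
```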
Let’s take a brief detour to walk through the activity. The first thing WebdriverIO does is ask Selenium for a browser to use. These are called “sessions”, and to get one WebdriverIO sends a POST request with data to the session endpoint on the Selenium server. The selenium server receives this request and initializes the session with the provided data.
You’ll notice this matches our capabilities settings in our config file, but has many more options specified. These are WebdriverIO defaults used to start a normal browser session. We could override them via the capabilities object in our config file if we so desired, but we don’t right now so we’ll leave them be.
The next thing to occur is that we receive a result back from Selenium. This result contains all the relevant data about our new session, including a session ID that WebdriverIO logs out separately for easier debugging. This session ID is used in all of our future requests to identify which browser session we’re using.
Understanding the relationship between WebdriverIO and Selenium is helpful, so I want to take a little bit of extra time to review it. WebdriverIO doesn’t actually run the browser automation, Selenium takes care of all of that.
Take the next command, for example. WebdriverIO sends a request to the Selenium hub at the 'wd/hub/{session id}/url' endpoint. In the request, it passes along information about the URL for the browser to go to. After Selenium receives and processes this request, it returns the results of the command execution. In this case, there’s no information to pass back, so the result is null.
In the next command, we request the page title from the browser. We don’t send any data to Selenium, as there isn’t any information to send. Instead, Selenium returns data back to us, namely, the title of the page. You see this in the “result” log output and, subsequently, in the console output from our test.
The next part of our test is clicking the call-to-action button. In our script, this is only one command, but WebdriverIO has to send two requests to Selenium to make the action happen. The first thing it does is find the element on the page. You can see it hits the 'element' endpoint, passing in the selector data. Selenium will then return the ID for the element we requested. With this ID, WebdriverIO sends a second command, 'click', to the endpoint with that specific element ID. That completes the click action.
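Roughly sketched, the two requests follow the JSON Wire Protocol pattern below (the paths, selector and element ID are illustrative):

```
POST /wd/hub/session/{session id}/element
     {"using": "css selector", "value": ".callout .btn"}
  -> {"value": {"ELEMENT": "0"}}

POST /wd/hub/session/{session id}/element/0/click
  -> {"value": null}
```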
The fact that WebdriverIO helps simplify two commands down to a single one is a great benefit of the tool. We’ll take a look in a couple lessons at how we can create our own custom commands for a similar purpose.
The next several actions are similar to the first title grab, where we’re asking for the title and URL of the page and then logging the result. Finally, WebdriverIO closes our browser by sending a DELETE request to our session endpoint. With that, the test is complete.
The last thing we’ll cover in this video is the ability to create our own custom arguments. To do this, we’ll be using some temporary environmental variables when running our script.
I talked about sending in a custom baseUrl for a different server environment. What if we’re going to be doing this often and want to avoid typing out the entire URL every time we run the command? It’s actually a pretty simple thing to do.
The first step is to move the baseUrl value to a variable. For simplicity’s sake, we’ll name this variable 'baseUrl'. We then update the config object to point to that variable.
Now that we have that set up, we can create a conditional that checks a 'SERVER' flag in our environment variables. Node allows you to access all of your environment variables through the 'process.env' object. If 'SERVER' is set to 'prod', we update the 'baseUrl' variable to use the production URL. That’s all the changes we need to make in our config file for this to work.
Back on the command line, we need to temporarily set the environmental variable for the server property. Again, this is a simple feat. Instead of passing in the value as an argument to the WDIO command, we set the variable first by typing
SERVER=prod. After that, we call our wdio command. Note that in Windows you need to use the
set keyword for it to work.
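The mechanism is easy to see with a plain shell one-liner; the same VAR=value prefix goes in front of the wdio command:

```shell
# Unix shells: the VAR=value prefix sets the variable for that one command only
SERVER=prod sh -c 'echo "$SERVER"'   # prints: prod

# Windows (cmd.exe): set the variable first, then run the command
#   set SERVER=prod
```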
Let’s run this to see if it passes muster.
As hoped for, the URL gets updated to the prod server.
This sort of customization is really helpful when working on larger teams with multiple environments, especially when you want to make it easier for other folks to run the tests without having to know the specific URLs.
Speaking of making things easier for others, in the next video of this lesson, we’ll take a look at using NPM scripts to simplify the command used to run our test suite.
Transcript, Files & Links: NPM scripts for easier commands
So far, to run our WDIO command, we’ve been referencing the
node_modules/.bin path. This is kind of hairy to type out, so let’s look at a simple way to make things easier for us.
NPM scripts are a lesser known utility provided by the package manager. There are two types of scripts available. The first are the “supported” scripts. These are common script names that are used to create a common vocabulary between NPM modules.
For example, the
start script is commonly used to start up the node server for that package. We use this in our package.json to start a server for our Robot Parts website.
The second type are custom named scripts. These are very similar to supported scripts, except they don't have a prescribed naming convention. They're arbitrary scripts you create for one-off needs during a project.
Because they are arbitrarily named, we execute them with the
npm run command. We could also do this for our supported scripts, except we leave the
run part out. This is because NPM provides a short-hand reference to the common scripts to make it easier to type out these standard commands. So we can type
npm start, instead of
npm run start.
With all that said, let’s use the supported
test script to run our wdio command. We define these scripts in the
package.json file, so let’s open that up.
In there, we’ll replace the
test value with
wdio wdio.conf.js. We can leave out the
node_modules/.bin path because NPM will look in that directory for the command we’re trying to run.
We can take the simplicity one step further by removing the file name. WebdriverIO automatically looks for a
wdio.conf.js file whether we specify it or not, so we can leave it off. Not only does that shorten our command, it also allows us to pass in a custom configuration file path if so desired in the future.
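After these edits, the scripts section of package.json ends up looking something like this (the start command shown is a stand-in; use whatever the Robot Parts site already defines):

```json
{
  "scripts": {
    "start": "node server.js",
    "test": "wdio"
  }
}
```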
Now, to run our tests, we just type in
npm test. NPM will then try running the
wdio command, looking in the
node_modules/.bin folder as needed.
The one caveat to this is it makes passing command line options to WebdriverIO slightly more complicated. Instead of just passing in the options we want to change, we need to prefix all of those options with two dashes. This tells NPM to pass along all the arguments after the two dashes directly to the command.
Here’s what it looks like.
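As a sketch (with a baseUrl override as the example option; the URL itself is a placeholder):

```shell
# Everything after the "--" is passed straight through to the wdio command
npm test -- --baseUrl=http://localhost:3000
```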
When we run the command, you’ll notice that the options get added to the command information displayed. Neat.
This doesn’t change anything for our environmental variable settings. We can use those the same as before.
Going forward, we’ll be using
npm test to run WebdriverIO, as it’s much easier to type.
If you’re interested in some in-depth blog posts on NPM scripts, check out the links available in the supporting content of this video.
4. Catch Failures the Lazy Way
Transcript, Links & Code Samples: An Introduction to Assertions
So far, we’ve been validating that our test successfully ran by checking the log output after execution. This is simple enough to do while we only have a test or two, but it quickly becomes tedious as more and more tests are added.
Instead of us manually checking the values, what if we could tell the computer what to expect and have it check for us? This is what assertions do. They compare two values and "assert" that they relate to each other in a certain way, whether that's being equal, not equal, or something more specific. If the values are off, an error is thrown and the test is marked as failing.
To gain an understanding of how this looks in code, we’re going to start with the built-in Node assertion library. For the sake of simplicity, we’ll start out by creating some basic assertions in a blank JS file.
The first thing we need to do is load the
assert library. We do that via a
require statement. Since the
assert library is built-in to Node, we don’t have to use NPM to install it.
Now it’s time for our first assert. We’ll keep it basic and assert that two values equal each other. Both values are the number 1.
We’ll then run our script via the
node command. No output occurs, which means that no errors were thrown and our test passed. Let’s see what happens if the assertion fails.
Going back to our numbers, let’s change the second 1 to 2. We’ll save the file and run the node command again. This time around we see an Assertion Error thrown, complaining about our math.
Let’s take this one small step forward. Instead of directly passing in the numbers, let’s assign them to variables first. The first variable will be named
expected, as it’s the value we’re expecting. The second variable will be named
actual, as it’s the actual result.
For expected, we'll set the value to 2. For
actual, we’ll do a simple calculation of
1 + 1. Then, we’ll pass in our variables to the assertion function and make sure they match. Running our node file again confirms this. And just to validate it fails if the numbers are off, let’s change the math to make the calculation incorrect. Running it shows us our error again.
What if you want to check that two values aren’t the same? Assertions can check for that too. Going back to our file, we’ll update the assertion function to use
notEqual and leave the math the same, as we want it to not equal. Let’s try it out.
You see that the error isn’t thrown this time, because it’s now validating the values aren’t equal. If we change the math to match, then run the test one more time, you’ll see the Assertion Error pop back up.
Let’s migrate this idea to our WebdriverIO tests. Again, the first step is to load the
assert library. We do that by adding a
require statement to the top of our file.
Notice how the statement is outside of the
describe block. If we were to include it inside the
describe block, it would only be available in that section. Any other describe blocks in the file wouldn't be able to use it.
Next up, we make our first assertion. We want to validate that the page title is a specific string of text. To do this, we use
assert.equal and pass in the actual value and then the expected value. The actual value is the
title variable and the expected value is a simple string of text containing our page title.
Before converting the other checks to assertions, let’s see how this first assertion works. To run it, we’ll use our
npm test command.
The test runs and we no longer see the first title output. While I’m pretty sure that the check was successful, let’s make it fail just to ensure it catches any errors.
It’s a good idea when writing tests to check that they fail when they should. Tests are code, and like any code, you can write it wrong. I cover this in detail in a blog post of mine called “Congruence Bias in Testing”. Check it out if you’re interested.
To make our tests fail, we’re going to change the website itself. This will simulate an actual error in our code. Open the index.html file and update the page title. After saving the change, run the tests again.
Now we see that our test is failing. Notice that the error output is different from our first script. It doesn't show the full Assertion Error output, but instead shows a much friendlier error message, including the content issue and the test case that failed.
This output is part of the Mocha system. Mocha watches for Assertion Errors in your tests, and when they’re thrown, handles them by marking the test as an error and reporting it. The default “reporter” is called “dot”, which is named after the dots you see showing whether individual tests passed or failed (so one dot per test). We’ll cover the other reporters available in a later lesson. For now, let’s revert our title change so we can continue with our assertions.
Updating the second title is almost the same as the first. The only differences are the actual and expected inputs.
Asserting the URL is a little bit trickier. Because we could be testing the site either locally or in production, we can’t do a straight “equal” assertion. The hostname of the URL will be different depending on the server we’re testing.
Instead, we'll check that the URL has the product-page.html filename in it. We'll do this by using JavaScript's
includes function, available on all strings. Checking whether our full URL contains our expected file name, we'll get either a true or a false and save that to a containsFile variable.
Now, we could assert that the
containsFile variable equals true, but that’s a bit verbose. Instead, we’ll use the
ok assertion, which checks that a value is truthy. As long as the full url contains the filename we provided, our tests will pass.
Let’s try this out with our
prod flag to validate the change.
Finally, we should validate these two assertions fail by breaking the link in the main page. Let’s do that real quick.
You can see that it only reports the first assertion failure. This is because the test immediately stops executing once an error is thrown, so it never makes it to that second check. If we turn the link back on and put in a bad filename, we can see the URL check fail.
One last thing. Notice how the error message is just
true? That’s not very helpful when debugging issues. To improve this message, we can add custom messages to our assertions by passing them in as the final parameter in our assertion functions. By adding a simple message, you can see how our error output is more helpful.
The Assert library is great for getting started, but it requires some boilerplate code every time we want to go outside the normal "equal" and "ok" assertions. In the next video of this lesson, we'll take a look at Chai, which is a more feature-rich assertion library that I really enjoy working with.
Transcripts & Links: Switching to Chai
In the first video of this lesson, the topic of assertions was introduced and we took a look at Node’s “Assert” functionality. We were able to use the tool for some basic assertions, validating the page title and url, but even doing that required some extra code.
To help with our needs, we're going to look at switching to a tool called Chai. Chai is an assertion library that, like Node's Assert library, provides a utility to compare expected and actual values in code.
Chai provides two distinct benefits. First, there are a wider range of assertions available in Chai. Later in this video we’ll look at using the “include” assertion to save a line of code in our tests.
Chai also provides three different assertion styles for us to choose from. These styles allow us to write our tests in a format that fits our preferences better. In the next video, we’ll look at the ‘should’ and ‘expect’ styles. For this video though, we’ll stick with the standard ‘assert’ format.
Getting set up with Chai is a fairly simple process. All we have to do is run
npm install chai --save-dev. This installs Chai from NPM and makes it available to require via Node.
Chai offers three flavors of assertions, but all three essentially do the same thing. They're each a different language for saying the same thing: what a result should or shouldn't be. Chai offers the trio of styles because humans are fickle and have preferences that are often just a personal thing. By providing three interfaces, you're able to choose the one you're most comfortable with.
The first format we’ll look at will feel familiar. Similar to Node’s Assert style, Chai has an “assert” style as well. While their names are the same, there are some differences between the two libraries. The main difference is that Chai’s version offers a much wider range of available assertions. Perusing through the API page, you’ll notice a plethora of pronouncements to proclaim.
Alliteration aside, we’ll start off by loading the library in pretty much the same way as before, the only difference being we require Chai, then get the utility from the returned object.
Next we need to update our assertions. The first two checks actually stay the same.
assert.equal is the same format for both Chai and the base Node library. (show http://chaijs.com/api/assert/#method_equal)
ok is slightly different though. Chai uses
isOk instead of just
ok. (show http://chaijs.com/api/assert/#method_isok). Let’s switch that over real quick.
With our updates, let’s run our tests again to ensure everything passes.
Well, we can simplify this thanks to Chai having this “include” assertion built-in. Using the
include assertion (http://chaijs.com/api/assert/#method_include), we can pass in the haystack, which is the result we'll be looking in, and then the needle, otherwise known as the expected value.
There are at least two benefits to this. First, it’s one less line of code to have to write. Second, the default error message when the assertion fails will be much more useful than “false does not equal true”.
Let’s run our test just to make sure we’ve got everything right. Looks good.
Now let’s break our assertion to give you an idea for what an error message looks like. After running our test again, you can see the friendlier output. No more guessing what “false does not equal true” means.
That’s it for this video. Next up, we’ll try out the ‘should’ and ‘expect’ format, to give our tests a more sentence-like assertion style.
Transcripts, Links & Code Samples: Expect & Should Style Assertions
Chai’s extra assert functionality is helpful, but in my mind Chai shines brightest with its “expect” and “should” assertion formats. These styles add a more sentence-like structure to your tests. Take the statement
assert.equal(actual, expected); In the ‘expect’ format, it becomes
expect(actual).to.equal(expected). In the ‘should’ style it’s
actual.should.equal(expected). In both cases the reading order is more natural.
In this video, we'll look at converting our tests to work for each style. Let's start off with the "expect" style. We don't need to install anything new, but we do need to change how we load the library. Instead of asking for the 'assert' property, we ask for an 'expect' property, and assign it to an expect variable.
Then we update our assertions. We’ll start with the first assertion:
assert.equal(title, 'Robot Parts Emporium');. Chai has a helpful API page available showing the various assertions with examples for each type.
Scrolling down the options available, we find the ‘equal’ assertion. The first example matches our needs. Let’s copy it over to our test.
The next step is to replace the example values with ours, then get rid of the old assert.
We’ll do the same thing for the second page title check as well.
The last assertion is a little different. Going back to the Chai API page, we find the
include assertion. It comes in two forms,
to.include and to.contain, which both do the same thing. We'll use the former format.
Back in our code, we change
assert to an
expect function call, pass in
url, and update the remainder to read .to.include('product-page.html').
Let’s save the file and run our tests again to see how it works.
Looks like everything passes as expected.
While I’ll be sticking with the
expect format going forward, I do want to cover the
should style as well. should and expect are very similar. Both use the same chainable language to construct assertions; however, the
should style extends each object with a special property instead of calling an
expect function. This style has some issues when used with Internet Explorer, so be aware of browser compatibility.
Given that should works by extending
Object.prototype, there are some scenarios where
should will not work. If you are trying to check the existence of an object, it will throw an error if that object doesn't exist. There are other ways to test the existence of objects using
should, but I just prefer to stick with expect.
Regardless, let’s update our test to see what it would look like with the
should style. Because should needs to attach itself to the
Object.prototype, we need to require it via a function. It’s a simple update to make, but good to be aware of. Because the functionality is attached to all Objects, we don’t need to reference a specific assertion function, and can get rid of this
expect variable entirely.
The next thing we’ll do is update our assertions. Because the format is so similar between
expect and should, we can update all of our assertions at the same time using Sublime Text's multi-replace tool. First, we highlight a single
expect call, then
use quick find all to select all instances of that call.
Next, we delete this text, move to the end of the variables we're testing, delete the extra parentheses, then replace the to with should.
Let’s run our tests to ensure we’ve made the right updates. Again, it works great in this new format.
As I mentioned, going forward, I’ll be using the
expect format. You’re welcome to use
should if you prefer that style, as I know it's a little more succinct than expect.
Before we finish this lesson, I want to show one last tip for simplifying your test. As it stands, we need to load the Chai assertion library in every one of our test files. Instead of doing that, we can move this statement in to our wdio configuration file.
Let’s cut the statement out of our test file, then open our
wdio.conf.js file. While we’re in here, let’s delete that
console.log at the top, since it’s kind of annoying now.
Next, let’s scroll down to the ‘hooks’ section. We want to find the
before hook. This handy function allows us to run arbitrary code before test execution begins. We’ll uncomment the function, then paste our
chai require statement inside of it.
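The relevant part of wdio.conf.js ends up looking something like this. Assigning to a global so the test files can use expect without their own require is my assumption about the wiring; treat the exact shape as a sketch:

```javascript
before: function (capabilities, specs) {
    var chai = require('chai');
    global.expect = chai.expect; // make expect available in every test file
},
```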
After saving our files, we can run our test to validate that the
before hook worked.
It worked perfectly. Now we no longer have to include the same
require statement in all of our test files. This may not be a big deal for a single file, but it’s helpful at a larger scale.
That’s everything you need to know about assertions to get started. In the next module, we’ll get back to writing tests and cover
debug and other useful WebdriverIO commands.
5. Pause, Debug and other Useful Commands
Transcript: The Debug Command
In this module, we’re going to look at a few special WebdriverIO commands that make it easier to debug the tests you’re writing. In the final video, we’ll use these commands to help write a series of real-life tests.
We’ll start off by looking at the
debug command. This command pauses the test execution when run, giving you time to jump into the browser and check the state of your application. Once done, you can restart the tests by pressing Enter on the command line.
Let’s try it out. First, we’ll add the command to our execution chain in our test. It’ll go right after our click command, showing us what the page looks like after the call-to-action button is clicked. Now I’ll save and run the test to try it out.
Normally this test completes so fast you don’t really see it happening. It’s more like a flicker on the screen. But in this instance, the browser stops and sits there doing nothing, which is what we wanted.
The page loaded correctly, but one thing I notice is that the URL in the browser contains two slashes. Looks like we don’t need the trailing slash in our
baseUrl setting. Let’s fix that really quick.
To get our test to complete running, hit the
enter key back on the command line. This completes the debug command, and the test completes as normal.
Opening up the configuration file, I’ll remove the trailing slashes from the two URLs we have. I’ll then save the file and run the test again.
Once more, the browser window pops up and pauses until I’m ready to move forward. We see now that the URL doesn’t include the two slashes and looks better for it.
One of the nice things about the debug command is you’re able to jump in to the browser debug tools to snoop around the page in the middle of your test. This can be helpful for identifying what selector to use to find certain elements.
Before we’re able to take a look at this, we need to change one of our Mocha settings. By default, the timeout for Mocha tests in WebdriverIO is 10 seconds. This means that Mocha will wait 10 seconds for a test to complete running. If it doesn’t finish in that amount of time, it throws an error.
Since I’ve been talking during this example for more than 10 seconds, you can see the error was thrown here. Usually this setting is best left alone, because a test taking too long is normally a sign that it’s broken. But when we’re in the middle of debugging, there’s a high likelihood that we’ll take longer than 10 seconds during our investigations.
We can fix this though.
Mocha allows you to override their default settings for timeouts. Looking at the Mocha documentation page, you can see that the option we want to change is understandably named timeout.
Usually you’d set this value by either passing in the setting via the command line, or having a
mocha.opts file with your specific overrides in it. For WebdriverIO though, our Mocha settings are stored inside its configuration file.
Let’s jump back into that file and look for the
mochaOpts section. All of the options passed in there get delivered to the Mocha instance that WebdriverIO spins up.
You can see we've already got one option set, which specifies that we want the
BDD UI format. This tells Mocha that we'll be using describe and
it to construct our test suite.
To get our timeout setting included, we’ll just add a
timeout property to the mochaOpts object and use a really large number for the value. This value is the number of milliseconds Mocha should wait for a test to complete, so again, it needs to be very large.
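The mochaOpts object then looks something like this (the exact number is arbitrary, it just needs to be very large):

```javascript
mochaOpts: {
    ui: 'bdd',
    timeout: 9999999 // milliseconds Mocha waits before failing a slow test
},
```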
Let’s save the file and run our test again. With the long timeout, we can now move about the page and inspect the HTML as much as we’d like.
Being familiar with the browser developer tools can be incredibly helpful for validating selectors. Let’s open up the Chrome dev console and try it out. To get there, right-click on the page and choose “Inspect”. In Firefox, this option is called “Inspect Element”. A new interface will pop-up and I’ll navigate to the “console” tab.
From here, I can test selectors to see if they match elements on the page. For CSS selectors, I can use the
$ function. For XPath, I'll use the $x function.
Testing out a CSS selector first, I call the dollar sign function and pass in the selector I want. Here I’ll pass in the selector for my call-to-action button.
When I hit enter, the browser searches the page for any elements that match, and returns the results. If we have an element, we’re good. If it says null, that means there were no matches and we’ve messed up somehow.
Now let’s try this for XPath. I don’t have an XPath selector handy, so let me show you a quick way to get one. Switching over to the ‘element’ panel, I can right-click an element and choose
copy, then choose
Copy XPath. This will create an XPath selector for the element I chose and save it to my clipboard. Did you notice there was an option to
Copy selector as well? This would do the same thing, but return a CSS selector.
Now I can go back in to the console and try out my XPath selector. Using the
$x function, I pass in my selector just like with the CSS one. Then hit enter and see the element returned. The results for a missing element are the same as with the CSS selector.
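In the devtools console, the two checks look like this (these helpers belong to the browser console, not WebdriverIO; the selectors are hypothetical examples):

```javascript
$('.btn-cta');            // CSS: first matching element, or null if none
$x('//a[@class="btn"]');  // XPath: array of matching elements (empty if none)
```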
Using this functionality can be a great help when trying to find the right selector, especially if you're less than familiar with CSS or XPath.
I’m done with this example, so I’ll jump back to my command line and hit enter to complete the running of the test.
I should also change my timeout configuration back to its original value, so that I don’t end up waiting forever for a failed test to timeout. This is kind of annoying to do every time I want to use the debug statement, but thankfully WebdriverIO has officially documented a solution.
Remember how we used an environmental variable to switch the URL we’re using in our tests? We can do a very similar thing to set the timeout value.
You can see in the WebdriverIO example, they use the
DEBUG command line flag to specify whether they want the long or short timeout setting. They also do some additional configuration there, but we're going to ignore that for now and stick with just adjusting the timeout value.
At the top of our configuration file, we'll add a ternary statement that checks for the presence of the DEBUG environmental variable. If it's there and set to true, it will use the long timeout. If not, it'll use the short one. All we need to do now is update our mochaOpts setting to reference this new variable.
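A minimal sketch of that ternary, assuming the variable name and the two timeout values (10 seconds is Mocha's normal default here; the large number is arbitrary):

```javascript
// Top of wdio.conf.js: pick the Mocha timeout based on a DEBUG flag
const timeout = process.env.DEBUG ? 99999999 : 10000;
```

mochaOpts then references timeout instead of a hard-coded number.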
With this in place, let’s try out our tests. First we’ll run the command without the timeout.
I skipped ahead in the video for the sake of time, but trust me: I only waited 10 seconds for the browser to shut down.
Now I’ll pass in a
DEBUG=true flag to my test and run it again, and again I’ll skip ahead 10 seconds to save you time. Even after the default timeout has passed, our test is still patiently waiting for us to move forward.
To clean up, I'll leave the configuration file as is, but will remove the debug statement from the actual test. It's great to know about this feature, but our tests won't run unattended with the statement left in the code.
That’s it for the
debug statement. We’ll be using it throughout the course to help debug issues with our tests as we write them.
Transcript & Code Sample: The Pause Command
In the last video, we covered the debug command, which gives us the ability to stop our test script so we can look around the page. Sometimes though, you just need to wait a moment for an action to occur on the page before continuing on. This is where the
pause command comes in.
Pause is a simple command that will delay your test a defined amount of time before continuing on. For example, if you want to test a carousel menu that auto-rotates after a specific interval, you’d need to wait until it rotates before validating the functionality.
Another example is an animation delay. Smooth interfaces are essential in modern websites, so working with animations is pretty much a given nowadays.
On our Robot Parts Emporium website, we have both a carousel and animations. We’ll cover testing the carousel later in this module, as it requires some additional knowledge we haven’t gone over yet. For now, let’s check out the animation on the accordion menu we have in our FAQs section at the bottom of our homepage.
As you see, when you click on one of the sections, it expands the content of that section out via an animation. Let’s write a test using
pause to validate this functionality works.
Starting off, we’ll create a new file to house our tests. We’ll name it
homepage-faq.js. Then we create our
describe block, naming our test
Homepage FAQ Accordion.
The first test will just validate that the first section is visible. We’ll title it “should show first section on page load”.
There are a few ways we can validate this, but we’ll be using the
getCssProperty command. This command returns an object describing the value of the css property you ask for. Simply pass in the element you want to check, and the property you want to look at.
Here, we want to check the first
.accordion-content element in our section. Before doing that, we’ll load the homepage first. Afterwards, we’ll declare a variable named
firstHeight that will take the response from the
getCssProperty command. We'll pass in the selector for our element, in this case
.accordion .accordion-item:first-child .accordion-content. We'll also specify that we want the height property.
Before adding the assertion, I’ll add in a temporary
console.log command to show you what the value of
firstHeight comes back as.
Now let’s see how this works. On the command line, I’ll issue my
npm test command as usual and let the test run its course. You'll see an object logged with some in-depth data about our CSS property. This is what we'll use for our assertion.
Because the value returned from getCssProperty for height contains the px unit, we'll take advantage of the
parsed property and its value to get the numeric form of the data.
Back in our test, we’ll add an assertion expecting that
firstHeight.parsed.value be greater than 0. The format of this assertion is one of the reasons I like Chai as an assertion library. We get to write our tests in a very sentence-like structure.
I'll run the test again just to make sure it passes. A quick note while running this test: we're not asserting the height be 58 specifically, because we don't want our test to break if the content changes. We only want to make sure the value is greater than zero. Our test still passes, so let's move on.
That was the positive scenario, that content should be showing on page load. What about the opposite? We also want to check that none of the other accordion content is visible. Let’s write a test for that.
We’ll create another
it block, this time naming the test
should not show other content. Because the first thing we’ll want to do is load the url, let’s move that command from our first test in to a new
beforeEach block. As a review,
beforeEach is a Mocha hook that will run whatever code we want before each of our tests in this section.
I’ll add the
beforeEach hook to the top of our tests, just to signify what it’s doing. Then I’ll grab that
browser.url command and move it inside the function. I’ll also get rid of that
console.log in the first test.
With that out of the way, it’s back to our test. First, a variable will be created to store our height value. Then just as before, we’ll use the
getCssProperty command, but this time pass in the second content item in our accordion. The
:nth-of-type selector is what we’ll use to do that, asking for the second child of that type in our list of items.
I'll also console.log out this value just in case something weird happens. Finally, I'll add our assertion to validate the height is specifically zero. Pretty simple, right? Let's run our test and try it out.
Well, our first two tests passed as expected, but our newest one threw an error. The error says that it
expected undefined to equal 0, which is obviously not right. It looks like the value isn’t being provided in our parsed property.
Checking out the logged data, you can see this is true. Instead of a numeric value, the CSS property was defined as
auto. That’s because
getCssProperty checks the computed style of an element. And because we didn't define a specific height on our element, it defaults to auto.
There are two ways around this. We could simply change our assertion to check that the height value is auto. That’s easy to do, but I don’t really like it because it’s not actually validating that the content is hidden.
Instead, I’m going to check a different property to ensure the content is hidden. Jumping back in to the browser, I’ll take a look at the hidden item and see that its
display property is set to none. The content definitely won’t be visible with this set, so let’s validate that in the test.
First we change 'height' to 'display' in the variable name and inside the
getCssProperty command. Then we'll get rid of
parsed, since we don't need to parse a number. Finally, I'll change the expected value to be none.
Once more I’ll run my test to see how it works.
Looks like the display property is now being checked and the test is now passing. Before moving on, I’ll remove that
console.log line in my code.
Okay, now we need to get to our pause command. We’ve validated that the accordion is in a good state to start off with, so let’s test its functionality. In our next test, we want to check that, upon clicking another accordion link, the visible content shrinks away and the requested content expands out.
Because this is animated, it takes a brief moment for the state to finish its transition. Let me show you what I mean.
In this test, I’m going to click the second item in the accordion menu, then check the display and size of the content in the first two accordion items. The first element should have a display of none and the second should have a height greater than zero. Let’s write our test.
We'll create a new it block, this time with the name 'should expand/hide content on click'. Inside the function, we'll start off by clicking the link for the second accordion item.
Then, we’ll validate the height of our second content area is greater than zero. With that, let’s run our tests.
Seems to be working. Let's make our test a little more robust by checking the display of the first item. We'll duplicate the check we wrote in our previous test, updating our selector and our assertion to validate that the display does not equal block.
Again, we’ll run our tests.
Hmm, what happened? Did the first content not collapse?
I want to see what's going on, so I'm going to add a debug right after the click command. Trying it out, I can see that the content is collapsed and that everything seems to be working as normal. So what's going on?
Well, let’s get rid of the debug command and log out the values of the height and display we’re checking for.
Running the test again, you can see that the display is definitely still set to block, which isn't correct, but you may also notice that our height isn't very large. Out of curiosity, I'm going to run this test again.
The display property stayed the same, but as I was suspecting, the height property is a different number.
I can pretty safely say at this point that we're checking our values too soon. Because the hide/show is animated, it takes a moment for everything to transition over. In our test, we're checking the values while it's making the update, which is why our height is different each time and the display value isn't updated yet.
This is where we finally get to the pause command. Instead of clicking the link then checking right away, we need to click, pause a moment, then check. Thankfully, this is really easy to do.
Back in our test, right after clicking the element, I'll add a browser.pause command and pass in the number of milliseconds I want to wait before checking. Let's start with 100 and see what happens.
After running our tests, the value of the height is higher, but our display still isn’t right. I’m guessing we haven’t waited long enough.
I’m going to cheat a little bit here because I actually know the length of the transition, which is 500 milliseconds. If you’re unsure of this value, check with the person who created the page to see if they might have a better idea.
With the updated value, I can run the tests again and see they now pass every time. Every single time. Our height value stays consistent and our display value is correct. That’s the value of the pause command.
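In v4's synchronous style, the finished test body looks roughly like this. The selectors are illustrative, and a stubbed `browser` returns post-animation values so the sketch runs without a real session:

```javascript
// Stub for the real WebdriverIO v4 `browser`: records the pause length and
// returns post-animation values, so the sketch runs standalone.
const browser = {
  paused: 0,
  click: (selector) => {},
  pause(ms) { this.paused = ms; },
  getCssProperty: (selector, prop) =>
    prop === 'display' ? { value: 'none' } : { value: '100px', parsed: { value: 100 } }
};

// Click the second accordion link, then wait for the 500ms animation to finish
browser.click('.accordion a:nth-of-type(2)');
browser.pause(500);

// After the pause, the first item is hidden and the second has expanded
const firstDisplay = browser.getCssProperty('.content-1', 'display').value;
const secondHeight = browser.getCssProperty('.content-2', 'height').parsed.value;
console.log(firstDisplay, secondHeight); // 'none' 100
```

The key point is simply that the pause sits between the click and the checks, giving the animation time to finish.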
One final note. In the debug video, I mentioned increasing the size of the test timeout value. If you end up pausing a little too long, you'll run into a similar issue, where your test times out before it can complete. In cases like this, add this.timeout(xx) to the top of your test block to overcome that on a per-test basis.
Just as an example of this, I'm going to increase the value of my 'pause' to 10000, then run my tests to show that they time out. To overcome this, I'll add this.timeout(15000) to the test that needs the increased timeout, run it again, and see that it passes fine now. This functionality is really handy when you have a specific test that takes a while, but the rest of your tests don't.
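The reason `this.timeout()` works is that mocha calls your test function with a context object that carries a per-test `timeout` method, which is also why the test must be declared with `function () {}` rather than an arrow function. A tiny stand-in for mocha's `it` shows the mechanism (this is a sketch of the idea, not mocha's actual source):

```javascript
// Minimal stand-in for mocha's `it`: runs the test function with a context
// whose timeout() method overrides the limit for that one test.
function it(name, fn) {
  const ctx = {
    _timeout: 2000,                      // a default per-test limit
    timeout(ms) { this._timeout = ms; }
  };
  fn.call(ctx);                          // `this` inside the test is the context
  return ctx._timeout;
}

const limit = it('waits through a long pause', function () {
  this.timeout(15000);  // per-test override, as in the transcript
  // browser.pause(10000) and the assertions would go here
});

console.log(limit); // 15000
```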
In the next video, we’ll talk about some alternatives to pause. For now though,
pause is an extremely useful command to keep in your back pocket when you need to wait just a little bit.
Transcript & Code Sample: Element State Commands - isExisting
There are currently six element state commands you can use to more easily inspect the state of your website and its elements. You can find them all on the API page under the ‘State’ nav section.
Each state command accepts a selector and returns a true or false value depending on the state of the element provided.
We're going to take a look at all six state commands, and jump into examples for a few of them, beginning with isExisting.
The functionality we'll be testing throughout this video is the review form on the product page. Review forms are fairly simple, accepting an email address and the review itself. It gets a little tricky, though, when you try testing form validation as well.
Starting off, I'm going to create a new file called "review.js". In it, I'll create my describe block, defining this set of tests as "Review form interaction".
To plan the test, I like to start by writing out the steps as comments. This method helps me figure out what I need to test without having to worry about code details just yet. It's also helpful for clarifying what the code is doing, which matters if you're working on a team of testers or just have a poor memory.
The steps for our test will be:
Go to the product page
Enter the email address
Enter text in the comment form
Submit the review
Assert that our review now appears in the list
Our first step is to load the product page. This step should be fairly familiar by now: we call browser.url and pass in the path to our page.
For the next two steps, we'll be using the setValue command. This command replaces any existing value in an input field with the text passed in. We call it by sending the element selector and the text to set.
First, we'll add an email address to the form. We can get the selector by inspecting the email input element on our review form. You can see it has an ID of review-email, which makes it simple to select. Let's copy that ID for our test. In the test, we call setValue and pass in the ID selector, then send the input, which should be a valid email address.
We'll do the same thing for the comment field. It's a different type of form input, but because it accepts text, we can still use setValue. Again, we'll pass in the ID selector of the input element, which is review-content. For the value, the text 'This is the review' will work just fine.
With the fields filled out, the form is ready to submit. To do this, we'll use a command called submitForm. This is a very simple command, taking in just the selector of either the form element you want to submit or any element that is a descendant of that form. Since we already have it available, we'll use the selector for the email input.
Before we get to the isExisting state command, let's verify our comment was added. We'll use debug to pause the test execution after submitting the form so we can manually validate that the review was added.
Since we have other files in our test folder, when we call npm test it will execute all the tests we've written so far. To run just the single 'review' file, we can pass a spec option to the WebdriverIO command. This tells WebdriverIO to only run the tests defined in that single file.
Because we're using NPM scripts to run the wdio command, we need to prefix our option with two dashes. After that, we pass in the spec option with our filename as the value: npm test -- --spec ./test/review.js.
Now when we run our test, it will load the product page, fill out the form, submit it, then pause as we debug the page. You can see the review submitted successfully, so we’re ready to automate that assertion.
Looking at the isExisting command API page, you can see it's a fairly simple command. It takes just one parameter: the selector of the element you want to check existence for. It returns true if that element exists in the HTML, and false if it doesn't.
Notice in the example that it doesn't matter whether the element is visible or not; it will always return true as long as the element exists on the page. Later on we'll take a look at the isVisible state command, which does take visibility into account.
Back in our test, we’ll create a variable called “hasReview” to assign the return value to. Then we fire off the
isExisting command. For our selector, we’ll use a text-based value, searching for an element with a class of “comment” that contains our exact text of “this is the review”.
With that, we’ll assert that the ‘hasReview’ variable is true, signifying that it found the element with that text.
The last part of our test will be adding a better message in case we can’t find the comment. In our assertion, we can pass a second parameter to the
expect function. This value is a string of text you want prepended to the assertion error message. For us, we’ll say “comment text exists”. That way we get a better idea of what we were expecting.
With that, we’ll save our file and run our test.
Since we still have our debug statement in place, the test will pause before the assertion. Seeing that the review is visible, we’ll hit enter to continue with the test and validate that the assertion came back positive.
The final thing we’ll do is remove that
debug statement, as now our test works perfectly.
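Assembled end to end, the test body looks roughly like this in v4's synchronous style. The stubbed `browser` and the exact page path are placeholders so the sketch can run standalone; the input IDs come from the page itself:

```javascript
// Stub for the real v4 `browser`: remembers set values and pretends the
// submitted review now exists on the page.
const state = { values: {}, submitted: false };
const browser = {
  url: (path) => {},
  setValue: (selector, value) => { state.values[selector] = value; },
  submitForm: (selector) => { state.submitted = true; },
  isExisting: (selector) => state.submitted   // review appears after submit
};

// 1. Go to the product page (path is a placeholder)
browser.url('/product-page.html');
// 2. Enter the email address
browser.setValue('#review-email', 'email@example.com');
// 3. Enter text in the comment form
browser.setValue('#review-content', 'This is the review');
// 4. Submit the review
browser.submitForm('#review-email');
// 5. Assert that our review now appears in the list
const hasReview = browser.isExisting('.comment*=this is the review');
console.log(hasReview); // true
```

With the real `browser`, that last line would feed into `expect(hasReview).to.equal(true, 'comment text exists')`.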
That's it for the isExisting command. In the next video, we'll talk about using isVisible to verify that error messages show up when invalid content is submitted.
Code Sample: Element State Commands - isVisible
Code Sample: Element State Commands - hasFocus
Transcript & Code Sample: The waitFor Commands
Similar to the state commands, the ‘waitFor’ commands detect the state of an element. Depending on those results, they will pause the script execution until the desired state is met.
This sort of functionality is extremely useful when testing a page that has animation-based delays or functionality that requires an indeterminate amount of time to execute. Instead of using an arbitrary time delay, which always runs the risk of being too short, we can use the waitFor commands to hold off just long enough before continuing with our test.
The basic format of all of the commands is the same: you provide the element to check, the state to wait for is given by the command's name, and you can say how long to wait for that state. There's also a third argument to the commands that we'll take a look at later.
Before that though, we’re going to take a look at how we can take advantage of the state commands to write more robust tests for our robot website.
In this example, we’ll be testing the Robot checkout flow, adding a product with a quantity to our cart, clicking the ‘buy now’ button, then validating and closing the ‘thank you’ message that’s returned.
To start things off, we need to create the skeleton for our test. In a new file titled ‘cart.js’, we’ll open up a ‘describe’ block talking about our cart functionality. Since we’ll be testing the checkout page, we need a beforeEach function to navigate to that page for each of our tests. We do this for every test, so we have a clean page to test on each time.
Our first test will be validating that we can only click the ‘buy now’ button after we’ve entered a quantity.
First, we’ll use our ‘isEnabled’ state command from the previous lesson to validate that the ‘buy now’ button is disabled to begin with. Normally we don’t need to validate the beginning state of our test, but here it’s a good idea since the default for a button is for it to be enabled.
After our initial validation, we’ll use the
setValue command to enter a quantity in to the corresponding input field.
With a quantity entered, the ‘buy now’ button should now be enabled. We’ll use the same
isEnabled state command to check it, this time expecting the return value to be true instead of false.
That’s it for our first test. We validate some basic functionality of the shopping cart. Now it’s time to dive in to the checkout process. We’ll create a nested ‘describe’ statement to help section this portion of the tests apart from the rest.
There are a couple of commands in common we’re going to run for each of these ‘checkout’ tests. Since the commands are all the same, we can add them to a second ‘beforeEach’ function. Note that this beforeEach function only runs for tests inside the ‘checkout process’ describe block. This is a mocha pattern that’s very useful with our test hierarchy.
Inside the beforeEach function, we’re going to set the quantity to 10, just like we did in the previous test. Notice that we’re not asserting the state of the ‘buy now’ button. It would be redundant to test this again, so we can happily ignore it.
Next, we click the ‘buy now’ button, initiating the cart checkout process.
With this administrative code out of the way, let's start in on our next test. We'll be checking that the 'buy now' button becomes disabled while we wait for the response from our 'buy now' action. We don't want the user submitting two orders, even if that would provide a short-term increase in sales.
Similar to our first test, we’ll validate the ‘enabled’ state of the button. We’re checking that it’s disabled, as we just clicked the button and haven’t received a response from the checkout server yet.
We'll also validate that the text changed inside the button, from 'buy now' to 'Purchasing'.
That’s all we’re looking at in this test. We simply want to ensure that the state changes from ‘pre-buy’ to ‘pending’.
If you didn't fall asleep in the previous video, then this should all look familiar so far. If you did fall asleep, well, I can't blame you, as I'm a parent of young kids and am liable to doze off whenever I can.
Now we're going to dive into the waitFor commands, so if you need a dose of caffeine to make it through, press pause and go pour yourself a cup.
In our functionality, we submit an order to a checkout cart server, which processes the information and returns a result. Since this is a demo page, the result always returns within 2 seconds, and is always successful. That’s not always the case though. It could easily take 10 seconds to resolve the status, even though the sales team would cringe at that delay.
Regardless, the truth is that we just don’t know how long it will take. This is where our ‘waitFor’ commands come in handy. Instead of requiring a long delay in our tests just to make sure we give the server enough time to respond, we use the ‘waitFor’ command to wait just long enough for the response to come back. No more, no less.
In this next test, we'll be using the waitForExist command to delay our test until the success message element exists. As I mentioned before, the command takes the selector of the element you're waiting for.
It also optionally takes a timeout limit, which by default is 500 milliseconds. If the wait reaches that limit, the test will error out. If we need a longer delay, we can increase this timeout by passing in the number of milliseconds to wait as a numeric value.
As soon as the element is found, the remainder of the test proceeds. The timeout is only used as an upper limit for how long to wait. If the element is found immediately, the test will immediately continue.
Using waitForExist, we’re going to verify that the thank you message appeared, and that it includes the quantity we entered for our product along with its name.
The first thing we’ll do is define the selector we’re going to use to match the element we’re looking for. Here, we’re defining a ‘partial text match’ selector, telling Selenium to look for an element that has a class of ‘callout’ and contains text with the words “Thank you human”.
Next, we’ll use our waitForExist command to pause execution of the script until the thank you text is found. We’ll increase the timeout to 3000 milliseconds, or 3 seconds, to ensure we wait long enough for the response to return.
Once found, we’ll get the complete text of the thank you message and pass that in to our expect function, checking that it contains our desired quantity and product.
Sometimes we don’t need to check whether an element exists, but rather wait for it to be in a certain state. In the next example, we’ll be using the ‘waitForValue’ command to check that an input’s value is a specific number before continuing on.
This command works just like
waitForExist, but I’m going to use it in a slightly different way. Instead of waiting for a value to appear, I’m going to wait for it to disappear. To do this, I’ll use the ‘reverse’ flag at the end of my command call to tell WebdriverIO to wait until the value no longer exists in the input.
For this test, we’ll be finding out if the quantity input is cleared after the checkout is complete.
It’s a very basic test, simply waiting for the value in quantity to disappear. We don’t need an assertion, as ‘waitForValue’ is acting in that role. Remember that we’re using the ‘true’ flag as the last parameter in our command to wait for the value to be gone, instead of waiting for it to have a value.
Our last test is going to be a little tricky. It will use two waitFor commands. First, waitForExist, to check that the success message has shown. Second, waitForVisible, because the message animates away on click, taking half a second to disappear.
The only thing this test is really checking is that the thank you message is gone after clicking the close button, so we’ll name it accordingly.
Next, we’ll select the message using a partial text selector, similar to what we did a couple of tests ago.
We’ll then wait for that message to exist. After it appears, we’ll click the close button inside of it.
Then we’ll use the waitForVisible command, passing in the true flag as the last parameter to make it act like a waitForInvisible. Now our test will pass only after the thank you message is no longer visible on the page. The element will still be there, but it will be hidden from sight.
With all of our tests written up, it’s time to make sure they actually work. The tests take a little bit longer to run, as each one has to wait for a specific event to occur. Still, it’s better than having to wait 10 seconds for each test because you need some arbitrary number you know won’t fail.
Looks like all of our tests passed. It took a little longer, but sometimes browser testing does.
All the other waitFor commands work in a similar fashion. They take a selector, an optional timeout limit, and an optional ‘reverse’ flag. They simply check different states.
There’s one other ‘wait’ command to go over, but it’s special, so we’ll show that off in the next video.
Transcript & Code Sample: The waitUntil command
The last command we’ll take a look at in this module is waitUntil.
Similar to the waitFor commands, waitUntil is used to delay tests until a specific event or state has occurred. The difference is that with waitUntil, you’re the one defining what to wait for.
To use waitUntil, you pass it a function to run which will return either true or false, depending on the current state of the page.
If true is returned, the wait is considered complete and the test proceeds as normal. If false is returned, the check is run again after a short interval, delaying the test until the condition passes or the timeout is reached.
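That poll-check-repeat behaviour can be sketched in plain Node. This is an illustration of the idea, not WebdriverIO's actual implementation; the default timeout and polling interval here are assumptions:

```javascript
// Plain-Node sketch of the polling loop behind waitUntil: run `condition`
// repeatedly until it returns true, or throw once `timeout` ms have passed.
function waitUntil(condition, timeout = 500, interval = 50) {
  const deadline = Date.now() + timeout;
  while (true) {
    if (condition()) return true;            // condition met: stop waiting
    if (Date.now() >= deadline) {
      throw new Error('waitUntil condition timed out');
    }
    const next = Date.now() + interval;      // crude synchronous sleep
    while (Date.now() < next) { /* spin until the interval elapses */ }
  }
}

// Example: the condition only becomes true on the third poll
let polls = 0;
const result = waitUntil(() => ++polls >= 3, 1000, 10);
console.log(result, polls); // true 3
```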
In our cart checkout tests, we check that the button changes to 'purchasing', but not that it changes back to 'buy now'. That's because the 'purchasing' text change occurs immediately after the button click; the revert back to 'buy now', though, only happens after an undefined time delay.
In our other tests, we used the wait commands right out of the box. For this test we’re going to need to do a little more work. We can’t use a command like waitForEnabled, because the disabled state of the button remains the same throughout the sequence.
It would also be nice to avoid using waitForVisible on the success message, as that would needlessly tie the success of this test with another component on the page.
Instead, we can write a custom waitUntil command, to tell our script to wait until the text of the button has changed to a specific value.
One quick note before we continue. There is a waitForText command already available, but that command only checks for text to appear in the element. It doesn’t care what text is there, only that it has text. It won’t work for our needs as the button already contains the ‘purchasing’ text.
We'll build off the tests we wrote last video, adding another one to the mix. We'll use mocha's 'only' feature to run just this test, saving us a little bit of time during development.
Before we get to waitUntil, let's write the assertion that checks that the text is back to normal. To do this, we'll get the text inside the button, then assert that it equals 'buy now'.
Let’s run this really quick to make sure that our test fails without the proper wait.
As expected, it doesn’t pass, because it didn’t wait long enough for the text to change. Let’s fix that.
We can either define what’s called an anonymous function inside of the waitUntil command, or pass in a function as a variable. To keep things simple, I’ll stick with defining a function here.
We also need to define how long we want to wait before the test reports an error. We’ll stick with 3 seconds.
Inside our function, we need to write an expression that returns either true or false. Remember, we’re going to be returning ‘false’ until our condition is met.
For our check, we’ll get the text inside the button and see if it’s not “Purchasing”. We could also run a check that the text equals ‘buy now’. Either would work.
Just to demonstrate that it’s continuously running the check while waiting for the function to return true, I’ll add a console.log inside which will print out multiple times during our test run.
Let’s see it in action.
As the test runs, you can see the ‘here’ message be logged out multiple times. It’s actually running our button text comparison every half second.
Once the condition is met, the rest of the test continues and it now passes.
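Put together, the wait looks roughly like this in v4's synchronous style. The stubbed `browser` flips the button text back after a few reads so the sketch runs standalone, and the selector is illustrative:

```javascript
// Stub for the real v4 `browser`: getText returns 'Purchasing' for a few
// polls before reverting, and waitUntil polls the condition in a simple loop.
let reads = 0;
const browser = {
  getText: (selector) => (++reads < 3 ? 'Purchasing' : 'Buy Now'),
  waitUntil(condition, timeout) {
    for (let i = 0; i < 100; i++) {          // simplified polling loop
      if (condition()) return true;
    }
    throw new Error('waitUntil timed out');
  }
};

// Wait until the button text is no longer 'Purchasing'
browser.waitUntil(function () {
  return browser.getText('.buy-now-btn') !== 'Purchasing';
}, 3000);

// Now the assertion can safely check the reverted text
const text = browser.getText('.buy-now-btn');
console.log(text); // 'Buy Now'
```

Either condition works here: checking that the text is no longer 'Purchasing', or checking that it equals 'buy now'.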
Before finishing up, we should clean up our file. I’ll remove the console log and ‘only’ statement to ensure our tests run normally again.
waitUntil is a very flexible command that allows you to check all sorts of properties. You could use it to check that the CSS is set to a specific value, or check an attribute of an element. It really just depends on what you need.
That’s it for the wait commands and Module 4. In the next section, we’ll dive in to more customized commands with addCommand and execute.
6. Avoid Rework with “execute” and Custom Commands
Transcript & Code Sample: Create Custom Commands with 'addCommand’
As you gain more knowledge about WebdriverIO, your tests will become increasingly sophisticated. To manage this complexity, WebdriverIO offers a command that allows us to customize and consolidate our test definitions.
This functionality is so useful, the folks at WebdriverIO have dedicated a specific page in the guide for the utility.
The ‘addCommand’ command provides us with a way to write clearer tests that are easier to keep up to date. By combining a series of steps into a single request, we’re able to reduce the total amount of code written.
This technique is similar to Page Objects, which is a topic we’ll cover in depth in the next module, but requires less code to implement, which is why I want to talk about it first.
'addCommand' works by taking a name for your command, plus a function to run whenever this custom command is called. What goes inside the function is up to you; you can use it to consolidate a series of WebdriverIO commands, or have it run a completely custom Node.js script.
For this example, we’ll be taking another look at the review test script we wrote several lessons back. In it, we test the ‘product review’ functionality of our site. To do this, we type text in to two different input fields, then submit the form and validate whether the correct error messages or content is shown.
The ‘addCommand’ definition can go anywhere in our file, but I like to include it at the very top, outside of the ‘describe’ block. This helps ensure that it’s assigned before we try using it. It also makes it easier to find the command if we need to update it, say if a selector changes for one of our elements.
After naming our command, we’ll define the function that will run when it’s called. It can take any number of parameters depending on our need. Here, we’ll define two arguments, the email address and review text to use in the submission.
The first thing we'll do inside our function is set the email address input to the value passed in.
Next, we’ll set the value of the review content input to the text we’re given. This is the same thing we do in our test scripts below, just consolidated to a common command.
Because our tests check both successful and failed scenarios, we need to accept situations where the email or review isn’t entered. To do that, we’ll wrap both ‘setValue’ calls in conditional ‘if’ statements.
These if conditionals will allow us to run our command in a variety of ways. We can test with an email, but no review, with a review but no email, with neither, and finally, with both.
The last command to run is to submit the form. We don’t need to wrap this in a conditional, because we’re always going to submit the form when calling submitReview.
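The finished command looks roughly like this in v4's synchronous style. The stubbed `browser` records calls so the sketch runs standalone; conceptually, the real `addCommand` likewise makes the function available on the live `browser` object:

```javascript
// Stub for the real v4 `browser`: records every call, and implements
// addCommand by attaching the function under the given name.
const calls = [];
const browser = {
  setValue: (selector, value) => calls.push(['setValue', selector, value]),
  submitForm: (selector) => calls.push(['submitForm', selector]),
  addCommand(name, fn) { this[name] = fn; }
};

browser.addCommand('submitReview', function (email, review) {
  if (email) { browser.setValue('#review-email', email); }      // skipped in the "no email" tests
  if (review) { browser.setValue('#review-content', review); }  // skipped in the "no review" tests
  browser.submitForm('#review-email');                          // always submit the form
});

// Usage: a review with an email but no text, which pops the "missing review" error
browser.submitReview('email@example.com', null);
console.log(calls.length); // 2 — setValue for the email, then submitForm
```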
Now that we have our command written out, it’s time to use it.
The first test is a straightforward submission of a valid review. At the top of the test block, we’ll call ‘submitReview’, then copy and paste the email and review text values in.
Since we're handling everything inside our custom command, we'll remove the two setValue calls and the one submitForm call; submitReview handles all of this.
We are going to keep our ‘hasReview’ check, as that’s the part that’s unique to this test.
Now it’s time for our next test. In this one, we don’t set the value in either form field, we just call ‘submitForm’. Therefore, we’ll just replace submitForm with submitReview, and won’t pass anything in to it.
The main benefit of the switch is that we don’t need to duplicate the form selector inside our test. By sticking with our common command, we create consistency in our test patterns.
Our third test makes full use of the submitReview command.
First, we’ll use it to submit an empty review. This makes our first error message appear.
Next, we’ll submit the review with just an email address added. This pops the next error.
Finally, we’ll submit a valid review with both email and review text entered, which will allow us to test that the error messages are all cleared.
Our final test is similar to our previous changes, submitting an empty form, then attempting a submit with just the email address. This time though, we’re checking for focus. Still, the changes needed are the same.
With our updates made, let’s save the file and run our review test to ensure they all still pass. Our tests should operate exactly the same as before, so we should see 4 green dots.
As expected, things worked out well.
Reviewing the file, you can see the tests are a bit cleaner than before. We have fewer repeated selectors inside our tests, and fewer repeated commands as well. addCommand is really a great utility for writing more maintainable and readable tests, and can help when explaining to others what your test is doing.
That said, our test still has lots of repetition with selectors and ‘isVisible’ calls. We’ll cover how to improve that with page objects in the next module. Before that though, we’re going to dive in to two more customization options, starting with the ‘execute’ command in the next video.