Laravel Dusk Browser Testing: Best Practices & Tips


May 16th, 2022


The Laravel Dusk test suite for our test management tool Testmo currently consists of 788 complex browser tests. Running the entire suite against just a single browser takes about 2.5 hours. Keeping a test suite of this size easy to maintain, free of flaky tests, and robust enough to handle future app changes can be a challenge.

There are various best practices that can help with building and maintaining Laravel Dusk browser tests (and we wish we had known all of these when we started our test suite). In this article we want to share some of the lessons we've learned so you can write better Laravel Dusk tests yourself.

Use robust & maintainable selectors

This is likely the most important thing to get right when writing browser tests: it is absolutely critical to choose robust selectors for interacting with your app and asserting page states.

What does this mean? With Laravel Dusk, and browser tests in general, you interact with your app via DOM elements. For example, you need to tell the browser where to click and which input field to fill. You can use so-called XPath expressions to locate elements, target elements by their ID attributes, use form field names, and so on.

The most universal and popular way to identify elements is via CSS selectors. However, it is tempting to write selectors that easily break when you change your CSS classes or page structure. This is especially true when you use systems like Tailwind CSS or React where the class names aren't helpful for identifying elements. Here are some examples of selectors you should avoid:

// Breaks when you change button order
$b->click('.actions button:first-child');
// Breaks when you change app styling
$b->click('form button.negative');
// Breaks when you change sorting or add extra rows
$b->assertSeeIn('tr:nth-child(5)', 'Admin user');

Instead, it's better to tag your elements to uniquely identify them from your tests. This makes your tests much more robust against changes in your app UI. For example, if you want to submit a form by clicking an element that you tagged as your submit button, it doesn't matter if this element is implemented as a link, an image or an actual form button in your view. You can tag your submit element by giving it an attribute such as [dusk=submitButton]. Dusk has a handy way to select such elements by specifying the name with a leading @ in your tests:

// In your view:
<button dusk="submitButton">Register</button>

// Clicks the element with the [dusk=submitButton] attribute
$b->click('@submitButton');

// You can also chain selectors to be more specific
$b->within('@orderForm', function ($b) {
    $b->type('@name', 'Gabrielle Baker');
});

// You can also add IDs in your views to find elements more reliably
$b->assertSeeIn('tr[id=3]', 'Admin user');

Avoid flaky tests & use better waits

Flaky tests can be a big problem with test automation efforts. A flaky test is a test that sometimes passes and sometimes fails, even if you don't change the code. The problem with flaky tests is that they are unreliable, break builds and just cannot be trusted.

Browser tests are especially prone to producing flaky results. Flaky tests are often caused by timing problems, asynchronous operations or network delays that the test did not anticipate. For example, if you click a button in your app and then check for a resulting text, but the text sometimes takes a moment longer to render (e.g. because the browser updated more slowly than the assert code executed), the test will occasionally fail.

Flaky tests are not just annoying, they are also often difficult to find and difficult to fix. Here are some strategies we use to avoid, identify and fix flaky tests in our Laravel Dusk tests:

  • Wait, don't assume: It's important to wait for elements to appear, for the DOM state to be ready and for changes to complete before interacting with the page or asserting its state. Laravel Dusk comes with many methods to wait for elements, page changes or texts. Make sure to use them! Consider these examples:

// Wrong: not waiting for elements will cause failures
$b->click('@addUser'); // Performs network request
$b->within('@userDialog', function ($b) {
    $b->type('@name', 'Gabrielle Baker');
    $b->click('@submitButton'); // Performs network request
});
$this->assertUserExists(1, 'Gabrielle Baker');

// Correct: Wait for updates before interactions or assertions
$b->click('@addUser'); // Performs network request
$b->whenAvailable('@userDialog', function ($b) { // Wait
    $b->type('@name', 'Gabrielle Baker');
    $b->click('@submitButton'); // Performs network request
});
$b->waitUntilMissing('@userDialog'); // Wait
$this->assertUserExists(1, 'Gabrielle Baker');
  • Signal state from JavaScript: Sometimes it can be helpful to signal from your JavaScript code when the DOM is in a state where it can be interacted with. Something that is obvious to a user, such as a running animation, might trick your test into interacting with an element that is not quite ready yet. Remember the earlier tip to tag your elements with attributes such as [dusk=submitButton]? You could output your view without this attribute initially, and then add it from your JavaScript code once the submit button is actually ready. A simple Dusk $b->waitFor('@submitButton') will then take care of this in your test.
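As a rough sketch of this pattern (the element ID and the place where you hook in the JavaScript are assumptions on our part, not code from the article's app), the view starts untagged, the script tags the element when it's ready, and the test waits for the tag:

```php
// In your view: render the button without the dusk attribute initially
<button id="submit-button">Register</button>

// In your JavaScript, tag the element once it is actually ready, e.g.:
//   document.getElementById('submit-button')
//     .setAttribute('dusk', 'submitButton');

// In your Dusk test: wait until the element has been tagged
$b->waitFor('@submitButton');
$b->click('@submitButton');
```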

  • Test against different browsers: Another non-obvious tip is to test against different web browsers from the beginning. This is not just a good idea to make sure your app works with browsers other than Chrome. It is also a great way to detect timing issues that cause flaky tests, because different browsers sometimes behave slightly differently. So our tip: don't just run your Dusk tests against the default Chrome browser, but at least add Firefox and maybe also Edge, so issues surface earlier.
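Dusk ships with ChromeDriver support out of the box, but the driver() method in your DuskTestCase returns a plain RemoteWebDriver, so it can point at any WebDriver-compatible server. A minimal sketch, assuming you run a separate Selenium/GeckoDriver server for Firefox (the localhost:4444 URL and server setup are assumptions, not part of the original article):

```php
// In tests/DuskTestCase.php
use Facebook\WebDriver\Remote\DesiredCapabilities;
use Facebook\WebDriver\Remote\RemoteWebDriver;

protected function driver()
{
    // Assumes a Selenium server with GeckoDriver is listening here
    return RemoteWebDriver::create(
        'http://localhost:4444/wd/hub',
        DesiredCapabilities::firefox()
    );
}
```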

  • Track & identify flaky tests: In order to fix something, you need to measure and identify the problem first. To find flaky tests, you need to know which tests randomly pass and fail, which is easier said than done. Most test environments don't make it easy to track or compare results over time. There are a couple of options here. First, you can submit your test results to a QA tool that supports flaky test detection. Our tool Testmo allows you to find flaky tests (as well as slow or failing tests), and it's easy to submit your PHPUnit results with our test automation support, including from CI pipelines. Another option is your CI tool: some CI tools also support flaky test detection if they can track and report automation results. Whichever option you use, make sure to keep an eye on flaky tests, as they can cause a lot of issues when you grow and scale your test suite.

Error detection, debugging & manual testing

Asserting that your application is in the state you expect after various actions is the main reason you write a test suite in the first place. Our recommendation: don't just test the UI state or the database state in isolation; write your asserts against both the page DOM and your data model. Various classes of bugs can go undetected if you only verify that the UI was updated correctly. Likewise, if you only check whether the database was updated after a form submit, you might miss that the UI failed to show the confirmation message. Always test both the user interface and the database!
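For example, a test that adds a user could assert both layers like this (the selector names, texts and table columns here are illustrative assumptions, not taken from the article's app; assertDatabaseHas is Laravel's standard database assertion):

```php
$b->type('@name', 'Gabrielle Baker');
$b->click('@submitButton');

// 1) Assert against the page DOM
$b->waitForText('User was created');
$b->assertSeeIn('@userList', 'Gabrielle Baker');

// 2) Assert against the data model
$this->assertDatabaseHas('users', [
    'name' => 'Gabrielle Baker',
]);
```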

We also found it very helpful to add some general catch-all asserts and features to our tests. One thing that has proven invaluable is fetching the browser log at the end of each test and raising an error if the log contains unexpected warnings or errors. This is a great way to catch JavaScript errors or warnings that didn't trigger any other asserts. We also take screenshots of the browser state after both failed and successful tests. These can be super helpful for reviewing or debugging test issues, especially if the test suite was executed in a different environment such as your CI pipeline.
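One way such a catch-all check could look is sketched below. The getLog() call comes from the underlying php-webdriver API (reachable via Dusk's public $b->driver property); which log levels you treat as failures, and where you hook this in, are assumptions you should adapt to your own suite:

```php
// At the end of a test (e.g. in a shared helper or teardown):
$b->screenshot('after-' . $this->getName()); // tests/Browser/screenshots

// Fetch the browser console log via the underlying WebDriver instance
$entries = $b->driver->manage()->getLog('browser');

foreach ($entries as $entry) {
    // Treat severe entries (e.g. JavaScript errors) as test failures
    if ($entry['level'] === 'SEVERE') {
        $this->fail('Unexpected browser log entry: ' . $entry['message']);
    }
}
```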

Last but not least, we also wanted to mention manual testing. With a big test automation suite in place, it's easy to forget that manual testing can still be important for finding issues you didn't consider in your tests. Manual tests can also be critical for features that are difficult to automate, such as payment flows, third-party integrations or production sites.

Laravel Dusk test runs displayed in Testmo

If you work on a project with dedicated testers or even have a full QA team in your company, you are likely already familiar with this. Your tester colleagues likely already use test case management or exploratory testing features in a tool like our Testmo. You could also start with Excel, Google Docs or a simple note-taking app. Whatever tools you use, a healthy mix of automated, manual and exploratory tests ensures that you have great test coverage to increase the chance of finding bugs before your users do.

This is a guest posting by Dennis Gurock, one of the developers of Testmo, a QA tool to manage manual & automated software tests. Testmo is built using Laravel and uses Laravel Dusk extensively. If you are not familiar with QA tools, he recently curated a list of the best test management tools to try.


Eric L. Barnes

Eric is the creator of Laravel News and has been covering Laravel since 2012.