How To QA: Part 2
Recap / Overview
In part one of this “How To QA” guide, we talked about why the quality of a website is your responsibility. Only you and your team can know what you know. Only you know what your organization needs from a website. Ultimately, that responsibility rests with you.
We understand this. We understand how lonely this can feel. Part two of this “How To QA” guide offers tools and methods that will help you better handle your QA burden.
Part one focused on these main points:
- You are responsible for the ultimate quality of the product.
- You should have (and communicate) clear expectations of what excellence looks like.
- You should feel empowered to demand excellence.
- You should take all the time you need to verify quality before signing off.
- You should be thorough.
- You should be adversarial in a professional way.
- You should avoid assumptions and insist on evidence.
- You should break big complex things into smaller simpler parts.
- You should use all of your available tools.
Part two of this how-to guide centers on three main points:
- Planning is the essential aspect of QA.
- A number of QA methods are available to help you.
- A growing list of tools is at your disposal and will greatly help ease the QA burden.
The root of all good planning is taking big things and breaking them into smaller components. When you plan a party, you break the experience as a whole into smaller elements: menu, decorations, music, location, guest list, games, schedule, and so on.
When planning for QA, you take the entire new website or web application and its related systems into account, then break everything down into smaller components for specifically testing performance, functionality, look and feel, error handling, and assumptions. Each of these components can be reduced into smaller sub-components as needed, but this general breakdown of test activities greatly reduces the initial, overwhelming complexity of the project and makes the process far easier to manage.
Plan Performance Testing
Plan to Prioritize Everything Related to Slow Performance
It’s axiomatic that every website and web app needs to load fast. Pages need to load quickly, navigation must progress rapidly, and interactive page elements must do their work smoothly and quickly. The list goes on. You will find thousands of well-researched articles on the web that stress the importance of fast page load times. Planning to test website performance is a critical part of QA.
One part of performance test planning is a bit abstract. As you plan your testing, make sure you’re well-positioned to pay close attention to page load speed, and to complain about it. Don’t brush off or dismiss what might maybe… possibly… be a performance problem. Listen to your intuition. If it ‘maybe’ ‘sort of’ feels slow, it is slow. Plan to make some noise about it.
Plan to Test on a Production Level Environment
Often you will conduct QA on a development or staging instance of the website you will eventually launch. To save costs, the resources dedicated to this environment are typically fewer than those allocated to the final production environment. Your developer will no doubt assure you that production will have more resources and perform much faster than the instance you are reviewing. This introduces significant risk into the process.
To illustrate this point, let’s imagine it’s launch day. You have conducted an extensive planning and research process to develop your new site. It has been designed with painstaking effort. Even more effort was then put into building it. You then conducted careful testing with a QA team, identifying and resolving bugs and remediating problems. You have invested a lot of time and money up to this point. All of the testing and reviewing has taken place on the staging site, and the build has been declared finalized and ready to launch. But there is a missing piece. Nobody has accounted for potential performance problems that will emerge on launch day. There is no way to verify what will happen when the site goes live. Are you really now, after all this effort, going to voluntarily leave these potential performance problems out of your QA cycle?
With this in mind, we always urge our clients to conduct QA in an environment that is at parity with what will be used in production. Is this more expensive? Yes. But it is much cheaper than launching a website that falls over the instant it goes live.
Plan to Expose the Performance Problems Early in the QA Process
Performance issues are often systemic, and tend to point to bigger underlying architecture problems that are very difficult to fix once a site is live. Plan to find these issues during QA. Plan to test in a high quality environment from which you can expect fast performance.
Plan to Dump a Ton of Bricks on Your Website
There are numerous tools available to help you simulate massive amounts of concurrent user traffic. We’ll detail some of those tools below, but in your planning stage you should prepare to hit your website hard during testing. Remember, you want to surface the scary stuff before you launch, not be surprised by it post-launch. Trying to fix the airplane while it is in flight is insanely hard compared to fixing it while it is stowed away in a nice warm dry hangar before it takes off.
You will want to plan to test performance several times during the course of your QA process. Your development team will fix performance issues that you discover, but by doing so they will likely introduce new bugs. This is completely normal. The QA cycle is tuning and refining a complex system. Plan to not get overly annoyed with your dev team. You are working together collegially to reach a high quality result. Plan to take some time with this process, and your patience will be rewarded.
Plan to Find the Performance Breaking Point
With the availability of sophisticated cloud platforms like AWS, Azure, and Google Cloud, you can throw tons of money at your web infrastructure and never see a performance problem. But pretending performance problems won’t ever happen is a high-risk and potentially expensive approach. The smart thing to do is to plan for a few strategic performance targets; if those targets are met, the site is good enough. For example, most websites perform perfectly well if they can serve pages in less than a second to 200 concurrent users. This is not a very high bar, but it is typical of what many websites need. You could pay for more horsepower, but you probably don’t need to.
So plan your target performance level. Review your traffic analytics and make an informed decision about how many concurrent users you want to support and how quickly you want pages to load.
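Once you have chosen a target, you can encode it directly in a load-test harness so that “fast enough” becomes a pass/fail check rather than a feeling. The sketch below is a minimal illustration in Python: the `fetch_page` function is a hypothetical stub standing in for real HTTP requests against your staging environment, and the 200-user and one-second figures simply mirror the example target above.

```python
import concurrent.futures
import random
import time

def fetch_page() -> float:
    """Hypothetical stand-in for a real HTTP request; swap in an
    actual GET against your staging URL for a real load test."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))  # simulated server response time
    return time.perf_counter() - start

def run_load_test(concurrent_users: int) -> list[float]:
    """Fire `concurrent_users` simultaneous requests and collect latencies."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(fetch_page) for _ in range(concurrent_users)]
        return [f.result() for f in futures]

def p95(latencies: list[float]) -> float:
    """95th-percentile latency, a more honest metric than the average."""
    ordered = sorted(latencies)
    return ordered[int(0.95 * (len(ordered) - 1))]

latencies = run_load_test(concurrent_users=200)
target_seconds = 1.0  # the "under a second" goal from your analytics review
print(f"p95 latency: {p95(latencies):.3f}s (target: {target_seconds}s)")
assert p95(latencies) < target_seconds, "performance target missed"
```

Checking the 95th percentile rather than the average keeps one lucky fast response from masking a slow tail, which is usually what your users actually feel.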
Plan Design Testing
If you are working with a web designer or a web design agency who is part of your QA process, you’re working with a professional. An experienced designer knows that a lot can go wrong in the process of converting design comps and specs into working code that shows up correctly in a browser. If your designer wants to have several review cycles where they get to validate your web developer’s work for pixel perfection, don’t shrug them off. This is a designer who is dedicated to excellence. Embrace this.
Plan to test the fidelity of the implemented product against your final design specification. The person best equipped to do this is the designer themselves. They see details that the untrained eye tends to miss. Plan to allow your designer or designers to conduct their own QA cycle, and make sure that they have access to the dev team. Your designer may become aware of issues with their design only once it is built out in code, issues that they could not anticipate when they put the comps together. Include adequate time for the designers and the coders to go back and forth with one another to shake out all of these issues and fully refine the site before it launches.
This design testing process is often one of the hallmarks that separates amateur websites from professional ones.
Plan Functional Testing
The functional testing stage is where you are most likely to spend the bulk of your time. Plan for this, and allow a generous amount of time to make sure it can be done carefully and completely. The functionality of even the simplest website is a highly complex and layered system of elements. The most important thing to plan for here is managing the inherent tedium of the process. You are going to be bored to tears. Make sure you plan a schedule that allows you to test for just a couple of hours at a time, with breaks in between. More than this will just cause your eyes to glaze over and will be a waste of your time.
Plan to Test Junk
You need to test early and often. This will really help your dev team. They desperately want you to notice faulty assumptions and missing components as early as possible. One of the most terrifying things about web development is its complexity. Because web development is so complex, it’s not uncommon for surprisingly big, dumb, obvious things to get missed by many people over a long period of time. A lot of times some of those big dumb obvious things get launched into production. The only way to avoid this is to test early and often.
But there’s a problem with this. You will be testing web functionality that is not fully baked. Your developers will ask you to test units of functionality that will only sort of work, in the context of vast parts of the website that definitely do not work at all. It will be nearly impossible to tell what’s going on, and how things are supposed to be working. It’s maddening. But it’s still very necessary. The earlier in the build process you can spot problems and call them out, the smoother the process will go overall.
In part one of this guide we urged you to be adversarial with your developers. But it’s a friendly competition with the goal of everyone’s success. The developers get points if they release bug-free code to you. You get points if you find nice juicy bugs for them to fix. To find those bugs you need to test stinky junk. Just hold your nose and do the work, and know your reward will be an excellent end result.
Plan to Test Boring Stuff
Good QA is extremely thorough. And being thorough is boring. Making sure that every aspect of a website’s functionality has been validated is a tedious process. Plan for this. Know it in advance and look for ways to turn it into something interesting.
Some of my clients have reported that they enjoy the functional QA process because it is meditative. Testing multiple permutations of the same functionality can become a sort of zen exercise. Getting into the craftsperson’s mindset means finding the underlying grace and beauty of thoroughly working through something. Plan to engage with the boring stuff in a way that increases your peace.
Plan to Test Over and Over Again
We know that the functionality of any website is complex and multilayered. There are interdependencies and invisible relationships. Many aspects of the code touch many other aspects. When you find a bug in one area and your developer fixes it, there’s a decent chance they will break something somewhere else. This is quite normal with new code. With newly-built functionality comes instability. As you and your developers continue to work through and remedy problems, things will eventually stabilize. And that is the only way to get to the place of stability - by testing and retesting.
Plan To Be Someone Else
One vector of functional complexity is the result of your website being designed to serve multiple personas. Different types of users have different needs from your site, and may take different paths through it. Each of these requires a unique layer of QA planning. You can make this fun. During the QA process you are in a position to become an actor on a stage, inhabiting someone else’s persona. It may sound hokey, but it’s really valuable to the process for you to embrace and inhabit this persona. When you do, things will be exposed during QA that would otherwise be missed. This is yet another great opportunity to take what might be just a decent website and ensure it ends up as an excellent website.
Plan Error Testing
What happens when one of your customers tries to pay for something on your website with an invalid credit card? What happens to the transaction? What kinds of error messages do they see, if any? Do those messages make sense? Are they helpful to the user? Are they consistent with your branding and marketing strategy?
Error handling is one of those user experience items that often goes unplanned and untested in QA cycles. Factoring this into your process is another marker of truly professional level web development and QA. A well-done website specification, most likely detailed by your design team in their process, will include a plan for how various kinds of functional errors are handled.
Some types of errors must be handled by the server or by the APIs that provide services to the servers. For example, if a customer tries to use a credit card that is invalid or blocked, only the credit card processor can determine this problem and return an error. Plan to map out all of the likely error cases and include them in your test plan.
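One way to make that mapping concrete is to table the error cases as data and walk every one of them in your test plan. The sketch below is illustrative only: the processor codes and messages are hypothetical, and your payment provider’s real error codes will differ.

```python
# Hypothetical processor error codes; your payment provider's real
# codes and wording will differ.
PROCESSOR_ERRORS = {
    "card_declined": "Your card was declined. Please try another payment method.",
    "expired_card": "That card has expired. Please check the expiry date.",
    "insufficient_funds": "The card was declined. Please try another payment method.",
}

FALLBACK = "Something went wrong with your payment. You have not been charged."

def user_message(processor_code: str) -> str:
    """Map a raw processor error to a branded, human-readable message."""
    return PROCESSOR_ERRORS.get(processor_code, FALLBACK)

# Walk every mapped case, plus an unknown code to exercise the fallback.
for code in list(PROCESSOR_ERRORS) + ["gateway_timeout"]:
    message = user_message(code)
    assert message, f"no user-facing message for {code}"
    assert code not in message, "never leak raw processor codes to the customer"
```

The fallback case matters most: it is what your customer sees when the processor returns something nobody anticipated, so it should reassure them that no charge was made.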
Lastly, include tests for lost internet connections in your plan. Try to simulate a loss of connectivity at various stages throughout your functionality cycle. Remember the maxim that you and the QA team should do the suffering, not your customer.
Plan To Be Like Puck and Make Mischief
Above we discussed a web developer’s greatest fear: the website launches into production and, all of a sudden, a really dumb and obvious mistake is found. Our team recently launched a complex website after months of intensive work, only to realize after the site was live that shipping calculations were grossly underestimating costs. This could absolutely have been avoided, if only we’d had someone playing the role of Shakespeare’s Puck.
Puck looks for trouble. He delights in creating chaos. He is gleeful at the thought of stirring the pot and getting people worked up. So plan to be the QA Puck. And if you can’t be Puck, designate a person who will do their utmost to create mischief on the website. “What happens if I try to buy a million dollars’ worth of this website’s product? Does sales tax calculate properly? Are the commas in the right place? What happens to shipping? Does the credit card processor crash?”
Stirring up trouble and trying to break things is one of the best ways to discover and debunk invisible assumptions. Do it early and often. You’ll annoy people enormously, but remind them that their annoyance means a customer doesn’t have to be annoyed.
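A Puck-style test case can be as small as a few lines. The helper below is a hypothetical sketch of an order-total calculation, not anyone’s real checkout code; the point is to feed it an absurd order and check the tax, the commas, and the formatting at the extreme.

```python
def order_total(unit_price_cents: int, quantity: int, tax_rate: float) -> str:
    """Compute and format an order total. Extreme inputs are exactly
    where comma placement and rounding assumptions get exposed."""
    subtotal = unit_price_cents * quantity
    tax = round(subtotal * tax_rate)
    total = subtotal + tax
    return f"${total / 100:,.2f}"

# A normal order...
print(order_total(unit_price_cents=2500, quantity=1, tax_rate=0.0825))
# → $27.06

# ...and Puck's order: a million dollars' worth of product.
print(order_total(unit_price_cents=2500, quantity=40_000, tax_rate=0.0825))
# → $1,082,500.00
```

Working in integer cents rather than floating-point dollars is a common way to avoid rounding surprises, and it is precisely the kind of assumption a mischief test should probe.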
Plan To Script
Websites are far too complex for you to hold all of their details in your head. Trying to wing your way through QA from memory is a sure way to fail. Instead, acknowledge and embrace the complexity and write testing scripts. It’s more fun that way, honest. Dive into the process of writing up test scenarios for all of the relevant personas on the website. It’s a lot of work, but here again you will benefit from making a big problem manageable by breaking it into chunks.
You will use your scripts over and over again as you test and retest throughout the QA cycle. The scripts also serve as an excellent reference for your dev team, and can help guide them to the exact location of a problem. This speeds their work along and greatly helps the whole process. A lovely side effect of this level of planning, if done early in the project, is that it can guide the other parts of the planning, such as design and build planning. You tell your team what you are going to test, they build it to suit, and everything proceeds in a surprisingly smooth and delightful way.
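Test scripts don’t need special software; even a structured outline works. As one possible sketch, here is a script represented as data so it can be rerun verbatim on every retest. The persona, steps, and wording are hypothetical examples, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class TestStep:
    action: str    # what the tester does
    expected: str  # what the tester should observe

@dataclass
class TestScript:
    persona: str
    name: str
    steps: list[TestStep] = field(default_factory=list)

# A hypothetical script for a content editor persona; your own
# scripts will mirror your site's actual flows.
archive_script = TestScript(
    persona="Content editor",
    name="Monthly archive of old content",
    steps=[
        TestStep("Log in to the CMS", "Dashboard loads with editor tools"),
        TestStep("Filter articles older than 90 days", "Only old articles are listed"),
        TestStep("Select all and choose Archive", "A confirmation prompt appears"),
        TestStep("Confirm the archive action", "Articles disappear from the public site"),
    ],
)

# Print the script as a numbered checklist for the tester.
for i, step in enumerate(archive_script.steps, start=1):
    print(f"{i}. {step.action} -> expect: {step.expected}")
```

Because each step pairs an action with an expected observation, a bug report can point at the exact step that failed, which is what speeds the dev team along.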
QA Methods
There are a number of helpful methodologies that can be applied to the work of website QA. Think of these as different and complementary ways to approach a common problem. In our testing work we tend to combine several of these methods throughout the QA life cycle, leveraging the specific benefits of each one to achieve the best results.
Test Against User Personas
One of the most powerful QA methods is testing based on user personas. If you’re serious about your website and the work it does for your organization, you have likely already defined and worked with user personas. These are short biographical profiles of the types of people most likely to use your website. Each persona is crafted to describe a group of users or customers so that the site can be better tailored to serve their specific needs and desires. In most cases, between three and seven personas are needed for an optimal result.
Personas are probably part of your marketing strategy process. You may have already defined your user backstories and their underlying motivations for interacting with your website. If you’ve done this work, we encourage you to build on it and test for these personas in the QA cycle. Essentially, your QA team will do their best to act like a member of one of your persona groups while they are testing the website. Another opportunity to add a little fun to the process.
An example of a user persona that we commonly employ is the website content editor persona. Most websites we develop include a content management system to support the client’s team in maintaining site content. These people tend to be extremely busy, typically juggling multiple roles in their organization. We like to take on their personalities and mimic their day-to-day demands on the CMS, because that will make sure we build a system that meets their specific needs while also serving the other website user personas. During QA, we create a number of user flows based specifically on the content editor persona. All of these flows involve using particular back-end tools that other user groups would never even know about.
Well-defined user personas are rich source material. They serve as deep wells from which a lot of useful material can be derived. Below, we talk further about user flows specifically. These flows won’t have any value or meaning if they are not anchored to the specific personas that help define a user's motivations for proceeding through a specific website flow.
Test User Flows
You create test flows by building on your user personas and testing the various paths a given user might take. Mapping out as many of these user flows as is practicable helps you navigate thoroughly through the QA process.
Continuing with the content editor persona example above, we can come up with several flows that a content editor might go through hourly, daily, weekly and monthly. For example, a content editor on a website may need to archive old content items every month. This workflow is very specific, and very different from the daily task of creating or updating content and advancing it through the editorial workflow process for publication. Each of these flows should be mapped out from beginning to end, and then tested in your QA process.
The user flows for your website are an essential component of testing because they describe an incoming set of user expectations and also describe an expected outcome from the functional actions taken by that user. They are the framework you build for the user to enter and move through the site to successfully get what they came for. This framework is measured against the overall persona to which the flows belong and the result is a thorough and complete view of the essential components of website functionality.
Test Smaller Units
To better understand and test the user flows we’ve described above, those user flows will need to be broken into smaller functional units. For example, our content editor persona needs to log in to the website in order to access their user account before they can begin their work. This login flow is a discrete functional unit that could contain its own bugs, but it could also trigger bugs in other parts of the flow. Once a user logs in, more and more discrete testable units of the flow will be revealed. Our content editor might need to save a draft article, which is a distinct unit. Our content editor might also need to attach categories to an article, another distinct unit.
We urge you to take the big problem of QA and break it into smaller pieces, and then to break those pieces into even smaller pieces. These even-smaller pieces are the atomic-level test units of QA. They originate from the user personas, and cascade into the user flows. This breakdown process creates a testing framework that can scale to support very complex systems spanning multiple user types and thousands of user interactions.
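In code terms, each atomic unit becomes its own small check, and the flow test simply chains them. The functions below are simplified stand-ins for real site operations (a real login check would hash passwords, for instance), but they show the shape of the breakdown from persona to flow to unit.

```python
# Each function is one atomic test unit; names and behavior are
# illustrative stand-ins for a site's real operations.

def login(users: dict, username: str, password: str) -> bool:
    # Illustrative only: a real system would hash passwords.
    return users.get(username) == password

def save_draft(drafts: list, title: str) -> None:
    drafts.append({"title": title, "categories": []})

def attach_category(draft: dict, category: str) -> None:
    draft["categories"].append(category)

# Atomic units tested in isolation first:
users = {"editor": "s3cret"}
assert login(users, "editor", "s3cret")
assert not login(users, "editor", "wrong-password")

# ...then composed into the content-editor flow:
drafts: list = []
assert login(users, "editor", "s3cret")
save_draft(drafts, "October newsletter")
attach_category(drafts[0], "news")
assert drafts[0]["categories"] == ["news"]
```

Testing the units in isolation first means that when the composed flow fails, you already know which individual piece is sound and which one to suspect.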
Test Worst Case Situations
In the planning section above I suggested that you plan to be like Puck from Shakespeare’s A Midsummer Night’s Dream. Puck is the one who goes around stirring up trouble among the other characters and looking for ways to poke holes in everyone’s idea of what’s acceptable. Maybe you’d rather think of Puck from MTV’s The Real World. That Puck was like Shakespeare’s in the sense that he stirred up trouble and questioned what was normal and allowed.
If you’re Puck in QA, then you are the one looking for the weirdest, strangest ways to break the system you are testing. You are the one who relishes annoying others. The persona of the rogue mischief-maker is an essential role on the QA team. It’s inevitable for assumptions to build up in the design and building of complex systems. We don’t know we’re making assumptions; it’s just a thing that humans do. We don’t mean to. We just do it as a way of coping with complexity.
We all start off in our QA work trying to be a supportive team member. This means we’re friendly and helpful. But that stance misses the point. Puck comes in and tries to upend assumptions. Puck is the one who pokes around, finds the worst-case scenarios in a system, and exploits them to break things. We’re supposed to break things during QA. That’s what’s helpful. Finding ways to embrace this duty means finding ways to do the work better.
Every system has its limits. A website is designed and built to handle a certain type and quantity of load for a certain group of users. The job of QA is to push these limits and force errors to emerge. You may force errors and then be told to back down by the project lead. This is fine and normal. This is good. What is not good is not finding these limits in the first place. You can’t know the shape of the thing until you find the boundaries that define it. Forcing errors makes them visible and allows you to find the boundaries.
Eyeballs (Both Weathered and Fresh)
It’s rarely the case in QA that you can get as many people to test a website as you would like. Nevertheless I want to emphasize the importance of it here. Part of the value of having lots of people test is their varying levels of experience and different points of view.
No matter the number of people in your testing group, in good QA you want two kinds of eyes: fresh eyes and weathered eyes. You want people testing who understand the system intimately and have thoroughly tested it already. They know how to quickly trigger problem areas and expose regression problems. But these are the same people who are so familiar with the system that they carry at least a few assumptions about it. These are the people with weathered eyes. So you also need fresh eyes: people who have never used the system before. These fresh eyes are one of your greatest sources of assumption-busting insights.
On a recent project our team worked on, we only had weathered eyes available for most of the QA process. It was only late in the cycle when we were close to launching that we managed to get some fresh eyes on the shipping calculation aspect of the system. This person almost instantly saw a glaring problem. The problem had been missed by everyone else because we had all become habituated to the website we were testing. Some things become invisible and can only be seen by fresh eyes.
It’s very important for you to test the same thing over and over again. But why? If a part of the website has been built, tested, QA’d, and fixed, why would it break all over again for no reason?
Why is something that was working before now suddenly not working? We have heard this complaint from clients many times over the years. The answer is that websites are highly complex systems with many interdependent parts. They do a great deal of work for your organization, and they encode complex business rules that are essential to it. In order for these websites to remain manageable and adaptable over time, the moving parts have to fit together well. Functions on a website that appear, to the untrained eye, to have no relation to other functions are in fact intimately tied to them.
When a bug is found during QA and a web developer needs to apply a fix, that developer has a choice. They can fix it quickly with a patch or they can take more time and apply a more carefully crafted, comprehensive refactoring. Taking the extra time for careful craftsmanship pays for itself in the long run. Excellence is its own reward and a developer’s best defense when it comes to websites. However, this means that complex functional systems must be checked and rechecked as they go through the QA process. In a complex system, things eventually quiet down and become stable later in the QA process, but prior to that time, as the system is tuned and harmonized, things are noisy and chaotic. This is normal.
QA Tools
You’re not alone in your QA journey. There is an ever-growing list of tools to help you with the different aspects of the QA process.
Planning: Basecamp, Trello, Asana, ClickUp
You are likely already familiar with a number of project management tools. Some tools are more niche-specific than others. The Solspace team tends to favor tools that are less opinionated and offer more flexibility in how projects and QA cycles are managed. Because our team is flexible and works hard to select the management style that best suits each project, we gravitate toward tools like Basecamp, Trello, and ClickUp that allow us to manage multiple projects in various ways within the same tool. For us, the ability to make the equivalent of nested to-do lists supports our planning needs well. The ability to attach milestones to work activity is often essential as well, especially if you are fond of Gantt charts. Most of these planning tools also allow you to assign tasks to people, an essential feature for most website QA projects.
Bug Tracking: Basecamp, Trello, Asana, ClickUp, Jira, Monday, GitHub
You will evaluate and select your bug tracking tool based on your QA and dev team’s needs, of course. But keep in mind that sometimes less is more, and a simple flexible tool like Trello is all you need.
To effectively manage your bug tracking process, you need to be able to provide a short summary of each bug along with a longer description. You need to be able to track the steps used by the developers to reproduce the bug. And you need to be able to attach screenshots and videos of bug states. Assigning bugs to specific people so every bug has an owner is essential, as is the ability to trigger email notifications when bug states change.
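Whatever tool you choose, the record you keep per bug should carry those fields. Here is a minimal sketch of such a record as a data structure; the field names are our suggestion, not any particular tool’s schema.

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    summary: str                  # short one-liner
    description: str              # longer explanation of the problem
    steps_to_reproduce: list[str] # exact steps the developer can follow
    assignee: str                 # every bug needs an owner
    attachments: list[str] = field(default_factory=list)  # screenshot/video paths
    status: str = "open"          # e.g. open -> fixed -> verified

bug = BugReport(
    summary="Shipping cost wrong for multi-item carts",
    description="Carts with 3+ items underestimate shipping by roughly half.",
    steps_to_reproduce=[
        "Add three items to the cart",
        "Proceed to checkout",
        "Compare the quoted shipping to the carrier rate",
    ],
    assignee="dev-team",
    attachments=["cart-shipping.png"],
)
print(f"[{bug.status}] {bug.summary} -> {bug.assignee}")
```

If a report in your tracker is missing any of these fields, especially the reproduction steps, the developer’s first task becomes reconstructing the bug instead of fixing it.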
When you can nest your bug testing process inside the user flows and user personas you’ve already built, you can have a more thorough and comprehensive QA cycle. Some tools, like GitHub Issues, support this by integrating with the code repository so that when developers commit fixes, those commits are attached to your bug reports. You can then validate the fixes in a streamlined and efficient manner, and the developers can easily locate the specific commits that fixed a bug or triggered a regression.
Emulators and Cross Browser Testing Tools
The state of web browsers today is much more advanced than it was 20 years ago. There is far more predictability in how a web page will look across modern browsers. Nevertheless, on most websites it is still necessary to perform cross-browser QA testing. There are a number of tools available to support this, and depending on your needs, simple approaches tend to be best.
Our team uses BrowserStack, which includes a wide variety of cross-browser testing tools.
Speed Testing Tools
Spending time on improving page load speed is never a waste. Every QA plan should include multiple extensive speed testing cycles as a baseline. Benchmarks and page load speed goals should be set as part of your testing plan. There are page load speed testing tools available that allow you to gather data and evaluate load issues throughout the stack.
We often use Google’s tools for this: https://pagespeed.web.dev
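If you want to fold those results into an automated check, the PageSpeed Insights API returns Lighthouse data as JSON. The sketch below parses a trimmed sample of that response; the field names reflect our reading of the v5 API and should be verified against Google’s current documentation before you rely on them.

```python
# A trimmed sample of the JSON shape returned by the PageSpeed
# Insights v5 API. The field names here are our best understanding;
# verify against Google's current documentation.
sample_response = {
    "lighthouseResult": {
        "categories": {"performance": {"score": 0.87}},
        "audits": {
            "server-response-time": {"numericValue": 430.0},  # milliseconds
        },
    }
}

def performance_score(response: dict) -> float:
    """Extract the 0-1 Lighthouse performance score from the response."""
    return response["lighthouseResult"]["categories"]["performance"]["score"]

score = performance_score(sample_response)
print(f"Performance score: {score:.0%}")
assert score >= 0.8, "below the benchmark set in the test plan"
```

Running a check like this on each QA cycle turns your speed benchmark from a line in a document into a gate the build has to pass.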
SEO Validation Tools
No website should ever be launched with obvious search engine optimization errors. When engaging in a complex web development project, SEO tends to be the last thing that gets attention. SEO mistakes do not usually result in functional failures the way page speed mistakes do, so they are deprioritized. Nevertheless, SEO mistakes can be costly. Plan for your QA cycles to include SEO validation. Most of the time it is sufficient to use automated tools for this validation during QA. Separate SEO optimization projects are normally undertaken with subject matter experts who take a bigger picture view of the problem.
Let’s be honest. We all know that when it comes to SEO, Google’s metrics are what really matter. So you may as well use the tools provided by Google Analytics to do your QA-level automated SEO audits.
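Some of the simplest SEO validations can also be automated with a few lines of code. The sketch below uses Python’s standard-library HTML parser to check a page for a title tag and a meta description; the 160-character limit is a common rule of thumb for descriptions, not a hard specification.

```python
from html.parser import HTMLParser

class SEOCheck(HTMLParser):
    """Collect the handful of tags a basic automated SEO pass cares about."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.meta_description = ""
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and a.get("name") == "description":
            self.meta_description = a.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

# A hypothetical sample page; in QA you would feed in real page source.
page = """<html><head><title>Acme Widgets | Home</title>
<meta name="description" content="Hand-made widgets, shipped worldwide.">
</head><body></body></html>"""

checker = SEOCheck()
checker.feed(page)
assert checker.title, "missing <title>"
assert 0 < len(checker.meta_description) <= 160, "description missing or too long"
```

A dedicated crawler or SEO suite goes much further, but a check like this catches the embarrassing cases, pages shipped with no title or description at all, before launch.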
ADA Validation Tools
Web accessibility is another one of those important but often neglected parts of QA. Similar to the expectations you’ve set for page load performance and SEO performance, baseline expectations should be set for ADA compliance too.
Web accessibility refers to the creation of alternate ways for people with disabilities to interact with a website, using specially designed interfaces. There are screen readers for blind users, for example, and shortcut tools that help motor-impaired users navigate the site more easily. Set a compliance level based on what you know of your audience and use automated tools during QA to validate it.
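Full accessibility tools such as axe, Lighthouse, or WAVE cover far more ground, but even a tiny script can catch one of the most common failures: images without alt text. A minimal sketch, using a hypothetical sample page:

```python
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    """Flag <img> tags with missing or empty alt text, one of the
    simplest automated accessibility checks to run during QA."""
    def __init__(self):
        super().__init__()
        self.missing_alt: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            a = dict(attrs)
            if not a.get("alt"):
                self.missing_alt.append(a.get("src", "(no src)"))

    def handle_startendtag(self, tag, attrs):
        # Treat self-closing <img ... /> the same as <img ...>.
        self.handle_starttag(tag, attrs)

# Hypothetical page source; in QA you would feed in the real page.
page = """<body>
<img src="logo.png" alt="Acme Widgets logo">
<img src="hero.jpg">
</body>"""

audit = AltTextAudit()
audit.feed(page)
print("Images missing alt text:", audit.missing_alt)  # → ['hero.jpg']
```

Checks like this won’t prove a site is accessible, but they reliably flag the regressions that creep in as content editors add images over time.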
The Imaginary Geiger Counter
This last tool is an imaginary one, but it’s an effective one.
When you are conducting QA, you are working through your test plans, finding bugs, logging them, seeing them get fixed, regressing those fixes, rinsing and repeating. In the beginning, the activity is intense, with many people involved and a seemingly endless supply of bugs emerging from the build. But then toward the end of QA, things are settling down and stabilizing. Fewer bug reports are filed. Fewer regression issues pop up.
I like to imagine that I possess a special Geiger counter calibrated specifically for website QA. Instead of clicking every time a radioactive particle is detected by the device, I imagine that it’s the presence of a bug causing the clicking sound. When there is a lot of bug activity during QA, the imaginary Geiger counter clicks loudly and often. When things settle down, there are fewer clicks and they are farther apart. When the imaginary Geiger counter gets pretty quiet, QA is done and you’re ready to launch. (Note that the imaginary Geiger counter never completely stops clicking. Websites always have bugs.)
Mopping Up
So you have planned your QA process. You have conducted it using the methods and some of the tools mentioned above. You have launched your site. And now, just as it’s time for me to wrap up this guide, it’s time to mop up your QA.
Maybe your QA didn’t go very well. To tell the truth, it almost never does. The job of QA is to take the development and stakeholder teams through a process of refining and shaping the website. QA is conflict oriented, and meant to be messy. You’re trying to break things, find flaws in someone else's work, and then gleefully point them out, even logging them with pictures and video. It’s not inherently fun, but with solid preparation and a good sense of humor, you can make it fun.
Your QA cycle may have exposed deep structural problems that happened because of failed planning at the marketing or strategy level. This is hard. Tough decisions have to be made. But at least QA found the problems instead of the customer.
Mopping up after QA means synthesizing everything the teams involved learned during the process: conduct some version of a post-mortem on the project, and document it so it can serve as a resource for others in the future. Your team members learned about each other, about the business the website serves, and about the underlying technology. Talk to your team and to the participating stakeholders, and surface all of these learnings. If you can apply them to your next planning cycle, your next QA will be better and more effective, and the subsequent web development efforts will be much more successful.