Congratulations! After extensive meetings to finalize requirements and solidify designs and many intense hours of coding, what once existed as a potential solution to a client’s need is on its way to becoming a fully tangible mobile or web-based application. I know you are so relieved. Your completed application will soon be in the hands of your desired users. You are so excited to give them an awesome product! They are going to be so happy with it that they’ll tell everyone they know how much they appreciate your hard work!
But don’t celebrate quite yet. No… Put down the champagne… Not. Yet. For your application to be a success, you must first conduct a series of tests to ensure it works. The Quality Assurance Analyst is responsible for confirming the overall behavior, aesthetics, and performance of the final product. We’re the people responsible for making sure that the reality matches the agreement you made with the client. The QA Analyst performs this feat of magic by executing functional test cases based on the requirements, assessing UI test cases based on the designs and user flows, evaluating performance under load and stress, and testing the unknown unknowns.
The Quality Assurance effort may, nay, MUST look different depending on the size of the project. Determining the right testing effort hinges on the complexity of the application, the time allotted for development and testing, and whether the application’s performance needs to be tested.
Imagine that you are a university research team that needs an application to test a scientific hypothesis over 4 years, or a small startup that needs a proof-of-concept application to obtain VC funding in 2 months. Your requirements, and the types and extent of testing you need, will differ from those of a Fortune 500 corporation that requires a suite of applications for all of its stores and warehouses, built over the course of a year with continuing maintenance after deployment.
Goldilocks would agree that to get your solution “just right” you need an understanding of what is important and necessary. Here we will discuss the guidelines BlueFletch uses to help our Quality Assurance Analysts perform at the highest standard of excellence.
First, let’s start with a small project. This is the basis of our testing plan and the minimum level of testing we provide. If the application is a proof of concept, or an internal-use-only tool that performs one or two basic functions for a company of 50 employees or fewer, this is the amount and method of testing we use.
Test planning begins as soon as we green-light the project; the QA Analyst needs to be involved in the Requirements Gathering and Design Phase of the Software Development Life Cycle (SDLC) so that the person testing the application has adequate knowledge of the client’s needs. It is imperative to the success of our QA Department to understand the intricacies of the proposed solution early in the project life cycle, so that the analyst can make sure all parties involved are on the same page.
At this point in the SDLC, the QA Analyst determines the scope of the testing effort and drafts a Test Plan to inform the client, the Project Manager, and the Business Analyst of the proposed test strategy and the schedule for test execution. Next, the QA Analyst chooses the appropriate testing tools, sets up a hermetic test environment, and negotiates and explains the testing effort to all parties. The test deliverables are then prepared, reviewed, and approved by the client prior to development. As we said before, projects of different sizes will require different test deliverables.
Active testing begins once a testable build is available from the Development Team and the test entry criteria specified in the test plan are met. We begin with exploratory testing, where overall functionality is verified. After this initial run-through and the fixing of critical issues, the developers may show a demo to the client. If a major crash or blocker is discovered, the QA Analyst can alert the Development Team early in the Development Phase, resulting in fewer surprises and project overages. As testing continues, black-box testing is used to assess the functional test cases, which have already been verified by the client in the previous phase. Functional testing continues until the application satisfies the conditions in the exit criteria.
After functional testing is complete, UI testing is conducted to confirm that the application visually aligns with the expected design and flows as the client intended. These bugs are often easier to correct and can therefore be tested after all the bigger, more complicated functional problems have been solved.
Finally, the BlueFletch QA team mashes buttons.
We do all the things that a small child might accidentally do to destroy a software application. We attempt multiple actions at once, press buttons in rapid succession, press the physical back button and the on-screen back button at the same time, and enter unusable information, such as incredibly long strings in text entry fields and letters in number fields. We try anything and everything that anyone could possibly do to make an application crash.
This allows us to uncover the unknown unknowns, the things we did not plan for in the Requirements Gathering and Design Phase or in the Development Phase; in other words, the unexpected.
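The button-mashing idea can be sketched in code. Below is a minimal, hypothetical example (the `validate_quantity` form validator is illustrative, not a real BlueFletch component): we throw thousands of hostile inputs at it, long strings, letters in number fields, emoji, empty input, and check that it always fails gracefully instead of crashing.

```python
# Toy "button-mashing" (monkey/fuzz) sketch. validate_quantity is a
# stand-in for any input-handling code in the application under test.
import random
import string

def validate_quantity(raw: str):
    """Parse a number field; returns (value, None) or (None, error)."""
    raw = raw.strip()
    if not raw.isdigit():
        return None, "quantity must be a whole number"
    value = int(raw)
    if value > 10_000:
        return None, "quantity too large"
    return value, None

def mash(trials: int = 1_000) -> int:
    """Feed random junk to the validator and count unhandled crashes.

    A healthy validator should reject bad input with an error message,
    never with an exception; the goal is a crash count of zero.
    """
    random.seed(42)  # deterministic junk, so failures are reproducible
    alphabet = string.printable + "é💥"
    crashes = 0
    for _ in range(trials):
        junk = "".join(random.choice(alphabet)
                       for _ in range(random.randint(0, 5_000)))
        try:
            validate_quantity(junk)
        except Exception:
            crashes += 1
    return crashes
```

In practice the same idea is applied at the UI layer (rapid taps, simultaneous gestures) rather than at a single function, but the principle is identical: generate input no reasonable user would produce and verify the application degrades gracefully.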
Now, suppose the application is medium-sized.
We consider a mid-sized project to be one that is customer-facing with 3 or 4 basic functions, or one that collects a lot of data. Most of the applications we create fall into this category. If your solution is medium-sized, the BlueFletch QA Team will complete all the steps above, plus Performance Testing.
Performance Testing determines how the application behaves when placed under heavy load or high stress. Customer-facing applications should always be load-tested and stress-tested to ensure that the solution can accommodate the number of consumers your client expects to reach. These tests should also be used on applications that collect a lot of data, to make sure the back-end services and servers can handle the volume of incoming data without crashing or slowing the application down.
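To make the load-testing idea concrete, here is a minimal sketch, with the assumption that we stand up a throwaway local HTTP server as a stand-in for the application’s back end (in a real engagement you would point a proper load tool at a staging environment, never production). It fires 200 concurrent requests and records per-request latency:

```python
# Minimal load-test sketch: concurrent requests against a local
# stand-in server, collecting latencies for a p95 summary.
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):
        pass  # silence per-request logging noise

# Port 0 lets the OS pick a free port; daemon thread so the process exits.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def hit(_):
    """One request; returns its wall-clock latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        assert resp.status == 200
    return time.perf_counter() - start

# 20 workers hammering the endpoint simulates concurrent users.
with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = list(pool.map(hit, range(200)))

server.shutdown()
p95 = sorted(latencies)[int(0.95 * len(latencies))]
print(f"{len(latencies)} requests, p95 latency {p95 * 1000:.1f} ms")
```

Real load testing layers on ramp-up schedules, sustained soak periods, and error-rate thresholds, but the core loop, many concurrent clients plus latency percentiles, looks like this.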
A large-scale project is any application that collects more data, reaches more users, or has more functions than the solutions described above.
Large-scale applications might need constant updates and maintenance over an extended period after development is completed. As new features and bug fixes are submitted in a new build, regression testing occurs to verify all existing features remain unaffected. Thousands of test cases may need regression testing for each build.
The BlueFletch QA Team uses test automation for regression testing, saving invaluable time and making each version deployment faster and more reliable. Given the time needed to create and maintain test scripts, automation may not be necessary for a smaller-scale project, or suitable for one with a shorter timeline.
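The shape of an automated regression suite can be sketched as follows. The `discount_price` function and its expected values are purely hypothetical stand-ins for application logic whose existing behavior must not change; the point is that the same table of cases is re-run unchanged against every new build:

```python
# Sketch of an automated regression check. A failure here means a new
# build changed behavior that previously worked, i.e. a regression.
def discount_price(price: float, percent: float) -> float:
    """Apply a percentage discount; existing behavior we must not break."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Frozen expectations from previously accepted builds.
REGRESSION_CASES = [
    ((100.0, 10.0), 90.0),
    ((19.99, 0.0), 19.99),
    ((50.0, 100.0), 0.0),
]

def run_regression_suite():
    """Run every frozen case; return the list of failures (empty = pass)."""
    failures = []
    for (price, pct), expected in REGRESSION_CASES:
        actual = discount_price(price, pct)
        if actual != expected:
            failures.append((price, pct, expected, actual))
    return failures
```

In a real project these cases live in a test framework and run automatically in continuous integration on every build, which is what makes maintaining thousands of regression cases per release feasible.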
Large-scale projects contain a myriad of aspects that require standardized testing tools, and those tools must match the components being tested. On a larger scale, these components will typically evolve as the project continues to iterate. BlueFletch will determine all of these aspects for your project and help define a system for accurate and prompt reporting. Determining the frequency of your daily, weekly, monthly, quarterly, and yearly deliverables is far more effective with proper estimates, careful planning, and a clear roadmap for releases and iterations.
Introducing end-to-end test flows addresses the questions most large projects face in their infancy: “Are we building the right product?” and “Are we building the product right?” An automation strategy helps answer both of these questions, and others that are bound to arise over the course of testing. When you can produce metrics from performance, security, component, functional, and usability testing, you spend less time manually testing each module, and you can advance your product with continuous delivery while simultaneously working out issues caught by automation.
A key benefit of automating a large-scale project is that the test suites evolve from the detection of bugs to the prevention of bugs, which is ideal for long-term success. A higher level of visibility, especially amongst large departments, helps business, technology, and team members all stay in the loop of the current capabilities and limitations of their products. Providing quality metrics with test case management and implementing continuous integration of your project helps you stay ahead of your deliverables as well as gives you the room to innovate and compare your product against market competition.
As a client, you are left with test coverage that improves with every version release of your product. The BlueFletch QA Team will also integrate with your current testing workflow if you have one established. If you do not, we can curate the most beneficial tests for you and hand them off so that your company can run and maintain them with ease and confidence.
Perhaps the testing effort needed for your particular project falls somewhere in between these three examples. These are just guidelines for our QA Team to be as effective as possible and fulfill our clients’ needs to the fullest. But, as we all know, in Agile development there are always alternatives and in-betweens. For example, a client might require functional testing but not testing of the unexpected, as the application might just be a demo.
Here at BlueFletch, the solution is always client-oriented and dependent on your circumstances and needs, but the bottom line is that every solution needs testing. You want to ensure that the application you worked so hard on, and are so proud of, is in fact what your client agreed to and what users will find pleasant and easy to use. Our multilayered test approach covers many aspects of your project, from integration and user acceptance testing to web services and functional regression. This approach lays a strong foundation as your product evolves and components of your application are changed and updated.
NOW you can pop the champagne. Enjoy! It’s much more satisfying now, isn’t it? Choosing the right level of testing is the hard part, but the BlueFletch QA testing process is scalable to accommodate every project, large or small, for your perfect fit.