webomates

Software Testing Life Cycle

Software testing is a mandatory part of any software development process: it ensures that all technical and business requirements are met satisfactorily. The Software Testing Life Cycle (STLC) is a series of systematically planned phases in a waterfall development approach, and a set of continuous, iterative activities in an agile or iterative approach.

The software testing life cycle comprises various activities with specific goals. Organizations may tweak these to align with their corporate philosophy, but the basic essence remains the same. In the waterfall model, the activities are performed in sequential phases, as shown in the waterfall diagram below, whereas in the agile model all of these activities are performed in every release, as shown in the agile diagram below.

Feature understanding and test script updates are simultaneous activities that run in every release. Other activities, such as test environment setup, are required only once; after that, only deployment of the new release is needed.

Let's understand how testing is carried out in the traditional waterfall approach. To know more about agile testing, click here.

Requirement Analysis

The QA team interacts with various stakeholders to understand their requirements for testability. The requirements can be either functional or non-functional in nature.

Priorities are attached to the requirements for testing, and the test conditions are defined in this phase. Every test condition should be traceable to a requirement. To aid this, a Requirement Traceability Matrix (RTM) is maintained, in which each requirement is mapped to its test conditions. The RTM helps in keeping track of testing coverage.
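The RTM idea can be sketched as a simple mapping from requirement IDs to test conditions. The IDs below are hypothetical examples, not from any real project:

```python
# Minimal sketch of a Requirement Traceability Matrix (RTM): each
# requirement ID maps to the test conditions that cover it.
rtm = {
    "REQ-001": ["TC-001", "TC-002"],  # e.g. login: valid and invalid credentials
    "REQ-002": ["TC-003"],            # e.g. password reset email
    "REQ-003": [],                    # not yet covered by any test condition
}

def uncovered_requirements(rtm):
    """Return requirement IDs with no test condition mapped to them."""
    return [req for req, conditions in sorted(rtm.items()) if not conditions]

print(uncovered_requirements(rtm))  # -> ['REQ-003']
```

Keeping the matrix in a machine-readable form like this makes it trivial to flag untested requirements before test execution begins.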

The testing environment is also identified during this phase.

Test Planning and Test Case Development

The test planning phase is sometimes referred to as the test strategy phase. It is very important from both technical and business points of view.

A detailed test plan is created in this phase. All testing strategies and approaches are defined; risk analysis, risk management, and mitigation strategies are laid out; and the various testing phases are scheduled.

Once test planning is completed, the team starts working on test cases based on inputs from the planning phase.

A detailed test case document is prepared.

Test scripts are prepared for the tests marked for automation. Test cases and test scripts are reviewed by peers and managers to ensure complete coverage.

Test data is prepared in the test environment. Read more: Software Testing Life Cycle

If you are interested in learning more about Webomates’ CQ service, please click here to schedule a demo, or reach out to us at info@webomates.com.

Test case testing vs Exploratory Testing

So which one is better? Easy answer: you need to do both. No, this is not a case of "more is better"! Exploratory testing and test-case-based testing take different and complementary approaches to testing a product. Let's start with basic definitions.

Test case-based testing

In this approach, the test team sits down and defines the test cases that they will carry out on the day of regression. Here is the definition from softwaretestingfundamentals.com:

“A test case is a set of conditions or variables under which a tester will determine whether a system under test satisfies requirements or works correctly.”

Typically, a test case has a series of steps, with or without input data, plus one or more validation steps that determine whether the system is behaving correctly.
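As a minimal sketch, such a test case can be represented as data: ordered steps (with optional input) plus an expected outcome. The login function here is a hypothetical stand-in for the system under test:

```python
# A test case as data: ordered steps plus a validation check.
def login(username, password):
    # Stand-in for the system under test.
    return "welcome" if (username, password) == ("alice", "s3cret") else "denied"

test_case = {
    "id": "TC-001",
    "steps": [
        ("enter username", "alice"),
        ("enter password", "s3cret"),
        ("click login", None),
    ],
    "expected": "welcome",
}

def run_test_case(tc):
    username = tc["steps"][0][1]
    password = tc["steps"][1][1]
    actual = login(username, password)
    return "PASS" if actual == tc["expected"] else "FAIL"

print(run_test_case(test_case))  # -> PASS
```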

Exploratory testing

In exploratory testing, the team sits down prior to testing the next build and creates a charter: a set of guidelines on where to explore. This can include high-defect areas, new feature areas, areas where defects were recently fixed, and so on. Here is a more formal definition from ISTQB:

“Exploratory testing is a hands-on approach in which testers are involved in minimum planning and maximum test execution. The planning involves the creation of a test charter, a short declaration of the scope of a short (1 to 2 hour) time-boxed test effort, the objectives and possible approaches to be used.”
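A charter of this kind can be sketched as a small record holding the scope, objectives, and time box mentioned in the definition above; the field values are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class TestCharter:
    scope: str
    objectives: list
    timebox_minutes: int = 90   # the 1-2 hour time box from the definition
    observations: list = field(default_factory=list)

    def log(self, note):
        """Record an observation made during the session."""
        self.observations.append(note)

charter = TestCharter(
    scope="checkout flow after payment-gateway change",
    objectives=["probe error handling", "retry on gateway timeout"],
)
charter.log("cart total not refreshed after coupon removal")
print(len(charter.observations))  # -> 1
```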

In addition, if you are looking for an excellent guide on how to perform exploratory testing, Elisabeth Hendrickson's book Explore It! is an easy yet phenomenal read. Read more: Test case testing vs Exploratory Testing

Test Failure Analysis With AI: Critical To DevOps

Software testing is an extremely important process to ensure that all quality requirements are met before an application is released to the market. With DevOps, release cycles have become shorter, but expectations remain the same: a high-quality end product. Yet tests do sometimes fail, and testing may hit a roadblock. The usual knee-jerk reaction is to pass the buck to the development team to fix the issue; the better approach is to understand the root cause of the failure.

Test failure analysis is a systematic process for analyzing and identifying the underlying causes of a failed test and preventing them from recurring.

Test failure analysis is an important exercise that should be conducted for all critical test failures. Continuous testing generates a large amount of result data that can be used for test failure analysis, helping teams sift through failures, identify causes, and fix issues. It improves the overall QA Ops process by helping teams manage and address the root causes of any issue.

Key elements of test failure analysis

Test failure analysis is a good quality-control measure that provides insight into what exactly went wrong, at what point, and why. It helps teams improve the testing strategy by identifying whether the problem was a testing issue or a flaw in design and development.

Test failure analysis is tailored as per the organization’s QA process and the application under test. However, certain key elements are common across the board and are mentioned below.

Key Elements of Test Failure Analysis

Root cause analysis

Root cause analysis of the defect holds the key to understanding what exactly went wrong.

Was it because of an issue with the software, or was there a problem with the test itself?

Once this basic question is answered, it is time to trace the origin of the problem. The problem could lie at any point in the development cycle: requirement gathering and understanding, design and development, or, in some cases, environmental issues during testing. Once the origin is verified, the teams need to work on corrective action.

If there is an issue with the test, then the whole test plan needs to be re-examined and corrected before further testing can take place.

Addressing false failures

False failures are the bane of test automation. These are the cases where the automation system should have reported a Pass but instead incorrectly reports a Fail. We have covered this in detail in our blog; click here to read more. False failures can lead to unnecessary delays, because every failed test case needs to be triaged and, based on its priority, addressed.
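One way to picture the triage step is a rule that splits failed results into true failures (candidate defects) and false failures. The failure reasons and the classification rule below are hypothetical:

```python
# Reasons that indicate a script or environment problem, not a defect.
FALSE_FAILURE_REASONS = {"locator_changed", "env_timeout", "stale_test_data"}

failed_tests = [
    {"id": "TC-101", "reason": "assertion_mismatch"},  # likely a real defect
    {"id": "TC-102", "reason": "locator_changed"},     # script needs updating
    {"id": "TC-103", "reason": "env_timeout"},         # environment issue
]

def triage(failures):
    """Split failed tests into (true failures, false failures) by reason."""
    true_fails = [f["id"] for f in failures if f["reason"] not in FALSE_FAILURE_REASONS]
    false_fails = [f["id"] for f in failures if f["reason"] in FALSE_FAILURE_REASONS]
    return true_fails, false_fails

true_fails, false_fails = triage(failed_tests)
print(true_fails)   # -> ['TC-101']
print(false_fails)  # -> ['TC-102', 'TC-103']
```

Only the true failures go to the development team; the false failures go back to the test team for script or environment fixes.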

Detailed reporting

Good reporting helps the teams:

- Understand test results
- Save time going through the huge amount of data generated during testing
- Distinguish between actual defects, script errors, feature changes, and noise
- Address the key question of how many known defects are in the build, and whether shipping can be delayed while the test team classifies the automation failures as actual bugs or re-test scenarios

Addressing the root cause

Once the actual defects have been identified and traced to their origins, the concerned team needs to segregate the ones that need immediate attention and address them by priority. Depending on the magnitude of the change, the re-test can be done within the same test cycle, or a new sprint can be conducted just for testing.

Acing test failure analysis with Webomates

In a typical test cycle, 30%-40% of tests fail on average. Though 93% of these failures are NOT related to defects and are false failures, they still need to be triaged by the testing team to understand their underlying reasons. Once the impacted test scripts are fixed, the automated test run can be deemed a success, with either true fails (i.e., defects) or true passes (scripts passed).

Webomates provides solutions to handle such scenarios with ease. Read more: Test Failure Analysis

If this has piqued your interest and you want to know more, then please click here and schedule a demo, or reach out to us at info@webomates.com. If you liked this blog, then please like/follow Webomates or Aseem.

Crowdsourced testing

Crowdsourced testing is an emerging trend in the field of software testing. It is an approach that involves people with different backgrounds from across the globe in the testing process.

The testers are connected through a crowdsourcing platform, and tests are conducted by multiple individuals at different locations. This adds multiple dimensions to the manual testing process by providing a diverse, temporary workforce, helping organizations with limited testing bandwidth achieve quick results.

For quick reference, you may want to read our blogs on crowdsourced testing, overcoming the challenges posed by crowdsourcing, and how it differs from manual and automated testing.

This particular article talks about the best practices to follow in crowdsourced testing and the basic mistakes to avoid in order to get the best results.

Take a look at the following table for a quick understanding of what to do and what not to do while crowdsourcing testing jobs. These points are discussed in detail in the following paragraphs.

Do's | Don'ts
Detailed recruitment policy | Selecting random testers
Clear testing guidelines | Undermining the findings
Open communication | Delayed responses
Internal audits | Alienating the internal team
Interesting and challenging tasks |

What to do while crowdsourcing the testing

Detailed Crowd Recruitment Policy

Crowdsourced testing is all about hiring the right set of people for the job. Scouting for the right talent and maintaining a good pool of testers requires having a good recruitment policy in place.

There has to be a wide spectrum of people, and the right fit can be identified by mixing and matching requirements such as demographics and technical setup.

One factor to consider is the tester's response rate in previous interactions. Sometimes a tester may not respond in time, delaying the whole process; such testers should not be considered for time-critical projects.
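Screening by past response rate can be sketched as a simple filter over the tester pool. The names and the 0.85 threshold are made-up examples:

```python
# Hypothetical crowd-tester pool with historical response rates.
testers = [
    {"name": "tester_a", "response_rate": 0.95, "region": "EU"},
    {"name": "tester_b", "response_rate": 0.60, "region": "APAC"},
    {"name": "tester_c", "response_rate": 0.88, "region": "NA"},
]

def eligible_for_time_critical(pool, min_response_rate=0.85):
    """Keep only testers whose response rate meets the cutoff."""
    return [t["name"] for t in pool if t["response_rate"] >= min_response_rate]

print(eligible_for_time_critical(testers))  # -> ['tester_a', 'tester_c']
```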

Set clear objectives and guidelines

It is important to lay out the scope of testing with proper guidelines in place. The scope may encompass new features or changes to existing features. A clear definition of scope ensures that time and effort are channeled correctly, and the guidelines help the testers know which specific areas to focus on, making the testing effort productive.

Set testing standards and define specifications

Setting up testing standards aligned with organization-level quality standards ensures that defect reporting and rectification fit seamlessly into the overall quality-management process.

It also helps the testers understand the expectations of the end client (in this case, the organization crowdsourcing the work). A clear set of technical specifications helps define the test environment to be used for conducting the tests. Read more: Crowdsourced testing

If you are interested in learning more about the services offered by Webomates, then please click here and schedule a demo, or reach out to us at info@webomates.com. You can also start a free trial by clicking here.

What is White Box Testing?

The software development process involves understanding customer requirements and analysing them for feasibility, followed by design, coding, testing and implementation.

Testing is a crucial step of the software development cycle as it ensures that all the requirements have been converted into a successful end product.

The decision of choosing the right approach for testing software is critical. Ideally, the approach should be a healthy mix of various techniques to cover all possible scenarios. The two most commonly used approaches are White Box and Black Box testing.

White box testing requires the tester to know all the functional and design details of the module/code that is being tested. The tester needs to have in-depth knowledge of the requirements, design and even code, as well as the desired outcome.

Black Box Testing, also known as functionality testing or behavioural testing, essentially requires the testers to evaluate the functionality of the software without looking at details of the code.

In this article, we will explore the fundamentals of White Box testing and its importance in testing any software.

White-Box Testing

White-box testing is used to test the structure and business logic of the program being developed.

In order to test the code thoroughly, the testing professional needs to have good knowledge of the programming language, the set standards for the code and design fundamentals. Since they have full access to the code, it is important for them to know the details of the software development process, before the testing commences.

White Box Testing

White-box testing is known by several other names, such as glass box testing, clear box testing, open box testing, structural testing, path-driven testing or logic-driven testing.

Testing can be done using either a static or a dynamic approach. Static/dynamic refers to the state of the system under test (running/dynamic, or stopped/static).

Static analysis requires a code walkthrough by various stakeholders, who read and analyse the code for possible defects or deviations from the desired functioning. This process also ensures that the code has been developed following the organization's defined processes and standards. It is also called structural analysis or verification.

Dynamic analysis involves executing and analysing the code in a test environment; it focuses on behaviour. Since dynamic testing validates the outcomes, it is also known as validation.

White Box testing largely follows the Dynamic approach of testing.

White-Box Testing Process

White box testing follows a process, wherein each aspect of the software is tested thoroughly by the testing team to ensure its quality and adherence to expected norms.

The process for White box testing at any level of testing is more or less the same and involves the following steps:

1. Understand the languages and tools used in the development of the software
2. Understand the source code
3. Write test cases for every flow and coverage possibility
4. Execute the test cases
5. Analyse and record the results

White Box Testing Process

White Box Testing at Various Levels of the Testing Process

White-box testing can be applied at the unit, integration and system levels of the software testing process.

The diagram below provides an idea about the levels at which White Box Testing can be applied.

White Box Testing

White Box Testing at Unit Testing Level

Unit testing is done at the most basic level, as and when the programmer develops a fully functional module, i.e. a unit of code. The module is tested independently of other modules or sub-modules, with its own set of inputs specific to the function it is expected to perform.

White Box testing at Unit Test Level involves different types of testing. Some important approaches are discussed below:

Execution testing: This involves deep checking of the code to ensure all possible aspects have been covered. It includes:

Statement coverage: Statement coverage is a white-box test design technique which ensures that each and every line of code has been executed at least once.

Branch/Decision coverage: Branch coverage is a testing technique which ensures that each and every decision making point and its possible outcomes have been tested at least once with different input parameters.

Path coverage: Path coverage ensures that each and every path through the code has been traversed at least once. It is imperative to ensure that all segments of the control structures are covered for complete path coverage.

Loop coverage: Loop coverage ensures that every loop has been executed zero times, exactly once, and more than once within its boundary values. It is important to test the exit condition of the loop for each of these cases.
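These coverage criteria can be illustrated on a tiny function. The discount rule below is an invented example, with inputs chosen so that the branch and the loop are each exercised in every required way:

```python
def bulk_discount(quantities):
    """Price a list of order quantities with a bulk discount per line."""
    total = 0
    for q in quantities:       # loop: exercised zero, one, and many times
        if q >= 10:            # branch: both outcomes exercised
            total += q * 9     # discounted unit price
        else:
            total += q * 10    # full unit price
    return total

# Loop executed zero times:
assert bulk_discount([]) == 0
# Loop executed once, branch takes the "else" side:
assert bulk_discount([2]) == 20
# Loop executed many times, both branch outcomes covered:
assert bulk_discount([2, 10]) == 110
```

Together these three inputs achieve statement, branch and loop coverage for the function; path coverage would additionally enumerate the distinct branch sequences through the loop.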

Data structures test: All data structures have to be tested for extreme conditions. For example, stack overflow issues sometimes go undetected if not handled properly.

Functional testing: This technique involves testing the functionality of a code segment against specific inputs to verify and validate the expected outputs.

Mutation testing: Every time code is changed to fix a bug, there is a chance of introducing another bug. In mutation testing, small deliberate changes (mutants) are introduced into the code and the tests are re-run to check whether they detect such changes.

White Box Testing at the Integration Level

White-box testing at the integration level focuses on testing the interfaces between modules to ensure that they work in tandem to produce the desired results. It can be done as and when fully functional modules or sub-modules are completed.

Integration testing can be done using the following approaches:

Top-down approach: In this type of testing, the high-level modules are tested first, followed by the low-level modules. Finally, the low-level modules are integrated with the high-level modules to ensure that the system works as intended. Stubs are used when low-level modules are not ready at the time of testing; a stub simulates the functionality of an actual software module.

Bottom-up approach: In this type of testing, the lowest-level modules are tested first, then the high-level modules, and finally everything is integrated to ensure proper system flow. If a higher-level module is not ready at the time of testing, driver modules are used to invoke the module under test and provide it with test inputs.

Hybrid (Top and Bottom Combined) Approach

There are times when new modules are added to address new requirements or reflect changes in functionality. Incremental integration testing is performed to test the seamless integration of these new or changed modules. Read more: What is White Box Testing
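The stub and driver ideas described above can be sketched in a few lines. The payment-service example is hypothetical:

```python
# Stub: stands in for a low-level module that is not ready yet
# (used in top-down integration). It returns a canned response.
def payment_service_stub(amount):
    return {"status": "approved", "amount": amount}

# High-level module under test, wired to whichever payment service
# implementation is passed in.
def checkout(amount, payment_service):
    result = payment_service(amount)
    return "order placed" if result["status"] == "approved" else "order failed"

# Driver: invokes the module under test with test inputs
# (used in bottom-up integration when the real caller is not ready).
def checkout_driver():
    return [checkout(amount, payment_service_stub) for amount in (10, 250)]

print(checkout_driver())  # -> ['order placed', 'order placed']
```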


Test Automation vs Manual Testing

In the software testing arena, a perennial debate has raged between proponents of manual and automated testing. In our experience, the two are complementary; used together, they form a more effective test strategy.

Manual testing

Since pretty much the start of software development in the 1960s, manual testing has been carried out by teams of testers. In this technique, a team of QA testers gets access to the latest software build and tests it to validate that it works correctly. For feature testing, there are two broad categories of manual testing.

Test case based testing

Here, test cases are defined up front, prior to the arrival of the next software build, and the manual team works through the list of test cases, performing the actions defined in them and validating that each test case passes, and hence that the software build is operational. This technique requires more domain understanding at the point of test case creation and less domain knowledge at the time of execution.

Exploratory testing

Here, a manual QA tester who understands the domain of the software fairly well attempts to "break" the software (cause a bug to happen). Exploratory testing is an excellent complement to test case based testing; together they result in a significant improvement in quality.

Automation testing

Since the 1990s, automation testing has risen as a strong alternative. In automation testing, a software tool is programmed (in Java, for example) to carry out the human actions that testers would perform in check-based testing. The process is typically to first create the test cases, as in manual check-based testing, and then to program them. Read more: Test Automation vs Manual Testing
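The "program the test cases" step can be sketched without any browser tool at all. Each entry in the list below mirrors one manual test case, and validate_email is a hypothetical stand-in for the feature under test:

```python
import re

def validate_email(address):
    """Hypothetical feature under test: a simple email-format check."""
    return bool(re.match(r"^[\w.+-]+@[\w-]+\.[\w.]+$", address))

# Each (input, expected) pair mirrors a manual test case.
test_cases = [
    ("user@example.com", True),   # happy path
    ("user@", False),             # missing domain
    ("no-at-sign.com", False),    # missing @ separator
]

for address, expected in test_cases:
    assert validate_email(address) == expected, address
print("all checks passed")
```

Once programmed, the same checks run unattended on every build, which is exactly the repeatability advantage automation brings over manual check-based testing.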

With Webomates CQ we have developed a service that incorporates the benefits of AI into TaaS (Testing as a Service). To hear more about Webomates CQ, schedule a demo here.

Performance Testing Vs Functional Testing

An ideal software testing process takes a holistic approach, combining various testing techniques to achieve high-quality software. Broadly speaking, the testing of any application can be broken down on the basis of two premises: "operability" and "efficiency". Operability is covered by functional testing, and efficiency by performance testing.

Functional testing evaluates the individual and combined behaviour of a software system's functions to verify that they adhere to pre-defined specifications. It tests functional accuracy, interoperability of subsystems and compliance with pre-defined standards in the context of functional and business requirements.

Performance testing is a non-functional testing technique that exercises a system and then measures, validates and verifies the response time, stability, scalability, speed and reliability of the system in a production-like environment.

In this article, we will assess the differences between Performance and Functional testing.

Aspect | Performance Testing | Functional Testing
Objective | Validates performance | Validates behaviour
Focus area | User expectations | User requirements
Test data input | Performance requirements | Functional requirements
Test execution sequence | Done after functional testing | Done before performance testing
Testing approach | Automation preferred | Manual, automated, or crowdsourced
Production test environment emulation | Preferred | Not mandatory
Infrastructure requirements | High | Minimal
Time taken for testing | Less | More
Impact of functional requirement changes | No | Yes
Example testing tools | LoadRunner, JMeter | Selenium, QTP, WinRunner

Conclusion

On close observation of the above table, it can be seen that the two testing types complement each other. Performance testing validates that the application can handle real-world scenarios and addresses issues, if any, to deliver a robust and efficient product to end users. Functional testing, on the other hand, ensures the validity of the software as per the functional and business requirements. Read more: What are performance tests

Webomates CQ, a tool by Webomates, is used for performing regression testing across all domains. Request a demo today.

Self-Healing – Automate the Automation

Introduction to the Test Automation Landscape… and Beyond

It's no secret that software testing always took a backseat in traditional ways of development. Fast forward to 2020, and testing sits right next to the development phase, even going hand-in-hand with it. From being assumed 'low priority', it has become one of the most important aspects of software development. From manual testing to automated testing to self-healing automated testing: it's a journey from proscriptive to prescriptive.

Why test automation became so important

In an attempt to free testers from time-consuming repetitive tasks, automated testing came into existence and helped organizations achieve business value such as faster time to market, improved ROI, and reduced testing cost and effort. The process involves an automated tool executing a test case suite and generating detailed test reports.

However, as automation testing began to evolve, its popularity and ubiquity revealed some deficits. While the promise of automated testing was immense, providing greater testing efficiency, it carried risks arising from frequent changes to the test scripts, ranging from failing automated tests to test packages that were no longer up to date. Real-time issue resolution was the need of the hour, and what better than the power combo of AI and ML to resolve this problem.

How new-age technologies (Agile, DevOps, AI and ML) moved test automation toward self-healing

With great power comes great responsibility. As per the World Quality Report, demands for quality-at-speed and shift-left have placed the onus of ensuring end-user satisfaction on quality assurance teams.

Imagine you are playing and you scrape your hand. Your body's self-healing mechanism kicks in and tries to heal the wound. Now apply the same principle to software testing: a self-healing test suite detects that a test broke for a superficial reason and repairs itself without human intervention.
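The self-healing idea can be sketched as locator fallback plus promotion of whichever locator worked. The page model and locator strings below are hypothetical, and real self-healing tools rank candidate locators with ML rather than walking a fixed list:

```python
# Toy page model: locator -> element (None means the locator no longer matches).
page = {"css:#buy-now": None, "text:Buy now": "<button>"}  # primary locator broke

def find_element(page, locators):
    """Try each locator in order and return the first that matches."""
    for loc in locators:
        if page.get(loc) is not None:
            return loc, page[loc]
    raise LookupError("no locator matched")

locators = ["css:#buy-now", "text:Buy now"]
healed_locator, element = find_element(page, locators)
if healed_locator != locators[0]:
    # "Heal": promote the working locator so the next run tries it first.
    locators.remove(healed_locator)
    locators.insert(0, healed_locator)

print(locators[0])  # -> text:Buy now
```

The test keeps passing despite the UI change, and the script updates itself, which is exactly the failure mode (a false failure from a changed locator) that self-healing targets.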

Artificial intelligence (AI) is intelligence demonstrated by machines. Machine learning (ML), the study of computer algorithms that improve automatically through experience, is seen as a part of artificial intelligence. Read more here.

If you are interested in learning more about Webomates’ CQ service please click here and schedule a demo, or reach out to us at info@webomates.com


Uncovering the Meaning of Exploratory Testing

The first thing that comes to mind after hearing about 'exploratory' testing is ad-hoc testing. But there is a difference between the two.

Exploratory testing is a 'thoughtful approach' to testing that involves simultaneous learning, test design, and test execution, unlike ad-hoc testing, which involves wandering through an application looking for bugs.

What is Exploratory Testing?

Exploring the software to discover its functionalities and drawbacks is what exploratory software testing does. It is a structured testing process that nevertheless doesn't rely on pre-written test cases or test planning documents. Instead, testers go through the application and learn about its functionalities.

They then use exploratory test charters to direct, record, and keep track of the exploratory test session's observations.

How do we go about the charter process?

Our charter process is conducted very meticulously by some of our best professionals.

Identifying the purpose of the product: Once the primary purpose has been established, drawing up a charter becomes exponentially simpler.

Identify functions and areas of potential instability:

It is quite likely that some of the functions within the project have a degree of instability. This can be quickly remedied by identifying potential pitfalls and solving them immediately.

Creation of a charter for each function:

Breaking down the project into smaller functions is the best way to formulate a comprehensive charter that prevents any mishaps.

Execute the charter: We take great care to execute the planned charter.

We then proceed to design and record a consistency verification test.

Exploratory testing, as a process, includes the phases of discovery, investigation, and learning. The best way to go about it is to explicitly define and maintain test charters and record the observations made within each conducted test. It is a hands-on procedure in which testers perform minimum planning and maximum test exploration.

Read more: Exploratory testing services

If this has piqued your interest and you want to know more, then please click here and schedule a demo, or reach out to us at info@webomates.com.

We have more exciting articles coming up every week. Stay tuned and like/follow us at

LinkedIn – Webomates LinkedIn Page

Facebook – Webomates Facebook page

For More Information visit us at : webomates.com

How To Choose The Right Automation Testing Tool

Are you exploring the options for automating your testing process?

Are you worried about the ROI of test automation?

Will you be able to convince your decision-makers to invest in an automated testing tool?

What if we give you a ready reckoner guide that can help in making the right decision?

Interested? Go on, read more….

Automating the testing process helps organizations accelerate their development and delivery cycles significantly, with a higher level of quality in the end product.

Let us take a quick look at the major benefits of test automation, and then move on to what you need to consider while looking for an automated testing tool that fits your needs perfectly.

Benefits of test automation

Now that you have seen that it is worth spending time, effort, and money on a test automation tool, let us discuss the key points you should consider while making this crucial decision.

Kind of tests to automate: Tests that can be automated should be highly repetitive, very frequently executed, stable, and have predictable outcomes. Look for a tool that covers your requirements and offers scalability in the number of tests executed, so that if the number of tests increases or decreases tomorrow, the test suite can be adjusted accordingly.

Skills of the team: What is the learning curve for using the tool? Does it need extra skills? Do you have an SME on your team? If not, does the vendor extend support and expertise?

Ease of integration: Is it easy to integrate the tool with your current CI/CD pipeline without disturbing the current workflow? Does it require extra infrastructure setup?

Ease of use: Does the tool have a codeless testing option, or does your team have the capability to write the test scripts? How intuitive is the tool, and how easy is it to follow its workflow?

Versatility: Is the testing tool versatile enough to conduct different types of testing (integration, functional, performance and UI testing, to name a few)? Is it compatible with different platforms, browsers and operating systems?

Smart test management: Test management includes generating test cases, executing them, and maintaining them. Consider the time and effort taken to generate the test cases, whether the generated test cases are reusable, the time taken to execute a test suite, whether the tool can modify test cases when something changes, and whether it executes the modified test cases in the same cycle.

Comprehensive reporting: Management needs to see results and numbers as assurance that their investment was worthwhile. A testing tool with a good reporting feature that generates easy-to-understand reports earns bonus points while shortlisting. The reporting feature should offer detailed test analysis, report only true failures while filtering out false failures, and provide real-time alerts for defects.

Budget: Keep in mind the budget allocated for test automation. Check for costs such as infrastructure setup, upgrading skill sets, and licensing for any third-party tools. Many tools offer plans based on the number of tests to be executed, the time quantum, and use of their infrastructure; all these factors count toward the total investment in automation.

Trial package: Does the tool have a free trial option? A trial run helps you get a feel for the tool's functionality and aids in making an informed decision.

Technical support and assistance: Does the vendor provide training to your team? Do they have a detailed technical manual for the tool? What kind of process does the vendor have for technical support: online, email, telephone, or in person? Is it available 24x7? What is the turnaround time for addressing issues?

The Webomates advantage

There are multiple options available in the market, ranging from open-source tools to customized ones. It is up to you to make a wise choice, keeping the long-term ROI in mind.

As mentioned earlier, test automation has marked benefits in terms of accuracy, scalability, dependability, enhanced test coverage, and time and effort saving, but it cannot "think". Enter AI test automation, aka intelligent test automation.

AI test automation can spot anomalies, learn from patterns, analyze the data, and then, if required, update the test scripts to reflect the intended changes. You can read more about the differences between test automation and AI test automation on our blog by clicking here.
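The "spot anomalies" idea can be sketched with a simple statistical rule: flag test runs whose duration deviates strongly from the historical mean. A real AI tool would use far richer signals; the durations and threshold below are illustrative:

```python
from statistics import mean, pstdev

# Durations (seconds) of recent runs of one test; the last run is far slower.
durations = [12.1, 11.8, 12.4, 12.0, 29.5]

def anomalies(samples, z_threshold=1.5):
    """Flag samples whose z-score exceeds the threshold."""
    mu, sigma = mean(samples), pstdev(samples)
    return [x for x in samples if sigma and abs(x - mu) / sigma > z_threshold]

print(anomalies(durations))  # -> [29.5]
```

Flagging the slow run early lets the team investigate a possible performance regression before it surfaces as a timeout-driven false failure.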

What if we gave you a tool that meets all your testing requirements, fits your budget, and is intelligent enough to help you with analysis and decision making? Webomates CQ is a revolutionary AI-based testing tool that meets all the criteria mentioned in the previous section, with service-level guarantees to support its claims. Read more: Application testing

If you are interested in learning more about Webomates’ CQ service, please click here to schedule a demo, or reach out to us at info@webomates.com.