
Software Testing Evolution & Methodologies

Software development has evolved over the decades and, consequently, testing, which is an integral part of it, has also gone through a series of changes.

It all started with the programming-and-debugging phase, when finding errors during debugging was considered testing. In 1957, testing gained its own identity and began to be treated as an activity separate from debugging.

Until the late ’70s, testing was seen as an exercise to ensure that the software worked as per the specified requirements. It was then extended to finding errors, in addition to verifying proper functioning of the software. In the ’80s, testing also came to be considered a measurement of quality. With this, it gained more importance and was treated as a clearly defined and managed process within the software development life cycle.

By the mid-’90s, the testing process had its own life cycle.

In this article, we have grouped the phases of software testing evolution into different eras for better understanding.

The Era of Programmers and Testers

Development and testing were treated as mutually independent activities during this era. Once the software was ready, it was passed on to the testing team for verification. Testers were not very actively involved during the requirement analysis phase and had limited interactions with business stakeholders.

They depended heavily on knowledge passed on to them through design and development documentation, or gathered directly from the developers who wrote the code.

Lack of insight into customer requirements and expectations limited the testing strategies the team could use. Testers would develop a test plan based on their understanding of the documentation and test the software in an ad hoc manner. This obviously had limitations, and the testing was not comprehensive.

The Era of Exploration and Manual Testing

The late ’90s saw the advent of various methodologies like agile testing, exploratory testing, etc. Testing was done manually using detailed test cases and test plans.

Exploratory testing gave testers the freedom to test and break software in inventive ways by exploring it within the scope of testing charters.

The rapid growth of software development demanded more comprehensive ways of testing. The incremental and iterative approach employed by agile testing helped achieve this goal, and iterative testing paved the way for automating tests that were repetitive in nature. Read more: Software Testing Evolution

Selenium Testing Automation

Ever since Selenium-based automation testing came into existence, it has left an indelible mark and become the most widely used automated testing method for web-based applications. The constant need for quality assurance and rigorous testing of complex web and mobile applications has led the market to select Selenium as the most prominent tool in this space.

So, what is Selenium-based automation testing? In essence, automation scripts use Selenium commands to emulate user actions on a web page. Selenium is an open-source automated testing suite for web-based applications that supports multiple web browsers, operating systems, and programming languages. Written primarily in Java, it has become the de facto standard in the quality assurance world.
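To make this concrete, here is a minimal sketch of a Selenium script using the Python bindings that opens a page and checks an element, the way an automated test emulates a user. The URL and locator are illustrative assumptions, not taken from the article.

```python
# Minimal Selenium sketch using the Python bindings. The URL and locator
# below are illustrative assumptions; adapt them to the application under test.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()                      # launch a local Chrome browser
try:
    driver.get("https://example.com")            # open the page under test
    heading = driver.find_element(By.TAG_NAME, "h1")
    assert "Example" in heading.text             # simple check of page content
finally:
    driver.quit()                                # always release the browser
```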

With the capacity to support numerous programming languages, operating systems and web browsers, Selenium based automation testing has been adopted for use by big technology providers such as Google, HubSpot, Fitbit, Netflix and many more. The whole suite provides solutions to different testing problems and needs.

How did the name Selenium come into existence? Jason Huggins was the pioneer of Selenium automation. Around 2000, Mercury Interactive was popular and a competitor to ThoughtWorks. Jason cracked a joke in an email to his team at ThoughtWorks, mocking the competitor “Mercury” by pointing out that selenium is an antidote for mercury poisoning. The team took to the name, and that is how Selenium was approved as the name of the framework.

Brief History of Selenium

Selenium is a collection of different tools with contributions from several notable people. The long history of the Selenium project has distinct stages, with key individuals contributing immensely to its growth at each stage. Selenium was initially developed by Jason Huggins in 2004 while he was working as an engineer at ThoughtWorks on a web application that required frequent testing. He created the program in JavaScript; after using it, he realized the shortcomings of manual testing and the need to curb its monotony. He originally named the program JavaScriptTestRunner, but after realizing its potential he made it open source and renamed it Selenium Core.

However, there were problems. Because of the Same Origin Policy, which prohibits JavaScript from being served from a domain other than the one it was launched from, testers had to install Selenium Core together with the web server hosting the application under test so that both belonged to the same domain. Paul Hammant, another ThoughtWorks engineer, offered a solution to this problem by creating Selenium Remote Control (Selenium RC), also known as Selenium 1. Read more: Selenium testing automation

Automating Mobile App Testing

Mobile phone users worldwide are increasing due to the availability of a wide variety of smartphones in the market, affordable data plans, improved connectivity, and a large range of apps for every type of usage. As per Statista, the number of smartphone users is expected to reach 7.5 billion worldwide by 2026.

These days there are apps for almost everything: payments, shopping, news, education, medicine, work, social media, gaming, and the list keeps growing. However, the popularity of an app is directly reflected in its downloads and reviews, which in turn determine its market share. Users are demanding, and any lag in user experience, services, or performance, or a less friendly UI, results in poor ratings. So it is wise to invest time, effort, and resources in properly testing your app before launching it.

Mobile app testing is a process to test the functionality, performance, user-friendliness, and security of an app.

Mobile app testing also includes testing for operational issues like interruptions due to changes in service provider (or location), memory leaks, low resources, certifications, compatibility, installation issues, etc. Mobile app testing can be manual, automated, or a combination of both.

Benefits of automating Mobile App Testing

As mentioned in the previous section, your market share can take a major hit if your app is not robust and user-friendly. Therefore, it needs to be tested rigorously before it is launched in the market. However, app testing comes with a cost attached. Since apps need to be tested across a variety of mobile devices and operating systems, and upgrades happen frequently, the number of combinations is enormous and the cost of testing them all quickly grows beyond the original estimate.

To address this, it is strategically wise to automate your testing process to improve efficiency and save precious resources. Mobile app test automation is known to be a complex activity, but the initial investment pays off with long-term benefits. You can read more about managing QA costs in our previous blog “Managing QA cost by adopting intelligent test automation”.
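As a concrete illustration, here is a minimal sketch of an automated mobile test using Appium’s Python client, which drives a device or emulator much like Selenium drives a browser. Appium is not mentioned in the original article; it is one common open-source choice, and the server URL, device name, app package, and element ID below are hypothetical.

```python
# Sketch of an automated mobile test with Appium's Python client.
# All capability values and locators below are hypothetical placeholders.
from appium import webdriver
from appium.options.android import UiAutomator2Options
from appium.webdriver.common.appiumby import AppiumBy

caps = {
    "platformName": "Android",
    "appium:automationName": "UiAutomator2",
    "appium:deviceName": "emulator-5554",        # assumed local emulator
    "appium:appPackage": "com.example.app",      # hypothetical package
    "appium:appActivity": ".MainActivity",       # hypothetical launch activity
}

# Connect to a locally running Appium server (default endpoint assumed).
driver = webdriver.Remote("http://127.0.0.1:4723",
                          options=UiAutomator2Options().load_capabilities(caps))
try:
    login = driver.find_element(AppiumBy.ACCESSIBILITY_ID, "login_button")
    login.click()                                # emulate a user tap
finally:
    driver.quit()                                # release the device session
```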

Faster feedback

One of the biggest advantages of automation is that you get test results almost immediately, so issues can be resolved on a priority basis.

Faster bug discovery

Defect discovery time is significantly reduced compared to manual testing.

Improved risk mitigation

Faster and earlier detection of issues aids in quick resolution, and the risk of running into problems at launch time is greatly reduced.

Extended test coverage

There is only so much you can cover with manual testing. With frequent releases and upgrades, it takes extra effort to manually test every little change. Test automation saves time and effort and provides better test coverage.

Faster time to release

Test automation is a winner when it comes to saving precious man-hours before the app is launched in the market. You need not execute every test yourself; let automation take over so you can focus on the details that make each release faster and better for your customers. Read more: Mobile App Testing

Webomates offers an integrated solution, Webomates CQ, which helps companies test their mobile apps thoroughly and effectively.

Performance Testing Types & Metrics

Performance testing is a non-functional testing technique that exercises a system and then measures, validates, and verifies its response time, stability, scalability, speed, and reliability in a production-like environment.

It may additionally identify performance bottlenecks and potential crashes when the software is subjected to extreme testing conditions. The system can then be fine-tuned by identifying and addressing the root cause of the problem.

The main objective of Performance testing is to compare behavior against system requirements. If performance requirements were not expressed by stakeholders, then the initial testing effort will establish baseline metrics for the system, or the benchmark to meet in future releases.

Different applications may have different performance benchmarks before the product is made available to the end user. A wide range of Performance tests are done to ensure that the software meets those standards.

This article is focused on some commonly used Performance testing techniques and the process involved in testing an application.

Performance testing Metrics

Every system has certain Key Performance Indicators (KPIs), or metrics, that are evaluated against the baseline during performance testing.

Some of the commonly used Performance Metrics are described below.


Response time: The time elapsed between a request and the completion of that request by the system. Response time is critical in real-time applications. Testers usually monitor average response time, peak response time, time to first byte, time to last byte, etc.

Latency: The waiting time for request processing, i.e., the time elapsed from sending the request until the first byte of the response is received.

Error rate: The percentage of requests resulting in errors compared to the total number of requests. The error rate tends to rise when the application approaches or exceeds its threshold load.

Throughput: The number of processes/transactions handled by the system in a specified time.

Bandwidth: An important metric for checking network performance, measured as the volume of data transferred per second.

CPU utilization: A key metric that measures the percentage of time the CPU spends handling a process. High CPU utilization by any task is red-flagged so that potential performance issues can be investigated.

Memory usage: If the amount of memory used by a process is unusually high despite proper handling routines, it indicates memory leaks that need to be plugged before the system goes live.

Based on the type of application being tested, the technical and business stakeholders select which metrics need to be checked and accordingly attach priorities to them. Unlike functional requirements, performance requirements are not binary.

The metrics are usually expressed with a target percentile, and sometimes combined with other metrics. For example, latency might be expressed together with throughput: response time must be less than 500 milliseconds for 90% of responses at a throughput of 10 requests per second. Read more: Performance testing
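As a small illustration of checking such a percentile target, here is a sketch in Python that computes the 90th-percentile response time from a list of measured latencies and compares it against the 500 ms target from the example above. The sample data is generated purely for illustration.

```python
# Sketch: check a percentile-based response-time target (sample data is made up).
import random

# Simulated response times in milliseconds for one test run.
response_times_ms = [random.uniform(100, 700) for _ in range(1000)]

def percentile(values, pct):
    """Return the pct-th percentile using the nearest-rank method."""
    ordered = sorted(values)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

p90 = percentile(response_times_ms, 90)
target_ms = 500
print(f"90th percentile response time: {p90:.1f} ms")
print("PASS" if p90 <= target_ms else "FAIL")
```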


Manual Testing Explained

Manual testing is the process of confirming that the product meets the required quality and verifying whether it works according to the specifications.

Any deviation from the expected behavior or desired result is considered a defect.

What is the role of Manual Testing?

Manual testing is the process of executing test cases or exploring the application, without the help of any tool, to find deviations from the expected behavior.

Manual testing is the oldest and most fundamental type of testing and helps find bugs in a software system. Any new application must be manually tested before it can be automated with the help of a tool. Manual testing requires more effort but is necessary to check automation feasibility.

Each stage of the software testing life cycle (STLC) has definite entry and exit criteria, along with associated activities and deliverables.

What are Entry and Exit Criteria?

Entry criteria: The prerequisite items that must be completed before testing can begin.

Exit criteria: The items that must be completed before testing can be concluded.

Requirement Analysis: This is the first phase of the STLC and it starts as soon as the SRS is shared with the testing team. In this phase, we analyze the functional and non-functional requirements of the application under test. The Requirement Traceability Matrix is the output of this phase (a minimal sketch of such a matrix follows the activity list below). Some of the activities performed in this phase are:

Extracting requirements from the customer
Analyzing and determining any unclear or ambiguous requirements
Documenting requirements in the form of use cases
Defining the scope of testing
Preparing the Requirement Traceability Matrix
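The following is a minimal sketch, in Python, of what a requirement traceability matrix might look like as a simple data structure, with a quick check for requirements that have no covering test cases. All requirement and test-case IDs are hypothetical.

```python
# Sketch: a minimal Requirement Traceability Matrix (RTM) as a mapping from
# requirement IDs to covering test cases. All IDs below are hypothetical.
rtm = {
    "REQ-001 User can log in":              ["TC-101", "TC-102"],
    "REQ-002 Password reset by email":      ["TC-110"],
    "REQ-003 Session expires after 30 min": [],   # no coverage yet
}

# Flag requirements that have no test cases mapped to them.
uncovered = [req for req, tests in rtm.items() if not tests]
print("Requirements without test coverage:", uncovered)
```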

Test Case Design: Once we have the traceability matrix, the next step is to transform these business or functional requirements into test cases. This phase involves the creation, verification, and modification of test cases and test scripts. Test cases are the output of this phase.

Implementation and Execution: During this phase, test case execution is carried out on the basis of the test plans and the test cases prepared. Bugs are reported back to the development team for correction, and retesting is performed. The execution status and the bug report are the outcome of this phase.

Exit Criteria and Reporting: This STLC phase concentrates on the exit criteria and reporting. There are different types of reports (DSR – daily status report, WSR – weekly status report); depending on your project workflow and your stakeholders’ preference, you can decide whether to send out a daily or a weekly report.

Test Closure Activities: This phase includes the following activities:

– Verification of test completion: checking whether all test cases have been executed or deliberately descoped. No high-severity or high-priority defects should remain open.

– Documentation of any new findings from the cycle and lessons learned from meetings, including what went well and where there is scope for improvement.

Types of testing performed manually:

Black Box Testing

Black-box testing is a software testing technique that validates the behavior of an application based on business and functional requirements. It is also known as requirement-based, opaque-box, or closed-box testing.

In this technique, the tester analyzes the functionality of the application under test without knowing much about its internal architecture. Manual intervention is required in black-box testing to create and execute boundary-value and edge cases from the user’s perspective. Read more: Manual Testing
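To illustrate what boundary-value cases look like, here is a small sketch in Python that exercises a hypothetical age-validation rule (ages 18 to 60 accepted) at and just beyond its boundaries. The rule and values are assumptions for illustration only.

```python
# Sketch: boundary-value test cases for a hypothetical validation rule
# that accepts ages between 18 and 60 inclusive.

def is_valid_age(age: int) -> bool:
    return 18 <= age <= 60

# Boundary values: just below, on, and just above each boundary.
cases = {17: False, 18: True, 19: True, 59: True, 60: True, 61: False}

for age, expected in cases.items():
    actual = is_valid_age(age)
    status = "PASS" if actual == expected else "FAIL"
    print(f"age={age:>2}  expected={expected}  actual={actual}  {status}")
```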


Regression Testing

If you are a fan of the Ice Age animated movie series, you might recall the scene in which Scrat (the squirrel) holds on to an acorn while trying to plug a leak in the iceberg. Plugging it only opens another leak elsewhere, and when he tries to cover that one, yet another appears, in a cascading effect.

Similarly, if we draw a parallel in the world of software development, there are times when fixing one bug unintentionally introduces other bugs. This is further compounded in agile teams, which create modifications and new features and release them continuously, potentially introducing even more bugs. Regression testing plays a crucial role in producing a stable version of the software by finding such bugs before production deployment.

What is regression testing all about?

Regression testing is a type of software testing in which the QA team runs a set of test cases to verify that bug fixes or newly added features have not broken the application’s existing functionality.

In the agile methodology of software development, changes are incremental and code changes frequently, so there is a high probability of new bugs being introduced. This makes regression testing an integral part of the software development life cycle.

Regression testing becomes even more important when the application is live and being used by end users, whose reported issues must be fixed with the highest priority.

After the software is made available to the end user, especially during the beta testing phase, development teams have no control over the environment or the actions that end users will perform while using the application.

Those are real-world actions showing how the application is actually used, and this phase of testing brings in the most valuable feedback about the application. The feedback report may contain bugs or errors that need to be fixed in order to produce a version that can be released to end users for actual use.

Regression testing ensures the stability and quality of the release.

Let’s talk about types of regression testing

Regression testing can be done at various scales, depending on the requirements of the application and the organizational structure. This invariably requires taking the cost and the manual effort involved into consideration. Based on the current requirements of the project, one can go with either complete or partial regression.

Complete regression involves going through the entire application or website once again to make sure that all functional areas work as expected and none of the existing functionality is broken. This can be done manually or it can be automated. For automation, various frameworks such as Selenium or Capybara are available, which testers can use to automate their regression suite so that, before an important release, it can be run swiftly while covering all the major areas of the application. This is a time-saving option when complete regression is needed; however, it requires a lot of upfront effort and almost continuous maintenance, or you can use Webomates CQ.

Partial regression is done when a software tester wants to analyze the effect of a bug fix on other areas of that part of the application. This is done by running a selection of test cases from the test suite, just to make sure that the recent changes have not impacted other verified functional areas of the code. Read more: Regression testing
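As a simple illustration of partial regression, here is a sketch in Python that selects only those test cases mapped to the modules touched by a change. The test names, module names, and mapping are hypothetical; real tools derive this mapping from code coverage or change analysis.

```python
# Sketch: selecting a partial regression suite based on changed modules.
# The test-to-module mapping and module names below are hypothetical.

test_to_modules = {
    "test_login":          {"auth"},
    "test_checkout_total": {"cart", "payments"},
    "test_profile_update": {"auth", "profile"},
    "test_search_results": {"search"},
}

changed_modules = {"auth"}   # modules touched by the latest bug fix

selected = [name for name, mods in test_to_modules.items()
            if mods & changed_modules]

print("Partial regression suite:", selected)
# -> ['test_login', 'test_profile_update']
```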

Webomates CQ, a tool by Webomates, is used for performing regression testing across all domains. Request a demo today.

Functional Testing

Among the various types of testing carried out today, the overwhelming majority fall into an area called functional testing. At its core, functional testing is a reliable, repeatable method of validating that a piece of software continues to carry out certain documented functions. Functional testing has been propelled to the forefront of testing by a sub-technique called automation testing.

At the same time, the use of software applications has continued to grow across all major fields: business, education, defense, research, medicine, energy, utilities, and more. Software systems are evolving to become more sophisticated and to handle more complex functionality.

Faulty software can cause minor inconveniences, like browser incompatibility on a website, or major catastrophes, like losing a spacecraft, causing a pipeline explosion, or triggering banking failures that impact economies. That is why rigorous procedures have to be defined to test software before it is released to the end user. In addition, software is nowadays released at far greater frequencies: a decade ago it was normal to release to production once a year, whereas companies like Facebook and Google now release multiple times a day. Functional testing is a critical component in ensuring that software works well in production environments.

What is Functional Testing?

Functional Testing, also known as Specification-based testing, evaluates individual functions of a software system to verify that they adhere to pre-defined specifications. It tests the functional accuracy, interoperability of subsystems and compliance with pre-defined standards in the context of functional and business requirements.

The primary focus of this type of testing is validating the results of processing, not “how” the processing is done. It can also be termed as a form of Black Box testing.
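As a minimal sketch of specification-based (black-box) testing, the example below validates a hypothetical discount rule purely by its inputs and outputs, without reference to how the result is computed. The function, threshold, and percentages are assumptions for illustration, not taken from the article.

```python
# Sketch: specification-based (black-box) functional test of a hypothetical
# discount rule: orders of 100 or more get 10% off, otherwise no discount.
import unittest

def apply_discount(order_total: float) -> float:
    return round(order_total * 0.9, 2) if order_total >= 100 else order_total

class DiscountSpecTest(unittest.TestCase):
    def test_no_discount_below_threshold(self):
        self.assertEqual(apply_discount(99.99), 99.99)

    def test_discount_at_threshold(self):
        self.assertEqual(apply_discount(100.00), 90.00)

    def test_discount_above_threshold(self):
        self.assertEqual(apply_discount(250.00), 225.00)

if __name__ == "__main__":
    unittest.main()
```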

There are two key approaches to performing this type of testing.

Testing based on requirement specifications: This is focused on testing all possible scenarios based on the requirements describing the functionality that the software system is expected to perform.
Testing based on business scenarios: This is focused on end users’ requirements pertaining to the business objectives that the software system is designed to achieve.

The primary objective of functional testing is to verify and validate the functionality of individual subsystems and then to test the functionality of the system as a whole.

Functional Testing is applicable at any level of granularity, be it at individual module levels or overall system levels. Depending on the level of granularity and interdependence, appropriate test cases are designed.

Any software system also has certain functions or plugins that can be tested independently of other functionality. Such features are known as independently testable features; they either take inputs from modules of the software under test or provide inputs to them. Read more: Functional Testing


Exploratory testing: complementing scripted testing

Exploratory testing is a hands-on testing technique that involves minimal planning and maximum test execution by working on the basic principle of learning by discovery.

Exploratory testing can prove to be an interesting journey: discovering new bugs and uncovering hidden segments of code that may be harboring a dormant bug can lead to a path-breaking discovery that prevents a major issue in the future. This journey is particularly rewarding for testers who think outside the box, as it adds a lateral and analytical element to their technical skills.

Conclusion

Exploratory testing requires a unique combination of skills, besides good technical know-how. Curiosity and the desire to know “why” should be driving factors for the testers. However, finding that perfect mix of skills is not an easy feat.

Webomates has done extensive research on how to leap to higher quality using exploratory testing, and a detailed analysis can be found in the article written by Aseem Bakshi, Webomates CEO.

Read more about: Exploratory testing


Continuous integration

Continuous integration has become a critical element in current DevOps practices and is the first step in a CI/CD (Continuous Integration/Continuous Deployment) process. Many agile, cloud-based companies are implementing this process to achieve two primary goals:

Improve Quality

Improve Velocity (that is, the speed of feature deployment)

A full CI/CD implementation often spans development, integration, and production, with new features and bug fixes being pushed into production via this process, significantly improving deployment speed.

Thus, features are introduced into production days or even hours after feature development is completed. In contrast, traditional development processes would take weeks or even months to deploy features into production.

For ease of development, the overall design is broken down into modules, which are then handled by different teams or individual developers. This process has its own challenges when it comes to combining these modules into a single unit to verify and validate the end product.

Coding is not an error-free activity, even with individual tests being executed and white-box testing performed by the development team. Issues can crop up while integrating these modules for a variety of reasons, and if discovered late in the software development cycle, they can prove costly to the business. Here is a definition:

Continuous integration (CI) is a development practice which takes care of integration issues early in the development cycle, thus accelerating the collaborative software development process.

This article focuses on how Continuous Integration can help the technical and business stakeholders in developing and releasing high quality software, thus ensuring maximum customer satisfaction.

Continuous Integration Process

Continuous integration (CI) is a software development practice that requires developers to integrate code into a shared version-control repository every time a task is completed. A code check-in triggers an automated build, which in turn invokes the testing routines. These routines use automated test scripts to test the code and report any bugs. If no errors are found, the developer’s code changes are considered acceptable.
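To make the check-in-triggered flow concrete, here is a sketch in Python of the kind of script a CI server might run on every commit: install dependencies, run the automated test suite, and fail the pipeline if any step fails. The commands are illustrative assumptions; real pipelines are usually defined in the CI tool’s own configuration format.

```python
# Sketch of a commit-triggered CI step: build, then test, fail fast on errors.
# The commands below are illustrative; substitute your project's real ones.
import subprocess
import sys

PIPELINE = [
    ["python", "-m", "pip", "install", "-r", "requirements.txt"],  # build/deps
    ["python", "-m", "pytest", "-q"],                              # automated tests
]

for cmd in PIPELINE:
    print(f"Running: {' '.join(cmd)}")
    result = subprocess.run(cmd)
    if result.returncode != 0:
        print("Step failed -- rejecting this integration.")
        sys.exit(result.returncode)      # CI marks the build as failed

print("All steps passed -- changes are acceptable to merge.")
```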

Typically, a piece of software is an aggregation of several modules developed by different developers, and every module comprises a significant number of files.

Visualize a scenario where the smallest change in a file has a cascading effect on the output of that particular module and eventually impacts the expected outcome of the software.

Before the continuous integration era, developers had to spend a considerable amount of time integrating their modules seamlessly. Integration became an expensive exercise, since a lot of time was spent on testing, debugging, and retesting. Continuous integration is a pre-emptive measure for detecting integration issues earlier in the software development cycle, saving critical time which can then be used for improving the software or building new features.

Ideally it is prudent to integrate code at every change that makes even the smallest impact in the functioning of the software. This could range from a minimum of one check-in per day to several times a day.

The main goal of continuous integration is to provide feedback on defects discovered during code merges, prompting immediate action and preventing issues from emerging later in the development cycle. Risk is significantly reduced, as it is much easier to detect and rectify bugs in the early stages of development. Read more about: Continuous integration


Automation Vs AI Automation

Software Development is an ever evolving process with new research and breakthroughs making the development process more efficient and optimal.

Similarly, software testing has also seen a series of changes ranging from manual to automation, and now AI test automation is making waves in the industry. If you want to know more about how software testing has evolved then do read our blog “Evolution of Software Testing”. In this particular article, we will be focusing on the difference between Test Automation and AI driven automation.

Automation and AI Automation are fundamentally different techniques.

Test automation essentially means performing repetitive testing tasks based on pre-written test scripts, whereas AI test automation involves software that understands, learns, decides, and acts on the basis of what it has learned.

A blog that delves into the various types of AI automation and the capabilities currently available in the industry can be found here. In short, test automation isn’t “smart” and cannot “think”, while AI automation can make decisions, emulating human behavior without any actual human involvement.

It can spot anomalies, learn from patterns, analyze data, and then, if required, update the test scripts (self-healing). In simple words, AI automation adds a layer of machine-learning capabilities on top of test automation.
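As a simplified illustration of the self-healing idea, here is a Python sketch of a Selenium helper that falls back to alternative locators when the primary one no longer matches, and logs the substitution so the script can be updated. The locators are hypothetical, and real AI-driven tools rank candidate locators with learned models rather than a fixed fallback list.

```python
# Simplified "self-healing" locator sketch: try the primary locator, then fall
# back to alternatives and log the substitution. Locator values are hypothetical.
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

def find_with_healing(driver, locators):
    """locators: ordered list of (By, value) pairs, primary locator first."""
    for index, (by, value) in enumerate(locators):
        try:
            element = driver.find_element(by, value)
            if index > 0:
                print(f"Healed: primary locator failed, used fallback {by}={value}")
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"No locator matched: {locators}")

# Usage (hypothetical locators for a login button):
# button = find_with_healing(driver, [
#     (By.ID, "login-btn"),
#     (By.CSS_SELECTOR, "button[data-test='login']"),
#     (By.XPATH, "//button[contains(., 'Log in')]"),
# ])
```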

AI automation is still in its nascent stage and will take some time to revolutionize the testing process completely. However, we have done a quick analysis and compared how Test automation and AI automation differ. Findings are presented in the following table.

We have a detailed blog where we have talked about how AI can assist in the QA process. Click here to read it.

Read more about: Test automation

If this has piqued your interest and you want to know more, then please click here and schedule a demo, or reach out to us at info@webomates.com. We have more exciting articles coming up every week. Stay tuned and like/follow us at

LinkedIn – Webomates LinkedIn Page

Facebook – Webomates Facebook page

For more information, visit us at webomates.com