What I Do When Automated Scripts Fail

Jayateerth Katti
5 min read · Oct 22, 2023


Automation is important.

Developing automated scripts is very important.

But what is even more important is maintaining those scripts, and the framework as a whole.

Change is the only constant.

You know that. We all need to adapt to change.

This applies to automated testing and verification as well.

When automated scripts are executed, some of them can fail. Are those failures bugs?

I don’t immediately report a bug for them.

Before reporting a bug, I make sure the failure is caused by an actual bug in the product and not by an issue in our script.

I have tried to document some major steps I take when automated scripts fail.

I hope these steps help you conclude whether a failure is a bug in the automation or in the product.

Let’s get into it.

Re-Execute

Run, run.

The first thing I do when an automated script fails is … rerun it. I re-execute the script to confirm whether it fails consistently.

There can be multiple causes.

A network issue can be one of them. To rule out network glitches or similar temporary issues, I re-execute the script.

If it passes, then that is good.

If it fails again, I start my detailed failure analysis.
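If you use pytest, the rerun itself can be automated. Here is a minimal sketch, assuming the pytest-rerunfailures plugin is installed (your stack may differ):

```python
# test_rerun_demo.py
# Requires: pip install pytest pytest-rerunfailures
import random

import pytest

@pytest.mark.flaky(reruns=2, reruns_delay=1)  # retry twice, 1 second apart
def test_sometimes_flaky():
    # Stand-in for a step that fails intermittently (e.g. a slow network call).
    assert random.random() > 0.3
```

A whole run can be retried the same way with `pytest --reruns 2 --reruns-delay 5`. A test that only passes on rerun points to a temporary or synchronisation issue rather than a product bug.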

Reproduce Manually

For confirmation.

I run the test case manually. If the manual run passes, then I conclude that the issue is not with the product under test.

However, if the manual run also fails, then it is most probably a bug.

Wait.

Whether the manual run passes or fails, it is not yet conclusive. For a failed test case, I verify whether there are any prerequisite test cases.

These prerequisite test cases would have created data, which might be wrong or invalid.

That is why, for me, it is very important to run the tests manually.

Also read: Reasons for Software Testing To Fail

Isolate Steps

Focused test.

I try to isolate the steps as much as possible. If a test script has multiple steps, I determine which step is causing the failure.

This helps me pinpoint the problem quickly.

Also, there is a possibility that multiple test steps fail.

In that scenario, I try to isolate each step. Individually.

I can debug the scripts by putting checkpoints at these crucial steps.

After isolation, I do further analysis: prerequisite checks, manual verification, and so on.

So, this focused testing helps me conclude whether it is an automation script issue or a product bug.
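To illustrate the idea, here is a minimal, self-contained sketch in pytest. The helper functions are stand-ins for real application steps; the point is that each step logs itself and fails with a message naming that step:

```python
# test_isolate_demo.py -- run with: pytest -s test_isolate_demo.py
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

# Stand-ins for real application steps.
def register(users, name):
    users[name] = {"cart": []}

def add_to_cart(users, name, item):
    users[name]["cart"].append(item)

def test_checkout_flow():
    users = {}

    log.info("Step 1: register user")
    register(users, "alice")
    assert "alice" in users, "Step 1 (registration) failed"

    log.info("Step 2: add item to cart")
    add_to_cart(users, "alice", "laptop")
    assert users["alice"]["cart"] == ["laptop"], "Step 2 (add to cart) failed"
```

When this fails, the log and the assertion message together pinpoint the exact step, which is precisely what the isolation is for.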

Check Dependency

The cause is something else.

A script may fail for some other reason. The root cause may lie elsewhere.

Example: a test case asserts a user’s telephone number, and that assertion fails.

The actual reason may be on the user registration page, where the telephone number field accepts alphabets.

So, it is very important to also review and run the prerequisite steps.

If the prerequisite test case is not automated (for whatever reason), I check how the test data is handled.

It may be hardcoded, or the same data may be used multiple times, which the product might not allow.

If the prerequisite test case is automated, then I check whether it executed properly.

There is a chance that the script execution was skipped.

That can be a code issue.
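One way to keep this dependency visible is to model the prerequisite as a pytest fixture. A sketch, with a fake in-memory backend standing in for the real product:

```python
# test_dependency_demo.py
import uuid

import pytest

DB = {}  # fake in-memory backend standing in for the product

def register_user(name, phone):
    if not phone.isdigit():  # the validation the registration page should enforce
        raise ValueError("phone must contain digits only")
    DB[name] = {"phone": phone}

@pytest.fixture
def registered_user():
    """Prerequisite: create the user this test depends on, with valid data."""
    name = f"user-{uuid.uuid4().hex[:8]}"  # unique per run, never hardcoded
    register_user(name, "9876543210")
    return name

def test_profile_shows_phone(registered_user):
    assert DB[registered_user]["phone"] == "9876543210"
```

If the prerequisite breaks, pytest reports it as a setup error rather than a test failure, which immediately separates “dependency broke” from “assertion broke”.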

Check Other Browser/OS

Test on other platforms.

If the product is a web-based application, I run the script on different browsers.

I also test it manually in different browsers.

This helps me determine whether the issue is specific to one browser or a generic issue.

This alone does not prove that it is a bug in the product.

However, the findings from this step are input for further analysis.
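In pytest with Selenium, the cross-browser run can be expressed as a parametrised fixture. A sketch, assuming Selenium 4 (which manages local drivers itself) and both browsers installed:

```python
# test_cross_browser_demo.py
# Requires: pip install selenium  (v4+), plus Chrome and Firefox installed
import pytest
from selenium import webdriver

@pytest.fixture(params=["chrome", "firefox"])
def driver(request):
    # Same test body runs once per browser in params.
    drv = webdriver.Chrome() if request.param == "chrome" else webdriver.Firefox()
    yield drv
    drv.quit()

def test_homepage_title(driver):
    driver.get("https://example.com")  # placeholder URL
    assert "Example" in driver.title
```

A failure on only one of the parametrised runs is a strong hint that the issue is browser-specific.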

Verify Test Data

Data is important.

Test data is more important.

I verify the test data used in the script. Is it the right data for the specific test scenario? Incorrect or outdated data can lead to test failures.

Based on the data, the application can behave differently.

Example: if I enter the amount as 10000 instead of 1000 (a difference of one zero), the application can throw a message or notification.

There can be a limit set on the field, for example an amount field in the application.

So, any mistake in the data can have a huge impact on the flow of the test cases.

If data is hardcoded in the script, it can cause failures.

Some scenarios demand unique values.

If the data from the last run is used again, the application might not accept it. It throws an error.

If this error is not handled in the script, the script fails abruptly.

Therefore I verify the test data, which is crucial for test automation.
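For the uniqueness problem in particular, generating data per run is safer than hardcoding it. A minimal sketch:

```python
# unique_data_demo.py
import uuid
from datetime import datetime, timezone

def unique_email(domain="test.example"):
    """An email address that is different on every run."""
    return f"user-{uuid.uuid4().hex[:10]}@{domain}"

def unique_order_ref():
    """A timestamp-based reference, unique per run."""
    return "ORD-" + datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S%f")

print(unique_email())      # e.g. user-3f9c1a2b4d@test.example
print(unique_order_ref())  # e.g. ORD-20231022093015123456
```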

Consistency

Consistency.

Automated scripts are developed for regular execution. They are expected to run and provide reports consistently.

So, when I execute these automated scripts, they should report the same result every time, at least on the same build of the application.

If a test script fails, it should fail at the same step each time I run it.

To verify this consistency, I run the failed script multiple times.

If it passes on subsequent runs, then there could be some temporary issue.

There can be network issues, for example.

The automation tool must stay in sync with the application’s response time.

If this sync is not handled properly, the script may fail.

We need to handle this sync in the code.
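In Selenium, handling the sync in code usually means an explicit wait instead of a fixed sleep. A sketch, assuming Selenium 4 and Chrome installed locally:

```python
# wait_demo.py
# Requires: pip install selenium  (v4+)
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://example.com")  # placeholder URL
    # Wait up to 10 s for the heading; raises TimeoutException if it never shows.
    heading = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.TAG_NAME, "h1"))
    )
    print(heading.text)
finally:
    driver.quit()
```

The wait polls until the condition holds, so the script keeps pace with the application’s actual response time instead of guessing it.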

Analyse Reports and Logs

The proof.

On failure of a test script, I verify the logs. I look for any entries in the log file related to the failed step.

I also check the screenshot attached to the failed step.

This gives me evidence for the reason of the failure.

Again, I take these as inputs for further analysis.

If there are no application failure entries in the logs, then it is likely something related to the automation.
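Attaching that screenshot can itself be automated. A sketch of a pytest hook in conftest.py; the fixture name `driver` is my assumption, so adjust it to your framework:

```python
# conftest.py
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    """After each test phase, save a screenshot if the test body failed."""
    outcome = yield
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        driver = item.funcargs.get("driver")  # assumed Selenium fixture name
        if driver is not None:
            driver.save_screenshot(f"{item.name}.png")
```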

Revisit Requirements

The requirements.

If my manual testing also fails, I revisit the requirements. These are the places where the user’s requirements and expectations are documented.

Requirements can be documents or stories in Jira.

I do this to make sure my test cases and scripts are aligned with the requirements.

There can be changes to the requirements too.

The documents might have been updated based on that change.

Revisit / Update Test Cases

The change.

Change can cause failures. The development team may have changed the functionality based on new requirements.

But my test cases may be of an older version.

I update the test cases based on this change, and then update the automated scripts too.

Maintaining the automated scripts is a major effort.

It is an ongoing process.

As and when the product is updated, I update the test cases and automated scripts too.

Conclusion

As you saw and might have guessed, in the world of automated testing, handling test failures is an art and science. It’s about remaining calm, analysing the problem, identifying the root cause, and using each failure as an opportunity to improve. Embrace failures as stepping stones to success, and you’ll become an expert automation tester in no time.

P.S.: Subscribe to my weekly newsletter. Follow me on LinkedIn.


Jayateerth Katti

20 years of experience in testing. I write about testing and growth.