New Slack Integration

Our new Slack integration is live!

The previous version was deprecated by Slack, and the new version replicates all of the functionality of the original app with an easier setup process. We will not be deprecating our webhooks, which many customers have used to build their own Slack apps.

Receive Slack notifications when your Rainforest QA test runs complete or when tests fail and need your team’s attention. You can also opt in to updates when a run or webhook experiences an issue that prevents your tests from running.

These updates streamline your result review process, bringing you important run information and links to failed tests directly into your Slack workflow. This works best when you set up a Rainforest channel for all of your test suite updates.

Your notification preferences can be set up and managed under the Integrations section in Settings. From there, you can opt in to specific notifications based on a variety of criteria, including Run Groups, Tags, and Features.

Test Designer: Automatic Test Author Assignment Algorithm

Rainforest automatically assigns test authors to test writing requests based on the number of requests submitted, test author capacity, and each author’s previous experience.

The automatic assignment algorithm speeds up the distribution of test writing requests so that tests are returned to the customer faster. By introducing previous experience as a criterion, the system assigns the same authors to a repeat client’s requests over time, allowing test authors to develop domain knowledge for a particular client. Additionally, by matching the number of tests submitted in a request against test authors with available capacity, the system balances the workload across authors and returns tests faster.

The assignment algorithm first identifies all test authors with previous writing experience for a particular client, then sorts those authors by the fewest active jobs. It then reviews the number of tests submitted in the request to determine how many test authors to assign. If it is the first time a client has submitted a test writing request, the algorithm bypasses the previous-experience criterion until the client’s subsequent request.
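
The steps above can be sketched in a few lines. This is an illustrative model only, not Rainforest’s actual implementation: the function, field names, and the assumption of one author per batch of five tests are all hypothetical.

```python
def assign_authors(request, authors, history):
    """Pick test authors for a test writing request.

    request: dict with "client" and "num_tests"
    authors: list of dicts with "name" and "active_jobs"
    history: set of (author_name, client) pairs for past writing work
    """
    # Step 1: prefer authors with previous experience for this client;
    # a first-time client has no history, so the filter is bypassed.
    experienced = [a for a in authors if (a["name"], request["client"]) in history]
    pool = experienced or authors

    # Step 2: sort candidates by fewest active jobs to balance the workload.
    pool = sorted(pool, key=lambda a: a["active_jobs"])

    # Step 3: size the assignment to the request; assume (for illustration)
    # one author per batch of up to 5 tests.
    needed = max(1, -(-request["num_tests"] // 5))  # ceiling division
    return [a["name"] for a in pool[:needed]]
```

The same three inputs (request size, author capacity, and history) drive every assignment, so a repeat client naturally keeps seeing the same authors while new clients are spread across whoever has capacity.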

QA Visibility Report - Email Chart Visualizations

The weekly QA Visibility Report now visualizes Test Creation and Maintenance, Test Execution, and Failure Categorization activities to help you improve your QA process.

Previously, it was difficult to see the whole picture from test creation through test execution to failure categorization. Visualizing this data makes it possible to see whether a particular activity is not supporting your development needs.

With the visuals, positive and negative trends can be seen week over week and over the last 30 days, making it easier to identify whether your QA process needs to be tweaked or a team’s process isn’t being followed.

Test Case Priority

Easier management of your test suite and results

Not all tests are created equal: with Test Priority, you can set a priority for each test case in your suite to indicate the relative importance of that test flow to your app (P1, P2, or P3), and start to see results in order of priority.

How to add Test Priority

In the app, there are several places to assign priority from:

  • When you create a New Test, you must set the priority in the test creation flow. This is a required field, and we don't default to any value.
  • When editing an individual test, you can use the dropdown in the navigation on the right-hand side of the screen to set the priority. We don't store historical values for priority and always consider the latest priority set.
  • When requesting tests via Test Designer, you can specify how important each returned test should be when you upload videos or provide text outlines.
  • You can also use the CLI to add priority. When using the CLI, we validate that a P1, P2, or P3 priority was added when you upload the tests.
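
As a rough sketch of the kind of check the CLI performs on upload, the snippet below validates that each test carries one of the three priorities. The function names and test structure are hypothetical; the actual CLI's validation is internal.

```python
VALID_PRIORITIES = {"P1", "P2", "P3"}

def validate_priority(test):
    """Return an error string if a test is missing a valid priority, else None."""
    priority = test.get("priority")
    if priority not in VALID_PRIORITIES:
        return f"test {test.get('id', '?')}: priority must be one of P1, P2, P3"
    return None

def validate_upload(tests):
    """Collect validation errors across a batch of tests before upload."""
    return [err for err in map(validate_priority, tests) if err]
```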

Coming in the future:

  • RFML support for test priority
  • Tests requested from Test Designer returned in order of priority.

Test Designer Summary Report

A report that provides an actionable summary on the tests that were created from each Test Designer request.

The Test Designer summary report provides clients with information about the output of their test writing requests. The report provides a breakdown of the tests requested, tests written, tests passing, and any tests that are blocked. The report highlights any test that requires the client’s attention — from resolving blockers, to incorporating tests into their testing suite. Additionally, the report will highlight any bugs found during the writing process.

The report is sent automatically via email at the conclusion of each test writing request, directly to the individual who requested the tests.

Test Rewrite Assignment Algorithm: Improvement

The test rewrite feature now considers an additional criterion, previous experience, when assigning rewrite jobs to test authors.

This improvement reduces the time needed to gather the context required to complete a rewrite assignment. By factoring in previous experience, the author should be able to get up to speed quickly (they may even be the original author of the test), reducing the time needed to complete the task and ultimately decreasing the turnaround time of the job.

Factoring in the additional criterion means that when a rewrite job is distributed to our test author pool, the system first looks for test authors who have previously written tests for that particular client, and then selects the author with the fewest active jobs.

Test Designer: Queued Requests

Add test writing requests to a queue in the Test Designer.

Previously, clients could draft new test writing requests but had to manually submit each one after the current request was completed and returned. This added friction to the workflow, so to improve the experience of submitting test inputs, we now support automatically queued requests in the Test Designer.

Clients can now submit a new test writing request even if another request is currently in progress. The new request is added to the bottom of the queue. As soon as the current request is completed and returned, the next request in the queue is automatically picked up and sent in for test writing. An email notification is sent to the client when their current test writing request is completed and the next request is kicked off.
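
The queueing behavior described above is a simple first-in, first-out flow, modeled below as a minimal sketch. The class and method names are illustrative, not Rainforest’s API.

```python
from collections import deque

class RequestQueue:
    """Toy model of queued test writing requests."""

    def __init__(self):
        self.in_progress = None
        self.queue = deque()
        self.notifications = []

    def submit(self, request):
        """Start the request immediately if nothing is in progress;
        otherwise add it to the bottom of the queue."""
        if self.in_progress is None:
            self.in_progress = request
        else:
            self.queue.append(request)

    def complete_current(self):
        """On completion, automatically pick up the next queued request
        and record an email-style notification for the client."""
        finished = self.in_progress
        self.in_progress = self.queue.popleft() if self.queue else None
        if finished is not None:
            note = f"{finished} completed"
            if self.in_progress is not None:
                note += f"; {self.in_progress} started"
            self.notifications.append(note)
        return finished
```

A client submitting "req-2" while "req-1" is in flight would see "req-2" land in the queue, then start automatically (with a notification) once "req-1" is returned.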

All queued requests can be tracked in the new Queued tab on your Test Designer page.

New "Expected Result" field in the Test Designer

You can now specify the expected result of your test writing requests via the Test Designer.

To improve the quality of Test Designer output and help our test writing team write better tests for clients, we now ask clients to also specify the expected result of their request. This gives test authors complete clarity on what the outcome of the tests should be, resulting in fewer questions about the instructions and, consequently, fewer blockers.

On the test writing request form, you will now see a new field titled “Expected result.” This data is sent directly to the test author, who can then write tests with the result or outcome in mind.

Test Designer - Run Level Blockers and UI Improvements

Visual updates to the Test Designer main pages to improve the user experience.

Our goal is to provide a better user experience by making the status of test requests more apparent. The visual updates call attention to blockers and will help distinguish the progress of each test request.

Previously, reviewers could block runs from the Rainforest admin, which required OB/CS intervention to unblock them. Now, you can unblock runs straight from the UI, removing the need for OB/CS intervention.

If a run is blocked, you can simply address the run-level blocker through the Rainforest interface. Additionally, we made visual updates to the status badges as well as updates to the main pages to reflect when requests were submitted, their status, and how many tests were returned.

Easier Failure Categorization

Categorize failures straight from the run summary page.

You can mass-categorize test runs that failed for the same reason and view tester comments from the failed step on the run summary page, letting you categorize failures with fewer clicks.

The results triage process can be painful for users. By reducing the number of clicks it takes to review and categorize failures, we are aiming to reduce some of the pain associated with this process.

You can now assign several failed test runs to the same failure category by selecting the “Mass Categorize” checkbox and choosing a category. This saves you time over selecting and categorizing each failure individually.

By hovering over the failed step number, you can also view the tester comments for specific failed steps in order to diagnose the failure and categorize it straight from the run summary page.