
Session

To execute a manual test, a Test Session (aka session) is required. A session is used to create individual runnable specs based on the files contained in the repo.
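
As an illustrative sketch only (the exact spec file layout is not prescribed here), a spec is a markdown file in the repo, with each test case denoted by an <H2> heading as described under Status w/ Changes below. The file name, headings, and steps are hypothetical:

```markdown
<!-- specs/login.md (hypothetical path and content) -->
# Login

## Verify valid credentials
1. Open the login page
2. Enter a valid username and password
3. Confirm the dashboard is displayed

## Verify invalid password
1. Open the login page
2. Enter a valid username with an incorrect password
3. Confirm an error message is shown
```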

To create a Session, select the Manual tab within a space.

Listing#

The Manual tab presents all of the specs along with any current status.

Session Specs Listing

The spec status reflects the most recently completed session. If there are cycles defined, individual cycle listings are also available by clicking on the Cycles(n) sub-tab on the right side of the page. Refer to the Cycles section for more details.

Assignee#

From the specs listing, a user can be pre-assigned to execute a spec; this assignment is then used as the default in all subsequently created sessions.

Clicking on the Assignee cell of a spec presents a dialog of available users.

Spec User Assignee

A user can also self-assign when running a spec, overwriting any existing assignment.

Create#

A session is created for testing (e.g. test build.003a) using a selection of the available specs. To create a session, click on the New Test Session button.

A dialog will be presented. If cycles have been defined, a Cycle option will also be available, allowing the session to be created in the context of a cycle. When cycles do exist, selecting the unbound option (default) indicates the session is not associated with a cycle.

New Session

The Only failing option is only active for specs that have one or more failing cases.

  • Add a name for the session, or use the default
  • Optionally add a description (e.g. "testing feature ABC")
  • Select SUBMIT

Note that a session can also be created by selecting a spec and clicking on create a new session.

When a session is created, it is tagged as open until explicitly completed. There is no limit on the number of concurrent open sessions.

Open Sessions

The statuses of open sessions are independent of each other and are not included in the overall listing status until the sessions are completed.

Run#

Tests are based on specs and are executed individually. With a session highlighted, click on a spec to execute the cases within it.

Run Spec

Select START to begin testing.

The left panel represents the set of cases associated with the spec being executed. A case defaults to Untested. The following table lists the available case statuses that can be assigned:

| Status | Description |
| --- | --- |
| Untested | No testing as of yet |
| Not applicable | Marked as NA in the case |
| Blocked | Not able to execute the test case; the case status is set to errored in the report |
| Passed | Working as expected |
| Failed | Not working as expected |

Issues can be directly listed or referenced from within the comments form. For more details, see the Issues section below.

The comments form supports markdown, including drag-and-drop images and in-place issue references. There is also a toggle providing access to previous execution history:
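
For example, a comment might use ordinary markdown formatting together with a dropped screenshot (the exact text, image name, and rendering below are hypothetical):

```markdown
Tested against build.003a on Chrome and Firefox.

- Login succeeded with valid credentials
- Password reset email arrived after ~2 minutes

![reset email screenshot](reset-email.png)
```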

The timer starts once the spec's dialog is open and the current user has become the assignee.

Run Comments

On selecting STOP to end the execution of a spec, if there are any untested test cases, a dialog will present a choice:

Carry the previous status for all N untested cases?

If activated (default), all test cases not executed will carry their previous status setting.

Run w/ Automation#

warning

To run automation in Testspace, a one-time configuration must be set up by your Testspace account owner. For details on how to enable automation, see this.

Specs that contain fixture automation (a before fixture, and optionally an after fixture) have additional execution constraints. An exclamation icon will appear next to START, indicating a Required action to be performed:

Automation Execution Required

Before#

The before fixture must be successfully processed before test cases can be executed. You need to manually trigger it by clicking on the blue START button. While it is running, a blue spinning icon will appear:

Automation Running

When the automation is complete, a green check icon will reflect success:

Automation Successful

In case of failure, a red cross icon indicates unsuccessful automation. The tooltip will provide additional failure information.

Automation Failed

The before fixture has several constraints that govern the execution of the spec:

  • Closing the dialog while executing will cancel the execution
  • Closing and reopening the Spec will require re-execution of the automation

Closing the dialog will require re-running the Spec's fixture

After#

The after fixture is optional and can only exist if there is a before fixture. If the Spec has successfully executed the before fixture, the associated after fixture automation is executed once the Spec dialog is closed.

Closing the spec dialog automatically triggers the after fixture automation

The after fixture is executed asynchronously, in the background, without any status available to the tester.

Issues#

GitHub issues can be associated with each test case. There are two ways of doing this: push a new issue or reference an existing one.

Push New Issue#

GitHub issues can be auto-generated as part of test case execution. Simply select the Push new issue checkbox.

Issues

Automatic issue processing occurs when a session is Completed

The auto-generated Issue will include the status, [Failed], in the title, along with the spec name and test case name. Comments from the test case will be added to the issue automatically as well.

The Issue is generated on the session Complete event.

GitHub Issue

The generated Issue will be automatically added to the ISSUES form for future reference.

Reference Existing Issue#

If you want to reference an existing GitHub issue in a test case, you can either list it in the ISSUES form or simply mention it in the comments form inside any message describing your testing. In both cases, it needs to be formatted as a #-prefixed GitHub issue number, e.g. #123.
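
For example (hypothetical issue number and wording), a comment mentioning an existing issue might read as follows; the same #-prefixed value could instead be entered in the ISSUES form:

```markdown
Password reset intermittently fails on Safari; already tracked in #123.
```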

Complete#

A session can be completed at any time, independent of which specs have been executed.

Complete Session

  • Completing a Test Session removes the active session from the Manual tab.
  • Once a session has been completed no additional updates can be made.

If no Specs have been executed and the session is completed, the results record will be ignored (same behavior as deleting the session).

Status#

The status of a spec in the Manual view can either reflect completed test results or an individually selected open session.

Listing#

The up-to-date status for all completed sessions is captured in the listing.

Listing Status

To view the listing status, deselect any currently highlighted open session.

Open#

An open session provides status on the testing that is currently in progress.

Listing Status

Status w/ Changes#

When a spec with existing status has its corresponding spec source file changed (i.e. via repo commit), a special orange warning icon will be displayed, indicating newer source code changes.

Listing Status with Change

For existing open sessions, there will be no updates or indications of changes; however, any newly created sessions will automatically pick up the changes.

Testspace tries to gracefully handle source changes and preserve execution history and failure tracking. However, renaming a test case (i.e. changing the markdown <H2> heading that denotes it) cannot be handled and will result in loss of history.
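
For example, since each test case is denoted by an <H2> heading, a rename such as the following (hypothetical headings) is treated as removing one case and adding a brand-new one, so the renamed case starts with no history:

```markdown
<!-- before the commit -->
## Verify password reset

<!-- after the commit -->
## Verify password reset via email
```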