Tests
Tests are functions decorated with `@session.test()` or `@suite.test()`.
Basic Test
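The basic example is sketched below. The inline `_StubSession` is a stand-in defined here only so the snippet is self-contained; with the actual library you would decorate with your module's `session` directly, and the decorator would also register the test with the runner.

```python
import asyncio

# Stand-in for a ProTest session, for illustration only.
class _StubSession:
    def test(self, **options):
        def decorate(fn):
            return fn
        return decorate

session = _StubSession()

@session.test()
async def test_addition():
    # A test is a plain (async) function that uses bare asserts.
    assert 1 + 1 == 2

asyncio.run(test_addition())
```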
Async and Sync
Both async and sync tests are supported:
```python
@session.test()
async def test_async():
    await some_async_operation()
    assert True

@session.test()
def test_sync():
    result = some_sync_operation()
    assert result == expected
```
Sync tests are automatically wrapped to run in the async event loop.
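The wrapping described above can be sketched with the standard library: a sync function is delegated to a worker thread so it does not block the event loop the async tests run on. This is an illustrative mechanism, not necessarily ProTest's internal implementation.

```python
import asyncio

def wrap_sync(fn):
    # Run a synchronous test in a worker thread inside the event loop.
    async def runner(*args, **kwargs):
        return await asyncio.to_thread(fn, *args, **kwargs)
    return runner

def test_sync():
    return 42

result = asyncio.run(wrap_sync(test_sync)())
```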
Test with Fixtures
Tests declare their dependencies using type annotations:
```python
from typing import Annotated

from protest import Use, fixture

@fixture()
def database():
    return Database()

@session.test()
async def test_with_db(db: Annotated[Database, Use(database)]):
    assert db.is_connected()
```
See Dependency Injection for details.
Tags
Tags categorize tests so a run can be filtered to just the tests that carry a given tag.
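A test is presumably tagged via a `tags=` parameter on the decorator (an assumption, inferred from the same parameter on suites and fixtures below). Selecting only tagged tests then keeps any test whose tag set intersects the requested tags, which can be sketched as:

```python
# Hypothetical tag-filtering sketch; names are illustrative,
# not ProTest's API. E.g. a test tagged @session.test(tags=["api"])
# would carry {"api"} here.
tests = {
    "test_api_call": {"api", "database"},
    "test_local_only": set(),
    "test_slow_sync": {"slow"},
}

def select(tests, wanted):
    # Keep tests whose tag set intersects the requested tags.
    wanted = set(wanted)
    return sorted(name for name, tags in tests.items() if tags & wanted)

selected = select(tests, ["api"])
```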
Tag Inheritance
Tests inherit tags from:
- Their suite (and parent suites)
- Fixtures they use (transitively)
```python
api_suite = ProTestSuite("API", tags=["api"])

@fixture(tags=["database"])
def db():
    return Database()

@api_suite.test()
async def test_api_call(db: Annotated[Database, Use(db)]):
    # This test has tags: {"api", "database"}
    pass
```
Test Naming
By default, the test name is the function name, and this name is what appears in the test output.
Timeout
Limit test execution time with the timeout parameter (in seconds):
```python
@session.test(timeout=5.0)
async def test_api_call():
    """Fails if it takes longer than 5 seconds."""
    await slow_api_call()

@suite.test(timeout=0.5)
def test_sync():
    """Works with sync tests too."""
    time.sleep(1)  # Will time out
```
Behavior
- Timeout applies to the test body only (after fixture setup, before teardown)
- On timeout, the test fails with `TimeoutError`
- Sync tests: the executor thread continues, but the test is marked as failed
- A negative timeout raises `ValueError` at decoration time
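For async tests, the semantics above match what `asyncio.wait_for` provides: the awaitable is cancelled on expiry and a timeout error is reported. The runner below is an illustrative sketch of that mechanism, not ProTest's internals.

```python
import asyncio

async def run_with_timeout(test, timeout):
    # Apply the timeout to the test body only; on expiry the coroutine
    # is cancelled and the test is reported as failed.
    try:
        await asyncio.wait_for(test(), timeout)
        return "PASS"
    except asyncio.TimeoutError:
        return "FAIL"

async def slow_test():
    await asyncio.sleep(1)

outcome = asyncio.run(run_with_timeout(slow_test, 0.01))
```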
With xfail
If a test is expected to timeout:
```python
@session.test(xfail="Known slow", timeout=0.1)
async def test_slow_operation():
    await very_slow_operation()  # XFAIL, not FAIL
```
With skip
Skipped tests never run, so timeout doesn't apply:
```python
@session.test(skip="Not ready", timeout=0.001)
async def test_not_ready():
    await something()  # Never executed
```
Skip
Mark tests to be skipped:
```python
# Simple skip
@session.test(skip=True)
def test_not_ready():
    pass  # Never runs

# With reason
@session.test(skip="Waiting for API v2")
def test_new_feature():
    pass
```
Skip Object
For advanced use, import the Skip dataclass:
```python
from protest import Skip

@session.test(skip=Skip(reason="Blocked by #123"))
def test_blocked():
    pass
```
Conditional Skip with Fixtures
Fixture-Aware Skip Conditions
In pytest, skipif conditions are evaluated at import time and don't have access to fixtures. The typical workaround is calling pytest.skip() inside the test body, which mixes skip logic with test logic.
ProTest evaluates skip conditions after fixture resolution, so your callable receives the actual fixture values:
```python
import os
from typing import Annotated

from protest import Use, fixture

@fixture()
def environment():
    return {"is_ci": os.getenv("CI") == "true"}

session.bind(environment)

@session.test(
    skip=lambda environment: environment["is_ci"],
    skip_reason="Skip in CI environment",
)
def test_local_only(env: Annotated[dict, Use(environment)]):
    # Clean test body - no skip logic here!
    pass
```
How it works:
1. Fixtures are resolved for the test
2. ProTest introspects the skip callable's signature
3. Matching fixtures are passed as kwargs to the callable
4. The callable returns True (skip) or False (run)
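Steps 2 and 3 can be sketched with the standard `inspect` module, assuming fixtures are matched to the callable's parameters by name (an illustrative mechanism, not ProTest's code):

```python
import inspect

def evaluate_skip(condition, resolved_fixtures):
    # Pass only the fixtures whose names appear in the callable's
    # signature; a truthy return value means "skip".
    params = inspect.signature(condition).parameters
    kwargs = {name: resolved_fixtures[name]
              for name in params if name in resolved_fixtures}
    return bool(condition(**kwargs))

fixtures = {"environment": {"is_ci": True}, "db": object()}
should_skip = evaluate_skip(lambda environment: environment["is_ci"], fixtures)
```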
Skip Object with Condition
For complex conditions:
```python
from protest import Skip

@session.test(skip=Skip(
    condition=lambda config: config.get("feature_disabled"),
    reason="Feature flag disabled",
))
def test_feature(config: Annotated[dict, Use(config_fixture)]):
    pass
```
Async Conditions
Async conditions are supported:
```python
async def check_service_health() -> bool:
    response = await http_client.get("/health")
    return response.status != 200

@session.test(skip=check_service_health, skip_reason="Service unhealthy")
async def test_service():
    pass
```
Error Handling
If a skip callable raises an exception, the test is marked as ERROR (not SKIP or FAIL).
Expected Failure (xfail)
Mark tests expected to fail:
```python
# Simple xfail
@session.test(xfail=True)
def test_known_bug():
    assert False  # XFAIL, not FAIL

# With reason
@session.test(xfail="Bug #456")
def test_reported_issue():
    raise ValueError()
```
Xfail Object
```python
from protest import Xfail

# strict=True (default): unexpected pass is a failure (XPASS → FAIL)
@session.test(xfail=Xfail(reason="Flaky", strict=True))
def test_strict():
    pass  # FAIL (unexpected pass)

# strict=False: unexpected pass is OK (XPASS → PASS)
@session.test(xfail=Xfail(reason="Flaky", strict=False))
def test_lenient():
    pass  # PASS (OK)
```
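The strict/lenient outcomes above reduce to a small decision function, sketched here for clarity (a summary of the rules, not ProTest's code):

```python
def xfail_outcome(test_failed, strict=True):
    # An expected failure that actually fails is XFAIL; an unexpected
    # pass is FAIL when strict, PASS otherwise.
    if test_failed:
        return "XFAIL"
    return "FAIL" if strict else "PASS"

outcomes = (
    xfail_outcome(test_failed=True),
    xfail_outcome(test_failed=False, strict=True),
    xfail_outcome(test_failed=False, strict=False),
)
```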
Retry
Retry failed tests automatically:
```python
# Simple retry (3 attempts)
@session.test(retry=3)
async def test_flaky_api():
    await call_external_api()

# With delay between retries
from protest import Retry

@session.test(retry=Retry(times=3, delay=1.0))
async def test_with_backoff():
    await call_api()

# Only retry specific exceptions
@session.test(retry=Retry(times=2, on=ConnectionError))
async def test_network():
    await fetch_data()

# Multiple exception types
@session.test(retry=Retry(times=2, on=(ConnectionError, TimeoutError)))
async def test_resilient():
    await risky_operation()
```
Retry Behavior
- `times`: Maximum number of attempts (including the first)
- `delay`: Seconds to wait between retries (default: 0)
- `on`: Exception type(s) to retry on (default: `Exception`)
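These semantics amount to a plain retry loop; the sketch below is illustrative (ProTest's runner may differ, especially around async tests and reporting):

```python
import time

def run_with_retry(test, times=1, delay=0.0, on=Exception):
    # Attempt the test up to `times` times total, sleeping `delay`
    # seconds between attempts; only exceptions matching `on` retry,
    # and the last failure propagates.
    for attempt in range(1, times + 1):
        try:
            return test()
        except on:
            if attempt == times:
                raise
            time.sleep(delay)

calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise ConnectionError("transient")
    return "ok"

result = run_with_retry(flaky, times=3, on=ConnectionError)
```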
Behavior Interactions
When combining options:
| Combination | Behavior |
|---|---|
| `skip` + `xfail` | Skip takes priority (test not executed) |
| `skip` + `retry` | Skip takes priority |
| `skip(callable)` + `xfail` | Skip evaluated first; if it skips, xfail is ignored |
| `skip(callable)` + `retry` | Skip evaluated first; if it skips, no retry |
| `xfail` + `retry` | Retry first, then xfail/xpass evaluation |
| `timeout` + `retry` | A timeout triggers a retry |
Output Capture
`stdout` and `stderr` are captured during test execution. If a test fails, the captured output is displayed in the error report.
For parallel execution, each test's output is isolated to prevent mixing.
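Per-test capture like this can be sketched with `contextlib.redirect_stdout` (an illustrative mechanism; ProTest's capture, particularly under parallel execution, may be implemented differently):

```python
import contextlib
import io

def run_captured(test):
    # Buffer stdout for the duration of the test; the captured text
    # would be attached to the result and shown only on failure.
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        test()
    return buffer.getvalue()

captured = run_captured(lambda: print("debug detail"))
```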