Test runner - Bun
Test runner
Bun’s fast, built-in, Jest-compatible test runner with TypeScript support, lifecycle hooks, mocking, and watch mode
Bun ships with a fast, built-in, Jest-compatible test runner. Tests are executed with the Bun runtime and support the following features:

- TypeScript and JSX
- Lifecycle hooks
- Snapshot testing
- UI & DOM testing
- Watch mode with --watch
- Script pre-loading with --preload

Bun aims for compatibility with Jest, but not everything is implemented. To track compatibility, see this tracking issue.

Tests are written in JavaScript or TypeScript with a Jest-like API. Refer to Writing tests for full documentation.
Run tests

bun test

A simple test file looks like this:

math.test.ts
import { expect, test } from "bun:test";

test("2 + 2", () => {
  expect(2 + 2).toBe(4);
});

The runner recursively searches the working directory for files that match the following patterns:

- *.test.{js|jsx|ts|tsx}
- *_test.{js|jsx|ts|tsx}
- *.spec.{js|jsx|ts|tsx}
- *_spec.{js|jsx|ts|tsx}

You can filter which test files run by passing positional arguments to bun test. Any test file with a path that matches one of the filters will run. Commonly, these filters will be file or directory names; glob patterns are not yet supported.

bun test <filter> <filter> ...

To filter by test name, use the -t/--test-name-pattern flag.

# run all tests or test suites with "addition" in the name
bun test --test-name-pattern addition

To run a specific file in the test runner, make sure the path starts with ./ or / to distinguish it from a filter name.

bun test ./test/specific-file.test.ts

The test runner runs all tests in a single process. It loads all --preload scripts (see Lifecycle for details), then runs all tests. If a test fails, the test runner will exit with a non-zero exit code.
CI/CD integration

bun test supports a variety of CI/CD integrations.

GitHub Actions

bun test automatically detects if it's running inside GitHub Actions and will emit GitHub Actions annotations to the console directly. No configuration is needed, other than installing bun in the workflow and running bun test.

How to install bun in a GitHub Actions workflow

To use bun test in a GitHub Actions workflow, add the following step:

.github/workflows/test.yml
jobs:
  build:
    name: build-app
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Install bun
        uses: oven-sh/setup-bun@v2
      - name: Install dependencies # (assuming your project has dependencies)
        run: bun install # You can use npm/yarn/pnpm instead if you prefer
      - name: Run tests
        run: bun test

From there, you'll get GitHub Actions annotations.
JUnit XML reports (GitLab, etc.)

JUnit XML is a popular format for reporting test results in CI/CD pipelines. To use bun test with a JUnit XML reporter, use --reporter=junit in combination with --reporter-outfile:

bun test --reporter=junit --reporter-outfile=./bun.xml

This will continue to output to stdout/stderr as usual, and also write a JUnit XML report to the given path at the very end of the test run.
Timeouts

Use the --timeout flag to specify a per-test timeout in milliseconds. If a test times out, it will be marked as failed. The default value is 5000.

# default value is 5000
bun test --timeout 20
Concurrent test execution

By default, Bun runs all tests sequentially within each test file. You can enable concurrent execution to run async tests in parallel, significantly speeding up test suites with independent tests.

--concurrent flag

Use the --concurrent flag to run all tests concurrently within their respective files:

bun test --concurrent

When this flag is enabled, all tests will run in parallel unless explicitly marked with test.serial.

--max-concurrency flag

Control the maximum number of tests running simultaneously with the --max-concurrency flag:

# Limit to 4 concurrent tests
bun test --concurrent --max-concurrency 4

# Default: 20
bun test --concurrent

This helps prevent resource exhaustion when running many concurrent tests. The default value is 20.
test.concurrent

Mark individual tests to run concurrently, even when the --concurrent flag is not used:

math.test.ts
import { test, expect } from "bun:test";

// These tests run in parallel with each other
test.concurrent("concurrent test 1", async () => {
  await fetch("/api/endpoint1");
  expect(true).toBe(true);
});

test.concurrent("concurrent test 2", async () => {
  await fetch("/api/endpoint2");
  expect(true).toBe(true);
});

// This test runs sequentially
test("sequential test", () => {
  expect(1 + 1).toBe(2);
});
test.serial

Force tests to run sequentially, even when the --concurrent flag is enabled:

math.test.ts
import { test, expect } from "bun:test";

let sharedState = 0;

// These tests must run in order
test.serial("first serial test", () => {
  sharedState = 1;
  expect(sharedState).toBe(1);
});

test.serial("second serial test", () => {
  // Depends on the previous test
  expect(sharedState).toBe(1);
  sharedState = 2;
});

// This test can run concurrently if --concurrent is enabled
test("independent test", () => {
  expect(true).toBe(true);
});

// Chaining test qualifiers
test.failing.each([1, 2, 3])("chained qualifiers %d", input => {
  expect(input).toBe(0); // This test is expected to fail for each input
});
Rerun tests

Use the --rerun-each flag to run each test multiple times. This is useful for detecting flaky or non-deterministic test failures.

bun test --rerun-each 100
Randomize test execution order

Use the --randomize flag to run tests in a random order. This helps detect tests that depend on shared state or execution order.

bun test --randomize

When using --randomize, the seed used for randomization will be displayed in the test summary:

bun test --randomize

# ... test output ...
--seed=12345

 2 pass
 8 fail
Ran 10 tests across 2 files. [50.00ms]
Reproducible random order with --seed

Use the --seed flag to specify a seed for the randomization. This allows you to reproduce the same test order when debugging order-dependent failures.

# Reproduce a previous randomized run
bun test --seed 123456

The --seed flag implies --randomize, so you don't need to specify both. Using the same seed value will always produce the same test execution order, making it easier to debug intermittent failures caused by test interdependencies.
Bail out with --bail

Use the --bail flag to abort the test run early after a pre-determined number of test failures. By default, Bun will run all tests and report all failures, but sometimes in CI environments it's preferable to terminate earlier to reduce CPU usage.

# bail after 1 failure
bun test --bail

# bail after 10 failures
bun test --bail=10
Watch mode

Similar to bun run, you can pass the --watch flag to bun test to watch for changes and re-run tests.

bun test --watch
Lifecycle hooks

Bun supports the following lifecycle hooks:

| Hook | Description |
|---|---|
| beforeAll | Runs once before all tests. |
| beforeEach | Runs before each test. |
| afterEach | Runs after each test. |
| afterAll | Runs once after all tests. |

These hooks can be defined inside test files, or in a separate file that is preloaded with the --preload flag.

bun test --preload ./setup.ts

See Test > Lifecycle for complete documentation.
Mocks

Create mock functions with the mock function.

import { test, expect, mock } from "bun:test";

const random = mock(() => Math.random());

test("random", () => {
  const val = random();
  expect(val).toBeGreaterThan(0);
  expect(random).toHaveBeenCalled();
  expect(random).toHaveBeenCalledTimes(1);
});

Alternatively, you can use jest.fn(); it behaves identically.

- import { test, expect, mock } from "bun:test";
+ import { test, expect, jest } from "bun:test";
- const random = mock(() => Math.random());
+ const random = jest.fn(() => Math.random());

See Test > Mocks for complete documentation.
Snapshot testing

Snapshots are supported by bun test.

// example usage of toMatchSnapshot
import { test, expect } from "bun:test";

test("snapshot", () => {
  expect({ a: 1 }).toMatchSnapshot();
});

To update snapshots, use the --update-snapshots flag.

bun test --update-snapshots

See Test > Snapshots for complete documentation.
UI & DOM testing

Bun is compatible with popular UI testing libraries. See Test > DOM Testing for complete documentation.

Performance

Bun's test runner is fast.
AI Agent Integration

When using Bun's test runner with AI coding assistants, you can enable quieter output to improve readability and reduce context noise. This feature minimizes test output verbosity while preserving essential failure information, which is particularly useful in AI-assisted development workflows where reduced output improves context efficiency.

Environment Variables

Set any of the following environment variables to enable AI-friendly output:

- CLAUDECODE=1 - For Claude Code
- REPL_ID=1 - For Replit
- AGENT=1 - Generic AI agent flag

Behavior

When an AI agent environment is detected:

- Only test failures are displayed in detail
- Passing, skipped, and todo test indicators are hidden
- Summary statistics remain intact

# Example: Enable quiet output for Claude Code
CLAUDECODE=1 bun test
# Still shows failures and summary, but hides verbose passing test output
CLI Usage

bun test <patterns>

Execution Control

- --timeout <ms> - Set the per-test timeout in milliseconds (default 5000)
- --rerun-each <NUMBER> - Re-run each test file NUMBER times; helps catch certain bugs
- --concurrent - Treat all tests as test.concurrent() tests
- --randomize - Run tests in random order
- --seed <value> - Set the random seed for test randomization
- --bail <NUMBER> - Exit the test suite after NUMBER failures. If you do not specify a number, it defaults to 1.
- --max-concurrency <NUMBER> - Maximum number of concurrent tests to execute at once (default 20)

Test Filtering

- --todo - Include tests that are marked with test.todo()
- --test-name-pattern <regex> - Run only tests with a name that matches the given regex. Alias: -t

Reporting

- --reporter <format> - Test output reporter format. Available: junit (requires --reporter-outfile), dots. Default: console output.
- --reporter-outfile <path> - Output file path for the reporter format (required with --reporter)
- --dots - Enable dots reporter. Shorthand for --reporter=dots

Coverage

- --coverage - Generate a coverage profile
- --coverage-reporter <reporter> - Report coverage in text and/or lcov. Defaults to text
- --coverage-dir <path> - Directory for coverage files. Defaults to coverage

Snapshots

- --update-snapshots - Update snapshot files. Alias: -u

Examples

Run all test files:

bun test

Run all test files with "foo" or "bar" in the file name:

bun test foo bar

Run all test files, only including tests whose names include "baz":

bun test --test-name-pattern baz