# 09 — Understanding the HTML Report
After a simulation finishes, Gatling generates a self-contained HTML report in:

- Maven: `target/gatling/<SimulationName-timestamp>/index.html`
- Gradle: `build/reports/gatling/<SimulationName-timestamp>/index.html`
Open index.html in any browser — no server needed.
## Report structure

```
index.html
├── Global stats
├── Charts
│   ├── Active users over time
│   ├── Requests per second
│   ├── Responses per second
│   └── Response time percentiles
└── Per-request stats (expandable table)
```
## Global stats panel
| Metric | What it means |
|---|---|
| Total requests | Number of requests sent |
| OK / KO | Passed / failed (check failures) |
| Min / Max | Fastest and slowest response |
| Mean | Average response time (ms) |
| Std Dev | Standard deviation of response times |
| p50 | Median — 50% of requests are faster |
| p75 / p95 / p99 | Tail latency percentiles |
Percentiles are more useful than the mean: p99 means 1 request in 100 was at least this slow.
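To see why the mean can mislead, here is a small standalone sketch (plain Java, independent of Gatling; the response times are made up) using the nearest-rank percentile definition. Two slow outliers barely move the mean, but p99 surfaces them:

```java
import java.util.Arrays;

public class Percentiles {
    // Nearest-rank percentile: smallest value such that at least p% of samples are <= it.
    static long percentile(long[] sortedMs, double p) {
        int rank = (int) Math.ceil(p / 100.0 * sortedMs.length);
        return sortedMs[Math.max(0, rank - 1)];
    }

    public static void main(String[] args) {
        // 98 fast responses (50 ms) and two slow outliers (5000 ms) — hypothetical data
        long[] ms = new long[100];
        Arrays.fill(ms, 0, 98, 50L);
        ms[98] = 5000L;
        ms[99] = 5000L;
        Arrays.sort(ms);

        double mean = Arrays.stream(ms).average().orElse(0);
        System.out.println("mean=" + mean);                 // 149.0 — looks almost healthy
        System.out.println("p50=" + percentile(ms, 50));    // 50 — the typical request is fine
        System.out.println("p99=" + percentile(ms, 99));    // 5000 — the outliers are visible here
    }
}
```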
## Active users over time
Shows how many virtual users were active at each second. Use this to verify your injection profile executed as intended.
- Flat line → closed workload or constant open workload
- Rising then flat → ramp + steady state
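For instance, the rising-then-flat shape is what you would expect from an open-workload ramp into a steady state. A minimal sketch using the Gatling Java DSL (`scn` and `httpProtocol` are placeholders defined elsewhere in the simulation):

```java
// Sketch, assuming the Gatling Java DSL (io.gatling.javaapi).
import static io.gatling.javaapi.core.CoreDsl.*;

setUp(
    scn.injectOpen(
        rampUsersPerSec(1).to(50).during(120),  // rising edge: 2-minute ramp
        constantUsersPerSec(50).during(600)     // plateau: 10-minute steady state
    )
).protocols(httpProtocol);
```

If the "Active users" chart does not show this shape, the injection profile did not execute as written (for example, users finished faster or slower than expected).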
## Requests / Responses per second
Two separate charts:
- Requests/s — throughput your test generated
- Responses/s — throughput the server handled (should match; divergence → queuing)
A large gap between requests/s and responses/s signals server saturation.
## Response time percentiles over time
Colour-coded bands showing p50, p75, p95, p99 over the duration of the test. Look for:
- Gradual increase → performance degrades under sustained load (memory leak, connection exhaustion)
- Spike then recovery → GC pause, cache warm-up, or JIT compilation
- Flat then cliff → server reaches a breakpoint
## Per-request stats table
Each named request (`http("name")`) gets its own row:
| Column | Description |
|---|---|
| Request | Name you gave the request |
| Executions | How many times it ran |
| OK | Success count |
| KO | Failure count |
| Response time (ms) | Min / Mean / p50 / p75 / p95 / p99 / Max |
Click on a request name to drill down into its timeline chart.
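Rows in this table are keyed by the request name, so descriptive names pay off. A sketch using the Gatling Java DSL (scenario and request names are illustrative):

```java
// Sketch, assuming the Gatling Java DSL; each http("...") name becomes
// its own row in the per-request stats table.
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;
import io.gatling.javaapi.core.ScenarioBuilder;

ScenarioBuilder scn = scenario("Browse catalog")
    .exec(http("Home page").get("/"))
    .exec(http("Search").get("/search?q=shoes"))
    .exec(http("Product details").get("/product/42"));
```

Reusing the same name for different endpoints merges their stats into one row, which hides per-endpoint differences.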
## Log files
Gatling writes raw event data to a `simulation.log` file. The HTML report is generated from this file.
To regenerate the report from an existing log without re-running the simulation:
```shell
# Maven
mvn gatling:test -Dgatling.reportsOnly=target/gatling/MySimulation-20240315120000

# Gatling standalone (zip distribution)
./bin/gatling.sh -ro results/mysimulation-20240315120000
```
## Enabling detailed logging for debugging
In `src/test/resources/logback-test.xml`:
```xml
<configuration>
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%-5level] %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- Log every request + response (very verbose — use only for debugging) -->
  <logger name="io.gatling.http.engine.response" level="TRACE"/>

  <root level="WARN">
    <appender-ref ref="CONSOLE"/>
  </root>
</configuration>
```
Warning: `TRACE` logging with many users generates huge output and will slow down the test. Use it only for debugging runs with 1–5 users.
## Simulation.log format
Note: Gatling 3.13+ writes `simulation.log` in a binary format; earlier versions used a tab-separated text format. Either way, the exact format is an internal implementation detail and should not be relied upon for tooling.
## Interpreting results: red flags
| Observation | Likely cause |
|---|---|
| p99 >> p95 | Occasional slow outliers (GC, connection wait) |
| KO% > 0% | Check failures, timeouts, or 5xx errors |
| Response time increases monotonically | Resource leak or DB connection pool exhaustion |
| Requests/s plateau below target | Server saturated — found your bottleneck |
| All requests KO after a spike | Server crashed or rate-limited |
## Exporting stats
The report folder contains JavaScript data files used by the HTML report. For programmatic access to results, use the assertions feature in `setUp` (see 06 — Checks) or integrate with Gatling Enterprise for advanced metrics export.
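For example, pass/fail criteria can be encoded directly in `setUp` so a CI build fails when thresholds are breached. A sketch using Gatling's assertions API (thresholds are illustrative; `scn` and `httpProtocol` are placeholders):

```java
// Sketch, assuming the Gatling Java DSL assertions API.
setUp(scn.injectOpen(atOnceUsers(10)))
    .protocols(httpProtocol)
    .assertions(
        global().responseTime().percentile(99.0).lt(800),  // fail the run if p99 >= 800 ms
        global().successfulRequests().percent().gt(99.0)   // fail if more than 1% of requests are KO
    );
```

When an assertion fails, the Maven/Gradle build exits non-zero, and the outcome is also recorded in the report data files.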