db file parallel write - Tune Oracle DBWR Write Performance

The db file parallel write wait event is experienced exclusively by DBWR (the Database Writer background process), not by foreground user sessions. It records the time DBWR spends waiting for a batch of asynchronous write I/Os to complete after it has submitted them to the operating system.

DBWR’s job is to write dirty buffers (modified data blocks) from the buffer cache to the data files on disk. DBWR operates by:

  1. Scanning the LRU/dirty list for dirty buffers that need to be written
  2. Batching those buffers together for efficiency
  3. Submitting the entire batch as a parallel write to the OS (if async I/O is enabled)
  4. Waiting for all writes in the batch to complete — this is the db file parallel write wait
  5. Marking the written buffers as clean and returning them to the free list

When you see db file parallel write in an AWR report’s background wait events section, the average wait time shows how long DBWR typically waits for each batch write to complete. This is effectively your data file write latency as experienced by DBWR.

db file parallel write is a background event — it does not directly block foreground sessions. However, it is the causal upstream event for free buffer waits (foreground sessions waiting for free buffers) and write complete waits (foreground sessions waiting for a specific buffer to finish being written). High db file parallel write times are the root cause of these foreground waits.

Acceptable: Average wait times under 20ms are generally acceptable for rotational disk storage. For SSD/NVMe storage, below 5ms is expected.

Investigate when:

  • Average db file parallel write wait exceeds 5ms on SSD/NVMe or 20ms on rotational/SAN storage
  • The event appears in the top 5 background wait events in AWR
  • Foreground free buffer waits or write complete waits are also elevated
  • I/O throughput metrics show the data file storage at high utilization (>70%)

Critical: If average write time exceeds 50ms, DBWR is severely bottlenecked. This will cascade into foreground stalls as the buffer cache fills with dirty buffers.
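To check the current average against these thresholds with millisecond precision, V$SYSTEM_EVENT exposes TIME_WAITED_MICRO. Note that these counters are cumulative since instance startup, so compare AWR snapshot deltas to judge recent behavior:

```sql
-- Instance-lifetime average wait for db file parallel write, in ms
SELECT
  event,
  total_waits,
  ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2) AS avg_wait_ms
FROM
  v$system_event
WHERE
  event = 'db file parallel write';
```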

The P1 parameter is the count of blocks written in the batch, P2 is the count of files written to, and P3 is 0. These can be useful for understanding write batch sizing — very small P1 values may indicate async I/O is not working correctly.
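Because the meaning of the parameters has varied across Oracle releases, it is worth confirming what P1/P2/P3 mean on your own version from V$EVENT_NAME:

```sql
-- What do P1/P2/P3 mean for this event on this version?
SELECT name, parameter1, parameter2, parameter3
FROM v$event_name
WHERE name = 'db file parallel write';
```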

1. Check Background Wait Statistics for db file parallel write

-- Background wait events — includes DBWR-specific events
SELECT
event,
total_waits,
total_timeouts,
ROUND(time_waited / 100, 2) AS total_wait_secs,
ROUND(average_wait / 100, 4) AS avg_wait_secs
FROM
v$system_event
WHERE
event IN (
'db file parallel write',
'db file sequential read',
'db file scattered read',
'free buffer waits',
'write complete waits',
'checkpoint completed'
)
ORDER BY
total_wait_secs DESC;

2. V$FILESTAT — Write Latency by Data File

-- Data file write statistics — identify which files have the highest write latency
SELECT
df.name AS file_path,
ts.name AS tablespace_name,
fs.phywrts AS physical_writes,
fs.writetim AS write_time_cs,
CASE
WHEN fs.phywrts > 0
THEN ROUND(fs.writetim / fs.phywrts * 10, 2)
ELSE 0
END AS avg_write_ms,
fs.phyrds AS physical_reads,
CASE
WHEN fs.phyrds > 0
THEN ROUND(fs.readtim / fs.phyrds * 10, 2)
ELSE 0
END AS avg_read_ms,
fs.avgiotim AS avg_io_time_cs,
fs.miniotim AS min_io_time_cs,
fs.maxiortm AS max_read_time_cs,
fs.maxiowtm AS max_write_time_cs
FROM
v$filestat fs
JOIN v$datafile df ON fs.file# = df.file#
JOIN v$tablespace ts ON df.ts# = ts.ts#
ORDER BY
avg_write_ms DESC NULLS LAST;

3. V$SYSSTAT — DBWR Operational Statistics

-- DBWR operational statistics
SELECT
name,
value
FROM
v$sysstat
WHERE
name IN (
'DBWR timeouts',
'DBWR make free requests',
'DBWR lru scans',
'DBWR checkpoints',
'DBWR buffers scanned',
'DBWR fusion writes',
'physical writes',
'physical writes direct',
'physical write IO requests',
'physical write bytes',
'dirty buffers inspected',
'free buffer requested',
'free buffer inspected'
)
ORDER BY
name;

4. DBWR and I/O Configuration Parameters

-- DBWR and I/O configuration parameters
SELECT
name,
value,
description
FROM
v$parameter
WHERE
name IN (
'db_writer_processes',
'dbwr_io_slaves',
'disk_asynch_io',
'filesystemio_options',
'db_cache_size',
'sga_target',
'fast_start_mttr_target',
'db_block_size',
'db_file_multiblock_read_count'
)
ORDER BY
name;

5. Checkpoint Frequency — V$LOG_HISTORY

-- How frequently are checkpoints occurring? (From V$LOG history)
SELECT
TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS log_switch_hour,
COUNT(*) AS switches_per_hour,
MIN(ROUND((next_time - first_time) * 60, 1)) AS min_minutes_between_switches,
ROUND(AVG((next_time - first_time) * 60), 1) AS avg_minutes_between_switches,
MAX(ROUND((next_time - first_time) * 60, 1)) AS max_minutes_between_switches
FROM
v$log_history
WHERE
first_time > SYSDATE - 2 -- Last 2 days
GROUP BY
TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER BY
log_switch_hour DESC
FETCH FIRST 48 ROWS ONLY;

6. I/O Calibration — Measure Storage Write Performance

-- Use DBMS_RESOURCE_MANAGER.CALIBRATE_IO to benchmark storage
-- Run during low-activity period (this is a disruptive test)
-- First check if already calibrated:
SELECT
max_iops,
max_mbps,
max_pmbps,
latency,
num_physical_disks,
calibration_time
FROM
dba_rsrc_io_calibrate;
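
To run the calibration itself (rather than just read prior results), DBMS_RESOURCE_MANAGER.CALIBRATE_IO takes a disk count and a target latency and returns measured throughput. The input values below (8 disks, 20ms) are illustrative and should be set to match your hardware; async I/O and TIMED_STATISTICS must be enabled for the procedure to run:

```sql
-- Run during a low-activity window; this generates heavy I/O
DECLARE
  l_max_iops   PLS_INTEGER;
  l_max_mbps   PLS_INTEGER;
  l_actual_lat PLS_INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 8,    -- illustrative: your physical device count
    max_latency        => 20,   -- illustrative: target latency in ms
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_actual_lat);
  DBMS_OUTPUT.PUT_LINE('max_iops=' || l_max_iops ||
                       ' max_mbps=' || l_max_mbps ||
                       ' latency_ms=' || l_actual_lat);
END;
/
```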

7. V$IOSTAT_FILE — Cumulative I/O Statistics

-- Cumulative I/O statistics by file type
SELECT
filetype_name,
SUM(small_read_megabytes) AS small_read_megabytes,
SUM(small_write_megabytes) AS small_write_megabytes,
SUM(large_read_megabytes) AS large_read_megabytes,
SUM(large_write_megabytes) AS large_write_megabytes,
SUM(small_read_reqs) AS small_read_reqs,
SUM(small_write_reqs) AS small_write_reqs,
SUM(large_read_reqs) AS large_read_reqs,
SUM(large_write_reqs) AS large_write_reqs
FROM
v$iostat_file
GROUP BY
filetype_name
ORDER BY
SUM(small_write_megabytes) + SUM(large_write_megabytes) DESC;

Slow storage is the most direct cause. If the storage hosting Oracle data files has high write latency — due to spinning disks, a congested SAN fabric, a misconfigured storage controller, or exhausted I/O credits on cloud storage — every DBWR write batch takes longer to complete. This directly increases db file parallel write average wait times.

Modern Oracle databases expect sub-5ms average write latency on SSD-based storage. Rotational disk-based systems should achieve under 20ms. Exceeding these thresholds consistently indicates a storage problem.

When DISK_ASYNCH_IO = FALSE (or FILESYSTEMIO_OPTIONS is set incorrectly for file system databases), DBWR cannot submit writes in parallel. Instead, it must submit one write, wait for completion, then submit the next. This effectively serializes all write I/O and dramatically reduces DBWR write throughput. The symptom is small P1 values in db file parallel write waits (1 block at a time instead of batches of hundreds).
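Whether async I/O is actually in effect per file can be checked from V$IOSTAT_FILE, which reports ASYNC_ON/ASYNC_OFF for each data file:

```sql
-- Verify async I/O is actually in use for data files
SELECT file_no, filetype_name, asynch_io
FROM v$iostat_file
WHERE filetype_name = 'Data File';
```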

A single DBWR process (the default DBW0) has a ceiling on how many I/O requests it can process concurrently. On systems with many data files spread across multiple storage devices, a single DBWR process cannot fully exploit the parallel write bandwidth of the underlying hardware. Multiple DBWR processes allow independent write batches to be submitted in parallel.

Frequent checkpoints — triggered by small redo log files, a low FAST_START_MTTR_TARGET, or explicit ALTER SYSTEM CHECKPOINT commands — force DBWR into emergency write mode where it must flush all dirty buffers below the checkpoint SCN. This creates write bursts that overwhelm the storage subsystem and spike db file parallel write wait times.
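One quick signal that checkpoints are outpacing DBWR is to compare checkpoints started versus completed in V$SYSSTAT; a persistent gap means DBWR cannot finish one checkpoint before the next begins:

```sql
-- Started vs completed should stay close; a growing gap means DBWR is behind
SELECT name, value
FROM v$sysstat
WHERE name IN ('background checkpoints started',
               'background checkpoints completed');
```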

If the buffer cache is undersized for the working set, the LRU replacement rate is high. DBWR is continuously writing dirty buffers to make room for new reads, even during relatively quiet periods. The sustained write pressure keeps storage utilization high and average write latency elevated.
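To test whether the cache is actually undersized, V$DB_CACHE_ADVICE (populated when the advisory is on, the default with STATISTICS_LEVEL = TYPICAL) estimates how physical reads would change at candidate cache sizes:

```sql
-- Estimated physical-read change at candidate cache sizes (DEFAULT pool)
SELECT
  size_for_estimate AS cache_mb,
  estd_physical_read_factor,
  estd_physical_reads
FROM
  v$db_cache_advice
WHERE
  name = 'DEFAULT'
ORDER BY
  size_for_estimate;
```

A read factor well below 1.0 at a larger size suggests growing the cache would reduce the LRU replacement rate and, with it, DBWR's sustained write pressure.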

The undo tablespace receives very high write traffic in DML-intensive systems. If the undo data file shares storage with data files, or is on slower storage, undo writes compete with data file writes in DBWR’s write queue. Placing the undo tablespace on dedicated fast storage reduces this competition.
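To see how much of the write load the undo tablespace carries relative to the rest, V$FILESTAT can be aggregated by tablespace:

```sql
-- Physical writes per tablespace — how big is undo's share?
SELECT
  ts.name AS tablespace_name,
  SUM(fs.phywrts) AS physical_writes,
  ROUND(RATIO_TO_REPORT(SUM(fs.phywrts)) OVER () * 100, 1) AS pct_of_writes
FROM
  v$filestat fs
  JOIN v$datafile df ON fs.file# = df.file#
  JOIN v$tablespace ts ON df.ts# = ts.ts#
GROUP BY
  ts.name
ORDER BY
  physical_writes DESC;
```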

Step 1: Increase DB_WRITER_PROCESSES

-- Check current count
SHOW PARAMETER db_writer_processes;
-- Recommended sizing: 1 process per 2-4 CPU cores, up to 36
-- On a 16-core server, try 4-8 DBWR processes
ALTER SYSTEM SET db_writer_processes = 4 SCOPE=SPFILE;
-- Requires restart. After restart verify with:
SELECT name FROM v$bgprocess WHERE paddr != '00' AND name LIKE 'DBW%';

Step 2: Enable and Verify Asynchronous I/O

-- Check current state
SELECT name, value FROM v$parameter
WHERE name IN ('disk_asynch_io', 'filesystemio_options');
-- Enable async I/O for raw/block devices and ASM
ALTER SYSTEM SET disk_asynch_io = TRUE SCOPE=SPFILE;
-- For file system (non-ASM) data files:
-- SETALL enables both directio and asynch I/O
ALTER SYSTEM SET filesystemio_options = SETALL SCOPE=SPFILE;
-- Requires restart
-- Verify async I/O is working by checking write batch sizes:
-- If P1 in db file parallel write consistently = 1, async I/O is not working

Step 3: Reduce Checkpoint Frequency by Sizing Redo Logs Appropriately

-- Check current redo log sizes vs generation rate
-- Target: switches every 15-30 minutes under normal load
SELECT group#, members, bytes/1024/1024 AS size_mb, status FROM v$log;
-- If logs are too small, add new larger groups (see log-buffer-space guide)
ALTER DATABASE ADD LOGFILE GROUP 4
'/u01/redo/redo04a.log' SIZE 2G;
-- Adjust FAST_START_MTTR_TARGET to reduce aggressive incremental checkpoint
ALTER SYSTEM SET fast_start_mttr_target = 300 SCOPE=BOTH;
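
When FAST_START_MTTR_TARGET is set, Oracle also publishes a suggested redo log size in V$INSTANCE_RECOVERY (NULL when no MTTR target is in effect), which is a useful sanity check on the sizing above:

```sql
-- Suggested redo log file size (MB) for the current MTTR target
SELECT optimal_logfile_size FROM v$instance_recovery;
```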

Step 4: Move Data Files to Dedicated High-Performance Storage

-- Identify the highest-latency data files from V$FILESTAT (query #2 above)
-- Then move those data files to faster storage.
-- Oracle 12c+: ALTER DATABASE MOVE DATAFILE is an online operation
-- (no downtime for most cases):
ALTER DATABASE MOVE DATAFILE
'/slow_disk/oradata/data01.dbf'
TO '/fast_ssd/oradata/data01.dbf';
-- Pre-12c: take the datafile offline, copy it with RMAN, switch to the
-- copy, then recover and bring it back online:
-- RMAN> COPY DATAFILE '/slow_disk/oradata/data01.dbf'
-- TO '/fast_ssd/oradata/data01.dbf';
-- RMAN> SWITCH DATAFILE '/slow_disk/oradata/data01.dbf' TO COPY;

Step 5: Verify DBWR I/O Slaves Configuration (Legacy Systems)

-- DBWR I/O slaves are an alternative to multiple DBWR processes
-- Only use when async I/O is NOT available natively
-- Note: Cannot use both DB_WRITER_PROCESSES > 1 and DBWR_IO_SLAVES > 0
SHOW PARAMETER dbwr_io_slaves;
-- For legacy environments where async I/O is unavailable:
ALTER SYSTEM SET dbwr_io_slaves = 4 SCOPE=SPFILE;
-- (Requires restart, and DB_WRITER_PROCESSES must be 1)

Benchmark storage before deployment: Run DBMS_RESOURCE_MANAGER.CALIBRATE_IO on new systems to characterize storage performance. Establish baseline write latency metrics and monitor for degradation over time.

Separate UNDO and data file storage: Place the undo tablespace on its own fast storage to prevent undo writes from competing with data file writes in DBWR’s I/O queue.

Monitor DBWR effectiveness with the “make free requests” statistic: A rising DBWR make free requests count means foreground sessions are repeatedly asking DBWR to free buffers. This is a leading indicator of future free buffer waits.

Use ASM with properly configured disk groups: Oracle ASM with NORMAL or HIGH redundancy across multiple disk groups provides better write distribution than file system storage, naturally parallelizing DBWR writes across spindles or NVMe devices.

Tune for the right MTTR: FAST_START_MTTR_TARGET controls how aggressively Oracle checkpoints to meet the target recovery time. Setting it too low (e.g., 30 seconds) causes DBWR to write continuously. Setting it appropriately for your RTO (e.g., 120–300 seconds) reduces checkpoint overhead without compromising recoverability.

-- Review checkpoint and recovery targets before adjusting FAST_START_MTTR_TARGET
SELECT
recovery_estimated_ios,
actual_redo_blks,
target_redo_blks,
log_file_size_redo_blks,
log_chkpt_timeout_redo_blks,
log_chkpt_interval_redo_blks,
target_mttr,
estimated_mttr
FROM
v$instance_recovery;
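As a complement, V$MTTR_TARGET_ADVICE (populated when FAST_START_MTTR_TARGET is set and STATISTICS_LEVEL is TYPICAL or ALL) estimates the total write I/O each candidate MTTR target would cost, making the trade-off between recovery time and DBWR load explicit:

```sql
-- Estimated write cost at candidate MTTR targets
SELECT
  mttr_target_for_estimate AS mttr_secs,
  advice_status,
  estd_total_writes,
  estd_total_write_factor,
  estd_total_ios
FROM
  v$mttr_target_advice
ORDER BY
  mttr_target_for_estimate;
```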
Related wait events:

  • free buffer waits — Foreground sessions waiting for DBWR to free dirty buffers; directly caused by slow db file parallel write
  • write complete waits — A foreground session needs a buffer that DBWR is currently in the process of writing to disk
  • checkpoint completed — A foreground log switch is blocked waiting for the checkpoint SCN to advance; caused by DBWR falling behind
  • db file sequential read — Single-block read wait for foreground sessions; shares the storage path with DBWR writes and can be impacted by write saturation
  • log file parallel write — LGWR writing redo to redo log files; shares the I/O subsystem and can compete with DBWR data file writes