
db file sequential read - Diagnose Oracle Single Block I/O

Wait Event Class: User I/O

Parameters: file#, block#, blocks (always 1 for this event)

db file sequential read fires whenever a foreground session reads a single database block from disk into the buffer cache. The name “sequential” does not mean a sequential table scan — it means the read is issued as a single, serialized I/O operation (one block at a time), as opposed to the scatter-gather multi-block reads used by db file scattered read.

This is the most common I/O wait event in the majority of Oracle OLTP databases. It fires on:

  • Index range scans and unique scans — each block along the B-tree path (root, branch, leaf) is a single-block read
  • Table access by ROWID — after an index lookup, fetching the table row is a single-block I/O
  • Undo block reads — reading rollback/undo blocks for read consistency or transaction rollback
  • Segment header reads — reading segment and extent header blocks
  • Control file / file header reads — during certain database operations

Because nearly every indexed lookup generates this wait, seeing it in the top 5 wait events is expected. The concern is not its presence but its average latency and total time contribution.


Average Wait Time   Assessment
< 1 ms              Excellent — blocks likely in cache or very fast storage (NVMe/SSD)
1–5 ms              Good — typical spinning disk or cached SAN
5–10 ms             Moderate — investigate storage contention or cache hit rate
10–20 ms            Poor — storage bottleneck or very low buffer cache hit rate
> 20 ms             Critical — severely degraded storage or extreme cache pressure
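
These thresholds can be checked against the live instance directly. A minimal sketch using v$system_event (average_wait is reported in centiseconds, so multiply by 10 for milliseconds):

-- Classify the current average latency against the thresholds above
SELECT
  ROUND(average_wait * 10, 2) AS avg_wait_ms,
  CASE
    WHEN average_wait * 10 < 1  THEN 'Excellent'
    WHEN average_wait * 10 < 5  THEN 'Good'
    WHEN average_wait * 10 < 10 THEN 'Moderate'
    WHEN average_wait * 10 < 20 THEN 'Poor'
    ELSE 'Critical'
  END AS assessment
FROM v$system_event
WHERE event = 'db file sequential read';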

This wait is normal when:

  • The database is executing many indexed lookups (OLTP workload)
  • Average latency is under 5 ms
  • The buffer cache hit ratio is healthy (> 95%)

This wait is a problem when:

  • Average latency spikes during business hours
  • Total time in this event represents > 30–40% of DB time
  • ASH analysis shows a small number of hot objects or files driving all the waits
  • The buffer cache is undersized and physical reads are excessive

-- Overall db file sequential read statistics from instance startup
-- (v$system_event times are in centiseconds, 1/100 s)
SELECT
  event,
  total_waits,
  total_timeouts,
  time_waited,
  average_wait,
  ROUND(time_waited / 100, 2)  AS time_waited_secs,
  ROUND(average_wait / 100, 2) AS avg_wait_secs,
  ROUND(average_wait * 10, 2)  AS avg_wait_ms
FROM v$system_event
WHERE event = 'db file sequential read'
ORDER BY time_waited DESC;
-- Compare I/O wait events to understand relative impact
SELECT
  event,
  total_waits,
  ROUND(average_wait * 10, 2) AS avg_wait_ms,
  ROUND(time_waited / 100, 2) AS total_secs,
  ROUND(time_waited * 100.0 /
        SUM(time_waited) OVER (), 2) AS pct_of_all_waits
FROM v$system_event
WHERE wait_class = 'User I/O'
ORDER BY time_waited DESC;
-- Sessions currently waiting on db file sequential read
-- (serial# comes from v$session; v$session_wait has no serial# column)
SELECT
  sw.sid,
  s.serial#,
  s.username,
  s.program,
  s.sql_id,
  sw.p1 AS file_number,
  sw.p2 AS block_number,
  sw.p3 AS blocks_read,
  sw.seconds_in_wait,
  f.name AS data_file_name
FROM v$session_wait sw
JOIN v$session s ON sw.sid = s.sid
JOIN v$datafile f ON sw.p1 = f.file#
WHERE sw.event = 'db file sequential read'
ORDER BY sw.seconds_in_wait DESC;

3. File-Level I/O Statistics — Find Hot Datafiles

-- I/O breakdown by datafile — identify which files drive single-block reads
-- (v$filestat times are in centiseconds: multiply by 10 for milliseconds)
SELECT
  f.name          AS file_name,
  ts.name         AS tablespace_name,
  fs.phyrds       AS physical_reads,
  fs.phyblkrd     AS physical_blocks_read,
  fs.singleblkrds AS single_block_reads,
  ROUND(fs.singleblkrdtim / 100, 2) AS single_blk_read_secs,
  CASE WHEN fs.singleblkrds > 0
       THEN ROUND(fs.singleblkrdtim * 10 / fs.singleblkrds, 2)
       ELSE 0
  END AS avg_single_blk_ms,
  ROUND(fs.phyrds * 100.0 /
        NULLIF(SUM(fs.phyrds) OVER (), 0), 2) AS pct_of_reads
FROM v$filestat fs
JOIN v$datafile f ON fs.file# = f.file#
JOIN v$tablespace ts ON f.ts# = ts.ts#
ORDER BY fs.singleblkrds DESC
FETCH FIRST 20 ROWS ONLY;

4. ASH Drill-Down — Hot Objects Driving Sequential Reads

-- Use ASH to identify which objects/SQL are causing the most sequential reads
-- Requires Diagnostics Pack license
-- v$active_session_history samples once per second, so samples approximate seconds
SELECT
  ash.current_obj#,
  o.owner,
  o.object_name,
  o.object_type,
  ash.sql_id,
  COUNT(*) AS ash_samples,
  COUNT(*) AS estimated_seconds,
  ROUND(COUNT(*) * 100.0 /
        SUM(COUNT(*)) OVER (), 2) AS pct_of_sequential_waits
FROM v$active_session_history ash
LEFT JOIN dba_objects o ON ash.current_obj# = o.object_id
WHERE ash.event = 'db file sequential read'
  AND ash.sample_time > SYSDATE - 1/24 -- Last hour
GROUP BY ash.current_obj#, o.object_name, o.object_type, o.owner, ash.sql_id
ORDER BY ash_samples DESC
FETCH FIRST 25 ROWS ONLY;
-- Assess buffer cache effectiveness
-- Hit ratio = 1 - physical reads / (db block gets + consistent gets)
SELECT
  ROUND((1 - phy.value /
        (cur.value + con.value)) * 100, 2) AS buffer_cache_hit_pct,
  phy.value AS physical_reads,
  cur.value AS db_block_gets,
  con.value AS consistent_gets
FROM v$sysstat phy,
     v$sysstat cur,
     v$sysstat con
WHERE phy.name = 'physical reads'
  AND cur.name = 'db block gets'
  AND con.name = 'consistent gets';

When the buffer cache cannot hold enough working data, Oracle must repeatedly read blocks from disk. Every cache miss results in a db file sequential read wait. This is especially impactful in OLTP workloads where the same index blocks and hot rows are repeatedly accessed.

Indicators: Low buffer cache hit ratio (< 95%), high physical reads in AWR, db_cache_advice recommending more cache.

2. Slow Storage — High Physical I/O Latency


Even when reads are necessary, slow storage causes long per-read wait times. Causes include:

  • Spinning disks without adequate caching
  • SAN contention during peak hours
  • RAID rebuild activity
  • Overcommitted shared storage in virtualized environments
  • Data files placed on slow or shared NFS mounts

Indicators: Average wait > 10 ms sustained, V$FILESTAT showing high per-read times on specific files, storage team reports of high queue depth.

3. Excessive Index Range Scans on Large Indexes


Inefficient queries that scan a large portion of an index generate thousands of single-block reads — one per index leaf block visited, plus one per table row fetched. Poor selectivity or missing covering indexes compound this.

Common patterns:

  • Leading column of the index has low cardinality (e.g., STATUS = 'ACTIVE' on a table where 80% of rows are active)
  • Multi-column index not used optimally because query predicates don't match the index column order
  • No suitable index, forcing a wide index scan followed by per-row ROWID access (often slower than a well-chosen full table scan)

4. High-Volume ROWID Lookups (Index + Table Row Fetch)


When a query uses an index to find rows but must then fetch each row individually from the table (non-covering index), every row fetch is a single-block read of the table segment. For queries returning thousands of rows this way, the cumulative I/O is very high.

Resolution: Add a covering index that includes all selected columns, eliminating table-level reads entirely.

Long-running queries reading data that has been modified by other transactions must construct a read-consistent version by reading undo blocks. Heavy DML activity or very long-running queries generate many undo block reads, all registered as db file sequential read against the undo tablespace files.

Indicators: ASH shows single-block reads against undo tablespace files; V$UNDOSTAT shows high undoblks.
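
To confirm the pattern, check whether the waits actually land on undo datafiles. A sketch using ASH (p1 holds the file number for this event; same Diagnostics Pack caveat as above; adjust the time window as needed):

-- Split sequential-read waits between undo and non-undo datafiles
SELECT
  CASE WHEN u.file# IS NULL THEN 'NON-UNDO' ELSE 'UNDO' END AS file_type,
  COUNT(*) AS ash_samples
FROM v$active_session_history ash
LEFT JOIN (SELECT df.file#
           FROM v$datafile df
           JOIN v$tablespace ts ON df.ts# = ts.ts#
           JOIN dba_tablespaces dt ON ts.name = dt.tablespace_name
           WHERE dt.contents = 'UNDO') u
  ON ash.p1 = u.file#
WHERE ash.event = 'db file sequential read'
  AND ash.sample_time > SYSDATE - 1/24 -- Last hour
GROUP BY CASE WHEN u.file# IS NULL THEN 'NON-UNDO' ELSE 'UNDO' END;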

The clustering factor measures how well a table’s physical row order matches an index’s logical order. A clustering factor close to the number of table blocks (ideal) means adjacent index entries point into the same table block, so consecutive row fetches often find the block already in cache; a factor close to the number of rows (bad) means nearly every index entry points to a different block, so each row fetch is likely a separate single-block read. Paradoxically, a very poor clustering factor on a large table can push the optimizer toward full scans — but if the index is still chosen, each row fetch is likely to be a physical read of a different block.

-- Check clustering factor for indexes on hot tables
-- (NULLIF guards against tables with num_rows = 0)
SELECT
  index_name,
  table_name,
  clustering_factor,
  num_rows,
  ROUND(clustering_factor / NULLIF(num_rows, 0), 4) AS cf_ratio,
  CASE
    WHEN clustering_factor / NULLIF(num_rows, 0) < 0.1 THEN 'GOOD'
    WHEN clustering_factor / NULLIF(num_rows, 0) < 0.5 THEN 'MODERATE'
    ELSE 'POOR - consider table reorg'
  END AS assessment
FROM dba_indexes
WHERE table_owner = 'YOUR_SCHEMA'
  AND table_name = 'YOUR_TABLE'
ORDER BY clustering_factor DESC;

-- Check current buffer cache size and advisor recommendation
-- (size_for_estimate is already expressed in megabytes)
SELECT
  size_for_estimate AS size_mb,
  buffers_for_estimate,
  estd_physical_read_factor,
  estd_physical_reads
FROM v$db_cache_advice
WHERE name = 'DEFAULT'
  AND block_size = (SELECT value FROM v$parameter WHERE name = 'db_block_size')
  AND advice_status = 'ON'
ORDER BY size_for_estimate;

-- Increase buffer cache (requires restart if the new size exceeds SGA_MAX_SIZE)
ALTER SYSTEM SET db_cache_size = 8G SCOPE=BOTH;

-- If using ASMM, increase SGA_TARGET instead
ALTER SYSTEM SET sga_target = 16G SCOPE=BOTH;
-- Identify the slowest datafiles (v$filestat times are in centiseconds)
SELECT
  f.name,
  fs.singleblkrds,
  CASE WHEN fs.singleblkrds > 0
       THEN ROUND(fs.singleblkrdtim * 10 / fs.singleblkrds, 2)
       ELSE 0
  END AS avg_single_blk_ms
FROM v$filestat fs
JOIN v$datafile f ON fs.file# = f.file#
ORDER BY avg_single_blk_ms DESC
FETCH FIRST 10 ROWS ONLY;

-- Move a hot datafile to faster storage
-- 12c+: ALTER DATABASE MOVE DATAFILE '...' TO '+FAST_DG/...'; -- online, no outage
-- Pre-12c with ASM: RMAN> BACKUP AS COPY DATAFILE 7 FORMAT '+FAST_DG';
-- then take the file offline, SWITCH DATAFILE 7 TO COPY, recover, bring online

Resolution for Excessive Index Range Scans

-- Find the SQL statements generating the most sequential reads via ASH
-- (v$active_session_history samples once per second, so samples approximate seconds)
SELECT
  sql_id,
  COUNT(*) AS ash_samples,
  COUNT(*) AS est_secs_waiting
FROM v$active_session_history
WHERE event = 'db file sequential read'
  AND sample_time > SYSDATE - 1 -- Last 24 hours
GROUP BY sql_id
ORDER BY ash_samples DESC
FETCH FIRST 10 ROWS ONLY;

-- Get the execution plan for a problem SQL_ID
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'ALLSTATS LAST'));
-- Add a covering index to eliminate table row fetches
-- Example: query selects CUST_NAME, EMAIL from CUSTOMERS where STATUS = 'A'
CREATE INDEX idx_customers_covering
  ON customers (status, cust_name, email)
  TABLESPACE indexes;

-- Reorganize the table to improve physical row order matching the index
-- Option 1: Online table move (12.2+ for heap tables; maintains indexes)
ALTER TABLE sales_history MOVE ONLINE;

-- Option 2: Use DBMS_REDEFINITION for zero-downtime reorganization,
-- physically reordering rows to match the primary access index

-- Rebuild indexes after a non-online table move (they are left UNUSABLE)
ALTER INDEX idx_sales_dt REBUILD ONLINE;

-- Re-gather statistics after reorganization
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'SALES',
    tabname    => 'SALES_HISTORY',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO',
    cascade    => TRUE
  );
END;
/

Use V$DB_CACHE_ADVICE proactively. A hit ratio drop below 95% or the advisor recommending significantly more cache are early warning signs. In most OLTP systems, the buffer cache is the most cost-effective tuning lever.

2. Use Exadata Smart Flash Cache or Database Smart Flash Cache

On Exadata, many single-block reads can be served from Smart Flash Cache, drastically reducing latency. On non-Exadata platforms, Database Smart Flash Cache (supported on Solaris and Oracle Linux) can cache frequently accessed blocks on flash devices.
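
To gauge how much of the read workload the Exadata flash layer is absorbing, the flash cache hit statistic in v$sysstat can be compared against total read requests. A sketch (the 'cell flash cache read hits' statistic is populated only on Exadata):

-- Fraction of physical read requests served from cell flash cache
SELECT
  hits.value  AS flash_cache_read_hits,
  total.value AS physical_read_requests,
  ROUND(hits.value * 100.0 / NULLIF(total.value, 0), 2) AS flash_hit_pct
FROM v$sysstat hits, v$sysstat total
WHERE hits.name  = 'cell flash cache read hits'
  AND total.name = 'physical read total IO requests';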

Schedule regular AWR snapshot comparisons during peak hours. Track singleblkrds and singleblkrdtim per file over time to detect storage degradation before it becomes critical.

-- Weekly trending of single-block read latency per file from AWR
-- (dba_hist_filestatxs times are in centiseconds; values are cumulative
-- since instance startup, so compare deltas between snapshots for true trends)
SELECT
  snap_id,
  file#,
  singleblkrds AS single_reads,
  CASE WHEN singleblkrds > 0
       THEN ROUND(singleblkrdtim * 10 / singleblkrds, 2)
       ELSE 0
  END AS avg_latency_ms
FROM dba_hist_filestatxs
WHERE snap_id BETWEEN &begin_snap AND &end_snap
ORDER BY avg_latency_ms DESC
FETCH FIRST 20 ROWS ONLY;

Use SQL Tuning Advisor or manually review high-load SQL from AWR Top SQL reports. Queries that perform index range scans returning large result sets are candidates for covering indexes or query rewrites.
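
Running the advisor against one of the identified statements can be sketched as follows (requires the SQL Tuning Pack license; &sql_id and &task_name are substitution variables to fill in):

-- Create and execute a tuning task for a high-load SQL_ID
DECLARE
  l_task_name VARCHAR2(128);
BEGIN
  l_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => '&sql_id');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task_name);
  DBMS_OUTPUT.PUT_LINE('Tuning task: ' || l_task_name);
END;
/
-- Review the advisor findings
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('&task_name') FROM dual;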

Stale statistics cause the optimizer to choose inefficient index scans. Schedule DBMS_STATS.GATHER_DATABASE_STATS during off-peak hours or use automatic statistics collection.
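
Objects with stale or missing statistics can be spotted before the next scheduled run. A sketch against dba_tab_statistics (YOUR_SCHEMA is a placeholder):

-- Find tables with stale or missing optimizer statistics
SELECT owner, table_name, num_rows, last_analyzed, stale_stats
FROM dba_tab_statistics
WHERE owner = 'YOUR_SCHEMA'
  AND (stale_stats = 'YES' OR last_analyzed IS NULL)
ORDER BY last_analyzed NULLS FIRST;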