“Positive attitude plus effort equals performance.” Tommy Tuberville
However, tuning a database using lots of LOBs (Large OBjects) can be rather challenging even for experienced DBAs.
The following case was about slow database performance, and I was asked to have a look. What was perplexing for me was the fact that more than 95% of the data in the database was LOBs (BLOBs, to be more specific), some of them storing JSON data. I had never seen anything like that.
Calculating the size of the LOB segments in the database is almost straightforward. Here is the link to the note on “How to determine the actual size of the LOB segments and how to free the deleted/unused space above/below the HWM” (Doc ID 386341.1)
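Before reaching for the MOS note, a quick dictionary query already gives a good picture. The following is a sketch of my own (not taken from the note itself): it joins DBA_LOBS to DBA_SEGMENTS to list the largest LOB segments and the tables/columns they belong to.

```sql
-- A sketch: report the ten largest LOB segments and their owning columns
SELECT l.owner, l.table_name, l.column_name,
       s.segment_name,
       ROUND(s.bytes / 1024 / 1024 / 1024, 2) AS size_gb
FROM   dba_lobs     l
JOIN   dba_segments s
       ON s.owner = l.owner
      AND s.segment_name = l.segment_name
ORDER  BY s.bytes DESC
FETCH FIRST 10 ROWS ONLY;
```

Note that DBA_SEGMENTS shows allocated space, not actual used space; for the latter (and for space above/below the HWM) the procedure in the MOS note is the way to go.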
Looking at the top 5 events, knowing in advance about the BLOB issue, the AWR showed:
As we know, ‘enq: HW - contention’ may have a number of different causes, and there are several possible solutions to alleviate or reduce the contention:
Check I/O performance (disk groups)
Ensure that your LOB segment is not frequently extending
A table containing a very busy LOB segment may need partitioning in a manner that will evenly distribute concurrent DML across multiple partitions
Frequent LOB space/chunk reclamation can also cause ‘enq: HW - contention’
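To confirm which sessions and which segment are involved, a quick look at V$SESSION helps. This is a generic sketch based on the documented wait interface: for ‘enq: HW - contention’, P2 is the tablespace number and P3 is the DBA of the segment header block.

```sql
-- A sketch: sessions currently waiting on the HW enqueue
SELECT sid, event,
       p2 AS tablespace#,       -- tablespace number of the segment
       p3 AS header_block_dba   -- DBA of the segment header block
FROM   v$session
WHERE  event = 'enq: HW - contention';
```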
There were a couple of options that provided either temporary relief or a workaround for the problem:
1. Manually adding extra space to the LOB segment alleviated the issue by allocating more free space to the LOB segment, so chunk reclamation does not need to take place until the free space is used up again:
ALTER TABLE <lob_table> MODIFY LOB (<column_name>) (ALLOCATE EXTENT (SIZE <extent_size>));
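For instance, filling in the template with illustrative names (table ORDERS with BLOB column PAYLOAD and a 512M extent; all three are placeholders, not from the actual case):

```sql
-- Illustrative only: table, column and extent size are placeholders
ALTER TABLE orders MODIFY LOB (payload) (ALLOCATE EXTENT (SIZE 512M));
```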
2. The SHRINK SPACE command (the DBMS_REDEFINITION process can be used too for SECUREFILE LOBs) was used to free up any reclaimable space:
ALTER TABLE test_lob MODIFY LOB (image) (SHRINK SPACE);
It is possible to adjust the number of chunks that are cleaned up when a chunk cleanup operation is required. This is enabled by setting event 44951 to a value between 1 and 1024 (the default is 1), where the value sets the number of chunks to be cleaned up each time a chunk reclamation operation occurs. This can therefore reduce the number of requests for the High Watermark Enqueue:
EVENT="44951 TRACE NAME CONTEXT FOREVER, LEVEL <1-1024>"
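On an spfile-managed instance, the same event can be set with ALTER SYSTEM instead of editing the parameter file by hand; a sketch with the maximum level of 1024 (pick a level appropriate to your workload, and note the setting takes effect after a restart):

```sql
-- A sketch: set event 44951 via the spfile (requires an instance restart)
ALTER SYSTEM SET EVENT = '44951 TRACE NAME CONTEXT FOREVER, LEVEL 1024'
  SCOPE = SPFILE;
```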
Small LOBs (< 1MB) perform better than LONGs for inserts and have comparable performance on selects. Large LOBs perform better than LONGs in general. And LONGs will eventually be de-supported.
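If you still have LONG columns, Oracle supports converting them to LOBs in place with a simple ALTER TABLE (LONG to CLOB, LONG RAW to BLOB). A sketch with a hypothetical table and column:

```sql
-- A sketch: convert a legacy LONG column to a CLOB in place
-- (table and column names are illustrative)
ALTER TABLE legacy_docs MODIFY (body CLOB);
```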
For LOB performance, this is perhaps the best note available: LOB Performance Guideline (Doc ID 268476.1)
With Oracle Database 23c, you can experience improved read and write performance for LOBs due to the following enhancements:
Multiple LOBs in a single transaction are buffered simultaneously. This improves performance when you use a mixed workload in a transaction. A mixed workload here means switching between LOBs while writing within a single transaction: you write to LOB1, then to LOB2, and then again to LOB1, all in the same transaction. LOB1 and LOB2 are buffered simultaneously, which provides better throughput and minimizes space fragmentation.
Various enhancements, such as acceleration of compressed LOB append and compression unit caching, improve the performance of reads and writes to compressed LOBs.
The input-output buffer is adaptively resized based on the size of the input data for large writes to LOBs with the NOCACHE option. This improves the performance of large direct writes, such as writes to file systems on DBFS and OFS.
For more details, check the LOB Performance Guidelines and also the blog post by Tim Hall on how to rename LOB segments in 23c.
With the latest versions of Oracle, when COMPATIBLE is set to at least 20, Oracle recommends using the JSON data type to store JSON data rather than LOB storage. If you are still using LOB storage, then Oracle recommends BLOB, not CLOB.
Using BLOB instead of CLOB is particularly relevant if the database character set is AL32UTF8, where CLOB instances are stored using the UCS2 character set, which means that each character requires two bytes.
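A sketch of the two recommended options (table and column names are illustrative): the native JSON type where COMPATIBLE is at least 20, and otherwise a BLOB with an IS JSON check constraint rather than a CLOB.

```sql
-- Preferred when COMPATIBLE >= 20: native JSON data type
CREATE TABLE orders_json (
  id  NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  doc JSON
);

-- Otherwise: BLOB (not CLOB) with an IS JSON constraint,
-- avoiding the two-bytes-per-character CLOB storage on AL32UTF8
CREATE TABLE orders_blob (
  id  NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  doc BLOB CHECK (doc IS JSON)
);
```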