Consider implementing multiple performance tuning strategies
to speed up archive processing, delete processing, and subsetting
(either through load processing or insert processing). Not all performance
tuning strategies are applicable to all situations. However, to minimize
the number of bottlenecks, you must consider a wide variety of strategies.
How can I speed up the archive process?
Consider
taking the following steps to speed up archive processing. The applicability
of these steps depends on your environment.
- Install Optim™ server on
the database server or on a computer that is on the same network as
the database server. Eliminating or reducing the number of network
bottlenecks improves throughput, particularly for databases with large
object (LOB) data.
- Set up Optim servers on UNIX or Linux computers so that multiple
instances of Optim server can run on each computer. Only one instance
of Optim server can run at a time on a Windows computer.
- Enable parallel processing if you have multiple processors available.
Parallel processing increases processing speed for child tables.
- If you process a large number of rows within a table, experiment
with a forced scan table access strategy.
- Use statistical reports to monitor the fetch buffer size. The
fetch buffer size automatically increases with the number of parallel
connections, and larger fetch buffer sizes can reduce I/O overhead
(see the sketch after this list).
- Reduce the amount of table traversal that is based on relationships
(Option 1 and Option 2).
- Run Optim servers on computers
with fast processors (2 GHz or faster).
- Write the archive file to a fast disk drive, SAN volume, or RAID
1+0 disk array.
- Tune the I/O subsystem that Optim uses by placing key objects
(for example, data, logs, and indexes) on separate disk drives and
spreading the data across multiple disks and controllers.
- When you archive data in a Teradata database, use a TPT connection
and set the number of sessions to a value greater than 1.
- Turn off inline compression for extract and archive processes.
Turning off inline compression reduces the consumption of processor
resources but increases the amount of disk space that is used during
extract and archive processes.
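The effect of the fetch buffer can be illustrated outside of Optim with any database client. The following sketch is not Optim code; the driver (ibm_db_dbi), the connection string, and the table name SOURCE_TABLE are assumptions used only for illustration. It shows how a larger array size lets each round trip to the database return more rows.

```python
# Minimal sketch: a larger fetch (array) size means fewer round trips
# to the database for the same rows. The driver, connection string, and
# table name are placeholders, not Optim internals.
import ibm_db_dbi  # any DB-API 2.0 driver behaves the same way

conn = ibm_db_dbi.connect(
    "DATABASE=proddb;HOSTNAME=dbhost;PORT=50000;"
    "PROTOCOL=TCPIP;UID=user;PWD=secret", "", "")
cursor = conn.cursor()

cursor.arraysize = 1000              # rows per fetch; the DB-API default is 1
cursor.execute("SELECT * FROM SOURCE_TABLE")

round_trips = 0
while True:
    rows = cursor.fetchmany()        # fetches cursor.arraysize rows at a time
    if not rows:
        break
    round_trips += 1
    # ... write the rows to the extract or archive file here ...

print(f"all rows fetched in {round_trips} fetch operations")
cursor.close()
conn.close()
```

The same trade-off applies to the Optim fetch buffer: more rows per fetch means fewer fetch operations for the same data, at the cost of more memory per connection.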
How can I speed up the delete process?
Consider
taking the following steps to speed up delete processing. The applicability
of these steps depends on your environment.
- Set the commit frequency in the delete request so that commits are
sent to the database less frequently (see the first sketch after this
list). Frequent commits degrade performance because of high levels of
I/O and the handling of database storage structures.
- Turn off the Compare Rows option if the request is already verified
in a test environment. Compare Rows is turned on by default and has
high overhead because it operates row by row.
- If business rules permit, disable indexes and foreign keys during
delete processing. If there are several indexes on a table, the database
must update the indexes every time that rows are deleted, which leads
to high levels of I/O and degradation in performance.
- Enable parallel processing if you have multiple processors available
and if there are multiple tables (objects) that contain rows to delete.
Parallel processing allows multiple tables to be processed for delete
at the same time by using concurrent Optim threads.
- Create a unique primary key for the tables in the delete process. Optim solutions require a primary
key for tables in a delete process. If a table does not have a unique
key, an Optim-generated primary
key can be created. However, the Optim-generated
primary key might lead to row-level operations due to non-unique keys.
- If you are not archiving tables and Compare Rows is turned off,
use array processing for Oracle, DB2® DPF,
Teradata, and Netezza® databases.
- Minimize the load on the database server other than delete processing.
Delete processing is complex and requires substantial database resources.
- Use the Lock Table option to grant Optim exclusive use of the table.
Be aware that locking the table reduces concurrency for other
applications that use it.
- If Optim uses the key lookup table access method during the delete
operation, increase the number of keys from the default value of 1 to
a larger value (for example, 100). Increasing the number of keys
processes rows in bulk, which improves overall delete time.
- Defragment the database and update the database statistics to
optimize the speed of the database.
- Break a large delete process into multiple smaller processes.
Smaller processes can run in parallel, and they make it easier to fit
the work into the available maintenance window (see the second sketch
after this list).
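As an illustration of the commit-frequency trade-off, the first sketch below deletes rows in batches and commits once per batch rather than once per row. It is a generic DB-API sketch, not Optim code; the driver, the table ORDERS_ARCHIVE, the key column ORDER_ID, and the batch size are assumptions.

```python
# Minimal sketch: commit once per batch of deletes instead of once per row.
# The driver, table ORDERS_ARCHIVE, key column ORDER_ID, and batch size
# are placeholders, not Optim settings.
import ibm_db_dbi  # any DB-API 2.0 driver behaves the same way

COMMIT_EVERY = 10_000                # rows deleted between commits

def delete_keys(conn, keys):
    cursor = conn.cursor()
    since_commit = 0
    for key in keys:
        cursor.execute("DELETE FROM ORDERS_ARCHIVE WHERE ORDER_ID = ?", (key,))
        since_commit += 1
        if since_commit >= COMMIT_EVERY:
            conn.commit()            # one commit per batch, not per row
            since_commit = 0
    conn.commit()                    # commit the final partial batch
    cursor.close()
```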
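The second sketch illustrates breaking one large delete into smaller key ranges that run concurrently, each on its own connection. Again, the driver, connection string, and object names are assumptions, and in practice the ranges would be derived from the table's actual key distribution.

```python
# Minimal sketch: split one large delete into disjoint key ranges and
# process them concurrently, each range on its own connection.
# Connection string, table, and key ranges are placeholders.
from concurrent.futures import ThreadPoolExecutor
import ibm_db_dbi

DSN = ("DATABASE=proddb;HOSTNAME=dbhost;PORT=50000;"
       "PROTOCOL=TCPIP;UID=user;PWD=secret")

def delete_range(bounds):
    low, high = bounds
    conn = ibm_db_dbi.connect(DSN, "", "")
    cursor = conn.cursor()
    cursor.execute(
        "DELETE FROM ORDERS_ARCHIVE WHERE ORDER_ID BETWEEN ? AND ?",
        (low, high))
    conn.commit()
    cursor.close()
    conn.close()
    return bounds

# Four smaller processes instead of one large one.
ranges = [(1, 250_000), (250_001, 500_000),
          (500_001, 750_000), (750_001, 1_000_000)]
with ThreadPoolExecutor(max_workers=4) as pool:
    for low, high in pool.map(delete_range, ranges):
        print(f"deleted keys {low}..{high}")
```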
I am creating a test environment by subsetting 10%
of my production database. I have an 8-hour window to complete the
subset. What must I do to complete the subset on time?
Consider
taking the following steps to speed up subsetting. The applicability
of these steps depends on your environment.
- Install Optim server on
the production (source) database server or on a computer that is on
the same network as the production (source) database server.
- Use LOAD instead of INSERT to create the subset. If you must use
INSERT, ensure that the Commit frequency is Optimized.
- Use the same character set and code page for the source and target
databases to avoid spending processor time on data conversion. Optim
always stores data in the native character set and code page format.
- Postpone the creation of indexes until after the data is loaded
or inserted (see the first sketch after this list).
- If the data must be masked, mask it in advance of the actual load
or insert process.
- Store LOAD data files close to the target database server so that
the LOAD data does not have to move over the network. Remote loading
of LOAD data files increases network use, which degrades performance.
- If you use LOAD to move the data, use the Named Pipes option to
reduce the I/O and storage requirements (see the second sketch after
this list).
- For Oracle databases, set the load request to use the direct path
load method with the parallel loads and unrecoverable load options
enabled.
- Ensure that source and target DDL objects have similar data types.
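The first sketch below illustrates deferring index creation until after the subset data is in place. It is a generic DB-API sketch with hypothetical object names and connection details, not Optim-generated DDL.

```python
# Minimal sketch: build indexes once, after the subset is loaded, instead
# of maintaining them row by row during the load. All object names and
# the connection string are placeholders.
import ibm_db_dbi  # any DB-API 2.0 driver behaves the same way

conn = ibm_db_dbi.connect(
    "DATABASE=testdb;HOSTNAME=testhost;PORT=50000;"
    "PROTOCOL=TCPIP;UID=user;PWD=secret", "", "")
cursor = conn.cursor()

cursor.execute("DROP INDEX ORDERS_CUST_IX")      # remove before the load
conn.commit()

# ... run the LOAD or INSERT of the 10% subset here ...

cursor.execute(
    "CREATE INDEX ORDERS_CUST_IX ON ORDERS (CUSTOMER_ID)")  # rebuild once
conn.commit()
cursor.close()
conn.close()
```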
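The second sketch illustrates the named-pipe idea on UNIX or Linux: the extract data is written into a FIFO while the load utility reads from the same path, so the LOAD data file never lands on disk. The pipe path, the loader command line, and the extract step are all placeholders, not Optim or database-vendor syntax.

```python
# Minimal sketch (UNIX/Linux): feed a load utility through a named pipe so
# that the LOAD data file is never written to disk. The pipe path, the
# loader command line, and the extract step are placeholders.
import os
import subprocess
import threading

PIPE = "/tmp/subset_load.pipe"

def extract_rows():
    # Placeholder for the real extract step; yields rows as tuples.
    yield (1, "first row")
    yield (2, "second row")

def write_rows():
    # open() blocks until the loader opens the pipe for reading.
    with open(PIPE, "w") as pipe:
        for row in extract_rows():
            pipe.write("|".join(map(str, row)) + "\n")

os.mkfifo(PIPE)
try:
    writer = threading.Thread(target=write_rows, daemon=True)
    writer.start()
    # Placeholder loader invocation; a real DB2 or Oracle load command
    # would point at the pipe path instead of a flat file.
    subprocess.run(["my_load_utility", "--input", PIPE], check=True)
    writer.join()
finally:
    os.remove(PIPE)
```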
How best can I use Optim's architecture to improve
performance?
Consider taking the following steps to make better use of the Optim
architecture. The applicability of these steps depends on your
environment.
- Install Optim server on a Linux or UNIX computer. This configuration
enables you to run multiple instances of Optim server, which improves
throughput. Only one instance of Optim server can run at a time on a
Windows computer.
- Break down an extract or archive job into multiple Optim requests
that can run concurrently (see the sketch after this list).
- Use multiple physical servers for Optim servers so that you can
scale out and run several processes in parallel.
- Install Optim server on
the database server or on a computer that is on the same network as
the database server.
- Tune your database server so that it can handle multiple Optim
processes or requests at a time.
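As a sketch of the scale-out idea, the example below launches several independent requests at the same time instead of running one large job serially. The wrapper script and request names are placeholders, not Optim command-line syntax; substitute the invocation that you use in your environment.

```python
# Minimal sketch: launch several independent requests concurrently instead
# of running one large job serially. The wrapper script and request names
# are placeholders, not Optim command-line syntax.
import subprocess
from concurrent.futures import ThreadPoolExecutor

requests = [
    ["run_request.sh", "ARCHIVE_ORDERS_2019"],   # hypothetical wrapper script
    ["run_request.sh", "ARCHIVE_ORDERS_2020"],
    ["run_request.sh", "ARCHIVE_ORDERS_2021"],
]

def run(cmd):
    subprocess.run(cmd, check=True)   # check=True surfaces any failed request
    return cmd[-1]

with ThreadPoolExecutor(max_workers=len(requests)) as pool:
    for name in pool.map(run, requests):
        print(f"request {name} completed")
```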