Oracle® Database
Utilities
11g Release 2 (11.2)
E10701-02
August 2009
Copyright © 1996, 2009, Oracle and/or its affiliates. All rights reserved.
Primary Author: Kathy Rich
Contributors: Lee Barton, Ellen Batbouta, Janet Blowney, Steve DiPirro, Bill Fisher, Steve Fogel, Dean Gagne, John Kalogeropoulos, Jonathan Klein, Cindy Lim, Brian McCarthy, Rod Payne, Rich Phillips, Mike Sakayeda, Francisco Sanchez, Marilyn Saunders, Jim Stenoish, Hui-ling Yu
This software and related documentation are provided under a license agreement containing restrictions on
use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your
license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify,
license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means.
Reverse engineering, disassembly, or decompilation of this software, unless required by law for
interoperability, is prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If
you find any errors, please report them to us in writing.
If this software or related documentation is delivered to the U.S. Government or anyone licensing it on
behalf of the U.S. Government, the following notice is applicable:
U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data
delivered to U.S. Government customers are "commercial computer software" or "commercial technical
data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental
regulations. As such, the use, duplication, disclosure, modification, and adaptation shall be subject to the
restrictions and license terms set forth in the applicable Government contract, and, to the extent applicable
by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial
Computer Software License (December 2007). Oracle USA, Inc., 500 Oracle Parkway, Redwood City, CA
94065.
This software is developed for general use in a variety of information management applications. It is not
developed or intended for use in any inherently dangerous applications, including applications which may
create a risk of personal injury. If you use this software in dangerous applications, then you shall be
responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure the safe use
of this software. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of
this software in dangerous applications.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks
of their respective owners.
This software and documentation may provide access to or information on content, products, and services
from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all
warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and
its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of
third-party content, products, or services.
Contents
Preface xxxiii
Audience xxxiii
Documentation Accessibility xxxiii
Related Documentation xxxiv
Syntax Diagrams xxxv
Conventions xxxv
What's New in Database Utilities? xxxvii
New Features in Oracle Database Utilities 11g Release 2 xxxvii
New Features in Oracle Database Utilities 11g Release 1 xxxviii
Part I Oracle Data Pump
1 Overview of Oracle Data Pump
Data Pump Components 1-1
How Does Data Pump Move Data? 1-2
Using Data File Copying to Move Data 1-2
Using Direct Path to Move Data 1-3
Using External Tables to Move Data 1-4
Using Conventional Path to Move Data 1-5
Using Network Link Import to Move Data 1-5
Required Roles for Data Pump Export and Import Operations 1-6
What Happens During Execution of a Data Pump Job? 1-6
Coordination of a Job 1-6
Tracking Progress Within a Job 1-6
Filtering Data and Metadata During a Job 1-7
Transforming Metadata During a Job 1-7
Maximizing Job Performance 1-8
Loading and Unloading of Data 1-8
Monitoring Job Status 1-8
Monitoring the Progress of Executing Jobs 1-9
File Allocation 1-10
Specifying Files and Adding Additional Dump Files 1-10
Default Locations for Dump, Log, and SQL Files 1-10
Oracle RAC Considerations 1-12
Using Directory Objects When Oracle Automatic Storage Management Is Enabled 1-12
Using Substitution Variables 1-13
Moving Data Between Different Database Versions 1-13
SecureFile LOB Considerations 1-14
2 Data Pump Export
What Is Data Pump Export? 2-1
Invoking Data Pump Export 2-1
Data Pump Export Interfaces 2-2
Data Pump Export Modes 2-2
Full Export Mode 2-3
Schema Mode 2-3
Table Mode 2-3
Tablespace Mode 2-4
Transportable Tablespace Mode 2-4
Network Considerations 2-4
Filtering During Export Operations 2-5
Data Filters 2-5
Metadata Filters 2-5
Parameters Available in Export's Command-Line Mode 2-6
ATTACH 2-8
CLUSTER 2-9
COMPRESSION 2-10
CONTENT 2-11
DATA_OPTIONS 2-11
DIRECTORY 2-12
DUMPFILE 2-13
ENCRYPTION 2-14
ENCRYPTION_ALGORITHM 2-15
ENCRYPTION_MODE 2-16
ENCRYPTION_PASSWORD 2-17
ESTIMATE 2-18
ESTIMATE_ONLY 2-19
EXCLUDE 2-19
FILESIZE 2-21
FLASHBACK_SCN 2-22
FLASHBACK_TIME 2-22
FULL 2-23
HELP 2-24
INCLUDE 2-24
JOB_NAME 2-26
LOGFILE 2-26
NETWORK_LINK 2-27
NOLOGFILE 2-28
PARALLEL 2-29
PARFILE 2-30
QUERY 2-31
REMAP_DATA 2-32
REUSE_DUMPFILES 2-33
SAMPLE 2-33
SCHEMAS 2-34
SERVICE_NAME 2-35
SOURCE_EDITION 2-36
STATUS 2-36
TABLES 2-37
TABLESPACES 2-39
TRANSPORT_FULL_CHECK 2-40
TRANSPORT_TABLESPACES 2-40
TRANSPORTABLE 2-42
VERSION 2-43
Commands Available in Export's Interactive-Command Mode 2-44
ADD_FILE 2-45
CONTINUE_CLIENT 2-45
EXIT_CLIENT 2-45
FILESIZE 2-46
HELP 2-46
KILL_JOB 2-46
PARALLEL 2-47
START_JOB 2-47
STATUS 2-48
STOP_JOB 2-48
Examples of Using Data Pump Export 2-49
Performing a Table-Mode Export 2-49
Data-Only Unload of Selected Tables and Rows 2-49
Estimating Disk Space Needed in a Table-Mode Export 2-50
Performing a Schema-Mode Export 2-50
Performing a Parallel Full Database Export 2-50
Using Interactive Mode to Stop and Reattach to a Job 2-50
Syntax Diagrams for Data Pump Export 2-51
3 Data Pump Import
What Is Data Pump Import? 3-1
Invoking Data Pump Import 3-1
Data Pump Import Interfaces 3-2
Data Pump Import Modes 3-2
Full Import Mode 3-3
Schema Mode 3-3
Table Mode 3-3
Tablespace Mode 3-4
Transportable Tablespace Mode 3-4
Network Considerations 3-4
Filtering During Import Operations 3-5
Data Filters 3-5
Metadata Filters 3-5
Parameters Available in Import's Command-Line Mode 3-6
ATTACH 3-8
CLUSTER 3-9
CONTENT 3-10
DATA_OPTIONS 3-10
DIRECTORY 3-11
DUMPFILE 3-12
ENCRYPTION_PASSWORD 3-13
ESTIMATE 3-14
EXCLUDE 3-14
FLASHBACK_SCN 3-16
FLASHBACK_TIME 3-17
FULL 3-18
HELP 3-19
INCLUDE 3-19
JOB_NAME 3-20
LOGFILE 3-21
NETWORK_LINK 3-22
NOLOGFILE 3-23
PARALLEL 3-23
PARFILE 3-24
PARTITION_OPTIONS 3-25
QUERY 3-26
REMAP_DATA 3-27
REMAP_DATAFILE 3-28
REMAP_SCHEMA 3-29
REMAP_TABLE 3-30
REMAP_TABLESPACE 3-31
REUSE_DATAFILES 3-32
SCHEMAS 3-32
SERVICE_NAME 3-33
SKIP_UNUSABLE_INDEXES 3-34
SOURCE_EDITION 3-35
SQLFILE 3-36
STATUS 3-37
STREAMS_CONFIGURATION 3-37
TABLE_EXISTS_ACTION 3-37
TABLES 3-39
TABLESPACES 3-41
TARGET_EDITION 3-41
TRANSFORM 3-42
TRANSPORT_DATAFILES 3-44
TRANSPORT_FULL_CHECK 3-45
TRANSPORT_TABLESPACES 3-46
TRANSPORTABLE 3-47
VERSION 3-48
Commands Available in Import's Interactive-Command Mode 3-49
CONTINUE_CLIENT 3-50
EXIT_CLIENT 3-50
HELP 3-50
KILL_JOB 3-51
PARALLEL 3-51
START_JOB 3-51
STATUS 3-52
STOP_JOB 3-52
Examples of Using Data Pump Import 3-53
Performing a Data-Only Table-Mode Import 3-53
Performing a Schema-Mode Import 3-53
Performing a Network-Mode Import 3-54
Syntax Diagrams for Data Pump Import 3-54
4 Data Pump Legacy Mode
Parameter Mappings 4-1
Using Original Export Parameters with Data Pump 4-1
Using Original Import Parameters with Data Pump 4-4
Management of File Locations in Data Pump Legacy Mode 4-7
Adjusting Existing Scripts for Data Pump Log Files and Errors 4-9
Log Files 4-9
Error Cases 4-9
Exit Status 4-10
5 Data Pump Performance
Data Performance Improvements for Data Pump Export and Import 5-1
Tuning Performance 5-2
Controlling Resource Consumption 5-2
Effects of Compression and Encryption on Performance 5-2
Initialization Parameters That Affect Data Pump Performance 5-3
Setting the Size Of the Buffer Cache In a Streams Environment 5-3
6 The Data Pump API
How Does the Client Interface to the Data Pump API Work? 6-1
Job States 6-1
What Are the Basic Steps in Using the Data Pump API? 6-3
Examples of Using the Data Pump API 6-4
Part II SQL*Loader
7 SQL*Loader Concepts
SQL*Loader Features 7-1
SQL*Loader Parameters 7-2
SQL*Loader Control File 7-3
Input Data and Datafiles 7-3
Fixed Record Format 7-4
Variable Record Format 7-4
Stream Record Format 7-5
Logical Records 7-6
Data Fields 7-6
LOBFILEs and Secondary Datafiles (SDFs) 7-7
Data Conversion and Datatype Specification 7-7
Discarded and Rejected Records 7-8
The Bad File 7-8
SQL*Loader Rejects 7-8
Oracle Database Rejects 7-8
The Discard File 7-8
Log File and Logging Information 7-9
Conventional Path Loads, Direct Path Loads, and External Table Loads 7-9
Conventional Path Loads 7-9
Direct Path Loads 7-9
Parallel Direct Path 7-10
External Table Loads 7-10
Choosing External Tables Versus SQL*Loader 7-10
Behavior Differences Between SQL*Loader and External Tables 7-11
Multiple Primary Input Datafiles 7-11
Syntax and Datatypes 7-11
Byte-Order Marks 7-11
Default Character Sets, Date Masks, and Decimal Separator 7-11
Use of the Backslash Escape Character 7-11
Loading Objects, Collections, and LOBs 7-12
Supported Object Types 7-12
column objects 7-12
row objects 7-12
Supported Collection Types 7-12
Nested Tables 7-12
VARRAYs 7-12
Supported LOB Types 7-13
Partitioned Object Support 7-13
Application Development: Direct Path Load API 7-13
SQL*Loader Case Studies 7-13
Case Study Files 7-14
Running the Case Studies 7-15
Case Study Log Files 7-15
Checking the Results of a Case Study 7-16
8 SQL*Loader Command-Line Reference
Invoking SQL*Loader 8-1
Alternative Ways to Specify Parameters 8-2
Command-Line Parameters 8-2
BAD (bad file) 8-2
BINDSIZE (maximum size) 8-2
COLUMNARRAYROWS 8-3
CONTROL (control file) 8-3
DATA (datafile) 8-3
DATE_CACHE 8-3
DIRECT (data path) 8-4
DISCARD (file name) 8-4
DISCARDMAX (integer) 8-4
ERRORS (errors to allow) 8-4
EXTERNAL_TABLE 8-5
Restrictions When Using EXTERNAL_TABLE 8-6
FILE (tablespace file to load into) 8-6
LOAD (number of records to load) 8-7
LOG (log file) 8-7
MULTITHREADING 8-7
NO_INDEX_ERRORS 8-7
PARALLEL (parallel load) 8-7
PARFILE (parameter file) 8-7
READSIZE (read buffer size) 8-8
RESUMABLE 8-8
RESUMABLE_NAME 8-9
RESUMABLE_TIMEOUT 8-9
ROWS (rows per commit) 8-9
SILENT (feedback mode) 8-9
SKIP (records to skip) 8-10
SKIP_INDEX_MAINTENANCE 8-10
SKIP_UNUSABLE_INDEXES 8-11
STREAMSIZE 8-11
USERID (username/password) 8-12
Exit Codes for Inspection and Display 8-12
9 SQL*Loader Control File Reference
Control File Contents 9-1
Comments in the Control File 9-3
Specifying Command-Line Parameters in the Control File 9-3
OPTIONS Clause 9-3
Specifying File Names and Object Names 9-4
File Names That Conflict with SQL and SQL*Loader Reserved Words 9-4
Specifying SQL Strings 9-4
Operating System Considerations 9-4
Specifying a Complete Path 9-4
Backslash Escape Character 9-4
Nonportable Strings 9-5
Using the Backslash as an Escape Character 9-5
Escape Character Is Sometimes Disallowed 9-5
Identifying XMLType Tables 9-5
Specifying Datafiles 9-6
Examples of INFILE Syntax 9-7
Specifying Multiple Datafiles 9-8
Identifying Data in the Control File with BEGINDATA 9-8
Specifying Datafile Format and Buffering 9-9
Specifying the Bad File 9-9
Examples of Specifying a Bad File Name 9-10
How Bad Files Are Handled with LOBFILEs and SDFs 9-10
Criteria for Rejected Records 9-10
Specifying the Discard File 9-11
Specifying the Discard File in the Control File 9-11
Specifying the Discard File from the Command Line 9-12
Examples of Specifying a Discard File Name 9-12
Criteria for Discarded Records 9-12
How Discard Files Are Handled with LOBFILEs and SDFs 9-12
Limiting the Number of Discarded Records 9-12
Handling Different Character Encoding Schemes 9-13
Multibyte (Asian) Character Sets 9-13
Unicode Character Sets 9-13
Database Character Sets 9-14
Datafile Character Sets 9-14
Input Character Conversion 9-14
Considerations When Loading Data into VARRAYs or Primary-Key-Based REFs 9-15
CHARACTERSET Parameter 9-15
Control File Character Set 9-16
Character-Length Semantics 9-17
Shift-sensitive Character Data 9-18
Interrupted Loads 9-18
Discontinued Conventional Path Loads 9-19
Discontinued Direct Path Loads 9-19
Load Discontinued Because of Space Errors 9-19
Load Discontinued Because Maximum Number of Errors Exceeded 9-19
Load Discontinued Because of Fatal Errors 9-20
Load Discontinued Because a Ctrl+C Was Issued 9-20
Status of Tables and Indexes After an Interrupted Load 9-20
Using the Log File to Determine Load Status 9-20
Continuing Single-Table Loads 9-20
Assembling Logical Records from Physical Records 9-21
Using CONCATENATE to Assemble Logical Records 9-21
Using CONTINUEIF to Assemble Logical Records 9-21
Loading Logical Records into Tables 9-24
Specifying Table Names 9-24
INTO TABLE Clause 9-24
Table-Specific Loading Method 9-25
Loading Data into Empty Tables 9-25
Loading Data into Nonempty Tables 9-25
Table-Specific OPTIONS Parameter 9-26
Loading Records Based on a Condition 9-27
Using the WHEN Clause with LOBFILEs and SDFs 9-27
Specifying Default Data Delimiters 9-27
fields_spec 9-28
termination_spec 9-28
enclosure_spec 9-28
Handling Short Records with Missing Data 9-28
TRAILING NULLCOLS Clause 9-29
Index Options 9-29
SORTED INDEXES Clause 9-29
SINGLEROW Option 9-30
Benefits of Using Multiple INTO TABLE Clauses 9-30
Extracting Multiple Logical Records 9-30
Relative Positioning Based on Delimiters 9-31
Distinguishing Different Input Record Formats 9-31
Relative Positioning Based on the POSITION Parameter 9-32
Distinguishing Different Input Row Object Subtypes 9-32
Loading Data into Multiple Tables 9-33
Summary 9-33
Bind Arrays and Conventional Path Loads 9-34
Size Requirements for Bind Arrays 9-34
Performance Implications of Bind Arrays 9-34
Specifying Number of Rows Versus Size of Bind Array 9-34
Calculations to Determine Bind Array Size 9-35
Determining the Size of the Length Indicator 9-36
Calculating the Size of Field Buffers 9-36
Minimizing Memory Requirements for Bind Arrays 9-38
Calculating Bind Array Size for Multiple INTO TABLE Clauses 9-38
10 SQL*Loader Field List Reference
Field List Contents 10-1
Specifying the Position of a Data Field 10-2
Using POSITION with Data Containing Tabs 10-3
Using POSITION with Multiple Table Loads 10-3
Examples of Using POSITION 10-4
Specifying Columns and Fields 10-4
Specifying Filler Fields 10-4
Specifying the Datatype of a Data Field 10-6
SQL*Loader Datatypes 10-6
Nonportable Datatypes 10-6
INTEGER(n) 10-7
SMALLINT 10-7
FLOAT 10-7
DOUBLE 10-8
BYTEINT 10-8
ZONED 10-8
DECIMAL 10-9
VARGRAPHIC 10-9
VARCHAR 10-10
VARRAW 10-11
LONG VARRAW 10-11
Portable Datatypes 10-11
CHAR 10-12
Datetime and Interval Datatypes 10-12
GRAPHIC 10-14
GRAPHIC EXTERNAL 10-15
Numeric EXTERNAL 10-15
RAW 10-16
VARCHARC 10-16
VARRAWC 10-17
Conflicting Native Datatype Field Lengths 10-17
Field Lengths for Length-Value Datatypes 10-17
Datatype Conversions 10-17
Datatype Conversions for Datetime and Interval Datatypes 10-18
Specifying Delimiters 10-19
Syntax for Termination and Enclosure Specification 10-19
Delimiter Marks in the Data 10-20
Maximum Length of Delimited Data 10-21
Loading Trailing Blanks with Delimiters 10-21
How Delimited Data Is Processed 10-21
Fields Using Only TERMINATED BY 10-22
Fields Using ENCLOSED BY Without TERMINATED BY 10-22
Fields Using ENCLOSED BY With TERMINATED BY 10-22
Fields Using OPTIONALLY ENCLOSED BY With TERMINATED BY 10-23
Conflicting Field Lengths for Character Datatypes 10-24
Predetermined Size Fields 10-24
Delimited Fields 10-24
Date Field Masks 10-24
Specifying Field Conditions 10-25
Comparing Fields to BLANKS 10-26
Comparing Fields to Literals 10-27
Using the WHEN, NULLIF, and DEFAULTIF Clauses 10-27
Examples of Using the WHEN, NULLIF, and DEFAULTIF Clauses 10-29
Loading Data Across Different Platforms 10-30
Byte Ordering 10-31
Specifying Byte Order 10-32
Using Byte Order Marks (BOMs) 10-33
Suppressing Checks for BOMs 10-34
Loading All-Blank Fields 10-34
Trimming Whitespace 10-35
Datatypes for Which Whitespace Can Be Trimmed 10-36
Specifying Field Length for Datatypes for Which Whitespace Can Be Trimmed 10-37
Predetermined Size Fields 10-37
Delimited Fields 10-37
Relative Positioning of Fields 10-37
No Start Position Specified for a Field 10-38
Previous Field Terminated by a Delimiter 10-38
Previous Field Has Both Enclosure and Termination Delimiters 10-38
Leading Whitespace 10-38
Previous Field Terminated by Whitespace 10-39
Optional Enclosure Delimiters 10-39
Trimming Trailing Whitespace 10-39
Trimming Enclosed Fields 10-40
How the PRESERVE BLANKS Option Affects Whitespace Trimming 10-40
How [NO] PRESERVE BLANKS Works with Delimiter Clauses 10-40
Applying SQL Operators to Fields 10-41
Referencing Fields 10-42
Common Uses of SQL Operators in Field Specifications 10-43
Combinations of SQL Operators 10-44
Using SQL Strings with a Date Mask 10-44
Interpreting Formatted Fields 10-44
Using SQL Strings to Load the ANYDATA Database Type 10-44
Using SQL*Loader to Generate Data for Input 10-45
Loading Data Without Files 10-45
Setting a Column to a Constant Value 10-45
CONSTANT Parameter 10-45
Setting a Column to an Expression Value 10-46
EXPRESSION Parameter 10-46
Setting a Column to the Datafile Record Number 10-46
RECNUM Parameter 10-46
Setting a Column to the Current Date 10-47
SYSDATE Parameter 10-47
Setting a Column to a Unique Sequence Number 10-47
SEQUENCE Parameter 10-47
Generating Sequence Numbers for Multiple Tables 10-48
Example: Generating Different Sequence Numbers for Each Insert 10-48
11 Loading Objects, LOBs, and Collections
Loading Column Objects 11-1
Loading Column Objects in Stream Record Format 11-2
Loading Column Objects in Variable Record Format 11-2
Loading Nested Column Objects 11-3
Loading Column Objects with a Derived Subtype 11-3
Specifying Null Values for Objects 11-4
Specifying Attribute Nulls 11-4
Specifying Atomic Nulls 11-5
Loading Column Objects with User-Defined Constructors 11-6
Loading Object Tables 11-9
Loading Object Tables with a Subtype 11-10
Loading REF Columns 11-11
Specifying Table Names in a REF Clause 11-11
System-Generated OID REF Columns 11-12
Primary Key REF Columns 11-12
Unscoped REF Columns That Allow Primary Keys 11-13
Loading LOBs 11-14
Loading LOB Data from a Primary Datafile 11-14
LOB Data in Predetermined Size Fields 11-15
LOB Data in Delimited Fields 11-15
LOB Data in Length-Value Pair Fields 11-16
Loading LOB Data from LOBFILEs 11-17
Dynamic Versus Static LOBFILE Specifications 11-18
Examples of Loading LOB Data from LOBFILEs 11-18
Considerations When Loading LOBs from LOBFILEs 11-21
Loading BFILE Columns 11-21
Loading Collections (Nested Tables and VARRAYs) 11-22
Restrictions in Nested Tables and VARRAYs 11-23
Secondary Datafiles (SDFs) 11-24
Dynamic Versus Static SDF Specifications 11-25
Loading a Parent Table Separately from Its Child Table 11-25
Memory Issues When Loading VARRAY Columns 11-26
12 Conventional and Direct Path Loads
Data Loading Methods 12-1
Loading ROWID Columns 12-2
Conventional Path Load 12-3
Conventional Path Load of a Single Partition 12-3
When to Use a Conventional Path Load 12-3
Direct Path Load 12-4
Data Conversion During Direct Path Loads 12-4
Direct Path Load of a Partitioned or Subpartitioned Table 12-5
Direct Path Load of a Single Partition or Subpartition 12-5
Advantages of a Direct Path Load 12-5
Restrictions on Using Direct Path Loads 12-6
Restrictions on a Direct Path Load of a Single Partition 12-7
When to Use a Direct Path Load 12-7
Integrity Constraints 12-7
Field Defaults on the Direct Path 12-7
Loading into Synonyms 12-7
Using Direct Path Load 12-7
Setting Up for Direct Path Loads 12-8
Specifying a Direct Path Load 12-8
Building Indexes 12-8
Improving Performance 12-8
Temporary Segment Storage Requirements 12-9
Indexes Left in an Unusable State 12-9
Using Data Saves to Protect Against Data Loss 12-10
Using the ROWS Parameter 12-10
Data Save Versus Commit 12-10
Data Recovery During Direct Path Loads 12-10
Media Recovery and Direct Path Loads 12-11
Instance Recovery and Direct Path Loads 12-11
Loading Long Data Fields 12-11
Loading Data As PIECED 12-11
Optimizing Performance of Direct Path Loads 12-12
Preallocating Storage for Faster Loading 12-12
Presorting Data for Faster Indexing 12-13
SORTED INDEXES Clause 12-13
Unsorted Data 12-13
Multiple-Column Indexes 12-13
Choosing the Best Sort Order 12-14
Infrequent Data Saves 12-14
Minimizing Use of the Redo Log 12-14
Disabling Archiving 12-14
Specifying the SQL*Loader UNRECOVERABLE Clause 12-15
Setting the SQL NOLOGGING Parameter 12-15
Specifying the Number of Column Array Rows and Size of Stream Buffers 12-15
Specifying a Value for the Date Cache 12-16
Optimizing Direct Path Loads on Multiple-CPU Systems 12-17
Avoiding Index Maintenance 12-18
Direct Loads, Integrity Constraints, and Triggers 12-18
Integrity Constraints 12-18
Enabled Constraints 12-18
Disabled Constraints 12-19
Reenable Constraints 12-19
Database Insert Triggers 12-20
Replacing Insert Triggers with Integrity Constraints 12-20
When Automatic Constraints Cannot Be Used 12-20
Preparation 12-21
Using an Update Trigger 12-21
Duplicating the Effects of Exception Conditions 12-21
Using a Stored Procedure 12-21
Permanently Disabled Triggers and Constraints 12-22
Increasing Performance with Concurrent Conventional Path Loads 12-22
Parallel Data Loading Models 12-23
Concurrent Conventional Path Loads 12-23
Intersegment Concurrency with Direct Path 12-23
Intrasegment Concurrency with Direct Path 12-23
Restrictions on Parallel Direct Path Loads 12-23
Initiating Multiple SQL*Loader Sessions 12-24
Parameters for Parallel Direct Path Loads 12-24
Using the FILE Parameter to Specify Temporary Segments 12-24
Enabling Constraints After a Parallel Direct Path Load 12-25
PRIMARY KEY and UNIQUE KEY Constraints 12-25
General Performance Improvement Hints 12-26
Part III External Tables
13 External Tables Concepts
How Are External Tables Created? 13-1
Location of Datafiles and Output Files 13-2
Access Parameters 13-3
Datatype Conversion During External Table Use 13-4
External Table Restrictions 13-5
14 The ORACLE_LOADER Access Driver
access_parameters Clause 14-2
record_format_info Clause 14-3
FIXED length 14-4
VARIABLE size 14-5
DELIMITED BY 14-5
CHARACTERSET 14-6
PREPROCESSOR 14-6
Restriction When Using the PREPROCESSOR Clause 14-8
LANGUAGE 14-8
TERRITORY 14-9
DATA IS ENDIAN 14-9
BYTEORDERMARK (CHECK | NOCHECK) 14-9
STRING SIZES ARE IN 14-10
LOAD WHEN 14-10
BADFILE | NOBADFILE 14-10
DISCARDFILE | NODISCARDFILE 14-11
LOG FILE | NOLOGFILE 14-11
SKIP 14-11
READSIZE 14-11
DISABLE_DIRECTORY_LINK_CHECK 14-12
DATE_CACHE 14-12
string 14-12
condition_spec 14-12
[directory object name:] filename 14-13
condition 14-14
range start : range end 14-14
field_definitions Clause 14-14
delim_spec 14-16
Example: External Table with Terminating Delimiters 14-17
Example: External Table with Enclosure and Terminator Delimiters 14-18
Example: External Table with Optional Enclosure Delimiters 14-18
trim_spec 14-18
MISSING FIELD VALUES ARE NULL 14-19
field_list 14-19
pos_spec Clause 14-20
start 14-21
* 14-21
increment 14-21
end 14-21
length 14-21
datatype_spec Clause 14-22
[UNSIGNED] INTEGER [EXTERNAL] [(len)] 14-23
DECIMAL [EXTERNAL] and ZONED [EXTERNAL] 14-23
ORACLE_DATE 14-23
ORACLE_NUMBER 14-24
Floating-Point Numbers 14-24
DOUBLE 14-24
FLOAT [EXTERNAL] 14-24
BINARY_DOUBLE 14-24
BINARY_FLOAT 14-24
RAW 14-25
CHAR 14-25
date_format_spec 14-25
VARCHAR and VARRAW 14-27
VARCHARC and VARRAWC 14-28
init_spec Clause 14-29
column_transforms Clause 14-29
transform 14-30
column_name 14-30
NULL 14-30
CONSTANT 14-30
CONCAT 14-30
LOBFILE 14-30
lobfile_attr_list 14-31
Example: Creating and Loading an External Table Using ORACLE_LOADER 14-32
Parallel Loading Considerations for the ORACLE_LOADER Access Driver 14-33
Performance Hints When Using the ORACLE_LOADER Access Driver 14-34
Restrictions When Using the ORACLE_LOADER Access Driver 14-34
Reserved Words for the ORACLE_LOADER Access Driver 14-35
15 The ORACLE_DATAPUMP Access Driver
access_parameters Clause 15-1
comments 15-2
COMPRESSION 15-2
ENCRYPTION 15-2
LOGFILE | NOLOGFILE 15-3
File Names for LOGFILE 15-4
VERSION Clause 15-4
Effects of Using the SQL ENCRYPT Clause 15-4
Unloading and Loading Data with the ORACLE_DATAPUMP Access Driver 15-5
Parallel Loading and Unloading 15-8
Combining Dump Files 15-8
Supported Datatypes 15-9
Unsupported Datatypes 15-10
Unloading and Loading BFILE Datatypes 15-10
Unloading LONG and LONG RAW Datatypes 15-12
Unloading and Loading Columns Containing Final Object Types 15-13
Tables of Final Object Types 15-14
Performance Hints When Using the ORACLE_DATAPUMP Access Driver 15-15
Restrictions When Using the ORACLE_DATAPUMP Access Driver 15-15
Reserved Words for the ORACLE_DATAPUMP Access Driver 15-16
Part IV Other Utilities
16 ADRCI: ADR Command Interpreter
About the ADR Command Interpreter (ADRCI) Utility 16-1
Definitions 16-2
Starting ADRCI and Getting Help 16-4
Using ADRCI in Interactive Mode 16-4
Getting Help 16-5
Using ADRCI in Batch Mode 16-6
Setting the ADRCI Homepath Before Using ADRCI Commands 16-6
Viewing the Alert Log 16-8
Finding Trace Files 16-9
Viewing Incidents 16-10
Packaging Incidents 16-10
About Packaging Incidents 16-10
Creating Incident Packages 16-11
Creating a Logical Incident Package 16-12
Adding Diagnostic Information to a Logical Incident Package 16-13
Generating a Physical Incident Package 16-14
ADRCI Command Reference 16-14
CREATE REPORT 16-15
ECHO 16-16
EXIT 16-16
HOST 16-17
IPS 16-17
Using the <ADR_HOME> and <ADR_BASE> Variables in IPS Commands 16-18
IPS ADD 16-18
IPS ADD FILE 16-19
IPS ADD NEW INCIDENTS 16-20
IPS COPY IN FILE 16-20
IPS COPY OUT FILE 16-21
IPS CREATE PACKAGE 16-21
IPS DELETE PACKAGE 16-23
IPS FINALIZE 16-23
IPS GENERATE PACKAGE 16-23
IPS GET MANIFEST 16-24
IPS GET METADATA 16-24
IPS PACK 16-25
IPS REMOVE 16-26
IPS REMOVE FILE 16-27
IPS SET CONFIGURATION 16-28
IPS SHOW CONFIGURATION 16-28
IPS SHOW FILES 16-31
IPS SHOW INCIDENTS 16-31
IPS UNPACK FILE 16-32
PURGE 16-32
QUIT 16-33
RUN 16-33
SET BASE 16-34
SET BROWSER 16-34
SET CONTROL 16-35
SET ECHO 16-35
SET EDITOR 16-35
SET HOMEPATH 16-36
SET TERMOUT 16-36
SHOW ALERT 16-36
SHOW BASE 16-38
SHOW CONTROL 16-39
SHOW HM_RUN 16-39
SHOW HOMEPATH 16-40
SHOW HOMES 16-41
SHOW INCDIR 16-41
SHOW INCIDENT 16-42
SHOW PROBLEM 16-45
SHOW REPORT 16-46
SHOW TRACEFILE 16-47
SPOOL 16-47
Troubleshooting ADRCI 16-48
17 DBVERIFY: Offline Database Verification Utility
Using DBVERIFY to Validate Disk Blocks of a Single Datafile 17-1
Syntax 17-1
Parameters 17-2
Sample DBVERIFY Output For a Single Datafile 17-3
Using DBVERIFY to Validate a Segment 17-3
Syntax 17-4
Parameters 17-4
Sample DBVERIFY Output For a Validated Segment 17-4
18 DBNEWID Utility
What Is the DBNEWID Utility? 18-1
Ramifications of Changing the DBID and DBNAME 18-1
Considerations for Global Database Names 18-2
Changing the DBID and DBNAME of a Database 18-2
Changing the DBID and Database Name 18-2
Changing Only the Database ID 18-5
Changing Only the Database Name 18-5
Troubleshooting DBNEWID 18-7
DBNEWID Syntax 18-8
Parameters 18-8
Restrictions and Usage Notes 18-9
Additional Restrictions for Releases Earlier Than Oracle Database 10g 18-10
19 Using LogMiner to Analyze Redo Log Files
LogMiner Benefits 19-1
Introduction to LogMiner 19-2
LogMiner Configuration 19-2
Sample Configuration 19-3
Requirements 19-3
Directing LogMiner Operations and Retrieving Data of Interest 19-4
LogMiner Dictionary Files and Redo Log Files 19-5
LogMiner Dictionary Options 19-5
Using the Online Catalog 19-6
Extracting a LogMiner Dictionary to the Redo Log Files 19-7
Extracting the LogMiner Dictionary to a Flat File 19-8
Redo Log File Options 19-8
Starting LogMiner 19-10
Querying V$LOGMNR_CONTENTS for Redo Data of Interest 19-10
How the V$LOGMNR_CONTENTS View Is Populated 19-12
Querying V$LOGMNR_CONTENTS Based on Column Values 19-13
The Meaning of NULL Values Returned by the MINE_VALUE Function 19-14
Usage Rules for the MINE_VALUE and COLUMN_PRESENT Functions 19-14
Querying V$LOGMNR_CONTENTS Based on XMLType Columns and Tables 19-14
Restrictions When Using LogMiner With XMLType Data 19-17
Example of a PL/SQL Procedure for Assembling XMLType Data 19-17
Filtering and Formatting Data Returned to V$LOGMNR_CONTENTS 19-19
Showing Only Committed Transactions 19-20
Skipping Redo Corruptions 19-22
Filtering Data by Time 19-23
Filtering Data by SCN 19-23
Formatting Reconstructed SQL Statements for Re-execution 19-23
Formatting the Appearance of Returned Data for Readability 19-24
Reapplying DDL Statements Returned to V$LOGMNR_CONTENTS 19-25
Calling DBMS_LOGMNR.START_LOGMNR Multiple Times 19-25
Supplemental Logging 19-26
Database-Level Supplemental Logging 19-27
Minimal Supplemental Logging 19-27
Database-Level Identification Key Logging 19-27
Disabling Database-Level Supplemental Logging 19-29
Table-Level Supplemental Logging 19-29
Table-Level Identification Key Logging 19-29
Table-Level User-Defined Supplemental Log Groups 19-30
Usage Notes for User-Defined Supplemental Log Groups 19-31
Tracking DDL Statements in the LogMiner Dictionary 19-31
DDL_DICT_TRACKING and Supplemental Logging Settings 19-32
DDL_DICT_TRACKING and Specified Time or SCN Ranges 19-33
Accessing LogMiner Operational Information in Views 19-34
Querying V$LOGMNR_LOGS 19-35
Querying Views for Supplemental Logging Settings 19-36
Steps in a Typical LogMiner Session 19-37
Enable Supplemental Logging 19-38
Extract a LogMiner Dictionary 19-38
Specify Redo Log Files for Analysis 19-38
Start LogMiner 19-39
Query V$LOGMNR_CONTENTS 19-40
End the LogMiner Session 19-40
Examples Using LogMiner 19-40
Examples of Mining by Explicitly Specifying the Redo Log Files of Interest 19-41
Example 1: Finding All Modifications in the Last Archived Redo Log File 19-41
Example 2: Grouping DML Statements into Committed Transactions 19-43
Example 3: Formatting the Reconstructed SQL 19-45
Example 4: Using the LogMiner Dictionary in the Redo Log Files 19-48
Example 5: Tracking DDL Statements in the Internal Dictionary 19-56
Example 6: Filtering Output by Time Range 19-59
Examples of Mining Without Specifying the List of Redo Log Files Explicitly 19-61
Example 1: Mining Redo Log Files in a Given Time Range 19-61
Example 2: Mining the Redo Log Files in a Given SCN Range 19-63
Example 3: Using Continuous Mining to Include Future Values in a Query 19-65
Example Scenarios 19-66
Scenario 1: Using LogMiner to Track Changes Made by a Specific User 19-66
Scenario 2: Using LogMiner to Calculate Table Access Statistics 19-67
Supported Datatypes, Storage Attributes, and Database and Redo Log File Versions 19-68
Supported Datatypes and Table Storage Attributes 19-68
Unsupported Datatypes and Table Storage Attributes 19-69
Supported Databases and Redo Log File Versions 19-69
SecureFile LOB Considerations 19-69
20 Using the Metadata APIs
Why Use the DBMS_METADATA API? 20-1
Overview of the DBMS_METADATA API 20-2
Using the DBMS_METADATA API to Retrieve an Object's Metadata 20-2
Typical Steps Used for Basic Metadata Retrieval 20-3
Retrieving Multiple Objects 20-4
Placing Conditions on Transforms 20-6
Accessing Specific Metadata Attributes 20-8
Using the DBMS_METADATA API to Re-Create a Retrieved Object 20-10
Using the DBMS_METADATA API to Retrieve Collections of Different Object Types 20-12
Filtering the Return of Heterogeneous Object Types 20-13
Using the DBMS_METADATA_DIFF API to Compare Object Metadata 20-15
Performance Tips for the Programmatic Interface of the DBMS_METADATA API 20-23
Example Usage of the DBMS_METADATA API 20-24
What Does the DBMS_METADATA Example Do? 20-24
Output Generated from the GET_PAYROLL_TABLES Procedure 20-26
Summary of DBMS_METADATA Procedures 20-28
Summary of DBMS_METADATA_DIFF Procedures 20-29
21 Original Export
What Is the Export Utility? 21-2
Before Using Export 21-2
Running catexp.sql or catalog.sql 21-2
Ensuring Sufficient Disk Space for Export Operations 21-3
Verifying Access Privileges for Export and Import Operations 21-3
Invoking Export 21-3
Invoking Export as SYSDBA 21-3
Command-Line Entries 21-4
Parameter Files 21-4
Interactive Mode 21-5
Restrictions When Using Export's Interactive Method 21-5
Getting Online Help 21-5
Export Modes 21-5
Table-Level and Partition-Level Export 21-8
Table-Level Export 21-8
Partition-Level Export 21-8
Export Parameters 21-9
BUFFER 21-9
Example: Calculating Buffer Size 21-9
COMPRESS 21-9
CONSISTENT 21-10
CONSTRAINTS 21-11
DIRECT 21-11
FEEDBACK 21-11
FILE 21-12
FILESIZE 21-12
FLASHBACK_SCN 21-13
FLASHBACK_TIME 21-13
FULL 21-13
Points to Consider for Full Database Exports and Imports 21-14
GRANTS 21-14
HELP 21-15
INDEXES 21-15
LOG 21-15
OBJECT_CONSISTENT 21-15
OWNER 21-15
PARFILE 21-15
QUERY 21-16
Restrictions When Using the QUERY Parameter 21-16
RECORDLENGTH 21-17
RESUMABLE 21-17
RESUMABLE_NAME 21-17
RESUMABLE_TIMEOUT 21-17
ROWS 21-18
STATISTICS 21-18
TABLES 21-18
Table Name Restrictions 21-19
TABLESPACES 21-19
TRANSPORT_TABLESPACE 21-20
TRIGGERS 21-20
TTS_FULL_CHECK 21-20
USERID (username/password) 21-20
VOLSIZE 21-21
Example Export Sessions 21-21
Example Export Session in Full Database Mode 21-21
Example Export Session in User Mode 21-22
Example Export Sessions in Table Mode 21-22
Example 1: DBA Exporting Tables for Two Users 21-23
Example 2: User Exports Tables That He Owns 21-23
Example 3: Using Pattern Matching to Export Various Tables 21-24
Example Export Session Using Partition-Level Export 21-24
Example 1: Exporting a Table Without Specifying a Partition 21-25
Example 2: Exporting a Table with a Specified Partition 21-25
Example 3: Exporting a Composite Partition 21-26
Warning, Error, and Completion Messages 21-26
Log File 21-26
Warning Messages 21-26
Nonrecoverable Error Messages 21-27
Completion Messages 21-27
Exit Codes for Inspection and Display 21-27
Conventional Path Export Versus Direct Path Export 21-28
Invoking a Direct Path Export 21-28
Security Considerations for Direct Path Exports 21-28
Performance Considerations for Direct Path Exports 21-29
Restrictions for Direct Path Exports 21-29
Network Considerations 21-29
Transporting Export Files Across a Network 21-29
Exporting with Oracle Net 21-29
Character Set and Globalization Support Considerations 21-30
User Data 21-30
Effect of Character Set Sorting Order on Conversions 21-30
Data Definition Language (DDL) 21-30
Single-Byte Character Sets and Export and Import 21-31
Multibyte Character Sets and Export and Import 21-31
Using Instance Affinity with Export and Import 21-31
Considerations When Exporting Database Objects 21-32
Exporting Sequences 21-32
Exporting LONG and LOB Datatypes 21-32
Exporting Foreign Function Libraries 21-32
Exporting Offline Locally Managed Tablespaces 21-32
Exporting Directory Aliases 21-32
Exporting BFILE Columns and Attributes 21-33
Exporting External Tables 21-33
Exporting Object Type Definitions 21-33
Exporting Nested Tables 21-33
Exporting Advanced Queue (AQ) Tables 21-33
Exporting Synonyms 21-34
Possible Export Errors Related to Java Synonyms 21-34
Support for Fine-Grained Access Control 21-34
Transportable Tablespaces 21-34
Exporting From a Read-Only Database 21-35
Using Export and Import to Partition a Database Migration 21-35
Advantages of Partitioning a Migration 21-35
Disadvantages of Partitioning a Migration 21-36
How to Use Export and Import to Partition a Database Migration 21-36
Using Different Releases of Export and Import 21-36
Restrictions When Using Different Releases of Export and Import 21-36
Examples of Using Different Releases of Export and Import 21-37
22 Original Import
What Is the Import Utility? 22-2
Table Objects: Order of Import 22-2
Before Using Import 22-2
Running catexp.sql or catalog.sql 22-2
Verifying Access Privileges for Import Operations 22-3
Importing Objects Into Your Own Schema 22-3
Importing Grants 22-4
Importing Objects Into Other Schemas 22-4
Importing System Objects 22-4
Processing Restrictions 22-4
Importing into Existing Tables 22-5
Manually Creating Tables Before Importing Data 22-5
Disabling Referential Constraints 22-5
Manually Ordering the Import 22-6
Effect of Schema and Database Triggers on Import Operations 22-6
Invoking Import 22-6
Command-Line Entries 22-7
Parameter Files 22-7
Interactive Mode 22-8
Invoking Import As SYSDBA 22-8
Getting Online Help 22-8
Import Modes 22-8
Import Parameters 22-11
BUFFER 22-11
COMMIT 22-12
COMPILE 22-12
CONSTRAINTS 22-12
DATA_ONLY 22-12
DATAFILES 22-13
DESTROY 22-13
FEEDBACK 22-13
FILE 22-13
FILESIZE 22-13
FROMUSER 22-14
FULL 22-14
Points to Consider for Full Database Exports and Imports 22-14
GRANTS 22-15
HELP 22-16
IGNORE 22-16
INDEXES 22-16
INDEXFILE 22-17
LOG 22-17
PARFILE 22-17
RECORDLENGTH 22-17
RESUMABLE 22-18
RESUMABLE_NAME 22-18
RESUMABLE_TIMEOUT 22-18
ROWS 22-18
SHOW 22-18
SKIP_UNUSABLE_INDEXES 22-19
STATISTICS 22-19
STREAMS_CONFIGURATION 22-20
STREAMS_INSTANTIATION 22-20
TABLES 22-20
Table Name Restrictions 22-21
TABLESPACES 22-22
TOID_NOVALIDATE 22-22
TOUSER 22-23
TRANSPORT_TABLESPACE 22-24
TTS_OWNERS 22-24
USERID (username/password) 22-24
VOLSIZE 22-24
Example Import Sessions 22-24
Example Import of Selected Tables for a Specific User 22-25
Example Import of Tables Exported by Another User 22-25
Example Import of Tables from One User to Another 22-26
Example Import Session Using Partition-Level Import 22-26
Example 1: A Partition-Level Import 22-26
Example 2: A Partition-Level Import of a Composite Partitioned Table 22-27
Example 3: Repartitioning a Table on a Different Column 22-28
Example Import Using Pattern Matching to Import Various Tables 22-29
Exit Codes for Inspection and Display 22-30
Error Handling During an Import 22-30