Important Views
List all V$ views:
SELECT view_name from v$fixed_view_definition
WHERE view_name LIKE 'V$%'
ORDER BY view_name;
|
|
V$FILESTAT
|
Statistics for tuning I/O?
Which view would you query to monitor I/O contention for datafiles?
|
V$SYSTEM_EVENT
|
Total waits for "buffer busy waits" since instance startup?
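As a sketch, the cumulative waits for a single event could be checked with:

```sql
-- Total waits for one event since instance startup
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event = 'buffer busy waits';
```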
|
V$LOCK,
V$LOCKED_OBJECT
|
Which two views could you query to determine lock
contention on the EMP table?
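A minimal sketch of such a check, joining V$LOCKED_OBJECT to DBA_OBJECTS (the EMP table name is taken from the question):

```sql
-- Sessions holding locks on the EMP table
SELECT l.session_id, l.oracle_username, l.locked_mode, o.object_name
FROM   v$locked_object l, dba_objects o
WHERE  l.object_id = o.object_id
AND    o.object_name = 'EMP';
```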
|
V$DB_OBJECT_CACHE
|
Which view can be queried to check the amount of sharable
memory used by a cached object?
|
V$SYSSTAT
|
Statistics for Redo Log Buffer
|
V$SYSTEM_EVENT,
V$SESSION_WAIT
V$SESSION_EVENT
|
Which three views should you use to determine if a process
has waited for a resource?
|
V$DATAFILE_HEADER
|
Which data dictionary view would you query to determine if a
datafile is in backup mode?
|
V$SESSION_LONGOPS
|
Which data dictionary view can you query to check the progress
of a backup?
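A sketch of a progress query against V$SESSION_LONGOPS; backup operations report their progress in the SOFAR and TOTALWORK columns:

```sql
-- Percentage completed for long-running operations such as backups
SELECT sid, opname, sofar, totalwork,
       ROUND(sofar / totalwork * 100, 2) pct_done
FROM   v$session_longops
WHERE  totalwork > 0;
```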
|
V$BACKUP
|
To see which files are in backup mode (ALTER TABLESPACE ... BEGIN BACKUP)
|
Oracle 8 Database Administration
analyze table dept validate structure cascade;
alter system enable restricted session;
alter tablespace TAB nologging add datafile '....' ....;
When the instance is shut down
DBA_EXTENTS
OWNER VARCHAR2(30)
SEGMENT_NAME VARCHAR2(81)
PARTITION_NAME VARCHAR2(30)
SEGMENT_TYPE VARCHAR2(17)
TABLESPACE_NAME VARCHAR2(30)
EXTENT_ID NOT NULL NUMBER
FILE_ID NOT NULL NUMBER
BLOCK_ID NOT NULL NUMBER
BYTES NUMBER
BLOCKS NOT NULL NUMBER
RELATIVE_FNO NOT NULL NUMBER
alter user scott account unlock;
alter user scott account lock;
Only with TO_CHAR and TO_DATE
V$CONTROLFILE and V$SYSTEM_PARAMETER
VARRAY
The Server places locks on the data
Value greater than or equal to INITIAL * MINEXTENTS
Unlimited quota
alter system set resource_limit=true;
raising PCTUSED
CREATE SCHEMA
- Each segment in a database is created with at least one extent.
- Frequent de-allocation of extents can lead to a fragmented tablespace.
Either before or after a COMMIT is issued
Minimal information is required to record the changes in the redo logs;
flushing of dirty buffers via the LRU list is deferred.
Writes to the datafiles are deferred to reduce disk contention and I/O.
Oracle's Fast COMMIT allows LGWR to record only the changes and SCN in the redo log
files. The size of the transaction does not affect the time required to write to the redo
log files. These writes are sequential and faster than writing entire blocks to the
datafiles. Writes to the datafiles occur independently of the COMMIT.
SHOW SGA
SHOW PARAMETER
ALTER SYSTEM SET SORT_AREA_SIZE=163840 DEFERRED;
What effect would this command have on the SORT_AREA_SIZE
initialization parameter?
It will be set to 163840 for all future sessions.
TRANSACTIONAL
To use operating system authentication
To allow users to connect to the database using O/S authentication, set
up the users in the operating system and set the REMOTE_LOGIN_PASSWORDFILE initialization
parameter to NONE. To connect to the database using the SYSDBA role, log in to the
operating system and connect to the database using this connect string: CONNECT / AS
SYSDBA
Database configurations can be stored in the OEM registry and not as
external files.
NORMAL
To detect block corruption errors
V$PARAMETER, V$CONTROLFILE, SHOW PARAMETER
V$SESSION, V$INSTANCE, V$PARAMETER
When an instance is started in NOMOUNT state, you can only access the
views that read data from the SGA. V$PARAMETER, V$SESSION, and V$INSTANCE read from
memory; V$DATABASE, V$DATAFILE and V$LOGFILE all read data from the control file. The
database must be mounted for the control file to be opened. The data dictionary is only
accessible when the database is open.
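A quick way to verify this behaviour in a test instance (a sketch, not a required procedure):

```sql
STARTUP NOMOUNT
SELECT status FROM v$instance;   -- works: reads from the SGA
SELECT name FROM v$datafile;     -- fails: the control file is not yet open
```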
SHUTDOWN TRANSACTIONAL;
An instance would start, but the database would not be mounted.
When a database is created, the database name is specified in the
CREATE DATABASE command and it must match the DB_NAME parameter. The database will not
mount if the database name recorded in the control file does not match the database name
in the STARTUP command.
CREATE DATABASE prod
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXDATAFILES 100
MAXLOGHISTORY 100
LOGFILE
GROUP 1 '/disk3/log1a.rdo, /disk4/log1b.rdo',
GROUP 2 '/disk3/log2a.rdo, /disk4/log2b.rdo' REUSE
DATAFILE '/disk2/system01.dbf' SIZE 50M;
Which line will return an error under any condition?
==> GROUP 1 '/disk3/log1a.rdo, /disk4/log1b.rdo', (neither SIZE nor REUSE is specified for this group)
For any datafiles or redo log files that already exist, use the REUSE
option in the CREATE DATABASE command. If the file does not exist, you must use the SIZE
option to specify the size of the file to be created either in kilobytes (K) or megabytes
(M).
Directly below the Oracle directory
Separate objects by lifespans to minimize fragmentation.
Shut down the instance, delete any files created by the CREATE DATABASE
command, and change the SIZE options to REUSE in the LOGFILE parameter of the CREATE
DATABASE command.
To reallocate the space needed for the parameter
CATPROC.SQL
The data dictionary views categories:
ALL_ |
Accessible to all users and each view has an OWNER
column, providing information on objects accessible by the user |
DBA_ |
Accessible to the DBA or anyone with the SELECT ANY
TABLE privilege, providing information on all the objects in the database and has an
OWNER column |
USER_ |
Accessible to all users, providing information on
objects owned by the user |
The DICTIONARY and DICT_COLUMNS data
dictionary views provide an overview for all the data dictionary views.
dbms*.sql defines the database package specifications
cat*.sql provides catalog and data dictionary information
prvt*.plb provides wrapped package code
utl*.sql creates views and tables for database utilities.
If errors occur during compilation of objects, the STATUS column in
DBA_OBJECTS is set to INVALID. Compiled objects that are available for use are
marked VALID. In this example, the view would be marked as INVALID.
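The invalid objects can be listed with a query along these lines:

```sql
-- Objects that failed compilation
SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  status = 'INVALID';
```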
SQL> select type, records_total from v$controlfile_record_section;

TYPE              RECORDS_TOTAL
----------------- -------------
DATABASE                      1
CKPT PROGRESS                 4
REDO THREAD                   4
REDO LOG                     62
DATAFILE                    256
FILENAME                    567
TABLESPACE                  256
RESERVED1                   256
RESERVED2                     1
LOG HISTORY                1696
OFFLINE RANGE               290
ARCHIVED LOG               1603
BACKUP SET                  305
BACKUP PIECE                514
BACKUP DATAFILE             526
BACKUP REDOLOG              160
DATAFILE COPY               512
BACKUP CORRUPTION           277
COPY CORRUPTION             305
DELETED OBJECT             1628
PROXY COPY                  640
8
Reinitialize the log files in Group 2 with the ALTER DATABASE
CLEAR LOGFILE command.
You can clear a group if all its members become corrupt by issuing the ALTER DATABASE
CLEAR LOGFILE command. This command can be used even if there are only two redo log
groups. If you need to clear a redo log group that has not yet been archived, you
must include the UNARCHIVED keyword in the command.
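For example (group number taken from the scenario above):

```sql
-- Clear a corrupt group; UNARCHIVED is required if the group was never archived
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;
```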
After LGWR writes 25 operating system blocks
- You are tuning the PROD database and need to determine the sequence number of
the current redo log group. Which view would you query to display the
number of online redo log groups?
select SEQUENCE# from v$thread;
It is in use
V$LOGFILE displays each group, each member, and the status of each
member. The values of the STATUS column can be: INVALID - inaccessible, STALE -
incomplete, DELETED - no longer being used, NULL - in use
Complete recovery is possible even after the redo logs have been
overwritten using
the archived redolog files
select LOG_MODE from v$database;
The group is no longer needed for instance recovery.
After the group is archived, issue the ALTER SYSTEM
SWITCH LOGFILE command then the ALTER DATABASE DROP LOGFILE MEMBER command.
To drop a redo log member, use the ALTER DATABASE DROP LOGFILE MEMBER command. If the
database is in ARCHIVELOG mode, you cannot drop an online member until it has
been archived. If the group is active, you must force a log switch before
dropping one of its members. You cannot drop the last valid member of a group. Dropping a
member does not remove its O/S file.
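A sketch of the sequence (the member filename is hypothetical):

```sql
ALTER SYSTEM SWITCH LOGFILE;                            -- force the group inactive
ALTER DATABASE DROP LOGFILE MEMBER '/disk4/log2b.rdo';  -- then drop the member
-- Remove the O/S file manually afterwards.
```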
The syntax to add a redo log group:
ALTER DATABASE prod
ADD LOGFILE GROUP 2 ('/disk2/log2a.rdo',
'/disk3/log3b.rdo') SIZE 1M;
If you specify the GROUP option, you must use an integer value.
If you do not use the GROUP option, the Oracle Server will automatically generate
a value. Use the SIZE option if you are creating a new file. Use the REUSE
option if the file already exists.
Reset internal file pointers in the control file.
50 (This is very bad, set it to 0)
Use the ALTER TABLESPACE command to change the MINIMUM EXTENTS value or
the DEFAULT STORAGE parameters for a tablespace.
Use the RESIZE option with the ALTER DATABASE DATAFILE
command.
Issue the ALTER TABLESPACE ONLINE command then the ALTER TABLESPACE
READ ONLY command.
When you make a tablespace read-only, all the datafiles must be
online and the tablespace can have no pending transactions or active rollback
segments. To make a read-write tablespace read-only, use this command: ALTER TABLESPACE
tablespace_name READ ONLY; To make a read-only tablespace read-write, use this command:
ALTER TABLESPACE tablespace_name READ WRITE;
CREATE TABLESPACE user_data
DATAFILE '/disk2/us_data01.dbf' SIZE 10M
MINIMUM EXTENT 500K
DEFAULT STORAGE
(INITIAL 100K NEXT 100K MAXEXTENTS 500 PCTINCREASE 0);
Why will this statement fail?
INITIAL and NEXT should be a multiple of MINIMUM EXTENT.
When MINIMUM EXTENT is specified, every extent allocated in the
tablespace should be a multiple of MINIMUM EXTENT. In this case, NEXT and INITIAL should
be multiples of 500K.
The default value of the NEXT parameter is 5 data
blocks. In this case, the size of the second extent allocated can be determined
by: 5 * DB_BLOCK_SIZE = 10K
When the tablespace is the SYSTEM tablespace
A syntax error would be returned.
The minimum INITIAL value allowed is: 2 *
DB_BLOCK_SIZE
Datafile and OS-Blocks
The physical structure of a database includes O/S blocks,
datafiles,
control files, and online redo log files.
ALTER TABLESPACE MINIMUM EXTENT
Use the ALTER TABLESPACE command to change the MINIMUM EXTENT value or
the DEFAULT STORAGE parameters for a tablespace.
ALTER TABLESPACE data01 COALESCE;
Which background process performs the same task?
Coalescing merges neighboring free extents. This is automatically done
by the SMON background process if the PCTINCREASE value for the
tablespace is greater than zero. You can manually coalesce a tablespace by using this
command: ALTER TABLESPACE tablespace_name COALESCE;
Query DBA_FREE_SPACE_COALESCED to determine if a tablespace needs to be coalesced.
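For example:

```sql
-- How thoroughly the free extents in each tablespace are coalesced
SELECT tablespace_name, percent_extents_coalesced
FROM   dba_free_space_coalesced;
```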
A block is removed from the free list when its free space falls
below the PCTFREE threshold. It is returned to the free list when the used space falls
below PCTUSED.
An index can be partitioned to minimize disk
contention while decreasing the amount of time required for queries.
ALTER ROLLBACK SEGMENT rbs01 SHRINK TO 4M;
What would be the result if the current size of RBS01 is 3M?
The command is ignored.
The ALTER ROLLBACK SEGMENT command with the SHRINK option will shrink a
rollback segment. If no value is specified, the Oracle Server will attempt to shrink to
the OPTIMAL value. To shrink a rollback segment to a specific size, use this command:
ALTER ROLLBACK SEGMENT SHRINK TO integer K/M;
If the SHRINK TO size or OPTIMAL is greater than the current size of the rollback
segment, the command is ignored.
ALTER ROLLBACK SEGMENT rbs01 OFFLINE;
What will happen if RBS01 has pending transactions when the command is
issued?
The segment is taken offline when all transactions are complete.
A rollback segment cannot be taken offline if it has pending
transactions. If you use the ALTER ROLLBACK SEGMENT command to take it offline before the
pending transactions end, the rollback segment will be marked as 'PENDING OFFLINE'. The
rollback segment will not be taken offline until the transactions complete.
Rollback segments store the before image when changes
are made to data in the database. A transaction can only be assigned to one rollback
segment, however a rollback segment can store images for multiple concurrent
transactions.
The SYSTEM rollback segments record changes made to objects in the
SYSTEM tablespace.
Private rollback segments are acquired by an instance when they are named in the
parameter file.
Public rollback segments are in the pool of rollback segments.
Deferred rollback segments are created by the Oracle Server when a
transaction against a tablespace that has been taken offline must be rolled back;
the rollback entries are deferred until the tablespace is back online. The Oracle
Server drops these segments when they are no longer needed.
Use larger extent sizes.
Create rollback segments with higher OPTIMAL values.
Create rollback segments with high MINEXTENTS values
A snapshot too old error occurs when the Oracle Server cannot create a
read consistent image of the data when a query is executed. The error is returned when a
query is issued on a row with a lock, but before the query is finished the transaction
commits and the rollback segment is overwritten. Reduce the chances of snapshot too old
errors by creating rollback segments with high MINEXTENTS values, larger extent sizes,
and high OPTIMAL values.
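A sketch of a rollback segment created along these guidelines (names and sizes are illustrative):

```sql
CREATE ROLLBACK SEGMENT rbs_big
TABLESPACE rbs
STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 20 OPTIMAL 20M);
```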
20
MINEXTENTS for a rollback segment should be greater than or equal to
two. Oracle recommends a value of 20 to reduce the likelihood of
extension. OPTIMAL should be greater than or equal to MINEXTENTS. PCTINCREASE
cannot be specified and should be set to zero. INITIAL should always be equal to NEXT.
All the extents allocated to a rollback segment should be the same size. Place rollback
segments in tablespaces that do not contain data to minimize I/O contention and
fragmentation.
Before you can take a tablespace with active rollback segments
offline:
Query DBA_ROLLBACK_SEGS to determine which rollback
segments are in the tablespace and take the rollback segments offline.
Query V$TRANSACTION to determine the transactions
currently using the rollback segments.
Query V$SESSION to determine the user with the pending
transaction and terminate their session.
DB_BLOCK_SIZE+(multiple of SORT_AREA_SIZE)
Temporary segments are based on the default storage parameters of the
tablespace. For the TEMPORARY tablespace, INITIAL and NEXT should be equal to each other
and a multiple of SORT_AREA_SIZE plus DB_BLOCK_SIZE. PCTINCREASE should always be equal
to zero.
Several transactions can simultaneously share the same sort
segment.
To determine the number of extents allocated to a TEMPORARY tablespace
containing a sort segment
V$SESSION
V$SORT_USAGE
Join V$SESSION and V$SORT_USAGE to display the username, session
address, tablespace, contents, extents, and blocks for active sorts.
V$SORT_SEGMENT
KEEP
To deallocate all unused space in a table above the high water
mark, use this command:
ALTER TABLE table DEALLOCATE UNUSED;
To deallocate unused space while keeping a specified number of bytes
above the high water mark, use this command:
ALTER TABLE table DEALLOCATE UNUSED KEEP integer K/M;
If the size specified in the KEEP option is below MINEXTENTS,
MINEXTENTS will be kept.
TRUNCATE TABLE <schema.table_name> DROP STORAGE;
The ROWID_RELATIVE_FNO function in the DBMS_ROWID
package returns the relative file number of the file that stores the row. The
ROWID_BLOCK_NUMBER function returns the number of the block, within that
relative file, that stores the row. The ROWID_ROW_NUMBER
function returns the row number within the block for a given ROWID
value.
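A sketch using these functions (the EMP table is illustrative):

```sql
-- Decompose each row's ROWID into file, block, and row number
SELECT DBMS_ROWID.ROWID_RELATIVE_FNO(rowid) rel_fno,
       DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid) block_no,
       DBMS_ROWID.ROWID_ROW_NUMBER(rowid)   row_no
FROM   emp;
```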
Increase PCTFREE
If PCTFREE is set too low, there may not be enough room in the data
blocks to update existing rows. When there is insufficient space in the data block, the
row is moved to a new data block and the old block stores a pointer to the new location.
Row migration slows table scans because two blocks are scanned for one row.
3 (1 byte for column lengths <= 250 bytes, 3 bytes for column lengths > 250 bytes)
When row data is stored in a data block, the column length is recorded
as 1 byte for column lengths of 250 bytes or less, or as 3 bytes for column
lengths greater than 250 bytes.
REF
The REF datatype is a relationship type that stores a pointer to the
data in another table, rather than storing the actual data.
ANALYZE TABLE orders ESTIMATE STATISTICS;
How many rows will be sampled if the ORDERS table contains 5000
rows?
1064
To detect row migration or chaining in a table, use the ANALYZE TABLE
command. The COMPUTE STATISTICS option will return exact statistics because it analyzes
all of the rows in the table. The ESTIMATE STATISTICS option will compute
statistics based on 1064 rows. You can use the ESTIMATE STATISTICS option with a
sample integer or percent to specify the number or percentage of rows that are
scanned.
70 Blocks (INITRANS Index > INITRANS Table)
The high water mark in a table is the last block ever used by
the table. The high water mark is not reset when data is deleted. When a full table scan
is performed, each block is read up to its high water mark.
Higher
Because index entries are small, there are usually more entries per
block so the INITRANS value should be higher than the table.
The TRUNCATE command deletes all the rows in the table and releases all
the unused space. The corresponding indexes will also be truncated. If
the REUSE option is used with the TRUNCATE command, the data is deleted from the table
and its corresponding indexes, but no space is deallocated. If the DROP option is used
with the TRUNCATE command, the extents specified by MINEXTENTS are retained. A table
cannot be truncated that is referenced by a foreign key.
Partitioned indexes allow an index to be spread across multiple
tablespaces.
DBA_INDEXES
To perform a parallel direct-load insert, use the PARALLEL hint in the
INSERT INTO SELECT command. The parallel direct-load insert uses parallel query slaves to
insert the data. The data is written to temporary segments until the transaction
commits.
The order of import:
1. table definitions
2. table data
3. indexes on the table
4. integrity constraints, triggers, and bitmap indexes
The LOAD parameter in the SQL*Loader command specifies
the number of records to load. This value does not include any records that are
skipped.
Other rows in the same table can be concurrently modified by other
users.
Data can be quickly copied from one table into another within the same database because
it
bypasses the buffer cache.
Use the INSERT INTO SELECT command to perform a direct-load insert.
This command can be used to copy data from one table to another table in the same
database. The buffer cache is bypassed to speed up the insert. During a
direct-load insert, users can be concurrently modifying existing data in the table. Data
is loaded above the high water mark.
The QUOTA clause defines the amount of space in bytes that a
user can use to create and maintain objects in a specified tablespace. The
default value is no space and the maximum value is UNLIMITED.
ALTER USER bill QUOTA 0 ON data01;
If a 20M table exists in the DATA01 tablespace owned by user BILL,
which effect
will this command have ?
The objects created by user BILL in the DATA01 tablespace will remain,
but no new extents can be allocated to the objects.
USER_TS_QUOTAS
USER_TS_QUOTAS displays the names of tablespaces that
the user has a QUOTA on, the number of bytes and blocks used, and the maximum number of
bytes and blocks allowed.
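For example:

```sql
-- MAX_BYTES of -1 indicates an UNLIMITED quota
SELECT tablespace_name, bytes, max_bytes
FROM   user_ts_quotas;
```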
When assigning a COMPOSITE_LIMIT to a profile, Oracle calculates a
weighted sum for these resource limits:
PRIVATE_SGA
CPU_PER_SESSION
CONNECT_TIME
LOGICAL_READS_PER_SESSION
To specify the weights assigned to each limit, use the ALTER RESOURCE COST command.
ALTER PROFILE juan LIMIT
CPU_PER_SESSION 3600 (1/100 Sec)
IDLE_TIME 30; (Min)
How much time will be allowed in the next session for user JUAN?
36 seconds of cpu time and 30 minutes of inactivity
CPU_PER_SESSION is expressed in hundredths of seconds and IDLE_TIME is
expressed in minutes
After the VERIFY_FUNCTION is enabled and the DEFAULT profile is
altered, a user must change their password using these guidelines:
1. The new password must differ from the old password by more than three characters.
2. The password must have at least 4 characters.
3. The password cannot be equal to the username.
4. The password must have at least one alpha, one numeric, and one special character.
To disable resource limits, set the RESOURCE_LIMIT parameter to
FALSE. To enable the enforcement of resource limits for the current instance,
use the ALTER SYSTEM command to set the RESOURCE_LIMIT initialization parameter to TRUE.
To enable or disable the enforcement of resource limits for future sessions, alter the
RESOURCE_LIMIT initialization parameter in the parameter file.
DBA_USERS
When the PASSWORD_REUSE_MAX profile parameter is set to UNLIMITED, a user can
never reuse the same password.
Object privileges are granted WITH GRANT OPTION. When
object privileges are revoked, the revoke cascades.
-
Which system privilege do you need to grant a system
privilege to another user?
System privileges are granted WITH ADMIN OPTION.
When system privileges are revoked, the revoke does not cascade.
-
Which privilege is required to TRUNCATE a table owned by another
user?
DROP ANY TABLE
To define a default role for a user, you must grant the role to the
user with the GRANT command. By default, all the roles granted to the user will be
enabled when the user connects unless the ALTER USER DEFAULT ROLE
command is used to limit the default roles.
ALTER USER jennifer
DEFAULT ROLE ALL;
Which task would this accomplish?
Set all roles as default for user JENNIFER
The NLS_SORT parameter can be used with the NLS_UPPER, NLS_LOWER,
NLS_INITCAP, and NLSSORT functions.
-
NLS Views
NLS_SESSION_PARAMETERS: This View lists NLS parameters of the user
session.
NLS_DATABASE_PARAMETERS: This View lists permanent NLS parameters of the database
NLS_INSTANCE_PARAMETERS: This View lists NLS parameters of the instance.
SQL> select * from nls_session_parameters;

PARAMETER                      VALUE
------------------------------ ------------------------------
NLS_LANGUAGE                   AMERICAN
NLS_TERRITORY                  AMERICA
NLS_CURRENCY                   $
NLS_ISO_CURRENCY               AMERICA
NLS_NUMERIC_CHARACTERS         .,
NLS_CALENDAR                   GREGORIAN
NLS_DATE_FORMAT                DD-MON-YY
NLS_DATE_LANGUAGE              AMERICAN
NLS_SORT                       BINARY
NLS_TIME_FORMAT                HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT           DD-MON-YY HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZH:TZM
NLS_TIMESTAMP_TZ_FORMAT        DD-MON-YY HH.MI.SSXFF AM TZH:T
NLS_DUAL_CURRENCY              $
NLS_COMP
SQL> select * from nls_database_parameters;

PARAMETER                      VALUE
------------------------------ ------------------------------
NLS_LANGUAGE                   AMERICAN
NLS_TERRITORY                  AMERICA
NLS_CURRENCY                   $
NLS_ISO_CURRENCY               AMERICA
NLS_NUMERIC_CHARACTERS         .,
NLS_CHARACTERSET               WE8ISO8859P1
NLS_CALENDAR                   GREGORIAN
NLS_DATE_FORMAT                DD-MON-YY
NLS_DATE_LANGUAGE              AMERICAN
NLS_SORT                       BINARY
NLS_TIME_FORMAT                HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT           DD-MON-YY HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZH:TZM
NLS_TIMESTAMP_TZ_FORMAT        DD-MON-YY HH.MI.SSXFF AM TZH:T
NLS_DUAL_CURRENCY              $
NLS_COMP
NLS_NCHAR_CHARACTERSET         WE8ISO8859P1
NLS_RDBMS_VERSION              8.1.5.0.0
Oracle 8 Database Tuning
SYS Option
Statement Level
Limits the number of blocks cached for each table.
- The Shared Pool is too small
- Data Segments are being written to the SYSTEM tablespace
- Sort Segments are being written to the SYSTEM tablespace
Transaction activity of the database during normal processing.
TRUNCATE
      number of blocks that contain rows
100 * -------------------------------------------
      number of blocks below the high-water mark
For non-empty tables, which value should be your target?
100
- The database is using Parallel Query Option
- The application is performing many full table scans.
SQL> desc v$db_object_cache;
Name                            Null?    Type
------------------------------- -------- ----
OWNER                                    VARCHAR2(64)
NAME                                     VARCHAR2(1000)
DB_LINK                                  VARCHAR2(64)
NAMESPACE                                VARCHAR2(15)
TYPE                                     VARCHAR2(24)
SHARABLE_MEM                             NUMBER
LOADS                                    NUMBER
EXECUTIONS                               NUMBER
LOCKS                                    NUMBER
PINS                                     NUMBER
KEPT                                     VARCHAR2(3)
-
Your database is installed on a multi-CPU system. If the
Utlbstat/Utlestat utility shows contention for the redo copy latch, which action should
you take?
Increase the value of the LOG_SIMULTANEOUS_COPIES parameter. This
Parameter specifies the number of redo buffer copy latches. For Single CPU Machines set
it to 0, for Multi CPU Machines set it to 2 x Number of CPUs.
-
Your system has limited physical memory. You have an application that
uses many large sorts. Which parameter can be set to improve the performance of these
large sorts?
SORT_DIRECT_WRITES = True/False/Auto (Bypass Buffer Cache)
Set it to TRUE if memory is abundant. This parameter controls
whether sort data bypasses the buffer cache when writing intermediate sort runs to
disk.
0
ALTER INDEX REBUILD ...
Alert Log File
User Process Trace File
Query the V$RECENT_BUCKET view
Set the Parameter DB_BLOCK_LRU_STATISTICS to TRUE
V$LOCK (Lock currently held by the Oracle Server)
V$LOCKED_OBJECT (Lists all locks acquired by every transaction on the system)
MAXEXTENTS has been set too low
Tablespace for Rollback Segments too small
Limits the number of blocks cached for each table
8 (Good Performance set it to 2 x Number of CPU's)
SORT_AREA_SIZE=<SIZE> (Increasing the size improves the
efficiency of large sorts)
SORT_DIRECT_WRITES=TRUE (Can improve sort performance when memory is abundant)
V$SYSTEM_EVENT (Total waits on an event since instance startup)
Modify the DB_BLOCK_LRU_LATCHES parameter
V$FILESTAT (I/O statistics; V$LOADSTAT is for SQL*Loader)
Redo Log Buffer, select CLASS = 2
Buffered Data
Hash Join
Recreate the table increasing the number of freelists.
Increase the value of the LOG_SIMULTANEOUS_COPIES parameter.
The MAX_DUMP_FILE_SIZE parameter value is specified in O/S blocks and
specifically limits the size of user trace files.
The V$EVENT_NAME view lists all event names and their
associated parameters.
SQL> select event#, substr(name,1,50) from v$event_name
where rownum < 10;
EVENT# SUBSTR(NAME,1,50)
--------- ---------------------------------------------
0 Null event
1 latch activity
2 latch free
3 free process state object
4 pmon timer
5 inactive session
V$SESSION_EVENT
SELECT event, total_waits, total_timeouts,
time_waited, average_wait
FROM v$system_event;
Buffer Busy Waits
A user has complained about the time it is taking to execute an
application process.
Active session's last wait time
V$SYSTEM_EVENT
A latch hit ratio of 99% is optimal. Latch contention
should always be 1% or less. To resolve redo latch contention, increase
LOG_SIMULTANEOUS_COPIES and decrease LOG_SMALL_ENTRY_MAX_SIZE.
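The ratio can be checked per latch with a query of this form:

```sql
-- Willing-to-wait latch hit ratio; contention shows as a ratio below 99%
SELECT name, gets, misses,
       ROUND((1 - misses / gets) * 100, 2) hit_ratio
FROM   v$latch
WHERE  name IN ('redo allocation', 'redo copy')
AND    gets > 0;
```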
HIT_RATIO
The V$SYSSTAT view provides basic instance statistics
cumulated since instance start up.
-
Which view could be queried to determine the type of user
connection?
SELECT sid, username, type, server
FROM v$session;
SELECT sid, username, type, server
FROM v$session;
      SID USERNAME                       TYPE       SERVER
--------- ------------------------------ ---------- ---------
        1                                BACKGROUND DEDICATED
-
You have a hybrid system processing reports at night. What can you do
to handle heavy transaction processing during the day and report processing at
night?
Use many small rollback segments during the day and fewer large
rollback segments at night.
The segments not being used for reporting should remain offline until they are
needed.
Reset the parameters for day and night using a separate pfile.
SQL Trace can be set at instance or session level and
provides detailed information regarding the execution of a SQL statement.
When DISTINCT is used, the data is sorted. If the
statement selects a large number of rows, the server may generate temporary segments to
handle the sort runs. This process incurs I/O and processing overhead, and should
be avoided.
Hash Join
Information about entries for attributes in the fact
table. Lookup tables are small tables joined to the fact table using a PK-FK
join.
Performance and Resource Usage
DBMS_APPLICATION_INFO contains procedures needed to register and then
track a module. When an application calls these procedures, they record and read
information into and from the V$SESSION and V$SQLAREA views.
Statements including group functions impact performance. Use a WHERE
clause to exclude unwanted rows. HAVING should only be used with an aggregate condition.
Inappropriate use of the HAVING clause increases response time.
READ_MODULE reads the last module and action names set
by SET_ACTION or SET_MODULE. READ_CLIENT_INFO reads the last client
information set for the session.
DB_BLOCK_CHECKSUM
LOG_BLOCK_CHECKSUM
PRE_PAGE_SGA
Instance start up time is increased using this parameter, but it can
reduce the number of page faults that occur. The reduction in page faults allows the
instance to reach its maximum performance capability more quickly.
The number of object definitions that have been aged
out of the library cache for lack of space.
The number of executions of SQL statements or
procedures in the library cache.
V$SESSTAT
V$STATNAME
SELECT SUM(value) || 'bytes' "Total session memory"
FROM v$sesstat, v$statname
WHERE name = 'session uga memory'
AND v$sesstat.statistic# = v$statname.statistic#;
Pinning sequences in the shared pool prevents sequence
numbers from being lost, which normally happens when a cached sequence is aged out.
SELECT SUM(sharable_mem)
FROM v$db_object_cache
WHERE type = 'PACKAGE' OR type = 'PACKAGE BODY' OR
type = 'FUNCTION' OR type = 'PROCEDURE';
Oracle recommends allowing about 250 bytes in the
shared pool per user per open cursor. This can be tested during peak times using this
query:
SELECT SUM(250 * users_opening)
FROM v$sqlarea;
SHARED_POOL_SIZE
DB_BLOCK_BUFFERS
Determine the size of each buffer pool by subtracting the number of
buffers for each buffer pool from the total number of buffers defined for the entire
buffer cache. If this constraint is violated, the database will not mount.
The buffer cache holds copies of the data blocks from the datafiles.
The DB_BLOCK_BUFFERS parameter specifies the number of blocks in the buffer
cache.
Cache Hint Clause: Using this method causes the Oracle
server to place the table blocks at the most recently used end of the LRU list.
V$SYSSTAT
V$RECENT_BUCKET
Number of additional cache hits gained by adding additional buffer
cache blocks
Total number of buffer cache blocks used by each object. The RECYCLE
buffer pool should be large enough for an object's blocks to be retained for the
duration of the transaction.
Retain as many objects in memory as you can.
When the server process is searching the LRU list, DBWR manages the
buffer cache by writing dirty blocks to the datafiles to ensure that there are free
blocks for server processes.
- When Dirty List exceeds threshold
- Independent of COMMIT
- No more free Blocks in LRU-List
when the dirty list exceeds its size threshold
V$CACHE is created with the catparr.sql script. It can
be viewed to monitor the number of buffer pool blocks by object and is intended for use
with Oracle Parallel Server.
Objects currently being cached
- Current block
- Read consistent block for rollback
The buffer cache may hold multiple copies of a single database block.
Only one current copy exists, but server processes may need to
reconstruct read-consistent copies using rollback information.
50 times the number of LRU latches
Number of buffers skipped to find a free buffer.
"Checkpoint not complete; unable to allocate file"
What does this indicate?
LGWR has waited for DBWR to finish.
The redo allocation latch controls the allocation of
space for redo entries in the redo log buffer. There is only one redo allocation
latch.
Small entries may be copied into the buffer while holding the redo
allocation latch; there is only one redo allocation latch per instance. The number of
CPUs only has an effect on the number of redo copy latches.
The LOG_BUFFER parameter needs to be increased.
A checkpoint occurs.
One member of the log group will be archived.
Stored objects such as database triggers and packages are part of the
data dictionary and should be stored in the SYSTEM tablespace.
To allocate extents explicitly, use this command:
ALTER TABLE tablename
ALLOCATE EXTENT (DATAFILE 'filename' SIZE sizeM);
If you do not explicitly set the temporary tablespace during user
creation, sorts occur in the SYSTEM tablespace.
DISK_ASYNCH_IO
TAPE_ASYNCH_IO
Indexes on volatile tables can present a performance problem and should
be monitored and rebuilt frequently.
Having enough contiguous storage space
DELETE FROM parts;
What will be the state of the high-water mark for the PARTS table after
this statement has completed?
Not changed. DELETE statements have no effect on the high-water
mark. The TRUNCATE statement must be issued to reset the high-water mark.
ANALYZE INDEX index_name VALIDATE STRUCTURE;
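After the VALIDATE STRUCTURE command above, the single-row INDEX_STATS view can be inspected (valid only for the current session); a sketch:

```sql
-- Sketch: INDEX_STATS holds the result of the last VALIDATE
-- STRUCTURE in this session; DEL_LF_ROWS shows deleted leaf entries.
SELECT name, height, lf_rows, del_lf_rows
FROM   index_stats;
```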
PCTFREE
Migration is caused by PCTFREE being set too low; there is not
enough room in the block for updates. All tables that are updated should
have PCTFREE set to allow enough space within the block for updates.
multiple of 5 * DB_FILE_MULTIBLOCK_READ_COUNT
Multiply by five, because Oracle tries to allocate blocks on five block
boundaries.
If you delete all entries for an index block, Oracle puts the
block back on the free list.
ANALYZE TABLE inventory.item COMPUTE STATISTICS;
Which column of the DBA_TABLES view can you query to see the number of
migrated rows in the inventory.item table?
CHAIN_CNT
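After the table has been analyzed, the count can be read from DBA_TABLES; a sketch:

```sql
-- Sketch: CHAIN_CNT counts both chained and migrated rows;
-- it is populated by ANALYZE.
SELECT chain_cnt
FROM   dba_tables
WHERE  owner = 'INVENTORY'
AND    table_name = 'ITEM';
```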
location of future inserts is changed
PCTUSED is relevant in tables that undergo deletes, but you
may be able to pack rows into blocks more tightly by setting PCTUSED to a value higher
than the default of 40%.
FORMULA: PCTUSED = 100 - PCTFREE - 100 * (upd - ins) / blocksize
where upd is the average amount added by updates in bytes and ins is the average initial
row length at insert.
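A worked example with illustrative numbers (PCTFREE = 10, upd = 200 bytes, ins = 100 bytes, 2048-byte blocks):

```sql
-- Sketch: PCTUSED = 100 - PCTFREE - 100 * (upd - ins) / blocksize
SELECT 100 - 10 - 100 * (200 - 100) / 2048 AS pctused
FROM   dual;   -- roughly 85
```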
The Sort Extent Pool (SEP) is located in the SGA. When
a process needs sort space, it looks for free extents in the SEP.
SORT_DIRECT_WRITES
The amount you allocate to direct write buffers should be less than
one-tenth (< 1/10) of SORT_AREA_SIZE when SORT_DIRECT_WRITES is set to TRUE.
EXTENT_HITS
Contention for rollback segment header blocks
The number of waits for any class should be less than
1% of the total number of requests.
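As a sketch (standard V$WAITSTAT / V$SYSSTAT names), rollback header waits can be compared against total block requests for the 1% rule:

```sql
-- Sketch: undo header waits indicate rollback segment header
-- contention.
SELECT class, count
FROM   v$waitstat
WHERE  class IN ('undo header', 'undo block',
                 'system undo header', 'system undo block');

-- Total data block requests for the 1% comparison.
SELECT SUM(value) AS total_requests
FROM   v$sysstat
WHERE  name IN ('db block gets', 'consistent gets');
```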
Running a large batch job
Before images of changed data
Setting MINEXTENTS to 20 makes it unlikely that the
rollback segment needs to obtain another extent due to the current extent being used by
an active transaction.
Querying V$SESSION and V$TRANSACTION
SELECT a.username, b.used_ublk,b.start_time
FROM v$transaction b, v$session a
WHERE b.addr = a.saddr;
A trace file is generated for a deadlock situation and
recorded in the USER_DUMP_DEST directory. The value for this directory is recorded in the
init.ora.
Transaction detecting the deadlock
Oracle will automatically roll back the statement that detects the
deadlock. The remainder of the transaction will have to be manually rolled back. Oracle
rolls back the statement and returns an ORA-00060, 'deadlock detected while waiting for
resource' error. The transaction is not rolled back and you should roll back the
remainder of the transaction, if necessary.
Statement B is rolled back and an error message is generated for the
transaction.
Free lists are maintained in the segment header. To
increase the number of free lists for an object, you must drop and recreate it using a
higher value for the FREELISTS keyword.
V$WAITSTAT
V$SYSTEM_EVENT
Number of times a process waited for a latch. Ideally, the percentage
of time for which there is latch contention should be less than 1%.
Tables with a high number of inserts. When an insert operation
occurs, the free list is used to determine the availability of blocks for inserts. Server
processes can contend for the same free list if many inserts are occurring.
The high water mark has changed.
Server processes delete rows and space falls below PCTUSED.
LATCH1 1
LATCH2 0.98
LATCH3 0.75 <====
LATCH4 0.05 <====
LATCH5 0 <====
How many latches are displaying latch contention?
3
Ideally, the percentage of time for which there is latch
contention
should be less than 1%.
(Misses / Gets < 1%)
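A sketch of the ratio check against V$LATCH:

```sql
-- Sketch: miss ratio per latch; ideally MISSES / GETS stays
-- below 1%.
SELECT name, gets, misses, sleeps,
       ROUND(100 * misses / DECODE(gets, 0, 1, gets), 2) AS miss_pct
FROM   v$latch
WHERE  gets > 0
ORDER  BY miss_pct DESC;
```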
Increase the number of redo copy latches (the percentage of time for
which there is latch contention should be less than 1%).
GETS, MISSES, SLEEPS
LOG_SIMULTANEOUS_COPIES
The maximum number of redo copy latches is 2 * <number of CPUs>.
This value should be used for maximum performance.
Oracle-8 Backup and Recovery
Management will have to provide resources and support.
-
What impact will your backup strategy have on recoverability?
It will determine whether complete or incomplete recovery can be
performed.
-
Why is it important to frequently review the validity of your backup
and recovery strategy?
Business, operational, and technical requirements may change.
Database activity level
The default installation of an Oracle database provides one
redo log member per group. To multiplex the redo log files, at least two members
per group are required. Group members should be placed on separate disks. Each group
should have the same number of members that are the same size.
To add a redo log member, use the ALTER DATABASE ADD LOGFILE MEMBER command.
The Oracle server writes the redo log files in a circular fashion.
Two redo log groups are required to support this configuration. The
default installation creates two redo log groups. To add redo log groups to the database,
use the ALTER DATABASE ADD LOGFILE GROUP command.
When the database is started
The control file stores the physical structure of the database.
When the database is started, the control file is used to locate the datafiles
and log files. The checkpoint on each file is verified. If one checkpoint does not match,
the database cannot be opened.
The large pool is not created.
The large pool is created in the SGA when the LARGE_POOL_SIZE initialization
parameter is set. This area is used by RMAN to buffer I/O during backup and
restore operations when the BACKUP_DISK_IO_SLAVES or the BACKUP_TAPE_IO_SLAVES parameters
are set.
Log buffer
All the memory structures are contained within the SGA. The data buffer
cache stores blocks read from datafiles. The shared pool stores parsed SQL statements.
The large pool is used by RMAN for I/O. The log buffer stores before and after
images of changes made to the data.
DBWR writes all dirty buffers in the database buffer cache to disk.
-
What happens when the checkpoint numbers do not match at database
start up?
Oracle will assume a recovery operation is necessary and
generate an error. The database will not open. Each time a checkpoint
is triggered, all the datafiles, control files, and redo log files are synchronized.
Each file is updated with a checkpoint number. If the DBA overwrites the control file
with an old copy and attempts to start up the database, it will not open because the
checkpoint numbers are inconsistent. The start up will fail and recovery will be
needed.
-
Which two Oracle background processes provide the archived log files
used in a complete recovery?
LGWR and ARCH
SMON
SMON recovers abnormally terminated instances and reclaims any
temporary segments used by a user process. PMON frees up resources used by a terminated
user process. ARCH writes the redo logs to the archived logs. DBWR writes changes in the
database buffer cache to disk.
ALTER SYSTEM ARCHIVE LOG NEXT
The database must be shut down and a full backup performed.
To place the database in ARCHIVELOG mode:
1. Shut down the database.
2. Start the database in MOUNT state.
3. Issue the ALTER DATABASE ARCHIVELOG command.
4. Open the database.
5. Shut down the database.
6. Take a full closed backup.
A new full backup is taken because your previous backup is no longer
usable because the database was in NOARCHIVELOG mode. This new backup will be used for
recovery and the archived logs will be applied to it.
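The six steps above can be sketched in Server Manager (the SHUTDOWN mode shown is illustrative):

```sql
-- Sketch: switching the database to ARCHIVELOG mode.
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
SHUTDOWN IMMEDIATE
-- then take a full closed backup of all datafiles, control files,
-- and redo log files with O/S commands
```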
LOG_ARCHIVE_MIN_SUCCEED_DEST
The LOG_ARCHIVE_MIN_SUCCEED_DEST is set to 1 or 2. It specifies the
minimum number of archived log destinations to which an online redo log file must be
successfully written. The initialization parameters for the archived destinations are:
LOG_ARCHIVE_DEST - primary archive log file location
LOG_ARCHIVE_DUPLEX_DEST - secondary archive location for duplexing
To enable archiving in an open instance, issue this command:
ALTER SYSTEM ARCHIVE LOG START
To enable automatic archiving at the start of an instance, set the
LOG_ARCHIVE_START initialization parameter to TRUE.
For a database with archiving enabled, you only need to alter the
LOG_ARCHIVE_DUPLEX_DEST initialization parameter. To dynamically alter an
initialization parameter, use the ALTER SYSTEM command.
To disable archiving, use this command:
ALTER SYSTEM ARCHIVE LOG STOP
To ensure that automatic archiving will not be enabled at the next
instance startup, set the LOG_ARCHIVE_START initialization parameter to FALSE.
After the checkpoint, after the redo log file is
archived
When the database is in ARCHIVELOG mode, the redo logs cannot be
overwritten until they are archived. When automatic archiving is enabled, the ARCH
process copies the oldest unarchived redo log group at a log switch. A checkpoint occurs
at each log switch. For manual archiving, the ARCHIVE command is issued, a log switch and
a checkpoint occurs, and the oldest unarchived log group is archived. This group is now
available for reuse.
The database will not open until the tablespace has been dropped or the
entire database has been restored from backup.
Scripts can be created to complete regular DBA tasks.
It is recommended that you use scripts to minimize operator error. Scripts can be stored
in the operating system, but it is recommended that you store scripts in a
recovery catalog because the Oracle Server will maintain them.
Daily
Use the RESYNC CATALOG command in RMAN to update the recovery catalog.
You should resynch the catalog once a day with its target database because information
about the redo logs and archived log files is not automatically written to the recovery
catalog. Frequency really depends on how often archived log files are created.
$ rman target susan/jaguar nocatalog
Why would you issue this command?
To connect the user to Recovery Manager if SUSAN has SYSDBA
privileges
To connect a user with SYSDBA privileges to Recovery Manager locally
without a recovery catalog, use this command syntax:
$ rman target scott/tiger nocatalog
ARKUM:zahn> rman80 target sys/manager@DOR1 nocatalog;
Recovery Manager: Release 8.0.5.0.0 - Production
RMAN-06009: using target database controlfile instead of recovery catalog
RMAN-06005: connected to target database: DOR1
$ rman rcvcat rman/rman@RCVCAT
rman> connect target
Why would you issue this command?
To connect to a local recovery catalog with a TNS alias of RCVCAT
To connect locally to RMAN, use this command:
$ rman rcvcat rman/rman@RCVCAT
At the RMAN prompt, connect to the target database with this command:
rman> connect target
During a backup operation generated with Recovery Manager,
corrupt blocks will be identified and the information will be recorded
in the target database control file, alert log, and trace files. To view this
information, query V$BACKUP_CORRUPTION and
V$COPY_CORRUPTION.
Because the control file stores backup, recovery, and restore
information, the amount of information stored can become rather large if there are
frequent backups and a large target database. To limit the amount of information stored,
you can alter the CONTROL_FILE_RECORD_KEEP_TIME initialization parameter
to specify the number of days that RMAN maintains backup, recovery, and restore
information. When using CONTROL_FILE_RECORD_KEEP_TIME, be sure the recovery catalog is
resynched more often than the overwrites to the control file.
A recovery catalog can be used to store and maintain long-term
backup information.
A recovery catalog stores backup scripts, but is not required to execute backup scripts.
Additional disk space is required for a recovery catalog because it must be stored in a
separate database. There are several methods to backup archived logs; a recovery catalog
is not required for this task.
rman> set snapshot controlfile name to
'C:/Users/Zahn/Work/ctrl1.snp';
rman nocatalog
rman> connect target sys/manager@dor1
Channel
A channel is required for each degree of parallelization for backup,
recovery, and restore operations. Oracle creates a server process on the target database
for each channel allocated.
ALTER TABLESPACE appl_data BEGIN BACKUP;
Which event will occur when you issue this command?
Updates to the header block of the datafiles belonging to APPL_DATA
will be suspended.
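A minimal hot-backup sequence built around this command (file names are illustrative):

```sql
ALTER TABLESPACE appl_data BEGIN BACKUP;
-- copy the tablespace's datafiles with O/S commands, e.g.:
-- !cp /disk1/appl_data01.dbf /backup/appl_data01.dbf
ALTER TABLESPACE appl_data END BACKUP;
```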
Either online or offline, since no changes will be made to the
tablespace during the backup operation.
ALTER DATABASE BACKUP CONTROLFILE TO '?/data/control01.bak';
<CTRL> + <C> will terminate an RMAN job when RMAN is used
interactively. To terminate RMAN when it is running in batch mode, you must kill
the RMAN process. If RMAN terminates abnormally, you must delete the incomplete
backup files from the operating system.
rman> run {
2> allocate channel d1 type disk;
3> backup incremental level = 1 cumulative
4> format = 'tue-%s_%p.bck' (database); }
Which type of backup will be performed by this command?
Blocks changed since yesterday's backup
Level 0 is a full database backup that can be used as a base for future
incremental backups.
Level 1 will backup all the changed blocks since the last monthly backup.
Level 2 will backup all the changed blocks since the last weekly backup.
Level 3 will backup all the changed blocks since the last daily backup.
rman> run {
2> allocate channel d1 type disk;
3> backup incremental level = 0
4> format = 'tue-%s_%p.bck' (database); }
Which type of backup will be performed by this command?
Full database backup
V$BACKUP_DATAFILE
V$BACKUP_DATAFILE will display the number of corrupt blocks found in
each datafile during a backup taken with RMAN.
When a backup is not progressing, you can query V$SESSION_WAIT to see
if any of the backup events have caused the backup to hang.
Only RMAN will find corrupt blocks during a backup. To list the corrupt
blocks found during a backup set, query V$BACKUP_CORRUPTION. To list the
corrupt blocks found during an image copy, query V$COPY_CORRUPTION.
$ rman TARGET \"system/manager@prod\" \
RCVCAT \"system/manager@prodrec\"
ALLOCATE CHANNEL FOR DELETE TYPE DISK;
CHANGE DATAFILECOPY 'user02.dbf' DELETE;
RELEASE CHANNEL;
CHANGE DATAFILECOPY 'user_data9.dbf' UNAVAILABLE;
$ rman TARGET \"system/manager@ora8prod\" \
> RCVCAT \"system/manager@prodrec\"
After issuing the commands, you create a full backup of the ORA8PROD
database. Which command must you execute to close the connection to the sequential I/O
device in Recovery Manager after the backup is complete?
RELEASE CHANNEL channel_id
When a request for this data is made by a user
2
The database cannot be opened if the file synchronization information
does not match.
username: RCATMAN
password: RCATMAN
connect string: RCVCAT
Which command will connect to this recovery catalog?
$ rman rcvcat rcatman/rcatman@RCVCAT
rman> connect target
-
At approximately 7:00am today, the DEPARTMENT table was accidentally
dropped. The database structure has not changed since last Friday and you decide to
perform an incomplete database recovery through 6:45am today. Using Recovery Manager,
which RUN command clause will perform this recovery?
SET UNTIL TIME = '1998-06-08:06:45:00'
CHANGE ARCHIVELOG '/disk2/archive/arch_232.rdo' UNCATALOG;
ORA-00376: file 4 cannot be read at this time
ORA-01110: data file 4: '/disk2/data/user123.dbf'
Disk 1 is corrupt and the user123.dbf file, uniquely identified by 4,
must be moved to a different disk, Disk 5. Using the RUN command in Recovery Manager,
which must be issued to inform the database of the new location?
SET NEWNAME FOR DATAFILE 4 TO '/disk5/data/user123.dbf';
rman> run { allocate channel c1 type disk
format =
'/disk1/backup/tue_%s_%p.bck';
backup incremental
level = 1 (database); }
Redo log files were not mirrored and you lost a redo log file before it
was archived.
Communication interface with the server
The database must be OPEN
V$BACKUP_REDOLOG
Provide a useful method to refer to a backup set.
collectively hold all datafiles and control files
CHANGE DATAFILECOPY 'query_data12.dbf' DELETE;
Why would you issue this command?
Inform Recovery Manager that the query_data12.dbf has been dropped
from the target database.
CHANGE DATAFILECOPY 'user_data8.dbf' UNAVAILABLE
Why would you issue this command?
Mark the user_data8.dbf datafile as unavailable.
LIST BACKUPSET OF DATAFILE "user_data7.dbf";
Why would you issue this command?
Determine the backup for USER_DATA7.DBF.
Specify the name of a file that contains additional input commands.
-
A user informs you that they are unable to query the EMPLOYEE table.
The table is stored in a tablespace that contains the 'query_data13.dbf' file that is
offline. While investigating, you notice that the drive that contains this file has
been damaged. Which command should you issue in Recovery Manager to mark this datafile
as unavailable?
CHANGE DATAFILECOPY 'query_data13.dbf' UNAVAILABLE;
rman> CATALOG DATAFILECOPY "index_data12.bck";
rman> CATALOG ARCHIVELOG "arch_423.log";
$ rman TARGET \"system/manager@prod\" \
RCVCAT \"system/manager@prodrec\"
rman> ALLOCATE CHANNEL FOR DELETE TYPE DISK;
rman> CHANGE DATAFILECOPY 'finance1.dbf' DELETE;
rman> RELEASE CHANNEL;
Inform Recovery Manager of the recent deletion of the finance1.dbf
datafile.
RESYNC CATALOG;
REPORT NEED BACKUP DAYS 4;
Why would you issue this command?
A listing of datafiles that have not been backed up within the last
four days is required.
DROP ROLLBACK SEGMENT RBS01;
You are using Recovery Manager to facilitate the backup and restore
operations. After executing this command, what must you do to the recovery catalog?
Resynchronize it.
rman> CHANGE DATAFILECOPY 'finance2.dbf' UNCATALOG
rman> CATALOG DATAFILECOPY "user_data6.bck";
rman> CATALOG ARCHIVELOG "arch_344.log";
Register these files in the recovery catalog.
point-in-time recovery
rman TARGET \"scott/tiger@v80prod\" \
RCVCAT \"scott/tiger@v80prec\" \
Adding which command will create a list of all commands executed during
the operation saved to a file called rmnv80prd.log?
MSGLOG \"rmnv80prd.log\" \
CHANGE DATAFILECOPY 'user_data8.dbf' DELETE;
When is the actual file deleted from the operating system?
immediately upon execution of this command
REDUNDANCY
rman> CHANGE DATAFILECOPY 'finance4.dbf' DELETE;
Which command must precede this delete operation?
ALLOCATE CHANNEL FOR DELETE;
Adding a datafile to a tablespace
RESYNC CATALOG FROM BACKUP CONTROLFILE 'prodbu.ctl';
Why would you issue this command?
A datafile has been added to the target database and the current
control files are damaged and unusable.
REPORT NEED BACKUP DAYS 4 REDUNDANCY 2;
Why would you issue this command?
A listing of datafiles that have not been backed up within the last
four days and
do not have at least two backups.
$ rman TARGET \"system/manager@prod\" \
RCVCAT \"system/manager@prodrec\"
rman> CHANGE DATAFILECOPY 'finance2.dbf' AVAILABLE;
REPORT NEED BACKUP DAYS 6;
$ rman TARGET \"system/manager@prod\" \
RCVCAT \"system/manager@prodrec\"
After issuing the commands, you create a full archive log backup. Which
script will create this backup for these specifications?
Number of archive log files to backup: 320
Number of backup sets: 4
Delete input files? yes
Channel: t1
rman> CREATE SCRIPT PROD_BACK_ARCH {
ALLOCATE CHANNEL t1 NAME "sbt_tape";
BACKUP FULL FILESPERSET 80
(ARCHIVELOG ALL DELETE INPUT
FORMAT "PRODARCH.%s.%p");
RELEASE CHANNEL t1; }
CATALOG DATAFILECOPY "user_data8.bck";
CATALOG ARCHIVELOG "arch_112.log";
$ rman TARGET \"system/manager@prod\" \
> RCVCAT \"system/manager@prodrec\"
> SETLIMIT CHANNEL KBYTES 500;
Each backup piece must be no larger than 500K.
Backup Piece
rman> run {
2> allocate channel d1 type disk;
3> allocate channel d2 type disk;
4> copy    # 2 files copied in parallel
5> datafile 1 to '/backup/file1.dbf',
6> datafile 2 to '/backup/file2.dbf'; }
This command creates two channels to write to disk. The COPY command
uses the two channels to copy the two datafiles at the same time.
Re-reads the block. A fragmented block is a block that is being written
to as it is being read by the backup utility; the backup utility will re-read such
blocks.
Image copy
RMAN supports an image copy backup and a backup set. An image
copy backup is similar to an operating system backup. It will copy all of the blocks and
check for block corruption. A backup set only copies used blocks.
rman> run {
allocate channel d1 type disk;
copy level 0
datafile 1 to 'backup/file1.dbf',
datafile 2 to 'backup/file2.dbf';
}
How do you know that this copy will be used in an incremental backup
set?
Level 0 was used with the COPY command.
This command creates one channel and writes two datafiles to disk. It
performs a level 0 incremental backup that can be used as a base for incremental
backups.
The degree of parallelization available during the
backup
For each channel allocated by an RMAN RUN command, an Oracle Server
process is created. The number of channels allocated determines the maximum
degree of parallelization available for the backup, recovery, or restore
operations. Channels can be allocated and not used by a RUN command.
Contains only blocks that have been modified since the last incremental
backup
An incremental backup only contains blocks of a
control file or the datafiles that have changed since the last incremental
backup. A cumulative incremental backup backs up blocks changed
since the last cumulative incremental backup and the previously backed up
blocks.
rman> run {
2> allocate channel d1 type disk;
3> copy level 0
4> datafile 1 to '/backup/file1.dbf',
5> datafile 2 to '/backup/file2.dbf'; }
Which task will this command accomplish?
Create an image copy backup that can be used in an
incremental backup.
3
With the RMAN BACKUP command, the FILESPERSET option determines
the number of files included in each set. In this example, four backup sets
would be created. Three backup sets would be multiplexed by 3. The remaining file would
be included in the fourth backup set.
Control file
When errors are detected in blocks during a backup set process, the
errors are written to the control files. To view the errors, query V$BACKUP_CORRUPTION.
The corrupt files are still included in the backup set.
V$BACKUP_REDOLOG will show the backup sets that include
archived logs.
When taking an offline backup using RMAN, the database should
be mounted, but not open. When taking an online backup using RMAN, the
database should be open. When using RMAN, tablespaces should not be put in
backup mode. If a recovery catalog is being used, it should be open for either
type of backup.
The database must be in NOMOUNT or MOUNT state to use
Recovery Manager to restore files in a database that is not archiving.
-
Which format value would you use in a backup set command to include
the backup set number in the name of the backup file?
These are the valid values for the FORMAT option with the RMAN
BACKUP command:
%d - target database name
%n - padded target database name
%p - backup piece number
%s - backup set number
%t - backup set stamp
%u - representation of the backup set number and time backup set was created
-
The PROD database has 10 datafiles stored on 2 disk drives. You need
to perform a full database backup with 3 files in each backup set. How many channels
should you allocate to perform a parallel backup operation?
To perform a parallel backup operation, allocate two channels
of type disk to simultaneously read from the two disk drives that store the ten
datafiles to be backed up.
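A sketch of such a run block (channel names are illustrative; FILESPERSET 3 gives the three files per set):

```text
rman> run {
2> allocate channel d1 type disk;
3> allocate channel d2 type disk;
4> backup full filesperset 3 (database);
5> }
```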
One or more files stored in the RMAN format
RMAN can be used to support backup sets, image copies, and O/S
copies. A backup set is created using the RMAN BACKUP command. There are
two types of backup sets. The datafile backup set can be used to backup datafiles and
control files. The archived log backup set can be used to backup archived
logs.
The benefit of using an incremental backup may be lost.
If the control file of a database that uses incremental backups needs
to be restored, the archived logs would have to be applied to the last full backup to
recover the control file. The speed of recovery, which is one of the benefits of
incremental backups, would be lost.
V$SESSION_LONGOPS displays the progress of a backup. If % Complete is not
increasing, query V$SESSION_WAIT to see if any of the events have caused the backup to
hang.
Trace files are created by the Oracle Server to document the errors
generated by the background processes. The location of the trace files is specified by
the BACKGROUND_DUMP_DEST initialization parameter.
An attempt is made to archive another redo log member.
If the initialization parameter LOG_BLOCK_CHECKSUM is set to TRUE, each
block in a redo log file is given a checksum before a write is performed. A checksum is
used to detect block corruption before a block is read. If a corrupt block is detected in
a member of a redo log group, the server will attempt to read from another member of the
group. If the entire group is invalid, the group cannot be archived.
User process failure
A user process failure occurs when a user process abnormally
terminates. This is usually caused by an abnormally disconnected user session. The
PMON background process will automatically recover a user process failure.
The block size of the file is greater than 2KB.
When the application requests the data
Instance Recovery:
1. Oracle checks for unsynchronized files to determine if recovery is
necessary.
2. All the changes recorded in the redo log files are applied to the data blocks.
3. The datafiles contain committed and uncommitted data.
4. If there is uncommitted data in the datafiles, the rollback operation occurs.
5. After the roll forward and rollback operations, the data is committed.
6. The datafiles are synchronized.
Active redo log files are applied to the data blocks.
The log sequence numbers are updated in the datafiles.
During the roll forward phase of recovery, the redo logs are
sequentially applied to the data blocks. After changes are made in the data blocks, the
version number of the data blocks is incremented by 1.
-
If the DBWR process terminates and the database shuts down, which
file logs the error that caused the process to terminate?
DBWR trace file
-
Which view would you query to determine which archived logs are
needed for database recovery?
V$RECOVERY_LOG
Reenter any data that was not committed at the point of failure.
When media failure occurs in a database that is archiving and you have
all the archived logs since the last backup, only the lost files need to be restored.
The database can be recovered to the present time in the control
file.
SWITCH
Take the tablespace for the datafile offline, copy the file to the new
location using the SET NEWNAME command, and make the new datafile active by using the
SWITCH command. The RECOVER command will apply the redo logs, archived logs, and
incremental and cumulative backups to the file. Issue the ALTER TABLESPACE command to
bring the tablespace online.
ALTER DATABASE CLEAR LOGFILE GROUP 1 UNRECOVERABLE DATAFILE;
Why would you issue this command?
To clear a redo log file that is needed for recovery of an offline
datafile
THE ALTER DATABASE CLEAR UNARCHIVED LOGFILE command can be used when
there are only two log groups or the corrupt redo log file is a member of the current
group. This command will clear or recreate the files.
Shut down the database, restore the corrupt file, mount the database,
and recover the datafile.
rman> run {
2> allocate channel d1 type disk;
3> restore database; }
If the database is in ARCHIVELOG mode, what will this command
accomplish?
Restore only the datafiles from the last full backup
To open the database, the datafiles and the control file would need to
be restored. This command only restores the datafiles. To restore the control file, you
would have to include the RESTORE CONTROLFILE command.
SET NEWNAME FOR DATAFILE '/disk1/dfuser02.dbf'
to '/disk2/dfuser02.dbf';
SET AUTORECOVERY ON
RECOVER DATABASE;
Which portion of the database is available for users while recovery is
performed?
None
SYSTEM
If you need to restore datafiles that are associated with the
SYSTEM tablespace, you must restore the datafiles while the database is offline. All
other files can be restored while the database is open. The tablespaces
associated with the datafiles being restored should be offline.
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 1;
V$RECOVER_FILE contains the names of the files that
need to be recovered and the point that they need recovery from.
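A sketch of the corresponding query:

```sql
-- Sketch: files needing recovery, with the reason and the
-- SCN/time recovery must start from.
SELECT file#, online, error, change#, time
FROM   v$recover_file;
```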
SET AUTORECOVERY ON;
You can enable automatic recovery by issuing the SET
AUTORECOVERY ON command. The redo log files must be in the location specified by
the LOG_ARCHIVE_DEST initialization parameter. If you do not allow the Oracle Server to
perform automatic recovery, you will be prompted for the redo log file names. It is
recommended that you use the RECOVER command for media failure.
SQL "ALTER TABLESPACE user_02 OFFLINE IMMEDIATE";
Before any restore or recovery operation can be done to a
tablespace in a database that is online, the tablespace must be taken offline.
The SQL option allows you to issue SQL commands within Recovery Manager.
Recover all transactions to the point of failure
When a complete recovery is performed, committed transactions are not
lost. After restoring the files, the redo logs and the archived logs are applied and the
database is brought to the point of failure.
A full offline backup is required after the recovery operations are
complete.
After performing an incomplete recovery operation with RMAN, the
database must be opened with the RESETLOGS option. After the database is open, a full
offline backup should be taken. This process increases the amount of down time.
Time-based recovery should be performed to restore the
database to the state it was prior to the unwanted update.
TSPITR allows one or more tablespaces to be restored to a point
that is different from the rest of the database. The datafiles in the
tablespace and a backup copy of the control file are required for the recovery
operation. This method is used very sparingly. It will help the DBA recover from
user errors when bad data was committed to a large table.
Datafiles for the tablespace and a backup copy of the control file (see
above)
RESET DATABASE;
When performing an incomplete recovery using Recovery Manager, what
will this command accomplish?
Register the current version of the database in the recovery catalog.
The RESET DATABASE command in Recovery Manager will update the recovery catalog after the
database has been recovered.
The client-side character set must match the server-side.
The BUFFER parameter will not have any effect on the export.
- Tables are created
- Indexes are built
- Data is imported
- Triggers are imported
- Integrity constraints are enabled.
Point-In-Time
If a database is running in NOARCHIVELOG mode, the database can only be
recovered to the point of the last backup regardless of the backup method.
USERID
Use an index scan to retrieve the data in the corrupt blocks.
To retrieve the data from the table, use this command:
SELECT DISTINCT(key)
FROM corrupted_table
WHERE key > (lowest value for the key)
AND SUBSTR(rowid, 1, 8) = corrupt_block_id
ORDER BY 1;
Restore the last backup and recover until the time that the tablespace
was made read-only.
When a tablespace becomes read-only, you should take a full backup so
that this scenario never occurs. If you should need to recover a tablespace of this type,
restore the last backup and apply the redo logs and archived logs to recover the
tablespace to the point the tablespace became read-only.
Mount the database, take the tablespace with the missing datafile
offline, and open the database.
To recover a lost datafile
- start up the database in MOUNT mode
- take the tablespace with the missing datafile offline
- open the database
- restore the backup datafile and all offline redolog files with OS commands
- recover the datafile (not the database !)
- bring the tablespace online.
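The steps above can be sketched as follows (datafile and tablespace names are illustrative; with the database only mounted, the file is taken offline with ALTER DATABASE rather than ALTER TABLESPACE):

```sql
-- Sketch: recovering a single lost datafile.
STARTUP MOUNT
ALTER DATABASE DATAFILE '/disk2/data/user01.dbf' OFFLINE;
ALTER DATABASE OPEN;
-- restore the backup datafile with O/S commands, then:
RECOVER DATAFILE '/disk2/data/user01.dbf';
ALTER TABLESPACE user_data ONLINE;
```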
The tablespace will be recovered to the current time and the system
will resynchronize the files. Committed data will not be lost.
Recreate the control file using the CREATE CONTROLFILE command.
To change the MAXDATAFILES and MAXLOGMEMBERS parameters, you must
recreate the control file. These settings are fixed when the control file is created and
cannot be dynamically changed.
1 or 2
Parallel recovery helps to reduce the Mean-Time-To-Recover. The Oracle
Server allows you to utilize multiple threads to recover the database. For example, to
issue a recovery process that uses 4 recovery processes and two instances, you would
issue this command:
RECOVER DATABASE PARALLEL DEGREE 4 INSTANCES 2;
This command will initiate 8 recovery processes. To minimize I/O contention, it is
sufficient to have 1 to 2 processes per disk drive.
Recreate the entire recovery catalog using the CATALOG command.
To recreate the recovery catalog, use the CATALOG command to recatalog
all the necessary files or rebuild the recovery catalog using a backup control file. The
RESYNC CATALOG FROM BACKUP CONTROLFILE command will include all the backup pieces and
backup sets; the CATALOG command does not support this type of cataloging. Any files
added to the recovery catalog that are no longer needed should be removed using the
CHANGE command with the UNCATALOG option.
Use the RESYNC CATALOG FROM BACKUP CONTROLFILE command.
-
Evaluate this error message that the Oracle Server generated when you attempted
to open the database:
svrmgr> alter database open;
ORA-00283: Recovery session canceled due to errors
ORA-01122: database file 1 failed verification check
ORA-01110: data file 1: '/user_data/db02/u01/sys01.dbf'
ORA-01207: file is more recent than control file - old control file
What might have caused this error?
Backup control file needs to be restored.
To receive this error, you restored the datafiles and attempted to
open the database with the current control file rather than the backup. The files are
not synchronized and the database will not open. To recover from this error,
restore the backup control file. Using the backup control file, recover the database.
Open the database with the RESETLOGS option.
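A minimal sketch of this recovery, assuming the backup control file has already been restored into place with OS commands:

```sql
-- Recover using the restored backup control file, then open with RESETLOGS.
STARTUP MOUNT;
RECOVER DATABASE USING BACKUP CONTROLFILE;
ALTER DATABASE OPEN RESETLOGS;
```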
-
The recovery operation you are preparing includes read-only
tablespaces. Which command can you issue to get a listing of the procedures needed to
recover this database?
ALTER DATABASE BACKUP CONTROLFILE TO TRACE
When you need to recover the database from the loss of the control
file, you recreate the control file with the CREATE CONTROLFILE command. If you are
recreating the control file of a database with read-only tablespaces, issue the ALTER
DATABASE BACKUP CONTROLFILE TO TRACE command to generate a listing of the special
procedures required for recovery.
SVRMGR> RECOVER TABLESPACE user_01;
Which task will this command accomplish if the database is in
ARCHIVELOG mode?
Recover the USER_01 tablespace using the archived redo logs.
Oracle-8 New Features for
Administrators
CREATE PROFILE user_profile
LIMIT
PASSWORD_LIFE_TIME 60
PASSWORD_REUSE_TIME 365;
Passwords must be limited to 10 characters in length and cannot be the
same as the user's social security number. This additional rule must be enforced with a
function. Which clause should you add to this profile to execute this function?
PASSWORD_VERIFY_FUNCTION user_defined_function
CREATE PROFILE user_profile
LIMIT
PASSWORD_VERIFY_FUNCTION your_function_that_verifies_conditions;
CREATE PROFILE user_profile
LIMIT
PASSWORD_LIFE_TIME 30
PASSWORD_REUSE_TIME 100;
This profile allows users to reuse an old password after 100 days. However, the
new rule is that each user may reuse an old password only after the password has been
changed 20 times. Which two changes should you make to this profile to enforce this new rule?
add PASSWORD_REUSE_MAX 20
change PASSWORD_REUSE_TIME to UNLIMITED
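With both changes applied, the profile would read as follows (a sketch; when PASSWORD_REUSE_MAX is set, PASSWORD_REUSE_TIME must be UNLIMITED):

```sql
CREATE PROFILE user_profile
LIMIT
PASSWORD_LIFE_TIME 30
PASSWORD_REUSE_TIME UNLIMITED
PASSWORD_REUSE_MAX 20;
```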
CREATE PROFILE side
LIMIT
IDLE_TIME 30;
This profile is assigned to all users of the database to limit session
idle to 30 minutes. There is a need, due to a security breach, to limit the number of
invalid login attempts. Which clause should you add to limit these attempts to 3?
FAILED_LOGIN_ATTEMPTS 3
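Applied to the existing SIDE profile, the change could look like this sketch:

```sql
ALTER PROFILE side
LIMIT
FAILED_LOGIN_ATTEMPTS 3;
```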
CREATE PROFILE user_profile
LIMIT
PASSWORD_LIFE_TIME 90
PASSWORD_GRACE_TIME 2
FAILED_LOGIN_ATTEMPTS 3;
PASSWORD_LIFE_TIME 90
CREATE PROFILE user_profile
LIMIT
PASSWORD_LIFE_TIME 60
PASSWORD_REUSE_TIME 365;
Assigning this profile to a user will enforce which two rules?
Each password must be changed after 60 days
A previously used password cannot be used again until 365 days have passed.
CREATE PROFILE user_profile
LIMIT
PASSWORD_LIFE_TIME 90;
This profile is currently assigned to your end users. You are receiving
frequent calls concerning password expirations. Which command would you execute to give
users a three day warning after the time limit, but before the password expires?
ALTER PROFILE user_profile
LIMIT
PASSWORD_GRACE_TIME 3;
CREATE PROFILE user_profile
LIMIT
FAILED_LOGIN_ATTEMPTS 5
PASSWORD_LOCK_TIME 1;
The account will be locked preventing future connections for one
day.
Global Index
ALTER TABLE orders
EXCHANGE PARTITION apr97
WITH TABLE orders_apr97;
ALTER TABLE orders MODIFY product_id VARCHAR2(300);
What would cause this command to fail?
The table is partitioned on the PRODUCT_ID column.
connection pooling
DELETE_CATALOG_ROLE
INSERT /*+ PARALLEL (orders97, 6) */ INTO orders97
SELECT /*+ PARALLEL (orders, 8) */ *
FROM orders
WHERE order_date < '01-JAN-98';
What is the degree of parallelism for the whole operation?
6
CREATE PROFILE user_profile
LIMIT
PASSWORD_LIFE_TIME 45;
partition-to-device
CHECK_ONLY
rollback segments
USER_PART_INDEXES
TO_DATE
After creating the new Oracle8 database
DIRECT and PARALLEL
Use a different control file than the default.
Unique global
Your users are notorious for logging onto the database for hours at a
time with very
few calls to the database.
CREATE TABLE record_sales
(id NUMBER(8),
name VARCHAR2(100),
gross_receipts NUMBER(11,2),
week_no NUMBER(2))
PARTITION BY RANGE (week_no)
(PARTITION P1 VALUES LESS THAN (4) TABLESPACE user_data0,
PARTITION P2 VALUES LESS THAN (8) TABLESPACE user_data1,
PARTITION P3 VALUES LESS THAN (MAXVALUE) TABLESPACE user_data3);
Which partition will contain the row with a null value assigned to the
WEEK_NO column?
P3
by analyzing the WHERE clause and eliminating partitions that do not
need to be accessed
CREATE INDEX storm_track_indx
ON storm_track (storm_id)
LOCAL
(PARTITION P1 TABLESPACE user_data1,
PARTITION P2 TABLESPACE user_data2,
PARTITION P3 TABLESPACE user_data3);
If partition P3 is marked as unusable, what is true about subsequent
operations?
You can execute SELECT or DML statements that require the index as long
as they do not access the P3 partition.
SELECT *
FROM storm_track
WHERE current_date_time = '01-SEP-98 08:20:00';
SELECT *
FROM storm_track
WHERE storm_id = 133;
The STORM_TRACK table is partitioned on the STORM_ID column with an
index on the same column. You create an additional index on the CURRENT_DATE_TIME column.
Which type of index will be created on this column?
Global Index
Both local and global indexes are marked unusable.
-
As a database administrator for a decision support system (DSS), you
are planning a partition index strategy. Which type of index should you create for
historical tables?
Local
CREATE TABLE record_sales
(id NUMBER(8),
name VARCHAR2(100),
gross_receipts NUMBER(11,2),
week_no NUMBER(2))
PARTITION BY RANGE (week_no)
(PARTITION P1 VALUES LESS THAN (4) TABLESPACE user_data0,
PARTITION P2 VALUES LESS THAN (8) TABLESPACE user_data1);
Because of limited data storage, you are concerned with the potential
growth of the partitions. You decide to limit the number of extents that can be allocated
to each partition. Which command will limit the number of extents to 25 for the P1
partition?
ALTER TABLE record_sales
MODIFY PARTITION p1
STORAGE (MAXEXTENTS 25);
ALTER TABLE and DROP ANY TABLE
DIRECT and PARALLEL
ALTER INDEX storm_track_long_indx
DROP PARTITION p3;
If partition P3 is the highest partition in the index, which result can
you expect?
Removes a partition and the data in it from a partitioned global index.
Dropping a partition of a global index marks the index's next partition as unusable. You
cannot drop the highest partition of a global index.
ALTER INDEX record_sales_indx SPLIT PARTITION p1 AT (5)
INTO (PARTITION p8 TABLESPACE index_data44,
PARTITION p9 TABLESPACE index_data45);
For this statement to be successful, which type of index must
RECORD_SALES_INDX be?
The index must be a global index; local index partitions are maintained through
ALTER TABLE commands on the partitioned table.
CREATE TABLE record_sales
(id NUMBER(8),
name VARCHAR2(100),
gross_receipts NUMBER(11,2),
week_no NUMBER(2))
PARTITION BY RANGE (week_no)
(PARTITION P1 VALUES LESS THAN (4) TABLESPACE user_data0,
PARTITION P2 VALUES LESS THAN (8) TABLESPACE user_data1);
When loading rows into this table using the IMPORT utility, what will
the result be if the rows from the import include record sales statistics for week number
8?
The execution of the import will reject these rows and continue
successfully.
ALTER TABLE record_sales TRUNCATE PARTITION p3 DROP STORAGE;
If global indexes are defined on RECORD_SALES and the table is not
empty, what is the result of this command?
All global index partitions will be marked as unusable. DROP STORAGE
means that space from the deleted rows will be deallocated and made available for use by
other schema objects in the tablespace.
ALTER TABLE record_sales ADD comments LONG;
What would cause this command to fail?
The table is partitioned, and partitioned tables cannot include LONG columns.
-
Which statistic in the V$PQ_SYSSTAT data dictionary view indicates
the number of parallel DML statements currently active?
DML Initiated
SQL> SELECT * FROM v$pq_sysstat;
STATISTIC                          VALUE
------------------------------ ---------
Servers Busy                           0
Servers Idle                           0
Servers Highwater                      0
Server Sessions                        0
Servers Started                        0
Servers Shutdown                       0
Servers Cleaned Up                     0
Queries Initiated                      0
DML Initiated                          0
DFO Trees                              0
Sessions Active                        0
Local Msgs Sent                        0
Distr Msgs Sent                        0
Local Msgs Recv'd                      0
Distr Msgs Recv'd                      0
-
Evaluate this code:
SELECT *
FROM orders
WHERE order_filled = 'Y'
FOR UPDATE;
UPDATE /*+ PARALLEL(orders, 6) */ orders
SET date_shipped = SYSDATE
WHERE order_filled = 'Y';
You decide to explicitly lock all orders that have been filled before
issuing the UPDATE command. What is true about this transaction?
SELECT FOR UPDATE is not allowed in the same transaction as parallel
DML,
therefore the update will fail.
-
You issue these commands:
SET TRANSACTION USE ROLLBACK SEGMENT rbs01;
ALTER SESSION ENABLE PARALLEL DML;
What is the result?
All slave processes assigned to subsequent DML operations will share
the same rollback segment.
CREATE TABLE storm_track
(storm_id NUMBER(8),
current_time DATE,
max_wind_speed NUMBER(3),
current_lat NUMBER(5,2),
current_long NUMBER(5,2))
PARALLEL (DEGREE 2);
An application is extremely slow due to a large UPDATE command. You
decide to use a degree of parallelism of eight instead of two for this command. Which
command will you use to change the degree of parallelism for this operation?
UPDATE /*+ PARALLEL (storm_track, 8) */ storm_track
SET current_time = SYSDATE;
INSERT /*+ PARALLEL (storm_track97, 4) */ INTO storm_track97
SELECT /*+ PARALLEL (storm_track, 6) */ *
FROM storm_track
WHERE current_time < '01-JAN-98';
What is the degree of parallelism for the whole operation?
4
PDML_ENABLED
UPDATE /*+ parallel (record_sales,4) */ record_sales
SET gross_receipts = gross_receipts * 1.15;
ALTER SESSION ENABLE PARALLEL DML;
ALTER SESSION ENABLE PARALLEL DML;
The last UPDATE statement executed using parallel DML. However, you want to
execute a DELETE statement without using parallel DML. How will you achieve this?
Execute a commit, then execute the delete statement.
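A sketch of the sequence; the table and predicate are illustrative. The COMMIT ends the parallel DML transaction, and the unhinted DELETE in the new transaction runs serially:

```sql
COMMIT;                 -- ends the parallel DML transaction
DELETE FROM record_sales
WHERE week_no = 1;      -- no PARALLEL hint: executes serially
```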
The data object number increments.
SELECT dbms_rowid.rowid_type(row_id)
FROM rid;
DBMS_ROWID.ROWID_TYPE(ROW_ID)
-----------------------------
0
Which type of ROWID is indicated?
0 = Restricted, 1 = Extended
Tablespace of the row
SELECT dbms_rowid.rowid_object('AAAACrAABAAAARaAAA') "VALUE"
FROM DUAL;
VALUE
------------------------------------------------------
171
What does the value 171 indicate?
Data object number of the table to which the row belongs
6 Bytes
The column width changes.
The table or partition
The ability to fetch complete objects to the client side and provide
complete navigational support through programmatic interfaces is gained.
N
CREATE TABLE storm_track (
id NUMBER(4),
name VARCHAR2(30) CONSTRAINT storm_track_name_uk UNIQUE)
ORGANIZATION INDEX;
Why will this statement fail?
Unique constraints are not allowed on index-only tables.
-
When configured, the Oracle8 new feature large pool uses memory for
session memory (UGA), I/O slaves, and which operation?
backup and restore
View specific SYS schema objects.
CREATE TABLE company (
id NUMBER CONSTRAINT company_id_pk PRIMARY KEY
DEFERRABLE INITIALLY IMMEDIATE,
name VARCHAR2(30));
CREATE TABLE product (
id NUMBER CONSTRAINT product_id_pk PRIMARY KEY,
company_id NUMBER
CONSTRAINT product_company_id_fk REFERENCES
company(id)
DEFERRABLE INITIALLY IMMEDIATE,
name VARCHAR2(50),
type VARCHAR2(50));
Products from new companies are frequently added to the system before
the company information is recorded. When this occurs, which command should you issue
before inserting rows into the PRODUCT table?
ALTER SESSION SET CONSTRAINTS = DEFERRED;
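With the constraint deferred, the child row can be inserted before its parent within the same transaction; the values below are illustrative, and the foreign key is only checked at COMMIT:

```sql
ALTER SESSION SET CONSTRAINTS = DEFERRED;
INSERT INTO product (id, company_id, name, type)
VALUES (1, 100, 'Widget', 'Hardware');  -- company 100 not yet recorded
INSERT INTO company (id, name)
VALUES (100, 'Acme');
COMMIT;  -- deferred constraints are checked here
```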
Oracle-8 Net8
The connection will fail.
N-tier, Two-Tier
SUPPORT
SUPPORT
tnsnames.ora
data integrity
by using the same hash calculation used by the client
V$CIRCUIT
Database Servers
tnsnames.ora
SRV
protocol
host name
port number
SAVE_CONFIG
Profile
Profile
MTS_DISPATCHERS
STATUS
sqlnet.log
CMAN
Networking Interface (NI)
Net8
Allowing additional sessions to access the server above those allowed
by the operating system
List of Name Servers
global role
Allowing multiple logical sessions across a single physical link
|