TRIM instead of RTRIM and LTRIM

SELECT TRIM(' One Word ') FROM dual;

TO_LOB (Converting LONG to CLOB)

In Oracle 8.1.5 there is a new, simple way to convert LONGs to CLOBs and LONG RAWs to BLOBs:

INSERT INTO tab_new SELECT TO_LOB(long_value) FROM tab_old;

SYS_CONTEXT (Schema Environment)

A context is a set of application-defined attributes that validates and secures an application. You can even create your own contexts with CREATE CONTEXT. Oracle has defined a number of pre-built contexts, for example 'NLS_TERRITORY', which returns the territory. The following query returns the IP address of the connected client computer:

SELECT sys_context('USERENV','IP_ADDRESS') FROM dual;

SYS_GUID (Generate 16-Byte Unique Identifier)

SYS_GUID generates a globally unique identifier, which takes host, process and session into account.

INSERT INTO my_table VALUES ('BOB', SYS_GUID());

Temporary Tables (Session and Transaction Level)

If you need temporary storage to save some intermediate results, temporary tables are a good feature, because they don't need a cleanup job. You have two options: ON COMMIT DELETE ROWS (transaction level) or ON COMMIT PRESERVE ROWS (session level). Each session can only see its own data. For example:

CREATE GLOBAL TEMPORARY TABLE my_temp (id NUMBER, result VARCHAR2(100))
   ON COMMIT DELETE ROWS;

Autonomous Transactions

At times, you may want to commit or roll back some changes to a table independently of a primary transaction's final outcome:

DROP PACKAGE Banking;
CREATE OR REPLACE PACKAGE BODY Banking AS ...

The COMMIT in ErrHandler() does not commit the changes made by SetSal().
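A minimal sketch of an autonomous transaction outside the Banking package, assuming a hypothetical error-log table; the COMMIT inside the procedure affects only its own work:

CREATE TABLE err_log (msg VARCHAR2(255), logged_at DATE);

CREATE OR REPLACE PROCEDURE ErrHandler (p_msg IN VARCHAR2) IS
   PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
   INSERT INTO err_log VALUES (p_msg, SYSDATE);
   COMMIT;   -- commits only the INSERT above, not the caller's transaction
END;
/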
Bulk Binds

Bulk binds improve performance by minimizing the number of context switches between the PL/SQL and SQL engines, and they reduce network I/O. Instead of looping over a PL/SQL table and issuing one SQL statement per element, a single FORALL statement binds the whole table at once.

Bulk Collects (BULK COLLECT INTO)

In the bulk bind example, the list of empno's was statically built. With BULK COLLECT INTO you can dynamically build the entire list.

Bulk Collects with RETURNING INTO

You can even use bulk collects with DML commands to return values to the calling procedure using RETURNING, without an additional fetch. In the PL/SQL table "Bonlist" you can then find the updated salaries. A combined sketch follows.
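A minimal combined sketch (the 10% raise and department 20 are assumptions; the collection Bonlist receives the updated salaries):

DECLARE
   TYPE EmpnoTab IS TABLE OF emp.empno%TYPE;
   TYPE SalTab   IS TABLE OF emp.sal%TYPE;
   Empnos  EmpnoTab;
   Bonlist SalTab;
BEGIN
   -- dynamically build the list with BULK COLLECT
   SELECT empno BULK COLLECT INTO Empnos FROM emp WHERE deptno = 20;
   -- one context switch for all rows; RETURNING fills Bonlist in bulk
   FORALL i IN Empnos.FIRST .. Empnos.LAST
      UPDATE emp SET sal = sal * 1.1
       WHERE empno = Empnos(i)
      RETURNING sal BULK COLLECT INTO Bonlist;
END;
/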
NOCOPY Parameter Hint

When parameters hold large data structures, all the copying between formal and actual parameters slows down execution and uses up memory. To prevent that, you can specify the NOCOPY hint, which passes OUT and IN OUT parameters by reference instead of by value.
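A minimal sketch (procedure and type names are assumptions):

DECLARE
   TYPE NumTab IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
   PROCEDURE process (t IN OUT NOCOPY NumTab) IS
   BEGIN
      t(1) := t(1) * 2;   -- passed by reference, no copy-in/copy-out
   END;
   v NumTab;
BEGIN
   v(1) := 10;
   process(v);
END;
/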
Native Dynamic SQL

Do you need dynamic SQL to reference database objects that do not exist at compilation time? Dynamic SQL solves this problem, because it allows you to wait until runtime to specify the table names you need to access. For example, in the next sample you might allow a user to specify the name of the table at runtime with a dynamic SQL query:

CREATE OR REPLACE PROCEDURE ShowSal (TabName VARCHAR2) IS ...

EXECUTE IMMEDIATE to execute dynamic SQL (very good new feature)

The EXECUTE IMMEDIATE statement prepares (parses) and immediately executes a dynamic SQL statement or an anonymous PL/SQL block. The following examples are from the Oracle manual:

PlSqlBlock := 'BEGIN ShowSal(:TabName); END;';
EXECUTE IMMEDIATE 'DROP TABLE special_bonus';
SqlStmt := 'INSERT INTO dept VALUES (:1, :2, :3)';
EXECUTE IMMEDIATE 'DELETE FROM dept WHERE deptno = :num' USING dept_id;
SqlStmt := 'ALTER SESSION SET SQL_TRACE TRUE';

In the example below, a stand-alone procedure accepts the name of a database table (such as 'emp') and an optional WHERE-clause condition (such as 'sal > 2000'). If you omit the condition, the procedure deletes all rows from the table; otherwise, the procedure deletes only those rows that meet the condition.

CREATE PROCEDURE delete_rows ( ...

execute delete_rows('emp','sal > 2000');

We think that EXECUTE IMMEDIATE is one of the most important new features in Oracle 8i.

Definer Rights (Default)

By default, stored procedures and SQL methods execute with the privileges of their definer, not their invoker. Such definer-rights routines are bound to the schema in which they reside. A user of a definer-rights procedure requires only the privilege to execute the procedure and no privileges on the underlying objects that the procedure accesses, because a definer-rights procedure operates under the security domain of the user who owns the procedure, regardless of who is executing it. The procedure's owner must have all the necessary object privileges for referenced objects. Note that for definer-rights procedures the EXECUTE privilege cannot be granted to roles.

Advantage of Definer Rights

You can use definer-rights procedures to add a level of database security. By writing a definer-rights procedure and granting only the EXECUTE privilege to a user, the user can be forced to access the referenced objects only through the procedure (that is, the user cannot submit ad hoc SQL statements to the database).

Invoker Rights

An invoker-rights procedure executes with all of the invoker's privileges, including enabled roles. A user of an invoker-rights procedure needs privileges on the underlying objects that the procedure accesses, because names are resolved in the invoker's schema.

Advantage of Invoker Rights

Invoker-rights routines let you centralize data retrieval. They are especially useful in applications that store data in different schemas. In such cases, multiple users can manage their own data using a single code base.

Example on Invoker Rights

With an invoker-rights procedure, created once in Scott's schema and executed as scott, blake, and so on, each user can see only his or her own data. This behaviour is completely different from normal definer rights, where each user would see the data in Scott's DEPT table. Use AUTHID CURRENT_USER AS to define invoker rights. Please note that EXECUTE on invoker-rights procedures can be granted to roles.
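A minimal sketch of an invoker-rights procedure, assuming each user owns his own DEPT table (the procedure and role names are assumptions):

CREATE OR REPLACE PROCEDURE add_dept (p_deptno IN NUMBER, p_dname IN VARCHAR2)
   AUTHID CURRENT_USER AS
BEGIN
   -- DEPT is resolved in the invoker's schema at run time
   INSERT INTO dept (deptno, dname) VALUES (p_deptno, p_dname);
END;
/
GRANT EXECUTE ON add_dept TO some_role;   -- allowed for invoker-rights routines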
The functionality to support fine-grained access control is based on dynamic predicates, where security rules are not embedded in views, but are acquired at statement parse time, when the base table or view is referenced in a DML statement. A dynamic predicate for a table or view is generated by a PL/SQL function, which is associated with a security policy through a PL/SQL interface. For example:

DBMS_RLS.ADD_POLICY ('scott','emp','emp_policy','secusr','emp_sec','select');

Whenever the EMP table, under the SCOTT schema, is referenced in a query or subquery (SELECT), the server calls the EMP_SEC function (under the SECUSR schema). This returns a predicate specific to the current user for the EMP_POLICY policy. The policy function may generate the predicates based on whatever session environment variables are available during the function call. These variables usually appear in the form of application contexts. The server then produces a transient view with the text:

SELECT * FROM scott.emp WHERE P1

Here, P1 is the predicate returned from the EMP_SEC function. The server treats the EMP table as a view and does the view expansion just like an ordinary view, except that the view text is taken from the transient view instead of the data dictionary. If the predicate contains subqueries, then the owner (definer) of the policy function is used to resolve objects within the subqueries and checks security for those objects. In other words, users who have access privileges to the policy-protected objects do not need to know anything about the policy. They do not need to be granted object privileges for any underlying security policy. Furthermore, the users also do not require the EXECUTE privilege on the policy function, because the server makes the call with the function definer's rights.

Example

We need a policy on Scott's EMP table which shows only the rows belonging to the caller of the query. Therefore the policy function has to generate a predicate of the form ename = '<current user>' (e.g. ename = 'SCOTT').
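A minimal sketch of such a policy function (the two VARCHAR2 arguments and the VARCHAR2 return value are required by DBMS_RLS):

CREATE OR REPLACE FUNCTION emp_sec (obj_schema VARCHAR2, obj_name VARCHAR2)
   RETURN VARCHAR2 IS
BEGIN
   -- every user sees only his own rows, e.g. ename = 'SCOTT'
   RETURN 'ename = SYS_CONTEXT(''USERENV'',''SESSION_USER'')';
END;
/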
Important Note

The user SCOTT can only see his own rows, the user SYSTEM sees NO rows, but the user SYS can see ALL rows: policies are not enabled for the SYS user!

Policy Functions without a predefined Context

If you want to define your own predicate in the RETURN value, set it to '1 = 2' for false, and register the function with DBMS_RLS.ADD_POLICY as shown above.
Application Context (Security Policy)

Application context facilitates the implementation of fine-grained access control. It allows you to implement security policies with functions and then associate those security policies with applications. Each application can have its own application-specific context. Users are not allowed to arbitrarily change their context (for example, through SQL*Plus). To define an application context: create the context namespace with CREATE CONTEXT, bind it to a trusted PL/SQL package, and set the attributes from within that package using DBMS_SESSION.SET_CONTEXT.
The following sketch shows these steps.
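A minimal sketch (the namespace hr_ctx, the package hr_ctx_pkg and the attribute empno are assumptions):

CREATE CONTEXT hr_ctx USING scott.hr_ctx_pkg;

CREATE OR REPLACE PACKAGE hr_ctx_pkg AS
   PROCEDURE set_empno;
END hr_ctx_pkg;
/
CREATE OR REPLACE PACKAGE BODY hr_ctx_pkg AS
   PROCEDURE set_empno IS
      v_empno emp.empno%TYPE;
   BEGIN
      SELECT empno INTO v_empno FROM emp
       WHERE ename = SYS_CONTEXT('USERENV','SESSION_USER');
      -- only this package may set attributes in the hr_ctx namespace
      DBMS_SESSION.SET_CONTEXT('hr_ctx','empno',v_empno);
   END set_empno;
END hr_ctx_pkg;
/

-- an application (or a policy function) can later read the attribute:
SELECT SYS_CONTEXT('hr_ctx','empno') FROM dual;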
Open the whole Database READ-ONLY

The whole database can now be opened read-only (before 8i, only tablespaces could be). Note that disk sorts in the TEMP tablespace are not possible, therefore you must set SORT_AREA_SIZE and SORT_AREA_RETAINED_SIZE to values big enough to process the sort operations in memory.

sqlplus /nolog
SQL> connect sys as sysdba;
SQL> startup mount;
Total System Global Area 96394640 bytes
SQL> alter database open read only;
SQL> connect scott/tiger
SQL> select * from dept;

DEPTNO DNAME          LOC
------ -------------- ----

SQL> insert into dept values (50,'DBTEAM','Seftigen');

The INSERT fails, because the database is open read-only.

Multiplexed Archiving to up to 5 Destinations

In Oracle8 it was possible to define one alternate archive log destination with LOG_ARCHIVE_DUPLEX_DEST. In Oracle 8i it's possible to have up to 5 destinations. You cannot use LOG_ARCHIVE_DEST_n together with LOG_ARCHIVE_DEST or LOG_ARCHIVE_DUPLEX_DEST. Enter the destinations in the INIT.ORA (best if all destinations are on different, fast disks), as sketched below.
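A sketch with three destinations; the first line is from the original setup, the additional paths and the standby service name are assumptions:

log_archive_start  = true
log_archive_dest_1 = "location=/u01/db/SOL3/arc1 mandatory reopen=300"
log_archive_dest_2 = "location=/u02/db/SOL3/arc2 optional"
log_archive_dest_3 = "service=STDBY optional"    # ship redo to a standby database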
After a restart of the instance, the archive logs will be saved to all specified destinations.

Automated Standby Database

Oracle8i contains an enhanced feature that automatically keeps a standby database synchronized with your production database. This new feature, called the Automated Standby Database (ASD), greatly reduces the amount of manual work database administrators must perform.
DB_BLOCK_CHECKING controls whether block checking is done for transaction-managed blocks. Early detection of corruption is useful and has minimal, if any, performance impact. Because the parameter is dynamic, it provides more flexibility than events 10210 and 10211, which it will ultimately replace. If DB_BLOCK_CHECKSUM is set to TRUE, a checksum is calculated and stored in the cache header of every data block when writing it to disk. Checksums are verified when a block is read, but only if this parameter is TRUE and the last write of the block stored a checksum. Every log block is also given a checksum before it is written to the current log. Warning: setting DB_BLOCK_CHECKSUM to TRUE can cause performance overhead. We normally use the following settings in the INIT.ORA file:
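The original setup shows the first line; the checksum line is an addition following the discussion above:

db_block_checking = true    # verify transaction-managed blocks (dynamic parameter)
db_block_checksum = true    # checksum on every block write, causes some overhead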
Other possibilities to check the integrity of the data are DB Verify (dbv) and ANALYZE TABLE ... VALIDATE STRUCTURE. There is also a package called DBMS_REPAIR; hopefully we never have to use it.

Using LogMiner to Analyze Online and Archived Redo Logs

LogMiner allows you to read information contained in online and archived redo logs. LogMiner is especially useful for identifying and undoing logical corruption. LogMiner processes redo log files, translating their contents into SQL statements that represent the logical operations performed on the database. The V$LOGMNR_CONTENTS view then lists the reconstructed SQL statements that represent the original operations (SQL_REDO column) and the corresponding SQL statements to undo the operations (SQL_UNDO column). Apply the SQL_UNDO statements to roll back the original changes to the database. Steps to perform a LogMiner analysis:
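A minimal sketch of the steps (file names and paths are assumptions; utl_file_dir must point to the dictionary location in the INIT.ORA):

-- 1. Create a dictionary file
EXECUTE dbms_logmnr_d.build('dict.ora','/u01/db/SOL3/logmnr');

-- 2. Register the redo logs to be analyzed
EXECUTE dbms_logmnr.add_logfile('/u01/db/SOL3/arc1/arch_120.arc', dbms_logmnr.NEW);
EXECUTE dbms_logmnr.add_logfile('/u01/db/SOL3/arc1/arch_121.arc', dbms_logmnr.ADDFILE);

-- 3. Start the analysis
EXECUTE dbms_logmnr.start_logmnr(dictfilename => '/u01/db/SOL3/logmnr/dict.ora');

-- 4. Query the results, then end the LogMiner session
SELECT sql_redo, sql_undo FROM v$logmnr_contents;
EXECUTE dbms_logmnr.end_logmnr;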
Event Handling and Event Attributes

System events, like LOGON and SHUTDOWN, provide a mechanism for tracking system changes. With Oracle, this tracking can be combined with database event notification. You can obtain certain event-specific attributes when a trigger is fired; these attributes can be used as standalone functions. The event attributes are defined in the Oracle manual.
System Events with Trigger Examples
Example: Log when the Database is started

CREATE TABLE event_table (EVENT_TEXT VARCHAR2(255));

The trigger event_startup (sketched below) logs the startup; a second trigger, log_errors, can do the same for server errors (AFTER SERVERERROR ON DATABASE).
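A minimal sketch of the startup trigger:

CREATE OR REPLACE TRIGGER event_startup
   AFTER STARTUP ON DATABASE
BEGIN
   INSERT INTO event_table
   VALUES ('Database started at ' || TO_CHAR(SYSDATE,'DD.MM.YYYY HH24:MI:SS'));
END;
/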
Client Events with Trigger Examples

Client events are the events related to user logon/logoff, DML, and DDL operations.

Example: On Logon and On Create Triggers

CREATE OR REPLACE TRIGGER Sys.On_Logon ... AFTER LOGON ON DATABASE ...
CREATE OR REPLACE TRIGGER Scott.On_Create ... AFTER CREATE ON SCHEMA ...

Important: if there is an error in a LOGON trigger, nobody (not even SYS) may connect to the database. Connecting as INTERNAL and dropping the trigger may help in this situation.

Faster Startup after Instance Crash

DB_BLOCK_MAX_DIRTY_TARGET = n specifies the number of buffers that can be dirty (modified and different from what is on disk) in the buffer cache. It indirectly specifies a rough limit on the number of blocks that must be read during crash and instance recovery.

Optimizer Statistics and Create Index

Create the cost-based optimizer statistics directly when you create an index:

create index addr_indx on customer(ADDRESS) compute statistics;
Manipulate Optimizer Statistics with DBMS_STATS

DBMS_STATS provides a mechanism for you to view and modify optimizer statistics gathered for database objects. The statistics can reside in the data dictionary or in a table created in the user's schema. Only statistics stored in the dictionary itself have an impact on the cost-based optimizer. DBMS_STATS is divided into three main sections:

Gathering Optimizer Statistics

The following procedures enable the gathering of certain classes of optimizer statistics, with possible performance improvements over the ANALYZE command: GATHER_INDEX_STATS, GATHER_TABLE_STATS and GATHER_SCHEMA_STATS.

Manipulate Optimizer Statistics

You can transfer statistics from the dictionary to a user stat table and from a user stat table to the dictionary; the statistics can be manipulated in the user stat table. Create the export table with CREATE_STAT_TABLE, retrieve the statistics for a particular table into the user stat table with EXPORT_TABLE_STATS, manipulate them with SET_TABLE_STATS, and transfer the manipulated statistics back to the dictionary with IMPORT_TABLE_STATS. A sketch of the whole round trip follows.
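A minimal sketch, assuming schema SCOTT and a user stat table called STATS:

-- Gather statistics
EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS (ownname => 'SCOTT');
EXECUTE DBMS_STATS.GATHER_TABLE_STATS  (ownname => 'SCOTT', tabname => 'EMP');

-- Create the user stat table and copy the dictionary statistics into it
EXECUTE DBMS_STATS.CREATE_STAT_TABLE  (ownname => 'SCOTT', stattab => 'STATS');
EXECUTE DBMS_STATS.EXPORT_TABLE_STATS (ownname => 'SCOTT', tabname => 'EMP', stattab => 'STATS');

-- Manipulate the statistics, then load them back into the dictionary
EXECUTE DBMS_STATS.SET_TABLE_STATS    (ownname => 'SCOTT', tabname => 'EMP', stattab => 'STATS', numrows => 100000);
EXECUTE DBMS_STATS.IMPORT_TABLE_STATS (ownname => 'SCOTT', tabname => 'EMP', stattab => 'STATS');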
Preserve Execution Plans (Outlines)

Plan stability preserves execution plans in "Stored Outlines". Stored outlines are saved in the OUTLN schema.

Create an Outline manually

The following statement creates a stored outline called SALARIES in the category SPECIAL (the SELECT statement is an example):

CREATE OR REPLACE OUTLINE salaries FOR CATEGORY special
ON SELECT ename, sal FROM emp;

Use Outlines

When this same SELECT statement is subsequently compiled, if the USE_STORED_OUTLINES parameter is set to SPECIAL, Oracle generates the same execution plan as was generated when the outline SALARIES was created.

ALTER SESSION SET USE_STORED_OUTLINES = special;

Deactivate Outlines

ALTER SESSION SET USE_STORED_OUTLINES = false;

Show stored Outlines

SELECT ol_name, creator FROM outln.ol$;

Rebuild a stored Outline

ALTER OUTLINE salaries REBUILD;

OUTLN_PKG Package

The OUTLN_PKG package contains the functional interface for subprograms associated with the management of stored outlines. A stored outline is the stored data that pertains to an execution plan for a given SQL statement. It enables the optimizer to repeatedly recreate execution plans that are equivalent to the plan originally generated along with the outline. The data stored in an outline consists, in part, of a set of hints that are used to achieve plan stability. Drop all outlines that have not been used since they were created:

execute OUTLN_PKG.DROP_UNUSED;

Enable Parallel Automatic Tuning

Parallel execution dramatically reduces response time for data-intensive operations on large databases typically associated with Decision Support Systems (DSS). You can also implement parallel execution on certain types of OLTP (Online Transaction Processing) and hybrid systems. The optimal setting of the different parameters involved in parallel execution is extremely complex. The new initSID.ora parameter PARALLEL_AUTOMATIC_TUNING = TRUE helps to simplify this complex task. When PARALLEL_AUTOMATIC_TUNING is TRUE, Oracle automatically sets other parameters, such as PARALLEL_ADAPTIVE_MULTI_USER and PARALLEL_MAX_SERVERS. For most systems, you do not need to make further adjustments to have an adequately tuned, fully automated parallel execution environment.
Caution! Be very careful when you set this parameter on OLTP and hybrid systems; it is mainly intended for DSS systems. Read chapter 26, "Tuning Parallel Execution", in the Oracle Tuning Guide!
Locally Managed Tablespaces

Typically, tablespaces are "dictionary mapped", which means that such tablespaces rely on SQL dictionary tables to track space utilization. Locally managed tablespaces, on the other hand, use bitmaps (instead of SQL dictionary tables) to track used and free space. Therefore there is less overhead for extent allocation, and coalescing of free extents is no longer necessary. Note that SMON coalesces adjacent extents in dictionary-mapped tablespaces only if PCTINCREASE > 0.

Space Management with locally managed Tablespaces

Extents can be allocated UNIFORM (each extent has a fixed size, typically 1 MByte) or AUTOALLOCATE (extents with a minimal size of 64K). For the SYSTEM tablespace, you can specify EXTENT MANAGEMENT LOCAL in the CREATE DATABASE command. If the SYSTEM tablespace is locally managed, other tablespaces in the database can still be dictionary-managed, but you must create all rollback segments in locally-managed tablespaces. Sketches for a normal (system managed) and a temporary (UNIFORM managed) tablespace follow.
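Minimal sketches (datafile paths and sizes are assumptions):

-- Normal tablespace, extent sizes chosen by the system (AUTOALLOCATE)
CREATE TABLESPACE users2
   DATAFILE '/u01/db/SOL3/users2.dbf' SIZE 50M
   EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

-- Temporary tablespace with fixed-size (UNIFORM) extents of 1 MByte
CREATE TEMPORARY TABLESPACE temp2
   TEMPFILE '/u01/db/SOL3/temp2.dbf' SIZE 100M
   EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;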
Read-only tablespaces are not new in 8.1.5, but if you try to switch a tablespace to read-only on a busy system, you no longer get an error; Oracle waits until the resource becomes free.

alter tablespace users read only;

Transportable Tablespaces

You can use transportable tablespaces to move a subset of an Oracle database and "plug" it in to another Oracle database, essentially moving tablespaces between the databases. Moving data via transportable tablespaces can be much faster than performing either an import/export or unload/load of the same data, because transporting a tablespace only requires the copying of datafiles and integrating the tablespace structural information. You can also use transportable tablespaces to move index data, thereby avoiding the index rebuilds you would have to perform when importing or loading table data. There are limitations; for example, both databases must use the same block size, character set and hardware platform.
Example

Step 1: Pick a self-contained Set of Tablespaces

You can only transport a set of tablespaces that is self-contained. In this context "self-contained" means that there are no references from inside the set of tablespaces pointing outside of the tablespaces.

execute dbms_tts.transport_set_check('USERS',TRUE);

Here, transport_set_check is a routine in the PL/SQL package DBMS_TTS:

PROCEDURE transport_set_check (
   ts_list          IN VARCHAR2,   -- list of tablespace names, comma separated
   incl_constraints IN BOOLEAN);   -- also check referential integrity constraints

After invoking this routine, you can see all violations by selecting from the TRANSPORT_SET_VIOLATIONS view. If the set of tablespaces is self-contained, this view is empty; otherwise it lists all the violations.

select * from transport_set_violations;

Step 2: Generate a Transportable Tablespace Set on the Source DB

After identifying the self-contained set of tablespaces you want to transport, generate a transportable set by making the tablespaces read-only, exporting their metadata, and copying the datafiles, as sketched below.
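A sketch of these tasks (file names and the staging path are assumptions):

SQL> alter tablespace users read only;

$ exp sys/manager transport_tablespace=y tablespaces=users \
      triggers=n constraints=n file=users.dmp

$ cp /u01/db/SOL3/users.dbf /stage/users.dbf     # copy the datafile(s)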
Step 3: Plug In the Tablespace Set

To plug in a tablespace set, copy the datafiles to the target database and import the tablespace metadata, as sketched below.
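A sketch (file names are assumptions):

$ imp sys/manager transport_tablespace=y datafiles='/stage/users.dbf' \
      tablespaces=users file=users.dmp

SQL> alter tablespace users read write;    -- if the tablespace should be writable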
Notes

When you specify TABLESPACES, the supplied tablespace names are compared to those in the export file. Import returns an error if there is any mismatch; otherwise, tablespace names are extracted from the export file. If you do not specify FROMUSER and TOUSER, all database objects (such as tables and indexes) will be created under the same user as in the source database. Those users must already exist in the target database; if not, Import will return an error indicating that some required users do not exist in the target database. You can use FROMUSER and TOUSER to change the owners of objects. For example, if you specify FROMUSER=dcranney,jfee TOUSER=smith,williams, objects in the tablespace set owned by dcranney in the source database will be owned by smith in the target database after the tablespace set is plugged in. Similarly, objects owned by jfee in the source database will be owned by williams in the target database. In this case, the target database does not have to have users dcranney and jfee, but it must have users smith and williams. Oracle8 and 8i Partitioning is discussed in depth, with examples, in a separate article.

Online Index Creation and Rebuild

Previously, when creating an index on a table, there was always a DML S-lock on that table during the index build operation, which meant you could not perform DML operations on the base table during the build. Now, with the ever-increasing size of tables and the necessity for continuous operations, you can create and rebuild indexes online, meaning you can update the base table at the same time you are building or rebuilding indexes on it. Note, though, that there are still DML SS-locks, which means you cannot perform other DDL operations during an online index build. Indexes can now be created and rebuilt online, without locking the corresponding table:

CREATE UNIQUE INDEX pk_cdr ON cdr(bkg_id) ONLINE;
Creating a Key-Compressed Index

Creating an index using key compression enables you to eliminate repeated occurrences of key column prefix values. Append COMPRESS i, where i is the number of key columns for non-unique indexes and the number of key columns minus 1 for unique indexes:

CREATE INDEX emp_ename ON emp (ename) COMPRESS 1;

Function-Based Indexes (FBI)

You can create indexes on functions and expressions that involve one or more columns in the table being indexed. A function-based index precomputes the value of the function or expression and stores it in the index. You can create a function-based index as either a B*-tree or a bitmap index.

Using FBI with a SQL Function
Note: to use a function-based index, the index must be analyzed (for example, created with COMPUTE STATISTICS), the cost-based optimizer must be used, and QUERY_REWRITE_ENABLED must be set to TRUE in the session; the index creator needs the QUERY REWRITE privilege. A sketch follows.
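A minimal sketch with the SQL function UPPER (the index name is an assumption):

CREATE INDEX up_ename_idx ON emp (UPPER(ename)) COMPUTE STATISTICS;

ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;

-- this query can now use up_ename_idx instead of a full table scan
SELECT * FROM emp WHERE UPPER(ename) = 'KING';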
Using FBI with a SQL Operation

CREATE INDEX sal_comm_idx ON emp (sal + comm);

Using FBI with a PL/SQL Function

The PL/SQL function must be created first and declared DETERMINISTIC (a sketch follows); then the index can be built:

CREATE INDEX sal_com_idx ON emp (sal_com(sal,comm)) COMPUTE STATISTICS;
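A sketch of the function referenced above; the second parameter name and the body are assumptions, the DETERMINISTIC keyword is mandatory for function-based indexes:

CREATE OR REPLACE FUNCTION sal_com (numSal IN NUMBER, numComm IN NUMBER)
   RETURN NUMBER DETERMINISTIC IS
BEGIN
   RETURN numSal + NVL(numComm,0);
END;
/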
DETERMINISTIC Functions

In some cases the optimizer can use a previously calculated value rather than executing a user-written function. This is only safe for functions that behave in a restricted manner: the function must always return the same output value for any given set of input argument values.

Index-organized Tables and Secondary Index Support

You can move your existing data into an index-organized table (IOT) and do all the operations you would perform on an ordinary table. There exists no real "table" in an IOT; all the data is packed into a B*Tree index. IOTs have NO physical rowid, therefore a secondary index on an index-organized table cannot be based on a physical rowid, which is inherently fixed. Instead, a secondary index for an index-organized table is based on what is called the logical rowid (UROWID). A logical rowid has no permanent physical address and can move across data blocks when new rows are inserted. However, if the physical location of a row changes, its logical rowid remains valid. Use an IOT when all or nearly all attributes are in the index (intersection tables).

Example

In the following example, an IOT is created of the kind often used by Web text-search engines:

CREATE TABLE DocIdx (Token VARCHAR2(30), Doc_Id NUMBER, Hits NUMBER,
   CONSTRAINT pk_DocIdx PRIMARY KEY (Token, Doc_Id)) ORGANIZATION INDEX;

Now create the secondary index (new in 8.1.5):

CREATE INDEX DocHitsId ON DocIdx(Hits);

Now use the logical ROWID (UROWID), for example in a PL/SQL block that fetches a UROWID over the secondary index and updates the row directly:

UPDATE DocIdx SET Token = 'Or81' WHERE ROWID = ...;

Dropping Columns

Dropping columns lets you free space in the database by dropping columns you no longer need, or by marking them to be dropped at a future time when the demand on system resources is less.
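Both variants as a sketch (using EMP's COMM column as an example; run only one of them):

-- Immediate drop
ALTER TABLE emp DROP COLUMN comm;

-- Or mark the column now and reclaim the space later, when the load is low
ALTER TABLE emp SET UNUSED COLUMN comm;
ALTER TABLE emp DROP UNUSED COLUMNS;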
Moving Tables to other Tablespaces

The MOVE TABLE clause lets you relocate the data of a nonpartitioned table into a new segment, optionally in a different tablespace, and optionally modify any of its storage attributes:

ALTER TABLE emp MOVE TABLESPACE users2;

For an index-organized table, MOVE rebuilds the index-organized table's primary key index B*-tree. Specify ONLINE so that DML operations on the index-organized table are allowed during the rebuild:

ALTER TABLE iot_tab MOVE ONLINE;

Skip locked Rows with SKIP LOCKED

Usually Oracle locks the rows for other sessions in a SELECT ... FOR UPDATE statement. This behaviour may or may not be desirable. Another approach is offered by the SKIP LOCKED clause: only unlocked rows are returned to other sessions, locked rows are simply skipped.
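A minimal sketch with two sessions:

-- Session 1 locks the rows of department 10
SELECT * FROM emp WHERE deptno = 10 FOR UPDATE;

-- Session 2 gets only the unlocked rows instead of waiting
SELECT * FROM emp FOR UPDATE SKIP LOCKED;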
Database Resource Manager

The Database Resource Manager allows the database administrator to have more control over resource management than would normally be possible through operating system resource management alone. Using this facility, the database administrator can, for example, define resource plans and consumer groups and guarantee certain groups of users a minimum amount of processing resources.
Please consult the manual "Oracle8i Concepts" for more information; this topic is too specific to present here.
Automatic Instance Registration

Database instances register themselves with the listener when started. Prior to this release, information about the instance had to be manually configured in the LISTENER.ORA file. Database instance registration is comprised of two elements: service names and instance names.
When an instance is started, initialization parameters are read from the INITSID.ORA file. One of these initialization parameters is the service name. By default, an instance background process registers instance information with a listener on the local machine. If a listener is started after the instance, there may be a delay before the instance and dispatchers are registered; the instance attempts to connect to the listener periodically. Similarly, if a listener gets an incoming request before an instance is registered, the listener may reject the request. The setup in initSID.ora and TNSNAMES.ORA is sketched below.

Load Balancing and Client Failover

Instance registration enables connection load balancing. Connection load balancing balances the number of active connections among the various instances and dispatchers for the same service. This enables listeners to make their routing decisions based on how many connections each dispatcher has and on how loaded the nodes are that the instances run on.

Connection Load Balancing for Parallel Server

Connection load balancing evenly distributes the number of active connections among the various instances and dispatchers for the same service. The load of an instance and dispatcher is determined by the number of connections.
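A minimal sketch of the setup (the host names are assumptions, the domain is taken from the examples below):

### Automatic Instance Registration in initSID.ora
service_names = SOL3.AKADIA.COM
instance_name = SOL3

### NetService Descriptor in TNSNAMES.ORA with connection load balancing
### and client failover over two listener addresses
SOL3.WORLD =
  (DESCRIPTION =
    (LOAD_BALANCE = on)
    (FAILOVER = on)
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = SOL3.AKADIA.COM)))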
Note that connection load balancing is only enabled for an MTS environment.

Service Names instead of SID

Up to Oracle 8, the client was configured with the Oracle System Identifier (SID) of a database instance. This SID was then passed to the listener. The listener would then verify this information and permit or deny a connection. The SID was also used internally by the database as a pointer to the System Global Area (SGA). While a SID identified a database instance, it did not identify a database. This limitation caused a database to have no more than one service associated with it. In Oracle 8.1, multiple services are supported, using the following new parameters in connect descriptors:

SERVICE_NAME is typically the global database name, a name comprised of the database name and domain name, entered during installation or database creation.

INSTANCE_NAME is typically the SID entered during installation or database creation. INSTANCE_NAME is optional; it represents the name of the instance and is used to uniquely identify a specific instance when multiple instances (Parallel Server) share common service names. INSTANCE_NAME should not be confused with the SID, which actually uniquely identifies the instance's shared memory on a host. The difference between the Oracle 8.0 and the Oracle 8.1 naming concept in TNSNAMES.ORA is sketched below.
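The two naming concepts side by side (the host name is an assumption):

# Oracle 8.0 Naming Concept (TNSNAMES.ORA): the client passes the SID
SOL3.world =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rabbit)(PORT = 1521))
    (CONNECT_DATA = (SID = SOL3)))

# Oracle 8.1 Naming Concept (TNSNAMES.ORA): the client passes SERVICE_NAME
# and optionally INSTANCE_NAME
SOL3.WORLD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rabbit)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = SOL3.AKADIA.COM)
      (INSTANCE_NAME = SOL3)))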
DB_DOMAIN in the initialization file (INITSID.ORA) no longer has a default setting of .WORLD; the new default is NULL. Therefore, service names do not need to include the domain if the default setting is used. In prior releases of Oracle, the default setting was .WORLD. Setup in initSID.ora:

db_domain = AKADIA.COM

The TNSNAMES.ORA entries then follow the pattern shown above.

Java / IIOP Clients

Prior to this release, Net8 only supported connections that used the Two-Task Common (TTC) presentation layer and the Transparent Network Substrate (TNS) Network Session (NS) layer to establish client connections. The Java option allows customers to program the database server using traditional database stored procedures, Enterprise JavaBeans and CORBA servers. To support clients accessing Enterprise JavaBeans and CORBA servers in the database, the presentation layer and session layer support have been expanded.

IIOP Clients

Clients access EJBs and CORBA servers in the database via the Internet Inter-ORB Protocol (IIOP). To support IIOP, the database must be configured in MTS mode with the General Inter-ORB Protocol (GIOP) presentation protocol (IIOP is an implementation of GIOP over TCP/IP). Oracle8i provides a GIOP service implementation. The Oracle8i Java VM is a session-oriented Java VM; this means that each session in the database effectively gets its own VM as a private server. More information can be found in the original Oracle documentation: Oracle8i Enterprise JavaBeans and CORBA Developer's Guide, Oracle8i Java Stored Procedures Developer's Guide and the Net8 documentation.

Net8 Configuration Assistant

The Net8 Configuration Assistant is a post-installation tool that performs basic configuration. After installation, it automatically configures default configuration files. In addition, the Net8 Configuration Assistant may be run in stand-alone mode to configure various elements, such as the listener and the naming methods. Start the Net8 Configuration Assistant under X-Windows:

$ DISPLAY=<X-Window-Host>:0.0; export DISPLAY
SQL*Plus replaces Server Manager

Oracle has announced that SVRMGRL (Server Manager) will be replaced by SQL*Plus. In Oracle 8.1 you can now use all DBA commands in SQL*Plus, just as in SVRMGRL:

sqlplus /nolog
SQL> connect sys as sysdba;

Export / Import to / from several Files

Export supports writing to multiple export files, and Import can read from multiple export files; you can specify multiple filenames to be used. When Export reaches the value you have specified for the maximum FILESIZE, Export stops writing to the current file, opens another export file with the next name specified by the FILE parameter, and continues until the job is complete or the maximum value of FILESIZE is again reached. If you do not specify sufficient export filenames to complete the export, Export will prompt you to provide additional filenames. If you specify a value (byte limit) for the FILESIZE parameter, Export will write only the number of bytes you specify to each dump file. Specify the file size in (B)ytes, (K)Bytes, (M)Bytes or (G)Bytes. Of course, IMP is able to import these files again.

exp system/manager full=y file=full1.dmp,full2.dmp filesize=500K

Export with the QUERY Parameter

The new QUERY parameter allows you to select a subset of rows from a set of tables when doing a table mode export. The value of the QUERY parameter is a string that contains a WHERE clause for a SQL SELECT statement, which will be applied to all tables (or table partitions) listed in the TABLES parameter. For example, if user SCOTT wants to export only those employees whose job title is SALESMAN and whose salary is less than 1600, he could do the following (note that this example is Unix-based):

exp scott/tiger tables=emp query=\"where job=\'SALESMAN\' and sal\<1600\"

Note: since the value of the QUERY parameter contains blanks, most operating systems require that the entire string be escaped or enclosed in quotation marks, as shown above.

Export / Import with Optimizer Statistics

Exporting Optimizer Statistics

In some cases, Export will place the precomputed statistics in the export file, as well as the ANALYZE commands to regenerate the statistics. However, the precomputed optimizer statistics will NOT be used at export time in some cases, for example when a QUERY clause was specified or when row errors occurred while exporting.
However, specifying ROWS=N does not preclude saving the precomputed statistics in the export file. This allows you to tune plan generation for queries in a non-production database using statistics from a production database.

exp scott/tiger file=emp.dmp tables=\(emp\) statistics=compute

Importing with Optimizer Statistics

If statistics are requested at export time and analyzer statistics are available for a table, Export will place the ANALYZE command to recalculate the statistics for the table into the dump file. In certain circumstances, Export will also write the precalculated optimizer statistics for tables, indexes, and columns to the dump file.

Use the precalculated optimizer statistics in the dump file:
imp scott/tiger analyze=y recalculate_statistics=n file=emp.dmp

Recalculate the optimizer statistics with ANALYZE when importing:
imp scott/tiger analyze=y recalculate_statistics=y file=emp.dmp

Do not use any optimizer statistics:
imp scott/tiger analyze=n file=emp.dmp

Materialized Views for Data Warehouses

Overview

Materialized views are used in warehouses to increase the speed of queries on very large databases. Queries to large databases often involve joins between tables or aggregations such as SUM, or both. These operations are very expensive in terms of time and processing power.
Whether a query can use an MVIEW depends on its shape: queries that join or aggregate the detail tables in the same way as the materialized view's defining query can be rewritten to use the MVIEW; queries that do not match the definition cannot.
Materialized views can also be used to replicate data, which was formerly achieved by using the CREATE SNAPSHOT command. Now CREATE MATERIALIZED VIEW is a synonym for CREATE SNAPSHOT.

Query Rewrite

Materialized views improve query performance by precalculating expensive join and aggregation operations on the database prior to execution time and storing these results in the database. The query optimizer can make use of materialized views by automatically recognizing when an existing materialized view can and should be used to satisfy a request. It then transparently rewrites the request to use the materialized view (query rewrite). Queries are then directed to the materialized view and not to the underlying detail tables or views. Rewriting queries to use materialized views rather than the detail relations results in a significant performance gain.

Prerequisites for MVIEWS

Privileges: SQL> grant query rewrite to scott;
Set in initSID.ora: optimizer_mode = choose

Refreshing Materialized Views (MVIEWS)

When creating a materialized view, you have the option of specifying whether the refresh occurs manually (ON DEMAND) or automatically (ON COMMIT, DBMS_JOB). To use the fast warehouse refresh facility, the ON DEMAND mode must be specified; then the materialized view can be refreshed by calling one of the procedures in DBMS_MVIEW. A materialized view with join and aggregation, used by the refresh examples below, is sketched first.
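A minimal sketch (BUILD IMMEDIATE and the column selection are assumptions):

CREATE MATERIALIZED VIEW emp_dept_sum
   BUILD IMMEDIATE
   REFRESH ON DEMAND
   ENABLE QUERY REWRITE
AS SELECT d.dname, SUM(e.sal) AS sum_sal
     FROM emp e, dept d
    WHERE e.deptno = d.deptno
    GROUP BY d.dname;

With query rewrite enabled, a matching query against EMP and DEPT is transparently redirected to emp_dept_sum, which the execution plan then shows.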
MVIEW manual Refresh with DBMS_MVIEW

The DBMS_MVIEW package provides three different types of refresh operations.

Manual Complete Refresh

A complete refresh occurs when the materialized view is initially defined (unless it references a prebuilt table), and a complete refresh may be requested at any time during the life of the materialized view. Since the refresh involves reading the detail tables to recompute the results for the materialized view, this can be a very time-consuming process, especially if huge amounts of data have to be read and processed.
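For example, a manual complete refresh of the sketched MVIEW ('C' stands for Complete):

EXECUTE DBMS_MVIEW.REFRESH('emp_dept_sum','C');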
Manual Fast (incremental) Refresh

If you specify REFRESH FAST (only the deltas caused by UPDATE, INSERT and DELETE on the base tables are applied), Oracle performs further verification of the query definition to ensure that fast refresh can always be performed if any of the detail tables change. These additional checks include, among other things, the presence of materialized view logs on the detail tables.
Create the MVIEW Log(s)
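A minimal sketch, assuming the MVIEW passes the fast-refresh checks ('F' stands for Fast):

CREATE MATERIALIZED VIEW LOG ON emp  WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON dept WITH ROWID;

EXECUTE DBMS_MVIEW.REFRESH('emp_dept_sum','F');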
MVIEW automatic Refresh with DBMS_JOB

Instead of using DBMS_MVIEW you can automatically refresh the MVIEW (snapshot) using Oracle's DBMS_JOB management. Note that CREATE SNAPSHOT is now a synonym for CREATE MATERIALIZED VIEW. Create the materialized view emp_dept_sum with a REFRESH ... NEXT clause; DBMS_JOB then runs the refresh at the specified interval. Check the generated job:

SELECT SUBSTR(job,1,4) "Job", SUBSTR(log_user,1,5) "User", SUBSTR(schema_user,1,5) "Schem",
       last_date, next_date, broken, failures, what FROM user_jobs;

Automatic Fast Refresh of Materialized Views

This is completely new in Oracle8i. So far it was possible to refresh a snapshot with DBMS_JOB at a short interval according to the snapshot log. With Oracle 8i, it's possible to refresh automatically on the next COMMIT performed at the master table.

Example

1. Create ROWID materialized view logs:

DROP MATERIALIZED VIEW LOG on emp;
DROP MATERIALIZED VIEW LOG on dept;

2. Prebuild the table for the materialized view (CREATE TABLE empdep AS ...).

3. Create indexes on the prebuilt table for performance reasons (emprowid, deprowid, empdep).

4. Create the REFRESH FAST ON COMMIT materialized view empdep.

These four steps are sketched below.
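A minimal sketch of the four steps (the column selection and index columns are assumptions; the rowids of both master tables must be selected for fast refresh):

-- 1. ROWID materialized view logs
CREATE MATERIALIZED VIEW LOG ON emp  WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON dept WITH ROWID;

-- 2. Prebuild the table
CREATE TABLE empdep AS
   SELECT e.rowid erowid, d.rowid drowid, e.empno, e.ename, d.dname
     FROM emp e, dept d
    WHERE e.deptno = d.deptno;

-- 3. Indexes on the prebuilt table
CREATE UNIQUE INDEX emprowid ON empdep(erowid);
CREATE INDEX deprowid ON empdep(drowid);

-- 4. The REFRESH FAST ON COMMIT materialized view on the prebuilt table
CREATE MATERIALIZED VIEW empdep
   ON PREBUILT TABLE
   REFRESH FAST ON COMMIT
AS SELECT e.rowid erowid, d.rowid drowid, e.empno, e.ename, d.dname
     FROM emp e, dept d
    WHERE e.deptno = d.deptno;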
Calculate multiple Levels of Subtotals with ROLLUP

ROLLUP enables a SELECT statement to calculate multiple levels of subtotals across a specified group of dimensions, as well as a grand total. ROLLUP is a simple extension of the GROUP BY clause, so its syntax is extremely easy to use.

Example

SELECT deptno, job, COUNT(*), SUM(sal)
  FROM emp GROUP BY ROLLUP (deptno, job);

Create cross-tabular Reports with CUBE

CUBE enables a SELECT statement to calculate subtotals for all possible combinations of a group of dimensions, as well as a grand total. This is the set of information typically needed for all cross-tabular reports, so CUBE can calculate a cross-tabular report with a single SELECT statement.

SELECT deptno, job, COUNT(*), SUM(sal)
  FROM emp GROUP BY CUBE (deptno, job);
GROUPING Function with ROLLUP and CUBE

Two challenges arise with the use of ROLLUP and CUBE. First, how can we programmatically determine which result set rows are subtotals, and how do we find the exact level of aggregation of a given subtotal? We often need to use subtotals in calculations such as percent-of-totals, so we need an easy way to determine which rows are the subtotals we seek. Second, what happens if query results contain both stored NULL values and "NULL" values created by a ROLLUP or CUBE? How does an application or developer differentiate between the two?

Grouping Function

To handle these issues, Oracle 8i introduces a new function called GROUPING. Using a single column as its argument, GROUPING returns 1 when it encounters a NULL value created by a ROLLUP or CUBE operation; that is, if the NULL indicates that the row is a subtotal, GROUPING returns 1. Any other type of value, including a stored NULL, returns 0.
Example: Identify Aggregates

SELECT deptno, job, COUNT(*), SUM(sal), GROUPING(job)
  FROM emp GROUP BY ROLLUP (deptno, job);

Example: Only show Subtotals and the Total Summary

SELECT deptno, COUNT(*), SUM(sal) FROM emp
 GROUP BY ROLLUP (deptno, job) HAVING GROUPING(job) = 1;

Example: Using GROUPING in DECODE for pretty Reports

SELECT DECODE(GROUPING(dname), 1, 'All Departements', dname) AS departement,
       DECODE(GROUPING(job), 1, 'All Jobs', job) AS job, COUNT(*), SUM(sal)
  FROM emp e, dept d WHERE e.deptno = d.deptno GROUP BY CUBE (dname, job);
Top-N Queries

Top-N queries ask for the n largest or smallest values of a column. Never use ROWNUM and ORDER BY together in the same query block, because Oracle first fetches the rows according to ROWNUM and only then sorts this result; this is of course not what we really want! Oracle8i now offers ORDER BY in views and inline views, and with this it's easy to implement a Top-N query (the examples show the five highest salaries).

Never use this construct!

SELECT * FROM emp WHERE ROWNUM < 6 ORDER BY sal DESC;

Example with an ORDER BY View

CREATE OR REPLACE VIEW emporder AS
   SELECT * FROM emp ORDER BY sal DESC;

SELECT * FROM emporder WHERE ROWNUM < 6;

Example with an Inline View

SELECT * FROM (SELECT * FROM emp ORDER BY sal DESC)
 WHERE ROWNUM < 6;