Wednesday, September 30, 2009

How to: ORA-00059: maximum number of DB_FILES exceeded

Symptom:


create tablespace data_tbs38 datafile '+DATA' size 100M;
*
ERROR at line 1:
ORA-00059: maximum number of DB_FILES exceeded


Solution:

Increase the db_files initialization parameter and bounce the database. Note that increasing this parameter will increase the PGA consumption of every server process.


SQL> alter system set db_files=500 scope=spfile sid='*';

System altered.
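
Before bouncing, a quick sanity check of how close the database is to the limit (a minimal sketch using standard views):


SQL> show parameter db_files
SQL> select count(*) from v$datafile;
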


How about the MAXDATAFILES value specified when creating the database?

From Oracle8 onwards, the controlfile will auto-expand if MAXDATAFILES is too small. Similar information can be found in the alert log:


SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.0.1 - Beta
PL/SQL Release 11.2.0.0.1 - Beta
CORE 11.2.0.0.1 Beta
TNS for 32-bit Windows: Version 11.2.0.0.1 - Beta
NLSRTL Version 11.2.0.0.1 - Beta



SQL> select type,record_size rs, records_total rt, records_used ru, first_index fi,
2 last_index li, last_recid lr from v$controlfile_record_section;

TYPE RS RT RU FI LI LR
---------------------------- ------ -------- --- --- --- ---
DATABASE 316 1 1 0 0 0
CKPT PROGRESS 8180 11 0 0 0 0
REDO THREAD 256 8 1 0 0 0
REDO LOG 72 16 3 0 0 0
DATAFILE 520 52 23 0 0 17
FILENAME 524 2138 27 0 0 0
TABLESPACE 68 40 24 0 0 18
TEMPORARY FILENAME 56 20 1 0 0 1
RMAN CONFIGURATION 1108 50 2 0 0 2
LOG HISTORY 56 292 1 1 1 1
OFFLINE RANGE 200 81 0 0 0 0

TYPE RS RT RU FI LI LR
---------------------------- ------ -------- --- --- --- ---
ARCHIVED LOG 584 308 3 1 3 3
BACKUP SET 40 409 0 0 0 0
BACKUP PIECE 736 44 0 0 0 0
BACKUP DATAFILE 200 81 0 0 0 0
BACKUP REDOLOG 76 215 0 0 0 0
DATAFILE COPY 736 44 0 0 0 0
BACKUP CORRUPTION 44 371 0 0 0 0
COPY CORRUPTION 40 409 0 0 0 0
DELETED OBJECT 20 818 0 0 0 0
PROXY COPY 928 88 0 0 0 0
BACKUP SPFILE 124 131 0 0 0 0

TYPE RS RT RU FI LI LR
---------------------------- ------ -------- --- --- --- ---
DATABASE INCARNATION 56 292 1 1 1 1
FLASHBACK LOG 84 2048 0 0 0 0
RECOVERY DESTINATION 180 1 1 0 0 0
INSTANCE SPACE RESERVATION 28 1055 1 0 0 0
REMOVABLE RECOVERY FILES 32 1000 0 0 0 0
RMAN STATUS 116 141 0 0 0 0
THREAD INSTANCE NAME MAPPING 80 8 8 0 0 0
MTTR 100 8 1 0 0 0
DATAFILE HISTORY 568 57 0 0 0 0
STANDBY DATABASE MATRIX 400 31 31 0 0 0
GUARANTEED RESTORE POINT 212 2048 0 0 0 0

TYPE RS RT RU FI LI LR
---------------------------- ------ -------- --- --- --- ---
RESTORE POINT 212 2083 0 0 0 0
DATABASE BLOCK CORRUPTION 80 2045 0 0 0 0
ACM OPERATION 104 64 5 0 0 0
FOREIGN ARCHIVED LOG 604 1002 0 0 0 0

37 rows selected.


Expanded controlfile section 6 from 20 to 40 records
The number of logical blocks in section 6 remains the same
Completed: create tablespace test12 datafile size 10m
create tablespace test13 datafile size 10m
Expanded controlfile section 4 from 20 to 52 records
Requested to grow by 32 records; added 1 blocks of records

How to enable a failed PowerPath path in a faster way

Unplugging one of the fibre cables to simulate an FC HBA failure produces the expected result:



[root@VMXDB01 ~]# powermt display
Symmetrix logical device count=39
CLARiiON logical device count=0
Hitachi logical device count=0
Invista logical device count=0
HP xp logical device count=0
Ess logical device count=0
HP HSx logical device count=0
==============================================================================
----- Host Bus Adapters --------- ------ I/O Paths ----- ------ Stats ------
### HW Path Summary Total Dead IO/Sec Q-IOs Errors
==============================================================================
3 qla2xxx optimal 39 0 - 0 0
4 qla2xxx failed 39 39 - 0 39


[root@VMXDB01~]# powermt display dev=emcpowera
Pseudo name=emcpowera
Symmetrix ID=0009899012321
Logical device ID=0141
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3 qla2xxx sdbe FA 7aA active alive 0 0
4 qla2xxx sdce FA 9aA active dead 0 1


After the fibre cable is plugged back in, it takes about 5 minutes for PowerPath to detect the recovery and change the state back to "alive".

How to make it faster? The answer is to use "powermt enable hba=#hba" to bring the HBA (path) back manually. Here #hba refers to the first column of the output above (3 or 4).


[root@VMXDB01 ~]# powermt enable hba=4
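
Alternatively, "powermt restore" re-tests dead paths and brings them back without naming a specific HBA (a hedged sketch; exact options may vary by PowerPath release):

[root@VMXDB01 ~]# powermt restore dev=all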

Monday, September 28, 2009

Using datapump to migrate data between databases

Objective:

Refresh data from VMXDB01 to VMXDB02

From Source VMXDB01

Create directory object


SQL> create directory logical_export as '/u01/logical_export';


Export data:


[oracle@VMXDB01 ~]$ expdp \'/ as sysdba\' directory=logical_export dumpfile=code.dmp logfile=code.log tables=donghua.tbl_code

Export: Release 10.2.0.4.0 - 64bit Production on Sunday, 27 September, 2009 21:12:29

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TABLE_01": '/******** AS SYSDBA' directory=logical_export dumpfile=code.dmp logfile=code.log tables=donghua.tbl_code
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 5 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "DONGHUA"."TBL_CODE" 3.553 MB 43960 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
/u01/logical_export/code.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at 21:12:40


To Destination VMXDB02

Create directory object


SQL> create directory logical_export as '/u01/logical_export';


Import data:


[oracle@VMXDB02 ~]$ impdp \'/ as sysdba\' directory=logical_export dumpfile=code.dmp logfile=code.log full=y content=data_only table_exists_action=truncate

Import: Release 10.2.0.4.0 - 64bit Production on Sunday, 27 September, 2009 21:16:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_FULL_01": '/******** AS SYSDBA' directory=logical_export dumpfile=code.dmp logfile=code.log full=y content=data_only table_exists_action=truncate
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "DONGHUA"."TBL_CODE" 3.553 MB 43960 rows
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at 21:17:04


References:

TABLE_EXISTS_ACTION
Default: SKIP (Note that if CONTENT=DATA_ONLY is specified, the default is APPEND, not SKIP.)

Purpose

Tells Import what to do if the table it is trying to create already exists.

Syntax and Description

TABLE_EXISTS_ACTION={SKIP | APPEND | TRUNCATE | REPLACE}

The possible values have the following effects:

•SKIP leaves the table as is and moves on to the next object. This is not a valid option if the CONTENT parameter is set to DATA_ONLY.

•APPEND loads rows from the source and leaves existing rows unchanged.

•TRUNCATE deletes existing rows and then loads rows from the source.

•REPLACE drops the existing table and then creates and loads it from the source. This is not a valid option if the CONTENT parameter is set to DATA_ONLY.

The following considerations apply when you are using these options:

•When you use TRUNCATE or REPLACE, make sure that rows in the affected tables are not targets of any referential constraints.

•When you use SKIP, APPEND, or TRUNCATE, existing table-dependent objects in the source, such as indexes, grants, triggers, and constraints, are ignored. For REPLACE, the dependent objects are dropped and re-created from the source, if they were not explicitly or implicitly excluded (using EXCLUDE) and they exist in the source dump file or system.

•When you use APPEND or TRUNCATE, checks are made to ensure that rows from the source are compatible with the existing table prior to performing any action.

The existing table is loaded using the external tables access method if the existing tables have active constraints and triggers. However, be aware that if any row violates an active constraint, the load fails and no data is loaded.

If you have data that must be loaded, but may cause constraint violations, consider disabling the constraints, loading the data, and then deleting the problem rows before reenabling the constraints.

•When you use APPEND, the data is always loaded into new space; existing space, even if available, is not reused. For this reason, you may wish to compress your data after the load.

Restrictions

•TRUNCATE cannot be used on clustered tables or over network links.
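
As a side note on the constraint-violation scenario described above, a minimal sketch of the disable/load/re-enable approach (the constraint name here is hypothetical):


SQL> alter table donghua.tbl_code disable constraint tbl_code_fk1;
-- run impdp with table_exists_action=truncate or append, clean up any problem rows, then:
SQL> alter table donghua.tbl_code enable constraint tbl_code_fk1;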

Sunday, September 27, 2009

Symmetrix partition alignment (Windows and Linux)

On Linux/Windows, the first 63 blocks are reserved for the MBR (master boot record), so the first data partition starts at an offset of 63*512 bytes = 31.5KB.

However, Symmetrix arrays define tracks to be 64KB on DMX-3 and later (32KB on DMX-2 and earlier). This misalignment carries over to the Symmetrix cache (imagine one 64KB track on the host mapping to two tracks in cache and on storage), which causes performance degradation.

The steps below show how to manually perform the alignment using fdisk, moving the start of the partition to block 128 (128 blocks x 512 bytes/block = 64KB):


[root@VMXDB01 ~]# fdisk /dev/emcpowera
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n <-- New partition
Command action
e extended
p primary partition (1-4)
p <-- Primary partition
Partition number (1-4): 1 <-- 1st partition
First cylinder (1-1011, default 1): <-- leave default to use 1st cylinder
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1011, default 1011): <-- Default
Using default value 1011

Command (m for help): x <-- enter eXpert mode

Expert command (m for help): p <-- print partition table

Disk /dev/emcpowera: 67 heads, 62 sectors, 1011 cylinders

Nr AF Hd Sec Cyl Hd Sec Cyl Start Size ID
1 00 1 1 0 66 62 1010 62 4199632 83
2 00 0 0 0 0 0 0 0 0 00
3 00 0 0 0 0 0 0 0 0 00
4 00 0 0 0 0 0 0 0 0 00

Expert command (m for help): b <-- move to beginning of the partition
Partition number (1-4): 1 <-- choose 1st partition
New beginning of data (62-4199693, default 62): 128 <-- Enter starting block

Expert command (m for help): p <-- print partition table

Disk /dev/emcpowera: 67 heads, 62 sectors, 1011 cylinders

Nr AF Hd Sec Cyl Hd Sec Cyl Start Size ID
1 00 1 1 0 66 62 1010 128 4199566 83
Partition 1 has different physical/logical beginnings (non-Linux?):
phys=(0, 1, 1) logical=(0, 2, 5)
2 00 0 0 0 0 0 0 0 0 00
3 00 0 0 0 0 0 0 0 0 00
4 00 0 0 0 0 0 0 0 0 00

Expert command (m for help): w <-- save partition table
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@VMXDB01 ~]# fdisk -l /dev/emcpowera

Disk /dev/emcpowera: 2150 MB, 2150891520 bytes
67 heads, 62 sectors/track, 1011 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes

Device Boot Start End Blocks Id System
/dev/emcpowera1 1 1011 2099783 83 Linux
Partition 1 has different physical/logical beginnings (non-Linux?):
phys=(0, 1, 1) logical=(0, 2, 5)

Saturday, September 26, 2009

Exploring EMC Storage: Introducing Symmetrix Software

EMC Solutions Enabler Kit

Contains all the base management software that provides a host with SYMAPI-shared libraries and the basic Symmetrix command line interface (SYMCLI)
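
As a quick check that SYMCLI is installed and can see the attached arrays, something like the following can be run from the host (a minimal sketch):


[root@VMXDB01 ~]# symcfg list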

What is SRDF?

The Symmetrix Remote Data Facility (SRDF) is a business continuity solution that replicates data between Symmetrix storage arrays.

SRDF currently supports the following modes of operation:
  1. Synchronous mode (SRDF/S)
  2. Asynchronous mode (SRDF/A)
  3. Adaptive copy mode (transferring data regardless of order/consistency)

What is EMC TimeFinder?

TimeFinder replicates data within a Symmetrix storage array.

  • TimeFinder/Mirror - General monitor and control operations for Business Continuance Volumes (BCVs)
  • TimeFinder/CG - Consistency groups
  • TimeFinder/Clone - Clone copy
  • TimeFinder/Snap - Snap copy

What is PowerPath?

PowerPath is host-based software that works with networked storage systems to intelligently manage I/O paths. PowerPath manages multiple paths to a storage array. Supporting multiple paths enables recovery from path failure because PowerPath automatically detects path failures and redirects I/O to other available paths.

A key feature of path management in PowerPath is dynamic, multipath load balancing.

What is EMC Replication Manager?

Replication Manager has a generic storage technology interface that allows it to connect and invoke replication methodologies available on:

  • EMC Symmetrix arrays
  • EMC CLARiiON arrays
  • HP StorageWorks arrays

Exploring EMC Storage: Introducing Clariion Storage

The CLARiiON array was originally designed for AViiON computers at Data General Corporation, which EMC acquired in 1999. Occupying the entry-level and mid-range of EMC's SAN disk array product palette, it is complemented by the high-end Symmetrix.

CLARiiON AX-Series



  • AX4: entry-level disk array, supporting up to 60 SAS/SATA disks through expansion packs, with a total capacity of 60TB


CLARiiON CX-Series

  • CX-300/CX-300i/CX-500/CX-500i/CX-700: End-of-Life products, each with a unique data-in-place upgrade to the next level. (The "i" in the product name refers to "iSCSI".)

  • CX3 Model 10/CX3 Model 20/CX3 Model 40/CX3 Model 80: each with a unique data-in-place upgrade to the next level.



  • CX4 Model 120/CX4 Model 240/CX4 Model 480/CX4 Model 960: each with a unique data-in-place upgrade to the next level. The number in the product name (120/240/480/960) represents the maximum number of disk drives, and hence the maximum capacity.





Reference:

http://en.wikipedia.org/wiki/Clariion

http://www.dg.com/collateral/hardware/comparison/emc-clariion.htm

Exploring EMC Storage: Introducing Symmetrix Storage

Symmetrix storage is designed for the high end, in contrast to CLARiiON, which targets the entry-level to mid-size market.

EMC Symmetrix products before DMX-2 have been omitted.

DMX-2 configurations: (Based on Direct Matrix Architecture)

  • Symmetrix DMX-800EL: 8-60 drives, 4-64 GB cache memory, 8 x 2 GB FC front ends, modular rack mount
  • Symmetrix DMX-800: 60-120 drives, 4-64 GB cache memory, 16 x 2 GB FC front ends, modular rack mount
  • Symmetrix DMX-1000: up to 144 drives, 64-128 GB cache memory, 48 x 2 GB FC front ends, single cabinet
  • Symmetrix DMX-2000: up to 288 drives, 128-256 GB cache memory, 64 x 2 GB FC front ends, two cabinets
  • Symmetrix DMX-3000: up to 576 drives, 128-256 GB cache memory, 64 x 2 GB FC front ends, three cabinets

DMX-3 configurations:

  • Symmetrix DMX-3 950: non-expandable version of Symmetrix, up to 360 drives, up to 128 GB cache memory, 8 front-end ports, one or two cabinets
  • Symmetrix DMX-3: up to 2400 drives, up to 512 GB cache memory, 64 x 2 GB FC front ends (even 4 Gb FC's are available at this time), two to nine cabinets

DMX-4 configurations:

  • Symmetrix DMX-4: up to 1920 drives (or up to 2400 drives for 500 GB drive only configurations), up to 512 GB cache memory, 64 x 4 GB FC front ends, two to nine cabinets

V-Max configurations: (Based on Virtual Matrix Architecture)

  • Symmetrix V-Max SE: an entry-point, single-engine storage system
  • Symmetrix V-Max: a high-end storage array that scales from a single-engine configuration with a dedicated system cabinet and a single storage bay to a larger eight-engine configuration with up to 10 storage bays capable of holding 2,400 physical disk drives.

Reference:

http://en.wikipedia.org/wiki/EMC_Symmetrix

EMC Symmetrix DMX Architecture Product Description Guide

Oracle Databases on EMC Symmetrix Storage Systems

Thursday, September 24, 2009

How to: Linux Ethernet Bonding

How to create:


[root@VMXDB01 u01]# cat /etc/modprobe.conf
alias eth0 bnx2
alias eth1 bnx2
alias eth2 bnx2
alias eth3 bnx2
alias eth4 bnx2
alias eth5 bnx2
alias eth6 bnx2
alias eth7 bnx2
alias scsi_hostadapter megaraid_sas
alias scsi_hostadapter1 ata_piix
alias scsi_hostadapter2 qla2xxx
alias bond0 bonding
alias bond1 bonding
options bonding miimon=100 mode=1
alias net-pf-10 off
alias ipv6 off
install usb-storage :



[root@VMXDB01 u01]# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
USERCTL=no
ONBOOT=yes
NETMASK=255.255.255.224
BROADCAST=192.168.0.159
GATEWAY=192.168.0.129
IPADDR=192.168.0.144
BOOTPROTO=none
TYPE=Ethernet
IPV6INIT=no
PEERDNS=yes



[oracle@VMXDB01 ~]$ cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
TYPE=Ethernet
IPV6INIT=no
PEERDNS=yes



[oracle@VMXDB01 ~]$ cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
TYPE=Ethernet
IPV6INIT=no
PEERDNS=yes



How to monitor:


[root@VMXDB01 u01]# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:10:18:4F:BA:E0
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:2877200 errors:0 dropped:0 overruns:0 frame:0
TX packets:560853 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:4091658143 (3.8 GiB) TX bytes:62462481 (59.5 MiB)
Interrupt:169 Memory:ce000000-ce012100



[root@VMXDB01 u01]# ifconfig eth1
eth1 Link encap:Ethernet HWaddr 00:10:18:4F:BA:E0
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:1043 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:66822 (65.2 KiB) TX bytes:0 (0.0 b)
Interrupt:218 Memory:da000000-da012100



[root@VMXDB01 u01]# ifconfig bond0
bond0 Link encap:Ethernet HWaddr 00:10:18:4F:BA:E0
inet addr:192.168.0.144 Bcast:192.168.0.159 Mask:255.255.255.224
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:2878296 errors:0 dropped:0 overruns:0 frame:0
TX packets:560895 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:4091729879 (3.8 GiB) TX bytes:62468081 (59.5 MiB)



[root@VMXDB01 u01]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:10:18:4f:ba:e0

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:10:18:4f:b9:94


Interesting information from dmesg and /var/log/messages

eth0: Broadcom NetXtreme II BCM5709 1000Base-T (C0) PCI Express found at mem ce000000, IRQ 169, node addr 0010184fbae0
... ...
eth1: Broadcom NetXtreme II BCM5709 1000Base-T (C0) PCI Express found at mem da000000, IRQ 218, node addr 0010184fb994
... ...
bonding: MII link monitoring set to 100 ms
bonding: bond0: Adding slave eth0.
bnx2: eth0: using MSIX
bonding: bond0: enslaving eth0 as a backup interface with a down link.
bonding: bond0: Adding slave eth1.
bnx2: eth1: using MSIX
bonding: bond0: enslaving eth1 as a backup interface with a down link.
bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex
bonding: bond0: link status definitely up for interface eth0.
bonding: bond0: making interface eth0 the new active one.
bonding: bond0: first active interface up!
bnx2: eth1 NIC Copper Link is Up, 1000 Mbps full duplex
bonding: bond0: link status definitely up for interface eth1.


How many "mode" do we have:

/usr/src/kernels/2.6.18-128.2.1.el5-x86_64/include/linux/if_bonding.h

#define BOND_MODE_ROUNDROBIN 0
#define BOND_MODE_ACTIVEBACKUP 1
#define BOND_MODE_XOR 2
#define BOND_MODE_BROADCAST 3
#define BOND_MODE_8023AD 4
#define BOND_MODE_TLB 5
#define BOND_MODE_ALB 6 /* TLB + RLB (receive load balancing) */
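
The mode can also be specified by name instead of number in /etc/modprobe.conf; for example, the following line is equivalent to the mode=1 setting used above (a minimal sketch):

options bonding miimon=100 mode=active-backup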



Important: Switch configuration required

For this section, "switch" refers to whatever system the bonded devices are directly connected to (i.e., where the other end of the cable plugs into). This may be an actual dedicated switch device, or it may be another regular system (e.g., another computer running Linux),

The active-backup, balance-tlb and balance-alb modes do not require any specific configuration of the switch.

The 802.3ad mode requires that the switch have the appropriate ports configured as an 802.3ad aggregation. The precise method used to configure this varies from switch to switch, but, for example, a Cisco 3550 series switch requires that the appropriate ports first be grouped together in a single etherchannel instance, then that etherchannel is set to mode "lacp" to enable 802.3ad (instead of standard EtherChannel).

The balance-rr, balance-xor and broadcast modes generally require that the switch have the appropriate ports grouped together. The nomenclature for such a group differs between switches, it may be called an "etherchannel" (as in the Cisco example, above), a "trunk group" or some other similar variation. For these modes, each switch will also have its own configuration options for the switch's transmit policy to the bond. Typical choices include XOR of either the MAC or IP addresses. The transmit policy of the two peers does not need to match. For these three modes, the bonding mode really selects a transmit policy for an EtherChannel group; all three will interoperate with another EtherChannel group.


Reference:
Net: Bonding
Metalink Notes: 456055.1, 436555.1, 434375.1

Wednesday, September 23, 2009

Differences between SELECT ANY DICTIONARY and SELECT_CATALOG_ROLE

SELECT ANY DICTIONARY is a system privilege, whereas SELECT_CATALOG_ROLE is a role with pre-granted access to views under the SYS schema. (SELECT_CATALOG_ROLE has 2197 granted object privileges in 11.2.0.1.)

If "O7_DICTIONARY_ACCESSIBILITY" is true, then the "SELECT ANY TABLE" privilege allows selecting from any table or view, in both SYS and non-SYS schemas.

If "O7_DICTIONARY_ACCESSIBILITY" is false and SELECT_CATALOG_ROLE is granted, the user can select from the pre-granted views in the SYS schema.

If "SELECT ANY DICTIONARY" is granted, the user can access SYS schema tables and views.

If you want to create a super read-only user, grant it the "CREATE SESSION", "SELECT ANY TABLE" and "SELECT ANY DICTIONARY" system privileges, as sketched below.
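
A minimal sketch (the user name and password are placeholders):


SQL> create user ro_user identified by "Change_Me_1";
SQL> grant create session, select any table, select any dictionary to ro_user;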

Examples of PASSWORD_REUSE_MAX and PASSWORD_REUSE_TIME Used Together

Symptoms:

ORA-28007: the password cannot be reused


9i (and above) Behavior

a. If both PASSWORD_REUSE_MAX and PASSWORD_REUSE_TIME are unlimited, it means
that this parameter is ignored.
This is so for the backward compatibility also.

b. If PASSWORD_REUSE_MAX is set to 10 and PASSWORD_REUSE_TIME is set to
UNLIMITED, the user can never reuse his/her password.

c. If PASSWORD_REUSE_TIME is set to 1 month and PASSWORD_REUSE_MAX is set to
UNLIMITED, the user can never reuse his/her password

d. If PASSWORD_REUSE_TIME is set to 1 month and PASSWORD_REUSE_MAX is set to 10,
the user is allowed to reuse his/her password after 1 month if the user has
changed the password 10 times
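
For example, a minimal sketch of case (d) above, using a hypothetical profile name (PASSWORD_REUSE_TIME is in days):


SQL> create profile pwd_reuse_prof limit
  2  password_reuse_time 30
  3  password_reuse_max 10;

SQL> alter user donghua profile pwd_reuse_prof;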

8i Behavior

If either PASSWORD_REUSE_TIME or PASSWORD_REUSE_MAX is set to unlimited, the
password can be reused whichever command you use, SQL*Plus command
PASSWORD or SQL ALTER USER ...IDENTIFIED by.

Reference:

Oracle notes: 228991.1

News: End of Service Life Status for Solaris Operating System

Current Status:

Solaris 8: 10/3/08 (LS), 10/23/10(Phase 1), 10/23/13 (Phase 2)
Solaris 9: 10/30/09 (LS), 10/30/11(Phase 1), 10/30/14 (Phase 2)
Solaris 10: TBD for LS, Phase1/2





As shown in Figure 1 of the referenced Sun lifecycle page, the Solaris life cycle is defined by seven major events:
E1.
General Availability (GA): This represents when a new version, e.g. Solaris 10, becomes available in the market.
E2.
End of Life (EOL) Pre-Notification: When Sun can provide the first notification to customers that it will be announcing the EOL of a milestone version. At this time, the date of the final EOL announcement is set to be one year later.
E3.
End of Life (EOL) Announcement: When Sun announces that it will stop active development of a milestone version. The period from GA to the EOL announcement is at least four years and six months, and could be longer. EOL announcement triggers the start of a 90 day Last Order period.
E4.
Last Order Date (LOD): Marks the end of a 90 day period after the EOL announcement during which customers can place their final orders for the product. Note: customers can continue to purchase and renew regular support contracts until End of Service Life (E7).
E5.
Last Ship Date (LSD): Marks the 90 day period after LOD for Sun to ship the EOL product to customers. LSD marks the formal start of the five year Retirement Phase of that product.
E5 to E6.
Phase 1: Phase 1 support will provide contract customers with full remedial support, accepting requests for enhancements and cosmetic bugs. Solaris SW updates, formerly known as patches, will be created as needed, and SunAlerts and recommended Patch Clusters will be maintained through the end of Phase 1. You will continue to receive the same level of support that you received before the product End-Of-Lifed.
E6 to E7.
Phase 2: Contract customers will continue to receive online and telephone support and have access to pre-existing Solaris SW updates (patches). Pre-existing means Solaris SW updates created prior to start of Phase 2. Sun may offer the option to purchase a Solaris Vintage Patch Service during Phase 2.

Reference:
http://www.sun.com/software/solaris/lifecycle.xml
http://www.sun.com/service/eosl/eosl_solaris.html?cid=e9497

Monday, September 21, 2009

News: Oracle Enterprise Linux 5.4 is available for download

You can download OEL 5.4 from edelivery.oracle.com; it is a "clone" of RHEL 5.4. Compared with CentOS, another RHEL clone, OEL is the better choice when we just want to test an Oracle installation on a RHEL-clone Linux.



Oracle provides a public yum server for OEL, but the YUM repository for OEL 5.4 is not available yet; it is coming soon.

Sunday, September 20, 2009

How to: CAP_PROPAGATE CAP_BYPASS_RAC_VMM CAP_NUMA_ATTACH

Symptoms:

Upgrading 10.2.0.3 RAC to 10.2.0.4; the OS is AIX 5300-09-03.

The output during execution of root102.sh is as follows:

Error : Please change the CRS_ORACLE_USER id to have the following OS capabilities :

< CAP_PROPAGATE CAP_BYPASS_RAC_VMM CAP_NUMA_ATTACH >
by running command then 'crsctl stop crs' and 'crsctl start crs'.
The CSS daemon, ocssd.bin, will not run in realtime without this change
Oracle strongly recommends that the CSS daemon run realtime


$ORA_CRS_HOME/log//cssd/cssdOUT.log

priority string (0)
s0clssscSetScheduling: realtime requested, but not supported by user authorization Please ensure that this user has the followingcapabilities, using the /usr/bin/chuser command:
CAP_NUMA_ATTACH CAP_BYPASS_RAC_VMM CAP_PROPAGATE
setsid: failed with -1/1
2009-09-20 17:51 CSSD starting


Causes:

These capabilities CAP_NUMA_ATTACH, CAP_BYPASS_RAC_VMM, and CAP_PROPAGATE were introduced in AIX 5.3 TL5 (5300-05)

Solution:


chuser capabilities=CAP_BYPASS_RAC_VMM,CAP_PROPAGATE,CAP_NUMA_ATTACH <crs_oracle_user>


Verify:


/usr/sbin/lsuser -a capabilities <crs_oracle_user>


References:

Metalink notes: 726234.1, 739371.1

Troubleshooting: ORA-600 [LibraryCacheNotEmptyOnClose] during shutdown

Symptom:


ORA-00600: internal error code, arguments: [LibraryCacheNotEmptyOnClose], [], [], [], []


Affected Oracle versions:

10.2.0.1, 10.2.0.2, 10.2.0.3, 10.2.0.4

Workaround:

Ignore the error as it just indicates that there are some items in the library cache when closing down the instance. The error itself occurs AFTER the database close and dismount stages so only affects the instance shutdown itself. Datafiles have been closed cleanly.

Alternative:

Implementing a database trigger to flush the shared pool


CREATE OR REPLACE TRIGGER flush_shared_pool
BEFORE SHUTDOWN ON DATABASE
BEGIN
EXECUTE IMMEDIATE 'ALTER SYSTEM FLUSH SHARED_POOL';
EXCEPTION
WHEN OTHERS
THEN
raise_application_error (num => -20000, msg => 'Error flushing pool');
END;
/


References:

Oracle Metalink: Bug 4483084 - OERI[LibraryCacheNotEmptyOnClose] on shutdown

Friday, September 18, 2009

RMAN: Restore database preview

Sample output for RESTORE DATABASE PREVIEW


RMAN> restore database preview;

Starting restore at 2009-MAY-22 11:24:50
using channel ORA_DISK_1


List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
1 Full 209.16M DISK 00:01:50 2009-MAY-22 11:21:04
BP Key: 1 Status: AVAILABLE Compressed: YES Tag: TAG20090522T111914
Piece Name: /u01/app/oracle/flash_recovery_area/ORCL/backupset/2009_05_22/o1_mf_nnndf_TAG20090522T111914_51d6fm3b_.bkp
List of Datafiles in backup set 1
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- -------------------- ----
1 Full 1621673 2009-MAY-22 11:19:15 /u01/app/oracle/oradata/ORCL/system01.dbf
2 Full 1621673 2009-MAY-22 11:19:15 /u01/app/oracle/oradata/ORCL/sysaux01.dbf
3 Full 1621673 2009-MAY-22 11:19:15 /u01/app/oracle/oradata/ORCL/undotbs01.dbf
4 Full 1621673 2009-MAY-22 11:19:15 /u01/app/oracle/oradata/ORCL/users01.dbf

List of Archived Log Copies for database with db_unique_name ORCL
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - --------------------
30 1 97 A 2009-MAY-22 11:11:24
Name: /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2009_05_22/o1_mf_1_97_51d6kvcr_.arc

Media recovery start SCN is 1621093
Recovery must be done beyond SCN 1621673 to clear datafile fuzziness
Finished restore at 2009-MAY-22 11:24:51

RMAN> exit


Recovery Manager complete.


How to find log sequences used during the recovery

From the restore preview above, we confirm that all redo logs generated between SCN 1621093 and SCN 1621673 will be used.


SQL> select thread#,sequence# from v$archived_log
2 where --cross begin backup only (sample SEQ# 90)
3 (first_change#<=1621093 and next_change#>=1621093 and next_change#<=1621673)
4 or --within begin backup and end backup (sample SEQ# 91,92)
5 (first_change#>=1621093 and next_change#<=1621673)
6 or --cross end backup only (sample SEQ# 93)
7 (first_change#>=1621093 and first_change#<=1621673 and next_change#>=1621673)
8 or --cross begin backup and end backup (sample SEQ# 150)
9 (first_change#<=1621093 and next_change#>=1621673)
10 ;

THREAD# SEQUENCE#
---------- ----------
1 97

How to: get Oracle hidden (underscore) parameters

Example:

The SQL statement below lists all parameters whose names contain "optim", including the hidden (underscore) ones.


SELECT
x.ksppinm name,
y.ksppstvl VALUE,
decode(ksppity,
1, 'BOOLEAN',
2, 'STRING',
3, 'INTEGER',
4, 'PARAMETER FILE',
5, 'RESERVED',
6, 'BIG INTEGER',
'UNKNOWN') typ,
decode(ksppstdf,
'TRUE', 'DEFAULT VALUE',
'FALSE', 'INIT.ORA') isdefault,
decode(bitand(ksppiflg / 256, 1),
1, 'IS_SESS_MOD(TRUE)',
'FALSE') isses_modifiable,
decode(bitand(ksppiflg / 65536, 3),
1, 'MODSYS(NONDEFERED)',
2, 'MODSYS(DEFERED)',
3, 'MODSYS(*NONDEFERED*)',
'FALSE') issys_modifiable,
decode(bitand(ksppstvf, 7),
1, 'MODIFIED_BY(SESSION)',
4, 'MODIFIED_BY(SYSTEM)',
'FALSE') is_modified,
decode(bitand(ksppstvf, 2),
2, 'ORA_STARTUP_MOD(TRUE)',
'FALSE') is_adjusted,
ksppdesc description,
ksppstcmnt update_comment
FROM x$ksppi x,
x$ksppcv y
WHERE x.inst_id = userenv('Instance')
AND y.inst_id = userenv('Instance')
AND x.indx = y.indx
AND x.ksppinm LIKE '%optim%';



Sample output in 11gR2


NAME VALUE DESCRIPTION
---------------------------------------- ------------ ------------------------------------------------------------
_enable_NUMA_optimization TRUE Enable NUMA specific optimizations
_db_file_optimizer_read_count 8 multiblock read count for regular clients
_db_noarch_disble_optim FALSE Image redo logging (NOARCHIVEMODE)
_disable_cell_optimized_backups FALSE disable cell optimized backups
_log_max_optimize_threads 128 maximum number of threads to which log scan optimization is
applied

plsql_optimize_level 2 PL/SQL optimize level
_plsql_nvl_optimize FALSE PL/SQL NVL optimize
_optimizer_trace none optimizer trace parameter
object_cache_optimal_size 102400 optimal size of the user session's object cache in bytes
optimizer_features_enable 11.2.0.1 optimizer plan compatibility parameter
optimizer_mode ALL_ROWS optimizer mode
_optimizer_mode_force TRUE force setting of optimizer mode for user recursive SQL also
_optimizer_null_aware_antijoin TRUE null-aware antijoin parameter
_optimizer_extended_cursor_sharing UDO optimizer extended cursor sharing
_optimizer_extended_cursor_sharing_rel SIMPLE optimizer extended cursor sharing for relational operators
_optimizer_adaptive_cursor_sharing TRUE optimizer adaptive cursor sharing
_optimizer_cost_model CHOOSE optimizer cost model
_optimizer_undo_cost_change 11.2.0.1 optimizer undo cost change
_optimizer_system_stats_usage TRUE system statistics usage
_optimizer_cache_stats FALSE cost with cache statistics
_optimizer_unnest_all_subqueries TRUE enables unnesting of every type of subquery
_optimizer_push_down_distinct 0 push down distinct from query block to table
_optimizer_cost_based_transformation LINEAR enables cost-based query transformation
_optimizer_squ_bottomup TRUE enables unnesting of subquery in a bottom-up manner
_optimizer_cbqt_factor 50 cost factor for cost-based query transformation
_optimizer_skip_scan_enabled TRUE enable/disable index skip scan
_optimizer_join_sel_sanity_check TRUE enable/disable sanity check for multi-column join selectivit
y

_optim_enhance_nnull_detection TRUE TRUE to enable index [fast] full scan more often
_optimizer_max_permutations 2000 optimizer maximum join permutations per query block
optimizer_index_cost_adj 100 optimizer index cost adjustment
optimizer_index_caching 0 optimizer percent index caching
_disable_sample_io_optim FALSE disable row sampling IO optimization
_optimizer_adjust_for_nulls TRUE adjust selectivity for null values
_optimizer_degree 0 force the optimizer to use the same degree of parallelism
_optimizer_choose_permutation 0 force the optimizer to use the specified permutation
_optim_peek_user_binds TRUE enable peeking of user binds
_optim_new_default_join_sel TRUE improves the way default equijoin selectivity are computed
_optimizer_dyn_smp_blks 32 number of blocks for optimizer dynamic sampling
optimizer_dynamic_sampling 2 optimizer dynamic sampling
_optimizer_new_join_card_computation TRUE compute join cardinality using non-rounded input values
_optim_adjust_for_part_skews TRUE adjust stats for skews across partitions
_optimizer_compute_index_stats TRUE force index stats collection on index creation/rebuild
_optimizer_autostats_job TRUE enable/disable auto stats collection job
_optimizer_push_pred_cost_based TRUE use cost-based query transformation for push pred optimizati
on

_optimizer_extend_jppd_view_types TRUE join pred pushdown on group-by, distinct, semi-/anti-joined
view

_optimizer_filter_pred_pullup TRUE use cost-based flter predicate pull up transformation
_optimizer_connect_by_cost_based TRUE use cost-based transformation for connect by
_optimizer_connect_by_combine_sw TRUE combine no filtering connect by and start with
_optimizer_connect_by_elim_dups TRUE allow connect by to eliminate duplicates from input
_optimizer_mjc_enabled TRUE enable merge join cartesian
_optimizer_sortmerge_join_enabled TRUE enable/disable sort-merge join method
_optimizer_undo_changes FALSE undo changes to query optimizer
_optimizer_percent_parallel 101 optimizer percent parallel
_optimizer_search_limit 5 optimizer search limit
_optimizer_block_size 8192 standard block size used by optimizer
_optimizer_save_stats TRUE enable/disable saving old versions of optimizer stats
_optimizer_cost_filter_pred FALSE enables costing of filter predicates in IO cost model
_optimizer_correct_sq_selectivity TRUE force correct computation of subquery selectivity
_optim_dict_stats_at_db_cr_upg TRUE enable/disable dictionary stats gathering at db create/upgra
de

_optimizer_dim_subq_join_sel TRUE use join selectivity in choosing star transformation dimensi
ons

_optimizer_disable_strans_sanity_checks 0 disable star transformation sanity checks
_optimizer_ignore_hints FALSE enables the embedded hints to be ignored
_optimizer_random_plan 0 optimizer seed value for random plans
_optimizer_ceil_cost TRUE CEIL cost in CBO
_optimizer_invalidation_period 18000 time window for invalidation of cursors of analyzed objects
_optimizer_join_order_control 3 controls the optimizer join order search algorithm
_optimizer_cartesian_enabled TRUE optimizer cartesian join enabled
_optimizer_starplan_enabled TRUE optimizer star plan enabled
_optimizer_join_elimination_enabled TRUE optimizer join elimination enabled
_drop_table_optimization_enabled TRUE reduce SGA memory use during drop of a partitioned table
_optimizer_multiple_cenv generate and run plans using several compilation environment
s

_optimizer_cost_hjsmj_multimatch TRUE add cost of generating result set when #rows per key > 1
_optimizer_transitivity_retain TRUE retain equi-join pred upon transitive equality pred generati
on

optimizer_secure_view_merging TRUE optimizer secure view merging and predicate pushdown/movearo
und

_optimizer_cbqt_no_size_restriction TRUE disable cost based transformation query size restriction
_windowfunc_optimization_settings 0 settings for window function optimizations
_truncate_optimization_enabled TRUE do truncate optimization if set to TRUE
_optimizer_enhanced_filter_push TRUE push filters before trying cost-based query transformation
_optimizer_rownum_pred_based_fkr TRUE enable the use of first K rows due to rownum predicate
_optimizer_better_inlist_costing ALL enable improved costing of index access using in-list(s)
_optimizer_self_induced_cache_cost FALSE account for self-induced caching
_optimizer_min_cache_blocks 10 set minimum cached blocks
_optimizer_or_expansion DEPTH control or expansion approach used
_optimizer_outer_to_anti_enabled TRUE Enable transformation of outer-join to anti-join if possible
_optimizer_order_by_elimination_enabled TRUE Eliminates order bys from views before query transformation
_optimizer_star_tran_in_with_clause TRUE enable/disable star transformation in with clause queries
_optimizer_sortmerge_join_inequality TRUE enable/disable sort-merge join using inequality predicates
_optimizer_complex_pred_selectivity TRUE enable selectivity estimation for builtin functions
_optimizer_aw_stats_enabled TRUE Enables statistcs on AW olap_table table function
_optimizer_skip_scan_guess FALSE consider index skip scan for predicates with guessed selecti
vity

_optimizer_distinct_elimination TRUE Eliminates redundant SELECT DISTNCT's
_add_col_optim_enabled TRUE Allows new add column optimization
_optimizer_multi_level_push_pred TRUE consider join-predicate pushdown that requires multi-level p
ushdown to base table

_optimizer_group_by_placement TRUE consider group-by placement optimization
_optimizer_coalesce_subqueries TRUE consider coalescing of subqueries optimization
_optimizer_enable_density_improvements TRUE use improved density computation for selectivity estimation
_optimizer_rownum_bind_default 10 Default value to use for rownum bind
optimizer_use_pending_statistics FALSE Control whether to use optimizer pending statistics
_optimizer_improve_selectivity TRUE improve table and partial overlap join selectivity computati
on

_optimizer_aw_join_push_enabled TRUE Enables AW Join Push optimization
optimizer_capture_sql_plan_baselines FALSE automatic capture of SQL plan baselines for repeatable state
ments

optimizer_use_sql_plan_baselines TRUE use of SQL plan baselines for captured sql statements
_optimizer_native_full_outer_join FORCE execute full outer join using native implementaion
_optimizer_enable_extended_stats TRUE use extended statistics for selectivity estimation
_optimizer_free_transformation_heap TRUE free transformation subheap after each transformation
_optimizer_use_subheap TRUE Enables physical optimizer subheap
_optimizer_or_expansion_subheap TRUE Use subheap for optimizer or-expansion
_optimizer_star_trans_min_cost 0 optimizer star transformation minimum cost
_optimizer_star_trans_min_ratio 0 optimizer star transformation minimum ratio
_optimizer_reuse_cost_annotations TRUE reuse cost annotations during cost-based query transformatio
n

_optimizer_fkr_index_cost_bias 10 Optimizer index bias over FTS/IFFS under first K rows mode
_optimizer_nested_rollup_for_gset 100 number of groups above which we use nested rollup exec for g
set

optimizer_use_invisible_indexes FALSE Usage of invisible indexes (TRUE/FALSE)
_optimizer_extended_stats_usage_control 224 controls the optimizer usage of extended stats
_optimizer_fast_pred_transitivity TRUE use fast algorithm to generate transitive predicates
_optimizer_fast_access_pred_analysis TRUE use fast algorithm to traverse predicates for physical optim
izer

_optimizer_multiple_cenv_report result control what to report in trace file when run in multi-plan
mode

_optimizer_instance_count 0 force the optimizer to use the specified number of instances
_parallel_optimization_phase_for_local FALSE parallel optimization phase when all slaves are local
_optimizer_unnest_disjunctive_subq TRUE Unnesting of disjunctive subqueries (TRUE/FALSE)
_optimizer_unnest_corr_set_subq TRUE Unnesting of correlated set subqueries (TRUE/FALSE)
_optimizer_distinct_agg_transform TRUE Transforms Distinct Aggregates to non-distinct aggregates
_aggregation_optimization_settings 0 settings for aggregation optimizations
_optimizer_eliminate_filtering_join TRUE optimizer filtering join elimination enabled
_optimizer_connect_by_cb_whr_only FALSE use cost-based transformation for whr clause in connect by
_optimizer_join_factorization TRUE use join factorization transformation
_optimizer_force_CBQT force CBQT transformation regardless of cost
_optimizer_use_cbqt_star_transformation FALSE use rewritten star transformation using cbqt framework
_optimizer_table_expansion TRUE consider table expansion transformation
_xsolapi_sql_optimize TRUE OLAP API enable optimization
_xsolapi_optimize_suppression TRUE OLAP API optimizes suppressions?

131 rows selected.

How to: ORA-32004: obsolete and/or deprecated parameter(s) specified

Symptoms:


[oracle@vmxdb01.dbaglobe.com ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.1.0.6.0 - Production on Fri Sep 18 14:30:15 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORA-32004: obsolete and/or deprecated parameter(s) specified
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size 2151248 bytes
Variable Size 612371632 bytes
Database Buffers 448790528 bytes
Redo Buffers 5623808 bytes
Database mounted.


Solutions:

Deprecated parameters are listed after the non-default parameters in the alert log; in our case, the deprecated parameter is "sql_trace". The fix is sketched after the alert log extract below.


Fri Sep 18 14:30:21 2009
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 3
Autotune of undo retention is turned on.
IMODE=BR
ILAT =121
LICENSE_MAX_USERS = 0
SYS auditing is disabled
Starting up ORACLE RDBMS Version: 11.1.0.6.0.
Using parameter settings in server-side pfile /opt/app/oracle/product/11.1.0/dbs/initorcl.ora
System parameters with non-default values:
processes = 1000
sessions = 1105
nls_territory = "SINGAPORE"
resource_manager_plan = "DEFAULT_PLAN"
memory_target = 1G
control_files = "+DATA/orcl/controlfile/control01.ctl"
control_files = "+ARC/orcl/controlfile/control02.ctl"
db_block_size = 8192
compatible = "11.1.0.0.0"
log_archive_dest_1 = "LOCATION=+ARC/"
log_archive_format = "%t_%s_%r.dbf"
db_create_file_dest = "+DATA"
db_recovery_file_dest = "+ARC"
db_recovery_file_dest_size= 45000M
undo_tablespace = "UNDOTBS1"
remote_login_passwordfile= "EXCLUSIVE"
db_domain = ""
dispatchers = "(PROTOCOL=TCP) (SERVICE=orclXDB)"
audit_file_dest = "/opt/app/oracle/admin/orcl/adump"
audit_trail = "DB"
db_name = "orcl"
open_cursors = 300
sql_trace = FALSE
diagnostic_dest = "/opt/app/oracle"
Deprecated system parameters with specified values:
sql_trace
End of deprecated system parameter listing
Fri Sep 18 14:30:21 2009
PMON started with pid=2, OS id=29803
Fri Sep 18 14:30:21 2009
VKTM started with pid=3, OS id=29805 at elevated priority
VKTM running at (20)ms precision
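
To clear the warning, remove the deprecated parameter. Since this instance starts from a pfile, deleting the "sql_trace = FALSE" line from initorcl.ora is enough; with an spfile, a reset would do the same (a minimal sketch):


SQL> alter system reset sql_trace scope=spfile sid='*';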

Thursday, September 17, 2009

Statspack is configurable for data capturing and reporting

Capturing

SQL statements and segment statistics are captured based on threshold values. By default, stats$statspack_parameters is empty and all thresholds take their default values (see the sketch after the parameter list below for how to change them).


5.6. Input Parameters for the SNAP and MODIFY_STATSPACK_PARAMETER procedures

Parameters able to be passed in to the statspack.snap and
statspack.modify_statspack_parameter procedures are as follows:

Range of Default
Parameter Name Valid Values Value Meaning
------------------ ------------ ------- -----------------------------------
i_snap_level 0,5,6,7,10 5 Snapshot Level
i_ucomment Text Comment to be stored with Snapshot
i_executions_th Integer >=0 100 SQL Threshold: number of times
the statement was executed
i_disk_reads_th Integer >=0 1,000 SQL Threshold: number of disk reads
the statement made
i_parse_calls_th Integer >=0 1,000 SQL Threshold: number of parse
calls the statement made
i_buffer_gets_th Integer >=0 10,000 SQL Threshold: number of buffer
gets the statement made
i_sharable_mem_th Integer >=0 1048576 SQL Threshold: amount of sharable
memory
i_version_count_th Integer >=0 20 SQL Threshold: number of versions
of a SQL statement
i_seg_phy_reads_th Integer >=0 1,000 Segment statistic Threshold: number
of physical reads on a segment.
i_seg_log_reads_th Integer >=0 1,0000 Segment statistic Threshold: number
of logical reads on a segment.
i_seg_buff_busy_th Integer >=0 100 Segment statistic Threshold: number
of buffer busy waits for a segment.
i_seg_rowlock_w_th Integer >=0 100 Segment statistic Threshold: number
of row lock waits for a segment.
i_seg_itl_waits_th Integer >=0 100 Segment statistic Threshold: number
of ITL waits for a segment.
i_seg_cr_bks_sd_th Integer >=0 1000 Segment statistic Threshold: number
of Consistent Reads blocks served by
the instance for the segment (RAC).
i_seg_cu_bks_sd_th Integer >=0 1000 Segment statistic Threshold: number
of CUrrent blocks served by the
instance for the segment (RAC).
i_session_id Valid sid 0 (no Session Id of the Oracle Session
from session) to capture session granular
v$session statistics for
i_modify_parameter True,False False Save the parameters specified for
future snapshots?
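
A minimal sketch of raising the snapshot level and one capture threshold, run as the PERFSTAT user (the values here are illustrative only):


SQL> exec statspack.modify_statspack_parameter(i_snap_level => 7, i_buffer_gets_th => 100000);
SQL> exec statspack.snap(i_snap_level => 7);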


Reporting

The Statspack report is generated with "spreport.sql" from $ORACLE_HOME/rdbms/admin, which calls "sprepins.sql" for the actual reporting work.

Certain parameters in sprepins.sql can be tuned to change how many wait events, top SQL statements and top segments appear in the spreport output.


Rem $Header: sprepins.sql 17-may-2004.14:15:50 cdialeri Exp $

clear break compute;
repfooter off;
ttitle off;
btitle off;
set timing off veri off space 1 flush on pause off termout on numwidth 10;
set echo off feedback off pagesize 60 linesize 80 newpage 1 recsep off;
set trimspool on trimout on;
define top_n_events = 5;
define top_n_sql = 65;
define top_n_segstat = 5;
define num_rows_per_hash=5;

Wednesday, September 16, 2009

ORA-01956: invalid command when OS_ROLES are being used

SQL> grant dba to donghua;
grant dba to donghua
*
ERROR at line 1:
ORA-01956: invalid command when OS_ROLES are being used

ORA-01956: invalid command when OS_ROLES are being used
Cause: This command cannot be used when the INIT.ORA parameter OS_ROLES is TRUE.
Action: Grant the role to the user in the operating system.

Suggestion: Use Oracle-managed role privileges by setting OS_ROLES=FALSE, as sketched below.
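
A minimal sketch of switching back to Oracle-managed roles (OS_ROLES is static, so this needs an spfile and an instance restart):


SQL> alter system set os_roles=false scope=spfile;
SQL> shutdown immediate
SQL> startup
SQL> grant dba to donghua;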

More info on OS managed roles:
Using Operating System Role Identification

Oracle import hangs on "latch free" wait event for more than 8 hours

Facts:

Oracle 8.1.7.0 + Solaris 9

SQL> select event,p1raw,p2raw,seconds_in_wait from v$session_wait where sid=11;

EVENT P1RAW P2RAW SECONDS_IN_WAIT
------------------------------ -------- -------- ---------------
latch free 80002B20 00000002 29386


Findings


SQL> select sql_hash_value,sql_address from v$session where sid=11;

SQL_HASH_VALUE SQL_ADDR
-------------- --------
2329822092 99765838

SQL> select sql_text from v$sqlarea where hash_value='2329822092'
2 and address='99765838';

SQL_TEXT
--------------------------------------------------------------------------------
AUDIT AUDIT,COMMENT,DELETE,GRANT,INSERT,LOCK,RENAME,SELECT,UPDATE ON "VIEW_IMXX
P_WORKLISTS" BY ACCESS



SQL> select name,'child '||child# child,gets,misses,sleeps from v$latch_children
2 where addr='&p1raw'
3 union
4 select name,null,gets,misses,sleeps from v$latch
5 where addr='&p1raw';
Enter value for p1raw: 80002B20
old 2: where addr='&p1raw'
new 2: where addr='80002B20'
Enter value for p1raw: 80002B20
old 5: where addr='&p1raw'
new 5: where addr='80002B20'

NAME CHILD GETS MISSES SLEEPS
------------------------------ ---------- ---------- ---------- ----------
session allocation 19579180 121 24


How to resolve:

I tried to analyze why the "session allocation" latch had been requested 19579180+121 times, but no root cause was discovered.

Then I realized the audited object is a view, and many other views were invalid. I recompiled all the invalid views, including this one, and after that the problem disappeared! (A sketch of the recompilation follows.)
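
A minimal sketch of how invalid views can be found and recompiled (generic statements, not the exact ones used; the owner name is hypothetical):


SQL> select owner, object_name from dba_objects
  2  where object_type='VIEW' and status='INVALID';

SQL> alter view appowner.view_imxxp_worklists compile;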

Frankly speaking, I do not have the full answer, and it is not reproducible.

The End:


SQL> select event,p1raw,p2raw,seconds_in_wait from v$session_wait where sid=11;

EVENT P1RAW P2RAW SECONDS_IN_WAIT
------------------------------ -------- -------- ---------------
log file sync 00000EED 00 0

Tuesday, September 15, 2009

Hit Oracle bug: ORA-600 [16606] when altering a cluster index

Symptom:

SQL> alter index TEST_USER.TEST_TABLE_N1 STORAGE ( MAXEXTENTS 2147483645 );
alter index TEST_USER.TEST_TABLE_N1 STORAGE ( MAXEXTENTS 2147483645 )
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [16606], [2], [3], [2585510728], [],
[], [], []

Related bug:

1350740: ALTER INDEX STORAGE ( NEXT 3 M) FAILS WITH ORA-600 [16606]


Affected versions:

8.1.6/8.1.7 on all platforms. (Mine is Solaris 9)

How to reproduce (taken from the bug info above):

1. create a cluster
CREATE CLUSTER ADE_SITE_CL (
CPFN VARCHAR2 ( 11) ,
VERAR VARCHAR2 ( 1) ,
WAF_NR NUMBER ( 3,0 )
)
Pctfree 0
Pctused 64
Size 800
Tablespace TS_1
Storage ( INITIAL 180000 K
NEXT 180000 K
MINEXTENTS 3
MAXEXTENTS 20
);

2. alter cluster
Alter Cluster ADE_SITE_CL
STORAGE ( NEXT 40000 K) ;

3. create an index
Create Index ADESTE_P on Cluster ADE_SITE_CL
Pctfree 10
Tablespace TS_1
Storage ( INITIAL 10 M
NEXT 10 M
MINEXTENTS 3
MAXEXTENTS 20
);

4. alter index
alter index ADESTE_P
STORAGE ( NEXT 3 M) ;
fails with ORA-00600: internal error code, arguments: [16606], [2], [3],
[52655056], [], [], [], []

Performance tuning: whether a SQL plan will be shared if the underlying objects are not the same

Question: I have many schemas, each with the same table names, so my application uses the same SQL statements but runs under different schemas. Does that mean each SQL statement is parsed once and shared by all the applications?

Answer: There is one parent cursor, since it is determined by the SQL text, but different child cursors (versions) for the same SQL statement. (If either the parent cursor or the child cursor is not in the library cache, Oracle must "hard" parse the SQL statement.)

Below is one example. I created two users, user1 and user2, each with one table t, but with an index only on user1.t(object_id); the setup is sketched below.
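
A minimal sketch of the setup (assumed, not the exact script used; dba_objects is just a convenient row source):


SQL> create table user1.t as select * from dba_objects;
SQL> create index user1.t_pk on user1.t(object_id);
SQL> create table user2.t as select * from dba_objects;
SQL> -- then run the same query from each schema:
SQL> select object_name,object_type from t where object_id=558;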

Same SQL_ID (i.e. the same parent cursor), but 2 versions (i.e. child cursors). We also notice that PLAN_HASH_VALUE is different.


SQL> select sql_id,sql_text,executions exec,version_count vers,
2> plan_hash_value from v$sqlarea where sql_text like '%id=558%'
3> and sql_text not like '%sqlarea%'
SQL> /


SQL_ID SQL_TEXT EXEC VERS PLAN_HASH_VALUE
------------- ------------------------------ ---------- ---------- ---------------
2a7uchc9628su select object_name,object_type 1 2 3772518221
from t where object_id=558

2a7uchc9628su select object_name,object_type 1 2 1601196873
from t where object_id=558


Why the child cursor is not shared:


SQL> select child_number,auth_check_mismatch, translation_mismatch
2> from v$sql_shared_cursor where sql_id='2a7uchc9628su';

CHILD_NUMBER A T

------------ - -
0 N N
1 Y Y


We can confirm that the SQL plans are different.


SQL> select * from table(dbms_xplan.display_cursor('2a7uchc9628su',0));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------
SQL_ID 2a7uchc9628su, child number 0
-------------------------------------
select object_name,object_type from t where object_id=558

Plan hash value: 3772518221

------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)| |
| 1 | TABLE ACCESS BY INDEX ROWID| T | 1 | 29 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | T_PK | 1 | | 1 (0)| 00:00:01 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("OBJECT_ID"=558)




SQL> select * from table(dbms_xplan.display_cursor('2a7uchc9628su',1));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------
SQL_ID 2a7uchc9628su, child number 1
-------------------------------------
select object_name,object_type from t where object_id=558

Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 32 (100)| |
|* 1 | TABLE ACCESS FULL| T | 1 | 29 | 32 (4)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("OBJECT_ID"=558)



Reference:

120655.1 VIEW: "V$SQL_SHARED_CURSOR" Reference Note

http://www.dbaglobe.com/2009/08/troubleshooting-oracle-performance.html

Monday, September 14, 2009

Quick fix: ORA-01631 max # extents (xxx) reached in table YYYYY

Symptom:

ORA-01631: max # extents (120) reached in table TESTUSER.TEST_TABLE

Quick Fix:

ALTER TABLE TESTUSER.TEST_TABLE STORAGE ( MAXEXTENTS 2147483645 );

How to use "/ as sysdba" during exp/imp

On Windows platform:

userid="'"/ as sysdba"'"
or
userid="""/ as sysdba"""

On Unix Platform:

userid=\'/ as sysdba\'

These tricks can be used with Oracle datapump (expdp/impdp) as well.

Friday, September 11, 2009

How to: add a linked server for Oracle in MS SQL 2008

Step 1: enable "AllowInProcess" for provider "OraOLEDB.Oracle"


USE [master]
GO
EXEC master.dbo.sp_MSset_oledb_prop N'OraOLEDB.Oracle', N'AllowInProcess', 1
GO


Step 2: add one linked server


Linked server: Test (here is name of your linked server)
Provider: Oracle Provider for OLE DB
Product name: Oracle
Data source: Test (here is the tns name for Oracle database)
Provider string: (empty)

Security tab:
Remote login: system
With password: (system password for oracle)
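
The same linked server can also be scripted instead of using the GUI (a sketch using the standard system procedures; the password is a placeholder):


EXEC master.dbo.sp_addlinkedserver @server=N'TEST', @srvproduct=N'Oracle',
     @provider=N'OraOLEDB.Oracle', @datasrc=N'TEST'
GO
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname=N'TEST', @useself=N'False',
     @locallogin=NULL, @rmtuser=N'system', @rmtpassword='your_password'
GO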

Step 3: Verify

Perform operations against the remote Oracle database:


select * from [TEST]..[SYSTEM].[T]
go

update [TEST]..[SYSTEM].[T]
set username='TEST' where sid=143;
go

select * into test1
from [TEST]..[SYSTEM].[T]
go

select * from test1
go

delete from [TEST]..[SYSTEM].[T]
go

insert into [TEST]..[SYSTEM].[T]
select * from test1
go

select * from [TEST]..[SYSTEM].[T]
go


Caution: If you hit the following error message, go back to step 1.


The OLE DB provider "OraOLEDB.Oracle" for linked server "TEST" reported an error. The provider did not give any information about the error.
Cannot initialize the data source object of OLE DB provider "OraOLEDB.Oracle" for linked server "TEST". (Microsoft SQL Server, Error: 7399)

How to resolve data flow warning from Oracle to MSSQL

Symptom:


Warning at {56AA1808-8DC8-44FC-9946-7076F50D9CCE} [OLE DB Source [1]]: Cannot retrieve the column code page info from the OLE DB provider. If the component supports the "DefaultCodePage" property, the code page from that property will be used. Change the value of the property if the current string code page values are incorrect. If the component does not support the property, the code page from the component's locale ID will be used.



Warning: 0x80202066 at Data Flow Task, OLE DB Source [1]: Cannot retrieve the column code page info from the OLE DB provider. If the component supports the "DefaultCodePage" property, the code page from that property will be used. Change the value of the property if the current string code page values are incorrect. If the component does not support the property, the code page from the component's locale ID will be used.


Solution:

Set "AlwaysUseDefaultCodePage" to true.





Gather statistics log and default values in 10g

Invocations of the dbms_stats package to gather statistics are logged in the data dictionary; the log information is available using the query below:


SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production



SQL> col operation for a30
SQL> col start_time for a38
SQL> col duration for a12
SQL> set lin 80

SQL> select operation, start_time,
2 (end_time-start_time) day(1) to second(0) as duration
3 from dba_optstat_operations
4 order by start_time desc
5 /

OPERATION START_TIME DURATION
------------------------------ -------------------------------------- ------------
gather_system_stats 10-SEP-09 10.58.38.567359 PM +08:00 +0 00:00:05
gather_database_stats(auto) 10-SEP-09 10.56.39.972009 PM +08:00 +0 00:00:13
gather_database_stats(auto) 10-SEP-09 10.12.02.228579 PM +08:00 +0 00:00:33
gather_database_stats(auto) 31-AUG-09 10.00.02.685188 PM +08:00 +0 00:00:45
set_system_stats 30-AUG-09 11.04.04.842575 PM +08:00 +0 00:00:00
set_system_stats 30-AUG-09 11.04.04.835971 PM +08:00 +0 00:00:00
set_system_stats 30-AUG-09 11.04.04.720261 PM +08:00 +0 00:00:00
gather_dictionary_stats 30-AUG-09 11.03.17.423471 PM +08:00 +0 00:00:24
gather_dictionary_stats 30-AUG-09 11.01.34.178535 PM +08:00 +0 00:00:15
gather_dictionary_stats 30-AUG-09 11.00.45.056407 PM +08:00 +0 00:00:07
gather_dictionary_stats 30-AUG-09 11.00.21.235183 PM +08:00 +0 00:00:02
gather_dictionary_stats 30-AUG-09 11.00.03.201262 PM +08:00 +0 00:00:04
gather_dictionary_stats 30-AUG-09 10.59.38.403871 PM +08:00 +0 00:00:15
gather_dictionary_stats 30-AUG-09 10.58.28.456440 PM +08:00 +0 00:00:08
gather_dictionary_stats 30-AUG-09 10.58.04.499535 PM +08:00 +0 00:00:04
gather_dictionary_stats 30-AUG-09 10.57.41.872271 PM +08:00 +0 00:00:04
gather_dictionary_stats 30-AUG-09 10.57.25.151534 PM +08:00 +0 00:00:03
gather_dictionary_stats 30-AUG-09 10.56.44.901781 PM +08:00 +0 00:00:05
gather_dictionary_stats 30-AUG-09 10.56.02.422642 PM +08:00 +0 00:00:09
gather_dictionary_stats 30-AUG-09 10.54.54.390954 PM +08:00 +0 00:00:06
gather_dictionary_stats 30-AUG-09 10.53.25.199028 PM +08:00 +0 00:00:50


This information is purged automatically, just like the statistics history tables; the current retention period can be found using the query below:


SQL> select dbms_stats.get_stats_history_retention() AS retention from dual;

RETENTION
----------
31
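
If a different retention period is preferred, it can be changed with dbms_stats as well; the 60-day value below is only an illustration:

SQL> exec dbms_stats.alter_stats_history_retention(retention => 60)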


From 10g onwards, the dbms_stats package does not hard-code its parameter default values; these defaults (for example estimate_percent) can be changed using the dbms_stats package itself, as sketched after the listing below.

Here is how we can find out the current default values:


SQL> select dbms_stats.get_param('cascade') from dual;

DBMS_STATS.GET_PARAM('CASCADE')
------------------------------------------------------------------------------------------
DBMS_STATS.AUTO_CASCADE

SQL> select dbms_stats.get_param('degree') from dual;

DBMS_STATS.GET_PARAM('DEGREE')
------------------------------------------------------------------------------------------
NULL

SQL> select dbms_stats.get_param('estimate_percent') from dual;

DBMS_STATS.GET_PARAM('ESTIMATE_PERCENT')
------------------------------------------------------------------------------------------
DBMS_STATS.AUTO_SAMPLE_SIZE

SQL> select dbms_stats.get_param('method_opt') from dual;

DBMS_STATS.GET_PARAM('METHOD_OPT')
------------------------------------------------------------------------------------------
FOR ALL COLUMNS SIZE AUTO

SQL> select dbms_stats.get_param('no_invalidate') from dual;

DBMS_STATS.GET_PARAM('NO_INVALIDATE')
------------------------------------------------------------------------------------------
DBMS_STATS.AUTO_INVALIDATE

SQL> select dbms_stats.get_param('granularity') from dual;

DBMS_STATS.GET_PARAM('GRANULARITY')
------------------------------------------------------------------------------------------
AUTO
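
To change one of these defaults, dbms_stats.set_param can be used (in 11g, set_global_prefs is the preferred replacement); the value below is only an example:

SQL> exec dbms_stats.set_param('ESTIMATE_PERCENT','20')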

Thursday, September 10, 2009

Oracle database physical data file limits and ORA-03206

Symptom:


SQL> create tablespace test
2 datafile '/oradata/orcl/test01.dbf' size 100m autoextend on next 1m maxsize 40960m;
create tablespace test
*
ERROR at line 1:
ORA-03206: maximum file size of (5242880) blocks in AUTOEXTEND clause is out of range


Cause:

Each database datafile has a maximum size. Without the BIGFILE keyword, the limit is 2^22 - 1 blocks (4,194,303, roughly 4 million blocks); with an 8K block size that works out to just under 32768MB, which is why 32767M is the largest MAXSIZE accepted in the examples below.

With the BIGFILE keyword, the limit is about 4 billion (2^32) blocks, i.e. 32TB with an 8K block size.

Examples:


SQL> create tablespace test
2 datafile '/oradata/orcl/test01.dbf' size 100m autoextend on next 1m maxsize 32768M;
create tablespace test
*
ERROR at line 1:
ORA-03206: maximum file size of (4194304) blocks in AUTOEXTEND clause is out of range

SQL> ! rm /oradata/orcl/test01.dbf

SQL> create tablespace test
2 datafile '/oradata/orcl/test01.dbf' size 100m autoextend on next 1m maxsize 32767M;

Tablespace created.

SQL> create tablespace test_new
2 datafile '/oradata/test/test_new01.dbf' size 100m autoextend on next 1m maxsize 20480m,
3 '/oradata/orcl/test_new02.dbf' size 100m autoextend on next 1m maxsize 20480m;

Tablespace created.
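
If a single datafile larger than the smallfile limit is really required, a BIGFILE tablespace is an option; a sketch only, with an illustrative path and sizes:

SQL> create bigfile tablespace test_big
  2  datafile '/oradata/orcl/test_big01.dbf' size 100m autoextend on next 1m maxsize 40960m;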

Wednesday, September 9, 2009

Quick guide: how to manage Redhat Cluster in RHEL 5

Check cluster status


[root@ncs-db-1 ~]# clustat
Cluster Status for ncs_dbcluster @ Wed Sep 1 15:22:08 2009
Member Status: Quorate

Member Name ID Status
------ ---- ---- ------
ncs-db-n1 1 Online, Local, rgmanager
ncs-db-n2 2 Online, rgmanager

Service Name Owner (Last) State
------- ---- ----- ------ -----
service:clusvc ncs-db-n1 started


Disable cluster service
After disabling, the resource group is shut down and will not be affected by server reboots and/or failover.


[root@ncs-db-1 ~]# clusvcadm -d clusvc
Local machine disabling service:clusvc...Success



Enable cluster service


[root@ncs-db-1 ~]# clusvcadm -e clusvc
Local machine trying to enable service:clusvc...Success
service:clusvc is now running on ncs-db-n1


Freeze cluster resource group
After the resource group is frozen it is no longer monitored by the cluster manager, but its current state is not changed by this command.


[root@ncs-db-1 ~]# clusvcadm -Z clusvc
Local machine freezing service:clusvc...Success


Unfreeze cluster resource group


[root@ncs-db-1 ~]# clusvcadm -U clusvc
Local machine unfreezing service:clusvc...Success



Stop cluster resource group
The resource group is stopped; it can still fail over if the current node goes down due to fencing.


[root@ncs-db-1 ~]# clusvcadm -s clusvc
Local machine stopping service:clusvc...Success



Restart cluster resource group
The resource group is restarted in place on its current owner.


[root@ncs-db-1 ~]# clusvcadm -R clusvc


Relocate resource to another member in the failover domain


[root@ncs-db-1 ~]# clusvcadm -r clusvc -m ncs-db-n2
Trying to relocate service:clusvc to ncs-db-n2...Success
service:clusvc is now running on ncs-db-n2


Other useful commands


[root@ncs-db-1 ~]# ccs_tool lsnode

Cluster name: ncs_dbcluster, config_version: 21

Nodename Votes Nodeid Fencetype
ncs-db-n1 1 1 ncs-db-1-ilo1
ncs-db-n2 1 2 ncs-db-2-ilo2



[root@ncs-db-1 ~]# cman_tool status
Version: 6.1.0
Config Version: 21
Cluster Name: ncs_cluster
Cluster Id: 27444
Cluster Member: Yes
Cluster Generation: 1796
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Quorum: 1
Active subsystems: 9
Flags: 2node Dirty
Ports Bound: 0 11 177
Node name: ncs-db-n1
Node ID: 1
Multicast addresses: xxx.xxx.xxx.xxx
Node addresses: xxx.xxx.xxx.xxx

Tuesday, September 8, 2009

FYI: 2 news on Linux

1. RHEL 5.4 has been released; you can find the release notes here:
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Release_Notes/index.html

2. A good website to check whether your Linux distribution is hot or not:

http://distrowatch.com/stats.php?section=popularity

I only "heard" some popular Linux distributions first time from here, for example Mint, Arch, PCLinuxOS, etc.

How to differentiate RHEL releases correctly

Please note that this article focuses only on the server-related products, not the desktop products.

Before RHEL5, we usually talked about AS & ES for server editions.

  • AS: Red Hat Enterprise Linux AS ("advanced server")
  • ES: Red Hat Enterprise Linux ES ("entry/mid server")

Key limitations of RHEL3 ES (based on the Linux 2.4.21 kernel)

  • Maximum 2 CPU sockets
  • Maximum 8 GB RAM
  • No support on IBM Power

Key limitations of RHEL4 ES (based on the Linux 2.6.9 kernel)

  • Maximum 2 CPU sockets
  • Maximum 8 GB RAM
  • No support on IBM Power

Starting with RHEL 5, Red Hat uses "RHEL" and "RHEL Advanced Platform" in place of "ES" and "AS".

Key limitations of RHEL5 (based on the Linux 2.6.18 kernel)

  • Maximum 2 CPU sockets
  • Maximum 4 virtualized guests/instances
  • No support for Storage virtualization (with Red Hat GFS and Cluster Suite)

Reference:

http://www.redhat.com/rhel/compare/

http://www.redhat.com/whitepapers/rhel/RHEL3FamOverWPSO.pdf

http://www.tips5.com/difference-between-as-ws-es-and-desktop-of-redhat-enterprise-linux

Monday, September 7, 2009

Oracle Database Gateway Products

The 9i/10g Transparent Gateway for MSSQL/Sybase/Informix/Teradata has been desupported, and the new products in 11g are called "Oracle Database Gateways for XX", where XX is the third-party database product name. (Refer to the desupport notice here)


HSODBC has also been desupported by Oracle, and its replacement product is "Oracle Database Gateway for ODBC" (a.k.a. DG4ODBC).


Anyway, DG4ODBC is still free of charge.

Below are the current prices taken from the Oracle Price Lists.

Oracle database 11gR2 Installation (Linux x86)

The installation guide in the Oracle documentation is already quite comprehensive; here are just some add-ons.

You can find the documentation here: http://www.oracle.com/pls/db112/homepage

Looking at the kernel parameters: even though Oracle states that these are minimum required values, I guess some engineers still just copy and paste them. Be aware that "shmmax" in the 11gR2 documentation is now the minimum value (536870912, i.e. 512MB), instead of the 2GB listed for 10gR2/11gR1.

Kernel parameters in 11gR1 (only different parameter values are highlighted here)

fs.file-max = 512 * PROCESSES
kernel.shmmax = 2147483648
kernel.shmmni = 4096
net.ipv4.ip_local_port_range = 1024 65000
net.core.wmem_max = 262144


Kernel parameters in 11gR2

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmax = 536870912
net.ipv4.ip_local_port_range = 9000 65500
net.core.wmem_max = 1048586



Another change is the OFA name for $ORACLE_HOME. In 9i, Oracle recommended a home directory name similar to the one below:
"/u01/app/oracle/product/9.2.0"

In 10g, because a separate ASM home was introduced, the suggested Oracle home directory became: "/u01/app/oracle/product/10.2.0/db_1"

I just did a test with 11gR2; the default suggested Oracle home directory is "/u01/app/oracle/product/11.2.0/dbhome_1"

Remark: to get the Oracle-suggested OFA directory, only set the ORACLE_HOME and ORACLE_SID environment variables before installation.

The example below shows the default installed options for Enterprise Edition:

[oracle@vmxdb02 stage]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Sun Sep 6 10:22:23 2009

Copyright (c) 1982, 2009, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production



SQL> select comp_name, version, status from dba_registry;

COMP_NAME VERSION STATUS
-------------------------------------------------- -------------------- ----------
OWB 11.2.0.1.0 VALID
Oracle Application Express 3.2.1.00.10 VALID
Oracle Enterprise Manager 11.2.0.1.0 VALID
OLAP Catalog 11.2.0.1.0 VALID
Spatial 11.2.0.1.0 VALID
Oracle Multimedia 11.2.0.1.0 VALID
Oracle XML Database 11.2.0.1.0 VALID
Oracle Text 11.2.0.1.0 VALID
Oracle Expression Filter 11.2.0.1.0 VALID
Oracle Rules Manager 11.2.0.1.0 VALID
Oracle Workspace Manager 11.2.0.1.0 VALID

COMP_NAME VERSION STATUS
-------------------------------------------------- -------------------- ----------
Oracle Database Catalog Views 11.2.0.1.0 VALID
Oracle Database Packages and Types 11.2.0.1.0 VALID
JServer JAVA Virtual Machine 11.2.0.1.0 VALID
Oracle XDK 11.2.0.1.0 VALID
Oracle Database Java Packages 11.2.0.1.0 VALID
OLAP Analytic Workspace 11.2.0.1.0 VALID
Oracle OLAP API 11.2.0.1.0 VALID

18 rows selected.

SQL> select * from v$option order by 2
2 /

PARAMETER VALUE
---------------------------------------- ----------
Real Application Clusters FALSE
Oracle Database Vault FALSE
Oracle Label Security FALSE
Automatic Storage Management FALSE
Partitioning TRUE
Objects TRUE
Advanced replication TRUE
Bit-mapped indexes TRUE
Connection multiplexing TRUE
Connection pooling TRUE
Database queuing TRUE

PARAMETER VALUE
---------------------------------------- ----------
Incremental backup and recovery TRUE
Instead-of triggers TRUE
Parallel backup and recovery TRUE
Parallel execution TRUE
Parallel load TRUE
Point-in-time tablespace recovery TRUE
Fine-grained access control TRUE
Proxy authentication/authorization TRUE
Change Data Capture TRUE
Plan Stability TRUE
Online Index Build TRUE

PARAMETER VALUE
---------------------------------------- ----------
Coalesce Index TRUE
Managed Standby TRUE
Materialized view rewrite TRUE
Materialized view warehouse refresh TRUE
Database resource manager TRUE
Spatial TRUE
Export transportable tablespaces TRUE
Transparent Application Failover TRUE
Fast-Start Fault Recovery TRUE
Sample Scan TRUE
Duplexed backups TRUE

PARAMETER VALUE
---------------------------------------- ----------
Java TRUE
OLAP Window Functions TRUE
Block Media Recovery TRUE
Fine-grained Auditing TRUE
Application Role TRUE
Enterprise User Security TRUE
Oracle Data Guard TRUE
OLAP TRUE
Basic Compression TRUE
Join index TRUE
Trial Recovery TRUE

PARAMETER VALUE
---------------------------------------- ----------
Data Mining TRUE
Online Redefinition TRUE
Streams Capture TRUE
File Mapping TRUE
Block Change Tracking TRUE
Flashback Table TRUE
Flashback Database TRUE
Transparent Data Encryption TRUE
Backup Encryption TRUE
Unused Block Compression TRUE
Result Cache TRUE

PARAMETER VALUE
---------------------------------------- ----------
SQL Plan Management TRUE
SecureFiles Encryption TRUE
Real Application Testing TRUE
Flashback Data Archive TRUE
DICOM TRUE
Active Data Guard TRUE
Server Flash Cache TRUE
Advanced Compression TRUE
XStream TRUE
Deferred Segment Creation TRUE

65 rows selected.