
Thursday, December 20, 2012

How to track failed login attempts in Oracle


These are the audit settings on the database (output of "show parameter audit"):

audit_file_dest          string    /lapps/ag2/oappsr12/db/tech_st/11.1.0/rdbms/audit
audit_sys_operations     boolean   FALSE
audit_syslog_level       string
audit_trail              string    DB, EXTENDED
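
The audit_trail value here is already DB, EXTENDED. If it is not set on your instance, a minimal sketch of enabling it (audit_trail is a static parameter, so the database needs a bounce):

-- set in the spfile, then restart the instance for it to take effect
ALTER SYSTEM SET audit_trail=DB,EXTENDED SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP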

I run the following query to find out who is trying to log in with an incorrect username/password (ORA-01017) and whose failed attempts locked the account (ORA-28000).

select userid,
       userhost,
       decode(returncode, 1017, 'Login Error', 'Account Locked') "ISSUE",
       spare1,
       to_char(cast(from_tz(cast(to_date(to_char(ntimestamp#, 'DD/MM/YYYY HH:MI PM'),
                                         'DD/MM/YYYY HH:MI PM') as timestamp),
                            'GMT') at local as timestamp),
               'DD/MM/YYYY HH:MI PM') "Time",
       sqltext,
       comment$text
from sys.aud$
where ( returncode = 1017 or returncode = 28000 )
order by ntimestamp# desc;
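
When returncode 28000 shows up, the account has hit its failed-login limit. A hedged example of releasing it once the source of the bad passwords has been identified (SCOTT is just an illustrative username):

-- unlock the account
ALTER USER scott ACCOUNT UNLOCK;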

Wednesday, December 19, 2012

Setup VNC Server on Linux

Configure VNC Server on Linux

1) Install VNC server packages
# yum install tigervnc-server
2) Edit the "/etc/sysconfig/vncservers" file to configure the required displays. The following entries enable VNC for display numbers ":2" and ":3". Notice multiple "display:user" pairs are defined on a single line, but the arguments for each display are defined separately.
VNCSERVERS="2:root 3:oracle"
VNCSERVERARGS[2]="-geometry 1280x1024 -nolisten tcp -localhost"
VNCSERVERARGS[3]="-geometry 1280x1024"
3) Set the VNC password for any users defined in the "/etc/sysconfig/vncservers" file.
# vncpasswd
Password:
Verify:
#

# su - oracle
$ vncpasswd
Password:
Verify:
$ exit
logout
# 
4) Enable the "vncserver" service for autostart and start the service.
# chkconfig vncserver on
# service vncserver start
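
To test it from a client machine, something like the following should work; myserver is an illustrative hostname, and display :N listens on TCP port 5900+N (display :2 above was started with -localhost, so it is only reachable through an SSH tunnel):

$ vncviewer myserver:3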

Stop Firewall for 12c grid control

Make sure that no firewall is enabled. It is usually enabled by default. In Redhat or Fedora, you can quickly disable the firewall:

First, log in to the Linux machine. In a terminal window or shell prompt, type:



 service iptables stop 
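
Stopping the service only lasts until the next reboot; if you want the firewall to stay off (a judgment call, security-wise), it can also be disabled permanently:

 chkconfig iptables off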
 
 If you see the error below while opening the 12c Grid Control EM URL, then stop iptables.
 
 
The connection has timed out
The server at ***** is taking too long to respond.
The site could be temporarily unavailable or too busy. Try again in a few moments.
If you are unable to load any pages, check your computer's network  connection.
If your computer or network is protected by a firewall or proxy, make sure 
that Firefox is permitted to access the Web. 

Friday, November 23, 2012

12c grid Agent deployment error

If you see the error below while deploying the 12c agent, set the property it mentions to true in agentpush.properties, and make sure you add the correct target hostname in /etc/hosts.

You can either set the "oracle.sysman.prov.agentpush.enablePty" property to true in the "/u01/app/oracle/oms12cr2/oms/sysman/prov/agentpush/agentpush.properties" file, pass -S argument to the sudo command, or enable the "visiblepw" flag in the sudoers file
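
The first option is usually the simplest; a sketch of the one-line change in the properties file quoted above:

# /u01/app/oracle/oms12cr2/oms/sysman/prov/agentpush/agentpush.properties
oracle.sysman.prov.agentpush.enablePty=true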


Wednesday, November 21, 2012

ex: 0602-101 Out of memory saving lines for undo

# Avoid problems when using vi to edit large files (files with many lines) in AIX
export EXINIT="set ll=20000000"
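
To make this the default for every new session, the same export can be appended to the shell profile (a sketch; adjust the profile file to your shell):

echo 'export EXINIT="set ll=20000000"' >> ~/.profile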
 
 

Monday, November 19, 2012

How to use DBMS_METADATA.GET_DDL for ddl

DBMS_METADATA.GET_DDL


Oracle introduced the DBMS_METADATA package's GET_DDL function in 9i to retrieve the DDL of a particular object type. It takes two or three arguments depending on the type of object you want to retrieve the DDL for.

GENERAL USAGE
set long 300000 pages 0 lines 140
SELECT DBMS_METADATA.GET_DDL('OBJECT_TYPE','OBJECT_NAME','OWNER') FROM DUAL;

The 'OWNER' argument is sometimes optional if you are connected as the same user who owns the object you are trying to retrieve the DDL for. Pasted below are the statements to generate the DDL for most of the object types.

select dbms_metadata.get_ddl('CLUSTER','C_MLOG#','SYS') from dual;
select dbms_metadata.get_ddl('CONTEXT','LT_CTX') from dual;
select dbms_metadata.get_ddl('DB_LINK','PROD.WORLD','ADAM') from dual;
select dbms_metadata.get_ddl('DB_LINK','TEST.WORLD','PUBLIC') from dual;
select dbms_metadata.get_ddl('FUNCTION','TO_DATE_FUNC','SCOTT') from dual;
select dbms_metadata.get_ddl('INDEX','REPCAT$_REPCOLUMN_PK','SYSTEM') from dual;
select dbms_metadata.get_ddl('JAVA_SOURCE','java_util','ADAM') from dual;
select dbms_metadata.get_ddl('JAVA_SOURCE','/6c363944_Dumper','SYS') from dual;
select dbms_metadata.get_ddl('LIBRARY','UTL_SMT_LIB','SYS') from dual;
select dbms_metadata.get_ddl('MATERIALIZED_VIEW','BS_TAB_MV','SYS') from dual;
select dbms_metadata.get_ddl('MATERIALIZED_VIEW_LOG','MLOG$_BS_TABLE','SYS') from dual;
select dbms_metadata.get_ddl('OPERATOR','OLAP_EXPRESSION') from dual;
select dbms_metadata.get_ddl('OPERATOR','OLAP_EXPRESSION','SYS') from dual;
select dbms_metadata.get_ddl('PACKAGE','XMLPARSER') from dual;
select dbms_metadata.get_ddl('PACKAGE','XMLPARSER','SYS') from dual;
select dbms_metadata.get_ddl('PACKAGE_BODY','STATSPACK','PERFSTAT') from dual;
select dbms_metadata.get_ddl('PROCEDURE','ORA$_SYS_REP_AUTH','SYSTEM') from dual;
select dbms_metadata.get_ddl('SEQUENCE','STATS$SNAPSHOT_ID','PERFSTAT') from dual;
select dbms_metadata.get_ddl('SYNONYM','/2fddc42_paintARGB_PRE_ONTO_S5','PUBLIC') from dual;
select dbms_metadata.get_ddl('TABLE','SQLEXPERT_PLAN1','SYSTEM') from dual;
select dbms_metadata.get_ddl('TABLESPACE','USERS') from dual;
select dbms_metadata.get_ddl('TRIGGER','DEF$_PROPAGATOR_TRIG','SYSTEM') from dual;
select dbms_metadata.get_ddl('TYPE','XMLSEQ_IMP_T','SYS') from dual;
select dbms_metadata.get_ddl('TYPE_BODY','ORACLE_LOADER','SYS') from dual;
select dbms_metadata.get_ddl('VIEW','DBA_PROPAGATION','SYS') from dual;
select dbms_metadata.get_ddl('VIEW','DBA_PROPAGATION') from dual;
select dbms_metadata.get_ddl('PACKAGE','DBMS_METADATA','SYS') from dual;
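
The shape of the generated DDL can also be tuned with DBMS_METADATA.SET_TRANSFORM_PARAM before calling GET_DDL. A hedged sketch that adds SQL terminators and drops the storage clauses (EMP/SCOTT are illustrative):

-- session-level transform parameters; run these before the GET_DDL calls
EXEC DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SQLTERMINATOR',TRUE);
EXEC DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'STORAGE',FALSE);
select dbms_metadata.get_ddl('TABLE','EMP','SCOTT') from dual;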

Sunday, November 18, 2012

How to redirect grid agent to another OMS in 12cGrid Control

 Redirecting Oracle Management Agent to Another OMS

If you want to redirect or repoint your Oracle Management Agent (Management Agent), which is already communicating with an Oracle Management Service (OMS), to communicate and upload data to another OMS, then follow these steps:
  1. Stop the Management Agent:
    $/bin/emctl stop agent
  2. Remove the Management Agent instance home:
    rm -rf
  3. From the Enterprise Manager Cloud Control console, remove the targets monitored by the Management Agent.
  4. Create a new instance home for the Management Agent so that it directs to the new OMS:
    $/core/12.1.0.2.0/sysman/install/agentDeploy.sh AGENT_BASE_DIR= OMS_HOST= EM_UPLOAD_PORT= AGENT_REGISTRATION_PASSWORD= AGENT_INSTANCE_HOME= -configOnly
    Note:
    The specified agent base directory location and the new agent instance home location map to locations on the same host, where the Management Agent was already configured. The OMS host name, of course, maps to the other host where the new OMS is configured, that is, the OMS with which you want the Management Agent to communicate now.
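
Once the agent comes up against the new OMS, its status and upload can be verified from the new agent instance home's bin directory (a sketch):

$ emctl status agent
$ emctl upload agent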

Wednesday, November 14, 2012

How to spool data to excel sheet from Oracle

#sqlplus user/password
SQL> SET MARKUP HTML ON ENTMAP ON SPOOL ON PREFORMAT OFF ;
SQL> SPOOL test.xls;
SQL> SELECT * FROM TAB;
SQL> SPOOL OFF;
------------------------------------------------------------------------

To suppress the SQL prompts, feedback, and extra lines and get a clean spreadsheet-friendly report, I do the following.

export ORACLE_SID=sid
export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
export ORACLE_HOME=home path
export PATH=$ORACLE_HOME/bin:$PATH
export DD=`date +%d%m%Y`
sqlplus username/password <<EOF
show user;
alter session set nls_date_format='dd/mm/yyyy';
set pagesize 0 trimspool on feedback off lines 900;
set feedback off;
SET VERIFY    OFF;
SET TRIMSPOOL ON ;
set trimspool on;
set pages 1500;
set lines 1000;
set underline off;
set colsep ','
set echo off
define DD=$DD
@sqlscript.sql
exit;
EOF


in sqlscript.sql (the DD value set by the shell is passed in through the SQL*Plus define above, since shell variables are not expanded inside the .sql file):
spool filename_&DD..csv
select * from emp;
spool off;
spool filename2.csv
select * from dept;
spool off

Wednesday, October 31, 2012

12c Grid Control New Features

New Features in 12c Grid Control


1. Database Creation Via Enterprise Manager Cloud Control
2. Database Upgrade Via Enterprise Manager Cloud Control
3. Database Cloning Enhancements
4. Oracle Exadata Server Management Enhancements
5. Manage Automatic Storage Management (ASM) Clusters as a Target
6. Database Configuration Compliance Standards Support
7. Emergency Performance
8. Database Backup and Restore Enhancements
9. Database System Discovery Enhancements
10. Change Plans Added to Change Management Pack
11. Compare Period Advisor
12. Compare Functionality
13. Active Reports
14. Real Application Testing and Data Masking Integration
15. Application Templates for Data Masking and Data Subsetting
16. Data Subsetting
17. Application Data Model Support for Data Masking
18. Reversible Data Masking
19. Performance Diagnostics Enhancements
20. Streams and XStreams Support
-----------------------------------------------------------------------------------------------------------

1. Database Creation Via Enterprise Manager Cloud Control

A wizard allows you to create an Oracle Database from within Enterprise Manager Cloud Control. You can create different configurations, including Single Instance and Real Application Clusters (RAC) databases, as well as file system and Automatic Storage Management (ASM) storage options.

2. Database Upgrade Via Enterprise Manager Cloud Control

You can now upgrade Single Instance and Real Application Clusters (RAC) Oracle databases through Cloud Control. This feature makes it possible to manage upgrades from a single console. You no longer have to access each individual database machine to perform upgrades.

3. Database Cloning Enhancements

Cloning procedures have been enhanced to capture configuration properties as well as the software payload. This is particularly useful when migrating databases from development to test to production or the reverse. A new EMCLI verb (clone_database) allows for database cloning using the same job type as the existing Clone Database feature of Cloud Control.

4. Oracle Exadata Server Management Enhancements

Oracle Exadata management capabilities now provide support for full target lifecycle management:

- Automatically discover Exadata targets
- Automatically create an Enterprise Manager System for end-to-end monitoring
- Provide extensive administration capabilities for databases, Exadata cells and Infiniband switches
- Simplify performance diagnostics with the help of in-depth performance charts covering all Exadata components

5. Manage Automatic Storage Management (ASM) Clusters as a Target

You can now manage clustered ASM resources as a single target, instead of each individual ASM instance having to be managed separately. Incident management and metric monitoring can be managed for the entire cluster.

6. Database Configuration Compliance Standards Support

Oracle database configuration data can now be managed within the new configuration and compliance standards frameworks.

7. Emergency Performance

This feature allows a DBA to diagnose and troubleshoot a hung or extremely slow database using the memory access mode. This mode is designed to bypass the SQL information retrieval layer and read performance statistics directly from the SGA of the target.

8. Database Backup and Restore Enhancements

You can now use Enterprise Manager Cloud Control to centrally maintain the settings for Oracle database and file system backups. This enhancement enables you to apply uniform settings to multiple databases and file systems when backing up multiple objects—all in one operation. Users can perform a backup on multiple databases concurrently or sequentially in one deployment procedure. An Oracle Home OSB tape backup can be restored either to the original or to a different location, and the restored Oracle Home can be reconfigured to function in the newly restored location.

9. Database System Discovery Enhancements

As the database system is now built upon the new target and association model, you can use it to monitor and manage a database’s storage, connectivity, and high availability. This also enables you to easily monitor and manage the applications that are dependent on the database. The database discovery functionality is enhanced to work with the new discovery framework and to provide a reliable workflow to create a database system.

10. Change Plans Added to Change Management Pack

As part of the Oracle Change Management Pack, the new Change Plans function allows application developers and database administrators to encapsulate schema changes needed to be made to a database into a “change plan,” which can be used to document, capture, and apply schema changes. Change Plans are also integrated with developer and DBA tasks into SQL*Developer and Oracle Enterprise Manager task automation. This integration reduces the manual processes between the various stakeholders involved in the process of promoting planned changes across enterprise databases while ensuring the integrity of the process.

11. Compare Period Advisor

This feature compares the performance of a database over two different time ranges. It analyzes changes in performance, workload, configuration, and hardware to highlight changes between the two time periods. The Compare Period Advisor gives the DBA the ability to compare two arbitrary periods of time.

12. Compare Functionality

The Compare functionality has been enhanced with new capabilities such as template support, system level comparison, and change notification. Users can now selectively include or ignore types of differences. Output of a comparison can easily be saved and exported, both in printable (for example, plain text) and data-centric (for example, CSV) formats. Users can select comparison start and end dates and view a history of changes for composite targets.

13. Active Reports

A new Active Reports function allows users to save performance data into an HTML file. Once saved, the report can be used for offline analysis or sent to other users, including Oracle Support. Active Reports enhances the visual representation of performance data and facilitates the convenient exchange of complex data.

14. Real Application Testing and Data Masking Integration

Real Application Testing and Data Masking integration provides users with the ability to perform secure testing in situations where data in production needs to be shared by nonproduction users due to organization or business requirements. Typically testing is done in a nonproduction environment or by a different group or organization. This integration addresses a common requirement that the data used for testing be shared in a manner that adheres to data privacy and compliance regulations.

15. Application Templates for Data Masking and Data Subsetting

This feature provides predefined data masking and data subsetting templates for applications. It allows users to automatically create test systems based on best practices recommendations.

16. Data Subsetting

Data subsetting provides the ability to create a smaller sized copy of the original production data that can be given to developers for testing. While it is a data subset, the referential relationships are preserved so that the data set is complete. This allows enterprises to lower storage costs while making production data available to developers for testing, without having to incur the storage footprint of the entire production database.

17. Application Data Model Support for Data Masking

The application data model (ADM) now stores the sensitive data elements used to generate mask definitions dynamically. Instead of having to manually discover sensitive data, the application data model identifies and stores the sensitive data elements.

18. Reversible Data Masking

Using encryption and decryption algorithms, reversible masking allows encryption of a user’s data deterministically into a format chosen by the user as a regular expression. Unmasking reverses the process to revert back to the original data. This feature is useful in environments where sensitive data needs to be masked and sent to a third party for processing. Coupling integrated masking with the application data model (ADM), an application’s data model is now available for certain packaged applications and can serve as a knowledge base containing sensitive column and data relationships.

19. Performance Diagnostics Enhancements

With the interactive user interface in the Active Session History (ASH) Viewer, users now can visualize the many performance dimensions that were not available to them in earlier releases. The Enhanced Enterprise Manager Performance and Top Activity pages allow users to visualize the multidimensional data in ASH. The ASH viewer enhances the performance troubleshooting capabilities of a DBA by providing the facility to detect skews in workload. Emergency ADDM adds performance diagnostics for databases suffering from severe performance problems.

20. Streams and XStreams Support

Streams and XStreams configurations can now be managed and monitored using Cloud Control. In addition to improvements in configuration and performance monitoring screens, logical change record (LCR) tracking is available for high-level diagnosis of replication issues. Cloud Control also simplifies the management and monitoring of replicated environments.

Tuesday, October 30, 2012

How to delete the cygwin sshd server

1) Check the services list

cygrunsrv -L

2) stop the service

cygrunsrv --stop sshd

3) remove the service

cygrunsrv --remove sshd

Shut down Oracle Database

Which shutdown option is best for the situation in Oracle

Well, as Oracle DBAs, we have been taught and have learnt the various database shutdown modes, NORMAL | IMMEDIATE | TRANSACTIONAL | ABORT, to be used as the situation demands.

When the NORMAL or IMMEDIATE mode doesn't work, as a last resort we use 'SHUTDOWN ABORT' to terminate an active instance as quickly as possible, leaving the database in an inconsistent state. Of course, the subsequent database startup requires an instance recovery, which the SMON background process performs automatically. Having said that, this mode sometimes carries a real risk of data corruption, specifically in pre-8.1.6 versions.

Beyond a doubt, 'SHUTDOWN ABORT' is the fastest mode of database shutdown. Nevertheless, we are sometimes afraid of using it for the reasons mentioned above. In order to make a decision between a clean shutdown and a shutdown abort, one can do the following exercise:

Determine the amount of rollback required (in bytes) for a clean database shutdown:

      select sum(used_ublk) * <block size of the undo/rollback segment tablespace>
      from v$transaction;

If the amount of rollback required for a clean shutdown is very little, then go ahead with 'SHUTDOWN IMMEDIATE'. If the amount of rollback required is huge and time consuming, use the 'SHUTDOWN ABORT' command, preferably if you are on a version later than 8.1.6.

Bring up the database in RESTRICT MODE subsequently to verify the rollback progress:

        select sum(distinct(ktuxesiz)) 
        from x$ktuxe where ktuxecfl = 'DEAD';

Upon rollback completion, shut down the database cleanly using 'SHUTDOWN IMMEDIATE'.
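
Putting it together, a minimal sketch of the sequence when the pending rollback turns out to be large:

SHUTDOWN ABORT
STARTUP RESTRICT
-- repeat until this returns 0 or no rows, i.e. the dead transactions are fully rolled back
select sum(distinct(ktuxesiz)) from x$ktuxe where ktuxecfl = 'DEAD';
SHUTDOWN IMMEDIATE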


Reference:
What Is The Fastest Way To Cleanly Shutdown An Oracle Database? [ID 386408.1]

Sunday, September 30, 2012

if condition options in shell scripting





Primary                      Meaning
[ -a FILE ]                  True if FILE exists.
[ -b FILE ]                  True if FILE exists and is a block-special file.
[ -c FILE ]                  True if FILE exists and is a character-special file.
[ -d FILE ]                  True if FILE exists and is a directory.
[ -e FILE ]                  True if FILE exists.
[ -f FILE ]                  True if FILE exists and is a regular file.
[ -g FILE ]                  True if FILE exists and its SGID bit is set.
[ -h FILE ]                  True if FILE exists and is a symbolic link.
[ -k FILE ]                  True if FILE exists and its sticky bit is set.
[ -p FILE ]                  True if FILE exists and is a named pipe (FIFO).
[ -r FILE ]                  True if FILE exists and is readable.
[ -s FILE ]                  True if FILE exists and has a size greater than zero.
[ -t FD ]                    True if file descriptor FD is open and refers to a terminal.
[ -u FILE ]                  True if FILE exists and its SUID (set user ID) bit is set.
[ -w FILE ]                  True if FILE exists and is writable.
[ -x FILE ]                  True if FILE exists and is executable.
[ -O FILE ]                  True if FILE exists and is owned by the effective user ID.
[ -G FILE ]                  True if FILE exists and is owned by the effective group ID.
[ -L FILE ]                  True if FILE exists and is a symbolic link.
[ -N FILE ]                  True if FILE exists and has been modified since it was last read.
[ -S FILE ]                  True if FILE exists and is a socket.
[ FILE1 -nt FILE2 ]          True if FILE1 has been changed more recently than FILE2, or if FILE1 exists and FILE2 does not.
[ FILE1 -ot FILE2 ]          True if FILE1 is older than FILE2, or if FILE2 exists and FILE1 does not.
[ FILE1 -ef FILE2 ]          True if FILE1 and FILE2 refer to the same device and inode numbers.
[ -o OPTIONNAME ]            True if shell option "OPTIONNAME" is enabled.
[ -z STRING ]                True if the length of "STRING" is zero.
[ -n STRING ] or [ STRING ]  True if the length of "STRING" is non-zero.
[ STRING1 == STRING2 ]       True if the strings are equal. "=" may be used instead of "==" for strict POSIX compliance.
[ STRING1 != STRING2 ]       True if the strings are not equal.
[ STRING1 < STRING2 ]        True if "STRING1" sorts before "STRING2" lexicographically in the current locale.
[ STRING1 > STRING2 ]        True if "STRING1" sorts after "STRING2" lexicographically in the current locale.
[ ARG1 OP ARG2 ]             "OP" is one of -eq, -ne, -lt, -le, -gt or -ge. These arithmetic binary operators return true if "ARG1" is equal to, not equal to, less than, less than or equal to, greater than, or greater than or equal to "ARG2", respectively. "ARG1" and "ARG2" are integers.
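
A short sketch combining a few of the tests above:

#!/bin/bash
FILE=/etc/passwd
if [ -f "$FILE" ] && [ -r "$FILE" ]; then
    echo "$FILE exists and is readable"
elif [ ! -e "$FILE" ]; then
    echo "$FILE does not exist"
fi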


Thursday, September 27, 2012

How to Copy virtual machines into ESXi using the vSphere Client

How to copy vmdk to esxi



Rather than copying a virtual machine straight onto the ESXi hardware node, there is a quick and easy way to upload it via the VMware vSphere Client. This can be done as follows:
  1. In the vSphere client, click on the server.
  2. Click on the summary tab.
  3. Right click on datastore -> Browse Datastore
  4. Use the icon with the up arrow in front of some disks (Upload files to this datastore)
  5. Upload the folder or files that you require.
  6. Create a new (or use an existing) VM and use the vmdk file you have uploaded.
 

Saturday, August 18, 2012

SSH Automatic Login

source:192.168.1.11 target:192.168.1.9


From the source host:

1) ssh-keygen -t dsa

2) cat /oracle1/oraprod/.ssh/id_dsa.pub | ssh -l oraprod 192.168.1.9 'cat >> /oracle1/oraprod/.ssh/authorized_keys'
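
After copying the key, it is worth tightening the permissions on the target and testing the login, since sshd normally ignores the key if they are too open (a hedged sketch):

3) ssh -l oraprod 192.168.1.9 'chmod 700 ~/.ssh; chmod 600 ~/.ssh/authorized_keys'

4) ssh -l oraprod 192.168.1.9 hostname
(the second command should print the hostname without prompting for a password)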

Thursday, July 5, 2012

Increasing Swap Space on Linux

The requirement for increasing the Swap size.
Increasing the swap space at runtime: imagine this, you start the OUI and your pre-installation check fails due to insufficient swap space. I agree that you ought to check all this before starting the installation, but just in case you missed it, you can always increase it on the fly.

Check the memory on your server
[root@host root] # free -m
Now say you need to increase it by 500 MB for your server. First locate a place where you can spare this 500 MB; in my case I found it in /stage.
Use the dd command to create a swapfile
# cd /stage
# dd if=/dev/zero of=swapfile bs=1024 count=512000
512000+0 records in
512000+0 records out
# ls -ltr
drwx------ 2 root root     16384 May  1  2006 lost+found
-rw-r--r-- 1 root root 524288000 Nov 28 13:58 swapfile
Next issue the following two commands
# mkswap swapfile
Setting up swapspace version 1, size = 524283 kB
# swapon swapfile
Now check your memory again
# free -m
Bingo! Here is your increased swap.
To make this change permanent, add the following line to /etc/fstab:
/stage/swapfile swap swap defaults 0 0
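
A quick way to confirm the new swap area is active (a sketch):

# swapon -s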

Tuesday, June 19, 2012

select any table privilege





•If you have O7_DICTIONARY_ACCESSIBILITY=TRUE then SELECT ANY TABLE privilege provides access to all SYS and non-SYS objects.

•If you have O7_DICTIONARY_ACCESSIBILITY=FALSE then SELECT ANY TABLE privilege provides access only to non-SYS objects.

•If only SELECT_CATALOG_ROLE is enabled then it provides access to all SYS views only.

•If only SELECT ANY DICTIONARY privilege is enabled then it provides access to SYS schema objects only.

•If both the SELECT ANY TABLE and SELECT ANY DICTIONARY privileges are enabled, then they allow access to all SYS and non-SYS objects.

•The SELECT ANY DICTIONARY privilege and SELECT_CATALOG_ROLE are not affected by the O7_DICTIONARY_ACCESSIBILITY setting.
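
A quick way to check where a given account stands (a sketch; SCOTT is illustrative):

show parameter O7_DICTIONARY_ACCESSIBILITY
select privilege from dba_sys_privs where grantee='SCOTT' and privilege in ('SELECT ANY TABLE','SELECT ANY DICTIONARY');
select granted_role from dba_role_privs where grantee='SCOTT' and granted_role='SELECT_CATALOG_ROLE';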

How to select tables with special characters (like _) in Oracle


Example SQL query :
select table_name from dba_tables where owner='ownername' AND table_name like '%\_%' escape '\'

One more example
where ksppinm like '%\_io\_%' ESCAPE '\';

Oracle Escape Characters
Oracle allows the assignment of special escape characters to tell Oracle that a character should be interpreted literally. Certain characters, such as the underscore "_", are not interpreted literally because they have special meaning within Oracle.
So we have to use the ESCAPE clause when selecting tables whose names contain special characters such as _.



Tuesday, June 12, 2012

apply cpu patch

Apply a CPU patch on a Linux server

CPU patch location on the server:
/opt/oracle/July2010/9655017

1) Take the outputs of the commands below.
select name from v$database;
select * from registry$history;
select * from v$version;
select * from dba_registry_history;
select count(1) from dba_objects where status like 'I%';
SELECT OBJECT_NAME,OBJECT_TYPE,owner FROM DBA_OBJECTS WHERE STATUS= 'INVALID';
cd $ORACLE_HOME/OPatch
opatch version
opatch lsinventory

2) Shut down the database and stop the listener.
3) Check the inventory path and make sure it points to the correct inventory.

4) Use the commands below to take backups of the Oracle home and the inventories.
cd /oracle10g/PRDRCD1/product/10.2
tar cvf - .|gzip -c > /oradb/PRDRCD1/oradata1/back_home/home_prdrcd1_`hostname`_`date +%Y%m%d`.tar.gz
cd /oracle10g/PRDRCD1/product/10.2/inventory
tar cvf - .|gzip -c > /oradb/PRDRCD1/oradata1/back_home/oraInvent_prdrcd1_`hostname`_`date +%Y%m%d`.tar.gz
cd /oracle10g/oraInventory
tar cvf - .|gzip -c > /oradb/PRDRCD1/oradata1/back_home/oracle10g_oraInventory_`hostname`_`date +%Y%m%d`.tar.gz

5) Take a backup of OPatch.
cd $ORACLE_HOME
cp -r OPatch OPatch_bak

6) Unzip p6880880_102000_SOLARIS64.zip (the latest OPatch) under $ORACLE_HOME.

7) Go to the CPU patch directory and apply the CPU patch.
cd /opt/oracle/July2010/9655017
export PATH=$PATH:/usr/ccs/bin
export PATH=$ORACLE_HOME/OPatch:$PATH:.
opatch version
opatch napply -skip_subset -skip_duplicate

8) Run catbundle.sql
cd $ORACLE_HOME/rdbms/admin
sqlplus /'as sysdba'
startup
@catbundle.sql cpu apply

9) Run utlrp.sql.
10) Execute the statement below.

SELECT * FROM registry$history where ID = '6452863';
If that statement returns no rows, execute the commands below:
cd $ORACLE_HOME/cpu/view_recompile
sqlplus /'as sysdba'
@recompile_precheck_jan2008cpu.sql
sql>shut immediate
startup upgrade
@view_recompile_jan2008cpu.sql
shut immediate
startup
@utlrp.sql

11) Repeat step 1 to record the post-patch information.

Friday, August 26, 2011

RAC Architecture

RAC Architecture

Oracle Real Application Clusters allows multiple instances, running on multiple nodes, to access a single database. In a standard Oracle configuration a database can be mounted by only one instance, but in a RAC environment many instances can access a single database.

Oracle RAC is heavily dependent on an efficient, highly reliable, high-speed private network called the interconnect; when designing a RAC system, make sure you get the best interconnect you can afford.

The table below describes the differences between a standard Oracle database (single instance) and a RAC environment.

Component | Single Instance Environment | RAC Environment
SGA | Instance has its own SGA | Each instance has its own SGA
Background processes | Instance has its own set of background processes | Each instance has its own set of background processes
Datafiles | Accessed by only one instance | Shared by all instances (shared storage)
Control files | Accessed by only one instance | Shared by all instances (shared storage)
Online redo logfiles | Dedicated for read/write by only one instance | Only one instance can write, but other instances can read during recovery and archiving. If an instance is shut down, log switches by other instances can force the idle instance's redo logs to be archived
Archived redo logfiles | Dedicated to the instance | Private to the instance, but other instances will need access to all required archive logs during media recovery
Flash recovery area | Accessed by only one instance | Shared by all instances (shared storage)
Alert log and trace files | Dedicated to the instance | Private to each instance; other instances never read or write to those files
ORACLE_HOME | Multiple instances on the same server accessing different databases can use the same executable files | Same as single instance, plus it can be placed on a shared file system, allowing a common ORACLE_HOME for all instances in a RAC environment

RAC Components

The major components of an Oracle RAC system are

  • Shared disk system
  • Oracle Clusterware
  • Cluster Interconnects
  • Oracle Kernel Components

[Diagram: basic architecture of the Oracle RAC environment]

[Image: processes running on a freshly installed RAC]

Disk architecture

With today's SAN and NAS disk storage systems, sharing storage is fairly easy, and it is required for a RAC environment. You can use any of the storage setups below:

  • SAN (Storage Area Networks) - generally using fibre to connect to the SAN
  • NAS ( Network Attached Storage) - generally using a network to connect to the NAS using either NFS, ISCSI
  • JBOD - direct attached storage, the old traditional way and still used by many companies as a cheap option

All of the above solutions can offer multi-pathing to reduce single points of failure (SPOFs) within the RAC environment. There is no reason not to configure multi-pathing: most of the expense is paid when configuring the first path, so an additional controller card and network/fibre cables are all that is needed to add another path to the disk.

The last thing to think about is how to set up the underlying disk structure; this is known as a RAID level. There are about 12 different RAID levels that I know of; here are the most common ones.

raid 0 (Striping)

A number of disks are concatenated together to give the appearance of one very large disk.

Advantages
Improved performance
Can Create very large Volumes

Disadvantages
Not highly available (if one disk fails, the volume fails)

raid 1 (Mirroring)

A single disk is mirrored by another disk, if one disk fails the system is unaffected as it can use its mirror.

Advantages
Improved performance
Highly Available (if one disk fails the mirror takes over)

Disadvantages
Expensive (requires double the number of disks)

raid 5

RAID stands for Redundant Array of Inexpensive Disks. In RAID 5 the data is striped with parity across 3 or more disks; if one of the disks fails, the data on the failed disk is reconstructed using the parity.

Advantages
Improved performance (read only)
Not expensive

Disadvantages
Slow write operations (caused by having to create the parity bit)

There are many other RAID levels tied to particular hardware environments; for example, EMC storage uses RAID-S and HP storage uses AutoRAID, so check with the manufacturer for the solution that gives you the best performance and resilience.

Once you have your storage attached to the servers, you have three choices for how to set up the disks:

  • Raw volumes - normally used for performance benefits; however, they are hard to manage and back up
  • Cluster filesystem - used to hold all the Oracle datafiles; it can be used by Windows and Linux, but it is not widely used
  • Automatic Storage Management (ASM) - Oracle's choice of storage management; it is a portable, dedicated and optimized cluster filesystem

I will only be discussing ASM, which I have already covered in a separate topic called Automatic Storage Management.

Oracle Clusterware

Oracle Clusterware is designed to run Oracle in cluster mode; it can support up to 64 nodes and can even be used with a vendor cluster like Sun Cluster.

The Clusterware software allows nodes to communicate with each other and forms the cluster that makes the nodes work as a single logical server. The software is run by the Cluster Ready Services (CRS) using the Oracle Cluster Registry (OCR), which records and maintains the cluster and node membership information, and the voting disk, which acts as a tiebreaker during communication failures. Consistent heartbeat information travels across the interconnect to the voting disk when the cluster is running.

The CRS has four components

  • OPROCd - Process Monitor Daemon
  • CRSd - CRS daemon; the failure of this daemon results in a node being rebooted to avoid data corruption
  • OCSSd - Oracle Cluster Synchronization Service Daemon (updates the registry)
  • EVMd - Event Manager Daemon

The OPROCd daemon provides the I/O fencing for the Oracle cluster; it uses the hangcheck timer or watchdog timer for cluster integrity. It is locked into memory and runs as a realtime process; failure of this daemon results in the node being rebooted. Fencing is used to protect the data: if a node has problems, fencing presumes the worst, protects the data and restarts the node in question. It is better to be safe than sorry.

The CRSd process manages resources such as starting and stopping services and failing over application resources; it also spawns separate processes to manage application resources. CRS manages the OCR and stores the current known state of the cluster, and it requires a public, a private and a VIP interface in order to run. OCSSd provides synchronization services among nodes; it provides access to the node membership and enables basic cluster services, including cluster group services and locking. Failure of this daemon causes the node to be rebooted to avoid split-brain situations.

The below functions are covered by the OCSSd

  • CSS provides basic Group Services support; it is a distributed group membership system that allows applications to coordinate activities to achieve a common result.
  • Group services use vendor clusterware group services when they are available.
  • Lock services provide the basic cluster-wide serialization locking functions, using a First In, First Out (FIFO) mechanism to manage locking.
  • Node services use the OCR to store data and update the information during reconfiguration; they also manage the OCR data, which is otherwise static.

The last component is the Event Management Logger, which runs the EVMd process. The daemon spawns a process called evmlogger and generates events when things happen. The evmlogger spawns new child processes on demand and scans the callout directory to invoke callouts. Death of the EVMd daemon will not halt the instance; the daemon will simply be restarted.

Quick recap

CRS Process | Functionality | Failure of the Process | Runs As
OPROCd - Process Monitor | provides basic cluster integrity services | Node restart | root
EVMd - Event Management | spawns a child process event logger and generates callouts | Daemon automatically restarted, no node restart | oracle
OCSSd - Cluster Synchronization Services | basic node membership, group services, basic locking | Node restart | oracle
CRSd - Cluster Ready Services | resource monitoring, failover and node recovery | Daemon automatically restarted, no node restart | root

The Cluster Ready Services (CRS) component is new in 10g RAC; it is installed in a separate home directory called ORACLE_CRS_HOME. It is a mandatory component, but it can be used with a third-party cluster (Veritas, Sun Cluster); by default it manages the node membership functionality along with managing regular RAC-related resources and services.

RAC uses a membership scheme, so any node wanting to join the cluster has to become a member. RAC can evict any member that it sees as a problem; its primary concern is protecting the data. You can add and remove nodes from the cluster, and the membership increases or decreases accordingly. When network problems occur, membership becomes the deciding factor on which part stays as the cluster and which nodes get evicted; a voting disk is used for this, which I will talk about later.

The resource management framework manages the resources of the cluster (disks, volumes); you can only have one resource management framework per resource. Multiple frameworks are not supported, as they can lead to undesirable effects.

Oracle Cluster Ready Services (CRS) uses a registry to keep the cluster configuration; it should reside on shared storage accessible to all nodes within the cluster. This shared storage is known as the Oracle Cluster Registry (OCR), and it is a major part of the cluster. It is automatically backed up (every 4 hours) by the daemons, and you can also back it up manually. The OCSSd uses the OCR extensively and writes the changes to the registry.

The OCR keeps details of all resources and services; it stores name and value pairs of information, such as resources, that are used to manage the resource equivalents by the CRS stack. Resources within the CRS stack are components that are managed by CRS and hold the information on their good/bad state and the callout scripts. The OCR is also used to supply bootstrap information such as ports, nodes, etc.; it is a binary file.

The OCR is loaded as a cache on each node; each node updates its cache, but only one node, called the master, is allowed to write the cache to the OCR file. Enterprise Manager also uses the OCR cache. The OCR should be at least 100MB in size. The CRS daemon updates the OCR with the status of the nodes in the cluster during reconfigurations and failures.

The voting disk (or quorum disk) is shared by all nodes within the cluster; information about the cluster is constantly being written to the disk, and this is known as the heartbeat. If for any reason a node cannot access the voting disk, it is immediately evicted from the cluster; this protects the cluster from split-brains (the Instance Membership Recovery (IMR) algorithm is used to detect and resolve split-brains), as the voting disk decides which part is really the cluster. The voting disk manages the cluster membership and arbitrates cluster ownership during communication failures between nodes. Voting is often confused with quorum; they are similar but distinct, and the definitions below detail what each means.

Voting: a vote is usually a formal expression of opinion or will in response to a proposed decision.
Quorum: the number, usually a majority of members of a body, that, when assembled, is legally competent to transact business.

The only vote that counts is the quorum member vote; the quorum member vote defines the cluster. If a node or group of nodes cannot achieve a quorum, they should not start any services, because they risk conflicting with an established quorum.

The voting disk has to reside on shared storage; it is a small file (around 20MB) that can be accessed by all nodes in the cluster. In Oracle 10g R1 you can have only one voting disk, but in R2 you can have up to 32 voting disks, allowing you to eliminate any SPOFs.
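
A hedged sketch of how the OCR and the voting disks can be checked from any node (run from the Clusterware home as a suitably privileged user; ocrcheck verifies OCR integrity and location, ocrconfig -showbackup lists the automatic backups taken by the daemons, and crsctl query css votedisk lists the configured voting disks):

# ocrcheck
# ocrconfig -showbackup
# crsctl query css votedisk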

The original virtual IP mechanism in Oracle was Transparent Application Failover (TAF); this had limitations and has now been replaced with cluster VIPs. The cluster VIPs will fail over to working nodes if a node should fail; these public IPs are configured in DNS so that users can access them. The cluster VIPs are different from the cluster interconnect IP addresses and are only used to access the database.

The cluster interconnect is used to synchronize the resources of the RAC cluster, and also to transfer some data from one instance to another. This interconnect should be private, highly available and fast with low latency; ideally it should be on at least a private 1Gb network. Whatever hardware you are using, the NICs should use multi-pathing (Linux - bonding, Solaris - IPMP). You can use crossover cables in a QA/DEV environment, but they are not supported in a production environment; crossover cables also limit you to a two-node cluster.

Oracle Kernel Components

The kernel components relate to the background processes, buffer cache and shared pool; managing these resources without conflicts and corruption requires special handling.

In RAC as more than one instance is accessing the resource, the instances require better coordination at the resource management level. Each node will have its own set of buffers but will be able to request and receive data blocks currently held in another instance's cache. The management of data sharing and exchange is done by the Global Cache Services (GCS).

All the resources in the cluster group form a central repository called the Global Resource Directory (GRD), which is distributed. Each instance masters some set of resources and together all instances form the GRD. The resources are equally distributed among the nodes based on their weight. The GRD is managed by two services called Global Caches Services (GCS) and Global Enqueue Services (GES), together they form and manage the GRD. When a node leaves the cluster, the GRD portion of that instance needs to be redistributed to the surviving nodes, a similar action is performed when a new node joins.

RAC Background Processes

Each node has its own background processes and memory structures; there are additional processes beyond the norm to manage the shared resources, and these additional processes maintain cache coherency across the nodes.

Cache coherency is the technique of keeping multiple copies of a buffer consistent between different Oracle instances on different nodes. Global cache management ensures that access to a master copy of a data block in one buffer cache is coordinated with the copy of the block in another buffer cache.

The sequence of an operation would go as below:

  1. When instance A needs a block of data to modify, it reads the block from disk; before reading, it must inform the GCS (DLM). GCS keeps track of the lock status of the data block by keeping an exclusive lock on it on behalf of instance A.
  2. Now instance B wants to modify that same data block; it too must inform GCS. GCS will then request instance A to release the lock; thus GCS ensures that instance B gets the latest version of the data block (including instance A's modifications) and then exclusively locks it on instance B's behalf.
  3. At any one point in time, only one instance has the current copy of the block, thus keeping the integrity of the block.

GCS maintains data coherency and coordination by keeping track of the lock status of each block that can be read/written by any node in the RAC. GCS is an in-memory database that contains information about current locks on blocks and about instances waiting to acquire locks. This is known as Parallel Cache Management (PCM). The Global Resource Manager (GRM) helps to coordinate and communicate the lock requests from Oracle processes between instances in the RAC. Each instance has a buffer cache in its SGA; to ensure that each RAC instance obtains the block that it needs to satisfy a query or transaction, RAC uses two services, the GCS and GES, which maintain records of the lock status of each data file and each cached block using the GRD.

So what is a resource? It is an identifiable entity: it basically has a name or a reference, and it can be an area in memory, a disk file or an abstract entity. A resource can be owned or locked in various states (exclusive or shared). Any shared resource is lockable, and if it is not shared no access conflict will occur.

A global resource is a resource that is visible to all the nodes within the cluster. Data buffer cache blocks are the most obvious and most heavily used global resource; transaction enqueues and database data structures are other examples. GCS handles data buffer cache blocks and GES handles all the non-data-block resources.

All caches in the SGA are either global or local: the dictionary and buffer caches are global, while the large and Java pool caches are local. Cache fusion is used to read the data buffer cache from another instance instead of getting the block from disk; thus cache fusion moves current copies of data blocks between instances (hence why you need a fast private network). GCS manages the block transfers between the instances.

Finally we get to the processes

Oracle RAC Daemons and Processes
LMSn
Lock Manager Server process - GCS

This is the cache fusion part and the most active process; it handles the consistent copies of blocks that are transferred between instances. It receives requests from LMD to perform lock requests, and it rolls back any uncommitted transactions. There can be up to ten LMS processes running, and they can be started dynamically if demand requires it.

They manage lock manager service requests for GCS resources and send them to a service queue to be handled by the LMSn process; they also handle global deadlock detection and monitor for lock conversion timeouts.

As a performance gain you can increase this process's priority to make sure CPU starvation does not occur.

You can see the statistics of this daemon by looking at the view X$KJMSDP.

LMON
Lock Monitor Process - GES

This process manages the GES; it maintains consistency of the GCS memory structures in case of process death. It is also responsible for cluster reconfiguration and lock reconfiguration (node joining or leaving); it checks for instance deaths and listens for local messaging.

A detailed log file is created that tracks any reconfigurations that have happened.

LMD
Lock Manager Daemon - GES

This manages the enqueue manager service requests for the GCS. It also handles deadlock detection and remote resource requests from other instances.

You can see the statistics of this daemon by looking at the view X$KJMDDP.

LCK0
Lock Process - GES manages instance resource requests and cross-instance call operations for shared resources. It builds a list of invalid lock elements and validates lock elements during recovery.
DIAG
Diagnostic Daemon - this is a lightweight process; it uses the DIAG framework to monitor the health of the cluster. It captures information for later diagnosis in the event of failures, and it will perform any necessary recovery if an operational hang is detected.
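
To see which of these RAC background processes are running on each instance, a hedged query against gv$bgprocess (names starting with LM cover LMSn, LMON and LMD):

select inst_id, name, description
from gv$bgprocess
where paddr <> '00'
  and (name like 'LM%' or name in ('LCK0','DIAG'))
order by inst_id, name;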