Tuesday, December 27, 2011

Finding related tables recursively


When we refresh a set of tables from one database to another, we also have to refresh all direct and indirect parent/child tables of that set to preserve data integrity. Here is a query to find all those related tables.

select
    distinct related_table
from
    ( /* All related tables in the database */
      ( /* Related tables are parent tables in this subquery */
       select
              parent_con.owner       related_table_owner,
              parent_con.table_name  related_table,
              child_con.owner        given_table_owner,
              child_con.table_name   given_table
       from
              all_constraints  parent_con,
              all_constraints  child_con
       where
              child_con.r_constraint_name = parent_con.constraint_name
      )
      union
      ( /* Related tables are child tables in this subquery */
       select
              child_con.owner        related_table_owner,
              child_con.table_name   related_table,
              parent_con.owner       given_table_owner,
              parent_con.table_name  given_table
       from
              all_constraints  parent_con,
              all_constraints  child_con
       where
              child_con.r_constraint_name = parent_con.constraint_name
      )
    )
connect by nocycle prior related_table = given_table
start with
        given_table_owner = 'MURTY'
    and given_table in ('TAB1', 'TAB2')
order by 1;
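The CONNECT BY above walks the foreign-key graph in both directions, starting from the given tables. As a rough sketch of the same idea outside the database, here is a breadth-first expansion in Python over a hypothetical edge list (the table names and FK edges below are made up purely for illustration):

```python
# Hypothetical foreign-key edges: (parent_table, child_table).
edges = [("TAB1", "TAB3"), ("TAB3", "TAB4"), ("TAB5", "TAB2"), ("TAB6", "TAB7")]

def related_tables(start):
    """All tables reachable from `start` via FK links in either direction."""
    # Treat the FK graph as undirected, mirroring the UNION of the two subqueries.
    neighbors = {}
    for parent, child in edges:
        neighbors.setdefault(parent, set()).add(child)
        neighbors.setdefault(child, set()).add(parent)
    seen, stack = set(start), list(start)
    while stack:
        table = stack.pop()
        for other in neighbors.get(table, ()):
            if other not in seen:      # like NOCYCLE: never revisit a table
                seen.add(other)
                stack.append(other)
    return sorted(seen)

print(related_tables({"TAB1", "TAB2"}))
# ['TAB1', 'TAB2', 'TAB3', 'TAB4', 'TAB5'] -- TAB6/TAB7 are unrelated
```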

BIND (DNS Server) sample configuration

Below is a sample DNS configuration for the example.com domain. It includes
a) an MX record for the example.com domain
b) a single hostname resolving to multiple IPs (used for SCAN IPs in Oracle grid infrastructure 11gR2)
c) subdomain delegation (used for GNS configuration in Oracle grid infrastructure 11gR2)
d) auto-generation of records (to generate systematic names)

/etc/named.conf
 
options {
        directory "/var/named";
        // forward only;
        forwarders { 10.0.0.1; };
};

zone "example.com." IN {
        type master;
        file "example.com.db";
};

zone "0.168.192.in-addr.arpa." IN {
        type master;
        file "192.168.0.db";
};

  

/var/named/example.com.db
; Default TTL for records in this zone
$TTL 1H

; Start of Authority record
@                               SOA     ns   nsadmin.example.com.   (
                                                        11    ; serial
                                                        3H    ; refresh
                                                        1H    ; retry
                                                        1W    ; expiry
                                                        1H    ; minimum
                                                )

; Name server record of this zone
                                NS      ns

; MX Record for mail server
                                MX      10 mail

; Address record for name server itself
ns                              A       192.168.0.1

; Other address records
server5                         A       192.168.0.5
; Address record with FQDN name
server6.example.com.            A       192.168.0.6

; CNAMEs
mail                            CNAME   server5.example.com.
www                             CNAME   server6

; Single hostname resolving to multiple IPs
; IP addresses will be given as answers in round-robin fashion
cluster1-scan                   A       192.168.0.51
                                A       192.168.0.52
                                A       192.168.0.53

; Delegating to subdomain grid.example.com
; gns.grid.example.com is name server for grid.example.com domain
grid                            NS      gns.grid
; Glue record for delegation
gns.grid                        A       192.168.0.65

; Auto generation of records
$GENERATE 11-20 server$         A       192.168.0.$

 
/var/named/192.168.0.db
; Default TTL for records in this zone
$TTL 1H

; Start of Authority record
@                       SOA     ns.example.com. nsadmin.example.com. (
                                                           4     ; serial
                                                           3H    ; refresh
                                                           1H    ; retry
                                                           1W    ; expiry
                                                           1H    ; minimum
                                                   )
; Name server record of this zone
                        NS      ns.example.com.

; Pointer record of name server
1                       PTR     ns.example.com.

; Other PTR records
5                       PTR     server5.example.com.
6                       PTR     server6.example.com.

51                      PTR     cluster1-scan.example.com.
52                      PTR     cluster1-scan.example.com.
53                      PTR     cluster1-scan.example.com.

; Auto generation of records
$GENERATE 11-20 $       PTR     server$.example.com.
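The $GENERATE directives above are shorthand for ten A records and ten PTR records. A small Python sketch of the expansion BIND performs (a simplification: the real directive also supports offsets, width modifiers, and other record types):

```python
def generate(start, stop, lhs, rhs):
    """Expand a simplified $GENERATE range: '$' is replaced by the iterator value."""
    return [(lhs.replace("$", str(i)), rhs.replace("$", str(i)))
            for i in range(start, stop + 1)]

# Forward zone: $GENERATE 11-20 server$  A  192.168.0.$
a_records = generate(11, 20, "server$", "192.168.0.$")
# Reverse zone: $GENERATE 11-20 $  PTR  server$.example.com.
ptr_records = generate(11, 20, "$", "server$.example.com.")

print(a_records[0])     # ('server11', '192.168.0.11')
print(ptr_records[-1])  # ('20', 'server20.example.com.')
```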


Saturday, September 17, 2011

Finding if an index is created with constraint, or created before constraint

Oracle database needs an index to enforce a primary key constraint on a table. If no index already exists on the primary key column(s), it creates a unique index while creating the constraint. If an index already exists (either unique or non-unique, but not bitmap), it uses the existing index to enforce the constraint.

Here is an example with two tables: table t1 uses an existing unique index to enforce its primary key, while for table t2 a unique index is created when the constraint is added.

SQL> create table t1 (a number, b number);

Table created.

SQL> create unique index pk_t1 on t1(a);
 
Index created.

SQL> alter table t1 add constraint pk_t1 primary key (a); -- Uses existing index pk_t1

Table altered. 
SQL> select index_name from user_indexes where table_name='T1';

INDEX_NAME
------------------------------
PK_T1

SQL> create table t2 (a number, b number);

Table created.

SQL> alter table t2 add constraint pk_t2 primary key (a); -- Creates a new index pk_t2

Table altered.

SQL> select index_name from user_indexes where table_name='T2';

INDEX_NAME
------------------------------
PK_T2

SQL> alter table t1 disable constraint pk_t1; -- Disabling constraint will not drop index here

Table altered.

SQL> select index_name from user_indexes where table_name='T1';

INDEX_NAME
------------------------------
PK_T1

SQL> alter table t2 disable constraint pk_t2; -- Disabling constraint will drop index here

Table altered.

SQL> select index_name from user_indexes where table_name='T2';

no rows selected

SQL> alter table t1 enable constraint pk_t1;

Table altered.

SQL> select index_name from user_indexes where table_name='T1';

INDEX_NAME
------------------------------
PK_T1

SQL> alter table t2 enable constraint pk_t2; --Enabling constraint will create index again

Table altered.

SQL> select index_name from user_indexes where table_name='T2';

INDEX_NAME
------------------------------
PK_T2

Neither index can be dropped while it is being used to enforce a constraint:

SQL> drop index pk_t1;
drop index pk_t1
*
ERROR at line 1:
ORA-02429: cannot drop index used for enforcement of unique/primary key

SQL> drop index pk_t2;
drop index pk_t2
*
ERROR at line 1:
ORA-02429: cannot drop index used for enforcement of unique/primary key

If we want to control whether the index is dropped or not, we can use the clause "keep index" or "drop index" while disabling the primary key.

alter table t1 disable constraint pk_t1 drop index;

alter table t2 disable constraint pk_t2 keep index;

But the default behaviour, whether the index is dropped or kept, depends on whether the index was created during creation/enabling of the constraint or existed before. The question here is how to know whether the index was created with the constraint or existed before it (and was used to enforce it). I could not find a data dictionary view that exposes this information (searched in 10.2 and 11.2). So I traced a "disable constraint" statement on a table and found which dictionary tables Oracle uses to look it up. Here is the answer I got. Note that executing these queries needs the SELECT ANY DICTIONARY privilege.
 
SQL> select i.property from sys.obj$ o, sys.ind$ i where o.name='PK_T1' and i.obj# = o.obj#;

PROPERTY
----------
     1

SQL> select i.property from sys.obj$ o, sys.ind$ i where o.name='PK_T2' and i.obj# = o.obj#;

PROPERTY
----------
4097

So, the information about whether the index was created with the constraint is stored in the ind$ table, in the property column. It uses the 13th bit (value 4096) as a flag: if that bit is set, the index was created with the constraint and will be dropped when the constraint is dropped or disabled. The query below decodes the flag and gives the required information.

SQL> select decode(bitand(i.property,4096),
                   4096, 'Created with constraint',
                   'Created before constraint')
from sys.obj$ o, sys.ind$ i
where o.name='PK_T2' and i.obj# = o.obj#;

DECODE(BITAND(I.PROPERTY,40
---------------------------
Created with constraint

SQL> select decode(bitand(i.property,4096),
                   4096, 'Created with constraint',
                   'Created before constraint')
from sys.obj$ o, sys.ind$ i
where o.name='PK_T1' and i.obj# = o.obj#;

DECODE(BITAND(I.PROPERTY,40
---------------------------
Created before constraint

The above queries also apply to a unique index created with a unique key constraint.
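The decode is just a bit test: value 4096 corresponds to the flag bit in ind$.property, so checking it in any language is a single AND. A small Python sketch mirroring the queries above:

```python
INDEX_CREATED_WITH_CONSTRAINT = 4096   # flag bit in ind$.property

def index_origin(prop):
    """Mirror the DECODE(BITAND(...)) logic from the dictionary queries."""
    if prop & INDEX_CREATED_WITH_CONSTRAINT:
        return "Created with constraint"
    return "Created before constraint"

print(index_origin(1))     # PK_T1 -> Created before constraint
print(index_origin(4097))  # PK_T2 -> Created with constraint
```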

Creating a block device for ASM from a file using iscsi target in RHEL 6

When we want to use a disk for ASM but do not have any unpartitioned space on the system, we can create a file on disk and use it as a disk with the steps below. Here we create a file in /vdisks and map it as the block device /dev/asmdiskA.

1. Create directory /vdisks
# mkdir /vdisks

2. To share the files in this directory using the iscsi target, we need to set the SELinux context tgtd_var_lib_t on it.
# semanage fcontext -a -t tgtd_var_lib_t "/vdisks(/.*)?"
# restorecon -R /vdisks
# ls -ldZ /vdisks
drwxr-xr-x. root root system_u:object_r:tgtd_var_lib_t:s0 /vdisks

3. Create a 2 GB file asmdisk1.img. The SELinux context of the file will be set to tgtd_var_lib_t automatically.
# dd if=/dev/zero of=/vdisks/asmdisk1.img bs=1024 count=2097152
2097152+0 records in
2097152+0 records out
2147483648 bytes (2.1 GB) copied, 27.8373 s, 77.1 MB/s

# ls -lhZ /vdisks
-rw-r--r--. root root unconfined_u:object_r:tgtd_var_lib_t:s0 asmdisk1.img
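The dd parameters determine the file size: bs × count bytes. A quick check of the arithmetic (any bs/count pair with the same product would do):

```python
bs, count = 1024, 2097152      # the dd parameters used above
size_bytes = bs * count
print(size_bytes)              # 2147483648
print(size_bytes == 2 * 2**30) # True: exactly 2 GiB
```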

4. Configure the iscsi target to share the file by appending the lines below to /etc/tgt/targets.conf.
<target iqn.2011-10.com.example:asm1>
    <backing-store /vdisks/asmdisk1.img>
        vendor_id MYVENDOR
        scsi_id 456789
    </backing-store>
</target>

5. Start the iscsi target and make it start automatically at boot.
# service tgtd start
Starting SCSI target daemon: [ OK ]
# chkconfig tgtd on

6. Start the iscsi initiator and discover the targets. The target is on the same machine here, so we use the loopback address (127.0.0.1).
# service iscsi start
Starting iscsi: [ OK ]
# iscsiadm --mode discovery --portal 127.0.0.1 --type sendtargets
Starting iscsid: [ OK ]
127.0.0.1:3260,1 iqn.2011-10.com.example:asm1

7. Once the target is discovered, we can log in to it using the command below.
# iscsiadm --mode discovery --type sendtargets --portal 127.0.0.1 -l
127.0.0.1:3260,1 iqn.2011-10.com.example:asm1
Logging in to [iface: default, target: iqn.2011-10.com.example:asm1, portal: 127.0.0.1,3260]
Login to [iface: default, target: iqn.2011-10.com.example:asm1, portal: 127.0.0.1,3260] successful.

After this, a new disk is visible in the fdisk -l output.

# fdisk -l
Disk /dev/sda: 500.1 GB, 500107862016 bytes
..trimmed for clarity..
Disk /dev/sdc: 2147 MB, 2147483648 bytes
..trimmed for clarity..
Disk /dev/sdc doesn't contain a valid partition table

8. Now we have the disk available, but it will not be available after a reboot (even if we execute chkconfig iscsi on). The reason is that the target is started after the initiator. So we have to change the order in which the services start at boot.
# cd /etc/rc5.d/
# ls *tgtd*
S39tgtd
# ls *iscsi*
S07iscsid S13iscsi

For this we edit the scripts /etc/init.d/iscsid and /etc/init.d/iscsi as below.
/etc/init.d/iscsid:
Change this line
# chkconfig: 345 7 89
to
# chkconfig: 345 40 89

/etc/init.d/iscsi:
Change this line
# chkconfig: 345 13 89
to
# chkconfig: 345 41 89

Executing the following commands recreates the symbolic links in the rc*.d directories with the desired numbering.
# chkconfig iscsid off
# chkconfig iscsid on
# chkconfig iscsi off
# chkconfig iscsi on
# cd /etc/rc5.d/
# ls *tgtd*
S39tgtd
# ls *iscsi*
S40iscsid S41iscsi
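Boot order in the rc directories is simply the lexical sort of the SNN prefixes, which is why renumbering works: tgtd (S39) now sorts before iscsid (S40) and iscsi (S41). A sketch of the ordering rule, using the link names from the listings above:

```python
def boot_order(links):
    """Start scripts run in lexical order of their two-digit SNN prefix."""
    return sorted(link for link in links if link.startswith("S"))

before = ["S39tgtd", "S07iscsid", "S13iscsi"]
after  = ["S39tgtd", "S40iscsid", "S41iscsi"]

print(boot_order(before))  # initiator starts before the target: broken
print(boot_order(after))   # target starts first, as required
```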

9. The block device is now attached as /dev/sdc, but we want this name to be persistent, with the custom name /dev/asmdiskA. It should also be owned by user oracle and group oinstall. So we configure udev rules accordingly, based on the scsi id of the device.
We can find the scsi id using the command
# scsi_id --whitelisted --replace-whitespace --device=/dev/sdc
1456789

The udev rule we write here maps the device to /dev/asmdiskA with owner oracle, group oinstall and 0660 permissions.
Create the file /etc/udev/rules.d/55-asmdisk.rules with the line below. The RESULT entry must be the output of the scsi_id command above.
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM="/sbin/scsi_id --whitelisted --replace-whitespace /dev/$name", RESULT=="1456789", NAME="asmdiskA", OWNER="oracle", GROUP="oinstall", MODE="0660"
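Conceptually, udev evaluates each "==" match key against the device and, only when all of them match, applies the assignment keys. A toy sketch of that matching in Python (the real udev engine does far more; the device dictionaries below are hypothetical):

```python
import fnmatch

RULE = {
    "match": {"KERNEL": "sd*", "SUBSYSTEM": "block", "RESULT": "1456789"},
    "assign": {"NAME": "asmdiskA", "OWNER": "oracle",
               "GROUP": "oinstall", "MODE": "0660"},
}

def apply_rule(device):
    """Return the rule's assignments if every match key fits the device, else None."""
    for key, pattern in RULE["match"].items():
        if not fnmatch.fnmatch(device.get(key, ""), pattern):
            return None
    return RULE["assign"]

print(apply_rule({"KERNEL": "sdc", "SUBSYSTEM": "block", "RESULT": "1456789"}))
print(apply_rule({"KERNEL": "sda", "SUBSYSTEM": "block", "RESULT": "other"}))  # None
```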

Once we restart iscsi service, /dev/asmdiskA will be ready.
# service iscsi restart
Stopping iscsi: [ OK ]
Starting iscsi: [ OK ]
# ls -l /dev/asmdiskA
brw-rw----. 1 oracle oinstall 8, 32 Sep 16 11:23 /dev/asmdiskA

After completing all the steps above, we can restart the system to verify that the device is also available after a reboot.