
Wednesday, May 28, 2008

Sharing windows folders with unix guests using the samba filesystem (smbfs) under virtualbox

The need for windows sharing..


A few days ago, I wanted to do an Oracle Applications 12i installation on an x86 linux distribution using virtualbox. The media was available on an external hard disk, but instead of copying the 30+ GB of media onto the unix virtual machine, I thought: why not just share it between the windows host OS and the unix guest OS?

The quest for sharing..


And so began my quest for sharing files between windows and a unix guest OS (SuSE linux) under virtualbox. I discovered that to share folders with virtualbox, the guest OS additions have to be installed.
I have also seen that upgrading the kernel breaks the guest OS additions at times. After upgrading the guest OS from SuSE linux 9 to SuSE linux 9.3, the guest additions stopped working. The same problem exists with vmware tools.

So first, I created a shared folder in the virtual machine's definition:

[Screenshot: the shared folder defined in the virtual machine's settings]



After starting the Guest OS (Ubuntu),
root@gverma-laptop:/home/gverma# mkdir /mnt/winshare

root@gverma-laptop:/home/gverma# mount -t vboxsf public /mnt/winshare

Mind you, the filesystem type is NOT vboxfs, it is vboxsf (the s and the f are swapped). If you do not take note of this, you might end up wasting a lot of time.

Now, we can check that the shared folder did indeed mount:
root@gverma-laptop:/home/gverma# mount | grep vbox
public on /mnt/winshare type vboxsf (rw)

root@gverma-laptop:/home/gverma# ls /mnt/winshare
CyberLink desktop.ini Downloads Music Recorded TV Videos
Desktop Documents Favorites Pictures StarzEntertainment Vongo

Note that the vboxadd and vboxvfs modules (this one has vfs after vbox, unlike the vboxsf filesystem type) needed to be loaded for this to work:
root@gverma-laptop:/home/gverma# lsmod | grep vb
vboxvfs 42432 1
vboxadd 24872 9 vboxvfs
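
If you want the share to come back automatically after every reboot, an /etc/fstab entry along the following lines should do it (just a sketch, reusing the share name public and the mount point /mnt/winshare from above, and assuming the guest additions modules get loaded at boot):

public   /mnt/winshare   vboxsf   defaults   0   0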

The Caveat..


But every silver lining has a cloud: there was a caveat here for my particular problem.

The above was just a demonstration of how the shared folder could work fine. I then went on to mount some linux media for Oracle Applications 12i for the x86 intel platform. The directory mounted all right, but I had a really hard time adding execute permissions to the rapidwiz files. Even after repeated # chmod -R +x commands, the shell scripts under the media would not show the "x" execute flag.

This was very frustrating. It was probably just another virtualbox bug.

So, then I started looking around for more options. I had heard that many people had used samba with success, and hence I started scouring the internet (Google Zindabaad! - Long live, Google!) and came across a link that explained how to do it succinctly.
The good thing about sharing files from windows to unix using the samba file system (smbfs) was that I did not need to run a samba server on unix at all.

Doing it using samba filesystem..


So here are some quick and easy steps for sharing a windows share using smbfs:

First things first. You need to create a network share for the folder that you want to share. In my case, I wanted to share an external hard disk that held the media for Oracle Applications 12i, so I created a network share of the windows folder (e.g. \\GAURAV-PC\media).

On the guest OS (Ubuntu), I did the following:
# mkdir -p /mnt/media

# smbmount //GAURAV-PC/media /mnt/media -o username=gaurav,password=welcome

Please note that // (forward slashes) worked for me; some guides on the internet say that you need to use \\ (back slashes).
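
Incidentally, smbmount is essentially a front end for mount. If smbmount is not available on your system, an equivalent mount command should work (a sketch using the same share, username and password as above; as the mount output further down shows, the share ends up mounted as type cifs anyway):

# mount -t cifs //GAURAV-PC/media /mnt/media -o username=gaurav,password=welcome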

Note here that the GAURAV-PC hostname was defined in the local hosts file with a resolvable IP address on the LAN. The Ubuntu virtual machine was connected to the windows host through a host only networking interface, which allowed the host and guest OS to talk to each other.
# more /etc/hosts
192.168.0.2 GAURAV-PC

Please note here that a ping to GAURAV-PC HAS TO work when you give the smbmount command. I wasted many hours running strace on this command, just because it was hanging. The issue was that, due to some strange happenstance, the ping to GAURAV-PC (the windows host) was hanging.

The workaround was to ping the Ubuntu virtual machine (say 192.168.0.7) first from the host OS (windows), after which the ping to GAURAV-PC 192.168.0.2 started working from the Ubuntu guest OS. Strange, strange, I agree, but I am just a guy looking for solutions! I have seen this behaviour with Windows Vista Home Premium edition as the host OS.

Make sure the smbfs module is loaded in the guest OS. If not, then load it using # modprobe smbfs:
root@gverma-laptop:/mnt/media/oracle_12.0.4/startCD/Disk1/rapidwiz# lsmod | grep smbfs
smbfs 66296 0

Note that the Samba server does not need to run:
root@gverma-laptop:/mnt/media/oracle_12.0.4/startCD/Disk1/rapidwiz# ps -ef | grep smb
root 6116 5616 0 00:00:00 grep smb

This was my samba configuration file:
root@gverma-laptop:/mnt/media/oracle_12.0.4/startCD/Disk1/rapidwiz# more /usr/share/samba/smb.conf
[global]
workgroup = WORKGROUP
security = share
usershare path = /var/lib/samba/usershare
usershare max shares = 100
usershare allow guests = yes
usershare owner only = yes
wins support = no
netbios name = GAURAV-PC

I am not exactly sure if setting the netbios name parameter was a major factor in making it work or not, but I guess it certainly did not hurt.
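
As a troubleshooting aid, the smbclient utility (part of the samba client packages, assuming it is installed on the guest) can list the shares that the windows host exports, which quickly tells you whether the share is reachable from the guest at all:

# smbclient -L //GAURAV-PC -U gaurav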

Now when I ran the mount command, the windows network share showed up there just fine:
root@gverma-laptop:/mnt/media/oracle_12.0.4/startCD/Disk1/rapidwiz# mount
/dev/sda1 on / type ext3 (rw,relatime,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
/sys on /sys type sysfs (rw,noexec,nosuid,nodev)
varrun on /var/run type tmpfs (rw,noexec,nosuid,nodev,mode=0755)
varlock on /var/lock type tmpfs (rw,noexec,nosuid,nodev,mode=1777)
udev on /dev type tmpfs (rw,mode=0755)
devshm on /dev/shm type tmpfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
lrm on /lib/modules/2.6.24-16-generic/volatile type tmpfs (rw)
securityfs on /sys/kernel/security type securityfs (rw)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
gvfs-fuse-daemon on /home/gverma/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=gverma)
//GAURAV-PC/media on /mnt/media type cifs (rw,mand)

Thankfully, after doing this I was able to make the unix shell scripts executable and launch the rapid wizard for the 12i installation. That's all I wanted.

Conclusion


Well, so there you have it. If you are using a windows host OS and you upgrade your guest OS kernel, the guest additions may stop working due to compilation errors. In such a case, you are better off using the samba file system, as it's much easier to deal with and not too much of a pain to set up.

Saturday, May 3, 2008

A poor man's guide for creating iscsi targets without using external USB hard disks

A special need..


If you need to expose iscsi targets to other machines for discovery, but do not want to invest in an external USB hard disk, or do not want to keep a USB external hard disk connected to a desktop 24x7, there is a better way of creating and exposing iscsi target logical volumes.

This method does not require installing openfiler either, as was done in my earlier article Combining Openfiler and Virtualbox (Ubuntu guest OS on windows host).

A simple solution..


The method is simple: create logical volumes based on SCSI devices on an Ubuntu Hardy Heron 8.04 installation running under virtualbox.
The advantage of using Ubuntu is that it presents the hard disks as scsi devices (with the naming convention /dev/sd* instead of /dev/hd*). I have seen that in other unix OSes like SuSE linux, the local disks are listed as /dev/hd* (IDE). Ubuntu seems to have better disk drivers.

If you use Hardy Heron 8.04, it is *really* easy to get the iscsitarget package working. If you try to make the iscsitarget package work with the Gutsy Gibbon 7.10 release of Ubuntu, there is a very good chance that you will run into a lot of compilation issues. I went down this path myself and later realized that a lot of bugs existed for the Gutsy Gibbon release (e.g. https://bugs.launchpad.net/ubuntu/gutsy/+source/iscsitarget/+bug/160104, http://ubuntuforums.org/showthread.php?t=692651), and eventually found https://bugs.launchpad.net/ubuntu/+source/iscsitarget/+bug/145539, which said that the module was fixed in Hardy Heron 8.04.

So instead of breaking my head over making the iscsitarget package work on Gutsy Gibbon's 2.6.22 kernel, I decided to give Hardy Heron 8.04 (still in beta at the time) a try.

Thankfully, with a little effort, I was able to make it work. In this article, I present a simple scenario in which we create three logical volumes that can easily be discovered by other virtualbox iscsi initiator machines using the open-iscsi package. This is the beauty of the entire approach.

Here is a block diagram of the end configuration:

[Diagram: block diagram of the end configuration]


Iscsi support in Hardy Heron 8.04 Ubuntu..


The first thing to understand is that, as per the published features of Hardy Heron 8.04 at https://wiki.ubuntu.com/HardyHeron/Beta#head-da07b62e1e43afd0bef06ab8b60d2502c734a0f9, iscsi support is enabled out of the box if we add iscsi=true to the boot options during the installation.

[Screenshot: the Hardy Heron 8.04 feature page describing iscsi installer support]



While installing the Hardy 8.04 OS using virtualbox (or any other virtualization software), remember to add iscsi=true to the boot options (after pressing F6).

[Screenshot: the installer boot options with iscsi=true added]


Configuring iscsi targets..


I would like to give credit to a really awesome link I found through google: http://www.linuxconfig.org/Linux_lvm_-_Logical_Volume_Manager, which I rate as one of the best articles for doing this!

Some other relevant links are:

https://help.ubuntu.com/community/SettingUpLVM-WithoutACleanInstall
http://t3flyers.wordpress.com/2007/04/24/logical-volume-manager-on-ubuntu-feisty-704/
http://www.howtoforge.com/linux_lvm

After Ubuntu installation, the output of uname -a looks like this:
HARDY# uname -a
Linux gverma-laptop 2.6.24-16-generic #1 SMP Thu Apr 10 13:23:42 UTC 2008 i686 GNU/Linux

And the output of fdisk -l looks like this:
HARDY# fdisk -l

Disk /dev/sda: 16.5 GB, 16592666624 bytes
255 heads, 63 sectors/track, 2017 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0004ccc2
Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        1927    15478596   83  Linux
/dev/sda2            1928        2017      722925    5  Extended
/dev/sda5            1928        2017      722893+  82  Linux swap / Solaris

At this point, I added two more hard disks to the virtualbox virtual machine. Virtualbox 1.5.6 allows up to three hard disk devices to be attached to a machine.

[Screenshot: the virtual machine settings with the two additional hard disks attached]



After booting HARDY up again, /dev/sdb and /dev/sdc were also visible in the output of fdisk -l. Using fdisk, I created a single partition in each device that spanned the entire device, resulting in /dev/sdb1 and /dev/sdc1:
HARDY# fdisk -l

...
Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xd310045f

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        2610    20964793+  83  Linux

Disk /dev/sdc: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x93d4c692

Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        2610    20964793+  83  Linux
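
For reference, creating the single full-size partition on each disk takes only a few keystrokes in fdisk. A rough sketch of the session for /dev/sdb (the same applies to /dev/sdc):

HARDY# fdisk /dev/sdb
Command (m for help): n                            <-- new partition
Command action: p                                  <-- primary partition
Partition number (1-4): 1
First cylinder (1-2610, default 1): <Enter>
Last cylinder or +size (default 2610): <Enter>     <-- use the whole disk
Command (m for help): w                            <-- write the table and exit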

Also make sure that you configure host only networking so that other virtual machines (the scsi initiators) are able to ssh/telnet into HARDY. Please look at this article to understand how host only networking can be set up: Virtualbox Case Study: Making host only networking work between two Ubuntu Guest OS (virtual machine) on Windows Vista host



Getting the right packages installed..


Now, we need to create the physical volumes, the volume group, and the logical volumes, in that order. The logical volumes will serve as SCSI targets that can be discovered by other iscsi initiator machines or virtual machines, as per your configuration.

Make sure you reload the package definitions from the configured repositories (get the latest packages that are published from the ubuntu repositories). You can do this from the Synaptic Package Manager:

[Screenshot: Synaptic Package Manager - reloading the package information]



Now, you should install the lvm2 package in Synaptic Package Manager:

[Screenshot: Synaptic Package Manager - installing the lvm2 package]
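
If you prefer the command line over Synaptic, the same packages can be pulled in with apt-get (a sketch; iscsitarget is only needed a few steps further down, but it does no harm to install it here as well):

HARDY# sudo apt-get update
HARDY# sudo apt-get install lvm2 iscsitarget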


Create the physical volumes:


HARDY# lvm

lvm> pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created
lvm> pvcreate /dev/sdc1
Physical volume "/dev/sdc1" successfully created
lvm> pvdisplay
--- NEW Physical volume ---
PV Name               /dev/sdb1
VG Name
PV Size               19.99 GB
Allocatable           NO
PE Size (KByte)       0
Total PE              0
Free PE               0
Allocated PE          0
PV UUID               OHKhOq-AMG7-XtYz-7CrP-q2VH-2b53-tW0yMO

--- NEW Physical volume ---
PV Name               /dev/sdc1
VG Name
PV Size               19.99 GB
Allocatable           NO
PE Size (KByte)       0
Total PE              0
Free PE               0
Allocated PE          0
PV UUID               M9qXXB-GVxC-FZjI-9Zb7-IhcH-rbhb-vQ3aek

Create the volume group:


lvm> vgcreate vg /dev/sdb1 /dev/sdc1
Volume group "vg" successfully created
lvm> vgdisplay
--- Volume group ---
VG Name               vg
System ID
Format                lvm2
Metadata Areas        2
Metadata Sequence No  1
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                0
Open LV               0
Max PV                0
Cur PV                2
Act PV                2
VG Size               39.98 GB
PE Size               4.00 MB
Total PE              10236
Alloc PE / Size       0 / 0
Free  PE / Size       10236 / 39.98 GB
VG UUID               GkUMNq-3atR-qKTK-lG0b-gM0n-budV-Uc4lH8

lvm>

Create the logical volumes:


lvm> lvcreate -L 1G -n ocr vg
Logical volume "ocr" created
lvm> lvcreate -L 1G -n vote vg
Logical volume "vote" created
lvm> lvcreate -L 35G -n asm vg
Logical volume "asm" created

If you get an error message saying:
/proc/misc: No entry for device-mapper
you can get around it by issuing sudo modprobe dm-mod
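
To avoid repeating the modprobe after every reboot, the module name can be appended to /etc/modules so that it is loaded automatically at boot time (a sketch, assuming the standard Ubuntu /etc/modules mechanism):

HARDY# echo dm-mod | sudo tee -a /etc/modules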

lvm> lvdisplay
--- Logical volume ---
LV Name                /dev/vg/ocr
VG Name                vg
LV UUID                a2ARIQ-11dn-kdoB-cHCd-gtoR-aokO-HGqoo4
LV Write Access        read/write
LV Status              available
# open                 0
LV Size                1.00 GB
Current LE             256
Segments               1
Allocation             inherit
Read ahead sectors     0
Block device           254:0

--- Logical volume ---
LV Name                /dev/vg/vote
VG Name                vg
LV UUID                1DqYL0-Ptsx-9kmE-UHmn-PmE9-ebVm-bfr0r0
LV Write Access        read/write
LV Status              available
# open                 0
LV Size                1.00 GB
Current LE             256
Segments               1
Allocation             inherit
Read ahead sectors     0
Block device           254:1

--- Logical volume ---
LV Name                /dev/vg/asm
VG Name                vg
LV UUID                UKWcPz-aNGl-VdPm-51wa-Ewvm-ahvD-Y7N57v
LV Write Access        read/write
LV Status              available
# open                 0
LV Size                35.00 GB
Current LE             8960
Segments               2
Allocation             inherit
Read ahead sectors     0
Block device           254:2

lvm>

Now, you can check the logical volume devices like this:
HARDY# ls -l /dev/vg
total 0
lrwxrwxrwx 1 root root 18 2008-05-03 02:42 asm -> /dev/mapper/vg-asm
lrwxrwxrwx 1 root root 18 2008-05-03 02:41 ocr -> /dev/mapper/vg-ocr
lrwxrwxrwx 1 root root 19 2008-05-03 02:41 vote -> /dev/mapper/vg-vote

You can also check the newly created devices in the /dev/disk/* directories on HARDY:
HARDY# ls -l /dev/disk/*
/dev/disk/by-id:
..
lrwxrwxrwx 1 root root 19 2008-05-03 02:42 dm-name-vg-asm -> ../../mapper/vg-asm
lrwxrwxrwx 1 root root 19 2008-05-03 02:41 dm-name-vg-ocr -> ../../mapper/vg-ocr
lrwxrwxrwx 1 root root 20 2008-05-03 02:41 dm-name-vg-vote -> ../../mapper/vg-vote
..

/dev/disk/by-path:
total 0
lrwxrwxrwx 1 root root  9 2008-05-02 22:30 pci-0000:00:01.1-scsi-0:0:1:0 -> ../../sdb
lrwxrwxrwx 1 root root 10 2008-05-03 02:34 pci-0000:00:01.1-scsi-0:0:1:0-part1 -> ../../sdb1
lrwxrwxrwx 1 root root  9 2008-05-02 22:30 pci-0000:00:01.1-scsi-1:0:1:0 -> ../../sdc
lrwxrwxrwx 1 root root 10 2008-05-03 02:35 pci-0000:00:01.1-scsi-1:0:1:0-part1 -> ../../sdc1

You need to install the iscsitarget package using Synaptic Package Manager:

[Screenshot: Synaptic Package Manager - installing the iscsitarget package]



Now, you need to configure the /etc/ietd.conf file with a Target name and Lun entry for each logical volume. The target names just need to be unique within the network and are totally up to your imagination.
HARDY# more /etc/ietd.conf
Target iqn.2001-04.com.ubuntu:scsi.disk.vg.vote
Lun 0 Path=/dev/vg/vote,Type=fileio
Target iqn.2001-04.com.ubuntu:scsi.disk.vg.ocr
Lun 0 Path=/dev/vg/ocr,Type=fileio
Target iqn.2001-04.com.ubuntu:scsi.disk.vg.asm
Lun 0 Path=/dev/vg/asm,Type=fileio

To put these entries into effect, we need to restart the iscsitarget service manually now.
Keep in mind that there is no dash (-) between iscsi and target in the name of the service, unlike in openfiler 2.2 (openfiler's IET implementation service is named iscsi-target).

HARDY# /etc/init.d/iscsitarget restart
Removing iSCSI enterprise target devices: succeeded.
Stopping iSCSI enterprise target service: succeeded.
Removing iSCSI enterprise target modules: succeeded.
Starting iSCSI enterprise target service: succeeded.
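
A quick sanity check that the target daemon really came up is to look for something listening on the default iscsi port 3260 (a sketch; on my setup the listener should be the IET daemon, but the exact process name may vary):

HARDY# netstat -tln | grep 3260

If a LISTEN line for port 3260 shows up, the target service is ready for initiators.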

To double check that the logical volumes have been discovered and published, you can check the content of /proc/net/iet/volume on HARDY:
::::::::::::::
/proc/net/iet/volume
::::::::::::::
tid:3 name:iqn.2001-04.com.ubuntu:scsi.disk.vg.asm
lun:0 state:0 iotype:fileio iomode:wt path:/dev/vg/asm
tid:2 name:iqn.2001-04.com.ubuntu:scsi.disk.vg.ocr
lun:0 state:0 iotype:fileio iomode:wt path:/dev/vg/ocr
tid:1 name:iqn.2001-04.com.ubuntu:scsi.disk.vg.vote
lun:0 state:0 iotype:fileio iomode:wt path:/dev/vg/vote

If you do a more of /proc/net/iet/session, you can clearly see that no iscsi initiator machines have connected to HARDY yet (there are no session sub-entries under the volume names):
HARDY# more /proc/net/iet/session
tid:3 name:iqn.2001-04.com.ubuntu:scsi.disk.vg.asm
tid:2 name:iqn.2001-04.com.ubuntu:scsi.disk.vg.ocr
tid:1 name:iqn.2001-04.com.ubuntu:scsi.disk.vg.vote

Here is a quick check of the physical volumes, volume group, and logical volumes:

HARDY# lvm
lvm> lvscan
ACTIVE '/dev/vg/ocr' [1.00 GB] inherit
ACTIVE '/dev/vg/vote' [1.00 GB] inherit
ACTIVE '/dev/vg/asm' [35.00 GB] inherit

lvm> pvscan
PV /dev/sdb1 VG vg lvm2 [19.99 GB / 0 free]
PV /dev/sdc1 VG vg lvm2 [19.99 GB / 2.98 GB free]
Total: 2 [39.98 GB] / in use: 2 [39.98 GB] / in no VG: 0 [0 ]

lvm> vgscan
Reading all physical volumes. This may take a while...
Found volume group "vg" using metadata type lvm2
lvm>

Discovering the iscsi targets from another iscsi initiator machine..


We will assume that the iscsi initiator machine is named GUTSY and has the open-iscsi package installed (this also installs the iscsiadm utility along with it). We will also assume that the IP of HARDY (the iscsi target) is 192.168.0.7.
You can leave most of the default values in /etc/iscsi.conf as they are. Make sure you disable the CHAP authentication parameters, since we have not configured them on the scsi target.
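
With open-iscsi this usually just means leaving the CHAP-related lines commented out in the initiator configuration, roughly like this (a sketch of the relevant section only; exact parameter names can differ between versions):

#node.session.auth.authmethod = CHAP
#node.session.auth.username = username
#node.session.auth.password = password
#discovery.sendtargets.auth.authmethod = CHAP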

Discover the new iscsi targets in HARDY:
GUTSY# iscsiadm -m discovery -t st -p 192.168.0.7
192.168.0.7:3260,1 iqn.2001-04.com.ubuntu:scsi.disk.vg.asm
192.168.0.7:3260,1 iqn.2001-04.com.ubuntu:scsi.disk.vg.ocr
192.168.0.7:3260,1 iqn.2001-04.com.ubuntu:scsi.disk.vg.vote

Verify the newly discovered target machine:
GUTSY# sudo iscsiadm -m discovery
192.168.0.6:3260 via sendtargets
192.168.0.7:3260 via sendtargets  --> the new iscsi target got added to the local database

Check the combination of discovered iscsi targets and logical volumes (the new ones are the 192.168.0.7 entries):
GUTSY# sudo iscsiadm -m node
192.168.0.6:3260,1 iqn.2006-01.com.openfiler:openfiler.ocr
192.168.0.6:3260,1 iqn.2006-01.com.openfiler:openfiler.vote
192.168.0.6:3260,1 iqn.2006-01.com.openfiler:openfiler.asm
192.168.0.7:3260,1 iqn.2001-04.com.ubuntu:scsi.disk.vg.asm
192.168.0.7:3260,1 iqn.2001-04.com.ubuntu:scsi.disk.vg.ocr
192.168.0.7:3260,1 iqn.2001-04.com.ubuntu:scsi.disk.vg.vote

But, at this point, we have no active sessions connected to these targets:
GUTSY# iscsiadm -m session
iscsiadm: No active sessions.

Establish connection to the scsi targets from the initiator:
GUTSY# iscsiadm -m node \
-T iqn.2001-04.com.ubuntu:scsi.disk.vg.asm \
-p 192.168.0.7 -l
Login session [iface: default, target: iqn.2001-04.com.ubuntu:scsi.disk.vg.asm, portal: 192.168.0.7,3260]
GUTSY# iscsiadm -m node \
-T iqn.2001-04.com.ubuntu:scsi.disk.vg.vote \
-p 192.168.0.7 -l
Login session [iface: default, target: iqn.2001-04.com.ubuntu:scsi.disk.vg.vote, portal: 192.168.0.7,3260]
GUTSY# iscsiadm -m node \
-T iqn.2001-04.com.ubuntu:scsi.disk.vg.ocr \
-p 192.168.0.7 -l
Login session [iface: default, target: iqn.2001-04.com.ubuntu:scsi.disk.vg.ocr, portal: 192.168.0.7,3260]

Verify the newly formed connections:
GUTSY# iscsiadm -m session
tcp: [1] 192.168.0.7:3260,1 iqn.2001-04.com.ubuntu:scsi.disk.vg.asm
tcp: [2] 192.168.0.7:3260,1 iqn.2001-04.com.ubuntu:scsi.disk.vg.vote
tcp: [3] 192.168.0.7:3260,1 iqn.2001-04.com.ubuntu:scsi.disk.vg.ocr

It would be useful to enable automatic startup/discovery of these volumes on the server and the client. For this, you need to do the following:
  • On the server (where the logical volumes are created):

HARDY# sudo gedit /etc/iscsi/iscsid.conf

Change node.startup = manual to node.startup = automatic
GUTSY# sudo /etc/init.d/open-iscsi restart

  • On the client:

GUTSY# sudo iscsiadm -m discovery -t st -p <ipaddress>
GUTSY# sudo iscsiadm -m node \
-T iqn.2001-04.com.ubuntu:scsi.disk.vg.asm \
-p <ipaddress> -o update \
-n node.conn[0].startup -v automatic
GUTSY# sudo /etc/init.d/open-iscsi restart

  • Or reboot both systems if required (make sure the server boots first). To check that the scsi devices are set up automatically and to list all your scsi iqns:

GUTSY # sudo iscsiadm -m session

Now, assuming that the TCP scsi connections have been formed, let us check whether the scsi devices are visible in fdisk -l:
GUTSY# fdisk -l

..
Disk /dev/sdb: 37.5 GB, 37580963840 bytes
64 heads, 32 sectors/track, 35840 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 1073 MB, 1073741824 bytes
34 heads, 61 sectors/track, 1011 cylinders
Units = cylinders of 2074 * 512 = 1061888 bytes
Disk identifier: 0x00000000

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 1073 MB, 1073741824 bytes
34 heads, 61 sectors/track, 1011 cylinders
Units = cylinders of 2074 * 512 = 1061888 bytes
Disk identifier: 0x00000000

Disk /dev/sdd doesn't contain a valid partition table
GUTSY#

The newly discovered devices can also be seen under /dev/disk/*:
GUTSY# ls -l /dev/disk/*
/dev/disk/by-id:
total 0
..
lrwxrwxrwx 1 root root  9 2008-05-02 23:05 scsi-149455400000000000000000001000000795600000e000000 -> ../../sdc
lrwxrwxrwx 1 root root  9 2008-05-02 23:05 scsi-1494554000000000000000000020000004f5600000e000000 -> ../../sdd
lrwxrwxrwx 1 root root  9 2008-05-02 23:05 scsi-149455400000000000000000003000000a45600000e000000 -> ../../sdb
..

/dev/disk/by-path:
total 0
lrwxrwxrwx 1 root root  9 2008-05-02 23:05 ip-192.168.0.7:3260-iscsi-iqn.2001-04.com.ubuntu:scsi.disk.vg.asm-lun-0 -> ../../sdb
lrwxrwxrwx 1 root root  9 2008-05-02 23:05 ip-192.168.0.7:3260-iscsi-iqn.2001-04.com.ubuntu:scsi.disk.vg.ocr-lun-0 -> ../../sdd
lrwxrwxrwx 1 root root  9 2008-05-02 23:05 ip-192.168.0.7:3260-iscsi-iqn.2001-04.com.ubuntu:scsi.disk.vg.vote-lun-0 -> ../../sdc
..

Relevant messages in the system log of scsi initiator (GUTSY)


# dmesg
...
[ 1156.290868] scsi2 : iSCSI Initiator over TCP/IP
[ 1156.607026] scsi 2:0:0:0: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
[ 1156.607026] sd 2:0:0:0: [sdb] 73400320 512-byte hardware sectors (37581 MB)
[ 1156.607026] sd 2:0:0:0: [sdb] Write Protect is off
[ 1156.607026] sd 2:0:0:0: [sdb] Mode Sense: 77 00 00 08
[ 1156.607026] sd 2:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[ 1156.607111] sd 2:0:0:0: [sdb] 73400320 512-byte hardware sectors (37581 MB)
[ 1156.608275] sd 2:0:0:0: [sdb] Write Protect is off
[ 1156.608293] sd 2:0:0:0: [sdb] Mode Sense: 77 00 00 08
[ 1156.610635] sd 2:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[ 1156.610665]  sdb: unknown partition table
[ 1156.619320] sd 2:0:0:0: [sdb] Attached SCSI disk
[ 1156.619398] sd 2:0:0:0: Attached scsi generic sg2 type 0
[ 1163.865083] scsi3 : iSCSI Initiator over TCP/IP
[ 1164.132972] scsi 3:0:0:0: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
[ 1164.135287] sd 3:0:0:0: [sdc] 2097152 512-byte hardware sectors (1074 MB)
[ 1164.136424] sd 3:0:0:0: [sdc] Write Protect is off
[ 1164.136438] sd 3:0:0:0: [sdc] Mode Sense: 77 00 00 08
[ 1164.138002] sd 3:0:0:0: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[ 1164.140341] sd 3:0:0:0: [sdc] 2097152 512-byte hardware sectors (1074 MB)
[ 1164.141443] sd 3:0:0:0: [sdc] Write Protect is off
[ 1164.141460] sd 3:0:0:0: [sdc] Mode Sense: 77 00 00 08
[ 1164.144746] sd 3:0:0:0: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[ 1164.144809]  sdc: unknown partition table
[ 1164.154241] sd 3:0:0:0: [sdc] Attached SCSI disk
[ 1164.154290] sd 3:0:0:0: Attached scsi generic sg3 type 0
[ 1170.533158] scsi4 : iSCSI Initiator over TCP/IP
[ 1170.798343] scsi 4:0:0:0: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
[ 1170.798343] sd 4:0:0:0: [sdd] 2097152 512-byte hardware sectors (1074 MB)
[ 1170.801229] sd 4:0:0:0: [sdd] Write Protect is off
[ 1170.801263] sd 4:0:0:0: [sdd] Mode Sense: 77 00 00 08
[ 1170.806093] sd 4:0:0:0: [sdd] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[ 1170.809875] sd 4:0:0:0: [sdd] 2097152 512-byte hardware sectors (1074 MB)
[ 1170.812347] sd 4:0:0:0: [sdd] Write Protect is off
[ 1170.812380] sd 4:0:0:0: [sdd] Mode Sense: 77 00 00 08
[ 1170.816757] sd 4:0:0:0: [sdd] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[ 1170.816757]  sdd: unknown partition table
[ 1170.828375] sd 4:0:0:0: [sdd] Attached SCSI disk
[ 1170.828446] sd 4:0:0:0: Attached scsi generic sg4 type 0

Cross checking incoming sessions at scsi target (HARDY)..


Meanwhile, on HARDY, the incoming sessions can be checked with:
HARDY# more /proc/net/iet/*
::::::::::::::
/proc/net/iet/session
::::::::::::::
tid:3 name:iqn.2001-04.com.ubuntu:scsi.disk.vg.asm
sid:281474997486080 initiator:iqn.1993-08.org.debian:01:950a218cdd1
cid:0 ip:192.168.0.6 state:active hd:none dd:none
tid:2 name:iqn.2001-04.com.ubuntu:scsi.disk.vg.ocr
sid:844424984461824 initiator:iqn.1993-08.org.debian:01:950a218cdd1
cid:0 ip:192.168.0.6 state:active hd:none dd:none
tid:1 name:iqn.2001-04.com.ubuntu:scsi.disk.vg.vote
sid:562949990973952 initiator:iqn.1993-08.org.debian:01:950a218cdd1
cid:0 ip:192.168.0.6 state:active hd:none dd:none
::::::::::::::
/proc/net/iet/volume
::::::::::::::
tid:3 name:iqn.2001-04.com.ubuntu:scsi.disk.vg.asm
lun:0 state:0 iotype:fileio iomode:wt path:/dev/vg/asm
tid:2 name:iqn.2001-04.com.ubuntu:scsi.disk.vg.ocr
lun:0 state:0 iotype:fileio iomode:wt path:/dev/vg/ocr
tid:1 name:iqn.2001-04.com.ubuntu:scsi.disk.vg.vote
lun:0 state:0 iotype:fileio iomode:wt path:/dev/vg/vote
root@gverma-laptop:/home/gverma#

Conclusion..


Congratulations! You have created a poor man's scsi target and discovered its volumes using open source software, the easy way. Now GUTSY can read from and write to the /dev/vg/asm etc. logical volumes. For more machines to do the same thing, a similar setup is needed on them, and they should also access these volumes in a clustered fashion. This kind of setup is most beneficial for creating your own Oracle 10g RAC sandbox environment or for doing installs over a shared disk.
In this case, we did not format the discovered logical volumes with an ext3 or OCFS2 file system, because it is assumed that these volumes will be used as raw devices only.

Thursday, May 1, 2008

Creating your own 10g RAC cluster at home using virtualbox and SAN targets (using openfiler, with or without a USB hard disk)

Preface


After playing around with some open source technologies like virtualbox and openfiler, we are finally at a point where it is possible to create your own network lab of an Oracle 10gR2 RAC cluster and ASM database, using iscsi targets on an external hard disk drive. The last time I checked, USB 2.0 hard disk drives were cheaply available in the market.
If you do not want to use an external USB hard disk, then please go through the article A poor man's guide for creating iscsi targets without using external USB hard disks

All it takes is a little persistence and a willingness to follow some easy tutorials. In the previous articles on this blog, I have talked about the building blocks of using the virtualbox and openfiler open source software. If we combine that knowledge with an Oracle 10g RAC setup, the following end configuration can be achieved:

[Diagram: block diagram of the end configuration]


The building blocks..


The operating system


First of all, you need to install a linux distribution of your choice using virtualbox. In this example, I chose SuSE linux 9.3. I feel that SuSE linux will be embraced by more and more enterprises in the near future, hence this decision.

For the installation, just boot from the Linux install media CD and then keep switching CDs as the install process requests. It is always better if you have an ISO file of the install media. You can get the SuSE media from www.opensuse.org

Always keep a decent amount of swap space for the OS installation. It is also better to have /tmp as a mount point separate from the root (/) partition. These are some evergreen tips that a sysadmin friend of mine (Erik Niklas) suggested to me a long time back.
A word of caution while deciding the VDI disk size for the virtualbox machine (especially relevant for version 1.5.6): the virtualbox disks cannot be resized!!

The dynamically expanding option means that the disk grows as it is used, up to the maximum limit you chose; after that, it is not possible to resize it with any Vbox utility. A workaround may be to copy the device using partition editors, but until virtualbox comes out with an enhancement for this, it is better to be safe than sorry and pick a generously large disk size, as per your judgement.

For building an Oracle 10g RAC configuration, it is better to focus all your installation efforts on one virtual machine and then clone the virtualbox disks (including the bootable disk) using the VBoxManage clonevdi command. This way, you save a lot of duplicate effort.
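
The clone itself is a one-liner run on the host OS (a sketch; rac1.vdi and rac2.vdi are made-up file names, and this is the VBoxManage syntax from the 1.5.x line, which may change in later releases):

C:\> VBoxManage clonevdi rac1.vdi rac2.vdi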

The networking configuration - Public and Private IPs


The networking building blocks of this configuration can be set up as follows:

1) RAC needs a public IP, which is usually (but not necessarily) on eth0. It is advantageous to configure the public IP as a host only networking interface when using virtualbox, because you get the dual benefit of being able to SSH into the machine from outside (e.g. the other rac node or another machine on the LAN) while internet access still works.
I tried to keep this as a static IP, but the downside was that the default gateway somehow did not work for me. However, if I kept this as a DHCP interface, the default gateway and nameservers were accessible out of the box.

A detailed example of how to do this is shown in the article Virtualbox Case Study: Making host only networking work between two Ubuntu Guest OS (virtual machine) on Windows Vista host

2) A RAC setup also needs a private network between the RAC nodes for the Global Cache Service (GCS) and synchronization. This is usually referred to as the private interconnect. It is usually configured on a private subnet like 10.10.x.x or 192.168.x.x, so that other network traffic does not impinge on the sacrosanct synchronization traffic between the RAC nodes. A sample hosts file layout is sketched after the article reference below.

A detailed example of how to do this is shown in the article Case study: Making Internal networking for talking between two linux guest OS (Ubuntu) on windows vista host
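
To make the public/private split concrete, here is what a hosts file for a two-node setup could look like (all host names and addresses below are purely illustrative):

192.168.0.11    rac1        # public IP on eth0
192.168.0.12    rac2
10.10.10.11     rac1-priv   # private interconnect on eth1
10.10.10.12     rac2-priv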

The Network Attached Storage or Storage Area Network setup..


NAS is hot! Well, what I mean is that it's a configure once, use everywhere kind of thing. You set up iSCSI target logical volumes and then discover and access them from other machines. There are packages like linux-iscsi and open-iscsi which will help you do that. As per the latest announcements, both of these open source projects have now been merged into open-iscsi.
Why should we choose Ubuntu for the hard disk drivers? Because partitions on /dev/sd* devices get discovered as separate devices, whereas all partitions on a /dev/hd* device get discovered as a single device.

There is quite promising open source software in this area too; openfiler and freenas are a couple of examples. For no particular reason, I picked openfiler and did a few experiments for discovering iscsi targets using it.

A detailed example of how to do this has been shown in the article Case Study: How to discover iscsi targets with linux-iscsi initiator package — Suse linux 9 (scsi Initiator) and openfiler (scsi target)
If you are using USB external hard disks for this exercise, there are many gotchas involved in getting them detected properly when using virtualbox on a windows host. But the golden rules mentioned in this article can be useful: Virtualbox How to: Gotchas involved in making a USB external hard disk device work on windows host

A problem with openfiler 2.2 is that it forgets the previous logical volumes etc after reboots. This situation can result in frustration, especially when you see no LUNs being discovered from the initiators. If you read the following article, there is a good chance that you will be fine: iSCSI: no LUNs detected for session when using openfiler

The Oracle 10gR2 RAC setup


With the above components in place, the stage is now set for you to install the Oracle 10g CRS software. Needless to say, there are several detailed guides available on the Oracle Technology Network on how to do this. Each operating system has its own package pre-requisites for the Oracle software to install properly.
Some more guides are available on the novell.com website at http://www.novell.com/products/server/oracle/documents.html

Check out the guides under SUSE Enterprise Linux Server 9 tab. They are pretty detailed.

Some more guides are available at http://www.nextre.it/oracledocs/rac10gonsles9.html and http://www.oracle.com/technology/pub/articles/smiley_rac10g_install.html#oracle

Suggested reading..


It is very likely that you would have come across the widely read article by Jeffrey Hunter on a similar topic. In http://www.oracle.com/technology/pub/articles/hunter_rac10gr2_iscsi.html, he deals with creating a 10g RAC on Redhat linux.

If you do not want to use an external USB hard disk for iscsi SAN targets, then please go through the article A poor man's guide for creating iscsi targets without using external USB hard disks

iSCSI: no LUNs detected for session when using openfiler

A common problem...


If you are using openfiler, at some point of time, you are bound to reboot the openfiler OS/machine. After the reboot, a common problem is that openfiler forgets or loses the logical volume/volume group information.

So, as a workaround, you need to issue the following commands:
################################
# to discover the volume groups
################################
# vgscan

#################################
# to discover the logical volumes
#################################
# lvscan

##################################
# to discover the physical volumes
##################################
# pvscan

Then, you can display them using the commands vgdisplay, lvdisplay, and pvdisplay. Similarly, there is a whole suite of related LVM commands.

Then, you STILL need to activate the discovered logical volumes using the -ay switch of lvchange:
# lvchange -ay openfiler/asm
# lvchange -ay openfiler/ocr
# lvchange -ay openfiler/vote

Now, if you run the lvscan or lvdisplay command, the status of the logical volumes will show as ACTIVE.

Remember..


AFTER you do this, you will STILL need to restart the iscsi-target service on the openfiler machine.
openfiler $> service iscsi-target restart
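
One way to avoid retyping all of this after every reboot is to drop the whole sequence into a startup script on the openfiler machine, for example /etc/rc.d/rc.local or whatever local startup hook your openfiler install provides (a sketch, reusing the volume names from above):

vgscan
lvscan
pvscan
lvchange -ay openfiler/asm
lvchange -ay openfiler/ocr
lvchange -ay openfiler/vote
service iscsi-target restart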

If you fail to do this, you will see these messages in the dmesg output on the iscsi-initiator node (not the openfiler machine): iSCSI: no LUNs detected for session. This message can be very confusing, especially since you will see that the session to the iscsi target WAS established:
initiator $> dmesg
...
...
iSCSI: bus 0 target 0 = iqn.2006-01.com.openfiler:openfiler.vote
iSCSI: bus 0 target 0 portal 0 = address 10.143.213.248 port 3260 group 1
iSCSI: starting timer thread at 286340
iSCSI: bus 0 target 0 trying to establish session to portal 0, address 10.143.213.248 port 3260 group 1
iSCSI: bus 0 target 1 = iqn.2006-01.com.openfiler:openfiler.ocr
iSCSI: bus 0 target 1 portal 0 = address 10.143.213.248 port 3260 group 1
iSCSI: bus 0 target 1 trying to establish session to portal 0, address 10.143.213.248 port 3260 group 1
iSCSI: bus 0 target 0 established session #1, portal 0, address 10.143.213.248 port 3260 group 1
iSCSI: bus 0 target 2 = iqn.2006-01.com.openfiler:openfiler.asm
iSCSI: bus 0 target 2 portal 0 = address 10.143.213.248 port 3260 group 1
iSCSI: bus 0 target 1 established session #1, portal 0, address 10.143.213.248 port 3260 group 1
iSCSI: bus 0 target 2 trying to establish session to portal 0, address 10.143.213.248 port 3260 group 1
iSCSI: bus 0 target 3 = iqn.2006-01.com.openfiler:openfiler.test
iSCSI: bus 0 target 3 portal 0 = address 10.143.213.248 port 3260 group 1
iSCSI: bus 0 target 3 trying to establish session to portal 0, address 10.143.213.248 port 3260 group 1
iSCSI: bus 0 target 2 established session #1, portal 0, address 10.143.213.248 port 3260 group 1
iSCSI: no LUNs detected for session (bus 0 target 0) to iqn.2006-01.com.openfiler:openfiler.vote
iSCSI: no LUNs detected for session (bus 0 target 1) to iqn.2006-01.com.openfiler:openfiler.ocr
iSCSI: no LUNs detected for session (bus 0 target 2) to iqn.2006-01.com.openfiler:openfiler.asm

iSCSI: bus 0 target 3 established session #1, portal 0, address 10.143.213.248 port 3260 group 1