Data Backup and Recovery

Solaris 10 Zones using Netapp Volumes

danpancamo

Hopefully Sun and NetApp will settle their childish squabbles soon...

With the growing scalability of Sun servers with CoolThreads technology (now up to 128 virtual CPUs and 128 GB of memory), it's inevitable that Solaris Zones will become a common configuration.

So the next logical step is to use shared storage for the zone data, which would allow snapshots of zones and SnapMirror replication of them to different hardware and locations...
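To make the idea concrete, here is roughly the filer-side workflow I'm picturing (the aggregate, volume and filer names are just placeholders):

filer> vol create zonedata aggr0 100g                              # dedicated flexvol for one zone's data
filer> snap create zonedata pre_patch                              # point-in-time copy before zone maintenance
drfiler> snapmirror initialize -S filer:zonedata drfiler:zonedata  # replicate to other hardware / another site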

Has anyone experimented with using NetApp for Solaris Zone storage?

Dp

5 REPLIES

karl_pottie

We have a few zones that live on an iSCSI NetApp LUN.

So far we only use this in test/dev environments, because we found there are some issues with Solaris iSCSI timeouts. For example, at boot time the iSCSI timeout is effectively zero seconds and zero retries, so if the first attempt to mount the LUN fails, your Solaris box goes into maintenance mode and just sits there until somebody looks at it.
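For reference, this is roughly how we point the Solaris 10 initiator at the filer, and what we end up doing by hand when the boot-time mount fails (the discovery address is just an example):

global# iscsiadm add discovery-address 10.11.1.121:3260      # filer's iSCSI portal
global# iscsiadm modify discovery --sendtargets enable
global# devfsadm -i iscsi                                    # build the device nodes for the LUN

# after a failed boot-time mount, once the LUN is reachable again:
global# svcs -x                                              # shows which service dropped the box to maintenance
global# svcadm clear svc:/system/filesystem/local:default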

What would be great is running Solaris zones over NFS instead of iSCSI (like VMware NFS datastores). That would give you all the NetApp advantages plus the extra reliability and robustness of NFS, with none of the management complexity of iSCSI LUNs.
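Something like the following is what I have in mind: mount a NetApp NFS export on the global zone and loop it into the zone, so the zone's data lives on the filer (paths and zone name are placeholders, and I haven't tested putting the zone root itself on NFS):

global# mount -F nfs -o rw,hard,vers=3 filer:/vol/zonedata /zonedata
global# zonecfg -z myzone
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/data          # where it appears inside the zone
zonecfg:myzone:fs> set special=/zonedata  # the NFS mount on the global zone
zonecfg:myzone:fs> set type=lofs
zonecfg:myzone:fs> end
zonecfg:myzone> commit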

netappigsupp

Hello Experts:

I would like to implement the following configuration:

1. Configure a Non-Global Solaris Zone.

2. Install Oracle 10g in the zone

3. Present NetApp LUNs to the zone

4. Implement Snap Manager for Oracle in the zone

Here is what I've done so far:

1. Configured a Non-Global Solaris Zone.

2. Installed Oracle 10g in the Zone.

3. Created volumes and LUNs on the NetApp filer and presented the LUNs to the global zone host, which has a fibre connection to the SAN (the filer side is sketched after this list).

4. Made the LUNs available to the Non-Global Zone using Solaris Zone Administration techniques
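The filer side of step 3 looked roughly like this (volume, LUN and igroup names have been changed, and the WWPN is invented):

filer> vol create oradata aggr0 200g
filer> lun create -s 100g -t solaris /vol/oradata/lun0
filer> igroup create -f -t solaris ora_hosts 10:00:00:00:c9:6b:11:22
filer> lun map /vol/oradata/lun0 ora_hosts 0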

Here is what I would like to accomplish:

1. All of the above, plus I would like to be able to present a LUN directly to the non-global zone (see the sketch after this list).

2. Implement Snap Manager for Oracle in the zone.
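For item 1, the only way I know to expose a LUN's device nodes directly to a non-global zone is a device resource in zonecfg. The zone name and device path below are only examples, and I don't know yet whether SnapManager for Oracle supports running against a LUN presented this way:

global# zonecfg -z orazone
zonecfg:orazone> add device
zonecfg:orazone:device> set match=/dev/rdsk/c3t60A98000486E5334644A2F7A4E376C58d0s*
zonecfg:orazone:device> end
zonecfg:orazone> add device
zonecfg:orazone:device> set match=/dev/dsk/c3t60A98000486E5334644A2F7A4E376C58d0s*
zonecfg:orazone:device> end
zonecfg:orazone> commit
global# zoneadm -z orazone reboot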

I wonder if anybody has any thoughts.

Thanks.

Roman.

wmccormick

Dp,

Further to what Karl said, we found that once we had more than four iSCSI LUNs on a Solaris server running Zones, we would start getting a lot of time-outs and eventually several of the LUNs would get dropped. iSCSI doesn't seem to work well for this.

I would like to see this done over NFS if Sun can't improve their iSCSI client.

W.

andrea_annoe_iks

Hi to all,

We have more than 40 iSCSI LUNs across Sun Solaris, Microsoft Windows, SUSE and Red Hat hosts.

When we do maintenance, for example reconfiguring some network interfaces, we have problems with Solaris Zones and with some Windows 2003 SP2 hosts.

The biggest problems are with Solaris 5.10.

From the switch, using the ping utility, we noticed that the NetApp loses roughly 1 packet in every 20.

Is this ICMP packet loss normal?

SWITCH1#ping
Protocol [ip]:
Target IP address: 10.11.1.121
Repeat count [5]: 1000
Datagram size [100]: 1400
Timeout in seconds [2]:
Extended commands [n]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 1000, 1400-byte ICMP Echos to 10.11.1.121, timeout is 2 seconds:
!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!
!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!
!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!.
!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!
!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!
!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!
!!.!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!
!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!
!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!
.!!!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!
!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!.!!
!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!
!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!
!!!!!!!!!!!!!!!!!!!.
Success rate is 96 percent (963/1000), round-trip min/avg/max = 1/1/28 ms

We see this same strange behaviour on every NetApp in our company.

pascalduk

andrea.annoe.iks wrote:

From the switch, using the ping utility, we noticed that the NetApp loses roughly 1 packet in every 20.

Is this ICMP packet loss normal?

Yes, this is normal, because the filers throttle ICMP pings. Check the ip.ping_throttle options.

http://now.netapp.com/NOW/knowledge/docs/ontap/rel80/html/ontap/nag/GUID-3BD32BDF-26D2-46D5-BCD9-C08E07F3B304.html
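If I remember correctly, the relevant knobs look roughly like this (option names taken from the 7-mode docs; please verify against your ONTAP release before changing anything):

filer> options ip.ping_throttle                  # list the current ping-throttle settings
filer> options ip.ping_throttle.drop_level 0     # 0 disables throttling (usually not recommended)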
