You have a VMDK on an NFS datastore that grew to hold files which have since been deleted, and you are looking for a way to reclaim that space, correct? VSC has a space reclamation utility for NFS datastores, but I believe it requires an NTFS file system inside the VMDK. With 8.3 or later, inline zero elimination should give the space back to the volume if you zero-fill the whitespace on that disk, although the VMDK itself will still appear to be full size. Otherwise, zero-fill the whitespace, then Storage vMotion it to another datastore with "thin provisioned" selected as the disk format.
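If the guest is Linux, zero-filling the whitespace can be as simple as writing a file of zeros and deleting it. A minimal sketch, where /tmp/zerofill is just a placeholder path and the count= limit keeps the demo small (drop it to actually fill all free space; on a Windows guest you would use sdelete -z instead):

```shell
# Write zeros into the file system's free space so inline zero elimination
# (or a thin-provisioned Storage vMotion) can reclaim the deleted blocks.
# count=100 limits this demo to ~100 MB; remove it to fill all free space.
dd if=/dev/zero of=/tmp/zerofill bs=1M count=100
sync                 # make sure the zeros actually reach the disk
rm -f /tmp/zerofill  # delete the fill file; the blocks stay zeroed
```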
Thanks. Glad you got it worked out. I actually wrote a doc for that whole procedure, but in this iteration of the community forums there's no good way to post it.
I think we can fix it. Since drive A is missing, see if the VM has a floppy drive. If not, add one and try it again. If it still doesn't work, point the floppy drive to a blank floppy image. If that still doesn't work, we can change the pointers so it looks for disk0, but one of them is hard to get to.
I've been holding off on upgrading mine. Thanks for posting back. So from what I see there, the loader can't find the kernel or the env file, which makes me think the drive ordering has changed. From the loader, can you run "lsdev" and post the output? It should resemble this:
VLOADER> lsdev
cd devices:
disk devices:
disk0: BIOS drive A:
disk1: BIOS drive C:
disk1s2: FAT-16
disk2: BIOS drive D:
disk2s1: FFS bad disklabel
disk2s2: FFS bad disklabel
disk3: BIOS drive E:
disk3s1: FFS bad disklabel
disk3s2: FFS bad disklabel
disk4: BIOS drive F:
pxe devices:
The kernel, ontap image, and the env file are on the FAT-16 volume on disk1s2. If that volume appears to be on a different disk number, the loader won't be able to find them at boot.
Curiosity got the better of me. After the last message on the vidconsole, the panic message is written to the comconsole. The panic happens just after PCI device detection begins. On a supported virtual platform you would see a long string of device detections at this point, including the IDE controller and the network cards. So it appears that Hyper-V isn't presenting virtual devices the vsim can work with. I wasn't expecting the network cards to work, but without even an IDE controller it will never make it to the boot menu. Here's an unmuted capture:
VLOADER> boot
x86_64/freebsd/image1/kernel data=0x921710+0x3db888 syms=[0x8+0x45300+0x8+0x2e834]
x86_64/freebsd/image1/platform.ko size 0x2b0a48 at 0xe71000
GDB: debug ports: uart
GDB: current port: uart
KDB: debugger backends: ddb gdb
KDB: current backend: gdb
get_freebsd_maxmem: 778240
bootarg.init.low_mem set. This will set the FreeBSD size to 512MB
calling find_physmem_partition for 570425344
find_physmem_partition[0]: total_size = 1048576, seg_size = 1048576, map[i+1] = 638976, map[i] = 0 shift 20
find_physmem_partition[2]: total_size = 2147483648, seg_size = 2146435072, map[i+1] = 2147418112, map[i] = 1048576 shift 20
Total physical memory 2097088K, FreeBSD physical memory 557056K
Init phys memory segments
BSD 0: 0x0000001000 ... 0x000009c000 = 634880d
BSD 2: 0x00019a6000 ... 0x0020000000 = 509976576d
VNVRAM 4: 0x0020000000 ... 0x0022000000 = 33554432d
OnTap 6: 0x0022000000 ... 0x007fff0000 = 1576992768d
BSD Maxmem (512 MB) ... 0x27f - 0x20000000
BSD Physical Memory (486 MB)
SK Physical Memory (1503 MB)
NetApp Data ONTAP 8.2.3 7-Mode
platform module loaded
Timecounter "i8254" frequency 1193182 Hz quality 0
CPU: Intel(R) Core(TM) i7-3615QM CPU @ 2.30GHz (2295.18-MHz K8-class CPU)
Origin = "GenuineIntel" Id = 0x20651 Family = 6 Model = 25 Stepping = 1
Features=0x1f8bfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,MMX,FXSR,SSE,SSE2,SS,HTT>
Features2=0x82982203<SSE3,PCLMULQDQ,SSSE3,CX16,SSE4.1,SSE4.2,POPCNT,AESNI,<b31>>
AMD Features=0x20100800<SYSCALL,NX,LM>
AMD Features2=0x1<LAHF>
TSC: P-state invariant
usable memory = 510611456 (486 MB)
real memory = 536870912 (512 MB)
avail memory = 490725376 (467 MB)
MPTable: <MICROSFT HYPERV >
FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs
FreeBSD/SMP: 1 package(s) x 2 core(s)
cpu0 (BSP): APIC ID: 0
cpu1 (AP): APIC ID: 1
ioapic0: Changing APIC ID to 0
ioapic0: Assuming intbase of 0
ioapic0 <Version 1.1> irqs 0-23 on motherboard
ichpwr module loaded
vsensor: <vsensor>
jaspfor_ident
DBG: [nvmem3.c:83, nvmem3_identify] Identify
DBG: [nvmem4.c:97, nvmem4_identify] Identify
smbios0: <System Management BIOS> at iomem 0xf57d0-0xf57ee,0xf8ec0-0xfd1cd on motherboard
SMBIOS: WARNING: Type 1 structure: SKU Number field has invalid index 77.
SMBIOS: WARNING: Type 1 structure: Family field has invalid index 105.
SMBIOS: WARNING: Type 2 structure: Asset Tag field has invalid index 77.
SMBIOS: Failed to create hw.smbios.t1_partno.
SMBIOS: Failed to create hw.smbios.t2_tag.
pcib0: <MPTable Host-PCI bridge> pcibus 0 on motherboard
pci0: <PCI bus> on pcib0
Fatal trap 12: page fault while in kernel mode
cpuid = 0; apic id = 00
fault virtual address = 0x62
fault code = supervisor read data, page not present
instruction pointer = 0x20:0xffffffff80522ee0
stack pointer = 0x28:0xffffffff811275a0
frame pointer = 0x28:0xffffffff811275d0
code segment = base 0x0, limit 0xfffff, type 0x1b
= DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags = interrupt enabled, resume, IOPL = 0
current process = 0 (swapper)
trap number = 12
PANIC : page fault (supervisor read, page not present) on VA 0x62 cs:rip = 0x20:0xffffffff80522ee0 rflags = 0x10246
version: 8.2.3: Thu Jan 15 21:30:45 PST 2015
conf : x86_64
cpuid = 0
Uptime: 1s
The operating system has halted.
Please press any key to reboot.
You are welcome, Vinu. If you find one of my posts helpful, even the older ones, you can help other people find them by clicking the kudo link to add a kudo. This board is indexed and searchable on Google, so you may want to remove your email address from that last post. You can send me a private message on the forum instead.
At this point I think you are better off building a new sim. That panic used to show up when the host crashed unexpectedly or the VM was shut down improperly. You could try a wafliron, but it's probably not worth the effort.
The panic you are seeing is caused by changing the sysid/serial number after the first boot. The file referenced by the panic, /sim/dev/,disks/,reservations, is used to simulate SCSI reservations on the simulated hard disks. Since you've changed the sysid, you need to remove that file so a new one can be created. To remove it:
1. Power on the simulator.
2. Press Ctrl-C when prompted to stop at the boot menu.
3. At the boot menu prompt, enter this command instead of picking a menu item: systemshell
4. At the systemshell prompt, remove the reservations file: rm /sim/dev/,disks/,reservations
5. Type exit to return to the boot menu.
6. At the boot menu, pick option 8 to reboot the simulator.
The old sysid is also imprinted on the NVRAM slice, so you will probably see a message during the next boot asking if you want to overwrite. Pick "y". If you installed ONTAP (option 4) before changing the sysid, you will also need to boot into maintenance mode and reassign the disks to the new sysid. Search my old posts for a full walkthrough of that particular scenario.
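As a console sketch of that sequence (the exact boot menu wording varies a bit between vsim versions, so treat the prompts here as approximate):

```
Press Ctrl-C for Boot Menu.
^C
Selection (1-8)? systemshell
% rm /sim/dev/,disks/,reservations
% exit
Selection (1-8)? 8
```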
It varies by configuration based on the number of drives being sliced. You can get the root slice size for a particular configuration from hardware universe.
The useful part has already scrolled off screen. What was the panic message? If the panic were from the sysid/serial change, it would panic at boot, every time. Sometimes the IDE drive containing the sim disks fills. If you've added disks, check df -h from the systemshell to see if the device mounted on /sim is full. 8.1 was still a classic disk model vsim, so it should be a slice off ad0. On newer standard disk model sims it's on ad3. Classic disk model sims used to have pretty small /sim volumes, but I don't recall when they changed to the larger device.
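For reference, a full /sim device would look roughly like this from the systemshell. The device name, sizes, and other rows here are illustrative only, not from your system:

```
% df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/ad0s1d    4.8G    4.8G      0B   100%    /sim
```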
Perhaps this will be a useful data point: I have seen hangs at this exact point when running older "q" images of ontap in the simulator. This is the point where the loader passes execution to the kernel. As it turns out, older (8.0 era) optimized kernels were missing the drivers required to run the vidconsole (among other things), so console output falls back to the comconsole on serial0. Whatever errors the sim is throwing here are probably landing on the comconsole. If anyone is still trying to get it running on Hyper-V, you might try booting with a comconsole configured to see if any additional information is presented. The next likely sticking point would be NIC drivers, followed by more subtle things like timing accuracy within the guest.
And here's the quick & dirty node shell variant, from the problem node's console:
1. Log in to the cluster shell.
2. run local
3. disk assign all
4. aggr status (the root aggr will have "root" in the options list; typically it's aggr0)
5. aggr add aggregate_name 3@1g (assuming the default 1 GB disks were used; adjust as necessary)
6. vol status (the root vol will have "root" in the options list; typically it's vol0)
7. vol size root_volume +2290m (the size increase available may vary depending on the type of disks used; 2560m and 2290m are most common, so try 2560 first, fall back to 2290 if that fails, and if that fails too the error will give the max size in KB)
8. exit
9. reboot
You may or may not need a second reboot to remove the recovery flag in the loader. If required, it will tell you when you log in from the node shell. After a clean reboot, go back and disable aggr snaps and vol snaps on the root, delete any existing snaps, and clean out old logs and ASUP files in the mroot.
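The steps above as a console sketch; the prompt text, aggregate name (aggr0), and volume name (vol0) are the typical defaults, so substitute your own:

```
::> run local
node-01> disk assign all
node-01> aggr status
node-01> aggr add aggr0 3@1g
node-01> vol status
node-01> vol size vol0 +2560m
node-01> exit
::> system node reboot -node local
```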
The fundamental problem is that the root aggregate and root volume are too small. If you can get some files cleaned off and get back into the cluster shell, you can increase its size with the following procedure. I wrote this from the cluster shell perspective as part of a larger document, but if you can't get back into the cluster shell, the node shell equivalents should work just as well.

Increasing the size of the simulator root volume

Steps:
1. Log in to the cluster shell.
2. Assign any unowned disks by entering the following command:
run * disk assign all
3. Identify the root aggregate by entering the following command:
storage aggregate show -node node_name -root true
Example:
demo1::> storage aggregate show -node demo1-01 -root true
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0        900MB   42.72MB   95% online       1 demo1-01         raid_dp, normal
4. Add 3 disks to the root aggregate by entering the following command:
storage aggregate add-disks -aggregate root_aggregate -diskcount 3
5. Use the root aggregate name to identify the root volume by entering the following command:
volume show -node node_name -aggregate root_aggregate
Example:
demo1::> volume show -node demo1-01 -aggregate aggr0
Vserver   Volume       Aggregate    State      Type Size       Available  Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
demo1-01  vol0         aggr0        online     RW   851.5MB    529.8MB      37%
6. Increase the size of the root volume by 2.5GB by entering the following command:
run -node node_name vol size root_volume +2560m
On new gear you can quit out of the setup script and view the installed licenses. In cluster mode, type quit at the prompt, then log in as admin with no password. In 7-Mode, enter q for the hostname; it will prompt again, enter q a second time, and you'll land at the 7-Mode CLI. From there, use the regular license commands to view or install licenses.
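A sketch of the 7-Mode path; the prompt wording is approximate and the license key shown is a placeholder:

```
Please enter the new hostname []: q
Please enter the new hostname []: q
filer> license                 # view installed licenses
filer> license add ABCDEFG     # install a new key
```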
There are also tighter limits on stack depth for stacks containing SSD. You don't want to go over 4 shelves in a stack if it contains SSD. It should be covered in the storage subsystem FAQ.
Create a new igroup for FC, and add the host's FC WWPNs to the new igroup. Then unmap the LUN from the old iSCSI igroup, and map it to the new FC igroup.
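A 7-Mode sketch of that sequence; the igroup names, ostype, LUN path, and WWPN below are all placeholders, so substitute your own:

```
filer> igroup create -f -t windows host1_fc 21:00:00:24:ff:xx:xx:xx
filer> lun unmap /vol/vol1/lun1 host1_iscsi
filer> lun map /vol/vol1/lun1 host1_fc
```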
Here are the steps, but you should note that's not a supported upgrade path: Upgrading a system in an HA pair to a different system in an HA pair by moving disk shelves
I would strongly recommend professional services for this. Headswaps are routine procedures, but there is plenty of opportunity for things to go sideways. At the very least, though, you'll need to prep the 3240: update to 8.1.4P9, then go to 8.2.3P5, including firmware, BIOS, SP, etc. Make sure your config is clean, the running config matches the on-disk config, and you've remediated any issues flagged by Config Advisor. Request 8.2 keys for the old head, just in case, and review the big controller upgrade doc. Even that doc doesn't cover every scenario, but if your config is simple and everything lives in vfiler0, your config may be covered.
It appears that you have a shelf with only 4 drives in it. Is this a lab box? If so, you would have to assume ownership of the drives from the maintenance prompt, unfail them, and destroy any aggregates they were assigned to. Then you can reinstall ONTAP, assuming the drives are even functional.
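From the maintenance prompt that would look roughly like this; the disk name and aggregate name here are placeholders:

```
*> disk assign all
*> disk unfail -s 0a.00.1     # repeat for each failed disk
*> aggr status
*> aggr destroy aggr0
```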
If you were just swapping ports, maybe. But you are merging two stacks, and you need a full outage for that. Also, please don't cable it that way. Instead:
controllerA-0a -> Shelf00-IOMA-Square
controllerB-0a -> Shelf00-IOMB-Square
controllerA-0b -> Shelf10-IOMB-Circle
controllerB-0b -> Shelf10-IOMA-Circle
For what it's worth, direct-connect iSCSI is supported, but only in a non-HA configuration. Why would be an interesting discussion, but it's called out explicitly in the documentation.