Hi Toasters,
We created something of an interesting dilemma for ourselves.
The original problem was that we had no way to extend existing VMFS LUNs through ESX without adding an extent and/or incurring downtime.
The idea was to use NetApp thin provisioning to offer a large disk to ESX without having to back it with real disks. To keep an eye on actual disk usage, we created a space-guaranteed ("hard") volume and monitor real consumption with df against that hard limit, i.e. the volume size.
So a 1 TB LUN (thin) was created inside a 200 GB (hard) volume, and we intended to keep ESX 3's usage in check by controlling the size of the VMDK files we created. Since this environment hosts desktops and started out as a testing ground, a whole bunch of VMDKs have been created and destroyed along the way.
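For reference, the setup on the filer was roughly along these lines (volume, aggregate and LUN names here are made up, and the vmware ostype may depend on your ONTAP release):

    vol create vmvol aggr0 200g
    vol options vmvol guarantee volume
    lun create -s 1t -t vmware -o noreserve /vol/vmvol/esxlun
    df -g vmvol

The -o noreserve flag is what makes the LUN thin; the volume guarantee gives us the 200 GB hard ceiling we watch with df.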
The situation we end up with is VMware thinking it has 60 GB in use on a 1 TB LUN, while NetApp reports 180 GB in use of the 200 GB volume (the hard limit at this time).
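(Those two figures come from each side's own view of the same storage, roughly like so, with vmvol being the illustrative volume name from above and vdf being, if I remember right, the VMFS-aware df in the ESX 3 service console:

    df -g vmvol     -> on the filer: ~180 GB used of the 200 GB volume
    vdf -h          -> on the ESX host: ~60 GB used of the 1 TB VMFS datastore
)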
Basically the question becomes whether we can force ESX to actually reuse any of that gap: [NetApp in use] - [in use according to ESX] = 180 GB - 60 GB = 120 GB of space.
Probably we need to clean up and start over with a new, clean LUN, which we could then simply hard provision, forgetting about thin provisioning altogether, because in our scenario it is no help. The broader question is where we went wrong. Our situation seems to suggest that we waste a whole lot of disk space, which VMware won't reuse on its own and which NetApp can't reclaim (how would it know the space has been freed?). In that sense thin provisioning is just a space waster in an environment where VMDKs are destroyed and recreated from scratch.
Of course we could always go the FlexClone route... but that would force the NetApp world onto a whole lot of admins.
Best regards,
Delorean