I have a volume with the following details:

[xxxhost ~yyy/zzz]# rsh xxxfiler df -h /vol/vol_1111_222
Filesystem                     total     used    avail  capacity  Mounted on
/vol/vol_1111_222/             453GB    422GB     31GB       93%  /vol/vol_1111_222/
/vol/vol_1111_222/.snapshot      0GB    101GB      0GB      ---%  /vol/vol_1111_222/.snapshot
When I create a new FlexClone volume from this volume and check the used space of that newly cloned volume, I see the used-space value change drastically within a couple of seconds. We have noticed that within 17 seconds the used space reported for this newly cloned volume changed by about 76 GB (from ~387 GB down to ~311 GB).
Any clue as to why this might be happening? Could it be because snap reserve is not set properly (we have set it to zero)?
Use 'aggr show_space' rather than 'df' and see if that makes a little more sense.
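For example (a sketch; 'aggr_name' is a placeholder for whichever aggregate actually contains vol_1111_222):

  [xxxhost ~yyy/zzz]# rsh xxxfiler aggr show_space aggr_name

The per-volume section of that output reports Allocated, Used, and Guarantee for each volume, so you see reserved space and actually-consumed space separately instead of the single 'used' number that df gives you.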
Stetson M. Webster Onsite Professional Services Engineer PS - North Amer. - East
NetApp 919.250.0052 Mobile Stetson.Webster@netapp.com www.netapp.com
Not really... aggr show_space for a volume also keeps changing for a pretty long time.
Webster, Stetson wrote:
Use 'aggr show_space' rather than 'df' and see if that makes a little more sense.
Anna M wrote:
snip
Any clue as to why this might be happening? Could it be because snap reserve is not set properly (we have set it to zero)?
I believe that when you take a clone, it takes a snapshot of the source; since your snapshot space comes out of the actual volume, you will see the size change.

Correct me if I am wrong.
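One way to check whether that is what is happening: create the clone and then look at the parent volume's snapshot list (a sketch; 'vol_1111_222_clone' is just a made-up clone name, and the base snapshot's name will differ on your system):

  [xxxhost ~yyy/zzz]# rsh xxxfiler vol clone create vol_1111_222_clone -b vol_1111_222
  [xxxhost ~yyy/zzz]# rsh xxxfiler snap list vol_1111_222

Right after the clone is created, a new snapshot should show up in that list as the clone's base, and it stays marked as busy for as long as the clone exists.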
Hmm, yep, that's right. But then how long should I wait before it gives me accurate space usage?
Use 'snap delta' or 'snap list' to see the size of your snaps.
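For instance, with the volume from the original post (run from the admin host the same way as the df command above):

  [xxxhost ~yyy/zzz]# rsh xxxfiler snap list vol_1111_222
  [xxxhost ~yyy/zzz]# rsh xxxfiler snap delta vol_1111_222

snap list shows how much of the volume each snapshot is holding, and snap delta shows how much data changed between snapshots, which should make it clearer where a swing of ~76 GB is coming from.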
The most accurate (and consistent) space usage comes from:
aggr show_space
Cheers ...................
If I have 4 RAID groups in an aggregate, does the I/O normally get balanced across them? I'm trying to figure out the best way to present ~15 TB of NFS space to my VMware environment. I have 5 shelves of 500 gig drives, so my plan was to create 4 RAID-DP stripes down the shelves (4 RAID groups of 10 drives each, counting parity) and then put all 4 RAID groups into one aggregate. Has anyone seen issues with this? It's on a 3070; I'll be connecting via 10 GbE, and NFS will carry most of the traffic.

I apologize if this is a repost; I thought I sent this last week but don't see it on the list.
The I/O load is not perfectly balanced, but it is pretty well distributed: when WAFL writes to disk, it builds the data map in memory and dumps it to disk in large chunks. With multiple RAID groups, part of that process is to dump chunks of data to each group in round-robin fashion (the chunk size used to be about 64MB per RAID group; unsure what it is now). This does a pretty good job of distributing the write _and_ read workload, unless you are hitting a very small chunk of data over and over again.
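If you want to confirm the balance on a running system, per-disk statistics are one way to check (a sketch; statit is an advanced-privilege command, and the exact flags and output can vary by ONTAP release):

  priv set advanced
  statit -b                 (begin collecting per-disk statistics)
  ... wait several minutes under a representative load ...
  statit -e                 (end collection and print per-disk utilization, grouped by RAID group)
  priv set admin

If the round-robin chunk allocation is doing its job, the data disks in all four RAID groups should show roughly similar utilization.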
Unfortunately you are going to need more than 1 aggregate.
The max raw aggregate size is 16TB. That works out to ~2 shelves of 500GB drives. Also, the max RAID group size for ATA drives is 16, but you've said you're only planning on putting 10 in each, so that's fine.
The System Configuration Guide on now.netapp.com lists all of this.
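Rough math behind that, assuming 14-drive shelves (a guess about the hardware) and marketed drive sizes (right-sizing will shave off more):

  16 TB limit / 0.5 TB per drive      = 32 drives max per aggregate
  32 drives / 14 drives per shelf     = ~2.3 shelves per aggregate
  4 RAID groups x 10 drives x 0.5 TB  = 20 TB raw, which is over the 16 TB limit

So the planned 40-drive aggregate would need to be split into at least two.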
If you aren't concerned about the IOPS, you could ask to trade up to 1TB drives; you can fit 19 of them in a single aggregate.
-Blake
On 4/7/08, Jeff Bryer bryer@sfu.ca wrote:
Unfortunately you are going to need more than 1 aggregate.
Since I am totally new to the world of filers, I would really appreciate it if you could explain why the snapshot size for a newly created FlexClone volume is not zero. After all, the source for it was also a snapshot.
The issue in space calculation for a newly cloned volume is actually because of changes in the snapshot size for that volume. For one of my newly cloned FlexClone volumes I am seeing that within 3 minutes the reported values change from

/vol/prod_v/             475777656KB  400203924KB   75573732KB   84%  /vol/prod_v/
/vol/prod_v/.snapshot            0KB   74248140KB          0KB  ---%  /vol/prod_v/.snapshot

to

/vol/prod_v/             475777656KB  325971708KB  149805948KB   69%  /vol/prod_v/
/vol/prod_v/.snapshot            0KB      16112KB          0KB  ---%  /vol/prod_v/.snapshot

Similarly, the aggr show_space output that I took at the same time as the df output above shows the Allocated and Used values for this volume changing:

prod_v      75574680KB        32KB   volume
...
prod_v     149823776KB     17816KB   volume
The output of both commands is still changing... We have found that the values take approximately 19 minutes to settle down. 19 minutes seems like a BIG duration :-(
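A simple way to track how long the settling takes from the admin host would be a polling loop like this (a sketch; the hostname and volume name are just the ones from the examples above, and the one-minute interval is arbitrary):

  # poll the cloned volume's space usage once a minute
  while true; do
      date
      rsh xxxfiler df -k /vol/prod_v
      sleep 60
  done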
Any help in this regard will really be appreciated.
-Anna