While the subject is ‘i2p’, this isn’t an
email about the mapping process itself and the high utilization it causes;
rather, it is about the aftermath of the i2p mapping process.
I recently performed an upgrade of some of our nearstore
systems (used for email compliance, among other things), and was expecting a
pretty serious utilization curve while the i2p mapping was performed as part of
the upgrade (we upgraded from 7.0.6, not 7.1.X). This came and passed with
high utilization of CPU and DISK, but no real issues. However, the i2p mapping
apparently keeps the inode-to-path map for every file/inode in the metadata;
we have volumes with snaplocked data in the terabytes, the most heavily
populated of which holds about 110 million files (the second holds
about 82 million). As this data has been marked as ‘archive’
in the Enterprise Vault system, it isn’t being written to anymore, so I’m
not terribly concerned with the snapmirror update taking a while; it is,
however, throttled down pretty low, as our WAN is not yet up to snuff
(coming later this year). Apparently, because i2p touches each inode, we have
about 110 million * 4 KB to transfer (~420 GB) for the ‘update’ to our
4.2 TB volume. Likewise, the 3.2 TB volume with 82 million files had about 313 GB
to transfer.
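If you want to run the same back-of-the-envelope math against your own
volumes, here is a rough Python sketch. It assumes the ~4 KB of i2p metadata
per inode figure above holds; the file counts are our two volumes, and the
2 MB/s throttle rate is just an example number, not our actual setting:

# Rough estimate of the extra snapmirror payload after i2p has touched
# every inode, assuming ~4 KB of i2p metadata per inode (the figure used
# above). The throttle rate below is a made-up example, not a recommendation.

def i2p_transfer_bytes(file_count, bytes_per_inode=4 * 1024):
    """Estimated extra data the next snapmirror update will carry."""
    return file_count * bytes_per_inode

def hours_at_throttle(total_bytes, throttle_kb_per_sec):
    """Rough transfer time in hours at a given snapmirror throttle (KB/s)."""
    return total_bytes / (throttle_kb_per_sec * 1024) / 3600.0

for label, files in [("4.2TB volume", 110_000_000),
                     ("3.2TB volume", 82_000_000)]:
    extra = i2p_transfer_bytes(files)
    print("%s: ~%.0f GB extra, ~%.0f hours at a 2 MB/s throttle"
          % (label, extra / 1024.0 ** 3, hours_at_throttle(extra, 2048)))

For our 110-million-file volume that works out to roughly 420 GB of extra
transfer, which lines up with what we actually saw.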
If you have volumes with a large number of files, be
aware that the snapmirror update after the i2p process completes following a
7.2.2 upgrade (or even 7.1.1, I suppose) will be fairly significant in size.
Hope this helps some of you out there so that you are not
caught unaware.
‘evening
Glenn