Am I getting overcharged for using the cloud?

When you put data onto the cloud, what questions come to mind?

1. How long is it going to take?
2. How safe is it?
3. How much would the cost savings be?
4. Will I be able to get this data back?

The safety aspect of the data is covered in another of my blogs; here I am sharing my thoughts on validating the cost savings.

These are a few ways cloud providers charge for backup or restore (data retrieval) operations:
1. Amount of storage being used for the backup
2. Number of transactions, i.e. the number of GET and PUT operations
3. No charge for backup, but a charge for data retrieval

E.g., pricing links:
For Amazon –
For IBM SoftLayer –

IBM Spectrum Virtualize 7.8, with the recently introduced Transparent Cloud Tiering (TCT), supports cloud backup on Amazon S3, IBM SoftLayer, and OpenStack Swift. It provides the following data points for you to validate or estimate your data consumption:

  • Amount of data uploaded
  • Number of successful PUT and GET operations
  • Total bytes and blocks downloaded from and transferred to the cloud
  • Total number of blocks uploaded


All these parameters are recorded in the usual vdisk (Nv*) and node (Nn*) statistics files on the configuration node of your cluster and can be retrieved from there for reference.
Many other parameters, such as failed-operation counters, backup and download latencies, and retry counters, also get recorded in these files.

There are a few other newly introduced CLIs that show cloud usage:

  • svcinfo lscloudaccountusage – shows the total cloud account usage by the cluster
  • svcinfo lsvolumebackup – shows how much backup exists in the cloud for every single volume
  • svcinfo lsvolumebackupprogress – shows the backup time and current progress of the backup for all volumes
  • svcinfo lsvolumerestoreprogress – shows the progress of restore operations for all volumes
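
As a small illustration of feeding such CLI output into a script, here is a sketch of a parser. The `name:value` line format and the field names in the sample are assumptions for the sketch, not the actual output of `svcinfo lscloudaccountusage`; check the real output format on your release.

```python
# Illustrative parser for CLI usage output. The colon-separated
# "name:value" format and the field names below are assumed for this
# sketch; verify against the actual command output on your system.

def parse_cli_output(text):
    """Turn 'name:value' lines into a dict, keeping values as strings."""
    usage = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(":")
        usage[key.strip()] = value.strip()
    return usage

sample = """\
id:0
cloud_account_name:mycloud
used_capacity:512.00GB
"""
print(parse_cli_output(sample)["used_capacity"])  # -> 512.00GB
```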

I am not covering the new CLI list exhaustively; there are many more commands mentioned in the Knowledge Center.
With careful analysis and a small script, the monthly usage can be calculated and the bill estimated for a particular service provider.
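
As a rough sketch of such a script, the estimator below combines the three charging models discussed above. The per-unit prices are entirely made up for illustration; substitute the real rates from your provider's pricing page.

```python
# Hypothetical monthly bill estimator for cloud backup usage.
# All per-unit prices below are invented for illustration only.

PRICE_PER_GB_STORED = 0.023    # USD per GB-month of backup storage (assumed)
PRICE_PER_1K_PUTS = 0.005      # USD per 1,000 PUT operations (assumed)
PRICE_PER_1K_GETS = 0.0004     # USD per 1,000 GET operations (assumed)
PRICE_PER_GB_RETRIEVED = 0.09  # USD per GB downloaded (assumed)

def estimate_monthly_bill(gb_stored, puts, gets, gb_retrieved):
    """Combine the usage counters reported by the cluster into a cost estimate."""
    storage_cost = gb_stored * PRICE_PER_GB_STORED
    transaction_cost = (puts / 1000) * PRICE_PER_1K_PUTS \
                     + (gets / 1000) * PRICE_PER_1K_GETS
    retrieval_cost = gb_retrieved * PRICE_PER_GB_RETRIEVED
    return round(storage_cost + transaction_cost + retrieval_cost, 2)

# Example: 500 GB stored, 200,000 PUTs, 50,000 GETs, 20 GB restored.
print(estimate_monthly_bill(500, 200_000, 50_000, 20))
```

Plugging in the counters from the statistics files and CLIs above gives a ballpark figure to compare against the provider's actual bill.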

PS: These are my personal thoughts, and they may or may not match those of my employer.

Is my data secure in the cloud with Spectrum Virtualize?

With the Spectrum Virtualize 7.8 release, the feature to put snapshots in the cloud is supported. Details on how to set it up are here.
These snapshots are stored in the form of objects. Objects contain both metadata and data, and for a large volume (up to 256 TB volume size is supported), there could be millions of blobs in the cloud.

What if I don’t have encryption on SVC?

These objects are created by Spectrum Virtualize code and are uploaded to the cloud through an internal gateway. They are not in a human-readable format, and if a cloud account is compromised (not a trivial thing, though), the snapshot data isn’t directly usable by a rogue user. Restoring the data requires Spectrum Virtualize code running on a system, along with a few other mandatory parameters.
This implies that even if a customer opts for an on-premise cloud using OpenStack Swift without encryption, the data is still well protected in the cloud, though IBM highly recommends that encryption be on.

Advantages with encryption on:

When encryption is enabled on an SVC or Storwize cluster, data is encrypted first and then put in the cloud (public or private), with encryption keys at various levels: cluster, cloud account, volume, and snapshot generation.

Without access to the correct keys, data can never be restored by an unauthorised user.

What if all my keys are compromised?

Spectrum Virtualize provides a re-keying mechanism that can swiftly change all keys of the system, including the ones relevant to the cloud accounts. It is a two-step process and carries the fault tolerance of SVC as well.

USB vs SKLM encryption?

With USB-mode encryption, up to 3 USB drives can be enabled with master keys, which are critical for accessing data in the cloud.
In case of a site failure, physical access to these keys is required to regenerate the data.

With SKLM-based encryption, keys are stored on IBM SKLM servers. SKLM stands for Security Key Lifecycle Manager, and it acts as a secure central repository for all your data center keys, whether from servers or storage. To regenerate cloud data at a site, network access to the SKLM server is required.

PS: All these thoughts are mine and do not necessarily reflect those of my employer.

Cloud Backup using TCT in Spectrum Virtualize V7.8.0

Hello, I am going to talk about the star feature of the Spectrum Virtualize V7.8 release: Transparent Cloud Tiering, abbreviated as TCT. The name is shared with other products in the Spectrum family, such as Spectrum Scale, which introduced TCT in its latest release, 4.2.1.

Coming back to the block storage world, TCT in Spectrum Virtualize enables customers to take backups to the cloud as point-in-time FlashCopy snapshots. Virtual disks can now be backed up to the various cloud providers supported in this release.

For the first release, three cloud targets are supported:
1. Amazon S3
2. IBM SoftLayer
3. OpenStack Swift

The first two are public clouds, while the third is configured as an on-premise private cloud.


Cloud backups are supported on the following hardware levels:

  • DH8
  • SV1
  • V9000
  • V7000 Gen 2

These backups happen from the configuration node of the cluster, over the management IP.

This backup is a point-in-time snapshot and can easily be restored onto the same production volume or onto another virtual disk of the admin’s choice.


Steps to take a backup:

1. Select a volume for backup. Note down the UID of this vdisk.

2. Configure a DNS server and create a cloud account on Amazon S3.

3. Enable cloud backup on this volume.

This creates two internal FlashCopy disks, visible under “svcinfo lsvdisk” and “svcinfo lsfcmap”. These copies are not visible in the GUI.

4. Start the backup and check its progress.

5. Once the backup is complete, the generation is visible in the cloud, and details can be seen in the cluster GUI.

Before data gets uploaded to the cloud, it is compressed in-line and then encrypted, so only compressed ciphertext leaves the SVC/Storwize system and is sent over the wire.
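
The order of these two steps matters: compressing after encryption is ineffective, because ciphertext looks random. Here is a minimal sketch of the compress-then-encrypt pipeline; the XOR "cipher" is purely a placeholder, not real encryption, and not how the product actually encrypts.

```python
import zlib

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Placeholder cipher for illustration only; the product uses real
    encryption with keys managed at multiple levels."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def prepare_for_upload(block: bytes, key: bytes) -> bytes:
    # Compress first: compression works on patterned plaintext ...
    compressed = zlib.compress(block)
    # ... then encrypt, so only ciphertext ever leaves the system.
    return xor_cipher(compressed, key)

def restore_from_cloud(obj: bytes, key: bytes) -> bytes:
    # Restore reverses the pipeline: decrypt, then decompress.
    return zlib.decompress(xor_cipher(obj, key))

block = b"ABCD" * 4096                      # highly compressible sample data
obj = prepare_for_upload(block, b"secret")
assert restore_from_cloud(obj, b"secret") == block
assert len(obj) < len(block)                # compression happened before encryption
```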


Steps to restore a backup:

1. Check which generations are visible in the cloud for the particular volume.

2. Select a generation to restore from.

3. Select a volume and start restoring this generation onto it. Check its progress.

Once the progress completes, the restore is done.

Important things to note: starting with release V7.8, a new encryption method, SKLM (Security Key Lifecycle Manager), is introduced for IBM Spectrum Virtualize, which stores cluster keys securely on a networked key server. Until V7.7, the only method available was to use USB drives holding the secret keys used to encrypt and decrypt the data.

Data backup in the cloud is supported with both SKLM and USB encryption.

PS: All these thoughts are mine and do not necessarily reflect those of my employer.

SVC and Storwize 7.7 version features:

A new version of IBM Spectrum Virtualize – SVC and Storwize 7.7 – has been announced with the following feature set:

1. Increased read cache to 64 GB

2. Compression for IP Replication

3. iSCSI Initiator support for virtualizing iSCSI targets behind SVC

4. Encryption support for Distributed RAID

5. RAS with NPIV host port fabric virtualization

It will be available for GA in the June time frame. I will be covering all the above topics in greater detail on this page. Stay connected.

For details of this announcement, look here:

PS: All thoughts on this blog are personal and do not necessarily reflect those of my employer.


MS-ODX implementation with IBM Storwize – Part 2

Hi, here is Part 2 of the Microsoft Offloaded Data Transfer (MS-ODX) description, a continuation of Part 1 posted earlier –

Just a quick summary: the IBM Spectrum Virtualize range of products – SVC and Storwize – now supports the MS-ODX feature starting with release 7.5.0. This feature offloads data copying from one volume to another within the storage controller array itself, without the data first being transferred to the host, saving both host CPU cycles and network bandwidth.

Since we did a lot of case studies and performance measurements, a colleague from the ISV team in the US and I thought of creating a whitepaper on the topic, and last month we were able to publish it. You can find it here:

It covers both the basics and a few case studies. It also talks about the performance improvements we may expect to see with MS-ODX enabled.

Please note: these are my personal views and don’t necessarily reflect those of my employer.

MS-ODX implementation with IBM Storwize – Part 1

The IBM Storwize release has introduced support for the MS-ODX feature across the entire Storwize family of products and for the IBM Spectrum Virtualize (previously called SVC – SAN Volume Controller) product.

MS-ODX stands for Microsoft Offloaded Data Transfer.
The primary objective of MS-ODX is to free up host CPU and network cycles by offloading all copy operations to the Storwize/SVC controller itself.

In a traditional/buffered Windows copy operation, each file copy triggers a read request from the host server to the storage array, which fetches the data into server memory over the network; then a write request sends the same data from the server back to the storage array, again over the network. So all data crosses the wire twice, once for reading and once for writing, which causes significant latency in I/O operations.

In Microsoft clustering (MSCS) environments, this data gets copied from one host to another over the wire, which delays the copy operation even further.
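
To put rough numbers on the savings: a buffered copy moves every byte across the wire twice (read to the host, then write back), while with ODX only small command-sized traffic crosses the wire. The 4 KB command-overhead figure below is an assumption for illustration, not a measured value.

```python
def bytes_over_wire_buffered(file_size: int) -> int:
    # Buffered copy: read to host memory, then write back to the array.
    return 2 * file_size

def bytes_over_wire_odx(file_size: int, command_overhead: int = 4096) -> int:
    # ODX: only the offload command traffic crosses the wire; the
    # 4 KB overhead figure is an illustrative assumption.
    return command_overhead

one_gib = 1 << 30
print(bytes_over_wire_buffered(one_gib))  # 2147483648 bytes (2 GiB)
print(bytes_over_wire_odx(one_gib))       # 4096 bytes
```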

Microsoft simplifies this with three XCOPY commands introduced in SPC-4:
1. Populate Token – PT
2. Receive ROD Token Information – RRTI
3. Write Using Token – WUT

For a Windows host to start triggering ODX, the storage array needs to support it, e.g., the Storwize family of products – V7000, V5000, V3700 – and IBM Spectrum Virtualize (earlier name: SVC). I will write a detailed document on this in Part 2.

PT – creates a token on the storage array. It contains the source vdisk (virtual disk) starting LBA, the amount of data to be copied, and a list identifier, along with other segment-related parameters.

RRTI – fetches the token created on the storage array back to the server. It is the server’s responsibility to share the token with another host in an MSCS environment.

WUT – carries the token received via RRTI and targets the destination vdisk. It contains the LBA to which data needs to be copied, the number of logical blocks, etc.
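
The three-command flow above can be sketched as a toy in-memory model. The class and field names are illustrative only; this is not the SPC-4 wire format, just the sequence of responsibilities: PT creates the token on the array, RRTI hands it to the host, and WUT makes the array copy the data internally.

```python
import uuid

class ToyArray:
    """In-memory stand-in for a storage array handling PT/RRTI/WUT."""
    def __init__(self):
        self.vdisks = {}   # vdisk name -> bytearray of "blocks"
        self.tokens = {}   # token id -> (source vdisk, start LBA, length)

    def populate_token(self, src, start_lba, length):
        tok = uuid.uuid4().hex                # PT: array creates the ROD token
        self.tokens[tok] = (src, start_lba, length)
        return tok

    def receive_token_info(self, tok):
        return tok                            # RRTI: host fetches the token

    def write_using_token(self, tok, dst, dst_lba):
        src, lba, length = self.tokens[tok]   # WUT: copy happens inside the array
        data = self.vdisks[src][lba:lba + length]
        self.vdisks[dst][dst_lba:dst_lba + length] = data

array = ToyArray()
array.vdisks["src"] = bytearray(b"HELLO-ODX!" + bytes(6))
array.vdisks["dst"] = bytearray(16)
tok = array.receive_token_info(array.populate_token("src", 0, 10))
array.write_using_token(tok, "dst", 0)
print(bytes(array.vdisks["dst"][:10]))  # b'HELLO-ODX!'
```

Note that the host only ever handles the token, never the data itself.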


Since no data gets copied through the host anymore, there are significant savings in infrastructure and network requirements. There is also a huge boost in copy performance thanks to this offloading.

This boost is even more visible when your server and network are busy with other operations. In that case, your CPU cycles are not tied up doing reads and writes to the storage array; instead, the host just sends a series of ODX commands (a few KBs) that take care of the offload.

An important thing to note: Microsoft limits the number of offloaded copy requests (primarily write operations using Write Using Token – WUTs) to 1 (yes, one!) for most of its Hyper-V operations, such as VM storage migration, VHDX creation, VM cloning, and a few more. So, in spite of the storage array’s capability, all these operations are triggered one copy operation at a time. Other applications, like Robocopy, utilize ODX very efficiently by sending up to 128 copy operations.
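
The effect of this concurrency cap can be illustrated with a simple time model. The per-operation time below is an arbitrary assumption; the point is the ratio between running one operation at a time and many in parallel.

```python
import math

def total_copy_time(num_ops: int, concurrency: int, secs_per_op: float = 0.5) -> float:
    """Toy model: operations run in waves limited by the concurrency cap,
    and every operation takes the same (assumed) time."""
    waves = math.ceil(num_ops / concurrency)
    return waves * secs_per_op

# 128 copy operations with the same assumed per-operation cost:
print(total_copy_time(128, 1))    # Hyper-V style, 1 WUT at a time -> 64.0 s
print(total_copy_time(128, 128))  # Robocopy style, 128 in flight  -> 0.5 s
```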

Storwize V7000 supports up to 2048 copy operations per cluster (8 nodes), i.e. it is capable of performing 2048 simultaneous copy operations (VM cloning, storage migration, and any other copy operations combined) if it receives that many WUT commands from a single host or a cluster of hosts.

MS-ODX has a few caveats, though:

From the Microsoft side:
1. It works only with the NTFS filesystem.
2. The file size must be greater than 256 KB to trigger ODX.

From the Storwize side:
1. Only intra-cluster support is present currently.

2. The NTFS block size must be >= 4 KB.

In the next part of this blog series, I’ll share some performance numbers from my experiments with Storwize.

Please note, these are my personal thoughts and do not necessarily represent those of my employer.