
My blog posts and tweets are my own, and do not necessarily represent the views of my current employer (ESG), my previous employers or any other party.

I do not do paid endorsements, so if I appear to be a fan of something, it is based on my personal experience with it.

If I am not talking about your stuff, it is either because I haven't worked with it enough or because my mom taught me "if you can't say something nice ... "

Why Doesn’t IT back up BYOD!?

ESG recently started offering TechTruths … single nuggets of data and the analyst perspectives of why they matter. Check out all of them via the link above, but here is my favorite so far on BYOD data protection:

[Image: TechTruths, June 2014 – Endpoint Backup]

So, why doesn’t IT back up BYOD endpoints?! It isn’t a rhetorical question.

I have always been confounded as to why IT, the custodian of corporate data, doesn’t feel obliged to protect that corporate data when it resides on an endpoint device, and particularly when it resides on a BYOD endpoint device. I understand the excuses – it’s hard to do well, the solutions are expensive, it’s difficult to quantify the business impact and therefore the ROI of the solution. In fact, in ESG’s Data Protection as a Service (DPaaS) trends report, we saw several excuses (not reasons) to not back up endpoint devices.

[Image: excuses for not backing up endpoint devices, from ESG’s DPaaS trends report]

The myth that endpoint protection is hard and without justifiable value is old-IT FUD, in much the same way as the myth that tape is unreliable and slow. Both were true twenty years ago; neither is true today.

Today, with nearly every device being Internet-connected, it’s never been easier to protect endpoint data, whether employer- or employee-owned. The real trick, and one of the most interesting areas of data protection evolution, is in the IT enablement of endpoint protection. It used to be that most viable endpoint protection solutions were consumer-only, meaning that IT was not only excluded from the process, but also unable to help when it mattered. Today, real business-class endpoint protection should and does enable IT to be part of the solution, instead of the problem:

  • Lightweight data protection agents that reach across the Internet to the protection service, with user experiences that look like they came from an AppStore instead of from the IT department.
  • Encryption in-flight and at-rest, ensuring the data is likely more secure while being protected than it is on the device itself.
  • And most importantly, IT oversight – so that IT can do the same diligent protection of corporate data on endpoints that it does with corporate data on servers.

If you aren’t protecting your corporate data on endpoint devices, then you aren’t protecting your corporate data, period – and with today’s technologies, you are out of excuses (and reasons).

[Originally posted on ESG’s Technical Optimist.com]

HDS bought Sepaton – now what?

Have you ever known two people who seemed to tell the same stories and have the same ideas, but just weren’t that into each other? And then one day, BAM, they are besties.

Sepaton was (and is) a deduplication appliance vendor that has always marketed to “the largest of enterprises.” From Sepaton’s perspective, the deduplication market might be segmented into three categories:

  • Small deduplication vendors and software-based deduplication … for midsized companies.
  • Full product-line deduplication vendors, offering a variety of in-line deduplication, single-controller scale-up (but not always with scale-out) appliances from companies that typically produce a wide variety of other IT appliances and solution components … for midsized to large organizations.
  • Sepaton, offering enterprise deduplication efficiency and performance to truly enterprise-scale organizations, particularly when those organizations have outgrown the commodity approach to dedupe.

Aside from the actual technology differences between the various deduplication systems, the Sepaton approach is somewhat reminiscent of the marketing contrast between American cars that are positioned as commonplace and some well-engineered European beast that is positioned for the select few – justifiably so, or not.

To be fair, Sepaton’s technology really is markedly different in a few aspects that lend it to enterprise environments. Its challenge until now has been gaining penetration into those enterprise accounts – and defending against the other deduplication vendors already in them, vendors whose solution portfolios typically include production storage systems (not just secondary deduplication systems) and other key aspects of the overall IT infrastructure, often with higher-level relationships and more flexibility in pricing due to the broader portfolio … and that is where this gets interesting.

HDS tells a similar story around understanding and meeting the needs of truly large enterprises, so the Sepaton story is congruent with the one that HDS teams already know how to tell.

HDS’s core DNA is a conservative approach to knitting together solution elements, while trying to help the customer see the bigger picture in IT – which should allow the Sepaton product lines to seamlessly be evangelized within a broader HDS data protection and agility story. I for one am eager to hear more about what that new broader story sounds like, with the Sepaton pieces integrated in.

Similarly, HDS serendipitously did not have its own deduplication solution already (unlike almost every other big storage/IT vendor that HDS competes with), so there should be very little overlap – and in fact, the primary data protection products (e.g. Symantec NetBackup) that HDS often offers its customers already work well with the Sepaton platform (due to OST support).

But most importantly, HDS has the broader enterprise gravitas that Sepaton alone could not achieve. HDS ought to be able to have similar C-level meetings to those of Sepaton’s (and HDS’s) competitors, and that means that more enterprises will be exposed to Sepaton and HDS will have a broader story to tell (win-win).

Put it all together:

  • Sepaton gets better enterprise reach and more enterprise sales folks that align with the Sepaton story.
  • HDS broadens its storage portfolio beyond primary storage with a complementary secondary/deduplication solution that is aligned with its core story and customer base, yielding incremental sales and broader customer penetration due to a more complete story.

Congratulations to HDS and Sepaton on what looks like a good fit for both of them – now, let’s start talking about that cohesive data protection story!

[Tardiness disclaimer: The HDS-Sepaton announcement occurred while I was on vacation in August, but it was interesting enough that “late” seemed better than “silent”]

[Originally posted on ESG’s Technical Optimist.com]

Backing up SaaS: an interview with Spanning

One of the more frequent topics that I get asked about is “How do you back up production workloads, after they go to the cloud?”

A few months ago, I blogged on that – essentially saying that history is repeating itself (again). As new platforms usurp the old way of doing things (NetWare to Windows to VMware to SaaS), it is not typically the existing data protection behemoths that are first to protect the new platform. Instead, it is often smaller, privately-held innovators who are willing to do the extra work and “protect the un-protectable.” And in most cases, those early innovators ended up leading the next generation, even as the platform APIs eventually made standardized backups possible for everyone.

  • ARCserve didn’t back up midrange systems, but led NetWare’s backup market
  • Backup Exec and CommVault weren’t overly known for backing up NetWare, but dominated the early years of protecting Windows NT, later Windows Server
  • Veeam didn’t back up anything before VMware, but it is the de facto VM-specific solution to beat today

So, as traditional workloads like file/collaboration and email move from on-premises servers to cloud services like Office365 and GoogleApps and SalesForce, there will likely emerge new dominant innovators that could put all of the legacy solutions on notice.  That dominance has historically been based on two things: 1) early brand awareness in the space and 2) their influence on the platform provider that the rest of the backup ecosystem will eventually depend on. 

So, I recently took the opportunity to visit with Jeff Erramouspe, CEO of Spanning Cloud, to hear his thoughts on SaaS backup:

Thanks for watching.

[Originally posted on ESG’s Technical Optimist.com]

vBlog: Everyone Should Archive (period)

  • Even if you aren’t in a highly regulated industry.
  • Even if you aren’t a Fortune 500 enterprise.
  • Even if you don’t keep data over five years.

Everyone should archive, as a means of data management – because storage (both primary and secondary) is growing faster than storage budgets, so you can’t keep doing what you have been doing. Here is a video on the simple math of archiving/grooming your data.
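As a back-of-the-envelope illustration of that math (every number below is an assumption chosen for the example, not ESG research data), here is a small sketch of what grooming stale data out of primary storage does to both primary capacity and every downstream backup copy:

```python
# Back-of-the-envelope archive math -- every figure here is an illustrative assumption.
primary_tb = 100          # primary capacity today
annual_growth = 0.30      # assumed 30% yearly data growth
stale_fraction = 0.40     # assumed portion of data untouched for a year or more
backup_copies = 4         # assumed number of protected copies kept on secondary storage

# Without archiving: everything stays on primary and gets re-protected, over and over.
primary_no_archive = primary_tb * (1 + annual_growth)
secondary_no_archive = primary_no_archive * backup_copies

# With archiving: stale data moves once to a cheaper archive tier
# and drops out of the repeated backup cycle.
archived = primary_tb * stale_fraction
primary_groomed = primary_no_archive - archived
secondary_groomed = primary_groomed * backup_copies

print(f"Primary, no archive:   {primary_no_archive:6.1f} TB")
print(f"Primary, groomed:      {primary_groomed:6.1f} TB (plus {archived:.0f} TB on an archive tier)")
print(f"Secondary, no archive: {secondary_no_archive:6.1f} TB")
print(f"Secondary, groomed:    {secondary_groomed:6.1f} TB")
```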

As always, thanks for watching.

[Originally posted on ESG’s Technical Optimist.com]

Riverbed Announces the End to Excuses in Trying D2D2C

For several months, I’ve been talking about the inevitability of D2D2C (meaning that data goes from primary/production storage to secondary protection storage and then to a tertiary cloud). In fact, I blogged a few months ago that it seems hard to imagine organizations of any size meeting their recovery SLAs with a straight-to-cloud solution. Instead, the intermediary backup server or appliance provides a fast and flexible local restore capability, while the cloud provides longer-term retention.
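To make that D2D2C flow concrete, here is a minimal, purely conceptual sketch – the retention window, repository path, and the upload_to_cloud helper are hypothetical placeholders, not any vendor’s API. Recent restore points stay on the local disk target for fast recovery, while older ones age out to cloud object storage for longer-term retention:

```python
from datetime import datetime, timedelta
from pathlib import Path

LOCAL_REPO = Path("/backup/repo")       # secondary (disk) backup target -- assumed path
LOCAL_RETENTION = timedelta(days=14)    # assumed window kept locally for fast restores

def upload_to_cloud(restore_point: Path) -> None:
    """Placeholder for the tertiary cloud tier -- in practice an object-storage PUT."""
    print(f"tiering {restore_point.name} to cloud object storage")

def age_out_restore_points(now=None) -> None:
    """One D2D2C tiering pass: recent points stay on disk, older points go to the cloud."""
    now = now or datetime.now()
    for restore_point in LOCAL_REPO.glob("*.bak"):
        age = now - datetime.fromtimestamp(restore_point.stat().st_mtime)
        if age > LOCAL_RETENTION:
            upload_to_cloud(restore_point)
            restore_point.unlink()      # reclaim local capacity once the copy is tiered

if __name__ == "__main__":
    age_out_restore_points()
```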

But even D2D2C has several permutations, including:

  • Backup-as-a-Service intermediary caching devices before the BaaS service itself
  • Traditional backup servers/appliances writing to a cloud tertiary storage tier
  • Traditional backup servers/appliances replicating to a cloud-hosted copy of the backup engine
  • Traditional backup storage/dedupe platforms replicating to a cloud-hosted appliance

And there are other options, all of which equate to D2D2C with various benefits and drawbacks.  But many people are still not convinced to evaluate any D2D2C solution, for a few valid reasons:

  • Security concerns – many folks are justifiably concerned about data privacy in the cloud
  • Complexity concerns – Appliances can be difficult to set up or a challenge to operate
  • Interoperability concerns – Not all backup software works with every appliance or cloud-solution
  • Cloud-Evaluate-ability concerns — Those that haven’t started using cloud-services yet believe it to be expensive and complicated to initially configure, even for a simple evaluation.

Riverbed, long known for disrupting the status quo through its technology innovations around WAN optimization, has taken that know-how to deliver a backup appliance called SteelStore (formerly WhiteWater), which provides a local appliance for fast recovery and for compressing/deduplicating data before it is sent to the cloud – and then extended that with cloud storage. SteelStore provides yet another permutation in D2D2C, by offering a deduplication-capable on-premises appliance (physical or virtual) that extends its storage capacity directly from the cloud-storage provider of your choice.

Riverbed’s announcement today isn’t so much about the technology, as much as how Riverbed is removing your excuses for not trying D2D2C:

  • Security solution – Because the cloud-storage is simply an extension of Riverbed’s own storage container, the data is unreadable without a SteelStore in front of it. And yes, the data is encrypted in-flight and at-rest.
  • Complexity solution – This week’s announcement includes a free virtual appliance. So, nothing to install other than a VMware .OVF file.
  • Interoperability solution – The SteelStore appliance presents itself as a NAS. So any backup software that can write to a file share (all of them) can leverage the SteelStore appliance.
  • Cloud-Evaluate-ability solution – this is the cool one. Riverbed is partnering with Amazon to provide 6 months of AWS S3 storage for the Riverbed evaluation.

Specifically, the free virtual appliance supports 2TB of on-premises deduplicated/compressed storage that is then extended with up to 8TB of cloud storage. So, for the six-month evaluation, Riverbed and Amazon will cover the costs of those 8TB of cloud storage by issuing 48 (8 x 6) monthly TB credits. Sure, there is some fine print:

  • If you later want a bigger appliance than the 2TB virtual model, Riverbed will be happy to sell you one – but the free appliance is yours.
  • When you finish those six months, Amazon will start charging you for the storage if you continue using it.
  • And you will have to talk to a friendly Riverbed person, to qualify for the evaluation and get started with the technology and such.

Check out their website for more details, but the bigger picture is that there are many who would benefit from a better data protection infrastructure – with deduplication and fast agility on-prem, but with a scalable and economical tertiary capability in the cloud. And for those folks (and you all know who you are), you may be out of excuses. In fact, this kind of offer, with a virtual appliance and free Amazon storage, may have single-handedly removed the barrier to evaluating D2D2C more than any other single announcement in 2014, thus far.

So what is your excuse for not trying D2D2C yet? Riverbed may have an answer for that, too.

 

[Originally posted on ESG’s Technical Optimist.com]

Event Marketing Folks Don’t Get Enough Credit

With 25 years of attending tradeshows and launch events, I can attest that the Marketing/Events team does not get enough credit.

  • Booth folks show up and find a massive display ready for them to click their mouse and start talking to customers. And when the show floor closes, the staff leave and the booth magically dismantles itself.
  • Execs show up to private venues that are full of style and ready to ensure that whatever is discussed will be better remembered, because of the atmosphere around them.

Of course, not all marketing events are awesome (or memorable) – but I wanted to highlight a few recent examples of how to really do a marketing event well:


Launch Event: EMC ProtectPoint in London

This should be a case study for launch events, with what I believe to be the perfect combination of style and substance. The style came in the form of its Doctor Who theme, complete with an unassuming Tardis (blue police phone booth) on the outside of a nondescript building, which opened into a tunnel of lights and sound that led you to believe you were being transported to a different place and time. And on the other end of that tunnel, we learned about “backupless backups” (among other things). The day included:

  • Executive “vision” in the morning
  • Opportunities to meet 1:1 with early-adopter customers in the afternoon
  • Optional technical deep-dives mid-day by product experts

Many launch events include two out of three of the above, but miss the ability to tell the whole story by omitting one of the three facets. Throw in a small plastic Tardis that somehow found its way from one of the many discussion tables to my bookshelf of geekstuff, and it’s an event and a product that will stick with me for a while. See my vBlog from the launch.


Influencer Event: Acronis at VMworld 2014

VMworld, like most major tradeshows, is often capped every night with various parties and gatherings. Many of them are in loud venues that might show appreciation to customers, but make it challenging to find colleagues or have a meaningful conversation — and most are bereft of any memorableness.

Acronis took a decidedly different approach that I thought was really smart – they rented out the Cartoon Art Museum in San Francisco. Hint: one reason for IT stereotypes like “we like comics” is because some of us do or did, and all of us “know someone” who likes art. The walls were adorned not just with classic strips like Dick Tracy, but also with modern exhibits from recent definitive artists for Captain Marvel, the Avengers, and the Punisher. Even those who didn’t geek out as kids can be impressed by the true “art” when drawings that originally appeared as 3” squares in a monthly comic are blown up to canvas size. Pair that with a relaxed atmosphere to actually talk to folks and a very cool souvenir from a modern-day artist doing caricatures, and it leaves a lasting impression about both the evening and the vendor who sponsored it. PS: for those who think that Acronis is just the image-level migration utility that shipped with your new flash drive, you really need to look again (without blinders).

Sidenote: to be fair, Rackspace had a caricature artist on the show floor. But the difference is that there were likely several folks that got Rackspace caricatures who really didn’t (and still don’t) understand what Rackspace does – good draw, but hard to convert into a meaningful conversation. The Acronis artist was simply an integrated part of a well-planned event, which makes the artist less of an attraction and the hosting vendor more so.


Customer Event: Veeam Party at VMworld

The annual Veeam party at VMworld is described by some as “legend….ary,” but what I find impressive is the balance that Veeam strikes between its influencer engagement and the broad appreciation that it shows its customers and prospects. The main party starts at 8, with admittance only by the coveted green Veeam necklace – something I can personally attest is rigorously enforced by two very inhospitable bouncers. But for a few, there was a second, smaller event that started a little earlier, included candid conversation by Veeam executives for press/analysts, and gave folks a chance to really understand Veeam’s strategy and aspirations before the big customer event. In this case, the venue itself may not be overly memorable (other than the smart layout for the dual events), but the net effect and company perceptions will be remembered long after the battery in my green necklace expires.


Booth Experience: Zerto

There were many well-attended and well-staffed booths on the VMworld Expo floor, often with presumably knowledgeable subject matter experts and some with charismatic or pretty people to pull you in. PS, never, ever assume that those two groups (experts & hosts) are mutually exclusive – which I was reminded of while talking to an extremely knowledgeable Zerto booth staffer wearing an “I am not a booth bunny” badge, and hearing her horrible story of one attendee who insultingly presumed otherwise – and the humorous retort that put him in his place. Those badges alone make for an honorable mention on this list, but what really impressed me was what happened after the expo closed.

I was in a meeting until 20 minutes after the show floor closed. When I came out, carpets were being rolled up, shipping boxes were being packed, and not a single booth staffer could be found (except for the obligatory event managers) … and the entire Zerto team. They were in a circle, celebrating each other’s successes throughout the week. Having done show booths for years, the norm is that your collective staff immediately become individuals when the closing announcement is made. Those with flights run; those who don’t clump together to wind down; but almost never does the entire staff stay – to celebrate each other and learn from each other. And that culture doesn’t happen by accident, nor as a one-off gimmick, which makes me wonder how that highly-energized, community approach must be within the Zerto team throughout the rest of the year … and how Zerto’s customers must benefit from it.

[Originally blogged on ESG’s Technical Optimist.com]

EMC RecoverPoint for VMs is another step forward in vAdmin enablement

This week, EMC released RecoverPoint for VMs (RP4VM). For storage administrators, RecoverPoint has long been seen as the seamless synchronous/asynchronous storage replication of choice for EMC storage, to deliver higher levels of resiliency for enterprise workloads. But for virtualization administrators, it was part of the “magic” that made the storage under the hypervisor surprisingly durable – or perhaps not even recognized at all.

With the RP4VM release, enterprise-grade storage replication is now in the hands of the VMware Administrator (vAdmin). RP4VM is made up of three core components:

  • A virtual appliance for replicating to another appliance on a different host
  • An IO splitter that captures disk IO from the hypervisor and diverts a copy to the appliance (see the conceptual sketch after this list)
  • A vCenter plug-in for management
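For readers unfamiliar with the splitter concept, here is a deliberately simplified, conceptual sketch of what “splitting” write IO means – it is not RecoverPoint code, and the class, queue, and variable names are purely illustrative. Every write lands on the local (virtual) disk as usual, and a copy is simultaneously handed off for the replication appliance to journal and ship:

```python
import queue

class WriteSplitter:
    """Conceptual IO splitter: each write is applied locally AND
    a copy is handed to the replication appliance's journal queue."""

    def __init__(self, local_disk: dict, appliance_queue: queue.Queue):
        self.local_disk = local_disk            # stands in for the VM's virtual disk
        self.appliance_queue = appliance_queue  # drained by the (virtual) replication appliance

    def write(self, block_address: int, data: bytes) -> None:
        self.local_disk[block_address] = data             # normal write path to local storage
        self.appliance_queue.put((block_address, data))   # split copy destined for the remote replica

# Illustrative usage: the appliance drains the queue and ships blocks to the remote copy.
disk, journal = {}, queue.Queue()
splitter = WriteSplitter(disk, journal)
splitter.write(42, b"hello")
print(journal.get())   # -> (42, b'hello') is what gets journaled and replicated
```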

With this approach, any storage that the hypervisor can use (iSCSI, FC, DAS, vSAN) can be replicated – even to heterogeneous storage on another ESX hypervisor. Via its vCenter plug-in, this solution is designed to be administered by a virtualization specialist, instead of a storage specialist. It is another huge example of vAdmin enablement. In broader terms, it is another example of workload owners (e.g. virtualization, database, storage) taking on traditional infrastructure management responsibilities that might previously have been managed by a backup administrator – similar to how EMC recently released ProtectPoint, as a way to enable storage administrators to protect their own data.

Both of these releases are HUGE WINS for workload administrators who want to achieve better levels of resilience and recover-ability for their various workloads (storage & virtualization). But in both cases, there is another side to the story that should also be included: the need for better communication and shared strategy between the workload owners and the traditional facilitators of infrastructure services – see previous blog post on the evolution of workload protection.

With ProtectPoint, storage administrators can now protect their own data, but they will likely be more successful and their organizations will be more compliant, if that protection is strategized and then enacted through a partnership between the now-empowered storage administrator and the backup administrator who typically helms that responsibility (and still likely maintains the Data Domain platforms).

Now, with RecoverPoint for VM, virtualization administrators can ensure higher resilience of their virtualized infrastructure, but they will likely be more successful if that resilience is strategized and then enacted through a partnership between the now-empowered virtualization administrator and the storage administrator who traditionally facilitated the underlying capacity for the virtualization infrastructure (and likely still maintains the storage platforms themselves).

Key Takeaway: There are other examples, but the point is – as workload administrators gain new capabilities that were previously helmed by their IT peers, their shared success will not be JUST from the new technology innovations themselves (often as elegant as they are impressive), but also in the shared strategies and collaboration between those who understand the workloads, and those that understand the organization’s mandates and expectations of resiliency, recovery, retention, etc.

[Originally posted on ESG’s Technical Optimist.com]

Backup Alone Just Isn’t Enough! (vBlog)

If you haven’t already checked them out, ESG recently started delivering ESG “Video Capsules” – video wisdom in 140 seconds or less: www.esg-global.com/esg-video-capsules.

One of the more recent ESG Video Capsules, “Backup Is No Longer Enough,” discusses that IT organizations of all sizes struggle to achieve the SLAs that their business units require, if their only recovery solution is a traditional backup solution. In fact, when looking at core platforms like server virtualization systems (VMware & Hyper-V), less than 10% of folks are only protecting their VMs with backups; the rest are using a combination of snapshots and replication to supplement their backup mechanisms – a strategy which is consistent with the Data Protection Spectrum that I often discuss.

With average SLAs for all systems (not just “critical” or Tier-One platforms) shrinking to <3 hours or <1 hour, it’s really hard to diagnose the problem, restore the data set, and resume business in those timelines. Instead, one should strongly consider combining snapshots, replication, and backup for a comprehensive data protection strategy – ideally using a single management UI to control all of it.
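A simple, hedged way to see why those SLAs are so hard to hit with backup alone: add the diagnosis time to the restore-from-backup time, then compare the same incident handled by a snapshot revert or replica failover. The throughput and timing figures below are assumptions chosen only to illustrate the arithmetic:

```python
# All inputs are illustrative assumptions, not measured or ESG-reported figures.
sla_hours = 3.0
diagnose_hours = 0.5            # assumed time to find and triage the failure
data_gb = 2000                  # assumed size of the affected data set
restore_gb_per_hour = 400       # assumed restore throughput from backup storage
snapshot_revert_hours = 0.1     # assumed near-instant snapshot revert / replica failover

restore_from_backup = diagnose_hours + data_gb / restore_gb_per_hour
revert_from_snapshot = diagnose_hours + snapshot_revert_hours

for method, hours in [("backup restore", restore_from_backup),
                      ("snapshot/replica", revert_from_snapshot)]:
    verdict = "meets" if hours <= sla_hours else "misses"
    print(f"{method:17s}: {hours:4.1f} hours -> {verdict} the {sla_hours:.0f}-hour SLA")
```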

As always, thanks for watching.

[Originally posted on ESG’s Technical Optimist.com]

CommVault announces “You Can Have It Your Way”

There is a famous hamburger chain that used to tout, “You can have it your way,” whereby instead of getting your burger fully-loaded (with all the fixin’s), you can choose whether you want pickles, tomatoes, or anything else.

For the last two decades, CommVault has been offering a fully-loaded data protection solution that encompassed backup, archiving, replication, snapshots, etc. Over the course of time, and based on customer feedback, it continually added features – just like the burger chains that now add bacon, steak-sauce, grilled onions instead of fresh, etc. The challenge was and is that not everyone wants their burger fully-loaded, nor their data protection solution fully-featured.

Often, IT organizations are either intentionally diversifying their data protection tool set (instead of seeking out one solution to protect it all) – or they are addressing a point problem, where one particular kind of workload needs better protection than the status quo. Unfortunately, a fully-loaded solution doesn’t always align with the point-product approach that many IT organizations are looking for – so CommVault did something rather atypical: it segmented the Simpana licensing around four solution sets:

  • Virtualization-protection & Cloud-management
  • IntelliSnap (snapshot) recovery
  • Endpoint protection
  • Email archiving

While many companies have a broad spectrum of data protection capabilities, those capabilities often come from running multiple products – even if they are from the same vendor, perhaps even licensed in bundles or suites. CommVault has never had that challenge – as its overall feature-set has grown through its one and only Simpana codebase. It’s always been “fully loaded,” just like the big burger.

So, while some other vendors are trying to converge their products for better interoperability, CommVault is going the other direction – separating out the licensing, while maintaining a single product line. The result is very similar to the burger options:

Everyone gets the same top bun – the management UI – though only the licensed functions will appear within it. When you add new functionality, new features will simply appear in the UI.

Everyone gets the same bottom bun – the Simpana ContentStore – which is the converged storage infrastructure consisting of tape, cloud and disk repositories, which will vary based on business goal, but has always been shared by the higher-level Simpana functions.

Everyone gets the same meat patty – the Simpana engine – most typically used for backups, but is in its essence a job scheduler, catalog of recoverable points, etc.

Choose your toppings … based on which workloads need better protection and/or which point products you are considering or looking to replace.

Certainly, when adapting a data protection product to be utilized by secondary IT Pros (those seeking point solutions, such as storage, virtualization, archive or endpoint administrators), there will be some adjustments in the sales dialog, as well as the evaluation and adoption cycles. On the upside, this does allow secondary IT decision makers to choose Simpana for their particular workloads, while the backup administrator will still have the skills and tools to help those IT professionals be successful in protecting their organizations’ data – see earlier blog on workload-centric protection enablement.

In short, by offering an à la carte way to consume Simpana, CommVault continues to demonstrate its willingness to adapt to customer demand – without sacrificing the single core platform that it has built its success upon through many years of data protection evolution.

[Originally posted on ESG’s Technical Optimist.com]

What I am looking for at VMworld 2014 … a Data Protection Perspective

For the past few years, the big data protection trend in virtual environments was simply to ensure reliable backups (and restores) of VMs. That alone hasn’t always been easy, but with the newer Data Protection APIs from VMware (VADP), it is becoming table stakes – with the real differentiation coming from the agility to restore (speed and granularity), as well as manageability and integration.

And while there is certainly still a lot of room for many vendors to improve in those areas, the industry overall needs to move past the original question of “Can I back up your VM?” and even past “How quickly can I restore your VM?”

The new questions to be answered are:

Does your data protection solution understand which VMs should be protected and how?

How protection/recovery-enabled is your Virtualization Administrator?

The answer to the latter question may in fact inform the former one, in that a Backup Administrator isn’t always the best person to determine how the VMs should be backed up – because they don’t know what is running in those VMs. The only folks who really know that are the folks who provisioned those VMs in the first place, and those are typically not the backup admins … they’re the virtualization administrators.

I covered that in some detail in a TechTarget article – discussing that the provisioning process is the right place to quantify how the VM should be protected, including retention length and RPO/RTO, which would then affect how the data protection process(es) are enacted. Maybe the provisioning process links directly to the backup engine, or the snapshot engine, or replication engine, or ???
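To illustrate the idea (and only that), here is a hedged sketch of what protection defined at provisioning time might look like: the requester declares RPO/RTO and retention alongside the VM request, and a simple policy router decides which engine(s) to enact. The thresholds, class, and engine names are hypothetical, not any product’s API:

```python
from dataclasses import dataclass

@dataclass
class ProtectionPolicy:
    rpo_minutes: int        # how much data loss the workload can tolerate
    rto_minutes: int        # how quickly the workload must be running again
    retention_days: int     # how long recovery points must be kept

def choose_engines(policy: ProtectionPolicy) -> list:
    """Map requested SLAs to data protection mechanisms (illustrative thresholds only)."""
    engines = ["backup"]                      # every VM gets a recoverable backup copy
    if policy.rpo_minutes <= 15:
        engines.append("replication")         # near-zero data loss calls for a replica
    if policy.rto_minutes <= 60:
        engines.append("snapshots")           # fast resumption favors local snapshots
    return engines

# Illustrative usage during VM provisioning:
tier1_vm = ProtectionPolicy(rpo_minutes=5, rto_minutes=30, retention_days=365)
print(choose_engines(tier1_vm))   # -> ['backup', 'replication', 'snapshots']
```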

Remember, “data protection” is not synonymous with “backup” – especially as it relates to server virtualization. In fact, when ESG asked IT professionals how they protect VMs, less than 10% stated that they only used VM-centric backup mechanisms. The other 90%+ used a combination of snapshots, replication or both to protect VMs in combination with VM-centric backups, as reported in ESG’s Trends in Protecting Highly Virtualized Environments in 2013.

[Figure: VM-protection methods – from the ESG Research Report, Trends in Protecting Highly Virtualized Environments, 2013]

Short of augmenting the VM provisioning process to include data protection, the next best answer is to enable data protection management from within the virtualization administrators’ purview – because those folks understand the business requirements of the VMs. That doesn’t always mean ensuring that your data protection (snapshot/backup/replication) tool has a vCenter plug-in, though that helps. It does mean:

Have you truly built your data protection product or service to understand highly virtualized environments?

Is the solution VM-aware (per VM or VM-group), or simply hypervisor host-centric?

Are the management UIs (standalone or plug-in) developed with the virtualization administrator in mind? Or are they backup UIs that you hope the virtualization administrator will learn?

And of course, how agile is the restore? How fast? How granular? How flexible to alternate locations (other hosts, other sites, other hypervisors, cloud services)?

Yes, it’s a long list of questions – and I expect to be very busy at VMworld 2014, trying to find the answers from the exhibiting vendors, as well as from VMware who enables them.

[Originally posted on ESG’s Technical Optimist.com]