Tuesday, December 15, 2009

"Fabric-Based Computing": the future of Fabric Apps?

If you attended the Gartner Data Center conference a week ago, you may have already heard the buzz about “fabric-based computing”. It’s a fancy new label for Cisco’s entry into the market with the UCS platform. It’s almost a market space of one, rounded out by the HP BladeSystem Matrix and a couple of smaller players, 3Leaf Networks and Liquid Computing. Gartner even predicts, surprisingly, that 30% of Global 2000 companies will run some form of it by the end of 2012.

Is this really a market, or just a server bundling exercise wrapped in revisionist PR? When Gartner first published about Computing Fabrics, they were speaking to the emerging enterprise compute-clustering space, now populated by just a few remaining players such as Egenera, Scalent and Surgient. The others have died or been bought – some just for their IP – such as Virtual Iron/Oracle, Cassatt/CA, PlateSpin/Novell and Qlusters. The problem is that the Cisco modular blade server / network switch chassis is a world away from an offering that does virtualized server compute, shared system memory, automated/dynamic resource management, etc.

Also, I’d love to think that “fabric-based computing” will enable a rebirth of the network-based application space. Again, products like UCS are just server chassis (Cisco now offers both blades and rack-mounted servers) that run Intel processors and standard operating systems. This is decidedly different from how Invista or RecoverPoint actually runs within the network fabric, able to work with heterogeneous host operating systems. I'm not saying it’s better or worse, just that it’s different.

Anyway, in Gartner’s study on the topic (just published, not yet available online), they point out that “Adoption will likely follow a similar pattern to that of the early years of the blade server market.” This is because UCS is primarily a blade server. No need for a fancy new name. And no need to invoke the spirits of Fabric Applications or compute clustering.

Thursday, November 19, 2009

New Video on Fabric-based RecoverPoint


A product update video about RecoverPoint on Brocade was posted today on YouTube, featuring Rick Walsworth of EMC.  It provides an update on this leading fabric-based replication and data protection solution.  Known as the product of choice for enterprises and large organizations protecting heterogeneous storage environments, the offering now works with virtualized environments and supports Brocade's newest SAN directors, the DCX and DCX-4S.  Highlighted in the video are how RecoverPoint works with Brocade fabrics, how it protects virtualized environments, and the EMC / Brocade technology synergy.  The video is available here, and more info on the solution is available at www.RecoverpointOnBrocade.com.

Tuesday, September 29, 2009

Fabric-based Replication In the News


A recent Storage Magazine story addresses network- or fabric-based storage management, including replication. The article, "The pros and cons of network-based data replication" by Jacob Gsoedl, notes: "Network-based replication combines the benefits of array-based and host-based replication. By offloading replication from servers and arrays, it can work across a large number of server platforms and storage arrays, making it ideal for highly heterogeneous environments."

The writer clearly gets the core benefit of fabric apps: enabling the management of mixed storage environments. The article cites a number of replication and virtualization products, including EMC RecoverPoint, IBM SVC and HP SVSP.

To learn more about the specific features and benefits of a leading fabric-based replication offering, go to www.RecoverpointonBrocade.com.

Monday, September 28, 2009

The Golden Age of Fabric Apps


I found myself describing Fabric Applications as being in their "Golden Years" the other day. For a completely off-the-cuff remark, it's not a bad description. They are very mature, and in that sense people understand what they are and the value they bring. For instance, network-based storage virtualization is still written about as an efficient approach to delivering the benefits of virtualization across a cross-vendor, heterogeneous storage environment.

Having said that, we're seeing more development in array-based storage, such as native FCoE support. And we can continue to expect more virtualization and replication capabilities consolidating onto the array, esp. with new modular designs like V-Max. Network-based offerings from IBM (SVC) and HP (SVSP) seem to defy this trend, but those roadmaps should converge towards the DS8000 and EVA respectively at some point, as HDS offered the market solid validation of array-based heterogeneous virtualization with the USP-V.

Don't get me wrong - I'm not sounding the death knell of network-based storage services. I'm guessing just the opposite: more intelligence will be built into converged networks leveraging advanced standards such as FCoE, CEE and TRILL. And fabric services will continue to provide the common data management and offload that allow applications using the fabric to be that much more efficient.

So for now, in addition to fabric apps, I'll be spending more time in the cloud space. Please follow the conversation on my new CloudItch blog (www.clouditch.com).

Tuesday, April 14, 2009

Congrats on V-Max





EMC had big news today thanks to their virtual launch of the new Symmetrix V-Max. It has garnered broad industry coverage, as well as its share of competitive brickbats.

It's a revolutionary break from the old-line monolithic array. Built on x86 components, designed with modular 'engines' to enable more granular scale-out, and equipped with built-in automated tiering, it looks more like an offering from a nimble innovator like Compellent or 3PAR. But with scale-up to a claimed 3PB and "tens of millions of IOPS", this is still gold-tier enterprise storage.

EMC's Chuck Hollis paints a typically impassioned view of the technology. It includes storage virtualization features such as pooling data across multiple arrays, non-disruptive data mobility and I/O load-balancing. He listed the architectural approaches customers have for achieving storage virtualization: "today, we've got three different approaches to putting storage virtualization in the network: (1) use a server appliance (e.g. IBM SVC), (2) use an array controller (e.g. HDS USP), or (3) use an intelligent switch (e.g. EMC Invista)." Forgiving the oversight of "host-based" (e.g. DataCore), this list does cover the typical enterprise, large service provider or government agency consideration set. There are a number of choices, each with its pros and cons.

And despite all the greatness of V-Max, the Storage Virtualization buyer often needs to address issues of heterogeneous storage resources, performance and vendor lock-in. Where these are issues, a fabric-based approach is still the best way to go. (And if you're an EMC shop, you can still make your sales rep happy by choosing Invista.)

Congrats to the Symmetrix team on the launch -- we're all eager to see how the new V-Max vision rolls out over the coming months.

Friday, April 10, 2009

Virtual Storage as well as Coffee Cups


The report from SNW: still a big show but not quite as large as last year. Cost cutting was top of mind, with one attendee sharing that his company eliminated styrofoam coffee cups to save $70k!

Storage news included the apparent maturing of Storage Virtualization, with 76% of attendee orgs either having already deployed or expecting to deploy a solution by next year. Companies are realizing savings from Storage Virtualization through efficient resource pooling, reduced management complexity and the ability to manage more data with the same staff. One company shared that they had 15PB all on a performance SAN, and wanted to reduce cost through tiering to cheaper arrays -- a compelling use for Storage Virtualization.
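To see why tiering is so compelling at that scale, here's a back-of-the-envelope sketch. The 15PB figure comes from the attendee's anecdote; the per-TB prices and the 20% hot-data split are purely hypothetical assumptions for illustration:

```python
# Hypothetical cost comparison: keeping all data on a performance SAN
# vs. tiering it via storage virtualization. Prices are assumptions only.

TOTAL_PB = 15
COST_PER_TB_TIER1 = 15_000   # assumed $/TB for performance SAN storage
COST_PER_TB_TIER2 = 4_000    # assumed $/TB for cheaper capacity arrays

def storage_cost(pb_tier1, pb_tier2):
    """Total hardware cost in dollars for a given split across tiers (PB)."""
    return (pb_tier1 * 1000 * COST_PER_TB_TIER1
            + pb_tier2 * 1000 * COST_PER_TB_TIER2)

all_tier1 = storage_cost(TOTAL_PB, 0)
# Suppose only 20% of the data actually needs performance-tier storage.
tiered = storage_cost(TOTAL_PB * 0.2, TOTAL_PB * 0.8)
savings = all_tier1 - tiered

print(f"All on tier 1: ${all_tier1:,.0f}")
print(f"Tiered:        ${tiered:,.0f}  (saves ${savings:,.0f})")
```

Even with made-up prices, the shape of the result is the point: when most of the capacity doesn't need performance storage, pooling and tiering recover a large fraction of the hardware spend.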

With the maturing of the category, the key players will stabilize their respective positions. But there's a potential disruption on the way: the market still hasn't found an equilibrium in terms of form factor, with share split between array-based (e.g. HDS USP), appliance-based (e.g. IBM SVC and NetApp V-Series), host-based (e.g. DataCore) and fabric-based (e.g. EMC Invista) approaches. Expect to see innovation in how network-based technology works to improve the performance of these and other storage apps...

Friday, March 6, 2009

Pay-as-you-go Computing

It's exciting to see that as both vendors and customers mature in this IT business, the nature of our transactions changes. The recent buzz about the expected explosive growth in cloud computing has, not surprisingly, coincided with continued IT cost reduction pressures. Some of this is certainly due to the current global economic crisis, but the trend has been in play for a long time. What was once an exercise in procuring something cool and breakthrough has become buying something basic and vital, like milk. I've taken a stab at capturing some key aspects of this evolution to cloud computing.

What I'm seeing as especially significant is how we as infrastructure technology providers are going to have to package our solutions in a more pay-as-you-go fashion. Very much like how we all buy our power at home - it sounds like a no-brainer, but let's say you make your living selling network hardware. How does that work? And as the customer, what do you expect this utilization-based arrangement to look like? Or maybe the real takeaway is that you don't want the arrangement at all. You want email, a way to track your sales pipeline, and a means to do bookkeeping, and you don't want to have to staff experts to determine whether the best way to do this requires Fibre Channel, Ethernet, FCoE, CEE, or M-O-U-S-E.
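To make the power-bill analogy concrete, here's a minimal sketch of what a utilization-based charge might look like. Every number here - the base fee, the included allowance, the per-GB rate - is a made-up assumption, not anyone's actual pricing:

```python
# Hypothetical pay-as-you-go billing: charge for capacity actually used
# each month, like a power bill. All rates are illustrative assumptions.

def monthly_charge(gb_transferred, base_fee=500.0, rate_per_gb=0.08,
                   included_gb=1000):
    """Base fee covers an included allowance; overage is metered per GB."""
    overage = max(0, gb_transferred - included_gb)
    return base_fee + overage * rate_per_gb

# A light month stays within the allowance and pays only the base fee;
# a heavy month pays for exactly what was used beyond it.
print(monthly_charge(800))    # 500.0
print(monthly_charge(5000))   # 500.0 + 4000 * 0.08 = 820.0
```

The vendor-side question the post raises is exactly what such a function hides: someone still has to own, size, and refresh the hardware behind that meter.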

Wednesday, March 4, 2009

The New Multi-Tier Support Model

First of all, I want to give credit where it's due. Intuit redeemed themselves and found our lost TurboTax Online 2007 return. I never thought it was really deleted or inaccessible, even though their support person did.

The real story is their breakthrough Customer Support model. It leverages a typically un-tapped resource: the spousal network. After being told 'Sorry' by the Support rep, I shared my frustration with my wife, who works at Intuit, and 24 hours later the problem is solved. Why waste expensive call-center resources when you can just have a spouse help solve customer problems? It worked for Intuit, and it can for you, too.
Step 1. Operationalize the new support model, see diagram.

Step 2. Require your employees to marry your customers. May require some convincing, and depending on company size, polygamy, but you can achieve anything with the right attitude.

Saturday, February 21, 2009

Setting back network-based computing one return at a time

There are many advantages to running applications on a network, like not having to worry about an operating system, eliminating your local host footprint, etc. But there's a big, unspoken but widely understood issue: trust. You need to be able to trust that things will work just as well on the network as they would on your local server. You hand over control, and therefore accept some risk, in exchange for the benefits. The overall arrangement should net benefits to the customer.

Back in the early days of 'cloud computing', circa 1999, when we were running client-server apps like Exchange straight over public IP, customers were taking trade-offs in performance, but conscientious ASPs more than delivered in terms of cost savings, availability and reliability. The last thing you wanted to do was make things on balance worse for the customer. And the ABSOLUTELY last thing you wanted to do was lose someone's data, even if it was 'just email'.

Well, fast forward a decade. Here's a $3 billion so-called leader in personal finance software, Intuit, with their leading tax prep solution, TurboTax, trying to remain relevant within a growing market of online personal finance and related alternatives such as mint.com, wesabe.com and others, many founded by Intuit alumni. These startups get that the key to the battle is building trust. Well, let me tell you, if you are trying to build trust in an online business, you don't do it by losing someone's data.

I just got off a call with the TurboTax support line, because the help page directions I had been following to access a previous year's return didn't seem to work. After spending 30 minutes waiting for an answer (this is actually their average hold time) and another 20+ on the phone with support, I was told that the 2007-to-2008 conversion I had attempted, per the online help instructions, can only work once. If it didn't work the first time, then your tax data has been deleted.

Never in my years of experience with various forms of online services would I have expected to encounter a system that would, by default, delete customer data. I wouldn't write an MRD for a program like that. I've never worked with an engineer who would code something like that. Nor do I recall a sysadmin who would manage an environment where you couldn't recover data even if the program couldn't access it.

Not only is Intuit showing incompetence in application development, but by losing customer data they risk pulling down the entire industry. Whether you're doing online storage, CRM, ERP or security, across the public cloud or within a private cloud, we absolutely must be stewards of the customer's data and, ultimately, their trust. We strive to bring net benefits. But at the very least, even if you as a company are incapable of evolving, even if you can't deliver customer value within an online model, then for the sake of those companies around you who will make it, please at least 'do no harm' and don't drag down the industry around you.

Wednesday, February 18, 2009

Technology Value vs. Doom and Gloom

Granted, times are tough. But it’s gotten to the point where economic fear-mongering is more than a popular political tool and has become a competitive weapon. It would be laughable when math-challenged vendors spread rumors of layoffs that exceed the competitor’s entire headcount, except that layoffs, furloughs and slowdowns are not a laughing matter.

Luckily, some companies remain focused on delivering valuable technology, esp. where it’s solving important problems, like how the new KMIP encryption standard will make organizational data more secure. Or they're innovating to reduce data center complexity, lower costs or save the environment. Some of us will remain focused on delivering new benefits and savings through technology -- hopefully customers find this helpful, and maybe even hopeful.

Thursday, February 5, 2009

Fabric Applications = Private Cloud computing

I've never been one to add to what is already some serious vendor over-hype, e.g. 'Cloud Computing', but then again I'm not one to rant against something either (regardless of how funny that can be). I'm hearing that some customers find the concept helpful. Cool. I know I find it helpful to review why these things have value to customers.

In my day job, we've been delivering what we call Fabric Applications for many years. In concept Fabric Applications are similar to what is being called a Private Cloud, especially as it relates to fabric apps such as Storage Virtualization. Our company's focus up to this point has been specific to Storage networks (vs. LANs), though this will change with FCoE and CEE.

And I had been wondering - given the current economy, or trends in vendor consolidation, or a new administration, or Jessica Simpson's weight challenges, etc. - whether customers' needs were really changing with regard to Fabric Apps and this whole cloud computing thing. As you may recall, a key driver of network-resident computing is the value it brings to managing resources from many different vendors. As it turns out, this is still a growing need for companies.

Data recently released by a leading IT research firm shows that more than 80% of enterprises are working with 2 or more storage vendors, and more than 25% of companies are now working with 6-10 different vendors - an increase over last year! In addition, enterprises that have products for storage virtualization are continuing to expand their use. The hype around ‘cloud computing’ and ‘private clouds’ is apparently there because there's value in it: companies want best-of-breed solutions along with management flexibility. And Fabric Apps, or Private Clouds, or Enterprise Clouds, or insert-your-favorite-term-here, are the way to achieve it.

Friday, January 9, 2009

Bringing Clarity to Cloud Computing

There's a great deal of news out there on cloud computing. And a number of new innovators and recent deals show this to be an IT hot spot for '09. But as shown in earlier posts, the idea of running applications on a network fabric isn't new. Storage management apps have been running within the SAN fabric for some time.

So, what fabric apps are on the market today?

If you're looking for a more efficient way to do storage virtualization, data replication or data migration, to name a few, you can accomplish this today with established, enterprise-class applications from brand-name suppliers. Here's a focus on one:

EMC Invista -- this leading storage virtualization app runs on the SAN fabrics of hundreds of leading global companies, some with up to a petabyte of data virtualized. Invista enables non-disruptive data movement and ILM, increases storage hardware utilization, supports server virtualization efforts, and does it all across a heterogeneous storage environment.

Invista has proven benefits: one company realized a 3:1 consolidation of storage equipment. Another saved 91% on storage provisioning time via pooling. And another reduced a 22-month migration plan to just 4 months. Being able to mix and match storage also allows organizations to optimize on lower-cost hardware and reduce their overall cost/TB - a big deal in today's economy.

The hallmark of today's fabric apps is enterprise-class functionality with very high performance. If this is the space you're operating in, then you should definitely be looking into this quieter corner of cloud computing: SAN fabric applications.

Enjoy this very recent story on reducing storage cost through things like storage virtualization.