In 2018, Data Center Technology Will Become Smarter, Hybrid, More Distributed, and Easier to Consume

C. Drake

Summary Bullets:

  • In 2018, rising enterprise demand for hybrid cloud solutions will fuel new and expanded partnerships between traditional infrastructure vendors and hyperscale public cloud providers.
  • Vendor initiatives will target the challenge of managing workloads across hybrid and increasingly distributed IT environments, along with ways of simplifying the procurement, deployment and consumption of IT.

2017 saw a growing recognition that private cloud technology is both a realistic and desirable way to manage enterprise workloads, and that it can be used more efficiently when effectively integrated with public cloud services. A common theme during the year’s industry events was envisaging and enabling multi-cloud and hybrid cloud futures. At the same time, in 2017, data center infrastructure vendors from Cisco and Dell EMC to IBM and HPE continued to transform their solutions and services businesses. These transformations were a response to enterprise digitalization initiatives and to the recognition that in the future, IT will be hybrid and must be able to span the full spectrum of enterprise locales, from the cloud to core data centers to the network edge. In 2017, individual vendors went through quite different transformation processes: in addition to launching new solutions, technology companies acquired and integrated new businesses, and forged alliances with one another and with hyperscale cloud providers in order to fill out their portfolios. These developments were all driven by a competitive push to help enterprises modernize their traditional data center environments, capitalize on the benefits of hybrid cloud, and expand their ability to handle growing volumes of data at the edge of their networks. Continue reading “In 2018, Data Center Technology Will Become Smarter, Hybrid, More Distributed, and Easier to Consume”

Partnering with Competitors Will Remain Central to Dell EMC’s Converged and Hyper-converged Solutions Strategy in 2017

C. Drake

Summary Bullets:

• The market for hyper-converged infrastructure (HCI) will be a major battleground for solutions jointly engineered by Dell Technologies group businesses.

• Dell EMC will maintain partnerships with competitors in relation to specific converged and hyper-converged solutions as long as customer demand for these solutions continues.

The launch in December of a new VxRack solution based on Dell EMC’s PowerEdge servers and VMware’s software-defined data center platform gives us only a partial indication of how Dell EMC’s HCI portfolio will evolve in 2017. For a fuller understanding, it is necessary to look at the broader range of decisions and announcements the company has made both prior to and since its September merger. It can be argued that the launch of a new HCI solution based entirely on infrastructure provided by Dell Technologies group businesses – together with a move to drop the VCE brand for all of Dell EMC’s converged and hyper-converged solutions – points to a change of strategy for the vendor. Those making this argument also point to the way in which Dell’s PowerEdge servers have been steadily incorporated into several EMC solutions since the completion of the merger, including the company’s VxRail hyper-converged appliance. Continue reading “Partnering with Competitors Will Remain Central to Dell EMC’s Converged and Hyper-converged Solutions Strategy in 2017”

Although HPE Announces a Milestone for Its “Machine” Project, Future Success is Far from Certain

C. Drake

Summary Bullets:

• HPE announced a major milestone for The Machine research project, which promises to transform future computing and data center architectures.

• Despite real achievements, it remains to be seen whether HPE can make a success of The Machine in the way the vendor originally envisaged.

Of the various announcements Hewlett Packard Enterprise (HPE) made at its recent Discover event in London in November 2016, one of the most interesting related to “The Machine”, a Hewlett Packard Labs research project that was inaugurated in 2014 and which aspires to revolutionize the way computers are built and the way data centers of the future are architected. At the London event, HPE announced that it had reached a major milestone for The Machine project, having built and successfully tested a prototype of the “world’s first memory-driven computing architecture”.

Continue reading “Although HPE Announces a Milestone for Its “Machine” Project, Future Success is Far from Certain”

Three Key Networking Trends for 2016

M. Fratto

Summary Bullets:

  • Enterprise SDN momentum is still slow to pick up, indicating that enterprises are struggling to find relevant use cases or use cases with sufficient benefit.
  • Integration capabilities need to improve industry-wide, including technical implementations and go-to-market tactics that prioritize accessibility.

I dislike yearly predictions. If I could make accurate predictions, I’d be rich and living on a beach somewhere pondering my next fruit-and-umbrella drink. But I can see what enterprises are asking for from vendors and how various vendors are responding to those demands. The big-picture end game that creates a great vision and makes for an exciting keynote on stage pixelates when it comes to practical questions about how products and services can positively impact an enterprise. I think there are three critical changes occurring in the market in 2016.

Continue reading “Three Key Networking Trends for 2016”

Honey, I Shrunk the Blade Server

Steven Hill

Summary Bullets:

  • Does server vendors’ increasing focus on higher-density, multi-node server platforms actually reflect a growing need for them in the typical enterprise, or is it just a response to the IT industry’s fascination with high-profile, mega data centers?
  • Many of the new ‘multi-node’ servers that are appearing now come across as blade servers ‘lite,’ but it remains to be seen if they offer the same degree of flexibility, component redundancy and economy of scale as traditional blade systems.

I’ve been watching with great interest the new modular server systems being offered by big server vendors such as HP, Dell, and Cisco, as well as a number of third-tier vendors, and I cannot help but be intrigued by the value proposition for these modular systems. Most are based on the extremely popular 2U server form factor and offer space for between two and eight server modules, as well as aggregated networking and a fairly wide gamut of onboard storage options – all features that sound surprisingly similar to existing blade systems, but on a smaller scale. Continue reading “Honey, I Shrunk the Blade Server”
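To put the density pitch in perspective, a quick back-of-the-envelope comparison helps. The sketch below is Python, and the chassis sizes and node counts are illustrative assumptions rather than any specific vendor’s figures:

```python
# Rough rack-density comparison (illustrative numbers, not vendor specs).
RACK_UNITS = 42  # a common full-height rack

form_factors = {
    "1U rack server (1 node)":         {"chassis_u": 1,  "nodes_per_chassis": 1},
    "2U multi-node chassis (4 nodes)": {"chassis_u": 2,  "nodes_per_chassis": 4},
    "10U blade chassis (16 blades)":   {"chassis_u": 10, "nodes_per_chassis": 16},
}

for name, ff in form_factors.items():
    chassis_per_rack = RACK_UNITS // ff["chassis_u"]
    nodes_per_rack = chassis_per_rack * ff["nodes_per_chassis"]
    print(f"{name:34s} -> {nodes_per_rack:3d} nodes per {RACK_UNITS}U rack")
```

On these assumed numbers, the 2U multi-node box roughly doubles the node count of 1U servers in the same rack and even edges past the blade chassis – which is exactly the kind of arithmetic that makes the format appealing, and exactly why the question of whether a typical enterprise actually needs that density matters.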

The Old Guard: Out of the Frying Pan and into the Frying Pan

Steven Hill

Summary Bullets:

  • The decision for HP to split into separate consumer and enterprise companies is long overdue, and done correctly, it will allow both siblings to be more responsive to their respective markets.
  • By shedding low-margin business units, IBM is doing the right thing to allow it to continue as an innovator without bogging itself down with manufacturing considerations.

No other industry moves as fast as IT, and every vendor faces the challenge of evolving to remain current with the changing nature of this business. But the challenges for old-school industry stalwarts like IBM and HP are a little different, in part because they’re still simply perceived as “old-school” (irony intended), plus they have a legacy of products that they must continue to sell and support. Does this mean I give them a pass on everything they do? Not on your life – but I certainly admire the commitment it takes to recognize their own weaknesses and make the tough choices. Continue reading “The Old Guard: Out of the Frying Pan and into the Frying Pan”

The SDN Application I Want to See

Mike Fratto

Summary Bullets:

  • SDN applications are not exciting to enterprises and aren’t generating much interest.
  • SDN applications that drastically improve operations and application performance are a vendor’s ticket to success.

We know that network congestion impacts application performance. The physical network matters because bottlenecks in the physical network will impact overlay networks, regardless of what some folks at VMware want you to believe. We also know that some applications have more stringent demands than others. Real-time media such as IM, voice, and video are affected more by long delay and delay variability (jitter) than by a lack of capacity. Audio and video codecs can adjust for some degradation based on network conditions, but let’s face it: those adjustments are a precursor to unrecoverably poor quality. We also know that other applications, such as email or HTTP, are more tolerant of delay and jitter. Continue reading “The SDN Application I Want to See”
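As a concrete illustration of the jitter point, here is a minimal Python sketch of the interarrival jitter estimator defined in RFC 3550 (the RTP specification), which is the figure most real-time media stacks report; the timestamps in the example are invented for illustration:

```python
# Interarrival jitter per RFC 3550: a running estimate of the variation in
# packet transit time, smoothed with a gain of 1/16.
def rtp_jitter(send_times, recv_times):
    jitter = 0.0
    prev_transit = None
    for sent, received in zip(send_times, recv_times):
        transit = received - sent            # one-way transit time of this packet
        if prev_transit is not None:
            d = abs(transit - prev_transit)  # variation versus the previous packet
            jitter += (d - jitter) / 16.0    # smoothed running estimate
        prev_transit = transit
    return jitter

# Hypothetical voice stream: packets sent every 20 ms but arriving with uneven
# spacing, e.g., because of queuing at a congested hop (times in seconds).
send = [0.000, 0.020, 0.040, 0.060, 0.080]
recv = [0.050, 0.072, 0.089, 0.113, 0.130]
print(f"estimated jitter: {rtp_jitter(send, recv) * 1000:.2f} ms")
```

Note that the estimate depends only on the variation in transit time, not on absolute delay or available capacity, which is why adding bandwidth alone rarely rescues a real-time stream that is suffering from jitter.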

On Tap for 2014: SDN in the Campus LAN

Mike Fratto

Summary Bullets:

  • While SDN in the data center gets most of the attention, there’s going to be significant SDN activity in the campus LAN as well.
  • Campus LAN administrators are already using automation extensively, so making the transition into SDN should be easy.

When SDN is brought up, it’s almost always in the context of the data center, but few are talking about taking SDN to the campus LAN. The data center focus makes sense because there is considerable enterprise spend on data center acquisitions and networking, which has been holding many enterprises back from seeking additional benefits from further virtualization. And there are technologies in the market now, with more coming in 2014, that will address SDN in the data center. Continue reading “On Tap for 2014: SDN in the Campus LAN”

Asking ‘What’s the Point of SDN?’ and the Impact on Adoption

Mike Fratto

Summary Bullets:

  • Many enterprises are struggling to find a reason to invest in SDN when what they have in place works well enough.
  • There will not be a killer app for SDN. Rather, enterprises will adopt SDN as their data center needs evolve over time.

I recently attended and presented at the SDN and OpenFlow World Congress in Bad Homburg, Germany, and while I was there, I had a chance to talk to some enterprise IT attendees who were investigating SDN for their organizations. The common question through most of the discussions was: “Why should I use an SDN?” Luckily, I started my talk with that very question. Continue reading “Asking ‘What’s the Point of SDN?’ and the Impact on Adoption”

Three Bits of Advice from Discussing the Impact of VMware’s NSX at VMworld

Mike Fratto

Summary Bullets:

  • Networking vendors need to embrace homogeneity and provide frictionless integration with virtual environments. Your value add occurs below the hypervisor layer.
  • Networking vendors—any vendor for that matter—should integrate with as many platforms as possible. Remove a barrier to adoption and you’ll reach a wider audience.

Embrace homogeneity. While walking the expo floor at VMworld last week, I spent a lot of time talking to vendors about software-defined networking and what VMware’s NSX platform meant. There’s a surprising amount of confusion about what SDN is and how vendors can make the most of it, but the simplest way I can make sense of NSX is homogeneity. Server virtualization and all the processes that make a data center dynamic, like VM motions, robust storage, and scale-up/in/out architectures, rely on running VMs being oblivious to what is happening underneath them. The virtual world is homogeneous. It doesn’t matter if the CPU is from Intel or AMD. It doesn’t matter if the storage is FC or iSCSI based. Regardless of where a VM runs, the platform it sees is the same. That enables enterprises to swap out an FC SAN for an iSCSI array with nary a hiccup in the VM. Continue reading “Three Bits of Advice from Discussing the Impact of VMware’s NSX at VMworld”
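That homogeneity argument is really an abstraction-layer argument, and a toy sketch makes it concrete. The Python below is purely hypothetical (the class and method names are mine, not VMware’s or any storage vendor’s API): the workload codes against one block-storage interface, and whether the blocks travel over FC or iSCSI is a detail it never sees:

```python
# Toy illustration of homogeneity: the "VM" talks to one storage interface
# and never learns whether an FC SAN or an iSCSI array sits underneath.
from abc import ABC, abstractmethod


class BlockStorage(ABC):
    """What the workload sees: read and write blocks, nothing more."""

    @abstractmethod
    def write_block(self, lba: int, data: bytes) -> None: ...

    @abstractmethod
    def read_block(self, lba: int) -> bytes: ...


class FCSanBackend(BlockStorage):
    """Stand-in for a Fibre Channel SAN; the transport is hidden behind the interface."""

    def __init__(self):
        self._blocks = {}

    def write_block(self, lba, data):
        self._blocks[lba] = data   # imagine SCSI over Fibre Channel here

    def read_block(self, lba):
        return self._blocks.get(lba, b"")


class IscsiBackend(BlockStorage):
    """Stand-in for an iSCSI array; same contract, different transport."""

    def __init__(self):
        self._blocks = {}

    def write_block(self, lba, data):
        self._blocks[lba] = data   # imagine SCSI over TCP/IP here

    def read_block(self, lba):
        return self._blocks.get(lba, b"")


def vm_workload(disk: BlockStorage) -> bytes:
    """The workload only ever sees BlockStorage, so the backend can be swapped freely."""
    disk.write_block(0, b"boot sector")
    return disk.read_block(0)


# Swapping the FC SAN for an iSCSI array changes nothing for the workload.
print(vm_workload(FCSanBackend()))
print(vm_workload(IscsiBackend()))
```

Swap the backend and the workload never notices, which is the property server virtualization already gives compute and storage; the advice here is that networking vendors should aim for the same frictionless substitutability beneath the hypervisor.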