What | Why | How – Software Defined Data Centre

Sunday, June 16, 2013

Within the infrastructure space a growing number of companies are starting to call themselves “Software Defined”, but what does this really mean, why should we care, and how can organisations start to make use of it?

The “What”

If we think back to where the “cloud” era started, it began with the standardisation and virtualisation of servers; this gave us automation, which led to flexibility and efficiency. The Software Defined Data Centre (SDDC) effectively extends these same principles to the entire data centre… the more we can virtualise, the more we can automate.

In a nutshell, “Software Defined” concepts really boil down to two things: virtualisation of the underlying components, and access through a documented API to provision, configure and manage those low-level components.
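To make that a little more concrete, here is a minimal sketch of what provisioning through such an API might look like. The endpoint, payload fields and token are invented for illustration; they are not any particular vendor’s API:

```python
# Illustrative only: a hypothetical REST endpoint showing the kind of
# documented API a "Software Defined" layer exposes. The URL, payload
# fields and token below are invented for this example.
import requests

API = "https://sddc.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <token>"}

# Provision a volume entirely through software: no array console,
# no per-device tooling, just a call the platform fulfils from
# whichever physical resources it happens to manage.
volume_request = {
    "name": "app01-data",
    "size_gb": 500,
    "service_level": "gold",        # performance/availability tier
    "protection": "snapshot-daily", # desired protection policy
}

response = requests.post(f"{API}/volumes", json=volume_request, headers=HEADERS)
response.raise_for_status()
print("Provisioned volume:", response.json()["id"])
```

The point isn’t the specific call; it’s that anything you can do through an API you can script, automate and wire into higher-level tooling.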

The “Why”

Today we generally have to build physical infrastructure to meet peak requirements, and we all know that this adds cost (and sometimes complexity) because each application’s requirements are different. So how can the SDDC help with this? By providing pools of storage, pools of compute, pools of network, plus built-in automation and security, so that you can run each of your applications from the pool.

The vision is that for each application you will be able to stipulate its service levels, policy requirements and cost restrictions, and the SDDC will automatically work out exactly what you need to meet those requirements and build it for you! A rough sketch of that idea follows.
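As a minimal sketch of that policy-driven model, assume each application declares its requirements and the platform picks from shared pools. All the names, tiers and numbers below are illustrative, not a real SDDC product interface:

```python
# Sketch of policy-driven placement: the platform, not the administrator,
# decides which pool an application runs from. Entirely hypothetical model.
from dataclasses import dataclass

@dataclass
class AppPolicy:
    name: str
    vcpus: int
    memory_gb: int
    storage_gb: int
    service_level: str      # e.g. "gold" = low latency, replicated
    max_monthly_cost: float

@dataclass
class Pool:
    tier: str
    free_vcpus: int
    free_memory_gb: int
    free_storage_gb: int
    cost_per_month: float

def place(policy: AppPolicy, pools: list[Pool]) -> Pool:
    """Pick the cheapest pool that meets the service level, capacity
    and cost constraints declared in the application's policy."""
    candidates = [
        p for p in pools
        if p.tier == policy.service_level
        and p.free_vcpus >= policy.vcpus
        and p.free_memory_gb >= policy.memory_gb
        and p.free_storage_gb >= policy.storage_gb
        and p.cost_per_month <= policy.max_monthly_cost
    ]
    if not candidates:
        raise RuntimeError(f"No pool can satisfy the policy for {policy.name}")
    return min(candidates, key=lambda p: p.cost_per_month)

pools = [
    Pool("gold", 64, 512, 20_000, 900.0),
    Pool("gold", 128, 1024, 50_000, 750.0),
    Pool("silver", 256, 2048, 100_000, 300.0),
]
app = AppPolicy("billing-db", vcpus=16, memory_gb=128, storage_gb=2_000,
                service_level="gold", max_monthly_cost=1_000.0)
print(place(app, pools))
```

A real SDDC would of course do far more (placement across sites, security, rebalancing), but the principle is the same: you describe the outcome, the software works out the infrastructure.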

When you aren’t using that capacity (even for short periods of time) it goes back into the shared pool. You will be able to run all your application environments flexibly – isn’t this the way a true cloud works?

The “How”

So how can organisations transform to make use of the SDDC? As I currently work for EMC, I’m going to talk about the role of storage… how is storage going to work within the SDDC? EMC has spent the last two years developing a software-defined storage architecture known as ViPR (previously known as Project Bourne).

ViPR works by abstracting the intelligence away from the underlying physical infrastructure components, and it abstracts both the “Control Plane” and the “Data Plane”. When we talk about the “Control” plane we mean how the storage is actually managed – think of it as creating pools, LUNs and file systems, managing snapshots, creating replication policies; really anything to do with controlling your storage environment. When we talk about the “Data” plane we mean how storage is consumed – think of it as the connectivity between the storage and the hosts, such as FC, NFS, CIFS or next-generation services like Object on File.
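As a rough illustration of that split (the class and method names below are mine for the purpose of explanation, not ViPR’s actual API), the control plane owns the management verbs while the data plane owns the access protocols:

```python
# Conceptual sketch of the control-plane / data-plane separation.
# Names are illustrative only; this is not ViPR's API.

class ControlPlane:
    """How storage is managed: configuration, protection, policy."""
    def create_pool(self, name: str, capacity_gb: int): ...
    def create_lun(self, pool: str, size_gb: int): ...
    def create_snapshot(self, lun: str): ...
    def set_replication_policy(self, lun: str, target_site: str, rpo_minutes: int): ...

class DataPlane:
    """How storage is consumed: the paths hosts use to reach their data."""
    def export_fc(self, lun: str, initiator_wwn: str): ...          # block over Fibre Channel
    def export_nfs(self, filesystem: str, client_subnet: str): ...  # file over NFS
    def export_cifs(self, filesystem: str, share_name: str): ...    # file over CIFS/SMB
    def expose_object_on_file(self, filesystem: str, bucket: str): ...  # object API on file storage
```

Abstracting the two planes separately is what lets a platform manage many different arrays through one set of control operations while still serving data over whichever protocol each host needs.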

ViPR was built to provision storage for the cloud – if you’re interested in hearing more about ViPR, see my next post 🙂
