This reliability is facilitated by predictability and by availability and redundancy techniques. Traditional redundancy techniques such as RAID were cumbersome, complex and inefficient, and are now being displaced by new software-defined availability techniques (policy-driven management, clustering and RAID-less storage systems, for example) that have transformed today’s enterprise data centre infrastructures. These new techniques mean that data centre resources can be delivered to the business faster and more easily, with better availability than ever before.
“Predictability” and “standardisation” go hand in hand. Yet, despite the obvious benefits, many IT organisations lack standardisation: they are so preoccupied with the daily routine and operations that they fail to approach standardisation from a strategic point of view. To give the business the predictability needed to guarantee the required availability (and therefore reliability), the solutions delivered to the business must be standardised.
Automation and standardisation
We notice that the business often asks IT managers to “just” have a quick look at automation, but just looking is not enough. Automation is a means to standardise, which means that the business’s issues must also be supported with standard solutions: in other words, the business needs to sit down with its IT department to work out how and where IT operations can be standardised. It also means that issues falling outside the scope of these standard solutions cannot be catered to.
In short, good coordination is necessary to meet mutual expectations.
In addition to coordination, the IT manager must also be given time to focus on how to realise solutions, which requires familiarisation with new techniques, programming languages, tools and working methods. This calls for an organisational change in which the IT manager is taken out of the day-to-day routine to focus on this new task (as a cloud engineer). This may seem illogical at first, but the move should reduce the number of daily issues (support cases), because every solution offered is standardised and most questions have already been answered. It is an investment in the future: in realising a predictable IT environment.
At first, the IT administrator will mainly be confronted with automation tasks: supplying small-scale scripts that perform simple sub-tasks quickly (such as rolling out virtual machines, setting up standard servers, creating volumes on the storage system (SAN), or creating networks and firewall rules). Other IT managers are encouraged to reuse these scripts, which reduces their daily workload and achieves uniformity within the IT department.
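Such a reusable script can be very small. The sketch below shows the idea: a server role is mapped to one standardised build specification, so every administrator who reuses the script rolls out identical servers. The roles, sizes and naming convention here are illustrative assumptions, not a real product API.

```python
# Minimal sketch of a reusable provisioning script: given a server role,
# it returns the standardised build specification that would be fed to the
# virtualisation or SAN tooling. Roles and sizes are invented for illustration.

STANDARD_BUILDS = {
    "web": {"vcpus": 2, "ram_gb": 4, "disk_gb": 40},
    "db":  {"vcpus": 4, "ram_gb": 16, "disk_gb": 200},
}

def build_spec(role: str, index: int) -> dict:
    """Return a complete, standardised VM specification for one server."""
    if role not in STANDARD_BUILDS:
        raise ValueError(f"No standard build defined for role '{role}'")
    spec = dict(STANDARD_BUILDS[role])        # copy the standard build
    spec["name"] = f"{role}-{index:03d}"      # uniform naming convention
    return spec
```

Because the build catalogue lives in one place, changing a standard (say, more RAM for database servers) changes every future roll-out at once.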
The ultimate goal of these sub-tasks is to minimise the amount of human interaction. This means that a lot of thought must go into the scripts and tasks so that only a minimum of input data is needed to complete a task (by automating IP addressing, naming, and so on). An IPAM solution is therefore a must in fully automated environments.
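To illustrate how little input is needed once addressing and naming are derived rather than typed in, here is a minimal IPAM-style sketch: the operator supplies only a role and an index, and the subnet and hostname follow from the standard. The subnets and naming scheme are assumptions chosen for the example.

```python
import ipaddress

# Sketch of IPAM-style derivation: from two inputs (role, index) the script
# derives both the hostname and the IP address. Subnets and the naming
# convention are illustrative assumptions, not real allocations.

SUBNETS = {
    "web": ipaddress.ip_network("10.0.10.0/24"),
    "db":  ipaddress.ip_network("10.0.20.0/24"),
}

def allocate(role: str, index: int):
    """Return (hostname, ip) derived entirely from role and index."""
    subnet = SUBNETS[role]
    ip = subnet.network_address + index                  # index 1 -> .1
    hostname = f"{role}-{index:03d}.example.internal"    # assumed domain
    return hostname, str(ip)
```

With every other parameter standardised away, a typo in an IP address or hostname simply cannot occur.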
The scripts are extensively tested in advance, further reducing the chance of human error (the source of most unexpected downtime moments) and increasing reliability.
One question remains: why still orchestrate? The answer: to further reduce the number of human interactions, and thereby further increase reliability. Orchestration places sub-tasks in chronological order (in a workflow), with which complete standard solutions are realised.
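The principle can be sketched in a few lines: individual sub-task scripts are chained in chronological order, each building on the result of the previous one, so that a complete standard solution runs from a single invocation. The task functions below are simplified stand-ins for real provisioning scripts.

```python
# Sketch of orchestration: sub-tasks are placed in chronological order in a
# workflow and executed one after the other, passing a shared context along.
# The tasks and their outputs are invented for illustration.

def create_volume(ctx):
    ctx["volume"] = f"vol-{ctx['name']}"           # SAN volume for the server
    return ctx

def create_network(ctx):
    ctx["network"] = "10.0.10.0/24"                # assumed standard subnet
    return ctx

def deploy_vm(ctx):
    ctx["vm"] = f"{ctx['name']} on {ctx['network']}"
    return ctx

WORKFLOW = [create_volume, create_network, deploy_vm]  # chronological order

def run_workflow(name: str) -> dict:
    """Execute every sub-task in order; one call delivers the whole solution."""
    ctx = {"name": name}
    for task in WORKFLOW:
        ctx = task(ctx)
    return ctx
```

A single call such as `run_workflow("web-001")` then replaces several manual, error-prone interactions.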
These workflows hide the complexity of today’s technological solutions so that anyone can use them. It is therefore possible to have these workflows executed from an IT catalogue, via a self-service portal, by application managers and/or developers themselves, without the intervention of an IT administrator (just as they are accustomed to with an app store on their mobile phone).
Cloud Management Platform (CMP)
The above steps are a stepping stone to a private cloud environment and can be realised by a Cloud Management Platform (CMP) tool in combination with a Software Defined Data Center (SDDC) infrastructure:
- The CMP tool contains all the means to realise the above points (in addition to the standardisation itself) quickly and easily.
- The SDDC ensures that all physical and virtual components can be controlled by software (from the CMP).
A CMP tool also provides insight into resource utilisation and the associated costs, through showback and chargeback capabilities as well as capacity and performance management.
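As a simplified illustration of showback, the sketch below multiplies metered consumption by unit rates to produce a cost overview per department, which is essentially what a CMP reports. The rates and resource types are invented for the example.

```python
# Sketch of a showback/chargeback calculation: metered usage per resource type
# is multiplied by a unit rate and summed. Rates are illustrative assumptions.

RATES = {
    "vcpu_hours":   0.02,   # cost per vCPU-hour
    "gb_ram_hours": 0.005,  # cost per GB-RAM-hour
    "gb_storage":   0.10,   # cost per GB of provisioned storage per month
}

def monthly_cost(usage: dict) -> float:
    """Sum metered usage * unit rate over all billed resource types."""
    return round(sum(usage[k] * RATES[k] for k in RATES), 2)
```

Exposing such figures per department makes the cost of every self-service request visible before it is placed.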
Often the CMP tool is integrated with other configuration management tools (for example, Microsoft System Center Configuration Manager, Puppet, Chef or Ansible), with which the business’s entire environment can be operated.
IT solutions are becoming increasingly complex, and integration with the business often fails to take this into account: functional requirements are not properly translated into technical solutions, and/or the technical details have not been properly mapped out (due to a lack of preparation time). Standardisation narrows the choice of technical solutions, which are then implemented with a prescribed set of configuration settings: complexity is reduced to the point where the technical solutions once again meet the functional requirements of the business.
Conscia offers a complete private-cloud solution based on Cisco (ACI, UCS and UCS Director) and NetApp SolidFire, which meets today’s business requirements. We can assist you with all your automation and orchestration needs, so that we can help your business on its journey into the cloud.