Many companies face the challenge of adapting their storage infrastructure to ever-growing volumes of data. The question of cost is usually at the top of the list.
There may be advantages to sourcing all components from a single vendor. In principle, however, it is advisable to avoid an exclusive commitment to one manufacturer – a so-called vendor lock-in. In our previous blog article “Vendor independence and investment protection”, we explained how you can retain the greatest possible flexibility in designing your storage infrastructure by using standardized components.
At the same time, the criteria for a sustainable storage strategy are demanding:
- Scalability:
The storage strategy must be able to grow flexibly and easily with data volumes.
- Smooth workflows:
Users in business departments are primarily concerned with their own workflows: they expect fast and seamless access to data. Regardless of the physical storage location within the infrastructure, the user interface must therefore provide this continuity.
- Compliant archiving:
Companies need to archive much of their data in a legally compliant manner. This means, for example, that files must be stored for a specified period of time in such a way that they cannot be modified or deleted. In our article “Storage solutions for legally compliant archiving”, we have already addressed the question of how you can meet this challenge with smart planning of the storage architecture.
Information Lifecycle Management: Hot, Warm and Cold Data
The concept of Information Lifecycle Management assumes that data goes through a lifecycle of different phases – depending on the frequency with which it is used.
- Creation and hot phase:
Immediately after creation, data is current and used in day-to-day operations. In this phase, data needs to be accessible quickly.
- Warm and cold phase:
Over time, the currency of the data and the frequency of access decrease: the data first becomes warm and then cold.
The “hot phase” of most data that accumulates in companies is quite short. A large part of the available storage space is usually consumed by cold data – even though it is hardly used anymore, or not used at all.
In simplified terms, data can be divided into active and inactive data. It is assumed that in most companies up to 80% of all data is inactive. An analysis of the data inventory and its storage locations often concludes that the fast but cost-intensive primary storage is overloaded with inactive data – in other words, with cold data that does not require particularly high-performance storage.
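Such an inventory analysis can be approximated with a simple script. The sketch below – a minimal illustration, not a product feature – walks a directory tree and splits the total volume into active and inactive bytes based on each file's last-access time; the 90-day threshold is an assumption chosen for illustration.

```python
import os
import time

# Assumption for illustration: files untouched for 90 days count as inactive.
INACTIVE_AFTER_DAYS = 90

def classify_inventory(root, now=None, inactive_after_days=INACTIVE_AFTER_DAYS):
    """Walk a directory tree and split the total bytes into active vs.
    inactive data, based on each file's last-access timestamp."""
    now = now if now is not None else time.time()
    cutoff = now - inactive_after_days * 86400
    active_bytes, inactive_bytes = 0, 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            st = os.stat(os.path.join(dirpath, name))
            if st.st_atime < cutoff:
                inactive_bytes += st.st_size
            else:
                active_bytes += st.st_size
    return active_bytes, inactive_bytes
```

Note that access-time tracking depends on filesystem mount options (e.g. `relatime`), so real analysis tools typically combine several metadata signals.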
So it’s time to optimize the storage infrastructure and data and storage management. There is a lot of potential here for efficiency increases and cost savings.
Hierarchical Storage Management: Infrastructure Optimization
A multi-tier storage architecture can be used to implement the concept of Information Lifecycle Management. This architecture consists of several tiers, each of which can use storage technologies with different properties.
- High-performance but cost-intensive primary storage is used for active data.
- These storage systems are combined with cheaper but slower secondary storage, which is intended for cold data.
Data and storage management software automatically moves data to the storage tier that matches its frequency of access and use. This movement between storage tiers is called tiering.
- Data that has not been used for a certain period of time can be moved to a lower storage tier.
- Primary storage is relieved and reserved for data that actually requires fast access.
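The tiering policy described above can be sketched in a few lines. This is a simplified illustration of the principle, not how any particular HSM product works: files in the primary tier that have not been accessed for a configurable number of days are moved to the secondary tier, preserving their relative paths. The 30-day default is an assumed example value.

```python
import os
import shutil
import time

def tier_down(primary_dir, secondary_dir, max_idle_days=30, now=None):
    """Move files not accessed for max_idle_days from the primary tier
    to the secondary tier, preserving relative paths."""
    now = now if now is not None else time.time()
    cutoff = now - max_idle_days * 86400
    moved = []
    for dirpath, _dirnames, filenames in os.walk(primary_dir):
        for name in filenames:
            src = os.path.join(dirpath, name)
            if os.stat(src).st_atime >= cutoff:
                continue  # still warm enough to stay on the primary tier
            rel = os.path.relpath(src, primary_dir)
            dst = os.path.join(secondary_dir, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.move(src, dst)
            moved.append(rel)
    return moved
```

Real HSM software additionally leaves a stub or link in place of the moved file, so that users see an unchanged namespace and access remains seamless.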
When selecting components and storage services, flexibility is crucial: you should be able to freely choose the storage system for each application – without vendor lock-in and the restrictions that may come with it. This is where the keyword standardization comes into play.
- By using standardized interfaces, protocols, formats, etc., you can combine storage systems according to your individual needs.
- Existing systems and existing investments can continue to be used.
- As data volumes increase, capacity can be flexibly adapted by adding further components or service extensions.
Within the framework of such a Hierarchical Storage Management (HSM), legally compliant archiving of important company data can be easily implemented. The archive storage tier is used for this purpose. In principle, archiving proceeds as follows:
- The software automatically moves the data to be archived from the primary storage to the archive storage tier based on user-defined policies. There, the software ensures compliance with legal archiving requirements.
- The “write once, read many” principle, WORM protection, prevents changes to archived files. If an archived file is accessed and changed, a new file version is created. The original file remains unchanged.
- The software enforces the retention period by means of retention management. For example, it can prevent a file from being deleted before a specified period has expired.
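The WORM and retention behavior outlined in these steps can be illustrated with a minimal in-memory sketch. Real archive software enforces this at the storage layer; the class and method names below are purely illustrative assumptions. A write never overwrites an existing file but appends a new version, and deletion is refused until the retention period of the newest version has expired.

```python
import time

class ArchiveStore:
    """Minimal in-memory sketch of WORM semantics with retention management."""

    def __init__(self, retention_seconds):
        self.retention_seconds = retention_seconds
        self._versions = {}  # name -> list of (timestamp, content)

    def write(self, name, content, now=None):
        # WORM: writing an existing name appends a new version;
        # earlier versions remain unchanged.
        now = now if now is not None else time.time()
        self._versions.setdefault(name, []).append((now, content))

    def read(self, name, version=-1):
        # Default: newest version; older versions stay readable.
        return self._versions[name][version][1]

    def delete(self, name, now=None):
        # Retention management: refuse deletion until the newest
        # version's retention period has expired.
        now = now if now is not None else time.time()
        newest_ts = self._versions[name][-1][0]
        if now - newest_ts < self.retention_seconds:
            raise PermissionError("retention period has not expired")
        del self._versions[name]
```

The key design point is that modification is modeled as versioning rather than mutation – exactly the property that makes an archive tamper-evident.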
Because retention periods can span several decades, an HSM system must also account for the migration of storage systems. Ideally, the software supports migration in the background, so that no business interruption is necessary.
In our blog article “Storage solutions for legally compliant archiving” you will find detailed information on how to optimize your storage infrastructure and at the same time set the course for legally compliant archiving.
Store as much data as possible at the lowest possible cost – and archive it in a legally compliant manner at the same time: you can meet the challenges of data and storage management with a hierarchical storage architecture within which data is moved according to the principle of Information Lifecycle Management. If you rely on standardized components, you remain flexible in your choice of systems and in future extensions. Here in the blog, you can also learn how to increase protection against cyberattacks and ransomware in the course of such an optimization.