Storage Analytics is becoming more complex – can AI and ML help?

The traditional data center as we know it may be obsolete by 2025

Take a second to absorb this.

In a performance-oriented world, what do you think fuels ‘performance’? No prizes for guessing: data. The world as we know it has been propelled into a hybrid environment made up of thousands of applications and devices, supported by a complex infrastructure: networks, servers, silo-specific tools, switches, and a mix of clouds. These are the elements that drive the new data center, and it is only growing in complexity as the technology landscape evolves.

Applications today are interdependent and prone to impacting one another. With the adoption of technologies such as IoT, mobility, connected devices, and sensors, the new reality of the modern-day data center is that storage analytics is becoming complex. And, as usual, it is technology that will come to our rescue.

The volume of data is going to explode. We have been warned of huge growth, but did we imagine that by 2025 the global datasphere would grow to 163 zettabytes? That is ten times the volume of data generated in 2016.

This growing volume of data, the enterprise's dependency on it, and the need for faster access to it are challenging the present-day data center. As good storage becomes central to good business, optimizing storage with strong storage analytics capabilities is not merely the ‘best thing’ to do. It is the most reasonable, most common-sense thing to do.

Which brings us to the ‘how’ of it. How can we simplify storage analytics and optimize the data center? Perhaps with AI and Machine Learning.

Intelligent Failure Prediction

Do businesses today have the capacity to afford unplanned downtime? I don’t think so.

Global data center outages have increased by 6% since 2017! The Delta Air Lines power outage, for example, cost the company $150 million.

While downtime is a reality we have to live with, ‘unplanned’ downtime is a luxury we cannot afford. Data center operators need quick insights to identify the root cause of a failure, prioritize troubleshooting, and get the data center back up and running before the data is impacted. Yes, you can’t predict a lightning strike like the one that took down the Microsoft data center in San Antonio, or a zero-day malware attack. But with AI and Machine Learning algorithms at play, we can design an optimized data center that stands tall against outages caused by unexpected events such as weather, human error, and unpatched systems.

AI-based deep learning and ML-based recommendation engines can help predict data center failures and troubleshoot them ahead of time, and can drive automated remediation.
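As a toy illustration of the idea (not any vendor's actual method), the sketch below flags a drive as at-risk by comparing its telemetry against labelled historical samples with a hand-rolled nearest-neighbour vote. The metric names and all the numbers are made up; a production system would train a proper model on real SMART and sensor history.

```python
# Toy failure predictor: majority vote among the k most similar drives
# in hypothetical historical telemetry.
from math import dist

# (reallocated_sectors, temperature_c, read_error_rate) -> failed within 30 days?
HISTORY = [
    ((0, 35, 0.01), 0), ((1, 38, 0.02), 0), ((0, 40, 0.01), 0),
    ((2, 36, 0.03), 0), ((120, 55, 0.40), 1), ((200, 60, 0.55), 1),
    ((90, 52, 0.35), 1), ((150, 58, 0.50), 1),
]

def predict_failure(sample, k=3):
    """True if most of the k nearest historical drives went on to fail."""
    nearest = sorted(HISTORY, key=lambda rec: dist(rec[0], sample))[:k]
    return sum(label for _, label in nearest) > k // 2

print(predict_failure((130, 57, 0.45)))  # resembles past failures -> True
print(predict_failure((1, 37, 0.02)))    # resembles healthy drives -> False
```

The payoff is operational, not academic: a drive flagged `True` gets scheduled for proactive replacement during planned maintenance instead of failing during peak hours.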

Intelligent Server Optimization

Physical servers and storage equipment are a data center reality. How can we ensure that the workloads are distributed correctly across this infrastructure?

AI and Machine Learning can come to the rescue here as well. With these technologies, data centers can distribute workloads evenly and efficiently across their servers. They also make data center loads more predictable, thanks to the intelligent analytics built into modern load-balancing tools. These AI capabilities learn from past data to keep load distribution optimized.
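The distribution step itself can be sketched as a greedy "least-loaded server first" rule; in AI-driven balancers the raw cost numbers below would be replaced by learned forecasts of each workload's demand. This is a minimal baseline, not any specific product's algorithm:

```python
import heapq

def distribute(workloads, n_servers):
    """Greedily place each workload (a cost number) on the least-loaded server."""
    heap = [(0, i) for i in range(n_servers)]      # (current load, server id)
    placement = {i: [] for i in range(n_servers)}
    for w in sorted(workloads, reverse=True):      # biggest first balances better
        load, sid = heapq.heappop(heap)            # least-loaded server so far
        placement[sid].append(w)
        heapq.heappush(heap, (load + w, sid))
    return placement

print(distribute([10, 8, 6, 4, 2, 2], 2))  # -> {0: [10, 4, 2], 1: [8, 6, 2]}
```

Both servers end up carrying a total load of 16, which is the point: even this simple rule avoids the hot-spot servers that naive round-robin placement can create.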

And who does not want to track server performance and identify network congestion and infrastructure issues? Or to find faults that slow data processing and raise risk, ahead of time?

With AI and ML to the rescue, these areas can be optimized easily, maximizing server utilization and enabling intelligent distribution of loads and resources.

Intelligent storage and monitoring

Don’t we all want to improve the efficiency of our IT teams and departments? Don’t we want them to become more effective with respect to the quality of tasks they manage? How can we enable this? By ensuring that the right resources are available to them when and where they need them.

Intelligent data storage and data monitoring are critical to facilitating this. By combining existing employee knowledge with real-time data, these systems can potentially “hear” when a machine is close to failing. They can identify patterns that threaten the health of the data and the data center, and they can spot intruders. These technologies also help us store huge volumes of data more intelligently so that it can be used easily; decisions on storage optimization and tiering become smarter with AI. This can transform the data center and the storage management environment.
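That “hearing” a machine start to fail often comes down to anomaly detection on metric streams. A minimal sketch, assuming a simple rolling z-score over hypothetical sensor readings (real monitoring stacks use far richer models):

```python
from statistics import mean, stdev

def anomalies(stream, window=5, threshold=3.0):
    """Indices where a reading strays > threshold std-devs from the rolling mean."""
    flagged = []
    for i in range(window, len(stream)):
        recent = stream[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma and abs(stream[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

temps = [50, 51, 49, 50, 52, 51, 50, 95, 51]  # hypothetical temperature readings
print(anomalies(temps))  # -> [7], the sudden spike
```

The same pattern applies to IOPS, latency, or access-log volumes: anything that suddenly departs from its recent baseline is worth a human's attention before it becomes an outage.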

Intelligent and Optimized DCIM solutions

Data Center Infrastructure Management (DCIM) solutions also need to keep pace with the changing data center. These systems are responsible for factors such as equipment status, fire hazards, ventilation, cooling, and temperature, and the number of such factors keeps multiplying.

AI-based DCIM solutions can make storage more efficient by handing off many mundane tasks to the AI system, leaving the creative and critical aspects of data center management to humans, who handle them more capably. With intelligent DCIM solutions, we also get to optimize the disaster recovery process, and we can stay compliant with a complex regulatory landscape (think HIPAA, PCI DSS, SOC, and other such regulations).

Intelligent security

The cybersecurity landscape is ever-changing and ever-evolving. That data centers have to be prepared for cyber attacks and threats is a given. But is it even possible to stay up to date on the continuous information exchange, or to monitor it proactively, without machine intervention? Throwing human hours at the problem is not just hard; it is unnecessary and unrealistic.

AI and ML help data centers adapt to these changing needs faster and make the necessary adjustments along the chain, helping ensure near-constant availability. Restricting access has never been enough to ensure maximum security, given the growing pool of data users. AI-based systems make the data center more secure without putting extra load on those users.

Intelligent Energy efficiency

And while the conversation revolves mainly around security, performance, and connectivity, we cannot ignore the ‘Big E’ here. ‘E’ stands for environmental impact: something we all should be concerned about, yet rarely talk about.

The “Data Centers and the Environment” report by American server manufacturer Supermicro found that businesses underestimate the importance of energy efficiency. Only 28% of respondents said that environmental issues impact their selection of data center technologies, and only 9% put energy efficiency as a top criterion. AI and ML could play a major role in managing energy usage in the data center: looking at past data to identify likely areas of wastage, predicting the energy the data center will need as it scales to ensure efficient procurement and supply, and automatically spinning equipment up and powering it down based on demand patterns.
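The procurement side of that can be sketched with the simplest possible forecaster: a least-squares trend line through past consumption, extrapolated forward. The kWh figures are invented for illustration; real capacity planning would account for seasonality and demand spikes that a straight line cannot capture.

```python
def energy_forecast(history, steps_ahead):
    """Fit a least-squares line through past kWh readings and extrapolate."""
    n = len(history)
    x_mean = (n - 1) / 2                 # mean of time indices 0..n-1
    y_mean = sum(history) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(history)) / \
            sum((x - x_mean) ** 2 for x in range(n))
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + steps_ahead)

print(energy_forecast([100, 110, 120, 130], 2))  # -> 150.0 (steady +10/period trend)
```

Feed the forecast into procurement and into the power-down scheduler, and energy stops being a fixed overhead and becomes something the data center actively manages.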

The data center clearly has to move to embrace optimization. And given the rising complexity, the new age data center will have to be powered by AI and ML to achieve that.

Parag Kulkarni

Chief Operating Officer at Calsoft Inc.
Parag Kulkarni heads the engineering function at Calsoft. Parag is an industry veteran, bringing more than 20 years of experience in designing and developing complex technology products. As Vice President, Engineering, Parag drives the technology roadmap at Calsoft to improve the overall quality of delivery processes. Before joining Calsoft, Parag held key positions at Veritas (formerly Symantec) where he conceived the Database Edition Product for Windows and led its delivery through two product lines - Database Edition for Microsoft (R) Exchange Server and Database Edition for Microsoft (R) SQL Server. Parag also served as the Site Manager for the Backup Exec team in Mountain View, California. Prior to his role at Veritas, he was a key contributor to Informix Corporation’s line of database products, now acquired by IBM. Parag holds a master’s degree in computer science from IIT, Roorkee and a bachelor’s degree in industrial management from University of Indore, India.
