Access the podcast to hear AIOps experts discuss how Hitachi VSP One delivers high-performance file services that accelerate IT modernization by improving fleet management, operational visibility and API-driven automation. Learn how Hitachi Vantara Federal can help your agency streamline firmware updates, licensing and system setup to enhance application performance and automate critical data workloads.
[Anthony Jimenez]
Welcome back to CarahCast, the podcast from Carahsoft, the trusted government IT solutions provider. Subscribe to get the latest technology updates in the public sector. I'm Anthony Jimenez, your host from the Carahsoft production team.
On behalf of Hitachi Vantara Federal, we would like to welcome you to today's podcast, focused on how Hitachi VSP One delivers high-performance file services that accelerate IT modernization by improving fleet management, operational visibility, and API-driven automation. Learn how Hitachi Vantara Federal can help your agency streamline firmware updates, licensing, and system setup to enhance application performance and automate critical workloads. Guy Gawrych, Solutions Consultant, and Todd Hanson, Senior Solutions Consultant, will discuss how VSP One offers high-performance file services, unified fleet management, as well as automation and AI.
[Guy Gawrych]
Good morning, good afternoon. Hello, everybody. My name is Guy Gawrych.
I'm a Solutions Consultant with Hitachi Vantara Federal. I will be doing the initial part of the discussion and briefing, giving a high-level overview of the new VSP One data platform, and then I'll turn it over to my much more technical counterpart, Todd Hanson.
[Todd Hanson]
Hey, guys. I'm Todd Hanson. I'm a technical expert in the Hitachi Center of Excellence.
That's our demo and proof of concept group where we do live demos for our customers.
[Guy Gawrych]
Let me give a little bit of history, who we are, how we got here. Hitachi is probably one of the few companies that you'll actually talk to that's been around for over 100 years. We are a very longstanding engineering company.
Hitachi Limited is actually the number one submitter of patents in big data analysis worldwide. Huge conglomerate, several hundred thousand employees. Inside of that is Hitachi Vantara, and that is the information technology arm of Hitachi.
HV, as we call it, basically provides multi-cloud infrastructure, data engineering, and data services, and has roughly 10,000 people worldwide. Hitachi Vantara Federal is the branch that I'm with; it's a wholly-owned subsidiary of Hitachi Vantara, and we're dedicated to supporting secure federal government agencies. We are fully FOCI mitigated, which means Japan has no influence over what we do, and this allows us to hold a top-secret facilities clearance.
We also have our own dedicated clearance support facility that's fully U.S.-based and staffed, so all federal customers will always interact with a U.S. citizen. And then lastly, everything that Hitachi Vantara Federal provides is assembled and configured in the United States. You can see that our customer base is wide and varied.
We have agencies across civilian, Department of War, and legislative branches from top to bottom, so you name an agency, and chances are we're on the floor. The whole point of the conversation today is Virtual Storage Platform One and the One strategy. We recognized what customers were looking for, and we developed this One strategy.
Quite simply, it's one data infrastructure platform across block, file, object, and mainframe, and one data management capability for everything. It's pretty much that simple. Some of the challenges, though: we see data at the center of just about every modern innovation, and organizations are becoming much more data-driven as they gear up for what they call digital transformation.
So placing data at the center of your agency and your organization improves your decision-making capabilities and overall operational efficiency, delivers greater insights, and accelerates that digital transformation. But we see storing the data as a very big challenge, especially with the explosion of capacity and the reduction of budgets. We're dealing with that right now with the federal government, as everybody's well aware.
Then there's poor data quality, and the question of how to manage the configuration and consumption of all these solutions. We're very much in a world of hybrid cloud, which was seen as a savior for the complex data center by integrating private and public clouds, but we know that reality is pretty much different than the theory. Data is highly distributed across multiple different data types and deployments, from block to file to object, and we've been building separate hybrid cloud solutions based on those capabilities.
These stretch into different deployment types, from appliances to software-defined, and now what we're seeing is many siloed hybrid cloud solutions across different configuration platforms, each with its own management interfaces and challenges. And so now we've created this multi-cloud complexity. Also, customers and agencies sometimes don't really have a good understanding of what data is actually being stored in the first place, and whether it actually has value to the business and the agency.
Do we really know what it is? Is it a liability? Are we keeping it for compliance reasons?
Is it highly accessible, or is it just there, and we continue to store it because that's the way it's always been? And then there's this $3.1 trillion figure, which comes from a study IBM did a little while back; just to give you an idea, they determined that roughly $3.1 trillion is wasted each year in the U.S. alone storing very poor quality data that has little to no value. So how do we address this?
Quite simply, we decided to put data first. So for the most part, customer environments, they're pretty much a wide mix of different appliances and products, and sometimes they're loosely connected, but they're mostly siloed. And you can walk into any data center and you'll see rows and rows of different hardware designed for any number of capabilities.
You'll see most hybrid cloud deployments are also mixed, siloed solutions, and applications have trouble transiting between these various silos in order to actually fully access the data and consume it. I would say software-defined was an attempt to break the dependency on the underlying proprietary hardware and allow applications some ability to consume these resources without having to worry about the underlying infrastructure, but it also has its limitations. What customers really want, though, is a single integrated solution that's going to give their applications the right data in the right place, regardless of where it's located and regardless of the access protocol.
So this is the first step in what we call total data freedom, and we're focusing on the data first. A bit over a year ago, we set out to make our vision a reality, and we delivered the initial rollout of the solution, and we're going to continue to innovate and evolve as customer demands and data requirements change; we view VSP One as a journey rather than a finite capability. When we talk to customers, we traditionally show up with siloed products, talk about data type and protocol, and each one of these is a separate solution with a wide and varied set of subcomponent capabilities.
They each have their own management interface, and if you look at the left side, there are different appliances for different data types, different protocols, and it's not just Hitachi. It's the entire storage industry. We pretty much show up and rattle off model types, IOPS, throughput, all those kinds of things, but customers really don't care about that stuff as much as they care about what they can actually do with their data.
How are they going to get better time to insights? How can they get faster and more accurate results? How does this help me meet my mission objectives?
Does this help me with my command and control decision chain? Everything in between. So we took the storage virtualization operating system, the SVOS capability from our virtual storage platform, the VSP block family, and we applied the many aspects of performance, the reliability, the 100% availability that Hitachi is known for, and the manageability, and we put it across the entire product portfolio.
Now we can create a single data plane that's accessible by applications across all platforms. Moving to this single operating system has also allowed us to do some things like extend our effective capacity guarantee to data sets beyond block, and businesses can now take advantage of the modern storage assurance. So you can increase the lifecycle of your total storage investment without the traditional forklift upgrade or need for downtime.
And our goal here is to significantly improve and simplify the end user experience, but maintain all the features and functionalities across the entire Hitachi portfolio. The virtual storage platform is comprised of two main components. We have the data plane and the control plane.
So if you look now at the bottom working your way up, the data plane is actually where the physical bits and bytes of data are stored. This can be a physical appliance on-prem. It can be software-defined in the cloud or on-prem.
It can be a hybrid cloud storage configuration, and it encompasses all the storage protocols, block, file, object, and mainframe, which is a big differentiator here. The main advantage here is that customers' application data is now accessible across all these different data silos. If you move up the stack a little bit, the control plane is where all the data is actually managed and controlled, regardless of the underlying infrastructure, access protocol, or location.
So we're actually moving, sending, and replicating the data here. If you go up to the next level, right on top of that are VSP 360 and EverFlex. VSP 360 provides a common management interface across any and all of the VSP One data platforms, as well as any virtualized external storage that is under our control.
So this allows customers to do fleet management operations, streamlined resource orchestration, and IT integration all based on standard APIs. EverFlex is a single interface for infrastructure management, orchestration, and automation that includes third-party, meaning non-Hitachi, storage; it is designed to be agnostic, and it works across heterogeneous infrastructure and the cloud. This layer sits between customer applications and the control plane and provides a single pane of glass for management.
A little bit deeper about 360, this is our unified data management platform that supports the entire VSP1 product portfolio, as well as all the virtual external storage I mentioned that's under our control. It has built-in tools that provide automation, lifecycle management, and governance, and it also supports role-based access control, licensing, retention, classification, everything integrated into existing workflows. It's also designed to plug into a broader automation and IT service management stack.
So if you're using Ansible or Terraform or ServiceNow, you can orchestrate your infrastructure as code and manage everything through a single, consistent control experience in a single pane. Using Hitachi Remote Ops and metering, you can get the continuous monitoring and the insight that's going to help keep your storage infrastructure optimized and running smoothly. This product is actually designed to give total visibility and complete operational alignment across the entire VSP One product portfolio from one control platform.
EverFlex is interesting. It's designed to actually be our cloud management portal, and it works with more than just storage infrastructure. I mentioned that it actually works across heterogeneous infrastructure, so we can manage non-Hitachi storage as well as our own, and if you look, most customers and agencies actually have multiple hardware vendors on their floor, each with their own management interfaces, so you can imagine being able to consolidate all that capability into a single interface rather than using each of the proprietary offerings from those various vendors. Integration is supported with thousands of application vendors, and we can manage these wide and varied software deployments in addition to the storage.
It is technology agnostic, as I mentioned, and we can do complete automation with any of the vendors that you see listed up on the top right, Ansible, Chef, Puppet, or whatever. This is a pretty major shift from Hitachi's traditional focus on storage; we wanted to bring capabilities beyond storage and align with modern strategies, where agencies are using cloud and as-a-service type solutions, so customers get this cloud-like single unified point of management. They also get the automation and orchestration, regardless of what they're managing and regardless of where it's at, all from a single service catalog.
It's a very, very powerful tool, and it is truly agnostic, which means it's not threatening to your underlying relationships with any hardware vendor or capability. It is truly designed to be a self-service, consumable, pay-per-use model, all cloud-based; you start with a smaller contract, and then you can expand and grow from there.
[Todd Hanson]
We've made quite a big push to REST API-enable features in our storage arrays, in our VSP One File, in our VSP 360, all the different components, so that we can interface with a lot of these other tools, especially Ansible. We can launch into Ansible Automation Platform and run commands to our storage, do basic provisioning, set up replication, all of that through REST API calls.
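As a rough sketch of the kind of REST-driven provisioning Todd is describing, here is what a call like that could look like from Python. The base URL, path, payload fields, and credentials below are hypothetical placeholders, not the documented VSP One REST schema; consult the array's REST API reference for the real endpoints.

```python
import os

import requests

# Hypothetical values; real VSP One REST paths, ports, and payload fields differ.
BASE = "https://storage.example.gov/api/v1"
AUTH = (os.environ["STORAGE_USER"], os.environ["STORAGE_PASS"])


def create_volume(pool_id: int, capacity_gib: int, label: str) -> dict:
    """Provision a block volume via a REST call (illustrative payload only)."""
    payload = {"poolId": pool_id, "capacityGiB": capacity_gib, "label": label}
    resp = requests.post(f"{BASE}/volumes", json=payload, auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    vol = create_volume(pool_id=0, capacity_gib=512, label="ansible-demo-01")
    print("created volume:", vol)
```

The same pattern extends to replication setup or any other operation the API exposes, which is what lets tools like Ansible drive the storage without a proprietary client.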
[Guy Gawrych]
And that's key because we're very much an API-driven infrastructure, and having that open connectivity provides a lot of the additional functionality that we're actually talking about and rolling out to the marketplace. So, thank you, Todd.
[Todd Hanson]
And we've even gone further to get into AI-driven solutions, too. So, we built an MCP server that connects to our storage, and you can type into ChatGPT or Claude and tell the storage, hey, this is what I want, and it'll go out there and do it.
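As a hedged sketch of what an MCP integration like the one Todd mentions could look like, here is a minimal tool server built with the MCP Python SDK's FastMCP helper. The storage endpoint, payload, and credentials are hypothetical placeholders, and this is not Hitachi's published MCP server, just an illustration of the pattern an assistant such as ChatGPT or Claude would call into.

```python
import os

import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("storage-demo")


@mcp.tool()
def create_volume(label: str, capacity_gib: int) -> str:
    """Create a block volume of the requested size (hypothetical REST endpoint)."""
    resp = requests.post(
        "https://storage.example.gov/api/v1/volumes",  # placeholder endpoint
        json={"label": label, "capacityGiB": capacity_gib},
        auth=(os.environ["STORAGE_USER"], os.environ["STORAGE_PASS"]),
        timeout=30,
    )
    resp.raise_for_status()
    return f"created volume {resp.json().get('id', label)}"


if __name__ == "__main__":
    mcp.run()  # the assistant discovers and invokes the tool over MCP
```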
[Guy Gawrych]
And there's some really amazing capabilities that we are doing with AI and heavy analytics, and there's some fantastic discussions that we can have. So, at the heart of everything that we're doing is this unbreakable core, our modern storage assurance. This is the ability to do a data-in-place upgrade to the latest product line without any need for data migration.
You update controllers, your data remains in place. We've offered the 100% data availability guarantee for over 20 years, and now we're extending that same guarantee to the entire VSP One portfolio. So, no data loss, no fine print, no exceptions.
Effective capacity guarantee is something I find pretty interesting. The industry is definitely moving in this direction, and we've offered some version of it in the past, but we're actually offering a no-questions-asked four-to-one data reduction. Now, we say no documents, no data type verification.
I will issue a public service announcement here. There are data sets that are just not compressible, right? I mean, you have video files, audio, images, already pre-compressed data.
So, realistically, we're not going to get data reduction on that. Nobody is. But we're saying that if you have data types, like OLTP, that should get data reduction, we're not going to question it.
We're going to offer you four-to-one data reduction, and then if we don't meet that, we'll actually close the gap to give you the capacity that you were promised. So, even though it says no questions asked, there should be realistic, managed expectations so that nobody has uncomfortable conversations later down the road. The latest thing that we've introduced is the SafeSnap guarantee.
So, this is focused on our cyber resiliency and recoverability. So, in the event that data recovery is required, we're going to ensure that the recoverability of your data, as well as its immutability, is guaranteed as long as you're using our snapshot and data protection capability. So, this is at the heart of our infrastructure and VSP portfolio going forward.
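To put rough numbers on the "close the gap" promise described a moment ago, here is a back-of-the-envelope sketch. The raw capacity and the observed reduction ratio are hypothetical; the only assumption carried over from the discussion is the four-to-one guarantee itself.

```python
# Hypothetical figures for illustrating the effective capacity guarantee.
RAW_TIB = 450               # usable raw capacity purchased
GUARANTEED_RATIO = 4.0      # the 4:1 effective capacity guarantee
observed_ratio = 2.6        # what a given workload actually achieved

promised_effective = RAW_TIB * GUARANTEED_RATIO   # 1800 TiB
achieved_effective = RAW_TIB * observed_ratio     # 1170 TiB
shortfall = promised_effective - achieved_effective

print(f"promised effective capacity: {promised_effective:.0f} TiB")
print(f"achieved effective capacity: {achieved_effective:.0f} TiB")
print(f"capacity gap to be made up : {shortfall:.0f} TiB")
```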
We'll talk more about each one of the product families here. And just so you know, the VSP One product family is a brand new technology. So, we're talking new hardware, new infrastructure and architecture, and then we're incorporating the common operating system, you know, across all the capabilities.
So, this is a true leap and a true advance forward from the legacy Hitachi storage capabilities. The VSP One Block family is a series of 2U arrays. They're populated with all-flash NVMe drives and always-on adaptive data reduction.
This is what's providing our guaranteed 4 to 1 data reduction. And we built this with the modern applications in mind, and it gives a whole new perspective on data and data storage. Our emphasis is on simplifying very complex environments, but providing maximum performance and security while reducing power and energy demands.
Sustainability is a very, very big thing for Hitachi. And with the effective capacity guarantee in 2U, we can offer upwards of 1.8 petabytes, you know, in this configuration. With virtual storage scale out, VSSO, we can scale out to 65 nodes with the block 20 family.
So, customers can start with a small data center footprint and then pretty much grow to meet whatever storage demands arise. Each of these arrays has 24 NVMe drives in the initial drive tray, and each one supports two additional drive enclosures, with 32-gig Fibre Channel front-end connectivity. Everything you can imagine, a very dense amount of storage and capability, very high performance in a very small footprint.
The demands we're seeing now from mission-critical businesses, and especially AI workloads, are pretty much pushing data infrastructure to the limits, and legacy systems are really struggling under the weight of a lot of these massive data volumes and protecting more and more mission-critical data. The VSP One Block 85, I still call it the Block 85 high end, is specifically engineered for the AI era. It delivers superb performance while also delivering maximum scale and resilience in a simple, sustainable way, and it is guaranteed.
The number here to point out is 50 million IOPS. It is truly a massive leap forward in throughput, and it makes it pretty much ideal for mission critical workloads like AI inferencing, high frequency trading, large scale database operations, or any application that really pretty much demands extreme performance. We're offering eight nines of availability without any additional tweaking or tuning.
This is pretty much near zero downtime, and this level of resilience is pretty much critical for agencies where seconds count and maximum uptime is non-negotiable. The architecture scales massively and efficiently. We support multiple petabyte capacity, and customers can continue to consolidate workloads without having to worry about sacrificing speed and performance.
Why does this matter, though? Agencies and customers, they pretty much want a predictable performance envelope, simplified management, but also future proofing their infrastructure, and it's all backed by Hitachi's legendary reliability and availability. Again, I mentioned before, it's really designed to meet tomorrow's workload.
It's really designed for these AI native and data intensive mission critical applications, whether it's gen AI, analytics, whatever the case may be. I mentioned the 50 million IOPS, but I also want to call your attention to 37 microseconds of latency. We had a customer that tested that, and they were seeing that as a low latency number.
Now, naturally, these are hero numbers, but it demonstrates the performance envelope and the capacity that this array is designed to meet. Using effective capacity with four-to-one ADR, we've been able to see over 18 petabytes of data stored. So that's a massive amount of data scale for both structured and unstructured data.
The system itself starts with four controllers. It can scale up to 12, so you can grow with your needs. We actually utilize data engines, and we start off with three data engines.
This guarantees the high availability as well as parallel processing. The system supports upwards of 288 drives of 60 terabytes each, and drive sizes range anywhere from 3.8 terabytes all the way up to 60 terabytes, so it's optimized both for density and performance. And then, like Todd said, this system actually supports both open systems and mainframe, and it can do so concurrently. And that is one thing that I think Hitachi brings that other vendors have struggled with historically, the ability to support both of those protocols simultaneously, each with different characteristics for data storage, performance, and supportability.
We've built a lot of intelligence into the device itself, and this is utilizing the AI capabilities Todd was mentioning earlier. We do intelligent provisioning, so we can automatically allocate resources based on the workload profiles. We're dynamically optimizing the system for performance and efficiency, so we're load balancing the demands on the system.
There's built-in compliance and policy enforcement, so that falls in line with data governance. And then, naturally, security. Security is something that is first and foremost with Hitachi all the time.
And then these data services are native. They're not bolted on. Everything is integrated into the system.
[Todd Hanson]
One thing to be aware of is that on the B20 and the B80 block high-end series, we use NVMe drives. There are no spinning drives. Something some people aren't aware of: we use dual-ported drives.
They're more expensive, and we could have saved money and just gone with industry-standard single-ported drives, but we use dual-ported drives for the reliability and availability.
[Guy Gawrych]
Thank you, sir. Right here, I want to talk a bit about some of the features and functions that we've incorporated, and they're pretty significant in a number of ways. We've moved away from traditional RAID, and we're using a distributed spare drive rebuild methodology, so in essence, the data and spares are distributed across all drives in the array, significantly reducing system impact and drive rebuild time in the event of a failure.
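As a purely illustrative comparison of why distributed sparing rebuilds faster than a single dedicated hot spare, the toy math below assumes a hypothetical drive size, per-drive rebuild rate, and tray size; these are not Hitachi figures, just a way to see where the speedup comes from.

```python
# Toy rebuild-time comparison: dedicated hot spare vs. distributed sparing.
DRIVE_TB = 30            # capacity of the failed drive (hypothetical)
PER_DRIVE_MBPS = 150     # sustainable rebuild write rate per drive (hypothetical)
SURVIVING_DRIVES = 23    # drives left in a 24-drive tray after one failure


def hours(tb: float, write_mbps: float) -> float:
    return tb * 1e6 / write_mbps / 3600


# Dedicated spare: all rebuilt data funnels into one drive.
dedicated = hours(DRIVE_TB, PER_DRIVE_MBPS)

# Distributed sparing: spare space is spread across the surviving drives,
# so rebuild writes happen in parallel across all of them.
distributed = hours(DRIVE_TB, PER_DRIVE_MBPS * SURVIVING_DRIVES)

print(f"dedicated spare rebuild : ~{dedicated:.1f} hours")
print(f"distributed rebuild     : ~{distributed:.1f} hours")
```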
In the Block 20, we've implemented a single drive capacity upgrade, so you don't have to purchase an entire new RAID group if you want to expand your Block 20 system. After the initial nine-drive configuration, you can add drives in single-drive increments. For the Block 85 high end, you can add a minimum of four drives.
We just have a slightly different architecture for the Block 85 versus the Block 20. We offer always-on compression, so there's really no need for manual tuning. We've also incorporated a hardware offload, so we can reduce the processing resources that are taken up in real time, and this allows us to actually improve the overall data reduction ratio. So we do compression as data is ingested.
We're going to do some sort of deduplication, and then we're going to do a post-process data reduction, so all of this is allowing us to do our no-questions-asked data reduction without need for certification or caveats. And you can imagine this is huge in the federal space, because typically agencies will not do those sorts of authorizations. They will not sign forms, you know, guaranteeing that a specific level will be met. We simply give this guarantee without questions and without fanfare, again, just utilizing common sense, knowing what data types you're going to be sending to the array.
Besides these performance enhancements, we've made changes to our snapshotting capability. We're now using redirect-on-write-based snapshots, and we're offering ransomware protection via SafeSnap. Our snapshots are immutable; they cannot be overwritten or deleted, and deletion will not even take place until the retention period has expired.
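For a concrete picture of what redirect-on-write means, here is a toy sketch of the idea: taking a snapshot just freezes the logical-to-physical block map, and new writes land in fresh locations, so nothing has to be copied first the way copy-on-write would. This is purely conceptual and not Hitachi's actual on-disk implementation.

```python
# Toy redirect-on-write volume: snapshots freeze the block map; writes redirect.
class Volume:
    def __init__(self) -> None:
        self._store: dict[int, bytes] = {}   # physical location -> data
        self._map: dict[int, int] = {}       # logical block -> physical location
        self._next_loc = 0
        self.snapshots: dict[str, dict[int, int]] = {}

    def write(self, block: int, data: bytes) -> None:
        # Redirect: write to a new physical location, then repoint the live map.
        # Any snapshot still references the old location, untouched.
        loc, self._next_loc = self._next_loc, self._next_loc + 1
        self._store[loc] = data
        self._map[block] = loc

    def snapshot(self, name: str) -> None:
        # Metadata-only operation: freeze a copy of the current block map.
        self.snapshots[name] = dict(self._map)

    def read(self, block: int, snapshot: str | None = None) -> bytes:
        table = self.snapshots[snapshot] if snapshot else self._map
        return self._store[table[block]]


vol = Volume()
vol.write(0, b"v1")
vol.snapshot("before-upgrade")
vol.write(0, b"v2")
print(vol.read(0))                        # b'v2' (live data)
print(vol.read(0, "before-upgrade"))      # b'v1' (snapshot view, never copied)
```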
[Todd Hanson]
There's a new feature along with Thin Image Advanced that's not listed that a lot of people aren't aware of: vClones. We can take a Thin Image Advanced snapshot and turn it into a vClone, which is a space-efficient copy, and in turn replicate it, use it, and mount it to hosts. So it turns a snapshot into a vClone.
[Guy Gawrych]
The clones, I think, are very powerful, and I think it's a fantastic use of the snapshot capability, because you can keep your gold image on the actual disk itself, and then you can create a clone from the snapshot, and it's a truly disposable copy. I always found clones were very, very good for test/dev and environments like that. Again, we're bringing this capability as part of the native features and functionality, and it's just another enhancement that we're offering.
From a connectivity perspective, you can read for yourself. Obviously, the 100 gig NVMe over TCP is huge. We support 64- and 32-gig fiber channel, which is a big step up, and we're seeing most of our customers are going that way now, and we also have the 32-gig support for FICON for mainframe.
All of this is based on the Gen 4 PCIe capabilities here, so connectivity continues to expand as the environments continue to demand. Then going over to security, we pretty much have always maintained a major focus on security, and in that vein, we've implemented what's called a hardware root of trust. This pretty much acts as an anchor for the boot process, so each stage of our boot chain is going to be verified to ensure that all code is cryptographically signed and valid, and that both the system and the firmware are certified Hitachi products.
If there's an issue found, a recovery process is going to be initiated, and if both the active and the recovery copies are invalid, the system is going to stop, and it's going to prevent any sort of corrupted firmware from running. So we're very, very aware of making sure that the only firmware that goes on our arrays actually comes from Hitachi, is cryptographically validated, and is meant to be on that particular array. We offer KMIP support natively for external key management, and I talked earlier about 360 and the fleet management capabilities, so all of this is built with security and management of the entire product portfolio in mind.
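As a purely conceptual illustration of the chain-of-verification idea behind a hardware root of trust: each boot stage carries the expected fingerprint of the next stage, anchored in an immutable root value. Real systems use asymmetric signatures anchored in hardware rather than the plain hashes used here.

```python
import hashlib


def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()


# Hypothetical firmware images: stage 1 embeds the expected hash of stage 2.
stage2 = b"operating-firmware-image"
stage1 = digest(stage2).encode() + b"|bootloader-code"
ROOT_OF_TRUST = digest(stage1)   # modeled as burned into hardware, unchangeable


def boot(stage1_img: bytes, stage2_img: bytes) -> None:
    if digest(stage1_img) != ROOT_OF_TRUST:
        raise RuntimeError("stage 1 failed verification; halting boot")
    expected_stage2 = stage1_img.split(b"|", 1)[0].decode()
    if digest(stage2_img) != expected_stage2:
        raise RuntimeError("stage 2 failed verification; initiating recovery")
    print("all stages verified; booting")


boot(stage1, stage2)                 # verified chain boots normally
# boot(stage1, b"tampered-image")    # would raise and refuse to run
```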
I have a slide later towards the end of the presentation about sustainability that goes into some of these in more detail. These are just some of the use cases. Naturally, there's really no block-based use case that the Block 20 system can't support, but there are a couple, obviously, to keep in mind.
Mission-critical applications: you basically want to put your mission-critical agency data on a platform whose availability has been guaranteed for over 20 years, and we've delivered enterprise-class storage for both midsize and enterprise customers for decades. You can do application consolidation. I mentioned before that the Block 20 scales up to 65 nodes.
The Block 85 supports upwards of 18 petabytes, so there are not too many data sets that we cannot support, both from a performance and capacity perspective, and then naturally hybrid cloud. We're starting to see so many customers that went cloud first, cloud only, coming back. It's kind of like the people that left New York, came all the way down to Florida, realized how hot it is here in the summer, and then went halfway back to North Carolina.
That's the same way I look at hybrid cloud, so we can consolidate both private and public cloud storage simply across one interface and all managed and supported from Hitachi via 360. Further expanding on the block arrays, we talked about the Block 20 and the Block 85. Now we have a software-defined storage capability.
It is truly hardware agnostic. It can be deployed using Hitachi servers on-prem. It can be deployed software-only using customer-supplied x86 servers in a private cloud, and it can also be deployed in the public cloud.
It's a single data platform, so data is written and read across the cluster. We have a three-node cluster configuration as the minimum. Start with three nodes, and you can expand up to 32.
It supports roughly over 3 petabytes of capacity and over 2.8 million IOPS in performance. The reason we use three controllers is that each node acts as a controller itself, so in the event that one node fails, the system continues to operate and provide the 100% data availability that we guarantee. So with software-defined, you get the same capabilities, features, and functions that you would on the actual Hitachi arrays themselves, the same replication, all the capabilities that you would expect, but in a software configuration that you can deploy as you see fit.
Taking software-defined to the next level, we can actually do software-defined in the cloud, and this is our solution that runs in AWS, Azure, and Google Cloud, so same OS, same block-based capabilities. It supports synchronous replication, so we can go from on-prem to the cloud. It can be managed from a single interface using Hitachi EverFlex and 360, and this synchronous replication is supported between all the VSP systems, so it can be VSP One Block, software-defined, as well as our 5000 and E series arrays.
I find that the cloud makes a great use case for test/dev where you need true copies of your data, but they're disposable. Bring it up in the cloud, have your tools up there that can do whatever test and QA you would like, and it's a disposable copy. When you're done with it, turn it off, and it's good.
It's also good for supporting distributed applications, and it's also great for migration. One thing that Hitachi has been known for is our ability to virtualize and move data, especially third-party storage, so migration to the cloud is another good use case. But again, the same capabilities that you would get from the physical array now exist up in the cloud, as well as on-prem.
We've made significant enhancements with our file capability. Our prior HNAS generation has now been replaced by VSP One File. We built this with the next-generation floating point unit, the FPU, to basically create a much more accelerated file system.
The analogy here: what GPUs, graphics processing units, are for analytics, FPUs are for file systems and protocols. This enables us to do massive parallelization and deliver significantly higher performance and scalability from the file capability. In fact, VSP One File is now anywhere from five to six times faster than our previous NAS solution.
We've also integrated a number of cyber resiliency capabilities, including immutability, denial-of-service protection, as well as security and ransomware features and protections such as multi-factor authentication and single sign-on, and then support for NFS 4.1. We've also improved and enhanced our network capabilities, so 10, 25, and 100 gigabit Ethernet connectivity, all with native cloud tiering capabilities. So know that when you are using VSP One File, you're going to get a significantly improved and enhanced version.
And then kind of rounding out our VSP One family is our object platform. We've actually had object capability with the Hitachi Content Platform, but now we've enhanced that as well. So this is our massively scalable object-based platform for unstructured data and S3-native storage.
We can start a cluster from a few terabytes and it grows up to multiple exabytes. So we can start with a minimum of eight nodes and scale to 32 nodes. This is the ideal platform pretty much for any sort of large scale data lakes, heavy analytics, AI based environments.
We can support all-flash, policy-driven storage, you name it, and everything in between. We've also offered a number of cyber resiliency capabilities, including immutability. We're going to basically use policy-based object management and WORM, write once, read many, capabilities.
So we're going to ensure that your data is immutable. It's authentic. It's available and it's secure.
But we're also going to continually safeguard against corruption or any sort of tampering. And this is important when guaranteeing chain of custody: we have a number of federal agencies and law enforcement organizations that are utilizing this capability, and the chain of custody guarantee is non-negotiable. We do bucket-level and object-level locking.
So we're going to prevent data from being deleted or compromised in the event of any sort of ransomware attack or unlawful intrusion. And then legal hold is something interesting. This is going to allow administrators to place an indefinite hold on data.
So it's going to ensure that the data will not be deleted until this hold is explicitly removed. Even if a retention period has expired, that hold will remain in effect. Again, tying into the whole immutability and chain of custody theme, we're offering SEC 17a compliance.
So this meets all those stringent financial requirements for record keeping and data integrity. And then we've added a new capability for personally identifiable information, PII. It's automated discovery.
So what we're doing is, if we see any sort of PII data, it's going to be flagged, and then the administrator will be notified, making sure that this information is not compromised or sent out unlawfully. One of the things that we're known for, again, is our availability.
So the object capability has 10 nines data availability and 14 nines of data durability. So guaranteed to not only be always available, the data itself will never be lost or corrupted. Supports multiple protocols.
So we can support the S3 API. We support Amazon S3 Tables natively, SQL, POSIX, HTTP, REST APIs, the whole shebang. So this is very much designed for large unstructured data sets where you need maximum scale as well as maximum performance.
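For teams consuming this through the S3 API, here is a hedged sketch of the standard S3 object-lock and legal-hold calls issued against an S3-compatible endpoint with boto3. The endpoint URL, bucket, key, and credentials are placeholders, and retention behavior on any particular platform should be confirmed against its own documentation.

```python
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://object.example.gov",  # placeholder S3-compatible endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

bucket, key = "evidence", "case-1234/capture.pcap"

# Write an object with a compliance-mode retention date (WORM-style lock).
s3.put_object(
    Bucket=bucket,
    Key=key,
    Body=b"...",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=365 * 7),
)

# Place a legal hold that persists even after the retention date passes.
s3.put_object_legal_hold(Bucket=bucket, Key=key, LegalHold={"Status": "ON"})
```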
I mentioned a little while ago about sustainability, and this is actually becoming much more of a priority and a focus, especially with government agencies. We're number one rated by Energy Star for IOPS per watt. This is something that we're very proud of.
Our carbon footprint is significantly less than other vendors in this space. We do something that's pretty interesting, and it's an automated patented process where we're doing compression switching between inline and post-process. So this is actually going to lower the overall power consumption.
So as data is being ingested naturally, we're going to do whatever data reduction we can do. And then post-process, when the system is less taxed, we're going to actually do an additional data reduction. One of the things that I really think that we're doing pretty interestingly is carbon reduction.
And I hate the term carbon reduction, but in essence, we're actually monitoring the temperature of the system internally, and then we're adjusting the RPMs of the fans based on the internal temperature. We're also monitoring the controller CPU, and we switch it to a lower power mode based on the workload.
So we're going to reduce the CPU clock speed based on those internal temperatures, and lowering that internal heat helps us extend the lifespan of all these internal components. And then lastly, the modern storage assurance.
It's going to protect your overall media investment by doing data-in-place and non-destructive upgrades. So agencies can pretty much get an overall return on their investment, stay up to date with current products, and also modernize and upgrade their infrastructure without a forklift upgrade or downtime. So we basically are saying, you invest in Hitachi today, you get VSP One tomorrow and the next generation in the future.
[Todd Hanson]
And part of the sustainability story: we're also reporting that via the call-home functionality to ClearSight. And there's a sustainability dashboard that kind of tells you, hey, you've got an old storage array, and it's using a lot more power.
You can define where that power is coming from, whether it's renewable or a different type of power. And based on the components that are reporting, it'll tell you the heat that's generated and give you sustainability-type improvement options.
[Guy Gawrych]
The nice thing about ClearSight, and the reason I should have mentioned it earlier, is its phone-home capability. So we can actually do metrics and monitoring via external connections. A lot of federal agencies do not allow that.
We offer something called ClearSight Advanced, which is the same capability, but it sits behind the firewall. So you will get the sustainability and monitoring features that Todd talked about without having to worry about the external connectivity.
I wanted to put this one in here, and this is pretty much how we're extending our support for high-performance storage that's going to meet the demands of AI workloads, and how our architecture actually consolidates the data plane, the control plane, and the data fabric via APIs to eliminate all the silos and streamline applications for AI workloads. Look at the bottom and start going up. So again, this is kind of tying into the whole ecosystem.
Our data plane is actually going to manage this massive amount of data for AI training and inference, regardless of the data type, and regardless of the ingest method and protocol. The control plane is actually going to orchestrate all these resources intelligently, and it's going to ensure optimal performance and scalability.
Hitachi IQ is our portfolio of AI-ready infrastructure, with solutions from Hitachi and NVIDIA, and it's designed to address all of the needs of the AI market. So we're providing the compute, the networking, ultra-high-performance file systems, and storage infrastructure, and it's all designed for AI, generative AI, analytics, and data lake environments. We have integration with NVIDIA GPUDirect, so this is going to allow direct GPU-to-storage communication, and we remove these bottlenecks for AI workflows.
We have a whole series of discussions and presentations specifically around Hitachi IQ, but I wanted to bring that to everybody's attention because it's just yet another example of how Hitachi is looking at the overall data center and the solutions for customers as opposed to individual point product solutions. Between the layers is our data fabric, and that pretty much connects everything. So you get seamless data movement between infrastructure and environments.
What we've seen is measurable results. We're seeing a 30x improvement in bandwidth, 2x performance improvement, and 7x faster pipeline acceleration. And if you talk to AI customers, it's always about the data pipeline.
How can I get my data into my system where I can actually do something with it? How can I actually ingest as fast as possible from multiple disparate sources? So what we're saying here, this is really more than just a platform.
I mean, it really is the foundation for AI infrastructure, and I wanted to bring this to your attention before I wrapped up. I just want to give a quick example on where we're using it. This is a Department of War use case.
Cannot name the customer. The goal was that they basically wanted to understand the real-time location and movement of space-based assets, as well as any potential adversarial hypersonic weapons. They ingest and store telemetry data gathered from multiple medium-Earth-orbit satellites, and the data was critical.
The reason they chose Hitachi was, first and foremost, we're providing the all-flash NVMe storage. This is mission-critical data; performance was absolutely non-negotiable.
And they also really, really needed the uninterrupted availability and the 100% data accessibility. That was a critical factor in their decision. Plus, they also liked the fact that we had cleared personnel from support to sales to on-site engineering support.
This is one of my customers. This customer had an end-of-life storage area network, and then they also had existing block storage. They weren't exactly thrilled with it.
And they wanted to modernize these capabilities because they run science applications; you can imagine what they're doing over there. I find it in bad taste to criticize, and I always respect my competitors, so we did not discuss what they didn't like about their current storage environment. We just listened, and we didn't add anything onto it. We simply presented our capabilities, independent of what they thought about their current infrastructure.
We wanted to give them our solution and why we thought it was better. They liked the idea that they could actually have one vendor that provided both their storage as well as their SAN fabric. That was a big deal for them.
They also liked the fact that we were able to consolidate all of their spinning drives onto an all-flash SSD base. And with the data reduction and the higher-capacity drives, it gave them better performance, but it also gave them huge cost savings. And with the cost savings, they were able to actually mirror their environment's infrastructure.
So, inside the same data center, they now have a synchronously mirrored configuration. And they were also able to buy an additional storage array to stand up in their lab so they could do further testing and everything. So this one was about economies of scale, cost savings, and basically taking their existing aging hardware and modernizing it.
[Anthony Jimenez]
Thanks for listening, and thank you to our guests, Guy Gawrych and Todd Hanson. Don't forget to like, comment, and subscribe to CarahCast, and be sure to listen to our other discussions. If you'd like more information on how Hitachi Vantara Federal can assist your organization, please visit www.Carahsoft.com or email us at hitachivantara@Carahsoft.com.
Thanks again for listening, and have a great day.