The Federal Government has made artificial intelligence and network security a strategic priority, so agencies must modernize legacy systems, mitigate data silos and adhere to complex compliance requirements. With Peraton's system integration tools and Federal AI expertise combined with NetApp's intelligent data infrastructure and AI-ready data center, agencies are transitioning from AI experimentation to delivering operational impact at scale by modernizing mission delivery, streamlining decision-making and enhancing security. Discover how to modernize and unify critical systems, embed security and governance at the data layer and deliver high-performance AIOps across platforms built for hybrid and multicloud environments. Fill out the form to access the Peraton and NetApp podcast series to explore how modern data infrastructure can close the Government AI readiness gap with secure, mission-aligned AI capabilities.
Introduction
Welcome back to Carahcast, the podcast from Carahsoft, the trusted government IT solutions provider. Subscribe to get the latest technology updates in the public sector. I'm Anthony Jimenez, your host from the Carahsoft production team.
On behalf of NetApp and Peraton, we would like to welcome you to the first episode of the CTO Fireside Chat between Peraton and NetApp, focused on federal AI readiness. Jason Blinn, Peraton's FedSYS CTO, and Matt Lawson, NetApp's Director of Solutions Engineering, come together as industry and government technology leaders to discuss what it really takes to modernize federal IT and operationalize AI.
Matt Lawson
All right. Well, thank you for joining us today. I'm excited to have Jason Blinn here with me.
My name is Matt Lawson. And welcome to a Fireside Chat series we're doing; today we're going to be talking about federal AI data readiness. I also want to give a shout out to Carahsoft for hosting us today.
But before we get started, let's do some introductions. Jason, do you want to do an introduction? I'll introduce myself after that.
Jason Blinn
Sure. Thank you, Matt. And thanks to Carahsoft as well.
Thanks to NetApp for helping to organize this. I'm Jason Blinn from Peraton. I'm the CTO for our federal civilian team.
I'm helping federal civilian agencies with AI adoption and cloud readiness challenges. And happy to be here talking about this. I think it's a timely topic.
So over to you, Matt.
Matt Lawson
All right. Well, again, my name is Matt Lawson. I'm a Solution Engineering Director for NetApp Federal.
And I'm actually a former state agency-level CTO. So, again, really excited to talk with you today, Jason, about AI readiness. It seems like we can't have a conversation these days without talking about those two letters: AI.
Jason Blinn
Oh, for sure.
Matt Lawson
And so I thought we could just kind of riff off of each other a little bit today and ask each other some questions and just have a discussion about what we're seeing in the federal space as it relates to AI, some of the challenges, some of the opportunities. Sounds great. So with that, let me start us off with a question here.
So as I said, everyone is talking about AI. But most agencies are still struggling with data readiness. So what does AI data-ready infrastructure actually mean in a federal environment?
Jason Blinn
Great question to start off with, Matt. And like I said, I think this is a really timely topic. If we take a step back and look at what organizations are facing today, it's the challenge of AI adoption: trying to understand what AI is really about, what it can do for them, the hype versus the reality, where to get started.
There's so many questions out there that I've also seen with a lot of my clients. And it comes down to, is the organization ready? Do they have the proper data foundations in place?
And really moving from a point of curiosity to real capability. And I think that's what we're seeing out there. Organizations are just trying to figure out, how do I make the most of AI?
And really, it's about, do I have the right data? Do I have the right platform? And do I have the right governance to enable my AI adoption process?
Matt Lawson
Yeah. No, there's a word you said a couple of times, data, right? I think AI is about the data.
And I think that's one of the biggest challenges that agencies see: how do they get their data ready for AI? Whether it's siloed, whether it's bringing the right amount of security to the data, making sure that they understand what their data is, and making sure that they don't have what I call poisonous data, or data with specific sensitivity classifications they need to be protective of. So I think there's a lot to unpack as we talk about what that AI data readiness means. Another thing that we talk a lot about is pet projects: AI pet projects or prototypes in the federal space.
So how should agencies start thinking about building AI data pipelines that scale from that prototype or project to full production?
Jason Blinn
Yeah. Well, some of the challenges I'm seeing so far are that people are focused really on models and tools. And I think that's not the right focus.
I think the focus should be first on use cases and determining what's the problem that we're actually trying to solve. Is it operational intelligence? Is it predictive analytics?
Is it knowledge management? There's lots of use cases, whether it's mission focused or maybe even IT operations focused, that we can point these AI tools at. But before we do that, we have to really baseline what are the problems that we're trying to solve?
What are the right use cases to start with? And then we can look at how we're going to make sure that we have that right data foundation in place that we're talking about. Those are all kind of prerequisites in order to be successful.
And the organizations that are going to be successful are going to focus more on that data foundation and less on the models and tools. There's lots of tools also out there. People are testing and experimenting in their own personal capacity.
So they're seeing a lot of benefits that AI can provide. But I'm seeing sometimes people are getting a little bit too hung up on the specific models or tools versus some of those more foundational things that need to be in place.
Matt Lawson
Yeah. One of the things that we see quite a bit is these data sets that are required to build these AI models are huge. And sometimes it's challenging to take your data to that model, whether the model's in the cloud or elsewhere.
We talk about data having gravity, having mass, and you start thinking about how the speed of light actually becomes a limiting factor when we're moving very large, multi-petabyte data sets. And so one of the things that we talk about at NetApp is, if you can't bring your data to the model, bring the model to the data: finding ways to make that data accessible to the models while still protecting and using it across the different boundaries where it resides, whether that's on-prem in your data center, in the cloud, or in both locations.
Jason Blinn
How are you guys approaching that, Matt, from a NetApp perspective as a platform provider? What are the capabilities that you guys are really building to help facilitate that?
Matt Lawson
Oh, thank you for that question. Because that's one of the things that we're trying to help our customers and agencies understand: NetApp truly is a platform provider. We give agencies capabilities to leverage their data wherever it may reside, whether that's on-premises in their data center, in a cloud, or in multiple clouds, and we can deliver the SLAs and the performance capabilities they need.
We can give the right amount of security. We can give the right amount of performance, and we can help our agencies move that data without friction to wherever that data needs to be. So we talk about helping get the data to the model, bringing the model to the data.
But then other things that we help with are helping in terms of just cataloging that data, understanding what is the metadata about those various silos that exist across the universe of on-prem and in the cloud. And so we've got tools that will help agencies automatically discover their data and catalog what that data is, so you don't need a human to go in and start tagging data and say, oh, this is HR data or this is financial data or this is mission data. We've got tools that will help them automatically discover and identify what that data is.
So what it does is significantly reduce the friction, making it easier to understand, catalog, and break down those silos.
Jason Blinn
Those are really important. Thanks for sharing that, Matt. We're definitely seeing the same thing when looking at client environments.
We're consistently seeing challenges with establishing things like data pipelines and data catalogs, data lineage, as well as security and governance wrapped around those things. Like you said, a lot of people are having challenges, whether that's on-premises or in the cloud. Many people are utilizing both of them, and I think that's great.
And we're big proponents of that as well at Peraton. We help our customers solve their challenges wherever they may be. We're agnostic, and we're happy to help out in whichever environment that may be, because there are definitely important use cases and niches for all of those.
And some customers we're seeing have concerns about losing control of their data, so they feel maybe more comfortable about doing some of that more in the on-premises type format. And I think that's very valid and very smart to think about, especially at the early stages of this AI adoption cycle. Some are more forward-leaning and innovative, want to leverage the best that public cloud has today, and that's wonderful as well.
It's really about right-sizing that, I think, for the organization. But like I said, there's a few of those foundational pieces that are really important to have in place, and I think that's really the crux of this AI readiness. It's really about data readiness, so having that right data, having that right approach, having the right controls and security guardrails around that.
So we're helping organizations also rethink things like a data mesh, for example, where you can have data owners still be the controllers and the owners of the domain of data that they are in charge of today, so they don't have to lose control, but they can also share that out in a way across the organization more effectively, as well as creating an enterprise kind of unified approach for data. So data mesh, data fabric, data lakes, those are becoming really important pieces that are those things that need to be in place in order to really get the most out of AI.
Matt Lawson
I couldn't agree more. You know, as you talk about data meshes and data fabrics, that becomes even more important in AI because there are so many different types of workloads that go into AI. And as NetApp is a data platform provider, we can seamlessly help agencies deal with all those different types of workloads, whether it's the very large data-lake type pools of data, or the high-performance ingest data that's needed for training, inferencing, or retraining.
And then something else you said that, again, I want to touch on is, you know, agencies have concerns about the data, you know, whether it's on-prem or in the cloud. And security is absolutely something that we have to keep top of mind as it relates to, you know, our AI data. You know, can we automatically redact sensitive data before we send it to a model?
Well, that's one of the things that we're helping with. Another big concern, and again, I don't want to scare anybody, is, you know, just data security in terms of, you know, protecting your data sets against ransomware attacks. Again, those are things that NetApp provides at the data layer, and we detect and track that automatically.
And then also, you know, one of the big topics is what we call post-quantum cryptography. This is the notion that we're on the verge of having quantum computing, and the threat is that quantum computing could render our current encryption technologies useless. And so one of the big threats that a lot of agencies are thinking about is the harvest now, decrypt later type of attack, where the data may be encrypted today, but adversaries harvest it in that encrypted format, thinking, hey, maybe I'll be able to get at that data tomorrow once I have access to quantum computing.
And so, again, NetApp's helping agencies by providing quantum-resistant encryption routines that'll actually protect your data from those harvest now, decrypt later types of attacks that we're seeing. And so there are a number of things that we're doing at NetApp to help agencies lean into AI, knowing that, by partnering with NetApp and partnering with Peraton, they can securely and confidently use their data without thoughts of, hey, I might have risks because of the data security threats out there.
Jason Blinn
Makes sense. And I think, you know, you're right. It is a little bit of a scary time.
But glad to hear you guys are really thinking about the future and helping position agencies today for what is probably going to be that reality not too far off. One of the other things I'd like to mention, to piggyback on that, is the compliance perspective, right? Working with a lot of federal agencies, compliance is very important, top of mind.
So we're also seeing things like auditability of the models, making sure that the data sources have good quality and good security around them, and making sure that people are using AI responsibly. All those things are really important.
And also part of that compliance story as well to make sure, you know, as we're leveraging the best of this new technology, we're still compliant, we're still secure. We're still making sure that all of the assets, you know, for the organization are protected, but people are able to use the data to make their jobs easier, make smarter decisions, make operations more efficient. So that's another aspect that, you know, we're also seeing out there closely related to, you know, some of those security concerns you mentioned.
Matt Lawson
Yeah, well, you talked about data auditability. Earlier, you said data lineage. You know, it's that notion of, hey, I've got this model that I've trained and built.
How can I certify that the data that went into that model was not inappropriately sourced data or has, you know, what I would call poisonous data, data that potentially, you know, should be protected and should not be used in training? And that's, you know, again, some of the tools that, again, NetApp provides is we provide some of that auditability and traceability. So, again, agencies can confidently know what pieces of data went into every model that they built.
They can trace that back, and they can prove compliance that the right data went into the training of the right model. So that's good. So I'm going to go on to our next question. We've talked about silos a couple of times in our conversation already, but how can agencies break down those silos between data teams, AI teams, and mission owners?
Jason Blinn
Yeah, very important point there, Matt. Some of it we've touched on. I think there's a couple things that come to mind.
First is, you know, going from that experimentation phase to real enterprise capability. And I think, you know, where a lot of agencies, a lot of organizations are at today is still sort of in that experimentation phase. And that's great, you know, because that means we're trying things out, and we're trying to see where we can get value from the technology.
So that's fantastic. And then how do we move that into an enterprise capability, I think, is really the next step. And that's where organizations need, you know, to be thinking about what's the strategy.
Because if we only do things in kind of pockets of proofs of concept or pilots, we're probably going to just exacerbate the problem of the data fragmentation and silos that you referenced in the question. And so we need to be thinking about that more holistic strategy of, you know, how are we going to unify data access? So things like we did touch on earlier in terms of data mesh, data fabric, data lakes, those types of architectures are really important, you know, to establish how data is going to integrate across the various applications and mission use cases.
And then also establishing and thinking about the strategy for the data platform. So looking at folks like yourselves to determine what's going to be our go-to platform for data. And then we can build AI tools and models on top of that data.
But again, like we said at the beginning, we really need to make sure that we have the data foundation in place. So those are the things that I see as the critical path to scaling AI into an enterprise capability and unifying it across teams. What about you?
What are you guys seeing?
Matt Lawson
No, I mean, I think you're spot on. And I love what you said, right? You know, in terms of breaking down those silos, it really is about having that platform, right?
It's about having a platform that can do any protocol, a platform that can do any performance level. It's about a platform that can manage security. It's about a platform that can give you data mobility to move your data to where it needs to be or to bring the model to the data.
It's about that platform that can give you peace of mind knowing that, hey, I'm going to be protected from ransomware attacks, and I'm going to be protected against quantum threats with post-quantum cryptography, right? So absolutely.
And, you know, going back to why AI projects are failing: it's because we're seeing a lot of those agencies just can't get a handle on the data. They can't get access to the right data where the data is siloed.
They can't rationalize it. So truly, I think as agencies head toward more of that platform approach for their data, where they can unify their data and access it from a single point, it helps them achieve a higher level of success and better outcomes with their data.
Jason Blinn
And I'm curious what you guys, you know, would recommend for people to get started. You know, we're advising our customers, of course, in various ways. But I'm curious, you know, from the platform provider perspective, how are you advising your clients, you know, on how to get started?
And then how to scale to enterprise AI?
Matt Lawson
Yeah. So, I mean, first of all, I would say the easiest answer is get on NetApp. But besides that, we have some tools that help, so I'd say talk to the NetApp team.
You know, because, again, we obviously have a perspective of we bring a platform to the AI challenge and opportunity. And generally, what we do is we start working with those agencies to start, you know, automatically cataloging and discovering their data. So just knowing where all their data is, is the first step of, you know, understanding how can I rationalize?
Do I have to bring the data to a central location? Or in some cases, we've got agencies that can leave their data in a distributed format, but we can help them manage that centrally and know in a central point where all that metadata is. So there's a number of ways that we can help and facilitate with that.
But, you know, we love talking to agencies and helping them get past just storage and into that mindset of, hey, let's talk about a platform approach to your data. Because, again, I think it's that platform approach that's really helping agencies find higher degrees of success with their data readiness for AI. So, all right, I do have another question for you.
So, you know, we've been talking about shadow IT for a long time, you know, since the dawn of cloud computing. If someone had a credit card, they could swipe it and generate shadow IT. But now we're starting to hear about shadow AI.
So how do we prevent shadow AI in agencies, where teams are starting to use tools outside of the governance framework? Is this a bad thing?
Is this a good thing? What's your thoughts on that?
Jason Blinn
Interesting topic. And it's definitely a challenge. If we think back to the genesis of shadow IT, what really caused it was that there were capabilities out and available in the marketplace, especially from a personal consumption standpoint, that may not have been available within the enterprise.
And that becomes the challenge. And so, you know, in past years, we saw people, like you said, circumventing that process with a credit card to go get the capabilities they need. And really, in an understandable kind of way, they're trying to facilitate, you know, their mission, you know, as best they can.
So I think that's analogous for AI adoption, where we have all kinds of tools available to us today. You know, you think of ChatGPT, Gemini, Copilot, you name it, there are so many out there. And people are testing and trying things.
And I'm sure in a lot of ways very successfully, using it in their personal capacity: being more efficient at organizing their notes, helping generate content, so many different things it can help with. So I think that speaks to the importance of organizations moving quickly, but also moving smartly, like we talked about: treating it as a data problem, not as a model or tool problem per se, and having that data foundation in place. And so that really should motivate all of us to get started and start experimenting, think about that broader strategy, but not be hesitant about moving the ball forward.
And I think that is really what that problem kind of speaks to. What about you? What are you guys seeing?
Matt Lawson
Yeah, I think shadow AI happens when agency innovation is not as fast as what individuals want to see. And I think it's a challenge in the sense that shadow AI can sometimes circumvent some of the security and compliance controls that we want to put in place. But I think it's a balance of trying to move faster so that you're giving capability to your knowledge workers so that they can achieve that.
It's, I don't think there's an easy answer for that one, but it's something that should have us thinking as agency leaders, as industry leaders, how can we move faster so that we can give those capabilities so that, you know, those individuals don't have to go innovate on their own with the credit card?
Jason Blinn
Yeah, very important. And I think, you know, that kind of makes me think about, you know, broader data protection strategies, you know, in terms of data leakage, data loss prevention, you know, and how those fit in to make sure that your sensitive data is not moving outside of the boundary of your enterprise. And, you know, you can think about zero trust type approaches for things like that, where, you know, we're consistently auditing and controlling, you know, users, access, you know, devices, you know, to the data itself.
And that has to be a, you know, a constant kind of consistent process. So I think zero trust approaches, you know, kind of go hand in hand with what we're talking about as a way to help solve for that. And, you know, also, like I said, you know, ensure that we have mitigations in place for data leakage, data loss.
Can you talk about some of the capabilities NetApp has for some of those data protection type offerings?
Matt Lawson
Yeah, I mean, there are a couple of things that we do to help customers on that data side. We call it cyber resiliency: both the cyber side and the resiliency side. And it goes all the way from, like I talked about, a built-in, data-centric approach at the data layer, to autonomous ransomware protection, where we can capture ransomware attacks in real time.
And if we can capture them in real time, we can shut them down in real time and basically mitigate any damage that's done. In fact, I've got an example of a customer, I'll just say in the Mid-Atlantic area, that had some NetApp platforms in their data center and some non-NetApp platforms. And they had a massive ransomware attack where they were able to recover 100 percent of the infrastructure that was on the NetApp platform.
But on all of the non-NetApp platforms, they had irrecoverable data loss. So that's one example, just from ransomware protection. There's also this notion of being able to have immutable and indelible copies of their data, such that if they write data, or if they have outcomes from AI models, they can actually ensure and certify that no one can tamper with, change, or delete that data.
That becomes critical in terms of understanding what are some mission critical data sets. Other things that we can do in terms of helping customers protect their data is helping them architect the right disaster recovery plan. I mean, sometimes that's not a sexy thing to talk about, but so critical.
Jason Blinn
Absolutely.
Matt Lawson
And we like to talk in terms of like RPOs and RTOs, you know, and I like to, you know, talk about RPOs and RTOs in terms of flipping into the question of what's an acceptable amount of data loss, what's an acceptable amount of downtime. When you're talking mission critical, oftentimes no data loss, no downtime is acceptable. And so we can actually help agencies get to true, you know, zero downtime, zero data loss types of disaster recovery scenarios.
So those are just a couple of ways that we can help agencies with that data protection framework.
Jason Blinn
Excellent.
Matt Lawson
So I want to end on this question. It's going to be a fun question. I'm going to ask you to bring out your crystal ball.
Okay. So, you know, you'll be right or you'll be wrong, but hopefully you'll be somewhere in the middle. Right.
But as you look forward over the next five years, what will separate agencies that succeed with AI from those that don't?
Jason Blinn
Yeah, excellent question. I think kind of wraps up a lot of, you know, what we've discussed here today. The agencies that are going to be successful with AI are going to be those that embrace the idea that this is not a technology problem.
It's a data foundation and readiness problem. It's an operational maturity problem. And those are the key points that organizations need to be thinking through and developing a strategy around, even as they're, you know, getting started and experimenting and piloting and conducting proofs of concept with folks like yourself at NetApp.
Those are the key points that they need to think about today in order to have successful AI adoption in the future. So from the platform provider perspective, where do you guys go and where do you see taking us over the next five years to help make it easier for organizations to embrace AI?
Matt Lawson
Yeah, no, thank you for the question. I think that's very much aligned with what you said. I think those agencies that take a data-centric approach to AI are the ones that are going to really be the kingmakers with AI.
In the sense that the ones that are mapping out their data, building strong governance around their data, building compliance requirements around data, and understanding where their data is, what it is, and who's responsible for it: those are the ones that, once they understand their data, will be able to effectively use it with AI. It's about finding the right data sets for the models.
Because, again, the success in the AI world, as we're seeing, is about being able to access the right data and getting it to those models effectively. And again, the agencies that are doing that effectively, taking that data-centric approach, are the ones that I think are going to be successful over the next five years. Yeah, definitely agree.
All right. Well, Jason, thanks for your time today. I want to thank you, Jason, and I also want to thank Carahsoft for hosting us today for our inaugural fireside chat.
And today's topic was federal AI data readiness. And we hope to have you join us on the next one. So thank you.
Outro
Thanks for listening. Thank you to our guests, Jason Blinn and Matt Lawson.
Don't forget to like, comment and subscribe to Carahcast, and be sure to listen to our other discussions. If you'd like more information on how NetApp and Peraton can assist your organization, please visit www.Carahsoft.com or email us at matt.lawson@netapp.com. Thanks again for listening and have a great day.