Watch the podcast to hear Joseph Hackman, Founder and CEO of Permanence, discuss how integrating AI code-checking technology into complex IT environments enhances software development while remediating security vulnerabilities. Discover essential insights into how Permanence’s AI-powered code generation tools address skepticism around AI, simplify security maintenance and optimize resource allocation.
[Susan Hunt]
Welcome to Stare Down the Bull, I'm Susan Hunt and I'm your host. Let's dig in. Today, I'm thrilled to introduce Joseph Hackman, founder and CEO of Permanence, and a true software engineer extraordinaire.
Joseph was the technical lead for machine learning at Intel, later built the AI team at ASAP, was the head of AI at Attentive, and has since gone on to found Permanence. He has also served as faculty at Columbia University, teaching data center systems. Welcome, Joseph.
Thank you. So nice to be here with you.
You've been on this AI journey longer than most. I'd love for you to give our audience a little insight into what you've accomplished and what your journey was like. I know you've told me a story about how you started coding when you were six years old, which I love, and I'd love to hear more about it.
It speaks directly to the idea that having access to different things early on ends up creating great tech leaders.
[Joseph Hackman]
Yeah, access, I think, does matter. I was really lucky to have access to a computer, and my brother, who's three years older than me, was able to teach me how to code way back when I was six. Which is impressive enough, since he learned without any coaching himself.
But I started when I was six, writing in BASIC, which came with every DOS or Windows computer. It was a pretty good IDE actually, a great way to learn programming I think, but very self-contained. And I kind of picked up new languages. The internet was just getting started, but the people who were on the internet were largely software engineers.
So a lot of the content that was available really early on was about programming. And that's kind of what got me into AI. I got really interested in chatbots and kind of this idea of these competitions to see who could build software that would pass the Turing test for the longest amount of time or something like that.
And I started doing what you would now call AI by playing with open-source language models, tinkering with them for a junior high school science fair project back in 2001. And I've just kind of maintained an interest in it since then. I kept coding all the way through elementary school, junior high, and high school, went to college for computer hardware engineering, and also Chinese, and thought that a lot of the problems I had in studying Chinese could be solved with a computer.
In particular, I was really interested in trying to determine whether a word in Chinese text was a native Chinese word or a loan word. Basically, should you try to sound it out phonetically, or should you look it up in a dictionary?
In college, I had a very, very rudimentary solution for this; it was pretty bad. But it got me really thinking about how the state of AI was evolving. There was the so-called AI winter, so for most of the time that I was a kid, the field of AI, particularly neural AI, wasn't really moving, especially in natural language processing and computational linguistics.
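For a sense of what that native-versus-loan-word problem looks like in practice, here is a toy sketch. It is purely illustrative, not the statistical system Joseph describes building later: many Chinese loan words are transliterated with a relatively small set of characters chosen for sound rather than meaning, so a crude heuristic can flag words dominated by those characters.

```python
# Toy heuristic for guessing whether a Chinese word is a phonetic loan word.
# Purely illustrative; not the statistical system Joseph describes building later.

# A small, hand-picked sample of characters that frequently show up in phonetic
# transliterations (e.g. 斯巴达 "Sparta", 咖啡 "coffee"). A real system would learn
# character or syllable statistics from labeled data instead of using a fixed list.
TRANSLITERATION_CHARS = set("克斯尔特巴拉菲奥伦咖啡尼娜德布罗达顿")

def looks_like_loan_word(word: str, threshold: float = 0.5) -> bool:
    """Guess 'loan word' if most characters come from the transliteration-heavy set."""
    if not word:
        return False
    hits = sum(1 for ch in word if ch in TRANSLITERATION_CHARS)
    return hits / len(word) >= threshold

if __name__ == "__main__":
    for w in ["咖啡", "斯巴达", "电脑", "学习"]:
        label = "loan word?" if looks_like_loan_word(w) else "probably native"
        print(w, label)
```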
And I was really excited out of undergrad to kind of get my dream job at Intel. I was a hardware guy by training, but I wrote a lot of software, and so Intel was the place to be. They were kind of the number one semiconductor manufacturer in the world at the time.
And they very generously sent me to grad school. I got to build a much better version of my Chinese "is it a transliteration?" program using statistical methods. Grad school also brought me up to speed on what had happened in the world of neural networks over the previous 20 years.
I started grad school in 2013, right after the Word2Vec paper had come out, which was a big turning point. And when I was leaving grad school in 2015, another major paper in neural language processing, on neural machine translation, was coming out.
So I happened to be in grad school right when neural networks were making their return in language processing, just after convolutional neural networks had made a big splash in image processing.
[Susan Hunt]
That's right. That was around the same time I was leaving Nuance, actually, because I thought they were missing an opportunity in R&D. I'd seen a lot of people like you coming into the market with brand new systems, not having to deal with legacy garbage, building new applications that were really interesting and fast-moving. And a company that should have been on the cutting edge of things kind of let that go.
So you were exactly the kind of person I was afraid of at that time. And I'm glad I was right about it. It also happens to be right around the same time that we met, right?
Yes.
[Joseph Hackman]
2013? Yes. So I moved to New York and left Intel and joined ASAP.
I thought they were in a position to generate a lot of really interesting data. So they were in customer service.
[Susan Hunt]
Yeah.
[Joseph Hackman]
And the thesis was that the volume in the customer service business would move from phone to chat. That chat data was my personal interest. If you had a bunch of data in the right format, this chat format, and suddenly there was this seemingly nascent revolution in neural language models, then where you want to be is the place that has the data.
So that's why I joined ASAP. And you joined in early 2016, maybe?
[Susan Hunt]
Somewhere around there, yeah. I came out of Salesforce at that time; I had already left Nuance a couple of years earlier.
Yes.
[Joseph Hackman]
So yeah, it seemed like the place to be.
[Susan Hunt]
And I think it was a pretty good bet. It was a pretty good bet. It was fun.
And you built an incredible team, which is now your founding Permanence engineering team and, in my opinion, is second to none in the industry. Do you want to tell us a little bit about that?
[Joseph Hackman]
I'm very flattered.
[Susan Hunt]
It's true. It's very true. I really would like to understand, A, how you attract the talent.
And then, B, how they follow you. That says something about your leadership and their desire to be with a startup that they believe in. Because honestly, all of you could go to any company you wanted to in AI right now.
So talk to me a little bit about that little team of yours.
[Joseph Hackman]
So I was leading machine learning engineering. So my task was to take the latest out of research, both inside and outside the company, and turn it into products that deliver value for customers. And I think that is kind of the seed of how I built the organization and the team.
I really like making customers successful. That's the joy of the job. You sit down with somebody, you hear about their problems, and you're like, I can fix that.
And then you come back a week later and you fixed it. And they're very happy. And obviously, you're very directly changing at least a very tiny portion of the world.
That emotional feedback is pretty instantaneous. And I think that real application is what set us apart. There were a lot of AI labs springing up at this time, or that had sprung up earlier.
And they were starting to recruit reasonably aggressively. But there was often a huge disconnect. Even at really big companies or FAANG organizations, there's a huge disconnect between research and production.
And for us, we were a startup, we were moving very quickly. And we would say, hey, there's hundreds of millions of end customers. You can make their lives better directly.
We'll put you directly with the leadership of these companies. You can talk to them. You can hear the problems.
You can hear what needs to be fixed. You can fix it. You can have them directly tell you, yep, that's better.
That is a much better loop than submitting papers for publication. You don't know if you're actually making real lives better or not. That was kind of our key advantage was saying, if you work here, you will get to be on the cutting edge and also you'll be having a real impact in the world.
And that's kind of how I continue to think about this thing. We are not a research lab at Permanence. We are laser focused on taking all of this technology that exists and making lives better with it.
So we're not going to be the most famous for having super-high-impact publications. That's not the thing. The thing is, you will get to sit down with engineers and they will tell you that their life is miserable, and here's why their life is miserable.
And then you get to go back two weeks later and say, hey, look at this thing I built using the technology at Permanence. Does this improve your life? And they tell you it improves their life.
That's kind of the addictive loop that we provide. And that's not for everybody. What I would say is that the people on this team, who decided to be at ASAP and then followed me to Permanence, are the people who really love that work.
They love getting into the nitty gritty and producing that change and that value.
[Susan Hunt]
Well, I think the thing that's really interesting about the way that you go after a space is that you solve a specific problem. There's so much hype around AI. There's been a lot of AI failures in enterprise.
You actually came in, looked at a problem that could be solved and that's really going to be impactful, not just for enterprise but across the board with all companies. And you went after that. Can you just tell us a little bit about the product that you've developed and how some of your customers are using it?
[Joseph Hackman]
So I think it's really key, when you're developing any technology, to think about where you're going. And in my mind, the way you get where you're going is to build one complete aspect of your vision. You don't want to just narrate your vision to people.
What you want to do is say: you can actually touch it, you can feel it. It's a different mode of interacting with the software.
And I think that kind of sets us apart from the other tools in our space. So we do AI for software engineering. There are a lot of tools that do AI for software engineering.
And what you're seeing is that in the average tool, the way the human interacts with the tool is changing very quickly. People started out doing type-ahead, and now they're doing broader things, like having an agent running alongside you in your IDE.
I think the people who are doing it best are actually plugging directly into the workflows where the engineers already are, so probably GitHub. But we started from the very beginning saying that if you can do type-ahead in an IDE, that gets you basically 0% of the way there.
Our vision for the product was that you don't need to tell it to do anything. You have an AI agent that finds work and solves entire tasks to human quality, so that you as a human are not coaching the AI. You're not driving the AI.
You really shouldn't even be having to check the work. We've never shipped a bug. You should only be getting completely correct work.
And so rather than saying, we'll do everything a little bit, we'll say, hey, what is a subsection of the world where we can achieve this standard? And then we'll say, this is how the world should operate. Every year, we're going to add more and more and more and more things that we can do to this standard.
That's how we expand.
[Susan Hunt]
What do you think the top three value props are for an enterprise using Permanence? How would you describe that?
[Joseph Hackman]
So I think the top one is quality and what that unlocks for you. So right now, enterprises are kind of caught in this squeeze. Costs are going up and maintenance is a very, very, very high portion of your cost basis.
And so if you're in this space, you have a couple options. One, you can still do all of the maintenance work that you have to do. But this means that you don't have as much money left over to build delightful products and features.
Or you let quality slip and you keep building products and features. The second one actually kind of mortgages your business because you keep building more products and features. That means you have even more maintenance work to do.
And that maintenance work is not getting done. And so...
[Susan Hunt]
And that becomes a security risk a lot of times, correct?
[Joseph Hackman]
That can explode into a security risk.
[Susan Hunt]
Yeah.
[Joseph Hackman]
People like when the software that they use acts in the way that they expect it to. And they can trust it. And so you really can't put off maintenance forever.
And so what we're saying is engineers don't like doing that work, right? It's very easy as an engineering leader to say, yeah, let's just defer maintenance, right? Just keep deferring maintenance.
But eventually it'll be very hard for engineers to work in a code base that has a lot of deferred maintenance. And so what we say is: it's boring work. Engineers don't want to do it.
And the same reasons that engineers don't want to do it actually make it a better fit for AI. It's kind of samey. It's boring, menial work. That's actually more of a reason to have AI do it. And so by doing that work, we free up all of these resources for you to build new products and features, and actually accelerate that.
So not only do you have a lot more resources for products and features because that maintenance burden is paid down, those engineers also get to work a lot faster because they aren't constantly stubbing their toes on maintenance backlog items. So I think that's problem one.
Problem two that we solve really well is AI adoption. One of the things that's been really funny to hear from a lot of software executives is that these tools are sold as such a great thing for software engineers, and they keep hearing that in startup land people are having these great results, right?
People are able to build demos faster than ever. So why, in enterprise software land, aren't engineers clamoring to pick these things up?
And when you talk to the engineers, they're having a completely different experience, right? Yeah, you can build a demo faster than you ever could before. That adoption is great.
But trying to get an AI to solve a real problem in enterprise is way harder than just doing it yourself. It's like you have a super junior employee who really kind of doesn't understand anything, but also doesn't learn over time, right? With a human, you accept that, yeah, if you hire somebody and it's their first day on the job, it is going to be way harder to teach them to do a thing than to do it yourself.
But you're making an investment in the workforce of your company. With AI, that's not happening, right? Like it doesn't feel good for humans.
And so I think it's pretty obvious, once you actually dig in, why engineers aren't that excited to use a lot of AI tools. Some of them are pretty good. Type-ahead, I think, is better than it was before.
But we are not kind of achieving what we should be achieving in AI. And so how Permanence thinks about this differently is that we're not asking you to do anything, right? I don't think humans should be using AI, right?
AI should just be doing the work. That's it. If you have a human that's like overseeing it, you haven't achieved the complete solution yet.
[Susan Hunt]
Right.
[Joseph Hackman]
And so for us, a human doesn't interact with the Permanence AI coder until the work is completely done. And the AI coder says, hey, here's what I did. Here's why it's important.
Here's how I can prove that I'm right. And then a human just goes, yep. And so now humans are doing their same workflow.
It's the exact same workflow they would follow if they were reviewing a human's work. They don't change anything. And suddenly they've adopted AI and their performance metrics go through the roof, because it took them five minutes to accept work that would have taken a human eight hours, or ten if they were coaching an AI.
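To make that review-only workflow concrete, here is a minimal sketch of what such a loop could look like. The names and structure are hypothetical illustrations, not Permanence's actual product or APIs.

```python
# Hypothetical sketch of the "review-only" workflow described above: an agent
# finds a task, completes it end to end, and a human only approves finished work.
# All names here are illustrative assumptions, not Permanence's actual product or APIs.

from dataclasses import dataclass

@dataclass
class CompletedTask:
    summary: str    # what was changed
    rationale: str  # why it matters
    evidence: str   # how correctness is demonstrated (e.g. passing tests)
    diff: str       # the proposed code change

def agent_find_and_solve(backlog: list[str]) -> CompletedTask:
    """Stand-in for an agent that picks a backlog item and fully completes it."""
    task = backlog.pop(0)
    return CompletedTask(
        summary=f"Resolved: {task}",
        rationale="Pays down maintenance backlog / closes a known finding.",
        evidence="All existing tests pass; a new regression test was added.",
        diff="--- a/app.py\n+++ b/app.py\n(omitted)",
    )

def human_review(work: CompletedTask) -> bool:
    """The only human touchpoint: read the finished work and approve or reject it."""
    print(work.summary)
    print(work.rationale)
    print(work.evidence)
    return True  # reviewer says "yep"

if __name__ == "__main__":
    backlog = ["Upgrade a vulnerable dependency flagged by a security scanner"]
    work = agent_find_and_solve(backlog)
    if human_review(work):
        print("Approved after a short review instead of hours of hands-on coding.")
```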
[Susan Hunt]
Yeah. This seems like a logical segue: your technology being used after one of the security scanners runs. Are you seeing that collaboration happen a lot in the market?
[Joseph Hackman]
Yeah, it's kind of our bread and butter. What's great about security scanners is that they provide a ready-made list of tasks.
And we're pretty battle-hardened on improving security scan results, because everybody tends to use the same few security scanners, and the things those scanners find tend to be a fairly closed set.
[Susan Hunt]
Yeah.
[Joseph Hackman]
We've seen a large part of the security universe, and we know that the AI coder performs well on these tasks. That's part of how we maintain our track record when we ship.
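As one illustration of how scanner output becomes a ready-made task list, many security scanners can export findings in the SARIF interchange format. The sketch below assumes a local results.sarif file; it is not tied to any particular scanner or to Permanence's actual pipeline.

```python
# Minimal sketch: turn security-scanner findings into a task list.
# Assumes the scanner exported a SARIF file (a common interchange format many
# scanners support); illustrative only, not tied to any specific scanner or to
# Permanence's actual pipeline.

import json

def load_findings(path: str) -> list[dict]:
    """Read a SARIF file and flatten its results into simple task records."""
    with open(path, encoding="utf-8") as f:
        sarif = json.load(f)
    tasks = []
    for run in sarif.get("runs", []):
        for result in run.get("results", []):
            location = (result.get("locations") or [{}])[0]
            physical = location.get("physicalLocation", {})
            tasks.append({
                "rule": result.get("ruleId", "unknown-rule"),
                "message": result.get("message", {}).get("text", ""),
                "file": physical.get("artifactLocation", {}).get("uri", ""),
                "line": physical.get("region", {}).get("startLine"),
            })
    return tasks

if __name__ == "__main__":
    for task in load_findings("results.sarif"):
        print(f'{task["rule"]}: {task["file"]}:{task["line"]} - {task["message"]}')
```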
[Susan Hunt]
I think the team at Permanence is incredible. I think the product set you've developed is a really useful tool, a smart way to use AI that's going to bring a lot of efficiency and accuracy to enterprises and others. How do you see AI in the next five years?
I know your view's a little bit different than others. So can you just share a little bit about how you see it evolving over the next five years? Is it going to kill us all?
Or will we just find efficient ways to use it?
[Joseph Hackman]
I think it's pretty unlikely that it kills us all, which is good. It's very, very, very hard to guess. In the past, some of the guesses I've made about AI have been correct and some of them have not.
So for context setting, I don't organize my photos anymore. I haven't for about 15 years, a little bit over 15 years. Even at that time, I was reading the literature.
Image recognition was getting pretty good. And I realized that in a couple of years, it would be easy to build a tool that would determine what's every photo that has my wife in it. And that is exactly what happened, right?
It's come standard on every iPhone. You don't need to do anything as a customer to get this experience. What I did not expect, actually, was so much of the advancement in language models.
I mean, it was kind of my area. There was a lot of thought about what you use for training data. Either you use small, very domain-specific training data sets, and by 2019, even with more advanced architectures like transformers, we were starting to tap out on what you could do there. Or you train on everything you possibly can, and there are a lot of problems with training on potentially dishonest data or low-quality data that needed to get solved in order to make that possible.
And that's kind of what underpins GPT. And totally honestly, once they got that working, which was around GPT-2, I figured it was going to slow down, because you were already training on all the data you could get. You wouldn't really get much more from scale.
But it turned out they were not using enough parameters to actually capture that. And so GPT-3 and GPT-3.5 were like huge advancements.
[Susan Hunt]
Yes. It was actually really shocking how it went from being very, very slow over a 20-year period to the last two years, when it feels like everything changed in AI.
I know that's not exactly how it was, things were still moving before, but it felt really fast all of a sudden.
[Joseph Hackman]
Yeah, I thought we were more training-data-constrained than GPU-constrained. So, honestly, I don't really know what's going to happen. I don't have a super strong thesis.
I do think that a lot of the core problems in these models are going to stick around. So AIs have a thing called hallucination. They create something that isn't real.
This is showing up a lot across a ton of different tools. They cite legal cases, and they're making the cases up, right? They sound plausible, but they're not real. Nothing in the science right now is really changing in a way that will prevent that from happening.
The aspiration is with enough fine-tuning data, this will be attenuated. It will be decreased. But currently, there's no real belief, at least in my mind, that there's something on the horizon where suddenly this is going to change.
I think we will probably get substantially stronger models that can solve harder tasks. I think the memory problem, how a model's previous reasoning gets compressed so that it can reason over a larger token horizon or a larger time horizon, is going to get substantially better. Agents will do fewer really nonsensical things, but I don't know that there is going to be as huge a shift in the actual underlying science of AI in the next five years as there was in the past five.
But the world is behind. I do think the experience that you have as a human being operating in the real world is going to change really dramatically as people start working these models into everything that they do. So that's going to change.
And I think what's going to enable that is actually the science slowing down a little bit. Because once the science slows down, it's worth building experiences and building layers on top of it that stick around. And I think that's ultimately what's going to happen.
Every company is going to be building experiences on top of these models, where the model functions more as a primitive than as the thing in the driver's seat. AI is already touching a lot of our lives, but it will soon be touching much more of them. And I can't fully predict what that's going to look like.
[Susan Hunt]
Just one last question: is there any advice you can give enterprises to help them decipher the AI hype from what is actually real? Is there anything you can say to help them cut through the noise? Because there's so much noise in the industry.
I always tell them: look at who is actually running the company. Who are the people they have within their company? Are they people that understand AI the way the Permanence people do?
If the answer to those questions is no, I would take a step back and maybe work with a company that really understands things. What is your perspective on that one?
[Joseph Hackman]
I think that's fair. I would expect, if you're going to pick a partner for this kind of AI transformation or to buy an AI product from, they should either know AI really well, or they should know some specific domain really well. Because adapting AI to individual domains is, I think, where the world is going.
Realistically, the best are the people who can do both. So what you want to see is that there are human beings that you could reasonably hold to account about both the AI side and the domain-specific side. And so if you go and you have a bad experience, and none of these people are AI experts or domain experts, and they're like, what do you expect?
We're not AI experts. We're not domain experts. It's kind of on you.
[Susan Hunt]
Yeah.
[Joseph Hackman]
And I, unfortunately, am seeing that a lot. It's, in a lot of ways, easier than ever to buy a .AI domain name and get venture capital. And that is not necessarily tied to execution ability.
[Susan Hunt]
Agreed.
[Joseph Hackman]
What I would say, in general, from kind of a black box perspective, where you're not looking at the company, is to look at the quality. I think that's kind of the primary differentiator between people who really know what they're doing and people who don't. Because it's easier than it has ever been before to build a demo.
But it is harder than ever before to actually control an AI system. And I think quality is kind of the way that you tease out this actual difficulty. And this is also what makes me so proud about our quality record.
[Susan Hunt]
Yes. Thank you, Joseph. Thank you for being on Stare Down the Bull today.
Thank you so much. It was a great conversation. If you need more information, you can visit permanence.ai to contact Joseph or get more details on the company. Thank you.
[Joseph Hackman]
Thank you so much.