Agent Wars: The Hype, Hope, and Hidden Risks with Nate B. Jones
In this episode of AI Explained, we are joined by Nate B. Jones, AI strategist.
He explores high-level advice for organizations, technical ideas such as prompting and application architecture, and the current state of agent adoption. Key topics include challenges in building production-ready agents, architectural decisions, and ensuring ROI from these agents.
[00:00:00]
[00:00:06] Josh Rubin: Welcome and thank you for joining us here today on, uh, AI Explained from, uh, Fiddler AI. Um, I'm Josh Rubin. I'm the Head of AI Science at Fiddler. Um, and I'm gonna be the host today. So, uh, I am super pumped to have, uh, Nate B Jones join us. I've been enjoying his content for, for several months.
[00:00:26] Josh Rubin: And I think he's just, uh, incredibly insightful. I think he's one of the most thoughtful voices today on how enterprises are navigating agentic AI. Um, his stuff ranges from, you know, high level advice for organizations to some really insightful technical ideas about things like prompting and, um, you know, agentic, uh, application architecture.
[00:00:46] Josh Rubin: So just, just thrilled to be able to, um, uh, have him here and bounce some questions off of him. Uh, topic today is, um, uh, agent wars, uh, the hype, the hope, the hidden risks. Um, we'll talk about how, um, agent adoption actually stands, how architectural decisions are playing out, um, what it takes to build agents that hold up in production, how your companies can get value out of that.
[00:01:13] Josh Rubin: Um, so we've got about 35 minutes, uh, just to deep dive and riff with Nate around some of these topics. There's 25 minutes after that for your live questions. Um, so go ahead and, uh, put those in the chat somewhere. There's a box where you can enter your questions and we'll get to those at the end. Um, we're also gonna record and send the session to all the attendees.
[00:01:31] Josh Rubin: Um, so, uh, I don't know, let's, let's kick things off. Um, so may maybe we just start out by talking about, uh, you know, like what. What are agents, what is, what is an agentic application? Anyway, a friend of mine wants to know, um, how do, how do you think about it?
[00:01:51] Nate Jones: I, I, I'll give you the usual sort of one sentence answer and you and I both know we could really open that box up and go for two and a half hours. But, uh, I usually say an agent is a large language model, tools plus guidance. And if you have those three things together, you have the core ingredients for an agent.
[00:02:12] Josh Rubin: That's great. I like, I like the succinctness there. Um, I've heard folks, uh, there was one, uh, technical contributor from Anthropic that I heard recently who, you know, uh, he, his take was, uh, looping was important, like he said. Uh, and I, and I presume he also means sort of conditional branching, like maybe it doesn't always do the same thing every time.
[00:02:33] Josh Rubin: Um, I thought that was.
[00:02:35] Nate Jones: I think that's one of the really complex things about agents, right, is that depending on the way you've architected the system, you may be in a position where you want the consistency, but you want the consistency within a policy guide set. You may want deliberate creativity, where you, you don't want the consistency.
[00:02:55] Nate Jones: You want a creative response from the agent because the agent's job is to come up with blog post ideas, right? I don't know, and that's part of what makes it hard to talk about.
[00:03:04] Josh Rubin: Do, do you think that, uh, like, reasoning is an essential component also? I hate to, like, take your, your succinct version and throw all these other pieces in it, but it seems like a lot of these applications involve some sort of planning or reasoning. Now I'm second-guessing myself, 'cause I'm thinking through, uh, customer examples.
[00:03:23] Josh Rubin: It, it does seem like a kind of a broad, uh, a broad scope of things that can happen.
[00:03:29] Nate Jones: I think that's implicit in tool use. Like you would need some kind of inference compute to do tool use,
[00:03:36] Josh Rubin: Okay.
[00:03:36] Nate Jones: reliable.
[00:03:37] Josh Rubin: Yeah, I think that's fair. Uh, so I don't know what, so, you know what, what, what are you hearing? Like, uh, where are we in terms of organizations adopting these things today? Like, are you seeing the whole spectrum or.
[00:03:57] Nate Jones: That's a good question. I, I feel like if I'm being really honest, we are in a little bit of, you know, the, the famous adoption curve. We're in a little bit of the trough of disillusionment with agents. When I talk to companies right now. Um. The hype was really big. I know Jensen Huang started everybody off in 2025 with this big speech.
[00:04:17] Nate Jones: This is the year of AI agents. Uh, venture capital firms have been big on this being the year of AI agents. We've had startups launching agents left and right. The last major agent launch I noticed was this morning; Lovable launched agent mode in, uh, their system. And so it's not that we're short of software launches. It's not that we're short of hype for it. It's that successful implementation of agents takes a tremendous amount of skill and work in current enterprise workflows. It's not plug and play yet, um, unless it's a very simple use case. And then you, you know, there, there are some plug and play tools for that. But that S-curve where, like, you see how easeful it is to get into the idea, you see how fun it looks in the, in the, in the video demo that you see on LinkedIn or whatever, and then you hit the S-curve of, oh wow, this is really hard. Uh, this is not easy at all. I, I think that's where a lot of people are at right now, and they don't necessarily have the tools and they don't necessarily have the skill sets to know how you get through that S-curve to the point where you actually have mature adoption. And the organizations that are able to get there? Well, they're actually realizing ROI, and they're the reason why everybody else is chasing this. But it's not easy right now.
[00:05:40] Josh Rubin: Interesting, interesting. So I'm, I'm looking right now at our, uh, so we just rolled out a poll and I guess everybody out there has, has, has voted on it, but, uh, you know, where is your team in the agentic adoption journey? So there's a lot of just exploring, that's the dominant answer. And there's some prototyping and experimenting.
[00:05:54] Josh Rubin: So this isn't, isn't super surprising. We have, uh, you know, 6% are running in production. Um, I I wonder if you feel like, you know, I think, you know, one kind of failure mode is assuming everything is vibe coding and that just works. 'cause I think in a way we've all been dazzled by how dynamic generative AI is when you sit down with ChatGPT and have a conversation,
[00:06:17] Nate Jones: Yeah.
[00:06:17] Josh Rubin: um, you know how much of that like.
[00:06:21] Josh Rubin: The failure mode is, uh, sort of unrealistic expectation, versus just lack of experience across everybody with building applications like this, versus projects that are not fully thought out or specced out well, in terms of what kind of ROI you're looking for and what the application's supposed to ultimately be and do, like how to, how to measure its success versus not having that. Where, where do you think? Is it all of those things?
[00:06:54] Nate Jones: You know what's interesting is I kind of go back to our own mental models of work and creativity. Pre-AI, the blank page is notoriously a really hard place to be if you're a writer,
[00:07:04] Josh Rubin: Mm-hmm.
[00:07:05] Nate Jones: a blank whiteboard is a hard place to be if you're an engineer. Um, you have to architect the whole system from scratch. If, if you don't have a blank whiteboard as an engineer, that's sometimes even worse, 'cause, like, I've been in the position where you have an incredibly outdated stack and you, like, fill the whiteboard with it, and now you have to add this new feature, and now what do you do? Right? And again, you're sort of facing this blank space where you have to architect something.
[00:07:30] Josh Rubin: Yeah.
[00:07:30] Nate Jones: What AI does is it reverses the complexity. It's so easy to get started now. The blank page is not a problem anymore. It will give you an idea, and people think because that was the hard part before, the rest of it will be easy. And in fact, it is still quite difficult to get to done. And that's part of why I am bullish on the value of technical skills in the AI age: that as much as I see tremendous progress on AI intelligence solving some of that initial get-work-started piece, intelligence helping us to come up with wider ranges of ideas, helping us to do some of our data transformations more broadly in ways we, we just didn't have time for before.
[00:08:18] Nate Jones: You know, pulling out contracts, technical details and giving them to the engineering team, things like that. We, we, I am not seeing the intelligence gains translate into reliably getting stuff done as easily as we get started. That is a place that seems like a wicked problem. Yeah.
[00:08:38] Josh Rubin: Yeah. Yeah. Yeah. Um, I think that's a really, really thoughtful point. I, I like the idea of really inverting the whiteboard, right? Like, I think the thing that we see with generative AI is that it kind of fills whatever glass you put it in, unlike staring at the empty glass in the old days of software engineering where you had to put every piece in there, you kind of, you know, you, you give it a container and it will do its best to fill the container
[00:09:04] Nate Jones: Yep.
[00:09:04] Josh Rubin: unless you figure out how to properly constrain the system to do exactly the right things.
[00:09:09] Josh Rubin: We were, we were joking the other day. I, we just had our hackathon two weeks ago; we do this kind of, like, once, once a quarter. Um, and, uh, you know, I was talking to one of my colleagues and he was talking about how, uh, you know, basically he had, you know, messed up in the implementation and forgotten to wire the, you know, the LLM to the data source.
[00:09:29] Josh Rubin: So like. It was a full software failure where no information was going from the, the database into the LLM. Um, you know, and the LLM was just riffing with it. Like it was, uh, you know, from, you know, unless you knew the topics it was supposed to be talking, you know, talking about, um, you know, it was happy to just kind of fill the gaps and pretend like it was getting information and make up answers as though it had received information.
[00:09:50] Josh Rubin: You know, there was no, uh, you know, no exception thrown anywhere. Uh, you know, the software was correct, uh, but you know, the, uh. As we were saying, like the, the agent kind of just, just filled in the space with what, what it thought it was supposed to do. Um, you know, and without kind of the right guardrails there or the right constraints on how the agent is supposed to behave, um, you know, it could be some time before you realize, I mean, that was kind of an ex uh, um, egregious kind of a problem.
[00:10:20] Josh Rubin: It was sort of ultimately pretty easy to catch. But, but you could imagine for a system that was complicated, that had many components, many data sources. You know, things that would have typically caused, uh, an error in, uh, a traditional software workflow. Um, you know, it might just look like a system that's un, underperforming, or occasionally, in some weird corner cases, it produces some sort of answer or takes an action that's, uh, you know, not at all intended.
[00:10:48] Josh Rubin: Um, so yeah, I think that's sort of scary and really emphasizes this kind of headspace change we need to make. Um, in order to get things working. Um,
[00:11:00] Nate Jones: I, I think one of the biggest questions in business right now, that that's a human question, is asking ourselves, where do we need AI to infer and where do we need AI to know? Those are two very different things, and we're not very good right now at even instructing AI to follow one of those two pathways,
[00:11:21] Josh Rubin: yeah.
[00:11:21] Nate Jones: let alone architecting systems that do inference in particular contexts or expect knowledge in other contexts.
[00:11:31] Josh Rubin: Yeah. Yeah, totally. It looks like we have a request from out there that it sounds like your voice is a little bit low. Uh, I don't know if,
[00:11:39] Nate Jones: is a little bit low. Do I, do I get close to the mic? Is that working better?
[00:11:43] Josh Rubin: I think that's the request in the chat.
[00:11:46] Nate Jones: Happy to get closer to the mic.
[00:11:48] Josh Rubin: Love it. Okay. Awesome. So. I don't know. You know, there are a lot of different, as we've said, right? There's a lot of different, um, architectural approaches. Lots of different ways you can build these things. Like say I'm an organization, I'm an enterprise, and I'm trying to solve some sort of problem. Like, you know, what, what's the kinda list of tools and how do I start thinking about getting the right experience?
[00:12:12] Josh Rubin: Even, like, like, what's, what's, what's the on-ramp look like to you? Like, what's the, if you had to do a, a Nate Jones kind of, um, simple recipe for success starting from greenfield, what does, what does, what, what does that look like for you?
[00:12:26] Nate Jones: It starts with problem framing, I think. I think so much of where agents tend to go wrong is you. You do have to give them that guidance. You have to give them that constraint and that clarity. But if the business problem doesn't have constraints and clarity, you're not gonna get that for the agent.
[00:12:40] Nate Jones: That's not happening. And so I challenge folks that I talk with to ask themselves if they really know what they want solved, and if it does get solved, is it going to be worth the effort it will take to install an agent? Because agents generate disproportionate payoff, but they require disproportionate investment at the moment to get to done. And so especially, and, and that disproportionality is bigger for organizations that are new to agents, which so many are these days. And so if you've never done one before, you're learning enough about AI to build an agent at the same time as you're building your first agent. And you have to factor all of that into the ROI calculation.
[00:13:26] Nate Jones: And so I say, like, is this worth it to you? Do you have the impact assessment where you know there's a 10x yield on this that would give you real value? And if you do, that's great, and if you constrain the business problem, that's a step forward. And then we say, or what I, what I suggest, is you look at the data piece first; let's, let's, like, leave AI to the next step. You have to understand the kind of data and business operation that you're trying to run here. What are the business rules that you're using? How is the data currently encoded? Is this unstructured data? Is this structured data? What does it look like? How does it move through the business? What are the decisions that are made against the data? Tell me the story of the data. If you really understand that, a lot of the architectural decisions fall out of that data story. You can then say, well, semantic schemas with RAG would work well here. Or, wow, that's a really terrible idea, you don't wanna semantically associate with this because of the kinds of questions you're expecting to ask. People often start with that architectural question too early,
[00:14:27] Josh Rubin: Yeah.
[00:14:27] Nate Jones: and they say, well, should we use RAG? And I say, I don't know. What's your data? Tell me your, tell me your data story, and then we'll get to, like, whether you should use RAG and, and how an AI agent interacts with that RAG, et cetera, et cetera.
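A toy illustration of letting the data story drive the architecture instead of starting from "should we use RAG": structured questions go to a deterministic query path, unstructured ones to retrieval plus generation. The keyword routing heuristic and the `run_sql`, `vector_search`, and `generate` callables are all hypothetical; a real system would route on the actual shape of its data, not on keywords.

```python
# Toy router: the "data story" decides the path, not the framework.

STRUCTURED_HINTS = ("how many", "total", "average", "count", "sum")

def answer(question, run_sql, vector_search, generate):
    q = question.lower()
    if any(hint in q for hint in STRUCTURED_HINTS):
        # Structured data with clear business rules: query it directly,
        # then let the model phrase the result.
        rows = run_sql(question)
        return generate(f"Summarize these rows for the user:\n{rows}\n\nQuestion: {question}")
    # Unstructured data: semantic retrieval plus generation (RAG).
    passages = vector_search(question, k=5)
    return generate(f"Context:\n{passages}\n\nQuestion: {question}")
```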
[00:14:39] Josh Rubin: Yeah. Yeah. That's super interesting. Um, do, do you, like, I think, I think it's interesting that we're at a point where, you know, basically no one has a lot of experience, right? Where sort of as an industry, um, if not a civilization, we're sort of, uh, learning a new way of thinking about interacting with machines. Um.
[00:15:02] Josh Rubin: If you, as a, from, from the kind of, um, enterprise perspective, like, I don't know, you, you may have mentioned last week when we chatted kind of a crawl, walk, run strategy. If that's not, if that's not your perspective, I'm lifting it from somebody, somebody else, but
[00:15:17] Nate Jones: No,
[00:15:17] Josh Rubin: what
[00:15:18] Nate Jones: did.
[00:15:18] Josh Rubin: I like, like what do you think about like, is, does that de-risk some of those decisions also, is that another way to think about it?
[00:15:25] Josh Rubin: Like in terms of getting experience and getting real traction on implementation before you're overcommitted.
[00:15:32] Nate Jones: I think that that principle really does hold in the world of AI, and I think one, one of the pieces that maybe is a little different from the traditional crawl, walk, run of management theory is that because AI is so experiential, because you have to learn it for yourself in your organization, in your context to do it well, you have to really lean much harder on getting through the crawl phase to get anywhere else. It is harder to get through the crawl phase with AI, and the payoff is bigger if you can get to the walk and the run.
[00:16:03] Josh Rubin: Hmm.
[00:16:04] Nate Jones: And so I find in practice a lot of organizations underestimate how much challenge there is in properly doing that initial prototyping. They often, they often try to shortcut it. They often try to say, well, it's, it's really, it's gonna be fine.
[00:16:18] Nate Jones: We're just gonna start building the system. Um, and you can do that, but you know, your batting average goes down. It's a much higher risk proposition. And so I challenge people to think of it as less that you're bolting on to a product in your business and more a cultural change effort first, where you are adding an intelligence layer to the business. And you have to think about all the change components that go with that and
[00:16:45] Josh Rubin: Yeah.
[00:16:46] Nate Jones: business accordingly. And if you're committed at that level to change, you're much more likely to realize value.
[00:16:53] Josh Rubin: Yeah, I think that's a super point. Um, what about measurement? Well, so, so I guess, I guess my feeling there is that, you know, more than most technologies that I've experienced so far. Um,
[00:17:10] Josh Rubin: what do I mean to say here? Um, like, if you start with something simple and it's not solid, right? If you don't feel like this simple thing is robust, like, you really run into trouble when you start adding things on top of it. Like
[00:17:25] Nate Jones: That's right.
[00:17:25] Josh Rubin: you mentioned earlier, like it can get out of hand very fast.
[00:17:29] Josh Rubin: You know, some common, some of that is experience, some of it is, um, sort of, uh. You know, the, the, the structure of what you build, like the complexity level of the thing you build, I think some of it is sort of instrumentation and what tools, and we can get into this sort of stuff later about what it takes to run these things in production reliably and with the right telemetry in place.
[00:17:54] Josh Rubin: Um, but, uh, I think all of those things fold together, where if you don't have a lot of clarity around each of those things, um, it can get away from you pretty fast in ways that are, you know, it's not just an application that, you know, throws an error when somebody goes to the webpage. It's an application that can potentially do, you know, misleading things, tell you misleading things, take dangerous actions.
[00:18:17] Josh Rubin: Um, yeah. Um.
[00:18:21] Nate Jones: AI is an accelerant, and so if you make bad initial decisions, they run faster.
[00:18:26] Josh Rubin: Do you, I dunno if you wanna talk a little bit more about some of like the, uh, you've, you've mentioned a couple of patterns in the past that are, um, you know, sort of multi-agent systems versus kind of single agent with a lot of tools and a, a little more constraint. I don't know if you wanna talk about like, the difficulty trade off.
[00:18:41] Josh Rubin: I, I, I don't know if I'm, I'm, I'm putting the cart in front of the horse, but I think that's an interesting, that's kind of, like, you know, sort of the level two of the, what we were just talking about with, you know, kind of first steps. Um, I, I do think that oftentimes, you know, people jump to, to sort of trying to use all the tools in the box to solve the problem before getting the simple ones just, just crushed.
[00:19:04] Nate Jones: Yes, I, I, I have actually heard people who have never done agents before ask, you know, tell me how to build my multi-agent system, and I, well, it's a big question, right? And like, it feels like it's jumping, uh, and putting the cart in front of the horse. I think when I get asked the question about what kind of agent architecture makes sense, I walk back to the problem, and I, I haven't seen this articulated a lot of other places, but, but I walk back and I say: how hard is this problem and how token fungible is the problem? So, so is this a problem where throwing more tokens at the problem linearly makes it more likely that the problem will be solved? Is it a business problem where you know that if you could get, you know, a hundred thousand tokens of thinking time on it, you absolutely, with 95% confidence, are gonna get to a correct answer every time? Or is it a situation where it doesn't really respond to that? And whatever you're gonna get, you're gonna get, and you have to deal with it. And so when, when you separate the problems out like that, and people sometimes have to actually just practice and try and see, and sometimes they use ChatGPT to do it.
[00:20:15] Nate Jones: Sometimes they do other things. Sometimes they talk to an engineer and ask, is this token fungible? But, but once you have a little bit of a sense of, like, how likely it is that throwing more AI at it will help, you can then decide what kind of architecture makes sense to get you those tokens. And, and the reason I start to think that way is, um, I just, you know how you sometimes read white papers and they just sort of live rent free in your head for a bit because there's some, like, resonance to it that sticks with you? One of the ones that did that for me is Anthropic's writeup of using multi-agent systems. Um, and this was in the middle of the, sort of the clap back they did to Cognition. Because Cognition released a white paper basically saying multi-agent systems are unreliable, they're brittle, they don't last. Well, that's why we use one agent for Devin. And like I, I, I, I don't know who runs the PR at Anthropic, but someone decided they were gonna clap back to that, uh, and they almost immediately published a sharp white paper that said, this is how we do multi-agent systems at Anthropic, and this is why they work. And the thing that resonates and lives rent free in my head about their white paper is not all the details about architecting the agents.
[00:21:27] Nate Jones: It's the simple assessment that they did where they said, what is it about multi-agent systems that helps us solve problems? And what they realized is multi-agent systems are proxies for spending more tokens on a problem. And they explained roughly 80, 90% of the value of the system as a function of multiple agents spending more tokens on the problem. And so that has just been resonating for me, and I've been thinking about it a lot, and then I now look at it and I'm like, what are the problems where spending more tokens matters? And in those cases it may be worth it to architect a multi-agent system. Otherwise you want to default to something simpler.
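A rough sketch of that idea, treating a multi-agent fan-out as a way to spend more tokens on a token-fungible problem and then reconcile the attempts. `ask` stands in for a single agent call (prompt in, text out); the worker count and the prompts are arbitrary illustrations, not Anthropic's architecture.

```python
# If a problem is "token fungible" (more thinking reliably helps), a multi-agent
# fan-out is mostly a proxy for spending more tokens in parallel, then merging.

from concurrent.futures import ThreadPoolExecutor

def fan_out(ask, problem, n_workers=4):
    prompts = [
        f"(Worker {i}) Solve independently, show your reasoning:\n{problem}"
        for i in range(n_workers)
    ]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        drafts = list(pool.map(ask, prompts))      # spend tokens in parallel
    merged = "\n\n---\n\n".join(drafts)
    # An orchestrator pass spends still more tokens reconciling the drafts.
    return ask(f"Here are {n_workers} independent attempts:\n{merged}\n\n"
               "Reconcile them into one best answer, noting disagreements.")
```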
[00:22:07] Josh Rubin: I wonder how, um, that, that's super, super interesting. I, I, I, I wonder how, um, you know, is, is breaking the system up into, into multi-agent, does that help us as designers, um, more properly partition the problems so that we can spend those tokens more efficiently? Like, is, is that a problem? Are we, are we solving a human problem by architecting it that way versus
[00:22:39] Josh Rubin: Feeding in some of those, you know, things that you might try to do in parallel, uh, into a prompt for a single agent or some other kind of mechanism for pulling in prompt and context like I'm, I'm, I don't know if you have a, have a feel for that, but
[00:22:55] Nate Jones: Yeah,
[00:22:55] Josh Rubin: I'm getting at.
[00:22:56] Nate Jones: That's a really good question, actually. I'm so glad you asked that. Um, this sort of gets back to, like, engineering principles and separation of concerns and how we think about architecting systems that humans can maintain. And I think that one thing that I haven't seen sort of talked about anywhere is this idea that these agentic systems are software. They're software we will have to maintain. We have to think about how we maintain that software over time. As much as we can talk about, you know, what I just said, where you wanna, like, architect the system so it solves these problems with more tokens, in practice I see more of what you talk about as a rationale for multi-agent systems.
[00:23:33] Nate Jones: I see people saying, well, I need the agent to go check the inventory, and then I need the agent that can be the master agent that formulates the response back to the customer. And then I need the agent that can go check for the refund policy and they need to be able to come back and report to the master agent too.
[00:23:48] Nate Jones: And so effectively, humans are articulating a separation of concerns that helps them through the flows.
[00:23:56] Josh Rubin: Yeah, it kind of seems like that, right? It's like we're trying to impose some sort of, uh, you know, not well, you know. Yeah. It seems, it seems like somehow we're, uh, I mean, I do wonder sometimes 'cause it's like, okay, we're gonna call this thing a separate, you know, a separate agent or a separate component in the system, but, but really it's maybe just a different prompt going to the same, the same model running on the backend or you know, another
[00:24:18] Nate Jones: we just can handle it better in our heads,
[00:24:20] Josh Rubin: Yeah.
[00:24:21] Nate Jones: or, and this is, this is the one caveat I tend to give when people do this. I say, that's great, I'm glad it works for you. Be open to the idea that it may not need to be an LLM.
[00:24:32] Josh Rubin: No, that's interesting. Yeah. I.
[00:24:33] Nate Jones: Because like an inventory check, we've solved that. We had inventory checks long before LLMs.
[00:24:38] Nate Jones: I don't think LLMs are the best tool for the job. Just do it the regular way.
[00:24:42] Josh Rubin: yeah. Please, please, if there are any parts of this problem that you can, that you can bolt on in a, in a traditional way, please, please do those. That way, unless, unless you're doing something out of the, the agent, I totally, totally feel that. What, what about build versus buy? Where are we in terms of, um, you know, do you think we are in terms of what solutions are just.
[00:25:03] Josh Rubin: There are products now that, like, some of the RAG stuff, there are, you know, companies that you can just hand your database, you know, your customer support database or whatever, and they will digest it and, and, uh, um, index it and, you know, make it available to some commodity LLM, and then it's just a, um, a hosted solution.
[00:25:23] Josh Rubin: Like where, which problems do you think are, are already pretty solved by specialists?
[00:25:30] Nate Jones: That's a complicated question. I, I kind of want to give you a split answer. I am very, very bullish on buy for tools that enable the 40-some million developers that we have across the globe to build AI systems in their companies.
[00:25:46] Josh Rubin: Hmm.
[00:25:47] Nate Jones: I think those are gonna do very well. I, the classic example, of course, is Cursor. Cursor for this, Cursor for that is the whole story of YC this year. But the idea that developers need to essentially be reskilled and reequipped at scale is monetizable. Like, I think we are looking at a 10x increase in software costs per developer that people will happily pay, and there's a massive opportunity on the table there.
[00:26:13] Josh Rubin: Yeah.
[00:26:14] Nate Jones: I am much less bullish on finished tools that you can buy as a company, because I find if you're in the agent space, you have really complex business context that you're trying to process. That's why you want the LLM. That's why you see the value. And that's really hard to just stamp out. And, and I'm aware that some of the value in AI-powered SaaS and services is that you can extend software in ways that I was taught to never do as a PM in the 2010s, where it's like, no, you got it.
[00:26:47] Nate Jones: This is what you buy, right? We're not gonna customize it for you. This is what you buy. PMs say no, right? That's what we were all taught. Now you can say yes. Now you can customize, now you can extend. And so I know that these footprints are edging out, that SaaS businesses are getting smarter about sort of customizing stuff.
[00:27:01] Nate Jones: They are getting better at domain expertise.
[00:27:03] Josh Rubin: Uh, yeah.
[00:27:04] Nate Jones: I think that there's a difference between AI-powered software and AI agents. AI agents are, I, I have yet to see a good domain vertical example with enterprise-level complexity where you can just say, here, have your AI agent, and just buy it soup to nuts against all of your data.
[00:27:27] Nate Jones: It doesn't matter what your initial data setup looks like, we'll make it work.
[00:27:31] Josh Rubin: That's super interesting. Do you think that is because of the differences between verticals? Like, is that what, like, what's, what's the, you know, my, my, the, the naive part of me says, like, you know, every, every company that, you know, may not have any business trying to engineer some large-scale gen AI application probably wants to have some internal tools that, I mean, I guess, you know, you, you do see things like Microsoft Copilot and all these internal things that do that, that, that scrape your Slack and help you bring context into, like, so, so for you, you think of those as being sort of AI-powered applications.
[00:28:09] Josh Rubin: You don't think about that as an agent.
[00:28:11] Nate Jones: Yeah, like if we're really building an agent, to me that feels like it needs to be something that solves a really meaningful business problem. I think you're right. I think the simpler agents are going to get commoditized out really, really fast. It scrapes the Slack and it delivers a report. That's done, right?
[00:28:27] Nate Jones: Yeah. I'm sure you can buy it off the shelf, but also your developer can code it in an afternoon, like
[00:28:31] Josh Rubin: Yep. Yep. Yep.
[00:28:32] Nate Jones: there's not gonna be a ton of meat on the bone there. But if you're building an agent that is designed to handle dispatch for your fleet of trucks, and the agent has to be aware of the weather in 15 different cities, and it has to be aware of the maintenance records of the trucks and the schedules of the drivers, that's not something, like, either you're buying the SaaS application designed to do that, and it's a lot of point and click and it's a lot of traditional software, or you're building an AI agent that helps you do that autonomously. But I don't think you're buying the agent for that.
[00:29:04] Josh Rubin: The agent isn't a product. An agent is a, is a, um, something that works under the hood of a.
[00:29:10] Nate Jones: It's almost like it's a new class of asset. It's an entity that you develop within the business. That previously, like it sort of straddles the line between software and employee.
[00:29:23] Josh Rubin: Yeah, I think that's really interesting. I think we, uh, tend to think about it like it's, you know, this bespoke internally built software solution stuff. And, and in a way, like it reminds me a little bit more of like.
[00:29:40] Josh Rubin: I don't know.
[00:29:41] Josh Rubin: I wanna say like, you know, Excel macros or something like that. Like it's this kind of, when properly used, very powerful kind of, um, smart glue. Um.
[00:29:52] Nate Jones: Smart glue is such a great frame. Like I, um, was reading a note from Dan Shipper, who runs Every, this morning.
[00:29:59] Josh Rubin: Mm-hmm.
[00:30:00] Nate Jones: Um, and what he noted is that everyone in the business, and they have a small team, but like everyone in the business is going to be able to use Claude Code to commit and build features on their products. Even if they can't code, and now that's taken a fair bit of setup. They've had to set up the file structure for that. They've set up the system rules for that. They have an engineering architect or something on staff that helps to kind of keep everything roughly in order. Um, but that's an example of where you start to like weave an intelligence layer across the business and all of a sudden you unlock capabilities that would've been unthinkable two years ago.
[00:30:36] Josh Rubin: Yeah, yeah, yeah. I, I think I was listening to something on my commute yesterday. Uh, there was a, just a Claude Code tutorial, um, and it's super fascinating to me, the idea of Claude Code, and they were talking about using Claude Code basically in a sort of, um, fire-and-forget, uh, Bash, where it basically just becomes this, like, super-intelligent Linux command line tool.
[00:30:59] Josh Rubin: And you, you put away all of the like, interactive conversation about codebase and stuff, and you just say like, um, you know. Of all the thousands of, you know, new tools that run on our,
[00:31:11] Nate Jones: Yeah.
[00:31:12] Josh Rubin: our command prompts. Like this is like a magic tool that you just invoke and, uh, you know, you can have it do something totally pedestrian or something sophisticated, but it has the brains to, to, to plan and solve any sort of complicated problem that exists within, um, prompt terminal space with all of the, all of the.
[00:31:31] Josh Rubin: You know, rights and privileges that prompt and terminal has. And, you know, it sort of, sort of blew my mind, right? Like this is just a, you know, the first smart command line tool. Um, that, that can do sort of a, a very broad scope of things. Um.
[00:31:50] Nate Jones: Yeah, it's, it's remarkable and I think that they sort of misnamed it, calling it Claude Code, because it's good for so much more than code.
[00:31:57] Josh Rubin: Yeah. Yeah, it's for sure, like, uh, I don't know if it was a, a stealth move to call it Claude Code or if it's, uh, you know, I, I think it's gonna be important. Um, I'm, and I'm, it's fun. It's for sure fun to play with. Um, so, uh, I don't know. I think, you know, I'm, I, I'd be, I'm, we're an observability company, so I'd be remiss if I didn't ask you about.
[00:32:19] Josh Rubin: What your thoughts are on, you know, instrumentation of agentic applications. Like, what, what is necessary from your perspective in order to make sure that some generative AI solution is operating well and continues to do so on a kind of production basis.
[00:32:38] Nate Jones: I think that's one of the things that companies tend to underinvest in, to be honest with you, because traditionally with software, QA was that gate. You have the QA step prior to launch, and the, if you look at sort of the investment matrix, your 80/20 rule, the 80% is on making sure the software is right before you launch it, and the 20% is on sort of observing the software and making sure that bugs don't crop up in unforeseen ways, because you're making deterministic software. Not anymore. Now you're making probabilistic software. Effectively the AI agent will behave in ways you cannot predict. You have to, I was talking with a director at Microsoft a few months ago and he was observing that, like, it's entirely flipped how PMs do work, because PMs have to discover the capabilities of the tool.
[00:33:24] Nate Jones: They're not determining the capabilities of the tool.
[00:33:27] Josh Rubin: Yeah.
[00:33:28] Nate Jones: Um, and so when you think about it that way, it means your 80/20 rule flips, and you have to spend a little bit of time on making sure that the worst stuff isn't getting into production. But you have to spend a lot on making sure that you have ongoing evaluation, ongoing observation of what is going on with your AI agents in actual production.
[00:33:46] Nate Jones: And most people, like if I say, well, have you been sampling queries against your agent? Like do you have a stable of queries and you observe how they work and you have red lines and you have a regular update cadence to the prompt, you have versioning on your prompts. You make sure that your agents actually can be rolled back where necessary. They look at me like I'm speaking Greek, like, no, no, no. This is really important. Like just like we had to invent a language and set of processes for QA. We have to do that for agents. We have to take it seriously that we are putting these into production. We have to care about them.
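A bare-bones sketch of that practice: a stable of sampled queries, versioned prompts, a red-line threshold, and a rollback signal. `run_agent` and `score` (a grader returning a value between 0 and 1) are hypothetical, and the prompts, queries, and threshold are illustrative only.

```python
# Tiny regression harness: run a fixed query set against a versioned prompt,
# compare the pass rate to a red line, and flag whether to roll back.

PROMPT_VERSIONS = {
    "v1": "You are a support agent. Answer from the provided policy documents only.",
    "v2": "You are a support agent. Cite the policy section for every claim.",
}

QUERY_SET = [
    {"query": "Can I return an opened item?", "must_mention": "return policy"},
    {"query": "What is the refund window?",   "must_mention": "30"},
]

RED_LINE = 0.85  # minimum acceptable pass rate before a version ships or stays live

def evaluate(run_agent, score, prompt_version):
    prompt = PROMPT_VERSIONS[prompt_version]
    results = [score(run_agent(prompt, case["query"]), case) for case in QUERY_SET]
    pass_rate = sum(results) / len(results)
    return {"version": prompt_version, "pass_rate": pass_rate,
            "rollback": pass_rate < RED_LINE}
```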
[00:34:21] Josh Rubin: Yeah. That, that's great. Yeah. I like the idea that this, this 80/20 rule is a, uh, being inverted is a really interesting point, that we're, you've now got this sort of, like, sigmoid-like onboarding before you get to the, um, you know, large-returns part of the, you know, at least while we're discovering what these do. That's, that's, that's super powerful.
[00:34:40] Josh Rubin: 'cause I think people are usually in the headspace that, you know, you can get a lot of value outta the thing you can do immediately. And, and we should all just plan for, you know, um, a low-rate-of-return kind of warmup period for this stuff, you know, at least until we know it better. And, and just 'cause your last
[00:34:58] Josh Rubin: comment raised so many things in my head. I think one of the things that really stuck out when I was listening to the Claude Code thing last night from Anthropic was, like, they were talking about it, like, in terms of discovering what it can do rather than building for it to do a thing, right? Like, like Claude Code is very much an explanation for, an, uh, uh, uh, an exploration for Anthropic.
[00:35:22] Nate Jones: Mm-hmm.
[00:35:23] Josh Rubin: much as it's a product or a, a set of features that they're designing, um,
[00:35:27] Nate Jones: right.
[00:35:27] Josh Rubin: like they don't even, you know, they don't even know all the things it's going to be able to do and, and how to make the most of that. And I think if Anthropic doesn't have their, you know, if that's what they're experiencing, I think we should all expect to have some, uh, some experience like that.
[00:35:43] Nate Jones: I, I think that gets at one of the core attributes of this age we're in that we don't discuss enough: these labs that have produced AI models are almost without exception research labs. They came from these, like, PhDs who were just trying to figure out machine learning problems and stumbled upon something remarkable, and now they've let it out into the world.
[00:36:07] Nate Jones: We're all using it, but they are discovering it as they go, just as we are. And one of the things that sort of keeps me humble or keeps me reflecting on the unpredictability of the future is that we, we really genuinely don't know how much magic there is left in this incredible innovation around reinforcement learning and transformer architectures.
[00:36:30] Nate Jones: We're still learning. We're still learning. So far, scaling laws seem to hold, but even if scaling laws hold, we genuinely don't know what jagged intelligence futures look like. Are we going to keep getting smarter on very specific verticals very rapidly, but then we'll have these weird glue work areas where we're not getting as smart, which seems to be what's happening in 2025? Or is it gonna start to even up very rapidly, where we'll hit some emergent point and suddenly things will start to even up and we'll get a, a smooth intelligence curve? I'm a little skeptical of that one, but it's a, it's a possibility. I have to be
[00:37:05] Josh Rubin: Yeah. Yeah, yeah. I think maybe it was one of your, your, um, recent posts where you talked a little bit about the possibility that. Um, you know, the fact that generative AI is so good at code is partially because it's made by engineers. And that that, you know, I think you were talking a little bit about the, um, you know, the announcements recently on the, uh,
[00:37:26] Nate Jones: the math Olympiad?
[00:37:27] Josh Rubin: the math, math, uh, from, from OpenAI that they had
[00:37:30] Nate Jones: Yes.
[00:37:30] Josh Rubin: sort of gold medaled and like,
[00:37:32] Nate Jones: Yeah.
[00:37:34] Josh Rubin: there is this really interesting hypothesis I guess, that, um, you know, if you can bring in domain experts from different domains and.
[00:37:41] Josh Rubin: Figure out, you know, I mean, it does help that we have a GitHub out there with so much code that is fertile, fertile training ground. But, um, you know, it'll be really interesting to find out if it's a jagged intelligence or a smooth one as you put it. Uh.
[00:37:58] Nate Jones: Yeah. No, and and I think that's actually one of the stories, like one of the lucky, the reasons I feel lucky living through this moment is that these stories will have endings. We will see, we will know whose bets were correct. We will know how these stories turn out because everyone is making date specific bets in the next couple of years.
[00:38:17] Josh Rubin: Yeah. Yeah. Yeah. Should we flip over and do, do some questions? It looks like we have a question on digital twins, which is a place I would love to take this conversation to anyway. Um, so, so let me, I'll, I'll read this. So, um, this is from Brad Daria out there. Can you speak to ROI with digital twins? Uh, I guess we'll have to introduce the concept a little bit, but, uh, you know, which sector has the lowest hanging fruit?
[00:38:40] Josh Rubin: Uh, quickest return. By sectors, I mean medical education, transport, et cetera.
[00:38:47] Nate Jones: Do we wanna introduce digital twins first and then get
[00:38:49] Josh Rubin: Yeah, I think we should probably talk a little bit about, like, Brad's clearly, clearly familiar with some of the stuff that you've been talking about recently. Um, I, yeah, why don't you go ahead first.
[00:38:58] Nate Jones: I mean, I, I think the simplest way to talk about digital twins is you have this base idea of AI agents, um, and typically we assume they do things, but what if instead we assume they model things? That's the fundamental difference. So the value is not in the execution of a task; the value is in the ability to model multiple timelines and to explore multiple options for a future. Uh, and so the, the question that comes up, like, I, I think what's really interesting to me is that we are talking about this now in a software context, but, like, I have managed and led PMs who come from, uh, advanced manufacturing contexts, and they've been talking about that for longer. The idea that, like, John Deere would have a digital twin for tooling in their factory is not particularly new. Uh, and so in that sense, I think some of the low-hanging fruit has been harvested in the advanced manufacturing and robotics areas
[00:39:57] Josh Rubin: Yeah.
[00:39:57] Nate Jones: now. And so it's up to us to think about this concept now with LLMs involved. How can we start to model things that would previously not have been modelable? So in the era when you were building a digital twin for your locomotive at Burlington Northern Santa Fe railroads, which they also did, great, you didn't necessarily need an LLM to model that.
[00:40:17] Nate Jones: It was still machine learning. It's still AI, but it's a different kind of AI. Now you have LLMs, you can model other classes of problem. What can you model that's susceptible now? And so I think that in that sense, we are dramatically underinvested in problems that use unstructured data. LLMs are very good at unstructured data.
[00:40:37] Nate Jones: So look around sectors that have a lot of unstructured data and ask yourself, is there a way we could model different ways of attacking this problem?
[00:40:47] Josh Rubin: Yeah, that's super good. I think, you know, we were talking a little bit before we signed on officially to this, um, about, you know, expecting AI tools to solve a much larger domain of problems, you know, as, as much more flexible problem solvers. Um, and, and the drawback there is, you know, as the domain gets bigger, the space of inputs and outputs becomes exponentially larger.
[00:41:13] Josh Rubin: And, and, you know, in order to make sure things are truly robust. And I think this is the, this is a lesson that we're learning from, like, as you say, from robotics, like, where people have done a lot of domain generalization using simulation. I think we're just realizing now, um, how this is sort of our only opportunity to, um, explore this really large domain of possibilities.
[00:41:37] Josh Rubin: I can tell you from our own work at Fiddler, as we're developing our tools for observability around agentic systems, you know, traces and spans and aggregate, um, calculations on performance of components. Um, you know, we're building example code, um, to exercise all of this so we can see how diagnostics behaves, but, you know, to do it right, it's forcing us to generate synthetic inputs that, um,
[00:42:06] Josh Rubin: really widely explore the available space. And, and, and, you know, you try to find ways to throw it into domains that you didn't even necessarily think of as the engineer. Like, you're depending at some level on the creativity of, uh, the model. You give it some seeds, right? You say, like, uh, amp
[00:42:25] Nate Jones: up the temperature,
[00:42:26] Josh Rubin: That's right.
[00:42:26] Josh Rubin: Turn up the temperature. Imagine, imagine you're this thing in this situation and
[00:42:30] Nate Jones: right?
[00:42:31] Josh Rubin: with this other thing. Uh, go. Um, you know, and hopefully you've thrown it far enough out into unfamiliar space that you're exploring the domain of things that as an engineer you wouldn't necessarily think to wrap a unit test around, or you wouldn't, or data that you wouldn't think to, you wouldn't have in hand to get labeled.
[00:42:52] Josh Rubin: Uh, to test with for a, for an eval. And I, I, that's turned out to be a really interesting dimension and to my taste when, and sorry to rant a little bit, but, um, you know, I think ultimately that kind of stress test ends up being part of, you know what, uh, again, is another one of these things that enterprises should be thinking about in terms of.
[00:43:13] Josh Rubin: You know, making sure that they don't end up in the news having, you know, issued somebody a, um, an airplane ticket for $2, or, you know, uh, given away a truck on a non-existent promotion for free. Um.
[00:43:27] Nate Jones: Yeah, and I think that people read the headline stories and they assume that stuff will be fixed deterministically in QA, 'cause they don't understand how AI agents work. And what you're describing, where you're generating synthetic data, you're throwing things at the model, you're simulating model responses, is much closer to what is actually needed to make sure that you hedge that risk.
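A sketch of that stress-testing loop: use a high-temperature generation call to invent scenarios an engineer would not hand-write, then replay them through the agent and keep the transcripts for evals or review. `generate`, `run_agent`, the seed prompt, and the temperature value are all hypothetical; a real harness would also validate the JSON it gets back.

```python
# Synthetic probing: turn up the temperature to explore the input space,
# then record how the agent responds so the transcripts can be evaluated.

import json

SEED = ("Invent {n} unusual but plausible customer requests for an airline "
        "support agent, including edge cases, odd phrasing, and attempts to "
        "get unintended discounts. Return a JSON list of strings.")

def synthetic_probe(generate, run_agent, n=20, temperature=1.2):
    raw = generate(SEED.format(n=n), temperature=temperature)  # high temperature on purpose
    scenarios = json.loads(raw)
    transcripts = []
    for scenario in scenarios:
        transcripts.append({"input": scenario, "output": run_agent(scenario)})
    return transcripts  # feed these to evals, human review, or observability tooling
```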
[00:43:49] Josh Rubin: Yeah, yeah, yeah. Um, so there's a question here about, uh, about companies dealing with, so this is from Lisa. My company has internalized models that we can't use, where we can't use open source tools. Any thoughts on how to work within a large company to deal with privacy hurdles? Do you, do you have any, any feelings about how to navigate that?
[00:44:12] Nate Jones: Yeah, I, the most popular variant of this is, I have to use Copilot, and I hear Copilot is terrible. What do I do?
[00:44:19] Josh Rubin: Mm.
[00:44:19] Nate Jones: I actually wrote a whole guide for that. So, like, if, if folks wanna check that out, they can. But the, the long and the short of it is people lean into big brand names on intelligence, and we forget that in the absence of those big brand names, in the absence of ChatGPT, for example, we would be over the moon about a product like Copilot. We would be so excited that it exists.
[00:44:42] Josh Rubin: Right. Yeah.
[00:44:43] Nate Jones: And so instead of sort of focusing on what you miss out on and what you, you don't have, think about it as, most people underutilize the intelligence they have on the table anyway. And a well-executed, you know, sort of high-utility team with Copilot is going to beat a badly executed install of ChatGPT all day. So it's not about raw intelligence, it's about whether your team has figured out how to move from sort of individual productivity silos to working across multiple teams or working within the team more effectively. And so, as an example, a lot of people think about it like, the classic example is the CEO writes and says, Hey, today we've launched Copilot.
[00:45:27] Nate Jones: Copilot is a great tool for enhancing your productivity. To get started, try writing an email with Copilot. Like, I've seen that happen over and over again. And just that simple example can be flipped on its head and you can think about it differently. And instead you could say, we are going to have a team conversation as a sales team about who is best at writing
[00:45:48] Nate Jones: nurture emails that follow up on deals that are stuck. Jenny is fantastic at this, actually. Jenny, can you show a few of your templates? And then you start to pull out your Copilots around the table. You start to feed those templates in. You start to compare and contrast with your own work. You start to learn, your Copilot starts to learn the styling that the team wants, and now you're looking at team-level productivity gains, where you're actually lifting up the whole team, making the sort of the follow-up from sales more consistent. And you have those kinds of opportunities all across the business. And so I don't think people are as stuck as they think. I think documentation has been a gap, and that's part of why I wrote up what I did, because I feel like a lot of people just say, well, I'll wash my hands, right? Like, you're using Copilot.
[00:46:34] Nate Jones: There's nothing I can do. That's just not helpful to anybody.
[00:46:37] Josh Rubin: Do you think measurement helps in that problem? Like, if I, I, this goes back to when we were talking about, like, you know, how do you ensure that there's ROI. Do you, you, you think sort of building in metrics to quantify lift of these systems gets people out of the headspace of, um, you know, we're very limited in what we can do because of our security posture?
[00:46:57] Josh Rubin: Like, if you were to just take the tools that,
[00:46:58] Nate Jones: Mm-hmm.
[00:46:59] Josh Rubin: know, matched whatever your, uh, your CISO said was, um, sort of reasonable and secure for your org and your security posture, posture, like, I don't know, is that an interesting question? Like, but.
[00:47:12] Nate Jones: I think what I have observed is most meaningful is if organizations at the team level set specific goals that matter to them as a team to get better at, and then start to track to that. Because what I've found is if you try and do something across the org as a whole, you usually settle out to hours saved. And hours saved is a super squidgy metric at the org level. Like, I've talked to people who've done multi-thousand-person installs of a chatbot, and they do the little survey and everyone reports hours saved. In theory, they're saving 6,000 hours a week, an hour per person. Where is that time going? Nobody can say. Is, is it going to coffee, is it going to other stuff that's higher value? Nobody knows. And so then you end up with, like, yeah, we can measure it, but we have no idea what the measurement means. So I think that having some team-level metrics increases ownership and skin in the game and makes it much more useful.
[00:48:12] Josh Rubin: Gotcha. Gotcha. Um, let me, here, here's, here's, I think this is sort of an interesting question. Um, so there's, there's one from a while ago, from 15 minutes ago, um, from Philip, that was, you know, how would you invest to implement a next step for an organization developing, like, a, a sort of Java product, a legacy database information RIE Pro, you provide some metrics, um, run by a team of 10 devs.
[00:48:36] Josh Rubin: Are they currently using some limited AI models, loose optimization experiments, but not really thinking about RAG, agents, LangChain? He's asking how much resources you would appoint, but I, I think there's a question here about what the role of generative AI is in interfacing with more legacy systems.
[00:48:56] Josh Rubin: Like, where, how much, how much of what you see is, is, is greenfield versus, like, here's something that we already have, and how do, how do we, how do we interoperate with it in the most efficient, effective way?
[00:49:10] Nate Jones: No, there's, there's a lot of really brownfield opportunities. Uh, and I actually think sort of that there's a lot of, uh, opportunity there, but it's not easy to uncover and that's why there's margin in the, in those businesses that can figure out how to do that.
[00:49:23] Josh Rubin: Yeah.
[00:49:24] Nate Jones: The most successful approaches I've seen look at the problem first as a talent problem, and second as a technical problem.
[00:49:35] Josh Rubin: Hmm.
[00:49:35] Nate Jones: if your talent on the table doesn't know how to architect an AI system, you are unlikely in a brownfield environment to negotiate all the complexities and get to a really successful rollout, a high-ROI-driven project. It's not impossible, it's just lower probability. And so I would say you don't need to replace the team, but you need at least one person that you can bring in who really knows what they're doing with AI engineering, who really knows what they're doing with architecting and building systems, and you bring them in. Then they become the seed of a DNA change of the talent upskilling on AI.
[00:50:18] Josh Rubin: Hmm.
[00:50:18] Nate Jones: first, your first goal is just to get everyone to a level of comfort with AI engineering where you don't have to go back and learn it over again. And so, as much as you want to get started on the project, the project will work better if your talent upskills a little bit first.
[00:50:34] Josh Rubin: Gotcha.
[00:50:35] Nate Jones: I would allocate a little bit of time there, and then once you have a talent base that works, I think you can approach the problem again and say, okay, now that we all have some fluency here, as we look at this problem with a fresh lens, you have the classic software engineering questions, like, does it make sense to refactor this entirely? Is there a, uh, sort of piece of the data and business operation that we can bite off in silo and use, like, as an AI test bed? And those are gonna be unique questions. Like, nobody can answer that until they look at the particular software stack that you have.
[00:51:11] Josh Rubin: Yeah.
[00:51:12] Nate Jones: They're classical engineering questions with a layer of AI that the team can't answer fluently without that AI understanding.
[00:51:18] Josh Rubin: Yeah, I think that's great. Yeah. I, I love this as a, it's sort of both an engineering problem and a sort of humans learning problem. The, the, the dimension, like figuring out what's possible for all of us. Like, it's like, uh, you know, some sort of new magic that's been uncovered and we, we know it's good for some things and there's this whole space of how do we get better, uh, at, at working with it.
[00:51:43] Josh Rubin: Um, so it seems like there's a lot of engagement in our, in our chat thread. Uh, we'd love to hear from you guys if, you know, we should try to twist Nate's arm into like doing some sort of follow up or something like that. This is certainly a fun conversation for me. Um,
[00:51:57] Nate Jones: it's been fun.
[00:51:58] Josh Rubin: so do, do, do let us know if, uh, you guys want to hear, hear more from Nate in the future.
[00:52:03] Josh Rubin: Um, let's see. So let, there was a question here about standards. Um, which is, and I think we've all experienced this, like we're just in this sort of real-time kind of hype field where, uh, you know, there's new tools and frameworks coming out every week. Um, you know, someone was asking specifically about MCP and A2A, like, is your sense that we're gonna converge on some set of standards for specific things, or do we end up
[00:52:34] Josh Rubin: In this very heterogeneous environment for a long time, and this is a real pain point for us at Fiddler in that, um, you know, we're trying to develop tools that interoperate with the most commonly used standards and frameworks and, you know, there's real work involved in, you know, not just sort of the cognitive map of, you know, what is the right representation of this information that spans all of this, uh, software stuff, but, but also the real work of sitting down and implementing, you know, um.
[00:53:04] Josh Rubin: instrumentation that, uh, you know, drops into LangGraph or, you know,
[00:53:09] Nate Jones: Yeah.
[00:53:10] Josh Rubin: your, whatever your framework is of choice. Where, where do you think we are in the standardization, um, lifecycle?
[00:53:16] Nate Jones: This, this was in, I think, the note I put on Substack yesterday. There was a little throwaway piece that I put in around sort of how data and privacy incentives are colliding with each other right now. And so from a technical perspective, I think we are missing a massive layer of data middleware that should be there to help us feed data to LLMs. It is weird to me that we are still in a world where LLMs are so siloed from data. It, it's not a technical issue. We can absolutely chunk and get the data ready. It's, it's, it's that the data that we would like to have available is so often locked down by boards or by leadership teams that say, we wanna protect our data, we've been told to protect our data, the first thing we're told is to protect the data from AI. Uh, and so those incentives collide. And so I think that one of the challenges right now is we need to be in a position where we can articulate, from a data value perspective, what is the incremental value of investing in additional integration, additional data access, given that data privacy landscape. I think that one of the nice things about MCP is it gives us a protocol where, where you can sort of get ahold of data technically relatively easily, but it's also dependent on, just like APIs, the ability of the other side to write the service well, right? Like, the MCP actually has to be useful, and not all of them are.
[00:54:47] Nate Jones: And I think that's part of why Perplexity has been talking about agentic search, because they, they don't want to be bound to the MCP standard if they feel like they can get more data by using agents to search instead. So to me what that suggests is that we are in for, sadly, more chaos before we all solidify, even though there's going to be frameworks that people are using, because
[00:55:12] Josh Rubin: Yep.
[00:55:12] Nate Jones: we have this tectonic battle between the privacy incentives and the tech itself that wants the data and is hungry for the data, and that we have the capability to chunk the data for.
[00:55:26] Josh Rubin: Good answer. I think, uh, we're, we're pretty, pretty close to time here. Um, so, uh, I think we'll just do the outro and, you know, uh, this has been a total pleasure for me. Nate, I've been totally digging your stuff. Um, you guys can find Nate B. Jones on his Substack and, uh, you know, all over TikTok, I see, um, YouTube also.
[00:55:48] Josh Rubin: Is there any other place that we can, uh, direct
[00:55:50] Nate Jones: I think those are the three. We can go with those three.
[00:55:52] Josh Rubin: Thanks a lot Nate. Thanks everybody for coming and listening. This has been a blast. Um,
[00:55:56] Nate Jones: I had so much fun, Josh. Thanks for, uh, chatting with me for a bit.
[00:55:59] Josh Rubin: all right, well, you take care. Everybody out there have a great day.
[00:56:03] Nate Jones: Bye-bye guys.
[00:56:04]