KP Unpacked

AI Won’t Kill Jobs, Just Billable Hours | MIT AI Report

KP Reddy

Welcome to KP Unpacked, the #1 podcast in AEC.
Hosted by @KP Reddy and co-host @Nick Durham

What if the way we measure work is completely wrong? In this episode, KP and Nick dive into the MIT “Gen AI Divide” report and uncover how incentives, adoption, and measurement distort the real impact of AI inside enterprises.

Here’s what you’ll hear in this conversation:

  • Why most companies report “no AI benefits” even though 95% of employees use AI daily
  • The hidden incentives that keep workers from admitting AI is making them faster and better
  • The rise of the Shadow AI Economy and what it means for enterprises vs individuals
  • Why current ROI metrics miss the mark and why we should measure tokens instead of time
  • Token arbitrage explained: how value will shift when human work is priced per token
  • The psychology of professional services and why billable hours are nearing extinction
  • How AI adoption differs across industries and why AEC faces unique challenges
  • Fresh founder lessons on persistence in construction tech and the brutal reality of this industry

AI is not just about saving hours. It is about redefining how value gets measured, priced, and delivered. The question is not if this shift happens, but when.

Sounds like you? Join the waitlist at https://kpreddy.co/

Check out one of our Catalyst conversation starters, AEC Needs More High-Agency Thinkers

Hope to see you there!

Speaker 1:

What's going on today, man? It has been call after call after call. There's this thing happening called AI. I'm not sure if you're aware, I don't know if you've heard. Are you in the zeitgeist?

Speaker 2:

I would consider myself in the zeitgeist.

Speaker 1:

Yeah, okay, yeah. So this AI thing, man, it's pretty interesting, and everyone wants to bring up the MIT report. Yeah, let's start there. And guess what, nobody's actually read it. Everybody's read the headlines.

Speaker 2:

Yeah, I actually had the chance to read it, every single sentence. The report that you're referring to, just for everyone's awareness, is called The GenAI Divide: State of AI in Business 2025. Let's get into your post in a second, but do you just want to give a high-level overview of what the report talks about? What captured your attention when you first glanced through it? Why is everyone talking about this report?

Speaker 1:

I think everybody's talking about how there's been no benefit to deploying AI. Companies are not seeing the benefit. It was something like, what was it, a stat like 95% of companies have seen no benefit from deploying AI in their organizations. Is that the hot take?

Speaker 2:

Yeah, yeah, I mean, that's one of the hot takes.

Speaker 1:

Yep, so tell me this. And this is like, I haven't read it, right, but my first reaction was, in the world of incentive alignment, this is all surveys, right? This is all, oh, we're going to go talk to a bunch of enterprises. If you work in an enterprise, whose incentive is it to say that AI is being very effective? I'm not saying that they're wrong. I'm just saying, we are humans, and we have incentive drivers, either notionally or otherwise. Who's going to stand up and say, man, AI is letting me do all my work in a tenth of the time?

Speaker 2:

Yeah, like the incentive is to sandbag, right. Have you seen some of the timeline sentiment right now where people are suggesting Sam Altman and some of the other foundation model CEOs are actively underselling where they are? I even saw a conspiracy theory show up on the timeline today that GPT-5 was a model release that was a misdirection. Like, it was released intended to be a poor improvement on previous models so that, basically, the AI bubble would pop and the world would be less enthusiastic about AI, less concerned about alignment, safety, et cetera. Have you seen that conspiracy theory floating around? I think it's related to what you're saying, in that who is actually incentivized to talk about the superpowers that they've experienced?

Speaker 1:

By using AI, right. Like, I'm getting all my work done in 30 minutes, you're paying me the same. Why am I going to say otherwise?

Speaker 2:

So they actually capture that in the report.

Speaker 1:

Oh, really? Okay.

Speaker 2:

Yeah, there's a section, and coincidentally it's called the Shadow AI Economy. Quick reference to Shadow Ventures. But the stat that they show in that section: in the survey, 40% of companies said they've purchased LLM subscriptions for their employees, so there's some sort of enterprise adoption happening. However, 90% of employees said they use LLMs regularly. So what's the disconnect between the number of companies who are reporting enterprise-wide usage and the individuals who are reporting that they're using it basically every single day, for multiple tasks throughout the day? That, to me, is what you're referring to. Everyone's using the tools. However, right now, do you actually want to admit to your employer, do you actually want to admit to your customers, that you're using better tools to do work in less time? That's not the incentive structure at hand, right?

Speaker 1:

No, it's the opposite, right. And I think it's interesting that they even mention it in the report, because the headlines say differently. But let's take away this nefarious shadow thing that's going on. If you asked me how often I use the internet, I don't know that I could actually tell you. So there is a possibility, in a world where maybe people are better than we think, that people are using it so much now they can't actually delineate when they're using it and when they're not.

Speaker 2:

Like, across their whole internet usage, or intentional usage, just opening up ChatGPT and typing in a query?

Speaker 1:

Yeah, maybe they don't remember. I mean, how often do you use the internet, Nick?

Speaker 2:

I mean, it's like how often do I eat?

Speaker 1:

How often do I drink water?

Speaker 2:

Yeah, Like just yeah, it's abundant.

Speaker 1:

Stop bragging about your hydration game.

Speaker 2:

Yeah, for what it's worth, I found that to be the most interesting stat, because I think it tells you all you need to know. The report is about the state of enterprise business adoption, but in the report it's very obvious that individual adoption is completely through the roof. If you did the same survey for early internet usage and 95% of individuals were using the internet in the early days, everyone would be losing their minds. And that wasn't true. Not everyone was using the internet. There were bottlenecks to using the internet. You had to have a connection. You didn't have a smartphone, you had to be in front of a computer.

Speaker 2:

This is not true today. We all have the ability to query an AI tool at any point in time, with a smartphone, in front of a computer, via voice mode, via the intelligence in our phone applications, right? And so to me it almost renders the report somewhat useless. Like, what are we talking about here if 95% of individuals are actually using it?

Speaker 1:

Right, shame on you, MIT. Or we're just that much closer to the singularity and we don't actually know we're using it. Could be true. Maybe ChatGPT-6 is actually the singularity and we're not going to see it coming.

Speaker 2:

Yeah, a couple other things I found interesting about the report. So they highlight five myths about Gen AI in the enterprise. Myth number one: AI will replace most jobs in the next few years. The research found limited layoffs currently from Gen AI, and only in industries that are already affected significantly by AI. There was no consensus among executives as to hiring levels over the next three to five years. What do you think about that?

Speaker 1:

I think it's, once again, not popular to say. I mean, if you're a public company CEO, there does seem to be something happening right now where it's popular to say you're laying people off, in terms of how it affects the story of your stock. Oh, we're seeing such improvements with AI, we've let go of all these people, buy our stock. That seems like an easy story to sell to Wall Street, so to speak. You know, we deal with a lot of private companies, and of the private companies we talk to, that we work with, I don't think any of those CEOs would ever say that publicly, even if it was happening. They want to talk about, no, it's your copilot, it's your friend, it's going to give you superpowers. Not, oh, it's coming for your job.

Speaker 2:

I think that's very true. So the lesson is there is no consensus among executives as to hiring levels over the next three to five years. I think that's probably true. The CEOs we talk to privately in the built environment, I don't think they have a grasp on exactly what hiring looks like. That's my take. I think they know change is coming, they just can't really predict it. But I don't think they're anticipating laying off 30 to 50% of their staff.

Speaker 1:

Yeah, and that's also a little bit contradictory. On one hand they're saying, oh, we haven't been affected, and then they're also saying, but we can't predict. In other words, nobody's dug their heels in on the idea of, oh, AI has not affected my current teams and staffing levels and it never will. They haven't said that. My point is they haven't dug their heels in on the idea that it's not going to happen. I think it's inevitable. People believe that it will happen. It's not an if, it's a when. And if I was a CEO close to retirement, I would probably just say, no, no, no, we're fine, and then get the hell out of here in three years when it happens.

Speaker 2:

Right. Yeah, the next myth: generative AI is transforming business. Adoption is high, but transformation is rare. Only 5% of enterprises have AI tools integrated into workflows at scale, and seven of nine sectors showed no real structural change. And so, yeah, this is the premise of the report. This is why people are reacting. 5% of enterprises have AI tools integrated, but, again, 95% of their employees are using these tools multiple times throughout the day. Like, what?

Speaker 1:

Okay, maybe it's like the people that go to the gym and don't lose weight.

Speaker 2:

To me it's like a form factor thing. It's not delivered in the form factor that an enterprise is used to. And, yes, are we getting the most productivity possible? At an enterprise level, are our workflows fully integrated and tied together? Is there measurable quantification of exactly how many hours are saved per day by the employees? No, that's not true. But the way that they're framing this seems ridiculous.

Speaker 1:

Very disappointed in MIT. I used to want to go to college there, no more.

Speaker 2:

The decay of institutions is real. The next myth: enterprises are slow in adopting new tech. Yeah, no shit. That was actually a myth, for what it's worth. The reality is enterprises are extremely eager to adopt AI: 90% have seriously explored buying an AI solution at the enterprise level. Fourth myth: the biggest thing holding back AI is model quality, legal, data, and risk. In reality, what's holding it back is that most AI tools don't learn and don't integrate well into workflows. I think that's a fair thing.

Speaker 1:

But I would also say, you know, we're talking about enterprises in general, right, and I think there's a quotient to consider. If you are a law firm, that is a people business. The bulk of your business is P&L. There are no major investments, there is no CapEx. So when you look at the capital required to invest in the firm, it's a very unusual thing.

Speaker 1:

I remember law firms in the early days of the internet would ask me how they could charge their customers back for sending emails. They wanted to charge per email, because they used to charge for FedEx and faxes, right? Like, how do we charge for it? Of course, that was short-lived. But if you think about it, these professional services organizations don't have a lot of capital to invest. They have IT, but they can't really invest CapEx in a meaningful way to transform their businesses, back to the other conversation, so they'll actually just iterate adoption in an unmeaningful way. Versus, if you are, call it, Coca-Cola, and you're looking at robotics and AI, and you're deploying it in your next factory, your next manufacturing facility, and you're deploying a billion of CapEx, you can now do AI and robotics, because you have a CapEx mindset.

Speaker 1:

I think the markets with the biggest opportunity for AI transformation and disruption do not understand, or don't have the capital balance sheet, to invest in a major way to make a difference. So they screw around with, you know, $29-a-month ChatGPT. I mean, look, in the early days of the internet, the internet was free, right? Is the internet free? We have to have cybersecurity. There are so many things that don't make it free.

Speaker 2:

At an enterprise level it's very expensive to have the internet, but as consumers we thought the internet was free. The other myth, that enterprises are building their own tools, obviously we know that is not true. Internal builds fail twice as often. If you're building your own tool sets, I would say that's probably vastly underestimated, probably closer to 5 to 10x more often. Don't build your own tools, people. Not worth it.

Speaker 2:

The other thing I wanted to bring up on the report: they took a look at adoption across different industries. The two sectors they call out with actual real adoption and structural change are, first, technology, new challengers gaining ground, you know, Cursor, Copilot, shifts in workflows, so the call-out there is probably coding tools and infrastructure-related tools, and secondly, media and telecom, essentially AI-native content, AI's ability to assist with creative content, video, voice, altering both of those industries pretty significantly and structurally. The other industries they highlighted, professional services, healthcare, consumer and retail, financial services, advanced industries, call it manufacturing, industrial, energy and materials: no notable impact on their productivity and workflows, is what the report says.

Speaker 2:

I mean, let's talk about the built environment here for a second. Obviously we probably span across multiple categories that are listed here. Where do you think the biggest efficiency gains, productivity gains, are happening in the companies that we work with? And, you know, use a few different examples. Obviously an A&E firm is going to be different than a subcontractor doing work in the field.

Speaker 1:

Yeah, I think some of the productivity gains are happening just on finding things, right? It's quite fascinating. We're a pretty small organization, but sometimes I'm like, hey, where's that PowerPoint, where's that deck from three months ago? Slack gets lit up, everybody's digging around trying to find something, right? And that's at our scale, a couple people. We can't find shit in our own stuff, right? So in the world of just being able to find stuff quickly, I think you're seeing a lot of productivity gains. Oh, where's that project we worked on? Where's that bio?

Speaker 2:

So from a search standpoint, right?

Speaker 1:

Yeah, just being able to find stuff, because as humans, apparently, we stopped organizing our information, and we don't have things in the right place, and things are in the cloud and things are on our hard drives. So I think there are some pretty decent productivity gains there. And then, you know, there's get the work and do the work, and in the world of get the work, I think there are productivity gains in just being able to rewrite a resume. When I was in engineering, we had these long-form bios to be put into proposals, like, oh, here's your team. And the big thing was they were very long-form, because if you were going after a hospital project, you wanted to make sure marketing would pull the paragraphs that were relevant to health care, because you wanted to be very specific to that customer. You're in a position now where you can say, rewrite this bio and highlight all the health care experience that KP has, and it probably auto-generates it pretty quickly. It used to happen manually: our chief engineer's bio was 50 pages long with all his experience, and we had someone manually going in and cutting and pasting specific project experience and relevant experience. So I think you see some productivity gains there.

Speaker 1:

I don't know what's going on with, you know, everybody's got Otter and Fireflies and everything jumping into these meetings. People say it's been huge productivity for them, not having to take meeting notes. I don't know, I just feel like I have a lot of meeting notes, and they're just going into a drive somewhere. So there's a question of, like, when you're manually taking notes, you're running your own human inference filter. You know why it matters, right? It's not about just taking notes, we're not transcribing. And I feel like people are transcribing a lot of stuff, but there's no knowledge and takeaways from it per se, and then they're just sharing it with other people who are probably not reading it. But people have told me, so much productivity gained from being able to transcribe my meeting notes, and I'm like, it depends on your definition of productivity.

Speaker 2:

Yeah, there's a section in the report called Perceived Fitness for High-Stakes Work. I think everyone in the AEC industry would consider their work high-stakes work. One of the questions they ask in the survey: would you assign this task to AI or to a junior colleague? For complex projects, which would be multi-step, multi-week type work, client management, more complex tasks: 90% human preferred, 10% AI preferred. And that lines up. I mean, that's one of the more difficult things that models are working through, in terms of where they're hobbled today: completing multi-day, multi-week tasks is very difficult at this moment in time with LLMs. And then for quick tasks, same question, would you assign this task to AI or a junior colleague, with quick tasks being defined as emails, summaries, transcription, basic analysis: 70% said AI preferred and 30% said human preferred. So I think that lines up. I guess, the way that this report is being talked about, there's a certain level of certainty.

Speaker 2:

People are talking as if LLMs are not going to be able to go and accomplish that complex project task. And, like, have you been paying attention? In what world do you think that the models will not get unhobbled and eventually be able to tackle some of these tasks? I just struggle with that mentality, thinking that this is the bottleneck and it's going to be here for five or ten years. It may not be this year that you're assigning really complex tasks and high-stakes work to AI, but in the next couple of years? I find it very hard to believe if that doesn't occur.

Speaker 1:

Well, the human condition is driven by ego and self-worth, and I think it's hard for people to check their ego and not align their self-worth with things that AI can do, right? I mean, if you enjoy your work, I think it's very hard to contemplate that a computer can do your work without feeling somehow worthless, because people that enjoy their work, like we do, tie a lot of our self-worth and ego to our work. So I think that's just what that report is indicating. It's reflecting the human condition in many ways.

Speaker 2:

My therapist wife would have some comments on tying work to self-worth, but we won't venture there today.

Speaker 1:

I mean, look, I know very few people in their work life that want to be in that top 5%, right? Does Patrick Mahomes not think of himself that way? Is his life not mostly work? Is he not thinking about it? I mean, I just think that there is a percentage.

Speaker 2:

I got one comment on this. I'm sure you've seen some of what Scottie Scheffler has done in the golf world. He is the one professional athlete that I've seen, potentially in my lifetime, and I'm not gray-bearded yet, but he is actually detached a little bit, at least in the way that he presents himself and talks, and if you talk to people around him, I think they validate this. He actually is the first person that can openly talk about, I have this life that's separate from golf, and yet he's still performing at the absolute highest level.

Speaker 1:

Yeah.

Speaker 2:

Winning week over week, basically dominating the sport, but saying, at the end of the day, I've got to go change my kid's diaper just like everybody else. And I think it's actually been cool to see that, because I come from the DNA fabric that you're alluding to, which is the Tiger Woods mentality, that everything revolves around the thing that you want to accomplish, that your entire life's work is measured in what you accomplish, in the legacy that you leave. And I don't necessarily think that's a bad thing at all. I'm wired that way, as you are. But I do think it's interesting that, at least in the example of Scottie, he's still performing at the highest level while maintaining that balance. Maybe. Or he has a really good branding consultant that said, hey, here's a white space that nobody owns.

Speaker 1:

Scottie, nobody's going to say that you outwork Tiger on a relative basis.

Speaker 2:

He's a category leader in that spot.

Speaker 1:

I mean, I'm suspicious. Yeah, yeah, here's this great white space, we can make it very authentic. We'll have your babies waiting at the 18th hole, and instead of high-fiving your caddy, you're going to go grab your baby first. That'll be a good look. Good photo op. Yep.

Speaker 2:

Yep, yep. Okay, your commentary on the MIT report, without reading it. I'm actually still thinking about this idea. I don't know what I think about it yet, I don't know that I agree or disagree with it. On the MIT report, KP Reddy says: here is something that I'm going to explore. The ROI fallacy is that we are comparing weight to distance and concluding that there is no ROI. Let me explain a bit, and then I'll publish a white paper next week on the topic.

Speaker 2:

The unit of effort of AI is a token. It's measured in tokens. It doesn't have a time domain at all. Humans, on the other hand, have limitations of time. Our unit of charging is time: salary or hours. This pricing isn't based on specific effort, but rather the combination of two key metrics: competency and capacity. You can outsource telemarketing because you either don't have the ability to do it, or you don't have enough warm bodies, hours, or both. Remember when people said that they were paid in salt? Comparing AI productivity against hours saved isn't a comparative measurement. The better comparison is: how much did a human charge per token, and how many tokens did it take to do the task that AI accomplished? So, essentially, what's your token rate to do the task? Expand?
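The "token rate" comparison from the post can be sketched as a toy calculation. This is purely illustrative: the consulting fee, token counts, and LLM price below are made-up numbers, not figures from the report or the post.

```python
# Toy model of the "token rate" idea: price human work per token of
# input-plus-output rather than per hour. All numbers are hypothetical.

def human_token_rate(fee_charged: float, tokens_of_work: int) -> float:
    """Effective price per token when a human delivers `tokens_of_work`
    tokens of work (the input/output 'round trip') for a flat fee."""
    return fee_charged / tokens_of_work

# Say a consultant charges $5,000 for a white paper that amounts to
# roughly 20,000 tokens of input and output.
rate = human_token_rate(5_000, 20_000)
print(f"human rate: ${rate:.2f}/token")  # human rate: $0.25/token

# An LLM might produce those same tokens for a tiny fraction of a cent
# each, so the apples-to-apples comparison is token rate vs. token rate,
# not hours saved.
llm_rate = 0.00001  # hypothetical $/token
print(f"multiple over LLM cost: {rate / llm_rate:,.0f}x")
```

The point of the sketch is only that both sides of the comparison end up in the same unit, dollars per token, which is what the post argues hours-saved metrics fail to do.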

Speaker 1:

So we are all limited by time, right? Every study, like this MIT study, studies saving time. AI is more about, do I have enough tokens, right? Each model is different; there's a cost per token that goes up based on the complexity of the work. So even as you think about building an MCP server or anything you're going to do, you're not just going to go work with GPT-5. There's a reason the legacy versions exist, I know they're deprecating some of them, but it isn't just about better or not. I'm going to use GPT-4 for certain tasks, because those are lower dollars per token. I'm going to use GPT-5 for some tasks that are more advanced, because those are higher dollars per token. So we're sitting here trying to compare.

Speaker 1:

A unit of AI does not understand time. There is no time domain. So if I said, hey, Nick, can you write me a white paper, you'd say, being responsible, hey, I can get it to you tomorrow. It's probably going to take me five hours, I've got to find time to do it.

Speaker 1:

If I said, well, how many tokens for you to do the work? Let's say you're a consultant, I'm not going to pay by the hour. What's your rate per token? So is the input my request via Slack, or my request via voice, that gets converted into bits and bytes and tokens, and then you're spending whatever you do to process these things? Maybe I'm watching you click on your computer, and it's however many clicks, right? And then there's an output, which is once again bits and bytes, tokens. Is that what you charge me for? Or do you charge me for the round trip of the token usage? Because, by the way, maybe for 80% of writing that white paper, tokens are oil in that concept, right?

Speaker 1:

So maybe you're going to use AI tools to build version one, so those are the lowest-cost tokens, but then you're having to use your brain, your experience, to do the higher-level editing, et cetera, and you're going to charge more for those tokens. Because, one, those tokens are scarce, and two, they're higher-value tokens. You're ChatGPT-6, right? So I think the problem is we're using these units of measurement and trying to do a comparative study, versus maybe we need to think about ourselves as tokens and not as hours. Back to every engineering firm we talk to, every architecture firm, anyone in professional services that charges by the hour: I always say, stop tracking your time. We didn't educate ourselves this way to be good at entering time, right? What if we charged per token, back to value pricing? Maybe, if you send me a drawing, via whatever interface, that's converted into tokens as an input, and I'm going to do so much work, I'm going to charge you 10 tokens for that work, but my tokens are $10,000 a token. It's not about my time. So it's a simple idea in many ways, I think.

Speaker 1:

But the implementation is somewhat complex, because then we have to decide. You know, a friend of mine was looking for a job. He hadn't created a resume in like a decade, and he was like, dude, I don't even know how to begin doing a resume. And so I was like, hey, I have a consultant, they can help you. And then I asked the consultant, how much did you use ChatGPT? And they were like, I recorded the call with this person, dropped it into ChatGPT, and said, hey, summarize this person's experience, et cetera. But at the end of it, that was like the 80% solution, maybe, or the 70%, I don't know, I wasn't in the weeds on it.

Speaker 1:

But then the human factor kicked in at the end, to make it great. AI made it good; this person made it great. So how do you charge for that? Do you charge by time, or do you charge per token? Because we charge per time, that's the whole thing in professional services: well, if I use AI, I can't charge as much, because I'm not using as much of my time. So my idea of turning human effort into tokens actually supports this idea that if you can do it in less time, you just charge by tokens. Now, I don't know what the human interface is, and there's probably a stablecoin to be created in this, Nick. Human coin, time coin. You have an attorney coin, an accountant coin.

Speaker 2:

Yeah, it's actually kind of an interesting thought experiment. Like your original premise of the ask to do a white paper. To do a white paper, the amount of tokens I would use with an LLM would be quite significant, actually, because I'm giving it a lot of context, plugging a lot of information into the context window of the LLM. The bulk of the work is actually done there, in the initial context window. That's going to eat up a lot of tokens, which is different than a resume. A resume would not eat up nearly as many tokens. I mean, the price probably justifies it. The price difference between a white paper and a resume is probably captured there.

Speaker 1:

So the quantification of the token, right?

Speaker 2:

Yeah, the quantification of the token. Right, not the cost, it's how difficult the task is. A white paper is a difficult task. It requires a lot of context and token usage. An easier, simpler task, not as much; you don't have to give too much context to the models to get an effective outcome. The more time-consuming effort that I've found currently is in the synthesis, after the initial white paper is produced. If you're using a deep research tool, like ChatGPT Deep Research, or Grok's research, or Claude's research tool, they give you these 20-page outputs, and it might be 70 to 80% there. Even compared to the resume example, it might be mostly there, but to get it to where it's actually good enough to release, that last 10 to 20% takes a huge effort. I find I'm probably 3x-ing my token usage in that synthesis process, where I'm actually reading.

Speaker 2:

So I'll go through the output, read it word for word like an editor would. I'll take notes during that process. I'll ask it, directionally, you know, for each paragraph or subsection, exactly the changes I want to make, and then it's still not going to be perfect. And then the entire output, and sometimes the structure, changes. So there's the repetition of doing that over and over again, and then I find myself piecemealing all of the different outputs that are 20 pages in length. So now I'm editing, I'm actually the editor of five research reports, and some sections are great, other sections are not.

Speaker 2:

You have to structure the report in a way, you know, to make sure that the language and the flow is proper across the multiple queries that you've run. It actually is a little bit of a task. Over time, I'm sure this will be much easier and simpler to do. But I find the token usage would be both in the initial query and prompt, feeding it as much information as possible, and then the next series of iterations, which are probably less severe from a token usage perspective and more like, I mean, my brain. I'm relying on my brain right now to do that synthesis. So how do you capture that human element? That's my question on that.

Speaker 1:

So think about this, right? Services businesses. Let's pick on lawyers, because we know a lot of AEC people listen to our stuff, and let's pick on lawyers because nobody likes them. The legal profession, like most professional services, is a labor arbitrage model. So in what you just described, is there a token arbitrage? Let's assume that ChatGPT's tokens are a dollar a token. I'm making it up, I don't know what they actually are. But the added wrapper is Nick's effort and your expertise, because you have to process what you put into the context window, and then there's the post-processing, right? So if ChatGPT's token is a buck a token and your value add is 4x, you charge me $4 a token at a cost of $1 a token. Now we're doing token arbitrage.

Speaker 1:

The difference being, now the only problem is there could be incentive misalignment, which also happens in services. I'm sure all lawyers are tracking their time exactly and being honest on their bills with me. I'm sure they're not rounding up or having random people bill my account, no way. So there is some incentive misalignment where you may use more tokens to get the work done because you want to charge more. That's no different. That's like an ethics type thing.
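The arbitrage math in that exchange can be sketched as a quick back-of-the-envelope calculation. All the numbers below are the hypothetical ones tossed around in the conversation ($1 per token wholesale, a 4x value-add markup), not real token prices:

```python
# Back-of-the-envelope token arbitrage, using the made-up numbers
# from the conversation: $1/token wholesale, 4x consultant markup.

WHOLESALE_RATE = 1.00  # dollars per token paid to the model provider (hypothetical)
MARKUP = 4.0           # consultant's value-add multiple (hypothetical)

def token_arbitrage(tokens_used: int) -> dict:
    """Cost, revenue, and margin for a tokenized engagement."""
    cost = tokens_used * WHOLESALE_RATE              # consultant's spend on the model
    revenue = tokens_used * WHOLESALE_RATE * MARKUP  # what the client is billed
    return {"cost": cost, "revenue": revenue, "margin": revenue - cost}

# A 1,000-token white paper under this model:
print(token_arbitrage(1_000))  # {'cost': 1000.0, 'revenue': 4000.0, 'margin': 3000.0}
```

The incentive misalignment KP raises shows up directly in this sketch: because revenue scales with `tokens_used`, a consultant billed per token has the same temptation to pad token counts that an hourly biller has to pad hours.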

Speaker 2:

For what it's worth, it's actually an interesting thing, because you can use as many tokens as you possibly want in a query. Right, right.

Speaker 1:

And if I'm asking you to do a white paper and you come back and say, KP, I charge five dollars a token, and you send me an invoice for a thousand tokens, I might be like, okay. But then on the next one you take advantage of me and it's fifty thousand, and I'm like, hey, this other guy takes less tokens. He might charge me $6 a token, but he takes less tokens. So the whole efficiency versus output thing, you're still playing the same game. You hire the-

Speaker 2:

Competitive. There's an efficient market dynamic there, yeah.

Speaker 1:

Right. Do you hire the lawyer that charges you $5,000 an hour and can solve something in a quarter hour, or the one that charges you $1,000 an hour and takes 20 hours to solve it, right? So maybe the market will dictate your token rate, and you can't gouge me with excessive token charges compared to the next person. So now translate that to project work in our industry, or lawyers. They send you a rate sheet, right? They send you a rate sheet and they give you a lump sum, or they give you a not-to-exceed. So now the rate sheet is not dollars per hour, but dollars per token. I feel like there's a there there with this. I like it.
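The lawyer comparison here is just an effective-cost calculation, and the same market dynamic carries straight over to token pricing. A minimal sketch with the illustrative numbers from the conversation (none of these come from a real rate sheet):

```python
# Effective cost under hourly vs. token pricing; all numbers illustrative.

def hourly_cost(rate_per_hour: float, hours: float) -> float:
    return rate_per_hour * hours

def token_cost(rate_per_token: float, tokens: int) -> float:
    return rate_per_token * tokens

# The two lawyers: the higher hourly rate is still the cheaper engagement.
fast_expensive = hourly_cost(5_000, 0.25)  # $5,000/hr for a quarter hour -> $1,250
slow_cheap = hourly_cost(1_000, 20)        # $1,000/hr for 20 hours -> $20,000

# Same dynamic per token: the $6/token provider who needs fewer tokens wins.
hungry = token_cost(5, 50_000)    # $5/token but token-hungry -> $250,000
efficient = token_cost(6, 1_000)  # $6/token but efficient -> $6,000

print(fast_expensive < slow_cheap, efficient < hungry)  # True True
```

The point of the sketch is that neither rate alone decides anything; the market prices the product of rate and quantity, which is why excessive token charges get competed away just like padded hours.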

Speaker 2:

I like it. I think there's definitely a future state where token use is involved in pricing, for sure.

Speaker 1:

Because we all agree, we hate this idea of tracking our time, and the only way for me to make more money is to work more hours. None of us love that aspect of work, and that's why some of us get into becoming, you know, financial capital allocators. Our dollars are not correlated at all to how much time we spend. Really. Yeah, the hourly, the per-hour rate is a tough-

Speaker 2:

It's like a tough human psychological thing to be comfortable with. You feel like a slave. Yeah, because you are. Yeah, I mean.

Speaker 1:

My little insert around history in that post was around salt, right? That's where the term salary comes from: salt, because that's how they used to pay people, previously slaves, and when things turned commercial they said, we'll pay them in salt, right? So does all this stuff come to a head over time? Like I used to always say, what industry is the airline industry in? Airlines are in the thermodynamics industry. They convert fuel, they add value to fuel by burning it to get you somewhere, right? That's the value add, the value creation. The value stack is converting something that started off as a fifty-five or sixty dollar barrel and somehow turns into a $1,200 ticket for some god-awful reason, right? So that's the conversion and the value stacking that tends to happen. I think this is beyond a thought experiment, though, and that's why, on Catalyst, I put a note: does anyone want to write this white paper with me? Because I think there could be some real math in it.

Speaker 2:

Well, we already know that the hourly model, the billable hour model, is changing. Maybe next week we can do a deep dive on the owner's rep white paper.

Speaker 1:

Yeah, I think it is. I mean, there are all these great stories, like, you know, Van Gogh is in a bar and someone asks him to draw something, and he draws it and gives it to them, and they're like, aren't you going to sign it? And he's like, oh, that's $500, or whatever, I don't know. There are all these stories around value creation. So I think once you take time out of the factor and turn it into a token, and we get to argue about what we're charging per token, I love that, but we have a baseline.

Speaker 1:

Right, we have a baseline. What's the wholesale rate of a token? Is it ChatGPT-6, right? Right, ChatGPT-6, or 5, is the wholesale rate of a token. And if you say, well, I can do that for 10 tokens at five dollars a token, I'm like, um, I can do it with ChatGPT, without you, for a dollar a token.

Speaker 2:

Well, now you're not creating enough value above and beyond the AI. I think humans are incentivized to keep the hourly model as long as possible right now, actually. Despite what I said about the human psychology thing, I don't want to have to give some of my value away and anchor to token pricing, knowing that token pricing is going to go down. I think I would be able to capture more margin on my work at this moment in time by leveraging the tool, knowing that the client I'm working for, to produce that paper, knows I'm using it, but they don't know how much. And it's up to me to be as efficient as possible and produce as high a quality of an output as I can. But I do think you're describing a future state where customers have wised up. As the customer, if you're requesting me to bid on a project with that sort of pricing model, it's because you understand the dynamics at play, that all your consultants are now using models to do the work in a tenth of the time they used before.
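That "value above and beyond the AI" test can be made concrete: compare the consultant's quote against what the same job would cost at the wholesale model rate. A sketch using the hypothetical figures from the conversation (10 tokens at $5 versus doing it yourself at $1 a token):

```python
# How much value must the consultant justify above the raw model?
# Numbers are the hypothetical ones from the conversation.

def value_above_baseline(consultant_rate: float, consultant_tokens: int,
                         wholesale_rate: float, diy_tokens: int) -> float:
    """Spread between the consultant's quote and the do-it-yourself cost."""
    return consultant_rate * consultant_tokens - wholesale_rate * diy_tokens

# "10 tokens at five dollars a token" vs. ChatGPT alone at $1 a token:
spread = value_above_baseline(5, 10, 1, 10)
print(spread)  # 40 -> the consultant must deliver $40 of value above the AI
```

If that spread isn't covered by expertise, the wrapper around the model, or proprietary training data, the client is better off going straight to the wholesale rate, which is exactly the pressure being described here.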

Speaker 2:

Essentially, you've flipped the script and changed the game. For what it's worth, the white paper I'm talking about, the owner's white paper, that's the exact concept of it. When customers start requesting you to change your model because they're like, hey man, this doesn't take you 20 hours to produce the project reporting analysis, the deck. I can spin up a deck in an hour. I know the game. It's over, the jig's up. As the consultant, I want the jig to stay up for as long as possible so I can capture that arbitrage. But eventually, if you're smart enough to ask that question, I'm going to have to meet you on your terms if I want to get the work.

Speaker 1:

Well, think about this too. I think this is the condition of the professional services industry. People say this thing: oh, but what about my IP? What about my IP? They say this nonsense over and over again. And one could argue that the token-based model, with a layer of your own on top, where you take the LLM and throw your data on top of it for training, actually captures your IP. So if you're a law firm and you've been doing it for 100 years and you have all that IP to train the model, maybe you do get to charge more per token, because you have a better model.

Speaker 1:

So I don't know. I mean, that's why I posted it on Catalyst. I think it would be fun to take someone's, you know. I mean, maybe it'd be fun. My definition of fun is clearly different than a lot of people's definition of fun. Tokens, right? It'd be super interesting. And I think there might be some folks in the Catalyst community that are like, hey, I do these steel connections, or whatever it is. Maybe they can find some tasks that they do quite frequently, come up with some case studies, and do some token conversion of time.

Speaker 2:

Yeah, that would be interesting. Maybe I'll do some research and figure out what my last white paper cost under your model. Yeah, I don't even know if I want to know that answer, actually.

Speaker 1:

I was going to say, that's the next big build, right? We'll go build something around token arbitrage. I'm going to pay you $2 a token, I'm going to sell you for $6 a token, and it's all run on our own proprietary unstable coin. Because then, if we run our unstable coin, it looks like a barter system and none of us pay taxes.

Speaker 2:

It's also funny.

Speaker 1:

Yeah, because we all have help, right? So I have folks that come clean the house or watch the kid when I want to go do something, right? What is that worth? Because, also, I don't want to clean my toilets. I'm not interested in that, right? So the value has to do with, you know, what's aligned with the person. And watching my own kid, what I would pay per token depends on how he's doing that day. There's some days I'm just ready to, like, here, someone come take this kid, right? It depends on the day. But if you tokenize all of our work, none of us have W-2s. The vision of the future of crypto holds true.

Speaker 2:

We can all not be part of the internet. We're no longer part of the government system.

Speaker 1:

We're no longer part of the government system.

Speaker 2:

I'm sure I just got added to a list.

Speaker 1:

I'm pretty convinced I just got added to a list.

Speaker 2:

It's a future I'm ready for. All right, next LinkedIn post, if you have time.

Speaker 1:

Yeah, I think we've got a little bit of time.

Speaker 2:

Lately, I've been meeting a lot of founders focused on the construction industry with very limited industry experience. While I think fresh ideas from outside the industry are needed, I question their tenacity to stick with it. This industry is brutal. Yeah, let's talk about this.

Speaker 1:

Yeah, I mean, you see the folks that show up: hey, I came out of healthcare and I solved all these big problems in healthcare. And, you know, I think construction, you know, I know where this post originated

Speaker 2:

now.

Speaker 1:

I mean, does construction have a lot of problems? Yes. Do we have a monopoly on all the problems? Probably not. But I love the idea of fresh perspectives on how to think about something in our industry. I also think it's easy to abandon the mission, though. When I find founders that are from industry, they're just mad. They're mad about some problem. They just really hate the problem. They really want to solve it. And the industry gets tough.

Speaker 1:

This industry punches you in the face every day, right? And I think when you're from the industry and you have a true, authentic passion around the problem statement, because you've experienced it, you're going to keep operating with a level of tenacity and stay energized to see it through. I think if you come out of another industry and you're being opportunistic, coming into our market and saying, oh my God, there's so many problems, I could solve all these things, then the minute you get punched in the face a few times, you're like, why am I here? I mean, I'm a good example of that. I was just like, I don't have enough resources left to see it through, right? So I pivoted into the telecom industry for a few years, made some money there, made more money there than I ever would have made in the construction space, right? And then I popped back into it. I was like, no, I've got a few bucks, I can play the long game in the construction industry.

Speaker 1:

But I think that's the problem, and I think that's the balancing act. If you're an industry founder and you get a co-founder who's a great AI engineer, and you're like, yeah, we're going to have all this scale, we're going to do all the things, and then it doesn't happen? Good luck retaining that founder. What, we can't raise money? What do you mean we can't raise money? There's so much money out there.

Speaker 2:

All the other AI companies are raising money, I'm out of here. So, yeah, I've seen this be a controversial topic, because, to your original point, everyone welcomes different perspectives. The idea of different perspectives is, you're coming in fresh, you have no preconceptions, you don't have any biases. That is really important. I mean, maybe the number one example of this is someone like Elon, right, who goes and does electric cars, then rockets, and, you know, originally PayPal and payments, and now he's on to X, and he'll do something else. The argument is that he brings a fresh perspective and a first-principles approach and says, how the hell do we solve this problem?

Speaker 2:

As someone who comes from this industry myself, I completely agree with what you're saying, though. I've seen a lot of people leave the industry too early that came in with that non-biased, differentiated, unique perspective on how to do things, and leave because, yeah, it's pain. You're going to eat glass in this space. That's true of a lot of industries, but I think there's a unique way of operating in the built environment where it kind of spits you out at a point in time. And looking universally across even the success stories in construction, even outside of our portfolio, someone like Tooey from Procore is a good example of this. It took him 10 years to get to 5 million in revenue, something like that. Wow. I mean, that's insane. I might be exaggerating, but it was not quick.

Speaker 1:

I don't think you're off by, you know, much magnitude.

Speaker 2:

I think you're probably right, which is, I mean, most founders would have given up. But he really cared about solving the problem, about bringing software to project management and construction, and he's built one of the biggest software companies in the space. But the instinct is to give up. If it takes you eight years to get to five million in revenue, every investor that you talk to, all of your friends and family, anyone who's wise will tell you, dude, go make a buck. You're a smart guy. Don't waste your life on this space. And yeah, we hear that a lot.

Speaker 2:

But I even look at the vantage point we have working with our founders, and it has to be a personal conviction that you have in order to move through that phase of, hey, this is not going to be an easy path. Why am I here? Why am I waking up every day? But I will say this about the industry, not to turn anyone off: we have arguably the biggest problem sets of any industry in the world. So if you're looking for a mission and a purpose, do you want to solve the housing crisis? It's arguably the biggest problem that we face as a society today, and it's my generation's problem. I don't think there's a bigger problem to solve in the world right now than the affordability of housing.

Speaker 1:

No, a hundred percent. I think that's the thing, right? You have to be highly mission-oriented around this stuff. I think if you come in with an opportunistic mindset, you're not going to last. But I mean, you've been on board calls with me where I'm practically trying to talk our founders into giving up, and they're like, but my work is not done, I'm not done yet. They don't feel like they're done with their work, and I'm like, yes, but you're also running out of money. Maybe you do something different. You