KP Unpacked

Build a Bot, Win More Work: Custom GPTs for AEC

KP Reddy

In this episode of KP Unpacked, the number one podcast in AEC, Jeff Echols and Frank Lazaro unpack the rising wave of custom GPTs—and why every forward-thinking AEC firm should be building one. From knowledge centers to proposal bots, this episode dives into how AI tools like Azure AI Foundry and ChatGPT Projects are reshaping how firms capture experience, accelerate onboarding, and win more work.

What you’ll learn:

  • The two AI tools leading the charge in AEC: Azure AI Foundry vs. ChatGPT Projects
  • Why centralizing your firm’s knowledge gives you a serious edge
  • How to turn your senior team’s experience into a searchable asset
  • Smart use cases: proposal bots, resume finders, SOP assistants
  • The one con firms overlook before building a custom GPT
  • A simple, no-code starting point to test this today

If your firm’s expertise lives in people’s heads or buried folders—this episode is your roadmap to change that.

Ignite what's next

We're launching something new... It’s called Catalyst.

It’s a space where AEC forward-thinkers are reimagining what’s next. This is where the top minds in the industry are sharing ideas, leading change, and pushing the future of AEC forward.

Sounds like you? Join the waitlist at https://kpreddy.co/

Check out one of our Catalyst conversation starters, AEC Needs More High-Agency Thinkers

Hope to see you there!

Speaker 1:

Hey, welcome back to KP Unpacked. This is where the biggest ideas in AEC, AI and innovation all collide in one podcast, powered by KP Reddy Co. This is where we break down the trends, the technology, the discussions and the strategies that are shaping the built environment and beyond. My name is Jeff Echols. I'm the Executive Director of Catalyst here at KP Reddy Co, and I am joined by my colleague and teammate, Frank Lazaro, who is heading up our AI advisory, as well as a lot of other things. But I mention AI advisory because that's what we're here to talk about. This is AI in AEC, or AI Unpacked; we were talking about what we're calling this thing before we went live. This is the time for Frank and me to come together and unpack something, some sort of AI tool, something to do with AI, and find actionable ways for you in an AEC firm to implement the things that we're talking about. So, Frank, thank you for joining me again today.

Speaker 2:

Yeah, absolutely. You know, leading up to this, when we start discussing the show notes, and in the conversations that I have, I have so many different topics that I want to feed into this. It's almost like, which one do we pick today? But that just means we're going to have a lot to talk about for the rest of the year, so it's really exciting.

Speaker 1:

Yeah, and that's one of the beauties here: we can take one thing and dive into it, break things down into really small pieces, and the feedback that we get tells us that it really does help. Making it digestible first of all, but then making it actionable. I ask you that at the end of every episode: okay, what's one thing that a leader in an AEC firm needs to do with this information today? I think that's what's unique about this and very valuable about these conversations.

Speaker 2:

They're also fun.

Speaker 1:

We get to talk about some fun things.

Speaker 2:

Absolutely. You know what's great about this topic? It actually came from a client conversation, and it seems to be top of mind, because, oddly enough, unscripted and unprovoked, I got a LinkedIn message asking me about this topic today. The timeliness of it is uncanny. But I think what we're going to find is that this concept of custom GPTs is really top of mind for a lot of firms and a lot of teams, so I'm looking forward to diving into this today.

Speaker 1:

Yeah, there are a lot. As you know, and maybe folks that are listening know, I facilitate all of our mastermind groups, and in our innovation leaders mastermind groups especially, we hear a lot about custom GPTs: what firms are building in-house for knowledge transfer, et cetera. It's a very, very popular topic, and these are tools that are being used throughout the industry. If you don't know about custom GPTs, get ready for the knowledge bombs, because here we go. We're going to talk about custom GPTs and your firm's knowledge center, and today we're going to touch on a couple of different tools. You've picked a couple to focus on here, as you do every week. So what are the tools that you're selecting for today's conversation?

Speaker 2:

Yeah, it's a really good question, and what's interesting is that the tool availability has really evolved within the last six months. If we take a step back and start at the beginning, the only option that you had a year or two ago was to do an enterprise implementation, a custom development on top of ChatGPT. That required software developers. It required having advanced knowledge. You had to train it on your data. You had to do so many things to actually get it to work. Now you have all these new tools out that are really low-code or no-code, and they basically give you the same functionality that you would have gotten through a custom implementation, say, two years ago.

Speaker 2:

You can deploy something within days now. Of the tools that we've been looking at, and the tools that we support our clients with, Azure AI Foundry is one that's really popped up on the radar. You get to pick from a bunch of different models on the backend. You don't have to use Copilot; you could use OpenAI or Grok or Llama or any of the more commercialized versions, but you get all the ability to customize without actually having to worry about all the coding. That's a really fascinating one, and I look forward to diving into it.

Speaker 2:

And the second tool I want to talk about today: earlier this year, OpenAI launched a new ChatGPT feature called Projects. What's really cool about Projects is that instead of coding a chatbot to do something specific, you can just create a project folder, add documents to it, and then anytime you chat within that folder, it uses those files. So you basically get a mini version of, call it, a custom GPT using an off-the-shelf tool. Both of those tools are definitely on our radar, and they're things our clients have been asking for. What's really cool is that it's low-code, no-code, fast deployment and relatively inexpensive. You're not spending hundreds of thousands of dollars to customize a tool. You're basically just using something that's off the shelf.

Speaker 1:

Yeah, and if I'm not mistaken, we talked about Projects in the same episode that we talked about Google NotebookLM.

Speaker 2:

Yep same concept.

Speaker 1:

Five episodes ago, something like that. So if you missed that, just go back a few episodes and you'll find Frank and me talking about ChatGPT Projects and Google NotebookLM.

Speaker 2:

Yeah, so what's interesting, and the question that's going to come up, is: what's the difference between NotebookLM and Projects? Both of them have the ability to add project files, but ChatGPT Projects gives you another option, a configuration module, where you can actually add instructions. So it's almost like: I give you files, and then I give you instructions on how I want this bot to work. It takes it one step up. Yes, you can use NotebookLM and you can use ChatGPT to work off of multiple files, but here you're able to create a repeatable project where, every time you chat inside that folder, it follows those instructions. And there are some really good examples of that. Obviously we use it internally here at KPR. It's one of those things where not only are we telling our clients that they should be using AI, but we also incorporate AI into some of our own workflows.

Speaker 2:

So when we think about doing an analysis with our dial assessment, I have a dial assessment chatbot that I use to help me distill client notes. We do a workshop or we do something, and how do I get a client to think about, you know, are they a disruptor or are they an innovator? I have all of my base knowledge sitting there, and then anytime I interact with it, it's like: oh, XYZ company, I just did a workshop for them, here are some of my thoughts on how they answered some of these questions; how does that baseline against our dial assessment and our rules when it comes to innovation?
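For readers who want to see the moving parts behind the "files plus instructions" pattern Frank describes, here is a minimal sketch. ChatGPT Projects itself is a no-code feature, so this is only an analogy using the OpenAI Python SDK; the file names, model string, and instruction text are placeholders rather than anything KPR actually uses.

```python
# A rough analogy of the "instructions + reference files" pattern described above.
# ChatGPT Projects does this in the UI with no code; this sketch just makes the
# two ingredients explicit. File names, model string, and instructions are placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# 1) The "project files": a small set of reference documents (plain text here).
reference_docs = [
    Path(name).read_text()
    for name in ["assessment_rubric.txt", "workshop_notes_xyz.txt"]
]

# 2) The "instructions": how the assistant should behave every time you chat.
instructions = (
    "You are our internal assessment assistant. Using only the reference "
    "material provided, assess whether the client is acting like a disruptor "
    "or an innovator, and explain your reasoning briefly."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model your plan includes
    messages=[
        {"role": "system", "content": instructions},
        {
            "role": "system",
            "content": "Reference material:\n\n" + "\n\n---\n\n".join(reference_docs),
        },
        {"role": "user", "content": "Here are my notes from today's workshop: ..."},
    ],
)
print(response.choices[0].message.content)
```

In Projects itself, you would simply paste the instructions into the project settings and drag the files in; every chat started inside that folder then works against both.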

Speaker 1:

Yeah, based on that, or related to that comment: it's not surprising that what we're talking about, the custom GPTs, and also Projects and NotebookLM, are often referred to as the knowledge center or the firm librarian. I think those are really good descriptive terms for the way they're being used. Think about this when we think about building and implementing one, whether it's a custom GPT, a project or a notebook.

Speaker 1:

Certainly we can do it for organization and research, or whatever different use cases we could come up with. But is there a scenario, or can you explain a scenario, where having a tool like this available to your people gives you some sort of competitive advantage?

Speaker 2:

Yeah, it's a great question. Consistently, what we find when we talk to firms is that access to information is very problematic for most of them. We don't do project archiving or project closeout very consistently or very well. We have information that tends to be scattered across the organization. You know, we had a server on site, but now we're in the cloud; we have some stuff in Teams and we have stuff in SharePoint. And Jeff, you know this, even internally at KPR.

Speaker 2:

Up until recently, when we expanded our operations team, we had information that was just disparate. It was all over the place, and it's hard to sit there and say, well, this project or this client looks like another thing that I've done before: where was that document? How can I go back and reference it? So this concept of a firm librarian or knowledge center gives you that competitive advantage, because if you can say, let me identify the top five projects for this RFP that I can include in my submittal, and you can find them relatively fast, that is a competitive advantage, because you're not wasting time. If it takes you 40 hours to do a submittal but you can cut that time down to 20 hours because you have access to information, that becomes highly valuable. I had another client where we were talking about pricing tables, comparing pricing tables to where the job ended up. Being able to go back and find those earlier pricing tables, so that when they're pricing out a new job they have access to that knowledge, means they're not losing money or leaving money on the table by not pricing things correctly, simply because they couldn't go back and look at that information.

Speaker 2:

That becomes a competitive advantage. Just having readily accessible information becomes the advantage, because the faster you can get to it, the more accurate you're going to be and the less time you're going to waste. It's an efficiency play as well. It centralizes the knowledge, and it delivers consistent answers firm-wide. So if I go in and ask a question about our pricing and you ask a question about the pricing, we're all pulling from the same data, so there's consistency. There are a lot of advantages that would make you more competitive in the marketplace.

Speaker 1:

Yeah, I mean, conceptually, think about this as you're listening, and maybe this hasn't resonated with you yet, but let me say it this way: if you're on the client side, what is one of the reasons, one of the decision-making factors, for selecting one firm over another firm? It's experience. It's years in the business. It's the level of the engineers, the architects or whoever they are. There's so much in AEC firms, and this is true for all professional services firms, that is banked on the experience and knowledge of the in-house talent.

Speaker 2:

Yeah, and usually it all lives in somebody's head.

Speaker 1:

It lives in somebody's head. It's disorganized on their desktop, in their P drive, you know, wherever, right? Well, I know the firm that I worked at...

Speaker 2:

You know, we would run into issues where we couldn't find the final submittal. I would be able to find all the drafts, but I'd be like, well, who has the absolute final, final one? And that's a challenge when you really want to make sure that you're putting your best foot forward.

Speaker 1:

Yeah, and there are examples out there. You can Google this, and we're seeing it again: the innovation leaders who participate in our mastermind groups are all doing this. They're all building these custom GPTs.

Speaker 1:

But think about that knowledge that's in your senior engineers, your senior architects, your senior construction managers, whoever they are. What happens when they're not there anymore? You can build this. You can have that firm librarian that holds all of that knowledge, so people can access all the knowledge that was once inside that senior engineer's head, or whoever it was.

Speaker 1:

And I think that's part of the beauty of what we're talking about here. We've got different versions of experience gaps and generation gaps. We've got certain generations aging out and certain generations coming up into leadership, and certain generations like mine and Frank's that are, I don't know, apparently the lost boys or something. But how do you retain that knowledge from this generation to that generation, while skipping over Frank and me? How do you retain that knowledge and retain that advantage, if that was, in fact, your advantage? So I think this is a really important conversation.

Speaker 2:

You know it's interesting, though, jeff too.

Speaker 2:

So if you ever spend any time talking with KP, when we're thinking about custom GPTs or just AI chatbots or agents in general, he's under the firm belief that the person who can figure this out from a knowledge perspective is going to have a huge competitive advantage. When you start thinking about all the codes and regulations, they're in some document, they're in some book, they're online. But being able to access that stuff more effectively with AI is going to be a game changer for our industry in a lot of ways, because we are basically run by books, whether it's the yellow book, blue book, red book, whatever they call them, the ICC books. We always reference those. But if you can make them digital and turn them into a library or a knowledge center and access them quickly, could we produce bigger, better, faster, safer?

Speaker 2:

You know, bridges, buildings, roads. So I think there's a lot of value in this in broader terms, outside of the firm as well.

Speaker 1:

Yeah, yeah. And you know, there are products out there, some that are doing that with codes. One of the biggest struggles, and I do this with my graduate students: there are a couple of things that always come up when they're thinking about, okay, how can we make AEC better? And they come up with: well, what if we create some sort of AI that helps us analyze?

Speaker 1:

You know, do our code research, analyze projects against the code, things like that. Well, there's one major one out there that has been in the news; they've won some important lawsuits and whatnot, and I've actually interviewed their founder before. One of the big problems that we have, at least in the United States, is that codes vary by jurisdiction, so it's a huge task to build that as a product. However, if you are a firm that specializes in XYZ in certain areas, all of a sudden that amount of data shrinks drastically, and that changes the equation. Another one that the students will touch on is code submission, like for a building permit. Same thing, right?

Speaker 1:

It varies. How are you going to put together some sort of automated process for building code review and permit submission when that process varies by every jurisdiction across the United States, which is, I don't even know the number, it's in the thousands?

Speaker 2:

Well, we ran into that issue even on the residential side. The town that I live in was in an unincorporated county, and then the little area got carved out and became a city. At the time we were having our water heater replaced, and the plumber put the water heater in based off of the county code, but then there also had to be a city inspection, and their code was slightly different, so the expansion tank requirement was slightly different. So you start thinking about it even at the integrator level, the person that's actually doing the work: if they had access to that kind of information, then as the plumber is at your house doing something, he knows, oh, we need to use this size expansion tank versus that one, whatever it may be.

Speaker 2:

I think even at the residential level you'd see a lot of benefit. But now take that and scale it up to a 20-story building. There's a lot more code that you've got to deal with, a lot more things, and it becomes more expensive versus residential. But yeah, I can definitely see the value in that.

Speaker 1:

Yeah, that's a great example. Back in my architecture days, we worked on a project that spanned two jurisdictions. There's a street down the middle of the project, and one half is in one jurisdiction and the other half is in another. It happened to be in a state where everything was local, so there's a code here and a code there. Same building, same project, but you've got to review it and apply the code differently on one side of the street versus the other, and tools like this could greatly facilitate that work. So, okay, let's dig into some of the specifics a little bit more. You mentioned Azure AI Foundry, which we hear a lot about in our mastermind groups. Tell me more about Azure AI Foundry.

Speaker 2:

Yeah, so, interestingly enough, what's really cool about this one is that, obviously, Microsoft wants you to store your data in their cloud, the Azure cloud. The way this works is that you basically create a playground, a chatbot playground, which gives you the option to select one of something like 2,000 different AI models on the backend. There are the obvious ones like ChatGPT and Copilot, et cetera, but for those that want to use ChatGPT models without using the off-the-shelf ChatGPT, this is a great option, because it gives you the ability to customize and control and really deploy. The other nice thing about Azure is that you can just point your data index to somewhere that's already in your environment, so you don't have to copy or move or do anything special with your data. All you have to do is index it, and once it's indexed, you just point to it. The beautiful part is that your data stays your data, and it stays within your Azure environment. That's the one big difference between ChatGPT Projects and Azure AI Foundry: ChatGPT Projects limits you to 20 files, so you cap out at about 20 files and then you're done. Foundry can take all of your data; all you do is point it to a SharePoint folder or another folder that could have hundreds of thousands of documents, and it doesn't matter. So you get a lot of flexibility there, and then, once you write the instructions, all you do is deploy it as a web app and you're good to go. You get all the customization, meaning I can have as much data as I want, it's my data, I can write custom instructions, and I can pick the model I want to use, GPT-4, GPT-4.5, Grok, Llama, whatever you want. Once you pick that, you deploy it. You don't have to code the interface, you don't have to code the data connections, you don't have to code anything; it's really point and click. So the customization is really good. This is what I would call the Cadillac version of custom GPTs, with the no-code, low-code approach, so you get a lot of flexibility in there.

Speaker 2:

Say you only want to do it on a smaller scale. ChatGPT Projects would be a better option because it's even simpler than Azure AI Foundry. You upload your 20 documents, you write your instructions, and you're done; there's no other deployment. You do get the safety and security that you get in Azure AI Foundry as long as you're on ChatGPT Team or ChatGPT Enterprise, so you get all the benefits of the data protections. You're just limited in the number of files that you can use in ChatGPT Projects versus Foundry. The only other downside with ChatGPT Projects is that you have to use ChatGPT; you can't pick different models underneath it. So think about Azure AI Foundry: it operates much the same way. You just get more flexibility and more capability, and that's really the big difference between the two.
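For those curious what a Foundry-style deployment boils down to under the hood, here is a minimal sketch, assuming the Azure OpenAI "on your data" chat pattern with an Azure AI Search index already pointed at your documents. The endpoint, deployment, index, and API-version values are placeholders, and in practice the Foundry portal handles this point-and-click, so treat it as illustration rather than required code.

```python
# Minimal sketch of a "chat on your data" bot, assuming the Azure OpenAI
# chat-completions API with an Azure AI Search index already built over your
# documents. Endpoint, key, deployment, index, and API-version values are placeholders.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",  # placeholder; use a current API version
)

response = client.chat.completions.create(
    model="my-gpt4o-deployment",  # your model deployment name in Azure
    messages=[
        {
            "role": "system",
            "content": "You are the firm's proposal librarian. Answer only from the indexed documents.",
        },
        {
            "role": "user",
            "content": "Find past proposals for K-12 school projects from the last five years.",
        },
    ],
    # The "on your data" extension: point the chat at an existing search index
    # instead of uploading files into the conversation.
    extra_body={
        "data_sources": [
            {
                "type": "azure_search",
                "parameters": {
                    "endpoint": os.environ["AZURE_SEARCH_ENDPOINT"],
                    "index_name": "firm-proposals-index",
                    "authentication": {
                        "type": "api_key",
                        "key": os.environ["AZURE_SEARCH_KEY"],
                    },
                },
            }
        ]
    },
)
print(response.choices[0].message.content)
```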

Speaker 1:

So let me ask you this, because we know that AI is good at dealing with lots and lots of data, and this may be a Mr. Obvious question, because we've talked about ChatGPT Projects and we've talked about Google NotebookLM, where you created a notebook for this and a notebook for that.

Speaker 1:

But when you're thinking about the idea of the firm librarian, of course there are the project-based things, like building codes and material science and all kinds of things that AEC firms need to reference, but then there are also operational things, and then, of course, there are HR things and the IT department. There are all kinds of departments and things. Would you create one chatbot to rule them all, or would you break it down: here's our bot for HR, here's our bot for IT, here's our bot for projects, and maybe even specific project-type verticals or something like that? Strategically, how would you look at it?

Speaker 2:

Strategically, I would create a bot for each of those different functions or operations, because I think that if you can tighten up the content for whatever the bot is, it's going to perform better. If you have a bot just for project resumes, it's going to do much better if it's only focused on project resumes and doesn't have to sort through all this other data that's not necessarily related to what it's looking for. The beautiful part is that you can easily create these bots; like we were just talking about, it's quick deployment, so they're very easy to stand up. So creating specific bots for different tasks is something I would do: I would have a resume bot, I would have a proposal bot, I would have an SOP bot, I would have a proposal-lookup bot. It's not that they're hard to create, and you're going to get better results and better performance when they're tightly tuned to one specific task.
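To make the "one bot per function" idea concrete, here is a hypothetical sketch of how a firm might keep those scoped bots organized, with each bot getting its own tightly focused instructions and its own document source. All of the names and paths are invented for illustration.

```python
# Hypothetical registry for the "one bot per function" approach: each bot gets
# tightly scoped instructions and its own document source, rather than one bot
# sorting through everything. All names and paths are invented for illustration.
from dataclasses import dataclass


@dataclass
class BotConfig:
    name: str          # what the team calls it
    instructions: str  # the behavior you'd paste into Projects or Foundry
    data_source: str   # where its reference documents live


BOTS = [
    BotConfig(
        name="resume-finder",
        instructions="Given a project type and role, return the best-matching staff resumes and explain why.",
        data_source="sharepoint://marketing/resumes",
    ),
    BotConfig(
        name="proposal-lookup",
        instructions="Find past proposals similar to the RFP described and summarize scope, fee, and outcome.",
        data_source="sharepoint://marketing/proposals",
    ),
    BotConfig(
        name="sop-assistant",
        instructions="Answer questions strictly from the firm's standard operating procedures and cite the SOP section.",
        data_source="sharepoint://operations/sops",
    ),
]

for bot in BOTS:
    print(f"{bot.name}: reads from {bot.data_source}")
```

The scoping is the whole point: the resume bot never has to wade through SOPs, which is exactly why a tightly tuned bot tends to perform better.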

Speaker 1:

And then you can create a bot to rule them all.

Speaker 2:

That's right. I want one bot to rule them all. It's like the ring.

Speaker 1:

If you set it on fire, it glows. I'm pretty sure we can make it glow. All right.

Speaker 1:

So we know that if we create these custom GPTs, it helps us centralize the knowledge, and it helps us share that knowledge, distribute it, pass it along. We know, and we haven't really talked about it necessarily, that we can get people onboarded faster in terms of hiring: here are all the trainings, here are all the things you need to know. We can also take a junior engineer and more readily share with them information, some of that knowledge, from a senior engineer.

Speaker 1:

And we know that having these custom GPTs in place, with the data in place, helps us deliver and find consistent answers firm-wide all the time, which is huge. I mean, the inconsistency back when I started my career, when I was working in marketing and business development part of the time: we went to file drawers and pulled out pieces of paper, and hey, maybe someone had modified that and kept it in their own file cabinet. So this changes that equation completely. But those are some of the pros. What are some of the cons around creating our own custom GPTs?

Speaker 2:

You know, the one big con right up front is the concept of build versus buy. If you build this, you own it, so you have to maintain it. That means making sure you've put a structure in place so that there's governance: what data is going into the bots, when and how often is it reviewed, when is it refreshed. So there's that whole concept of the upfront training, but then there's also the configuration and the maintenance of it. If you bought ChatGPT off the shelf and you just use the tool, the tool is the tool, and you don't have to worry about that maintenance part.

Speaker 2:

When you build a bot, yes, you're going to get a lot of benefit from it. It's very customized to you and to your data. But you've got to remember, if you build it, you own it. And if you own it, that means you're going to have to put resources and time into maintaining it, particularly around the initial ramp-up and training the model.

Speaker 2:

When we say train, it's basically giving it data. You're going to have to go through the effort of giving it the right data the first time, and then you've got to configure and maintain it going forward. How often do you go back and refresh the data? Is it once a quarter? Is it once every six months? What's your governance process for adding and removing content from the model? Those are all the things you need to take into consideration. Yes, it's great that you can do all of that and have a customized bot, but if you build it, you own it, and you've got to maintain it. That's pretty much the biggest con: the governance, keeping it up to date, getting the data there, and making sure you can maintain it going forward, so that the tool stays productive and keeps producing the content that you want. You just can't set it and forget it.
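As a small illustration of the "you build it, you own it" maintenance work, here is a sketch of a refresh job that flags new or changed documents for re-indexing on whatever cadence your governance sets. It is generic Python with no particular indexing API assumed; the folder and manifest paths are placeholders.

```python
# Sketch of the ongoing maintenance described above: a periodic job that detects
# new or changed source documents so someone (or something) re-indexes them.
# It only compares file hashes against a saved manifest; the actual re-indexing
# step depends on whichever tool you chose and is left as a stub.
import hashlib
import json
from pathlib import Path

SOURCE_DIR = Path("./knowledge_base")      # placeholder: your document folder
MANIFEST = Path("./index_manifest.json")   # file hashes from the last refresh


def file_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()


def find_stale_documents() -> list[Path]:
    previous = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    current = {str(p): file_hash(p) for p in SOURCE_DIR.rglob("*") if p.is_file()}
    stale = [Path(p) for p, h in current.items() if previous.get(p) != h]
    MANIFEST.write_text(json.dumps(current, indent=2))
    return stale


if __name__ == "__main__":
    for doc in find_stale_documents():
        # Stub: push `doc` back through whichever indexing pipeline you deployed,
        # on the cadence your governance process calls for.
        print(f"Needs re-indexing: {doc}")
```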

Speaker 1:

Yeah, and with that explanation you just touched on a lot of the episodes we've had so far: governance and policies and training and all of those things. So that's really good information. We need to remember that the idea we're talking about here, these custom GPTs, is not just a chatbot. It is, in fact, your internal expert on whatever it covers, like the different chatbots you were describing. So it is your internal expert. I ask you this at the end of every episode: how does the listener make this actionable? What's one thing they need to do with everything they heard us talking about today? What's one thing they need to do today to start to implement this idea of creating a custom GPT?

Speaker 2:

You know, I think we've said this numerous times before, but identify that one problem first.

Speaker 2:

You don't have to try to solve something really big. Just go and say: what I want to do is create a bot around finding information out of past proposals. Start with that one project and build yourself a, call it, a minimum viable product or proof of concept. Go create a ChatGPT project or test something in Foundry, but with just one small thing, and see how it works, understand how it works. And what you're going to find is that once you get that first one built, you're going to find use case after use case after use case that you'll want to build.

Speaker 2:

So again, I think we've said this numerous times before, but start with the problem, start with one small thing you think you can solve. I want to be able to look at past projects. I want to look at past proposals. I want to do something where I'm just querying information that we have, to see how it works. You'll get addicted to it, and then you'll eventually want to continue and expand it to other parts of the organization.

Speaker 1:

Yeah, that's great advice, and we hear that: hey, we're trying to do this, but it just got too big and too unruly. It started too big. A lot of times it makes sense to start small like that: get that small win, build something well the first time, and then continue to iterate on it and build the next one the same way. All right, Frank, this has been a good one. This has been fun.

Speaker 1:

I am always curious, when we record one of these, knowing it's going to be published and roll out via YouTube and the different podcast platforms in a week or two, whatever our production schedule is, how people are implementing what we're talking about. So, as you're listening to this, first of all, the things we've talked about, Azure AI Foundry, ChatGPT Projects, NotebookLM, all the things we've touched on: don't worry about figuring them out or Googling them. Our production team is putting the links in the show notes below, so just go down there, find the link, click on it, and you're good to go. The other thing is, as you listen to these, and I still think it's easier on YouTube, which is odd to me, because this started out as a literal podcast, so I always think of it as audio, but we publish this in audio and video, so wherever it is that you consume this, in the show notes you'll find the links, and we would like for you to ask us questions, make comments, and give us feedback on everything we're talking about, because that helps to guide us.

Speaker 1:

You know, frank said that right at the very beginning of this episode today, that that this topic comes from work with a client, and he got a message about it. You know, coincidentally, got a message about it via LinkedIn today, right the day that we're recording this. So we're always open to and welcome and craving your feedback, your questions, your comments on these, because that'll guide topics that we cover in the future. There's no shortage of topics around AI for AEC, none whatsoever but we want to make sure that what we're talking about is the most relevant to you and, um, it's the things that resonate with you and your work.

Speaker 1:

So do us that favor: go wherever it is that you consume this and leave us a question or a comment. Let us know what you think, what you're wondering about, what you need to know more about, and we will respond to that. So thank you for listening to this, or watching it, whichever version you're doing. And Frank, as always, I appreciate you joining me for this today, and we'll do it again next week.

Speaker 2:

Yep, looking forward to it. I always look forward to our conversations, Jeff.

Speaker 1:

It's fun. I liked digging into this one, so thanks for that. This is KP Unpacked. It's where the biggest ideas in AEC, AI and innovation all collide in one podcast, powered by KP Reddy Co. This is where we break down the trends, the technology, the discussions and the strategies that are shaping the built environment and beyond. That means where you live, where you work, where you play, where you pray, all the things that you do in the built environment. Everything that we talk about here in the various versions of this podcast is impacting those places and probably your life.

Speaker 1:

So we appreciate you being here, and, as I said earlier, my name is Jeff Echols. I'm the Executive Director of Catalyst. Catalyst is quickly becoming the hub for everything that we do: our integrated owners forum, our mastermind groups, our advisory work, and, of course, conversations about AI. Our research team is publishing articles there on a daily basis, so you'll hear more about that in the post-roll here; you heard about it in the pre-roll, and you'll find links to it in the show notes below. So check out Catalyst. It's our brand new online community, and we want you to be a Catalyst, so we'll see you there. We'll be back again next week. We appreciate you. And actually, Frank and I are going to see each other in just a couple of days in Atlanta for our Q2 one-day in-person mastermind event. So, Frank, I'll see you in a couple of days.

Speaker 2:

Absolutely. Looking forward to having you down.

Speaker 1:

It'll be great to be back. All right, Thanks everybody.