KP Unpacked

AI in AEC: Data Privacy in the AI Era

KP Reddy

In this episode of AI Unpacked, Jeff Echols is joined by Frank Lazaro, KP Reddy Co.'s in-house AI expert, to dive into the latest trends and innovations in AI for the AEC industry. Together, they break down how AI is transforming business operations, from improving efficiencies to boosting productivity.

In this bite-sized, actionable episode, Jeff and Frank explore the various AI tools reshaping the AEC world, offering insights that you can implement in your firm today. Whether you're new to AI or looking to optimize existing tools, this episode has you covered.

Key Takeaways:

  • How AI is streamlining business operations across the AEC industry
  • The role of AI in enhancing productivity and cutting down on manual work
  • Frank’s insights on the most effective AI tools for AEC professionals
  • The benefits of embracing AI for both short-term wins and long-term scalability

Tune in to learn how AI can help you work smarter, not harder in the AEC space.

🎉 Special Offer for KP Unpacked Listeners: Get 55% off your ticket to the 9th Annual AEC Summit on October 29th at the Diverge Innovation Center in Phoenix! Click the link below and use promo code UNPACKED55 at checkout.

🔗 9th Annual AEC Summit

Don't miss this opportunity to connect with top minds in AEC and beyond. Tickets are limited—act fast!

Speaker 1:

Hey, welcome back to KP Unpacked. This is where the biggest ideas in AEC and innovation collide. This podcast is powered by KP Reddy Co., and it breaks down the trends, the technologies, the discussions, and the strategies that are shaping the built environment and beyond. If you've joined us before, you know there are lots of different versions of this podcast; I guess the one constant is me. My name is Jeff Echols. I'm a senior advisor at KP Reddy Co., and in this version, AI Unpacked, I am joined by Frank Lazaro. He is my friend and colleague here at KP Reddy Co. and our in-house AI expert. I love this version of Unpacked because I get to learn a lot. Frank and I go through different aspects of AI, different tools, different uses, and these are meant to be bite-sized chunks that give you actionable things you can take away from this podcast and implement in your firm today. So, Frank, welcome back. Glad to see you.

Speaker 2:

Yeah, great, glad to be back. It's interesting. I've had several people reach out to me who have started listening to the podcast, and they love the concept of how we dive deep on one thing, but it's digestible; we can get through it in 20 minutes. So I've gotten some really good feedback. I'm glad it's resonating with people, that it's not just you and I enjoying what we're doing, and that others are getting a little bit from it.

Speaker 1:

It's good to have several listeners. Just saying: you, me, our moms, and two other people. That's good.

Speaker 2:

The producer on the back end, you know, they listen to it as well.

Speaker 1:

This is true. This is true. We're growing; we're growing exponentially, and with each listener the math increases. All right, so for our several listeners out there, we're really glad that you're here. We say that a little bit tongue in cheek, but we also understand this is a pretty narrow, niche audience, and we're perfectly happy with the fact that we can dig deep on these topics and make them actionable and digestible for people in the AEC industry. So we're happy to do it for them.

Speaker 1:

Today we're going to talk about data privacy. Being the director of our mastermind programs and spending a good chunk of my week sitting down with innovation leaders and construction tech leaders, I know that many of these folks' firms have clients that are going: hey, what about our data? How are you using our data? Where is our data going? Is it being exposed? There are all kinds of those questions. How are you using AI? I guess that's where it first starts. But there's a lot of concern over the data, and I just posted something in the last day or so from my old show.

Speaker 1:

This is a blast from the past. My old show was called Context and Clarity Live, and I recorded a session, probably two years ago at this point, with Matias del Campo. At the time he was at Taubman College at the University of Michigan; now he's at the New York Institute of Technology, where he's head of AI and AI research. We sat down for a conversation about ethics in AI, and it was all about the same thing: how is data being used? So I'm glad we're digging into this today, Frank. I think this is going to be our most popular episode so far.

Speaker 2:

Yeah, it's interesting. And I guess this is also for our listeners: a lot of our topics actually come from our online community on Catalyst, where people are asking questions, but they also come from one-on-one conversations when we're doing advisory work with clients. I make note of what people are asking: what are the top three or four things people always ask about? And this topic, I would say, is probably either number one or number two. Every single time we go in and have a conversation about innovation and start talking about artificial intelligence, data privacy and data security tend to be questions that are consistently asked, regardless of the size of the firm and regardless of the forum we're talking in. Whether I'm speaking from the stage and doing the Q&A after a presentation or meeting with a client one-on-one, this inevitably is either the number one or number two question that comes up.

Speaker 1:

Yeah, yeah. So we know that AI saves time, right? And I didn't even talk about your book in the introduction; you can give us the quick recap on that if you'd like. But we know that it saves time. We should be using it to get more efficient. We also know that we should be thinking about different ways to serve our clients. But again, to your earlier point, efficiency and everything else aside: what about data security, and how is it handled in different tools and different applications? So what are you hearing, whether from Q&A after a speaking event or in an advisory session with a client? Let's start there. What concerns are you hearing?

Speaker 2:

Firms want to incorporate artificial intelligence, but those same exact firms also push back: we don't know what happens with our data. And so what you find is that there's this pull, this tug: we want to be innovative and we want to incorporate this, but I also don't want to share my data with these public AIs. And for a lot of reasons, I don't blame them, right? So what you're finding is that they're looking for solutions, they're looking for ways to be able to incorporate this great technology.

Speaker 2:

To go back to what you mentioned earlier about the book: yes, I want to find those 12 minutes of efficiency and productivity, I want to increase my utilization, but I'm not going to put my data at risk if I don't really know how it's being used. And what you find is that at most firms, the IT departments are the ones that really push back on data security, and there's an obvious reason why. But I think there are ways to mitigate that and ways to solve for it, in a multitude of ways, so that you can still get the benefit. You can still find your 12 minutes of efficiency but, at the same time, protect yourself and your data, particularly your client data.

Speaker 1:

Yeah, and it's necessary. But again, a lot of the people I spend my time with, right, I'm over on the mastermind side; you spend more of your time over on the advisory side. What I'm hearing on the mastermind side is: hey, our clients, we've got this big RFP and they want to know exactly how our data is going to be used. They want us to prove that our data has been scrubbed, et cetera, et cetera. So I think a lot of the concern is this.

Speaker 1:

It's certainly in-house, right? There's certainly a concern with the IT department inside the AEC firm, but it's also being driven from the client side, from the user side, maybe, if we use that term.

Speaker 2:

You know, and it's more problematic for firms that are chasing a lot of the GovCon work, right? Sure, sure. When you're dealing with the federal government or with state governments, there's data retention and there are data issues, data integrity issues, excuse me.

Speaker 2:

So it's one of those things where, depending on what kind of work you're chasing, this becomes a bigger problem for some than for others. But generally speaking, most firms are like: no, I don't want to use the public ChatGPT because I can't secure the data. So what you are finding is that a lot of firms are now moving towards this concept of private AI. You have clients that are going out there and custom-developing a ChatGPT Enterprise solution. Now, there are obvious pros and cons to that, right? The pro is that I control my environment; I can control my data. The downside is that it costs more to do the upfront development to get it up and running. So now it's a balance: do I go the cheap route and potentially have some unknown data integrity and data privacy issues, or do I go a different route, spend more money, and have a solution that actually solves what I'm looking for?

Speaker 1:

Yeah, and I guess maybe we skipped a step, because at the heart of everything there needs to be an AI policy. We had David Shulman, who's an attorney in Atlanta; he spoke at our summit last October and has also done a session with our mastermind groups. David is an expert on IP and AI, and he walked all of our mastermind members through the things you need to pay attention to and the things you need to consider as you build out your own AI policy. And, by the way, you need to have an AI policy, because whether you approve it or not, whether you like it or not, whether you know it or not, your employees are using AI in some way, even beyond their Amazon wish list. They're using ChatGPT or they're using whatever. So what policies and best practices do you need to have in place that everybody's adhering to?

Speaker 2:

You know, at a minimum, you just need to have rules on what you can and can't do with artificial intelligence, right? So when you think about that for a second, your policy, if you wanted to start at the most basic level, should be: you're allowed to use these tools; you're allowed to do these things. Some firms allow it for internal meetings and internal-type activities but not client deliverables. Other firms are like: try to put it in everything that you possibly can and figure out where it fits into your workflow. So at a minimum, you have to go through the evaluation of what tools you're going to allow everyone to use.

Speaker 2:

What you find is that most AEC firms are Microsoft shops, meaning they have Microsoft Office and Microsoft Outlook. It's very easy to turn around and say: you can only use Copilot. That's a very, very easy thing. And Microsoft's data policies are the best I've seen so far among all of the AI solutions, mainly because what they say is that if your data lives within your Microsoft tenant, in your Azure environment, it stays there. It doesn't go out to the underlying model. So the firms that are Microsoft shops tend to gravitate towards those kinds of solutions, simply because they're getting the best of both worlds.

Speaker 2:

One, it's already built into my existing licenses, so I don't have another subscription; it's built into my Microsoft license. And my data is generally secure because it's within that Microsoft environment. That's great. So it's as simple as: identify what tools you want everyone to use, and then start putting rules around them. These are the things you're allowed to do; these are the things you're not allowed to do. That should be your baseline policy.
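Editor's note: the baseline policy Frank describes, an explicit list of allowed tools and allowed uses, can be made concrete in just a few lines. The sketch below is purely illustrative; the tool names and data classifications are hypothetical placeholders, not recommendations.

```python
# Hypothetical AI-use policy: which tool may touch which class of data.
# Default-deny: any (tool, data class) pair not listed is blocked.
POLICY = {
    ("copilot", "internal"): True,        # e.g. meeting notes, internal drafts
    ("copilot", "client"): True,          # data stays within the firm's tenant
    ("public_chatgpt", "internal"): True,
    ("public_chatgpt", "client"): False,  # never paste client data into public AIs
}

def is_allowed(tool: str, data_class: str) -> bool:
    """Return True only if the policy explicitly allows this combination."""
    return POLICY.get((tool.lower(), data_class.lower()), False)
```

The default-deny lookup mirrors the "rules first" advice: anything not explicitly approved stays off-limits until it has been evaluated.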

Speaker 1:

Yeah, that obviously takes a bit of due diligence, right? And again, the people that we work with are doing quite a bit of due diligence, vetting all of the tools that are being used, AI-focused or not. They're vetting the tools that they use, and those tools all have terms and conditions; every tool that you're using does. So, okay: we understand the terms and conditions of the tools, we understand how that particular tool handles our data, maybe it's all within Microsoft, maybe it's not, and we start to understand how our data is being used and where it's being stored.

Speaker 1:

Then, as you mentioned, we've got private AI models or custom AI models hosted on our own servers, et cetera. What about regulatory considerations? What's out there? You mentioned this before: for a lot of people pursuing government work, this is absolutely part of the conversation. And there are other types of clients and types of work where this is going to be much more stringent, where clients are going to demand much more transparency, et cetera. So what are some of the regulatory considerations that we need to keep in mind?

Speaker 2:

So one of the things that comes up a lot is: if I'm using an AI note-taker, or I'm using AI to create this content or something, what happens if we get sued? Where are those things? And so I tend to lean back on the sense that you really now have to start thinking about your data retention policy. How long are you really required to keep information? You have to build some diligence around that. You see a lot of firms that get into this habit of: oh, we've been around for 25 years, and I have proposals, content, notes, and everything dating back 25 years. We probably need to revisit that.

Speaker 2:

We probably need to revisit the fact that, now that everything is digital, what is the legal requirement for us to retain information, and at what point do we have to consider either archiving or purging some of it? Not necessarily just to limit our liability, but also because, at some point, does the retention of the data actually benefit the business long term? Do I need something from 10 years ago? So there are a lot of questions around that, and I think you really have to go in and start thinking about your data retention policy, because that is not something most firms are thinking about going forward. Then you need to think about it from a government perspective: what does the contract stipulate? What am I allowed to do and not do? It's really having to take all of those things into consideration and understand where your business is.
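Editor's note: to make the retention question concrete, here is a minimal sketch of a retention check. The seven-year window is a hypothetical assumption (the real period depends on your contracts and jurisdiction), and file modification times stand in for proper records metadata.

```python
from datetime import datetime, timedelta
from pathlib import Path

# Assumption: a 7-year retention window; confirm with your contracts and counsel.
RETENTION_YEARS = 7

def files_past_retention(root: str, years: int = RETENTION_YEARS) -> list:
    """Flag files whose last-modified date falls outside the retention window.

    In practice these would be reviewed for archiving or purging,
    never deleted automatically.
    """
    cutoff = datetime.now() - timedelta(days=365 * years)
    return [
        p for p in Path(root).rglob("*")
        if p.is_file() and datetime.fromtimestamp(p.stat().st_mtime) < cutoff
    ]
```

A scheduled report built on something like this turns "we probably need to revisit that" into a recurring, auditable review.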

Speaker 1:

Yeah.

Speaker 2:

One of the things.

Speaker 1:

I'm going to throw this out there because I wonder, and this is a question I have: what do the risk managers and risk management consultants out there have to say about this? Because over the years, as things have changed, we went from taking a camera out to the job site to a lot of people using their personal smartphones on the job site to take pictures. And as it turns out, that's not a good idea, because if something happens, everything becomes discoverable and your employee's phone gets subpoenaed. So if you're a risk management consultant, maybe comment on this post, whether you're consuming it in video or audio format, because I think that's going to end up being an interesting part of this discussion as well.

Speaker 2:

Yeah, and you're also seeing some specialty apps coming out that target segments of our industry that are focused on, say, GovCon. There are some GovCon AI proposal tools out there, and they're tailored very specifically in how they operate. So in some of these instances, and we go back to the evaluation of the tools, depending on the kind of business that you're pursuing, you may not be best suited to use a general AI like ChatGPT. You may have to look at something a little more specialized, whether that's from an OpenAsset or a Unanet, where they have these specialized tools. So again, your tool evaluation is going to be really important, depending on the kind of work you pursue.

Speaker 1:

Yeah, absolutely. And also, you need to understand, and as you're listening to this you probably already do, so I'm going to play Mr. Obvious perhaps, but you need to understand what the rules are and how to stay compliant with the regulations in whichever sandbox you're playing in.

Speaker 2:

Well, because of those unknowns, what you do find is that some firms aren't really pursuing the technology or the innovation, because they don't really know how to solve for some of those things, right? And I think that's a miss. I think organizations need to make that investment in time and energy to figure out how these tools work for them, so that they're prepared going forward. Because what you're going to find is that five years from now, ten years from now, if you're an AEC firm that hasn't embraced this kind of innovation, technology, or AI, you're in real trouble, because more firms, the ones that have figured it out, are going to be oriented that way.

Speaker 1:

Yeah, absolutely. I mean, I was talking with somebody a couple of weeks ago whose sentiment was basically: hey, we don't know, we don't understand, so we're not going to. Right? They're going to stick their head in the sand, a complete head-in-the-sand mentality, and I don't think that's the right approach.

Speaker 2:

So yes, there are probably some specialty tools out there, but I also think, and we've been working with some clients on this, there's a focus on developing essentially custom solutions. What's nice now is that a lot of these so-called custom solutions are very much no-code, low-code type deployments. Look at Azure AI Foundry: the beautiful part is that, yes, it's made by Microsoft, but you can pick any model that you want underneath it. You don't have to use Copilot; you could use ChatGPT or Claude Sonnet or Llama. You get to pick the model you want on the back end, and the benefit is that some models are more expensive than others, so you can rein in your costs. You get the custom solution with the model that fits your organization, and it doesn't require any real hard development. You don't need a software developer to get it done.
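Editor's note: the "pick any model underneath" idea Frank describes is essentially a pluggable-backend pattern, where the model name is just configuration, so swapping one model for a cheaper one is a one-line change. Here is a minimal sketch with a stub in place of a real endpoint call; the model names and the stub are hypothetical, not actual Azure AI Foundry API calls.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PrivateAIClient:
    """Wraps a completion backend so the model choice is just configuration."""
    model: str                           # e.g. a deployment name like "llama-3-70b"
    backend: Callable[[str, str], str]   # (model, prompt) -> completion text

    def ask(self, prompt: str) -> str:
        return self.backend(self.model, prompt)

# Stub backend for illustration; in production this would call your private
# endpoint (an Azure AI Foundry deployment, a self-hosted model, etc.).
def stub_backend(model: str, prompt: str) -> str:
    return f"[{model}] {prompt}"

# Swapping models, say to rein in costs, touches only the config value.
client = PrivateAIClient(model="llama-3-70b", backend=stub_backend)
```

Because no application code references a specific vendor, re-evaluating cost or data-handling terms later doesn't force a rewrite.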

Speaker 2:

So there are solutions out there that help solve for some of these data security issues and data questions. The other big thing that comes up, and this ties in loosely, is that cybersecurity plays a more significant role when you start thinking about how these tools are all connected; everything's in the cloud. If you don't have strong cybersecurity policies and solutions in place today, adding this tool is not going to make the problem any easier. So you really do have to be thinking about your cybersecurity, whether or not you're AI-enabled today, and even more so when you're AI-enabled in the future.

Speaker 1:

Yeah, absolutely. So if we think about the pros of AI policies and of really being intentional about your cybersecurity, your data privacy, and your data security: when we have these things in place, it reduces our data security risk, it helps us stay compliant with the different regulations, depending on the sandbox we play in, and it ensures that we are aligning our AI policies with the overall company policies and the culture we've developed in our organization.

Speaker 2:

I also think, even if you want to break it down more simply: giving someone do's and don'ts when it comes to this can help solve each of the things you just identified. If I tell you what you're allowed to do and what you're not allowed to do, we've given somebody rules, and that helps reduce your data security risk. Those same rules help them stay compliant with any regulations they're working under, and they align with the internal culture and the company's policies. So when you fundamentally break it down, just coming up with simple do's and don'ts to start is a great place to help solve for all those other things downstream.

Speaker 1:

Yeah, every game has rules, so why wouldn't the game that you play with AI have rules? All right, what are the key takeaways? What are the action steps? We promised this at the beginning, right? It's going to be digestible and it's going to be actionable. So what are the action steps we need to take away from this conversation today?

Speaker 2:

Yeah, I think you hit the nail on the head early on, right? Get your AI policy in place. I know we kind of jumped ahead at the start, but we circled back to it: get that AI policy in place. The second thing is to really figure out what tool works best for your organization. Obviously, people that are focused on GovCon have other requirements, so that requires other diligence and maybe specialized solutions. For the rest of us, we have options, right? There are the obvious no-code, low-code type options out there that you should be exploring.

Speaker 2:

If you're a smaller firm, off-the-shelf solutions, provided you set up your configuration and settings correctly, can help solve those problems as well. So, AI policy is number one. Two is the evaluation of the tools that are going to be best suited for your organization. And the third, once you've solved for those other things, is to start thinking about your regulatory considerations, anything unique that you want to solve for. When you have those three pillars in place, I think you're in a really good position to figure out how best to use artificial intelligence inside your firm.

Speaker 1:

Yeah, absolutely, agree 100%. You have to be intentional, you have to do your due diligence, you have to vet all the tools, how they're used, where they're used, your policies, everything else. So I think this was a really good discussion on data privacy, data security, and AI. We hear concerns on those fronts day in and day out, so hopefully this conversation that Frank and I have been having will help you put some of these things into action.

Speaker 1:

Again, wherever it is that you consume this, be it the podcast version, the YouTube video version, or maybe you're seeing it on LinkedIn or somewhere else: let us know what you think, and let us know what your questions are. What are you struggling with? What do you need to know more about? What are you doing? You know, Frank said it earlier: these topics come from conversations and from work that we're doing with our clients, with the people in our ecosystem, so to speak, and we want to make sure these are as relevant as we can possibly make them. So let us know; help us out with that. Our production team will put links to anything we've mentioned that's needed into the show notes below, and we'll be back again next week with another episode of AI Unpacked. So, Frank, thanks so much for joining me today. It was a great conversation.

Speaker 2:

Appreciate it. Yeah, again, it's one of those hot topics that comes up literally almost every single time we have a conversation with somebody, so I'm glad that we were able to share it with everyone.

Speaker 1:

Yeah, absolutely. Thanks, Frank, thanks to everybody that's listening, and we'll see you again next week. Bye, everybody.