PowerBI.tips

Build for AI or Build for Reports? – Ep. 430


Mike and Tommy debate whether your next investment of time should go toward making reports better or making your semantic model AI-ready. They unpack Microsoft’s new “Prep Data for AI” features — AI instructions, AI data schema, and verified answers — and ask the hard question: is the juice worth the squeeze?

Beat from the Street: Lakehouse Renaming Gone Wrong

Mike shares a painful real-world experience: after renaming a Fabric Lakehouse (simply removing “01” from the name), the SQL analytics endpoint completely broke. Tables were visible in the Lakehouse itself, but the SQL endpoint, semantic models, and Power BI reports all failed to authenticate. Microsoft support’s fix? Just rename it again to something different — and it worked.

The takeaway: don’t rename your Lakehouses unless you’re ready to troubleshoot authentication failures across your entire stack. The SQL analytics endpoint connection string (all 700 characters of it) didn’t change, but something in the internal name resolution broke silently.

TMDL to the Rescue

When the renamed Lakehouse left 15 tables with broken M code references, Mike used TMDL (Tabular Model Definition Language) in VS Code. A quick Ctrl+Shift+L to multi-select the old Lakehouse name across all Power Query expressions, replace with the new name, and everything refreshed cleanly. A great real-world use case for why you should have TMDL enabled in Power BI Desktop.

The Connections Experience Rant

Mike also calls out the clunky connection re-authentication flow in the Power BI service. After publishing a model with a changed connection string, there’s no clear prompt to fix authentication — you have to dig through semantic model settings, data gateway options, create a new connection, save, authenticate, and then go back to select it. Way too many clicks for what should be a straightforward operation.

Main Discussion

  • Prepare Your Data for AI — Power BI — Microsoft’s overview of the new tooling features: AI data schema, AI instructions, and verified answers to optimize your semantic model for Copilot.

  • AI Instructions for Copilot — Power BI — Detailed walkthrough on setting up AI instructions that provide context, business logic, and terminology guidance directly on your semantic model.

Three New Tools for AI Readiness

Tommy breaks down the “Prep Data for AI” features announced at Microsoft Build:

  1. AI Data Schema — Select which tables and columns Copilot can see. Finally, a way to scope down what the AI uses instead of exposing the entire model.
  2. AI Instructions — Custom system instructions on the semantic model itself, helping Copilot understand business terminology, priorities, and how to interpret questions.
  3. Verified Answers — Pin specific visuals from Power BI reports as canonical answers to common questions, so Copilot surfaces curated results rather than generating from scratch.

These features work across Copilot in Power BI, Data Agents, and even Copilot in Microsoft 365 (Outlook, Teams home).
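Conceptually, the AI data schema is an allow-list over the model's metadata. A toy sketch of the idea, not a Power BI API (none is public yet), with invented table and column names:

```python
# Toy illustration of the AI data schema idea: an allow-list that scopes
# which tables and columns the assistant may see. This is NOT a Power BI
# API (there is no public one yet); the names below are invented.
def scope_schema(model: dict[str, list[str]],
                 allowed: dict[str, list[str]]) -> dict[str, list[str]]:
    """Keep only allow-listed tables and, within them, only the
    columns the allow-list grants."""
    return {
        table: [col for col in cols if col in allowed[table]]
        for table, cols in model.items()
        if table in allowed
    }

full_model = {
    "Sales":   ["OrderID", "Amount", "InternalCost"],
    "Staging": ["RawJson"],  # staging table Copilot should never see
}
visible = scope_schema(full_model, {"Sales": ["OrderID", "Amount"]})
```

The payoff is exactly what the hosts describe: Copilot reasons over a curated slice of the model instead of the whole thing.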

Is the Lemon Worth the Squeeze?

Mike raises the central tension: Microsoft is now asking report builders to maintain good DAX, build attractive and actionable reports, optimize for mobile, and invest time curating AI metadata. Where should you invest your limited time?

The hosts agree the tooling is a step in the right direction — but needs to mature. Mike draws an analogy: AI is in the “horseless carriage” stage. Just as early cars were carriages with engines bolted on, today’s AI features feel bolted onto existing report paradigms. It works, but it’s not yet optimized.

The Semantic Model Is Still King

Both hosts emphasize that the semantic model remains the single source of truth. Whether AI succeeds or not, investing in clean models with proper descriptions, synonyms, and organized measures pays dividends for traditional reporting too. The AI metadata is additive — it doesn’t hurt your reports.

Real Data: ChatGPT vs. Google Traffic

Mike shares analytics from PowerBI.tips: over 12 months, ChatGPT referred about 2% of web traffic. But narrowing to the last 30 days, that number jumped to 7%. The trajectory is clear — AI-driven discovery is growing fast, and content creators need to think about how AI surfaces their work alongside traditional search.

What Needs to Improve

The hosts identify several gaps:

  • No API access yet for AI instructions — you can’t automate or version-control them programmatically
  • Perspectives should integrate with the AI schema selection rather than requiring manual reselection
  • Error messages across Fabric remain frustratingly vague — “something went wrong” doesn’t help anyone debug
  • The experience needs to prove ROI before most organizations will invest the extra authoring time

Looking Forward

Mike and Tommy agree the Prep Data for AI features are headed in the right direction but aren’t yet mature enough to demand priority over core reporting work. Their advice: start small — add descriptions and synonyms to your semantic model (you should be doing this anyway), experiment with AI instructions on one model, and watch how the tooling evolves. The semantic model investment pays off regardless of whether AI takes off or not.

Episode Transcript

Full verbatim transcript:

0:01 [Music] Thank you. [Music] Good morning everyone and welcome back

0:34 To the Explicit Measures podcast with Tommy and Mike. Good morning. Oh yeah, good morning. It’s Thursday this week. We’ve clipped on through this week. It’s going fast. We’re already into June. Zooming through here. It feels like this year has already gone by so fast. My family and I were talking. We’re just like, “Wow, it’s already halfway through the year. We’ve already made it to the halfway month.” I really don’t believe that at all. I’m worried about that, actually. So, we’ll keep on

1:06 Keeping on. The days are long and the years are short. I don’t know exactly how the phrase goes, but all right, let’s talk about our main topic today. So, our main topic for today is going to be around building AI, or building for AI, my apologies, or building for your reporting. What does this look like? As you can tell, Microsoft is putting Copilot all over the place: over Fabric, over Microsoft applications, over Office documents. And I heard some interesting analogies recently. I watch

1:39 A lot of YouTube. They made the analogy of AI being in the horse-and-carriage stage. Apparently when they made engines or cars, they had replaced horses with motors or steam engines or something, right? In the early days, they just took off the horse and replaced it with an engine, right? Something gas powered. And yeah, it worked, but it wasn’t very optimized yet. It wasn’t very efficient. So it took a couple years to

2:10 Refine the design and start removing the idea of the horse and replacing it with the engine. So, it was a very early design, wasn’t fully functional, needed a couple years to mature and really come into its own. And it feels like that’s where we are with AI right now. It’s this really awkward stage. We’re in the early-teenager stage of AI. It’s awkward. It doesn’t quite do exactly what you want. You’re not quite sure how to handle it. It’s growing faster than we thought it would. You’re growing up quick. You grew three inches this year,

2:42 But you’re in that awkward phase right now. So, we’ll talk more about that in a little bit, and especially how this is going to interact with your report side of things. All right, with that being said, I do have a, we’re going to call it, beat from the street. Something that I experienced here recently that I would like to share with everyone. Tommy, let me ask you a question. When you are building inside Fabric, how often are you renaming your lakehouse? Do you ever rename your lakehouse? Well, actually, from one of our naming conventions, I

3:16 Have a pretty good setup for the whole process, including the name of the lakehouse. Yep. I’m pretty good with the names I have there, at least. I’m dealing more with changing notebook names, because the idea changes, but lakehouses, I’m pretty straightforward with what it’s going to be. So, not necessarily the creation of the lakehouse with a name, but did you change the name of a lakehouse? Like, a lakehouse had a name initially and then you renamed it to something else? Yeah, because I had too many things and I wasn’t sure which one

3:48 My initial model was pointing to. So, believe it or not, I feel like naming the lakehouse is quite difficult. Like, what do you really name it? Do you just call it data warehouse, data catalog? There are a lot of buzzwords. Anyways, all this to say, I don’t know what I did, but I experienced the weirdest error I’ve seen in the lakehouse. So, Tommy, your experience has been you had that lakehouse, it was named incorrectly, and you were able to step in and just rename the lakehouse. It just worked. The rename worked

4:20 And then you could still hit your semantic models against that lakehouse and the tables and the SQL analytics endpoint. For whatever reason, whatever name I gave my lakehouse, I totally just killed the SQL analytics endpoint. It would not work. I could look at the tables in the lakehouse, but if I went to the SQL server, it said SQL Server could not authenticate back to the lakehouse. And this was only a name change I did. It was my lakehouse01. I just deleted the 01 at the end of it and it just freaked out.

4:51 It could not connect to anything, and I was like, what the heck? What’s going on? So, I submitted a help desk ticket. I went through the whole process, had someone on support with me, and they said, just rename it. I said no. They said, just give it a different name. So, I adjusted the name yet again. Everything lit up. It was just fine. So, I don’t know what it was that I named the lakehouse, but either I had created a lakehouse previously, and when I renamed it, it somehow had the same name as a lakehouse that I had previously deleted, or

5:25 Maybe it was a different lakehouse somewhere else in the region. I don’t know what was going on, but for whatever reason, that lakehouse name was not able to be used. And the error message, just for people who are aware of this: if you ever rename a lakehouse, here are the symptoms. You could read the tables in the lakehouse itself. You could click on them, the data would show up, it would appear, right? SQL analytics endpoint failed to authenticate. Semantic model connected to the lakehouse failed

5:56 To authenticate. Power BI reports failed to authenticate. Nothing would authenticate back to the tables in the lakehouse, and it would not let me read them, for whatever reason. Failed to authenticate, which I’ve never seen before in a lakehouse, which was really weird. So, I don’t know, there may have needed to be some data checks before I renamed it, to say you can’t name it that, or don’t use this name, or something, because I did not, which is really strange too, because

6:29 I’ve dealt with this with a lot of semantic models too, and you would think that the idea of that is what is actually, in a sense, talking to each other, and surprisingly not. So there, I had the same instance with a semantic model. I didn’t like the name of it, but I had some things changed in it, and I had two, and I was actually using Git. I’m like, what? I’m just going to duplicate this. However, there were some IDs somewhere that were from the old one. So, in some places it uses the ID, and in some places it

7:01 Uses the name, which is strange. So, really, the name shouldn’t break everything. Yep. Those connection strings, do you ever see, whenever you connect to the endpoint, it’s what, 700 characters? Yeah. So you’re exactly right, Tommy. So the funny thing was, the name of the SQL analytics endpoint never changed. That character-based name for the SQL server never adjusted. It was the same every single time. Just the fact that the lakehouse name changed, something broke all the authentication, and it was

7:34 Totally messed up. So, one, just be cautious. My beat from the street here is just be cautious around that. But what I will say is, after I renamed my lakehouse, I had a semantic model that I had created on top of it, an import model. So the import model was connecting to the SQL server, which was then connecting back up to the lakehouse. Well, because I renamed the lakehouse, the way that the M code was written, the M code was looking for the old lakehouse name, and so it wasn’t refreshing. It was totally breaking. So I was like, “Oh, dang it.”

8:07 And then I went into the editor, and the M code editor would not allow you. So if you went into just Power Query and tried to edit it, the way that it was doing the navigation step. So when you go to Power Query, you start your first table, it starts with source and then it starts with navigation. So when you’re going to a lakehouse, well, you can go to the navigation step and you can try to adjust the name of things, but it wasn’t working. It still was failing because it couldn’t find the table. Can’t find said table. So, I had to go into the advanced editor. Only when I went into the advanced editor did it work.

8:39 After I was able to find it, I found, oh, here’s the name of the old lakehouse that I had. I had to update the name to the new lakehouse. And I was like, what? I’ve got, not a ton of tables, I’ve got like 15 tables in here. I don’t want to have to go through every single table and rewrite them all. I want to just do it all in one shot. Well, I was like, “Oh, I forgot about TMDL.” So, I went over to the TMDL editor, opened it up, scrolled down till I saw the M code sections, and because I was doing something that had an old name and a new name, oh, I was

9:14 Able to just highlight the old name, Ctrl+Shift+L, select every instance of it, delete out the portion of the old name that I needed, update it with the new name, and then I hit close, refreshed my query, and everything just started working. All 15 tables just loaded correctly because it had updated the name. And I was like, what, that is a great use case for where you should be using TMDL. I was editing 15 tables of M code by adjusting a single name or connection to

9:47 The source of something. That’s useful. That saved me some time over going through one thing at a time. That was just brilliant. Anyways, I just can’t speak highly enough about TMDL, how great it is. And the fact that I can see the entire definition of the semantic model. If you’re not using TMDL and you don’t have TMDL turned on in your Power BI Desktop, you need to start looking at it. That is a very important thing you should start learning how to use, because I’m finding I’m saving hours of time mucking with models and going there instead of going into the semantic model and using

10:19 Desktop, which is the latest. And also be wary of your language settings on your computer. So I don’t know if it’s the latest version of the extension, and I’ve been trying to use it a ton, but if I have anything Power Query, Copilot is like, oh, you have a syntax error, especially if you have a more complicated Power Query. So what it does is, rather than putting it on the first line, it’s like, we’re going to put it on the bottom. So Power BI has a syntax; if you open it in VS Code, it says, ah, it’s an error, let’s fix it for you, and then Power BI won’t open anymore. So just that

10:53 I have found a similar problem. In my experience it comes from the Copilot. If you have Copilot on, Copilot is a bit aggressive around the M language. I do have a Power Query or M language interpreter on my machine. So I have gotten one of the add-ins that help me read the M code. I also have the TMDL package as well. That one’s from Microsoft. So I’ve also added TMDL. So between the combination of TMDL and the M code add-in, I don’t do a lot of M

11:26 Editing in VS Code directly, but those seem to help out a little bit. Oh, no. Yeah, because it’s especially when you’re looking at the TMDL tools, like, oh, there’s a syntax error, let’s fix that for you. And then Power BI, no. So, but no, I love opening it up, because I have changed all my report pages. Every single visual, I can rename it across the board with a new name, and basically, oh, what, instead of image count or whatever count by, I can say the official

11:57 Title, and what a game saver. But the connection strings, you have to deal with it being 700 characters, what it is right now. Can’t change that. But again, I don’t really worry about that. The idea here is, once you have something like that, it’s very easy to go into TMDL and actually go into Power Query, make your parameter, add the server name as the parameter, and then you can edit all your Power Query queries, get rid of the server name, and use the variable

12:29 That you made earlier, right? So there are a lot of things where you can do mass edits now with the TMDL experience, and I think, again, I’m just ranting and raving about how great that was. My last note here before we move on to the main topic: instead of a beat from the street, this is more like a rant. This is a pain point that I find, and again, I’d also be interested in the chat here, either on YouTube or other social media places, let me know what you think about this as well. My rant is around the connections experience. So when you create a semantic model and you have a brand new lakehouse connection, in my

13:02 Example, right, I renamed my lakehouse. I had a new connection string because the name of the lakehouse was different, which means I had to go update the M code, which means, since the M code was now different for connection strings, Power BI thought, oh, I don’t have access, I need to go reauthenticate. Fine, no big deal. But boy, digging around in connections. Like, I have the report here in Desktop, and I authenticate it and I publish it. I just feel like in the service something’s missing. Like you

13:36 Have to dig. For example, I published my report, I updated the report, I took the old report out, I just overwrote the old report. Fine, no big deal, right? Yeah, sure. It wouldn’t refresh, and I hit refresh, it wouldn’t go. I’m like, what’s going on here? So when I open the report in the service, there should be some prompt like, hey, this thing’s not authenticated correctly, we can’t authenticate you, or something. It should take me right to the, here’s the connection, reauthenticate, and just work. But it doesn’t. It takes you to this:

14:09 You have to go into the settings of the semantic model. You have to click data connections, I can’t remember what it was, data gateway or something like that. Then in a drop-down menu you have to pick something else, create new connection. Another whole dialogue window pops up, and you have to name the connection and do all these things and make it all appropriate, and then you have to hit save, then you have to hit authenticate, and then you refresh the model again, and it still fails. You’re like, what the heck? No, you have to go back to the settings, pick the new connection in the drop-down

14:41 Menu, and then it works. Like, dude, there are so many more clicks now than what it used to be for connections, or using connections on semantic models. It’s just clunky. So, it’s just a rant. I don’t even know if I could fix it. I don’t even know if I know the right way of solving this problem. But for whatever reason, this whole connections experience is just awkward now. They’re fixing other things, but the result of this is just a

15:13 Very poor, high-click experience, and I don’t like it. Honestly, it’s not just the connection strings that I feel your pain on. It’s a lot of times doing the lineage. It’s hard, especially when you have a lot of different items. It’s like, okay, you’ve got to know the name of everything, the column. It’s hard to navigate because we’re just dealing with so much stuff and dealing with some of those settings. What I’m going to put in for my Christmas gift from Secret Santa this year is that 50% of the budget is spent on error

15:46 Messages. Oh jeez, you can spend 100% of your budget on error messages because they’re never clear enough. I feel like they’ve gotten worse. I’m getting the worst error message possible. I can’t even put this: something went wrong. How do I Google that? There’s nothing I can Google for that message. Well, can’t authenticate SQL Server. Okay. But it gives me no information, like, is there another name? Are there two things? I had no clue. Again, that’s why I had to filter out what did I do wrong. I

16:19 Didn’t even understand how to debug it from the message. So, the whole point of error messages is to point you to a place where you can figure it out a little bit and fix it. That was not happening. Anyways, that’s my beat from the street. TMDL’s awesome. You definitely want to use that. Don’t rename your lakehouses. Try to make a name and stick with it. Don’t do that. It creates a lot of pain. And then, doggone it, the connection stuff is just difficult right now. And maybe that’ll get better over time. I don’t know. The whole experience just feels, especially when you’re just

16:52 Publishing a semantic model connected to a single SQL analytics endpoint, it should just work, and it should just be easier to get that done. I don’t know how it works, but it just feels way overly complex right now at this point. Anyways, enough of that. All right, Tommy, frame us out. Give us the main topic today. Build for AI or build for reports. What’s going on here? What are we going to talk about in our main topic? So this is going to be, I feel like we’ve mentioned this a few times in passing, but now we really have it at the forefront at Microsoft

17:25 Build. One of the more major announcements was this idea where you can prepare your data for Copilot, where you can really prep your data for AI, is the marketing tagline that we’re going with. And Mike, how many times have I said this, on here, there, and everywhere, that all the AI features that Microsoft has released to us, we’ve never really had a good way to manage, author, or customize. And because you dealt with

17:57 Either hide everything, and it’s got to be different for AI, it’s got to be different for natural language. So there was always this rub. Well, prep your data for AI: we actually have three tools that have come out now around a semantic model, and they’re calling it authoring AI instructions and AI data schema, which is all going to be Power BI Desktop. Interesting. They did Desktop. I know our conversations about surface and web, but the idea is this: we can actually allow model authors to define a dedicated, really just right

18:31 Now, just choosing the columns and the tables that Copilot should look at. I’ve asked for that since automated insights, just something like that. Right now we have the ability just to select things. There is this idea, we’re going to add to the model itself AI instructions, where it’s really just custom instructions, your system instructions, helping Copilot generate responses that are more aligned with what people want to find. And then the last feature tool that is actually available is called verified answers, where you can actually choose a visual in any Power BI

19:05 Report, and you simply say what some of the answers should be when people are looking at that particular visual. Those will then go into Power BI Desktop, and then you can vet them, but anyone can ask those. So these are three tools that are trying to make you go in three places, right? It’s not just the tooling itself. It’s for Copilot, data agents, and even Copilot standalone. So when I’m in Office 365 and I’m just at my home screen, or in Outlook, I can say, “Hey, how’s that report doing?” We should be able to feed it. So all that

19:39 Being said, that’s all the features. What’s the impact here? Because, Mike, I have been asking for something like this forever, but you mentioned this maybe last week or two weeks ago, this idea of the semantic model being gold. But we build models right now for tabular modeling analysis. That’s how the tabular model works. It works with dimension and fact tables and relationships. We don’t know if this is going to work,

20:11 Because we also know that AI likes a little different format, a little different structure. So Mike, I’m just going to drop the ball here and really just say, how does this impact semantic models, or what are your first thoughts on this feature? Yeah, my initial thoughts are, is this lemon worth the squeeze? We’ve had a number of AI-based features, Q&A, a couple other features here as well. And I do think there’s an opportunity to see a lot more use of these features. So, letting

20:43 The AI be more intelligent around combing through the model, figuring out what’s important, what’s not important, and ranking things, what elements are supposed to be used by the Copilot. I think this is all a good step in the right direction. Right now, I’m also looking at this going, okay, one, Microsoft, you’re asking us to build a semantic model efficiently with good DAX. You’re asking us to build good-looking reports that are actionable and easy to build and useful for people. And now you’re also asking us to do a whole bunch of extra metadata adding to the semantic model so the AI

21:17 Can use it. So, when I step back from this, I go, where should I invest my time? Where’s the best? I was looking at a customer recently, and we went through a whole session around how to build reports for mobile, right? So add that in there, too. There are all these things you could look at. Okay, well, you’re now going to need this massive checklist of, okay, I’ve got to make a good model. I’ve got to make a good-looking report. It’s got to be ready for mobile, and I have to make it ready for AI, and all these things. Add the synonyms. Make sure the descriptions are there. There’s a lot of burden now on

21:50 The creation of the semantic model. And so as I’m looking at this, I’m going, where is the best place for me to put my time to make sure that the users can utilize this in a fair and easy-to-use way? Right? So, all this to say, every organization is going to be different. Some organizations are going to say, “Yes, our leadership is on board. We’re going to use Copilot. Let’s go, baby. Let’s turn it on. Let’s get people running Copilot.” Other organizations are super scared of Copilot and things, and

22:23 They’re all, no, no, no, don’t use it. Every time you give them some AI or agent thing, they’re like, that’s too much. I can’t understand that. I don’t know how it knows how to do everything right. And they really resist that movement. So, I think there’s a trend happening. I think we’re getting to a point, and Tommy, I think we’re on the forefront of this because we’re so used to just using AI on everything. I’m now expecting most of my app experiences to have some AI. However, I’ve been reading a lot of content around this. It feels like a

22:57 Lot of the AI is doing these very remedial tasks. That’s where it’s most efficient, right? It’s the simple things. Hey, I’ve written a paragraph in my email. Rewrite it with a happier tone, or write an email with super blunt words in it, right? And then tell the AI, “This is too aggressive, make this softer,” right? And it can rewrite. So, it’s these little things, like when I go to publish things from Adobe Express, right? Here’s a 150-character box. I have a limitation on what I can write.

23:29 Okay, I talk about what I think I want to write for my post, my article. And yeah, the AI says, “No, no, no, here’s a better way of writing it.” Right? So, it’s these little nitpicky things, the things that just took me a lot of time. It’s removing some effort from me on those little pieces. And the reason I’m saying this, my observation on this one, is I’m now stepping back and going, “Oh man, these new tools, they don’t have that. I have to really think about what I’m going to write. I have to really think about what my phrase is going to be here,

24:02 And I’m not able to lean on the AI to just make it better.” And so, I think that’s where we’re at right now. I think we’re at this stage, again, I was bringing up earlier the horse-and-carriage example. AI as a horse-and-carriage experience. Yeah, that seems right to me. That seems to jibe. We’re in this very early stage. We’ve been driving around on horses and carriages for a number of years. We just landed AI. We’re going to introduce the steam engine, or an engine, to a car, right? Well, right now you’re just building everything so that the engine is the horse. You just literally

24:34 Swapped them out. We haven’t actually figured out how to put the engine in the front, build all the sheet metal around it. We haven’t figured out how to make the engine reliable and start it with a battery. We have to crank it right now. There are a lot of these manual things that we’re doing to make the AI fit. It’s definitely proving useful. I can go farther and faster with the AI, like I can in the car, but it’s not a comfortable ride yet. It’s not fully there. I’ll pause there. I’ve said a lot of things, a lot

25:06 Of ideas. Go ahead, tell me, what are your thoughts? You’re already expanding the idea here and understanding that this is more than just figuring out the tool, right? Because to your point, Mike, it’s a lot about organizations and where they’re comfortable and how much they’re actually going to let go. And then also, I forgot about our own time to do stuff. Where’s the best time? I’ve been looking at this whole thing, and first it’s the idea that it’s got to work, and I think that’s where I’m starting here, because we have this beautiful thing: every

25:41 Semantic model that we create is a glossary of what should be the facts about the company. You couldn’t ask for any better data to tell you what’s going on in the company, with the metrics that we’ve created. Really, by design, it’s supposed to be, in a sense, the code book, the guide book, the blueprint that your company follows, and it has all these truths in it. So to me that makes it a great candidate, where a lot of times AI is usually going to be fed raw information, input and output, but that’s also what AI does.

26:16 So this idea to me is, oh, we already have this structured data that we’ve already put so much time into, and we have everything refined, and now we can allow Copilot to actually iterate, and I can customize this. Because that’s the first time for me where it’s like, okay, I can actually see some potential here. All that being said, Mike, right now the three tools that are available, I don’t know if they’re going to be enough

26:48 For this to be worth it. And the reason I say this is you’re dealing with someone else’s data, the company’s data. It’s not like creating your own chatbot and putting it on GitHub, where people say, “Yeah, it’s fine. It’s actually pretty cool.” You can’t get this wrong, and it can’t give weird answers. It has to be impactful every time or there’s no use doing it. It can’t be anywhere in the middle. To me, it honestly better either spit out “go jump in a lake” or give you exactly

27:22 What someone intends it to do. Anything lukewarm in the middle won’t land. I know how people are. I know how I am. People who are averse to technology are going to say, “Show me what our top sales are this week,” and if it spits out 18 other things, then all of that is for naught. So anyway, all that being said, before we release this and go out to the world with it, the

27:55 Tooling itself has to have the features and the scale to make it worthwhile for me to spend my time actually releasing it at the company. But I’m frustrated, because would you agree that the semantic model itself is the backbone of all the truth of your company? My problem with your statement there is the information

28:27 Is in the data, and it takes some effort to extract it from this large pile of tables and relationships. That’s the information inside your company. Someone or something, ChatGPT or whatever, has to go through that and say, okay, how do I decompress this large pile of information? What is important in it? And that’s harder to get to. You can do a lot of statistics, you can throw a lot of numbers at things. And this is where I look at this going, I’ve

28:59 Looked at hyperparameter tuning in machine learning. Hyperparameter tuning lets you run machine learning against a data set when you’re looking for correlation in your data. What you do is say, look, I’m not going to pick one specific algorithm. I’m going to say: pick all the algorithms, and then, hey computer, because you’re cheap and you can run for an hour and figure out what’s best, change all the

29:32 Weights and the parameters and just adjust it. Hyperparameter tuning lets you throw a bunch of machine learning at some data and say, computer, you go figure it out. Whereas I have to sit down and think about the best way to do this, because my time is valuable. I physically can’t do a thousand tests in an hour. But this is where AI and computers are great: they’re great at multi-threading, they can do a lot of things at once. So if we look at this, back to my analogy, there’s a big pile of data
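The “change all the weights and parameters and let the computer figure out what’s best” idea Mike describes can be sketched in a few lines of plain Python. This toy example brute-forces the best k for a nearest-neighbor predictor on made-up data; scikit-learn’s GridSearchCV is the industrial version of the same loop.

```python
import statistics

# Toy data: y = 2x + 1. Train on integer x, validate on half-integers.
train = [(x, 2 * x + 1) for x in range(20)]
val = [(x + 0.5, 2 * (x + 0.5) + 1) for x in range(19)]

def knn_predict(points, x, k):
    """Average the y of the k training points nearest to x."""
    nearest = sorted(points, key=lambda p: abs(p[0] - x))[:k]
    return statistics.mean(y for _, y in nearest)

def val_mse(k):
    """Mean squared error of a k-NN predictor on the validation set."""
    return statistics.mean((knn_predict(train, x, k) - y) ** 2 for x, y in val)

# "Pick all the parameters and let the computer figure out what's best":
# exhaustively score every candidate k and keep the winner.
scores = {k: val_mse(k) for k in range(1, 10)}
best_k = min(scores, key=scores.get)
print(best_k, scores[best_k])
```

The human version of this is exactly what Mike says he can’t afford: sitting down and reasoning about which single candidate to try; the machine just tries them all.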

30:06 And we need to pull out the information we want from it. We’re trying to connect human questions to this big pile of data: why are my sales down? Who is my most impactful sales representative, and how do you measure that? What are those KPIs? What is our biggest product opportunity? Those are the questions we’re trying to ask, and we’re trying to use the data to inform those decisions. So as I step back and think about it, I think that’s where

30:38 I’m struggling a bit: does AI replace this? I’m not sure we’re there yet. Let me give you some anecdotal data that I’m seeing. I run a website, PowerBI.tips, and I’m going to give you some real numbers around it. It’s a website with information. It’s

31:13 Tutorials, tutorial-based information. As I look at that information over time, I get metrics back from Google Analytics: how many views you’ve had, how many hits you have. If you look at the last 12 months in total and compare the web traffic from ChatGPT versus Google, well, if you have a website, Google is probably the largest referrer of traffic to your website, period. So if you look at that

31:48 Comparison, Google sending people to my website versus other systems, Google is always number one; by far the majority of the traffic comes from there. Now, over a 12-month window, ChatGPT was around, call it, 2% of my web traffic. Let’s narrow that window down to 30 days. When I look at the last 30 days of data, ChatGPT has now increased from

32:23 Only around 3% to now 12% of my web traffic. Whoa, huge move. ChatGPT has learned to go to that website, search it for information, and return the results in a ChatGPT answer. What does this tell me? People are getting more comfortable with AI. If I look at the traffic over time, it’s greatly increasing, and in another month or two we’re going to see more ChatGPT traffic

32:56 Than Google. Think about that. Your website is going to be scraped more by an AI, with ChatGPT getting answers and delivering them to your users, than visited through Google. This is happening in real time; this shift is occurring. Honestly, you’re going to see an AI scraping your site more than people actually visiting it. I have another friend who has an online marketplace, and he was telling me he

33:29 Goes in the morning and sees ChatGPT hogging all the web traffic on his website, just going to town, beating up the site, scraping it. What’s happening is people are asking questions of ChatGPT, it’s finding your website as a source of truth, and it’s using that information and bringing it back to them. Fine, interesting. But as it does this, if you think about WordPress or whatever your site is, you’re going to need to start really

34:01 Seriously considering: okay, I’m building a website where half of my traffic is coming from people and the other half is coming from an AI agent. What does that look like? How do you make your website friendlier to the AI agents? That’s something else you’re going to have to start thinking about. So why do I give you this data point? I’m actively monitoring how much Google traffic I’m getting versus how much ChatGPT traffic I’m getting, because I’m fairly convinced that in the next couple of months ChatGPT

34:34 Is going to surpass the amount of Google traffic. And if I go even narrower, the last 7 days of data, I’ll tell you that ratio as well. In the last seven days of web traffic, ChatGPT is now up to almost 14%, and Google is down to 20%, where previously it was more like 60%. An incredible shift in traffic. So anyway, I

35:09 Would also encourage people in the data space: if you have websites or website data, go look at your traffic acquisition and see where it’s coming from. I guarantee you’re starting to see a great shift towards AI. You are making an interesting point, though, because here’s the thing about AI: it does text very well right now. Coding and text are really where AI shines, but that’s where it starts. Sure, sure. But what I’m saying is, let’s bring it back

35:41 Then to the idea of your semantic model that might already be aggregated. I’m not asking the AI to go row by row. Theoretically I’ve already given it all the metrics and the right columns, and it just has to do some analytics, surface some insights that I would not think of, and help answer questions about it. But the thing is, again, it’s not text. As we go through this, Mike, where are you in terms of... I think we’re all

36:14 In agreement that AI is just part of our lives now. What it’s going to look like in 10 years, who knows, but this is like when the internet came out. I feel like we’re definitely at that threshold right now. Google is going to have to change its algorithm too, right? I’m looking at what metrics my website uses and what Google ranks as important. Google highly ranks the engagement

36:45 Rate on your website. If Google ranks on engagement rate and ChatGPT is building a better engagement rate than Bing or other engines, it’s going to funnel more traffic there. All this to say, let’s come back to semantic models real quick. The trend is that people are getting more comfortable using agents and chat, and they’re trusting the answers coming out of them. The chat experience with AI is getting better over time. It’s becoming part

37:18 Of your everyday. And honestly, every day, Tommy, I see another announcement, another great breakthrough. That’s hard to say. Inside AI, someone’s building a new model: it’s faster, it’s cheaper, it does more things, it has more capabilities. Every week we’re seeing huge improvements in this space. Oh, yeah. And it’s going to just keep accelerating. So I think we’re going to have to design for it. But again, me personally, I don’t want

37:50 To spend a ton of time building all of the AI-level things. I should review it; I need an AI to produce the AI. Give me a bunch of recommendations across this thing, give me all the descriptions you think fit. I should be able to pre-prompt this. What they’re asking here is: tell us what is important, pick which columns are important, pick which tables are important, what should the AI focus on. Those are the things we need to be able to use. And hopefully, as we work with this prep-for-AI experience, we

38:24 Can see an inventory of what questions are being asked of my semantic model. I actually think that’s the more useful part. The data that comes out of talking to the AI is going to be more useful to the creators than anything else, because think about it: I’ve built these tables, I’ve made these things, and then we’re going to have people ask business questions against the model. So how do you consolidate hundreds or thousands of questions about a semantic model? How do you distill them

38:57 Down into the topics or information that’s in there? People are asking real questions of their semantic model, and that is as direct a signal of need as possible. I’ve got to be honest, Tommy. Let me be real with you for a hot second. All right. Be real, man. When AI came out and I started talking to Copilot, I would ask it to do something. Copilot, do this. Make this M query. Write this code. Make this Python statement. And sometimes the AI would say, “Here’s your answer.”
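Distilling hundreds of user questions down into topics can start as simply as counting the terms people actually use. A stdlib sketch with hypothetical questions (real tooling would cluster by meaning, not raw word counts):

```python
import re
from collections import Counter

# Hypothetical questions users asked against the semantic model.
questions = [
    "What are my sales this week?",
    "Who is my top sales representative?",
    "Why are sales down this month?",
    "Which region has the best sales?",
    "Who should I call this week?",
]

# Throw away question scaffolding so only subject terms remain.
STOPWORDS = {"what", "who", "why", "which", "are", "is", "has", "my",
             "the", "this", "i", "should", "down", "best", "top"}

def topic_counts(qs):
    """Count non-stopword terms across all questions."""
    words = (w for q in qs for w in re.findall(r"[a-z]+", q.lower()))
    return Counter(w for w in words if w not in STOPWORDS)

top = topic_counts(questions).most_common(3)
print(top)  # "sales" dominates: that's the subject this audience cares about
```

Even this crude tally surfaces what the audience keeps asking about, which is the signal Mike wants funneled back to the model author.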

39:30 Other times it would give me this really trash answer. It would just say, “I can’t do that. Unable.” And I would literally say back to the chat, “You’re useless, Copilot. I hate you.” It would just respond, “I can’t compute.” Okay, maybe I shouldn’t insult the AI, but it makes me feel better to say you did a bad job, right? Me personally, I would take the effort of giving feedback to the AI: this is not what I wanted, this is not right. Also, now that I’ve been working with AI

40:04 A bit longer now, I know to be a bit more descriptive. I can ask it more specific things, and if I don’t get my prompt right the first time, I make a second prompt that adjusts it. So the other day I was doing a stream with Reid Havens on HavensBI, and we were talking about our theme generator, all these cool features, all these neat things. By the way, if you’re using Edge, Copilot’s right there. You can actually take a color from your color

40:37 Palette in the theme generator, put it into Copilot, and say, “Hey, Copilot, make a color palette with six colors based on this color. Make it a soft color palette.” You can give it descriptions of what you want it to do. It will list out the names of the colors and then the hex color codes. So you can use Copilot to make color codes and then copy and paste them right back into the tool. It was super easy. But I realized I needed a comma-separated list of those numbers. So then

41:11 What I did is change my prompt. I said, “Okay, give me a color palette of six colors, but make it a comma-separated list instead.” And it gave me five different palettes of colors, and I could just grab one, copy it, and paste it right into my tool. So now, with a little bit of prompting knowledge, I don’t have to make color palettes anymore. I can ask Copilot to make them for me. Done. How cool is that? Okay, but all these things are micro. They’re amazing

41:44 When you have this wide spectrum. But again, we’re still not dealing with our data, which is so sensitive. What we’re talking about, general colors, is just a giant list, like all the states or countries. It’s a list of information AI can easily categorize because it’s universally well known, and it has a bunch of text from all the people who’ve talked about color theory to work from. So I see, you’re saying that on things it’s been trained on, like Python and SQL, it’s

42:18 Going to perform better in those spaces because it has a lot to train from. You’re saying the data analysis space is a lot harder, because you need to deep-think on everything: run a whole bunch of queries against a semantic model, figure out which tables come back, and then analyze the results into something that makes sense to the user. Even more, you need better context, not just deep thinking. It doesn’t know what to think, right? It’s

42:50 Just looking at columns. It sees “Sum of Sales” and, what does that mean? What’s the impact of that to my business? All the things we’re finding amazing in Copilot come from the millions of pages of documentation it learned from. But now we’re dealing with our business information, and all it has is column names and table names. I don’t know the internals, but I’m assuming Microsoft is really tuning this Copilot prep-for-AI experience, saying, hey, you’re a model, but

43:22 You’re an expert on tabular models. Because that’s also a difference here. We’ve seen this before: AI has never really played super nice with our Power BI models when trying to get the right information. Do you remember, and I’ll bring it back because it bears repeating, do you remember automated insights? Still available in Power BI, and I still don’t use it to this day. Okay, sure. Did you ever use it? I tried it. It doesn’t take too many tries. I’m going to give it a little bit

43:57 Of grace the first couple of times I try something, especially if I’m going to demo it; I’ll at least show people the feature exists. But honestly, after three or four tries, if it doesn’t wow me, doesn’t save me time, doesn’t save me clicks, I’m not going to use it anymore. I’m done. Dude, you’ve had your try. I think with a lot of these things Microsoft is just trying to push Copilot out to make it part of the product, but they’re not asking, “What’s the wow factor? Is there anything here that’s wowing me?” And if there isn’t, I’m not going to use it again. I’m sorry. You get

44:29 A couple of tries and you’re done. I’m not going to revisit it for six months. You’ve got to wait now.” And the biggest problem with those insights, which I do love how easy they are to get to. For anyone who doesn’t know what I’m talking about: if you’re viewing any report and you click right next to the ellipsis, the first option is Get insights, and it will give you anomalies, trends, or KPIs. The problem is, from the author’s point of view, we can’t do anything. We’ve never had the ability to say, you

45:01 Know what, I really want you to weight these columns or these measures and give them more consideration, regardless of the visual. It just did what it did. The thing is, we have custom instructions now. So here are the things I can do, and I feel like it’s still a bit limited. I can choose which columns are for AI, which is already a major improvement. Yeah. Because before, if I didn’t want something included, I had to hide it globally from the report, and I may still want it in the report. You had to pick your battle. So that’s one.
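The “choose which columns are for AI” step is essentially schema filtering: expose measures and descriptive columns, hide IDs and keys. A toy sketch with a made-up schema — the real AI data schema feature lives in the Power BI UI, this just illustrates the selection logic:

```python
# Hypothetical model schema: table name -> column names.
schema = {
    "Sales": ["SalesKey", "OrderDate", "Total Sales", "Units Sold"],
    "Customer": ["CustomerID", "Customer Name", "Region"],
}

def ai_visible(schema):
    """Drop ID/key columns so only business-meaningful fields reach the AI."""
    def hidden(col):
        # Naive convention for this sketch: surrogate keys end in Key/ID.
        return col.endswith(("Key", "ID"))
    return {t: [c for c in cols if not hidden(c)] for t, cols in schema.items()}

visible = ai_visible(schema)
print(visible)
```

The point Mike makes later holds here too: IDs and key columns are plumbing, not meaning, so trimming them shrinks what the AI has to reason over.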

45:35 And now I can actually go through and provide custom instructions. It’s pretty simple. That being said, I don’t know how detailed it can get, and to be very honest, I’ve tried to give Copilot my atomic prompt and it was like, no, I’m not working with you here. I was just dealing with a PowerPoint thing and I wanted to kill Copilot. Well, I feel like right now it’s

46:06 Asking too much of us, right? In this new prep-for-AI experience, building your model for AI, you have to go down to the visual level. So again, I’m looking at this going, where’s my time best spent? Do I have to go to every single visual and ask, what questions would someone ask about this visual? Now, for certified datasets, things going to a big audience, this makes sense. But if it’s a small report I’m building just for me and my local team, I’m not sure I’m ready to spend half a day, a day, two days. You could

46:40 Probably spend a day prepping the model for AI. Are we going to get enough questions back to prove it was worth spending a whole day prepping it? In some situations, yes; in some, no. So I’m not going to just say, “Yeah, do it on everything because it’s amazing.” I’m going to say you need to be critical about your time investment in shaping the model for AI and whether you have enough users to justify it. Now, I will say, the feature

47:13 Though, if it were just custom instructions by itself, I would say that would be a miss. But the new feature lets me go into a visual itself and set up verified answers. Unlike before, where in a sense I felt like you were always going backwards, this way I don’t even have to provide the answer. I can simply choose that visual and mark it as a verified answer. And it’s funny, because you’re writing the phrase for what people are asking. So verified

47:46 Questions, I guess. Well, to your point there, Tommy, if I look at a report, let’s say we’re looking at a bar chart you built that does something. Inside that bar chart, you should be adding some information about it. Again, this is something I think is most relevant if you’re expecting to share a report with a large audience. Say I’m building it and going to share it with you, Tommy, and other people on my team. I’m

48:18 Not going to assume everyone’s data literacy on every visual is at the same level as mine. I understand what I built. I made it, I have the measures, I understand why that visual exists. I do think there’s some intent to say: this visual exists to answer this question. Say I have a line and column chart, a dual-axis chart with a left and right axis. Then I explain how to read it: hey, this chart shows cumulative totals of sales year over year. What

48:51 We’re looking for on this chart is whether this year has the same, if not higher, sales than last year, denoted by this color bar and this color bar. You’re almost telling the users of that report: I’m going to describe how this visual should be interpreted. That’s something I do right now in instructions: I’ll write it out and attach it to the visual with the information button, so when people hover over it they can see, “Oh, this visual is doing this.” That is so tedious. But that’s my

49:25 Point though. I don’t want AI to just give me the answers; I want AI to make that easier for me. Here’s my thing, and maybe I’m having an epiphany now, or maybe I’m going off the rails, it could be both: the AI should help me do these simple things, showing trends and talking about the visual. Take away a lot of the tedious build work I do inside visuals. Let me focus on this visual: what does it tell me? What are the trends in it?

49:58 What should I be looking for if that line is going down or up? In this visual, what denotes good performance versus medium versus poor? That’s the information I’m trying to convey. We’ve talked a lot in the past, Tommy, about visual language: what information am I trying to convey to you, the reader of my visual? How do I talk to you with the visual? It’s its own visual communication language. So what I want to do is shortcut as much

50:30 As I can between my vision for this visual and what the user is going to see and how they’re going to interpret it. So why not click the information button and have the AI do all that for you: “Hey, this visual is showing this. When this trend line goes this way, it means this; when it goes that way, it means that. A positive trend is the line going up and to the right; a negative trend is it going down and to the left.” See, that’s the thing though, and I like where you’re going with

51:02 That. But for the consumers, it’s also the lingo when you’re dealing with your own information. I’ve been playing around with verified answers, and I will say the questions they’re coming up with are already so much better than the Q&A ones; I’m like, oh, maybe I would ask that. But the problem is how people ask, and this is the other thing, right? Most things you do in ChatGPT, like if you want to play the trumpet, guess what? There’s a pretty universal way to play the

51:36 Trumpet or whatever you want to do. I can feed it information. But when a consumer goes into this, they’re not going to say, “Hey, what is my best [model name] by [table name] by [measure] for the month?” They honestly have their own vernacular for how it works; you can’t expect otherwise. They’re going to say, “What are my sales this week?” Because that’s what we’ve seen AI can do already: it should know who I am, know that report, and know what’s important. Then it just becomes Q&A. If it’s Copilot, it’s

52:09 Like, “How are my numbers?” To me, honestly, this is where the success is: “Hey, who should I call this week?” Because it’s using the data not just to give you a bar chart but to finally summarize all those things you want to read. But it’s more than that. There’s this question of how generous you let the AI be in building its own synonyms around the different columns inside the model. This is where I’m torn.

52:44 There’s a gap in my mind, Tommy: how do users naturally talk to the model with AI? “Give me the total sales, give me my sales representatives.” There may not be anything in the model called sales representative. That’s just language I’ve adopted, and maybe I don’t have a clear data dictionary in my company. Maybe it’s something that makes sense to me or how I communicate with my team. Fine, no big deal. But the point is, how does the AI make the jump between salesperson and sales representative?
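That salesperson-versus-sales-representative jump is at heart a fuzzy-matching problem. A toy stdlib sketch with hypothetical field names — Copilot actually works from the model’s metadata, synonyms, and semantics, not raw string similarity:

```python
from difflib import SequenceMatcher

# Hypothetical field names in the semantic model.
FIELDS = ["Salesperson", "Total Sales", "Order Date", "Product Category"]

def best_field(phrase, fields=FIELDS):
    """Return the model field whose name is most similar to the user's phrase."""
    def score(f):
        return SequenceMatcher(None, phrase.lower(), f.lower()).ratio()
    return max(fields, key=score)

match = best_field("sales representative")
print(match)
```

String similarity gets this particular pair right, but it illustrates why authors worry about synonyms: “reps,” “AEs,” or “sellers” would score near zero against every field, which is exactly the vocabulary gap Mike describes.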

53:17 Is it close enough that the AI can interpret it? As the author of the report, I’m looking at this going, do I need to add every single synonym known to man, or is the AI smart enough to bridge it? As an author, I’m never going to be able to cover the vernacular of every single user who will ever use my report. So how do you funnel that back? I think there’s a whole missing analytics part of the AI. I don’t see,

53:54 These are the key terms and phrases that this user or these groups of users were using with this model, so I can come back and say, oh, I need to link those things back. It’s this idea of retraining the model a little bit. You’ve got to put it out there, do your best effort, get it into people’s hands, and let them ask questions about it. I need data back about those questions so I can refine the synonyms, refine the table statements, refine the questions added to each visual. Maybe someone asks a brilliant question of the visual, and maybe the AI did a great job

54:27 Answering that question. Well, how do I know that happened? How do I tell the AI that was a good thing it did, and yes, that is a verified question? How do I promote questions back from the audience and say these verified questions make sense for this visual? Those people are thinking about it, right? So it’s now about taking information back from the AI, letting the AI interpret the model, taking the best questions from my audience, funneling them back into the model, and saying these are great examples of how to use it. Because then everyone else who uses the model now has

55:01 A better library of examples of how the model should be used. So yeah, it’s a whole feedback loop. Where do we go from here? Oh my gosh, we’re already near the end of our time, so I want to give my semi-closing thought. I think what we’re seeing with this tooling is incredibly promising. That being said, and honestly I don’t know anything, but I have a

55:34 Feeling that eventually Fabric is going to have your data engineering, your data science, and a dedicated AI role next to them too, because it can’t just be a limited set of tools, especially with all the research and knowledge we now have on what it takes to make AI work. So this is promising, but I still think it’s pretty limited before I’ll say this is probably going to give me a valid answer. And the other thing is, until you let it out into the wild, where you

56:08 Say, hey, everyone has this, start asking questions, to your point, I’m not sure people are going to ask. And if I’m the one answering all the questions, that goes back to what we talk about from an adoption point of view. But regardless, we’re on the way, as the people who, I don’t want to say own the data, but hold the data close to our hearts, to having better tooling that actually allows AI to do the things people expect it to do. Hey, which accounts are slow? You

56:42 Know, who should I call? What are some of the accounts that are performing well? Just being able to ask in real natural language. That’s where we’re going, and I think we’re getting closer and closer. I feel like what I hear you saying, Tommy, is you’re looking for that business analyst. You want the AI to fill the business analyst role. There are people building the model and people asking questions of the data, and the business analyst sits in the middle and says, “Look, I’m going to interpret what you say at the business level, and then I’m

57:14 Going to go into the technical side and figure it out from this data, this model, whatever, and either build you visuals or work with the data to get you answers.” I don’t think we’re there yet. I don’t think AI can fully replace that data analyst at this time, but we’re getting closer, and it’s definitely getting better. The very ability to add AI instructions is huge; that’s a major improvement over where we’ve been. I

57:48 Think also about simplifying the data schema. We’ve been asking for this for years, Tommy. We’ve been saying, look, we don’t need every single column; we only want the measures. The IDs and key columns? Don’t use them; they’re not interesting to us. So I think those two features alone really make the AI more effective. What I’m saying is, the next wave is you need to really evaluate whether it makes sense for you to spend the time to do all this

58:20 Work to make the AI work. Is this an initiative driven by your leadership? What is the capability of your team? Are people in your organization even interested in using ChatGPT? They may not be at this time. Is your organization very pro-AI? Have you rolled out Copilot to the rest of your company? Is it common there? I think there are a lot of cultural pieces of your business to consider when you’re doing this. So yes, Power BI will support it; it’ll be there when you’re ready to use it. But if your

58:52 Company is restrictive, well, here’s another example. I’ve worked with a company that monitors all the web traffic coming out of the company; most companies do this, it’s very common. But they also monitor which AI agents people are using and which AI websites are being hit. And they’re seeing the same trend: people are increasingly finding these AI agents, and as they get better, as the models improve, more and

59:24 More people are asking those AI agents to do something, using them more. Again, the same analogy goes for my website, right? I’m seeing it go from like 2% up to 13%, roughly a 10-point increase in the last 12 months of AI agents scraping my website. This is going to continue to increase, and at some point the majority of my website traffic will be AI agents. All that tells me is that more and more people are getting comfortable using AI agents to produce data and use that as information for themselves. So

59:57 In light of that, your data culture, your company culture, will be changing, and I think it will naturally start adopting AI because it’s going to get more useful for people and they’re going to trust it. Dude, I love it. I love it. It’s very promising. I’m getting closer to my evil dream, so to speak. So, we’ll see. Well, I’m looking forward to the day when a lot more of these AIs can be localized and run on your device: your machine, your computer, inside your web browser. I don’t really love this

60:29 Whole idea of having to pay for an AI for like $20 to $200 a month. It is effective; for coding it’s 100% worth the money, not a problem. But if I step back from that and look at everything else, it’s not nearly as effective. So I’m still trying to evaluate: where’s the best place to do this? How much do I spend? Do I spend $20 per user? $100 per user? $200? What’s the right threshold of value that I get out of these agents that my team can use? So that’s where I’m evaluating

61:00 Right now: where’s the best place? And right now, frankly, there are too many of them. I don’t want to pay for a $200 subscription to every one of the different AIs that are out there: OpenAI, Anthropic, Claude, Grok, whatever else. I don’t want to spend $1,000 a month on a single user just to hit a couple prompts, right? I want a consolidated experience. I’d pay $200 if I could get all of them, if I could share prompts across everything, but I don’t think that’s possible yet. So, anyways, all that being said, this is a

61:32 Great new world. I really like this new Prep Data for AI. I’m going to continue to play with it and keep adjusting it. I really like the AI instructions; I think that’s super sharp. Simplifying the data schema has been needed since day one, and I really like that as well. So I think this has some really good promise. What I’d like to see next is some concerted effort around what happens after you roll out these models with AI: I want to see the usage adoption. I need to see that when I roll out this AI and this model, it’s actually being

62:07 Used, that I know what prompts are happening and what questions are being asked. So, all this to say, my final thoughts: this is a great feature. Be careful where you apply your time; build this for the best use cases. And my last thought here is that we need a new way to get the data out of Copilot or the agents, so that we can understand what people are doing with our models and our reports. I think that’s going to be so valuable to have. All right,

62:39 That being said, you have spent an egregious amount of time talking with us about AI, Power BI, and the Prep Data for AI feature inside Power BI Desktop. Hope you found this valuable, and hope it gave you some things to think about as you consider using this AI feature. And with that being said, we only ask one thing of you as a listener: just share with somebody else. If you thought this conversation was valuable, if it helped you improve your thinking, we’d love for you to share it with somebody else and help them think about AI in the same way. So share with somebody else

63:10 Who’s also contemplating AI. Tommy, where else can you find the podcast? You can find us on Apple, Spotify, or wherever you get your podcasts. Make sure to subscribe and leave a rating; it helps us out a ton. Do you have a question, idea, or topic that you want us to talk about on a future episode? Well, guess what, you can: go to powerbi.tips/podcast, leave your name, and enter your great question. We have a ton that we can’t wait to dive into. And finally, join us live every Tuesday and Thursday, 7:30 a.m. Central, and join the conversation

63:43 On all of the PowerBI.tips social media channels. Thank you all so much, and we appreciate your time today. Have a wonderful weekend and we’ll see you next week. Yeah. Out.
