PowerBI.tips

Looking at AI Assisted Development - Ep.521 - Power BI tips

April 22, 2026 By Mike Carlo , Tommy Puglia

In Episode 521 of Explicit Measures, Mike Carlo and Tommy Puglia unpack the latest Power BI and Microsoft Fabric topics from the show. You’ll get a quick read on the episode’s biggest ideas, why they matter, and where to dig deeper in the full conversation.

News & Announcements

  • No linked announcements were available in the episode description for this post.

Main Discussion

This episode covers the major themes, opinions, and practical lessons Mike and Tommy surfaced during the conversation. The transcript below captures the full verbatim discussion if you want the exact phrasing and context.

  • Mike and Tommy react to the episode’s biggest Power BI and Fabric developments and explain what stood out to them.
  • They connect product announcements to day-to-day practitioner decisions instead of treating the news as abstract roadmap chatter.
  • The conversation highlights where teams can move quickly, where they should slow down, and what tradeoffs deserve attention.
  • They share candid perspective from real project work, which gives the discussion more practical value than a headline recap alone.
  • The episode mixes tactical advice, opinionated takes, and a few forward-looking predictions about what listeners should watch next.

Looking Forward

If this episode’s topics affect your current Power BI or Fabric plans, use the transcript and linked resources to identify one concrete change you can test with your team this week.

Episode Transcript

0:03 Explicit measures. Pump it up. Beat it high. Tommy and Mike lighting [music] up the sky. Dance to the day. The laughs in the mix. Fabric and AI get your feels. Explicit measures. [music] Drop the beat now. Feel the crowd. Explicit measures. Welcome back to the explicit measures podcast with Tommy and Mike. Hello, Tommy. Good morning. Good morning, Mike. How you doing?

0:33 I'm doing great. I am just clipping along. A lot of fun news and things happening right now. So, we'll get into that in just a moment. Just for people who are online and listening to this episode, this is a recorded episode, so it's not live. Oh, however, I want to give a great shout out here. If you like these episodes and you want to hear them without any advertisements on YouTube, we are having a membership area. So, if you want to become a member of our channel, go down to the bottom of this video and become a member. You can get all of these videos as they are immediately released. So, as

1:04 soon as we record them, we post them. Doesn't matter what day it is, because sometimes we record videos beforehand. But then also, there's no advertising. So, you get a clean advertising slate as well. And I will make another announcement here as well. Because Michael can't talk enough, I started a second podcast. Oh, what? Yeah, Tommy, this is like your first time learning about it. So, it's called Agentic Thinking and it's all about agents and AI and the

1:35 and it’s all about agents and AI and the intersection of like agents, AI thinking and then PowerBI and fabric. What does that look like for people? So, it’s just it’s more of an a talk around agents. It’s probably heavier on technical details around what agents are doing, but I’m doing this with Matias Tierbach. So, Matias and I are going to be sitting down and doing that together., we’re going to be doing that on later on Tuesdays and Fridays. So, that’ll be our Oh. Oh, you’re going to do two, too. So, that’s You’re basically starting your own serious radio station. It’s like a radio station at this point.

2:06 It’s like a radio station at this point. We’re getting to the point where it’s just at at one point I was hoping to do like some content or piece of content like every day of the week. Now, I’m up to four hourong pieces of content a week. a week. Yeah. But Tommy to be honest like yes there’s some planning and setup to get into this the episode here. But really just talking about like what are you working on? What are you spending your time and effort on? And all my time and effort is spent on every day all that I’m thinking about working on is doing more things with fabric

2:37 doing more things with PowerBI. There's so many new features coming out that I'm constantly testing and designing new things. And now that I've really pushed more towards app development, or agent-based app development, it's just such an interesting topic, I had to talk about it. I got to start thinking about, what does this look like? How to unpack it. Anyways, keep talking because yeah, yeah, something with the lights. But so tell me, this is not going to be so much fabric based or PowerBI based. This is really going to be all about AI in any

3:09 workflow. But keep Yeah. Yeah. So it's a little bit about like the harnesses, what harnesses you have and what does that look like? So part of this is, like, we say the word harness, but what defines that, what does that mean? There's a lot of articles happening that are very specific to AI things. Just recently, chat GPT no, sorry, Opus 4.7 just was released, but it's really token-heavy. So just unpacking what this means. It doesn't necessarily have topics that always fit

3:39 directly with like fabric and PowerBI, but these are going to be much more AI focused conversations. Anyway, well, I would love to be a guest on that one. Well, we are taking guests, so this one's going to be a much looser format, Tommy, than what we do here on explicit measures. So, yeah, we'll get you in, Tommy. We'll get you over to the other You're saying that That means you're saying this one isn't loose. We're a little bit more restricted about who's the guest when we do guests. Oh, I was going to say the format. I'm like, our format's pretty wild over here. We barely have a format. We hold on for

4:10 dear life with a format. Yeah. Well, I think I've always said, like, us doing the podcast here, yes, there's a lot we've learned and I think there's a lot of skills we've learned doing a podcast. How to talk, how to let the other person listen. But at the same point, if you and I were doing something more news relevant I know I had a buddy of mine who did a sports podcast. We're not spending nearly the amount of time on something that was more prevalent, like if Fabcon happens, right? Like research, research, research.

4:40 We can talk about that for the next three months. Yeah, correct. Unlike we only have this week to talk about episode two. And I think a big part and I've been seeing more in the mailbags too from people. People really enjoy the whole idea of the water cooler thing where we're able to talk out ideas rather than us doing training right now. Like we can do training, but this is all about that gray area that I think has really hit a nerve with people. There's also something to say for two

5:10 There’s also something to say for two experts who have been in the space since everything started, right? We’ve we’ve seen PowerBI and Fabric mature, both of them. We were there at the beginning of PowerBI. We watched it start from nothing and grow into this really good piece of software and now we’re starting to see like fabric become the same kind to see like fabric become the same mantra. It started three years ago. of mantra. It started three years ago. It’s been slowly growing. It’s now got all these new features and things and so now it’s this other monolithic thing that’s happening and growing. And I’d even argue Tommy like I think the fabric potential here is way larger

5:41 than what we had at PowerBI, because this touches so many more other specialties. There's so many more at Microsoft that are absorbed into the space. Just having the SQL team a part of what we're doing, right, and building SQL things for fabric, that in itself, that's a huge line of business for Microsoft, to have SQL running. That's massive. So just having that as part of the ecosystem, and then add all the other solutions around it, real time, right, custom

6:12 these different databases, and then all the data engine that goes on top of it. So it's becoming quite the platform. It's definitely a platform as a service, and it's really intriguing to be a part of right now. Anyways, all that to say, those are all news and just getting up to speed on things. Our main topic today will be, looking at funnily enough, we're going to talk about looking at AI assisted development. So, like, how does this This is not our idea. It's a mailbag, which is good. Which means people are starting to think about this, Tommy. I think in my opinion here,

6:43 Tommy, and this is one of the reasons why I started the podcast with Matias also, is we did this 8 hour session at Fabcon, and I feel like before Fabric Conference, people were like, AI's out there, I'm really not sure what to do with it. And then Microsoft did a really good job where they were actually showing people how to build things with AI. Our eight hour session was talking all about the PowerBI MCP modeling server. So 100% the whole day was just focused on talk to a model, go get

7:14 results, move things around, shape a semantic model with that MCP server. And from that session, people are now exploring other MCP servers. Okay, great. Now that we've learned some of the fundamentals around manipulating a data model, which everyone knows how to do, right? You now have the ability of being able to go get the fabric MCP server and the real time MCP server. And now there's going to be an announcement around an ontology MCP server. So everything's going to get these like tools that you're going to be

7:45 able to use at your disposal and leverage them to go do things. And so I think right now Fabric Conference opened the floodgate for everyone to really start exploring and figuring out what are agents, and I think we're going to get more questions around agentic things. Now, I think you and I should, in the next few weeks, do an episode on our predictions, just to have a little fun, and then we could write them down and see what's going to go on in the next two years, because we did this our first year

8:15 with the podcast, and we were both way off on some of the predictions throughout the first year. Yes. Yeah. Some of them were spot on. We're on like episode 22. I'm talking about like episode 22. Not your first prediction. We had that one. I saw the writing on the wall on that one. You knocked that one out of the park. Yes. But we had one like, what is a PowerBI Pro in two years? And I always like to look back on that one, because we both went very logical directions. Like it wasn't like you and

8:46 I were going completely off the wall, but we didn't see fabric coming at that time. So you were very much more if I recall, you were very much more on Azure is going to be the place where people go, and I was like, no, it's going to be the power platform. Oh yeah. Well, so we took two different approaches. Really, what happened was Azure just landed inside fabric. So people don't want to go into more Azure at this point. It was too complicated. It wasn't integrated

9:17 enough. It wasn't as easy to turn things on. The other thing too, I think I heard somewhere, I'm not sure where, but Microsoft did a very different thing when they started building. So Azure was pretty much pay per item, right, that you build, which is fine, that works. There's always a little bit of integration and how to get everything to talk together. Yes. Fabric reduces that integration effort, but it gave you a consistent billing model. That's what fabric does. Fabric gives you this it's just CUs, use whatever you want. It

9:47 it’s just cus use whatever you want. it just comes out at the CU level and you don’t have to really worry know you don’t have to really worry about provisioning anything. It just turns on when you want it to and it runs when it needs to and then it turns off when it doesn’t. And that seems to be a lot that’s one of the advantages why I really like what’s going on in fabric. It’s doing a lot of those on demand and and pulling from a consumption unit level. You just said something that I think really hits a nerve on why fabric is so impactful. And it’s not so much that they brought

10:17 it’s not so much that they brought everything into Azure, but honestly, I think it’s the building model model, but it’s the ability without a lot of work or any work to have the different artifacts talk to each other because if you ever worked in Azure,, you need to have that all set up every time you create an artifact, a lakehouse or data factory. Well, do I have that managed identity and then I need to have that connection string? In fabric, they make that dead easy. And I think that is such that is I think the most one of the

10:47 more underrated impacts or features of what fabric does. Interesting. Yeah, I'd agree. I really would agree with that, Tommy. I think it's underrated. Well, I'm just [laughter] there's so many other things that are coming out now. It's like, is it still in that position or is there something else that's creeping up on it? And I still think that holds the number one. Right, when you create a data factory you're like, ah, I need to create a managed identity here. Oh, let me go to the key vault and let me check that, and

11:18 that was the only way you'd have things talk to each other. Now you're like, oh, just pick from a list. Yeah, done. Every tool has another hook to the other tools. Yeah. And the harmonization around OneLake being like the glue that holds all these tools together. Phenomenal like that. Oh boy. Yeah. Without that, none of this would really feel as fluid and seamless, because everything just goes there. It's literally the landing zone for all the information and data. And I also heard someone talking about I was watching

11:49 a YouTube short about someone talking about AI. I think it was the gentleman from LangChain. Yes. He was doing an interview or a podcast, and he was talking about where does the value sit with agents and AI, and he said it's really just in the data. He goes, a lot of small businesses or new companies can start up and build really cool things, but they don't have the customer history, they don't have the legacy data, they don't have all the product the stored

12:19 information in their infrastructure, and that's when you can actually have really effective usage of things, when you start going after the okay, we now have all this really rich data. Don't sacrifice agents and just go through agents and things and then give it crappy or low-quality data. So at the end of the day, the agents are more effective when you give them more information and more data. Okay, I was just having a conversation about that with someone, so yeah

12:50 really good. Okay, I love it. All right, so before we get into the main topic, I have one news item, Tommy. I sent you a link here. I'll put it in the description of the video as well. It's called awesome-copilot, on github.com. So, if you just Google awesome copilot I think the gentleman who created this one is from Microsoft, and he's on the VS Code team, and so people are

13:21 contributing to this, and it has workflows, it has instructions, it has 307 skills, it has 204 agents on here. There's this thing called a hook, Tommy, in an agent thing. Mhm.

13:36 If you want to go build something, if you're working in a codebase right now, I recommend you at least go to awesome GitHub copilot and go look for the software that you're building in React, Tanstack, whatever the thing is that you're building in. If you're using Jira, just at least go look at this, because there's this idea of building a custom agent, and custom agents are tuned and designed, and there's a bit of a description, but it gives you like, here's what this tooling does, right?

14:07 here’s what this tooling does, right? You’re you’re an expert React front-end developer. That’s one of the things in here. And it has a whole bunch of instructions and rules and things that you want to give it. So that way, if it’s building front-end stuff, there’s actually like a a UI designer. There’s an accessibility checker in here. If you look up the word accessibility,, it has another one., I think I can’t spell the word accessibility. VS Code Insiders accessibility tracker. there’s there’s one for Salesforce de workflow development.

14:39 So they've got all these really interesting tools, and the idea is, if nothing else, you may need to customize them for what you want, but at least go here first and check them out, because that way you've got something to start from. We need more of this, Mike. So this idea here so what is a custom agent, right? And I think let's start there too. So you can actually have your normal agent if you're working in Claude Code and also people ask, what is the difference between this and a skill? So and this is a big thing. So you can

15:10 have your normal agent or your normal chatbot that you're talking to. A skill is something that your normal agent uses. An agent a sub agent, rather is something that your agent can deploy. For example, and most people don't know this, but when you say, “Hey, let’s take a look at this repo and I’m going to do X, Y, and Z,” most of the time that agent that you're talking to is already deploying an exploration

15:40 agent. Yep. There's a specific task that it does. A lot of people get confused with this and skills, and they are different things, and I've struggled with this. So, I love seeing examples here of what custom agents are doing. Also, again, commonly known as sub agents, which means again that your agent can deploy this. It's not doing the work. The sub agent's doing the work, but it's in a sense prompting another agent to do some exploration for you. So, it's going to help carry out a task.
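The delegation Tommy describes, a parent agent handing one narrowly scoped task to a sub-agent that starts with a fresh context, can be sketched in a few lines. This is a conceptual illustration only: the function names are hypothetical stand-ins, and no real agent framework or LLM API is assumed.

```python
# Conceptual sketch of the sub-agent pattern: the main agent does not do
# the exploration itself; it delegates a single task to a sub-agent that
# sees ONLY the task description, keeping both contexts small.

def run_sub_agent(task):
    """Stand-in for a scoped LLM call: gets the task, not the parent's
    whole conversation history, and returns a short summary."""
    return f"summary of: {task}"

def main_agent(user_request):
    """The parent plans, delegates, and keeps only the summaries
    (not the sub-agents' full transcripts) in its own context."""
    plan = ["explore repo structure", "locate semantic model definition"]
    findings = [run_sub_agent(task) for task in plan]  # delegation step
    return {"request": user_request, "context_kept": findings}

result = main_agent("add a measure to the golden model")
```

The design point is the return value: the parent's context grows only by the short summaries, which is the context-saving behavior discussed above.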

16:11 So, it’s going to help carry out a task. Unlike your normal agent who may be using a skill part of your workflow, it’s kicking off other subworkflows. I love this, Tommy. And I’ll also even I’m going to go down that one. Yes. Yes. And some more of this one. If you go into the agent modes on this thing and you search for PowerBI, you’ll find PowerBI data modeling expert mode, PowerBI DAX expert mode, PowerBI performance expert mode, PowerBI

16:42 visualization expert mode. Again, a lot of these things already exist. And if you're looking to use agents to help and aid you with development with PowerBI and fabric things, there's already stuff here. Power Platform MCP integration expert. There's a lot of other things that are already here and ready to go. You just start with these elements, and this can already assist you with getting really good feedback and knowledge. And what it does is steer the model towards a type of persona, right? An employee that fits that

17:12 role, and it carves out portions of it. The large language model can do anything. It can tell you about Python. It can tell you about React. It can tell you about what to eat for dinner. It can give you travel recommendations. It can do all these things. What these things do is steer the agent more towards a particular type of answer, which activates that part of the model that makes it better for those kinds of experiences. And it's all written in markdown, and you can modify and change stuff. This is really interesting. The last thing I'll add to that is why

17:42 use a sub agent, or why use a custom agent? Yeah, it's all about context, baby. Because here's the thing. You could have your normal agent, that single agent, and you can make skills out of this. Sure. But the problem is you're adding a lot more context. And we've already seen the studies where agents perform much better at the beginning and at the end. That middle part of a conversation is usually where they lose a lot of context. So deploying a sub agent allows your normal agent to not do the work. It's not using

18:12 tokens. It still has the context of what you are trying to achieve. And agents are best when they have a single task, when they are good at a single function. And you can see all these agents are doing one specific task, like the janitor, or we're going to check the Python notebook sample builder. It builds samples. Excuse me. Excuse you. So it's not meant to do all the notebook things. It's not meant to build notebooks in fabric and do PySpark. It

18:43 just builds samples. So really, building sub agents can be very powerful, but it's understanding the different purposes that they can achieve. [clears throat] And I love we need more of this, Mike, we need more of this website for people to get those examples. So again, it's just a lot of these things like we say terms, we have things. The best way to learn is go look at real things that actually work for other people. And again, the neat thing about this is we're in a place right now, Tommy, where everything is

19:18 not happening in real time, but it's more of like [sighs] oh, the playing field has been leveled. The hockey stick. No, not necessarily like the playing field has been leveled. Like there were a lot of people with a lot of different knowledge about a lot of different things. All of a sudden these agents come out, and it shifts everything to this new world where no one really knows what it does. It's non-deterministic. We're not used to writing software in a non-deterministic way, where you may ask the agent one prompt and it may give you two different kinds of answers. Like we're not used to this.
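The non-determinism Mike is pointing at, the same prompt yielding different answers across runs, comes from sampling during generation. A toy sketch of the idea, with a fake stand-in for a model (no real LLM API is assumed): pinning a seed, loosely analogous to the temperature or seed settings some model APIs expose, makes the output repeatable.

```python
# Toy illustration: an unseeded "model" may answer the same prompt
# differently across runs; a seeded one never does. fake_llm is a
# stand-in, not a real model call.
import random

ANSWERS = ["Filter context is the set of active filters.",
           "Filter context restricts which rows a measure sees."]

def fake_llm(prompt, seed=None):
    rng = random.Random(seed)  # seeded RNG -> deterministic choice
    return rng.choice(ANSWERS)

# With the same seed, repeated calls must agree.
a = fake_llm("what is filter context?", seed=42)
b = fake_llm("what is filter context?", seed=42)
```

The point of the sketch is the contrast: testing deterministic software means asserting exact outputs, while testing agentic workflows usually means asserting properties of the answer instead.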

19:48 However, when I work with Tommy or anyone else on a team, if I ask them the same question twice, they might give me the same answer twice, but they might not. So it's like working with other people. So how do you narrow down the knowledge so that when I ask you, what is filter context, the answers may not be worded the same, but at least they're the same answer, right? Something like that. So this is a highly new, very cool area to be in. Very excited about it. Anyways, I just wanted to point out this awesome-copilot project. If you have skills that you want to provide

20:18 or you want to have custom agents you want to contribute, you can do so. It's a GitHub project. You definitely want to check it out. There's, I would say, now over 500 useful things on there that may or may not apply to your projects. Anyways, just something to get you started faster with using agentic things or agents inside your workflows. Okay, enough talking about the introduction stuff. Tommy, over to you for the question for the mailbag. Yeah, so another great mailbag here. And guys, I'm never going to tell someone to write mailbags differently, because I

20:48 always love the mailbags, but you guys have been writing some very long ones lately. So I feel like, Mike, we're doing a bit of therapy for people. We're doing a bit of data therapy. These people are really writing out their hearts to us here. So great. Here we go. This [clears throat] is Yoke from Munich. So again, thank you for putting your name, Yoke, and I hope I'm saying that right. Hi Tom and Mike. I [clears throat] really enjoyed episode 508, Kicking Off Fabric the Right Way. The discussion around

21:19 unifying semantic models made me reflect on a setup I'm currently using, and I'd be curious to hear your perspective. I maintain a single golden semantic model and develop several PowerBI reports on top of it. This sounds familiar. Originally, this was one very large report, but I split it into about five or six smaller reports using the PBIR format. Love that. Great design. The main reason wasn't the classic

21:49 thin report pattern, but simply to create a better structure in the PowerBI app navigation. Each report represents a thematic area so users can orient themselves more easily. I love this. This is a great idea. It doesn't have to be one monolithic report. That's brilliant. I love this. Already we're starting off on a great foot here. All reports and the semantic model live together in the same Git repo and are version controlled together. Great. Okay, we're done here. Ship it. Where can I hire you? Oh, [laughter] I know this is going to

22:19 take a turn, Mike. So, I don't think they're just saying Yeah, they're not just calling it. Obviously, you've been listening to our show and you're tickling our ears with all these. He doesn't actually behind the scenes, he doesn't have any of this done. Like none of this is even real. He's like, I know they want to hear this and this and this, so I'll make a question. That's why they'll pick me. Yeah. If I say it in my mailbag, then I know I'm going to get on the podcast. [laughter] I use git, Claude, LLM things [laughter]

22:49 and then ask a totally different question. Yeah, sorry, I totally got derailed about dataflows gen one. [laughter] That totally sidetracked me, Tommy. Awesome. All right, keep going. I don't know. Okay. All reports and the semantic model. Yeah. Git repo and version controlled together. The reports are developed locally in PowerBI desktop and point to the same TMDL-based model in the repo. Changes are pushed via git into our production workspace and then surfaced to users through the workspace app. So effectively, I'm running a repo based

23:21 effectively I’m running a repo based golden semantic model. Interesting way to put that with multiple reports built on top of it but without the typical thin report pattern where everything connects to the same data set in the service. Interesting. Interesting. Oh, hang on. So let me unpack this one here. This wasn’t this one threw me for a loop here. Yeah. Yeah. Yeah. So we’re running the same pad single golden se multiple golden semantic models all deployed in the same place but it sounds like each report is pointing to a different version of it.

23:51 [sighs] I see I'm a little confused with that too, because it sounds like there's a single semantic model. I think Yoke thinks that it's not the pattern, but it is it's the same pattern. It's all connected to a single model. He's just saying that it's all in the same repo. Okay, let's keep going. So, yeah, this setup actually works quite well for me right now, especially with there you go, more pandering AI assisted development.

24:23 Ding ding ding ding. and in the repo I can update references across reports if measures or objects change. Yep. But it also here's, I guess, our question now. Okay. It also made me wonder about something related. I'm curious whether and how report building MCPs might appear in the future. My understanding is that an MCP based workflow will likely interact with the currently open report in PowerBI desktop, while a repo based agent can modify multiple reports in the folder structure at once. Oh,

24:54 in some ways, an MCP [clears throat] approach would actually be attractive, because edits would be more robust through the application layer, whereas file-based edits in the folder structure sometimes miss dependencies I see elsewhere. So my two questions are here we go, the meat of the mailbag. First, have you seen or do you recommend this repo based golden semantic model with Yes. Yes, we do. We

25:24 100% recommend it. Thin report pattern. Okay, cool. We'll move on to the next one then. [laughter] A short episode. All right, we're done. Let's talk about something else totally unrelated. [laughter] And the second: if MCP servers for report modeling become available in the future, how might they influence architectural choices like the one I'm currently making? Oh man, this is a killer question. Best regards, Yoke. Get this guy on a podcast. This is great. This is a

25:55 great question. So, great questions. Okay. So, let's un I want to frame out There was one thing that was confusing me about this question in the middle here, which was I know exactly the part you're talking about. I'm still a little bit torn about do we have like so okay, fine, things are in the repo, not a big deal, not too worried about that. When I look at the lineage tree yes for the published semantic model in dev, or whatever the workspace is, right, is there

26:27 one semantic model and many reports? From what I'm hearing, it is. Okay, I'm agreeing with you, Tommy, because that's what I would recommend. Yes, right, more than just having a separate semantic model for each report, right? I'm not reading it that way though. Okay, good, because I think that's fair. One thing I would like to so, point of note around this, I believe

27:00 AI and agents change a bit of what we can do, because of how efficient they are at making multiple changes across multiple things very quickly. Let me just throw

27:14 down an idea here. So I'm not sure I'm not changing my ideas about architecture. I just want to unpack an idea. Let's just imagine for a moment that in this world we used to have one report that served, he said, like three or four different themes around data, right? And he said, no, this doesn't make sense, in order to make the app which we're publishing, which again, love this idea, using organization apps or workspace apps is underrated in my opinion.

27:45 That should be the preferred sharing method for people who need to consume data. If I don't need to edit or build reports like if you're not a content creator, right, and you're a consumer of data, 100%. Apps all day. So if you're in the agentic space, this is probably going to ruffle some feathers here. There's no reason why I can't take instead of one semantic model, I could have four semantic models, one for each of the reports. So do like a one-to-one still.

28:15 reports. So do like a onetoone still. Now, let me I know I know I Tommy I’m gonna I’m going to caveat this. I don’t recommend this, [clears throat] right? This is something but because it’s all code and because it’s in the repo, you could have one deployed repo of just the model and you can automate that same model deploying four times pretty easily. Not not saying it’s super frictionless yet. Yeah. But the story of the truth, right?

28:45 Yeah. But the story of the truth, right? The truth of the model is the repo. There’s one definition of it, right? So I I just want to conceptually say like with agentic things you can easily automate. Look, my model lives in this repo. This repo has the timal definition of this semantic model. Hey Mr. agent, I need you to go deploy that to three or four separate models named this, this, and this. And I want you to link each one of those models specifically to only

29:16 one of those models specifically to only the report that we care about. I think this is possible. And I’m Yeah. I’m Yeah. Do you see what I’m like? So I’m not theoretically. Yes, it works in theory, I think. But then you could also have these other things like there’s lineage tags. Like if if you let the agent own part of the deployment piece, I think we actually free ourselves up to have new building patterns on how we build things in PowerBI and Fabric. So we have some limitations here and I I

29:46 So we have some limitations here, and I think in theory what you're saying is true, but there are two limitations right off the bat that I see. Number one, with that many semantic models that you'd have to deploy, and you're looking at that all in a repo, you really are going up against context limits. Time out. Time out. One model, one repo. That's all I'm putting in there. It's per repo. Okay. Single model. It's one model in the repo. That's it. I'm not deploying four of them. It's one model only. It's the deployment pipe.

30:16 Yeah, it's the deployment. It's the agent that's deploying it. So the agent's looking at the single definition of the model. Sure. And then from there, renaming it and publishing four versions of it to the workspace. Okay. So what I'm trying to articulate is: the source of truth is the single model that is in the repo. If you make one new measure, it would go to the single model. Right? When you deploy the four different versions, as you're building things, everything gets a

30:47 lineage tag. Okay, I got you. But you see where I'm going? I see what you're saying. All of this is traceable then. It's traceable, but you are then completely AI dependent at that point. Maybe, Tommy. So it's the automation of this that's difficult to set up, right? Tommy, you do a lot of app development, or side app development, yourself as well. Yeah. How many deployment pipelines have you written yourself?

31:19 Barely any. Goose egg. Nothing, because your agent writes it for you. So what I'm trying to articulate here is: the agents not only do a really good job of pattern matching on things. Sure. But what he describes here in question number two is talking about this as well, right? How does the MCP server and the agentic space change this, like how will this influence my development in the future? If I have this really powerful agent there, I'm not going to make the agent do the deployment.

31:49 What I'm going to have the agent do is make a GitHub or DevOps action that's specific to deploying model one or two or three or four. You can't manage it any other way. That's all automation. It's just automation. No, I agree. So I guess the first part I'm going to say here is: you are still pretty AI dependent, because if that ever were to break, well, then you've... okay, like, if something were

32:21 ...who cares what's going to happen? Tommy, if AI broke today, you'd be screwed. I'm going to put that right back at you: we'd all be screwed. We'd have to stop the podcast and figure out our own AI. No, but that's my point. We're already relying on it so heavily that that's a moot argument to me at this point, because we're already past the point of someone saying, "Hey, we're just going to rewind this. This wasn't a good idea. We're going to kill them all. There's no AI out there. We're going

32:52 to get rid of all the agents. None of them can be available to you." The amount of productivity lost... that's not an issue on my plate, because we've already opened that can of worms, and it's open. We are there. I'm not going to push back on that one. I want to, but yeah, it's true. So we've passed that point. But what I don't want to rely on is the agent to do the deployment. I want to rely on the agent to build the automation to do the

33:22 deployment. That way, if something really bad did happen and I can't use the agent, it's not going to be there, we're not going to have the agents around, okay, we step back and say: now we're not going to use the agent piece. Let's just focus on, I've got to go hire a DevOps engineer, right? They would be able to at least figure out what's going on, right? My other point on the first question is: I think we may be reading the question a bit differently.

33:53 The four different semantic models, because in my head there's not a lot of logic to why I would have one model and then deploy four different ones. The way that Yoke is trying to say this, the way that makes at least more sense in my head, is: rather than having a workspace devoted to semantic models, and this is the normal pattern, right, you have a workspace for semantic models and then I build my reports in other places, well, he's building the semantic model in the same

34:23 place he's building the reports. Because, Mike, to your point, why would I have a single semantic model built in my repo and then deploy four different semantic models, right? That makes no logical sense, whether I had AI or not. There are reasons why people would want to do this, right? So one of the reasons would be: in each of these four models, you're adding parameters that you adjust during your deployment pipeline, so that different

34:53 data for different customers, or different data for different domains, or different time periods, gets added to different models. Sometimes you're trying to fight the limit of what size the model can be inside a particular workspace, or there are threshold limits, you're on a certain F SKU, and those models can only get so large. So you might start with four identical semantic models; they're identical in their schemas, but the data that's loading into them might be different.

35:23 So I think there are pieces there that, again, to me are low-hanging fruit that would be automatable. Like the parameters: if the parameters are the only thing that's changing, I would argue the model's the same and the parameters are just adjusting, right? That's something we can do with deployment pipelines today. We can have the same model pointing to different servers in different environments. It's the same model, but it's being slightly adjusted with the parameters. I think that's very reasonable to expect. Okay, so I think there's some things there that would be interesting.
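Mike's parameter point, that only the parameter values change between environments while the model stays the same, can be sketched in a few lines. This is an illustrative sketch, not tooling from the episode; it assumes a TMDL text parameter serialized as an `expression Name = "value" meta [...]` line, which is roughly how Power BI Desktop writes parameters into a project's TMDL files, and the `Server` name is hypothetical:

```python
import re

def override_parameter(tmdl_text: str, name: str, value: str) -> str:
    """Swap the literal value of a named text parameter in a TMDL
    expressions file, leaving the rest of the file untouched."""
    # Matches e.g.: expression Server = "dev-sql" meta [IsParameterQuery=true]
    pattern = rf'(expression\s+{re.escape(name)}\s*=\s*)"[^"]*"'
    return re.sub(pattern, rf'\g<1>"{value}"', tmdl_text)

# Retarget the (hypothetical) Server parameter for a prod deployment
dev_tmdl = 'expression Server = "dev-sql" meta [IsParameterQuery=true, Type="Text"]'
prod_tmdl = override_parameter(dev_tmdl, "Server", "prod-sql")
```

The same substitution is what a deployment-pipeline parameter rule effectively does for you; scripting it only matters if you are deploying straight from a repo.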

35:54 I'm not saying this is a good idea. I'm just saying it's possible now. And what I'm trying to say is: in the past, if you had told me, look, I've got four reports and I've got four exact semantic models, I'm like, where's the source of truth, right? So prior to agents, I would have immediately complained and said: no way, not possible, don't do it. I would have had a very strong aversion to that pattern. Now I'm seeing this differently, a bit more, with the agent space, and also

36:25 the fact that we're noting here there's a repo with the definition of the model, right? So to me, that's actually the source of truth. So let me just unpack where I'm going with this a little bit. Two more, Tommy. If you have a single semantic model defined in the repo, and that is your source of truth... I was just talking with a client about this one. When you have a workspace and you attach the workspace to a repository, you shift the source of truth from the workspace to the repo,

36:55 right? And then when you pull the repo down on your computer, it's just a copy of the source of truth, and you make changes and push your new updates back to the source of truth. Same thing for the workspace. The workspace doesn't actually become the source of truth. You make changes on files and models or reports; it just updates the workspace. The source of truth, which is the repo, says, "Hey, it's uncommitted. You need to make your changes and get them back to the main source of truth."

37:25 So that changes the game for me. Now, would I want to build the automation to deploy the same thing four times? No, I would not. That's absolutely off my radar. But with agents, given 15 minutes of describing what you want and why you want it, an agent can easily build GitHub Actions or Azure DevOps actions that will deploy the model as you expect and replace properties in the model so that it deploys correctly into the workspace. I think this is really achievable.
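The deploy-the-same-model-four-times helper Mike imagines the agent writing could start from something like this. It is a sketch under assumptions: that the model lives in a PBIP `*.SemanticModel` folder, and that the folder's `.platform` file carries a `metadata.displayName` key (the layout current PBIP projects use); actually publishing the copies to a workspace would still need the Fabric APIs on top:

```python
import json
import shutil
from pathlib import Path

def stamp_out_models(source: Path, out_dir: Path, names: list[str]) -> list[Path]:
    """Copy one PBIP semantic-model folder once per target name and
    rewrite the displayName in each copy's .platform metadata file."""
    copies = []
    for name in names:
        dest = out_dir / f"{name}.SemanticModel"
        shutil.copytree(source, dest)               # full copy of the TMDL definition
        platform_file = dest / ".platform"          # assumed PBIP metadata location
        meta = json.loads(platform_file.read_text(encoding="utf-8"))
        meta["metadata"]["displayName"] = name      # the only property we change
        platform_file.write_text(json.dumps(meta, indent=2), encoding="utf-8")
        copies.append(dest)
    return copies
```

The one repo stays the single source of truth; the copies are disposable deployment artifacts stamped out from it.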

37:55 this is really achievable. Now that is achievable. Let me tell you, I’m going to going to again I’m not arguing this is a good idea. I’m just saying it’s possible. The friction is to me the friction’s been removed. There’s a let me give you a different angle here on why I think Yukim built this the way he did especially if he wanted AI assisted development because I want you to consider this. One of the limitations in our current architecture of building semantic models and reports is the fact that they are in separate workspaces, which means they’re

38:26 separate workspaces, which means they’re in separate repos, which means it’s very hard to have AI assisted development on the semantic model and the report if they don’t exist in the same place. So if you want to have AI assisted development, you need you almost by prerequisite need the semantic model to live in the same location as the reports if you want to have that workflow because think about it Mike you’re not going to be able to have easily easily an agent go

38:59 able to have easily easily an agent go across repos to be able to talk to any dependencies or push things out or to make sure that it understands the context there. So if I want a report built or Tim updated on a report but it has no context on the semantic model because it’s not in that repo, we got problems. The only way is I have the semantic model and that tindle definition in the same place the report is from a repo point of view.

39:30 repo point of view. I we’re almost getting limited this I let me say this I understand your point but I disagree with the analysis of it because already today I’m using agentic experiences with multiple repos not having a problem jumping between multiple repos easily understands the context of multiple repos again I’m doing more are doing this local or is this a is this an agent jumping from repo to repo in a server or is this happening on your computer computer so this is this is more CLI local development, right?

40:00 Okay. And I can tell an agent to come down and pull two different... So, for example, one of the things that I built: look, I need the skills of my agent distributed across my team. How do I get that across my team? So I said, let's build a skills repo, [clears throat] and that way the skills are in a common repo, but all the work that I'm doing is in a different repo. So now the agent has full context. And again, VS Code does a

40:30 really good job here. VS Code has its own concept of a workspace, and in that version of a workspace you can actually pull in two repos side by side, and you can save the workspace and its settings. So here's the skills repo, and here's the actual repo where the code lives.
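The VS Code trick Mike describes, two repos side by side in one saved workspace, is just a small JSON file on disk. A minimal sketch generating one from Python, with hypothetical folder names:

```python
import json
from pathlib import Path

# A VS Code multi-root workspace: the shared skills repo and the project
# repo opened side by side, so one agent session sees both at once.
workspace = {
    "folders": [
        {"path": "../team-skills"},  # hypothetical shared skills repo
        {"path": "."},               # the repo where the actual work lives
    ],
    "settings": {},
}

Path("power-bi-project.code-workspace").write_text(
    json.dumps(workspace, indent=2), encoding="utf-8"
)
```

Opening the saved `.code-workspace` file restores both folders and their settings in one window, which is what gives the agent context across both repos.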

40:48 So you don't have to localize your skills and custom agents into a single repo. You can use them in other places. So I would argue, today I'm already using multiple repos with a fair amount of effectiveness, with the same agent in the same chat session. So to me, I'm not seeing that as a big barrier, because I'm already doing it today and it works. You're doing it on your own local computer, and there's a difference there too. When you're using skills, it's not using the same context as when I have to cross

41:19 repos to tell it: hey, look in this folder for my semantic model, now jump over here for all my reports. You are wholly dependent, if that's the right word, entirely dependent, on your own local VS Code workspace for the context there, every time you want to keep running that, right? And, shoulder shrug, but there's nothing here that's talking about having to use agents in the cloud. Like, the conversation here... he's talking... where's

41:49 where is he? He's talking about MCP servers here. That's the second question. The first question, dude. Mike, I don't know how much you're doing, I'm sure you're doing a lot, with agents in a GitHub repo. Most of my repos now kick off after I do a commit, in terms of my agents, where I have GitHub Actions that run that will kick off an agent to do a ton of things for me, right? Because here's the thing, especially in my head: if I'm working in an enterprise, or if I'm working in an organization, I don't want to be,

42:21 especially with something automatic and agentic, entirely dependent on my own local machine for things to run. Because, again, what happens when I'm out of town? That means everyone else has to have the same workflow set up on their computer, and I guarantee you someone's going to get that wrong. I want this set up in the repo, not on my own local machine. Okay. So, let's tease this idea a little further, because I think you're also breaking another pattern here that we wouldn't recommend either.

42:51 Which is: now, when you have two repos with the same skill in them, how do you keep those two skills in sync? No. Skills are different than a semantic model. I will answer your question. It don't matter. I will answer your question. You just proposed the opposite of what I'm doing, which is: I centralize skills so that everyone can reuse them across all repos. And then you're saying, well, okay, I don't want to do that. I actually want to have the skills in the repo so I can use them. Okay, understood. But now you have a separate problem, which is: now you

43:21 have two definitions, potentially, for the same skill across two different repos. And if you interact with one, you don't get the same updates on the other. So now, when you want to update one skill, you have to figure out where it's all referenced and which repos it's in. How is that relevant to the semantic model and the report here? I'll answer your question. I will gladly answer your question, but I want you to try to stick to... if you're doing a PowerBI workflow here with semantic model context outside of the same repo. You have a semantic model in one repo. You have reports in a different one. I don't see this.

43:51 Where in this question is he saying that the model and the reports are in two different repos? He's not. What I'm saying is you're dependent... [snorts] My argument here is... I don't understand your argument. Seriously, I don't understand your argument. Okay, let me back up. You're arguing things that the question doesn't even ask. It's not asking about models and reports in separate repos. Yoke's already saying that the semantic model and the report are in the same repo. So then why are we arguing about different repos?

44:21 My argument is: with AI assisted development, if you want to make it work, you're dependent on having that architecture of the semantic model and the report in the same repo. What we are accustomed to is having them separate. My semantic models live here. My reports live here. But that does not jive as well with agentic development.

44:52 My argument is: if you want to do AI assisted development, you have to do what Yoke is doing with his first question. That is the preferred, best-practice way, at some point, to do AI assisted development. I want my semantic model in the same location as my reports if I'm going to do AI assisted development to its fullest. That's my argument. I was just simply talking about what we've done in the past, which is not going to work with AI assisted development.

45:29 I'm trying to unpack your comment here. Okay. I finally got clear at the end, I think. But yeah, I wasn't sure how to answer your question or really unpack it there. I think I understand your point now. Before, I was on a different page, totally on a different wavelength, not really following you. I guess maybe the hang-up for me is the wording in the question that talks about "instead of the classic thin report pattern," and

46:00 let me define what... That's why I got thrown off too. Okay. So let me define what I mean by a thin report, right? And I'm going to talk about a thin report in the traditional PBIX space, and then how I interpret the same thing as a thin report in the PBIR space. Okay, in the traditional sense, let's call it classic thin reports, you would build an entire PBIX file for the

46:31 model. Now, because of the way you deploy that, you would get by default a blank report, and I would put a gray page on the report and say: this is a model only, there's no data in here, this is for development, do not edit. Yeah, this is for developers only, this is a model-only report. Okay, so in the old way of doing things, I would make that my model, and I would literally name it, you know, "Sales Semantic Model." That would be the name of that object, right?

47:02 And then I would build a second PBIX that would reference the other model, and I wouldn't build that against local development, right? The report PBIX would be a live connection to the semantic model built into the service. Or, if I was getting really fancy with it, I would have it locally pointing to the XMLA endpoint of the local desktop, and then I would hot-swap the connection at the last minute and point it to the service, and do other crazy things, right? All that very hacky, all that not very smooth.

47:34 Okay, so that was the classic way of doing it. Now, in the classic mode, I still would have one semantic model and three or four reports. That's still in my definition of classic. I'm with you, covered, right? So when you say "instead of classic mode," the classic mode of two PBIX files, and now we're talking about one file that is model only and three or four reports that are report only, but all reports are still pointing back to the

48:05 same model. That's what I mean by classic. So, as I'm saying this out loud, maybe that's what we're talking about here, because in this new agentic world, you don't need to deploy the PBIX anymore. With the way you have repos and the model set up, you can physically open that model and deploy just the model elements directly to the service. Now, let me just pause there. Yeah. And I'm gonna

48:35 agree with everything you just said, except... No, no, no. You [laughter] add an "except" for everything I disagree with. Everything I said is right. Everything else is wrong. [laughter] It's all wrong. No, everything you just said I'm 100% in agreement with. I think what I'm adding to that, so it makes sense in my head, is an assumption about the classic development you keep talking about.

49:05 I'm assuming that the semantic model and reports live in different locations. That, to me, is the classic workflow: semantic models are usually in their own workspace. So [snorts] that's what I'm adding to what you're saying. Yeah, that is a pattern, not necessarily always the pattern. It's not the only pattern, but it's usually the preferred one. So, in terms of making this question make sense to me, I made that assumption. Okay, fair.

49:37 And I would say that is a best practice and pattern, depending on how you're distributing and sharing these models, because you don't want to give people access to a workspace with a model in it, right? So now you're blending in a little bit of strategic thought around workspace design and where the files are potentially stored as well, right? And again, if you take it further: okay, well, now each workspace is attached to a different Git repo or branch or something along those lines, now we've got a different thing. Okay, so now I think I'm more in line with where you're at, Tommy.

50:07 I should have said that assumption off the bat. Well, [laughter] if you're assuming that this workspace is attached to the main branch, and you can only have one main branch per item, it gets, to your point, Tommy, trickier to deploy this thing through Git, especially using the workspace-integrated Git integration. Where I get thrown off now, a little bit, for me personally, is: if you're going down this more complex path of a repo and deploying things to workspaces,

50:40 I'm probably not going to use deployment pipelines as much, nor am I going to use the Git-integrated workspace. I'm going to treat the workspace more like a deployed item that I deploy into. So maybe that's a slight difference here, but all I'm saying is, I think we're still on the same page. Okay, I think so too. And I think there is going to be a question, or really something that we're going to have to break out in the next few months.

51:10 I want to get to the second question, but the last thing I'll say here is: we are going to really have to strongly consider our architecture with PowerBI so it works best with AI assisted development, rather than the other way around, because I think we're beginning to get there. Yeah, again, AI is shifting everything that we do, and things that used to be really hard actually are not anymore. So why wouldn't we step into these new arenas with new patterns?

51:43 Why don't I build things for the AI format and architecture? 100% on this one, Tommy. Okay, so let's keep going on this one. We talked about the PBIX version of the classic way of doing thin report patterns. We're in alignment there. Let's unpack the PBIP format, the PowerBI project format. Now, one of the advantages of this, I'm going to just decompose this, right? If you have a single model

52:14 and a report, you no longer need the model and the report to be two separate PBIX files, because in some essence you're duplicating things, right? You're getting a model, but you're also getting this dummy report that you really don't use. And when you deploy it, you get two artifacts. In the PBIP landscape, you have folders for the model and you have folders for the report. And I've even seen Rui do some really interesting things here, where he takes a

52:45 PowerBI project, and you can take the report folder and copy it four times. So you can have one model and four PBIRs pointing to the same semantic model. So all of this is workable, right? This is where my head starts melting a bit, because there are some things you can do here that are insane when you have the PBIP project.

53:15 So, typically, whenever I'm taking a PowerBI report and putting it into a PBIP project, I make a folder on my desktop and call it the name of the project, right? "Tommy and Mike's Super Duper Report," right? That's the name of the folder. In there, I save the file in the PBIP format. Once I'm there, I can copy the report folder as many times as I want. So I could copy the reports and say report one, report two, report three, report four. In doing that, all four of those reports point back to the same semantic

53:45 model. Right? Let's melt some heads here. If you go into the report folder and open the PBIR file, that opens that report attached to the single semantic model, right there in desktop. It already works. You can also go to the second folder and open up that one. It will still open up desktop, but now you have two versions of desktop open.
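The copy-the-report-folder step is plain file manipulation. A sketch, assuming the PBIP layout of a `*.Report` folder holding a `definition.pbir`; note that a real project would also want each copy's `.platform` metadata (display name, logical id) made unique, which this sketch skips:

```python
import shutil
from pathlib import Path

def clone_thin_reports(project_dir: Path, report_name: str, count: int) -> list[Path]:
    """Duplicate a PBIP report folder N times inside the same project.
    Each copy keeps its definition.pbir untouched, so every clone stays
    pointed at the same semantic-model folder sitting next to them."""
    source = project_dir / f"{report_name}.Report"
    clones = []
    for i in range(1, count + 1):
        dest = project_dir / f"{report_name} {i}.Report"
        shutil.copytree(source, dest)
        clones.append(dest)
    return clones
```

Because every clone's `definition.pbir` still references the one model, opening any of them in Desktop attaches to the same semantic model.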

54:17 Two different thin reports open, but both pointing to the same model.

54:23 And in this scenario, if you go to one thin report and add a measure, when you refresh the second thin report, the measure will appear. So we're basically loading the same data twice, once for each PowerBI desktop file. You can now do this, and it's a brand new pattern, right? This is head-melting: I can have two PowerBI desktops open, and if I make one edit here, it changes the edits on the other side simultaneously.

54:53 It's wild. It's like materializing. So you could have four desktop versions open? No, I'm with you on everything. And any one of those changes would update or modify measures. So that is almost mimicking what you would do in the service, where you have the model and then you go edit the thin report in the service. It's almost the same pattern, right? The browser is just showing the thin report. There's a single semantic model, and if you update the single semantic model, you can then

55:23 edit all the things. So let me just pause right there. Does this make sense, Tommy? Yeah. No, I saw Rui's thing. This is something that I've been actively doing for versions, testing things out so I'm not breaking stuff. And this, to me, has become a best practice. This is already at the point where it's a default way of building things. And I think that's actually a really good transition to the second question as well. Yes, it is.

55:53 So I think what this now unpacks for us is: okay, the PBIP format unlocks a whole bunch of different development patterns that we could be deploying here. In this situation, we now have the model and multiple thin reports all in one code space. Everything's tracked by small files. We can have all the different reports. Everything's much easier. And if you take that entire "Mike and Tommy's Awesome Reporting" project and deploy it to a Git repo, everything's tracked. To your point, Tommy, the model's tracked and every version of

56:24 every report is tracked. And if you go in and really look at the details there, there's one file in the structure, I believe it's the PBIR file, that says this thin report points to this particular semantic model. And so, if you want to adjust where that thin report points, you can easily adjust it, because it's now part of the file. It's literally just changing a string, and it goes to a different source.
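That "just change a string" file is the report's `definition.pbir`, whose `datasetReference` can point at a local model folder via `byPath`. A minimal sketch of rebinding it, assuming that JSON shape:

```python
import json
from pathlib import Path

def repoint_report(report_dir: Path, model_path: str) -> None:
    """Rewrite a PBIP report's definition.pbir so it points at a
    different local semantic-model folder (the byPath reference)."""
    pbir = report_dir / "definition.pbir"
    doc = json.loads(pbir.read_text(encoding="utf-8"))
    doc["datasetReference"] = {"byPath": {"path": model_path}}
    pbir.write_text(json.dumps(doc, indent=2), encoding="utf-8")
```

The same file can instead carry a `byConnection` reference for a model already published to the service, which is the other direction a deployment script might rewrite it.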

56:54 source. So for the sake of clarity I want to make see your definition here or interpretation of what you saying because I took this one of two ways. Okay. And this is the second question. Question two. If MCP servers for report modeling come available in the future, how might they influence architectural choices like the one I’m currently making? Now, I want to give you a caveat. This question is from April this month. This is a recent question. So, my head

57:25 head thinks that he’s talking about an MCP server for report building, not for semantic model building. Okay. And so, that’s where I’m having the hangup right here. Good. Okay. I like where you’re at. Great call out on this one because I read it the first time and I was still thinking modeling server, right? So, let me just before I go into the report MCP, right? right? Question. Question. Let’s unpack first. Yeah. Yeah. The The PowerBI modeling MCP server, which has been recently renamed to be more focused

57:56 been recently renamed to be more focused on the modeling MCP server. Okay. Right. Right. The modeling MCP server is will connect two different ways. you can connect locally to a desktop file that’s running or you can connect that server to a cloud the the powerbi. com version of that model. In both situations, the MPC server is going to make live changes against both models. So, you can change it locally on your machine or you can change it in the service. My understanding also is when you’re changing the model locally on your computer, the model must be on in

58:28 computer, the model must be on in desktop and open. you you have to have the model the model depending depending on what command you do with the MCP server. So the MCP server though you can’t make any changes without the model running. The model has to be on. There are things with the MCP server that can connect to TIMDL and actually edit Timul because so the MCP server has a ton of different commands. It can connect to PBX. It can connect to the path of a PBIR. It can actually say write to a Tim definition, a model

58:58 definition. If you want to do the live connection, yeah, it has to be open. Does the local one? Yeah. I don't think all the commands, if you go through the CLI... Yeah, I haven't gone through the commands. I've gone through the commands on it. Yeah, but I was under the impression that there are no commands to just directly edit the files. All those commands are not looking at local files. So

59:29 Yeah, the pattern I'm using, let me just say it this way. The pattern I'm using is, if I'm going after changing the model with the MCP server, I'm assuming the model is on and running somewhere and I'm going through the MCP server; that's my workflow too. If I have files that I'm looking at, so let's say I want to directly edit the TMDL files, the MCP server is not required, but you need to have something that defines what is in

59:59 the TMDL file. So this is where skills pick up the weight of "here's what a TMDL file is," and Rui's also published some skills around, okay, if you're not going to use the MCP server to modify things, you just want to modify the model directly with the files, you need a skill that says, okay, you are a TMDL data modeling expert and here's how the TMDL file works. Because in that situation there's no blocking for when it writes bad code; it
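For context on the format being discussed: TMDL (Tabular Model Definition Language) is a plain-text, tab-indented representation of a semantic model, which is why an agent can edit it directly as files. A rough sketch of what a TMDL table definition looks like (the table, measure, and column names here are made up for illustration):

```
/// Sales facts loaded from the warehouse
table Sales

	measure 'Total Sales' = SUM(Sales[Amount])
		formatString: #,0
		displayFolder: Core Measures

	column Amount
		dataType: double
		summarizeBy: sum
```

Because this is just text with whitespace-sensitive structure, an agent editing it without guardrails can easily produce a file the engine will reject, which is the risk the hosts describe.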

60:30 will just write whatever it thinks TMDL should be, and the model may reject it, which will break your PBIP. Yes, and that's the point. That's why I'm saying the MCP server is better for modeling, because in the MCP modeling server, that's where the protections are made against things that it can't do correctly. So there's also this idea of like
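The kind of "skill" the hosts describe is, in agent harnesses like Claude Code, typically just a markdown file with a short frontmatter header. A minimal sketch, assuming the common SKILL.md convention (the name, wording, and rules below are illustrative, not any actual published skill):

```markdown
---
name: tmdl-editing
description: Rules for safely editing TMDL semantic model files by hand
---

You are a TMDL data modeling expert. When editing .tmdl files:

- Preserve the tab-based indentation; TMDL is whitespace-sensitive.
- Keep one object (table, measure, column) per block.
- Never invent property names; only use documented TMDL properties.
- After editing, remind the user to re-open the model in PowerBI Desktop
  to validate, since direct file edits bypass engine validation.
```

The skill stands in for the validation the MCP server would otherwise provide: it tells the model what a well-formed file looks like, but nothing blocks a bad write.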

61:01 You're right. So I'm just verifying here. It must have been something else I was looking at, but I don't think... Yeah, I think you were wrong on that one. Yeah, you're right. So the MCP server [laughter] has to be running. Okay. So let's draw a line on that one. Thanks for checking in on that one. Yeah, but I was fairly confident, because that one had TMDL, but it's not the local one. So it's not this one, right? So, another definition: Marov and some other gentleman,

61:31 Kurt Buhler, are making another MCP server somewhere else. That one may be talking to TMDL, but the one that Microsoft developed, you have to have PowerBI Desktop open. Correct. And I think that's a fair trade-off, right? If you're going to run MCP stuff, go directly through desktop or service. Okay, so now that we got that out of the way, right, so now with MCP... That's the modeling server. Here's where I think things go different. The report side is much more complex.

62:02 And on the report side, Microsoft themselves have directly said at Fabric Conference they're not building an MCP server for reports. They're building a skills library. So they've moved on to skills for reports. They have vocally said that they're interested in building skills against reports. I've also seen a number of other people actually build CLIs. So Kurt Buhler, being one of these people, has built an entire CLI around... now, this is not an MCP

62:35 server; it's just manipulating files on the desktop for those things. But I think where the challenge for this comes is, Tommy, here's how I perceive it as an outsider, right? The TMDL format and the semantic model format have a code-based checking system built into them, right, because there's an XMLA endpoint. There's already IP that Microsoft has built around being able to verify the

63:06 TMDL or the semantic model changes you're trying to push to anything, a server or desktop. There's already built-in error messaging: it will not accept changes, or it will check those errors and say, this is wrong, don't do it. Okay. I think there's a difference between that and what's on the report side. I don't think the report side has the same rigor and error checking outside of desktop or the service that the semantic model has. You want to break your report quickly

63:37 and not be able to open up desktop? Just go randomly put things in TMDL and screw it up quickly. No, not TMDL. Randomly put things in the report side, and it won't tell you. No. Yeah. If I do PBIR... I don't think I'm going crazy here. It's not JSON. TMDL is also... am I going crazy? TMDL is Tabular Model Definition Language. Oh, you're right. Yeah. Okay. No, it's not on the report side. Report

64:07 The report side is doing lots of small objects that are all about the report. I believe the report side is still... it's not markdown. It's all JSON. Yeah, it's JSON. That's right. So, don't question me, Tommy. I'm two for two now. Three strikes, you're out. [laughter] Hey, I'll give you today. You got today. You win today. Yeah, you win. What is it? I'll get you next time, Gadget. [laughter] Rattly.

64:39 So, good callback. Yeah, good callback. That's... I watch so many cartoon shows. Yeah, Inspector Gadget. There it is. So, let's go back to the second question again. Okay. Yeah. There may be people in the community that are going to build an MCP server like a report builder. There may be people in the community that are going to build a CLI report version of this. I think what's going to happen is we're going to get more skills. We're going to get custom tooling around the report writing, because it is just

65:10 quite a bit more complex. And Microsoft themselves have said, "We're focused on building the skills that you need that will work with an agent to let you manipulate the report layer." So the question here, directly, is: if an MCP server for report building (I'm going to change the word here, report building) becomes available in the future, how might this influence your architectural choices like the ones you're currently making? The fact that you already have everything in git,

65:40 that you're already using the PBIP format, will not change. I don't think the format changes anyway. [clears throat] The only way I could see this playing out a little bit is, okay, let's look at the pattern of what MCP modeling is doing today, right? It's a harness: VS Code, Claude Code, something, right? That harness holds the large language model. You bolt on the MCP tool

66:10 and you talk to either desktop or service. Okay. What does that same pattern look like for the report? Do you do the same? Is it the same thing, or do you not manipulate desktop? Because I don't think desktop has an API layer to go say, make a visual and put it here on the page. It just doesn't have that. No, let's be clear. Outside of the technical challenges or the barriers that we currently have.

66:41 Yep. Report building is so much more ambiguous, right? It is not simple DAX, because what design do you want? Are you just telling it to build a line chart? We don't have that yet. But that being said, I really want to... go ahead. Why hasn't anyone built it? There is no tool out there for report design, and I'm not just talking PowerBI data visualization. This goes back to what I said three years ago: a good report is in the eye of the

67:12 beholder, unlike good DAX or good TMDL or good semantic model building, which is universal, one to one. There are universal design patterns for good semantic models. You can have data visualization theory, but what metrics are you going to put in there? I can literally tell my MCP server to start looking at relationships, here are some general instructions, and it's going to start building for me things I didn't ask for. I think the reason why

67:42 we haven't seen anything around data visualization and report design, not just a single visual that Claude or whatever can make, is because you have to give it clear instructions or it ain't going to do anything. Show me somewhere that we have

67:59 report building skills, report building designs, that I can deploy with general instructions. It does not exist yet. And Microsoft has tried a ton. This was part of Fabric on day one, when they first introduced Fabric. They said, "Look, we're building a thing that's going to design a report page for you." You can go back and check the tape. You and I were both watching this live and going, that's not going to happen. So

68:30 there's a lot of challenges here. Yes, I agree with you. Cool. I'll take the win. [laughter] Let me give you what I think is the miss. Okay. The miss is, a lot of tools and things that you see in the market all focus on: how do I build the visual? How do I take the data and get it to a visual? And then on the visual side,

69:02 we say, "Okay, great. Now that we've built the visual, we stack them together and make them interact with each other." Okay, fine. Right. That's not what we're doing. In reality, what you're describing and what I'm hearing you say is, we're actually talking about: what is the intent of this page? What is the intent of this report? And what tool do we see, Tommy, anywhere that you've observed any BI reporting, right? Whether it's Amazon's QuickSight, whether it's Tableau, if it's

69:32 Google Looker, if it's PowerBI, if you look across any of the visualization tools that are out there, I think this is a really hard problem to solve. And so instead of solving the hard problem of what is the intent of the report, and getting people to not think about visuals or KPIs and actually go from a first-principles concept, take the Elon Musk stance on this, I know you're going, yeah, right, why does this exist, right? Hey, I

70:02 need a report that saves me money, here's what needs to be in it, and then you go into describing your business and how it saves you money. There are zero tools out there that let that happen. Everyone focuses on what they can accomplish, which is: look, I'm going to stitch data together, I'm going to build a semantic model, and I'm going to drag columns over, right? So we've been taking this bottom-up approach. And so, one of the things that I try and do with the Power Designer and my theme

70:32 generator was, we don't just build themes anymore. We build full templates, right? And the idea, this was my idea, was: let's start from the intent forward. Let's start from the design forward, right? We don't really care what data is in the visuals. We really don't even care what visuals are on the page. Let's just go for the feeling of the page. What does good design look like? And so if you go to Power Designer in Fabric as a workload, or you go into themes.powerbi.tips, you're going to see the world's largest collection of off-the-shelf, ready-to-use templates. We've

71:03 got over, I think, 60 now, and we're continually adding more each month. But we now have templates. You just go use one and hit deploy. What semantic model do you want to bind to it? You drag fields over, and the report is at least laid out in a way that you can just start getting value out of it immediately. So that's a step in the right direction, right? A starting point. That's a starting point, right? And this is, when you look at good design practices around reports,

71:33 this is what we're doing. So let me come back to this MCP thing, right? To your point, Tommy, we're not looking at reports the right way. We're not designing them with intent-based thinking. Intent is like an afterthought, or maybe a separate thought, a side thought, instead of being the primary reason why we're building things. I do think there are really good tools out there, or the tools are getting better, to be able to build visuals with agents and prompts. There's a lot of

72:03 projects out there right now that are doing it. One of the things that I'm really looking forward to is to stop having the heavy click tax. And I'll call it a click tax because every time you click on... Yeah, I like that. It's like the data-ink ratio. I'm going to throw my clicks in the ocean, in the Boston River, right? I'm going to revolt and say, no more clicks for me. I just want to say words and have it built. A click party. Yes, a Boston Click Party. We should do that. We should have a pilgrim throwing his mouse

72:34 into the ocean because he's done with clicks, right? No more click tax. Okay, that's going to be a shirt. I'll have that for you next week sometime, Tommy. That's a good one. Okay, so that is where we're at, right? So what does that look like now? How does this work? Let's unpack this a bit, right? I really like this idea of agents and custom agents. I really like this idea of being able to define a report holistically

73:06 and then chunk it up into many work items, right, that we can approve, and then it all stitches together. So in my mental model, this feels a lot like I need an orchestrator agent that says, "Hey, you're my report intent designer. You evaluate everything against the report intent of what this is." Sure. I have a series of sub-agents, and in the sub-agents we scale them up to, here's how you build these kinds of visuals, or what visuals we use. Now maybe each agent

73:36 gets its own thing. Maybe you build custom agents that are like, you're the scatter chart agent, you're the pie chart agent, you're the whatever agent. And we would kill the pie chart agent because we don't want that around. Just get rid of him altogether. Deleted. Kill only the pie chart agent. Thank you. It's like the pie chart agent shows up and says, I made this great pie chart, and you're like, no, deleted, removed from the report. The pie chart agent, just so I can see it deleted. I feel like there's a sad song, and he, like,

74:07 the pie chart agent walks off and takes a long walk. No one likes me anymore. I would only create the pie chart agent to tell it to destroy itself. [laughter] But if I think about the report and what needs to be built, right, there are a lot of technical pieces around the visual: the line chart, the formatting of it, the positioning of it. So maybe that's all tasking where we need to give detailed requirements to a sub-agent and let the sub-agent really work out the

74:37 mechanics of the one visual. Right. So can we make an agent really good at building a single visual, and then just tee up tasks and have an agent rip through, okay, let's just keep building this in a series of tasks, so it can logically think through how we build things? And I think the reason why I'm saying this is because this is more amenable to a skill rather than an MCP, because I think of MCPs as authentication and APIs.
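The orchestrator-plus-sub-agents pattern Mike describes can be sketched in plain Python. Everything here is hypothetical scaffolding, not a real PowerBI API: an orchestrator holds the report intent, hands each visual off to a sub-agent task, and rejects work that doesn't fit (pie charts, per the running joke).

```python
from dataclasses import dataclass


@dataclass
class VisualTask:
    """One unit of work handed to a sub-agent: a single visual."""
    visual_type: str
    fields: list


def build_visual(task: VisualTask) -> dict:
    # Stand-in for a sub-agent that works out one visual's mechanics
    # (formatting, positioning, field bindings).
    return {"type": task.visual_type, "fields": task.fields}


def orchestrate(report_intent: str, tasks: list) -> list:
    """Hypothetical orchestrator: delegates each visual to a sub-agent,
    then evaluates the result against the overall report intent."""
    results = []
    for task in tasks:
        if task.visual_type == "pie":  # the running joke: reject pie charts
            continue
        visual = build_visual(task)
        visual["intent"] = report_intent  # tie every visual back to the intent
        results.append(visual)
    return results
```

The point of the sketch is the division of labor: the sub-agent only needs detailed requirements for one visual, while the orchestrator owns the holistic question of what the report is for.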

75:08 I don't think desktop has the surface area to command a visual to move around the page or do things with an API, but the model does. Does that make sense? I see where you're going with this, because going back to what I think you're saying, intent, again, is in the eye of the beholder. You're not going to find that in documentation for a certain report. I had a sandbox call yesterday. It's awesome. One of my favorite

75:39 things to do is where you meet with a client and you begin to go over initial designs, but you're trying to get what sticks for them. And that's the problem we have. However, I see what you're saying here, where there could be that report intent agent. It has the notes. Okay, let's begin to visualize, not in PowerBI, but just a sketch. Honestly, that's what we need, that initial sketch. That's what my tool Power Designer does. It's a sketch. Yeah. Throw visuals on a page.

76:10 Just feed it in, because you need to know what those triggers are. What are those thresholds? Here's a background. Here's a color palette. You can work with things to build those things. Right. We already have AI built into our tool today, where you say, "Look, here's a background image. Figure out where the visuals go." And it just figures out where the squares, where a visual would live, and puts it on there for you. So we've already done all the work for this. We're already starting to incorporate agentic building of templates, which is phenomenal. Yeah. Which is a good step in the right

76:40 direction. But this is the first step to getting further down the road. Okay, great. Yeah. You need an agent to build the template, and then you need another agent to fill the data into the template. Pick the visual and then fill the data. That's what we should be focusing on. All that being said, to answer the question, because, I love how anytime you and I say before a podcast, we're keeping it short today, this is not a short one [laughter], and we didn't even have that much news. Yeah, I know, this is just a good mailbag. I just want to

77:11 quickly try to answer the second question. Let's fully close out the question. Yeah, go ahead. Fully close out the question: what about the architecture you're putting together? I want to go back to my initial statement here: if you want AI to work well with PowerBI and Fabric, it does change the way we think about the architecture and the organization of our content. To me, I would keep doing what Yokim is doing. I would have the semantic model

77:41 in the same location as those reports, to have that necessary content on what those measures are, those definitions, what's critical, so we can understand that. So I would not really change anything about what he's doing. I would say the architecture you have in place, no changes. Yeah, there's no changes there. The only change I would maybe recommend, on what will happen in the future, will be how you deploy things, right? You may not use a workspace-linked git repo to

78:15 deploy the model and the reports. You will need to have the model and reports, I think, in the same repo, which makes the most sense to me. Right. I would also argue, Tommy, if we really do need to break it apart and have a model repo and a report repo, I think you can get away with it. It's going to be more friction-filled for you. So I would probably not recommend that path, to your point earlier. It's not a hard requirement for me, but it's going to be strong advice to keep the model and the reports in the same

78:47 repo structure, because it's much easier for you to build those thin reports by just making more folders for the thin reports. And when you go to the service, powerbi.com, it's going to be much easier for you to build another thin report inside the workspace, and if you did have the workspace tracked, it'll actually push those changes into the repo easily for you. Can I answer one of your earlier questions, because I just realized this and we'll close out on this: I remember you said, how do I keep skills in sync? You ever put skills in a workspace?
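A repo layout along the lines Mike and Tommy describe might look like the sketch below. All folder names are illustrative; the `.SemanticModel` / `.Report` suffixes follow the PBIP convention, and adding a thin report really is just adding another `.Report` folder:

```
repo/
├── Sales.SemanticModel/         # the shared model, stored as TMDL files
│   └── definition/
│       ├── model.tmdl
│       └── tables/
├── Sales.Report/                # main report (PBIR definition)
├── Finance Thin.Report/         # a thin report: just another folder
├── Exec Summary Thin.Report/    # ...and another
└── skills/                      # agent skills kept alongside the content
```

Keeping model, reports, and skills in one repo is what gives the agent the context it needs (measure definitions, report structure, editing rules) in a single place.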

79:20 Skills in any of that repo. Oh yeah, correct. Yeah, there you go. You answered your own question. So, well, I'm going to let the cat out of the bag here slightly. We're working on a skills organizational storage system. Sweet. So, we have another workload coming here pretty soon, hopefully by the end of this month or maybe the end of next month. It's going to be called Power Templates. Yeah. It's going to be like templates of

79:52 PowerBI reports. It's going to be templates of notebooks. It's going to be templates of user-defined functions. And everything in Power Templates will be a one-button deployment into any workspace you have control of. So to your point, Tommy, right, where does skill centralization live? Right now, the tooling does not exist for you to share and reuse and deploy skills across multiple places. What we're trying to build now is: what does a skill location look like? What does centralization of this look like? And how do you synchronize those

80:22 skills with repos or notebooks or other things that are inside Fabric? So that's something we are internally working on already, because we think there's a need there. Okay, great thoughts, great question today. Love the pattern. Just keep doing what you're doing. I think you've set your team and yourself up for winning in the future. That being said, Tommy, where else can you find the podcast? You can find us on Apple, Spotify, or wherever you get your podcasts. Make sure to

80:52 subscribe and leave a rating. It helps us out a ton. Like today, like Yokim, do you have a question, idea, or topic that you want us to talk about in a future episode? Head over to powerbi.tips/empodcast. Leave your name and a great question. And finally, join us live every Tuesday and Thursday, 7:30 a.m. Central, and join the conversation on all PowerBI.tips social media channels. Thank you all so much, and we'll see you next time. Explicit measures, [music] pump it up, be

81:22 lighting up the sky. Dance to the mix. Fabric and AI get your feels. Explicit measures. Drop the beat now. [music] Feel the crowd. Explicit measures.

Thank You

Want to catch us live? Join every Tuesday and Thursday at 7:30 AM Central on YouTube and LinkedIn.

Got a question? Head to powerbi.tips/empodcast and submit your topic ideas.

Listen on Spotify, Apple Podcasts, or wherever you get your podcasts.
