PowerBI.tips

Fabric June 2025 Feature Draft – Ep. 439

Mike and Tommy break down the Fabric June 2025 update with a rapid-fire draft of the features they’re most excited about. They also cover Copilot inline code completion in Fabric notebooks, shortcut transformations from files to Delta tables, and new data agent data-source instructions.

Main Topic

  • Fabric June 2025 Feature Summary — The monthly round-up that triggers today’s “feature draft,” including updates across Data Engineering, Warehousing, Data Factory, Real-Time Intelligence, and more.

Main Discussion: Fabric June 2025 Feature Draft

Copilot inline code completion (Fabric notebooks)

Mike and Tommy are excited to see Copilot move closer to a VS Code-style development experience inside Fabric notebooks. They discuss practical workflows like writing comments to “steer” suggestions, using chat for the big picture, then letting inline completions accelerate the last-mile coding.
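The comment-steering workflow they describe can be sketched as follows. This is an illustrative example only: the function body shows the *kind* of completion Copilot tends to produce from a descriptive comment, not guaranteed output, and the data is made up.

```python
# Goal: steer inline completion with a descriptive comment, then let the
# assistant fill in the "last mile" of code.

# Aggregate total sales per region, keeping only regions over a threshold,
# sorted from largest to smallest.
def sales_by_region(rows, threshold=0.0):
    """rows is a list of (region, amount) tuples."""
    totals = {}
    for region, amount in rows:
        totals[region] = totals.get(region, 0.0) + amount
    # The kind of completion a good comment "steers" toward:
    return sorted(
        ((r, t) for r, t in totals.items() if t > threshold),
        key=lambda pair: pair[1],
        reverse=True,
    )

rows = [("East", 100.0), ("West", 250.0), ("East", 50.0), ("North", 10.0)]
print(sales_by_region(rows, threshold=25.0))  # → [('West', 250.0), ('East', 150.0)]
```

The point Mike and Tommy make holds here: chat is better for the big-picture design ("break this into cell blocks"), while inline completion shines once a well-worded comment pins down the intent of a single cell.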

AI governance and “approved agent” guardrails

The conversation detours into the idea of an AI governance layer (they mention “BarnDoor AI”) that sits in front of multiple agents/tools and enforces what actions are allowed. The core idea: as agents gain more ability to act (create, edit, delete), organizations will need policy controls so teams can use AI safely at scale.

Shortcut transformations (files → Delta tables)

They like the promise of continuously syncing a folder of files into a Delta table without requiring a dedicated pipeline run. Mike calls out the operational simplicity: fewer moving parts, and a more “lakehouse-native” ingestion flow—while still wanting clear tracking/lineage for what files were processed and when.
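Conceptually, the feature automates something like the loop below: watch a folder, append rows from any file not yet processed, and remember which files were handled (the lineage Mike wants visibility into). This is a minimal stdlib sketch of the idea, not the actual Fabric implementation; the real feature also handles modified and deleted source files.

```python
import csv
import os
import tempfile

def sync_folder_to_table(folder, processed):
    """Append rows from any not-yet-processed CSV in `folder`,
    recording which files were handled (file-level lineage)."""
    new_rows = []
    for name in sorted(os.listdir(folder)):
        if not name.endswith(".csv") or name in processed:
            continue  # skip non-CSV files and files already ingested
        with open(os.path.join(folder, name), newline="") as f:
            new_rows.extend(csv.DictReader(f))
        processed.add(name)  # track lineage: which files fed the table
    return new_rows

# Demo: two files land in the folder; both get appended into one table.
folder = tempfile.mkdtemp()
for name, body in [("a.csv", "id,val\n1,x\n"), ("b.csv", "id,val\n2,y\n")]:
    with open(os.path.join(folder, name), "w") as f:
        f.write(body)

processed = set()
rows = sync_folder_to_table(folder, processed)
print(len(rows), sorted(processed))  # → 2 ['a.csv', 'b.csv']
```

Calling `sync_folder_to_table` again with the same `processed` set returns nothing new, which is the "no scheduled refresh needed" behavior the blog post describes: only changes in the source path trigger work.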

Data Agent data source instructions

They agree this is a necessary foundation, especially for non-semantic-model sources where metadata and descriptions aren’t as rich. At the same time, they’re skeptical of the long-term burden of writing tons of instructions—unless the agent is being deployed broadly (where the “lemon is worth the squeeze”).
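The shape of these per-source instructions might look something like the sketch below. The dictionary keys, wording, and `build_prompt` helper are invented for illustration; the real feature simply exposes an instructions field per data source in the data agent UI.

```python
# Hypothetical layout: one agent-level instruction plus a per-source
# "blueprint" telling the agent what to prioritize, filter, and join.
agent_instructions = "Answer questions about company sales data."

data_source_instructions = {
    "sales_lakehouse": (
        "For historical sales questions, use the orders table filtered "
        "by order_date. Always join sales to products on product_id."
    ),
    "support_warehouse": (
        "Prioritize the tickets table; ignore the staging schema."
    ),
}

def build_prompt(source):
    """Combine agent-level and source-level guidance into one system prompt."""
    return (
        f"{agent_instructions}\n\n"
        f"Data source guidance:\n{data_source_instructions[source]}"
    )

print(build_prompt("sales_lakehouse"))
```

The burden Mike and Tommy flag is visible even in this toy version: for a non-semantic-model source, every table, join key, and filter convention has to be spelled out by hand, because there is no rich metadata layer for the agent to read instead.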

Draft picks: the features they’re watching

Highlights from the back-and-forth draft:

  • Materialized Lake Views (Preview): Mike expects these to change how he builds medallion-style (bronze/silver/gold) transformations without stitching together lots of pipelines.
  • Sparklines GA: Tommy celebrates sparklines finally reaching GA, with calculation group selections supported.
  • Notebook variable libraries: A key ingredient for CI/CD and multi-environment deployments (dev/test/prod), avoiding hard-coded workspace/object IDs.
  • PBIP public JSON schemas: A practical quality-of-life improvement for anyone editing PBIP/JSON files directly—schemas help validate keys and reduce “mystery breakage.”

Looking Forward

They expect a strong July release, a lighter August, and more big announcements ramping into FabCon Vienna.

Episode Transcript

Full transcript with timestamps:

0:00 [Music] [Applause] Good morning and welcome back to the Explicit Measures podcast with Tommy and Mike.

0:31 Hello everyone and good morning. Sorry for the slow start this morning. We had a few technical issues getting going here, but I think we’ve worked them out and we are good to go. I think we’re good to go. Welcome back, Mike. It’s been a bit of time. We did a little bit of recording last week, so we were out recording. We are officially back in the seat again doing live episodes. So, we’re jumping right in. We have a number of news items; today we’re just going to go through two blog posts. I guess these are the

1:01 Two main blog posts. There’s one from the PowerBI release summary for June that was released early in the month, around June 9th. And then we’re also going to go through some of our favorite features from the Fabric June 2025 release. The Fabric blog comes out much later; their cycle is the end of the month when they release the blogs, so June 25th is the most recent release date for the Fabric items. All right, Tommy. Give us a couple news items. Well,

1:32 There’s been a few that have actually just come out that make this really cool. So, first off, there’s an update to Fabric notebooks with inline code completion. There’s been a lot going on with Copilot AI and our dear beloved notebooks. One of them is really just the ability to make Copilot comprehensive throughout the notebook, very much like if you’ve ever used VS Code or an IDE and you get that autosuggest. Yep.

2:03 So, I wondered why this took so dang long, because this seemed like it should have been there from the get-go. It’s very hard to go through the chat, but I love me some autosuggest. I think this is actually... so let me comment on this one, because I really think it’s a powerful feature. One of the things I’ve really been wrestling with recently is just understanding where’s the best place to use copilots. Where do they best serve their purpose? And by far, I think

2:33 Far and away at the front of this is the ability for the AI to understand every single line of code, or have a larger knowledge base around what code is written and how to use it. And the reason I give this example here is I’ve had a number of projects where I’ve been working in a notebook and I knew I wanted to do some data transformation. I think, Tommy, you would agree: at the end of the day, if we sit back and just look at it going, hey, we have a whole bunch of notebooks and a whole bunch of items

3:03 That we have to shape data in, we know what we want the data to do. We want the shape of the data to transform and look this certain way, and you can communicate in English what you want it to do. The challenge becomes: how do you get the code written to do that in production? And I saw a couple comments, I think it was this last week on Twitter. Someone was saying, “I really love notebook experiences.” And by the way, a notebook experience paired with a copilot means I now have a really rich intern

3:37 Who knows all the code and knows none of my business logic. So, they don’t know what I want to do with the code, but they have all the code and they know everything. What do you think, Tommy? What’s your thoughts on this? How do you interact, when you’re writing code either in a notebook or in VS Code, with a copilot or some agent that helps you write code? So, honestly, I’ll use a few things, depending on which autosuggest I have, but I’ll actually write comments in the code itself to say what I’m trying to do.

4:09 Because some of the features may not have that inline chat directly in that code block. So, I’ll actually write in the comments: hey, I’m actually trying to do this; this is what this is doing. And the autosuggest actually will learn from the comments. But a lot of times I start with the chat. I start with overall what I want to do with a notebook or project, and then we’ll go from there. And then I ask it to break it down into cell blocks.

4:39 But generally speaking, I always start with: hey, I’m wanting to accomplish this, the data is here, the transformations are going to be this, what’s the best way to do this? And then we are on our way. But this is huge. Let me also go in here a little bit as well. I’m very happy to start seeing some of these VS Code-like features coming over to notebooks inside Fabric. I think VS Code has got a very robust agent; I really like how they’ve handled using agents inside VS Code. Also in VS Code

5:11 You have the ability of picking multiple different agents. It doesn’t have to be just one agent: you can pick Claude, you can pick Grok if you have the API keys, you can pick GPT-4o mini or o3-mini and all kinds of other things. So, I really like that richness. Right now I feel like the ground is shifting so fast that you need a tool that’s more nimble, right? It can do a lot of different things. So, I’m very excited about this inline code completion. I will give people

5:41 A note of caution here: every time you’re prompting Copilot, you’re using some little bit of CU in the background. For you as a developer that might make sense. But also, if you are using Microsoft Edge, there is a Copilot button that lives on the far right-hand side of the screen and you can use that. Some organizations are even now locking things down so you can only use Copilot, because that’s a Microsoft-certified one, and they don’t want you sending data to other agents or other services out there, or only to specified services. So, I think I have

6:12 One more thing for you. Have you heard of a company called Barndoor AI? Tommy, Barndoor AI? I don’t think I have. Someone brought this up to me and I thought this was really interesting. So, just imagine... we’re going to riff on this copilot thing for just a bit here, because we’ve got some time today. Barndoor AI, and I’ll put the link here in the chat window as well, just for those who are interested in learning a little bit more about it. This is

6:44 An AI service that doesn’t scrape your data, but goes across all of your AI agents. For each agent, there are things you may or may not want it to do. There are different agents you may want people using or not using. So basically, not all agents are created equal. This is a service that companies can now put in place. It’s not AI itself; it’s a service in front of the AI that helps users understand which AIs they’re supposed

7:15 To be using and which ones they’re not. Interesting. Which ones are approved by your company, and which ones are not approved. So you have all these things: you have people, you have actions, you have agents, you have different services like Google Drive or Salesforce or whatever. And you, as an admin, can start, not locking things down, but saying these are certified or approved ways. It’s almost like adding process to how you use AI and agents. Again, I think this is really interesting. A very neat idea, and this was

7:45 Brought up to me. I thought, wow, this is a really cool service where you can use these different agents and these different services. You can turn on services that you want or don’t want. And then you could say, okay, this sales agent can create folders or not create folders, delete things, or edit stuff or not edit stuff. You have all these different controls to help these agents work on different areas of your company. I think larger companies are going to need a feature like this. This is going to be something that I think will be more prevalent, because there’s just too much

8:16 For people to get themselves in trouble with. Does that make sense? No, I think this is one of the things that we’re going to realize only after it’s too late: how many agents and AIs and copilots and task flows and whatever models are actually going around. So we have to actually be able to scan. If I were to put this in layman’s terms, it’s almost like the PBI scanner but for your AI. Yeah, but it’s not scanning. It’s not after the fact, right? This is more proactive, like

8:47 Hey, these are the services, these are the agents we’re going to allow users to use. This is a way that you can hook into those agents, whether it’s GPT-4o mini or whatever, but it’s basically managing the relationship between: I have an action (create a file), I have an agent such as GPT-4o mini or whatever, and then you have services, and what can you do in the various services. So I think this is going to make a lot of sense in this MCP land where all these different MCP servers can talk to these different services.

9:17 They can do different things. So you need to be able to say, look, I don’t want people deleting things from this folder, right? If you’re in the legal space, you can’t have people talking to an agent and having agents going into various services and deleting files that are there; there’s a hard requirement that that is not allowed. So you can now start managing, still letting people use agents and AI things, but governing a bit more of what they can and cannot do. You can create files, you can copy them, but you shouldn’t be able to delete them. So

9:47 I think there’s other aspects of this that are being considered here, and I like it. I thought it was a really neat idea. Very nice. Anyways, all right, Tommy, give us another news item. What else you got here for us? All right, so shortcut transformations. Actually, this is neat. There’s a new capability in Microsoft Fabric that simplifies the process of converting your raw CSV files into Delta tables. And Mike, I’m trying to think of one of the other feature updates that PowerBI

10:17 Came out with, that data flows are becoming CSV files, right? Originally in Dataflow Gen1, everything was a CSV file. The feature update that we hashed on, the new feature that PowerBI is coming out with, is that one of the outputs for Dataflow Gen2, right? Oh, is CSV files. It is CSV files. So, I guess you can use this and make a Delta table, not knowing why you would do it. Anyways, if you have your normal CSV... why you wouldn’t just go right to a Delta table, I don’t have any clue. I

10:48 Don’t know. It’s the strangest thing. I think this makes more sense in the realm of: you’re being given CSV files from some system, right? CSV files come out of all systems all the time. That makes sense to me in that regard. Oh, a thousand percent. And so, simply, it just allows you to convert CSV files into Delta tables in OneLake. They’re working on a few really neat things like AI-powered transformations, custom transformations using notebooks, and prompt-based transformations.

11:19 And all you have to do is, when you create a shortcut, you create that live reference to wherever that data may live, and then you can apply a transformation layer to that shortcut, again, not to the CSV file. Yes. So, it’s as easy as simply going to create a new shortcut and choosing your CSV file. One of the notes: all the files to transform need to be in the same folder, so we can’t do nested folders. Sure. For now, right? That seems like

11:50 A logical something that would eventually happen later on. So, Tommy, this is interesting to me. Help me understand some of this feature from your perspective. Have you played with this feature? Have you been able to touch it at all and use it? Not played with this completely yet. Yeah. So let me just say this is a very big differential from where I think we’ve been. Typically, anytime you wanted to do any transformation of data, it was with a pipeline, a notebook, and maybe

12:22 A Dataflow Gen2. Those were the only three options you really had for data transformation. This is really interesting to me because this is exclusively coming from the lakehouse. Let me just hang on that moment, because I love the idea of this. This means I don’t need to spin up a virtual instance of something in a pipeline, right? When a pipeline turns on, a runtime is turned on; it tracks the pipeline while things are being

12:53 Done. And again, it’s fairly efficient. It’s pretty cheap. But if you’re just taking a CSV file and just want to make it a Delta table... or let’s just imagine, Tommy, you’re bringing in a CSV file, or you’re updating a CSV file over and over again. It says this is going to try to keep it in sync as well. So if the CSV files change, I’m assuming at some point the lakehouse will say, hey, this file is now different; I now need to go create a new version of this Delta table on some schedule or some detection. Hey, this file needs to be processed. Let’s process this file. This file then turns into

13:24 A new version of that Delta table. I’m really excited to hear this, and I’m very excited to see how this is going to continue to play out. I really like this idea, though. I like not having to have a whole bunch of extra tools in place to get this to work. Yeah, I think the simpler we can get with all those CSV files, the better. Now I’m trying to take a look at this, and for the source type, I don’t know if it supports

13:54 OneLake local files. All I saw so far was Azure Data Lake and Amazon S3. I’m assuming it should... it says OneLake. So in the managing of the shortcut, the image they put in the blog post is talking about customer support calls. The data type is Delta, and their target path was files/customer support calls, which I believe is coming from OneLake. So I think you can source a file in the

14:25 Files area. So this is how I would use this feature, right? My pattern would be: put a bunch of CSV files in OneLake in a Files folder. From there, point the shortcut back to the Tables area, and the table just becomes a table that materializes that shortcut and automatically keeps itself up to date. As I add new files, it’ll automatically update itself. Now, what I don’t know here is... so just bear with me here, Tommy.

14:55 Right, one thing I have to play with here a bit to understand what’s going on is: what happens if I have many of the same files in the same folder, like I’m adding net new data? I’m adding more information to what’s going to be placed in that folder. Is it going to take all the files and append them together to make one big Delta table? So if I add a new file, it’s just going to append that data? I think that’s how I would interpret the function to work. Is that how you read this, Tommy? So, to me, I’m seeing

15:26 This as: you choose the CSV files, and for each CSV file you may be able to choose to append it, but it looks like they’re all individual, which wouldn’t make any sense. So, you’re saying... well, so here, this is what it’s saying. It’s always in sync. And this is point number three that I was looking at here, right? There’s no need to schedule refreshes. Awesome. Love that. The shortcut transformation engine continuously monitors the source location for changes. Every

15:56 Time a new file is added to the source path, meaning not the actual file but the path above it, the folder. So basically, Tommy, to what you were saying: look, you can point to a folder and everything in the folder is automatically picked up and processed. Then modified files are processed and reflected in the latest data, and any deleted files are removed from the Delta table output. So my thinking here is, and I’m thinking in how I typically get data from a system, right? I’m going to get, every day

16:26 Or once a week, an extraction of a CSV file. I’ll just put the file in the folder location, and it’s going to automatically view every single file that’s in there and say, “Okay, let’s just turn all these things into one big table.” And then once it’s a Delta table and I’m happy, right? It just works. So, I’ll need some lineage on this and some tracking to know when the last update occurred. And I hope there’s some metrics that are coming out of this

16:57 That give a little bit of an output that says, “Hey, we processed your CSV shortcut file. Here’s the files that were used.” Like, that would be helpful. I don’t know if we’re going to get that, but I need to play with this feature. Is this feature officially out, Tommy? Have you seen it in the service? This feature is officially ready to go. You can start trying it today. It says, “Try it out today.” Okay, awesome. I love this feature. I think this is really nice. I do wish... again, I’m guessing here because this just came out, it probably doesn’t have any tracking yet, but we’ll have to push for that a little bit as well. All

17:28 Right, Tommy, what other topics do you have here? One data agent, baby. So, a pretty cool data agent update. We know data agents are something that’s trying to be the copilot throughout our Fabric ecosystem, and they are continuing to get more and more necessary features, not just cool features but, I think, fundamental features. So, one of the big things that is now available is that Fabric data agents now have data source instructions, which

17:58 Is really good, because you have your general default instructions for a data agent in your rules. But it was really hard, especially if you had different data sources, to give guidance on which tables to use and how to interact with each data set. So now we can basically give a blueprint for each of your data sources to say what to prioritize in that data set, what to filter, and how to join. This goes off of Chris Webb’s blog articles about how to really talk to Copilot and the data agent. We can make sure they take

18:29 Care of that work through the system instructions. So each data source can have its own instructions field. Okay. So again, no, I think this is fundamental. These are the things that make data agents either a deal breaker or a non-starter. We need more than these instructions, too, because right now the data agent is just a chat UI. But the fact that I can add these instructions is good. I would still

18:59 Like some ability to weight things, like the PowerBI prep-data-for-AI experience where I can choose those columns and tables and even weight things to say this is weighted higher, this is weighted lower. But this is a good start. The only problem with everything just being a prompt or system instruction, especially when we’re dealing with intricate data sources, is that you have to add a lot of instruction to make sure it gets it right. For example, saying what should you include, and it’s like, well, when

19:31 Asked about historical sales, use the orders table with order date; always join sales with products on product ID. So you almost have to map out your entire schema here, which I don’t like, because again, it doesn’t really have any of that metadata of the source. Maybe you’re dealing with a lakehouse that’s not a semantic model. So you can add a few instructions, but that’s probably not going to help compared to having a little more configuration. That’s interesting that you’re talking that way, Tommy. I like where you’re

20:02 Going with this one. This makes me think that you need additional... so let me unpack what you’re saying here a little bit. In the semantic model, we have a good amount of metadata that goes along with the model, right? You can go into the semantic model, you can add comments and descriptions, and the AI agents now seem to be applying a bit more information: the agents can scrape that information back out and use it as context for what users are asking about the semantic model.

20:32 I think, to your point, Tommy, though, there’s really not a lot. Where does the metadata live for lakehouses or a SQL data warehouse or a SQL analytics endpoint? There’s nothing. You’re not able to actually add the context in the SQL server. You could probably add relationships between tables; I don’t know if the agent will be able to pick up on that, but you’re not writing descriptions per column that I’m aware of, right? That’s not there. No, you’re not writing measures in the SQL database. So there’s nothing there to add

21:04 Context around that. So it feels to me like, of the things that I’m going to be able to pull into agents, yes, I could have you point at a lakehouse table, because yes, why not? You should be able to use it, right? But there’s not going to be any metadata about that. And I think what you’re describing, Tommy, correct me if I’m wrong, is that for these different sources, things that come out of a lakehouse, things that come out of SQL databases, you have to write a lot of instructions against those to really describe what the data is doing, so the agent

21:34 Is smarter about those things. Exactly. Right. A thousand percent. Yeah. In general, this is my general comment: I have resistance to all this writing of more text to tell the agent what’s going on, right? There’s a thought around: you just want it to work and figure itself out. Sometimes I feel like I’m doing

22:05 More typing to get the code... maybe this is the wrong way of saying this; I don’t know how to say this articulately. I’m talking to the agent, and I feel like I’m writing more words to the agent to write one line of code or get an answer back than what it would have been if I just knew how to get the answer back. If I just knew how to write the code, I just would write it. So on some level, I feel it’s a bit redundant for me to spend all this time adding additional context and words and all this extra stuff.

22:35 And this is maybe where I’m having the rub here, Tommy. Where’s the benefit? For me personally, if I’m using this model or I’m building an agent for just me, I’m not sure it’s worth the time investment; with all the extra things we have to do, it’s not ideal. Flip side of that: if I’m giving this agent out to a hundred people, my team of a thousand, putting it in a published app on top of certified data, okay, now I think

23:05 That lemon might be worth the squeeze. Does that make sense what I’m saying there? Well, you’re talking about the data agents version one, which was basically: write a SQL statement and that’s what we’re going to convert for your data agent. Which didn’t work. So now we’re going the opposite way, where we’re trying to make everything this nice easy text prompt. Here’s the thing, though: when you’re dealing with agents like this, and really any custom GPT or custom chat, right, you define a few things. So you have your

23:36 System instructions, but you also have a few things that are known as tools or skills that you’ve given it as well. Which I would like to see, where it’s like: hey, you have a lakehouse marketing skill that has the ability to go do things. Basically, it can either read JSON for instructions or the metadata. It could be a Python package or a Python script that you write that could do whatever, but you tell it what tools or skills it has.

24:07 So these data source instructions are fine. But here’s the thing: everything that we’re using with our AI tooling right now on the back end is something Python or something coding. And I think with the data agent, it’s nice that we have this UI so that no one has to worry about coding. Yes. But can I click on the wrench symbol and go to the back end, please? I think that’s just integral, because otherwise you’re not going to accomplish what you want to do. I

24:38 Think it’s hard for them to give you that wrench to go to the back end and do all the things. That’s super technical all of a sudden. We’re going from people who are coming from PowerBI and a lot of UI-driven screens into: okay, now you have to figure out how to learn agents. It just feels like there’s so much there; I feel like we’re diving into the deep end, and we’re not quite there yet. They’re doing a good job, and I like where we’re going with this. It just feels like we’re so far away from really getting this right. Anyways, that’s

25:08 Just maybe my opinion. Any other thoughts on this agent thing? I think that’s a good start. I think it’s time, my friend, to go through some of the major updates here. Okay, so we’re going to transition here. We are going to move into more of a draft mode. We’ve done this a couple times: there’s a lot of features coming out every so often from Microsoft, so we’re going to go into draft mode. Tommy’s going to pick his favorite pick from either the Microsoft or the Fabric blog and

25:39 We’ll talk about it. We’ll riff on it for a bit, and then I’ll pick one of my favorite features. We’ll do like a draft pick and volley back and forth here. Tommy, do you want to go first with your first pick for the draft? What? I feel like I always go first. I’m going to let you pick the first one here. Okay, sounds good. I’m going to pick one from, I believe, the Fabric blog. Again, I do a lot of stuff inside Fabric in general already, so let me pick one from there. And I

26:09 Well, one thing I will point out here, sadly. Tommy, did you see one of the announcements inside the Fabric blog? I think maybe it was the PowerBI blog; I don’t remember which one it was. Did you see the sad announcement that we had? Is it sad about the 10 years, or is it something about something becoming GA? No, it was something about something being released. I think it was on the PowerBI blog. I’ll see if you can figure out which one. Let’s see how well Tommy knows me. It’s from the June update, the June feature

26:39 Summary from the PowerBI blog. There’s something in there that was very disappointing. Oh, I’m on it. Yeah. Power Query in the web for import models. Let’s go see if I can find this. I’ll know as soon as I see it. You actually have to read the blog in order to know which feature it was. But you said it: if you saw in the PowerBI blog, there is an update on Power Query inside the web for

27:09 Modeling. And it was sad, because it said: we know you’re long awaiting Power Query editing for web; we had some blockers and we’re not able to release it till July. So they’re pushing the ship back for a little bit. So you were right, it was that feature. You read it out loud. You said, “Well, the top header of the page doesn’t really give you any indication that it’s been pushed to next month.” But I was very excited. I was actually waiting for Power Query editing for web to show up. I kept going, “Where’s the button? Did I not

27:40 Get the release yet? Is it not happening yet?” So, I kept looking for the button. It was not there. And then, yeah, I didn’t catch that one until later. So, anyways, let me give you my first pick. I think I’m really excited about... so, they announced this a while ago. This has been out for a while, or they’re trying to get it out. I am very excited about materialized views. I think this is going to really change how I’m going to build things. So, it is in preview. You can use materialized lake

28:11 Views. And if you want, I’ll go find the video. There’s a video on our YouTube channel where I work with the PM who built the materialized lake views feature. We did a full hour on just how they’re supposed to work and all the features there. That’s something we demoed directly out of FabCon in Las Vegas, just to show people these things are coming. I’m very excited about this. I’m really excited for this feature to come out. It fits this vibe of: we’re going to bring a lot of data to the lakehouse, and the lakehouse

28:41 Will just keep things up to date for you. I’m not building a pipeline. I’m not doing extra things. I’m defining bronze-, silver-, and gold-level tables all in one place. So, that’s fun. Anyway, I wanted to call that out there. That’s a solid one. Honestly, would you put this yet as potentially a game-changer? Is it a potential game-changer? Right now, I’m going to say it will definitely change how I build stuff. So this is a fundamental

29:13 Shift of what I will do when I bring data to the lakehouse. It it is it and so let me let me give you some context here. This is a very similar feature to what data bricks does inside delta live tables. Delta live tableables allows you to define loading data in and then once that data gets changed downstream there’s like a dependency tree of the bronze table modifies this silver table, the silver table models a gold table. So there’s like there’s always like steps to process the data, but then you have

29:43 To build these pipelines. And what I’ve done in the past is I’ve had to like make a whole bunch of notebooks and run a pipeline and organize all the information. I don’t want to do that honestly. I just want to say this is the source table. These are the transformations I want to mess around with and here’s my final table. And then when the new data comes in, Microsoft, just be smart enough to know how to run the tables and process the data. So like to me I look at it going like I think this system should be smart enough to just figure this stuff out like this is that’s very complex Microsoft

30:15 By you making this easier for me to work with it I will use it more right that it’s it’s literally simplifying my time anyways that’s a good one that’s I really I really like this materialized view thing and so stay tuned there’s going to be more probably tutorials and blogs around doing materialized views well over to Gosh. All right. Well, I got I’m going to go to the PowerBI blog and what I’m going to do? I’m going to announce something and give a shout out for something that is now GA.
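The behavior Mike describes for materialized lake views — declare bronze/silver/gold tables once and let the engine work out what to refresh, and in what order — boils down to a dependency tree (the preview surfaces the declarations as Spark SQL `CREATE MATERIALIZED LAKE VIEW` statements). Here is a minimal, hypothetical sketch of just the ordering logic, not Fabric's actual engine; the table names are invented:

```python
# Hypothetical sketch: each view declares the tables it reads from,
# and refresh order is a topological sort of that dependency tree.
# This mimics the bronze -> silver -> gold flow; it is NOT Fabric's engine.

DEPENDENCIES = {
    "bronze_sales": [],                # raw files landed in the lakehouse
    "silver_sales": ["bronze_sales"],  # cleaned/conformed
    "gold_sales":   ["silver_sales"],  # aggregated for reporting
}

def refresh_order(deps):
    """Return tables in an order where every upstream dependency runs first."""
    ordered, visiting = [], set()

    def visit(table):
        if table in ordered:
            return
        if table in visiting:
            raise ValueError(f"circular dependency at {table}")
        visiting.add(table)
        for upstream in deps[table]:
            visit(upstream)
        visiting.discard(table)
        ordered.append(table)

    for table in deps:
        visit(table)
    return ordered

print(refresh_order(DEPENDENCIES))
# → ['bronze_sales', 'silver_sales', 'gold_sales']
```

The appeal of the feature is that this bookkeeping — plus change detection and scheduling — happens inside the lakehouse instead of in a pipeline you maintain yourself.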

30:46 And guess what, my friend? Sparklines are now generally available. They've been in preview for almost four years, since December of 2021, and they are now generally available. Calculation group selections will now be applied, and you can use them on individual values. There's a ton you can do with calculation groups, but sparklines are now GA, so you can use them in production. Congratulations. Nice.

31:17 So this is off the Power BI blog, correct? The Power BI blog, yeah. Do you use sparklines inside your reports today, Tommy? Somewhat. I tend to find them a good starting point for tables if I'm working on a new project or a new report, especially if it's a migration from Excel. Okay. Me personally, it's not a fundamental tool of all my reports, but I

31:48 do know people who swear by them. Yeah. I think I also saw a post by Kurt Buhler of Data Goblins talking about SVG measures and things hitting critical mass. I don't know if anyone else saw that article. So, Kurt, I know sometimes you listen to these and sometimes you don't, but it's hard to find where your content goes anymore. I

32:18 don't know where your content goes. Before, I could just go to Data Goblins and it was always there. I feel like it's all over the place now. I wish you had a better way of consolidating your work, because it's much harder to find where you're posting things, Kurt. That being said, about the article he wrote: I don't know how I feel about this. I like the idea that we can use SVGs and sparklines on top of all these different places, right? It makes sense. These are some

32:48 of the creature comforts that we need when we're doing visualizations. I 100% agree; there's no question in my mind these things need to exist. But how it's implemented, and how easy it is to figure out how to make the SVG do its job? Not easy. I'm going to pick on the team a little bit on this one, and I'm sorry, Microsoft, but in my opinion this is a major miss. Where is the global or enterprise collection of measures? How can we

33:20 pull a library of measures into my models? We can do some things with PBIP now, we can do some things here and there, but Microsoft, you did such a good job creating quick measures. Quick measures were genius: hey, from the semantic model, grab these different data fields, and by the way, we will auto-build you a set chunk of DAX. Love it. Great. But no new features were added, and no one

33:50 could contribute from the community. Where's the central library for my organization to build all these things? Man, it just bothers me. I love the fact that we can do it, but I hate the fact that I can't use a central library for all this stuff. It's so frustrating. There are really smart people building these things, and I can't use their work without spending a whole bunch of time learning what they did, getting the DAX right, and bringing it in. That should not be a thing. And I don't

34:20 want to use Tabular Editor and write a bunch of C# just to bring in a bunch of measures. Sorry, that's not going to float for me. I need it in a UI, built into the tool, with a library of things that I just point to and it works. It's a major miss, in my opinion. So here's the thing, and you already solved this. What was the tool called? DAX.PowerBI.tips — you had that DAX variable tool — and PowerBI Helper

34:51 was also a big part of that. Yeah. And the side tools. And here's the thing: this has already been solved with the idea of report-level measures. DAX.PowerBI.tips, which was a very rough tool — and thank you for the call-out, Tommy — was an early version of a tool that would write a DAX statement. You could take an existing DAX statement and parameterize it with different functions and other things as well. So

35:24 it's a tool that lets you build a JSON object that defines a measure, and you can inject it into your model. So you could actually have a library of measures that were parameterized, almost like quick measures, but you could just drop them into the model. To me, that makes sense. And it should have a bunch of other stuff too, Tommy. Like: hey, this measure goes in this folder; hey, this measure has a description — do you want us to auto-generate a base description of what this measure does? All of this should be built into a library. Why can't

35:55 I get that? Where's the library of this stuff? And I think the biggest miss is that while report-level measures aren't shared across models, the formulas are, right? All I need to plug in is a measure or a table; the formula for displaying it — the text, the SVG — stays the same except for the measure and the table I want to change. I used the side tools with your DAX.PowerBI.tips and I was

36:28 able to inject things into the model, go through the community, and I had my own repository of alerts and all these visual-based report measures. But the thing is, that was just for me — it was only shared with me — and I don't want to say it was convoluted, but it was wonky; it wasn't the best UI. And I do agree with you about the Tabular Editor tools, because that's not meant for that other layer of my

37:00 report design across my environment; that's meant for a semantic model. And I think there is a difference between these universal report formulas and things that are really defined just on the semantic model. Yeah, man, it's a tough one, especially when you're dealing with features that are stuck in Power BI, like the numeric ones or the sparklines, where it's

37:30 neat but it's limited. Yeah. So this is where I'm at with this feature. I love the fact that it's there. SVGs are great. Sparklines are awesome; you've got to have them, and people are always going to want to customize them. And another one here I'm going to rag on a little as well: visual calculations. Visual calculations are a great place to put sparklines or SVG things, right? I don't want to run a measure against the entire model; I want the measure to run after the visual has already reduced the data, so the visual should return

38:00 the small subset of data for the visual, and then from that, maybe I want to use some sparklines or some features inside the visual. So I think visual calcs are a great place to put SVG-type stuff, but again, it's too difficult to make it all work. You've got to be a really code-y technical expert to get all this to work. Where's my library? How do we make this easy for my team? Hey, look, Tommy's the expert in SVG stuff; he's going to build a whole bunch of really neat-looking things. Where's the library of acceptable stuff you should be using in visuals?
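The measure-library idea the hosts sketch — templates where only the measure and table references change — is essentially string templating over DAX. A hypothetical illustration (the template text, names, and function are invented here, not an existing tool, and the DAX body is truncated for brevity):

```python
# Hypothetical sketch of a parameterized report-measure library:
# each entry is a DAX template where only the measure/table names change.

DAX_TEMPLATES = {
    # A toy SVG data-bar template; real SVG DAX would be far longer.
    "svg_data_bar": (
        "VAR _val = {measure}\n"
        "VAR _max = CALCULATE(MAXX(ALL({table}), {measure}))\n"
        'RETURN "data:image/svg+xml;utf8," & _val  // truncated for brevity'
    ),
}

def build_measure(template_name: str, measure: str, table: str) -> str:
    """Inject a measure and a table reference into a stored DAX template."""
    template = DAX_TEMPLATES[template_name]
    return template.format(measure=measure, table=table)

# Only the two references change; the display formula stays shared.
print(build_measure("svg_data_bar", "[Sales Amount]", "'Sales'"))
```

A central, shared version of this — with folders, descriptions, and community contributions — is exactly the library the hosts are asking Microsoft for.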

38:31 That's what we should be working on. Anyways. All right, my friend, it's back to you somehow. Yeah, I'm not sure where to go with the other features here. Let me preface my next pick with what I'm doing right now: a lot of what I'm doing right now is working with organizations and helping them with continuous integration and continuous deployment,

39:01 and I'm doing a lot more DevOps, which is a bigger topic we talked about a little in the last couple weeks with Matias: what is DevOps, how does it work, how do you bring that in. So I'm very much of the opinion that variable libraries are going to be quite major. That's what I was looking at too. It's an assistance tool that you need

39:35 to help you build different environments, right? So let me very much preface my comment here. In DevOps there are a lot of different things. It's a process that people follow to help you build something — whatever that thing is: apps, data, BI reports, doesn't matter. It's a process. The tooling, which is CI/CD, is more closely tied to the technical solution and helps you with part of DevOps. So I see continuous integration and continuous deployment as part of the broader DevOps story, and I think

40:07 people have a misnomer around this: they think that CI/CD is DevOps. I don't think that's true; DevOps is a much broader story. So the notebook integration with variable libraries is very much welcome. The more things I can build inside my pipeline using deployment pipelines and variable libraries, the better off I'll be, because I shouldn't be hard-coding notebooks and connections. It needs to be flexible, because the names of the lakehouses and the different

40:37 objects across my environments will change. And this is going to get real technical: the GUIDs that define each object in every single workspace are different per environment or workspace. So in a deployment pipeline, if you have a lakehouse named Tommy's Great Lake and you move it from dev to test to prod, there are now three separate lakehouses in different environments, and they all have different GUIDs. So if I have a notebook that's talking

41:07 to that thing, and I've attached that lakehouse to it, we need to identify: okay, what is the GUID of that Great Lake thing, and know that we're going to modify it and move it to the next environment and onward. So what happens here is we now have the ability, by using variable libraries, to make that a lot more programmable. We have more ability to switch things in real time, and you're not hard-coding the variables per environment for everything. I think this is a major win. I'm happy to see that notebooks are coming to play the game now, but we'll

41:39 see where this goes. Anyways, for me, this has been a friction point for a number of years. It's difficult, it's hard to work with, and I feel like this eases some of my pain. Very nice. No, I think variables go beyond just your CI/CD. I love what they've done with environments — having that on your workspace and being able to configure it with your own packages — and this is just an advancement on that as well. So, see, this is the thing: we have all these great config features that are what

42:12 you would expect in a normal Spark environment — your notebooks, the environment, which runtime version you're running, your custom packages, whatever packages you want. But this is where, with the AI data agents, I don't think we're there yet with that set structure, given what they've delivered so far. And Mike, tell me: what else is missing in terms of configuring your environment, or customizing your workspace environment with

42:42 your Spark instance? You have a lot of the features you would expect to have. I'm not sure I follow what you mean. I have my own environment in a workspace and I can add packages; I can choose which Spark runtime — 1.1, 1.2, 1.3; I can toggle a ton of those features, and there are like 5,000 of them. Enable this and that — all things you would expect to have when I create my environment. Is

43:13 there anything they're still skimping on when it comes to your own custom environment? Because it seems pretty comprehensive. I don't really know; I don't know that I can answer that question. I use as much of the standard stuff as I possibly can. So here's what I've observed: I don't customize a lot of environments. The only thing I customize heavily right now is the ability to use the automatic accelerator for the Spark engine.

43:44 So you can turn on the native execution engine — that's the thing you need to turn on. Right now it's not on by default; it's currently an option. You have to turn it on for your Spark cluster to run faster. What it does is run some of your Spark in, I think, C++ — something faster. It speeds things up a lot: it takes some of the Java out of the process and runs it in a different language, so it just goes faster. That's one thing I will use. Outside

44:15 of that, I very rarely use other packages, because I've found that any time I add packages to a notebook or the Spark runtime, it immediately slows things down. And I also don't want to have to manage that environment across multiple — call them deployment zones, deployment environments. I don't want to manage that environment against dev, test, and prod unless there's something very specific I want to do. So I like using the off-the-shelf stuff as much as

44:46 possible, because I feel like it runs faster. That's just my opinion; I know sometimes you have to customize. So there's a question here from Kratos. Yes, Kratos: it is off by default. The native execution engine is not on by default. If you want the entire notebook, or everything on that cluster, to run with the native execution engine, it has to be enabled — you can use it, but it's not enabled by default.
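The per-cell toggle Mike goes on to describe can be wrapped in a small helper so the on/off lines aren't hand-written each time. This is a hedged sketch: `spark.native.enabled` is the Spark property the Fabric docs use for the native execution engine, but a plain stand-in class replaces `spark.conf` here so the pattern runs anywhere; in a real notebook you would pass `spark.conf` instead.

```python
from contextlib import contextmanager

class FakeConf:
    """Stand-in for spark.conf so this sketch runs without a Spark session."""
    def __init__(self):
        self._store = {"spark.native.enabled": "false"}
    def get(self, key):
        return self._store[key]
    def set(self, key, value):
        self._store[key] = value

@contextmanager
def native_engine(conf, enabled=True):
    """Temporarily toggle the native execution engine, then restore it."""
    key = "spark.native.enabled"  # property name per Fabric docs
    previous = conf.get(key)
    conf.set(key, "true" if enabled else "false")
    try:
        yield conf
    finally:
        conf.set(key, previous)  # restore so later cells are unaffected

conf = FakeConf()
with native_engine(conf):
    assert conf.get("spark.native.enabled") == "true"
# Outside the block, the previous setting is back in effect.
assert conf.get("spark.native.enabled") == "false"
```

Until the engine is on by default, a helper like this keeps the toggle from leaking into cells that weren't meant to use it.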

45:16 You have to change the environment. I was on a livestream with a PM for it — the name is escaping me at this point — but he told me that in the future it will be on by default. There are also shortcut commands: if you want the native execution engine for a single cell inside a notebook, you can turn it on for that cell, let it run, and then turn it back off in the subsequent cell. So you do have the ability

45:46 to toggle this on inside notebooks, but you have to write the line of code every single time, so it makes sense to do it as a variable setting that just turns on. So yes, right now it's a requirement to opt in, but hopefully it will become the default in the future, and I've been told by the Spark team that it's getting there. And I love it — on the data engineering side of things, Microsoft is doing a killer job. They're taking a lot of complex things and making them pretty simple. All right, Tommy, I think I talked about the

46:16 notebook and variable-library integration. I think we're back over to you for maybe your final pick. I think we're actually at time. I was struggling over which one to pick here, so I'm going to go a bit dev, a bit nerdy. I think this one went under the radar, but to me it's going to alleviate a lot of pain. It's in the Power BI update blog: public JSON schemas for Power BI Projects (PBIP). Someone may be asking, what's a JSON schema? Simply put, a JSON schema is something

46:48 you can plug into if you're typing a bunch of JSON — or if you have a format like your Power BI project and you're not sure of the syntax. You could use Copilot, but the schema is something that will validate it and say: you can't add that key; that key doesn't actually exist here; there's no such thing as layout optimization in this particular place; this is wrong. That's a great thing, especially in a tool like VS Code, where with an extension

47:19 it will give you auto-suggest but also flag the error — it almost looks like a spelling error in VS Code: hey, this is not something you can actually do here. And for me, for whatever reason, going from the remote to my computer changed the syntax a little. A new layout-optimization

47:53 key showed up in my reports, I couldn't open the report, and it was like, okay, that's strange — and VS Code didn't say anything was wrong. So this is great now: in your definition.pbir you simply add the URL to the schema, and you're on your way. So again, to me this alleviates a lot of pain. Okay, good. I want to jump on this one as well, Tommy, because you picked something that caused a lot of pain for my

48:23 tool. So, Tommy and the community here know I have made — well, my team, not really me; I envisioned it, but my team has really developed it — what I would say is probably the most robust theme-generation and report-creation tool out there outside of Desktop. We have a wireframing system, we have coloring, we have all the theming properties of the tool. It actually started as a side pet project: I was annoyed with the JSON color themes and built a tiny color picker, and from there it evolved into this massive tool that lets you

48:54 pick colors and themes and properties on visuals and things. So this public JSON schema inside the file definition of the PBIR files broke everything: that little thing called "$schema" became required. So let me add a word of caution for those of you building theme files or mucking with these files separately: if you have an old version, and the schema does not exist, Desktop will throw

49:26 a fit at you. Yes. So, oh my goodness, that little schema note is required. Previously it was optional; you could put it in, but it wasn't required to be in the file format. Something changed at Microsoft in June that made the schema part of the solution: it has to be in the file, and you cannot have a PBIP-formatted project anymore without the schema attached to it. Our

49:57 tool, right — our tool builds PBIP files from scratch; that's what it does, literally writing the code to make the files — didn't include it, because it was optional and, again, the evolution of the schema wasn't there. So we ran into a massive issue. All of a sudden everyone said, "I can't open my files. It's broken. What's going on?" We had to quickly turn around and figure out what the problem was, and it was the schema. And one thing I'll note here

50:27 is that it's not just one file. The file they show in the screenshot on the blog is the sales.report definition — the definition.pbir file — but that's one of many files on the report side, because each visual gets a file and each page gets a file. There are a lot of other files now that are used to generate the report, and because of that, every single file needs a new schema.
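Since every file in the report definition needs its own `$schema` entry, a small script can patch a folder of report JSON files in bulk — roughly what Mike's team had to bolt onto their tool. A hedged sketch: the schema URL below is a placeholder, not Microsoft's real schema address, and a real patcher would map each file type (visual, page, report) to its own schema URL and version.

```python
import json
from pathlib import Path

# Placeholder URL -- substitute the real schema address for each file type.
SCHEMA_URL = "https://example.com/json-schemas/report/definition/1.0.0/schema.json"

def ensure_schema(folder: str, schema_url: str = SCHEMA_URL) -> list[str]:
    """Add a "$schema" key to every .json file under `folder` that lacks one.

    Returns the paths that were patched, so you can log what changed.
    (Files with other extensions, like definition.pbir, would need their
    own pass with the matching schema URL.)
    """
    patched = []
    for path in Path(folder).rglob("*.json"):
        doc = json.loads(path.read_text(encoding="utf-8"))
        if isinstance(doc, dict) and "$schema" not in doc:
            # Rebuild the dict so "$schema" lands first, mirroring the
            # convention in the files Desktop writes.
            doc = {"$schema": schema_url, **doc}
            path.write_text(json.dumps(doc, indent=2), encoding="utf-8")
            patched.append(str(path))
    return patched
```

Running it twice is safe: files that already carry a `$schema` key are left untouched.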

50:57 Oh, and by the way, each schema is slightly different. You actually see that there are different versions: some are on 1.0, some on 2.0, some on 2.1. Great — love the fact that it's there — but now you have a whole collection of files in different states of versioning, and you need to track all that. Which is maybe more of my point here: if you're getting into the PBIP format, you're going to need some tooling

51:27 to help you manage it. So Mike, we're trying to reach the masses. We just talked to Matias about CI/CD and source control, and all of a sudden you're telling people they have to worry about schemas and the right versions. Holy crap. I know your tool had to get updated, but for a lot of users too — if they don't update

51:57 Git but they update Power BI, they'll hit this. This is the frustrating thing: we're trying to introduce Git to the masses, and it's not a "hey, you learned Excel, great, here, open up this Git folder, you'll understand it and know what to open." That's not the assumption, though. That's not what they're building. You're assuming people are going to know how to use it. No, they're not. It's just a file format behind the scenes

52:28 that synchronizes with Git and also syncs with the workspace. That's all most people are going to need. 90% of people only need "synchronize my workspace with Git" — done, move on. The tooling that needs to exist for experts to really jump in and know all the little details isn't quite there yet. It's getting there, and this feature is a great step toward improving that. It's getting better as we go. But what I will say is: don't

52:58 try to push all these technical things on normal people. It's not designed for them, and they don't need to know it. The best thing people need to know is: push your files to powerbi.com and turn on Git syncing. That's all you need to do, and 90% of people will be happy with that experience. It works decently well — not perfect, but decently well — and I think that experience is better than sticking a bunch of files in SharePoint. In my opinion, it's not a big deal. All

53:30 right. Well, I think that's probably a good place to end. Mike, you got anything else? No, I do not. Those are my main things. I believe those are the main key features and functions that came out of the blog. I'm very happy about where we're at and where we've landed with a lot of different things. June was a pretty big update, in my opinion. Now that the Fabric conference is over, there are a couple of months before the Fabric conference in Vienna, and typically Microsoft holds back a couple of features before the next big

54:00 conference. So I would expect July to be a pretty decent release month. I think August might be a little short — there might not be a lot there — because typically Microsoft holds back some features before September, which will be FabCon Vienna. Speaking of which, by the way: if you are planning on attending, or if you are a listener from Europe, I'll be going and speaking at FabCon Vienna — I got a speaking engagement there. So if you want to meet me there, make sure you go check out the Fabric conference

54:30 in Europe — I think it's European FabCon 2025. Check that out; we'll probably start putting out links here shortly, and we'd love to see you there. With that being said, thank you all so much for listening to the podcast today. We hope you found some features here that we thought were valuable — hopefully you find them valuable as well. If there are features we didn't mention that you think are worth a note, please let us know in the comments down below. If you like this video, please give it a thumbs-up; let us know you liked it, and that way it'll help the powers that be — the large language

55:00 models, the AIs that are just roaming the internet — know that this was a good video and other people should watch it as well. Thank you all so much. Please share it with somebody else. Tommy, where else can people find the podcast? You can find us on Apple, Spotify, or wherever you get your podcasts. Make sure to subscribe and leave a rating; it helps us out a ton. And share with a friend, since we do this for free. Do you have a question, idea, or topic that you want us to talk about in a future episode? Head over to powerbi.tips/podcast and leave your name and a great question. And finally, join us live every Tuesday and

55:30 Thursday, 7:30 a.m. Central, on all of the PowerBI.tips social media channels. Awesome. Thank you all so much, and we'll see you next time. [Music]
