PowerBI.tips

Fabric January 2025 Draft – Ep. 395

February 5, 2025 By Mike Carlo, Tommy Puglia

Mike and Tommy walk through their ‘draft’ of the Microsoft Fabric January 2025 update, calling out the changes they think will matter most for Power BI and Fabric practitioners. From TMDL scripting and semantic model version history to Copilot/Q&A improvements and OneLake catalog metadata, this episode helps you prioritize what to test next.

News & Announcements

  • Microsoft Fabric January 2025 update — Microsoft’s monthly round-up is packed: Power BI reporting improvements (like Explore this data from a visual), modeling investments (including semantic model version history and the TMDL scripting preview), and platform features like folder support in Git. If you’re trying to plan what to pilot in Q1, this post is the best single place to scan what’s new across Fabric, Power BI, and the surrounding developer workflow.

  • Submit a topic idea / mailbag — Got something you want Mike and Tommy to break down on a future episode? Drop your scenario or question in the mailbag—real problems make the best conversations.

  • Subscribe to the podcast — The easiest way to keep up with new episodes, plus links to your favorite podcast platforms.

  • Tips+ Theme Generator — If you’re tired of “close enough” brand colors, generate consistent report themes quickly and keep visuals aligned across a team.

Main Discussion: Fabric January 2025 Update (What to Pay Attention To)

This episode is essentially Mike and Tommy doing what every Power BI / Fabric team needs to do each month: triage the release notes and decide what’s worth testing now versus what can wait.

Rather than reading every bullet, they focus on the updates that change day-to-day workflows for report authors, semantic model owners, and Fabric engineers.

Power BI reporting: better self-serve exploration

One of the practical improvements called out in the January update is the continued push toward letting users explore without needing a developer to build a custom page for every question.

Key idea:

  • “Explore this data” from a visual makes it easier for consumers to pivot, swap chart types, filter, and inspect underlying data without leaving the report context.

For teams trying to scale adoption, this kind of feature matters because it can reduce “can you add one more visual?” requests and empower consumers—as long as your semantic model is solid.

Modeling & metadata: TMDL, version history, and documentation maturity

Mike and Tommy highlight a theme that keeps showing up in Fabric: models are becoming more developer-friendly.

A few noteworthy items from the update that reinforce that direction:

  • TMDL scripting experience (Preview) — More accessible, text-based model editing that plays well with source control and repeatable changes.
  • Semantic model version history (Preview) — A big step toward safer iteration and easier rollback when changes introduce breakage.
  • OneLake catalog metadata improvements — More emphasis on table/column descriptions and discoverability.

The takeaway they hammer on: investing in descriptions, naming, and metadata isn’t just “nice documentation” anymore—it directly improves downstream experiences like Q&A and Copilot.
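Later in the episode, Tommy describes a Tabular Editor macro that sends a measure's formula to OpenAI with an assistant prompt to draft a concise business description. As a rough illustration of that workflow, here is a minimal Python sketch. The function names and prompt wording are hypothetical, and the actual LLM call is left as a stub so the sketch runs without credentials; plug in whichever client your team uses.

```python
# Hypothetical sketch: draft a business description for a DAX measure.
# Function names and prompt text are illustrative, not an official API.

def build_description_prompt(measure_name: str, dax_expression: str) -> str:
    """Assemble the system + user prompt for an LLM description request."""
    system = (
        "You are an assistant giving concise business definitions "
        "for Power BI measures."
    )
    user = (
        f"Measure name: {measure_name}\n"
        f"DAX expression:\n{dax_expression}\n"
        "Write a one-sentence business description."
    )
    return f"{system}\n\n{user}"

def draft_description(measure_name: str, dax_expression: str) -> str:
    """Stub: swap in a real LLM call here.

    As Mike suggests in the episode, sending richer context (table
    descriptions, relationships) alongside the formula usually
    improves the generated description.
    """
    prompt = build_description_prompt(measure_name, dax_expression)
    # response = llm_client.complete(prompt)  # <- your client of choice
    return prompt  # returned so the sketch runs offline

if __name__ == "__main__":
    print(draft_description("Total Sales", "SUM ( Sales[SalesAmount] )"))
```

The generated text can then be written back into the model's `Description` property, which is exactly the metadata the OneLake catalog and Copilot surface downstream.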

Copilot + Q&A: context gets more valuable than clever prompts

In the January update, Microsoft also continues tightening the relationship between Copilot and semantic model metadata (and the Q&A setup experience).

Mike and Tommy’s framing is practical:

  • Copilot becomes more useful when it can pull from well-described models (synonyms, descriptions, and sample values).
  • The “future value” of AI in Power BI isn’t just generating text—it’s taking actions (bulk changes, formatting, metadata generation, model documentation) that would otherwise be tedious and error-prone.

If you want AI features to be more than a novelty, the groundwork is still the same: model quality and clarity.

Platform & ALM: Git workflows keep getting real

A steady trend in Fabric is the move toward repeatable deployment patterns and artifact lifecycle management.

From the January update, they call out progress on:

  • Folder support in Git and continued investments in the “project as files” approach.
  • Ongoing improvements around APIs/ALM patterns that push Fabric closer to a proper engineering workflow (PRs, reviewable diffs, and environment promotion).
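One practical wrinkle Mike raises in the episode: Fabric's GitHub integration commits through a single service-account identity (a personal access token), so the repo history does not show which person made each change, unlike Azure DevOps with Entra ID. His workaround is a commit-message convention that tags the real author. A hypothetical Python sketch of that convention, which could back a CI check:

```python
# Hypothetical convention helper: when every commit arrives under one
# service account, tag the real author's initials in the message itself.
import re

COMMIT_PATTERN = re.compile(r"^\[(?P<initials>[A-Z]{2,3})\]\s+\S")

def format_commit_message(initials: str, summary: str) -> str:
    """Prefix a commit summary with the author's initials, e.g. '[MC] ...'."""
    return f"[{initials.upper()}] {summary}"

def is_valid_commit_message(message: str) -> bool:
    """Check that a message follows the '[XX] summary' convention."""
    return COMMIT_PATTERN.match(message) is not None

if __name__ == "__main__":
    msg = format_commit_message("mc", "Update semantic model descriptions")
    print(msg)                            # [MC] Update semantic model descriptions
    print(is_valid_commit_message(msg))   # True
    print(is_valid_commit_message("fix")) # False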

For teams that already run software delivery pipelines, this is the direction you’ve been waiting for.

Looking Forward

If you only take one action after this episode, it’s this: pick 1–2 features from the January update and test them intentionally (in a sandbox workspace) instead of trying to adopt everything at once.

Start with whatever removes the most friction in your current workflow—often that’s improvements to semantic model editing (TMDL), source control (Git), or better metadata (descriptions/synonyms) that unlocks Copilot and Q&A down the road.

Episode Transcript

0:31 Good morning, and welcome back to the Explicit Measures podcast with Tommy and Mike. Good morning, everyone. Good morning, sir, how are you doing? Where has the time gone? We're already rolling into February. I'm exhausted, dude. This year's got to slow down, we can't keep going at this pace, this is crazy. How are you doing, how are you holding up? Oh man, I'm with you, too, with updates, with things coming out, and then things just building up. Normally I had a little free time, and I was like, I'm going to miss it with some of

1:01 the things that maybe you don't necessarily want to dive into. We could touch on all the new AI stuff, but we're not going to. We're not gonna talk about DeepSeek. Oh my gosh, there's things, geez. We touched on this, I think, last week or something before we had a recorded episode, but man, I think they said something like it wiped a trillion dollars off the balance sheets of these AI companies.

1:32 People are going to have to get smarter quicker about how to make these models more intelligent. I guess I would say it's getting very competitive very quickly. A thousand percent. And just the updates with OpenAI and this Deep Research, we finally actually have some AI that's being useful, Mike. So yeah, especially with all the updates with Fabric, you want to stay ahead, not just what the blogs are saying, but all the people that may be talking about a certain subject.

2:02 Well, Google now has something called Deep Research, not to be confused with OpenAI's Deep Research. Yes, they named it the same thing. Man, come on, guys, get a little bit creative. That's what I thought too. But regardless, with this tool you simply say, hey, I want to find topics, like, has anyone done a Fabric SQL database and been able to write to it with Power Apps, and it basically becomes an agent and scours the web for you, which is awesome. So

2:33 we're getting there. AI is interesting to me because it feels very capable right now, but sometimes it does still feel like it requires a lot of extra learning or education to get to a place where you can use it effectively, and I'm not sure I'm there yet. I'm looking at my team going, we have GitHub Copilot, which is awesome by the way, it does a great job on code-based

3:04 things, but I only understand a little bit about how it works. Other team members at my company understand a bit more and use it more frequently; others don't touch it, and so they're missing out on the opportunity. So what I'm looking for is, where are the techniques, the best practices, the training that's going to help me skill up my team? Because I think this is going to have to be part of our workflow moving forward, using AI to help us build net new things. A thousand percent. I think we're actually finally getting

3:34 there, where there are actually going to be tools that are more useful rather than a random chat that I may have found. So yeah, agreed. Awesome. Other topics for today? Any news items from yourself, Tommy, anything newsworthy you want to talk about? Microsoft's dishing out some things with monitoring and databases. I think I'm going to save that for another time; there's been a little bit of buzz around monitoring. You said monitoring and databases, what do you mean, what are

4:05 you referring to? So a few things that they have now: with SQL databases, not in the monitoring hub I believe, you can actually see any mirroring that's occurred, replication of SQL databases. And they just announced the activation of billing for SQL databases. Ooh, I didn't hear that yet, I didn't see that part. I thought you were going towards, I think there was a release from Microsoft on GitHub around

4:37 this new real-time monitoring that's for, oh yeah, another thing with SQL databases. I thought you were going down that route as well, because I haven't quite figured out how to make it work yet, but it seems interesting. Yeah, that's a whole other piece of what they're doing. But yeah, the SQL database billing: right now, if you create up to three databases you can do it for free, but it's going to be billed separately based on

5:09 storage. Huh, interesting, which we literally just talked about too. Yes. Wait, I forgot what we were talking about on this one again, I'm a little foggy this morning, it's been a long week. So we literally just talked last week about how all the cost in Fabric really is based on compute, not based on the data you're actually storing. Correct, and that's why it's been unique, but they're like, hey, it's a SQL database, you're not going to be able to cheat Azure. So since SQL database is a native

5:39 item in Fabric, it uses capacity units like other workloads, but billing for compute charges and data storage will occur after February. I believe this is the first artifact in Fabric where you'll pay for the storage, not just compute. Well, Lakehouse, too, also makes you pay for storage, it doesn't come for free. Yeah, so billing and utilization, that's going to be a whole thing we can dive into, because I know how much you love "where's my cost coming from." Yeah, I think it's important, especially

6:10 if you don't have your Fabric workspaces broken out by department where you're trying to allocate cost. If you have one larger Fabric environment and you're trying to split that across multiple teams, you could have all of that be a central IT cost center, but then none of the teams have skin in the game. And I find, if the department doesn't have skin in the game, they just consume whatever they want to consume. Someone else's bill, I don't care, right? We run out of capacity, that's not my fault, that's your fault, you need to turn up the capacity. And so it's one of these games of

6:41 who's blaming who. So you want the business unit to have some skin in the game on this one, where if they build poor models or consume too much content or chew up the capacity, you want them to pay for it. Hey, by the way, you're slowing yourself down because of this bad stuff in your content, and it will run better. So, interesting things there. All right, where else do you want to go with this? Ah man, I think I'm really ready to get started. So we had, finally,

7:13 again, it always feels like as soon as we finish our episodes for the week, we're like, we did good topics, and that's when Microsoft, I think, is really watching us and going, all right, let's drop the major 75-page update now. Yeah, the Fabric update is officially out for January 2025, and just like we did before with Power BI, this is Power BI and more. We're just gonna go through and do a very informal

7:44 draft of our top features, and again, the points don't matter here, but you can only pick a few. And I believe I started first last time, so Mike, I don't know if you saw anything here. Yeah, your video cut out there for a little bit, I'm not sure what you said, but I'll start up here again on picking some

8:15 topics. So, jumping in, let's do a draft. In the past we've done a draft around picking topics directly from the Microsoft blog: I'll pick one topic, we'll discuss it a little bit, and then I'll kick it over to Tommy for what he's going to pick. All right, jumping in here, let's see. There are a lot of neat features here. One thing that interested me as I was looking at it: there's a link to the first-ever Power BI Data Viz World

8:46 Championships. I don't know what this is. Like, what in the world is this, Tommy? Are you going to participate, Tommy, in the first-ever World Championships? I wanted initially to participate in the hackathon; I'm all about these community-based, collaboration-based features, Mike. Maybe we should do a podcast representing for this thing, a little branding there too. Well, maybe. I'm looking at the World Championship, so

9:17 more about this event. It's in the blog; it actually points you to a community post that talks more about it. The community post is talking about the Power BI Data Viz World Championships: four weeks, four finalists, and only one champion. Think you've got what it takes to create a stunning, beautiful, accessible data visualization? Get ready, put your Power BI skills to the test in this ultimate data viz showdown. So it looks interesting. The full details are coming soon; it looks like it's been announced on the Microsoft blog, they

9:47 have a little link here to stay tuned. I'll put the link in the description for the post around this. So if you think you've got the visualization chops, if you want to try your hand at building an amazing Power BI visual, check it out; maybe you could find your skills, your talent, in making all those super highly customized visuals. I'll throw this link in the chat window here just for people to see

if they want to participate in the world

10:23 championship. I'm not sure I will participate, Tommy; there's a lot going on in my world at this point. I've got presentations, I'm speaking at the conference, I probably won't have time to prepare a visualization for that. I'd be interested to see what people do; I'd like to watch it if nothing else, it would be interesting to watch. It feels like this is a take on what Excel does: Excel has a broadcasted live event around individuals building Excel sheets. Have you seen one of those before? Yeah, I've

10:54 seen it with Excel; even SharePoint has a contest too that people get way more involved in than they should. I don't think Microsoft has done a visual one; the only visual-led thing by Microsoft I can think of was just the gallery, but that was never a contest. Yeah, I don't think they gave away things, they didn't make it a competition, but they did have galleries of, hey, show your ideas. I'm already thinking in my mind there's a couple

11:24 individuals that I know who build incredible stuff, and I would love to see what they produce, or whether they build something for this competition, so I'd be very curious to see what's going on here. I'm also thinking the reason they're doing the championship is to start drumming up people building good examples of reports and things like that as well. A thousand percent. I don't do that as much, because I already have my own gallery of things that we build; we have the Theme Generator that already builds great templates for you automatically, so

11:55 that's where I spend a lot of my time. But again, another point of this is, when you're building incredible visualizations, are you talking about standard reports? What's acceptable here? How much can you bend the rules? Are you going to write an HTML visual that builds things? What are we looking at here? That's exactly it, Mike. There's amazing stuff you can do in Power BI, but the actual use and functionality, it's like, hey, this has 725 different objects on this page.

12:27 That's neat, I guess, but how helpful really is that? Well, it's nice to see what you can do with it, though. I would agree you need to know what the potential is, but I'd also argue one of the conditions of being the winner of this should be that you have to explain what you did; you have to go in and tell us how you built it to some degree, right? That, to me, feels like it should be part of the requirements for winning: you get a 30-minute video with Microsoft to explain and show us

12:58 walk through the different things. That's one of my big pet peeves when I see amazing visuals: people just post a little random video on LinkedIn and they're like, look at this incredible report I built. I'm like, wow, that's amazing, but they don't give you a download, you can't go pull apart the file, there's nowhere to go see what they built, and there's no way for me to really learn from it other than, okay, good job, you made a cool visual, but I can't understand what you did. And how much time did you really spend on, say, the background images? They have a lot to do with how these reports work really well, so where do

13:28 those backgrounds come from, how'd you make them? There are other things I'd like to see about this that democratize that knowledge of how to build things. A thousand percent. And yeah, I'm looking to hear more, because I would love to see more than just the design, because, Mike, I'm not even putting my skin in the game if that's the case, because I'm definitely not gonna win that way. But if we can actually see functionality, I love hackathons, that's where my heart is, so

13:59 it'll be interesting to see. All right, that's my first pick: I think there are going to be some interesting things around the Power BI Data Viz World Championship, and I'm interested in looking at that. I hope it's broadcast live or something; if it's not directly at the Fabric conference, I'd like to see it recorded and re-shared somewhere else. All right, Tommy, up to you, pick another topic. Mike, I am going to go with something that I think, for both of us, I'm surprised you didn't start with. Well,

14:31 there's one of two ways: I can be a troll, or I'm just going to go to the one that's probably number one. All right, go ahead. It is: folder support in Git is now available. This is planned to start rolling out by mid-February, and really what it is, is if you have your folders in Power BI, the one place you actually can't use the folders is in Windows Explorer, they don't sync. Sure, which is fine, but not really helpful, not

15:02 great either. It's an incredibly frustrating experience to go from, I have a workspace with everything organized, and then all of a sudden you deploy it and it's just one long list of all the items. It was very annoying, it's not helpful. So I think having the folders inside Git integration is incredibly useful; that's a feature I'm for sure going to use. A thousand percent. And I think, too, we've come a long way when it comes to the Git

15:34 integration in Microsoft Fabric. This is something we've always wanted, and Mike, I have to say, it is pretty seamless. Well, so the principle here is: you can have a lot of artifacts in a workspace, and you can put artifacts in a folder. All they're doing now, at least to my understanding, is that the Git integration, whether GitHub or Azure DevOps, is now just mirroring the same folder structure.

16:05 Right, so if you make a folder for samples or customer analysis or reporting, as you're adding those objects into folders it just mirrors the same folder structure in the Git integration. Which makes a lot of sense, because now you could basically build your Git integration around making folders inside Git, and it would automatically put your elements in folders if you're going that direction as well. So I'm really excited about this, and honestly I'm moving more and more, for most things, from the

16:37 SharePoint integration to really using Git, which is strange too, because everything always just worked with SharePoint, right, any time I saved something. I think there are a lot more steps, especially for internal data. It's still in preview, so I've had some issues where environments or certain things wouldn't sync, and I'm like, oh great, now I have to learn about checking out branches a little more, because I wanted to remove this and I've got to go back. So I have... this is actually interesting,

17:07 too. They're getting so far with the Git integration, and especially for me it's got to be in GitHub, that's just the way life is, so that really opens it up to a lot of people. But Mike, you and I both know Git is, I wouldn't say, for the faint of heart, even if it's all readily available now. Well, it's interesting that you mention that, because I've been accepted to do a talk at the Microsoft Fabric conference in Las Vegas this year, and I'll be doing it with Mathias Thierbach,

17:38 who's really good at Git and Git integration. If you think about all the things contained inside a workspace, there's actually a lot more than just plainly turning on Git and syncing the items in the workspace. Git, or deployment, or CI/CD could do a lot more; we just don't have the capability in the UI today to do everything. For example, the workspace has a name, the workspace has a description, the workspace has a capacity attached to it. What if you wanted to control the capacity that is attached to the

18:08 workspace, control that, redeploy to a different Fabric capacity? You can't do a lot of these other things directly with the Git integration we have today; those are outside the scope of Git. Git is really focusing on: you have a workspace and you're going to deploy items into the workspace. The Git integration today doesn't handle the data, so if you're having different data changes here... If you really think about what Git should allow us to do, think about all the items in your workspace like an encyclopedia. I know I'm dating myself

18:39 here, because people are like, what even is an encyclopedia? Don't just Google it, think of it like an encyclopedia, right? If I'm going to look up something that starts with the letter C, I would go to the book that is the letter C, pull it out, make my changes, and go put it back. That's the same concept we need with Git integration as well: I need to be able to check out a portion of the items in a workspace, not have to copy the whole thing over again. I should be able to check out only what I need, fix the change, and then put it back

19:10 in. So, interesting concepts coming; we're going to talk much more heavily about patterns in Git integration and how this works. There are some downsides, Tommy, I found with Git integration. Do tell, because I can share the sentiment. What have you been finding? Well, I found there's a difference between the Azure DevOps Git integration and the GitHub integration. They both work as far as being able to check in and check out all the items, that works fine. The difference is, if

19:41 you, Tommy, check out an item on Azure DevOps, it uses your credentials, you, Tommy, to show me that the change was made. So in the Git repo directly, it says Tommy checked something out, Tommy made a change and pushed it back in. If I check something out and make a change, it shows that I made a change, or I made a branch and changed some items in the repo. So it actually uses our identities to show, on the branching inside Azure DevOps, who's making the changes. When you use the GitHub integration,

20:14 because they're using a single user's principal ID, like a single user, all the changes are made on behalf of one single user. So yes, you can still track all the changes, but I don't know whether it was Tommy or Michael who made the changes on the branch. Does that make sense? Really? Yeah, because I guess DevOps is using Microsoft Entra, it's using Entra ID, so it knows who's making the

20:41 changes. Yeah, but is that same information still available via GitHub? It still tells you an account; it just looks like everything is under one GitHub account because you're using a service account. When you connect to GitHub, the act of connecting through to GitHub, you go get a personal access token, and you use that personal access token to link to the repo, which is fine, but it just means that when I commit a change or you commit a change, Tommy, it doesn't know who's committing the change; it's using the credentials of that common user.

21:11 Let's call it this way: there's no credential pass-through on the Git integration. Which is fine; here's what I recommend, though. If you're using Git integration, you just need to add your initials in the description of the change, add some additional content to the commit messages saying who's making the change, and then you can go back and link it to who made the change. Because sometimes you're trying to go backwards and say, who broke something, what happened here, did something get missed, or you're

21:41 trying to debug, and maybe there's a bigger story of why something changed, and I need to go reach out to Bob: hey Bob, something changed, what happened? Does that make sense? And again, Microsoft owns GitHub, but it's a different account, and it's a different authentication method. DevOps uses Entra ID; GitHub has its own, maybe it's Entra ID, I don't know, but GitHub has its own entirely different identity provider. You log in

22:11 with a different username; there's no guarantee that every single user can log in and have those details shown. Like, how would they know what my GitHub login account is when I'm in a Microsoft Fabric workspace? You wouldn't, right? Because it's literally a different email; I'm logging into powerbi.com with an entirely different email address than I could be using for GitHub. Anyways, just an edge-case thing that I found inside the Git

22:43 integrations. Wonderful. Well, dude, I didn't even think about the different GitHub accounts; it's been fine for me. That's not even a bug, I guess, but good to know. And I think, too, if that's the worst thing right now, if people add their initials, fine, as long as I can just use GitHub, please. Sure. All right, have we beaten that one to death enough? All right, you're up, my friend. All right, let's go to the next topic. So this is something, Tommy, remember how

23:15 we talked about the OneLake catalog? Remember, in an episode we talked a little bit about this; we were like, there are some things we thought were missing, right? Guess what: the semantic model table name and column are now inside the OneLake catalog. Oh my goodness. Right? That was a quick turnaround, because we directly talked about it when we were initially thinking about the OneLake catalog. I don't know how long ago that was, Tommy, you're going to have to search the

23:45 I think less than a month ago, yeah, maybe about a month. Anyways, time's been flying by so fast, but I remember our conversation, Tommy. We had a conversation around, hey look, it's really helpful to have the OneLake catalog, but there's almost minimal to no information in the OneLake catalog around, what's the name of the column, where does the description show up? So I was very pleased to see the semantic model now: if you have semantic models that are shown inside the OneLake catalog, you now have the model, table, and column

24:16 descriptions, which I think is incredibly useful. The description can come right from the semantic model; it's already there, it exists. So this is one more reason why you should document your semantic models, because this can help users trying to discover content: where did this table come from, why does it exist, what transformations did you do, are you filtering any information out of this table? That stuff's relevant now, and people can actually discover it inside this passive

24:46 documentation tool, which I really like. I know, when I saw this I immediately thought, yeah, we just talked about this not even a month ago. This is opening up a lot, because, Mike, do you know how many third-party solutions there are right now? I've worked with clients where they bought a Google Chrome extension that they had to bring to everyone in the organization, that went to a different tool, that would hook up to Power BI, so people could get their

25:16 documentation in this very convoluted way, just people trying to see, what data am I looking at, not just the numbers but those attributes that describe them. And again, this is one of those things, I know Apple does this a lot, where they update something and a lot of companies, a lot of products and services, go away. This is one of those, because this has been one of the biggest things we've been trying to solve for. I'm gonna update my Tabular Editor macro too,

25:48 Mike, because I have one right now that's pretty extensive for my measures, to give each a pretty good business description, and then the format. Nice, it works really well. Let me ask you some questions here: how are you doing this? Are you auto-generating something for every single measure or column, or are you actually letting an input box show up and saying, here's an input box, type in something? What do you... Yeah, I choose a selected measure, and

26:18 I I’m G choose a select measure and then I do have a way to do the system prompt where you can say hey inut things this is going to explain more about this business or this like or that model but for the most part what I found is really the normal base ma macro if I’m bind binding for time works great but I don’t understand like how are you text how are you describing the measure you’re just saying enter the formula back in the measure and be done with it

26:48 I don't understand how the description gets written. All we're simply doing is adding an assistant prompt saying: you're an assistant generating concise business definitions for Power BI measures. Are you using AI to generate that? Yeah, it's hooked up to OpenAI. Okay, sorry, I was totally out in left field there; you caught me off guard. I'm with you now; I was very confused. I only talk about AI. Yeah, that makes a lot more sense now, doesn't it? Now I'm much more with you. Okay, so you have a

27:18 Tabular Editor script that reads a measure, sends some data to OpenAI ("hey, give me a business description of what this measure is doing"), gets the text of that description back in Tabular Editor, and then you're just shoving it into the measure. Okay, this makes more sense. I was under the impression you had a script that just ran on its own and you weren't sending any other data. Let me pause here: are you sending it other information about the model, not just the measure?

27:48 I only send the selected measure, so only the formula. Okay, gotcha. But, and I'm spitballing here for a moment, there's no reason you couldn't enhance this script to go get the table description, go get the relationships of the model, and feed a little more context to OpenAI: here are the tables and their descriptions, here are the relationships between those tables and how they join together, here's the measure in this table, write a description for this

28:18 measure, here's the formula for it. You could start from where you began and keep extending it. Oh yeah, so I have one for tables, and I have one that can go look at all of the measures in a model. But to your point, now that we have this descriptor, you'd probably have to add a little more context for the AI to understand a column. Measures are pretty easy because you have a formula,

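The idea Mike sketches above, sending the measure's DAX plus table descriptions and relationships to an LLM, can be illustrated with a short prompt-builder. This is a sketch only: the function and variable names, the prompt wording, and the example model metadata are all invented for illustration, not taken from Tommy's actual macro.

```python
# Illustrative sketch: bundle model context (table descriptions,
# relationships) with a measure's DAX so an LLM can write a business
# description with more than just the formula to go on.

SYSTEM_PROMPT = (
    "You are an assistant generating concise business definitions "
    "for Power BI measures."
)

def build_description_prompt(measure_name, dax_formula,
                             table_descriptions=None, relationships=None):
    """Assemble the user prompt for an LLM call (e.g. the OpenAI API)."""
    parts = []
    if table_descriptions:
        parts.append("Tables in this model:")
        parts.extend(f"- {t}: {d}" for t, d in table_descriptions.items())
    if relationships:
        parts.append("Relationships:")
        parts.extend(f"- {r}" for r in relationships)
    parts.append(f"Measure '{measure_name}' formula:\n{dax_formula}")
    parts.append("Write a one-sentence business description of this measure.")
    return "\n".join(parts)

# Hypothetical model metadata, for illustration only.
prompt = build_description_prompt(
    "Total Sales",
    "SUM(Sales[Amount])",
    table_descriptions={"Sales": "One row per order line"},
    relationships=["Sales[CustomerKey] -> Customer[CustomerKey]"],
)
```

In Tabular Editor itself this would be a C# script looping over selected measures and writing the LLM response back into each measure's `Description` property; the Python above only shows the prompt-assembly idea.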
28:48 right? So it's going to be able to develop something pretty quickly: take a look at this measure, write a description. So this is something I'm planning to add to our nice little repo, because man, the columns too, especially the most important columns, like "hey, this is the one we choose when we count a member based on this thing." I'm sure there's additional AI integration to do there, but for the most part, right now this is one of

29:19 the first things I would focus on. AI, or ChatGPT, is going to get way too general if I just give it a bunch of columns without any context. Sure, it needs some level of context. That's why the measures work really well: it provides the description, then a line, then it shows the formula. With a column, how do you tell it what to write about? "This is a date; it shows

29:49 dates on a Power BI report." I wouldn't want that; it's not helpful. I need it to give me a better definition than that. A thousand percent. Awesome. Anything else you'd want to add on that one? I'm excited. Hopefully we can... right now you can only view those descriptions on columns and tables; you cannot modify them from the OneLake catalog, correct? Say the question again. So right

30:20 now in the OneLake catalog, I can view the descriptions of a column and table, but I cannot modify them. Correct. Right. Do I really want people... all right, so thoughts on this, Tommy: I do like what you're saying. I think the ability to go both directions, into the OneLake catalog to add definitions and into the semantic model to add definitions, is useful. I definitely think it's

30:51 think it’s useful definitely think it’s useful I think the headache I would have here is who’s really allowed to add those descriptions oh no I completely I know know and that’s where I’m like should it only

31:00 and that’s where I’m like should it only be the owner of the data set do you do you have admins that can come in and do this and I really do think to your point Tommy right what You’ built in TBL editor with scripting to go to AI open AI Microsoft has all the data right there in front of them like this should be the co-pilot feature where you just drop in and run co-pilot across your data catalog and say here make a bunch of descriptions because I do think I think there’s a psychological thing here as well if you give a bunch of users pre-built description they’ll tell you they’re wrong but if you give them no descriptions they just

31:31 won't say anything. It's too much time to think through all the descriptions and decide "nah, this is not helpful." I think people are more willing to correct you than to create from scratch. Oh yeah, everyone likes to point out a problem. Interesting. And that's true: even getting something basic in there is better than nothing. So I'd agree with that one, but I think we can add

32:03 another role, to your point about who's actually allowed to do that. You talk about the stewards, all those intermediate people who I never want to look at the semantic model itself, in terms of connecting with Tabular Editor or building a report; that's just not their role. But they know the business. Take all those account-information tables: you look at them, they have 75 columns, and you think "I don't care about any of them," but to the stakeholder, all

32:34 of these are apparently important, and they're the ones who maintain that knowledge. So what I would love to see is a role created for this. I know we already have Contributor, but a documentation role, exactly, a steward role, so they can go into the OneLake catalog and modify table, column, and measure descriptions. Interesting. That's a neat feature. I'd have to think through a little bit more

33:04 how I would want that to behave. And the other thing, now that there are so many different places to edit: as you're saying that, Tommy, I'm thinking, what if someone was editing the model in a workspace while you're trying to modify the catalog at the same time? How would that work? This co-authoring experience needs to get figured out, because you could run into conflicts: hey, Tommy's in there making changes to the descriptions, and I'm also in the model making changes, adding measures or something. How do we keep those two people in

33:36 sync on the same model? That could be a little interesting. Yeah, good stuff. All right, I think we've talked about this before, Tommy, and, are you good on this topic? I don't want to rush you. Oh yeah, I'm good. Okay, I'm going to pick another winner on this list: the Python notebooks, pure Python notebooks. Let me preface this a little. When we talk about notebooks, there have

34:07 notebooks there’s two there has been traditionally two runtime engines that run a notebook you can do a python notebook which a py spark basically P spark notebook let me correct myself a p spark notebook P spark runs python on top of the Spark engine that’s what that does then there was a second runtime called The t-sql Notebook so that’s running t-sql on a different runtime I don’t know what it is maybe it’s the SQL Server runtime I don’t know but it’s a it’s a notebook like experience where you’re writing straight tql against lake houses and

34:37 tables and such. Okay, so that's the second engine. This announcement introduces a pure Python notebook: no Spark involved, just Python. It's a different experience, but it still runs and acts like a normal notebook. It initially supports Python 3.11 and 3.10, so those are the Python versions you can use with it, and it's optimized for resource utilization. Okay, so let me give you the

35:08 context where I think these new Python notebooks make sense. I was doing some testing, Tommy. I like Semantic Link Labs, it is so cool. Now imagine you could run Semantic Link Labs not on a Spark cluster, which has a worker node and potentially a management node, so minimum two computers, but on a single machine in a single notebook, potentially costing you

35:38 fewer compute units. So for tasks that are lightweight, like automation around testing, I'm trying to pull Great Expectations in and use it with my semantic model, and semantic link works with the Python notebooks. I tested it: you can use just this pure Python notebook with semantic link and Semantic Link Labs directly. There's nothing fancy happening there. So if I'm

36:09 that means it doesn’t run so if I’m looking at this going okay I’m now making the decision The Notebook experience I love it’s great but now I can even pick a very finely tuned process like if I’m not moving hundreds of millions of Records or millions of Records I’m doing lightweight things why not use the python notebook compared to this the pi the pi spark notebooks it might save you some compute units so I think this is a not like it’s necessar a cost-saving experience but I like the idea that I can write python in both

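A minimal sketch of what Mike describes, assuming the `sempy` package (semantic-link) and its `fabric.list_measures` function; those calls only work inside a Fabric notebook, so treat the names as assumptions. The runtime-picking helper is just the episode's rule of thumb expressed as code, with an arbitrary threshold.

```python
# Sketch, not a verified recipe.

def list_model_measures(dataset_name: str):
    """List measures of a semantic model via semantic-link.

    Assumed API: sempy.fabric.list_measures. Only runs inside a
    Fabric notebook, where the sempy package is available.
    """
    import sempy.fabric as fabric  # import kept local: Fabric-only
    return fabric.list_measures(dataset_name)

def pick_notebook_runtime(approx_rows: int,
                          threshold: int = 10_000_000) -> str:
    """Rough rule of thumb from the discussion: lightweight jobs fit
    the single-VM pure Python runtime; genuinely big data favors
    PySpark. The 10M-row threshold is invented; tune it to taste."""
    return "python" if approx_rows < threshold else "pyspark"
```

The point of the heuristic is Mike's cost argument: a pure Python notebook spins up one VM instead of a worker plus head node, so for small jobs it starts faster and burns fewer capacity units.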
36:40 experiences, the pure Python notebook and the PySpark notebooks, and get the same results. There are probably things somewhere in here that don't work in both notebook experiences, but I haven't found them yet, and I'm starting to test it more and more. So I'm really liking this new Python-only notebook experience. Does that make sense? Yeah, and I actually wanted to ask you, from the cost point of view: one of the things I've noticed with the PySpark engine is, if I'm testing

37:12 the py spark engine is if I’m testing things out or if I’m like okay I gotta modify the code yeah like that time for it to start running I the amount of times I have out loud because you’re going to take two or three minutes just because I want to run a cell and again obviously when we actually have the full pipeline different story but you’re in the notebook too to probably Tinker around to you’re bu you don’t have all the code already you’re just copy and pasting a lot of people are like hey did

37:42 this work? Did this work?" And just like a normal Jupyter notebook, once it stops running, if I go get coffee and come back, it's going to take another three minutes to start. So does the Python experience, you see where I'm going, can the Python experience be a good substitute for that entire engine I don't need when I'm simply wanting to look at something? Yeah,

38:13 I think the answer here is: what is the size of your job, and how long does it need to run? Are we talking about a very monolithic job? To your point, Tommy, there's also the startup moment. If I have to start two virtual machines, a worker and a head node, to run a notebook, then it makes sense that if I can just run the Python notebook, it's one machine, one VM I need to turn on, and that should be faster to start than two machines. So one, it's a cost

38:44 that effect right so one it’s a cost savings but two it’s like a speed to access a speed to get into using the experience if I can just have one machine that’s just available to me great that’s easier for me to just spin up the machine and use the python notebook there so I think to your point Tommy right it’s it’s part of this is cost savings but part of it is the user experience if if it takes less time one or two seconds to Lo load the notebook that’s just that that’s another opportunity of a little bit less friction that just makes it easier for me to get started does that make sense

39:14 That is exactly what I was hoping, in terms of the actual functions and everything else. It sounds like it makes a lot of sense for people who are starting off. Would you then recommend someone go straight from, say, Power Query to notebooks? Would you want that story? Yeah... oh geez, Tommy, now we're getting into another big one.

39:45 now we’re getting into another big yeah when when you move me from like data flows gen two so there’s data flows gen one which doesn’t talk to lak houses I can’t write data to a lake house and then there’s data flows Gen 2 which is like the experience of data flows Gen 2 Now talks and writes data directly to lake houses awesome love it there’s comes a threshold where you’re like am I do I need to do more is do I want to optimize more of my solution and what I have found is if I run python notebooks on top of my data versus data flows Gen 2 it’s more efficient to run those and

40:15 2 it’s more efficient to run those and now I think with python notebooks pure python notebooks we also add another layer to this right do I use data flows Gen 2 or do I want to use a python notebook where do I start I would argu here’s my argument my argument would be if you’re just starting out using notebooks maybe start with the python only notebooks first just because the odds of you doing big data right out of the gate is probably a little bit less on those first couple projects my experience has been when you when organizations start to use fabric

40:45 they pick small to medium-sized projects that are reasonable in size. They're not saying "okay, great, now we have Fabric, our first project should be a 200-billion-row table." I don't hear of a lot of organizations willing to jump in and start with that, because to handle data that large inside Fabric, you're probably already bringing in solid data engineers who understand big-data technology, particularly Spark notebooks; you're bringing in the expertise already

41:15 you’re bringing in the expertise already to get to that scale right smaller organizations I think are the have the

41:19 better opportunity here to start with these things in a smaller build. Does that make sense? That's exactly what I was thinking too, because for me the most frustrating thing when I started getting into notebooks was that it was just a Spark environment. I'm just learning Python, man, and now I'm realizing that the column types in Spark are different from the ones in pandas. Why? And

41:49 you want to talk about it: I have a few white hairs in the back to show for that. Little knickknack things like that, which, unless you come from that background, where that was your life, how in the world would you know? You're going to get someone saying "I can do this in Power Query; it may take longer, but at least I can change the column type," and then: one works for a file, but I can't use that on a table. I think there's a little bit of, if you think about the proliferation of all the different tools we can use to do things,

42:19 right: I could send data to a Kusto database, I could use an eventstream to bring data in, I could bring things into a lakehouse or a Kusto database, I could use a Python notebook, a PySpark notebook, a T-SQL notebook. It's really interesting; there's now a wide variety of tools. But I think the community in general has been asking: why are we throwing everything at everything? I think there's a little bit of: whoever can build

42:51 that decision-helper tool, right? Whoever can ask "what is the workload you're trying to accomplish?" and then give you a routed path: "this is the most efficient way to build this pattern, optimizing for compute or for storage or whatever that optimization is." There are now so many different routes to build the same stuff that we need a little help. I think the community is pushing back a bit on this Fabric experience; I've seen a couple of blog posts from people asking why we're really pushing Fabric so hard when we should really

43:22 be focusing on just making Power BI better. Oh my gosh, I love that; I already put that down as a topic too. And I looked at the limitations: it doesn't work with environments, fine, but it looks like everything else would work, and again, if you're talking about most people who are making that transition, perfect. Yeah, I'm actually going to call out someone who made a comment in the chat here that I think is really relevant. Someone

43:53 in the comments, Michael, makes the point that people are coming out of college knowing Python, regular Python; it's a very common language. And I would agree. I've seen the same trend: a lot of schools teach Python right away. So having pure Python notebooks for data engineering is going to be a big win for Fabric, especially in the more technical computing space. I'd also argue the two main tools

44:24 in the market here are anything SQL-related and Python. Those feel like the two languages everyone seems to have some knowledge of, and if I were a new user coming into Fabric or Power BI, or thinking about starting my career, those are the two languages I would spend a lot of time learning to use effectively. Python and SQL: if you can get your head around those two, you can do a lot inside the data space. Yeah, and I think we'll save this for another day too. As we talk

44:55 about this, a lot of people are also going to be used to an IDE, and probably some Copilot to help them out. I know Microsoft is still working on the integration between notebooks and other tooling. How often are you building notebooks in VS Code for Fabric, or are you doing that at all? That's a good question, Tommy. I've had a little bit of friction there in my personal experience. Here's what I'd like the experience to look like: I would like to go into a Fabric workspace,

45:27 pull open a Python or PySpark notebook, and click a button that says "open in VS Code." What I want it to do is open that notebook in VS Code locally on my machine and immediately connect to the cluster in the cloud service. I want it to be seamless: click a button, have the notebook open in VS Code, easily connect back to that cluster, and when I hit run, it doesn't run on my machine; it basically bundles

45:57 the command, like a Jupyter notebook or JupyterHub: it talks to the server, bundles the command inside that cell, sends it to the server, and returns the results to me. That's what I want it to look like. But I've had some friction actually getting that to work correctly, so it hasn't been super smooth for me. And another reason, Tommy, I'd like that experience: I have GitHub Copilot for free, or you

46:29 can pay for it and get more advanced features from GitHub Copilot. How rich would that be? I don't need to go buy Fabric's F64 to get a Copilot. I could connect the notebook, bring it down to VS Code, write the code directly there locally using VS Code with GitHub Copilot, which I've found super effective. That would be great, and then I could just push the notebook back up and it's ready to go. Yeah, I agree with that. One other thought here too, Tommy, with

46:59 the notebook experience: there's now also the announcement around live versioning for notebooks. Did you see that one? I did. As we're talking about notebooks and this VS Code experience, I'd want it all to work together: how cool would it be to have a notebook that also lets you live-edit in VS Code and see the other person's edits? That would be amazing. Do you see yourself using that a lot? I

47:32 would argue that developing code together with other people in the same notebook is extremely useful. If you have two people on a team, or a problem with some notebook or some code, it's really useful to bring two engineers together in the same notebook, one person building one cell, the other testing things in another cell, hammering out a problem together. Sometimes that's really helpful, and the same goes for code reviews. I really think there's a lot of value in joint

48:03 development. Interesting. Okay, Mike, I know we're getting into your time, but there's one we haven't mentioned, and I'm actually blown away it didn't make your first or second round of the draft. It probably should have. I think I know what you're going to allude to, and I'm probably already using this feature heavily, but go ahead. I really doubt it. This is actually the second announcement; what I'm about to introduce to you is someone

48:34 I’m about to introduce to you is someone who was said they were eligible for the draft went back for a year in college cicd support for data Flows In fabric sound sound familiar I feel like this is like the third announcement I’ve heard this one from how did I miss this on the where the heck did they put this at the bottom like where did they put this one I don’t even see it if you look for there’s a section called Data flows in the article if you actually go to let me just control F data

49:04 flows": Dataflow Gen2, okay, so Dataflow Gen2 has its own section. Oh, I see: there's a checkbox now, "enable Git integration" for this dataflow, plus deployment pipeline and public API scenarios, in preview. Okay. I'm going to hold my breath on this one; I feel like I've been hearing for months about getting this feature into Git. Hopefully it works; we'll see what happens. But this is very needed. It's about time it got

49:35 it’s been about time that gets here but Mike they even updated the block from November to say that we were gonna introduce it that week but we waited till January so even in the original just we find they they were just polishing it for us Tommy there was a couple things that he just need to finish so that we could be really happy with the ca I thought you were going to say something about timle editor honestly oh timle yeah there’s a timle editor in powerbi desktop now dude I’m telling you like so oh yeah that we talked about then the we did when it

50:07 came out; we talked about it when it was released. I've done a video on it on YouTube, which I think is awesome; by the way, Rui goes through and tells you exactly how to use the TMDL editor, with some good demos. But okay, I'm going to have a hard time choosing between the TMDL editor preview and CI/CD inside dataflows. Wait, what? You're going to have to explain, because it's a two-horse race at this point

50:37 race it’s a two horse race at this point for me for those two because timle editor affects my day-to-day like it it is a 100% every day I’m actually going into semantic models I’m building models and I’m dragging the entire semantic model into the timle editor just to see how the model’s built like what features are turned on what settings are there this gives this Temple editor gives me a whole another layer of a view of data that I could never see before I love it I think the templ editor is a great addition to powerb desktop so much so that I’m changing my training material

51:07 that I’m changing my training material to reflect timle editor as part of the new modeling experience so now now oh wow so think about it right when you do report building you’re thinking about the report view of desktop when you think about data modeling you’re thinking about the model view but now I’m extending my documentation to also include Dax qu review also timle editor so these are other tools that are going to make it easier for you to make measures I don’t like writing measures anymore unless I’m using Dax S View it’s so much better wow I’m telling you I

51:39 so much better wow I’m telling you I like especially if you make like a like imagine you’re doing like a sum on yeah and you have to write three measures I could write all the measures all at once and just hit publish all measures to the model like it’s great like these new code-based tools are changing how I work with semantic models it’s physically changing how I do things man you’re gonna have to get get me off a tablet editor though and see you’re doing all this in powerbi desktop that’s what I’m so Tommy to your point you’ve done a lot of scripting inside tab

52:09 Editor: select this measure, select this column, run the script; that saves you a lot of time. I'm not saying this won't help you in that way, but imagine if I didn't need to write the script to begin with. Imagine I could just script the entire table and its columns out into a TMDL window. To your point, Tommy, maybe I can't send code to OpenAI, I just can't do that, or I'm not that advanced a person,

52:39 but if I’m not that advanced of a person if I’m not using open AI if I’m not using scripting pieces like that I just want to see all the columns and add descriptions to them it’s as easy as going through and adding some text in front of every column and boom it’s already documented and I hit push or update the model and it pushes those changes right back in the model it’s super fluid so yeah tab editor 3 and tab editor 2 are going to still be like probably the the Premier Premier tool around editing models but I think the timle editor is cutting into that business a lot and I think there’s a lot

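For readers who haven't opened the TMDL view yet, here is a simplified sketch of what "adding some text in front of every column" looks like. The table, measure, and column names are invented, and a real scripted table carries more properties (lineage tags, source columns, and so on); in TMDL, `///` doc-comment lines above an object become its description when the change is pushed back to the model.

```tmdl
/// One row per order line, filtered to completed orders only
table Sales

	/// Total revenue in USD across all order lines
	measure 'Total Sales' = SUM(Sales[Amount])
		formatString: #,0

	/// Order completion date; drives the time-intelligence measures
	column OrderDate
		dataType: dateTime
```

Documenting a model this way is exactly the fluid edit-and-push loop Mike describes: type the `///` lines, hit update, and the descriptions land in the semantic model.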
53:09 business a lot and I think there’s a lot of new users to powerbi that will find immense value with those timle editors yeah Andrew Andrew’s also commenting timle and desktop has been so useful I I’m saving time with using it i’ I’d rather write measures in timle or Dax quer riew than be building measures using the formula bar well let me ask you this Mike are how much of your time you would actually say guesstimate the rate in desktop and the rate in VSS code right

53:40 now, or is it purely Desktop, since they both obviously support TMDL? Great question. The TMDL format is supported in VS Code, but you have to save your Power BI file in the Power BI Project (PBIP) format. So you can still use TMDL in VS Code; it's just a whole extra step, Tommy, and that's what I'm not a big fan of. There's a whole other step inside VS Code that requires this. Now, to your point, we should go talk to... who's the gentleman

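To make the "whole extra step" concrete: saving as a Power BI Project (PBIP) turns the single .pbix file into a folder of text files, with the semantic model serialized as TMDL that VS Code can edit. The layout below is a simplified, illustrative sketch (file names vary by version and settings), not an exhaustive listing.

```
MyReport.pbip
MyReport.Report/
    definition.pbir
MyReport.SemanticModel/
    definition.pbism
    definition/
        model.tmdl
        tables/
            Sales.tmdl
            Customer.tmdl
```

Because each table lives in its own TMDL file, the PBIP format is also what makes Git diffs and code review of a semantic model practical.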
54:10 should go talk to who’s the gentleman who makes the the vs code plug-in for for fabric who I’m talking about oh yes Greg bur Burl so yeah or Geral no I I Gard g g Gard sorry it just took me a second to get there so Gard does this really interesting fabric fabric addin for vs code if he could add to that fabric Adin the ability for you to send information to open AI or use

54:41 scripting inside it to some degree, like with TMDL, dude, I'm telling you, that would be amazing. Yeah, Gerhard is the name; thanks, Greg, I appreciate the name call-out; you knew it faster than I did. Awesome. Anyway, those are my key topics, the ones I think are most important here. And Tommy, it's going to be a hard choice for me. I don't know if I want to choose CI/CD with Dataflows Gen2. It's been needed; most of the world uses dataflows in the business intelligence space, and from the Power BI side, people are

55:13 going to want to move to Dataflows Gen2, and if it's not supported in Git, that's one major barrier I've been talking about, which has now been removed. It's now a lot easier to use Dataflow Gen2 inside Git, which I think is absolutely huge. It's the right approach; we knew we were going to get there, I was just getting impatient. I wanted it now; I didn't want to wait months and months for it to show up. Yeah. That's interesting. With TMDL, do you

55:44 envision a world, Mike, and I always envision the world, are you dreaming of a world where you're building your entire model the way people build Node projects in JavaScript, where there's literally no UI and everything is done in files in a repo? No, I don't think so. I think it's more like: I'm going to make a Copilot agent that has the voice of Alex Powers, and I'll be able to just talk to Alex: "Alex, I need a model,"

56:15 that does XYZ things, and Alex, in his gentle, kind tone, will say, Michael, that's a dumb idea, I'll just build you the model that you actually need — and by the way, I'm going to make everything in Power Query just work better for you, because you don't have a clue what you're doing. So that's the Copilot agent I need. No, I'm being totally facetious here, I'm just teasing you, Tommy. I don't see a world where we're going to have purely code-based building of models. I still think we're going to need some level of tooling there, but I do think using

56:48 more things like TMDL is very useful, and I think a lot of other tools are going to start adopting it, because to me, every time they add a brand-new feature into anything that's part of Desktop, the TMDL format can cover it. There are things TMDL can do that they never had to build a UI for. Like, imagine, Tommy: you have perspectives, right? You can't make a perspective in Desktop — at least you couldn't, but now you can, because the perspective lives as

57:19 part of the semantic model, and you can script it out using TMDL. So TMDL scripts out perspectives, and you can use that to build different languages on top of your model, which is awesome. That's what you want to do, sure, but before, I had no UI for that — it just wasn't available to me. So Tommy, where do we go with this one? Is there going to be an ability for us to script things out and pull in code samples and snippets from other places? I think that's where we need to go next. I want Tommy to figure out the scripting code.
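To make the perspectives example concrete — this sketch is ours, not from the episode, and the table, column, and measure names are hypothetical — a perspective in TMDL is just a few indented lines in the model definition, which is why it can be hand-authored or scripted without any Desktop UI. The object keywords follow the Tabular Object Model naming that TMDL mirrors; check the current TMDL docs for the exact shape:

```tmdl
// Hypothetical perspective added directly to model.tmdl
perspective 'Finance View'
	perspectiveTable Sales
		perspectiveColumn Amount
		perspectiveMeasure 'Total Sales'
```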

57:50 It'll say: hey, Tommy's going to build an automation, I know what you want. You see what I'm saying? I want a community-based space where Tommy can build the scripts he needs and I can just go use them when I want. Right — you want to be able to say, hey, download this package and it will install all your measures and your descriptions, just like a Node package. That's where we're going with this: package-based installations that set up everything in your environment. So maybe that's the approach they're taking with the notebooks right

58:21 now, with Semantic Link and Semantic Link Labs. That's something that's potentially there — you just install the library and it works. But I don't know, I feel like the packaging piece you're talking about, Tommy, is a little bit more dodgy. I don't think I'm being completely serious about that, but it would be cool to automatically integrate stuff like that. I will say this, though, Tommy, to your credit: people are building TMDL scripts and sharing them now. I just saw

58:52 someone — oh really? — yeah, so people are now building TMDL scripts: hey, copy this TMDL script and it will create your date table; copy this TMDL script and it will create something else for you. So I do think there's a world where people are going to share TMDL scripts. My challenge here is that you need a UI to help you a little bit, right? Sometimes these TMDL scripts depend on knowing something about your semantic model. A date table makes sense — it doesn't matter, you don't really need to know any

59:22 information about your model other than that you're making a date table. Maybe you want to know the minimum and maximum date range to build out that TMDL script. But for other things — if you're trying to build complex measures, or other complex things — you need to reference elements of the semantic model, and I don't think we have any tooling today that helps us do that easily. Does that make sense? Yeah, and I think that's why Microsoft has pushed so hard over the last two years to make Power BI text-accessible.
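A shareable date-table script like the one Mike describes could be as small as the fragment below — our illustrative sketch, not something shown in the episode. The only model-specific knowledge it needs is the minimum and maximum of the date range in the DAX `CALENDAR()` call, which is exactly the kind of input a small UI or prompt could supply. The property names here are illustrative and should be checked against the current TMDL docs:

```tmdl
// Hypothetical reusable date table; swap the two dates for your model's range
table Date
	partition Date = calculated
		source = CALENDAR(DATE(2023, 1, 1), DATE(2025, 12, 31))
	column Date
		dataType: dateTime
		isNameInferred
		sourceColumn: [Date]
```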

59:54 Because again, we lived in a world where the only way to really view things was binary, through an application, and I think they realized that everything they do is code, so they just wanted to open it up. So yeah, it does open up a lot of different things — but just for the automation itself, man, if we stop there, I'm good. All right, I like it. All right, with that, we've burned through a perfectly good monthly update. This was our draft for January of 2025.

60:25 We're trying to pick out the best features — the ones we think are interesting, the ones you'll want to look at, research, and go test. I, for sure, am going to go dig around. I'm going to go make a Dataflow Gen2 right away, go right to my own environment and check this one out, because that's a feature I've been whining about for eons and I want to see if it works. So I'm definitely going to check that one out. Anyways, that being said, thank you all so much for spending an hour of your time. I know your day is busy, I know you have a lot going on. You could be doing other things, like knitting a sweater, actually working

60:56 at real work, or maybe even going on a run or getting some exercise. So hopefully we've kept you a little busy and entertained you a little as we go along. Tommy, where else can you find the podcast? You can find us on Apple, Spotify, or wherever you get your podcasts — make sure to subscribe and leave a rating, it helps us out a ton. If you have a question, an idea, or a topic you want us to cover on a future episode, head over to powerbi.tips/empodcast and leave your name and a great question. And finally, join us live every Tuesday and Thursday at 7:30 a.m. Central. Enjoy the

61:28 conversation on all the PowerBI.tips social media channels. Excellent — thank you all so much. We appreciate the chat, thank you for engaging with us and leaving comments as well. Thank you so much for participating; we'll see you next time.


Thank You

Want to catch us live? Join every Tuesday and Thursday at 7:30 AM Central on YouTube and LinkedIn.

Got a question? Head to powerbi.tips/empodcast and submit your topic ideas.

Listen on Spotify, Apple Podcasts, or wherever you get your podcasts.
