PowerBI.tips

Most Underrated Fabric Feature – Ep. 263

“Underrated” in Fabric usually doesn’t mean “nobody knows it exists.” It means it hasn’t made it into real workflows yet.

In this episode, Mike, Tommy, and Seth share a few Fabric features that quietly remove friction: seeing what’s actually in OneLake, shaping data faster in notebooks, and adopting Fabric without rewriting everything.

News & Announcements

Main Discussion

A pattern shows up in all three “underrated” picks: they make Fabric more inspectable, more approachable, and more incrementally adoptable.

Here are the takeaways worth stealing:

  • Don’t run Fabric blind—monitor capacity first. The Capacity Metrics app should be installed early so you can see spikes, long-running workloads, and noisy-neighbor behavior.
  • Bursting needs guardrails to be practical. Bursting can turn hours into minutes, but SKU guardrails let you cap that burst so finance doesn’t get surprised by a short, expensive sprint.
  • OneLake File Explorer is “storage explorer” for Fabric. Sync lakehouses to your Windows machine and browse both the Files and Tables areas when you need to validate what’s landing.
  • Know the limitations and still use it. Even with caveats (like constraints around editing certain Office files), it’s a huge productivity boost for troubleshooting and discovery.
  • Data Wrangler lowers the barrier to notebook transformations. It’s a friendly on-ramp for shaping data and learning the transformation steps before you standardize them into code.
  • Shortcuts are an adoption strategy. Connect to existing Delta data (even outside Fabric) so you can bring workloads over selectively instead of “big bang” migrations.
  • Underrated often means untested. If you’ve heard of these features but haven’t used them end-to-end, you’re missing easy wins.
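The bursting takeaway is easy to sanity-check with arithmetic: smoothing keeps total compute (CU-seconds) roughly constant, so a higher burst ceiling mostly shortens wall-clock time. Here is a minimal sketch in Python using the F8 example discussed in the episode; the constant-work model is a simplification, not Microsoft's exact billing algorithm:

```python
# Simplified model: total work in CU-seconds stays constant, so
# wall-clock time scales inversely with the CUs you can burst to.
def wall_time_minutes(baseline_cus: float, baseline_hours: float, burst_cus: float) -> float:
    cu_seconds = baseline_cus * baseline_hours * 3600  # total work to do
    return cu_seconds / burst_cus / 60

# An F8 job that takes 3 hours at its baseline 8 CUs:
print(round(wall_time_minutes(8, 3, 294), 1))  # ~4.9 min bursting to 294 CUs
print(round(wall_time_minutes(8, 3, 49), 1))   # ~29.4 min capped at 49 CUs by a guardrail
```

This reproduces the episode's numbers: full bursting turns 3 hours into about 5 minutes, and a guardrail cap around 49 CUs stretches that to roughly 30 minutes.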

Looking Forward

Try OneLake File Explorer and a Shortcut in a dev workspace this week—prove the workflow before you commit to a migration plan.
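If you script the Shortcut half of that experiment, a shortcut is defined by a small JSON payload pointing a lakehouse path at external Delta data. The sketch below builds one for an ADLS Gen2 source; the field names follow the documented shortcut shape but should be verified against the current Fabric REST docs, and the storage URL and connection ID are placeholders:

```python
import json

def shortcut_payload(name: str, location: str, subpath: str, connection_id: str) -> dict:
    # A shortcut surfaces external Delta data under the lakehouse
    # without copying it into OneLake.
    return {
        "name": name,      # how the shortcut appears in the lakehouse
        "path": "Tables",  # create it under Tables/ (use "Files" for raw files)
        "target": {
            "adlsGen2": {
                "location": location,           # storage account endpoint
                "subpath": subpath,             # container + folder holding the Delta files
                "connectionId": connection_id,  # existing Fabric connection (GUID)
            }
        },
    }

payload = shortcut_payload(
    "sales", "https://myaccount.dfs.core.windows.net", "/data/delta/sales", "<connection-guid>"
)
print(json.dumps(payload, indent=2))
```

The same definition can be created through the lakehouse UI; the point is that only metadata is written, so the Delta data stays where it is.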

Episode Transcript

0:24 Welcome back to the Explicit Measures podcast with Tommy, Seth, and Mike. Good morning everyone, and welcome back. Good morning, and a happy Tuesday, gentlemen. Jumping right in, our main topic for today will be the most underrated features around Fabric. A lot of features have been released for Fabric; are there any that we feel are greatly underused, where we should jump in and start learning about a particular feature? That being said, let's get into some quick introductions, topics, or

0:56 links that we found over the last week or so that are interesting to us. Seth, I think you found a really interesting topic that we should discuss here, and I think this is us trying to understand more of what's happening in the Microsoft Fabric ecosystem. The link will be down in the description. Seth, take it away; what did you find this week? Mike, let me take it away. I am by no means an expert across the whole ecosystem of what is now Fabric, because we've gone from analytics to all of the engines and

1:26 conglomerating them together. However, this one struck me as a good enhancement. As of the 26th, Steve Howard had a post announcing data warehouse SKU guardrails for burstable capacity. Now, there's an interesting follow-up question to this, which I'll get to. First and foremost, what is the SKU guardrail? Up to this point we've had the

1:57 option of saying, hey, I want you to expand my capacity units of consumption for those times where I might have a spike in my workload. Fine, that's great, but cost is a concern, right? Then there's this smoothing: they'll smooth it across 24 hours, as opposed to billing it in one chunk that you pay for right away. Okay, but this is now a third piece to that puzzle

2:27 where we actually have an option now to say, I want to set a guardrail, and this introduces a throttle to that burstable capacity. The example they use in the article is an F8 where the workload runtime is 3 hours without any burstable capacity; if you implemented burstable capacity you'd be done in five minutes, because it used 294 CUs instead of eight.

3:00 Hey, who doesn't like fast data? I do. Sure, I don't like having to answer to the finance department, but hey, five minutes is great. For those of us who do have to answer, and by the way, that's everybody: what happens when you put in the SKU guardrail and it uses bursting but caps it at, like, 49 CUs, so it's like 30 minutes? What I don't know, and I'd be interested in your guys' take on this:

3:30 in your guys’ take too on this is like is this specific to just the warehouse SKS is it across all the the capacity within fabric if I’m using a a data lake or it it’s interesting it’s a good feature it raises more questions about other parts of the ecosystem that I’m not completely familiar with but anyway I I I liked seeing it seeing it implemented so I think this is an interesting challenge but let’s talk about the chall Cheng that I think

4:00 Microsoft is trying to solve, and I think this is being solved across every tool, honestly. If you're loading things through dataflows you're going to have the same problem; if you're loading things through Spark engines and running a Spark job at one point in time, you're going to have the same issue. I think the core concept here is: look, we're trying to sell you software as a service, so you're going to get a paid amount of compute time, right? And everything, I think, as I've heard it explained to me from

4:31 think as I’ve heard explained to me from Microsoft everything you do in power. com is related to the amount of cores or the compute time it takes you to do something on a processor so some workloads are very easy to parallelize spark data warehousing they’re they’re built in a parallel way so they can run the same query across multiple compute engines to return your result quickly so I think you’re just playing the game here of in a normal day on a normal company with these compute P capacities

5:03 there are always these spikes, periods of time where there's high usage and then it goes away. I think what Microsoft is identifying is that most companies need to pay for enough capacity to handle that high-capacity spike, but then ramp back down to a reasonable cost for the rest of the time, because it doesn't make sense otherwise. And maybe this is where, in the on-prem world, you always had a hard upper limit, right?

5:33 On-prem, you had to buy the number of cores to make sure you could run your job. You'd figure, okay, what do we think our max load will be, and then buy 50% or 150% more so you could handle those large loads. And the problem was the inverse, right? Because you had to choose that capacity, you were always fighting how fast you could do stuff. Correct, so it was always a time issue: I'd love to do it faster, but I can't. Well, let's fix that. Okay, buy a bigger

6:04 let’s fix that okay buy bigger machine at least three weeks exactly so but in the meantime what can we delete yeah and exact so this is this is where I think the challenge or the interesting capability of cloud starts showing up because now Cloud can do it at scale and you can borrow machines when you want to and it’s it’s just like us developing things as a developer in Azure within a couple minutes you can just turn something on and boom it’s there so I think this is a lot of addressing that and I think what Microsoft’s trying to do with these new fabric SKS is say look whether you’re

6:36 fabric SKS is say look whether you’re using analysis

6:38 using Analysis Services, Power Query, whatever compute that thing's using: Spark now, the data warehouse, we now have like four, and now KQL with Kusto, I guess, is another compute engine in there that apparently is different from the other ones. I haven't played with Kusto too much, but now we have five compute engines that are all vying for CPU time. In my mind I'm thinking, oh yeah, this does make sense, because it should not matter what compute you need; Microsoft can launch any one of them in real time with

7:09 automation, and they just need to track how long each one of those things runs doing your work. That's where the F SKUs and the CU capacity stuff come from. Anyway, that's my thought. I feel like in my mind I have to go back to basics: let's first think fundamentally about what challenge they're trying to solve. You're buying time on compute capacity cores, and they're trying to handle spikes of usage at certain points in time. Yeah, well, but not only

7:40 that. The way you're describing the usage of compute, the engines underneath may be different, right? We're now combining multiple different engines, and it shouldn't matter from our perspective; we don't need to understand the complexity of, okay, now we're buying this thing or this license. Otherwise we'd have the reverse problem: do you have an F SKU for the lake, or an F SKU for the warehouse, or an F SKU for Power BI? No, you'd need all three, and now you need combinations of six, 12, 19.

8:11 No, man, you need one. Just one. Yes, so that's appreciated, because in the past it wasn't like that. No, it was not. And the bursting and the smoothing are only for the SQL endpoint and the warehouse, so I don't think it affects dataflows or pipelines you'd be doing; this only has to do with the dataset and the warehouse. Well, at least from what I read. There's the nuance for the follow-up of,

8:42 there’s the Nuance for the followup of like yes I’d like to see this everywhere right like start say setting up like how how much can you set your so I think your your question there is very valid Seth right talking about the bursting capability right we do want bursting to take over but I don’t want to sacrifice so much from the bursting that it starts killing other things right the way it’s been described to me with the the smoothing inside powerbi as they think about that compute if you if you burst for a short

9:12 period of time, they'll let you have it, and then they borrow from your future core capacity, your core time. So the number of cores you have in the future, they'll borrow from that up to a point. After like 30 minutes they'll say, okay, wait a minute, you've been bursting for too long; this is no longer a burst, this is now a job that is running. At some point they start slowing you down, and if you continue with heavy usage they start failing things. It makes sense, right? They're basically

9:42 sense right so they’re they’re basically saying you are able to borrow a future capacity that you have that you may or may not be consuming and then the the pattern is you get a short period of time where you get a lot of compute all at once and then you can start throttling that to back and then eventually it starts failing which kind eventually it starts failing which makes sense right it’s you’re over of makes sense right it’s you’re over consuming assuming the system eventually it says sorry you can’t have that you’re done yep and that way you to start look at the system and figuring out what’s wrong what are you doing in the compute to start abusing that

10:13 stuff. Anyway, I'm not sure how this is going to fit. To your point earlier, Seth, about this whole article: it makes sense to allow these computes to stretch out over time. If you're doing batch-job processing overnight, in the evenings when people are not on the system, it totally makes sense. Don't use all your capacity at once; allow it to burst with some guardrails that say, look, we're not willing to let you run this thing wild and use

10:43 everything in five minutes; instead, let the job take longer, because there's really no urgent demand to get the data out right now. Those batch-job processing things, it's okay, you can plan around that a bit. So I think that's good from this perspective: the guardrails make sense so you don't overconsume too quickly and cause failures or problems in the short term. That's how I see it. Or inflate your costs to levels that you

11:14 didn't want. Well, but are you, though? This is where I'm wondering, because in the Spark world and Databricks world it just keeps eating until it's done. You basically set a maximum number of machines and say, look, you can have no more than eight, but if the job runs for two days it will run at eight machines for two days, and your cost directly corresponds to how long those eight machines were running. Fabric's different, though, because I'm already buying this

11:45 though because I’m already buying this set amount of Cu or capacity up front what I’m saying like so like to me this is not a spend issue this is more of a oh it’s not more oh yeah it’s a really good point cuz I the way I read the article and that that’s just just my point is like this is not is not autoscaling it’s performance probably Performance Based yeah so it’s like okay so this is where I think maybe this to me this one feels more like Hey we’re trying if if I want to let you again I’m being very theoretical

12:16 you again I’m being very theoretical here in nature right let’s just imagine you have a SQL server or a SQL server list or the warehouse job or a SQL serverless job running what if that thing’s running at the same time a spark job ramps up what if that runs the the same time someone else is consuming an inordinate amount of capacity for whatever reason right so now you have again we we talked ear about there’s five different compute engines that are working what if three of them start really consuming very hard what about the other two what did they do do you do

12:46 the other two what did they do do you do you start getting failures in powerbi because someone’s running a heavy SQL job I’m gonna have to dig into this more I don’t I don’t think you’re right I think there’s settings that it’s like yes here’s the base level skew but if you’re going to use bursting are you’re going to use these things things you’re you’re going to pay more I have to I have to check that yeah I think I think that I think what I’m referring to though is if you look at the in the middle of the article it talks about the smoothing portion right there’s another fabric

13:13 portion right there’s another fabric capacities blog posted earlier this month around talking what is the smoothing that’s happening and I think that’s really what I’m speaking more towards is is that Fe feature there as well anyways really good topic I think we’re going to talk more about this in the future because there’s a lot going on here and potentially more thoughts around how the smoothing stuff will work I’ve just got to get my head around it I I’m not quite sure i% understand it yet why we’re talking about smoothing when I buy a dedicated capacity so I got to think about that a bit

13:43 more. The last thing I would say is make sure you're using the capacity app, because I know they're updating that all the time. They are; they're pushing a lot of apps, and that's on AppSource. So the Capacity Metrics app, and you're talking specifically, Tommy, about Fabric, right? The Microsoft Fabric Capacity Metrics app, yeah. Let me see if I can go get that, and I'll make sure I put it in the description here as well. With that, that sounds like a good transition to our main topic today. In our main topic today I think we're going to talk through

14:14 our main couple of features, maybe one or two features here. Let's talk about some features in Fabric today that are underutilized. Tommy's made us a nice little list of all the features we can look at here inside the Fabric environment; we'll just randomly pick out features that we like. With that, Tommy, pick out a feature from Fabric that you think is underrated and we should probably be using more. Yeah, let me just set this

14:45 up too. Go ahead. Yeah, Kurt had a great article that we're probably going to do an episode on: do you feel overwhelmed by Fabric, with so many bells and whistles and features? They just came out last week, I think, with the October updates, and there are always like a thousand things, whether it's a new product or little enhancements to each thing. Just from memory, without even looking, I think we have about 25 usable products, and I'm not trying to extend that, just

15:18 like, from Data Wrangler, data warehouse, KQL databases, Semantic Link that just came out, notebooks, Delta tables. All these things are being used, but for myself, thinking about this question: what am I either not focusing on now, or what do I think a lot of organizations aren't? That's what I'm going off of. I'm going to start with one and just give the background, and I really think I'm not hearing enough about how cool it is: it's the OneLake File Explorer for

15:50 is it’s the one Lake File Explorer for Windows that’s a good point and just for the quick give us some context what is this one Lake file explorer and what where do where do you get it and what you what is this thing you’re talking about Tommy let’s get some context on that one so the one yeah the one like file explorer just simply is a Windows application you can get it from the Windows store I believe or obviously from going to the D Lake in powerbi for fabric and what allows you to do is sync all of your lake houses to your local machine just in

16:22 the very same way that OneDrive does with SharePoint when you have it synced. What's really awesome about this is you can add any type of file: images, text, Excel files, and it automatically gets synced back to the lakehouse, which is incredible when you think about the different workloads for teams, especially if we're dealing more with the business, where it's like, hey, can you just upload that Excel file, rather than us having to go to SharePoint to try to grab some

16:52 document library. It can be organized and readily available for us in our lakehouse, and we can view it. That's the first part of this; I've got another Windows-specific one, but I think just having that alone and getting people to adopt it can be so incredible. And you can also see all your files too. You're saying the word "files" here, but let's add some context on what this looks like inside Fabric. In Fabric, when you create the lakehouse, you get two

17:22 folders: one called Files and one called Tables. Correct, yes. I don't know if you can see the Tables, though. No, go ahead, Seth. This goes back, I think, to our discussion around what OneLake is, and you keep hammering that it's a lakehouse. This is the other part of that, where, just like you have your experience syncing to SharePoint or syncing to your OneDrive, it's one OneLake, right?

17:53 one drive it’s it’s one one Lake right so within Windows Explorer these are all different file types that are stored there it’s it’s not Delta tables ah so this is where this is where I want to do this is why I want to dig this I I get it from a business I I I like this feature too Tommy and I’m glad you brought it up because I totally just because once you set it up I if we talk about lowering the bar for a business user to engage with one Lake like this this is that bar like it’s

18:23 just, okay, this is a thing in my Windows Explorer, just like I've interacted with all of my files in the past; now I can upload everything to this one location, one place for everything. What I'm going to get blasted on is, ah, but is it a CSV anymore, or is it a Delta table now? It may be; I don't know how it gets converted. So I want to point out a couple of things here. I love the point you're bringing out, Tommy, I think that's awesome. I do think this is

18:53 think that’s awesome I do think this is probably an underrated feature and really to your point Seth it is one drive for your files in the lake so the equivalent lens of what Microsoft provides to you Microsoft provides you when you go to the the one lake or The Lakehouse inside car. com you see that there is a files and a f a a files and a table folder in that root area when you open your one Lake for one drive or the one Lake whatever the thing file explorer it gives you both it gives you

19:24 both the Files area, so you can put your CSVs and your text files there and easily load things up, so it's acting just like a OneDrive experience; but it also gives you the Tables experience, so you also see all of the Parquet files and all the tables that are there as well. So it's literally just like blob storage, just like a blob storage account for all your... yeah, it's Storage Explorer. Yeah, it is, it's literally, yeah, it's

19:49 yeah it is it’s literally yeah it’s storage Explorer like let me see my raw stuff y yep but it’s nice because there’s really no other way that you can like so if you think about the equivalent when you use other tool tools non fabric related you can you can always go in and like look at the file structure like if you’re doing data bricks you can say oh here’s the table here’s the things inside this area here’s you can go see the physical files well you couldn’t do that without this one Lake file explorer there’s there’s not an easy you can’t go to a web UI

20:19 not an easy you can’t go to a web UI like you can in like Azure you can’t go to a blob Lake storage and click on file explorer there and go find your files and see them all there you have to use this tool from what I’m aware of to you to go interrogate and look at those files what I do find interesting is the the caveat and I just ripped over to the limitations real quick is users can’t update office files so if you load up an xlsx or pptx or a docx you can’t update them oh not probably not in the same way

20:49 right? It's probably pure file storage and replacement, as opposed to the experience you have on OneDrive where you're opening in a browser, changing things locally, and they auto-sync, etc. I wonder how that's working in that space. Tommy, are you sure about that? Because I'm looking at a few things. I'm sure about what I'm reading. What's the limitation? It says users can't update Office files such as xlsx, PowerPoint, docx.

21:21 Okay, but CSV files you probably could. All right, but it's not meant to be a place for PowerPoints anyway, right? Well, everything's a PowerPoint for everything Microsoft. But let's talk about this for a second, because in the same way that I was just like, oh yeah, all in, I'm a business user, I'm familiar with this experience: that is going to be a significant difference if you're used to opening up

21:52 that Excel file, making changes, and saving it in a OneDrive experience where it just automatically saves. Is that experience the same? Can I pop open the Excel file and save it back? It certainly seems like I can't update those files, so am I updating locally and then replacing the file out there? Well, okay, first off, I always turn autosave off on all my products. Yeah,

22:24 because Tommy's still old school. It's a very old-school thing, just to make sure; I don't know why. Second off, we have OneDrive already, so this is for data. This is meant for your data, for people to upload datasets; OneDrive is meant for all the other things people do. And again, there's a lot of stuff here that people probably don't need to see. They've really updated this: I can see every dataflow artifact as a Parquet

22:55 see every data flow artifact as a paret file like the backup you can do I can see all the warehouse definitions everything in the Lakehouse the demos that we did for baseball I have the CSV files I uploaded and everything else has been translated whether it’s a data flow Lakehouse or SQL on point so it is storage explorer that I don’t know why Azure still doesn’t have a way to sync on your computer but now you have that access to finally easily get there and then view it online as online as well so I I think I think this is a good

23:25 well so I I think I think this is a good feature I think it would be very useful here I just get so nervous when anyone thinks oh yeah I’m just going to build a bunch of Excel files and they’ll be my source for my data like just this just rubs everything that I’m like my core of everything around the wrong way I’m like this is just not the right way to think about doing data engineering things in the future here but I could definitely see the use case and the value here as well so I think I think it’s a pretty solid feature by itself not sure I would super endorse it for loading and

23:57 storage yet, but I do like the fact that I can see it in there and go observe what's happening. It's also great for ad hoc stuff too. Do you go into Storage Explorer and manufacture and build all your...? No, it's just like, do my files exist today, did they process, did something go wrong. Exactly. Occasionally it's like, oh, I need to load up this CSV because I need to do some temp work, and whatever the case may be, it's super

24:28 useful in that regard. Yeah, I think at the end of the day most of this feature is just supposed to be seamless; it's supposed to act like blob storage, basically. At the end of the day it's a new API layer called OneLake that acts almost the same as typical blob storage, surfaced in Windows Explorer, so it's not a separate install. I think I'd agree with that one. And I do think they just

24:59 one and I do think I think they just mentioned in the bottom of the article I think you think you can I think it says something about the idea you should be able to online view it apparently they said I’m looking at the bottom of the article here Tommy that I find interesting open open option to open workspace as an item on the web portal and actually go to one Lake and there’s now a right click on the workspace like one l view workspace online it opens the workspace browser in the fabric portal so there’s like it’s supposed to like help you get back and forth between where

25:30 back and forth between where are the files you store locally and then hopping into that same one L inside fabric as well so it’s it’s interesting to like see how you’re able to Mo move back and forth between desktop and the files there as well yeah maybe I misspoke like it shows that there’s this edit file experience that if you’re trying to open it you can change it locally and then save and it syncs so I don’t I don’t quite know what that limitation was talking about it seems contradictory based on the documentation it’s like the browser users can’t update office files and then maybe they’re

26:01 office files and then maybe they’re talking like the F like Tommy’s point right it doesn’t Auto synchronize your file like if you had a word or a PowerPoint open it’s not Auto saving back to that location for you you have to physically shut down the file and let the one I’m sorry I’m sorry it does explicitly call it out down there in edit files that it doesn’t doesn’t support updating those office files okay n there’s consistency s okay move on other other things of

26:27 okay move on other other things of fabric that are being underutilized I’ll I’ll throw one here into the mixer that I think is I think is pretty underutilized in general and and this might be more of a Nuance things just because I like the experience of it I think data Wrangler inside the power cor experience or inside the spark notebook experience sorry is something that’s underutilized I’ve seen a couple people blogging about it every so often and I actually hear a lot of people say well I already from the from the hardcore developers they already know how to write python and so they’re like oh I’m

26:58 write python and so they’re like oh I’m not really interested in writing my own pandas functions here I actually know how to write it and they just handw write it themselves but I think for a lot of new users getting into the spark experience it’ll be very helpful for them to have code autogenerated for you so I think that’s that is a very neat feature I like the idea that it can do a lot of data cleaning for you without having to UI around it like I just think it’s it feels very power quesque to me and I think it’s a a great barrier removal for anyone starting to get into

27:29 removal for anyone starting to get into that spark notebook area so I probably give my vote here for the data Wrangler experience any thought have you guys used that one I know Tommy you’ve played around with it a little bit any thoughts around that one from your side I I really do I like it but honestly like it’s a really neat feature but it’s I don’t see it really being part of like the pure Dev side like it’s nice to easily summarize something I still think there’s a lot more features that are needed there and like you you and I were

27:59 doing this, and I've tried a few other examples. You're like, okay, I can do some simple things, yes, but now I have to write some code, and now I'm completely taken out of it; I have no idea where that language is coming from. Yeah, it feels like it needs something else inside that editor experience: a custom step where you're able to add your own custom code. It would be really useful if they were able to produce that, and maybe it's already there. Tommy, have they incorporated some AI

28:31 That would be really useful, because you could describe what you want to do with the data, and it could try to generate PySpark or pandas code for you and say, 'Here's the step we think you'd want to use.' That would be cool. Not yet, and I think that's probably coming once Copilot in notebooks comes out, which is probably another thing we should talk about eventually. Yeah, Copilot all the things.

29:02 It's everywhere now; they're really pushing the Copilot pieces. This might be slightly off topic, but one feature that I'm extremely excited about implementing at a larger scale is Shortcuts, and I think people should know about that too, since we're talking about features people may or may not know about.

29:34 Shortcuts, which they also implemented for all the Dataverse artifacts not long ago (Dynamics 365 and so on), essentially allow you, through OneLake, to create a connection to a source of data without extracting it and pulling it into OneLake. And this goes cross-cloud, which is what I needed.

30:05 That's multi-tenant, touching Delta Parquet files in different platforms, Dataverse, and so on. That's going to be a huge win for how I have to work and operate. It removes not only some of the complexity for organizations around where their data sits, it also saves money on data movement charges.

30:36 One of the things I was thinking about recently (I still have to dive into the particulars) is that it also seems to address GDPR scenarios where data can't reside in certain regions: I can maintain continuity across my analytical platforms and access the data without actually moving or extracting it from places where regulations now restrict me from doing so. It's a phenomenal feature that people should know about.
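For teams that want to automate this rather than click through the UI, Fabric also exposes a REST endpoint for creating shortcuts. The sketch below only builds the request body for an S3 shortcut; the payload shape is my reading of the public Fabric REST API and should be verified against current documentation, and every ID, name, and URL here is a hypothetical placeholder.

```python
import json

# Hypothetical placeholders; real values come from your Fabric tenant.
WORKSPACE_ID = "00000000-0000-0000-0000-000000000001"
LAKEHOUSE_ID = "00000000-0000-0000-0000-000000000002"

def s3_shortcut_payload(name: str, connection_id: str,
                        bucket_url: str, subpath: str) -> dict:
    # Body for POST .../workspaces/{WORKSPACE_ID}/items/{LAKEHOUSE_ID}/shortcuts
    return {
        "path": "Tables",    # create the shortcut in the Tables area
        "name": name,        # name as it will appear in the lakehouse
        "target": {
            "amazonS3": {
                "connectionId": connection_id,
                "location": bucket_url,
                "subpath": subpath,
            }
        },
    }

payload = s3_shortcut_payload(
    "orders_delta", "my-connection-guid",
    "https://my-bucket.s3.amazonaws.com", "/delta/orders",
)
print(json.dumps(payload, indent=2))
```

Once the shortcut exists, the external Delta data shows up as a regular table in the lakehouse, without a copy or a scheduled refresh.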

31:06 Definitely play with it when implementing your solutions within OneLake in Fabric. I think this is a huge adoption feature as well. Coming from the world of already making lots of Delta tables, I was very impressed with being able to create Delta tables outside of Fabric in other tools that I like. Fabric is definitely coming a long way and is pulling so many tools together, but I think many people would agree that, in the space of data engineering,

31:36 there are just certain tools that are better than Fabric at this point. Fabric hasn't caught up yet; it's brand new. The benefit of that, though, is that we can continue to build and innovate in those toolsets, knowing we can leverage those artifacts in Fabric at a later date when we need to productionize. So let me give you a really common Lakehouse pattern that works really well; I have many installs of this already deployed, and I'm deploying more of them.

32:08 You use Azure Synapse to run a pipeline. The pipeline connects to the data you care about (an API, structured tables, it doesn't matter), pulls the data in, and makes your initial tables in your lakehouse. You use another engine, like Databricks, to pick up that data, transform it, shape it, and get it ready to go. Then you go back to Synapse and define a view of the data, a SQL Server view, and you can hit that view with Power BI.

32:38 So basically you have Synapse doing a lot of the collecting of the data and then serving it back to Power BI, and Power BI just bolts on really easily with a SQL endpoint. Everything's happy. Now flip that mindset to Fabric: you don't need the pipeline to live in a separate tool, it can just be part of your Fabric environment, and you can do your processing of that data there, because now it's just an endpoint. A OneLake storage account works like Data Lake Storage Gen2.

33:01 So you can still manipulate that data just like you normally would, and you don't even have to put the lakehouse data in OneLake; you could have a separate Data Lake Gen2 storage account somewhere else in your environment. Once you build those tables, you can ditch the whole SQL endpoint if you want to (you can still use it if you need it) and go directly, with a shortcut, right back into your Power BI datasets. Now there's no refresh; the dataset knows how to read that Parquet file immediately.
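To make the 'OneLake works like Data Lake Gen2' point concrete: OneLake exposes ADLS Gen2-style abfss:// paths through the onelake.dfs.fabric.microsoft.com endpoint, so existing Spark or Databricks jobs can point at a lakehouse by swapping the endpoint. A minimal sketch; the workspace, lakehouse, and table names are hypothetical.

```python
def onelake_abfss_path(workspace: str, lakehouse: str, relative_path: str) -> str:
    # OneLake speaks the same abfss:// addressing scheme as ADLS Gen2,
    # with the workspace as the "container" and the lakehouse as a folder.
    return (
        f"abfss://{workspace}@onelake.dfs.fabric.microsoft.com/"
        f"{lakehouse}.Lakehouse/{relative_path.lstrip('/')}"
    )

uri = onelake_abfss_path("SalesWorkspace", "SalesLakehouse", "Tables/orders")
print(uri)
# In a Spark session the same URI works like any ADLS Gen2 location, e.g.:
# df = spark.read.format("delta").load(uri)
```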

33:31 There are a lot of advantages to this pattern, and I'm really excited about it. I'm working on some projects right now to test out those capabilities, and I'm finding very good success with it already. So I think Shortcuts is another big win that people aren't using. Any other items, Tommy, that you'd want to pick out here? Yeah, the one thing with Shortcuts: is it still underrated when it's

34:02 probably the most important feature? Hey, I made a caveat: I'm not saying it's the most underrated, I'm saying that if we're talking to an audience who may not know a lot about Fabric, it's one of those things you absolutely want to be aware of. No, I think everyone is aware of it. Yes, it's probably one of the most valued features, but I don't know if it's the most underrated. Sorry, Tommy, I'm sorry that I did not pick

34:33 an underrated feature. Do you have any actual underrated features you'd like to share with the audience before we have to close? I'm going to go back to your comment, though, Tommy: people may be aware of the feature, but are they actually testing it out, and do they really understand the impact of what a shortcut can do? I agree with Seth on that part. A lot of companies have data sitting in AWS somewhere, and there's another part of this:

35:06 people may be aware of it, but I don't think they're aware of how to actually implement it or work it into a current workstream or workflow they have today. So from that perspective I'd give it a bit more credit; I do think it's underrated. Many people look at Fabric and go, 'Well, I have to build everything there now, throw away all my old work, and move over to Fabric,' and that's not the case. It's a little more nuanced: you can selectively pick specific projects and bring over the data that you care about.

35:36 So I still think Shortcuts is an underrated feature, simply because I don't think people have really tapped into its full potential yet, or understood how impactful it will be and potentially how much time it will save them, because they don't have to rebuild anything; they can just connect to their existing stuff. If you're already building Delta tables somewhere else, shortcuts are amazing, because now you just bolt into what you've already done. If you've done two years of engineering work on tables, great,

36:07 just reuse it. That's huge, and I don't think people really grasp that yet. I think you're right. I know it's one of the most important features, and it's not hidden, but nearly 100% of lakehouses should have some type of shortcut, or at least be designed with them in mind. Honestly, the way shortcuts factor into the architecture of how you build your lakehouses is completely

36:37 critical; it's not just a nice side gig, if that makes sense. Shortcuts dictate how we're going to frame and build our lakehouses. Yes, though I think it's less about how you build moving forward and more about integrating with what you've already built. To me the scenario is: how quickly can I build the data engineering stuff that I like, and how quickly can I get that into Power BI datasets? So this is a really good hook for the people or teams

37:08 that already have data engineering in place. And I'll say it again, as I have since Fabric came out: Fabric is amazing for business users, because you're getting all these new tools you've never had before. Fabric is okay for data engineers and data scientists, because they already have tools, and what they're getting with Fabric is a little less capable than what they already have. So I feel like eventually Fabric will get there, where data scientists and data engineers

37:38 will feel just as comfortable there. The gap will close, and you'll say, 'Wow, there are all these extra features inside Fabric that I want to use, better than what I had previously.' The tighter they can pull together that integration between reporting, data engineering, and data science, the better, and the more those teams will want to work together, because you really do have different teams that need the same stuff over and over again. All right, with that, I think we've burned through a perfectly good hour of your time, maybe a little less than an hour this time.

38:08 This is a bit of a shorter episode, but we do want to say thank you. We hope you've liked our recommendations for things that are underrated inside Fabric, particularly Shortcuts, Data Wrangler, and the OneLake File Explorer: our picks for underrated features that you should spend a little more time learning about and trying to use. With that, we only ask: if you liked this episode, please recommend it to somebody else and let them know you found value in it. It really helps get the word out and lets other people learn more about Fabric, Power BI, and the data ecosystem that

38:38 we love so dearly. Tommy, where else can people find the podcast? You can find the podcast anywhere it's available: Apple, Spotify, or wherever you get your podcasts. Make sure to subscribe and leave a rating; it helps us out a ton. If you have a question, an idea, or a topic you want us to talk about in a future episode, head over to powerbi.tips/podcast. Awesome. Thank you all very much, and we'll see you next time.


Thank You

Thanks for listening to the Explicit Measures Podcast. If you enjoyed the episode, share it with a colleague and help someone else level up their Fabric skills.
