PowerBI.tips

Fabric Decision Guide – Ep. 222

Microsoft Fabric gives you multiple ways to ingest and transform data (pipelines, dataflows, Spark) and multiple places to land it (lakehouse, warehouse). The problem usually isn’t that any one option is “wrong”—it’s that teams mix patterns without a default, and the platform starts to feel unpredictable.

In Ep. 222, Mike, Tommy, and Seth use Microsoft’s Fabric Decision Guides as a structured way to make those calls. They talk through the decision points that matter in real projects, why “pick a default and standardize it” beats endless tool debates, and how governance/process has to keep up as workloads scale.

News & Announcements

Main Discussion

Topic: Fabric decision-making (ingestion + storage)

The decision guides are most useful when you treat them as guardrails, not a one-time “architecture answer.” They force clarity on the questions teams usually dodge: are you optimizing for low-code transformations or for maximum flexibility? Do you want SQL-first consumption by default, or do you want a lake-first pattern that supports broader engineering workflows?

A key theme in this episode is standardization. Once you pick your default patterns, you can put governance around them: naming conventions, promotion paths, and clear handoffs between the people building ingestion and the people consuming data downstream.
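To make that concrete, here is a minimal sketch of what "governance around naming conventions" can look like in practice — the `<env>_<domain>_<item>` pattern and the `validate_name` helper are hypothetical examples, not something prescribed by the episode or by Fabric:

```python
import re

# Hypothetical convention: environment prefix, business domain, item name,
# e.g. "prd_sales_orders" or "dev_finance_invoices".
NAME_PATTERN = re.compile(r"^(dev|tst|prd)_[a-z]+_[a-z0-9_]+$")

def validate_name(item_name: str) -> bool:
    """Return True if a workspace item follows the agreed naming convention."""
    return NAME_PATTERN.fullmatch(item_name) is not None

# A periodic governance sweep could flag items that drifted from the standard.
items = ["prd_sales_orders", "Copy of Dataflow (3)", "dev_finance_invoices"]
violations = [name for name in items if not validate_name(name)]
print(violations)  # → ['Copy of Dataflow (3)']
```

The point isn't the specific regex — it's that once a default is written down, a check like this can run on a schedule and keep sprawl visible.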

Key takeaways:

  • Pick a default ingestion pattern (pipeline, dataflow, or Spark) based on transformation needs and operability—not what’s newest.
  • Be honest about your team’s skills; the “best” tool is the one you can run repeatedly without heroics.
  • Separate experimentation from production early so ad-hoc work doesn’t become the permanent pipeline.
  • Choose a default landing zone (warehouse-first or lakehouse-first), then document when exceptions are allowed.
  • Treat governance as a feature: ownership, naming, access, and a cleanup cadence prevent sprawl.
  • Plan for handoffs: data quality and business process fixes need an explicit owner, not implied blame.
  • Use the decision guides to align on tradeoffs, then validate those choices against your actual workloads.
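One way to "pick a default and standardize it" is to encode the guide's questions as a tiny decision function the team agrees on. This sketch is illustrative only — the real decision guides weigh more factors (volume, sources, destinations, skills) than the two boolean questions assumed here:

```python
def pick_ingestion_tool(needs_transformation: bool,
                        team_writes_code: bool) -> str:
    """Illustrative default: map two decision-guide questions to a tool.

    Simplified from the guide's matrix; encodes the episode's
    'pick a default' advice as code rather than a per-project debate.
    """
    if not needs_transformation:
        return "pipeline copy activity"   # pure data movement, no reshaping
    if team_writes_code:
        return "spark notebook"           # code-first transformations
    return "dataflow gen 2"               # low-code Power Query transformations

print(pick_ingestion_tool(needs_transformation=False, team_writes_code=False))
# → pipeline copy activity
```

Teams that prefer prose can capture the same rules in a one-page decision record; the value is that exceptions become visible deviations from a written default.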

Looking Forward

Pick a Fabric default (for ingestion and for landing), write it down, and apply it to the next project so teams can move faster with fewer architecture debates.
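"Write it down" can be as lightweight as a machine-readable decision record that new projects are checked against. The field names and values below are made up for illustration — substitute whatever defaults your team actually picks:

```python
# Hypothetical team defaults, recorded once so architecture debates
# happen one time instead of per-project.
FABRIC_DEFAULTS = {
    "ingestion": "dataflow gen 2",        # this team's low-code default
    "landing_zone": "lakehouse",          # lake-first pattern
    "exceptions_require": "architecture review",
}

def conforms(project: dict) -> bool:
    """Check a project's choices against the recorded defaults."""
    return all(project.get(key) == value
               for key, value in FABRIC_DEFAULTS.items()
               if key != "exceptions_require")

print(conforms({"ingestion": "dataflow gen 2", "landing_zone": "lakehouse"}))
# → True
```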

Episode Transcript

0:31 good morning everyone welcome back to the explicit measures podcast with Tommy Seth and Mike hello how’s it going hello Mike hello we should make really formal intros at some point like good day sir day sir I don’t know if that’s formal or if that’s not something not formal really the only Pockets we we have the intro song but what we what we need is some voice actor to start us off with like this is the explicit version I keep

1:02 like this is the explicit version I keep telling you I can do this I always wanted Walter White to introduce me so exactly exactly say my name [Music] so I I think we I think Tommy found this morning Tommy just like literally moments before the podcast starts this morning Tommy’s like hey did you see this thing in the Microsoft blog I’m like no way so I think Microsoft has really flubbed up they’ve made the

1:33 really flubbed up they’ve made the biggest mistake they ever have and they included our video on one of their blog posts around fabric so I’m trying to find it Tommy where’s the oh it’s in the chat it’s in the notes no no on the page if you gotta scroll yeah we’re at the bottom right we’re at the very bottom oh Scrolls no we’re like four fifth one down we’re top five baby can I get a link here can I get a link yeah I need a link it’s everywhere it’s in them one no it’s in our private chat no no the page I

2:05 in our private chat no no the page I need to find I can’t find us on the page what’s it say the words say the words what does it link for learn hey there it is hello Microsoft fabric okay so they actually did put the episode in there all right we’re literally so 218. there you go got it I don’t know why they didn’t put us first because they probably they actually want people to actually talk about the product really let’s be real this is a pretty comprehensive list there’s actually a lot of really good content this is I think this is a very smart Microsoft blog should be pushing all the people that make content for

Microsoft like this makes sense yeah pretty cool there’s definitely a lot smarter people in there than us that’s for sure yeah so we are by far probably the most entertaining but I don’t know about that I think you’re stretching it pretty far excellent that was fun to see your name pop up on there it’s kind of neat I made the news yeah I made my mom on

the news I think I know again Italian mothers if I show that to my mom she’s like oh my gosh not that she has any idea what’s going on that’s what pretty much like 90% of my friends say like oh yeah cool you got a podcast what’s it about I say what it is like yeah we watched like five seconds of it when it came up on my Facebook feed and they’re like I don’t have a clue you said a whole bunch of words I don’t even know what they mean like wow all right I don’t think I said this on the podcast I caught my wife listening no

3:36 podcast I caught my wife listening no way so and it’s because you’re working too hard and she misses you I think so maybe I can just listen to his voice on the podcast we’re going on on a trip and like her phone will always connect it’s on the van it will always connect to her phone first my my wife does the same thing in our van you get in and I turn this thing I turn the fan on immediately whatever podcast they’re listening to just comes right on the radio and I heard like we’re 15 minutes into one I was like hey what’s this and then she was like I don’t listen and

4:06 then she was like I don’t listen and then she like said something oh when you guys said I’m like yeah oh oh [Laughter] so maybe we’re gonna get another Microsoft MVP all right Miss pulia you’ve been caught I’ve been caught listening to the podcast miss you too be careful we’re gonna yeah he’ll put you to work so since you’ve been listening to the podcast I got about three reports I got to get you to get done here so don’t work let’s jump into our main topic for today

4:36 let’s jump into our main topic for today we don’t have a whole bunch of intros other than the blog post so Tommy I think you dropped that in the oh actually I probably should snag it here I’ll put it in the chat window for those of you who haven’t seen it there’s actually a ton of MVPs who have made content or built content around Microsoft fabric Microsoft went through and just collected a good long list I think there’s probably what 25 30 maybe even 40 just different videos from other MVPs talking about communicating about thinking about what fabric is doing and I thought it was

a very nice post though thank you Microsoft now that everyone has a free trial and all the tenants are changing at this point a question came up the other day just real quick before we jump off of fabric into our next topic does fabric so we talk about this OneLake thing and OneLake is attached to what fabric is doing are you guys aware of anything around when you go to a workspace you have a

so you can make a fabric workspace and then in that fabric workspace there are settings for the workspace one of those settings is the ability to add an Azure storage account yeah does OneLake mess with that can you I think you can still do it I haven’t experimented with this yet have you guys played around with this at all I don’t think it breaks it but it is still the same settings where you can’t have any data flows or really you have to set that up first and I don’t think it messes with that what I’ve seen at

least so does it become so my question then would be is if you build so for example right once you make a Microsoft fabric workspace do you have the option of picking data flows Gen 1 which would write to the blob storage account or data flows Gen 2 which would write to the lakehouse or the OneLake I think you see what I’m saying I think any dataflow will push to the OneLake they’ll push to both but because that’s

a different setting and the workspace setting this is what I was confused about yeah well no the workspace setting pushes everything to your lakehouse or when you have fabric you already have the blob storage so I feel like dataflow Gen 1 would write to the Azure linked storage account and if you use dataflow Gen 2 you can pick the destination right so that’s the whole idea of like dataflow Gen 2 and choosing is awesome because I can pick any

destination I want or have multiple destinations if I want but I was just trying to wrap my head around someone asked a question I was like oh this is an interesting question I didn’t even think about that setting in the portal what does that do and does it require you I don’t know I just was confused I have to do some testing on this one I was curious if you guys had a workspace that did this or have been experimenting with this as well no I haven’t but at the same time I would imagine they’re

going to push you to migrate at some point like if it’s a fabric workspace or they’re just gonna move it because when is the last time you’ve ever seen them push things into multiple places especially if you’re gonna say this is now a fabric workspace my guess is that it just migrates over because it breaks the flow of everything that you’re setting up is going to be in OneLake they would have a workspace that

would support multiple different other data sources that just doesn’t make sense to me so let me I’m gonna throw down just a quick what I think is happening I’d be curious chat if you’re listening to this or chat if you’ve tested this let me know I believe there are now two versions of a data flow if you go into the workspace and this is again how I’m just trying to rationalize and wrap my head around what does a storage account linked to a workspace mean in the context of what a fabric environment

is doing right so if you link a storage account I think it’s still doing all the normal Power bi things there’s still a data flow gen 1 in there to get to a dataflow Gen 1 you click on the workspace you click the new button and then you move down to where it says you have to like click the show all button oh interesting it just changed my oh yeah okay you click show all and in the show all section you go all the way down to the section called Power bi and there’s a thing called data flow or it’s where the data marts the streaming that’s all

the traditional stuff lives then you go to the top and then there’s a data Factory or data engineering item that lets you so in Data Factory it says data flow Gen 2 preview so it shows you the options to pick dataflow Gen 2 or regular data flow and my assumption would be is regular data flow would just put all the data into the blob storage account dataflow Gen 2 will allow you to write to a destination right does that make sense yeah it does

but they’ll probably separate this then right because they’re not going to remove any functionality right you can’t just turn it off you’ll have a my guess is there’ll be a different like it’ll be a fabric workspace like if your workspace is supported by a fabric capacity then you’ll have the new experience and everything will go to OneLake and then if you’re in the other one hypothetically right well technically you’ll probably stay in and have separate data sources because that

that Azure linking to a workspace isn’t just for data flows it also is for backups and yeah correct and restores and things like that as well right but so again here comes my question right once you’ve linked a fabric workspace to your point Seth right why would I be putting backup files in another storage account I guess I probably want to put them in the One

Lake as well right hard cut over like the minute it goes GA that’s true they might change things the minute it goes GA that happens yeah right because like right now it’s tested out figure it out but I would say it’s probably going to get separated out by licensing yeah that’s true I did also notice something in here as well felt like I saw a note around fabric oh so many side notes other side note I’ve been playing with

the git integration on the workspace level yeah it’s pretty cool what they’re doing there yeah I’m very excited to see how this is going to build into like power bi desktop because there’s nothing there for desktop for working with individual gits at this point but we’re like making a change to a report or a data set in the service that looks awesome and I saw a tweet this morning from Mathias saying today the way that information is stored in git when connecting a workspace the .bim file is stored as it’s inside the

service but it sounds like Mathias was tweeting on Twitter saying TMDL yeah is coming to git so the whole model will be human readable here in a little bit as soon as they incorporate that so I’m guessing closer to GA they’re going to start lifting out TMDL and putting that in the git repos so then I can see a clear definition of the measures the descriptions I really like it it’s so easy with vs code too did you guys so is it my misconception

because I haven’t played with this yet but I was under the impression that this was going to be like an independent like we now have source control but I swear I read something yesterday that like I have to be in fabric to get this yes I believe git source control is a workspace setting that is a part of fabric it’s fabric only yes so it means that you have to get onto fabric in order to get source control for power bi yes correct I was just that’s where I

was just like well dang it where are you well however the pricing for fabric just came out and if you think git is important right no it’s not a pricing thing like it’s a how do I go from an old ecosystem how do I migrate to the brand new one oh is it so yeah my existing workspace reports okay yeah another thing that I

was actually exploring here and I got to again this is what happens when Microsoft releases new things particularly for MVPs MVPs jump in and go all right let’s break stuff like yeah how would this work yeah how can I make this thing as efficient as possible how can I push and pull this thing around so I’m also very curious to see what happens when hydrating something a fabric environment allows you to use the Direct Lake connection you can now pause

or not pause a workspace based on that Fabric Connection in my mind here what’s happening is when you’re using some process it doesn’t have to be spark notebooks inside the workspace you could use databricks or some other tool to write Delta tables into the OneLake so my assumption here is OneLake doesn’t turn off it doesn’t like go dormant when you pause your fabric SKU right it’s OneLake it’s a blob storage account and you pay

for storage of things separately than you do for everything else that you’re seeing you get a storage account but you either pay for that OneLake storage separately or it’s part of the Fabric SKU I think I don’t really understand that part either but that being said if that turns on and you have a OneLake endpoint well why not fill up that OneLake with another tool connect power bi with Direct Lake into those files and then just turn off fabric like yeah will they block that does that

block no I don’t think it blocks the scenario because in the same way like right now when you’re using and you brought up databricks right like correct your storage is technically ADLS Gen 2 or you can configure it that way right which you should yes but you should also be able to mount databricks into OneLake so OneLake has the same endpoint I agree okay but I think the Nuance is or what I have to test out is it was

my understanding that if databricks is creating that table yes it’s not going to be as performant as if the lakehouse was creating that table okay so I think what you’re referring to is the difference between Z-order and V-order it’s indexing directly yeah yes that’s a feature of so I was trying to wrap my head around this and I think what’s happening in my understanding is when you do a Z-order sort in databricks you can say I want

these three columns to be Z-order sorted so basically it groups similar objects together and tries to make bundles of those in the files below my understanding with a V-order for the VertiPaq engine I think that’s what V-order means is the V-order packing of the files is looking across multiple columns and each column is saying okay what’s the best grouping across all columns of data so it may not actually do like hey there’s a column that has a number or an index number in it like one to a thousand a thousand to two thousand something like that it may make

decisions to not sort or pick different columns because it’s trying to say based on the groupings that you’re going to get in the columnar data it’s trying to group those artifacts together in the most efficient way across multiple columns and I think that’s where the compression really comes from because then you’re not adding extra rows of data in the partitioning process does that make sense yeah and to go back I think within and I haven’t tested this because in order for me to

invest an insane amount of time in testing or yeah doing a lot of R&D around Fabric and how it butts up against my Enterprise systems at the moment I need RLS on Direct Lake like yeah it’s just not usable and if it’s not usable I’m not going to invest the time right now but to your point I think what it does do is open up a whole bunch of different paths of how you could potentially change up your ETL processes yes so yeah agreed so if Direct Lake

is my go-to like that’s what I want to do because I have a bunch of curated reporting in power bi yeah are there scenarios where I could take existing Enterprise platforms up to a certain ETL threshold yeah and then pick them up with lakehouse and the whole fabric implementation so that my final objects that I’m using just specifically for power bi where I want that Direct Lake experience and V-order whatever that I pick that up with a new tool yeah absolutely that’s a possibility yeah which is fantastic going off that so and well there’s so

much to digest still I was seeing from someone the other day with the Direct Lake or the direct import mode you’re still paying for that when it does in a sense refresh even though it’s still not interesting what do you mean by that Tommy yeah so there’s still some processing that’s going on it’s not necessarily because it’s not a live connection but someone was doing a test with that Direct Lake or the direct import mode and they turned on F2 because Satya sent out a tweet saying

hey everyone just turn on F2 see what you think for an hour yeah and someone was testing that I was like wow everything goes so fast with the Direct Lake but there’s a cost right so this is where even you’re using a cluster somewhere right yeah and maybe that’s the ongoing thing where technically this would be a direct query scenario where I would have to have a cluster running all the time in order to service any of the requests it’s likely I don’t know a cluster’s gonna be more expensive than 280 bucks or

whatever it is interesting that’s a really interesting scenario and I’m very interested in seeing how people play out with the direct query or the Direct Lake yes how that feeds into what you were just talking about Tommy yeah I think there’s let me see if I can find the source but the person was saying like well what we’re going to just do is when we do a refresh is turn everything on and then turn it off after and not the F2 but let me see

what I can find but they did find the direct cost from when they were actually refreshing the direct import but there is no refresh with Direct Lake it’s not a refresh but it’s not a live connection either there’s still some processing that’s going on well it has to Direct Lake basically what I understand is to your point Tommy there is no technical VertiPaq loading process I think that’s happening there what’s happening is the model that stores tables in memory is directly going down to the lake picking up the

partitions and reading those files just basically read a file and load it to memory like that’s all that it’s doing that’s the same thing your premium capacity is doing yeah if you have a premium capacity not a PPU so yeah it’s the same thing if you have a premium capacity but for a lot of organizations it’s just not visible like right now what I think is going to trip a lot of people up or still be very interesting to see is direct test cases where if we’re using Direct Lake you would probably start seeing how the cluster or whatever capacity

you’ve chosen get utilized for those resources and that’s also one of my concerns around like picking capacity across all these work streams or yes you’re right here or you could technically separate that out you could have a separate fabric capacity just for your power bi implementation can’t you or no no you couldn’t no and but the

difference even premium capacities are set costs unless you’re on embedded on a monthly basis right I’m paying five thousand ten thousand etc the fabric is in a sense the pay-as-you-go

it does both you can use fabric as an Azure SKU or you can buy it from Microsoft so to me it’s like how you buy it right so you’re right Tommy like if you buy premium via the Microsoft Office portal as an office product it becomes a set money here’s the cost but the idea is now you manage the capacity like you have to admin it yeah so if you run out of capacity and things start falling over you have to go buy a bigger SKU and the increments on that are much more

interesting because it just keeps doubling right so usually you don’t need like a whole other I don’t need double the amount of capacity I’m hitting the upper limit of my capacity I just need a little bit more which there’s not a really good method to say I need incrementally more compute as opposed to like double compute right that’s something I gotta dig into I’m gonna take that one because my big thing is how do I ensure a separation of capacity right because I can’t have

production reporting being impacted by somebody doing some large data manipulation okay on a capacity right this was one of my major feedbacks to Microsoft was that exact thing like look if I have a guy in the middle of the day or gal in the middle of the day running like select all tables or select all records in some notebook it’s gonna suck down all my computing power for that particular query to run I don’t want that especially if I’m doing a reload on a report that way correct so I want like

it needs to like power bi needs to be smart enough to reserve the compute capacity to continue to render my reports so like I want to have a set threshold that says okay limit this team or limit these people to not exceed this and make sure that everything else runs efficiently or there’s really no way to set with your capacity what resources you want to devote to engineering compared to let’s say just reporting like someone could yeah technically suck it all up doing something very stupid in a power

bi route yeah and there’s no way to hold it yeah and they did that a little bit with synapse to try to start giving you like hey here’s a SQL serverless thing and you can say 80 for this team and 20 for that team like they started letting you shift the capacity around to different teams so you weren’t overcharging something I just like the idea I’m just not sure I understand the controls as an admin how would I control it a little bit better or at least understand it’s going to be one of these things where everyone turns it on someone clicks buttons they

weren’t supposed to someone didn’t turn off what they shouldn’t have and that’s so yeah it’s supposed to do this automatically says Raphael yeah it’s completely SaaS and it completely does it automatically if you check this one box that says yeah scale it whenever you need to yeah like there’s all your costs yeah exactly well because I see all the scenarios now of people going in and they’ve turned this on and they’re going to use data flows Gen 2 for everything and just try to load everything initially the amount that workflow is going to be

such a cost but I was also having a conversation with a client that was this exact thing right hey I’ve got a handful of old data flows dataflow Gen 1 I’ve got a handful of models and the question was where do I go from here what’s the next step for me as a power bi developer what should I be focusing my attention on how do we continue to grow he literally was asking me what’s the grow-up story where do I get to the next step and I said look your next step right now is focus on learning synapse

focus on pipelines focus on other methods to get data to power bi and start thinking through about how you would build your Lake in Azure Data Lake Gen 2 I said but be aware Microsoft’s trying to release this fabric thing and fabric technically does all the things that you want and fabric gives you the added feature of Direct Lake and I still think Direct Lake still blows my mind on like why that feature exists it’s such a good idea that’s actually a

good segue that’s a great segue so let’s talk about our main topic of today it’s like we’ve done this before or something I don’t know the main topic for today we’ve been talking a lot about Fabric in the beginning here right so we’ve been talking about what about this feature what about that feature well this plays very well into our topic for today which is really a decision guide around fabric Microsoft has released two really good articles around how do you decide on is it a copy activity is it

a data flow do I go use spark and then how do I pick between this thing called the data warehouse and now the data lakehouse so I think this is going to be a very interesting conversation just because I think we have a lot of different opinions around this and this will be good to beat up some ideas here and think through like which of these things we should pick so Tommy I’ll let you pick which way you want to go first do you want to go down the fabric route of copy activity data flows and Spark or do you want to go down do we pick a data warehouse or a

lakehouse I’m gonna go with what’s closest to my heart if I get to pick so I’m gonna go with the data flow okay copy activity or Spark all right let’s do that one first so let’s focus our attention on that portion so in the article again at a high level of the article right the links are in the chat window as well you can go catch those they’ll also be in the description of the video if you want to go follow along and read the documentation I’d highly recommend it but if you read those you’ll see that there’s basically an outline but it’s got a table Matrix right and then it

starts talking about different scenarios scenario one scenario two scenario three yeah let’s start there what do you guys think what’s your initial reactions to the article so as I was going through the documentation I saw you guys’ chat yesterday and okay it gave me a bit of a giggle I’ve never seen an article be in a sense a bit humorous but one of the links I think understands a lot of the developers’ confusion it

says decide like copy dataflow or spark with a question mark like what should you choose when it comes to our ingestion this has been the Power bi developers I’ve been talking to I had the same call yesterday Mike with someone about migrating everything over like what are we transferring or where are we going from here because we have these tools that have overlap yet have different obviously pros and cons and I think for someone from

my point of view of okay I know data flows I know pipelines but when do you do spark or do I need to use all of them within an end-to-end solution where do I need to incorporate all the tools at once and part of the guideline here is what do these tools do best where do they fall short and the bigger question is do you need to use all of

28:14 question is do you need to use all of them or is there a use case for utilizing each of these tools in one end-to-end scenario so let me let’s start at the beginning what you said there and say there’s three activities pipeline copy activity that’s basically Azure data Factory then there’s dataflow Gen 2 which is our beloved power query Gen 2 version of it I don’t really know what Gen 2 really does behind the scenes but it to me Gen 2 Data flows feels a lot more like What spark would be doing because you

28:45 can write Delta tables and you've got all this extra fanciness. So I don't know if it's the same engine. I hope Dataflow Gen 2 is multi-threaded, because that was the challenge with the original dataflows: they seemed to tap out after a certain period of time and just fall over. So bigger datasets in the old dataflows didn't seem to work very well for me. Other people have found value from it, and if you do incremental refresh with it, fine, you can get large datasets in, but it just didn't really

29:16 feel Enterprise, large-scale ready. And then there's the Spark engine, which is, in my world, Spark and Databricks are synonymous, so it's like Databricks, but it's not; it's the Spark engine. As I look at these different items, I have a really hard time delineating between Dataflow Gen 2 and Spark. I disagree with some of their table matrix stuff; it's basically the same stuff all the way

29:46 down, other than the fact that if you want to use Python or R, you use Spark. Everything else is the same; the table is, like, identical, right? Dataflow Gen 2, well, there's nuance in here. Yeah, okay, as we're going down the matrix: code written, okay, no, I would agree with that.

30:16 Development interface, sources, destinations: well, okay, sources I would disagree with. So here's how I see this, right? One of the things is we're focusing in on the matrix, and I think that's where the guts of the conversation is for us. Where I'd actually start, for the listeners, is at the end, because that's where they're outlining the actual personas of people with skill sets, right? And I think this is the value behind this

30:48 article: it's showing the tech stack. Like, here's the technical tool, and the different areas within that technical tool that align with certain personas. And I think that's important, because you're immediately going to go into: what are the skill sets that I have as an individual? Which we talked about. Tommy, you're comfortable in dataflows, that's your bag, and you've done a lot of development in there. Well then that makes a ton of sense, because

31:19 you're very familiar with that column, right? And I would say the next part, what butts right up behind this, is: okay, what is the offering of the tech stack, what are the skill sets we have, and what are the business needs? If all of my

31:37 data is in tables, right, it's very structured, I don't have to take on complex forms of implementations or tech stacks if I don't have to deal with completely unstructured data. So maybe, if I'm an organization that doesn't have large volumes of data, or even if I do have some large volumes but everything's in table format and it's already pretty well organized, stick with the easy stuff. Like, I just want to copy some data

32:07 into this Fabric ecosystem, so I'm going to take a pipeline activity or a dataflow. Why push myself into the realm of Spark or Jupyter or whatever the case may be unless there's a need for it? And there could be, because that's part of the business-needs conversation: how is my data stored, what is the alignment of the skills and resources I have against one of these, and how does that influence my

32:38 future paths for development in my organization? If I'm like, yeah, five years out there's no way any of this is changing, then keep it simple. If all of a sudden you're gonna say, well, we're hiring a whole data science department because we're trying to ramp up on a bunch of models and LLMs and all this stuff, well, now you're going to be pushed into Spark in some respects, right? You're going to have people on this environment using Python a lot more than they would with a SQL environment.
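The heuristic being described here, structured and modest data defaults to the simple copy or dataflow options while code-heavy or unstructured work points to Spark, could be sketched as a small rule function. This is an illustration of the episode's "pick a default" guardrail logic only; the function name, inputs, and thresholds are assumptions, not Microsoft's actual decision guide.

```python
# Hypothetical sketch of the "pick a default ingestion tool" guardrail
# discussed in the episode. The rules and names are illustrative only;
# they paraphrase the talking points, not Microsoft's official guide.

def pick_ingestion_tool(structured: bool,
                        needs_code: bool,        # Python/R transformations?
                        heavy_transforms: bool,  # complex reshaping vs. plain copy
                        team_knows_spark: bool) -> str:
    """Return a default ingestion pattern for a dataset."""
    if needs_code or (heavy_transforms and team_knows_spark and not structured):
        return "spark"            # code-first, unstructured, or ML-adjacent work
    if heavy_transforms:
        return "dataflow-gen2"    # low-code Power Query transformations
    return "pipeline-copy"        # plain lift-and-shift: copy activity

# Structured tables, no transformation logic: keep it simple.
print(pick_ingestion_tool(True, False, False, False))   # pipeline-copy
print(pick_ingestion_tool(True, False, True, False))    # dataflow-gen2
print(pick_ingestion_tool(False, True, True, True))     # spark
```

The point of encoding it at all is the episode's theme: once the default is written down, exceptions become visible and debatable instead of ad hoc.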

33:08 And that's the value I see behind the alignment in the columns, and how one would go about looking at this and choosing the path they go down. So I want everyone to listen to what Seth said, because it's gold. Oh, I just ignored it all. No, I'm literally taking notes; there's a couple points I want to make here, but yeah, go ahead, Tommy. Yeah, it's essential, what you said, because it gets

33:39 to the heart of where the decisions are coming from. Everyone's like, oh, I now have to become the full engineer. Well, first you've got to get access to all of this. A developer who's been the BI developer isn't gonna go, oh by the way, I need access to all of our raw data as it comes in, the files, if that's not changing. Because in order to use a lot of those other systems, like Spark, the normal use case is that you're getting the

34:12 direct raw data. But the huge point in what you said is: if a lot of my tables are already structured, and that's what I have access to right now, what I've been given access to, then yeah, you don't have to do the full stack. The goal here is pushing into the lakehouse, not necessarily that the BI developer or the normal users are now going to be given the keys to the kingdom of all the raw data if they

34:44 didn't have it before. The other tools or approaches, one, don't make sense, and two, don't really work, because you don't have the raw files; you're not going to have the CSV files or the raw data from the system. So, what you just said: if I still have access to what I had before Fabric came out three weeks ago, then yeah, keep it simple. All right, I'm gonna jump in with a bunch of comments here.

35:15 Seth, you talked about structured tables and the need to stay simple. I would agree with you there 100 percent. One challenge with the structured-tables element that has not been good in the existing Power Query experience is the ability to have slowly changing dimensions of information loading to tables. That's my only caveat to what you just said around staying with Power Query versus the decision point to go over to Spark. And I think the new version, Dataflow Gen 2, does give you the ability to

35:47 continue to append data. That would actually allow you to do what we would describe as: hey look, there's a table in production, I want to go grab all the records that have an updated date as of yesterday, and I want to lift only the records that are changing to my lake. Now I'm going to be duplicating values for the natural key of that column, right; there's a natural key in that table somewhere, okay, fine. But that is the challenge businesses face: they're taking a snapshot of data. Do I want the

36:18 value of things today, or do I want to see the value of that thing every single day it changed, a snapshot of it every single day? So I'm a little bit on the fence about whether I could do that exercise of grabbing the latest record, the most current thing, inside Power Query. But I think what they just added, where you're writing to a destination and then have the ability to append or merge or replace the information, I think that solves my problem there.
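The pattern Mike is describing, lift only rows whose updated date crossed a watermark and then merge on the natural key so the destination keeps the most current record, can be sketched in plain Python. This is an illustration of the merge semantics only; the column names (`order_id`, `updated`, `amount`) are assumptions, and a real Fabric dataflow would express this through the Gen 2 destination settings rather than hand-written code.

```python
from datetime import date

# Hypothetical incremental load: pull only rows updated since the watermark,
# then merge ("upsert") on the natural key so the destination keeps the
# latest version of each record. Column names are illustrative.

def incremental_merge(destination: dict, source_rows: list, watermark: date) -> dict:
    """Merge changed source rows into the destination, keyed by order_id."""
    changed = [r for r in source_rows if r["updated"] >= watermark]
    for row in changed:
        current = destination.get(row["order_id"])
        # keep the newest version of each natural key
        if current is None or row["updated"] > current["updated"]:
            destination[row["order_id"]] = row
    return destination

dest = {1: {"order_id": 1, "amount": 10, "updated": date(2023, 6, 1)}}
src = [
    {"order_id": 1, "amount": 99, "updated": date(2023, 6, 5)},  # changed row
    {"order_id": 2, "amount": 20, "updated": date(2023, 6, 5)},  # new row
    {"order_id": 3, "amount": 30, "updated": date(2023, 5, 1)},  # before watermark
]
dest = incremental_merge(dest, src, watermark=date(2023, 6, 4))
print(sorted(dest))   # [1, 2]: order 3 skipped, 1 updated, 2 inserted
```

Replacing the `>` comparison with an append would instead give the every-day-it-changed snapshot history Mike contrasts this against.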

36:49 So I think, to your point, Seth, I could probably spend more time in Power Query. And what I was saying in chat is: you for sure should learn dataflows, Power Query dataflows. That's what you should learn, like, today. It's the easiest thing to learn: it's buttons, it's clicking. You can write a couple custom functions if you need to, you can get a little more code-y if you need to, but 80 to 90 percent of your workflow could probably just be served by going through dataflows. The flip side of this, then, is okay,

37:19 when is the right decision to go over more towards the Spark side? And this is where I'm disagreeing with the article a little bit. There are three scenarios of people they're talking about for this decision point, and all three of them are data engineers. And this is your point, Tommy: as a what? What should I be knowing? So there are two personas I think are in play here. I think there's the person who was a data modeler who did a

37:50 little bit of lightweight Power Query. When I look at people, I think the data modeler persona is a thing, right: tables, relationships, shaping data so you can get a star schema. They're doing lightweight data engineering. What you're seeing in this article is they're only talking about the data engineer, which in my opinion is a miss. Because every business user is doing a little bit of lightweight data engineering already. They may not have studied up, they may not be a full-on, capital-D data engineer at this

38:21 point, but you're doing data engineering. And I'd argue all those Excel file users that are moving pieces of data around and copying and pasting them, they're still doing the engineering work; it's just not in a tool that's going to be repeatable. But aren't you picking a... like, yes, can somebody with modeling skills just do that and be a data modeler? Yes, absolutely. And is it completely different, or does it coincide

38:51 with data engineer? I think data engineer is this overarching term they're using here: how to work with data, transform it, model it, do it right. Are you getting too hung up on "modeler" as opposed to, okay, that role fits within what they're calling "data engineer"? Well, so, to my point there: I think traditionally there was this concept of a data engineer. A data engineer would be on the IT side; they would work in their data

39:22 warehouse and they would build tables, and they would say, okay, I'm done with these tables, here you go, business. Here you go, Business Objects user: I've made the universe, I've engineered the data to a point where I can walk away, and now, business, you can show up, here's a bunch of tables, and you can use said tables to build your reporting or connect to them or whatever, right? So in the Power BI realm we've always said that if you're not just a report builder, if you're building data models, that's a

39:52 little bit harder than report-level building. The developer for Power BI was considered the data modeler; the developer in the data warehouse is the data engineer. So if I look at the Venn diagrams, there are probably a couple roles that each of those personas doesn't have. Like, I wouldn't expect a data engineer to be able to build measures in DAX, I don't think, yet. Right, and that's a feature of the data model, so maybe I want my data engineers to have that,

40:22 and maybe I want my data modelers to have more skills like the data engineer. I'm getting lost, because if we look at the other article, we're looking at BI developers and data warehouse developers; in here we're looking at data developer. Okay, so now you're right, Seth. This is my complaint with the two articles: the one article focuses, like, a hundred percent on the data engineer. The other article, talking about warehouse or lakehouse, talks about Susan, a professional developer, meaning "I'm not

40:53 scared of code"; Rob, a data engineer, thinking, "I understand how to get data and transform it into something I need"; and then Ash, a citizen developer. That's my business user. Ash should be inside the other article. Hey, you need a scenario four that says: I'm Ash, a citizen developer, what do I use when I look at this stuff? Because the citizen developers can use this stuff. And there's a huge part here too. Mike, I completely agree; okay, I see where you're going with it. Yeah, absolutely, sorry, I'm with

41:23 you now. I agree there's a missing persona here, of Rob, Tom, or whoever it is: I'm a Power BI developer who's been doing a lot of the ETL in Power BI, but I didn't have access. We've talked about this so many times with dataflows and modeling, where they always need a grow-up story to go back to the source. And the amount of teams that are trying to do everything in dataflows, hard ETL things needed from the reporting point of view, that are business dependent, business

41:53 critical, but are not up at the source: they're not in Synapse, they're not in the warehouse. I think that's the huge persona missing here. Now we can do this with the lakehouse: we have

42:09 the ability to use the Power BI developer role, who's doing major ETL transformations they've already been doing, that only existed in Power BI. Oh yeah, I don't like the term "developer", honestly. Well, hey, it's obviously the word we've used, do you not remember? Yes, no, I understand. The only reason I'm saying "Power BI developer" is because I feel like there's a different skill set right now.

42:39 The citizen developer, that's what's in the article. But yeah, Christian, I hate that one even more, burn it with fire. I don't like "Power BI developer" as a term because I think it's too overarching. What does that mean? Does the Power BI developer move content between dev, test, and prod in a pipeline? Does the Power BI developer only build data models, or build data models and reports? I think the answer is yes, it could mean all those things. But I

43:09 like to specify more about the different workloads I'm thinking about when I think about Power BI. So I don't really love "Power BI developer"; it's too broad a term. I want to focus on: okay, we have this persona called data engineer who's making tables of data, and we have this persona who engineers the model, in Enterprise terms, right? I think we were on the right track, and you're going the wrong direction again with "citizen". Because right now,

43:40 if we're gonna give feedback on the articles, because we are right now: if you look at the lakehouse/data warehouse one, this citizen developer is no longer just a consumer of data. Power BI threw that away five years ago, right? Think about the person in the business unit right now that's talking about using dataflows or anything else: those are ETL

44:10 tools that they've been working with to clean and refresh their own data, to do their modeling, to do their reporting. So to me the argument makes a ton of sense, because: where does that person go? Where does the business user, this citizen or Power BI developer, who has been doing bits and pieces of all of this all the way through, connecting through Power Query, transforming the data, modeling the data, reporting, where is the grow-up story for them in Fabric? And I agree with you, Mike:

44:40 they're missing in the first part, which is the ETL part. Yes. And technically, they for sure then need to get pushed into, probably, Dataflow Gen 2. Yes, this is where you start: the persona of this person goes through this chain, correct. So I like the fact that, hey, we've got some personas, and there's some structure to how this works. But it doesn't seem like they've put

45:12 together the end-to-end of how this flows through different parts of the organization. The huge part here is that previously, teams, whatever you want to call them, were doing the bronze and silver and gold even in dataflows, doing the major transformations. The difference is it only existed in Power BI: the output was only meant for, the only destination was, a Power BI report. There was no other place you

45:42 could put it, even though they were doing these business-critical transformations for their reporting needs with dataflows, and it only existed in that ecosystem. The difference now is the ability to actually push that somewhere it can be utilized: as a SQL database, or in a lakehouse, which can be connected to more than just Power BI. It's not so much the transformations that are different for that role; it's the output.
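The bronze/silver/gold pattern mentioned here, raw landing, then cleaned and typed, then business-ready aggregates, can be sketched in miniature. This is a hedged illustration with made-up data: in a real Fabric implementation each layer would be a Delta table in the lakehouse, not an in-memory list.

```python
# Toy medallion (bronze/silver/gold) flow. Illustrative only: real Fabric
# implementations land each layer as Delta tables in a lakehouse; here each
# layer is just an in-memory list so the shape of the pattern is visible.

bronze = [  # raw landing zone: untyped, possibly dirty rows
    {"region": "west", "amount": "100"},
    {"region": "WEST", "amount": "50"},
    {"region": "east", "amount": "bad"},   # unparseable row
    {"region": "east", "amount": "70"},
]

def to_silver(rows):
    """Clean and type the raw rows; drop anything unparseable."""
    out = []
    for r in rows:
        try:
            out.append({"region": r["region"].lower(), "amount": int(r["amount"])})
        except ValueError:
            pass  # a real pipeline would quarantine/log this row
    return out

def to_gold(rows):
    """Aggregate the cleaned rows into a business-ready summary."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)   # {'west': 150, 'east': 70}
```

The episode's point is about the destination: the same three-step shaping used to live only inside a Power BI dataset, whereas landing each layer in a lakehouse makes the silver and gold tables reusable outside one report.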

46:13 It's the actual destination of that data that opens up all the doors, not so much the tooling; it's really the final destination. It is, but at the same time, where my head goes after this is: okay, well, can't they just keep using Power Query in Power BI? Well, yeah, but think about one of the phrases we've coined a lot in the past: think like the business, act like IT. One of the biggest wins of

46:44 Power BI, and now the new Fabric ecosystem, is that if there are ways to automate or scale what you as a business unit are doing, we want to pursue those, right? And that takes it out of this realm where a very small nugget of fantastic insight lives only in a Power BI report, and it says no, all the work you did, now we can leverage that in Dataflow Gen 2 in Fabric,

47:15 in a way in which now it's discoverable to everybody. Yes, of course I want to step into that realm. And if I step into that realm, I instantly take these citizen developers into that realm with me, right? So let me go back to your comment from earlier, Seth. You said Microsoft is not producing the right story around the end-to-end architecture of where data flows through the organization, and my mind went

47:45 towards: yeah, this is 100 percent true, and right now many organizations are struggling with this design. Power BI gives so much control. Bear with me here while I go off on a tangent, I'll bring it back. Inside a workspace we have reports, we have datasets, and we have tables of data being produced through dataflows, right? All of those elements have different levels of permissioning, and you can design different patterns inside

48:15 your environment to give different layers of access to those assets. We go from a thin report that's very baked, done, ready to go (we're just going to hand you the finalized report: hey business, go ahead and use that) all the way down to here's almost-raw data, here's tables of information, and it's up to you to build your own tables, relationships, and data engineering, right? So as we think about where corporate IT

48:46 or corporate BI lives in relation to all these different artifacts: you may decide as an organization to only produce reports, and that's all you share from Enterprise BI. Or you may say Enterprise BI will manage and create a data model that then gets shared to the organization, and you can build off of that. There may be other teams or parts of the organization that can go all the way down to the table level: hey, I'm not going to build you a data model, here are the raw tables you would use to produce information.

49:16 This is dim product master, this is dim sales master, this is fact sales, right? Here are the tables that come out of our systems, go do what you want, business. Now, the reason I'm bringing this up is that what happens now, in any one of these pieces of information, is you are transitioning between what traditionally was IT, the team that was connecting to the transactional system, loading tables of data to a warehouse, and then giving vetted access to that information to the rest of the business.
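The "here are the raw tables, go build your own model" handoff Mike describes boils down to the business user joining facts to dimensions themselves. A minimal sketch with invented rows: the `dim_product`/`fact_sales` names echo the episode, but the columns and keys are assumptions.

```python
# Toy star-schema consumption: the business user receives raw dim and fact
# tables and joins them into reporting rows themselves. Table and column
# names are illustrative, echoing the "dim product / fact sales" example.

dim_product = {  # keyed by surrogate key
    1: {"product": "Widget", "category": "Hardware"},
    2: {"product": "Gadget", "category": "Hardware"},
}
fact_sales = [
    {"product_key": 1, "qty": 3, "amount": 30},
    {"product_key": 2, "qty": 1, "amount": 25},
    {"product_key": 1, "qty": 2, "amount": 20},
]

# Join fact rows to the product dimension and total by category: the kind
# of shaping a consumer would otherwise get pre-built in a shared model.
by_category = {}
for row in fact_sales:
    cat = dim_product[row["product_key"]]["category"]
    by_category[cat] = by_category.get(cat, 0) + row["amount"]

print(by_category)   # {'Hardware': 75}
```

Every layer the Enterprise team does not pre-build (the join, the relationships, the measures) is work, and responsibility, that transfers to whoever receives the raw tables; that is exactly the stewardship handoff the next part of the conversation wrestles with.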

49:46 And what I'm saying now is that the challenging part is the transition of responsibility at these different levels. When is someone responsible for a portion of that data, and then how do I transition data stewardship? This is the conversation. Data stewardship becomes incredibly important now, because IT can't move quickly enough to build all the tables of the business at once, so what we do is provide access to that information further upstream. Maybe we

50:18 give them access to the datasets. Okay, now when that happens, if the business goes out and builds some crap reporting off of that big dataset and doesn't understand how it works, whose fault is it? And this is where I think we potentially get ourselves in trouble with organizations, because they struggle to figure out how to successfully transition the responsibility of the data from one team to another team, especially if you didn't have either the concept of a data contract or the data engineering team who used to be in

50:48 charge of this. There are so many scenarios I have from my previous work. Realize, dataflows can now be used without Power BI ever in mind at all. In the past, dataflows only had the purpose of reporting, but now, because of Gen 2, it's been pushed farther upstream: I can develop dataflows purely for pushing data, without it ever needing to be digested by a report,

51:19 and push that to other systems. That cannot be understated. But to your point, Mike, that also can cause problems if I'm doing the same things without the data stewardship or the governance around them. Okay, yes. And this is where, I'm not struggling, I'm working through it myself: what's occurring here is we're seeing the evolution of a data platform that's bringing in, I would say, another group of people that have been

51:49 working in Power BI solely, right? We're now seeing a true merge of data engineering people and business people all in the same ecosystem, and now the idea is that we're not just IT or business or Enterprise BI or departmental BI. We now have to have the conversation that this is all of our organizational data; we're bringing everything together. There's the potential to let anyone build in the space. Some of it will be very good and well designed; some of it won't. So how does this

52:21 work, and how will organizations respond? So there needs to be more emphasis on your process and your policies, which will help you decide who should have access to what, who's building what, and how you transition data artifacts from one team to another.

52:40 Like, a lot of times we get stuck in the middle, right? The transactional system has bad data in it. People enter bad data, fine; whose job is it to fix it? Do I add business process in my pipeline to fix the data that's coming out of the enterprise system incorrectly, or do we kick out a task to the people who are entering the bad data and say, go fix your stuff, here's your fix list? And that is a responsibility handoff between the people consuming stuff and the people who are entering data into the system.
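The "here's your fix list" option can be sketched: validate incoming rows, pass the clean ones downstream, and route the failures back to whoever owns data entry instead of silently patching them inside the pipeline. Purely illustrative; the validation rules and field names are assumptions.

```python
# Hypothetical "fix list" handoff: instead of silently patching bad source
# data in the pipeline, split it out and hand the failures back to the team
# that owns data entry. Validation rules and field names are illustrative.

def validate(row):
    """Return a list of problems with a source row (empty list = clean)."""
    problems = []
    if not row.get("customer_id"):
        problems.append("missing customer_id")
    if row.get("amount", 0) < 0:
        problems.append("negative amount")
    return problems

def split_rows(rows):
    """Separate clean rows (load them) from a fix list (send it back)."""
    clean, fix_list = [], []
    for row in rows:
        issues = validate(row)
        (fix_list if issues else clean).append({**row, "issues": issues})
    return clean, fix_list

rows = [
    {"customer_id": "C1", "amount": 100},
    {"customer_id": "",   "amount": 50},    # bad: no customer
    {"customer_id": "C2", "amount": -10},   # bad: negative amount
]
clean, fix_list = split_rows(rows)
print(len(clean), len(fix_list))   # 1 2
```

The design choice matches the episode's point: patching in the pipeline hides the defect and makes the pipeline the implicit owner, while a routed fix list makes the responsibility handoff explicit.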

53:12 This is hard; it's not easy to figure this stuff out. Well, I think you're 100 percent right across all these points: it is hard. I'm gonna spend more time in this area as well, as far as how a business goes about implementing something like this and how it fits into a framework. And one of the challenges related to that is there's such a push, and I'm on the cusp of just saying it's really easy to do this, of "data for everybody". Yes, but

53:42 it also fits into: oh, when I go "data for everybody" and I open it up to everybody, it's like, yeah, I'm a subscription-based service, and the more people that use it, the more we're gonna make. Yeah, that makes a ton of sense; that's exactly what Microsoft's doing. But in terms of implementation, it creates significant problems when that's the push, and the follow-up is: oh, OneSecurity, right, then we'll provide you all the structure to put together how we would segment this out. And

54:12 that's the part that's a challenge: do I have to wait for some of those features, or do I have to bake them into what I would presume are existing features and functionality I have now, in preview? Or is it just such a new environment that I can manage a lot of these pieces through capacities and workspaces and permissions? That's the challenge for me; that's why I'm saying I'm going to take this and diagram it out, because absolutely, all of these things are now challenging

54:43 access to information. Yeah. So, traditional BI? We probably got past that a long time ago, because people have access to multiple different systems faster, and business needs are now solved at such a pace that of course that's no longer the way to go. Fabric mixes all these models together, though, whether you

55:13 wanted to be there or not; you're in there. So we've got to figure out what it looks like: how do we bring these people even further back now into Enterprise spaces, where we're technically pushing them, right? This is what's happening. As much as you want to say data shouldn't have been, or, I'm glad to see that data is now accessible to more of the organization. But in the same way,

55:44 you as a business person interacting with data, well, maybe you do need to skill up in certain parts or areas. Or we just define these guardrails and then the AI bots make it easy: oh, I'm in dataflows, build me a dataflow that does the transformations, and it just happens. That could be the case too. Hey, Copilot, do something in Spark; okay, fix this. Well, I'll just say this one last

56:14 time, because I really think, and this has to be the last time, Tommy, no more after this: just the ability alone with Gen 2 to push to even just a SQL table cannot be understated, for what that's done for that role, that user, and for what dataflows now are in that ecosystem. Okay, just give me: why do you need to push into a SQL Server? What are you doing with it? Why do you need to push it to the table? Why

56:44 would you ever want to pull a dataflow out and push it back into SQL Server? Well, if you're doing the transformations that are your BI or your IT things... there's a few I can't say here, but I'm trying to think of the non-client-specific answer, okay. I think there are about 10 scenarios I have that we've done with dataflows: this is great, and can I use this model somewhere else? Because we had to do these transformations, merge other

57:14 sources together and we what we’re doing was all bi but it was all just going to a model okay okay these were structured tables we were creating in data okay you said the words that I was looking for okay that says generals no no it’s fine my my issue with what you said was I like the so to your point I do love the fact that I can write data anywhere and honestly it makes sense right I should be able to write data do something with power query and then shove the data back into anything I need I have a major issue with using SQL servers as a source

57:44 issue with using SQL servers as a source for data models I don’t think it’s needed and I have if you if you are putting data if you are doing if you were doing data things and you were shoving data into a SQL Server you better be putting an application on top of that SQL Server that needs that data to operate like your what this says to me is you’re actually building an input to the transactional system which again I think there’s a use case for it I’m not saying I would never design an architecture that would take data out of

58:14 architecture that would take data out of a SQL Server do something with it in power query and try to push it back into SQL Server just to pick it back up with power bi and put it into a data model that to me that’s just a total waste of a SQL Server I would completely agree with so my my point going back here though was is like if if that is your scenario Your Design scenario that’s what I have major heartburn against but to your point though it was redundant it feels very redundant like let’s go make a server let’s get the data out and do something to it store it in files and then let’s put it back in it like this is totally the wrong workflow right if

58:45 If I'm only doing reporting, then obviously the dataflows, yes, exactly right. The only things I need SQL for are query folding, row-level security, and talking to my Delta tables. It should be serverless; there should be no real tables inside SQL. It's just a compute engine I borrow to go access things in my lake. Sorry, I'm getting really hot and bothered about that one. We're on the same page there; that's my only scenario.
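The rule of thumb being argued here can be written down as a tiny decision helper. This is a purely illustrative sketch in Python; the function name and consumer categories are our own shorthand for the discussion, not anything from Microsoft's decision guides:

```python
def dataflow_destination(consumers):
    """Pick a landing zone for dataflow output, following the rule of
    thumb from the discussion: only write back to SQL Server when an
    operational application actually needs that data; if the output
    only feeds Power BI reporting, consume the dataflow directly.

    `consumers` is a set of labels such as {"power_bi"} or
    {"power_bi", "application"}.
    """
    if "application" in consumers or "power_apps" in consumers:
        # An operational system depends on the data, so a SQL Server
        # destination (an input to the transactional side) is justified.
        return "sql_server"
    # Reporting-only output: keep it in the dataflow and skip the
    # redundant extract, transform, and reload round trip.
    return "dataflow"
```

Called with a reporting-only consumer set, the helper keeps everything in the dataflow; add an application or Power Apps consumer and the SQL Server write-back becomes defensible.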

59:15 But I do want to say, again, this is Michael's lens on the world. There are going to be people who need that feature, and there are going to be people who think they have to go back to SQL Server; it just is what it is. I love this whole topic today, but I'm so fired up, I'm sorry. I'm jittery with all of this; maybe the coffee is extremely strong today. I don't know what's going on, but I feel very passionate about these topics.

59:46 Particularly around which tools are now available to us. I know we're right out of time, but one quick reply to what you said: a lot of organizations don't have the full infrastructure available; they're basically using what they have. Correct. If you have a SQL Server, you're going to put it back in SQL and pick it up from Power BI. Fine. Well, no, you still don't have to do that. What I'm saying is, if you have a dataflow that's just going to Power BI, use the dataflow.

60:16 But if you don't have the data lakes, or obviously the other infrastructure, you do need to use what that BI team has done, because they have structured the data in a way that can be used in Power Apps and Power Automate: pushing invoicing, pushing actual usage. Let's not get to Fabric. Am I making sense? Kind of. All right, we're going to wrap it up. I feel like we might need to revisit this one; I think we should definitely come back and actually focus more on scenario one, scenario two, scenario three.

60:46 I wish we had gotten there. We talked so much about the tools and the tech, picking which things, and that's good conversation, but I feel like we could revisit and focus on just scenario one. I had notes around the personas of Leo, Mario, Mary, and Adam: what are they doing, how are they using the tool, does this make sense with my mental model of how these technologies should be used with their workflows? Does that make sense? So anyway.

61:16 With that, we are literally right at time, so I'm going to cut it here. Thank you very much for joining the podcast; we really appreciate your time today. And I apologize that I got so passionate about this; I hope you enjoyed the conversation and learned a couple of things. I guess it's good that we argue; it makes for entertaining content. Anyway, our only request is, if you liked this content, if you learned something from it, would you please share it with somebody else? We'd love for the podcast to get out to more people, and the only way it can hit someone else's ears and find value is if you tell them.

61:46 So tell somebody else; we appreciate it. Tommy, where else can you find the podcast? Yeah, maybe Microsoft will make another blog article you guys can help with; exactly, it's like another appearance. You can find the podcast anywhere it's available: Apple, Spotify, Google Podcasts. Make sure to subscribe, and join us live every Tuesday and Thursday at 7:30 a.m. Central. If you have something you want us to talk about, Fabric or Power BI, go to powerbi.tips/podcast, where we have a mailbag for you to submit your questions.

62:16 Oh, I forgot to mention it, so I'll throw it in at the very end here anyway: the new Power BI PBIP hat is now out on the PowerBI.tips store. Go to swag.powerbi.tips and you can get your very own PBIP hat, for real, true developers who love Git. I just bought a whole bunch of them, so when I see people I can give them out. Anyway, thank you all very much; we'll see you next time.

63:11 [Music] Thank you.


Thanks for listening to the Explicit Measures Podcast. If you found this episode helpful, share it with a teammate and subscribe so you don’t miss the next one.
