Star Schema in Fabric? – Ep. 259
Fabric makes it feel like everything is changing at once: OneLake, Delta tables everywhere, Direct Lake, new authoring experiences, and a steady stream of roadmap features. In the middle of that, it’s fair to ask whether the classic star schema is still the ‘right’ shape—or just habit.
In Ep. 259, Mike, Tommy, and Seth unpack why star schema modeling still matters for Power BI semantic models (and DAX performance), but also why Fabric shifts more of the modeling and transformation gravity upstream—into Lakehouse/Warehouse patterns, governance, and shared data products.
News & Announcements
- Announcing the Fabric Roadmap — A consolidated view of what’s shipping (and what’s coming) across Fabric workloads—useful for planning and for setting expectations with stakeholders.
- Fabric’s New Item Icon System — A quick look at the refreshed, color-coded icon system and the design logic behind it—small UI changes that make big workspaces easier to navigate.
- Suggest a podcast topic — Send in a question or idea you want the team to cover in a future episode.
- PowerBI.tips Podcast — Browse the full Explicit Measures back-catalog with links to listen on your preferred platform.
- Power BI Theme Generator (Tips+) — Create and manage theme JSON so your report styling stays consistent across a team (and across projects).
Main Discussion
The core debate is simple: if the data is already stored in a highly optimized columnar format (Delta) and Power BI can read it directly, do we still need to build a star schema?
The team’s take: if you’re delivering analytics through DAX and a semantic model, star schemas are still the best default. They’re easier for report authors to understand (facts + dimensions), they compress well, and they keep filtering behavior predictable. What Fabric changes is where you do the heavy lifting: more shaping, standardization, and governance can (and should) happen upstream in the Lakehouse/Warehouse layer.
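To make the "shaping upstream" idea concrete, here is a minimal sketch in plain Python of splitting a flat extract into a fact table and a dimension. All table and column names here are hypothetical, invented for illustration; they are not from the episode:

```python
# Hypothetical flat extract: product attributes repeated on every sales row.
flat_sales = [
    {"order_id": 1, "product": "Widget",   "category": "Gadgets", "amount": 10.0},
    {"order_id": 2, "product": "Widget",   "category": "Gadgets", "amount": 12.5},
    {"order_id": 3, "product": "Sprocket", "category": "Parts",   "amount": 7.0},
]

# dim_product: one row per distinct product, with a surrogate key.
dim_product = {}
for row in flat_sales:
    key = (row["product"], row["category"])
    if key not in dim_product:
        dim_product[key] = {
            "product_key": len(dim_product) + 1,
            "product": row["product"],
            "category": row["category"],
        }

# fact_sales: the repeated text attributes collapse to one integer key.
fact_sales = [
    {
        "order_id": row["order_id"],
        "product_key": dim_product[(row["product"], row["category"])]["product_key"],
        "amount": row["amount"],
    }
    for row in flat_sales
]
```

The fact table now carries only keys and numeric values, while each repeated attribute lives once in the dimension — which is essentially what upstream Lakehouse/Warehouse shaping does before the semantic model ever sees the data.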
Here are the key takeaways:
- Star schemas are still the ‘happy path’ for DAX: if your measures are complex or slow, it’s often a sign the data shape needs to improve—not that you need more DAX wizardry.
- Fabric moves the compression step earlier: with Delta tables feeding Direct Lake, you’re effectively preparing the column-store before the semantic model—so upstream modeling pays off more than ever.
- Usability matters as much as performance: dimensions/facts make it easier for new report authors to find the right fields and build visuals without accidental ambiguity.
- Normalize to reduce repetition, then model intentionally: pushing repeated attributes into dimensions saves space and keeps your model from ballooning as you add attributes.
- Expect multiple consumers beyond Power BI: notebooks, data science, and automation flows may use the same curated tables—so treat ‘data products’ and ownership as first-class concerns.
- Governance becomes unavoidable in Fabric: more artifacts, more permissions layers, and easier deletion means you need clearer stewardship, certification, and lifecycle processes.
- Use downstream pain as a signal: when business logic turns into gnarly allocations and complex measure branching, it’s often time to reshape upstream tables to make the semantic layer simpler.
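The compression takeaway above rests on dictionary encoding, the core trick shared by column stores like VertiPaq and the Parquet files behind Delta. A rough sketch in plain Python, with invented values:

```python
# A low-cardinality column: lots of rows, few distinct values.
region = ["East", "East", "West", "East", "West", "West", "East", "West"]

# Dictionary encoding: store each distinct value once, then keep
# only small integer codes in the column itself.
dictionary = sorted(set(region))               # ['East', 'West']
code_of = {v: i for i, v in enumerate(dictionary)}
encoded = [code_of[v] for v in region]         # [0, 0, 1, 0, 1, 1, 0, 1]

# Aggregation scans the compact integer column; the strings are only
# needed at the very end, to label the result.
sales = [10, 12, 7, 9, 11, 5, 8, 6]
totals = {}
for code, amount in zip(encoded, sales):
    label = dictionary[code]
    totals[label] = totals.get(label, 0) + amount
```

This is why pushing repeated attributes into slim dimension keys pays off twice: the column store compresses better, and scans touch less data.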
Looking Forward
If you’re adopting Fabric, treat your first Power BI model as a discovery artifact—then use what you learned to formalize upstream tables and keep your semantic model star-shaped and boring (in the best way).
Episode Transcript
0:31 Good morning and welcome back to the Explicit Measures podcast with Tommy, Seth, and Mike. Hello everyone, welcome back. Hello Mike, how are you this morning? And happy Tuesday, gentlemen. There we are, it's a Tuesday. So, jumping in with a couple of announcements and things that we found across the internet: there's a lot going on right now, a lot moving and shaking. Interestingly enough, a new version of Power BI Desktop, I believe, has been released; however, we still don't have a Microsoft blog outlining
1:02 all the features yet at this point. So what have you guys discovered so far in the new, I guess, October Desktop release, or mid-September silent release? I just opened it up and it says October 2023, so it's an October release, an early October one. Well, yesterday I was perusing LinkedIn and saw a post by Zoe Douglas, senior product manager for
1:32 Microsoft Power BI, making the announcement that there is now calculation group authoring in Power BI Desktop. All right, there we go, calculation groups, pretty cool. Calculation groups have been there for a long time, but you had to write the code to make them happen inside Tabular Editor 2 or 3, so this is a pleasant surprise now that it's coming into Desktop. I think people will actually maybe start attempting to write some calculation
2:02 groups. What are your thoughts on calculation groups in general? Do you use them often? Do you have a specific use case for when you want to use calculation groups? This is an interesting one, because I don't think they're utilized as much as they should be by the majority of Power BI users. I think when this comes out in the modeling view and people start downloading it, they're going to go, what's a calculation group again? For those who are initiated: you're probably listening to SQLBI, you've been listening to [name unclear], you've been listening to Bernat Agulló from Spain,
2:34 and you probably have Tabular Editor. That's, I would say, maybe 15% of people, if I had to guesstimate, who have Tabular Editor on their computer. Okay, and of those, how many are actually utilizing calculation groups? It's tough, because if you're building pro reports for someone, or building datasets, are you handing off calculation groups to a client or to a team that has never used them before? I think my predicament with calculation groups is this: I like the idea of them, because there
3:05 are a number of calculations you'll do, like time-based calculations, that seem to make a lot of sense for a calculation group. I have a sum of sales, and then I want to modify that sum of sales by certain time periods, last week, last year, just different patterns for how you want to calculate a time-bound sum across some calculation in your model. So calculation groups, I think, really do help out a lot for that. However, how to get that visual to interact with the calculation group
3:36 sometimes is a bit interesting, and maybe slightly strange, because if you don't have the right filtering properties on that visual, it can't decide what measure to pick. How would you describe calculation groups for a new user, someone who's never seen them? Like, okay, cool, they're out, but why are they valuable to me? Seth, how would you explain to a new user what a calculation group is? To some degree, it's just a collection of measures that are very similar, right, that you can extend
4:06 and use in an easier way, I guess, than manually creating every single one. I'd also say it's more of an enterprise feature, right? Teams that are really well versed in Power BI models have to build out things where calculation groups are really useful. But yeah, there are a ton of caveats to them, right? So is the lay person going to use
4:37 them? Probably not, unless they get really excited about DAX and modeling and go, yeah, I'm going to own these things. But it's nice to see that it's in the tool, because that way it's not one of those features that is buried, where you would never know about it unless you use third-party tools. Now, I could make an argument that you should always use Tabular Editor, but, true, I think it's nice that they brought visibility
5:07 into at least the functionality, because a lot of the things we talk about, our features or tools, are things that 90% of the audience doesn't use anyway. Does that mean that we don't find them useful? No, right? Now that it's part of the tool, I think it's a good feature add, but I would say it's of limited use. And even how you build a calculation group at this point, right? You have to go to the modeling tab of Desktop in
5:38 order to even create it. It appears on the Home ribbon in the model view, but if you go to your report view and click on Modeling, it's not there. So again, it's one of these inconsistent experiences: you have to know where to dig to find the right setting to make calculation groups even appear. It's a bit opposite, too: DAX is easy to learn but hard to master, and calculation groups are the opposite, because without the necessary
6:10 configuration up front, which you may not be familiar with, it becomes a catch-all for all your measures. To your point, Mike, you're like: I only want this for my time intelligence, why is this happening for all of my measures? Or, how do you build your measures so they're reusable? Because the idea is trying to make a reusable measure that you then inject into other calculations to manipulate it. So it's a group of filter-context-adjusted measures, and I think
6:40 after you've been playing with it for a while, to say that to a brand new person, they're like, what in the world are you talking about? Interesting, okay, cool. Another thing in Desktop that was released, which again we still have no blog about, so we really don't know exactly what's been announced or what they improved: when you go to the model view, you'll now see there's a Data tab there as well, and one of the features that I like here is they have a tree for tables and for models, so you can look at the
7:10 tables inside your data model, and you can look at the model definitions on the model ribbon in the properties pane. It gives you your localhost server name, handy; it gives you your compatibility level, handy. And there's another feature here called 'discourage implicit measures.' Being that we are the Explicit Measures podcast, we would highly recommend this feature; that's what our MO
7:42 here is, a little bit: don't just use columns on visuals. You want to discourage those implicit column usages; instead, you want to explicitly call out the calculations, or the measures, hence the name of the podcast. Which makes sense; it's in the modeling view because if you use a calculation group, 'discourage implicit measures' turns on by default, and you can't turn it off if you have a calculation group anywhere in your model. Yes, and I think the language on this one, the wording
8:14 of the feature, seems a bit off. It says 'discourage implicit measures,' but it doesn't mean you can't make them... I think it just means no implicit measures; it just turns them off. It's not saying you're going to get a pop-up going, 'we would rather you not.' No, exactly, it's like you're done, you're cut off. So I feel like the setting is a little bit misleading; however, the intent is very good. And I think what you're saying is: just bury a calculation group in every model, or just turn it on. Just turn it on. So,
8:44 it's a setting that's always existed in the model, so if you had Tabular Editor or something else, you could flip that flag in your model and implicit measures would be immediately turned off. This, I think, was a requirement, because my understanding is that once you bring in calculation groups you need to turn off implicit measures; this is logic Microsoft defaults. So when you create a calculation group, by default they now flip that flag. And I guess the
9:14 situation here would have been: if you didn't have this switch, you couldn't delete the calculation group and then turn implicit measures back on. For a feature to enable something you could only turn off and never turn back on again would be frustrating. So this feature released, which probably meant the whole modeling ribbon had to be appearing before you actually had calculation groups. This is probably one of those things where Microsoft says: everyone's like, yeah, just put calculation groups in Desktop, but there are all these other
9:44 implications that go along with it: how is the UI going to work, what happens when you turn it on and everything implicit disappears, how do you flip that back on, where does that come from, how do you select it? There's a lot of work that had to happen to get here: we got a whole new menu, another whole new properties option screen, just to get calculation groups. So it was a lot of work to get it there. I'm all for this, because this is only going to bring more awareness to our podcast, because someone's going to see this is an explicit
10:14 measure, right? There should be a URL or a little link next to that one: just turn off implicit measures and listen to the Explicit Measures podcast, link found here, every Tuesday and Thursday. I'm not going to lie, 250 episodes take a lot of time. When I do training and we do the DAX part of the training, it's like, well, you always want to use explicit measures, part of the name of the podcast; I always have to mention it. That's funny. It's funny because, on that same kind of
10:44 comment too, Tommy, I've been on a couple of calls, and some people have been looking at... when I go on a call I still use the microphone and the setup that we have here for the podcast, and I get a lot of people responding going, oh, am I on a podcast now? I'm like, yes, this is Joe Rogan with Power BI, and they laugh and have a good time. It's funny, but then at the end of the meetings I close out going, I feel like I should just say 'like and subscribe,' and they all just die. That's funny. People ask, am I on a
11:11 radio show? I go, might as well be, exactly right. So that was one of our major announcements: the new Desktop is out. You can go download it right now from the Microsoft Store, or you can get it from the downloads area on powerbi.com. Other announcements: let's jump over to the Fabric roadmap. I think this is pretty important for us. Seth, I think you were picking this one up... oh, this is a Tommy one, sorry. So there is now a Microsoft
11:46 aka.ms Fabric roadmap, I believe, is the new roadmap, and so now there is a roll-up of all the features that are coming out for Fabric: admin and governance, OneLake, data warehousing, engineering, data science, real-time analytics. So it looks like they're giving you a window into what's coming down the pipeline. I think Alex Powers... I feel like he was saying something on Twitter about building a new report for Fabric, because Alex Powers has the release pipeline, or
12:18 release notes, report for whatever Microsoft produces for their releases on the roadmap. I think he's building one now for Fabric as well, so I'd heard some notes about that on there too. Anything that stood out to you guys? Did you have a chance to look at any of those roadmap-related things? Two things. It's significant that they're doing a release plan now separate from the Power Platform and Dynamics, because previously every release plan for Power BI was always in the Power Platform scope. Interesting. And now we're separating out Fabric. But Mike, there's one I think you're going to like. This would have been my
12:49 secret Santa wish-list item that I do in Q4, the one we normally do every year. Okay, all right: folders in the workspace, Q4 2023. Oh, workspace folders, that's amazing. Where is that found? Under the Power BI one? OneLake? On the OneLake one, okay. And then underneath that, persistent filters in workspaces, which I really like, because I'm always
13:19 using them... I never used the filter feature in workspaces until Fabric. Until Fabric, yes. And Tommy and I have been doing some training things, just learning Fabric and trying to digest it, and just making things inside Fabric is incredibly confusing. From a logical standpoint, there are only so many artifacts that I could make in there, but for whatever reason there are tons of lakes, tons of things; within an hour we had at least a page worth of artifacts showing up inside Fabric, which was like, this is
13:49 absurd, you're going to need a better way of organizing this stuff. I think every time we open our workspace, I want to apologize. I'm like, sorry I made it messy. I'm like, wait, I didn't create half of this. Exactly right. Well, and if you did, it's like, I tried to be organized with it, but it's like you just run by your desk and keep throwing things down on top of the desk, and now it's just a pile of mess. You'd like to spend some time cleaning it up, but there's no way to clean it up. Yeah, except it's as if your paper randomly split into three every time you put a paper on your desk. See, naming matters, right?
14:19 This is exactly why there are naming standards: because folks have had to deal with systems where it's like, nope, this one is STD_ something, this one is VW_ something, this one is whatever_ something. SQL Server is a good example, because they didn't have folders for the views in SQL Server, so you had to come up with a better naming schema. And if you're in SQL, you don't know the difference between a table and the name of a view, so everything... well, you don't know that there's actually a folder for views. Yeah,
14:49 exactly. Well, there is just one for views, right? You can't make folders inside views for different types of views. Right, yeah. It's just one top level; all your views go into one. The first part before the underscore was always the most important, because you just scroll down the thousands of tables. And don't you love it when people want to save the views and they have like 'deprecated,' or they start putting D_ in front? Yeah, because then you
15:21 could push all those... break dependencies too. Yeah, exactly right. So you're like, hmm, can I delete this? I don't know; rename it and see. You can always rename it back. Oh, I'm in the X's, yeah, we should probably get rid of this stuff. That's funny. And that's actually a real technique, right? People in IT are like, is anyone using this? I'll just shut it off and see who whines about it. If they don't whine about it, then no one's using it, because people make a stink when there's a problem, when something's broken. Did you find any other goodies?
15:52 Because I did, yeah. Give me another one; I haven't had a chance to really look through the Fabric one. My favorite experience in the world, dataflows, which again I've been talking about since my daughter was born, because it happened at literally the same time: incremental refresh support, and enhancements to the output destinations. Oh good, yeah. So finally being able to say, hey, this entire dataflow is meant to go to this one lake, so just make everything go to the OneLake by default. Yeah, I've been finding that to be very painful. Even if
16:23 you don't want all tables to go to the same location, it would be nice to just say, I'm going to multi-select what I need. It's probably more the exception than the standard, right? In Gen2, yeah, I would probably make everything go to the same lake and then just remove one or two, like, I don't need these, I'll just turn them off, right? The only purpose to create a Gen2 dataflow, really, is the output destination; that's why you're doing it. Copilot in Data Factory. So, the
16:54 first instance of Copilot in the Fabric landscape, which I think is significant too. There's a whole bunch of those: there's Copilot in notebooks, there's Copilot in Data Factory now. It'll be interesting to see how this lands and how well it does. I've been experimenting; another platform that I use a lot for data engineering exercises is Databricks, and they're very forthright and very advanced in their notebooks, and they've got their own AI, like a
17:24 Copilot version, that helps you build code in there, and I've found it's helpful; it definitely keeps me from having to go Google a lot of things. There are some interesting features where you can take a code snippet and immediately add it to your notebook and things like that. So it'll be interesting to see Microsoft's take on how Copilot is going to roll out, because it's coming everywhere, it's all over the place; Microsoft's really pushing Copilot for all of the Microsoft Office stuff. So it'll be interesting to see how
17:54 effective these Copilot options really are. Anything else that stood out to you, Tommy? I think those are some of the big highlights for me; obviously there's a bunch more, but we'll keep to those. There's one more thing that's also in the release notes you're talking about: there's a lot more motion around Data Activator. I haven't played with that one enough yet to really know where it makes sense, but it's definitely talking about real-time data, data in
18:24 motion: when something occurs, then do something else. So it seems interesting in nature; I'm just not quite sure how I'd leverage it yet, or how to find good use cases. And I think they're just starting to enhance the triggering; they're allowing monitoring for KQL sources. It seems like it's still late to the game compared to everything else in Fabric. They announced it at Build when Fabric came out, as an oh-by-the-way, we're going to have this thing, but now it feels like
18:54 it's starting to become real and it's actually becoming an active part of the product. Now, it's what dashboard alerts should have been. Well, Johnny, thanks very much for the note there in the chat. I'll read your Data Activator blog, and hopefully it's really good and informative. So Johnny, I have questions about it; help me out here, what's going on with the blog around this one, so we can dive in and get more around what that looks like. Excellent. Any other thoughts or openers there? All right, the only
19:25 other note here I'll add is that Microsoft has introduced a number of new icons, so your icons now will have color, and that corresponds with this folders area. It seems like some of the colors of the icons somewhat match groups: Dataflows Gen1 is purple, Dataflows Gen2 is green, right? There's some language they're trying to use here with these different icons and colors. So you'll see that come out into your tenants, if you haven't noticed it already inside Fabric. I think the colors
19:57 are welcome; I like them better than what they were before. I'd still just rather have folders at this point. So I'll just keep... just give me the... it's coming, I've just got to be patient, we'll get there, right? And let me color-code my folders. Right, and drag and drop, yeah, exactly. Anyways, all right, with that, let's jump to our main topic for today. Tommy, give us a quick introduction to our main topic, star schema in Fabric: where is this going to go? I am
20:27 intrigued where this topic is going to go. So, the premise: today we're talking about a lot of Fabric projects, and Mike, you and I have done a bunch of projects. A lot of people are obviously testing out different alternatives to what they're doing now with a normal Power BI dataset, and I think there's a question worth asking: when we're building a Fabric project, or when we're in this new modern landscape, does the star schema, its importance, its being a cornerstone of our normal development,
20:58 still apply? With a normal Fabric OneLake, with a Fabric Lakehouse, and with the normal development, is this still the cornerstone of everything we build, the idea at the top of the mountain, or is there a different priority now as we're building Fabric projects? This is a good question. So, in my opinion, I'm probably going to take the conversation a bit larger than star schema; I think star schema is definitely a portion of this.
21:28 definitely a portion of this when you so I’ll say it this way and this is something that I learned from sqlbi so I have to give them credit for the knowledge of that I’ve gained around this powerbi or at least the the cubes the models they’re great at filtering and aggregating that’s what they’re meant to do so whenever you’re building cubes you’re always thinking about how can I build or structure my data in a way that forces it to be filtered and aggregated for your measures or whatever you’re trying to calculate that informs some of the shaping
21:58 shaping of the data in dimensions and fact tables I feel like and this is my my impression on being doing some model building for customers for a number of years now as I look about or as I present data models to users to consume right so I I think of myself as that person who’s in the role where I’m the data modeler and I’m building models for other people to consume I feel like it
22:23 makes more sense for people to consume a star-based model: these are the dimensions, this is the fact table, the measures always come from the fact table, and these various dimensions are what's used to filter. The measures make up the majority of the data points on the chart, right? The dimensions are the x-axis or y-axis, depending on what you're doing. For new users of Power BI, I think this is the way to go; it's
22:54 very easy to understand. And if you're building from a thin report layer, you can see the different measures, or you can see where the measures are in tables, and you can see how the tables relate to each other. Even though you may not see the actual formula for the measure, just having the idea of 'I can see the tables and understand how they filter down to the main table or not' is very helpful. So I feel like one of the main documentation pieces I
23:26 like to provide is, for every fact table: if you're pulling measures from this fact table, here are the dimensions that impact it. I typically will do a diagram, especially if we have a data model with three or four fact tables in it. I'll have one diagram with all the tables shown with their relationships, in case you really want to look at it, and then I'll break it down and say, okay, for each fact table in this model, here's this fact table and here are the dimensions that go along with it. While there are exceptions to the rule, I find
23:57 most pages in a report usually use one or two fact tables. It's somewhat simple; at least that's how I design my models, anyway. So I'll just pause right there; I said a lot of things. Well, I think it's worthwhile going briefly back to the future on why the tabular model is so important in Power BI: because it uses the VertiPaq engine. It's a columnar-based database, and in order for it to work, the reason why it is so efficient
24:27 is because it takes a thin set of columns with many rows. That's why our CRM is always so frustrating to try to load into Power BI immediately: that's a row-based database, with a lot of columns, a lot of fields. Power BI utilizes the VertiPaq engine for importing, which is meant for a lot of rows across thin columns. It's interesting you're going down that route with that topic, Tommy, because this is a really interesting point. To try to put the thought here
24:58 together: on the operational side of things, in the transactional data system, when you have people actually editing individual records or adding data into a system, doing the daily work, you're adding row by row; you're locking out this row, you're editing that data, you're putting it back in, right? So the transactional system is more row-based information, and that's built by app developers; that's what they want to put in there. When you move over to the reporting side, we need to look not just across individual rows of data; we're focusing more on looking at an
25:29 entire column, like, I need to see all of the sales for an entire year, right? So you're doing more of this aggregating and filtering: I need to take a lot of little tiny data points and roll them up, or aggregate them, into bigger amounts of information. And to your point, Tommy, that's why the tabular store that the VertiPaq engine delivers is so powerful: it's literally storing the data in a different format; it's smaller, it's more compressed. And this is where I think Fabric becomes very interesting to
25:59 me, because with the Delta format that everything in Fabric uses, from dataflows to pipelines, whenever you're writing data down into your Lakehouse, by default you're getting this column store of information. And for me the really neat thing about Fabric is that Power BI, literally the Analysis Services engine, can now read Lakehouse tables directly and immediately load them into memory. That's the awesome part, because it's
26:31 like doing all the VertiPaq work before VertiPaq, in the Lakehouse; you've moved one whole entire step right in front of it. That makes it really easy. I completely agree. I find it hilarious that as we're talking about the new features and updates of an October release that is not released, I almost wonder if people are listening to us and somebody forgot to push the button on the October
27:02 release. So Tommy just hit our internal chat here: you want to check out the Power BI October release that has just dropped? So I'll grab the link and put it in the window. I just want to say that I appreciate that Microsoft has now synced their releases to the major podcast and channel in the world, the Explicit Measures podcast; so thank you, Microsoft. Send us to Ignite, Mike, come on. It's early for them right now at Microsoft, right? It is. We'll
27:33 save looking at that segment; I just couldn't leave that in the back end. Well, stay tuned for next month, when we'll release the next one for November. You heard it here first, guys. Sorry, Tommy, you were saying? No, we're traversing a lot of the reason for star schema and the column storage of the VertiPaq engine, right? It's behind the scenes regardless of how you set up the objects within your model, right? I think
28:05 at the point where — in the same way that you're talking about a production system, where stored data is typically not repeated — when we explode it out, especially when we're talking about sales data and all of the accoutrements that go along with it, a lot of data starts to get repeated. This is why even in databases you go through this process of normalization, which basically means you're removing any repetitive data to save yourself space. And this used to be
28:35 a huge thing when you only had a finite amount of space, right? This is why capacity planning — how much memory you were going to consume during the year — was a big deal, and now we're just like, oh, give me more. But when we're talking about those same concepts in star schema models, you're normalizing your data; you're removing all of the repetitive data out of giant tables. So if you have the one record, but then include all of the product
29:08 information — the titles, the whatever — that's just repeated information. You're wasting space, and ultimately your model gets to a size where that starts to have detrimental impacts on performance as you're trying to rip through and aggregate and do all these things on top of it with the DAX language. So I think that's the key part for me, and why star schema has been the de facto way to go.
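The "repeated product information" problem maps directly onto splitting one wide table into a fact table plus a dimension table. Here is a minimal sketch in plain Python — the table and column names (`dim_product`, `product_key`, etc.) are illustrative, not from the episode:

```python
# A denormalized "giant table": product attributes repeated on every sales row.
wide = [
    {"date": "2024-01-05", "product_name": "Widget", "category": "Hardware", "amount": 100.0},
    {"date": "2024-01-06", "product_name": "Widget", "category": "Hardware", "amount": 175.0},
    {"date": "2024-01-06", "product_name": "Gadget", "category": "Hardware", "amount": 250.0},
]

# Dimension table: each distinct product stored once, keyed by a surrogate key.
dim_product = {}
for row in wide:
    name = row["product_name"]
    if name not in dim_product:
        dim_product[name] = {"product_key": len(dim_product) + 1,
                             "product_name": name, "category": row["category"]}

# Fact table: narrow rows that carry only the surrogate key plus the measures.
fact_sales = [{"date": r["date"],
               "product_key": dim_product[r["product_name"]]["product_key"],
               "amount": r["amount"]} for r in wide]

assert len(dim_product) == 2                 # repeated attributes collapsed to one row each
assert fact_sales[0]["product_key"] == fact_sales[1]["product_key"]
```

The fact rows no longer repeat strings; filters flow from the dimension to the fact through the key — which is exactly the predictable filtering behavior a star schema gives a semantic model.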
29:40 That's how you build and do these things, whether that's in the warehouse — where you're implementing surrogate keys, where you're implementing slowly changing dimensions — and then Power BI re-creating that same structure. It's not like you went from this structure to a brand-new one; this is the most performant in all of our places and the best way to do analytics. So I think that's what stands out to me from the star schema perspective. I guess my question is: are
30:11 there any new aspects of Fabric that would challenge that star schema? So, my initial — did we create a clickbait title? No, no. I think I have two sides of the coin here. On the first side: well, if we're still using DAX and utilizing Power BI, we're still going to live in this world of database design, or tabular design — so that's the default answer
30:41 I would have. However, on the devil's-advocate side: I keep going back to this — at the turn of the century there was a lot of talk about how man had reached this pinnacle of invention, and like three years later the Wright brothers came out with the airplane. And I feel like we're at that kind of point now with Fabric and Direct Lake: we don't know yet, in a sense, how efficient it can be. If we're going to be using Direct Lake rather than Import — to your point, with Delta tables — for the first time, on all aspects of
31:13 what we're using in databases, I think there's still a lot we don't know that can really play into whether the normal tabular model — where you have to have the thin columns, or the thin columns with a lot of records — still makes the most sense. So you're making the assumption, and a pretty large one, that Direct Lake mode is
31:43 structurally so significantly different from the past. I don't know that it's using the VertiPaq engine the same way. Well, from what I glean, the VertiPaq engine has been enhanced. What you would provide to the VertiPaq engine would be a SQL query; it would provide row-level detail of a table, and VertiPaq would read that into memory and then compress it. The VertiPaq engine makes partitions, it makes groups of things; it makes all these
32:13 little tiny files — or groups of files — that allow you to take a table with row-level data, delete basically all the information that's repeated across rows, and then, in as optimized a way as possible, only store the columnar data. So to me, that's what the VertiPaq engine is doing: some data engineering and shaping. You can manipulate it with M, but the VertiPaq engine is really doing that compression and serving that columnar
32:44 store of data to the visual side. You can basically write SQL against it and get out answers, or you write DAX against it and out come answers. So to your question — and maybe, Tommy, to your point as well — I think what's happening here is the VertiPaq engine has just gotten an enhancement: instead of only working with VertiPaq's proprietary format, it's now able to do the exact same thing it was doing, but with Delta tables.
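The compression step being described — collapsing repeated row-level values into a compact columnar form — is commonly done with dictionary encoding. A toy Python version of that idea (real engines such as VertiPaq layer on run-length encoding, segments, and much more, so this is only the core intuition):

```python
# Toy dictionary encoding of one column (illustrative; not the actual VertiPaq format).
column = ["Widget", "Widget", "Gadget", "Widget", "Gadget", "Widget"]

# Build a dictionary of distinct values, then store each cell as a small integer id.
dictionary = []
ids = []
lookup = {}
for value in column:
    if value not in lookup:
        lookup[value] = len(dictionary)
        dictionary.append(value)
    ids.append(lookup[value])

assert dictionary == ["Widget", "Gadget"]
assert ids == [0, 0, 1, 0, 1, 0]

# Decoding reproduces the original column exactly.
assert [dictionary[i] for i in ids] == column
```

The more repetitive the column, the bigger the win — which is why wide, repeated attributes in one giant table are expensive, and thin key columns compress so well.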
33:14 And the Delta tables are what bring the engineering upstream. So from my perspective, when I look at this, I'm turning my head — my mindset. When Power BI Desktop came out, we did everything in Desktop: you imported, you made
33:35 measures, you made relationships, you did everything you needed to do to build that report. It went from raw data to report in one tool; it was in Desktop, and you could only do it in Desktop. Where this gets interesting to me is that we started in a place where the single tool did everything, and now we're at another place, where the Power BI service has been adding additional break points further and further up into the data engineering
34:05 ecosystem. So instead of writing my M, making my tables, and doing everything in Desktop, we're actually pulling more and more things away from pure report building. And I think what we're doing now is bringing other Microsoft tools closer, so the integration is becoming tighter. What I see happening here is: the size of data is increasing, the tools we're using are getting more modern, and we're adopting some more modern Lakehouse-type
34:37 architecture pieces, but what's happening is we're pulling more and more things out of Desktop. Initially it was dataflows — the M code could be stored in the lake — but we still had the VertiPaq engine, so you'd still have a CSV file with lots of rows of data in it; when you refreshed your Power BI dataset, it would still go read that file, compress it, and store that compression in the model. What we've just now added with Fabric is that the compression side of the VertiPaq engine has also been moved outside of the dataset side.
35:08 And now that is controlled by something inside the Microsoft ecosystem, in Fabric. So while some of the underpinnings are potentially changing — where we're leveraging Delta Parquet, etc. — the engines, and how they operate across a star schema (normalized dimensions, a denormalized fact table, where you're reducing that repetition): the same principles still apply as far as the structure of data and
35:39 how it's going to work most efficiently. Yes. And I guess what I'm saying here is: we were given a tool that had everything in one nice pretty box, and then, as data increased in size and we needed more control over all the different things we're trying to do, the tool is being expanded. What we're seeing is the adoption of pipelines and notebooks and all these other things that can prepare data upstream, which is
36:09 a traditional data engineering exercise — but now it's all incorporated into a single Power BI/Fabric ecosystem. It still does the same stuff: you still connect to data, you still make column stores of information, you still serve it in a cube, and that still goes to the report. All of those are the same, but now you're seeing distinct tooling pieces appear, because Microsoft is enhancing each of the tools all along that path. Does that make sense? Yeah — I think it'll be interesting as more of the business
36:40 gets involved, because I think the opportunity to do that in Fabric is probably the best out there. It's not a new theory, but it's breaking down the silo. I was just watching another conversation on this yesterday, where the argument was that the business should own everything reporting-related, not IT — and that's this siloed, old version of IT. But I think we've
37:11 seen over the years that this mix of people who are deeply engaged in the business, trying to solve business problems in the BI space, is already bridging the gap between traditional IT and the tech stacks. Whereas now we have a platform where it's not like, oh, I'll give you permission to four different things and you need to navigate and figure out where all this stuff is — now it's just part of the same workflow. And where this gets interesting, from my perspective, is if you
37:41 start building things in such a way — modeling data from source systems that the business knows — that all of a sudden it starts to make sense even from a Lakehouse build structure. Really, the Power BI side is then just plugging into those objects; it's not even having to remodel or rebuild things. But that's phase two. Phase one is: hey, if you structure and build things right the first time. Let's take a
38:11 brand-new business area. If you go into that area — one of the best things about Power BI is that I can connect multiple different data sources, I can do all the ETL, I can do everything in that tool. Is that going to scale infinitely? No; we've talked about that — you need enterprise systems for that. Now you have: hey, Fabric is this ecosystem where I can connect to all your different third parties and pull in the data. But how do we build things well? How you build things is: how I already built this for you. If you use
38:43 your original Power BI build as the phase one of data discovery — figuring out all the transformations, doing all the things you would need to do to produce business value — then you can use that as your guide for how you build those in the Fabric ecosystem, which is essentially just the enterprise ecosystem. Saying: hey, we need to build these objects, and here's the structure, your architecture — and then let's
39:13 enhance that, because we have these new tools. But now, all of a sudden, everybody's part of that work stream, and they can easily, I think, understand what it is an engineer may be doing. And I still don't assume business folks are going to jump into Fabric and be able to do all the data engineering, no. But if it's set up and structured, and they're still the owners, or they understand how and where we're doing business logic, I think it scales a lot better in that environment than any
39:43 other we've been introduced to. That's a significant point, and I don't think what you're saying here should be undervalued. Think about it: before, everything in Power BI was built on top of that same engine, all within Power BI Desktop, all within that particular model — the refresh, the query transformations, the modeling, and DAX. Now that engine and the processing are coming from multiple different sides, where
40:15 your transformations are coming from a Dataflow Gen2 and getting pushed, in a sense, as a static table into the Lakehouse, and your DAX calculations are coming from the service, or from a model that's not inside that Power BI file. So all the inefficiencies — all the things you would normally build that would, in a sense, blow up your model — are, I don't want to say irrelevant, but gone, where before you had to be so conscious of them. Which is a big point; we've talked about
40:45 that grow-up story of a model to the enterprise. But I think the bigger part here, just from the efficiency point of view, is the fact that the processing is coming from multiple different engines now; it's not coming from just Power BI Desktop. So, I'm taking a lot of notes here on both of your comments, and I agree very much with both of them. I'll just add a couple of notes. Seth, I really liked your idea about having the organization stand back and actually say: what do we care about?
41:16 I think a lot of times we focus on the weeds of things, right? I have a need to rip a little bit of data out of Salesforce, I've got to attach it to some internal data that I have, I'm trying to blend it with something else — it's a very narrow view of what that dataset should look like. When you have a tool like Fabric, you can really, to your point, step back and say: what do we care about? What is the bigger picture around our data? We have products, we have
41:48 customers, right? How you get to those tables of information, and what you do to make sure your business runs — you can now have broader conversations around that. Well, yeah, we should have a products master table. Okay, great: who owns it? Where does it need to live? We're going to make one in Fabric — who should have access to it? What does that look like for the entire organization? So to me, I feel like we're in the middle of what
42:19 we've been talking about for a long time: these data silos that exist in organizations. Everyone does their own thing, unless you have a central IT team that everything is beholden to, and they hold everything with an iron fist, because all the data must come from that single department — the only department that can see the data joins across departmental things. So what I feel like I'm seeing here is that Fabric is a tool that allows us to have a broader view of all the departments, all the things we care about, and we can really start having conversations about: here are the central
42:50 data things that we care about. The other side note here is that I think now, with Fabric, we need — or we're getting — more visibility around responsibilities for data. Governance is becoming more important, in my opinion; data stewards, and the identification of data stewards — who owns the data — is becoming more important. Because previously, IT owned everything I connected to with an Import, right? That was somebody else: you own the server, you
43:21 have everything else upstream, I don't deal with it; I start at the Import portion of my Power BI build, build my dataset how I want, and then output a report. What I feel is happening now is that we're drawing that line further upstream, and now I have more lines — more people that can be involved in the process — which means better governance is required. Who owns the product table? Who owns the customer table? Where does that data come from, and what happens when the data coming into our systems is out of date,
43:52 incorrect, wrong? Who's going to go back in and fix that information? We're now able to have those conversations. And in my mental model, we now have permissions at the Lakehouse, a workspace, a dataset, and an app — all different layers, deeper into where the data is being created, where we can now provide new access points for individual people. And I see OneLake security playing a very pivotal role in this. Yeah, this is really
44:23 important, Mike, because Fabric is about more than the end goal being just a Power BI dataset. More and more organizations are acquiring or building data science teams, utilizing notebooks, utilizing machine learning — and a tabular model is not a requirement to do so. Notebooks, even Data Wrangler — us playing around with that, love it, super cool stuff. Yeah, but the tabular
44:49 model doesn't make sense there, because you'd have to merge all of that into one table. So there are more and more applications in Fabric — or I'm pushing to a Power App or Power Automate now, all the integration — where the tabular model by itself, I don't want to say doesn't make sense, but is not as applicable as in a normal Power BI model. And so this is where I want to keep emphasizing: I think what we're doing is getting more
45:19 visibility into governance. So to me, what this is really accelerating, in my mind, is what you just mentioned, Tommy: I have three or four different ways I can make data. I can make it through a pipeline, I can make it through a dataflow, I can make it through a notebook, I can make it through a KQL query, I can now use Data Activator — there are so many different compute engines that can push data around and do
45:49 different things. So what is the right mix of these things, and who owns them? Because if you stand one of these up, there has to be, to some degree, some documentation and a little bit of succession planning — you can't have someone in marketing owning products and then leave, get promoted, or go somewhere else. Because what I see a lot of times, in organizations that are starting out in Power BI, is that if leadership doesn't have the full vision for Power BI, individual users of that environment become very successful — they do really well in their
46:20 department, and the users themselves learn Fabric and can do better stuff, but they outgrow the policies and the people inside the organization. So I see it often: someone shows up, they get really good at Power BI, and leadership can't recognize how good they are and how much value they're adding. So instead of pivoting to enable that person to do more, that person leaves; they go to a different company, or start their own thing — some go to a company that
46:51 is actually adopting the mindset of: we're going to step forward, we're going to do things with Power BI. Yeah — it's funny how many crossover points we have with something I just listened to yesterday. Like I said: C-suite buy-in, 100%. If an organization is going to have long-term change, or an opportunity to build data culture and literacy across the organization, there has to be C-suite buy-in — particularly for what Mike just talked about. The point I want to make, before we dive down into
47:21 build: this is Fabric and that ecosystem, and all the points you were hitting align, I think, with why we keep talking about having clear objectives that all teams align to. This is the importance of having OGSMs or OKRs as an organization, with a mind for data, because that allows these teams to link up where they're good and figure out: how do we
47:52 build what the business needs, and solve these problems in a way that is both solving the business problem as fast as we can and looking for sustainability? And Mike, this is where it's: think like the business, act like IT — because we've been in this world for a very long time. But we still have problems: without C-suite buy-in, without objectives that the whole ship can link their things to, and
48:23 without good interaction between the business and IT, you're not going to win. Yes — and the main reason is that business teams' main focus is rapidly solving problems for customers, whereas the IT perspective is thinking long term: plan ahead, folks — you all do the same stuff; figure out what the overall solution to your problem is and make a request for the singular report that's going to solve everybody's
48:55 problem when they add a filter, versus a hundred of you making the same request with little variations and differences. That's what kills a BI team, 100%. So that's where tech teams come in. They're like: no, no, slow down — I own all the data; we need to build this in a sustainable, scalable way that you can reuse, and the next time you ask for something it's going to be a lot easier. But what they don't get is: guys, the business moves so much faster than this. How do
49:26 you produce something for them in an iterative fashion that is going to be useful — that solves the objective, or at least gets everybody on the same page for long-term company-building strategies, as opposed to just throwing sticks at each other across departments? Yes — and I think that's the fun part about business intelligence in general, and about the new platform, Fabric. Yeah, totally. Sometimes rocks get thrown too, right?
49:56 The fun part about being in business intelligence is that we've been in this world, I think, longer than a lot of folks, but now — like Power BI was, and now Fabric — I think it accelerates, or pushes that even further into the business, saying: hey, come along for the ride. Everybody should be going along the same path toward the same objectives; here's how we're doing that, and all of us understand how it's happening. You may not understand all the technical
50:27 parts of it, but I'm still going to produce the same things, with higher visibility for you, because I want you to be part of the journey. And ultimately that rides into the other part of the conversation, which is ownership of data: I don't own everything. Right, exactly — you have the business logic, you have the business knowledge of all the intricacies — to Johnny Winter's point, all the case statements and all the special exceptions that have to happen in the data to produce numbers that are spot-on. You guys
50:59 are the subject matter experts, so either we're working extremely closely with you, or you actually skill up and level up into these platforms — how to interact with and modify the data that's going to get refreshed and pushed into the reporting. Fun ecosystem. So, based on what you said, I'm going to make a statement, and I want you guys to tell me whether it's an overreaction or not. Sure. Overreaction! Just kidding —
51:29 you got it, right? Tommy, you walked right into that one. I know, I know. But this is — to what Seth was saying — based on the new technology landscape and the data landscape we live in: the importance of collaboration, committee, and process is going to matter more than the skill in the technology itself. The skill of having the right process is going to be more important than skill in just the technology. And the reason I say
52:00 that — so it isn't an overreaction — the reason I say that is because you're now working with so many teams, and there's going to be so much crossover in how you actually implement a project, how that process is handed off, and how the conversation — the communication — happens, that that skill has now risen above the technological skill. No — I think what you're outlining is that all of these challenges and breakdowns that are happening in
52:31 organizations, where things don't align between IT and the business, are a direct result of that. I think all of that is going to come forward more in conversation: there has to be good hand-off, there has to be a good process that we develop, there has to be a strategy that someone is developing along the way to make these teams understand how all of this is supposed to work. We've talked about people, process, technology — one is not more important than the other;
53:01 they all have to work in sync with each other for any of these projects to be successful. Sure — that's why I phrased it as: is that an overreaction? Mike, is that an overreaction? I think the language is a bit too strong, in my opinion, so I would align with Seth on this. I like to think of it as balance. There are always three parts to this — people will maybe add more — but it's people, process, and technology, and you can compensate for weakness in one area by overcompensating in another. For example,
53:33 if your people aren't skilled up enough to use all of Power BI, you can strengthen that part of the three-legged stool by adding more technology. If you don't have solid process, you can lean more on your people to devise how they're going to release things — you don't have a strong process, but you can lean on the people and/or the technology to help you build some of it. One example I'm thinking of is deployment pipelines — which, by the way, as
54:03 I've been reading on the Microsoft blog recently, can now have more than three environments. Microsoft has just opened up multi-stage deployment pipelines: you can have four or more deployment environments for your Power BI artifacts, which is cool. But that is a technology solution to what I would say is a weakness inside organizations — particularly on the business side — of getting things from dev to production. We
54:33 don't have a solid check-in/check-out process for things; therefore we use a deployment pipeline. If we follow that, the technology has shored up a weak process. So in my opinion it all has to be balanced, and to say the technology is going to overrun the other parts of this is, I think, mistaken, because there is no one silver bullet — you can't focus on only one area and expect to be really successful. Well, for the sake of argument, I think it's an overreaction too, but what
55:05 I'm trying to say is: the need to implement a good process is ever more important now, because of the technology. This is not something that can survive like we could in the past, with just one person developing. And I'll add my little term on top of this: governance is what's needed. Process coincides very well with this whole governance thing, and my idea here is that you need more governance because now, more than ever, you can connect
55:37 anything to anything, right, Tommy? We've been playing around with Fabric: you can take a dataflow, do some data engineering off of Lake A, and drop the result into Lake B. So you can now use dataflows to move data between lakes, between workspaces. And again, what I talked
55:59 about earlier: there are now permission levels at the app, the dataset, the workspace, the Lakehouse — all these different layers of permissions you can give people access to. So now, in my opinion, there's much more need to simply be organized. And I think this is doing the same thing Power BI did: Power BI turned report building into a commodity. Anyone can
56:32 build a report; it's a commodity — they're all over the place. So what this accelerated, in my mind, was the need for certified datasets and certified reports; that's becoming, I think, more important for organizations. These are things that someone, or some team, has put some rigor or process around — to your point, Tommy, the process piece — and they're going to put their stamp of approval on that thing. Now add to that all the other things that come along with Fabric: where does that sit? Do we have certified
57:02 lakehouses? Do we have certified workspaces? How are you going to engineer that? The same rigor that used to live only in IT can now be brought further, closer to the business. So now we need better process around how we administer and govern these things, and how we get good data out the door all the time — because what I don't want to be doing is running around on a weekend trying to figure out why my dataset won't refresh. And we also found out, Mike, that it's way too easy to delete a table in Fabric, in a Lakehouse. Maybe — like, yes, there's no box that
says: are you sure — are you sure you want to do this? There's no safety net when you drop a table. Yeah — Tommy and I were doing some Fabric work; we actually did a Learn Fabric session yesterday, and in our testing you can use the same SQL command as you would in Databricks — you say DROP TABLE and the table name, on any table — and great, it just removes the table. Welcome to the back end. Well, yeah, but in Databricks, if you drop the SQL table, the physical table still exists as files on the storage account.
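The "no are-you-sure box" point holds for SQL generally: the language has no built-in confirmation step. A quick generic illustration using SQLite — this is not Fabric or Databricks behavior (what happens to the underlying files there is exactly what's being debated here), just the bare DROP TABLE semantics:

```python
import sqlite3

# Generic SQL illustration: DROP TABLE takes effect immediately,
# with no confirmation prompt built into SQL itself.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, amount REAL)")
conn.execute("INSERT INTO sales VALUES (1, 100.0)")

conn.execute("DROP TABLE sales")  # gone — no prompt, no undo

# Querying the dropped table now raises an error.
try:
    conn.execute("SELECT * FROM sales")
    dropped = False
except sqlite3.OperationalError:
    dropped = True

assert dropped
```

Any safety net (soft delete, retention of underlying files, recycle bin) has to come from the platform around the SQL engine, not from the DROP statement itself.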
58:08 table still exists in files on the storage account so I can’t in fabric I couldn’t see like fabric actually cleans I can’t see it that’s interesting because maybe it does maybe it doesn’t I don’t know yet cuz I can’t see what’s inside the one Lake I can’t I can’t go to the lake housee and like literally search for or scrape through like okay here’s the one link does it have a container for tables does it have a container for files and can I go to that container and actually see there kid just kidding let me plug into my last version of this yeah like oops I didn’t mean to delete it let’s recreate
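As a concrete picture of the 'no safety net' point, here is a minimal sketch using Python's built-in sqlite3 as a stand-in for a SQL endpoint. sqlite has no managed-versus-external table distinction, so it models the case the team worries about, where the drop takes the data with it; the table and column names are invented for illustration:

```python
import sqlite3

# In-memory database standing in for a Lakehouse SQL endpoint.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (id INTEGER, amount REAL)")
con.execute("INSERT INTO sales VALUES (1, 9.99)")

# No "are you sure?" box -- one statement and the table and rows are gone.
con.execute("DROP TABLE sales")

# The catalog no longer lists it.
remaining = [row[0] for row in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
print(remaining)  # -> []
```

Whether Fabric keeps the underlying Delta files around after a drop, the way Databricks does for external tables, is exactly the open question in the conversation above.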
58:38 There's some weirdness there where I don't get to see all the things that are happening inside the Lakehouse. And yeah, you're right, it's not about the sell, Alex, you're 100% right. I love Lakehouse storage, and this is the reason I like it so much, particularly the star schema stuff as well. This is all the stuff I've been developing for the last five, six years outside of Power BI, and I'm super enthused about it, because this is the first view of the world we're getting that says: we're going to bring all this
59:11 really rich, decade-plus of data warehouse development, and we're going to bring it forward, into Power BI. And from the business user's perspective, business users get a great deal: we're getting so many new features, which is awesome. I'm not sure this is a great sell for the data engineer and the data scientist yet; I don't think the features are rounded out enough for those personas inside Fabric. But if nothing else, it's allowing us all to play together in the same ecosystem, and I'm sure Microsoft is going to continue to close the gaps. They're definitely
59:41 listening to what they can do to improve it, so that they get to a premium experience for everyone: data scientists, data engineers, data modelers, all the way down to the report-building experience. And I like the direction they're going. So, all in all, I think star schemas for the win, right? Performance, usability of downstream calculations, dependable, repeatable results, and a wealth of knowledge and documentation online. The downsides still remain
60:11 the same, right? It requires people to understand them, so there's a knowledge gap, probably, for brand-new users, and a little more planning when you're developing and building reporting. I was reading something earlier where somebody asked, 'well, can't we just get away with not doing them?' And the answer was, 'yeah, up to a point, until you have to.' And what he had done, and he'd been with the company for a long time: 'yeah, I just skipped them, and now I spend all my time rebuilding all of these reports I built, because of the models.'
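What 'facts plus dimensions' buys a model, predictable filtering that flows from a small descriptive table through one join into the fact table, can be sketched with Python's built-in sqlite3; the fact_sales and dim_product tables and their columns are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# One narrow fact table of measures plus keys...
con.execute(
    "CREATE TABLE fact_sales (date_key TEXT, product_key INTEGER, amount REAL)")
# ...and a descriptive dimension table the business filters by.
con.execute("CREATE TABLE dim_product (product_key INTEGER, category TEXT)")

con.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)", [
    ("2024-01-01", 1, 100.0),
    ("2024-01-01", 2, 50.0),
    ("2024-01-02", 1, 25.0),
])
con.executemany("INSERT INTO dim_product VALUES (?, ?)",
                [(1, "Bikes"), (2, "Helmets")])

# The filter flows from the dimension to the fact through a single join --
# the same single-direction relationship a semantic model relies on.
total_bikes = con.execute("""
    SELECT SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    WHERE p.category = 'Bikes'
""").fetchone()[0]
print(total_bikes)  # -> 125.0
```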
60:41 And I think what this is going to drive, and this conversation, especially between IT and business teams, or business intelligence teams: there is going to be, and is starting to be, a rapid back-and-forth of value versus system constraint. This has always been there, just at a much slower pace, and now what we're saying is: hey, yeah, here are the tools, here are the keys to the castle, but if you trash the castle, somebody's got to come in and clean it up, right? So, to not get yourself in a
61:12 position where all of a sudden this rampant, rapid building without any foresight creates a massive problem your organization has to solve, either by spending an enormous amount more money or by going back and refactoring things, it's really important, to Tommy's point, that the stress be on process, and on strategies to ensure these teams understand what this new ecosystem looks like, and that we're a team: we aren't two different competing priorities, we're all aligned to the same
61:43 objectives. And that obviously leads to the fact that organizations need to have these teams on the same objectives, in the same boat, because the larger the organization, the harder that boat is to turn, right? Oh, 100%. This is where the breakdowns have happened in the past, where it's like, oh my gosh, we just launched this year-long initiative, it has been going down this direction, we are an aircraft carrier, and you just introduced this to the business, and the business says, 'I don't need that, we need to build something else.' Okay, well, now
62:15 we're six months from being able to turn the ship. I think the tools and technology are at a point where they're focused on solving the business problem, and that's exciting, but it's also terrifying for technologists, because, man, there's a balance here. We've all got to be on the same page: there are things you don't understand, and you don't need to, but I'm here to help you solve your business problem, so let's develop these processes and do this together. Yeah, my last two
62:45 cents: I think no matter what the output is, we have more outcomes than ever before. If you're still playing in the world of DAX and the DAX language, star schema is supreme. The star schema is, by design, how DAX works efficiently, regardless of whether you have the best engine in the world or one that works differently. If you're working in DAX at all, which you are in a Power BI dataset, in the Microsoft Power BI world, DAX reigns supreme, and star schemas reign
63:16 supreme. And I'll emphasize this as well: I think new users don't understand the value of the star schema. As you mature, you get more comfortable with it, and you understand there are more data transformations you need to do upstream. So my one word of advice here: if your DAX is getting very complex, if you're doing a lot of iterations on top of your data, that's a good opportunity to start thinking about how you can reshape the data to make it better for reporting.
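The 'reshape upstream' advice can be sketched in plain Python: instead of iterating over raw rows in a complex measure downstream, land a table that is already at the grain the report needs, so the downstream calculation collapses to a simple sum. The row shape and names here are invented for illustration:

```python
from collections import defaultdict

# Raw transaction-level rows as they arrive from the source system.
transactions = [
    {"customer": "A", "amount": 10.0},
    {"customer": "A", "amount": 15.0},
    {"customer": "B", "amount": 7.0},
]

# Upstream shaping step: pre-aggregate to the grain the report needs,
# so the downstream measure is a plain SUM instead of a row iterator.
totals = defaultdict(float)
for row in transactions:
    totals[row["customer"]] += row["amount"]

fact_customer_sales = [
    {"customer": c, "amount": a} for c, a in sorted(totals.items())
]
print(fact_customer_sales)
# -> [{'customer': 'A', 'amount': 25.0}, {'customer': 'B', 'amount': 7.0}]
```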
63:47 And I really do think it may not be a true star schema all the time; you might have many facts and dimensions that serve those needs. But once you really start understanding how valuable that star schema is, you start focusing a lot of your engineering effort upstream, to support the downstream star schema development. And that really shakes out a lot of business requirements; it makes you understand them, and get them into a good place. The word I hate right now is 'allocations.'
64:18 Whenever I hear the word 'allocations,' that just means a whole lot of work. Maybe I shouldn't hate it; maybe I should just start seeing dollar signs. Whenever I hear the word 'allocations,' that's when I start seeing dollar signs, because that means the business has a different view of their data than what the operational system has today, and we have to reallocate or reshift those data points, which, again, just takes work, takes effort. So I really like Fabric. I think it really enables this discussion; we can have better conversations around what this looks like.
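An allocation, in the sense described here, usually boils down to spreading a number the source system records once across the categories the business wants to see it in, by some weight. A hypothetical sketch in plain Python, with all figures invented:

```python
# A cost the operational system records once, at company level...
overhead = 1200.0

# ...that the business wants spread across departments by headcount.
headcount = {"Sales": 10, "Engineering": 25, "Support": 5}
total_heads = sum(headcount.values())  # 40

# Reallocate the single data point into per-department rows.
allocated = {
    dept: overhead * n / total_heads for dept, n in headcount.items()
}
print(allocated)
# -> {'Sales': 300.0, 'Engineering': 750.0, 'Support': 150.0}
```

The arithmetic is trivial; the work Mike is pricing in is agreeing on the weights and landing this reshaped table upstream of the semantic model.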
64:49 But at the end of the day, the datasets should be star schema. You now have a wealth of new tools to shape and add that business logic outside of the data-engineering and modeling side of things; you can now do that in Fabric and other places, which I think will be very beneficial for teams and people really starting to build. All right, with that, we don't have any other announcements or other things going on. That was a really good episode: we may have talked about star schemas a bit, but maybe more about broad
65:20 patterns in governance, and people and process, as well, in this episode. We hope you liked it; we hope you enjoyed the conversation. Thanks, everyone in the chat, for some really good comments. I've got to call out a couple of people here: Donnie, and Donald and Johnny, I was trying to put both your names together as 'Donnie.' Thank you very much for your comments, you guys were awesome; some really good thoughts that really added value to the conversation, so I appreciate you jumping into the chat as well. If you liked this conversation, if you think someone else would benefit from it, we'd really like you to share it on social media. Let
65:51 somebody else know you found this a good conversation and you enjoyed what we were talking about, and that Michael probably messed up your name. With that, Tommy, where else can you find the podcast? You can find us on Apple, Spotify, or wherever you get your podcasts. Make sure to subscribe and leave a rating; it helps us out a ton. Do you have a question, an idea, or a topic you want us to talk about in a future episode? Head over to PowerBI.tips.
66:26 You're getting good at that. I like it, it's getting dynamic now; I like how you're reading it better and better, it's awesome. Yeah, it helps when you know what you're gonna say, exactly right. I just wing it a lot of times, so I just make things up. Anyway, that being said, have a great week, and we'll see you next week.
Thank You
Thanks for listening to the Explicit Measures Podcast. If you enjoyed this episode, share it with a teammate and subscribe so you don’t miss the next one.
