PowerBI.tips

CLM Part 5 - Deploy Content – Ep. 351

September 4, 2024 By Mike Carlo, Tommy Puglia

Content lifecycle management only works if teams can reliably move Power BI artifacts from development to production without breaking what users rely on. In this episode, Mike and Tommy walk through CLM Part 5—deploying content—and talk about the maturity, process, and tooling decisions that make deployments repeatable.

News & Announcements

No major news items this episode; the hosts skip straight to Tommy's Beat from the Street segment on using generative AI for DAX.

Main Discussion

This episode is part of the Content Lifecycle Management (CLM) series and focuses on the part that most teams feel the pain of first: deployments.

Mike and Tommy dig into questions like:

When teams are “ready” for a deployment process

Not every organization starts with dev/test/prod discipline, especially with self-service teams. The discussion frames deployment pipelines and other release tooling as a maturity step—something that becomes necessary once changes start impacting consumers and reliability matters.

What you’re actually deploying (and why it breaks)

Power BI deployments aren’t just “a report.” Teams are shipping a collection of artifacts (semantic models, reports, lakehouse items, etc.), and changes to upstream pieces can ripple into downstream consumption. The episode emphasizes designing the process so you can iterate without breaking what’s in production.

Deployment pipelines, branching needs, and the real-world workflow

The conversation highlights practical expectations teams have once they start using pipeline-style tooling:

  • Clear comparisons (what changed, what will be overwritten)
  • Confidence checks before pushing to the next stage
  • Support for real-world patterns (like multiple destinations / branching)
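One way to make the "confidence check, then push" step repeatable is to script the deployment pipelines REST API ("Pipelines - Deploy All") instead of clicking through the portal. This sketch only assembles the request; the pipeline ID is a placeholder, and the authenticated POST itself is an exercise left to the reader:

```python
def build_deploy_all_request(pipeline_id: str, source_stage_order: int, note: str):
    """Assemble the Power BI 'Pipelines - Deploy All' REST call.

    source_stage_order: 0 = Development, 1 = Test; content deploys to the next stage.
    The actual POST needs an Entra ID bearer token, omitted here.
    """
    url = f"https://api.powerbi.com/v1.0/myorg/pipelines/{pipeline_id}/deployAll"
    body = {
        "sourceStageOrder": source_stage_order,
        "options": {
            "allowCreateArtifact": True,     # let the target stage create missing items
            "allowOverwriteArtifact": True,  # let the deployment overwrite existing items
        },
        "note": note,  # shows up in deployment history; a lightweight audit trail
    }
    return url, body

url, body = build_deploy_all_request("my-pipeline-id", 0, "Sprint 12 release")
```

Keeping the `note` meaningful (ticket number, release name) makes the pipeline's deployment history double as a change log.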

Looking Forward

If you’re implementing CLM in Power BI, the “deploy content” step is where process meets pressure: production users rely on stability, while creators need room to move fast. A lightweight, repeatable deployment approach—paired with good conventions and clear ownership—can drastically reduce break/fix work and make scaling your Power BI estate much less chaotic.
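A lightweight, repeatable approach can be as simple as a post-deployment smoke test: pull a handful of key figures from the source and target stages and fail loudly on drift. The pandas sketch below assumes you have already fetched those figures (the fetch itself, for example over the XMLA endpoint, is out of scope), and the column names are illustrative:

```python
import pandas as pd

def smoke_test(dev: pd.DataFrame, prod: pd.DataFrame, tolerance: float = 0.0) -> list[str]:
    """Compare key measure values between two stages; return a list of failure messages."""
    merged = dev.merge(prod, on="measure", suffixes=("_dev", "_prod"))
    failures = []
    for row in merged.itertuples():
        if abs(row.value_dev - row.value_prod) > tolerance:
            failures.append(f"{row.measure}: dev={row.value_dev} prod={row.value_prod}")
    return failures

# Toy stand-in data; in practice these would come from your deployed semantic models.
dev = pd.DataFrame({"measure": ["Total Sales", "Row Count"], "value": [1000.0, 42]})
prod = pd.DataFrame({"measure": ["Total Sales", "Row Count"], "value": [1000.0, 42]})
print(smoke_test(dev, prod))  # an empty list means the stages agree
```

Running a check like this after every promotion, and blocking the app update when it fails, catches the "numbers are totally wrong in prod" scenario the hosts warn about before consumers do.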

Episode Transcript

0:35 Good morning and welcome back to the Explicit Measures Podcast with Tommy, Seth, and Mike. Good morning, gentlemen, and a happy Tuesday to you. It is a Tuesday, good morning. Back in our area, the children have now been launched back to school; school has happened, we are now officially back into the school year. Wait, so that's what happened? Wisconsin, just now in September? Today is the first day of school. That's right, yep. It seems

1:07 weird, but yes, we have a late ending to the school year and a late start, so we do everything very far behind. I think it's primarily due to them trying to extend the vacation season as late into the year as possible, because the weather is still good right now; it's not warm enough earlier in the year, so they extend it later so we get a good warm end of the summer. What are we talking about today? Well, the main topic for today is we're going through the content lifecycle management system,

1:39 I guess, the documentation that Microsoft has provided. Today we're on a topic that's near and dear to our hearts; we've done a lot of talking about this in the past: deploying content, how do you get it out the door? So we'll talk about that today, that's our main topic. But before we jump in there, I don't have any major news items, so I think we're skipping the news. But Tommy has a Beat from the Street, some real practical-world use cases around a topic. So Tommy, jump us into your Beat from the Street. And this is definitely going to become an article too. So I've been

2:10 working with a client on, I don't want to say too unique of a Power BI DAX situation, and I was having trouble kind of explaining it. Obviously one of the ideas is, if you haven't used Copilot or your GPTs for DAX, you can; it's a lot of hit or miss. But the scenario I wanted was: okay, when I interact with a given visual I want to apply a filter here but not on the other visuals, because I wanted this story of you're going visual by visual and

2:41 then showing everything related. And I'm like, that's going to be hard to explain to ChatGPT, and I know I'm going to have to go through it multiple times. But one thing you may not be aware of is that all the generative AI tools, especially Claude and ChatGPT (the 4o version), are really, really good with image recognition, and not just telling you whether it's a pizza or a hot dog. I first gave it a diagram of my relationships, a picture of my relationships, and described what I wanted.

3:12 Interesting. And I gave it a screenshot of the visuals and basically put, like, when you click on this, here's what happens: the text in the visual itself as the explainer of what I wanted to occur. I barely explained anything else; I just gave it the diagrams of "when I click on this I want this to happen, or show all the related fields." It already knew what my model looked like. I had to make one edit to one line and the code worked perfectly. Really, sharing diagrams with your models

3:45 with a ChatGPT or one of those, the image recognition there really changes how well or how fast you can write your calculations. So it output the DAX code you needed? Yeah. The diagram explained my model, so it said here are the tables, here are my main metrics, and the diagram told it what I wanted to do: hey, that visual on the bottom right should show everything related, that's all. And then basically it looked at the

4:15 visual and went, okay, he's talking about that red box; because he said red box, I know what that means, I understand the context of the diagram. Oh yeah. And so I probably wrote to ChatGPT itself four sentences, maybe, the whole thing, but it was the screenshots that made the difference. So this is something I really want to explore now, especially when we're talking about task flows. This is pretty cool. And I don't know how much time that

4:46 saved. Saved, yeah. That's interesting, it solved the thing for you, that is cool. Yeah, I'm learning very quickly that prompting is not just, I guess, one shot. My initial opinion was that it's like a Google search: you go to Google or any other search engine, Google's the one you go to, and you type in a sentence and it outputs results. With prompting, there seems to be, like,

5:16 you're trying to think more like there's a program, or potentially a sequence of things you need to do, to get the answers you want. Another example along these lines, Tommy: there's another one out there that I have to try, it's called Cursor. Yeah, I've heard about this one a lot and I've seen it a lot on my feed, so I've got to go check it out. It's every other week, though, that there's another interesting AI doing things that are supposedly groundbreaking in code generation. But

5:47 anyway, that aside, thinking through it: if you have M code and you want to say "explain this code," you could have it explain it, and I've had ChatGPT give me, okay, here's the pseudocode of what's occurring in this line of work, and then it actually goes through and comments it section by section. But if you change the prompt slightly, it's more of a, I'm starting to get my head around it, multiple prompts to get what I want. The first prompt is "document this code using commented lines," right? Okay,

6:19 now that the code's documented, now "rewrite this code as if it was SQL" or another language. You add more context to the question by giving it prior steps. Oh yeah. So did you try to describe what you were after before you threw the images in? I've done that before, but the problem, I think, especially with a Power BI metric, is that, just like

6:50 filter or evaluation context, everything is so dependent on your model and the columns you're choosing. It's not just "I want this result"; usually you need to know the tables. And I've gone down that path, like, well, no, not this, and I've tried to explain my model with columns, like pipes, to say here's what my table is. Because the two big things, Mike, you just talked about, what makes ChatGPT different than just a search engine in terms of really utilizing it, are the two things you just said: that idea of

7:20 context, understanding the situation and everything around it, which is what that diagram showing it the relationships in the model does; and then a target or an objective, telling it what you're trying to achieve. That completely changes the output of what you're going to get, from a very general answer, or in the cases you're describing, like Cursor, which is like VS Code except really built in, it needs to understand not just your query but the things related to it.

7:52 So yeah, I've tried describing with text before, but it's the image recognition, and again it's much more than "hot dog or not," in relation to Mike's reference to Silicon Valley. It's incredible; it more than just understands the diagram, and then in relation to what I said I wanted it to do, it's pretty incredible. Interesting. Does

8:23 this in any way, Tommy, impact what we're seeing with, so in the August update of Power BI Desktop, the AI skill has now gone into public preview. Is this similar in nature to what's going on there, or is this something totally different? So the AI skill one I can't wait to get my hands on, because the output of that is going to be for the business to use: it's going to be being able to ask questions about your data, not necessarily writing your own code the way Copilot does for developers. Interesting. I

8:54 love that. It's like, remember automated insights, but we couldn't do anything with it? Mhm. I remember submitting a question to Microsoft when automated insights came out: can we weight this at all, or control this at all? Because it's cool, but I can't control at all what Microsoft was going to weight in my tables, or have my models say this is more important than that; it just gives you whatever it found relevant to any of your columns. Okay. Yeah, it's interesting, because it

9:25 seems like, again, maybe this is a concept that's been around that I'm just starting to get my head around, but it sounds like you use the AI to generate the prompts that you want, on how to prompt things. Yeah, so even once you get to a result, I heard a technique somewhere that once you get to a result that you like, you can actually ask the AI what prompt would produce this output, and ask the AI to send you back the prompt that you would need: in one prompt, how could I get this same output?

9:56 And it'll write a thing for you

9:59 that it could understand, that it could then use to write the output. Again, my head is starting to spin with all the things I need to start figuring out and learning with these AI things. I'll still say I'm a little bit hesitant; maybe I'm in that old-school camp, it hasn't quite rattled my cage enough yet, but interesting, good topic. All right, cool. Any other thoughts on the Beat from the Street around AI and implementing it in Power BI? I'm going to make an article too,

10:29 so we'll actually, hopefully, yeah, that's a good one, there you go. I'd love to see, especially Tommy, since you're talking about prompting and stuff like that, what prompting things you're looking at, even using the generic model and other things, not that you can use customer data, but I'd love to see the prompts you're doing, what you're sending, what screenshots need to be sent in, because that's something other people should be trying, to see if it's adding value for them too. Cool, excellent. All right, with that, let's move over to our main topic for today. The article

10:59 for today is going to be around deploying content, which, again, I think we've had a lot of conversations around. I'll throw this link in the chat window as well; that's our main topic. Tommy, you want to kick us off with some initial thoughts on this topic? Yes, we are now on part five of our content lifecycle management series, CLM for short. If you've not heard the others, guess what, you can go back and listen to our previous episodes; they usually come out every other Tuesday. Today is again something we've talked about in general,

11:29 very near and dear to us. We just came from validation of content, that's part four, which was a really lively discussion on, like, we don't have an automated process to do that. But once we get to deploy content, what are some of the methods, especially outlined in the article, when we're talking about, hey, it's not just that we're publishing content to a Power BI app? What are the different ways we can automate that and scale it out, and then what are some of the post-deployment activities that we'd want to do?

12:00 I feel like this article has some solid examples of things that are possible today. But I feel like this article, of the ones that we've read so far, is somewhat, once we start getting into some of the Git interfaces, the more advanced deployment scenarios, a little bit more opportunistic; it's a little bit more "this is what you should do, this is what we would eventually get to,"

12:30 because the solution is not fully baked yet, I guess I would say. So maybe we'll start there. I do like that, again, the articles have all outlined at the very beginning that this article is primarily targeted at these groups of people, which I find extremely important. The Fabric or Power BI administrators are who need to pay attention to this article; the Center of Excellence, which I'm finding more and more important as I get into more

13:01 organizations where this does not exist; it's very chaotic to figure out what the process should be. It doesn't have to do the process, it has to decide what the process is. And then obviously this is for all the content creators to follow the process that's built. So I really like the first part of the outlining of the article. Can we pause on something you just mentioned? Because that was my same thought as well, that it's not fully baked yet; however, it's in the Microsoft

13:31 documentation under their implementation planning. There's something for me, obviously there's that hesitation, but if we're not in the ideal state right now with the documentation, especially with a lot of the developer-based scenarios, it outlines a very basic "we're going to publish to the web," but everything else, I mean, is this for everyone, even if it's in the ideal state? And I think that's where I wanted to start. You wanted to start with the most

14:02 complex scenario, okay. Okay, cool, there we go; we're three cups of coffee in, I'm at your 45-minute mark now, so we'll parking-lot that. This is apparently near and dear to your heart, so you guide me, man, I'll go wherever you want to go. Before I go into all the things, I think we should at least talk about the different ways that Microsoft is approaching publishing content, and I think that's the starting point. Yeah, because I

14:32 think there's actually a missing publishing-content piece that's not in here yet, but we'll get there. So the first sections of this are "decide how you publish the content," and I think that's a good way of thinking through the company: what are people going to make, how are they going to create that content. I'd also maybe articulate that deciding how you'll publish is deciding whether you'll publish certified content differently than self-service content, or content that's not certified. So to me there's

15:02 two big delineations there, and I do feel like in some organizations you need a fast-track path for content that's not certified; there's a faster method to get that out the door. Versus we're going to have less content, maybe larger chunks of content, a larger rock to move, but there are fewer of them, and that's more of the certified route. So I think about that too in the decision on how you publish content. Do you guys agree with that

15:33 two-part process there? I do. Great. What was tickling the back of my mind is, if you think about a lot of the self-service scenarios, it's a good point, because if you're an admin and you're reading this and you're going to do self-service, you're not going to be pushing business people into some of the advanced scenarios; just stop. Yes. So there's a way in

16:04 which you're segmenting out which groups are deploying content in certain ways. What I do like about your comment is it tickled the thought around certification: how do you certify a report or a dataset? Because this could be one of those guardrails where it's like, you will not get certified if you are still deploying straight to Power BI, to a particular workspace; in order to be certified, you have to hand this off to a team that's doing

16:34 something else in a managed way. So an interesting thought: you could put a guardrail around that, and it would be a good one too, before you went down the efforts of solidifying the back end and the data sources, because all of that means nothing if somebody can still make changes on the front end without validation, or without going through a process of more testing. Which is where some of the more advanced scenarios, I think, offer potentially better solutions in the

17:04 future. That's a great point you make there, Seth, and I'm going to give another example of why I'm thinking this way. Another example of this would be: okay, if I'm doing a self-service mode, that may be a workspace that's for the developers of the content, and when I'm done with that, I publish that content into an app, and then said app goes out as the delivered content. Now, right there, there are a couple of weaknesses with this, because any changes to the data model in the

17:34 workspace potentially break all of the reports on the app side of things. So that is a way of doing deployments, but you have to step back and say, okay, is that something our team is willing to do? Are we willing to take the risk of someone breaking the model in the workspace, which potentially will break the reports? That risk is there. So in lieu of fixing that, you build two workspaces, and now you start building dev and prod, or test and prod, however

18:05 you want to call it, it doesn't really matter; there are two now. And now you can say, okay, in the development or test space, things are allowed to break; it doesn't have to be working 24/7. Then you move the content over to the production one, and then you need, okay, what is the set of rules that will say we're good to go, we can publish it to prod, and then deploy apps from there. One thing I noticed here, though: they're talking about publishing from Desktop in a lot of these situations; I didn't see a lot of language in this article around

18:36 using apps in any way. Yeah, it's curious; I was wondering the same, like maybe I just missed it, but it is part of deployment. Because if anything, some developers, you have to keep reminding them that when you use apps, you're not just getting it to the prod workspace; there's that one other really good layer that was added in, almost as a "hey, here's your validation step; now you can update

19:07 your app for the audience." Well then it sounds like the conversation is very focused on the semantic model, not so much the design of the reports themselves, because again, all that content, once it gets to that workspace, would automatically be updated. You're not really talking about "does the functionality work" as much as, again, "have we validated the data." At least that's my take on why they're not speaking to apps as much. Because, yeah, is it the same

19:37 process for semantic model and report design? Let me say it this way: in this article it feels like there's a lot more hand-waving around the checks, "we've already done the validation." So maybe the assumption here is they've hand-waved it, like, okay, we already assume that all this is done, because we've already talked about validating content in a prior step. So I think

19:58 by the time we're getting into this part of the article, we're talking about: okay, we've already verified that the information is accurate and true, we've put that aside; this is only focusing on the things you're going to do to get things distributed to other people, like where do I put it inside the Power BI ecosystem and what does that look like. There is one reference to it way at the bottom, where they're almost looking at that as a post-deployment activity. So,

20:28 under "decide how to handle post-deployment activities," the last sentence of the paragraph is: however, some tasks require manual intervention, such as a first-time setup or updating a Power BI app. But in the scenario where you're manually pushing or publishing something, you can do all of the validation and testing; to your point, you break a model, it's broken everywhere, but at least on the front end there is that segmentation. I could almost think it's part of the deployment process, but I guess it's post-deployment.

21:00 There's an argument here too that a lot of this content belongs in the validate-content section, because the only places where I'm seeing the difference of "we've deployed content" are that we've got to move content across workspaces, or, to your point, Mike, that we're going to get something certified. But a lot of this has a lot of synergy with what you're doing with validation. Yes. Yeah, and there's also what's occurring right now:

21:30 there's this idea of DataOps that is starting to become part of the story here, and I don't think Microsoft or Power BI has a really solid story around it yet. Other people in the community are starting to build DataOps as part of this: deploy something to test, run these tests, verify this data is correct, okay, now go to production, run these tests, make sure it's right. There is this more DataOps experience that can be built here, again a lot more automation, a lot heavier, and not for every team either. So I think you do want to

22:00 be able to publish accurate information, but when we start getting to more automation around these deployment processes, we're talking about a very specific skill set that's likely not in every team, especially if you're talking about a business-user-centric team. I do agree with you; there is some confusion here where we're bouncing back and forth between the whole report, like everything in a PBIX, all of the report and the BIM and everything, or are we just talking about the model? Because some of these, like the third-party tools,

22:30 are you aware of any third-party tool, outside of, like, we use Tabular Editor for validation and deploying model changes, especially in the large-model scenarios? Yep. I'm not aware of any third-party tools that allow you to push report-only via API, not yet. We can refresh; you can execute model refreshes, you can update, there's a lot you can do, but I'm not intimately familiar

23:00 with a lot of third-party tools publishing things at scale. Yeah, there are a couple of things out there. I don't know if this is a third-party tool that you would count, maybe this is a bit of a gray area, but there is Michael Kovalsky's tool around report validation, the report analyzer, which does some analysis on the report side of things. The other thing I'll note here too is all this content looks like it's coming from Power BI Desktop. You look at all the different notes here: publish

23:31 with Power BI Desktop, so I have a PBIX file and a PBIP file and I'm putting it into a Fabric workspace. Then you have third-party tools, talking more about the model, the semantic model, the BIM files, the TMDL files. Then you go down further and you have publish with OneDrive refresh, which has been our staple for a while. We have walked away from the OneDrive refreshing because it's a little more clunky to unbind or reconfigure those things; it's a little bit more "I don't understand what's happening in the back end,"

24:01 it's not as clear to me, so therefore I don't use this one as much. And then they're saying publish with full Fabric Git integration. Again, this works for some things, but this is where I get a little more leery: a lot of this focus is on Power BI artifacts, reports and semantic models; they're skipping things like pipelines and dataflows, which you probably have too. If we're talking Fabric, part of this deployment experience is all the things in Fabric, so they seem like they're

24:32 skipping a lot of those parts. And then they're talking about publishing with Azure Pipelines. So one is with Git integration, which could be GitHub or Azure DevOps, and then they're talking about building out Azure Pipelines. There's no communication of what happens when people build reports in the service: where does that come from? What if we're building other things, like paginated reports, how do you manage those? How do you manage explorations? Those are other things

25:02 that are in there. So I feel like, to me, this article covers 90%; there are a couple of other areas I would like to have noted here as well. Yeah, and I think it's possibly almost a renaming of this series too, because the other situations, which they do cover in their implementation planning on the self-service side, you're right, this doesn't cover at all: any of the situations that we covered when we went through the self-service scenarios or

25:32 managed self-service. Yep. And again, I don't know if there's a link in there to say, hey, if you're doing managed self-service, which really should have its own dedicated CLM series too, that's a big part here. I feel like I can make the argument, going through this, that it would make sense to split it into two CLM series, because to me this is all focused on the semantic model and a much more developer-like back-end role. I would

26:02 expect to see in "deploy content" more on what we are going to do about our documentation, what we are going to do about promoting the content too. So, no, this is to me very developer-centric; this is all just the technical side. Interesting that you note that, because they do pay a little bit of language to that, Tommy. Again, a lot of the things we're talking about here are the three: the people, the process, and the

26:34 technology. Some of this is explaining the technology piece, but I feel like when we talk about deploying content, we're talking a lot about the people and process side of things, and some of the tools we bring into our library of how we get content out the door. I do think they did a good job in the publish-with-Fabric-Git-integration section: they're talking about, hey, note that Azure DevOps has these additional features; there's this thing called an Azure Repo and an Azure Pipeline, and how you can use it to automate deployments,

27:06 and Azure Test Plans, for when you want to test things and get things out. Yeah, test, test, test. If there's a need to test things and make sure your data is correct as you're getting data through, Azure DevOps has the ability to write that in there. As part of that, you have Azure Boards to track all the work on your team as you're generating content, and then there's an Azure Wiki that goes along with Azure DevOps where you basically write out, okay, here are all the things we care about, the documentation of this thing. So I

27:36 do like the integration of those pieces, and I think that's a really good callout here. Tommy, to your point, part of your deploy-content process should focus not just on the mechanics of "is it coming from SharePoint, is it coming from Desktop, is it going through a pipeline"; it should also be: do you have your testing in order, do you have your documentation complete, are you producing videos, blogs, an article, a notice, something that goes out to the organization to let

28:06 them know or educate them on what the changes are to those reports. So just for clarity, we popped this a little bit, but in what use cases would you recommend a business go down this path? Because this is a lot more involved in terms of the things you can do, the validations, the test-test-test on reports. What use cases should people start thinking about for some of the

28:36 more advanced deployment strategies, versus not? Because, as we mentioned in a brief snippet at the beginning, it's not relevant for all use cases. — I would agree with that, 100%. So I think what you're asking — and I'm going to try to re-ask your question — is: if a company is investing time and effort, what's the tipping point where you switch over from one pattern to the other,

29:07 or would consider it? — Yeah. — That's a really good question. I'm going to answer it with a spectrum. I think it's going to be the number of consumers of said content, and that's relative to each organization. If you're a small organization you're not going to have a ton of consumers of a particular dataset, so for you a large number might be 10 — if you're a small org of 30 people total and 10 people are consuming the data, that's a large number. So I'm going to answer it this way: if you are

29:38 building content that has a large number of consumers, you want to make sure that data is accurate, that it's updated, and that you can regularly make changes without breaking things. Because there's no faster way to lose trust in your reporting than to publish something and either have

29:55 the numbers be totally wrong, or have visuals break when people go to consume it. So if anything, there's a very bare minimum of: you've got to have some confidence in what you're doing. And if you're a large organization with tens, hundreds, thousands of people consuming the dataset, those are opportunities for investing more time into making sure it's not going to break — that you do have a dev and a test

30:25 and you're doing more of these things. Now, one thing I really like — and I'm a coder, so I like code — is that semantic link does a lot of this for you out of the box. You can run VertiPaq Analyzer right away and store the results, and you can run Best Practice Analyzer and get its results. So in the process of automating things, you can talk directly to the XMLA endpoint: you can write a notebook that has a series of tests in it that you run every time you deploy, and that makes things a lot easier.
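The notebook-of-tests idea can be sketched as a small harness. To be clear, this is a generic sketch, not semantic link's API: the two checks below are dummy lambdas, and in a real Fabric notebook each check function would instead query the deployed model — for example via semantic link's `sempy.fabric.evaluate_dax`, or by consuming Best Practice Analyzer results.

```python
# Sketch of a post-deployment validation harness for a Fabric notebook.
# The check functions here are dummies; real checks would hit the deployed
# semantic model (e.g. semantic link / the XMLA endpoint) and return True/False.

def run_checks(checks):
    """Run (name, fn) pairs; fn returns truthy on pass. A crashing check
    counts as a failure rather than stopping the whole run."""
    results = []
    for name, fn in checks:
        try:
            passed, detail = bool(fn()), ""
        except Exception as exc:
            passed, detail = False, str(exc)
        results.append({"check": name, "passed": passed, "detail": detail})
    return results

def summarize(results):
    """Roll results up into a single pass/fail summary for the deploy step."""
    failed = [r["check"] for r in results if not r["passed"]]
    return {"total": len(results), "failed": failed, "ok": not failed}

# Two illustrative checks; in practice these would run DAX queries.
checks = [
    ("row_count_positive", lambda: 1_000 > 0),
    ("no_blank_keys", lambda: 0 == 0),
]
summary = summarize(run_checks(checks))
```

The useful part is the shape — named checks, fail-don't-crash, one summary gating the deployment — with per-model checks swapped in as needed.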

30:55 — I'm going to slightly disagree with you. I agree with the idea of volume, but in a different way, especially with a smaller organization, because it's not so much the ratio of people. To me it's actually the volume of content: if I'm at a company of 200 people, how many people are actually building reports, and how many reports are out there? To me it's a scale thing. If you're that company of 30 people, you're probably the only person developing reports at scale, and let's say you now

31:27 implement Jupyter notebooks and you implement all the things here — well, guess what: if you ever go buy a farm in Sicily, the organization now has to find another you with all those skills, because they can't just hire a Power BI report builder; they have to find someone with all that other knowledge. Where this begins to make more sense, to me, is when the individual, or the team of individuals, manually cannot keep up with the standard process. That's the tipping point, where it's like we're

31:57 getting to — and I know this is not the right phrasing, but so to speak — economies of scale: there are more reports coming in, there are more semantic models, than a human or a team can manage at once, so we need to automate this and, in a sense, build a better user interface to look at it. I don't know if a company of 50 people is ever going to have to go through the process of semantic link and some of the more advanced capabilities here, because again

32:30 — think about the head knowledge of who can actually do that, and how many people out there right now can actually do that. — Yes, I think that's part of the equation, though: when you invest the time and effort to go down this path, it's because the output of the product has to be dialed in for one reason or another. And what you're outlining about the skill set of the individuals — not only do the teams likely have to

33:01 ramp up on processes they may not be familiar with, but to your point, the new people coming in now need to know what that is, so the value to the business has to be there. I don't know if I necessarily agree, Tommy, with your approach to that, because if the tipping point is just that the team can't manage all the content, I almost wonder to some degree: does adding all this deployment process actually speed things up or slow them down? Right? And I don't see

33:33 the business driver there — to have the business say, "yeah, I want you to add this in." I would think it's more front-facing: we've had some problems, or the volatility — the accuracy — of this data is so pivotal to the decisions made in the business. — So to Mike's point, right, the accuracy thing tips you into a position where you've got to put more testing around the things you

34:04 deploy, before you deploy them. — Yeah, I completely agree with that, because to me it's not just the volume of the data; that's a prerequisite, or at least definitely part of the equation. Again, if you're a company of 200 people, you're probably not getting to this point without a heavy conversation about "this is where we are moving forward, and you're going to need someone with these development skills." — Let me challenge you

34:34 there, though. I understand what you're saying — these skills are more nuanced, they're more edge-case based. Semantic link, notebooks, Spark: in organizations where there are fewer people, that pool of knowledge is going to be a bit harder to find initially, or you're going to have to spend some time training people up. My question back to you, though, is: what's the alternative? If you don't do that, what is the alternative? And I would argue the

35:04 alternative is someone building a whole bunch of extra reports, going through every deployment and looking at things line by line, actually going through every report page and making sure all the visuals are saying what they want them to say. So to me the alternative is — look, you can do that, but I'm saying this is a skill that I think teams, even small teams, should start to be learning.
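That manual line-by-line alternative is exactly the kind of thing a small notebook can automate. Here's a minimal sketch of one such check — comparing table row counts between the stage you deployed from and the stage you deployed to. The dev/prod dicts are hard-coded stand-ins; in a real notebook you would fill them by querying each environment's semantic model (for example with a COUNTROWS-style DAX query).

```python
# Minimal sketch: flag tables whose row count drifts between two stages.
# The dev/prod dicts below are illustrative stand-ins for real query results.

def compare_row_counts(source, target, tolerance=0.0):
    """Return tables whose target count is missing or drifts from the
    source count by more than the given relative tolerance."""
    drifted = {}
    for table, src_count in source.items():
        tgt_count = target.get(table)
        if tgt_count is None:
            drifted[table] = "missing in target"
            continue
        if src_count == 0:
            if tgt_count != 0:
                drifted[table] = f"{src_count} -> {tgt_count}"
            continue
        if abs(tgt_count - src_count) / src_count > tolerance:
            drifted[table] = f"{src_count} -> {tgt_count}"
    return drifted

dev = {"Sales": 10_000, "Customers": 1_200, "Dates": 730}
prod = {"Sales": 10_000, "Customers": 900, "Dates": 730}
issues = compare_row_counts(dev, prod, tolerance=0.05)
# issues -> {"Customers": "1200 -> 900"}
```

An empty result means the deployment passed this check; anything else is a reason to stop before users see the report.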

35:34 — And this is why I think it's really critical, very important, that the center of excellence defines what a report consumer is, who a content creator is, and what your release manager looks like. Everything in the deploy-content area fits into this release manager role — that may be the same person, the same set of skills — and then you have an admin. So I think there are four roles around this, right? And within content creator you can talk about data engineers, data scientists, or semantic modelers — any one of those would be a

36:05 content creator inside that space. So if you think about it, those things have to exist no matter what; they may just all sit on one person's shoulders. And the organization — my thought here is that the center of excellence should understand the skills required to do some of this stuff correctly. And yes, if they are centralizing all those skills in one individual person and, Tommy, to your point, that person goes and buys a farm in Sicily and they're out — they've

36:35 done their good life and they're moving on — that's totally acceptable. But what has to happen is the organization has to realize that that is a weak link in their process, and either spend time cross-training other people or — to be clear, if you've got this stuff set up and, Tommy, you zoom out and you're done and all the stuff is remaining — there had better be a transition plan. But even then, it's easier to read a semantic link

37:05 notebook after someone's already built it; it's easier to understand what it's doing and modify it after the fact, as long as you have examples to follow. — So I'm not sure I 100% agree with your comment — to some degree I do, but I'm pushing back a little bit. Let me ask you this question: can it be expected — one person per organization; I know that's a little general, but seriously, to your point — can you expect someone to develop all the semantic models, build the reports, publish them, and also write all the

37:37 validation through Jupyter notebooks? Think about all those skills and the time involved. Can that be one person? — It has to be. In smaller organizations it will be one person, because you don't have the luxury of the spend for multiple people to do that. And this is where I split with you, because I think that skill set — all the skills you have to know to do that — is not that big a deal. It's not rocket science. If you're writing complex DAX right now — like some of the complex DAX

38:08 I've seen people write — if you can write DAX like that, if you can write complex nested-join SQL statements, there's no question in my mind that you have the capability to understand how to write a notebook and execute commands in a notebook. No question — that stuff is by far more complex than writing Python in a notebook, 100%. So to me, getting people across that threshold, the barrier to entry into this — and I've done this before, I've worked with teams that

38:39 are super SQL-based, and we've pushed them into notebooks and they love the experience. Yeah, it does take some time, it does take some training, but with a minimal investment of time and effort into those team members, I think they get it, they understand it. And the fact is you can still jump in and out of Python and SQL as much as you need to to get the data out that you want, and then people are feeling comfortable: "oh, it's just SQL," "oh, I'm just making this thing called a data frame and I can manipulate it these ways." The jump to that next level — the jump to understanding other

39:10 things — they don't have to be experts in them; they just need to get through it enough to make sure there's enough value at the end of it. So I think the learning step is small enough that the value is worth the effort. — There's one area Andrew brings up in the chat that we haven't talked about — and I suppose to some degree the deployment process could be part of that — which is that bad processes typically mean more bugs, reported bugs, right?
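Circling back for a second to the "it's just a data frame" comfort point: for SQL folks, pandas operations really do map onto familiar SQL. A small illustration — the table and column names here are made up:

```python
# SQL-to-pandas mapping for people coming from SQL: an inner join plus a
# GROUP BY, expressed as DataFrame operations. All data here is illustrative.
import pandas as pd

orders = pd.DataFrame({
    "customer_id": [1, 2, 2, 3],
    "amount": [100, 250, 80, 40],
})
customers = pd.DataFrame({
    "customer_id": [1, 2, 4],
    "region": ["East", "West", "North"],
})

# SELECT o.*, c.region FROM orders o JOIN customers c USING (customer_id)
joined = orders.merge(customers, on="customer_id", how="inner")

# SELECT region, SUM(amount) FROM joined GROUP BY region
by_region = joined.groupby("region", as_index=False)["amount"].sum()
```

Customer 3 (no match in `customers`) and customer 4 (no orders) drop out of the inner join, exactly as they would in SQL.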

39:42 You're opening the door for challenges to arise. In my experience, that bad process typically lives further back in the ETL, where there's

39:54 a lot of data variance, as opposed to living in the report. But I think that's one to call out. And one conversation that should be had as you're pushing for more deployment process — especially if you're in an organization where you feel like you're struggling with that — is the risk conversation. Because the likelihood of something going wrong when you're just straight deploying to the workspace, right where I have my PBIX, is

40:25 the highest — something could go wrong, a report goes down — but it's potentially the fastest way to get content out there. So there's the value prop: how fast can you get something out? That's always a business-side request — "I want it today, I want it now" — versus when you start going down the path of adding more process to ensure that the product being delivered is of a higher, vetted quality. It's almost like

40:56 you can probably pick the middle road, where you have fast delivery and not the highest risk — but it's still high, right? — and then when you go into full-control mode, it's the lowest risk. So it's about the outcome: what is the level of risk you, as the business, are willing to accept? And we will build our processes around that. And that's where the use cases —

41:27 a high volume of individuals, accuracy of data, and report uptime could be one as well — everything's got to get dialed in. Those are worth it to the business: invest the additional time, invest the resources, invest in the process, understanding that you're not going to be able to throw report changes out five times a day, because it's going through that process and they don't want that level of risk. And I

41:59 think when you talk about risk levels with the business, that's where you really find yourself in the right spot with sign-off, too. Because it's like: hey, we didn't invest in all this stuff, and there was high risk in that decision — that's why sometimes when we deploy we break the report; this is why this stuff happens. — The other area we haven't talked about at all — where I think it matters the further you go into a solid deployment process — is

42:30 external-facing reports, especially the embedded scenario, where you're developing and building experiences for customers, not just your own internal teams. — Yeah, and to your point, Seth, that has a higher risk: if you do it incorrectly, and numbers don't work and visuals start breaking, now your company is reflected poorly to the customer — and they may stop paying you. So that is a huge high-risk area. Internally,

43:00 if you send a report to an executive and there's one visual that's broken, maybe you get a little bit of grace: "hey, why is this broken?" They reach out to you, you fix it, no big deal, you're done, you move on. But if you get a customer who's regularly getting reports with broken visuals or wrong data in them, they're like, "these guys don't know what they're doing." That speaks to the credibility of your team, and they're immediately saying: maybe we should go somewhere else; we don't want your models, it's not adding value to us — give us the raw data, we'll do our own analysis on our own. So there's

43:31 a level of professionalism there, and the risk is much higher. And to your point, Seth, I love that you bring that up. I've spent a ton of time building systems that operate for customers externally, and you need to have more of the process, the checks and balances, and the automation in all cases. And I would also argue that when you get to the point of building custom reports that you're distributing through an app or through shared content

44:03 to external users, you've already pushed yourself into the realm of being a bigger team: you're thinking more strategically about your data, and you're treating your data like a product, as opposed to it just being Tommy doing all the reporting in one part of the business. That's a different scenario. — And I love it too, because this also raises the elevation of the role that we do. To me this is the argument for why we're not "just IT" — because you said some magic words there,

44:34 Seth — actually, both of you. One is the sign-off from the business and the acceptance of what we're going to do, because the risk is worth it. That means we're going to implement all this process and we're going to always require this level of skill for these roles. And to me this hopefully brings in more of your risk-management team — or even, hopefully, you have a chief data officer, especially if it's more like a product. Because if you are going to go down this road — because, again, we've hit that threshold

45:04 of "we cannot have the data wrong," especially when we're external — you have to have that acceptance in the business, not just from the team or the person building it. Here's now the standard of what a Power BI builder is, or the standard of what we think of our data engineers, or even just the report builders or a content author. Whatever you're doing, you have now elevated that role in the organization in terms of the talent you're looking for and

45:34 the expectation of that role — again, with the consideration that it's harder to find. And what's going to happen there with a lot of this, I don't know. — I don't know, necessarily. Yes, potentially it's harder to find at the moment, but it's not like these things are insurmountable, extremely challenging skills where you're going to need three to six months of outside training. No, this is just

46:05 understanding the tool sets and how to use them, and realistically it's a great upskill for any business intelligence person. And there is a synergy that I think business intelligence people find themselves understanding more than just straight devs sometimes, to some degree: business and technology have to work together. The difference is, the business dictates what

46:36 the most important, value-driven things for the business are, and that's why the risk conversation is worthwhile. It's technology's job to interact with the business and say, "hey, what is your level of comfort, or where do we want to be, on these things?" It's the same thing when we engage — let me finish the thought — and then you make the appropriate decisions behind the scenes, which they have no clue about, to ensure you deliver the

47:07 product that they need. — Those decisions have ramifications for the business, though, right? Does that mean one less report, or two less reports, in a sprint or in a quarter? — Yes, it's going to mean less output, because you're investing more time and resources into the quality of the products you're delivering. And in the same way, it's very similar — akin to our engagement — just like report requirements gathering: you make it

47:37 as business-speak as possible so that everybody's on the same page, and then you go behind the scenes and do all the technical work. We work at the leisure of the business, right? Our solutions should be driving value, and those should be the same value props the business expects. These aren't two independent organizations — technology and the business. In large part our jobs are aligned, or should be aligned, to expectations that can be hit based on what the business

48:09 says is acceptable in terms of the risk it wants to be aware of. — Right, and I think you're making a really interesting point there, Seth, because of how I mentally grapple with this topic: I started thinking about what the handoffs are, the transition moments between the tables that have been built, the models that have been built, the reports that have been created. What is acceptable? This is a

48:40 topic for the center of excellence to understand, but different parts of the organization, different teams, are going to need different levels of information from that central team — whether it's IT, whether it is a center of excellence or a community of practice, whatever that central body around those pieces of content is. And again, I really like your point, Seth, because it's up to the business: "hey, I need this data warehouse created, this is the table I want, I want it here every day in the morning," and you

49:10 have some requirements around that, but it's up to IT to ensure that the transition is smooth between that table, or that group of models, and the other individuals you're handing it over to. So this is a great example of those transitions of responsibility, and I see this across all organizations: some organizations are willing to let everyone have responsibility; other organizations are not, and don't trust what the business is doing with things on the data side. So I definitely can see the culture being

49:41 very different. There are lots of different cultures in many different kinds of businesses — a lot of trust, a little bit of trust, or no trust — and teams continually holding on, potentially too tightly, to what they're doing. The technology is such that it's easy to

49:53 share stuff, easy to hand off the transition point between one team and another, but the business and the culture may not be there yet, and so there are potentially hesitations there that go along with this pattern. — I don't know which episode we introduced the concept of people, process, technology in, but I think, especially in this conversation, I keep harping back to how critical it is. Even though the documentation here is so focused on really that developer role

50:24 and what they're going to do, getting to that point — introducing that process and, again, the people involved with it, not just the developer but the whole business being in line and working together — is so critical here. — I would agree with that. To be fair, people-process-technology has been around. — I don't hear it a lot. — Really? No — oh dude, my world over the last 20 years was just people, process, technology;

50:56 people, process, technology. — We should look that up and see where it came from, because it is certainly not ours — we definitely borrowed that one from somebody else who is much smarter than us and has observed this a lot longer than we have. The quick Google searches say it's a 1990s thing from a security technologist, Bruce Schneier. — Yeah, it's been around a while. Tommy, we're introducing the data

51:26 part: people, process, technology, data. — Data! Awesome. Any other things that stuck out to you in the article? Any other talking points or areas you'd want to go through in the latter part of the article as we wrap up here and do final thoughts? — I think we're actually at the point where we can do final thoughts. I do like the spectrum, right? They outline the majority of the ways you can get data

51:57 out there. I think it's a valuable article, because it probably instantly takes people from only knowing how to do the easy deployment to "oh, there's a whole other world of things we can do" to enhance the delivery of the product we're building. And I think that's the greater value as businesses go down this road, or potentially have issues with reports. Sometimes, if you're not proactively thinking about this stuff, or

52:28 having a CoE, you can find yourself in the spot where somebody says, "that's it, we've had it — this report has been down several times this week; you guys need to go figure out a solution." This is a great outline for folks to figure out better ways to add a little bit more process to your deployment strategy. — I've gotten burned by one caution that's inside this article. It wasn't me doing it, but it was

52:58 customers doing it, and they got burned and I had to go and fix the stuff, right? Under the section called "decide how you'll promote your content" there's a huge red caution box that says: avoid manually publishing from your local machine to test and production. I can't overstate how important that note is. If you are building dev-test-prod, or dev-prod, the prod environment needs to be as automated as possible to get

53:29 content from one stage to the next, and you need to know what's changing as it goes in there. That is a table-stakes statement right there — you have to do this one. So if nothing else in this article really pops for you, this is the one to definitely take note of: it's extremely important to not publish from Power BI Desktop to test or prod. You need some other mechanism to do that.
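That "some other mechanism" can be as small as a script that drives the deployment pipeline instead of a manual publish. A hedged sketch — the URL and body below follow the Power BI REST API's "Pipelines - Deploy All" operation as documented at the time of writing, but verify against the current reference before relying on it; `PIPELINE_ID` and the bearer token are placeholders:

```python
# Sketch of promoting content via the deployment pipelines REST API rather
# than publishing from Desktop. Endpoint shape per the documented
# "Pipelines - Deploy All" operation; confirm against current REST docs.

def build_deploy_all_request(pipeline_id, source_stage_order):
    """Build the URL and body to promote everything from one pipeline stage
    (0 = dev, 1 = test) to the next stage."""
    url = (
        "https://api.powerbi.com/v1.0/myorg/pipelines/"
        f"{pipeline_id}/deployAll"
    )
    body = {
        "sourceStageOrder": source_stage_order,
        "options": {
            "allowCreateArtifact": True,
            "allowOverwriteArtifact": True,
        },
    }
    return url, body

url, body = build_deploy_all_request("PIPELINE_ID", 0)
# A real script would then POST this with an Entra ID bearer token:
#   requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
```

Even this thin wrapper gives you what manual publishing never will: a repeatable, loggable promotion step you can gate behind tests.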

53:59 — I think the call-out is really good, because as you are implementing this stuff you 100% need to test it out across your scenarios. Yes, all of them — all the different ways in which you deploy content, the models, the reports, everything — you've got to have it dialed in and tested, because some things may not work. Some things like this one: what if we did that, and boom, the whole test environment goes down, and you're like, "oh, don't want that to happen in prod." Or the data source is still wrong: "oh shoot, we published to test and we're still pointing at dev" — now everyone's confused, even the

54:29 developers; they're all confused. Who's pointing at what, where? What are we doing? The data is not ready in test and we're still pointing back at dev, or everything's pointing at prod and it shouldn't be. There are huge implications of confusion if you don't start thinking this stuff through, at least initially, to get the right steps in place. — Not only is that in the article, but you almost feel like when you get to that point in the article, a popup window should show up with the same error. — Yeah, and there's no way to disable publishing to

54:59 a certain workspace either, so again, it can happen at any time — and to your point, that's how critical it is. My last thought here — and maybe it's another conversation, or at least deserves one — is the two areas in here that we didn't focus on as much today: certified content and what that flow is, and then moving content between departments or workspaces — not just development and test — when there's overlapping data across businesses. — Here's my one gripe with

55:31 this article, my one major disagreement. If you look at the very bottom section, where it's talking about deployment pipelines in Fabric, in all the images they show in approaches one, two, and three, you see a Fabric workspace in dev, test, and prod; you see semantic models, reports, and notebooks; and you also see dataflows — and it's the Dataflow Gen2 icon. My issue here is you

56:01 can move a Dataflow Gen1 between different environments with a deployment pipeline, but you cannot move a Dataflow Gen2 — which is bonkers, that you can't do it yet. And I know it was communicated at Build — I'm sorry, the Fabric conference; at the Fabric conference Amir got on stage and said, "we're going to support this, this is going to happen." Here we are, months later, and it's still not delivered, and yet I still see in the

56:31 documentation that Dataflow Gen2 icons are represented as being fully supported with a deployment pipeline. It is not. And so, me personally, I feel like this whole article centers around semantic models and reports — that's really what they're talking about. They're not talking about a lot of the other Fabric artifacts, because nowhere in this documentation do they talk about how you version the data in each of these environments, how you load the data, or what the load process looks like to kick off job runs as you deploy new

57:02 artifacts between these things. Because there are three things you need to think about when you're doing this stuff: the infrastructure — the hardware the stuff runs on; the code that makes the data; and the data itself. There are three things you need to version as you go through this, and there's not a clear story around the data versioning — or whether there should be, or whether you just delete it all and reload it all every time you do a deployment. I don't know yet. So to me there are still some missing gaps here, but this is definitely a solid article. If you haven't read it yet, you definitely need to read it and

57:32 grapple with what this looks like. And yeah, Andrew also heard it — Andrew in the chat is saying that within six months all Fabric artifacts would be supported. I'm pretty sure it's recorded and published on YouTube as a video, and I'm like, I need to go snip that thing and be like, "okay, we're getting close, people — where are we at? Are we getting close yet?" So the time is running out. But all this to say: even in that six months of time I've seen a massive amount of progression to get better at all of these things. So even if the trend just

58:03 continues — they keep getting better and better and better, and they're rebuilding, reinventing — there are probably massive issues (I'm griping here, but there are probably massive issues) that they're rebuilding, huge amounts of work that has to be done to get to this level. So the fact that they're taking it on is very important to me, and this is very good. I really am encouraged that the team is still building these things; I just wish it was here yesterday. — Relax, they've got till the end of the month if they said a six-month schedule. — So yeah, we'll see what happens; we'll

58:34 we’ll see we’ll see what happens we’ll run that yeah we’ll talk we’ll talk in next month and we’ll see how how how it played out anyways any other final thoughts from the team here no Tommy I’m good all right with that thank you very much for listening to this episode we appreciate the feedback chat has been really good lots of good feedback there as well you’re also factchecking us to make sure we got the right details of things so thank you very much chat for jumping in and conversing throughout the conversation we really appreciate you for doing that we do not advertise this the only thing

59:04 We do not advertise this; the only advertising we do is through you. So if you like this content, if you like what you hear here in your ears, if you like listening to this while you do your run, please recommend it to somebody else. We really appreciate the voice of support, either on social media or from people at work; we really appreciate that as well. So if you don't mind, please share this with somebody else. Tommy, where else can you find the podcast? You can find us on Apple, Spotify, or wherever you get your podcasts. Make sure to subscribe and leave a rating; it helps us out a ton. If you have a question, an idea, or a topic that you want us to talk about in a future episode, head over to

59:36 the powerbi.tips podcast page, leave your name, and a great question. Finally, join us live every Tuesday and Thursday at 7:30 a.m. Central and join the conversation on all of the PowerBI.tips social media channels. Thank you all so much, and we'll see you next time.
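
The "three things to version" point from the discussion (infrastructure, code, data) is worth making concrete on the artifact side. Tooling like deployment pipelines covers promoting the code-shaped items between stages; data loading and data versioning, as noted above, still happen outside that call. Below is a hedged Python sketch of scripting a promote-to-next-stage deployment. The URL shape follows the public Power BI "Deploy All" REST endpoint, but the pipeline ID, note, and option choices are placeholders, and authentication headers are omitted entirely.

```python
# Hypothetical sketch (not from the episode): building the request for the
# Power BI deployment pipelines "Deploy All" REST call. The pipeline ID and
# note are placeholders; a real call would also need an Azure AD bearer token.

API_BASE = "https://api.powerbi.com/v1.0/myorg"

def build_deploy_all_request(pipeline_id: str, source_stage: int, note: str = ""):
    """Return (url, body) for deploying everything from source_stage to the
    next stage. Stage order is 0 = Development, 1 = Test, 2 = Production."""
    url = f"{API_BASE}/pipelines/{pipeline_id}/deployAll"
    body = {
        "sourceStageOrder": source_stage,
        "options": {
            # Create items missing in the target stage, and overwrite
            # items that already exist there.
            "allowCreateArtifact": True,
            "allowOverwriteArtifact": True,
        },
    }
    if note:
        body["note"] = note
    return url, body

url, body = build_deploy_all_request(
    "11111111-2222-3333-4444-555555555555",  # placeholder pipeline ID
    source_stage=0,
    note="Sprint release: semantic model + reports",
)
print(url)
```

Posting that body to the returned URL (with a valid token) would promote the stage's items; the point of the sketch is only that the "code" leg of the versioning triangle is scriptable, while kicking off data loads afterward is still a separate step you have to design.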

Thank You

Want to catch us live? Join every Tuesday and Thursday at 7:30 AM Central on YouTube and LinkedIn.

Got a question? Head to powerbi.tips/empodcast and submit your topic ideas.

Listen on Spotify, Apple Podcasts, or wherever you get your podcasts.
