PowerBI.tips

Half Baked Ideas – Ep. 420


Mike and Tommy serve up their half-baked ideas for the Power BI and Fabric ecosystem. From wish-list features to wild predictions, this episode is all about thinking out loud.

News & Announcements

  • Fabric April 2025 Feature Summary — Microsoft released the Fabric April 2025 feature summary, packed with updates across multiple workloads. Highlights include low-code AI tools for notebooks in preview, session-scoped distributed temp tables going GA in Data Warehouse, and a new migration assistant for Fabric Data Warehouse. Copilot and AI capabilities are also now available on all paid SKUs.

Main Discussion: Half Baked Ideas

Mike and Tommy dedicate this episode to sharing their unpolished, half-baked ideas: concepts and features they'd love to see in the Power BI and Fabric world, even if they haven't fully thought them through yet.

The Spirit of Half Baked Ideas

The guys embrace the “half baked” concept — these aren’t fully formed product proposals, just seeds of ideas that could grow into something meaningful. It’s a fun exercise in thinking about what the platform could become without worrying about feasibility.

Feature Wishes and Wild Predictions

Mike and Tommy riff on features they wish existed, pain points they’d love to see solved, and bold predictions for where the ecosystem is heading. The conversation bounces between practical improvements and pie-in-the-sky thinking.

Community and Ecosystem Thoughts

The discussion touches on how the community could better collaborate, share ideas, and push the platform forward. Both hosts reflect on what they’ve seen working well and where there’s room to grow.

Looking Forward

The half-baked ideas format is a reminder that innovation starts with unfiltered thinking. Mike and Tommy encourage listeners to share their own half-baked ideas; you never know which one might actually become a reality.

Episode Transcript

Full verbatim transcript — click any timestamp to jump to that moment:

0:00 Heat. Heat. [Music]

0:34 Good morning and welcome back to the Explicit Measures podcast with Tommy and Mike. Good morning, Tommy. Good morning, Mike. How you doing?

0:41 Lots of things are happening. We've got a lot of exciting announcements today, and we've got a very fun topic. Today's topic will be half-baked ideas for Fabric and Power BI.

0:53 So these are ideas that maybe Tommy and I want to just voice things that we are finding. There is a hidden joke in this title. So hopefully you found it and you enjoy our funny joke today.

1:05 But we're going to go through Power BI and Fabric. We're going to think through little areas where we find maybe some slight bits of friction, right? Things that are just not quite designed all the way right, or things that we wish it would do.

1:19 And, well, I don't want to just go out and pick on things and say things are weird around these edges. There are rough edges, but I also want to give solutions to this stuff. I want to ideate on what would make this better, what would solve this problem for us.

1:34 So I’m a big proponent if you’re going to come with a problem, you need to come with a solution. Yes, Tommy knows. We’ve been talking about this before. Yeah. All right. That being said, let’s do some news things.

1:46 Tommy, what do you got for us? Oh, man. Do we have a good one, Mike? I don't know if you want to start with our draft, or... I think we have a beef from the street, too. Let's do a draft first. Let's do the news draft.

1:57 So, we have, ladies and gentlemen, right at the finish line, the Fabric April 2025 updates. And oh my gosh, there are a ton here.

2:10 And if you have been joining us lately, you’ve noticed that we’ve done rather than just going line by line on the updates. We’ll have a little fun with this. We do a draft style, say what our favorite ones are.

2:21 Mike, there are so many in this one. And before we even begin, I want to give mad credits to the Microsoft team because one of the things we loved about PowerBI when it first came out was all the updates once a month.

2:35 With Power BI, it was like, whoa, look what they did with bookmarks. Wow, that drill-through feature. And this is like that on steroids, old baseball steroids. It is just incredible, the amount of updates they have here, and impactful ones.

2:51 But Mike, I talked first. I am going to give the floor to you. You will be the Tennessee Titans of this draft. You have the first pick.

3:01 All right. Well, I'm going to give my very happy first pick here. April, apparently, is the time, and now that we're here in May, we should be seeing that if you want to test out Copilot, you should be able to test out Copilot at any level of an F SKU at all.

3:17 This makes so much sense to me. Let me manage this stuff. And there's also this idea that you can make a dedicated Copilot capacity. I don't know what they call it, a Fabric Copilot capacity, something like that.

3:34 But the idea is you can make a Fabric F SKU and dedicate that to only Copilot, which makes a lot of sense. So you could spin up a very small F2 for Copilot, have all your users attach their Copilot sessions to that particular capacity, and use up that capacity till it runs out.

3:52 And then when you’re done you’re done, you let it build back up again and then you have more capacity later for the next day. So I really like this. I think this is a great feature.

4:02 Just in general, with Copilots and large language models and things, Tommy, I've got to be honest, I'm having like a revolution in my head these last couple of days. I've probably built about three or four websites in the span of a couple of days, or a week.

4:21 Right now, I’m experimenting with what can they do? How do I use them? The agent side of things for co-pilots is really changing how I work. It’s substantially helping me do things I was never able to do before.

4:38 So one I really like this. I think this is going to be extremely helpful for other individuals. It’s going to open up more opportunities. And I can’t wait to see where Microsoft is going to continue to go with co-pilot.

4:51 I think they’ve got the right vision for it. In talking with the product team, they’re very focused on making sure this is impactful, it’s useful. So, it’s just a matter of getting the right integration with your experience to make sure that the co-pilots are actually helping you as opposed to other things.

5:03 Simple things like document all my measures in my TMDL view like that should be a thing or it should just do it. Yes, that’s what I want to do. It’s all these little nitpicky things that should be there. So, I really like this feature. That’s one of my votes.

5:20 And I’m gonna press on this too because honestly there’s the two major positives here outside of the feature itself. I want to again give a lot of credit to Microsoft because do you know why it’s such a positive? They listened.

5:31 And I don't know if they listened to us, but I am sure we're not the only ones who made a big stink about Copilot not being available to more users. And F64 is not the majority of the licensing that's out there. I've looked at the volume.

5:47 And so the fact that now it’s given to the peasants, the people so to speak, this is a major thing, especially when you think about a lot of the co-pilot actions that you said that we want done. It’s not an incredible load.

6:00 It's just TMDL files, right? It's a lot of text-based items, not going row by row. So we know we're not going to take a ton off of the Copilot capacity by going row by row through data. It's like, listen, I have a bunch of text. It's code. This should be easy.

6:16 And the fact that they’ve announced this is a major one. So I just want to emphasize that as well. Let me keep going with this one Tommy a little. I’m gonna riff on here more.

6:26 I think this is going to be a huge opportunity when we are in the TMDL editor. I think TMDL editor and using large language models. It’s a known spec. It knows what to write. You could train the model and say hey model these are the boundaries of what I want you to build.

6:41 And I really do honestly think we could get to the point where I could say, "Hey, Copilot, go through the TMDL document, find all the columns that have a numerical value to them, and for each numerical column, make a new measure."

7:01 “Put the measure in the table where the column lives, hide the original column, and put all the measures in a folder for each table.” Like I want to be able to tell it what to do and have it smart enough to figure out, oh, I know what that is.

7:16 Here’s how you make a measure. Boom. Done. Oh, and by the way, make sure all your measures are documented. Make sure there’s a description of them. Like there are so many multiple small tasks you could just stack up together that there should be no effort.

7:31 You should be able to load a series of data tables and have all the basic measures pre-built, designed, ready to go. And there should be a single prompt that just gets you 80% of the way there. That’s a huge time saver.
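The batch of small Copilot tasks Mike describes (scan a model's columns, generate SUM measures, hide the originals, group them in folders) is mechanical enough to sketch directly. Below is a toy Python version over a simplified list of (column, data type) pairs; the TMDL it emits is an approximation of the real grammar, and all the names are hypothetical:

```python
NUMERIC_TYPES = {"int64", "double", "decimal"}

def generate_measures(table, columns):
    """Emit TMDL-style text: a SUM measure per numeric column,
    grouped in a display folder, with the source column hidden.
    `columns` is a list of (name, dataType) pairs."""
    lines = [f"table {table}"]
    for name, dtype in columns:
        if dtype in NUMERIC_TYPES:
            lines.append(f"  measure 'Total {name}' = SUM('{table}'[{name}])")
            lines.append(f"    displayFolder: Measures\\{table}")
    for name, dtype in columns:
        lines.append(f"  column {name}")
        if dtype in NUMERIC_TYPES:
            lines.append("    isHidden")
    return "\n".join(lines)
```

A real Copilot-driven version would parse the actual TMDL file rather than a pre-built column list, but the "stack up small tasks into one prompt" shape is the same.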

7:44 Well, and I think honestly the team recognized, okay, we could make this just F64. Maybe we got a little more money that way. Maybe we’re net positive. But I think what they found out was, oh, nobody’s going to use it because guess what?

7:57 I talked with Ginger, and we went through a Fabric git workspace on my local machine, and we're using Claude and Gemini to do all the TMDL stuff, because I can feed it documentation. It's that barrier, the path of least resistance.

8:13 I’m not going to use that when there’s other agents. They’re like okay we got to lower the barrier because otherwise there’s a ton of other avenues. So rather than making that limitation, we’re going to make sure that we’re in the game.

8:28 So I think that’s a good point with the agents especially too. Yep. Exactly. Okay. Sorry, I didn’t mean to riff on that too long. No, no, I think that one deserved it. That one deserved it.

8:37 I think we're going to talk about a few episodes there as well. So I've got one right off of that, and this is twofold. I made a trade, so this is a double draft pick, but they're one and the same.

8:48 Well, in our data science space, guess what? In the notebook section, and I think there are really two features here that make this impactful, we now have low-code AI tools to accelerate productivity in notebooks.

9:01 So we already knew about the AI functions in notebooks, but with functions, you had to write it all out. Well, they've added something very simple, but I think this is actually going to change notebooks for me.

9:15 Really, what they've done is set up AI functions as a UI in notebooks, on the right-hand side. Before, if I didn't know it existed, I would never use it. Well, now it's a user interface, like a Power Query thing, where I can choose the function I want that's available to me.

9:32 This is available in the notebook interface and it’s also available in the data wrangler interface. Like the fact that the AI functions are now easily accessible is awesome.

9:45 I’m actually going to touch on I think what they’re going to be doing with notebooks. What have we loved about Power Query, Mike? We’ve loved the user interface. And I think they’ve realized everyone likes notebooks, but not everyone may know the language.

10:00 They have that browse snippets feature. To me, I see what they’re doing here. And if you see the image on the April update, it’s a really nice user interface to choose a function. Yes, I think this is going to be expanded to not just AI functions here. So, to me, this is I think a first step on our experience with notebooks.

10:21 Yeah, I would say this. If I look at what they’re doing with the data wrangler experience, one, I really like the data wrangler experience. It’s good. It’s working better. It’s getting more robust. It’s smoother to run.

10:33 The only thing I would argue is the UI just isn't quite as polished as the Power Query UI. It's not menu-driven like the Power Query UI is. It just feels a little bit disjointed.

10:46 So, it feels like a product that existed somewhere else, which I think it did. Data wrangler did exist before it was brought into Fabric.

10:54 I’d like to see, if I look at the operations element on data wrangler, there’s actually a lot of operations that are now getting stacked on the left side of the screen, and it’s getting really long. I think it’s going to get a bit overwhelming with so many operations.

11:08 So, I think we need to rethink slightly how data wrangler is laid out so it's a bit easier for users to use. But other than that, I will say everything else about it, I love. It feels just like Power Query. There are menu-driven selections, like, hey, what columns do you want to pick, which ones do you not. You can click on columns, you can remove them, and it automatically adds a step for you.

11:30 It is really well done, and I think the data wrangler experience should be more tightly coupled with the notebook experience. I get what it's doing, and this is maybe part of the half-baked idea we're going to get into a little bit here. But I really like Power Query. I'm sorry, I don't like the data wrangler experience where I have a data frame and I have to go to a brand-new screen just to use data wrangler.

11:59 I feel like what I’d like to do is in line within the notebook. I feel like I’d like to create a data frame and I’d like to make a new data frame that is data wrangler where the data wrangler is the frame or the code block, right? And then I can go into that frame essentially, do the data engineering that I want to do, and then hit done.

12:22 And it takes away the data wrangler and then just puts the code in front of me. And then I can, because I feel like I want to be able to toggle between jumping into that cell, that single cell of data wrangler, and then jumping back out in and out. So that’s the part where I think that would make more sense to jump right into data wrangler and then come right back out.

12:43 So I would say, for those of you who are trying to learn Python and get started with it, I think data wrangler is excellent. It gives you a very Power Query-esque type of experience, but it's going to write Python, and you'll learn Python as it writes it for you. So I do like it.
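As a rough illustration of the kind of code a recorder UI like data wrangler emits for you, each button press becomes one pandas step. This is not data wrangler's actual generated output; the data and column names are invented:

```python
import pandas as pd

def clean_orders(df: pd.DataFrame) -> pd.DataFrame:
    # Each line mirrors one recorded UI action.
    df = df.drop(columns=["InternalId"])       # "Remove columns"
    df = df.rename(columns={"amt": "Amount"})  # "Rename column"
    df = df[df["Amount"] > 0]                  # "Filter rows"
    df["AmountUSD"] = df["Amount"] * 1.1       # "New column from formula"
    return df.reset_index(drop=True)

raw = pd.DataFrame({"InternalId": [1, 2, 3],
                    "amt": [10.0, -5.0, 20.0]})
clean = clean_orders(raw)
```

Reading a generated chain like this step by step is exactly how the "you'll learn Python as it writes it" effect works.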

12:58 I’m going to pause here because Mike, we may or may not be touching on a half-baked idea here. We may be, I don’t know. But I’m just gonna pause there and leave some other thoughts potentially for later. But no, I think what they’re doing with AI functions is good. Honestly, good on Microsoft to say we’re not just doing AI functions, but we’re going to make this easier for people to access. So, it’s past that half-baked.

13:24 All right, you're on the clock. Yeah. So, I'm not sure this is going to go over well, but we'll see what happens here. We'll see. There are now SQL databases in Fabric.

13:39 So, let’s just say this, I’ve had an argument with a data engineer in the past and the argument was about SQL. And my argument was SQL runs the world. Everything runs inside SQL. It doesn’t matter what you’re using. So, yes, there’s different flavors of it, right? There’s Spark SQL. You have SQL Server. You have Oracle SQL.

14:06 Like all of them have — there’s T-SQL. There’s slight variations on part of the syntax pieces, but generally the concept works very similarly across all SQL languages. Select something from where joins all the things. It’s a very common language for manipulating and moving data together and allowing engines to figure out what that plan is.

14:30 So I was a little bit nervous when SQL databases came into Fabric. And now there are new regions supported in SQL databases, there's backup billing, there are all these extra features that came with SQL. So I'm going to lump SQL databases in as an entire feature here that I think Microsoft is pushing.

14:47 When you looked at the scope of what Fabric and Power BI were doing, the SQL DBA, and I guarantee you Microsoft has millions of users who are SQL DBAs across their ecosystem, didn't really have a job to do inside Fabric. It wasn't straight SQL. It was notebooks and Python and Spark. And then there was Power BI, which isn't really SQL either.

15:09 And we had this thing called data marts, which was like what is that really? And is it SQL? Is it not really? Is it a SQL server? I don’t really know. So I think this is a really good transition for Microsoft to bring in the SQL database into Fabric and allow that to be used by the business units. I think it’s a strong positioning for data.

15:31 I think it’s a really good alignment and honestly I’ve been using it more and more for small data sets, things that are just not big enough to be inside the Fabric lakehouse table level. So, I’m getting more and more comfortable with just using SQL databases as a place to store some quickly moving data, some transactions type things, some things that I want to update quickly and have it automatically be ready to go for my models.

15:56 It provides a really good surface area to build a little transactional system to make, modify, or update records. So, I’ll just say this, we’re not there yet. It’s still going to be evolving into the product, but I’m very happy that SQL databases are now inside Fabric and I think they’re going to make waves and you’re going to see a lot of people find a lot of value from the SQL databases inside Fabric.

16:20 I’m liking the experience and it runs really well and it doesn’t seem to charge me a lot of money. I can spin up a SQL database, run what I want to run and I’m not sitting there just crying about how many CUs are being used from it like other tools or products. And honestly, it makes sense with their billing with SQL to do a billing on storage rather than computation there, right?

16:42 And Mike, 100,000% here. First off, SQL does run the world until heard otherwise, because SQL has its place. And honestly, if I'm working with a client, it's one of the first things we're going to start with, because odds are they don't just want reporting with their data, and odds are their data is in 800 different places.

17:05 Yes, the best, and still to this day the best, scenario to be universal is a SQL database. Lakehouses have their place, and I think SQL has its place too, because one of my philosophies around these products is that if you're going to have something, it has to do something better than everything else, or it has no reason to exist. SQL still has a ton of those things.

17:24 And yeah let me do you one more here on this one — that’s why I like this so much. How often, how difficult is it for you to get sample data sets into Fabric? What is your sample data set? What do you start from? Well, how about you just spin up a SQL database and you just drop in. It says literally on the SQL database, it says, would you like to play with a sample data set? Yes, I would.

17:47 And there's a one-button press, and I get not just one, I get a couple of tables that have relationships between them. The data set's ready to go. Yeah, I already like that experience. So, great: spin up your SQL server, get the sample data set. And if you're teaching people about Fabric and Power BI, great. Your SQL server is typically part of your story. It doesn't matter if it's SQL or Oracle or an API or whatever; you need data from somewhere to bring to your class or training exercise so people can use it.

18:17 This is a great place to get started. And with a couple clicks and waiting for a minute or two, you get all this rich data and you can and now it’s consistent. You know what it is. I really like that experience inside the SQL database. It’s a great starting point for how to get data into and started working with it.

18:36 So that is a core functionality that I think you need to train people and get them comfortable in Fabric. And to your point on the SQL language, the last point I'll say here: in the most-used programming languages for 2024, you could choose multiple, so it adds up to more than 100%, and 51% said Python. We've talked Python. But guess what? 51% also said SQL.

18:59 So, the odds are if I’m using Python and I’m in the data engineering world, I’m also using SQL. So, that has this — I love that they’re making this a major focus because it still runs the world today. So, no, that’s a great one. I think that’s a very apt solution there. I think you’re going to be seeing — I made the argument that it doesn’t matter.

19:24 Like, if you're building systems nowadays, there are some people who would want to build in Scala inside Spark. Spark runs on Scala, so you could write Scala notebooks or whatever. I don't think anyone's really going out and hiring Scala engineers. Now, with Python, you can throw a stone and hit someone who knows Python. They're everywhere.

19:44 So just the fact that that language is so well known and commonly distributed out there is just really good. I really want to leverage that as well. Yeah. No, I love it. And honestly, my last draft pick is going to go off of yours. This is off the data warehouse here.

20:04 And okay, this also coincides a bit with one of my other half-baked ideas, but we now have, generally available for the data warehouse, ALTER TABLE with DROP COLUMN, and also RENAME COLUMN support, in Fabric warehouse. When we're dealing with the data warehouse here, and when we're dealing with SQL data, you need the general, universal functions that people are probably going to use available from the onset.

20:31 And I'm very happy to see this, because honestly, these are probably in your top five functions that you're using, or that you need to know, if you're dealing with a SQL table, much less a data warehouse. So I love that that's here. There's also the fact that I can do session-scoped temporary tables, which is again just one of those things you'll hit if you've dealt with a SQL table, if you've done slightly more than SELECT and FROM.

20:58 Honestly you don’t have to be an expert in SQL to know these things. It’s just the next logical thing after you do your first select statement. You’re probably going to utilize these in some capacity. And again, I think part of the language is being so apparent what we need to know and what we need to do.

21:13 I'm really happy to see that I can now alter a table and drop a column, which, my gosh, again, that's one of those things we talk about. We go, how was that not available right off the bat? Why is that only now GA? You'd think it'd be simple, but it's just one of those things that saves a lot of headaches, and it's just part of the workflow.
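The DDL being celebrated here is standard SQL. As a minimal sketch, Python's built-in sqlite3 module can stand in for a warehouse session (the exact T-SQL spellings in Fabric Data Warehouse may differ):

```python
import sqlite3

# An in-memory SQLite database stands in for a warehouse session here.
con = sqlite3.connect(":memory:")
cur = con.cursor()

cur.execute("CREATE TABLE sales (id INTEGER, amt REAL, legacy_col TEXT)")

# Rename a column in place (supported in SQLite >= 3.25).
cur.execute("ALTER TABLE sales RENAME COLUMN amt TO amount")

# ALTER TABLE sales DROP COLUMN legacy_col works the same way on
# engines that support it (SQLite >= 3.35).

# A session-scoped temp table: visible only to this connection and
# cleaned up automatically when the session ends.
cur.execute("CREATE TEMP TABLE staging AS SELECT id, amount FROM sales")

cols = [row[1] for row in cur.execute("PRAGMA table_info(sales)")]
```

The point of the episode's excitement is simply that this everyday DDL now runs against a Fabric warehouse table too.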

21:35 So, that's going to be what I'm going to end on for my draft picks for the April 2025 feature update. I like it. I think these are good draft picks. I think we've got some good features coming. Very excited to see Microsoft continually putting the gas pedal down.

21:44 I think I said it a year ago or so: Fabric came out a little bit hot out of the gate. It was a little bit green. It didn't quite know what it was doing, still stumbling along a little bit.

21:57 I said, “Give it a year. Give it some time for them to really refine the features,” and I feel like we’re really hitting a stride right now. We’re getting those little paper cuts resolved. We’re getting more experiences that are useful and helpful.

22:10 I really feel like the product is very much rounding itself out, and it’s getting much more solidified in their vision for the tool and the program. So, really like it.

22:21 Okay, awesome. Let's move over to... I have a quick, very quick beef here. I guess this would be a beef. I saw a post, I believe it was on LinkedIn, and Tommy, the rage in my soul that appeared was not healthy.

22:43 So let me paint this scenario for you. I will leave names out of this because I really disagree with this comment that was made on LinkedIn. I’m just going to share just because I felt like this was just really weird. It just struck me the wrong way.

22:55 Okay. Someone was talking and the post started off really good. “Hey everyone, I found this amazing new feature. I didn’t know this existed. I’ve been working in Power BI since 2015 and I didn’t know that you can adjust the page size beyond a small size.”

23:12 Again, the default page sizes are something like 1280 by 720. There's the 16:9 and the 4:3, the normal Power BI page dimensions. So he realized that he could stretch the page out, and he's like, "I can make the page bigger."

23:30 And I’m like, “Oh yeah!” Yes. I’ve been doing this for a long time. I make pages that are wider. I make pages that are longer. If I’m doing a mobile design I’ll make a page that’s like 600-700 pixels wide but really long because I need to scroll the page.

23:47 I think templates — yeah. Designing pages and adjusting them to the right size for the use case makes a ton of sense. I’m all on board since the beginning.

23:56 Well, I would argue, again from my experience building themes and working with companies, that 90% of individuals who use Power BI, maybe even higher, 95%, have never touched the page size, never adjusted it.

24:10 I don’t believe you. My data shows otherwise. It is so common. Hold on before you get there. Sorry.

24:19 Yeah. Two weeks after Power BI Designer came out, I already created a scrolling page. I remember that because it was pretty bad. It scrolled a lot, but that was a page size. This was June of 2015. Sorry. Continue.

24:35 So, anyways, the post is starting off really good. I’m like, “Yes, this is a good technique. I like this.” And then it all came crashing down. It was awful. And then he said, “The reason I want to make my page larger is because I can’t make the font size small enough on a table because my table has so many columns in it.”

24:58 And I thought to myself, “Ah, why are you recommending this?” If there’s any one large performance killer on any report that I have ever done, it’s adding a very large, very wide table. Every time you want the report to run very slow and lag like no other, make a big table. That’s what you do.

25:19 Forget performance killer. It’s experience killer. Oh, it was awful. I just thought to myself, “Okay, you may have been designing things since 2015, but I’m not going to trust a single thing you said now.”

25:36 Because there are so many other experiences inside the Fabric and Power BI ecosystem that let you get to tables of data that are really wide. Why on earth are you building a really large wide table? What information can you gather out of that table that you can’t add a slicer for or summarize or aggregate the information to?

26:00 You need to be doing that instead. A very large, wide table is not the right approach, and I do not agree with it. And "let's make our page size larger so I can get more stuff into the table"? That's ridiculous.

26:12 The only time — no, no. The only time I’ve ever used a wide table was we called it the audit page. You had to drill through. There had to be like four things selected before you could get that page. It was hidden.

26:24 So already when I got to it, it was prefiltered and it was only because of verification and — Mike, it just blew my mind. I was in it and you wrote this post.

26:38 Don't do that. How did you survive so far? And I say that in the nicest way, because you must have run into performance issues. You must have run into people saying that. Because listen, since our days of old with Seth Bauer, we've complained that the table experience is horrible and needs work.

26:58 In so many ways, Power BI is not meant for tables. It's meant for matrixes. It's made for summaries, but it's not meant for the audit of going through row by row, until we find out otherwise. That's almost a half-baked idea still.

27:15 Yeah, that one's not fully formed either, but one can make the argument. So I really disagree with that approach. I really disagree with that analysis. I'm firmly convinced that reports should really be focusing on filtering, aggregating, rolling things up.

27:30 My mental model is this: when you have a semantic model, every related table forms a star schema, right? In essence, every star schema you build is a really wide, long table, and you could build any combination of those columns with any of those fact elements. That makes sense.

27:48 But the whole point of Power BI and the semantic model is to aggregate things, roll them up. If you're looking down at the individual rows of information, something's wrong. And you need to be thinking: there's this massive table of data, and I need to find which grouping of which columns is having the impact that I'm looking for.

28:07 Right? Sales are down. Which rows are contributing to that? Is it a group of rows? Is it a region? All of those insightful things are groupings or aggregations or rollups of that large, really wide table into something you can see.
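The roll-it-up workflow Mike argues for, finding which grouping explains the movement rather than scanning rows, is easy to sketch in pandas; the fact table and column names here are invented for illustration:

```python
import pandas as pd

# A miniature wide fact table: one row per sale.
fact = pd.DataFrame({
    "region": ["East", "East", "West", "West"],
    "year":   [2024, 2025, 2024, 2025],
    "sales":  [100.0, 80.0, 50.0, 70.0],
})

# Aggregate to the grain that answers the question,
# instead of rendering every underlying row.
by_region = (
    fact.groupby(["region", "year"], as_index=False)["sales"]
        .sum()
        .sort_values(["region", "year"])
)

# Year-over-year change per region: which group drove sales down?
change = by_region.groupby("region")["sales"].apply(
    lambda s: s.iloc[-1] - s.iloc[0]
)
```

The answer comes out as one number per group, which is exactly the kind of thing a card or bar chart shows well, rather than a wide table of raw rows.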

28:27 So okay, again, I'll just leave that there. I would argue, though, that if you do need some large tables, where do you do this? I think you should do it in Report Builder with paginated reports, or in the data exploration experiences, or with a bulk export, or go straight to Analyze in Excel.

28:48 It's not going to be the fastest thing, but you can at least get the data there as well, and it's still pretty fast. And I would agree, to this person's credit, right? There is no good table experience inside Power BI, and I think we really do need a rich table experience where we can go get data en masse from the semantic model but have it in table form.

29:17 I think people really do want that experience, but I feel like the whole table and matrix visuals have totally missed the mark inside Power BI and they need a big redo or rebuild because they’re just not hitting the mark for me.

29:28 So, anyways, yeah. We’ll pause there. I have a ton of thoughts, but thanks for bringing that one up. And again, this is like a friendly PSA for everyone. So, this is good. Don’t do that. If you want to be on Michael’s good side, don’t tell him you’re making very large wide tables in Power BI and making your page sizes larger because I will get bent.

29:51 Those are fighting words right there. Anyways, all right, enough of that, Tommy. Let’s move over to our actual topic of the day, our half-baked ideas.

30:00 So, these are maybe — let’s call these like little paper cuts. These are the things that we see inside Fabric or Power BI that are just — they’re maybe halfway there. They’re not quite fully formed. Maybe there needs to be a little bit more work on them.

30:16 We just find, from our perspective as experts who use the tools daily, there’s a little bit of extra friction around these features and they need to be highlighted a little bit more. So, I’ve got some ideas. Tommy, you’ve got some ideas. We’re just going to go through a couple of them here and discuss what those little paper cuts or issues are and then how do we maybe solve them.

30:34 Because I don’t want to just come here with problems. I want to come here and say what would we expect it to do? How would we like it to work that would solve this problem for us. So, I don’t want to be a Debbie Downer here. I want to give a positive light on this as well.

30:49 Okay, Tommy, give us one of your half-baked items. I prioritized these based on impact: the one I have the most problems with, but also where I'd see the biggest impact if it was solved. And I'm going to start, and I apologize in advance, because I have a good feeling this is probably one of yours too, but I think we can just make this a big expansion here.

31:11 Mine’s data wrangler, and I’m calling it a half-baked idea here because you mentioned it earlier when we were talking about the updates. To me it’s a great feature. And again, to your point, just because we’re saying it’s half-baked doesn’t mean it’s a problem or that it’s bad. It’s just that it should be something more; it’s not there yet compared to what we would envision.

31:34 And to me, data wrangler is wanting to be two things and it’s neither right now. It’s wanting to be the Power Query experience for Python, and it does an okay job, but it still has limited functions.

31:48 But my thought here is why couldn’t it do more? It should be able to do a ton more, especially with what it’s capable of. Now, to your point, the experience in a notebook is janky, so to speak. It’s not a very smooth experience to go in a data wrangler.

32:03 When I’m looking at the table — I love looking at the table that way and doing things. But you definitely are taken out of your notebook experience. And then when you go back into the notebook, you don’t have the ability to really explore or, I think, to experiment with a user interface like that.

32:26 So my half-baked solution here, so to speak, is make it its own product. Why couldn’t we with the breadth of the Python language provide additional functions, more actions to the data wrangler?

32:40 Additional functions, more actions in the data wrangler user interface. Make it a ribbon, to your point, instead of a panel on the left-hand side. And you should honestly be able to replicate all the Power Query actions with Python actions.

32:55 The reason why I know this is because I’ve used agents to do that. I transformed one of my terrible data flows and I said make this Python and it did it better and it was incredibly fast.

33:08 So I know all those actions, table rename, all the things. Honestly I would be enthusiastic if you just made the Power Query ribbon Python. So give me that and then make it its own product. Make it its own artifact rather.
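
That Power Query-to-Python translation Tommy describes is easy to picture with pandas. Here is a minimal sketch; the table, column names, and values are hypothetical, chosen just to show typical Power Query steps (rename, change type, filter, group) expressed as Python:

```python
import pandas as pd

# Hypothetical source table, standing in for a Power Query "Source" step.
df = pd.DataFrame({
    "OrderDate": ["2025-01-05", "2025-01-06", "2025-01-06"],
    "Region ": ["East", "West", "East"],   # stray space in the header, as messy sources often have
    "Sales": ["100", "250", "75"],
})

# "Renamed Columns" step: trim the stray space out of the header.
df = df.rename(columns={"Region ": "Region"})

# "Changed Type" steps: real dates and numbers instead of text.
df["OrderDate"] = pd.to_datetime(df["OrderDate"])
df["Sales"] = pd.to_numeric(df["Sales"])

# "Filtered Rows" + "Grouped Rows": keep East, then sum sales by region.
result = (
    df[df["Region"] == "East"]
    .groupby("Region", as_index=False)["Sales"]
    .sum()
)
print(result)  # one row: East, 175
```

Each pandas call maps one-to-one onto a familiar Power Query ribbon action, which is the point: a data wrangler-style UI could generate steps like these instead of M.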

33:22 Dude, I’ve been saying this for years now: why can’t Power Query just write Python? It should have been doing that the whole time. But yeah, I agree with you 100% on that one.

33:35 Real quick though, on how powerful this is: I mentioned this about a month ago. I had a data flow that basically broke my capacity, and it was doing very complex things. So, you’re on an F2; it’s a smaller one.

33:49 It’s an F2, but it should be able to handle a data flow. And I gave it to an agent through Cursor, basically fed it through, and said, I want this to be a notebook.

34:01 That transformation: it’s not even Spark, Mike. It’s just a pure Python notebook. It runs in less than two minutes, doing everything I was trying to do in Power Query that was literally killing my capacity.

34:16 So I think the idea of making it its own product, yeah maybe integration with notebook, fine. But to your point I can learn Python just like I learned Power Query. I have all those actions available to me.

34:28 And guess what? The output is a notebook then, but the user can stay in that UI. All of a sudden now you’re using best practices because you’re using Python and you’ve opened up the audience much wider.

34:42 Why wouldn’t this be a priority? So that’s my solution there. Mike, first off, was this one of your half-baked ideas? And two, what are your thoughts? What’s your take?

34:53 Yeah, I wouldn’t say data wrangler itself is a half-baked idea. I think it’s a good idea; I just wish it was more. What’s half-baked about it for me is that it’s just not exposed in the right areas inside the notebook experience.

35:11 I still think it makes sense to be inside notebooks. I still think it makes sense to be tied to Pandas data frames or Spark data frames. Those make sense to me. I just wish it was a bit less jarring of an experience to go from a notebook cell into data wrangler and then back out of data wrangler.

35:36 I wish it was highlighted more as part of the experience. Think about the people that are coming to notebooks. They’re heavy Power BI users, maybe thinking, hey, someone’s been telling me Dataflow Gen2 is really slow; I should go check out notebooks.

35:52 How quickly can I get those users who are coming from Power BI up to speed? I keep saying this over and over again. On one of Microsoft’s recent sales calls, they said they have 30 million monthly active users for Power BI. 30 million. Awesome.

36:08 Those are the people that need to learn notebooks. You have 30 million users. We’ve already got a huge audience of people who are probably still just using Power BI. How can we make it easier for them to get into the notebook experience?

36:21 Well, I like data wrangler. Make it feel as similar as you can to Power Query. Then it’s a very easy transition. Everyone I know in Power BI loves Power Query, and they want to start using it day one when they get to Fabric, and then they’re like, “Wow, this is really expensive. This data flow is tipping over my F2. What the heck, guys?”

36:41 And then when you start doing things in notebooks, it runs so much more efficiently. It uses a whole lot fewer CUs, and you can do a whole lot more with it. So make it easier for us. I’m with you, Tommy.

36:51 I don’t think data wrangler is the half-baked idea. Where it’s integrated and how it’s leveraged in notebooks, that’s what’s half-baked. That needs a bit more love: highlight it better, and make it easier to get to that experience without leaving the notebook cell experience. Tighter integration there.

37:06 The last thing I’ll say about this: part of the success with Power BI was the user story Microsoft generated. They said, we’re going to take Excel users and we’re going to point an arrow to Power BI. We’re going to give them a bridge to walk.

37:18 Yeah. Well, let’s do that with Python. Let’s do the same thing with Power BI users. What’s that bridge they can easily walk to the notebook experience? So, yes. All right. That’s my big idea and my solution.

37:31 Mike, I cannot wait to hear what you got in store for us. Well, I’m going to give you one here that’s still notebook related. Again, I’m spending a lot of time in notebooks.

37:40 All right, Microsoft, this is totally half-baked. You guys should have this by now. I shouldn’t drink water when you say that, because I’m thinking, oh boy. Here we go.

37:49 I need a copy-a-cell-in-a-notebook button. Doggone it. I’m sorry, guys. This is a stupid issue that should just be solved by now. Oh my gosh. Just duplicate the dumb thing.

38:03 There’s even a dropdown menu, an ellipsis with lots of different options. This should be a keyboard shortcut in freaking notebooks that lets me just put my cursor into a cell and duplicate it.

38:19 The amount of times that I’m trying to save my old code and make a change to it in a new copy, so I can compare results side by side. It’s absurd, dude. Every day. Every day I need this feature.

38:30 Instead of having a shortcut command or a button that does this, I have to go into the cell, Ctrl+A, click another button to make a new cell, paste. Okay, that’s not a lot of work. But when you’re doing that all the time, over and over and over again, sometimes you don’t capture all the code and you miss part of a line, or if you use your cursor to drag and grab it, you don’t grab everything.

39:01 I am so annoyed that copy a cell in a notebook does not exist. This is a super basic feature. Put an intern on it for a weekend for crying out loud. This should not be rocket science. What the heck, guys?

39:17 I can see you in your basement cursing Microsoft. I was literally in an experience with my team developing in notebooks and we’re co-developing in a notebook and I didn’t want to edit the cell my coworker was in. I wanted to just take a copy of it and work on my own version of it.

39:31 Should totally be possible. Can’t do it. Half-baked. Come on guys, I’ve asked for this multiple times. I have put ideas on powerbi.com. I have done Fabric ideas. I can’t toot this horn any louder from the mountaintops.

39:49 I’m losing my voice over this feature because it should exist. It should not be this difficult. I hate to tell you, I don’t think this is a half-baked idea as much as like my wife last night taking the shrimp out of the freezer and forgetting to put it in the fridge. It’s just left out. It’s not in the right spot. And that’s what this is.

40:07 No, Mike. And honestly, I’ll expand on that a little with the shortcut thing. There are shortcuts in notebooks, but come on, man. We’re developers. We’re analysts. We like shortcuts. We like efficiency.

40:21 If you’re like, I don’t touch notebooks: well, you touch Power BI and I bet you know some shortcuts. We like efficiency. I like optimization. The first thing I did when I learned Excel was learn the major Excel shortcuts: Ctrl+Shift+Down, Ctrl+Shift+Right, Ctrl+A, Shift+Left.

40:39 All the things that make it easier because you’re right, it’s not the worst thing in the world, but when you do it over and over and over again, it’s tedious. It’s frustrating because your mind’s going faster than your hands can. So, give me more shortcuts with this.

40:56 Give me more quick things that most people do. It needs to be a button. It needs to be a button somewhere. The amount of times I’m trying to just copy a cell, make a couple changes, and then I’ll delete the cell. I’ll just copy it, change it a little bit, run it again.

41:08 Does that look right? Yes. No. Okay, I don’t need that. Delete, and move on. I’m doing this a lot, and I don’t know why this does not exist. This should exist. Copying a cell should be a no-brainer. That should be a shortcut, and it should be a button somewhere in the UI that I can just click and get a copy of that cell.

41:26 Simple. It’s not hard. By Tuesday, this could be done. Who knows? We’ll see. Mark my word. Today is May 1st. We’ll see how many days it takes to get this feature added. It just feels like it should just be there.

41:40 Anyways, that’s my other one. Over to you: another half-baked idea. So this one’s a little larger, and I’m going to see what you think of it initially. Mine, I’m naming it “Power BI in the future.”

41:57 So that’s right. Here we go. Are we doing desktop or are we doing the service with Power BI? We have a ton of features available in the service with Fabric, and they’re doing a lot there. I can still build reports in the service, but I’m still very limited.

42:09 And I’ve been limited since the beginning with Power BI report building in the service. I do not have all the features available. I cannot do drill-through or bookmarks. I cannot do all the actions and things that I can do in desktop report building.

42:23 But also what’s the primary use case here? They’re building more things with semantic model building, but it’s still a much poorer experience than desktop. It does not have all the features available to me.

42:35 Measure creation (how do I say this on a Tuesday, during work hours?) is subpar. It’s terrible. For one, it’s not even a button anymore; it’s actually a right-click on the table in the diagram view. It’s just a frustrating experience.

42:50 So, I have features available to me in the service, and they’re obviously pushing that. And yet, this whole time, I have this incredible desktop that runs smoothly, and they’ve just given us that new ability we talked about on Tuesday: semantic model building with Direct Lake.

43:08 So this is, to me, something where I don’t think a final decision’s been made on where the focus is going to be for Power BI pros, on where they’re going to build. Because there are some things I can do in the service that I can’t do in desktop, but the majority of things I can do in desktop more efficiently, with a better and faster experience.

43:29 So where are we going with this? Like I’ve done before, I’m going to give my half-baked solution here. I would like to say a full solution, but I understand these things take Microsoft a lot of work, just like duplicating cells.

43:46 Think about it, Mike: if I were to take away VS Code (and this is the whole desktop-versus-web philosophical argument we have with applications), you’d be pretty frustrated if you could only go to vscode.dev and did not have VS Code on your machine.

44:04 In the same fashion, give me almost a Fabric experience with Power BI Desktop. Call it Microsoft Fabric Desktop, for crying out loud. Power BI Desktop runs incredibly smoothly on my machine, and it’s meant for that experience.

44:19 Even if I’m connecting to the service, like VS Code does. Microsoft is doing a ton with VS Code extensions, so we know they’re focusing on user experiences on the computer, not just in the browser.

44:33 So give me more experience of desktop because I can utilize shortcuts. I’m in that environment already. My machine can handle some of it even if I’m using my capacity just like now with Power BI Desktop with what is available.

44:47 So my solution here is: make it a better developer experience for desktop. Make that the focus, because guess what? This is still your pride-and-joy tool. I know, a great service experience matters too. So, Mike, what’s your take?

45:03 Wow. Desktop does one thing really well. Desktop builds models and reports. I think the expanse of what you’re going to build on top of semantic models is actually going to grow in the future, right?

45:21 So desktop cannot build an exploration. Desktop cannot build a paginated report. Well, why couldn’t it? Because why would you? It works in the service already. So there are going to be things like that: you can’t build a metric set in desktop. You can’t build a scorecard in desktop.

45:37 You can do a visual that is a scorecard but it’s not the same as building the scorecard. So I understand where you’re going with this, Tommy, and I understand that there is this idea of a pro modeling tool for semantic models. And that’s what I see desktop doing now.

45:52 But if I look at where I build measures, I don’t build measures in the model view anymore. It just doesn’t make sense to me. I use the model view to build relationships between tables. I use it to organize the measures into different folders. That’s what I do. I manage the model on the model view.

46:09 Where do I actually write measures now? It’s DAX query view. I don’t like to use anything other than that tool. And now, with Copilot available down to F2 and the ability for me to use DAX query view, that’s where I’m going to be writing all my measures.

46:24 So I’m going to keep going back to I think Fabric and Power BI, they’re continually building a better experience in the service, but I don’t think it’s going to be how we envision it. It’s not going to be the same way as we use desktop.

46:40 I will 100% argue I really do like having three browser windows open. One with the lakehouse and the notebooks open to build the data engineering, another one with DAX query view and semantic model development in the middle, and then another view for the report connected to the semantic model.

46:56 So that’s something desktop does not give us and I don’t think Microsoft is going to let us build multiple windows of desktop to make it easy for us to have all three windows open at the same time.

47:07 So my opinion here is look, I understand people want to use desktop. I don’t think you’re going to be able to get rid of it. I’m also seeing, which is surprising to me right now, Microsoft developing features first in desktop that are then migrating their way to the service. I’m suspicious of that actually.

47:27 I was really hoping Microsoft would do more feature development first inside the service and then bring it to desktop. Like for example, let’s talk about the new TMDL editor, right? TMDL editor is now in desktop. You can edit there, but there is no TMDL editor in the web.

47:42 I would have expected TMDL editing to appear first in the web. And this may be the reason they did it this way: TMDL, and building semantic models in a different format with PBIP and PBIR version 2. I think all of that needed to be done in desktop first, because they can’t lift it over directly to the service; desktop needs to be able to talk to it first, basically.

48:09 And then you can build all the service-based features on top of it. So I really just hope they’re going to re-close that gap again. So, to your point, Tommy, I understand what you’re saying.

48:20 I really do firmly believe the service is the way. It’s going to be all the greatest features. It’s going to be better experiences. There’s going to be more experiences that can be created in the service that will make it the premium place to do all your development work.

48:37 Yeah. But I don’t think we’re going to get away from desktop needing to be around for a longer period of time. It’s not disappearing in the next five or ten years. It will always be there.

48:47 But did you just download the recent desktop? Yeah. Did you see how large it was? We’re at 700 megabytes for a single application at this point. What did I download?

49:06 So this desktop is getting bigger and bigger. The last update from last month to this month was a 100 megabyte jump in size. We’re going to hit a point where desktop just can’t get any bigger. There’s just too much code in it. And you’re going to need to break that apart into multiple products because it’s just too big.

49:28 And that’s what I’m concerned about for desktop, that it’s going to continually be locking down what machines can actually run desktop because it’s just getting so big.

49:40 And listen, Mike, honestly, this has been our philosophical conflict since episode one. And I’ll make the case again: I have no issue using the browser. I like it. I do most of my things in a browser.

49:57 But for me, until the experience is better than desktop, I’m going to keep pushing for desktop until I actually see the improvements. Because, like I said, most things in report building I can do in the service (yes, I can adjust a visual), but I can’t do everything that I can in desktop.

50:11 So to me it’s: choose one and make that experience better. Then I’m all in on the browser, but only then. Yeah. So anyways, that’s going to be the thing. I’m still amazed that DAX query view is still your primary tool here. I use it, but it’s not my primary. I’m still a Tabular Editor guy. I like that experience.

50:32 Like I said, you dabble with that TMDL and DAX query. But man, I think that’s an episode for another day. But anyways, I’ll pause there. Mike, back to you on your second half-baked idea.

50:43 Yeah, I got a lot of little things here that I’m going to put out here. I’m going to do two in rapid succession here. Okay. Right. I’m going to do one around tables and matrixes. We already talked about that earlier today.

50:58 Half-baked idea. It’s still half-baked. It needs to be better. I really want a Power BI table or matrix that is a slicer table. That’s what I want. That table should be identical in experience to a table inside Excel or the data view.

51:19 And I should be able to go into the data view. If I look at the data table view of Power BI Desktop of a single table, I can pick an item in the list and it filters it down. Done. That’s what I want. And I want that same experience. Look, you guys know how to do it. It’s built. It’s already in the product.

51:40 Just make that thing a new table in the desktop in the report experience, and that way whenever I select a column, make that become a filter context for the rest of the page. That’s what I want. Why is there not a slicer table visual that uses the table and has the data in it and makes it a part of the slicing experience?

52:02 Oh my gosh, that would be a big win I think. And then I would feel like, again, I don’t love slamming out really big, large, wide tables. But I do think in smaller tables or filtering areas, you do want that experience and then I don’t need to worry about filters. I can just go right to the table.

52:16 The experience should feel, if we’re saying Power BI is a grown-up version of Excel, like this Excel feature that is just dialed in. We need to bring that same experience over to Power BI. We need to be at parity with that, bar none. That has to be there.

52:31 That’s my small one. So my second idea here is the other half-baked one. I think it’s Copilot right now for me. I know we’re running out of time here, so I just want to go quickly on this one, but I think Copilot is a half-baked idea right now.

52:50 It feels like it’s very disjointed across the product. I need the ability to do more with it. When I look at other large language models and where I apply them in other products, I’m having this revolution in my mind right now.

53:06 I’m using large language models to build apps, software, code, really good experiences. And I’m very impressed with how well these things are doing stuff. There’s also this idea of an agent now.

53:19 So I can talk to the large language model. It can read my code. It can adjust my code. And then it actually modifies it for me and then asks me to approve it. This is amazing. Why isn’t this everywhere?

53:35 That’s the kind of experience that I want all over the place. It should be in Power Query. I should be able to talk to the large language model and say here’s my data table and I want these transformations done to it.

53:50 I should be able to go into a notebook and talk to Copilot that way. So I think there’s this whole idea of an agent that is there. I’m not convinced yet that the data agent experience is quite dialed yet.

54:03 And I know, again, I’m going to give Microsoft some grace here because this is brand new. They’re just trying to figure this out for the Microsoft space. And I think a lot of the tech industry is moving very fast here. But I do see the writing on the wall. There is a wave coming: if you don’t learn how to use these generative AI models in conjunction with

54:23 writing, building code, building software, building solutions, you’re going to get left behind. So I think this is the one we end on, because there’s a lot to unpack here.

54:57 First off, Mike, the frustration with the agents you’re talking about using in things like VS Code or even Cursor: you barely have to pay for them. With Anthropic, maybe you have an API key, but you’re paying very little to do an incredible amount.

55:35 And we just got the ability yesterday with — well, I still have to pay a good amount for Copilot, and it’s a worse experience than all these other agents, all these other AI tools. But the problem, Mike, is also when you compare the Copilot in Fabric to the Microsoft Copilot in Office.

56:16 The thing is if you want it to work, what Microsoft’s pushing is not just you using Copilot as the user. It’s the organization using Copilot Studio because guess what? You want an agent to be effective, you want a Copilot to be effective.

56:34 Any Copilot, in GitHub or in other tooling, needs system instructions; you need custom instructions and context. And that’s what Copilot Studio is for, but that doesn’t touch Fabric right now. That doesn’t touch the Fabric Copilot.

57:05 So no, Mike, you’re dead on. I can ask very general questions and get general answers from Copilot in Fabric, but it could do so much more. And I think that’s where that half-baked feeling comes from: every user at this point is like, I know what you can do. I know what Copilot can do. I know what agents can do. Why are you so limited? Why is it so basic?

57:45 Yes. And I know they have the data agents, but I haven’t played with them enough yet. So where is the ability for me to pre-train models?

58:03 Yes, exactly. So these are all the things that I would expect to be able to give them. I should be able to have a number of files in OneLake that are like, hey, these are my system files that I want this Copilot to obey while I’m in this session with you.

58:35 So I go into Fabric, I load up my files, and this is as I’m learning about agents. This is everywhere. This is in all other agents that are out there. There’s a bunch of stuff like don’t be verbose, talk this way, write language like this, format your code like this.
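
The “rules files” idea described here maps closely onto the system-message pattern that chat-style LLM APIs already use. Here is a minimal sketch, where the rules text and the helper function are hypothetical, showing how session-level instructions would ride along with every prompt:

```python
# Hypothetical rules a user might store as a file in OneLake.
RULES = """\
Don't be verbose.
Format code as PySpark, not pandas.
Assume the reader is a data engineer; skip the basics.
"""

def build_messages(user_prompt: str, rules: str = RULES) -> list[dict]:
    """Prepend session-level rules as a system message, the way
    chat-style LLM APIs (OpenAI, Anthropic, etc.) expect them."""
    return [
        {"role": "system", "content": rules},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Write a notebook cell that deduplicates my orders table.")
print(messages[0]["role"])  # "system": the rules ride along with every prompt
```

Nothing here is Fabric-specific; it just illustrates that per-user or per-workspace instructions are a solved pattern in the underlying APIs, which is what makes their absence in the product Copilots feel half-baked.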

59:11 It’s like there should be rules that I can apply against the Copilot in various experiences. Yeah. And I would also argue the Copilots that I’m seeing in code generation for Fabric experiences, notebooks or other — they’re okay.

59:28 But what’s really good? GitHub Copilot, that’s really good. And so that team must just be larger. They’ve been doing it longer. And there’s agents now available to us in Visual Studio Code. So there’s something there to that.

1:00:02 I don’t know what this is going to look like in the future, but I really do feel like there needs to be — well, the Fabric team isn’t the only team at Microsoft trying to get at AI and agent-level things. There are other teams in Microsoft that I think are doing a really good job, and one of them is the Copilot team inside GitHub.

1:00:41 I think that team is doing a great job. I will also argue I am now more recently going to use Copilot in writing emails or doing email things. Like hey, I wrote something, I’m like, this doesn’t quite sound right, or hey I need a second set of eyes. That’s becoming more useful to me as well.

1:01:19 And then I use Copilot in summarizing Teams emails or Teams meetings and video. So there’s places where I’m finding it’s adding value. It’s just when I go back and look at the spectrum of things that are happening in Fabric, it’s doing okay.

1:01:56 I’m finding the regular Copilot in my Edge browser is just as powerful as the Copilot in Fabric. So why would I need it? There’s nothing special there that gives me more advantage than just using the standard Copilot in my browser.

1:02:29 I think the situation here — and I’m going to get your take — I think it’s because they’re trying to spread too thin all the Copilots across all the Fabric items, right? And to me, that’s how I see it. Okay, I have a Copilot for Power Query. I think Kurt actually said there’s nine different Copilot agents in the Fabric environment, in the Fabric tenant.

1:03:13 But the problem is all of them have the very standard context, where I may want Copilot for notebooks to explain it for me. I may have one like, hey, I’m a data engineer, I don’t need all the basics and I want you to focus here — let’s just generate code.

1:03:51 You may have one like, I’m just starting, tell me what you’re doing. But I think the biggest thing is I would rather have much less: a limited number of agents that do the primary things very well, rather than all these very basic agents across the different items.

1:04:26 Because I cannot trust the very general agents to do what I want, to achieve the solution I’m trying for. When I have nine agents that are okay, hit or miss, the odds that I’m going to keep going back to them are very small.

1:04:59 But if I had two agents, maybe notebooks and Power Query — or notebooks and SQL — that do it very well and I can trust that I’m going to prompt it, it’s going to give me the desired result or action, I’m going to use it.

1:05:32 So yeah, I think that’s a big one. And I hate saying this because they’re like, dude, we just announced Copilot for everyone too for Fabric and now you want more. Like, come on. But I think that’s the next step here.

1:06:05 The chat is pretty funny right now. And I would just call out one thing while we wrap here. It feels to me like Copilot is now in its Windows Vista era. That’s harsh. That is neat, though. Well, maybe we’ve got to get a little bit further.

1:06:39 So it’s going to get better. They’re going to improve it. They’re going to release better things for it. So it’s still a little rough around the edges. I think there’s better ways to integrate this one. I do think that large language models, vibe coding, going after things with talking to it is going to be the future.

1:07:14 Yeah, I’m already seeing huge wins. I’m seeing lots of time saved and other things that I’m doing. Me personally, I’m doing coding things I never thought possible. And I’m learning faster because I’m able to talk to the AI and ask it, what is this?

1:07:48 I’m literally running an NPM command and it says run this command. I’m like, I don’t understand this command, can you please explain every parameter in this line by line? And it goes through and says, oh, this means this, this means this.

1:08:22 And now it’s not just the fact that it writes code but it explains it to me. I’m learning while it’s doing things, and it’s getting quite crazy. I’ve actually showed my dad — he and I were working on a problem. I have a three-way, four-way switch issue in my house and one of the switches doesn’t work correctly.

1:08:58 And he and I sat down and we wrote out the question. I have a wiring diagram, it looks like this, we’re having a problem, it doesn’t look this way. Can you please explain what it’s doing here and how would I wire it correctly? Doggone it, the thing read it. It understood. It gave me instructions. Here’s how you might want to fix it.

1:09:36 Like, it was shockingly good at what it could do. And if it can do that for just random things I’m asking around, like projects — it’s going to change. If you don’t learn how to at least integrate or use it at a lightweight level, you’re going to get left behind. People will outproduce you.

1:10:14 I saw a funny little meme on YouTube to wrap up on this one. There’s a gentleman at work and he’s typing on his computer away and all of a sudden he just closes his laptop and he starts walking out of the building.

1:10:46 And one of the other co-workers is like, “Hey John, where are you going? It’s only 1:00.” He goes, “Oh, my ChatGPT tokens have just run out for the day. I’m done working for the day. I’m going to go home.”

1:11:18 So there’s nothing else to do. I can’t code because ChatGPT ran out of tokens and I’m done. So I’m finished working for today. I’m going to go back and try again tomorrow.

1:11:48 But will that be our future? I really do think, Tommy, we’re going to be very reliant on Copilot things in the future here. And maybe I wasn’t seeing the vision that Microsoft saw earlier about how much Copilot is going to be pervasive on everything that we do.

1:12:27 But I feel like now I’m starting to get the idea. I’m starting to get my head around it: this is going to be a game changer. And if you don’t learn how to do it, you’re going to get left behind. As I said before, English is the next great coding language, and if you are not skilled in prompting — it’s the same point you made, Mike.

1:13:09 No, I love today’s session. I think this is something we should — if you guys like it, by the way, let us know. I would love to do this recurring. I had a blast. So you’ll see this next episode at episode 1,420. Good one. That’s the next time. That’s your hint.

1:13:47 Anyways, thank you all so much. We really appreciate your time listening with us today. I hope you enjoyed this fun, light-hearted episode just going through and thinking about experiences inside Power BI and Fabric.

1:14:17 We were also trying not to bash the tool. This was all about trying to figure out: what would we like it to do? How could we add these features? How can we make the product better? So if you have product ideas, we really want to encourage you to voice them. That’s how products get better.

1:14:50 What do you need to be a better developer, user, creator inside Fabric and Power BI? We’d love to hear your comments down below. We will comment on them as well on YouTube or other social media platforms. If you like this episode, please share with somebody else. We do appreciate your recommendation to the podcast.

1:15:31 That being said, Tommy, where else can you find the podcast? You can find us on Apple, Spotify, or wherever you get your podcast. Make sure to subscribe and leave a rating. It helps us out a ton. And share with a friend, colleague, or your family. We do this for free.

1:16:08 Do you have a question, a topic, or an idea that you want us to talk about in a future episode? Well, head over to powerbi.tips/mpodcast. Leave your name and a great question. And finally, join us live every Tuesday and Thursday, 7:30 a.m. Central, and join the conversation on all PowerBI.tips social media channels. Thank you all and we’ll see you next time.

Thank You

Want to catch us live? Join every Tuesday and Thursday at 7:30 AM Central on YouTube and LinkedIn.

Got a question? Head to powerbi.tips/empodcast and submit your topic ideas.

Listen on Spotify, Apple Podcasts, or wherever you get your podcasts.
