PowerBI.tips

Shortcut Mania! Use Cases for Shortcuts – Ep. 429

Mike and Tommy dive deep into Microsoft Fabric shortcuts — from managing connections and creating shortcuts programmatically to using them as data contracts between teams. They explore how shortcuts are changing data architecture patterns and enabling new ways to distribute and govern data across organizations.

News & Announcements

Before jumping into the main topic, Mike shares a “beat from the street” about embedding composite models in Power BI. He explains that while it is technically possible to embed reports backed by nested semantic models (a composite model where one Analysis Services model queries another), it’s incredibly complicated — especially with row-level security applied to each model independently. The APIs for discovering all the dataset IDs needed are unclear and difficult to work with. Mike’s recommendation: avoid embedding composite models if you can.

Tommy brings up a few quick news items:

  • Integrating Fabric with Databricks using Private Networks — A new Fabric blog article covering how to layer Databricks integration on top of Power BI private networks. This is relevant for organizations running Databricks alongside Fabric; Mike also mentions a recent industry report positioning Databricks as a leader in the AI space.

  • Creating SQL Databases via CLI — Microsoft published guidance showing you can create a Fabric SQL database in just three terminal commands. While it’s impressively easy to spin up, Tommy and Mike caution that the back-end complexity hasn’t been simplified — column naming constraints, data type gotchas, and error messages that don’t always map to clear solutions still apply.

Main Discussion: Shortcut Mania

Mike and Tommy unpack three Microsoft blog articles about OneLake shortcuts and share their real-world experiences using them in production environments.

What Are Shortcuts?

  • What’s New with OneLake Shortcuts — Microsoft Fabric shortcuts act as symbolic links to data in different storage locations, enabling organizations to create a single virtual data lake without copying data. Recent updates include Azure Key Vault integration for secure connections, the ability to manage and edit shortcuts (rename, update target paths, delete), and automatic discovery of Delta and Iceberg tables when creating shortcuts.

  • Manage Connections for Shortcuts — A new manage connections experience lets you see all shared cloud connections used in your lakehouse, how many shortcuts use each connection, and highlights broken connections in red. You can now replace connections in bulk rather than deleting and recreating individual shortcuts.

  • OneLake Shortcuts Documentation — The official Microsoft documentation covering all aspects of OneLake shortcuts, including security, permissions, and supported data sources.

New Features and Updates

Tommy walks through several key updates announced at Build:

  • Manage and Edit Shortcuts — You can now rename, edit target paths, and replace connections for shortcuts. Previously, if you named a shortcut wrong or needed to point it somewhere else, you had to delete and recreate it. The new replace button alone is a huge quality-of-life improvement.

  • Delta and Iceberg Table Discovery — OneLake now automatically recognizes Delta and Iceberg table metadata when browsing data sources during shortcut creation, making it much easier to work with existing data lakes.

  • Batch Creation via REST APIs — You can now create multiple shortcuts programmatically, which is huge for automation. Want to add a date table to all your lakehouses? Automate it.

  • Fabric SQL Database Support — OneLake shortcuts now support connections to Fabric SQL databases.

  • External Data Sharing — Users can now share data exposed through shortcuts with other tenants.
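To make the batch-creation idea concrete, here is a hedged Python sketch against the Fabric REST shortcuts endpoint (`POST /v1/workspaces/{workspaceId}/items/{itemId}/shortcuts`). The workspace and lakehouse IDs, the token, and the `dim_date` example are placeholders, not a tested recipe; check the official API reference before relying on the exact payload shape:

```python
# Sketch: bulk-create OneLake shortcuts via the Fabric REST API.
# All IDs below are placeholders, and token acquisition is omitted.

def build_onelake_shortcut(name, source_workspace_id, source_item_id, source_path):
    """Payload for a shortcut pointing at a table in another lakehouse."""
    return {
        "path": "Tables",          # where the shortcut appears in this lakehouse
        "name": name,              # shortcut name, e.g. the table name
        "target": {
            "oneLake": {
                "workspaceId": source_workspace_id,
                "itemId": source_item_id,
                "path": source_path,   # e.g. "Tables/dim_date"
            }
        },
    }

# One payload per target lakehouse: the same dim_date table, fanned out.
TARGET_LAKEHOUSES = ["<lakehouse-id-1>", "<lakehouse-id-2>"]
payloads = [
    build_onelake_shortcut(
        "dim_date", "<central-workspace-id>", "<central-lakehouse-id>", "Tables/dim_date"
    )
    for _ in TARGET_LAKEHOUSES
]

def create_shortcuts(token, workspace_id, item_id, shortcut_payloads):
    """POST each payload; needs the third-party `requests` package and a valid AAD token."""
    import requests
    url = f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/items/{item_id}/shortcuts"
    headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
    for body in shortcut_payloads:
        resp = requests.post(url, json=body, headers=headers)
        resp.raise_for_status()
```

In practice you would loop `create_shortcuts` over every target lakehouse; the payload-building half is the part worth lifting into your own automation.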

Shortcuts as Data Contracts

Mike makes a compelling case for using shortcuts as a mechanism to transition data responsibility between teams:

  • A central data engineering team builds and maintains master data tables (products, customers, etc.) from source systems like SAP or Oracle
  • Those tables become the “data contract” — guaranteed to always have product IDs, descriptions, and up-to-date information
  • Other teams receive shortcuts to these tables and can supplement, join, and build on top of them without touching the original source
  • This enables department-level autonomy while maintaining a single source of truth

Mike frames this as building modern “data marts” — a central lakehouse with all enterprise data, then small subsets of shortcuts distributed to teams who need specific tables.

Shortcuts for Development Workflows

Mike highlights a pain point in the dev/test/prod lifecycle: when you cut a branch from a development environment, the first thing you have to do is hydrate the data into the lakehouse tables before you can develop. He envisions shortcuts solving this — programmatically creating shortcuts to dev tables in your branch so data is automatically available without copying, saving both time and compute costs.
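A hedged sketch of that branch-hydration idea: given the dev lakehouse's table names (obtainable from the Fabric lakehouse "List Tables" REST endpoint), build one shortcut definition per table to POST into the branch's lakehouse, so data is referenced rather than copied. All identifiers here are hypothetical:

```python
# Sketch: "hydrate" a freshly branched workspace by shortcutting every table
# from the dev lakehouse instead of copying data. IDs are placeholders and
# the HTTP layer (POSTing each definition) is omitted.

def hydration_plan(dev_workspace_id, dev_lakehouse_id, table_names):
    """One shortcut definition per dev table, to be created in the branch lakehouse."""
    return [
        {
            "path": "Tables",
            "name": t,
            "target": {
                "oneLake": {
                    "workspaceId": dev_workspace_id,
                    "itemId": dev_lakehouse_id,
                    "path": f"Tables/{t}",
                }
            },
        }
        for t in table_names
    ]

# In practice table_names would come from the lakehouse List Tables API;
# hard-coded here for illustration.
plan = hydration_plan("<dev-workspace-id>", "<dev-lakehouse-id>",
                      ["customers", "orders", "dim_date"])
```

Because these are pointers, tearing down the branch later leaves the dev data untouched, which is what makes this cheaper than a copy-based hydration step.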

Databricks Integration Pattern

Mike shares a specific pattern his team uses heavily: a notebook that reads the Databricks Unity Catalog API, gets the list of tables, and then bulk-creates shortcuts pointing directly to the Azure Data Lake Gen2 storage accounts (bypassing Unity Catalog). This runs daily to keep shortcuts in sync as Databricks manages table locations and schemas.
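A rough sketch of that sync pattern, assuming the Unity Catalog tables API returns a `storage_location` in `abfss://` form; the shared-connection ID and the exact shortcut payload shape are assumptions to verify against the Fabric docs:

```python
# Sketch of the daily-sync pattern: take table storage locations from the
# Databricks Unity Catalog API (GET /api/2.1/unity-catalog/tables) and turn
# each into an ADLS Gen2 shortcut definition. The abfss parsing is the real
# work; API calls and the connection ID are placeholders.
from urllib.parse import urlparse

def adls_shortcut_from_abfss(name, abfss_uri, connection_id):
    """Map abfss://container@account.dfs.core.windows.net/path to a shortcut target."""
    parsed = urlparse(abfss_uri)                      # scheme is 'abfss'
    container, account_host = parsed.netloc.split("@", 1)
    return {
        "path": "Tables",
        "name": name,
        "target": {
            "adlsGen2": {
                "location": f"https://{account_host}",
                "subpath": f"/{container}{parsed.path}",
                "connectionId": connection_id,        # shared cloud connection in Fabric
            }
        },
    }

# storage_location as it might come back from the Unity Catalog tables listing.
example = adls_shortcut_from_abfss(
    "orders",
    "abfss://gold@acct.dfs.core.windows.net/warehouse/orders",
    "<connection-id>",
)
```

Running this on a schedule keeps the shortcuts aligned as Databricks moves or adds tables, which is the crux of the pattern Mike describes.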

Semantic Model OneLake Sync

Tommy describes using the semantic model OneLake synchronization feature — importing data through Power Query transformations in a semantic model, then syncing that table down to a lakehouse where it can be used in notebooks for advanced analytics. Mike clarifies the feature and notes that while useful for legacy reports with incremental refresh, the sync does consume some compute.

Read-Only vs. Read-Write Shortcuts

Mike points out that shortcuts aren’t strictly read-only. While that’s the default, enabling OneLake data access roles allows write access to shortcuts. However, both agree that read-only is the most applicable use case for distribution scenarios.

Looking Forward

Mike sees shortcuts continuing to evolve, particularly wanting tighter integration with CI/CD pipelines. Tommy emphasizes that shortcuts represent a fundamental change in how data architects should think about building in the Fabric ecosystem — it’s worth stepping back to evaluate existing workflows and see where shortcuts can eliminate unnecessary data copying, reduce refresh dependencies, and build more trust between teams.

Episode Transcript

Full verbatim transcript — click any timestamp to jump to that moment:

0:00 [Music] Good morning and welcome back to the

0:34 Explicit Measures podcast with Tommy and Mike. Good morning everyone. Good morning Mike. Missed you already. Oh well, thanks. I appreciate it. Yeah. Yeah. Well, we’re back in another episode today. We are now just beginning our summer adventures. So, I think your kids are already out of school, Tommy. My kids are almost out of school. Going to be a whole new world when you have little ones running around at the house when you’re working in the basement all day. So, always fun. Oh man, it’s already been a whirlwind this weekend with the birthday party,

1:09 Soccer games. The kids didn’t know what to do today or yesterday. They had no idea what day it was. And honestly, I think the structure is better for them. So, awesome. Well, love it. Before we get into it, our main topic today is around shortcut mania. There are so many different places for shortcuts. What are their use cases? How do you see organizations using this? Is it a security mechanism? Do you use it inside your organization? Do you use it to connect to other data sources? What are the places where we can use shortcuts? So, we’re

1:43 Going to unpack some articles. There are three articles from the Microsoft blog that we’re going to talk about: what’s new with OneLake shortcuts, managing connections for shortcuts, and then the OneLake shortcuts documentation as well. So, we’ll put those links in; actually, they’re already in the description, but we’ll also go and put them in the chat window as well. Before we do that, I do have a quick something I’ve been learning, a beat from the street as we like to call it, just something that we’re working on. Tommy, do you work with a lot of Power BI embedding things? I don’t think you do a ton. So, not a ton for like the custom

2:17 Applications. You embed in like SharePoint and like, right? Yeah. Everything that’s the business of business, not necessarily when it comes to the custom embedding. So, I do a ton of embedding and just recently came across something that I found was interesting and just wanted to throw it out there for knowledge’s sake. That’s why you listen to the podcast, something to learn. So today I learned that you can embed a composite model. So composite models: you have composite models where you have a model of models, right? So you

2:49 Have an Analysis Services model and maybe there’s some tables in it. There’s a second Analysis Services model that directly queries the previous one and then maybe adds some additional tables, some additional SharePoint tables or things from Excel. But basically you have two semantic models that are able to render the Power BI report. Well, in doing some digging, embedding these things is quite complicated, especially if you have row-level security applied; you have to apply row-level security, and I’m not 100% certain on all the details, but like you

3:24 Could apply security to each model independently. I believe you’ve got two different security models for each of those reports. So you could have one report with some security tables, and then the second direct query table could also have some security with it. And in order to pass all that information in to get an embed token, you need to be able to find both semantic models, get their IDs, and understand both of their row-level security setups. All right. All this to say it’s definitely possible for you to do

3:57 It. Power BI embedding with these nested semantic models supporting the report is not fun at all, and it’s quite complicated. The APIs you need to go get the information: there’s actually not a really good way of looking up additional details around the dataset IDs that you need for this reporting, so it’s actually quite difficult. The APIs that Microsoft gives, either on the Power BI or Fabric side, don’t seem to be very clear as far as getting a very clean lineage of all the items that

4:30 Exist and all the data sets that you care about to render the report. So actually, can you walk me through, because of the complexities there, right? Is that something where you just got pen and paper out before you started? Because I’ve been... No, I feel like you just had everything in your head. Really, what it boils down to is someone was trying to embed something and they’re asking us. Again, we do a lot of this embedding stuff already. We’ve done well over 27, 28 embedded projects at this point in time. So, we’ve spent a lot of time just working around embedding and we know a

5:03 Lot of things about how to do it. Someone came to us and said, “Hey, we’ve got a problem. This thing seems to error out every time we try to embed it.” We’re like, “Well, that’s interesting. We’ve not seen that before.” So, after we dug into the problem, we realized, “Oh, well, we can embed it. It just requires a lot more information to do the embedding.” And it’s just difficult. It’s just really challenging to figure out how to connect all the things. You’d think there’d be a nice little API that says, “Hey, give me all the data sets or data sources of this report or semantic model,” and you could just call it and return all the data. Well, in a

5:37 Composite model, you get the first layer of data, but the second layer of data, the second semantic model and its definition, is much harder to come by. So from the testing point of view, Mike, because there’s been a few things in Fabric, and we might get into shortcuts too, where you realize testing in the different areas, where having that testing plan is so important. The days of just building semantic models, how easy they were, what simpler times we were in

6:10 Besides what we have now, because honestly, Mike, I’ve been doing it a ton. Not the exact same embedding, but projects where I’m writing everything out with literal pen and paper before I’m just muddling around in the tool, like I’m just going to change a few configurations, because you have to make a mental map of these things. Yeah, I don’t think people realize. Yeah, I don’t go to pen and paper because it’s very difficult to share pen and paper with your teammates on a computer, so I don’t really do that. But I will say to you, Tommy, though, like in exchange

6:43 For pen and paper, we use Miro a ton. So to your point, right, when you’re going to make a change to a model, especially a model that you don’t own or someone else gives to you, it makes a lot of sense to be able to stand back and look at it and go: what tables do I need to change? What is the impact of this change? Is there a more efficient way of doing this data design? And I think where I see it right now is with our data modeler people. You could data model fairly straightforwardly before using Power Query and such because we

7:18 Own more or have the potential to own more of the back end. And what I mean by that is like Fabric lakehouses, tables, warehouses, SQL databases, the data structure itself. We actually have more capability now to make our DAX easier because we can shape the data a bit further upstream, which I think is very powerful, but it also comes with a new set of things: now you’re opening up this whole new world of things you may need to learn or figure out how to do. So I do agree with you, Tommy. Like we do a lot of analysis before we

7:52 Make major changes, because you don’t want to break something or make it less performant than it already is. But anyways, that being said, I do agree with you. You should definitely take a metered look at the report and the model before you do a bunch of changes to it. Yeah. Awesome, dude. Well, love to hear it. Embedding is always one way to start off your week. But Mike, there’s some new... It’s not an easy topic to start with. Anyways, all this to say we learned something. I wanted to share the knowledge that we

8:26 Learned back to our audience in case you are doing things in embedding. Again, as a wrap-up or a summary, would I embed composite models? Yes, you can. Do I recommend it? No, I would not. If you can get away with not having to do a composite model to embed, I’d recommend not doing that for simplicity’s sake. Awesome. Mike, I do have a little better news for you: the Fabric blog. They actually put Databricks in one of the titles of their blog. This actually just came out a few days ago.

8:58 Integrating Fabric with Databricks using private networks. Let me add this to the chat for you, but I thought this would be right up your alley. Mike, you do a ton with Databricks as well. I do. I’ll look at this with the private network. So, this is very nuanced. It’s not a Databricks feature. It’s more about when you build Power BI things or you have private networks for Power BI; you are now able to integrate that on top of the private network. I think that’s what this article is going to speak to today. So, I’ll throw the article in the chat window

9:31 As well in case you do use Databricks. I know a lot of our listeners do actually use Databricks. I’ve heard some comments back and forth. So, the odds are very high that Databricks is likely a part of your ecosystem, especially if you are in a larger organization. I think I just saw a report this morning. I was reading some news and I believe Databricks was announced as a leader in AI among AI companies, and there’s Google, there’s Microsoft, there’s other big players there, but Databricks is the leader in that space on ability to

10:03 Execute and vision in the space, which I would agree with. I’ve always felt Databricks has been like a notch above everyone else. So they’ve been the leader in this race, I feel like. Mike, also starting today too, the last quick news item, and I thought you’d have a little fun with this: did you know it takes three commands on your terminal to create a SQL database? Now, so this is another update Microsoft has where they’re actually just saying, hey, here’s how to create a SQL

10:36 Database using the Fabric command line tool. And it’s basically only four lines of text which creates a database. Mike, this goes back to something I said: it shouldn’t be that easy to create a database. I create databases by going to the UI and just clicking the button and it just shows up. Yeah. One thing I will say, though, that I really like about the SQL database: yes, you can spin it up; yes, you can do all the things. One thing I really like about the SQL database is it’s actually really easy to set it up

11:08 And even put a sample data set in it. I know for training and educational purposes, or just getting started with something, when you have data systems that come with a sample data set that’s easy to just have show up, oh my gosh, that’s so nice to have that capability. So whoever’s been making the decision on the Microsoft side to make sure that every data system you spin up has the ability of having a sample data set with it, that’s been a brilliant idea. I always like to test it with my own data and everything, but

11:41 Then again it’s that integration, pushing it in. And honestly, yeah, we’ve talked about this with the sample things too: when you’re trying to learn something, I’ve been avoiding them lately, just trying to immediately integrate some of the existing data that I have, because my goal is the tool, right? Not necessarily what’s in my data at that point. And it’s very easy compared to Power BI when you want to learn something. I feel like it’s best when you’re actually using like your Spotify history. But there are too many things right now when it comes to pushing data over to a SQL database

12:16 Right now. I’ll give you a good example. Remember, Mike, we talked about pushing data from a semantic model, pushing that to a lakehouse. Yeah, as a shortcut. And I was intrigued. What if I can then push, theoretically, that semantic model table to a database, right? Should be pretty simple with the copy job. Well, the thing that you don’t know in the copy job is there are certain types or values allowed in column names in the SQL database. Yes, correct. So, and

12:50 Unfortunately, Copilot is like, “Hi, you can’t do that in the SQL analytics endpoint.” I’m like, “I’m not in the endpoint.” But there are a lot of things where you don’t know those errors, and all these little tidbits and gotchas. We’re dealing in a world of gotchas right now. And you don’t want to get thrown off with either your own information, but I’m finding a ton of, hey, why is it doing that? What’s the error? All I see is an error code, but it doesn’t really tie into anything. It’s interesting that you say that

13:22 Because I feel like, if I look across the spectrum of all the different tools that are being produced from Fabric, we are in this phase of: it works, but there are all these little quality-of-life improvements that are not being done yet. I think we’ll get there. But in a development world, it seems like you just want to get the product out the door enough that people can use it, and then you come back and people provide feedback: well, hey, we tried to use this application. One example that I had recently was I was

13:56 Using it. I was trying to experiment more with Fabric user data functions. So basically UDFs inside Fabric, and you can edit them or you can open them up in VS Code. So I was like, “Oh, I’ll open it up in VS Code. I’ll use the extension that they give you. Does it all work? You sign in. Great.” I was able to modify the code, but it wasn’t very intuitive to me how to publish my user data function back to the service. You would think it’d be very similar to like GitHub, right? You’d

14:28 Bring the code down, you’d make the changes, you could run them locally if you wanted, and then you could push, or say submit changes, or synchronize changes back up to the service. I wasn’t able to find that very easily. So, it’s all these little feature improvement things: why doesn’t it have a button for that? Where’s the UI to support this? What I would think would be a simple update feature on top of something. That’s the stuff where I’m like, this is not really what I want. You’re opening up all these products that very much have, again, a lot of allowed properties, like the SQL

15:03 Database with no spaces allowed in column names, or certain datetimes that don’t jive well with Power Query even though they want you to talk about that integration. And this is why these previous jobs or careers had their long education: because it wasn’t just someone who knew how to create a database, but they knew all those ins and outs of, okay, this version of this column won’t work if we try to do X, Y, and Z. All those things still apply in Fabric. They didn’t make a database easier on the back end. They just made it easier to create. So, honestly, for a lot of people, I don’t want to say

15:36 Concerned, because I still love that we have this access, but you have to know that they did not simplify the back end for you. There are still principles that apply whether you’re dealing with Spark or you’re dealing with databases. Agreed. All right, man. I think... Do you have anything else? No, I don’t have any other news or main topic items. We can go into the main topic now. Tommy, give us a run-through of the main topic. Just give us a detailed rundown of what we’re going to talk about today and then we’ll start there. Shortcut

16:09 Mania. Speaking about simplifying your life. So shortcuts in Microsoft Fabric are really a completely different approach to how we manage our data in Fabric, where what we’re actually allowed to do is not mirror or copy or move our data around, especially when we’re reusing a lot of the same sources or tables. It simply allows us to point to certain tables or sources that already exist in Fabric. I can point to a table in my

16:44 Lakehouse and I can have that show up in multiple lakehouses. I don’t have to worry about refreshing it. I don’t have to worry about data being copied. It’s simply a pointer, almost working like a symbolic link in Windows Explorer. But this ability, Mike, rather than having to recreate the wheel or then have some way to connect to that data, making sure it’s all up to date: shortcuts by themselves were pretty incredible right when they were introduced, but now they’ve been adding a ton of features that just make

17:18 Shortcuts, honestly, change the architectural model I have in my head for how I’m actually building in Fabric: how I’m designing my lakehouses and what approaches to take to utilize shortcuts the most. So, a pretty good rundown of shortcuts. Anything you want to add on that? Yeah, I think shortcuts are interesting because depending on what system you’re using, there’s a little bit of a blurry line in my world, or I feel like in my mindset here. Sometimes when I’m using a shortcut, it’s very clear when we’re

17:51 Talking about Fabric-to-Fabric items, right? This is a lakehouse in Fabric and there’s another lakehouse in Fabric. When you’re shortcutting across those things, it seems to make a lot of sense. When you’re doing mirroring: mirroring, if you think about SQL databases, is actually reading data and storing data down in the lakehouse. That’s part of that mirroring exercise. When I do other things like mirroring with Databricks, and I would assume mirroring with Snowflake, you’re not actually copying the data in, and those mirroring experiences are acting more

18:23 Like a shortcut. So to me there’s like a blurry line between when we’re talking mirroring and when we’re talking shortcuts. So I think for most of the conversation today, we’re going to talk in the purest form of shortcuts inside Fabric, and we’re talking lakehouse-to-lakehouse shortcuts. That’s what we’re discussing, I think. Right. Perfect for me. So, I just want to start off with some of the updates that Microsoft came out with at Build, just to talk about where they’re going with this. They

18:56 Spent a lot of time with it. When I created a shortcut in the past, and I was super cool with this, I could see that I had a shortcut in my lakehouse, but it was very hard to see where it came from, to manage it, to rename it. I just got a nice little icon that said, “Hey, this is a shortcut,” which was fine, but again, when you’re dealing with multiple tables, you want to make sure you’re dealing with the right thing, or you may need to point to something else. Well, we finally have the ability to manage our connections and shortcuts. And this is really, to me,

19:29 Getting to the heart of OneLake. We’re making it a lot easier in my particular lakehouse to see what the source is, where my connections are coming from, and I can even then change that source. So this opens up a ton of potential for testing, where we can have our dev and prod, but we won’t get into that yet. Really, before, if I wanted to point to a different table with a different shortcut, I had to remove the shortcut and add a new

20:01 But now we... very clunky. Very clunky. Yeah. And I think even if you named it wrong, you couldn’t rename it. There was no edit capability there either. So if you named a shortcut incorrectly, you’d have to delete it and replace it. Get rid of it. Yeah. Exactly. Exactly. So I think this is a very good feature, and I do like this experience. The new replace button on top of that, I think, is a big win. This is something that should have been there from day one. It’s nice to see this. This is what we were talking

20:33 About earlier. Like these are some of those paper cuts: I don’t want to delete it and replace it. I don’t know what that’s going to do downstream. I just want to replace or update it. And there may be some really interesting implications there too, Tommy. Right. So if you think of the default semantic model, right? In a lakehouse, the lakehouse gets a SQL analytics endpoint and you get a default semantic model with that lakehouse. So if you’re trying to query the data, you can easily do that. So deleting and replacing the link was probably basically recreating the metadata and

21:06 All the things for that table. Imagine if you’re updating the link and you’re adding a column or removing a column: how do the downstream sources react, right? What does the SQL analytics endpoint do when you have a new shortcut that you’ve updated and the definition is different? There’s physically different metadata that supports that table. That was probably the reason why they said no, we’re not going to do that, because the SQL and the Power BI Analysis Services engines weren’t able to support those changes. So it probably makes sense, because those are other compute engines you had to work with to get it to work correctly.

21:39 Right. So honestly, from a seamless point of view, this is really, like I said... we’ll talk more about our processes, but I want to go through some of the other updates as well. So if you’re into Iceberg, or if you’re dealing with a lot of Delta tables, it’s a lot easier to browse and access those tables across those particular data sources. So, anything where you’re dealing with a Delta or Iceberg table, OneLake can automatically recognize that metadata,

22:12 Allowing it to be recognized in the browser experience. And I think for a lot of people too, when we’ve talked about migration from other systems that they’re pretty heavy and dependent on and in the weeds with, this makes it a lot easier to, if not migrate, then seamlessly work with something that they’ve built over a long period of time. Yeah, I don’t know how many people are using... well, to my understanding of Snowflake right now, or at least the most experience I see here: a lot of people

22:43 Are building data in Snowflake or the Snowflake ecosystem. I’m not seeing a ton of people going into Snowflake and saying, “Wow, I really want to bring a bunch of that data back into Power BI.” They’re just using Snowflake as their permission layer, their security layer for everything. And it seems to me more like the idea that Snowflake becomes a SQL endpoint for everything you’re going to get into Power BI. Again, this is just in my world, but I’m not hearing a lot of people really tightly integrating Snowflake

23:16 Into the Fabric ecosystem. It feels like they’re still fairly separate at this point. Maybe that’ll change over time and maybe I’ll get some more projects where we’re doing tighter integration between Snowflake and Power BI or Fabric. Right now, I think the deeper integration that I see is Databricks. It’s Databricks and Fabric all day. It feels like a lot of companies have Databricks, and the Databricks integration between that and the Power BI side is something we’re trying to unpack and figure out how it works. I’m starting to believe we’re getting to a place where bronze

23:51 And silver can live in Databricks, and then the gold layer, physically storing those final tables, should live inside Fabric, because there are other things we want to do at that gold layer. So, previously I would say do bronze, silver, and gold all inside Databricks, which is fine, but I’m seeing the handoff between the BI team and the data engineering team shifting. It seems to be a better transition between silver and gold, because the gold tables are the refinement of those silver tables. A lot

24:24 Of times you just need those silver tables to be populated by data engineering, and then let the business team mull over the table, add the columns, shape them, adjust them, and then own that layer of the lakehouse structure a little bit. I’m finding success with that, anyways. No, and I think that’s a... Mike, I think that’s a big thing too for a lot of teams and organizations who, again, have been doing this for years. There’s nothing new here in terms of the products and services, just where it lives. So I think that’s exactly where this is geared. Interesting

24:58 You say that, how the heavy development would still be in the other platforms, but that makes sense too. It depends on the company’s strategy, right? So if you have a data engineering team that’s not very Fabric-centric, they’re going to want to pick a tool that’s been really reliable and been out in the market for a number of years. It’s a risk for an organization to pick Microsoft Fabric, which has only been out for two years, right? That’s a risk. Is it going to do everything we want it to do? Is it going to have all the features we need? Is it going to have all the latest, greatest innovation

25:30 Stuff? It probably won’t, right? Right now, you and I, Tommy, are already talking about how we’re still cleaning up the edges a little bit. We’re still refining some of the user experiences. If a player’s been in the field for 10, 15 years (I don’t know how long Databricks has been around; I know they’ve been around at least 10 years), they’ve got some experience. Like the Databricks team is the team that invented Spark. Who’s going to have better knowledge of what Spark can or cannot do, and where its weaknesses are, than the team that invented it? So that whole experience, I look at that going, yeah, it makes sense

26:03 To me that organizations are going to want to pick something like that. I'm just not a big fan of Snowflake. I feel like it's an overpriced tool for what it does, and I think you can get more bang for your buck in the Databricks space. That's where I feel things are going. Awesome. So, a few other things, Mike. We'll just breeze through these, because I really want to dive into how you've been using it, and honestly your thoughts moving forward on whether it changes any processes. But just a few quick ones as well. Yeah: shortcut batch creation through REST

26:35 APIs. So you can actually create multiple shortcuts at once, which is honestly something I perked up about, because if I want to add, let's say, a date table to all my lakehouses, well, no matter how many lakehouses I have, we can automate that. Awesome, love automation. Next, OneLake shortcuts for Fabric SQL databases: OneLake shortcuts now support connections to a Fabric SQL database.
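The batch-creation pattern Tommy perked up about (fanning one shared date table out to every lakehouse) could look roughly like the sketch below, built on the Fabric REST API's Create Shortcut endpoint. This is a hedged sketch, not a definitive implementation: the workspace and lakehouse IDs, the `DimDate` name, and the helper function names are all placeholders, and the request-body shape should be verified against the current OneLake Shortcuts API reference.

```python
# Sketch: push one shared DimDate table into many lakehouses as OneLake
# shortcuts. The endpoint and payload follow the Fabric Create Shortcut
# API as documented at the time of writing; treat the exact shape as an
# assumption and verify before relying on it. All IDs are placeholders.
import json
import urllib.request

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def build_date_shortcut(source_ws: str, source_lh: str) -> dict:
    """Request body for a OneLake shortcut named DimDate under Tables/."""
    return {
        "path": "Tables",
        "name": "DimDate",
        "target": {
            "oneLake": {
                "workspaceId": source_ws,
                "itemId": source_lh,
                "path": "Tables/DimDate",
            }
        },
    }

def create_shortcut(token: str, target_ws: str, target_lh: str, body: dict) -> None:
    """POST the shortcut definition into one target lakehouse."""
    url = f"{FABRIC_API}/workspaces/{target_ws}/items/{target_lh}/shortcuts"
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # raises on HTTP error responses

# Usage sketch: the same body, posted once per downstream lakehouse.
# for ws_id, lh_id in downstream_lakehouses:
#     create_shortcut(token, ws_id, lh_id, build_date_shortcut(SRC_WS, SRC_LH))
```

Because the payload builder is separate from the HTTP call, the fan-out loop stays trivial no matter how many lakehouses you have.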

27:07 Mhm. Again, pretty neat. I wonder if they'll ever do it the other way, where a Fabric table would actually be a shortcut inside the SQL database, but I don't think that's going to be possible. Say that again, what are you saying? Right now, I can connect to a Fabric SQL table and add that as a shortcut in my lakehouse. My lakehouse can have a shortcut that's a Fabric SQL connection, but I want my Fabric SQL database to

27:42 Have it the other way, to utilize shortcuts. Wouldn't that be neat? I don't really understand. So, one of my tables in SQL... SQL doesn't care. You just give it the location of the lakehouse and it would just read it. That's right, right now it's not available in a SQL database. Right now, my lakehouse can add a shortcut from SQL. Why won't you just connect to a different lakehouse? Can't you take a SQL server or SQL endpoint and just connect to

28:14 A different lakehouse? Just actually establish the connection. It's just reading Delta tables at that point. Anyways, I think a shortcut is the same. I think you can do it. My statement here is you can already do that. You can take the SQL data warehouse, the warehouse, and just connect to multiple lakehouses and get the data you need from there, and it's like a shortcut. It doesn't actually live in the SQL server. I understand. I think what you're saying is you'd like it to be easier to create that remote linked item as a shortcut in the lakehouse and

28:50 Represent the same thing. Nice. Yeah, but you can already do it today by just making a new connection to a different lakehouse. No, I got you. The confusion around SQL databases and endpoints and their differences is for another day. True. And finally, external data sharing for OneLake shortcuts, allowing users to share shortcut data with other tenants, which again is just another win. So Mike, all this to say, shortcuts have really been

29:22 Out since more or less the start of Microsoft Fabric. It's one of the early major features they integrated into OneLake. Mike, there's a lot here to really digest. I use shortcuts all the time, and more importantly, I've been working on developing different strategies for utilizing the best of what that feature is. So before going any further, Mike, you lead: how dependent are you on Fabric shortcuts, and how do you

29:57 Use them right now? Yeah, we're doing some exploration with some of our customers about making shortcuts an item where you can use a security layer, essentially. So there's this data engineering world of bronze and silver, and then there's maybe the BI world of the gold layer and the semantic model tables, and we're looking at this going, okay, well, what happens when you have different tables of information, and when do you start sharing that out? Right? So if

30:31 You think about the users of your data platform, there's going to be very basic users. They're only going to care about the reports, they're only going to care about semantic models. Fine, this is probably not your target audience. But if you go one layer above that, beyond a basic user, so let's say an intermediate or more advanced user, you want to give them access to other aspects of your data. Maybe they're capable of writing their own SQL, or maybe they've been trained, or they want to just muck around with the data. That's fine. So we have to have this ability to hand over

31:03 Responsibility to other teams and give them basically one solution that handles basic users, intermediate users, all the way to super advanced users. And so we're thinking, at the advanced user level: here's a shortcut. Here's a lakehouse with a bunch of data in it from gold that is a bunch of shortcuts. If you want, we can give you the relationships between the tables, but you're welcome to build your own thing, add your own data, supplement it with something else that you need for your business unit. So we're really seriously considering looking at this and saying this: a

31:35 Shortcut is an easy mechanism to transition responsibility from one team to another team. And so when you have other teams that are highly technical and need access to the data, you don't need to be building a semantic model. You don't need to be building all these extra complicated things that you need to maintain and manage. You can give them direct access to the tables. So that's one area that I think we're exploring. I think that makes a lot of sense. One thing that's very much a challenge for me right now, and I don't

32:08 Know where this will fit in the Microsoft ecosystem later on: when I'm building on top of dev. I do a lot with teams that are bigger, and so we're moving data from dev to test to prod, that development cycle of things. This is where we struggle a bit, I think, and shortcuts could add a lot of value here, especially when you're trying to cut a branch from the development environment and make some data changes to the semantic model or data engineering or the notebooks, but you need those shortcuts to exist to read the

32:42 Data. So I'm of the opinion here that when you're doing a branching strategy, there needs to be a way to programmatically create those shortcuts to the dev tables in your branch. That way, I don't have to rerun all the data. I don't have to copy it all over again. Right now, a lot of our pattern is: okay, cut a branch from dev. Okay, the first thing we need to do is load the data, hydrate the data into the tables that are in the lakehouse. And then after we do that,

33:14 Then we can do our development. I'd like that step to go away. I'd like that step to be more dynamic. And I'd like to be able to specify that when I make a branch, I'm automatically going to get the data in the environment without having to do any extra copying of data, because it's just more cost to run the CUs to do that stuff. I feel like it should be easier to do that. Yeah. So interesting that your two main use cases for shortcuts are branching in dev-test-prod, and then also when you're dealing

33:48 With responsibility. The responsibility one I haven't thought about as much, but let's dive into it, because that's the aha moment for me with shortcuts: the transition of responsibility of data, right? Yeah. Okay. Really, the aha moment for me when it came to shortcuts and seeing how they can be utilized was that source of truth, so to speak. Again, a very basic, easy example here is if you had a date table that you wanted to make

34:21 Sure that every lakehouse was utilizing, well, it can come from one place. I don't have to do a copy job. I don't have to recreate that from somewhere else. So that is one of the major implementations, utilizing shortcuts as references. And compared to really everything else, when we had to do this in Power BI, I always still had to refresh that data. Even when I add dataflows and I add that master customer table, it still takes time. But again, shortcuts make

34:53 That incredibly seamless and easy. So walk me through that transition of responsibility, right? Because the ownership starts with those who are building that table, who have gone through that table, and finally, in a sense, share it, allowing people to connect to that shortcut. Right. I think master data is a good example of this pattern. With master data, there's a team that's building a part of the master data together. Often, even when you hand out master data

35:26 To the organization, other teams will be like, well, we have this extra metadata that we want to add to it. We need to enhance it slightly, or we need to do something different to it. So the product number is still the product number, right? There's still some definition to some common elements that are inside that master data. So I think a good example here is the master data table coming from data engineering or from the central BI team. Here you go. Here's your master data, department. You can supplement that data with whatever you want, right? Read that master data in. It's again

36:00 That contract, right? In the master data table, every ID of every product will always be filled out, will always have a product description and name. And so the BI team or data engineering team will have some standards around: this is the information I'm going to give you, and it'll always be up to date based on the latest information. So maybe they're drawing the information out of SAP, maybe there's an Oracle system, maybe there's some other data system that's holding all your products and the parts that you sell, and that now lives in an automated way. So that team owns going from the source

36:33 System to the table, right? At some point you need to transition from, okay, well, we've done all we can do with the data. We don't actually know how you're going to use it in the business unit. You may do something different to it, or you need to add something or supplement it, or whatever. Maybe there's a business unit who's attaching customers to master data for whatever reason. You're doing some analysis there. Whatever. That's fine. Here's the table. Here's the shortcut. Start here from this point. That shortcut becomes the data contract between one

37:05 Team and another, and the second team can pick it up and then use that shortcut. Now they can build notebooks on top of it. They can write SQL against it. They can join it with other data. Great. And now they're off to the races. So here's the thing, though. When does the BI team then stop? And I think this is the idea here, right? Where, contrary to everything else the BI team was responsible for, especially when it was just Power BI, really all user input was what the business owned, and then after that it

37:38 Was the BI team's job to really transform that data, work with them, etc. But now you're almost, in a sense, flipping that on its head, where I'm going to give you the most basic potential table without any additional configuration. This is your, I don't want to say a template, but this is the base, right? However fleshed out it is, it's really meant to be distributed in multiple places. Mhm. That all of a sudden makes for a lot more cooks in the kitchen, a lot of chefs, so to speak, where if I'm

38:12 Just giving this out to everyone, and again, let's take a basic example of all of your, let's say, sales regions, right? We just want to make sure that's tied to a person, or whatever it may be. Well, if all the BI team's doing is mapping each state to a manager and the rest is up to you, right? And let's say 18 different teams, or 10 different teams, connect

38:45 To that same shortcut, then you have 10 different owners that are doing their own thing, especially when you're dealing with master data. And again, I don't want to go down a rabbit hole when we're talking about other things, but this is one of the things that I'm seeing with shortcuts, where I want to give people not just, in a sense, fresh data, but also pick and choose what shortcuts I'm giving out. And I think there are two categories here. There's what we'll call the raw shortcuts. This is: you

39:19 Needed that source data, here you go. We've connected to these different sources. We've combined Salesforce and XYZ together, but we don't know what the definitions look like. That is up to you. The table is created, though, and that was something you never had access to. Go and run with it. The other side of shortcuts is going to be our master data, where these are things that should really only be used in reports. Anything that's modified obviously would not reflect well. But are you seeing categories of types of

39:52 Shortcuts and the type of data you're giving out? Because both of your examples were much more from the developer side of moving the data along, and the approach I'm taking here with shortcuts is on distribution, when you're working with multiple different teams. I think both work, and that's the beauty of shortcuts, but there are definitely multiple approaches here. Have you looked at shortcuts yet and gone, okay, I'm going to not just create shortcuts, but there are different types of shortcuts I'm going to create? I

40:24 Mean, no, I don't think there are different types. I think there are different purposes for why you may want to use different shortcuts. So let's talk about one example here. With the lakehouse, when you connect to a shortcut, you are able to connect shortcuts from multiple different locations, right? So you can have a single lakehouse that is a lot of centralized data coming from a central team. It's coming out of your main reporting systems, SAP or Oracle, right? You can build a pipeline to get that table of

40:57 Data made. That could be a single lakehouse. That lakehouse could have all the SAP data that you care about in one single place. Once you have that, now you can start shortcutting that data to other lakehouses, and now they can be business-specific. And there's a difference between business users and departments, right? So I'm thinking this is more of a department share versus a business user share. What I mean by that difference is, usually when you get down to the business user level, they're not writing

41:29 A bunch of SQL queries. They're not doing a bunch of data engineering on top of things to build their own reports. They're being given a collection of semantic models and maybe a handful of tables in a lakehouse they can build reports on top of using a SQL endpoint, something like that, right? So I'm thinking there's a distinction there. And when I talk departmental, I'm thinking a little bit bigger than just an individual business unit user. And when you get to the department level, some departments actually have one or two individuals that are their analysts, their specialists, their power users.
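Picking back up on the branching workflow Mike described around the 33-minute mark: hydrating a fresh feature branch with shortcuts back to the dev tables, instead of reloading or copying data, could be sketched as a small planning step, one shortcut definition per dev table, ready to POST at the shortcuts endpoint. This is a hedged sketch: the IDs and table names are illustrative, the function name is made up, and the request-body shape mirrors the Fabric Create Shortcut API as documented, so check it against current docs.

```python
# Sketch of "hydrate a feature branch with shortcuts instead of copies":
# every table in the dev lakehouse becomes a OneLake shortcut in the
# branch lakehouse, so no data moves and no CUs are spent reloading.
# Payload shape is an assumption to verify against the Fabric docs.

def branch_shortcut_plan(dev_ws: str, dev_lh: str, tables: list[str]) -> list[dict]:
    """One shortcut definition per dev table, to be POSTed into the
    branch workspace's lakehouse right after the branch is cut."""
    return [
        {
            "path": "Tables",
            "name": t,
            "target": {
                "oneLake": {
                    "workspaceId": dev_ws,
                    "itemId": dev_lh,
                    "path": f"Tables/{t}",
                }
            },
        }
        for t in tables
    ]

# Usage sketch (IDs are placeholders):
# plan = branch_shortcut_plan("dev-ws-id", "dev-lh-id", ["sales", "customers"])
# then POST each body to the branch lakehouse's shortcuts endpoint.
```

Because the plan is just data, it can run as the last step of whatever automation cuts the branch, making the "load the data first" step Mike wants to eliminate unnecessary.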

42:02 They’re they’re their higherend users of these data systems. So those teams can have their an internal discussion of like what is valuable to us? What data do we need to build? And so internal to that team, they can take those standard tables and use them. One other point that’s been brought up here in the YouTube chat, which I’ll just point out here, is Paul was speaking about the ability to use shortcuts in a composite model. So a shortcut allows you to have multiple lakehouses or

42:35 Making shortcuts allows you to virtualize data into multiple lakehouses, where you can now actually have master data living alongside business-generated data. Right? So the products table we were talking about earlier, the product master table, that can come from somewhere else. You can shortcut that into your lakehouse, and then the business can go collect all the data they want. That's a really good use case. And when you do this, from a tactical standpoint, I really like this feature, because this is better than a

43:06 Composite model in the fact that when you're pulling all this data using shortcuts, you're not making a bunch of islands in the semantic model. So you're not limited by text strings going back and forth and simplifying filtering the data down. The semantic model is actually going to be smart enough to load multiple lakehouses of data as if they were an import model, which is really, really powerful, I think, and underused in the space. So I just want to point that out and call it out here. So did I answer your question,

43:39 Tommy, or did I just go off somewhere else? No, no, no, no. And I think that's such a huge part for me. I always look at these features through the eyes of that adoption roadmap: how does that help or hurt an adoption data culture at a company? And for me, again, the things that get me worked up are when it comes to that lack of trust. So this idea of shortcuts... oh my gosh, I couldn't think of a better way to distribute my master data. But yeah, to

44:12 Your point, the fact that it can go into more semantic models. I believe you just said it last week, that you still see semantic models as really being the forefront of everything in Fabric, that's really the heartbeat of Fabric, and let's not forget that with all these amazing things here. Yes, shortcuts can live in different areas. It can be a folder or it can be a Delta table. But the fact alone that we have that customization is absolutely huge, where I'll give you all the records, I'll give you all the files, so

44:45 You can run with it, or all your reference material, which again, I can see that being in your knowledge center, like a bunch of readmes for all the shortcuts at your organization. Which again was not something we really could do with Power BI before, because you would have to talk about dataflows and Power Query. This is really: find this lakehouse, here are the different tables that exist in there. So that management side is almost, to be honest, the best way to organize your data. Because

45:19 You’re never dealing with refreshing, you’re never dealing with something not working. It’s that that pointer connection. So yeah, for me the shortcut win is from an organization point of view is that. But I would be remiss if we didn’t unless you want to talk more about that from the global or from a team scale, but I would be remiss if we didn’t dive in for you personally for for us individually. , what are some of the tricks that you’ve been doing with shortcuts or

45:50 Things that you've been testing out, especially in your normal development workflow? Because I'm amazed at all the different ways that I've been able to utilize this in different areas. Yeah, I'll give you one of my ideas and then I'll kick it back over to you, Tommy, to share what you've been using shortcuts for. So, one of the main things we've been doing is using Databricks a lot, and so Databricks becomes a very big source for us. Databricks has this ability to have a Databricks experience where you have a Unity Catalog attached, and in some cases that makes sense, but in other cases

46:23 We don’t really want to go right to data bricks instead we want to be able to create a bunch of shortcuts programmatically going directly to the API creating them all and so what we’re able to do is we’re able to write a notebook that reads the data bricks API from Unity catalog gets the list of tables and then from those list of tables create a bulk list of door cuts that are actually connecting to the Azure Azure data lake gen 2 storage accounts. So not using Unity catalog going right to the the source of where those tables live. in our situation the the idea of being able to create all

46:57 Of these shortcuts using code is extremely useful, because Databricks manages those tables and where they go. If you make a change to the table and Databricks decides, hey, we actually don't want you to use this table location anymore, they may change the name of the table, they may change the schema or update something. In doing that schema update, they may make a change to the physical location of that table. So by using the Databricks API for Unity Catalog, grabbing the list of tables that I care about, and then immediately creating the shortcuts

47:29 Inside Fabric with a notebook, you can just run that every day. You could basically recreate those shortcuts as needed. That way you're always staying in sync, or if you're doing a data loading process, you could just verify that those links are correct. So that's something that we're doing a lot right now. We do a lot of deep integration between Databricks and the Fabric lakehouses via shortcuts. So I think that's one of our primary use cases. What's a use case that you're using, Tommy? Honestly, one of the two big ones was actually the Dataverse, of all things. So we've always had this bridge up between anything in

48:03 Dynamics or the Dataverse and trying to deal with that in Power BI, because we know the loading time and what it took to actually deal with that type of data. Well, I don't know if you knew this, but in Power Apps there's been this feature where you can actually connect to a Synapse instance and sync data together into a storage blob. But you really couldn't do much with that. Obviously, now they've changed that. So now I can connect to a Fabric lakehouse and

48:37 That becomes that synchronization. So I can quickly and easily connect to any table in my Dataverse and add that to my lakehouse. And that's all via shortcut. There's a ton of reasons why. I don't think a lot of people know this, but this definitely was one of those walls finally coming down for me, in terms of all these new things that I've been wanting to do being open to me. But just some things like that, where there's something that we don't

49:10 Have to worry about refreshing at all. But even just utilizing the lake, that architecture: like I said, the semantic model one has been one of my favorites, where I've been actually just testing different things out. I can hear people in the chat asking themselves, why are you using semantic models to push data there? Aren't there more efficient ways? Yes and no. I think there are definitely some pretty efficient

49:42 Or really productive things that that table does in Power BI, and just getting everything refined and pushing that to my lakehouses, which I've then been able to just iterate over in a notebook, has been one of those I-wish-I-had-this-before features. I'll give you an example on that. I had a record of data that was all combined data sources in an existing Power BI semantic model, and I wanted to do some

50:15 More we’ll call an advanced analytics. What I’ve always done in the past was it was like get a subset of data export it even create a page report but it was very clunky and again it was all local and everything was on it really siloed. Well, I have that table now in a lakehouse and I have an automated notebook to really dive through index this find go through this different things and then oh look now that’s another table of the freaking lakehouse. So the fact that to I think

50:48 The biggest thing is the lakehouses are really letting us keep everything seamless. If you're trying to do your advanced analytics, you want to export your data because you have your certain formulas that you work on, well, that now breaks the chain of that lineage, and also whether you're up to date or not. And the fact that I can have this all seamless with a single refresh, I'm seeing this really extend to the data itself, where I don't

51:22 Have four different files or four different artifacts that I've created to get to one answer. It all goes through a single pipeline at this point. I'm not sure I'm following your use case. What are you doing with this data? I don't really understand. So I have a table in the semantic model, and I'm now pointing it as a shortcut to a lakehouse. And how's that table getting into the semantic model? It's an import table in the semantic model. Imported table, and it's obviously a few

51:55 Transformations, a few merges, a few different data sources that are only in Power BI. So you're using a semantic model to do some data transformations, and I think something you missed or did not communicate clearly is that you're taking the table in the semantic model and you've turned on the semantic model's OneLake synchronization. You've turned that feature on, right? So data comes in from different sources, tables are imported, therefore the semantic model has a table, but you can't reuse that table any other place because it's part of the semantic model. Now, you don't

52:28 Want people editing the table, but you want to reuse it in a different semantic model, some other place. So, by turning on, and this is a feature of the semantic model, the OneLake sync, after the data is loaded, the data will be stored down in the lakehouse as a table that came from the semantic model. So you can still use the semantic model to do the reporting, but then you have a guarantee that the lakehouse table is the exact same data that's inside the Power BI report, and as the report refreshes through the semantic model, the lakehouse is getting the latest

53:01 Version of that data. Is that what you're trying to describe? Yeah. I might have missed describing what I was doing, but the fact is, with Power BI there are data sources and certain things that are not available in Fabric but are still available in Power BI Desktop. So yeah, I am taking the final table from Power BI Desktop; once it's published, it now becomes a nice table in my lakehouse via the shortcut. No writebacks. That new table in my lakehouse, that shortcut, I'm then simply connecting to in a notebook to

53:37 Again go through what I need to do. And this is all part of the single pipeline. But really, I think the biggest thing with shortcuts, and there's an example of someone who used an Excel file, or that was open mirroring, actually, but shortcuts have a lot of ways that we can, again, not recreate the wheel and use everything that's already, in a sense, existing. But have you been using that semantic model feature a lot? A ton? I've been in it a little bit. Yeah.
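Before moving on, the Databricks sync Mike walked through around the 46-minute mark (read Unity Catalog's table list, then bulk-create ADLS Gen2 shortcuts pointing at each table's storage location) can be sketched roughly as below. This is a sketch under stated assumptions, not Mike's actual notebook: the Unity Catalog `tables` endpoint and the `adlsGen2` target fields follow the public docs as I understand them, `connection_id` must reference an existing Fabric connection to the storage account, and both helper names are made up.

```python
# Rough sketch of the Unity Catalog -> Fabric shortcut sync described in
# the episode. Assumed shapes (verify against current docs): Databricks'
# GET /api/2.1/unity-catalog/tables?catalog_name=..&schema_name=..
# returns entries with 'name' and 'storage_location' (an abfss:// URI),
# and a Fabric ADLS Gen2 shortcut target takes location/subpath/connectionId.
from urllib.parse import urlparse

def abfss_to_adls_target(storage_location: str, connection_id: str) -> dict:
    """Map abfss://container@account.dfs.core.windows.net/path onto the
    location/subpath pair an ADLS Gen2 shortcut target expects."""
    parsed = urlparse(storage_location)
    container = parsed.username      # the part before the '@'
    account_host = parsed.hostname   # account.dfs.core.windows.net
    return {
        "adlsGen2": {
            "location": f"https://{account_host}",
            "subpath": f"/{container}{parsed.path}",
            "connectionId": connection_id,
        }
    }

def shortcut_bodies(tables: list[dict], connection_id: str) -> list[dict]:
    """One Create Shortcut request body per external table, skipping
    anything without an abfss:// storage location (e.g. managed views)."""
    return [
        {
            "path": "Tables",
            "name": t["name"],
            "target": abfss_to_adls_target(t["storage_location"], connection_id),
        }
        for t in tables
        if t.get("storage_location", "").startswith("abfss://")
    ]
```

Run daily from a notebook, as Mike suggests, the resulting bodies can simply be re-posted so the shortcuts track any physical table locations Databricks has moved.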

54:12 Yeah. We’re we’re finding that sometimes you have legacy reports that have a lot of data in them. especially ones that have incremental refreshing or other complex complex things sometimes it’s just nice to have those data lake tables down it does when you turn on this one lake synchronization there is some level of compute that’s being consumed during that writing data down to the lakehouse you have to be mindful of that you just have to be careful of like yes you are writing the data back down to the lakehouse you’re getting it out of the semantic model but it doesn’t come

54:43 For free. There is some minimal amount of compute usage to get the data to the lakehouse. I think that's just an acceptable trade-off. So yeah, that's what I've been looking at there. One thing I'll just note here, Tommy: you made a note that shortcuts are read-only. That's not necessarily true. If you look at the documentation under shortcut security, you can actually provide additional permissions. Now, I think by default all shortcuts are created as a read-only file-and-folder version of the shortcut.

55:15 It starts with that; that's the default experience with a shortcut. But if you turn on OneLake data access roles, you can then write to the shortcut. So the shortcut can be something that you can push data to. So actually, it's a two-way street, not just a read-only experience. It could be read, or it could be read and write if needed. So anyways, I thought that was interesting, because I think a lot of people think of a shortcut as a virtualization of data where you can no

55:47 Longer write back to the original source. Well, you can adjust that property and allow people to write back to that shortcut if you want. So I think of it this way: it's a virtualization. It's a shortcut. It's an item, an object in a different lakehouse that points back to the original source of data, and the permissions of that shortcut determine whether you can read and write, or just read, the data that's coming from the source system. I think the most applicable use case here is just a read

56:19 Version. I don't think you want to be giving out a lot of read-write shortcuts to a lot of different teams or people, but if you have a use case for that, I think you can use it. It's part of the solution. I'll just point that link out as well and put it in the chat. I know we're getting near time, but I want to end and ask you as well: where are we going from here? My real closing thought is, I'm seeing this be one of those features or new

56:52 Technology that's going to change some existing processes, the way that we used to do things before, because we have an ability that we never previously had in terms of moving data along that journey in an organization, or from a development point of view. So for me, as I look forward with shortcuts, it's not just the new features that come out; I think it's what it is today, what it already can do. What could that really alter in my own mental

57:26 Model, from an engineering standpoint, or really just being a source of truth in an organization? Again, this is one of those things to me that does change processes. It does change the way people perceive how we're supposed to push data through. So, I really think it's a good time to step back and evaluate our methods, our normal workloads or workflows, and see if we can utilize these a little more. Yeah, I think in some ways I look at it through

58:00 The lens of things that were done previously in SQL databases and things that are now being done inside the Fabric world, and it feels like a lot of the Fabric things, or the Spark elements, are continually trying to mimic or do what SQL has been doing for years now. The reason I bring this up is you have SQL databases that can be a read-only database, right? This is essentially like a shortcut: you can connect to the database, you can read from it. You can actually do a linked table, I think is what it's called inside SQL. So it's like a

58:33 Table that’s linked to another ser SQL server and you’re linking that table out. Again, I’m not a SQL DBA, so someone in the chat can correct me about linked tables if that’s correct or not. But these are all features that already exist or or the concepts I think exist already inside SQL. I think this is just the way of the modern fabric experience matching that same experience. Right? I have I have data that I want to put in a single place. I want to be read only. I want to give it out to a lot of people. Great. I don’t want to give every table. I just want to give a handful of tables. So let’s just build three or four

59:04 shortcuts to that main central location. If you study database and data warehouse design inside organizations, this is very common: there's a central warehouse that has all the tables you would ever need in one place, and then you give out little data marts to people across the organization with just the things they need to use. Shortcuts fit that pattern very well. You can now build little data marts of a measured set of tables, or subsets of tables, from the larger enterprise warehouse. So
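The "handful of tables as a data mart" pattern described here can be scripted, since shortcuts can be created programmatically. Below is a minimal Python sketch of that idea. It assumes the request-body shape of the Fabric REST "Create Shortcut" endpoint as documented at the time of writing; all IDs are placeholders, and `build_shortcut_payload` is a hypothetical helper, not part of any SDK.

```python
# Sketch: hand a downstream team a small read-only "data mart" by creating
# OneLake shortcuts to a few tables in a central warehouse. No data is
# copied; consumers read through the shortcut but cannot modify the source.

FABRIC_API = "https://api.fabric.microsoft.com/v1"  # Fabric REST base URL

def build_shortcut_payload(name: str, src_workspace: str,
                           src_item: str, src_path: str) -> dict:
    """JSON body for POST /workspaces/{ws}/items/{lakehouse}/shortcuts."""
    return {
        "path": "Tables",          # folder in the consuming lakehouse
        "name": name,              # table name the consuming team sees
        "target": {
            "oneLake": {           # OneLake-to-OneLake shortcut
                "workspaceId": src_workspace,
                "itemId": src_item,
                "path": src_path,
            }
        },
    }

# Expose just a handful of warehouse tables, not the whole warehouse:
mart_tables = ["DimCustomer", "FactSales"]
payloads = [
    build_shortcut_payload(
        t, "<warehouse-workspace-id>", "<warehouse-item-id>", f"Tables/{t}"
    )
    for t in mart_tables
]

# Each payload would then be POSTed (with a bearer token) to:
#   {FABRIC_API}/workspaces/<mart-workspace-id>/items/<mart-lakehouse-id>/shortcuts
```

Because writes only ever happen at the source, this gives exactly the clean handoff of responsibility discussed in the episode: downstream teams see the tables in their own lakehouse but cannot modify the source of information.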

59:37 Again, I really think this is a great use case for transitioning between departments and handing data responsibility off to other teams, where they can look at the data but can't touch it and can't modify the source of information, because now you have that clean handoff of responsibility. All this to say: I think this is going to change how we look at our data, and it's going to add more flexibility for departments that have really strong SQL developers and power users. If you have teams of people that love to use and

60:10 manage their own data, this is a great opportunity to have a centralized team build out the core tables and then allow other teams to copy from those tables and build what they want. So I do think it's a feature you need to be mindful of, because it changes how we build inside the Fabric ecosystem. Awesome. All right, man, I think that's a wrap. We're about an hour in now. So, what's your final thought

60:43 on the shortcuts thing, Tommy? What are you going to message to people about where you land on this? So, I don't want to repeat the thought I just shared, but in case you missed it: the biggest thing is really not just to say this is a neat feature. It's worth taking a step back on a lot of your normal workloads and processes and seeing if something can change, and how you can take advantage of shortcuts not just to save time but

61:17 to take steps out of your work and to build more trust. My final thought here is that shortcuts aren't done. There's more development coming; there seem to be a lot more changes happening, and they're getting better. Personally, I'd like to see tighter integration between shortcuts and the CI/CD process. That's improving as we go, but I'm not sure I'm fully on board with everything happening there, so there may need to be some additional work to refine the continuous integration and continuous deployment pipeline pieces.

61:50 And I'm thinking more about a team of people developing and building things in a development environment. I think shortcuts should be used there, but a traditional dev-test-production flow is a little bit different. That being said, I think there's some more feature development that needs to happen with the shortcuts experience, but at the end of the day you do need to be mindful of them. I think they're extremely powerful. And for teams that

62:21 are just clamoring and asking for data, seriously think about shortcuts. I think shortcuts are the way to go for having users engage with that content. All right, that being said, thank you all so much; we appreciate your time today. Our only ask: if you liked this conversation, if you haven't used shortcuts or you're exploring them, or you know someone else who's thinking about shortcuts, hopefully the articles we shared and unpacked today helped you. So share it with somebody else; let them know you found some useful part of this

62:52 conversation around shortcuts. Tommy, where else can you find the podcast? You can find us on Apple, Spotify, or wherever you get your podcasts. Make sure to subscribe and leave a rating; it helps us out a ton. And please share with a friend, since we do this for free. If you have a question, idea, or topic that you want us to talk about in a future episode, head over to powerbi.tips/empodcast and leave your name and a great question. And finally, join us live every Tuesday and Thursday at 7:30 a.m. Central and join the conversation on all of the PowerBI.tips social

63:26 media channels. Awesome. Thank you so much for listening to the podcast today. We appreciate your ears. Have a great week, and we'll see you next time. [Music]

Thank You

Want to catch us live? Join every Tuesday and Thursday at 7:30 AM Central on YouTube and LinkedIn.

Got a question? Head to powerbi.tips/empodcast and submit your topic ideas.

Listen on Spotify, Apple Podcasts, or wherever you get your podcasts.
