Myths, Magic & CoPilot – Ep. 360
Mike and Tommy dig into the myths and magic around Copilot and what it really means for Power BI users. They share practical guidance on where AI helps today and where you still need strong fundamentals.
News & Announcements
- Recap of Data Factory Announcements at Fabric Community Conference Europe | Microsoft Fabric Blog — Last week was such an exciting week for Fabric during the Fabric Community Conference Europe, filled with several product announcements and sneak previews of upcoming new features. Thanks to all of you who…
- Important billing updates coming to Copilot and AI in Fabric | Microsoft Fabric Blog — Coauthor: Misha Desai. We're always exploring new ways to help you more easily unlock the full potential of your data. Today, we are excited to announce upcoming pricing and billing updates to make Copilot and AI…
- Tag your data to enrich item curation and discovery | Microsoft Fabric Blog — Introducing tags, now in public preview. When it comes to data discovery and management, the modern data estate presents a set of daunting challenges for organizations and admins. An explosion in data sources…
- Announcing Service Principal support for Fabric APIs | Microsoft Fabric Blog — A new way to authenticate and authorize your Fabric applications. You can now use a service principal to access Fabric APIs. A service principal is a security identity that you can create in Entra and assign…
- Myths, Magic, and Copilot for Power BI — DATA GOBLINS — In this article, I explain Copilot in Fabric and Copilot in Power BI, walking through what it is, how it works, and three common scenarios when it might be used. More importantly, I evaluate whether using it over…
Main Discussion
This episode focuses on separating hype from reality: where Copilot-style tools can accelerate exploration, documentation, and prototyping — and where you still need a clear data model, good measures, and strong governance.
Myth-busting and expectations
Mike and Tommy talk through common misconceptions about what Copilot can do inside Power BI workflows, and how to set expectations for quality, repeatability, and trust.
Practical workflows and guardrails
They cover ways to use AI safely and productively (especially for accelerating drafts and exploring ideas) while keeping human review, testing, and version control in the loop.
Looking Forward
As Copilot capabilities evolve, the best results will come from pairing AI assistance with solid fundamentals: strong modeling, clear definitions, and disciplined delivery practices.
Episode Transcript
0:30 Good morning, and welcome back to the Explicit Measures podcast with Tommy, Seth, and Mike. Good morning, everyone. It's been a busy week; it seems like it's been flying by for me. This is already the week after the Microsoft Fabric conference, and a lot of announcements are starting to trickle out now. It was funny to me going to the conference and coming back here thinking, where are all the announcements? They announced a ton of things, and there were no blog posts to support those announcements, so

1:00 thankfully I started taking a lot of video on my phone of the little demo features they were presenting. I've been slowly trickling out messages; I did a lot on Twitter initially, and now I'm also putting some things out on YouTube and LinkedIn from videos I caught at the sessions. Anyway, it was just interesting to me that we still haven't gotten the blogs for all the announcements, so I'm looking forward to seeing them come out and unpacking them. Today we're going to jump into a crazy article,

1:30 a very thorough one. Kurt writes little mini essays every single time he blogs. We're going with Data Goblins today: Kurt Buhler wrote a great article called Myths, Magic, and Copilot in Power BI. I'm very interested to unpack this; we've talked a lot about AI generation and Copilot and where it fits inside the Power BI ecosystem, so it'll be interesting to see where we land on this today. All right, let's go through a couple of announcements. Tommy, what did

2:00 you find for us? Man, we got a ton. We'll put it in the chat, but there's finally a blog article on the Microsoft Fabric blog going through a recap of the announcements at the Fabric Community Conference Europe. Yeah, and this is a recap of only the Data Factory announcements, because there are actually a lot of Data Factory ones; I'll put that in the chat window right now. Tommy, what do you highlight in there? Well, more Copilot,

2:31 Fast Copy for Dataflows Gen2, data gateway support for data pipelines, and we finally have incremental refresh for Dataflows Gen2, so it's now built in rather than something you have to hack together. Those are the main updates. Yeah, and they talked about more than just Data Factory, right? Sure, there's a lot more than just Data Factory; this is just the announcements that came out from Data Factory. One of the ones I'm

3:02 most excited about is Spark job environment parameters for data pipelines. This is actually a major move on Microsoft's part; it's a feature that should have been built from the start, but I'm happy it's here now. The idea is you can add a parameter to a Spark notebook, a session tag, and that session tag will carry through all the notebooks you tag with it. So if you have seven, eight, nine notebooks, it doesn't kill

3:32 the session and start a new one every single time you run a new notebook; you can tag the session and it will stay alive through multiple notebook runs in a pipeline. It's just an efficiency thing; it makes sense. It also allows you to have different tags for engines in the future. Anyway, that was really cool; I really like that feature. It allows you to spin up a single Spark cluster and use it across multiple notebooks, which I thought was a great idea. There are a lot of Copilot things coming out too. I have yet to play with

4:04 Copilot in Dataflows Gen2 to see how that works. There's the announcement of incremental refresh for Dataflows Gen2; have you played with that at all, Tommy? I've tried hacking it; a few articles came out about how to do it in a Dataflow Gen2, so now that it's actually available, there are a few dataflows I could go back on. This is just one of the odd ones, that it's only available now. Why is it odd that it's available now? I don't

4:35 understand your comment. Because it's been available in Gen1 forever. Oh yeah, I see what you're saying. And it's even worse than that, because you don't get the ability to write your incrementally refreshed data to the lakehouse; it only goes to SQL destinations, and in my opinion that's a big miss. If you're going to do incremental refresh, it's got to work with the lakehouse, and it only works from SQL endpoints to SQL endpoints. So I'm not really thrilled about this incremental refresh. I don't really think this is going to

5:05 be... I think it's useful for people who are doing things in SQL, but I'm not thrilled that it doesn't work with my lakehouse; that's really what I want incremental refresh to do. And there's no equivalent of drop-and-replace. My opinion is this works nothing like Dataflows Gen1 did; Gen1 used a totally different pattern for incremental refresh. This one requires you to have a modified date, and you're going to get data from source systems that don't have a modified date, or you're going to need to drop and replace things based on a number of days and date ranges. This new

5:36 system, to my mind, totally misses the mark. It's not incremental refresh; it's something much less powerful than an actual incremental refresh. So I'm not really thrilled about this new feature; I've played with it a little bit, and it's just okay for me. Anyway, other announcements: anything else that stood out to you? Not too much on the recap, but I think it's a good segue, with Copilot and all the announcements, since we're even talking

6:06 about it today. Another major announcement came out: there's a billing update coming to Copilot and AI in Fabric. Seth, you caught something interesting about this post; it came out at a very timely moment, especially when we're talking about this article from Kurt Buhler as well. What did you observe and find here, Seth? It may have been a month after the article, right? Hey, listen, if in some way, shape, or form the people heavily using

6:38 Microsoft products and platforms, who are helping others use those platforms, can make recommendations or outline things at length like Kurt does in his article... one of the things that struck me was the testing he's done, which we'll get into: it used 3% of his F64 capacity, and he outlines that it cost around €200 to put together a blog. Whether or not that has anything to do with this, I

7:09 don't know, but it is something we'll have to highlight as we walk through the article: that billing update coming on November 5th. It seems like it reduces the price of Copilot in Fabric, yes, by about 50%, and it sounds like it's also making the costs more discoverable, as far as what is consuming the capacity. Which, if

7:39 anybody has listened to this podcast at all: costs are near and dear to my heart, as far as understanding the what, where, who, and why of Azure billing. It has to be done, and I think organizations are demanding it more. I didn't catch it in the article, maybe you guys did: they didn't mention anything about the SKU, right? There's no change in SKU; this is still F64, correct? Yes, correct. There's

8:10 still a minimum threshold of getting to an F64 to even use it. I'm looking at it now; there's no physical reason why they would or would not allow you to have it below an F64, but I think what I'm seeing, reading the tea leaves a little bit here, and this is speculative: if you think about how much consumption a CU uses, and it's using a graphics processing unit, those things are expensive; it's not cheap to do this stuff. Two observations about this change in pricing. One is they

8:42 were probably using a Copilot model that was more expensive to run in general, just because that's what happens with large language models. Tommy knows a lot more about this than I do, but the models that are built initially are slow, take a long time to run, and are expensive to run. As the technology has improved, they've been able to build things more efficiently, so they were probably on an earlier model and were able to update to newer ones trained with more efficiency, which is great, because it means as Copilot

9:13 gets better, as Microsoft continues developing it, it will hopefully get cheaper over time. When I hear your comment, Seth, about Kurt pointing out the cost of this thing: there's no way you'd want to put Copilot on an F2 and go, oh wow, I used 75% of my capacity running a couple of queries. The reason you don't do it below an F64 is because it would look too bad on consumption

9:41 compared to everything else in there; that's why you don't do it. So maybe as they make it more efficient it comes down to the lower SKUs, maybe, I don't know, but there's no technical reason why you can't just run Copilot on any SKU. To me it seems a little bit like a marketing ploy: you can only run it at this level because it's less impactful as a percentage of your capacity. Anyway, just something I thought was interesting. I want to say it makes sense, but it just doesn't quite, because if you get Copilot for Office... again, a big part of

10:12 what makes a large language model run is text, a lot of text, and that's what everyone talks about: the tokens. And if I'm just using Copilot to ask questions and generate code, which is available in GitHub Copilot, if I can do that over a workspace... unless Copilot in Fabric has to go through an entire semantic model and all the rows. I think this is where we're finding the problem:

10:43 how much data does it actually need to consume to do something? But if I just have a question like, hey, I want a column that's three days ago, shift the dates... I don't know how much information they're sending back to it. It's all about the prompting, right? If the prompting is really expensive, if you're sending all this data to the AI to figure things out, it's not summarizing well enough; that's a lot of text you're potentially sending to the AI to figure

11:13 things out, so that's going to be an expensive query. Yeah, agreed. Anyway, I think someone made some strategic decisions there. Regardless: price down, yay, go get it. I agree, 50% off starting in November; go to your tenant settings and turn it on now. That's a good move. And regardless, an update to the article we'll talk about is coming soon, true statement. All right, another

11:44 thing that just came out that's interesting, and I don't know how it's going to be received by the community; I haven't really talked to anyone about this one, so I'd be curious about your ideas. There's a new feature for tagging items: there is now the ability to tag your items inside a workspace. Tagging is in preview; you can make different tags, such as FY26 or bronze, silver, gold, and you can add tags to various elements. What are your thoughts on this one? Do you think this is going to be

12:18 valuable? I think there's something like this in Purview, and this also brings in some of the behavior of Azure, where you have to tag resource groups. I think they've realized there are so many objects, even in a single workspace, even if you have folders. I've learned my lesson: I'm not going to call it a game changer, but I think it's absolutely something that's going to be part of your center of excellence, something

12:49 worth your time to use. If I'm the data architect, I'm absolutely working right now on a nomenclature and a standard around this. Of course Tommy would govern the snot out of it and not let anyone do anything. Well, it is admin-empowered, right? It's one of those things... I think it was in the chat that somebody threw down a link to a data mesh book that I've started reading, or listening to on Audible,

13:19 because I have no time anymore, and a lot of the principles of that are essentially what Microsoft is doing with Fabric, right? You're creating domains, areas where individual parts are separated out and the subject matter experts are engaged and own the data. But the part of that that is more interesting in Fabric, that I think we've all been excited about, is the self-service ability, and introducing all

13:52 of these users into an ecosystem of data and reporting. As part of that, I think tags are going to be extremely useful and beneficial in that end-user interaction with this whole new ecosystem, because if they're trying to leverage or find information about something in this workspace or this domain, tags are a great way to do it: hey, where do I find the

14:22 dataset for this, or how is this tagged? And then whether it's a table, a report, whatever, I think it's extremely useful that you can guide other people to utilize the information you've curated, if you're in charge of that particular area. So I think it's a great feature add. The part I really like about this: there are a

14:52 couple of things that can now do filtering. We're starting to see things show up that allow filters to be applied in the OneLake data hub, which I think is incredibly powerful, because I have a lot of stuff in my tenant and I'm looking at a lot of things. Tagging will let you filter the OneLake data hub on those tags, which will be pretty cool: you'll be able to see the tag, look for it, and filter on it. My question really is, who can make tags? Can anyone make tags, or does a specified admin

15:23 generate the tags, and then everyone can use them? Is that how you're reading this article? I haven't really delved into the feature that deep yet. Your overlords control the creation of tags. Good, that would make Tommy happy. Well, they control enabling them, and I think that's where you actually create them too. Oh, I see: others can apply tags; anyone can apply tags, but the creation of tags is controlled. Well, that to some degree makes a lot of sense, because you

15:55 wouldn't want thousands of tags, right? Because then what's the point? You would give this task to Tommy, and Tommy would create the very small list, or the hierarchy, of things that can be used to appropriately tag items, because this is something you would eventually share with a much wider audience, saying, hey, when you're getting access to these particular areas of information, or looking for something,

16:26 here are the tags we know would be useful in searching. I'm also going to throw out something here. One of the things that super impressed me at the conference was the ability to have Semantic Link Labs do automated things for you, and as I unpack this a little, I'm thinking about tagging. I think tagging is going to be incredibly important for those of us who are going to be doing some level of automation across an organization: call a list of all workspaces,

16:57 then for each workspace call a list of all items, and add a tag to every item in that workspace that I want to tag, or something along those lines. From a heavy automation standpoint, with Semantic Link Labs it's so easy to do stuff and talk to the Power BI APIs now. I really hope I can see some blogs on this; Kurt Buhler, Andrew is in the chat, I wonder if there will be any blogs out there that might help us automate some of this. Kurt has also recently put out an article around Semantic Link Labs and how useful it is.
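The automation loop Mike describes (list all workspaces, list each workspace's items, apply a tag to each) could be sketched roughly like this in a Fabric notebook with Semantic Link. This is a hedged sketch, not a working recipe: the `applyTags` endpoint, the DataFrame column names, and the `brz_`/`slv_`/`gld_` naming convention are all assumptions, and the tag names are assumed to already exist since tag creation is admin-controlled.

```python
# Sketch: bulk-tagging Fabric items from a notebook with Semantic Link.
# Assumptions (hypothetical): runs inside a Fabric notebook where `sempy`
# is available; the tags ("bronze", "silver", "gold") already exist; the
# applyTags endpoint and column names are guesses at the preview API.
from typing import Optional

def tag_for_item(item_name: str) -> Optional[str]:
    """Pure helper: pick a tag from a (hypothetical) naming convention."""
    prefixes = {"brz_": "bronze", "slv_": "silver", "gld_": "gold"}
    for prefix, tag in prefixes.items():
        if item_name.lower().startswith(prefix):
            return tag
    return None  # no convention match, leave the item untagged

def tag_all_workspaces() -> None:
    # Imported here because sempy is only available in a Fabric session.
    import sempy.fabric as fabric

    client = fabric.FabricRestClient()
    for _, ws in fabric.list_workspaces().iterrows():
        for _, item in fabric.list_items(workspace=ws["Id"]).iterrows():
            tag = tag_for_item(item["Display Name"])
            if tag is None:
                continue
            # Hypothetical endpoint modeled on the tags preview announcement;
            # check the published API before relying on this path.
            client.post(
                f"v1/workspaces/{ws['Id']}/items/{item['Id']}/applyTags",
                json={"tags": [tag]},
            )
```

Only `tag_for_item` runs anywhere; `tag_all_workspaces` needs a Fabric session, and the REST call should be verified against the documented tags API once it ships.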
17:29 Again, I haven't really unpacked that blog yet. So Kurt, keep up the good work; you're doing a great job on all the blog items, I love what you're doing, it's so encouraging, and I'm looking forward to learning from you. Maybe a quick topic for you, Kurt, if you can't sleep some weekend and have something else to noodle on: a blog around using tags with Semantic Link Labs would be really, really helpful. I feel like there's an episode in domains, tags, and workspaces and how they all play together. I think you're right, Tom, because it's getting more

17:59 evolved as we go. We had a little control with domains, and now it's getting bigger with tags, so now we have more of this surface area. The community is going to need to figure out patterns here that make it easy for people to discover content. This is good; these are all good things. It just gives us more options, and more options are always good. And maybe, as a precursor if we're moving into the article: this just isn't a blog; it almost reads

18:29 like a pamphlet, a white paper. It totally does; it's a white paper. It is extensive; Kurt has outdone Kurt in this one. All right, I think this is a good transition then. We talked a little bit about Copilot and about some new features coming out. There is one other announcement Tommy was thinking about: the announcement of service principal support for Fabric APIs. Let's just skim over that and say more support is

19:00 coming. I'll put the article in the chat window in case people want to read it. I'm a developer, I do a lot of API things, so this is very impactful for me and extremely helpful in the Fabric API realm, but most normal people probably aren't going to be doing the level of development I'm doing, so it's probably not as impactful for them. All right: main article.
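As a quick illustration of what that service principal announcement enables before moving on: with an app registration in Entra, an unattended job can acquire an app-only token and call Fabric REST APIs without a signed-in user. A minimal sketch with placeholder IDs; the scope string follows the `.default` pattern used for app-only tokens against the Fabric API resource.

```python
# Sketch: authenticating to Fabric APIs as a service principal
# (client-credentials flow). Tenant/client IDs and the secret are
# placeholders; the app must already be granted access in Fabric.

def fabric_auth_config(tenant_id: str) -> dict:
    """Build the token-request settings for an app-only (no user) token."""
    return {
        "authority": f"https://login.microsoftonline.com/{tenant_id}",
        # App-only tokens request the .default scope of the Fabric resource.
        "scopes": ["https://api.fabric.microsoft.com/.default"],
    }

def get_app_token(tenant_id: str, client_id: str, client_secret: str) -> str:
    # msal is imported inside the function so the sketch stays importable
    # in environments where it is not installed.
    import msal

    cfg = fabric_auth_config(tenant_id)
    app = msal.ConfidentialClientApplication(
        client_id,
        authority=cfg["authority"],
        client_credential=client_secret,
    )
    result = app.acquire_token_for_client(scopes=cfg["scopes"])
    return result["access_token"]

# Usage (in a real environment): pass the token as a Bearer header, e.g.
#   GET https://api.fabric.microsoft.com/v1/workspaces
#   Authorization: Bearer <token>
```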
19:22 article Tommy give us a summary of the article let’s jump in what should we talk about today oh my what should we talk about so again to assess point this is a this is a comprehensive guide manual to using co-pilot in Microsoft Fabric and what Kurt’s done is really gone through the overview of all the different co-pilots that are available talk it talks about the real the reality of adoption things where where fall short both
19:54 things where where fall short both from the ability of the user to just the service itself looking at it in powerbi and overall just taking a look at all the areas that you could use copilot Microsoft Fabric in powerbi where you should proceed with caution where you should understand what you’re doing or what it’s actually doing the back end and where strengths are so it’s an incredible guide that I it’s amazing it’s amazing that Kurt did this and I I love both the article and he
20:26 and I I love both the article and he also has made a pdf version because apparently people didn’t like the goblins on it so he’s made a p people get over yourself exactly come on SO under the executive summary There’s a summary pdf version without goblins which is pretty official and pretty pretty awesome and it and this to your point right there that reads a lot like a it reads like a white paper now at that point right there there’s so many thoughts in here so I I like the
20:56 thoughts in here so I I like the executive summary in the beginning CT they’re doing a great job on this there’s a lot in the article we can definitely dive deeper in a lot of these sections but I really liked your initial overview of the main key points around what’s at the top co-pilot what is it how do you start using it and one thing I will really note here a lot of companies are struggling with this whole is it secure anything and chat GPT open AI all the things that are being built out there cursor what when you send these AI things your
21:27 when you send these AI things your information you’re sending it a text prompt a script something that potentially could be very company secure Microsoft has taken a very big stance on that’s for you and you alone and it’s not going to be used to be training the model more on things so that’s interesting to me I think that’s a good stance that Microsoft is taking that businesses can feel comfortable using co-pilot without having their data leaked out especially if you’re writing code in code editors because that potentially could be company secrets and proprietary information that’s inside that code you’re getting help on so I
21:57 that code you’re getting help on so I think that’s really important to focus on that on that part so before we get started I I I kind part so before we get started I I I there there’s a there’s a prevailing of there there’s a there’s a prevailing so two two points I want to make the first is and and Turk kurk does a good job outlining this right this is point in time right we all know or would assume that co-pilot in its versions of How It’s implemented will gain in in the value that it provides to businesses but this is a very thorough
22:29 this is a very thorough walkth through of the different features that are in the co-pilot Fabric and whether or not it works to what works to the hype I think is is a good thing because a good way to describe it and the second at actually at three second it’s it’s very detailed and I would encourage every listener to go go read this article so we’ll we’ll paste it obviously in the chats but for sure it’s a worthwhile read and third
22:59 sure it’s a worthwhile read and third having what what a prevailing theme is going to be like in this mistakes are possible right so the whole article start to like title is welcome to co-pilot mistakes are possible and I’m I’m just noticing and I don’t know if it’s intentional or not but this is by far the longest tic article written by Kurt and he starts with dot dot dot let’s have a brief one chat and I’m like I don’t know if that was a play because we’re talking about mistakes are possible or if but I found it extremely
23:30 found it extremely humorous I think I think the humor I think your your humor and humor align very well together I could just be reading into something intentially I found it funny so the article start starts off with just a little bit of like here’s the enthusiasm around co- pilot chat GPT large angle models it’s just everywhere and I would agree this also the beginning part of this article really resonates to me Marco Russo came out with a little YouTube short I don’t know if anyone saw it but there there was a YouTube short that Marco Russo put out that said
24:01 short that Marco Russo put out that said don’t lean into all the hype all the time people still need to be involved with their data and figuring things out we’ve been through this hype cycle before something new comes out everyone gets excited about it and there’s this it’s it’s the hype cycle that comes out of what’s the what’s the company that does the hype cycle Tommy this one right that’s Gartner Gartner does the hype hype cycle so it’s this this really wave of enthusiasm like this is going to solve all our problems everyone loves it and then there’s this big TR of disillusion and like well it doesn’t really do what I want H it’s not
24:31 doesn’t really do what I want H it’s not as useful as I thought and I think we’re still ramping up that excitement level and maybe there’s some more community members trying to really Tamp down that excitement and say look what’s really at stake here what how can we best utilize this where’s the best places to apply co-pilot in your workflows yeah so I think the three main points of the article and the reason I also encourage people to go read it I doubt we’re going to be we we are not going to be able to give it justice in 25 30 minute conversation yeah but
25:01 25 30 minute conversation yeah but the three main points that I I will probably dive into are the ones he he walks through right using co-pilot to generate code using co-pilot to ask questions about data right because that’s a big one if we’re using AI we’re using a search thing and I I can ask any question I want and get the answer right that is an expectation using co-pilot to generate reports and and he walks in detail around those should we dive into those or you guys want to hit something else first I’m good to dive into those those
25:32 first I’m good to dive into those those that’s that’s perfect gu off Seth which which one you want to start talking about first let’s start at the top man I like it start at the top using co-pilot to generate generate code this is by far what I think copilot is best at doing in my opinion now it’s not great at I’ll be it not great at Dax all the time but have you used copile to generate sequel code I’ve I have had ESS with code generators helping me generate code and
26:03 generators helping me generate code and yes and they they they they seem to be getting better and better yes I think I think Dax is more of a nuan code because it’s there’s not as much examples out there around Dax if you think about like all the language of rank all the languages of of of code over time right Python’s like number one or Python and what JavaScript maybe like number one number two something like that there very high on the list so when you’re training things there’s a lot of examples of like functions and how things work and the syntax and there’s a lot of good examples of that I think DS would fall
26:33 examples of that I think DS would fall very far down this list because it’s not as used it’s only used in models it’s a Microsoft specific thing it’s not really Universal everywhere now SQL I would agree is a very common language now there’s a lot of different flavors an csql t-sql Seth’s got probably another four or five sequels under his hat somewhere just laying around but it’s in general sqls very fairly common language as well and it’s acrossed multiple there’s different flavors of squl like Oracle has its own
27:03 flavors of squl like Oracle has its own flavor a little bit versus Microsoft’s version but in general the language is very diverse there’s a lot of it out there so that’s anytime you see things like that I think it’s going to be very helpful for code generation so it’s interesting here with this scenario and if you actually look at the intro that Kurt does on co-pilot versus chat GPT because I think a lot of people would start there how is copila different from chat TPT a work and one of the big things as we get
27:34 One of the big things, as we get into generating code, is that this generative AI is considered what's called "soft accuracy." Soft accuracy? I've never heard that term before; that's the first time I'm hearing it. It just sounds like a way of saying it makes mistakes, and this is a technical way of saying it's "accurate." Well, when you ask ChatGPT for a bunch of jokes or a poem or a rap, is that necessarily right? Or if
28:05 you ask for even some code, well, that's pretty basic. When we're dealing with a model, especially if I need this for finance and it always has to be exact, in the LLM world that's considered "hard accuracy," which means it must be very precise. Yes, that definition of correct. I like how we're making up new terms for being right and being semi-right. This is just where we're going with large language models, but we've talked about this in the past; this is my point, right: certain things can be
28:35 a little looser; it doesn't have to be 100%. If I'm generating a background image for a thumbnail for my YouTube channel, it just has to be appealing and look good. There's some liberty in, say, how many fingers the person in the image has, or whether the colors all blend together in a weird way; there are some things you can give it liberty on. If there's text on the image, does the text actually say a real word, or is it just gibberish? But I
29:03 get the gist: it's close enough, it makes sense, I can tell what's happening in the picture. But to your point, Tommy, if I'm asking how many dollars of sales I sold last quarter, there's a definitive answer. You can't get that one wrong. You can't be like, "Ah, maybe it was a million, I don't know, give or take half a million." You've got to be right. Yeah, that's funny. So as we dive into the generating-code section: he's using the DAX query view with Copilot, and he goes through the scenario of a user who simply wants to take two dates,
29:34 two datetime fields, and compute the difference between them in hours. He walks through the different prompts that user might use, and he talks about real-world scenarios. This was very interesting to me when I first read it, comparing Copilot to other tools, because we've talked about other tools, and I had my article about using ChatGPT to create DAX.
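The task in this scenario, hours between two datetime fields, comes down to a single DAX function once you know which one to reach for. A minimal sketch as a calculated column; the 'Orders' table and its column names here are hypothetical stand-ins, not names from Kurt's actual example:

```dax
-- Calculated column: whole hours between two datetime columns.
-- 'Orders', [Start Time], and [End Time] are placeholder names.
Duration Hours =
DATEDIFF ( 'Orders'[Start Time], 'Orders'[End Time], HOUR )
```

Note that DATEDIFF counts whole interval boundaries crossed; if fractional hours matter, subtracting the two datetimes and multiplying by 24 (datetime arithmetic in DAX yields days) is a common alternative.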
30:07 I also have a post online about using Cursor, basically giving it a markdown document of what my model looks like in order to generate DAX, giving it the context of my semantic model before actually giving it a prompt. And one of the things Kurt says right in the first part of the generating-code example is that people are lazy and not trained to be prompt engineers. I don't like the word "lazy" there; this is new. Yeah, like, I haven't...
30:38 The amount of time, how much of your lifetime have you spent Googling things? Tens of years; I don't know how old you are, but tens of years, right? We've been trained; the computers have reprogrammed our minds so we know how to search to get the results we want out of Google. I would still argue that that is not true. Oh, what do you mean?
31:08 So, I think "lazy" is a strong word. It is, but at the same time, countless times people have said, "I can't find any information on this," and I'll go to the Google search engine and find things they aren't finding, because there are specific ways I've learned over time to find the results you're looking for where others cannot. So I think even in prompting,
31:40 the search terms you put in are indicative of the prompts that you'd have to learn in order to leverage and use AI. So there is a bit of a learning curve. Are you taking it upon yourself to understand that you can't just throw something into a chatbot and expect you'll get the same results as from a better prompt? Those are two different things. Yes, and there's a technique to this. Again, to your point, Tommy, I don't think it's lazy; I
32:10 think it's just that we don't know how to work with it yet. We haven't figured it out; it's not common knowledge how to build this stuff. Also, what I'm finding is that I've learned from other people. Like Tommy was saying, I've learned a lot from Tommy about how to do prompting. I've also learned that when you write prompts, it's sometimes helpful to document your code using Copilot first, and then ask the prompt for what you want it to do. So first make Copilot explain each step with a comment; then, after it's
32:40 explained it, take another step and write another prompt to do that. When you're done, if you got the answer you want, here's a trick I learned from some Microsoft people who were teaching this in a class: once you have the answer or result that you like, ask the AI, "What prompt should I use to get this answer?" and it will try to recreate your prompt. So if you have multiple steps of prompting, it'll consolidate them down to one prompt. My challenge now is that I have a huge blurb of text that I need to use
33:11 repeatedly. Where do I put it? Where do I keep it? Do I open up a Loop document, put all my prompts in one spot, and just copy and paste them in? I've seen other people in the AI data space just collecting prompts in notebooks. There's got to be a place where I can fast-track my prompts into the AI, but it doesn't feel like there's anything there yet; I need a notebook of prompting things. So here's my personal thought, and then let me know what you guys
33:41 think. I think we should cover the areas: a brief description of what Kurt goes through, the details, and then, at the end, what his results and findings were and whether or not we agree with him. Yeah. In this section he runs through, I think, two different scenarios of "I'm an end user, I'm writing some DAX code," and the results are not perfect. Copilot generates some code; it can successfully
34:11 do that, and goes through some iterations. And I think his finding is, if we compare it against what Copilot is supposed to be doing for us, the story we're being sold, the hype cycle, that it's going to streamline things and make your job a lot easier and a lot faster. In Power BI and in self-service, we also know we're not dealing with DAX experts, right? So if you're using Copilot to generate something, his findings are such that
34:43 he thinks there's a possibility it's creating more confusion rather than helping people find the right answer. A lot of the time, if you've ever been part of a community, and I was on the Power BI community all the time for many years, people bring their specific scenario as a question, and it's not a code question. It's not "how do I do something"; it's "I don't know what to run" or "I need to do this with my business," and what
35:13 they're looking for is the formula or the function that they don't know. What we're introducing here, at least in the outputs Kurt was able to generate, is an answer, but it's not the right function; it's not pulling the right things together. Is that more confusing for an end user? It opens the door: are they actually going to spend more time on this? Because they're not going to a place where it's "here's the function, now you have to figure out how
35:43 to plug that in." "Hey, it's not working." "You don't have a relationship in your model," right? The fundamental checks and balances. "I can't get it to work; now I'm throwing in something that should be working, but it's not, and I don't understand why." That is the morass of what I read in terms of the testing he did: it can create scenarios where individuals say, "Yeah, I got my answer," and then nothing comes out, and then they're thrown into
36:14 troubleshooting DAX that they don't understand in the first place. So it's a conundrum, right? Yeah, and I agree. I feel like the story around Copilot is very broad; we just say "Copilot." I think what you're maybe pointing out as well, Seth, is down here later on, where it asks: what does Microsoft Copilot look like in just Fabric? What does the Fabric flavor of Copilot even look like? Data Factory, data
36:45 engineering, where does it show up in data warehousing, and so on. Looking at the different elements, figuring out which of the items are there. Also, you get different experiences, and they do different things based on where you are in the product, so you have to spend some time learning what it does in each experience in order to get answers out of it. Yeah, this goes back to, I'm actually disappointed with the scenario Kurt put here, because in the
37:16 scenarios he goes through, the walkthrough of what a user would do, what he had to do in this basic example to get the right output, or a better formula, was change his prompt and give it context. I was hoping that Copilot would have that trained into it, in a sense, like: "Oh, I'm a Copilot for semantic models, so one of my tasks, or skills, is the
37:47 ability to have context of the semantic model." A lot of the issues he had with generating code were that Copilot didn't know there was no relationship between the tables, or which DAX statements or parameters were allowed. That's disappointing, because I can feed ChatGPT or Cursor a document from the DAX documentation site, and
38:18 even without feeding the model that document, the leg up that Copilot in Fabric should have over every other AI tool, it doesn't have, in terms of that skill for semantic-model context and DAX context. So if I had to reinterpret what you're saying: because Microsoft owns Copilot, and Microsoft also owns the semantic model, you're expecting that the
38:44 Copilot from Microsoft would have more contextually aware elements: the relationships in the model, how the tables are written, all the other related DAX measures that might be similar or in use, to help build something. You'd expect those elements to be a feature of your Copilot. Yeah, that makes sense. And again, we're in the early days of this stuff, so I'm guessing the answer is yes, it's coming; they've got to figure
39:14 out how to make it more efficient does is there let me ask a a dumb question here potentially is there a note on co-pilot for Microsoft that tells you what information from the model is sent to the co-pilot in order to give the prompts do they have an article about that about what they send they don’t need an article they just have mistakes are possible okay then then that’s I’m kidding oh okay but but I think to your point though Seth I think that’s right right that’s that’s the point here is the point is they’re still trying to
39:44 the point is they’re still trying to figure out what are the right level of information to send to the co because if you have a big model you’re going to send potentially a lot of extra text to the co-pilot eat up your yeah Trump characters and then now you’re just spending money to spend money like it doesn’t but scenario one I don’t need to know the values I just need to know the Bim or the metadata right but generate code understand but these things get big like so if you’re going to send in a Bim every single time you might be sending thousands of characters of stuff to the model so you want to send the co-pilot
40:15 model so you want to send the co-pilot the least amount of information you need to to keep the cost down because it has to every they charge you based on how many characters or prompts that’s how they charge you right yeah so the longer your text string is is that you send to the copal the more it’s going to cost you to produce the output so I it’s it’s a game in my mind it’s a game of optimization how much do I send to the co-pilot so it knows enough about my model before I actually get it back and just recently co-pilot has just been able to be conversational
40:46 right, Tommy? What is it called when you have multiple prompts threaded together and it's supposed to understand the context of the prior prompts? Chat? That's just context. I thought there was another word for that, like lineage, or the idea that some Copilots are just dumb: you ask a question, it gives you an answer, but it doesn't acknowledge the prior steps. I see, so that's basically one thread, like system memory. That's not the word I'm looking for, but yeah, it's the idea that
41:18 conversational chat is not always supported in all Copilots, or hasn't been, and only recently has it started to be more widely supported. So for example, if you went to the DAX query view... Yeah, chain of thought, it's something like that; so thanks, Kurt. It's the chain-of-thought thing, right? It needs to understand the prior pieces of the conversation and use those as context for the new conversation you're having with it. I think that's been a more recent addition as well, so that it has context of what it's already
41:49 talked about and uses that as information for the next step. I think one of the things Kurt does outline somewhere in the article, and one that this conversation drives at too, is that if there's an expectation that you're just going to turn on Copilot and it will automatically deliver value, that's not a valid assumption. And I'd agree with what's outlined for many of these tools, even in a lot of the Microsoft-driven articles: you still have to intentionally build
42:19 things in the ecosystem, to set the baseline for these tools to really deliver value for others. It's not just going to turn on and work. And I think the reason you see such a discrepancy with the tools you're talking about, Tommy, which are open to everyone and open to consume what everybody puts into them, is that they're learning a lot more, a lot
42:50 faster, right. I do agree. I guess where the varnish gets rubbed off for me, where the shine immediately goes away, is that with Copilot I would assume it would be better in some of these areas around models. Like, the messaging coming back in the DAX editor would specify: "Here's an equation, but here are some problems you have," and be specific around the areas that
43:22 Microsoft owns in that arena. Like: "Hey, I've got this model; here's the thing I want; do this," and it would play the community role: "Well, you don't have this, you don't have that, we can't use that. If we're going to create this, do you want to do it with this first and that second?" Or make recommendations, as opposed to "here's the answer." I don't think we're at "here's the answer" yet. And I guess that leads into the second part, which, to
43:52 me, I agree with Kurt, is the most concerning, given this marketing of AI being the answer, that you can type something in and get the answer: using Copilot to answer data questions. What he dives into is obviously the amount of work that business intelligence professionals and data people put in. We know data; we know the complexity of compiling the business logic, regardless of what architecture you're
44:22 using, when you're trying to produce value for the business. And Tommy, how many times have you argued about how this one "sales total" could mean three different things depending on who you're talking to? That seems like the least of the problems as it relates to: can somebody just walk in, start asking questions against the model, and get the right answer? And I think that's
44:52 where he rightfully points out that "mistakes are possible" as a little disclaimer is not acceptable. It shouldn't be acceptable. And honestly, I push it to the point of: can you really put something in there that's going to get people in trouble? And you're naturally going to this second scenario here, where, I'm not going to lie, scenario two, rather than trying to generate code, is about using Copilot to answer data
45:24 questions, and that's simply going through a chat that has memory, using natural language. I have the chat and it's like, "Hey, I want to see the sales for the month of August 2021." Pretty simple; this is a business-user scenario, I imagine. Going through this scenario, I got goosebumps, I got chills, at how scary this could be in an organization. If you go through
45:55 what Kurt did, step by step: he asked a basic question and it first returned an error; then he asked slightly differently, and it returned the wrong number, but it didn't say it was the wrong number. To your point, Seth, that's scary. The adjustments he had to make to get it right were twofold. He had to really go through the semantic model and make adjustments, hiding and showing columns.
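For contrast, the answer the user was actually after is only a few lines of DAX once someone knows the model. A sketch of the kind of measure involved; [Total Sales], the 'Date' table, and its columns are hypothetical names, not the ones from Kurt's actual model:

```dax
-- Explicit filter for one month, assuming a standard date table
-- related to the fact table. All names here are placeholders.
Sales Aug 2021 =
CALCULATE (
    [Total Sales],
    'Date'[Year] = 2021,
    'Date'[Month Number] = 8
)
```

The point of the scenario is that a business user asking in natural language has no way to verify that whatever Copilot generated filters the year and month they intended.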
46:25 He really prepped the model, in a sense, and then there was the prompt engineering. And what are you adding? Synonyms, all of the linguistic work we had to do to get Q&A to work correctly. That's what he had to do to get this to work correctly. Yeah, and there were weird things, like it showing the year after the year he wanted. And again, you can try to set guardrails,
46:55 like a star schema; you can hide and show columns. But we all know this too: unless your model is data-governance certified, where all of our metrics and the names of those metrics are established everywhere and the data culture is strong, a normal user is not going to walk in here and... And to be fair, I'm going to go back and look at the Copilot documentation for this feature specifically, to see if there's a "don't use this unless you
47:26 do all these things," because in my mind, if we have to do all those things to make it produce accurate results, then that's a prerequisite for use in my company. I don't want people to use this, nor should they think they're getting the value or the answers they expect, unless they're hitting my AI-approved model, or something like that. What I'm saying is, I'm going to
47:57 transition the conversation slightly here; I think this relates to what you guys are talking about as well. About halfway down the page, maybe a little more, there's this really interesting diagram. One of the things Kurt does really well, that I absolutely love, is take concepts and ideas and materialize them into really rich diagrams, so the images resonate with me. And there's, I guess it's called a Venn diagram,
48:24 in this one. Oh man, it's good: known knowns, known unknowns, unknown unknowns, and then just total BS, where you don't know what's going on at all. I really liked it. The diagram was interesting, but I also had to take a second to think about how a novice or beginner learns things, how those bubbles of influence change with Copilot, and how that changes over time. So the one diagram is interesting, the bubble
48:54 diagram, the overlap of things you know versus things you don't know. But then going down to his other chart, it talks about the novice, the intermediate, and the expert. And I look at this and say: okay, Seth, you know how to write a lot of code; you know what DAX should look like. There are some things that are probably easy and some things that are a bit harder at the other end, but you're an expert at writing this stuff. And I really liked his graph down below: the risk of falling victim to AI errors is higher
49:26 for novice or beginner users who don't have a lot of experience with DAX, haven't done a lot of tuning or optimization; the model's really large, and they don't know that writing DAX this way is actually very bad for performance and won't be efficient. As you increase your knowledge of DAX, of how the model is supposed to be shaped and what tables to build, your knowledge moves toward that expert level, and I think you'll find more value from Copilot
49:58 and find it can help with more remedial tasks. Things I think are very useful here: Copilot, for me, has been useful for "explain this code that someone else wrote; what is it doing?" Diving into, "Hey, there's a SUMMARIZE versus a SUMMARIZECOLUMNS function; which one should I use? Can you give me some explanation of the difference between those two?" "Document this DAX code; write comments for each line." Those are very interesting, because they're sometimes the remedial tasks; they're not as hard, but they supplement my work.
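The SUMMARIZE versus SUMMARIZECOLUMNS question is a good example of the kind of thing a chat can explain: both group rows, but SUMMARIZECOLUMNS is generally the recommended choice for queries, since it handles filter context and drops empty groupings. A minimal sketch for the DAX query view; the 'Date' table and [Total Sales] measure are hypothetical names:

```dax
-- Group sales by year in DAX query view.
-- 'Date'[Year] and [Total Sales] are placeholder names.
EVALUATE
SUMMARIZECOLUMNS (
    'Date'[Year],
    "Sales", [Total Sales]
)
ORDER BY 'Date'[Year]
```

SUMMARIZE is the older function and is typically paired with ADDCOLUMNS when you need computed columns, which is exactly the nuance worth asking a Copilot to explain.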
50:29 but it supplements my work and I’m not doing a bunch of extra effort someone told me at the conference they said people if you ask somebody what is the definition of this measure or column people will not tell you the answer or people are too L are too busy to say down sit down and say okay I’ll help you describe or write descriptions for every column however if you give someone a description of a column or a measure and say this is the answer this is the descrip descrition of this column if it’s wrong people are more apt to jump in and say no that’s
51:00 more apt to jump in and say no that’s not right this is how we see it so there’s another another layer here that is again taking some of these remedial tasks off my plate dude I love that I document my whole model right all and this is this is something that was shown in timle editor that’s going to be coming out for desktop soon scripting your entire model asking the the co-pilot to say write descriptions for every single measure in this model and boom one prompt you’ve written 10 15 20 100 different descriptions for columns
51:31 100 different descriptions for columns measures and and things in the that’s huge and then now you can go back to your business team and say look I’ve written some description they’re probably not all right but can you please review them and let me know if there needs to be updates and I think that was a really aha moment for me to saying if the task looks too daunting for people they won’t do it but if you start them down a path they’re more than happy to give you Insight or input to that wasn’t what I was expecting and then you can adjust from there and you get closer to what the users really need anyways I thought that diagram was
52:01 really, really cool. I don't know if it hit you guys the same way. I agree, and it definitely points to the areas where the big mistakes start to happen. All right, with that, final thoughts? And I think that's a perfect place to end, Mike, exactly. Also, Kurt really talks at the end about two things that, while not deal
52:32 breakers, stand out. One of his headers is "the expectations for generative AI are stupid," and I agree with that. Because, even though AI is really awesome and cool, every single scenario here required some knowledge, experience, and skill in prompt engineering. And yes, it's great that it's getting closer and closer, but what are we really expecting it to do over an entire model,
53:02 over all of our code? There's a lot of investment, but this is still a skill, a human skill, that we need: not just modeling your data, but asking the questions and having that context. So yeah, Copilot, I love it, I love all this stuff, but I love it because it is a skill. It's not "I sit back, make coffee, write three words, and I'm done for the day." It's about enhancing your work, and Copilot's no
53:33 different. I think you make a good point there. My final thought is that there's a misconception that Copilot is going to be this solve-everything thing, and I think Kurt does a really great job in the article of outlining key areas where you can invest your time to learn how to use Copilot right now. For me, the areas where I see Copilot adding value are the DAX query view, where writing code is going to be helpful, and, in Fabric, writing SQL statements, or helping you understand how to write some SQL statements to get your
54:03 head around querying some tables in the lakehouse, or something along those lines. I think the code-based use cases are where I find the most power in Copilot. Where I would not spend a lot of time with Copilot, and this was Kurt's really big struggle in the article, is getting it to give you actual answers from the data model: calculating things and returning a result back to you. Those elements are harder to get your head around, so I would maybe recommend focusing your time on
54:35 writing code and documenting your model. Those lighter-weight tasks are really good opportunities for Copilot to make things more efficient for you. I'm excited to see where it goes; it's definitely improving rapidly. There are lots of updates, and they're now bringing down the costs, which is great, so we're not consuming so much of our capacity. I'm pleased with where it's going; I'm just keeping it at arm's length right now and not doing a ton with it. Any final thoughts from you, Seth? Yeah, specifically for what we're talking about here,
55:07 what Kurt has outlined, Copilot in Fabric, though I think it's applicable to all Copilots, ChatGPTs, and all the LLMs in AI: I think it's a perfect title, or focus, "mistakes are possible." Copilot in Fabric is not a catch-all easy button, whatever is being marketed right now as "it's all so fast and easy." I think this points out that it may work extremely well in some places and not great in others, and the
55:37 less you know about an area and the more you're relying on Copilot, the more susceptible you are to making big mistakes, or to spending more time than if you went and figured something out yourself. Where this all lands is that end users need to learn more about AI capabilities and how to use them, with the understanding that mistakes are possible. Kurt, at the very end of the article, asks the question "why should I use this?" and gives his
56:07 opinion as well: in the future he can see himself potentially using it for the third use case, which is using the outputs as a starting point, an ideation, a first-draft design, when creating reports. That sounds like a great opportunity to get your feet wet, get started, and at least get the creative juices flowing to get things moving and build things in your report. So I agree; Kurt makes a really good point, and all in all it's definitely worth a read. It's
56:38 a long read, I'll give it that, but take some time to unpack it and really get your head around it, because, especially if you're a leader in your organization, there's going to be hype from the top down saying "AI stuff is coming, we need to be doing it," and you may not want to jump on the bandwagon of "hey, look, Copilot's here in Fabric; by the way, we should just buy it and use it." You may have to temper some expectations there; come in with reasonable expectations, I guess, is what I'm trying to say. All right, with that, thank you all very much for listening to the podcast. We really appreciate your ears. We know this is a
57:08 long hour of working out or biking or running or whatever you're doing, so we hope you had a successful exercise today. For all the rest of us who are just sitting at keyboards and pushing off work for another hour, we appreciate your ears as well. Thank you very much. Our only ask is, if you liked this episode, if you like what we're talking about here, please share it with somebody else; that's the only way we grow the podcast. Let other people know you liked and enjoyed the content. Tommy, where else can people find the content? You can find us on Apple, Spotify, or wherever you get your podcasts. Make
57:39 sure to subscribe and leave a rating; it helps a ton. Do you have a question, idea, or topic you want us to talk about in a future episode? Head over to powerbi.tips/empodcast and leave your name and a great question. Finally, join us live every Tuesday and Thursday at 7:30 a.m. Central and join the conversation on all our social media channels. Awesome. Thank you all very much, and we'll see you next time.
Thank You
Want to catch us live? Join every Tuesday and Thursday at 7:30 AM Central on YouTube and LinkedIn.
Got a question? Head to powerbi.tips/empodcast and submit your topic ideas.
Listen on Spotify, Apple Podcasts, or wherever you get your podcasts.
