Action Systems + AI Applications – Ep. 468
AI that just reports is table stakes—the real value comes when AI systems take action. Mike and Tommy discuss action systems, the new Anomaly Detector and Maps announcements from FabCon Vienna, and how real-time intelligence feeds into automated decision-making.
News & Announcements
- OneLake Diagnostics: End-to-End Visibility into Data Activity — New diagnostic capabilities for tracking data activity across OneLake, giving admins visibility into what’s happening with their data.
- Fabric Data Agent Now Supports CI/CD, ALM Flow, and Git Integration — Data Agents can now be managed through deployment pipelines and git integration, bringing proper ALM to AI-powered data experiences.
Main Discussion: Action Systems
Beyond Dashboards
Mike and Tommy discuss the shift from passive analytics to active systems:
- Traditional BI — Human looks at dashboard, makes decision, takes action
- Action systems — AI detects anomaly, evaluates context, triggers action (or recommends action with human approval)
- The gap between “insight” and “action” is where most value gets lost
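The detect → evaluate → act loop described above can be sketched as a tiny decision function. This is a hypothetical illustration, not Fabric API code; the `Event` shape, baseline, tolerance, and the `recommend`/`auto_act` outcomes are all assumptions made for the sketch:

```python
from dataclasses import dataclass

@dataclass
class Event:
    metric: str
    value: float

def is_anomaly(event: Event, baseline: float, tolerance: float) -> bool:
    # Detect: flag values that drift too far from an expected baseline.
    return abs(event.value - baseline) > tolerance

def handle(event: Event, baseline: float = 100.0, tolerance: float = 25.0,
           require_approval: bool = True) -> str:
    # Evaluate context, then either act autonomously or escalate to a human.
    if not is_anomaly(event, baseline, tolerance):
        return "no_action"
    if require_approval:
        return "recommend"   # AI recommends, human approves
    return "auto_act"        # fully automated response

print(handle(Event("orders_per_min", 180.0)))  # far from baseline -> recommend
print(handle(Event("orders_per_min", 104.0)))  # within tolerance -> no_action
```

The `require_approval` flag is where the human-in-the-loop choice lives: the same detection logic serves both a recommendation workflow and a fully automated one.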
Real-Time Dispatch and Action Systems
Drawing on Chris Schmidt’s LinkedIn article, they discuss how real-time intelligence feeds action systems:
- Event streams → anomaly detection → automated response
- Example: supply chain disruption detected → automatic rerouting suggested
- Example: fraud pattern detected → transaction flagged in milliseconds
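The event stream → anomaly detection → automated response pipeline can be illustrated with a rolling z-score over a stream of readings. A minimal sketch in plain Python, with no Fabric or Azure SDK calls; the window size, warm-up length, and threshold are arbitrary choices for illustration:

```python
import statistics
from collections import deque

def detect_anomalies(stream, window=20, threshold=3.0):
    """Yield (value, is_anomaly) for each reading, comparing it to a
    rolling window of recent history via a simple z-score."""
    history = deque(maxlen=window)
    for value in stream:
        if len(history) >= 5:  # need a little history before judging
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history) or 1e-9
            yield value, abs(value - mean) / stdev > threshold
        else:
            yield value, False
        history.append(value)

# Steady transaction amounts, then a fraud-like spike:
readings = [100, 101, 99, 100, 102, 98, 101, 100, 5000]
flags = [flag for _, flag in detect_anomalies(readings)]
```

In a real event stream the flagged reading would trigger the downstream response (flag the transaction, suggest a reroute); here the generator simply marks which readings would fire.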
FabCon Vienna Announcements
Two new capabilities announced at FabCon Vienna:
- Anomaly Detector — Built-in anomaly detection for real-time data streams, designed to work within the Fabric ecosystem
- Maps — New mapping capabilities for geospatial analysis within Fabric
Building Action Systems Today
Practical patterns:
- Use Data Activator (Reflex) for event-driven triggers
- Connect Real-Time Intelligence to downstream automation (Power Automate, Logic Apps)
- Build Data Agents as the AI interface to action systems
- Start with human-in-the-loop — AI recommends, human approves
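The "start with human-in-the-loop" pattern above can be sketched as an approval queue sitting between the AI recommendation and the downstream action. The names here (`Recommendation`, `ApprovalQueue`, `execute`) are hypothetical; in practice the approval step might be a Teams message or a Power Automate approval flow, and `execute` would call the real automation:

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    action: str            # e.g. "reroute_shipment"
    reason: str            # context the AI attaches for the reviewer
    approved: bool = False

@dataclass
class ApprovalQueue:
    pending: list = field(default_factory=list)
    executed: list = field(default_factory=list)

    def submit(self, rec: Recommendation):
        # AI recommends; nothing runs until a human signs off.
        self.pending.append(rec)

    def approve(self, index: int):
        rec = self.pending.pop(index)
        rec.approved = True
        self._execute(rec)

    def _execute(self, rec: Recommendation):
        # Placeholder for the real downstream call (Power Automate, Logic Apps, ...).
        self.executed.append(rec.action)

queue = ApprovalQueue()
queue.submit(Recommendation("reroute_shipment", "port closure detected upstream"))
queue.approve(0)
```

Once confidence in the recommendations is established, the approval step can be bypassed for low-risk actions, moving toward the fully automated end of the spectrum.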
Looking Forward
The convergence of real-time intelligence, data agents, and anomaly detection points toward a future where BI platforms don’t just show what happened—they respond to what’s happening. The teams that build action systems on top of their semantic models will deliver dramatically more value.
Episode Transcript
Full verbatim transcript — click any timestamp to jump to that moment:
0:00 Heat. Heat. Good morning and welcome back everyone
0:31 To the Explicit Measures podcast. We are back again with some more fun topics around real-time intelligence. Today our main topics will be action systems and AI applications. That’s going to be our main topic today. Oh yeah. There were some recent announcements at FabCon Vienna that I think were exciting. I think one of them being around real-time maps, which is another thing where you can get data into Fabric and have that data mapped out in real time, which is slick. So, we’ll unpack that a
1:03 Little bit more today. What are these new features? What does this mean for your business? Let’s just understand what’s going on in those new feature areas. Before we begin, Tommy, what news do you have today? So, pretty neat articles here. We actually have two: one on OneLake and the other on data agents in CI/CD. So the first one, Mike, and I know you’re really excited about this, is gain end-to-end visibility into data activity using OneLake diagnostics. This is something that felt like a black box for a little bit.
1:35 Yeah, with OneLake it was really hard to know everything that was going on. So with OneLake diagnostics you can actually see who accessed what, when, and how, and it all comes in through JSON and is event streamed into a lakehouse. Some of those key capabilities: it covers internal and external data access, anything that you do with an API, pipelines, or the Fabric UI, and again everything’s stored in an open JSON format, so you can do your own reporting on it. So the charges are similar to your Azure storage
2:09 Diagnostics, and it’s pretty cool, dude. Finally, I think this makes a lot of sense. There’s a lot of events that are probably happening in OneLake, and making those things more visible I think makes a lot of sense. I do, Tommy. I would argue the majority of the work that I do in Fabric is in OneLake. It’s a lakehouse, something along those lines. That’s the most used, I think. I think Microsoft said in one of their, I think it was at FabCon Vienna, they were saying how many customers are using
2:41 Everything, and those customers are using a minimum of three workloads inside Fabric. One of those workloads that I would say is very used for me, as I build things that I find very useful, is lakehouse pipelines. Those are my two main elements that I’m using there, and maybe a bit of data warehousing, starting to use more of that now and finding use cases where to use the data warehouse. What would you say, Tommy? Yeah, and I think this is a great introduction as we’re talking about selling us on real-time intelligence and using the event stream.
3:14 So honestly this is a perfect case where real time would be pretty important, especially in a large organization. One feature that we have recently is on our Intellexos product. So our Intellexos product is an embedded accelerator. It helps companies start embedding PowerBI reports or building a data-as-a-service solution or data-as-a-product experience. We are on the Microsoft partner showcase for building a data product in this way. But this product basically helps you move from building an embedded solution in
3:46 Months down to hours, right? It’s a huge improvement in delivering real value from your PowerBI reports. But one of the things that’s interesting is we just added a feature called data shares. A data share allows you to go from Fabric to a different tenant in Fabric, but using a lakehouse shortcut or a lakehouse share. One of the things that’s interesting in this diagnostics article, it talks about how OneLake diagnostic events will capture external access, including all activity from the Fabric user experience. When OneLake
4:21 Is protected by workspace private links, OneLake will capture all of the blob storage operations. So any operation that the blob storage is doing, read this file, write this file, deleted a file, those security operations are now included as part of this, which I think is actually really, really good, especially when you’re trying to externally share or give data between different organizations. Again, this is one of these enterprisey type features that enterprises just want to have. It makes sense that it’s there. I think the best part, Tommy, correct me if I’m wrong
4:54 Here: as I read the documentation, it’s saying everything gets landed back into OneLake. So, it’s just events that are coming back in and you could push them to your OneLake. Correct. Yeah, that’s exactly what it does, in that open JSON format. And that makes sense to me, because then you’re not worrying about a retention period or how long the stats exist. The only downside of this is you now have to go parse a bunch of JSON, which means you’re probably running notebooks or doing some logging things on top of that to understand what’s going on in the logs. So it’s a double-edged
5:27 Sword. I think you’re just going to be fine with it. You’ll figure out what the format is and go from there. Cool. Neat article. I like this feature. I think this will be useful for organizations. It’s probably not going to shake everyone’s earth and make them go, oh wow, I’ve been waiting for this feature for years. This is probably very nuanced for those very specific enterprise organizations that need this one. Looks like you have another one here, Tommy, another article, CI/CD. Yes, more on our favorite data agents, and something relevant, Mike, because this
6:02 Is what I’m actually doing my session on at Dynamics Summit, because I had to change from metric sets. But basically with data agents, what’s going on now is it supports CI/CD, supports ALM flow, and really allows for version control and collaboration. So pretty neat, dude. Do you use data agents today currently? Are you building them right now and using them either internally or for customers? Are you using them today?
6:34 Yeah, so right now there’s a few customers that are interested in test driving it, but it’s been a lot of experimentation on the best data source, and then actions with Copilot Studio. I do think this idea around data agents, data agents is a newer item inside the Fabric ecosystem. It’s Copilot, it’s agents, it’s using some large language models behind the scenes. The fact that the time between when you heard data agents came out and now that it’s supported in CI/CD I think is really
7:06 Encouraging, Tommy. I think it speaks a little bit to what’s happening inside the Microsoft ecosystem, which is I think they’re doing a really good job of shortening the time frame between when a feature comes out and when it gets into CI/CD. Would you agree with that, Tommy? I feel like that’s been like a mission for them now: if we release something, it has to go into CI/CD before we get it to GA or really out the door. Like it has to be a quick turnaround, I guess, is what I’m
7:38 Trying to say. Oh, a thousand percent, Mike. I think this is just becoming integral in terms of everything they’re doing. Everything in Fabric is really now a developer experience. So, you need that collaboration. So, it’s interesting that that’s probably part of just the normal checklist that they have now. Now, one thing I’ll just note here, Tommy, just correct me if I’m wrong: when you build a Fabric data agent, you’re basically binding that to a lakehouse. Is that correct? A lakehouse or some other data source too, right?
8:10 You can choose multiple data sources. So it doesn’t have to be just a single lakehouse. It could be a model. It can be a SQL database. It can even be real-time analytics as well. So here’s my question, though. So this is where I’m going with this one. So, CI/CD, continuous integration, continuous deployment, helps you move that data agent across dev to test to production. The thing I’m curious about here, Tommy: when I read this article and I search for the words variable library, I don’t see anything that lets me do this. So for example
8:44 The connection string to the lakehouse, the connection string to the semantic model. If I’m building a dev-test-prod type setup with the three different environments, yes, I can move the data agent between the different environments, but what happens? Does it auto-bind between the different environments? Is it picking up a new lakehouse? So I would assume, and this would be a pattern, it’s not the only pattern, let me preface that. A pattern is I’ll have data in dev, I’ll have
9:18 Different data in test, and I’ll have production data in the production server or wherever that may be. If the data agent is pointing to the dev lakehouse, how does it point to the test lakehouse when you move it to the next environment? Is it still somehow attached to the old development lakehouse or data warehouse, whatever the thing is, or does it now auto-rebind to the next level? This is one of the things that I feel Microsoft is trying to solve with this thing called variable
9:52 Libraries, but that would mean the data agent needs to have the ability to be used by the variable library. That’s where I’m getting a little bit hung up here, which is, okay, I understand you have CI/CD attached to the item. Yay, great. I’m glad we have it. But what about all the referenced items that it’s using as it moves through the different environments? And that’s the part that’s been difficult to manage. I will say in other things, like semantic models, it has been okay, but if you’re going from notebooks across different environments, that’s a
10:24 Little bit challenging. That doesn’t really work so well. It’s not super smooth, and I think variable libraries is trying to fix that. Dataflows Gen2, same thing, right? It’s difficult to get them across, and you have to use these variable libraries in the dataflow to make sure that you can switch out the parameters per environment. There’s some things here that I’m not sure if we’ve already thought them out, or maybe it’s on the backlog or needs to get built out. I need to go play with this one. This is one that I think I need to go directly use, try it out, see how it works and what happens when you have these things bound
10:58 To different environments. Yeah, I don’t think variable libraries are supported with data agents yet, but I know they’re actively working on that, because really right now there’s not a full comprehensive process with data agents, especially when you’re doing testing across different data sources, because what the data agent has to do, it’s not just in a sense pointing to a data source like a SQL query where you just change a parameter and it just reads that data. So yeah, that’s
11:32 Not supported yet. But to be honest, I’m not terribly concerned with that. I understand and I get it. But when I look at the folder structure, so I am very happy. So going back to this, right? Let’s talk about these things that have to move through different environments. When I look at the root file structure, Microsoft in the documentation shows you some JSON files that it’s built. The entire data agent is just built with metadata, a whole bunch of JSON that describes how to build the thing. And so you can see there’s a files section. It has a configuration folder. Inside the config
12:04 Folder, you have draft and published. You have the data agent.json. You have publish info. So it’s all JSON. It’s easy-to-use structured data that’s being used to build the data agent. The tricky part, I think, of this is how do you configure this to move between environments? And that’s where I’m a bit more cautious. I guess just be aware you might need to figure out how that works. So anyways, this is brand new, hot off the press. A good feature. I think it’s very relevant here. It is currently in preview. So again, I think they’re probably going to
12:37 Refine the idea and get it worked out here. Maybe we’ll see something in the future around parameterizing your data agent so that you can use it in different places, or you can swap out those connection strings as you go. Anyways, just a thought or side note around that one. Any other news topics? Any beat from the streets that you have, Tommy? I think I’m pretty good on beat from the streets, my friend. All right. Well, that being said, let’s jump into our main topic today. And our main topic will be around action systems and AI applications. With that being said, I’d like to introduce everyone
13:08 Back to Chris. Welcome back, or Christopher, I’ve been trying to go by your full name. Christopher, welcome back to the podcast. We really appreciate you being here and jumping in again and talking more with us about all these AI real-time experiences that we’re seeing here. So let’s frame up the conversation here just a slight bit and say, okay, action systems and AI applications. You’re not going to find these things in the Microsoft documentation. Chris, maybe frame us out here a little bit. When we talk about
13:41 Action systems or AI applications, what does this definition mean to you? Maybe we can start the conversation there. Give us a brief introduction or setup here of what you mean by these terms. Yeah. So action systems are an umbrella term. If you search for action systems in Microsoft Fabric, you’re not going to find a whole bunch of documentation on it, because it’s just a category that we use to describe all the different consumption methods and consumption artifacts that are available to you within Fabric. Right? So when we think about what PowerBI was, when it was PowerBI, your main
14:15 Consumption artifact and your main action system was a PowerBI report, right? Maybe you had a PowerBI dashboard, but those were really limited. Those were really your options, right? Now that we’re in the Fabric world, there’s a ton of other things that have become available to you, right? You were just talking about Fabric data agents, and we have Fabric maps, and we have Activator, and we have reports, and we have real-time dashboards. We have all these different systems that allow us to consume the data, and so the scope of everything that’s become available to you is now much greater than what it
14:47 Previously was before. And so we use action systems to describe all the different ways in which you can go consume that data. Right? The goal and the promise of Fabric is I’ve got all my data in one place. I can use it in a variety of different forms and formats. When we think about what that would take us to do in a non-Fabric environment or in the cloud world, that’s frequently multiple copies of the data, where I’ve got a project that I’ve run for Azure storage, and then I’m processing that data and I’m making it available in a report, and then I have a whole other project that’s running
15:19 Where I’ve got a whole other copy of that data running in storage, and then I’ve got a whole other layer of compute, and then I’m going and making it available for an AI application. So yes, action systems are just a simple phrase that we use to describe all these different ways in which we can go consume our data. I like this one. I think this is a great term, actually. I don’t know who’s branding this one. This must not be Microsoft marketing. This is probably a Chris-ism. Yeah, this is a Chris-ism. I think the term makes a ton of sense to what you’re actually describing
15:52 And what you’re talking about. So I think this is a great term. I like the idea of an action system driving and doing things on top of information. And again, I think we talked about in our last episode that time between when data shows up and when I should be making decisions about that data. The shorter that decision window needs to be, the more I think we’re really talking about some of those action-based systems: the data shows up, do something immediately, have actions on top of that. So, awesome. Any thoughts there, Tommy, around action systems or the terms there? Yeah, and I’m looking at your LinkedIn
16:25 Post, and I really like how, excuse me, how you laid it out. At least this is edition two, and really looking at that difference between event streaming in Fabric and mirroring in Fabric. It’s interesting that you chose mirroring as your comparison. When you look at all the features available in Fabric, why does mirroring, to you, make that in a sense a good
16:56 Comparison? The reason I chose mirroring for that, Tommy, is I get that question almost every single day. Sometimes multiple times a day, sometimes a dozen times a day: what is the difference between mirroring my data into Fabric and leveraging a Fabric event stream or an event-driven architecture? What are the things that make those different, and when should I choose one over the other? So, I picked mirroring specifically for that article because, honestly, selfishly for me, I wanted to have a
17:28 Place I could point people and say, “Go read this. Go read this and then we’ll talk. It’ll make sense that way, and I don’t get the same question a million times a day.” It’s worked. So it’s been very successful in that endeavor, I’ll admit. So, give us maybe the 10,000-foot view of the summary, to answer the question, right? I know I should read the article, but for the listeners who haven’t read it, I put the article in the chat here. So, for those who are listening to the podcast, the article is in the chat right now if you want to go read that. When to leverage
18:01 Event streams or when you’re going to use mirroring, what’s the 10,000 foot view or summary of this, Chris? Yeah. Yeah, great point. and so for anyone who doesn’t have access to the chat, when you might be listening this later, you can find me on LinkedIn. and I’m the author of what’s called the real-time dispatch, which is a weekly newspaper, weekly article that I publish around all things streaming data. sometimes I dabble in specifically how it relates to fabric and how it relates to all things within real-time intelligence, but I really try to keep it higher level around how do event- driven architectures really change the paradigm for how we think about data
18:33 Processing in 2025. So, with that shameless plug aside, we’ll we’ll go back to your question. You’re allowed to do this. You’re it’s totally totally acceptable. Totally acceptable. Well, we’ll get back to answering your question, right around miring versus event streaming. And so, sure, the way to think about it is mirroring is a very simple way to create a copy of your data and, , virtualize it, connect it into fabric. Once you’ve mirrored that data into fabric, there’s a lot of things that you still have to do to that data. You still have to go process it through silver. You still have to process through gold.
19:05 You have to create your pipelines. You have to create your notebooks. And then at the end of the day, it’s primarily available for a report in a lakehouse, right? It’s most likely what you’re using it for. Yes. So although the entry point with mirroring is very, , simple and straightforward and it’s near real time that that is we try to make mirroring as quick as we possibly can on the fabric side, but there can be a delay. There delays. There are things that come in. the bigger the database, the longer the latency. There there’s some other considerations and some technical things to think about when it comes to that as
19:37 Well. Sure. But at the end of the day, it’s very much an extract-load, then I transform it, right? Then I move it to silver and then I move it to gold and then I go move it all downstream. Right. Can I add just a note there, just to clarify a little bit for me? You’re also saying this too, right? So it’s more of a passive getting data in, and what I mean by passive is, from what I understand, mirroring is more looking at the change data capture logs of different systems, watching the data change in the source system, and then using those logs to help update records on the reporting side or the lakehouse side. There’s really no, like,
20:10 I’m not listening to anything. I’m not waiting for a certain message to appear to go do something. It’s more of, and that’s what I mean by passive, it’s more of a process that’s set up to just make sure that the data in that table is matching what’s inside a different server or a different location. Is that a fair mental map of what’s going on? It’s a great mental map of what’s going on. The other caveat I would add to that: it is truly a mirror. So if we think of a mirror, if you break the original source, then your mirror is going to break as well. And so you’re not going
20:42 To get updated data anymore. So there’s things to consider when you look at implementing something like mirroring. The easiest way I always describe it is think of it like a very ELT-type approach: I’m moving the data from here, I’m loading it, and then I transform it, right? Okay, makes sense. Got it. Event streaming is much more of an ETL approach, right? I’m extracting the data, but I’m not dependent on the mirror anymore. I’m just subscribing to an event. Any event happens. Once that
21:14 Event happens, I transform that event in flight and then I write it to my destination. So, think of it like I’ve taken bronze, silver, and gold and I’ve collapsed it and said, I’m just going to do it all in flight: I’ve picked up from the source, I’m going to make the transformations that I want, and then I’m going to go write it wherever. Nice. Once I’ve made that transformation in flight and I’ve written it to the destination, my eventhouse or my lakehouse or wherever I want that to go, I get a whole variety of things that I can then go do on top of that data in addition to just
21:46 Reporting, because when I’m capturing those things in real time, I can go use any of these action systems to then go consume that. So whether it’s a report, or whether it’s an Activator, it’s a trigger: someone just created an order, or hey, sales rep, this account just went from green to red because we had XYZ conditions that occurred, or their sales have dropped by 50%, or their sales have increased by 50% in the last week. Like, hey, what’s going on? How are things going? These are all things that you really want to be contextually aware of in the moment, where you can start to understand what
22:19 Those pieces are, as they’re happening. So in its simplest form, mirroring is ELT; event streaming and event-driven architectures are ETL. Okay. So that’s the nice way that I bucket those different things, and it really helps articulate the difference. Great. I don’t have to read the article. You have the summary from the expert. This is good. I really like this. This is a great explanation. So awesome. There’s another term that we were talking about here. So that was the first half, which is action systems. And so, maybe Chris, give us a little bit more flavor
22:53 Here. The other term we’re talking about is just AI applications. So, again, maybe going more towards this, let’s unpack what you just, I think this fits very well with the article that you just talked about, which is: I’m going to extract the data using events. Events are coming to me. I’m going to listen to those events, transform them, add metadata. And in my experience, and when I’ve seen some demos of this as well, when you’re using real-time things, you have the ability of appending or joining or matching real data against those events that are coming in. Because in my world, these events that come in are
23:25 Usually pretty simple. Sometimes there’s a lot of information in them, but sometimes they’re very meaningless. There’s a lot of IDs. There’s things that are not rich enough to actually let users see what’s going on. Like you have a customer ID coming in, or there’s an event or an email showing up, but you have to go look up that event and append more data to it, and then you can action off of it, so which customer that email came from, or other enrichments. When I look at this, the AI application seems like it tacks on somewhere in the process here, where you’re saying, okay, now that I have
23:57 This real-time event actioning system, now I can apply AI pieces to this that help me further enrich more of those actions. Maybe that’s not what you’re thinking here with your term AI applications. Help me unpack that term as well. What does AI applications mean in this realm of real time? Yeah. So AI is the buzzword to end all buzzwords, I feel. There’s so many different applications. What you described, we really consider
24:28 Contextualizing the stream, right? So, to put it in terms of a dimensional model: when we capture facts that are happening within a business, we have a dimension, right? And that dimension has all those contextual attributes about everything that’s happened, the reference things of like here’s the customer, here’s where they live, here’s their email address, here’s how many kids they have, here’s all the information that we could possibly want to know, spend or whatever. Yeah. Products and product categories and all these different pieces. In event-driven architectures, we think
25:00 About that data much differently. We don’t think about doing it after the fact and saying, I’m going to defer that to a model or a semantic model. We refer to it as contextualizing the stream. And when we contextualize the stream, we apply those transformations directly to the event as it’s occurring. So if we get a sensor value or an event, you’re right, it’s raw. It’s simple. There’s probably only a few things in it. But we expand that event as it occurs by contextualizing it during flight and say, here’s the customer that it associates with. The
25:33 Customer it associates with is Chris. Chris is this many years old. He lives in this location. He’s got this number of kids. This is his expected spend, all those different things. And then we add it in one long row. So think about it in terms of almost like a one-big-table thing: we’re bringing all this data, we bring it into one event, and then we take that event and we write it to wherever we’re going to write it, something like an eventhouse or something like a lakehouse. And once we’ve done that, you could use AI to maybe supplement some of
26:08 That and do some of those different things you’re describing. When we’re talking about AI applications here, we’re really talking about the AI action systems that you can go use to consume that data. So, we just established a minute ago all the different action systems that are available to you within Fabric, whether I’m using a map or a PowerBI report or an Activator or all these different pieces. There’s also data agents. Data agents are a form of AI. They’re just conversational AI. But when you create a conversational AI system, you really get the ability to chat with my data, and I’m
26:44 Still very much in a reactive mode. And we talk about the real powers and capabilities that come with event-driven architectures in AI applications when you look at something like an agentic AI application, to say these are all the things I know about my data, or these are all the things I don’t know about my data, and I want those systems to automatically build and go take some action downstream. Maybe it’s a known unknown, right? Activator is great at the known unknowns, right? I know
27:17 What it is that I want to go track. I know that these are the exceptions. Where agentic AI applications are great are the unknown unknowns: I don’t know what I want to go check. I look at something like an anomaly detector. Okay. And when we look at anomaly detection in RTI, how do I make those anomalies really easy to go identify and then create an agentic workflow on top of that? So we really see it as bringing that big buzzword of AI and making it real, making it something tangible, not just, oh, we AI, right? Like I AI, you
27:51 AI, we all AI, and making it something of, here’s really the business impact and the tangible thing. Like we talked about last time with event-driven architectures, you can really get to generating real, tangible, quantitative impact for your business, right? If this line goes offline, that’s costing me something. Or if the order doesn’t get delivered, that’s costing me something. Or if I’m an airport and I don’t get your bag to you in 30 minutes based on the airline guarantee, right? That’s a piece. Or if I’m going to miss
28:22 The plane and that bag’s not going to make it on that plane. Those are all things that are really important in the moment. It doesn’t matter 6 hours later or 24 hours later to say I lost the bag, right? The bag didn’t make it on the airplane in time, right? That does me no good. It’s gone. Yeah. I’m traveling. I have a customer presentation tomorrow, and now I’m in my street clothes, where I’m not prepared to go meet a customer. Yes. Telling me about it tomorrow is not going to do me a whole lot of good, right? Those are things that I want to know in the moment. Yes. So
28:55 being able to combine these two approaches, to say I'm pulling this event data in, I'm transforming this data in flight, and then I'm creating these agentic experiences on top, whether I know about it or whether I don't, is really where we see the real value and the quantitative impact of creating these AI applications. Wow, this is interesting. One comment real quick before I kick it over to you, Tommy: this is a whole new world. If I'm
29:28 looking at this through the lens of my PowerBI brain, right? I came into this with my PowerBI world and brain. This is a whole paradigm shift from how I've traditionally thought about PowerBI: importing, getting data in on regular loads. This is a whole new way of reimagining a stream of data with action-based elements, a stream of information coming in and real-time anomalies being detected on it. I'll go back to the example that everyone knows,
30:00 which is you have a Fabric capacity, you're watching it, and something happens. To me this is one of those really applicable use cases. Everyone understands it: at some point their Fabric capacity throttles, or some event occurs, or as you're rolling these out you get these initial really big bursts of usage. That's perfect for what anomaly detection should be doing: alerting someone, to your point, Chris, in the moment, at that time. As a centralized data team, we want to become a little bit more proactive. We should know about
30:32 these things before users come to us and say, "Hey, something's broken." We should know what's going on ahead of that. It makes us look good, right? It truly is, from experience and from everything we know in the industry: when you can get in the position where you can tell your customer about something before your customer tells you about it, it's a game-changing moment. Because, to your point, carrying through that example, telling me the capacity throttled tomorrow is not super useful. If we're in the
31:05 moment, today is going to be a miserable day: I'm going to have customers yelling at me all day, I'm going to be having all kinds of issues. But if I can get ahead of it... Back when I was in consulting, I used to create what I would call BI on BI. That was always my favorite thing to do. I'd build all these telemetry and logging systems around everything happening within the BI environment: here's the telemetry from the pipelines that are running and the notebooks that are running, and here's the queries that
31:37 people are running and what they're querying in the semantic model, and then I'd analyze all that information. The reason I would do that is, one, the business data is always the business data. I always tell the business: it's your data, do with it what you need to be successful. But the BI-on-BI data, that's my playground, my sandbox. I can do whatever I want in there and no one's going to bother me. And if I break it, I know no customer is going to call me and say, "Chris, the BI reporting is broken." It was my toy, right?
32:10 Yes, exactly. So I had a safe space where I could play and try all kinds of new things. Awesome. One of the byproducts is that when you get to that level of observability, you can see and say, "Hey, I'm sorry that you ran that query and it took 20 seconds to return. We identified the issue. Hey, Mike, we saw that you ran it, we know it took a little while, and we made some optimizations on the back end. So next time you run that query, it should run in about two seconds. And if it doesn't, let us know."
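The BI-on-BI loop Chris describes, logging query telemetry and reaching out about slow queries before users complain, boils down to very little code. Here is a minimal Python sketch; the event shape, the field names, and the 10-second threshold are all illustrative assumptions, not anything Fabric-specific:

```python
from dataclasses import dataclass

@dataclass
class QueryEvent:
    """One row of hypothetical BI-on-BI telemetry: who ran what, and how long it took."""
    user: str
    query_id: str
    duration_s: float

def flag_slow_queries(events, threshold_s=10.0):
    """Return the events whose duration exceeds the threshold,
    slowest first, so the data team can reach out proactively."""
    slow = [e for e in events if e.duration_s > threshold_s]
    return sorted(slow, key=lambda e: e.duration_s, reverse=True)

telemetry = [
    QueryEvent("mike", "q1", 20.0),   # the 20-second query from the anecdote
    QueryEvent("tommy", "q2", 1.5),
    QueryEvent("mike", "q3", 12.3),
]

for event in flag_slow_queries(telemetry):
    print(f"reach out to {event.user}: {event.query_id} took {event.duration_s}s")
```

The point is less the filtering itself than where it runs: wired to a live telemetry stream, this check fires while the user is still at their desk, not in next week's report.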
32:42 Yeah. Those are the experiences that really change it from "man, the system doesn't work" to "oh, the data team is really working with me, they're on my side, they're trying to help me out." It's a confidence builder, right? And event-driven architecture is really a continuation of that. It's the carrying through of this ability to say, in the moment, these are the things that are happening. Not just the observability pieces, but here's the business process and here's what I'm looking to do, where we can take an abstract term like AI and turn it into something
33:17 tangible, something real. One thing I tell customers very often is that the line between an Activator in Fabric and an agentic AI application is very, very thin. It's extremely thin. Same thing with a logic app; if you're not familiar with Activator, think of it like a logic app. The line between a logic app and an agentic application is a thin line: the only difference is that one handles the things I know about and the other the things I don't. So being able to close that gap and make those experiences easier allows you to
33:51 really turn data from "here's the report I'm looking at, and now I know I need to go take these actions" into something integrated into all the business processes. Over to you. Yeah. It's interesting, when we talk about this from the intelligence side, the common example is usually security, or, let's say, your suitcase is missing. Well, I'm going to play devil's advocate here, because those systems already exist today to get real-time data, right? Obviously airports,
34:26 cybersecurity, debit cards, credit cards: they all deal with systems that automate that for anomalies. But we're talking about this in the realm of business intelligence. So, again playing devil's advocate, it's not like Fabric has rewritten the book so that all companies can now do real-time anomalies; that already exists, and particular companies specialize in it. What is different about this for Fabric? Because we're talking about this
35:00 again through that business intelligence lens, and I'm looking at the article about AI with real time, about doing temporal, spatial, and relational analysis in real time. Why is this so groundbreaking in the Fabric platform compared to an existing system that is already in use? So, when we look at those existing systems, there are a lot of use cases that currently exist for
35:32 fraud or baggage. But I'd caution, and I'd say we should measure, because they're extremely expensive systems to run and maintain today. They're frequently chaining many different systems together. I might have Kafka, maybe managed Kafka through Confluent. Then I'm loading that into a time series store, and then I'm creating some type of Spark notebook or some type of job to analyze that data, trying to do all of that as quickly as I possibly can,
36:04 right? We call that the faster horses effect, after Henry Ford. I've got a horse and I'm trying to move it from point A to point B as fast as I possibly can. There's a limit to how much you can compress that: the horse can only run so fast, and at some point it cannot run any faster no matter how hard you try. But one of the things Henry Ford used to say was, "If I'd asked people what they wanted, they would have told me they wanted faster horses." That's why we call it the faster horses effect.
36:37 And we really envision real-time intelligence as the car. We've taken all these different tools and experiences, things that took five, six, ten different solutions chained together to accomplish, and we've simplified that. Not only have we simplified it down to two or three tools instead of all those different tools, we've also brought it into Fabric in a no-code, low-code way. So without having to maintain Kafka infrastructure, or maintain a ton of pro-code
37:11 experiences with a bunch of custom Java applications or functions or whatever it is you're looking for, you're able to do this in a very simple no-code, low-code experience that really democratizes it. When you look at what real-time intelligence has brought to the table, I think there are a lot of parallels between real-time intelligence today and what PowerBI was 10 years ago. When PowerBI came out, there were a lot of different tools in the market, but there was not an easy no-code, low-code tool that really allowed you to visualize things without
37:44 a bunch of expertise. And so RTI has allowed us to bring those experiences into Fabric in a no-code, low-code form. You're right, you can do it today, but these have typically been extremely expensive implementations to build, maintain, and run. And data warehousing and data analytics projects aren't really that much different. You go through a whole six- or nine-month project to implement it, and I've chosen Kafka and Beam, and maybe I've chosen ClickHouse as my
38:20 time series store, or whatever it is I'm building. Then Chris finishes his project, his contract ends and he rolls off, and Tommy comes into the project nine months later. Tommy comes in and says, "What is this? I've never seen anything like it. Why would you use Beam and ClickHouse? This is a terrible project. We've got to start all over again." So you start all over again, and that really inhibits the business value from truly being achieved. Making it a no-code, low-code tool set gives the business not only a simple way to
38:54 implement it, but also a really simple way to maintain it. Would you also agree, and I think this is part of what I'm hearing you say, Christopher, that there's also this ease of connecting everything? One of the things I like a lot about Fabric is just how easy it is to move data between totally different kinds of compute systems. What I mean by that is: we have KQL, an event-oriented, real-time, very fast, hyperscale experience where you can store a lot of data. Then we
39:26 have lakehouses, where semi-structured and structured data can go. And then you have data warehousing. Those are my three main storage areas where I put data, and when you look at the documentation, there's not a lot of friction in moving between those elements. Pipelines can read things from a lakehouse and put them in a warehouse; pipelines can read things from a warehouse and put them in a lakehouse. It's this new world of things that are just interoperable and work together. When I was doing
39:58 this stuff, and maybe this was your consulting experience too, Chris, it was just more painful to pick out all the different pieces. Okay, I've got to have an event hub, I've got to have this system and that system, and there are a lot of other systems you have to know: how they work, how to turn them on, and then how to wire them together. It feels like much more of the Fabric space is just easier: the wiring is built into the product, and it's easier to pick up data from different places. Is that another friction point
40:29 that you see getting removed here? Yeah, 100%. When we look at what it would previously take to implement those solutions, I think Lambda architecture is a great way to think about this. When we think about Lambda and what needed to be built, the primary way we have built PowerBI reports and data systems has been specifically the Lambda cold path. As for processing that data on a hot path, although we
41:02 always saw it in the documentation, and when we read about Lambda there were always these great diagrams, the hot path up here falling down to the cold path and doing these different pieces, in the real world that wasn't really something we could implement. It was really challenging, because, like we talked about last time, when you want to stream that data, I needed to chain all these different tools together, do all these different pieces, and then make it available. There was not a lot of connective tissue between all those pieces. But from a Fabric perspective, because all these tools are
41:35 in one place, it's very easy for me to say: okay, I want to implement a true Lambda architecture. I have my hot path, which is rolling data down into my eventhouse, and I want to take all these different actions on top of it with all the action systems available to me. Then I drop it into the cold path, which carries it into my lakehouse and my warehouse and all the other ways I want to consume that data. Yes, and you get the power to choose. Do I need that hot path? Do I just want to go cold path the way I have
42:07 been doing before? Do I want to keep everything hot all the time? There are so many choices available to you. And at least for me, I find that very energizing as a data professional, because I've never had more choices in my life than I have with Fabric. I can really pick and connect all these pieces together and say: for this data source, I want to use mirroring; for this one, I'm going to use a stream to load it in. I'm landing this data in my hot path store and then moving it into my cold path store. And when I'm creating this, do I want to keep all of my reports directly on top of my hot path
42:39 architecture, or does it make sense to serve those from the cold path and save my hot path for the event-driven scenarios where I want it? It's so easy to implement these things now, where before it was the connective tissue of maybe 10, 15, 20 different tools. It was really difficult to (a) build, (b) maintain, and (c) document. I like this. What are your thoughts, Tommy? Anything you're picking up here? Yeah, and I think that's probably the biggest selling point for me when I'm
43:12 looking at this. Mike and I keep coming back to this, regardless of what part of Fabric it is: the effort and the focus that you and the Microsoft team have put into the user interface, taking these very enterprise-level, developer-level products and providing a nice user interface to get up and running. So, as we're looking at this, and you have been really pushing as well
43:46 that, hey, there's really no excuse now not to get at least something up and running here. For someone who's listening, going, "okay, this anomaly detection..." And by the way, love the use of known unknowns and unknown unknowns. Tommy, you've talked about that a number of times in the past: there are known knowns, there are known unknowns, and there are unknown unknowns. The old Donald Rumsfeld quote. Yeah, it's Rumsfeld. That was
44:20 pretty good. It was very good, good callback. You've been really pushing that what BI is going to be in the future, rather than just simply being reactive, is helping us find things we otherwise weren't looking for, or, more importantly, getting those alerts. And there are really three main products right now that I see trying to enable that. Obviously there's the anomaly
44:53 detection, the Fabric Activator, and then there's even one in PowerBI reports too. I forget what it's actually called, but you can actually get notified on a visual. Is that still something that's being worked on, or a big focus? The alerts in PowerBI reports. Yeah, that actually goes to an event stream, or that might have gone the way of metric sets. Tommy, are you describing an event stream that's running, and from that
45:25 you're getting alerts directly in a PowerBI report, or kicking things off? Or are you talking more about in-report goals and metric sets? As in: I'm in a refresh of a report, I'm looking at some analytics on it, and I'm getting an alert that this measure went below a certain value. Is that what you're talking about, the goals, basically? Yeah. It used to be part of the PowerBI report initially when Fabric came out. I think I'd have to have an event stream set up, but it's not just a normal alert; it would actually push to an event stream from a PowerBI report. That actually might
45:59 have gotten deprecated. It might be; I'm not super familiar with that. I'd have to look at it; I honestly don't know. Oh, PowerBI Activator, that's what it's called. So PowerBI Activator became Activator, and Activator is the connective glue that holds a lot of those pieces together. Is Activator deprecated? Certainly not. We're doing a lot of work and investment in that space to address specifically what we're talking about here: agentic
46:31 applications, and how we leverage Activator to identify those known knowns and known unknowns within our environment and trigger those pieces. When it comes to anomalies, I think anomaly detection is really interesting, because there are so many tools we can use to create anomaly detection within Fabric. I frequently joke that anomaly detection is the first
47:03 thing in RTI you can do in three different ways; it's entirely up to you how you want to do it. You can do it with the new no-code interface that we've created. You can do it in low code, because there's a KQL function, which also happens to be my favorite KQL function, called series_decompose_anomalies. And there's a third way: you can just create a notebook on top of an eventhouse and do it the pro-code way. So however you want to do your anomaly detection, whether it's no code, low code, or pro code, RTI has a great solution for you, whichever one you want.
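For the pro-code route, a rough stand-in for a decomposition-based detector can be written in plain Python: estimate a local trend, then flag points whose residual is far from typical. This is only an approximation of the idea behind KQL's `series_decompose_anomalies`, not its actual algorithm; the rolling-median trend, the window size, and the 3x MAD cutoff here are illustrative choices:

```python
import statistics

def detect_anomalies(series, window=5, cutoff=3.0):
    """Flag indices whose residual from a rolling-median trend is an
    outlier, using the median absolute deviation (MAD) scaled by
    1.4826 to approximate a standard deviation."""
    n = len(series)
    half = window // 2
    # Rolling median as a robust local trend estimate.
    trend = [statistics.median(series[max(0, i - half):min(n, i + half + 1)])
             for i in range(n)]
    residuals = [x - t for x, t in zip(series, trend)]
    med = statistics.median(residuals)
    mad = statistics.median(abs(r - med) for r in residuals)
    sigma = 1.4826 * mad or 1e-9   # fall back when the series is perfectly flat
    return [i for i, r in enumerate(residuals) if abs(r - med) > cutoff * sigma]

# A steady stream with one obvious spike at index 7.
stream = [10, 11, 10, 12, 11, 10, 11, 50, 11, 10, 12, 11]
print(detect_anomalies(stream))  # → [7]
```

The median-based trend and MAD are deliberate: both are robust, so a single large spike does not drag the baseline up and hide itself, which is exactly the failure mode a naive moving-average detector has on bursty streams.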
47:36 But when you look at all the other ways you can do it, like you said, you can do it in PowerBI, you can do it in Spark, you can do it in all these other ways. So it's really entirely up to you how you create those things. But what I would also say is that detecting an anomaly alone is very similar to just creating a report. I've identified an anomaly. Okay, now what? What do I do? What should I do about it? Should I do anything? I used to have a manager a long time ago,
48:08 and I always liked the way he would challenge me as an analyst. When I would tell him something from the data, say, "sales are down by 20%" or whatever, he would always look at me and go, "So what?" So what? Why? What am I supposed to do about it? Don't just tell me the number's down; don't just tell me what's happening. And when we look at reports, reports are really good at giving the number. Reports say we're up by 7%, or that there are seven exceptions
48:42 happening in this specific business process. So what? What do I need to do as a user, as the human who actually consumes that data, who takes the next step, who takes the action? These action systems allow us to bridge some of those gaps: regardless of how you calculate your anomalies or build your reports, they let us take them to the next step. They let us answer the "so what" for the business: this is why it's
49:14 important to you; there are seven exceptions in this business process, and these are the things you should do next to fix them. Yes, correct. And as we get more sophisticated, deeper, and more mature in the AI space, I think the question becomes: okay, there are seven exceptions in this business process and these are the things you should do, but now, systemically, I know historically how you've fixed that problem. So as a system, I know what to do. Do you mind if I just go do it? As an AI application, can I go fix that, where
49:48 then it's just integrated as a part of the process? And then, as a business user, I no longer have to even worry about the exception, because I know that if it occurs, an agentic AI application will just close that gap for me. And that becomes applicable in so many scenarios. We're talking about exceptions in a business process here, but what about data quality? We spend so much time talking about data quality. When we look at fixing data quality or master data in a data analytics system,
50:20 we're doing it downstream, after the fact, away from the transaction system. You're hitting a chord here that I love, because this is a major problem. Yeah. And we don't have the tooling to get back upstream. We've seen there's a problem, someone needs to fix it, and we keep having to throw humans in the loop. This is a prime opportunity for agentic somethings to go fix the stuff somewhere else, right? And so as we mature in the space, we can say: hey, this transactional system generated this quality problem. I know these are the data quality rules
50:52 I need to apply to fix it, but instead of putting a band-aid over it in a dimension or in my data analytics system, why don't I intercept the event as it gets generated, send it upstream to the original application, tell the application, "hey, fix it and then send me a new event," and log it? Then not only do I have the correct event, because I intercepted it in real time and told the business user to fix their data quality problem, but I also have a record of it. So I can say this application generated these problems
51:24 over this amount of time, so this is an application we should go look at and fix. There are so many use cases where this starts to apply as we get more mature and more sophisticated across all these pieces. And from a pipeline perspective, it also lets us ask a question. Actually, let me ask you a question, Mike and Tommy, before I go too much further: how many times in your career has a pipeline failed because one row was bad in that batch, whatever it
51:58 is that pipeline's processing? A thousand or 5,000 or 10,000 rows a day, and one row is bad because someone wrote "Chris" in a datetime field, and the whole thing breaks. Yep. People are crafty, and they'll add whatever they can wherever they can, because they think, "Oh, that's a good idea, I'll put it there." But in reality, it forcibly breaks our pipelines and causes things to fail. Yes, I'd rather kick that one record out and notify someone immediately: hey, there's a bad record, I kicked it out, I still loaded your data, but you've got to go find this, something happened here. Yeah,
52:29 it's great. And so when we look at this, we say: okay, we have all these tools and approaches we've come up with over time. If this record fails, we're going to ignore the data type and just write it, and we're going to clean it up after the fact, and all these different pieces. These are all after-the-fact things. But when we can get to pulling this data from these applications in an event-driven architecture, from an event system, then as the system generates the event, say Chris just wrote "Chris" in a datetime field, I'm going to process that event. I'm going to transform that one event in
53:01 flight, as it's coming in: hey, drop that to a dead-letter queue, drop it somewhere, send it back to the application in real time while Chris is working in it, and say, "Chris, you can't do that. Can you put in an actual datetime and not break the downstream system?" You can really start to integrate these systems together: I'm analyzing the data in real time, and I'm able not just to handle the process but also to fix some of these upstream data quality problems, and then have whatever downstream thing I
53:33 wanted go occur. So it really opens the door for a ton of use cases that we've barely scratched the surface of. You've really lit a fire in my mind around the data quality aspect of things, and I really like your point there, Chris. I think it's really strong, because one of the things, and Tommy, you and I have talked about this a lot on the podcast, is that data quality is an expensive proposition for organizations to implement. You've got to be able to stand back and say: okay, what does data quality look like
54:06 for our team, for this table? You've got to go through the different columns and say: okay, this column is an ID, it can't be blank. There are general rules you're trying to apply, and typically a person is standing there looking at the data, making up the rules as they go about what integrity is required in the data. Your note about agentic things that can help with this makes it really interesting, because, in the
54:40 same way that an agentic thing can build websites or write code and functions very well, why can't the agentic space do the same thing for data quality? Hey, here's a table of data, here are the things I care about. You talk to the AI about those items, and it goes through and says, "Here's a base set of rules you'd like me to apply every time this data loads in," and it checks those rules as we go. And so the agentic piece of this could start building data
55:13 quality processes for you faster. It can write its own SQL statements, so when the data comes in, it can run three or four SQL tests, write its statements, check them, make sure there's nothing wrong, and then approve the data and move it through. This really feels like a potential turning point, an opportunity for those of you who are listening at small startup companies. Someone should build this; it could be the next million-dollar idea: turning these data quality aspects on by talking to a language model and having it build
55:47 rules, check quality, and notify you when things are all there. And all of this, I think, builds on top of actions, right? It all goes back to action systems. This is an action system we can use with a lot of this AI stuff, and I'm excited about that because it brings down my cost to implement data quality, which I think is amazing. And I think this is the biggest thing for me too, when you think about even the non-traditional real-time departments or industries, where a lot of them still run on some type of
56:19 automation, but everything, to your point, Chris, has to be a known known, whether you're doing email marketing or website tracking. Mike, we were talking about this the other day: I'm giving it six months until someone creates a website that generates itself automatically as you navigate it. Yeah, exactly right. My thinking is: why even build a website anymore? Build an experience. Talk to a large language model and let it build the website in real time
56:52 when you go there. The writing's on the wall. It's only a matter of time before someone clicks a button and the next page they go to isn't even there; it doesn't exist yet, and the AI is figuring out what to build next when they click that button. This is changing quickly, and it's exciting to me. So I like where we're going with this one. We had a lot of good conversation here, but I do want to start wrapping things up. So, between action systems and unpacking AI applications, Chris, give us
57:23 maybe your final thoughts on this topic for today, and then we'll kick it over to Tommy. Yeah. My final thought is that we're truly at a paradigm-shifting moment in the data space, and we need to stop thinking about event-driven architectures as only being for sensor data. When we talk about data quality and all these other pieces, the use cases and the applicability really span a wide variety of different
57:56 Pieces. And so final thought final final thought would be start thinking about data from a perspective of instead of processing the entire data set of all 10,000 rows or all 100,000 rows or however much data you’re trying to process on a file. How do you break that down to say how do I process these rows individually? And if you come from a developer background then you probably already familiar with that. so so like think about that that stream or that row of data and say what are the things that I want to apply to this data as it’s occurring right at
58:30 That moment in time when it occurs what are the things that are important and maybe it’s a business process and maybe it’s just a data quality thing but all these things really become important so over to you Tommy. Yeah. So for me it’s really I think the realization that the conversation with AI and it really belongs in the loop where our decisions are going to happen. how do we actually really bring to life that buzz phrase insights to actions and action systems help us turn that data into decisions where we don’t have
59:02 to set up every single part of it. Real time, for me, only matters if it changes what our responses are going to be. I’m shifting my mindset on how I’ve always looked at real time, thinking about it more from my own business, for things that don’t fit the traditional mold of real-time automation. So I’m really excited to see how this grows and progresses. I think for me the final takeaway or
59:36 the final thought here, what I really take away from this conversation when I look at the landscape of all of Fabric and what it can do, is that Microsoft has built a really big opportunity here for businesses to move across these different engines very seamlessly. And now we’re seeing event-driven architectures, we’re seeing action-based elements that can occur in multiple places: bring data down to the lakehouse, acknowledge when a new file appears, then do something, right? That’s something more action-based.
60:07 Having real-time data streaming in, and then checking data quality as the data comes in. These are really interesting topics. And I think the space of BI, traditionally in Power BI, had a limited set of walls, right? There were a couple of areas where you could do real time. I feel like Fabric has expanded the capabilities, not just double or triple, but exponentially. And now, with anomaly detection
60:40 directly inside the service, it’s like a button press. You pick the anomaly detection, you throw down some variables, and it’s pretty straightforward to set up. This is easier than it’s ever been before. So I’m very optimistic for the future in this space. I think this is a really good opportunity for businesses to start changing how they do business. I’d also argue that if you don’t at least understand and look at this, your competitors are. It’s almost a competitive necessity now, not just a tool at your disposal. If you’re not doing more of these real-time eventing things
61:13 with Fabric, your competitors will be, and they may be more nimble than you if you’re not staying competitive in this space. So, it’s interesting. I really liked this topic; this was really good. Thank you very much, Christopher, for joining us and unpacking this. It was really wonderful. Thanks for the time. That being said, let’s go through a quick wrap-up. We know your time is extremely valuable, so thank you very much for spending an hour with us. I hope you learned something new about AI and agentic workflows with real-time analytics, and I hope there are some more thoughts or things you can unpack here
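[Editor’s note] The “button press” Anomaly Detector Tommy describes is configured in the Fabric UI, but conceptually it resembles something like a rolling z-score: flag a point when it sits far outside the recent window’s mean. This sketch is an assumption about the general technique, not the actual algorithm Fabric uses.

```python
from collections import deque
import math

def detect_anomalies(values, window=5, threshold=3.0):
    """Flag points more than `threshold` standard deviations
    from the mean of a trailing window of prior values."""
    history = deque(maxlen=window)
    flags = []
    for v in values:
        if len(history) == window:
            mean = sum(history) / window
            var = sum((x - mean) ** 2 for x in history) / window
            std = math.sqrt(var)
            # Only flag once we have a full window and nonzero spread.
            flags.append(std > 0 and abs(v - mean) > threshold * std)
        else:
            flags.append(False)  # warm-up period: never flag
        history.append(v)
    return flags

series = [10, 11, 10, 12, 11, 50, 10]
flags = detect_anomalies(series, window=5, threshold=3.0)  # flags the 50
```

The window size and threshold play the role of the “variables you throw down”: they trade off sensitivity against false positives, which is exactly the knob-turning the UI exposes.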
61:45 for your business as well. Hopefully it was a little bit educational, too. That being said, Tommy, where else can you find the podcast? You can find us on Apple, Spotify, or wherever you get your podcasts. Make sure to subscribe and leave a rating; it helps us out a ton. And share with a friend, since we do this for free. Do you have a question, idea, or topic that you want us to talk about in a future episode? Head over to powerbi.tips/empodcast, leave your name and a great question. And finally, join us live every Tuesday and Thursday, 7:30 a.m. Central, on all PowerBI.tips social media channels.
62:21 Thank you all so much, and we’ll see you next time.
Thank You
Want to catch us live? Join every Tuesday and Thursday at 7:30 AM Central on YouTube and LinkedIn.
Got a question? Head to powerbi.tips/empodcast and submit your topic ideas.
Listen on Spotify, Apple Podcasts, or wherever you get your podcasts.
