PowerBI.tips

Central BI & Workspace Strategies - Ep. 501

February 12, 2026 By Mike Carlo, Tommy Puglia

In this milestone episode 501, Mike and Tommy dive into some hot topics around AI, data governance, and workspace organization strategies for Power BI and Microsoft Fabric.

News Roundup

AI Doesn’t Reduce Work—It Intensifies It

The Harvard Business Review dropped an interesting article this week exploring how AI adoption might not deliver the productivity gains companies expect. While AI promises to reduce burden on routine tasks—drafting documents, summarizing information, debugging code—the reality may be more nuanced. The article raises important questions about burnout and work intensification as organizations push for broader AI adoption.

Read the full HBR article

OneLake Catalog Updates

Microsoft announced significant updates to the OneLake Catalog, positioning it as “the trusted catalog for organizations worldwide.” With over 230,000 organizations already using it, the catalog now serves as a unified access point for discovering, managing, and governing enterprise data.

Key highlights include:

  • Explore tab — Central entry point for discovering data across your organization
  • Govern tab — Tools to maintain control and trust over your data estate
  • Secure tab — Integrated security controls for data protection
  • Copilot integration — Generate summaries of semantic models automatically

The catalog is now embedded across Microsoft Teams, Excel, and Copilot Studio, making data accessible to over 350 million Microsoft 365 users.

Check out the full OneLake Catalog announcement

Main Topic: Central BI & Workspace Strategies

The bulk of this episode focuses on strategies for organizing workspaces effectively in Power BI and Microsoft Fabric environments. Mike and Tommy explore patterns and best practices for structuring your BI environment to maximize governance while maintaining flexibility for teams.

Whether you’re managing a small team or an enterprise deployment, getting workspace organization right from the start can save countless headaches down the road.

Watch the Full Episode

Connect with Us

Got an idea or topic for discussion? Drop us a line!


Full Episode Transcript

Click any timestamp to jump to that part of the video.

Intro & New Theme Song

00:00 — Good morning and welcome back to the Explicit Measures podcast with Tommy and Mike. We’ve got our new intro this year. We’ve done it. Episode 500 has pushed us over the edge. We’ve got a brand new intro for you. We’re excited to be back. Tommy, good morning.

01:02 — Tommy shares how his kids went crazy for the new theme song—literally every time they got in the car it was “Play the Tommy song!” Mike explains the process of creating the song using AI agents and Suno.

Making the Theme Song with AI

02:03 — Mike walks through the process: he went to a couple different AI agents, had them write multiple potential lyrics, called out what he thought was funny about the podcast—two guys, Tommy and Mike, maybe both Italians. Then took those lyrics to Suno (S-U-N-O), an AI music generation app.

03:04 — Mike explains Suno’s capabilities: you can have AI generate lyrics or input your own, specify the vibe, style, chorus sound. He tried a couple genres but landed on dance, electronic, EDM—high energy music. His work mix is a blend of heavy metal (no singing) and dance EDM.

05:05 — More on Suno: you can specify male/female vocals, harmonies, and it handles many genres well—country, heavy metal, rock and roll, 80s rock, even Middle Eastern music. It’s free to use. Mike’s family, even his parents, are now writing their own songs.

07:07 — The song is available on Spotify, Apple Music, and all major platforms. Mike introduces the main topic: central BI and strategies around workspace development, especially with Microsoft Fabric opening up new patterns.

News: AI Doesn’t Reduce Work—It Intensifies It

08:08 — Mike discusses a Harvard Business Review article. The concept: when you add AI, you don’t just offload tasks and walk away. You’re now offloading extra tasks so you can think about other things. Those six emails you wouldn’t have had time for? Now your AI agent handles them.

09:08 — The research findings: In an eight-month study of 200+ employees, AI tools didn’t reduce work—they intensified it. Employees worked at a faster pace, took on a broader scope of tasks, and actually worked more hours because they felt more productive.

10:09 — The company studied didn’t mandate AI—workers adopted it on their own initiative because it made them feel more capable. The article breaks down forms of work intensification: task expansion (AI fills knowledge gaps), blurred boundaries between work and non-work.

11:10 — Mike gives a personal example: “I have an OpenClaw bot on my desk. I can talk to it via Telegram. At night, I’m texting the bot saying ‘Hey, this button doesn’t work, can you fix it?’ It goes ‘Okay, done. Committed.’ I’m building features way into the night.”

12:10 — AI introduces a new rhythm where workers manage several activity threads—manually writing some code, getting AI-generative results, running multiple agents in parallel, retrieving deferred tasks. This partnership with AI enables more momentum.

13:11 — Tommy agrees—we think AI would reduce work but we’re actually working more. They discuss Super Bowl commercials showing commoditization of app building (Lovable, Base44, Replit). Tommy mentions he finally got OpenClaw working with Telegram.

15:12 — The article recommends preserving time for human connection. Mike jokes that maybe the podcast is their “touch grass moment”—candid human interaction that’s not talking to a bot.

News: SaaS Model Is Dead?

16:12 — Discussion of a post by David Orande (CEO of Agent Zero): “The SaaS model is dead and agents have killed it.” Major software companies taking hits in earnings reports. Mike reflects on commoditizing code and building what you want, when you want.

17:14 — Mike’s prediction: within six months, you could have an AI agent look at your existing codebase, write requirements for every feature, then rebuild the entire application in a new language in half a day—with another agent writing all the tests.

19:15 — The barrier for third parties to undercut seat-based licensing is dropping. Mike’s frustration: other people’s software does 90% of what he wants, but 10% doesn’t fit his workflow. Now he can just tell an agent to build exactly what he needs.

20:16 — Companies like Salesforce, Adobe, Microsoft—unless they integrate AI that makes people’s jobs easier, startups will outbuild them. But questions remain about sustainability: if a feature breaks, who debugs it? Has to be an agent because it wrote so much code.

News: OneLake Catalog

21:17 — OneLake Catalog discussion: “The trusted catalog for organizations worldwide.” Microsoft promoting how organizations are using the OneLake data catalog. 30 million monthly active users. Lumen saved 10,000 hours of manual effort.

23:18 — Mike says OneLake Catalog is underrated. Great for helping teams discover where data lives, what models are available. Three components: Govern, Secure, Explore tabs. Filter by data source, reports, lakehouses, datasets. Tag things for easy filtering.

25:20 — Tommy wishes the Govern tab could be customized—add your own actions and sliders. Both agree there should be a way to add custom observations like “notify me if any workspace is created without a domain.”
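The kind of custom observation Tommy describes could be sketched as a simple rule over workspace metadata. The record shape below (and the `domainId` field in particular) is an assumption for illustration, not a documented API contract:

```python
# Hypothetical governance check: flag workspaces with no domain assigned.
# Field names like "domainId" are assumptions, not a documented API shape.

def workspaces_without_domain(workspaces):
    """Return the names of workspaces missing a domain assignment."""
    return [w["name"] for w in workspaces if not w.get("domainId")]

sample = [
    {"name": "Sales - Prod", "domainId": "d-001"},
    {"name": "Scratch Pad", "domainId": None},
    {"name": "Marketing - Dev"},  # no domain field at all
]

print(workspaces_without_domain(sample))  # -> ['Scratch Pad', 'Marketing - Dev']
```

In practice the workspace list would come from an admin API and the result would feed a notification, but the rule itself stays this small.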

Main Topic: Central BI & Workspace Strategies

26:20 — Mike explains why this topic matters now: he’s seeing new patterns evolve with Fabric. Traditional Power BI had dev/test/prod workspaces, but Fabric opens up more variety.

27:20 — Pattern examples: organizations using Databricks push data into Fabric for distribution. Where does the lakehouse sit? Does it travel with the semantic model through all environments, or does it need its own workspace?

28:22 — Another pattern: bringing in data from SQL Server, SharePoint flat files—you’ll have data flows, pipelines, notebooks, lakehouses. Where do all these data engineering artifacts go? Does lakehouse stay with semantic model or with data engineering artifacts?

29:23 — The “layers of the onion” concept: Outermost layer = paginated/Power BI reports (build it, you consume it). Next layer = semantic model (build reports on top of it). Innermost layer = table access (build your own semantic models, write SQL, create data warehouses).

31:24 — Tommy references Microsoft’s adoption roadmap and three content ownership models: business-led self-service, managed self-service, enterprise BI. He questions whether these three fit into Fabric as well as they did Power BI.

32:24 — Mike disagrees—the principle doesn’t change with Fabric. Data ownership = data engineering, table creation, semantic models. Report ownership = paginated/Power BI reports. Organizations should land somewhere in the middle, leveraging both teams.

34:26 — Managed self-service is a partnership: IT manages the flow of data into the business, and the business owns its own reports. IT may bring in tables and build initial reports, then expose those tables for the business to build their own semantic models.

37:30 — Two key concerns: users/roles and workspace proliferation. What is the purpose of the lakehouse for business users? That answer determines architecture.

38:30 — Mike’s broad pattern: Give business some level of control—either create reports on top of semantic models, or go further and let them build their own semantic models. The handoff point (semantic model vs. table) is a key decision.

39:30 — In Fabric, we now have many personas in one ecosystem: data engineering, BI engineers, release managers, admins, data scientists. You need clear rules around where your job ends and the next person’s begins.

Defining Roles and Workspace Design

41:32 — Define what tasks each role performs. Data engineer: build pipelines, lakehouses, architecture, handle dev-to-prod deployments. BI analyst: is that just reports, or reports AND semantic models? Different skill sets.

42:34 — For big data or complex measures, semantic model creation might fall into data engineering realm rather than business analyst. Decide roles, then clarify where they play in Fabric and what artifacts they build.

43:34 — Workspaces are security boundaries—a group of people working together on common artifacts. Mike’s pattern: three workspaces per environment: (1) pure data engineering (bronze/silver), (2) semantic models + gold lakehouses, (3) reports (Power BI + paginated).
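Mike's three-workspace pattern multiplies quickly across dev/test/prod, so a consistent naming convention helps. The convention below is an illustrative assumption, not a standard:

```python
# Illustrative sketch of the three-workspaces-per-environment pattern.
# The naming convention here is an assumption for the example.

LAYERS = ["Data Engineering", "Semantic Models", "Reports"]
ENVIRONMENTS = ["Dev", "Test", "Prod"]

def workspace_names(department):
    """Generate one workspace name per layer, per environment."""
    return [f"{department} - {layer} [{env}]"
            for env in ENVIRONMENTS for layer in LAYERS]

names = workspace_names("Finance")
print(len(names))  # 9 workspaces: 3 layers x 3 environments
```

Even one department lands at nine workspaces under this pattern, which is why the organizational-app simplification discussed next matters.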

44:36 — The organizational app has simplified report distribution. Before: one workspace = one app. Now: multiple organizational apps per workspace, pulling reports in any combination. Could potentially get away with two workspaces: data engineering + semantic models/reports.

Central BI Team’s Evolving Role

45:37 — Central BI teams will need API-based workspace management for enterprise scale. With different roles and artifacts, the ability to manage this at scale is crucial.
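Managing hundreds of workspaces at scale means paging through an admin API rather than clicking through the portal. As a minimal sketch, the Power BI REST admin endpoint `GET /v1.0/myorg/admin/groups` accepts `$top` and `$skip` parameters; only the URL construction is shown so it runs without credentials (in practice each request needs an Azure AD bearer token):

```python
# Sketch of paging through the Power BI admin workspaces API.
# Builds the $top/$skip URL sequence; actual calls need an auth token.

BASE = "https://api.powerbi.com/v1.0/myorg/admin/groups"

def page_urls(total, page_size=100):
    """Build the $top/$skip URLs needed to fetch `total` workspaces."""
    return [f"{BASE}?$top={page_size}&$skip={skip}"
            for skip in range(0, total, page_size)]

# e.g. 300 workspaces in pages of 100 -> 3 requests
for url in page_urls(300):
    print(url)
```

For the 300-workspace scenario Mike mentions later, that is three requests, each returning a page that can be joined with ownership and policy metadata.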

47:40 — Tommy’s hot take: Should central BI even own workspaces anymore, or should it become more like a permanent “tiger team”: the best of the best who fix urgent problems and move on, rather than living in one department?

48:41 — Tiger team concept explained: dedicated people who come in regardless of technology/process/people, fix highest priorities, move to the next. In traditional Power BI, users mostly just consumed. Now with lakehouses and databases, there’s opportunity for more collaboration.

50:43 — Mike’s response: Central IT should fully manage and understand what workspaces exist (300 workspaces, who owns them, policies). But also nominate champions from each department who understand Power BI and can self-govern their team.

51:43 — Pattern: Department leaders handle requests for new workspaces, document owners, ensure domain tagging, understand what content goes where, handle sharing of apps. Central IT sets policies; departments self-manage.

53:44 — Team leaders help with deprecation, routing data challenges, app development patterns. Central IT educates and promotes knowledge. Once business teams prove capability, give them more autonomy—“let them drive a little farther from home.”

Questions When Starting Workspace Strategy

55:45 — First questions when starting workspace strategy from scratch: What are the team’s skills? What work needs to be done—analysis, building reports, ad hoc stuff? Funnel people into appropriate roles.

56:45 — Personas ≠ one person. Same person can do four roles. The important thing is accountability. Understanding team skills also helps determine architecture—a marketing team with Google Analytics, email campaigns, DoubleClick might focus on real-time data, not just semantic models.

58:46 — Key takeaway: There’s no one-size-fits-all Fabric workspace strategy. It depends on the department’s current use cases and what they’re trying to do.

Wrap-Up

59:47 — The overarching concept of adoption roadmap, governance, and content ownership still exists. But with Fabric, think about the “layers of the onion”—what layer do you want to give people? What do they really need access to? That responsibility thinking will be impactful moving forward.

60:47 — Thanks for watching! Find the podcast on Apple, Spotify, and all platforms. Join live every Tuesday and Thursday at 7:30 AM Central. Submit questions at powerbi.tips/podcast.
