Themes are the bedrock of consistency. As report authors, we need to create a consistent experience across a single report, a series of reports, or an entire portfolio. With a little forethought you can easily build reports that share the same fonts, properties, and many other aspects with a Power BI theme. If you build reports and aren’t using a theme, it’s time you learned about them and added them to your arsenal.
Using Power BI Themes
Themes are available in a simple form in Power BI Desktop, where you can set color templates and some global properties for fonts and sizes. You can read more about that in Microsoft’s documentation here. However, themes go much, much deeper than that: you can set almost any visual property to a pre-configured value. This greatly simplifies building a report and creates a consistent experience across all your report pages. A single theme can be applied to, or removed from, a report in Power BI Desktop with ease.
Are Power BI Themes Hard to Build?
Creating a custom theme on your own would be hard… very hard. Theme files are built using the JSON format and have grown in complexity over the years, so it’s highly unlikely that you are going to build your own from scratch. Lucky for you, we love themes and created one of the first and most widely used theme generators, which minimizes the time it takes to build a comprehensive theme. You can find and use that tool for free here. Another great resource has been provided by Matt Rudy; be sure to check out his Git repo. One key reminder when using themes: don’t customize any visual properties in your report before applying a theme, because the theme will not override properties you have already set.
Discussion
In this episode we tackle all there is to know about implementation, when and how best to use themes, and how to maximize your experience using themes in your Power BI reports. Be sure to join us to learn more.
If you like the content from PowerBI.Tips, please follow us on all the social outlets to stay up to date on all the latest features and free tutorials. Subscribe to our YouTube Channel, and follow us on Twitter where we will post all the announcements for new tutorials and content. Alternatively, you can catch us on LinkedIn (Seth) and LinkedIn (Mike).
As always, you’ll find the coolest PowerBI.tips SWAG in our store. Check out all the fun PowerBI.tips clothing and products:
First of all, go read this amazing blog post that Alberto Ferrari posted a while ago over at SQLBI. It is the context for the conversation we had and is well worth the read: 7 Reasons DAX is Not Easy. If you are new to DAX, you will find a wealth of deep, insightful, and helpful content in anything on SQLBI. Marco and Alberto are amazing gurus of DAX and tabular models who have been perfecting their craft and helping the wider community hone their skills for many years.
I’m not going to rehash the article; you should go read it yourself on their site. The points we discuss in depth on the podcast focus on the best methods for approaching and learning DAX. We take our own experiences and the methods we learned and discuss at length the different points made in the article. Explore with us how to get better, and how to simplify your process when building or troubleshooting DAX.
This month we partnered with the relaunch of the Des Moines user group to host Matthew Roche, with a topic all about data culture. If you don’t know Matthew, you should get to know him. His experience in developing enterprise data access is second to none. Along this journey of working with the largest Microsoft clients, Matthew has learned a lot about their data cultures.
Working with so many companies, he can see what works and what does not. In this video webinar, Matthew discusses what a data culture is, and which aspects determine a successful environment.
Massive thank you to Matthew, our presenter, and a huge shout out to James Bartlett and Dipo Thompson for the planning. Be sure to follow our presenters on LinkedIn.
Matthew has also worked on many other impactful projects. One such project is the Power BI Adoption Roadmap, the best guide for individuals wanting to start their Power BI journey. On the Explicit Measures Podcast we discuss the roadmap quite often; we feel it adds a ton of value. Check it out for yourself.
If you like the content from PowerBI.Tips please follow us on all the social outlets. Stay up to date on all the latest features and free tutorials. Subscribe to our YouTube Channel. Or follow us on the social channels, Twitter and LinkedIn where we will post all the announcements for new tutorials and content.
Introducing our PowerBI.tips SWAG store. Check out all the fun PowerBI.tips clothing and products:
This topic was gleaned from the absolute wealth of knowledge in the Microsoft Power BI Adoption Roadmap. We highly recommend you check out the full roadmap. In this episode we focused on data culture, a topic that is a bit ambiguous when it comes to measuring the outcome or success of your organization’s data culture. Despite that, we took a good shot at it and covered quite a bit of ground: how to build a data culture, how to measure it, and the challenges organizations face when working to increase the value of data-driven decision making.
It’s a Culture Thing
“You know it when you have it, and you know it when you don’t.” – Mike Carlo
“If you are trying to measure culture, you are really just trying to measure behavior.” – Tommy Puglia
“It’s pattern changing of people… one of the largest challenges any organization will have…” – Seth Bauer
How do you Measure it?
Behaviors are hard to measure, but you can put things in place to evaluate them. Tommy draws on his sociology degree and makes a good point: there are ways in which we can directly and indirectly measure culture. Overall, the time invested in each of the different areas of the adoption roadmap indicates how successful the data culture is. One success metric is how widely available data is to the organization.
Is it easy for individuals to find, access, and use the right data at the right time? When they find it, do they know what it means? As organizations invest time and resources in building out areas of the adoption roadmap, they become better and stronger at making data-based decisions. The investments develop patterns of behavior that drive better outcomes. Listen for more ideas on what sort of KPIs you could measure, and which patterns and behaviors indicate that your data culture is strong.
Listen for these key points and more in our recent conversation.
To say this another way: not all columns have data types in DAX. This came up specifically when using dynamic ranking with an “Other” category. Let me explain.
The Tabular model is comprised of tables with typed columns. This is a truth learned in one’s Tabular infancy. Whether the table is imported, calculated, or Directly Queried does not matter; all tables have columns with explicitly defined types.
DAX is the language of the Tabular model. It has less type discipline than tables in the Tabular model. Measures can have a Variant data type. Implicit type conversions abound. Nor are DAX expressions limited to returning scalar values. DAX is a query language. An expression may return a table. When defining a table in DAX, projected columns have no explicit type. Each row’s value is calculated independently and its type depends on its value. This is the phenomenon we seek to explore in this post. If this seems unsurprising, you can probably stop here.
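As a minimal illustration of per-row typing (a sketch of my own, not from the original post; runnable as a query in DAX Studio):

EVALUATE
ADDCOLUMNS (
    { 1, 2, 3, 4, 5 }, // inline table constructor with a single numeric column, [Value]
    "@grouped",
        // rows where [Value] <= 2 carry a numeric value; the rest carry the string "Other"
        IF ( [Value] <= 2, [Value], "Other" )
)

The projected [@grouped] column has no single type: each row’s type follows from its value.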
A practical scenario
Here is the scenario where I discovered this. I was helping a colleague with a dynamic ranking measure and aggregation. The requirement read:
Show the top four production plants as ranks 1 through 4. Group all others ranked 5 or lower together as “Other”.
We have three tables:
a fact table 'Fact'
a dimension table 'Dim', related to 'Fact' with the relationship 'Fact'[DimKey] <-N:1- 'Dim'[DimKey]
a disconnected table, 'Rank Selector', with the values: “1”, “2”, “3”, “4”, and “Other”.
We started with a measure [Agg] as our base measure.
Agg = SUM ( 'Fact'[Value] )
[_Rank (variant)] is a helper ranking measure, returning a rank 1-4 or “Other”.
_Rank (variant) =
VAR rawRank = RANKX ( ALL ( 'Dim' ), [Agg],, DESC )
// note this value is either a number 1-4 OR the string, "Other"
VAR groupedRank = IF ( rawRank <= 4, rawRank, "Other" )
RETURN
groupedRank
Below is a screengrab of these two measures evaluated against our 'Dim'[Plant]. As you can see, [Agg] returns a value for each plant. The top four plants have ranks of 1, 2, 3, and 4. The other plants all have the same rank, “Other”. This is all as expected.
The issue reared its head when we attempted to use these ranks to define a new visual based on 'Rank Selector'[Rank]. We defined a measure to show [Agg] based on the plant’s rank (1-4 or “Other”), rather than the plant name. Note that 'Rank Selector'[Rank] is a column in a data model table, with Data type: Text. Our expectation is that we will see a table like the image below.
Wherein our heroes encounter a challenge
[Ranked Agg (variant)] is the display measure to show aggregations against 'Rank Selector'[Rank]. We wrote the variant version you see below first, not realizing the perils that lay ahead. (Correct implementations follow later.)
Ranked Agg (variant) =
VAR rankContext = VALUES ( 'Rank Selector'[Rank] )
VAR ranked =
ADDCOLUMNS (
ALL ( 'Dim' ),
"@rank", [_Rank (variant)],
"@val", [Agg]
)
VAR currentRank = FILTER ( ranked, [@rank] IN rankContext )
VAR result = SUMX ( currentRank, [@val] )
RETURN
result
When we visualize this measure against 'Rank Selector'[Rank], we get the result below.
A table visualization showing only the 'Rank Selector'[Rank] value “Other” and “Total”. Specifically, we do not see any value for ranks “1”, “2”, “3”, or “4”.
What is going on here? We see data only for the 'Rank Selector'[Rank] label “Other” and none of the numbers. Let us walk through the measure step by step to understand this result.
VAR rankContext = VALUES ( 'Rank Selector'[Rank] )
rankContext captures whatever the current filter context is for 'Rank Selector'[Rank]. This is one or a collection of the values: “1”, “2”, “3”, “4”, and “Other”. For a detail row of a table visualization, the context contains exactly one of those values. For a total row (assuming no other slicers or filters), the context contains all of those values.
VAR ranked =
ADDCOLUMNS (
ALL ( 'Dim' ),
"@rank", [_Rank (variant)],
"@val", [Agg]
)
ranked is a table with all rows and columns of the data model table 'Dim' and two projected columns, [@rank] and [@val]. For any row, i.e., for any specific plant, [@rank] is one of 1-4 or “Other”. Many rows may have [@rank] = "Other". We do not expect ties, so there are exactly four rows with numeric ranks, one each for the ranks 1-4. (This table looks like the first table visualization screenshot above.)
The critical part of both the ranking logic and the phenomenon we are exploring is in currentRank. This is a subset of the table, ranked, holding only the row or rows which have a [@rank] value that exists in the filter context captured in rankContext (again, one or a collection of the values “1”, “2”, “3”, “4”, and “Other”).
Note that we see data only for the 'Rank Selector'[Rank] label “Other”. As you recall, our [_Rank (variant)] seemed to work correctly above – it definitely returned ranks 1-4, not just “Other”. 'Rank Selector'[Rank] has the correct values. We checked that our logic in [Ranked Agg (variant)] was correct. We verified in DAX Studio that ranked held the correct table. We even got to the point of checking whether VALUES was behaving as we expected in rankContext. (I will note that if you find yourself verifying the behavior of an expression as basic as VALUES ( 'Rank Selector'[Rank] ), then you, too, may find yourself questioning what has brought you to this point.)
VAR currentRank = FILTER ( ranked, [@rank] IN rankContext )
We continued checking and troubleshooting and identified that currentRank had a binary state: either it was an empty table, or it was a table holding only rows with [@rank] values of “Other”. It would never hold a row with a rank value of 1, 2, 3, or 4. It seemed that the predicate in our FILTER would never find 1 to be equal to 1, or 2 equal to 2, and so on.
How could basic logic be so broken just in this measure? There was much gnashing of teeth. Several head-shaped indents were beaten into the nearest wall.
DAX, the language, has a more permissive type system than the Tabular model
You may have observed some apparent inconsistency in my quoting or not-quoting the rank values above. In fact, I have been scrupulous to always quote values when referring to 'Rank Selector'[Rank], the model column with type Text, and to not-quote the rank values returned from [_Rank (variant)]. The column 'Rank Selector'[Rank] has exclusively string values. The measure [_Rank (variant)] sometimes returns a numerically typed value in the range 1-4 and sometimes returns the string “Other”.
In DAX, 1 = 1 is an equality test that returns TRUE. Similarly, 1 IN {1, 2, 3, 4} returns TRUE, because 1 = 1 and 1 exists in the table there. In DAX, 1 = "1" is an equality test that throws a type error. Numbers cannot be tested for equality with strings. They are different types. Therefore, a number can never be equal to a string. Thus, 1 IN {"1", "2", "3", "4", "Other"} also throws a type error.
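Here is a minimal sketch of those same tests (my own, runnable in DAX Studio):

EVALUATE
ROW (
    "Num = Num", 1 = 1, // TRUE
    "Num IN Nums", 1 IN { 1, 2, 3, 4 } // TRUE
)
-- Each of the following raises a type error rather than returning FALSE:
-- EVALUATE ROW ( "Mixed =", 1 = "1" )
-- EVALUATE ROW ( "Mixed IN", 1 IN { "1", "2", "3", "4", "Other" } )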
The lightbulb moment
In [Ranked Agg (variant)], currentRank has a column [@rank] with type Variant. Sometimes, [@rank] has a value in the range 1-4, type Whole Number. Sometimes it has the value “Other”, type Text. When we evaluate the predicate [@rank] IN rankContext, there are exactly two possible results. Either we ask for an impossible membership test, “Is this numerically typed rank value in the set of Text values?”, or we ask whether the string “Other” is in that set. The first cannot succeed. The second only succeeds for the “Other” rank.
The Fix
The fix, then, is straightforward. We must always return a value of type Text in our ranking measure. Correct measures are below with comments highlighting the modifications.
_Rank (typed) =
VAR rawRank = RANKX ( ALL ( 'Dim' ), [Agg],, DESC )
VAR groupedRank =
// with an explicit typecast, our measure now *always* returns a string-typed value
IF (
rawRank <= 4,
FORMAT ( rawRank, "0" ), // force string type
"Other"
)
RETURN
groupedRank
Ranked Agg (typed) =
VAR rankContext = VALUES ( 'Rank Selector'[Rank] )
VAR ranked =
ADDCOLUMNS (
ALL ( 'Dim' ),
"@rank", [_Rank (typed)], // the only difference here is using our typed rank helper measure
"@val", [Agg]
)
VAR currentRank = FILTER ( ranked, [@rank] IN rankContext )
VAR result = SUMX ( currentRank, [@val] )
RETURN
result
The measures above give us the results we expected all along, as you can see in the table visualization below.
Below is a screengrab of all the measures in visualizations, so you can see the differences.
Summary
Tabular model tables have columns of explicit type. DAX expressions have implicit types. Columns defined in DAX have Variant type. Each row has an implicit type based on its value. If such a column is a Calculated Column added to a data model table, then an explicit type must be assigned and all values in that column will be cast to that type at load time. When such a column is used as an intermediate step in some calculation, each row may have a different type. If your operations depend on types, you must account for this explicitly. This is especially relevant when operating on data model and DAX tables together.
Insights from the community
When talking about this with some peers, Maxim Zelensky (blog, Twitter) observed that there is no type indication in Power BI Desktop. Nevertheless, you can identify this sort of thing in DAX Studio. The result of a query such as that below will right-align numerically typed values and left-align Text-typed values, so a Variant column shows mixed alignment from row to row. This query references the included Power BI file and its results are shown in a screenshot below. You can observe similar alignment in the table visual shown above. In general, when visualized, numeric data is presented right-aligned and textual data left-aligned. This practice predates Power BI; it is a general rule that Power BI and DAX Studio wisely adopted.
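The exact query is not preserved here, but a reconstruction along these lines (using the measures defined above) demonstrates the effect:

EVALUATE
ADDCOLUMNS (
    VALUES ( 'Dim'[Plant] ),
    "@rank (variant)", [_Rank (variant)], // numeric rows right-align; "Other" rows left-align
    "@rank (typed)", [_Rank (typed)] // every row left-aligns, because every value is Text
)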
Here is a link to download the PBIX that was used for examples above. This has four visuals to demonstrate the phenomenon discussed in this blog post. The first visual is a table with 'Dim'[Plant], [Agg], [_Rank (variant)], and [_Rank (typed)]. Of note in the first visual is left-vs-right alignment of the two [_Rank ...] measures. The second visual is a table with 'Rank Selector'[Rank] and [Ranked Agg (variant)], showing the incorrect results of the measure depending on a Variant-typed intermediate column, with a value only for rank “Other”. The third visual contrasts with the second, using [Ranked Agg (typed)] to show the correct behavior of the dynamic ranking and grouping logic, with a value for all ranks and a correct total. Lest one protest that the phenomenon above may be an artifact of using VARs, the last visual, on the bottom, shows an example that uses no VARs and references no other measures. The behavior of this last visual is identical to the incorrect [Ranked Agg (variant)] version.
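For reference, that no-VAR measure might look something like the sketch below. This is my reconstruction, not necessarily the exact expression in the file, but any expression of this shape exhibits the same incorrect behavior as [Ranked Agg (variant)]:

Ranked Agg (no vars, variant) =
SUMX (
    FILTER (
        ADDCOLUMNS (
            ALL ( 'Dim' ),
            // inline rank: a number 1-4 or the string "Other" -- a Variant column
            "@rank",
                IF (
                    RANKX ( ALL ( 'Dim' ), CALCULATE ( SUM ( 'Fact'[Value] ) ),, DESC ) <= 4,
                    RANKX ( ALL ( 'Dim' ), CALCULATE ( SUM ( 'Fact'[Value] ) ),, DESC ),
                    "Other"
                ),
            "@val", CALCULATE ( SUM ( 'Fact'[Value] ) )
        ),
        [@rank] IN VALUES ( 'Rank Selector'[Rank] )
    ),
    [@val]
)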
Trolling… we should clear up the definition of this right off the bat. We aren’t talking about the type of trolling where we spend a bunch of time finding things on the Power BI Ideas site and saying obnoxious things about them to elicit a response. We may say obnoxious things, but not in that context. What we actually spend time doing periodically is seeing what fantastic ideas people are coming up with and requesting as future feature enhancements. Today we’re going to dive into some of our favorites and encourage you to give them a look. If you agree with us, be sure to vote them up!
The Power BI Ideas Site
If you aren’t familiar with it, this site was created right alongside the Power BI Community site and has been part of the Power BI ecosystem for a long time. It certainly speaks to Microsoft wanting to hear from, and build, a great community around Power BI right from launch. The site is a direct way for users to submit ideas, garner support from the community, and raise the visibility of a request. The Power BI product teams then review the most-voted requests and determine whether a request is something they want to add to the tool.
Trolling & Contributing
Perusing the ideas and contributions of others is always a good way to broaden your scope. It gives you a taste of what other people need and want in a tool that may serve you just fine. It’s also a shortcut to understanding what isn’t in the tool that you may have thought was. More importantly, the site is a great way to contribute your own ideas to making Power BI even better than it already is. We encourage everyone to contribute.
One recommendation of ours: Microsoft should keep the Idea statuses up to date.
Join us for the full conversation as we discussed all these and more in the podcast!
We’ve been having an amazing amount of fun having conversations on our podcast, Explicit Measures. Join us live on Tuesdays and Thursdays on YouTube, or listen on Spotify or Apple Podcasts. On the podcast this past week, we talked about which method is better when trying to learn all the different aspects of Power BI: formal training vs. self-training. Given that the tool has been out 6+ years, there are plenty of resources now available to all end users, and we discussed in detail what each approach offers. We’re huge proponents of making sure that all individuals keep learning; the time you invest in yourself increases your value personally and professionally. Above all, whether it is formal training or self-training, we encourage everyone to keep learning.
Formal vs. Self-Training
Formal
Typically this type of training revolves around someone teaching you. One of the benefits of formal training is that it allows beginners and advanced users alike to accelerate their learning. The main reason is that the expert doing the teaching can provide insights and shortcuts that otherwise have to be hard-earned through self-training. Formal training will almost always be something you need to pay for.
Self-Training
Self-training revolves around material you find on your own, which means you are learning things as you go. A lot of what we learned ourselves falls into this bucket. The challenge with this method is that you don’t know what you don’t know. As a result, it may take longer to experience the bigger issues and solve problems without anyone guiding you. Materials that support self-training include the Power BI Community, blogs, and books. When you are researching and finding your own learning path, the material is usually free.
Our Recommendations
Have your organization support your goals. Is there potential for you to tie this to your yearly goals? Is there some way you can take what you learned and share it with colleagues?
Is Self-Training the same thing as solving a problem for your current job need?
Practical vs. Theoretical Learning.
Is there a difference between knowing Power BI in your job vs. knowing Power BI?
Will Formal or Self-Training set you apart from others when looking for a job?
Check out the full episode below where we discuss all these things in detail.
This past weekend, I was a man on a mission. There were two pressing reasons for a new release of Business Ops:
The authors of many popular External Tools released updates recently, so we needed to ship a new version of Business Ops with those updates included
Last week, Michael Kovalsky (ElegantBI.com) released a brand new External Tool called Report Analyzer (which we can already tell will be an instant classic), so we wanted to get it out there and into the hands of report developers ASAP
So, I toiled away most of the day on Sunday, cramming as much value as I could into the next version. And along the way, I learned some important new software development skills, including how to:
Much thanks must go to both Ferry Bouwman and Rui Romano for the inspiration, and for the work they have already done on use cases for the Scanner API. Ferry created the initial Microsoft Flow to automate the request and pull of data, and design ideas were taken from Rui Romano’s existing report. Please give them a shoutout, because this would not be possible without them!
Recently the Power BI team announced a major advancement in the ability for Power BI admins to extract tenant-level metadata: the ability to collect information such as tables, columns, measures, and DAX expressions for datasets in the Power BI Service. This feature is a huge step, and any Power BI champion should focus strongly on its use cases and on integrating this solution into their catalog.
Let’s start with the what and the why of using the Scanner API as a Power BI Admin.
What is the Power BI Scanner API?
The Power BI Scanner API allows organizations to request and collect the entire metadata of a tenant’s Power BI schema and catalog. Using the Power BI REST API, users can push a scan and more importantly extract nearly all of a dataset’s information and schema. The Scanner API returns the entire tenant metadata such as:
Datasets & Workspaces
Data Sources
Dataflows
Dataset schema, including:
Tables
Columns
Measures, including the actual DAX expressions
Table M Expressions
Why Use the Power BI Scanner API?
The ability of a Power BI admin or champion to consume and understand the datasets and information of their tenant is vital from both a governance and an adoption perspective. Firstly, the Scanner API enables admins to discover and easily understand the workspaces, the measures used, and which datasets are active in their tenant. Rather than relying on various methods of manually entering dataset information into a system, pulling this information in an automated fashion positions admins to better enforce and manage the organization of datasets.
Governance
Along with dataset information, the newly updated Scanner API pulls in dataset metadata, which creates more potential to govern and unify the business logic used in datasets. A primary use case is ensuring that datasets and their tables use the proper logic (columns, data sources, merges) by easily viewing the M code behind any dataset table. In the same fashion, champions can now ensure that datasets are 1) using explicit measures in their reports, and 2) that measures which are universal to the company use the correct formulas (think of a Net New Members measure used in multiple reports, where the correct relationship between date and Member ID must be used).
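As a hypothetical illustration (the table, column, and relationship names below are invented for the example), such a company-wide measure might have one canonical definition that admins can diff against the DAX expressions the Scanner API extracts from each dataset:

-- canonical definition; any dataset returning a different expression gets flagged for review
Net New Members =
CALCULATE (
    DISTINCTCOUNT ( 'Members'[Member ID] ),
    USERELATIONSHIP ( 'Members'[Join Date], 'Date'[Date] )
)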
Adoption
There are many workarounds in the community to best provide discoverability of data for users. Unfortunately, many of these require manual input and do not synchronize with one’s active data. Using the Scanner API, admins can create automated solutions to easily provide datasets that are active for users to discover, and further can be integrated with other platforms to include custom fields.
One idea is creating a Data Lexicon for an organization, which includes a company’s report catalog and terminology. A Data Lexicon should include helpful content for consumers, such as a report’s purpose, intended audience, and refresh schedule. Using the Scanner API, anytime a dataset is added to the tenant, report authors can easily integrate these custom fields with active datasets.
Understanding the Goal
This article is not going to cover the intricate details of the API requests and parameters. Rather, here is the TL;DR version of the required calls and steps:
Call the Scanner API to trigger a Scan
This call must include a body of what Workspaces to be scanned
If there are more than 100 workspaces, then loop through the request (the limit per call is 100 workspaces)
Wait until a Scan is completed (depending on how many scans)
Get the Scan Result and collect the array as JSON
The goal here is then to try to accomplish the following:
Create an easy-to-use process to automate the API workflow
Store the scan results into a JSON file into SharePoint
Transform the metadata into a structured model (tables, relationships, etc.)
Use the structured tables in multiple products (Data Flows, Power BI, Power Apps)
Building the Solution
The majority of the credit needs to go to Ferry Bouwman, who initially created a viable solution that can easily be integrated into a report. He created a GitHub repo that includes a Power Automate flow covering the entire process of automating the API call.
The following builds off Ferry’s solution, including the new metadata schema that is now available. There is more I want to accomplish in this solution, but you can get the Scanner API running, and a template connected to the data, using the steps below.
Pre-Requisites Before Use
Before starting, you must have already completed the following in order to use the Scanner API at all. Please see the documentation for each to set up:
The first step is to import the Flow pbitips_ScannerAPI into your tenant. Once you do this, there are a few variables and actions to update before running.
tenant: The tenant of your Active Directory
clientId: The Client ID of your registered App
clientSecret: The Client Secret value of your registered App
SharePoint Library: What SharePoint library you want to save the files
NOTE: Remember this location as it will be used in Power Query
Folder Location: The folder location to save all returned scans
NOTE: Remember this location as it will be used in Power Query
Folder Location Trigger: A different folder with a different name, to trigger the refresh run.
Set up the Automation Flows
Next, we want to set up the automation of the Flow so that it triggers on a daily basis, or even manually.
Import the Flow PBI_Scanner_Refresh into Power Automate. Once imported, you will need to grab parts of the initial Flow’s HTTP trigger and add them to the variables in the PBI_Scanner_Refresh Flow:
Initialize string URI-first-part: The first part of the HTTP Request Received, everything from the start up to modifiedsince/.
Initialize string URI-last-part: The parameters. Simply copy from the ? part of the URL to the end.
Initialize string modifiedSince: enter all
Additionally, the Power BI Template includes a visual to trigger the Flow from within the report. You can simply copy and paste the variables and the HTTP call into the other flow containing all the Power BI API logic actions, using When a Power BI button was clicked as the trigger.
Run the Flow: Ensure It is successful & saves the files
Run the flow manually. Note that the first time you ever call the Scanner API, it returns a subset of the metadata. The more you run it (daily), the more complete the returned metadata will be.
Once you can confirm that three files have been saved to the folder specified above (a MetaData_, a WorkspaceArray_, and a RequestStatus_ JSON file), you know the Flow works.
Once you have verified that the flow runs and saves files to the correct folder, you are ready to start using the Power BI Report.
Connect to the Data – Power BI Template
Open the Scanner Tenant Metadata Power BI Template file; it will prompt you to input two parameters.
SharePoint Folder: The SharePoint document library URL specified in the variable from the Flow
FolderFilter: The deepest subfolder in which the files live (for example, if the files live in PBI_Admin/MetaData/MetaFiles/, then enter “MetaFiles“)
Once you enter the parameters, click Load and wait for the magic to happen!
Using the Scanner API Report
The Report contains tons of information across the entire organization’s content in Power BI, from datasets all the way down to the DAX expressions per table and report. The template is meant to be a starting point for authors to build out additional functionality to meet their needs. Much thanks to Rui Romano’s Scanner Template as well:
Summary Page
The Template starts with the Summary Page, providing a high-level overview of the Workspaces and Datasets active in the tenant. Included in the overview are the created date of a particular dataset, the number of tables and data sources, and the users who have access to it.
Selecting a dataset will highlight a drill through button to navigate to a detailed dataset page.
Dataset Drill through Details
The drill through page for a dataset provides vital information such as the tables, fields, and even the query and DAX expressions within a dataset. Along with this, an information panel of the ID, storage mode, and even users is available here.
Selecting a table will display the M query in its entirety. Expanding Measures & Calculated Columns displays the DAX expressions beneath it. Along with this, the list of data sources by type is available.
Datasets Page
The Datasets page is an overview showing the number of entities (columns, calculated columns, and measures) within a dataset, including which data sources are being used. Tracking datasets by created time is a helpful feature, allowing admins to monitor the creation of new datasets over time.
Tables Summary Page
The Tables page allows admins to monitor which tables are being used throughout the tenant’s datasets. This is a powerful feature, particularly for spotting tables that are reused across datasets.
Data Sources Page
Looking at the metadata another way, admins can monitor the data sources used throughout the tenant, including information such as the data source type (SharePoint, SQL, etc.) and even the source itself. Selecting a data source will display which datasets it is included in.
Users Page
The Users page uses the new ability to append getArtifactUsers=true to the Scanner API call to pull which users have access to various datasets. Again, the ability to select and display is a powerful feature for admins.
Details Page
For those who need the metadata displayed as a list, the Details page provides all of the underlying information about each artifact in the tenant, such as the IDs used in Power BI, types, and who last configured each entity.
Conclusion
The ability for a Power BI champion to have full visibility into the organization’s Power BI content has been, and will remain, a vital piece of Power BI adoption and governance. The amount of information available to act on allows admins to readily understand the activity happening at all times.
Again, many thanks to Ferry Bouwman and Rui Romano for building the foundation.
If you have installed Business Ops 2.0 or 2.0.1, you should use the “Edit External Tools” menu to remove the External Tools you have installed, then close and uninstall Business Ops.
In previous versions of Business Ops, its installation path was C:\Users\<Username>\AppData\Local\Programs\pbi-tips-business-ops\, which could be accessed by any process running on the machine. This struck us as somewhat insecure, so we decided to beef up the security. When we released Business Ops 2.0, we changed the installation path to C:\Program Files\PowerBI.tips Business Ops\, which requires elevated permissions to modify any files inside. This change increased the security of Business Ops, but those of you who are familiar with how file paths work can probably guess how it also had some other, unintended consequences. For those who aren’t, here’s what happened:
The installation path in previous versions did not have any spaces in it, and everything was working fine. But when we changed the installation path to one that includes spaces, several of the included External Tools in Business Ops stopped working. This wasn’t immediately obvious because it only affected the handful of External Tools which launch a PowerShell script by calling its absolute path. Once we identified the cause of the issue, the fix was pretty straightforward, and we got everything working again.
Thanks for reading!
James