# Power BI P&L Statements: Challenges And Solutions


Profit and Loss Statements are often challenging to create in Power BI, especially if you’re working with large or complex datasets. In this tutorial, you’ll learn about the common challenges encountered when dealing with Power BI Profit and Loss (P&L) statements. Solutions for each challenge are also discussed in detail.

Unlike other business reports, P&L statements are universal. Thus, problems encountered when creating P&L statements are most likely similar across organizations.

This means that you can learn from others and leverage templates and ideas that have previously been established.

To discuss the common challenges of P&L statements in Power BI and their respective solutions, this financial report will be used as an example. The report runs on 1.1 billion rows in the background. This dynamic report also lets you select the Customer Name, Product Name, and Time of Sale.

These are the 10 common challenges encountered when dealing with P&L statements in Power BI:

Subtotals are intermediate totals that show the sum or total of a specific group of items within a larger dataset. Subtotals are often used to break down a large dataset into smaller, more manageable chunks, and to show the relationship between different groups of data.

It’s also the first challenge you’ll come across when dealing with P&L statements.
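The subtotal idea above can be sketched in a few lines of Python (the group names and amounts here are illustrative, not taken from the report):

```python
# Group ledger rows and produce one subtotal per group.
# Illustrative (group, amount) pairs standing in for a P&L fact table.
rows = [
    ("Gross Revenue", 120.0),
    ("Gross Revenue", 80.0),
    ("Operating Expenses", -50.0),
    ("Operating Expenses", -30.0),
]

def subtotals(rows):
    """Return one subtotal per group, preserving first-seen group order."""
    totals = {}
    for group, amount in rows:
        totals[group] = totals.get(group, 0.0) + amount
    return totals

print(subtotals(rows))  # {'Gross Revenue': 200.0, 'Operating Expenses': -80.0}
```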

Percentages don’t often appear in public financial statements. But when doing internal reports, they will most likely be needed.

Earnings Per Share or EPS is a financial ratio that measures the amount of net income earned per share of common stock. It is calculated by dividing a company’s net income by the number of shares outstanding.

If you look at the financial statements of Fortune 500 companies, you’ll most likely see a row showing the EPS.
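The EPS calculation described above is simple enough to state directly (the figures below are made up):

```python
def earnings_per_share(net_income: float, shares_outstanding: float) -> float:
    """EPS = net income divided by the number of shares outstanding."""
    return net_income / shares_outstanding

print(earnings_per_share(10_000_000, 4_000_000))  # 2.5
```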

Most often, the internal management team in your organization will request to call out specific numbers in the P&L statements.

In this case, there are additional entries for the Full Time Equivalent percentage (FTE %) of the overall labor costs.

You can apply different formatting options in P&L statements in Power BI.

Since Power BI allows you to transform your reports into dynamic financial statements, you can add in slicers that can switch the view from millions to thousands.

These are made possible with the use of calculation groups.
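The effect a calculation group gives you here, switching the displayed unit without duplicating measures, can be sketched like this (the function name and formatting choices are assumptions, not from the report):

```python
def format_amount(value: float, unit: str = "millions") -> str:
    """Display a raw amount in millions or thousands, mimicking a unit-switch slicer."""
    scale = {"millions": 1_000_000, "thousands": 1_000}[unit]
    suffix = {"millions": "M", "thousands": "K"}[unit]
    return f"{value / scale:,.1f}{suffix}"

print(format_amount(1_234_567, "millions"))   # 1.2M
print(format_amount(1_234_567, "thousands"))  # 1,234.6K
```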

Signage is also an integral part of financial statements.

About 8 out of 10 organizations will want to show positive numbers even if some are being subtracted from a figure.

Spaces are another minor formatting challenge that you’ll encounter when creating P&L statements in Power BI.

When displaying financial statements, it’s important to have the right spaces to separate elements and make it easy to read. If the spaces are missing or not where they should be, it can lead to misinterpretation of financial data.

A common hierarchy used in financial statements is the ragged hierarchy.

A hierarchy shows the breakdown of an item into different levels of detail. For example, the Operating Expenses are broken down into their component parts, such as Labor, Travel and Expenses, Marketing, and Administration.
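A ragged hierarchy is a tree whose branches stop at different depths. One common flattening trick is to pad short branches by repeating the last level; a minimal sketch, with illustrative leaf names and the Marketing and Administration branches omitted for brevity:

```python
# Operating Expenses broken into components; one branch has no further breakdown.
hierarchy = {
    "Operating Expenses": {
        "Labor": ["Salaries", "Benefits"],
        "Travel and Expenses": [],  # a ragged branch: shorter than its siblings
    },
}

def flatten(tree):
    """Yield fixed-depth (level1, level2, level3) paths, padding ragged branches."""
    rows = []
    for level1, children in tree.items():
        for level2, leaves in children.items():
            for leaf in leaves or [level2]:  # repeat level2 when there are no leaves
                rows.append((level1, level2, leaf))
    return rows

print(flatten(hierarchy))
```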

Sorting the items within your financial statements is highly important. Note that items aren’t sorted alphabetically by default. They can also be sorted according to ascending or descending order with respect to the amount.
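Overriding the default ordering is usually done with an explicit sort column (Power BI’s “Sort by column” feature). The idea in miniature, with made-up amounts:

```python
# Statement lines carry a sort-order column so they appear in layout order,
# not alphabetically.
lines = [
    {"item": "Operating Expenses", "order": 3, "amount": -80.0},
    {"item": "Gross Revenue", "order": 1, "amount": 200.0},
    {"item": "Cost of Sales", "order": 2, "amount": -60.0},
]

by_layout = sorted(lines, key=lambda r: r["order"])                 # statement order
by_amount = sorted(lines, key=lambda r: r["amount"], reverse=True)  # descending amount

print([r["item"] for r in by_layout])  # ['Gross Revenue', 'Cost of Sales', 'Operating Expenses']
```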

The final part in a P&L statement is the analytics. Analytics, such as data visualizations, can help businesses to better understand their financial performance and identify areas for improvement.

In this example, the analytics section of the report is drilling down on the Total Gross Revenue.

When modeling P&Ls, it’s best practice to use the Star Schema approach. A Star Schema is where your Fact Table is surrounded by multiple Dimension Tables.

Moreover, when creating a data model in Power BI, you need to have a strong understanding of the VertiPaq Engine and Kimball principles of modeling.

In this P&L statement data model, the Ledger Table contains information on the different values for each individual ledger key. This includes the Customer, Item, and Time Keys, among others.

The Layout Table contains the stripped layout of the financial report in Power BI. It shows exactly how the financial statement will look in the report.

Meanwhile, the Link Table provides the bridge between the Layout Table and the actual Ledgers.

At the top of the data model is the Calculation Logic Table. It’s used to produce the various subtotals needed in the report.

To create the sample P&L, you need to start with the basics. The primary goal is to retrieve the sum of the general ledgers which is done by using the Layout Table.

In this case, the Gross Revenue for B2B and B2C are shown, along with their respective financial statement keys.

Keys are very important whenever you’re doing modeling within Power BI. These are known as primary keys or surrogate keys, used to join or link tables together.

Next, the FSKey connects to the Link Table and retrieves a set of ledgers. These ledgers are then passed across the Facts Table.

Subtotals are slightly more complicated than Sums.

In this example, the subtotal for the Total Gross Revenue has an FSKey of 5. The Calculation Logic Table then lists all the ledgers under this key. This gives a huge amount of flexibility because you can define exactly which ledgers you want to sum up or aggregate.

Next, Power BI creates a virtual relationship from these ledgers down to the Link Table. And from there, it retrieves all the ledger data and values from the Facts Table.
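The mechanism just described can be sketched outside DAX: the Calculation Logic table maps a subtotal’s FSKey to the ledger keys it covers, and those keys filter the fact rows. The FSKey 5 and ledger keys 1, 2, and 3 follow the example above; the amounts are made up:

```python
# FSKey 5 = Total Gross Revenue, defined as the sum of ledgers 1, 2, and 3.
calculation_logic = {5: [1, 2, 3]}
fact_rows = [(1, 120.0), (2, 80.0), (3, 40.0), (4, -50.0)]  # (ledger_key, amount)

def subtotal(fs_key: int) -> float:
    """Sum fact amounts whose ledger key is listed under the given FSKey."""
    ledgers = set(calculation_logic[fs_key])
    return sum(amount for key, amount in fact_rows if key in ledgers)

print(subtotal(5))  # 240.0
```

Changing what the subtotal covers is then just editing the key list, which mirrors how editing the input file’s FSKeys changes the calculation without touching the DAX.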

In the Tabular Editor, you can see the DAX code for the subtotal calculation.

The first step is removing the filters using the REMOVEFILTERS function. Then, the relationship between the Layout and Link tables is removed and instead replaced with the relationship between the Calculation Logic and Link Tables using the TREATAS function.

The result is then generated using the CALCULATE function in the context of the calculation financial statements table.

The benefit of doing this is that you only need to write the DAX code once. You can then change the layouts or change the inputs into the file without having to rewrite the code.

As an example, let’s use the input file of the P&L statement.

In the file, you can see that the FSKeys for the Total Gross Revenue are 1, 2, and 3. If you want to make changes to the subtotal calculation, you only need to add or delete an FSKey from the file.

This approach gives you a huge amount of flexibility because it can quickly move, change, and redo items without having to rewrite the DAX code.

By putting an effort into the foundations of the solution, you don’t have to rework each time you want to make a change.

And flexibility is important when it comes to creating financial statements. You’re highly likely to encounter changes as you go. Companies want to move around their general ledgers from year to year. They also want to create different layouts.

They’ll also have different accounting standards. For instance, an IAS accounting standard might move to an IFRS accounting standard.

Things are constantly changing and constantly evolving. Therefore, having a solution which maximizes flexibility is the goal when it comes to financial statements.

Creating P&L statements in Power BI can be challenging. Data quality, formatting, visualization, and security are all important considerations when creating a P&L statement, and any issues in these areas can impact the accuracy and usefulness of the financial report.

However, by following the best practices and using the solutions presented in this tutorial, you can overcome these challenges and create P&L statements that are accurate, clear, and easy to understand. These will be invaluable for any business looking to make informed, data-driven decisions.

All the best,

Chris Barber


# Power BI Reports Design – Unlimited Possibilities

In this blog, I’m going to highlight an incredible example of Power BI reports design. It’s such a unique design, quite different from what you’d normally expect Power BI to produce. You may watch the full video of this tutorial at the bottom of this blog.

This Power BI reports design will show you the unlimited possibilities in developing reports that engage your team and your consumers. So let’s jump into it.

This report was done for Power BI Challenge 6 and was completed by one of our Enterprise DNA members, Alex.

It was created from some data on insurance complaints. Alex summarized the KPIs (key information) on the right side and built a simple chart showing the distribution of those complaints across the network of channels that an insurance company sells through.

The little things that have been embedded here make the report really compelling. One is the color palette, which is great. It’s simple, but it’s not overdone. The colors have been used in a really creative way.

I’m also big on grids; I think they’re an important part of the Power BI reports design. You need to make sure your report sits within the grids, so everything is aligned and organized, and the way that the grid framework has been used here is impressive.

There’s also a lot of logic embedded behind just this one page. You can see here that we have some interesting tooltips based on the selection we make or the visuals we hover over.

We have a simple line chart embedded into a square, so it looks like one visualization when it’s actually just a transparent line chart hidden in the background.

This is a dynamic report so all the details (on the right side) are going to change based on the selections that we make. For instance, when we select BRU, you can see that the numbers change.

In this visualization, there’s a question mark in the upper-right corner.

And when we hover over that, it brings up another tooltip, which highlights some other really interesting insights.

Another great thing about this report is the graph below that features outlier information.

As you can see, Alex labeled the page as well. At the bottom left corner, we have Global Overview. And then, we also have these two circles on the other side to navigate to a different page.

And here, we are also able to drill into specific information.

There are no page selections in this Power BI reports design. It’s all done with navigation. Here’s another tooltip embedded on the main page for a richer navigation experience.

This Power BI report design is one of the most epic ones I’ve seen so far. I hope that you get inspired by this and I highly recommend that you try and create something like this as well.

A lot of effort has gone into the visual aspect of this report, and that’s what keeps the consumers engaged. If you were to throw this up on a big screen in a meeting within your organization, everyone would be looking at this all the time, because it is so engaging. It’s so informative and it’s easy on the eye. It’s easy for you to find the insights that you want.

If you’re trying to report on something and you want to make it compelling, you want to really dive into the information and the parts that are most important. The brilliant navigation in this report provides all of this.

All the best,


# Table Value – A Common Structured Value In Power BI

This tutorial discusses table values, one of the most common structured values you’ll encounter in Power BI data reports. You’ll learn how to build tables using different expressions in order to obtain specific information and make your report insightful.

Think of tables as lists of records. The #table function (hash or pound table) can be used to construct a table value from a list of column names and a list of record field values.

Input the following code and press Enter. You can then see the table icon beside the query name inside the Query pane.

If you want to create a number of columns without specific column names, you can enter a number as the first parameter followed by a list of record field values.

The formula created 5 columns with 2 records. The two records are lists with values from 1 to 5, and 6 to 10 separated using a comma. If you input 4 values instead of 5 in the second record, you’ll get an error.

But if you change the number of columns to 4 and press Enter, the first record now returns an error.

Most of the time when constructing a table, you want to include the column names. In this syntax, you can see that the column names are a and b. You can also see two records with values 1 and 2, and 3 and 4.

You’ll also notice that the column icons show ABC123. This is because lists of record field values can contain both primitive and structured data types.

It’s possible to declare data types when constructing a table. The first parameter will no longer be a list of column names, but a declaration of a table type that includes both column name and the column type.

In the formula, the first column is called a and has a number type. The second column is called b with a text data type. There are also 3 records, each containing a number and a text value. You can also see each column icon with its associated type.

If you change the field value of the second record from {2,“two”} to {2,2}, you won’t get an error message, and the field value “two” will be changed to 2 in the column. Even though 2 is a number, no type validation occurs. However, if you pass this field into a function that expects a text value, or load this query to the data model, it will be evaluated and a mismatch error will occur.

There are other ways to create tables. You can use M functions that return tables from lists or records, or you can manually add a table using the Enter Data option on the Home tab. But most of the tables that you’ll be dealing with inside Power Query are the results of connecting to an external data source.

When it comes to accessing elements from a table, you can access both rows and columns by referring to its zero-based index position. You can use the positional index operator, which is a set of curly brackets ({ }).

If you want to access the first item in the sample table above, input curly brackets at the end of the formula and write 0 inside the brackets. The formula will then return the first value.

Accessing the first item in a table returns the entire row as a record value. You can also perform the optional item selection by adding a question mark. This changes the not-found behavior from returning an error into returning a null.

So if you want to access the fourth item, change the index operator to 3 and press enter. Then, add the question mark at the end of the formula. The syntax will then return a null.

Tables also support field selection, which is the field name in square brackets.

The syntax returns the first column by adding square brackets at the end. Column a is then written inside the brackets to pull out the first column.

A column can contain more than one value so this returns a list in an ordered sequence of values.
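As a rough Python analog of the M behavior described above (a table as a list of records), the three access patterns look like this; the values mirror the earlier two-column example:

```python
# The sample table: a list of records with columns a and b.
table = [
    {"a": 1, "b": 2},
    {"a": 3, "b": 4},
]

row0 = table[0]                              # positional access, like table{0} in M
row3 = table[3] if len(table) > 3 else None  # optional selection: None instead of an error
column_a = [row["a"] for row in table]       # field selection, like table[a]

print(row0, row3, column_a)  # {'a': 1, 'b': 2} None [1, 3]
```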

Combination and equation operators can be used with tables. Tables can be appended using the combination operator, ampersand (&).

You can compare tables with the equal or not equal sign. It can be helpful to remember that a table is a list of records. Tables are considered equal if they meet all four criteria:

They have the same number of columns.

They have the same number of rows.

All column names or record field names are present and equal in both tables.

All record field values match.

Here is an example:

The formula contains two tables with two columns each. The first table has columns a and b, and values 1 and 2. The second table has columns b and a, and values 2 and 1. This formula yielded TRUE because the order of the field or column name is irrelevant when comparing tables.
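The four criteria can be written down directly. In Python terms, where a record is a dict, dict equality already ignores key order, so the comparison mirrors M’s behavior:

```python
def tables_equal(t1, t2):
    """Tables are equal when row counts match and every pair of records matches.
    Dict equality checks field names and values while ignoring field order."""
    return len(t1) == len(t2) and all(r1 == r2 for r1, r2 in zip(t1, t2))

t1 = [{"a": 1, "b": 2}]
t2 = [{"b": 2, "a": 1}]  # same fields and values, different order

print(tables_equal(t1, t2))  # True
```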

Most Power BI reports have tables that consist of various data inside rows and columns. These tables are the main data-generating entities inside Power BI. They show information in a table form, which makes your reports look compelling.


# Clustering In Power BI And Python: How It Works

Below are two visuals with clusters created in Power BI. The one on the left is a table and the other on the right is a scatter plot.

Our scatter plot uses two-dimensional clustering, built from two fields: age and spending score. These come from the shopping data set, which consists of customer ID, annual income, age, and spending score. Meanwhile, our table uses multi-dimensional clustering, which draws on all the fields.

To demonstrate how it works, I will need to eliminate the clusters so we can start with each visual from scratch. Once you create these clusters in Power BI, they become available as little parameters or dimensions in your data set.

We’ll delete the multi-dimensional clusters using this process and then get our table and scatter plot back, starting with the latter.

If we choose Age and Spending Score from our data set, Power BI will automatically summarize them into two dimensions inside our scatter plot.

If we add our Customer ID to our Values by dragging it from the Fields section to the Values section, we will get that scatter plot back, just like in the image below.

In the Clusters window, we can enter a Name for our clusters, select a Field, write a Description, and choose the Number of Clusters.

We will name our clusters Shopping Score Age, select CustomerID for the field, and input Clusters for CustomerID as a description. We’ll then set the number of clusters to Auto.

The current dimensions in our table, which you can find in the column headers, are Customer ID, Annual Income, Age, and Spending Score. A dimension we didn’t bring in is Gender.

Let’s bring this dimension into our table and scatter plot by dragging it from the Fields section to the Values section, as we did when we added our Customer ID.

As you can see above, we now have a Gender dimension that indicates whether the person is Male or Female. However, if we go to Automatically find clusters to create a cluster for this dimension, it will result in a “Number of fields exceeded” response.

There are two things we can do to get around this roadblock. We can turn the variables, Male and Female, into 0 and 1, giving them numerical values, or we can remove them. However, removing them means that this dimension will no longer be part of our clustering consideration.

Let’s try the second method and remove Gender by unselecting or unchecking it in the Fields section. We then go to our ellipses and select Automatically find clusters.

Now let’s look at how to cluster using Python, where we’ll run over the data and create a new data set. We’ll be using an unsupervised machine-learning model that will give you similar results for your multidimensional clustering. I will also show you how to plug in different algorithms and tweak them along the way.

We first need to run a command prompt, like the Anaconda Prompt we’ll be using in this example, and install a package called PyCaret. After opening the Anaconda Prompt, we enter pip install pycaret to install the package.

We’ll put that machine learning algorithm into our Python Script using a simple code. We start by entering from pycaret.clustering import * to import our package. We then type in dataset = get_clusters() in the next line to replace the data set and bring in the function called get_clusters.

We want our function to get our data set, so we’ll assign it by entering data = dataset inside the parentheses. Next, we add our model, K-Means, and assign the number of clusters for our model.
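Assembled, the script built up in the steps above looks roughly like this. It is a sketch, not a tested script: it only runs inside Power BI’s Python script editor, where the dataset variable is supplied automatically, and the cluster count shown here is illustrative:

```python
# Power BI Python script step: `dataset` is provided by Power BI.
from pycaret.clustering import *

# get_clusters appends a cluster label column using the chosen model.
dataset = get_clusters(data=dataset, model='kmeans', num_clusters=4)
```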

Before we run our Python script, first let me show you the different models we use in PyCaret. As you can see below, we’re using K-Means, which follows the same logic as having that Center Point. Aside from that, we also have kmodes, a similar type of clustering.

The other clustering models above may work better depending on your needs; many are more flexible and not blob-based. If you have a different data set and feel like your Power BI model isn’t working, you can use any of these models. You can go to the Python script section highlighted below and specify the one you want.

Now we can run our Python script using the K-means unsupervised machine learning algorithm. As you get new data, K-means will learn and alter those Center Points and give you better clustering.
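What K-Means does under the hood, assigning each point to its nearest center and then moving each center to the mean of its points, can be sketched in a few lines (one-dimensional for brevity; the spending-score values are made up):

```python
import random

def kmeans(points, k, iters=10, seed=0):
    """Minimal 1-D K-Means: assign points to nearest centers, then recompute centers."""
    centers = random.Random(seed).sample(points, k)
    for _ in range(iters):
        clusters = {c: [] for c in centers}
        for p in points:
            nearest = min(centers, key=lambda c: abs(p - c))
            clusters[nearest].append(p)
        centers = [sum(m) / len(m) for m in clusters.values() if m]
    return sorted(centers)

# Two obvious groups: scores around 10 and around 90.
print(kmeans([8, 10, 12, 88, 90, 92], k=2))  # [10.0, 90.0]
```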

Python allows you to assign better names for your clusters to make them more digestible to your users, a feature absent when clustering in Power BI.

# How To Use ChatGPT For Power BI: It’s Easy!

In today’s data-centric world, effective data analysis and visualization are crucial for individuals and business users to succeed. Microsoft Power BI offers superior tools, interactive visualizations, and unmatched business intelligence capabilities, but can things get better?

Welcome to ChatGPT. You can now use ChatGPT with Power BI to ask questions about your dataset, write formulas for complex calculations, and request steps for implementing a particular functionality.

This is a game-changer, let’s dive into it.

Power BI has a variety of features that can take time to learn. If you’re new or unfamiliar with certain aspects, using Chat GPT’s guidance can help you make the most of Power BI, quite quickly.

Chat GPT has been trained on massive datasets and allows you to ask questions in natural language, receiving detailed and useful responses. By combining ChatGPT with the data visualization power of Microsoft Power BI, you can unlock a wealth of opportunities for generating insights and boosting your organization’s decision-making process at scale.

Now that you understand the importance of using Chat GPT alongside Power BI, let’s take a look at the main areas where Chat GPT can help you improve your data analysis and visualization capabilities.

ChatGPT is a natural language model, which means you can ask it just about any question you’d like answered.

ChatGPT can help you in the following 4 ways.

Data Preparation

Data Transformation

Data Analysis

Data Visualization

In the sections below, we will look at each step of the data analysis cycle and focus specifically on how it can improve your speed and efficiency in creating a killer Power BI report.


Data Preparation and Transformation refer to the process of cleaning, organizing, and converting raw data into a structured and usable format, suitable for further analysis, modeling, or visualization.

They are crucial steps in any data analysis cycle and can take time, often a considerable amount. ChatGPT can help you clean and transform data in Power Query, and write custom code and calculations for creating solid data flows too.

So, restart Power BI Desktop, and let’s get into it.

In Power BI, Power Query is a powerful tool for data transformation. It allows you to connect to various data sources, apply filters, and perform numerous formatting tasks.

To demonstrate how to use ChatGPT for Power Query formulas and code, we will use a dataset to look at specific examples to give you a more solid grounding on making your next transformation.

Once the data is loaded, you should see the following dataset in your Power BI’s Data View.

In Power Query Editor, you will see the dataset you are analyzing.

Let’s say we want to apply a filter on the Country column in the dataset. If you’ve never worked with filters in Power BI Desktop before, you can ask ChatGPT, and it will guide you every step of the way.

I entered the following prompt into ChatGPT when I was applying a filter on the Country column.

Ok, I know, I wasn’t so polite, but I wanted to get straight to the point.

ChatGPT generated the following detailed list of steps to follow:

Let’s follow each of these steps and see if we can apply a filter on the Country column.

After following the steps given by ChatGPT, I was able to remove rows that had Germany in the Country column.

Suppose we want to combine the columns Month Number, Month Name, and Year into a single Date column.

The following prompt was used to generate a step-by-step response using ChatGPT to replicate in Power BI Desktop:

The following guide was suggested by ChatGPT for merging the columns.

With the steps suggested in the guide, the columns were successfully merged to create a new column named New Date.
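Power Query does this with a Merge Columns or custom-column step; the underlying idea is just building a date from its parts. A sketch in Python terms (the row values are hypothetical, and Month Name is redundant once the number is known):

```python
from datetime import date

# One hypothetical row with the three columns being combined.
row = {"Month Number": 6, "Month Name": "June", "Year": 2014}

def to_date(row):
    """Build a first-of-month date from the Year and Month Number columns."""
    return date(row["Year"], row["Month Number"], 1)

print(to_date(row))  # 2014-06-01
```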

The above two demonstrations should give you an understanding of how to use ChatGPT to make more efficient data preparation and transformation.

In the section below, we will look at how to use ChatGPT to generate formulas and calculations for your Power BI reports.

Let’s get into it!

DAX (Data Analysis Expressions) is a formula language used to create custom calculations and aggregations in Power BI. By leveraging DAX, you can create complex measures and calculated columns for your data model.

For a demonstration of using ChatGPT for DAX code, we will use the same dataset as used in the above section.

I asked ChatGPT to help me write code for finding the total sales. The following is the text prompt and the output given by ChatGPT:

Once you get the code, you can go ahead and implement it in Power BI.

I asked ChatGPT to help me write the code for finding the Average Sales Price per Unit. The following is the text prompt and output given by ChatGPT:

Once you get the formula, you can go ahead and implement it in Power BI.

The above demonstrations highlight the amount of help you can get from ChatGPT when working on a Power BI project. You can ask ChatGPT to write the most complex formulas for you, and it will return a template formula that you can adapt to the variables in your dataset. For instance, take a look at the formulas generated by ChatGPT below:

In short, you can ask ChatGPT to help with the following in DAX:

Creating calculated columns

Defining measures

Implementing time-intelligence functions

We’ve looked at how ChatGPT can be helpful in writing complex formulas. In the section below, we will look at how to enhance your data visualization and report-building capabilities with ChatGPT.


A crucial stage in any data analysis project is to present your findings to relevant stakeholders. Power BI offers a strong set of tools that allows you to make appealing visuals and create summaries of your work.

Following are some of the ways where you can consult ChatGPT for making visualization and reports:

Example 1: Create a Bar Graph of Total Sales by Country

Suppose I want to find the total sales by country. I asked ChatGPT for the directions and the following response was generated:

After following the guide from ChatGPT, I was able to create the following bar graph in Power BI:


Example 2: Create a Tree Map to show Total Sales by Country and Product

Suppose I want to create a tree map of total sales by country and product. I used the following prompt to ask ChatGPT for directions:

ChatGPT generated the following step-by-step guide for creating a TreeMap:

With the steps provided by ChatGPT, I was able to create the following treemap:

The above examples should give you a good starting point for using ChatGPT to analyze and visualize data in Power BI.

You can also use ChatGPT for recommendations and directions about your data. In the section below, we will look at how you can use ChatGPT to ask for a recommendation and direction for your analyses.

When using ChatGPT with Power BI, it can suggest a variety of data analyses and visualizations based on the column names provided.

By understanding the context and the nature of the data, ChatGPT can recommend numerous potential insights and data exploration paths that users can follow to gain a deeper understanding of their dataset.

This functionality allows users to unlock the hidden value within their data by suggesting multiple analytical approaches tailored to the specific dataset.

For example, given a dataset with columns such as Segment, Country, Product, Units Sold, and Sales, ChatGPT can recommend analyses like calculating Total Sales by Country or Segment, identifying the most popular Products by Units Sold, or exploring Sales trends over time.

Users can also receive suggestions for suitable visualizations, such as bar graphs, pie charts, or line charts, to represent the extracted insights effectively.

Suppose I want a list of possible analyses on my dataset. After asking ChatGPT, it recommended the following list of analyses:

Power BI combined with ChatGPT presents an immense opportunity for data visualization and analysis.

By utilizing its diverse capabilities, you can harness the true potential of your data and make informed decisions that drive your business forward. BUT… and this is a big but…

Chat GPT, although proving to be really helpful, should not be fully relied upon. Sure, embrace the technology (we are) but always learn, practice, and build your skills alongside it too!

# How To Get Your Dataset’s Top N In Power BI

In today’s blog, we will walk you through the process of using Quick Measures Pro to create a custom measure that returns the top N items in Power BI based on a specific metric. It’s a useful technique for data visualization and analysis, allowing you to quickly identify the top performers in your dataset. You can watch the full video of this tutorial at the bottom of this blog.

In the Analyst Hub, I have created and saved a custom quick measure which I call the Top N Ranking Measure.

What this measure does is take a number of elements, rank them, and return the top N. Take the code below as an example. 

In this case, we want to get the Top 5 and rank all the Locations based on Total Sales using the RANKX function. If the item is in the top 5, the code returns the total sales; otherwise, it returns a blank.

This measure is helpful when we want to rank our data. However, this code is tied to Locations, Total Sales, and Top 5. 

What we want to do then is make the code more general so it can take any element, rank it by any measure, and do any number of top N in Power BI reports.

Start by copying the DAX code.

Open Quick Measure Pro and create our top N pattern quick measure.

The New Measure may look long and daunting initially, but it’s going to make a lot of sense as we progress. 

Let’s go to the Analyst Hub and look at our initial DAX code measure.

Think of the elements that we want to modify in the future to make it more flexible. 

In this example, there are four items that we can change. 

First is the items to rank. We won’t be ranking locations all the time. In the future, we may want to rank customers, regions, products, and more.

Second is the measure used. Our current code ranks based on the Total sales but we may want to rank based on the averages, maximums, and minimums on our data.

Third is the number of elements we want to keep. We may need to get the top 3, top 4, top 10, and so on.

Last is the order which is currently set as DESC. In the future, we may want to rank our data in ascending order instead of descending. 

Let’s copy this DAX code and go back to Quick Measures Pro.

In our New Measure, start by creating an aggregation label. Tick the box beside the Aggregation Label and choose Maximum in the dropdown options to avoid a naked column reference.

Then, we’ll need another aggregation label. Tick the box beside Aggregation 1 Label to enable it.

In this instance, tag it as Rank on the Basis of, which we will use as our measure. Then, choose Total Sales from the list of variables.

Now that we are done setting that up, we can now go to Tooltips.

In the Tooltip, input the instructions that we want for the different labels we created.

In the ParameterLabelTooltip, we can instruct it to enter whole numbers only. 

We can then proceed to customizing the code.

After we properly set up our New Measure, the interface gives us the keywords to use in ranking and setting up our variables. 

But instead of using these variables, we will customize it by pasting in the DAX code we copied earlier.

Delete RETURN and #QMPRO in lines 15 and 16, respectively, and paste the DAX code.

We can then customize the code.

Start with the items to rank. Replace Locations[Location City] with _COLUMN_, taken from the list of variables above.

We’ll replace the Total Sales measure next. We can either replace that manually or use the find and replace feature made by Greg Deckler. 

In the second text field, enter the variable that should replace [Total Sales]. Let's put [_COLUMN1NAME_].

Finally, let's replace 5 with __PARAM__.
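After those three replacements, the templated body should look roughly like this. The placeholder names follow the Quick Measures Pro conventions mentioned above, but the exact generated code may differ:

```dax
VAR _Rank =
    RANKX (
        ALLSELECTED ( _COLUMN_ ),   -- any column chosen at build time
        [_COLUMN1NAME_],            -- any measure to rank by
        ,
        DESC
    )
RETURN
    -- __PARAM__ is the user-supplied number of top items to keep
    IF ( _Rank <= __PARAM__, [_COLUMN1NAME_], BLANK () )
```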

After editing the code, the next step is to clean our new measure. We want it to look clean since this will be a permanent measure in our file. 

Start by taking the measure name, “Top Ranking Pattern”, from line 16 to line 1.

Then, set our last variable as the result we want to return.
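A hedged sketch of the cleaned-up shape: the measure name moves to the first line, and the final variable becomes the returned result. Variable names here are illustrative:

```dax
Top Ranking Pattern =
VAR _Rank =
    RANKX ( ALLSELECTED ( _COLUMN_ ), [_COLUMN1NAME_], , DESC )
VAR _Result =
    IF ( _Rank <= __PARAM__, [_COLUMN1NAME_], BLANK () )
RETURN
    _Result
```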

To further clean our code, remove the list of variables from lines 10 to 14.

We could also remove the keywords on lines 2 to 9. In this case, we'll leave them in so they're available if we need to modify the code in the future.

The next step is to update the metadata. In the Description, let's write "Keeps top N items based on user selections". We can also put our measures in folders or hide them if needed.

In Quick Measures Pro, a Top Ranking Pattern option should appear under the Custom section.

We can change the variables for the Aggregation Label, Items to Rank, Number of Items to Rank, Aggregation1 Label, and Rank on the Basis of. 

If we go back to Power BI, our new measure should show up in the Fields pane.

In this example, it filters to our top 7 customers based on average sales.

Let’s try modifying our custom measure. This time, let’s take the top 5 products based on the total sales.

To do that, go back to Quick Measures Pro. Set the Aggregation Label to Sum and choose Product Name to rank. Then, opt for the top 5 items and rank based on Total Sales. 

In the ribbon under Measure tools, set the Name as Product Name Top Ranking Total Sales and choose Measures for the Home Table.

Drag Product Name Top Ranking to the X-axis and Product Name to the Y-axis. This will give us the bar chart with our top 5 products. 

In conclusion, creating a custom top N ranking measure with Quick Measures Pro is a powerful tool for analyzing and visualizing data in Power BI. By following the steps outlined in this tutorial, you can easily create a measure that ranks items in a specific category, allowing you to quickly identify the top performers and make more informed decisions.

Remember to test your measure and make adjustments as needed to ensure that it is providing the most accurate and relevant information. With Quick Measures Pro, the possibilities are endless, and you can continue to explore and refine your analysis to uncover valuable insights into your data.

All the best,

Brian Julius
