Dan English's BI Blog

Welcome to my random thoughts in regards to Business Intelligence, databases, and other technologies

Power BI using Service Principal with Synapse Data Explorer (Kusto) Pool

Posted by denglishbi on September 21, 2022

In my last post I went over using a Service Principal account to access a Synapse SQL Pool with Power BI. I even showed how you could go across different tenants in my example. In this post instead of going against a SQL Server source I am going to switch to a Synapse Data Explorer (Kusto) Pool. Once again we will use the SQL Server ODBC driver just like we did in my last post.

For this post I will be using the same Service Principal and driver that we used in the previous post, so if you need any guidance with that or a link to download the driver, please reference that post here.

For the example that I will present here I will do the following:

  • Create a Synapse Data Explorer Pool and load data
  • Grant the Service Principal account permissions to the Synapse Data Explorer Pool database (NOAA Storm Events data)
  • Configure the ODBC System DSN to access the Synapse Data Explorer Pool
  • Create a dataset and report using the ODBC source connection in Power BI
  • Publish the report in a different tenant than the Synapse Workspace
  • Configure a gateway to use in the Power BI service with the dataset
  • Refresh the Power BI dataset and view the report in the service

As mentioned, I will be reusing some of the items that were covered in the previous post; the only difference is that we will be going against a Synapse Data Explorer (Kusto) Pool. If you are already familiar with Azure Data Explorer (ADX) you can review the document here to see how the two compare.

The first step is to create the Synapse Data Explorer Pool within the same Synapse Workspace we used in the previous post with the Synapse Dedicated Pool. This can be done in a few different places within the Azure Portal (the Synapse Workspace Overview page or the Analytics pools section) or in Synapse Studio (the Data or Manage hub). In this example I am doing it in the Azure Portal by simply clicking the New Data Explorer Pool option on the Synapse Workspace Overview page and completing the required information.

Create Data Explorer Pool configuration screenshot

I didn’t make any other setting changes and simply created the pool. This will take a few minutes to complete (approximately 15 minutes for me). In the meantime I decided to use the NOAA Storm Events data, which is a sample you will explore if you are doing any of the Kusto Query tutorials on the Microsoft Learn site. I went ahead and downloaded all of the StormEvents_details files (73 of them) and extracted them to a folder on my laptop (1.31 GB total).

Once the pool is created, on the Overview page you will be presented with a wizard that you can complete to create the initial database and load data.

Synapse Data Explorer Pool wizard screenshot

Simply click the Create database button, enter a database name, set the retention and cache period (day) settings, and then click Create.

Data Explorer Database creation screenshot

Creating the database will not take long, and after you click the Refresh button on the Overview page you will be at Step 3 – Data Ingestion. Click on the Ingest new data button, which will launch the Azure Data Explorer window with a connection to the newly created database. If we right-click on the database we can select the Ingest data option, which launches another wizard to name the table, provide the source, set the schema, and then ingest (load) the data into the new table.

Azure Data Explorer Database screenshot

Provide a name for the new table on the Destination page and then on the next page in the wizard we will set the source.

Ingest New Data Destination screenshot

For the Source page you can simply drag and drop the 73 storm event CSV files and they will start to upload, which will take a few minutes to complete (approximately 8 minutes for me). Next you can go to the Schema page and review what was done based on analyzing the data.

Ingest New Data Source screenshot

Go ahead and review the Schema page to see how the data was mapped if you want, and then click on the Start ingestion button.

Ingest New Data Schema screenshot

Now the storm events data will be loaded into the table. This will take a few minutes to complete but should go relatively quickly, and once it is done you will see a Data preview.

Ingest New Data screenshot

Now we are ready to move on to the second step and grant the Service Principal account access to the database. This can be done at the cluster level or the database level. In the Azure Portal, within the Data Explorer Pool Permissions page, you can add the Service Principal to the AllDatabasesViewer role.

Data Explorer Pool Permissions screenshot

Or, in Azure Data Explorer, in the Query tab with the connection to the new database, you can use the following syntax –> .add database <databaseName> users ('aadapp=<application id>;<tenant id>') '<app name>' as shown below.
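If you prefer to script this instead of typing the command into the Query tab, here is a minimal Python sketch (assuming the azure-kusto-data package, placeholder pool/database/app names, and that you are signed in with the Azure CLI as a database admin) that runs the same control command:

```python
# pip install azure-kusto-data
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Placeholder values -- replace with your own pool URI, database, and app details.
cluster_uri = "https://<pool name>.<workspace name>.kusto.azuresynapse.net"
database = "StormEventsDB"                       # hypothetical database name
app_id = "<application id>"                      # Service Principal (App Registration) client id
tenant_id = "<tenant id>"
app_display_name = "MyPowerBIServicePrincipal"   # hypothetical display name

# Authenticate with the Azure CLI identity (run `az login` first as a database admin).
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster_uri)
client = KustoClient(kcsb)

# Same .add database users control command shown above, granting the app access to the database.
command = (
    f".add database {database} users "
    f"('aadapp={app_id};{tenant_id}') '{app_display_name}'"
)
client.execute_mgmt(database, command)
print("Service Principal added as a database user")
```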

The third step is to create the ODBC System DSN, which is pretty similar to how we did it for the Synapse Dedicated SQL Pool, but there is a slight twist that took me a while to figure out: I had to go into the registry to make a modification to the ODBC entry🤓

Just like we did in the previous post, we will go into the ODBC Data Source Administrator (64-bit), click on the System DSN tab, and click Add. As before, use the same ODBC Driver XX for SQL Server, click Finish, enter a name for the DSN, and then provide the Data Explorer Pool connection information, which in this case will be in the format of <pool name>.<workspace name>.kusto.azuresynapse.net. You can get this from the Overview page of the Data Explorer Pool in the Azure Portal; it is the URI value, just remove the “https://” prefix.

First page of the System DSN configuration screenshot

On the next page, just like before, we will select the option for Azure Service Principal authentication, provide the application id for the Login ID, and then click Next. There is no need to enter the Secret Value for the Password yet, because we first need to create the entry so we can modify it in the registry; then we can come back in with the password to do an official test.

Second page of the System DSN configuration screenshot

On the next page you can go ahead and set the default database setting and then click Next.

Third page of the System DSN configuration screenshot

On the next page you can click Finish and then OK. Now, if you had provided the Secret Value and then tried to test the data source connection, you would have received the message “No tenant identifying information found in either the request or implied by any provided credentials” as shown below.

Data Source Connection Error screenshot

This is what we are going to fix in the registry. It took me a while to track down the solution, but thanks to this article here on MS-TDS clients and Kusto I determined how to include the tenant information in the “Language” field of the ODBC data source🤠 Once you have created the System DSN, you need to use the Registry Editor to add a new String Value to the ODBC entry called “Language” with the “any@AadAuthority:<tenant id>” format as shown in the link above.

Registry Editor ODBC Language screenshot
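If you would rather script the registry change than click through the Registry Editor, here is a minimal Python sketch (run as administrator on the machine with the DSN; the DSN name and tenant id are placeholders) that adds the same Language string value. System DSNs live under HKLM\SOFTWARE\ODBC\ODBC.INI:

```python
# Minimal sketch: add the Language value to an ODBC System DSN (64-bit).
# Run from an elevated (administrator) Python session on the machine with the DSN.
import winreg

dsn_name = "SynapseKustoPool"            # hypothetical System DSN name
tenant_id = "<tenant id>"                # your Azure AD tenant id
language_value = f"any@AadAuthority:{tenant_id}"

# System DSNs are stored under HKLM\SOFTWARE\ODBC\ODBC.INI\<DSN name>.
key_path = rf"SOFTWARE\ODBC\ODBC.INI\{dsn_name}"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "Language", 0, winreg.REG_SZ, language_value)

print(f"Language set to {language_value} for DSN {dsn_name}")
```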

Now if you switch back to the ODBC Data Source Administrator, you can configure the System DSN that was created to include the Secret Value for the Password, and as you click through to the final page you will see the language information included, based on what you entered into the registry.

System DSN Configuration Language screenshot

And when you go to Test Data Source now you should see (fingers crossed) a “TESTS COMPLETED SUCCESSFULLY!” message.

System DSN Test Data Source screenshot
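As a quick sanity check outside of the ODBC Administrator dialog, you can also test the DSN from code. This is just a sketch assuming the pyodbc package and placeholder DSN, credential, and table names; because it connects through the DSN, the Language value added in the registry is picked up automatically:

```python
# pip install pyodbc
import pyodbc

# Placeholder values -- the DSN name, application id, and secret are assumptions.
conn_str = (
    "DSN=SynapseKustoPool;"          # the System DSN configured above
    "UID=<application id>;"          # Service Principal application (client) id
    "PWD=<client secret value>;"     # Service Principal secret value
)

with pyodbc.connect(conn_str, timeout=30) as conn:
    # Table name is whatever you used in the ingestion wizard (yours may differ).
    row = conn.execute("SELECT COUNT(*) FROM StormEvents").fetchone()
    print("Connected, row count:", row[0])
```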

The fourth step is the easy part: just like we did in the last post, open Power BI Desktop, use the ODBC connection, and connect to our Kusto database to create our dataset.

Power BI Desktop Get Data ODBC screenshot

There is just one table, so this is pretty easy. We can clean up the column names to make them more readable, add measures, hide columns that are not needed, etc., and then build out a report. As you can see below in the report, we have over 1.7M records loaded, and the size of the Power BI file once all of the data is loaded is 188 MB, which is much less than the 1.31 GB of CSV files👍

Power BI Storm Events report screenshot

For the fifth step we simply need to publish the report to the Power BI service, which is straightforward and just like the last post, and so is the sixth step, where once published we configure the gateway connection to include the new DSN data source. Just remember to replicate the ODBC configuration that we did above on the gateway machine that is being used by the Power BI service.

Dataset settings Gateway Connection screenshot

Just make sure to follow the same steps as the last post and reference that if you have any questions.

For the final step you can now go ahead and test everything out by running the dataset refresh and then viewing and interacting with the report.

Dataset Refresh history screenshot

And if we interact with the report everything should be good. Just remember once again that this demo, like the last post, is going across tenants using the Service Principal account.

Power BI report screenshot

That completes this post covering the topic of being able to use a Service Principal account to connect to a Synapse Data Explorer Pool for a Power BI report.

Let me know what you think and thanks for checking out this post, I hope you found this useful:)

Posted in Azure, Data Explorer, Kusto, Power BI, Synapse

Power BI using Service Principal with Synapse SQL Pool

Posted by denglishbi on September 14, 2022

In this post I will go over a topic that is frequently asked about, and that is using a Service Principal account with Power BI when connecting to data sources. Currently none of the built-in connectors support this capability natively, but the SQL Server ODBC driver does support the use of a Service Principal account. The one caveat with using an ODBC driver with Power BI is that a gateway is required once the report is published to the service.

For the example that I will present here I will do the following:

  • Create a Service Principal account (App Registration) in Azure
  • Grant the Service Principal account permissions to the Synapse Workspace Dedicated SQL Pool database (Adventure Works DW schema)
  • Install and configure the SQL Server ODBC driver (including the System DSN)
  • Create a dataset and report using the ODBC source connection in Power BI
  • Publish the report in a different tenant than the Synapse Workspace
  • Configure a gateway to use in the Power BI service with the dataset
  • Refresh the Power BI dataset and view the report in the service

As noted above, in this example not everything will be in a single tenant, so that is a slight twist, and I have worked with customers where this type of configuration was needed. This means that the database will reside in one Azure tenant and the report and dataset will be in another, and everything will still work.

The first step is to create the Service Principal account in the Azure portal. In Azure Active Directory in the portal you will go into App registrations and pick the option to add a New registration. When you create the new account this is where you can determine whether it can be used multitenant or not, and in my example that is what I selected.

New App registration screenshot

Once that is created, in the new App registration you will need to create a New client secret, which will be used as the password when connecting to the database for authentication purposes. When creating the secret you can determine when it will expire as well.

Adding a Client Secret to App registration screenshot

Once you create this you will want to copy the Value for the client secret and store it in a secure location; that will be the password we will use later. The other item that you will need to capture, which will be used as the Login ID or User Name when connecting to the database, is the client Application ID, and you can get that GUID value on the Overview page for the App registration you created.

App registration Application ID screenshot

The second step is to grant permissions for the Service Principal account to be able to access the database. You can run the following script once you are connected to the database, and in this case I am using a Synapse Dedicated SQL Pool database where I have loaded the Adventure Works DW database. Whether you run the script in the Synapse Workspace, SSMS, or Azure Data Studio, it will all work the same.

SQL Scripts to add Service Principal user and grant permission screenshot
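The exact script is only shown in the screenshot above, but a grant of this general shape is typical. Here is a hedged Python/pyodbc sketch rather than the author’s exact script (the server, database, admin UPN, app display name, and role are placeholders, and it assumes ODBC Driver 18 with an interactive Azure AD admin sign-in); adjust the role or grants to whatever your report actually needs:

```python
# pip install pyodbc
import pyodbc

# Placeholders -- server, database, admin account, and app display name are assumptions.
server = "<workspace name>.sql.azuresynapse.net"
database = "AdventureWorksDW"
app_display_name = "MyPowerBIServicePrincipal"

# Connect as an Azure AD admin (interactive sign-in) to run the grants.
admin_conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    f"Server={server};Database={database};"
    "UID=<admin user principal name>;"
    "Authentication=ActiveDirectoryInteractive;"
    "Encrypt=yes;",
    autocommit=True,
)

# Create a database user for the Service Principal and grant read access.
admin_conn.execute(f"CREATE USER [{app_display_name}] FROM EXTERNAL PROVIDER;")
admin_conn.execute(f"EXEC sp_addrolemember 'db_datareader', '{app_display_name}';")
admin_conn.close()
```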

The third step is to download and install the SQL Server ODBC driver (I will be using the 64-bit version, which is required on the gateway to work with the service) and then configure the System DSN. If the machine you are using doesn’t already have the Visual C++ Redistributable runtime libraries installed, those will be required for the driver install.

Once the driver is installed you can launch the ODBC Data Source Administrator (64-bit), click on the System DSN tab, click the Add button, select ODBC Driver XX for SQL Server, and click the Finish button.

There will be quite a few screenshots showing the configuration of the System DSN. The first page in the wizard is pretty straightforward: provide a name for the DSN and the SQL Server instance name (you can see in my example below that we are connecting to a Synapse database).

First page of the System DSN configuration screenshot

The next page is where the hidden Easter egg is: this is where you can select Azure Service Principal authentication and provide the Application ID and the Client Secret Value as the Login ID and Password.

Second page of the System DSN configuration screenshot

The next page is where you can provide the default database name you will be connecting to, and after this you can click through until the end, where you get a chance to test the connectivity.

Third page of the System DSN configuration screenshot

The last page is where you can test the data source connectivity and if you did everything as expected you should see the “TESTS COMPLETED SUCCESSFULLY!” message.

Final page of the System DSN configuration to test connectivity screenshot

Once you have completed this configuration wizard and the connectivity worked successfully you will then see the new System DSN added as shown below.

System DSN added in ODBC Administrator screenshot
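If you want to double check the Service Principal credentials outside of the DSN wizard, the same driver also accepts a DSN-less connection string with the service principal authentication keyword. A minimal pyodbc sketch, assuming ODBC Driver 18 and placeholder workspace, database, and credential values:

```python
# pip install pyodbc
import pyodbc

# Placeholders -- replace with your workspace, database, and app registration values.
conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<workspace name>.sql.azuresynapse.net;"
    "Database=AdventureWorksDW;"
    "UID=<application id>;"            # Login ID = Application (client) ID
    "PWD=<client secret value>;"       # Password = client secret Value
    "Authentication=ActiveDirectoryServicePrincipal;"
    "Encrypt=yes;"
)

with pyodbc.connect(conn_str, timeout=30) as conn:
    # Should print the service principal identity we connected as.
    print(conn.execute("SELECT SUSER_SNAME();").fetchone()[0])
```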

So far so good, we are about half-way through the process;)

The fourth step is to open Power BI Desktop and Get Data using ODBC option as shown below.

Power BI Desktop Get Data ODBC option screenshot

For the ODBC Data source name (DSN) you will provide the name you entered when creating the System DSN in the previous step.

ODBC data source name screenshot

Once connected with the System DSN you will then be presented with the Navigator as shown below to search and browse the database.

Navigator view of the ODBC connection screenshot

Now simply select the tables that we want to include in our data model; in this example I selected five tables, four dimension tables and one fact table.

Select tables for data model screenshot

After I selected the tables I went through the transform steps, keeping the columns that I wanted to use in the report or for the relationships, and renamed the tables. Once the data was loaded I verified the relationships, created a few measures, and hid some of the columns. I then quickly put together the report below, nothing fancy by any means.

Power BI report using ODBC data source screenshot

The fifth step is simply publishing the report to the Power BI service, and in this particular case I decided to deploy this to my demo tenant versus my primary organizational tenant. There isn’t much to show for this step; I simply made sure to log in to Power BI Desktop with my demo tenant account and then published the report.

The sixth step is getting the gateway ready, which will be needed since I am using an ODBC data source. In my demo tenant I already had a virtual machine available in my Azure subscription, so I went ahead and started it up, remoted into the virtual machine, downloaded and installed the SQL Server ODBC driver, and then configured the System DSN just like we did in the previous steps above.

Once the virtual machine was configured and running, the next thing to do is to review the settings of the dataset that was deployed with the report in the previous step; specifically we want to look at the Gateway connection as shown below.

Dataset settings Gateway Connection screenshot

In the above screenshot I selected the action arrow on the gateway I configured with the ODBC driver and, for the data source in the dataset, clicked the Add to gateway link. This will open a dialog to add a new data source to the gateway, as shown below, where you will need to enter a Data source name and, for authentication, provide the Application ID and Client Secret Value for the Service Principal account again. Once you have done this you can click Create.

Create ODBC data source in Gateway screenshot

Once the data source has been added to the gateway then we simply need to map this in the Gateway connection, click Apply, and then we can proceed to the next step of running the data refresh!

Map Gateway connection to Data Source screenshot

The seventh and final step is to run the dataset refresh and view the report to make sure everything is working as expected. After running the on-demand refresh and reviewing the dataset refresh history in the settings we see it completed successfully!

Dataset Refresh history screenshot

And then if we view the report and interact with the slicers and visuals we can see everything is working as expected.

Power BI report screenshot

That completes the post covering the topic of being able to use a Service Principal account to connect to a data source for a Power BI report. Currently the only option that I am aware of is the one I have shown, which is using the SQL Server ODBC driver. In this example I used it to connect to a Synapse Workspace Dedicated SQL Pool database located in one Azure tenant and run a report and dataset that was published in a separate tenant.

Let me know what you think and thanks for checking out this post, I hope you found this useful:)

Posted in Azure, Power BI, Synapse

Power BI Data Driven Subscriptions with Power Automate – Follow up using Power BI report

Posted by denglishbi on September 6, 2022

In the last post I went over using Power Automate to perform a data driven report subscription using a Paginated report referencing a Power BI dataset. The flow referenced an Excel file with the information to make the process data driven and generate 2000 PDF files that could then be emailed to users. In the flow the PDF files were simply placed in a OneDrive folder for testing purposes to validate the flow would run as expected and to review the metrics after the fact to evaluate the impact of running the process.

That post was the first follow up to this post, where an AAS database was used to run the report bursting process. This post is the second follow up I wanted to do, replacing the Paginated report with a Power BI report to compare performance. Granted, not all Paginated reports could be swapped out with a Power BI report, but in this particular case it will work fine since I am not returning hundreds of records and don’t require multiple pages, plus I am just producing PDF files.

The first thing I needed to do was to recreate the Paginated report as a Power BI report, and I did that with Power BI Desktop referencing the Power BI dataset that was made in the previous follow up post. I won’t go into this in detail; it was simply a matter of getting a similar look to the report output and testing it out to make sure it would work for the report bursting process.

Power BI report for data driven subscription screenshot

Once this was tested and ready I cleared the filter and deployed the report to the Power BI service. The next step was to add a new column to the Excel file for the subscription process that could be used to pass the filter to the Power BI report. Steven Wise has a nice blog post that outlines this setup using URL filters with the report subscription here: Filter & Email Power BI Report pages using Power Automate & Excel. I went ahead and added a new column “URLFilter” to the Excel file with an expression to generate the filter that will be used in the Power Automate process to filter the report based on the Company Name value, which is an attribute in the Customer table.

Excel file with the URLFilter column screenshot
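The Excel expression itself is in the screenshot, but to make the filter syntax explicit, here is a small Python sketch (pandas/openpyxl assumed; the file, sheet, and column names are placeholders) that builds the same Table/Column eq 'value' filter string per row:

```python
# pip install pandas openpyxl
import pandas as pd

path = "CompanyList.xlsx"                        # hypothetical workbook name
df = pd.read_excel(path, sheet_name="CompanyList")

# Same idea as the Excel expression: a Power BI URL filter per row using the
# Table/Column eq 'value' syntax against the Customer table's CompanyName.
df["URLFilter"] = "Customer/CompanyName eq '" + df["CompanyName"] + "'"
print(df[["CompanyName", "URLFilter"]].head())
```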

Steven has a reference in his blog post to how the filtering works and how it is used in the export process. Now that we have the report available and the Excel file updated, we just need to modify the Power Automate flow to use the “Export To File for Power BI Reports” step instead of the Paginated Reports one.

Power Automate Export To File for Power BI Reports step screenshot

In the above screenshot you can see that I am referencing the “Export To File for Power BI Reports” step now, using the new Power BI report that was deployed to the service, and have made a reference to the URLFilter column in the “ReportLevelFilters Filter” section of the step.
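Under the covers this step calls the Power BI export-to-file REST API with a reportLevelFilters section. Here is a rough Python sketch of the same call (the workspace/report ids, access token, and filter value are placeholders; Power Automate handles the token and the polling for you):

```python
# pip install requests
import time
import requests

# Placeholders -- supply real ids and an Azure AD access token for the Power BI API.
group_id = "<workspace id>"
report_id = "<report id>"
token = "<access token>"
headers = {"Authorization": f"Bearer {token}"}

# Same URL-filter syntax used in the Excel URLFilter column: Table/Column eq 'value'.
url_filter = "Customer/CompanyName eq 'A Bike Store'"

base = f"https://api.powerbi.com/v1.0/myorg/groups/{group_id}/reports/{report_id}"
body = {
    "format": "PDF",
    "powerBIReportConfiguration": {"reportLevelFilters": [{"filter": url_filter}]},
}

# Start the export, poll until it finishes, then download the PDF bytes.
export_id = requests.post(f"{base}/ExportTo", json=body, headers=headers).json()["id"]
while True:
    status = requests.get(f"{base}/exports/{export_id}", headers=headers).json()
    if status["status"] in ("Succeeded", "Failed"):
        break
    time.sleep(5)

if status["status"] == "Succeeded":
    pdf = requests.get(f"{base}/exports/{export_id}/file", headers=headers).content
    with open("A Bike Store.pdf", "wb") as f:
        f.write(pdf)
```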

Now it is time to test the updated flow and process to validate everything works and then compare the results. After the flow has run and we take a look at the PDF files in the OneDrive folder we can notice one considerable difference…

PDF files in OneDrive folder screenshot

And the difference is in the size of the file that was produced. In the first post I included a screenshot where the size of the files was around 260 KB; now they are around 30 KB, which is an 88% reduction in file size. Granted, the report / file being generated here is really basic, but that is a pretty significant difference, and the output is basically the same (I could have spent a bit more time on getting the formatting identical).

PDF file output from the Power Automate flow screenshot

And the process took around 90 minutes again to complete, so right in line with the Paginated report examples. One thing to note is that I didn’t adjust the concurrency from the 15 setting. If we take a look at the capacity metrics we see a similar impact, approximately 15% from the background processes, and it looks a lot cleaner with just the 2000 background operations.

Capacity Metrics App screenshot during the flow process screenshot

With Power BI reports using the Export API call we should be able to do 55 concurrent report pages on the A4 SKU that I am testing with, based on the documentation limits mentioned here. I did bump up the concurrency setting from 15 to 50 in the flow, and it completed in 80 minutes the second time versus the 90 minutes initially, which works out to roughly 25 reports per minute instead of 22.

Overall to sum up the three different tests from this series of posts they were basically the same when it came to the following:

  • Duration – approximately 90 minutes for 2000 reports (PDF exports)
  • Capacity impact – approximately 15% on A4 (P1 equivalent) for background operations which are evaluated over a 24 hour basis

The differences come down to the following:

  • AAS Database versus Power BI Dataset
    • Advantage to Power BI, which has everything hosted in the service and more features like endorsing the dataset in the tenant, using sensitivity labels, central monitoring, etc.
    • More details on the feature comparison can be found here
  • Paginated versus Power BI reports
    • Format support – paginated reports support more options like Excel, Word, and CSV compared to Power BI reports
    • We did notice a major difference in file size where the Power BI report PDF files were 88% smaller than the Paginated report ones

So that wraps up the data driven subscription overview and comparison, let me know what you thought in the comments and provide any feedback you have on the series of posts. Thanks!

Posted in Power Automate, Power BI

Power BI Data Driven Subscriptions with Power Automate – Follow up using Dataset

Posted by denglishbi on August 30, 2022

In the last post I went over using Power Automate to perform a data driven report subscription using a Paginated report referencing an AAS database. The flow referenced an Excel file with the information to make the process data driven and generate 2000 PDF files that could then be emailed to users. In the flow the PDF files were simply placed in a OneDrive folder for testing purposes to validate the flow would run as expected and to review the metrics after the fact to evaluate the impact of running the process.

For the follow up there were two items that I wanted to compare against the original flow:

  1. Moving the AAS database being referenced to a Power BI dataset hosted in the same capacity as the Paginated report
  2. Using a Power BI report instead of a Paginated report

In this post I will cover the first comparison. I went ahead and deployed the existing AAS model to the premium workspace being used for the report bursting test. I did this using the Visual Studio project along with the XMLA endpoint. For this to work you will need to make sure that XMLA endpoint read/write is enabled for the capacity, as well as have the XMLA endpoints enabled in the tenant-level settings (they are enabled by default).

Once the settings are all enabled, you just need to get the workspace connection information to use the XMLA endpoint (it has the form powerbi://api.powerbi.com/v1.0/myorg/<workspace name>) and then make sure your model is using compatibility level 1500, which is supported in Power BI Premium.

Visual Studio Tabular Model Compatibility Level 1500 screenshot

Then it is simply a matter of setting the server information for the deployment using the workspace connection information and deploy the model.

Visual Studio Deployment Server Information screenshot

To test the Paginated report with the new model I went ahead and updated the data source connection information to reference the workspace connection instead of AAS. After you do this you will then need to switch over to the ‘Credentials’ section in the properties to enter your user name and password to authenticate.

Paginated Report Data Source connection string screenshot

Once you have authenticated you can then publish the version of the Paginated report referencing the Power BI dataset to the workspace. Now we are about ready to test the Power Automate flow with the Paginated report using the Power BI dataset; we just need to update the flow to reference this version of the report, which is easy to do. I would also do a quick test with the new report just to make sure it runs fine in the workspace without any issues prior to running the updated flow.

Power Automate flow with updated Paginated report reference screenshot

Once again we let the process run and it completed in approximately 90 minutes. After reviewing the metrics app we see very similar metrics, with the background operations using roughly 15% of the capacity (these operations get evaluated over a 24 hour period).

Capacity Metrics App screenshot during the flow process screenshot

So really not much different than running the process against AAS, except now we have everything running entirely in our Power BI Premium capacity, so we can leverage all of the features like endorsing the dataset in the tenant, using sensitivity labels, central monitoring, etc.

In the next follow up post we will test out the process using a Power BI report instead of the Paginated report, so stay tuned;)

Posted in Power Automate, Power BI

Power BI Data Driven Subscriptions with Power Automate (Report Bursting)

Posted by denglishbi on August 22, 2022

Being able to do a data driven report subscription with Power BI and Paginated reports is a common request we hear from many customers. Let’s say you want to send a PDF version of a report to each of your store or department managers using a set of parameter values specific to each person. In the Power BI service that is not an option, but using Power Automate you can do this.

In this post I will be using a Paginated report that is referencing data in an Azure Analysis Services database and I will be referencing an Excel file that I have in OneDrive for Business which includes the needed information for the data driven subscription with 2000 records. The Paginated report is in a workspace backed by a Power BI Embedded A-SKU (A4 – equivalent of a P1 SKU) for testing purposes and the AAS tier is an S1 (100 QPU).

The report for this example is basic with just a single company name parameter defined that provides a list of the customers (first and last name) and the total amount due.

Paginated report example screenshot

With Power Automate you can use the pre-built Power BI templates that are available and then customize them as needed. The end result of the flow I created looks like the following –

Power Automate flow screenshot

The trigger for this flow is just a schedule setup with a recurrence, which is part of the template. For my demo I am just running it manually to show how it works and to highlight a few options you need to be aware of. The Excel file looks like the screenshot below, and you can see a few records have already been processed based on the Processed and ProcessedDTM columns (these values are being updated in the flow) –

Excel file referenced for data driven subscription screenshot

In the file there is a RowId column so that I have a unique key value to reference for each record (which gets used in the flow to update the processed information), a CompanyName column which is used to set the parameter value in the report, user information for who will be getting the report, the file name and format type, and then the processed information. The processed information is there so I know when the record was last sent; also, if there is a failure or for some reason I need to cancel the flow, I can restart the process and it will resume with just the records that are set to ‘No’, because of the filter included in the List rows setup for the Excel file –

List rows Filter Query expression screenshot

In this example instead of emailing files to users I am simply generating the PDF files and placing them in a OneDrive folder.

Pro Tip – It is always a good idea to test your flow out to make sure it will run smoothly without any errors, determine how long the process will take to complete, and verify this will not overload the premium capacity (especially if it is being used by end users for critical reporting).

So before emailing anyone I just want to make sure the process will run from start to finish without any errors. After the process does complete I will review the metrics and determine if any adjustments should be made.

Here are screenshots of the Export to File for Paginated Reports, Create file in OneDrive, and then the two Excel Update a Row steps.

Export to File for Paginated Reports screenshot

In the Export call I am specifying the workspace and report (you could actually provide the workspace id and report id in the Excel file to make this dynamic), referencing the File Format column in the Excel file for the Export Format type, and then at the bottom setting the name of the parameter in the report (CustomerCompanyName) and referencing the CompanyName provided in the Excel file.
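For reference, the REST call behind this step uses the paginated report flavor of the export body. Here is a shortened Python sketch of just the request that starts the export (ids, token, and the company value are placeholders); polling the export status and downloading the file then work through the same exports endpoints as for Power BI report exports:

```python
# pip install requests
import requests

# Placeholders -- workspace id, paginated report id, and access token are assumptions.
group_id, report_id, token = "<workspace id>", "<paginated report id>", "<access token>"

body = {
    "format": "PDF",  # the File Format value from the Excel file
    "paginatedReportConfiguration": {
        "parameterValues": [
            # Report parameter name and the CompanyName value from the Excel row.
            {"name": "CustomerCompanyName", "value": "A Bike Store"}
        ]
    },
}

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{group_id}/reports/{report_id}/ExportTo",
    json=body,
    headers={"Authorization": f"Bearer {token}"},
)
print(resp.json()["id"])  # export id to poll for completion
```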

Create file in OneDrive screenshot

For creating the file I am dynamically setting the File Name based on columns in the Excel file (make sure to also include the file extension information; it seems odd, but that is needed).

Excel Update a row screenshot (set processed info)

In the first Excel Update a Row step I set the Key Column, which is RowId, make reference to the RowId value currently being processed in the loop, and then set the Processed column to ‘Yes’ and the ProcessedDTM column to the current UTC time.

Excel Update a row screenshot (reset processed column)

In the second Excel Update a Row step, once all of the records have been processed in the first Apply to Each loop, I simply reset the Processed column back to ‘No’ for all of the records so it is ready for the next time we need to run the flow.

After all of the steps have been configured and a test run is kicked off there are a couple of things that we will see: first, it reads the Excel file records extremely fast, but after further investigation of the Apply to Each step you will see that it is only loading 256 records, while the Excel CompanyList table has 2000 records;)

Initial flow run screenshot

So why is this process only loading 256 records? Hmm, thanks to Sanjay Raut, who was working with me on testing this process out last year for report bursting, and who found this Solved: 256 rows limit – Power Automate thread, it turns out there is a Pagination setting on the Excel List rows step that you need to turn on and then set accordingly depending on how many records you might have. In this case I simply set it to 2100 to cover my test sample.

Excel file list rows pagination setting screenshot

Now when we reset the Excel file and re-run the process we see that it took a little longer to read all of the records in the Excel file and that the Apply to Each step is going to loop through 2000 records:)

Update flow run screenshot

Another setting that you will want to be aware of is on the Apply to Each step, which controls the concurrency. This is important, and you will want to adjust it accordingly depending on how long it takes to run a report, so you stay within the Power BI connector throttling limits (100 calls per 60 seconds). If you do not enable this then the flow will only process one record at a time; in this test I have it set to 15 (20 is the default when enabled and 50 is the max).

Apply to Each step concurrency setting screenshot

I figure if I can process 30 to 60 reports per minute that is really good and will stay within the throttling limits. I don’t believe that will actually happen, probably more like 20 per minute, but we will see;)

For the second Apply to Each step, the one that resets the file, I don’t have concurrency set, to avoid any type of blocking; it simply loops through the 2000 records and resets them back to a Processed value of ‘No’.

Now that the flow is running the PDF files are being generated in the OneDrive folder as expected –

PDF files being loaded into OneDrive folder screenshot

I took a look at the progress after an hour and based on the Excel file I could tell that it had generated 1,384 files which is around 23 files per minute. Based on this information the process will complete in another 30 minutes, so 1 hour and 30 minutes from start to finish.

I reviewed one of the PDF files out in the OneDrive folder just to make sure everything looked good –

PDF file generated from the flow in OneDrive screenshot

Once the process completed I verified the file count and then reviewed the metrics from the Capacity Metrics App and the AAS QPU utilization in the Azure portal.

Verified file count in OneDrive folder screenshot
Capacity Metrics App screenshot during the flow process screenshot

With just this process running, the capacity was not overloaded and used roughly 15% with the background process running, which gets evaluated over a 24 hour period.

AAS QPU metrics from Azure portal screenshot

And AAS was barely being utilized, maxing out at 5 QPU (the S1 has 100 QPU available).

Some things that I will need to compare this process against would be the following –

  1. Moving the AAS database being referenced to a Power BI dataset hosted in the same capacity as the Paginated report
  2. Using a Power BI report instead of a Paginated report

I will do both of these comparisons as follow up posts, stay tuned.

Here are some references that you might find useful in the meantime –

Just don’t forget to test when running this type of load (A-SKUs work great for this so you don’t impact production P-SKUs), and make sure to check out the additional settings pointed out earlier, like the pagination and concurrency.

Posted in Analysis Services, Power Automate, Power BI

Power BI / AAS data model optimization v2

Posted by denglishbi on October 18, 2021

This is an updated version of the presentation I did earlier in the year which I recently presented for the Minnesota Data Saturday 13 event this past weekend.

The direct link to the presentation on SlideShare is here.

Thanks to everyone that attended on Saturday and held out for the final sessions of the day:)

Kudos to the PASSMN board, sponsors, speakers, and volunteers for making this event possible!

Posted in Analysis Services, Power BI

Power BI / AAS Model Optimization Presentation

Posted by denglishbi on August 23, 2021

Updated (10/18/2021): There is now an updated version of this from a more recent presentation here.

Thanks to everyone that attended the MN BI User Group meeting today. Here is a link to my slides that I presented and if you have any questions about the content please feel free to reach out here, Twitter, or LinkedIn.

Here is the direct link Power BI / AAS Model Optimization presentation August 23, 2021.

Posted in Analysis Services, Power BI

PASSMN September 2018 Meeting

Posted by denglishbi on September 17, 2018

The next Minnesota SQL Server User Group meeting is tomorrow, Tuesday, September 18. This month we are going to try something new and are going to have some of our upcoming MN SQLSaturday speakers attend and do a preview / introduction to their talk which will be on Saturday, October 6.

Be sure to register so that your name badge will be available for you at the Microsoft Technology Center when you arrive and so that we will have an accurate headcount for ordering food.

The sponsor for this month’s meeting is PASSMN.

Location: 3601 West 76th Street, Suite 600 Edina, MN 55437

Agenda:

  • 3:30-4:00 : Registration, Networking, and Food
  • 4:00-4:10 : Kickoff / Announcements
  • 4:10-5:25 : SQLSaturday #796 MN Preview (multiple speakers)
  • 5:25-5:35 : Closing
  • 6:30-7:30 : Pinstripes social hour and bocce ball

Please click here for meeting details and to RSVP for the event

Presentation

SQLSaturday #796 MN Preview

Abstract: For this event we are going to have some of the speakers attend and provide a brief introduction to their session topic that they will be presenting and answer any questions to prepare everyone for the big event in October.

SQLSaturday is a free training event for Microsoft Data Platform professionals and those wanting to learn about SQL Server, Business Intelligence and Analytics. This event will be held on Oct 6, 2018 at Saint Paul College, 235 Marshall Avenue, Saint Paul, Minnesota, 55102.

Don’t forget to register and secure your spot for the SQLSaturday event:)
http://www.sqlsaturday.com/796/eventhome.aspx

Speakers:

Joshuha Owen, Dan English, Ross McNeely, Eric Zierdt, Rick Bielawski, Chris Kramer, Tim Plas, and more speakers will be added, stay tuned…

Social Hour:

We are planning on meeting over at Pinstripes in Edina afterwards to network, play bocce ball, and have some apps and refreshments.

Bocce ball reservation is from 6:30 to 7:30. Feel free to show up after the meeting ahead of time and socialize prior to the bocce ball.

3849 Gallagher Dr, Edina, MN 55435
https://pinstripes.com/edina-minneapolis

Posted in Training

MN SQLSaturday #796 Pre-Cons

Posted by denglishbi on September 13, 2018

We are just a few weeks out from #sqlsatmn and over 250 have registered! Don’t miss out on securing your spot, register today!

http://www.sqlsaturday.com/796/eventhome.aspx

Check out the pre-con events and the early-bird pricing while it lasts:) Early-bird pricing on pre-cons ends 9/14, so sign up today and save:)

We have paid, full-day pre-conference sessions on Friday, October 5th, 2018 (also at Saint Paul College).

These full-day training sessions are a great value, with current early-bird pricing of only $99 plus fees! (Early-bird pricing is good through Friday, September 14, and then the price will increase to $125.)

Click the links below to get more details about the pre-con training events:

You must register and pay through Eventbrite for these pre-cons.

#sqlpass

Posted in SQL Server, Training

MN SQLSaturday #796 Schedule Posted – Oct 6

Posted by denglishbi on August 27, 2018

The schedule has been posted for this year’s Minnesota SQLSaturday event which will be held on October 6 at Saint Paul College.

I will be speaking in the afternoon on Power BI: Dashboard in an Hour Walk-Through.

This session will provide a walk-through example showcasing the Power BI tools including the Desktop, Service, and Mobile application. You will see how you can quickly access and explore data and gain insights from any device as well as collaborate and share the content with others. The content and examples will be provided after the session so that you can go through the walk-through examples on your own.

This session is perfect for anyone that is new to Power BI and is looking for an overview and a demonstration of what the toolset can do and provide for reporting and analytics.

sqlsat796schedule

 

Make sure you register today and secure your spot before it fills up – register now!

Hope to see you there!

Posted in Training