Thursday, April 30, 2026

Microsoft Fabric pipelines fail: AADSTS53000: Device is not in required device state: compliant

AADSTS53000 in Microsoft Fabric Pipelines: Device Compliance Error Explained and Fixed


Complete Error Message (As Seen in Fabric)

The pipeline fails with the following complete error message, which is important to review before troubleshooting:


BadRequest Error fetching pipeline default identity userToken, response content: {
  "code": "LSROBOTokenFailure",
  "message": "AADSTS53000: Device is not in required device state: compliant. Conditional Access policy requires a compliant device, and the device is not compliant. The user must enroll their device with an approved MDM provider like Intune. Trace ID: 53469a28-4971-4d4b-9338-e0dc05751a00 Correlation ID: f6524808-3ab0-49ab-8d25-7371831ab57e Timestamp: 2026-04-30 07:24:38Z The returned error contains a claims challenge. For additional info on how to handle claims related to multifactor authentication, Conditional Access, and incremental consent, see https://aka.ms/msal-conditional-access-claims. If you are using the On-Behalf-Of flow, see https://aka.ms/msal-conditional-access-claims-obo for details.",
  "target": "PipelineDefaultIdentity",
  "details": null,
  "error": null
}
Activity ID 7fade997-becc-4708-b40d-77454522ed40
Error Time Thu Apr 30 2026 14:40:50 GMT+0200 (Central European Summer Time)
Service version 13.0.28187.37
Client version 2604.2.28902-train
Cluster URI https://wabi-north-europe-g-primary-redirect.analysis.windows.net/
Stack Trace {"AdditionalParameters":null,"ErrorCategory":null,"ErrorCode":"AzureActiveDirectoryClientUserErrorException","ErrorMessage":"Job failed to start: unable to acquire user token","ErrorParameters":null,"ErrorSource":1,"IsTransient":false,"UserInteractionRequest":null}

At first glance, this error strongly suggests a user device compliance problem, which can be misleading in the context of Microsoft Fabric pipelines.


Introduction

While working with Microsoft Fabric pipelines in enterprise environments, you may encounter device compliance–related authentication failures like the one shown above.

Although the message references Intune enrollment and compliant devices, Fabric pipelines are non-interactive service workloads. This means the root cause is usually not related to a physical user device.

This post explains why the error occurs, why common fixes don’t work, and how redeploying via Deployment Pipelines resolved the issue.


The Error Scenario


BadRequest Error fetching pipeline default identity userToken
AADSTS53000: Device is not in required device state: compliant
Conditional Access policy requires a compliant device
  • Occurs in Microsoft Fabric pipelines
  • Conditional Access enforces device compliance
  • Pipeline runs as a service workload, not a physical device

Why This Error Is Confusing

  • Fabric pipelines do not have a device
  • Fabric pipelines do not enroll in Intune
  • They should not be subject to device-based Conditional Access

This leads many teams to try:

  • Enrolling their laptop in Intune ❌
  • Changing the user who clicks Run ❌
  • Retrying the pipeline ❌

None of these work consistently.


Root Cause (What’s Really Happening)

The issue is usually caused by stale or incorrect identity metadata in the target environment (for example, Prod).

This can happen when a Fabric pipeline is:

  • Manually recreated in Prod
  • Copied instead of promoted
  • Originally authored under a different user context

Additional Observation: Expired Password of the Submitting User

In our case, further investigation showed that the password of the user shown as “Submitted by” had expired.

Because the pipeline was originally created and deployed under this user context, Fabric attempted to acquire a token using that identity. When the password was expired, token acquisition failed and surfaced as a misleading AADSTS53000 device compliance error.

To validate this:

  • The pipeline was redeployed from Dev to Prod under another user
  • The pipeline was executed using a different user identity

✅ The pipeline ran successfully after this change.

This confirmed that the issue was related to expired user credentials and stale identity bindings, not an actual Intune or device-compliance problem.
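
For future incidents, Microsoft Graph gives a quick way to rule this cause in or out: the lastPasswordChangeDateTime property shows when the "Submitted by" user last changed their password. A minimal Python sketch, assuming you already have a Graph access token with User.Read.All (the token and UPN below are placeholders):

import requests

# Placeholders: a Graph token with User.Read.All and the UPN shown
# under "Submitted by" on the failed pipeline run.
GRAPH_TOKEN = "<access-token>"
SUBMITTED_BY_UPN = "user@contoso.com"

resp = requests.get(
    f"https://graph.microsoft.com/v1.0/users/{SUBMITTED_BY_UPN}"
    "?$select=displayName,lastPasswordChangeDateTime",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
)
resp.raise_for_status()
user = resp.json()

# Compare this date against your tenant's password expiration policy to
# see whether the credential behind the pipeline identity has expired.
print(user["displayName"], user["lastPasswordChangeDateTime"])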


The Fix: Redeploy via Deployment Pipelines ✅

Redeploying the pipeline from Dev to Prod using Fabric Deployment Pipelines resolved the issue completely.

Why this works

  • Rebinds the pipeline to the target workspace identity
  • Resets execution and ownership metadata
  • Removes stale or expired user associations
  • Ensures the pipeline runs as a non-interactive service workload

No Conditional Access changes were required.
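
The redeployment can be done from the Fabric portal, but it can also be scripted. A minimal sketch using the deployment pipelines deployAll REST operation (the token acquisition and pipeline ID are placeholders; stage order 0 is the Development stage):

import requests

# Placeholders: a token with access to the deployment pipeline, and the
# deployment pipeline's ID (visible in the portal URL).
TOKEN = "<access-token>"
PIPELINE_ID = "<deployment-pipeline-id>"

# Deploy all content from the Dev stage (stage order 0) to the next stage.
resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/pipelines/{PIPELINE_ID}/deployAll",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "sourceStageOrder": 0,
        "options": {
            "allowCreateArtifact": True,
            "allowOverwriteArtifact": True,
        },
    },
)
resp.raise_for_status()

# The deployment runs asynchronously; poll the returned operation to
# confirm it finished before rerunning the pipeline.
print(resp.status_code, resp.headers.get("Location"))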


Best Practices Going Forward

  • ✅ Always create pipelines in Dev
  • ✅ Promote using Deployment Pipelines
  • ✅ Avoid manually recreating pipelines in Prod
  • ✅ Keep Workspace Identity enabled for production workloads
  • ✅ Treat Fabric pipelines as service identities, not user actions

Key Takeaway

In Microsoft Fabric, AADSTS53000 device-compliance errors can be caused by expired user passwords or stale identity bindings. Redeploying via Deployment Pipelines resets the execution context and resolves the issue.


Closing Thoughts

This behavior is under-documented, but understanding it can save hours of investigation and unnecessary Conditional Access changes.

If you work with Fabric in secured enterprise tenants, Deployment Pipelines are not optional—they are essential.


Thursday, March 12, 2026

🎉 Refresh SQL Endpoint Activity is now available!


If you work with Microsoft Fabric Lakehouses, you know the story all too well:

You load data, transform it, optimize it… and then you still need to manually refresh the SQL Endpoint to make sure your tables, schema changes, and metadata are fully synchronized.

That extra manual step is finally gone.

Today, a brand‑new pipeline activity has landed in Microsoft Fabric: Refresh SQL Endpoint.

This is one of those deceptively small features that unlocks a huge amount of automation power. For the first time, you can orchestrate a full end‑to‑end Lakehouse workflow — ingestion, transformation, optimization, and SQL Endpoint refresh — all inside a single Fabric pipeline.


🔧 What this new activity does

The Refresh SQL Endpoint activity allows you to:

  • Trigger a refresh of your Lakehouse SQL Endpoint directly from a pipeline

  • Ensure schema changes (new columns, reordered columns, dropped fields) are reflected immediately

  • Guarantee that downstream Warehouse queries, Power BI models, and semantic layers see the latest metadata

  • Remove the need for manual refreshes or external scripts

  • Build fully automated, production‑ready data workflows

This is especially valuable for teams automating end‑to‑end Lakehouse workflows where downstream reports and queries depend on up‑to‑date metadata.

🛠️ How you’ll use it

You simply drop the Refresh SQL Endpoint activity into your pipeline, point it at your Lakehouse, and place it after any step that modifies tables or schema.
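
If you need the same refresh outside a pipeline (for example, from a notebook or a CI script), the Fabric REST API exposes a metadata refresh operation for SQL endpoints. A minimal sketch, assuming the refreshMetadata route, which shipped in preview (verify the exact route against the current docs; the token, workspace ID, and endpoint ID are placeholders):

import requests

# Placeholders: a Fabric token plus the workspace and SQL endpoint IDs.
TOKEN = "<access-token>"
WORKSPACE_ID = "<workspace-id>"
SQL_ENDPOINT_ID = "<sql-endpoint-id>"

# Assumed route based on the Fabric REST refreshMetadata operation;
# it was in preview at the time of writing.
resp = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
    f"/sqlEndpoints/{SQL_ENDPOINT_ID}/refreshMetadata?preview=true",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={},
)
resp.raise_for_status()
print(resp.status_code)  # 200 for sync completion, 202 for long-running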

A Set Variable activity reads the SQL Endpoint Id from a Variable Library file. This variable is then used in the Refresh SQL Endpoint activity.

So when we deploy from Dev -> Test -> Prod, the Variable Library resolves to the correct SQL Endpoint for each environment (referenced in the pipeline as something like @pipeline().libraryVariables.<variableName>).

Variable Library file:

Thursday, August 1, 2019

Stock Prices ETL: TD Ameritrade API

I was thinking a few days back about how BI could work with stock prediction.
To do that, I first needed regular stock price data from the internet.
So I wrote some quick Python code to get stock prices from the TD Ameritrade API.
Once I have the data feeding in regularly, I will write the next post on prediction algorithms.


Here is the code:
SQL Table:
CREATE TABLE [dbo].[Stocks_Price](
    [Symbol] [varchar](10) NOT NULL,
    [Open] [float] NULL,
    [High] [float] NULL,
    [Low] [float] NULL,
    [Close] [float] NULL,
    [volume] [float] NULL,
    [TransDate] [date] NOT NULL
)

The Python code has a start and end date, which can be modified the way you want.
The symbols we get prices for are listed in the file Symbols.txt.
Symbols.txt contains only plain text, one symbol per line, as shown below:
AAPL
AABA
TSLA

The list of all symbols on Nasdaq can be found here:
ftp://ftp.nasdaqtrader.com/symboldirectory/nasdaqlisted.txt


You can run the following Python code after changing the API key, SQL Server address, and Symbols file path for your environment.
Python Code:
import datetime

import pyodbc
import requests

# TD Ameritrade expects epoch timestamps in milliseconds
start_date = int(datetime.datetime.timestamp(datetime.datetime.strptime('07 20 2019  1:33PM', '%m %d %Y %I:%M%p')) * 1000)
end_date = int(datetime.datetime.timestamp(datetime.datetime.strptime('08 02 2019  1:33PM', '%m %d %Y %I:%M%p')) * 1000)

# Open one connection and reuse it for every symbol
sqlconn = pyodbc.connect('Driver={SQL Server};'
                         'Server=.;'
                         'Database=NishDB;'
                         'Trusted_Connection=yes;')
cursor = sqlconn.cursor()

with open(r'C:\Stocks\Symbols.txt') as file:
    for line in file:
        symbol_name = line.strip()
        if not symbol_name:
            continue

        print(symbol_name)
        endpoint = 'https://api.tdameritrade.com/v1/marketdata/{}/pricehistory'.format(symbol_name)

        # startDate/endDate are optional; omit them to use the API defaults
        payload = {'apikey': 'Your API KEY',
                   'periodType': 'year',
                   'frequencyType': 'daily',
                   'startDate': start_date,
                   'endDate': end_date}

        content = requests.get(url=endpoint, params=payload)
        data = content.json()

        # Skip symbols the API returned no price history for
        if 'candles' not in data:
            print('No data returned for {}'.format(symbol_name))
            continue

        for row in data['candles']:
            # The API returns epoch milliseconds; convert to a date string
            currenttime = datetime.datetime.fromtimestamp(row['datetime'] / 1000).strftime('%Y-%m-%d')
            cursor.execute('''Insert into NishDb.dbo.Stocks_Price(Symbol,[Open],High,Low,[Close],volume,TransDate) values(?,?,?,?,?,?,?)''',
                           (symbol_name, row['open'], row['high'], row['low'], row['close'], row['volume'], currenttime))
        # Commit once per symbol instead of once per row
        sqlconn.commit()

Monday, November 5, 2018

SSIS Error: Execute Sql Task failed

Error:
SQL SPLoadxxx:Error: Executing the query "EXEC SPLoadxxx ?" failed with the following error: "Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

When I executed the stored procedure directly, it ran perfectly.
So it seemed SSIS was not able to execute it properly, and I found that
the Execute SQL Task's "Parameter Mapping" in SSIS had an extra parameter that was not being used by the stored procedure. Removing the unused parameter mapping resolved the error.


Wednesday, September 7, 2016

SSAS 2012 Tabular Cube: Attributes (columns) not visible

The issue occurs when a table in your model diagram has both measures and attributes and you add a new column to that table.
After cube deployment, only the measures show up in a client tool like Excel.
The newly added attribute does not appear.

The solution is to set the "Hidden" property of the newly added attribute (column) to "True" in the Properties window, save, and then change it back to "False".

If you deploy now, the new columns will show up as dimensions.
There seems to be a bug where the visibility of a newly added column is not picked up correctly.



Friday, May 27, 2016

Convert Views to Tables

Recently I wanted to create a stored procedure to convert all the views that read millions of records into tables in a database.

The reason: when we process a cube, it is always better if it reads from a table or a simple view. So the ETL (SSIS) job executes the complicated view, with all its joins and lookups, once to materialize it as a table, and the cube reads from that table instead.

Here is code to convert all views ending with "_ToTable" to tables with the same name.
Example: the view dbo.SalesOrder_ToTable will be converted to the table dbo.SalesOrder, keeping the same schema name (dbo) as the view.

GO
DECLARE @View_Name varchar(255)
DECLARE @Table_Name varchar(255)

DECLARE MY_CURSOR CURSOR
  LOCAL STATIC READ_ONLY FORWARD_ONLY
FOR
-- Pick up all views whose name ends with _ToTable
-- ([_] escapes the underscore, which is otherwise a LIKE wildcard)
SELECT SCHEMA_NAME(schema_id) + '.' + name AS view_name
FROM sys.views
WHERE name LIKE '%[_]ToTable'

OPEN MY_CURSOR
FETCH NEXT FROM MY_CURSOR INTO @View_Name
WHILE @@FETCH_STATUS = 0
BEGIN
    -- The target table keeps the same schema and name, minus the suffix
    SET @Table_Name = REPLACE(@View_Name, '_ToTable', '')

    -- Drop any previous copy of the table, then materialize the view
    IF OBJECT_ID(@Table_Name, 'U') IS NOT NULL
        EXEC('DROP TABLE ' + @Table_Name)
    EXEC('SELECT * INTO ' + @Table_Name + ' FROM ' + @View_Name)

    FETCH NEXT FROM MY_CURSOR INTO @View_Name
END
CLOSE MY_CURSOR
DEALLOCATE MY_CURSOR

Monday, February 29, 2016

SSAS: Dimension Parent Child Hierarchy recursive

In the last post I showed you how to build a Parent-Child Hierarchy. This time we implement the same Parent-Child Hierarchy in a different way. Building a Parent-Child Hierarchy looks complicated but is easy if all the rules are followed correctly. I will list a few steps here that will help us build it better.

Scenario:
As you see below we have a simple table which you can build using SSIS and map the corresponding Parent and Child values.
Portfolio:
Primary Column With unique IDs.
ParentPortfolio:
Portfolio Id of the corresponding Parent.
RollUp:
Operator used while aggregating values (auto sum) from the child node to the parent node.
You can read more on RollUps here
https://msdn.microsoft.com/en-us/library/ms175417.aspx

Example: P0120 has parent P0200, and P0200 has parent P0500.
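
To make the rollup behaviour concrete, here is a tiny illustrative Python sketch (hypothetical sample values, not from the cube) of how unary operators aggregate child values up to their parents:

# Illustrative only: (portfolio -> parent, rollup operator) plus a leaf value.
rows = {
    'P0500': {'parent': None,    'op': '+'},
    'P0200': {'parent': 'P0500', 'op': '+'},
    'P0120': {'parent': 'P0200', 'op': '-'},
}
leaf_values = {'P0120': 100.0}

def rolled_up(portfolio):
    # A node's value is its own leaf value plus/minus its children's
    # values, according to each child's unary operator.
    total = leaf_values.get(portfolio, 0.0)
    for child, info in rows.items():
        if info['parent'] == portfolio:
            sign = 1.0 if info['op'] == '+' else -1.0
            total += sign * rolled_up(child)
    return total

print(rolled_up('P0500'))  # P0120 subtracts into P0200, which adds into P0500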


Step1:
 You can create a New SSAS Project and create the DataSourceView.
 After importing the table in the DataSource View we need to set the primary key.
Here Portfolio is set as Primary key.
 After that a link is created between Portfolio and ParentPortfolio as shown below.



Step2:
Right Click "Dimension" folder in the Project in Visual Studio and Click "New Dimension".
The Dimension Wizard opens up as shown below.
Select the Portfolio column as the key column; for display we use the ID+Name combination, which is already there in the Description column.


Step3:
Select Portfolio and ParentPortfolio columns. You can add more columns like Name later on.


Step4:
Name it "Portfolio".


Step5:
Process the Dimension. The output is shown below.


Step6:
Now we need to get rid of the Unknown member and set Rollup operator. For the Portfolio Dimension set the following Properties:


Step7:
For Parent Portfolio Attribute set the following Properties:


Step 8:
Process the Dimension:
The Unknown Level will disappear and all the Levels will have the Unary Rollup sign.


Step 9:
Rename the "Portfolio" attribute to "Portfolio Flat" and "Parent Portfolio" to "Portfolio".
The reason we do this is that I want to name my hierarchy attribute "Portfolio".
It helps a lot and is very user friendly for the end user.


This is how it should look: