If you work with Microsoft Fabric Lakehouses, you know the story all too well:
You load data, transform it, optimize it… and then you still need to manually refresh the SQL Endpoint to make sure your tables, schema changes, and metadata are fully synchronized.
That extra manual step is finally gone.
Today, a brand‑new pipeline activity has landed in Microsoft Fabric: the Refresh SQL Endpoint activity.
This is one of those deceptively small features that unlocks a huge amount of automation power. For the first time, you can orchestrate a full end‑to‑end Lakehouse workflow — ingestion, transformation, optimization, and SQL Endpoint refresh — all inside a single Fabric pipeline.
The Refresh SQL Endpoint activity allows you to:
Trigger a refresh of your Lakehouse SQL Endpoint directly from a pipeline
Ensure schema changes (new columns, reordered columns, dropped fields) are reflected immediately
Guarantee that downstream Warehouse queries, Power BI models, and semantic layers see the latest metadata
Remove the need for manual refreshes or external scripts
Build fully automated, production‑ready data workflows
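Until now, teams who wanted this automation typically called the Fabric REST API from an external script. As a rough sketch of what the new activity replaces, the example below builds the metadata-refresh request in Python; the exact route and payload should be checked against the Fabric REST API reference, and the IDs and token handling here are purely illustrative:

```python
import urllib.request

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def build_refresh_url(workspace_id: str, sql_endpoint_id: str) -> str:
    # Assumed route for refreshing SQL Endpoint metadata; verify the exact
    # path against the official Fabric REST API documentation.
    return f"{FABRIC_API}/workspaces/{workspace_id}/sqlEndpoints/{sql_endpoint_id}/refreshMetadata"

def refresh_sql_endpoint(workspace_id: str, sql_endpoint_id: str, token: str):
    # POST with a bearer token obtained via your usual Entra ID auth flow.
    req = urllib.request.Request(
        build_refresh_url(workspace_id, sql_endpoint_id),
        method="POST",
        headers={"Authorization": f"Bearer {token}"},
    )
    return urllib.request.urlopen(req)  # raises on a non-2xx response
```

With the new activity, this external plumbing (script hosting, auth, scheduling) disappears into the pipeline itself.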
This is especially valuable for teams who:
Add or modify Lakehouse table schemas dynamically
Use pipelines for incremental ingestion
Rely on SQL Endpoint for reporting or downstream processing
Want deterministic, repeatable orchestration without manual intervention
How you’ll use it
You simply drop the Refresh SQL Endpoint activity into your pipeline, point it at your Lakehouse, and place it after any step that modifies tables or schema.
Add a Set Variable activity that reads the SQL Endpoint ID from a variable library, and have the Refresh SQL Endpoint activity use that variable. That way, when you deploy from Dev to Test to Prod, the variable library resolves to the correct SQL Endpoint for each stage.
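To make the Dev/Test/Prod idea concrete, here is an illustrative Python sketch of what the variable library effectively does: map each deployment stage to its own SQL Endpoint ID. The stage names and IDs below are made up, and in Fabric this mapping lives in the variable library itself rather than in code:

```python
# Illustrative stand-in for a Fabric variable library: each deployment
# stage resolves to its own SQL Endpoint ID (IDs below are made up).
STAGE_SQL_ENDPOINTS = {
    "Dev": "dev-sql-endpoint-id",
    "Test": "test-sql-endpoint-id",
    "Prod": "prod-sql-endpoint-id",
}

def resolve_sql_endpoint(stage: str) -> str:
    """Return the SQL Endpoint ID the Refresh activity should target."""
    return STAGE_SQL_ENDPOINTS[stage]
```

Because the pipeline only ever references the variable, promoting it between stages requires no edits to the pipeline definition itself.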