Context:
When using the Azure Functions Premium plan, scaling is handled by event-driven scaling. When using a Storage Queue trigger, target-based scaling is used by default. In its Target-based scaling documentation, Microsoft describes the equation used to determine the desired number of instances:

Desired instances = Event source length / Target executions per instance

When using the Storage Queue trigger, the Event source length is the number of messages in the storage queue, and Target executions per instance is the extensions.queues.batchSize property defined in host.json.
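Microsoft's target-based scaling formula (desired instances = event source length / target executions per instance, rounded up) can be sketched as a small calculation; the function name here is illustrative, not part of any SDK:

```python
import math

def desired_instances(event_source_length: int, target_executions_per_instance: int) -> int:
    """Desired instances = event source length / target executions per instance, rounded up."""
    return math.ceil(event_source_length / target_executions_per_instance)

print(desired_instances(3, 1))    # batchSize 1: one instance per message -> 3
print(desired_instances(40, 16))  # default batchSize 16 -> 3
print(desired_instances(0, 1))    # empty queue -> 0
```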
My understanding:
So let's say I want to configure scaling so that each instance only processes one queue message at a time.
In this case, I would set extensions.queues.batchSize to 1.
The Azure Functions Premium plan would then scale out one instance per queue message (until the maximum number of instances is reached; from then on it waits for in-flight messages to complete).
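As a sketch, the host.json for that setup might look like the following. batchSize maps to the target executions per instance; newBatchThreshold is included as an assumption, since the runtime fetches a new batch once the number of in-flight messages drops below that threshold, and 0 keeps an instance at strictly one message at a time:

```json
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 1,
      "newBatchThreshold": 0
    }
  }
}
```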
As stated above, target-based scaling uses the number of messages in the queue to determine how many instances are needed.
But once messages are being processed, they are no longer counted as queued, so target-based scaling would immediately vote to scale the added instances back in, wouldn't it?
Example:
The Azure Functions Premium plan is configured with 1 "always ready" instance and up to 4 additional instances
There are 3 messages in the queue the trigger reads from
Scaling is configured as described above (1 message per instance at a time)
Target-based scaling will therefore scale out two additional instances so that all three messages can be processed concurrently
But now the queue is empty, so target-based scaling will vote to scale back in to the initial 1 instance
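The example's numbers can be sketched as a small simulation; the constants and the clamping to the plan's bounds (1 always-ready instance, 4 additional) come from the example above, and the function name is illustrative:

```python
import math

ALWAYS_READY = 1   # minimum instance count from the example
MAX_INSTANCES = 5  # 1 always-ready + 4 additional

def scale_target(queue_length: int, batch_size: int = 1) -> int:
    """Clamp the target-based scaling result to the plan's configured bounds."""
    wanted = math.ceil(queue_length / batch_size)
    return min(MAX_INSTANCES, max(ALWAYS_READY, wanted))

print(scale_target(3))   # 3 visible messages -> scale out to 3 instances
print(scale_target(0))   # queue looks empty  -> vote to scale in to 1
print(scale_target(40))  # demand above the cap -> capped at 5
```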
Questions:
Is my understanding of the scale-in behavior correct, or am I missing something?
If it is correct, do the instances marked for scale-in actually complete their in-flight work, or are they shut down after some time? I have read about drain mode and graceful shutdown, but I am not sure I understand them correctly.
If instances do not get to complete their work (i.e. they are shut down after some period even with a job still running), how can I ensure the work still completes? Writing a queue message again during shutdown would put an item back in the queue, which would trigger a scale-out again, so it would just go back and forth, wouldn't it?
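One detail of Azure Storage queues that is relevant here: a message being processed is not removed from the queue, it is only made invisible for its visibility timeout, and the queue trigger deletes it only when the function completes successfully. If an instance is shut down mid-processing, the message becomes visible again after the timeout and another instance can retry it. A toy model of that semantics (not the real SDK):

```python
import time

class SimQueue:
    """Toy model of a storage queue's visibility timeout (illustrative, not the real SDK)."""
    def __init__(self):
        self.messages = []          # each item: [body, invisible_until]

    def send(self, body):
        self.messages.append([body, 0.0])

    def receive(self, visibility_timeout):
        now = time.monotonic()
        for m in self.messages:
            if m[1] <= now:         # only visible messages can be dequeued
                m[1] = now + visibility_timeout
                return m
        return None

    def delete(self, msg):
        self.messages.remove(msg)   # processing finished: remove for good

    def visible_count(self):
        now = time.monotonic()
        return sum(1 for m in self.messages if m[1] <= now)

q = SimQueue()
q.send("job-1")
msg = q.receive(visibility_timeout=30)
print(q.visible_count())  # 0 -> the queue looks "empty" while work is in flight
# If the instance dies before q.delete(msg), the message becomes visible
# again after 30s and another instance can pick it up; no re-enqueue needed.
```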
About my function:
My function processes Word files (docx). Most jobs complete within seconds or minutes, but some can run for a few hours.
I have read about Durable Functions as well, but I am not sure it solves my problem, since the target-based scaling algorithm would be the same for Durable Functions as for "normal" functions.
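One approach that keeps multi-hour jobs compatible with scale-in and graceful shutdown is to split each job into many short queue messages, so every invocation finishes well within the shutdown grace period. A hypothetical sketch (split_job, the chunk size, and the message shape are all made up for illustration):

```python
import json

def split_job(doc_id: str, page_count: int, chunk_size: int = 10):
    """Yield one queue-message body per chunk of pages (illustrative only)."""
    for start in range(0, page_count, chunk_size):
        yield json.dumps({
            "doc": doc_id,
            "pages": [start, min(start + chunk_size, page_count)],
        })

msgs = list(split_job("report.docx", page_count=25))
print(len(msgs))                      # 3 short messages instead of one long job
print(json.loads(msgs[-1])["pages"])  # [20, 25]
```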
deanhdev changed the title from "Details about Scale-in behavior with target-based scaling in Azure Functions Premium Plan" to "Question: Details about Scale-in behavior with target-based scaling in Azure Functions Premium Plan" on Feb 27, 2025