
Azure: create the container on the fly if it does not exist #158

Open
DaleyKD opened this issue Dec 1, 2022 · 4 comments
DaleyKD commented Dec 1, 2022

Looking at the code, I thought the library would attempt to create the container if it came back as not found. However, an exception is thrown in my code.

  • DistributedLock.Azure 1.0.0.0
  • Azurite in Docker

const string electionMutexAzureStorageConnectionString = "DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;";
services.AddSingleton<IDistributedLockProvider>(_ =>
{
    var container = new BlobContainerClient(electionMutexAzureStorageConnectionString, "my-mutex-leases");
    // no options configured; the options delegate could be omitted entirely
    return new AzureBlobLeaseDistributedSynchronizationProvider(container, options => { });
});
namespace DistributedLockTest
{
    public class MyService : IHostedService
    {
        private readonly IDistributedLockProvider _lockProvider;
        
        public MyService(IDistributedLockProvider lockProvider)
        {
            _lockProvider = lockProvider;
        }

        public async Task StartAsync(CancellationToken cancellationToken)
        {
            while (!cancellationToken.IsCancellationRequested)
            {
                await StartMutexAsync("MyLockName", ExecuteAsync, cancellationToken);
            }
        }

        public Task StopAsync(CancellationToken cancellationToken)
        {
            return Task.CompletedTask;
        }

        private async Task StartMutexAsync(string name, Func<CancellationToken, Task> taskToRun, CancellationToken cancellationToken)
        {
            await using (var handle = await _lockProvider.TryAcquireLockAsync(name, cancellationToken: cancellationToken))
            {
                if (handle != null)
                {
                    await taskToRun(cancellationToken);
                    Console.WriteLine("No longer in taskToRun!!!!!");
                }
                else
                {
                    Console.WriteLine($"Not the leader. Waiting ...");
                    await Task.Delay(TimeSpan.FromSeconds(30), cancellationToken);
                    Console.WriteLine($"Trying again ...");
                }
            }
        }

        private async Task ExecuteAsync(CancellationToken cancellationToken)
        {
            // I have the lock
            Console.WriteLine($"I'm the LEADER: {Process.GetCurrentProcess().Id}");

            while (!cancellationToken.IsCancellationRequested)
            {
                Console.WriteLine("Do some work, then wait ...");
                await Task.Delay(TimeSpan.FromSeconds(15), cancellationToken);
            }
        }
    }
}

Here's the error immediately kicked out:

Unhandled Exception: Azure.RequestFailedException: The specified container does not exist.
RequestId:07df2a0f-3ca9-4c17-a4f8-d686ab6a16d9
Time:2022-12-01T21:11:27.091Z
Status: 404 (The specified container does not exist.)
ErrorCode: ContainerNotFound

Content:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Error>
  <Code>ContainerNotFound</Code>
  <Message>The specified container does not exist.
RequestId:07df2a0f-3ca9-4c17-a4f8-d686ab6a16d9
Time:2022-12-01T21:11:27.091Z</Message>
</Error>

Headers:
x-ms-error-code: ContainerNotFound
x-ms-request-id: 07df2a0f-3ca9-4c17-a4f8-d686ab6a16d9
Connection: keep-alive
Keep-Alive: REDACTED
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Thu, 01 Dec 2022 21:11:27 GMT
Server: Azurite-Blob/3.20.1

   at Azure.Storage.Blobs.BlobRestClient.<AcquireLeaseAsync>d__38.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd(Task task)
   at Azure.Storage.Blobs.Specialized.BlobLeaseClient.<AcquireInternal>d__26.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Azure.Storage.Blobs.Specialized.BlobLeaseClient.<AcquireAsync>d__25.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Medallion.Threading.Azure.AzureBlobLeaseDistributedLock.<TryAcquireAsync>d__9.MoveNext() in C:\Users\mikea_000\Documents\Interests\CS\DistributedLock\DistributedLock.Azure\AzureBlobLeaseDistributedLock.cs:line 117
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Medallion.Threading.Azure.AzureBlobLeaseDistributedLock.<TryAcquireAsync>d__9.MoveNext() in C:\Users\mikea_000\Documents\Interests\CS\DistributedLock\DistributedLock.Azure\AzureBlobLeaseDistributedLock.cs:line 152
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Threading.Tasks.ValueTask`1.get_Result()
   at System.Runtime.CompilerServices.ConfiguredValueTaskAwaitable`1.ConfiguredValueTaskAwaiter.GetResult()
   at Medallion.Threading.Internal.BusyWaitHelper.<WaitAsync>d__0`2.MoveNext() in /_/DistributedLock.Core/Internal/BusyWaitHelper.cs:line 29
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Threading.Tasks.ValueTask`1.get_Result()
   at System.Runtime.CompilerServices.ConfiguredValueTaskAwaitable`1.ConfiguredValueTaskAwaiter.GetResult()
   at Medallion.Threading.Internal.Helpers.<Convert>d__1`2.MoveNext() in /_/DistributedLock.Core/Internal/Helpers.cs:line 24
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Threading.Tasks.ValueTask`1.get_Result()
   at System.Runtime.CompilerServices.ValueTaskAwaiter`1.GetResult()
   at DistributedLockTest.MyService.<StartMutexAsync>d__4.MoveNext() in C:\Users\home\source\repos\DistributedLockTest\DistributedLockTest\MyService.cs:line 34
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.GetResult()
   at DistributedLockTest.MyService.<StartAsync>d__2.MoveNext() in C:\Users\home\source\repos\DistributedLockTest\DistributedLockTest\MyService.cs:line 23
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Microsoft.Extensions.Hosting.Internal.Host.<StartAsync>d__12.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.<RunAsync>d__4.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.<RunAsync>d__4.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.GetResult()
   at DistributedLockTest.Program.<Main>d__6.MoveNext() in C:\Users\home\source\repos\DistributedLockTest\DistributedLockTest\Program.cs:line 25
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.GetResult()
   at DistributedLockTest.Program.<Main>(String[] args)
madelson (Owner) commented Dec 2, 2022

Hi @DaleyKD, thanks for your interest in the library.

The implementation tries to clean up after itself, so currently we don’t support creating containers on the fly.

I also wanted to minimize the number of API calls on the main lock acquisition path, although in this case I suppose we could wait to catch the exception before creating the container.

What use case do you have that would make that desirable? As someone who is not an Azure expert, do you know if there is anything about container creation that would make it less desirable/harder to automate than automatic blob creation?

@madelson madelson changed the title Azure: ContainerNotFound Azure: consider creating the container on the fly if it does not exist Dec 2, 2022
DaleyKD (Author) commented Dec 2, 2022

First, I totally misread the code path and see where you're automatically creating the blob, while I thought it was the container. My apologies.

The main reason is that I don't want to ask my customers to go in and manually create a container. I'd hope asking them to create the storage account would suffice. (If I DO have to ask them, it's not the end of the world.)

Creating a container isn't that bad. Let me see if I can find a code snippet from my other project.

Well, this is straight from MSFT and how they had us do election mutexes: https://github.com/mspnp/cloud-design-patterns/blob/master/leader-election/DistributedMutex/BlobLeaseManager.cs

At the very bottom, you see that they use a BlobContainerClient to create the container.

I would kindly "argue" that the code should create the container if it doesn't exist, but not clean it up. Reason? In my code, we have MANY services doing leader election. We use the same container, but different blobs.

@madelson madelson changed the title Azure: consider creating the container on the fly if it does not exist Azure: create the container on the fly if it does not exist Dec 2, 2022
madelson (Owner) commented Dec 2, 2022

> The main reason is that I don't want to ask my customers to go in and manually create a container. I'd hope asking them to create the storage account would suffice. (If I DO have to ask them, it's not the end of the world.)

Makes sense. Also fits with one of the goals of the library which is to "just work" without extra setup on the back-end.

> Well, this is straight from MSFT and how they had us do election mutexes

Nice!

> I would kindly "argue" that the code should create the container if it doesn't exist, but not clean it up. Reason? In my code, we have MANY services doing leader election. We use the same container, but different blobs.

I agree. Upon consideration, I think this would be good behavior to add. In fact, it is very consistent with our FileDistributedLock, which lazily creates (but does not clean up) the lock file's directory.

Is this something you'd be interested in contributing?

Implementation notes:

  • Add a function CreateContainerIfNotExistsAsync to BlobClientWrapper. Should use this._blobClient.GetParentBlobContainerClient() and should create the container with the same debugging metadata tag that AzureBlobLeaseDistributedLock uses when creating blobs.
  • In AzureBlobLeaseDistributedLock.TryAcquireAsync, catch the case where the container does not exist and, in that condition, create the container and then the blob before retrying (a rough sketch follows after this list).
  • Add a test case TestWrapperCreateContainerIfNotExists to AzureBlobLeaseDistributedLockTest similar to TestWrapperCreateIfNotExists (but simpler since there aren't multiple types).
  • Replace the test case TestThrowsIfContainerDoesNotExist with one called TestCreatesContainerIfNeeded that asserts the successful lock acquisition and that the container is not cleaned up when the lock is released. The test itself should clean up the container after the fact.
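
To make the shape concrete, here is a rough sketch of the catch-and-retry. The class name and the metadata tag value are placeholders, not the library's actual internals; GetParentBlobContainerClient, CreateIfNotExistsAsync, and GetBlobLeaseClient are stock Azure SDK calls.

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Azure;
using Azure.Storage.Blobs.Specialized;

internal static class ContainerCreationSketch
{
    // Proposed CreateContainerIfNotExistsAsync: resolve the parent container
    // from the lease blob and create it idempotently.
    public static async Task CreateContainerIfNotExistsAsync(
        BlobBaseClient blobClient, CancellationToken cancellationToken)
    {
        await blobClient.GetParentBlobContainerClient().CreateIfNotExistsAsync(
            metadata: new Dictionary<string, string> { ["createdBy"] = "DistributedLock" }, // placeholder tag
            cancellationToken: cancellationToken);
    }

    // Acquisition path: pay for the extra API calls only when the container
    // is actually missing, keeping the happy path at a single request.
    public static async Task<string> AcquireLeaseWithContainerRetryAsync(
        BlobBaseClient blobClient, TimeSpan duration, CancellationToken cancellationToken)
    {
        var leaseClient = blobClient.GetBlobLeaseClient();
        try
        {
            return (await leaseClient.AcquireAsync(duration, cancellationToken: cancellationToken)).Value.LeaseId;
        }
        catch (RequestFailedException ex) when (ex.ErrorCode == "ContainerNotFound")
        {
            await CreateContainerIfNotExistsAsync(blobClient, cancellationToken);
            // ...create the blob here the same way the existing code does, then retry:
            return (await leaseClient.AcquireAsync(duration, cancellationToken: cancellationToken)).Value.LeaseId;
        }
    }
}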

Also, at risk of stating the obvious, in the short-term you should be able to work around this by simply creating the container yourself during process spinup or right before lock acquisition.
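
For example, a minimal tweak to the registration snippet above (CreateIfNotExists is the stock BlobContainerClient call and is a no-op when the container already exists):

services.AddSingleton<IDistributedLockProvider>(_ =>
{
    var container = new BlobContainerClient(electionMutexAzureStorageConnectionString, "my-mutex-leases");
    container.CreateIfNotExists(); // idempotent: succeeds whether or not the container already exists
    return new AzureBlobLeaseDistributedSynchronizationProvider(container);
});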

trx1 commented Apr 11, 2023

Here's what I have as a workaround:

var blobServiceClient = new BlobServiceClient(azureStorageConnectionString);

// Create the container up front so the lock provider never sees ContainerNotFound.
if (!blobServiceClient.GetBlobContainers().Any(x => x.Name.Equals(containerName)))
{
    blobServiceClient.CreateBlobContainer(containerName);
}

services.AddAzureClients(builder =>
{
    builder.AddClient<BlobContainerClient, BlobClientOptions>(options => new BlobContainerClient(azureStorageConnectionString, containerName));
});
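
For what it's worth, BlobContainerClient.CreateIfNotExists would do the same thing in one idempotent call, without enumerating every container in the account and without the race where another instance creates the container between the check and the create:

var blobServiceClient = new BlobServiceClient(azureStorageConnectionString);
blobServiceClient.GetBlobContainerClient(containerName).CreateIfNotExists();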
