Top 10 Azure Functions Anti-Patterns

Tsuyoshi Ushio
10 min read · Oct 18, 2023


Disasters happen with Azure Functions

I’ve often come across suboptimal practices related to Azure Functions. To streamline the information, I’ve highlighted the top 10 Azure Functions anti-patterns. My hope is that by shedding light on these common missteps, you can design better Azure Functions and prevent potential system outages or inconsistent behavior in your applications.

1. Overriding Azure Functions Host Functionality (C#)

It’s important to note that overriding Azure Functions Host functionality isn’t supported, and many are unaware of the repercussions of this anti-pattern. The examples below, both from Startup.cs, override native Azure Functions capabilities, which can lead to unintended behavior.

Example 1: Overriding Authentication Logic

The following code snippet, from Startup.cs, attempts to override Azure's default authentication logic:

// BAD CODE
public override void Configure(IFunctionsHostBuilder builder)
{
    // ...
    builder.Services
        .AddAuthentication(); // DO NOT DO THIS!!!!!
    // ...
}

By adopting such an approach, you risk undermining the core functionalities of Azure Functions. To elaborate, this action interferes with the authentication logic that Azure uses to validate requests from internal microservices.

Example 2: Overriding IConfiguration

Overriding IConfiguration can render Azure Functions Host ineffective. The below code showcases this anti-pattern:

// BAD CODE
public class Startup : FunctionsStartup
{
    public override void Configure(IFunctionsHostBuilder builder)
    {
        // Overrides the default IConfiguration.
        // NEVER CALL AddSingleton<IConfiguration>!!!!!
        builder.Services.AddSingleton<IConfiguration>(
            new ConfigurationBuilder()
                .AddEnvironmentVariables()
                .Build()
        );
    }
}

Instead, refer to the Azure Functions documentation on customizing configuration sources. Here’s the correct way to update IConfiguration:

// GOOD CODE
public class Startup : FunctionsStartup
{
    // Use ConfigureAppConfiguration to update IConfiguration.
    public override void ConfigureAppConfiguration(IFunctionsConfigurationBuilder builder)
    {
        FunctionsHostBuilderContext context = builder.GetContext();

        builder.ConfigurationBuilder
            .AddJsonFile(Path.Combine(context.ApplicationRootPath, "appsettings.json"), optional: true, reloadOnChange: false)
            .AddJsonFile(Path.Combine(context.ApplicationRootPath, $"appsettings.{context.EnvironmentName}.json"), optional: true, reloadOnChange: false)
            .AddEnvironmentVariables();
    }

    public override void Configure(IFunctionsHostBuilder builder)
    {
    }
}

Isolated Worker Overrides

Similar missteps can occur with the .NET isolated worker model. Here’s a problematic example:

// BAD CODE
var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureServices(services =>
    {
        // NEVER CALL AddSingleton<IConfiguration>()!
        services.AddSingleton<IConfiguration>(
            new ConfigurationBuilder()
                .AddEnvironmentVariables()
                .Build());
    })
    .Build();

host.Run();

This code often results in exceptions that terminate the isolated process, making diagnosis challenging.

Improper Key Vault Configuration

For those using isolated workers, remember that Key Vault configuration should be managed on the host side, not the worker side. Loading Key Vault from the worker can generate excessive requests and run into Key Vault’s throttling limits. Here’s what you should avoid:

// BAD CODE
var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureAppConfiguration(config =>
    {
        var azureKeyVaultURL = Environment.GetEnvironmentVariable("AzureKeyVaultURL");
        var azureKeyVaultADAppID = Environment.GetEnvironmentVariable("AzureKeyVaultMIAppID");

        config
            .SetBasePath(Environment.CurrentDirectory)
            .AddAzureKeyVault(new Uri(azureKeyVaultURL), new ManagedIdentityCredential(azureKeyVaultADAppID))
            .AddEnvironmentVariables()
            .Build();
    })
    .Build();

host.Run();

For best practices with Key Vault, always use Key Vault references instead.
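A Key Vault reference keeps secret retrieval on the host side: the app setting’s value points at the secret, and your code reads the setting as if it were a plain environment variable. A minimal sketch (the setting, vault, and secret names below are placeholders):

```
MySecret=@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
```

The worker then reads `MySecret` like any other configuration value, with no Key Vault SDK calls in your code.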

Expose configuration from worker processes · Issue #418 · Azure/azure-functions-dotnet-worker (github.com)

Remember, it’s imperative to adhere to guidelines and recommended practices. Venturing outside these can lead to unexpected behaviors and might make diagnosing problems more difficult.

2. Not Enabling Runtime Scale Monitoring with VNET

If you’re utilizing VNET with Azure Functions, it’s crucial to enable Runtime Scale Monitoring. Why? Here’s what you need to know:

Non-HTTP triggers rely on an internal component named the “Scale Controller” for scaling. When Azure Functions is paired with a VNET, the Scale Controller cannot reach a customer’s event sources that are accessible only inside the VNET, such as Storage accounts, Event Hubs, and so forth.

The result? Hindered scaling. You might still see some scaling, but it’s likely driven by the HTTP trigger, which uses a different component for its scaling mechanism. This gives the illusion of effective scaling, but in reality you aren’t harnessing the full power of dynamic scaling.

The solution? Enable Runtime Scale Monitoring. With this feature activated, scaling decisions are processed on the Azure Functions Host side, located within the customer’s VNET. This setup allows the Scale Controller to understand scaling decisions without directly interacting with the customer’s Event Sources.

How to Enable It? Head over to the Azure Portal, navigate to Configuration > Functions runtime settings, and turn on the “Runtime Scale Monitoring” option.
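If you prefer scripting the change, the same setting can be flipped with the Azure CLI; this is a sketch where the resource group and app name are placeholders:

```shell
# Enable Runtime Scale Monitoring on the function app's site config
az resource update \
  --resource-type Microsoft.Web/sites \
  --resource-group <RESOURCE_GROUP> \
  --name <FUNCTION_APP_NAME>/config/web \
  --set properties.functionsRuntimeScaleMonitoringEnabled=1
```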

3. Avoid Instantiating HTTP or Other Network Clients Repeatedly

A common mistake observed in Azure Functions is the repeated instantiation of network clients within the code path. This is not limited to C# but applies to other programming languages as well. The primary concern here is the unnecessary creation of new outbound connections without reusing them.

For instance, consider the following C# code:

// BAD CODE

using var client = new HttpClient();

This pattern opens a new connection every time, and left unchecked it can eventually produce a SocketException. This is particularly problematic in a serverless environment like Azure Functions, where network resources are often shared among multiple customers. The issue isn’t exclusive to HttpClient: if you’re using other network clients, such as the Azure SDK clients, likewise avoid creating a new client for every single request.

The Solution?

Consider making your client static or, better yet, employ Dependency Injection (DI) as a best practice. This ensures resource optimization and helps avoid potential network-related exceptions. For a deeper dive into implementing this solution, refer to: Use dependency injection in .NET Azure Functions | Microsoft Learn.
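As a minimal sketch of the static-client approach (the function name, queue name, and target URL are illustrative, and the in-process programming model is assumed):

```csharp
public static class HttpCallerExample
{
    // Created once per host instance and shared by all invocations,
    // so outbound connections are pooled instead of exhausted.
    private static readonly HttpClient httpClient = new HttpClient();

    [FunctionName("HttpCaller")]
    public static async Task RunAsync(
        [QueueTrigger("myqueue-items")] string myQueueItem,
        ILogger log)
    {
        // Reuses the shared client rather than constructing one per call.
        var response = await httpClient.GetAsync("https://example.com/api/health");
        log.LogInformation($"Processed {myQueueItem}: {response.StatusCode}");
    }
}
```

With DI, the equivalent is registering the client in Startup (for example via AddHttpClient) and taking it as a constructor parameter, which also makes the function easier to test.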

For more details:

Improper Instantiation antipattern — Azure Architecture Center | Microsoft Learn

Troubleshooting intermittent outbound connection errors in Azure App Service — Azure App Service | Microsoft Learn

4. Not Leveraging async/await in C#

When working with network clients or other operations that can be asynchronous in nature, it’s crucial to use the async/await pattern. While it might seem fine for simpler applications, in production settings where operations like file I/O or calls to external systems are common, ignoring the asynchronous pattern can lead to issues.

Specifically, not using the async/await pattern can cause your function to block a thread during an external call, leading to thread starvation. An important note: avoid using Thread.Sleep() in Azure Functions at all costs. Instead, pair async/await with await Task.Delay(), as this is a more efficient approach that doesn't block the thread.

Here’s a simple example without async/await:

// BAD CODE (if you have an external call or I/O operation)
public static class SimpleExample
{
    [FunctionName("QueueTrigger")]
    public static void Run(
        [QueueTrigger("myqueue-items")] string myQueueItem,
        ILogger log)
    {
        log.LogInformation($"C# function processed: {myQueueItem}");
    }
}

However, integrating async/await is straightforward. You just need to modify the method signature and utilize the asynchronous methods with await.

Here’s the revised example with async/await:

// GOOD CODE
public static class AsyncExample
{
    [FunctionName("BlobCopy")]
    public static async Task RunAsync(
        [BlobTrigger("sample-images/{blobName}")] Stream blobInput,
        [Blob("sample-images-copies/{blobName}", FileAccess.Write)] Stream blobOutput,
        CancellationToken token,
        ILogger log)
    {
        log.LogInformation("BlobCopy function processed.");
        await blobInput.CopyToAsync(blobOutput, 4096, token); // external call
    }
}

By adopting this approach, you’ll ensure that your Azure Functions perform efficiently, especially under high loads or when dealing with external resources.

5. Avoid Using Host Names Longer Than 32 Characters

Azure Functions mandates a maximum length of 32 characters for its host names.
Resource naming restrictions — Azure Resource Manager | Microsoft Learn

When the system generates the host ID, it truncates the function app name to fit within this 32-character limit. A potential issue arises when there’s a host ID collision, especially if a shared storage account is in use. This collision can impede essential Azure Functions capabilities such as scaling, lease management, and others.

As of October 15, 2023, the Azure Portal does not enforce this restriction. Thus, if you’re deploying functions using tools like the Azure CLI, PowerShell, ARM templates, Bicep, or Terraform, you need to be particularly vigilant about this naming constraint.

For a more comprehensive understanding and potential solutions, refer to the Host ID Considerations.
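One documented mitigation, if renaming the app isn’t an option, is to set an explicit host ID via an app setting so truncation never comes into play. A sketch (the value is a placeholder; it must be 32 characters or fewer and unique among apps sharing the storage account):

```
AzureFunctionsWebHost__hostid=myapp-prod-eastus-001
```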

6. Avoid Using Outdated Host and Extension Versions

Staying updated with the latest versions is paramount for optimal performance and security. Especially when it comes to Azure Functions Runtime, updating to the most recent version can resolve a multitude of known issues.

Recommended Azure Functions Runtime Versions — Microsoft Learn

It’s advisable to periodically update your libraries. Consider, for instance, the sample .csproj fragment for the .NET isolated worker below, which references several NuGet packages. For those using non-.NET languages, the focus should be on the extension bundle instead. Alongside keeping the host version current, it’s also vital to periodically update these SDKs and extension libraries.

<ItemGroup>
  <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.19.0" />
  <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.14.1" />
  <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Http" Version="3.0.13" />
  <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues" Version="5.2.0" />
</ItemGroup>

While updating might sound like basic “housekeeping,” it’s surprising how many users face issues because they only consider updates during outages. Various reasons account for this delay. Sometimes a third-party vendor developed the system, leaving only operational staff who cannot alter the code. In other instances, updating the version could inadvertently break a feature, leading to downtime. Never underestimate the potential impact of an outdated version during a production incident. Hence, regular updates are strongly recommended.

To further streamline and secure your updating process, consider enabling certain security features. For example, using Dependabot in GitHub can be instrumental. This bot not only assists in updating outdated libraries but also bolsters your system’s defense against potential threats.

For a step-by-step guide on setting up Dependabot, refer to the Dependabot Quickstart Guide — GitHub Docs.
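As a sketch, a minimal .github/dependabot.yml that watches NuGet dependencies weekly might look like this (the directory assumes the .csproj sits at the repository root):

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "nuget"   # covers the Worker SDK and extension packages
    directory: "/"               # folder containing the .csproj
    schedule:
      interval: "weekly"
```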

7. Exercise Caution When Sharing a Storage Account

While Azure Functions generally necessitate a dedicated Storage account, there’s a tendency among some users to share a single account across multiple functions. Although feasible, this approach isn’t advisable.

The storage account maintains various metadata specific to Azure Functions: for example, host keys, singleton lock blobs, and timer-trigger checkpoints in containers such as azure-webjobs-hosts and azure-webjobs-secrets, plus Durable Functions state if you use it.

Sharing one storage account among multiple FunctionApps could lead to confusion. When compounded with constraints like the 32-character limit for host names, unexpected issues can arise.

Moreover, challenges may crop up when you attempt to delete and then recreate a FunctionApp using the same Storage account. There’s potential for incorrect metadata or configurations within the storage account to cause issues. If you find yourself needing to remove and then recreate a FunctionApp associated with an existing Storage account, it’s wise either to set up a new storage account or to meticulously clean up the existing one — ensuring all blobs, queues, tables, and files are properly managed.

8. Avoid Logging Sensitive Information

A recurring oversight we’ve noticed with users is the inadvertent logging or display of confidential details, particularly secrets like connection strings or tokens. It’s imperative to ensure that such sensitive information never makes its way into logs.

One effective way to steer clear of this pitfall is by utilizing Managed Identities. With Managed Identity, there’s no need to juggle connection strings or SAS tokens, minimizing the risk of accidental exposure.

Use Identity-Based Connections with Azure Functions — Microsoft Learn
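For example, with an identity-based connection, the usual AzureWebJobsStorage connection string (which embeds an account key) can be replaced by a setting that names only the account, so there is no secret that could leak into logs. A sketch, where the account name is a placeholder:

```
# Instead of: AzureWebJobsStorage=DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...
AzureWebJobsStorage__accountName=<STORAGE_ACCOUNT_NAME>
```

The runtime then authenticates to the storage account with the app’s managed identity instead of a key.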

Another proactive approach is to employ sanitizers, which can mask or remove sensitive data before it gets logged. If you’re keen to delve deeper into this, you can examine how Azure handles sanitization in their function host:

Sanitizer in Azure Functions Host — GitHub

9. Exercise Caution with Multiple Triggers in an App

When building applications, especially in serverless architectures like Azure’s Function App, it’s tempting to incorporate multiple triggers. While the platform does allow this, I advise caution. This is particularly true for dynamic SKUs such as the Consumption Plan or the Elastic Premium Plan. Here’s why:

  1. Ambiguity in Scaling Decisions: With multiple triggers, the system aggregates the scaling needs of each one, and then arrives at a scaling decision. This aggregation can cloud the clarity of scaling patterns. Instead of each trigger dictating its own scaling needs, their collective demands could lead to unpredictable scaling outcomes.
  2. Resource Overhead: Incorporating a variety of triggers can lead to a situation where more computational resources (threads, CPU, memory) are used than necessary. Consider the scenario where you combine an HTTP trigger with an EventHub trigger. The Event Hub trigger might scale to a maximum of 32 instances, whereas the HTTP trigger might scale to, say, 100 instances. In this setup, all 100 instances will run the EventHub listener, but only 32 of these instances will be effectively used. Similarly, if you were to introduce a queue trigger, which might need only one worker due to a low queue volume, all 100 instances would still be listening or polling the queue. This means unnecessary system resources are being consumed, which could affect performance and cost.

Recommendation: It’s wise to segregate triggers based on their specific requirements and characteristics. By dedicating specific instances or apps to individual triggers, you can achieve more predictable scaling and efficient resource utilization.

In the world of cloud computing, it’s not just about what capabilities you can leverage, but also about how judiciously you use them. And when it comes to triggers in Function Apps, a little prudence goes a long way.

10. Risks of Overloading a Single Plan

Deploying multiple Function Apps on a single App Service Plan can seem efficient. However, it comes with challenges:

  1. Resource Competition: When numerous Function Apps share one server farm, they vie for the same CPU and memory. A surge in one app can deprive others, affecting performance and potentially causing failures.
  2. Troubleshooting Complexity: Identifying the root cause of performance issues becomes harder when multiple apps tap into shared resources.

Recommendation: Monitor resource utilization when hosting multiple apps on a shared plan. Consider allocating resource-heavy or crucial apps to a dedicated plan for consistent performance and easier issue resolution. Always assess the combined needs and growth projections of your apps before making a decision.

Conclusion: Navigating Through Anti-Patterns with the Right Tools

Throughout our discussion, we’ve delved into various anti-patterns that can emerge in application development and deployment. While these pitfalls can be challenging, there’s a silver lining: some of these anti-patterns can be identified using the Diagnose and Solve Problems tool.

Rather than navigating issues blindly, this tool points you directly to the root of the problem. And it doesn’t stop there. Alongside diagnosing the issue, it offers actionable recommendations to rectify it. This dual capability — to both identify and guide remediation — makes it an indispensable asset. I personally vouch for its efficacy; it’s a tool we consistently rely on, especially when addressing incidents.

When working with Azure Functions, it’s crucial to adhere to best practices. Doing so not only enhances the efficiency and reliability of your functions but also prevents potential pitfalls associated with common anti-patterns.
Explore more on Azure Functions best practices | Microsoft Learn

In closing, while being mindful of potential pitfalls is crucial, equally important is knowing the right tools at your disposal. ‘Diagnose and Solve Problems’ is one such powerful ally in ensuring your apps run smoothly and efficiently.



Written by Tsuyoshi Ushio

Senior Software Engineer — Microsoft
