Going Serverless with Python WSGI Apps
I’ve been writing web applications and services in Python since the late 1990s, and enjoy it so much that I created the Pecan web application framework way back in 2010. Configuring and deploying Python web applications, especially WSGI-compliant applications, is fairly straightforward, thanks to great WSGI servers like Gunicorn and uWSGI, and excellent Apache integration via mod_wsgi. But for many use cases, creating and maintaining one or more cloud servers adds unnecessary cost and complexity: security patches, kernel upgrades, SSL certificate management, and more can be a real burden.
Since the creation of AWS Lambda, “serverless” has become a pretty popular buzzword. Could Lambda provide a way to deploy Python WSGI applications that helps reduce cost, complexity, and management overhead? First, let’s consider what serverless really means.
Introducing Lambda
AWS Lambda is a cloud service that lets developers deploy and run code without provisioning or managing servers. Under the hood, there is of course still a server where code is run, but its existence is largely abstracted away. Lambda, and other services in the category, are likely better defined as “functions as a service” (FaaS).
Lambda provides built-in Python support, and Lambda functions can be invoked manually, or via an event triggered by another AWS service, including Amazon S3, Amazon DynamoDB, and even Amazon Alexa, just to name a few.
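To make the model concrete, here is a minimal sketch of an event-driven Lambda handler in Python. The event shape follows the documented S3 notification format, but the bucket and key names are hypothetical, and a real function would do actual work rather than return a string:

```python
# A minimal Lambda handler for an S3 "object created" notification.
# Illustrative sketch only -- the event structure mirrors the S3
# notification format; bucket/key values here are hypothetical.
def handler(event, context):
    record = event['Records'][0]['s3']
    bucket = record['bucket']['name']
    key = record['object']['key']
    # A real function would fetch or transform the object here.
    return f'processed s3://{bucket}/{key}'
```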
Lambda functions can also be invoked via HTTP through the use of the Amazon API Gateway, which opens up the possibility that WSGI applications could be exposed through Lambda. That said, the complexity of setting up a WSGI application to run within a Lambda execution environment is daunting.
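To get a sense of why it's daunting, here is a rough, toy sketch of what a hand-rolled adapter has to do: translate an API Gateway proxy event into a WSGI environ, invoke the application, and collect the response. This is purely illustrative, not the real serverless-wsgi implementation, and it skips request headers, binary bodies, and base64 handling entirely:

```python
import io
from urllib.parse import urlencode

# A trivial WSGI app standing in for a real application.
def hello_app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from WSGI']

# A bare-bones (and deliberately incomplete) translation of an
# API Gateway proxy event into a WSGI environ.
def lambda_handler(event, context):
    body = event.get('body') or ''
    environ = {
        'REQUEST_METHOD': event.get('httpMethod', 'GET'),
        'PATH_INFO': event.get('path', '/'),
        'QUERY_STRING': urlencode(event.get('queryStringParameters') or {}),
        'SERVER_NAME': 'lambda',
        'SERVER_PORT': '443',
        'SERVER_PROTOCOL': 'HTTP/1.1',
        'CONTENT_LENGTH': str(len(body)),
        'wsgi.version': (1, 0),
        'wsgi.url_scheme': 'https',
        'wsgi.input': io.BytesIO(body.encode('utf-8')),
        'wsgi.errors': io.StringIO(),
        'wsgi.multithread': False,
        'wsgi.multiprocess': False,
        'wsgi.run_once': False,
    }
    response = {}

    def start_response(status, headers, exc_info=None):
        response['statusCode'] = int(status.split()[0])
        response['headers'] = dict(headers)

    chunks = hello_app(environ, start_response)
    response['body'] = b''.join(chunks).decode('utf-8')
    return response
```

Multiply this by header normalization, encodings, and error handling, and the appeal of a ready-made adapter becomes obvious.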
The Serverless Framework
Enter the Serverless Framework, a toolkit for creating, managing, deploying, and operating serverless architectures. Serverless supports AWS Lambda, and other FaaS platforms, and makes the process of getting your code deployed to Lambda much easier. Serverless is written in JavaScript, and is easily installable through npm:
$ npm install serverless -g
Once installed, you can use the serverless tool from the command line to perform a whole host of tasks, such as creating new functions from templates, deploying functions to providers, and invoking functions directly.
Serverless WSGI
The serverless-wsgi plugin for the Serverless Framework allows you to take any Python WSGI application, and deploy it to Lambda with ease. Let’s take a look at how!
I’ve been working on a Python-based IndieAuth implementation called PunyAuth for a few weeks, and as an infrequently accessed web service, it’s a perfect candidate for a FaaS-backed deployment.
First, I installed the serverless-wsgi plugin:
$ npm install serverless-wsgi -g
Then, I created a file called punywsgi.py that exposes PunyAuth as a WSGI application:
# Use Pecan's deploy helper to build the WSGI application object
from pecan.deploy import deploy
app = deploy('my-config.py')
In order to bundle up PunyAuth and all of its dependencies, serverless-wsgi needs a requirements.txt file, which is easily generated with pip:
$ pip freeze > requirements.txt
Finally, I created a serverless.yml file that defines the service:
service: serverless-punyauth

plugins:
  - serverless-wsgi

custom:
  wsgi:
    app: punywsgi.app

provider:
  name: aws
  runtime: python3.6
  region: us-east-1
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - s3:*
      Resource:
        - arn:aws:s3:::cleverdevil-punyauth-testing/*

functions:
  app:
    handler: wsgi.handler
    events:
      - http: ANY /
      - http: 'ANY {proxy+}'
The serverless.yml file declares a service called serverless-punyauth, enables the serverless-wsgi plugin, and directs it to expose the WSGI app defined in punywsgi.app. When using serverless-wsgi, the bundled wsgi.handler can automatically map requests and responses coming in through the Amazon API Gateway to the deployed WSGI app.
In the case of PunyAuth, the function itself needs read/write access to a particular Amazon S3 bucket, which is accomplished here with an IAM role statement that explicitly grants this access.
At this point, the application is ready to be deployed to AWS Lambda.
$ serverless deploy
Serverless: Packaging Python WSGI handler...
Serverless: Packaging required Python packages...
Serverless: Linking required Python packages...
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Unlinking required Python packages...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service .zip file to S3 (2.08 MB)...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
................
Serverless: Stack update finished...
Service Information
service: serverless-punyauth
stage: dev
region: us-east-1
stack: serverless-punyauth-dev
api keys:
None
endpoints:
ANY - https://rpmchol040.execute-api.us-east-1.amazonaws.com/dev
ANY - https://rpmchol040.execute-api.us-east-1.amazonaws.com/dev/{proxy+}
functions:
app: serverless-punyauth-dev-app
Serverless: Removing old service versions...
Tada! We’ve deployed PunyAuth as a Lambda function!
Use Cases and Benefits
Deploying WSGI applications on Lambda is certainly cool, but it’s also not appropriate for all use cases. Typically, WSGI applications are deployed in always-running WSGI servers. With Lambda, the behind-the-scenes server that represents the environment for your application is magically started and stopped on an as-needed basis, and the application itself will need to be loaded during the function invocation on-demand. This adds some additional overhead, so in the case of high-performance or frequently-accessed applications, you’ll likely want to go another route.
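If that on-demand startup cost becomes noticeable, one common mitigation is to periodically invoke the function so the execution environment stays warm. The Serverless Framework supports schedule events for exactly this kind of thing; a sketch along these lines (the five-minute rate is an illustrative choice, not a recommendation):

```yaml
functions:
  app:
    handler: wsgi.handler
    events:
      - http: ANY /
      - http: 'ANY {proxy+}'
      # Invoke the function every five minutes to keep it warm
      - schedule: rate(5 minutes)
```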
That said, for applications like PunyAuth, where performance isn’t super critical, and the application is accessed relatively infrequently, this approach has a multitude of benefits.
Benefit: Cost
Deploying a Python WSGI application the traditional way, with always-on infrastructure, will certainly result in higher performance, but also in significantly higher cost. With Lambda, you only pay for the actual execution time of your functions, rather than paying for, say, an EC2 instance that is always on. That means that hosting a low-traffic WSGI app in Lambda could cost you pennies a month.
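A quick back-of-the-envelope calculation makes the point. The rates below are illustrative assumptions (roughly Lambda's published per-request and per-GB-second pricing at the time of writing, ignoring the free tier), so check current pricing before relying on them:

```python
# Rough monthly Lambda cost for a low-traffic WSGI app.
# Rates are assumed/illustrative -- verify against current AWS pricing.
PRICE_PER_REQUEST = 0.20 / 1_000_000  # dollars per request
PRICE_PER_GB_SECOND = 0.0000166667    # dollars per GB-second

def monthly_cost(requests, avg_seconds, memory_gb):
    compute = requests * avg_seconds * memory_gb * PRICE_PER_GB_SECOND
    return requests * PRICE_PER_REQUEST + compute

# 10,000 requests/month at 200 ms each on a 128 MB function
cost = monthly_cost(10_000, 0.2, 0.125)
```

Under those assumptions, the total comes out to well under a cent per month, which is the "pennies a month" scenario in action.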
Benefit: Management
While servers have certainly become easier to manage over the years, and managed hosting providers exist that will handle operating system updates and security patches, there’s no question that deploying to Lambda will reduce your management overhead. The Lambda execution environment is entirely managed by Amazon, allowing you to focus on your application code, rather than on managing a fleet of servers.
Benefit: Security
With AWS handling the heavy lifting of keeping the execution environment up-to-date with security patches, and the ability to apply fine-grained controls using AWS IAM roles, keeping your application secure is a bit easier.
Conclusion
AWS Lambda and the Serverless Framework provide a whole new way to host Python WSGI applications that can help reduce cost, slash management overhead, and improve security.
FAQ
- How do cold starts affect the performance of serverless Python WSGI apps, and what strategies can minimize their impact?
Cold starts in serverless Python WSGI apps can introduce latency during the initial request after the function has been idle. To mitigate this, optimizing the application's startup time by reducing dependencies and leveraging AWS's provisioned concurrency feature, which keeps a specified number of instances warm, can be effective. Regularly invoking the function through scheduled events also helps to reduce the occurrence of cold starts.
- Can existing Python WSGI apps be migrated to a serverless architecture without significant refactoring, or are there specific compatibility considerations?
Migrating existing Python WSGI applications to a serverless architecture may require minimal to moderate refactoring, depending on the application's complexity and external dependencies. Key considerations include ensuring the app's statelessness and adapting to the serverless model's event-driven nature. Using compatibility layers or serverless-specific frameworks can ease the transition by abstracting some of the underlying changes needed.
- How does serverless deployment compare in cost for high-traffic applications, considering the pay-per-use pricing model of AWS Lambda?
The cost-effectiveness of serverless deployment for high-traffic applications hinges on the application's ability to scale and resource usage efficiency. While the pay-per-use model of AWS Lambda can offer significant savings for variable traffic, it's essential to monitor and optimize the function executions and resource allocations to avoid unexpected costs. Implementing cost controls and alerts can help manage expenses in alignment with traffic patterns.
Author Spotlight:
Jonathan LaCour