Using GCP to Load Test a Web Application
One of the applications I help maintain at work is web based and involves processing audio files passed to it. This raises the question of how best to load test it with a large volume of files of varying length. The most direct solution would be a script that, when run, uploads files to the application endpoint from my local machine and keeps doing so until stopped. That approach has several problems: it does a poor job of simulating many users uploading at once, and it ties up your local machine for as long as it runs. To get past these limitations, a far better solution is to store the files in a GCP Cloud Storage bucket and have a Cloud Function, triggered by Cloud Scheduler via a Pub/Sub topic, fetch them and upload them to the endpoint.
The diagram represents the cloud infrastructure at work here. All of it can be created and linked up manually in the GCP console, but to make the process repeatable I have deployed it with Terraform, and I will walk through each element of the Terraform plan below.
terraform {
  backend "local" {
    path = "./terraform.tfstate"
  }
}

locals {
  project      = "my-gcp-project-id"
  region       = "europe-west3"
  service-name = "loadtester"
  api-url      = "https://my/upload/endpoint"
}

variable "secret_data" {
  type        = string
  sensitive   = true
  description = "API key used to authenticate uploads to the endpoint"
}

provider "google" {
  project = local.project
  region  = local.region
}

resource "google_storage_bucket" "bucket" {
  name     = "loadtester-bucket"
  location = "EU"
}

resource "google_secret_manager_secret" "secret" {
  secret_id = "loadtester-api-secret"
  replication {
    automatic = true
  }
}

resource "google_secret_manager_secret_version" "secret-version" {
  secret      = google_secret_manager_secret.secret.id
  secret_data = var.secret_data
}

resource "google_secret_manager_secret_iam_binding" "member" {
  project   = google_secret_manager_secret.secret.project
  secret_id = google_secret_manager_secret.secret.secret_id
  role      = "roles/secretmanager.secretAccessor"
  members = [
    "serviceAccount:${local.project}@appspot.gserviceaccount.com"
  ]
}

resource "google_pubsub_topic" "topic" {
  name = "loadtester-topic"
}

resource "google_storage_bucket_object" "archive" {
  name   = "functions/loadtester.zip"
  bucket = google_storage_bucket.bucket.name
  source = "loadtester.zip"
}

resource "google_cloudfunctions_function" "function" {
  name    = "loadtester"
  runtime = "python37"
  region  = local.region
  event_trigger {
    event_type = "google.pubsub.topic.publish"
    resource   = google_pubsub_topic.topic.name
  }
  available_memory_mb   = 4096
  timeout               = 60
  source_archive_bucket = google_storage_bucket_object.archive.bucket
  source_archive_object = google_storage_bucket_object.archive.name
  entry_point           = "handler"
  environment_variables = {
    "API_URL"             = local.api-url
    "API_KEY_SECRET_NAME" = google_secret_manager_secret.secret.name
    "BUCKET_NAME"         = google_storage_bucket.bucket.name
  }
}

resource "google_cloud_scheduler_job" "short-scheduler" {
  name      = "loadtester-short-scheduler"
  schedule  = "* * * * *"
  time_zone = "UTC"
  region    = local.region
  pubsub_target {
    topic_name = google_pubsub_topic.topic.id
    data = base64encode(jsonencode({
      "file_type" : "short"
    }))
  }
}

resource "google_cloud_scheduler_job" "mid-scheduler" {
  name      = "loadtester-mid-scheduler"
  schedule  = "* * * * *"
  time_zone = "UTC"
  region    = local.region
  pubsub_target {
    topic_name = google_pubsub_topic.topic.id
    data = base64encode(jsonencode({
      "file_type" : "mid"
    }))
  }
}

resource "google_cloud_scheduler_job" "long-scheduler" {
  name      = "loadtester-long-scheduler"
  schedule  = "* * * * *"
  time_zone = "UTC"
  region    = local.region
  pubsub_target {
    topic_name = google_pubsub_topic.topic.id
    data = base64encode(jsonencode({
      "file_type" : "long"
    }))
  }
}
First we want to create a bucket to hold the function source code, as well as the files we want to upload.
resource "google_storage_bucket" "bucket" {
name = "loadtester-bucket"
location = "EU"
}
Most web applications require an API key to be attached to the request to authenticate the upload. For this I have included a GCP Secret Manager secret, which is pulled into the script when the function is invoked. Notice that the secret value is passed in as a variable rather than a local. Because it is sensitive data, we want to limit the places it appears as much as possible. Creating secrets and handling sensitive data in Terraform also means being extra mindful of where the backend state is stored. In this case it is a local state file, so it is reasonably safe, but if you store your backend remotely, make sure it is secured. Defining the value as a variable with no default means Terraform will prompt for it when you apply the plan, so no trace of it is kept anywhere except the state.
NOTE: If you wish to avoid keeping the API key in the state file at all, the secret can be created manually and only its name referenced in the Terraform file.
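The variable itself is declared with no default, and marked sensitive (supported from Terraform 0.14) so that the value stays out of plan output, although it will still end up in the state:

variable "secret_data" {
  type        = string
  sensitive   = true
  description = "API key used to authenticate uploads to the endpoint"
}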
resource "google_secret_manager_secret" "secret" {
secret_id = "loadtester-api-secret"
replication {
automatic = true
}
}
resource "google_secret_manager_secret_version" "secret-version" {
secret = google_secret_manager_secret.secret.id
secret_data = var.secret_data
}
Here we have also given the App Engine default service account permission to access the secret. This is the account Cloud Functions uses to run the function unless told otherwise. If you wish to use a different service account, it will have to be defined and granted the same role, as sketched after the binding below.
resource "google_secret_manager_secret_iam_binding" "member" {
project = google_secret_manager_secret.secret.project
secret_id = google_secret_manager_secret.secret.secret_id
role = "roles/secretmanager.secretAccessor"
members = [
"serviceAccount:${local.project}@appspot.gserviceaccount.com"
]
}
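If you would rather not rely on the default account, a dedicated service account can be created and granted the same role instead. A minimal sketch (the account name here is my own choice); the account would also need read access to the bucket, for example roles/storage.objectViewer:

resource "google_service_account" "loadtester" {
  account_id   = "loadtester-function"
  display_name = "Load tester Cloud Function"
}

resource "google_secret_manager_secret_iam_member" "loadtester-access" {
  project   = google_secret_manager_secret.secret.project
  secret_id = google_secret_manager_secret.secret.secret_id
  role      = "roles/secretmanager.secretAccessor"
  member    = "serviceAccount:${google_service_account.loadtester.email}"
}

# Then, on the google_cloudfunctions_function resource below:
#   service_account_email = google_service_account.loadtester.email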
Next up we have the function infrastructure itself. Here we define the function along with the source code zip file that is uploaded to the bucket created earlier. A Pub/Sub topic has also been defined to let the Cloud Scheduler jobs trigger the function. The highlights of this section are the function settings themselves, as well as the environment variables.
The function settings are going to be unique to your specific use case. My function currently works by copying the file into the function's /tmp directory before uploading it to the endpoint, which means it needs a lot of memory (on Cloud Functions, /tmp is an in-memory filesystem, so the local copy counts against the function's memory). If your application can fetch files directly from the bucket, the memory can be much lower; one way to avoid the local copy entirely is sketched below. The timeout likewise depends on how long your function takes to complete, so feel free to play around with it to optimise.
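As an example, the blob can be streamed straight from Cloud Storage into the upload request, skipping /tmp. This is a sketch with a hypothetical helper, not what my function currently does; it needs a reasonably recent google-cloud-storage, and requests will still buffer the multipart body in memory, but the extra /tmp copy disappears:

import requests
from google.cloud import storage

def upload_from_bucket(bucket_name, blob_path, api_url):
    """Stream the object straight from GCS into the upload request, skipping /tmp."""
    client = storage.Client()
    blob = client.bucket(bucket_name).blob(blob_path)
    with blob.open("rb") as source:  # file-like reader over the object
        return requests.post(api_url, files={"file": source}, verify=True)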
For the environment variables we pass in the upload endpoint, the API key secret name, and the bucket name. The endpoint could be hard-coded into the function source, but for the sake of keeping all the configuration in one place I have passed it in here. The other two values are generated by this plan, so it makes sense to pass them in from here as well.
resource "google_pubsub_topic" "topic" {
name = "loadtester-topic"
}resource "google_storage_bucket_object" "archive" {
name = "functions/loadtester.zip"
bucket = google_storage_bucket.bucket.name
source = "loadtester.zip"
}
resource "google_cloudfunctions_function" "function" {
name = "loadtester"
runtime = python37
region = local.region
event_trigger {
event_type = "google.pubsub.topic.publish"
resource = google_pubsub_topic.topic.loadtester-topic
}
available_memory_mb = 4096
timeout = 60
source_archive_bucket = google_storage_bucket_object.archive.bucket
source_archive_object = google_storage_bucket_object.archive.name
entry_point = "handler"
environment_variables = {
"API_URL": local.api-url
"API_KEY_SECRET_NAME": google_secret_manager_secret.api-secret.name
"BUCKET_NAME": google_storage_bucket.bucket.name
}
}
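One note on the archive: the plan assumes loadtester.zip already exists next to the Terraform files. If you would rather have Terraform build it for you, the hashicorp/archive provider can zip the source on the fly. A sketch, assuming the function code lives in a ./src directory:

data "archive_file" "source" {
  type        = "zip"
  source_dir  = "./src"
  output_path = "loadtester.zip"
}

resource "google_storage_bucket_object" "archive" {
  name   = "functions/loadtester-${data.archive_file.source.output_md5}.zip"
  bucket = google_storage_bucket.bucket.name
  source = data.archive_file.source.output_path
}

Including the archive hash in the object name also means that a changed zip forces the function to redeploy.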
Finally we have defined the schedulers. For my application there are three, one each for short, medium, and long files; depending on your application's requirements, more or fewer may be needed. Notice the schedule as well, which is defined in cron format. The application I am targeting is designed to handle frequent incoming requests, so I have set the schedule as fast as cron allows, once a minute. This can be tailored for each scheduler to fit your needs.
resource "google_cloud_scheduler_job" "short-scheduler" {
name = "loadbalancer-short-scheduler"
schedule = "* * * * *"
time_zone = UTC
region = local.regionpubsub_target {
topic_name = google_pubsub_topic.topic.id
data = base64encode(jsonencode({
"file_type": "short"
}))
}
}resource "google_cloud_scheduler_job" "mid-scheduler" {
name = "loadbalancer-mid-scheduler"
schedule = "* * * * *"
time_zone = UTC
region = local.regionpubsub_target {
topic_name = google_pubsub_topic.topic.id
data = base64encode(jsonencode({
"file_type": "mid"
}))
}
}resource "google_cloud_scheduler_job" "long-scheduler" {
name = "loadbalancer-long-scheduler"
schedule = "* * * * *"
time_zone = UTC
region = local.regionpubsub_target {
topic_name = google_pubsub_topic.topic.id
data = base64encode(jsonencode({
"file_type": "long"
}))
}
}
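Since the three jobs differ only in their payload, they could also be collapsed into a single resource with for_each; functionally equivalent, just less repetition:

resource "google_cloud_scheduler_job" "scheduler" {
  for_each = toset(["short", "mid", "long"])

  name      = "loadtester-${each.key}-scheduler"
  schedule  = "* * * * *"
  time_zone = "UTC"
  region    = local.region
  pubsub_target {
    topic_name = google_pubsub_topic.topic.id
    data = base64encode(jsonencode({
      "file_type" : each.key
    }))
  }
}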
Now that we have covered the infrastructure, we can take a look at the function source code itself:
from google.cloud import storage
from google.cloud import secretmanager
from random import randint
import base64
import json
import os
import requests

# Configuration passed in from the Terraform plan via environment variables.
API_KEY_SECRET_NAME = os.environ.get('API_KEY_SECRET_NAME')
API_URL = os.environ.get('API_URL')
BUCKET_NAME = os.environ.get('BUCKET_NAME')


def get_random_file(length):
    """Pick a random file of the given length from the bucket and download it to /tmp."""
    print("Selecting file...")
    file_path = os.path.join("media_files", length)
    blobs = []
    storage_client = storage.Client()
    blobs_iterator = storage_client.list_blobs(BUCKET_NAME, prefix=file_path)
    for blob in blobs_iterator:
        if not blob.name.endswith("/"):  # skip "folder" placeholder objects
            blobs.append(blob.name.split("/")[-1])
    file_name = blobs[randint(0, len(blobs) - 1)]
    bucket = storage_client.bucket(BUCKET_NAME)
    blob = bucket.blob(os.path.join(file_path, file_name))
    blob.download_to_filename(os.path.join("/tmp", file_name))
    print(f"File selected: {file_name}")
    return file_name


def get_api_key():
    """Fetch the latest version of the API key from Secret Manager."""
    secret_client = secretmanager.SecretManagerServiceClient()
    name = f"{API_KEY_SECRET_NAME}/versions/latest"
    access_response = secret_client.access_secret_version(request={"name": name})
    return access_response.payload.data.decode("UTF-8")


def upload_file(file_name, api_key):
    """Upload the downloaded file to the endpoint, then clean up /tmp."""
    print("Uploading file...")
    # Attach the API key in whatever form your endpoint expects; a bearer
    # token header is assumed here.
    headers = {"Authorization": f"Bearer {api_key}"}
    with open(os.path.join("/tmp", file_name), 'rb') as f:
        response = requests.post(API_URL, files={'file': f}, headers=headers, verify=True)
    if response.status_code == 200:
        os.remove(os.path.join("/tmp", file_name))
        print(f"{response.status_code} - upload successful")
    else:
        print(f"{response.content} - upload failed")


def handler(event, context):
    """Entry point: decode the Pub/Sub message and upload a matching file."""
    if 'data' in event:
        decoded = base64.b64decode(event['data']).decode('utf-8')
        pubsub_message = json.loads(decoded)
    else:
        pubsub_message = event
    file_length = pubsub_message['file_type']
    print(f"Load testing - file length: {file_length}")
    file_to_upload = get_random_file(file_length)
    api_key = get_api_key()
    upload_file(file_to_upload, api_key)
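For the function to build, the zip also needs a requirements.txt alongside the source so Cloud Functions installs the client libraries used above; pin versions to whatever you have tested against:

# requirements.txt, packaged into loadtester.zip next to main.py
google-cloud-storage
google-cloud-secret-manager
requests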
The file structure within the bucket separates the files by length. When triggered, the function randomly selects a file of the length matching the scheduler that fired and uploads it to the web application.
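Concretely, the bucket layout looks something like this (the file names are just placeholders):

loadtester-bucket/
  functions/loadtester.zip
  media_files/
    short/   (e.g. clip-10s.wav)
    mid/     (e.g. clip-2min.wav)
    long/    (e.g. clip-20min.wav)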
To begin load testing, start the desired Cloud Scheduler jobs from the GCP Cloud Scheduler console; they will trigger the function on their cron schedules, or you can fire them immediately with the ‘RUN NOW’ button. Multiple schedulers can run at the same time to simulate real-world traffic. When finished, the load test can be stopped by pausing the schedulers.
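You can also fire individual runs without touching the schedulers at all by publishing straight to the topic yourself. A sketch using the Pub/Sub client library, with the project and topic names from the plan above:

import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-gcp-project-id", "loadtester-topic")

# Same payload shape the schedulers send; the function's handler decodes it.
payload = json.dumps({"file_type": "short"}).encode("utf-8")
future = publisher.publish(topic_path, payload)
print(f"Published message {future.result()}")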