How to Switch AssemblyAI Speech-to-Text Model Tiers by User Email With LaunchDarkly Feature Flags

LaunchDarkly makes it easier for developers to swap between AI models based on application context, such as a user's email domain, device, zip code, region, or other custom context attributes you define. In this tutorial, we will build a streamlined example demonstrating how to switch between different AssemblyAI model tiers with LaunchDarkly based on a user's context in a Python and Flask application.

AssemblyAI’s Universal-1 speech recognition launch in April 2024 made the new state-of-the-art speech-to-text AI model available to developers via a traditional web API. The launch also created a two-tier system, with the “Best” tier providing the highest accuracy while the “Nano” tier provides significantly lower cost for a slight accuracy tradeoff.

To complete this tutorial, be sure to have the following:

  • An AssemblyAI account
  • A LaunchDarkly account
  • Python and pip installed

Both tools have a free trial or plan for you to get started without having to put down a credit card.

Configure Python environment and dependencies

Start by creating a new project directory named `launchdarkly-assemblyai-flask`, changing into that directory, creating a new virtual environment for the dependencies we will need to install, and activating that virtualenv.

Within your terminal window, type in the following commands:

mkdir launchdarkly-assemblyai-flask
cd launchdarkly-assemblyai-flask
python -m venv venv
source venv/bin/activate

We need to install the LaunchDarkly Python server-side SDK, as well as a few other Python packages. The following are the packages along with the specific version numbers that were used to create this tutorial, but this tutorial should work with future versions until backwards-incompatible versions are released:

  • assemblyai version 0.26.0
  • Flask version 3.0.3
  • launchdarkly-server-sdk version 9.4.0
  • validators version 0.28.3

Create a new requirements.txt file in your project directory and copy the following lines into the file:

assemblyai>=0.26.0
Flask>=3.0.3
launchdarkly-server-sdk>=9.4.0
validators>=0.28.3

Save the requirements.txt file and then install the dependencies into your virtual environment with the following command:

pip install -r requirements.txt

After the downloads and installations are complete, we're ready to start building our application. Note that you can obtain all of the code from this tutorial in this launchdarkly-python-examples repository under assemblyai-flask.

Coding a Flask app to serve transcriptions via an API

Create a new file named app.py within your project directory. Start the file with the following lines of code:

import assemblyai as aai, os, validators
from flask import Flask, request

The imports bring in our dependencies: the assemblyai package to call the AssemblyAI transcription API, the os module (for reading environment variables), the validators library, and the Flask library.

Next, a few lines below the imports, create the AssemblyAI Transcriber object and initialize the Flask application:

transcriber = aai.Transcriber()
app = Flask(__name__)

Now we'll build our endpoint that will take in an email and URL as part of the query string, validate the string, and then use the URL to call the AssemblyAI API.

Note that this is meant as a straightforward demonstration. In a real-world application, you will want to sanitize and control user inputs such as email addresses and URLs so they are not directly passed to the logic in your application.

Add the following lines to your app.py file under the line where the Flask app was initialized.

@app.route("/transcribe")
def email_transcription():
    email = request.args.get('email', '')
    if not validators.email(email):
        return "<h1>You need to specify a valid 'email' query parameter.</h1>"

    transcribe_url = request.args.get('url', '')
    if not validators.url(transcribe_url):
        return "<h1>You need to specify a valid 'url' query parameter.</h1>"

    # use the lower-cost Nano model tier
    config = aai.TranscriptionConfig(speech_model="nano")

    # uses ASSEMBLYAI_API_KEY from environment variables if set
    transcriber = aai.Transcriber(config=config)
    print(f"transcribing URL: {transcribe_url}")

    # this API can take awhile - typically done asynchronously
    transcript = transcriber.transcribe(transcribe_url)
    return f"<h1>Transcription</h1><p>{transcript.text}</p>"

The above code specifies the Nano model tier for the AssemblyAI Speech-to-Text API call, which happens in the transcriber.transcribe(transcribe_url) line. Typically you will want to call API functions like this asynchronously, because they can take time to complete or fail due to network issues, but this simplified synchronous setup will work for our example application.
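To illustrate the asynchronous pattern mentioned above, here is a minimal sketch using Python's standard-library thread pool. The slow_transcribe function is a stand-in for the real transcriber.transcribe call, which blocks on the network:

```python
from concurrent.futures import ThreadPoolExecutor

def slow_transcribe(url: str) -> str:
    # stand-in for transcriber.transcribe(url), which blocks on the network
    return f"transcript of {url}"

# submit() returns immediately with a Future; a request handler could store
# it and let the client poll a status endpoint instead of blocking
executor = ThreadPoolExecutor(max_workers=2)
future = executor.submit(slow_transcribe, "https://example.org/audio.m4b")
result = future.result()  # blocks only at the point the result is needed
```

In a real application you would also handle exceptions raised inside the future and shut the executor down cleanly when the app exits.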

Save the app.py file. We need to set a couple of environment variables before we can run our application, including an AssemblyAI API key. First, go to AssemblyAI's website and sign up.

After you enter your email address, password, and where you heard about them, the next screen will give you your API key. Copy the API key.

With the API key copied, create an environment variables file named .env and set two environment variables:

export ASSEMBLYAI_API_KEY="paste key here" # AssemblyAI API key
export FLASK_RUN_PORT=3000

Paste the copied API key into the ASSEMBLYAI_API_KEY value, replacing the "paste key here" text. Save the .env file.

Invoke the environment variables on the command line by running:

source .env
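Once sourced, any process started from this shell inherits those variables. One optional pattern, sketched below with a simulated value (a real run would get it from `source .env`), is to fail fast at startup when a required variable is missing:

```python
import os

def require_env(name: str) -> str:
    # fail fast at startup if required configuration is missing
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# simulated here for illustration; `source .env` sets this in your real shell
os.environ["ASSEMBLYAI_API_KEY"] = "example-key"
api_key = require_env("ASSEMBLYAI_API_KEY")
```

Failing early like this produces a clearer error than a confusing authentication failure deep inside an API call.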

Run the Flask application with flask run:

 * Tip: There are .env or .flaskenv files present. Do "pip install python-dotenv" to use them.
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on http://127.0.0.1:3000
Press CTRL+C to quit

Open your web browser and paste in http://127.0.0.1:3000/transcribe. You'll see the "You need to specify a valid 'email' query parameter." message:

That means it's working! We just need to add the email and url query parameters with valid values to get our transcription via the API.

Paste a URL with the following query string format into your browser to complete the API call and get the transcription: http://127.0.0.1:3000/transcribe?email=hello@mail.com&url=https://transcription.url

For example, if you wanted to use the URL for a reading of the Gettysburg Address by Abraham Lincoln, you could use the following URL for the M4B file from archive.org: https://archive.org/download/gettysburg_johng_librivox/gettysburg_johng_librivox.m4b.

To run this URL on your local server, here is the complete URL to send email and URL to localhost:

http://127.0.0.1:3000/transcribe?email=hello@gmail.com&url=https://archive.org/download/gettysburg_johng_librivox/gettysburg_johng_librivox.m4b

After a bit of a delay while the application completes the API call to handle the transcription, you should see something similar to this screenshot in your own browser:

That's what success looks like in getting the Nano model tier running via an API call from your local Flask application. In the next section, we'll add a string feature flag to swap between models based on what email address is specified.

Swapping between model tiers with a LaunchDarkly flag

Now we’ll swap between two model tiers with a LaunchDarkly flag. 

To add feature flags to the application, first sign up for a free trial LaunchDarkly account

When you enter the dashboard, click the "Create a flag" button. This example application will use the email address to swap between the back-end Nano and Best AssemblyAI model tiers based on potentially high-value customer email addresses, but you can use many different attributes in the context to guide how feature flags evaluate the user experience.

On the create flag screen, enter "High value customer by email" as the name, and it will auto-populate "high-value-customer-by-email" as the key value. Our application will use that key value as the LAUNCHDARKLY_FLAG_KEY environment variable.

Select the "String" flag type, then under variations, fill in "Best model tier" as the Name and "best" as the Value for the first variation, then "Nano model tier" as the Name and "nano" as the Value for the second variation, as shown in the following screenshot:

Press the "Create flag" button.

Change the Default rule to "Nano model tier":

Press the "+ Add rule" drop-down button and click on "Build a custom rule". Name the custom rule "Target prospective customer email domains". Select "email" as the Attribute, "ends with" as the Operator, and then enter some example email domains such as "ge.com", "microsoft.com" and "walmart.com".

For Rollout, select "Best model tier" for Serve. Finally, flip the toggle from "Off" to "On" at the top of the flag page.
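LaunchDarkly evaluates this rule on its side when the SDK requests a variation, but the logic it applies is equivalent to this local sketch (using the example domains entered above):

```python
# local illustration only: LaunchDarkly evaluates this rule server-side
TARGET_DOMAINS = ("ge.com", "microsoft.com", "walmart.com")

def evaluate_model_tier(email: str) -> str:
    # custom rule: email "ends with" a targeted domain serves "best"
    if any(email.endswith(domain) for domain in TARGET_DOMAINS):
        return "best"
    # otherwise the Default rule serves "nano"
    return "nano"
```

The advantage of the feature flag over hard-coding logic like this is that you can change the targeted domains, the served variation, or the default at any time from the dashboard without redeploying the application.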

One more step and then we can update the code in our application. Grab the SDK key for your project so that we can set it as an environment variable. Press "command + K" or click on the "Settings" icon at the bottom of the navigation:

Select "Projects" on the left navigation, then click the "LaunchDarkly" project. Copy the SDK key from your Test environment.

Edit the .env file and add two more lines with your LaunchDarkly SDK key and the Flag key.

export LAUNCHDARKLY_SDK_KEY='' # from launchdarkly UI
export LAUNCHDARKLY_FLAG_KEY='high-value-customer-by-email'

Paste your SDK key between the quotes, save the file, and run source .env again so the new variables are loaded. Now we need to update our code and then we're ready to run the final project.

Update the app.py file with the following new lines commented below.

import assemblyai as aai, os, validators
from flask import Flask, request
# add the following 3 import lines
import ldclient
from ldclient import Context
from ldclient.config import Config


# add these 3 lines to initialize the LaunchDarkly SDK
ld_sdk_key = os.getenv("LAUNCHDARKLY_SDK_KEY")
feature_flag_key = os.getenv("LAUNCHDARKLY_FLAG_KEY")
ldclient.set_config(Config(ld_sdk_key))


transcriber = aai.Transcriber()
app = Flask(__name__)


@app.route("/transcribe")
def email_transcription():
    email = request.args.get('email', '')
    if not validators.email(email):
        return "<h1>You need to specify a valid 'email' query parameter.</h1>"
    # add the following comment and 2 lines
    # specify the user and email to LaunchDarkly as a Context 
    context = Context.builder('transcript-app').kind('user')\
                                               .set("email", email).build()

    transcribe_url = request.args.get('url', '')
    if not validators.url(transcribe_url):
        return "<h1>You need to specify a valid 'url' query parameter.</h1>"

    # add this line to obtain the feature flag's evaluated value; the third
    # argument is the fallback served if LaunchDarkly cannot be reached
    flag_value = ldclient.get().variation(feature_flag_key, context, "nano")

    # update the following line so the evaluated flag value selects the model tier
    config = aai.TranscriptionConfig(speech_model=flag_value)

    # uses ASSEMBLYAI_API_KEY from environment variables if set
    transcriber = aai.Transcriber(config=config)
    print(f"transcribing URL: {transcribe_url}")

    # this API can take awhile - typically done asynchronously
    transcript = transcriber.transcribe(transcribe_url)
    return f"<h1>Transcription</h1><p>{transcript.text}</p>"
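Conceptually, the Context built in the handler above serializes to a small JSON document that the SDK sends to LaunchDarkly for rule evaluation. The sketch below shows its approximate shape for one of our example emails (illustrative only; the SDK assembles this for you):

```python
# approximate JSON shape of the single-kind context built by
# Context.builder('transcript-app').kind('user').set("email", email).build()
context_payload = {
    "kind": "user",
    "key": "transcript-app",
    "email": "matt@microsoft.com",
}
```

The "email" attribute is what the custom rule's "ends with" operator matches against, which is why setting it on the context is essential for the targeting to work.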

Run it with flask run and go to the URL http://localhost:3000/transcribe?email=matt@gmail.com&url=https://archive.org/download/gettysburg_johng_librivox/gettysburg_johng_librivox.m4b.

You'll be served the "nano" version of the transcription. Then try an email address ending with @microsoft.com, @ge.com, or @walmart.com, such as "http://localhost:3000/transcribe?email=matt@microsoft.com&url=https://archive.org/download/gettysburg_johng_librivox/gettysburg_johng_librivox.m4b".

You should see a different transcript version. This one is run with the "Best" model tier from AssemblyAI.

You can now modify the custom flag rule and use this String feature flag for your own application.

What's next?

We created a concise Flask application that transcribes valid URLs of audio files and added the introductory LaunchDarkly feature flags configurations to show how to swap between two AssemblyAI model tiers to balance accuracy with cost.

There are many more ways to expand what contexts you use to decide which model to run on the back end. You can also expand the web application to make better use of the email address, such as emailing the transcription results when they are ready rather than making the user wait on a long page load due to the synchronous back-end API call.
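The email-the-results idea above could be sketched with a background thread, shown here with a hypothetical send_transcript_email helper (in a real app it would hand off to an email service) and a stand-in for the transcribe call:

```python
import threading

sent = []  # stands in for an outbox; a real app would call an email service

def send_transcript_email(to_address: str, text: str) -> None:
    # hypothetical helper: records the payload instead of really sending mail
    sent.append((to_address, text))

def transcribe_in_background(email: str, url: str) -> threading.Thread:
    # return to the caller immediately; do the slow work off the request thread
    def work() -> None:
        text = f"transcript of {url}"  # stand-in for transcriber.transcribe(url).text
        send_transcript_email(email, text)
    thread = threading.Thread(target=work)
    thread.start()
    return thread
```

With this shape, the /transcribe endpoint could respond instantly with a "transcription started" page while the work finishes in the background. For production you would likely reach for a proper task queue instead of raw threads.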

Here are several more resources that will be valuable in determining what you can build next:

  • Viewing, comparing, and copying flag settings provides a good next step for handling feature flags between different environments such as testing and production
  • LaunchDarkly for AI makes it possible to do more complex model swapping configurations
  • Experiment Flags are a step beyond the standard feature flag where you will test your hypotheses when switching between AI models
  • Turning flags on and off gives more details on enabling and disabling feature flags, without any redeploys of code or other infrastructure changes

Join us on Discord, tweet @LaunchDarkly or @mattmakai, or send me an email at mmakai@launchdarkly.com and let us know what you're building with this open source code.


June 25, 2024