
Plugin Development

OLIVE uses plugins to provide a service (such as speech activity detection or speaker recognition). Plugins must be developed to conform to a strict API in order to behave as expected once made available to OLIVE. This document outlines the common structures used and how to develop a plugin efficiently.

Plugin Structure

A plugin resides in a folder containing at a minimum the following structure:

PLUGIN_NAME
|- plugin.py
|- domains
   |- DOMAIN_NAME
      |- meta.conf

The PLUGIN_NAME may be 'sad-energy-v1', and the DOMAIN_NAME could be 'fast-v1'. Each plugin can have one or more domains, where a domain typically focuses on particular conditions (such as microphone or telephone audio) or, in the case of language-dependent capabilities like speech recognition or translation, particular languages. The plugin leverages models within the domain that are tailored to such conditions. The code within plugin.py is aware of the domain requested for a given call to the plugin and can therefore apply domain-specific processing as needed.

Each plugin has several functions that must be overridden in order to function within OLIVE. These functions are:

def init(self)
def load(self, domain_id)
def list_classes(self, domain_id)

Additionally, depending on the traits defined for the plugin, enrollment (adding classes to a domain, such as a speaker or language) or scoring functions must also be overridden. The meta.conf file in a domain folder should include at a minimum a label and a description, but may additionally include other options that overwrite default values, including resample_rate and timeout_weight:

    label: fast-v1
    description: A rapid speech activity detection algorithm based on energy
    resample_rate: 16000
    timeout_weight: 1000

Plugin Traits

A plugin must inherit one or more traits. These include those outlined below, some of which are specific to a type of data being processed. For more detailed information on Plugin Traits, refer to the linked documentation.

Audio Traits

  • GlobalScorer is for plugins designed to produce a single set of scores (one per class) for an entire audio file (e.g. speaker recognition (SID) or language recognition (LID), which assume one speaker/language per file).
  • RegionScorer is for plugins that produce multiple class scores associated with regions in time (e.g. keyword spotting identifies a keyword in a specific temporal range, speaker recognition that temporally localizes a speaker within a file that may contain other speakers).
  • FrameScorer is for plugins that inherently score short, fixed-duration units of time. E.g., speech activity detection gives a speech or non-speech decision every X milliseconds. It has additional methods for producing 'hard-decision' segments for a given threshold, which a plugin can override to apply custom padding/merging (in speech activity detection, for example).
  • AudioConverter trait is for plugins that support converting audio to a new form, such as speech enhancement/noise removal.

Video Traits

  • BoundingBoxScorer is used to provide a score both for a time in a video, and a location in a video frame. For example, face detection uses this trait.

Text Traits

  • TextTransformer is used for text processing plugins, in which raw text is provided as input and raw text is returned. For example, TextTransformer is used in machine translation plugins, wherein the input text is in language X and the output text is that text transformed into language Y.

Generic Traits

  • ClassModifier trait is for plugins that support adding new classes to and removing classes from a domain, as in SID and possibly LID.
  • CompositeTrait is a new, very flexible trait that allows results from multiple traits to be returned from a plugin. It also relaxes the tight coupling of traits like GlobalScorer and RegionScorer to audio. Thus a video processing plugin implementing the composite trait can return a RegionScorer result for a visual model, such as deep fake detection, indicating that a temporal region in the video is fake. CompositeTrait can bundle multiple scores together in a single call. For example, a plugin that performs both transcription and translation can return both TextTransformer (for translation) and RegionScorer (for speech recognition) results.
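
In code, a plugin declares the traits it supports by inheriting them in its plugin class, as in the full example later in this document:

    # Trait selection is expressed through class inheritance (see the complete
    # example under 'Example Plugin Code' below)
    class CustomPlugin(Plugin, RegionScorer, GlobalScorer, ClassModifier):
        ...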

Plugin Version Code

Each plugin must specify its own version, the minimum required OLIVE Runtime and OLIVE Software versions, and the creation and revision dates for the plugin. These should be specified in the plugin's __init__ function, and are as follows:

        self.version = '1.0.1'
        self.minimum_runtime_version = "5.0.0"
        self.minimum_olive_version = "5.0.0"
        self.create_date = "2020-5-1"
        self.revision_date = "2020-9-24"

OLIVE will use these values to validate against its version and the version of the runtime that's currently loaded.

For the Plugin version numbering, OLIVE and OLIVE plugins follow the Semantic Versioning 2.0.0 standard - if the version impact of any change is ever in question, please refer to this standard.

As a very brief summary, version numbers are formatted MAJOR.MINOR.PATCH, just as with OLIVE itself. Increment the:

  • MAJOR version when you make incompatible API changes,
  • MINOR version when you add functionality in a backwards compatible manner, and
  • PATCH version when you make backwards compatible bug fixes.

General Coding Guidelines

There are many, many plugins that have been written, and a lot has been learned about this process. There is no need to reinvent the wheel and try to create things from scratch that may have already been created and battle-hardened. With this in mind, it is recommended to review the code of existing plugins prior to developing your own, in order to leverage any code that will help in your plugin development. This may be as simple as merging scores from multiple regions in a scoring process, or updating options passed by the user at analysis time.

Configurable Options

A plugin may use options in several forms. Some may be internal to the plugin itself, others exposed to the user to configure, and others available to be updated at runtime during use. Due to the way in which the base plugin is loaded and spun out to workers on demand, it’s important to follow the guidelines below to ensure options are passed to the running thread correctly.

Static Parameters

Some parameters don’t change at all and should never go in the config, yet they are useful during initial development. Place these at the top of your plugin with names starting with ‘static_’:

      static_interpolate         = 4

User Config

Make the user config very simple, with ‘string’ = ‘value’ entries where the value can be a string, int, or float. If other types are needed (like a list), the loading function below will need to be edited. Comments must have ‘#’ as their first non-whitespace character. The intent of this file is to EXPOSE to the user the parameters they can play with. An example is:

$ cat sad-energy-v1/plugin_config.py
# USER specified parameters
user_config = dict(
    min_speech           = 0.3,
    sad_threshold        = 1.0
)

Default Config

Internal to plugin.py, we can have more parameters that we as developers need to test or change during development. This default config should contain all possible parameters. For this, create a simple dictionary:

    ##################################################
    # CONFIG - DO NOT CHANGE THESE UNLESS TOLD TO DO SO BY A DEVELOPER
    default_config = dict(
        # Configurable as [region scoring] parameters
        sad_threshold=1.0,
        sad_interpolate=4,
        sad_filter=11,
        min_speech=0.5
    )
    ##################################################

Initialization

During initialization, the default config is updated with the user_config using this code. The result is stored in self.config. Below is a snippet of code that is common in any plugin.

    import importlib    

    class CustomPlugin(Plugin, RegionScorer, ClassModifier):
        def __init__(self):
            # ... (other initialization attributes elided) ...
            self.VALID_PARAMS    = ['region'] + list(default_config.keys()) # These are the valid keys to check that users don’t pass rubbish keys in opts.
            # Load the user_config in memory
            self.config   = default_config
            loader        = importlib.machinery.SourceFileLoader('plugin_config', os.path.join(os.path.dirname(os.path.realpath(__file__)), 'plugin_config.py'))
            spec          = importlib.util.spec_from_loader(loader.name, loader)
            mod           = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(mod)
            self.config.update(mod.user_config)

Dynamic Options By Updating Local Config

Now users can pass opts as a dictionary to run_*_scoring, and these need to overwrite the base config values. For this, we use an update function that hands back a config we can then use for the scoring command (or the add class audio command). Note that we should never edit self.config once it is established at initialization, because of the way memory may or may not be shared between workers.

    # RegionScorer #

    def run_region_scoring(self, domain_id, audio, workspace, classes=None, opts=None):
        # Update options if they are passed in
        config = self.update_opts(opts)

    def add_class_audio(self, domain_id, audio, class_id, enrollspace, opts=None):
        domain = self.get_domains()[domain_id]
        # Get the config for this run
        config = self.update_opts(opts)

    def update_opts(self, opts):
        config = idt.Config(copy.deepcopy(self.config))
        if opts is not None:
            # Check that all passed options are valid for this plugin
            param_check = np.in1d(list(opts.keys()), self.VALID_PARAMS)
            if np.any(~param_check):
                raise Exception("Unknown parameter(s) passed [%s]. Please remove from the optional parameter list."
                                % ','.join(np.array(list(opts.keys()))[~param_check].tolist()))

            config.update(opts)
            # File-passed options are in text format, so we need to convert these as necessary
            if config.output_ivs_dump_path == 'None':
                config.output_ivs_dump_path = None

            if type(config.dump_detections) != bool:
                if config.dump_detections == 'True': config.dump_detections = True
                elif config.dump_detections == 'False': config.dump_detections = False
                else: self.escape_with_error("Parameter dump_detections set to '{}' but must be 'True' or 'False'".format(config.dump_detections))

            config.sad_threshold   = float(config.sad_threshold)
            config.sad_filter      = int(config.sad_filter)
            config.sad_interpolate = int(config.sad_interpolate)
            config.min_speech      = float(config.min_speech)

            logger.debug("Using user-defined parameter options, new config is: %s" % config)

        return config

Note that all options passed at runtime arrive as strings and must be cast to their correct types when updating the options.
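
As a minimal sketch of how that casting could be centralized (the helper name cast_from_defaults is hypothetical and not part of the OLIVE API; it assumes the config object supports dictionary-style access):

    def cast_from_defaults(config, defaults):
        # Hypothetical helper: cast string-valued options back to the type of the
        # corresponding default value. Booleans are handled explicitly because
        # bool('False') evaluates to True in Python.
        for key, default in defaults.items():
            value = config[key]
            if isinstance(value, str) and not isinstance(default, str):
                if isinstance(default, bool):
                    config[key] = (value == 'True')
                elif isinstance(default, int):
                    config[key] = int(value)
                elif isinstance(default, float):
                    config[key] = float(value)
        return config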

Variables

The best practice is to set all self.* and domain.* variables in the init and load functions (and in update_classes for ClassModifier plugins), and then never modify them anywhere else. This is due to the way memory is shared between workers. If you need to change one of these variables in scoring (for instance), take a copy of the variable and use a local one instead of self.* and domain.*. This is where the local config returned from self.update_opts(opts) is used.
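
A hedged sketch of that pattern inside a scoring call (the deep copy is only needed if the model must actually be modified for this call):

    def run_region_scoring(self, domain_id, audio, workspace, classes=None, opts=None):
        domain = self.get_domains()[domain_id]
        # Work on local copies; never mutate self.* or domain.* here
        config = self.update_opts(opts)
        local_model = copy.deepcopy(domain.model)  # only if the model must be changed for this call
        ...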

Defining the Scoring Traits for the API

The OLIVE GUI and workflow validation process query a plugin to see what a user can change, the default values, and the available choices. The defaults should come from self.config, and the two trait option types are CHOICE_TRAIT and BOOLEAN_TRAIT. The example below is for region scoring, but the word ‘region’ can be replaced with global or frame as needed for your plugin. For consistency, the parameters defined here should ONLY be the ones exposed to the user in plugin_config.py. Note the list passed as the options for sad_interpolate in the example, which restricts the choices available to the user.

    def get_region_scoring_opts(self):
        """
        These options are used in the OLIVE GUI and may be configured on the commandline by passing a file to --options
        """
        region_scoring_trait_options = [

            TraitOption('threshold', "Detection threshold", "Offsets scores so that 0.0 is the new threshold", TraitType.CHOICE_TRAIT, "", self.config.threshold),
            TraitOption('sad_interpolate', "SAD interpolation factor", "Higher values speed up SAD at a cost to accuracy", TraitType.CHOICE_TRAIT, list([1,2,4,8,16]), self.config.sad_interpolate),
            TraitOption('dump_detections_to_wave', "Enable saving of detections to waves", "Facilitates rapid listening of \
                detections through the CLI", TraitType.BOOLEAN_TRAIT, "", self.config.dump_detections_to_wave),
        ]

        return region_scoring_trait_options

Case (in)Sensitivity

In addition, please make sure that all file names within the plugin and domain directories are unique under case-insensitive comparison. Do not have two files whose names differ only by upper- or lower-case substitutions. For example, the following must not coexist:

  • Result.txt
  • result.txt
  • ReSuLT.TxT

Plugin/OLIVE Logging and Levels

A logger is imported into a plugin and used to log as necessary. It is recommended that you use the following logging levels in the situations described.

Error

Use when the results of analysis or training are wrong, might be wrong, or were not created at all, such that they should not be used for future work. The use of “error” is a clear message to the user that the results cannot be trusted, at least not without further investigation.

Warning

Use when user intervention might improve the results in some way, or when giving information to the user might help them in some way. The use of “warning” implies the results can be trusted. A warning must be understandable to the user, and there must be an action they can take to improve the situation. If not, this message is probably a Debug message (aimed at the developer).

Info

Use (rarely) to print out information of general use that does not require any action. For example, printing out the version of the software or the active configuration settings might be done in an Info logger output.

Debug

Use to print out information important or useful to the developer or researcher. Users should see these rarely. When they are seen, they should be passed on to developers or researchers. Debug is also used during development, but all such logging statements should be removed or disabled prior to final testing.
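
A short, illustrative sketch of how these levels might be used inside a plugin (the messages themselves are hypothetical; the logger object is the one imported with the plugin boilerplate):

    # Illustrative examples of choosing a level; messages are hypothetical
    logger.error("Scoring failed; no results were produced for this file")         # results missing or untrustworthy
    logger.warning("Audio is shorter than 2 seconds; scores may be unreliable")    # user can act to improve results
    logger.info("Loaded plugin version 1.0.0 with default configuration")          # general information, no action needed
    logger.debug("Energy frame scores: min=-3.1, max=12.4")                        # aimed at developers/researchers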

Discussion

The scheme described above is based primarily on how quickly a message must be acted on and who it is aimed at. Importance is not a direct factor. Error messages are expected to be acted on by users when they occur. Warnings are expected to be acted on by users on a regular basis. Info messages are not acted upon when generated; they are used when needed. Debug messages are used by developers/researchers.

Because Debug is used for two different purposes, if there are extra logging levels (like "Trace") it's fine to use Debug for run-time messages aimed at developers and Trace for active debugging during development.

Example Plugin Code

Below are several examples of plugins, including directory structures and code. These can be leveraged as templates when creating new plugins. The focus is only on global and region scoring and the enrollment of new class data. Combining all plugin code below will result in a valid, usable plugin for OLIVE.

Plugin Structure

sad-energy-v1/
|- plugin.py
|- plugin_config.py
|- domains/
  |- base-v1
    |- meta.conf
    |- model.txt

The meta.conf file within the base-v1 domain should contain the following:

    label: base-v1
    description: A rapid speech activity detection algorithm based on energy
    resample_rate: 8000
    timeout_weight: 1000

The resample_rate of 8000 instructs OLIVE to resample the audio to 8000 Hz if not there already. Setting this to 0 instructs OLIVE to pass the audio directly to the plugin without resampling. Additionally, a dummy model file is created:

    $ echo 0.0 > sad-energy-v1/domains/base-v1/model.txt

The plugin_config.py must have a user_config dictionary:

    # USER specified parameters
    user_config = dict(
        min_speech           = 0.3,
        sad_threshold        = 1.0
    )

Boilerplate Initialization

The following code imports modules for the plugin and initializes the plugin with multiple traits.

    import copy, os, glob, time, math, h5py, numpy as np, re, sys, datetime
    import importlib
    import shutil               # used by finalize_class below
    import scipy.signal as ss   # used for median filtering in get_speech below
    from olive.plugins import *
    import idento3 as idt

    default_config = idt.Config(dict(
        # DETECTION OPTIONS
        threshold             = 0.0,
        min_speech_for_region = 0.3,
        do_padding = True,
        region_pad = 0.1,
    ))

    # Static variables
    static_maximum_audio_duration = 10000  # in seconds to exemplify a static variable

    class CustomPlugin(Plugin, RegionScorer, GlobalScorer, ClassModifier):

        def __init__(self):
            self.task            = "SAD"
            self.label           = "Speech Activity Detection"
            self.description     = "An Energy-based SAD System"
            self.vendor          = "SRI"
            self.version         = '1.0.0'
            self.minimum_runtime_version = '5.1.0'
            self.minimum_olive_version = '5.1.0'
            self.create_date     = "2025-01-01"
            self.revision_date   = "2025-01-01"
            self.group           = "Speech"
            self.loaded_domains  = []
            self.loaded_base     = False
            self.loaded          = False
            self.config          = default_config
            self.VALID_PARAMS    = ['region'] + list(self.config.keys()) # For checking user inputs and flagging unknown parameters. Region and channel are passed with 5-column enrollment

            # Load the user_config in memory
            loader        = importlib.machinery.SourceFileLoader('plugin_config', os.path.join(os.path.dirname(os.path.realpath(__file__)), 'plugin_config.py'))
            spec          = importlib.util.spec_from_loader(loader.name, loader)
            mod           = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(mod)
            self.config.update(mod.user_config)

        def update_opts(self, opts, domain):
            # Copy values
            config = idt.Config(dict(self.config))

            if opts is not None:
                # Check that all passed options are valid for this plugin
                param_check = np.in1d(list(opts.keys()), self.VALID_PARAMS)
                if np.any(~param_check):
                    self.escape_with_error("Unknown parameter(s) passed [%s]. Please remove from the optional parameter list."
                                    % ','.join(np.array(list(opts.keys()))[~param_check].tolist()))

                config.update(opts)

                # File-passed options are in text format, so we need to convert these as necessary
                config.threshold             = float(config.threshold)
                config.min_speech_for_region = float(config.min_speech_for_region)
                config.region_pad            = float(config.region_pad)

                if type(config.do_padding) == str:
                    config.do_padding = True if config.do_padding == 'True' else False

                logger.debug("Using user-defined parameter options, new config is: %s" % config)

            return config

Finally, at the very end of the plugin.py file, the plugin must instantiate itself:

    # This line is very important! Every plugin should have one
    plugin = CustomPlugin()

Loading models

After initialization, the plugin’s load function is called with the domain_id to load. In this example plugin, a single domain exists. We utilize the get_artifact call on the domain object to fetch files from within the domain, such as ‘model.txt’. The same function exists as self.get_artifact, in which case a file is fetched from the root plugin directory.

    def load(self, domain_id, device=None):

        domain = self.get_domains()[domain_id]
        domain.device = device

        # Domain dependent components
        if domain_id not in self.loaded_domains:
            domain.model = open(domain.get_artifact('model.txt'), 'r').readlines()  # A simple example of fetching an artifact
            self.loaded_domains.append(domain_id)

        logger.info("Loading of plugin '%s' domain '%s' complet." % (self.label, domain_id))

Device and GPU Use

Many plugins leverage a GPU. As loading models often requires knowing whether they will be loaded to GPU or CPU, a device option is available for the load call.

Global Scoring

Global scoring is used when a single score per class is expected for the entire audio file. It does not return timestamps (see Region Scoring below). In this example plugin, the minimum, maximum and average energy of the audio is returned. Note that the local config is passed to get_speech in order to use the threshold.

    def run_global_scoring(self, domain_id, audio, workspace, classes=None, opts=None):
        domain = self.get_domains()[domain_id]
        audio.make_mono()

        # Deal with region/channel opts
        if opts is not None and 'region' in opts:
            audio.trim_samples(opts['region'])

        # Update options if they are passed in
        config = self.update_opts(opts, domain)

        speech, nrg = self.get_speech(audio, config)

        result = {'minimum': np.min(nrg), 'maximum': np.max(nrg), 'mean': np.mean(nrg)}

        return result



    def get_speech(self, audio, config, medfilt_length=41):
        eps = 10e-7
        wavdata, sample_rate = audio.data, audio.sample_rate
        win_size = int(0.025 * sample_rate)
        frame    = int(0.010 * sample_rate)

        # Energy based VAD (thr dB from the max power)
        num_frames   = int(np.floor((len(wavdata) - win_size + frame) / frame))
        index = np.tile(list(range(0, win_size)), (num_frames, 1)) + \
            np.tile((list(range(0, num_frames * frame, frame))), (win_size, 1)).T

        wav = np.abs(wavdata[index])
        pow = 10 * np.log(np.sum(wav ** 2 + eps, axis=1)) / np.log(10)
        posteriors = np.array((pow > config.threshold), dtype=np.int8)
        vad_path = np.array(ss.medfilt(posteriors, medfilt_length), dtype=np.int8)
        speech = vad_path > 0

        return speech, pow

Of course, we need to define the function to list the configurable options:

    def get_global_scoring_opts(self):
        """
        These options are used in the OLIVE GUI and may be configured on the commandline by passing a file to --options
        """

        global_scoring_trait_options = [
            TraitOption('sad_threshold', 'Detection threshold', 'Applied to SAD energy metric', TraitType.CHOICE_TRAIT, '', self.config.sad_threshold),
        ]

        return global_scoring_trait_options

Region Scoring

For region scoring, we return a dictionary with keys being the class(es) being identified and values being a list of tuples of (start_time, end_time, class, score). We make use of the Idento3 module in the OLIVE runtime to use an Alignment object which conveniently handles boolean indices, padding and getting start/end times for speech.

    def run_region_scoring(self, domain_id, audio, workspace, classes=None, opts=None):
        domain = self.get_domains()[domain_id]
        audio.make_mono()

        # Deal with region/channel opts
        if opts is not None and 'region' in opts:
            audio.trim_samples(opts['region'])

        # Update options if they are passed in
        config = self.update_opts(opts, domain)

        speech, nrg = self.get_speech(audio, config)

        align = idt.Alignment()
        align.add_from_indices('speech', speech, 'speech')
        if config.do_padding:
            align.pad(config.region_pad)

        result_list = []

        for st, en in align.get_start_end('speech', lab='speech', unit='seconds'):
            if en-st > config.min_speech:
                result_list.append((st, en, 'speech', 1.0))

        return {'speech': result_list}

Again, we need to define the function to list the configurable options:

    def get_region_scoring_opts(self):
        """
        These options are used in the OLIVE GUI and may be configured on the commandline by passing a file to --options
        """
        region_scoring_trait_options = [
            TraitOption('sad_threshold', 'Detection threshold', 'Applied to SAD energy metric', TraitType.CHOICE_TRAIT, '', self.config.sad_threshold),
            TraitOption('min_speech', 'Minimum speech', 'Region scoring minimum speech requirement', TraitType.CHOICE_TRAIT, '', self.config.min_speech),
        ]

        return region_scoring_trait_options

Listing classes for Plugins that Support Enrollment

Plugins can allow users to enroll data to represent or augment detectable classes. This could be used, for instance, when adding new languages to a language ID plugin. The API for this capability requires two new functions to be overridden:

    def list_classes(self, domain_id):

        domain = self.get_domains()[domain_id]

        enrollment_dir = self.get_enrollment_storage(domain_id)

        user_classes = [x for x in os.listdir(enrollment_dir)]

        return ['speech'] + user_classes

    def update_classes(self, domain_id):

        domain = self.get_domains()[domain_id]

        domain.classes = self.list_classes(domain_id)

The list_classes function is used by OLIVE and the UI to determine what classes can be returned by a plugin, whereas update_classes is used to update the models in memory after a plugin has had new enrollment data added. Importantly, the load function now needs to add a call to update_classes when loading a domain:

    self.update_classes(domain_id)
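
For example, one reasonable placement is at the end of the load function shown earlier (a sketch based on that example):

    def load(self, domain_id, device=None):

        domain = self.get_domains()[domain_id]
        domain.device = device

        # Domain dependent components
        if domain_id not in self.loaded_domains:
            domain.model = open(domain.get_artifact('model.txt'), 'r').readlines()
            self.loaded_domains.append(domain_id)

        # Refresh the in-memory class list so newly enrolled classes are visible
        self.update_classes(domain_id)

        logger.info("Loading of plugin '%s' domain '%s' complete." % (self.label, domain_id))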

Enrollment of New Data/Classes

Data enrolled in the plugin should be stored in the pre-defined directory offered by OLIVE for the plugin and domain. This is obtained with:

    self.get_enrollment_storage(domain_id)

Two functions are used to add audio. One is add_class_audio which processes a single audio file ready for enrollment - this is called multiple times on multiple threads when more than one file is being added for a class. Once a class’s new audio has been added, a finalize function is called to place the data in the correct location for the plugin.

    def add_class_audio(self, domain_id, audio, class_id, enrollspace, opts=None):
        domain = self.get_domains()[domain_id]
        config = self.update_opts(opts, domain)

        # A staging directory for dumping files for the audio file
        audio_dir = os.path.join(enrollspace, "staging", audio.id)
        utils.mkdirs(audio_dir)
        out_embed_filename = os.path.join(audio_dir, audio.id + '.txt')
        speech, nrg = self.get_speech(audio, config)
        data = {'speech': speech, 'nrg': nrg}
        idt.save_dict_in_hdf5(out_embed_filename, data)

Finalization is called after the staging of audio above:

    def finalize_class(self, domain_id, class_id, enrollspace):

        final_enrollment_dir = self.get_enrollment_storage(domain_id, class_id)

        for file in glob.glob(os.path.join(enrollspace, "staging", "*")):
            if len(os.listdir(file))>0:
                dest = os.path.join(final_enrollment_dir, os.path.basename(file))
                if os.path.exists(dest):
                    shutil.rmtree(dest)
                shutil.move(file, dest)
            else:
                logger.warn("Audio id [%s] for class_id [%s] failed to enroll" % (file, class_id))

Debugging Plugin Code

Debugging OLIVE plugin code at run time can be difficult in the client/server environment in which it is intended to be used. The best approach is verbose debugging output to determine where a problem exists. In most instances, one can develop using the command line interface with the provided OLIVE CLI client tools, such as the Java OliveAnalyze Suite and the Python olivepyanalyze tools. This is the recommended route for testing and debugging.

In extreme circumstances, i.e. where an issue with a plugin is preventing the OLIVE server from launching at all, the deprecated direct CLI localanalyze tools may be used. Setting parameters as below will allow for breakpoints or use of IPython's embed() to debug code mid-analysis:

localanalyze -j 1 --nochildren --verbose --debug $plugin/domains/$domain in.lst

Note that these CLI tools are not the focus of OLIVE development and do not always exhibit the same behavior as a client/server scenario, since they do not use the OLIVE server at all but directly execute plugin code. Therefore, they should be used cautiously, perhaps to get a plugin running, before thoroughly testing in the active server environment.

GPU Usage

Due to the way OLIVE spawns worker threads and the difficulty of sharing resources across CPU and GPU, we utilize a function to get the CUDA device and send the model to the GPU at run time when needed. The function below, get_cuda_device, is called from the run_*_scoring functions:

    def get_cuda_device(self, domain_id):
        domain = self.get_domains()[domain_id]

        if not hasattr(domain, 'device'):
            domain.device = None

        device_conf = domain.config.get('domain', 'device') if domain.device is None else domain.device
        cuda_device = "-1"

        if not ('gpu' == device_conf[:3] or 'cpu' == device_conf[:3]):
            self.escape_with_error("'device' parameter in meta.conf of domain [{}] should be 'cpu' or 'gpuN' where N is the index of the GPU to use. Instead, it is set to '{}'".format(domain_id, device_conf))

        if 'gpu' == device_conf[:3]:
            try:
                # Make sure gpu index can be extracted as int
                gpu_index = int(device_conf[3:])

            except ValueError:
                self.escape_with_error("'device' parameter in meta.conf of domain [{}] should be 'cpu' or 'gpuN' where N is the index of the GPU to use. Instead, it is set to '{}'".format(domain_id, device_conf))

            # Check for CVD
            if 'CUDA_VISIBLE_DEVICES' in os.environ:
                if os.environ['CUDA_VISIBLE_DEVICES'] == "":
                    self.escape_with_error("Requested gpu use in meta.conf of domain, but environment variable 'CUDA_VISIBLE_DEVICES' is empty. Either unset this variable or set it apprioriately to GPUs to be used")

                else:
                    cvd = np.array(os.environ['CUDA_VISIBLE_DEVICES'].split(','), dtype=int)
                    cvd_map = dict(zip(cvd, np.arange(len(cvd)).astype(int)))
                    if gpu_index not in cvd_map:
                        self.escape_with_error("Requested gpu {} in meta.conf of domain {} but this GPU was not listed in environment variable CUDA_VISIBLE_DEVICES.".format(gpu_index, os.environ['CUDA_VISIBLE_DEVICES']))

                    else:
                        gpu_index = cvd_map[gpu_index]

            cuda_device = "{}".format(gpu_index)
            logger.info("Allocated GPU {} to plugin/domain {}/{}".format(cuda_device, self.label, domain_id))

        return cuda_device

After defining the local config in the scoring function, we then add:

    # Device to run on
    if not hasattr(domain, 'cuda_device'):
        domain.cuda_device = self.get_cuda_device(domain_id)

    if domain.cuda_device != "-1":
        device = torch.device('cuda:{}'.format(domain.cuda_device))

    else:
        device = torch.device('cpu')

This local variable device can then be used to send a model to the GPU.
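
For example, with a PyTorch-based model this might look as follows (a sketch, not part of the OLIVE API; it assumes `import torch` at the top of plugin.py, that domain.model is a torch.nn.Module loaded in load(), and that `features` is a NumPy array of inputs computed earlier in the scoring function):

    # Move the already-loaded model to the chosen device and run inference using local variables
    model = domain.model.to(device)
    model.eval()
    with torch.no_grad():
        scores = model(torch.from_numpy(features).float().to(device))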