# GenSynth Documentation

#### Creating the Python 3 Plugin

This section provides an example of a plugin for a performance metric interface, which will let you use custom performance metrics in GenSynth. You create this module in the Performance Metric Manager screen, accessed from the Entities tab.

The plugin provides a class that may be instantiated multiple times; it should not save state in any global or class variables, only in instance variables.

Generally, the flow is:

1. When starting to create Validation or Test metrics, GenSynth creates an instance of the user's class. The constructor initializes counters and other data structures.

2. For each batch of Validation (or Test) data, GenSynth runs the network to fetch the values of the metric tensors and the performance fetch tensors configured in the Job Configuration. The performance fetch values and input data are passed to the update() method of the user's object, which is called once per batch.

3. After all data has been sent, the get_worker_results() method of your object is called; it may return any info required for the next step. This method is called on each worker.

4. The results from each worker are put in a list, and passed to the reduce_all_worker_results() method of the user's object in the master process. This method must aggregate the data from each worker and return a dictionary of metrics with scalar values.

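The four steps above can be sketched as a simple driver loop. This is illustrative only: the real driver is internal to GenSynth, and `UserMetrics` and `run_phase` here are hypothetical names for a minimal plugin that counts samples.

```python
class UserMetrics:
    def __init__(self, folder_name):
        # Step 1: a fresh per-worker instance; state lives only in
        # instance variables.
        self.count = 0

    def update(self, data, tensor_values):
        # Step 2: called once per batch on each worker.
        self.count += len(data["label"])

    def get_worker_results(self):
        # Step 3: picklable partial result; no side-effects.
        return self.count

    def reduce_all_worker_results(self, worker_results_list):
        # Step 4: aggregate on the master; scalar-valued metrics.
        return {"num_samples": sum(worker_results_list)}


def run_phase(batches, num_workers=2):
    # Each worker gets its own plugin instance and scratch folder name
    # (paths here are placeholders).
    workers = [UserMetrics("/tmp/worker_%d" % i) for i in range(num_workers)]
    for i, (data, tensor_values) in enumerate(batches):
        workers[i % num_workers].update(data, tensor_values)
    partials = [w.get_worker_results() for w in workers]
    return workers[0].reduce_all_worker_results(partials)
```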
### Note

When you create your plugin, it must contain a public class with these methods. You must also configure the names of the fetch and output keys that the update() method requires in the Input Keys field of the Performance Metric entity.

**Constructor: `__init__(self, folder_name)`**

Definition: Creates a fresh object for collecting metrics; for example, initializes all state variables to zero, an empty container, or None.

Arguments:

- `folder_name`: A temporary scratch folder on the filesystem that may be used to write results. This folder is under the configuration output_dir and is therefore readable from any worker. Each worker, however, is given a unique folder.

Returns: None.

**Method: `update(self, data, tensor_values)`**

Definition: The update() method is called after each batch of data is evaluated, in the test and validation phases. The `data` parameter is structured data corresponding to the data fed to the network for this batch. The `tensor_values` parameter provides network tensor values; tensor names are mapped to keys in the job configuration.

Arguments:

- `data`: Data fetched from the Iterator GetNext operation of the dataset's tf.data.Iterator for the given phase, with the same (nested) structure as the tf.data.Dataset.
- `tensor_values`: A dictionary that maps fetch names (the configured Input Keys) to the values obtained by fetching the corresponding tensors. If a tensor is not scalar, the value is of numpy.array type, with the shape of the corresponding tensor.

Returns: A dictionary or None. For GenSynth Explain, if the config parameters for data keys or names are keys in this dictionary, the values from the returned dictionary are used for GenSynth Explain; if some names or data keys also refer to tensors or ops in the graph, the values from this dictionary override the tensor values for GenSynth Explain. This may be necessary to postprocess the values into a standard format. For more information about using these results, see the references to the update() function in the GenSynth Explain User Guide.

**Method: `get_worker_results(self)`**

Definition: The get_worker_results() method returns the partial result from the worker, to be provided to reduce_all_worker_results(). This method may return any data type that can be pickled for communication between workers. It should not reset the state, permitting further updates and more results. It should also have no side-effects: if it is called multiple times in a row, it must return the same value each time. Generally this method returns data directly, but it may instead write data to files in the folder_name directory and return the names of those files, provided the reduce method is designed for that approach.

Arguments: None.

Returns: Any data type that may be pickled for communication between workers.

**Method: `reduce_all_worker_results(self, worker_results_list)`**

Definition: The reduce_all_worker_results() method aggregates the partial results from each worker and produces the final metrics. The results from each worker are put in a list and passed to this method in the master process, which must aggregate the data from all workers and return a dictionary of metrics with scalar values.

Arguments:

- `worker_results_list`: A list of the items returned by each worker's get_worker_results() method. Note that this may be an empty list, used to solicit the list of supported metrics.

Returns: A dictionary that maps metric output keys to scalar values. The keys must be consistent across runs. If one of the metrics is the primary performance metric, it must be named at the time a new job is started. If a metric does not have a value (e.g., due to division by zero), it should be set to None.

This example Performance Metric module (for the 10-class Simpnet Tutorial) compares the labelled data for each class with the predictions from the model to create a confusion matrix, from which performance metrics are calculated in the reduce_all_worker_results() method.

When using the Simpnet tutorial, the prediction_values Performance Fetch Tensor would be set to the Accuracy/ArgMax:0 tensor when starting a New Job.

Example:

```python
import numpy as np


class ExampleMetrics:
    def __init__(self, folder_name):
        self.confusion_matrix = np.zeros([10, 10], dtype=int)

    def update(self, data, tensor_values):
        # true value from training label
        true_values = data["label"]
        # prediction from network tensor key
        prediction_values = tensor_values['prediction_values']
        batch_size = len(true_values)
        # Use the true value and predicted value for each sample
        # to update the confusion matrix.
        for index in range(batch_size):
            self.confusion_matrix[true_values[index],
                                  prediction_values[index]] += 1

    def get_worker_results(self):
        # partial results from each worker
        return self.confusion_matrix

    def reduce_all_worker_results(self, worker_results_list):
        # An empty worker_results_list (used to solicit the list of
        # supported metrics) returns all metrics as None.
        if len(worker_results_list) == 0:
            return {"recall": None, "precision": None}

        # sum confusion matrices across worker results
        cumulative_confusion_matrix = \
            np.sum(worker_results_list, axis=0)

        # If any of the class totals are zero, the corresponding
        # metric is set to None.
        class_label_totals = \
            np.sum(cumulative_confusion_matrix, axis=1)
        if 0 not in class_label_totals:
            recall_vector = np.zeros(10)
            for i in range(10):
                recall_vector[i] = \
                    cumulative_confusion_matrix[i, i] / \
                    class_label_totals[i]
            recall = np.average(recall_vector)
        else:
            recall = None

        class_prediction_totals = \
            np.sum(cumulative_confusion_matrix, axis=0)
        if 0 not in class_prediction_totals:
            precision_vector = np.zeros(10)
            for i in range(10):
                precision_vector[i] = \
                    cumulative_confusion_matrix[i, i] / \
                    class_prediction_totals[i]
            precision = np.average(precision_vector)
        else:
            precision = None

        # always return metrics, even if they are None
        return {
            "recall": recall,
            "precision": precision
        }
```
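The per-class loops in the example can also be written in vectorized NumPy form. The following sketch assumes the same confusion-matrix layout (rows are true labels, columns are predictions); the function name `recall_precision` is illustrative.

```python
import numpy as np


def recall_precision(confusion_matrix):
    # Macro-averaged recall and precision from a confusion matrix
    # whose rows are true labels and columns are predictions.
    cm = np.asarray(confusion_matrix, dtype=float)
    diag = np.diag(cm)                # correctly classified counts

    label_totals = cm.sum(axis=1)     # per-class true-label counts
    pred_totals = cm.sum(axis=0)      # per-class prediction counts

    # A zero class total would divide by zero, so the metric is None.
    recall = (np.average(diag / label_totals)
              if 0 not in label_totals else None)
    precision = (np.average(diag / pred_totals)
                 if 0 not in pred_totals else None)
    return {"recall": recall, "precision": precision}
```

This keeps the same None-on-division-by-zero behavior as the example while replacing the explicit per-class loops with array operations.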

### Note

Any Python modules imported by your module must be in the PYTHONPATH.