# GenSynth Documentation

#### Learn and Build

This final group provides the tools to set your learning and build options.

First is the Learn section, which tells GenSynth how you want the model to be trained. The following variables are available to you.

### Tip

DarwinAI recommends using the default settings so that GenSynth can learn the best policy for generating new models. However, you can specify the optimizer type and learning rate type that work best for you.

| Variable | Description |
| --- | --- |
| Mixed Precision Learning | Enabling this option allows GenSynth to learn faster and with less memory (allowing for larger batch sizes) by operating in mixed precision. The level of speedup depends on the hardware used. This does not affect the precision of the network, only the learning calculations. |
| Max Epochs | The upper bound on the number of training epochs completed for each cycle. |
| Optimizer Type | Select the TensorFlow optimizer of your choice. Note that different optimizers require different Optimizer Parameters. If you select the DarwinAI proprietary GenSynth Learn optimizer (the default), you must also provide a Momentum value between 0.0 and 0.999 inclusive (default 0.9). For the parameters required by the other optimizers, refer to the TensorFlow documentation for that optimizer type. |
| Learning Rate Type | If you have selected the GenSynth Learn optimizer, only the GenSynth Learn learning rate type is available, and it takes a single learning-rate parameter. For the other types, refer to the TensorFlow documentation for the parameters each one requires. |
| Optimizer Parameters | Each optimizer type requires values for different parameters. For help with these parameters, refer to the corresponding TensorFlow documentation. |
| Learning Rate Parameters | Each learning rate type requires values for different parameters. For help with these parameters, refer to the corresponding TensorFlow documentation. |

Next is the Build section. All fields here are optional.

| Variable | Description |
| --- | --- |
| Improve Model Input | Enabling this option lets GenSynth improve the provided model for higher accuracy. If an untrained model is provided, GenSynth Learn will attempt to train it; select this option if you are providing an untrained model. |
| Focus on Macro Exploration | Enabling this option tells GenSynth to learn and build neural networks with a greater focus on macroarchitecture design exploration. The generated models will exhibit larger macroarchitecture topology differences when this is enabled. |
| Maximum Cycles | The maximum number of GenSynth cycles (newly generated models) to complete. GenSynth may stop sooner if other criteria are met. |
| Target Size (Ratio) | The size of the generated network (in number of parameters) that you would like to achieve, relative to the original network. GenSynth may stop before the target size is reached if other criteria are met. |
| Maximize Perf. Metric | Enabling this option (the default) maximizes the Performance Metric Tensor defined in the Network definition. The Performance Metric Tensor usually represents accuracy and should be maximized; if the network's primary metric should instead be minimized, disable this setting (for example, when a loss tensor is used in lieu of an accuracy tensor). |
| Minimum Performance Target | Visible when Maximize Perf. Metric is enabled, this sets the stopping point for the performance metric. If the metric falls below this level, no more cycles will be executed. |
| Maximum Performance Target | Visible when Maximize Perf. Metric is disabled, this sets the stopping point for the performance metric. If the metric rises above this level, no more cycles will be executed. |
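The two performance targets are mirror-image stopping rules. GenSynth's actual stopping logic is not documented here, but a hypothetical sketch in plain Python captures the behavior the two fields describe:

```python
def should_stop(perf_metric, maximize, min_target=None, max_target=None):
    """Hypothetical illustration of the performance-target stopping rule.

    When maximizing (e.g. accuracy), stop once the metric falls below the
    Minimum Performance Target; when minimizing (e.g. a loss tensor),
    stop once it rises above the Maximum Performance Target.
    """
    if maximize:
        return min_target is not None and perf_metric < min_target
    return max_target is not None and perf_metric > max_target

# Maximizing accuracy: a drop below the 0.90 target stops further cycles
assert should_stop(0.85, maximize=True, min_target=0.90)
# Minimizing loss: cycles continue while loss stays below the 0.5 target
assert not should_stop(0.4, maximize=False, max_target=0.5)
```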

When you are happy with your selections, click the Start Job button to add the job to the queue. Your job will be immediately visible in the Incomplete Actions view. Once it starts, your job will be visible on the Home tab and on the History tab. When your job is complete, it will be visible only on the History tab.

Unless there are other jobs in the queue, the actual design generation process will start in about a minute. The system picks jobs from the queue in the order they were added as GPUs become available.

Access the list of jobs in the queue by clicking Incomplete Actions in the user drop-down menu.

Regularization options for improving how your model learns are available under the Advanced options in the Learn section.

To include either L1 or L2 regularization during the learning process, enable the Use Regularization toggle. You may specify a value for the L1 Parameter, the L2 Parameter, or both for a combination of the two.
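The L1 and L2 parameters are the usual lambda multipliers on the weight penalties added to the loss. A minimal plain-Python sketch of that standard penalty (parameter names are illustrative, not GenSynth's internal API):

```python
def l1_l2_penalty(weights, l1_param=0.0, l2_param=0.0):
    """Standard combined penalty: l1 * sum(|w|) + l2 * sum(w^2).

    l1_param and l2_param correspond to the L1 Parameter and
    L2 Parameter fields; either may be zero to use only one.
    """
    l1_term = l1_param * sum(abs(w) for w in weights)
    l2_term = l2_param * sum(w * w for w in weights)
    return l1_term + l2_term

# With the recommended starting value of 1e-5 for both parameters:
penalty = l1_l2_penalty([0.5, -1.0, 2.0], l1_param=1e-5, l2_param=1e-5)
```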

To enable Group Lasso regularization, enable both the Use Regularization and Use Group Lasso toggles. The Grouping Method may be either channel or layer. You must specify the Group Lasso Parameter. Optionally, you may set the fraction of the regularization (between 0.0 and 1.0) that comes from the L1 term.
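Group lasso penalizes the L2 norm of each group of weights (a channel or a layer, per the Grouping Method), which drives whole groups toward zero rather than individual weights. A rough plain-Python sketch, assuming each group is simply a list of floats:

```python
import math

def group_lasso_penalty(groups, lasso_param):
    """Sum of L2 norms over weight groups, scaled by the lasso parameter.

    Each group would hold the weights of one channel or one layer,
    depending on the Grouping Method; lasso_param corresponds to the
    Group Lasso Parameter field.
    """
    return lasso_param * sum(
        math.sqrt(sum(w * w for w in group)) for group in groups
    )

# Two groups with norms 5.0 and 0.0: the zero group contributes nothing,
# so the penalty is 0.1 * 5.0 = 0.5
penalty = group_lasso_penalty([[3.0, 4.0, 0.0], [0.0, 0.0]], lasso_param=0.1)
```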

| Variable | Description |
| --- | --- |
| Use Regularization | Enable this toggle to use regularization. |
| Gradient Clipping Ratio | Entering a gradient clipping ratio allows GenSynth to perform gradient norm clipping during learning. Good values to try are between 1 and 5. |
| L1 Parameter | The lambda multiplier for L1 regularization. This non-negative field is mandatory if L1 regularization is to be used. |
| L2 Parameter | The lambda multiplier for L2 regularization. This non-negative field is mandatory if L2 regularization is to be used. L2 is not available if Use Group Lasso is enabled. Note: L1 and L2 may be used together. |
| Use Group Lasso | Enabling this toggle applies group lasso regularization. |
| Group Lasso Parameter | The lambda multiplier for group lasso regularization. This non-negative field is mandatory if Use Group Lasso is enabled. |
| Grouping Method | One of "channel" or "layer". This field is optional; the default is layer. Grouping Method is only available when Use Group Lasso is enabled. |
| L1 Fraction | The fraction (between 0 and 1, default 0.0) of L1 regularization added to the group lasso regularization. The net regularization is `L1_fraction * L1_regularization_term + (1 - L1_fraction) * group_lasso_regularization_term`. For example, if L1_fraction is 0.3, the net regularization is `0.3 * L1_regularization_term + 0.7 * group_lasso_regularization_term`. An L1_fraction in the range (0, 1) gives sparse group lasso regularization rather than pure group lasso (L1_fraction = 0) or pure L1 (L1_fraction = 1). |
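Gradient norm clipping rescales the gradient whenever its global L2 norm exceeds the chosen ratio, which keeps a single large gradient step from destabilizing training. In plain Python (TensorFlow's `tf.clip_by_global_norm` does the equivalent over tensors; this sketch is for illustration only):

```python
import math

def clip_by_norm(grads, clip_ratio):
    """Rescale gradients so their global L2 norm does not exceed clip_ratio.

    clip_ratio corresponds to the Gradient Clipping Ratio field;
    gradients already within the ratio are returned unchanged.
    """
    norm = math.sqrt(sum(g * g for g in grads))
    if norm <= clip_ratio:
        return grads
    scale = clip_ratio / norm
    return [g * scale for g in grads]

# A gradient of norm 5.0 clipped to ratio 2.0 is scaled by 0.4
clipped = clip_by_norm([3.0, 4.0], clip_ratio=2.0)  # approximately [1.2, 1.6]
```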

### Tip

A best-practice starting point for L1 and L2 is 1E-5, with a higher end of 1E-4.