APPLIES TO: Basic edition, Enterprise edition (Upgrade to Enterprise edition)
Efficiently tune hyperparameters for your model using Azure Machine Learning. Hyperparameter tuning includes the following steps:

- Define the parameter search space
- Specify a primary metric to optimize
- Specify an early termination policy for poorly performing runs
- Allocate resources for hyperparameter tuning
- Launch an experiment with the defined configuration
- Visualize the training runs
- Select the best performing configuration for your model
Hyperparameters are adjustable parameters that you choose before training a model and that govern the training process itself. For example, to train a deep neural network, you decide the number of hidden layers in the network and the number of nodes in each layer prior to training the model. These values usually stay constant during the training process.
In deep learning / machine learning scenarios, model performance depends heavily on the hyperparameter values selected. The goal of hyperparameter exploration is to search across various hyperparameter configurations to find a configuration that results in the best performance. Typically, the hyperparameter exploration process is painstakingly manual, given that the search space is vast and evaluation of each configuration can be expensive.
Azure Machine Learning allows you to automate hyperparameter exploration in an efficient manner, saving you significant time and resources. You specify the range of hyperparameter values and a maximum number of training runs. The system then automatically launches multiple simultaneous runs with different parameter configurations and finds the configuration that results in the best performance, measured by the metric you choose. Poorly performing training runs are automatically terminated early, reducing the waste of compute resources; these resources are used instead to explore other hyperparameter configurations.
Automatically tune hyperparameters by exploring the range of values defined for each hyperparameter.
Each hyperparameter can either be discrete or continuous and has a distribution of values described by a parameter expression.
Discrete hyperparameters are specified as a `choice` among discrete values. `choice` can be:

- a `range` object
- a `list` object
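A minimal sketch of a discrete search space, assuming the `choice` parameter expression from the azureml-sdk HyperDrive package:

```python
from azureml.train.hyperdrive import choice

# Each entry maps a hyperparameter name to a discrete set of candidate values.
param_space = {
    "batch_size": choice(16, 32, 64, 128),
    "number_of_hidden_layers": choice(range(1, 5)),
}
```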
In this case, `batch_size` takes on one of the values [16, 32, 64, 128] and `number_of_hidden_layers` takes on one of the values [1, 2, 3, 4].
Advanced discrete hyperparameters can also be specified using a distribution. The following distributions are supported:
- `quniform(low, high, q)` - Returns a value like round(uniform(low, high) / q) * q
- `qloguniform(low, high, q)` - Returns a value like round(exp(uniform(low, high)) / q) * q
- `qnormal(mu, sigma, q)` - Returns a value like round(normal(mu, sigma) / q) * q
- `qlognormal(mu, sigma, q)` - Returns a value like round(exp(normal(mu, sigma)) / q) * q

Continuous hyperparameters are specified as a distribution over a continuous range of values. Supported distributions include:
- `uniform(low, high)` - Returns a value uniformly distributed between low and high
- `loguniform(low, high)` - Returns a value drawn according to exp(uniform(low, high)) so that the logarithm of the return value is uniformly distributed
- `normal(mu, sigma)` - Returns a real value that's normally distributed with mean mu and standard deviation sigma
- `lognormal(mu, sigma)` - Returns a value drawn according to exp(normal(mu, sigma)) so that the logarithm of the return value is normally distributed

An example of a parameter space definition:
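A minimal sketch, assuming the `normal` and `uniform` parameter expressions from the azureml-sdk:

```python
from azureml.train.hyperdrive import normal, uniform

param_space = {
    "learning_rate": normal(10, 3),          # mean 10, standard deviation 3
    "keep_probability": uniform(0.05, 0.1),  # uniform between 0.05 and 0.1
}
```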
This code defines a search space with two parameters - `learning_rate` and `keep_probability`. `learning_rate` has a normal distribution with mean value 10 and a standard deviation of 3. `keep_probability` has a uniform distribution with a minimum value of 0.05 and a maximum value of 0.1.
You can also specify the parameter sampling method to use over the hyperparameter space definition. Azure Machine Learning supports random sampling, grid sampling, and Bayesian sampling.
In random sampling, hyperparameter values are randomly selected from the defined search space. Random sampling allows the search space to include both discrete and continuous hyperparameters.
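A minimal sketch of configuring random sampling, assuming the azureml-sdk `RandomParameterSampling` class and the `param_space` dictionary defined above:

```python
from azureml.train.hyperdrive import RandomParameterSampling

# Values are drawn at random from the distributions in the search space.
param_sampling = RandomParameterSampling(param_space)
```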
Grid sampling performs a simple grid search over all feasible values in the defined search space. It can only be used with hyperparameters specified using `choice`. For example, the following space has a total of six samples:
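A sketch assuming the azureml-sdk `GridParameterSampling` class; the hyperparameter values here are illustrative assumptions, chosen so that three batch sizes crossed with two learning rates give six samples:

```python
from azureml.train.hyperdrive import GridParameterSampling, choice

# 3 choices x 2 choices = 6 grid points in total.
param_sampling = GridParameterSampling({
    "batch_size": choice(16, 32, 64),
    "learning_rate": choice(0.01, 0.1),
})
```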
Bayesian sampling is based on the Bayesian optimization algorithm and makes intelligent choices on the hyperparameter values to sample next. It picks the sample based on how the previous samples performed, such that the new sample improves the reported primary metric.
When you use Bayesian sampling, the number of concurrent runs has an impact on the effectiveness of the tuning process. Typically, a smaller number of concurrent runs can lead to better sampling convergence, since the smaller degree of parallelism increases the number of runs that benefit from previously completed runs.
Bayesian sampling only supports `choice`, `uniform`, and `quniform` distributions over the search space.
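A minimal sketch, assuming the azureml-sdk `BayesianParameterSampling` class:

```python
from azureml.train.hyperdrive import BayesianParameterSampling, choice, uniform

param_sampling = BayesianParameterSampling({
    "learning_rate": uniform(0.05, 0.1),
    "batch_size": choice(16, 32, 64, 128),
})
```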
Note

Bayesian sampling does not support any early termination policy (see Specify an early termination policy). When using Bayesian parameter sampling, set `early_termination_policy = None`, or leave off the `early_termination_policy` parameter.
Specify the primary metric you want the hyperparameter tuning experiment to optimize. Each training run is evaluated for the primary metric. Poorly performing runs (where the primary metric does not meet criteria set by the early termination policy) will be terminated. In addition to the primary metric name, you also specify the goal of the optimization - whether to maximize or minimize the primary metric.
- `primary_metric_name`: The name of the primary metric to optimize. The name of the primary metric needs to exactly match the name of the metric logged by the training script. See Log metrics for hyperparameter tuning.
- `primary_metric_goal`: Either `PrimaryMetricGoal.MAXIMIZE` or `PrimaryMetricGoal.MINIMIZE`; determines whether the primary metric will be maximized or minimized when evaluating the runs.

Optimize the runs to maximize 'accuracy'. Make sure to log this value in your training script.
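A sketch of these two settings; in the azureml-sdk they are passed as keyword arguments to the `HyperDriveConfig` shown later:

```python
from azureml.train.hyperdrive import PrimaryMetricGoal

# Maximize the 'accuracy' metric logged by the training script.
primary_metric_name = "accuracy"
primary_metric_goal = PrimaryMetricGoal.MAXIMIZE
```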
The training script for your model must log the relevant metrics during model training. When you configure the hyperparameter tuning, you specify the primary metric to use for evaluating run performance. (See Specify a primary metric to optimize.) In your training script, you must log this metric so it is available to the hyperparameter tuning process.
Log this metric in your training script with the following sample snippet:
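A minimal sketch, assuming the azureml-sdk `Run` context and a `val_accuracy` value computed by your training script:

```python
from azureml.core.run import Run

run_logger = Run.get_context()
# Log the validation accuracy under the same name used as the primary metric.
run_logger.log("accuracy", float(val_accuracy))
```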
The training script calculates the `val_accuracy` and logs it as 'accuracy', which is used as the primary metric. Each time the metric is logged, it is received by the hyperparameter tuning service. It is up to the model developer to determine how frequently to report this metric.
Terminate poorly performing runs automatically with an early termination policy. Early termination reduces the waste of resources, which are used instead to explore other parameter configurations.
When using an early termination policy, you can configure the following parameters that control when a policy is applied:
- `evaluation_interval`: the frequency for applying the policy. Each time the training script logs the primary metric counts as one interval. An `evaluation_interval` of 1 will apply the policy every time the training script reports the primary metric; an `evaluation_interval` of 2 will apply the policy every other time. If not specified, `evaluation_interval` is set to 1 by default.
- `delay_evaluation`: delays the first policy evaluation for a specified number of intervals. It is an optional parameter that allows all configurations to run for an initial minimum number of intervals, avoiding premature termination of training runs. If specified, the policy applies every multiple of `evaluation_interval` that is greater than or equal to `delay_evaluation`.

Azure Machine Learning supports the following early termination policies.
Bandit is a termination policy based on a slack factor/slack amount and evaluation interval. The policy terminates any run whose primary metric is not within the specified slack factor/slack amount of the best performing training run. It takes the following configuration parameters:
- `slack_factor` or `slack_amount`: the slack allowed with respect to the best performing training run. `slack_factor` specifies the allowable slack as a ratio; `slack_amount` specifies the allowable slack as an absolute amount instead of a ratio.
For example, consider a Bandit policy applied at interval 10. Assume the best performing run at interval 10 reported a primary metric of 0.8 with a goal to maximize the primary metric. If the policy was specified with a `slack_factor` of 0.2, any training run whose best metric at interval 10 is less than 0.66 (0.8/(1 + `slack_factor`)) will be terminated. If instead the policy was specified with a `slack_amount` of 0.2, any training run whose best metric at interval 10 is less than 0.6 (0.8 - `slack_amount`) will be terminated.
- `evaluation_interval`: the frequency for applying the policy (optional parameter).
- `delay_evaluation`: delays the first policy evaluation for a specified number of intervals (optional parameter).
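A minimal sketch matching the description below, assuming the azureml-sdk `BanditPolicy` class:

```python
from azureml.train.hyperdrive import BanditPolicy

# Evaluate at every interval, but not before interval 5; allow 10% slack.
early_termination_policy = BanditPolicy(slack_factor=0.1,
                                        evaluation_interval=1,
                                        delay_evaluation=5)
```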
In this example, the early termination policy is applied at every interval when metrics are reported, starting at evaluation interval 5. Any run whose best metric is less than 1/(1 + 0.1), or approximately 91%, of the best performing run will be terminated.
Median stopping is an early termination policy based on running averages of primary metrics reported by the runs. This policy computes running averages across all training runs and terminates runs whose performance is worse than the median of the running averages. This policy takes the following configuration parameters:
- `evaluation_interval`: the frequency for applying the policy (optional parameter).
- `delay_evaluation`: delays the first policy evaluation for a specified number of intervals (optional parameter).

In the example below, the early termination policy is applied at every interval starting at evaluation interval 5. A run will be terminated at interval 5 if its best primary metric is worse than the median of the running averages over intervals 1:5 across all training runs.
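A minimal sketch, assuming the azureml-sdk `MedianStoppingPolicy` class:

```python
from azureml.train.hyperdrive import MedianStoppingPolicy

early_termination_policy = MedianStoppingPolicy(evaluation_interval=1,
                                                delay_evaluation=5)
```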
Truncation selection cancels a given percentage of the lowest performing runs at each evaluation interval. Runs are compared based on their performance on the primary metric and the lowest X% are terminated. It takes the following configuration parameters:
- `truncation_percentage`: the percentage of lowest performing runs to terminate at each evaluation interval. Specify an integer value between 1 and 99.
- `evaluation_interval`: the frequency for applying the policy (optional parameter).
- `delay_evaluation`: delays the first policy evaluation for a specified number of intervals (optional parameter).

In the example below, the early termination policy is applied at every interval starting at evaluation interval 5. A run will be terminated at interval 5 if its performance at interval 5 is in the lowest 20% of performance of all runs at interval 5.
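A minimal sketch, assuming the azureml-sdk `TruncationSelectionPolicy` class:

```python
from azureml.train.hyperdrive import TruncationSelectionPolicy

early_termination_policy = TruncationSelectionPolicy(truncation_percentage=20,
                                                     evaluation_interval=1,
                                                     delay_evaluation=5)
```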
If you want all training runs to run to completion, set policy to None. This will have the effect of not applying any early termination policy.
If no policy is specified, the hyperparameter tuning service will let all training runs execute to completion.
For a conservative policy that provides savings without terminating promising jobs, consider a Median Stopping Policy with `evaluation_interval` 1 and `delay_evaluation` 5. These are conservative settings that can provide approximately 25%-35% savings with no loss on the primary metric (based on our evaluation data).

Control your resource budget for your hyperparameter tuning experiment by specifying the maximum total number of training runs. Optionally specify the maximum duration for your hyperparameter tuning experiment.
- `max_total_runs`: Maximum total number of training runs that will be created. This is an upper bound; there may be fewer runs, for instance, if the hyperparameter space is finite and has fewer samples. Must be a number between 1 and 1000.
- `max_duration_minutes`: Maximum duration in minutes of the hyperparameter tuning experiment. This parameter is optional; if present, any runs that would be running after this duration are automatically canceled.

Note

If both `max_total_runs` and `max_duration_minutes` are specified, the hyperparameter tuning experiment terminates when the first of these two thresholds is reached.
Additionally, specify the maximum number of training runs to run concurrently during your hyperparameter tuning search.
- `max_concurrent_runs`: Maximum number of runs to run concurrently at any given moment. If not specified, all `max_total_runs` will be launched in parallel. If specified, must be a number between 1 and 100.

Note
The number of concurrent runs is gated on the resources available in the specified compute target. Hence, you need to ensure that the compute target has the available resources for the desired concurrency.
Allocate resources for hyperparameter tuning:
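A sketch of these budget settings as plain assignments; in the azureml-sdk they are keyword arguments to the `HyperDriveConfig` shown later:

```python
max_total_runs = 20      # upper bound on the number of training runs
max_concurrent_runs = 4  # run four configurations at a time
```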
This code configures the hyperparameter tuning experiment to use a maximum of 20 total runs, running four configurations at a time.
Configure your hyperparameter tuning experiment using the defined hyperparameter search space, early termination policy, primary metric, and resource allocation from the sections above. Additionally, provide an `estimator` that will be called with the sampled hyperparameters. The `estimator` describes the training script you run, the resources per job (single or multi-gpu), and the compute target to use. Since concurrency for your hyperparameter tuning experiment is gated on the resources available, ensure that the compute target specified in the `estimator` has sufficient resources for your desired concurrency. (For more information on estimators, see how to train models.)
Configure your hyperparameter tuning experiment:
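A sketch that ties the pieces together, assuming the azureml-sdk `HyperDriveConfig` class and the `estimator`, `param_sampling`, and `early_termination_policy` objects defined above:

```python
from azureml.train.hyperdrive import HyperDriveConfig, PrimaryMetricGoal

hyperdrive_run_config = HyperDriveConfig(
    estimator=estimator,
    hyperparameter_sampling=param_sampling,
    policy=early_termination_policy,
    primary_metric_name="accuracy",
    primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
    max_total_runs=20,
    max_concurrent_runs=4,
)
```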
Once you define your hyperparameter tuning configuration, submit an experiment:
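A minimal sketch, assuming the azureml-sdk `Experiment` class:

```python
from azureml.core.experiment import Experiment

experiment = Experiment(workspace, experiment_name)
hyperdrive_run = experiment.submit(hyperdrive_run_config)
```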
`experiment_name` is the name you assign to your hyperparameter tuning experiment, and `workspace` is the workspace in which you want to create the experiment. (For more information on experiments, see How does Azure Machine Learning work?)
Often, finding the best hyperparameter values for your model is an iterative process, requiring multiple tuning runs that learn from previous hyperparameter tuning runs. Reusing knowledge from these previous runs accelerates the hyperparameter tuning process, reducing the cost of tuning the model and potentially improving the primary metric of the resulting model. When warm starting a hyperparameter tuning experiment with Bayesian sampling, trials from the previous run are used as prior knowledge to intelligently pick new samples and improve the primary metric. Additionally, when using random or grid sampling, any early termination decisions leverage metrics from the previous runs to determine poorly performing training runs.
Azure Machine Learning allows you to warm start your hyperparameter tuning run by leveraging knowledge from up to 5 previously completed / cancelled hyperparameter tuning parent runs. You can specify the list of parent runs you want to warm start from using this snippet:
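A sketch, assuming the azureml-sdk `HyperDriveRun` class; the run IDs here are hypothetical placeholders:

```python
from azureml.train.hyperdrive import HyperDriveRun

warmstart_parent_1 = HyperDriveRun(experiment, "warmstart_parent_run_ID_1")
warmstart_parent_2 = HyperDriveRun(experiment, "warmstart_parent_run_ID_2")
warmstart_parents_to_resume_from = [warmstart_parent_1, warmstart_parent_2]
```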
Additionally, there may be occasions when individual training runs of a hyperparameter tuning experiment are cancelled due to budget constraints or fail for other reasons. It is possible to resume such individual training runs from the last checkpoint (assuming your training script handles checkpoints). Resuming an individual training run uses the same hyperparameter configuration and mounts the outputs folder used for that run. The training script should accept the `resume-from` argument, which contains the checkpoint or model files from which to resume the training run. You can resume individual training runs using the following snippet:
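A sketch, assuming the azureml-sdk `Run` class; the run IDs are hypothetical placeholders:

```python
from azureml.core.run import Run

resume_child_run_1 = Run(experiment, "resume_child_run_ID_1")
resume_child_run_2 = Run(experiment, "resume_child_run_ID_2")
child_runs_to_resume = [resume_child_run_1, resume_child_run_2]
```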
You can configure your hyperparameter tuning experiment to warm start from a previous experiment, or resume individual training runs, using the optional parameters `resume_from` and `resume_child_runs` in the config:
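A sketch extending the earlier `HyperDriveConfig`, assuming the hypothetical `warmstart_parents_to_resume_from` and `child_runs_to_resume` lists from the snippets above:

```python
hyperdrive_run_config = HyperDriveConfig(
    estimator=estimator,
    hyperparameter_sampling=param_sampling,
    policy=early_termination_policy,
    resume_from=warmstart_parents_to_resume_from,
    resume_child_runs=child_runs_to_resume,
    primary_metric_name="accuracy",
    primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
    max_total_runs=20,
    max_concurrent_runs=4,
)
```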
The Azure Machine Learning SDK provides a Notebook widget that visualizes the progress of your training runs. The following snippet visualizes all your hyperparameter tuning runs in one place in a Jupyter notebook:
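A minimal sketch, assuming the azureml-widgets package and the `hyperdrive_run` submitted above:

```python
from azureml.widgets import RunDetails

RunDetails(hyperdrive_run).show()
```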
This code displays a table with details about the training runs for each of the hyperparameter configurations.
You can also visualize the performance of each of the runs as training progresses.
Additionally, you can visually identify the correlation between performance and values of individual hyperparameters using a Parallel Coordinates Plot.
You can visualize all your hyperparameter tuning runs in the Azure web portal as well. For more information on how to view an experiment in the web portal, see how to track experiments.
Once all of the hyperparameter tuning runs have completed, identify the best performing configuration and the corresponding hyperparameter values:
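A sketch, assuming the `hyperdrive_run` object from the earlier submission; `get_best_run_by_primary_metric` is the azureml-sdk accessor for the top performing run:

```python
best_run = hyperdrive_run.get_best_run_by_primary_metric()
best_run_metrics = best_run.get_metrics()
# The sampled hyperparameter values are passed to the script as arguments.
parameter_values = best_run.get_details()["runDefinition"]["arguments"]
```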
Refer to the train-hyperparameter-* notebooks in this folder:
Learn how to run notebooks by following the article Use Jupyter notebooks to explore this service.