From b208cacbf4c816546e05a018e09de3163841fe63 Mon Sep 17 00:00:00 2001 From: Eduardo Cueto Mendoza Date: Sun, 16 Aug 2020 18:35:37 -0600 Subject: [PATCH] Last paper added to corpus --- Corpus/CORPUS.txt | 7535 +++++++++++++++++ ... for Stochastic Differential Equations.txt | Bin 117781 -> 0 bytes ...caling Laws for Neural Language Models.txt | Bin 127810 -> 0 bytes ...orks via L1 Regularization - CHEN YANG.txt | Bin 55892 -> 0 bytes Corpus/THE LOTTERY TICKET HYPOTHESIS.txt | Bin 201513 -> 0 bytes ... CARBON FOOTPRINTS OF MACHINE LEARNING.txt | Bin 112992 -> 0 bytes ...Neural Network Models More Efficiently.txt | 535 -- ... in Deep Neural Networks - Trevor Gale.txt | 678 -- ..._Platform-Aware_Neural_ECCV_2018_paper.txt | Bin 58773 -> 0 bytes ... ConvolutionalNeural Network Inference.txt | 1187 --- ...Memory-Efficient Neural Network Design.txt | Bin 88544 -> 0 bytes 11 files changed, 7535 insertions(+), 2400 deletions(-) delete mode 100644 Corpus/Scalable Gradients for Stochastic Differential Equations.txt delete mode 100644 Corpus/Scaling Laws for Neural Language Models.txt delete mode 100644 Corpus/Structured Pruning of Convolutional Neural Networks via L1 Regularization - CHEN YANG.txt delete mode 100644 Corpus/THE LOTTERY TICKET HYPOTHESIS.txt delete mode 100644 Corpus/TOWARDS THE SYSTEMATIC REPORTING OF THE ENERGY AND CARBON FOOTPRINTS OF MACHINE LEARNING.txt delete mode 100644 Corpus/The 4 Research Techniques to Train Deep Neural Network Models More Efficiently.txt delete mode 100644 Corpus/The State of Sparsity in Deep Neural Networks - Trevor Gale.txt delete mode 100644 Corpus/Tien-Ju_Yang_NetAdapt_Platform-Aware_Neural_ECCV_2018_paper.txt delete mode 100644 Corpus/You Cannot Improve What You Do not Measure FPGA vs. ASIC Efficiency Gaps for ConvolutionalNeural Network Inference.txt delete mode 100644 Corpus/vDNN Virtualized Deep Neural Networks for Scalable Memory-Efficient Neural Network Design.txt diff --git a/Corpus/CORPUS.txt b/Corpus/CORPUS.txt index 851f739..70d7dd3 100644 --- a/Corpus/CORPUS.txt +++ b/Corpus/CORPUS.txt @@ -21521,4 +21521,7539 @@ In this section, we show the plots of feature importance for all the tasks. <> +<> <> <> + + +<> <> <> +Scalable Gradients for Stochastic Differential Equations + +Xuechen Li. Ting-Kam Leonard Wong + +Google Research University of Toronto + +Abstract + +The adjoint sensitivity method scalably computes gradients of solutions to ordinary differential equations. We generalize this method to stochastic Differential equations, allowing time-efficient and constant-memory computation of gradients with high-order adaptive solvers. Specifically, we derive a stochastic differential equation whose solution is the gradient, a memory-efficient algorithm for caching noise, and conditions under which numerical solutions converge. In addition, we combine our method with gradient-based stochastic variational inference for latent stochastic differential equations. We use our method to fit stochastic dynamics defined by neural networks, achieving competitive performance on a 50-dimensional motion capture dataset. + +1 Introduction + +Deterministic dynamical systems can often be modeled by ordinary Differential equations (ODEs). The adjoint sensitivity method can efficiently compute gradients of ODE solutions with constant memory cost. This method was well-known in the physics, numerical analysis, and control communities for decades [3, 4, 60, 65]. 
Recently, it was combined with modern reverse-mode automatic differentiation packages, enabling ODEs with millions of parameters to be fit to data [12] and allowing more flexible density estimation and time-series models [23, 32, 72].
Stochastic differential equations (SDEs) generalize ODEs, adding instantaneous noise to their dynamics [55, 77, 78]. They are a natural model for phenomena governed by many small and unobserved interactions, such as motion of molecules in a liquid [8], allele frequencies in a gene pool [15], or prices in a market [79]. Previous attempts at fitting SDEs mostly relied on methods with poor scaling properties. The pathwise approach [22, 89], a form of forward-mode automatic differentiation, scales poorly in time with the number of parameters and states in the model. On the other hand, simply differentiating through the operations of an SDE solver [19] scales poorly in memory.
In this work, we generalize the adjoint method to stochastic dynamics defined by SDEs. We give a simple and practical algorithm for fitting SDEs with tens of thousands of parameters, while allowing the use of high-order adaptive time-stepping SDE solvers. We call this approach the stochastic adjoint sensitivity method.

<
> + +Table 1: Asymptotic complexity comparison. L is the number of steps used in a fixed-step solve, and D is the number of state and parameters. Both memory and time are expressed in units of the cost of evaluating the drift and diffusion functions once each. +There are two main difficulties in generalizing the ad.joint formulation for ODEs to SDEs. The first is mathematical: SDEs are defined using nonstandard integrals that usually rely on Ito calculus. The adjoint method requires solving the dynamics backwards in time from the end state. However, it is not clear exactly what running the SDE backwards means in the context of stochastic calculus, and when it correctly reconstructs the forward trajectory. We address this problem in Section 3, deriving a backward Stratonovich SDE whose dynamics compute the necessary gradient. +The second difficulty is computational: To retrace the steps, one needs to reconstruct the noise sampled on the forward pass, ideally without storing it. In Section 4, we give an algorithm that allows querying a Brownian motion sample at any time point arbitrarily-precisely, while only storing a single random seed. + +We combine our adjoint approach with a gradient-based stochastic variational inference scheme for efficiently marginalizing over latent SDE models with arbitrary differentiable likelihoods. This model fam.ily generalizes several existing families such as latent ODEs [12, 72], Gaussian state-space models [36, 81], and deep Kalman filters [40], and can naturally handle irregularly-sampled times series and missing observations. We train latent SDEs on toy and real datasets, demonstrating competitive performance compared to existing approaches for dynamics modeling. + +2 Background: Stochastic Flows + +2.1 Adjoint Sensitivity Method +The adjoint sensitivity method is an efficient approach to solve control problems relying on the adjoint (co-state) system [65]. Chen et al. [12] used this method to compute the gradient with respect to parameters of a neural ODE, which is a particular model among many others inspired by the theory of dynamical systems [10, 11, 26, 44, 46, 74, 86]. The method, shown in Algorithm 1, is scalable, since the most costly computation is a vector-Jacobian product defining its backwards dynamics. In addition, since the gradient is obtained by solving another ODE, no intermediate computation is stored as in the case of regular backpropagation [73]. + +2.2 Stochastic Differential Equations +We briefly define SDEs: Consider a filtered probability space <> on which an m-dimensional adapted Wiener process (aka Brownian motion) <> is defined. For a fixed terminal time t +<>, we denote by <> the time horizon. We denote the ith component of Wt by <>. A stochastic process <> can be defined by an Ito SDE + +<>, (1) + +where z0 . Rd is the starting state, and <> and <> are the drift and diffusion functions, respectively. For ease of presentation, we let m =1 in the following unless otherwise stated. Our contributions can be easily generalized to cases where +m> 1. Here, the second integral on the right hand side of (1) is the Ito stochastic integral [55]. When the coefficients are globally Lipschitz in both the state and time, there exists a unique strong solution to the SDE [55]. +2.3 Neural Stochastic Differential Equations +Similar to neural ODEs, one can consider drift and diffusion functions defined by neural networks, a model known as the neural SDE [32, 45, 82, 83]. 
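As a concrete illustration of the neural SDE just described, the sketch below parameterizes the drift and a diagonal diffusion with small networks and simulates sample paths with a fixed-step Euler-Maruyama loop. It is a minimal example written for this section (the class and helper names are ours), not the torchsde implementation released with the paper.

import torch
import torch.nn as nn


class NeuralSDE(nn.Module):
    """Drift b(z, t) and diagonal diffusion sigma(z, t) given by small MLPs."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.drift = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.Tanh(), nn.Linear(hidden, dim))
        self.diffusion = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.Tanh(), nn.Linear(hidden, dim), nn.Softplus())

    def _with_time(self, z, t):
        # Append the scalar time as an extra input feature for every batch element.
        tcol = torch.full((z.shape[0], 1), float(t), device=z.device)
        return torch.cat([z, tcol], dim=-1)

    def b(self, z, t):       # drift b(z, t)
        return self.drift(self._with_time(z, t))

    def sigma(self, z, t):   # elementwise (diagonal) diffusion sigma(z, t) > 0
        return self.diffusion(self._with_time(z, t))


def euler_maruyama(sde, z0, ts):
    """Fixed-step Euler-Maruyama simulation of an Ito SDE with diagonal noise."""
    z, path = z0, [z0]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        dt = t1 - t0
        dW = torch.randn_like(z) * dt.sqrt()              # Brownian increment ~ N(0, dt)
        z = z + sde.b(z, t0) * dt + sde.sigma(z, t0) * dW
        path.append(z)
    return torch.stack(path)                              # shape (len(ts), batch, dim)


sde = NeuralSDE(dim=2)
z0 = torch.randn(16, 2)               # batch of initial states
ts = torch.linspace(0.0, 1.0, 101)    # time grid on [0, T]
zs = euler_maruyama(sde, z0, ts)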
+Amongst work on neural SDEs, none has enabled an efficient training framework. In particular, Tzen and Raginsky [82] and Liu et al. [45] considered computing the gradient by simulating the forward dynamics of an explicit Jacobian matrix. This Jacobian has size of either the square of the number of parameters, or the number of parameters times the number of states, building on the pathwise approach [22, 89]. In contrast, our approach only requires a small number of cheap vector-Jacobian products, independent of the dimension of the parameter and state vectors. These vector-Jacobian products have the same asymptotic time cost as evaluating the drift and diffusion functions, and can be easily computed by modern automatic differentiation libraries [1, 16, 49, 59]. +2.4 Backward Stratonovich Integral +Our stochastic adjoint sensitivity method involves stochastic processes running both forward and back.ward in time. The Stratonovich stochastic integral, due to its symmetry, gives nice expressions for the backward dynamics and is more convenient for our purpose. Our results can be straightforwardly applied to ItSDEs as well, using a simple conversion (see e.g. [64, Sec. 2]). +Following the treatment of Kunita [41], we introduce the forward and backward Stratonovich integrals. Let <> be a two-sided filtration, where <> is the \sigma-algebra generated by <> for <> such that <>. For a continuous semi-martingale <> adapted to the forward filtration <>, the Stratonovich stochastic integral is + +<> + +where <> is a partition of the interval <> denotes the size of largest segment of the partition, and the limit is to be interpreted in the L2 sense. The Ito integral uses instead the left endpoint <> rather than the average. In general, the Ito and Stratonovich integrals differ by a term of finite variation. + +To define the backward Stratonovich integral, we consider c the backward Wiener process <> defined as <> for all t that is adapted to the backward filtration <>. For a continuous semimartingale <> adapted to the backward filtration, + +Algorithm 1 ODE Adjoint Sensitivity + +<> + +Algorithm 2 SDE Adjoint Sensitivity (Ours) + +<> + +Figure 1: Pseudocode of the (ODE) adjoint sensitivity method (left), and our generalization to Stratonovich SDEs (right). differences are highlighted in blue. Square brackets denote vector concatenation. +the backward Stratonovich integral is Moreover, each .s,t is a smooth diffeomorphism N flow of diffeomorphisms generated by the SDE (2). + +<
> + +from Rd to itself. We thus call S the stochastic (b) The backward flow <> satisfies the backward SDE: + +<> + +where <> is the partition. + +2.5 Stochastic Flow of diffeomorphisms +<> + +It is well known that an ODE defines a flow of diffeomorphisms [6]. Here we consider the stochastic analog <>, (3) for the Stratonovich SDE s + +<> + +for all <> and <> such that <>. + +<> (2) + +The coefficients in (2) and (3) differ by only a negative sign. This symmetry is due to our use of the Stratonovich integral (see Figure 2). + +<> + +Throughout the paper, we assume that both b and <> have infinitely many bounded derivatives w.r.t. the state, and bounded first derivatives w.r.t. time, i.e. <>, so that the SDE has a unique strong solution. Let <> be the solution at time t +when the process is started at z at time s. Given a realization of the Wiener process, this defines a collection of continuous maps <> from Rd to itself. +The following theorem shows that these maps are diffeomorphisms (after choosing a suitable modification) and that they satisfy backward SDEs. +Theorem 2.1 ([41, Theorem 3.7.1]). (a) With probability 1, the collection <> satisfies the flow property + +<>. + +3 Sensitivity via Stochastic Adjoint + +We present our main contribution: a stochastic analog of the adjoint sensitivity method for SDEs. We use (3) to derive another backward Stratonovich SDE, which we call the stochastic adjoint process. The direct implication is a gradient computation algorithm that works by solving a set of dynamics in reverse time, and relies on cheap vector-Jacobian products without storing any intermediate quantities. +The proof included in Appendix 9.1 relies on Its lemma in the Stratonovich form [41, Theorem 2.4.1]. We stress that this lemma considers only the case where the endpoint z is fixed and deterministic. +Now, we extend to the case where the endpoint is not deterministic, but rather computed from the forward flow. To achieve this, we compose the state process and the loss function. Consider As, <>. The chain rule gives As, <>. Let + +<> + +3.1 Stochastic Adjoint Process +The goal is to derive a stochastic adjoint process <> that can be simulated by evaluating only vector-Jacobian products, where <> is a + +<> (6) + +Note that As, <>. + +Since <> is scalar loss of the terminal state from the forward flow a constant, <> satisfies the augmented <> backward SDE system + +backward SDE for the process + +<> + +We first derive <>, assuming that <> follows the inverse flow from a deterministic end state ZT + +<> + +that does not depend on the realized Wiener process (Lemma 3.1). We then extend to the case where <> is obtained by the forward flow starting from a deterministic initial state z0 (Theorem 3.2). This latter part is unconventional, and the resulting value cannot be interpreted as the solution to a backward SDE anymore due to loss of adaptedness. Instead, we will formulate the result with the Ito map [69]. Finally, it is straightforward to extend the state Zt to include parameters of the drift and diffusion functions such that the desired gradient can be obtained for stochastic optimization; we comment on this step in Section 3.3. + +<> + +Since the drift and diffusion functions of this augmented system are <>, the system has a unique strong solution. Let s=0 and t = T . Since (7) admits a strong solution, we may write + +<>, (8) + +We first present the SDE for the Jacobian matrix of where <> denotes the path of the Wiener the backward flow. +process and + +Lemma 3.1 (Dynamics of <>). 
Consider the stochastic flow generated by the backward SDE (3) as in <> + +Theorem 2.1(b). Letting Js,t(z) := r.s,t(z), we have +is a deterministic measurable function (the Ito map) [69, Chapter V, definition 10.9]. Intuitively, F can be thought as a black box that computes the solution + +<> + +to the backward SDE system (7) given the position at time T and the realized Wiener process samples. Similarly, we let G be the solution map for the forward flow (2). The next theorem follows immediately from (6) and the definition of <>, we have +for all <> and <>. Furthermore, letting + +<>, (4) + + we have + +Theorem 3.2. For <>-almost all <>, + +<> + +where <> + +<>, (5) + +for all <> and <> and (8). + +Proof. This is a consequence of composing <> + + +This shows that one can obtain the gradient by "composing" the backward SDE system (7) with the original forward SDE (2) and ends our continuous-time analysis. + +3.2 Numerical Approximation +In practice, we compute solutions to SDEs with numerical solvers Fh and Gh, where <> denotes the mesh size of a fixed grid. The approximate algorithm thus outputs <>. The following theorem provides sufficient conditions for convergence. +Theorem 3.3. Suppose the schemes Fh and Gh satisfy the following conditions: (i) <> in probability as <>, and +(ii) for any <>, we have <> in probability as <>. Then, for any starting point z of the forward flow, we have + +<> + +in probability as <>. + +See Appendix 9.2 for the proof. Usual schemes such as the Euler-Maruyama scheme (more generally ItTaylor schemes) converge pathwise (i.e. almost surely) from any fixed starting point [38] and satisfies (i). While (ii) is strong, we note that the SDEs considered here have smooth coefficients, and thus their solutions enjoy nice regularity properties in the starting position. There.fore, it is reasonable to expect that the corresponding numerical schemes to also behave nicely as a function of both the mesh size and the starting position. To the best of our knowledge, this property is not considered +at all in the literature on numerical methods for SDEs (where the initial position is fixed), but is crucial in the proof of Theorem 3.3. In Appendix 9.3, we prove that condition (ii) holds for the Euler-Maruyama scheme. Detailed analysis for other schemes is beyond the scope of this paper. + +3.3 The Algorithm +So far we have derived the gradient of the loss with respect to the initial state. We can extend these results to give gradients with respect to parameters of the drift and diffusion functions by treating them as an additional part of the state whose dynamics has zero drift and diffusion. We summarize this in Algorithm 2, assuming access only to a black-box solver sdeint. All terms in the augmented dynamics, such as <> can be cheaply evaluated by calling <> and <>, respectively. +difficulties with non-diagonal diffusion. In principle, we can simulate the forward and backward adjoint dynamics with any high-order solver of choice. However, +for general matrix-valued diffusion functions ., to ob.tain a numerical solution with strong order 1 +beyond 1/2, we need to simulate multiple integrals of the Wiener process such as <>. +These random variables are difficult to simulate and costly to approximate [87]. +Fortunately, if we restrict our SDE to have diagonal noise, then even though the backward SDE for the stochastic adjoint will not in general have diagonal noise, it will satisfy a commutativity property [70]. 
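To make the vector-Jacobian-product pattern of Algorithm 2 concrete, below is a schematic reverse sweep written against a fixed-step Euler-Maruyama solve. For brevity it stores the forward states and Brownian increments and uses an Euler-type step, so it recovers the exact gradient of that discretized solve but not the constant-memory property; the actual method instead reconstructs the state by solving the backward SDE (7), re-queries the noise (Section 4), and uses the Milstein scheme. The sketch assumes the NeuralSDE example shown earlier; the helper names are ours.

import torch


def vjps(y, xs, v):
    """Vector-Jacobian products v^T dy/dx for each x in xs (zeros for unused inputs)."""
    grads = torch.autograd.grad(y, xs, grad_outputs=v, retain_graph=True, allow_unused=True)
    return [g if g is not None else torch.zeros_like(x) for g, x in zip(grads, xs)]


def forward_with_noise(sde, z0, ts):
    """Euler-Maruyama forward solve that records states and Brownian increments."""
    z, zs, dWs = z0, [z0], []
    with torch.no_grad():  # no graph needed; the reverse sweep builds local graphs per step
        for t0, t1 in zip(ts[:-1], ts[1:]):
            dt = t1 - t0
            dW = torch.randn_like(z) * dt.sqrt()
            z = z + sde.b(z, t0) * dt + sde.sigma(z, t0) * dW
            zs.append(z)
            dWs.append(dW)
    return zs, dWs


def adjoint_backward(sde, zs, dWs, ts, dLdzT):
    """Reverse sweep: propagate the state adjoint and accumulate parameter gradients
    using only vector-Jacobian products of the drift and diffusion."""
    params = list(sde.parameters())
    adj_z = dLdzT
    adj_p = [torch.zeros_like(p) for p in params]
    for i in reversed(range(len(ts) - 1)):
        dt, dW = ts[i + 1] - ts[i], dWs[i]
        with torch.enable_grad():
            z = zs[i].detach().requires_grad_(True)
            b, s = sde.b(z, ts[i]), sde.sigma(z, ts[i])
            gb = vjps(b, [z] + params, adj_z * dt)   # adj^T (db/dz, db/dtheta) * dt
            gs = vjps(s, [z] + params, adj_z * dW)   # adj^T (dsigma/dz, dsigma/dtheta) * dW
        adj_z = adj_z + gb[0] + gs[0]
        adj_p = [ap + pb + ps for ap, pb, ps in zip(adj_p, gb[1:], gs[1:])]
    return adj_z, adj_p   # dL/dz0 and dL/dtheta

For a terminal loss such as L(z_T) = sum(z_T), one would call adjoint_backward(sde, zs, dWs, ts, torch.ones_like(zs[-1])) after forward_with_noise to obtain dL/dz0 and the parameter gradients; each backward step costs only a few vector-Jacobian products, never a full Jacobian.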
When the noise is commutative, we can safely adopt certain numerical schemes of strong order 1.0 (e.g. Milstein [52] and stochastic Runge-Kutta [71]) without approximating multiple integrals or the Lévy area during simulation. We formally show this in Appendix 9.4.
One may also consider numerical schemes with high weak order [39]. However, analysis of this scenario is beyond the current scope.

3.4 Software and Implementation

We have implemented several common SDE solvers in PyTorch [59] with adaptive time-stepping using a PI controller [9, 30]. Following torchdiffeq [12], we have created a user-friendly subclass of torch.autograd.Function that facilitates gradient computation using our stochastic adjoint framework for SDEs that are subclasses of torch.nn.Module. We include a short code snippet covering the main idea of the stochastic adjoint in Appendix 9.12. The complete codebase can be found at https://github.com/google-research/torchsde.

4 Virtual Brownian Tree

Our formulation of the adjoint can be numerically integrated efficiently, since simulating its dynamics only requires evaluating cheap vector-Jacobian products, as opposed to whole Jacobians. However, the backward-in-time nature introduces a new difficulty: the same Wiener process sample path used in the forward pass must be queried again during the backward pass. Naively storing Brownian motion increments implies a large memory consumption and complicates the usage of adaptive time-stepping integrators, where the evaluation times in the backward pass may be different from those in the forward pass.
To overcome this issue, we combine Brownian trees with splittable pseudorandom number generators (PRNGs) to give an algorithm that can query values of a Wiener process sample path at arbitrary times.(1) This algorithm, which we call the virtual Brownian tree, has O(1) memory cost, and time cost logarithmic with respect to the inverse error tolerance.

(1) A numerical scheme is of strong order p if <> for all <>, where Xt and XN are respectively the coupled true solution and numerical solution, N and δ are respectively the iteration index and step size such that Nδ = T, and C is independent of δ.

<
> + +Figure 3: Evaluating a Brownian motion sample at time tq using a virtual Brownian tree. Our algorithm repeatedly bisects the interval, sampling from a Brownian bridge at each halving to determine intermediate values. Each call to the random number generator uses a unique key whose value depends on the path taken to reach it. + +4.1 Brownian Bridges and Brownian Trees +Levy's Brownian bridge [67] states that given a start time ts and end time te along with their respective Wiener process values ws and we, the marginal of the process at time <> is a normal distribution: +  +<>. (9) + +We can recursively apply this formula to evaluate the process at the midpoint of any two distinct timestamps where the values are already known. Constructing the whole sample path of a Wiener process in this manner results in what is known as the Brownian tree [17]. Storing this tree would be memory-intensive, but we show how to reconstruct any node in this tree as desired. + +4.2 Brownian Trees using Splittable Seeds +We assume access to a splittable PRNG [14], which has an operation split that deterministically generates two keys from an existing key. Given a key, the function BrownianBridge samples deterministically from (9). To obtain the Wiener process value at a specific time, we must first know or sample the values at the initial and terminal times. Then, the virtual Brownian tree recursively samples from the midpoint of Brownian bridges, each sample using a key split from that of its parent node. The algorithm terminates when the most recently sampled time is close enough to the desired time. We outline the full procedure in Algorithm 3. + +Algorithm 3 Virtual Brownian Tree + +<> + +This algorithm has constant memory cost. For a fixed-step-size solver taking L steps, the tolerance that the tree will need to be queried at scales as 1/L. Thus the per-step time complexity scales as log L. Our implementation uses an efficient count-based PRNG [76] which avoids passing large random states, and instead simply passes integers. Table 1 compares the asymptotic time complexity of this approach against existing alternatives. + +5 Latent Stochastic Differential Equations + +The algorithms presented in Sections 3 and 4 allow us to efficiently compute gradients of scalar objectives with respect to SDE parameters, letting us fit SDEs to data. This raises the question: Which loss to optimize? +Simply fitting SDE parameters to maximize likelihood will in general cause overfitting, and will result in the diffusion function going to zero. In this section, we show how to do efficient variational inference in SDE models, and optimize the marginal log-likelihood to fit both prior (hyper-)parameters and the parameters of a tractable approximate posterior over functions. +In particular, we can parameterize both a prior over functions and an approximate posterior using SDEs: + +<>, (prior) + +<>, (approx. post.) + +where <> and <> are Lipschitz in both arguments, and both processes have the same starting value: <>. +If both processes share the same diffusion function <>, then the KL divergence between them is finite (under additional mild regularity conditions; see Appendix 9.6), and can be estimated by sampling paths from the posterior process. Then, the evidence lower + +<
> + +Figure 4: Graphical models for the generative process (decoder) and recognition network (encoder) of the latent stochastic Differential equation model. This model can be viewed as a variational autoencoder with infinite-dimensional noise. Red circles represent entire function draws from Brownian motion. Given the initial state z0 and a Brownian motion sample path <>, the intermediate states <> are deterministically approximated by a numerical SDE solver. + +bound (ELBO) can be written as: + +<>, (10) + +where <> satisfies <>, and the expectation is taken over the approximate posterior process defined by (approx. post.). The likelihoods of the observations x1,...,xN at times t1,...,tN depend only on the latent states zt at corresponding times. +To compute the gradient with respect to prior parameters <> and variational parameters <>, we need only augment the forward SDE with an extra scalar variable whose drift function is <> and whose diffusion function is zero. The backward dynamics can be derived analogously using (7). We include a detailed derivation in Appendix 9.6. Thus, a stochastic estimate of the gradients of the loss w.r.t. all parameters can be computed in a single pair of forward and backward SDE solves. +The variational parameters . can either be optimized individually for each sequence, or if multiple time series are sharing parameters, then an encoder network can be trained to input the observations and output .. This architecture, shown in figure 4, can be viewed as an infinite-dimensional variational autoencoder [35, 68]. +6 Related Work +Sensitivity Analysis for SDEs. Gradient computation is closely related to sensitivity analysis. Computing gradients with respect to parameters of vector fields of an SDE has been extensively studied in the stochastic control literature [42]. In particular, for low dimensional problems, this is done effectively using dynamic programming [7] and finite differences [20, 43]. However, both approaches scale poorly with the dimensionality of the parameter vector. +Analogous to REINFORCE (or the score-function estimator) [21, 37, 88], Yang and Kushner [89] considered deriving the gradient as rE[L(ZT )] = E[L(ZT )H] for some random variable H. However, H usually depends on the density of ZT with respect to the Lebesgue measure which can be difficult to compute. Gobet and Munos [22] extended this approach by weakening a non-degeneracy condition using Mallianvin calculus [53]. +Closely related to the current approach is the pathwise method [89], which is also a continuous-time analog of the reparameterization trick [35, 68]. Existing meth.ods in this regime [22, 45, 82] all require simulating a (forward) SDE where each step requires computing entire Jacobian matrices. This computational cost is prohibitive for high-dimensional systems with a large number of parameters. +Based on the Euler discretization, Giles and Glasser.man [19] considered simply performing reverse-mode automatic differentiation through all intermediate steps. They named this method the adjoint approach, which, by modern standards, is a form of "backpropagation through the operations of a numerical solver". This approach, widely adopted in the field of finance for calibrating market models [19], has high memory cost, and relies on a fixed Euler-Maruyama discretization. Recently, this approach was also used by Hegde et al. +[27] to learn parameterized drift and diffusion functions Figure 5: (a) Same fixed step size used in both forward and reverse simulation. 
Boxplot generated by repeating the experiment with different Brownian motion sample paths 64 times. (b) Colors of dots represent tolerance levels and correspond to the colorbar on the right. Only atol was varied and rtol was set to 0.

of an SDE. In scientific computing, Innes et al. [31] considered backpropagating through high-order implicit SDE solvers.
In the machine learning literature, Ryder et al. [75] perform variational inference over the state and parameters for Euler-discretized latent SDEs and optimize the model with regular backpropagation. This approach should not be confused with the formulation of variational inference for non-discretized SDEs presented in previous works [25, 57, 82] and our work, as it is unclear whether the limit of their discretization corresponds to that obtained by operating with continuous-time SDEs using Girsanov's theorem.
Backward SDEs. Our stochastic adjoint process relies on the notion of backward SDEs devised by Kunita [41], which is based on two-sided filtrations. This is different from the more traditional notion of backward SDEs where only a single filtration is defined [58, 62]. Based on the latter notion, forward-backward SDEs (FBSDEs) have been proposed to solve stochastic optimal control problems [63]. However, simulating FBSDEs is costly due to the need to estimate conditional expectations in the backward pass [58].
Bayesian Learning of SDEs. Recent works considered the problem of inferring an approximate posterior SDE given observed data under a prior SDE with the same diffusion coefficient [25, 57, 82]. The special case with constant diffusion coefficients was considered more than a decade ago [5]. Notably, computing the KL divergence between two SDEs over a finite time horizon was well-explored in the control literature [33, 80]. We include background on this topic in Appendix 9.5.
Bayesian learning and parameter estimation for SDEs have a long history [24]. Techniques that do not require positing a variational family, such as the extended Kalman filter and Markov chain Monte Carlo, have been considered in the literature [50].

7 Experiments

The aim of this section is threefold. We first empirically verify our theory by comparing the gradients obtained by our stochastic adjoint framework against analytically derived gradients for problems having closed-form solutions. We then fit latent SDE models with our framework on two synthetic datasets, verifying that the variational inference framework allows learning a generative model of time series. Finally, we learn dynamics parameterized by neural networks with a latent SDE from a motion capture dataset, demonstrating competitive performance compared to existing approaches.
We report results based on an implementation of Brownian motion that stores all intermediate queries. The virtual Brownian tree allowed training with much larger batch sizes on GPUs, but was not necessary for our small-scale experiments. Notably, our adjoint approach, even when combined with the Brownian motion implementation that stores noise, was able to reduce the memory usage by 1/2-1/3 compared to directly backpropagating through solver operations on the tasks we considered.

7.1 Numerical Studies

We consider three test problems (examples 1-3 from [66]; details in Appendix 9.7), all of which have closed-form solutions. We compare the gradient computed from simulating our stochastic adjoint process using the Milstein scheme against the exact gradient.
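The flavor of this check can be reproduced in a few lines. The sketch below uses geometric Brownian motion as an illustrative stand-in (not necessarily one of the test problems of Appendix 9.7) and differentiates through a plain Euler-Maruyama solve rather than running the stochastic adjoint with the Milstein scheme; the pathwise gradient of the closed-form solution, evaluated on the same Brownian path, serves as the reference.

import torch

torch.manual_seed(0)

# Geometric Brownian motion dX = mu X dt + sigma X dW, X_0 = x0 (illustrative choice).
mu = torch.tensor(0.5, requires_grad=True)
sigma, x0, T, n_steps = 0.8, 1.0, 1.0, 2 ** 12
dt = T / n_steps
dW = torch.randn(n_steps) * dt ** 0.5   # one Brownian sample path, reused below

# Gradient dX_T / dmu obtained by differentiating through an Euler-Maruyama solve.
x = torch.tensor(x0)
for i in range(n_steps):
    x = x + mu * x * dt + sigma * x * dW[i]
(g_numeric,) = torch.autograd.grad(x, mu)

# Exact pathwise gradient: X_T = x0 exp((mu - sigma^2/2) T + sigma W_T), so dX_T/dmu = T X_T.
W_T = dW.sum()
x_exact = x0 * torch.exp((mu - sigma ** 2 / 2) * T + sigma * W_T)
g_exact = T * x_exact

print(float(g_numeric), float(g_exact.detach()))  # the gap shrinks as dt -> 0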
Figure 5(a) shows that for test example 2, the error between the adjoint gradient and analytical gradient decreases with step size.
For all three test problems, the mean squared error across dimensions tends to be smaller as the absolute tolerance of the adaptive solver is reduced (e.g. see Fig. 5(b)). However, the number of function evaluations (NFEs) tends to be much larger than that in the ODE case [12].
Additionally, for two out of three test problems, we found that our adjoint approach with the Milstein scheme and fixed step size can be much more time-efficient than regular backpropagation through operations of the Milstein and Euler schemes (see e.g. Fig. 5(c)). Backpropagating through the Euler scheme gives gradients of higher error compared to the Milstein method. On the other hand, directly backpropagating through the Milstein solve requires evaluating high-order derivatives and can be costly.
Results for examples 1 and 3 are in Appendix 9.8.

Figure 6: Learned posterior and prior dynamics on data from a stochastic Lorenz attractor. All samples from our model are continuous-time paths, and form a multi-modal, non-Gaussian distribution.

7.2 Synthetic Datasets

We trained latent SDEs with our adjoint framework to recover (1) a 1D geometric Brownian motion, and (2) a 3D stochastic Lorenz attractor process. The main objective is to verify that the learned posterior can reconstruct the training data, and that the learned priors are not deterministic. We jointly optimize the evidence lower bound (10) with respect to parameters of the prior and posterior distributions at the initial latent state z0, the prior and posterior drift, the diffusion function, the encoder, and the decoder. We include the details of datasets and architectures in Appendix 9.9.
For the stochastic Lorenz attractor, not only is the model able to reconstruct the data well, but also the learned prior process can produce bimodal samples in both data and latent space. This is showcased in the last row of Figure 6, where the latent and data space samples cluster around two modes. This is hard to achieve using a latent ODE with a unimodal Gaussian initial approximate posterior. We include additional visualizations in Appendix 9.10.

7.3 Motion Capture Dataset

To demonstrate that latent SDEs can learn complex dynamics from real-world datasets, we evaluated their predictive performance on a 50-dimensional motion capture dataset. The dataset, from Gan et al. [18], consists of 23 walking sequences of subject 35 partitioned into 16 training, 3 validation, and 4 test sequences. We follow the preprocessing of Wang et al. [85].
In designing the recognition network, we follow Yıldız et al. [90] and use a fully connected network to encode the first three observations of each sequence and thereafter predict the remaining sequence. This encoder is chosen for fair comparison to existing models, and could be extended to a recurrent or attention model [84]. The overall architecture is described in Appendix 9.11 and is similar to that of ODE2VAE [90], with a similar number of parameters. We also use a fixed step size that is 1/5 of the smallest interval between any two observations [90].
We train latent ODE and latent SDE models with the Adam optimizer [34] and its default hyperparameter settings, with an initial learning rate of 0.01 that is exponentially decayed with rate 0.999 during each iteration. We perform validation over the number of training iterations, KL penalty [29], and KL annealing schedule.
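A minimal sketch of this optimization setup, assuming a hypothetical latent SDE model object exposing an elbo(batch) method that returns the log-likelihood and KL terms of (10) separately (the interface, names, and the particular annealing schedule are ours, not the paper's code):

import torch


def train(model, train_loader, num_iters=400, kl_anneal_iters=100):
    """Adam with lr 0.01, exponential decay 0.999 per iteration, and a KL-annealed ELBO."""
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.999)
    itr = 0
    while itr < num_iters:
        for batch in train_loader:
            beta = min(1.0, itr / kl_anneal_iters)   # linear KL annealing (one possible schedule)
            log_lik, kl = model.elbo(batch)
            loss = -(log_lik - beta * kl)            # negative annealed ELBO
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            scheduler.step()                         # lr <- 0.999 * lr each iteration
            itr += 1
            if itr >= num_iters:
                break
    return model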
All models were trained for at most 400 iterations, where we start to observe severe overfitting for most model instances. We report the test MSE on future observations following Yıldız et al. [90]. We believe that the improved performance is due to the strong regularization in path space, as removing the KL penalty improved training error but caused validation error to deteriorate.

Table 2: Test MSE on 297 future frames averaged over 50 samples. 95% confidence interval reported based on t-statistic; results from [90].

<
> + +8 Discussion + +We presented a generalization of the adjoint sensitivity method to compute gradients through solutions of SDEs. In contrast to existing approaches, this method has nearly the same time and memory complexity as simply solving the SDE. We showed how our stochastic adjoint framework can be combined with a gradient-based stochastic variational inference scheme for train.ing latent SDEs. +It is worthwhile to mention that SDEs and the commonly used GP models define two distinct classes of stochastic processes, albeit having a nonempty inter.section (e.g. Ornstein-Uhlenbeck processes fall under both). Computationally, the cost of fitting GPs lies in the matrix inversion, whereas the computational bottle.neck of training SDEs is the sequential numerical solve. Empirically, another avenue of research is to reduce the variance of gradient estimates. In the future, we may adopt techniques such as control variates or antithetic paths. +On the application side, our method opens up a broad set of opportunities for fitting any differentiable SDE model, such as Wright-Fisher models with selection and mutation parameters [15], derivative pricing models in finance, or infinitely-deep Bayesian neural networks [61]. In addition, the latent SDE model enabled by our frame.work can be extended to include domain knowledge and structural or stationarity constraints [48] in the prior process for specific applications. +On the theory side, there remain fundamental questions to be answered. Convergence rates of numerical gradients estimated with general schemes are unknown. Additionally, since our analyses are based on strong orders of schemes, it is natural to question whether convergence results still hold when we consider weak errors, and moreover if the method could be reformulated more coherently with rough paths theory [47]. + +Acknowledgements +We thank Yulia Rubanova, Danijar Hafner, Mufan Li, Shengyang Sun, Kenneth R. Jackson, Simo S�rkk�, Daniel Lacker, and Philippe Casgrain for helpful discus.sions. We thank �a�atay Y�ld�z for helpful discussions regarding evaluation settings of the mocap task. We also thank Guodong Zhang, Kevin Swersky, Chris Rackauckas, and members of the Vector Institute for helpful comments on an early draft of this paper. + +References +[1] Mart�n Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Je�rey Dean, Matthieu Devin, Sanjay Ghemawat, Geo�rey Irving, Michael Isard, et al. Tensorflow: A system for large-scale +machine learning. In 12th Symposium on Oper. +ating Systems Design and Implementation, pages +265�283, 2016. +[2] R Adams. Sobolev Spaces. Academic Press, 1975. +[3] Joel Andersson. A general-purpose software frame.work for dynamic optimization. PhD thesis, Aren.berg Doctoral School, KU Leuven, 2013. +[4] Joel Andersson, Joris Gillis, Greg Horn, James B Rawlings, and Moritz Diehl. CasADi: a software framework for nonlinear optimization and optimal control. Mathematical Programming Computation, 11(1):1�36, 2019. +[5] C�dric Archambeau, Manfred Opper, Yuan Shen, Dan Cornford, and John S Shawe-Taylor. variational inference for diffusion processes. In Advances in Neural Information Processing Systems, pages 17�24, 2008. +[6] VI Arnold. Ordinary Differential Equations. The MIT Press, 1978. +[7] Jonathan Baxter and Peter L Bartlett. Infinite-horizon gradient-based policy search. 2001. +[8] Robert Brown. ... microscopical observations ... on the particles contained in the pollen of plants. The Philosophical Magazine, 4(21):161�173, 1828. 
+[9] Pamela M Burrage, R Herdiana, and Kevin Bur-rage. Adaptive stepsize based on control theory for stochastic Differential equations. Journal of Computational and Applied Mathematics, 170(2): 317�336, 2004. +[10] Bo Chang, Lili Meng, Eldad Haber, Frederick Tung, and David Begert. Multi-level residual networks from dynamical systems view. arXiv preprint arXiv:1710.10348, 2017. +[11] Bo Chang, Lili Meng, Eldad Haber, Lars Ruthotto, David Begert, and Elliot Holtham. Reversible architectures for arbitrarily deep residual neural networks. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. +[12] Ricky Tian Qi Chen, Yulia Rubanova, Jesse Bet.tencourt, and David K Duvenaud. Neural ordinary Differential equations. In Advances in neural in.formation processing systems, pages 6571�6583, 2018. +[13] Kyunghyun Cho, Bart Van Merri�nboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014. +[14] Koen Claessen and Micha. H Pa.ka. Splittable pseudorandom number generators using crypto.graphic hashing. In ACM SIGPLAN Notices, vol.ume 48, pages 47�58. ACM, 2013. +[15] Warren J Ewens. Mathematical population genetics 1: theoretical introduction, volume 27. Springer Science & Business Media, 2012. +[16] Roy Frostig, Matthew James Johnson, and Chris Leary. Compiling machine learning programs via high-level tracing, 2018. +[17] Jessica G Gaines and Terry J Lyons. Variable step size control in the numerical solution of stochastic Differential equations. SIAM Journal on Applied Mathematics, 57(5):1455�1484, 1997. +[18] Zhe Gan, Chunyuan Li, Ricardo Henao, David E Carlson, and Lawrence Carin. Deep temporal sig.moid belief networks for sequence modeling. In Advances in Neural Information Processing systems, pages 2467�2475, 2015. +[19] Mike Giles and Paul Glasserman. Smoking ad-joints: Fast Monte Carlo greeks. Risk, 19(1):88�92, 2006. +[20] Paul Glasserman and David D Yao. Some guide.lines and guarantees for common random numbers. Management Science, 38(6):884�908, 1992. +[21] Peter W Glynn. Likelihood ratio gradient estima.tion for stochastic systems. Communications of the ACM, 33(10):75�84, 1990. +[22] Emmanuel Gobet and R�mi Munos. Sensitivity analysis using ItMalliavin calculus and martin.gales, and application to stochastic optimal control. SIAM Journal on control and optimization, 43(5): 1676�1713, 2005. +[23] Will Grathwohl, Ricky T. Q. Chen, Jesse Bet.tencourt, Ilya Sutskever, and David Duvenaud. FFJORD: Free-form continuous dynamics for scal.able reversible generative models. International Conference on Learning Representations, 2019. +[24] Narendra Gupta and Raman Mehra. computational aspects of maximum likelihood estimation and reduction in sensitivity function calculations. IEEE transactions on automatic control, 19(6): 774�783, 1974. +[25] Jung-Su Ha, Young-Jin Park, Hyeok-Joo Chae, Soon-Seo Park, and Han-Lim Choi. Adaptive path-integral autoencoders: Representation learning and planning for dynamical systems. In Advances in Neural Information Processing Systems, pages 8927�8938, 2018. +[26] Eldad Haber and Lars Ruthotto. Stable architec.tures for deep neural networks. Inverse Problems, 34(1):014004, 2017. +[27] Pashupati Hegde, Markus Heinonen, Harri L�hdesm�ki, and Samuel Kaski. Deep learning with Differential gaussian process flows. 
In The 22nd International Conference on Artificial Intelligence and Statistics, pages 1812�1821, 2019. +[28] Markus Heinonen, Cagatay Yildiz, Henrik Man.nerstr, Jukka Intosalmi, and Harri L�hdesm�ki. Learning unknown ode models with gaussian pro.cesses. arXiv preprint arXiv:1803.04303, 2018. +[29] Irina Higgins, Loic Matthey, Arka Pal, Christo.pher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta.vae: Learning basic visual concepts with a con.strained variational framework. ICLR, 2(5):6, 2017. +[30] Silvana Ilie, Kenneth R Jackson, and Wayne H Enright. Adaptive time-stepping for the strong numerical solution of stochastic Differential equations. Numerical Algorithms, 68(4):791�812, 2015. +[31] Mike Innes, Alan Edelman, Keno Fischer, Chris Rackauckus, Elliot Saba, Viral B Shah, and Will Tebbutt. Zygote: A differentiable programming system to bridge machine learning and scien.ti�c computing. arXiv preprint arXiv:1907.07587, 2019. +[32] Junteng Jia and Austin R. Benson. Neural Jump Stochastic Differential Equations. arXiv e-prints, art. arXiv:1905.10403, May 2019. +[33] Hilbert Johan Kappen and Hans Christian Ruiz. Adaptive importance sampling for control and in.ference. Journal of Statistical Physics, 162(5): 1244�1266, 2016. +[34] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. +[35] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. +[36] Genshiro Kitagawa and Will Gersch. Linear gaus.sian state space modeling. In Smoothness Priors Analysis of Time Series, pages 55�65. Springer, 1996. +[37] Jack PC Kleijnen and Reuven Y Rubinstein. Op.timization and sensitivity analysis of computer simulation models by the score function method. European Journal of Operational Research, 88(3): 413�427, 1996. +[38] Peter E Kloeden and Andreas Neuenkirch. The pathwise convergence of approximation schemes for stochastic Differential equations. LMS jour.nal of Computation and Mathematics, 10:235�253, 2007. +[39] Peter E Kloeden and Eckhard Platen. Numer.ical solution of stochastic Differential equations, volume 23. Springer Science & Business Media, 2013. +[40] Rahul G Krishnan, Uri Shalit, and David Sontag. Structured inference networks for nonlinear state space models. In Thirty-First AAAI Conference on Artificial Intelligence, 2017. +[41] Hiroshi Kunita. Stochastic Flows and Jump.diffusions. Springer, 2019. +[42] Harold Kushner and Paul G Dupuis. Numerical methods for stochastic control problems in continu.ous time, volume 24. Springer Science & Business Media, 2013. +[43] Pierre L�Ecuyer and Gafitan Perron. On the con.vergence rates of ipa and fdc derivative estimators. Operations Research, 42(4):643�656, 1994. +[44] Qianxiao Li, Long Chen, Cheng Tai, and E Weinan. Maximum principle based algorithms for deep learning. The Journal of Machine Learning Re.search, 18(1):5998�6026, 2017. +[45] Xuanqing Liu, Si Si, Qin Cao, Sanjiv Kumar, and Cho-Jui Hsieh. Neural sde: Stabilizing neural ode networks with stochastic noise. arXiv preprint arXiv:1906.02355, 2019. +[46] Yiping Lu, Aoxiao Zhong, Quanzheng Li, and Bin Dong. Beyond finite layer neural networks: Bridg.ing deep architectures and numerical Differential equations. arXiv preprint arXiv:1710.10121, 2017. +[47] Terry J Lyons. Differential equations driven by rough signals. Revista Matemfitica Iberoamericana, 14(2):215�310, 1998. +[48] Yi-An Ma, Tianqi Chen, and Emily Fox. 
A com.plete recipe for stochastic gradient mcmc. In Ad.vances in Neural Information Processing Systems, pages 2917�2925, 2015. +[49] Dougal Maclaurin, David Duvenaud, M Johnson, and RP Adams. Autograd: Reverse-mode differ.entiation of native python. In ICML workshop on Automatic Machine Learning, 2015. +[50] Isambi S Mbalawata, Simo S�rkk�, and Heikki Haario. Parameter estimation in stochastic differential equations with markov chain monte carlo and non-linear kalman filtering. Computational Statistics, 28(3):1195�1223, 2013. +[51] Grigori Noah Milstein and Michael V Tretyakov. Stochastic Numerics for Mathematical Physics. Springer Science & Business Media, 2013. +[52] Grigorii Noikhovich Milstein. Numerical integra.tion of stochastic Differential equations, volume 313. Springer Science & Business Media, 1994. +[53] Ivan Nourdin and Giovanni Peccati. Normal ap.proximations with Malliavin calculus: from Stein�s method to universality, volume 192. Cambridge University Press, 2012. +[54] Daniel Ocone and fitienne Pardoux. A general.ized itventzell formula. application to a class of anticipating stochastic Differential equations. 25 (1):39�71, 1989. +[55] Bernt �ksendal. Stochastic Differential Equations. Springer, 2003. +[56] Bernt Oksendal. Stochastic Differential equations: an introduction with applications. Springer Science & Business Media, 2013. +[57] Manfred Opper. Variational inference for stochas.tic Differential equations. Annalen der Physik, 531 (3):1800233, 2019. +[58] Etienne Pardoux and Shige Peng. Backward stochastic Differential equations and quasilinear parabolic partial Differential equations. In Stochas.tic Partial Differential Equations and Their Ap.plications, pages 200�217. Springer, 1992. +[59] Adam Paszke, Sam Gross, Soumith Chintala, Gre.gory Chanan, Edward Yang, Zachary DeVito, Zem.ing Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017. +[60] Barak A Pearlmutter. Gradient calculations for dy.namic recurrent neural networks: A survey. IEEE Transactions on Neural networks, 6(5):1212�1228, 1995. +[61] Stefano Peluchetti and Stefano Favaro. Neural stochastic Differential equations. arXiv preprint arXiv:1904.01681, 2019. +[62] Shige Peng. A general stochastic maximum principle for optimal control problems. SIAM Journal on Control and Optimization, 28(4):966�979, 1990. +[63] Shige Peng and Zhen Wu. Fully coupled forward-backward stochastic Differential equations and ap.plications to optimal control. SIAM Journal on Control and Optimization, 37(3):825�843, 1999. +[64] Eckhard Platen. An introduction to numerical methods for stochastic Differential equations. Acta numerica, 8:197�246, 1999. +[65] Lev Semenovich Pontryagin. Mathematical Theory of Optimal Processes. Routledge, 2018. +[66] Christopher Rackauckas and Qing Nie. Adaptive methods for stochastic Differential equations via natural embeddings and rejection sampling with memory. Discrete and Continuous Dynamical systems. Series B, 22(7):2731, 2017. +[67] Daniel Revuz and Marc Yor. Continuous martin.gales and Brownian motion, volume 293. Springer Science & Business Media, 2013. +[68] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014. +[69] L Chris G Rogers and David Williams. diffusions, Markov Processes and Martingales: Volume 2, ItCalculus, volume 2. Cambridge University Press, 2000. +[70] Andreas Rler. 
Runge�Kutta methods for stratonovich stochastic Differential equation systems with commutative noise. Journal of Com.putational and Applied mathematics, 164:613�627, 2004. +[71] Andreas Rler. Runge�Kutta methods for the strong approximation of solutions of stochastic Differential equations. SIAM Journal on Numerical Analysis, 48(3):922�952, 2010. +[72] Yulia Rubanova, Ricky TQ Chen, and David Du.venaud. Latent odes for irregularly-sampled time series. Neural Information Processing Systems, 2019. +[73] David E Rumelhart, Geo�rey E Hinton, Ronald J Williams, et al. Learning representations by back-propagating errors. Cognitive Modeling, 5(3):1, 1988. +[74] Lars Ruthotto and Eldad Haber. Deep neural networks motivated by partial Differential equations. arXiv preprint arXiv:1804.04272, 2018. +[75] Thomas Ryder, Andrew Golightly, A Stephen Mc-Gough, and Dennis Prangle. Black-box variational inference for stochastic Differential equa.tions. arXiv preprint arXiv:1802.03335, 2018. +[76] John K Salmon, Mark A Moraes, Ron O Dror, and David E Shaw. Parallel random numbers: as easyas1,2, 3. In Proceedings of 2011 Interna.tional Conference for High Performance Comput.ing, Networking, Storage and Analysis, page 16. ACM, 2011. +[77] Simo S�rkk�. Bayesian filtering and smoothing, volume 3. Cambridge University Press, 2013. +[78] Simo S�rkk� and Arno Solin. Applied stochas.tic Differential equations, volume 10. Cambridge University Press, 2019. +[79] Steven E Shreve. Stochastic calculus for finance II: Continuous-time models, volume 11. Springer Science & Business Media, 2004. +[80] Evangelos Theodorou. Nonlinear stochastic con.trol and information theoretic dualities: Connec.tions, interdependencies and thermodynamic in.terpretations. Entropy, 17(5):3352�3375, 2015. +[81] Ryan Turner, Marc Deisenroth, and Carl Ras.mussen. State-space inference and learning with gaussian processes. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 868�875, 2010. +[82] Belinda Tzen and Maxim Raginsky. Neural stochastic Differential equations: Deep latent gaus.sian models in the diffusion limit. arXiv preprint arXiv:1905.09883, 2019. +[83] Belinda Tzen and Maxim Raginsky. Theoretical guarantees for sampling and inference in generative models with latent diffusions. Proceeings of the Conference on Learning Theory, 2019. +[84] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, .ukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998�6008, 2017. +[85] Jack M Wang, David J Fleet, and Aaron Hertz.mann. Gaussian process dynamical models for human motion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(2):283�298, 2007. +[86] E Weinan. A proposal on machine learning via dy.namical systems. Communications in Mathematics and Statistics, 5(1):1�11, 2017. +[87] Magnus Wiktorsson et al. Joint characteristic function and simultaneous simulation of iterated itintegrals for multiple independent brownian motions. The Annals of Applied Probability, 11(2): 470�487, 2001. +[88] Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforce.ment learning. Machine Learning, 8(3-4):229�256, 1992. +[89] Jichuan Yang and Harold J Kushner. A monte carlo method for sensitivity analysis and paramet.ric optimization of nonlinear stochastic systems. SIAM Journal on Control and Optimization, 29 (5):1216�1249, 1991. 
+[90] �a�atay Y�ld�z, Markus Heinonen, and Harri L�hdesm�ki. Ode2vae: Deep generative second order odes with bayesian neural networks. arXiv preprint arXiv:1905.10994, 2019. + +9 Appendix + +Notation. For a fixed terminal time <>, we denote by <> the time horizon. Let <> be the class +of infinitely differentiable functions from Rd to itself. Let Cp,q be the class of functions from <> to <> that <> be +are p and q times continuously differentiable in the first and second component, respectively. Let <> the subclass with bounded derivatives of all possible orders. For a positive integer m, we adopt the short hand +[m]= {1, 2,...,m}. We denote the Euclidean norm of a vector v by |v|. For f . Cp,q, we denote its Jacobian with respect to the first component by rf. + +9.1 Proof of Theorem 3.1 +Proof of Theorem 3.1. We have <>, where <> is defined in (3). Now we take the gradient with respect to z on both sides. The solution is differentiable with respect to z and we may differentiate under the stochastic integral [41, Proposition 2.4.3]. Theorem 3.4.3 [41] is sufficient for the regularity conditions required. Since <>, applying the Stratonovich version of Its formula to (4), we have (5). + +9.2 Proof of Theorem 3.3 +Proof of Theorem 3.3. By the triangle inequality, + +<> + +We show that both I and I converge to 0 in probability as <>. For simplicity, we suppress z and W�. +Bounding I(1) . Let > 0 be given. Since Gh . G in probability, there exist M1 > 0 and h0 > 0 such that <>, <>, for all <>. +By Lemma 2.1 (iv) of Ocone and Pardoux [54], which can be easily adapted to our context, there exists a positive random variable C1, finite almost surely, such that <>, and there exists M2 > 0 such that <>. Given M2, there exists h1 > 0 such that + +<> + +Now, suppose <>. Then, by the union bound, with probability at least 1, we have  + +<> + +On this event, we have + +<> (1) + +Thus, we have shown that (1) converges to 0 in probability as <>. Bounding <>. The idea is similar. By condition (ii), we have + +<> + +in probability. Using this and condition (i), for given <>, there exist <> and <> such that for all <>, we have + +<> + +with probability at least 1. On this event, we have + +<> + +Thus <> also converges to 0 in probability as <>. + +9.3 Euler-Maruyama Scheme satisfies Local Uniform Convergence +Here we verify that the Euler-Maruyama scheme satisfies condition (ii) when d =1. Our proof can be extended to +the case where d> 1 assuming an Lp estimate of the error; see the discussion after the proof of Proposition 9.1. Proposition 9.1. Let Fh(z) be the Euler-Maruyama discretization of a 1-dimensional SDE with mesh size h of F(z). Then, for any compact <>, we have + +<> + +Usual convergence results in stochastic numerics only control the error for a single fixed starting point. Here, we strengthen the result to local uniform convergence. Our main idea is to apply a Sobolev inequality argument [54, Part II]. To do so, we need some preliminary results about the Euler-Maruyama discretization of the original SDE and its derivative. We first recall a theorem characterizing the expected squared error for general schemes. +Theorem 9.2 (Mean-square order of convergence [51, Theorem 1.1]). Let <> be the solution to an Ito SDE, and <> be a numerical discretization with fixed step size h, both of which are started at <> and defined on the same probability space. Let the coefficients of the SDE be <>. 
Furthermore, suppose that the numerical scheme has order of accuracy p1 for the expectation of deviation and order of accuracy p2 for the mean-square deviation. If <> and <>, then, for any <>, and <> +for a constant C that does not depend on h or z. + +We refer the reader to [51] for the precise definitions of orders of accuracy and the proof. Given this theorem, we establish an estimate regarding errors of the discretization and its derivative with respect to the initial position. + +Lemma 9.3. We have + + <>, + +where C1 is a constant independent of z and h. + +Proof of Lemma 9.3. Since the coefficients of the SDE are of class <>, we may differentiate the SDE in z to +b get the SDE for the derivative rzZz [41]. Specifically, letting <>, we have + +<> + +Note that the augmented process (F(z), rzF(z)) satisfies an SDE with <> coefficients. By the chain rule, +one can easily show that the derivative of the Euler-Maruyama discretization Fh(z) is the discretization of the derivative process Y z . Thus, (Fh(z), rzFh(z)) is simply the discretization of (F(z), rzF(z)). +Since the Euler-Maruyama scheme has orders of accuracy (p1,p2) = (1.5, 1.0) [51, Section 1.1.5], by Theorem 9.2, we have + +<> + +for some constant C1 that does not depend on z or h. + +We also recall a variant of the Sobolev inequality which we will apply for d =1. Theorem 9.4 (Sobolev inequality [2, Theorem 5.4.1.c]). For any p>d, there exists a universal constant cp such that + +<> + +where + +<> + +for all continuously differentiable <>. + +Proof of Proposition 9.1. define H. :� . R . R, regarded as a random function <>, by + +<> + +where <> is a fixed constant. Since H. is continuously differentiable a.s., by Theorem 9.4, + +<>, + +Without loss of generality, we may let the compact set be <> where <>. Then, + +<> (11) + +It remains to estimate <>. Starting from the definition of <>, a standard estimation yields + +<> + +where C2 is a deterministic constant depending only on . (but not z and h). +Now we take expectation on both sides. By Lemma 9.3, we have + +<> + +where the last integral is finite since <>. + +We have shown that <>. Thus kH.k. 0 in L2 , and hence also in probability, as <>. From equation 11, we have that <> converges to 0 in probability as <>. +It is clear from the above proof that we may generalize to the case where d> 1 and other numerical schemes if we can bound the expected <>, p-norm of <> in terms of z and h, for p>d, where W 1,p here denotes the Sobolev space consisting of all real-valued functions on Rd whose weak derivatives are functions in Lp. For the Euler scheme and <>, we need only bound the Lp norm of the discretization error in term of z and h for general p. To achieve this, we would need to make explicit the dependence on z for existing estimates (see e.g. [39, Chapter 10]). +Generically extending the argument to other numerical schemes, however, is technically non-trivial. We plan to address this question in future research. + +9.4 Stochastic Adjoint has Commutative Noise when Original SDE has Diagonal Noise +Recall the Stratonovich SDE (2) with drift and diffusion functions <> governed by a set of parameters <>. Consider the augmented state composed of the original state and parameters Yt =(Zt,.). The augmented state satisfies a Stratonovich SDE with the drift function <> and diffusion functions <> for <>. 
By (5) and (6), the dynamics for the adjoint process of the augmented state is characterized by the backward SDE: + +<> + +By definitions of f and gi, the Jacobian matrices rf(x, s) and rgi(x, s) can be written as: +  +<> + +Thus, we can write out the backward SDEs for the adjoint processes of the state and parameters separately: + +<> + +Now assume the original SDE has diagonal noise. Then, m = d and Jacobian matrix r.i(z) can be written as: + +<> + +Consider the adjoint process for the augmented state along with the backward flow of the backward SDE (3). We write the overall state as <>, where we abuse notation slightly to let <> denote the backward +flow process. Then, by (12) and (13), {Xt}t.T satisfies a backward SDE with a diffusion function that can be written as: + +<> + +Recall, for an SDE with diffusion function <>, it is said to satisfy the commutativity property [70] if + +<> + +for all j1,j2 . [m] and k . [d]. When an SDE has commutative noise, the computationally intensive double Itintegrals (and the Levy areas) need not be simulated by having the numerical scheme take advantage of the following property of iterated integrals [30]: + +<> + +where the Brownian motion increment <> for <> can be easily sampled. To see that the diffusion function (14) indeed satisfies the commutativity condition (15), we consider several cases: +<> Both LHS and RHS are zero unless j1 == k, since for .i,j2 (x) to be non-zero, <> Similar to the case above. Write <>, where <>. Both LHS and RHS are zero unless <>, since + +<> + +for <> to be non-zero <> or <> and <>. + +Since in all scenarios, LHS = RHS, we conclude that the commutativity condition holds. +Finally, we comment that the Milstein scheme for the stochastic adjoint of diagonal noise SDEs can be implemented such that during each iteration of the backward solve, vjp is only called a number of times independent respect to the dimensionality of the original SDE. + +9.5 Background on Latent SDE + +Consider a filtered probability space <>, where <> is a finite time horizon. +Recall the approximate posterior process that we intend to learn is governed by the SDE: + +<>, (16) + +Suppose there exists a measurable function u(z, t) such that <>, and <> satisfies Novikov's condition, i.e. <>. Novikov's condition ensures that the process + +<> + +is a P-martingale. By Girsanov Theorem II [56, Theorem 8.6.4], the process <> is a Wiener process under the probability measure Q defined by + +<>, + +Moreover, since a simple rewrite shows that + +<>, (17) + +we conclude that the Q-law of (17) (or equivalently (16)) is the same as the P -law of the prior process. + +9.5.1 Deriving the Variational Bound + +Let xt1,...,xtN be observation data at times t1,...,tN , whose conditionals only depend on the respective latent states zt1,...,ztN . Since the Q-law of the approximate posterior is the same as the P-law of the prior, + +<> + +where the second line follows from the definition of Q and third line follows from Jensen's inequality. In the last equality we used the fact that the Ito integral <> is a martingale. + +9.6 Stochastic Adjoint for Latent SDE + +Note that the variational free energy (10) can be derived from Girsanov's change of measure theorem [57]. 
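For reference, since the display equations in this derivation did not survive extraction, the resulting bound has the standard latent-SDE form shown below; this is a reconstruction under the assumption that u is the drift-matching function defined above and that the expectation is taken under the approximate posterior (16):

    \log p(x_{t_1}, \dots, x_{t_N})
      \;\ge\; \mathbb{E}_{Q}\!\left[\, \sum_{i=1}^{N} \log p\big(x_{t_i} \mid z_{t_i}\big)
      \;-\; \int_{0}^{T} \tfrac{1}{2}\,\big|u(z_t, t)\big|^{2}\, dt \right].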
To efficiently Monte Carlo estimate this quantity and its gradient, we simplify the equation by noting that for a one-dimensional process <> adapted to the filtration generated by a one-dimensional Wiener process <>, +if Novikov's condition [55] is satisfied, then the process defined by the Ito integral Vs dWs is a Martingale [55]. Hence, <>, and + +<> + +To Monte Carlo simulate the quantity in the forward pass along with the original dynamics, we need only extend the original augmented state with an extra variable Lt such that the new drift and diffusion functions for the new augmented state <> are + +<> + +By (7), the backward SDEs of the adjoint processes become + +<> (18) + +In this case, neither do we need to actually simulate the backward SDE of the extra variable nor do we need to simulate its adjoint. Moreover, when considered as a single system for the augmented adjoint state, the diffusion function of the backward SDE (18) satisfies the commutativity property (15). + +9.7 Test Problems + +In the following, <> and p are parameters of SDEs, and x0 is a fixed initial value. + +Example 1. + +<> + +Analytical solution: + +<> + +Example 2. + +<> + +Analytical solution: + +<> + +Example 3. + +<> + +Analytical solution: + +<> + +In each numerical experiment, we duplicate the equation 10 times to obtain a system of SDEs where each dimension had their own parameter values sampled from the standard Gaussian distribution and then passed through a sigmoid to ensure positivity. Moreover, we also sample the initial value for each dimension from a Gaussian distribution. + +<
> + +Figure 7: (a-c) Example 1. (d-f) Example 3. + +9.8 Results for Example 1 and 3 + +9.9 Toy Datasets Configuration + +9.9.1 Geometric Brownian Motion +Consider a geometric Brownian motion SDE: + +<>. + +We use <>, and <> as the ground-truth model, where <>. We sample 1024 time series, each of which is observed at intervals of 0.02 from time 0 to time 1. We corrupt this data using Gaussian noise with mean zero and standard deviation 0.01. +To recover the dynamics, we use a GRU-based [13] latent SDE model where the GRU has 1 layer and 100 hidden units, the prior and posterior drift functions are MLPs with 1 hidden layer of 100 units, and the diffusion function is an MLP with 1 hidden layer of 100 hidden units and the sigmoid activation applied at the end. The drift function in the posterior is time-inhomogenous in the sense that it takes in a context vector of size 1 at each observation that is output by the GRU from running backwards after processing all future observations. The decoder is a linear mapping from a 4 dimensional latent space to observation space. For all nonlinearities, we use the softplus function. We <> the observation model to be Gaussian with noise standard deviation 0.01. +We optimize the model jointly with respect to the parameters of a Gaussian distribution for initial latent state distribution, the prior and posterior drift functions, the diffusion function, the GRU encoder, and the decoder. We use a fixed discretization with step size of 0.01 in both the forward and backward pass. We use the Adam optimizer [34] with an initial learning rate of 0.01 that is decay by a factor of 0.999 after each iteration. We use a linear KL annealing schedule over the first 50 iterations. +9.9.2 Stochastic Lorenz Attractor + +Consider a stochastic Lorenz attractor SDE with diagonal noise: + +<>, + +<>, + +<>. + +We use <>, and (x0,y0,z0) sampled from the standard Gaussian distribution as the ground-truth model. We sample 1024 time series, each of which is observed at intervals of 0.025 from time 0 to time 1. We normalize these samples by their mean and standard deviation across each dimension and corrupt this data by Gaussian noise with mean zero and standard deviation 0.01. +We use the same architecture and training procedure for the latent SDE model as in the geometric Brownian motion section, except that the diffusion function consists of four small neural networks, each for a single dimension of the latent SDE. + +9.10 Additional Visualization + +<
> + +Figure 8: Additional visualizations of learned posterior and prior dynamics on the synthetic stochastic Lorenz attractor dataset. First row displays the true data and posterior reconstructions. Second row displays samples with initial latent state for each trajectory is sampled independently. Third row displays samples with initial latent state sampled and fixed to be the same for different trajectories. +See Figure 8 for additional visualization on the synthetic Lorenz attractor dataset. See Figure 9 for visualization on the synthetic geometric Brownian motion dataset. We comment that for the second example, the posterior reconstructs the data well, and the prior process exhibit behavior of the data. However, from the third row, we can observe that the prior process is learned such that most of the uncertainty is account for in the initial latent state. We leave the investigation of more interpretable prior process for future work. + +9.11 Model Architecture for Learning from Motion Capture Dataset +We use a latent SDE model with an MLP encoder which takes in the first three frames and outputs the mean and log-variance of the variational distribution of the initial latent state and a context vector. The decoder has a similar architecture as that for the ODE2VAE model [90] and projects the 6-dimensional latent state into the 50-dimensional observation space. The posterior drift function takes in a 3-dimensional context vector output by the encoder and the current state and time, whereas the prior drift only takes in the current state and time. The diffusion function is composed of multiple small neural nets, each producing a scalar for the corresponding + +<
> + +Figure 9: Visualizations of learned posterior and prior dynamics on the synthetic geometric Brownian motion dataset. First row displays the true data and posterior reconstructions. Orange contour covers 95% of 512 samples. Second row displays samples with initial latent state for each trajectory is sampled independently. Third row displays samples with initial latent state sampled and fixed to be the same for different trajectories. + +dimension such that the posterior SDE has diagonal noise. We use the same observation likelihood as that of the ODE2VAE model [90]. We comment that the overall parameter count of our model (11605) is smaller than that of ODE2VAE for the same task (12157). +The latent ODE baseline was implemented with a similar architecture, except is does not have the diffusion and prior drift components, and its vector field defining the ODE does not take in a context vector. Therefore, the model has slightly fewer parameters (10573) than the latent SDE model. See Figure 10 for overall details of the architecture. +The main hyperparameter we tuned was the coefficient for reweighting the KL. For both the latent ODE and SDE, we considered training the model with a reweighting coefficient in {1, 0.1, 0.01, 0.001}, either with or without a linear KL annealing schedule that increased from 0 to the prescribed value over the first 200 iterations of training. + +9.12 Stochastic Adjoint Implementation + +We include the core implementation of the stochastic adjoint, assuming access to a callable Brownian motion bm, an Euler-Maruyama integrator ito_int_diag for diagonal noise SDEs, and several helper functions whose purposes can be inferred from their names. +<> +<> <> <> + + +<> <> <> + Scaling Laws for Neural Language Models + + + Jared Kaplan Sam McCandlish + + Johns Hopkins University, OpenAI OpenAI + jaredk@jhu.edu sam@openai.com + + + + Tom Henighan Tom B. Brown Benjamin Chess Rewon Child + OpenAI OpenAI OpenAI OpenAI + henighan@openai.com tom@openai.com bchess@openai.com rewon@openai.com + + Scott Gray Alec Radford Jeffrey Wu Dario Amodei + OpenAI OpenAI OpenAI OpenAI + scott@openai.com alec@openai.com jeffwu@openai.com damodei@openai.com + + + + Abstract + + We study empirical scaling laws for language model performance on the cross-entropy loss. + The loss scales as a power-law with model size, dataset size, and the amount of compute + used for training, with some trends spanning more than seven orders of magnitude. Other + architectural details such as network width or depth have minimal effects within a wide + range. Simple equations govern the dependence of overfitting on model/dataset size and the + dependence of training speed on model size. These relationships allow us to determine the + optimal allocation of a fixed compute budget. Larger models are significantly more sample- + efficient, such that optimally compute-efficient training involves training very large models + on a relatively modest amount of data and stopping significantly before convergence. + + + Equal contribution. + + Contributions: Jared Kaplan and Sam McCandlish led the research. Tom Henighan contributed the LSTM ex- + periments. Tom Brown, Rewon Child, and Scott Gray, and Alec Radford developed the optimized Transformer + implementation. Jeff Wu, Benjamin Chess, and Alec Radford developed the text datasets. Dario Amodei provided + guidance throughout the project. 
Contents

 1 Introduction
 2 Background and Methods
 3 Empirical Results and Basic Power Laws
 4 Charting the Infinite Data Limit and Overfitting
 5 Scaling Laws with Model Size and Training Time
 6 Optimal Allocation of the Compute Budget
 7 Related Work
 8 Discussion

 Appendices
 A Summary of Power Laws
 B Empirical Model of Compute-Efficient Frontier
 C Caveats
 D Supplemental Figures


 1 Introduction

 Language provides a natural domain for the study of artificial intelligence, as the vast majority of reasoning
 tasks can be efficiently expressed and evaluated in language, and the world’s text provides a wealth of
 data for unsupervised learning via generative modeling. Deep learning has recently seen rapid progress in
 language modeling, with state of the art models [RNSS18, DCLT18, YDY+19, LOG+19, RSR+19] approaching
 human-level performance on many specific tasks [WPN+19], including the composition of coherent
 multi-paragraph prompted text samples [RWC+19].
 One might expect language modeling performance to depend on model architecture, the size of neural models,
 the computing power used to train them, and the data available for this training process. In this work we will
 empirically investigate the dependence of language modeling loss on all of these factors, focusing on the
 Transformer architecture [VSP+17, LSP+18]. The high ceiling and low floor for performance on language
 tasks allow us to study trends over more than seven orders of magnitude in scale.
 Throughout we will observe precise power-law scaling for performance as a function of training time,
 context length, dataset size, model size, and compute budget.

 1.1 Summary

 Our key findings for Transformer language models are as follows:

 2 Here we display predicted compute when using a sufficiently small batch size. See Figure 13 for comparison to the
 purely empirical data.

 <
> + + Figure 1 Language modeling performance improves smoothly as we increase the model size, dataset + size, and amount of compute 2 used for training. For optimal performance all three factors must be scaled + up in tandem. Empirical performance has a power-law relationship with each individual factor when not + bottlenecked by the other two. + + + Performance depends strongly on scale, weakly on model shape: Model performance depends most + strongly on scale, which consists of three factors: the number of model parameters N (excluding + embeddings), the size of the datasetD, and the amount of compute C used for training. Within reasonable limits, + performance depends very weakly on other architectural hyperparameters such as depth vs. width. (Section + 3) + + Smooth power laws: Performance has a power-law relationship with each of the three scale factors + N;D;C when not bottlenecked by the other two, with trends spanning more than six orders of magnitude + (see Figure 1). We observe no signs of deviation from these trends on the upper end, though performance + must flatten out eventually before reaching zero loss. (Section 3) + + Universality of overfitting: Performance improves predictably as long as we scale up N and D in tandem, + but enters a regime of diminishing returns if eitherNorDis held fixed while the other increases. The + performance penalty depends predictably on the ratioN0:74 =D, meaning that every time we increase the + model size 8x, we only need to increase the data by roughly 5x to avoid a penalty. (Section 4) + + Universality of training: Training curves follow predictable power-laws whose parameters are roughly + independent of the model size. By extrapolating the early part of a training curve, we can roughly predict the + loss that would be achieved if we trained for much longer. (Section 5) + + Transfer improves with test performance: When we evaluate models on text with a different distribution + than they were trained on, the results are strongly correlated to those on the training validation set with + a roughly constant offset in the loss – in other words, transfer to a different distribution incurs a constant + penalty but otherwise improves roughly in line with performance on the training set. (Section 3.2.2) + + Sample efficiency: Large models are more sample-efficient than small models, reaching the same level of + performance with fewer optimization steps (Figure 2) and using fewer data points (Figure 4). + + Convergence is inefficient: When working within a fixed compute budget C but without any other restrictions + on the model size N or available dataD, we attain optimal performance by training very large models + and stopping significantly short of convergence(see Figure 3). Maximally compute-efficient training would + therefore be far more sample efficient than one might expect based on training small models to convergence, + with data requirements growing very slowly as <> with training compute. (Section 6) + + Optimal batch size: The ideal batch size for training these models is roughly a power of the loss only, + and continues to be determinable by measuring the gradient noise scale [MKAT18]; it is roughly 1-2 million + tokens at convergence for the largest models we can train. (Section 5.1) + Taken together, these results show that language modeling performance improves smoothly and predictably + as we appropriately scale up model size, data, and compute. 
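To make the shape of these trends concrete, the snippet below evaluates the generic power-law form L(X) = (X_c / X)^alpha_X that the paper fits separately for parameters, data, and compute. The function and constant names are ours, and the numerical constants are stand-ins of roughly the magnitude of the published fits; the exact fitted values appear only as "<>" placeholders in this extraction.

    def power_law_loss(x, x_c, alpha):
        """Generic scaling-law form L(x) = (x_c / x) ** alpha used throughout the paper."""
        return (x_c / x) ** alpha

    # Stand-in constants, not values taken from this text.
    ALPHA_N, N_C = 0.076, 8.8e13   # non-embedding parameter count
    ALPHA_D, D_C = 0.095, 5.4e13   # dataset size in tokens
    ALPHA_C, C_C = 0.050, 3.1e8    # optimally allocated compute, in PF-days

    print(power_law_loss(1.5e9, N_C, ALPHA_N))    # loss predicted from model size alone
    print(power_law_loss(2.3e10, D_C, ALPHA_D))   # loss predicted from dataset size alone
    print(power_law_loss(1.0, C_C, ALPHA_C))      # loss predicted from 1 PF-day of compute

Because each factor enters through its own power law, doubling any one of N, D, or C_min improves the loss by a fixed multiplicative factor, which is the sense in which the trends in Figure 1 appear as straight lines on log-log axes.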
We expect that larger language models will
 perform better and be more sample efficient than current models.

 <>

 Figure 2 We show a series of language model training runs, with models ranging in size from 10^3 to 10^9
 parameters (excluding embeddings).

 <
> + + Figure 3 As more compute becomes available, we can choose how much to allocate towards training larger + models, using larger batches, and training for more steps. We illustrate this for a billion-fold increase in + compute. For optimally compute-efficient training, most of the increase should go towards increased model + size. A relatively small increase in data is needed to avoid reuse. Of the increase in data, most can be used to + increase parallelism through larger batch sizes, with only a very small increase in serial training time required. + + + + 1.2 Summary of Scaling Laws + + The test loss of a Transformer trained to auto regressively model language can be predicted using a power-law + when performance is limited by only either the number of non-embedding parametersN, the dataset sizeD, + or the optimally allocated compute budget C_min (see Figure 1): + 1.For models with a limited number of parameters, trained to convergence on sufficiently large + datasets: + <> (non-embedding parameters) (1.1) + 2.For large models trained with a limited dataset with early stopping: + <> (tokens) (1.2) + 3.When training with a limited amount of compute, a sufficiently large dataset, an optimally-sized + model, and a sufficiently small batch size (making optimal 3 use of compute): + <> + 3 We also observe an empirical power-law trend with the training computeC(Figure 1) while training at fixed batch + size, but it is the trend withCmin that should be used to make predictions. They are related by equation (5.5). + + <
> + + Figure 4 Left: The early-stopped test lossL(N;D)varies predictably with the dataset size D and model + size N according to Equation (1.5).Right: After an initial transient period, learning curves for all model + sizes N can be fit with Equation (1.6), which is parameterized in terms of S_min , the number of steps when + training at large batch size (details in Section 5.1). + + + These relations hold across eight orders of magnitude in C_min , six orders of magnitude inN, and over two + orders of magnitude inD. They depend very weakly on model shape and other Transformer hyperparameters + (depth, width, number of self-attention heads), with specific numerical values associated with the Webtext2 + training set [RWC + 19]. The power lawsN ; D ; min specify the degree of performance improvement C expected as we scale upN,D, orCmin ; for example, doubling the number of parameters yields a loss that + is smaller by a factor <>. The precise numerical values ofNc ;C min ;andDc c depend on the + vocabulary size and tokenization and hence do not have a fundamental meaning. + The critical batch size, which determines the speed/efficiency tradeoff for data parallelism ([MKAT18]), also + roughly obeys a power law in L: + + <> + + Equation (1.1) and (1.2) together suggest that as we increase the model size, we should increase the dataset + size sublinearly according to <>. In fact, we find that there is a single equation combining + (1.1) and (1.2) that governs the simultaneous dependence on N and D and governs the degree of overfitting: + + <> (1.5) + + with fits pictured on the left in figure 4. We conjecture that this functional form may also parameterize the + trained log-likelihood for other generative modeling tasks. + When training a given model for a finite number of parameter update stepsSin the infinite data limit, after + an initial transient period, the learning curves can be accurately fit by (see the right of figure 4) + + <> (1.6) + + where <> and <>, and S_min(S) is the minimum possible number of optimization steps + (parameter updates) estimated using Equation (5.4). + When training within a fixed compute budgetC, but with no other constraints, Equation (1.6) leads to the + prediction that the optimal model sizeN, optimal batch sizeB, optimal number of stepsS, and dataset size + Dshould grow as + <> (1.7) + with + <> (1.8) + which closely matches the empirically optimal resultsN/C0:73 ,B/C0:24 , andS/C0:03 . As the + computational budget C increases, it should be spent primarily on larger models, without dramatic increases + in training time or dataset size (see Figure 3). This also implies that as models grow larger, they become + increasingly sample efficient. In practice, researchers typically train smaller models for longer than would + be maximally compute-efficient because of hardware constraints. Optimal performance depends on total + compute as a power law (see Equation (1.3)). + We provide some basic theoretical motivation for Equation (1.5), an analysis of learning curve fits and their + implications for training time, and a breakdown of our results per token. We also make some brief comparisons + to LSTMs and recurrent Transformers [DGV + 18]. + + 1.3 Notation + + We use the following notation: + L– the cross entropy loss in nats. Typically it will be averaged over the tokens in a context, but in + some cases we report the loss for specific tokens within the context. 
+ N– the number of model parameters,excluding all vocabulary and positional embeddings + C6NBS– an estimate of the total non-embedding training compute, whereBis the batch size, + andSis the number of training steps (ie parameter updates). We quote numerical values in PF-days, + where one PF-day= 10 15 243600 = 8:6410 19 floating point operations. + D– the dataset size in tokens + B_crit – the critical batch size [MKAT18], defined and discussed in Section 5.1. Training at the + critical batch size provides a roughly optimal compromise between time and compute efficiency. + Cmin – an estimate of the minimum amount of non-embedding compute to reach a given value of + the loss. This is the training compute that would be used if the model were trained at a batch size + much less than the critical batch size. + S_min – an estimate of the minimal number of training steps needed to reach a given value of the loss. + This is also the number of training steps that would be used if the model were trained at a batch size + much greater than the critical batch size. + X – power-law exponents for the scaling of the loss as <> where X can be any of + <>. + + 2 Background and Methods + + We train language models on WebText2, an extended version of the WebText [RWC + 19] dataset, tokenized + using byte-pair encoding [SHB15] with a vocabulary size n vocab = 50257. We optimize the autoregressive + log-likelihood (i.e. cross-entropy loss) averaged over a 1024-token context, which is also our principal + performance metric. We record the loss on the WebText2 test distribution and on a selection of other text + distributions. We primarily train decoder-only [LSP + 18, RNSS18] Transformer [VSP + 17] models, though + we also train LSTM models and Universal Transformers [DGV + 18] for comparison. + + 2.1 Parameter and Compute Scaling of Transformers + + We parameterize the Transformer architecture using hyperparameters n layer (number of layers),d model + (dimension of the residual stream), d (dimension of the intermediate feed-forward layer),dattn (dimension of + the attention output), and n heads (number of attention heads per layer). We include n ctx tokens in the input + context, with n ctx = 1024 except where otherwise noted. + We use N to denote the model size, which we define as the number of non-embedding parameters + + <> (2.1) + + where we have excluded biases and other sub-leading terms. Our models also have n vocab d model parameters + in an embedding matrix, and use n ctx d model parameters for positional embeddings, but we do not include + these when discussing the ‘model size’N; we will see that this produces significantly cleaner scaling laws. + Evaluating a forward pass of the Transformer involves roughly + + <> (2.2) + + add-multiply operations, where the factor of two comes from the multiply-accumulate operation used in + matrix multiplication. A more detailed per-operation parameter and compute count is included in Table 1. + + <
> + + Table 1 Parameter counts and compute (forward pass) estimates for a Transformer model. Sub-leading + terms such as nonlinearities, biases, and layer normalization are omitted. + + + + For contexts and models with d model > n ctx =12, the context-dependent computational cost per token is a + relatively small fraction of the total compute. Since we primarily study models where d model n ctx=12, + we do not include context-dependent terms in our training compute estimate. Accounting for the backwards + pass (approximately twice the compute as the forwards pass), we then define the estimated non-embedding + compute as <> floating point operators per training token. + + 2.2 Training Procedures + + Unless otherwise noted, we train models with the Adam optimizer [KB14] for a fixed <> steps with + a batch size of512sequences of1024tokens. Due to memory constraints, our largest models (more than + 1B parameters) were trained with Adafactor [SS18]. We experimented with a variety of learning rates and + schedules, as discussed in Appendix D.6. We found that results at convergence were largely independent of + learning rate schedule. Unless otherwise noted, all training runs included in our data used a learning rate + schedule with a 3000 step linear warmup followed by a cosine decay to zero. + + 2.3 Datasets + + We train our models on an extended version of the WebText dataset described in [RWC + 19]. The original + WebText dataset was a web scrape of outbound links from Reddit through December 2017 which received at + least 3 karma. In the second version, WebText2, we added outbound Reddit links from the period of January + to October 2018, also with a minimum of 3 karma. The karma threshold served as a heuristic for whether + people found the link interesting or useful. The text of the new links was extracted with the Newspaper3k + python library. In total, the dataset consists of 20.3M documents containing 96 GB of text and <> + words (as defined bywc). We then apply the reversible tokenizer described in [RWC + 19], which yields + <> tokens. We reserve <> of these tokens for use as a test set, and we also test on similarly- + prepared samples of Books Corpus [ZKZ + 15], Common Crawl [Fou], English Wikipedia, and a collection + of publicly-available Internet Books. + + 3 Empirical Results and Basic Power Laws + + To characterize language model scaling we train a wide variety of models, varying a number of factors + including: + + Model size (ranging in size from 768 to 1.5 billion non-embedding parameters) + Dataset size (ranging from 22 million to 23 billion tokens) + Shape (including depth, width, attention heads, and feed-forward dimension) + Context length (1024 for most runs, though we also experiment with shorter contexts) + Batch size (219 for most runs, but we also vary it to measure the critical batch size) + + <
>

 Figure 5 Performance depends very mildly on model shape when the total number of non-embedding
 parameters N is held fixed. The loss varies only a few percent over a wide range of shapes. Small differences
 in parameter counts are compensated for by using the fit to L(N) as a baseline. Aspect ratio in particular can
 vary by a factor of 40 while only slightly impacting performance; an (n_layer, d_model) = (6, 4288) model reaches a
 loss within 3% of the (48, 1600) model used in [RWC+19].

 <
> + + Figure 6 Left:When we include embedding parameters, performance appears to depend strongly on the + number of layers in addition to the number of parameters.Right:When we exclude embedding parameters, + the performance of models with different depths converge to a single trend. Only models with fewer than 2 + layers or with extreme depth-to-width ratios deviate significantly from the trend. + + + In this section we will display data along with empirically-motivated fits, deferring theoretical analysis to + later sections. + + 3.1 Approximate Transformer Shape and Hyperparameter Independence + + Transformer performance depends very weakly on the shape parameters n layer; n heads , and d when we hold + the total non-embedding parameter count N fixed. To establish these results we trained models with fixed + size while varying a single hyperparameter. This was simplest for the case of n heads . When varying n layer, + we simultaneously varied d model while keeping <> layer d2 fixed. Similarly, to vary d model at fixed + model size we also simultaneously varied the d model parameter, as required by the parameter counts in Table + 1. Independence of n layers would follow if deeper Transformers effectively behave as ensembles of shallower + models, as has been suggested for ResNets [VWB16]. The results are shown in Figure 5. + + 3.2 Performance with Non-Embedding Parameter CountN + + In Figure 6 we display the performance of a wide variety of models, ranging from small models with shape + (n layer, d model) = (2,128)through billion-parameter models, ranging in shape from(6;4288)through + (207;768). Here we have trained to near convergence on the full WebText2 dataset and observe no over- + fitting (except possibly for the very largest models). + As shown in Figure 1, we find a steady trend with non-embedding parameter countN, which can be fit to the + first term of Equation (1.5), so that + + <> (3.1) + + <
> + + Figure 7 + + + To observe these trends it is crucial to study performance as a function ofN; if we instead use the total + parameter count (including the embedding parameters) the trend is somewhat obscured (see Figure 6). This + suggests that the embedding matrix can be made smaller without impacting performance, as has been seen in + recent work [LCG + 19]. + Although these models have been trained on the WebText2 dataset, their test loss on a variety of other datasets + is also a power-law in N with nearly identical power, as shown in Figure 8. + + 3.2.1 Comparing to LSTMs and Universal Transformers + In Figure 7 we compare LSTM and Transformer performance as a function of non-embedding parameter + countN. The LSTMs were trained with the same dataset and context length. We see from these figures + that the LSTMs perform as well as Transformers for tokens appearing early in the context, but cannot match + the Transformer performance for later tokens. We present power-law relationships between performance and + context position Appendix D.5, where increasingly large powers for larger models suggest improved ability + to quickly recognize patterns. + We also compare the performance of standard Transformers to recurrent Transformers [DGV + 18] in Figure + 17 in the appendix. These models re-use parameters, and so perform slightly better as a function ofN, at the + cost of additional compute per-parameter. + + 3.2.2 Generalization Among Data Distributions + We have also tested our models on a set of additional text data distributions. The test loss on these datasets + as a function of model size is shown in Figure 8; in all cases the models were trained only on the WebText2 + dataset. We see that the loss on these other data distributions improves smoothly with model size, in direct + parallel with the improvement on WebText2. We find that generalization depends almost exclusively on the + in-distribution validation loss, and does not depend on the duration of training or proximity to convergence. + We also observe no dependence on model depth (see Appendix D.8). + + 3.3 Performance with Dataset Size and Compute + + We display empirical trends for the test loss as a function of dataset sizeD(in tokens) and training compute + Cin Figure 1. + For the trend withDwe trained a model with <> on fixed subsets of the WebText2 + dataset. We stopped training once the test loss ceased to decrease. We see that the resulting test losses can be + fit with simple power-law + + <> (3.2) + + in the dataset size. The data and fit appear in Figure 1. + The total amount of non-embedding compute used during training can be estimated asC= 6NBS, where + Bis the batch size,Sis the number of parameter updates, and the factor of6accounts for the forward and + backward passes. Thus for a given value ofCwe can scan over all models with variousNto find the model + + <
> + + Figure 8 Left:Generalization performance to other data distributions improves smoothly with model size, + with only a small and very slowly growing offset from the WebText2 training distribution.Right: + Generalization performance depends only on training distribution performance, and not on the phase of training. + We compare generalization of converged models (points) to that of a single large model (dashed curves) as it + trains. + + + with the best performance on stepS= C . Note that in these results the batch size B remains fixed for + all models, which means that these empirical results are not truly optimal. We will account for this in later 6BS + sections using an adjusted C_min to produce cleaner trends. + The result appears as the heavy black line on the left-hand plot in Figure 1. It can be fit with + + <> (3.3) + + The figure also includes images of individual learning curves to clarify when individual models are optimal. + We will study the optimal allocation of compute more closely later on. The data strongly suggests that sample + efficiency improves with model size, and we also illustrate this directly in Figure 19 in the appendix. + + 4 Charting the Infinite Data Limit and Overfitting + + In Section 3 we found a number of basic scaling laws for language modeling performance. Here we will + study the performance of a model of size N trained on a dataset with D tokens while varying N and D + simultaneously. We will empirically demonstrate that the optimally trained test loss accords with the scaling + law of Equation (1.5). This provides guidance on how much data we would need to train models of increasing + size while keeping overfitting under control. + + 4.1 Proposed L(N;D) Equation + + We have chosen the parameterization (1.5) (repeated here for convenience): + + <> (4.1) + + using three principles: + + 1.Changes in vocabulary size or tokenization are expected to rescale the loss by an overall factor. The + parameterization of L(N;D) (and all models of the loss) must naturally allow for such a rescaling. + 2.Fixing D and sending N!1, the overall loss should approachL(D). Conversely, fixing N and + sending D!1 the loss must approach L(N). + 3.L(N;D) should be analytic atD=1, so that it has a series expansion in 1=D with integer powers. + Theoretical support for this principle is significantly weaker than for the first two. + + Our choice of L(N;D) satisfies the first requirement because we can rescaleNc ;D c with changes in the + vocabulary. This also implies that the values ofNc ;D c have no fundamental meaning. + + <
> + + Figure 9 The early-stopped test lossL(N;D)depends predictably on the dataset size D and model sizeN + according to Equation (1.5).Left: For largeD, performance is a straight power law inN. For a smaller fixed + D, performance stops improving as N increases and the model begins to overfit. (The reverse is also true, + see Figure 4.)Right: The extent of overfitting depends predominantly on the ratio <>, as predicted in + equation (4.3). The line is our fit to that equation. + + + Since we stop training early when the test loss ceases to improve and optimize all models in the same way, we + expect that larger models should always perform better than smaller models. But with fixed finiteD, we also + do not expect any model to be capable of approaching the best possible loss (ie the entropy of text). Similarly, + a model with fixed size will be capacity-limited. These considerations motivate our second principle. Note + that knowledge ofL(N)at infinite D and L(D) at infinite N fully determines all the parameters inL(N;D). + The third principle is more speculative. There is a simple and general reason one might expect overfitting + to scale/1=Dat very largeD. Overfitting should be related to the variance or the signal-to-noise ratio + of the dataset [AS17], and this scales as1=D. This expectation should hold for any smooth loss function, + since we expect to be able to expand the loss about theD! 1limit. However, this argument assumes that + 1=D corrections dominate over other sources of variance, such as the finite batch size and other limits on the + efficacy of optimization. Without empirical confirmation, we would not be very confident of its applicability. + Our third principle explains the asymmetry between the roles of N and D in Equation (1.5). Very similar + symmetric expressions 4 are possible, but they would not have a 1=D expansion with integer powers, and + would require the introduction of an additional parameter. + In any case, we will see that our equation forL(N;D)fits the data well, which is the most important justification + for our L(N;D). + + 4.2 Results + + We regularize all our models with 10% dropout, and by tracking test loss and stopping once it is no longer + decreasing. The results are displayed in Figure 9, including a fit to the four parameters <> in + Equation (1.5): + + <
> + + Table 2 Fits to L(N;D) + + We obtain an excellent fit, with the exception of the runs where the dataset has been reduced by a factor of + 1024, to about <> tokens. With such a small dataset, an epoch consists of only 40 parameter updates. + Perhaps such a tiny dataset represents a different regime for language modeling, as overfitting happens very + early in training (see Figure 16). Also note that the parameters differ very slightly from those obtained in + Section 3, as here we are fitting the full L(N;D) rather than just L(N;1) or L(1;D). + To chart the borderlands of the infinite data limit, we can directly study the extent of overfitting. For all but + the largest models, we see no sign of overfitting when training with the full 22B token WebText2 dataset, + so we can take it as representative ofD=1. Thus we can compare finiteDto the infinite data limit by + <> For example, one might have used <>, but this does not have a 1=D expansion. + + <
> + + Figure 10 The critical batch size B crit follows a power law in the loss as performance increase, and does + not depend directly on the model size. We find that the critical batch size approximately doubles for every + 13%decrease in loss B crit is measured empirically from the data shown in Figure 18, but it is also roughly + predicted by the gradient noise scale, as in [MKAT18]. + + + defining + <> (4.2) + + and studying it as a function ofN;D. In fact, we see empirically that L depends only a specific combination + of N and D, as shown in Figure 16. This follows from the scaling law of Equation (1.5), which implies + + <> (4.3) + + Note that at large D this formula also has a series expansion in powers of 1=D. + We estimate that the variation in the loss with different random seeds is roughly <>, which means that to + avoid overfitting when training to within that threshold of convergence we require + + <> (4.4) + + With this relation, models smaller than10 9 parameters can be trained with minimal overfitting on the 22B + token WebText2 dataset, but our largest models will encounter some mild overfitting. More generally, this + relation shows that dataset size may grow sub-linearly in model size while avoiding overfitting. Note however + that this does not typically represent maximally compute-efficient training. We should also emphasize that + we have not optimized regularization (eg the dropout probability) while varying dataset and model size. + + 5 Scaling Laws with Model Size and Training Time + + In this section we will demonstrate that a simple scaling law provides a good description for the loss as a + function of model size N and training time. First we will explain how to use the results of [MKAT18] to + define a universal training step S_min , which accounts for the fact that most of our models have not been + trained at an optimal batch size. Then we will demonstrate that we can fit the model size and training time + dependence of the loss using Equation (1.6). Later we will use these results to predict the optimal allocation + of training compute between model size and training time, and then confirm that prediction. + + 5.1 Adjustment for Training at B_crit (L) + + A simple empirical theory for the batch size dependence of training was developed in [MKAT18] (see also + [SLA + 18, ZLN + 19]). It was argued that there is a critical batch size B_crit for training; forBup to B_crit + the batch size can be increased with very minimal degradation in compute-efficiency, whereas for <> increases in + B result in diminishing returns. It was also argued that the gradient noise scale provides a simple + prediction for B_crit , and that neither depends directly on model size except through the value of the loss that + has been attained. These results can be used to predict how training time and compute will vary with the + batch size. To utilize both training time and compute as effectively as possible, it is best to train with a batch + size <>. Training at <> minimizes the number of training steps, while <> minimizes + the use of compute. + More specifically, it was demonstrated that for a wide variety of neural network tasks, the number of training + stepsSand the number of data examples processed E=BS satisfy the simple relation + + <> (5.1) + + when training to any fixed value of the lossL. Here S_min is the minimum number of steps necessary to reach + L, while E_min is the minimum number of data examples that must be processed. 
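The display equation for (5.1) did not survive extraction; the sketch below assumes the hyperbolic form (S / S_min - 1)(E / E_min - 1) = 1 from [MKAT18], which is consistent with the definitions of S_min and E_min above, and shows how the critical batch size follows from it. The function names are ours, not the paper's code.

    def steps_required(E, S_min, E_min):
        """Steps S needed to reach the target loss when processing E examples, assuming
        relation (5.1) has the form (S / S_min - 1) * (E / E_min - 1) = 1."""
        return S_min * (1.0 + 1.0 / (E / E_min - 1.0))

    def critical_batch_size(E_min, S_min):
        """B_crit = E_min / S_min; training at this batch size reaches the target loss in
        about 2 * S_min steps while processing about 2 * E_min examples."""
        return E_min / S_min

Under this assumed form, training at B = B_crit gives exactly S = 2 S_min and E = 2 E_min, matching the time/compute tradeoff quoted just below.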
+ We demonstrate the relation (5.1) for Transformers in Figure 18 in the appendix. This relation defines the + critical batch size + + <> (5.2) + + which is a function of the target value of the loss. Training at the critical batch size makes a roughly optimal + time/compute tradeoff, requiring 2S_min training steps and processing <> data examples. + In Figure 10 we have plotted the critical batch size and gradient noise scale 5 as a function of training loss for + two different models. We see that B_crit(L) is independent of model size, and only depends on the lossL. So + the predictions of [MKAT18] continue to hold for Transformer language models. The critical batch size can + be fit with a power-law in the loss + + <> (5.3) + + where <> and <>. + + We have chosen this parameterization for B_crit(L) because as the loss approaches its minimum value L_min, + the gradient noise scale is expected to diverge, and we expect B_crit to track this noise scale. We do not + know L_min, as we see no sign that our models are approaching it, but L_min>0 since the entropy of natural + language is non-zero. Since apparently L_min is much smaller than the values ofLwe have achieved, we used + a parameterization where B_crit diverges asL!0. + We will use B_crit (L)to estimate the relation between the number of training steps S while training at batch + sizeB= 2 19 tokens and the number of training steps while training at <>. This is simply + + <> (5.4) + + for any given target value L for the loss. This also defines a critical value of the compute needed to train toL + with a model of sizeNif we were to train at <>. This is + + <> (5.5) + + where <> estimates the (non-embedding) compute used at batch size B. + + 5.2 Results for <> and Performance with Model Size and Compute + + Now we will use S_min defined in Equation (5.4) to obtain a simple and universal fit for the dependence of the + loss on model size and training time in the infinite data limit. We will fit the stable, Adam-optimized training + runs using Equation (1.6), repeated here for convenience: + + <> (5.6) + + for the loss. We include all training steps after the warmup period of the learning rate schedule, and find a fit + to the data with the parameters: + 5 Although the critical batch size roughly matches the gradient noise scale, we are using a direct measurements of + B_crit from Figures 18 and 10 for all our later analyses. + + <
>

 Figure 11 When we hold either total compute or number of training steps fixed, performance follows
 L(N;S) from Equation (5.6). Each value of the compute budget has an associated optimal model size that
 maximizes performance. Mediocre fits at small S are unsurprising, as the power-law equation for the learning
 curves breaks down very early in training.

 <
> + + Table 3 Fits toL(N;S) + + + With these parameters, we obtain the learning curve fits in Figure 4. Though the fits are imperfect, we believe + they are quite compelling given the simplicity of Equation (5.6). + The data and fits can be visualized in a different and more interesting way, as shown in Figure 11. There we + study the test loss as a function of model size while fixing either the total non-embedding compute C used + in training, or the number of stepsS. For the fits we use Equation (5.5) and (5.4) along with the parameters + above and Equation (5.6). + The power-law dependence of the loss on S_min reflects the interplay of optimizer dynamics and the loss + landscape. Since the fits are best late in training, when the loss may be approximately quadratic, the power- + law should provide information about the spectrum of the Hessian of the loss. Its universality suggests that + the Hessian eigenvalue density is roughly independent of model size. + + 5.3 Lower Bound on Early Stopping Step + + The results for<>can be used to derive a lower-bound (and rough estimate) of the step at which + early stopping should occur when training is data limited. It is motivated by the idea that finite and infiniteD + learning curves for a given model will be very similar until we reach <>. Thus overfitting should + be proportional to the correction from simply ending training at S stop . This will underestimate S_stop, because + in reality the test loss will decrease more slowly when we have a finiteD, and therefore we will require more + training steps to reach the optimal test loss at finiteD. This line of reasoning leads to the inequality + + <> (5.7) + + whereL(N;1)is the converged loss, evaluated with infinite available data. This inequality and its + comparison to the empirical data is displayed in Figure 16 in the appendix. In that figure, the values of S stop and L(N;D) are empirical (though S stop is adjusted to mimic training at <>), while L(N;1) is + computed from the fit to L(N;D) evaluated at D=1. + + + 6 Optimal Allocation of the Compute Budget + + We displayed the empirical trend of performance as a function of the computation used during training in + the top-right of Figure 1. However, this result involved training at a fixed batch sizeB, whereas we know + + <
>

 Figure 12 Left: Given a fixed compute budget, a particular model size is optimal, though somewhat larger
 or smaller models can be trained with minimal additional compute. Right: Models larger than the compute-
 efficient size require fewer steps to train, allowing for potentially faster training if sufficient additional
 parallelism is possible. Note that this equation should not be trusted for very large models, as it is only valid in the
 power-law region of the learning curve, after initial transient effects.

 <
> + + Figure 13 When adjusting performance to simulate training far below the critical batch size, we find a + somewhat altered power law for L(C_min) when compared with the fully empirical results. The conspicuous + lump at <> PF-days marks the transition from 1-layer to 2-layer networks; we exclude 1-layer networks + in the power-law fits. It is the L(C_min) trend that we expect to provide a reliable extrapolation for larger + compute. + + + that in fact we could train more efficiently 6 by training at the batch size B_crit discussed in Section 5.1. + Large and small values of the loss could have been achieved with fewer samples or fewer steps, respectively, + and correcting for this inefficiency by standardizing to the critical batch size results in cleaner and more + predictable trends. + In this section we will adjust for this oversight. More importantly, we will use the results of Section 5 + to determine the optimal allocation of compute between model size N and the quantity of data processed + during training, namely <>. We will determine this allocation both empirically and theoretically, by + using the equation for <>, and we will demonstrate that these methods agree. + + 6.1 Optimal Performance and Allocations + + Let us first study the loss as a function of the optimally allocated compute from Equation (5.5). The result is + plotted in Figure 13, along with a power-law fit. We see that as compared to the compute plot of Figure 1, the + new fit with C_min is somewhat improved. + Given L(C_min), it is natural to ask for the optimal model size N(C_min) that provides the minimal loss with a + given quantity of training compute. The optimal model size is shown in Figure 14. We observe that N(C_min) + + 6 One might ask why we did not simply train at B_crit in the first place. The reason is that it depends not only on the + model but also on the target value of the loss we wish to achieve, and so is a moving target. + + <> + + Figure 14 Left:Each value of the compute budget C_min has an associated optimal model sizeN. Optimal + model size grows very rapidly with C_min, increasing by 5x for each 10x increase in compute. The number + of data examples processed makes up the remainder of the increase, growing relatively modestly by only 2x. + Right:The batch-adjusted number of optimization steps also grows very slowly, if at all, meaning that most + of the growth in data examples processed can be used for increased batch sizes. + + + can be fit very well with a power-law + + <> (6.1) + + In Figure 12, we show the effect of training models of sub-optimal sizes (see Appendix B.4). + By definition <>, and so we can use <> to extract further results. In particular, since + prior fits show <> and <>, we can conclude that <>. This leads us to conclude min that + the optimal number of steps will only grow very slowly with compute, as + + <>; (6.2) + + matching the empirical results in Figure 14. In fact the measured exponent is sufficiently small that our results + may even be consistent with an exponent of zero. + + Thus we conclude that as we scale up language modeling with an optimal allocation of computation, we + should predominantly increase the model sizeN, while simultaneously scaling up the batch size via <> + with negligible increase in the number of serial steps. Since compute-efficient training uses relatively + few optimization steps, additional work on speeding up early training dynamics may be warranted. 
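A minimal sketch of this allocation rule, using the empirical exponents N ~ C^0.73, B ~ C^0.24, and S ~ C^0.03 quoted in Section 1.2; the function below is illustrative and is not part of the paper's code.

    def allocate_compute_increase(compute_multiplier):
        """Split a multiplicative increase in training compute across model size, batch
        size, and serial steps using the empirical exponents N ~ C^0.73, B ~ C^0.24,
        S ~ C^0.03; the three factors multiply back to roughly the compute increase,
        consistent with C ~ 6 N B S."""
        return {
            "model_size":   compute_multiplier ** 0.73,
            "batch_size":   compute_multiplier ** 0.24,
            "serial_steps": compute_multiplier ** 0.03,
        }

    # A 10x compute budget goes mostly into a ~5.4x larger model, a ~1.7x larger
    # batch, and an essentially unchanged number of serial optimization steps.
    print(allocate_compute_increase(10.0))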
+ + 6.2 Predictions from <> + + The results for <> and the allocations can be predicted from the <> equation obtained in + Section 5. Given our equation for <>, we can substitute <> and then find the minimum + of the loss as a function ofN, while fixing the training compute. We carry out this procedure in detail in 6NB + Appendix B, where we also provide some additional predictions. + For the loss as a function of training compute, we predict that + + <> (6.3) + + in excellent agreement with the exponent of Figure 13. We also predict that + + <> (6.5) + + which also matches the scaling of Figure 14 to within a few percent. Our scaling laws provide a predictive + framework for the performance of language modeling. + + <
> + + Figure 15 Far beyond the model sizes we study empirically, we find a contradiction between our equations + for<>andL(D)due to the slow growth of data needed for compute-efficient training. The intersection + marks the point before which we expect our predictions to break down. The location of this point is highly + sensitive to the precise exponents from our power-law fits. + + + 6.3 Contradictions and a Conjecture + + We observe no signs of deviation from straight power-law trends at large values of compute, data, or model + size. Our trends must eventually level off, though, since natural language has non-zero entropy. + Indeed, the trends for compute-efficient training described in this section already contain an apparent contra- + diction. At scales several orders of magnitude above those documented here, the performance predicted by + the<>scaling law decreases below what should be possible given the slow growth in training data with + compute. This implies that our scaling laws must break down before this point, but we conjecture that the + intersection point has a deeper meaning: it provides an estimate of the point at which Transformer language + models reach maximal performance. + Since the amount of data used by compute-efficient training grows slowly with the compute budget, the + performance predicted by<>eventually hits a lower bound set by theL(D)power law (see Figure 15). + Let us work this out in more detail. + To keep overfitting under control, the results of Section 4 imply that we should scale the dataset size as + + <> (6.6) + + where we have used the compute-efficient <> from Figure 14. + Let us compare this to the data requirements of compute-efficient training. If we train at the critical batch + size (i.e. <>) and never re-use data during training, we find that data usage grows with compute as + + <> (6.7) + + This is the maximum rate at which the dataset size can productively grow with compute, since it means that + we are only training for a single epoch. But it grows the dataset much more slowly than in Equation (6.6). + It appears to imply that compute-efficient training will eventually run into a problem with overfitting, even if + the training process never re-uses any data! + According to Figure 1, we expect that when we are bottlenecked by the dataset size (ie by overfitting), the + loss should scale as <>. This implies that the loss would scale with compute as <> + once we are data-limited. Once again, we have a contradiction, as this will eventually intersect with min + our prediction for <> from Figure 13, where we found a scaling <> + The intersection point of <> and <> occurs at + + <> (6.8) + + though the numerical values are highly uncertain, varying by an order or magnitude in either direction de- + pending on the precise values of the exponents from the power-law fits. The most obvious interpretation is + that our scaling laws break down at or before we reach this point, which is still many orders of magnitude + away in both compute and model size. + One might also conjecture that this intersection point has a deeper meaning. If we cannot increase the model + size beyond N without qualitatively different data requirements, perhaps this means that once we reach + C and N, we have extracted all of the reliable information available in natural language data. In this min + interpretation, L would provide a rough estimate for the entropy-per-token 7 of natural language. In this + scenario, we would expect the loss trend to level off at or before L. 
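Since the numerical estimate in (6.8) survives only as a placeholder here, the sketch below simply shows how such an intersection point is computed for two generic power-law trends; the fit constants are deliberately left as arguments rather than filled in from the paper.

    def power_law_intersection(a1, p1, a2, p2):
        """Return the compute C* where a1 * C**(-p1) == a2 * C**(-p2), i.e. where the
        compute-efficient trend L(C_min) crosses the data-limited trend L(D(C_min)).
        The constants a1, p1, a2, p2 must be supplied; they are not taken from the paper."""
        if p1 == p2:
            raise ValueError("parallel power laws never intersect")
        c_star = (a1 / a2) ** (1.0 / (p1 - p2))
        return c_star, a1 * c_star ** (-p1)

Because C* depends exponentially on 1 / (p1 - p2), small changes in the fitted exponents move the intersection by orders of magnitude, which is why the location of this point is so uncertain.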
We can guess at the functional form of <> as it levels off by considering a version of our training dataset with added noise. For example, we could append a random string of tokens to each context shown to the model to artificially boost the loss by a constant additive factor. Then, the distance from the noise floor, L - L_noise, would be a more meaningful performance metric, with even a small decrease in this distance potentially representing a significant boost in qualitative performance. Since the artificial noise would affect all of our trends equally, the critical point of Equation (6.8) would not change (aside from the absolute value of L*), and may be meaningful even if it occurs after the leveling off.

7 Related Work

Power laws can arise from a wide variety of sources [THK18]. Power-law scalings with model and dataset size in density estimation [Was06] and in random forest models [Bia12] may be connected with our results. These models suggest that power-law exponents may have a very rough interpretation as the inverse of the number of relevant features in the data.

Some early work [BB01, Goo01] found power-law scalings between performance and dataset size. More recent work [HNA+17, HAD19] also investigated scaling between model size and data size; their work is perhaps the closest to ours in the literature [8]. Note, however, that [HNA+17] found super-linear scaling of dataset size with model size, whereas we find a sub-linear scaling. There are some parallels between our findings on optimal allocation of compute and [Kom19], including power-law learning curves. EfficientNets [TL19] also appear to obey an approximate power-law relation between accuracy and model size. Very recent work [RRBS19b] studies scaling with both dataset size and model size for a variety of datasets, and fits an ansatz similar to ours.

EfficientNet [TL19] advocates scaling depth and width exponentially (with different coefficients) for optimal performance of image models, resulting in a power-law scaling of width as a function of depth. We find that for language models this power should be roughly one when scaling up (as width/depth should remain fixed). But more importantly, we find that the precise architectural hyperparameters are unimportant compared to the overall scale of the language model. In [VWB16] it was argued that deep models can function as ensembles of shallower models, which could potentially explain this finding. Earlier work [ZK16] has compared width and depth, and found that wide ResNets can outperform deep ResNets on image classification. Some studies fix computation per data example, which tends to scale in proportion to the number of model parameters, whereas we investigate scaling with both model size and the quantity of training computation.

Various works [AS17, BHMM18] have investigated generalization in highly overparameterized models, finding a "jamming transition" [GJS+19] when the model size reaches the dataset size (this may require training many orders of magnitude beyond typical practice, and in particular does not use early stopping). We do not observe such a transition, and find that the necessary training data scales sublinearly in the model size. Expansions in the model size, particularly at large width [JGH18, LXS+19], may provide a useful framework for thinking about some of our scaling relations.
Our results on optimization, such as the shape of learning curves, can likely be explained using a noisy quadratic model, which can provide quite accurate predictions [ZLN+19] in realistic settings. Making this connection quantitative will require a characterization of the Hessian spectrum [Pap18, GKX19, GARD18].

8 Discussion

We have observed consistent scalings of language model log-likelihood loss with non-embedding parameter count N, dataset size D, and optimized training computation C_min, as encapsulated in Equations (1.5) and (1.6). Conversely, we find very weak dependence on many architectural and optimization hyperparameters. Since scalings with <> are power-laws, there are diminishing returns with increasing scale.

[7] Defining words using the wc utility, the WebText2 dataset has 1.4 tokens per word and <> characters per token.
[8] After this work was completed, [RRBS19a] also appeared, which makes similar predictions for the dependence of loss on both model and dataset size.

We were able to precisely model the dependence of the loss on N and D, and alternatively on N and S, when these parameters are varied simultaneously. We used these relations to derive the compute scaling, magnitude of overfitting, early stopping step, and data requirements when training large language models. So our scaling relations go beyond mere observation to provide a predictive framework. One might interpret these relations as analogues of the ideal gas law, which relates the macroscopic properties of a gas in a universal way, independent of most of the details of its microscopic constituents.

It is natural to conjecture that the scaling relations will apply to other generative modeling tasks with a maximum likelihood loss, and perhaps in other settings as well. To this end, it will be interesting to test these relations on other domains, such as images, audio, and video models, and perhaps also for random network distillation. At this point we do not know which of our results depend on the structure of natural language data, and which are universal. It would also be exciting to find a theoretical framework from which the scaling relations can be derived: a 'statistical mechanics' underlying the 'thermodynamics' we have observed. Such a theory might make it possible to derive other more precise predictions, and provide a systematic understanding of the limitations of the scaling laws.

In the domain of natural language, it will be important to investigate whether continued improvement on the loss translates into improvement on relevant language tasks. Smooth quantitative change can mask major qualitative improvements: "more is different". For example, the smooth aggregate growth of the economy provides no indication of the specific technological developments that underwrite it. Similarly, the smooth improvements in language model loss may hide seemingly qualitative changes in capability.

Our results strongly suggest that larger models will continue to perform better, and will also be much more sample efficient than has been previously appreciated. Big models may be more important than big data. In this context, further investigation into model parallelism is warranted. Deep models can be trained using pipelining [HCC+18], which splits parameters depth-wise between devices, but eventually requires increased batch sizes as more devices are used.
Wide networks on the other hand are more amenable to parallelization + [SCP + 18], since large layers can be split between multiple workers with less serial dependency. Sparsity + [CGRS19, GRK17] or branching (e.g. [KSH12]) may allow for even faster training of large networks through + increased model parallelism. And using methods like [WRH17, WYL19], which grow networks as they train, + it might be possible to remain on the compute-efficient frontier for an entire training run. + + Acknowledgements + + We would like to thank Shan Carter, Paul Christiano, Jack Clark, Ajeya Cotra, Ethan Dyer, Jason Eisner, + Danny Hernandez, Jacob Hilton, Brice Menard, Chris Olah, and Ilya Sutskever for discussions and for feed- + back on drafts of this work. + + + Appendices + + + A Summary of Power Laws + + For easier reference, we provide a summary below of the key trends described throughout the paper. + + <
> + + Table 4 + + The empirical fitted values for these trends are: + + <
> + + Table 5 + + The optimal parameters for compute efficient training are given by: + + <
>

Table 6

B Empirical Model of Compute-Efficient Frontier

Throughout this appendix all values of C, S, and α_C are adjusted for training at the critical batch size B_crit. We have left off the 'adj' label to avoid cluttering the notation.

B.1 Defining Equations

The power-law fit to the learning curves implies a simple prescription for compute-efficient training. In this appendix, we will derive the optimal performance, model size, and number of training steps as a function of the compute budget. We start with Equation (1.6), repeated here for convenience:

<> (B.1)

Here, S represents the number of parameter updates when training at the critical batch size [MKAT18], which was defined in Equation (5.2) [9]:

<> (B.2)

We would like to determine optimal training parameters for a fixed compute budget, so we replace S = <>, where C is the number of FLOPs used in the training run:

<> (B.3)

Now, we set ∂_N L|_C = 0 to find the condition for optimality:

<> (B.4)

Equations (B.3) and (B.4) together determine the compute-efficient frontier.

B.2 Efficient Training

Now we assemble the implications of (B.3) and (B.4). First, note that inserting (B.4) into (B.3) yields

<> (B.5)

which implies that for compute-efficient training, we should train to a fixed percentage (roughly 10%) above the converged loss.

Next, let's determine how the optimal loss depends on the compute budget. Eliminating N yields a power-law dependence of performance on compute:

<> (B.6)

where we defined

<> (B.7)

<> (B.8)

Similarly, we can eliminate L to find N(C):

<> (B.9)

and

<> (B.10)

[9] There is a slight ambiguity here: we can imagine training either at a constant batch size <>, or we could instead train at a variable batch size B~(L), where B~ is the instantaneous critical batch size (as opposed to B, which is the averaged version). These two prescriptions result in the same number of steps, so we can ignore this subtlety (see [MKAT18]).

B.3 Comparison to Inefficient

Typically, researchers train models until they appear to be close to convergence. In this section, we compare the efficient training procedure described above to this more typical setup. We define the convergence factor f as the percent deviation from the converged loss:

<> (B.11)

For compute-efficient training we have <> from the previous section, but researchers typically use a much smaller value. Here, we choose f_0 = 2% as an estimate. For a fixed value of the loss, we predict:

<> (B.12)

<> (B.13)

<> (B.14)

So compute-efficient training uses 7.7x fewer parameter updates, 2.7x more parameters, and 65% less compute to reach the same loss.

B.4 Suboptimal Model Sizes

We can solve (B.1) to find an expression for the amount of compute needed to reach a given value of the loss L with a model of size N:

<> (B.15)

Using (B.6) and (B.9), we can eliminate L in favor of N_eff(L), the model size which reaches L most efficiently. From there, we find an expression for the excess compute needed as a consequence of using a suboptimal model size:

<> (B.16)

The result is shown in Figure X. Models between 0.6x and 2.2x the optimal size can be used with only a 20% increase in compute budget. Using a smaller model is useful when accounting for the cost of inference.
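The derivation above can also be checked numerically. The sketch below, again with placeholder constants rather than the fitted values (which are elided in this extraction), traces the compute-efficient frontier by minimizing an assumed L(N, S) = (N_c/N)^α_N + (S_c/S)^α_S over N at fixed compute, using the approximation C ≈ 6NBS; differentiating the same expression by hand reproduces the optimality condition of Equation (B.4).

```python
# Minimal sketch of the compute-efficient frontier of Appendix B.
# All constants below are illustrative placeholders, not fitted values.
import numpy as np
from scipy.optimize import minimize_scalar

aN, Nc = 0.076, 8.8e13     # placeholder parameter-count term (N_c/N)**aN
aS, Sc = 0.76, 2.1e3       # placeholder step-count term (S_c/S)**aS
B = 2.0e6                  # placeholder (adjusted) critical batch size, in tokens

def loss_at(N, C):
    S = C / (6.0 * N * B)          # steps implied by the compute budget C ~ 6*N*B*S
    return (Nc / N) ** aN + (Sc / S) ** aS

def frontier(C):
    # Minimize over log N so the one-dimensional search is well conditioned.
    res = minimize_scalar(lambda x: loss_at(np.exp(x), C),
                          bounds=(np.log(1e6), np.log(1e13)), method="bounded")
    N_opt = np.exp(res.x)
    return N_opt, loss_at(N_opt, C)

for C in [1e18, 1e20, 1e22]:       # FLOP-scale budgets, purely illustrative
    N_opt, L_opt = frontier(C)
    print(f"C={C:.1e}: N_opt ~ {N_opt:.2e}, L ~ {L_opt:.3f}")
```

Evaluating the same loss at model sizes away from N_opt gives the kind of excess-compute comparison discussed here, including the trade-off taken up next: a somewhat larger-than-optimal model reaches a given loss in fewer steps at a modest compute premium.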
A larger model can be trained to the same level of performance in fewer steps, allowing for more parallelism and faster training if sufficient hardware is available (see Figure Y):

<> (B.17)

A 2.2x larger model requires 45% fewer steps at a cost of 20% more training compute. Note that this equation should not be trusted for very large models, as it is only valid in the power-law region of the learning curve after initial transient effects.

C Caveats

In this section we list some potential caveats to our analysis.

At present we do not have a solid theoretical understanding for any of our proposed scaling laws. The scaling relations with model size and compute are especially mysterious. It may be possible to understand scaling at very large D, holding model size fixed [AS17], and also the shape of learning curves late in training, by modeling the loss with a noisy quadratic. But the scaling with D at very large model size still remains mysterious. Without a theory or a systematic understanding of the corrections to our scaling laws, it's difficult to determine in what circumstances they can be trusted.

<
>

Figure 16 Left: We characterize the step on which early stopping occurs, as a function of the extent of overfitting. The red line indicates a lower bound for early stopping that is derived in Section 5.3. Right: We display train and test loss for a series of 300M parameter models trained on different sized dataset subsamples. The test loss typically follows that of a run done with unrestricted data until diverging. Note that the degree of overfitting (as compared to the infinite data limit) is significantly overestimated by L_test - L_train (denoted by a black bar for each run).

We are not especially confident in the prediction of B_crit(L) for values of the loss far outside the range we have explored. Changes in B_crit could have a significant impact on trade-offs between data parallelism and the number of serial training steps required, which would have a major impact on training time.

We did not thoroughly investigate the small data regime, and our fits for L(N, D) were poor for the smallest values of D (where an epoch corresponded to only 40 steps). Furthermore, we did not experiment with regularization and data augmentation. Improvements in these could alter our results, quantitatively or qualitatively.

We used the estimated training compute <>, which did not include contributions proportional to n_ctx (see Section 2.1). So our scalings with compute may be confounded in practice in the regime of very large n_ctx, specifically where n_ctx ≳ 12 d_model.

We tuned learning rates, and we experimented with learning rate schedules. But we may have neglected to tune some hyperparameter (e.g. initialization scale or momentum) that has an important effect on scaling.

The optimal choice of learning rate is sensitive to the target loss. When training close to convergence, it may be necessary to use a smaller learning rate to avoid divergences. But when conducting a short training run (e.g. due to compute limitations), it may be possible to use a larger learning rate. We did not experiment with higher learning rates for training runs that did not proceed to convergence.

D Supplemental Figures

D.1 Early Stopping and Test vs Train

In Section 5.3 we described the result shown in Figure 16, which provides a prediction for a lower bound on the early stopping step. We also show the train and test loss for a given model size when training on different sized datasets.

D.2 Universal Transformers

We compare the performance of standard Transformers to recurrent Transformers [DGV+18] in Figure 17. These models re-use parameters, and so perform slightly better as a function of N, but slightly worse as a function of compute C. We include several different possibilities for parameter re-use.

D.3 Batch Size

We measure the critical batch size using the data displayed in Figure 18. This made it possible to estimate B_crit(L) in Figure 10.

<
>

Figure 17 We compare recurrent Transformers [DGV+18], which re-use parameters, to standard Transformers. Recurrent Transformers perform slightly better when comparing models with equal parameter count, but slightly worse when accounting for reuse and comparing per FLOP.

<
>

Figure 18 These figures demonstrate fits to Equation (5.1) for a large number of values of the loss L, and for two different Transformer model sizes. These fits were used to measure B_crit(L) for Figure 10.

D.4 Sample Efficiency vs Model Size

It is easy to see from Figure 2 that larger models train faster, and are therefore more sample efficient. We provide another way of looking at this phenomenon in Figure 19, which shows when different models reach various fixed values of the loss.

<
>

Figure 19 The minimum number of serial steps needed to reach any fixed value of the test loss decreases precipitously with model size. Sample efficiency (shown here for training far below the critical batch size) improves greatly as well, improving by a factor of almost 100 when comparing the smallest possible model to a very large one.

<
>

Figure 20 This figure provides information about the performance per token as a function of model size and training time. Left: Loss per token as a function of its position T in the 1024-token context. Loss scales predictably as a power-law in T. Right: Test loss per token as a function of training step.

<
>

Figure 21 In addition to the averaged loss, individual tokens within the 1024-token context also improve smoothly as model size increases. Training runs with shorter context n_ctx = 8 (dashed lines) perform better on early tokens, since they can allocate all of their capacity to them.

D.5 Context Dependence

The trends for loss as a function of model size are displayed for different tokens in the context in Figure 21. We see that models trained on n_ctx = 1024 show steady improvement with model size on all but the first token.

Fixing model size, it appears that the loss scales as a power-law as a function of position T in the context, see Figure 20. This may be a consequence of underlying power-law correlations in language [EP94, ACDE12, LT16], or a more general feature of the model architecture and optimization. It provides some suggestion for the potential benefits (or lack thereof) from training on larger contexts. Not only do larger models converge to better performance at T = 1024, but they also improve more quickly at early tokens, suggesting that larger models are more efficient at detecting patterns with less contextual information. In the right-hand plot we show how per-token performance varies for a fixed model as a function of the training step. The model begins by learning short-range information, and only learns longer-range correlations later in training.

We have also included models trained with a tiny context n_ctx = 8 in order to compare with our longer context models. Even modestly sized models trained on n_ctx = 8 can dominate our largest n_ctx = 1024 models on very early tokens. This also suggests that further improvements should be possible with much larger models trained on large contexts.

D.6 Learning Rate Schedules and Error Analysis

We experimented with a variety of learning rates and schedules. A host of schedules and resulting test performances for a small language model are plotted in Figure 22. We conclude that the choice of learning rate schedule is mostly irrelevant, as long as the total summed learning rate is sufficiently large, and the schedule includes a warmup period and a final decay to near-vanishing learning rate. Variations among

<
>

Figure 22 We test a variety of learning rate schedules including cosine decay, linear decay, as well as other faster/slower decay schedules on a 3 million parameter model, shown on the left. For these experiments we do not decay to zero, since we find that this tends to give a fixed improvement close to the end of training. We find that, as long as the learning rate is not too small and does not decay too quickly, performance does not depend strongly on learning rate. Run-to-run variation is at the level of 0.05 in the loss, so averaging multiple runs is necessary to validate performance changes smaller than this level.

<
>

Figure 23 The trend for performance as a function of parameter count, L(N), is fit better by a power law than by other functions such as a logarithm at a qualitative level.

schedules appear to be statistical noise, and provide a rough gauge for the scale of variation between different training runs. Experiments on larger models suggest that the variation in the final test loss between different random seeds is roughly constant in magnitude for different model sizes.

We found that larger models require a smaller learning rate to prevent divergence, while smaller models can tolerate a larger learning rate. To implement this, the following rule of thumb was used for most runs:

<
> (D.1) + + We expect that this formula could be improved. There may be a dependence on network width, likely set by + the initialization scale. The formula also breaks down forN >10 10 parameters. Nevertheless, we found that + it works sufficiently well for the models we considered. + + D.7 Fit Details and Power Law Quality + + We experimented with a number of functional forms for the fits to <>, and <> the power-law + fits were qualitatively much more accurate than other functions such as logarithms (see Figure 23). + ForL(C), we do not include small models with only 1 layer in the fit, as the transition from 1 to 2 layers + causes a noticeable lump in the data. For L(N) we also do not include very small models with only 1 layer in + the fit, and we exclude the largest models that have not trained fully to convergence. Fit parameters change + marginally if we do include them, and the trend extrapolates well in both directions regardless. + + D.8 Generalization and Architecture + + In figure 24 we show that generalization to other data distributions does not depend on network depth when we + hold the total parameter count fixed. It seems to depend only on the performance on the training distribution. + + <> + + Figure 24 We show evaluations on a series of datasets for models with approximately 1.5 Billion parameters. + We observe no effect of depth on generalization; generalization performance depends primarily on + training distribution performance. The 12-layer model overfit the Internet Books dataset and we show the + early-stopped performance; we have not seen this surprising result in other experiments. + + + List of Figures + + 1 Summary of simple power laws. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 + 2 Illustration of sample efficiency and compute efficiency. . . . . . . . . . . . . . . . . . . . .4 + 3 How to scale up model size, batch size, and serial steps . . . . . . . . . . . . . . . . . . . .4 + 4 Performance when varying model and data size, or model and training steps, simultaneously5 + 5 Weak dependence of performance on hyperparameter tuning . . . . . . . . . . . . . . . . .8 + 6 Comparison of performance trend when including or excluding embeddings . . . . . . . . .8 + 7 LSTM and Transformer performance comparison . . . . . . . . . . . . . . . . . . . . . . .9 + 8 Generalization to other test datasets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10 + 9 Universality of overfitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11 + 10 Critical batch size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .12 + 11 Performance versus compute budget or number of parameter updates . . . . . . . . . . . . .14 + 12 Training on suboptimal models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .15 + 13 Comparison between empirical and adjusted compute trends . . . . . . . . . . . . . . . . .15 + 14 Optimal model size and serial number of steps versus compute budget . . . . . . . . . . . .16 + 15 Contradiction between compute and data trends . . . . . . . . . . . . . . . . . . . . . . . .17 + 16 Early stopping lower bound and training curves for overfit models . . . . . . . . . . . . . .23 + 17 Universal transformers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .24 + 18 Batch size scans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .24 + 19 Another look at sample efficiency . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . .24 + 20 Power-law dependence of performance on position in context . . . . . . . . . . . . . . . . .25 + 21 Performance at different context positions versus model size . . . . . . . . . . . . . . . . .25 + 22 Learning rate schedule scan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .26 + 23 Comparison of Power-Law and Logarithmic Fits . . . . . . . . . . . . . . . . . . . . . . .26 + 24 Generalization versus depth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .27 + + List of Tables + + 1 Parameter and compute counts for Transformer . . . . . . . . . . . . . . . . . . . . . . . .7 + 2 Fits toL(N;D). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11 + 3 Fits toL(N;S) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .14 + 4 Key trend equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .20 + 5 Key parameters to trend fits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .20 + 6 Trends for compute-efficient training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .20 + + References + + [ACDE12]Eduardo G Altmann, Giampaolo Cristadoro, and Mirko Degli Esposti. On the origin of long- + range correlations in texts.Proceedings of the National Academy of Sciences, 109(29):11582– + 11587, 2012. 25 + [AS17]Madhu S. Advani and Andrew M. Saxe. High-dimensional dynamics of generalization error in + neural networks.arXiv, 2017, 1710.03667. 11, 18, 22 + [BB01]Michele Banko and Eric Brill. Scaling to very very large corpora for natural language disam- + biguation. InProceedings of the 39th annual meeting on association for computational linguis- + tics, pages 26–33. Association for Computational Linguistics, 2001. 18 + [BHMM18]Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine + learning and the bias-variance trade-off.arXiv, 2018, 1812.11118. 18 + [Bia12]GÊrard Biau. Analysis of a random forests model.Journal of Machine Learning Research, + 13(Apr):1063–1095, 2012. 18 + [CGRS19]Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with + sparse transformers. CoRR, abs/1904.10509, 2019, 1904.10509. URLhttp://arxiv.org/ + abs/1904.10509. 19 + [DCLT18]Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep + bidirectional transformers for language understanding, 2018, arXiv:1810.04805. 2 + [DGV + 18]Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. Uni- + versal transformers. CoRR, abs/1807.03819, 2018, 1807.03819. URLhttp://arxiv.org/ + abs/1807.03819. 6, 9, 23, 24 + [EP94]Werner Ebeling and Thorsten Pöschel. Entropy and long-range correlations in literary english. + EPL (Europhysics Letters), 26(4):241, 1994. 25 + [Fou]The Common Crawl Foundation. Common crawl. URLhttp://commoncrawl.org. 7 + [GARD18]Guy Gur-Ari, Daniel A. Roberts, and Ethan Dyer. Gradient descent happens in a tiny subspace. + 2018, arXiv:1812.04754. 18 + [GJS + 19]Mario Geiger, Arthur Jacot, Stefano Spigler, Franck Gabriel, Levent Sagun, Stéphane d’Ascoli, + Giulio Biroli, Clément Hongler, and Matthieu Wyart. Scaling description of generalization with + number of parameters in deep learning.arXiv, 2019, 1901.01608. 18 + [GKX19]Behrooz Ghorbani, Shankar Krishnan, and Ying Xiao. An investigation into neural net op- + timization via hessian eigenvalue density. CoRR, abs/1901.10159, 2019, 1901.10159. 
URL + http://arxiv.org/abs/1901.10159. 18 + [Goo01]Joshua Goodman. A bit of progress in language modeling.CoRR, cs.CL/0108005, 2001. URL + http://arxiv.org/abs/cs.CL/0108005. 18 + [GRK17]Scott Gray, Alec Radford, and Diederik P Kingma. Gpu kernels for block-sparse weights.ope- + nai.com, 2017. 19 + [HAD19]Joel Hestness, Newsha Ardalani, and Gregory Diamos. Beyond human-level accuracy: Compu- + tational challenges in deep learning. InProceedings of the 24th Symposium on Principles and + Practice of Parallel Programming, PPoPP ’19, pages 1–14, New York, NY, USA, 2019. ACM. + doi:10.1145/3293883.3295710. 18 + [HCC + 18]Yanping Huang, Yonglong Cheng, Dehao Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V. Le, + and Zhifeng Chen. Gpipe: Efficient training of giant neural networks using pipeline parallelism. + CoRR, abs/1811.06965, 2018, 1811.06965. URLhttp://arxiv.org/abs/1811.06965. 19 + [HNA + 17]Joel Hestness, Sharan Narang, Newsha Ardalani, Gregory Diamos, Heewoo Jun, Hassan Kia- + ninejad, Md. Mostofa Ali Patwary, Yang Yang, and Yanqi Zhou. Deep learning scaling is pre- + dictable, empirically, 2017, 1712.00409. 18 + [JGH18]Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and + generalization in neural networks. InAdvances in neural information processing systems, pages + 8571–8580, 2018. 18 + [KB14]Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2014, + 1412.6980. 7 + [Kom19]Aran Komatsuzaki. One epoch is all you need, 2019, arXiv:1906.06669. 18 + [KSH12]Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep + convolutional neural networks. InProceedings of the 25th International Conference on Neural + Information Processing Systems - Volume 1, NIPS’12, pages 1097–1105, USA, 2012. Curran + Associates Inc. URLhttp://dl.acm.org/citation.cfm?id=2999134.2999257. 19 + [LCG + 19]Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu + Soricut. Albert: A lite bert for self-supervised learning of language representations, 2019, + 1909.11942. 9 + [LOG + 19]Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike + Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized BERT pretrain- + ing approach. CoRR, abs/1907.11692, 2019, 1907.11692. URLhttp://arxiv.org/abs/ + 1907.11692. 2 + [LSP + 18]Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and + Noam Shazeer. Generating wikipedia by summarizing long sequences.arXiv:1801.10198 [cs], + 2018, 1801.10198. URLhttp://arxiv.org/abs/1801.10198. 2, 6 + [LT16]Henry W Lin and Max Tegmark. Criticality in formal languages and statistical physics.arXiv + preprint arXiv:1606.06737, 2016. 25 + [LXS + 19]Jaehoon Lee, Lechao Xiao, Samuel S. Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl- + Dickstein, and Jeffrey Pennington. Wide neural networks of any depth evolve as linear models + under gradient descent, 2019, arXiv:1902.06720. 18 + [MKAT18]Sam McCandlish, Jared Kaplan, Dario Amodei, and OpenAI Dota Team. An empirical model + of large-batch training, 2018, arXiv:1812.06162. 3, 5, 6, 12, 13, 21 + [Pap18]Vardan Papyan. The full spectrum of deep net hessians at scale: Dynamics with sample size. + CoRR, abs/1811.07062, 2018, 1811.07062. URLhttp://arxiv.org/abs/1811.07062. 18 + [RNSS18]Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language + understanding by generative pre-training.URL https://s3-us-west-2. amazonaws. 
com/openai- + assets/research-covers/languageunsupervised/language understanding paper. pdf, 2018. 2, 6 + [RRBS19a]Jonathan S. Rosenfeld, Amir Rosenfeld, Yonatan Belinkov, and Nir Shavit. A constructive + prediction of the generalization error across scales, 2019, 1909.12673. 18 + [RRBS19b]Jonathan S. Rosenfeld, Amir Rosenfeld, Yonatan Belinkov, and Nir Shavit. A constructive + prediction of the generalization error across scales, 2019, arXiv:1909.12673. 18 + [RSR + 19]Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, + Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified + text-to-text transformer, 2019, arXiv:1910.10683. 2 + [RWC + 19]Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language + models are unsupervised multitask learners.openai.com, 2019. 2, 5, 6, 7, 8 + [SCP + 18]Noam Shazeer, Youlong Cheng, Niki Parmar, Dustin Tran, Ashish Vaswani, Penporn Koanan- + takool, Peter Hawkins, HyoukJoong Lee, Mingsheng Hong, Cliff Young, Ryan Sepassi, and + Blake Hechtman. Mesh-tensorflow: Deep learning for supercomputers, 2018, 1811.02084. 19 + [SHB15]Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words + with subword units.CoRR, 2015, 1508.07909. 6 + [SLA + 18]Christopher J. Shallue, Jaehoon Lee, Joe Antognini, Jascha Sohl-Dickstein, Roy Frostig, and + George E. Dahl. Measuring the effects of data parallelism on neural network training, 2018, + arXiv:1811.03600. 12 + [SS18]Noam Shazeer and Mitchell Stern. Adafactor: Adaptive learning rates with sublinear memory + cost.CoRR, abs/1804.04235, 2018, 1804.04235. URLhttp://arxiv.org/abs/1804.04235. + 7 + [THK18]Stefan Thurner, Rudolf Hanel, and Peter Klimek.Introduction to the theory of complex systems. + Oxford University Press, 2018. 18 + [TL19]Mingxing Tan and Quoc V. Le. Efficientnet: Rethinking model scaling for convolutional neural + networks.CoRR, abs/1905.11946, 2019, 1905.11946. URLhttp://arxiv.org/abs/1905. + 11946. 18 + [VSP + 17]Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, + Ł ukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, + S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors,Advances in Neural + Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc., 2017. URL + http://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf. 2, 6 + [VWB16]Andreas Veit, Michael Wilber, and Serge Belongie. Residual networks behave like ensembles + of relatively shallow networks, 2016, arXiv:1605.06431. 8, 18 + [Was06]Larry Wasserman.All of nonparametric statistics. Springer Science & Business Media, 2006. + [WPN + 19]Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, + Omer Levy, and Samuel R. Bowman. Superglue: A stickier benchmark for general-purpose + language understanding systems, 2019, 1905.00537. 2 + [WRH17]Yu-Xiong Wang, Deva Ramanan, and Martial Hebert. Growing a brain: Fine-tuning by in- + creasing model capacity.2017 IEEE Conference on Computer Vision and Pattern Recognition + (CVPR), Jul 2017. doi:10.1109/cvpr.2017.323. 19 + [WYL19]Wei Wen, Feng Yan, and Hai Li. Autogrow: Automatic layer growing in deep convolutional + networks, 2019, 1906.02909. 19 + [YDY + 19]Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. + Le. 
Xlnet: Generalized autoregressive pretraining for language understanding, 2019, + arXiv:1906.08237. 2 + [ZK16]Sergey Zagoruyko and Nikos Komodakis. Wide residual networks.Procedings of the British + Machine Vision Conference 2016, 2016. doi:10.5244/c.30.87. 18 + [ZKZ + 15]Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Tor- + ralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by + watching movies and reading books.2015 IEEE International Conference on Computer Vision + (ICCV), Dec 2015. doi:10.1109/iccv.2015.11. 7 + [ZLN + 19]Guodong Zhang, Lala Li, Zachary Nado, James Martens, Sushant Sachdeva, George E. Dahl, + Christopher J. Shallue, and Roger B. Grosse. Which algorithmic choices matter at which batch + sizes? insights from a noisy quadratic model.CoRR, abs/1907.04164, 2019, 1907.04164. URL + http://arxiv.org/abs/1907.04164. 12, 18 +<> <> <> + + +<> <> <> +Structured Pruning of Convolutional Neural Networks via L1 Regularization + +CHEN YANG1,2, ZHENGHONG YANG1,2, ABDUL MATEEN KHATTAK2,3 , LIU YANG1,2, WENXIN ZHANG1,2, WANLIN GAO1,2 , AND MINJUAN WANG1,2 +1Key Laboratory of Agricultural Informatization Standardization, Ministry of Agriculture and Rural Affairs, Beijing 100083, China 2College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China 3Department of Horticulture, The University of Agriculture, Peshawar 25120, Pakistan +Corresponding authors: Wanlin Gao (wanlin_cau@163.com) and Minjuan Wang (minjuan@cau.edu.cn) +This work was supported by the Project of Scientific Operating Expenses from Ministry of Education of China under Grant 2017PT19. + +ABSTRACT +Deep learning architecture has achieved amazing success in many areas with the recent advancements in convolutional neural networks (CNNs). However, real-time applications of CNNs are seriously hindered by the significant storage and computational costs. Structured pruning is a promising method to compress and accelerate CNNs and does not need special hardware or software for an auxiliary calculation. Here a simple strategy of structured pruning approach is proposed to crop unimportant filters or neurons automatically during the training stage. The proposed method introduces a mask for all filters or neurons to evaluate their importance. Thus the filters or neurons with zero mask are removed. To achieve this, the proposed method adopted L1 regularization to zero filters or neurons of CNNs. Experiments were conducted to assess the validity of this technique. The experiments showed that the proposed approach could crop 90.4%, 95.6% and 34.04% parameters on LeNet-5, VGG-16, and ResNet-32respectively, with a negligible loss of accuracy. + + +INDEX +TERMS Convolutional neural networks, regularization, structured pruning. + + +I. INTRODUCTION + +During the recent years, convolutional neural networks (CNNs) [1] have accomplished successful applications in many areas such as image classification [2], object detection [3], neural style transfer [4], identity authentication [5], information security [6], speech recognition and natural language processing. However, these achievements were made through leveraging large-scale networks, which possessed millions or even billions of parameters. Those large-scale networks heavily relied on GPUs to accelerate computation. Moreover, devices with limited resources, such as mobile, FPGA or embedded devices, etc. have difficulties to deploy CNNs in actual applications. 
Thus, it is critical to accelerate the inference of CNNs and reduce storage for a wide range of applications [7].

According to the studies done so far, the major approaches for compressing deep neural networks can be categorized into four groups, i.e., low-rank decomposition [8], parameter quantization [9], knowledge distillation [10]-[13], and network pruning [14]. For deep neural networks (DNN) that have already been trained, low-rank decomposition technology decomposes and approximates a tensor at a smaller scale to achieve compression. Low-rank decomposition achieves efficient speedup because it reduces the number of elements of the matrix. However, it can only decompose or approximate tensors one by one within every layer, and cannot discover the redundant parameters of a DNN. Besides, more research has been focused on network module designs, which are smaller, more efficient and more sophisticated. These models, such as SqueezeNet [15], MobileNet [16] and ShuffleNet [17], are basically made up of low-resolution convolutions with fewer parameters and better performance.

At present, network pruning is a major focus of research, which not only accelerates DNNs, but also reduces redundant parameters. Actually, using a large-scale network directly may provide state-of-the-art performance, so learning a large-scale network is needed. However, the optimum network architecture may not be known, so a massive redundancy exists in large neural networks. To combat this problem, network pruning is useful to remove redundant parameters, filters, channels or neurons, and address the over-fitting issue.

<
>

FIGURE 1. The architecture of the layer with the mask. (a) The architecture of a convolutional layer with the mask. (b) The architecture of a fully-connected layer with the mask. The proposed approach chooses the unimportant filters and neurons (highlighted in yellow) by the order of magnitude of the mask value.

Network pruning techniques can also be broadly categorized as structured pruning and non-structured pruning. Non-structured pruning aims to remove single parameters that have little influence on the accuracy of networks, and it is efficient and effective for compacting networks. Nonetheless, non-structured pruning is difficult to use widely in practical applications. Actually, the operation of convolution is reformulated as a matrix-by-matrix multiplication in many prevalent deep learning frameworks, which requires additional information to represent the pruned locations in a non-structured pruning method. Therefore, special hardware or software is needed to assist with the calculation, which may increase computation time. Instead, structured pruning directly removes entire filters, channels or neurons, so the remaining network architecture can be used directly by the existing hardware. For example, Anwar et al. [18] employed particle filtering to structurally sparsify convolutional neural networks at channel-wise, kernel-wise, and intra-kernel stride levels. At present, several structured pruning methods [24], [25], [27] are mainly based on the statistical information of parameters or activation outputs. These methods do not consider the loss and are unable to remove parameters during training. In addition, some methods, such as those mentioned by [19], [20], require layer-by-layer iterative pruning and recovery of accuracy, which involves enormous calculation. On the contrary, the proposed approach links pruning with minimization of the loss and can be implemented during training.

It is inspiring that filters whose weights are all zero can be safely removed because, whatever the input, they would not extract any features. This study presents a scheme to prune filters, or neurons of fully-connected layers, based on L1 regularization [21] to zero out the weights of some filters or neurons. Similar to this method, Wen et al. [31] adopted group LASSO regularization [40] to zero out filters. However, all the weights are required to compute an extra gradient, which is computationally expensive for a large-scale network. Contrarily, in the proposed method, a mask is introduced to address this issue and the regularization term is only the l1-norm of the mask, so the gradients of the mask are easy to calculate. In this method, the parameters of filters or neurons are multiplied by a mask to pick unimportant filters or neurons, and once the mask is zero the corresponding filter or neuron will be removed. Here, though a mask is introduced for filters or neurons, the method does not change the architecture of the network. This allows other compression methods to be used together with the proposed technique. Similar to the proposed method, Lin et al. [32] also adopted a mask to identify unimportant filters or neurons, but the value of their mask could not be changed by training. In addition, removing unimportant filters or neurons may temporarily degrade accuracy, but the network can be retrained to recover performance. FIGURE 1 shows the framework of the proposed method.
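As a concrete illustration of the masked layer in FIGURE 1, the following Keras sketch attaches one trainable scalar mask to each filter of a convolution, adds the l1-norm of the masks to the training loss, and scores filters by the mean magnitude of mask times weights so that low-scoring filters can be removed after training. This is a minimal sketch of the idea rather than the authors' released code; the class name MaskedConv2D, the coefficient lam, and the default threshold are illustrative choices.

```python
# Minimal sketch, assuming TensorFlow/Keras as in the experiments reported later.
import tensorflow as tf

class MaskedConv2D(tf.keras.layers.Layer):
    def __init__(self, filters, kernel_size, lam=1e-4, **kwargs):
        super().__init__(**kwargs)
        self.conv = tf.keras.layers.Conv2D(filters, kernel_size, padding="same")
        self.lam = lam                     # coefficient of the l1 penalty (illustrative)

    def build(self, input_shape):
        # One scalar mask per output filter, initialized to 1 as in the paper's setup.
        self.mask = self.add_weight(name="mask", shape=(self.conv.filters,),
                                    initializer="ones", trainable=True)

    def call(self, x):
        y = self.conv(x) * self.mask       # mask broadcasts over the spatial dimensions
        # l1 regularization on the masks, added to whatever task loss is used.
        self.add_loss(self.lam * tf.reduce_sum(tf.abs(self.mask)))
        return y

    def prunable_filters(self, threshold=0.01):
        # Score each filter by the mean magnitude of mask * weights (the threshold
        # rule described in Section III); requires the layer to have been built.
        w = self.conv.kernel               # shape (kh, kw, c_in, filters)
        score = tf.reduce_mean(tf.abs(w * self.mask), axis=[0, 1, 2])
        return tf.where(score < threshold)[:, 0]
```

After training, the surviving masks would be folded into the convolution weights and the mask removed, in line with the merge step of the pruning procedure described in Section III.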
+In this article, a structured pruning technology is presented, which allows for simultaneously learning and removing unimportant filters or neurons of CNNs. The main contributions are as follows: + +� A simple yet effective method based L1 regularization is presented to compress CNNs model during the training stage. + +� A threshold is adopted to solve the optimization problem of l1-norm. In this approach, only some mask values are required to be near zero, though not completely zero. The detail is provided in the following section. + + + +II. PREVIOUS WORK + +The importance of compressing deep learning models before the application is self-evident, especially for expanding the application scenarios of deep learning [11]. For example, a compressed deep learning model can be combined with edge computing [12] to enable Internet of things devices under.stand data. In this section, we will review the contributions of others. +Le Cun et al. [14] first proposed a saliency measurement method called Optimal Brain Damage (OBD) to selectively delete weights by second-derivative information of error function. Later, Hassibi and Strok [22] proposed the Optimal Brain Surgeon (OBS) algorithm based on OBD. The OBS not only removed unimportant weights but also automatically adjusted the remaining weights, which improved accuracy and generalization ability. All these methods are based on Taylor expansion (even OBD and OBS are required to compute Hessian matrix), which may be computationally intensive especially for large networks. In addition, they use a criterion of minimal increase in error on the training data. Guo et al. [23] introduced a binary matrix to dynamically choose important weights. Han et al. [24], [25] directly removed weights with values lower than a predefined threshold to compress networks, then followed by retraining to recover accuracy. Considering most filters in CNNs that tended to be smooth in the spatial domain, Liu et al. [26] extended Guo's work to the frequency domain by implementing Discrete Cosine Transform (DCT) to filters in the spatial domain. However, these non-structured pruning technologies were hard to use in real applications, because extra software or hardware was required for the calculation. +Directly cropping a trained model by the value of weight is a wide method. Normally it is used to find an effective evaluation to judge the importance of weights and to cut the unimportant connection or filter to reduce the redundancy of a model. Hu et al. [27] thought the activation outputs of a significant portion of neurons were zero in a large network, whatever inputs the network received. These zero activation neurons were unimportant, so they defined the Average Percentage of Zeros (ApoZ) to observe the percentage of activations of a neuron and cropped the neurons with fewer activations. Li et al. [28] introduced a structured pruning method by measuring the norm of filters to remove unimportant filters. Luo et al. [29] took advantage of a subset of input channels to approximate output for compressing convolutional layers. Changpinyo et al. [30] proposed a random method to compress CNNs. They randomly connected the output channel to a small subset of input channels to compress CNNs. Though successful to an extent, their method did not directly relate to the loss, hence it was necessary to retrain the network for the recovery of accuracy. On the other hand, such a scheme could only be used layer-by-layer. 
Thus, it was essential to iterate over and over to prune, which would result in massive computation costs.

Ding et al. [37] applied a customized L2 regularization to remove unimportant filters and simultaneously stimulate important filters to grow stronger. Lin et al. [32] proposed a Global & Dynamic Filter Pruning (GDP) method, which could dynamically recover the previously removed filters. Liu et al. [33] enforced channel-level sparsity in the network to compress DNNs in the training phase. In addition, Gordon et al. [39] iteratively shrank and expanded a network targeting reduction of particular resources (e.g. FLOPs, or the number of parameters).

III. THE APPROACH OF STRUCTURED PRUNING FOR CNNs

A. NOTATIONS

First of all, notations are clarified in this section. A CNN is a multi-layer deep feed-forward neural network, which is composed of a stack of convolutional layers, pooling layers, and fully-connected layers. In an l-layer CNN model, <> represents the k-th filter of layer l, <> denotes the number of feature maps in layer l-1 and d indicates the kernel size. Let us denote the feature maps in layer l by <>, where <> is the size, C_l is the number of channels, and Z_l is the output of layer l-1. In addition, Z_k represents the k-th feature map of layer l. The output feature map Z_k can be computed as:

<>, (1)

where f is a non-linear activation function, * is the convolution operation and b_k is the bias. <> represents the training set, where x_i and y_i represent the training sample and label respectively, and N indicates the number of samples.

B. THE PROPOSED SCHEME

The goal of structured pruning is to remove those redundant filters or neurons which are unimportant or useless for the performance of the networks. Essentially, the main role of the convolutional layer filters is to extract local features. However, once all the parameters of a filter are zeroed, the filter is confirmed unimportant: whatever the inputs to the filter, the outputs are always zero. Under this circumstance, the filter is unable to extract any information. When a filter is multiplied by zero, all the parameters of the filter become zero. Based on this observation, a mask is introduced for every filter to estimate its importance. This can be formulated as:

<>, (2)

where m_k^l represents the k-th mask of layer l.

Therefore, the problem of zeroing out the values of some filters can be transformed into zeroing some masks. For this purpose, the following optimization problem is proposed:

<>, (3)

where <> is a loss function, such as the cross-entropy loss, <> is the output of the CNN and C is a hyper-parameter that controls the number of pruned filters. Equation (3) is the core of the proposed method. Once the optimal solution of the equation is obtained, the pruning is achieved.

In addition, this method can also remove redundant neurons in a fully-connected layer. The inference of a fully-connected layer can be represented by:

<>, (4)

where <> is a weight matrix and Z_{l-1} ∈ R^{n×1} is the input of the l-th layer. Here, when fully-connected layers introduce the mask, the inference of these layers can be reformulated as:

<>, (5)

where <> is a mask vector and <> is the Hadamard product operator.

Equation (3) can be transformed into the following form based on the Lagrange multiplier:

<>, (6)

where <> is a coefficient associated with C.

Equation (6) is an NP-hard problem because of the zero norm. Thus, it is quite difficult to obtain an optimal solution with equation (6).
+Therefore, l1-norm is adopted to replace l0-norm, as: + +<>. (7) + +Equation (7) can be solved by SGD in practical application, so the proposed method is simple and easy to implement. We just need to introduce a mask for each layer and train the network. Though the proposed method introduces mask, the network topology will be preserved because the mask can be absorbed into weight. + +C. THRESHOLD + +L1 regularization is a widely used sparse technology, which pushes the coefficients of uninformative features to zero. So a sparse +network is achieved by solving equation (7). However, there is a problem in solving equation (7). +Here the mask value cannot be completely zeroed in practical application, because the objective function (7) +is non-convex and the global optimal solution may not be obtained. A strategy is adopted in the proposed method to solve this problem. If the order of magnitude of the mask value is small enough, it can be considered almost as zero. Thus, to decide whether the mask is zero, a threshold is introduced. However, considering only the value of the mask is meaningless if the mask is not completely zero. Because there is a linear transformation between mask and convolution. One can shrink the masks while expanding the weights to keep the product of them the same. Hence, considering the mask and weight simultaneously is necessary. The average value of the product of the mask and the weight is used to determine whether the mask is exactly zero or not? The specific definition can be presented as: + +<> (8) + +where <> is a pre-defined threshold and <> is the average operation. This strategy is efficient and reasonable, which can be proved by the results of the experiment. + +Algorithm 1 The Proposed Pruning Approach + +<> + +Merging weights and masks and then removing the mask layer. Return the pruned network architecture and preserved weights. + +D. FINE-TUNING AND OTHER REGULARIZATION STRATEGIES + +Pruning may temporarily lead to degradation in accuracy, so fine-tuning is necessary to improve accuracy. Furthermore, the proposed method can be employed iteratively to obtain a narrower architecture. Actually, a single iteration of proposed method is enough to yield noticeable compaction. The method is elaborated in Algorithm 1. +Essentially, the purpose of this approach is to adjust some masks to adequately small order of magnitude. Therefore, L2 regularization can also serve as a regularization strategy in this approach. + +IV. EXPERIMENTS + +The approach was primarily evaluated through three net.works: LeNet-5 on MNIST dataset, VGG-16 on CIFAR-10 dataset and ResNet-32 on CIFAR-10 dataset. The implementation of this approach was accomplished through the standard Keras library. All experiments were conducted through Intel E5-2630 V4 CPU and NVIDIA 1080Ti GPU. + +A. DATASETS + +1) MNIST +MNIST dataset of handwritten digits from 0 to 9 is widely applied to evaluate machine learning models. This dataset owns 60000 train samples and 10000 test samples. + +2) CIFAR-10 +The CIFAR-10 dataset [41] has a total of 60000 images consisting of 10 classes, each having 6000 images with 32x32 resolution. There are 50000 training images and 10000 test images. During training, a data augmentation scheme was adopted, which contained random horizontal flip, rotation, and translation. The input data was normalized using the means and standard deviations. + +B. NETWORK MODELS + +1) LENET-5 +LeNet-5 is a convolutional neural network designed by LeCun et al. [34]. 
It has two convolutional and two fully-connected layers. This network has 44.2K learnable parameters. In this network, dropout is used in the fully-connected layer.

<
>

TABLE 1. The result of LeNet-5 on MNIST.

2) VGG-16

The original VGG-16 [35] has thirteen convolutional and two fully-connected layers and has 130M learnable parameters. However, VGG-16 is very complex for the CIFAR-10 dataset, so the fully-connected layers were removed. Moreover, Batch Normalization was used after each convolution operation. The modified model has 14.7M learnable parameters.

3) RESNET-32

Deep residual network (ResNet) [42] is a state-of-the-art deep CNN architecture. In this paper, ResNet-32 was implemented to evaluate the proposed method. The ResNet-32 used here had the same architecture as described in [42], which contained three stages of convolutional layers, one global average pooling layer after the last convolutional layer, and one fully-connected layer. In addition, when the dimensions increased, a 1x1 convolution was adopted as the identity mapping to match the dimensions. This network has 0.47M learnable parameters.

C. THE DETAIL OF TRAINING, PRUNING, AND FINE-TUNING

To obtain the baseline accuracy in the experiments, we trained LeNet-5 on MNIST, VGG-16 on CIFAR-10, and ResNet-32 on CIFAR-10 from scratch. Then, pruning was performed on the basis of the trained network, with L1 regularization chosen as the regularization strategy and the mask initialized to 1. Finally, the pruned network was retrained to recover accuracy.

1) LENET-5 ON MNIST

The original network was normally trained from scratch, for a total of 30 epochs, by Adam [43] with a batch size of 128. The learning rate was initialized to 0.001 and the weight decay was set to 0.0005. The momentum was set to 0.9 and the dropout rate was set to 0.5 for the fully-connected layer. While implementing the pruning training, only the number of epochs was modified: it was set to 10, and the threshold mentioned above for selecting pruned filters was set to 0.01. The pruned network was then retrained to compensate for the loss of accuracy. We adopted the same hyper-parameter settings as in normal training.

2) VGG-16 ON CIFAR-10

To get the baseline accuracy, the network was normally trained from scratch by SGD with a batch size of 128. The total number of epochs was set to 60. The initial learning rate was set to 0.01 and then scaled by 0.1 every 20 epochs. The weight decay was set to 0.0005 and the momentum to 0.9. While implementing the pruning training, the number of epochs was set to 30, the learning rate was scaled by 0.1 every 10 epochs and the other settings remained the same, while the threshold was set to 0.01. Finally, the pruned model was retrained following the same pre-processing and hyper-parameter settings as the normal training.

3) RESNET-32 ON CIFAR-10

Generally, the network was trained from scratch by SGD as the baseline with a batch size of 128. The weight decay was set to 0.0001, the number of epochs to 120, and the momentum to 0.9. The initial learning rate was set to 0.1 and then scaled by 0.1 at 60 and 100 epochs. Here, for pruning training, the number of epochs was set to 30, the learning rate was scaled by 0.1 every 10 epochs and the other settings remained the same. After pruning, the network was retrained from scratch, with the number of epochs changed to 60 and the learning rate scaled by 0.1 every 20 epochs.

D. RESULTS OF THE EXPERIMENTS

1) LENET-5 ON MNIST

As per the results in TABLE 1, 88.84% of the parameters were removed without any impact on performance.
Based on the proposed method, 95.46% of the parameters were discarded as well with an accuracy loss of 0.57%. + +<
>

TABLE 2. The result of VGG-16 on the CIFAR-10 dataset.

<
>

TABLE 1 also reveals that there was enormous redundancy in the fully-connected layers, because at least 90% of their parameters could easily be dropped. The table also suggests that the proposed method can indeed identify important connections, for two reasons. First, when 83.83% of the parameters are removed, the accuracy does not change, which indicates that the pruned parameters are unimportant for maintaining the accuracy of the network. Second, it becomes difficult to remove further filters or neurons, especially the neurons of the fully-connected layers, as the pruning rate gradually increases, so the remaining connections are crucial.
In addition, the convolutional layers, especially the first one, are harder to prune than the later layers. A possible explanation is that the proposed method automatically selects the unimportant filters through the backpropagation algorithm, but backpropagation causes the earlier layers to suffer from the vanishing gradient problem. That is why the former layers are harder to prune than the later ones.

2) VGG-16 ON CIFAR-10
As depicted in TABLE 2, over 94.4% of the parameters could be removed with a negligible accuracy loss of 0.51%. It can also be observed that the loss of accuracy was only 2.04% when 97.76% of the parameters were pruned. The proposed method thus again proved effective in reducing redundancy.
In fact, preserving the remaining architecture without retaining the parameters (training the pruned network from scratch) is also a strategy for fine-tuning the network. This strategy was adopted here to retrain the network and the results were promising, as shown in TABLE 2. The results reveal that a better effect can be achieved by directly retraining the pruned network from scratch. Perhaps the significance of the proposed method is that it provides a way to discover good architectures, as also mentioned by Liu et al. [36]. Nevertheless, training a pruned network from scratch is expensive in terms of computation cost, especially for large-scale datasets and networks.

FIGURE 2. Comparison of L1 regularization and L2 regularization. "Accuracy loss" represents the difference in accuracy between the pruned CNN and the original CNN. A positive value indicates an improvement of network accuracy after pruning, while a negative value indicates a decrease of accuracy.

3) RESNET-32 ON CIFAR-10
Pruning ResNet-32 based on the order of magnitude of the mask may result in different output map dimensions in the residual module, so a 1x1 convolution is needed as the identity mapping to match dimensions. However, this operation introduces extra parameters and computation. To avoid this problem, a percentile was defined to remove the same proportion of filters in every convolutional layer. TABLE 3 shows that the proposed method removed 34% of the parameters with an accuracy loss of 0.65%. Moreover, over 62.3% of the parameters could be discarded with an accuracy loss of 1.76%. Thus, it was confirmed that the proposed method can reduce the redundancy of complex networks such as ResNet.

<
> + +FIGURE 3. The comparison of pruned and reserved filters. (a) The comparison of parameters order of magnitude between pruned and reserved filters. The x-axis represents the distribution interval and the y-axis represents the percentage of the parameter in the interval. (b) The comparison of non-zero activations. The left bar represents average non-zero activation percentage, and the right bar represents average non-zero activation value. + +<
>

TABLE 3. The result of ResNet-32 on the CIFAR-10 dataset.

V. ANALYSIS

A. L2 REGULARIZATION

L2 regularization was also explored as a regularization strategy in this study. As shown in FIGURE 2, LeNet-5 can also be compressed without degrading accuracy based on L2 regularization. Nevertheless, there are some differences between L1 regularization and L2 regularization. Both L1 and L2 regularization can improve accuracy when the pruning rate is less than 84%, but the effect of L2 regularization is better. The main reason is that regularization techniques can prevent overfitting and improve the generalization ability. Moreover, as the pruning rate increases, L1 regularization achieves a greater compression effect at the same accuracy. As per Han et al. [24], L1 regularization pushes more parameters closer to zero, so it can prune more parameters. Having studied the difference between L1 regularization and L2 regularization, we lean towards L1 regularization from the perspective of the compression-accuracy trade-off.

B. THE EFFECT OF PRUNING

To better describe the effect of the proposed method, a comparison was made between the pruned filters and the reserved filters. The CONV3-1 layer of VGG-16, which has 256 filters, was chosen, with λ set to 0.008. Based on this setting, 125 filters of the CONV3-1 layer could be removed. Empirically, a weak filter or neuron always has lower activation outputs, lower activation frequency, and lower weight values. Hence, weight values and activation outputs were chosen here to evaluate the difference between pruned and preserved filters.
As shown in FIGURE 3(a), in terms of absolute weight values, the bulk of the pruned parameters (96.9%) are less than 10^-6, whereas most of the reserved parameters (94.5%) are greater than 0.001. The results indicate an enormous difference between the distributions of the pruned and the reserved parameters. Therefore, the present approach can effectively reduce the order of magnitude of the pruned parameters.
In addition, the test set was used to calculate the average non-zero activation values and percentages of CONV3-1. As is obvious from FIGURE 3(b), both the average percentage of non-zero activations and the average value of non-zero activations of the pruned filters were much lower than those of the reserved filters. From the activation perspective, the pruned filters were weak, because their outputs and weight values were negligible compared with those of the reserved filters and could be completely ignored. Thus, using the order of magnitude of the mask to determine pruned filters or neurons is reasonable.

C. COMPARISON WITH OTHER METHODS
In this section, two classical structured pruning methods were compared with the proposed method. First, for LeNet-5 on the MNIST dataset, the proposed method was compared with that of Wen et al. [31]. In this experiment, both the proposed method and that of Wen et al. [31] adopted the same sparsity regularization coefficient (λ = 0.03). The results (TABLE 5) show that the two methods were analogous in terms of accuracy and compression effect. However, the proposed method is simpler and costs less computation in practice. Further, the proposed method was also compared with that

<
>

TABLE 4. Comparison of VGG-16 on CIFAR-10.

<
>

TABLE 5. Comparison of LeNet-5 on MNIST.

of Liu et al. [33] for VGG-16 on CIFAR-10. Again, the same sparsity regularization coefficient (λ = 0.005) was adopted for both methods. However, Liu et al. [33] adopted a fixed-percentage threshold setting, whereas the threshold-setting scheme of the proposed method differs from that of Liu. The results (in TABLE 4) reveal that the proposed method was superior in terms of compression efficiency, although there was a slight loss of accuracy. In general, the proposed method can not only generate sparsity but also achieve a better pruning effect with its improved threshold.
Nevertheless, some shortcomings were also observed with this approach. One is that, although this approach does not change the existing CNN architecture, the added mask layer essentially increases the number of layers in the network, which may increase optimization difficulty. However, this problem can be solved by Batch Normalization (BN [38]). The other is that, as this method introduces a threshold, the pruning effect may not be smooth. The pruning rate may change drastically with small changes in λ, which is not conducive to finding the best λ.

VI. CONCLUSION
In this article, a structured pruning technique is proposed to automatically remove redundant filters or neurons based on regularization. A mask is introduced to remove unimportant filters or neurons by zeroing the values of some masks during training. In addition, to deal with the problem that the mask cannot be completely zeroed in practice, a threshold is designed to zero the mask. Experimentation with multiple datasets has shown that the proposed method can effectively remove parameters with a negligible loss of accuracy. In the future, establishing a relation between the hyper-parameter λ and the pruning rate will be considered to facilitate the adjustment of λ.


ACKNOWLEDGMENT
All the mentioned support is gratefully acknowledged.


REFERENCES
[1] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, pp. 436-444, May 2015.
[2] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proc. Adv. Neural Inf. Process. Syst. (NIPS), 2012, pp. 1097-1105.
[3] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2014, pp. 580-587.
[4] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Proc. Adv. Neural Inf. Process. Syst., 2014, pp. 2672-2680.
[5] C. Shen, Y. Li, Y. Chen, X. Guan, and R. Maxion, "Performance analysis of multi-motion sensor behavior for active smartphone authentication," IEEE Trans. Inf. Forensics Security, vol. 13, no. 1, pp. 48-62, Jan. 2018.
[6] C. Shen, Y. Chen, X. Guan, and R. Maxion, "Pattern-growth based mining mouse-interaction behavior for an active user authentication system," IEEE Trans. Dependable Secure Comput., to be published.
[7] Y. Cheng, D. Wang, P. Zhou, and T. Zhang, "A survey of model compression and acceleration for deep neural networks," 2017, arXiv:1710.09282. [Online]. Available: https://arxiv.org/abs/1710.09282
[8] C. Tai, T. Xiao, Y. Zhang, X. Wang, and E. Weinan, "Convolutional neural networks with low-rank regularization," 2015, arXiv:1511.06067. [Online]. Available: https://arxiv.org/abs/1511.06067
[9] W. Chen, J.
Wilson, S. Tyree, K. Weinberger, and Y. Chen, "Compressing neural networks with the hashing trick," in Proc. Int. Conf. Mach. Learn., 2015, pp. 2285-2294.
[10] Y. Gong, L. Liu, M. Yang, and L. Bourdev, "Compressing deep convolutional networks using vector quantization," 2014, arXiv:1412.6115. [Online]. Available: https://arxiv.org/abs/1412.6115
[11] Z. Tian, S. Su, W. Shi, X. Du, M. Guizani, and X. Yu, "A data-driven method for future Internet route decision modeling," Future Gener. Comput. Syst., vol. 95, pp. 212-220, Jun. 2018.
[12] Z. Tian, W. Shi, Y. Wang, C. Zhu, X. Du, S. Su, Y. Sun, and N. Guizani, "Real-time lateral movement detection based on evidence reasoning network for edge computing environment," IEEE Trans. Ind. Informat., vol. 15, no. 7, pp. 4285-4294, Jul. 2019.
[13] R. Liu, N. Fusi, and L. Mackey, "Teacher-student compression with generative adversarial networks," 2018, arXiv:1812.02271. [Online]. Available: https://arxiv.org/abs/1812.02271
[14] Y. LeCun, J. S. Denker, and S. A. Solla, "Optimal brain damage," in Proc. Adv. Neural Inf. Process. Syst., 1990, pp. 598-605.
[15] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer, "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size," 2016, arXiv:1602.07360. [Online]. Available: https://arxiv.org/abs/1602.07360
[16] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, "MobileNets: Efficient convolutional neural networks for mobile vision applications," 2017, arXiv:1704.04861. [Online]. Available: https://arxiv.org/abs/1704.04861
[17] X. Zhang, X. Zhou, M. Lin, and J. Sun, "ShuffleNet: An extremely efficient convolutional neural network for mobile devices," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2018, pp. 6848-6856.
[18] S. Anwar, K. Hwang, and W. Sung, "Structured pruning of deep convolutional neural networks," ACM J. Emerg. Technol. Comput. Syst., vol. 13, no. 3, p. 32, 2017.
[19] Y. He, X. Zhang, and J. Sun, "Channel pruning for accelerating very deep neural networks," in Proc. IEEE Int. Conf. Comput. Vis., Jun. 2017, pp. 1389-1397.
[20] J.-H. Luo and J. Wu, "An entropy-based pruning method for CNN compression," 2017, arXiv:1706.05791. [Online]. Available: https://arxiv.org/abs/1706.05791
[21] R. Tibshirani, "Regression selection and shrinkage via the lasso," J. Roy. Stat. Soc. B, vol. 58, no. 1, pp. 267-288, 1996.
[22] B. Hassibi and D. G. Stork, "Second order derivatives for network pruning: Optimal brain surgeon," in Proc. Adv. Neural Inf. Process. Syst., 1993, pp. 164-171.
[23] Y. Guo, A. Yao, and Y. Chen, "Dynamic network surgery for efficient DNNs," in Proc. Adv. Neural Inf. Process. Syst., 2016, pp. 1379-1387.
[24] S. Han, J. Pool, J. Tran, and W. Dally, "Learning both weights and connections for efficient neural network," in Proc. Adv. Neural Inf. Process. Syst., 2015, pp. 1135-1143.
[25] S. Han, H. Mao, and W. J. Dally, "Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding," 2015, arXiv:1510.00149. [Online]. Available: https://arxiv.org/abs/1510.00149
[26] Z. Liu, J. Xu, X. Peng, and R. Xiong, "Frequency-domain dynamic pruning for convolutional neural networks," in Proc. Adv. Neural Inf. Process. Syst., 2018, pp. 1043-1053.
[27] H. Hu, R. Peng, Y.-W. Tai, and C.-K.
Tang, "Network trimming: A data-driven neuron pruning approach towards efficient deep architectures," 2016, arXiv:1607.03250. [Online]. Available: https://arxiv.org/abs/1607.03250
[28] H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf, "Pruning filters for efficient ConvNets," 2016, arXiv:1608.08710. [Online]. Available: https://arxiv.org/abs/1608.08710
[29] J.-H. Luo, J. Wu, and W. Lin, "ThiNet: A filter level pruning method for deep neural network compression," in Proc. IEEE Int. Conf. Comput. Vis., Jun. 2017, pp. 5058-5066.
[30] S. Changpinyo, M. Sandler, and A. Zhmoginov, "The power of sparsity in convolutional neural networks," arXiv:1702.06257. [Online]. Available: https://arxiv.org/abs/1702.06257
[31] W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li, "Learning structured sparsity in deep neural networks," in Proc. Adv. Neural Inf. Process. Syst., 2016, pp. 2074-2082.
[32] S. Lin, R. Ji, Y. Li, Y. Wu, F. Huang, and B. Zhang, "Accelerating convolutional networks via global & dynamic filter pruning," in Proc. IJCAI, 2018, pp. 2425-2432.
[33] Z. Liu, J. Li, Z. Shen, G. Huang, S. Yan, and C. Zhang, "Learning efficient convolutional networks through network slimming," in Proc. IEEE Int. Conf. Comput. Vis., Jun. 2017, pp. 2736-2744.
[34] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proc. IEEE, vol. 86, no. 11, pp. 2278-2324, Nov. 1998.
[35] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," 2014, arXiv:1409.1556. [Online]. Available: https://arxiv.org/abs/1409.1556
[36] Z. Liu, M. Sun, T. Zhou, G. Huang, and T. Darrell, "Rethinking the value of network pruning," 2018, arXiv:1810.05270. [Online]. Available: https://arxiv.org/abs/1810.05270
[37] X. Ding, G. Ding, J. Han, and S. Tang, "Auto-balanced filter pruning for efficient convolutional neural networks," in Proc. 32nd AAAI Conf. Artif. Intell., 2018, pp. 6797-6804.
[38] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," 2015, arXiv:1502.03167. [Online]. Available: https://arxiv.org/abs/1502.03167
[39] A. Gordon, E. Eban, O. Nachum, B. Chen, H. Wu, T.-J. Yang, and E. Choi, "MorphNet: Fast & simple resource-constrained structure learning of deep networks," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2018, pp. 1586-1595.
[40] M. Yuan and Y. Lin, "Model selection and estimation in regression with grouped variables," J. Roy. Statist. Soc., B (Statist. Methodol.), vol. 68, no. 1, pp. 49-67, 2006.
[41] A. Krizhevsky and G. Hinton, "Learning multiple layers of features from tiny images," Univ. Toronto, Toronto, ON, Canada, Tech. Rep. 4, 2009.
[42] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2016, pp. 770-778.
[43] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," 2014, arXiv:1412.6980. [Online]. Available: https://arxiv.org/abs/1412.6980

CHEN YANG is currently pursuing the master's degree with the College of Information and Electrical Engineering, China Agricultural University, Beijing, China. His research is about general deep learning and machine learning, but his main research interest is deep model compression.

ZHENGHONG YANG received the master's and Ph.D. degrees from Beijing Normal University, in 1990 and 2001, respectively.
He is currently a Professor with the College of Science, China Agricultural University. He has presided over two projects of the National Natural Science Foundation. He has written two teaching and research books and has published more than 40 academic papers in domestic and foreign journals, of which about 30 are cited by SCI/EI/ISTP. His major research interests include matrix theory, numerical algebra, and image processing. He is a member of the Beijing and Chinese Society of Computational Mathematics.

ABDUL MATEEN KHATTAK received the Ph.D. degree in horticulture and landscape from the University of Reading, U.K., in 1999. He was a Research Scientist in different agriculture research organizations before joining the University of Agriculture, Peshawar, Pakistan, where he is currently a Professor with the Department of Horticulture. He has conducted academic and applied research on different aspects of tropical fruits, vegetables, and ornamental plants. He has also worked for Alberta Agriculture and Forestry, Canada, as a Research Associate, and for the Organic Agriculture Centre of Canada as a Research and Extension Coordinator for Alberta province. There he helped in developing organic standards for greenhouse production and energy-saving technologies for Alberta greenhouses. He is a Professor with considerable experience in teaching and research. He is currently a Visiting Professor with the College of Information and Electrical Engineering, China Agricultural University, Beijing. He has published 59 research articles in scientific journals of international repute. He has also attended and presented at several international scientific conferences. His research interests include greenhouse production, medicinal, aromatic, and ornamental plants, light quality, supplemental lighting, temperature effects on greenhouse crops, aquaponics, and organic production.

LIU YANG is currently pursuing the master's degree with the College of Information and Electrical Engineering, China Agricultural University, Beijing, China. Her research interests include the application of image recognition and intelligent robots in the field of agriculture.

WENXIN ZHANG is currently pursuing the master's degree with the School of Information and Electrical Engineering, China Agricultural University, Beijing, China. Her research interest includes deep-learning-based pose estimation methods for pigs, for timely access to pig information.

WANLIN GAO received the B.S., M.S., and Ph.D. degrees from China Agricultural University, in 1990, 2000, and 2010, respectively. He is currently the Dean of the College of Information and Electrical Engineering, China Agricultural University. He has been the principal investigator (PI) of over 20 national plans and projects. He has published 90 academic papers in domestic and foreign journals, of which over 40 are cited by SCI/EI/ISTP. He has written two teaching materials, which are supported by the National Key Technology Research and Development Program of China during the 11th Five-Year Plan Period, and five monographs. He holds 101 software copyrights, 11 patents for inventions, and eight patents for new practical inventions. His major research interests include the informatization of new rural areas, intelligent agriculture, and the service for rural comprehensive information.
He is a member of the Science and Technology Committee of the Ministry of Agriculture, a member of the Agriculture and Forestry Committee of Computer Basic Education in colleges and universities, and a Senior Member of the Society of Chinese Agricultural Engineering.

MINJUAN WANG received the Ph.D. degree from the School of Biological Science and Medical Engineering, Beihang University, under the supervision of Prof. Hong Liu, in June 2017. She was a Visiting Scholar with the School of Environmental Science, Ontario Agriculture College, University of Guelph, from October 2015 to May 2017. She is currently a Postdoctoral Fellow with the College of Information and Electrical Engineering, China Agricultural University. Her research interests mainly include bioinformatics and key technologies of the Internet of Things.
<> <> <>


<> <> <>
The 4 Research Techniques to Train Deep Neural Network Models More Efficiently


James Le


Deep learning and unsupervised feature learning have shown great promise in many practical applications. State-of-the-art performance has been reported in several domains, ranging from speech recognition and image recognition to text processing and beyond.

It has also been observed that increasing the scale of deep learning—with respect to numbers of training examples, model parameters, or both—can drastically improve accuracy. These results have led to a surge of interest in scaling up the training and inference algorithms used for these models and in improving optimization techniques for both.

The use of GPUs is a significant advance in recent years that makes the training of modestly-sized deep networks practical. A known limitation of the GPU approach is that the training speed-up is small when the model doesn't fit in a GPU's memory (typically less than 6 gigabytes).

To use a GPU effectively, researchers often reduce the size of the dataset or parameters so that CPU-to-GPU transfers are not a significant bottleneck. While data and parameter reduction work well for small problems (e.g., acoustic modeling for speech recognition), they are less attractive for problems with a large number of examples and dimensions (e.g., high-resolution images).

In the previous post, we talked about 5 different algorithms for efficient deep learning inference. In this article, we'll discuss the training side: what are the best research techniques to train deep neural networks more efficiently?


1 — Parallelization Training
Let's start with parallelization. As the figure below shows, the number of transistors keeps increasing over the years, but single-threaded performance and frequency are plateauing. Interestingly, the number of cores is increasing.

So what we really need to know is how to parallelize the problem to take advantage of parallel processing. There are a lot of opportunities to do that in deep neural networks.

For example, we can do data parallelism: feeding 2 images into the same model and running them at the same time. This does not affect latency for any single input. It doesn't make it shorter, but it makes the batch size larger. It also requires coordinated weight updates during training; a minimal sketch of this setup follows.
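To make this concrete, the snippet below is a minimal sketch of single-machine data parallelism, assuming PyTorch and its nn.DataParallel wrapper. The toy model, batch size, and optimizer settings are illustrative assumptions, not values taken from this article.

    # Minimal sketch of single-machine data parallelism (assumes PyTorch is installed).
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
    if torch.cuda.device_count() > 1:
        # Each GPU gets a slice of the batch; gradients are combined before the update,
        # which is the "coordinated weight update" described above.
        model = nn.DataParallel(model)
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    criterion = nn.CrossEntropyLoss()

    inputs = torch.randn(64, 512, device=device)      # one large batch, split across devices
    targets = torch.randint(0, 10, (64,), device=device)

    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()

Each GPU processes a slice of the batch, and the combined gradients produce one coordinated weight update. Multi-machine variants (for example, DistributedDataParallel or parameter servers) follow the same pattern at a larger scale.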
For example, in Jeff Dean's paper "Large Scale Distributed Deep Networks," there's a parameter server (as a master) and a couple of model workers (as slaves) running their own pieces of training data and pushing gradient updates to the master.

Another idea is model parallelism — splitting up the model and distributing each part to different processors or different threads. For example, imagine we want to run convolution in the image below by doing a 6-dimension "for" loop. What we can do is cut the input image into 2x2 blocks, so that each thread/processor handles 1/4 of the image. Also, we can parallelize the convolutional layers by the output or input feature map regions, and the fully-connected layers by the output activation.

2 — Mixed Precision Training
Larger models usually require more compute and memory resources to train. These requirements can be lowered by using reduced precision representation and arithmetic.

Performance (speed) of any program, including neural network training and inference, is limited by one of three factors: arithmetic bandwidth, memory bandwidth, or latency. Reduced precision addresses two of these limiters. Memory bandwidth pressure is lowered by using fewer bits to store the same number of values. Arithmetic time can also be lowered on processors that offer higher throughput for reduced precision math. For example, half-precision math throughput in recent GPUs is 2× to 8× higher than for single-precision. In addition to speed improvements, reduced precision formats also reduce the amount of memory required for training.

Modern deep learning training systems use a single-precision (FP32) format. In their paper "Mixed Precision Training," researchers from NVIDIA and Baidu addressed training with reduced precision while maintaining model accuracy.

Specifically, they trained various neural networks using the IEEE half-precision format (FP16). Since the FP16 format has a narrower dynamic range than FP32, they introduced three techniques to prevent model accuracy loss: maintaining a master copy of weights in FP32, loss-scaling that minimizes gradient values becoming zeros, and FP16 arithmetic with accumulation in FP32.

Using these techniques, they demonstrated that a wide variety of network architectures and applications can be trained to match the accuracy of FP32 training. Experimental results include convolutional and recurrent network architectures, trained for classification, regression, and generative tasks.

Applications include image classification, image generation, object detection, language modeling, machine translation, and speech recognition. The proposed methodology requires no changes to models or training hyperparameters.


3 — Model Distillation
Model distillation refers to the idea of model compression by teaching a smaller network exactly what to do, step-by-step, using a bigger, already-trained network. The 'soft labels' refer to the output feature maps by the bigger network after every convolution layer. The smaller network is then trained to learn the exact behavior of the bigger network by trying to replicate its outputs at every level (not just the final loss).

The method was first proposed by Bucila et al., 2006 and generalized by Hinton et al., 2015; a sketch of the distillation loss follows.
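Anticipating the description below, here is a minimal, hedged sketch of such a distillation loss in the style of Hinton et al. (2015): a softened KL-divergence term against the teacher's outputs combined with ordinary cross-entropy on the true labels. The toy linear models, temperature T, and mixing weight alpha are illustrative assumptions, not values taken from this article.

    # Minimal sketch of a teacher-student distillation loss (Hinton et al., 2015 style).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    teacher = nn.Linear(784, 10)   # stands in for a large, already-trained network
    student = nn.Linear(784, 10)   # smaller network being trained
    T, alpha = 4.0, 0.9            # temperature and soft/hard loss mixing weight

    def distillation_loss(x, labels):
        with torch.no_grad():
            teacher_logits = teacher(x)
        student_logits = student(x)
        # Soft targets: match the teacher's softened class distribution.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        # Hard targets: ordinary cross-entropy on the true labels.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

    x = torch.randn(32, 784)
    labels = torch.randint(0, 10, (32,))
    loss = distillation_loss(x, labels)
    loss.backward()

In practice the teacher would be a large pre-trained network and the student a much smaller one; only the structure of the loss matters here.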
In distillation, knowledge is transferred from the teacher model to the student by minimizing a loss function in which the target is the distribution of class probabilities predicted by the teacher model. That is — the output of a softmax function on the teacher model's logits.

So how exactly do teacher-student networks work?

The highly-complex teacher network is first trained separately using the complete dataset. This step requires high computational performance and thus can only be done offline (on high-performing GPUs).

While designing a student network, correspondence needs to be established between intermediate outputs of the student network and the teacher network. This correspondence can involve directly passing the output of a layer in the teacher network to the student network, or performing some data augmentation before passing it to the student network.

Next, the data are forward-passed through the teacher network to get all intermediate outputs, and then data augmentation (if any) is applied to the same.

Finally, the outputs from the teacher network are back-propagated through the student network so that the student network can learn to replicate the behavior of the teacher network.

4 — Dense-Sparse-Dense Training
The research paper "Dense-Sparse-Dense Training for Deep Neural Networks" was published back in 2017 by researchers from Stanford, NVIDIA, Baidu, and Facebook. Applying Dense-Sparse-Dense (DSD) takes 3 sequential steps:

Dense: Normal neural net training — business as usual. It's notable that even though DSD acts as a regularizer, the usual regularization methods such as dropout and weight regularization can be applied as well. The authors don't mention batch normalization, but it would work as well.

Sparse: We regularize the network by removing connections with small weights. From each layer in the network, a percentage of the layer's weights that are closest to 0 in absolute value is selected to be pruned. This means that they are set to 0 at each training iteration. It's worth noting that the pruned weights are selected only once, not at each SGD iteration. Eventually, the network recovers the pruned weights' knowledge and condenses it in the remaining ones. We train this sparse net until convergence.

Dense: First, we re-enable the pruned weights from the previous step. The net is again trained normally until convergence. This step increases the capacity of the model. It can use the recovered capacity to store new knowledge. The authors note that the learning rate should be 1/10th of the original. Since the model is already performing well, the lower learning rate helps preserve the knowledge gained in the previous step.
<> <> <>


<> <> <>
THE LOTTERY TICKET HYPOTHESIS: FINDING SPARSE, TRAINABLE NEURAL NETWORKS


Jonathan Frankle, MIT CSAIL, jfrankle@csail.mit.edu
Michael Carbin, MIT CSAIL, mcarbin@csail.mit.edu


ABSTRACT

Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance.
+ We find that a standard pruning technique naturally uncovers subnetworks whose + initializations made them capable of training effectively. Based on these results, we + articulate the lottery ticket hypothesis. dense, randomly-initialized, feed-forward + networks contain subnetworks (winning tickets) that—when trained in isolation— + reach test accuracy comparable to the original network in a similar number of + iterations. The winning tickets we find have won the initialization lottery. their + connections have initial weights that make training particularly effective. + We present an algorithm to identify winning tickets and a series of experiments + that support the lottery ticket hypothesis and the importance of these fortuitous + initializations. We consistently find winning tickets that are less than 10-20% of + the size of several fully-connected and convolutional feed-forward architectures + for MNIST and CIFAR10. Above this size, the winning tickets that we find learn + faster than the original network and reach higher test accuracy. + + + 1 INTRODUCTION + + Techniques for eliminating unnecessary weights from neural networks (pruning) (LeCun et al., 1990; + Hassibi & Stork, 1993; Han et al., 2015; Li et al., 2016) can reduce parameter-counts by more than + 90% without harming accuracy. Doing so decreases the size (Han et al., 2015; Hinton et al., 2015) + or energy consumption (Yang et al., 2017; Molchanov et al., 2016; Luo et al., 2017) of the trained + networks, making inference more efficient. However, if a network can be reduced in size, why do we + not train this smaller architecture instead in the interest of making training more efficient as well? + Contemporary experience is that the architectures uncovered by pruning are harder to train from the + start, reaching lower accuracy than the original networks. 1 + + Consider an example. In Figure 1, we randomly sample and train subnetworks from a fully-connected + network for MNIST and convolutional networks for CIFAR10. Random sampling models the effect + of the unstructured pruning used by LeCun et al. (1990) and Han et al. (2015). Across various levels + of sparsity, dashed lines trace the iteration of minimum validation loss 2 and the test accuracy at that + iteration. The sparser the network, the slower the learning and the lower the eventual test accuracy. + + 1 “Training a pruned model from scratch performs worse than retraining a pruned model, which may indicate + the difficulty of training a network with a small capacity.” (Li et al., 2016) “During retraining, it is better to retain + the weights from the initial training phase for the connections that survived pruning than it is to re-initialize the + pruned layers...gradient descent is able to find a good solution when the network is initially trained, but not after + re-initializing some layers and retraining them.” (Han et al., 2015) + 2 As a proxy for the speed at which a network learns, we use the iteration at which an early-stopping criterion + would end training. The particular early-stopping criterion we employ throughout this paper is the iteration of + minimum validation loss during training. See Appendix C for more details on this choice. + + <
>

Figure 1. The iteration at which early-stopping would occur (left) and the test accuracy at that iteration (right) of the Lenet architecture for MNIST and the Conv-2, Conv-4, and Conv-6 architectures for CIFAR10 (see Figure 2) when trained starting at various sizes. Dashed lines are randomly sampled sparse networks (average of ten trials). Solid lines are winning tickets (average of five trials).


In this paper, we show that there consistently exist smaller subnetworks that train from the start and learn at least as fast as their larger counterparts while reaching similar test accuracy. Solid lines in Figure 1 show networks that we find. Based on these results, we state the lottery ticket hypothesis.
The Lottery Ticket Hypothesis. A randomly-initialized, dense neural network contains a subnetwork that is initialized such that—when trained in isolation—it can match the test accuracy of the original network after training for at most the same number of iterations.

More formally, consider a dense feed-forward neural network <> with initial parameters <>. When optimizing with stochastic gradient descent (SGD) on a training set, f reaches minimum validation loss l at iteration j with test accuracy a. In addition, consider training <> with a mask <> on its parameters such that its initialization is <>. When optimizing with SGD on the same training set (with m fixed), f reaches minimum validation loss l' at iteration j' with test accuracy a'. The lottery ticket hypothesis predicts that there exists a mask m for which <> (commensurate training time), <> (commensurate accuracy), and <> (fewer parameters).
We find that a standard pruning technique automatically uncovers such trainable subnetworks from fully-connected and convolutional feed-forward networks. We designate these trainable subnetworks, <>, winning tickets, since those that we find have won the initialization lottery with a combination of weights and connections capable of learning. When their parameters are randomly reinitialized (<> where <>), our winning tickets no longer match the performance of the original network, offering evidence that these smaller networks do not train effectively unless they are appropriately initialized.

Identifying winning tickets. We identify a winning ticket by training a network and pruning its smallest-magnitude weights. The remaining, unpruned connections constitute the architecture of the winning ticket. Unique to our work, each unpruned connection's value is then reset to its initialization from the original network before it was trained. This forms our central experiment.
1. Randomly initialize a neural network <> (where <>).
2. Train the network for j iterations, arriving at parameters <>.
3. Prune p% of the parameters, creating a mask m.
4. Reset the remaining parameters to their values in <>, creating the winning ticket <>.
As described, this pruning approach is one-shot: the network is trained once, p% of weights are pruned, and the surviving weights are reset. However, in this paper, we focus on iterative pruning, which repeatedly trains, prunes, and resets the network over n rounds; each round prunes <> of the weights that survive the previous round. Our results show that iterative pruning finds winning tickets that match the accuracy of the original network at smaller sizes than does one-shot pruning. A brief code sketch of this train-prune-reset loop follows.
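As a reading aid, the following is a minimal, hedged sketch of the iterative version of this experiment: layer-wise magnitude pruning with the surviving weights rewound to their original initialization. The tiny PyTorch model, random data, training budget, and 20%-per-round pruning rate are placeholders for illustration, not the paper's actual architectures or schedules.

    # Minimal sketch of iterative magnitude pruning with weight rewinding, following
    # the four steps above. The toy model and random data are placeholders only.
    import copy
    import torch
    import torch.nn as nn

    def apply_masks(model, masks):
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])          # keep pruned weights at zero

    def train(model, masks, steps=100):
        opt = torch.optim.SGD(model.parameters(), lr=0.1)
        lossf = nn.CrossEntropyLoss()
        for _ in range(steps):
            x, y = torch.randn(32, 100), torch.randint(0, 10, (32,))
            opt.zero_grad()
            lossf(model(x), y).backward()
            opt.step()
            apply_masks(model, masks)

    model = nn.Sequential(nn.Linear(100, 50), nn.ReLU(), nn.Linear(50, 10))
    theta_0 = copy.deepcopy(model.state_dict())      # step 1: record the initialization
    masks = {n: torch.ones_like(p)                   # prune weight matrices, not biases
             for n, p in model.named_parameters() if p.dim() > 1}

    for round_ in range(5):                          # iterative pruning, 20% per round
        train(model, masks)                          # step 2: train the (masked) network
        for name, p in model.named_parameters():     # step 3: prune lowest-magnitude weights
            if name in masks:
                alive = p.detach()[masks[name] == 1].abs()
                threshold = alive.quantile(0.2)      # 20% of surviving weights per layer
                masks[name][p.detach().abs() < threshold] = 0.0
        model.load_state_dict(theta_0)               # step 4: rewind survivors to theta_0
        apply_masks(model, masks)                    # winning-ticket candidate

In the paper's experiments, training runs on real datasets to an early-stopping iteration and results are averaged over multiple trials; only the train-prune-rewind structure is captured here.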
+ Results.We identify winning tickets in a fully-connected architecture for MNIST and convolutional + architectures for CIFAR10 across several optimization strategies (SGD, momentum, and Adam) with + techniques like dropout, weight decay, batchnorm, and residual connections. We use an unstructured + pruning technique, so these winning tickets are sparse. In deeper networks, our pruning-based strategy + for finding winning tickets is sensitive to the learning rate. it requires warmup to find winning tickets + at higher learning rates. The winning tickets we find are 10-20% (or less) of the size of the original + + <
> + + Figure 2. Architectures tested in this paper. Convolutions are 3x3. Lenet is from LeCun et al. (1998). + Conv-2/4/6 are variants of VGG (Simonyan & Zisserman, 2014). Resnet-18 is from He et al. (2016). + VGG-19 for CIFAR10 is adapted from Liu et al. (2019). Initializations are Gaussian Glorot (Glorot + & Bengio, 2010). Brackets denote residual connections around layers. + + + network (smaller size). Down to that size, they meet or exceed the original network’s test accuracy + (commensurate accuracy) in at most the same number of iterations (commensurate training time). + When randomly reinitialized, winning tickets perform far worse, meaning structure alone cannot + explain a winning ticket’s success. + The Lottery Ticket Conjecture.Returning to our motivating question, we extend our hypothesis + into an untested conjecture that SGD seeks out and trains a subset of well-initialized weights. Dense, + randomly-initialized networks are easier to train than the sparse networks that result from pruning + because there are more possible subnetworks from which training might recover a winning ticket. + Contributions. + We demonstrate that pruning uncovers trainable subnetworks that reach test accuracy comparable + to the original networks from which they derived in a comparable number of iterations. + We show that pruning finds winning tickets that learn faster than the original network while + reaching higher test accuracy and generalizing better. + We propose the lottery ticket hypothesis as a new perspective on the composition of neural + networks to explain these findings. + Implications.In this paper, we empirically study the lottery ticket hypothesis. Now that we have + demonstrated the existence of winning tickets, we hope to exploit this knowledge to. + Improve training performance.Since winning tickets can be trained from the start in isolation, a hope + is that we can design training schemes that search for winning tickets and prune as early as possible. + Design better networks.Winning tickets reveal combinations of sparse architectures and initializations + that are particularly adept at learning. We can take inspiration from winning tickets to design new + architectures and initialization schemes with the same properties that are conducive to learning. We + may even be able to transfer winning tickets discovered for one task to many others. + Improve our theoretical understanding of neural networks.We can study why randomly-initialized + feed-forward networks seem to contain winning tickets and potential implications for theoretical + study of optimization (Du et al., 2019) and generalization (Zhou et al., 2018; Arora et al., 2018). + + 2 WINNING TICKETS IN FULLY-CONNECTED NETWORKS + + In this Section, we assess the lottery ticket hypothesis as applied to fully-connected networks trained + on MNIST. We use the Lenet-300-100 architecture (LeCun et al., 1998) as described in Figure 2. + We follow the outline from Section 1. after randomly initializing and training a network, we prune + the network and reset the remaining connections to their original initializations. We use a simple + layer-wise pruning heuristic. remove a percentage of the weights with the lowest magnitudes within + each layer (as in Han et al. (2015)). Connections to outputs are pruned at half of the rate of the rest of + the network. We explore other hyperparameters in Appendix G, including learning rates, optimization + strategies (SGD, momentum), initialization schemes, and network sizes. + + <
>

Figure 3. Test accuracy on Lenet (iterative pruning) as training proceeds. Each curve is the average of five trials. Labels are Pm—the fraction of weights remaining in the network after pruning. Error bars are the minimum and maximum of any trial.


Notation. Pm = ||m||0 / |θ| is the sparsity of mask m, e.g., Pm = 25% when 75% of weights are pruned.
Iterative pruning. The winning tickets we find learn faster than the original network. Figure 3 plots the average test accuracy when training winning tickets iteratively pruned to various extents. Error bars are the minimum and maximum of five runs. For the first pruning rounds, networks learn faster and reach higher test accuracy the more they are pruned (left graph in Figure 3). A winning ticket comprising 51.3% of the weights from the original network (i.e., Pm = 51.3%) reaches higher test accuracy faster than the original network but slower than when Pm = 21.1%. When Pm < 21.1%, learning slows (middle graph). When Pm = 3.6%, a winning ticket regresses to the performance of the original network. A similar pattern repeats throughout this paper.
Figure 4a summarizes this behavior for all pruning levels when iteratively pruning by 20% per iteration (blue). On the left is the iteration at which each network reaches minimum validation loss (i.e., when the early-stopping criterion would halt training) in relation to the percent of weights remaining after pruning; in the middle is test accuracy at that iteration. We use the iteration at which the early-stopping criterion is met as a proxy for how quickly the network learns.
The winning tickets learn faster as Pm decreases from 100% to 21%, at which point early-stopping occurs 38% earlier than for the original network. Further pruning causes learning to slow, returning to the early-stopping performance of the original network when Pm = 3.6%. Test accuracy increases with pruning, improving by more than 0.3 percentage points when Pm = 13.5%; after this point, accuracy decreases, returning to the level of the original network when Pm = 3.6%.
At early stopping, training accuracy (Figure 4a, right) increases with pruning in a similar pattern to test accuracy, seemingly implying that winning tickets optimize more effectively but do not generalize better. However, at iteration 50,000 (Figure 4b), iteratively-pruned winning tickets still see a test accuracy improvement of up to 0.35 percentage points in spite of the fact that training accuracy reaches 100% for nearly all networks (Appendix D, Figure 12). This means that the gap between training accuracy and test accuracy is smaller for winning tickets, pointing to improved generalization.
Random reinitialization. To measure the importance of a winning ticket's initialization, we retain the structure of a winning ticket (i.e., the mask m) but randomly sample a new initialization <>. We randomly reinitialize each winning ticket three times, making 15 total per point in Figure 4. We find that initialization is crucial for the efficacy of a winning ticket. The right graph in Figure 3 shows this experiment for iterative pruning. In addition to the original network and winning tickets at Pm = 51% and 21% are the random reinitialization experiments. Where the winning tickets learn faster as they are pruned, they learn progressively slower when randomly reinitialized.
The broader results of this experiment are the orange line in Figure 4a.
Unlike winning tickets, the + reinitialized networks learn increasingly slower than the original network and lose test accuracy after + little pruning. The average reinitialized iterative winning ticket’s test accuracy drops off from the + original accuracy when Pm = 21.1%, compared to 2.9% for the winning ticket. When Pm = 21%, + the winning ticket reaches minimum validation loss 2.51x faster than when reinitialized and is half a + percentage point more accurate. All networks reach 100% training accuracy for Pm = 5%; Figure + + <
> + + Figure 4. Early-stopping iteration and accuracy of Lenet under one-shot and iterative pruning. + Average of five trials; error bars for the minimum and maximum values. At iteration 50,000, training + accuracy 100% for Pm = 2% for iterative winning tickets (see Appendix D, Figure 12). + + + 4b therefore shows that the winning tickets generalize substantially better than when randomly + reinitialized. This experiment supports the lottery ticket hypothesis’ emphasis on initialization. + the original initialization withstands and benefits from pruning, while the random reinitialization’s + performance immediately suffers and diminishes steadily. + One-shot pruning.Although iterative pruning extracts smaller winning tickets, repeated training + means they are costly to find. One-shot pruning makes it possible to identify winning tickets + without this repeated training. Figure 4c shows the results of one-shot pruning (green) and randomly + reinitializing (red); one-shot pruning does indeed find winning tickets. When 67.5% Pm = 17.6%, + the average winning tickets reach minimum validation accuracy earlier than the original network. + When 95.0% Pm = 5.17%, test accuracy is higher than the original network. However, iteratively- + pruned winning tickets learn faster and reach higher test accuracy at smaller network sizes. The + green and red lines in Figure 4c are reproduced on the logarithmic axes of Figure 4a, making this + performance gap clear. Since our goal is to identify the smallest possible winning tickets, we focus + on iterative pruning throughout the rest of the paper. + + 3 WINNING TICKETS IN CONVOLUTIONAL NETWORKS + + Here, we apply the lottery ticket hypothesis to convolutional networks on CIFAR10, increasing + both the complexity of the learning problem and the size of the networks. We consider the Conv-2, + Conv-4, and Conv-6 architectures in Figure 2, which are scaled-down variants of the VGG (Simonyan + & Zisserman, 2014) family. The networks have two, four, or six convolutional layers followed by + two fully-connected layers; max-pooling occurs after every two convolutional layers. The networks + cover a range from near-fully-connected to traditional convolutional networks, with less than 1% of + parameters in convolutional layers in Conv-2 to nearly two thirds in Conv-6. 3 + + Finding winning tickets. The solid lines in Figure 5 (top) show the iterative lottery ticket experiment + on Conv-2 (blue), Conv-4 (orange), and Conv-6 (green) at the per-layer pruning rates from Figure 2. + The pattern from Lenet in Section 2 repeats. as the network is pruned, it learns faster and test accuracy + rises as compared to the original network. In this case, the results are more pronounced. Winning + + 3 Appendix H explores other hyperparameters, including learning rates, optimization strategies (SGD, + momentum), and the relative rates at which to prune convolutional and fully-connected layers. + + <
> + + Figure 5. Early-stopping iteration and test and training accuracy of the Conv-2/4/6 architectures when + iteratively pruned and when randomly reinitialized. Each solid line is the average of five trials; each + dashed line is the average of fifteen reinitializations (three per trial). The bottom right graph plots test + accuracy of winning tickets at iterations corresponding to the last iteration of training for the original + network (20,000 for Conv-2, 25,000 for Conv-4, and 30,000 for Conv-6); at this iteration, training + accuracy100%forPm 2%for winning tickets (see Appendix D). + + + + tickets reach minimum validation loss at best 3.5x faster for Conv-2 (Pm = 8.8%), 3.5x for Conv-4 + (Pm = 9.2%), and 2.5x for Conv-6 (Pm = 15.1%). Test accuracy improves at best 3.4 percentage + points for Conv-2 (Pm = 4.6%), 3.5 for Conv-4 (Pm = 11.1%), and 3.3 for Conv-6 (Pm = 26.4%). + All three networks remain above their original average test accuracy when Pm > 2%. + As in Section 2, training accuracy at the early-stopping iteration rises with test accuracy. However, at + iteration 20,000 for Conv-2, 25,000 for Conv-4, and 30,000 for Conv-6 (the iterations corresponding + to the final training iteration for the original network), training accuracy reaches 100% for all networks + when Pm = 2% (Appendix D, Figure 13) and winning tickets still maintain higher test accuracy + (Figure 5 bottom right). This means that the gap between test and training accuracy is smaller for + winning tickets, indicating they generalize better. + Random reinitialization.We repeat the random reinitialization experiment from Section 2, which + appears as the dashed lines in Figure 5. These networks again take increasingly longer to learn upon + continued pruning. Just as with Lenet on MNIST (Section 2), test accuracy drops off more quickly + for the random reinitialization experiments. However, unlike Lenet, test accuracy at early-stopping + time initially remains steady and even improves for Conv-2 and Conv-4, indicating that—at moderate + levels of pruning—the structure of the winning tickets alone may lead to better accuracy. + Dropout.Dropout (Srivastava et al., 2014; Hinton et al., 2012) improves accuracy by randomly dis- + abling a fraction of the units (i.e., randomly sampling a subnetwork) on each training iteration. Baldi + & Sadowski (2013) characterize dropout as simultaneously training the ensemble of all subnetworks. + Since the lottery ticket hypothesis suggests that one of these subnetworks comprises a winning ticket, + it is natural to ask whether dropout and our strategy for finding winning tickets interact. + Figure 6 shows the results of training Conv-2, Conv-4, and Conv-6 with a dropout rate of 0.5. Dashed + lines are the network performance without dropout (the solid lines in Figure 5). 4 We continue to find + winning tickets when training with dropout. Dropout increases initial test accuracy (2.1, 3.0, and 2.4 + percentage points on average for Conv-2, Conv-4, and Conv-6, respectively), and iterative pruning + increases it further (up to an additional 2.3, 4.6, and 4.7 percentage points, respectively, on average). + Learning becomes faster with iterative pruning as before, but less dramatically in the case of Conv-2. + + + 4 We choose new learning rates for the networks as trained with dropout—see Appendix H.5. + + <
> + + Figure 6. Early-stopping iteration and test accuracy at early-stopping of Conv-2/4/6 when iteratively + pruned and trained with dropout. The dashed lines are the same networks trained without dropout + (the solid lines in Figure 5). Learning rates are 0.0003 for Conv-2 and 0.0002 for Conv-4 and Conv-6. + + <
> + + Figure 7. Test accuracy (at 30K, 60K, and 112K iterations) of VGG-19 when iteratively pruned. + + + These improvements suggest that our iterative pruning strategy interacts with dropout in a complementary + way. Srivastava et al. (2014) observe that dropout induces sparse activations in the final + network; it is possible that dropout-induced sparsity primes a network to be pruned. If so, dropout + techniques that target weights (Wan et al., 2013) or learn per-weight dropout probabilities (Molchanov + et al., 2017; Louizos et al., 2018) could make winning tickets even easier to find. + + 4 VGG AND RESNET FOR CIFAR10 + + Here, we study the lottery ticket hypothesis on networks evocative of the architectures and techniques + used in practice. Specifically, we consider VGG-style deep convolutional networks (VGG-19 on + CIFAR10—Simonyan & Zisserman (2014)) and residual networks (Resnet-18 on CIFAR10—He + et al. (2016)). 5 These networks are trained with batchnorm, weight decay, decreasing learning + rate schedules, and augmented training data. We continue to find winning tickets for all of these + architectures; however, our method for finding them, iterative pruning, is sensitive to the particular + learning rate used. In these experiments, rather than measure early-stopping time (which, for these + larger networks, is entangled with learning rate schedules), we plot accuracy at several moments + during training to illustrate the relative rates at which accuracy improves. + Global pruning.On Lenet and Conv-2/4/6, we prune each layer separately at the same rate. For + Resnet-18 and VGG-19, we modify this strategy slightly. we prune these deeper networks globally, + removing the lowest-magnitude weights collectively across all convolutional layers. In Appendix + I.1, we find that global pruning identifies smaller winning tickets for Resnet-18 and VGG-19. Our + conjectured explanation for this behavior is as follows. For these deeper networks, some layers have + far more parameters than others. For example, the first two convolutional layers of VGG-19 have + 1728 and 36864 parameters, while the last has 2.35 million. When all layers are pruned at the same + rate, these smaller layers become bottlenecks, preventing us from identifying the smallest possible + winning tickets. Global pruning makes it possible to avoid this pitfall. + VGG-19.We study the variant VGG-19 adapted for CIFAR10 by Liu et al. (2019); we use the + the same training regime and hyperparameters. 160 epochs (112,480 iterations) with SGD with + 5 See Figure 2 and Appendices I for details on the networks, hyperparameters, and training regimes. + + <
> + + Figure 8. Test accuracy (at 10K, 20K, and 30K iterations) of Resnet-18 when iteratively pruned. + + + momentum (0.9) and decreasing the learning rate by a factor of 10 at 80 and 120 epochs. This + network has 20 million parameters. Figure 7 shows the results of iterative pruning and random + reinitialization on VGG-19 at two initial learning rates. 0.1 (used in Liu et al. (2019)) and 0.01. At the + higher learning rate, iterative pruning does not find winning tickets, and performance is no better than + when the pruned networks are randomly reinitialized. However, at the lower learning rate, the usual + pattern reemerges, with subnetworks that remain within 1 percentage point of the original accuracy + whilePm 3.5%. (They are not winning tickets, since they do not match the original accuracy.) + When randomly reinitialized, the subnetworks lose accuracy as they are pruned in the same manner as + other experiments throughout this paper. Although these subnetworks learn faster than the unpruned + network early in training (Figure 7 left), this accuracy advantage erodes later in training due to the + lower initial learning rate. However, these subnetworks still learn faster than when reinitialized. + To bridge the gap between the lottery ticket behavior of the lower learning rate and the accuracy + advantage of the higher learning rate, we explore the effect of linear learning rate warmup from 0 to + the initial learning rate over k iterations. Training VGG-19 with warmup (k= 10000, green line) at + learning rate 0.1 improves the test accuracy of the unpruned network by about one percentage point. + Warmup makes it possible to find winning tickets, exceeding this initial accuracy whenPm 1.5%. + Resnet-18.Resnet-18 (He et al., 2016) is a 20 layer convolutional network with residual connections + designed for CIFAR10. It has 271,000 parameters. We train the network for 30,000 iterations with + SGD with momentum (0.9), decreasing the learning rate by a factor of 10 at 20,000 and 25,000 + iterations. Figure 8 shows the results of iterative pruning and random reinitialization at learning + rates 0.1 (used in He et al. (2016)) and 0.01. These results largely mirror those of VGG. iterative + pruning finds winning tickets at the lower learning rate but not the higher learning rate. The accuracy + of the best winning tickets at the lower learning rate (89.5% when 41.7%, Pm = 21.9%) falls + short of the original network’s accuracy at the higher learning rate (90.5%). At lower learning rate, + the winning ticket again initially learns faster (left plots of Figure 8), but falls behind the unpruned + network at the higher learning rate later in training (right plot). Winning tickets trained with warmup + close the accuracy gap with the unpruned network at the higher learning rate, reaching 90.5% test + accuracy with learning rate 0.03 (warmup,k= 20000) atPm = 27.1%. For these hyperparameters, + we still find winning tickets whenPm 11.8%. Even with warmup, however, we could not find + hyperparameters for which we could identify winning tickets at the original learning rate, 0.1. + + 5 DISCUSSION + + Existing work on neural network pruning (e.g., Han et al. (2015)) demonstrates that the function + learned by a neural network can often be represented with fewer parameters. Pruning typically + proceeds by training the original network, removing connections, and further fine-tuning. 
In effect, the initial training initializes the weights of the pruned network so that it can learn in isolation during fine-tuning. We seek to determine if similarly sparse networks can learn from the start. We find that the architectures studied in this paper reliably contain such trainable subnetworks, and the lottery ticket hypothesis proposes that this property applies in general. Our empirical study of the existence and nature of winning tickets invites a number of follow-up questions.

The importance of winning ticket initialization. When randomly reinitialized, a winning ticket learns more slowly and achieves lower test accuracy, suggesting that initialization is important to its success. One possible explanation for this behavior is that these initial weights are close to their final values after training—that, in the most extreme case, they are already trained. However, experiments in Appendix F show the opposite—that the winning ticket weights move further than other weights. This suggests that the benefit of the initialization is connected to the optimization algorithm, dataset, and model. For example, the winning ticket initialization might land in a region of the loss landscape that is particularly amenable to optimization by the chosen optimization algorithm.

Liu et al. (2019) find that pruned networks are indeed trainable when randomly reinitialized, seemingly contradicting conventional wisdom and our random reinitialization experiments. For example, on VGG-19 (for which we share the same setup), they find that networks pruned by up to 80% and randomly reinitialized match the accuracy of the original network. Our experiments in Figure 7 confirm these findings at this level of sparsity (below which Liu et al. do not present data). However, after further pruning, initialization matters: we find winning tickets when VGG-19 is pruned by up to 98.5%; when reinitialized, these tickets reach much lower accuracy. We hypothesize that—up to a certain level of sparsity—highly overparameterized networks can be pruned, reinitialized, and retrained successfully; however, beyond this point, extremely pruned, less severely overparameterized networks only maintain accuracy with fortuitous initialization.

The importance of winning ticket structure. The initialization that gives rise to a winning ticket is arranged in a particular sparse architecture. Since we uncover winning tickets through heavy use of training data, we hypothesize that the structure of our winning tickets encodes an inductive bias customized to the learning task at hand. Cohen & Shashua (2016) show that the inductive bias embedded in the structure of a deep network determines the kinds of data that it can separate more parameter-efficiently than can a shallow network; although Cohen & Shashua (2016) focus on the pooling geometry of convolutional networks, a similar effect may be at play with the structure of winning tickets, allowing them to learn even when heavily pruned.

The improved generalization of winning tickets. We reliably find winning tickets that generalize better, exceeding the test accuracy of the original network while matching its training accuracy. Test accuracy increases and then decreases as we prune, forming an Occam's Hill (Rasmussen & Ghahramani, 2001) where the original, overparameterized model has too much complexity (perhaps overfitting) and the extremely pruned model has too little.
The conventional view of the relationship between compression and generalization is that compact hypotheses can better generalize (Rissanen, 1986). Recent theoretical work shows a similar link for neural networks, proving tighter generalization bounds for networks that can be compressed further (Zhou et al. (2018) for pruning/quantization and Arora et al. (2018) for noise robustness). The lottery ticket hypothesis offers a complementary perspective on this relationship—that larger networks might explicitly contain simpler representations.

Implications for neural network optimization. Winning tickets can reach accuracy equivalent to that of the original, unpruned network, but with significantly fewer parameters. This observation connects to recent work on the role of overparameterization in neural network training. For example, Du et al. (2019) prove that sufficiently overparameterized two-layer ReLU networks (with fixed-size second layers) trained with SGD converge to global optima. A key question, then, is whether the presence of a winning ticket is necessary or sufficient for SGD to optimize a neural network to a particular test accuracy. We conjecture (but do not empirically show) that SGD seeks out and trains a well-initialized subnetwork. By this logic, overparameterized networks are easier to train because they have more combinations of subnetworks that are potential winning tickets.


6 LIMITATIONS AND FUTURE WORK

We only consider vision-centric classification tasks on smaller datasets (MNIST, CIFAR10). We do not investigate larger datasets (namely Imagenet (Russakovsky et al., 2015)). Iterative pruning is computationally intensive, requiring training a network 15 or more times consecutively for multiple trials. In future work, we intend to explore more efficient methods for finding winning tickets that will make it possible to study the lottery ticket hypothesis in more resource-intensive settings.

Sparse pruning is our only method for finding winning tickets. Although we reduce parameter-counts, the resulting architectures are not optimized for modern libraries or hardware. In future work, we intend to study other pruning methods from the extensive contemporary literature, such as structured pruning (which would produce networks optimized for contemporary hardware) and non-magnitude pruning methods (which could produce smaller winning tickets or find them earlier).

The winning tickets we find have initializations that allow them to match the performance of the unpruned networks at sizes too small for randomly-initialized networks to do the same. In future work, we intend to study the properties of these initializations that, in concert with the inductive biases of the pruned network architectures, make these networks particularly adept at learning.

On deeper networks (Resnet-18 and VGG-19), iterative pruning is unable to find winning tickets unless we train the networks with learning rate warmup. In future work, we plan to explore why warmup is necessary and whether other improvements to our scheme for identifying winning tickets could obviate the need for these hyperparameter modifications.

7 RELATED WORK

In practice, neural networks tend to be dramatically overparameterized. Distillation (Ba & Caruana, 2014; Hinton et al., 2015) and pruning (LeCun et al., 1990; Han et al., 2015) rely on the fact that parameters can be reduced while preserving accuracy.
Even with sufficient capacity to memorize training data, networks naturally learn simpler functions (Zhang et al., 2016; Neyshabur et al., 2014; Arpit et al., 2017). Contemporary experience (Bengio et al., 2006; Hinton et al., 2015; Zhang et al., 2016) and Figure 1 suggest that overparameterized networks are easier to train. We show that dense networks contain sparse subnetworks capable of learning on their own starting from their original initializations. Several other research directions aim to train small or sparse networks.

Prior to training. Squeezenet (Iandola et al., 2016) and MobileNets (Howard et al., 2017) are specifically engineered image-recognition networks that are an order of magnitude smaller than standard architectures. Denil et al. (2013) represent weight matrices as products of lower-rank factors. Li et al. (2018) restrict optimization to a small, randomly-sampled subspace of the parameter space (meaning all parameters can still be updated); they successfully train networks under this restriction. We show that one need not even update all parameters to optimize a network, and we find winning tickets through a principled search process involving pruning. Our contribution to this class of approaches is to demonstrate that sparse, trainable networks exist within larger networks.

After training. Distillation (Ba & Caruana, 2014; Hinton et al., 2015) trains small networks to mimic the behavior of large networks; small networks are easier to train in this paradigm. Recent pruning work compresses large models to run with limited resources (e.g., on mobile devices). Although pruning is central to our experiments, we study why training needs the overparameterized networks that make pruning possible. LeCun et al. (1990) and Hassibi & Stork (1993) first explored pruning based on second derivatives. More recently, Han et al. (2015) showed that per-weight magnitude-based pruning substantially reduces the size of image-recognition networks. Guo et al. (2016) restore pruned connections as they become relevant again. Han et al. (2017) and Jin et al. (2016) restore pruned connections to increase network capacity after small weights have been pruned and surviving weights fine-tuned. Other proposed pruning heuristics include pruning based on activations (Hu et al., 2016), redundancy (Mariet & Sra, 2016; Srinivas & Babu, 2015a), per-layer second derivatives (Dong et al., 2017), and energy/computation efficiency (Yang et al., 2017) (e.g., pruning convolutional filters (Li et al., 2016; Molchanov et al., 2016; Luo et al., 2017) or channels (He et al., 2017)). Cohen et al. (2016) observe that convolutional filters are sensitive to initialization ("The Filter Lottery"); throughout training, they randomly reinitialize unimportant filters.

During training. Bellec et al. (2018) train with sparse networks and replace weights that reach zero with new random connections. Srinivas et al. (2017) and Louizos et al. (2018) learn gating variables that minimize the number of nonzero parameters. Narang et al. (2017) integrate magnitude-based pruning into training. Gal & Ghahramani (2016) show that dropout approximates Bayesian inference in Gaussian processes. Bayesian perspectives on dropout learn dropout probabilities during training (Gal et al., 2017; Kingma et al., 2015; Srinivas & Babu, 2016).
Techniques that learn per-weight, per-unit (Srinivas & Babu, 2016), or structured dropout probabilities naturally (Molchanov et al., 2017; Neklyudov et al., 2017) or explicitly (Louizos et al., 2017; Srinivas & Babu, 2015b) prune and sparsify networks during training as dropout probabilities for some weights reach 1. In contrast, we train networks at least once to find winning tickets. These techniques might also find winning tickets, or, by inducing sparsity, might beneficially interact with our methods.

REFERENCES

Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang. Stronger generalization bounds for deep nets via a compression approach. ICML, 2018.

Devansh Arpit, Stanisław Jastrzębski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, et al. A closer look at memorization in deep networks. In International Conference on Machine Learning, pp. 233–242, 2017.

Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? In Advances in Neural Information Processing Systems, pp. 2654–2662, 2014.

Pierre Baldi and Peter J Sadowski. Understanding dropout. In Advances in Neural Information Processing Systems, pp. 2814–2822, 2013.

Guillaume Bellec, David Kappel, Wolfgang Maass, and Robert Legenstein. Deep rewiring: Training very sparse deep networks. Proceedings of ICLR, 2018.

Yoshua Bengio, Nicolas L Roux, Pascal Vincent, Olivier Delalleau, and Patrice Marcotte. Convex neural networks. In Advances in Neural Information Processing Systems, pp. 123–130, 2006.

Joseph Paul Cohen, Henry Z Lo, and Wei Ding. Randomout: Using a convolutional gradient norm to win the filter lottery. ICLR Workshop, 2016.

Nadav Cohen and Amnon Shashua. Inductive bias of deep convolutional networks through pooling geometry. arXiv preprint arXiv:1605.06743, 2016.

Misha Denil, Babak Shakibi, Laurent Dinh, Nando De Freitas, et al. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems, pp. 2148–2156, 2013.

Xin Dong, Shangyu Chen, and Sinno Pan. Learning to prune deep neural networks via layer-wise optimal brain surgeon. In Advances in Neural Information Processing Systems, pp. 4860–4874, 2017.

Simon S. Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=S1eK3i09YQ.

Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pp. 1050–1059, 2016.

Yarin Gal, Jiri Hron, and Alex Kendall. Concrete dropout. In Advances in Neural Information Processing Systems, pp. 3584–3593, 2017.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249–256, 2010.

Yiwen Guo, Anbang Yao, and Yurong Chen. Dynamic network surgery for efficient dnns. In Advances in Neural Information Processing Systems, pp. 1379–1387, 2016.

Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pp. 1135–1143, 2015.
Song Han, Jeff Pool, Sharan Narang, Huizi Mao, Shijian Tang, Erich Elsen, Bryan Catanzaro, John Tran, and William J Dally. DSD: Regularizing deep neural networks with dense-sparse-dense training flow. Proceedings of ICLR, 2017.

Babak Hassibi and David G Stork. Second order derivatives for network pruning: Optimal brain surgeon. In Advances in Neural Information Processing Systems, pp. 164–171, 1993.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.

Yihui He, Xiangyu Zhang, and Jian Sun. Channel pruning for accelerating very deep neural networks. In International Conference on Computer Vision (ICCV), volume 2, pp. 6, 2017.

Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.

Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.

Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.

Hengyuan Hu, Rui Peng, Yu-Wing Tai, and Chi-Keung Tang. Network trimming: A data-driven neuron pruning approach towards efficient deep architectures. arXiv preprint arXiv:1607.03250, 2016.

Forrest N Iandola, Song Han, Matthew W Moskewicz, Khalid Ashraf, William J Dally, and Kurt Keutzer. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv preprint arXiv:1602.07360, 2016.

Xiaojie Jin, Xiaotong Yuan, Jiashi Feng, and Shuicheng Yan. Training skinny deep neural networks with iterative hard thresholding methods. arXiv preprint arXiv:1607.05423, 2016.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Diederik P Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameterization trick. In Advances in Neural Information Processing Systems, pp. 2575–2583, 2015.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.

Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. In Advances in Neural Information Processing Systems, pp. 598–605, 1990.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

Chunyuan Li, Heerad Farkhoor, Rosanne Liu, and Jason Yosinski. Measuring the intrinsic dimension of objective landscapes. Proceedings of ICLR, 2018.

Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710, 2016.

Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. Rethinking the value of network pruning. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=rJlnB3C5Ym.

Christos Louizos, Karen Ullrich, and Max Welling. Bayesian compression for deep learning. In Advances in Neural Information Processing Systems, pp. 3290–3300, 2017.

Christos Louizos, Max Welling, and Diederik P Kingma.
Learning sparse neural networks through l_0 regularization. Proceedings of ICLR, 2018.

Jian-Hao Luo, Jianxin Wu, and Weiyao Lin. Thinet: A filter level pruning method for deep neural network compression. arXiv preprint arXiv:1707.06342, 2017.

Zelda Mariet and Suvrit Sra. Diversity networks. Proceedings of ICLR, 2016.

Dmitry Molchanov, Arsenii Ashukha, and Dmitry Vetrov. Variational dropout sparsifies deep neural networks. arXiv preprint arXiv:1701.05369, 2017.

Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. Pruning convolutional neural networks for resource efficient transfer learning. arXiv preprint arXiv:1611.06440, 2016.

Sharan Narang, Erich Elsen, Gregory Diamos, and Shubho Sengupta. Exploring sparsity in recurrent neural networks. Proceedings of ICLR, 2017.

Kirill Neklyudov, Dmitry Molchanov, Arsenii Ashukha, and Dmitry P Vetrov. Structured Bayesian pruning via log-normal multiplicative noise. In Advances in Neural Information Processing Systems, pp. 6778–6787, 2017.

Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. arXiv preprint arXiv:1412.6614, 2014.

Carl Edward Rasmussen and Zoubin Ghahramani. Occam's razor. In T. K. Leen, T. G. Dietterich, and V. Tresp (eds.), Advances in Neural Information Processing Systems 13, pp. 294–300. MIT Press, 2001. URL http://papers.nips.cc/paper/1925-occams-razor.pdf.

Jorma Rissanen. Stochastic complexity and modeling. The Annals of Statistics, pp. 1080–1100, 1986.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Suraj Srinivas and R Venkatesh Babu. Data-free parameter pruning for deep neural networks. arXiv preprint arXiv:1507.06149, 2015a.

Suraj Srinivas and R Venkatesh Babu. Learning neural network architectures using backpropagation. arXiv preprint arXiv:1511.05497, 2015b.

Suraj Srinivas and R Venkatesh Babu. Generalized dropout. arXiv preprint arXiv:1611.06791, 2016.

Suraj Srinivas, Akshayvarun Subramanya, and R Venkatesh Babu. Training sparse neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 138–145, 2017.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.

Li Wan, Matthew Zeiler, Sixin Zhang, Yann Le Cun, and Rob Fergus. Regularization of neural networks using dropconnect. In International Conference on Machine Learning, pp. 1058–1066, 2013.

Tien-Ju Yang, Yu-Hsin Chen, and Vivienne Sze. Designing energy-efficient convolutional neural networks using energy-aware pruning. arXiv preprint, 2017.

Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.

Wenda Zhou, Victor Veitch, Morgane Austern, Ryan P Adams, and Peter Orbanz. Compressibility and generalization in large-scale deep learning. arXiv preprint arXiv:1804.05862, 2018.
A ACKNOWLEDGMENTS

We gratefully acknowledge IBM, which—through the MIT-IBM Watson AI Lab—contributed the computational resources necessary to conduct the experiments in this paper. We particularly thank IBM researchers German Goldszmidt, David Cox, Ian Molloy, and Benjamin Edwards for their generous contributions of infrastructure, technical support, and feedback. We also wish to thank Aleksander Madry, Shafi Goldwasser, Ed Felten, David Bieber, Karolina Dziugaite, Daniel Weitzner, and R. David Edelman for support, feedback, and helpful discussions over the course of this project. This work was supported in part by the Office of Naval Research (ONR N00014-17-1-2699).

B ITERATIVE PRUNING STRATEGIES

In this Appendix, we examine two different ways of structuring the iterative pruning strategy that we use throughout the main body of the paper to find winning tickets.

Strategy 1: Iterative pruning with resetting.

1. Randomly initialize a neural network <> where <> and <> is a mask.
2. Train the network for j iterations, reaching parameters <>.
3. Prune s% of the parameters, creating an updated mask m0 where <>.
4. Reset the weights of the remaining portion of the network to their values in <>. That is, let <>.
5. Let <> and repeat steps 2 through 4 until a sufficiently pruned network has been obtained.

Strategy 2: Iterative pruning with continued training.

1. Randomly initialize a neural network <> where <> and <> is a mask.
2. Train the network for j iterations.
3. Prune s% of the parameters, creating an updated mask m0 where <>.
4. Let <> and repeat steps 2 and 3 until a sufficiently pruned network has been obtained.
5. Reset the weights of the remaining portion of the network to their values in <>. That is, let <>.

The difference between these two strategies is that, after each round of pruning, Strategy 2 retrains using the already-trained weights, whereas Strategy 1 resets the network weights back to their initial values before retraining. In both cases, after the network has been sufficiently pruned, its weights are reset back to the original initializations. (Both strategies are sketched in code just before Figure 9.)

Figures 9 and 10 compare the two strategies on the Lenet and Conv-2/4/6 architectures with the hyperparameters we select in Appendices G and H. In all cases, Strategy 1 maintains higher validation accuracy and faster early-stopping times to smaller network sizes.

C EARLY STOPPING CRITERION

Throughout this paper, we are interested in measuring the speed at which networks learn. As a proxy for this quantity, we measure the iteration at which an early-stopping criterion would end training. The specific criterion we employ is the iteration of minimum validation loss. In this Subsection, we further explain that criterion.

Validation and test loss follow a pattern where they decrease early in the training process, reach a minimum, and then begin to increase as the model overfits to the training data. Figure 11 shows an example of the validation loss as training progresses; these graphs use Lenet, iterative pruning, and Adam with a learning rate of 0.0012 (the learning rate we will select in the following subsection). This Figure shows the validation loss corresponding to the test accuracies in Figure 3.
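To make the two pruning procedures from Appendix B concrete, here is a minimal sketch of their control flow in Python. The train and prune helpers are hypothetical stand-ins (the real training loop, pruning rate, and number of rounds are described in Appendices G and H), so this is an illustration rather than the paper's implementation.

import numpy as np

def strategy_1_resetting(init_weights, train, prune, s, rounds, j):
    """Iterative pruning with resetting (Strategy 1)."""
    mask = np.ones_like(init_weights)
    for _ in range(rounds):
        trained = train(init_weights * mask, mask, j)
        mask = prune(trained, mask, s)           # prune based on trained weights
        # The next round restarts from the ORIGINAL initialization under the new mask.
    return init_weights * mask, mask

def strategy_2_continued(init_weights, train, prune, s, rounds, j):
    """Iterative pruning with continued training (Strategy 2)."""
    mask = np.ones_like(init_weights)
    weights = init_weights
    for _ in range(rounds):
        weights = train(weights * mask, mask, j)  # keep training the same weights
        mask = prune(weights, mask, s)
    # Only at the end are the surviving weights reset to their initial values.
    return init_weights * mask, mask

# Tiny runnable demo with stand-in helpers (not the paper's training code).
rng = np.random.default_rng(0)
w0 = rng.normal(size=100)
fake_train = lambda w, m, j: w + 0.1 * rng.normal(size=w.shape) * m

def magnitude_prune(w, m, s):
    alive = np.abs(w[m.astype(bool)])
    thresh = np.quantile(alive, s)
    return m * (np.abs(w) >= thresh)

final_w, final_m = strategy_1_resetting(w0, fake_train, magnitude_prune, s=0.2, rounds=5, j=100)

The only difference between the two functions is which weights each round of training starts from; both apply the final mask to the original initialization at the end.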
<
>

Figure 9. The early-stopping iteration and accuracy at early-stopping of the iterative lottery ticket experiment on the Lenet architecture when iteratively pruned using the resetting and continued training strategies.

<
> + + Figure 10. The early-stopping iteration and accuracy at early-stopping of the iterative lottery ticket + experiment on the Conv-2, Conv-4, and Conv-6 architectures when iteratively pruned using the + resetting and continued training strategies. + + <
> + + Figure 11. The validation loss data corresponding to Figure 3, i.e., the validation loss as training + progresses for several different levels of pruning in the iterative pruning experiment. Each line is + the average of five training runs at the same level of iterative pruning; the labels are the percentage + of weights from the original network that remain after pruning. Each network was trained with + Adam at a learning rate of 0.0012. The left graph shows winning tickets that learn increasingly faster + than the original network and reach lower loss. The middle graph shows winning tickets that learn + increasingly slower after the fastest early-stopping time has been reached. The right graph contrasts + the loss of winning tickets to the loss of randomly reinitialized networks. + + <
> 

Figure 12. Figure 4 augmented with a graph of the training accuracy at the end of 50,000 iterations.


In all cases, validation loss initially drops, after which it forms a clear bottom and then begins increasing again. Our early-stopping criterion identifies this bottom. We consider networks that reach this moment sooner to have learned "faster." In support of this notion, the ordering in which each experiment meets our early-stopping criterion in Figure 3 is the same order in which each experiment reaches a particular test accuracy threshold in Figure 3.

Throughout this paper, in order to contextualize this learning speed, we also present the test accuracy of the network at the iteration of minimum validation loss. In the main body of the paper, we find that winning tickets both arrive at early-stopping sooner and reach higher test accuracy at this point.


D TRAINING ACCURACY FOR LOTTERY TICKET EXPERIMENTS

This Appendix accompanies Figure 4 (the accuracy and early-stopping iterations of Lenet on MNIST from Section 2) and Figure 5 (the accuracy and early-stopping iterations of Conv-2, Conv-4, and Conv-6 in Section 3) in the main body of the paper. Those figures show the iteration of early-stopping, the test accuracy at early-stopping, the training accuracy at early-stopping, and the test accuracy at the end of the training process. However, we did not have space to include a graph of the training accuracy at the end of the training process, which we assert in the main body of the paper to be 100% for all but the most heavily pruned networks. In this Appendix, we include those additional graphs in Figure 12 (corresponding to Figure 4) and Figure 13 (corresponding to Figure 5). As we describe in the main body of the paper, training accuracy reaches 100% in all cases for all but the most heavily pruned networks. However, training accuracy remains at 100% longer for winning tickets than for randomly reinitialized networks.


E COMPARING RANDOM REINITIALIZATION AND RANDOM SPARSITY

In this Appendix, we aim to understand the relative performance of randomly reinitialized winning tickets and randomly sparse networks. We compare three kinds of subnetworks:

1. Networks found via iterative pruning with the original initializations (blue in Figure 14).
2. Networks found via iterative pruning that are randomly reinitialized (orange in Figure 14).
3. Random sparse subnetworks with the same number of parameters as those found via iterative pruning (green in Figure 14); a sketch of how this baseline is constructed appears below.
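The random sparsity baseline (item 3 above) keeps the same number of surviving weights per layer as the pruned network but places them at random. A minimal sketch follows; the mask shapes and densities are illustrative assumptions, not values from the paper.

import numpy as np

def random_mask_like(pruned_masks, seed=0):
    """Build a random sparse mask with the same number of surviving weights
    per layer as an iteratively pruned mask."""
    rng = np.random.default_rng(seed)
    random_masks = []
    for mask in pruned_masks:
        k = int(mask.sum())                      # surviving weights in this layer
        flat = np.zeros(mask.size)
        flat[rng.choice(mask.size, size=k, replace=False)] = 1.0
        random_masks.append(flat.reshape(mask.shape))
    return random_masks

# Example: a hypothetical winning-ticket mask for a 784x300 layer at ~20% density.
ticket_mask = (np.random.default_rng(1).random((784, 300)) < 0.2).astype(float)
baseline_mask = random_mask_like([ticket_mask])[0]
assert baseline_mask.sum() == ticket_mask.sum()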
<
>

Figure 13. Figure 5 augmented with a graph of the training accuracy at the end of the training process.


Figure 14 shows this comparison for all of the major experiments in this paper. For the fully-connected Lenet architecture for MNIST, we find that the randomly reinitialized networks outperform random sparsity. However, for all of the other, convolutional networks studied in this paper, there is no significant difference in performance between the two. We hypothesize that the fully-connected network for MNIST sees these benefits because only certain parts of the MNIST images contain useful information for classification, meaning connections in some parts of the network will be more valuable than others. This is less true with convolutions, which are not constrained to any one part of the input image.


F EXAMINING WINNING TICKETS

In this Appendix, we examine the structure of winning tickets to gain insight into why winning tickets are able to learn effectively even when so heavily pruned. Throughout this Appendix, we study the winning tickets from the Lenet architecture trained on MNIST. Unless otherwise stated, we use the same hyperparameters as in Section 2: glorot initialization and adam optimization.

F.1 WINNING TICKET INITIALIZATION (ADAM)

Figure 15 shows the distributions of winning ticket initializations for four different levels of Pm. To clarify, these are the distributions of the initial weights of the connections that have survived the pruning process. The blue, orange, and green lines show the distribution of weights for the first hidden layer, second hidden layer, and output layer, respectively. The weights are collected from five different trials of the lottery ticket experiment, but the distributions for each individual trial closely mirror those aggregated from across all of the trials. The histograms have been normalized so that the area under each curve is 1.

The left-most graph in Figure 15 shows the initialization distributions for the unpruned networks. We use glorot initialization, so each of the layers has a different standard deviation. As the network is pruned, the first hidden layer maintains its distribution. However, the second hidden layer and the output layer become increasingly bimodal, with peaks on either side of 0. Interestingly, the peaks are asymmetric: the second hidden layer has more positive initializations remaining than negative initializations, and the reverse is true for the output layer.

The connections in the second hidden layer and output layer that survive the pruning process tend to have higher-magnitude initializations. Since we find winning tickets by pruning the connections with the lowest magnitudes in each layer at the end, the connections with the lowest-magnitude initializations must still have the lowest-magnitude weights at the end of training. A different trend holds for the input layer: it maintains its distribution, meaning a connection's initialization has less relation to its final weight.

F.2 WINNING TICKET INITIALIZATIONS (SGD)

We also consider the winning tickets obtained when training the network with SGD at learning rate 0.8 (selected as described in Appendix G). The bimodal distributions from Figure 15 are present across all layers (see Figure 16). The connections with the highest-magnitude initializations are more likely to survive the pruning process, meaning winning ticket initializations have a bimodal distribution with peaks on opposite sides of 0. Just as with the adam-optimized winning tickets, these peaks are of different sizes, with the first hidden layer favoring negative initializations and the second hidden layer and output layer favoring positive initializations. As with the adam results, we confirm that each individual trial evidences the same asymmetry as the aggregate graphs in Figure 16.
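The distributions discussed in F.1 and F.2 are simple to compute from a winning ticket's mask and the original initialization. A minimal sketch follows; the layer shapes and densities below are illustrative stand-ins, not the paper's data.

import numpy as np

def surviving_init_histograms(init_weights, masks, bins=50):
    """Normalized histograms (area 1) of the initial values of the connections
    that survive pruning, computed per layer as in Figures 15 and 16."""
    hists = []
    for w0, m in zip(init_weights, masks):
        surviving = w0[m.astype(bool)]
        density, edges = np.histogram(surviving, bins=bins, density=True)
        hists.append((edges, density))
    return hists

# Illustrative use: three Lenet-like layers kept at 20% / 20% / 50% density.
rng = np.random.default_rng(0)
shapes, keep = [(784, 300), (300, 100), (100, 10)], [0.2, 0.2, 0.5]
init_weights = [rng.normal(0, 0.1, s) for s in shapes]
masks = [(rng.random(s) < k).astype(float) for s, k in zip(shapes, keep)]
layer_hists = surviving_init_histograms(init_weights, masks)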
F.3 REINITIALIZING FROM WINNING TICKET INITIALIZATIONS

Considering that the initialization distributions of winning tickets Dm are so different from the Gaussian distribution D used to initialize the unpruned network, it is natural to ask whether randomly reinitializing winning tickets from Dm rather than D will improve winning ticket performance. We do not find this to be the case. Figure 17 shows the performance of winning tickets whose initializations are randomly sampled from the distribution of initializations contained in the winning tickets for adam.

<
> + + Figure 14. The test accuracy at the final iteration for each of the networks studied in this paper. + + <
> 

Figure 15. The distribution of initializations in winning tickets pruned to the levels specified in the titles of each plot. The blue, orange, and green lines show the distributions for the first hidden layer, second hidden layer, and output layer of the Lenet architecture for MNIST when trained with the adam optimizer and the hyperparameters used in Section 2. The distributions have been normalized so that the area under each curve is 1.

<
> 

Figure 16. Same as Figure 15 where the network is trained with SGD at rate 0.8.


More concretely, let <> be the set of initializations found in the winning ticket with mask m. We sample a new set of parameters <> and train the network <>. We perform this sampling on a per-layer basis. The results of this experiment are in Figure 17. Winning tickets reinitialized from Dm perform little better than when randomly reinitialized from D. We attempted the same experiment with the SGD-trained winning tickets and found similar results.

F.4 PRUNING AT ITERATION 0

One other way of interpreting the graphs of winning ticket initialization distributions is as follows: weights that begin small stay small, get pruned, and never become part of the winning ticket. (The only exception to this characterization is the first hidden layer for the adam-trained winning tickets.) If this is the case, then perhaps low-magnitude weights were never important to the network and can be pruned from the very beginning. Figure 18 shows the result of attempting this pruning strategy. Winning tickets selected in this fashion perform even worse than when they are found by iterative pruning and randomly reinitialized. We attempted the same experiment with the SGD-trained winning tickets and found similar results.
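A minimal sketch of the F.3 resampling experiment follows, assuming per-layer masks and initial weight tensors; the shapes and helper names are illustrative, not the paper's code.

import numpy as np

def reinitialize_from_ticket(init_weights, masks, seed=0):
    """Per layer, resample the surviving weights from the empirical distribution
    of that layer's winning-ticket initializations (the F.3 experiment)."""
    rng = np.random.default_rng(seed)
    new_weights = []
    for w0, m in zip(init_weights, masks):
        pool = w0[m.astype(bool)]                # empirical Dm for this layer
        resampled = np.zeros_like(w0)
        idx = m.astype(bool)
        resampled[idx] = rng.choice(pool, size=idx.sum(), replace=True)
        new_weights.append(resampled)
    return new_weights

# Example: one 784x300 hidden layer kept at 20% density.
rng = np.random.default_rng(1)
w0 = [rng.normal(0, 0.1, (784, 300))]
m = [(rng.random((784, 300)) < 0.2).astype(float)]
resampled = reinitialize_from_ticket(w0, m)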
<
>

Figure 17. The performance of the winning tickets of the Lenet architecture for MNIST when the layers are randomly reinitialized from the distribution of initializations contained in the winning ticket of the corresponding size.

<
> + + Figure 18. The performance of the winning tickets of the Lenet architecture for MNIST when + magnitude pruning is performed before the network is ever trained. The network is subsequently + trained with adam. + + <
> 

Figure 19. Between the first and last training iteration of the unpruned network, the magnitude by which weights in the network change. The blue line shows the distribution of magnitudes for weights that are not in the eventual winning ticket; the orange line shows the distribution of magnitudes for weights that are in the eventual winning ticket.


F.5 COMPARING INITIAL AND FINAL WEIGHTS IN WINNING TICKETS

In this subsection, we consider winning tickets in the context of the larger optimization process. To do so, we examine the initial and final weights of the unpruned network from which a winning ticket derives to determine whether weights that will eventually comprise a winning ticket exhibit properties that distinguish them from the rest of the network.

We consider the magnitude of the difference between initial and final weights. One possible rationale for the success of winning tickets is that they already happen to be close to the optimum that gradient descent eventually finds, meaning that winning ticket weights should change by a smaller amount than the rest of the network. Another possible rationale is that winning tickets are well placed in the optimization landscape for gradient descent to optimize productively, meaning that winning ticket weights should change by a larger amount than the rest of the network. Figure 19 shows that winning ticket weights tend to change by a larger amount than weights in the rest of the network, evidence that does not support the rationale that winning tickets are already close to the optimum.

It is notable that such a distinction exists between the two distributions. One possible explanation for this distinction is that the notion of a winning ticket may indeed be a natural part of neural network optimization. Another is that magnitude-pruning biases the winning tickets we find toward those containing weights that change in the direction of higher magnitude. Regardless, it offers hope that winning tickets may be discernible earlier in the training process (or after a single training run), meaning that there may be more efficient methods for finding winning tickets than iterative pruning.

Figure 20 shows the directions of these changes. It plots the difference between the magnitude of the final weight and the magnitude of the initial weight, i.e., whether the weight moved toward or away from 0.
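The quantities plotted in Figures 19 and 20 can be computed as in the following sketch; the weight arrays and the stand-in "ticket" mask are illustrative.

import numpy as np

def weight_movement(w_init, w_final, ticket_mask):
    """Split |w_final - w_init| (Figure 19) and |w_final| - |w_init| (Figure 20)
    by whether a weight ends up in the winning ticket."""
    in_ticket = ticket_mask.astype(bool)
    change = np.abs(w_final - w_init)
    away_from_zero = np.abs(w_final) - np.abs(w_init)
    return {
        "ticket_change": change[in_ticket],
        "other_change": change[~in_ticket],
        "ticket_away_from_zero": away_from_zero[in_ticket],
        "other_away_from_zero": away_from_zero[~in_ticket],
    }

# Example with random stand-in weights and a magnitude-based ticket mask.
rng = np.random.default_rng(0)
w_init, w_final = rng.normal(0, 0.1, 1000), rng.normal(0, 0.3, 1000)
mask = (np.abs(w_final) > np.quantile(np.abs(w_final), 0.8)).astype(float)
stats = weight_movement(w_init, w_final, mask)
print(stats["ticket_change"].mean(), stats["other_change"].mean())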
<
>

Figure 20. Between the first and last training iteration of the unpruned network, the magnitude by which weights move away from 0. The blue line shows the distribution of magnitudes for weights that are not in the eventual winning ticket; the orange line shows the distribution of magnitudes for weights that are in the eventual winning ticket.

<
> 

Figure 21. The fraction of incoming connections that survive the pruning process for each node in each layer of the Lenet architecture for MNIST as trained with adam.


In general, winning ticket weights are more likely to increase in magnitude (that is, move away from 0) than are weights that do not participate in the eventual winning ticket.

F.6 WINNING TICKET CONNECTIVITY

In this Subsection, we study the connectivity of winning tickets. Do some hidden units retain a large number of incoming connections while others fade away, or does the network retain relatively even sparsity among all units as it is pruned? We find the latter to be the case when examining the incoming connectivity of network units: for both adam and SGD, each unit retains a number of incoming connections approximately in proportion to the amount by which the overall layer has been pruned. Figures 21 and 22 show the fraction of incoming connections that survive the pruning process for each node in each layer. Recall that we prune the output layer at half the rate of the rest of the network, which explains why it has more connectivity than the other layers of the network.
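The per-unit incoming connectivity plotted in Figures 21 and 22 reduces to a column mean of the layer's mask. A small sketch follows, assuming an (inputs, units) mask layout; the shape and density are illustrative.

import numpy as np

def incoming_connectivity(mask):
    """Fraction of incoming connections that survive pruning for each unit of a
    fully-connected layer, given a 0/1 mask of shape (inputs, units)."""
    return mask.sum(axis=0) / mask.shape[0]

# Example: a 784x300 layer pruned to roughly 20% density.
rng = np.random.default_rng(0)
mask = (rng.random((784, 300)) < 0.2).astype(float)
per_unit = incoming_connectivity(mask)
print(per_unit.mean(), per_unit.min(), per_unit.max())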
<
>

Figure 22. Same as Figure 21 where the network is trained with SGD at rate 0.8.

<
> + + Figure 23. The fraction of outgoing connections that survive the pruning process for each node in + each layer of the Lenet architecture for MNIST as trained with adam. The blue, orange, and green + lines are the outgoing connections from the input layer, first hidden layer, and second hidden layer, + respectively. + + <
> 

Figure 24. Same as Figure 23 where the network is trained with SGD at rate 0.8.


However, this is not the case for the outgoing connections. To the contrary, for the adam-trained networks, certain units retain far more outgoing connections than others (Figure 23). The distributions are far less smooth than those for the incoming connections, suggesting that certain features are far more useful to the network than others. This is not unexpected for a fully-connected network on a task like MNIST, particularly for the input layer. MNIST images contain centered digits, so the pixels around the edges are not likely to be informative for the network. Indeed, the input layer has two peaks, one larger peak for input units with a high number of outgoing connections and one smaller peak for input units with a low number of outgoing connections. Interestingly, the adam-trained winning tickets develop a much more uneven distribution of outgoing connectivity for the input layer than does the SGD-trained network (Figure 24).


F.7 ADDING NOISE TO WINNING TICKETS

In this Subsection, we explore the extent to which winning tickets are robust to Gaussian noise added to their initializations. In the main body of the paper, we find that randomly reinitializing a winning ticket substantially slows its learning and reduces its eventual test accuracy. In this Subsection, we study a less extreme way of perturbing a winning ticket. Figure 25 shows the effect of adding Gaussian noise to the winning ticket initializations. The standard deviation of the noise distribution for each layer is a multiple of the standard deviation of the layer's initialization; Figure 25 shows noise distributions with standard deviation 0.5%, 1%, 2%, and 3%. Adding Gaussian noise reduces the test accuracy of a winning ticket and slows its ability to learn, again demonstrating the importance of the original initialization. As more noise is added, accuracy decreases. However, winning tickets are surprisingly robust to noise. Adding noise of 0.5% barely changes winning ticket accuracy. Even after adding noise of 3%, the winning tickets continue to outperform the random reinitialization experiment.
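A minimal sketch of the F.7 noise experiment follows, assuming per-layer initial weights and masks. Scaling the noise by each layer's own standard deviation follows the description above; the tensors themselves are illustrative stand-ins.

import numpy as np

def perturb_ticket(init_weights, masks, multiple, seed=0):
    """Add Gaussian noise to a winning ticket's initial weights; per layer, the
    noise std is `multiple` times the std of that layer's initialization."""
    rng = np.random.default_rng(seed)
    noisy = []
    for w0, m in zip(init_weights, masks):
        noise = rng.normal(0.0, multiple * w0.std(), size=w0.shape)
        noisy.append((w0 + noise) * m)            # only surviving weights matter
    return noisy

# Example: perturb a 20%-density layer at 0.5x, 1x, 2x, and 3x the layer std.
rng = np.random.default_rng(1)
w0 = [rng.normal(0, 0.1, (300, 100))]
m = [(rng.random((300, 100)) < 0.2).astype(float)]
perturbed = {k: perturb_ticket(w0, m, multiple=k) for k in (0.5, 1, 2, 3)}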
<
>

Figure 25. The performance of the winning tickets of the Lenet architecture for MNIST when Gaussian noise is added to the initializations. The standard deviations of the noise distributions for each layer are a multiple of the standard deviations of the initialization distributions; in this Figure, we consider multiples 0.5, 1, 2, and 3.


G HYPERPARAMETER EXPLORATION FOR FULLY-CONNECTED NETWORKS

This Appendix accompanies Section 2 of the main paper. It explores the space of hyperparameters for the Lenet architecture evaluated in Section 2 with two purposes in mind:

1. To explain the hyperparameters selected in the main body of the paper.
2. To evaluate the extent to which the lottery ticket experiment patterns extend to other choices of hyperparameters.

G.1 EXPERIMENTAL METHODOLOGY

This Section considers the fully-connected Lenet architecture (LeCun et al., 1998), which comprises two fully-connected hidden layers and a ten-unit output layer, on the MNIST dataset. Unless otherwise stated, the hidden layers have 300 and 100 units each.

The MNIST dataset consists of 60,000 training examples and 10,000 test examples. We randomly sampled a 5,000-example validation set from the training set and used the remaining 55,000 training examples as our training set for the rest of the paper (including Section 2). The hyperparameter selection experiments throughout this Appendix are evaluated using the validation set for determining both the iteration of early-stopping and the accuracy at early-stopping; the networks in the main body of this paper (which make use of these hyperparameters) have their accuracy evaluated on the test set. The training set is presented to the network in mini-batches of 60 examples; at each epoch, the entire training set is shuffled.

Unless otherwise noted, each line in each graph comprises data from three separate experiments. The line itself traces the average performance of the experiments and the error bars indicate the minimum and maximum performance of any one experiment.

Throughout this Appendix, we perform the lottery ticket experiment iteratively with a pruning rate of 20% per iteration (10% for the output layer); we justify the choice of this pruning rate later in this Appendix. Each layer of the network is pruned independently. On each iteration of the lottery ticket experiment, the network is trained for 50,000 training iterations regardless of when early-stopping occurs; in other words, no validation or test data is taken into account during the training process, and early-stopping times are determined retroactively by examining validation performance. We evaluate validation and test performance every 100 iterations.

For the main body of the paper, we opt to use the Adam optimizer (Kingma & Ba, 2014) and Gaussian Glorot initialization (Glorot & Bengio, 2010). Although we can achieve more impressive results on the lottery ticket experiment with other hyperparameters, we intend these choices to be as generic as possible in an effort to minimize the extent to which our main results depend on hand-chosen hyperparameters. In this Appendix, we select the learning rate for Adam that we use in the main body of the paper.

In addition, we consider a wide range of other hyperparameters, including other optimization algorithms (SGD with and without momentum), initialization strategies (Gaussian distributions with various standard deviations), network sizes (larger and smaller hidden layers), and pruning strategies (faster and slower pruning rates).

<
> + + Figure 26. The early-stopping iteration and validation accuracy at that iteration of the iterative lottery + ticket experiment on the Lenet architecture trained with MNIST using the Adam optimizer at various + learning rates. Each line represents a different learning rate. + + <
> 

Figure 27. The early-stopping iteration and validation accuracy at that iteration of the iterative lottery ticket experiment on the Lenet architecture trained with MNIST using stochastic gradient descent at various learning rates.


In each experiment, we vary the chosen hyperparameter while keeping all others at their default values (Adam with the chosen learning rate, Gaussian Glorot initialization, hidden layers with 300 and 100 units). The data presented in this appendix was collected by training variations of the Lenet architecture more than 3,000 times.

G.2 LEARNING RATE

In this Subsection, we perform the lottery ticket experiment on the Lenet architecture as optimized with Adam, SGD, and SGD with momentum at various learning rates.

Here, we select the learning rate that we use for Adam in the main body of the paper. Our criteria for selecting the learning rate are as follows:

1. On the unpruned network, it should minimize training iterations necessary to reach early-stopping and maximize validation accuracy at that iteration. That is, it should be a reasonable hyperparameter for optimizing the unpruned network even if we are not running the lottery ticket experiment.
2. When running the iterative lottery ticket experiment, it should make it possible to match the early-stopping iteration and accuracy of the original network with as few parameters as possible.
3. Of those options that meet (1) and (2), it should be on the conservative (slow) side so that it is more likely to productively optimize heavily pruned networks under a variety of conditions with a variety of hyperparameters.

<
> 

Figure 28. The early-stopping iteration and validation accuracy at that iteration of the iterative lottery ticket experiment on the Lenet architecture trained with MNIST using stochastic gradient descent with momentum (0.9) at various learning rates.


Figure 26 shows the early-stopping iteration and validation accuracy at that iteration of performing the iterative lottery ticket experiment with the Lenet architecture optimized with Adam at various learning rates. According to the graph on the right of Figure 26, several learning rates between 0.0002 and 0.002 achieve similar levels of validation accuracy on the original network and maintain that performance to similar levels as the network is pruned. Of those learning rates, 0.0012 and 0.002 produce the fastest early-stopping times and maintain them to the smallest network sizes. We choose 0.0012 due to its higher validation accuracy on the unpruned network and in consideration of criterion (3) above.

We note that, across all of these learning rates, the lottery ticket pattern (in which learning becomes faster and validation accuracy increases with iterative pruning) remains present. Even those learning rates that did not satisfy the early-stopping criterion within 50,000 iterations (2.5e-05 and 0.0064) still showed accuracy improvements with pruning.

G.3 OTHER OPTIMIZATION ALGORITHMS

G.3.1 SGD

Here, we explore the behavior of the lottery ticket experiment when the network is optimized with stochastic gradient descent (SGD) at various learning rates. The results of doing so appear in Figure 27. The lottery ticket pattern appears across all learning rates, including those that fail to satisfy the early-stopping criterion within 50,000 iterations. SGD learning rates 0.4 and 0.8 reach early-stopping in a similar number of iterations as the best Adam learning rates (0.0012 and 0.002) but maintain this performance when the network has been pruned further (to less than 1% of its original size for SGD vs. about 3.6% of the original size for Adam). Likewise, on pruned networks, these SGD learning rates achieve equivalent accuracy to the best Adam learning rates, and they maintain that high accuracy when the network is pruned as much as the Adam learning rates.

G.3.2 MOMENTUM

Here, we explore the behavior of the lottery ticket experiment when the network is optimized with SGD with momentum (0.9) at various learning rates. The results of doing so appear in Figure 28. Once again, the lottery ticket pattern appears across all learning rates, with learning rates between 0.025 and 0.1 maintaining high validation accuracy and faster learning for the longest number of pruning iterations. Learning rate 0.025 achieves the highest validation accuracy on the unpruned network; however, its validation accuracy never increases as it is pruned, instead decreasing gradually, and higher learning rates reach early-stopping faster.

G.4 ITERATIVE PRUNING RATE

When running the iterative lottery ticket experiment on Lenet, we prune each layer of the network separately at a particular rate.

<
> 

Figure 29. The early-stopping iteration and validation accuracy at that iteration of the iterative lottery ticket experiment when pruned at different rates. Each line represents a different pruning rate—the percentage of lowest-magnitude weights that are pruned from each layer after each training iteration.

<
> 

Figure 30. The early-stopping iteration and validation accuracy at that iteration of the iterative lottery ticket experiment initialized with Gaussian distributions with various standard deviations. Each line is a different standard deviation for a Gaussian distribution centered at 0.


That is, after training the network, we prune k% of the weights in each layer (k/2% of the weights in the output layer) before resetting the weights to their original initializations and training again. In the main body of the paper, we find that iterative pruning finds smaller winning tickets than one-shot pruning, indicating that pruning too much of the network at once diminishes performance. Here, we explore different values of k.

Figure 29 shows the effect of the amount of the network pruned on each pruning iteration on early-stopping time and validation accuracy. There is a tangible difference in learning speed and validation accuracy at early-stopping between the lowest pruning rates (0.1 and 0.2) and higher pruning rates (0.4 and above). The lowest pruning rates reach higher validation accuracy and maintain that validation accuracy to smaller network sizes; they also maintain fast early-stopping times to smaller network sizes. For the experiments throughout the main body of the paper and this Appendix, we use a pruning rate of 0.2, which maintains much of the accuracy and learning speed of 0.1 while reducing the number of training iterations necessary to get to smaller network sizes.

In all of the Lenet experiments, we prune the output layer at half the rate of the rest of the network. Since the output layer is so small (1,000 weights out of 266,000 for the overall Lenet architecture), we found that pruning it reaches a point of diminishing returns much earlier than the other layers.

G.5 INITIALIZATION DISTRIBUTION

To this point, we have considered only a Gaussian Glorot (Glorot & Bengio, 2010) initialization scheme for the network. Figure 30 shows the lottery ticket experiment while initializing the Lenet architecture from Gaussian distributions with a variety of standard deviations. The networks were optimized with Adam at the learning rate chosen earlier. The lottery ticket pattern continues to appear across all standard deviations. When initialized from a Gaussian distribution with standard deviation 0.1, the Lenet architecture maintained high validation accuracy and low early-stopping times for the longest, approximately matching the performance of the Glorot-initialized network.

G.6 NETWORK SIZE

<
> 

Figure 31. The early-stopping iteration and validation accuracy at that iteration of the iterative lottery ticket experiment on the Lenet architecture with various layer sizes. The label for each line is the size of the first and second hidden layers of the network. All networks had Gaussian Glorot initialization and were optimized with Adam (learning rate 0.0012). Note that the x-axis of this plot charts the number of weights remaining, while all other graphs in this section have charted the percent of weights remaining.


Throughout this section, we have considered the Lenet architecture with 300 units in the first hidden layer and 100 units in the second hidden layer. Figure 31 shows the early-stopping iterations and validation accuracy at that iteration of the Lenet architecture with several other layer sizes. All networks we tested maintain the 3:1 ratio between units in the first hidden layer and units in the second hidden layer.

The lottery ticket hypothesis naturally invites a collection of questions related to network size. Generalizing, those questions tend to take the following form: according to the lottery ticket hypothesis, do larger networks, which contain more subnetworks, find "better" winning tickets? In line with the generality of this question, there are several different answers.

If we evaluate a winning ticket by the accuracy it achieves, then larger networks do find better winning tickets. The right graph in Figure 31 shows that, for any particular number of weights (that is, any particular point on the x-axis), winning tickets derived from initially larger networks reach higher accuracy. Put another way, in terms of accuracy, the lines are approximately arranged from bottom to top in increasing order of network size. It is possible that, since larger networks have more subnetworks, gradient descent found a better winning ticket. Alternatively, the initially larger networks have more units even when pruned to the same number of weights as smaller networks, meaning they are able to contain sparse subnetwork configurations that cannot be expressed by initially smaller networks.

If we evaluate a winning ticket by the time necessary for it to reach early-stopping, then larger networks have less of an advantage. The left graph in Figure 31 shows that, in general, early-stopping iterations do not vary greatly between networks of different initial sizes that have been pruned to the same number of weights. Upon exceedingly close inspection, winning tickets derived from initially larger networks tend to learn marginally faster than winning tickets derived from initially smaller networks, but these differences are slight.

If we evaluate a winning ticket by the size at which it returns to the same accuracy as the original network, then larger networks do not have an advantage. Regardless of the initial network size, the right graph in Figure 31 shows that winning tickets return to the accuracy of the original network when they are pruned to between about 9,000 and 15,000 weights.

H HYPERPARAMETER EXPLORATION FOR CONVOLUTIONAL NETWORKS

This Appendix accompanies Section 3 of the main paper. It explores the space of optimization algorithms and hyperparameters for the Conv-2, Conv-4, and Conv-6 architectures evaluated in Section 3 with the same two purposes as Appendix G:
explaining the hyperparameters used in the main body of the paper and evaluating the lottery ticket experiment on other choices of hyperparameters.

H.1 EXPERIMENTAL METHODOLOGY

The Conv-2, Conv-4, and Conv-6 architectures are variants of the VGG (Simonyan & Zisserman, 2014) network architecture scaled down for the CIFAR10 (Krizhevsky & Hinton, 2009) dataset. Like VGG, the networks consist of a series of modules. Each module has two layers of 3x3 convolutional filters followed by a maxpool layer with stride 2. After all of the modules come two fully-connected layers of size 256 followed by an output layer of size 10; in VGG, the fully-connected layers are of size 4096 and the output layer is of size 1000. Like VGG, the first module has 64 convolutions in each layer, the second has 128, the third has 256, etc. The Conv-2, Conv-4, and Conv-6 architectures have 1, 2, and 3 modules, respectively.

The CIFAR10 dataset consists of 50,000 32x32 color (three-channel) training examples and 10,000 test examples. We randomly sampled a 5,000-example validation set from the training set and used the remaining 45,000 training examples as our training set for the rest of the paper. The hyperparameter selection experiments throughout this Appendix are evaluated on the validation set, and the examples in the main body of this paper (which make use of these hyperparameters) are evaluated on the test set. The training set is presented to the network in mini-batches of 60 examples; at each epoch, the entire training set is shuffled.

The Conv-2, Conv-4, and Conv-6 networks are initialized with Gaussian Glorot initialization (Glorot & Bengio, 2010) and are trained for the number of iterations specified in Figure 2. The number of training iterations was selected such that heavily-pruned networks could still train in the time provided. For dropout experiments, the number of training iterations is tripled to provide enough time for the dropout-regularized networks to train. We optimize these networks with Adam, and select the learning rate for each network in this Appendix.

As with the MNIST experiments, validation and test performance is only considered retroactively and has no effect on the progression of the lottery ticket experiments. We measure validation and test loss and accuracy every 100 training iterations.

Each line in each graph of this section represents the average of three separate experiments, with error bars indicating the minimum and maximum value that any experiment took on at that point. (Experiments in the main body of the paper are conducted five times.)

We allow convolutional layers and fully-connected layers to be pruned at different rates; we select those rates for each network in this Appendix. The output layer is pruned at half of the rate of the fully-connected layers for the reasons described in Appendix G.

H.2 LEARNING RATE

In this Subsection, we perform the lottery ticket experiment on the Conv-2, Conv-4, and Conv-6 architectures as optimized with Adam at various learning rates.
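The Conv-2/4/6 family described in H.1 can be sketched as follows, using PyTorch for brevity. Padding of 1 on the 3x3 convolutions is an assumption made so that each maxpool exactly halves the 32x32 input; the builder is an illustration, not the paper's implementation.

import torch.nn as nn

def conv_net(num_modules):
    """Conv-2/4/6-style network for 32x32 CIFAR10 inputs: each module is two
    3x3 conv layers followed by a stride-2 maxpool; then two 256-unit
    fully-connected layers and a 10-way output layer."""
    layers, in_ch = [], 3
    for i in range(num_modules):
        out_ch = 64 * (2 ** i)                   # 64, 128, 256 filters per module
        layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
                   nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
                   nn.MaxPool2d(2)]
        in_ch = out_ch
    spatial = 32 // (2 ** num_modules)           # 16, 8, or 4
    layers += [nn.Flatten(),
               nn.Linear(in_ch * spatial * spatial, 256), nn.ReLU(),
               nn.Linear(256, 256), nn.ReLU(),
               nn.Linear(256, 10)]
    return nn.Sequential(*layers)

conv2, conv4, conv6 = conv_net(1), conv_net(2), conv_net(3)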
> + + Figure 32. The early-stopping iteration and validation accuracy at that iteration of the iterative lottery + ticket experiment on the Conv-2 (top), Conv-4 (middle), and Conv-6 (bottom) architectures trained + using the Adam optimizer at various learning rates. Each line represents a different learning rate. + + + Here, we select the learning rate that we use for Adam in the main body of the paper. Our criteria + for selecting the learning rate are the same as in Appendix G. minimizing training iterations and + maximizing accuracy at early-stopping, finding winning tickets containing as few parameters as + possible, and remaining conservative enough to apply to a range of other experiments. + Figure 32 shows the results of performing the iterative lottery ticket experiment on the Conv-2 (top), + Conv-4 (middle), and Conv-6 (bottom) architectures. Since we have not yet selected the pruning rates + for each network, we temporarily pruned fully-connected layers at 20% per iteration, convolutional + layers at 10% per iteration, and the output layer at 10% per iteration; we explore this part of the + hyperparameter space in a later subsection. + For Conv-2, we select a learning rate of 0.0002, which has the highest initial validation accuracy, + maintains both high validation accuracy and low early-stopping times for the among the longest, + and reaches the fastest early-stopping times. This learning rate also leads to a 3.3 percentage point + improvement in validation accuracy when the network is pruned to 3% of its original size. Other + learning rates, such 0.0004, have lower initial validation accuracy (65.2% vs 67.6%) but eventually + reach higher absolute levels of validation accuracy (71.7%, a 6.5 percentage point increase, vs. 70.9%, + a 3.3 percentage point increase). However, learning rate 0.0002 shows the highest proportional + decrease in early-stopping times. 4.8x (when pruned to 8.8% of the original network size). + For Conv-4, we select learning rate 0.0003, which has among the highest initial validation accuracy, + maintains high validation accuracy and fast early-stopping times when pruned by among the most, + and balances improvements in validation accuracy (3.7 percentage point improvement to 78.6% + when 5.4% of weights remain) and improvements in early-stopping time (4.27x when 11.1% of + weights remain). Other learning rates reach higher validation accuracy (0.0004—3.6 percentage point + improvement to 79.1% accuracy when 5.4% of weights remain) or show better improvements in + early-stopping times (0.0002—5.1x faster when 9.2% of weights remain) but not both. + For Conv-6, we also select learning rate 0.0003 for similar reasons to those provided for Conv-4. + Validation accuracy improves by 2.4 percentage points to 81.5% when 9.31% of weights remain + and early-stopping times improve by 2.61x when pruned to 11.9%. Learning rate 0.0004 reaches + high final validation accuracy (81.9%, an increase of 2.7 percentage points, when 15.2% of weights + remain) but with smaller improvements in early-stopping times, and learning rate 0.0002 shows + greater improvements in early-stopping times (6.26x when 19.7% of weights remain) but reaches + lower overall validation accuracy. + We note that, across nearly all combinations of learning rates, the lottery ticket pattern—where + early-stopping times were maintain or decreased and validation accuracy was maintained or increased + during the course of the lottery ticket experiment—continues to hold. 
This pattern fails to hold at + the very highest learning rates. early-stopping times decreased only briefly (in the case of Conv-2 or + Conv-4) or not at all (in the case of Conv-6), and accuracy increased only briefly (in the case of all + three networks). This pattern is similar to that which we observe in Section 4. at the highest learning + rates, our iterative pruning algorithm fails to find winning tickets. + + H.3 OTHER OPTIMIZATION ALGORITHMS + + H.3.1 SGD + Here, we explore the behavior of the lottery ticket experiment when the Conv-2, Conv-4, and Conv-6 + networks are optimized with stochastic gradient descent (SGD) at various learning rates. The results + of doing so appear in Figure 33. In general, these networks—particularly Conv-2 and Conv-4— + proved challenging to train with SGD and Glorot initialization. As Figure 33 reflects, we could not + find SGD learning rates for which the unpruned networks matched the validation accuracy of the + same networks when trained with Adam; at best, the SGD-trained unpruned networks were typically + 2-3 percentage points less accurate. At higher learning rates than those in Figure 32, gradients tended + to explode when training the unpruned network; at lower learning rates, the networks often failed to + learn at all. + At all of the learning rates depicted, we found winning tickets. In all cases, early-stopping times + initially decreased with pruning before eventually increasing again, just as in other lottery ticket + experiments. The Conv-6 network also exhibited the same accuracy patterns as other experiments, + with validation accuracy initially increasing with pruning before eventually decreasing again. + + <
> + + Figure 33. The early-stopping iteration and validation accuracy at that iteration of the iterative lottery + ticket experiment on the Conv-2 (top), Conv-4 (middle), and Conv-6 (bottom) architectures trained + using SGD at various learning rates. Each line represents a different learning rate. The legend for + each pair of graphs is above the graphs. + + However, the Conv-2 and Conv-4 architectures exhibited a different validation accuracy pattern + from other experiments in this paper. Accuracy initially declined with pruning before rising as + the network was further pruned; it eventually matched or surpassed the accuracy of the unpruned + network. When they eventually did surpass the accuracy of the original network, the pruned networks + reached early-stopping in about the same or fewer iterations than the original network, constituting + a winning ticket by our definition. Interestingly, this pattern also appeared for Conv-6 networks at + slower SGD learning rates, suggesting that faster learning rates for Conv-2 and Conv-4 than those in + Figure 32 might cause the usual lottery ticket accuracy pattern to reemerge. Unfortunately, at these + higher learning rates, gradients exploded on the unpruned networks, preventing us from running these + experiments. + + H.3.2 MOMENTUM + + Here, we explore the behavior of the lottery ticket experiment when the network is optimized with + SGD with momentum (0.9) at various learning rates. The results of doing so appear in Figure 34. + In general, the lottery ticket pattern continues to apply, with early-stopping times decreasing and + accuracy increasing as the networks are pruned. However, there were two exceptions to this pattern. + + 1.At the very lowest learning rates (e.g., learning rate 0.001 for Conv-4 and all but the highest + learning rate for Conv-2), accuracy initially decreased before increasing to higher levels + than reached by the unpruned network; this is the same pattern we observed when training + these networks with SGD. + 2.At the very highest learning rates (e.g., learning rates 0.005 and 0.008 for Conv-2 and Conv- + 4), early-stopping times never decreased and instead remained stable before increasing; this + is the same pattern we observed for the highest learning rates when training with Adam. + + + H.4 ITERATIVE PRUNING RATE + + For the convolutional network architectures, we select different pruning rates for convolutional and + fully-connected layers. In the Conv-2 and Conv-4 architectures, convolutional parameters make up a + relatively small portion of the overall number of parameters in the models. By pruning convolutions + more slowly, we are likely to be able to prune the model further while maintaining performance. + In other words, we hypothesize that, if all layers were pruned evenly, convolutional layers would + become a bottleneck that would make it more difficult to find lower parameter-count models that are + still able to learn. For Conv-6, the opposite may be true. since nearly two thirds of its parameters are + in convolutional layers, pruning fully-connected layers could become the bottleneck. + Our criterion for selecting hyperparameters in this section is to find a combination of pruning rates + that allows networks to reach the lowest possible parameter-counts while maintaining validation + accuracy at or above the original accuracy and early-stopping times at or below that for the original + network. 
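To make the role of these per-layer-type rates concrete, the following is a minimal NumPy sketch of one round of layer-wise iterative magnitude pruning with separate rates for convolutional, fully-connected, and output layers. It is only an illustration under assumed layer names, shapes, and rates, not the TensorFlow implementation used for these experiments, and the weight-reset and retraining steps are omitted.

import numpy as np

# Assumed per-iteration pruning rates by layer type (placeholders, not the
# selected hyperparameters).
PRUNE_RATES = {"conv": 0.15, "fc": 0.20, "output": 0.10}

def prune_layer(weights, mask, rate):
    # Remove the smallest-magnitude weights among those still unpruned.
    surviving = np.abs(weights[mask])
    num_to_prune = int(round(rate * surviving.size))
    if num_to_prune == 0:
        return mask
    threshold = np.sort(surviving)[num_to_prune - 1]
    return mask & (np.abs(weights) > threshold)

def pruning_round(weights, masks, layer_types):
    # One iterative-pruning round over all layers; afterwards the surviving
    # weights would be reset to their original initialization and retrained.
    return {name: prune_layer(weights[name], masks[name],
                              PRUNE_RATES[layer_types[name]])
            for name in weights}

# Toy usage with random weights standing in for a trained network.
weights = {"conv1": np.random.randn(64, 3, 3, 3), "fc1": np.random.randn(256, 1024)}
masks = {name: np.ones_like(w, dtype=bool) for name, w in weights.items()}
masks = pruning_round(weights, masks, {"conv1": "conv", "fc1": "fc"})

In the experiments that follow, the rates themselves are the hyperparameters being selected; everything else in the sketch stays fixed.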
Figure 35 shows the results of performing the iterative lottery ticket experiment on Conv-2 (top), Conv-4 (middle), and Conv-6 (bottom) with different combinations of pruning rates.

According to our criteria, we select an iterative convolutional pruning rate of 10% for Conv-2, 10% for Conv-4, and 15% for Conv-6. For each network, any rate between 10% and 20% seemed reasonable. Across all convolutional pruning rates, the lottery ticket pattern continued to appear.

H.5 LEARNING RATES (DROPOUT)

In order to train the Conv-2, Conv-4, and Conv-6 architectures with dropout, we repeated the exercise from Section H.2 to select appropriate learning rates. Figure 36 shows the results of performing the iterative lottery ticket experiment on Conv-2 (top), Conv-4 (middle), and Conv-6 (bottom) with dropout and Adam at various learning rates. A network trained with dropout takes longer to learn, so we trained each architecture for three times as many iterations as in the experiments without dropout: 60,000 iterations for Conv-2, 75,000 iterations for Conv-4, and 90,000 iterations for Conv-6. We iteratively pruned these networks at the rates determined in Section H.4.

<
> + + Figure 34. The early-stopping iteration and validation accuracy at that iteration of the iterative lottery + ticket experiment on the Conv-2 (top), Conv-4 (middle), and Conv-6 (bottom) architectures trained + using SGD with momentum (0.9) at various learning rates. Each line represents a different learning + rate. The legend for each pair of graphs is above the graphs. Lines that are unstable and contain large + error bars (large vertical lines) indicate that some experiments failed to learn effectively, leading to + very low accuracy and very high early-stopping times; these experiments reduce the averages that the + lines trace and lead to much wider error bars. + + <
> + + Figure 35. The early-stopping iteration and validation accuracy at that iteration of the iterative lottery + ticket experiment on the Conv-2 (top), Conv-4 (middle), and Conv-6 (bottom) architectures with an + iterative pruning rate of 20% for fully-connected layers. Each line represents a different iterative + pruning rate for convolutional layers. + + + + The Conv-2 network proved to be difficult to consistently train with dropout. The top right graph + in Figure 36 contains wide error bars and low average accuracy for many learning rates, especially + early in the lottery ticket experiments. This indicates that some or all of the training runs failed to + learn; when they were averaged into the other results, they produced the aforementioned pattern + in the graphs. At learning rate 0.0001, none of the three trials learned productively until pruned to + more than 26.5%, at which point all three trials started learning. At learning rate 0.0002, some of the + trials failed to learn productively until several rounds of iterative pruning had passed. At learning + rate 0.0003, all three networks learned productively at every pruning level. At learning rate 0.0004, + one network occasionally failed to learn. We selected learning rate 0.0003, which seemed to allow + networks to learn productively most often while achieving among the highest initial accuracy. + It is interesting to note that networks that were unable to learn at a particular learning rate (for + example, 0.0001) eventually began learning after several rounds of the lottery ticket experiment (that + is, training, pruning, and resetting repeatedly). It is worth investigating whether this phenomenon + was entirely due to pruning (that is, removing any random collection of weights would put the + network in a configuration more amenable to learning) or whether training the network provided + useful information for pruning, even if the network did not show improved accuracy. + For both the Conv-4 and Conv-6 architectures, a slightly slower learning rate (0.0002 as opposed to + 0.0003) leads to the highest accuracy on the unpruned networks in addition to the highest sustained + accuracy and fastest sustained learning as the networks are pruned during the lottery ticket experiment. + With dropout, the unpruned Conv-4 architecture reaches an average validation accuracy of 77.6%, a + 2.7 percentage point improvement over the unpruned Conv-4 network trained without dropout and + one percentage point lower than the highest average validation accuracy attained by a winning ticket. + The dropout-trained winning tickets reach 82.6% average validation accuracy when pruned to 7.6%. + Early-stopping times improve by up to 1.58x (when pruned to 7.6%), a smaller improvement than + then 4.27x achieved by a winning ticket obtained without dropout. + With dropout, the unpruned Conv-6 architecture reaches an average validation accuracy of 81.3%, + an improvement of 2.2 percentage points over the accuracy without dropout; this nearly matches + the 81.5% average accuracy obtained by Conv-6 trained without dropout and pruned to 9.31%. + The dropout-trained winning tickets further improve upon these numbers, reaching 84.8% average + validation accuracy when pruned to 10.5%. Improvements in early-stopping times are less dramatic + than without dropout. a 1.5x average improvement when the network is pruned to 15.1%. 
+ At all learning rates we tested, the lottery ticket pattern generally holds for accuracy, with improve- + ments as the networks are pruned. However, not all learning rates show the decreases in early-stopping + times. To the contrary, none of the learning rates for Conv-2 show clear improvements in early- + stopping times as seen in the other lottery ticket experiments. Likewise, the faster learning rates for + Conv-4 and Conv-6 maintain the original early-stopping times until pruned to about 40%, at which + point early-stopping times steadily increase. + + H.6 PRUNING CONVOLUTIONS VS PRUNING FULLY-CONNECTED LAYERS + + Figure 37 shows the effect of pruning convolutions alone (green), fully-connected layers alone + (orange) and pruning both (blue). The x-axis measures the number of parameters remaining to + emphasize the relative contributions made by pruning convolutions and fully-connected layers to + the overall network. In all three cases, pruning convolutions alone leads to higher test accuracy + and faster learning; pruning fully-connected layers alone generally causes test accuracy to worsen + and learning to slow. However, pruning convolutions alone has limited ability to reduce the overall + parameter-count of the network, since fully-connected layers comprise 99%, 89%, and 35% of the + parameters in Conv-2, Conv-4, and Conv-6. + + <
> + + Figure 36. The early-stopping iteration and validation accuracy at that iteration of the iterative lottery + ticket experiment on the Conv-2 (top), Conv-4 (middle), and Conv-6 (bottom) architectures trained + using dropout and the Adam optimizer at various learning rates. Each line represents a different + learning rate. + + <
>

Figure 37. Early-stopping iteration and accuracy of the Conv-2 (top), Conv-4 (middle), and Conv-6 (bottom) networks when only convolutions are pruned, only fully-connected layers are pruned, and both are pruned. The x-axis measures the number of parameters remaining, making it possible to see the relative contributions to the overall network made by pruning FC layers and convolutions individually.

I HYPERPARAMETER EXPLORATION FOR VGG-19 AND RESNET-18 ON CIFAR10

This Appendix accompanies the VGG-19 and Resnet-18 experiments in Section 4. It details the pruning scheme, training regimes, and hyperparameters that we use for these networks.

I.1 GLOBAL PRUNING

In our experiments with the Lenet and Conv-2/4/6 architectures, we separately prune a fraction of the parameters in each layer (layer-wise pruning). In our experiments with VGG-19 and Resnet-18, we instead prune globally; that is, we prune all of the weights in convolutional layers collectively without regard for the specific layer from which any weight originated.

Figures 38 (VGG-19) and 39 (Resnet-18) compare the winning tickets found by global pruning (solid lines) and layer-wise pruning (dashed lines) for the hyperparameters from Section 4. When training VGG-19 with learning rate 0.1 and warmup to iteration 10,000, we find winning tickets when Pm ≥ 6.9% for layer-wise pruning vs. Pm ≥ 1.5% for global pruning. For other hyperparameters, accuracy similarly drops off sooner for layer-wise pruning than for global pruning. Global pruning also finds smaller winning tickets than layer-wise pruning for Resnet-18, but the difference is less extreme than for VGG-19.

In Section 4, we discuss the rationale for the efficacy of global pruning on deeper networks. In summary, the layers in these deep networks have vastly different numbers of parameters (particularly severely so for VGG-19); if we prune layer-wise, we conjecture that layers with fewer parameters become bottlenecks on our ability to find smaller winning tickets.

Regardless of whether we use layer-wise or global pruning, the patterns from Section 4 hold: at learning rate 0.1, iterative pruning finds winning tickets for neither network; at learning rate 0.01, the lottery ticket pattern reemerges; and when training with warmup to a higher learning rate, iterative pruning finds winning tickets. Figures 40 (VGG-19) and 41 (Resnet-18) present the same data as Figures 7 (VGG-19) and 8 (Resnet-18) from Section 4 with layer-wise pruning rather than global pruning. The graphs follow the same trends as in Section 4, but the smallest winning tickets are larger than those found by global pruning.

I.2 VGG-19 DETAILS

The VGG19 architecture was first designed by Simonyan & Zisserman (2014) for Imagenet. The version that we use here was adapted by Liu et al. (2019) for CIFAR10. The network is structured as described in Figure 2: it has five groups of 3x3 convolutional layers, the first four of which are followed by max-pooling (stride 2) and the last of which is followed by average pooling. The network has one final dense layer connecting the result of the average-pooling to the output.

We largely follow the training procedure for Resnet-18 described in Appendix I.

We use the same train/test/validation split.
We use the same data augmentation procedure.
We use a batch size of 64.
We use batch normalization.
We use a weight decay of 0.0001.
We use three stages of training at decreasing learning rates. We train for 160 epochs (112,480 iterations), decreasing the learning rate by a factor of ten after 80 and 120 epochs.
We use Gaussian Glorot initialization.

We globally prune the convolutional layers of the network at a rate of 20% per iteration, and we do not prune the 5120 parameters in the output layer.

Liu et al. (2019) uses an initial learning rate of 0.1. We train VGG19 with both this learning rate and a learning rate of 0.01.

I.3 RESNET-18 DETAILS

The Resnet-18 architecture was first introduced by He et al. (2016). The architecture comprises 20 total layers as described in Figure 2: a convolutional layer followed by nine pairs of convolutional layers (with residual connections around the pairs), average pooling, and a fully-connected output layer.

We follow the experimental design of He et al. (2016).

We divide the training set into 45,000 training examples and 5,000 validation examples. We use the validation set to select hyperparameters in this appendix and the test set to evaluate in Section 4.
We augment training data using random flips and random four pixel pads and crops.
We use a batch size of 128.
We use batch normalization.
We use weight decay of 0.0001.
We train using SGD with momentum (0.9).
We use three stages of training at decreasing learning rates. Our stages last for 20,000, 5,000, and 5,000 iterations each, shorter than the 32,000, 16,000, and 16,000 used in He et al. (2016). Since each of our iterative pruning experiments requires training the network 15-30 times consecutively, we select this abbreviated training schedule to make it possible to explore a wider range of hyperparameters.
We use Gaussian Glorot initialization.

We globally prune convolutions at a rate of 20% per iteration. We do not prune the 2560 parameters used to downsample residual connections or the 640 parameters in the fully-connected output layer, as they comprise such a small portion of the overall network.

I.4 LEARNING RATE

In Section 4, we observe that iterative pruning is unable to find winning tickets for VGG-19 and Resnet-18 at the typical, high learning rate used to train the network (0.1) but it is able to do so at a lower learning rate (0.01). Figures 42 and 43 explore several other learning rates. In general, iterative pruning cannot find winning tickets at any rate above 0.01 for either network; for higher learning rates, the pruned networks with the original initialization perform no better than when randomly reinitialized.

I.5 WARMUP ITERATION

In Section 4, we describe how adding linear warmup to the initial learning rate makes it possible to find winning tickets for VGG-19 and Resnet-18 at higher learning rates (and, thereby, winning tickets that reach higher accuracy). In Figures 44 and 45, we explore the number of iterations k over which warmup should occur.

For VGG-19, we were able to find values of k for which iterative pruning could identify winning tickets when the network was trained at the original learning rate (0.1). For Resnet-18, warmup made it possible to increase the learning rate from 0.01 to 0.03, but no further. When exploring values of k, we therefore use learning rate 0.1 for VGG-19 and 0.03 for Resnet-18.

In general, the greater the value of k, the higher the accuracy of the eventual winning tickets.

Resnet-18: For values of k below 5000, accuracy improves rapidly as k increases.
This relationship reaches a point of diminishing returns above k = 5000. For the experiments in Section 4, we select k = 20000, which achieves the highest validation accuracy.

VGG-19: For values of k below 5000, accuracy improves rapidly as k increases. This relationship reaches a point of diminishing returns above k = 5000. For the experiments in Section 4, we select k = 10000, as there is little benefit to larger values of k.

<
> + + Figure 38. Validation accuracy (at 30K, 60K, and 112K iterations) of VGG-19 when iteratively + pruned with global (solid) and layer-wise (dashed) pruning. + + <
> + + Figure 39. Validation accuracy (at 10K, 20K, and 30K iterations) of Resnet-18 when iteratively + pruned with global (solid) and layer-wise (dashed) pruning. + + <
> + + Figure 40. Test accuracy (at 30K, 60K, and 112K iterations) of VGG-19 when iteratively pruned with + layer-wise pruning. This is the same as Figure 7, except with layer-wise pruning rather than global + pruning. + + <
> + + Figure 41. Test accuracy (at 10K, 20K, and 30K iterations) of Resnet-18 when iteratively pruned with + layer-wise pruning. This is the same as Figure 8 except with layer-wise pruning rather than global + pruning. + + <
> + + Figure 42. Validation accuracy (at 10K, 20K, and 30K iterations) of Resnet-18 when iteratively + pruned and trained with various learning rates. + + <
> + + Figure 43. Validation accuracy (at 30K, 60K, and 112K iterations) of VGG-19 when iteratively + pruned and trained with various learning rates. + + <
> + + Figure 44. Validation accuracy (at 10K, 20K, and 30K iterations) of Resnet-18 when iteratively + pruned and trained with varying amounts of warmup at learning rate 0.03. + + <
>

Figure 45. Validation accuracy (at 30K, 60K, and 112K iterations) of VGG-19 when iteratively pruned and trained with varying amounts of warmup at learning rate 0.1.
<> <> <>


<> <> <>
The State of Sparsity in Deep Neural Networks

Trevor Gale *1  Erich Elsen *2  Sara Hooker 1

Abstract

We rigorously evaluate three state-of-the-art techniques for inducing sparsity in deep neural networks on two large-scale learning tasks: Transformer trained on WMT 2014 English-to-German, and ResNet-50 trained on ImageNet. Across thousands of experiments, we demonstrate that complex techniques (Molchanov et al., 2017; Louizos et al., 2017b) shown to yield high compression rates on smaller datasets perform inconsistently, and that simple magnitude pruning approaches achieve comparable or better results. Based on insights from our experiments, we achieve a new state-of-the-art sparsity-accuracy trade-off for ResNet-50 using only magnitude pruning. Additionally, we repeat the experiments performed by Frankle & Carbin (2018) and Liu et al. (2018) at scale and show that unstructured sparse architectures learned through pruning cannot be trained from scratch to the same test set performance as a model trained with joint sparsification and optimization. Together, these results highlight the need for large-scale benchmarks in the field of model compression. We open-source our code, top performing model checkpoints, and results of all hyperparameter configurations to establish rigorous baselines for future work on compression and sparsification.

1. Introduction

Deep neural networks achieve state-of-the-art performance in a variety of domains including image classification (He et al., 2016), machine translation (Vaswani et al., 2017), and text-to-speech (van den Oord et al., 2016; Kalchbrenner et al., 2018). While model quality has been shown to scale with model and dataset size (Hestness et al., 2017), the resources required to train and deploy large neural networks can be prohibitive. State-of-the-art models for tasks like image classification and machine translation commonly have tens of millions of parameters, and require billions of floating-point operations to make a prediction for a single input sample.
Sparsity has emerged as a leading approach to address these challenges. By sparsity, we refer to the property that a subset of the model parameters have a value of exactly zero2. With zero valued weights, any multiplications (which dominate neural network computation) can be skipped, and models can be stored and transmitted compactly using sparse matrix formats. It has been shown empirically that deep neural networks can tolerate high levels of sparsity (Han et al., 2015; Narang et al., 2017; Ullrich et al., 2017), and this property has been leveraged to significantly reduce the cost associated with the deployment of deep neural networks, and to enable the deployment of state-of-the-art models in severely resource constrained environments (Theis et al., 2018; Kalchbrenner et al., 2018; Valin & Skoglund, 2018).
Over the past few years, numerous techniques for inducing sparsity have been proposed and the set of models and datasets used as benchmarks has grown too large to reasonably expect new approaches to explore them all.
In addition to the lack of standardization in modeling tasks, the distribution of benchmarks tends to slant heavily towards convolutional architectures and computer vision tasks, and the tasks used to evaluate new techniques are frequently not representative of the scale and complexity of real-world tasks where model compression is most useful. These char.acteristics make it difficult to come away from the sparsity literature with a clear understanding of the relative merits of different approaches. +In addition to practical concerns around comparing techniques, multiple independent studies have recently proposed that the value of sparsification in neural networks has been misunderstood (Frankle & Carbin, 2018; Liu et al., 2018). While both papers suggest that sparsification can be viewed as a form of neural architecture search, they disagree on what is necessary to achieve this. Speci�cally, Liu et al. +2 The term sparsity is also commonly used to refer to the pro.portion of a neural networks weights that are zero valued. Higher sparsity corresponds to fewer weights, and smaller computational and storage requirements. We use the term in this way throughout this paper. + +(2018) re-train learned sparse topologies with a random weight initialization, whereas Frankle & Carbin (2018) posit that the exact random weight initialization used when the sparse architecture was learned is needed to match the test set performance of the model sparsified during optimization. +In this paper, we address these ambiguities to provide a strong foundation for future work on sparsity in neural networks. Our main contributions: (1) We perform a comprehensive evaluation of variational dropout (Molchanov et al., 2017), l0 regularization (Louizos et al., 2017b), and magnitude pruning (Zhu & Gupta, 2017) on Transformer trained on WMT 2014 English-to-German and ResNet-50 trained on ImageNet. To the best of our knowledge, we are the first to apply variational dropout and l0 regularization to models of this scale. While variational dropout and l0 regularization achieve state-of-the-art results on small datasets, we show that they perform inconsistently for large-scale tasks and that simple magnitude pruning can achieve comparable or better results for a reduced computational budget. (2) Through insights gained from our experiments, we achieve a new state-of-the-art sparsity-accuracy trade-off for ResNet-50 using only magnitude pruning. (3) We repeat the lottery ticket (Frankle & Carbin, 2018) and scratch (Liu et al., 2018) experiments on Transformer and ResNet-50 across a full range of sparsity levels. We show that unstruc.tured sparse architectures learned through pruning cannot be trained from scratch to the same test set performance as a model trained with pruning as part of the optimization process. (4) We open-source our code, model checkpoints, and results of all hyperparameter settings to establish rigorous baselines for future work on model compression and sparsification 3. + +2. Sparsity in Neural Networks + +We briefly provide a non-exhaustive review of proposed approaches for inducing sparsity in deep neural networks. 
+ +Simple heuristics based on removing small magnitude weights have demonstrated high compression rates with minimal accuracy loss (Str�om, 1997; Collins & Kohli, 2014; Han et al., 2015), and further refinement of the sparsification process for magnitude pruning techniques has increased achievable compression rates and greatly reduced computational complexity (Guo et al., 2016; Zhu & Gupta, 2017). Many techniques grounded in Bayesian statistics and in.formation theory have been proposed (Dai et al., 2018; Molchanov et al., 2017; Louizos et al., 2017b;a; Ullrich et al., 2017). These methods have achieved high compres.sion rates while providing deep theoretical motivation and connections to classical sparsification and regularization techniques. +3https://bit.ly/2ExE8Yj + +Some of the earliest techniques for sparsifying neural networks make use of second-order approximation of the loss surface to avoid damaging model quality (LeCun et al., 1989; Hassibi & Stork, 1992). More recent work has achieved comparable compression levels with more computationally efficient first-order loss approximations, and further refinements have related this work to efficient empirical estimates of the Fisher information of the model parameters (Molchanov et al., 2016; Theis et al., 2018). +Reinforcement learning has also been applied to automat.ically prune weights and convolutional filters (Lin et al., 2017; He et al., 2018), and a number of techniques have been proposed that draw inspiration from biological phenomena, and derive from evolutionary algorithms and neuromorphic computing (Guo et al., 2016; Bellec et al., 2017; Mocanu et al., 2018). +A key feature of a sparsity inducing technique is if and how it imposes structure on the topology of sparse weights. While unstructured weight sparsity provides the most flexibility for the model, it is more difficult to map efficiently to parallel processors and has limited support in deep learn.ing software packages. For these reasons, many techniques focus on removing whole neurons and convolutional filters, or impose block structure on the sparse weights (Liu et al., 2017; Luo et al., 2017; Gray et al., 2017). While this is practical, +there is a trade-off between achievable compression levels for a given model quality and the level of structure imposed on the model weights. In this work, we focus on unstructured sparsity with the expectation that it upper bounds the compression-accuracy trade-off achievable with structured sparsity techniques. + +3. Evaluating sparsification Techniques at Scale + +As a first step towards addressing the ambiguity in the sparsity literature, we rigorously evaluate magnitude-based pruning (Zhu & Gupta, 2017), sparse variational dropout (Molchanov et al., 2017), and l0 regularization (Louizos et al., 2017b) on two large-scale deep learning applications: ImageNet classification with ResNet-50 (He et al., 2016), and neural machine translation (NMT) with the Transformer on the WMT 2014 English-to-German dataset (Vaswani et al., 2017). For each model, we also benchmark a random weight pruning technique, representing the lower bound of compression-accuracy trade-off any method should be expected to achieve. +Here we briefly review the four techniques and introduce our experimental framework. We provide a more detailed overview of each technique in Appendix A. + +3.1. 
Magnitude Pruning +Magnitude-based weight pruning schemes use the magnitude of each weight as a proxy for its importance to model quality, and remove the least important weights according to some sparsification schedule over the course of training. For our experiments, we use the approach introduced in Zhu & Gupta (2017), which is conveniently available in the TensorFlow model pruning library 4. This technique allows for masked weights to reactivate during training based on gradient updates, and makes use of a gradual sparsification schedule with sorting-based weight thresholding to achieve a user specified level of sparsification. These features enable high compression ratios at a reduced computational cost relative to the iterative pruning and re-training approach used by Han et al. (2015), while requiring less hyperparameter tuning relative to the technique proposed by Guo et al. (2016). + +3.2. Variational Dropout +Variational dropout was originally proposed as a re.interpretation of dropout training as variational inference, providing a Bayesian justification for the use of dropout in neural networks and enabling useful extensions to the standard dropout algorithms like learnable dropout rates (Kingma et al., 2015). It was later demonstrated that by learning a model with variational dropout and per-parameter dropout rates, weights with high dropout rates can be re.moved post-training to produce highly sparse solutions (Molchanov et al., 2017). +Variational dropout performs variational inference to learn the parameters of a fully-factorized Gaussian posterior over the weights under a log-uniform prior. In the standard formulation, we apply a local reparameterization to move the sampled noise from the weights to the activations, and then apply the additive noise reparameterization to further reduce the variance of the gradient estimator. Under this parameterization, we directly optimize the mean and variance of the neural network parameters. After training a model with variational dropout, the weights with the highest learned dropout rates can be removed to produce a sparse model. + +3.3. l0 Regularization +l0 regularization explicitly penalizes the number of non.zero weights in the model to induce sparsity. However, the l0-norm is both non-convex and non-differentiable. To address the non-differentiability of the l0-norm, Louizos et al. (2017b) propose a reparameterization of the neural network weights as the product of a weight and a stochastic gate variable sampled from a hard-concrete distribution. The parameters of the hard-concrete distribution can be + +4 https://bit.ly/2T8hBGn + +Table 1. Constant hyperparameters for all Transformer experiments. More details on the standard configuration for training the Transformer can be found in Vaswani et al. (2017). + +<
> + +optimized directly using the reparameterization trick, and the expected l0-norm can be computed using the value of the cumulative distribution function of the random gate variable evaluated at zero. + +3.4. Random Pruning Baseline +For our experiments, we also include a random sparsification procedure adapted from the magnitude pruning technique of Zhu & Gupta (2017). Our random pruning technique uses the same sparsity schedule, but differs by selecting the weights to be pruned each step at random rather based on magnitude and does not allow pruned weights to reactivate. This technique is intended to represent a lower-bound of the accuracy-sparsity trade-off curve. + +3.5. Experimental Framework +For magnitude pruning, we used the TensorFlow model pruning library. We implemented variational dropout and l0 regularization from scratch. For variational dropout, we verified our implementation by reproducing the results from the original paper. To verify our l0 regularization implementation, we applied our weight-level code to Wide ResNet (Zagoruyko & Komodakis, 2016) trained on CIFAR-10 and replicated the training FLOPs reduction and accuracy results from the original publication. Verification results for variational dropout and l0 regularization are included in Appendices B and C. For random pruning, we modified the TensorFlow model pruning library to randomly select weights as opposed to sorting them based on magnitude. +For each model, we kept the number of training steps constant across all techniques and performed extensive hyper-parameter tuning. While magnitude pruning is relatively simple to apply to large models and achieves reasonably consistent performance across a wide range of hyperparameters, variational dropout and l0-regularization are much less well understood. To our knowledge, we are the first to apply these techniques to models of this scale. To produce a fair comparison, we did not limit the amount of hyperparameter tuning we performed for each technique. In total, our results encompass over 4000 experiments. + +<
> + +Figure 1. Sparsity-BLEU trade-off curves for the Transformer. +Top: Pareto frontiers for each of the four sparsification techniques applied to the Transformer. Bottom: All experimental results with each technique. Despite the diversity of approaches, the relative performance of all three techniques is remarkably consistent. Magnitude pruning notably outperforms more complex techniques for high levels of sparsity. + +4. Sparse Neural Machine Translation + +We adapted the Transformer (Vaswani et al., 2017) model for neural machine translation to use these four sparsification techniques, and trained the model on the WMT 2014 English-German dataset. We sparsified all fully-connected layers and embeddings, which make up 99.87% of all of the parameters in the model (the other parameters coming from biases and layer normalization). The constant hyper-parameters used for all experiments are listed in table 1. We followed the standard training procedure used by Vaswani et al. (2017), but did not perform checkpoint averaging. This setup yielded a baseline BLEU score of 27.29 averaged across five runs. +We extensively tuned the remaining hyperparameters for each technique. Details on what hyperparameters we explored, and the results of what settings produced the best models can be found in Appendix D. + +4.1. Sparse Transformer Results & Analysis +All results for the Transformer are plotted in Figure 1. De.spite the vast differences in these approaches, the relative performance of all three techniques is remarkably consistent. While l0 regularization and variational dropout pro.duce the top performing models in the low-to-mid sparsity range, magnitude pruning achieves the best results for highly sparse models. While all techniques were able to outperform the random pruning technique, randomly removing weights produces surprisingly reasonable results, which is perhaps indicative of the models ability to recover from damage during optimization. +What is particularly notable about the performance of Magnitude pruning is that our experiments uniformly remove the same fraction of weights for each layer. This is in stark contrast to variational dropout and l0 regularization, where the distribution of sparsity across the layers is learned through the training process. Previous work has shown that a non-uniform sparsity among different layers is key to achieving high compression rates (He et al., 2018), and variational dropout and l0 regularization should theoretically be able to leverage this feature to learn better distributions of weights for a given global sparsity. +Figure 2 shows the distribution of sparsity across the differ.ent layer types in the Transformer for the top performing model at 90% global sparsity for each technique. Both l0 regularization and variational dropout learn to keep more parameters in the embedding, FFN layers, and the output transforms for the multi-head attention modules and induce more sparsity in the transforms for the query and value in.puts to the attention modules. Despite this advantage, l0 regularization and variational dropout did not significantly outperform magnitude pruning, even yielding inferior results at high sparsity levels. +It is also important to note that these results maintain a constant number of training steps across all techniques and that the Transformer variant with magnitude pruning trains 1.24x and 1.65x faster than l0 regularization and variational dropout respectively. 
While the standard Transformer training scheme produces excellent results for machine translation, it has been shown that training the model for longer can improve its performance by as much as 2 BLEU (Ott et al., 2018). Thus, when compared for a fixed training cost, magnitude pruning has a distinct advantage over these more complicated techniques.

<
> + +Figure 2. Average sparsity in Transformer layers. Distributions calculated on the top performing model at 90% sparsity for each technique. l0 regularization and variational dropout are able to learn non-uniform distributions of sparsity, while magnitude pruning induces user-specified sparsity distributions (in this case, uniform). + +Table 2. Constant hyperparameters for all RN50 experiments. + +<
> + +5. Sparse Image classification + +To benchmark these four sparsity techniques on a large-scale computer vision task, we integrated each method into ResNet-50 and trained the model on the ImageNet large-scale image classification dataset. We sparsified all convolutional and fully-connected layers, which make up 99.79% of all of the parameters in the model (the other parameters coming from biases and batch normalization). +The hyperparameters we used for all experiments are listed in Table 2. Each model was trained for 128000 iterations with a batch size of 1024 images, stochastic gradient descent with momentum, and the standard learning rate schedule (see Appendix E.1). This setup yielded a baseline top-1 accuracy of 76.69% averaged across three runs. We trained each model with 8-way data parallelism across 8 accelerators. Due to the extra parameters and operations required for variational dropout, the model was unable to fit into device memory in this configuration. For all variational dropout experiments, we used a per-device batch size of 32 images and scaled the model over 32 accelerators. + +5.1. ResNet-50 Results & Analysis +Figure 3 shows results for magnitude pruning, variational dropout, and random pruning applied to ResNet-50. Surprisingly, we were unable to produce sparse ResNet-50 models with l0 regularization that did not significantly damage model quality. Across hundreds of experiments, our models were either able to achieve full test set performance with no sparsification, or sparsification with test set performance akin to random guessing. Details on all hyperparameter settings explored are included in Appendix E. +This result is particularly surprising given the success of l0 regularization on Transformer. One nuance of the l0 regularization technique of Louizos et al. (2017b) is that the model can have varying sparsity levels between the training and test-time versions of the model. At training time, a parameter with a dropout rate of 10% will be zero 10% of the time when sampled from the hard-concrete distribution. How.ever, under the test-time parameter estimator, this weight + + +Figure 3. Sparsity-accuracy trade-off curves for ResNet-50. +Top: Pareto frontiers for variational dropout, magnitude pruning, and random pruning applied to ResNet-50. Bottom: All experimental results with each technique. We observe large variation in performance for variational dropout and l0 regularization between Transformer and ResNet-50. Magnitude pruning and variational dropout achieve comparable performance for most sparsity levels, with variational dropout achieving the best results for high sparsity levels. +will be non-zero.5. Louizos et al. (2017b) reported results applying l0 regularization to a wide residual network (WRN) (Zagoruyko & Komodakis, 2016) on the CIFAR-10 dataset, and noted that they observed small accuracy loss at as low as 8% reduction in the number of parameters during training. Applying our weight-level l0 regularization implementation to WRN produces a model with comparable training time sparsity, but with no sparsity in the test-time parameters. For models that achieve test-time sparsity, we observe significant accuracy degradation on CIFAR-10. This result is consistent with our observation for l0 regularization applied to ResNet-50 on ImageNet. +The variation in performance for variational dropout and l0 regularization between Transformer and ResNet-50 is striking. 
While achieving a good accuracy-sparsity trade-off, variational dropout consistently ranked behind l0 regularization on Transformer, and was bested by magnitude pruning for sparsity levels of 80% and up. However, on ResNet-50 we observe that variational dropout consistently produces models on-par or better than magnitude pruning, and that l0 regularization is not able to produce sparse models at all. Variational dropout achieved particularly notable results in the high sparsity range, maintaining a top-1 accuracy over 70% with less than 4% of the parameters of a standard ResNet-50.

5 The fraction of time a parameter is set to zero during training depends on other factors, e.g. the β parameter of the hard-concrete distribution. However, the general point holds: the training and test-time sparsities are not necessarily equivalent, and there exists some dropout rate threshold below which a weight that is sometimes zero during training will be non-zero at test-time.

Figure 4. Average sparsity in ResNet-50 layers. Distributions calculated on the top performing model at 95% sparsity for each technique. Variational dropout is able to learn non-uniform distributions of sparsity, decreasing sparsity in the input and output layers that are known to be disproportionately important to model quality.

The distribution of sparsity across different layer types in the best variational dropout and magnitude pruning models at 95% sparsity is plotted in Figure 4. While we kept sparsity constant across all layers for magnitude and random pruning, variational dropout significantly reduces the amount of sparsity induced in the first and last layers of the model.
It has been observed that the first and last layers are often disproportionately important to model quality (Han et al., 2015; Bellec et al., 2017). In the case of ResNet-50, the first convolution comprises only .037% of all the parameters in the model. At 98% sparsity the first layer has only 188 non-zero parameters, for an average of less than 3 parameters per output feature map. With magnitude pruning uniformly sparsifying each layer, it is surprising that it is able to achieve any test set performance at all with so few parameters in the input convolution.
While variational dropout is able to learn to distribute sparsity non-uniformly across the layers, it comes at a significant increase in resource requirements. For ResNet-50 trained with variational dropout we observed a greater than 2x increase in memory consumption. When scaled across 32 accelerators, ResNet-50 trained with variational dropout completed training in 9.75 hours, compared to ResNet-50 with magnitude pruning finishing in 12.50 hours on only 8 accelerators. Scaled to a 4096 batch size and 32 accelerators, ResNet-50 with magnitude pruning can complete the same number of epochs in just 3.15 hours.

Figure 5. Sparsity-accuracy trade-off curves for ResNet-50 with modified sparsification scheme. Altering the distribution of sparsity across the layers and increasing training time yield significant improvement for magnitude pruning.

5.2. Pushing the Limits of Magnitude Pruning
Given that a uniform distribution of sparsity is suboptimal, and the significantly smaller resource requirements for applying magnitude pruning to ResNet-50, it is natural to wonder how well magnitude pruning could perform if we were to distribute the non-zero weights more carefully and increase training time.
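As a rough point of reference for how a non-uniform allocation interacts with a global sparsity target, the short sketch below computes the overall sparsity implied by a per-layer allocation. The layer names and parameter counts are made up for illustration and are not the actual ResNet-50 shapes or the exact scheme used here.

# Assumed parameter counts (illustrative only; real ResNet-50 layers differ).
layer_params = {
    "first_conv": 9_408,       # left fully dense
    "body_convs": 23_000_000,  # pruned to the main target
    "final_fc": 2_048_000,     # capped at 80% sparsity
}

def global_sparsity(per_layer_sparsity):
    # Fraction of all weights that end up zero under a per-layer allocation.
    total = sum(layer_params.values())
    zeros = sum(count * per_layer_sparsity[name]
                for name, count in layer_params.items())
    return zeros / total

uniform = {"first_conv": 0.95, "body_convs": 0.95, "final_fc": 0.95}
modified = {"first_conv": 0.0, "body_convs": 0.97, "final_fc": 0.80}

print(f"uniform allocation:  {global_sparsity(uniform):.3f}")   # 0.950
print(f"modified allocation: {global_sparsity(modified):.3f}")  # ~0.956

The numbers are only illustrative: because the convolutional body dominates the parameter count, sparing the small input and output layers costs very little global sparsity when the body is pruned slightly harder.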
+To understand the limits of the magnitude pruning heuristic, we modify our ResNet-50 training setup to leave the first convolutional layer fully dense, and only prune the final fully-connected layer to 80% sparsity. This heuristic is reasonable for ResNet-50, as the first layer makes up a small fraction of the total parameters in the model and the final layer makes up only .03% of the total FLOPs. While tuning the magnitude pruning ResNet-50 models, we observed that the best models always started and ended pruning during the third learning rate phase, before the second learning rate drop. To take advantage of this, we increase the number of training steps by 1.5x by extending this learning rate region. Results for ResNet-50 trained with this scheme are plotted in Figure 5. +With these modifications, magnitude pruning outperforms variational dropout at all but the highest sparsity levels while still using less resources. However, variational dropout's performance in the high sparsity range is particularly notable. With very low amounts of non-zero weights, we find it likely that the models performance on the test set is closely tied to precise allocation of weights across the different layers, and that variational dropout's ability to learn this distribution enables it to better maintain accuracy at high sparsity levels. This result indicates that efficient sparsification techniques that are able to learn the distribution of sparsity across layers are a promising direction for future work. +Its also worth noting that these changes produced models at 80% sparsity with top-1 accuracy of 76.52%, only .17% off our baseline ResNet-50 accuracy and .41% better than the results reported by He et al. (2018), without the extra complexity and computational requirements of their reinforcement learning approach. This represents a new state-of-the-art sparsity-accuracy trade-off for ResNet-50 trained on ImageNet. + +6. sparsification as Architecture Search +While sparsity is traditionally thought of as a model com.pression technique, two independent studies have recently suggested that the value of sparsification in neural networks is misunderstood, and that once a sparse topology is learned it can be trained from scratch to the full performance achieved when sparsification was performed jointly with optimization. +Frankle & Carbin (2018) posited that over-parameterized neural networks contain small, trainable subsets of weights, deemed "winning lottery tickets". They suggest that sparsity inducing techniques are methods for finding these sparse topologies, and that once found the sparse architectures can be trained from scratch with the same weight initialization that was used when the sparse architecture was learned. They demonstrated that this property holds across different convolutional neural networks and multi-layer perceptrons trained on the MNIST and CIFAR-10 datasets. +Liu et al. (2018) similarly demonstrated this phenomenon for a number of activation sparsity techniques on convolutional neural networks, as well as for weight level sparsity learned with magnitude pruning. However, they demonstrate this result using a random initialization during re.training. +The implications of being able to train sparse architectures from scratch once they are learned are large: once a sparse topology is learned, it can be saved and shared as with any other neural network architecture. Re-training then can be done fully sparse, taking advantage of sparse linear algebra to greatly accelerate time-to-solution. 
However, the combination of these two studies does not clearly establish how this potential is to be realized. +Beyond the question of whether or not the original random weight initialization is needed, both studies only explore convolutional neural networks (and small multi-layer perceptrons in the case of Frankle & Carbin (2018)). The majority of experiments in both studies also limited their analyses to the MNIST, CIFAR-10, and CIFAR-100 datasets. While these are standard benchmarks for deep learning models, they are not indicative of the complexity of real-world tasks where model compression is most useful. Liu et al. (2018) do explore convolutional architectures on the Ima.geNet datasets, but only at two relatively low sparsity levels (30% and 60%). They also note that weight level sparsity on ImageNet is the only case where they are unable to re.produce the full accuracy of the pruned model. + +<
> + +Figure 6. Scratch and lottery ticket experiments with magnitude pruning. Top: results with Transformer. Bottom: Results with ResNet-50. Across all experiments, training from scratch using a learned sparse architecture is unable to re-produce the performance of models trained with sparsification as part of the optimization process. +To clarify the questions surrounding the idea of sparsification as a form of neural architecture search, we repeat the experiments of Frankle & Carbin (2018) and Liu et al. (2018) on ResNet-50 and Transformer. For each model, we explore the full range of sparsity levels (50% -98%) and compare to our well-tuned models from the previous sections. + +6.1. Experimental Framework +The experiments of Liu et al. (2018) encompass taking the final learned weight mask from a magnitude pruning model, randomly re-initializing the weights, and training the model with the normal training procedure (i.e., learning rate, num.ber of iterations, etc.). To account for the presence of sparsity at the start of training, they scale the variance of the initial weight distribution by the number of non-zeros in the matrix. They additionally train a variant where they increase the number of training steps (up to a factor of 2x) such that the re-trained model uses approximately the same number of FLOPs during training as model trained with sparsification as part of the optimization process. They refer to these two experiments as "scratch-e" and "scratch-b" respectively. +Frankle & Carbin (2018) follow a similar procedure, but use the same weight initialization that was used when the sparse weight mask was learned and do not perform the longer training time variant. +For our experiments, we repeat the scratch-e, scratch-b and lottery ticket experiments with magnitude pruning on Transformer and ResNet-50. For scratch-e and scratch-b, we also train variants that do not alter the initial weight distribution. For the Transformer, we re-trained five replicas of the best magnitude pruning hyperparameter settings at each sparsity level and save the weight initialization and final sparse weight mask. For each of the five learned weight masks, we train five identical replicas for the scratch-e, scratch-b, scratch-e with augmented initialization, scratch-b with augmented initialization, and the lottery ticket experiments. For ResNet-50, we followed the same procedure with three re-trained models and three replicas at each sparsity level for each of the five experiments. Figure 6 plots the averages and min/max of all experiments at each sparsity level 6. + +6.2. Scratch and Lottery Ticket Results & Analysis +Across all of our experiments, we observed that training from scratch using a learned sparse architecture is not able to match the performance of the same model trained with sparsification as part of the optimization process. +Across both models, we observed that doubling the number of training steps did improve the quality of the results for the scratch experiments, but was not sufficient to match the test set performance of the magnitude pruning baseline. As sparsity increased, we observed that the deviation between the models trained with magnitude pruning and those trained from scratch increased. For both models, we did not observe a benefit from using the augmented weight initialization for the scratch experiments. +For ResNet-50, we experimented with four different learn.ing rates schemes for the scratch-b experiments. 
We found that scaling each learning rate region to double the number of epochs produced the best results by a wide margin. These results are plotted in Figure 6. Results for the ResNet-50 scratch-b experiments with the other learning rate variants are included with our release of hyperparameter tuning results.
For the lottery ticket experiments, we were not able to replicate the phenomenon observed by Frankle & Carbin (2018). The key difference between our experiments is the complexity of the tasks and scale of the models, and it seems likely that this is the main factor contributing to our inability to train these architectures from scratch.
For the scratch experiments, our results are consistent with the negative result observed by Liu et al. (2018) for ImageNet and ResNet-50 with unstructured weight pruning. By replicating the scratch experiments at the full range of sparsity levels, we observe that the quality of the models degrades relative to the magnitude pruning baseline as sparsity increases. For unstructured weight sparsity, it seems likely that the phenomenon observed by Liu et al. (2018) was produced by a combination of low sparsity levels and small-to-medium sized tasks. We'd like to emphasize that this result is only for unstructured weight sparsity, and that prior work (Liu et al., 2018) provides strong evidence that activation pruning behaves differently.
6 Two of the 175 Transformer experiments failed to train from scratch at all and produced BLEU scores less than 1.0. We omit these outliers in Figure 6.

7. Limitations of This Study
Hyperparameter exploration. For all techniques and models, we carefully hand-tuned hyperparameters and performed extensive sweeps encompassing thousands of experiments over manually identified ranges of values. However, the number of possible settings vastly outnumbers the set of values that can be practically explored, and we cannot eliminate the possibility that some techniques significantly outperform others under settings we did not try.
Neural architectures and datasets. Transformer and ResNet-50 were chosen as benchmark tasks to represent a cross section of large-scale deep learning tasks with diverse architectures. We cannot exclude the possibility that some techniques achieve consistently high performance across other architectures. More models and tasks should be thoroughly explored in future work.

8. Conclusion
In this work, we performed an extensive evaluation of three state-of-the-art sparsification techniques on two large-scale learning tasks. Notwithstanding the limitations discussed in Section 7, we demonstrated that complex techniques shown to yield state-of-the-art compression on small datasets perform inconsistently, and that simple heuristics can achieve comparable or better results on a reduced computational budget. Based on insights from our experiments, we achieve a new state-of-the-art sparsity-accuracy trade-off for ResNet-50 with only magnitude pruning and highlight promising directions for research in sparsity-inducing techniques.
Additionally, we provide strong counterexamples to two recently proposed theories that models learned through pruning techniques can be trained from scratch to the same test set performance of a model learned with sparsification as part of the optimization process. Our results highlight the need for large-scale benchmarks in sparsification and model compression. 
As such, we open-source our code, checkpoints, and results of all hyperparameter configurations to establish rigorous baselines for future work.

Acknowledgements
We would like to thank Benjamin Caine, Jonathan Frankle, Raphael Gontijo Lopes, Sam Greydanus, and Keren Gu for helpful discussions and feedback on drafts of this paper.

References
Bellec, G., Kappel, D., Maass, W., and Legenstein, R. A. Deep Rewiring: Training Very Sparse Deep Networks. CoRR, abs/1711.05136, 2017.
Collins, M. D. and Kohli, P. Memory Bounded Deep Convolutional Networks. CoRR, abs/1412.1442, 2014. URL http://arxiv.org/abs/1412.1442.
Dai, B., Zhu, C., and Wipf, D. P. Compressing Neural Networks using the Variational Information Bottleneck. CoRR, abs/1802.10399, 2018.
Frankle, J. and Carbin, M. The Lottery Ticket Hypothesis: Training Pruned Neural Networks. CoRR, abs/1803.03635, 2018. URL http://arxiv.org/abs/1803.03635.
Gray, S., Radford, A., and Kingma, D. P. Block-sparse GPU kernels. https://blog.openai.com/block-sparse-gpu-kernels/, 2017.
Guo, Y., Yao, A., and Chen, Y. Dynamic Network Surgery for Efficient DNNs. In NIPS, 2016.
Han, S., Pool, J., Tran, J., and Dally, W. J. Learning both Weights and Connections for Efficient Neural Network. In NIPS, pp. 1135–1143, 2015.
Hassibi, B. and Stork, D. G. Second order derivatives for network pruning: Optimal brain surgeon. In NIPS, pp. 164–171. Morgan Kaufmann, 1992.
He, K., Zhang, X., Ren, S., and Sun, J. Deep Residual Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pp. 770–778, 2016.
He, Y., Lin, J., Liu, Z., Wang, H., Li, L., and Han, S. AMC: AutoML for model compression and acceleration on mobile devices. In Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part VII, pp. 815–832, 2018.
Hestness, J., Narang, S., Ardalani, N., Diamos, G. F., Jun, H., Kianinejad, H., Patwary, M. M. A., Yang, Y., and Zhou, Y. Deep learning scaling is predictable, empirically. CoRR, abs/1712.00409, 2017.
Kalchbrenner, N., Elsen, E., Simonyan, K., Noury, S., Casagrande, N., Lockhart, E., Stimberg, F., van den Oord, A., Dieleman, S., and Kavukcuoglu, K. Efficient Neural Audio Synthesis. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, pp. 2415–2424, 2018.
Kingma, D. P. and Welling, M. Auto-encoding variational bayes. CoRR, abs/1312.6114, 2013.
Kingma, D. P., Salimans, T., and Welling, M. Variational dropout and the local reparameterization trick. CoRR, abs/1506.02557, 2015.
LeCun, Y., Denker, J. S., and Solla, S. A. Optimal Brain Damage. In NIPS, pp. 598–605. Morgan Kaufmann, 1989.
Lin, J., Rao, Y., Lu, J., and Zhou, J. Runtime neural pruning. In NIPS, pp. 2178–2188, 2017.
Liu, Z., Li, J., Shen, Z., Huang, G., Yan, S., and Zhang, C. Learning Efficient Convolutional Networks through Network Slimming. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pp. 2755–2763, 2017.
Liu, Z., Sun, M., Zhou, T., Huang, G., and Darrell, T. Rethinking the Value of Network Pruning. CoRR, abs/1810.05270, 2018.
Louizos, C., Ullrich, K., and Welling, M. Bayesian Compression for Deep Learning. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pp. 3290–3300, 2017a.
+Louizos, C., Welling, M., and Kingma, D. P. Learning Sparse Neural Networks through L0 Regularization. CoRR, abs/1712.01312, 2017b.
Luo, J., Wu, J., and Lin, W. ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pp. 5068–5076, 2017.
Mitchell, T. J. and Beauchamp, J. J. Bayesian Variable Selection in Linear Regression. Journal of the American Statistical Association, 83(404):1023–1032, 1988.
Mocanu, D. C., Mocanu, E., Stone, P., Nguyen, P. H., Gibescu, M., and Liotta, A. Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity Inspired by Network Science. Nature Communications, 2018.
Molchanov, D., Ashukha, A., and Vetrov, D. P. Variational Dropout Sparsifies Deep Neural Networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pp. 2498–2507, 2017.
Molchanov, P., Tyree, S., Karras, T., Aila, T., and Kautz, J. Pruning Convolutional Neural Networks for Resource Efficient Transfer Learning. CoRR, abs/1611.06440, 2016.
Narang, S., Diamos, G. F., Sengupta, S., and Elsen, E. Exploring Sparsity in Recurrent Neural Networks. CoRR, abs/1704.05119, 2017.
Ott, M., Edunov, S., Grangier, D., and Auli, M. Scaling Neural Machine Translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, WMT 2018, Belgium, Brussels, October 31 - November 1, 2018, pp. 1–9, 2018.
Rezende, D. J., Mohamed, S., and Wierstra, D. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. In ICML, volume 32 of JMLR Workshop and Conference Proceedings, pp. 1278–1286. JMLR.org, 2014.
Ström, N. Sparse Connection and Pruning in Large Dynamic Artificial Neural Networks. In EUROSPEECH, 1997.
Theis, L., Korshunova, I., Tejani, A., and Huszár, F. Faster gaze prediction with dense networks and Fisher pruning. CoRR, abs/1801.05787, 2018. URL http://arxiv.org/abs/1801.05787.
Ullrich, K., Meeds, E., and Welling, M. Soft Weight-Sharing for Neural Network Compression. CoRR, abs/1702.04008, 2017.
Valin, J. and Skoglund, J. LPCNet: Improving Neural Speech Synthesis Through Linear Prediction. CoRR, abs/1810.11846, 2018. URL http://arxiv.org/abs/1810.11846.
van den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A. W., and Kavukcuoglu, K. WaveNet: A Generative Model for Raw Audio. In The 9th ISCA Speech Synthesis Workshop, Sunnyvale, CA, USA, 13-15 September 2016, pp. 125, 2016.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is All you Need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pp. 6000–6010, 2017.
Zagoruyko, S. and Komodakis, N. Wide Residual Networks. In Proceedings of the British Machine Vision Conference 2016, BMVC 2016, York, UK, September 19-22, 2016, 2016.
Zhu, M. and Gupta, S. To prune, or not to prune: exploring the efficacy of pruning for model compression. CoRR, abs/1710.01878, 2017. URL http://arxiv.org/abs/1710.01878.

The State of Sparsity in Deep Neural Networks: Appendix

A. Overview of Sparsity Inducing Techniques

Here we provide a more detailed review of the three sparsity techniques we benchmarked.

A.1. Magnitude Pruning
Magnitude-based weight pruning schemes use the magnitude of each weight as a proxy for its importance to model quality, and remove the least important weights according to some sparsification schedule over the course of training. Many variants have been proposed (Collins & Kohli, 2014; Han et al., 2015; Guo et al., 2016; Zhu & Gupta, 2017), with the key differences lying in when weights are removed, whether weights should be sorted to remove a precise proportion or thresholded based on a fixed or decaying value, and whether or not weights that have been pruned still receive gradient updates and have the potential to return after being pruned.
Han et al. (2015) use iterative magnitude pruning and re-training to progressively sparsify a model. The target model is first trained to convergence, after which a portion of weights are removed and the model is re-trained with these weights fixed to zero. This process is repeated until the target sparsity is achieved. Guo et al. (2016) improve on this approach by allowing masked weights to still receive gradient updates, enabling the network to recover from incorrect pruning decisions during optimization. They achieve higher compression rates and interleave pruning steps with gradient update steps to avoid expensive re-training. Zhu & Gupta (2017) similarly allow gradient updates to masked weights, and make use of a gradual sparsification schedule with sorting-based weight thresholding to maintain accuracy while achieving a user-specified level of sparsification.
It's worth noting that magnitude pruning can easily be adapted to induce block or activation-level sparsity by removing groups of weights based on their p-norm, average, max, or other statistics. Variants have also been proposed that maintain a constant level of sparsity during optimization to enable accelerated training (Mocanu et al., 2018).

A.2. Variational Dropout
Consider the setting of a dataset D of N i.i.d. samples (x, y) and a standard classification problem where the goal is to learn the parameters w of the conditional probability p(y|x, w). Bayesian inference combines some initial belief over the parameters w in the form of a prior distribution p(w) with observed data D into an updated belief over the parameters in the form of the posterior distribution p(w|D). In practice, computing the true posterior using Bayes' rule is computationally intractable, and good approximations are needed. In variational inference, we optimize the parameters <> of some parameterized model <> such that <> is a close approximation to the true posterior distribution p(w|D) as measured by the Kullback-Leibler divergence between the two distributions. The divergence of our approximate posterior from the true posterior is minimized in practice by maximizing the variational lower-bound

<>

where <>

Using the Stochastic Gradient Variational Bayes (SGVB) (Kingma et al., 2015) algorithm to optimize this bound, <> reduces to the standard cross-entropy loss, and the KL divergence between our approximate posterior and prior over the parameters serves as a regularizer that enforces our initial belief about the parameters w.
In the standard formulation of variational dropout, we assume the weights are drawn from a fully-factorized Gaussian approximate posterior.

<>

where <> and <> are neural network parameters.
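The following paragraphs describe how this posterior is sampled during training and thresholded after training; as a rough illustration only, here is a minimal NumPy sketch under our own reading (not the code released with any of the cited papers), using the -10 initialization and the 3.0 threshold quoted elsewhere in this appendix:

# Minimal sketch (our illustration, not the authors' code) of sampling weights
# from the factorized Gaussian posterior and of the log-alpha pruning rule.
import numpy as np

def sample_weights(theta, log_sigma2, rng):
    # Reparameterization trick: w = theta + sigma * eps, with eps ~ N(0, 1).
    eps = rng.standard_normal(theta.shape)
    return theta + np.exp(0.5 * log_sigma2) * eps

def keep_mask(theta, log_sigma2, log_alpha_threshold=3.0):
    # alpha = sigma^2 / theta^2; weights with large log-alpha are pruned.
    log_alpha = log_sigma2 - np.log(np.square(theta) + 1e-8)
    return log_alpha < log_alpha_threshold

rng = np.random.default_rng(0)
theta = 0.1 * rng.standard_normal((300, 100))
log_sigma2 = np.full_like(theta, -10.0)   # a commonly used initialization
w = sample_weights(theta, log_sigma2, rng)
mask = keep_mask(theta, log_sigma2)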
For each training step, we sample weights from this distribution and use the reparameterization trick (Kingma & Welling, 2013; Rezende et al., 2014) to differentiate the loss w.r.t. the parameters through the sampling operation. Given that the weights are normally distributed, the distribution of the activations B after a linear operation like matrix multiplication or convolution is also Gaussian and can be calculated in closed form 7.

<>

with <> and <> where <> are the inputs to the layer.

7 We ignore correlation in the activations, as is done by Molchanov et al. (2017).

Thus, rather than sample weights, we can directly sample the activations at each layer. This step is known as the local reparameterization trick, and was shown by Kingma et al. (2015) to reduce the variance of the gradients relative to the standard formulation, in which a single set of sampled weights must be shared for all samples in the input batch for efficiency. Molchanov et al. (2017) showed that the variance of the gradients could be further reduced by using an additive noise reparameterization, where we define a new parameter

<>

Under this parameterization, we directly optimize the mean and variance of the neural network parameters.
Under the assumption of a log-uniform prior on the weights w, the KL divergence component of our objective function <> can be accurately approximated (Molchanov et al., 2017):

<>

After training a model with variational dropout, the weights with the highest α values can be removed. For all their experiments, Molchanov et al. (2017) removed weights with log α larger than 3.0, which corresponds to a dropout rate greater than 95%. Although they demonstrated good results, it is likely that the optimal <> threshold varies across different models and even different hyperparameter settings of the same model. We address this question in our experiments.

A.3. l0 Regularization
To optimize the l0-norm, we reparameterize the model weights θ as the product of a weight and a random variable drawn from the hard-concrete distribution.

<> where <> and <>

In this formulation, the <> parameter that controls the position of the hard-concrete distribution (and thus the probability that zj is zero) is optimized with gradient descent. <> and <> are fixed parameters that control the shape of the hard-concrete distribution. <> controls the curvature or temperature of the hard-concrete probability density function, and <> and <> stretch the distribution s.t. zj takes value 0 or 1 with non-zero probability.
On each training iteration, zj is sampled from this distribution and multiplied with the standard neural network weights. The expected l0-norm LC can then be calculated using the cumulative distribution function of the hard-concrete distribution and optimized directly with stochastic gradient descent.

<>

At test time, Louizos et al. (2017b) use the following estimate for the model parameters.

<>

Interestingly, Louizos et al. (2017b) showed that their objective function under the l0 penalty is a special case of a variational lower-bound over the parameters of the network under a spike-and-slab (Mitchell & Beauchamp, 1988) prior.

B. Variational Dropout Implementation Verification

To verify our implementation of variational dropout, we applied it to LeNet-300-100 and LeNet-5-Caffe on MNIST and compared our results to the original paper (Molchanov et al., 2017). We matched our hyperparameters to those used in the code released with the paper 8.
All results are listed in Table 3.

Table 3. Variational Dropout MNIST Reproduction Results.

<
> +

Our baseline LeNet-300-100 model achieved a test set accuracy of 98.42%, slightly higher than the baseline of 98.36% reported in Molchanov et al. (2017). Applying our variational dropout implementation to LeNet-300-100 with these hyperparameters produced a model with 97.52% global sparsity and 98.42% test accuracy. The original paper produced

8 https://github.com/ars-ashuha/variational-dropout-sparsifies-dnn

<
> +

Figure 7. Forward pass FLOPs for WRN-28-10 trained with l0 regularization. Our implementation achieves FLOPs reductions comparable to those reported in Louizos et al. (2017b).

a model with 98.57% global sparsity, and 98.08% test accuracy. While our model achieves 0.34% higher test accuracy with 1% lower sparsity, we believe the discrepancy is mainly due to differences in our software packages: the authors of (Molchanov et al., 2017) used Theano and Lasagne for their experiments, while we use TensorFlow.
Given that our model achieves the highest accuracy, we can decrease the log α threshold to trade accuracy for more sparsity. With a <> threshold of 2.0, our model achieves 98.5% global sparsity with a test set accuracy of 98.40%. With a log α threshold of 0.1, our model achieves 99.1% global sparsity with 98.13% test set accuracy, exceeding the sparsity and accuracy of the originally published results.
On LeNet-5-Caffe, our implementation achieved a global sparsity of 99.29% with a test set accuracy of 99.26%, versus the originally published results of 99.6% sparsity with 99.25% accuracy. Lowering the <> threshold to 2.0, our model achieves 99.5% sparsity with 99.25% test accuracy.

C. l0 Regularization Implementation Verification

The original l0 regularization paper uses a modified version of the proposed technique for inducing group sparsity in models, so our weight-level implementation is not directly comparable. However, to verify our implementation we trained a Wide ResNet (WRN) (Zagoruyko & Komodakis, 2016) on CIFAR-10 and compared results to those reported in the original publication for group sparsity.
As done by Louizos et al. (2017b), we apply l0 to the first convolutional layer in the residual blocks (i.e., where dropout would normally be used). We use the weight decay formulation for the re-parameterized weights, and scale the weight decay coefficient to maintain the same initial length scale of the parameters. We use the same batch size of 128 samples and the same initial <>, and train our model on a single GPU.
Our baseline WRN-28-10 implementation trained on CIFAR-10 achieved a test set accuracy of 95.45%. Using our l0 regularization implementation and an l0-norm weight of 0.0003, we trained a model that achieved 95.34% accuracy on the test set while achieving a consistent training-time FLOPs reduction comparable to that reported by Louizos et al. (2017b). Floating-point operations (FLOPs) required to compute the forward pass over the course of training WRN-28-10 with l0 are plotted in Figure 7.
During our re-implementation of the WRN experiments from Louizos et al. (2017b), we identified errors in the original publication's FLOP calculations that caused the number of floating-point operations in WRN-28-10 to be miscalculated. We have contacted the authors, and hope to resolve this issue to clarify their performance results.

D. Sparse Transformer Experiments

D.1. Magnitude Pruning Details
For our magnitude pruning experiments, we tuned four key hyperparameters: the starting iteration of the sparsification process, the ending iteration of the sparsification process, the frequency of pruning steps, and the combination of other regularizers (dropout and label smoothing) used during training. We trained models with 7 different target sparsities: 50%, 60%, 70%, 80%, 90%, 95%, and 98%. At each of these sparsity levels, we tried pruning frequencies of 1000 and 10000 steps.
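These start, end, and frequency settings parameterize the gradual sparsity ramp of Zhu & Gupta (2017); a minimal sketch of the cubic schedule (our own rendering, with hypothetical step values) is:

# Minimal sketch (our own rendering, hypothetical step values) of the cubic
# sparsity ramp from Zhu & Gupta (2017), parameterized by a start step, an
# end step, and a pruning frequency.
def target_sparsity(step, start_step, end_step, final_sparsity, initial_sparsity=0.0):
    if step < start_step:
        return initial_sparsity
    if step >= end_step:
        return final_sparsity
    progress = (step - start_step) / float(end_step - start_step)
    return final_sparsity + (initial_sparsity - final_sparsity) * (1.0 - progress) ** 3

# Example: ramp to 90% sparsity between steps 100k and 300k, pruning every 10k steps.
schedule = [target_sparsity(t, 100_000, 300_000, 0.9) for t in range(0, 500_001, 10_000)]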
During preliminary experiments we identified that the best settings for the training step to stop pruning at were typically closer to the end of training. Based on this insight, we explored every possible combination of start and end points for the sparsity schedule in increments of 100000 steps with an ending step of 300000 or greater.
By default, the Transformer uses dropout with a dropout rate of 10% on the input to the encoder, decoder, and before each layer, and performs label smoothing with a smoothing parameter of <>. We found that decreasing these other regularizers produced higher-quality models in the mid-to-high sparsity range. For each hyperparameter combination, we tried three different regularization settings: standard label smoothing and dropout, label smoothing only, and no regularization.

D.2. Variational Dropout Details
For the Transformer trained with variational dropout, we extensively tuned the coefficient for the KL divergence component of the objective function to find models that achieved high accuracy with sparsity levels in the target range. We found that KL divergence weights in the range <>, where N is the number of samples in the training set, produced models in our target sparsity range. Molchanov et al. (2017) noted difficulty training some models from scratch with variational dropout, as large portions of the model adopt high dropout rates early in training before the model can learn a useful representation from the data. To address this issue, they use a gradual ramp-up of the KL divergence weight, linearly increasing the regularizer coefficient until it reaches the desired value.
For our experiments, we explored using a constant regularizer weight, linearly increasing the regularizer weight, and also increasing the regularizer weight following the cubic sparsity function used with magnitude pruning. For the linear and cubic weight schedules, we tried each combination of possible start and end points in increments of 100000 steps. For each hyperparameter combination, we also tried the three different combinations of dropout and label smoothing as with magnitude pruning. For each trained model, we evaluated the model with 11 <> thresholds in the range [0, 5]. For all experiments, we initialized all <> parameters to the constant value <>.

D.3. l0 Regularization Details
For Transformers trained with l0 regularization, we similarly tuned the coefficient for the l0-norm in the objective function. We observed that much higher magnitude regularization coefficients were needed to produce models with the same sparsity levels relative to variational dropout. We found that l0-norm weights in the range <> produced models in our target sparsity range.
For all experiments, we used the default settings for the parameters of the hard-concrete distribution: <>, and <>. We initialized the <> parameters to 2.197, corresponding to a 10% dropout rate.
For each hyperparameter setting, we explored the three regularizer coefficient schedules used with variational dropout and each of the three combinations of dropout and label smoothing.

D.4. Random Pruning Details
We identified in preliminary experiments that random pruning typically produces the best results by starting and ending pruning early and allowing the model to finish the rest of the training steps with the final sparse weight mask.
For our experiments, we explored all hyperparameter combinations that we explored with magnitude pruning, and also included start/end pruning step combinations with an end step of less than 300000.

E. Sparse ResNet-50

E.1. Learning Rate
For all experiments, we used the learning rate scheme used by the official TensorFlow ResNet-50 implementation 9. With our batch size of 1024, this includes a linear ramp-up for 5 epochs to a learning rate of 0.4, followed by learning rate drops by a factor of 0.1 at epochs 30, 60, and 80.

9 https://bit.ly/2Wd2Lk0

E.2. Magnitude Pruning Details
For magnitude pruning on ResNet-50, we trained models with a target sparsity of 50%, 70%, 80%, 90%, 95%, and 98%. At each sparsity level, we tried starting pruning at steps 8k, 20k, and 40k. For each potential starting point, we tried ending pruning at steps 68k, 76k, and 100k. For every hyperparameter setting, we tried pruning frequencies of 2k, 4k, and 8k steps and explored training with and without label smoothing. During preliminary experiments, we observed that removing weight decay from the model consistently caused significant decreases in test accuracy. Thus, for all hyperparameter combinations, we left weight decay on with the standard coefficient.
For a target sparsity of 98%, we observed that very few hyperparameter combinations were able to complete training without failing due to numerical issues. Out of all the hyperparameter configurations we tried, only a single model was able to complete training without erroring from the presence of NaNs. As explained in the main text, at high sparsity levels the first layer of the model has very few non-zero parameters, leading to instability during training and low test set performance. Pruned ResNet-50 models with the first layer left dense did not exhibit these issues.

E.3. Variational Dropout Details
For variational dropout applied to ResNet-50, we explored the same combinations of start and end points for the KL divergence weight ramp-up as we did for the start and end points of magnitude pruning. For all Transformer experiments, we did not observe a significant gain from using a cubic KL divergence weight ramp-up schedule and thus only explored the linear ramp-up for ResNet-50. For each combination of start and end points for the KL divergence weight, we explored 9 different coefficients for the KL divergence loss term: 0.01/N, 0.03/N, 0.05/N, 0.1/N, 0.3/N, 0.5/N, 1/N, 10/N, and 100/N.
Contrary to our experience with Transformer, we found ResNet-50 with variational dropout to be highly sensitive to the initialization of the <> parameters. With the standard setting of -10, we could not match the baseline accuracy, and with an initialization of -20 our models achieved good test performance but no sparsity. After some experimentation, we were able to produce good results with an initialization of -15.
While with Transformer we saw a reasonable amount of variance in test set performance and sparsity with the same model evaluated at different log α thresholds, we did not observe the same phenomenon for ResNet-50. Across a range of log α values, we saw consistent accuracy and nearly identical sparsity levels. For all of the results reported in the main text, we used a <> threshold of 0.5, which we found to produce slightly better results than the standard threshold of 3.0.

E.4. l0 Regularization Details
For l0 regularization, we explored four different initial <> values corresponding to dropout rates of 1%, 5%, 10%, and 30%.
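Assuming the same correspondence as the value 2.197 quoted in Appendix D.3, i.e., that the initial location parameter is the logit of the keep probability (our inference, not stated explicitly in the paper), these dropout rates map to initial values as follows:

# Our inference (consistent with the 2.197 value quoted in Appendix D.3 for a
# 10% dropout rate): the initial hard-concrete location parameter is the logit
# of the keep probability, log((1 - p_drop) / p_drop).
import math

def initial_location(dropout_rate):
    return math.log((1.0 - dropout_rate) / dropout_rate)

for p in (0.01, 0.05, 0.10, 0.30):
    print(f"dropout {p:.0%} -> init {initial_location(p):.3f}")  # 10% -> 2.197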
For each dropout rate, we extensively tuned the l0-norm weight to produce models in the desired sparsity range. After identifying the proper range of l0-norm coefficients, we ran experiments with 20 different coefficients in that range. For each combination of these hyperparameters, we tried all four combinations of other regularizers: standard weight decay and label smoothing, only weight decay, only label smoothing, and no regularization. For weight decay, we used the formulation for the reparameterized weights provided in the original paper, and followed their approach of scaling the weight decay coefficient based on the initial dropout rate to maintain a constant length-scale between the l0 regularized model and the standard model.
Across all of these experiments, we were unable to produce ResNet models that achieved a test set performance better than random guessing. For all experiments, we observed that training proceeded reasonably normally until the l0-norm loss began to drop, at which point the model incurred severe accuracy loss. We include the results of all hyperparameter combinations in our data release.
Additionally, we tried a number of tweaks to the learning process to improve the results, to no avail. We explored training the model for twice the number of epochs, training with much higher initial dropout rates, modifying the <> parameter for the hard-concrete distribution, and a modified test-time parameter estimator.

E.5. Random Pruning Details
For random pruning on ResNet-50, we shifted the set of possible start and end points for pruning earlier in training relative to those we explored for magnitude pruning. At each of the sparsity levels tried with magnitude pruning, we tried starting pruning at steps 0, 8k, and 20k. For each potential starting point, we tried ending pruning at steps 40k, 68k, and 76k. For every hyperparameter setting, we tried pruning frequencies of 2k, 4k, and 8k and explored training with and without label smoothing.

E.6. Scratch-B Learning Rate Variants
For the scratch-b (Liu et al., 2018) experiments with ResNet-50, we explored four different learning rate schemes for the extended training time (2x the default number of epochs).
The first learning rate scheme we explored was uniformly scaling each of the five learning rate regions to last for double the number of epochs. This setup produced the best results by a wide margin. We report these results in the main text.
The second learning rate scheme was to keep the standard learning rate, and maintain the final learning rate for the extra training steps as is common when fine-tuning deep neural networks. The third learning rate scheme was to maintain the standard learning rate, and continually drop the learning rate by a factor of 0.1 every 30 epochs. The last scheme we explored was to skip the learning rate warm-up, and drop the learning rate by 0.1 every 30 epochs. This learning rate scheme is closest to the one used by Liu et al. (2018). We found that this scheme underperformed relative to the scaled learning rate scheme with our training setup.
Results for all learning rate schemes are included with the released hyperparameter tuning data.
<> <> <>


<> <> <>
NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications

Tien-Ju Yang 1⋆[0000−0003−4728−0321] , Andrew Howard 2 , Bo Chen 2 ,
Xiao Zhang 2 , Alec Go 2 , Mark Sandler 2 , Vivienne Sze 1 , and Hartwig Adam 2

1 Massachusetts Institute of Technology
2 Google Inc. 
+ {tjy,sze}@mit.edu,{howarda,bochen,andypassion,ago,sandler,hadam}@google.com + + + Abstract. + + This work proposes an algorithm, called NetAdapt, that + automatically adapts a pre-trained deep neural network to a mobile plat- + form given a resource budget. While many existing algorithms simplify + networks based on the number of MACs or weights, optimizing those + indirect metrics may not necessarily reduce the direct metrics, such as + latency and energy consumption. To solve this problem, NetAdapt + incorporates direct metrics into its adaptation algorithm. These direct metrics + are evaluated using empirical measurements, so that detailed knowledge + of the platform and tool chain is not required. NetAdapt automatically + and progressively simplifies a pre-trained network until the resource bud- + get is met while maximizing the accuracy. Experiment results show that + NetAdapt achieves better accuracy versus latency tradeoffs on both + mobile CPU and mobile GPU, compared with the state-of-the-art automated + network simplification algorithms. For image classification on the + ImageNet dataset, NetAdapt achieves up to a 1.7× speedup in-measured + inference latency with equal or higher accuracy on MobileNets (V1&V2). + + + 1 Introduction + + Deep neural networks (DNNs or networks) have become an indispensable component + of artificial intelligence, delivering near or super-human accuracy on com- + mon vision tasks such as image classification and object detection. However, + DNN-based AI applications are typically too computationally intensive to be + deployed on resource-constrained platforms, such as mobile phones. This hinders + the enrichment of a large set of user experiences. + A significant amount of recent work on DNN design has focused on improving + the efficiency of networks. However, the majority of works are based on optimizing + the “indirect metrics”, such as the number of multiply-accumulate operations + (MACs) or the number of weights, as proxies for the resource consumption of + a network. Although these indirect metrics are convenient to compute and + integrate into the optimization framework, they may not be good approximations + to the “direct metrics” that matter for the real applications such as latency + + <
> + + Fig. 1.NetAdapt automatically adapts a pretrained network to a mobile platform + given a resource budget. This algorithm is guided by the direct metrics for resource + consumption. NetAdapt eliminates the requirement of platform-specific knowledge by + using empirical measurements to evaluate the direct metrics. At each iteration, Ne- + tAdapt generates many network proposals and measures the proposals on the target + platform. The measurements are used to guide NetAdapt to generate the next set of + network proposals at the next iteration. + + + and energy consumption. The relationship between an indirect metric and the + corresponding direct metric can be highly non-linear and platform-dependent as + observed by [15, 25, 26]. In this work, we will also demonstrate empirically that + a network with a fewer number of MACs can be slower when actually running + on mobile devices; specifically, we will show that a network of 19% less MACs + incurs 29% longer latency in practice (see Table 1). + There are two common approaches to designing efficient network architectures. + The first is designing a single architecture with no regard to the underlying + platform. It is hard for a single architecture to run optimally on all the platforms + due to the different platform characteristics. For example, the fastest architecture + on a desktop GPU may not be the fastest one on a mobile CPU with the + same accuracy. Moreover, there is little guarantee that the architecture could + meet the resource budget (e.g., latency) on all platforms of interest. The second + approach is manually crafting architectures for a given target platform based + on the platform’s characteristics. However, this approach requires deep knowledge + about the implementation details of the platform, including the toolchains, + the configuration and the hardware architecture, which are generally unavailable + given the proprietary nature of hardware and the high complexity of modern sys- + tems. Furthermore, manually designing a different architecture for each platform + can be taxing for researchers and engineers. + In this work, we propose a platform-aware algorithm, calledNetAdapt,to + address the aforementioned issues and facilitate platform-specific DNN deployment. NetAdapt 3 + NetAdapt (Fig. 1) incorporates direct metrics in the optimization loop, so + it does not suffer from the discrepancy between the indirect and direct metrics. + The direct metrics are evaluated by the empirical measurements taken from the + target platform. This enables the algorithm to support any platform without + detailed knowledge of the platform itself, although such knowledge could still be + incorporated into the algorithm to further improve results. In this paper, we use + latency as the running example of a direct metric and resource to target even + though our algorithm is generalizable to other metrics or a combination of them + (Sec. 4.3). + The network optimization of NetAdapt is carried out in an automatic way to + gradually reduce the resource consumption of a pretrained network while + maximizing the accuracy. The optimization runs iteratively until the resource budget + is met. Through this design, NetAdapt can generate not only a network that + meets the budget, but also a family of simplified networks with different trade- + offs, which allows dynamic network selection and further study. Finally, instead + of being a black box, NetAdapt is designed to be easy to interpret. 
For exam- + ple, through studying the proposed network architectures and the corresponding + empirical measurements, we can understand why a proposal is chosen and this + sheds light on how to improve the platform and network design. + The main contributions of this paper are: + A framework that uses direct metrics when optimizing a pretrained network + to meet a given resource budget. Empirical measurements are used to evaluate + the direct metrics such that no platform-specific knowledge is required. + An automated constrained network optimization algorithm that maximizes + accuracy while satisfying the constraints (i.e., the predefined resource bud- + get). The algorithm outperforms the state-of-the-art automatic network + simplification algorithms by up to 1.7×in terms of reduction inmeasured inference + latency while delivering equal or higher accuracy. Moreover, a family + of simplified networks with different trade-offs will be generated to allow + dynamic network selection and further study. + Experiments that demonstrate the effectiveness of NetAdapt on different + platforms and on real-time-class networks, such as the small MobileNetV1, + which is more difficult to simplify than larger networks. + + + 2 Related Work + + There is a large body of work that aims to simplify DNNs.We refer the readers + to [21] for a comprehensive survey, and summarize the main approaches below. + The most related works are pruning-based methods. [6, 14, 16] aim to remove + individual redundant weights from DNNs. However, most platforms cannot fully + take advantage of unstructured sparse filters [26]. Hu et al. [10] and Srinivas et + al. [20] focus on removing entire filters instead of individual weights. The draw- + back of these methods is the requirement of manually choosing the compression + rate for each layer. MorphNet [5] leverages the sparsifying regularizers to + automatically determine the layerwise compression rate. ADC [8] uses reinforcement + learning to learn a policy for choosing the compression rates. The crucial + difference between all the aforementioned methods and ours is that they are not + guided by the direct metrics, and thus may lead to sub-optimal performance, as + we see in Sec. 4.3. + Energy-aware pruning [25] uses an energy model [24] and incorporates the + estimated energy numbers into the pruning algorithm. However, this requires de- + signing models to estimate the direct metrics of each target platform, which re- + quires detailed knowledge of the platform including its hardware architecture [3], + and the network-to-array mapping used in the toolchain [2]. NetAdapt does not + have this requirement since it can directly use empirical measurements. + DNNs can also be simplified by approaches that involve directly designing + efficient network architectures, decomposition or quantization. MobileNets [9, 18] + and ShueNets [27] provide efficient layer operations and reference architecture + design. Layer-decomposition-based algorithms [13, 23] exploit matrix + decomposition to reduce the number of operations. Quantization [11, 12, 17] reduces + the complexity by decreasing the computation accuracy. The proposed + algorithm, NetAdapt, is complementary to these methods. For example, NetAdapt + can adapt MobileNets to further push the frontier of efficient networks as shown + in Sec. 4 even though MobileNets are more compact and much harder to simplify + than the other larger networks, such as VGG [19]. 
+ + 3 Methodology: NetAdapt + + We propose an algorithm, called NetAdapt, that will allow a user to automatically + simplify a pretrained network to meet the resource budget of a platform + while maximizing the accuracy. NetAdapt is guided by direct metrics for resource + consumption, and the direct metrics are evaluated by using empirical measurements, + thus eliminating the requirement of detailed platform-specific knowledge. + + 3.1 Problem Formulation + NetAdapt aims to solve the following non-convex constrained problem: + + <> (1) + + where Net is a simplified network from the initial pretrained network, <> + computes the accuracy, <> evaluates the direct metric for resource con- + sumption of the jth resource, and <> is the budget of the jth resource and + the constraint on the optimization. The resource can be latency, energy, memory + footprint, etc., or a combination of these metrics. + Based on an idea similar to progressive barrier methods [1], NetAdapt breaks + this problem into the following series of easier problems and solves it iteratively: + + <> (2) + + + Algorithm 1:NetAdapt + + <> + + where <> is the network generated by the ith iteration, and Net_0 is the initial + pretrained network. As the number of iterations increases, the constraints (i.e., + current resource budget <> gradually become tighter. <>, + which is larger than zero, indicates how much the constraint tightens for the jth + resource in the ith iteration and can vary from iteration to iteration. This is + referred to as “resource reduction schedule”, which is similar to the concept of + learning rate schedule. The algorithm terminates when Res <> + is equal to or smaller thanBud j for every resource type. It outputs the final + adapted network and can also generate a sequence of simplified networks (i.e., + the highest accuracy network from each iteration <>) to provide the + efficient frontier of accuracy and resource consumption trade-offs. + + 3.2 Algorithm Overview + + For simplicity, we assume that we only need to meet the budget of one resource, + specifically latency. One method to reduce the latency is to remove filters from + the convolutional (CONV) or fully-connected (FC) layers. While there are other + ways to reduce latency, we will use this approach to demonstrate NetAdapt. + The NetAdapt algorithm is detailed in pseudo code in Algorithm 1 and in + Fig. 2. Each iteration solves Eq. 2 by reducing the number of filters in a single + CONV or FC layer (theChoose # of Filters and Choose Which Filters + blocks in Fig. 2). The number of filters to remove from a layer is guided by + empirical measurements. NetAdapt removes entire filters instead of individual + weights because most platforms can take advantage of removing entire filters, + + <
> + + Fig. 2.This figure visualizes the algorithm flow of NetAdapt. At each iteration, Ne- + tAdapt decreases the resource consumption by simplifying (i.e., removing filters from) + one layer. In order to maximize accuracy, it tries to simplify each layer individually + and picks the simplified network that has the highest accuracy. Once the target budget + is met, the chosen network is then fine-tuned again until convergence. + + and this strategy allows reducing both filters and feature maps, which play an + important role in resource consumption [25]. The simplified network is then + fine-tuned for a short length of time in order to restore some accuracy (the + Short-Term Fine-Tuneblock). + In each iteration, the previous three steps (highlighted in bold) are applied on + each of the CONV or FC layers individually 3 . As a result, NetAdapt generates + K (i.e., the number of CONV and FC layers) network proposals in one iteration, + each of which has a single layer modified from the previous iteration. The network + proposal with the highest accuracy is carried over to the next iteration (the + Pick Highest Accuracy block). Finally, once the target budget is met, the + chosen network is fine-tuned again until convergence (theLong-Term Fine-Tuneblock). + + + 3.3 Algorithm Details + + This section describes the key blocks in the NetAdapt algorithm (Fig. 2). + Choose Number of FiltersThis step focuses on determining how many + filters to preserve in a specific layer based on empirical measurements. NetAdapt + gradually reduces the number of filters in the target layer and measures the + resource consumption of each of the simplified networks. The maximum number + 3 The algorithm can also be applied to a group of multiple layers as a single unit + (instead of a single layer). For example, in ResNet [7], we can treat a residual block + as a single unit to speed up the adaptation process. + + <
> + + Fig. 3.This figure illustrates how layer-wise look-up tables are used for fast resource + consumption estimation. + + + of filters that can satisfy the current resource constraint will be chosen. Note + that when some filters are removed from a layer, the associated channels in the + following layers should also be removed. Therefore, the change in the resource + consumption of other layers needs to be factored in. + Choose Which FiltersThis step chooses which filters to preserve based on + the architecture from the previous step. There are many methods proposed in + the literature, and we choose the magnitude-based method to keep the algorithm + simple. In this work, the N filters that have the largest ℓ2-norm magnitude will + be kept, whereNis the number of filters determined by the previous step. More + complex methods can be adopted to increase the accuracy, such as removing the + filters based on their joint influence on the feature maps [25]. + Short-/Long-Term Fine-TuneBoth the short-term fine-tune and long- + term fine-tune steps in NetAdapt involve network-wise end-to-end fine-tuning. + Short-term fine-tune has fewer iterations than long-term fine-tune. + At each iteration of the algorithm, we fine-tune the simplified networks with + a relatively smaller number of iterations (i.e., short-term) to regain accuracy, in + parallel or in sequence. This step is especially important while adapting small + networks with a large resource reduction because otherwise the accuracy will + drop to zero, which can cause the algorithm to choose the wrong network proposal. + As the algorithm proceeds, the network is continuously trained but does not + converge. Once the final adapted network is obtained, we fine-tune the network + with more iterations until convergence (i.e., long-term) as the final step. + + + 3.4 Fast Resource Consumption Estimation + + As mentioned in Sec. 3.3, NetAdapt uses empirical measurements to determine + the number of filters to keep in a layer given the resource constraint. In theory, + we can measure the resource consumption of each of the simplified networks + on the fly during adaptation. However, taking measurements can be slow and + difficult to parallelize due to the limited number of available devices. Therefore, + it may be prohibitively expensive and become the computation bottleneck. + + <
> + + Fig. 4.The comparison between the estimated latency (using layer-wise look-up tables) + and the real latency on a single large core of Google Pixel 1 CPU while adapting the + 100% MobileNetV1 with the input resolution of 224 [9]. + + + We solve this problem by building layer-wise look-up tables with pre-measured + resource consumption of each layer. When executing the algorithm, we look up + the table of each layer, and sum up the layer-wise measurements to estimate + the network-wise resource consumption, which is illustrated in Fig. 3. The rea- + son for not using a network-wise table is that the size of the table will grow + exponentially with the number of layers, which makes it intractable for deep + networks. Moreover, layers with the same shape and feature map size only need + to be measured once, which is common for modern deep networks. + Fig. 4 compares the estimated latency (the sum of layer-wise latency from the + layer-wise look-up tables) and the real latency on a single large core of Google + Pixel 1 CPU while adapting the 100% MobileNetV1 with the input resolution of + 224 [9]. The real and estimated latency numbers are highly correlated, and the + difference between them is sufficiently small to be used by NetAdapt. + + + 4 Experiment Results + + In this section, we apply the proposed NetAdapt algorithm to MobileNets [9, 18], + which are designed for mobile applications, and experiment on the ImageNet + dataset [4]. We did not apply NetAdapt on larger networks like ResNet [7] and + VGG [19] because networks become more difficult to simplify as they become + smaller; these networks are also seldom deployed on mobile platforms. We benchmark + NetAdapt against three state-of-the-art network simplification methods: + Multipliers[9] are simple but effective methods for simplifying networks. + Two commonly used multipliers are the width multiplier and the resolution + multiplier; they can also be used together. Width multiplier scales the + number of filters by a percentage across all convolutional (CONV) and fully- + connected (FC) layers, and resolution multiplier scales the resolution of the + input image. We use the notation “50% MobileNetV1 (128)” to denote ap- + plying a width multiplier of 50% on MobileNetV1 with the input image + resolution of 128. + MorphNet[5] is an automatic network simplification algorithm based on sparsifying regularization. + ADC[8] is an automatic network simplification algorithm based on reinforcement learning. + + We will show the performance of NetAdapt on the small MobileNetV1 (50% + MobileNetV1 (128)) to demonstrate the effectiveness of NetAdapt on real-time- + class networks, which are much more difficult to simplify than larger networks. + To show the generality of NetAdapt, we will also measure its performance on + the large MobileNetV1 (100% MobileNetV1 (224)) across different platforms. + Lastly, we adapt the large MobileNetV2 (100% MobileNetV2 (224)) to push the + frontier of efficient networks. + + + 4.1 Detailed Settings for MobileNetV1 Experiments + + We perform most of the experiments and study on MobileNetV1 and detail the + settings in this section. + NetAdapt ConfigurationMobileNetV1 [9] is based on depthwise separable + convolutions, which factorize am×m standard convolution layer into am×m + depthwise layer and a 1×1 standard convolution layer called a pointwise layer. In + the experiments, we adapt each depthwise layer with the corresponding pointwise + layer and choose the filters to keep based on the pointwise layer. 
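As a rough illustration of how the layer-wise look-up tables of Sec. 3.4 are queried during adaptation (a minimal sketch with hypothetical layer names and latency values, not the tool used in this paper):

# Minimal sketch (hypothetical layer names and latency numbers, not the tool
# used in the paper) of the layer-wise look-up tables from Sec. 3.4: per-layer
# latencies are pre-measured for each candidate filter count, and a candidate
# network's latency is estimated as the sum of its per-layer entries.
latency_ms = {
    ("conv_dw_1", 32): 1.10, ("conv_dw_1", 24): 0.85,
    ("conv_pw_2", 64): 2.40, ("conv_pw_2", 48): 1.85,
}

def estimate_latency(filter_counts):
    # filter_counts: mapping from layer name to the number of filters kept.
    return sum(latency_ms[(name, n)] for name, n in filter_counts.items())

print(estimate_latency({"conv_dw_1": 24, "conv_pw_2": 64}))  # 0.85 + 2.40 = 3.25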
When adapting + the small MobileNetV1 (50% MobileNetV1 (128)), the latency reduction (<> + in Eq. 2) starts at 0.5 and decays at the rate of 0.96 per iteration. When adapting + other networks, we use the same decay rate but scale the initial latency reduction + proportional to the latency of the initial pretrained network. + Network TrainingWe preserve ten thousand images from the training + set, ten images per class, as the holdout set. The new training set without the + holdout images is used to perform short-term fine-tuning, and the holdout set is + used to pick the highest accuracy network out of the simplified networks at each + iteration. The whole training set is used for the long-term fine-tuning, which is + performed once in the last step of NetAdapt. + Because the training configuration can have a large impact on the accuracy, + we apply the same training configuration to all the networks unless otherwise + stated to have a fairer comparison. We adopt the same training configuration as + MorphNet [5] (except that the batch size is 128 instead of 96). The learning rate + for the long-term fine-tuning is 0.045 and that for the short-term fine-tuning is + 0.0045. This configuration improves ADC network’s top-1 accuracy by 0.3% and + almost all multiplier networks’ top-1 accuracy by up to 3.8%, except for one data + point, whose accuracy is reduced by 0.2%. We use these numbers in the following + analysis. Moreover, all accuracy numbers are reported on the validation set to + show the true performance. + Mobile Inference and Latency MeasurementWe use Google’s Tensor- + Flow Lite engine [22] for inference on a mobile CPU and Qualcomm’s Snap- + dragon Neural Processing Engine (SNPE) for inference on a mobile GPU. For + experiments on mobile CPUs, the latency is measured on a single large core of + + <
> + + Fig. 5.The figure compares NetAdapt (adapting the small MobileNetV1) with the + multipliers [9] and MorphNet [5] on a mobile CPU of Google Pixel 1. + + + Google Pixel 1 phone. For experiments on mobile GPUs, the latency is measured + on the mobile GPU of Samsung Galaxy S8 with SNPE’s benchmarking tool. For + each latency number, we report the median of 11 latency measurements. + + 4.2 Comparison with Benchmark Algorithms + Adapting Small MobileNetV1 on a Mobile CPUIn this experiment, we + apply NetAdapt to adapt the small MobileNetV1 (50% MobileNetV1 (128)) to + a mobile CPU. It is one of the most compact networks and achieves real-time + performance. It is more challenging to simplify than other larger networks + (include the large MobileNet V1). The results are summarized and compared with + the multipliers [9] and MorphNet [5] in Fig. 5. We observe that NetAdapt + outperforms the multipliers by up to 1.7×faster with the same or higher accuracy. + For MorphNet, NetAdapt’s result is 1.6×faster with 0.3% higher accuracy. + + Adapting Large MobileNetV1 on a Mobile CPUIn this experiment, we + apply NetAdapt to adapt the large MobileNetV1 (100% MobileNetV1 (224)) + on a mobile CPU. It is the largest MobileNetV1 and achieves the highest ac- + curacy. Because its latency is approximately 8×higher than that of the small + MobileNetV1, we scale the initial latency reduction by 8×. The results are shown + and compared with the multipliers [9] and ADC [8] in Fig. 6. NetAdapt achieves + higher accuracy than the multipliers and ADC while increasing the speed by + 1.4× and 1.2×, respectively. + While the training configuration is kept the same when comparing to the + benchmark algorithms discussed above, we also show in Fig. 6 that the accuracy + of the networks adapted using NetAdapt can be further improved with a better + training configuration. After simply adding dropout and label smoothing, the + accuracy can be increased by 1.3%. Further tuning the training configuration + for each adapted network can give higher accuracy numbers, but it is not the + focus of this paper. + + <
> + + Fig. 6.The figure compares NetAdapt (adapting the large MobileNetV1) with the + multipliers [9] and ADC [8] on a mobile CPU of Google Pixel 1. Moreover, the accuracy + of the adapted networks can be further increased by up to 1.3% through using a better + training configuration (simply adding dropout and label smoothing). + + <
> + + Fig. 7.This figure compares NetAdapt (adapting the large MobileNetV1) with the + multipliers [9] and ADC [8] on a mobile GPU of Samsung Galaxy S8. Moreover, the + accuracy of the adapted networks can be further increased by up to 1.3% through using + a better training configuration (simply adding dropout and label smoothing). + + + Adapting Large MobileNetV1 on a Mobile GPUIn this experiment, we + apply NetAdapt to adapt the large MobileNetV1 on a mobile GPU to show the + generality of NetAdapt. Fig. 7 shows that NetAdapt outperforms other benchmark + algorithms by up to 1.2×speed-up with higher accuracy. Due to the + limitation of the SNPE tool, the layerwise latency breakdown only considers the + computation time and does not include the latency of other operations, such as + feature map movement, which can be expensive [25]. This affects the precision + of the look-up tables used for this experiment. Moreover, we observe that there + is an approximate 6.2ms (38% of the latency of the network before applying + NetAdapt) non-reducible latency. These factors cause a smaller improvement on + the mobile GPU compared with the experiments on the mobile CPU. Moreover, + when the better training configuration is applied as previously described, the + accuracy can be further increased by 1.3%. + + <
> <
>

    Fig. 8. The accuracy of different short-term fine-tuning iterations when adapting
    the small MobileNetV1 (without long-term fine-tuning) on a mobile CPU of Google
    Pixel 1. Zero iterations means no short-term fine-tuning.

    Fig. 9. The comparison between before and after long-term fine-tuning when adapting
    the small MobileNetV1 on a mobile CPU of Google Pixel 1. Although the short-term
    fine-tuning preserves the accuracy well, the long-term fine-tuning gives the extra
    3.4% on average (from 1.8% to 4.5%).


    4.3 Ablation Studies
    Impact of Direct Metrics. In this experiment, we use the indirect metric (i.e.,
    the number of MACs) instead of the direct metric (i.e., the latency) to guide
    NetAdapt to investigate the importance of using direct metrics. When computing
    the number of MACs, we only consider the CONV and FC layers because batch
    normalization layers can be folded into the corresponding CONV layers, and the
    other layers are negligibly small. Table 1 shows that NetAdapt outperforms the
    benchmark algorithms with lower numbers of MACs and higher accuracy. This
    demonstrates the effectiveness of NetAdapt. However, we also observe that a
    network with a lower number of MACs may not necessarily be faster. This shows
    the necessity of incorporating direct measurements into the optimization flow.

    Impact of Short-Term Fine-Tuning. Fig. 8 shows the accuracy of adapting
    the small MobileNetV1 with different short-term fine-tuning iterations (without
    long-term fine-tuning). The accuracy rapidly drops to nearly zero if no short-term
    fine-tuning is performed (i.e., zero iterations). In this low accuracy region,
    the algorithm picks the best network proposal solely based on noise and hence

    <
> + + Fig. 10.NetAdapt and the multipliers generate different simplified networks when + adapting the small MobileNetV1 to match the latency of 25% MobileNetV1 (128). + + + gives poor performance. After fine-tuning a network for a short amount of time + (ten thousand iterations), the accuracy is always kept above 20%, which allows + the algorithm to make a better decision. Although further increasing the number + of iterations improves the accuracy, we find that using forty thousand iterations + leads to a good accuracy versus speed trade-off for the small MobileNetV1. + + Impact of Long-Term Fine-TuningFig. 9 illustrates the importance of per- + forming the long-term fine-tuning. Although the short-term fine-tuning preserves + the accuracy well, the long-term fine-tuning can still increase the accuracy by + up to another 4.5% or 3.4% on average. Since the short-term fine-tuning has a + short training time, the training is terminated far before convergence. Therefore, + it is not surprising that the final long-term fine-tuning can further increase the + accuracy. + + Impact of Resource Reduction Schedules Table 2 shows the impact of + using three different resource reduction schedules, which are defined in Sec. 3.1. + Empirically, using a larger resource reduction at each iteration increases the + adaptation speed (i.e., reducing the total number of adaptation iterations) at the + cost of accuracy. With the same number of total iterations, the result suggests + that a smaller initial resource reduction with a slower decay is preferable. + + 4.4 Analysis of Adapted Network Architecture + The network architectures of the adapted small MobileNetV1 by using NetAdapt + and the multipliers are shown and compared in Fig. 10. Both of them have similar + latency as 25% MobileNetV1 (128). There are two interesting observations. + + <
> + + Table 3.The comparison between NetAdapt (adapting the large MobileNetV2 (100% + MobileNetV2 (224))) and the multipliers [18] on a mobile CPU of Google Pixel 1. We + compare the latency at similar accuracy and the accuracy at similar latency. + + + First, NetAdapt removes more filters in layers 7 to 10, but fewer in layer 6. + Since the feature map resolution is reduced in layer 6 but not in layers 7 to 10, + we hypothesize that when the feature map resolution is reduced, more filters are + needed to avoid creating an information bottleneck. + The second observation is that NetAdapt keeps more filters in layer 13 (i.e. + the last CONV layer). One possible explanation is that the ImageNet dataset + contains one thousand classes, so more feature maps are needed by the last FC + layer to do the correct classification. + + 4.5 Adapting Large MobileNetV2 on a Mobile CPU + In this section, we show encouraging early results of applying NetAdapt to + MobileNetV2 [18]. MobileNetV2 introduces the inverted residual with linear + bottleneck into MobileNetV1 and becomes more efficient. Because MobileNetV2 + utilizes residual connections, we only adapt individual inner (expansion) layers + or reduce all bottleneck layers of the same resolution in lockstep. The main + differences between the MobileNetV1 and MobileNetV2 experiment settings are that + each network proposal is short-term fine-tuned with ten thousand iterations, the + initial latency reduction is 1ms, the latency reduction decay is 0.995, the batch + size is 96, and dropout and label smoothing are used. NetAdapt achieves 1.1% + higher accuracy or 1.2×faster speed than the multipliers as shown in Table 3. + + 5 Conclusion + + In summary, we proposed an automated algorithm, called NetAdapt, to adapt a + pretrained network to a mobile platform given a real resource budget. NetAdapt + can incorporate direct metrics, such as latency and energy, into the optimization + to maximize the adaptation performance based on the characteristics of the + platform. By using empirical measurements, NetAdapt can be applied to any + platform as long as we can measure the desired metrics, without any knowledge + of the underlying implementation of the platform. We demonstrated empirically + that the proposed algorithm can achieve better accuracy versus latency trade-off + (by up to 1.7×faster with equal or higher accuracy) compared with other + state-of-the-art network simplification algorithms. In this work, we aimed to highlight + the importance of using direct metrics in the optimization of efficient networks; + we hope that future research efforts will take direct metrics into account in order + to further improve the performance of efficient networks. + + + Bibliography + + [1] Audet, C., J. E. Dennis, J.: A progressive barrier for derivative-free nonlin- + ear programming. SIAM Journal on Optimization20(1), 445–472 (2009) + [2] Chen, Y.H., Emer, J., Sze, V.: Eyeriss: A Spatial Architecture for Energy- + Efficient Dataflow for Convolutional Neural Networks. In: Proceedings of the + 43rd Annual International Symposium on Computer Architecture (ISCA) + (2016) + [3] Chen, Y.H., Krishna, T., Emer, J., Sze, V.: Eyeriss: An Energy-Efficient + Reconfigurable Accelerator for Deep Convolutional Neural Networks. IEEE + Journal of Solid-State Circuits52, 127–138 (2016) + [4] Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: A + large-scale hierarchical image database. In: IEEE Conference on Computer + Vision and Pattern Recognition (CVPR). 
pp. 248–255. IEEE (2009) + [5] Gordon, A., Eban, E., Nachum, O., Chen, B., Yang, T.J., Choi, E.: Mor- + phnet: Fast & simple resource-constrained structure learning of deep net- + works. In: IEEE Conference on Computer Vision and Pattern Recognition + (CVPR) (2018) + [6] Han, S., Pool, J., Tran, J., Dally, W.: Learning both weights and connections + for efficient neural network. In: Advances in Neural Information Processing + Systems. pp. 1135–1143 (2015) + [7] He, K., Zhang, X., Ren, S., Sun, J.: Deep Residual Learning for Image + Recognition. In: IEEE Conference on Computer Vision and Pattern Recog- + nition (CVPR) (2016) + [8] He, Y., Han, S.: Adc: Automated deep compression and acceleration with + reinforcement learning. arXiv preprint arXiv:1802.03494 (2018) + [9] Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, + T., Andreetto, M., Adam, H.: Mobilenets: Efficient convolutional neural + networks for mobile vision applications. arXiv preprint arXiv:1704.04861 + (2017) + [10] Hu, H., Peng, R., Tai, Y.W., Tang, C.K.: Network Trimming: A Data- + Driven Neuron Pruning Approach towards Efficient Deep Architectures. + arXiv preprint arXiv:1607.03250 (2016) + [11] Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R., Bengio, Y.: Binarized + neural networks. In: Advances in Neural Information Processing Systems. + pp. 4107–4115 (2016) + [12] Jacob, B., Kligys, S., Chen, B., Zhu, M., Tang, M., Howard, A., Adam, H., + Kalenichenko, D.: Quantization and training of neural networks for efficient + integer-arithmetic-only inference. arXiv preprint arXiv:1712.05877 (2017) + [13] Kim, Y.D., Park, E., Yoo, S., Choi, T., Yang, L., Shin, D.: Compression of + deep convolutional neural networks for fast and low power mobile applica- + tions. arXiv preprint arXiv:1511.06530 (2015) + [14] Le Cun, Y., Denker, J.S., Solla, S.A.: Optimal brain damage. In: Advances + in Neural Information Processing Systems (1990) 16 T.-J. Yang et al. + [15] Liangzhen Lai, Naveen Suda, V.C.: Not all ops are created equal! In: SysML + (2018) + [16] Molchanov, P., Tyree, S., Karras, T., Aila, T., Kautz, J.: Pruning convolu- + tional neural networks for resource efficient transfer learning. arXiv preprint + arXiv:1611.06440 (2016) + [17] Rastegari, M., Ordonez, V., Redmon, J., Farhadi, A.: Xnor-net: Imagenet + classification using binary convolutional neural networks. In: European Con- + ference on Computer Vision (ECCV) (2016) + [18] Sandler, M., Howard, A.G., Zhu, M., Zhmoginov, A., Chen, L.C.: Inverted + residuals and linear bottlenecks: Mobile networks for classification, detection + and segmentation. In: IEEE Conference on Computer Vision and Pattern + Recognition (CVPR) (2018) + [19] Simonyan, K., Zisserman, A.: Very Deep Convolutional Networks for Large- + Scale Image Recognition. In: International Conference on Learning Repre- + sentations (ICLR) (2014) + [20] Srinivas, S., Babu, R.V.: Data-free parameter pruning for deep neural net- + works. arXiv preprint arXiv:1507.06149 (2015) + [21] Sze, V., Chen, Y.H., Yang, T.J., Emer, J.S.: Efficient processing of deep + neural networks: A tutorial and survey. Proceedings of the IEEE105(12), + 2295–2329 (Dec 2017). https://doi.org/10.1109/JPROC.2017.2761740 + [22] TensorFlow Lite: https://www.tensorflow.org/mobile/tflite/ + [23] Yang, Z., Moczulski, M., Denil, M., de Freitas, N., Smola, A., Song, L., + Wang, Z.: Deep fried convnets. In: Proceedings of the IEEE International + Conference on Computer Vision. pp. 
1476–1483 (2015) + [24] Yang, Tien-Ju and Chen, Yu-Hsin and Emer, Joel and Sze, Vivienne: A + Method to Estimate the Energy Consumption of Deep Neural Networks. + In: Asilomar Conference on Signals, Systems and Computers (2017) + [25] Yang, Tien-Ju and Chen, Yu-Hsin and Sze, Vivienne: Designing energy- + efficient convolutional neural networks using energy-aware pruning. In: + IEEE Conference on Computer Vision and Pattern Recognition (CVPR) + (2017) + [26] Yu, J., Lukefahr, A., Palframan, D., Dasika, G., Das, R., Mahlke, S.: Scalpel: + Customizing dnn pruning to the underlying hardware parallelism. In: Pro- + ceedings of the 44th Annual International Symposium on Computer Archi- + tecture (2017) + [27] Zhang, X., Zhou, X., Lin, M., Sun, J.: Shuenet: An extremely ef- + ficient convolutional neural network for mobile devices. arXiv preprint + arXiv:1707.01083 (2017) +<> <> <> + + +<> <> <> + TOWARDS THE SYSTEMATIC REPORTING OF THE ENERGY AND CARBON FOOTPRINTS OF MACHINE LEARNING + + Peter Henderson y , Jieru Hu z , Joshua Romoff + Emma Brunskill y , Dan Jurafsky y , Joelle Pineau z; + y Stanford University, z Facebook, Mila, McGill University + + + February 14, 2020 + + ABSTRACT + + Accurate reporting of energy and carbon usage is essential for understanding the potential climate + impacts of machine learning research. We introduce a framework that makes this easier by providing a + simple interface for tracking realtime energy consumption and carbon emissions, as well as generating + standardized online appendices. Utilizing this framework, we create a leaderboard for energy efficient + reinforcement learning algorithms to incentivize responsible research in this area as an example for + other areas of machine learning. Finally, based on case studies using our framework, we propose + strategies for mitigation of carbon emissions and reduction of energy consumption. By making + accounting easier, we hope to further the sustainable development of machine learning experiments + and spur more research into energy efficient algorithms. + + 1 Introduction + + Global climate change is a scientifically well-recognized phenomenon and appears to be accelerated due to greenhouse + gas (GHG) emissions such as carbon dioxide or equivalents (CO 2eq ) (Crowley,2000;IPCC,2018). The harmful health + and safety impacts of global climate change are projected to “fall disproportionately on the poor and vulnerable” (IPCC, + 2018). Energy production remains a large factor in GHG emissions, contributing about 25% of GHG emissions in + 2010 (IPCC,2018). With the compute and energy demands of many modern machine learning (ML) methods growing + exponentially (Amodei and Hernandez,2018), ML systems have the potential to significantly contribute to carbon + emissions. Recent work has demonstrated these potential impacts through case studies and suggested various mitigating + strategies (Strubell et al.,2019;Schwartz et al.,2019). + Systematic and accurate measurements are needed to better estimate the broader energy and carbon footprints of ML – + in both research and production settings. Accurate accounting of carbon and energy impacts aligns incentives with + energy efficiency (Schwartz et al.,2019), raises awareness, and drives mitigation efforts (Sundar et al.,2018;LaRiviere + et al.,2016), among other benefits. 1 Yet, most ML research papers do not regularly report energy or carbon emissions + metrics. 
2 + + We hypothesize that part of the reason that much research does not report energy and carbon metrics is due to the + complexities of collecting them. Collecting carbon emission metrics requires understanding emissions from energy + grids, recording power outputs from GPUs and CPUs, and navigating among different tools to accomplish these tasks. + To reduce this overhead, we present experiment-impact-tracker a lightweight framework for consistent, easy, and + more accurate reporting of energy, compute, and carbon impacts of ML systems. + In Section4, we introduce the design and capabilities of our framework and the issues with accounting we aim to solve + with this new framework. Section5expands on the challenges of using existing accounting methods and discusses our + + 1 See Section4.1for an extended discussion on the importance of accounting. + 2 See Section3and AppendixBfor more information. + 3 https.//github.com/Breakend/experiment-impact-tracker + + + learnings from analyzing experiments with experiment-impact-tracker. For example, in an empirical case study on + image classification algorithms, we demonstrate that floating point operations (FPOs), a common measure of efficiency, + are often uncorrelated with energy consumption with energy metrics gathered by experiment-impact-tracker. + In Section6, we focus on recommendations for promoting energy-efficient research and mitigation strategies for carbon + emissions. Using our framework, we present aReinforcement Learning Energy Leaderboard in Section6.1to encourage + development of energy efficient algorithms. We also present a case study in machine translation to show how regional + energy grid differences can result in large variations inCO 2eq emissions. Emissions can be reduced by up to 30x just + by running experiments in locations powered by more renewable energy sources (Section6.2). Finally, we suggest + systemic and immediate changes based on our findings. + + •incentivizing energy-efficient research through leaderboards (Section6.1) + •running experiments in carbon-friendly regions (Section6.2) + •reducing overheads for utilizing efficient algorithms and resources (Section7.1) + •considering energy-performance trade-offs before deploying energy hungry models (Section7.2) + •selecting efficient test environment especially in RL (Section7.3) + •ensuring reproducibility to reduce energy consumption from replication difficulties (Section7.4) + •consistently reporting energy and carbon metrics (Section7.5) + + 2 Related Work + + Estimating GHG emissions and their downstream consequences is important for setting regulatory standards (U.S. + Environment Protection Agency,2013) and encouraging self-regulation (Byerly et al.,2018). In particular, these + estimates are used to set carbon emissions reduction targets and in turn set carbon prices for taxes or emissions trading + systems. 4 A large body of work has examined modeling and accounting of carbon emissions 5 at different levels of + granularity. at the global scale (IPCC,2018); using country-specific estimates (Ricke et al.,2018); targeting a particular + industrial sector like Information and Communication Technologies, for example, modeled byMalmodin et al.(2013); + or even targeting a particular application like bitcoin mining, for example, modeled byMora et al.(2018). 
+ At the application level, some work has already modeled carbon impacts specifically in computationally intensive + settings like bitcoin mining (Krause and Tolaymat,2018;Stoll et al.,2019;Zade et al.,2019;Mora et al.,2018). + Such application-specific efforts are important for prioritizing emissions mitigation strategies. without understanding + projected impacts, policy decisions could focus on ineffective regulation. However, with large amounts of heterogeneity + and endogeneity in the underlying data, it can be difficult to model all aspects of an application’s usage. For example, + one study suggested that “bitcoin emissions alone could push global warming above 2°C” (Mora et al.,2018). But + Masanet et al.(2019),Houy(2019), and others, criticized the underlying modeling assumptions which led to such large + estimates of carbon emissions. This shows that it is vital that these models provide accurate measurements if they are to + be used for informed decision making. + With ML models getting more computationally intensive (Amodei and Hernandez,2018), we want to better understand + how machine learning in research and industry impacts climate change. However, estimating aggregate climate change + impacts of ML research and applications would require many assumptions due to a current lack of reporting and + accounting. Instead, we aim to emphasize and aid systematic reporting strategies such that accurate field-wide estimates + can be conducted in the future. + Some recent work investigates climate impacts of machine learning research, specifically Strubell et al.(2019) + demonstrate the issue of carbon and energy impacts of large NLP models by evaluating estimated power usage and + carbon emissions for a set of case studies. The authors suggest that. “authors should report training time and sensitivity + to hyperparameters”, “academic researchers need equitable access to computation resources”, and “researchers should + prioritize computationally efficient hardware and algorithms”.Schwartz et al.(2019) provide similar proposals, + suggesting floating point operations (FPOs) as a guiding efficiency metric. Lacoste et al.(2019) recently provided a + website for estimating carbon emissions based on GPU type, experiment length, and cloud provider. In Section5, we + 4 An emissions trading system is a cap on total allowed carbon emissions for a company with permits issued. When a company + emits a certain amount of carbon, they trade in a permit, creating a market for emissions permits. This is a market-based approach to + incentivize emission reductions. See Ramstein et al.(2019) for a description of such carbon pricing efforts across different countries. + 5 See also assorted examinations on carbon accounting, standardized reporting, and policy recommendations (Stechemesser and + Guenther,2012; Dayarathna et al.,2015; IPCC,2018; Ajani et al.,2013; Bellassen and Stephan,2015;Andrew and Cortese,2011; + Tang and Demeritt, 2018;Cotter et al.,2011;Tol,2011;U.S. Environment Protection Agency,2013; Ricke et al.,2018). + discuss how while the estimation methods of these works provide some understanding of carbon and energy impacts, + nuances in the estimation methods may make them inaccurate – particularly in experiments which utilize combined CPU + and GPU workloads heavily. We build a framework aiming to provide more accurate and easier systematic reporting of + carbon and energy footprints. 
We also provide additional mitigation and reporting strategies – beyond those discussed + by these prior works – to emphasize how both companies and research labs can be more carbon and energy efficient. + It is worth noting that prior work has also examined the carbon impacts of research in other fields, focusing mostly on + emissions from conference travel (Spinellis and Louridas,2013;Astudillo and AzariJafari,2018;Hackel and Sparkman, + 2018). We provide a brief discussion on ML-related conference travel in AppendixA, but will focus mainly on accurate + accounting of energy and carbon footprints of ML compute. + + 3 Background + + We briefly provide a primer on energy and carbon accounting, which form the basis of our proposed framework for + measuring and reporting the ecological footprint of ML research. + + 3.1 Energy Accounting + + Energy accounting is fairly straightforward. The energy consumption of a system can be measured in Joules (J) or + Watt-hours (Wh), 6 representing the amount of energy needed to power the system. Life-cycle accounting might also + consider the energy required to manufacture components of the system – for example, the production of GPUs or + CPUs (Jones et al.,2013). However, we largely ignore life-cycle aspects of energy accounting due to the difficulties in + attributing manufacturing impacts on a per-experiment basis. Measuring data-center energy impacts also contain several + layers, focusing on hardware-centric and software-centric analyses. Many parts contribute to the power consumption + of any computational system. Dayarathna et al.(2015) survey energy consumption components of a data center and + their relative consumption. cooling (50%), lighting (3%), power conversion (11%), network hardware (10%), and + server/storage (26%). The server and storage component can further be broken down into contributions from DRAM, + CPUs, among other compute components. Accurate accounting for all of these components requires complex modeling + and varies depending on workload. Since we aim to provide a framework at the per-experiment software level, we only + account for aspects of energy consumption which expose interfaces for energy metrics. For the purpose of our work, this + is constrained to DRAM, CPUs, and GPUs. To account for all other components, we rely on a power usage effectiveness + (PUE) factor (Strubell et al.,2019). This factor rescales the available power metrics by an average projected overhead + of other components. With more available software interfaces, more robust modeling can be performed as reviewed by + Dayarathna et al.(2015). + + 3.2 Carbon Accounting + + Carbon accounting can be all-expansive, so we focus on a narrow definition provided by Stechemesser and Guenther + (2012). “carbon accounting at the project scale can be defined as the measuring and non-monetary valuation of carbon + and GHG emissions and offsetting from projects, and the monetary assessment of these emissions with offset credits to + inform project-owners and investors but also to establish standardized methodologies.” Carbon and GHG emissions are + typically measured in some form close to unitsCO 2eq . This is the amount of carbon – and other GHG converted to + carbon amounts – released into the atmosphere as a result of the project. Carbon offsetting is the amount of carbon + emissions saved as a result of the project. 
For example, a company may purchase renewable energy in excess of + the energy required for their project to offset for the carbon emissions they contributed. Since our goal is to inform + and assess carbon emissions of machine learning systems, we ignore carbon offsetting 7 . We also do not consider + carbon accounting in the financial sense, but do provide metrics on monetary impacts through the social cost of carbon + (SC-CO2). TheU.S. Environment Protection Agency(2013) uses this metric when developing administrative rules and + regulations. According to the EPA, “The SC-CO2 is a measure, in dollars, of the long-term damage done by a ton of + carbon dioxide (CO2) emissions in a given year. This dollar figure also represents the value of damages avoided for + a small emission reduction (i.e., the benefit of a CO2 reduction).” We rely on the per-country social cost of carbon + developed byRicke et al.(2018), which accounts for different risk profiles of country-level policies and GDP growth in + their estimates of SC-CO2. + Carbon emissions from a project can also consider life-cycle emissions (for example, manufacturing of CPUs may emit + carbon as part of the process). We do not consider these aspects of emissions. We instead, consider only carbon emissions + from energy consumption. A given energy grid powering an experiment will have a carbon intensity. the grams of + + 6 One Watt is a unit of power – equivalent to one Joule per second. + 7 See discussion in AppendixCfor more information on why. + + CO2 emitted per kWh of energy used. This carbon intensity is determined based on the energy sources supplying the + grid. Each energy source has its own carbon intensity accounted for through a full life-cycle analysis (IPCC,2015). For + example, coal power has a median carbon intensity of 820 gCO 2eq / kWh, while hydroelectricity has a mean carbon + intensity of 24 gCO 2eq / kWh. Carbon emissions for a compute system can be estimated by understanding the carbon + intensity of the local energy grid and the energy consumption of the system. Similar analyses have been done for + bitcoin (Krause and Tolaymat,2018). These analyses, however, attempt to extrapolate impacts of bitcoin mining in + general, while in this work we attempt to examine machine learning impacts on a per-experiment basis. + + 3.3 Current State of Reporting in Machine Learning Research + + We briefly examine the current state of accounting in the machine learning literature and review commonly reported + computational metrics. Here we look at a non-exhaustive list of reported metrics from papers we surveyed and group + them into different categories. 
+ + •Energy + –Energy in Joules (Assran et al.,2019) + –Power consumption in Watts (Canziani et al.,2016) + •Compute + –PFLOPs-hr (Amodei and Hernandez,2018), the floating point operations per second needed to run the + experiment in one hour + –Floating Point Operations (FPOs) or Multiply-Additions (Madds), typically reported as the computations + required to perform one forward pass through a neural network (Howard et al.,2017;Sandler et al.,2018; + Schwartz et al.,2019) + –The number of parameters defined by a neural network (often reported together with FPOs) (Howard + et al.,2017;Sandler et al.,2018) + –GPU/CPU utilization as a percentage (Assran et al.,2019;Dalton et al.,2019) + –GPU-hours or CPU-hours, the processor cycles utilized (or in the case of the GPU percentage utilized), + times the runtime (Soboczenski et al.,2018) + •Runtime + –Inference time, the time it takes to run one forward pass through a neural network, (Jeon and Kim,2018; + Qin et al.,2018) + –Wall clock training time, the total time it takes to train a network (Assran et al.,2019;Dalton et al.,2019). + –Hardware and time together (e.g., 8 v100 GPUs for 5 days) (Krizhevsky et al.,2012;Ott et al.,2018; + Gehring et al.,2017) + •Carbon Emissions + –US-average carbon emissions (Strubell et al.,2019) + + Example 1 To get a rough estimate of the prevalence of these metrics, we randomly sampled 100 NeurIPS papers from + the 2019 proceedings. In addition to the metrics above, we also investigate whether hardware information was reported + (important for extrapolating energy and carbon information with partial information). Of these papers, we found 1 + measured energy in some way, 45 measured runtime in some way, 46 provided the hardware used, 17 provided some + measure of computational complexity (e.g., compute-time, FPOs, parameters), and 0 provided carbon metrics. See + Appendix B for more details on methodology. + + Some of these metrics, when combined, can also be used to roughly estimate energy or carbon metrics. For example, + the experiment time (h) can be multiplied by the thermal design power (TDP) of the GPUs used (W) 8 . This results + in a Watt-hour energy metric. This can then be multiplied by the carbon intensity of the local energy grid to assess + the amount ofCO 2eq emitted. This method of estimation omits CPU usage and assumes a 100% GPU utilization. + Alternatively, Amodei and Hernandez(2018) use a utilization factor of 33% for GPUs. Similarly, the PFLOPs-hr metric + can by multiplied by TDP (Watts) and divided by the maximum computational throughput of the GPU (in PFLOPs). + This once again provides a Watt-hour energy metric. This, however, makes assumptions based on maximum efficiency + of a GPU and disregards variations in optimizations made by underlying frameworks (e.g., Tensorflow versus Pytorch; + AMD versus NVIDIA drivers). + + 8 This is a rough estimate of the maximum operating capacity of a GPU. + + As we will demonstrate using our framework (see Section5.2), the assumptions of these estimation methods lead to + significant inaccuracies. However, aggregating all necessary accounting information is not straightforward or easy; it + requires finding compatible tools, handling nuances on shared machines, among other challenges. + It is worth noting that some metrics focus on the computational requirements of training (which require additional + resources to compute gradients and backpropagate, in the case of neural networks) versus the computational requirements + of inference. 
The former is often more energy and carbon intensive in machine learning research, while the later is more + intensive in production systems (the cost of training is insignificant when compared to the lifetime costs of running + inference millions of times per day, every day). We will remain largely agnostic to this differentiation until some + discussions in Sections6.2and7.2. + + 4 A New Framework for Tracking Machine Learning Impacts + + 4.1 Motivation + + The goal of our experiment-impact-tracker framework is to provide an easy to deploy, reproducible, and quickly + understood mechanism for all machine learning papers to report carbon impact summaries, along with additional + appendices showing detailed energy, carbon, and compute metrics. + + Example 2A carbon impact summary generated by our framework can be found at the end of this paper in the Carbon + Impact Statement section. In brief, the experiments in our paper contributed 8.021 kg ofCO 2eq to the atmosphere and + used 24.344 kWh of electricity, having a USA-specific social cost of carbon of $0.38 ($0.00, $0.95) (Ricke et al.,2018). + + Such statements and informational reporting are important for, among other reasons, awareness, aligning incentives, + and enabling accurate cost-benefit analyses. + Awareness. Informational labels and awareness campaigns have been shown to be effective drivers of eco-friendly + behaviors (depending on the context) (Banerjee and Solomon,2003;Sundar et al.,2018;Newell and Siikamäki,2014; + Byerly et al.,2018). Without consistent and accurate accounting, many researchers will simply be unaware of the + impacts their models might have and will not pursue mitigating strategies. Consistent reporting also may provide social + incentives to reduce carbon impacts in research communities. + Aligning Incentives. While current reporting often focuses solely on performance metrics (accuracy in classification, + perplexity in language modeling, average return in reinforcement learning, etc), standardized reporting of energy in + addition to these metrics aligns incentives towards energy efficient models in research output (Schwartz et al.,2019). + Those who accurately report carbon emissions may have more incentive to reduce their carbon footprint. This may also + drive traffic to low-emission regions, spurring construction of more carbon-friendly data centers. 9 + + Cost-Benefit Analysis and Meta-Analysis. Cost-benefit analyses can be conducted with accurate energy metrics + reporting, but are impossible without it. For example, the estimated generated revenue of a model can be weighed + against the cost of electricity. In the case of models suggested by Rolnick et al.(2019), the carbon emissions saved by a + model can be weighed against the emissions generated by the model. Consistent reporting also opens the possibility for + performing meta-analyses on energy and carbon impacts (Henderson and Brunskill,2018). Larger extrapolations to + field-wide impacts of research conferences can also be assessed with more frequent reporting. + + 4.2 Design Considerations + + We consider five main principles when designing the framework for systematic reporting. usability, interpretability, + extensibility, reproducibility, and fault tolerance. + Usability. Perceived ease-of-use can be an important factor in adoption of new technologies and methods (Gefen and + Straub,2000). 
Since gathering key energy (kWh) and carbon (CO 2eq ) metrics requires specific knowledge about – and + aggregation of – different sources of information, there may be a barrier to the ease-of-use in the current status quo. As + a result, a core design consideration in developing tools for these metrics is usability, or ease-of-use. We accomplish + this by abstracting away and distilling required knowledge of information sources, keeping amount of required action + from the user to a minimum. + Interpretability. Along with ease-of-use, a key factor in adoption is perceived usefulness (Gefen and Straub,2000). + Since we wish for the reporting of carbon and energy metrics to become widespread, we consider perceived usefulness + + 9 See discussion in Section6.2on regional carbon emission differences. See discussion by LaRiviere et al.(2016) on how more accurate carbon accounting can result in reduced carbon emissions. + + through interpretability. We aim to make reporting tools within the framework useful through simple generation of + graphs and web pages from metrics for easy interpretation. We also provide a mechanism to generate a carbon impact + statement with the social cost of carbon. This dollar amount represents the projected damage from the experiment’s + carbon emissions and helps ground results in values that may be more interpretable. + Extensibility.We design the framework in a modular fashion to handle evolving driver support (see Section5) and + new metrics. The ML community can add new metrics, carbon intensity information, and other capabilities easily. For + each metric, a central data router stores a description, the function which gathers metric data, and a list of compatibility + checks (e.g., the metric can only be gathered on a Linux system). New metrics can be added to this router. 10 Similarly, + new carbon region and electricity grid information can be added as needed to similar centralized locations. 11 + + Reproducibility. Running an algorithm on different sets of hardware has been shown to affect the reproducibility of + algorithmic results (Gundersen and Kjensmo,2018;Sukhoy and Stoytchev,2019). Our framework aides in automating + reproducibility by logging additional metrics like hardware information, Python package versions, etc. These metrics can + help future work assess statistically significant differences in model energy requirements by accounting for controlled + and random variates (Boquet et al.,2019). + Fault tolerance.Mistakes in software are inevitable – as is discussed inSidor and Schulman(2017). We try to log all + rawinformation so that accounting can be recreated and updated based on new information. We also log the version + number of the tool itself, to ensure future comparisons do not mismatch information between versions that may have + changed. + + 4.3 Proposed Framework + + Theexperiment-impact-trackerrequires a simple code change to automatically gather available metrics and a script to + generate online appendices for reporting the data. Currently, on compatible Linux systems, we gather. 
+ + •all python packages and version numbers + •CPU and GPU hardware information + •experiment start and end-times + •the version of theexperiment-impact-trackerframework used + •the energy grid region the experiment is being run in (based on IP address) + •the average carbon intensity in the energy grid region + •CPU- and GPU-package power draw + •per-process utilization of CPUs and GPUs + •GPU performance states + •memory usage + •the realtime CPU frequency (in Hz) + •realtime carbon intensity (only supported in CA right now) + •disk write speed + + The code change required for immediate logging of metrics can be seen in Listing 1. In the background, the framework + launches a thread which polls system supported tools. For example, the thread pollspsutil(Rodola,2016) for measuring + CPU utilization. All of these metrics are logged in parallel with the main machine learning process as described in + Figure1. A script 12 is provided to generate an HTML web page showing graphs and tables for all these metrics, meant + to serve as an online appendix for research papers. 13 Results in the generated appendix can be aggregated across + multiple experiments to show averages along with standard error as recommended in prior work (Henderson et al., + 2018;Colas et al.,2018;Reimers and Gurevych,2017). + + 10 Seehttps.//breakend.github.io/experiment-impact-tracker/contributing_new_metric.html + 11 Seehttps.//breakend.github.io/experiment-impact-tracker/contributing_carbon_region.html. + 12 https.//github.com/Breakend/experiment-impact-tracker/blob/master/scripts/create-compute-appendix + 13 Appendices generated by our framework for Figure7and Figure3are available at.https.//breakend.github.io/ClimateChangeFromMachineLearningResearch/measuring_and_mitigating_energy_and_carbon_footprints_in_machine_learning/. Experiments in Figure5are available athttps.//breakend.github.io/RL-Energy-Leaderboard/ + reinforcement_learning_energy_leaderboard/index.html. + + <> + + Listing 1. Simple code addition required to log experiment details via our framework. + + + + <> + + Figure 1. A diagram demonstrating how the released version of the tool works. The main process launches a monitoring + thread which iterates over a list of metrics associated with function calls to other tools. For example, if available, we + call Intel RAPL to collect CPU power draw or querycaiso.orgto get realtime carbon intensity data for California. + Once all the data that is compatible with the current system is gathered, it is logged to a standardized log file and the + process repeats. The main thread may check in on this thread for exceptions, but the thread will not interrupt the main + process. Once the main thread exits, anatexithook (which is called whenever the main process exits, either successfully + or through an exception) gathers the final information (such as the time the experiment ended), logs it, and then ends + both the monitor and main process. + + + 4.3.1 Tracking Energy Consumption + Different hardware vendors provide different tooling for tracking energy consumption. Our framework hides these + complications from users. We currently use Intel’s RAPL tool with the powercap interface (David et al.,2010) to gather + CPU/DRAM power draw and Nvidia’snvidia-smi 14 for GPU power draw. We usepsutilfor gathering per-process CPU + utilization andnvidia-smifor per-process GPU utilization. 
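    To make the polling design described in Figure 1 more concrete, the following is a minimal sketch of how such a
    background monitor could be written on Linux. It is an illustration of the general pattern only, not the framework's
    actual implementation; the RAPL sysfs path, the log format, and the one-second polling interval are assumptions that
    vary across systems.

    import atexit
    import json
    import subprocess
    import threading
    import time

    import psutil  # third-party dependency for process/system utilization

    RAPL_ENERGY_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"  # assumed path; differs by machine
    LOG_PATH = "impact_log.jsonl"  # assumed log location/format

    def read_cpu_package_energy_uj():
        # Cumulative CPU package energy (microjoules) from the RAPL powercap interface.
        with open(RAPL_ENERGY_FILE) as f:
            return int(f.read().strip())

    def read_gpu_power_watts():
        # Instantaneous board power draw reported by nvidia-smi, one value per GPU.
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=power.draw", "--format=csv,noheader,nounits"])
        return [float(v) for v in out.decode().split()]

    def _log(record):
        with open(LOG_PATH, "a") as log:
            log.write(json.dumps(record) + "\n")

    def monitor(pid, interval_s=1.0):
        proc = psutil.Process(pid)
        while True:
            _log({
                "time": time.time(),
                "cpu_energy_uj": read_cpu_package_energy_uj(),
                "gpu_power_w": read_gpu_power_watts(),
                # Per-process vs. system-wide CPU share, later used to credit energy to this run.
                "proc_cpu_percent": proc.cpu_percent(interval=None),
                "sys_cpu_percent": psutil.cpu_percent(interval=None),
            })
            time.sleep(interval_s)

    def launch_monitor():
        # Daemon thread: polls in the background and exits with the main process.
        thread = threading.Thread(target=monitor, args=(psutil.Process().pid,), daemon=True)
        thread.start()
        atexit.register(lambda: _log({"time": time.time(), "event": "experiment_end"}))
        return thread

    A full tracker would additionally difference the cumulative RAPL counter between samples (handling its wrap-around),
    record per-process GPU utilization, and log the hardware, package, and region metadata listed above.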
We found that on a shared machine – as when running a
    job on Slurm – using Intel’s RAPL would provide energy metrics for the entire machine (including other jobs running
    on the worker). If two experiments were launched with Slurm to the same worker, using measurements from RAPL
    without corrections would double count energy usage from the CPU.
    As a result, we assign energy credits on a per-process basis (though we log system-wide information as well). We
    track the parent process, and any children spawned. Power credits are provided based on relative usage of system
    resources. If a process uses 25% of the CPU (relative to the entire system’s usage), we will credit the process with 25%
    of the CPU-based power draw. This ensures that any non-experiment-related background processes – software updates,
    weekly jobs, or multiple experiments on the same machine – will not be taken into account during training.

    14 https://developer.nvidia.com/nvidia-system-management-interface

    We calculate total energy as:

    e_total = PUE * Σ_resource (p_resource * e_resource)    (1)

    where p_resource are the percentages of each system resource used by the attributable processes relative to the total in-use
    resources and e_resource is the energy usage of that resource. This is the per-process equivalent of the method which
    Strubell et al. (2019) use. We assume the same constant power usage effectiveness (PUE) as Strubell et al. (2019). This
    value compensates for excess energy from cooling or heating the data-center.

    4.3.2 Carbon Accounting

    <
> + + Figure 2. Realtime carbon intensity (CO2 / kWh) collected during one experiment using our framework. As the + experiment continued, the sun rose in California, and with it the carbon intensity decreased. + + For calculating carbon emissions, we use the power estimate from the previous section in kilowatt-hours (kWh) and + multiply it by the carbon intensity of the local energy grid (CO2 / kWh). To gather carbon intensity metrics + for energy grids, we build on the open-source portions ofhttps.//www.electricitymap.organd define regions + based on map-based geometries, using the smallest bounding region for a given location as the carbon intensity + estimate of choice. For example, for an experiment run in San Francisco, if the average carbon intensity is available + for both the USA and California, the latter will be used. We estimate the region the experiment is conducted in + based on the machine’s IP address. Carbon intensities are gathered from the average fallback values provided in the + https.//www.electricitymap.orgcode where available and supplemented with additional metrics from various + governmental or corporate reports. We note thatelectricitymap.orgestimates are based on a closed-source system + and uses the methodology described byTranberg et al.(2019). All estimates fromelectricitymap.orgare of + the regional supply, rather than production (accounting for imports from other regions). Since https.//caiso.com + provides realtime intensities including imports for free, for experiments run in California, we also provide realtime + carbon intensity information. We do this by polling https.//caiso.com for the current intensity of the California + energy grid every five minutes. This helps gather even more accurate estimates of carbon emissions to account for daily + shifts in supply. For example, experiments run in California during the day time use roughly 2 of night-time experiments. + This is because much of California’s renewable energy comes from solar plants. Figure2is an automatically generated 3 + graph showing this phenomenon from an experiment using our framework. We hope that as users find more accurate + realtime or average measurements of regional supply-based carbon intensities, they will add them to the tool for even + more accurate measurements in the future. + + 5 The Importance and Challenges of Accounting. Why a New Framework? + + 5.1 FPOs Can Be Misleading + + Floating Point Operations (FPOs) are the de facto standard for reporting “efficiency” of a deep learning model (Schwartz + et al.,2019), and intuitively they should be correlated with energy efficiency – after all, fewer operations should result + in faster and more energy efficient processing. However, this is not always the case. + Previously,Jeon and Kim(2018) demonstrated mechanisms for constructing networks with larger FPOs, but lower + inference time – discussing the “Trap of FLOPs”. Similarly,Qin et al.(2018) show how Depthwise 3x3 Convolutions + comprised just 3.06% of an example network’s Multiply-Add operations, while utilizing 82.86% of the total training + time in the FPO-efficient MobileNet architectureHoward et al.(2017). Underlying optimizations at the firmware, deep + learning framework, memory, or even hardware level can change energy efficiency and run-time. This discrepancy has + led to Github Issues where users expect efficiency gains from FPO-efficient operations, but do not observe them. 
15

    Example 3 To investigate this empirically, we repeatedly run inference through pre-trained image classification models
    and measure FPOs, parameters, energy usage, and experiment length using the experiment-impact-tracker framework.
    As described in Figure 3, we find little correlation between FPOs and energy usage or experiment runtime when
    comparing across different neural network architectures. However, within an architecture – relying on the same
    operation types, but with different numbers of operations – FPOs are almost perfectly correlated with energy and
    runtime efficiency. Thus, while FPOs are useful for measuring relative ordering within architecture classes, they are not
    adequate on their own to measure energy or even runtime efficiency.

    <
> + + Figure 3. We run 50,000 rounds of inference on a single sampled image through pre-trained image classification models + and record kWh, experiment time, FPOs, and number of parameters (repeating 4 times on different random seeds). + References for models, code, and expanded experiment details can be found in AppendixD. We run a similar analysis + toCanziani et al.(2016) and find (left) that FPOs are not strongly correlated with energy consumption (R2 = 0.083, + Pearson 0.289) nor with time (R2 = 0.005, Pearson 0.074) when measured across different architectures. However, + within an architecture (right) correlations are much stronger. Only considering different versions of VGG, FPOs are + strongly correlated with energy (R2 =.999, Pearson 1.0) and time (R2 =.998, Pearson .999). Comparing parameters + against energy yields similar results (see AppendixDfor these results and plots against experiment runtime). + + + 5.2 Estimates with Partial Information Can Be Inaccurate + + The current state of accounting for energy and carbon varies across fields and papers (see Section 3). Few works, if any, + report all of the metrics that our framework collects. However, it is possible to extrapolate energy and carbon impacts + from some subsets of these metrics. This can give a very rough approximation of the energy used by an experiment in + kWh (see Section 3 for background). + + Example 4 We demonstrate how several such estimation methods compare against the more fine-grained accounting + methods we describe in Section4.16 As seen in Figure4, we find significant differences from when we track all data + (as through theexperiment-impact-trackerframework) to when we use partial data to extrapolate energy and carbon + emissions. Only using GPUs and the experiment time ignores memory or CPU effects; only using the average case US + region ignores regional differences. More details for this experiment can be found in AppendixE. + + We also note that the possible estimation differences in Figure4do not include possible errors from counting multiple + processes at once, as described in Section4.3.1. Clearly, without detailed accounting, it is easy to severely over- or + underestimate carbon or energy emissions by extrapolating from partial information. + 15 See for example.https.//github.com/tensorflow/tensorflow/issues/12132andhttps.//github.com/tensorflow/tensorflow/issues/12940 + 16 We also provide a script to do the rough calculation of energy and carbon footprints based on GPU type, IP address (which + is used to retrieve the location of the machine and that region’s carbon intensity), experiment length, and utilization factor. + https.//github.com/Breakend/experiment-impact-tracker/blob/master/scripts/get-rough-emissions-estimate + + <
> + + Figure 4. We compare carbon emissions (left) and kWh (right) of our Pong PPO experiment (see AppendixEfor more + details) by using different estimation methods. By only using country wide or even regional average estimates, carbon + emissions may be over or under-estimated (respectively). Similarly, by using partial information to estimate energy + usage (right, for more information about the estimation methods see AppendixE), estimates significantly differ from + when collecting all data in real time (as in our method). Clearly, without detailed accounting, it is easy to over- or + under-estimate carbon or energy emissions in a number of situations. Stars indicate level of significance. * p < .05, ** p + < .01, *** p < .001, **** p < .0001. Annotation provided via.https.//github.com/webermarcolivier/statannot. + + + 6 Encouraging Efficiency and Mitigating Carbon Impacts. Immediate Mitigation Strategies + + With experiment-impact-tracker, we hope to ease the burden of standardized reporting. We have demonstrated + differences in more detailed estimation strategies from the current status quo. In this Section, we examine how accurate + reporting can be used to drive immediate mitigating strategies for energy consumption and carbon emissions. + + 6.1 Energy Efficiency Leaderboards + + A body of recent work has emphasized making more computationally efficient models (Wu et al.,2019;Coleman + et al.,2019;Jiang et al.,2019), yet another line of work has focused on the opposite. building larger models with + more parameters to tackle more complex tasks (Amodei and Hernandez,2018;Sutton,2019). We suggest leaderboards + which utilize carbon emissions and energy metrics to promote an informed balance of performance and efficiency. + DawnBench (Wu et al.,2019) has done this in terms of runtime and cost, 17 but by doing the same for energy and carbon + emissions, baseline implementations can converge to more efficient climate-friendly settings. This can also help spread + information about the most energy and climate-friendly combinations of hardware, software, and algorithms such that + new work can be built on top of these systems instead of more energy-hungry configurations. + A Deep RL Energy Leaderboard. + To demonstrate how energy leaderboards can be used to disseminate information on energy efficiency, we create a Deep + RL Energy Leaderboard. 18 The website is generated using the same tool for creating HTML appendices described in + Section4. All information (except for algorithm performance on tasks) comes from theexperiment-impact-tracker + framework. We populate the leaderboard for two common RL benchmarking environments, PongNoFrameskip-v4 and + BreakNoFrameskip-v4 (Bellemare et al.,2013;Brockman et al.,2016;Mnih et al.,2013), and four baseline algorithms, + PPO (Schulman et al.,2017), A2C (Mnih et al.,2016), A2C with V-Traces (Espeholt et al.,2018;Dalton et al.,2019), + and DQN (Mnih et al.,2013). The experimental details and results can also be found in Figure5. We find that no + algorithm is the energy efficiency winner across both environments, though the PPO implementation provided byHill + et al.(2018) attains balance between efficiency and performance when using default settings across algorithms. + + Example 5To see how such a leaderboard might help save energy, consider a Deep RL class of 235 students. 19 For a + homework assignment, each student must run an algorithm 5 times on Pong. 
The class would save 888 kWh of energy

    17 For image classification and question answering tasks.
    18 https://breakend.github.io/RL-Energy-Leaderboard/reinforcement_learning_energy_leaderboard/index.html
    19 See for example, Stanford’s CS 234.

    <
> + + Figure 5. We evaluate A2C, PPO, DQN, and A2C+VTraces on PongNoFrameskip-v4 (left) and BreakoutNoFrameskip- + v4 (right), two common evaluation environments included in OpenAI Gym. We train for only 5M timesteps, less than + prior work, to encourage energy efficiency and evaluate for 25 episodes every 250k timesteps. We show the Average + Return across all evaluations throughout training (giving some measure of both ability and speed of convergence of an + algorithm) as compared to the total energy in kWh. Weighted rankings of Average Return per kWh place A2C+Vtrace + first on Pong and PPO first on Breakout. Using PPO versus DQN can yield significant energy savings, while retaining + performance on both environments (in the 5M samples regime). See AppendixFfor more details and results in terms of + asymptotic performance. + + + by using PPO versus DQN, while achieving similar performance. 20 This is roughly the same amount needed to power a + US home for one month. 21 + + We, thus, encourage the community to submit more data to the leaderboard to find even more energy efficient algorithms + and configurations. + + 6.2 Running In Carbon-Friendly Regions + + We noted in Section4that it is important to assess which energy grid experiments are run on due to the large differences + in carbon emissions between energy grids. Figure6showsCO 2eq intensities for an assortment of locations, cloud- + provider regions, and energy production methods. We note that an immediate drop in carbon emission can be made by + moving all training jobs to carbon-efficient energy grids. In particular, Quebec is the cleanest available cloud region + to our knowledge. Running a job in Quebec would result in carbon emission 30x lower than running a job in Estonia + (based on 2017 averages). + + Example 6To demonstrate this in practice, we run inference on two translation models 1000 times and measure energy + usage. We extrapolate the amount of emissions and the difference between the two algorithms if run in different energy + grids, seen in Figure7. The absolute difference in emissions between the two models is fairly small (though significant) + if run in Quebec (.09 gCO 2eq ), yet the gap increases as one runs the jobs in less carbon-friendly regions (at 3.04 g + CO 2eq in Estonia). + + We provide a script with our framework to show all cloud provider region with emission statistics to make this decision- + making process easier. 22 We note thatLacoste et al.(2019) provide a website using partial information estimation to + extrapolate carbon emissions based on cloud provider region, GPU type, and experiment length in hours. Their tool + may also be used for estimating carbon emissions in cloud-based experiments ahead of time. + For companies that train and deploy large models often, shifting these resources is especially important. ML training + is not usually latency bound. companies can run training in cloud regions geographically far away since training + models usually does not require round trip communication requirements. Contrary to some opinions, 23 there is not a + necessary need to eliminate computation-heavy models entirely, as shifting training resources to low carbon regions will + immediately reduce carbon emissions with little impact to production systems. For companies seeking to hit climate + + 20 These rankings may change with different code-bases and hyperparameters. 
21 https://www.eia.gov/tools/faqs/faq.php?id=97&t=3
    22 See the get-region-emissions-info script and the lookup-cloud-region-info script.
    23 https://www.theguardian.com/technology/2019/sep/17/tech-climate-change-luddites-data

    <
> + + Figure 6. Carbon Intensity (gCO 2eq /kWh) of selected energy grid regions is shown from least carbon emissions (left) to + most carbon emissions (right). Red/unshaded boxes indicate carbon intensities of cloud provider regions. Blue/shaded + boxes indicate carbon intensities of various generation methods. Oil shale is the most carbon emitting method of energy + production in the Figure. Estonia is powered mainly by oil shale and thus is close to it in carbon intensity. Similarly, + Québec is mostly powered by hydroelectric methods and is close to it in carbon intensity. Cloud provider carbon + intensities are based on the regional energy grid in which they are located. Thus, us-west-1, located in California, has + the same carbon intensity as the state. Seehttps.//github.com/Breakend/experiment-impact-tracker/for + data sources of regional information. Energy source information fromKrey et al.(2014);International Energy Agency + (2015). + + + + change policy targets, promotion of carbon neutral regions and shifting of all machine learning systems to those regions + would accelerate reaching targets significantly and reduce the amount of offset purchasing required to meet goals (thus + saving resources). 24 It is worth noting that some companies like Google already purchase offsets (Google,2016), so it + may be unclear why shifting resources is necessary. We provide an extended discussion on this in AppendixC. As a + matter of total emissions reductions, running compute in carbon-friendly regions prevents emissions now, while offsets + may not come into effect for several years. Moreover, continuing offset purchasing at current levels, while shifting + resources to green regions would result in a net-negative carbon footprint. + + + + 7 Discussion. Systemic Changes + + + We demonstrated several use cases for accounting which can drive immediate mitigation strategies. However, the + question remains. how can we encourage systemic changes which lead to energy and carbon efficiency in ML systems? + + + 7.1 Green Defaults for Common Platforms and Tools + + Energy leaderboards help provide information on energy efficient configurations for the whole stack. However, to truly + spread energy efficient configurations, underlying frameworks should by default use the most energy-efficient settings + possible. This has been shown to be an effective way to drive pro-environmental behavior (Pichert and Katsikopoulos, + 2008). For example, Nvidia apex provides easy mixed-precision computing as an add-on which yields efficiency + gains. 25 However, it requires knowing this and using it.Merity(2019) also discusses the current difficulties in using + highly efficient components. Making such resources supported as defaults in frequently used frameworks, like PyTorch, + would immediately improve the efficiency of all downstream projects. We encourage maintainers of large projects to + prioritize and support such changes. + + + + 24 See, for example, Amazon’s goal.https.//press.aboutamazon.com/news-releases/news-release-details/amazon-co-founds-climate- + pledge-setting-goal-meet-paris + 25 https.//github.com/NVIDIA/apex + + <
> + + Figure 7. We use pre-trained En-Fr translation models downloaded from PyTorch Hub. a convolutional network (Gehring + et al.,2017) and transformer (Ott et al.,2018). We generate 1000 random sequences either between 3-50 words in + length using the essential_generators Python package.https.//pypi.org/project/essential-generators/. + We repeat with 20 random seeds. [Left] We show the true difference in energy consumption. [Right] We show estimated + kgCO 2eq released if the experiment had been conducted in a number of increasingly carbon-intensive energy grids. + Differences remain significant throughout, but the absolute difference increases as more carbon-intensive regions are + assumed. + + 7.2 How much is your performance gain worth? Balancing gains with cost + + While training jobs can easily be shifted to run in clean regions, there are often restrictions for inference-time use of + machine learning models which prevent such a move. Many companies are deploying large machine learning models + powered by GPUs for everyday services. + + Example 7 Production translation services, can process 100B words per day (Turovsky,2016). roughly 4.2 million + times our experiment in Figure 7. If all translation traffic were in Estonia, 12,768 kgCO 2eq (the carbon sequestered by + 16.7 acres of forest in one year (Agency,2008)) would be saved per day by using the more efficient model, yet if all + traffic were in Québec, 378 kgCO 2eq would be saved (the carbon sequestered by .5 acres of forest in one year (Agency, + 2008)). Considering the amounts of required compute, small differences in efficiency can scale to large emissions and + energy impacts. + + These services are latency-bound at inference time and thus cannot mitigate carbon emissions by shifting to different + regions. Instead, energy-efficiency is key. We encourage companies to consider weighing energy costs (both social and + monetary) with the performance gains of a new model before deploying it. In the case of our translation experiment in + Figure7, the pre-trained convolutional model we use is significantly more energy hungry across than the transformer + model we use. When deploying a new energy-hungry translation model, we ask companies to consider is the BLEU + score improvement really worth the energy cost of deploying it? Are there ways to route to different models to balance + this trade-off? For example, suppose an energy-hungry model only improves performance in some subset of the data. + Routing to this model only in that subset would maximize performance while minimizing energy footprint. We note + that considering such trade-offs is of increased importance for models aiming to reduce carbon emissions as described + by Rolnick et al.(2019). Deploying a large deep learning model for, say, improving the energy efficiency of a building, + is not worth it if the energy costs of the model outweigh the gains. We also leave an open question to economists to + help assess the welfare benefits of gains on a particular machine learning metric (e.g., how much is BLEU score worth + in a translation service). This would allow the social welfare of the metric to be balanced against the social cost of + carbon (Ricke et al.,2018) for deployment decisions. + Central to all of these cost-benefit analyses are accurate accounting. Our tool provides one step in consistent and + accurate accounting for such purposes. 
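As a rough illustration of the kind of back-of-the-envelope analysis described above, the sketch below scales a per-query energy difference between two models up to production traffic, converts it to emissions under a regional carbon intensity, and prices the result with a social cost of carbon. All numeric inputs are hypothetical placeholders, not measurements from this paper.

```python
# Hypothetical deployment cost-benefit sketch (illustrative values only):
# scale a per-query energy gap to daily traffic, convert to emissions for a
# given grid, and price the result with a social cost of carbon.

def daily_kg_co2eq_saved(kwh_saved_per_query, queries_per_day, grams_co2eq_per_kwh):
    """kgCO2eq avoided per day by serving the more efficient model."""
    return kwh_saved_per_query * queries_per_day * grams_co2eq_per_kwh / 1000.0

def social_cost_usd(kg_co2eq, usd_per_tonne_co2eq):
    """Dollar value of avoided emissions under a social cost of carbon."""
    return kg_co2eq / 1000.0 * usd_per_tonne_co2eq

if __name__ == "__main__":
    kwh_gap = 1e-6          # assumed per-query energy gap between two models
    traffic = 1e9           # assumed queries per day
    for grid, intensity in [("low-carbon grid", 30.0), ("high-carbon grid", 900.0)]:
        saved = daily_kg_co2eq_saved(kwh_gap, traffic, intensity)
        print(f"{grid}: {saved:.1f} kgCO2eq/day "
              f"(~${social_cost_usd(saved, 40.0):.2f}/day at $40 per tCO2eq)")
```

An estimate of this form can then be weighed against the value of the metric improvement before a deployment decision is made.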
+ + 7.3 Efficient testing environments + + In Section7.1we discuss the adoption of green default configurations and Section7.2discusses cost-benefit analyses for + deployments. Another consideration particular to research – especially RL – is the selection of the most efficient testing + environments which assess the mechanism under test. For example, if an RL algorithm solves a particularly complex task + in an interesting way, like solving a maze environment, is there a way to demonstrate the same phenomenon in a more + efficient environment. Several works have developed efficient versions of RL environments which reduce run-times + significantly. In particular,Dalton et al.(2019) improve the efficiency of Atari experiments by keeping resources on + the GPU (and thus avoiding energy and time overheads from moving memory back and forth).Chevalier-Boisvert + et al.(2018) develop a lightweight Grid World environment with efficient runtimes for low-overhead experiments. An + important cost-benefit question for researchers is whether the same point can be proven in a more efficient setting. + + 7.4 Reproducibility + + A key aspect to our work is helping to promote reproducibility by aiding in consistent reporting of experimental details. + We encourage all researchers to release code and models (when it is socially and ethically responsible to do so), to + prevent further carbon emissions. Replicating results is an important, if not required, part of research. If replication + resources are not available, then more energy and emissions must be spent to replicate results – in the case of extremely + large models, the social cost of carbon may be equivalently large. Thus, we ask researchers to also consider energy and + environmental impacts from replication efforts, when weighing model and code release. We note that there may very + well be cases where safety makes this trade-off lean in the direction of withholding resources, but this is likely rare + in most current research. For production machine learning systems, we encourage developers to release models and + codebases internally within a company. This may encourage re-use rather than spending energy resources developing + similar products. + + 26 See for example, search which now uses transformer networks at both Microsoft and Google. + https.//www.blog.google/products/search/search-language-understanding-bert/andhttps.//azure.microsoft.com/en-us/blog/microsoft- + makes-it-easier-to-build-popular-language-representation-model-bert-at-large-scale/ + 27 Efficient routing of traffic to regions has been considered before byNguyen et al.(2012) andBerral et al.(2010). It may be + worth considering efficient routing of traffic to particular models as well. + + 7.5 Standardized reporting + + We suggest that all papers include standardized reporting of energy and carbon emissions. We also suggest adding a + Carbon Impact Statement at the end of papers (just like ours below) which estimates the carbon emissions of the paper. + This can be reported in a dollar amount via the country-specific social cost of carbonRicke et al.(2018). We provide a + script 28 to parse logs from theexperiment-impact-trackerframework and generate such a statement automatically. We + suggest this to spread awareness and bring such considerations to the forefront. We also emphasize that research, even + when compute intensive, is immensely important for progress. 
It is unknown what sequence of papers may inspire a + breakthrough (Stanley and Lehman,2015) which would reduce emissions by more than any suggestion here. While + emissions should be minimized when possible, we suggest that impact statements be only used for awareness. + We also suggest that, when developing features which visualize compute intensity for cloud or internal workloads, + developers consider providing built-in tools to visualize energy usage and carbon emissions. For example, the Colab + Research Environment shows RAM and Disk capacity, 29 but could also show and provide access to these other metrics + more easily. Providing similar informational labels (Byerly et al.,2018) within internal tooling could mitigate some + energy and carbon impacts within companies. + + 7.6 Badging + + Informational labeling has had a long history of being used in public policy (Banerjee and Solomon,2003). In the + USA, the “Energy Star” label has been used to guide customers to eco-friendly products. More recently, “badges” + rewarded by thePsychological Sciencejournal were shown to be effective, with a jump from 3% of articles reporting + open data to 39% one year later. ACM has introduced similar reproducibility badges. 30 With consistent reporting of + carbon and energy metrics, climate friendly research badges can be introduced by conferences to recognize any paper + that demonstrates a significant effort to mitigate its impacts. For example, a compute intensive paper, when showing + evidence of explicitly running resources in a clean region can be rewarded with such a badge. Another example badge + can be awarded to papers that create energy-friendly algorithms with similar performance as the state-of-the-art 31 . + The goal of these badges is to draw further attention to efficient versions of state-of-the-art systems and to encourage + mitigation efforts while, again, not punishing compute-intensive experiments. + + 7.7 Driver and Implementation Difficulties + + The experiment-impact-tracker framework abstracts away many of the previously mentioned difficulties in estimating + carbon and energy impacts. it handles routing to appropriate tools for collecting information, aggregates information + across tools to handle carbon calculations, finds carbon intensity information automatically, and corrects for multiple + processes on one machine. Yet, a few other challenges may be hidden by using the framework which remain difficult to + circumvent. + AsKhan et al.(2018) discuss, and we encounter ourselves, poor driver support makes tracking energy difficult. Not + every chipset supports RAPL, nor does every Linux kernel. Neither NVIDIA or Intel provide first party supported python + libraries for access to measurements.nvidia-smiper-process measurements in docker containers are not supported. 32 + A body of work has also looked at improving estimates of energy usage from RAPL by fitting a regression model to + real energy usage patterns (Povoa et al.,2019;Kavanagh and Djemame,2019;Ghosh et al.,2013;Song et al.,2013). + The Slurm workload manager provides an energy accounting plugin, 33 but requires administrator access to add. For + those without access to Slurm, Intel’s RAPL supports access to measurements through three mechanisms, but only one + of these (the powercap interface only available on Linux systems) does not require root access (see more discussion + byKhan et al.(2018)). 
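To illustrate the powercap route mentioned above, a minimal sketch of reading the package-level RAPL energy counters exposed by Linux follows; the exact sysfs paths, permissions, and counter semantics vary across kernels and chipsets, so this is an assumption-laden example rather than the experiment-impact-tracker implementation.

```python
# Minimal sketch of reading Intel RAPL package energy through the Linux
# powercap sysfs interface (assumed paths; availability and read permissions
# depend on the kernel and chipset, and counter wraparound is ignored here).
import glob
import time

def total_package_energy_uj():
    """Sum cumulative energy counters (microjoules) across RAPL packages."""
    total = 0
    for path in glob.glob("/sys/class/powercap/intel-rapl:?/energy_uj"):
        with open(path) as f:
            total += int(f.read().strip())
    return total

def average_power_watts(interval_s=1.0):
    """Approximate average CPU package power draw over an interval."""
    start = total_package_energy_uj()
    time.sleep(interval_s)
    return (total_package_energy_uj() - start) / 1e6 / interval_s

if __name__ == "__main__":
    print(f"~{average_power_watts(1.0):.1f} W average package draw over 1 s")
```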
To promote widespread reporting, we avoid any tool which requires administrative access or + would not be accessible on most Linux systems. Providing better supported tools for user-level access to power metrics + would make it possible to more robustly measure energy usage. Aggregating metrics and handling the intricacies of + these downstream tools requires time and knowledge. We try to abstract as much of these challenges away in the + experiment-impact-tracker, though some driver-related issues require driver developer support. + + 28 https.//github.com/Breakend/experiment-impact-tracker/blob/master/scripts/ + generate-carbon-impact-statement + 29 https.//colab.research.google.com/ + 30 https.//www.acm.org/publications/policies/artifact-review-badging + 31 See, for example,Clark et al.(2020) which creates a more efficient version of text encoder pre-training. + 32 https.//github.com/NVIDIA/nvidia-docker/issues/179#issuecomment-242150861 + 33 https.//slurm.schedmd.com/acct_gather_energy_plugins.html + + We also note that carbon intensities for machines in cloud data centers may not reflect the regional carbon intensities. + Some providers buy clean energy directly for some data centers, changing the realtime energy mix for that particular + data center. We were unable to find any information regarding realtime energy mixes in such cases and thus could not + account for these scenarios. If providers exposed realtime APIs for such information this would help in generating + more accurate estimates. Moreover, customized hardware in cloud provider regions does not always provide energy + accounting mechanisms or interfaces. If cloud providers supported libraries for custom hardware, this could be used for + more detailed accounting in a wider range of cloud-based compute scenarios + + 8 Concluding Remarks and Recommendations + + We have shown how theexperiment-impact-trackerand associated tools can help ease the burden of consistent + accounting and reporting of energy, compute, and carbon metrics; we encourage contribution to help expand the + framework. We hope the Deep RL Energy Leaderboard helps spread information on energy efficient algorithms and + encourages research in efficiency. While we focus on compute impacts of machine learning production and research, a + plethora of other work considers costs of transportation for conferences (Holden et al.,2017;Spinellis and Louridas, + 2013;Bossdorf et al.,2010) and compute hardware manufacturing (Venkatesan,2015). We encourage researchers and + companies to consider these other sources of carbon impacts as well. Finally, we recap several points that we have + highlighted in mitigating emissions and supporting consistent accountability. + What can machine learning researchers do? + + •Run cloud jobs in low carbon regions only (see Section6.2). + •Report metrics as we do here, make energy-efficient configurations more accessible by reporting these results + (see Section7.5). + •Work on energy-efficient systems, create energy leaderboards (see Section6). + •Release code and models whenever safe to do so (see Section7.4). + •Integrate energy efficient configurations as defaults in baseline implementations (see Section7.1). + •Encourage climate-friendly initiatives at conferences (see Sections7.6and7.5). + + What can industry machine learning developers and framework maintainers do? + + •Move training jobs to low carbon regions immediately. Make default launch configurations and documentation + point to low carbon regions (see Section6.2). 
+ •Provide more robust tooling for energy tracking and carbon intensities (see Section7.7). + •Integrate energy efficient operations as default in frameworks (see Section7.1). + •Release code and models (even just internally in the case of production systems) whenever safe to do so (see + Section7.4). + •Consider energy-based costs versus benefits of deploying new models (see Section7.2). + •Report model-related energy metrics (see Section7.5). + + We hope that regardless of which tool is used to account for carbon and energy emissions, the insights we provide here + will help promote responsible machine learning research and practices. + + Carbon Impact Statement + + This work contributed 8.021 kg ofCO 2eq to the atmosphere and used 24.344 kWh of electricity, having a + USA-specific social cost of carbon of $0.38 ($0.00, $0.95). Carbon accounting information can be found + here. https.//breakend.github.io/ClimateChangeFromMachineLearningResearch/measuring_and_ + mitigating_energy_and_carbon_footprints_in_machine_learning/ and https.//breakend.github. + io/RL-Energy-Leaderboard/reinforcement_learning_energy_leaderboard/index.html. The social cost + of carbon uses models from (Ricke et al.,2018). This statement and carbon emissions information was generated using + experiment-impact-trackerdescribed in this paper. + + References + US Environmental Protection Agency. Greenhouse gas equivalencies calculator, 2008. URLhttps.//www.epa.gov/ + energy/greenhouse-gas-equivalencies-calculator. + Judith I Ajani, Heather Keith, Margaret Blakers, Brendan G Mackey, and Helen P King. Comprehensive carbon stock + and flow accounting. a national framework to support climate change mitigation policy.Ecological Economics, 89. + 61–72, 2013. + Dario Amodei and Danny Hernandez. AI and Compute.https.//blog.openai.com/openai-five/, 2018. + Jane Andrew and Corinne Cortese. Accounting for climate change and the self-regulation of carbon disclosures. In + Accounting Forum, volume 35, pages 130–138. Taylor & Francis, 2011. + Mahmoud ("Mido") Assran, Joshua Romoff, Nicolas Ballas, Joelle Pineau, and Mike Rabbat. Gossip-based actor- + learner architectures for deep reinforcement learning. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, + E. Fox, and R. Garnett, editors,Advances in Neural Information Processing Systems 32, pages 13299–13309. Curran + Associates, Inc., 2019. + Miguel F. Astudillo and Hessam AzariJafari. Estimating the global warming emissions of the LCAXVII conference. + connecting flights matter.The International Journal of Life Cycle Assessment, 23(7).1512–1516, Jul 2018. ISSN + 1614-7502. + Abhijit Banerjee and Barry D Solomon. Eco-labeling for energy efficiency and sustainability. a meta-evaluation of us + programs.Energy policy, 31(2).109–123, 2003. + Valentin Bellassen and Nicolas Stephan.Accounting for Carbon. Cambridge University Press, 2015. + Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The Arcade Learning Environment. An + Evaluation Platform for General Agents.Journal of Artificial Intelligence Research, 47.253–279, 2013. + Josep Ll. Berral, Íñigo Goiri, Ramón Nou, Ferran Julià, Jordi Guitart, Ricard Gavaldà, and Jordi Torres. Towards energy- + aware scheduling in data centers using machine learning. InProceedings of the 1st International Conference on + Energy-Efficient Computing and Networking, e-Energy ’10, page 215–224, New York, NY, USA, 2010. Association + for Computing Machinery. ISBN 9781450300421. 
+ Thomas Boquet, Laure Delisle, Denis Kochetkov, Nathan Schucher, Parmida Atighehchian, Boris Oreshkin, and + Julien Cornebise. DECoVaC. Design of Experiments with Controlled Variability Components. arXiv preprint + arXiv.1909.09859, 2019. + Oliver Bossdorf, Madalin Parepa, and Markus Fischer. Climate-neutral ecology conferences. just do it!Trends in + Ecology & Evolution, 25(2).61, 2010. + Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. + OpenAI Gym, 2016. + Hilary Byerly, Andrew Balmford, Paul J Ferraro, Courtney Hammond Wagner, Elizabeth Palchak, Stephen Polasky, + Taylor H Ricketts, Aaron J Schwartz, and Brendan Fisher. Nudging pro-environmental behavior. evidence and + opportunities.Frontiers in Ecology and the Environment, 16(3).159–168, 2018. + Alfredo Canziani, Adam Paszke, and Eugenio Culurciello. An analysis of deep neural network models for practical + applications.arXiv preprint arXiv.1605.07678, 2016. + Ping Chao, Chao-Yang Kao, Yu-Shan Ruan, Chien-Hsiang Huang, and Youn-Long Lin. Hardnet. A low memory traffic + network. InProceedings of the IEEE International Conference on Computer Vision, pages 3552–3561, 2019. + Maxime Chevalier-Boisvert, Lucas Willems, and Suman Pal. Minimalistic Gridworld Environment for OpenAI Gym. + https.//github.com/maximecb/gym-minigrid, 2018. + Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. {ELECTRA}. Pre-training text encoders + as discriminators rather than generators. InInternational Conference on Learning Representations, 2020. + Cédric Colas, Olivier Sigaud, and Pierre-Yves Oudeyer. How many random seeds? statistical power analysis in deep + reinforcement learning experiments.arXiv preprint arXiv.1806.08295, 2018. + Cody Coleman, Daniel Kang, Deepak Narayanan, Luigi Nardi, Tian Zhao, Jian Zhang, Peter Bailis, Kunle Olukotun, + Chris Ré, and Matei Zaharia. Analysis of DAWNBench, a Time-to-Accuracy Machine Learning Performance + Benchmark.SIGOPS Oper. Syst. Rev., 53(1).14–25, July 2019. ISSN 0163-5980. + Julie Cotter, Muftah Najah, and Shihui Sophie Wang. Standardized reporting of climate change information in australia. + Sustainability accounting, management and policy journal, 2(2).294–321, 2011. + Thomas J Crowley. Causes of climate change over the past 1000 years.Science, 289(5477).270–277, 2000. + Steven Dalton, Iuri Frosio, and Michael Garland. GPU-Accelerated Atari Emulation for Reinforcement Learning, 2019. + Howard David, Eugene Gorbatov, Ulf R Hanebutte, Rahul Khanna, and Christian Le. RAPL. memory power estimation + and capping. In2010 ACM/IEEE International Symposium on Low-Power Electronics and Design (ISLPED), pages + 189–194. IEEE, 2010. + Miyuru Dayarathna, Yonggang Wen, and Rui Fan. Data center energy consumption modeling. A survey. IEEE + Communications Surveys & Tutorials, 18(1).732–794, 2015. + Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymyr Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, + Tim Harley, Iain Dunning, et al. IMPALA. Scalable Distributed Deep-RL with Importance Weighted Actor-Learner + Architectures. InInternational Conference on Machine Learning, pages 1406–1415, 2018. + David Gefen and Detmar W Straub. The relative importance of perceived ease of use in is adoption. A study of + e-commerce adoption.Journal of the association for Information Systems, 1(1).8, 2000. + Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. Convolutional sequence to sequence + learning. 
InProceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1243–1252. + JMLR. org, 2017. + Sayan Ghosh, Sunita Chandrasekaran, and Barbara Chapman. Statistical modeling of power/energy of scientific kernels + on a multi-gpu system. In2013 International Green Computing Conference Proceedings, pages 1–6. IEEE, 2013. + Google. Google’s Green PPAs. What, How, and Why.https.//static.googleusercontent.com/media/www. + google.com/en//green/pdfs/renewable-energy.pdf, 2013. + Google. Achieving Our 100% Renewable Energy Purchasing Goal and Going Be- + yond. https.//static.googleusercontent.com/media/www.google.com/en//green/pdf/ + achieving-100-renewable-energy-purchasing-goal.pdf, 2016. + Odd Erik Gundersen and Sigbjørn Kjensmo. State of the art. Reproducibility in artificial intelligence. InThirty-Second + AAAI Conference on Artificial Intelligence, 2018. + Leor Hackel and Gregg Sparkman. Evaluating the climate impact of psychological science. Costs and opportunities. + Affective Seminar, 2018. URLhttps.//osf.io/dg5ap/?show=view. + Peter Henderson and Emma Brunskill. Distilling information from a flood. A possibility for the use of meta-analysis + and systematic review in machine learning research. InCritiquing and Correcting Trends in Machine Learning + Workshop (CRACT) at NeurIPS, 2018. + Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement + learning that matters. InThirty-Second AAAI Conference on Artificial Intelligence, 2018. + Ashley Hill, Antonin Raffin, Maximilian Ernestus, Adam Gleave, Anssi Kanervisto, Rene Traore, Prafulla Dhariwal, + Christopher Hesse, Oleg Klimov, Alex Nichol, Matthias Plappert, Alec Radford, John Schulman, Szymon Sidor, and + Yuhuai Wu. Stable baselines.https.//github.com/hill-a/stable-baselines, 2018. + Matthew H Holden, Nathalie Butt, Alienor Chauvenet, Michaela Plein, Martin Stringer, and Iadine Chadès. Academic + conferences urgently need environmental policies.Nature ecology & evolution, 2017. + Nicolas Houy. Rational mining limits bitcoin emissions.Nature Climate Change, 9(9).655–655, 2019. + Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, + and Hartwig Adam. Mobilenets. Efficient convolutional neural networks for mobile vision applications.arXiv + preprint arXiv.1704.04861, 2017. + Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional + networks. InProceedings of the IEEE conference on computer vision and pattern recognition, pages 4700–4708, + 2017. + Forrest N Iandola, Song Han, Matthew W Moskewicz, Khalid Ashraf, William J Dally, and Kurt Keutzer. SqueezeNet. + AlexNet-level accuracy with 50x fewer parameters and< 0.5 MB model size.arXiv preprint arXiv.1602.07360, 2016. + International Energy Agency.CO2 Emissions from Fuel Combustion. 2015. + IPCC.Climate Change 2014. Mitigation of Climate Change. Working Group III Contribution to the IPCC Fifth + Assessment Report. Cambridge University Press, 2015. + IPCC.Global Warming of 1.5 °C. 2018. + Yunho Jeon and Junmo Kim. Constructing fast network through deconstruction of convolution. InAdvances in Neural + Information Processing Systems, pages 5951–5961, 2018. + Angela H. Jiang, Daniel L. K. Wong, Giulio Zhou, David G. Andersen, Jeffrey Dean, Gregory R. Ganger, Gauri Joshi, + Michael Kaminksy, Michael Kozuch, Zachary C. Lipton, and Padmanabhan Pillai. 
Accelerating Deep Learning by + Focusing on the Biggest Losers.arXiv e-prints, art. arXiv.1910.00762, Oct 2019. + Alex K Jones, Liang Liao, William O Collinge, Haifeng Xu, Laura A Schaefer, Amy E Landis, and Melissa M Bilec. + Green computing. A life cycle perspective. In2013 International Green Computing Conference Proceedings, pages + 1–6. IEEE, 2013. + Richard Kavanagh and Karim Djemame. Rapid and accurate energy models through calibration with ipmi and rapl. + Concurrency and Computation. Practice and Experience, 31(13).e5124, 2019. + Kashif Nizam Khan, Mikael Hirki, Tapio Niemi, Jukka K. Nurminen, and Zhonghong Ou. RAPL in Action. Experiences + in Using RAPL for Power Measurements.ACM Trans. Model. Perform. Eval. Comput. Syst., 3(2).9.1–9.26, March + 2018. ISSN 2376-3639. + Max J Krause and Thabet Tolaymat. Quantification of energy and carbon costs for mining cryptocurrencies.Nature + Sustainability, 1(11).711, 2018. + V. Krey, O. Masera, G. Blanford, T. Bruckner, R. Cooke, K. Fisher-Vanden, H. Haberl, E. Hertwich, E. Kriegler, + D. Mueller, S. Paltsev, L. Price, S. Schlömer, D. Ürge-Vorsatz, D. van Vuuren, and T. Zwickel. Annex 2 - metrics and + methodology. InClimate Change 2014. Mitigation of Climate Change. IPCC Working Group III Contribution to + AR5. Cambridge University Press, November 2014. URLhttp.//pure.iiasa.ac.at/id/eprint/11109/. + Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet Classification with Deep Convolutional Neural + Networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors,Advances in Neural Information + Processing Systems 25, pages 1097–1105. Curran Associates, Inc., 2012. + Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. Quantifying the carbon emissions of + machine learning.arXiv preprint arXiv.1910.09700, 2019. + Jacob LaRiviere, Gavin Mccormick, and Sho Kawano. How better accounting can more cheaply reduce carbon + emissions.Policy Brief, 4, 2016. + Jens Malmodin, Pernilla Bergmark, and Dag Lundén. The future carbon footprint of the ict and e&m sectors.on + Information and Communication Technologies, page 12, 2013. + Eric Masanet, Arman Shehabi, Nuoa Lei, Harald Vranken, Jonathan Koomey, and Jens Malmodin. Implausible + projections overestimate near-term bitcoin co2 emissions.Nature Climate Change, 9(9).653–654, 2019. + Stephen Merity. Single Headed Attention RNN. Stop Thinking With Your Head.arXiv preprint arXiv.1911.11423, + 2019. + Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin + Riedmiller. Playing Atari With Deep Reinforcement Learning. InNIPS Deep Learning Workshop. 2013. + Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, + and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. InInternational conference on + machine learning, pages 1928–1937, 2016. + Camilo Mora, Randi L Rollins, Katie Taladay, Michael B Kantar, Mason K Chock, Mio Shimada, and Erik C Franklin. + Bitcoin emissions alone could push global warming above 2 °C.Nature Climate Change, 8(11).931, 2018. + Richard G Newell and Juha Siikamäki. Nudging energy efficiency behavior. The role of information labels.Journal of + the Association of Environmental and Resource Economists, 1(4).555–598, 2014. + Kim Khoa Nguyen, Mohamed Cheriet, Mathieu Lemay, Victor Reijs, Andrew Mackarel, and Alin Pastrama. + Environmental-aware virtual data center network.Computer Networks, 2012. 
+ Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. Scaling neural machine translation. InProceedings of the + Third Conference on Machine Translation. Research Papers, Brussels, Belgium, 2018. Association for Computational + Linguistics. + Daniel Pichert and Konstantinos V. Katsikopoulos. Green defaults. Information presentation and pro-environmental + behaviour.Journal of Environmental Psychology, 28(1).63 – 73, 2008. ISSN 0272-4944. doi. https.//doi.org/10.1016/ + j.jenvp.2007.09.004. URLhttp.//www.sciencedirect.com/science/article/pii/S0272494407000758. + Lucas Venezian Povoa, Cesar Marcondes, and Hermes Senger. Modeling energy consumption based on resource + utilization. InInternational Conference on Computational Science and Its Applications, pages 225–240. Springer, + 2019. + Zheng Qin, Zhaoning Zhang, Dongsheng Li, Yiming Zhang, and Yuxing Peng. Diagonalwise Refactorization. An + Efficient Training Method for Depthwise Convolutions. In2018 International Joint Conference on Neural Networks + (IJCNN), pages 1–8. IEEE, 2018. + Celine Ramstein, Goran Dominioni, Sanaz Ettehad, Long Lam, Maurice Quant, Jialiang Zhang, Louis Mark, Sam + Nierop, Tom Berg, Paige Leuschner, et al. State and trends of carbon pricing 2019, 2019. + Nils Reimers and Iryna Gurevych. Reporting Score Distributions Makes a Difference. Performance Study of LSTM- + networks for Sequence Tagging. InEMNLP, 2017. + Katharine Ricke, Laurent Drouet, Ken Caldeira, and Massimo Tavoni. Country-level social cost of carbon.Nature + Climate Change, 2018. + Giampaolo Rodola. Psutil package. a cross-platform library for retrieving information on running processes and system + utilization, 2016. + David Rolnick, Priya L. Donti, Lynn H. Kaack, Kelly Kochanski, Alexandre Lacoste, Kris Sankaran, Andrew Slavin + Ross, Nikola Milojevic-Dupont, Natasha Jaques, Anna Waldman-Brown, Alexandra Luccioni, Tegan Maharaj, + Evan D. Sherwin, S. Karthik Mukkavilli, Konrad P. Kording, Carla Gomes, Andrew Y. Ng, Demis Hassabis, John C. + Platt, Felix Creutzig, Jennifer Chayes, and Yoshua Bengio. Tackling Climate Change with Machine Learning.arXiv + e-prints, art. arXiv.1906.05433, Jun 2019. + Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2. Inverted + residuals and linear bottlenecks. InProceedings of the IEEE Conference on Computer Vision and Pattern Recognition, + pages 4510–4520, 2018. + John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization + algorithms.arXiv preprint arXiv.1707.06347, 2017. + Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. Green AI.arXiv e-prints, art. arXiv.1907.10597, Jul + 2019. + Sam Shead. AI Researchers Left Disappointed As NIPS Sells Out In Under 12 Min- + utes. Forbes, Sep 2018. URL https.//www.forbes.com/sites/samshead/2018/09/05/ + ai-researchers-left-disappointed-as-nips-sells-out-in-under-12-minutes/#7dda67fc20e9. + Yoav Shoham, Erik Brynjolfsson, Jack Clark, John Etchemendy, Barbara Grosz, Terah Lyons, James Manyika, Saurabh + Mishra, and Juan Carlos Niebles. The ai index 2019 annual report.AI Index Steering Committee, Human-Centered + AI Initiative, Stanford University., 2019. + Szymon Sidor and John Schulman. Openai baselines. Dqn (blogpost). 2017. + Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition.arXiv + preprint arXiv.1409.1556, 2014. 
+ Frank Soboczenski, Michael D Himes, Molly D O’Beirne, Simone Zorzan, Atilim Gunes Baydin, Adam D Cobb, + Yarin Gal, Daniel Angerhausen, Massimo Mascaro, Giada N Arney, et al. Bayesian deep learning for exoplanet + atmospheric retrieval.arXiv preprint arXiv.1811.03390, 2018. + Shuaiwen Leon Song, Kevin Barker, and Darren Kerbyson. Unified performance and power modeling of scientific + workloads. InProceedings of the 1st International Workshop on Energy Efficient Supercomputing, page 4. ACM, + 2013. + Diomidis Spinellis and Panos Louridas. The carbon footprint of conference papers.PloS one, 8(6).e66508, 2013. + Kenneth O Stanley and Joel Lehman.Why greatness cannot be planned. The myth of the objective. Springer, 2015. + Kristin Stechemesser and Edeltraud Guenther. Carbon accounting. a systematic literature review.Journal of Cleaner + Production, 36.17–38, 2012. + Christian Stoll, Lena Klaaßen, and Ulrich Gallersdörfer. The carbon footprint of bitcoin.Joule, 2019. + Emma Strubell, Ananya Ganesh, and Andrew McCallum. Energy and Policy Considerations for Deep Learning in NLP. + arXiv preprint arXiv.1906.02243, 2019. + Vladimir Sukhoy and Alexander Stoytchev. Eliminating the Variability of Cross-Validation Results with LIBLINEAR + due to Randomization and Parallelization. 2019. + Shyam Sundar, Ashish Kumar Mishra, and Ram Naresh. Modeling the impact of media awareness programs on + mitigation of carbon dioxide emitted from automobiles.Modeling Earth Systems and Environment, 4(1).349–357, + 2018. + Richard Sutton. The bitter lesson.Incomplete Ideas (blog), March, 13, 2019. + Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent + Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. InComputer Vision and Pattern Recognition + (CVPR), 2015. + Samuel Tang and David Demeritt. Climate change and mandatory carbon reporting. Impacts on business process and + performance.Business Strategy and the Environment, 27(4).437–455, 2018. + Richard SJ Tol. The social cost of carbon.Annu. Rev. Resour. Econ., 3(1).419–443, 2011. + Bo Tranberg, Olivier Corradi, Bruno Lajoie, Thomas Gibon, Iain Staffell, and Gorm Bruun Andresen. Real-time carbon + accounting method for the european electricity markets.Energy Strategy Reviews, 26.100367, 2019. + Barak Turovsky. Ten years of Google Translate.Google Official Blog, 2016. + U.S. Environment Protection Agency. Social Cost of Carbon.https.//www.epa.gov/sites/production/files/2016- + 12/documents/social_cost_of_carbon_fact_sheet.pdf, 2013. + Chandramouli Venkatesan. Comparative Carbon Footprint Assessment of the Manufacturing and Use Phases of Two + Generations of AMD Accelerated Processing Units, 2015. + Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, and Michael Auli. Pay Less Attention with Lightweight and + Dynamic Convolutions. InInternational Conference on Learning Representations, 2019. + Michel Zade, Jonas Myklebost, Peter Tzscheutschler, and Ulrich Wagner. Is bitcoin the only problem? a scenario model + for the power demand of blockchains.Frontiers in Energy Research, 7, 2019. + Sergey Zagoruyko and Nikos Komodakis. Wide residual networks.arXiv preprint arXiv.1605.07146, 2016. + Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet. An extremely efficient convolutional neural + network for mobile devices. InProceedings of the IEEE Conference on Computer Vision and Pattern Recognition, + pages 6848–6856, 2018. 
+ + + A Conference Travel + + Prior work has also examined conference travel for various fields as a major source of impact Spinellis and Louridas + (2013); Astudillo and AzariJafari(2018);Hackel and Sparkman(2018). For example,Spinellis and Louridas(2013) + found that theCO 2eq emissions from travel per conference participant was about 801 kgCO 2eq ,Astudillo and AzariJafari + (2018) estimated around 883 kgCO 2eq emissions per participant, andHackel and Sparkman(2018) estimate around 910 + kg ofCO 2eq emissions per participant. Interestingly, these separate papers all align around the same carbon emissions + numbers per conference participant. Using this and ML conference participant statistics we can gain some (very) rough + insight into the carbon emissions caused by conference travel (not including food purchases, accommodations, and + travel within the conference city). + Conference participation has grown particularly popular in ML research, attracting participants from industry and + academia. In 2018 the Neural Information Processing Systems (NeurIPS) conference sold out registrations in 12 + minutes (Shead,2018). In 2019, according to the AI Index Report 2019 (Shoham et al.,2019), conferences had the + following attendance. CVPR (9,227); IJCAI (3,015); AAAI (3,227); NeurIPS (13,500); IROS (3,509); ICML (6,481); + ICLR (2,720); AAMAS (701); ICAPS (283); UAI (334). The larger conferences also showed continued growth. + NeurIPS showed a year-over-year growth 41% from 2018 to 2019. Given only these conferences and their attendances + in 2019, the lower 801kgCO 2eq average emissions estimate per participant (Spinellis and Louridas,2013), this adds up + to roughly 34,440,597 kgCO 2eq emitted in 2019 from ML-related conferences (not considering co-location and many + other factors). + + B NeurIPS Sampling on Metric Reporting + + We randomly sampled 100 NeurIPS papers from the 2019 proceedings, of these papers we found 1 mea- + sured energy in some way, 45 measured runtime in some way, 46 provided the hardware used, 17 pro- + vided some measure of computational complexity (e.g., compute-time, FPOs, parameters), and 0 pro- + vided carbon metrics. We sampled from the NeurIPS proceedings page. https.//papers.nips.cc/book/ + advances-in-neural-information-processing-systems-32-2019. We first automatically check for key + words (below) related to energy, compute, and carbon. We then examined the context of the word to classify it + as relating to hardware details (e.g., Nvidia Titan X GPU), computational efficiency (e.g., FPOs, MAdds, GPU-hours), + runtime (e.g., the experiment ran for 8 hours), energy (e.g., a plot of performance over Joules or Watts), or carbon (e.g., + we estimate 10 kg CO 2eq were emitted). We also manually validate papers for similar metrics that didn’t appear in the + keyword search. If a paper did not contain experiments we removed it and randomly redrew a new paper. In many cases, + metrics are only provided for some subset of experiments (or for particular ablation experiments). We nonetheless count + these as reporting the metric. Where a neural network diagram or architecture description was provided, we did not + consider this to be reporting a compute metric. 
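As a minimal sketch of the keyword pre-filter described above (the manual context check described above still follows it), one might scan each paper's extracted text against the category term lists given below:

```python
# Sketch of the keyword pre-filter used to flag candidate metric mentions;
# every hit was still inspected manually to classify its context.
import re

def flag_metric_categories(paper_text, term_lists):
    """Return the categories whose terms occur as whole words in the text."""
    words = set(re.findall(r"[a-z0-9\-]+", paper_text.lower()))
    return {cat for cat, terms in term_lists.items()
            if any(t in words for t in terms)}

# Example with abbreviated term lists (full lists below):
example_terms = {
    "energy": ["watt", "kwh", "joule", "energy", "power"],
    "carbon": ["co2", "carbon", "emissions"],
    "time": ["runtime", "run-time", "hours", "seconds"],
}
print(flag_metric_categories(
    "We report runtime in GPU hours and total energy in kWh.", example_terms))
```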
+ + compute_terms = ["flop", "fpo", "pflop", "tflops", "tflop", "parameters", "params", "pflops", "flops", "fpos", "gpu-hours", + "cpu-hours", "cpu-time", "gpu-time", "multiply-add", "madd"] + hardware_terms = ["nvidia", "intel", "amd", "radeon", "gtx", "titan", "v100", "tpu", "ryzen", "cpu", "gpu"] + time_terms = ["seconds", "second", "hour", "hours", "day", "days", "time", "experiment length", "run-time", "runtime"] + energy_terms = ["watt", "kWh", "joule", "joules", "wh", "kwhs", "watts", "rapl", "energy", "power"] + carbon_terms = ["co2", "carbon", "emissions"] + + C Carbon Discussion + + But cloud providers claim 100% carbon neutrality in my region, why do I need to shift my resources? + While we estimate energy mixes based on regional grids, cloud providers sometimes aim for carbonneutralitythrough + a mixture of mechanisms which may change the energy mix being provided to a data center in an otherwise carbon + intensive energy grid or otherwise offset unclean energy usage. Data centers draw energy from the local energy grids + and as a result the mix of energy they consume largely depends on the composition of the power running in the grids. If + the local energy grids are powered by a mix of fuel and renewable energy, a data center will inevitably consume fuel + energy as well. + Due to the fact that the consumers do not know the origin of the physical electricity from the utility grid, it is difficult to + assign ownership of the renewable energy consumption. The Environmental Protection Agency (EPA) uses renewable + energy certificates (RECs) to track the generation and consumption of renewable energy. one REC is issued when + one megawatt-hour (MWh) of electricity is generated from a renewable source and delivered to the energy grid. 34 + Consumers can then purchase RECs from a renewable energy provider and apply them to their electricity usage. This + means consumers can claim they run on renewable energy by purchasing RECs from providers that doesn’t actually + power the energy grids that they draw electricity from. Although this means that the consumers’ realtime carbon + footprints will still be decided by the composition of renewable and fuel energy in their local energy grids, more + renewable energy can flow onto the grid by purchasing the RECs and future development of renewable sources is + supported. Google, to offset its carbon emissions, uses RECs and power purchase agreements (PPAs) with renewable + energy providers to ensure that more renewable energy powers the same electricity grids that its data centers are in. 35 + Google then sells the renewable energy as it becomes available back to the electricity grids and strips away the RECs. + Over one year, Google applies equal amounts of RECs to its data centers’ total energy consumption. This method + helps green energy provider development by creating a long term demand. However, PPAs provide RECs forfuture + renewables, not only current energy on the grid which may remain unchanged. As it states. “While the renewable + facility output is not being used directly to power a Google data center, the PPA arrangement assures that additional + renewable generation sufficient to power the data center came on line in the area.” + We can see that even if a cloud provider’s data centers are carbon neutral, the actual CO2 eq emissions can vary largely + and depends on the region and even time of the day (solar energy cannot be generated at night). 
We suggest that cloud + providers release tools for understanding the carbon intensity for each data center region regardless of offset purchasing. + While the purchases of PPAs and RECs are valuable for driving towards renewable energy in otherwise dirty regions, + for machine learning model training, where the resources can be moved, we believe shifting resources to low intensity + regions is more beneficial to long term carbon impacts. Other cloud-based jobs where latency requirements prevent + shifting resources will remain to drive PPA/REC purchasing, and consequently renewable energy demand. + + D ImageNet Experiments + + We load pre-trained models available through PyTorch Hub (see https.//pytorch.org/hub) – namely + AlexNet (Krizhevsky et al.,2012), DenseNet (Huang et al.,2017), GoogLeNet (Szegedy et al.,2015), HardNet (Chao + et al.,2019), MobileNetv2 (Sandler et al.,2018), ShuffleNet (Zhang et al.,2018), SqueezeNet (Iandola et al.,2016), + VGG (Simonyan and Zisserman,2014), and Wide ResNets (Zagoruyko and Komodakis,2016). We run 50,000 rounds + of inference on a single image through pre-trained image classification models and run similar analysis toCanziani et al. + (2016). We repeat experiments on 4 random seeds. + + 34 https.//www.epa.gov/greenpower/renewable-energy-certificates-recs + 35 We note that this process is likely similar for most cloud providers, but Google is the most open with their methodology, so we + are able to gain more insight from the materials they publish. Information described here is mainly put together fromGoogle(2016) + andGoogle(2013). + 36 https.//static.googleusercontent.com/media/www.google.com/en/us/green/pdfs/renewable-energy.pdf + + We count flops and parameters using the thop package (for package version numbers see automated logs in the online + appendix linked above).https.//github.com/Lyken17/pytorch-OpCounter + Code for running the experiment is available at. https.//github.com/Breakend/ + ClimateChangeFromMachineLearningResearch/blob/master/paper_specific/run_inference.py + An online appendix showing all per-experiment details can be seen here. https.//breakend.github.io/ + ClimateChangeFromMachineLearningResearch/measuring_and_mitigating_energy_and_carbon_ + footprints_in_machine_learning/ + + The plot of FPOs versus runtime can be seen in Figure8and plots against number of parameters can be seen in Figure9. + Number of parameters similarly have no strong correlation with energy consumption (R2 = 0.002, Pearson 0.048), + nor time (R2 = 0.14, Pearson 0.373). We note that our runtime results likely differ fromCanziani et al.(2016) due to + the architectural differences in the model sets we use. + For parameter plots, see Figure9, for extended time and energy Figures, see Figure8. + + <
> + + Figure 8. We seek to investigate the connection between FPOs, energy usage, and experiment time, similarly toCanziani + et al.(2016). We run 50,000 rounds of inference on a single image through pre-trained image classification models + available through PyTorch Hub (seehttps.//pytorch.org/hub) – namely (Krizhevsky et al.,2012;Huang et al., + 2017;Szegedy et al.,2015;Chao et al.,2019;Sandler et al.,2018;Zhang et al.,2018;Iandola et al.,2016;Simonyan + and Zisserman,2014;Zagoruyko and Komodakis,2016). We record experiment time and the kWh of energy used to run + the experiments and repeat experiments 4 times, averaging results. We find that FPOs are not strongly correlated with + energy consumption (R2 = 0.083, Pearson0.289) nor with time (R2 = 0.005, Pearson 0.074). Number of parameters + (plotted in Appendix) similarly have no strong correlation with energy consumption (R2 = 0.002, Pearson 0.048), nor + time (R2 = 0.14, Pearson 0.373). We note, however, thatwithin an architecturecorrelations are much stronger. For + example, only considering different versions of VGG, FPOs are strongly correlated with energy (R2 =.999, Pearson + 1.0) and time (R2 =.998, Pearson .999). See Appendix for experiment details, code, and data links. Our runtime + results likely differ fromCanziani et al.(2016) due to the architectural differences in the model sets we use. + + E Estimation Methods + + We use our PPO Pong experiment (see AppendixFfor more details) as the experiment under comparison. For carbon + emission estimates, we use three estimation methods. realtime emissions data for California (collected by our framework + fromcaiso.org) times the power usage at that time integrated over the length of the experiment; multiplying total + energy usage recorded by our method by the California average carbon intensity; multiplying total energy usage + recorded by our method by the EPA US average carbon intensity (Strubell et al.,2019). For energy estimates, we use. + (1) the experiment time multiplied by the number of GPUs, a utilization factor of 1/3 or 1, and the Thermal Design + Power (TDP) – which can be thought of as the maximum Watt draw – of the GPU (Amodei and Hernandez,2018); (2) + the measured GPU-hrs of our tool multiplied by the TDP; a rough calculation of PFLOPs-hr (following the methodology + + <
> + + Figure 9. The same experiments as in Figure3, plotting parameters as the varying factor instead. See Figure3for + correlation values. + + + of (Amodei and Hernandez,2018) by the PFLOPs/TDP of the GPU; (3) our tool’s accounting method which tracks + energy from GPU readings, accounts for CPU time/energy, and measures utilization in realtime. + + F Reinforcement Learning + + We investigate the energy efficiency of four baseline RL algorithms. PPO (Hill et al.,2018;Schulman et al.,2017), + A2C (Hill et al.,2018;Mnih et al.,2016), A2C with VTraces (Espeholt et al.,2018;Dalton et al.,2019), and DQN (Hill + et al.,2018;Mnih et al.,2016). We evaluate on PongNoFrameskip-v4 (left) and BreakoutNoFrameskip-v4 (right), two + common evaluation environments included in OpenAI Gym (Bellemare et al.,2013;Brockman et al.,2016;Mnih et al., + 2013). + We train for only 5M timesteps, less than prior work, to encourage energy efficiency (Mnih et al.,2016,2013). We use + default settings from code provided in stable-baselines (Hill et al.,2018) and cule (Dalton et al.,2019), we only modify + evaluation code slightly. Modifications can be found here. + + •https.//github.com/Breakend/rl-baselines-zoo-1(for stable-baselines modifications) + •https.//github.com/Breakend/cule(for cule modifications) + + Since we compare both on-policy and off-policy methods, for fairness all evaluation is based on 25 separate rollouts + completed every 250k timesteps. This is to ensure parity across algorithms. We execute these in parallel together as + seen in the cule code.https.//github.com/Breakend/cule/blob/master/examples/a2c/test.py. + While average return across all evaluation episodes (e.g., averaging together the step at 250k timesteps and every + evaluation step until 5M timesteps) can be seen in the main text, the asymptotic return (for the final round of evaluation + episodes) can be seen Figure10. Plots comparing experiment runtime to asymptotic and average returns (respectively) + can be seen in Figure11and Figure12. + Our online leaderboard can be seen at. https.//breakend.github.io/RL-Energy-Leaderboard/ + reinforcement_learning_energy_leaderboard/index.html + We note that while DQN underperforms as compared to PPO here, better hyperparameters may be found such that DQN + is the more energy efficient algorithm. Moreover, we only use the 5M samples regime, whereas prior work has used + 10M or more samples for training, so DQN results seen here would correspond to earlier points in training in other + papers. + + <
> 

Figure 10. Asymptotic return on Pong (left) and Breakout (right).

<
> 

Figure 11. Asymptotic return as a function of experiment length for Pong (left) and Breakout (right).

<
> 

Figure 12. Average return as a function of experiment length for Pong (left) and Breakout (right).
<> <> <>


<> <> <>
vDNN: Virtualized Deep Neural Networks for Scalable, Memory-Efficient Neural Network Design

Minsoo Rhu, Natalia Gimelshein, Jason Clemons, Arslan Zulfiqar, Stephen W. Keckler
NVIDIA, Santa Clara, CA 95050
{mrhu, ngimelshein, jclemons, azulfiqar, skeckler}@nvidia.com

Published as a conference paper at the 49th IEEE/ACM International Symposium on Microarchitecture (MICRO-49), 2016.

Abstract

The most widely used machine learning frameworks require users to carefully tune their memory usage so that the deep neural network (DNN) fits into the DRAM capacity of a GPU. This restriction hampers a researcher's flexibility to study different machine learning algorithms, forcing them to either use a less desirable network architecture or parallelize the processing across multiple GPUs. We propose a runtime memory manager that virtualizes the memory usage of DNNs such that both GPU and CPU memory can simultaneously be utilized for training larger DNNs. Our virtualized DNN (vDNN) reduces the average GPU memory usage of AlexNet by up to 89%, OverFeat by 91%, and GoogLeNet by 95%, a significant reduction in the memory requirements of DNNs. Similar experiments on VGG-16, one of the deepest and most memory-hungry DNNs to date, demonstrate the memory-efficiency of our proposal. vDNN enables VGG-16 with batch size 256 (requiring 28 GB of memory) to be trained on a single NVIDIA Titan X GPU card containing 12 GB of memory, with 18% performance loss compared to a hypothetical, oracular GPU with enough memory to hold the entire DNN.

I. INTRODUCTION
Deep neural networks (DNNs) have recently been successfully deployed in various application domains such as computer vision [1], speech recognition [2], and natural language processing [3] thanks to their superior performance compared to traditional state-of-the-art approaches. Such proliferation of deep learning techniques has led several software frameworks to be developed in recent years to analyze and facilitate the design of neural networks [4, 5, 6, 7]. The list of available frameworks continues to expand, with developers constantly adding more features and improving computational efficiency to foster research in the area of deep learning. Due to the tremendous compute horsepower offered by graphics processing units (GPUs), these frameworks provide strong backend support for GPU software libraries such as cuDNN [8]. In fact, almost every group today involved in training neural networks is deploying GPUs for accelerated deep learning [9].
While these popular machine learning (ML) frameworks facilitate the study of DNNs, a major limitation of the use of these frameworks is that the DRAM capacity limits of the GPU(s) in the system eventually limit the size of the DNN that can be trained (Section II-C). To work around the memory capacity bottleneck [10, 11], ML practitioners must either use less desirable DNN architectures (e.g., smaller number of

<
> + +Fig. 1: GPU memory usage when using the baseline, network-wide allocation policy (left axis). The right axis shows the maximum fraction of this baseline allocation actually utilized when traversing through the network layer-wise. The numbers next to the names of each network refer to the batch size throughout this paper. Studied DNNs are detailed in Section IV-C. +layers, smaller batch sizes, less performing but more memory-Efficient convolutional algorithms) or parallelize the DNN across multiple GPUs [12]. Figure 1 highlights how the memory consumption trends of the ImageNet [13] winning DNNs have evolved over time. AlexNet [1], for instance, only contained 5 convolutional layers with 2 fully-connected layers and required a "mere" 1.1 GB of memory allocation for training, which is well below the 12 GB memory capacity of the state-of-the-art NVIDIA Titan X. The more recent VGG.16 [14], on the other hand, contains 16 convolutional layers and 3 fully-connected layers, incurring a total of 28 GB of memory usage for batch size 256. Because a single GPU can only accommodate a batch size of 64 for VGG-16, training with batch 256 requires parallelization across multiple GPUs or the network must be sequentially executed multiple times with smaller batches. With the most recent ImageNet winning network adopting more than a hundred convolutional layers [15], the trend in deep learning is to move towards larger and deeper network designs [14, 16, 17, 18]. As a result, alleviating the rigid physical memory limitations of GPUs is becoming increasingly important. +In this paper, we propose virtualized Deep Neural Network (vDNN), a runtime memory management solution that virtualizes the memory usage of deep neural networks across both GPU and CPU memories. Our vDNN allows ML practitioners to deploy larger and deeper networks beyond the physical +capacity of available GPUs, enabling them to focus more on their algorithms while the system architecture and run.time system transparently manage the allocation, placement, movement, and release of their data. The motivation behind vDNN is based on the following three key observations: + +1) DNNs trained via stochastic gradient-descent (SGD) are designed and structured with multiple layers [19]; 2) the training of these neural networks involves a series of layer-wise computations, the order of which is statically �xed and repeated for millions to billions of iterations throughout the entire training process; and 3) even though the GPU can, at any given time, only process a single layer's computation (due to the layer-wise computational characteristics of SGD-based DNN training), popular ML frameworks adopt a network-wide memory allocation policy because DNN training requires the intermediate feature maps of all the layers in the network to be backed up in GPU memory for gradient updates (Section II-C). In other words, existing memory management schemes overprovision the memory allocations to accommo.date the usage of the entire network layers, even though the GPU is only using a subset of this allocation for the layer-wise requirements. We observe that such memory underutilization issue becomes more severe for deeper networks, leading to 53% to 79% of allocated memory not being used at all at any given time (Figure 1). The goal of vDNN is to conservatively allocate GPU memory for the immediate usage of a given layer's computation so that the maximum and average memory usage is drastically reduced, allowing re.searchers to train larger networks. 
To achieve this goal, vDNN exploits the data dependencies of allocated data structures, particularly the intermediate feature maps that account for the majority of memory usage (Section II-C), and either releases or moves these intermediate data between GPU and CPU memory. Specifically, vDNN either 1) aggressively releases these feature maps from the GPU memory if no further reuse exists, or 2) offloads (and later prefetches) them to (from) CPU memory if further reuse does exist but is not immediately required. By exploiting the inter-layer memory access and reuse patterns of DNNs, our vDNN memory manager intelligently overlaps the normal DNN computations with the offload/prefetch/release operations, effectively virtualizing the memory usage of DNNs with little to no performance loss. The operations of vDNN are completely transparent to programmers and enable them to train larger and deeper neural networks that consume memory well beyond the limits of physical memory of GPUs today. The key contributions of our work are:

• This work is the first to present a detailed, quantitative analysis of GPU-based DNN training, as opposed to recent literature targeting energy-efficient accelerators for DNN inference [20, 21, 22, 23, 24, 25, 26, 27, 28, 29].

• To the best of our knowledge, our work is the first that provides an in-depth characterization study of the memory access characteristics of DNNs and their effect on the GPU memory system from an architectural perspective.

• This work identifies the key limitations of current ML frameworks' memory management policies, as they require the network-wide memory usage of the target DNN to monolithically fit within the physical capacity of the GPU. We demonstrate this by showing that existing frameworks fail in training 6 out of the 10 studied DNNs when their memory allocation size (14 GB to 67 GB) exceeds the GPU memory budget (12 GB in NVIDIA's Titan X).

• We propose, implement, and evaluate a runtime memory manager called vDNN that virtualizes the memory usage of neural networks across CPU and GPU memories. Our vDNN solution reduces the average GPU memory usage of these 6 memory-hungry networks by 73% to 98%, allowing them to be trained on a single Titan X card. Compared to a hypothetical, oracular GPU containing enough memory to hold the entire DNN, vDNN incurs 1% to 18% performance overhead.

II. BACKGROUND AND MOTIVATION

This section provides an overview of modern DNNs, the memory management policies of current ML frameworks, and their key limitations that motivate this work.

A. DNN Architecture
Convolutional neural networks are one of the most popular ML algorithms for high-accuracy computer vision tasks. While other types of networks are also gaining traction (e.g., recurrent neural networks for natural language processing), all of these DNNs are trained using a backward propagation algorithm [19] via stochastic gradient-descent (SGD). For clarity of exposition, and owing to their state-of-the-art performance in the ImageNet competition, this paper mainly focuses on the feedforward style convolutional neural networks commonly seen in AlexNet [1], OverFeat [30], GoogLeNet [17], and VGG [14]. However, the key intuitions of our work are equally applicable to any neural network that exhibits layer-wise computational characteristics and is trained via SGD, detailed later in this section.
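To make the layer-wise computational pattern referred to above concrete, the following schematic (an illustrative toy, not the authors' or cuDNN's implementation) shows how one SGD iteration touches a single layer at a time, first front-to-back for forward propagation and then back-to-front for backward propagation, while each layer's forward inputs stay allocated until the backward pass consumes them.

```python
# Toy schematic of one layer-wise SGD iteration (illustrative only): the GPU
# processes a single layer at a time, yet each layer's forward inputs must be
# kept resident until the backward pass reaches that layer.

class ToyLayer:
    """Stand-in for a generic network layer; here just a scalar weight."""
    def __init__(self, w):
        self.w = w

    def forward(self, x):
        return self.w * x                  # Y = f(X)

    def backward(self, x_in, dy, lr=0.01):
        dx = dy * self.w                   # dX handed to the preceding layer
        self.w -= lr * dy * x_in           # gradient update for this layer
        return dx

def train_iteration(layers, x, target):
    saved_inputs = []                      # intermediate data kept for backward
    for layer in layers:                   # forward: first layer -> last layer
        saved_inputs.append(x)
        x = layer.forward(x)
    dy = x - target                        # gradient of a squared-error loss
    for layer, x_in in zip(reversed(layers), reversed(saved_inputs)):
        dy = layer.backward(x_in, dy)      # backward: last layer -> first layer

train_iteration([ToyLayer(0.5), ToyLayer(1.5), ToyLayer(2.0)], x=1.0, target=3.0)
```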
DNNs are designed using a combination of multiple types of layers, which are broadly categorized as convolutional layers (CONV), activation layers (ACTV), pooling layers (POOL), and fully-connected layers (FC). A neural network is structured as a sequence of multiple instances of these layers. DNNs for computer vision tasks in particular are broadly structured into the following two modules: 1) the feature extraction layers that detect distinguishable features across input images, and 2) the classification layers that analyze the extracted features and classify the image into a given image category. Feature extraction layers are generally designed using CONV/ACTV/POOL layers and are positioned as the initial part of the DNN. The classification layers are built up using the FC layers and are found at the end of the DNN computation sequence. The general trend in deep learning is to design the network with a large number of feature extraction layers so that a deep hierarchy of features is trained for robust image classification [14, 15, 17].

Fig. 2: Memory allocations required for linear networks using the baseline memory manager (bold arrows). For inference, the sum of all green (W) and red (X) arrows is allocated. For training, two additional data structures for dX and dY are required: both are sized to the maximum of all blue (dY) arrows and are reused while traversing back the layers during backward propagation. An optional temporary buffer, called workspace in cuDNN [8] (yellow arrow, WS), is needed in certain convolutional algorithms. The workspace buffer is sized with the maximum workspace requirement among all layers and is reused during backward propagation.

B. DNN Training vs. Inference
A neural network needs to be trained before it can be deployed for an inference or classification task. Training entails learning and updating the weights of the layers of a neural network by performing the operations of the forward and backward propagation algorithms [19]. The direction of traversal, as well as the mathematical operations that must be performed, differ for forward and backward propagation.
Forward Propagation. Forward propagation is performed from the first (input) layer to the last (output) layer, whereas backward propagation is performed in the opposite direction (last to first layer), from right to left in Figure 2. Intuitively, forward propagation traverses the network layer-wise and performs the aforementioned feature extraction and classification tasks on a given input, leading to an image classification. During forward propagation, each layer applies a mathematical operation to its input feature maps (X) and stores the results as output feature maps (Y). For linear feedforward DNNs, the resulting Y of layer(n-1) is directly used as the input X by layer(n) (Figure 2). The computation flow of forward propagation is therefore a serialized process, as layer(n) can initiate its operation only when the preceding layer(n-1) has finished its computation and forwarded its output Y to layer(n)'s input X. Non-linear network topologies can contain one-to-many (fork) and many-to-one (join) inter-layer dependencies, but forward propagation still involves a series of layer-wise computations, as detailed in Figure 3. Note that the GPU can only process a single layer's computation at any given time due to such inter-layer data dependencies.
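The serialized data flow of Figure 2 can be summarized in a short sketch; the callable layers below are hypothetical stand-ins for CONV/ACTV/POOL/FC operations, not a framework API.

    # Hedged sketch of the layer-wise forward pass: Y of layer(n-1) becomes X of
    # layer(n), and every Y is kept resident because backward propagation reuses it.
    def forward(layers, input_batch):
        x = input_batch              # X of the first layer
        feature_maps = []            # all Ys retained for the backward pass
        for layer in layers:
            y = layer(x)             # only one layer runs on the GPU at a time
            feature_maps.append(y)
            x = y                    # hand Y forward as the next layer's X
        return x, feature_maps

    # toy usage with stand-in "layers"
    double = lambda v: [2.0 * t for t in v]
    shift  = lambda v: [t + 1.0 for t in v]
    logits, cached = forward([double, shift, double], [1.0, 2.0])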
As a result, the minimum per-layer memory allocations required are determined by the layer's input-output relationships and its mathematical function (see footnote 1). For instance, a CONV layer using the

1 Popular activation functions (sigmoid/tanh/ReLU [1]) can be refactored into an in-place algorithm using element-wise computation. Both Caffe and Torch leverage this in-place memory optimization and only allocate memory space for Y and dY for forward (Y) and backward (both Y and dY) propagation [31]. This paper adopts this in-place optimization for both baseline and vDNN for a conservative evaluation.

<
>

Fig. 3: (a) The computation graph and its inter-layer dependencies of a GoogLeNet-style, non-linear feedforward network during forward propagation. Refcnt refers to the number of consumer layers that depend on the current producer layer's Y. The order in which the GPU processes each layer's forward computation is shown in (b), from layer(1) to layer(5), highlighting the layer-wise computation of DNN training. The producer-consumer relationship is reversed during backward propagation.

most memory-efficient convolutional algorithm (e.g., implicit GEMM in cuDNN [8], see footnote 2) requires three data structures, the input/output feature maps (X and Y) and the weights of the layer (W), for forward propagation. Employing a fast-Fourier-transform (FFT) based convolution algorithm, however, requires an additional, temporary workspace (WS) buffer to manage the transformed maps.
Backward Propagation. For DNNs that are not fully trained, the inferred image category might be incorrect. As a result, a loss function is used to derive the magnitude of the inference error at the end of forward propagation. Specifically, the gradient of the loss function is derived with respect to the last layer(N)'s output:

dY(N) = ∂Loss/∂Y(N) (1)

The value in Equation 1 is forwarded to the last layer(N) as its input gradient maps (dY), and the output gradient maps (dX) are derived based on the chain rule [19]:

dX(N) = dY(N) · ∂Y(N)/∂X(N) (2)

Because the output dX is the product of the input dY with ∂Y/∂X, deriving the value of dX for layer(N) generally requires memory for both its input/output gradient maps (dY and dX) and also the input/output feature maps (X and Y) for this layer. For linear networks, the calculated dX of layer(N) is directly passed on to the preceding layer(N-1) to be used as dY for layer(N-1)'s dX derivation (Figure 2).

2 cuDNN (version 4.0) provides six different convolutional algorithms. Implicit GEMM requires the least memory allocation as no additional workspace is needed. FFT-based convolutional algorithms, on the other hand, incur larger memory allocations because of the additional data structures required to store the feature maps transformed into the frequency domain. More details are available in [8, 32].
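A compact sketch of the backward traversal implied by Equations (1) and (2) is given below; the Scale layer and its backward() signature are invented for illustration and only mirror the per-layer tensors (X, Y, dY) that the text says must remain resident to produce dX and dW.

    # Hedged sketch of layer-wise backward propagation for a linear network.
    class Scale:
        # toy layer: Y = w * X; its backward pass reads X and dY (a real layer may also read Y)
        def __init__(self, w): self.w = w
        def forward(self, x): return [self.w * v for v in x]
        def backward(self, x, y, dy):
            dx = [self.w * g for g in dy]
            dw = sum(xi * g for xi, g in zip(x, dy))
            return dx, dw

    def backward_pass(layers, feature_maps, input_batch, loss_grad):
        dy = loss_grad                            # Equation (1): dY of the last layer
        xs = [input_batch] + feature_maps[:-1]    # X of layer(n) is Y of layer(n-1)
        weight_grads = []
        for layer, x, y in reversed(list(zip(layers, xs, feature_maps))):
            dx, dw = layer.backward(x, y, dy)     # Equation (2): chain rule
            weight_grads.append(dw)
            dy = dx                               # dX becomes dY of the preceding layer
        return list(reversed(weight_grads))

    layers = [Scale(2.0), Scale(3.0)]
    x0 = [1.0, -1.0]
    y1 = layers[0].forward(x0)
    y2 = layers[1].forward(y1)
    grads = backward_pass(layers, [y1, y2], x0, loss_grad=[1.0, 0.5])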
<
>

Fig. 4: Breakdown of GPU memory usage based on its functionality (left axis). The right axis shows the fraction of allocated memory consumed by feature maps.

This chain rule is similarly used to derive the gradients of the weights to update the network model.

Similar to forward propagation, backward propagation is also performed layer-wise on the respective incoming gradient maps, dYs. Once backward propagation reaches the first layer, the weights are adjusted using the weight gradients so that the prediction error is reduced for the next classification task. Hence, training a network involves both forward and backward propagation, which are repeated for millions to billions of iterations. Because of the stochastic nature of SGD-based backward propagation, the network input is generally batched with hundreds of images (e.g., 128 and 256 images for the best-performing AlexNet and VGG-16), which increases memory allocation size but helps the network model better converge to an optimal solution.
C. Motivation: Scalable and Memory-Efficient DNN Design
To aid the design and deployment of neural networks, a variety of ML frameworks have been developed in recent years, including Caffe, Torch, Neon, TensorFlow, and Theano [9]. The rich set of features offered by these frameworks, coupled with their ability to accelerate DNN training and inference using GPUs, greatly simplifies the process of implementing neural networks. Despite their flexibility, popular ML frameworks suffer from severe limitations in the way they allocate and manage memory.
To illustrate the shortcomings of ML frameworks in managing memory, consider the example shown in Figure 2. When training a DNN using existing ML frameworks, the memory required across all of the layers of the network must fit within the physical GPU memory capacity. The key reason for this GPU-side, network-wide memory allocation strategy is to reap performance benefits. More specifically, page-migration-based virtualization solutions that expose both CPU and GPU memory for page allocations (regardless of whether the virtualization feature is provided by future CUDA runtime extensions or programming models such as OpenMP (4.0) [33]) must transfer pages via PCIe, which involves several latency-intensive processes such as CPU interrupts for system calls, page-table updates, TLB updates/shootdowns, and the actual page transfer. Prior work [34] reported that

Fig. 5: Per layer memory usage of VGG-16 (256). For brevity, we only show the memory usage during forward propagation and for layers that contain weights (CONV and FC). The left axis corresponds to the sum of workspace and per layer input/output feature maps. The right axis corresponds to the memory consumption for storing weights. The memory usage during backward propagation follows similar trends to this figure.

the latency to page-in a single 4 KB page to the GPU is 20 to 50 µs, meaning the PCIe bandwidth utilization using page-migration is 80 to 200 MB/sec, as opposed to the DMA-initiated cudaMemcpy that achieves an average 12.8 GB/sec out of the 16 GB/sec maximum PCIe bandwidth. As the amount of data to be paged in/out via PCIe can be 10s of GBs for very deep networks (Figure 15), ML frameworks will suffer from huge performance penalties when relying on page-migration for training DNNs.
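The bandwidth figures quoted above follow directly from the page size and the reported page-in latency; a quick check of the arithmetic (decimal MB/GB, with rounding to the 80 to 200 MB/sec range given in the text) is:

    # Hedged arithmetic check of the page-migration bandwidth claim.
    PAGE_BYTES = 4 * 1024                                # one 4 KB page
    for latency_us in (20, 50):
        bandwidth = PAGE_BYTES / (latency_us * 1e-6)     # bytes per second
        print(f"{latency_us} us per page -> {bandwidth / 1e6:.0f} MB/sec")
    # compare with the DMA-initiated cudaMemcpy average of 12.8 GB/sec
    print(f"ratio vs. cudaMemcpy: {12.8e9 / (PAGE_BYTES / 50e-6):.0f}x faster")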
+Note that because of the layer-wise gradient update rule of the backward propagation algorithm (property of the chain rule, Section II-B), each layer's feature maps (X) are later reused during its own backward propagation pass. This means that all Xs must still be available in GPU memory until backward computation is completed. Figure 4 shows the amount of memory usage based on its functionality and the growing significance of feature maps as networks become deeper. Because deeper networks need to keep track of a larger number of Xs, the fraction of memory allocated for feature maps grows monotonically as the number of layers increases. Training the network itself is still done layer-wise, however, regardless of the depth of the neural network. The baseline network-wide memory allocation policy is therefore both extremely wasteful and not scalable because it does not take into account the layer-wise DNN training. Figure 5 shows the per layer memory usage of VGG-16 during forward propagation, which provides the following key observations. First, the intermediate feature maps and workspace (left axis) incur an order of magnitude higher memory usage compared to the weights (right axis) of each layer. Second, most of these intermediate data structures are concentrated on the feature extraction layers and are less significant in the later classifier layers. Third, the weights, while smaller in size compared to these intermediate data, are mostly concentrated on the classifier layers due to their full connectivity. Lastly, the per layer memory usage is much smaller than the 28 + +<
>

Fig. 6: VGG-16's per layer computation latency for forward and backward propagation (left axis). The right axis shows the reuse distance of each layer's input feature maps, X. We define the reuse distance of a layer(n)'s X as the latency between the completion of layer(n)'s forward propagation and the start of the same layer(n)'s backward propagation.

GB of memory required by the baseline policy (Figure 1), showing significant opportunities for memory savings with a fine-grained, layer-wise memory management policy.

III. VIRTUALIZED DNN
The design objective of our virtualized DNN (vDNN) memory manager is to virtualize the memory usage of DNNs, using both GPU and CPU memory, while minimizing its impact on performance. vDNN is completely transparent to the programmer as the allocation, placement, movement, and release of data is seamlessly orchestrated by the system architecture and the runtime system. Such abstraction enables ML practitioners to focus more on their ML algorithm and not have to worry about the low-level details of GPU memory management. vDNN primarily optimizes the memory usage of the feature extraction layers, as the majority of memory usage is concentrated on these layers, accounting for 81% of memory usage on AlexNet and 96% on VGG-16 (256). More specifically, we target the feature maps of these feature extraction layers, as these intermediate data structures account for the majority of GPU memory usage (Figure 4 and Figure 5). The intuitions of vDNN can also be applied to weights and to the classification layers, but with less of a memory-saving benefit.

A. Design Principle
Previous sections highlighted the fact that the memory requirement per individual layer is substantially smaller than what is actually provisioned with the baseline, network-wide memory allocation policy. vDNN adopts a sliding-window-based, layer-wise memory management strategy in which the runtime memory manager conservatively allocates memory from its memory pool for the immediate usage of the layer that is currently being processed by the GPU. Intermediate data structures that are not needed by the current layer are targeted for memory release to reduce memory usage.
Forward Propagation. As discussed in Section II-C, deep networks have to keep track of a large number of the inter-

<
>

Fig. 7: Execution flow of a linear network during forward propagation. The figure assumes that layer(N) is currently being processed by the GPU. During this layer's forward computation, the data associated with the arrows marked with black Xs (all preceding layers' input feature maps) are not used and can safely be released from the memory pool.

<
>

Fig. 8: Execution flow of a linear network during backward propagation. The figure assumes that layer(2) is currently being processed by the GPU. Data associated with the arrows marked with black Xs can safely be released because they will not be reused during the training of this input image batch.

mediate feature maps (Xs) that are extracted during forward propagation. Once a given layer(n)'s forward computation is complete, however, layer(n)'s X is not reused until the GPU comes back to the same layer(n)'s corresponding backward computation. Because the reuse distance of layer(n)'s X is on the order of milliseconds to seconds (e.g., more than 60 ms and 1200 ms for the first layer of AlexNet and VGG-16 (64), respectively), deep networks end up allocating a significant number of Xs that effectively camp inside the GPU memory without immediate usage (Figure 6). As a result, tackling these Xs for memory optimization is crucial for efficient utilization of GPU memory, as these intermediate data account for a significant fraction of memory allocations (Figure 4). vDNN therefore conditionally offloads these intermediate Xs to CPU memory via the system interconnect (e.g., PCIe, NVLINK [35]) if they are targeted for memory release. Section III-C details the vDNN memory transfer policy that decides which layers are chosen for offloading their X. Once the offload operation is complete, vDNN releases the offloaded X from the memory pool to reduce GPU memory usage.
Care must be taken, however, when evaluating the feasibility of offloading a layer's input X. This is because, for non-linear network topologies, multiple layers can be the consumers of a previously computed layer's output feature maps (Y). For instance, layer(2) and layer(3) in Figure 3 are both using the output Y of layer(1) as their input X. Offloading and consequently releasing the input X of layer(2), before reaching

<
>

Fig. 9: Performance effect of offload and prefetch. FWD(n) and BWD(n) are the forward and backward computations for layer(n), respectively. OFF(n) is the offloading of layer(n)'s X and PRE(n) is the corresponding prefetch operation for layer(n).

layer(3)'s forward computation, is problematic as these two layers share the same data structure for the input X. vDNN therefore keeps track of the inter-layer dependencies in the form of a dataflow graph (e.g., Refcnt in Figure 3) and allows the offload/release operation to be initiated only when the currently processing layer is the last consumer of its input feature maps. Figure 7 is an example execution flow of a linear DNN during forward propagation, highlighting when it becomes safe to release a layer's X.
Backward Propagation. Similar to forward propagation, vDNN aggressively releases data structures that are not needed for training the remaining layers' backward computation. During layer(n)'s backward propagation, layer(n+1)'s Y and dY are no longer required because the GPU has already completed the gradient updates for this layer (Figure 8). Again, by leveraging the layer-wise DNN backward propagation, vDNN immediately frees up a layer's Y and dY once this layer's backward computation is complete. X and dX are not released, as the preceding layer's backward propagation will need these values for gradient derivation. Note that if a layer has offloaded its X to host memory, vDNN should guarantee that the offloaded data is copied back to GPU memory before the gradient update is initiated. Naively copying back the data on demand will serialize the backward computation behind the memory copying operation of X. vDNN therefore launches a prefetch operation for layer(n)'s offloaded feature maps, which is overlapped with layer(m)'s backward computation, with n < m.
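The release/offload bookkeeping described in this section can be illustrated with a small, purely hypothetical simulation of the forward pass; the layer names, reference counts, and offload set below are made up, and the backward-pass prefetch would simply walk the same records in reverse.

    # Hedged sketch of vDNN-style forward-pass bookkeeping, not the authors' code.
    refcnt    = {"L1": 2, "L2": 1, "L3": 1, "L4": 1}   # consumers of each layer's Y (Refcnt)
    schedule  = [("L2", ["L1"]), ("L3", ["L1"]), ("L4", ["L2", "L3"])]
    offloaded = {"L1", "L2"}                           # layers whose X the policy offloads

    gpu_resident, cpu_copies = set(refcnt), set()
    for layer, inputs in schedule:
        # ... the layer's forward computation runs here, overlapped with offload copies
        for producer in inputs:
            if producer in offloaded:
                cpu_copies.add(producer)               # asynchronous offload of X completes
            refcnt[producer] -= 1
            if refcnt[producer] == 0:                  # current layer was the last consumer
                gpu_resident.discard(producer)         # release Y from the GPU memory pool
        print(layer, "done; still resident on GPU:", sorted(gpu_resident))

In the real system, the release of an offloaded X additionally waits for its copy to host memory to complete, and the prefetch during backward propagation reverses this bookkeeping so that the copy back to the GPU is hidden behind a later layer's backward computation.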
>

of current FPGA architectures causing it, and present suggested architectural solutions that can reduce this gap.

3 COMPUTING ARCHITECTURES

We implement three different highly optimized state-of-the-art CAs for accelerating CNN inference tasks in RTL using parameterizable SystemVerilog HDL. We refer to the three CAs as ASU-like [26, 27], Intel-DLA-like [2], and Chain-NN-like [50]. We implement all the hardware computational blocks required to execute all the layers described in Section 2.1 for three different CNN models: AlexNet, VGG-16, and ResNet-50. We also implement the control logic required to run the CAs, starting from reading the input features and weights from on-chip buffers, transferring them to the computational blocks, and writing the final results in the output feature buffers. The on-chip buffer sizes and the parallelization factors for each of the nested CONV loops are fixed in both the FPGA and ASIC implementations for each of these CAs according to the optimal design point originally reported in References [2, 27, 50]. For consistency and to enable fair comparisons, we also use a fixed-point data representation for all three CAs with 16-bit features and 8-bit weights as in Reference [27], which causes less than 2% accuracy degradation. We consider the external memory interface and direct memory access engines to be out of the scope of this work, as they do not affect the conclusions we seek to draw about the performance and area gaps or the bottlenecks of current FPGA architectures in accelerating CNNs. However, our performance models take off-chip data transfer into consideration according to any external memory interface that we specify. In our experiments, we report two sets of results: one assuming infinite bandwidth and the other assuming one bank of DDR4 memory at 1200 MHz with a total bandwidth of 17 GB/s, similar to that used in Reference [2].
We carefully chose those three CAs out of the numerous architectures proposed in the literature to be diverse; the wide variations between them help ensure our analysis of FPGA vs. ASIC efficiency has broad applicability. The main differences between the three CAs, summarized in Table 1, are:
• All three CAs have different parallelization schemes. In other words, the array of MAC units in each CA has a different number of dimensions, leading to different execution orders, tiling, and unrolling factors for the CONV loops in Algorithm 1. Output tiles of size (POM × POX × POY), (POM × POX × 1), and (POM × 1 × 1) are produced by the ASU-like, Intel-DLA-like, and Chain-NN-like PE arrays, respectively.
• The Intel-DLA-like CA uses a mathematical optimization for CONV layers with kernels of size 3×3 known as the Winograd transform [22], which reduces the number of MAC operations needed to compute convolutions. However, the ASU-like and Chain-NN-like CAs

Fig. 2. ASU-like CA tiling schemes and hardware architecture.

perform conventional sliding-window convolution operations. This enables us to explore different convolution schemes with different degrees of control logic complexity and observe their effect on the area and performance gaps.
• The three CAs implement their weight buffers differently.
The Chain-NN-like CA stores the kernel weights in small distributed buffers such that every MAC unit has its local scratchpad for weights implemented in the FPGA's soft logic (MLABs). In contrast, both the ASU-like and Intel-DLA-like CAs have larger weight buffers implemented using on-chip memory blocks (BRAMs) to feed a group of MAC units. In FC layers, the Intel-DLA-like CA also interchanges the roles of weight and feature buffers.
• The CAs differ in whether and how they use double-buffering to hide memory transfer time. The ASU-like CA uses double-buffering for weights to hide the computation time of FC layers by filling one buffer from off-chip memory while using the weights in the other buffer for computations. The Intel-DLA-like CA uses double-buffering by interchanging input and output buffers after each layer to eliminate any external memory transfers if all the output feature maps of a layer can fit in on-chip buffers. The Chain-NN-like CA does not use any double-buffering techniques.
None of the three CAs is available as an open-source implementation, and hence we implemented them from scratch to carry out the study presented in this article under controlled conditions (e.g., RTL implementation, same FPGA platform, same weight and activation precisions, etc.) to enable fair comparisons and focus only on the architectural aspects of these CAs. In Sections 3.1, 3.2, and 3.3, we describe the details of the three CAs we implemented and any extensions added to them for the sake of our study.

3.1 ASU-like CA
This CA was proposed in Reference [27] by Ma et al. from Arizona State University (ASU) and then expanded in Reference [26] to support the ELTWISE and BNORM layers used in recent CNN models. The core of this CA, shown in Figure 2(c), is a three-dimensional MAC unit array of size POM × POX × POY that can compute both CONV and FC layers.
Feature maps and weights are tiled to minimize external memory transfers by either buffering all weights or all input feature maps in on-chip memory at any layer of the CNN model. In the shallower layers of the network, all the weights but only NOY + K − 1 rows of the input feature maps are buffered on-chip such that 0 <>

Fig. 10. Area gap between FPGA and ASIC implementations for different blocks of: (a) BSC, (b) LRN, and (c) ELT. The percentages represent the contribution of each component to the total area of the FPGA implementation.

Interestingly, the computational performance gap is not consistent among different CAs; however, different variations of the same CA have similar gap results. The Intel-DLA-like CA has the smallest ASIC-to-FPGA computational performance ratio (≈2.9) compared to the ASU-like and Chain-NN-like CAs (≈4.6 and ≈6.2, respectively). We believe that the reason is that the Intel-DLA-like CA has a modular daisy-chain architecture, which is more routing-friendly and benefits the FPGA implementation more than the ASIC one due to the relatively slow speed of FPGA routing.

5.3 Area Gap
On average, the FPGA implementations have 8.7× larger area than their ASIC counterparts, and the gap is, in contrast to the performance gap, fairly similar across different variations of the three CAs. To understand the reasons for this gap, Figures 10(a), 10(b), and 10(c) illustrate the area ratio of different components in the FPGA implementations to those in the ASIC implementations for the BSC, LRN, and ELT variations, respectively.
The percentages written above the bars represent the area breakdown of each FPGA implementation into different components and hence indicate the contribution of each component to the overall area gap. We notice that the convolution engine, which has the largest contribution to total area (up to 60% in some cases) and thus the strongest impact on the total area gap, has an FPGA-to-ASIC area ratio ranging from 13 to 31 for different variations of the three CAs. The Intel-DLA-like CA uses the Winograd transform to significantly reduce MAC operations in convolution, which costs almost the same area as the convolution engine in the FPGA implementation. However, the Winograd transform and inverse transform blocks in this CA have FPGA-to-ASIC area ratios of 28 and 26, respectively, which are almost twice the area gap of the convolution engine, since they contain a large number of multi-input adders implemented in the FPGA's soft fabric compared to the convolution engine, which is mostly implemented in hard DSP blocks. The smallest area gap is in the feature and weight buffers, since the RAMs in the FPGA and the ASIC implementations are both custom SRAM blocks. However, the buffer area ratios are still significant (≈3–5) because of the area overhead of the programmable routing in BRAM tiles as well as the underutilization of some of the M20K blocks on the FPGA, whereas in the ASIC implementations we use memories with the exact required sizes. The NORM block has area ratios of 32 and 28 and consumes 22% and 14% of the total area in the ASU-like and Intel-DLA-like CAs, respectively, since it is a heavily arithmetic block and is mostly implemented in the soft fabric. However, it only consumes 3% of the total area in the Chain-NN-like CA, which produces outputs in one dimension only and therefore does not normalize output features at different locations in parallel. The POOL, ELTWISE, and BNORM blocks have large area ratios; however, they have small overall areas and hence limited impact on the total gap.
An interesting observation is that the area gap in the convolution engine of the Intel-DLA-like CA is significantly less than that of the other two CAs: an area ratio of 13 compared to 20 and 29 in the ASU-like and Chain-NN-like CAs, respectively. This is because the Intel-DLA-like CA uses the hard adders in the DSP blocks to implement its dot-product unit, while the other two CAs pay for the area of the complete DSP block on the FPGA but only make use of the multipliers inside it, and thus have a higher area gap compared to their ASIC counterparts. This observation motivates the investigation of new DSP block designs that could bring more of the convolution engine functionality inside the hard DSP block. For instance, the ASU-like CA needs two separate accumulators for the two independent 18-bit multipliers, which is not supported in current DSP blocks. Hence, the DSP block accumulators are wasted and soft logic is used to implement the accumulators. The convolution engine of the Chain-NN-like CA has the highest area gap, as it implements input multiplexing, accumulation, and output de-multiplexing in the soft fabric.

5.4 Architectural Insights
Based on the results of Sections 5.1 and 5.2, we can draw several architectural insights:

• According to the resource utilization results in Figure 8(b), the limiting factor is the DSP block count available on-chip, with close to 100% resource utilization in most cases.
One direct approach to gaining higher performance is adding more DSP blocks to current FPGAs, especially given that a DSP-focused device spends only 5% of its core area on DSP blocks [21]. This requires a careful architectural study to determine the optimal ratio and area distribution between DSPs, BRAMs, and ALMs for DL-tuned FPGAs that are still flexible enough and suitable for other applications as well. These architectural explorations require a suite of DL benchmark circuits such as the one we developed in this work, which we plan to expand and open-source in future work.
• As shown in Figure 10, the area gap of the convolution engine of the Intel-DLA-like CA is significantly less than that of the other two CAs, since it makes better use of the DSP block's available functionalities such as the internal adders and hard cascade chains. By looking at the ASIC area breakdown of the convolution engine, we can see that about 72% of the logic in the convolution engine of the Intel-DLA-like CA was implemented inside hard DSP blocks on the FPGA compared to only 32% and 35% in the ASU-like and Chain-NN-like CAs, respectively, and the rest is implemented in the soft fabric. We believe that small changes to the DSP block architecture could capture more of the convolution engine hardware inside the hard circuitry of the DSP block. For example, adding an operation mode that configures the two internal adders as independent accumulators for two independent 18-bit MACs (such as in the ASU-like CA) or having a small circular shift register accumulator for interleaving dot-product operations (as in the Intel-DLA-like CA) would save soft logic. Neither of these DSP block enhancements would add much logic to the block, nor would they require more block routing ports (inputs and outputs), and therefore the DSP block area increase would be minimal. To increase the DSP block count on-chip, as mentioned in our first suggestion, we not only wish to avoid significant block area increase, but also to remove DSP block functionalities that are unnecessary for DL applications and would not cause severe performance degradation when implemented in the soft fabric. For example, removing the built-in constant coefficient banks in the Arria 10 DSP blocks should be evaluated, as they are not usable by any of our CAs.
• In this study, we used 16- and 8-bit fixed-point precision for features and weights, respectively, in all CAs to ensure fair comparisons. However, the most suitable precision for CNN inference is debatable and varies widely in the literature from single-precision floating-point down to ternary and binary [28]. Currently, DSP blocks from Intel and Xilinx support a limited number of precisions. For instance, a DSP block in Intel Arria 10, and similarly Stratix 10, FPGAs supports two 18-bit, one 27-bit, or one single-precision floating-point multiplication, whereas a DSP slice in Xilinx Virtex UltraScale FPGAs supports one 27×18 multiplication. Designers can sometimes fit more low-precision multiplies that match certain patterns using clever tricks, such as performing two 8-bit multiplies that share one operand using a single Xilinx DSP slice [8] (by placing the two 8-bit operands in non-overlapping bit fields of one wide operand so that a single multiplication by the shared operand yields both products). Even with these operand-packing tricks, using lower precision leaves a portion of the DSP block logic idle. We can avoid this by designing DSP blocks that natively support low-precision multiplications and reuse routing ports and multiplier sub-arrays to keep the area overhead minimal.
• When implementing the three CAs, we noticed that the required on-chip buffers are either deep central buffers for input and output features or smaller and more distributed buffers for the weights. When we tried to extend the double-buffering technique used in the Intel-DLA-like CA to more layers of models larger than AlexNet by implementing deeper stream buffers, it resulted in a net performance degradation, as the operating frequency dropped significantly due to depth-stitching of M20K BRAMs to implement those deep buffers. However, when implementing the small weight buffers of the Chain-NN-like CA in MLABs, the high utilization of the soft fabric also resulted in lower operating frequency. This observation indicates that having only M20K BRAMs and MLABs to implement on-chip memories might not be a good fit for DL acceleration on FPGAs. This also requires a more detailed architectural study to determine the best size and ratio of on-chip BRAMs and their effect on the overall performance using DL-representative benchmarks, and we believe our parameterized CAs can form the start of this benchmark set. In addition, the memory-richness of FPGAs can be enhanced by employing emerging technologies such as Magnetic Tunneling Junction memories, which can provide bigger yet more dense BRAMs for memory-intensive applications, as shown in Reference [54].

6 CONCLUSION

In this article, we implemented three highly optimized state-of-the-art CAs for accelerating CNN inference: ASU-like, Intel-DLA-like, and Chain-NN-like. We implemented three variations of each CA (BSC, LRN, and ELT) for three different CNN models (VGG-16, AlexNet, and ResNet-50, respectively) on an Intel Arria 10 FPGA device and compared them to 28nm ASIC implementations of the same CAs to quantify the programmability cost that comes with using FPGAs on the performance and area of DL accelerators. Across different variations of the three CAs, we observed a consistent area gap with an average FPGA-to-ASIC area ratio of 8.7×, to which the convolution engine contributes the most with area ratios ranging from 13 to 31 for different CAs. The performance gap, unlike the area gap, varies significantly across different CAs. The computational performance of the ASIC implementations is 2.8× to 6.3× faster than that of the FPGA implementations when assuming infinite external memory bandwidth. We find that the Intel-DLA-like CA has the smallest performance gap compared to its ASIC counterpart, indicating that focusing on modular and routing-friendly designs is of great importance for building efficient FPGA-based DL accelerators. Finally, we suggest several FPGA DSP and RAM architecture changes for future work that could reduce the area and performance gaps and enable more efficient DL acceleration on FPGAs.

ACKNOWLEDGMENTS

The authors thank Martin Langhammer, Debbie Marr, and Eriko Nurvitadhi for helpful discussions, as well as Huawei, Intel, and NSERC for funding support.

REFERENCES
[1] M. Abadi et al. 2016. TensorFlow: A system for large-scale machine learning. In Proceedings of the OSDI. 265–283.
[2] U. Aydonat et al. 2017. An OpenCL (TM) deep learning accelerator on Arria 10. In Proceedings of the FPGA. 55–64.
[3] Y. Chen et al. 2014. DaDianNao: A machine-learning supercomputer. In Proceedings of the MICRO. 609–622.
[4] Y. Chen et al. 2017.
Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks. In Proceedings of the JSSC, Vol. 52. 127–138.
[5] S. Chetlur et al. 2014. cuDNN: Efficient primitives for deep learning. arXiv:1410.0759.
[6] E. Chung and J. Fowers. 2017. Accelerating persistent neural networks at data center scale. In Proceedings of the HOT CHIPS, Vol. 29.
[7] F. Colombo et al. 2017. Deep artificial composer: A creative neural network model for automated melody generation. In Proceedings of the EvoMUSART. 81–96.
[8] Y. Fu et al. 2016. Deep learning with INT8 optimization on Xilinx devices. In Xilinx white paper.
[9] L. Gatys et al. 2015. A neural algorithm of artistic style. arXiv:1508.06576.
[10] A. Graves et al. 2013. Speech recognition with deep recurrent neural networks. In Proceedings of the ICASSP. 6645–6649.
[11] Y. Guan et al. 2017. FP-DNN: An automated framework for mapping deep neural networks onto FPGAs with RTL-HLS hybrid templates. In Proceedings of the FCCM. 152–159.
[12] Matthew R. Guthaus et al. 2016. OpenRAM: An open-source memory compiler. In Proceedings of the ICCAD.
[13] P. Gysel et al. 2016. Hardware-oriented approximation of convolutional neural networks. arXiv:1604.03168.
[14] K. He et al. 2015. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the ICCV. 1026–1034.
[15] K. He et al. 2016. Deep residual learning for image recognition. In Proceedings of the CVPR. 770–778.
[16] S. Herculano-Houzel. 2009. The human brain in numbers: A linearly scaled-up primate brain. In Frontiers in Human Neuroscience, Vol. 3.
[17] S. Ioffe and C. Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the ICML. 448–456.
[18] Y. Jia et al. 2014. Caffe: Convolutional architecture for fast feature embedding. arXiv:1408.5093.
[19] N. Jouppi et al. 2017. In-data center performance analysis of a tensor processing unit. In Proceedings of the ISCA. 1–12.
[20] A. Krizhevsky et al. 2012. ImageNet classification with deep convolutional neural networks. In Proceedings of the NIPS. 1097–1105.
[21] M. Langhammer and B. Pasca. 2015. Floating-point DSP block architecture for FPGAs. In Proceedings of the FPGA. 117–125.
[22] A. Lavin and S. Gray. 2016. Fast algorithms for convolutional neural networks. In Proceedings of the CVPR. 4013–4021.
[23] Z. Liu et al. 2016. Automatic code generation of convolutional neural networks in FPGA implementation. In Proceedings of the FPT. 61–68.
[24] L. Lu et al. 2017. Evaluating fast algorithms for convolutional neural networks on FPGAs. In Proceedings of the FCCM. 101–108.
[25] Y. Ma et al. 2016. Scalable and modularized RTL compilation of convolutional neural networks onto FPGA. In Proceedings of the FPL. 1–8.
[26] Y. Ma et al. 2017. An automatic RTL compiler for high-throughput FPGA implementation of diverse deep convolutional neural networks. In Proceedings of the FPL. 1–8.
[27] Y. Ma et al. 2017. Optimizing loop operation and dataflow in FPGA acceleration of deep convolutional neural networks. In Proceedings of the FPGA. 45–54.
[28] A. Mishra et al. 2017. WRPN: Wide reduced-precision networks. arXiv:1709.01134.
[29] E. Nurvitadhi et al. 2016. Accelerating binarized neural networks: Comparison of FPGA, CPU, GPU, and ASIC. In Proceedings of the FPT. 77–84.
[30] K. Ovtcharov et al. 2015. Accelerating deep convolutional neural networks using specialized hardware.
In Microsoft Research Whitepaper, Vol. 2.
[31] A. Prost-Boucle et al. 2017. Scalable high-performance architecture for convolutional ternary neural networks on FPGA. In Proceedings of the FPL. 1–7.
[32] A. Putnam et al. 2014. A reconfigurable fabric for accelerating large-scale data center services. In Proceedings of the ISCA. 13–24.
[33] J. Qiu et al. 2016. Going deeper with embedded FPGA platform for convolutional neural network. In Proceedings of the FPGA. 26–35.
[34] R. Rashid et al. 2014. Comparing performance, productivity and scalability of the TILT overlay processor to OpenCL HLS. In Proceedings of the FPT. 20–27.
[35] D. E. Rumelhart et al. 1985. Learning Internal Representations by Error Propagation. Technical Report.
[36] O. Russakovsky et al. 2015. ImageNet large scale visual recognition challenge. In Proceedings of the IJCV, Vol. 115. 211–252.
[37] H. Sharma et al. 2016. From high-level deep neural models to FPGAs. In Proceedings of the MICRO. 1–12.
[38] F. Shen et al. 2016. Weighted residuals for very deep networks. In Proceedings of the ICSAI. 936–941.
[39] Y. Shen et al. 2016. Overcoming resource underutilization in spatial CNN accelerators. In Proceedings of the FPL. 1–4.
[40] Y. Shen et al. 2017. Maximizing CNN accelerator efficiency through resource partitioning. In Proceedings of the ISCA. 535–547.
[41] D. Silver et al. 2017. Mastering the game of Go without human knowledge. In Nature, Vol. 550. 354–359.
[42] N. Suda et al. 2016. Throughput-optimized OpenCL-based FPGA accelerator for large-scale convolutional neural networks. In Proceedings of the FPGA. 16–25.
[43] A. Suleiman et al. 2017. Towards closing the energy gap between HOG and CNN features for embedded vision. arXiv:1703.05853.
[44] I. Sutskever et al. 2014. Sequence to sequence learning with neural networks. In Proceedings of the NIPS. 3104–3112.
[45] C. Szegedy et al. 2015. Going deeper with convolutions. In Proceedings of the CVPR.
[46] Kosuke Tatsumura et al. 2016. High density, low energy, magnetic tunnel junction based block RAMs for memory-rich FPGAs. In Proceedings of the FPT. 4–11.
[47] Y. Umuroglu et al. 2017. FINN: A framework for fast, scalable binarized neural network inference. In Proceedings of the FPGA. 65–74.
[48] S. Venieris and C. Bouganis. 2016. fpgaConvNet: A framework for mapping convolutional neural networks on FPGAs. In Proceedings of the FCCM. 40–47.
[49] G. Venkatesh et al. 2017. Accelerating deep convolutional networks using low-precision and sparsity. In Proceedings of the ICASSP. 2861–2865.
[50] S. Wang et al. 2017. Chain-NN: An energy-efficient 1D chain architecture for accelerating deep convolutional neural networks. In Proceedings of the DATE. 1032–1037.
[51] Y. Wang et al. 2016. DeepBurning: Automatic generation of FPGA-based learning accelerators for the neural network family. In Proceedings of the DAC. 1–6.
[52] X. Wei et al. 2017. Automated systolic array architecture synthesis for high throughput CNN inference on FPGAs. In Proceedings of the DAC. 1–6.
[53] H. Wong et al. 2011. Comparing FPGA vs. custom CMOS and the impact on processor microarchitecture. In Proceedings of the FPGA. 5–14.
[54] S. Yazdanshenas et al. 2017. Don't forget the memory: Automatic block RAM modelling, optimization, and architecture exploration. In Proceedings of the FPGA. 115–124.
[55] C. Zhang et al. 2015. Optimizing FPGA-based accelerator design for deep convolutional neural networks. In Proceedings of the FPGA. 161–170.
[56] C.
Zhang et al. 2016. Energy-efficient CNN implementation on a deeply pipelined FPGA cluster. InProceedings of the + ISLPED. 326–331. + [57] C. Zhang and V. Prasanna. 2017. Frequency domain acceleration of convolutional neural networks on CPU-FPGA + shared memory system. InProceedings of the FPGA. 35–44. <> <> <> \ No newline at end of file diff --git a/Corpus/Scalable Gradients for Stochastic Differential Equations.txt b/Corpus/Scalable Gradients for Stochastic Differential Equations.txt deleted file mode 100644 index da0f9386f44ef4231e141d6b5caae0e3c7145727..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 117781 zcmce<+isg%mhZ>w{S>njB#=-NTcplXyHoOU*_ORrma1H}1YLZhD2WzDs(7$1CA(4G z4J62QZW18DMu5I)q=R4s8~X)T_f_7aUtqsNe*ZBK-#0&!me=alA*#G6^5Z;=Ip#S0 z$CzU}H~Tv36#e3?-*qoW#d)tgn2g=UaO93A!_HMPp7c8IY45-MJEzO`M;~R`eHb|WQ6a#<8CqPT)E#3dY`(Zac^?xh8OO5I2sNn z!v&wJskzVBJE`^gl|O&q?Mxui%K_w>Oefv=X#H~t6u+AD--EO9WK?t}vtJecewXQ9 zjJkiCb_bnackFrtSGddWpzCgj!@fIdZJw?;#&A38b;8$vJr@6K{S>1=^givi*VbCi zwbpuj=hLQhC!KNg@WrWf+wRr0@7inawb?N}@X1Xvy6#Tg$@cDP)4dyYKlO&wu`4E% z?#(S6H5|DA@{fNrnZOM0=;^buyBUrr{kuAOj<33|IR7~84JK~f9k9T?Pg>-g?&NBC z?#7-~?_6hiv+4&l>h^oxbG7fE8N1KD$(4vSvd7io?Pz%09Zeu()6qk{(_2<>>@E}c z!%^d5e`-ALV5{Niya(srxpTh{LeICw==bisZgSOwOn%~i>86w64bOJsK&%A~bH%30BsA^* zotS0PyXhJ!>a%4+<8o_fq9Qys?XA#d{6V_!gy*cZSG^^WUR4_0$peqKf%A^nh z)hUl@s{N+>elXK9BU9 z!`{s;Q>zZxxM^}V8cr{-tncCN>P)XjT#dS1VB?Uh-sROQOG^jEd2tJ)g>Bp&`Ss9m zih;OQ?0~po)9s$lQreHbS?}2Y297(r?RH?CzkzrAcPs95*U_hXEAoZ72orxPIukZa?AbQuZ&0BX-@5DUJ?XAu-M^VoWQ2D?!(T1Ta?h;an)#!c5rr zgCQG>zC4edBL?6UHyvoTK410F0B|EzRDBn6+eD_r4!%PpJ}i0GL+aXV9?xi_t&S(%+uI=90=Ka8>}M_da5{?n*d`bm zarLYiL+r{*SSPmlQKo3!Di17{jm=NRsAr4|NqPh0i4H0RG1zB&N}g7C{l&~indKrs z10z*AgUPVY=vu<8@pn+6QBJ0j@%;Adw^` zx+q>TCURY}wiqy1bbx<7_fi)*6(n?!%zXR(Qft}%bNF<*-cjK;*H+IN0eS~hrH?2R z-?4_blj(?UTTEE6Ni+1~+~cBiE#1N*c^Ood6W=c#hMs0RN)0^l=BQk;xbRCb(^&0R zX3`&C9xmsWMhiZkjE3h^HeDjioZgPj@N4b4gYg}Dn)!B&y#4MMsa;VTwi*uy%|kCk zGMYV=4!G$JdS)e_`*nwI<2&}l4Z8~QIlO@R+*!~Iu)C`$5^Wo{wRQp8qmYU4rF(9> z$?qZ)0Z-?2jfASJ<+=`YOW$K?ESt(tMn%>+`@##Sq|ch}F*+wmIDMDRcaBA(DJGqU zdNK`0vvH4)nXjqh0tV`SxgB-KVYbsjPj->y7GeTxNbkF!ioVQ&!DZGl-!PXX&x4H< zwPzfz8qsVa7oF1T`KWi{6|HH_so2_GOb5mkvdP#qUG`P3yMkW%T+mIo-(S5R3_lN| zcE7#48>1>#D)P5rtK`yzN3>zR@P-ZhBFl{?EK61*>Ac%1&Ut^bzOu2hwYIXgiPb3E z3@V9Ilg~qUQ@|4%_}_J=efuSUCtWB(pg9)jdEf859T-81wl{H_16@mipL@Gd*2!q# zuurjGx@MO_EN^g1kW3MlMkNL*_L2Q3d)0Fh{5fKwATV~=Ro-#2&^kj7b*_ucE;PZp zIqPfa*0yN_=elOp75zQy$>yIlF>A)t18*mt7r0ND#$C9pKh8Mhrq}N??wan?=-GNE zp?NC-h7r>~wY4#Yh%}yD6%+R0ROShU;UBo<<>bfzB;^I^r(yC@1bP0*de;~+;MCLK zB01zto}2Z4=g?j~&5t(45MEoNxmkQiZ-1Kh*b=tcP1dtFEs@JOM8lh` zV>32kqxSmBdV6KNEiPL862EjAgc+<&_ucTb*uXf2B^?YWuG7cAWS?TJK!?vzChKl- z)Aeuu<*!WqW&NdxteTzUtl? 
zpBHXWpj%DX;x?FUxV`GCCpcz(#hc0+UXkbrGJ%CY91K6n4isg*wL2K1x$%*QtZ6V} zu?z0p_y*24=9>wiaW602z~_C&OfL(#Qd0MdWk@7HLI|TW*Un#AB2u zW+eHBQ!^WF4m}jtOXC$yb7Mu#BhZlc5JYn0NTiaP!3H>Zd31bme6(yp#fp1Z^sz#Q zE6A*(FACscU7FLvY9wdFq@0Pou>({Y-Fc>ZJVLqkP~z|29KPRkzsK)K&a~EcfTf1F zcO#s{i3dcmIK&=^`K#e*Np^)^feSndfjby1&q-F@8i-Mh&U!5CsMo(+MfrDdYowdL z{`yxSa@B^t-!mMr>k$w}XIxI^jd&hG)Zq=R**&lDjC1Ygx_j$w^@=Qd%6iTN0IR*p z*$KXq9G<8xk|Sfvj_ny>7Pj9}53Pl`g)xJB;02OcR(6$m6^23jL_222VVdvsi~%5} zEzsdzdDU~RpQK`0*wQf#3Y0iG{8cgeII6ODSD`P`raTiG7OdG*d6*Yhol{RC~ zp@G>cjFMmUK*#td$SjN-d4lfX1`HNM#$*XQwj3mT6l0?c<}&ui3UZ;yp&T<^up*>x zSKNzYIv#5SnNnRHW9tN4U)J5$%1%pSFqz2G8aNCZoG>H}z;-@v2Adn}3w3;akJ5ra zv&l*boOipo?sd^etjnh98v=E*u_nlkuf;P#oj@6(Mo`na_XP_#zP>xzX`kw`!`{?^ zR2Kc_$;ResWHRRIwE$zwI_{x4FEucBaFmqV;G(ygF>3Rs2NA>S(% zS7uxCkiLpyF{z;8Mb@h+8`G;sunMoqZfDD&Kk0XbIE{WOOUGw&GWxv^I{#JC8J=ln zFuz8-ii{Tsi`I%UDH|dA_zj$56)#7((JM;+zekiH(6XF$7Um z`fDC$Xs|V3X7f!v8|4M`)&5QNu|2?uD$;6R<0gA;<5l`L)`UD~(_X*kE>tDh*WcUn zdV4M53URO)pa%UKE5lJxZwyq19Ly&9^$x1vriM*=Co>uGZN^MFb0l4H>pqzsw(8mj zNn8R(49@W;O8DS8V6F^q|4KZ7nI?!piWz9_7TZ|jA*@|OLc#|>b1DfeHiosO2;=pB z1xQ>NejHZTQM=i42O%PGREY=#+$c`T{u`&m-Uk{oZbZgltR-y2G_yI|l5aTMxkSxc z`|2OwNw+?dcABeaO3`%IJ!VHly*LL;V0UW&mI!%>D=?65F_yIV5{g}iK+%T4lU?j~ zvNH{jk{&fDJQ(F;q(J@_y1RizM;PLA6!3P5p+nyRQEa>{z-@8qyFJ;))D+G#214_IEO?N#yBwRa#4rZ>#=@x3$B-Xk40MxC zn>l>rOR(VzkdjeImXD%9HdR9Q@XZo#hwC=c^%j&$+d zjrS++a@?5g^QL_)Jg)#AU6pHe?#*Byu#`{)KYgy}d->Ago1G!jLFaHcY+rT-oQ(J6 zxqBbWz;{{HG%7{mP1wdN9&{Il5FCvO-=5uFeE8%VAEs_^+?X~#H72>iT%6%_5Ipd% zz*1xX`98B)@PdpsE%Q+1qchPaD~batj==_<3XU3gZ+fc^I*0=M@0Mi)`#7NN+gvmy zy9F6-kzL08fi1>14WiC171xDJu0LM9RLI~GAmQSVp_BMac0_f#&F%2?&Nhd~0hTX5 zESP8cF|-I>u%V+98mCP21VDs05msURX0PbCYhnUTenrE1NMP2M|JeQ2nz29T-4MZa zc6a>9oviI2pK5B>2bf0ho-hwF%WPiZ7d)q|3Q07)L)%*)CbaWmskiK!08|yJUC(EE zU(TI(e|Vdx{>Ilp>@S#;w*Ha`9;WX81pLwx<=nzx$sK3!yUE8F>r@2(%sZ|H|a zfIr;0ACB|R`0~;Hn)|hLXG=d!-2PZ{XJW#`PYxDu1Pq#doGFF~K7*u0Y;LH^mpH z)p8A6G0%SH&&}^CwYca_aB*><3GZVGjk}joIcN}OJZdi9PS;1QG~RP<3mK^(Z~HwF z81G+V5u4P2_tEzD`?Ed1ZoK~$|C+484WVZ zF_46+N+jcjn6;rOCPKhW43jH;oB-jsAv$L=%Fj&JVK`&)O!|siTO_hNlJS z3|S4IN`_2gg)aNUGm9-8_HF@ydy{+ZH*;n5ym1SC7GsHX{@?g;Y-T>cL9bePK+;XA zV7B~OkD{XVq*0vC46c$a#J(h&Ad6Z)XG|y(_o!13116eRj0af(`^R1zMde@GJ6Yck z5Ng5KXIUGJe`*V;&XfNQff#8r&^GLd z(&oSBca=YF4Jr~7TU#mPDz)*??_|!WjWDp4x`iGBY8t`krOAwIc;3CZSUQ11`v`uw z-#Fc8JGsX4p7qf@n3|GjA4dw{e}DBS`d#&Bn&-@Q1z@o)-O+Jvq}3*1&W$ zIB9&^m)>%}`Sj>F_vPCLiP*K*LqC}5q4xVle?pVSVlrj~k+QY~qMzHBbDtiu+>v}g zznPL4CGq0B#_7*(7gC7PJOfqf)qLMoK!pu??z19(ZV!TC{JMW$YD|{*8`b}udWT{? z^t_h;iKMev{%QRo9%(_}{5A7I`1*b$f9|V)&1n=_^qYH`_M9L1`fGz8v0 z{~6QSgb^;-NiA)sgVJG`Y4$&u?#T zbMrpD&A7=kh@T_$@%_*HS>z@EXi2^K&&E$vlWVy~ecroNek*C9@5bHf`LM%k$)b}9 zqa@1Z6e5@twOEp$vFzO^;g`vX_6Q`FP}Tm=rfVqUk{PO`QzY0;=UHUYDchPHkgMQ2 zo%N^qOh-Px2qe|vyonGLgETw}Eyzea)hDJjYsYVxPO@F9V=^C2;Q;Kvnb#W-S4I2D z-6K=7;lVY;_V{Kfl-xTpL%`fyqN_LA^k&-n+6PK)!)=6I0My<7g^&GSfqf~3{K%WI z=|on;pRKY68$Yqr&Nn|l_sLGqJHQR^vw8jDt^1Y{<@RaTg8fnZ{&@C@g`WU(4nK>3 zYm0*+WhW`bPdJ?-sqDd}_Hr{IM;_*(VXWR?{}Es(z((+e2ZHKzUpR&bzc{OaNyKga z3rPu{B&3S|0F=J6QeZ?2`&c?#K4x->*^MB^x_ggc9NHhVImcM6>@<-9f7B40qvJ=Y8?RFV&D5&l#!R+^j--+VaV_M7Tcwv0>n%Yc0iL4A!W$S{BJPr>`9`ObT3smhARIG>W_li&vC zHU-aGPHcU{fc?+De&D?4?5DrPCo>Y|U)ShazC=+p6VFy0R=9~GL1>BuipZ0TXI+)e zJ`AVc>f>lssSPeXb^FdOt>=Ct5e+X|ltNR@QmwbP=8@PL! zV`j(xWQ}q~z3kz@kD+U%lPXiwI9edjnIIbA9Xk~??p+efuzaT&>J>Ca6j=&ZR*Ic! 
zG7Bf0TEj9&dV>HW5*f9ufHNi3$eYx`J?jEousLcHs?w-PWvniCLc1)^>5VINOxBUP z?A~aKjnk(N7tHI!kf&a=hSi?!H`-5H`vM=v!kfrG)o7&Cl-xB7POZ8#=g+lTNHU3% zy+=Qc_wN-E(eD$8rS^nwkS8d>&gk!R1w>2(L!e19xepH#=zIZMOioHRRpqGlct(-m zU{vPdqd+ySj)zw@@D8XiY2#v}d^78lZ+-p1=zW(Zu*P!Uj*lK)*wiaQ)nI)+*!1y8 zU|%>KoXxUgxINg)gAcr%{J;O>(7pGERI+X!{OLD8U@SYgk7Jmi#NAnTOkh%lNm`M$ z;JJ)IfJo^NtMqE82`{=8O}bQY0&L_uB(%nafGII~WxA5W1gyZpSS?flROX10d`jQq z&gH*Y{8!Fgk|1$@gs&OZ_~Nd#qU5dQn;1eFX~eYh`Ev8AIOr!JHI{sjq*CuhX4=>$i#sIESq(W}e5oD?v`pwvv<@jYV=_ z@IVn_A9)uXo-n32xZ-z_GtiMWtpqu+fth`e$HfiXAd5Tf4a;vTYZG#@kg>&M)p zLPiSU$I1B5q;-;lGL6lcYPj+KR5sP}zEg$+$f9Xc!-gbXebSKT1WYM|m=!h=8R#kh zBA#7}MshaR${)W_uKzk&G#E5!Kc!xL>J?QhH|9^O+haz}+98O^5bIB+)zHcZw@Uu( zec5fcte}D=87qqOFMs~u7TF_}Nz8rA`JbCYjf+qj$yt#b<>ZtbhEzQ^OuoBFS_y$h z)7W+#P?9`G@Al+{&l+z0Y)lT2okTKOg=+Cbi6);Whi2EXEPu*;o{Z~)n)*sMo|wu6 zf)07hn2K2JNjL)Z}*by8hQm1fo zN=Z#(1JcMN^io|tmxn%(Rup*ff#dqw96Nd8ymVmWq zMPMBIl$1w&-Lk(D_zx3?6Q5?$c)Kewx?eSmvqX`rvp?QDIKY6&QmkW+vi6XrPTR54 z>QN>g09x=8cx0~IZw{9vWbChHaMYY17W{h7$H!vfE0P;Y-9(?7Tsn~XnQ{<8%SfW~ zrBCKGF!!T0%u#iiqbMkU?qHP!m+i8-oj>^D&vu~mAFdM%J1V6|6p`C?4Tf?KJ+(6F=dDeAHU-%OPP#wpB%JWO6Xy&XulM&im!CjFs`*$FIW# za}4J~`0_7?#+-$izS&qI^P-|LIV-&2^t+_>f;3}E1|?`5J~~{&p6?+v9l)me9>fGev=eQu2Zr7VsjlE_Q=?W=<5z zKly9q&3|LNBDV|5 zv-R{W*s1mc!1_1C5XnROW_s3!wdl{B?2Ddy`18CS6MpAIfOiZs=qI~7rED2eg27ng zBePI!?z2AJ{h}uhbm+q^9m+~a8M4#at1Prm5H2t&=QI`h2WYh9A8^Bw+s` zX8uP%xcNZ*()J=ia(+9DKmFHj;H=%6;6xFnb0;C#6b$3AgOJUZKAMqf?zZB*tQB4~ zfq$QMYJW6*TIv$Ufx!{>VKi7`n4nx8c6S~oSZ{((dbG5!f_>!4E6<4;P%6d_s0dyt zIMN|UMzUa}3Juw7?0$Jc{fq8tj1QfGk(88`Ax~(a9au}z1vk|MBgB`@(f={a>AB5> zSyt0=cm#IXC&159nTNExIwd6}tKAORraIGC3C2s=moouO;<8z%u2<3Em3fH=NbVx>M5Et=iAd&k3N*f`q z6x0Gdx<6gDXO!*s69fcHNw)#M5U&Elz_rOjDkqF@XCM=<$M}(3gMBBDj)URnkRs$Y zkn1fEm3fiCITt@lM^lV5s9IzP17l8}_y}@GhOq^@DX1MoLA)-^m4tvSor>2zq~hRB zfW`NNcdNwoZ0)Oj@DtwW8V48F@O2dg5gfyDD^X>=x&9C~r7jP=DTtr*oLs#R*oL5D z>9w?zOXAYpi_nc0SeRKf^pXSZPo-nkW;52~$)+?@ZNtQuv@&4?a|VY2=z>+j8|nS_KichO9k zl0WNL8Q*g|4De)$^E<86{r5L+nd1iEZ2RP_aqKmQ@ar>F0NX?Y@>SFwdykUSZ+vaR zF%B7M!(@n%${CG0oGVa2N*1@IwT!}Knds~8%GWpflPl$3kKFG`0Jc~VY7gdZtu-Zx zUJfS24eT_+%5(2MX0)x4D7>9f?rCC=UUE!>9IDLfeHB!s^kZsFWolFc#^PybeNbIe z%D=F%K#KwT$~DHqG5s*#mI*x}&@$H7@_Tp66#BC>q*idzQ{8pkuVN)*tW z5lS!np!_2S|Bg%BK;gd3h&4;oXY@8*X^2FEdvg)oWFr?QWhcD*D)o+hC7DoOmD4CM z>ak4pSrS3P4qLcVS-&%X_>HTiA;Z!KD%{1=ndKF|tscjV?F=x$g-kjLKeXr=ro$sY z|3bnK?h(p$I^w4kFVa~KhHd7F@S|jD`JkxZ2*Zo#&4|vD2% z^XKkTu+`y3)3P#5g(x0FT|2Asg^u>{KKiFLITn(;KH&WCT*<}F*RZW8TwfyB>wO6w4nL^AYOj$yJBbfE%>%c1rN1X*ky>w*8D(jR3=Jq_rWr*v+2I;Z zda61Qb8(V$;GvcG?tQfAVr?w^Lw9bcWBK=E$VPCd`@Kh7ADe5IRX){9jonIb|6}_k z&-KNJc5hQjYQm|b;g?wVvbw$&fCWVh_1El>B32U?6=`X5KdjhkHW!#Mqf_48*ZvAs zKOl_N6S4tyDNMM1+I%@6f@LMS(L7Kv*fPnFvG4*$NGLTsnC7e!Yi}#$hUc~Lc)uSV zDrYh!C)-Nj^(bK|1OcY#5nt(~#kXQs@L>qMUXFSk4@LK7)2E<8C{|7-Vqaz)ZU*)87FMbypIPGCUObg`I#F^q zkv{^O4-ug1_{E^IdGSIZ*gA3N)+w;W{G&ed8pzLQCOan^7`(?_8t~M38jnb7|HLZl znhSrjZgkrrr9Qj@F*y5dLyP$@%EqF2uY$IgTR`hR@#AEBtroG5Ej|2S*e^tZROVg$ zhZj9AF5nP+a(rV)WX7aQk}0gC*;w*hkEXX}B-o1mjh{z;iQ{Ic`fn6C$t4imdCAI; zyy^Avidle@OdOzJ2G!M78CXh*MMHBl$}(SMqnSwY z(fw*|-VBUqIba_~LA&2mVNf6mRg6^@DrDlOrA{knx<pS#6%P&y~;av@W}->5x&Oii#BMWZQt1*(*TZ?yA(Yzut_rDmG5Go@r8H8RE> zc1lwutt__R#_&&CbpMHa6wRL8vzt%Q*lYd2>y*4YqU1Vck&dKJt7`8ESF#+tOia{& zyZCR|z$B9u@=;@G+gNN;!T~285q{L|R`{Z=)z)@`p|Seh?^IwqEo@Y}>>RH_#8mI@ zxX6d9UMJKW^M->#q`1D8wP9G{Gx8ipt8a?Y^sXSgC{+4fGJG{iQFv9T!|HKy*Jlju z&2TqEEJ*b3rFc#y(P+z^?5?aM$jT~0L2N81A!!L-R&roTIk<4GIZpr{h$}Xi^5w9B zH1_k}#YOi|DzRO%V@CyvpcwhMRE%7`pb!t0Desgg5)@U=-IUGT@fEtuC@s=Q$L6`m z!uB(g{jAd`>}L$Dj+?3jC(E$;Jk)ph0`{Wl^!lsfa057PRD#F+w^|=l@^KF`OetHx 
diff --git a/Corpus/Structured Pruning of Convolutional Neural Networks via L1 Regularization - CHEN YANG.txt b/Corpus/Structured Pruning of Convolutional Neural Networks via L1 Regularization - CHEN YANG.txt
deleted file mode 100644
index f89abe5ef5d3be245cf71d42de2ff8aa4965b5db..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001

literal 55892
zz`;^$aAje1#QJf$w_#K8#}4i1`kR%lrl| zM%G+nB~If9kPIqi(F;tadd{Y%OmeIYS8vcLN8n;&ey&u*vJ zOI;SpN4EF~6o|oJp1j%>BX;jDyPXch#T=H4?&PZB+NaY)n(!)mObtSGy7&Fz1A||- z$AJ)x1CcT;0Wxk=c86@k>%UGN6KyZO|qN3~|U1Pdhzj(Z~)a@(MXLFQVKD zNn6Pnw$97$1oKv4K23;$oe4n&vS2K9NtZvA2$KDBm5`yK;T2|UVpZ(I&};(QMykp9 zKU)+7hIj2hp`fv!o~qA^xf5@pz}z>guCBP)!W2FFaz>Smn$9SBZ!B?IE-t4N5Y>1X zGauW(CVa84#V#fTUr)GgXr^a8nRc{;9VrIBdEJKNqyv5Wrisuke4krZnzI&YClN(eG>5jnUn6y@Op)5 zuHf-iHx_9aR(zJ2R16OUWr2uIaf3R$7Z)`ESBvdWaj}g^7$=FPYKl&*C+IE#&Mpa1 zv3EDU%gkU7Q!?MeSH*wh2xh39df9iS5h?`?#P2VhPX_41ziBM@su$mt~ydzX* z_p`*#kTmY-bd0XJM?40}Lo?R?q;H^0#F^EIm7MUsTO9i{@WeuibmR}}yTxjvyes~a z?^?Ul5zMjqNRh8BO>BO>@fb*-LQR=y=Q#iXF;I8%=R4u6w~IX-UcqjHDtC9Xhr1KN z1oAGN2#zn__%|HKnxR3bd7x)FKq|U8?UTvqj=VSZocm97ur?(h5=jr<;zFDoOE@8 zJoFz{a4QCgKg_W3F`6)f=B;iKsl+vn4OGNEE znnT?YMHi3m-2U=O{vO+PyzuEoyLV^sD3qFPCD5}1Q1FnN#mWFtb#*1nJN}qJDG@Z` z&J_HK9laT~G1>$cV(ikl?Dr-Up5GfRO&!iS89@r>h--+X&cn0>e``>jXnmYzC2HFi zWR$IM=YU5Hy_l%lpBnV-USE6Mz%;Ee0N74@o|sVR`%Y*Qs0SP3sYH~By}9pQeOwTs ziWx|Qr!_SSwt+Al4%`^C`1s@mUNt;wKW5-cdngR3Ra=CLRalfqZ-ARn=Me*rp#^`% zJaDsMAsc0`2E2Pu*u#-&72Z{0Q{kj6XTT!}hViHzs#17QV9Z>vAV;s6n<#|O`~jE4&jhsDp&{Qe;F!?VwIWuQp@|QVy8k@| zYVxfPhya_2(yiD(M-}&4xzWcXv0j8 zFq@AfEpvwJ2C&j(M1nj>61F$wLCm&YthI2^gyT~HG6mVrA%x<>utkK8@x?rDEJjL^ zW9`4=Q&Vk349LC-M5C)29adEW4Q_oZ>g(==W@_p^SU%GzsU&Ebl$FqFZyvtV5y8J*AA)3yL1&2H+z)ieIDzH0foRtC!QT~@L**}5@NefS2|l^8dt<6annb` z?HP3=GH9HysM4pz^~ni{Pa4bSMMs{lpS`*=&a%4GRa!Nl8}LTUS;pbd{e2ts`5f~ z>yAf(zY!=~q{K+cM#YfJ?ecNqO_+a<9q1G1 zQ+gMtFWAI5)l`C{xl|<{7UF>-=j4|8-wx5OGNO^dVrAoDpC@VmEl`ot|H!C(_?SUY zH4&e%qg??~7&;?Qu|T7E>l|e4MNZPy9*;1&lzvZGwuF+&B_T5)ULG6CWQfeF7+b~Q zGcF9Nb#fkQ>1KCGpJPL#Z~=C$vXWH0fdJb_O!|Ouo58mXp$x<*V?s2BsR-I7yB<2# z5~2KT3Z+hkkFCsX!npNi-{;mWsa1H?cySpNZLRQkwz>IErQt_jPHAq zNVT|1GFB*>;s5#H|F3d~a?L{iK!}D&nQu^ym5$rjT90!-#xb%QGRvU9h~%Tm!>_T=W(%|(lp8!}|1N7b1arF~&u#?*nLiN06H z9}~v`K@2S&T3`D`vl(f}Y+w`2V`AZzP-VKLa_G}XB^bp9Uo-)) z(28|SD`Tu>Ypo;)vpu-MC`3NP>QIt}QXzokB~ew1y(-WSC*_noV-mu3A6WSgv?%t3 z%CmNhS5{M-{zadBSss3&g2mHXcaPeFR6huzGiWX zHF(94D{2zsqux?71Ez&yAbEhRVzyhta4Z%p`+3outAatyq#_WkXOE#+bB; z%kU%guar_6c)=sO9}Y3$N)!`0&2=_q{ZPB}Cq{u!KGygXdXeI#jsB=#p_+ zfZ2#BR_Rb>h5aB%3Eh_8e=J)4jFjvNH|R|;ctR0va;7*SpnO+AhTK%g<@-qeYiFsm zwAWu!2~f6pO!*_}rxe1uCUIj~^u0nK#L^6@s)#+#*!#Tw&`&1E%3YsXMw^YbiD_GE zTFc+fg|LZsgs)C5fkki@cPr=jihHwKE`|<2-+V*n6N9>w!v`|y>nr%~`JCzUk__sHei~n%p_jBIr>UW2 zgZZQy0ggD^~Qy_||r|)+u8%=J-_-;r%lKpf(yvO9gRvii>djm?kd(%W-iYyc@ zP{2+M)JEqHR?vuYOXOLH=bTK?GTvM88TvHAB0#tg4mY_spb}xoD3-kzKr++p!;5t0 z!HA738o3M;qbZj&x;o?{`2LyxcSfTJo|w+AL7ji-n^-EddOW+ns`u!oR#D{ z%SEOi39%~Rqa|xA$6Y|aQT>Zq@rPciw@W#rwj^WsoW85WKYr~ueYa4ZZ7g_f|+0_Qo=Qy4nD8A35eOMbj~F9&KoQlc^&uMk`6 zXTH}o;RPw@Ve#iwB@&u_-LXuXNXUSXU4>_c;j9-hCv)e~qVx=~+ejO$T#-403=pfX z28WSGzi}lS&H!BePN>N@byu-q5_0vZE}RIwvASqM(OQEMPYp;wxoG z1c0Z4F!(=flP#PoO9Z*|Sy3Qv49vl7Gq^@u0LZd>Gx{grUI~nTvgjozmSyp(2@eaJ zGIb^w<_RMZZ6FB1HLXw1SoR7nm&Q76O4Z1sz@sS-!#jFZDugW=Acrn2Y~e;fh$v-l zN0Q7KBA7NAr-;-5n=cgy5tc@GuzzJU8^!y%kjU)b+j?t@tvrAzpXpu|Y|6sf+IAJy zg$R+-^KDJrBA#paKerz|m)PJKN{WPD^+s3IJ2|gc-4vfb=~C1;P#L{g=MQK@f`0dY z_=l<7;Uz_K5ypedDxdU?50&hEo}!{@vlYmxxJIId3XwC@mpm2BiT|P604{sGs>AtSis7Y^R=YzbZ0uyJ5& zmP)GPaD|^+5Obm`fm(boX}tRB3szsk#~lS!a7FFROkxfEuPNMM4k!R_ibuo#0NIk_ zR=6)CS*b^h#Clb6kq{&rP~cyHOp6urU^)2}cuSFGz3xChHDN~FQ zdPoR(ZMEVP*ZBmQv8(?er=aUx-4kPBRTvMH0bm^ZJK2+cnJrFl6W{(tB+V^j<%=s z4Y1z*bhMJwh7>f5NBO*_t$6w}#m*{%z`Cml4$}v3$Xnqvfy-I8!T_%4$^dsQuRyxN zi_8vM|8w^|o$z46Zlbkj&%&UM6-O7)upjt?5Dhf&qMye0Lj^yBba;=~^9ksiK%z(B5K*dccg?ZCi& z9#&Vj+GgV3pNVKu5O>oZ$@ajfSS_OK)$pHN*)k+!JkW`mg6*jAHTVEQMF;Uabwg&# z`MF)Z>JKjgT>n+Z8Wg|K 
zCgSo@jSM75Sf4`ef9iSGI)wq53haWyi81p!D4eaI_?g z;z-@zbB2V_Rc#EMGn{_;aqY#aBmb&JIYmwD1)QO;vrXdTmk%!{!qLj0Xtm~U46or{>MC^f9L?g#J)U(NH;tEX z-kzK-1I;sbtFUA)*WjbRIa=z7x9&YJn=Ehmux-p zfhYg8*Z9|U!Tx;HVeg+DEB$dd2k4vUu-!?jbUBGiA&;WX5;c>&`ott!BNd;iGjuaS-q4w5zte*=&nu9)W%^rsxg;g=Z7 z*jK2Ub+k^M;WRPYYiOILpq5awuiG6GZ={JZE7CNX82W5H$|HtDci|ql#mWSa zh+WVOlLD;8a}!l+mU01rg&8BpY2`+Grl2jeImib%qBy8cCK#ig>rlI+7mrT*s5`R5 zi`Br9mJMW42uc31S;Dl23P~P)=KLRATmxKQKl|-J1}^uO0CB5DK(A+;Vyx3HWfcx} z14<;S)s7Tsa+Z6-_wN*cuc2ajNfM1QwDmfdP?lw7^=YS{B??Q@G#P8L18V&6_DIZk*fH`Br2ho0wRe6NqoLMJU&=C z_l&T*pS&evd0Lzvyn6R$_w?`&OX@1#7}c?<49SF`tbxMh_S>A-T=jx`VgY10;-#h+ z=LfG24^nP8{oyPAO)@)EHL8PX=8Jjtr$a-bk6Z+bf95VoolBZUOzYX!^RL)$Vy{{9 z*yWnb2G&*nG>-`>r@{?XcVSGDIDW+%<0F!6p6F@d3SPtg%H^O{aFn;ZfF^CWh444Oqf^txWY%(*KIv!+3z2vQ zMHxRIF4C%!IKy1+OWG4~7#TY7eOGW#S*Lj)RZ}pp!b7NbBx1&m#s7oWHz|$HjKB89 z?$jukRPP?kczt?s_WJFcR1z~^UIvxhwF4yPG2zUp0f><}g&DwizoMppJ2^N#JUTc& zPrB-UN)L$SZzc_IQC!($Sx%G>S|HW&X;-DRDKBGtStPXf_@H)Ok&=5_jacb>Zowk^ z=v@E=B#rx0>@wz?Y?s!Pnge0|CFL@xrCQ%Yt>-iWu>s7(3j`0(u9^5~Y& z^IjzS!`|V`-P4ukYA7!F3-8+KmrN5o@yLMsXl6)C@`1;SpGx>oQ-hQ080izniJn+R zFQhd)0E4MiD+z;2{wsy0F0qg5Gp`kz7dVj~^Pp#WTaS|FR=AyquS%G=+_3$PdZt2# z1F|S#iu?Jg`y~M!_FCP(=n^(!sDAH~0}fIBQf)9}Nglo_fG7p#J0DcP@hHYz@ImG; zsF6YkC}>4hOsX^?IVcq076%(EtR#F%#b>k;BrQ?0_FteF<`ayhF4<1{H#9ty3@oELvlwn7U+)|cJtwory zRL(_E4#ZHa8xC`1w(_HDZuK1)(Y!0OsiXpmqV;@%`qJEpQ@vKa$yx~IolOJVV%*>f zuT1-Lw$B+M>B2-*df`}SAX$?M)rgjNcj{QT9!rTP^;I5NVOKI@bX7-hAnycOgNS#Ow&(NAkf6~AxO%j>Wxn!-iL{~=U&a1XI{SxbAE~%bC4TMIZ z<5}rZB+OF_Hd$p?`TvG*cJk0;TTK6$_V1y0ZBpZVe0 zwikJ=vKh0({Lfopg=CM66xULs?AGy%$BUfksMFOi_ZHBa}^o<@ZH(&0gC;w#|d3| z+ah)$ms`S`NT)&rpe;?5`ICn-{HI?_!#<0c-*l<{rl*hM+Qr41`k-+vMco~#(5$LO z#y1}MJWP>bi`i~hx8~pcyeO^=cK)~h9(+pYyFa~2y&ar1Qe2!(R;N25zB_e1Rr&tx z%fJ6f(XnlG+7;1keZA58#SNdigg`BM$|QaTfV{CMw8mPlHa09;sNGo@%1zjZ3_vvJ zvs{Mx7%?P`q2lRDYX(5F=`!>8W~s72E2XUQ6lk)av+}b1sWHm^@ai^cEA$O^LFg~S z&wsPO(h&rjRrT)rqiavg%jR?ww!*lGS!9&6K!yCxK+>?d*-Q-QSj8~u=zuMaw9wT3@QF?2lcH5)XD-W!|5I$y=$SG7ru@3qtMY4FBU_&vJm-B!8oY$t>*a}2-p^hMPqeSW5(Z8{PULzegbb|!vY=%Sj(O4} ziL^G_36UFERd%RszTn&*tRKaeM%DGkR)wsISqzI)T&u}SVJq}yVNdG9?LF1D8?tI` zS9x}h_~OkcUJe115X$+LFuRRiGHTjNP z5ISa{*HLe|wAs&2Ed=l!5KY40tT5=L-e)YZ_gCxHM0?jmF}ymQfZdg|u17KtBZPF{ zrf^}o5|YpgO^Zu7*+1N7XVkfkOC+z+8>Q51tfv6l zdS0oc7Q{Mkq{0X9%|Ss&Gv}KtGcf8UBO|6uZB;oQVkPA;AAv;FmbS4dYC=1&pUaUY z>4;R4H7Q4$5hvxiNU^>0Q(CD;I(<{3@RT0Syyl|*RsYpmvL$#@5hpJ-E1uX>fT{}U z_Ycl@58vn*B&noB9bTl7yq{vEcLqtT%By7O)zzj;bJ7+`QwuMt7kxrZNu9-@)r?r> z!Ha*aDaoC`QU4`qDwTN*it&u(U5#qUPy#AK5Bgi_a}XDZ_KBU#2h2cwvboBd@XNyt zyk^DAAcX*?w!er;a$`Bao3=cfQwy?N1TA%`u10O0U>;(y0jd^4*HmbBPiW3qX{r1w zQD={%h3z&fa^y$POPE@8E-JPz>)yPJf2Tx~mBI!nY>#?n{2(?W1f}&$B1n>p#3R}5 zv|{Z3^9V$@1l5e(ux%mJu}VwZ{%j5Z~JbNbU* zwkaeVy_eV_eMnxh9Z0aOshB)CUWXxS_4{dbt$Chl0>oNa3I+p#CVg5%iSpuVNo#9_ zzrOiHlBjaQFc(-^O|XefJPQ9NU0(G zBj5LeTfV8Tjl|^?qPbTLhx-YWcDtNcNkz-?ZE%~y zGXi$biXK_am3iaoof5bT*UBUl&b#Lv0xg>2`_{ccARBF7VSe^xd#$m(_Jf7h!^QR) zCKDeIuq#YBlj?^|{MrvJRQ0r@Z$}ECl%vEG6u8$3npBM~jVzx+VeS;IN`odj4P}>B zac0dQELF$Y)gmLGx)m*?fDn(5vrFCwCgnnO{Yw9V?TEh)uV;4GnPXYB;!|PlV-e;w zs~L&4!)E!8vDp+->)K-XleC`7mWjIYfGe)GU;=4HMGeO50TqM}__w*#`q=0*YT6Oc z!*zfjV>?4CchUSK@qmIda0KD!(ohdL9G2pH=Q%uvGe{Qij8_8ew0MLkz5lJ>$1TXHs%^?JR0_-W8~Jmt8n+KGEF8d9i4lGWvgDg2-R z{r~X4%)g2TA}#z6j)+W<69ulV6rATTq%5WFUph)Zr<)&t!$S=6+f(3C?YU>E;8RZfbC6LVViz@!4VvuIo>hrDV7z~|`oD>RWlxL-#;mRi8c}4E`hWJSJ zD3eNMS63UocsfZ&Y!^{w&|F*J7Yqb6s3X;5FT6CVL<^;*DU>aG$_Q}9aeeGjb{aw+ z4URPPls+Xl8L@-hEC>M`*E>u}=$Jt)VpmZPdf7%bmz%H_@dlkN+F9pz9wT^0UOdmc zbg>O~$~JD>NCwh)H}qjv>&Ry)?S#cD_wpn%RIx`B?BVMs^P@zk)kyJ}IBTkyP3Up4 
zu%?(FBP;v2VF1f6wCt)OH#RyqHDhviwi;WTKe!!(;}mK+i8Mv&hjKiAB)9EM;c8CAjaSNANJ=gpW+1`<36h4vU6M3*>MttU;H&a+81}#Cx z<1AV77mw9w4B}x(d?c{N7)O{^L2sUI%0#<*pM6_v2$^{=&xNQhSPhR-Q;`!wX_q-D@TsVQyGv~-GEnn>K7 zQoyN2K~P+ON`W!Mmke5UJ(CNT{BUQUoW)3W{O-J4>V_!@_5k0+glf_^+XU64dWoge zEpyD7h22v5qVI(caluK0Ivi%Dh^^K3S^9G=H+s#etPVJn+E`ovAqI_mQ(S0FHy}c} zvM*}FkOVwv6;cF25;Nj*m8#6WpVWmsDU#L3<_1!q6F8;{^NLXuoOF$5@>M^ULlWK5 zt-2sdhVmtTeqc*)6~$(&v4$sG&cJl3R2w1Yw_{b0+YB&{RM_h#BBSAIk7;mrP-6>LLF6deY+4_BoB*ce&$GfLzsa8jyM>m6! z?G}|&%b#RFRvzaLMH=7dwa6>}4qoY;ds&-pV8}J)L#p7}P?6C?;_ez4PK~P-%3C2S zcw1j7g<*m13a}S!5Yq^%FCImc`m)mI$`+246bDGUUDLsCr-3%?U;FxpPQ z0y8s7lQ-$NF zhE zW@(c|v5=;mm4C1^4p(BkbpV69%WxWzf?WGywa+Y2%g)RF_TjQ3Ej8t1%Uy49L8+F@ zVQG1S-#(a;$b!Xn?3AKcuyq+f#JXpN`t3u_^KG3{_FIK9O{W>Ghc`Lxv3tDx=J&J1 zvkEUf@?%LjTU5F!LJtfBQJ(RTruneTa{kF04k(6B^rTq@R#k-6E-Swy;NaoNUOfIj3fN5X7&w>FYLK;6Y<2OxFNu>;P06{}LuNt@tr%C&Ir_TCii^;i_ zdeA!}S0z1jxIt95IIYsc`JkJ*-)g6oO+fKdxlqlav}mGD>gcx&@k$L#u8dt^EA@qN zl32pr?Bu_@E2YT5f)!fA0;o_n6V!(y*dTJb3O?seZx&}Y#Cojx5p^p2sWDv}?Nov9 zxHna&yz`bNIG-r2@hedxM{>ccny);YxV%oYNp#jEAM2)Dx^&hNu}WoQr5sGwU~z1U zu5;1=(p~bItjOHVod%LJ$=et~c7CVh=G@)f5U zVd^LNtRY<-RI(*d_QrOWpHH~vwI9py`#v2xS5!bq@d9en*r9N_1pckn4(x?j`tl5v0v0#diV#fwHZ!^>Q(KowCQz=%;c`^R)0>3l-(t( zNKWp*dcM(Q5v`ip|Ka7>-I-f%^Rv$-qbM z=q}RhzYF?VvY&&C$&FZ?^fM!tV%_5XZ-*PE5?E{Pet@!gysXQqSdg^l7%dut9=v>c zuy?L%hSZB49$)YhC2OPQ-AN&{EG?Dx1G(Adx$aN13wl!xph;J$P+k0MyCUa`trH~3 z-Gye<;IpOu)fcO4oHRX0R*_&k5%J8uy^YlG3OVtFn5>L0Xi9?MgH{cZmYFds;}U~& z{hJMOl_}w`HrKA}3YCPw;lLnpcpNsfu=Jit5tDjK+|nahJpz4bVj{PR<1~#bA3L2Y zaJ)*c7_YUl8On*rveLjwo`B)=OO_z$Mljf#CH>cLE*~A5}}^p3+eltq-^Wzfdr6+x;~DNN?IhvJqVA z^ieTEiy;)R2ynn?#C~+=o98zVbt3K<{LPIADC;G^5HC%uzTR*B&5XZYZ%|Kx*AGOE zmm{s(8=O#)3uNwm67;o0Q-=dIGjIO`hkMa%RqqQ~9yl#jJUc!-Iir()Yqg2jaDU&x z<=)v^YBs42`uFRy0+cX?ejuVE zQQv2W9+%6CK$#@P?zA?yT1&0X?bW@NzKFTQ)?vJ?}TJDcpTKbxR zI&^B9A6&R>Q|Lu7n%v*&BmjjqDm~H#A3jBuk$gV^I@h^+K}5vpCEf#74p-xX(kf0l zPp|}>jYqi=av0Dsglcu^y7ld)P1N4cZI`O;yitpm{B`eLY_dgm;rg<;)i$X34bjpT z84sfoW0!)5L(W&!y_0B&R|s%u9H8-iI9GSzbyqCgp^yNqR^KYsbJ}k4G1OvhpNcVd zqI(yL@B7S0o9=fpU<0}oq;%}`gzc5?Y7ibn+2I_Lmaq%$$yav1$L4H(h&=(qnf4Nv zJ82Yu5Y6)4!7TkjX3^P1} z5-{Nn9E=LtDQz59v#0JSwQirdNLe*EIcAKRlZ zy-$td=;lX4=|7fZ|9i%`*p}d&w|fQ+e(SZ-@oZLX1`8cQ;)-YdhVe|T zFWUVy6*jDrV7A+97F6o8N)^#KNVi}}$oUNAJ!gSsT(Lf>bw4e^0=8u%ZOcZp*;w6N z-TWTb`CGRokDX}on15MuW!=?WrB$SytK{_B$U7h2Crt)Qd*7n~0fJ$t&mTC0wp{$; z`@s!Qv>ncd+Et%Rk!0H9Po+u|i_bcp6$h}sb!j@P1hLn`-I1S0g3(GB$#sA@XF&C# z8lpMUIwD9$wt#zV28|3{mcHOJR>H{T0l@6|5spk6j}^;OB#%~f9LIFVpI&d(&6gA?jTL* z+qtCrjR<;3#;&Kj^b}kqUji4G?q|3>>gi;r$84HH8ZIg?{nG}kB`QRxpg7+*rBICf zmNif1K`EYNk-3PXyJX}oido5`yUQ6!Y-htESgQ#})(OGn+LRzH7NJPEdMsH9rVqjb zvPLWK_w#OT3VyLvjjLNc^TwndL6F>%U~T@mWfNkHP-ogW{(LrNAGJ%zf^EKQXRu+`%zVNf}`6&#&q8sHcH>b`hY7%bs%8KR(O0y6aaE*#>PO(xm z;9+dRbzU;2(>%- z)T7_HCKeo01#gD~5*;9Jo#+LzB_hQ$|KUFl)L{<+^{oKv(x&TZQ}GEn9zBwe!o#2h z_foeW+qMM}q@XS^N(f_d>SJLDBy+zz;4(+w7=9%%a7IgIA`vc4y3AB(%IO1$?p9~$n{Fae1k}>J z{6tbJs@M0zM9J1I(WYM4Xy&RAV;ppuP7fpsEQ-w4A^^>u9rT&}%Z(iz%t|CpstlLL zI{jInBWv~%myxA;Jp1XGE=JoQBIr-L8ExhpaM9fO@RG0~LWh1DiU3gKwd@XX01bCE zB-Hu2clDQnvRyUxzcyo}$eVWDNSJ7$EDLWaeZ})rpgi?>RS84ZW=X4i{xY;*Pz3n* zOLsb7yziizTS`mdEOU9CECa8L=B5JSgu3BmB6P+c zX6%UEIWEWh{e{jZ1EQ?R7QPnT<8~1b_Q@@0Jv2g0`F$zo1E-fVscRIka-Fx-<|vX#b8TvGf>$d=68-kW!4TnGL4i;quS zG`BtgdP>jQqaSS48#%(-BGiSUAXl$vx6|vTkY>QhmkCw&bUjYK0p;eSoFRX7OQeFE_XGl^wIdVr*F4zdoboi|-dtbT?6Ml%VV11?0C~;csp$q`1AY znfU{`)QQ6KBq7I7bb)dzXwO%+@}8VpwK&OVgv24u?GLhe1LWA0`s>fx>l37%$_0#V zs@E5Y=(u&&)KeCsdia9JJNjT_F#Oi@6F|b${WwMK#A7g#!iW}VueS9|GPApPBo;cj zBO-vR^bXMGPR#iML}U{m%~zgqMnLt~#y;qvvi2um35JWxO<;-jg&2wOVtLnE7KtbL 
zlAdl4iEeS`uas?OSZGHMrm0;iyJYdL3_)+$M(hZdye8gJ3eUW(>;d}*-9C?S+aOF` zhW6MJk`!5T(!BrKzfT1SQyS6GdL`^p<>%7p+RA%iAQYGq9gu}_AL|FLluQW_(h@9% zxVF8s)Li2{TqeE_CFzWdvy~Oqi>?|DKH*Rzox0KcUR`7RU>OE=nc{WPTK({IV+F&( zg@Q>zy~kT~#@|ILiEZy6R$O!v@k=*2vq3(0s7iG9S%q_Xo3mD`xSo!6FtOB^#tRr< zMgK_$rFiV>W@B|@Ylj?zOvfCmFeRS|lOFlc{(S_DZ>1TeQ9_K%dMamiJGF{OZ>OXe z`t*tSTTLZ$&wG~y*NA`OB01$xyHuEYS(%@ZR&qf?;eCfy*-cMW2rNIBt2%;I=!Vnb zL*@<-QRq0hZk>s<$Eu(0J3my*S!Nql1>w$M9XSP81i~W1uz^mlNhU9j2>Ogb>fyJ{$05OZ6xnGx^?bM_tPaB8yz^JXmO4} znn^^8-MwFq-@boyu>b0aI|;seD!R8VDS)pgthpavZLzi`Rr!5#_^ow81#<_HuoK$7 zng6WJsM80)5MH1Y$im+;j24LZ0$p$%l!Ws$SlI0=_NsD4!Ukk=k?!fe&V35V(KTSw zwN4Y5xsQY4XSyZcsQ$J3Nn|0s9n9jlJ;%|BcDi)m;gO~#$g zwGHZC*4Ik@JUw`MaC&gOckmU*7Ah0CXl>+uz+@dI?u7bmOKk)8)yA;?Zm zVoo=dV5R0}CxZy9q5Jz5p9_Zg`nMKV|Lu9wM{OkgjTe}iTv2=ycoW@hQAU|vYKk$? zx_L2TOLkN=rf|{PBsi|P=g$p_d46l0P*~$dB)=i$+6lG(csb(66JpN~D_nLAlf*Tg z$y_d6&mXT3j$gfgd;F^SefRhk2dnL(bJrn6M!F+J-PBBGF&=|buK9k>u%Wt>iVY1b z-EeHRaVqyX@fMbi5b1?eI=O9NnRQ&HE5oZ~)=%BupVEED-jDL#0E{=Hr%1hy2?h$^ zuDQCtwnSar+E??_>`aJFDmG{ewq-5808jS{_6uFgw~$7qV1?=x&Y~ykT(q`6phsSV zcL8CxO<1}W)QM%wB}99T6{2wG<{A3F5l^w0K6%hPWdQOOVf9J1RabZogae9X_h*mT zY8U7RFP&M!3)$TxtJ=m-tfZ>KUro>@hv0D>y(&IC_2#Kq{a$v}>eJZ0Aze%C)tt$4 zCyXQx)YW4`FpL&ZN{^sV5>H6RH?ePC4Y9ep)mU9?ZOljW76n)8vq-<;2K>nw)+Pkq z<({r7G0as^gc_r%E~n&=4WM1rL*&uZqrUxLm_RE|cc1uz$vlWvwbd+~q&2mXm)xd4 z27+#jIj9CxRQ#8Ar%ei&xDb~+blQXAr{2}F0&Zt&!0sS`-%h|z78!P4NxEan+QTlG zGUGC*T#2Kj5BN-j_0{$j<>q(2D{J>l@>i!c9oOc)mr%ou_1(&wwcS|VRsqZ6NZ5j@ z(2kyiUka}pHP9LytR3Q;rJ(dV7tUUdk9FwtiVNQS(zw0D9}fOxv6$r-v|FNDj& z@k30`O_vM{r%04vVZo_$uo0;`(Ui-L?A^xd#`e<2>gMLxXio0AlSsKPzk1GyeuRF2gERSr2w5G{itgQ-)DJXM8^WO*j1j3wWJc3dy-&>^SkK4N; zUG0atc;XzidvEvjF>cN@!`{4(+Ttz7xytQ|^tKRltt4VmTr5Wj$T)1aHntYCeVV+U z`UvlCDQBT4q|aWgH7U~J0L$`T1FrqOVX^!u5M{ZqRWIu7n?Qeg94W0ZJxl|$*ahsC zr8x1}qbNNk`%O7lA1sL0THRV-YOPjE)^tNT8)Z_4k^32S2GW#P;~$A3X|LB5ltjG? zPItXZFBi`t`wL&YKkdJJQyg(S=D~6C%j@0q^W9%+$ixsrb)vHyYW9??0RUFfSo^<< zKtrJj5}cm#WERfHp@_(=EPWV1lD%k~KL4R_=RJ)T<0NNwT8BUH?H-!r61b$b-lEcI zeXaF1(ls}i(%4GIC`&LDUc77k(y-%V02XjJ(r5)BX(AWglul)ZWzF$vv`_qMmC5E3 z_)f8*lqk8Ffm~}lmbj4Ph>`M{o0{G@P1erL?u0Y!#+lJ<4j7z)gCB&IWlWRQ_aE6r z@|sq?6Yx@XEryaFSD|{?+zY!;Fw9GwbUaqp2vbxxeyme>Z?49_8iMt*F%pqWtt~EL znH%hpH%jOPN;~1wwYFsnPTC(KmZ)zPYXuq&+oIcXBv-mI$4bdK%{)=*D{ofaI+>#O z_4S}k>4FzH1TdIHgw;H}N5EVLu*8QKpi3%#$rPnPlEE5f-30E4!w;{pHzWX&O*`}e zzKTuZQAGD-)WVb82K8BhuXHHuGy<|nlW4gFD7)RmReieORcpQzT@$b z<2UgB_E+yiZD`9QGDNcCI>1!c(gCFsyj0ZMjH@nE`2`W`0bfYI+MHyXdQ@vrUBhm< z_Wo5pwoEfW!rBUMEqq;%xWgEL^a*Hhw|fgLp>Bo%Cy^Wr&&9%Ot}1aCsc;-`y3w!C zZT|eO2z{@oRQmerPNTU&_G=c-xxZhD~Qz4YOvu4W94`KW;{wiT?{ zXl*?KsSmzzsiGDC?F~6xt~1R;z)+QtU_1KpsfRUCh>5Z=8fwu4*l_`P7q%^Hc{RyL zkV>A9KJ&!ORVk;f&{?|M)4d3}>jMfcfqG0ya*IB&t<}{EG{YEk;zX2Bm&{fUh+V8% zntPwWFUu(T4_GS;kc#uMPgR$BSDjxk+sMEb3$<)yM)e_8RT`CZbBRH{&gna3)Dj3{ z`M6EM>iBwt>i%XEI4jbNFeE$RnsBYf5_WfY52K=a7Z}0W7hV%ps);OGWKuyU)z$C< zvn8ud4^=+M6CVizROd+^k;7q)D)`vk;0xNGQABGyoZ3ZodXOZ<@c62G= z%JfU+4OWUXwkj14ie|@C;%WX=P1Oa2PN5j$XQDP~ZR?+SIM z6Qv+$`DjLvISWUpF{@)H+}*dsVM=IF;^$WIVr|E&r_hahDyK~jE@gnguv&te!{ovzM2H^5RtEOXgN8)kZQd zb1t5R5%%Vw>r=;5x3h?f0G3_Z*xp=fZtRrR{uk@3A27DRV_R*L1WQ$ZuwXM|3S3rMnuBzd@^`SC2o#z(Er~5s+L^F9PtoaT<%`+O;#q#@ z$Xar6RPcF#%+2~*DhcKFouy3z90YR9bh0k5RTwUxK)0pMB}uaLlZzbsR&}eb$+w%E z^C$Yvz1|$Yv&ecaTFgPY(ReDAA{atpMT)bd;At9VcfKa~M!FQ+H>BD~MS1y$x5p(? 
zQq>Oh>f)}0hgs#$V^BC;*+t>wMf!flZpH%Z)=OznPgg#BhP0g(DKF zbioOk5FET;AmSaj{bDho_fNT)dY9Tv1ve{!%<+UPHQ^{!knT%=#P6#VPntEfC}G88 zu%Tp!`lv)yZJLtEld7;2ht|9b9TeK(81>g$COvTeMHTnZLY*ccOWi)M4{g9R#>gE% z?5lRX(M}05a&ECY|I~PAkCfL}$mAoGdw+2J+u?EX$7|AMHHbDV(=~8WpG*?>S!t}h z+m+N#{p<1DV^2Aw+se@H;+_%d>L%8oIP<~cX{=ZOZv6K2SIVse-%Kri_XK@P-;hsN zz~sCvu>%WVmK&V^_#)9m1}&ieM7N3(7U4Hoo+1uVxpB^FPV%|%r3UdTnMyX%Qn{S!9%;5uSVyzz#W^ADcUfXF z_pn4br-GeMQ_LnGwNcAkz2b`tv*dFR{XTD4)_?8$JWKt^1Q)+YCVy^5?{|;ipiW-x zzGe3+$@iZcXTY5!`|o3Hb#!?A^Sj;Sf@i!+U&ee`blg>oa7iNn+|Ud1?2$RndUi(q z<{rTtLNKMq`KdRg36(|3Qu0-^KN8w6k9*HO2i{e--LMP))mN}8{DkDqCEYdRqF2o; z#ZOcssir+h0;2%*bvr)#xB@DpqWbVY*T>8J=;w+~uIkK1(}`mWd{G`dc3-(pi^8*7 zAO@X#t>lzQd_7=4_b`#;Y4uuDm~-FfjT}ADR`qFyoe^NUcL(|W;Nz4jm9c<~MIJA8 zleB1aPdwnNcW(|XC6klhO?$BoRroTv15b@;CMS98AvZ5-SdQI$d^!Zfif1Q>&$T_) z4xhVzv-7P8DJmA?;W}B7z+zM^TcHQMni4jxlu2u_!{&v0l!nrg@2s`zl2k-dhF*4MB0{oj&ZV`ur@Mi3l4-$>|jEJc(jY<#7&$A~m^WqwblAei>T5F~x0ToZx zi6nad$l|NaSp6JI$K)zIOC`c2x2E;$d3IL~XR!rz{0R8D122YUKIUs z+*5GBbhvu}j_@oTe)25N1Uv=*{%d9CzGUtKNU6IW_J;_GK-J!PUDy9wxw5J{Kd+W2 z)5+pud{&JX)u=k1UA-O8$5*H0>UwlNo>!xr>hRICqiSKA=R|iM0pH%m&2;~Y>c`pD=;mT{Ree7nUA>!*vmx=p^U3MOXgsYRjpiql ztHL0jA0AhaULPDjE4^)>r6&q+|Mu3q{_js0qseq|Ik_2(&z6m*E>ErF_V>m0YrK^I zdhp%rg$V>0Hk<&+tu^Y+%_`r z=f!wAA5E*P@y&BZILpO@oBb;^3rnIn{Ob0O-Ok1oeIP{`?Q zd3Cd>W^b#T`Dk(lCDF=yVa=>pHNU7%K2@{#y!&9|>-*K&_;fxVE!5xQW;P$4kLl>2 zmy`MUa;%m|S7+7a@_IgduSZU2m)Fai(ai)RnD+Go<-VQGFGr$d1~y4V{b6!*FPlM`**@rRXx-UMs8DCyAn)#=`PR1YUW@5ZDS+FRuidJU9(k#Z+ zXnuMzVf}8Fb5@VhYw6B3^SHR9`M5fpy#2rb_y75HIpy7%FJNDHZyB$5a5I|U+^;@d zK;qSh*>ZYTEhd+f>1aOv6c&m-(q3r28cQ+gYx>Q&ViYX1O=840u;!!rSsH4hg9^?Q zrO?yz>QsBUs20nUKsO5pHCv4Px?w;!lhJhYyY1ehx*VMuFUqdYj668oVnx3;dHy1F{teUX% zHICpWX@ zqOYU&Ixhwlyvs5|$v4b~C2>aXZkjVT0%tm2*s@+#!;SULjjvY_f(E&m{BB&ikXVGI z-Y#MN^{{U;8dxg+dOw>kT_U$QrY!{fro^PH=PwRlA6sjW4!=J*8g2}#gA>FH6o72i zDDE3=7w45IaylN(OS;}P&)X48G#5#k6v^2{b8!y4fco@klhFpK-&}7FtHbKW@zKl2 zKR-G?eEFjA$Z@SgHPP`DK17I0!YxJp><2m~QM^g{tCluiRBJ(H+^g1}jUO$qDn!_5 zI=EjAA3WIjHXqgxqs3x!GO7NldW}MTXU}bYTm8VyF>oXDtj0!k=I&2t{)~>Hwc#IVDIH! 
zYB{gQ^Yc%}0n5v4kzlp<>qy!otPu0u`?h*Mn_>=L&E7XZ-(~*GS@V;Eza2VIaa|uP52usPSn+3#5&K{}1Eo^k3(*ebg|6svdi% zIf_NJ3UW&anm7|r1uLhD=1krx)`#pHV@0j-6r+sE1|zMXff0to0s}IK1)`o7AgGU{ z%j@ZQP#xktOwKXQs-cXCr2VUfJ>WvxR=jE5*qBFMA(+n~ZgpF-wdG_aB+2|L3B4n; zLVOgO`{nFxJat}iU6*$zn+b|tvj6j?nVYrUwv^Qb-!`R2wI-=?&!$*&UbQBPa1XJ2 zI-j8;-jC)J)DZdt1FWEp=+z9}$;~G$_>p;#Q>=f+iYYB2frg8ggb^W{>vDNX!_&!` z*%xdq9c}tri|3l+R*t8XRzBaj{_*$!cpNS3k(tLVux*kP^x_oT2^%d~-7wZn&VVVM`8aW4`_*X6 zUyiWfug)f?7_E8#8g1<^;6Me3%jOlQZX#I5i$sK()u-&x;2(egPpFQruxB!lmvg;e z5A8m?hAmp1jAby3UHJ~lQyx#W(;_o7d@}=GTB+sh3oLh|ag84qNCuyF7gKru9uvQ& zSX(I{g7w`n$F+0$Fr*rJe>(bvFESVm&gY{uiC;9$;#B`KnP?rmZ8WORXES`JV6)(} z$P#CCk!^V+%maJM_Drt832vhV@#rlJlaFIQu7|3LFJ_kzo{h@Lu3lpGb?IAD^WS|r z<~s!?2OVx!2f}9XAwPafOS!nlH4&G%A6V-bBX)C*D%js9mRP3oJi<^8<2o+)^|c0k zieZe&J1cCpJDFoKPauBhJxm05vQWf5jY+uTNKa?1(`$R>tDlf zpN*mA)Kj$}a5?33;6lg;$Wst$O_f3`M$iob?`hl7oc4TG6F zUwqM@2T@}~q)&`Iutu+|wPWObnb2?RW@U@bJ`R)SS^EV;`Chl-#&B+XwEJqcG1xCH z?{>DD-8VM|5B7FSuXZ0q6#R6j=ejS_!_G$M!;QgSsV%zbJUIOFPB(!WTR$6v-BPDj zcW~JU%m4~(wH5yuZftB12gBhm%Q)EB96T5fHa7-48v~I&-mJde9P-uPU|4~FR>Q%T zz569EmWNPPuf}szDk$GK0i!q?U!or+LH*|rkNnYQ3f4)qW*-`6Q%lpNAUXotMfyxE zTCGjTZ*SZnlqhalLeUPZ1$4eNYXD9cG!QINmtak|z~pf4;;R!AVVn2;-|hRRB7>l7 zzPZq=d5^(fuGSXF`!My*dqI;+MWGtajnKBR1ksw6r-hsFxM{BnhK=A~F#iisUqE$} z)xHM!Wl*&yZAc!-<^ucq^$h#Aa#L7McM*%`i`XNAIOhIp31Glasdc4!DvOKR2R>s` zEF3qM{uCzA2wYVf5SYQWUSc=Y31_;HX}kS$1g$~*@y5iy-wyBHuH1LLqi6} zbJ+pHekXvC$!&{(V~d>H>r*l-BOzBd(X`$+{s{D}!h^N%H)D}lF9bKp=X=ca&jLn# z>=^M62_qgHBm${zJehXY6fN&Mz_j@4FZo2i+Ni#&9`hdnsLb~n*w5wUcd@&* zf>jIPv%os7GKXO8_0z`)D}mzCa0KkVx&PKkcU*9Gd(2Pp`g%!zvwmCKFfyEBbasYM zG`ni7NbFYAzWnN|f-z)`jo9DCyUWe$?Z&sm^WVPuO@p4mX$hKB@Fft8eG%N25Mjaf zq4kY?jq>%h{#cg~V)2cS!_?|j1Tw(QkyszcXZO-tReejdA>onK&~F8uURW;;Rh-z% zx%fb1{NoTAcyKA~#QdM%poibqhz)|$<5QL$=;*jZD&u?iN4zp>y^=7dd!~=xUA_~Q zvF>Yc$AX~hWxrPtA#Y+_#P}NK?+>DlmNzq47e~JnNYh?wl;*HD#kXX3zH6U}#z$dl zK?B_5$vHr@d5H28Oa0z2^yxIaQQf=WQEm4DG8J$I19r#*-!!ht9b!cRy2yKER|(-e zWQ4Yjn1iq!WK6;n8ZpF(_XOd9BtvD@y9aPutNgsb`8cr=5_v5!DH8@Ln2&?x36`w> z`HH2eX4p}k^#Cn^lCPvEYdPHr$0DWk&LwrbQkvF@g&I&Zey^yNEi(MY!>n|KgX8uJ zMXnL!6e6>bt-?RWPCwbAjIIG0uCYgMh`L5mEX7oy_rnlAbD7%;at`2y6kQv2)4$8D7b{t5-g$C&YZin9o3k|4Hi#{-vZg|frz73wc z<(l`|-qAtzvn4|a6|?=e*owtu3qAhh@BcwC>U7Ck7{;Oe2}x4kmJm-wW|W=xZDv}R z6i&v7yfNDU116cRB71|(qz7qU01oI-9*Tl>b=$ z*U+sfZwsmgrPOe4}~Is!bKOd?@q@a3>scJ~@%S z=%#$gtubCxJfFz_)p#9q_r}8IY0*SAT1kY^43@udvrp1*-iw90Go^t#x@rs>oK=&2 zT7hPCD)TYk&Q5{;XK>SqJ+gymmdz8A8PMr9W3yAf3F~xT2=E10HS0FSf>ei-Fj{2GvVgi-{=l z#GMm2;Ja(&5qR98cH&3gvAtevh7{lmG~Q)>8B|f!n;**{aRJU|N>T^}6!)n>iO{`9 z-V)87Awujhe`ZuP5}C|Fs2zF4YaRK|Czv_%G0ugdWkIHxRS>LqSMh|gprmu z&ELX1iKmY-ns=Del!#kRNE$gqnUU=!zgY+ctav)YGdYphbcMJ~#5p-r z2&kbg%tR7=&I&gpYk8l6GjPB{svKsJ<>ByhcYVcu4kVaoa5jc(=8@lF6$$Uq-uj7U zJcY$5p_o@bjF=e z8d3~760KH&R*p0INb}JO)cix*$r{6#F_KUtk#t0mBIy@!u3H6wp@FTZJ{z8Iy&e8| z5i%q7rvqG%1j}y*04sv@a2Wz_eYju!^7QFCDYK=vZu@K(JWT&x&t}tVxO>0a`uK13 z&v2J4&T8}H?zVrm`Ej^e#G-pUXtiANHjV0k)|0x~0+QR>R5$k7{c8K;<_=x!f#O`W z3L%iTNZqdN#@b7bR_fKCw_dkD?hF}&`cvopQSA2nN7XZrtYHk7|0isoZT-5r3#vXG?CE!)>gUzw z-u6%We2>oot2dq(n?H=vip4kI8KASnx|m|w=pD(#sx8-KlpEpoxKXS%@HDo z{@$J{!wj+de#pPQf9*8C(;IJ3!)x)6f*^>f@5Ab#p`frah%R9veq_vY=<+2~pm^`TFe&1N3Z)nO|kljoWA z)6tT|Fvj+DI-3*9Uh{wRt^B9zyYbceWM%{3ptbKv>L77e)-0~2Y-e(6Iy#dsWX{D< zdq=3-r9}Z+JF!$$*?d?>FN&X4R!zKIqNt|WO{q*vb(#NGt3EhvUDQA zF->&)_#*)jd=!kAIbMsV2LuvY^Fcy{gdjUE5g~l7-b^k4K?4}aF`d5A*Cb9tY>+0b0mL7D zj1o~DlGvkcHKV!13`QXDXEF~U_X5C9DL^a0{Xz92hBlEZLX7j#HmKLtfkiwG5tVRg zi&l%5Ud*n|1=6f$I6a2Jyt|rxn2yglX>>CyU}MaS(?)zLaiHC5ptgh826A`0idnt( zM%2oht;bj)=XAaUr?b+a^2wpqjmwz5f_sk8*MnG`k~e0F(^8Ve^TABy1$KuLkyPZ) 
z!t=89OB6i?l6!2SfSB?ZNn!A&y_)F!_ha*t6C97E4r!p>^G+H=8{jM;x;9zKMz}oW zk%1p^DCipTZ=q+?Kjw>TL2h&qgvcsX)IIb|b4D_09}{cHozCJU4Mq9gWKn*WAC zpcE9wB$=NO7P*djFOhDgig^a4*kVi|7|2zNM~Rq|Fi46`ppX{WnN*4BC8e47*NN!b z*+C%>V>@5N^4M{x?gklsVnlNI26EyUOp+ls-NwN49({z&pamJNJ&xwD8LSUHmnF$t zCziFpk$Tw7_TT>&*MSp$tmMfQ$Y^6+kO@Kp&gAr=`jrMDYu7WzbgC7y)sQ0KoCJ&#)k(NWaV=RX zsHv-*5Dh1{j;l4cgpH^(h(iWH-W0ZAbEa$oCuZH$we_>K;fv7}F7gox5lp{pD1DL1 zu*SAJrcjZBCXHnITON=f3bvsg#5~-mB$Z#CPM2re+$bjvh7{)M_=NFjKH*+4n2>9G z%c9(NOg9pPCw!GyI%Ckz2cA3F{P44J#C$r<<^)-hrC3!vo~XD9V#Hr0F!KMcff9a) zg9q{ZHTNQw$!ZVAzl(i<&*;VMXVvHBHR6r^tQkI7^|=y=3-26@uu=wZ;eqNipI{7o zrDj%rUL2Wbqvg(iR(;;vQa2aYTBW7l&wE=^hIjTuOTC|$mYoKS$jI!rELrE@XFZoVs2s_g(i5g{nt%i`< zdMFAgPrwTbgt5%ANb-J9z#$Ap(5o@!N^0_Q^wADR(3LDhOC`mE$y<0a^O^|*#`zb^ zYzlLYr?@m)OE&L5R{o&{p!Kgw2e_*CcD`=T)2w;v=wil5H&i+}mOAXX z8F)V)g}R$pK8Uu+R)vFCN*vLeg&pmrN!kn&MRW&}2neSn=uPZ~m`ZG@7xC0TvSm(7 zEEmmukEO4X@)@RjGATSlem~NS4&-shIbap}Hj5@Kjj;?v82R89s3*_(2@|svaHUFK zD+4nvDaOP>XM^%S1vFsxNV!Xu89k_@Nc-lc>4Ri!MdDwZ48~BJ=xJy8aO>-PK6ZSp z28fb(n3(<6Bi%Z)mCga;;Aq3GyUpRl;n$ck{`bGKKMTnX$yiY%#G@}KXJ?>ywoa@~ zc>OQc*2CSe<=3}Kh}oQDiDKg$w#W)JaD-kPtGi0tf|yHvJ<_Q>B@!v!LY@Qk(LGXC zu*rk8+*V5Rb%QgntHZMCP9W;nSwtfg--yuLVZzFMpY}_R7ItNGdUY(kJ#3HxOiHCH)E!0fG z0LDcK3hSPzem@7+Dt=iKd0>wz!w)7NpHuxj0Lh?BDR{B&p zh^gWI=|I>Huqh60oZDd{o6pneQP19`wLs z$xo9Kzl)||>V7RrA^P_dLurkmGIIaJl8ATA?^aDCz(JRJH?zFo7Eh6Q1M&Ct`|ioHP7yB+?Rcw=6fP%HQGKLxeYqE zJ1T$!S zV#6zA6VntydY~4Qr%$$4*yQ0RQvX;QM!`%e8+8Z|pO3DEG=CV63;0GhBFBQ-dyX!JDLDB9Ci_!e(% z!xDyCA$$lVlm+W_|9~ypwslA8Rz%x7c?@}npP%l0Cn zY6~nIjN7jvFZ7|@BkiTHd`4}JjBPvutjq9twq>JWIiKOQ)k4Q>R=Ze%d6RfuX6&z&OV!wE}})N6=>!w1`Xorn#@BF76zaN7#s#Sm`P2FrD1oVDSgd?N7yfk6gbT*$P!_FR7G6!oL+JcoP7SV?TG=`mXcZ zVYDKOFV^4Ys|}<~3Sr>~Jh6xss}@AD8Oj9MPob^P_~@XoIE2i`d%fSnviDYj&pkD2 ztChteHhWj54__{3-R30~Y&}+h$i>PG!`{*KA;`sA3zTyxZO01uxLli}cQko`g#Ofg ziE0dI&c?&jdAhc@vQ=zF57ZqXT&99y3!y&8?w0}oYY`s%bWo1*0K4eS12 zPZwEO!!5?8`bETU`NRPY6@^E?kk~Ef-P2I6D7zC%f2-TFR7sDtenHmoN>Q-1o=bmd zI7L~kc8a25wd$cFAKG01 z3^|e(jbB!^R~p6I$-Ne5S-vk=QNVZxYJ2VUbL3EdVlD4C`jQXtDBo~F{AiBlJJpBc z%n*&1Q!?fA{toJrE*_cj$}AjDR8)>*V+7%%jU6pxwHS6pdi_n!LrDStd`%|CroK!oGIv%(YsEanhQOMhD`FriBS;3nN zOM$~rXU|SZ#$}UeXA#WVu z9n=O0&f`-FW{3^fn0BH!AnSaBJs`FtG7*V_(F{sR-L@f@`apAxM0viJPLbw4+CCfe z3iR*_TMfkV)Dk2hsCd-^5<^Qjot+c#q;?IpfJPsoT!<)YmQg}8MU8ovO|}+reX8kE z!wSR+WQi=>^O;gukz)-GP^Yn2Ma?4}A~0Oo&e`X^O4Mb%-U{^^N7#87b>?T>3R1t? 
zD(0;|dimm)m(PAy!SdEsE1A~#0gYh;RRZcQ+51`oQY_pp_E}Gn5y@HI0-M?w ztIE&S1jeaUk5UWPTu{1)n<370ZmoF)P@T1o0p_#J&jkXcaeW83TpaUuoP=M9KuR^2 zP(W!UO3q0G_}E$FP+-;*ZsCfPg6N}-EN$4|`&Oolor9H3^w?{ND3YwM>oRz?y{Z0U zd#L#wOY~7|GEeb3f)PnCy z5QLireIjCvu~*wcx2>65Oyk}R<^`9ic+4RZuOy=D2L(QzdFhaPY^WYtUoEwi&zEc> z21uJth+C|cYpX*EdAd6#(r&SXWGjeG{m|WfcE!O|Zr?#LIM!O&Ty~p7(J^KD2Rj8t zrWERej_qV)Gr!pc*&Q;P72adDTh%NI%GA$Qhi%kEJrxBi@x21faM=Ud+QN2NMQhsS zt-z_}sb;j@`XI*&{!}bH5;J`UvUclCW3UelVhF!=zA`+-zS?DxWaH=LLyB}6{A^5; zD!w}oIF;USJXrp zTfP^=?HHX(tPJadqk}mx$5y%KK&lK>nbk~os*=s=A^E*&MnSCH=DQTKLR0gbq4{l4DIaa!^>*x9x$j|cTSQFL%W-U*yAQV+Ti%;e zGMuoZZwBeB>LzEJ{jxR{T8%W7n5|{)8{4u3|J~--r5?-)OAba&EOivqC@RbDPV@zH z$$2oBvDI;-;$bOKt8U;A5WM(4`FI&$<_j7aRY)W5Z{p5+EZQ42nbtHcJlmI^PN-qb z+EEaBexI+T}dYQq`v0;qHhhHpw{eDyG8jv?So|8KS{a zEb0=RzD6Es-e+gv^4RQZB@>X!k88v^daH39!NH}pP-RaVv~IiOuydvcnz?Z1K{x2S zC#fHx4V=V$Uu}Cs{}^LL4Q!A@vog)-TwE9mBt(P_bs6dpLuU@mE>RKrQk`;Ae{!`1 zKazT7fdo0xQ?J&?xUz`KvLA#M-#%3c(oCC807a(U6325{!OFnLDH%I|shm|!dMSDW1-*iiFHl_S_YJNzF4aFO8)^7FA*yc~poDMv;<9n65y`&OBQ zZ7zd^f!R?V646bP+&;wx8fogW3l(ejJz~MAOHsfss1ziYqHt*A+@t0LB@vnm8O*5> zyI{{>sEZ(3Ietq#PEsw&Kwb7QJ)7hU`%l&D5diGs-NcT*Bd;j=uQ8C`OD)Mz{8L2C z@hM6+jjqPrXvaKK9~z~qMqP%xhb1rTL6s6?9QoIK9r&)Ud_@cA=P*TxuoAB_%gal3 zH?B$!$L0idb+*2MQ-h)*@51HFvhP$61Od)S=JiaWUpWa1310y$do_LY1;>pN4lzR~`~`hgGqD(@IN zj9Y8kwl?xxA`<1Y9pV5c+q7u57J0|rl`+R6&C)E1&?AF)28(eFb}z)4JA!a+bHL4# zTtgI%z=Ek*8+M*QZ$a+pq13`BqP^adFR&uJvEbA6Q!wTN%YYgNNs~{tK5zp?y$y$_ z*8Kbk5y5y@1;)tcV2c8r18RS2TepE-vf;J|L4<+^JcXiUJU&A_TL_2RXcUr6c(AQ+ zHgvJA7d!Wx_YD0%1K`KO@@o^M@1PR_zalLs#^Tv$Ty`ggi<$Dibygz9lN?`->L?mZ z>fHzcmDt>WM0YFk;s@xtjeNpze0el3`)xb1X8lS0Z8!U_n4NZ) z3(vq<)n(_0Ok~H!I-lR^&tA9-ksA|`?4G&94)S_y>)=*g99F&sttacT(Iya75rgW< z5$s9fYd2U-g6+xKYrWZ%2FSWM9!tMQ95sE=`K1pfbVbv1I?+csz|m<;SP#gar_DNUsTZxDv{J46n(jP~EG%42)i;jL%ap>_NsawtEMV2=RVD+Wo!-xT+UeQm z{oc=e+UeQm{oc=e+UeQ;{oc<@?XVO`RxZHY;Cjq`Zpg%ZqcAvEWBLu5&RT!cLp=i! 
zUEEnL|0xfa1`_1D~5lzI5(4R14_ z6RMBn4A?O0hACNz@g457#Q(FJ&cZ5#LG55*XYnNlZpNuZWuK%UC(jUminP&IQ6QhbgM@2rkVCRkVPylv-Sq3r2%1p0ib}) zxR<-K?$5>M>2-E8I+<{$({2}eKAVCVT+QC+lWnxQ_bo-Tmy_RToNcYa5^5rH`Q=o} z-xMY7dXR=A)n%p4?<{J;ZOaxrc=5P8dh+_kljG|9mq#&KuHaatguFYH)0I1x{F(9l zndH#M>%W(+~6=GJazr`nutxbO#43P&f5M0h{v;wFY2j%wRts|KH1ejTEMxz=%)WO9#naJWJ zq(=8rnYwasKH0bomcyjG=n2ann$yk-*XeF71Ir>D%aW2~wS|r$ZP+b0kWEf6ln`lA zk@|-chC!GqK*gfxox(1 ztuL7?fvgdBEkJSH&~O4869ApT%UIq{9P2*4JPRMX>aKbD2G93fjFl2;SNS2LH4<|Y z^pVvY%eSm%9kuTqYHb5=Pu?m6SVUkmJ6G)h#j&TK9@@fL?{vcEb}lwLw>fYpY~CRl z{LC(b)9AboYJR^TB4j%C;^v^Y;qE9AJtC)(ZyXZdL}M|JFYoYBupIkra=Z@#voW*a?p z7q=@k;XtE>9H$)jIemwyxmFc(uMfi&cNUe4M^}RI%X9bhPCN84(1j`uHZ32K8NHv8 zA#5x}J)XA^aDm@_eN~XVR`DUhJg$AZ`32&L7s3#xdDguH=3CKuxP&5~LJ|i_>rkyu z0W$4hr<(6WuAbVc#@F1bOh5<2`TjQkNz-;VkW?3V=$E`RT!S6bhU>f9t(U;1GDi%s7cJ}*6AdbaN~`s@Lj_Igl&o^0K|4ho;sXYqu$u-?Ms&`?jFuVcWw+kRYk0_Lz^!W>{61=j=S0GYPL zCJMDSBR&Uph<9REYruK#2{jxLGp1s=&EyaIfdD<;K01R=b?Mzrg$$RlA8tn3E_k_g z3@!*_>4RW*$d2W_PK>#>OnK}jiP^R2 ztY!+)*GXuyozZ5M^1eepQ>)>EH{{o@>XvOcBs&IyD<}{MEIbcgW7u&Wc;f_h-nECF z(R;z{iR#bKHL_E)OU&QK;>e*&xiP`Qmkk{d5ZN=yI>Hd(y0*O2c*2J>w3lPB6-*@4 z<&885w~|AmpS}br8fMYp2rQ&bwdmdCeVQ$(%rlj+9FWHztCeXzvz4TyFIXpw>Qe#k|S47^Xxr` z{B=den8UM}V^fT3U`x!f*uojtnB&vn))UEU)F-uu4cs_4Dj908K&?991T1fZX zpy#+qaR+0n@mE91nS7c_^$yZ+)!Gm2UJdi^k`pKN8m(teWe0MI9B1o1k5DA;e!PDr z@6_+60*jLKAKW)^F)mb=a@`wRW_|5B8w_Y%XiFbE)XO2=zTW>H=PSBpfB}Ba(2WLA z_i@Y=b)LE1VhE|xvKsshKt8{EILRp3|9qpMspjlRh3u6`mU*XAbBnh2Px~0rU1M-p z{>LV2*)O}x-5^6W>*d7CydaK|!%#Imq-(O3)(LQ)cN#D`w*VBsxw7=8c;=vAAzKYs z={j+CNrrfsKaz3Yb6`0;wD1|-uI)bhm=OVyAs`=#lam~;4sDGyUrb7$Hz!86Td7O8A>QzaZcv@4b zHIpq`jyU8bVONY$sd=1el4!K0x9r5c%hZu|68Xykdst9)Lv{<>dBE@;SOZ^{17JL?R`q^(5{1d93F z5%=rR8RFh4Igxv7UG7WF6rrY_>Tz}W`q9s?UsG7407N|bsJp?~5!J5s=-iU~K+|?c zd9r5ruM}?fLSU!6TVp&Fx_Hakc3r`lFMAt@i$JgviLRC${`m`vN zt@+!r+=!+cdgUhrs+&*Oo`9KmM|jn))KP!Qa1`aHl?=Qk>I}GlLsz=f6q8a#B;)m) za2x?l-|eVE3_(3a)t|ZzAB2l8kR@ZqDy;a6o)5-+y|+CP_SC8EDS3>osgpcNtTP9*x^2Cs;tOR?B4MkU{x9p$?b(+y+$qE zB?m`!x!8140nom-fR1kaU76T_c5k6&7KFn#79dekFTWTzfZRc!rMq17er0i3_Pc53 zwqj8G@6pq;)9QPIJ%+MQYID4-x|Gj<%w(}`8#UeAZAZeI{BBmrV5KkF&EwCfVT46l zW%&jwRL!<%W|Gu|CGfjUH@56+B0Zi?;%2UhG^?B*XMo%)_zyaT5ibE(HLtoR(XFv4 z?QzXqk%ttT3ChYmLeBh&Qh^up8MlD7(jR$#*dGVry%1l#8zcBccI?KM&kTS|>YU-H z-5`lPiQKRN_Jhfdrf5%HSFBfCC`-ek&*uu`#lU0reO>Z}8!>bb^EeJboZF$YxUzO< zWR!sB@jH$XzAyMN0_-A%Bp&3TaVT#WQJI`(g5X77xLe=4h>;aIM`%+t3on6W1a8wT z2wst{&OJ>nn#^K0LW!LTcKY}0&547Wv>G9#&QV}-uy!a{Us7eMbWgI2Y6-aJwZZr{ zCeWBp^G-=qMy&X$%zC%6T#5IhhKmZhr3)sod=45pFeb})6k zsw5{P32GHZLaAdc@RtQFXF4IuZKFDlsC85Nh6AXqUT1k5RNC2^l9dp!f;IV+$wXRk zl&XKh}@&&bg7;WSNbfbq!V?%#&`uNjYn4-08-kdWokIQ(_Hnj|{u=IC z)5iEL#>u0L8ji)wAaKnS*WRp;q~tkFC#pw&Ew76mlB&qv*vQI!F?mTIF8eL8DjAe2 z(ohqRjG&YhDC00Kk;Ysmqj3<)6Xj zg*Cf0S_?`pxpH=)I@SRBpF~j}&BQAlCW!;+v~9kL)`m}HrnoB-bGXcA%==6XweGj&Oz`2LOj@`y*Me)b zxPL>^Ec8r8ju>$0nY0z93@I3e4rkQ4LoJ<;0TXY?L1t&AqGr>je9+pRC6H9#;T14& zon?m4nH@P$q%UJ&*z}uwo_EPe+%A^8G-X*4oi%ak=C=OE@K;;M0rI zg^p)lO%(6m7~a!X1QQ~vaFR|lC13`mf!{YX?%I+rYLx$6BXDprx-M{=T4FD5WF&cD zSBw-4`n30MpQh&hHXnyqTZYy935P<%`g^)2*~}M{g1TcPb$6K!63k>FyHzWis!2Fp zjg&Q-unLA;xE5s{i|yJ@O22+-T&*2V77LE3Ulk1P;e-8MERCbFVkp`>{6{+m>rj$k zZdG#|4EoLhuQudk*>54+Qea54e;ahZsl;z2m}D6y)#uZ4jcsSxb;GI35Bg1vYk#A< za|ofB93@j;?+5een}1%ap18Zyd52ob1B$K3Y6Z;}u4dA4r0^rvxjtvXiX5}${Ri>6 zV99E42Z9QxCh|_9v3Qo+vbCb@C z$@!J1`I}zJVL|gh{iI0`4Y4SShQSXrMr#xt0Z?)5e1>deNFB*wjmbV~({K`eZNDbp z)5k3uBt-_CDxQU*C_9qlH0a7s$B;4^CZS>~RIscmf6bPJS3gqvSCy*sZWo+whlD)= z^`4KG7VSmF;Y{d|W}(FhdB?WJr28LK@9?c6SY@XAm8GvEM_DM#c+Wd@XGGPg^E>ja zWCSaYT#=%tFho6Z<4KK(fdnE$F=wldL0RatHW*t_m3a-!FyuN@tX3yaDYQ$dD?xs+ 
ziR2Y)BP&NE5Mqk)!?UKDWG14K@@_wjr&A6Xad^jzyfK}VAeE>Q5|Y!Y_B}Z*>6s+I z*`2dc;Yy3$#o?|(l2`RGaY=H(FwYsB0SR@x8Tyyx=m+10vOYfh2#Q!=a< z&a@gRRe=^-U#;r-={xuCHDan!DL|Iq-A$8j>yRcF`2U}vE+J%SGMXClfK|<%9`i&oUPUY>= zyb#GM^H!{Xgu~1vz7SmNqqe)lX&RT~W+rNc9b3}KKJWK~u70(nWZ3=n(K>D~zF7*0 zT&d@fk~S2~5gew|F-#Jc3f?jUsib;d<2#^FgI<=bkhkTItz8UdBBE~|$gAi_U^!Bd zpJRV@yl9;W8z&(Pri_Vv3%r|&Id4{f6!VDEqCD~(39GCGvM@QBQ^z0si46ptIwLa4 zF_uGahi-3FzPRxvXx5Gs6AIm^(J)~C+XJh6&wzCA^w(U1;UNJ#o2^jJ<=I_jRK%c zldP!iv*_$HUOu6kika-W^@F@<>>r2okJ*&&kRduBvI*y;6@q9s-Qs?)O$ny00mf!p zrrSdKJiD=W6cU$Jpz>wLz?G7u#vc*RY$>L+yY!3x6HzH&utBYPj|DkX^S**?AZ$kc zc77AaEzMcrqNrAIoEshLI{_DYuR=1!Qhq2@M1-{xhTje^+D#dT2-A0(V2vFip}7zC z8YNyQ>J*&PmvTF)oY!zD){fGFO&I_Vbb>Xf)SIS^S|*U+x4%Od`3$KUhXEw_)>XY0 zBn{u}vV;jKizwvdHMYupEYkowxYd;ZA1(@LmnTfLO)pI=T5&FXa1dmV$|W(wSfW=a z5s5`fLB-UBf&$5B=sZ3}`r_DKTP7QjN@&weLnCagRoG9K}&V zgHWliSiWw@Oc#^{SRQ&^gYH!-ad?jlNESBs;#i;hGBcpNQF6dOvwF=z+9$Sg2dGbVvx zi~SAo@4Pw9-EXTG<4*vACs0{q`rF^;6Fr!7Hm&*49#-L_rdsmg5-2Q~#dqT?0(Xta zH|Q&`EV%P+)f!aZEt^WJrT8R7a)#(wE29SuBT+J0y&zl`9FT-CxQ;om8&$On} zx(Z%|+>EACugwa@0h5?;YEhwEE`PCP4Y6rV-)I!T&AaT#Y_N4eOXUE$R8S6%uy-mg z;wihAF`5v%LWjuE4SEO0?xc|v1nsN&L;@hZ$p{Cp@h!){yLo=dQqQIlWwG*)Vf~-a zD3CCIF-9(}5!%AgYHaUtk4>5v)>8P13l*8_q6%b?u4$UcBUooPGnp>p@3ic$q_O#6 z%Gq3%15iWB#7W)?k6q;}X;-;vaLrf+XIdYRuO`zlVgrvT36WqJw17c2*aA38=siq0 zw~d%Bom&uW9hDAzW@lwecJs4Ivw6@hc1_JCyVb=QD-R+kI{|-1tIgq!b*uommGT(e ztA`XYuU!f$i5G3`X<>cWbW59A%-9|2X%|K&J0)h9*9Z%{*NirCvzMpx{VgyS=qIhc z-;ij*u{eD8ra(qGgf$~6liPHQdGvzYy-OkT%~3J;XxAF10H7i#%G6-A+k)cGcQytm zV^oO%6|UZ?{52NN`GZZ<)Ua#_JebrCV&8+qV(9q<^ugR;8DMxf<`%Qdd5@$iahbhc zcrMshxmzjE&LxPm8<>o^&;&wSNlRKXTMPc|AcTeWJRs!`whV6TV1?!bMekK~_CD^1 znW1SYjQo;Bv(tQrqOs-~2pebO#BaL|6-25GOOK^;sQYYkemSTD9356Z@6=Gfgb?dc ziGVM|`)!pmHr+;@GHBJmfD+ArMe=wGB+}y2#1T94UJQ#aif(7#U)22CYe6B1RS}|e zUY#sphMXJLm~Uw#jI0?I>(sAm@-9t;ncE&akG}hWt4M46gG1BUAFkkj5IS=*QM8fM zc<<~hgcRni&fW`623ts#nDahQu-HleKp8mJh~TsmKAbYU4CCOIYyG7{lcCT^2=x^*ig_m5;Vss?Tn zJv3`Mt^E-3@3IBGi+`mZygD0Qoqhrc6tr?9nw%4xmo@gS3QJw7cr>4Whv{e^?~Hg+ z#N(%7auTf89?wdXZJ3>VG@+$YwDkPbHw~PR9A6egz`VxvYsZXbSA$%L6bXgYf#-aI zX={qKMVus^9p{!xKQ)aT=KL&CsXa-T7*Q$`bv33gLFaJLo=OH;6g}^|O?RzQT&+=y z?|qOGx~=*;ZDX-o`^Vq^<1zB-`-z^XCY2(HpZ@Xp|8%cl#y1Ze23=zjg+?{+W|j>h zAp8i+g0&N^Hll*|dJGj}>n8Q}9gzI^w8>=HL-wsp_Kulq+m1ChmFW}7-(;wTvGP^? zo%4v^l8W}a8P${v7|@YO8w(!Tzg@Muf zm_6kjpSpAe5MxF_q--#0I0Zpl2jnbz4?0!XoeKf=(UjvD_Da6Goi?;_M5`4X0#=u3 zGKmAa0$gdXUPy#KB^-xiDt=T6JC`Xk4Fg{OMOZgg6uLiu9PF-+o_zo0=*f#mPhRKK ze?20&I)3jd68EbkRBiQ?c*J+(3nFke%Hs<5*NMX}NXZ-Sz*NMplciGUcjSFl2Goj> z78TGUj$E|mj=H`7;rrl327HPnoCyr~B3O(MPtN2^AKp zs(u<>eHh_)ACJ!wZO=!#7UH-2)xqMo5$aPoD{UT(SQFH@3^cUZ{p#15<^IJkfwZg$tmGs4k*nuFv4$oJjwqEEVN`Gtd9|K^sN@#o26N?MPa(8? 
z_pce~fsVMN2;dZU-cnJ}7b;>QUpnuxii*Q{?-LiiUMkH<6WHC^)(m!cH;p{D3*&vo z9p_j*cE5*>`IQb$RX?VSJqFd!+GPWK)I*S5TGBspr29h~>HemVbgMAZr%U0}OW=7o zc@l-f2V*Gh&20J>T03Vqaqpb{j%Jt(H|#4+$HGh~a`sqTFdu)g;5Fr9f@xM-S8%~} zS}g9IJnF^Zl^?TXJq)xueDv(7mjOF89og&|qGS2-e)Vdki>#|(@G0<^?^iFU_$utc z>zK&7<>aFUfpy8%iV8}6dB^2%!cI8Ko zbX;FByrr0mltw(>@fGX#H_T%n`kV2D@PckMI$|cXOYE12)z8!v(Tz22OB_S(OgTZs zpmkiJkOomnhZ#r^toE3Eh7^2u1QD=37qe^Ijna5uP`d5CkJQ%n;1WwRCLN2YQDRo4 z)v}jysBPohqfA;h;$zpIkz3k}n4~)x&Hp-i&q8y|6BP(%%AXIZ>9#Z2*xlRS>K*X& z2~h*bR8caclhHek_8suW{p#6>P&kdTx|HO6q1ENd>U;bOGET^Jr5{cPy~525M_Fof zS>i==D-GHVI@__^n7o&AHiz5$l0Cy6Dq|umOC0@Ih9FlJt)zI#r)BlX2uSo!#yiD& zUX9?`D3xW#xS180v6-gl3esHSb&lv1AdSiaQFx6k26_e-U~_1{J;xU=at~{f%lesN zE`ojCl&{OoMD`&0-m>_xj^BfY5P5S{_u7|QY?0I6r^**Px5Jxlbio5}6) z3?V^!G4g8ifZ43cc0d?J_YKi;8El96ysO^f#*RpXe;)LaMoBuQN#6%Kn@_4Au#W;A zOvfLqpNQw86ioJ0Hw2%yM;qrvT$*+tU2pB|OB-+PJaDm8vUC25b%>Ut=;oYsA#rR= z&vmT^p}Lf>MP{Fp_+>{Hk~?b4{g!w+gkixhC(hD{qvRF6c66|M03Dy}HdxLytlBcM zlt!ZbKs`tKQ|tr$B&Jd772D^=k-uP9Z9kA`3zBW4M7)y^V36Rg*tl0GGD?0O1#S52 zQhtC*I%xP4J_GrJVO}niP@ZxG6A(uN=han9ilS44+z8&+|z8X69_b~Mi+8n)CYnu+V9x-Mi@oH|O1c?XmUGlzoo6YMee_4UQ% zHz7F3>@=Pdk;sec$rL0Jp+5h_7mvu&y!st)#}_c~kV2i=n*JD9_3`3NlI(m*_=Y~& zm7)T)Od1QcuRG|s-V=4N5_*V_|L-kx;e-rjz%i1~GLu|&(|^KZ}ik)!`+ z{l%%0<3j{Wg`t?cd1@u5DDZZL?l@cl20XH)XJ-A2<#bnThmRip0zRoQ2hJxBs^;ca zZHYmijyd!)C;0D4ICylVftZ8$30N&`rUdF^@(EA@o_M8MNQJ$-q9nywHT5t=x`zSW zi74%F*RR;w;NHW{oh@j-N5rLJK1l<8L3Rj85wLH#uN3WNW(QK|Vyry!`_75ceCJ_JSIqvcK&I^=^kYK`05=39ryM(!suLsg6uygR*pgXW{lK7D>8;T@K^=( zNMd*}3Vy&O`7k-J4$elGiXbch2T{q1^d~VK$63^7s4ZK+9;F(B_mP8D4JMp*>hVbT zHns;Fcs5QsB?8C4M^drjKd?4O@b4==0J~eRztP4X%a;-VKDwB!|1@6ONAmt(_|0f2 zA22eWe8_nlrPi&VL2%=%(guo1u(4xYCL*4Ag4!o;gQ-`?xWAo=)FzPH<%SxzHg`6P z+XJ_h4D;`2pk%=5FRDYP22P5IPUg{b$_HFPKX2GM_~OIl^!NMKPaOIMKRsA*B=ap) z5)7rF)P041;=o>M+N|0Q(Jgdo$`@m?U`1$R-HRo}6i0{AcQ&}+zEdwsd+@KQltqF6 z6HocD?CApeR~=TWS88*xvA4DBj8j?`wASpm$yi~2Q(-giGsf11IOM>LIKEkZZ^R*b zc#W(V)5sV`@I~aU0)4)^qOz=Gqbblmn;e*)JhJBp1AqP9TqFiCWxqQ+iy=p~E0qi9TkBo*bFwBcUF(?j8uUqeA~^k+_a7L*9C zyb>Q#6hM~`3S{cqh|kG1PPa?aN|ackuM*9(*o@XP z5QF{}zpS#n6MB$PJCk1sH?*=tilvTjgBj;PPOrY(dbIQFWkCo$x=>OV7t4kCKZ<(< zre6I#oyv{(-D}C!l$+6v#mf0I1r}jI&9yYxW+3Lp-SL{O%?I+OwzgC|n$;-TW%*PZ z+8%dBZ0v09KIq+vSEKjS8Mb%0 zN)prjhL|KweEbO~3-P8DHrRTum>5>mtU90+s88ib) zZn`8O+*b?ml%#8WXl<9PS~hmKxBDe`2YPHFj8Eag8D8mQaM&0%!qh&wm;uFIoiA~s zIR|Aq&63$MJ4g5eI(>;$zbZN*omuwsc1aZdMDSud#p4U9SH$Ywb#83ZD$06QMcN{5 z#%obgZLK$-PAL$sPL(A?{egAGkFjurooFsED&NYNbgj0PQn=OM%fJW@yUFqF~)-y=uzoeC5*6V&G0nnsfEHzgN!vj zbT&<;QFy;8skQrUeO30B%rDs;V#c9#N`4841gF&#odiE4ydR4bNk|kzc6BkM`RMl< zHYu|_9#lU`3km1s_oot_03$eHXy4Mz)b=lOAIFPpvXRdggM0TsTdP(>ToKEWd0?Tg zjSWo7!{btaujt535Zzf5qM8IXt+elAa5cGJ3{Fq|&u@kgHh0!%D%rH)43nmKefG9s zb^IvutYWz#{QTN~lxo7$*Xcyf)MA$K?cf+1z$6Ajl(*9jNewslE#AoO60*3te>$Ja z%jr3N)B{8+ypbM$jq3f8`e8eIpU`w>#eXPZadGl^7 z=KWQUa{VUJ3v7G=dU$ZgyhdOnGY~^yHuiXgBL9xeEKAQxDGP`4uh5*~#EMd6Ff}K- zKu1z5o@6U^N|WTxkZLz*+nPU;W~Sn3tX>D=WyOX&Yg_m&!=d@9uJlW8svCzE!B<>I zJ9OaS;oqnoMYU-m8^0)T$hfPeN}@UJ)hUSOa4OFyb|=`rhe2^|hdUt@Sz^%Fm~+3? zsk|ET5hJ{MryS)AWt1#oIl*v%*3DyZQKTl*wyO|qK&2hX!D!i1!)@2#JEH<^-f5J{ zKWL8x2dG)XVriL0C!^DMWJg?&lxAK$R@aAm?MNH%Y(Mb8=+4QPj`pbzSBzAOE@d;X z)c<=A9C9tOZhLv)Ba)m(pU9L4cz(Skp9Q-66ckCmyEW3L)MT9>Y?=6l2`Wa?yC${evBkq?Op5 z<26Jel$GmP^xwvW5CBqMPvljFsA538Do(|@^-F#v0PXyI2}+g%%*HShlTIh4p+=vE zE6rq63SLCiDZnqC-j=(MS~10Psx5gkw)q>oyG5Ok@li@3J0@3f{l}&7WDh|6z;HsW zMuP4arBVt;di^_K-D6H;Z~~g~`mv`*z0P86qw8O}@Xk;}AgD%aM`JDU@+vwa16)}6hs7M2#tC3;5p<{W@thj6QW0n-q9?+V$YzuU`YxwpjgyY-{JG8)J~ex*dP>}9 z@%!aOtm3&Sk7guR6Q~m=Fr8s{ib@g}xwqn07*m@Rl97y}R6RN^12%*i-XQtF2+=9! 
zK%bq3QZ!|;v*=ZV*QmM47gYd<LyG2`>2L~M2+ zNNATZe*#24=3pSK;}iXEGA6;@FiX`0CQtbH$?|+ebWBfNO|Y#2(r$i-vkP)nt8ygc zPX|4oaESm`gom@@RN^#z1uw?aYe)->eL-1gf$S^^y3V$j5;(Fg%v;sY&SwJJ3$_Jl z9<_jU?3!uCRfEc)2@!!30YsQDEwpP5z8C>Aqn|?Mfwh+}j;a^jgf!esNm{s0v%DdNY^Up z8X4VtN*1G~$Z&vZ^eOma4pB(6XykQlGMdoP3)c3z5jO7?l9T<8aAQjy?^81>ZO+B% zl%BISm1z#Mh=d>ucCK>{s{|qfo_N2*hg@%G$ugiIaa0ya6+gM5QdkTLvfHqIQ-RLzOrj=FS+fo`t7UVREcTCd=~EtyslB1uCaNXJ zYmtqjWRNZ3+FxuR1Zh593#N+kkbZ)E6La8Ip$o{VJ{@sDJWjFMQ)Jj8TdR_v3h?PX zF0!~(@xv4W7<(Eg=Xxj9kV5k&xo@7qqQDDYF>FyiFv-KATv>>=dmysSpq*12cUB{W}~zBC$zo8pG{l!P627XePTCkJssL3?pyew>c^6x&!oUXK_yPYk~Gz z$q%#TR0T+y8Xyx#$81%qYlhxVFJqGdlh4TUaek{ihu5*yCYQ@gY1g=Ky3fk!z~%sC zM%ff?sAiFGcr2u^`T(-RBv$I)3-td>-IEBzN^;v3T7CtdmNj5vCN{(XLWu3|UzNg^ z(_{UxltCYABjia1%coS0JIvFKbDGo*K5(pPFO|(fLW$ljj9|vPfGHzSeVjT|9%i~M zm}GaAwtyP0^0f(9j75{Cv&L2>s4)YsxmHDsZ#4)cHs0y=#RAkQ@W13z$1JtkS za^<;>1o%X*Vl^x4!G?2B>IU)fHBG7gB{XVvooo4^XM9)Gf(j}*CE!35$Pe-g)ewC} z^mcd3r=r^<*V4!e)Hpv!GnE=T$-QW_4k8Q!q_9~6~U!XOc)bSBQ^+E4(>HSIdoz}`uKO- zUq0O3(mc1myuIyGdz)Y0-X<5#mnzq@rg_PX_Cm868>P|k6ye2cXE@lRv;hADlijHZ z?NnQXUF`G+{I_8V#~XalexQ+6tgrTwD7yKCyo-*4lTY~N{VdLM1RO-qgc)amo>DP_H-cEsDOOJLwa-^b=$2)`@QRL-701BMNK@}7CT8mH2m}1qzuc=-l9P& zS24hXL$H9vG zRLEbb$eT?5$useM9&*}1T68k4;eCXc zw(zY6}h z6(UEN3iGIdxtZNGQdb;)#f-vHC0G%MSpiSukUHM3n>>6FdNAwMyiH=j+#zgWm$iKD z1|*FaW=ty)9E(NdYxpbTH--W@wjUtIF#9mG_?*I8Hf+I#+^?DUXU;5h>2+ll3(yST z-nh#g{P9l6hk4P6IlR%~#~Q{ubYLp)S1&mFmax4zXqmMZVj+F@1E7@yCK?5S)iaXs z%yqwY9ECmu6yYOVcGZaUt_th-e-Mu`Jn> zU7pYJvp(sA0*)o^?)82PSaT0;_h7%&nIEn{-RzgTAoIGfo!y>x z_KNKcd&g5!@>-4}$bstZ8&8f=F+kecDUPRiI1xN(&5mY@7)cq<;OXQsG@Ie`t>R$J z_>^5KWJAD)4v7zTu{TAtgPjOzV0Zi(PxXv}p{{Z>nlt$`p6ee)vnTtVg)4Hjv$V0t z-CJw&VURFja)5``+Ob%X&WfBUT6%EzZuIzG594MNT=7A9xEkc=jb*neLRNH~eI*lj zgGDlF=F`ib>HUBK7a3jclsUcJ6pt4fUG0=Py|mVZtM|53v$L!^n?#A5m zJyy$r&mapM$X$7PrEn+^SM(pq!wA1_SEJ>*&T|Lb6cf}fDXbuwV`jGsQXMrgb$y{g z9nVlTLliiPK)TsR*r^g^O6sWFsAH#?gHi1kKX{mQz^b@!E6tEt6I4dg;l4(sJIb)E5xz`ttZ#VUrIEs zU0EGM?n}u&VrYS@UsIS$pMF%pYGo`f1h8}pN1J!SrAm;-&Uz~b!mU^+-x}0w3bNe#H zbF+nl<{jEYyQ16JYy6k%f%sI@E9Yg(H>DJ^P>XYl@{-s}1l^ubQ--ec9^32VXic`i z#fKscRWEm!fKp;W-fh-}W)@_;aZqeMWFc_B53m5>jFI>m`NFz?%!C{Viv5ZKB(U9z z6I4pxY2drtbWEv#x>fSJa!xE4+ERT=yOH;r(q^f#j^_T_FfFMkU5#!{%)lQm#$-X) zfwd_4>zt8D__hFXsyEv6yvxxU=Nu8}fMy}%p_YWoQ|QC&wp%35L4w{VcS`9BeORM4Zx%g0L)f zXqbldcF0%11AdbW#V4d)y?p-a;7IvaN8I`T@_BXi1R;5Rc<}7-Z*~{FT?HS$d(ES- z508H>2-ridbVE2Ajf7hx?V3NUo^QKB4HYuUl!ZN>%g|A&e<_@vDcrgSrW_)Cl}dP6 z*%jGl`Ly|DIC$ZOS#$v7dpj$00ro$m9g-FL##3$Iy`8>{0B^e# z-YfM=W#fzG z-imGLw9@}4vySX=$C>oERrT_Np-zA*f5wPmh(D(p0KOG-x#L%m-=EM5o!yn>1e^Xj zJvL^ziaHZ{@Jr*BFZ;DgA%_tPLVSO-x$_N?oCHwb{#*#WkePM2A0za>MwpYHrY_*B+(@`0jj-DJ(i@2h5W`**`U#`xJrW7bPtknNBMU`~%8 z^_l4OxV$zwjlt2s9R0=zHBM<)n%O&Mr4Fj^`C3b2=hfFwy@JHJa0f>rC02bIY-G-s zfo`;g;j0FF+=pcUb9Uz1{jcz7&OxzGS_Ap$OpoJO!b2E#eFJVh>;wLDvpiE-RrA6H z&-p1Gy`%gW@RLHHs!zdGPIa$xOt-;5q>V6Ka~B(@;B>gkFl)|4X*_lfbXta6*EKAN zZJaG8MaK*f*I|tlPG~V0kY%oXRy8ORltD;le=n(_d0O{s-K8V!&!xBGGn zd*{J?@3aQKh_jG##2p5LW=RG6RH+&c?eIUK_%@cNm2Lz5Ehx<|f@h`jE<0lim7SSl zK^(+wiD0ryqS(&bcI!qk7KX#{6!!-s^W*k~E)5Upo-f45X2~KFOfZeR0d(t88xFA$ z)kw#b!ev}`Vz7w8e2>@irfO|9f3<+8wS(&M!Sm`~K{rJyypyD`&d9Ru6A(?~m2BfX zL-pDh02>soP2~(yQXl8u*|UOpf=}mMY-=Dj>nVy5g=c*`*;|k=rPT$M(BxBJU3rxt9uAUVf6fWa3vsc_$$W0>BG_wm&Vo`r zEsl3|I#-TzL%sU|eWIYBHzbWx_t1j^mbT|bIIRF0QXy)ke6d-b5y#k7`@}}fX)a($ zMDws1HT1;Db&8$FG(fns#1tL=%-2T?ySx&8G0SKiNS)?ml*A~qJ=8JtVFwa7V$d=O znX9+cAYfI=JmV%j7Y#aat06=!R?3R?!FgQBiM2(B7L8IR21FEUu{p(t;i66oG#rEK z`D~kXE$vTq8xdm1RXvaMgetmRMTamY!4>0V^bVcOX-6fc95 z33ajK(4bicQCu|FrH3|~5)Qgtwe-!_oIN#x~yWck4d-spO{|7CPsl`C5CM0B~HjzLTF 
zm!f@GJ-z4JB}ki=cw=L9(MK)!B|l;kh%2jnRi(Xxe5QMD%AF`!Us1@;)OC?Qjk6Xv zHGy_E5iw5_CXuF-cNoKYx9s5x&aTO&z2aFvM(o^m>aw@B8hM{>!AxxmNlVlKqS@Rh zWb=>WBNjigv#i(kc$I8cVJg~vF726g-YGJ_q4CiCa-pOlL?k8k^>)PF5=wT8LOPqY z%q+d#QU!U>LEdz1GL9F7_WP7Y&nn24L zO`AX$kIBn}_7NQQuE+B;`8quXLs)RlL#x@x7*^&?C&}pAuWOd6QiQ=Kx&{tG2uo2< zOrnV8rTl&ff#fyL)}4su+*ZoLS{V{bcXgKE<}YVOC9mxY={J# zwxNu(PIHp|Ueb$iY`EP~U<4Js(F`Q)elRSja{qxCcDIMVzkBi)!sEVDA!2RxVe8ud zHll!^HF1Y0`bO3I94_Sad``8khwk!+1uX)4-1oc!+O6o`_g3Dij^Gd#31v9`_=s-n zY-I?9fvk>3^i8q-xnd(UwcWpad|S>PZ#SgB6iZpz(A^^~JN*$G^`uaX9dT|~+3+LK zb^_XhEIK65sOlX0&j``naer4uO*GQJpH=VmuVJAzcE?+UC-Mf!y zMVM1#aN<`*nrf3P5%@qgC8`Jd-_8IM+R|@aXbGxRvrKr+9<}-t)i4#HunH8;bgDtY zdv_JotH^)VdaZh|e_z`z9*q}xW9w+$HO|p*VZe^&m&s=U>^fxT^IU$_`{rpmqyAKbO<*NsKG=*fQ?+;Qw5dD4|fx9(#wU`53Xm6T1UWJ8s$=5*tC)+Hb=eXFxpD58CcYpatPbrLB9n#dl3EeSnDf=m0@Khmr9z6Ll1MyH z))FOS#$t?*=96tE6B3+?kqD9bHuuWfy=5DOc#>#ut$cZBLW@F`VG->U#e!o@t zHd^}(@m?C??IL@)*#HB)pM;sx+|16z}+JQLJa6(B7b2ruVL5TJPU+ zKr0^${oSRlo}u-8Tosi@25Rk?=xq*>PfDZdst$Om28WvO*S~M_NU5n}`@D812_Kp- zKHK@Q)VIIhFY?L8^Q>Gip%EiFI#O+(M{=i8dr%Sw??kM|m`zgwCt+2U1b2!TPW`w|9L-k(KU` zU5U0)-7_i-C5)7R)?9|SYxdTLI=K!x2dQ@@-COpM<#`ASgaXA?+CU$y(Sik3} zxXNKU>ytK?mO)2xOG>xWzv4t)z9D0e$u!0)uJC`~ml_K=r<5J!UN$GJVdh+77xnuOm z@BTYDu5_`E7k4Pd%3gAw!;TE-dOs%{W=vk0T^Oe!hl(ChRNJWOBviEWElB($O-u#{ z_fdXWJj|IlmY=m$9KXh8ajA4Q@`gWE_UnBvI=!*9P@Q7XB3vm=HRk7rQz*8q?U4;Q zkXcHsx8pqHkEi2nE3jk67~|vub#BvO)-g*_oXW%tR+1eY(s!0QR$DY)d(>HJF(^Pw zhoSIQXj+*umR@EDoi0Y)SjRdwQtsmNP?}Z}MAIfihuigZQYu)HQLt!bI6;-KG4deT zoiRzNqM=VLAh+h|z)_u?v)@S0aW)ez1lskKDTW?Xbi-8&pGp+pq1}}9XkvgiC=7PN zp^e08ev)RU?BklvSYndKn7WfaS7ZkNo~&kmx`LpA+`Nx(GqGrQvQ`L%cT)-b9Q;L&#zVxt&q0zU*K|>m8W~l3-H+|f|DwBpEoGR zSiVOxu+3lQr0U~4c!DM29MY7<^edFC3FfB9SF&_4m5P~tx!efGPw-*eWvXF%Vaw!d5uc<4K&jimuUw1=FWFAKu#xUhPt6Fr&7WWH*LhVSqfatE>=iNpmB6T08+tYQ? zVve|hwxev^8|e7uP}%>}-nlKuab)Lu9;I&N03$#Q&ICEJ!@f%B<=J2x?@T421x6SLI>l%5^@J%MP{} zGaKE+cD9pONewkBtSahv`!60pZ+?2R|Mc0x8}iM18ECbXvwh(rJg;rK8`JAN>9nsv z1;|Q$p$NE}IdK+FE>$5mFR}r{wO+{}StTFqnC#1Qn@PQk)qYACJqxCAok571h(|kc z6FlKGRD|haSP5{d8IZ|s>0if~sV{Bfn64IQG9Flb6n(~ijf6k6UNO4R8Xw2mfXO){aQ<6F4MBU7t z)G3vI;wD@0baG0G-O7v%=cKuIBF$rnni3LApjk}<0iX{Uk-`sCEXMRKKSJL0+f#0} zb8${U|M?N6smzia;bZC80tY~h>A*s|V+sp_)N;9uK&6~AqI(5Q;Wj&zbV?&YNxUr? 
zXIn6olLlKxoLl7+w4eNp`jdk~@$o==Du=TQl6Y!wWlkN;@j2ICOeuDGuzMGg-Z8Ou zcoCUv_*=Q7`<@SbeichwX+lb*tLTVszgNwle&i2T@M>iNN``fv6TSP1I|j7yWJu+F zMwuo3G$Ni)0!_Eup^dU}nX|dM1^GI~lR-6_sVPHz%^a4?8$B0iQ~NWJrE=aJb1C2u zu1K6b(Wv{8y&leRE4c5X5AP2iC?|{{(?!0hY9aSvIg=eJB8F`=AW=Y)2#Hqb@W;9eC0cLW7_;pFe7y=_~ zD3l#~6hetuK4dpNgAMFnCA_6`qeXlq_1VN$+hF<@f_NnA$=wHBp)!V1NrO+%(P39P zc`2ewT*2dTt1%MhLRo-lw4y@L9p<<`L43vi_`y~fk+3i!Ltt1XkUKm-#n$xSOJaf_ zf9$Cl`M9R?Zzurqi+q}nPJ%xE!-{;KF4530n`(L6{r9DerE!+ev;veWy(#@!0Q$fa z%a{eqgNv)K{2bNCx5u@@%5a^JwzqDl(02ymkDz^z?JFwi{Z}_{sIeFCk4;+9!N(pi z+V0(03E$xjVp?|ipF%9{SN`uG-|oxN1p|$a*8MNd{pMe~u`PA={Iedckbp$+{&p(5 zh?P+&R}82Ha{wi?ZVoq6k0Vzs0dH*bzovHHzcvg1?uRmdX1LGy@4Pc#brj=V;vQ@#UYc z05Ou?&`kYpwosHMUnuf^sMZ!Vnvt?yrZHO-`q??<4r}#}?OPAq3pS1rj+b!zYef#poiznY6?Eg64f9cKj zAwl{=KQs5G(!vTBwR8PN*)F9~X|4H_Y=nd&h7zxY4xnEU^rvXeNeK)y95ij;peh1k zDMan^a?^x)fg^&Uiq^V`K!AzPzA#nn4pnpa35MRwNSnZ5!o^a6psy9&@d8;=ey7KO zuXs|^%?$o0G9?qY*(pWk5u9!l)h%T`yfgyGH2v-%7y=3<0;f2y=t zPG?nW9bhr^Z{DDQY5AlRNguW0Z%r%8DWifeOJ`0z{12lQvKSm`<#wsCRM-k_A)Sh$ zT-W!L6Hhm>sOeBuF4(Nfxif_0epo*}v-PNYtZIORK&iND=f>D|H<a}gG7o!;s*<4jQH5lPhKC}Uyy8W zSu$Yj*X*Q4+Q-Ip-yA( z$J;$>`|3>*jZi}$_pkjpO&bmE=ugoUsaB)V?VrVOl|;O}zIsRh<#)rXPEI23^63h4 zMA|^#q5}k>TzWx(@O4K1E&Pl)^9MR}% zD^ijzp%k9lj5VJi}f|70-{ z1yc%~Y0*REOu{^2W;EgQqYtz?0~za{cZdu}W^3K-G+vw0B-#-dO59ahN1{+H5Tjx(O6-#`)=g&Xior}2}!8lokz~weoMM!sKMq?u?JnL z7P+eb$Ch+%1G~mfgvlZvQ^tlKRXe7!NkZK&@gpD4-eG~V-ZH|6i`mENNyUIMp|O_2 zT6ko747dy^!YJ=V5DY%4-x^MzxR$836QHVq3`@Smwvt+0m?eNnkrKRC{vQ@iNSB$x z2vVZ`iw*4WJ9c?gx`jiu79TZCD&-HETQrI8}&^W0=xVyT$BfS}b;G2D1qE48kR34LdD*NC%o{Ku7#dH&9*AtGxC*Dqhk$ z>`!V8@Z#nEngx)wt#!jU47DzNr6e}$qd zhnV{BT-3RevoV9IDd2=i8w~+&av`tP$U$EuOLnLDR#-3eBm-bMDXVsy^u%c*QLXeU zB7_9fI{Q;h=k?`bXXPO^Bbp~0{N-(!jCbJEtCQnnUMpwp9YY|8-J>gke*pTG{i@;- z_=)DUznqsPWzG-aqMLJaHKQd#9A2W=4M=fKDTJ8? 
zlzqxByG@h;=+mCR6@(9=3JPh4>x(^gCrJxhYo0(#E0BFFuZ80_f|3Zy`P(f{Oc3gh zMcD$xqS+a3JOM65I3!-#qfQ^V$6Yd0PADP3D^XNkQ-wU<`(QIGEFQkP+t{t{o;5|A z!SC>~O2th8R_Mj-->CFC|NM_F)RuDllssN0S@UalzW|B;^EJEQ%Hyjgg-}hn*Y=}W zKC!fhhz8;CzgN>s3(kkW{uKImcB2ULhnIM#g87NF{0e!ZJ)Rutv*?yl{=O)5Q02xk zGB#Z3F3&^UEPQJURI3_5D5FC_@t7&LG8@{iL)s^6EjlOB4hou^dBDZ!_0io?+gIeN26nuc4*gf zfgQ2Z54G1Xy;izo<@xm6YZ{$Wc=u!t?0xuf?_PAK$HcA5j+ArJA3e7~Bcq{cOw{Dl z!zCoaw4H?&=>UwqdgSp0bHkLnm;DacB_I)d!7i7Vy3<)N!7T!BHLpmuYwe;D6*LeD z>P+s}S(zoA((d{yT|(Lf1BH|z==+{F!k5ZBi(goFV^rEIs}8NpCe4lpz8_v|K*1TC zSKNn6uqE?;%52c5+K=~?W6fw+TnWk4J2q5WV>xMDo62;lF?MIrc!%?JAR`P7>GDS> z*HXUP3mmLtUF0oi^h>GTv|^|5u-+`)Ogk;XxRwxZNYRIi!iz$}A&->PT9l5ksgwGv zgpyD9Y2tiwp@9WOJ(E-hwCwI9o>Lobi_QX+cL9ZwzFy7?q8O71aC0NV=>(e@#}!B} zzjUECzlR|TvlVN>N|095!XexY)5div%6Wymox_CThCZQ|uwna*1{No@v-^jWtE2Z| zuwi$?+drsujIq(_#re^DzgJK~@_)*C`*Y6UUz0-fZ`0$-8v8dHrf&DvS+-JES(yD{ z)#tc>zBmou>k;gMkZT zA%9^&Tfs0#N@+=`rom1bbH)sIHH~|7{TnSeyEQ8td=x1|POtC%fw=hQAF+hZ-lFRR z4Ci!?!s*SjyRr@3407uo_tbXpJPqu zs%t@A*p#!jC*Vk)&187ZRNDDYkMgH^6g(NT-n4Bi7TF~r<)r7bPs~l6G#h>M;rvn= zxM$}%>1SHI?S}aP^9mAj2K*2!B+Zd*+ZRva>WTc$>e6N)#tV6EJT?*tHC zxJY^fEk*8PG`kffPD<($oz;_%6`~pQdM}%mKZA>>ljiT5KTZ(F*EhEB+3k*cO1bi< z7=>{%^pt{?@G;{|&)&^Qxo50(nBn6S96s0Y-s@ISn2}?8KVx%lMpvU%QX|#%oS*8s z3kfKvERi;&5`KvtXL|g@_78Gc_AeAY-bL`nf??tI7D7EcD(B?8lz+|rWaSuK2|U%S zM&781W&rKW!;&BDIO2}{QnE3)+1c@Z$icj3@9M3+wJJv?pTsj2U__bgAIiK0C)Unxd+s$ZWPwf|`-z;1&!r0RL7%@(1!P-f}UM zlXuuPuil^HZd65JC(p19{Q{8%X;dio`V^_Vhkv08r&PI>h`U`)wW1BUsxk8a%_3Ui z7M*KH#^)=%`qTUCDmWc~;#yx{4;=y4*A-EwPd4Hw@#j|jWw&1OXUQsd>(!uuCX0Mwdx*~=xSpw1V2X2?FSp%lpd|VUPPGm z2Gr}IZuMR46l$o;&yGk9^aD6;Ppn9M-1uwG|1*>~;@v(OsB`!L zE*V6(aB1SR%*C_Kxs{L;Tq*8&TNhKa25X&y9 zn^$R_k>B`y8KW>q$c>t}$}c0>hL9^Mj3Hbq<}8|JKyT=~V9qEhU@jz6dS0dn8;i&n ze$A~Qb2K+(f3rHZJjFB+R-Eh_C^9{pT{$VXKH@%po=)AiNNSs@0&oU&DYNVyPv1_^ z{iNq#Yn#S`3_4<9t}k@~y4nV6xZYV#+tm+NyMS`dqOgF+$VQpfS}UO`QF7F&%-Hel zBN00aI5=^yN;xyefn<2l@tTxs*2DpRY!PJ^|Fz||bQvH9+#Q?E^XAFp*Dn;}g%|8e zh0k$rW^2u&k)(VF>w>F}=km(1P7j1?Pk2OXkZGJ=5vQPHC78>OO9!{=xu3$Tq{2nD zO-CLPz9S2;+dk!No*^@gtGv0Uo77)BtxV9po44VL*}VDBu` zj6%R6b@C`zt!-?qVS3Y9m*q6cvJ7?5S{s70zGVSy%|_#o>)rAl!2IYQSx`YIx0?0H1YiWOcln4rtR7+%qkdgL>sDsz+&i{8Ad^7 zfHNpwm84|@fSw-|a2-DigsmWso}?YxPwyd4`Ot+FU7~*u0`pD)kz@su;YXf#Q!K6Aw?2xWvdL(njy)xNBo}5}%6GzOy+-rcng$y7n_2xsW5lo|`6a3)BLo`W5>27h=){oaCZKX1$2^H>! zGksQJMqNY-&fM5Rr-cKe-3lNhlh5BDoZrkbuF(lhW{_6u*oA1T} z#85p&HQ}0H1?6}$fGH9t zx0c{fYm(PfNX7j-o*_G;NX&ubJVage@E`%!ZZ10v(F=*tV0VbFI54i9SI54Ka~x9; zCJd$F5EYOldO5N=8hIzfUc(Lt)cCFDWpnV1aMqWv4)(v_|Eu?;eEj_B%h&q{&%Uq4 zt~O8Cwh#bOs>{VCT^UV2HkS|Jk)IinnplfZJ?%?B3H)Za`9GYtcP`|TCzB8(izF=o#-Bg3G#li7)A zLWV)*+&4z4H8X*97a+Mk(6qgdY9Ceja$dlpLeAdpG!@W_%uHHcpst6CA~B}an-eZ= zL@UwKd(ENT>4Bn`Q+lvo76ewHh>AsJCA0`|y##{@C;^I=^9x+hnvHLSeIk;HrHI1T zEibYc{Q{SS%CR_F2inWR%i?9*CFZ*4q}_fXHApF`eJ#?0mmq!B8`A?&j7pjQ)AbBO zCTCT;+ksUK0oQG=tnyQWr%|O6sNxgtumwaDPtTtFXeCXN2ZkVx0%C4KO*UG^2#V1l=BwdpB z5+X>_vs14zM?D<)XPQN3OdQJFcZ{q$SvKfoQ6%`T2`!FHZ-IEaoC%ISX-a~(h za%O=lNLdL~&i`}cXr!P4ssndqyV-BN3-pgqnpdyAHS~Zf{Wdm4Q6f_v%y?~+> zJwT*C$KI*jx@f1M-`oMzrzamL#$;@<#&IBUCVjV025J>PxLLrH2~Yc@GSBL#dH#3OZuV3kAA0oZKe`|E}CXuC! 
zZq^tD?13TuIrdKF&IPKuwf>u+nkh~ObCzj5_2_y3E(UK@qLniGTEbY-GZ{fHUpI^2 zq>bvyGuqq2SWxx#Ds{|A>q_%;QwqJNRCUs{FkYZ+S>DO?DpVSa{hTa61o}_bVYEC{ z_S8+)Hzt>exhbF_g<~if#iaCDrk$;|OfPaG`;wyM<>WE^t6LMq%%N0lJDc9eKuPEV z;SZ=*QnxHQ&m0djn8pdN&WUMIQLDgrqZPbn-#?dPeT)_AjK-2!USWj*@KpAGD#^rP zooIk;xi5@gQC;V6f39fjNrK2-LGbOP)Tc63ckeAQ@KsbTJNmev#6zWoxj0gRU5u5H zZ(v9^F)JhJVVFyisCU+RE$#|PV3bR$C`CbK3`R+9rJQo!WKiM&dZJR^q0fbibVRP^ zVl?r}^l)_#+X(7#+!GWigXyY-9~G#(kRhj=we&@)W~LKR6Rwl z$21tR%X}=BLaZj9n0?c*GKchG;~t!Hm~zhGyZJ8IP+9}G?nRbZJ#-=c3ac<+C*?|s zRF$%SIplSJI2mrV`Ay<}f*CL1ZhQVfvWGkNzG`cdI&Hz=%UE6 zcZB(rfWwqbG->5B?qg?Gc&7P`g8MTAVrx|>sHx=bZaac72uA*Nlf&^>CVqoW-o1D^ ziPqf%IShMsEsS#NCaW2vj>dD`JQjDr`m}@f>BauR{^RG->GP-kHwUlx|Mv8?Y2iSQN+4k?u_LgY_~FIHYfKTx)?zkPc1s7 zx7H)&cZ*`rzS`nTK;|@^GoRpAFQ=VcVyC=<@GD;2^rbODq!SueiW4{y4q9^<9y!=k zIc3+y<&F~&StpsUr|4=7K@LYf>VV>0tPq)r5N>2HC(gW6)wP@1F#(PvHh5%5+!2pV zYAeU7;8K!Uhkn_7P<(UaV7FofSeKjP%UhW!KYJAG$@piN@xZd`d?BD|3FBZv$2-&M zAY&1n5fw8LCQ?1WJoj_p#~V-s=u`SJ~Ys8~m-VhnxJp%jZOHTRHTc&<7hN_}k^q%c)K z4*8n3@1C?dnEREC%i1gCQr12m&XTM>P5xu`HMe>b;aGIqiZ}zL@go&zB zq_!}G4@#8s5@Be)dN7=)R~0MQ7mVDTwwPGw&C2no^2S>mxfNjGk|34W!n0akv~z9h z?uDU!#$d}#jt|O`FtXfdz*otaR+ffii>jZ2Y3tF3oVGI+8@c*04dva!!S|pTiu>q_ zVQ+~-G8znw6QtJe5Nw`;G;U~QKH;bosiL{l4Ri5KVh@+j1g-~D1@ZJ zqvvNBOxBvG>OOhDoSH&y|BQqsnNOJ~?MPmS1${EtD6DEi%g@7A~f_}Bl-`5@i=$G`qR?3{Tk&H3LqXvJ16 z%P1qOE~gD`I753$yV7yfCip^zF?Rd2;!#wYg9$>^!uD^sSb==*|!{C=W^YC}kj+7+JcgiZ_)EuIgP`s6vWc zNrcK;jFz)1?X351(7Bz`I29&Cskb~LM=r@r3aU{M*TLuhwoR2Urww+l2){MTy68Ck z;#;FmQM&0Rm1wvO@iA9p+U7%D)namK)o^@C<&>1(yE^y4V+xuud^M}6=gG1c-!%GZgq9t0x}VWV!u5wTlpr+cUle@ zkvWERePEZ<#tY`<`$EG6#R?HMq}XEqCu@rn72)w*UF=GnA#A8;JDgX7gHETEa0Q4A zgdIeejf_Q+t9b7z*Ln#dYT=-7xca1~Ti~_$H%y=ER0<*{0S+SuhO0U2JFL_)Z1>+^ zT#W};MDgxNoBf_~w}`Fjj>up8%h3JXMihi)ODx;iQ#^$SfF zaG3|g$@J}&OF+5=b&A3lskd=gS}x~J;ogm`tCuX2m3mjZXyym_wHCeoA<)jgEs}?g1DA-Jdpz1~%Z#z-$O2xFhBqEx!z#Vk^ zxtUEWtW5`cFbMueBG&9D7qBUBaCNgVuO}YT7pv+ zC*g)<-8W5^wg-ml@3_6V^m0z4q&(_muO+Dw=RJ`6p_#}fSZ;V7+@L#DsnxTE1-EI( z{o~8$X0E2(^5+!Wo;O=jL6q2x{D8)T(-aqwr!fRYGFICT_FALaB`~t$aoTGy_t%6J zLG|17vtRDxsj;8i9=vV8?}Es!30k}g$0gQZ)#ol}dlPiFEe_Q>TLmVjTF>{a3HV8aiMY8p3IyXA_O^iE15+I* zu!!BP$o37uEb1Y911uuh0e*b?lpK;7ZBagPL;tS%>kMWrS968?d&$apLGvSQ<}}g< zq{rtU&f4SO|L*Z?xK{Dz{wb|PFD%FCb)_VExzTosMxHl!b^+swyvGuZ z#M(RVI<{iZZLC`r*94}1>GLqG4!%}v*T&dhr@e-dTbH3Ti<@M^M3j`X6p$uO7AzAP zuy?m}=t}u6ciP5}_{l~co3q;3-1E=S>nQcON*3YWU9t^9Q zHHy0ADMKi&yPOsF7wLZ1@=V&Fm}0XM0}B>bOxLw)S(nx$7yN(S>ga~+!#q3DVC_4I zYZ#vUv24L7o*pn6f=aO-7h7DgUKL}bM4G@NE*Ly`9W#|R(?WE0XV4*@YZV*_vUcJ+ zZh8C?`6|&Isx;;J$Y65?YMG`Q%BI{85?_J=1+Y(e&N5VOI4wYEB|Wd4R4J=43*}(b z%JB01LQxg1yx^9kY>%}Sf}wN)H41ff{gnYlXvsc4~^3OExJM1fT+X?DglT36#-pDVr^U&UJ-Bp%3YU$K-t7c?x6%r+Tc`@<|3RFP#LVnl^RTCD}pExTcs`f4G5mG_kd)3*`QH5WqoW<5@qcCXGIU17l{9v zHL|kDl_#lPTr%jJ7E=+@fjq~q`bV5Pl!3eki)vfDS3;lDLW3vVp(7GTaZ!61kf*pY zn63gzVBt|kSBFdv+S(HM4JjveOT9g3v~B!u7$xfqW$nZk2o~^CLk1izXC2r`qz@Qy zC_je6j-`@D!%j})1tlB4<~@E!(!;M(AW!%@Ecumt%@q@F7sMde(sKpU9Fv?{a=l{q zGELDArMlOGYxO?ZwD^VIhyLe{gL*<-LH9>>=<-APSJ$m_6)}=3S7MLYuii4ze)UT0 z_N#?d__LmOXrEe~hJU(OJNkCw+m^p>N9g>vzE-XJou1kAYUkr?N>4BK%j2Jr3gyoss|R zzWF(Sjp94^Z)fq0SxnPDrjj7w|7dLk?w%lw!|1^sUz*+VTT>5}yX|>E62^B20^D}& z%6pK6@!d7!tI?3*=5E<#-+ZWRzOk$H$A|jgj!40lezPvP#Oh_sxaq4m+WOeF&DJpR z>)6}lm!8fO;|~nXIP|d*g~eBJ==FimM=$*-lN$&Q9hhg~$4+3z9{UmmWE7KjZp_9Z zkz+e(GDbK0oPW8H2myVwz-mYo#q0%9C+)u z$M@)KPyZjgKMAh;=lFhV7pNdY!}ux=EN-MTup%u9UKC_Y^@gmj?jZXmQ+3u8bb zWwKVeeiVD0*}E0XB!-A`%20S*?<8k7I&C+CW7Xr(^iV%C;m8B^?ER&xV8{B`ZvSij z1q{@du8zh?D=&uy4nqmG&AW}uS_4om8ZTYx6!vC$!``p%Z8Y;4r~qg%1-Hd3Dn&+R 
zmroXia2-o{-u!-2an8cSz$Ei7DIDe#S%8iRm{2sUOf_1pnHI`f`^HfV7rvUFUYuWI zr3HY&)H7hd;lMZ7zY$O(B;;A;M8+JP(I ziG%j1t|zGRuHIaO3D!2n!VxeQER_}e-*;RuJ0E3za*M3VwqDH*H=5dv7b!K=+`sI_ z0uV6~#|V?$d>oYjinBXL7gKrR4HddKXpto2sJlZ+}iz{N;-76ie znb-vZc^3ed-^)kjx!AqiLL>k+Vi6Uyu9hg6mD3bmssP8M2(_eS!soP8P~p}$0)0_; z5KjlMT$U4d7LDV>d@XL5qIHOx5XGZLd5aW9b{{T_e**ACj!k3$CtOA;kye?tD`_RfMtf z7}Cq6LOhEV-a3E@xSUbJh{m`iAP+;~3!%!J9CTI8d4zzt_fbyHPGsob+MXSw-F@P# z6x)`LTM+>F#_lDtJ?*`$Y-5Y>En8bxvb_s&Mu@&iV zo}SOIDidn7!^w%NFTlF<>azp)Wkq(c5?|Ukbw8CC_f3hYwm8&iJB@DUi5Da2FQ7sm znw9Of%?C8k(HWy1Y-q5`R8s{Ks)MjV3MSfx?GhDzD-Tn+In`_8cb76wlo2hqT~3>A zz~F!l;hR>58Qs6cED9pV0P1X{p;(hn_Ln(=8Y|GczR7xb);2!IvXC|}HjFCVRlwEa zmEiWtTzzFvYMNhFk-QmYbB-M-}(MxsxoE>22k zy!|pT?M+ZDM|=Z8oOdF#u4qzO6LW$(l)V6W#Rq$7fQ;SN9(aF#LSTnUWU)B5 zid@@$(|0(y%QMjo=If7KunRy!Si4zyPQ2Q^TZF?Q@sI!c@BjA63dw84qlvHI#btJ0 zPV6-0o}eO{hH#G)ygtBguv4@oCdfRY62{>4j95-=VR2EYP}jQ-#1yL#>TVk?WL+*Z zKvB<84lWT4;kPjuc$*Y9uOHEeUpcWUUzDvTdaYF$qh^+3nc)QC%tU+?)zryOtNMkX zA=}-&x;$@ToU7K#e%3GUq(i+$b$LJkXm)Y}WuNVq;7Kt)7AD&DmETqA`Eph+*wJe{5O>3fb3#Zj@&#@PAJ9-F(53H3P}{1l0p_DpFA!=^|Lib zHGqvE-+9WQ*@Dz^?Fg+Ahf&0dg2A{=<*bfN8oGPGjlDa9>xg3|=(I=Z7yZwI6`?}0 z1Cc5x^i-MH-OJctFeJ9o4}Z|zQIT~Iw`Fzr<1^(#jQ(DIx|k8-eNxHM+Pq)tH&RU% z*<|YV71>CSDO!ZI3Crt&!O~`2^5J$T>(8pys8;qIRFqbbH8UuuX9fsju-k1$%TRwG zwq@so-4*oW%g+hrCWIYQAlTOuU!ltuvII0LY?F=hMPea(IN5*TTtJb)Btsg{2JBOr zW}m?sK!hCRsxg7`%!2~ORcQHxNimVgg#vu7h8!89)@B9tR-1pk#@OJ-NyJ>8RHt)g zIcv)hu`3^d#7*CO5Y3Xa_asaTQbM1tK^>)u?aVt2H5THkjAADjIkH- zSB`yMd}r>y@gZ;Qk;JV(KH`mCk~>RB0{l;k{uxDt;-u|^{-yJXJinPSI6}{4cX4CE zVGdMxs?WdpQ?Bav=*HS>ym+u>pZN7I@qar!5Kd<|x*p+(ev zxBe#wGnox$Ff{||K4v>D6}@Cy>B$r`lI0N&Bf9yxE+#5M|2CD3LoZbiP_qSl8M!hX zs&A1){=XaRkNp8!$FdAeg^jEiY}V@Oc`#T_*$!HJ2~4Y;>!yQDkC)Mwd~PBJ zyyQfp&aKjpDz1pMkVx9ISzOMj!i15c(u-dEiXjOy`NzJ9?tmgQp5TcT6OasQaw1+| zUQDd$kcOB8O$8LJ6bw}BZd^5p=7!H;-ITM%7jxD_mQ})V3;_j{Yc9YgRnG{_iFHqE zvu0PbYVwkEBJXTjLhG)46!GeUG~Eona!TQY+4GiehHj)E+>C*PBff_1iWwIxZ0>yM zV?aE3?ojInoLpS!z-^s1yKOYi>ag%Ll)Fp>BeP;c)`W-A^V|si zov0udZX7vg&6U>P%0~mj-o_esYt3UGN!>wg!CgCfvoc+Kx3=2-)YO%Y_R?*e14Z_0 z`J**p@XnQ+b#~R6qa9~cj>kxrJB&i{63~-i(9QtGPSb@_RM*gqMiC=TB~n6*j1)hD zSq(D)8vp5ABd+npTW=)n%D6y?*d3lPcU?JlfX)@16GS#U2N8hRg*n}`D ztP13oh*n5K%`?JbmTyJ+U6`5M-3HP|+~~?p=E(`MK5Z7bE~svK%aXc7Rd0Vi#y2%K zGESBqC08xa7U+fjDjJtUk8ol|La36-MXj-l6`ZBEC!Qnk{ShJyqyH*~jI{M4vvEzZ zqu{~$G1qOa`J+cGYv0*!EBDYVWkO_(S|WczyG|(o#3H!FB-Y2Gl1AqY2W*Zc-{Cd| z-C}@;CM4JnbnymB$jZpa7ug(J5liP74GKg(a&rIS-_t6BSAbhP=GhcR7Y?s?cW{7E(RrF z-^asDAyEgl4+PZ8X}umvgr5;tNc=^$R zC2*(x-&lK+`-$s+`llUX(Ls;#Ny~vvW1v?Hp_c zqaNpVu%+Mx^qS<9U7?0)M_n|}#}^_gQ9^L`t*yqthc@StIU76KOS^t!^SU5<>XUI3 z?yFZf*F0jz%Kw;4_psyvig@VqSNC~owPqAr&|z!&21dj7^qvX3ii=RI7m#j1gV<@h zg+bmlD}QQUzkK!bhlA!`g+GfxTUllXwm=j^q6k9!Nr<_&Yrpr)M>-04Q6)Pzj6+nU z1th}7=|^%PW;IpF)TJ0urgA1sACpVe$>xGmiIydh2XSyZGpZR+8*DZTMs^vodKJn; zzwO=deJ^tr$~j}TDS>Z6>67Mhq;7r;Krn}`Rg)*Qhg=rILjaaB7&P-rm<)1q3@I{9 z&V?1bARZ0qQH>}$@NO#P`z4d&;(;iBeh?RmYAF;*t5M~wu%}1mZg;J$UOrNO8y|Ra zyKJ8HZ9uSx!|!u3f2#1@L>Z7@beSCOvT(H+sOZw*v4k?}B=HzLv&BBN&~i7WB-{H$ z-HCetZfrP=9+*Yb%NykZB=(!lY`>ZB+J3XbG5XEoOzf9ix}g+6avOlihqAjD;x=wK z04#xOCSbdnbJj~M1FT&SXx2OX%_ReB#N|1uXPb6}`h2>MGob98@Xc*1g1npPjpOnb5J z+KBu40<5jHdk{mVRjG-FbCvztrlc-mfD?F7t4znXrpF&B8%zb=%S((3hcwrbfjj)G zP@V-JslDmIzuT(~aXI=qF!+Lx;VV%b@&I-E*v1c{6cxhU)@lE_V4fS-WmF7Q__tu8 zH~9L?@t*D?BO*|ee-0`W>9MF)+$Ey@ikDGWPaeVloLg2vvvtz~Rr zOuv=m=(kA!>JGj5-tqiR??*euP2U{fT05@&Hg-39k5>L1dwWb8iJIHa-#~MV+xT)C zocT*8XT#^v>X*6aO|C<^^5l-Fd&)y=E9nZjor(LKn@1#%)=fhX&)%6+lc9=bF)kj| zkf|6^F4oToywCi(<&u3q%8hl8{AK%HHh`dg$FRbX?~j{`7>90*1)HWdGl|~nTImkQ 
z+|T7oFLgVctkYn*ecU$@mv^H|D;E{{=&icu3LPH!*l}YLy$gcAdA&L@aLW0%t~Nv) zRFdt@&fy`|BLzw!>CNS4ZIKlHOM%Z8Arm$w zSC(A29D)bh1b3*#o5(9hcebP8j-9Y?+|{^}&*cLfj3x)V8ZQLitAIcUPG(_Wr=!;y z!=&A)@fHUadlSfApYS#dznqwDNE?^x8gD-fWxZ$ri;qzV{_|EVuE6TP7Jq*_xn8-1P^iV ztf!n-MjnUd7JE&bOF=HSlRgxI2|-arb;RkckX~n|Z0FmRRDK;=yr}0#6gvPLgz@O|a4_$2%gfCw=R<^qC`Kw@ zsL7DpVtKfkt&1+L13cd7W2`R zgrc%eRwpl%7oz}GU^*2%{SJm;6;3sZu}}}$Q!ii;ggdjKMIx<0lf6rFkg@|-FrR5PrBGmR@7YpkxC zp3g3Zhx8HGAcnoHWIFAs|VtGBP5aT(wHA)RN7Y3O7 zK-Et#&PnkL|!Bqk50caC@+qLV>Uy21R_cnplC^${D3Fiz_39^+LpL88f9={asnHEi) zH7`?Su~KB!O4ZsODvJvmm`b<;w=*iXs+yDa~Ux$4Vb_L3ffWsV>Gurj-9R#)agyP^NM>e73dw#yQ9GyWVB;w*B zE{>Z)(bd|$LcW*u+2JNAQ}A>o169n@h&`Gcb8yf9;k-m>x z^L@Y>O+~D1lu~{d7ecWVbBV~T8mb0b*V}pk&Rm+_{P_cMw0;zW*_Y)`Ba4IUhC3Hi zH+<$;ZrT^#TV_vT3puab>zQU&&giOIbv%%1a;(bwA$5#c>GFN@g?5`)&Fddt?7w*0 zeEagnk1wD9aIpXK#hd2GH_cl6)ptKUfBu)|{&GW~9dGh*AWz1+Q zc#uU6$6ljt{sDLp%`xzQB5kM53dLflXC!FV?Lz)kxrY^62oR_1MWc29bo3_ZtsK&j z)5>p7n~(P=AGx2QX=xCAj3|jtXGtHP>}BXv(VFR4KxuEvgxiV6tRgdt&`#2Ntt-SA z%Ls%wE~dLz$z<1PfaFWiHrv{Fh7_Bc0IBPil9N0k!!o5>4K@_et50TtO~Pl$8?^4t zk#3X#O;Tg4iTxq%Wnk%kf-QZr#A2YV^vXDs;|8rU)U7lh5j+8(M#^PU%KK5n+i zaPV{jcg&K71#sa9Ti)QZ_S}u_kGWYi{azb+@n@zA#i)m+T6Qil z#qR$4Rrhd$1SpK`_eoo;Z$HNQH=%0Z_MI0N+OCk`Y;p2rWq*F|uj^|Aanotwh4Bmj z6P-i}I-mcG+^u0%QR12%#n`*QLiKR%xbxxY*Yv~}s~y$s zeC|Qlkq4RV~*o%z0tQ8HZ}Ifm$;UG^~PrXVmEKc zdcAIW%FZ5U9N3mx!S1Ttr24;*?!a#w`X>FeAcp1N93^vU(D2Jx6hi;m^6#;?$GAyU z=63&E#xMkv*zh@cGALTLM>XR)?W-MR* zOl6@QaQGAHcUBSMy+7lBzdee@C#(mEedXUsqt3q@hU3!Vj3Htwf_QkX%YHM&3J%bT z@8vR7p(N}rN{9(dwSt%dclt8`)fboN+B<$;dVKwNM+`MiT{G zTmECye?Z4=iq1UaPYz>Xi!TSHbkDJ_SiOd#cy9Y#``h>%BXaT$&|dO(`qxf<+uwc- zv{y0R;q@(#_Txoumq&YQ*^aDdd9+_pqqc*N!}~W%HvDz`Cg8gL84=#U>(4Xt@q@yapNZxROpO6{^;0(<1lxR_ye;>DgD8tAXg;cQiznr4=bzwBpPvorQ1!=OQdlQ z%$xeTrw|hJiJG(~5lzi!3Ffg%TER6+cQR^>4S7J?2owvc8kp9cJME($#d>oG&nk|Z zpXKPGK3OVfRrA5vdE!GH^ZDSjsJLnhSI~FgMzK}rk;7WM^UB?_m2=*wj1(2T^8J|eUl{nD8hrTN zSdQQB-*5lE8|cDfS<`$f@0Tj~`o$#B;VpT&7JFCgp{t>jzbyFuRia$Q0UMH+`Eg%P z^XMp4;N!q5YiCo5oMn-*0H-N^?@rEXPJHr7!7;#95sy6L`k=g_fu_Y6M-DiGGgs{0HYcx%6tz+PMX)t9D`7uocwua9jmj`kb-c9G=KZ0FBe=dBe3^aR|7_ zdh21cl2G&>YX*%8R?)28s4#_l5C<*CvV$Umy4=MIZNCYR3_0Y&h4O%!mE*}=efPps z0YM8S?$i$Bs{BNtRykw35aDnQ9XmD*0(CbGEk8^yPd#|7WlkGbjLHtw%R=_IF1}LE z`s%d#-R`4@-)QZX9sV+3(-RwycI=5R)V%ndM}zb(ZUq?==M1eoxi+#2Ns}Y2Rk*2VH8$seP=#NH7?>FxW+@iViT=_3?-?X^)Xf!P$ zqiHMr9zxJ-q`?BnMK3A$F35&UO!a*QRtG}Vo`S&w3iOYJc&Lh=5v{OTyB%J_zMa3^ zXDAlh777L3oTg@h4t`$-P@{5QuZ^PtpdpsTv6++PLc zPTm62)Ty@IML`89LF;aDjwYgc!0&|*o0yp0=U~q#jd3)v(cS7kX?P4JyGIe=< z!-Oa;8FG{*KB#}_Vt-Z*rIPkx^yuon`Wp;z!vV2sGs>NRYMvZC-hW<;TswfAaH|nc zJ4GCoeF1)6veaV=DZf*EJWTk_?DYKX6F~aA=C3n)Y^W-8v$DCqu??uf+x=4lMj>Pd zq!NS#lqwg<=|Z@Ez#Xp7bd%%Bg#wzG=K1WpkwAB{W`jTU!Gm1v2N0;G(s1U~I5v7y z30-Gg)=$~jLqglBPl=%qhga>$#H zJD#3J0;f$W^3H<%#HFlzLKGmwENPWdv%9_`VWvV*e+t+NXUMKVq9$a$kP#>jAf#X_ zP77V6a}r67*&pg#(EN^RVy_OP1vbI&6=3)KELM7cK>%E3k*1y%V>mszey5OHPrFI0 zT{H#-zty*y7&&4gVI;e?<`SJ*{rNLx_?*oC*``}ONsz1wIHrU*(|8csqW3&+k^C6? 
zsMxVabq&2C#4hXXJDiDBQb24_wUgS|U2mor=SS~pL$I;2xw<`EG9byZbAng1@eQFn zoQnI5o`sqPDzF>n;CXf$Mb^ZKHa6p=i?;Zb&Mos96bAwFk~FB0sFwg+-UoK)PT7|-$l9t)Fz(h$QMf9mWY-%?Yi#=B(F-mtRl(V$#%5AR^ky6(mWKpRW6zd zAC-NV6p)JmjZ{cCsZreR$Zw_4I+ko?r|Bf!*j>o~g06JVUa&6#Ht9X&=(ka2Fwy)m z^b;+w|L-o0Eo)m129{(*S-{(pEHKF5$Zn1)%%JKs^1EBkvl}9*JG?PQ!Op>p5`tIf zSFPeb%ziA%Fd7Tv3mwXjtcR>f+cLY(9h@y^4MO&coj3&7tY?YlyE)P{Y2ls?swONU z!6Wb%M{4eBFH+l4bzsg4?sDlerqCK}j3G&u5W4o^Jjw*j9O(9e#AC=i>arX+{$0i_;@BhSfR(`*XSo+`7nuou{!=7OniI<`wvXP8=$wpj4+ z?v&xKxvWF8GDwmbD^RtiiJMKFceVC7<7ghWK~{h=VE^;q|80giqg8}YOk%m1eVjr# zl$EKkj-QS&a7SnIV9l3du zT}x%Oy{1a4{HJU3t}$ms86xA1LwnrVxxcj@(vKik^XUib-hID@Rr$Jcaz;g(rXo6JEmG$oI+Y-9nUsh3i;v>4Hso_T8#F5HD^k&$RF*r@zq zoch>G24teb<__{#D=L)Rwv%Vq$LAl;5Qr}TSC!U-kvg$mbnSMyD0$S$yGtT9E1+4m zKEX>u?#KKZ5tvkQ8RqaIVLp<9#-*oz)=tHa-|W+_soO!K^3;At3=M(K579Q!)Xbz* zqA$d^t*{{NN2$%-RAJpyw_l0+mB7s;1@CTl_31)oNmtDl(vyqgNjMqGu17&lephY>g=TdjYGbP(^AbvzM$M&WX?d`3E^Rc=^$nML9ZUlQEn3~)r zc$HH}G4g0~Cb_i^I;}ePDDiGJTAZm2eM6O?98#|0e#%KDv*OAS*i2BSszES*a%v=W z>WkRNCJMYk%8%WkNTpP}lteSloaM}}X2`spCS^gMigHPy1bO$%nwB1Vh13$Oi=*TSz8S1bcE!~esINHLeLiJH?nd5Q_=wyTGO(wTi5BD^s zeU`RXIkcQ6X$vI3LJ6E%*>d0`KRaEBX`4J1F)lJ0Yl^DQtvw1~yVmV$3}f0od*>|T zV34uW1R;?e321ULJH}=(nG-DGFpp)oJpUB|=l$%WJTSW|Qq7V~sqxl^p1ddz=oAnu>OHk#EjG|aIwh^q6R@q3x8zUMNu`aUj8ES$! zb=+(XT7IE?LH|8Lr4T5cxPb@MmQ`e?CVnuJGj%^UHP0>a*$n|$CFBd5)c*kStNQQ8 zR*)qpH&c-A=87A=?lF7&nVAR_u8@5U6EC0Dht-FnDS=uk)VA_s?}2uZB{$H67WquW zCC&Ba;F%~Bs$!ezjbHndH}mY4?j!ngWX`<@W%a{2fL3IYm+^zU`KiGA=5Y*hlhO9i z_daK;-gsnG-5cC+@7XP$h4bv~3}8JQ=G!s=D*{4a4&w>mh9dFt`TQr}4kv)?%fU_L zqx660#WDQjOP;tjp6!j{8DA3Hxgbfizux$_Ruz9Qe(K9HXh0#GZ5lRh?5^#R`Sp;T zFFJ0K@}*QTQlUj(?B88JIJO|0Y%G_nzs0+Hc3@G#g}Al%sQIyEt4P>aYHYLGaCh9~ zqe~erF31fOje8pvQIjTHL#B! z*jh&MrI`om`QyyP9>|%;;0{3MF_Js_WuV8y;OWsfc%IQWBjY{bE%Rq@I5zoN!;MVY zn@NNjK#dUBk+H_UV$fm23i%p4np={#k>#+OF@b@rPq~{kJ2BSLpO0tg6uU4|~H%X6M8fNZNzr>CM4a@a4Fi zJvQtZuUI5#4~@%$#_(&Aj6E{!81-HtUJs3`Q&~#(#?x09z8S%#?or%}Ewa=kF>)PNeNTUwY;tiZ9Q!x#EYBXNz^dLsc`|1uUbEGC20QQokZ zI{#bnd7iQu0xT6gTuUpO(S1L@D7x+Kfa3K-biSXU_U^b zT1eg%@oh)+-^RurSX3m`8#O}x8N&D7lWL_&-aVxr081aTLw}PPQT_L&ya$PAxcZRT zf{W;pg?P8~_?Hv@@K@wDNhMhp02!OH9pB*ZF}Acw3#dZ(_8+3^jLM9k%OrxU}He@C*0I{Ac@xd*hAzg?rBirt#G++(qL0U$C7qljv8s za2IT2T1ofGrn#$U2Z&45cD9Cj2RGqMxP|Y8b3|s*8i7poDL)RSqslKZNe;g<@QvOX z_{xIXhWP_Lz+UX1vi+?u2ac)w{4yVY74I#6XAJ0LF6OV|y@78`L;UI!{*_4d_ArV5 zRmJ;H3JIqFC~c4|$1^x)2_Wjmb}P1eJOVXP+99*J(hP#m&|U@=agK)0_L zeXhQv!D`jZ0+n4Ty@%(@A9{0;tcyOEue5WHe>pbn7|Hu0&g7l|ZNj^J@Z$lby|>Kg zFbELox4G3w0zI3HKgY%$SX30ucZR9{uPB%=PxDoDVOai+02%+ez4`wE8H>V= diff --git a/Corpus/TOWARDS THE SYSTEMATIC REPORTING OF THE ENERGY AND CARBON FOOTPRINTS OF MACHINE LEARNING.txt b/Corpus/TOWARDS THE SYSTEMATIC REPORTING OF THE ENERGY AND CARBON FOOTPRINTS OF MACHINE LEARNING.txt deleted file mode 100644 index 4f12479c46bdc37c278f6c035c82172e21369fbb..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 112992 zcmb@v+iqk@vZlAM_fss5rbo;f29t+m9%rk;WHJv`dDN5=+c8i!L)ff821O$*<{>YH|x!J!@+dXyj(QD>X*svV$n8-lgr83 zS@SpFp3%YOW!pTRF2=J@!_mlI_^cf_-z}!?+3eG`k1!dGMuX=4a6D)k=E>hM*Khq} z3vH+WF#P;@dvkMpV{`ZZ?$+n6rup}$vyJ_q{zKF3Y&G98()Q-|W=ECfpRSvuc{@Iv zO#96b!v()G9>EL8C`Ly2rP4i|rYS)`Lr!TaE?lGpg{JO9G{(NvUU9{8d zX6xR%&Fc4ky=nHI9vvU>ELQIoe#(7&E%{Zj0e;6>!uy|o6~lB z!hSAh?fIY?&YHn&HW<%`?Wkex&BFE@LiF{``Ng2Qn#|+#)6wvfH{V?9@baoXozFDT zWqW$T<~5^1J00s4)4_}%PcJr_4-7Y+PbdAw>7Z$wvuS%d_%fM(Vw85y7e5VV{HPhV zGZ?Koxo)nelg~q&NqckD8N&|2-GIS`lx>up&-7?GJw)^qa|eq}8@pR|v}RbTHd!ewYtO z!(W|xEG92=owqlq(}DJ;6@3t_lL;@fdB+9~&i?Cv`Cm?l5d7xA)4@>ec{;e{g56<;nG34cN 
zK0IeD*f!WJm^bg5ZRKl~*6V7)?$P=MdNC^r9WRsJG3`cxxyo)2122_ojo>$@$p$`RZaYo-pqa z67h9AHO4xDYfn!HBgqBZ++P@h&ZmRH_+qlCH}V`_S$pyF#hs?xsM+H5Li`px*dI=Q z8TJPaJ3RQ)V)(foiCE3r{<~&-@TcYuKTRiJMuY2hbo%C#SMT@t`DyFnosH(0&A4c% zmuHJnb1^{5FG?C4b=$MS{JJUB()mh~z|~SQKMzi|B*_2I|M z3#FilxZrBSj5WsR#Yj2=ZusYa{6CsCn_xMQnCyWoUfz~?bUL_%jS#$M)K1wKRM(t$ zB9~CCBiMQa3(u#+lZ7#7doo$f|7Lsl50ZlFLB6>(?q*CUrtksfU~@m6TwX1teVBeQ zTA#^``ytt5#AJe28sF67Yj0lPX)XuziwO^(volM=1jA3pQar|0YkPXn&<1+R`xs*I zt8X91^K~=3p3Mj7tBdwCTvXW@Heoh|8I427a|oZwU<~K%Q!Y5O*hX_Wkn(HHs9eBn ztPcHW`h@Mhz!DYKEOE#DVmeuzU({MlB$_SG&j(DDy>6#Ncs%Q(+AI_bYwc)0U7W~> z4CYAghSvP($mv%b;wfy6qjdBe~Mi8{zsnHBUa~pn%PFlb)CL7|`@nFz*Ry`Ta z<)AS^Gs;HBlj%fOC2g)25#n|>nar=Im`dAfw&0)t@jsOlfcs9+^qrZ|cMGPSF&bNr zdCUGqp*cIntgi)YCHl4__H&jE6Keen3|S7otn^N=o3)iRTW_ZAaK;;2=*hQcvf7N; zpW?4|+k>bM&afTQCr3CmEfmSy@$kv(_7M9xm{zu@d*J>ZUJBn}v|t(B5C(!9=d69J z`Drj;M@VLKv2nJkS1sxucAGTg$(%o*FA!~5*Ilqg5QKc$e3eu{x~4-s)@Hj>=6FpP z*Z5#?uWY*cN>^}`^ngT1hS_wJd?@MIJgfc$n)rm|38rZjToFUc3k-eE8_PLKV~hsB zV1jv-c6ow#j;Vz>JbAg%?00`yTAQ?m*28?z?@VgRwaj9RUog3h$(3WDPltW>SH=lv zixICTU)T=RAWBSr72kUC{s(z5{brvZoEOIJXY&{Mg70E~cy`8SqZj6r$;fP(SeJzj zXQmg?3iEdMX;w+(@r04hQ#F@#LOg)2>6>?OB@*LxW+UHoUhkRn$sBhiUQ+PG^NabH zf&SfjKjziQJsF}~#;_9`i1)-Q%GH~1koCX{UtadX)VVJ^sGo$6z3TLAwXj#?=15yO z854Wpk#;V*xTz19ohJwRsy)G#O?zLta7J`}0q=Ri-b$q9A0%Ij7q`Ppnbg^2^ts|G zd(?3tb@w{N@%H_XVcIgE0A(M3yJ7B+07sNGuWihS9 ziLh*okp^kBdmCHwGR+HSJEUCW%jR+V6rd^s)Z$u82)Z#V!t&-SnIry5E2P5-_ji7)3?g#UG3VYf3P(=4PwonuU2#VKpUMY*EKr2XDtf>f( z1TeIt*#sKITuNqQxP(ohrpGfjPW&ScbuszU=>HgQ{?v}DrTe^Pj8F`2NeJ_Kv*P zj3%csnpsBUs1#G8S}jlwg3X|;X1$gvzD*#NBX_iQIJ@P@?xlxII)u^stPTmZaB|j# z`!Wm&q#;JL$IIHb!504K4waD7{t9O%XS{$%%2{vSDJN*Ujkn)IALrxV*%UaXZ{C=; zw|w39>etEcYo5vII}kw}7P!tz_A{Y^BbF%rU}N>1%>h6Z3VNWK5I^EE%nU5rAM^lL zq4R*h$}(S#CfD8L&BbDT4uq&k3$G^a?1PQfi<%L+yW>{OeopkjZhRh2Cu7AJL=#to zQ;*?bu*28IYj;-CY5=HaYrsRX5O-)U!?n0M{HI{SQZ1VA#y3W2e3L>XODY{{lB3zJ z<=)CcJLf6;*acl=cEcM=Dz|&#?wyK1w+R_tgMR>stUfyMz)CN!!)?7E4L+RW?|)Hr zIcP5xq|Wej(P7LVcRG0t_-u}W*{LsBde9rt!sSa>p zqq*1Ygv^RxLn-8j%Ii#gA!SJ64<6Wz0>QS$&T1K2jy(;;qdem7<3nY-gyEYL@}Fnu^;6$ zhT}eF0BbUWlm-uthP>vL@L%^GZNdAP|HU{7(edE)Vmuj51f8xM!LyTPO2Q1pn|8!6 z?2}Jv&DiNlIkxuaGY-z`SqkdTVr!E+8O~2}tpG=4q;4O0GHr`NVaIaNJtuYba_!GL z@dFrsMs(T{;s!IM1;r+(uqST;Ap`i|IASDIGVv#Tw;5H{ozfwXotV5MIRX?+R?M*pujIAhqW$2o&@)M zhM{BbmG0bQb3loh>)5+oJ6b#7&!FKtjxrm3jlnq`T4n?yE=I~ygxiVW;Rm6V0Pi}L zv(daHJ||4ZM^!NBE)gL(=(?l~s203xqRa}yRZc;oi-k@fnG!+lEE6>{i-&_t)@>*? z1Wtu5%;rLN#&`kGZv?!I`SQ_tuwbTt{>T4KnFyTAI{GN50e)#o3v_b^{tRsa6iz(R z(Yqb#d8f?P0w5u@1i%!6fId3GduX=*zyEvR;NhE$&^%pKe%_mQ){YU%PA(taS${cM zTuVgx(HPJWDTMn}9B^ugy);xoDkFwXfWX|-UBqJ-Lx7=CsB>|h?M}J+PW7))of}~8 z7&SS{O3|Sg*$e*gc{oQy*&oc7BxeMsB5jAvti&;g23(PhgtzCDauRH<866o5kCT~l z;=@ScR1)+-oCeTgt5(|1P)dH~)x<^sMJq(V`6@h(FZ`u7un0~wh_y?JmUC6WkOGPn z6pMAMAS?8Emx`!!GKI_p{cY#Y$*K?~D%f>w*In*%^rZ0EES6u5S+G4jZ=xFz9P%G`nt9gOa6!%+dzy%gDUx$6120puaa2S7E~e!yYb+w5vcfZQy=Qs(Vtm(|Z(qMp zOichdAU(E#gy@w-572poLp_bZv`IHy9|;u6KXi*pcOWeI36gB~YC&QL06P?)N%UhS z!}15D_%H#^6$(zw);E@4*D`=Zsg?j$uQfZ$8SHv6#pxv1kNTgX4A?1t>&6vmOyhbrfb2>m+6Y+M4nD32vQgNc*G*{FG<8+?HYh>AR1-@%v%C!v6k6U zLmgv~&Jds&%aWLEI-FU;n_R{!$&?bVkl6&Kw}g5*lNSWi=<}(3=ItlX+H3U3{9>Hq z2mN`sD+zeA_j5ZQ7Jm26lc%I$S%8=U1;J2QrLli)Z`_}fchKMWwZPdxf8VM?pkoRq z?1^Vg5LYW4{{4xkTyrqD^@NDPzpfy@`F7;~A97AnH?W|jHG0r5>WI~rT7ud>9>-;+OVB?PXZh2#0S zk#iWeCytuqn`NH-k{Nf=Gge+GucD?z)$ zxk`S5G`wf`-Pbg{gz#4HR(2m=sgqAhLr2;R&(m_sj?{uvfLUI_JR@?-w(Qryz{7q! zlXBR3vS-C`*fslX?^p1p@7go|t&bl*d5P1_D@2JS`1KQRdgVgsA<#RxEpb%aoPcVd zrTr4($N%2E1_A_Ml<~YVK~}sM$;drw^}qzasD{;}dQ~LZnsdljNkErmMQlS>HQOL? 
z$B`0i;t}`PaKy{E>@-hVC$&Q;g6acF5v##F$}Wt|uj=PW_hbB=JBk5I+vBkn=s90MED#gT$)plhw1t=xm zR~O|$`drD^Emdrnl~cH!z0N>6FK;w&EG?~vg@g4J%0kGPOZvWPKd5ZZ?MygaoyP>Y zBsdZ}8)^>0N)eB}5}(mzU`BddVNmf$Qv8%sFcJCPGN>r>RGTOW$2QgZqhQo>`^B?- zw}|5qs^Fs=?w!J#-8*=+)6WB9-q`UU)$+x=GmaPLiJE}|ByC}9hI zZ1i`+t*+x!VAEn0*e(c%KaeL@lyT{A)Py-a0li9cONlBAjsTFa1k@Ehp+}_hbNGHIU^7tiAu?;7%$jURjN5TxeFu@=EA`<+TLsvL~NgkPH>L z`(P=5iU%k>T}lOBX3HXFkP_tO3`Cu!h2Xjfr*$_t50Uq?Bfm;3QNGj33E8jAV9CnC z4{Q%*2!ctb14=jr6P#2WaEmfow$t0qzI&g0tL=1rS}6K}9FY$3B=sYx6D6m_$~iOA zHsf|maWu@2UpY2cT;vi?qM0toyiE#pO9D#$ZQn8YE&Quu8c8&K3n=GTPBLXhY)~hU zcC+?vDCzH^5(cF8u8~p06Ol}|%SS1vAThp~^j0-O0PLoCf)6k%dfrA%?^oP_Ho9pJfCX7b3e!-3B!Do10Ix`0qB)~*VYglYX^@NOOMYGI*&vwJ9= z%-^x&!EvH{uupszJe+|v+kx`v0PeI~9%=6L3EqdOpyN{7&T{G6A!py4FNL-@(gW_r zAOt9aa2{j(3X(r)x>!A`#APMq5tn19ea}jX`57-sn*f7&Jph#^9>DRYP%eQF8$OqX zJh_Icrb@J(P3$E?kvq;Tr?7Uk-`jtu>agRB!M8yzDJ#lR$l8hI(9}%^8ArJ-%HU6k zi;1rKDi1K|!P3SSqd@9%=q{IxeUxJMvTt@tGe3Aw*b3Cov>KnIwy;g|X@#VSI(|HR z5kf$Y_PmE}zr;nf>d`(5O67JfdoSHr4j;r%2a`C7v|XNGVkg?>oXK+(mZHXxbstgX zZ|4eKOdsW_%JqPBt0-ub|i z4@LET0a@*9r_4EYjcw@&i!3g#Yv=Aa0^&Qr@wt_c{cZrgD+VTmK>*Uj@;->beI-xPpps1U0?=OCo%nVtI-M;jCbEhdJ}h}fJV1IvS*YqO2AseU$6stY>;veo;Ag=4nxU`(FLZ>>_0;^N7qm86B6bv9_d| zq$BIHD%%m?y;}#@rm1clHSbZTNvC{{VmqY+3baq3B+Tt7a?MV1)T{~b35RgeILO$N zO-Xf2w-&LvotI|PK@P0=x{^Fz#)WVL@9WqUDuaL)1+Jo$+mu9O(>v2nmG(eQ1_rm3_a7K9mv2dG6HDM9%9Bmd^xkYnnQZ~aKDt@v z%?tSMd&4|RdEhm&I_R4HcKj=PqfGqXzxf`H2kaX?D}7i@@BQ=F@7~XP7t^nog-Hm% zQoqqzwc>~_=Muy>tLV~{R+u$Oc7@HadKNl7C-G>BulII{=^evGLf#KCsi^vms^?Mv zy}d{RLZdfrzpo=sT{xSWgP2ikDwwSTk`jd)Sk%sM8kWVlXqZczEX7#bPmW+b`Un(%dswC4yRnxFZ(l~;~g&3JKn!i!N&IA&3D z6kv=@ryT#~`T1$COqMxisL%y?PmqE=rTKr z2PwrSyi8t8npt}MP6a%7`=-J$=PzWys>IfgCMT2AUvVuzl@j~#CVi0DL;un$f+GyC zy4)+n_=0QT(OvKI#N+g!R%erNzyr4YX#UV2t`)#y$0_YkKe{)*}Q_I^+# z2S`LUr9r_mkD6o5)(V|&%-Nna7WEXv+Hc$cWk1@s0Xn@b3)*b3%-m~(jdKFZht21$ z%}r05$;I0xPQISed*l1*@YjpMXLV)j7Q6lA9qxYj`)xw|VsPQdnmWl?xy$<=a2%vS zXcfU(#t%omggl39+&OBi5$?*EA2>ACY#mQj9}IcD-9z%@q%6eYWJ0Y8iJ$T1FE~Yk ze;c-g#p+KkG1)U~meX%Uxwk00SMQI|HVB7Dq2(&{Ua2g_rKc1(!UaGZuVA!L(ekbI zPy*VmYTSmi=(VU-6{5nE)10(JN~MZjb9^yGT!)lvS7Hqhia)BJS^L|$apK4DK*jVq z<|tk%{ipC&pi?mx9^<=|9`a?|e9x5DZ??+SXXG>nLUQ~tN6T^dd%G;!j@#lCg^s8n zxG$6Ig=3XATzH<24T6#^_BFHUImQpjpG%Fc!r(+NXkf>xD89imb&s&PqG z^6-waGCY}?IY=$V?$VWK`rP*4c(+p8qHTc)n4Va>xMRbqRL8xFp_@-VEG`77!mL`W z0FUI>j-S1!;2;|dowkrQ`EUm%6M?M*)RmXPj<@E|nQ=-S>2uEEusKn7?&IcUGUEnl zk*07n#)4WnDog1yttZz!)IoX-@0+IA292%O#)?VJd|MYaF$phU$mq3F41X|#S=jbU zNAHb@31}+>m`LBOJsnXqYwn`p@Uir1vf9@E*``Gd){rxPPCR2wd>sdH+bmL-o` zwurn&wXtj;n!OJC&(pLCd974W~8Rik zgbUVF`rSI!!1!>hm8hg5SggnlLx^S#paFA*{$hrabI-j}Dur`f1*Mr|*APPU<9weK zNG<3wS8qclQl|qTYn~B^&$C-ommZ<_*-nTjZzl|{aAn#9>S&-O@`TDj^uZOHVUCZC zcCbk(b;xja$cBarLHqhdOJeg9To`o$I^_i5F$>hT;Z3n;9snA)yvier^P-y54RD8* z8G4j3E=_2}3hcdlV>yQaA*n)g<(HB>kw9$Ld^wKcWC}Auaswn5Q%5vUt%;1r2jM^p zjWyG)%%0Yinnu(!NJXn0SQ%l+>_L5vL z_MSJ8yjR|Xw9=+D_i5fxG5pyydi9Zb!1BP53hwMYprvj#of2Z55Id|ETx)%s^e6Ak zpiWhM$gWC)-EjmsIsH_ZC`A?gWP%8D00R7Kc4>B8D&_`*MO?uk4p1bo&Vf;oJX-1s zmSZ^OIvun~!9^{F979n(mc&r!IHlFxH$|B^-^;~R3drZ41U1w{y*VkU+TL3^qwCZ_ z)xE}AnVbJ10aU;c^(@THT?mM5YnF^i9`j7s!xeP9OQz9t2#oY6yob$;*TcgUMwW>DJ z)h=1jU*-`0DTlD9KjUs_CP$PGVO_z4Hg}$^=p=}KQm;pQbU6IfUjF(2`eZfz_bTUa zWiJku#OY+dz-@qgak4Ew5wmkLsESkZY%(?i8cZu&|Kn>52ggP;7opndxK@-bkyejm z(q5wNXgLi&;Y~(SMv0j- zu2UG29G|st|GZ&@&W*XPl~$>^o-Zf(fC-N{B(@Mtn~vNgLD)|HWje4fD7Gq-%B7yo zSlRh0UMvFKwURC;pW>i8qm~rA=*Wh4SOMDKq@*IXU{5OEJyI3!0;A|PpkPlg;(3`Id@F;n_%J$_S#X{sjTjpg%vi29GlW-rp`1;P z!=4He^cLxl;k<5XVP-SoLFD)JeoUpM9tUHuM51a+U8B;}tV3lD99oEuE9%P8ebOS` 
zZpPL=3l}{!OV>Gr0;0)?)-<02FczYrE*AbJ6aKVo$fp$^Fw)Wh2g7KrJY#CgqCs zN{arF#(CVlS0W4hfe!=w(q*kDpe^JSX)T*JKti_2OiA^}bOHsWMq?qedcg|&vA{U3?f~&fY=%FH5dNkC{{V1AG zWA#e!V|vISpcF`(EYBdr8NLQRk;66&qzC9MuJ9sFTG2Yv<3Pb2I;6T?H#4~Xsi+4p zm}nOz5

pT1==%t03Q0N6e1FQM^5HWzNO&9FjMX`2)IH<$*t00)Xcz(Yi-1h3}U2BG@}d20eCx~;Q=1Fv(WDzoJ6Yg4)rhLiC_3P9Dyf9ApNndazPF;PLfSYWp z({j7=Z0QWVC}|z0=DuyV2Wc|6u_ax<4W-%>yT9?dm5=>yFgDe1EURFbBJbpEXY8{P zVT;vGTW(0mey8o7(-2lqwOXJ8+~Tf?E76i7V1ZW`?o!dLgfLybo=r%S$`kj&@$%4< ziSU>ovt*_(!!iY`0uLs?K)R-kILgs;6zz_gm1#wwwW>eeY2+VR6?L^n=$sf!=pn9+{OpcO~S)m0ir~4hP7jOUVCo?mAm?z&DgL@ zs!TFxAI20+tK*rxpz5aLKI$3$oz31m!tJ&?A8*h*d^&3K?(VTH4aUnIl#&dzIXR^n zMkI_xcb}9|z^VW-u08-R33P8cT0LI;5)Kpof^g6vk$p+VxF7dGm@)I^a6!Q)zJ+xo zA`@87L0b`w<~q}CUE$FK$kVK=pmseF^#$EgaOBw<9x0xKqSl5kHzQ^eTa~^$tUa~I z4b?q{EouK#ny-iv1amFp|-z>8-h+scdA zo=%8K%3>nOTTc9W?a4oW!)&dziz(%UqZSd5ISA_g!p!Mq(#l6f8xvM=H0(1qO#*ad zF#<|cDEffiL>JCUb)eg@Z z13%{~eF?9#i`H955!Wv&ei*kn9)T2>3C{bL=X1*)Zh{4jn ziKf!upaUMPsAvNDO8T40mFgEc`;yf2>gMLk++0;M>NY$`9Yv{?%{Qk_rODJwXich{ zSS9@B0@HUoB1?=h!g>LSgVivt9Hn40X;psh-m$b%okxHVSBY^NUM~xEU?wBXiT$}d zScDLs(7252|ER-6X4kSnym~L&h1H#{EDF?n5{#N$HhjM14A}179DWs@RPvgL+e^yR ztCjjl_o}ZDEA?>UP<^Q))}v8~#@l>XRd2KEyV}CQi>!mYxs@xQyyR9Y^p16IT=gcA z0F~mm7m7my!O^uykn-iPD+!msaf51W=3|7*hHl~=`+LpQ`m}%>tb7|9^i%VN)@L}w zS5z)8OL`^0In(Ifrq@YuO-}WGmx|Dg^KrLE(mFRa3+{qZa23 zAOKZBh@t8ZchWeZLPQeBb-MLhpt1rajqlpGu4byTT07Kf@mA)Za8RqsXXwj~(w-DW zSs)x}02i#n6w5k^1Ws`k6=E4jU65h;+0Be^upbT&-UW*8#J*n+|9q&krqFCJX~L~- zS-psXtWZB|L!7La$N1eTu*%8bBIq*nsR9u1{sI4W%OLqQpnS9(A9Za#OelFnzEGW| zzWpV%ZFK$e1U5e5PIG8+vN;vv@H9gddbFiV7BwMNEWEYOh2`K;wuTPM%0}3!Eh{Mw zrqux@$f4y}ngN{|SfE-*pz85qbRKegi}oYK15Kuqld^FVgef83F5cFj)O`Qxt;tQ>gX8#3<`7M~BEbJb3i zp``O<@VFHO-OYOABcpz#Tr}<^`{=gI{u(6CX|LJG;rL@YS}2E)6tLN8wYwbQEVv%t z_2uj?FXt<(Iegtau#{krIRoa~4YU>gt7~TS*lFS_rc2+VyY=9g;Ei(Lg{uDAkN3a! z;kQ2f-3XThN}bmD=xLRYLV!O31#FlD2r7{O;%R{9mF1|9{jqA9>8pQNKhv!J+vZwa zC>Oua)pu&NLnc(`_x{J)+NL*bkWH3NOOvALV91x6wfganDA{s8R=}Sf>YO3YCK=VJ zX>{wxtawHG=V4pR3LbxD=JROnW3yH9`^n?(WlP?pKXLllFq)nNg!wH^(|^oVdSs7A zRlrd^VLt{bf(EOnRD6f?kXNjcfB8Qg@(A_#QRv%8$40*Sn&OQO)-=7Miwdf(uby80 z%U}3SkA2ec3gLe5z)WA?x@9w0PqzFkoAfckQK%$)Rrz`Kvw>S5(Xt=a&&8{bt6$)W2vEa2n})Uqb!q@A!?z%8CcD)E^@>SfZ@!;AV^T4AS1JVFEG+ z_bRz$?yZsv?$o0_m7GnHq5F&;;j6@vT*Wc2QDjy<4k?`7kO(~Hk}8!8?5B+}iE7P- zf1JR{|5_ed+!d{2;2(9iVxl$%^98&R(mUO)|Hg$^Gm_YFWGSAx+>3Z%FZpaD3j+{xG= z!SyCBayr2q47iQYw7O0(yxv2rNqegzc~xjx`eVA#q5R;J6;lAG6h+b#=_QgTRr%>Y zYks`YG4iP27O51SD3HQI5$oh{r**~Gxx7WE&_XKI38Fq{p>KNDhtMD|=}r*Yvm`E; zrlk_Twr2gy@vJqj6*B9G&d>%&$fa(5X&^=6ktE)cbq8!%$$_$B`~ zTjuBiw-E~ywBYzg&pPpV2|*Ca8uA zy%t$KHK(PN6kL-0!c|MZLHgxH_X&7KEpilO#qJjPer_%6YsaGd%*wG@ve(zxzFC(l zhZxm0ZF~0*fgPEpT+1(vZ3~X^w;H$ENT@<6VWqoHIA2?!M<(pNxtv8f8*EBkOwnPX zk=pjmf^@Lji$D@@Fp^j-Hy7_fxm=Sw&VBqc9#{`|-ioKK#~azdOGDfGoBn-RuZvJGTY{VQN~swtQ>2?(=^b z_?Q2VPO?q2MpQNEKK$BGYg>0H4`?s-|K{IZQHOuE;D7ex`PG8|HFv7A_^-W8pIuc% zM?EW}8lvQvhsm1drWBSjww!;scKKPQGIWqrI*B8$)`{D@+`72Hy~|SPG#HLm8CVi8 zz0=@6Cgg%G^i?kpf@f7=f|MQ#A&W;#1y{663n9g9P)z8sb~!x7S_{re&^^jbwHMXO z2lZfhOfc)#Fww(`Y3<8plOWhhSsT>+yxXe>96j7hBC>`3aZ zjhu5^MwNGq2IjSl$_d*}y-N3TT3Ml1`A@+z5q51;PmppYH!~l(cbHXGMnnQLguYON zkj{jLQLOMoRWFL5B@v2m2$!W)Lk$VE|1*+ZT(x5-{JLPpO^m5zbtvJv8x!S@JK`VM zw5G~)$d`m#4pMwC7nYd~VE65%zR9WbyR*W`498w5#*BkfXwK%-kg|D1A>9X%icZUA zhJL05XP$)T#=L`FZ=SYa%aqq4DXj5vawpRwq*^_z7d}!sw%ny)CnXs@2kTd@goRN3 zb}yvQGP_EEPtFqZ`Q^)(jiS2cvb|EMDOc57Gb_VpM+M4JbB>b7!$zJDs9{87=!l(r zeYVeP)Cr}z@PtAV?M!4rHSg3VG*!eX<*;hg%aQZ0LD18S38(ay@>TX?1ysoxr^G6~ zlypRrBJA1egb3EsyD1TJlh$aT9K)9r(ED@X^uy7fg;ymDYXz7V6-}8f#tuuZn%>E? 
z*F(4FgOQau08yW6$yC?3!{(cx(`AZD)kpPSFpa9d;qdG!<$u@uPP8bF&Cz?AQG7YJ7RA=gb5{;YAsAPbhbDnfNqXyZ!?=__x zPp*&QC>m)Itrm91^PJtlTHDQuXA z_y~6$nhL>1r&xt@+C9gUFij8{jI0cuG#)m7H;Q0IbcmVH{?RB#Y=q-TUq5YG2~ zNvnO8z{yB!g&aqBR7l3o;3Xo7ZB1#cG6pMsDYvT%JCCC&I|b=>lk=r6z~!qQ*-YK& z>}%Dd9A0ACCczva)CG?m`IBV2>#;3e+7_d^L6sVtpg`<8guY#^JV zs~7P=Rl3V>+`8oOg1|198kne@KsB={s4%seO99fgt>w3N4GAarX6>s4@UlqquGn2t zuf~m?o2YZ3R3p~?0{&Qq54&J|YVX9ZkDI51hJr$+GLg6Hv2Qk7lzKCcqykTegEB&F zYwxPe=v1E1T%KO*>H+AiI^zM!s>tU?ab-SsS+2js;?mvEi2Z^*vU&z-1~eiA5aI4Q zEU}?8Sm$JjLiCFX_=%u1nQusldgJ6WeG=g}t8l_O^p_E=5SEt{L3XJJw=X6(TYh~8 zQ=n$iK*iV`ArWxhjyPYF{2;=Wa)kHcpi+ldcHxU2NL?z0)#a869(Y)QV~0C&!Qx^b z@E_S?ow*iLHBc(qGuX;(WCs7ewY|#Pylgf_EqT@y?MZV1>6(!Wt1CuBSKzi==jwWR zOFg}Si5Dl$D_v|0OAGu6eJ`VVO6l|&YoKM3UZH+kVX2l#)=7B2GD6}A%B)+cOo3CJ zwOg}g>h7^a&{nn}G=Zii^pKw0vUV1JvJ5MKMk{%(XO^nS0}+4rY@=Z*t}~~C{ge`_ z;*7nk!OA@r$-+p52&;Y$Zxw?_-MgP@H<-nP>&}sgi;AI@m=0F!K3XbP$lF^fBf|a8 z5rz%j*6-L--O_SUcy7QqH>RU=bxr<~P|kv~ray7VI-b@u8#?jGkeHXm<4d~^r4 zi{8?v1@d<`xjH=_{2N~J;NBf6eKie@CaE{HLMh~h8Yp=>3Br;_ShL!1T6|8j3Ue62 zA|qe2?sb1>SCFtmo%+|NciI4jV~3E$wLiXi;hZB#rX2oT#Iwaee)Q;3Ui#K!GSAwU zjyQb&Fn_Lx3HhO29Key;mM8i?dg#6?juS-?@+#j|UA^QO2Pp;nc8ow& zUIIp9X_{liX>spj@Ckfvi?&60cjB^(qOL!E&e0^RIx^Uhd|$=29?B?M(CG5n0;hE; zsZeq_iMZ@kv2nIWUQ6T6HBe$iN?rHioGB@$4+KLzRe1psdgGkW7C0tDN}h`V zmpCU+3%vWvVWcA$ZiMa?{>-hX|BU>mgm^U`9AK)rK^0cz_-lr5JIR@NCnLA{TUJ|= z-YY-{A+|2lUc2=TMk$BRiIC7$?)2$|jw=zEwwsRY6bcUP$Zs6NVMj@b<4m84eV<^? z(Dm7O%z%lt!vEo@96NCC^u7$CBppdgj2m;ciOGz)h9Mv>=5(8BJE)lpo-(KrlO(4- zf|x8^1xppfiwk1A<2EV>JXHb|O@!=BL_`i5cTI`;aC<_tLfoe&Tn+ah2aECVB$6PD}@*c)SKgw>YGLN zR(BVO(wHtQt@t5kcT<-hjhqpyAVtk5=53c zd!Yvsi|js@mKlOMSxudlMCN&0o6-DsjIoPxZT-T!G`-0N(6f6l@ZgQS3Y-W8GlV$SGklpFy z?#!Kn_&lGEE6dPb`?RbqYY*%``J)4OHOdWs-5rOkH)xM?ioRuN0bEpXYAvkQ#{>YC z{}}D*s3eXtLP}@nTi=lOjE3cooogQ!MTzz!9Yq~bt5{N}$QnKQj?2+f0^b{$taEcM z*f|WW)VrP9MZogJyj!YupwW&^Xz_!9B6C)FqlrdR#nBs!Zp*=K8R*ZzX)Q5G&_M8) z7s%x+lyA<|4QEep8Jfj^soO9Be}2_8AJS-d&J|ZaWn;?dDrt@U89OwWt5p;9FKI~FC&K`X-N#7p6PMV;Q{;I^5yvZ^I z5Ol!ersK0t)~KV>u8B+;Y0MW`Ad zoJg=$wc4GW?t)(9yPVFqF~pMAMj%dO%K{{@L5Jwev2z?~Z4pY=So8Bh7GmW+tt3r* zCiNg}G8bR4eYcvra^NnZqp;~}auv1EthgY%jjpp3jKG09?ItbDPcJa^ri=#^gM<0n znFw%4T;jJA3hbE8r{Pua^F3*-a#RaGmL6GqN{ueMnc%|K!bh z2qvTq`G!>94j!O*9Txz}VlQYNsRiWXg(6LZ)jQW|*6+N5vOeDeyIdPgRZaTg7vxItc$pPZ{$JQ}>K z`IlNNhx$;u#j*U*7~*})#Yz`$@9g3zEpQP9<8o)+b1Y8<^j#p5re^)zjwFO=MNRhN zdmQN?{b%_Y*$dqmJHNQ(RzacpmMW4sIeYlHew4`O`0(Mwe4m18HBx<(ig7X z_|w1nG1Lz`wnEDmNQG%wAjDG1mZPGRBcnxjw^QsfzBrvHQk3cg|@=PM+$F1KOxEc3txr#wOT`I53G^=fy`ING6H~+1_pF z8a%@fCMvMR#rEzd7luk&G^RVhl5uINJ)WR?qFni*-!l*?9|iv1qBU1Tw*#G1je1bbyaMhp#&1b9PtQQ-S(C z7c-bIy2?&MxD+^)ql1}qIXjmdotDOB37N={T)p_TEa9~r4iP^@ZnRj_23ME5`;VV! 
zoJ3Tr<(PjM{cpRbpnm+MG8jJ`VW7y~xj-ol9M9o^?Yckc2jFLFouG9mJsL*-neGAc z)WG6|fIf%>3L|0DL#1QKpWJ0I46*R-J1LZ#6=tc5AZ0&pc>X1|X*ZqwmcAMXkkfk( z7S0WA*YK?$CMmyy52|#H>6zpcn=|8?Q5+5{ZACaKqPDjEDtA$W&{@#B>vdj~XXs^I z;k7>ZI3xb|cM##vm2l>)ez0!Qq~pLlwf-e1XenQs;a+V00W*TPxBOpqk}qp^qE?B zJ49$pk5!g)U?}(kjWJCz z%E7ySi|Ry{aP;NdCX+jP|F{lV6JT+L(nY}_pkKUViD)irW?@3vTBtjojXX{bScvOS zH^bwTSk>iI)GH31J!*9rBU7g-OBL&||Dczyf<|2{&#KM;x-jqb%ZeR?Fowd7hmliMX&YpjPk* zxOznOk&lb1$#s)-XVspTn`{lR*2Iq7M1$)y4)53nA2@DwT= zfdy($A~Voq^FDKTv(q0lOfwM`S)dHMT}YR+b{oO#IR22>Bq;+O<0(myb0=(8eN9^u z*>twTE1=m>%!B5Nge7wxC@T&{#Z9PKL)Uni!hM+2D&)%18+SL2gHmMREtAx1(N#yM z%T+jP$CO}bMCyVfhH002sMwkpZ{?8R&IQvyiu0FuC1ta_XYHTtUmI5!SAQJ#|L)O) ze>MNRou!#=%U^$N1q0^k6%_*lIHM?onb3#s)8eYp z1m4@Ae|IsacaX4ylgTeqa4B0I-zq(3h0bz{lljSxFclD^#o(^LsNAXF^v*a-oPc|w z8`*$~pY~;?MsS9=GPt2x&k@y0|77NQaKo z%1s6QmI3$c4e=S3W0xB!^xn;-LCEWW)*y6Ln!AS@Sc*NUjiDaBbT z4nz;J{}hIxd9k2yCn;O8yoAg31ZQI~xVrAvB5-4cYzlcy&S+{Rgfg%`x;YgHZ;}Qu zpXEj~2oK6AL4ey-KEMc8-iDjw^QmwnxKykkW>dS=#W0hzGw^-guFAR$iYmLaUJm$7 zH&NrAOL9s&I10sEV|&1jw(m8stV?ucx zIVi)eIWeR^pv;q!L)=SIx7>`1Dw>Lkc(A&Torb%Yr)3@uRSvTmo{t4|+2tl+()~kH zndxw)wod|uz#=^9EXO{jIoR@Y_3#%wq`eLxk&7+-(1DTJ>V{|}5FOjxOH$@ESfk5k zC*Dve6s0$cAddd`2@DI8)H6^IEZxzICx>ppEAwk-;C}Ki0%S*oXA~}ZsjrHoEX%^C zEVM5-x1N1Ur7%|F1Y7$dr$p9r{Eos5eNJBjRdhL@FJlIjRZ-439?%g(h43LAx#&~4 zVEN_;%`=W5E{vYE5VpowAc_0d*qTa~@>v3`BAeaM3s>p33ZnxZ>KZq=LX?+&TTu=S z&=utrhXxRwa^#%S@eAmwm=YMS7lTK1XrcR@OyBi}j!OzhgJa#3cGV^M46>BjIrxQe zB-~ccjF;m-py=m?k^Pw`>U^pDo;iL`fFd>MF6gnm$?Dso^fZlBwL&u_a&k#JN-YXS z!mQ-UiSMdn1ViDZTeyE0p(cJl{ZtZ9aLi(<0Rub_6~6guy4!RoW^+hyu9TfIS9=go zRchBYhoZWp#L`kGvm|3GHQxt0U*<$u%bs?(dFw7lINrGjYW0c)4icAc@}_$g z`#gv%P`;<@>hRO#ibL^}I!(^4n-5W)&lNOB!Yw*ha`ylgNwFV>DiB-^f1!U0baNEK z;!bLd1Se{z)%c*tZaI&|(W1O(wRN-O9O&%b>R3!@|CvF9uDzj(*wb+dZuEUa{&IdD z_=QFOp*;~c5kILA0wN%9kV@T_E!AglC~1E|wguyqCuqv-3gpPryl=b!%~HsnvgxkQ zV8yZT_V#p@EHAXOyN*$CZie7D@2`(1Sh>odb4^tSmjsT$NzO)jVi@``*L}Qqw8MZl zay+N|^gI*R3A{W{C1lk8f-iy75+SB%;hd5HmZ75r^H=d0$}4WJvn7YVSXPcijQ8YT zc{OilnZ3*Q*U8vYw6fQ^WE57J%{J`f)z%*Io`QofGj&W<{pIW~B})0XUSxUh`qSKW z(6jo7Sq|!|`P;p&y^yP(M()XH=n?ML#~hjVZjL95g9mSaeD&pvoj*Wz0J$T$zw_+ zl=+Ol4Je&xFQKbbFH~Gm?om-V^B~SIYBU)K9GJCt^QyE4RfI`Rl~-H~qBFqo=HwF7 zipc|_LKUgNmkOY`wri)iyQvyCjxpBNRtZ9M)gj#gGYS}@@p%lP=EU_KavkelUk%k5 z?QRIq-TdWV_vO14YQfa~6tF`wh=~lSrN(F=fSTs{5uumUjp{FQMTLmstK@XsD-YAl$?&szHaPmwh=&7z zqEdQ${lR@o#gZBzXwKn|sZaGPV&hLLoMG$!23b&K&Kw2^#%}AiN}(x$qaNL`36_Mf z$l3eUT1mBOR=#IT&-D}<)GG|qX^$w=_jDa93E-h3!t#Ow=2~xd9u%ut8u#|aH5fb zSUq#FRNHv$RocP64yAbpsAm}`q5V% zdlK(0S7l$eQU%IB3oTpj36s_EPhTHY*9*`)RcglL5-V@PJ5bWnQ8s<0*hG=Eo7Qmt z$7YXj^AP{`HfI1D-= zF?rzm;^YdVYN(+1Ptd%z6d}HqTXV={#i&fp=25;@oFs73IxvJ@yliQ!popp*3$up~ zAZMwM3byYssAl^C*KFY#BCC~8VUqWbQ zOl=i`!zQ>V1BVvRMw`=;`X|?ilhJrchl38pR-=(;0~&x>=OnJu@j|N=4g8n_da@-C z>ziPc=vbyymh3>A&r?-lGE0;SbB0sKB)r<>C`)(5{>AnzxX~94V_O2@IKvIy5l}wfid{#?vsIjaEXDMHdu|+ zO?=&tS*07#h1n(FPlAWFkhIxK%Nf9|)b$l%T6iSvgW*PQ8xc$1rx}J0O%qMTu?w|L z%9Ovc16>>;vYC4wO?*%47>brN2!2^RgcFeXd?@3gUk2(jZ|Wh{sMOWv`#5}pgrPIj zjO{{hHB>JVl5J^6*Mdz&-gL%MKSbkOzYoR^;NXREpNgCo{1nv$%lMk0@C(l6lKoim zGLsJzC$N8jI2^;uD}1oA)16p_J_@?bWOb)ts^_sRQOpQ9iUkN<$~L6!tJfaf!B#k0 zoj^mT3=ic~RhoDAn(YqTx`j{(Sh-EzKH%l*ITe2?{t)Z_;kIh$z~+m@KNwETK41HcKp9fc!3Bx-h0_ zQ)}`m+g=Wzm7D78GS4VG>DdvaoYS7VW1RQXyA*{49Np2Br*b1Hs8GBk(KB>8QQXZN zu4~7`{GM*-*PqezS-85%ae%2|CT)Rv8jd~g;d%{=7zLF28fT9ZeMks#qpE2sxPed9 zMq^DZZN6yxdBPP)CPVTbjn@?d*_CW6z9NZ0A`|Yiy@^IN)~j6afY;|V%-_FIDL*xC zy{D7m3@TM89=j!>nu4xu#cjuA@_l(@Mw54q$%TdrKk4fri0j9uS}R*F0gn}Q<16D2JvTJV5fkAw8~d$=Ud-J zrc@~0?)W)6uA>k-KS15eufiO6ptUSQNfXmc1`jg;%`de1;mB|;n}}g~Ubj{#0DTS# 
zqoj>Df#T1Gx`Rowt8z0ExV24B7{u8Eo|2CAoOuq5Hbg?dQ_NYMM1pOWG)k&Zx9(ta zO;T(~B3{?dLx8j#yVnTPr0dB_Ug4P#9GN@4CznoJm=S#h!o9XdW_QiblL^ehi?Kqp zG=^0f+GI_C*U^qjH97rUC&Ju&bNYICQ@a6)Hj(h|u4IAo_T5y^fz4BRk(Ms*hW_St zB@@J^C&@1mn^)WlKZ>UY?NVN_&F`Y~?3h`mi< zC$HdtN(DhS>L4pqfh2iB(t@^w>}MjOTSS*Xb02O%V3D1**sc?(WKVntf*?tRAXG_v z(M-iW%+Cu`EVyGSrg1!eC#p9;Q{?MLA(5acml6<3kOi<}$5O+0EFM!rxs-$F!!-`P z)jlIxoBLSHs__)jg!?NFB~qlINZYn2XB8qJfuEpjNI(nT40TyLV5Vsbuj{C^QK}YF z4J7@L&MAUaafq;gf#fsl3*{ume|mV%R3o0Vd}W5MX9(-Ujh)W#Lf!PX_E+va-?&VJ zTkeaiSr#*0^nHFef3RH5O|9Z^-V`x1 zB>s8^KHV(|4ox4FUaf+0DSU+p35PTF>$Ma)A)N$R@S(#hC)aP!7uU#PXK}YJ!}FBZ zX-IMR=S{*CDsxCUZThNMX<4*}OOF`qSSg>D&cY?-PT@RZb`)sAcVkpWYoE09O=951vt zRpMy6Cfpym9Gx#8QDwPgEvo?ZP#W zQZUl?xaGltLDEUwQK*l3Q0_s-?!+XgR4#G$&eOs`O^Z^}5 z=$XDmdpmrdnmMmYUPf|QS@P=v%_yw@&2Hi`hdv?_%A77Q)v#8xN=hvSNoA^K;US4Y zxpK+fmr2jY^$~&D!G1=NRIX#@w%a7u=RQlfzmf!?35bMwve2d5y^7b<&XjLc;Twvz zGnrgU(atd@De(I|)Ikuzj7wXD&!Pas%d7bznRIjTmC&!te+oY#DfkcvX$5N*L+%$5 zpnyw)(2W~f#yh-eYi>Hw&lx{gh=?EQ%G92e6hBPh!$B7xTW);DRoQQB$En2wux@b( z31c`Z^2i#nBk%-ouE~UMT)%5h(z`c9p4Yr@3BUK8i=IR$)5O!w4s=elX+Z2&Lh5&vZo`=C#UiD_TLx{U=mvx^Gu8;Kdy+?L1wkOjb>-F`JlPn5@VOchwy2u zf~$hxb^O>)!?2V%kS(9l5I-LW;WC(^m>2|P%v{U?h141UeC&l{luFALDTg;(Qeb34 z=t}%40yoKImImD$9Q?k7`>DK0$GWVpILc0J$WV*gw4{L1{>TuLhX1+n?`h$cr&tgE98Qzf*M&&FzcxhtM zF?dEbQR>>}#13WR2+@_|G_Qf=SC3zM0vL9-n#zbA^N`Blw0@pJ>&50Oa8F$z`+he;$WC|gpSx0 zzaSJ|Qf`B#vAs7cO9d{=m8(_4O_>aKe=PWlYVy8mlPh;k&3^U`;ijM$$0ya$xx#eQ&(gF6$q${He$}PB{&nstDCm;SeXDjVb>%W zRD-SmdUI^}O*%0EjE&}}0WFeR-L@4Lnu^nOgPBK)n5w?oa-xp9628v8_7Ec~&AL5- zbR`i^htt!=rOJ&ef!UjV4>^h$Rba7~{rk$k1U@N9LT8ZUM8RyJ zMDS3wNi&G6o6<(}w!TosDw1@o`V|0fdX0RZ_pc=-44^Z%{?u#EOZ)<2I~@;3yeNtv zy>L}sfnPC0qQi(`*hjT~QIIT`mp+viJ6&CKn5XjZN5d09sNBCI`%KZ3K-@lyd%%0M z%OOgMx^J(0Dj$JCm@Fwl%QB_H;{<2s31G~&%U$VOfEw^D7sZpaoDoIB+@IEx;QEj* z2qNX~zJK9$vaT@N4T6;hFJZTH>?$ML) z!DPAg9G3uofA%x2>$nXDzwgnL7o3l(`qMo*l7r>AhaSOIDi*7vIls9`zU?bKvC|?j0$)&Mh;}BZHL(I86a|A;oNdIsD1>4S84YyvuT^$#z89*}=o# z!2MOuihWTYkZT;1*7?(duoe1rTcGOLm`$CN^$1n)K8~h8VYvxoPlUBvXBml5;VkWw zrO0OB=WOtzK?0REOk=X|Mpd2sq*$|xbbJO1yCVu##tv! z((R=F#=>5h-9{7fK{@iRwqK!&-GgEkK(5VoAOUL%&gA#X#r%@4&yjN4jyfg-R_&VS z6$cEg(-w%?@9e){t|CE5JvYt?ZZa25YPx)uX*|m39B9tno@^v{KOk)bq3f>@XC%a0 z_;-7H>2>cG4?3cB*P=r9_O8Mp-R=pwX#D+!Mgty*7O_wx_w>dYk$rHxqK0Uj+l*Y6 z+&xB=oEdL=q~_F;xiYU))q{9teu4q!t28VlZ|AiXyAa1b02yc?&~tlrH(g8d;L(3? 
zKN56^oW1RP+grPv5ASa+QE4Z?c_t;kG2?ty|FUlcMXKi?{l@K&E<7JyY0ow;=9i<& zDXl1V`!C$N@mKuwyOHSxL!qXSJt1~WzF=|{!hw_588DNDy^78i#>;h+s+&?7^&dH( zWXi!qEv)X%cUJp%q{F0P3xy>w2t+Mb3x(v_HWZZ{W0zeuJd3f3bx}#yXkHpgX5Ms5 ze3`>*_B~r%mqjRdxmbxorUnx%#@5RUWob^bnS#n@`SUEuhk=+DX zB^wh8ox-Hc(ajhzg{AAQs{_1-*8l|2ScFvzkM0;D&5lWV_9&a|xZ^e+2p6XuIu7l`CIr1G1j0X{ zaKjp|;LF%N(q+(rR&-lcA3b@f);DUbsns2-_c3KZ;)6p_73mz<4tJ$KA&Wd(WsrkZ z!Z_jSTMe4trUrDU3LXVO2(J|?VH>VP+n7Chq|?rnmJJx0kW~o);;(ov}jT z3rkX9EsS0+IZs!>UI1fkc>XRu?LAYVwb?ZQa!oTp6_pBb#Er~6%q{>^i+5uB+aX*z zK1c?>$g~xS9jUg6$o5CMWrj-Stt$Vt+FpC7V*?#ZxAv0CgPe_3`0?P$5w$#IM&W$T z;5A|Fu;11h0EH<}o^t6`e=;S!!lCwN+i}@oT4)w_B9?{hKd*wD*M1y~KM8|KNg3O} zOw&wKEo6!CdULeXDh>v~7x1bRB4KtCH|Mj2kK+qk#+P$4pt*LHmh`;>iqv}GN+nc6 z&#+|doMfpayPf@2Nf^??b|>hHC3RUK(Jb_f*4l|oQV!7+wlZNK+;s9vYl&fPIK8t* zVlUC6KdG-^H%tG||M;H|Y2D>2F88cM@?-T$t6HQ~-dIQn&N|Y}uQRB+bJ|5V#t2xC29!hGUp>H~A2nuWbuF2aYyw%V+D977O2w*+4IgaWTm8;gxI6Sw z%eZc*F>X>2b}f&J=hpSGb9EO*xfDu>R}XxUpjQ~h`&?yYA25K!t!i?CwAnN=hF`vV zMgj@m_WdX~B=auKo3Qf)U^kdJhe57Vvfk!N$*sah3Tk3tKtS&{U!d7{a>WrXG{@^o`Q^6YLAu>B}$NE*ufO}RBxs)9yZ z>DYiFgWaj4d;n~j4&Ts{Iy070fY{B1chm&6woHX4`=ILvt|D&oc))s%LMM-L4LZUM zr{rfiTE;iQBVv~rG@Bq#sZt{$OJ&MCdC)3hC~2yOh3;VSM3n7zRj@b2IN2ie3sNgv zC0)vQ?ys%`s-8KPTb(=>_x7-;4>vYxEci)PWzj$txx-YTl_+zhZeA51tk=k~eQ#su z-o1Rmf%?~hBw4l}t1aoO940c&->UNwfBwtOjh%F73RR0p_p)~6zPhw75kPxZX@O|U6d4druX>S>ZY@?|?JZy^GJ zH~o&?43 zetX8r?Q0epH^2eJzp`a(p;SkJEY7cHkMCMB@rDXe+M$7w{$uY9Kk%-N^ROD{I|_R^ z8rG-cAznu)$Dj6# zsHT4L^5>K3;&Q!FBN<@79j>jsA{j69&Jt6I*R z_JU$iZZQ29Z-)KJzqmuMycrL#H{X$+TeQs~=J4!ny?Hwn9zIhS4gRB#iHvE>gv;74 zoS)vPPt#$0a?(;U`9c>#Uj@WNz&n{%W!jCR5OT8Q-6#yFzpO%HCNEF0FW92G7{?zr zHhnd2UJ@m17IcHs&#zk&tAHb_l-S#7o(`_h2cus(7KJ=M-)Q>(!`|rh;?Muod%8HS zj>R_3fzo`x_~Z|nvL-aw%|nP6t!>u#`s6O7FTvXZu-=IM=u{|*LMcP06>aWj;OE^g zY(IKr`>?b52gecg&FEYAx9;^G>~3yX^w>MO82(HSuBxtk+D?hJo~0@~C_y2dmCb44 z;;7^pro(BR$`ciFf)#TG4k0CmzuNY?n{FhPb>GZJ7&3wD{j}Z? zEg01|rGQ*pvA^}`#-nCawZ3rN)vfi_(bKohqX!SS?(J^wY;KYltXTAz8+>F?CYpoD5~KpB?N^er)$2Gp4F8CG-bfDVM%b7ix5-Qo}kPw$pzYepYgo8norKvVHb=>k+14^U=dyXXRSezvC)s%wfWk<;L}? 
zQAe2ItE)ClEXfZd_vg3>&;Xqf7*^_4OhyDRs;WPI>`(;Q+;INi9m^3^POYYiJ73p0 z3J0k#9oxPug?69r(&*^TYU^I$xZqU!os;X$k2EzH(&2T$MW<}!qW@*cZSPin0n?Z- zF=nUPy754HAj#f^Z{QeIa4>%$JDY#fiCo^ht*xm)46W5BE!5v=!}68hzR4PT@_8GQ;IN$W>@#uc4Ust zT-|?Yq^up?y%9}0VVnJS{41h`GTrOrbG&b7zkX6tDsMelps(?|_ZOt}*)!Y|6osss z`cNQ;h?048@TW{boGi@IEY>a1leTJU)zCdPANi#LW+hp8j0oD}t^1q18=DXAKk&KN zY=B#O4nwxuVgB9w3Fd0P*RMYwzjww9<_HLS80blq@#(K{CU!-OHbF2|aj zHb3Is2dnMV8k1;L;8*iZsPY+#<4f=5!+C7akSVWP%MvHAgZQc5rq#hz z(PiK^8GE>Se`E9E_9KEzbJWY%?&}tM7NncvvDZpQj*u~Jo)XXPCrtLVy=G>7{CY7w zAL9+EH*W#*NYlD9oH#u({ zy?XKP{gKQN8+1f}^9}sf&l~H_?#|j4D)pXRDF}w6aV@G5{VBX@=z~wM)khC&&Mbn?Dy_NN3EYHzpCEMRdg z!_s8c9>na_Rg3d$y=%PzejGkxw1qHU$7kQxUYGy6MLW$dYc3() znCZgkC=w_)lG{|58Qz1{NV)#8f7$qem_sF=K#5pG1+Bju3i=%c)zcZUuImvxv%;LW zF_-b?g)S;F(EO2oASx!bKy^{XQKd{OoojK1C_JU{C_6H_wfJR9#+(d^Aww6FQ-k(# z^mEhOf}TEiaC&sIw>uwXWKIIho2J=ptuiJu*R~|TdB?r!wYpwGqM9&pB zhkRdnmWKMk-FuBq&SPbyT!a$AFQRmfn@Mqqo`vYTug7L_YJjghY?dO=o36yD<7lV?GD*x9_y3u_?lkzy-AJl!a$`A zagm|m0+zDtJuE%c>vaD37}YSZ@t=-T4`b`@Z+C?+f?iG?Pmm@Hq5EF=_H;cM)6 z2qkG7PH1GDXUGATr7)rB=zEg6&eo$AwY<8t&bBVEF3%EZb~zgR-Ane&$!tupS>qG@ z_P&^$J0xOZ&RA+?6&uf*Ll|_r{23=?@bJo6zy_fN$Kqv18=!bfkILzyOQm)q*?2|y zaEg;idFRXxEEK$=o1X%a5<;fkEsJza9~MtcxZ7KL<&0RG)fbn#g@8a>x3PsKx?u`v ze-J)8NHo2F(yf4ov=gY$U?@B|K?-phu!f_Zhz3&TrVH;6WyaIN1J)bv{|vBWWmVpc z3{T9^jz?m;2_Xyg-Ut8&pm%FkDA96jEe8x6jkm`~V7pWzx0|$)ZOf_DlZGXtVBX1y zDr7HUG;)YHY(U@wAdKhyrqAw>h$ex(!6eH!*BpzJNHb+@(eWYuKEf!%AQQ+Ha*%4e zD5nYPNc2yyd384l%|Zg3T&e8|RyUOC?hB2V>`}6*C9Sg^>k0GAfwNG=1reY^Eta|B zuRl3y(+owSEHIYggP;GW>uKpOk&m(03E}V%`8{k<8Ik5Yp?Y|)Sd`N%CjhSpV~=zP zi|+JVyt}v@Evg{HgE$4YukyYY_^X}N?D@8Vd1N?Tp(Ec*N%#W{FT4S>#WKfEs&+{G zO6n8^0|)Y*SKS(DH^t;%DDO$MW}dmkrxsN!7*oV4H<>OjpM1T|s(F)jU?|t08?^5) zu*&XxH{g|4GbgYp42bhze*ORVgT~=6c3k-Np(`$PVg#3d6NC%mIa?E_~~a9FXBmK#_2r5KDezmns6t z9F*oB)kBY3wA+>Cq}$!<5LvKeA)Y$r91N+=-TEtubDtt12Wr4RlzOct}yTE zkhY`kkV<4F=8UOp*^(Qao_Qirz70poSE3O;Xh=5^PnqT@S#v8=L$5Q}ng$021-kFh z8U0|eL*JJkX;=JNF981};?ul4B`Ela&*tMQ(2za3Ys~F@Xz!d+tY{2eJh?wOIh}FZ zwS7V{=@qmP1lFdjGk(hxa+_&+`ieQ50a?s8!Zne?5>oiia9Q^#Er{@CG1KEiyi z359RgpP;3#VB%C3DV`ZZ-Ng+$)q$l)vR`v9|t zfWlJh+bZgbbBQtzn$4JK4U_9;9bBf2Aj&9?vf{{A$< zIoxY}ni#{exK>0a6b}d)*=Pa^an}82u?S-or;ibw?1x>!^wl;`$oUP@M*-OZ_y@CZ zyuvimtP!a&5@hiNgG#Q0pw2)pee5BvRRt!bwn6REXhiS_1r8QrWW~j={|8cN%e6(6 zK{|LN@rL5Cp#+#}W&m4Cugii&%n%mhid#Z{iwL~`15j<~>|%?{^^NH1do!G1AAB%7 zP8tL#8P}3yQ&ora-nawtSCQ0li~#m)2`uYHV{2{;e11)?8~)#h%Zp{AQHM-PpVWoSOl**vI^}{A)mfz%96ipE$bFP4}VK!Fd51(8o}K z>%o{H5J@<=*<0)aOCqp6!kt1K(q#P2&@o(l-EZA@Kxu5#qldwxT*K?t-J7)XP|{=U}YWafMw^3RBmSMxyIP$qLZnz z^S*^BkF6k0TEABK1uC~PeA#DOpVY1KgOi<|h5Tyjq1A2J5}rxNY7pr8BdFN!k;U#B zPVO29L}KjJ5`Lr&=4U#jo@-KZA^Tn>hO+BVRwPd(c)MessEHTFQ4K2YzDu_ zn=jquaiC4U+SJ&!dy_@oMB-oDT2pk`+E%#mEFgbPspx<2hvm)hefnFU{cc#W4J${) z6UQVd99A2E^rMBwA#qOT3)$~srw@V26~z#&b)nqSK*gNiQhrHoa(AiA^FqOjS3>2m zdiQywdaxN zaHx)jL!ox5#G*1xMD@)4?=tao;uh&YZSUih<`j`(gtvg`dpQo^LGZj#ltcQ+ot0{^ z>`fSQm7EizkK=s?+f(@yi(537-Ma&G)b89DcN7xME(z?xdrF3&E5|SF%W~Nf8c0tm z`?3dJC;-T!si^<4-6dRXb%thhh$%sR`*-;0Zw;ju-QEb&debA&1%-vR4EY=lPo9{Y zn?l#=9f{{Cfq{g%PlGG{4=lkEZ5dHukT81m6bk{-Y-*4V&-`elYo$Jf#W_lS4DyF6 zC|}8FLG{Ye;|GZPV}))n;N80CS@@;QTyaeacC=?XkXWxADO=k*#I5ZXv2jwdYCIo$ z388WX(`~FbSJqc@%E#hB8Xp2yVngaU@Td~PfcDlerr1CWcvXu4N0o93wJg?>4?(}4 zB%*}{#k6xj<>H4^td!pToO343!7S0|&V%MOnz_s}0kn%iWTlY1N zC+9IjT^}?~ITrH2NqU4)@G~x2M&Yk;fc~%lW+9|J|LZ@i81P42!i9o3_zRb)@o~aw z2U23bKQl1MhWPRT2GnXatHi-Nj)GEiJw)dsR7Z>;R}G$_J@6&z1;436J+A`M2>UFcx}6bnUXp%^drsK{Zwd2xA}kjK1_quHJ8{x`xA6erU- z=uZpa)iIhRK0fE8`(46{CA4&GFnQ@#2e1pWr4WIs2V zQA1&?Hn`zznNGq&g3Gu%&8Un5&OvN#C^N*&JSucKq0#uKLIEBf7#3`fFf6=CqFv(R 
zFpHf7Sdbk`1z%@RoB>=~;~@jtJV^;Q$%dn(p`7LGsbfk{sN5!$x3E5DFzdHW`kbkM zu%HJfPmCo@^q4q#*l~yO56$!x_42TICu0?}GMM9(X@0Y-}pliVbD3vvn@>LcZZ9zZYyDKwzP_g6T8!u)m%#ik4{ ze*M2t>KUr}<-{!iz%})kvR1>89rbiL_`v#2zoN1AlFfspD@XZY;Al*D&3gYh)>8u zDi4R$4dI$<2H9TdM&9+2CQ4k+_ZLK2?US8kxYV)A7`lH!(e*A>x+0Fz8MdB{WypuK zzBWtZ$KbnT>t79rM!h0Fgwz+6jBGgC6p5A~4h1Hw+eOl~?8oGGP>e%hnt9o){?A>K8se|In}2wWN= zB;a^F?HWA?_DNBD_!<2e9Kp(G@}Wn)90*f}H6Wf{VUG#01sZ#t9SY@BxOHAvvhA(brr7q@ zip87M5C!b+9%&iJ&;hy%bMG~dNSl#?0ZXTMyal)R3tFgF*3~OMAlDl5;Nn8Ihd^O^{}mb(cR_G*bB#ktbRTW03JbMP|P zKghZ=)op8Kmg*J_iG75xb^b7}F}X&-_t3y^fBn}REWsGl8kEbV90+_}X{p2eZDXRu zRGYgab(8c_==y1*Kz>RRTS+jw_-zueh*qE(eF!CM6$Bf)(n%#Xh1=WN{+1|R~C+^3Y!yBekzC(1?z%f z5ER_Hn_t@$jeP|INbAM0gJA|FHeLs29{Zb%lRO44^Yf91MWQUZW)Qi1rpzSr_$B!z zL&2DnLY6Ur^!)@_OF=N?ty~cXWXr?mNsn80Nn%6R)xRSaFtOOkQFgQeRyh&A4MeHM z9}1L`Jw&XGLQT1=N17v;8mtUR&J%4NT~LJHDcb`G(3ArEn2CuyYlo&=(w z+^1n3+37dRg4>W)%VL1mSFDHJ#)`Ha(zzDzdcDQdCC1nA zODKCw8*8~?=h4Q>D^kcfhB4(p(Yz^eBPd;=0^|?-y}(wxB@XZu@I?~MtYi^iU>Q7G z23L9Q0=Oi*ju`LMbP*C=6s#}tFZ#^ceNKlN;Xv=Wk^=Z}faj&11f56T-)dQE%qkf$ zr?Pm2r{!n+NJID;56V^JPtqk7Tpi54X%cO2;N7&9zLw(ebsti5-b1lo?5a71%a( zHuORE5Z?0^|C5r4(ktw?R>^tt!RHR%?!0?9A2&@fhUhg0-u;b8>{bRO{mx&0Hxj#J z?n@;2Ef$szkMKxnc$ZXUavWq}odTVGYwV$G;zY!L6m)bXU-q%^RS3zfEAS>F`3!ap zCrjlhR6gh*`F>j5=356wbw8ADL@b^x4h)cjG&m&8WTb={RD=*)=aC&E6S|NojYmMb z+iy^2M}#9a z)y@UqYJMbjeMmPX6DIrkz3vJO0rWh%&QQol;zD?=5dBwd4^We2xjzv(3v*T9@^q^ROZbdLS_19q^wq zgnTp|4CHb+ESRr$QWa{HYlom0dMZ7hfm6T%JO^ft4vH1Y-Dtb2bJ~|FpU>&EUx zl43(2qZea2JRW{R zJf)sg0_r;gBCz?UZ{_sn;jvL;iXId5rMZJ1d@aQophQIz_W= zZ~}Fg8!IlL<8a+$&C*0jO%RnzuxCiw4LPyAbgcixC7zG(wGmd=md!uD)^eaKzt*)7 znAaAd^od%Kx)YD1aYUv`0FAaLiNAp-+z(01LY#)I3AHe$fw>9Ud~#KN#V62-4QXxb zE30mQ=8W>ga4O6X<3Zr`Td@u9IeFsTwIRtQ)I>NXz$4_7kQqxn5=3eo(skVSK^XQa zf})z*(%Ke-ldY1^{1J5_-*gSZSyXQ$s`!#SM_n2S5*bGnhtvz9Hlv2Y5~uJVB|RzS zndEldngCs}jzLY95JZl~qW@l#xu9EhO7^#LXX&z~DlBalrpptq#!_B_7KF%k&1+H5 zFM?Swt<`koqDOS5n(VVZ4!@mssmmy6Sh7dF2j28FCSS#Qy7uWCOndhY=S9OnUN^A&Whi#3e4!A*p)_ziGTIfcy~$n>lBkR$>KnN5OmX0hseXwoV+*z(#M zjy}jS2O$jUAr|L+*!c!BE#4E&^j%yfCa8{qmsIlNJ(1h5P=E&7S5zOgrJsjDagw33 ziikCw;0A|yriN{|6@IwO!JVJiL@43@;1v4=83GhoDqmZ+4ncMnVgO2IsulmDa^g!z z1tRzU9dI+*)1`{aIPiIh>q7CFWe_Hpkz~Um$`HGlGmx^RnMJaiUR3m-;DVdW%L_{@ zD_gU8_o?C$A2?p3j5%XebLlnmGjMUJg5*XgMS?^Ehm^mZ0=r=X59DTo7lA-XPcwa1 zoZQnY&G9W#N7XiK*kk$gB}l)u-f&W9XUMR}wHzYysx+rP_rE6Q+v$YKbYevXx2YPK z>-dywhX(EBxb1EaYXzQ#t%`(2=W%g+f>#9wjP0xfwtm%T`7NOMh>dy&riGLK@RV6X zUYm34_~X~tsqJn=n-45)oU^@dytn%u8xC5T5$`&yl?fNip?kPf%%?r7p5v?LFM?Kv&c@+MmFePz|vN z++3=hO48JmSZbPmEBO~gg9~k;-$}WW9q5_E$`7CO4uu?X;;(Nn%fGkcJCx&cRsI~_ z2?Y!vQE`e`-J@*m2joh3ICZi(mikp5u7>xPpg8n376nur;~7v9NTa zV95{j2xSBWx-N?t1#KX(zvB*u6d5%HCpa}~GQn{M3V`OOT|s4Vygd)?;fi6w)zsqI zfq7d}O(`>iYWirhN;*S5!hborxkjbY6i~2ahj5_m*;K;t3sY2_te^_6rN|_$#{uP# zXo}tS5ewYwf=dUne%=G#c)eBimAk0&448s~EPXV1dFT1(G*?1_hdH)e*muK?LIPQ_kD2U@R{Fb|_ zI5Hs%C`sB(PKk@D0L4JKu3oa8n2xa?JYCBa9wTsh)VngH)@%DL_7`U57FPqI*)Z^IWZI{V%maiD7Tok zlX!&r(L(M~>}=hw(_99jeNhSGjQ1)Qhz&*kvb26ldCxxdK)F8Wwh;dC-s-BTEb1%N zdAkldzV+k6sbp2@Z;`ECFWPzPg;kXurmDX!N{ z(B;-*ER>B&X(N1ANY3ZsWm#($oc=SWW_MiX^5weH4wr!pQ_3#(d#b?}IskB6U>R(e z%%rHPydDn67HB0O7}r42g;75`&Uy)m7JkI^KcO2O8b2#Od&pY(5uFOu66H9bw~u$L zkXk~{EhN`$_a^v|KF%5HmRVdyI3bu6NkSy#(8Vwlt zGoBDIailw4g>ATSRvP$&C4)b{?z(5>8O7@8r4~+{cpqUNmOizu*M^ z898`AncjH$NP)8|u7-J2-1OMSPtqLV9R|^rrtTAFm=|7JUUlO&znFg(HaMNck}iMl z4eS?%xf}y~ekBeYAr8%GaaQ~V%?bVfdcu|epU|g@EWv)h;^MefkMuHhzzXumlQ(<( zaqJ&omoaASETj%maUai&QHs%bUS`iSZgQG=rrfUSMGh~nZ>}m(WPLLj_Q;kj1)(kK zjr55Kp#NO+~S9iwEB^6H(?|Jd$V$v!2^kWGV zHXDosjlQwDxl+BVnv$lGjBV8X57? 
z2@IHqg-{Uu)8~IGVi*=6h63^#^?)i6FP*_)qNra~ttWREpv2??9u?EX8plUZodY2i zUK+s)?ki^V3fATA#^&%0Oy?Vx6_idOl;{V042AB4YFKSYM>v%ACQpt(5XGE;1l0J@ zB8o^%;2W<2V4fEuNU81T)*!4O&XbNEULpa+G__yJ2&$u>cYjDOa7h6cc&H0|CHGaJ z49fJV)BWA>PCqHLfIX9X=2j053(I{#*p}}halom)Y#>sbYvDPo_!|$*v9oFO4W9Wl zJ|z+5N-&@}NMlZq5d=7&u5Uz+E$w;LZ|({hB`OC@b~$S}8?f!hoI(5QjXX=F zOBKp1f0~SFZDENOyJOj`eSKNbchV(k3sa}s?GbH;9=EmvE3tm%WSbM?c_Dy~lz+^m z#@yD6R%>JaWrN@Y2Ao@Y0XD?1mekE}{$W^lV-7oWZD~o5e|UeYr`MLY^xF>SGPnL> zb#rcm)h;)Hs9J<;wiqh3Zef$6HO90z5_~P zEpT7Jj|L7zBr_R8h-RkW zun1J1dak>0wV*q0S)IGt<<+YRsj$ceghXtjGKsg5#*^A!A0N)%WL9q=N|1370 z12kc$ z0#G13ia6Z@Uq>`k{dU`;2#BGO-T8>vGxj^3!5nMWBd)fXazj-QyZq^MzI_9HM=V@W zFtiE}O1FRy;BZEifr}au`^?yVTsC|h!;zzL2Wbix*8xkFioGKMV^!0Y7z?fTH_}A>m>TA5x{zBa{zv@3^^z}oh z`g_BEk^Iqn^@aGel|CbvtBFFO&&YeY_~Cz~#}t`VvMO*th(M1kj-;<2id*|S2rBMh z8)W=d|BS`M^urhAul?a}qJQqE_}29Uv(!J#DIZo7Ook+bi`Zpbum7koiA%+Q?ITh= z^gI99uW^yp?ZGE9xEf#UAy{F#3xO@>GjL&$%;5pfrf=SSvKLTLeg&xB zgIH3lnQ$>NI1y8XC~z`=h`bunixq+!7STnvpK|=v3z<8N_n2bj2XF%^D^waJ0etLF zgifb_Oegh;l0(SDgwgxz$!Y!yJeq`*>-}ggDpQT%s!vg>j?NLYm#K>H`G!;z4O#K& z;2Mb`-Ib5$X~FqC{uLhTsiA>WBDwscK1&?#I8{rlsRc zk%RXZoSVF^&d*tyWQebmC0l<2qWh^-CSwV?O7}sJXpR=R)qyou&NsD7j*4wAhiOzHi`rz>-krm-3kPkgr?T9q3OF6JKZ308Q5@DDr+E=#u)*}!k?h7pCoH?|^OfD}YJ#kAQu~?}It;XE(r|Wqm&tyk=feIuGS&NSJ zWVnEKdEZ#%Zrbk0_QNkR>n5tP;EZnu- zVRsbNUHW)QQ%=YBt6SThpeNht&S#SS7ey2A)F;JL^@B`7^A$Y4F=*M$n<@^oy`5!V zHrJ6La7yRv|55cQN@R8~OVre)J5hQ%vYcusbj-#h!qGbGVb>q?7|NxSJdrNFdQJGY zj$SI%1KS-)MzkW1lx(ynWSi}Sn;@G^wXjrXg} zoXd_15Psn|wj9<^t)x(N?yf>n`R1Hz#Lj!vLphCf?6B5^NMzayhhVGWc$UOpely+~ z8&G5s5bKnxjnVs;HFr#~N=k{bt0?ixS~`c|N#<=JA~?(a#g7JsOT#D6(e6ydf1&vR z%$*Px<3Zt0CzR??%` zqgJM_+EXi3MB(FhG;$|&DI56Lzx=bSboy>NOgc6am55iwA9yxsl%#-17;16mJ@C{< zvCm=9CzSbN$C6N#BcLo8l>ys2c0WL%pwl=FVx$ncDkBrZgzT|+lBE=sud zL3{P3qKQC@kNj>L1@GcCDrxCn>jKntHdoY5ld>rpKMpzt1`$2A$pGX@p&K&sMbGSISand2Ara~E zU2>AMQwS^IA7D^JESP@am=$x*?s!F#kx|-Rw%ejTa&8KP0yvr&W6{VIQ?TT(ap)&k z5l$szDt#TQWyU_a9KzANVv=?lYmKK1g%%~sQb1)m2}SW+6GerN77=JLlEEWDktC$C z=`yOcYmmBmk0E$(`Cjf_dJ17ux@yML&zZkbJKV~?7x_s%0FUn?c%S)TNr_-Xi%@H4 z?{fu9QaMKEK)K>jPz&B8s>FEfJjf7i^_;{}p(n$;?&NmJ;t;3lx+#dLx_foE1AwKHcC`F~1}K z_|(HJlLyv#;0-sy1|ViBvmO|<=r72nBOjjUfAQd#7TzBSpYWLEO`ALtBRuSFmKR%p zpsW?zm^S;f-rz*6{Kmj!ajFJ6p^qe4$TMV6InOq_HeT=Tc@!2h(fQ4m3RkA8#3l-p(Qma;$oSGTap+d-nGeo<>~|y9Qf8z9n@uEbnmbWB z2GQE01V_QW;}grSZp?{_;xA~~RvbD8u}CoremKsK@KNi9D0)H}MZZc<$h$cB>&=0{VCW-V_l-Z^J(zQ42V@;n`cmT2eII(93d z3Z^dE8qv;Q)0F$ZyJ@tFKaw*QtASTJ3CfsDXgJ-V^2>Sb={t+Rs(mkTOc)OJlEh(U zamqj=)W)5-FnbG|^?eB7j>BVfA6oc-mbRCcS_)n*)T!dX@arm~INc-_(zhkXFD$QC z&u_16z_oJrj#5}|3p%1D)&v$0(uj04j#!gl&Ket@WRniB>yh~I&>Hcnb)r6PKr<&n|u82Fy!{CP1xu%lbEKQLmxL29u zuhfBb>s&QTwSl_sB6=$C`hSALl8#<+v8eXcRoTeZ0>7N4$v&mFzRXnBbXatsjh8bt z)t6FF6K!$pT6yRP0e&V%gzx+Vw^CWC3PK8+((MB1#z;oir!rx(e?t_hY?352A#N{>fFoq{Z>g*vc3=6O@UV1f1^5u& zLST1ywF+28v=*E`?rZSeH~R!nfX65+oh?ZxbeFCnQ__ zYzdO-#(OkVWWDZbIyNwMlsy2CNA3VD-UBAQCo`4{mfY(($(y(xuO)BH>a8zyObDJ_CY!YSvZ}7Ns5KzpyQBzIqo1Z87An?*o#q(pduo4 z^kR$65)<>d2Qj>yrM>j#clfkS(-+R24qkFcg`TR!!o7ren68_G`c3alRu*GbBkJTn z@$q!VVDVPiGSZR1KNud)#*e3g-Oj)l=T1Y`bt;s&H#+68rWk-$o@f>cOb%{sr~F2S zco|B;#&uUpCoy*t0ga?Il)0H1lYL)aTu~eVH9hTmvtwr%uqsdE?NrmnpIG?s+}ZAl zBTo*~2Tj(UauyaA2-{kI)3jBgcOEoA5==a6X4RN$ONsLq0dG`i4qfb5rvm&j_2q2xY> zoF)b&WOqGL^1k=H_W8dmVe zhod$Z#^YpFr$!4C$iS#@u?*!A)xifX5*Uj2BI52I*x@7bsnDj=??a$CV$xMMB@_Ta z^N74O5C41Jm|bW&+XjS2-5`ipKe~hfx!EiNyD5c*55tUa>KIk!ps2HlyqQynxrCE! 
zzNFsMq>qa(`6GC4dI;^-&W!J=?QilOP}PPWG&Az!8G#py#E6}RlLt3ndf)2gPr`l0 zNp}DA&Id|IuYlXlmygCYowREn5ey}aBgxZuL+aijT5fuG%`dBsIaObBO~k+ue0udK z2#mp3gx-~>xgu26xGub3g0WlW3F}%YPYCwO;i`lX$s7CA+r}IfXVu9b<*b_f%1i3e zb<~_P`Qv)Fpmfpjvl_|=378|sR3y{p){b3*j=Oj8YI8CnIEe%)GyhnSyyd`Ro^Rn% zQ(3%qeMiMfgQbp|l1?d9uf;+g%dHb)Or)4aXRd7o03=TDsxRMT%JHLsv7!{tBn7k* zTOf>{xbt)l$w}D2Bl6R(ak*F>j1T~1s%(%ip64LYbvwIH#$o>Ymw$!bwZ_?st8u1%~(m;q7IFhBQ+AjfK^(OjwcM+OPEEbaVrKm@A~ARUd^H1Vp^Z#z!+| z!tT+w%@PUB!dJ^9)KeLR8O)*4V$td>;)Ecrewsl*?u3m>AH_8%^usOZpKV?D4n-DZ zKjth{_HsFY=?2;y>9qN78N0J>OiwsD82g$q!+s;Z=VaV*!|}JKn_CR0C$y2rJ76a|upKXzEl7sSFlJZoWjA6Bh`fL{1Ic3Eo)6WYDt(RNu#gR`ibA5Vx820sF2TTWB3HKS?_?-a-b7saoM~ z8-OEuE#8Gg%Nv+nPkA)ayA=HkEjX7O2?#LDi5I8GsVgZ(SjBkjT7z7#Wb*^m+=_?Z zcV%UnU1%Iwma4uB33~3CVkEA(@*x%g#jlb!fXnDSWP7E9FE1U-RItJ0E5wkhpaL9k z+KFhi8)0kJBp^-5C^Z#vM7>wr65wZ|3-H*e?4x^QVu5Nng9SRIqlZkbMEOm0%V+-B zT7Bk&GavigwDZTpDu21(1ESqg>I5LnP;t0mrqrbO(D|8E?8pD*1MpiMWBFNQU^873 va1~=7D+3~$>&64>oavcUDd$2-*k`Qwd2@l#9SIPN^ocy$gWvo^YwbS(O4OYm diff --git a/Corpus/The 4 Research Techniques to Train Deep Neural Network Models More Efficiently.txt b/Corpus/The 4 Research Techniques to Train Deep Neural Network Models More Efficiently.txt deleted file mode 100644 index 05d66fd..0000000 --- a/Corpus/The 4 Research Techniques to Train Deep Neural Network Models More Efficiently.txt +++ /dev/null @@ -1,535 +0,0 @@ - The 4 Research Techniques to Train Deep Neural Network Models More Efficiently 26/05/2020, 21:12 - - - To make Medium work, we log user data. By using Medium, you agree to our Privacy Policy, - including cookie policy. - - - - - - - - The 4 Research Techniques to - - Train Deep Neural Network - - Models More E:ciently - - - James Le Follow - Oct 29, 2019 · 9 min read - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Photo by Victor Freitas on Unsplash - - https://heartbeat.fritz.ai/the-4-research-techniques-to-train-deep-neural-network-models-more-efficiently-810ea2886205 Page 1 of 14 The 4 Research Techniques to Train Deep Neural Network Models More Efficiently 26/05/2020, 21:12 - - Deep learning and unsupervised feature learning have shown - great promise in many practical applications. State-of-the-art - performance has been reported in several domains, ranging - from speech recognition and image recognition to text - processing and beyond. - - - It’s also been observed that increasing the scale of deep - learning—with respect to numbers of training examples, model - parameters, or both—can drastically improve accuracy. These - results have led to a surge of interest in scaling up the training - and inference algorithms used for these models and in - improving optimization techniques for both. - - - The use of GPUs is a signiFcant advance in recent years that - makes the training of modestly-sized deep networks practical. - A known limitation of the GPU approach is that the training - speed-up is small when the model doesn’t Ft in a GPU’s - memory (typically less than 6 gigabytes). - - - To use a GPU eLectively, researchers often reduce the size of - the dataset or parameters so that CPU-to-GPU transfers are not - a signiFcant bottleneck. While data and parameter reduction - work well for small problems (e.g. acoustic modeling for speech - recognition), they are less attractive for problems with a large - number of examples and dimensions (e.g., high-resolution - images). 
In the previous post, we talked about 5 different algorithms for efficient deep learning inference. In this article, we'll discuss the upper right part of the quadrant on the left: what are the best research techniques to train deep neural networks more efficiently?

1 — Parallelization Training

Let's start with parallelization. As the figure below shows, the number of transistors keeps increasing over the years, but single-threaded performance and frequency are plateauing in recent years. Interestingly, the number of cores is increasing.

So what we really need to know is how to parallelize the problem to take advantage of parallel processing. There are a lot of opportunities to do that in deep neural networks.

For example, we can do data parallelism: feeding 2 images into the same model and running them at the same time. This does not affect latency for any single input. It doesn't make it shorter, but it makes the batch size larger. It also requires coordinated weight updates during training. For example, in Jeff Dean's paper "Large Scale Distributed Deep Networks," there's a parameter server (as a master) and a couple of model workers (as slaves) running their own pieces of training data and updating the gradient to the master.

Another idea is model parallelism: splitting up the model and distributing each part to different processors or different threads. For example, imagine we want to run convolution in the image below by doing a 6-dimension "for" loop. What we can do is cut the input image into 2x2 blocks, so that each thread/processor handles 1/4 of the image. Also, we can parallelize the convolutional layers by the output or input feature map regions, and the fully-connected layers by the output activation.
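To make the data-parallel idea concrete, here is a minimal, framework-free sketch of one synchronous data-parallel update in Python/NumPy. The worker count, the toy linear model, and the grad_fn helper are illustrative assumptions, not part of the article; real systems distribute the shards across devices and synchronize through a parameter server or an all-reduce.

import numpy as np

def grad_fn(w, x, y):
    # Gradient of mean squared error for a toy linear model y_hat = x @ w.
    # Stands in for a full forward/backward pass through a network.
    return 2 * x.T @ (x @ w - y) / len(x)

def data_parallel_step(w, x_batch, y_batch, n_workers=4, lr=0.1):
    # Split the global batch across workers; each worker sees a different shard
    # of data but runs the same model, then the master averages the gradients.
    x_shards = np.array_split(x_batch, n_workers)
    y_shards = np.array_split(y_batch, n_workers)
    grads = [grad_fn(w, xs, ys) for xs, ys in zip(x_shards, y_shards)]
    return w - lr * np.mean(grads, axis=0)  # coordinated weight update

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = np.zeros((3, 1))
    x, y = rng.normal(size=(64, 3)), rng.normal(size=(64, 1))
    w = data_parallel_step(w, x, y)
    print(w.ravel())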
2 — Mixed Precision Training

Larger models usually require more compute and memory resources to train. These requirements can be lowered by using reduced precision representation and arithmetic.

Performance (speed) of any program, including neural network training and inference, is limited by one of three factors: arithmetic bandwidth, memory bandwidth, or latency. Reduced precision addresses two of these limiters. Memory bandwidth pressure is lowered by using fewer bits to store the same number of values. Arithmetic time can also be lowered on processors that offer higher throughput for reduced precision math. For example, half-precision math throughput in recent GPUs is 2× to 8× higher than for single-precision. In addition to speed improvements, reduced precision formats also reduce the amount of memory required for training.

Modern deep learning training systems use a single-precision (FP32) format. In their paper "Mixed Precision Training," researchers from NVIDIA and Baidu addressed training with reduced precision while maintaining model accuracy. Specifically, they trained various neural networks using the IEEE half-precision format (FP16). Since the FP16 format has a narrower dynamic range than FP32, they introduced three techniques to prevent model accuracy loss: maintaining a master copy of weights in FP32, loss-scaling that minimizes gradient values becoming zeros, and FP16 arithmetic with accumulation in FP32.

Using these techniques, they demonstrated that a wide variety of network architectures and applications can be trained to match the accuracy of FP32 training. Experimental results include convolutional and recurrent network architectures, trained for classification, regression, and generative tasks. Applications include image classification, image generation, object detection, language modeling, machine translation, and speech recognition. The proposed methodology requires no changes to models or training hyperparameters.
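As a rough illustration of the three safeguards described above (an FP32 master copy of the weights, loss scaling, and FP32 accumulation), here is a minimal NumPy sketch of one mixed-precision update step. The toy model, the fixed loss scale of 1024, and the grad_fp16 helper are assumptions for illustration only; production systems rely on framework support for mixed precision and typically adjust the loss scale dynamically.

import numpy as np

def grad_fp16(w16, x16, y16, loss_scale):
    # Forward/backward in FP16 on a toy linear model; the loss is scaled up
    # before backprop so that small gradients do not flush to zero in FP16.
    err = x16 @ w16 - y16
    return (loss_scale * 2 * x16.T @ err / len(x16)).astype(np.float16)

def mixed_precision_step(w32, x, y, lr=1e-2, loss_scale=1024.0):
    w16 = w32.astype(np.float16)               # FP16 copy used for compute
    x16, y16 = x.astype(np.float16), y.astype(np.float16)
    g16 = grad_fp16(w16, x16, y16, loss_scale)
    g32 = g16.astype(np.float32) / loss_scale  # unscale and accumulate in FP32
    return w32 - lr * g32                      # update the FP32 master weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = np.zeros((3, 1), dtype=np.float32)
    x, y = rng.normal(size=(64, 3)), rng.normal(size=(64, 1))
    print(mixed_precision_step(w, x, y).ravel())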
3 — Model Distillation

Model distillation refers to the idea of model compression by teaching a smaller network exactly what to do, step-by-step, using a bigger, already-trained network. The 'soft labels' refer to the output feature maps by the bigger network after every convolution layer. The smaller network is then trained to learn the exact behavior of the bigger network by trying to replicate its outputs at every level (not just the final loss).

The method was first proposed by Bucila et al., 2006 and generalized by Hinton et al., 2015. In distillation, knowledge is transferred from the teacher model to the student by minimizing a loss function in which the target is the distribution of class probabilities predicted by the teacher model. That is, the output of a softmax function on the teacher model's logits.

So how do teacher-student networks exactly work?

The highly-complex teacher network is first trained separately using the complete dataset. This step requires high computational performance and thus can only be done offline (on high-performing GPUs).

While designing a student network, correspondence needs to be established between intermediate outputs of the student network and the teacher network. This correspondence can involve directly passing the output of a layer in the teacher network to the student network, or performing some data augmentation before passing it to the student network.

Next, the data are forward-passed through the teacher network to get all intermediate outputs, and then data augmentation (if any) is applied to the same.

Finally, the outputs from the teacher network are back-propagated through the student network so that the student network can learn to replicate the behavior of the teacher network.
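To make the distillation objective concrete, here is a small NumPy sketch of the softened-label loss popularized by Hinton et al., 2015: the student is trained to match the teacher's softmax output at a temperature T, optionally combined with the usual hard-label loss. The temperature, the mixing weight alpha, and the toy logits are illustrative assumptions, not values taken from the article.

import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft term: cross-entropy between teacher and student distributions at
    # temperature T (scaled by T^2, as suggested by Hinton et al., 2015).
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    soft = -(p_teacher * log_p_student).sum(axis=-1).mean() * T * T
    # Hard term: ordinary cross-entropy against the ground-truth labels.
    log_p = np.log(softmax(student_logits) + 1e-12)
    hard = -log_p[np.arange(len(labels)), labels].mean()
    return alpha * soft + (1 - alpha) * hard

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    teacher = rng.normal(size=(8, 10))
    student = rng.normal(size=(8, 10))
    labels = rng.integers(0, 10, size=8)
    print(distillation_loss(student, teacher, labels))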
4 — Dense-Sparse-Dense Training

The research paper "Dense-Sparse-Dense Training for Deep Neural Networks" was published back in 2017 by researchers from Stanford, NVIDIA, Baidu, and Facebook. Applying Dense-Sparse-Dense (DSD) takes 3 sequential steps:

Dense: Normal neural net training, business as usual. It's notable that even though DSD acts as a regularizer, the usual regularization methods such as dropout and weight regularization can be applied as well. The authors don't mention batch normalization, but it would work as well.

Sparse: We regularize the network by removing connections with small weights. From each layer in the network, a percentage of the layer's weights that are closest to 0 in absolute value is selected to be pruned. This means that they are set to 0 at each training iteration. It's worth noting that the pruned weights are selected only once, not at each SGD iteration. Eventually, the network recovers the pruned weights' knowledge and condenses it in the remaining ones. We train this sparse net until convergence.

Dense: First, we re-enable the pruned weights from the previous step. The net is again trained normally until convergence. This step increases the capacity of the model; it can use the recovered capacity to store new knowledge. The authors note that the learning rate should be 1/10th of the original. Since the model is already performing well, the lower learning rate helps preserve the knowledge gained in the previous step.

Removing pruning in the dense step allows the training to escape saddle points to eventually reach a better minimum. This lower minimum corresponds to improved training and validation metrics. Saddle points are areas in the multidimensional space of the model that might not be a good solution but are hard to escape from. The authors hypothesize that the lower minimum is achieved because the sparsity in the network moves the optimization problem to a lower-dimensional space. This space is more robust to noise in the training data.
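Here is a minimal sketch of the sparse phase described above: the smallest-magnitude weights in a layer are masked once, and the mask is re-applied after every update so the pruned connections stay at zero. The layer shape, the 50% pruning fraction, and the stand-in gradient are illustrative assumptions; the DSD paper itself chooses per-layer pruning ratios and trains each phase to convergence.

import numpy as np

def make_prune_mask(w, fraction=0.5):
    # Keep the largest-|w| entries; zero out the `fraction` smallest ones.
    k = int(fraction * w.size)
    threshold = np.sort(np.abs(w).ravel())[k]
    return (np.abs(w) >= threshold).astype(w.dtype)

def sparse_phase_step(w, grad, mask, lr=0.01):
    # Ordinary SGD step, then re-apply the fixed mask so pruned weights stay 0.
    w = w - lr * grad
    return w * mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 4))
    mask = make_prune_mask(w, fraction=0.5)   # chosen once at the sparse phase
    grad = rng.normal(size=(4, 4))            # stand-in for a real gradient
    w = sparse_phase_step(w, grad, mask)
    print("non-zero weights:", int((w != 0).sum()), "of", w.size)
    # Dense phase: drop the mask (re-enable pruned weights) and train again
    # with a lower learning rate, e.g. sparse_phase_step(w, grad, 1.0, lr=0.001).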
The authors tested DSD on image classification (CNN), caption generation (RNN), and speech recognition (LSTM). The proposed method improved accuracy across all three tasks. It's quite remarkable that DSD works across domains.

DSD improved all CNN models tested: ResNet50, VGG, and GoogLeNet. The improvement in absolute top-1 accuracy was respectively 1.12%, 4.31%, and 1.12%. This corresponds to a relative improvement of 4.66%, 13.7%, and 3.6%. These results are remarkable for such finely-tuned models!

DSD was applied to NeuralTalk, an amazing model that generates a description from an image. To verify that the Dense-Sparse-Dense method works on an LSTM, the CNN part of NeuralTalk is frozen and only the LSTM layers are trained. Very high pruning (80%, chosen using the validation set) was applied at the Sparse step. Still, this gives the NeuralTalk BLEU score an average relative improvement of 6.7%. It's fascinating that such a minor adjustment produces this much improvement.

Applying DSD to speech recognition (Deep Speech 1) achieves an average relative improvement of Word Error Rate of 3.95%. On a similar but more advanced Deep Speech 2 model, Dense-Sparse-Dense is applied iteratively two times: on the first iteration 50% of the weights are pruned, then 25% of the weights are pruned. After these two DSD iterations, the average relative improvement is 6.5%.

Conclusion

I hope that I've managed to explain these research techniques for efficient training of deep neural networks in a transparent way. Work on this post allowed me to grasp how novel and clever these techniques are. A solid understanding of these approaches will allow you to incorporate them into your model training procedure when needed.
\ No newline at end of file
diff --git a/Corpus/The State of Sparsity in Deep Neural Networks - Trevor Gale.txt b/Corpus/The State of Sparsity in Deep Neural Networks - Trevor Gale.txt
deleted file mode 100644
index ba90caa..0000000
--- a/Corpus/The State of Sparsity in Deep Neural Networks - Trevor Gale.txt
+++ /dev/null
@@ -1,678 +0,0 @@
The State of Sparsity in Deep Neural Networks

Trevor Gale*† (Google Brain), Erich Elsen* (DeepMind), Sara Hooker† (Google Brain)
*Equal contribution. †This work was completed as part of the Google AI Residency. Correspondence to: Trevor Gale.

arXiv:1902.09574v1 [cs.LG] 25 Feb 2019

Abstract

We rigorously evaluate three state-of-the-art techniques for inducing sparsity in deep neural networks on two large-scale learning tasks: Transformer trained on WMT 2014 English-to-German, and ResNet-50 trained on ImageNet. Across thousands of experiments, we demonstrate that complex techniques (Molchanov et al., 2017; Louizos et al., 2017b) shown to yield high compression rates on smaller datasets perform inconsistently, and that simple magnitude pruning approaches achieve comparable or better results. Based on insights from our experiments, we achieve a new state-of-the-art sparsity-accuracy trade-off for ResNet-50 using only magnitude pruning. Additionally, we repeat the experiments performed by Frankle & Carbin (2018) and Liu et al. (2018) at scale and show that unstructured sparse architectures learned through pruning cannot be trained from scratch to the same test set performance as a model trained with joint sparsification and optimization. Together, these results highlight the need for large-scale benchmarks in the field of model compression. We open-source our code, top performing model checkpoints, and results of all hyperparameter configurations to establish rigorous baselines for future work on compression and sparsification.
1. Introduction

Deep neural networks achieve state-of-the-art performance in a variety of domains including image classification (He et al., 2016), machine translation (Vaswani et al., 2017), and text-to-speech (van den Oord et al., 2016; Kalchbrenner et al., 2018). While model quality has been shown to scale with model and dataset size (Hestness et al., 2017), the resources required to train and deploy large neural networks can be prohibitive. State-of-the-art models for tasks like image classification and machine translation commonly have tens of millions of parameters, and require billions of floating-point operations to make a prediction for a single input sample.

Sparsity has emerged as a leading approach to address these challenges. By sparsity, we refer to the property that a subset of the model parameters have a value of exactly zero². With zero valued weights, any multiplications (which dominate neural network computation) can be skipped, and models can be stored and transmitted compactly using sparse matrix formats. It has been shown empirically that deep neural networks can tolerate high levels of sparsity (Han et al., 2015; Narang et al., 2017; Ullrich et al., 2017), and this property has been leveraged to significantly reduce the cost associated with the deployment of deep neural networks, and to enable the deployment of state-of-the-art models in severely resource constrained environments (Theis et al., 2018; Kalchbrenner et al., 2018; Valin & Skoglund, 2018).

²The term sparsity is also commonly used to refer to the proportion of a neural network's weights that are zero valued. Higher sparsity corresponds to fewer weights, and smaller computational and storage requirements. We use the term in this way throughout this paper.
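As a concrete illustration of the storage point above, the sketch below compares a dense weight matrix with its compressed sparse row (CSR) representation: only the non-zero values and their indices are stored, and the matrix-vector product touches only the non-zero entries. The 90% sparsity level and the matrix size are arbitrary choices for the example, not values from the paper.

import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
w = rng.normal(size=(512, 512)).astype(np.float32)
w[rng.random(w.shape) < 0.9] = 0.0           # make ~90% of the weights exactly zero

w_csr = sparse.csr_matrix(w)                 # store only non-zero values + indices
x = rng.normal(size=512).astype(np.float32)

dense_bytes = w.nbytes
sparse_bytes = w_csr.data.nbytes + w_csr.indices.nbytes + w_csr.indptr.nbytes
print(f"non-zeros: {w_csr.nnz} / {w.size}")
print(f"dense storage: {dense_bytes} B, CSR storage: {sparse_bytes} B")

# The product skips multiplications by zero entries entirely.
np.testing.assert_allclose(w_csr.dot(x), w @ x, rtol=1e-4, atol=1e-4)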
Over the past few years, numerous techniques for inducing sparsity have been proposed and the set of models and datasets used as benchmarks has grown too large to reasonably expect new approaches to explore them all. In addition to the lack of standardization in modeling tasks, the distribution of benchmarks tends to slant heavily towards convolutional architectures and computer vision tasks, and the tasks used to evaluate new techniques are frequently not representative of the scale and complexity of real-world tasks where model compression is most useful. These characteristics make it difficult to come away from the sparsity literature with a clear understanding of the relative merits of different approaches.

In addition to practical concerns around comparing techniques, multiple independent studies have recently proposed that the value of sparsification in neural networks has been misunderstood (Frankle & Carbin, 2018; Liu et al., 2018). While both papers suggest that sparsification can be viewed as a form of neural architecture search, they disagree on what is necessary to achieve this. Specifically, Liu et al. (2018) re-train learned sparse topologies with a random weight initialization, whereas Frankle & Carbin (2018) posit that the exact random weight initialization used when the sparse architecture was learned is needed to match the test set performance of the model sparsified during optimization.

In this paper, we address these ambiguities to provide a strong foundation for future work on sparsity in neural networks. Our main contributions: (1) We perform a comprehensive evaluation of variational dropout (Molchanov et al., 2017), l0 regularization (Louizos et al., 2017b), and magnitude pruning (Zhu & Gupta, 2017) on Transformer trained on WMT 2014 English-to-German and ResNet-50 trained on ImageNet. To the best of our knowledge, we are the first to apply variational dropout and l0 regularization to models of this scale. While variational dropout and l0 regularization achieve state-of-the-art results on small datasets, we show that they perform inconsistently for large-scale tasks and that simple magnitude pruning can achieve comparable or better results for a reduced computational budget. (2) Through insights gained from our experiments, we achieve a new state-of-the-art sparsity-accuracy trade-off for ResNet-50 using only magnitude pruning. (3) We repeat the lottery ticket (Frankle & Carbin, 2018) and scratch (Liu et al., 2018) experiments on Transformer and ResNet-50 across a full range of sparsity levels. We show that unstructured sparse architectures learned through pruning cannot be trained from scratch to the same test set performance as a model trained with pruning as part of the optimization process. (4) We open-source our code, model checkpoints, and results of all hyperparameter settings to establish rigorous baselines for future work on model compression and sparsification³.

³https://bit.ly/2ExE8Yj

2. Sparsity in Neural Networks

We briefly provide a non-exhaustive review of proposed approaches for inducing sparsity in deep neural networks.

Some of the earliest techniques for sparsifying neural networks make use of second-order approximation of the loss surface to avoid damaging model quality (LeCun et al., 1989; Hassibi & Stork, 1992). More recent work has achieved comparable compression levels with more computationally efficient first-order loss approximations, and further refinements have related this work to efficient empirical estimates of the Fisher information of the model parameters (Molchanov et al., 2016; Theis et al., 2018).

Simple heuristics based on removing small magnitude weights have demonstrated high compression rates with minimal accuracy loss (Ström, 1997; Collins & Kohli, 2014; Han et al., 2015), and further refinement of the sparsification process for magnitude pruning techniques has increased achievable compression rates and greatly reduced computational complexity (Guo et al., 2016; Zhu & Gupta, 2017).

Many techniques grounded in Bayesian statistics and information theory have been proposed (Dai et al., 2018; Molchanov et al., 2017; Louizos et al., 2017b;a; Ullrich et al., 2017). These methods have achieved high compression rates while providing deep theoretical motivation and connections to classical sparsification and regularization techniques.

Reinforcement learning has also been applied to automatically prune weights and convolutional filters (Lin et al., 2017; He et al., 2018), and a number of techniques have been proposed that draw inspiration from biological phenomena, and derive from evolutionary algorithms and neuromorphic computing (Guo et al., 2016; Bellec et al., 2017; Mocanu et al., 2018).

A key feature of a sparsity inducing technique is if and how it imposes structure on the topology of sparse weights. While unstructured weight sparsity provides the most flexibility for the model, it is more difficult to map efficiently to parallel processors and has limited support in deep learning software packages. For these reasons, many techniques focus on removing whole neurons and convolutional filters, or impose block structure on the sparse weights (Liu et al., 2017; Luo et al., 2017; Gray et al., 2017). While this is practical, there is a trade-off between achievable compression levels for a given model quality and the level of structure imposed on the model weights. In this work, we focus on unstructured sparsity with the expectation that it upper bounds the compression-accuracy trade-off achievable with structured sparsity techniques.
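A small sketch of the distinction drawn above, using assumed shapes: unstructured sparsity zeroes individual entries of a weight matrix, while one common form of structured sparsity removes whole rows (e.g., whole neurons or convolutional filters), which maps more easily onto dense hardware kernels. This is a generic illustration, not code from the paper.

import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 16))
target_sparsity = 0.5

# Unstructured: mask the smallest-magnitude individual weights anywhere.
k = int(target_sparsity * w.size)
thresh = np.sort(np.abs(w).ravel())[k]
unstructured = w * (np.abs(w) >= thresh)

# Structured (row/neuron pruning): drop entire rows with the smallest L2 norm.
row_norms = np.linalg.norm(w, axis=1)
keep_rows = row_norms >= np.sort(row_norms)[int(target_sparsity * w.shape[0])]
structured = w * keep_rows[:, None]

print("unstructured zeros:", int((unstructured == 0).sum()))
print("structured zeros:  ", int((structured == 0).sum()), "(whole rows removed)")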
3. Evaluating Sparsification Techniques at Scale

As a first step towards addressing the ambiguity in the sparsity literature, we rigorously evaluate magnitude-based pruning (Zhu & Gupta, 2017), sparse variational dropout (Molchanov et al., 2017), and l0 regularization (Louizos et al., 2017b) on two large-scale deep learning applications: ImageNet classification with ResNet-50 (He et al., 2016), and neural machine translation (NMT) with the Transformer on the WMT 2014 English-to-German dataset (Vaswani et al., 2017). For each model, we also benchmark a random weight pruning technique, representing the lower bound of compression-accuracy trade-off any method should be expected to achieve.

Here we briefly review the four techniques and introduce our experimental framework. We provide a more detailed overview of each technique in Appendix A.

Table 1. Constant hyperparameters for all Transformer experiments. More details on the standard configuration for training the Transformer can be found in Vaswani et al. (2017).

    Hyperparameter          Value
    dataset                 translate_wmt_ende_packed
    training iterations     500000
    batch size              2048 tokens
    learning rate schedule  standard transformer_base
    optimizer               Adam
    sparsity range          50% - 98%
    beam search             beam size 4; length penalty 0.6

3.1. Magnitude Pruning

Magnitude-based weight pruning schemes use the magnitude of each weight as a proxy for its importance to model quality, and remove the least important weights according to some sparsification schedule over the course of training. For our experiments, we use the approach introduced in Zhu & Gupta (2017), which is conveniently available in the TensorFlow model pruning library⁴. This technique allows for masked weights to reactivate during training based on gradient updates, and makes use of a gradual sparsification schedule with sorting-based weight thresholding to achieve a user specified level of sparsification. These features enable high compression ratios at a reduced computational cost relative to the iterative pruning and re-training approach used by Han et al. (2015), while requiring less hyperparameter tuning relative to the technique proposed by Guo et al. (2016).

⁴https://bit.ly/2T8hBGn
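The gradual schedule referred to above is described in Zhu & Gupta (2017) as a cubic ramp from an initial sparsity to a final sparsity over the pruning window, with the threshold at each step chosen by sorting weight magnitudes. Below is a minimal sketch, assuming the cubic form s_t = s_f + (s_i - s_f)(1 - t/n)^3 and a single weight tensor; the library applies this per layer with its own step and frequency bookkeeping, so treat this as an illustration rather than the library's implementation.

import numpy as np

def target_sparsity(step, begin_step, end_step, s_init=0.0, s_final=0.9):
    # Cubic sparsity ramp from Zhu & Gupta (2017):
    #   s_t = s_f + (s_i - s_f) * (1 - progress)^3
    progress = np.clip((step - begin_step) / (end_step - begin_step), 0.0, 1.0)
    return s_final + (s_init - s_final) * (1.0 - progress) ** 3

def magnitude_mask(w, sparsity):
    # Sorting-based thresholding: zero out the `sparsity` fraction of weights
    # with the smallest magnitude. Masked weights may later reactivate because
    # the mask is recomputed from the (still-updated) underlying weights.
    k = int(sparsity * w.size)
    if k == 0:
        return np.ones_like(w)
    thresh = np.sort(np.abs(w).ravel())[k - 1]
    return (np.abs(w) > thresh).astype(w.dtype)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(64, 64))
    for step in (0, 2500, 5000, 7500, 10000):
        s = target_sparsity(step, begin_step=0, end_step=10000)
        mask = magnitude_mask(w, s)
        print(f"step {step:>5}: target {s:.2f}, actual {(mask == 0).mean():.2f}")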
3.2. Variational Dropout

Variational dropout was originally proposed as a re-interpretation of dropout training as variational inference, providing a Bayesian justification for the use of dropout in neural networks and enabling useful extensions to the standard dropout algorithms like learnable dropout rates (Kingma et al., 2015). It was later demonstrated that by learning a model with variational dropout and per-parameter dropout rates, weights with high dropout rates can be removed post-training to produce highly sparse solutions (Molchanov et al., 2017).

Variational dropout performs variational inference to learn the parameters of a fully-factorized Gaussian posterior over the weights under a log-uniform prior. In the standard formulation, we apply a local reparameterization to move the sampled noise from the weights to the activations, and then apply the additive noise reparameterization to further reduce the variance of the gradient estimator. Under this parameterization, we directly optimize the mean and variance of the neural network parameters. After training a model with variational dropout, the weights with the highest learned dropout rates can be removed to produce a sparse model.

3.3. l0 Regularization

l0 regularization explicitly penalizes the number of non-zero weights in the model to induce sparsity. However, the l0-norm is both non-convex and non-differentiable. To address the non-differentiability of the l0-norm, Louizos et al. (2017b) propose a reparameterization of the neural network weights as the product of a weight and a stochastic gate variable sampled from a hard-concrete distribution. The parameters of the hard-concrete distribution can be optimized directly using the reparameterization trick, and the expected l0-norm can be computed using the value of the cumulative distribution function of the random gate variable evaluated at zero.
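For the hard-concrete reparameterization in Section 3.3, a minimal sketch is given below, using the commonly cited constants beta = 2/3, gamma = -0.1, zeta = 1.1 from Louizos et al.; the exact expressions here are an approximation of their formulation for illustration, not a drop-in reproduction of the paper's implementation.

import numpy as np

BETA, GAMMA, ZETA = 2.0 / 3.0, -0.1, 1.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_gate(log_alpha, rng):
    # Hard-concrete sample: a stretched, clipped binary-concrete variable.
    u = rng.uniform(1e-6, 1.0 - 1e-6, size=log_alpha.shape)
    s = sigmoid((np.log(u) - np.log(1.0 - u) + log_alpha) / BETA)
    s_bar = s * (ZETA - GAMMA) + GAMMA
    return np.clip(s_bar, 0.0, 1.0)

def expected_l0(log_alpha):
    # Probability that each gate is non-zero, i.e. 1 - CDF(s <= 0);
    # summing over gates gives the differentiable surrogate for the l0 penalty.
    return sigmoid(log_alpha - BETA * np.log(-GAMMA / ZETA)).sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 4))
    log_alpha = rng.normal(size=(4, 4))   # learned per-weight gate parameters
    z = sample_gate(log_alpha, rng)
    print("masked weights:\n", w * z)
    print("expected number of non-zero gates:", expected_l0(log_alpha))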
Figure 1. Sparsity-BLEU trade-off curves for the Transformer. Top: Pareto frontiers for each of the four sparsification techniques applied to the Transformer. Bottom: All experimental results with each technique. Despite the diversity of approaches, the relative performance of all three techniques is remarkably consistent. Magnitude pruning notably outperforms more complex techniques for high levels of sparsity.

Figure 2. Average sparsity in Transformer layers. Distributions calculated on the top performing model at 90% sparsity for each technique. l0 regularization and variational dropout are able to learn non-uniform distributions of sparsity, while magnitude pruning induces user-specified sparsity distributions (in this case, uniform).

4. Sparse Neural Machine Translation

We adapted the Transformer (Vaswani et al., 2017) model for neural machine translation to use these four sparsification techniques, and trained the model on the WMT 2014 English-German dataset. We sparsified all fully-connected layers and embeddings, which make up 99.87% of all of the parameters in the model (the other parameters coming from biases and layer normalization). The constant hyperparameters used for all experiments are listed in Table 1. We followed the standard training procedure used by Vaswani et al. (2017), but did not perform checkpoint averaging. This setup yielded a baseline BLEU score of 27.29 averaged across five runs.

We extensively tuned the remaining hyperparameters for each technique. Details on what hyperparameters we explored, and which settings produced the best models, can be found in Appendix D.

4.1. Sparse Transformer Results & Analysis

All results for the Transformer are plotted in Figure 1. Despite the vast differences in these approaches, the relative performance of all three techniques is remarkably consistent. While l0 regularization and variational dropout produce the top performing models in the low-to-mid sparsity range, magnitude pruning achieves the best results for highly sparse models. While all techniques were able to outperform the random pruning technique, randomly removing weights produces surprisingly reasonable results, which is perhaps indicative of the model's ability to recover from damage during optimization.

What is particularly notable about the performance of magnitude pruning is that our experiments uniformly remove the same fraction of weights for each layer. This is in stark contrast to variational dropout and l0 regularization, where the distribution of sparsity across the layers is learned through the training process. Previous work has shown that a non-uniform sparsity among different layers is key to achieving high compression rates (He et al., 2018), and variational dropout and l0 regularization should theoretically be able to leverage this feature to learn better distributions of weights for a given global sparsity.

Figure 2 shows the distribution of sparsity across the different layer types in the Transformer for the top performing model at 90% global sparsity for each technique. Both l0 regularization and variational dropout learn to keep more parameters in the embeddings, FFN layers, and the output transforms for the multi-head attention modules, and induce more sparsity in the transforms for the query and value inputs to the attention modules. Despite this advantage, l0 regularization and variational dropout did not significantly outperform magnitude pruning, even yielding inferior results at high sparsity levels.

It is also important to note that these results maintain a constant number of training steps across all techniques, and that the Transformer variant with magnitude pruning trains 1.24x and 1.65x faster than l0 regularization and variational dropout, respectively.
While the standard Transformer training scheme produces excellent results for machine translation, it has been shown that training the model for longer can improve its performance by as much as 2 BLEU (Ott et al., 2018). Thus, when compared for a fixed training cost, magnitude pruning has a distinct advantage over these more complicated techniques.

5. Sparse Image Classification

To benchmark these four sparsity techniques on a large-scale computer vision task, we integrated each method into ResNet-50 and trained the model on the ImageNet large-scale image classification dataset. We sparsified all convolutional and fully-connected layers, which make up 99.79% of all of the parameters in the model (the other parameters coming from biases and batch normalization).

Table 2. Constant hyperparameters for all RN50 experiments.

  Hyperparameter          Value
  dataset                 ImageNet
  training iterations     128000
  batch size              1024 images
  learning rate schedule  standard
  optimizer               SGD with Momentum
  sparsity range          50% - 98%

The hyperparameters we used for all experiments are listed in Table 2. Each model was trained for 128000 iterations with a batch size of 1024 images, stochastic gradient descent with momentum, and the standard learning rate schedule (see Appendix E.1). This setup yielded a baseline top-1 accuracy of 76.69% averaged across three runs. We trained each model with 8-way data parallelism across 8 accelerators. Due to the extra parameters and operations required for variational dropout, the model was unable to fit into device memory in this configuration. For all variational dropout experiments, we used a per-device batch size of 32 images and scaled the model over 32 accelerators.

Figure 3. Sparsity-accuracy trade-off curves for ResNet-50. Top: Pareto frontiers for variational dropout, magnitude pruning, and random pruning applied to ResNet-50. Bottom: All experimental results with each technique. We observe large variation in performance for variational dropout and l0 regularization between Transformer and ResNet-50. Magnitude pruning and variational dropout achieve comparable performance for most sparsity levels, with variational dropout achieving the best results for high sparsity levels.

5.1. ResNet-50 Results & Analysis

Figure 3 shows results for magnitude pruning, variational dropout, and random pruning applied to ResNet-50.
Surprisingly, we were unable to produce sparse ResNet-50 models with l0 regularization that did not significantly damage model quality. Across hundreds of experiments, our models were either able to achieve full test set performance with no sparsification, or sparsification with test set performance akin to random guessing. Details on all hyperparameter settings explored are included in Appendix E.

This result is particularly surprising given the success of l0 regularization on Transformer. One nuance of the l0 regularization technique of Louizos et al. (2017b) is that the model can have varying sparsity levels between the training and test-time versions of the model. At training time, a parameter with a dropout rate of 10% will be zero 10% of the time when sampled from the hard-concrete distribution. However, under the test-time parameter estimator, this weight will be non-zero. (The fraction of time a parameter is set to zero during training depends on other factors, e.g. the β parameter of the hard-concrete distribution. However, the general point holds: the training and test-time sparsities are not necessarily equivalent, and there exists some dropout rate threshold below which a weight that is sometimes zero during training will be non-zero at test-time.) Louizos et al. (2017b) reported results applying l0 regularization to a wide residual network (WRN) (Zagoruyko & Komodakis, 2016) on the CIFAR-10 dataset, and noted that they observed small accuracy loss at as low as an 8% reduction in the number of parameters during training. Applying our weight-level l0 regularization implementation to WRN produces a model with comparable training-time sparsity, but with no sparsity in the test-time parameters. For models that achieve test-time sparsity, we observe significant accuracy degradation on CIFAR-10. This result is consistent with our observation for l0 regularization applied to ResNet-50 on ImageNet.

The variation in performance for variational dropout and l0 regularization between Transformer and ResNet-50 is striking. While achieving a good accuracy-sparsity trade-off, variational dropout consistently ranked behind l0 regularization on Transformer, and was bested by magnitude pruning for sparsity levels of 80% and up. However, on ResNet-50 we observe that variational dropout consistently produces models on par with or better than magnitude pruning, and that l0 regularization is not able to produce sparse models at all. Variational dropout achieved particularly notable results in the high sparsity range, maintaining a top-1 accuracy over 70% with less than 4% of the parameters of a standard ResNet-50.

Figure 4. Average sparsity in ResNet-50 layers. Distributions calculated on the top performing model at 95% sparsity for each technique. Variational dropout is able to learn non-uniform distributions of sparsity, decreasing sparsity in the input and output layers that are known to be disproportionately important to model quality.

Figure 5. Sparsity-accuracy trade-off curves for ResNet-50 with modified sparsification scheme. Altering the distribution of sparsity across the layers and increasing training time yield significant improvement for magnitude pruning.

The distribution of sparsity across different layer types in the best variational dropout and magnitude pruning models at 95% sparsity is plotted in Figure 4. While we kept sparsity constant across all layers for magnitude and random pruning, variational dropout significantly reduces the amount of sparsity induced in the first and last layers of the model.

It has been observed that the first and last layers are often disproportionately important to model quality (Han et al., 2015; Bellec et al., 2017). In the case of ResNet-50, the first convolution comprises only .037% of all the parameters in the model. At 98% sparsity the first layer has only 188 non-zero parameters, for an average of less than 3 parameters per output feature map. With magnitude pruning uniformly sparsifying each layer, it is surprising that it is able to achieve any test set performance at all with so few parameters in the input convolution.

While variational dropout is able to learn to distribute sparsity non-uniformly across the layers, it comes at a significant increase in resource requirements. For ResNet-50 trained with variational dropout we observed a greater than 2x increase in memory consumption. When scaled across 32 accelerators, ResNet-50 trained with variational dropout completed training in 9.75 hours, compared to ResNet-50 with magnitude pruning finishing in 12.50 hours on only 8 accelerators. Scaled to a 4096 batch size and 32 accelerators, ResNet-50 with magnitude pruning can complete the same number of epochs in just 3.15 hours.

5.2. Pushing the Limits of Magnitude Pruning

Given that a uniform distribution of sparsity is suboptimal, and the significantly smaller resource requirements for applying magnitude pruning to ResNet-50, it is natural to wonder how well magnitude pruning could perform if we were to distribute the non-zero weights more carefully and increase training time.

To understand the limits of the magnitude pruning heuristic, we modify our ResNet-50 training setup to leave the first convolutional layer fully dense, and only prune the final fully-connected layer to 80% sparsity.
This heuristic is reasonable for ResNet-50, as the first layer makes up a small fraction of the total parameters in the model and the final layer makes up only .03% of the total FLOPs. While tuning the magnitude pruning ResNet-50 models, we observed that the best models always started and ended pruning during the third learning rate phase, before the second learning rate drop. To take advantage of this, we increase the number of training steps by 1.5x by extending this learning rate region. Results for ResNet-50 trained with this scheme are plotted in Figure 5.

With these modifications, magnitude pruning outperforms variational dropout at all but the highest sparsity levels while still using fewer resources. However, variational dropout's performance in the high sparsity range is particularly notable. With very low amounts of non-zero weights, we find it likely that the model's performance on the test set is closely tied to the precise allocation of weights across the different layers, and that variational dropout's ability to learn this distribution enables it to better maintain accuracy at high sparsity levels. This result indicates that efficient sparsification techniques that are able to learn the distribution of sparsity across layers are a promising direction for future work.

It's also worth noting that these changes produced models at 80% sparsity with a top-1 accuracy of 76.52%, only .17% off our baseline ResNet-50 accuracy and .41% better than the results reported by He et al. (2018), without the extra complexity and computational requirements of their reinforcement learning approach. This represents a new state-of-the-art sparsity-accuracy trade-off for ResNet-50 trained on ImageNet.
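As a rough illustration of the modified scheme just described, the sketch below computes how much the remaining layers would need to be pruned to reach a given global sparsity when the first convolution is kept dense and the final fully-connected layer is capped at 80% sparsity. The re-allocation rule and the parameter counts are our own illustrative assumptions, not numbers taken from the paper.

# Illustrative calculation: per-layer sparsity needed for the "body" layers
# when conv1 stays dense and the final FC layer is capped at 80% sparsity.
# Layer sizes are rough ResNet-50 counts and are assumptions for this sketch.
def remaining_layer_sparsity(layer_sizes, global_sparsity, dense_layers, capped):
    total = sum(layer_sizes.values())
    target_zeros = global_sparsity * total
    fixed_zeros = sum(layer_sizes[name] * s for name, s in capped.items())
    exempt = sum(layer_sizes[name] for name in dense_layers)
    exempt += sum(layer_sizes[name] for name in capped)
    return (target_zeros - fixed_zeros) / (total - exempt)

sizes = {"conv1": 9_408, "body": 23_454_912, "fc1000": 2_048_000}
print(remaining_layer_sparsity(sizes, global_sparsity=0.9,
                               dense_layers=("conv1",), capped={"fc1000": 0.8}))
# prints roughly 0.91: the body layers absorb slightly more pruning.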
6. Sparsification as Architecture Search

While sparsity is traditionally thought of as a model compression technique, two independent studies have recently suggested that the value of sparsification in neural networks is misunderstood, and that once a sparse topology is learned it can be trained from scratch to the full performance achieved when sparsification was performed jointly with optimization.

Frankle & Carbin (2018) posited that over-parameterized neural networks contain small, trainable subsets of weights, deemed "winning lottery tickets". They suggest that sparsity inducing techniques are methods for finding these sparse topologies, and that once found, the sparse architectures can be trained from scratch with the same weight initialization that was used when the sparse architecture was learned. They demonstrated that this property holds across different convolutional neural networks and multi-layer perceptrons trained on the MNIST and CIFAR-10 datasets.

Figure 6. Scratch and lottery ticket experiments with magnitude pruning. Top: Results with Transformer. Bottom: Results with ResNet-50. Across all experiments, training from scratch using a learned sparse architecture is unable to re-produce the performance of models trained with sparsification as part of the optimization process.

Liu et al. (2018) similarly demonstrated this phenomenon for a number of activation sparsity techniques on convolutional neural networks, as well as for weight level sparsity learned with magnitude pruning. However, they demonstrate this result using a random initialization during re-training.

The implications of being able to train sparse architectures from scratch once they are learned are large: once a sparse topology is learned, it can be saved and shared as with any other neural network architecture. Re-training can then be done fully sparse, taking advantage of sparse linear algebra to greatly accelerate time-to-solution. However, the combination of these two studies does not clearly establish how this potential is to be realized.

Beyond the question of whether or not the original random weight initialization is needed, both studies only explore convolutional neural networks (and small multi-layer perceptrons in the case of Frankle & Carbin (2018)). The majority of experiments in both studies also limited their analyses to the MNIST, CIFAR-10, and CIFAR-100 datasets. While these are standard benchmarks for deep learning models, they are not indicative of the complexity of real-world tasks where model compression is most useful. Liu et al. (2018) do explore convolutional architectures on the ImageNet dataset, but only at two relatively low sparsity levels (30% and 60%). They also note that weight level sparsity on ImageNet is the only case where they are unable to reproduce the full accuracy of the pruned model.

To clarify the questions surrounding the idea of sparsification as a form of neural architecture search, we repeat the experiments of Frankle & Carbin (2018) and Liu et al. (2018) on ResNet-50 and Transformer. For each model, we explore the full range of sparsity levels (50% - 98%) and compare to our well-tuned models from the previous sections.

6.1. Experimental Framework

The experiments of Liu et al. (2018) encompass taking the final learned weight mask from a magnitude pruning model, randomly re-initializing the weights, and training the model with the normal training procedure (i.e., learning rate, number of iterations, etc.). To account for the presence of sparsity at the start of training, they scale the variance of the initial weight distribution by the number of non-zeros in the matrix.
They additionally train a variant where they increase the number of training steps (up to a factor of 2x) such that the re-trained model uses approximately the same number of FLOPs during training as the model trained with sparsification as part of the optimization process. They refer to these two experiments as "scratch-e" and "scratch-b" respectively.

Frankle & Carbin (2018) follow a similar procedure, but use the same weight initialization that was used when the sparse weight mask was learned and do not perform the longer training time variant.

For our experiments, we repeat the scratch-e, scratch-b, and lottery ticket experiments with magnitude pruning on Transformer and ResNet-50. For scratch-e and scratch-b, we also train variants that do not alter the initial weight distribution. For the Transformer, we re-trained five replicas of the best magnitude pruning hyperparameter settings at each sparsity level and saved the weight initialization and final sparse weight mask. For each of the five learned weight masks, we train five identical replicas for the scratch-e, scratch-b, scratch-e with augmented initialization, scratch-b with augmented initialization, and the lottery ticket experiments. For ResNet-50, we followed the same procedure with three re-trained models and three replicas at each sparsity level for each of the five experiments. Figure 6 plots the averages and min/max of all experiments at each sparsity level. (Two of the 175 Transformer experiments failed to train from scratch at all and produced BLEU scores less than 1.0; we omit these outliers in Figure 6.)
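The following is a minimal NumPy sketch of the "scratch" re-initialization described above, in which the initializer variance is scaled by the fraction of non-zeros in the learned mask. Liu et al. (2018) describe the variance scaling, not this exact code; the He-style base initializer and all names here are our assumptions.

# Sketch of sparse re-initialization for the scratch experiments: keep the
# learned mask, re-draw the weights, and scale the variance by 1 / density so
# the effective fan-in matches the sparse matrix. Names are illustrative.
import numpy as np

def sparse_reinit(mask, rng, base_std=None):
    fan_in = mask.shape[0]
    density = mask.mean()                     # fraction of non-zero weights
    if base_std is None:
        base_std = np.sqrt(2.0 / fan_in)      # assumed He init for the dense matrix
    std = base_std / np.sqrt(density)         # scale variance by the non-zero fraction
    w = rng.normal(0.0, std, size=mask.shape)
    return w * mask                           # masked entries stay at zero during re-training

rng = np.random.default_rng(0)
mask = (rng.random((1024, 1024)) > 0.9).astype(np.float64)   # ~90% sparse mask
w = sparse_reinit(mask, rng)
print("sparsity:", round(1.0 - (w != 0).mean(), 3), "std of non-zeros:", round(w[w != 0].std(), 4))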
6.2. Scratch and Lottery Ticket Results & Analysis

Across all of our experiments, we observed that training from scratch using a learned sparse architecture is not able to match the performance of the same model trained with sparsification as part of the optimization process.

Across both models, we observed that doubling the number of training steps did improve the quality of the results for the scratch experiments, but was not sufficient to match the test set performance of the magnitude pruning baseline. As sparsity increased, we observed that the deviation between the models trained with magnitude pruning and those trained from scratch increased. For both models, we did not observe a benefit from using the augmented weight initialization for the scratch experiments.

For ResNet-50, we experimented with four different learning rate schemes for the scratch-b experiments. We found that scaling each learning rate region to double the number of epochs produced the best results by a wide margin. These results are plotted in Figure 6. Results for the ResNet-50 scratch-b experiments with the other learning rate variants are included with our release of hyperparameter tuning results.

For the lottery ticket experiments, we were not able to replicate the phenomenon observed by Frankle & Carbin (2018). The key difference between our experiments is the complexity of the tasks and scale of the models, and it seems likely that this is the main factor contributing to our inability to train these architectures from scratch.

For the scratch experiments, our results are consistent with the negative result observed by Liu et al. (2018) for ImageNet and ResNet-50 with unstructured weight pruning. By replicating the scratch experiments at the full range of sparsity levels, we observe that the quality of the models degrades relative to the magnitude pruning baseline as sparsity increases. For unstructured weight sparsity, it seems likely that the phenomenon observed by Liu et al. (2018) was produced by a combination of low sparsity levels and small-to-medium sized tasks. We'd like to emphasize that this result is only for unstructured weight sparsity, and that prior work (Liu et al., 2018) provides strong evidence that activation pruning behaves differently.

7. Limitations of This Study

Hyperparameter exploration. For all techniques and models, we carefully hand-tuned hyperparameters and performed extensive sweeps encompassing thousands of experiments over manually identified ranges of values. However, the number of possible settings vastly outnumbers the set of values that can be practically explored, and we cannot eliminate the possibility that some techniques significantly outperform others under settings we did not try.

Neural architectures and datasets. Transformer and ResNet-50 were chosen as benchmark tasks to represent a cross section of large-scale deep learning tasks with diverse architectures. We can't exclude the possibility that some techniques achieve consistently high performance across other architectures. More models and tasks should be thoroughly explored in future work.

8. Conclusion

In this work, we performed an extensive evaluation of three state-of-the-art sparsification techniques on two large-scale learning tasks. Notwithstanding the limitations discussed in Section 7, we demonstrated that complex techniques shown to yield state-of-the-art compression on small datasets perform inconsistently, and that simple heuristics can achieve comparable or better results on a reduced computational budget. Based on insights from our experiments, we achieve a new state-of-the-art sparsity-accuracy trade-off for ResNet-50 with only magnitude pruning, and highlight promising directions for research in sparsity inducing techniques.

Additionally, we provide strong counterexamples to two recently proposed theories that models learned through pruning techniques can be trained from scratch to the same test set performance as a model learned with sparsification as part of the optimization process. Our results highlight the need for large-scale benchmarks in sparsification and model compression. As such, we open-source our code, checkpoints, and results of all hyperparameter configurations to establish rigorous baselines for future work.

Acknowledgements

We would like to thank Benjamin Caine, Jonathan Frankle, Raphael Gontijo Lopes, Sam Greydanus, and Keren Gu for helpful discussions and feedback on drafts of this paper.

References
Bellec, G., Kappel, D., Maass, W., and Legenstein, R. A. Deep Rewiring: Training Very Sparse Deep Networks. CoRR, abs/1711.05136, 2017.

Collins, M. D. and Kohli, P. Memory Bounded Deep Convolutional Networks. CoRR, abs/1412.1442, 2014. URL http://arxiv.org/abs/1412.1442.

Dai, B., Zhu, C., and Wipf, D. P. Compressing Neural Networks using the Variational Information Bottleneck. CoRR, abs/1802.10399, 2018.

Frankle, J. and Carbin, M. The Lottery Ticket Hypothesis: Training Pruned Neural Networks. CoRR, abs/1803.03635, 2018. URL http://arxiv.org/abs/1803.03635.

Gray, S., Radford, A., and Kingma, D. P. Block-sparse GPU kernels. https://blog.openai.com/block-sparse-gpu-kernels/, 2017.

Guo, Y., Yao, A., and Chen, Y. Dynamic Network Surgery for Efficient DNNs. In NIPS, 2016.

Han, S., Pool, J., Tran, J., and Dally, W. J. Learning both Weights and Connections for Efficient Neural Network. In NIPS, pp. 1135-1143, 2015.

Hassibi, B. and Stork, D. G. Second order derivatives for network pruning: Optimal brain surgeon. In NIPS, pp. 164-171. Morgan Kaufmann, 1992.

He, K., Zhang, X., Ren, S., and Sun, J. Deep Residual Learning for Image Recognition. In CVPR 2016, pp. 770-778, 2016.

He, Y., Lin, J., Liu, Z., Wang, H., Li, L., and Han, S. AMC: AutoML for model compression and acceleration on mobile devices. In ECCV 2018, Part VII, pp. 815-832, 2018.

Hestness, J., Narang, S., Ardalani, N., Diamos, G. F., Jun, H., Kianinejad, H., Patwary, M. M. A., Yang, Y., and Zhou, Y. Deep learning scaling is predictable, empirically. CoRR, abs/1712.00409, 2017.

Kalchbrenner, N., Elsen, E., Simonyan, K., Noury, S., Casagrande, N., Lockhart, E., Stimberg, F., van den Oord, A., Dieleman, S., and Kavukcuoglu, K. Efficient Neural Audio Synthesis. In ICML 2018, pp. 2415-2424, 2018.

Kingma, D. P. and Welling, M. Auto-encoding variational bayes. CoRR, abs/1312.6114, 2013.

Kingma, D. P., Salimans, T., and Welling, M. Variational dropout and the local reparameterization trick. CoRR, abs/1506.02557, 2015.

LeCun, Y., Denker, J. S., and Solla, S. A. Optimal Brain Damage. In NIPS, pp. 598-605. Morgan Kaufmann, 1989.

Lin, J., Rao, Y., Lu, J., and Zhou, J. Runtime neural pruning. In NIPS, pp. 2178-2188, 2017.

Liu, Z., Li, J., Shen, Z., Huang, G., Yan, S., and Zhang, C. Learning Efficient Convolutional Networks through Network Slimming. In ICCV 2017, pp. 2755-2763, 2017.

Liu, Z., Sun, M., Zhou, T., Huang, G., and Darrell, T. Rethinking the Value of Network Pruning. CoRR, abs/1810.05270, 2018.

Louizos, C., Ullrich, K., and Welling, M. Bayesian Compression for Deep Learning. In Advances in Neural Information Processing Systems 30, pp. 3290-3300, 2017a.

Louizos, C., Welling, M., and Kingma, D. P. Learning Sparse Neural Networks through L0 Regularization. CoRR, abs/1712.01312, 2017b.

Luo, J., Wu, J., and Lin, W. ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression. In ICCV 2017, pp. 5068-5076, 2017.

Mitchell, T. J. and Beauchamp, J. J. Bayesian Variable Selection in Linear Regression. Journal of the American Statistical Association, 83(404):1023-1032, 1988.

Mocanu, D. C., Mocanu, E., Stone, P., Nguyen, P. H., Gibescu, M., and Liotta, A. Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity Inspired by Network Science. Nature Communications, 2018.

Molchanov, D., Ashukha, A., and Vetrov, D. P. Variational Dropout Sparsifies Deep Neural Networks. In ICML 2017, pp. 2498-2507, 2017.

Molchanov, P., Tyree, S., Karras, T., Aila, T., and Kautz, J. Pruning Convolutional Neural Networks for Resource Efficient Transfer Learning. CoRR, abs/1611.06440, 2016.

Narang, S., Diamos, G. F., Sengupta, S., and Elsen, E. Exploring Sparsity in Recurrent Neural Networks. CoRR, abs/1704.05119, 2017.

Ott, M., Edunov, S., Grangier, D., and Auli, M. Scaling Neural Machine Translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, WMT 2018, pp. 1-9, 2018.

Rezende, D. J., Mohamed, S., and Wierstra, D. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. In ICML, volume 32 of JMLR Workshop and Conference Proceedings, pp. 1278-1286, 2014.

Ström, N. Sparse Connection and Pruning in Large Dynamic Artificial Neural Networks. In EUROSPEECH, 1997.

Theis, L., Korshunova, I., Tejani, A., and Huszár, F. Faster gaze prediction with dense networks and Fisher pruning. CoRR, abs/1801.05787, 2018. URL http://arxiv.org/abs/1801.05787.

Ullrich, K., Meeds, E., and Welling, M. Soft Weight-Sharing for Neural Network Compression. CoRR, abs/1702.04008, 2017.

Valin, J. and Skoglund, J. LPCNet: Improving Neural Speech Synthesis Through Linear Prediction. CoRR, abs/1810.11846, 2018. URL http://arxiv.org/abs/1810.11846.

van den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A. W., and Kavukcuoglu, K. WaveNet: A Generative Model for Raw Audio. In The 9th ISCA Speech Synthesis Workshop, pp. 125, 2016.

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is All you Need. In Advances in Neural Information Processing Systems 30, pp. 6000-6010, 2017.

Zagoruyko, S. and Komodakis, N. Wide Residual Networks. In BMVC 2016, 2016.

Zhu, M. and Gupta, S. To prune, or not to prune: exploring the efficacy of pruning for model compression. CoRR, abs/1710.01878, 2017. URL http://arxiv.org/abs/1710.01878.
The State of Sparsity in Deep Neural Networks: Appendix

A. Overview of Sparsity Inducing Techniques

Here we provide a more detailed review of the three sparsity techniques we benchmarked.

A.1. Magnitude Pruning

Magnitude-based weight pruning schemes use the magnitude of each weight as a proxy for its importance to model quality, and remove the least important weights according to some sparsification schedule over the course of training. Many variants have been proposed (Collins & Kohli, 2014; Han et al., 2015; Guo et al., 2016; Zhu & Gupta, 2017), with the key differences lying in when weights are removed, whether weights should be sorted to remove a precise proportion or thresholded based on a fixed or decaying value, and whether or not weights that have been pruned still receive gradient updates and have the potential to return after being pruned.

Han et al. (2015) use iterative magnitude pruning and re-training to progressively sparsify a model. The target model is first trained to convergence, after which a portion of weights are removed and the model is re-trained with these weights fixed to zero. This process is repeated until the target sparsity is achieved. Guo et al. (2016) improve on this approach by allowing masked weights to still receive gradient updates, enabling the network to recover from incorrect pruning decisions during optimization. They achieve higher compression rates and interleave pruning steps with gradient update steps to avoid expensive re-training. Zhu & Gupta (2017) similarly allow gradient updates to masked weights, and make use of a gradual sparsification schedule with sorting-based weight thresholding to maintain accuracy while achieving a user-specified level of sparsification.

It's worth noting that magnitude pruning can easily be adapted to induce block or activation level sparsity by removing groups of weights based on their p-norm, average, max, or other statistics. Variants have also been proposed that maintain a constant level of sparsity during optimization to enable accelerated training (Mocanu et al., 2018).

A.2. Variational Dropout

Consider the setting of a dataset D of N i.i.d. samples (x, y) and a standard classification problem where the goal is to learn the parameters w of the conditional probability p(y|x, w). Bayesian inference combines some initial belief over the parameters w in the form of a prior distribution p(w) with observed data D into an updated belief over the parameters in the form of the posterior distribution p(w|D). In practice, computing the true posterior using Bayes' rule is computationally intractable and good approximations are needed. In variational inference, we optimize the parameters θ of some parameterized model q_θ(w) such that q_θ(w) is a close approximation to the true posterior distribution p(w|D) as measured by the Kullback-Leibler divergence between the two distributions. The divergence of our approximate posterior from the true posterior is minimized in practice by maximizing the variational lower-bound

L(\theta) = -D_{KL}(q_\theta(w) \| p(w)) + L_D(\theta), \quad \text{where} \quad L_D(\theta) = \sum_{(x,y) \in D} E_{q_\theta(w)}[\log p(y|x, w)].

Using the Stochastic Gradient Variational Bayes (SGVB) (Kingma et al., 2015) algorithm to optimize this bound, L_D(θ) reduces to the standard cross-entropy loss, and the KL divergence between our approximate posterior and prior over the parameters serves as a regularizer that enforces our initial belief about the parameters w.

In the standard formulation of variational dropout, we assume the weights are drawn from a fully-factorized Gaussian approximate posterior,

w_{ij} \sim q_\theta(w_{ij}) = N(\theta_{ij}, \alpha_{ij} \theta_{ij}^2),

where θ and α are neural network parameters. For each training step, we sample weights from this distribution and use the reparameterization trick (Kingma & Welling, 2013; Rezende et al., 2014) to differentiate the loss w.r.t. the parameters through the sampling operation. Given that the weights are normally distributed, the distribution of the activations B after a linear operation like matrix multiplication or convolution is also Gaussian and can be calculated in closed form (ignoring correlation in the activations, as is done by Molchanov et al. (2017)):

q_\theta(b_{mj} | A) \sim N(\gamma_{mj}, \delta_{mj}), \quad \gamma_{mj} = \sum_{i=1}^{K} a_{mi} \theta_{ij}, \quad \delta_{mj} = \sum_{i=1}^{K} a_{mi}^2 \alpha_{ij} \theta_{ij}^2,

where a_{mi} ∈ A are the inputs to the layer. Thus, rather than sample weights, we can directly sample the activations at each layer. This step is known as the local reparameterization trick, and was shown by Kingma et al. (2015) to reduce the variance of the gradients relative to the standard formulation, in which a single set of sampled weights must be shared for all samples in the input batch for efficiency. Molchanov et al. (2017) showed that the variance of the gradients could be further reduced by using an additive noise reparameterization, where we define a new parameter

\sigma_{ij}^2 = \alpha_{ij} \theta_{ij}^2.

Under this parameterization, we directly optimize the mean and variance of the neural network parameters.

Under the assumption of a log-uniform prior on the weights w, the KL divergence component of our objective function D_KL(q_θ(w_ij) || p(w_ij)) can be accurately approximated (Molchanov et al., 2017):

D_{KL}(q_\theta(w_{ij}) \| p(w_{ij})) \approx -k_1 \sigma(k_2 + k_3 \log \alpha_{ij}) + 0.5 \log(1 + \alpha_{ij}^{-1}) + k_1,
k_1 = 0.63576, \quad k_2 = 1.87320, \quad k_3 = 1.48695,

where σ(·) denotes the sigmoid function.

After training a model with variational dropout, the weights with the highest α values can be removed. For all their experiments, Molchanov et al. (2017) removed weights with log α larger than 3.0, which corresponds to a dropout rate greater than 95%. Although they demonstrated good results, it is likely that the optimal α threshold varies across different models and even different hyperparameter settings of the same model. We address this question in our experiments.
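As a concrete reference for the approximation above, here is a minimal NumPy sketch of the per-weight KL term and the post-training log α thresholding used by Molchanov et al. (2017). The helper names are ours and the example tensors are random; this is a sketch of the equations, not the experiment code.

# Sketch of the variational dropout KL approximation and log-alpha thresholding.
import numpy as np

K1, K2, K3 = 0.63576, 1.87320, 1.48695

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def approx_neg_kl(log_alpha):
    # Approximate -KL(q(w) || p(w)) per weight under the log-uniform prior.
    return K1 * sigmoid(K2 + K3 * log_alpha) - 0.5 * np.log1p(np.exp(-log_alpha)) - K1

def sparsify(theta, log_alpha, threshold=3.0):
    # Remove weights whose learned dropout rate is high (log alpha > threshold).
    return np.where(log_alpha > threshold, 0.0, theta)

theta = np.random.randn(1000)
log_alpha = np.random.uniform(-5.0, 5.0, size=1000)
print("mean -KL term:", approx_neg_kl(log_alpha).mean())
print("sparsity at threshold 3.0:", (sparsify(theta, log_alpha) == 0).mean())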
A.3. l0 Regularization

To optimize the l0-norm, we reparameterize the model weights θ as the product of a weight and a random variable drawn from the hard-concrete distribution:

\theta_j = \tilde{\theta}_j z_j, \quad z_j = \min(1, \max(0, \bar{s})), \quad \bar{s} = s(\zeta - \gamma) + \gamma,
s = \mathrm{sigmoid}((\log u - \log(1 - u) + \log \alpha_j) / \beta), \quad u \sim U(0, 1).

In this formulation, the α parameter that controls the position of the hard-concrete distribution (and thus the probability that z_j is zero) is optimized with gradient descent. β, γ, and ζ are fixed parameters that control the shape of the hard-concrete distribution: β controls the curvature or temperature of the hard-concrete probability density function, and γ and ζ stretch the distribution such that z_j takes value 0 or 1 with non-zero probability.

On each training iteration, z_j is sampled from this distribution and multiplied with the standard neural network weights. The expected l0-norm L_C can then be calculated using the cumulative distribution function of the hard-concrete distribution and optimized directly with stochastic gradient descent:

L_C = \sum_{j=1}^{|\theta|} (1 - Q_{\bar{s}_j}(0)) = \sum_{j=1}^{|\theta|} \mathrm{sigmoid}(\log \alpha_j - \beta \log(-\gamma / \zeta)).

At test-time, Louizos et al. (2017b) use the following estimate for the model parameters:

\theta^* = \tilde{\theta} \odot \hat{z}, \quad \hat{z} = \min(1, \max(0, \mathrm{sigmoid}(\log \alpha)(\zeta - \gamma) + \gamma)).

Interestingly, Louizos et al. (2017b) showed that their objective function under the l0 penalty is a special case of a variational lower-bound over the parameters of the network under a spike and slab (Mitchell & Beauchamp, 1988) prior.
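To make the gate construction above concrete, the following NumPy sketch samples hard-concrete gates, evaluates the expected l0 penalty L_C, and applies the test-time estimator, using the default β, γ, ζ quoted in Appendix D.3. Variable names and example sizes are our own; this is a sketch of the equations, not the authors' implementation.

# Sketch of hard-concrete gates: training-time sampling, expected l0, test-time estimator.
import numpy as np

BETA, GAMMA, ZETA = 2.0 / 3.0, -0.1, 1.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_gates(log_alpha, rng):
    u = rng.uniform(1e-6, 1.0 - 1e-6, size=log_alpha.shape)
    s = sigmoid((np.log(u) - np.log(1.0 - u) + log_alpha) / BETA)
    s_bar = s * (ZETA - GAMMA) + GAMMA      # stretch so 0 and 1 have non-zero probability
    return np.clip(s_bar, 0.0, 1.0)

def expected_l0(log_alpha):
    # L_C = sum_j sigmoid(log_alpha_j - beta * log(-gamma / zeta))
    return sigmoid(log_alpha - BETA * np.log(-GAMMA / ZETA)).sum()

def test_time_gates(log_alpha):
    return np.clip(sigmoid(log_alpha) * (ZETA - GAMMA) + GAMMA, 0.0, 1.0)

rng = np.random.default_rng(0)
log_alpha = np.full(10_000, 2.197)          # ~10% dropout rate at initialization
theta_tilde = rng.normal(size=10_000)
theta_train = theta_tilde * sample_gates(log_alpha, rng)
theta_test = theta_tilde * test_time_gates(log_alpha)
print("expected l0:", expected_l0(log_alpha), "test-time zeros:", (theta_test == 0).mean())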
B. Variational Dropout Implementation Verification

To verify our implementation of variational dropout, we applied it to LeNet-300-100 and LeNet-5-Caffe on MNIST and compared our results to the original paper (Molchanov et al., 2017). We matched our hyperparameters to those used in the code released with the paper (https://github.com/ars-ashuha/variational-dropout-sparsifies-dnn). All results are listed in Table 3.

Table 3. Variational Dropout MNIST Reproduction Results.

  Network         Experiment                         Sparsity (%)   Accuracy (%)
  LeNet-300-100   original (Molchanov et al., 2017)  98.57          98.08
  LeNet-300-100   ours (log alpha = 3.0)             97.52          98.42
  LeNet-300-100   ours (log alpha = 2.0)             98.50          98.40
  LeNet-300-100   ours (log alpha = 0.1)             99.10          98.13
  LeNet-5-Caffe   original (Molchanov et al., 2017)  99.60          99.25
  LeNet-5-Caffe   ours (log alpha = 3.0)             99.29          99.26
  LeNet-5-Caffe   ours (log alpha = 2.0)             99.50          99.25

Our baseline LeNet-300-100 model achieved a test set accuracy of 98.42%, slightly higher than the baseline of 98.36% reported in Molchanov et al. (2017). Applying our variational dropout implementation to LeNet-300-100 with these hyperparameters produced a model with 97.52% global sparsity and 98.42% test accuracy. The original paper produced a model with 98.57% global sparsity and 98.08% test accuracy. While our model achieves .34% higher test accuracy with 1% lower sparsity, we believe the discrepancy is mainly due to the difference in our software packages: the authors of Molchanov et al. (2017) used Theano and Lasagne for their experiments, while we use TensorFlow.

Given that our model achieves the highest accuracy, we can decrease the log α threshold to trade accuracy for more sparsity. With a log α threshold of 2.0, our model achieves 98.5% global sparsity with a test set accuracy of 98.40%. With a log α threshold of 0.1, our model achieves 99.1% global sparsity with 98.13% test set accuracy, exceeding the sparsity and accuracy of the originally published results.

On LeNet-5-Caffe, our implementation achieved a global sparsity of 99.29% with a test set accuracy of 99.26%, versus the originally published results of 99.6% sparsity with 99.25% accuracy. Lowering the log α threshold to 2.0, our model achieves 99.5% sparsity with 99.25% test accuracy.
C. l0 Regularization Implementation Verification

The original l0 regularization paper uses a modified version of the proposed technique for inducing group sparsity in models, so our weight-level implementation is not directly comparable. However, to verify our implementation we trained a Wide ResNet (WRN) (Zagoruyko & Komodakis, 2016) on CIFAR-10 and compared results to those reported in the original publication for group sparsity.

As done by Louizos et al. (2017b), we apply l0 to the first convolutional layer in the residual blocks (i.e., where dropout would normally be used). We use the weight decay formulation for the re-parameterized weights, and scale the weight decay coefficient to maintain the same initial length scale of the parameters. We use the same batch size of 128 samples and the same initial log α, and train our model on a single GPU.

Our baseline WRN-28-10 implementation trained on CIFAR-10 achieved a test set accuracy of 95.45%. Using our l0 regularization implementation and an l0-norm weight of .0003, we trained a model that achieved 95.34% accuracy on the test set while achieving a consistent training-time FLOPs reduction comparable to that reported by Louizos et al. (2017b). The floating-point operations (FLOPs) required to compute the forward pass over the course of training WRN-28-10 with l0 are plotted in Figure 7.

Figure 7. Forward pass FLOPs for WRN-28-10 trained with l0 regularization. Our implementation achieves FLOPs reductions comparable to those reported in Louizos et al. (2017b).

During our re-implementation of the WRN experiments from Louizos et al. (2017b), we identified errors in the original publication's FLOP calculations that caused the number of floating-point operations in WRN-28-10 to be miscalculated. We've contacted the authors, and hope to resolve this issue to clarify their performance results.
D. Sparse Transformer Experiments

D.1. Magnitude Pruning Details

For our magnitude pruning experiments, we tuned four key hyperparameters: the starting iteration of the sparsification process, the ending iteration of the sparsification process, the frequency of pruning steps, and the combination of other regularizers (dropout and label smoothing) used during training. We trained models with 7 different target sparsities: 50%, 60%, 70%, 80%, 90%, 95%, and 98%. At each of these sparsity levels, we tried pruning frequencies of 1000 and 10000 steps. During preliminary experiments we identified that the best settings for the training step at which to stop pruning were typically closer to the end of training. Based on this insight, we explored every possible combination of start and end points for the sparsity schedule in increments of 100000 steps with an ending step of 300000 or greater.

By default, the Transformer uses dropout with a dropout rate of 10% on the input to the encoder, decoder, and before each layer, and performs label smoothing with a smoothing parameter of .1. We found that decreasing these other regularizers produced higher quality models in the mid to high sparsity range. For each hyperparameter combination, we tried three different regularization settings: standard label smoothing and dropout, label smoothing only, and no regularization.

D.2. Variational Dropout Details

For the Transformer trained with variational dropout, we extensively tuned the coefficient for the KL divergence component of the objective function to find models that achieved high accuracy with sparsity levels in the target range. We found that KL divergence weights in the range [.1/N, 1/N], where N is the number of samples in the training set, produced models in our target sparsity range.

Molchanov et al. (2017) noted difficulty training some models from scratch with variational dropout, as large portions of the model adopt high dropout rates early in training before the model can learn a useful representation from the data. To address this issue, they use a gradual ramp-up of the KL divergence weight, linearly increasing the regularizer coefficient until it reaches the desired value.

For our experiments, we explored using a constant regularizer weight, linearly increasing the regularizer weight, and also increasing the regularizer weight following the cubic sparsity function used with magnitude pruning. For the linear and cubic weight schedules, we tried each combination of possible start and end points in increments of 100000 steps. For each hyperparameter combination, we also tried the three different combinations of dropout and label smoothing as with magnitude pruning. For each trained model, we evaluated the model with 11 log α thresholds in the range [0, 5]. For all experiments, we initialized all log σ² parameters to the constant value -10.

D.3. l0 Regularization Details

For Transformers trained with l0 regularization, we similarly tuned the coefficient for the l0-norm in the objective function. We observed that much higher magnitude regularization coefficients were needed to produce models with the same sparsity levels relative to variational dropout. We found that l0-norm weights in the range [1/N, 100/N] produced models in our target sparsity range.

For all experiments, we used the default settings for the parameters of the hard-concrete distribution: β = 2/3, γ = -0.1, and ζ = 1.1. We initialized the log α parameters to 2.197, corresponding to a 10% dropout rate.

For each hyperparameter setting, we explored the three regularizer coefficient schedules used with variational dropout and each of the three combinations of dropout and label smoothing.

D.4. Random Pruning Details

We identified in preliminary experiments that random pruning typically produces the best results by starting and ending pruning early and allowing the model to finish the rest of the training steps with the final sparse weight mask. For our experiments, we explored all hyperparameter combinations that we explored with magnitude pruning, and also included start/end pruning step combinations with an end step of less than 300000.

E. Sparse ResNet-50

E.1. Learning Rate

For all experiments, we used the learning rate scheme used by the official TensorFlow ResNet-50 implementation (https://bit.ly/2Wd2Lk0). With our batch size of 1024, this includes a linear ramp-up for 5 epochs to a learning rate of .4 followed by learning rate drops by a factor of 0.1 at epochs 30, 60, and 80.
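For reference, here is a minimal Python sketch of the learning rate schedule described in E.1. It mirrors only the description above (5-epoch linear warm-up to 0.4 at batch size 1024, then 10x drops at epochs 30, 60, and 80) and is not the official TensorFlow implementation; the function name is ours.

# Sketch of the ResNet-50 learning rate schedule described in E.1.
def learning_rate(epoch, peak_lr=0.4, warmup_epochs=5):
    if epoch < warmup_epochs:
        return peak_lr * (epoch + 1) / warmup_epochs   # linear ramp-up
    if epoch < 30:
        return peak_lr
    if epoch < 60:
        return peak_lr * 0.1
    if epoch < 80:
        return peak_lr * 0.01
    return peak_lr * 0.001

for e in (0, 4, 5, 29, 30, 59, 60, 79, 80, 90):
    print(e, learning_rate(e))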
E.2. Magnitude Pruning Details

For magnitude pruning on ResNet-50, we trained models with a target sparsity of 50%, 70%, 80%, 90%, 95%, and 98%. At each sparsity level, we tried starting pruning at steps 8k, 20k, and 40k. For each potential starting point, we tried ending pruning at steps 68k, 76k, and 100k. For every hyperparameter setting, we tried pruning frequencies of 2k, 4k, and 8k steps and explored training with and without label smoothing. During preliminary experiments, we observed that removing weight decay from the model consistently caused significant decreases in test accuracy. Thus, for all hyperparameter combinations, we left weight decay on with the standard coefficient.

For a target sparsity of 98%, we observed that very few hyperparameter combinations were able to complete training without failing due to numerical issues. Out of all the hyperparameter configurations we tried, only a single model was able to complete training without erroring from the presence of NaNs. As explained in the main text, at high sparsity levels the first layer of the model has very few non-zero parameters, leading to instability during training and low test set performance. Pruned ResNet-50 models with the first layer left dense did not exhibit these issues.

E.3. Variational Dropout Details

For variational dropout applied to ResNet-50, we explored the same combinations of start and end points for the KL divergence weight ramp-up as we did for the start and end points of magnitude pruning. For all Transformer experiments, we did not observe a significant gain from using a cubic KL divergence weight ramp-up schedule, and thus only explored the linear ramp-up for ResNet-50. For each combination of start and end points for the KL divergence weight, we explored 9 different coefficients for the KL divergence loss term: .01/N, .03/N, .05/N, .1/N, .3/N, .5/N, 1/N, 10/N, and 100/N.

Contrary to our experience with Transformer, we found ResNet-50 with variational dropout to be highly sensitive to the initialization of the log σ² parameters. With the standard setting of -10, we couldn't match the baseline accuracy, and with an initialization of -20 our models achieved good test performance but no sparsity. After some experimentation, we were able to produce good results with an initialization of -15.

While with Transformer we saw a reasonable amount of variance in test set performance and sparsity with the same model evaluated at different log α thresholds, we did not observe the same phenomenon for ResNet-50. Across a range of log α values, we saw consistent accuracy and nearly identical sparsity levels. For all of the results reported in the main text, we used a log α threshold of 0.5, which we found to produce slightly better results than the standard threshold of 3.0.
E.4. l0 Regularization Details

For l0 regularization, we explored four different initial log α values corresponding to dropout rates of 1%, 5%, 10%, and 30%. For each dropout rate, we extensively tuned the l0-norm weight to produce models in the desired sparsity range. After identifying the proper range of l0-norm coefficients, we ran experiments with 20 different coefficients in that range. For each combination of these hyperparameters, we tried all four combinations of other regularizers: standard weight decay and label smoothing, only weight decay, only label smoothing, and no regularization. For weight decay, we used the formulation for the reparameterized weights provided in the original paper, and followed their approach of scaling the weight decay coefficient based on the initial dropout rate to maintain a constant length-scale between the l0 regularized model and the standard model.

Across all of these experiments, we were unable to produce ResNet models that achieved a test set performance better than random guessing. For all experiments, we observed that training proceeded reasonably normally until the l0-norm loss began to drop, at which point the model incurred severe accuracy loss. We include the results of all hyperparameter combinations in our data release.

Additionally, we tried a number of tweaks to the learning process to improve the results, to no avail. We explored training the model for twice the number of epochs, training with much higher initial dropout rates, modifying the β parameter for the hard-concrete distribution, and a modified test-time parameter estimator.

E.5. Random Pruning Details

For random pruning on ResNet-50, we shifted the set of possible start and end points for pruning earlier in training relative to those we explored for magnitude pruning. At each of the sparsity levels tried with magnitude pruning, we tried starting pruning at steps 0, 8k, and 20k. For each potential starting point, we tried ending pruning at steps 40k, 68k, and 76k. For every hyperparameter setting, we tried pruning frequencies of 2k, 4k, and 8k and explored training with and without label smoothing.

E.6. Scratch-B Learning Rate Variants

For the scratch-b (Liu et al., 2018) experiments with ResNet-50, we explored four different learning rate schemes for the extended training time (2x the default number of epochs).

The first learning rate scheme we explored was uniformly scaling each of the five learning rate regions to last for double the number of epochs. This setup produced the best results by a wide margin. We report these results in the main text.

The second learning rate scheme was to keep the standard learning rate, and maintain the final learning rate for the extra training steps, as is common when fine-tuning deep neural networks. The third learning rate scheme was to maintain the standard learning rate, and continually drop the learning rate by a factor of 0.1 every 30 epochs. The last scheme we explored was to skip the learning rate warm-up, and drop the learning rate by 0.1 every 30 epochs. This learning rate scheme is closest to the one used by Liu et al. (2018). We found that this scheme underperformed relative to the scaled learning rate scheme with our training setup.

Results for all learning rate schemes are included with the released hyperparameter tuning data.
\ No newline at end of file
diff --git a/Corpus/Tien-Ju_Yang_NetAdapt_Platform-Aware_Neural_ECCV_2018_paper.txt b/Corpus/Tien-Ju_Yang_NetAdapt_Platform-Aware_Neural_ECCV_2018_paper.txt
deleted file mode 100644
index 610ac219bcc18b205b36b0e0e320943e0782f40b..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001
zUqJ+?BTj~eI?a@_uPS9&hNQ^?^J&8U9wDC(!-}%Sx*5+_MZ;L)l{?^}=Q8_N!l4I^ z(p>>#&f5`ijc0%|<6)xdYS_|P3mdtivSsW+6v7U$u3h;=o{hGwLKE0U9MDQBa<;`X zeccqvkVMh}=qe*y#-}cRTrg`}#Tm^8i_1fMcK3+0>|s~2taf_W?v@FgB!9&w5sB>T zO3Jo$#&Ebrv4|~Nc~u<>lhZnu4|h7Mlkou6+*huvPy)YdDa5`i-?!QwAQCQY9-Fb# zRU+*j81Z$|?yqxA-E`^nBpPJ<4S)_W+Tm00CeX0;f?K@-!`ESmENid{d)QM*z-EEg zwwDZ>Z=P1oiIiMqF@W(Z%z|97Ov)@wGB?^T^C&4FSFnr2{?FH1#Y>g6c2mQiwY%Rm zNq8yuMj_6S9qV+|tw7(hdg8XiHK1!@)x71{*R6-%>V^ZiLSd!PPiB{@a_(FV$SZ+3Aay{0%@N_0;S2^TtfF}s#4yyBCoxTT23I{f#mc=m{@0;V4Y1xYHvv>({wA%<%o?z}+fRzZIerPepf*tbn z&~k&Paf^i(Ll`)OY>T4OhNtbt|Q+RWQUF*)~&}52CrU;~hs}w(U`l#dJ zk)A5YvjX!Pl~?%87k%E}c_^_F63mDe`4Blei=y0ll1-9i>7mHr0=&+26!REQnzxgM zD>pdnt5gsKzFjT@>oW+uG2=uIl=fc8VZR7P_~Rdm%{$%HD4>~XBgGRdr6Il2DfHA8 z2@0wyB2QZSYC>XE=-ufDXWrHAwIGR=8Zf|_b3Hb8y|XZBM2^#sDvzQF8B6D;qB^3^ zmY7%o3UCNpE?e`nHwsicEs>`oEkY$jn5ZFCsuT4-DW za6%yL%L^TufeyUs*DQ!vtG*QeL(#?r{9AQsTK;)X>AJ3-Cxt(sXIfCH8Rvu`& zXrpt72A0T8RDE3S^`BbUY1ym}{f}cOYD6%!k(n#e~EelD9 z!13DE&SVS&k0KPyUS4+ND%*)Ax0QCh35GnKeo`r=1b}MiA$_%n3kSUGzhp z0Z+2<2|0Lb@<=W*GEFVJ(;+H?fnwsE3CH4ryTkIaNOTPN1a9OH+W$&a+N#wWP&usB&MeB6}TVf%2h{FZAHF=$B$eOJKP76_3N%9#fOwcaz zj@X_n7{-@Odu15P>4(|on}fA94ZE!2d#hs`Ch)H)h}Q*#_W|MEWNp?q!v!eIo{*0sYImi+(X_%veq}n%P0B%&QnWyb%*MXt{Vds@}r{ zmmbcD(rbnbb{xvI%5Q2xzo4n(18FRlZPti`vNXf%KK;ULUecAJ3{kh{*uYrxKaA;$M0Q#M*k8 zEybW75slp`zHfcWM4k+F-N9FUI{TT^+TSvhC)ic}$wX5+yJp<<*>uC$oSvA3^_ZSH zJV}-Ai^o&|3m3~{hgc}5u5cZTNCD@?@W%fSi&x=hD+4G&o;dtLH-`rl6FnV`v|pde z9TF@CemlD*qh`|Atp_7|)u9d6V{Rhb&(0p5x|HII+5NrtvHSP82lJHt8lqi|Iy*R7 zHjtHanSp$nNd5$0%M!!vRjF1byx@;0xv#Cm&5*Vbk97wkV4zVpe3__Eo;>k;$b3Z& zoQiU1yJz??xlX)4?#`(EaJ0We(C!50Y9HXCSx=I~AGIeq2``3vj1t-@z7uOzT^!y! z#cMu4@AF?vf7hq4`S)qr-{n7j{VlKy~bj_=IYD+!}}0t|LBPB z)YXRlNA>S75PGCs6sJu7$$5uQPx$l_#Wox66yNYU?9k8oqzYwXDG}_^RRxuvKqby7 ztl4|lS4P06GO+(V)Gsg@ZMbU}ehQa|p;!1Y>zWA9r4HF~>GF{^T{O1q8u}LN$>@_m z(AT)_zP|yFe=NQf_0FKpD=IPtwtF+ycgBDnyJr162=*MAYGoC$$gY4zC{xv`1uH_1 zWCi-;k>qNsNH*gjY0AK5r@b3h4zjX@aRXA?FFRgLrlA_#GXTi4vs=tRUGgBVYPSz0 z&^cCtCxuOS3+}D;_4$SdR_h(%IU+_#=vq|nmT5`YM}ugL2_yX3%>dD zRZ)A$vRJtuoACGj8&q>iL=O03zdmf=-`j8P9UMNuVa-CV;a5tfeObMy{I$0J7mWXh ztJzNRf)Dk6|C%c$MqJBOj^0&>Q2zFvEvdxlcjYY;@lv(mAZeb{Q4f$Yo$|I?KJ9go zDPNk*kV^&0fM(B%N9^BF9V091=o5ywZReZn8HUP?H&ImTTw1*6rR7lSgo8tShxd=y zWGsni@%xrY^u~zwsP#1yeiT0I93?;_FtUab zAA{6zrDy}gB(KJy6w9d&FCv|>Znb)o2Kj7HH9^Ii%bFab9PgdLJ|qyWXZ*Kq5a^dP zfF%&)8B=|PIOn&1XZ9g1^;tFR$=DauJQ?hwIr=w2=YFp8WiZ6|P~~$%(_;#68!2;i|K!A|wPuR|OTX^3=f7`>5nfTTFyM>tD>xt!O)x^TB7lqM zd^)3mkvQieY*mZR%8T5?BvHmLOOMPqMiTkGO#CY8^xE>Oquem3Mt9wAG}o+0kE1sE zk;g}rX<4ft-?qV|&w;bjg3=k_(|5t-G38hd2JEM^)+tjrvkJw2uKW(-3*nj@VaUY% z>()QVX-8F74YkNb)%)Brku~v4H22fiF3| z!RNDCk2!tYdc>Sw>Yl6tRwJ<*g{&|35dT)xOS0^{P{+$er)`8$Zen-q!aM#wNmLX+0sD}ev3SXKzDcXBS)EBt7 z1`X}+Kal;of3VMtIX*$6FSxu_TurCflZT&v+N=7lYJB-Ac}`S9c<||0uU>!isHKNm z`zOck6Zg(!D@XsvO4#R8^$&Bi@87>~5kREAeK;(nE&^Wobjmj*vVH0hwN{UH>=*&D;;320Ev~{q?eh}ly z^LBxk*L(;!$Zyww85~1%{z<=w`CXMgm%)`1odbs_$8ruF-rwLHc)Jh!iJaFjN4sCm zj5~tB{eHIlCERA;M2@V=z&?S4`!~EZDq${vv@f_Ek^E;|^-qX_k!BK-YlDMR>V_C3 z-fFBVc-lzzGF%*xgp!w{1Fa2tP5Oh1Af}pX2%EG1C1^n6AUI9Jw4#?6xnkVvj9T?_ z?nIOYDm+H<-~bdd+vNSOfk<~w5-)7qsK;Y-9gIswz0Bw|dFxC-r171aNn=Y4_`XZO z=?%04?|qN(H3#nOU3t;DLf32ayy^@t5WrGKGCUr4CjEEpD=B_HOI)-6I#+{th7BJU zQZ(1)z`bR(0JBe~6>$!cyf+#p(ummYP+wx*fa=PCA0lu@TiO5mg+JIfA54Z#zlB$F5+MbczxAOVs{fdCH%NsP8* zmQ~NtH*HpR+gJTm`2zh4{rzh#A2tX|R8(feA_M~1d%OFZCtuz))$7xfmoFbRzYHd` zd3QMYt>0^&^!u01%l>@Q9r8B&Jf3`uR?@q^~ zW^c%|N7H6^G97kD&7bE(#`ur!qQ1_=&E9VFaQoi&y`>L0y_lTOSDVpUUeIdu*ArjNYSaB~K0F<;jMZlP z(Pw}5Z=+9x-k`gFGQOw*wR=3BO}ZzuzyIz3JUs6=7vt%y`8??LhgZ#f3Z5>yC+CAv 
zzZv$slhI&w)|^hd7yUI``{q4f--1Ht-n}0lcc=ZKp5kbp^#5Z%nDjNkWZKNe%}ICC zKb;R5b2jIy83>v*7lw5%>YnwR>6mZ3GY!VVLu_o_0&EZN{g4MO6Z=R01rYy{kGuA)&x5H@4E6$HH@(@N-O2mGr$?Qg zdmHQb9&|RgK5aJ5(aCiE$=-*i*?7?W`+V4J-0SQ#&GXkkuQ!M1gK0DAL%V|$L*se( z;!@Pq)wj^<$$5YB_rLvLQ*iR%`(FmfgQ0K$Vy3fs@5+YkJh%lOcJBSq^aiJ={fXqx zHE!q*&&HF%?EGT7D(Q714gvKS8mZ6s{fVeN1G)R0F#r}#{|jsgj_t=$^UL$+Yn`3dW{g-LpXO%>u@a*&uuUk| zbUq)QO|BfpETq?+q5XTNp#l8NS)qA0I9qQzkFLSuv-I`cctp=X4yA*ZbS7JqUh|RG z`pK3!F~K|raj+O<+8wS7Vf~TVa~ktUBSK}h*0>xG2i&?c?4PoNFN5hF$)tWW8JwN_ z7tQp1{5eS2)yZ*p2GyFr+1TD`RveIGY2@4U$AYEL?vAhsLd2tCAmXvp*T7wEtf*u4K{SR}xZPFYLAob?GF%U|)*PM*8K#anXF|A3hHjFwM z&(FkoMQ|DK1KDZc0Wg|h97_(5`d?;RhFy!rAW|_Xb{_oDKq4>`?0dDW0hx@NymS@Ia20_%v^g!<=fVM(*93j!%lM!TMflpQ}FazXv#xh z&SwDa(_ku09Cbbb$jg2oLSemQM4DmfXoK%1`9@tdENjEqKbxBj4hmrD)A_}vT|GnB z_e{2l9`-M|PO|c-v-tty8MWkprI?TS8J#r3k#WMuh@H+Q4yDj5lF!ucWM$at1Ncwk zb@;TAl3Z_+V7Em{`2aiE^`yU=7q4+9-JUIyRq^wzzcxNyW8Q1s34-X7`N1`PNoeUz zJ`M*kjw1v~f^dN(g6$c9dw~d?I-O@W8pHD%SC1h5elO3%OycY!V8@(~2LCaK>CZv4 zxMkXx&SQq-)7fY7k+~7>4!HdQseqqQEb?_{5{W{Fd&OuzV+jPxG=z2D4>NfD$NN7c-m2othYu%suLC!}VIIDLpYvi%GO7`04_+#52q;M;GwgpFK;M8|H(|aHdd=ne)l{};5(;GrdP1H7C|<$13K=uKLiAiT zP~Rv_aJ$VOYc{{09}o}&IE66dW9*a*O#m5(oHFSG&0dI~5YOv*JX@N4)XFtj$u{Wz z+MaxnvBlVscw?x>21jFAxDIA31~E!5JLWBq^?F#A&}#`|;cm}{eYF2a@N*Bm$lyA7 zfhd!6BDpT(-}|w2K~+t?aQV{bm%tsG~!?DV~ra-ZZc0$EfOaxDUzI6$@G<2*1VripGg;>mlPb z`%j-fy}P@2fLKEbVQIRSGfbO$%b^Cup#=-%X+nbX6Bl8a=kWFlN|+9aMxvV!1_W7Z z2oSj3-+S|FZR;T-U-2;{O6vvlx=^~A<3xg21zIenuRO+;K!MS(M*CXgWcj(F$g+iG?jm9#*)w92urwg-)6PB9ClA+ z03cQDWm0p5wGGoDOR%yO9X53n5+j4UhB5QabCA^aC)&zTBZM?Uh5#1~g>JG4XM<0! zrH}iB>(T0Zfh7yTwIe7JIad~)aMVG<1W2cnfq7&RrmkZ=3qJ^or(;p7-!tj=n~&gY zd`t-6N1i_NK?}~Yum?jEkD+sSpH?A8Br((mIi>I-0=NbGpt6IHF4-vG|tmRC?ZuQyvG z4i&JwD3-!@k?;Z3;+65KO(dv1A~<)7noumJva{h+&1*dAnH5@`8~GZAGQY&?Sg71;N}xh5b+6;oL3Mwm8pgnh@4AYe%*qLBSzn5ZK_HrN3D$v2OQ41vQH*dB7$h z8j)f24hp&_YAlN32Em@khb|Pq$)F7@`;YEn+%XN!pgEEyjGsMp z7l~vfCqbSE?~X7%2iO4S!^@w7@nZ%;A?$+yGXz?8qC!-&(!(FImlm8=;sc94bbPz_h?$xX1`wR?DxY z4sgmb=)$pKs64hn&i%&ao_T;&=P1R{bNp6OrLdxjR@|9@QNRc55HNK?kFPwjykm=v zkW`}*daKhhrtMuPBLHPeo6{Wt$06uXJTKt3f+m9WN8P~LBZ?lr3$B~*j8(Y*UOdBS_Yp)g0TGYLYQ+ndc&}N~4 z5NvsDJc2NR2s>V-l`Qh20DTcs!pXF0z-eXK3UZ2a6AlOIT_R#GxPfHEwrAbx$7w-= z;1;T8xjIQD6Qa0_C}@!KihzN0^X!TooNK(O!O^0-LvB8cJP?Z1xTDCPl)p@LWfdAw zNpE6i8bFXGuY4!rLxIFZJGMP_7VfVRPzlOjgt_2wEl3Yv7QY~6_f2Y;mQSut(oyCH zt8N8;cls6XO^P;6q|wn)%uFi>KG55lc_hNs&;FImRaPCk3b}$%2Kv%;U~8VE*!t3=iP11qOrrF;+$&LrgMS zvykxyBMfAbnWHMgca40C`3~U;&#d(d3#}Dv3!^*_?F%NOa@jxelsOb{J^ zKmYIJZg09G&<-IFbUYbZQkQ*VVo28&O^uo=I851fX%OQt;<_dvA~C2UnAsT7-4+BH z);VUv%ue&7d`7)7D70x=UKA3M;&4HWtTFoUe>tc4XGV1&H7b^;cDWiXc+DWzat3AT zEX;(N1rvJMBUcP)0>B;NAiOnud{SQu`I6Ljdl)ewFBU6Xr3{fgHh>B3M%ZC~6xpsx z_!2L>GeC>~1Q;cv0RMa<6&6gF93^yFmSEYNO%@3&J|1{HO3%p-tqK1OX>kzF+8L%3 zOP2XFlo>1H8Yx*nGzDBi53|6^B6y@NpUHXVOe*Ao#DXJ@f;JLY0oEud6)Iv1&C1@Zm%m83>nb;Md~(go?%tt(xg29) zafg51nnO#H1;l-kgmH*_imqX;T5L5duV1}-p#j0!LV7gM>6t>ay9>U3O=c>B@K8=! 
zED#Z*1xx{b*f&x{J*8NbW>ZHiEJEpd6XKIKqEq77$OvG=&A$3VawsMo25c0U~5%!WdfCGi~ySK(i>lc5&2U@96mqnCPa83 z3RRA>vGOfb;?;~&&YR`uVTtjW<6}}8xyf$|Pfp0a0Y*4*s_2?b8y8rsL{rH{SRx0T zDyEWK8w>ZOf*TU6P5IMOvL70CZVJ+`S>XlKoJA=h(kd2MsIK_Yvf0Ai}C|_ zk{OX9Q^3wMPcx_afSbf~VjD!6iVhGJNhH=|TL2NhmT*y(i3fot8Fbs(RjCmJ5-EvC zLf1^GFb^ec2>BoWa#vCyj}0g^eln?C^tIq`;(Cyh2}c{GQZ_pLzrp{T{7>p@gNlI- z(q9{-zcwE5zoiQSDB2leEVdGp3v~d@DRns+IN!|&!x^!F0s?5jS>{+}csxpa`r~N) znULEVg?CwPFA#_rG}f=dpY1KK`_ytzT&16tV=mJyqnj;ENjfSVZ<~ZFW;98Z5}^9c zrFWzi-B5!^QS6ltiaha2RG$ULZ%Ag=gDa~Qdb$jmV`8pLnK>)K{&lUBti{NQI!)Oc zQ=Gx{AvL~ACp0A&wJ3Gq?#f(?B!gLU%J&uzS6D_KV`WO?DYJ7;=H$Y7@hWBM_&lWB zPtL8XvGC^PCf3JonMrK89ephn0OFFPhP#>#8p->~_*kiWW9eZ0B7Qt~3jt$sE@q@c zw(W%jXp-tid6#~8pZ=)c?YkGU>nxN?09mr}*QT-Wnm7H^6NK2GnvLon|4Co0bV~fc zy5QD-a{rQR{aY9Y5PgL2vsz+W*IxFWOZ4~R5ue=3#*&G>^P88xU3=~gUr1ZT5Sw=> z`n10@K!B#rXX$&nwET6$D7W}}88UBi+rlM2^A{=~U3}$5l^=j=Q%Itdh=dEu#ee_X z|E(L!7v=WquN8n-zkY3sT-i7E?7wCLv7zfbR;UN(^=v+J^P&Dte^*a*_TszV`=)33 z`N|*Fb8ZCj`zS1`uKoI-4voxAK(77r-`~CUZ*O|9?zskz@|%Hn8K>AK6`Bd(a1?qnm15r_nY_L_5xpckv4gVT(BOdc5+@RKfeD-e z3CM51g17)^9`)NdUAtl#_xN`*>;5_h{^rf*F+dgh7KL(F-Vysj(5hf-<-Ky#T9sK~ zfzLi+1TC?cxI|vl6dZQxQ@p1y?cH=;QW`7UZCT_uq;Gl8GhBG&Z9cZBIc}(UQ01QT zHLy|^;+#@`z4t(&_p9KQX!ntC*3nn#wbETO|CQtV%Z~QPOlAi?l32`VPYo(B8#|>pn5XYWdrD+QN)Uv}X{4ZFv4Sg;h!QQti_47qs`=_-m>$nnElTMv%j);!q)~oyqFt-G zc(Pf!B3eUXvJn`OWj42_T|H|S!(`vfc(fL>qK~*lSVH&)B83NeA;V?SDuw4O?az+=zB9i|f6PsFVDlMj2tRNzgVsVwXq$_f11NuW57 zDz=B_?oH1J+T_x`F}-4TQkR!^>tdG`_AtmcFaNDDKTug+1NKlK}9Qqf%5JQzFkfg%ttF z6(f^7A*n)Rz*SVzIqAet$~Le+2t4q59BJBSVRjy$He|>WP=3gQ^m$p!VAfs&%pycv z5Rzyiz)z%=;vsIMnh=oL7awy_Ts{E(HBwO4{WSpN4McQ*HGH@wrl;bn(CU)czx~<}7Rk>R$NA&{&adDw zKsfpXW{j|BOh{zk3xs?%?iKOyL+)0cs(7w>O6tOLA|2fraE6uultx)V(0jC58tYE3 zA=-!&Ej6$Tuy)p6Ij+Z#(#`Tp?9h zD@R0m56H>$_E{`J?YF)PLi0D#K|(U}aO9gW%<|X7Y6OV$3p(}d_1B;5t3Lo2kGc63 zLv}Y8oY$W%7y?I$&t>j+KXeTfnpQ(~>@nw^7if{C^itvRB>;dz) z5P55}@fsSGP6Mp;qZ>&2tnXEtwxVE!_8`f<)G%qAVj;8M`xOVRF^;~6cJO|3_Q=-Y zP^ewgRV%7U#9dP*X%d{Psxp0Xefqqxu3WtC@V@3`AlW37J>4j_4L!Fh*g};=Kl3t- z8UlPOs({dw<>_2MKJC3{;(=PO{N}4hfmsWJlYBPeqg}-9HQa?mS|mwktYjUt)fMqB zP09L|WGRu1;VcP^GBWToZ!MHIs!$xtwc^fJWMQ!!%`A;9fnXSu4)Bdt_-T!X{4|XP zby0@T)`f0ivzodi;YBA0Iv2ipPC7lWLKz!n0TZbuRHP<tMUVxx4!I;Z8WR;8S~M(JHo8Xd~Rg6^n=ghpqK{ zcO)h}KRX?K3711v0hPBx2&{?%WdK2vGK|RS+NP*{NhFOa1Jr(!Ya)Q^DvDB@y)MBo zs=veMb&oZ(k_b@yppqBO1kxTXNf4@Yihq+QBoa8pHz(#Imr8XBMv7^MiGCH>yBI{` zSvMKy#|X@+I7yI?$HzeWv%{!cxCt^wSXteLh^TKk1YH>bLZVp@tetX;@@bHI&xemRE#08}2O z!E9e!G{Vqjp z^TVn?cy3cJ*o>Z4&tq$w3+w9gZ~XZzDqv*c=LT!vxM%s#^^;6!7HYTdc(-Y;)ogCx z1B?|YmXFmm^ta=unDI>)-T3|M{zTqAu)7+#TIS^(CN*tkuT zxP>vl{*kVUY_RiYHebI|zitD|O#F1hyG`#v^{lJc(9pr~+arDRiVbjU82jdx|0@?H z&=@*IN#EqbJepur!HQ;r-@aK*@^1fPKGguXzprW3EAw`3`@ZL?O{di#^ft7*kr#X$ zHf<1a%>rDT=9?(VV{rR7FNVf{Cs4mVk>=k2QUgR8aDTbiF+=Vz_cr2vGv2r2eLLQF z;{ATSKZy5-{!S*-p4W-z(O}I#$MZVzyiPoieml?hG`jxXuzxo*psZA2>(OHZpdWim z6Cnq$iL~td%l=KaAprzBVJ^Osyr?2;`I0?iB-6nu0qk2gykpB}4!$SrDQ)h`E5KS( z?#GPzT7p+9IL{#cVXNMLg!m7$I&7Z2_0JpRG%Iwni73=DsU#Mxh8KD0^tju+`!0>> zI+hGYAFrK2ZJA9n&oE7X*NEoT$inhMXbT0Tgou^5n|8LE@)xQ7AXYh{<>@7D9Hwrq zl~|8_9okx|J!1-(VV`?^$PBXNeOiV{7FzOyqU@p=0$Nt_`fl_cOK?BUn*n_rstuSz zB-&RwT!QaI=1_G~tk=eaG{eBV(~x&(ptgB|SIR(n zHmOBrU&$^qhH~=N_7QEQ17M5|r5j?+?TJ+gV_w`TOWwP{BxF$9A1yVb9i837)QF0y zD_()IOuyCMuWBixhKfsCUCwzqzIY~LlilAx+K+30Ru?q0O=_3QyFnV9H z*1vIYHr~GX2STA2eL0<;%6`4KFHnqnH2uV>thc?|f-+}%hW1CM-e1KTvEG+Tae80g z9ON>GCXtZXz!yn!C&HtH=>sN$PCa;hmkm7@n5_!ewZo*jw|v8+0v@IB)5*(S-h`pFHw#a(+KOnqG0o0arvNr| zl_nlL{&0L2@0Z;Mx&@zDcNyD4(aTpZ_!OWkcL4#EP@0N5w2hJJS#la8BZeY6Q;n_S 
zSGLQ_Cg8bxEEmbMYH;mMi+oSC)G~o*>6_F^sI&0^tx0AB1qGrdx}7$&18?7bS!}>Vc2isWC9htJn=@5(PQc=DPCki+gW2hH_7!0#Zch;=nCe!19-O; zeW@*1HY5x)lHrIUeS%;c3k$A#7;lvWAha#`u+Iq(lV_-0mJ37<>0vQRJHP!)o zgo-|DGo8vXy}#^M>D8PE=24g+L=2;sLs+Pi80r9)LeSces{;|tMB>qN9*u4&(yC=T ze8nEu^ucc)Kh?Ui`=pD^mdTFH%lnXQ&rhzZvc;G5JAC6CxT~Jtz9X=lrTl( z(n^kJlyZm#LMD-pm88N8zymbue*UB&A_kr)(=zTB4b3%%@39QHRd~uyNn0|4F$7!z zy`pz5*&E$0>LwWwx*qzuYncf45_uRJ+qnf}VL~l3C_EXDqT3Q-YTcSA=&np*H<3bA z&Gs&?Bos%m?t2QQl(XR|BOBK*0nE0EVQ%;J zzi_w%yKi$nWbiFDFi5gES|p4I%xlV8+V~|B>|IsZLvFD5gnAzOof^j{+Yh-Z8SWgA zyWF%E?)2m_Zu!fkGhmRZ1ZPQaMTfM%X%I+W|^bpzBg2qv{<%R zPTX6I8j&zkgPR;2ITed;0NXyC>jI}H} z;%yf>kFK&UkYZ+q5($jACVaPh^)EqV?PMpb2}$I6DnX!Z|6S})S4B=XhY>5)KI+Y^PsA4Hp-@=FXbecwSN*97st6eC?EsDyrkklxN^vw+CBG| z%%}(lSep*~8s*8rKK0B{Tj}k_j@zi~QY#s_r-cRu~r9FrAnIeRZGImu7RA~5SAqPgwT-TV2; zuKAm2UQ2x^;4jpyT84+QWs69hd3v`}u&H|;42%_FDQm4gw!xb$QglL!04o5Q{R)Ho z;B}GjRiw<=&NIz&olX8#y4ULVoIRnIjQRwu4Nc`FCK^eKfQ|v7|#^$ zjW7465?Y9RU^?4666k*lJl%p~#GF<1Y}^6iwzygz<%WrPjU900|lEl$5H0 zzj(Ph+T8kpyijXitvkm)@ERNuZH|M4FGM{F1KeO{1Lu9@S7$2)$e%@kD*sgd!By$9 z>f838?H#pwarH%yQ#YfODoj7V&I_om<)Dma<$|6roH>-PzVTh>61}l*F=~JyH+AVW zZoGf-ho$>n`BGVlZ@rXy_#LAgWWM$6YrnS_?EvGV7uwZg|L6|3kXb3x()V&{`OAi> z*YM!m%O(e8x9Oky*8Jr^ZbereJu-?GFwgL zs5ZICzf0LHZJYORw(?*9KV-NV%U+i1x&P>?wf>g{pX+UDu+Z87n>Jyglg-LA|3+px`rfZ_64O7@^$7O+( zdbyjniMhfz+vnN!UMT6&I#M!a@Zvi@k54R&>|dUun5F4U9CS7yIDa$ zV-`fsuL39Q&CVkaU)tP?23)2Ppn)N-1&f77>Wc#Ry_E2KN^p32Q4 zbQF!DFbkJuZz@xJy?JAsFLXdx<+WHyjEXkbP&!#Fny7`jtPWA@ZAB<~h^XgxyuPN|Q!VoQh$|$eo%-Jiv%(26FR>952_lXf<~+rO zAjlXy%>9pK83C%&Ze~irYvCao75Qn^Q@$R*SOmunIzK;n@%pcSHqB)J=(?qvH+^3& zEq~d=^U$qRh_c%rjGH47(R?_k2Z;IlD?cYb|Kr|nrH=FPila!pv|JSBs_3Vt(}q?W(#un!Rxj9s7=yp7Gc1n_T44ziZXE*XjBB#)@mCJcNHE>OVgIYov?e zVzp<=t~1_O(Wc~E7{4Fn}jPsQX7k@JG|KNLcb6ZlcCE9XS(Cz{CN)RU<4=s_^ z3SF>VmM5pr<K>>_iM}cEBfR=@G)DVKixPAN!BJ6csb6ayXUtflcA@p9z|7u}O7C|s z6l`IAs_xctz!vHWiR_)V@j zJ({OvJQVqBbb85gvGgGp4!!)|Z0!XBprQF*VkCthmsugDQbq>6RD5z!dUAM0UlPW3i5pv0s>%mD3M}sGVj!d7T`dWr`@q-R!sv0ey z<3T+aClgtCRl*Zq;DHo>MTc$c%C30R&b^D0aR2~qxjJf2JT!$UG znu#WXidCyf$34%yUeeI4>}*v%E0%@PIe;q8-O(dc)Yo>ZI-Y<=`WAvpl~Thi?5TDk zhFT1A1VR!yfJ}u)`m~{DXj6TRue2oSLJ)l)5Az>{vQhn7sAc zy&SOhqmwep>ug(dI#d3Tz0TeVt5v=Qaa164b+Z(K!9p4MZpfm{JiAk@z8myDjv;Gl zlhRkR$S4cA4|ECATh)O)H$HyJ4?+gOjvc?^=;*ODyh;s{r-qCc!M>mBcm%IHAzDo* zdZkOh`a)t9gUC`1;=<@fY0MBI^CVDIT2&(E1jQ+<)mfLw9r=|U^@^))XY^WR9Nv{( zVoahZCrrNl<;DKXp9rLHll7pZH@~5RfSAhy?yYDnRDNv(*oGpv-7z*e>WYwlPeFSZ zMi8RT-_XH%5FeZX zk)qGlkQoC^nboR;5_GsM!Xl;R6`nOen^>z5E*)4JK_`C!+RXdCKtg~5&?Yi#s275$ zQUexc%SXv*@p`$mOAKwH^`dE}G>TWDO$M~hTAo2XRdFFBPu;L6iCt#}8OjPs02d}B z^i!fp;MMljfr6+Gpx+15rnKzJvoIDCs=Q$;%Uy^zrroeeMQaLD64JS%vg}1C6`Rgb z4FZgENv%m>RXe6)Ff3+OZ8Czati0$D7S?vmFCd_61ck1mh9)VPxZ9*~iVbbhK&f4C zR6z%cvV1$6Z>e0UC@3)G)Dwd58SpH-acxL82KXNO_(UsDC^cmErtzYD#yt4arZ?zW zVuze`s7=G%PLA6mYt;Acg=tb1P5T0?78W$HKxv7rEc7W357snFsWyj(^w25VYTRMk@eg5j#T{z&mH_`F;r?e-CmBOsItSCKFaDt_#2NDlvKSrh@T{7b{hA z_by&0H2XpUnEl?P7;XQF$ik=%w$(^J3zladpjiiS9QID{ThWP?$02B!Y;R-V3^TB2 zPme|)*8VGndqd77=ra@5^l{1`nQ~SBq_}KcP~j%qpID{en#&>x{4@J@+OvQg{FDr) z;g#H`rI^qu1{iWkCI@b}PyBZdN-_sYjB~6>JbU+~{znPA$NDP&6I`ilBb=AIXu=-$ zik7$O>*#!ZZ;KVoD8;mT=tqxE(KMI^wmd^Rot)S%UP$0h^MCx#1#L48;OIyH7<_=E z_aFJ4kUErB;dM_9S^>Jvb=Ydt!e(ha{c(6GnYb8nzy5gE5qtc_R_giqNf`8NyovyH_OB^~q8uV%| z`O}}oA$R`Ee`)=kX3+2b=}$(mcjzAXZLHoHm%R5dW-Yi3yDpz>J`$2i(Iu9VrfaL2 zuR#Y(qWOF|(#1K>R+ zRxS4WlIp_+*AxhcH`nHs@4V>^?g((bR<=omPVjmm~QeD$BE}!)n-}kgCSO+0gy9 z15=E}$Qr9*r;%{n5VG2m1hX+wZ5ot!RUMCkZ0q#$O|yIzq~b806HMTGp1!U;H~rL( z=cWZ!F09O@0-Sb82V4-&l2aaYXv?0oB=$1pG2>C#J?5Mw*gCVyB=a9d>6Y@WeOGzVeuB=ZaZLR9ttqwI*0>Oi%kcnH*ob6i)bf3 
zBHyDhzApTuLEA^;84+XBHVke}`!M|Bl&LOjRvmCm{x)HsyAd5ZMz!Gn_%7CguKm>&Dv&Ha7oVq_d1R+knSrep z5mE5GKlou!EE0HKLI>6s-L-RET68KKS!gY7pk?!eHkd>ek7w1Ys9Tn%79JydX~qR` zkdHAYMq>FN-7Og#RF_;g@f&{h^_LdG-XSbiU<n0f&Et8#2hOLN={PLImC;PhuS#|o(JKNnNKzNcCc}a6v0{W{Go-n@a zQ-O+|Eb+;O5pZ3qlmIoCSJ+E&Ip{nV%0w(2nqv4*yis}x*ia(n;Vt4x7yWNInnJBN zyRzt;EuT?Z=K$m*+^W1l-nQar5U?w7NLP45#-(hhq`EBtd5&)C4zXE#7N{cWBFL#< zm|@4cpAe&+co_sZy|aiQE67u=6#x)Ok#)U+lJ4mgZUegZyu>O)?mbWfM@Q}wiJ?1( zoz?6cdP4*y5Stu8fif-9Ka#a{%<8j)fTOXZyP4#U#^hNxd&5Ih?S{s2n^1y8J zusD9iYMEh-Y=9Rsm4G4?Wsk=o1JgMOiqO+E=r&9q&IeAI>;0-kClg9*)(^1iO@eMc z&$CjyR)7&3H`ncGiT8;K0u>Y$W}T4&YU}9_P5FAEp&p8c1n+_tDyxnt*evr@Dv&|f zNF^)_x#W7V1dARZrR#EylMYlPybDh~UvyKl-9u ziZhnwWD^0;g3}+lZspFp;8G1{(oAz6Un<2Y@JgIld>dg%-4||DhD$UM;r6V$0Z`Wu z1LcXf{oNnQ-8K|mSw$9l#dLYF&Ioh~Xi9Cgc8p^GC`_hnv=8U^l@Dw8j*$biH1-n( z2))N!F>|O;WvOSxn(b&3FU(3Yn!&|d zH(7#Z7)ih+;!{WLFdm_a+OWadL6j=n=hdi;V)81fS9UX@3M|qY4HRHJ1U3iPUh*hY zKt7Vw!1w;^E3{&v>>@0E#tJ7Lw*^K8999u?M23!<|GlAgYU-SlpksIv$rxe68R_H+ zc-A*1#kheycv0w#pvq!Pfr6mlN5!NE9N>dfpSk5#WYw9#t0x`R#>DU&eF_gCB!Vm>ojElbuu6)E0IOSu48x-2Ad!ibIUlrRhhp#&SGz0W$X*mw6Pyj|U#%Obmaj{?qw*}k{!X(dwd7AQH+U7WMA|RA(Nz1iG0f2<1wg@e?)AdLyDiqX~hDIShlfCYTK(}@#n4F*!pul!~wScy8$jNMkZ;9

*npT!Z_%9Mvm|K_59t019wKzFj*uK)sS>)6G0Ve3e<+mY18W7iPlLV=0LUY| zou>RNutPJ2+INPc6pzLTMpo&JZV-mP*IbM@O@X9|6v|s~encIcUxC3C-40Tr4v!!b zl_UlO))u*IlcZ}29NP#>WC>O<7P=uF`$gD>PVhyd^9cNtJ7Qs5plTUX)3$hq(@6$G zFry#SAdt%!aBY0w5=e5-o?C7L3H4pI;8G+Y;6YeRXSIhd?zq@botj~Z&lO^Vu zM`v;3Vx_ZB)cM%yJp!sx5AwN-MN~>t$EUIpwVWFcXDZ|_YSXN2?y1x=k6kDzlOWU) zGYIJn?IoBn$7FX}2NfOg9=L$g5L}raVUB@?q|poH z334(UNf< z$}BTyDt1}r3V4NM^0tiLcjfvu>;XO&{O*-D~X5&{r$l;p=1l6Pw20{ahjLQW~h zX5o;WnK$sVmRfZW4^ZY+>*w@*CW~#b#I*y(xl<4qVcRp};25Y|Ckp{NHtwfK6%sSK ziLB5-i>vm%9@m^8VJxXPGanFX)?Puu2Vb(Eqovk=km~HST3I=tuBA3+*7PVbW?pW_ zzSEJ}%4QB_Z>@#&AA=oi5oHB7hdj~^6tK>*jvy%y-4-X=KTG&5YC~SKfh&&E^BQ9N zCTO;h88SV>JF+sCK)=OgUV@IcX5C(wRHMJ1fl&_>TDIkyWuXe?G&^flLg*oAAz{~G zgM^Ho36~rJVilbk4zWKm)!RPw`e(U{1hJqb7GKo({@SkV&vZU~GL}Uj^b8|`DE`ti z3^tPTak!rpObU1E^Vt;%;8U8z7kdXiHZC;>Q`5}8&ZpPsS| zDG)E%+lE>w7}#um zW=k_rix`k;ARDaoRyyw!&ZnK&Nt}EEk+0G8g7mbkuY6ETtPlMdcn7x$@_tNfhLNzu zPjv)Vq4uf8OJMG0FGa1BQ1=>WZCy1w9@$=Yn{@`9BBkW*aQC2Jp*A_UjW*S@zcUyrH2J)N^UR$WagKhDM7$=6HH0wr0AkqL+i3xRV zw6&pA>RfG}0Q(ez!5PK-6Z?@Qq;c;i z!NRFO;~L_!cYGyZRW^$|j+*R57(kMd54o{73-FHJ-Y{eY(wRwKZ`5WZTTY+_JB1T> zPy4h$BFK>BNO&mv7fO%KUUMA6BG zV_NhH9OCxnEP}NdKV(rOv}wnSNg9#&sb+tK7Tkw1;mcZ6L;4!yAykgven!&(NB!H}(ij$Y~>7O?M z-d5M^W3k2h(XwngQ+~reU66z_H1X-d!16gbq{MQK3t6MB*g7J!?VvY+q9aXd23TwH zk39tL`y_mai)j}d*2-|S1O5h|J@w7YYhMTig!-*gmFBT5<~4jk)%dVB7AYZ?mJiiR zieHZequ$vDFzHq)q)5bh0g~J*1rm^7Vbvz>C<`nce7Rz!*cT{6C(5`gPfH}k1Qq)# zAl<-Du$O3qEm>?FYjJ?Gd=jc;|0RmtJQU6?EVSA#I=o=r$uB2+e zZ`)zJ(IeNhTWKv;4K>u@G@hoOLOax>n4yw4tjOXjdEpBb&-&X_sRG}K)h}V9nYAz9 z1l1yKdbbMfE2<{Kv%SUPQ6O5YJ+OK3qqRzasLYJj&T8J5o0RCa0_{(gdG0kLR=eROQUX1>N2!>O8ga7l=Ta?A8)O zj7_OyAbwcwv-eV#$VH`T==@y6aYZ^JZWdNMNz^${C{IYTANQu(PmK`jk020}LK?dc z7HekKLbpJIp=5P?%JKvS?bj6XS9@_5hRUvz)Vzp*;Fq83=5$fZJmc*82~|Ws<2Vr# z%M8{r3fJT#Mx=aL1rw6C{1|1OZc-E%!%R!_BlQ5EOb+DKQqP08pVh!cC!5w$iC@bOBF)_pzp$^s`;0gbl& z#TV2HlU33W1~Dx{M@xtevw}4VY5>A#$)rv*>~prdk&;UOOEa8*nM1iK6%X@Ae0JHY z9oB25`9xNqdAY)trL}Cy7RAhiSId5@odX7ZC0D_zk*)PYPN)w6h=y60R+84wN6t*> zQ$?DeXomDi)4^a>g?UboaATf;(WB95M3QuaZ#cSO|N4T<8)t6Bu!NF~xoFFhSDk#I zh`buKO%5b_s8HT>HK2RLwlyKvozFOgHF+KjaXlY;G+tBoFv8_XuxKB0W010FswA@o z>5$|}+_dpqJwE9>qc_)APHUMV*xaiyEaj_4O>v@4a`a(~HP|-VK7kleUBv*trU=|> zk-GADv(2?ilLKcHO(roJL+)t>S3paXjsoX&6frv14kaF9kt%U){r8}Et|vw2o0dFl zOan({To9v4X22!_9*gLIIVay8Kf8mMxKD8eYgo_pO0)vmL^{Fa7{DO?&A^O#0q`Xv+?hXr$YmK+%Q zR?F@t>s&AaN>PQ8u-K@Qe&MzM3(fm&6qZcnM46!|Mlm6f&;G3_tuU-ogx6nB4}X01 z<-T=iHc;d<0c3aN#P;k)TLic=*1Cq z&*#+U56ZkyiBqLWD-gT6kg|Nb*MvAEl%RmL4pHuD&DhUnnpwWu8YPw#S%o@J_NELx z2nT+f%~hKvW)*rf1{kS}5z2-^(ndC}x93t8Nc9{HJL;d&E{FWLn1BUWxzudFE;g)k zy=Z4H)fh?{aAr|tdQa&!AWvGG<81P_2Vp>LRHVbJwwxl?XcA1qqJLr~30goYKG5V_ z?e=_ltg*cO?)=OdE<*bm+)i|+{V6G$UmRm42v)t+=cuXcvWU!Ik7A^+l4(_dUZGV& zM$ty~MN?kRNZW`kp|Elp(t2-d>%qBQPDsl;R{>jXo5_o}VIV@+X+@}%Rz9elsN;$$ zBR7p3fh=50+?cH@9$v%H7R+5#} zLn>qu#qft7=aL9H!55dd(QcCq=u~!&^xCNpXmsMoa^D-UPj!kOJdgu#Qnvo zHQ%ZZyK6dks>5H3qkQdwT;KD=>MUD+60s-R~?53R(Pi1$-s#7m;8VRx?I8+Fs z-onsg~yV=v@b9O$n5B<>X08uOSZ{U+SNH-IpU6X=$e9%(bAglmMO6 z6LTkFlqm&buoZ{PV1c#n39WiA2*Zm)+3ccI$*xi_x@VNv9*@T#^5F&c8i$y*j6-G6XB!)MQeDdD-=&j zr*O^;AO@hsbn9Y1Jp{pO;?!^;s7H_>;r_-pQ`7`|O<7U*A_P6tj=M<$LSmJY3UwrO z;&(A?lH8>%={?+n)A>ZSDYFmjS{i4`>O;+Ij4@MH3-Q+KwV0ZRuylZF4s09xD|ac< zQ4A!Fj=@LUIB3=D4U+LIBWr%)2RuhWU5iXn^w#*bq&FIMCqHTmgPUawszcz(WMEH4 z26A88xZGUYjYpdgR-4U-A09Q#%K{lnha~*Sc8;R@*7yoXRf?zJz-h<26h=))hz*bX3l@1Z6);5Tkf)LaeLf3i9i{~_wT#VY)_C1OeU|Roc9w`kU zzE?5Hl1gMG*xgDGKvQf(ZmnDM7SEb1^EG)SR*uq<^ZrHCJ+-)g-XvmjFk#kYRw`PC zMr5#v-E9!&EZ&K;vxl7ML}n9~E3Txb5&2BZ;nuc>@n)Y!U<;zabgYdl<0-fD+^9mL 
zdaUr47&qbACXiupZLSLeHa!K5YWIUVAHF-#a&R>bOJN@O)sz{wNF3T=iYn;zQu+JC zF~OSgS*ZlqliLgw-lKk4`oh;^x~i=Q9<{5CS5&#)hbZb1$0re8Eah+er4efyimWCmLYV)`H_r~(lJn`0#L$Cs)xX~Jv*n+q*)1D2hDUses8S0 zK*`6P0+-umwpy(}ZMkv@`?jHzEN=D4@#+0Th1uVd*$#1#gWpt6b-!$G?s%}nLml%A zndOiICbqH#ZhuxH%#;%%4w`5_ojp1j$Lb>6Y| zvA&sA;?)aGwNlM30eap$>r=WfMjd-R)X?8dnJ*6U3Tf3%-(13{Od;f4JWtX|XN{XA za_gmFUmUk2J(d)2q8k~`e!^bH`?~74wABdm_l-{Y9WY*2WOToW8ZG7ROKA(45^aW+ zv6DTqZDuBdEoeM$Oz|_e#kK?}{gzXSc^J*kLs49rQxt*7;H9S7%xCN#=7W$^G}^7w zY85pN7reCQu~a}T`O=uo2tCO^Re%9NJOk5Jv};ljKg^dpm{r-gCdo}y8pSR~g~D%u zQJhL;swTTA2}7+(mda(9g?`2EzP5($C#o2_UlMV~)_m$sKb_s#;Lu@y9`A8{^;@gKCF!>HhKS*(q>U zi~4{309$2%hJ)t*@dIpB12q5e$JY<1py}ey%Hi%K1U+ANvaRr4nXo?DVC4!x#ORbZD)<%x5HU z%t2zIbZkPoHD9%{y>q8E0qaTE={(wHK=Hg;4N(yg8Z(5!uz!j#%K>IMrPXO>r8AmE zIi+or6hkV?kkDi6M^$akDB)1->@ho^&UM-wlJzUEQzf#rxasT)hb}x{8#eR{me`4q zH=WH%dxA}#E~~68d5tQ`T%}rk|p1jk(QOj?h=b z3fUH}Gvv#JWNC&gV`XxAe897&1I+eyu#8ZN^D1k`rt&D0!chnPm1*E;6I&NAN%0y7 zq{_(3c0dtnYfacy3hLNi^E`ARpQFxwo+nIuleQEB#KN%iC{>-|g_J=_)%Q~CrtR<; zCOQ|LVC6aa9gKamO6{C1cN7f?dH5AkhQK86JRs#JA}uE^GOGz~HqZ9ln4aEgjyhYk zjgoc7VwQl}4NL%yG3l{8YqDR>HjjSD;Av36RmELWESDif4k@zZ!B=%B0P^G0%?%dq z3&#AHt6(P#nB^F>E&329y{biUv1jBd6xfoQo1Lt)_X2$4en5@fD&I;VQZXQhBdMMi z&x&qg3h86l+xhkQLPoRsp)!WCccSyT;^V4Xvb{O8CP$}^wmQ~-D{YsK8DD2La+PnK z-I;zIrcwyeX{OK-q>%blxqEp1fqe7y?awa`-o2-hmuOx@bfwx9;h1g%}`l2v;j6 zoP$mddwDfdRMpho`KmGi{GL}`m9P=2A;XF^TMfx1TCI+#NQ=q{(R7MwY{eLIokg_J zEEVH~v!@KkWfDMD-bry7Gst7xezW41v$ZANc+fh$&+B0n%m;grZ@Og@2t@QlW~W zr~-caJ_DR>&g0RQVsy;z&$PzCTj!G`8P0)l#}mP=Kom2DdofV7S!Ss- z2@b4fNgg!9)utXHeMw&k&H*D<$j(t3RJMXeLk!kI=T1mD(i}3v{UlDB)ep8&pF+$* z0X2xS&^@SO(G>Op$EE%@@f|RNbdwfy{Vc-Atx8P!9bnH(j}-;Nm7*yqMwahv-NgvH zYd{u8BubD3grEt?BriCJ8X}NUK0C2{r3|Q}?}_wkNw9JNyF{*ah57_z`JS+~AsGQt zn|Qd*fwBt-ULt8AZH4)a!7w>%3g9EUVS)7PJ;Zw@!zv;VOl8KU8J>Jp370XscZ*7i zEZTnQf%`_4ADmg-NJF7Q)AY&C(NK^|L4EU!EFYyRQtSQc`eCIUwKf~Pww$06qHkpH z$iQ;$0WvA3FVWVHhl{!EBY;=_Pz69UO<%VQOlh2u89-M^ND+jClX|=<$B{{gD)VG? 
zqCP2bpuj>{vvuZzfGTG!h^kUXEEUlES$=YUBE$#{=nUUv(wGHc2I^M(D`z%9@i=xf z8X-hlC&G!X;#SBnax!_i@r0@knX!zUlvOI&5P_eMFUHy4MlWLL44$`Y?}Z+%DW=z zlq@JTM-4K3yzz$(^@0?(a)*bgUhAS}ja3oog+FOlV}Y@J1e zi6j)fzpw~E_BWuU5EH@SJ#O;xSXO?W0S_3-=PD_B)BA<4Nu_3d^qWImuy1m*U_DWQtI{0hz( zR%hyg;)3NtMR+#!qJ$55%6@fiJ924jQ}!KelK3VFt;Af-KPF=x|6YpM5C@*%*5;Q1 zECDLJ>@&l5kS#A?+QnrH87QfgJ+CZP81)fFLm}2o9F}7Us#5j5ZYdQ2uZ)C?7YBNW z^EYL|Er4Bi7()IAVH)h1A|l4zUc)nG6A6YB!Z;7y*^!5$)-0MOa!Akabaw7;@c$YEmPn-&ST;|Uywy&WyeZWfZw$R82_WuG8-@*FXhn1Y7My}7dH9q#=VI1cFB7TK zlh+1R92Of63S|Q@9=xoz0F|A$V`T3_Cn2pV`M30n$-~wIf!vCWoTx;@Z49Yk$scus>XgR}5$xG71qD6cGN}Y#ThMC*{bB18Na6 z0M1|EnGhhlA2&X83n?!oJ@9t#k$w4ncRzWvdjPIuTU8x?s0M!=o#U$^I@_cUG;+G^ zm7hgs8?u@e<$J#C)!Dh1AH~@*-d~|Nu797C!WN)NB;W`A>qGph#tH7c;8frNNUFNGq?U@ z5jp9g=n`ZvVa*hqS$JQ3L`uRezT1h@f@J>TTGBn*nTz7dZPoK~+s(o)l6S4UdNm3C>7#ExqDW+L zLGj90#FfOnDLWxMkGb%?uYNTBU~z_USxqhE@+~xADyl2m77Z1VD3rP?1ln6GbL3Z+ zS|S5;Y%RK!r%7zd1_QyIhvd#fGIGpHLan)lmQIF{@v0p^7)vWy9@bEYSlpzGW%A!n zga9ee3R4P+Tv2#UW3Oyw3`?Dr=5670@awpxxPNY8CEpn@TahQW!m(T{kC;j#QVlXl zB7BCkv>T@erSwsul^0OJHNK_DAK*s8qiA5ztt2>k>9M@z>gmY9* zvdY>nkFWd~gdA3dg|kvT)}krbsfL9M7}o4wWmZO<*mN5mNAKbm)VJB*eQmOzvqr=ElWV9e>Qzl&Zn;WykxrY|<*CVaTkH}@Iu(Ehb3N#dfpT|sIM2EW2 zC@DYlB$lc`;|vle4Zm5@-Bn$fe&T z1NNt_NB4hd2}JR_+&GSpXD>bvF|50hva=P$bnl1o6N^0l7A<65mUV zO4K4Irc{e)V~)pdC2(?!8j(fDORE``E8fJw=H*HZ=x*6M!F_D|a1Dt?0F{0k zOcc$v6rrQ87%kQ~BTCtl*@CC89ctaFtwt~_J2LDm%?}jqwR%;1CrAfq1O`G1-^fA7 z#pc-FaOmIP`9WGNC{6o~VW4bEETd5-lJvR}87QDgh^Pgdb**e9zDPEamW|-J(02 zM*Rv?*sOB+Hfz^5pjZN9$NrS+hec>w;no)?C@SlBtg|24QT_- zIP;@TBt1AKYr(C3cHqB&pwazu+_X()TS`*PRxBs-PhXgx7Bald z*{FzO2a*rPrp=QH3y<3?Ixgk^+>Cwin}1VCOq>Ey$kunRsqVZk)k$x4wy`gmp(-fy zz#-q=%yYiBytW_5Qaw^ey6v+m2em*WW$Oz0GBV==B&T&+J9N`Y<~Rs7#M`v5D7!{g zXSpgoxFo>3e9v7T18$7EpN0Gvbz%tJ!4&C+3Eb6|fl0!-D4~RGCF7P9-A6jVDvfCq z1qlU2e=9JWbLg1)Y>plEPbf)rw9*GEOmECXJ*LB#E>Vl+XJi3HLmvv6B z^i+lt`UQGm2L!c|GJSSBZ5zj0%;i&FXvC)QYj-#!ZOXi`*&5pqeqhqu+dtGdIxs`i?QH7`ZSv3JI}dzJZ2~39R!nx* zy_8_&IyIjI0xIQI_wgXK#6+2eaGQ2Ef9O#z*d^Bidab9i*(pL-5>QHM35H&R;$ART zTAqXLogyG0VzweR0dbiOGa4{s$F!A&%IvNHU#ia{E3HVDVixpuk_Vsq37LOo(UCm5 zQMifQw1Fj+*_j<_4gqR_Rw~loHIPwQDa8VKmq;<^lz^vkxp2@rhl&t~YKtg^N<~dK zw;&AMTW&fUD9MmYd*HldQduF(R~(IVPTdvFE3Rnhn}4QNl?{8V)`23?L=6w4AMcvL ze#rd{vjeZLdTXJ(8%7lFi*gdVCh9Sfm%|g47ya=54`oU3w5?enKJO1nG#1xEg=ZdH z@mWOj4fSrP_n4!RhO+C*N~zzGcji^}4>0tD*^_F>qhlcVuXWO_((<;AeZWlZ&qwcmF$5nP`pFE};%_i**vI zi=j;z?!Erm53Rj6ID9U+CP30jI**-j;~aXoG`8E~05xGeO( zC&nu&Rvdt3_F&U&eaD_VtMq}8y`rkKGaFOy;W4cY&Cq={wDX$Tjrtw?<0vEJ5K8S> zmg-MqV-j?>)D9o7OLbjaj~-(8%*H$SAJxL;d$vZh?YfSTP?XBSGxXxf8*U(%%?)YK z?g*a)?TwdNE$Vx6>!#jss2&Z++AT&=Q9C+ZjfnWWB~7*hNtMbG2WXL2n4TO*wd@4c zX-7b7Kz6w>Bcb_Ukz=?0zz(7DRgzuu4{-)e0 z|4rFZ{#%v2dOdXQn5>1MNk$4`U`WLBQz0xi3!EQ8^b*l?F}3}O-|p5Eubz=qq-}Xy zt#(1Rc7TZqlZwtlr{LH+jdI9=r8*NLRMO9e_f3kXWZmJKR`K#{hwMEe;Hf=0-yB637{iz) zgym$9jfNlUP@qjy9$0C&V)eL2S(!zl(&m(GPn&$qWs?TvdP+iE1;FGU*RIpD2&dQt z%RFc=&p|X~JCCIPQ#`INnrfo8HJKOeWGkR19Wf-+m!AR8jz01GDJ?cO>}b|Vh}lX~ zT0umTf{OKe`1_ar_2y0U^u_Ms(#LT7U5=}#kB1*@qXhq$jYxznaUA8hrQSVx;Hr@*>*BFr&Ab`C2{E(;^)4&H+aBZ- zfH-wZg@qXY>|`L$^wR$#qeb_NsvFTIz8DKx2DNK=5=F;_)XUHMC+DL9=LB#>?$+jN zv$e(l+x(9%+`9ik(hTf}7SX;{#aa4FmE={IK!W#;RQrfzZCSL_3Qw&*H-p^bFrpqp zU-E8U3%in!HR82G5h^H5a87{8Bi}^TH#)f~_RX?-Y$nC1rzTi&lfb7E$#(22M2{>D zd<2n~q+Dxtsvs~`L%*w5BucaMCuDZBI!2&`SX11k%z}{VY@j3l6z@u>dQ*WojjnNJ z>?+t{?M|~8&gGt+4*tzcN<{%Drm<6)>RgE_{N|yeqpb%Y5ahmEFpAXko-+`^Lt{qQ zwT=+0Y|lVXngaFYIc5c^SWvZv&H@f5IP3CR_Xic!4ui_#^S1l6Mm*rD*)E7b4}TN5 zCGvDcxb28igt--7HerjBw4Ij~%^PW&+6wx#Gli@s;<+mS`-RLSRpSRq;kqU(aUnV; zk!kvC+L-=t$TuQp*PEm5dy9}=sd2 
zk`?JXP>-kPoeaIaUa*J2Z4YuH@46$gh$ZR6lDv+2=As$Tn}OhJb18(%_694uF)Nw` zj;tZ7IV?;>m4n3)8Q*dN2(!E)Uecu4pvQx=Gwt1m>&S>8Ob8T6254#?NhX+Vn~gXI z#Y`S69B?Q(C#p!8zBJ~!Bnz09ioT^6xvEFCU|(%24Q7{65*I;!ggqdfcDnnzRJ9O` zZQuP%O`4+Tmc{s7*G{|w$<`!%wMt0q{C9YEPz`oeN6j@@R3gU zWtNg2dMjO0qwJO+8)TbTAXV5k5$TmPaBUxuF=eJtKj!&pb zfV^N2Yc*?Euv7y%x!^1fY3nV!7uavz z(O^oiHV`$P8>K5oTL{{0M=Lo94?u+pTDO=+-Ah3K@`fkPLXC*7#khvXUjxSO0QDHE zt@FVU{kqkTz@(AHIL#wk%{!{-jol_-(0_rt9RxFk!@Bd6@eN!|LZ*i=9!sxt{48~L zX34N>**GGCl=I9~q;)4&uR}PdK=lN8kcUId>Dp{G(0*K%^(G5DPHo{y=vUucW)sCmpN*{wfd|(USa1GJn?4yq zj^$073#8qmQD362?$%k7sn-G|TkhY}!bJOXsd}|i1x}Le;XNS+5%SaV>C8Tm#yI&X zU|OecjaYlNdHL$^s}z82&BgdvDK*_LrKSnjjq)sxU=XgBTG4fj4AvW%A$>NKyxFrxT^?8xWO>*|kr8Q-j9 zeWGFh{&?*1w>o zi?s-P3$a;qT5$S~b65nBmvr`r>hrk5OgRwG91&OXQT1Fje>?7H=(K@f`I(}tfB)P6 zY&Pz7c1pC$ZHy{fQ7OLC#zTCUOaTZ1Apk5pAD+|cX+Y%?{P|X@cyu5wAe z)#l&%3qGT+lb?U&ly7bOg_bh}Dcn@V$JDVCk68oY8>*X)-JNL#AS8%}lyhJZ`xLam zER8Sz!en{90JEMg5C z#2Pk;HEi7Hf0__)JX~$IH|s7!$j1|!VY6j*_wD|k556}ZrS3)0`GM6}_nSn)Jf%fY ze&{Y*lOsyg(LOG`%?wy#jB4&$H)}oWY^c`1PlQ6EB0psfeAWkGC(9f z15mGX-xbj&Q!FZYR2(IilZ86a32~84(S?E_TW*!Gf9VmSmk|*PsCOq{2A|f)lQUcS zR$(R9;4uki1oYpSQ z@pcmU%ySF5{`U0R$6+hbuq0{${l2D z8h9A~sI&9o9V)-)s`|73EIRsNptcVfEl`?8a3Q$nt@icgRja@@$9c;RrwGC;i6h`; z*iho@4wRv^t*`g6BcC(CyRLW1=}(p@Re>_8{)hqjdXg)bhpOhqGVg!b1A)Kwv4xO) z@79|agZXOn>w5F=1GMUEezVR~{aIk06BP0qQ_7ygC$GtW@bHJyRXfO0uCA|}B9^U)GQD#;hmj{bRF)-FS4iFJD7SVY?wDG9(Kz7iNoHqh9 zf-7UdKL_pr=i5H2&;bbVqQ^!YOT(frO%`!%u(mz2Y~mYzbE1xw?`0znd7_F!8wmvO>z;gU>D05p10UCS)04c^dl0-MWGa5z4 zymP?%bpq?&PcL7+d-3$i^Mj`^4;P=R&#%^-Ln=nuu_}5SbVo0{<45p{eqG5KbZoj8Zb3S}+fXgzTACov6KoVrC z+!|O*)a)xG8H7~kX6rK2>d&9x$Y4eVDR>QtWLEV6;ZNXxzyp2GTmdPT_ z+ZF4)09|vuOuz+-UO~zj_M2-^4k5qy${d}AEqm7QZ z)aeF7dohPCF+faD@g61#>?<23QyiaP3|8MfefIRt)0cZs-!31jCfxTG;o9O`b}28H!~h8Rc7|ESt$mC-Mxdm`>=f5Z)@IODW5iAlf4`+**D39Y)PIZ z@eBReEPXk-~!xYr>(Lsr5SA*l1A% zuG{o{guVm$Pl?$$=&>a*YYgthK)3{dd3Tgycw}B5V%YbWx*ba&zd`SOjcx z^WyE{f%70hod}XaI!iITfR^j{tw&eMLEO+(LWTH-+`Yl5*t8K@riO7l;eg>QALcF1 z^LlH9Aqd9p4JIc@OK0{~1S9XU= zCfsSBshQ*36ND+or$f2zw{!x&dk6ppgs};XkK4G2Tb)rn3X?}Z+WK&4hi2GmB{p%n z^izC#1sjPPaZt1U*7odu(e^L)e2uN}ql|c@!t3#n ztcoIfeED3$gF7-jnJ(90lDRvm`KcGBPOY_sq1>&&`pdqARiZ>&Cz9*WC2i9N?6yw`SZ~g+7&V+%536|_!x*-W&s!)X@z@IZ?IhvL$ThblV!=Y2%Sb&K zA1mbii_v1p3UC~X!=_zp>o&fZW6g^9Dze-A`N=MG9}hoLf^&a;J9{an!5>L7sUXlT zn0n1$L%+>z$v^EV@6pE22crCM`k1F+1F!5Ie%^%&VOAR`V7E6Oo_dr5f{_;4N?L?{ z_eLxr?KLQdbU>OTT*k)Z0Zy22m^|Q=o)+b!c3$*hd|8}&M~EB@6uDS!^b-e(&!8*p zJiP3nNUemT?5txU2e*`E6!2j z{~QE7>5o1p^Nx=gA46B<$~6h%kCoxCm-Kt6?g`X$)-UW+&wDvQ^N2?q_dh_FMveuJMmQ=E5Pv~43Oz)u&7EXJVTGHe_Aq|>y%ty{OnB7$forE-YMMBHwz_wX zV;>@*l!OOpJKEY7H~pwiU&jOE4h4=-q&TrqoENz?uk37&C)&zrqS3e{ua_BTrPfw4 z$UqqhCKcofyy??vFpmGSbtVJ!wO6`6!q=>Dh&M#M3Zd(^Qjm3vF%Lj>w9jr|1$S6( zYEiret)`!sqw)rSq_tu<&|41Et>`TKG~jTv;P7bUfgDK`5t36Ph7T{hC>Ek92lCq> z?a`RXfl0Z?-C>v5=R}_5(bi5jlVuhomQ6r;ut9{;vSX8usTae(6#^@6RSW&_zeQrv zj6XuQh*cQv9sOI#xgv`6OnVV3JD12)WuJYEkSnFWvT=%cEk!JN$7Dn_!o#4Jr3?~m5ra_oK5-n;BBJik!a9ucm+ zzQvR%sw(loEqkN~rpMjM4KrONO?khFFA#b^BQ*It*U)IvSpir${YMm1>WS zUFCO3wJZ_;aR_5DYw@yG=O6#pt*bBSyg5$qep5y-I! 
zHpWF|ivuwNl%iG+M3g^%ey^uzr)QBMi4QJ=rQMnSH{Gw_%n;QGfA3dUu;Yk$N4EQ% zM_R4NFZOBNU$OeyAFR~wocUjDMeCZL>h!X0`+Z zeiS!A>wEX8Gm@Cxv`ZT&=t{XE4bA2`+-;rP$TI2wPIOU(>mzQvE{1CshqC;=>-uz| zR7NfY%`)i}#W!W>ea0rk*}JHlCyFTZjw{ZS)8fxoJtLQC?3F~X{D1ayY8NGvE7y}I z1fxOmsHzao+)9s3_fcUlrNg97(8992(rdJl1F+6hIGUhwQETlFZ#C&>6`38Jv10`& zjf#wVNSjA>&}IHaF+4C^ErHf$1 z@3?U{@sSfvw5sWHYM6wB_TESi$!}_Rdd7>Q>V_3V6Do!zs=?p2`w-^O?%|R+heaz; zffr>;&C#k0-ETauKgwNFPQ=-cY6=8kQyA*EmxgTeU9bFqxpqr5ya>NqABfV|Zd+x)c?5tUeTxMz_6C z{3b0_ZuQiK`+jN+-u|&#x;5(mpOUTu0u$iIua&MQTB_lOM-RwY0>W4iCZU)t>?lO61$agiFDb^ic>03O?<+gSV(#d<2?g5I$3R^!L5pYNEZQs-^nL43!)TpiOqXyVEN*+O7u9#RQq}ED*fccbsQX z1FnS_A{y0uO8mzFgeWq2X88RZ+h!bLPVd+rfB7&y{xu;TDdDh;?JGH0dUI58Km8{| zuY-~5mFx;HP73Wc7vA1xxtv@+XqA(xVfLvx$r3)DQw3RgqWP%*QMWS8ZFkMw3{rEC zWzWIQ*5L}8)^K1XL=!j(1{;(fdetI7yNQI$W{UHb2ETuUV>-phsUjpn~DM zse^7DRSdWST`aimcDR(ndV)Diz$k{>HhWrC4^1ZNB`;xoFC_l83%*zniTmlz46PrjQ<(j$CCge^8M-!5U*cT^ZK6zGTL zWV?TweKa5*JFlWO9?mB;i=bnFKHuuE?wnnI*B*#|t%}0u7Sex_d_F8!WHh%Ug2ZE2 P-8ZMpgz}wj#n%4;x1)ur