Partition Pruning: Parallelization-Aware Pruning for Deep Neural Networks

The parameters of recent neural networks require a huge amount of memory, and they must be fetched every time the network processes an input. To speed up inference, we developed Partition Pruning, a scheme that reduces the number of parameters while taking parallelization into consideration. We evaluated the performance and energy consumption of parallel inference on the partitioned models: computing the pruned layers of TinyVGG16 showed a 7.72x speedup and a 2.73x reduction in energy compared to running the unpruned model on a single accelerator. In addition, our method showed only a limited reduction in accuracy when partitioning fully connected layers.


Introduction
Neural networks have become ubiquitous in applications including computer vision, speech recognition, and natural language processing. The demand for running neural network applications on edge devices, including smartphones, drones, and autonomous vehicles, is increasing [1]. Meanwhile, the size of neural network models has increased drastically over time, reaching beyond the peta scale [1]. In 1998, a handwritten-digit classifier had about 1 M parameters [2]; in 2012, an image classifier for the ImageNet [3] dataset had more than 60 M parameters; and Neural Talk, which automatically generates captions for the ImageNet dataset, has more than 230 M parameters [4]. The top-5 error rate has been reduced by about 30% each year, which explains why this trend drastically increases the number of layers, parameters, and operations [1].
Large deep neural network (DNN) models consume a significant amount of energy because their parameters must be stored in DRAM or on-chip SRAM and fetched every time they are processed. From 2012 to 2015, the energy efficiency of DRAM improved due to CMOS scaling based on Moore's Law. As of 2015, CMOS scaling no longer provides substantial improvements in either energy efficiency or memory density, and because SRAM is realized with CMOS transistors, its energy efficiency is likewise bounded by Moore's Law [18] [19]. Therefore, the energy efficiency of memory cannot keep up with the increasing size of neural networks, and more energy is consumed to accomplish the same processing tasks. Therefore, reducing the number of parameters that must be stored and fetched is key to reducing both the latency and the energy of inference.

Overview
Figure 1 illustrates a high-level diagram of the proposed framework. First, a neural network model is trained; Section V discusses the baseline accuracy of the different neural network models used to evaluate the framework. Then, the fully connected layers of each model are pruned using the Partition Pruning approach; Section III explains how the partitioning algorithm is applied to these layers. Finally, inference is performed on multiple processing cores; Section IV explains the multi-core architecture, which provides the ability to run matrix multiplications in parallel. Section VI evaluates the framework in terms of performance and accuracy.

Fig. 1. Overview of the procedure used. Note that Partition Pruning is applied to a trained neural network, since it depends on the weights of the fully connected layer(s). The illustration shows only one fully connected layer.

System Model
Our framework targets neural networks in which some or all nodes are fully connected to the nodes of the subsequent layer. The set of starting nodes, N_initial, is fully connected to the set of subsequent nodes, N_final, i.e., a fully connected layer. A link, which corresponds to a parameter, is a connection represented by L_{i,j}, where i is the starting node number and j is the connected node number within a layer. The link's value (i.e., the parameter's weight) is represented by w_{i,j}. L_{i,j} = 0 if the link is pruned, and L_{i,j} = 1 otherwise; note that w_{i,j} may hold any value. The set of weights, W_i, consists of the links, L_i, that connect the set of nodes N_i to N_j. Figure 2a shows an example of a fully connected layer of size 6 × 8, Figure 2b shows the matrix representation of the fully connected layer, and Figure 2c shows its weight matrix.

The connectedness number, C, is simply

$$C = \sum_{i,j} L_{i,j}. \qquad (1)$$

A fully connected layer is annotated as C_full, and thus

$$C_{full} = |N_{initial}| \cdot |N_{final}|. \qquad (2)$$

Therefore, the connectedness ratio, R, is

$$R = \frac{C}{C_{full}}. \qquad (3)$$

Figure 3 shows an example of a 2-partition pruning of the fully connected layer from Figure 2, and Figure 4 visually illustrates the partitions of Figure 3. For any partition P_x ∈ P, any given node N_{initial,j} ∈ P_x will not be in any other partition, and the same holds for nodes in N_final. More formally, Equation 4 is the constraint on the groupings of nodes in N_initial and N_final: once a particular node is in a particular partition, it cannot be a member of another partition. Another way of stating this is:

$$P_x \cap P_y = \emptyset \quad \forall\, x \neq y. \qquad (4)$$

Note that there is an upper bound, ⌈|N_initial|/|P|⌉, and a lower bound, ⌊|N_initial|/|P|⌋, on the number of N_{initial,i} nodes that are members of a partition P_n. The same is true for N_{final,i} nodes. In addition, the number of partitions that contain the upper limit is |N_initial| mod |P|, while the number that contain the lower limit is |P| − (|N_initial| mod |P|). As an example, if |N_initial| = 22 and |P| = 5 (i.e., the number of partitions), then the partition sizes for N_initial, ignoring N_final, would be {5, 5, 4, 4, 4}: three partitions of size 4 and two partitions of size 5. This bound description also applies to N_final.
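To make these definitions concrete, the following sketch (our own illustration, not from the paper; the 6 × 8 layer and the pruned blocks are hypothetical) computes C, C_full, and R from a link matrix:

```python
import numpy as np

# Hypothetical 6 x 8 fully connected layer: L[i, j] = 1 keeps the link from
# N_initial node i to N_final node j, L[i, j] = 0 marks it as pruned.
L = np.ones((6, 8), dtype=int)
C_full = L.size                 # |N_initial| * |N_final| = 48 (Equation 2)

L[:3, 4:] = 0                   # prune the links crossing two hypothetical partitions
L[3:, :4] = 0

C = int(L.sum())                # connectedness number (Equation 1)
R = C / C_full                  # connectedness ratio (Equation 3)
print(C, C_full, R)             # -> 24 48 0.5
```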

Partition Pruning Overview
The objective of Partition Pruning is two-fold: pruning with the objective of having balanced partitions, and pruning with the objective of having the least absolute weight-loss. The second objective guarantees a smaller loss of accuracy, while the first allows for maximum parallelism. Note that the number of parameters pruned is directly related to the number of partitions desired; the connectedness ratio, in relation to the number of partitions, is R_{|P|} = 1/|P|. Thus, for a given |P|, Partition Pruning finds

$$\min_x \; \sum_{i,j} |w_{i,j}| \, (1 - x_{i,j}) \quad \text{subject to} \quad x_{i,j} \in \{0, 1\} \ \text{and the partition constraint of Equation 4.}$$

From the objective function, we determine which (1 − R_{|P|}) C_full parameters are pruned for a particular fully connected layer while minimizing the cumulative weight-loss.

(Fig. 3 caption: shows the connection representation, with 0s representing the absence of a link; in the illustrated case, C = 12 and R = 0.5.)
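As a small worked example of this objective (not from the paper; the weights and the grouping are hypothetical), the weight-loss of a candidate 0-1 keep-mask x can be evaluated as follows:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 8))      # hypothetical trained weights w_{i,j}
P = 2                                # desired number of partitions
C_full = W.size
keep_budget = C_full // P            # R_{|P|} * C_full links survive

x = np.zeros_like(W, dtype=int)      # candidate keep-mask, x_{i,j} in {0, 1}
x[:3, :4] = 1                        # hypothetical partition 1
x[3:, 4:] = 1                        # hypothetical partition 2
assert x.sum() == keep_budget

weight_loss = np.abs(W * (1 - x)).sum()   # the quantity being minimized
print(weight_loss)
```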

Input/Output
The input to the Partition Pruning algorithm is the matrix representation, W_{fc,i}, of the targeted fully connected layer i, as exemplified in Figure 2c. Note that the fully connected layer is assumed, and asserted, to already be trained; that is, the parameters hold the correct values for the targeted neural network's baseline accuracy. In a fully connected layer, every element of the matrix L_{fc,i} is 1 (see Equation 2). After Partition Pruning, the output is L_{part,i}, and the sum of all its elements is R · C_full. This is exemplified in Figure 3b.
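This contract can be summarized as a checker sketch (the function and variable names are ours, not the paper's; exactness of the sum assumes layer dimensions divisible by the partition count):

```python
import numpy as np

def check_partition_prune_output(W_fc, L_part, num_partitions):
    """Check the input/output contract described above (names are ours).

    W_fc:   trained weight matrix of one fully connected layer (input).
    L_part: 0/1 link matrix produced by Partition Pruning (output), whose
            elements sum to R * C_full with R = 1 / num_partitions.
    """
    assert L_part.shape == W_fc.shape
    assert set(np.unique(L_part)) <= {0, 1}
    assert L_part.sum() == W_fc.size // num_partitions
```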

Methodology
This section describes the methodology for selecting the links to prune, taking the partitioning into consideration. A running example with |N_initial| = 7, |N_final| = 10, and |P| = 3 will be used to describe the process. Figure 1 shows an overview of the methodology and where Partition Pruning resides.
Start: Selection of N_{initial,i} and N_{final,j1,j2,...}: In the first stage, a row of the matrix is selected at random; that is, a random N_{initial,i} is selected for processing. Note that at this point |P_n| = 0 for all n, because no pair of nodes has yet joined a partition. After choosing N_{initial,i}, a set of N_final nodes is chosen; in this case, the set size is ⌈|N_final|/|P|⌉. The node N_{initial,i} and the nodes N_{final,j1,j2,...} become the first partition, P_1. The links selected have their L_{i,j} = 1, while those not selected have their L_{i,j} = 0. Note that the links selected are those with the highest magnitudes (refer to Figure 5a for an example). Figure 5b illustrates the change in values and a pictorial representation of the first partition.
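A minimal sketch of this first step, under the running example's assumptions (|N_initial| = 7, |N_final| = 10, |P| = 3; the RNG seed and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((7, 10))          # hypothetical trained weights
P = 3
col_cap = -(-W.shape[1] // P)             # ceil(|N_final| / |P|) = 4

i = int(rng.integers(W.shape[0]))         # randomly selected N_initial row
top_cols = np.argsort(np.abs(W[i]))[::-1][:col_cap]   # highest-magnitude links

L = np.zeros_like(W, dtype=int)
L[i, top_cols] = 1                        # N_initial,i and these N_final nodes form P_1
print(i, sorted(top_cols.tolist()))
```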

Non-Start: Selection: Moving forward, another N_{initial,i} node is selected at random. The non-partition-member w_{i,j}s are sorted from the highest to the lowest magnitude, as was done previously. The sum of the magnitudes of the highest-ranked free links, up to the upper-bound capacity (or the lower bound if all upper-bound partitions are already filled), is compared with the sum of the magnitudes of the links into each existing partition that still has capacity (as per the upper and lower bounds on the number of N_initial nodes); the row joins the grouping with the larger sum. For example, Figure 6b shows the situation in the case of |w_{i,7}| + |w_{i,3}| + |w_{i,4}| + |w_{i,5}| > |w_{i,9}| + |w_{i,8}| + |w_{i,1}|, and Figure 6c shows the resulting assignment.
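The comparison can be sketched as follows (a hypothetical helper, not the paper's implementation): the row joins whichever grouping, an existing partition with capacity or a fresh set of free columns, preserves the larger sum of magnitudes:

```python
import numpy as np

def best_assignment(w_row, open_partitions, col_cap):
    """Pick the grouping for one N_initial row (a sketch, our naming).

    open_partitions: column-index lists of partitions with row capacity left.
    col_cap:         number of free columns a new grouping may claim.
    Returns (sum_of_magnitudes, columns) for the best option.
    """
    used = set().union(*open_partitions) if open_partitions else set()
    free = np.array([j for j in range(len(w_row)) if j not in used])

    options = []
    if len(free) >= col_cap:              # option: claim the best free columns
        top = free[np.argsort(np.abs(w_row[free]))[::-1][:col_cap]]
        options.append((np.abs(w_row[top]).sum(), sorted(top.tolist())))
    for cols in open_partitions:          # option: join an existing partition
        options.append((np.abs(w_row[cols]).sum(), cols))
    return max(options, key=lambda t: t[0])

w = np.array([0.9, -0.1, 0.4, 0.8, -0.7, 0.2, 0.05, 1.1, -0.3, 0.6])
print(best_assignment(w, [[0, 1, 8]], col_cap=4))   # the free columns win here
```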

End and Try Again: This process is repeated until every partition P_m is at capacity in terms of both N_initial and N_final nodes. Note that the resulting partitioning depends on which row, i.e., which N_{initial,i}, was selected at each iteration. Once the process is completed, the weight-loss is recorded.
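Since the result depends on the random visiting order, the search can be restarted several times and the partitioning with the lowest recorded weight-loss kept. A minimal sketch of that outer loop follows; the inner grouping here is a random placeholder for the Start/Non-Start steps above, not the actual selection rule:

```python
import numpy as np

def weight_loss(W, L):
    # Cumulative magnitude of the pruned links for one candidate mask.
    return float(np.abs(W * (1 - L)).sum())

def random_balanced_partitioning(W, P, rng):
    # Placeholder for the Start / Non-Start selection steps described above:
    # it only produces some valid balanced grouping, not the greedy one.
    rows, cols = rng.permutation(W.shape[0]), rng.permutation(W.shape[1])
    L = np.zeros_like(W, dtype=int)
    for r, c in zip(np.array_split(rows, P), np.array_split(cols, P)):
        L[np.ix_(r, c)] = 1
    return L

rng = np.random.default_rng(2)
W = rng.standard_normal((7, 10))
losses = [weight_loss(W, random_balanced_partitioning(W, 3, rng)) for _ in range(20)]
print(min(losses))   # keep the partitioning with the lowest recorded weight-loss
```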

Multi-Core Organization
Figure 7 shows the architecture of a System on Chip (SoC) that consists of general-purpose cores, memory controllers, a DMA engine, and matrix multiplication accelerators, all connected through the system bus. To understand how the system level affects the accelerators' behavior, a simulation infrastructure that can model such heterogeneous systems is needed. The gem5-Aladdin system simulator is used to evaluate the proposed architecture. This tool integrates the gem5 system simulator with the Aladdin accelerator simulator; it is a pre-RTL simulation infrastructure that models multiple accelerators and their interactions with central processing units (CPUs) in an SoC consisting of processing elements (PEs), fixed-function accelerators, memory controllers, and interfaces. The simulator can model the accelerators' performance, area, and power [27] [28].

Multiple matrix multiplication units are connected to the bus. In gem5-Aladdin, the accelerators can invoke the DMA engine already present in gem5; the DMA is used to transfer bulk data without the CPU's intervention. Internal SRAM stores the weights, input features, and outputs of the matrix multiplication. Each accelerator uses a 32×32 systolic array (SA). The SA architecture is a specialized form of parallel computing in which tightly coupled processing elements are connected to a small number of their nearest neighbors in a mesh-like topology. This architecture requires very little global data transfer and can achieve a high clock frequency; however, it suffers from scalability issues because its shape is fixed. In an SA, the horizontal systolic movements implement data broadcasts, and the vertical ones implement accumulations.
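A behavioral sketch of this dataflow (our illustration, not the accelerator's implementation): broadcasting operands along rows while accumulating down columns is functionally a tiled matrix multiply, as the following output-stationary model shows:

```python
import numpy as np

def systolic_matmul(A, B, tile=4):
    """Output-stationary behavioral model of a `tile` x `tile` systolic array:
    PE (r, c) accumulates A[r, k] * B[k, c] as operands stream through."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for r0 in range(0, n, tile):          # one tile of output rows per pass
        for c0 in range(0, m, tile):
            rows, cols = min(tile, n - r0), min(tile, m - c0)
            acc = np.zeros((rows, cols))  # the PEs' local accumulators
            for step in range(k):         # one operand wavefront per (idealized) cycle
                a = A[r0:r0 + rows, step]       # broadcast horizontally
                b = B[step, c0:c0 + cols]       # accumulate vertically
                acc += np.outer(a, b)
            C[r0:r0 + rows, c0:c0 + cols] = acc
    return C

rng = np.random.default_rng(3)
A, B = rng.standard_normal((8, 6)), rng.standard_normal((6, 8))
assert np.allclose(systolic_matmul(A, B), A @ B)
```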

Experimental Setup
Fully connected layers are pruned using Partition Pruning for three networks trained on the TinyImageNet [23] dataset, which consists of 100,000 training images, 10,000 validation images, and 10,000 testing images of dimensions 64x64x3, spanning 200 classes. These images are taken from the ImageNet [3] dataset, cropped into squares, and resized to 64x64. For each network, the fully connected layers are partitioned into 2, 3, 4, and 5 partitions, resulting in the pruning of 50%, 66%, 75%, and 80% of the fully connected links, respectively. Initially, the neural networks are trained and evaluated on the TinyImageNet dataset, as shown in Table 2.

Convolutional neural networks represent the state of the art in image classification. AlexNet [24] and VGG16 [2] are well-known deep convolutional neural networks that have previously won ImageNet competitions. TinyVGG16 and TinyAlexNet use a 56x56x3 input image instead of the 224x224x3 input used by the original VGG16 and AlexNet. Each network has three fully connected layers at the end of its structure. Partition Pruning prunes the first two of these three fully connected layers; the last fully connected layer is not pruned because every one of its links is required for classification, and pruning it would considerably degrade classification accuracy, to the detriment of the neural network model. Table 2 shows the benchmarks' baseline performance.

After training the networks, Partition Pruning is applied to two of the three fully connected layers. Google's TensorFlow [25] version 1.7 was used to model the benchmarks. Partition Pruning was implemented in Python 2.7 and was given the NumPy matrices of the first two fully connected layers of the benchmarks; the weights in the TensorFlow model files were then updated using the resulting output filters. As mentioned earlier, gem5-Aladdin is used to evaluate the performance.

Table 2 shows the initial baseline accuracies, without pruning, of the TensorFlow implementations of the neural network benchmarks. Figure 8 shows the accuracy loss for each partition count; results after retraining are also shown. Accuracy loss increases as the number of partitions increases, since more parameters are pruned. Retraining the models after pruning reduces the loss of accuracy; for example, with 3-Partition pruning, retraining reduces the accuracy loss of TinyVGG16 from 10.59% to 0.87%.

As Figure 9 shows, running inference of the partitioned TinyVGG16 layers on different accelerators improves performance and reduces energy consumption compared to running inference of the unpruned layers on a single accelerator. For example, running this benchmark on a triple-core accelerator executes 7.72x faster while consuming 2.73x less energy. This is because pruning reduces the size of the benchmarks by a factor correlated with the partition number (for example, by a factor of 2x for two partitions), and running inference in parallel on multiple accelerators further reduces execution time. Therefore, both the performance and the energy consumption of the partitioned models improve, through the reduced model size and the use of multiple hardware resources. Running the same benchmarks on multiple accelerators does not, however, scale as ideally as expected: running two identical workloads on two accelerators increases speed by 1.8x, and on three accelerators by 2.5x. This happens because all accelerators are connected to the same bus with a single DMA engine, which leads to bus congestion.
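A hedged sketch of this flow (the shapes, names, and two-partition mask are ours; the actual TensorFlow model-file update is not reproduced): applying the mask zeroes the pruned links, and each partition then becomes an independent block that can run on its own accelerator:

```python
import numpy as np

rng = np.random.default_rng(5)
W = rng.standard_normal((6, 8))        # trained fully connected weights
L = np.zeros_like(W, dtype=int)        # 2-partition mask from Partition Pruning
L[:3, :4] = 1                          # hypothetical partition 1
L[3:, 4:] = 1                          # hypothetical partition 2

W_pruned = W * L                       # zero out the pruned links in the model

# Each partition is an independent dense block, so inference decomposes into
# smaller matrix multiplies that can run on separate accelerators:
x = rng.standard_normal(6)             # input activations
y1 = x[:3] @ W_pruned[:3, :4]          # partition 1 -> accelerator 1
y2 = x[3:] @ W_pruned[3:, 4:]          # partition 2 -> accelerator 2
assert np.allclose(np.concatenate([y1, y2]), x @ W_pruned)
```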
It is expected that using multiple large SAs, for example 256×256, would cause bandwidth bottlenecks and sizeable bus congestion. Although a small SA does not provide high-throughput processing, it leads to a low-power design because of the small number of processing elements in each accelerator.

Conclusions
This paper presented Partition Pruning, an approach that prunes the fully connected layers of neural network models with the aim of partitioning them for parallel execution, in order to improve speed and energy efficiency. The idea behind the Partition Pruning approach is to target a low overall weight-loss so as to reduce the impact on accuracy. The approach shows that by partitioning the fully connected layers of TinyVGG16 into three partitions and executing the model on multiple accelerators, a 7.72x speedup and a 2.73x energy reduction can be obtained. Future work will evaluate a system with multiple high-bandwidth memories and neural network accelerators. In addition, further optimizations will be applied to the accelerators to minimize power consumption and increase throughput.