Shuffled Complex-Self Adaptive Hybrid EvoLution (SC-SAHEL) optimization framework

Simplicity and flexibility of meta-heuristic optimization algorithms have attracted lots of attention in the field of optimization. Different optimization methods, however, hold algorithm-specific strengths and limitations, and selecting the best-performing algorithm for a specific problem is a tedious task. We introduce a new hybrid optimization framework, entitled Shuffled Complex-Self Adaptive Hybrid EvoLution (SC-SAHEL), which combines the strengths of different evolutionary algorithms (EAs) in a parallel computing scheme. SC-SAHEL explores the performance of different EAs, such as the capability to escape local attractions, speed, and convergence, during population evolution, as each individual EA suits various response surfaces differently. The SC-SAHEL algorithm is benchmarked over 29 conceptual test functions and a real-world hydropower reservoir model case study. Results show that the hybrid SC-SAHEL algorithm is rigorous and effective in finding the global optimum for a majority of test cases, and that it is computationally efficient in comparison to algorithms with an individual EA. © 2018 Elsevier Ltd. All rights reserved.


Introduction
Meta-heuristic optimization algorithms have gained a great deal of attention in science and engineering (Blum and Roli, 2003; Boussaïd et al., 2013; Lee and Geem, 2005; Maier et al., 2014; Nicklow et al., 2010; Reed et al., 2013). The simplicity and flexibility of these algorithms, along with their robustness, make them attractive tools for solving optimization problems (Coello et al., 2007; Lee and Geem, 2005). Many meta-heuristic algorithms are inspired by physical phenomena, animals' social and foraging behavior, or natural selection. For example, Simulated Annealing (Kirkpatrick et al., 1983), Big Bang-Big Crunch (Erol and Eksin, 2006), the Gravitational Search Algorithm (Rashedi et al., 2009), and Charged System Search (Kaveh and Talatahari, 2010) are inspired by various physical phenomena. Ant Colony Optimization (Dorigo et al., 1996), Particle Swarm Optimization (Kennedy, 2010), the Bat-inspired Algorithm, the Firefly Algorithm (Yang, 2009), Dolphin Echolocation (Kaveh and Farhoudi, 2013), the Grey Wolf Optimizer (Mirjalili et al., 2014), Bacterial Foraging (Passino, 2002), the Genetic Algorithm (Golberg, 1989; Holland, 1992), and Differential Evolution (Storn and Price, 1997) are examples of algorithms inspired by animals' social and foraging behavior and the natural selection mechanism of Darwin's theory of evolution. According to the No-Free-Lunch (NFL) theorem (Wolpert and Macready, 1997), none of these algorithms is consistently superior to the others over a variety of problems, although some may outperform others on a certain type of optimization problem.
The NFL theorem has been a source of motivation for developing optimization algorithms (Mirjalili et al., 2014; Woodruff et al., 2013). It has encouraged scientists and researchers to combine the strengths of different algorithms and devise more robust and efficient optimization algorithms that suit a broad class of problems (Qin and Suganthan, 2005; Vrugt and Robinson, 2007; Vrugt et al., 2009; Hadka and Reed, 2013; Sadegh et al., 2017). These efforts led to the emergence of multi-method and self-adaptive optimization algorithms such as the Self-adaptive DE algorithm (SaDE) (Qin and Suganthan, 2005), A Multialgorithm Genetically Adaptive Method for Single Objective Optimization (AMALGAM-SO) (Vrugt and Robinson, 2007; Vrugt et al., 2009), and Borg (Hadka and Reed, 2013). They all regularly update the search mechanism during the course of optimization according to the information obtained from the response surface.
Here, we propose a new self-adaptive hybrid optimization framework, entitled Shuffled Complex-Self Adaptive Hybrid EvoLution (SC-SAHEL). The SC-SAHEL framework employs multiple Evolutionary Algorithms (EAs) as search cores and enables competition among different algorithms as the optimization run progresses. The proposed framework differs from other multi-method algorithms in that it grants each EA independent evolution of the population. In this framework, the population is partitioned into equally sized groups, so-called complexes, each assigned to a different EA. The number of complexes assigned to each EA is regularly updated according to its performance. In general, the newly developed framework has two main characteristics. First, all the EAs evolve the population in a parallel structure. Second, each participating EA works independently of the other EAs. The architecture of SC-SAHEL is inspired by the concept of the Shuffled Complex Evolution algorithm - University of Arizona (SCE-UA) (Duan et al., 1992). The SCE-UA algorithm is a population-evolution-based algorithm (Madsen, 2003), which evolves individuals by partitioning the population into different complexes. The complexes are evolved for a specific number of iterations independently of one another, and then are forced to shuffle.
Application of the SCE-UA is not limited to solving single-objective optimization problems. The Multi-Objective Complex Evolution, University of Arizona (MOCOM-UA) is an extension of the SCE-UA for solving multi-objective problems (Boyle et al., 2000; Yapo et al., 1998). Besides, the SCE-UA architecture has been used to develop Markov Chain Monte Carlo (MCMC) samplers, namely the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA) and the Multi-Objective Shuffled Complex Evolution Metropolis (MOSCEM-UA), to infer posterior parameter distributions of hydrologic models (Vrugt et al., 2003a, 2003b). The Metropolis scheme is used as the search kernel in SCEM-UA and MOSCEM-UA (Chu et al., 2010; Vrugt et al., 2003a, 2003b). There is also an enhanced version of SCE-UA, developed by Chu et al. (2011), entitled the Shuffled Complex strategy with Principal Component Analysis, developed at the University of California, Irvine (SP-UCI). Chu et al. (2011) found that the SCE-UA algorithm may not converge to the best solution on high-dimensional problems due to the "population degeneration" phenomenon. "Population degeneration" refers to the situation in which the search particles span a lower-dimensional space than the original search space (Chu et al., 2010), which causes the search algorithm to fail in finding the global optimum. To address this issue, the SP-UCI algorithm employs Principal Component Analysis (PCA) to find and restore the missing dimensions during the course of the search (Chu et al., 2011).
Both SCE-UA and SP-UCI start the evolution process by generating a population within the feasible parameter space. The population is then partitioned into different complexes, and each complex is evolved independently. Each member of a complex has the potential to contribute to offspring in the evolution process, and in each evolution step, more than two parents may contribute to generating offspring. To make the evolution process competitive, a triangular probability function is used to select parents, so that the fittest individuals have a higher chance of being selected. Each complex is evolved for a specific number of iterations, and then the complexes are shuffled to globally share the information attained by individuals during the search.
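The competitive parent selection described above can be sketched in a few lines; this is a minimal illustration, not the authors' code, assuming a complex sorted best-first and using the SCE-UA triangular assignment probability 2(m + 1 - i)/(m(m + 1)) for the i-th best of m members (sampling with replacement here for brevity):

```python
import random

def select_parents(members, n_parents, rng=random):
    """Select parents from a fitness-sorted complex (best first) using the
    triangular probability of SCE-UA: the i-th best of m members is picked
    with probability 2*(m + 1 - i) / (m*(m + 1)), so fitter members are
    more likely to enter the subcomplex of parents."""
    m = len(members)
    weights = [2.0 * (m + 1 - i) / (m * (m + 1)) for i in range(1, m + 1)]
    return rng.choices(members, weights=weights, k=n_parents)
```

The weights sum to one by construction, and the best member is m times as likely to be chosen as the worst.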
The Competitive Complex Evolution (CCE) and Modified Competitive Complex Evolution (MCCE) are the search cores of the SCE-UA and SP-UCI algorithms, respectively. The CCE and MCCE evolutionary processes are developed based on the Nelder-Mead method (Nelder and Mead, 1965) with some modifications. The evolution process in the SCE-UA is not limited to these algorithms. In fact, several studies have incorporated different EAs into the structure of the SCE-UA algorithm. For example, Frog Leaping (FL) was developed by adapting the Particle Swarm Optimization (PSO) algorithm to the SCE-UA structure for solving discrete problems (Eusuff et al., 2006; Eusuff and Lansey, 2003). Mariani et al. (2011) proposed an SCE-UA algorithm that employs DE for evolving the complexes. These studies revealed the flexibility of the SCE-UA in combination with other types of EAs; however, the potential of combining different algorithms into a hybrid shuffled complex scheme has not been investigated.
The unique structure of the SCE-UA algorithm, along with its flexibility in using different EAs, motivated us to use the SCE-UA as the cornerstone of the SC-SAHEL framework. The SC-SAHEL algorithm employs multiple EAs for evolving the population in a structure similar to that of the SCE-UA, with the goal of selecting the most suitable search algorithm at each optimization step. On the one hand, some EAs are more capable of visiting new regions of the search space and exploring the problem space, and hence are particularly suitable at the beginning of the optimization (Olorunda and Engelbrecht, 2008). On the other hand, some EAs are more capable of searching within the visited regions of the search space, and hence boost the convergence process after finding the region of interest (Mirjalili and Hashim, 2010). Balancing between these two steps, which are referred to as exploration and exploitation (Moeini and Afshar, 2009), is a challenging task in stochastic optimization methods (Črepinšek et al., 2013). The SC-SAHEL algorithm maintains a balance between the exploration and exploitation phases by evaluating the performance of the participating EAs at each optimization step. EAs contribute to the population evolution according to their performance in previous steps. The algorithms' performance is evaluated by comparing the evolved complexes before and after evolution. In this process, the algorithm most suitable for the problem space becomes the dominant search core.
In this study, four different EAs are used as search cores in the proposed SC-SAHEL framework: the Modified Competitive Complex Evolution (MCCE) used in the SP-UCI algorithm, Modified Frog Leaping (MFL), the Modified Grey Wolf Optimizer (MGWO), and Differential Evolution (DE). To better illustrate the performance of the hybrid SC-SAHEL algorithm, the framework is benchmarked over 29 test functions and compared to SC-SAHEL with a single EA. Among the 29 employed test functions, there are 23 classic test functions (Xin et al., 1999) and 6 composite test functions (Liang et al., 2005), which are commonly used as benchmarks for comparing optimization algorithms.
Furthermore, the SC-SAHEL framework is tested on a conceptual hydropower model built for the Folsom reservoir located in northern California, USA. The objective is to maximize hydropower generation by finding the optimum discharge from the reservoir. The study period covers the run-off season in California from April to June, in which reservoirs have the highest annual storage volume (Field and Lund, 2006). Using the proposed framework, we compared different EAs' capability of finding a near-optimum solution for dry, wet, and below-normal scenarios. The results show that the proposed algorithm is not only competitive in terms of increasing power generation, but is also able to reveal the advantages and disadvantages of the participating EAs.
The rest of the paper is organized as follows. In section 2, the structure of the SC-SAHEL algorithm and details of the four EAs are presented. Section 3 presents the test functions, the settings of the experiments, and the results obtained for each test function. Section 4 introduces the reservoir model and the optimization results for the case study. Finally, in section 5, we draw conclusions, summarize some limitations of the newly introduced framework, and suggest directions for future work.

Methodology
The SC-SAHEL algorithm is a parallel optimization framework built on the original SCE-UA architecture. SC-SAHEL, however, differs from the original SCE-UA algorithm by using multiple search mechanisms instead of employing only the Nelder-Mead simplex downhill method. In this section, we first introduce the main structure of SC-SAHEL. Then, we present the four EAs employed as search cores in the SC-SAHEL framework. These algorithms are selected for illustrative purposes only and can be replaced by other evolutionary algorithms. Some modifications are made to the original form of these algorithms to allow fair competition between EAs. These modifications are detailed in Appendices A-D.

The SC-SAHEL framework
The proposed SC-SAHEL optimization strategy starts with generating a population with a pre-defined sampling method within the feasible parameter range. The framework supports user-defined sampling methods, besides built-in Uniform Random Sampling (URS) and Latin Hypercube Sampling (LHS). The population is then partitioned into different complexes. The partitioning process maintains the diversity of the population in each complex. In doing so, the population is first sorted according to the (objective) function values. Then, the sorted population is divided into NGS equally-sized groups (NGS being the number of complexes), ensuring that the members of each group have similar objective function values. Each complex subsequently selects a member at random from each of these groups. This procedure maintains the diversity of the population within each complex. The complexes are then assigned to EAs and evolved. In contrast to the original concept of the SCE-UA, the complexes are evolved with different EAs rather than a single search mechanism. At the beginning of the search, an equal number of complexes is assigned to each evolutionary method. For instance, if the population is partitioned into 8 complexes and 4 different EAs are used, each algorithm will evolve 2 complexes independently (2-2-2-2). After evolving the complexes for a pre-specified number of steps, the Evolutionary Method Performance (EMP) metric (Eq. (1)) is calculated for each EA:

EMP = [mean(F) - mean(F_N)] / mean(F),   (1)

in which F and F_N are the objective function values of the individuals in each complex before and after evolution, respectively. The EMP metric measures the change in the mean objective function value of the individuals in each complex in comparison to their previous state. A higher EMP value indicates a larger reduction in the mean objective function value obtained by the individuals in the complex.
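The EMP metric of Eq. (1) reduces to a one-line computation; a minimal sketch (our own helper, not the framework's code):

```python
def emp(f_before, f_after):
    """Evolutionary Method Performance (Eq. (1)): the relative reduction of
    the mean objective value of a complex,
    EMP = (mean(F) - mean(F_N)) / mean(F).
    Assumes minimization and a nonzero mean of the pre-evolution values."""
    mean_before = sum(f_before) / len(f_before)
    mean_after = sum(f_after) / len(f_after)
    return (mean_before - mean_after) / mean_before
```

For example, a complex whose mean objective value drops from 20 to 10 yields EMP = 0.5, while an unimproved complex yields EMP = 0.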
The performance of each evolutionary algorithm is then evaluated based on the mean EMP value over its evolved complexes. The EAs are then ranked according to these EMP values, and the ranks are in turn used to assign the number of complexes to each evolutionary method for the next iteration. The highest-ranked algorithm is assigned an additional complex to evolve in the next shuffling step, while the lowest-ranked evolutionary algorithm loses one complex. For instance, if all the EAs have 2 complexes to evolve (the 2-2-2-2 case), the number of complexes assigned to each EA can be updated to 3-2-2-1. In other words, this logic is an "award and punishment" process, in which the best-performing algorithm is "awarded" an additional complex to evolve in the next iteration, while the worst-performing algorithm is "punished" by losing one complex.
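One way to sketch this award-and-punishment reallocation is shown below; this is a hedged illustration in which the tie-breaking and bookkeeping details are ours, not the paper's (the floor of one complex per EA follows the rule described in the next paragraph):

```python
def reallocate(counts, mean_emp):
    """'Award and punishment' step: give one complex to the best-ranked EA
    and take one from the worst-ranked EA, keeping the total fixed and
    never dropping an EA below one complex.
    counts:   {ea_name: number of complexes}
    mean_emp: {ea_name: mean EMP over that EA's complexes}"""
    best = max(mean_emp, key=mean_emp.get)
    worst = min(mean_emp, key=mean_emp.get)
    new = dict(counts)
    if best != worst and new[worst] > 1:
        new[best] += 1   # award the best-performing EA
        new[worst] -= 1  # punish the worst-performing EA
    return new
```

In this sketch, when the worst-ranked EA already holds a single complex, no complex changes hands, which is one simple way to honor the one-complex floor while keeping the total constant.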
It is worth mentioning that, as some algorithms may perform poorly in the exploration phase, they might lose all their complexes during the adaptation process. This can be troublesome, as these algorithms may be superior in the exploitation phase: if such algorithms are terminated in the exploration phase, they cannot be selected during the convergence steps. Hence, EA termination is avoided, to fully utilize the potential of the EAs in all optimization steps and to balance the exploration and exploitation phases. The minimum number of complexes assigned to each evolutionary method is therefore restricted to 1 complex; if the lowest-ranked EA has only 1 complex to evolve, it will not lose its last complex. If an algorithm outperforms the others throughout the evolution of the complexes, the number of complexes assigned to the superior EA will equal the total number of complexes minus the number of EAs plus one; in this case, all other algorithms evolve one complex each. As all algorithms evolve at least one complex, they have the chance to outperform the other EAs and gain more complexes during the optimization process, and potentially to become the dominant search method as the search continues toward the exploitation phase. Fig. 1 shows the flowchart of the SC-SAHEL algorithm, the pseudo code of which is as follows:

Step 0 Initialization. Select NGS > 1 and NPS (suggested NPS > 2n + 1, where n is the dimension of the problem), where NGS is the number of complexes and NPS is the number of individuals in each complex. NGS should be proportional to the number of evolutionary algorithms so that all the participating EAs have an equal number of complexes at the beginning of the search.
Step 1 Sample NPT points in the feasible parameter space using a user-defined sampling method, where NPT equals NGS × NPS. Compute the objective function value for each point.
Step 2 Rank and sort all individuals in the order of increasing objective function value.
Step 3 Partition the entire population into complexes. Assign complexes to the participating EAs.
Step 4 Monitor and restore population dimensionality using the PCA algorithm (optional).
Step 5 Evolve each complex using the corresponding EA.
Step 6 After evolving the complexes for a pre-defined number of iterations, calculate the mean EMP for each EA.
Step 7 Rank the participating EAs according to the mean EMP value of each evolutionary method. The highest-ranked method will get an additional complex in the next iteration, while the worst evolutionary method will lose one.
Step 8 Shuffle complexes and form a new population.
Step 9 Check whether the convergence criteria are satisfied; otherwise, go to step 3.

SC-SAHEL allows for different settings that can influence the performance of the algorithm. Careful consideration should be devoted to the selection of these settings, including the number of complexes, the number of individuals within each complex, the number of evolution steps before each shuffling, and the stopping-criteria thresholds. Some of these settings are adopted from the suggested settings for the SCE-UA. For instance, the number of individuals within each complex is set to 2d + 1, where d is the dimension of the problem. However, some of the suggested settings cannot be applied to the SC-SAHEL framework due to the use of different EAs. These settings can be changed according to the complexity of the problem and the EAs employed within the framework. For instance, the number of complexes, the number of points within each complex, and the number of evolution steps before each shuffling are problem dependent.
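The diversity-preserving partition of Steps 2-3 can be sketched as follows, assuming minimization and an SCE-UA-style systematic assignment in which each complex draws one member per fitness tier (the implementation details are ours, not the framework's):

```python
import random

def partition(population, fitness, ngs):
    """Diversity-preserving partition used by SCE-UA-type frameworks:
    sort the population by fitness (best first), cut the sorted list into
    tiers of ngs points with similar objective values, then give each of
    the ngs complexes one randomly chosen point from every tier.
    Assumes len(population) is a multiple of ngs."""
    order = sorted(range(len(population)), key=lambda i: fitness[i])
    complexes = [[] for _ in range(ngs)]
    for tier_start in range(0, len(order), ngs):
        tier = order[tier_start:tier_start + ngs]
        random.shuffle(tier)  # random assignment within a fitness tier
        for j, idx in enumerate(tier):
            complexes[j].append(population[idx])
    return complexes
```

Because every complex receives exactly one member from each tier, all complexes cover the full range of objective values, which is the diversity property described above.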
The SC-SAHEL framework employs three different stopping criteria, which are adopted from SCE-UA and SP-UCI. These stopping criteria include the number of function evaluations, the range of the samples that span the search space, and the improvement in the objective function value in the last m shuffling steps. These criteria are compared to pre-defined thresholds, which can in turn be tuned according to the complexity of the problem. Improper selection of these thresholds may lead to early or delayed convergence.
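The three stopping criteria can be combined in a simple check; in this hedged sketch, all names, the argument order, and the thresholds are illustrative rather than the framework's actual interface:

```python
def should_stop(n_evals, max_evals, pop_range, feasible_range,
                history, m, rel_improve_tol, range_tol):
    """Combine the three SC-SAHEL-style stopping criteria:
    (1) the function-evaluation budget is exhausted,
    (2) the population spread has collapsed relative to the feasible range,
    (3) the best objective value has improved by less than a relative
        tolerance over the last m shuffling loops (history holds the best
        value recorded after each shuffling loop)."""
    if n_evals >= max_evals:
        return True
    if pop_range / feasible_range < range_tol:
        return True
    if len(history) > m:
        improvement = abs(history[-m - 1] - history[-1])
        if improvement < rel_improve_tol * abs(history[-1]):
            return True
    return False
```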

Evolutionary algorithms employed within SC-SAHEL
In this paper, we employ four different EAs to illustrate the flexibility of the SC-SAHEL framework in adopting various EAs and to show the competition among algorithms. These algorithms are briefly presented here. The pseudo code and details of these algorithms can be found in Appendices A-D.

Modified Competitive Complex Evolution (MCCE)
The MCCE algorithm is an enhanced version of the CCE algorithm used in the SCE-UA framework, providing a robust, efficient, and effective EA for exploring and exploiting the search space. The MCCE algorithm is developed based on the Nelder-Mead algorithm; however, Chu et al. (2011) found that the shrink step in the Nelder-Mead algorithm can cause premature convergence to a local optimum. Interested readers can refer to Chu et al. (2010, 2011) for further details on the MCCE algorithm. The pseudo code of the MCCE algorithm is detailed in Appendix A. SC-SAHEL has performance similar to SP-UCI when the MCCE algorithm is used as the only search mechanism and the PCA and resampling settings of SP-UCI are enabled. For simplicity of comparison, SC-SAHEL with the MCCE algorithm as its search core is referred to as SP-UCI hereafter.

Modified Frog Leaping (MFL)
The Frog Leaping (FL) algorithm uses an adapted PSO algorithm as a local search mechanism within the SCE-UA framework (Eusuff and Lansey, 2003). FL has been shown to be an efficient search algorithm for discrete optimization problems, and can find the optimum solution much faster than GA (Eusuff et al., 2006). In order to adapt the FL algorithm to the SC-SAHEL parallel framework, we introduce a slightly modified version of the FL algorithm, entitled MFL. Further details and the pseudo code of the MFL can be found in Appendix B. The original FL algorithm and the MFL have four main differences. First, the original FL is designed for discrete optimization problems, whereas the MFL is modified for the continuous domain. Second, the MFL uses the best point in the subcomplex for generating new points, whereas in the original FL framework new points are generated using the best point in the complex and in the entire population. The reason for this modification is to avoid the use of any external information by the participating EAs; in other words, the amount of information given to each EA is limited to the complexes assigned to it. Third, as the MFL algorithm only uses the best point within the complex for generating the new generation, two different jump rates are used, to give the MFL better exploration and exploitation ability during the optimization process. These jump rates were selected by trial and error and may need further investigation to achieve better performance of the MFL algorithm. Fourth, when the generated offspring is not better than its parents, a new point is randomly selected within the range of the individuals in the subcomplex. This process, which is referred to as the censorship step in the FL algorithm (Eusuff et al., 2006), differs from the original algorithm in that the MFL uses the range of the points in the complex rather than the whole feasible parameter range.
Resampling within the whole parameter space can decrease the convergence speed of the FL algorithm. Hence, the resampling process is carried out only within the range of the points in the complex. Hereafter, SC-SAHEL with the MFL algorithm as its only search core is referred to as SC-MFL.
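A minimal sketch of an MFL-style leap and censorship step, under our own simplifying assumptions (a single fixed jump rate is shown, whereas the MFL uses two; the paper's actual rate values are not reproduced here):

```python
import random

def mfl_step(worst, best, lo, hi, jump):
    """One MFL-style leap: move the worst frog toward the best point of
    the subcomplex by a random fraction of the gap, scaled by the jump
    rate, and clip the result to the feasible range."""
    new = [w + jump * random.random() * (b - w) for w, b in zip(worst, best)]
    return [min(max(x, l), h) for x, l, h in zip(new, lo, hi)]

def censor(complex_points):
    """Censorship step of the modified FL: when the leap fails to improve,
    resample inside the bounding box of the current complex rather than
    the whole feasible space."""
    dims = list(zip(*complex_points))
    return [random.uniform(min(d), max(d)) for d in dims]
```

Resampling inside the complex's bounding box, rather than the full feasible space, is what preserves the convergence speed discussed above.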

Modified Grey Wolf Optimizer (MGWO)
The Grey Wolf Optimizer is a meta-heuristic algorithm inspired by the social hierarchy and hunting behavior of grey wolves (Mirjalili et al., 2014, 2016). The grey wolves' hunting strategy has three main steps: first, chasing and approaching the prey; second, encircling and pursuing the prey; and finally, attacking the prey (Mirjalili et al., 2014). The GWO process resembles this hunting strategy. In this algorithm, the top three fittest individuals are selected and contribute to the evolution of the population; hence, the individuals in the population are navigated toward the best solution. The GWO algorithm has been shown to be effective and efficient on many test functions and engineering problems, and its performance is comparable to that of other popular optimization algorithms, such as GA and PSO (Mirjalili et al., 2014). GWO follows an adaptive process to update the jump rates, to maintain a balance between the exploration and exploitation phases. The adaptive jump rate of the GWO is removed here, and 3 different fixed jump rates are used instead. The reason for this modification is that the information given to each EA is limited to its assigned complexes. Similar to the MFL algorithm, the modified GWO (MGWO) algorithm uses the range of the parameters to resample individuals when the generated offspring are not superior to their parents. Details and the pseudo code of the MGWO algorithm can be found in Appendix C. Hereafter, SC-SAHEL with the MGWO algorithm as its only search core is referred to as SC-MGWO.
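A GWO-style position update with the adaptive coefficient replaced by fixed jump rates, as the MGWO modification describes, can be sketched as follows; the update form follows the standard GWO equations (alpha, beta, and delta each pull the candidate, and the pulls are averaged), while the fixed rates and helper names are illustrative, not the paper's values:

```python
import random

def mgwo_step(x, leaders, jump_rates):
    """One MGWO-style move: the three fittest members of the complex
    (alpha, beta, delta) each pull the candidate toward themselves, and
    the new position is the average of the three pulls. The adaptive
    coefficient of the original GWO is replaced by a fixed jump rate per
    leader, following the modification described above."""
    pulls = []
    for leader, a in zip(leaders, jump_rates):
        pull = []
        for xl, xi in zip(leader, x):
            A = a * (2.0 * random.random() - 1.0)  # random step scale in [-a, a]
            C = 2.0 * random.random()              # random emphasis on the leader
            pull.append(xl - A * abs(C * xl - xi))
        pulls.append(pull)
    return [sum(c) / len(pulls) for c in zip(*pulls)]
```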

Differential Evolution (DE)
The DE algorithm is a powerful but simple heuristic population-based optimization algorithm (Qin and Suganthan, 2005; Sadegh and Vrugt, 2014) proposed by Storn and Price (1997). Mariani et al. (2011) integrated the DE algorithm into the SCE-UA framework and showed that the new framework is able to provide more robust solutions for some optimization problems in comparison to the SCE-UA. Similar to the work by Mariani et al. (2011), we use a slightly modified DE algorithm based on concepts from Qin and Suganthan (2005) in order to integrate the DE algorithm into the SC-SAHEL framework. As the DE algorithm converges more slowly than the other EAs used here, we have added multiple mutation steps to the DE. Here, the DE algorithm uses three different mutation rates in three attempts. In the first attempt, the algorithm uses a larger mutation rate, which helps explore the search space with larger jumps. In the second attempt, the algorithm reduces the mutation rate to a quarter of the first attempt, which enhances the exploitation capability of the EA. If neither of these mutation rates generates a better offspring than the parents, in the third attempt the mutation rate is set to half of the first attempt. Lastly, if none of these attempts generates a better offspring than the parents, a new point is randomly selected within the range of the individuals in the complex. The pseudo code of the modified DE algorithm is detailed in Appendix D. The SC-SAHEL algorithm is referred to as SC-DE when the DE algorithm is used as the only search algorithm.
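The three-attempt scheme can be sketched with a DE/rand/1-style mutation; crossover is omitted for brevity, and the base mutation factor and helper names are our own illustrative choices, not the values used in the paper:

```python
import random

def de_attempts(target_f, evaluate, complex_points, base_f=0.8):
    """Three-attempt DE-style mutation: try mutation factors F, F/4, and
    F/2 in turn (large jump first, then a quarter, then a half, as
    described above); if no attempt beats the parent, fall back to
    resampling within the bounding box of the complex. Minimization is
    assumed; 'evaluate' is the objective function."""
    for factor in (base_f, base_f / 4.0, base_f / 2.0):
        r1, r2, r3 = random.sample(complex_points, 3)
        trial = [a + factor * (b - c) for a, b, c in zip(r1, r2, r3)]
        f_trial = evaluate(trial)
        if f_trial < target_f:
            return trial, f_trial
    dims = list(zip(*complex_points))  # fallback: resample in complex range
    trial = [random.uniform(min(d), max(d)) for d in dims]
    return trial, evaluate(trial)
```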

Test functions
The SC-SAHEL framework is benchmarked over 29 mathematical test functions using single-method and multi-method search mechanisms. These include 23 classic test functions obtained from Xin et al. (1999). The names and formulations of these functions, along with their dimensionality and parameter ranges, are listed in Table 1. We selected these test functions as they are standard and popular benchmarks for evaluating new optimization algorithms (Mirjalili et al., 2014). The remaining 6 are composite test functions, cf1-cf6 (Liang et al., 2005), which represent complex optimization problems. Details of the composite test functions can be found in the work of Liang et al. (2005) and Mirjalili et al. (2014). The classic test functions have dimensions in the range of 2-30, and all the composite test functions are 10-dimensional. Figs. 2 and 3 show the response surfaces of the test functions that can be displayed in two-dimensional form. The SC-SAHEL settings used for optimizing these test functions are listed in Table 2 for each test function. The number of points in each complex and the number of evolution steps for each complex are set to 2d + 1 and max(d + 1, 10), respectively, where d is the dimension of the problem. The number of evolution steps is set to max(d + 1, 10) to guarantee that the EAs evolve the complexes for a sufficient number of steps before the EAs are evaluated. In high-dimensional problems, the maximum number of function evaluations should be selected with careful consideration.
Several experiments were conducted to find an optimal set of parameters for the SC-SAHEL settings. These experiments revealed that a low number of evolution steps before shuffling the complexes may not reveal the potential of the EAs. On the other hand, using a large value for the number of evolution steps may shrink each complex to a small space that cannot span the whole search space (Duan et al., 1994). The maximum number of function evaluations is determined according to the complexity of the problem and is different for each test case. In addition to the maximum number of function evaluations, the range of the parameters in the population and the improvement in the objective function values are used as convergence criteria. The optimization run is terminated if the population range is smaller than 10^-7 % of the feasible range, or if the improvement in the (objective) function value is smaller than 0.1% of the mean (objective) function value over the last 50 shuffling steps. The LHS mechanism is used as the sampling algorithm of SC-SAHEL for generating the initial population. The framework provides multiple user-selectable settings for boundary handling; SC-SAHEL uses reflection as the default boundary-handling method. Other initial sampling and boundary-handling methods are also implemented in the SC-SAHEL framework. The sensitivity of the performance of the SC-SAHEL algorithm to the initial sampling and boundary handling is not studied in this paper. The aforementioned settings can be applied to a wide range of problems.

Table 1 lists the detailed information of the 23 test functions from Xin et al. (1999), including the mathematical expression, dimension, parameter range, and global optimum value (f_min) of each function.

Table 3 shows the statistics of the final function values over 30 independent runs on the 29 test functions using the hybrid SC-SAHEL and the individual EAs, with the goal of minimizing the function values. The best mean function value obtained for each test function is shown in bold in Table 3. Results show that the hybrid SC-SAHEL achieved the lowest function values in 15 out of 29 test functions, compared to the mean function values achieved by all individual algorithms. It is noteworthy that in 20 out of 29 test functions, the hybrid SC-SAHEL was among the top two optimization methods in finding the minimum function value. A two-sample t-test (at the 5% significance level) also showed that the results generated with the SC-SAHEL algorithm are generally similar to those of the best-performing algorithms. Comparing the single-method algorithms, the statistics obtained by SP-UCI are, in general, superior to those of the other participating EAs. In 12 out of 29 test functions, the SP-UCI algorithm achieved the lowest function value.
SC-MFL, SC-MGWO, and SC-DE were superior to the other algorithms in 6, 10, and 11 out of 29 test functions, respectively. On test functions f6, f16, f17, f18, f19, f20, and f23, the single-method and multi-method algorithms achieved the same function values on average in most cases. In these cases, according to the statistics shown in Table 3, the SP-UCI and SC-SAHEL algorithms offer lower standard deviation values and show more consistent results compared to the other EAs. The low standard deviation values obtained by SP-UCI and SC-SAHEL indicate the robustness and consistency of these two algorithms in comparison to the other algorithms.

Results and discussion
On the test functions for which the hybrid SC-SAHEL algorithm was not able to produce the best mean function value, the deviation of its mean function value from that of the best-performing algorithms is marginal. For instance, on test functions f2, f4, f10, and f22, the statistics of the values obtained by SC-SAHEL are similar to those achieved by the best-performing methods, which are SP-UCI and SC-MGWO. In general, the hybrid SC-SAHEL algorithm is superior to the algorithms with an individual EA on most of the test functions, although on some test functions the SC-SAHEL algorithm is slightly inferior to the best-performing algorithm, with only marginal differences. The performance of SC-SAHEL on these test functions can be attributed to two main reasons. First, in the hybrid algorithm, all the EAs are involved in the evolution of the population. Hence, if one of the algorithms performs poorly in comparison to the other EAs, it still evolves a portion of the population. As the complexes are evolved independently, the poor-performing EAs may degrade part of the information in the complexes they evolve. On the other hand, when an algorithm is used individually in the SC-SAHEL framework, that EA utilizes the information in all the complexes and the whole population. In this case, a better result will be achieved than with the hybrid SC-SAHEL if that EA is the fittest algorithm for the problem space. Second, some EAs are faster and more efficient in a specific optimization phase (exploration or exploitation) than others, but might not be as effective in the other phase. Hence, dominance of these algorithms during the exploration or exploitation phase can mislead the other EAs and cause early (premature) convergence. Engagement of the other algorithms in the evolution process may prevent early convergence in these cases.
Generally, the performance metric, EMP, is responsible for selecting the most suitable algorithm at each optimization step; however, the criterion used in SC-SAHEL is not guaranteed to perform well in all problem spaces. The performance criterion is problem dependent and needs further investigation based on the problem space and the EAs. Nevertheless, the EMP metric appears to be a suitable metric for a wide range of problems.
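For illustration, the "award and punishment" reallocation of complexes can be sketched as below. The EMP metric itself is not reproduced in this excerpt, so `scores` stands in for whatever per-EA performance measure ranks the algorithms, and the rank-proportional rule with a guaranteed minimum of one complex per EA is an assumption for the sketch, not the exact SC-SAHEL formula.

```python
import numpy as np

def allocate_complexes(scores, n_complexes):
    """Assign complexes to EAs by rank: better-scoring EAs receive more
    complexes ("award"), worse-scoring EAs fewer ("punishment").
    `scores` holds one performance value per EA (higher = better)."""
    order = np.argsort(scores)[::-1]          # best EA first
    weights = np.arange(len(scores), 0, -1)   # rank-based weights: best gets largest
    alloc = np.zeros(len(scores), dtype=int)
    # proportional allocation of the complexes beyond the guaranteed one-per-EA
    raw = weights / weights.sum() * (n_complexes - len(scores))
    alloc[order] = 1 + np.floor(raw).astype(int)
    # hand out any remainder to the top-ranked EAs
    for i in range(n_complexes - alloc.sum()):
        alloc[order[i % len(order)]] += 1
    return alloc
```

With three EAs and eight complexes, the best-scoring EA receives the largest share while every EA keeps at least one complex, so no search mechanism is ever fully eliminated mid-run.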
To further evaluate the performance of the hybrid SC-SAHEL algorithm, we present the success rate of the algorithms in Fig. 4. The success rate is defined by setting a target function value for each test function. When the final function value is smaller than the target value, the goal of the optimization is reached and the algorithm is considered successful; a higher success rate therefore indicates better performance. We use the same target value for all algorithms to ensure a fair comparison. According to Fig. 4, in 16 out of 29 test functions, the hybrid algorithm achieved a 100% success rate. In the other cases, the success rates achieved by the proposed hybrid algorithm are comparable to those of the best-performing single-EA algorithm. For instance, on the test function f9, SC-MGWO, SC-DE and SC-MFL are largely unsuccessful in finding the optimum solution (success rates of 0%, 0%, and 10%, respectively), whereas the hybrid SC-SAHEL algorithm (80% success rate) performs similarly to SP-UCI (97% success rate). On the test function f21, the success rate of the hybrid SC-SAHEL algorithm (87%) is close to that of SC-MGWO (93%), the most successful algorithm, and considerably higher than that of SP-UCI (33%). According to Fig. 4, the average success rate of SC-SAHEL over all 29 test functions is about 80%, the highest among all algorithms, compared to 73%, 58%, 58%, and 54% for the SP-UCI, SC-DE, SC-MGWO, and SC-MFL algorithms, respectively.
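The success-rate computation just described can be sketched as follows; the per-function target values belong to the experimental setup and are not reproduced here, so the threshold passed in is a placeholder.

```python
def success_rate(final_values, target):
    """Percentage of independent runs whose final objective value fell
    below the per-function target (smaller is better), as used in Fig. 4."""
    hits = sum(1 for v in final_values if v < target)
    return 100.0 * hits / len(final_values)
```

For example, if two of four independent runs end below the target, the success rate is 50%.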
In some situations, poor-performing EAs may mislead the other EAs and cause early (premature) convergence. For instance, on the test function cf5, the hybrid algorithm achieved a 57% success rate, which is still better than the success rates of SP-UCI, SC-MFL and SC-MGWO (0%, 10%, and 50%, respectively). On this test function, the performance of the hybrid SC-SAHEL is less affected by the most successful algorithm (DE). This may be due to the low evolution speed of the DE algorithm, as the SC-SAHEL algorithm maintains both convergence speed and efficiency during the entire search. The hybrid SC-SAHEL shows promising performance on the test functions cf2 and cf3, where its success rate is significantly higher than those of the other EAs, most of which have 0% success rates. For test function cf2, the SC-DE algorithm achieved the lowest objective function value and the highest success rate (37%) among the single-method algorithms. However, when the EAs are combined in the hybrid form, both the objective function value and the success rate improve significantly. This shows that SC-SAHEL is capable of solving complex problems by exploiting the potential and advantages of all participating algorithms, thereby improving the search success rate.
In Table 4, we present the mean and standard deviation of the number of function evaluations, which indicates the speed of each algorithm; the lowest mean number of function evaluations is shown in bold. As one of the stopping criteria in the SC-SAHEL framework is the maximum number of function evaluations, some algorithms may terminate before they show their full potential. For instance, SC-DE and SC-MFL usually reach the maximum number of function evaluations, while other algorithms satisfy other convergence criteria with far fewer function evaluations. In this case, the objective function value does not represent the potential of the slower algorithms. To give better insight into this matter, the mean and standard deviation (Std) of the number of function evaluations are compared in Table 4, with the goal of comparing the speed of the individual EAs and the hybrid optimization algorithm. According to Table 4, in most of the test cases the SP-UCI algorithm requires the fewest function evaluations, regardless of the objective function value achieved.
(Fig. 4. The success rate of the SC-SAHEL algorithm using multi-method and single-method search mechanisms for 30 independent runs on 29 test functions.)

Comparing the success rate and the number of function evaluations provides a fuller picture of each algorithm's behavior. On the test function f7, the exploration process starts with the dominance of MCCE and shifts between MGWO and MFL after the first 20 shuffling steps. In some of the test functions, such as f7, a more random fluctuation is observed in the number of complexes assigned to each EA; the reason for this behavior is the close competition among the EAs in these shuffling steps. Due to the noisy response surface of the test function f7, most of the EAs cannot significantly improve the objective function values during the exploitation phase. On test functions f8 and f18, the MFL and DE algorithms are the dominant search methods, respectively, during the beginning of the run, while the MCCE algorithm becomes dominant only during the exploitation phase. Lastly, on test functions f9, f22, cf1, and cf4, the number of complexes varies throughout the run, and different EAs take turns as the most dominant search algorithm.
It is worth mentioning that Figs. 5–7 show the number of complexes assigned to each EA for a single optimization run. Our inspection of the individual run results (not shown herein) indicates that the variation of the number of complexes is similar across runs for most test cases: the variation follows a specific pattern and is not random. The similarity of the EA dominance patterns indicates that the selection of the EAs by the SC-SAHEL framework depends only on the characteristics of the problem space and the EAs employed. It also indicates that different EAs have pros and cons on different optimization problems.
To summarize our experiments on the conceptual test functions (Tables 3 and 4, and Figs. 4–7), the main advantage of the SC-SAHEL algorithm over other optimization methods is its capability to reveal the trade-offs among different EAs and to illustrate the competition of the participating EAs. Different optimization problems have different complexities, which pose different challenges for each EA. By incorporating different types of EAs in a parallel computing framework and implementing an "award and punishment" logic, the newly developed SC-SAHEL framework not only provides an effective tool for global optimization but also gives the user insight into the advantages and disadvantages of the participating EAs on individual optimization tasks. This shows the potential of the SC-SAHEL framework for solving different classes of problems with different levels of complexity. Moreover, the hybrid SC-SAHEL algorithm is superior to shuffled-complex methods with a single search mechanism, such as SP-UCI, in the large majority of the test functions.

Example application and results
In this section, we demonstrate an example application of the newly developed SC-SAHEL algorithm. A conceptual reservoir model is developed with the goal of maximizing hydropower generation on a daily operation basis. The model is applied to the Folsom reservoir in Northern California.

Reservoir model
A conceptual model is set up based on the relationship between hydropower generation, storage, water head and the bathymetry of the Folsom reservoir. Daily releases from the reservoir over the study period are the decision variables of the model, which in turn determine the problem dimensionality. The objective of the model is to maximize hydropower generation over a specific period. Total hydropower production is a function of the head difference between forebay and tailwater and the turbine flow rate. The driving equation of the model is the mass balance (water budget), formulated as

S_{t+1} = S_t + I_t - R_t + M_t,

where S_t is the storage at time step t, I_t and R_t signify the total inflow to and release from the reservoir at time t, respectively, and M_t is the total outflow/inflow error, derived by closing the mass balance on the daily observed data. The objective function employed here is

F = sum over t of (P_c - P_t),

where P_c is the total power plant capacity in MW and P_t is the total power generated on day t in MW. For each day, P_t is derived as

P_t = eta * rho * g * Q_t * H_t,

where eta signifies the turbine efficiency, rho is the water density (kg/m^3), g is gravity (9.81 m/s^2) and Q_t is the discharge (m^3/s) at time step t. H_t is the hydraulic head (m) at time step t, defined as

H_t = h_f - h_tw,

where h_f and h_tw are the water elevations in the forebay and tailwater, respectively, derived by fitting polynomials to the reservoir bathymetry data. In the reservoir model described above, multiple constraints are considered to better represent the real behavior of the system: power generation capacity, storage level, spill capacity, and the change in daily hydropower discharge. Total daily power generation is compared to the maximum capacity of the hydropower plant, and a rule curve is used to control the reservoir storage level during the operation period. In addition, the final simulated reservoir storage is constrained to 0.9–1.1 times the observed storage.
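A minimal sketch of the daily mass-balance and power computation described above. The forebay elevation-storage relation here is a linear stand-in for the fitted bathymetry polynomial, and the constants (efficiency, tailwater elevation, seconds-per-day conversion) are illustrative assumptions, not the calibrated model values.

```python
def simulate_hydropower(S0, inflow, release, M, eta=0.9, rho=1000.0, g=9.81,
                        forebay=lambda S: 120.0 + 1e-8 * S, h_tw=30.0):
    """Daily water budget S_{t+1} = S_t + I_t - R_t + M_t and power
    P_t = eta*rho*g*Q_t*H_t (converted to MW). `forebay` maps storage
    (m^3) to forebay elevation (m); the linear form is a placeholder
    for the polynomial fitted to bathymetry data in the paper."""
    S, power = S0, []
    for I_t, R_t, M_t in zip(inflow, release, M):
        S = S + I_t - R_t + M_t                # water budget, daily step (m^3)
        Q_t = R_t / 86400.0                    # mean daily discharge (m^3/s)
        H_t = forebay(S) - h_tw                # hydraulic head (m)
        P_t = eta * rho * g * Q_t * H_t / 1e6  # generated power in MW
        power.append(P_t)
    return S, power
```

Because P_t depends on both Q_t and H_t, holding water back early in the season can raise the head and thus later generation, which is exactly the trade-off the optimizer searches over.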
In other words, a 10% deviation from the observed data is allowed for the final simulated storage level. This constraint adds information from the real reservoir operation into the optimization process, and it can be replaced by other operation rules for simulation purposes. The spill capacity of the dam is calculated from the water level in the forebay and compared to the simulated spilled water; a quadratic function is fitted to the water level and spill capacity data to derive the spill capacity at each time step. The change in daily hydropower release is also constrained, to better represent actual hydropower discharge and avoid large day-to-day variations in release.
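The constraint handling described above can be sketched as a penalized objective. The shortfall-from-capacity form of the objective and the size of the penalty are assumptions for the sketch; only the 0.9–1.1 storage band and the plant-capacity check come from the text.

```python
def penalized_objective(power, S_final, S_obs, P_c, big=1e12):
    """Objective sketch: maximize generation by minimizing the total
    shortfall sum(P_c - P_t). Infeasible solutions (final storage
    outside 0.9-1.1 of the observed storage, or daily power above the
    plant capacity) return a large penalty so they are easy to
    distinguish from feasible solutions, as described in the text."""
    if not 0.9 * S_obs <= S_final <= 1.1 * S_obs:
        return big                       # final-storage constraint violated
    if any(p > P_c for p in power):
        return big                       # plant capacity exceeded
    return sum(P_c - p for p in power)   # shortfall; 0 means full capacity
```

Because infeasible solutions return a value many orders of magnitude above any feasible shortfall, filtering them out of the run statistics (as done for Fig. 8) is a simple threshold test.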
The reservoir model used here is non-linear and continuous, and its constraints render finding a feasible solution a challenging task for all the EAs. The SC-SAHEL framework is used to maximize the hydropower generation by minimizing the objective function value. The settings used for SC-SAHEL are similar to those used for the mathematical test functions, except that the maximum number of function evaluations is set to 10^6. The lower bound of the parameter range varies monthly due to the operational rules, whereas the upper bound is determined by the hydraulic structure of the dam.

Study basin
Folsom reservoir is located on the American River in Northern California, near Sacramento. Folsom dam was built by the US Army Corps of Engineers during 1948–1956 and is a multipurpose facility. The main functions of the facility are flood control, water supply for irrigation, hydropower generation, maintaining environmental flow, water quality purposes, and providing a recreational area. The reservoir has a capacity of 1,203,878,290 m^3 and the power plant has a total capacity of 198.7 MW. Three study periods are considered here. The first is April 1st, 2010 to June 30th, 2010; the year 2010 is categorized as below-normal by the California Department of Water Resources. The same period is selected in 2011 and 2015, as the former is categorized by the California Department of Water Resources as wet and the latter is classified as a critical dry year. The inflow to and outflow from the reservoir are obtained from the California Data Exchange Center (http://cdec.water.ca.gov/). Note that demand is not included in the model because demand data were not available from a public data source.

Results and discussion
The boxplots of the objective function values are shown in Fig. 8 for the Folsom reservoir during the runoff season in 2015, 2010, and 2011, which are dry, below-normal, and wet years, respectively. The presented results are based on 30 independent optimization runs; however, infeasible objective function values are removed. The feasibility of a solution is evaluated according to its objective function value: due to the large values returned by the penalty function for infeasible solutions, such solutions are easily distinguished from the feasible ones. For the wet-year (2011) case, SC-MGWO and SC-DE did not find a feasible solution in 2 and 4 runs out of the 30 independent runs, respectively. The hybrid SC-SAHEL found feasible solutions in all cases; however, some of these solutions are not global optima. On average, the hybrid SC-SAHEL algorithm achieved the lowest objective function value of all the algorithms during the dry and below-normal periods, during which SC-SAHEL, SP-UCI, and SC-DE show similar performance. In the wet period, the SP-UCI algorithm achieved the lowest objective function value, with the SC-SAHEL algorithm ranked second by mean objective function value; the results achieved by SC-DE in this period are also comparable to those of SC-SAHEL and SP-UCI. Overall, the results show that the hybrid SC-SAHEL algorithm has similar or superior performance compared to the single-method algorithms. Moreover, the results achieved by the SC-SAHEL and SP-UCI algorithms have less variability than those of the other algorithms, which shows the robustness of these two algorithms. The worst-performing algorithm is SC-MGWO, which achieved the highest mean objective function value in all the study periods.
In Fig. 9, boxplots of the number of function evaluations are presented for the successful runs among the 30 independent runs during the dry, below-normal and wet years. Although the SC-MGWO algorithm satisfied the convergence criteria with the fewest function evaluations, it was not successful in reaching the optimum solution in many cases. The SP-UCI algorithm is the second fastest method among all the algorithms. The hybrid SC-SAHEL, SC-MFL, and SC-DE are the slowest algorithms in satisfying the convergence criteria in almost all cases. The slow performance of the hybrid SC-SAHEL is due to the fact that 2 of the 4 participating EAs (DE and MFL) evolve very slowly over this response surface. Fig. 10 shows the number of complexes assigned to each EA during the search, which indicates the dominance of the participating algorithms and the "award and punishment" logic in the reservoir model. As seen in Fig. 10, the MGWO algorithm is dominant at the beginning of the search, although it is not capable of finding the optimum solution in most cases; the reason for its dominance is its speed in exploring the search space. MGWO is superior to the other EAs at the beginning of the search; however, after a few iterations, the MCCE algorithm takes precedence and becomes dominant over the other EAs. MGWO and DE are less involved in the rest of the optimization process after the initial steps, while the competition between MCCE and MFL continues. Although the contributions of MGWO and DE are minimal in the rest of the optimization process, they still utilize part of the information within the population, which can affect the speed and performance of the SC-SAHEL algorithm. In both the wet and below-normal cases, the hybrid SC-SAHEL algorithm is mostly terminated by reaching the maximum number of function evaluations.
However, the mean objective function value obtained by the hybrid SC-SAHEL is still superior to most of the algorithms.
The performance of SC-SAHEL can be affected by the settings of the algorithm. Different settings have been tested and evaluated for the reservoir model, and the results show that the number of evolution steps before shuffling can influence the performance of the hybrid SC-SAHEL algorithm. In the current setting, the number of evolution steps within each complex is set to d+1 (where d is the dimension of the problem). Although this setting seems to provide acceptable performance for a wide range of problems, it may not be the optimum setting for all problem spaces and EAs. In the reservoir model, as the study period has 91 days, the model evolves each complex for 92 steps. This many evolution steps lets the algorithms drive the complexes toward local solutions and increases the total number of function evaluations without specific gain. Decreasing the number of evolution steps allows the algorithms to communicate more frequently, so that they can use the information obtained by the other EAs. Here, for demonstrative purposes, the same setting has been applied to all the problems; however, better performance is observed for the hybrid SC-SAHEL algorithm when the number of evolution steps is set to a value smaller than 92. The algorithm is less sensitive to the other settings for the reservoir model, although they can still affect its performance.
In Fig. 11, we present the simulated storage for the different study periods achieved by the different EAs. During the dry period, the SC-SAHEL algorithm not only achieved the lowest objective function value, but its storage level is also higher than the observed storage level over most of the period. This is because power generation is a function of water height as well as discharge rate. During the below-normal period, the SC-SAHEL, SP-UCI, and SC-DE algorithms show similar behavior in terms of the storage level. During the wet period, the storage level simulated by SP-UCI and SC-SAHEL is lower than that of all the other algorithms. It is worth noting that during the wet period, the SC-SAHEL and SP-UCI algorithms are able to find the optimum solution (for which the objective function value is 0) in some of the runs. However, the storage simulated by these algorithms shows some uncertainty (Fig. 11). This indicates equifinality in the simulation, meaning that the same hydropower generation can be achieved by different sets of parameters (Feng et al., 2017). This equifinality can be due to deficiencies in the model structure or in the boundary conditions (Freer et al., 1996). The wet period appears to present a more complex response surface for the reservoir model: some algorithms, such as SC-DE, are not capable of finding a feasible solution in some of the runs, as the large input volume and the rule curve add complexity to the optimization problem.
The results of the real-world application show the potential of the newly developed SC-SAHEL framework for solving high-dimensional problems. In general, the hybrid algorithm was more successful in finding a feasible solution than the single-method algorithms. In some cases, the hybrid SC-SAHEL was terminated due to the large number of function evaluations; however, its performance is always comparable to that of the best-performing method. This shows the potential of SC-SAHEL for solving a broad class of optimization problems. Moreover, the framework provides insight into the performance of the algorithms at different steps of the optimization process, a feature that can aid the user in selecting the best setting and EA for the problem.

Conclusions and remarks
We developed a hybrid optimization framework, named Shuffled Complex-Self Adaptive Hybrid EvoLution (SC-SAHEL), which uses an "award and punishment" logic in conjunction with various types of Evolutionary Algorithms (EAs) and selects the EA that best fits the optimization problem at hand. The framework provides an arsenal of tools for testing, evaluating and developing optimization algorithms. We compared the performance of the hybrid SC-SAHEL with single-method algorithms on 29 test functions. The results showed that the SC-SAHEL algorithm is superior to most single-method optimization algorithms and, in general, offers a more robust and efficient approach for optimizing a variety of problems. Furthermore, the proposed algorithm is able to reveal the characteristics of different EAs during the entire search period. The algorithm is also designed to work in a parallel framework, which can take advantage of available computational resources. The newly developed SC-SAHEL offers several advantages over conventional optimization tools, including:

- Intelligent adaptation of the evolutionary method during the optimization process
- Flexibility to use different evolutionary methods
- Flexibility to use different initial sampling and boundary-handling methods
- Independent parallel evolution of complexes
- Avoidance of population degeneration using the PCA algorithm
- A robust and fast optimization process
- Comparison of evolutionary algorithms on different types of problems

Although the presented results support the advantage of the hybrid SC-SAHEL over algorithms with individual EAs, there are multiple directions for further improvement of the framework. One example is the performance metric used to evaluate the EAs' search mechanisms: in the current algorithm, the allocation of complexes to the different EAs is carried out by ranking the algorithms according to the EMP metric.
The performance criterion can change the allocation process and affect the performance of the algorithm. Depending on the application, a more comprehensive performance criterion may be necessary to achieve the best performance. However, the current EMP criterion does not affect the conclusions or the comparison of the different EAs. In addition, the current SC-SAHEL framework is designed to solve single-objective optimization problems; a multi-objective version can be developed to extend the scope of application. This paper serves as an introduction to the newly developed SC-SAHEL algorithm. We hope that further investigation of the interaction among different EAs, boundary-handling schemes and response surfaces in different case studies and optimization problems will reveal the advantages and limitations of SC-SAHEL.

Acknowledgements

… 1331915), NOAA/NESDIS/NCDC (Prime award NA09NES4400006 and NCSU CICS and subaward 2009-1380-01), and the U.S. Army Research Office (award W911NF-11-1-0422). The Folsom reservoir bathymetry information used here was provided by Dr. Erfan Goharian from UC Davis, who also helped us set up the reservoir model. The authors would like to thank the editors and four anonymous reviewers, whose comments significantly improved the quality of this manuscript.

Appendix A. Modified Competitive Complex Evolution (MCCE)
The MCCE algorithm pseudo code is as follows:

Step 0 Initialize i = 1, and set the maximum number of iterations allowed, I.

Step 1 Sort the individuals in order of increasing objective function value. Assign the individuals a triangular probability (except for the fittest point) according to

p_n = 2(NPS + 1 - n) / (NPS(NPS + 1)),

where NPS is the number of individuals in the complex and n is the rank of the sorted individuals.

Step 2 Select d+1 individuals (d is the problem dimension) from the complex, including the fittest individual in the complex.

Step 3 Evolve the selected subcomplex (reflection and contraction of the worst point, as in the competitive complex evolution). Let i = i + 1. If i ≤ I, go to Step 1; otherwise sort the points in the complex and return the evolved complex.
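As a rough illustration of the subcomplex evolution in Step 3, the sketch below applies the classic reflection/contraction moves of the competitive complex evolution (CCE) that MCCE modifies; the specific modifications of MCCE are not reproduced in this excerpt, so this is the baseline CCE move only.

```python
import numpy as np

def cce_step(S, f):
    """One reflection/contraction move on a subcomplex S (rows = points,
    sorted best to worst) against objective f (smaller is better)."""
    w = S[-1]                        # worst point in the subcomplex
    c = S[:-1].mean(axis=0)          # centroid excluding the worst point
    r = 2.0 * c - w                  # reflection of w through the centroid
    if f(r) < f(w):
        S[-1] = r
    else:
        q = (c + w) / 2.0            # contraction toward the centroid
        if f(q) < f(w):
            S[-1] = q
        else:                        # last resort: random point within the
            lo, hi = S.min(axis=0), S.max(axis=0)  # bounds of the subcomplex
            S[-1] = lo + np.random.rand(S.shape[1]) * (hi - lo)
    return S
```

On a sphere function with subcomplex points (0,0), (1,1), (4,4), the reflection replaces the worst point (4,4) with (-3,-3), which has a lower objective value.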

Appendix B. Modified Frog Leaping (MFL)
The Modified FL (MFL) algorithm is as follows:

Step 0 Initialize i = 1, and set the maximum number of iterations allowed, I.

Step 1 Sort the individuals in order of increasing objective function value. Assign the individuals a triangular probability according to

p_n = 2(NPS + 1 - n) / (NPS(NPS + 1)),

where NPS is the number of individuals in the complex and n is the rank of the sorted individuals.

Step 2 Select d+1 individuals (d is the problem dimension) from the complex.

Step 3 The selected individuals are stored in S, forming a subcomplex. Generate an offspring by moving the worst point in S, w, toward the best point in the subcomplex, b. Let i = i + 1. If i ≤ I, go to Step 1; otherwise sort the points in the complex and return the evolved complex.
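The leaping move in Step 3 can be sketched with the classic shuffled-frog-leaping update; the exact MFL equations are not reproduced in this excerpt, so the step rule and the random fallback below are assumptions based on the standard algorithm.

```python
import numpy as np

def frog_leap(w, b, f, lower, upper):
    """Move the worst point w of the subcomplex toward its best point b
    (smaller f is better). If the leap does not improve w, fall back to
    a uniform random point within the search bounds."""
    step = np.random.rand() * (b - w)        # random fraction of the leap
    trial = np.clip(w + step, lower, upper)  # keep the trial inside bounds
    if f(trial) < f(w):
        return trial
    return lower + np.random.rand(len(w)) * (upper - lower)
```

Either branch returns a point inside the bounds, so the complex never leaves the feasible box.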

Appendix C. Modified Grey Wolf Optimizer (GWO)
The Modified Grey Wolf Optimizer is as follows:

Step 0 Initialize i = 1, and set the maximum number of iterations allowed, I.

Step 1 Sort the individuals in order of increasing objective function value. Assign the individuals a triangular probability (except for the fittest point) according to

p_n = 2(NPS + 1 - n) / (NPS(NPS + 1)),

where NPS is the number of individuals in the complex and n is the rank of the sorted individuals.

Step 2 Select d+1 individuals (d is the problem dimension) from the complex with the triangular probability, including the fittest point in the complex, and store them in S.

Step 3 Select the best three points in S and store them in α, β and γ, respectively. The worst point in S is stored in w.

Step 4 Generate the candidate points Z_α, Z_β and Z_γ by moving w toward α, β and γ, respectively. IV. Generate the new point C as the centroid of Z_α, Z_β and Z_γ. V. Calculate and store the objective function value of the new point, f_C. If the new point is better than the worst of the selected points, f_C < f_w, set o = C and go to Step 7.

Step 5 If f_C > f_w, go to Step 4 and use a smaller range for A. In this step, A is calculated as

A = 2 × r_1 - 1,      (C7)

Step 6 If the newly generated individual is worse than the worst individual in the subcomplex, generate a new point by uniform random sampling within the range of the individuals in the complex, and store the new point in o.

Step 7 Replace the worst of the selected points in the complex with the offspring o. Let i = i + 1. If i ≤ I, go to Step 1; otherwise sort the points in the complex and return the evolved complex.
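A sketch of the leader-guided move in Steps 3-4: the worst point is pulled toward α, β and γ, and the centroid of the three candidates is taken as in substep IV. The coefficient vectors follow the standard GWO form; the exact MGWO coefficient schedule is an assumption here.

```python
import numpy as np

def mgwo_move(w, alpha, beta, gamma, a=1.0):
    """Generate candidate points Z toward each of the three leaders and
    return their centroid as the new point C. `a` scales the range of
    the A coefficient (cf. the shrinking range in Eq. (C7))."""
    Z = []
    for leader in (alpha, beta, gamma):
        A = a * (2.0 * np.random.rand(len(w)) - 1.0)  # coefficient in [-a, a]
        C = 2.0 * np.random.rand(len(w))              # standard GWO C vector
        D = np.abs(C * leader - w)                    # distance to the leader
        Z.append(leader - A * D)                      # candidate position
    return np.mean(Z, axis=0)                         # centroid of Z_a, Z_b, Z_g
```

Shrinking `a` on a failed trial narrows the range of A, which keeps the next candidate closer to the leaders, matching the retry logic of Step 5.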

Appendix D. Modified Differential Evolution (DE)
The modified differential evolution algorithm is as follows:

Step 0 Initialize i = 1, and set the maximum number of iterations allowed, I.

Step 1 Sort the individuals in order of increasing objective function value. Assign the individuals a triangular probability according to

p_n = 2(NPS + 1 - n) / (NPS(NPS + 1)),

where NPS is the number of individuals in the complex and n is the rank of the sorted individuals.

Step 2 Select d+1 points (d is the problem dimension) from the complex with the assigned probability and store them, along with the fittest point in the complex, in S.

Step 3 The selected individuals are sorted and stored in S, forming a subcomplex. Generate an offspring according to the following steps, where w is the worst point in S and s_1, s_2 and s_3 are three selected individuals from the top of the subcomplex.

I. Generate a new point V_1 from w and the top three individuals in the subcomplex. Mutation and crossover operators are then applied to w and V_1 to generate V_n1. The objective function value of the new point is calculated and stored in f_n1. If f_n1 < f_w, set o = V_n1 and go to (V).

II. If f_w < f_n1, generate a new point from w and the top three points in the subcomplex as

V_2 = s_1 + 0.5 (s_2 - s_3).

After mutation, the crossover operator is applied to w and V_2 to generate V_n2. The objective function value of the new point is calculated and stored in f_n2. If f_n2 < f_w, set o = V_n2 and go to (V).

III. If f_w < f_n2, generate a new point V_3 from w and the top three points in the subcomplex; after mutation, the crossover operator is applied to w and V_3 to generate V_n3. The objective function value is calculated and stored in f_n3. If f_n3 < f_w, set o = V_n3.

Let i = i + 1. If i ≤ I, go to Step 1; otherwise sort the points in the complex and return the evolved complex.
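The staged offspring generation of Appendix D can be sketched as below. Only the 0.5(s_2 - s_3) difference term is clearly legible in the source, so the base vectors of the three mutation stages and the binomial crossover with rate CR are assumptions for the sketch, not the exact modified-DE operators.

```python
import numpy as np

def de_trials(w, s1, s2, s3, f, CR=0.9):
    """Try up to three staged differential-evolution mutants against the
    worst point w of the subcomplex (smaller f is better). Each stage
    mutates a different base vector, then crosses the mutant with w."""
    def crossover(target, mutant):
        mask = np.random.rand(len(target)) < CR
        mask[np.random.randint(len(target))] = True  # keep >= 1 mutant gene
        return np.where(mask, mutant, target)

    for base in (s1, s2, s3):               # staged bases: an assumption
        V = base + 0.5 * (s2 - s3)          # differential mutation
        trial = crossover(w, V)
        if f(trial) < f(w):
            return trial                    # first improving trial wins
    return w                                # all three stages failed
```

The early-exit structure mirrors the appendix: stage II is only tried when stage I fails to beat the worst point, and likewise for stage III.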