An efficient Planet Optimization Algorithm for solving engineering problems
Scientific Reports volume 12, Article number: 8362 (2022) Cite this article
In this study, a meta-heuristic algorithm named the Planet Optimization Algorithm (POA), inspired by Newton's gravitational law, is proposed. POA simulates the motion of planets in the solar system, with the Sun playing the key role as the heart of the search space. Two main phases, local and global search, are adopted to increase accuracy and expand the search space simultaneously. A Gaussian distribution function is employed as a technique to enhance the accuracy of the algorithm. POA is evaluated using 23 well-known test functions, 38 IEEE CEC benchmark test functions (CEC 2017, CEC 2019) and three real engineering problems. The statistical results on the benchmark functions show that POA provides very competitive and promising results. Not only does POA require a relatively short computational time for solving problems, it also shows superior accuracy in exploiting the optimum.
In recent years, many nature-inspired optimization algorithms have been proposed. Notable swarm-inspired algorithms include the Particle Swarm Optimization algorithm (PSO)1, Firefly Algorithm (FA)2, Dragonfly Algorithm (DA)3, Whale Optimization Algorithm (WOA)4, Grey Wolf Optimizer (GWO)5, Monarch Butterfly Optimization (MBO)6, Earthworm Optimization Algorithm (EWA)7, Elephant Herding Optimization (EHO)8, Moth Search (MS) algorithm9, Slime Mould Algorithm (SMA)10, Colony Predation Algorithm (CPA)10 and Harris Hawks Optimization (HHO)11. Besides, quite a number of physics-inspired algorithms simulate physical laws of the universe or nature, such as Curved Space Optimization (CSO)12 and Water Wave Optimization (WWO)13. Moreover, some algorithms built on mathematical foundations are also creative approaches, e.g. the Runge Kutta optimizer (RUN)14.
On the other hand, some algorithms simulate human behavior, such as Teaching–Learning-Based Optimization (TLBO)15 and Human Behavior-Based Optimization (HBBO)16. Meanwhile, the Genetic Algorithm (GA)17 is inspired by evolution and has achieved much success in solving optimization problems in many fields. With the growing popularity of GA, many evolution-based algorithms have been proposed in the literature, including Evolutionary Programming (EP)18 and Evolutionary Strategies (ES)19.
Nowadays, metaheuristic algorithms have become an essential tool for solving complex optimization problems in various fields. Many researchers have applied such algorithms to tackle difficult issues in biology20, economics21, engineering22,23, etc. Therefore, constructing new algorithms to meet such complex requirements has significant merit.
In this study, a strong algorithm is constructed for solving local and global optimization problems. The idea comes from the natural motion of planets in our solar system and the interplanetary interactions throughout their lifecycle. Newton's law of gravity describes the gravitational interaction between the Sun and the orbiting planets, which depends on two characteristics of each planet: its mass and its distance from the Sun. These characteristics are used to find the optimized position.
In this paper, we propose an optimization algorithm that uses Newton's law of universal gravitation as the basis for its development. The algorithm incorporates several prominent features, namely local search and global search, built into the simulation of the planets' movement in the universe, to increase its ability to find exact solutions.
This research paper is structured as follows. In the next section, the construction of the meta-heuristic algorithm is presented; the structure of POA is simulated based on Newton's law of universal gravitation and astronomical phenomena. Then, a wide range of benchmark problems is used to demonstrate how effective POA is, and the applications of POA to real engineering problems are presented. Finally, based on the results presented, the last section reports the conclusions.
Physics is a fundamental science whose laws govern everything from the tiniest objects (electrons, neutrons and protons) to extremely massive stars and galaxies (about a hundred thousand light-years across). The laws of physics are widely applied in everyday life, from transportation to medicine, from agriculture to industry. In science, physics is also the foundation for many other disciplines such as chemistry, biology and even mathematics. In the field of artificial intelligence (AI), the laws of physics have inspired many optimization algorithms. In this study, we likewise present an algorithm based on such a physical law.
Inspired by the laws of motion in the universe, an algorithm is proposed based on the mutual gravitational interaction between planets. Specifically, this optimization algorithm simulates Isaac Newton's law of universal gravitation. The core of this algorithm is given as follows:

\(\overrightarrow {F} = G\,m_{1} m_{2} /R^{2}\)  (1)

where \(\overrightarrow {F}\) is the gravitational force acting between two planets, \(G\) is the gravitational constant, \(R\) is the distance between the two planets, and \(m_{1} ,m_{2}\) are the masses of the two planets.
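As a minimal sketch, Eq. (1) can be written directly in code; the default `G = 1.0` anticipates the algorithm's later choice of a unit gravitational constant:

```python
def gravitational_force(m1: float, m2: float, R: float, G: float = 1.0) -> float:
    """Magnitude of the attraction in Eq. (1); the algorithm sets G = 1."""
    return G * m1 * m2 / R ** 2
```

Note the inverse-square behavior: doubling the distance divides the force by four, which is what makes distant planets nearly free of the Sun's pull.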
The gravitational attraction between two planets (as shown in Fig. 1) is given by Eq. (1). However, in this study we find that using the force \(\overrightarrow {F}\) directly gives less effective results than using the moment \((M)\) as the parameter in the algorithm's search process.
The force F acting between two planets.
The universe is immensely large and has no boundary; it is a giant space filled with galaxies, stars, planets and many other interesting astrophysical objects. For simplicity and ease of visualization, we use the solar system as the representative setting for this algorithm.
First of all, a system consisting of the Sun, the Earth and the Moon (as shown in Fig. 2) is considered. The Sun's gravity keeps the Earth moving around it; interestingly, the mass of the Sun is about 330,000 times that of the Earth. However, the Earth also creates a gravitational force large enough to keep the Moon in orbit around it. This demonstrates that two factors influence the motion of a planet: not only the masses but also the distance between the two bodies. An algorithm simulating the law of universal gravitation is therefore presented as follows:
The Sun acts as the best solution. In the search space, it has the greatest mass, which means it exerts a larger gravitational moment on the planets around and near it.
Between the Sun and the other planets there is a mutual gravitational attraction moment. However, this moment depends on the masses as well as the distance between the two bodies. This means that, although the Sun has the largest mass of all the planets, its moment on very distant planets is negligible. This helps the algorithm avoid local optima, as illustrated in Fig. 3.
The gravitational force acting between planets.
Local and global optimization: (a) 3D view; (b) plane.
In the tth iteration, the mass of the red planet (see Fig. 3) is the largest, so it represents the Sun. As the pink planets are close to the Sun, they move toward the Sun's location because of the gravitational attraction moment \((M_{p}^{t} )\) between the Sun and those planets.
Nevertheless, the red planet (the Sun) in the tth iteration does not occupy the desired position we are looking for, i.e. the global minimum. In other words, if all planets moved to the red planet, the algorithm would get stuck in a local region. In contrast, the blue planet occupies a potential location far from the Sun. The interaction of the Sun with the blue planet \((M_{b}^{t} )\) is small because of the distance, so the blue planet is quite free to search for a better location in the next iterations.
The main core of the algorithm is based on the above two principles. Besides, the Sun is the true target of the search, and of course its exact location is unknown. The planet with the largest mass in the tth iteration therefore acts as the Sun.
The implementation of the algorithm is as follows:
Ideally, a good algorithm is one whose final best solution is independent of the initial positions. In reality, the opposite holds for almost all stochastic algorithms. If the objective landscape is hilly and the global optimum lies in a small, isolated region, the initial population plays an important role: if the initial random population contains no solution in the vicinity of the global optimum, the probability that the population converges to the true optimum can be very low.
In contrast, if initial solutions are built near the globally optimal position, the probability that the population converges to the optimal location is very high; population initialization therefore plays a vital role. Ideally, initialization should use careful sampling methods, such as the techniques applied in Monte Carlo methods, to sample solutions suited to the objective. This, however, requires sufficient knowledge of the problem and is not feasible for most algorithms.
Similar to choosing the initial population, choosing the best solution of the original population to play the role of the Sun, toward whose position all the other planets move, is important. This selection determines the convergence speed as well as the accuracy of the algorithm.
Therefore, the algorithm's first step is to find an effective solution to play the role of the best solution, which increases the convergence and accuracy of the search in the first iterations.
In Eq. (3), the following parameters are defined:
The mass of the planets:
where \(a = 2\) is a constant parameter and \(\alpha = \left| {\max (obj) - obj_{sun} } \right|\). This means that the smaller a planet's objective function value, the larger its mass. \(obj_{i,j}\), \(\max (obj)\) and \(obj_{sun}\) are the objective function values of the ith or jth planet, the worst planet, and the Sun, respectively.
The distance between any two objects i and j in a space of dimension "Dim" is the Cartesian (Euclidean) distance, calculated by Eq. (5):
G is a parameter, and it is set equal to unity in this algorithm.
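The Cartesian distance of Eq. (5) is the standard Euclidean distance over the Dim coordinates; a minimal sketch:

```python
import math

def distance(xi, xj):
    """Cartesian (Euclidean) distance of Eq. (5) between two points.

    xi and xj are coordinate sequences of length Dim.
    """
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(xi, xj)))
```

For example, `distance([0, 0], [3, 4])` returns 5.0.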
From the above, the formula simulating global search is given by Eq. (6):
The left-hand side of the formula gives the position of the ith planet in iteration (t + 1), while the right-hand side consists of the following main elements:
\(\overrightarrow {{X_{i}^{t} }}\) is the current position of the ith planet in iteration t.
\(\beta = M_{i}^{t} /M_{\max }^{t}\), \(r_{1} = rand(0,1)\), and \(b\) is a constant parameter.
\(\overrightarrow {{X_{Sun}^{t} }}\) is the current position of the Sun in iteration t.
where \(\beta\) is a coefficient that depends on M, as shown in Eq. (3), in which \(M_{i}^{t}\) is the Sun's gravitational moment on the ith planet at iteration t, and \(M_{\max }^{t} = \max (M_{i}^{t} )\) at iteration t. Therefore, the coefficient \(\beta\) takes values in the interval (0, 1).
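The normalization of the moments into \(\beta\) can be sketched as follows (a minimal illustration; the moment values themselves would come from the paper's Eq. (3)):

```python
def beta_coefficients(moments):
    """beta_i = M_i^t / M_max^t for a list of gravitational moments.

    The planet with the largest moment gets beta = 1; all others fall
    strictly between 0 and 1, so beta scales each planet's step toward
    the Sun in Eq. (6).
    """
    m_max = max(moments)
    return [m / m_max for m in moments]
```

A planet with a small moment (small mass or large distance) receives a small \(\beta\) and is therefore only weakly pulled toward the Sun.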
In the search process, the true location is always the desired target. However, this goal is difficult or easy to achieve depending on the complexity of the problem. In most cases, it is only possible to find an approximate value that fits the original requirement. That is to say, the true Sun location still lies somewhere in the space between the found solutions.
Interestingly, although Jupiter is the most massive planet in the solar system, Mercury is the planet closest to the Sun. Analogously, the best solution found at iteration t may not be closer to the true location of the Sun than some other solutions.
When the distance between the Sun and a planet is small, the local search process runs. As mentioned above, the planet with the largest mass operates as the Sun; in the solar system, that would be Jupiter. Planets near the Sun move toward the Sun's location. More precisely, a planet moves only a small fraction of the distance between itself and the Sun at iteration t instead of going straight to the Sun. The aim of this step is to increase accuracy within a narrow area of the search space. Eq. (7) describes the local search process:
where \(c = c_{0} - t/T\), t is the current iteration, T is the maximum number of iterations, and \(c_{0} = 2\). \(r_{2}\) is a Gaussian-distributed random coefficient, as illustrated by Eq. (8).
Many evolutionary algorithms are randomized using common stochastic distributions such as the power-law and Lévy distributions. However, the Gaussian (normal) distribution is the most popular, since a large number of physical variables (see Fig. 4), including light intensity, errors/uncertainty in measurements and many other processes, obey this distribution.
Gauss distribution.
The coefficient \(r_{2}\) follows a Gaussian distribution with mean \(\mu = 0.5\) and standard deviation \(\sigma = 0.2\). This means that 68.2% of the values of \(r_{2}\) fall in zone 1, from \((\mu - \sigma ) = 0.3\) to \((\mu + \sigma ) = 0.7\), and 27.2% fall in zone 2, between \((\mu \pm \sigma )\) and \((\mu \pm 2\sigma )\). In other words, POA moves planets around the Sun without ignoring potential solutions during local search.
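The two coefficients of Eq. (7) can be sketched as follows: a linearly decaying c and a Gaussian \(r_{2}\) with the stated mean and standard deviation. The empirical check below confirms that roughly 68.2% of draws fall in zone 1 (0.3 to 0.7):

```python
import random

def c_coefficient(t, T, c0=2.0):
    """Linearly decaying coefficient c = c0 - t/T used in Eq. (7)."""
    return c0 - t / T

def sample_r2(mu=0.5, sigma=0.2):
    """Gaussian coefficient r2 ~ N(0.5, 0.2^2) described by Eq. (8)."""
    return random.gauss(mu, sigma)

random.seed(0)
draws = [sample_r2() for _ in range(100_000)]
share_zone1 = sum(0.3 <= r <= 0.7 for r in draws) / len(draws)  # ~0.682
```

The decay of c from 2 toward 1 over the iterations shrinks the local step size, so the search around the Sun becomes progressively finer.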
Exploitation employs any information obtained from the problem of interest to create new solutions that are better than the existing ones. This process and information (for instance, gradients) are normally local, so this search procedure is local. Exploitation (or local search) typically yields high convergence rates, which is its strong point. Its weakness is that it commonly gets stuck at a local optimum.
In contrast, exploration covers the search space broadly and typically creates many diverse solutions far from the current ones. Thus, exploration (or global search) operates on a global scale. Its great strength is that it rarely gets stuck in a local region. Its weaknesses, however, are slow convergence and, in many cases, wasted effort and time, since many new solutions can be far from the global optimum.
Figure 5 shows the operation of the algorithm, in which the local and global search processes are governed by the distance parameter Rmin. A planet far from the Sun moves according to Newton's law. In contrast, for planets very close to the Sun, the Newtonian pull is so great that they only move around the Sun. A planet close to the Sun supports the Sun in exploiting the local search space, as shown in Eq. (7), while the motion of planets distant from the Sun is less affected by this star, so they have a chance to find new potential regions. Local and global search run simultaneously. This enhances the accuracy of the search process while ensuring the algorithm does not miss potential locations.
Flow chart of the proposed POA.
The parameter Rmin must satisfy the following two conditions:
If Rmin is too large, the algorithm focuses on local search in the first iterations; finding a potential location far from the present one therefore becomes difficult.
In contrast, if Rmin is too small, the algorithm focuses on global search. In other words, the exploitation of POA in the zone around the Sun is not thorough, and consequently the best value found by the search may not satisfy the required accuracy.
In this study, Rmin is chosen by dividing the search space into R0 = 1000 zones, where 'low' and 'up' are the lower and upper bounds of each problem, respectively. With an explicit structure consisting of the two local and global search processes, POA satisfies the above two requirements and promises to be effective and time-saving in solving complex problems.
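One plausible reading of this rule, sketched below under our own assumption (the paper states the division into R0 = 1000 zones but not the exact formula), is to divide the diagonal of the [low, up] box by R0:

```python
import math

def r_min(low, up, R0=1000):
    """Rmin as the span of the search box divided into R0 zones.

    ASSUMPTION: the paper only says the space is divided into 1000 zones;
    taking the box diagonal as the span is our illustrative choice.
    """
    span = math.sqrt(sum((u - l) ** 2 for l, u in zip(low, up)))
    return span / R0
```

Under this reading, a wider search domain automatically yields a larger Rmin, keeping the balance between the local and global phases roughly scale-invariant.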
In this section, POA is compared with a series of algorithms on well-known problems. The experiments run on Windows with an 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00 GHz (1.80 GHz base) processor and 16.0 GB of RAM.
In this subsection, POA is employed to handle a wide range of benchmark problems. A set of mathematical functions with known global optima is commonly employed to validate the effectiveness of algorithms. The same process is followed here, and a set of 23 benchmark functions from the literature is employed as test beds for this comparison24,25,26. These test functions consist of three groups, namely unimodal (F1–F7), multimodal (F8–F13), and fixed-dimension multimodal (F14–F23) benchmark functions. POA is compared with seven algorithms, namely PSO1, GWO5, GSA27, FA2, ASO28, HHO11 and HGS29, on the set of 23 benchmark functions shown in Table 1.
Each benchmark function is run 30 times with the POA algorithm, using a population of 30 planets over 500 iterations. The statistical results (average, Ave, and standard deviation, Std) are summarized in Tables 2, 3 and 4.
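The Ave and Std statistics reported in the tables are simply the mean and sample standard deviation over the 30 independent runs; a minimal sketch with hypothetical fitness values (synthetic numbers, not the paper's data):

```python
import statistics

# Hypothetical best-fitness values from 30 independent runs of one benchmark.
runs = [0.01 * (i % 5) for i in range(30)]

ave = statistics.mean(runs)   # "Ave" column
std = statistics.stdev(runs)  # "Std" column (sample standard deviation)
```

A small Std alongside a small Ave indicates that the algorithm reaches a good solution consistently, not just on a lucky run.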
For F5 and F6, the results of POA are quite good compared with the others, while for F1 to F4 and F7 the accuracy of the POA algorithm is superior to that of all the other algorithms.
In contrast to the seven unimodal functions, most multimodal functions contain many local optima, with the number increasing exponentially with the dimension. This makes them good test beds for evaluating the exploratory ability of a meta-heuristic optimization algorithm.
Table 3 indicates that POA outperforms in F10 and F11 functions, and is quite competitive with the rest.
Similar to the unimodal functions, once again the multimodal and fixed-dimension multimodal functions prove the competitiveness of POA with other algorithms, and show that the obtained results from F14 to F23 are promising.
Figure 6 illustrates the convergence of POA after 100 iterations with 100 planets. The first two metrics are qualitative metrics that illustrate the history of the planets through the course of the generations. During the whole optimization process, the planets are represented by red points, as shown in Fig. 6. The planets explore potential zones of the search space and exploit the global optimum quite accurately. These investigations demonstrate that POA achieves high effectiveness in approximating the global optimum of optimization problems.
Level of convergence of POA after 100 iterations.
The third metric presents the movement of the first planet in the first dimension during optimization. This metric lets us monitor whether the first planet, representing all planets, makes sudden movements in the initial generations and becomes more stable in the final generations. Such movement guarantees the exploration of the search region. Finally, the movements of the planets become very short, which drives exploitation of the search region. Clearly, POA demonstrates that it is an algorithm that meets the requirements of accuracy as well as a high degree of convergence.
The final quantitative metric is the convergence level of the POA algorithm. The best value over all planets in each generation is stored, and the convergence curves are shown in Fig. 6. The decrease of fitness over the generations demonstrates the convergence of the POA algorithm.
To validate the performance of POA on high-dimensional optimization problems, the first 13 of the classical benchmark functions mentioned above, with Dim = 1000, are employed. For a fair comparison, the seven above-mentioned meta-heuristic optimization algorithms and POA, each with population size N = 30, are independently run 30 times. Additionally, the maximum number of iterations is fixed at 500 for all test functions.
The tests discussed in this subsection demonstrate that POA is promising on the 13 classical benchmark problems. Among the 23 benchmark functions tested, these 13 were run with Dim = 1000, as presented in Table 5 and Fig. 7. This confirms the ability of POA to deal with high-dimensional problems, as the dimension of the 13 classical benchmarks is increased from 30 to 1000.
Performance comparison of algorithms.
In this experiment, the computational times of POA and the other seven algorithms on the 13 functions are compared. Each benchmark function is run independently 30 times with every algorithm, and the resulting run times are reported in Table 6. For Dim = 30, POA not only outperforms algorithms such as GSA, ASO and FA while taking less time, but is also sometimes far superior to GWO, and even to PSO, in computational time. For Dim = 1000, POA always ranks first in computational time. These results show that POA has merit for high-dimensional optimization problems.
In order to further clarify the efficiency of the proposed algorithm, POA is tested on complex challenges, namely the evaluation criteria of CEC 201730 and CEC 201931. Its results are compared with those of well-known and modern meta-heuristic algorithms: DA, WOA, and the arithmetic optimization algorithm (AOA)32. These algorithms are selected for the following reasons:
All of them are population-based algorithms operating on the same principle as POA.
All algorithms are well cited in the literature, and AOA is a recently published study.
These algorithms have been shown to deliver superior performance on both benchmark test functions and real-world problems.
They are publicly provided by their authors.
As with the 23 classical benchmark functions, each function of the CEC benchmark suites is run 30 times, and each algorithm searches the landscape for 500 iterations using 30 agents.
In this subsection, the IEEE CEC 2017 problems are employed to test the performance of POA. The CEC'17 set used here consists of 28 challenging benchmark problems: the first is a unimodal function, functions 2–7 are multimodal, the next ten are hybrid functions, and the rest are composition functions. Table 7 presents a brief description of CEC 2017.
As shown in Table 8, POA is highly efficient: compared with WOA, DA and AOA, it outperforms all of them on 21 of the 28 functions of the CEC 2017 set. In addition, the Wilcoxon signed-rank test at the α = 0.05 significance level is reported in Table 9 to analyze the significance of the differences between the results of POA and the other algorithms. These results prove that POA provides excellent solution quality when handling the CEC 2017 functions.
Table 10 presents a brief description of CEC 2019. It can be seen from Table 11 that POA outperforms the other optimization algorithms on all CEC 2019 functions. Indeed, the results on many test functions (e.g. F52, F53, F56) show that POA is more powerful than the others, not only in the average value over 30 runs but also in the other statistics, such as the best, worst and Std values. Once again, the Wilcoxon signed-rank test (Table 12) demonstrates the superior performance of POA on the CEC 2019 problems.
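The Wilcoxon signed-rank test pairs the per-run results of two algorithms on the same benchmark and asks whether the differences are significant at α = 0.05. The paper does not specify its implementation; below is a minimal pure-Python stand-in using the normal approximation, applied to synthetic run data (not the paper's numbers):

```python
import math

def wilcoxon_signed_rank(x, y):
    """Paired Wilcoxon signed-rank test (normal approximation, two-sided).

    Returns (W, p) where W is the smaller of the positive/negative rank
    sums. A self-contained stand-in for the test used at alpha = 0.05.
    """
    diffs = [a - b for a, b in zip(x, y) if a != b]  # drop zero differences
    n = len(diffs)
    # Rank the absolute differences, averaging ranks over ties.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    w = min(w_plus, w_minus)
    # Normal approximation to the null distribution of W.
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    p = math.erfc(abs(w - mu) / sigma / math.sqrt(2))  # two-sided p-value
    return w, p

# Synthetic example: one algorithm consistently beats the other over 30 runs.
algo_a = [i * 0.01 for i in range(30)]
algo_b = [v + 1.0 for v in algo_a]
w, p = wilcoxon_signed_rank(algo_a, algo_b)
```

In practice, a p-value below 0.05 for a given function means the two algorithms' result distributions differ significantly on that function.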
In the next section, some classical engineering design problems are employed to further evaluate the performance of the POA. Besides, POA is also compared with other well-known techniques to confirm its results.
In this study, three constrained engineering design problems, namely the tension/compression spring, welded beam and pressure vessel designs, are used to investigate the applicability of POA. The problems have equality and inequality constraints, so POA is equipped with a constraint-handling technique, allowing it to optimize constrained problems as well. It should be noted that the population size and the number of iterations are set to 30 and 500, respectively, with 50 runs for each problem in this section.
The main aim of this problem is to minimize the weight of a tension/compression spring. The design problem is subject to three constraints, namely surge frequency, shear stress, and minimum deflection. This problem consists of three variables: Wire diameter (d), mean coil diameter (D), and the number of active coils (N).
The tension/compression spring design problem has been solved by both mathematical and heuristic techniques. Several researchers have employed different methods for minimizing the weight of a tension/compression spring (Ha and Wang: PSO33; Coello and Montes: the Evolution Strategy (ES)34 and GA35; Mahdavi et al.: Harmony Search (HS)36; Belegundu: mathematical optimization37; Arora: constraint correction38; Huang et al.: Differential Evolution (DE)39). Additionally, the GWO5 and HHO11 algorithms have also been employed as heuristic optimizers for this problem. The comparison of the results of these methods and POA is shown in Table 13.
The aim of welded beam design problem is to minimize its fabrication cost. The constraints of the problem are shear stress \((\tau )\), bending stress in the beam \((\theta )\), buckling load of the bar \((P_{c} )\), end deflection of the beam \((\delta )\) and side constraints. Welded beam design problem has four variables, namely thickness of weld \((h)\), length of attached part of bar \((l)\), the height of the bar \((t)\), and thickness of the bar \((b)\). This problem is illustrated in the literature5,40,41.
Lee and Geem40 employed HS to deal with this problem, while Deb42,43 and Coello44 used GA. Mirjalili applied GWO5 to solve this problem. Richardson's random method, Davidon–Fletcher–Powell, the Simplex method, and Griffith and Stewart's successive linear approximation are the mathematical methods adopted by Ragsdell and Philips41 for this problem. More recently, Heidari et al.11 and Yang et al.29 have used HHO and HGS, respectively, to solve the problem. Table 14 shows a comparison between the different methods. The results indicate that POA reaches the design with the minimum cost compared to the other optimizers. The best result of the cost function obtained by POA is 1.72564.
The pressure vessel design problem is well known: the total fabrication cost, consisting of the material, forming and welding of a cylindrical vessel, should be minimized. There are four variables, namely thickness of the shell \((T_{s} )\), thickness of the head \((T_{h} )\), inner radius \((R)\) and length of the cylindrical section without the head \((L)\), and four constraints.
The pressure vessel design problem has also been popular in optimization studies. Several heuristic techniques, namely DE39, PSO33, GA35,45,46, ACO47, ES34, GWO5, MFO48, HHO11 and SMA10, have been adopted for this problem. The mathematical approaches employed are the augmented Lagrangian multiplier method49 and branch-and-bound50. As Table 15 shows, POA is again able to find the design with the minimum cost.
In this paper, a meta-heuristic algorithm inspired by Newton's gravitational law is proposed. POA's search process consists of two phases that aim for a proper balance between exploration and exploitation. Its strong performance is shown through the accuracy achieved on 23 classical benchmark functions and 38 IEEE CEC test functions (CEC 2017, CEC 2019). On many functions, POA obtained results many times more accurate than those of the other algorithms.
In the final evaluation section, a set of well-known test cases, including three engineering test problems, is thoroughly investigated to examine the operation of POA in practice. Each problem comes from a distinct branch of engineering, with very diverse search spaces; these engineering problems therefore test POA thoroughly. The obtained results demonstrate that POA can effectively solve challenging real problems with unknown search spaces and a large number of constraints. The comparisons with GSA, GWO, PSO, DE, ACO, MFO, SOS, CS, HHO, SMA, HGS, etc., suggest that POA is superior.
The structure of POA is simple, explicit, very effective and fast. Experiments revealed short computational times for handling complex optimization problems. We therefore conclude that POA is a powerful algorithm for solving optimization problems.
All data generated or analyzed during this study are included in this published article.
Kennedy, J. & Eberhart, R. Particle swarm optimization. In Proceedings of ICNN'95 - International Conference on Neural Networks, vol. 4, 1942–1948 (IEEE, 1995).
Yang, X.-S. Firefly algorithms for multimodal optimization. In Stochastic Algorithms: Foundations and Applications (eds Watanabe, O. & Zeugmann, T.) 169–178 (Springer, Berlin, 2009).
Mirjalili, S. Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 27, 1053–1073 (2016).
Mirjalili, S. & Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 95, 51–67 (2016).
Mirjalili, S., Mirjalili, S. M. & Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 69, 46–61. https://doi.org/10.1016/j.advengsoft.2013.12.007 (2014).
Wang, G.-G., Deb, S. & Cui, Z. Monarch butterfly optimization. Neural Comput. Appl. 31, 1995–2014 (2019).
Wang, G.-G., Deb, S. & Coelho, L. D. S. Earthworm optimisation algorithm: a bio-inspired metaheuristic algorithm for global optimisation problems. Int. J. Bio-Inspired Comput. 12, 1–22 (2018).
Wang, G.-G., Deb, S. & Coelho, L. D. S. Elephant herding optimization. In 2015 3rd International Symposium on Computational and Business Intelligence (ISCBI) 1–5 (IEEE, 2015).
Wang, G.-G. Moth search algorithm: a bio-inspired metaheuristic algorithm for global optimization problems. Memetic Comput. 10, 151–164 (2018).
Li, S., Chen, H., Wang, M., Heidari, A. A. & Mirjalili, S. Slime mould algorithm: a new method for stochastic optimization. Futur. Gener. Comput. Syst. 111, 300–323 (2020).
Heidari, A. A. et al. Harris hawks optimization: algorithm and applications. Futur. Gener. Comput. Syst. 97, 849–872 (2019).
Moghaddam, F. F., Moghaddam, R. F. & Cheriet, M. Curved space optimization: a random search based on general relativity theory. arXiv preprint arXiv:1208.2214 (2012).
Zheng, Y.-J. Water wave optimization: a new nature-inspired metaheuristic. Comput. Oper. Res. 55, 1–11 (2015).
Ahmadianfar, I., Heidari, A. A., Gandomi, A. H., Chu, X. & Chen, H. RUN beyond the metaphor: an efficient optimization algorithm based on Runge Kutta method. Expert Syst. Appl. 181, 115079 (2021).
Rao, R. V., Savsani, V. J. & Vakharia, D. Teaching–learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput. Aided Des. 43, 303–315 (2011).
Ahmadi, S.-A. Human behavior-based optimization: a novel metaheuristic approach to solve complex optimization problems. Neural Comput. Appl. 28, 233–244 (2017).
Goldberg, D. E. Genetic Algorithms in Search, Optimization, and Machine Learning (Addison-Wesley, 1989).
Juste, K., Kita, H., Tanaka, E. & Hasegawa, J. An evolutionary programming solution to the unit commitment problem. IEEE Trans. Power Syst. 14, 1452–1459 (1999).
Holland, J. H. Outline for a logical theory of adaptive systems. J. ACM 9, 297–314 (1962).
Patro, S. P., Nayak, G. S. & Padhy, N. Heart disease prediction by using novel optimization algorithm: a supervised learning prospective. Inform. Med. Unlocked 26, 100696. https://doi.org/10.1016/j.imu.2021.100696 (2021).
Li, X. & Sun, Y. Stock intelligent investment strategy based on support vector machine parameter optimization algorithm. Neural Comput. Appl. 32, 1765–1775 (2020).
Sang-To, T. et al. Combination of intermittent search strategy and an improved particle swarm optimization algorithm (IPSO) for model updating. Frattura ed Integrità Strutturale 59, 141–152 (2022).
Minh, H.-L. et al. In Proceedings of the 2nd International Conference on Structural Damage Modelling and Assessment. 13–26 (Springer).
Yao, X., Liu, Y. & Lin, G. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 3, 82–102. https://doi.org/10.1109/4235.771163 (1999).
Digalakis, J. G. & Margaritis, K. G. On benchmarking functions for genetic algorithms. Int. J. Comput. Math. 77, 481–506 (2001).
Yang, X.-S. Test problems in optimization. arXiv preprint arXiv:1008.0549 (2010).
Rashedi, E., Nezamabadi-Pour, H. & Saryazdi, S. GSA: a gravitational search algorithm. Inf. Sci. 179, 2232–2248 (2009).
Zhao, W., Wang, L. & Zhang, Z. Atom search optimization and its application to solve a hydrogeologic parameter estimation problem. Knowl.-Based Syst. 163, 283–304. https://doi.org/10.1016/j.knosys.2018.08.030 (2019).
Yang, Y., Chen, H., Heidari, A. A. & Gandomi, A. H. Hunger games search: Visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Syst. Appl. 177, 114864 (2021).
Wu, G., Mallipeddi, R. & Suganthan, P. N. Problem definitions and evaluation criteria for the CEC 2017 competition on constrained real-parameter optimization. Technical Report (2017).
Price, K., Awad, N., Ali, M. & Suganthan, P. The 100-digit challenge: problem definitions and evaluation criteria for the 100-digit challenge special session and competition on single objective numerical optimization. Nanyang Technol. Univ. (2018).
Abualigah, L., Diabat, A., Mirjalili, S., Abd Elaziz, M. & Gandomi, A. H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 376, 113609 (2021).
He, Q. & Wang, L. An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng. Appl. Artif. Intell. 20, 89–99 (2007).
Mezura-Montes, E. & Coello, C. A. C. An empirical study about the usefulness of evolution strategies to solve constrained optimization problems. Int. J. Gen Syst 37, 443–473 (2008).
Coello, C. A. C. Use of a self-adaptive penalty approach for engineering optimization problems. Comput. Ind. 41, 113–127 (2000).
Mahdavi, M., Fesanghary, M. & Damangir, E. An improved harmony search algorithm for solving optimization problems. Appl. Math. Comput. 188, 1567–1579 (2007).
Belegundu, A. D. & Arora, J. S. A study of mathematical programming methods for structural optimization. Part I: theory. Int. J. Numer. Methods Eng. 21, 1583–1599. https://doi.org/10.1002/nme.1620210904 (1985).
Arora, J. Introduction to optimum design with MATLAB. In Introduction to Optimum Design 413–432 (2004).
Huang, F.-Z., Wang, L. & He, Q. An effective co-evolutionary differential evolution for constrained optimization. Appl. Math. Comput. 186, 340–356 (2007).
Lee, K. S. & Geem, Z. W. A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice. Comput. Methods Appl. Mech. Eng. 194, 3902–3933 (2005).
Ragsdell, K. M. & Phillips, D. T. Optimal design of a class of welded structures using geometric programming. J. Eng. Ind. 98, 1021–1025. https://doi.org/10.1115/1.3438995 (1976).
Deb, K. Optimal design of a welded beam via genetic algorithms. AIAA J. 29, 2013–2015 (1991).
Deb, K. An efficient constraint handling method for genetic algorithms. Comput. Methods Appl Mech Eng 186, 311–338 (2000).
Coello Coello, C. A. Constraint-handling using an evolutionary multiobjective optimization technique. Civ. Eng. Syst. 17, 319–346 (2000).
Coello, C. A. C. & Montes, E. M. Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Adv. Eng. Inform. 16, 193–203 (2002).
Deb, K. In Evolutionary Algorithms in Engineering Applications 497–514 (Springer, 1997).
Kaveh, A. & Talatahari, S. An improved ant colony optimization for constrained engineering design problems. Eng. Comput. (2010).
Mirjalili, S. Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 89, 228–249 (2015).
Kannan, B. K. & Kramer, S. N. An augmented lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. J. Mech. Des. 116, 405–411. https://doi.org/10.1115/1.2919393 (1994).
Sandgren, E. Nonlinear integer and discrete programming in mechanical design optimization. J. Mech. Des. 112, 223–229. https://doi.org/10.1115/1.2912596 (1990).
The authors acknowledge the financial support of VLIR-UOS TEAM Project, VN2018TEA479A103, 'Damage assessment tools for Structural Health Monitoring of Vietnamese infrastructures', funded by the Flemish Government. The authors would like to acknowledge the support from Ho Chi Minh City Open University under the basic research fund (No. E2021.06.1). The authors wish to express their gratitude to Van Lang University, Vietnam for financial support for this research.
Laboratory Soete, Department of Electromechanical, Systems and Metal Engineering, Ghent University, Technologiepark Zwijnaarde 903, 9052, Zwijnaarde, Belgium
Thanh Sang-To & Minh Hoang-Le
Faculty of Civil Engineering, Ho Chi Minh City Open University, Ho Chi Minh City, Vietnam
Thanh Sang-To, Minh Hoang-Le & Thanh Cuong-Le
Faculty of Mechanical - Electrical and Computer Engineering, School of Engineering and Technology, Van Lang University, Ho Chi Minh City, Vietnam
Magd Abdel Wahab
T.S.T., M.A.W. and T.C.L.: conceptualization, software, methodology, validation, writing – original draft; M.L.: writing, formal analysis and data curation; M.A.W. and T.C.L.: supervision.
Correspondence to Magd Abdel Wahab or Thanh Cuong-Le.
The authors declare no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Sang-To, T., Hoang-Le, M., Wahab, M.A. et al. An efficient Planet Optimization Algorithm for solving engineering problems. Sci Rep 12, 8362 (2022). https://doi.org/10.1038/s41598-022-12030-w
Received: 23 March 2022
Accepted: 04 May 2022
Published: 19 May 2022