Introduction

Gradient-based and Hessian-based algorithms are widely used in the literature to solve optimization problems in engineering. This is due to their computational efficiency, since they typically require little problem-specific parameter tuning1. Gradient-based and Hessian-based algorithms employ an iterative process involving a multivariate scalar function that packages all the information of the partial derivatives of the objective function (i.e., the gradient vector or the Hessian matrix) to reach the solution. Some of the most common methods in this category are interior point, quasi-Newton, gradient descent, and conjugate gradient1. The main drawbacks of such algorithms include a tendency to become trapped in local optima, difficulty in solving discrete optimization problems, intricate implementation for some complex multivariable optimization problems, and susceptibility to numerical noise2. To tackle these deficiencies, some authors suggest the use of metaheuristic optimization techniques.

In the last two decades, the use of metaheuristic optimization techniques to solve complex, multimodal, high-dimensional and nonlinear engineering problems has become very popular. This is attributed to their simplicity of implementation, straightforward adaptability to optimization problems, and robust search capability to reach effective global optima3. Although there are many metaheuristic optimization techniques, they can be classified into four main groups, as presented in Fig. 1. The first group is the evolutionary algorithms (EA). The theory behind EAs is based on evolution in nature. In this field, the Genetic Algorithm (GA) emerges as the most popular. GA was proposed by Holland in 19924. GA is inspired by Darwin's theory of evolution4, in which a population (solution candidates) evolves towards the best solution through crossover and mutation processes. In this way, the search progresses towards the global optimum in every iteration (next generation). An extensive compilation of recent GA applications can be found in5. Other advanced EAs are Biogeography-Based Optimizer6, Clonal Flower Pollination7, Fuzzy Harmony Search8, Mutated Differential Evolution9, Imperialist Competitive (2017)10, and Deep Learning with Gradient-based optimization11.

Figure 1
figure 1

Classification of metaheuristics optimization techniques.

The second metaheuristic group corresponds to swarm intelligence (SI). These algorithms incorporate mathematical models that describe the motion of groups of creatures in nature (swarms, schools, flocks, and herds) based on their collective and social behavior. The best-known SI algorithm in the literature is Particle Swarm Optimization (PSO), proposed by Kennedy and Eberhart in 199512. In general, SI algorithms initialize multiple particles at random positions (solution candidates). The particles then seek to enhance their positions based on their own best positions obtained so far and on the best particle of the swarm. The motion process is repeated (iterations) until most of the particles converge to the same position (best solution). The theory behind SI has been widely exploited, resulting in novel optimization techniques (i.e. Dynamic Ant Colony Optimization (2017)13, Bacterial Foraging (2016)14, Fish School Search (2017)15, Moth Firefly (2016)16, and Chaotic Grey Wolf17) for different engineering applications.

The third metaheuristic group is physics-based (PB). Their formulation involves a physical concept used to describe the behavior of matter through space and time. PB algorithms can be classified into two main classes: classical and modern. The term ‘classical’ refers to optimization techniques that employ classical physics in their formulation to reach the global optimum. This branch includes the Greedy Electromagnetism-like18, Improved Central Force19, Multimodal Gravitational Search20, Exponential Big Bang-Big Crunch (BB-BC)21, and Improved Magnetic Charged System Search22 algorithms. On the other hand, the term ‘modern’ refers to algorithms that employ quantum physics to determine the global optimum. Some recent algorithms in this branch are Neural Network Quantum States23, Adiabatic Quantum24, and Quantum Annealing25 optimizations. The optimization process of PB algorithms starts with a random initialization of the matter's position (solution candidates). Then, depending on the physical interaction (i.e. kinematic, dynamic, thermodynamic, hydrodynamic, momentum, energy, electromagnetism, quantum mechanics, etc.) defined in the search space, the particles improve their positions (best solution), and the process is repeated until certain physical rules are satisfied.

The last metaheuristic group is the hybrid algorithms (HA). This group combines characteristics of the previous metaheuristic groups to produce new optimization techniques. These algorithms can be classified into four main groups: EA-SI (i.e. Evolutionary Firefly26), EA-PB (i.e. Harmony Simulated Annealing Search27), SI-PB (i.e. Big Bang-Big Crunch Swarm Optimization28) and EA-SI-PB (i.e. Electromagnetism-like Mechanism with Collective Animal Behavior search29). This research focuses on the SI-PB group, since it combines quantum physics concepts with swarm particle behavior.

The recent literature presents new approaches that use the concepts of quantum mechanics and swarm intelligence for different applications. For instance, reference30 presents a Quantum-inspired Glow-worm Swarm Optimisation (QGSO) to minimize the array style with maximum relative sidelobe level of the array (a discrete optimization problem). The algorithm employs the concept of quantum bits combined with the mathematical behaviour of a social glow-worm swarm to determine the best solution in terms of the position of the best quantum glow-worm. The authors in31 propose a novel Accelerated Quantum Particle Swarm Optimization (AQPSO) that uses quantum mechanics to derive an expression for the position of a quantum particle trapped in a delta potential well. To accelerate the convergence process, an odd number of observers greater than unity is incorporated into the model. The AQPSO shows high performance in different power system applications, such as optimal placement of static var compensators for maximum system reliability31, maximization of savings due to electrical power loss reduction32, and optimal maintenance scheduling to minimize the operational risk of static synchronous generators33 and power generators34. In35,36,37, a quantum ant colony algorithm is used to solve path optimization problems. In this algorithm, every ant carries a group of quantum bits to represent its own position. The ants move through quantum rotation gates, which lead to an improvement in their positions. As presented, there is plenty of evidence demonstrating the computational robustness of quantum SI-PB algorithms. Most of them are driven by quantum bits30,35,36,37 or quantum potential wells31,38,39.
Nevertheless, to the best of our knowledge, there is no quantum SI-PB algorithm in the literature mimicking a quantum particle swarm bounded by Lorentz, Rosen–Morse, and Coulomb-like Square Root potential fields. This fact encourages the attempt to propose three novel algorithms and to investigate their abilities in solving benchmark optimization problems.

The motivation of this research lies in the No Free Lunch (NFL) theorem, which states that “any two algorithms are equivalent when their performance is averaged across all possible problems”40. This statement implies that there is no single best algorithm able to solve every optimization problem. Some algorithms may show effective performance on one set of problems, yet prove inefficient on a different set. Therefore, the NFL theorem opens a pathway to improving existing approaches. Given the foregoing, this paper proposes three novel hybrid metaheuristic optimization techniques inspired by the movement behaviour of a quantum particle swarm bounded in three different potential fields: Lorentz, Rosen–Morse, and Coulomb-like Square Root. These potential fields are considered due to the simplicity of the analytical solution of their Schrödinger equation, which is widely studied in the physics literature41,42,43. Moreover, these potential fields offer certain features that allow the qualitative behaviour of the proposed algorithms to be predicted in terms of exploitation and exploration. The basis for this statement lies in the probability density function (solution to the Schrödinger equation), which presents a two-regime behaviour. The first regime is between the limits of the quantum well, while the second regime is related to the asymptotic trend at \((z \rightarrow \pm \infty )\), as presented in Fig. 2. The local amplitude of the probability density function (local “height” or probability) represents the strength of the potential in a region of space. In this sense, the behaviour of the probability density function between the limits of the quantum well illustrates the probability of the particle being near the local attractor, which is associated with the exploration capabilities of the search algorithm44.
It is important to highlight that exploration is defined as the ability to examine a promising area(s) as broadly as possible. By defining a “promising area” as the region near the local attractor, more points are expected to be searched in the promising area if more probability weight is given locally. This means that a higher amplitude (near zero) of the probability density function leads to a more thorough/broad search in that region, giving rise to more exploration. Hence, the probability density function expected to produce the most exploration in the search algorithm is the Rosen–Morse probability density function, followed by the Lorentz and the Coulomb-like Square Root probability density functions. The behaviour of the probability density function at \((z\rightarrow \pm \infty )\) is associated with the global search capabilities of the algorithm, known in this manuscript as exploitation. Therefore, a slowly decaying probability density function (weak potential at \((z\rightarrow \pm \infty )\)) is expected to exhibit high exploitation, i.e. values further from the local attractor will be searched with non-negligible probability44. In this sense, the Lorentz probability density function is expected to present the best exploitation, followed by the Rosen–Morse and the Coulomb-like Square Root probability density functions.
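The qualitative ranking above can be checked numerically. The following Python sketch (illustrative only; the relative width is set to a = 1) evaluates the three probability densities of Eqs. (14), (19) and (24) near the attractor and far out in the tails:

```python
import math

a = 1.0  # relative width, set to unity for illustration

def lorentz(z):
    # squared wave function for the Lorentz potential (Eq. (14))
    return a / (math.pi * (a**2 + z**2))

def rosen_morse(z):
    # squared wave function for the Rosen-Morse potential (Eq. (19))
    return (1.0 / (2.0 * a)) / math.cosh(z / a)**2

def coulomb_sqrt(z):
    # squared wave function for the Coulomb-like square root potential (Eq. (24))
    return math.exp(-2.0 * (abs(z) / a)**1.5) / (1.69**2 * a)

# Near the attractor (z = 0) the Rosen-Morse density has the largest
# amplitude, suggesting the most exploration of the promising area.
peaks = {"LR": lorentz(0.0), "RM": rosen_morse(0.0), "CS": coulomb_sqrt(0.0)}

# Far from the attractor (z = 10a) the Lorentz density dominates: its
# 1/z^2 tail decays far more slowly than the exponential tails, which
# underlies its expected exploitation advantage.
tails = {"LR": lorentz(10.0), "RM": rosen_morse(10.0), "CS": coulomb_sqrt(10.0)}
```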

Figure 2
figure 2

Probability density functions associated with the potential fields.

To verify the computational robustness of the proposed approach, several benchmark functions are solved. Then, the results are compared with the ones obtained by particle swarm optimization, genetic algorithm, and firefly algorithms. The rest of the paper is organized as follows: “Quantum Particle Swarm Optimization general formulation” section presents quantum concepts that describe the scenario of the particle swarm. “Time independent Schrödinger equation and particle position” section describes the nature of a quantum particle in a bounded potential field. “Quantum-inspired optimization algorithms” section exhibits the proposed quantum-inspired optimization algorithms. “Case study” section describes the case study used to test the efficacy of the proposed algorithms. In “Results” section, the results are analysed and discussed. Finally, “Discussion and conclusion” section incorporates the conclusions.

Methodology

Quantum particle swarm optimization general formulation

Quantum particle swarm optimization (QPSO) is an advanced heuristic optimization technique that employs the concept of quantum particle motion to reach the optimal solution. QPSO follows the process described in Fig. 3. The process starts by defining the initial population of SS particles and the total number of iterations MaxIt. The position of a particle (x) represents a solution candidate to the optimization problem; thus, it can be used to evaluate the objective function.

The next step is to identify the positions called ‘personal best’ and ‘global best’. In this step, it is relevant to consider two specific attributes of the particle, related to memory and communication. The memory attribute refers to the ability to save the best position of the particle by comparing its current position with the position after the motion. For instance, Fig. 4a shows two scenarios of particle motion. In scenario 1, the particle has the possibility to move closer to the optimum position; therefore, it proceeds to move and saves this position as its best position. In scenario 2, the particle has the possibility to move away from the optimum position; therefore, it will not move and saves its current position as its best position. The memory attribute is known as the ‘personal best’ and is denoted by q45,46. The communication attribute refers to the ability to save the particle with the best position among the swarm. Figure 4b shows a swarm with three particles, in which ‘particle 3’ results as the best particle since it is the one nearest to the optimum position. The communication attribute is known as the ‘global best’ and is denoted by g45,46.
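The two attributes can be sketched in Python for a minimization problem (illustrative only; `evaluate` is a hypothetical placeholder objective, here the sphere function):

```python
def evaluate(x):
    # placeholder objective (sphere function, to be minimized)
    return sum(v * v for v in x)

def update_bests(positions, q, g):
    """Memory attribute: each particle keeps its personal best q.
    Communication attribute: the swarm shares a single global best g."""
    for i, x in enumerate(positions):
        if evaluate(x) < evaluate(q[i]):   # moved closer to the optimum
            q[i] = list(x)                 # remember the improved position
    best = min(q, key=evaluate)            # best particle among the swarm
    if evaluate(best) < evaluate(g):
        g = list(best)
    return q, g

# three particles; the second one is nearest to the optimum at the origin
q, g = update_bests([[1.0], [0.2], [3.0]], [[1.0], [0.2], [3.0]], [9.0])
```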

Figure 3
figure 3

QPSO flowchart.

Figure 4
figure 4

(a) Memory attribute of the particle. (b) Communication attribute of the particle.

The process continues with the update of the particles' positions. For this purpose, the general mathematical topology of swarm intelligence optimization is employed, as follows47:

$$\begin{aligned} x_{\ell }^{(k+1)}=z\left( x_{\ell }^{(k)},V\right) +f_1 \left( w_1,u_1 \right) \left( q_{\ell }^{(k)}-x_{\ell }^{(k)} \right) +f_2 \left( w_2,u_2 \right) \left( g^{(k)}-x_{\ell }^{(k)} \right) \end{aligned},$$
(1)

where z represents the displacement of a particle and is a function that depends on \(x_{\ell }^{(k)}\) and V, which represent the current position x of particle \(\ell\) at iteration k and the physical phenomenon that drives the movement of the particle, respectively. The functions \(f_1\) and \(f_2\) correspond to the nature of the swarm intelligence, in which \(f_1\) drives the new position of the particle towards the local optimum particle position q, while \(f_2\) associates the new position of the particle with the global optimum particle position g. To avoid traps (‘local optima’) that may appear in the objective function, the authors of the first swarm intelligence algorithm12 introduced random numbers u and acceleration coefficients w such that \(0 \le w \le 2\). The subscripts ‘1’ and ‘2’ refer to the local and global positions, respectively.

A simplification of the general formulation of swarm intelligence optimization is proposed in47. The model is based on a trajectory analysis in which the best position of the particle is localized in the region of the search space that lies between the best local and global positions. The authors in31 refer to this term as the local attractor \(D_{\ell }^{(k)}\), which is used to guide the particle towards a better position. The model has the form47:

$$\begin{aligned} x_{\ell }^{(k+1)}= \,& {} z(x_{\ell }^{(k)},V)+D_{\ell }^{(k)}, \nonumber \\ D_{\ell }^{(k)}= &\, {} \left( (r_1 u_1)/(r_1 u_1+r_2 u_2 )\right) q_{\ell }^{(k)}+(1-(r_1 u_1)/(r_1 u_1+r_2 u_2 )) g^{(k)}. \end{aligned}$$
(2)
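A direct transcription of the local attractor in Eq. (2) could look as follows (a sketch; the coefficients r1, r2 and the uniform draws u1, u2 follow the notation above):

```python
import random

def local_attractor(q_l, g, r1=1.0, r2=1.0, rng=random):
    """Eq. (2): convex combination of the personal best q_l and the
    global best g, weighted by random draws u1, u2 and coefficients r1, r2."""
    u1, u2 = rng.random(), rng.random()
    lam = (r1 * u1) / (r1 * u1 + r2 * u2)   # mixing weight in [0, 1]
    return [lam * qi + (1.0 - lam) * gi for qi, gi in zip(q_l, g)]
```

Because the weight stays in [0, 1], the attractor always lies on the segment between the personal and global bests, as the trajectory analysis requires.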

Reference48 shows that, for the scenario of a quantum particle trapped in a bounded field, the phenomenon that drives the movement of the particle is given by the relative width a and a function that depends on the potential well being used, such that48

$$\begin{aligned}&z\left( x_{\ell }^{(k)},V\right) =af(V), \nonumber \\&a=\left| x_{\ell }^{(k)} -\frac{1}{SS} \sum _{\ell =1}^{SS} q_{\ell }^{(k)} \right| \end{aligned},$$
(3)

where SS is the total number of particles. This formulation is employed to estimate the new position of the particle. The function f(V) is derived in the following sections of the manuscript.
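The relative width of Eq. (3) can be sketched in Python as follows (a hypothetical helper; the per-dimension form is an assumption):

```python
def relative_width(x_l, q):
    """Eq. (3): distance between particle x_l and the mean of the
    personal bests of all SS particles, taken per dimension."""
    SS = len(q)
    mean_best = [sum(p[d] for p in q) / SS for d in range(len(x_l))]
    return [abs(x_l[d] - mean_best[d]) for d in range(len(x_l))]
```

Note that a particle sitting exactly on the mean of the personal bests has zero width and therefore takes no displacement step.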

The last step is to verify the termination criterion using the total number of iterations MaxIt and the convergence tolerance value \(\xi\). The process finishes if one of the conditions given in Eq. (4) is satisfied49.

$$\begin{aligned} Convergence~criteria:\left\{ \begin{matrix} k = \textit{MaxIt}\\ \left| \sum _{\ell =1}^{SS} q_{\ell }^{(k)} - SS g^{(k)} \right| \le \xi \end{matrix}\right. \end{aligned}.$$
(4)
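A sketch of this check in Python (the per-dimension aggregation of the second condition is an assumption; Eq. (4) does not fix it):

```python
def should_stop(k, max_it, q, g, xi):
    """Eq. (4): stop at the iteration cap, or when the summed personal
    bests have collapsed onto SS copies of the global best within xi."""
    SS = len(q)
    spread = sum(abs(sum(p[d] for p in q) - SS * g[d]) for d in range(len(g)))
    return k == max_it or spread <= xi
```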

Time independent Schrödinger equation and particle position

The QPSO algorithm can be described through the quantum behaviour of particle motion. In particular, the case of a particle moving in a bounded potential field is of great interest. In this context, a particle's state is described by the probability density function \(|\psi ({\vec {r}},t)|^2\), in the search space \({\vec {r}}\), at time t. The probability density function must satisfy49

$$\begin{aligned} \int _{-\infty }^{+\infty } \left| \psi ({\vec {r}},t) \right| ^2 d{\vec {r}} = 1 \end{aligned}.$$
(5)

To determine the position of the particle, a measurement is required. For this purpose, a random number \(u=rand(0,1)\) is generated. Accordingly, the wave function squared must be normalized with respect to its maximum value, such that

$$\begin{aligned} \frac{\left| \psi ({\vec {r}},t) \right| ^2}{max \left( \left| \psi ({\vec {r}},t) \right| ^2\right) } = u \end{aligned}.$$
(6)

On the other hand, the time evolution of the wave function \(\psi ({\vec {r}},t)\) of a quantum system is generally described by the Schrödinger equation49,50,51:

$$\begin{aligned} {\widehat{H}}\psi ({\vec {r}},t)=i\hbar \frac{\partial }{\partial t}\psi ({\vec {r}},t) \end{aligned}.$$
(7)

The formulation presented in Eq. (7) is also called the time-dependent Schrödinger equation. In this equation, \({\widehat{H}}\) is the Hamiltonian operator and \(\hbar\) is the reduced Planck constant. Nevertheless, for the purpose of this research, a slowly varying (adiabatic) process is considered, in which the eigenstate at \(E=0\) of the particle changes according to the evolution of the potential V. Then, the one-dimensional stationary Schrödinger equation can be used and written as follows50,51:

$$\begin{aligned} \frac{\hbar ^2}{2m}\frac{\partial ^2 }{\partial z^2}\psi (z) - V(z)\psi (z) = 0 \end{aligned}.$$
(8)

The last formulation is relevant for the derivation of the term f(V) required in Eq. (3).

Quantum-inspired optimization algorithms

The following analysis considers wave functions that belong to the Hilbert space (\(\psi (z) \in {\mathcal {H}}\)); that is, the wave function squared \(\left| \psi (z) \right| ^2\) must be normalizable, with the boundary condition established in Eq. (9)50,51.

$$\begin{aligned} \lim _{z\rightarrow \pm \infty }\psi (z)=0 \end{aligned}.$$
(9)

The proposed quantum-inspired optimization algorithms follow the methodology described in Fig. 3. The methodology requires determining the term f(V) to complete the formulation presented in Eq. (3). In the following, f(V) is derived for each proposed potential field; the fields are shown in Fig. 5.

Figure 5
figure 5

(a) Lorentz potential field to model donor acceptor interaction. (b) Rosen–Morse potential field to model molecular vibrations. (c) Coulomb-like square root potential field to model the electron confinement in graphene.

Lorentz potential field (QPSO-LR)

The following algorithm is inspired by the localization of electrons in bond formation. This occurs when there is an interaction between electron-donor and electron-acceptor atoms, as presented in Fig. 5a. The potential field that describes this phenomenon is given in Eq. (10)52.

$$\begin{aligned} V(z)=\frac{\hbar ^2(2z^2 - a^2)}{2m(z^2 + a^2)^2} \end{aligned}.$$
(10)

The displacement of the particle under the Lorentz potential field is obtained by replacing Eq. (10) in Eq. (8), resulting in Eq. (11).

$$\begin{aligned} \frac{\hbar ^2}{2m}\frac{\text {d}^2 \psi (z)}{\text {d} z^2}- \frac{\hbar ^2(2z^2 - a^2)}{2m(z^2 + a^2)^2}\psi (z)=0 \end{aligned}.$$
(11)

The solution for the Second Order Linear Ordinary Differential Equation (SLODE) presented in Eq. (11) has the form given in Eq. (12).

$$\begin{aligned} \psi (z)=C_1\frac{1}{\sqrt{a^2+z^2}}+C_2\frac{3a^2z+z^3}{3\sqrt{a^2+z^2}} \end{aligned}.$$
(12)

By applying the boundary condition given in Eq. (9), the result is:

$$\begin{aligned} C_1=\sqrt{\frac{a}{\pi }}; C_2=0 \end{aligned}.$$
(13)

Then,

$$\begin{aligned} \psi (z) = \sqrt{\frac{a}{\pi (a^2+z^2)}} \end{aligned}.$$
(14)

By replacing Eq. (14) in Eq. (6) and solving for z, the result is as follows:

$$\begin{aligned} z=a\sqrt{\frac{1-u}{u}} \end{aligned}.$$
(15)

By replacing Eq. (15) in Eq. (3), the conclusion is:

$$\begin{aligned} f(V)=\sqrt{\frac{1-u}{u}} \end{aligned}.$$
(16)
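A numerical sketch of this factor (assuming u is drawn uniformly from (0, 1], so the division is safe):

```python
import math

def f_lorentz(u):
    # Eq. (16): displacement factor of QPSO-LR for a uniform draw u in (0, 1]
    return math.sqrt((1.0 - u) / u)

# The median draw (u = 0.5) gives a step equal to the relative width a,
# while small draws produce long, heavy-tailed jumps away from the attractor.
median_step, long_jump = f_lorentz(0.5), f_lorentz(0.05)
```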

Rosen–Morse potential field (QPSO-RM)

The Rosen-Morse potential field can be used to model the vibration energy spectra produced by the interaction of atoms in a diatomic molecule53,54, as presented in Fig. 5b. A generalized form of the Rosen-Morse potential field is given by Eq. (17).

$$\begin{aligned} V(z)=\frac{\hbar ^2\left[ \tanh ^2(z/a) - \text {sech}^2(z/a) \right] }{2ma^2} \end{aligned}.$$
(17)

The displacement of the particle under the Rosen–Morse potential is obtained by replacing Eq. (17) in Eq. (8), resulting in Eq. (18).

$$\begin{aligned} \frac{\hbar ^2}{2m}\frac{\text {d}^2 \psi (z)}{\text {d} z^2}- \frac{\hbar ^2}{2m}\frac{\left[ \tanh ^2(z/a) - \text {sech}^2(z/a) \right] }{a^2} \psi (z)=0 \end{aligned}.$$
(18)

Using Eq. (9) to normalize the solution to Eq. (18) and solving the SLODE, the result is given by (19).

$$\begin{aligned} \psi (z)=\frac{1}{\sqrt{2a}}\,\text {sech}(z/a) \end{aligned}.$$
(19)

Plugging in Eq. (19) back into Eq. (6) and solving for z, the result is as follows:

$$\begin{aligned} z = a\, \text {sech}^{-1}(\sqrt{u}) \end{aligned}.$$
(20)

Substituting Eq. (20) in Eq. (3), the conclusion is

$$\begin{aligned} f(V) = \text {sech}^{-1}(\sqrt{u}) \end{aligned}.$$
(21)
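Since standard math libraries rarely provide an inverse hyperbolic secant, Eq. (21) can be evaluated with the logarithmic identity \(\text {sech}^{-1}(x) = \ln \big ((1+\sqrt{1-x^2})/x\big )\), as in this Python sketch:

```python
import math

def f_rosen_morse(u):
    # Eq. (21): displacement factor of QPSO-RM for a uniform draw u in (0, 1].
    # asech(sqrt(u)) via the identity asech(x) = ln((1 + sqrt(1 - x^2)) / x)
    return math.log((1.0 + math.sqrt(1.0 - u)) / math.sqrt(u))
```

As a round-trip check, feeding u = sech²(2) returns 2.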

Coulomb-like square root field (QPSO-CS)

A potential that contains an inverse square root and a linear symmetric potential is evaluated. The Coulomb-like square root potential field is commonly employed to model the electron confinement in graphene55, as presented in Fig. 5c. Mathematically, the Coulomb-like square root potential field can be written as given in Eq. (22).

$$\begin{aligned} V(z)=\frac{\hbar ^2}{2m}\left[ -\frac{0.4}{a^{3/2}}\left| z \right| ^{-0.5} + \frac{0.6}{a^3}\left| z \right| \right] \end{aligned}.$$
(22)

Similarly, the displacement of the particle under the Coulomb-like Square Root potential is obtained by replacing Eq. (22) in Eq. (8), which leads to Eq. (23):

$$\begin{aligned} \frac{\hbar ^2}{2m}\frac{\text {d}^2 \psi (z)}{\text {d} z^2}- \frac{\hbar ^2}{2m}\left[ -\frac{0.4}{a^{3/2}}\left| z \right| ^{-0.5} + \frac{0.6}{a^3}\left| z \right| \right] \psi (z)=0 \end{aligned}.$$
(23)

Using Eq. (9) to normalize the solution to Eq. (23) and solving the SLODE, the result is given by Eq. (24).

$$\begin{aligned} \psi (z)=\frac{1}{1.69\sqrt{a}}e^{-\left( \frac{\left| z\right| }{a} \right) ^{3/2}} \end{aligned}.$$
(24)

Plugging in Eq. (24) back into Eq. (6) and solving for z, the result is as follows:

$$\begin{aligned} z = a\left[ \ln \left( \frac{1}{u} \right) \right] ^{2/3} \end{aligned}.$$
(25)

Substituting Eq. (25) in Eq. (3), the conclusion is:

$$\begin{aligned} f(V) = \left[ \ln \left( \frac{1}{u} \right) \right] ^{2/3} \end{aligned}.$$
(26)

The implementation of the proposed optimization techniques is presented in Algorithm 1. Notice that the main difference among the proposed algorithms lies in step 11, in which the particle updates its position. This is because the particle follows a trajectory that mainly depends on the bounding potential field defined through f(V).

figure a

Case study

To show the performance of the proposed algorithms (QPSO-LR, QPSO-RM and QPSO-CS), 24 benchmark functions are used56. The benchmark functions are categorized as unimodal, multimodal and fixed-dimension multimodal, and are mathematically described in Tables 7, 8 and 9, respectively. Unimodal functions are used to analyze the behaviour of the algorithms when there is a single minimum value in a certain interval. In contrast, multimodal functions are utilized to analyze the algorithms in the presence of several local minima throughout the search space. The simulations use a dimension (total number of variables) of 30 for the unimodal and multimodal functions, while for the fixed-dimension multimodal functions the dimension is as shown in Table 9. The population size and total number of iterations are 50 and 1000, respectively. To corroborate the significance of the results, a total of 30 experiments (simulations) are conducted. All the algorithms are implemented in MATLAB R2020a, and the numerical experiments are run on an Intel Core (TM) i7-6500 processor, 2.50 GHz, 8 GB RAM.

Results

The performance of QPSO-LR, QPSO-RM and QPSO-CS is measured in terms of exploitation (accuracy and precision), exploration (search speed and acceleration) and simulation time. In addition, to explore the advantages of the proposed algorithms, the same optimization problems are solved using particle swarm optimization (PSO), genetic algorithm (GA), and firefly optimization (FFO). The results are presented as follows.

Exploitation: accuracy and precision

Exploitation refers to the local search capability around promising regions. It can be quantified with two statistical metrics: accuracy \((\delta )\) and precision \((\phi )\). Accuracy is defined as the absolute value of the difference between the average value and the true value (reference value) of the quantity being measured, that is, the closeness of the measurements to the true value. Precision, on the other hand, indicates the closeness of the measurements to each other. These metrics are obtained from the true value \((x_{opt})\), the mean \(({\bar{x}})\), and the standard deviation \((\sigma )\) of a set of data, as given in Eqs. (27) and (28), respectively57.

$$\begin{aligned}&\delta =\left| x_{opt}-{\bar{x}}\right| \end{aligned},$$
(27)
$$\begin{aligned}&\phi =\left| \sigma /{\bar{x}}\right| \end{aligned}.$$
(28)
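These two metrics translate directly into code; a Python sketch (returning infinity for the precision when the mean is zero, an edge case Eq. (28) leaves open):

```python
import statistics

def accuracy_precision(samples, x_opt):
    """Eqs. (27)-(28): accuracy and precision over the best objective
    values obtained from repeated experiments."""
    mean = statistics.mean(samples)
    delta = abs(x_opt - mean)                       # Eq. (27): accuracy
    if mean == 0.0:
        return delta, float("inf")
    phi = abs(statistics.stdev(samples) / mean)     # Eq. (28): precision
    return delta, phi
```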

Using the data obtained from the experiments performed with each optimization technique, the mean and standard deviation can be computed. Then, by employing Eqs. (27) and (28), the accuracy and precision of each optimization technique are obtained. Tables 1, 2 and 3 show the statistical metrics for the unimodal, multimodal and fixed-multimodal benchmark functions, respectively (best values are highlighted in blue). The results reveal that the best algorithm in terms of accuracy and precision depends on the function. Focusing on the unimodal benchmark functions \((f_1-\ f_8)\), QPSO-LR presents the best accuracy for \(f_1,\ f_2,\ f_4,\ f_6,\ f_7,\) followed by GA for functions \(f_1,\ f_2,\ f_6,\ f_8\) and PSO for functions \(f_4\) and \(f_7\). Likewise, if only precision is considered, there is considerable variation among the algorithms with respect to the functions, e.g., GA is the most precise for \(f_1,\ f_2,\ f_7\), while for \(f_4\) and \(f_6\) QPSO-RM and QPSO-LR are the most precise, respectively.

Analyzing the algorithms for the multimodal benchmark functions \((f_9-\ f_{15})\), there is considerable variation in accuracy and precision. However, considering these features separately, the most accurate, but not necessarily the most precise, algorithms in descending order are: QPSO-LR, GA, PSO, FFO, QPSO-CS, QPSO-RM. In contrast, the most precise, but not necessarily the most accurate, algorithms in descending order are: QPSO-RM, FFO, GA, QPSO-LR, QPSO-CS, and PSO.

Table 1 Accuracy and precision metrics for unimodal benchmark functions with N = 30, Tmax = 1000 and Texp = 30.
Table 2 Accuracy and precision metrics for multimodal benchmark functions with N = 30, Tmax = 1000 and Texp = 30.
Table 3 Accuracy and precision metrics for fixed-multimodal benchmark functions with Tmax = 1000 and Texp = 30.

For the fixed multimodal functions \((f_{16}-f_{24})\), the results show that QPSO-LR and QPSO-RM achieve excellent results for functions \(f_{16}\) and \(f_{19}\) in terms of accuracy and precision, while PSO responds better to \(f_{17}\). For the rest of the functions, all algorithms present acceptable accuracy and precision.

Exploration: speed and acceleration

Exploration is defined as the ability to examine the promising area(s) of the search space as broadly as possible56. Exploration is closely related to the convergence behaviour, which is shown in Figs. 6, 7 and 8. These graphs represent the evolution of the best solution through the iterations. It can be appreciated that the convergence exhibits certain patterns depending on the type of function on which the algorithms are applied. For instance, functions \(f_1,\ f_2,\ f_3,\ f_4,\ f_5\) (independently of the employed algorithm) present a linear convergence behaviour, while for the rest of the functions the behaviour is exponential.

To quantify the exploration, the average search speed and acceleration of each algorithm are calculated using the Allan variances58. The first Allan variance index measures the search speed, i.e. the variation of the distance of the best search agent C in every iteration k, which is mathematically described in Eq. (29). The second Allan variance index measures the search acceleration, i.e. the variation of the search speed, which is mathematically described in Eq. (30).

$$\begin{aligned} \omega= & {} \left| \frac{\sum _{k=1}^{k_{max}}{C\left( k+1\right) -C\left( k\right) }}{\Delta k}\right| \end{aligned},$$
(29)
$$\begin{aligned} \alpha= & {} \left| \frac{\sum _{k=1}^{k_{max}}{\omega \left( k+1\right) -\omega \left( k\right) }}{\Delta k}\right| \end{aligned}.$$
(30)
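Under one literal reading of Eqs. (29) and (30) (with Δk taken as the number of iteration steps and C the best objective value per iteration, both assumptions), the two indexes can be sketched as:

```python
def search_speed(c):
    """Eq. (29): magnitude of the mean per-iteration change of the best value C."""
    k = len(c) - 1
    return abs(sum(c[i + 1] - c[i] for i in range(k)) / k)

def search_acceleration(c):
    """Eq. (30): the same index applied to the per-iteration speed signal."""
    speeds = [abs(c[i + 1] - c[i]) for i in range(len(c) - 1)]
    return search_speed(speeds)
```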

Tables 4, 5 and 6 show the search speed and acceleration for each optimization technique grouped by unimodal, multimodal, and fixed multimodal functions, respectively (best values are highlighted in blue). For all types of functions, the GA algorithm exhibits the highest average search speed and acceleration. However, high speed and acceleration do not assure good precision, accuracy, or simulation time. Therefore, a more thorough analysis is required to develop a proper comparison between the optimization methods, as presented in the next sections.

Figure 6
figure 6

Convergence behaviour of the unimodal benchmark functions.

Figure 7
figure 7

Convergence behaviour of the multimodal benchmark functions.

Figure 8
figure 8

Convergence behaviour of the fixed-multimodal benchmark functions.

Table 4 Convergence speed and acceleration metrics for unimodal benchmark functions with N = 30, Tmax = 1000 and Texp = 30.
Table 5 Convergence speed and acceleration metrics for multimodal benchmark functions with N = 30, Tmax = 1000 and Texp = 30.
Table 6 Convergence speed and acceleration metrics for fixed-multimodal benchmark functions with N = 30, Tmax = 1000 and Texp = 30.

Simulation time

Another important aspect to evaluate is the simulation time. Figure 9 shows the average execution time of each optimization algorithm. The results reveal that FFO and GA present considerably higher simulation times (approximately 34 to 48 times higher) than the rest of the optimization techniques. Therefore, QPSO-LR, QPSO-RM, QPSO-CS and PSO present better performance in terms of simulation time than FFO and GA.

Figure 9
figure 9

Average simulation time.

Overall performance

The performance of each optimization technique in terms of accuracy, precision, search speed, search acceleration, and simulation time is quantitatively defined by the grading rules presented in Eqs. (31) to (35), respectively. These rules are developed to facilitate the comparison between optimization techniques under the following criterion: ‘+ 3’ excellent performance, ‘+ 2’ good performance, ‘+ 1’ fair performance, ‘0’ low performance. Once the values are assigned, the average per function group (unimodal, multimodal, fixed multimodal) is taken. Then, the values are normalized based on the highest average.

$$rule_{\delta } = \left\{ {\begin{array}{*{20}l} { + 3} \hfill & {if\;\delta < 1 \times 10^{{ - 6}} } \hfill \\ { + 2} \hfill & {if\;1 \times 10^{{ - 6}} \le \delta < 1 \times 10^{{ - 3}} } \hfill \\ { + 1} \hfill & {if\;1 \times 10^{{ - 3}} \le \delta < 1} \hfill \\ 0 \hfill & {if\;\delta \ge 1} \hfill \\ \end{array},} \right.$$
(31)
$$rule_{\sigma } = \left\{ {\begin{array}{*{20}l} { + 3} \hfill & {if\;\sigma < 1 \times 10^{{ - 3}} } \hfill \\ { + 2} \hfill & {if\;1 \times 10^{{ - 3}} \le \sigma < 1} \hfill \\ { + 1} \hfill & {if\;1 \le \sigma < 3} \hfill \\ 0 \hfill & {if\;\sigma \ge 3} \hfill \\ \end{array},} \right.$$
(32)
$$rule_{\omega } = \left\{ {\begin{array}{*{20}l} { + 3} \hfill & {if\;\omega > 1 \times 10^{6} } \hfill \\ { + 2} \hfill & {if\;1 \times 10^{3} < \omega \le 1 \times 10^{6} } \hfill \\ { + 1} \hfill & {if\;1 < \omega \le 1 \times 10^{3} } \hfill \\ 0 \hfill & {if\;\omega \le 1} \hfill \\ \end{array},} \right.$$
(33)
$$rule_{\alpha } = \left\{ {\begin{array}{*{20}l} { + 3} \hfill & {if\;\alpha > 1000} \hfill \\ { + 2} \hfill & {if\;100 < \alpha \le 1000} \hfill \\ { + 1} \hfill & {if\;1 < \alpha \le 100} \hfill \\ 0 \hfill & {if\;\alpha \le 1} \hfill \\ \end{array},} \right.$$
(34)
$$rule_{\tau } = \left\{ {\begin{array}{*{20}l} { + 3} \hfill & {if\;\tau < 2} \hfill \\ { + 2} \hfill & {if\;2 \le \tau < 4} \hfill \\ { + 1} \hfill & {if\;4 \le \tau < 6} \hfill \\ 0 \hfill & {if\;\tau \ge 6} \hfill \\ \end{array},} \right.$$
(35)
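The grading rules (31) to (35) translate directly into code. The sketch below is a minimal Python rendering of the five rules; the function names are illustrative and the thresholds are copied from the equations above.

```python
def grade_delta(d):
    """Accuracy rule (31): grade the mean error delta."""
    if d < 1e-6: return 3
    if d < 1e-3: return 2
    if d < 1:    return 1
    return 0

def grade_sigma(s):
    """Precision rule (32): grade the standard deviation sigma."""
    if s < 1e-3: return 3
    if s < 1:    return 2
    if s < 3:    return 1
    return 0

def grade_omega(w):
    """Search-speed rule (33): higher speed earns a higher grade."""
    if w > 1e6: return 3
    if w > 1e3: return 2
    if w > 1:   return 1
    return 0

def grade_alpha(a):
    """Search-acceleration rule (34)."""
    if a > 1000: return 3
    if a > 100:  return 2
    if a > 1:    return 1
    return 0

def grade_tau(t):
    """Simulation-time rule (35): shorter time earns a higher grade."""
    if t < 2: return 3
    if t < 4: return 2
    if t < 6: return 1
    return 0
```

After grading, the per-group averages are normalized by the highest average, as described above, so each index lies between 0 and 100%.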

Finally, the results are integrated into a spider chart (Fig. 10) to illustrate the overall performance of each optimization technique.

Figure 10

Optimization techniques performance categorized by type of function.

As a general trend, Fig. 10 shows that, for the three types of functions, the algorithm closest on average to 100% performance in all five indexes is QPSO-LR, followed by PSO, QPSO-CS, and finally the GA and FFO methods. A closer examination reveals:

  • With respect to accuracy, QPSO-LR, PSO and QPSO-CS rank first, second and third, respectively, with average performances of 97%, 91% and 88%.

  • In terms of precision, FFO, QPSO-LR and GA rank first, second and third, respectively, with average performances over the three types of functions of 94%, 91% and 87%.

  • Regarding convergence speed, GA ranks first with 100% average performance, followed by QPSO-LR (83%) and QPSO-RM (80%).

  • In terms of convergence acceleration, GA reaches 100%, followed by QPSO-RM and QPSO-LR with 80% and 74% average performance, respectively.

  • Regarding simulation time, PSO ranks at the top with 100% average performance, followed by QPSO-CS, QPSO-RM and QPSO-LR with average performances between 85% and 80%.

To better understand the performance in terms of exploitation, the accuracy and precision are averaged for each type of function (unimodal, multimodal and fixed multimodal). The same process is applied to search speed and acceleration to quantify exploration (see Table 10). As a result, considering only the proposed approaches, QPSO-LR ranks first in exploitation, followed by QPSO-RM and QPSO-CS. This is expected, since the Lorentz potential field is the weakest as \((z \rightarrow \pm \infty )\), whereas the Coulomb-like potential, which diverges as \((z \rightarrow \pm \infty )\), is the strongest. Regarding exploration, QPSO-RM ranks first, followed by QPSO-LR and QPSO-CS. This is also expected, since the Rosen–Morse potential field is the strongest between the limits of the quantum well, while the Coulomb-like potential, which diverges towards \(-\infty\), is the weakest (and the steepest).
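The averaging described above can be sketched as follows; the dictionary layout and function name are assumptions for illustration, with each key holding the grade values (from rules (31) to (34)) obtained by one algorithm over one function group.

```python
def exploitation_exploration(grades):
    """Summarize exploitation and exploration from graded metrics.

    grades: dict with lists of grade values per metric, e.g.
        {'accuracy': [...], 'precision': [...],
         'speed': [...], 'acceleration': [...]}
    Exploitation is quantified here as the mean of the accuracy and
    precision grades; exploration as the mean of the search-speed and
    acceleration grades, mirroring the averaging described in the text.
    """
    mean = lambda xs: sum(xs) / len(xs)
    exploitation = (mean(grades['accuracy']) + mean(grades['precision'])) / 2
    exploration = (mean(grades['speed']) + mean(grades['acceleration'])) / 2
    return exploitation, exploration
```

Running this per function group reproduces the kind of exploitation/exploration summary reported in Table 10.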

Compared to the unimodal functions, the multimodal functions show an increase in exploration and a slight decrease in exploitation, due to the large number of traps that may exist in the hypersurface being searched. Likewise, the fixed-multimodal functions show an increase in exploitation and a decrease in exploration compared to the unimodal ones, attributed to the irregular hypersurface formed. While the behaviour of the search algorithm may change for each type of function, QPSO-LR always exhibited high exploitation and moderate-to-high exploration. The combination of a weak potential at \((z \rightarrow \pm \infty )\) and moderate steepness between the limits of the quantum well is responsible for this trend.

Discussion and conclusion

Three novel quantum-behaved swarm optimization algorithms, based on the Lorentz (QPSO-LR), Rosen–Morse (QPSO-RM) and Coulomb-like Square Root (QPSO-CS) potential fields, are proposed. QPSO-LR, QPSO-RM, and QPSO-CS are inspired by models of donor–acceptor interaction between particles, molecular vibrations, and electron confinement in graphene, respectively.

To verify the efficacy of the proposed optimization techniques, twenty-four test functions, grouped into unimodal, multimodal and fixed multimodal, are employed to benchmark their performance in terms of exploitation (accuracy and precision), exploration (search speed and acceleration), and simulation time. The results show that the proposed approaches (QPSO-LR, QPSO-RM, and QPSO-CS) present several advantages over traditional optimization techniques such as PSO, FFO, and GA. For instance, QPSO-LR exhibits better accuracy than PSO, FFO, and GA for all function types; QPSO-CS shows better overall precision than GA and PSO; QPSO-RM and QPSO-LR achieve faster search speed and acceleration than PSO and FFO; and QPSO-LR, QPSO-RM, and QPSO-CS show significantly better computation times than FFO and GA.

Among the proposed optimization techniques, no single one shows the best performance in all the given attributes (exploration, exploitation, and simulation time); however, QPSO-LR is the most balanced, which makes it a powerful global searcher for different applications. Therefore, the aim of future research is to investigate the use of QPSO-LR combined with other computational intelligence methodologies, such as fuzzy systems and neural networks, for the optimization of multivariable processes in order to enhance the performance of the approach.