Xiaoqi Peng, Yanpo Song, Zhuo Chen and Junfeng Yao
a) To convert multi-objective optimization into single-objective optimization. A
comprehensive score is assigned to each sample according to certain standards. The
samples are then classified by a score threshold, or by weighing the multiple
objectives comprehensively, and they can then be optimized class by class.
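As a rough illustration of method a), the weighted-sum scoring and threshold classification might be sketched as follows (the sample values, weights, and threshold are illustrative assumptions, not data from this work):

```python
# Hypothetical production samples: each row lists the values of the
# individual objectives (all assumed "higher is better").
samples = [
    (0.82, 0.74, 0.91),
    (0.65, 0.88, 0.70),
    (0.90, 0.60, 0.85),
    (0.55, 0.58, 0.62),
]

# Assumed weights expressing the relative importance of each objective.
weights = (0.5, 0.3, 0.2)

def comprehensive_score(sample):
    """Weighted sum collapses the multiple objectives into one score."""
    return sum(w * v for w, v in zip(weights, sample))

# Classify against an assumed score threshold; each class can then be
# examined and optimized separately.
threshold = 0.75
classes = ["good" if comprehensive_score(s) >= threshold else "ordinary"
           for s in samples]
print(classes)  # ['good', 'ordinary', 'good', 'ordinary']
```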
b) To find the common optimization zone. The optimization zone of each single
objective, in which the optimum samples are located, is identified by some spatial
transformation. The overlap of the individual objectives' optimization zones then
forms the common zone of the multiple objectives, in which all targets are taken
into account. The samples that appear in each common optimization zone share
common optimization characteristics.
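Method b) can be sketched as the intersection of per-parameter intervals, one interval set per objective; the parameter names and zone boundaries below are hypothetical:

```python
# Each single objective has its own optimization zone, expressed here as
# per-parameter intervals. The common zone of the multiple objectives is
# their intersection.
zone_quality = {"temperature": (850, 920), "pressure": (1.2, 1.8)}
zone_yield   = {"temperature": (880, 950), "pressure": (1.0, 1.6)}

def intersect_zones(a, b):
    """Return the overlapping zone, or None if the zones do not overlap."""
    common = {}
    for key in a.keys() & b.keys():
        lo = max(a[key][0], b[key][0])
        hi = min(a[key][1], b[key][1])
        if lo < hi:                      # overlap exists for this parameter
            common[key] = (lo, hi)
        else:
            return None                  # no common optimization zone
    return common

common = intersect_zones(zone_quality, zone_yield)
print(common)  # e.g. {'temperature': (880, 920), 'pressure': (1.2, 1.6)}
```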
c) To optimize by combining a multi-output neural network with a genetic
algorithm (GA). In this method, the multiple objectives are taken as the output
parameters and the original industrial parameters as the inputs, and the network is
trained iteratively so that the mapping between the inputs and outputs is
constructed. The network output consists of the values of the multiple
optimization objectives and serves as the fitness function of the GA; the optimum
values can then be found by iteration.
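A minimal sketch of method c) follows. A fixed analytic surrogate stands in for the trained multi-output network, and the GA settings, parameter ranges, and objective weights are assumptions for illustration, not the configuration used in this work:

```python
import random

def network_predict(x):
    """Stand-in for a trained multi-output network: maps two industrial
    input parameters to two predicted objective values. In practice this
    would be a trained BPN; here an assumed analytic surrogate plays its
    role (its optimum lies near x = (0.55, 0.45))."""
    quality  = 1.0 - (x[0] - 0.6) ** 2 - (x[1] - 0.4) ** 2
    recovery = 1.0 - (x[0] - 0.5) ** 2 - (x[1] - 0.5) ** 2
    return quality, recovery

def fitness(x):
    """Combine the network's multiple outputs into one GA fitness value."""
    q, r = network_predict(x)
    return 0.5 * q + 0.5 * r

def ga(pop_size=40, generations=60, seed=0):
    rng = random.Random(seed)
    # Initial population of candidate parameter vectors in [0, 1]^2.
    pop = [[rng.random(), rng.random()] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                   # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]   # crossover
            child = [min(1.0, max(0.0, c + rng.gauss(0, 0.05)))
                     for c in child]                           # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = ga()
print("best parameters:", [round(v, 2) for v in best])
```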
The optimization process can be divided into three steps (Nanjing
University, 1978):
a) Optimization of a single target variable:
- Sample classification: the training samples from industrial production are
classified according to a single target variable, i.e. each variable corresponds to a
certain standard.
- Mapping information: first, the noise samples are filtered out by applying the
belonging degree. Then the best two-dimensional mapping graph of the good
samples is obtained by applying principal component analysis (PCA), the
optimum decision plane (ODP), and partial least squares (PLS) separately.
Third, the best mapping graph is selected. The optimum industrial parameters of
any single variable can be read in detail from the best mapping graph of each
single target.
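The PCA part of the mapping step can be sketched as follows; the sample matrix is hypothetical, and the ODP and PLS mappings are omitted from this sketch:

```python
import numpy as np

# Hypothetical industrial samples: rows = samples, columns = process
# parameters. PCA projects them onto the best two-dimensional mapping
# plane (the two directions of largest variance).
X = np.array([
    [850.0, 1.4, 32.0],
    [905.0, 1.6, 35.0],
    [870.0, 1.3, 30.0],
    [920.0, 1.7, 36.0],
    [860.0, 1.5, 31.0],
])

# Standardize so each parameter contributes comparably.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# Principal components from the eigen-decomposition of the covariance.
cov = np.cov(Xs, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]          # largest variance first
components = eigvecs[:, order[:2]]

# Two-dimensional mapping of every sample, ready for plotting.
mapping = Xs @ components
print(mapping.shape)  # (5, 2)
```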
b) Comprehensive target optimization (Qin et al., 1980; Gong, 1979):
- Comprehensive classification of samples: the samples that meet the indices of
the multiple targets simultaneously are assigned to the first level; otherwise they
belong to the second level. The classification pattern recognition study is carried
out by applying the belonging degree of the samples and a back-propagation
neural network (BPN).
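The two-level comprehensive classification can be sketched as a simultaneous check against the target indices (the index names, threshold values, and samples below are assumed for illustration):

```python
# Hypothetical target indices: a sample belongs to the first level only
# if every objective meets its index simultaneously.
indices = {"recovery": 0.90, "grade": 0.55}   # assumed thresholds

samples = [
    {"recovery": 0.93, "grade": 0.58},   # meets both indices
    {"recovery": 0.95, "grade": 0.52},   # grade fails -> second level
    {"recovery": 0.88, "grade": 0.60},   # recovery fails -> second level
]

def classify(sample):
    """Return 1 (first level) if all indices are met, else 2 (second level)."""
    ok = all(sample[k] >= v for k, v in indices.items())
    return 1 if ok else 2

levels = [classify(s) for s in samples]
print(levels)  # [1, 2, 2]
```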
- Optimization direction: the target values of the sample patterns at the two
class centers are forecast with the BPN. The two class centers reflect the features
of their respective sample classes if the two target values differ greatly. Thus, the
first class center corresponds to the stable and optimized sample pattern, which is
the center of a high-dimensional optimization space. The typical variable parameters