Abstract: About twenty years ago, research activities aiming at the development of optimal maintenance and rehabilitation strategies (models) for roads and bridges started in several countries, including Hungary. In the first models developed abroad, deterioration depending on time and other parameters was described by Markov transition probability matrices. Due to the inaccuracies and inconsistencies of the earlier models, continuous model upgrading has been carried out by many researchers worldwide. In addition, fundamentally new models have appeared in the literature which are able to describe the actual processes more reliably. The research work of the authors of this paper has concentrated on Pavement Management Systems (PMSs) and Bridge Management Systems (BMSs). Since roads and bridges are typically financed from a common budget, the authors developed a combined model of road pavement and bridge management, considerably increasing the efficient use of available funds.
Keywords: Pavement Management; Bridge Management; Markov deterioration model; maintenance-rehabilitation and operation cost distribution (allocation)
Most floating-point computations, in particular additive operations, can have an annoying side effect: the results can be inaccurate (the relative error can be arbitrarily large), which may lead to qualitatively wrong answers to a computational problem. For the most prevalent applications there are advanced methods that avoid unreliable computational results in most cases; however, these methods usually decrease the performance of the computations. In this paper, the most frequently occurring vector-vector addition and dot-product operations are investigated. We propose a faster stable addition method, made possible by the appropriate utilization of the advanced SIMD architecture. In this work we focus only on the widely used Intel systems that are available to most users. Based on this method, stable vector-addition and dot-product implementations are also introduced. Finally, we show the performance of these techniques.
Keywords: Scientific computations, Numerical accuracy, Accelerating stable methods, Heuristics, Intel’s SIMD architecture, Cache techniques
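The instability the abstract refers to comes from rounding in repeated additions. As a minimal illustration of the kind of stable addition being discussed (the classic compensated summation of Kahan, not the authors' SIMD method), the sketch below carries the rounding error of each addition in a correction term:

```python
def kahan_sum(xs):
    """Compensated (Kahan) summation: each addition's rounding error is
    captured in a correction term and fed back into the next addition,
    keeping the accumulated error essentially independent of len(xs)."""
    total = 0.0
    comp = 0.0  # running compensation for lost low-order bits
    for x in xs:
        y = x - comp
        t = total + y
        comp = (t - total) - y  # the part of y that was rounded away
        total = t
    return total

# 0.1 is not exactly representable in binary floating point, so naive
# summation of many copies drifts; compensated summation does not.
vals = [0.1] * 1_000_000
naive = sum(vals)
stable = kahan_sum(vals)
```

The naive result differs visibly from 100000, while the compensated result agrees with it to machine precision.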
Optimization is widely used in many fields of science and engineering. Optimization problems are becoming more complex and difficult to solve as new models tend to be very large. To keep up with the growing requirements, solvers need to operate faster and more accurately. An important and very widely used branch of optimization is linear optimization. It is also often the hidden computational engine behind algorithms of other fields of optimization. Since linear optimization solvers use a large number of special linear algebraic vector operations, their performance is greatly influenced by their linear algebraic kernels. These kernels should exploit the general characteristics of large-scale linear optimization problem models as efficiently as possible. To construct more efficient linear algebraic kernels, the critical implementation factors influencing operation performance were identified via performance analysis and are presented in this paper. Based on the results of this analysis, a new kernel has been developed for the open-source linear optimization solver Pannon Optimizer, developed at the Operations Research Laboratory of the University of Pannonia. A novel application of indexed dense vectors is also introduced, designed specifically for linear optimization solvers. Finally, a computational study is performed comparing the performance of vector operations of different linear optimization kernels to validate the high efficiency of our kernel. It shows that for large-scale operations the indexed dense vector outperforms the state-of-the-art open-source linear optimization kernels.
Keywords: Linear optimization; Simplex method; Optimization software; Computational linear algebra; Sparse data structures
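The idea behind an indexed dense vector can be sketched as follows. This is a hypothetical minimal illustration of the general data structure, not the Pannon Optimizer's actual implementation: a full dense array gives O(1) random access, while a separate index list lets sparse operations such as dot products touch only the nonzeros.

```python
class IndexedDenseVector:
    """Hybrid sparse/dense vector (illustrative sketch; names hypothetical).
    Dense storage gives O(1) element access; the nonzero index list makes
    iteration and dot products proportional to the number of nonzeros."""

    def __init__(self, length):
        self.values = [0.0] * length   # dense storage
        self.nonzeros = []             # indices of nonzero entries

    def set(self, i, v):
        # Clearing an entry to zero (and compacting the index list) is
        # omitted here for brevity.
        if self.values[i] == 0.0 and v != 0.0:
            self.nonzeros.append(i)
        self.values[i] = v

    def dot(self, dense):
        # Only stored nonzeros contribute to the product.
        return sum(self.values[i] * dense[i] for i in self.nonzeros)

v = IndexedDenseVector(5)
v.set(1, 2.0)
v.set(3, -0.5)
result = v.dot([1.0, 1.0, 1.0, 1.0, 1.0])
```

For very sparse vectors the dot product skips almost the whole array, which is the effect the computational study measures at scale.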
Abstract: In the first two sections of the paper, stream flow is investigated on a probability-theoretical basis. We show that under some realistic conditions its probability distribution is of gamma type. In the model of the third section, the optimal capacity of a storage reservoir is determined. In the model of the fourth section, an optimal water release policy is sought, given that water demands should be met with a prescribed high probability. Finally, in the fifth and last section, in addition to the aforementioned reliability-type constraint, an upper bound is imposed on the number of days when demands may not be met, and the cost of the intake facility is to be minimized.
Keywords: reservoir capacity; release policy; stochastic programming
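To make the reliability-type constraint concrete: with gamma-distributed inflow, one can ask for the smallest capacity K such that stored water plus inflow covers the demand with a prescribed probability. The sketch below is a Monte Carlo illustration of this single-period question only (all parameter values are hypothetical); the paper's models are multi-period and solved by stochastic programming, not by simulation.

```python
import random

def required_capacity(shape, scale, demand, reliability, n=200_000, seed=1):
    """Smallest capacity K with P(K + inflow >= demand) >= reliability,
    estimated by Monte Carlo for gamma-distributed inflow.
    Equivalently, K is the reliability-quantile of the demand deficit
    max(0, demand - inflow)."""
    rng = random.Random(seed)
    deficits = sorted(max(0.0, demand - rng.gammavariate(shape, scale))
                      for _ in range(n))
    return deficits[min(n - 1, int(reliability * n))]

# Hypothetical numbers: mean inflow 2 units/period, demand 2, 90% reliability.
k = required_capacity(shape=2.0, scale=1.0, demand=2.0, reliability=0.9)
```

For these parameters the required capacity comes out somewhat below the demand itself, since the inflow already covers part of it in most periods.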
Abstract: Sales promotion aims to capture the market and increase sales volume. Therefore, an important task is forecasting the demand during the sales period. We present two dynamic methodologies for calculating the quantity to be placed on the shelves at the beginning of each day, subject to constraints expressing lower and upper bounds on the quantities. Both methodologies are new to this field and are useful because of some specific properties of the problem. Our new methods use historical data on demand in previous promotions and the consumption registered in the preceding days. Since the promotion period is relatively short, other methods such as time series analysis can hardly be used.
Keywords: inventory control; dynamic forecasting; information driven forecasting
Abstract: We solve probability maximization problems using an approximation scheme that is analogous to the classic approach of p-efficient points, proposed by Prékopa to handle chance constraints. But while p-efficient points yield an approximation of a level set of the probabilistic function, we approximate the epigraph. The present scheme is easy to implement and is immune to noise in gradient computation.
Keywords: stochastic programming; probabilistic constraints; applications.
In multi-objective optimization problems several objective functions have to be minimized simultaneously. In this work, we present a new computational method for the linearly constrained, convex multi-objective optimization problem. We propose techniques to find joint decreasing directions for both the unconstrained and the linearly constrained case. Based on these results, we introduce a method using a subdivision technique to approximate the whole Pareto optimal set of the linearly constrained, convex multi-objective optimization problem. Finally, we illustrate our algorithm by solving the Markowitz model on real data.
Keywords: multi-objective optimization; joint decreasing direction; approximation of Pareto optimal set; Markowitz model
2000 MSC: 90C29
Abstract: A significant branch of species abundance distribution (SAD) research has a population-dynamical character, in which it is assumed that the stochastic speciation process and the evolution of the different species are governed by the same linear birth and death process. The distributions of the numbers of individuals after speciation tend, under certain conditions, to a discrete limit distribution as the observation time increases. In earlier publications the speciation process was generally assumed to be a homogeneous Poisson process. In the more realistic case where the speciation process is an inhomogeneous Poisson process, the investigation of the model is considerably more difficult. In this paper we deal with models in which the birth and death intensities are identical and the speciation rate is bounded, locally integrable and has asymptotically power-type behaviour. The limit distributions of these models, depending on the speciation rate, are proportional to a logarithmic or an (exactly or asymptotically) Yule distribution. In connection with sample statistics, some results are derived, both in general and in special cases (logarithmic and Yule distributions), which are related to the random choice of a species or of an individual from the whole population of the system.
Keywords: population dynamic model; species abundance distribution; Kendall process; Poisson process; logarithmic distribution; Yule distribution
Abstract: We provide here an elementary derivation of the bounding scheme applied for proving the Wright conjecture on delay differential equations. We also report a minor extension of the parameter range where the conjecture was proven, to α ∈ [1.5, 1.57065].
Keywords: bounding scheme; delay differential equation; interval arithmetic; validated computation; Wright conjecture
Abstract: We develop and test a Bolzano or bisection type global optimization algorithm for continuous real functions over a rectangle. The suggested method combines the branch and bound technique with an always convergent solver of underdetermined nonlinear equations. The numerical testing of the algorithm is discussed in detail.
Keywords: global optimum, nonlinear equation, always convergent method, Newton method, branch and bound algorithms, Lipschitz functions
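The branch-and-bound side of such a method can be illustrated in one dimension. The sketch below is a simplified textbook Lipschitz branch-and-bound, not the authors' algorithm (which works over rectangles and combines the bounding with an always convergent equation solver): on an interval of width w with midpoint m, the Lipschitz property gives the lower bound f(m) - L*w/2, so intervals whose bound cannot beat the incumbent are discarded and the rest are bisected.

```python
def lipschitz_minimize(f, a, b, L, tol=1e-6):
    """Bisection-type branch and bound for a function f on [a, b] with
    Lipschitz constant L. Intervals are pruned when their Lipschitz lower
    bound cannot improve on the best value found so far."""
    best_x, best_f = a, f(a)
    stack = [(a, b)]
    while stack:
        lo, hi = stack.pop()
        mid = 0.5 * (lo + hi)
        fm = f(mid)
        if fm < best_f:
            best_x, best_f = mid, fm
        # Lower bound of f on [lo, hi] from the Lipschitz property.
        bound = fm - L * (hi - lo) / 2
        if bound < best_f - tol and hi - lo > tol:
            stack.append((lo, mid))
            stack.append((mid, hi))
    return best_x, best_f

# A smooth test function with its global minimum at x = 2.
x_min, f_min = lipschitz_minimize(lambda t: (t - 2.0) ** 2, 0.0, 5.0, L=6.0)
```

The pruning step is what distinguishes this from plain bisection: far from the minimizer, whole intervals are discarded after a single function evaluation.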
Abstract: This paper introduces a new probability distribution, namely the epsilon probability distribution, with implications for reliability theory and management. This probability distribution is founded on the so-called epsilon function, which is also introduced here. It is shown that the asymptotic epsilon function is just an exponential function. The properties of this probability distribution suggest that it may serve as a viable alternative to the exponential probability distribution. As the epsilon probability distribution function is a power function, it is more convenient than the exponential probability distribution function from a computational point of view. The main findings and a practical example indicate that the new probability distribution can be utilized to describe the distribution of the time-to-first-failure random variable in both the second and third phases of the hazard function.
Keywords: exponential probability distribution; epsilon probability distribution; hazard function, failure rate modeling
Abstract: In this study we examine the performance of the Markowitz portfolio optimization model using stock time series data from various stock exchanges and investment period intervals. Several methods are used to estimate expected returns, and different "noise"-filtering techniques are applied to the correlation matrix containing the pairwise correlations of the time series. The performance of the methods is compared using the estimated and realized returns and risks, respectively. The results show that, in general, the estimated risk is closer to the realized risk when filtering methods are used. Bootstrap analysis shows that the ratio between the realized return and the estimated risk (Sharpe ratio) is also improved by filtering. In terms of expected return estimation, the results show that the James-Stein estimator improves the reliability of the portfolio, meaning that the realized risk is closer to the estimated risk in this case.
Keywords: Portfolio optimization; Markowitz model; Random matrix theory; Hierarchical clustering
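For readers unfamiliar with the model under study: in the Markowitz framework the covariance (or correlation) matrix enters directly into the portfolio weights, which is why noise filtering of that matrix matters. As a minimal textbook illustration (not the paper's estimation pipeline), the two-asset global minimum-variance portfolio w = Σ⁻¹1 / (1ᵀΣ⁻¹1) can be written in closed form:

```python
def min_variance_weights(cov):
    """Global minimum-variance weights w = inv(S) 1 / (1' inv(S) 1),
    written out in closed form for the 2x2 covariance matrix case."""
    s11, s12, s22 = cov[0][0], cov[0][1], cov[1][1]
    denom = s11 + s22 - 2.0 * s12
    w1 = (s22 - s12) / denom
    return [w1, 1.0 - w1]

def portfolio_variance(w, cov):
    """Portfolio variance w' S w."""
    n = len(w)
    return sum(w[i] * cov[i][j] * w[j] for i in range(n) for j in range(n))

# Hypothetical covariance matrix: asset 1 is four times riskier than asset 2.
cov = [[0.04, 0.0],
       [0.0, 0.01]]
w = min_variance_weights(cov)
```

Because the weights depend on the inverse of the covariance matrix, estimation noise in that matrix propagates directly into the portfolio, motivating the filtering techniques compared in the study.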
Abstract: A measure of possibilistic correlation between marginal possibility distributions of a joint possibility distribution can be defined (see Fullér, Mezei and Várlaki, An improved index of interactivity for fuzzy numbers, Fuzzy Sets and Systems, 165(2011), pp. 56-66) as the weighted average of probabilistic correlations between marginal probability distributions whose joint probability distribution is defined to be uniform on the level sets of their joint possibility distribution. Using this averaging technique, we discuss three quantities (the correlation coefficient, the correlation ratio and the informational coefficient of correlation) which are used to measure the strength of dependence between two possibility distributions. We also discuss the inverse problem, introducing a method to construct a joint possibility distribution for a given value of the possibilistic correlation coefficient. We further discuss the special case when the joint possibility distribution is defined by the so-called weak t-norm and, based on these results, we state a conjecture, as an open problem, on the range of the possibilistic correlation coefficient of any t-norm-based joint distribution.
Keywords: possibility theory, fuzzy numbers, possibilistic correlation, possibilistic dependence.