In this paper, a smoothing algorithm for training max-min neural networks is proposed. Specifically, we apply a smooth function to approximate the max and min operators, using this smoothing technique twice: once to eliminate the inner min operator and once to eliminate the outer max operator. Replacing the actual network output with its smooth approximation, we then use the partial derivatives of the approximation function with respect to the weights in place of those of the actual network output. The smoothing algorithm is then constructed by the gradient descent method. This algorithm can also be used to solve f...
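To make the idea concrete, the following is a minimal sketch of the double-smoothing scheme, assuming a log-sum-exp surrogate for max (and its negation for min) and a single max-min unit of the form max_j min(w_j, x_j); the function names, the smoothing parameter `beta`, and the finite-difference gradient are illustrative choices, not the paper's exact construction.

```python
import numpy as np

def smooth_max(x, beta=20.0):
    # Log-sum-exp approximation of max(x); approaches the true max
    # as beta -> infinity. Shift by the max for numerical stability.
    m = np.max(x)
    return m + np.log(np.sum(np.exp(beta * (x - m)))) / beta

def smooth_min(x, beta=20.0):
    # min(x) = -max(-x), so the same surrogate smooths the min operator.
    return -smooth_max(-x, beta)

def smooth_maxmin(w, x, beta=20.0):
    # Smooth surrogate of a max-min unit max_j min(w[j], x[j]):
    # the smoothing is applied twice, first to the inner min,
    # then to the outer max, yielding a function differentiable in w.
    inner = np.array([smooth_min(np.array([wj, xj]), beta)
                      for wj, xj in zip(w, x)])
    return smooth_max(inner, beta)

def grad_step(w, x, t, lr=0.1, beta=20.0, eps=1e-6):
    # One gradient-descent step on the squared error (y - t)^2,
    # using the derivatives of the smooth approximation in place of
    # those of the actual (non-differentiable) network output.
    # A finite-difference gradient is used here for brevity.
    y = smooth_maxmin(w, x, beta)
    g = np.zeros_like(w)
    for j in range(len(w)):
        wp = w.copy()
        wp[j] += eps
        g[j] = (smooth_maxmin(wp, x, beta) - y) / eps
    return w - lr * 2.0 * (y - t) * g
```

For a single training pair, repeatedly calling `grad_step` drives the smoothed output toward the target, which is the behavior the gradient-descent construction above relies on.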