In this paper, a smooth function is constructed to approximate the nonsmooth output of max-min fuzzy neural networks (FNNs), and its approximation property is also presented. Replacing the output of max-min FNNs with its smoothing approximation function, the error function, which measures the discrepancy between the actual and desired outputs of max-min FNNs, becomes continuously differentiable. A smoothing gradient descent-based algorithm with the Armijo-Goldstein step-size rule is then formulated to train max-min FNNs. Based on existing convergence results, the convergence of our proposed a...
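As a minimal illustration of the two ingredients named above, the sketch below combines a common smoothing of the nonsmooth max operator (log-sum-exp, used here as a stand-in; the paper's specific smoothing function is not reproduced) with a gradient descent step governed by an Armijo-type backtracking rule. The function names `smooth_max` and `armijo_step` and all parameter values are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def smooth_max(x, mu=0.01):
    """Log-sum-exp smoothing of max(x): continuously differentiable,
    and converges to max(x) as the smoothing parameter mu -> 0.
    (Illustrative choice; the paper constructs its own smooth function.)"""
    x = np.asarray(x, dtype=float)
    m = x.max()  # subtract the max for numerical stability
    return m + mu * np.log(np.sum(np.exp((x - m) / mu)))

def armijo_step(f, grad_f, w, alpha0=1.0, beta=0.5, sigma=1e-4):
    """One gradient descent step with Armijo backtracking:
    shrink the step size until the sufficient-decrease condition holds."""
    g = grad_f(w)
    alpha = alpha0
    while f(w - alpha * g) > f(w) - sigma * alpha * np.dot(g, g):
        alpha *= beta
    return w - alpha * g

# Usage: one Armijo step on a smooth quadratic error function.
f = lambda w: float(np.dot(w, w))
grad_f = lambda w: 2.0 * w
w_new = armijo_step(f, grad_f, np.array([1.0, 1.0]))
```

Because the smoothed output is differentiable, the resulting error function admits an ordinary gradient, which is what makes the Armijo-Goldstein rule applicable to training.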