This paper presents an improved supermemory gradient method for unconstrained minimization. The method (which includes the memory gradient method as a special case) is shown to be convergent; moreover, its rate of convergence is n-step superlinear, and n-step quadratic under certain additional conditions. The improved supermemory gradient method dispenses with the multidimensional optimal search required by the original algorithm: a one-dimensional optimal search per iteration suffices.
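The iteration described above can be illustrated by a minimal sketch. This is not the paper's exact algorithm: it assumes a memory-gradient-style direction d_k = -g_k + beta * d_{k-1} with a fixed memory coefficient `beta` (a hypothetical simplification; the paper's method combines several previous directions and chooses the coefficients differently), and it uses a golden-section search as the one-dimensional optimal search.

```python
import math

def golden_section(phi, a=0.0, b=2.0, tol=1e-8):
    """One-dimensional optimal search: minimize phi on [a, b] by golden section."""
    gr = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = b - gr * (b - a), a + gr * (b - a)
    while b - a > tol:
        if phi(c) < phi(d):
            b, d = d, c
            c = b - gr * (b - a)
        else:
            a, c = c, d
            d = a + gr * (b - a)
    return (a + b) / 2.0

def memory_gradient(f, grad, x0, beta=0.1, iters=200):
    """Memory-gradient sketch: d_k = -g_k + beta * d_{k-1}, step by 1-D search.

    `beta` is an assumed fixed memory coefficient, not the paper's rule.
    """
    x = list(x0)
    d = [-gi for gi in grad(x)]          # first step: steepest descent
    for _ in range(iters):
        g = grad(x)
        d = [-gi + beta * di for gi, di in zip(g, d)]
        t = golden_section(lambda t: f([xi + t * di for xi, di in zip(x, d)]))
        x = [xi + t * di for xi, di in zip(x, d)]
    return x
```

For example, minimizing the ill-conditioned quadratic f(x) = (x1^2 + 10 x2^2)/2 from the point (3, 2) drives the iterate to the minimizer at the origin; only the scalar search over the step length t is performed at each iteration.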