GPUs are becoming increasingly important in lattice calculations, as they deliver outstanding computational power in an economical and compact package. To let the lattice community exploit this computing capability, the QUDA library was developed, comprising a number of GPU kernels that accelerate the heaviest LQCD calculations; surprisingly, however, tools to evaluate disconnected diagrams were missing from the library. In this work we present our contribution to the QUDA library, and show how GPUs can effectively remove most, if not all, of the errors that arise from estimating the inverse of the fermionic matrix stochastically.
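The stochastic estimation referred to above can be illustrated with a minimal sketch: a Hutchinson-style estimator of Tr(M⁻¹) using Z2 noise vectors, which is the basic ingredient of disconnected-diagram calculations. This is a generic illustration, not QUDA's API; the dense `numpy` solve stands in for the GPU-accelerated Dirac solver, and the function name and parameters are ours.

```python
import numpy as np

def stochastic_trace_inverse(M, n_vectors=200, seed=0):
    """Estimate Tr(M^{-1}) as (1/N) sum_k eta_k^dag M^{-1} eta_k,
    where the eta_k are Z2 noise vectors with E[eta eta^dag] = I.
    In a real LQCD code the solve M x = eta would be done by the
    (GPU-accelerated) Dirac-operator inverter; here we use a dense
    solve purely for illustration."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    samples = []
    for _ in range(n_vectors):
        eta = rng.choice([-1.0, 1.0], size=n)  # Z2 noise vector
        x = np.linalg.solve(M, eta)            # stand-in for the solver
        samples.append(eta @ x)                # eta^dag M^{-1} eta
    estimate = np.mean(samples)
    # Statistical error of the stochastic estimate
    error = np.std(samples, ddof=1) / np.sqrt(n_vectors)
    return estimate, error
```

The stochastic error of such an estimator falls off only as 1/sqrt(N) in the number of noise vectors, which is precisely why variance-reduction techniques, and the raw throughput of GPUs for the many required inversions, matter for disconnected diagrams.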