For the analysis and solution of discretized ordinary or partial differential equations it is necessary to solve systems of equations or eigenproblems with coefficient matrices whose sparsity patterns depend on the discretization method. In many cases, the use of the finite element method (FE) results in largely unstructured systems of equations. The main computational cost of iterative methods for solving these problems consists of matrix-vector products and vector-vector operations; in each iteration, the dominant work is usually the computation of matrix-vector products. When iterative solvers are parallelized on a multiprocessor system with distributed memory, the data distribution and the communication scheme, which depend on the data structures used for the sparse matrices, are crucial for efficient execution. Here, data distribution and communication schemes are presented that are based on an analysis of the column indices of the non-zero matrix elements. Performance tests, using the conjugate gradient method (CG) and the Lanczos algorithm for the symmetric eigenproblem, were carried out on the distributed memory systems INTEL iPSC/860 and PARAGON XP/S 10 of the Research Centre Jülich with sparse matrices from FE models. The parallel variants of the algorithms showed good scaling behavior for matrices with very different sparsity patterns.
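To illustrate the kernel that dominates each CG and Lanczos iteration, the following is a minimal sketch of a sparse matrix-vector product in compressed sparse row (CSR) storage. The paper does not specify its storage format, so CSR and the helper name `csr_matvec` are illustrative assumptions; the column-index array `col_idx` is the structure whose analysis drives the data distribution and communication schemes described above.

```python
def csr_matvec(val, col_idx, row_ptr, x):
    """Sketch of y = A*x for a sparse matrix A in CSR format.

    val     -- non-zero values, row by row
    col_idx -- column index of each non-zero (used here for the product;
               in the parallel setting, also analyzed to derive the
               communication scheme)
    row_ptr -- row_ptr[i]..row_ptr[i+1] delimits the non-zeros of row i
    """
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += val[k] * x[col_idx[k]]
    return y

# Hypothetical 3x3 example: tridiagonal matrix
#   [[4, 1, 0],
#    [1, 4, 1],
#    [0, 1, 4]]
val     = [4.0, 1.0, 1.0, 4.0, 1.0, 1.0, 4.0]
col_idx = [0, 1, 0, 1, 2, 1, 2]
row_ptr = [0, 2, 5, 7]
x = [1.0, 1.0, 1.0]
y = csr_matvec(val, col_idx, row_ptr, x)  # -> [5.0, 6.0, 5.0]
```

In a distributed-memory setting, each processor would hold a block of rows; scanning its local `col_idx` entries reveals which components of `x` reside on other processors and thus which values must be communicated before the product.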