Parallelization strategies are presented for Virtual Quake, a numerical simulation code for earthquakes based on topologically realistic systems of interacting earthquake faults. One of the demands placed on the simulation is the accurate reproduction of observed earthquake statistics over three to four decades. This requires the use of a high-resolution fault model, which in turn demands computational power well beyond the capabilities of off-the-shelf multi-core CPU computers. However, recent advances in general-purpose graphics processing units (GPUs) have the potential to address this problem at a moderate increase in cost. In this work, a functional decomposition of Virtual Quake is performed and opportunities for parallelization are discussed. The computationally intensive modules are identified and implemented on GPUs, significantly speeding up earthquake simulations. In the current best-case scenario, a computer with six GPUs can simulate 500 years of fault activity in California at 1.5 km × 1.5 km element resolution in less than 1 hour, whereas a single CPU requires more than 2 days to perform the same simulation.
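For context, the reported timings imply an approximate lower bound on the speedup; treating "more than 2 days" as roughly 48 hours (an assumption, since the exact single-CPU runtime is not stated) gives

\[
S \;\gtrsim\; \frac{48~\text{h (single CPU)}}{1~\text{h (six GPUs)}} \;\approx\; 48.
\]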