Lattice QCD
Early history
Nowadays, Lattice Field Theory is a well-established method which even has its own Wikipedia page for the layman (which happens to cite one of my articles).
However, this is a rather old idea that has come a long way. It began in 1974 as a theoretical idea of Kenneth Wilson. For him it was a way to obtain a cleanly defined (non-perturbatively regularised) quantum field theory, on which one could already perform a few computations by hand and better understand how the confinement of quarks might emerge. That paper makes no mention of any potential use of a computer, which is nowadays a defining feature of Lattice QCD.
At that point, even the concept of quark confinement was very new. This was just one year after the paper of Gross and Wilczek on asymptotic freedom, the other side of the same coin, and five years after Gell-Mann's Nobel prize.
Very quickly, the concept was applied to the first Monte Carlo calculations on computers, with a series of papers by Creutz in 1979-1980, followed a couple of years later by the first numerical estimates of concrete physical observables, by people such as Parisi, who established the quenched approximation.
It would take almost three more decades to turn this into a reliable quantitative tool, with lattices of reasonable size, the crude quenched approximation removed, and the appropriate limits taken, plus a few extra years before the physical quark masses could be used directly in the simulations.
Objectives and successes
Lattice Field Theory now deals with almost any problem involving non-perturbative QCD (or strong nuclear interactions). As soon as the problem cannot be treated as a small perturbation, Lattice QCD is essentially the only method which is both ab-initio and model-independent. On top of these conceptual advantages, its precision now surpasses, on more and more topics, that of traditional methods such as quark models or sum rules.
Many of the precision results of the field are regularly summarised by the Flavour Lattice Averaging Group (FLAG). This includes quantities such as form factors and matrix elements, decay constants, low-energy constants, and more fundamental quantities such as quark masses or the running of the strong coupling. Other important results are the anomalous magnetic moment of the muon, masses of exotic hadrons and scattering parameters, or some less usual quantities used for instance in neutrino physics or cosmology. Many of these quantities are computed at the percent level, sometimes even sub-percent.
These quantities are then typically used either to determine fundamental parameters of the Standard Model (such as CKM matrix elements), or to test for violations of the Standard Model and evaluate the merits of candidate theories Beyond the Standard Model (for instance the SMEFT parameters constrained by the B anomalies and our form factors).
Markov Chain Monte Carlo
Quantum Field Theory is (usually) expressed in terms of path integrals derived from a known Lagrangian. With a relatively harmless Wick rotation, and provided that we are not in a situation where the sign problem arises, this can be turned into a statistical problem: the exponential of (minus) the action, which contains all of the physics, can be interpreted as a probability distribution function. Moreover, thanks to the discretisation of space-time (the lattice), this function acts on a finite-dimensional (continuous) space and can be represented on a computer with a finite amount of memory. The main challenge is therefore to produce a good sampling according to this probability distribution, through Markov Chain Monte Carlo methods. More specifically, we use methods based on HMC, a physically-inspired algorithm which was developed for Lattice QCD and is now also widely used in the Machine Learning community.
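To give a flavour of the idea (this is emphatically not actual Lattice QCD code), here is a minimal HMC sketch in Python for a toy one-dimensional free scalar field: the exponential of minus the action is sampled by leapfrog molecular dynamics followed by a Metropolis accept/reject step. The field content, lattice size, parameter values, and chain length are all illustrative assumptions.

```python
# Minimal HMC sketch for a toy 1D lattice scalar field (illustration only).
# The Euclidean action S[phi] plays the role of -log(probability).
import numpy as np

rng = np.random.default_rng(0)
N = 32          # number of lattice sites (toy choice)
m2 = 0.25       # bare mass squared (illustrative value)

def action(phi):
    # free scalar action: nearest-neighbour kinetic term plus mass term
    kin = 0.5 * np.sum((np.roll(phi, -1) - phi) ** 2)
    return kin + 0.5 * m2 * np.sum(phi ** 2)

def force(phi):
    # -dS/dphi, drives the molecular-dynamics evolution
    return (np.roll(phi, -1) + np.roll(phi, 1) - 2 * phi) - m2 * phi

def hmc_step(phi, n_md=20, dt=0.05):
    # draw momenta, integrate Hamilton's equations with leapfrog,
    # then accept/reject with a Metropolis test to correct integration errors
    pi = rng.normal(size=phi.shape)
    h_old = 0.5 * np.sum(pi ** 2) + action(phi)
    phi_new, pi_new = phi.copy(), pi.copy()
    pi_new += 0.5 * dt * force(phi_new)
    for _ in range(n_md - 1):
        phi_new += dt * pi_new
        pi_new += dt * force(phi_new)
    phi_new += dt * pi_new
    pi_new += 0.5 * dt * force(phi_new)
    h_new = 0.5 * np.sum(pi_new ** 2) + action(phi_new)
    if rng.random() < np.exp(h_old - h_new):
        return phi_new, True
    return phi, False

phi = np.zeros(N)
configs = []
for i in range(1000):
    phi, accepted = hmc_step(phi)
    if i >= 200 and i % 10 == 0:     # discard thermalisation, thin the chain
        configs.append(phi.copy())
```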
Any configuration of the quantum field is therefore assigned a probability, which essentially corresponds to how likely it is as a quantum fluctuation of the vacuum. Ironically, most of the computer time is usually spent there, simulating the vacuum. This defines a measure which can be combined with an arbitrary integrand to produce an integral for any observable. Computing this integrand, where matter is introduced, is a second step which comes after the HMC.
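Continuing the toy sketch above (and reusing its list `configs`), a hypothetical second step could measure an observable on each stored configuration and average it over the ensemble; here a simple two-point correlator is taken as an illustrative choice.

```python
# Toy "measurement" step reusing the configurations produced by the HMC sketch above.
import numpy as np

def correlator(phi, t):
    # volume-averaged two-point function <phi(x) phi(x+t)> on one configuration
    return np.mean(phi * np.roll(phi, -t))

corr = np.array([[correlator(phi, t) for t in range(len(phi))] for phi in configs])
mean = corr.mean(axis=0)                      # ensemble average per time separation
err = corr.std(axis=0) / np.sqrt(len(corr))   # naive statistical error
```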
Both steps are usually performed on supercomputers (or sometimes a GPU or a cluster for smaller systems), using highly-optimised code, usually in C/C++ (sometimes with a bit of assembly).
Inverters
Both for the computation of the HMC forces and for the Wick contractions of the observables, the elementary problem we face is the repeated solving of linear systems involving a huge sparse matrix, the Dirac operator.
Expressed like this, it is obviously a very general problem, and some of the techniques we use are quite general and applied in many other fields. This is the case for the Conjugate Gradient and other Krylov-subspace iterative methods, and to some extent for ideas about preconditioning, deflation, or multigrid solvers.
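As an illustration of how generic this building block is, here is a plain, unpreconditioned Conjugate Gradient sketch on a toy sparse symmetric positive-definite matrix. In actual Lattice QCD codes the solver is applied to (variants of) the Dirac normal operator D†D, is heavily preconditioned, and is written in optimised C/C++ rather than Python; the matrix and parameters below are assumptions for the example.

```python
# Generic Conjugate Gradient sketch on a toy sparse SPD matrix (illustration only).
import numpy as np
import scipy.sparse as sp

def cg(A, b, tol=1e-10, max_iter=1000):
    # iteratively solve A x = b for symmetric positive-definite A
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# toy SPD system standing in for the (much larger) Dirac normal operator
n = 1000
A = sp.random(n, n, density=0.01, random_state=0)
A = A @ A.T + 10 * sp.identity(n)   # make it symmetric positive-definite
b = np.ones(n)
x = cg(A.tocsr(), b)
```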
However, adapting these methods to the specific physics we are studying is very important for maximal performance. These solvers cannot simply be used as black boxes from an external library.
Data analysis
Finally, once we have computed our observables on each configuration, these constitute our data, which we can analyse offline. To some extent, this Monte Carlo data behaves as if it were actual experimental data.
This is much less numerically intensive, and most of the time it can even be done on a laptop with some Python/Julia code. It is, however, another big part of the work, involving statistical analysis and well-motivated modelling.
Particular attention is paid to obtaining the most correct possible evaluation of the error bars (both statistical and systematic), in keeping with the high standards of reliability typical of particle physics. Traditional tools to estimate these are resamplings of weighted non-linear least-squares fits, combined with Bayesian model averaging.
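As a rough illustration of these tools (not the analysis code of any specific collaboration), the following sketch bootstraps per-configuration measurements and performs a weighted non-linear least-squares fit of a toy single-exponential ansatz. The model, data shape, and parameter values are assumptions made purely for the example.

```python
# Bootstrap resampling plus weighted non-linear least-squares fit (illustration only).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def model(t, A, m):
    # toy fit ansatz, e.g. the tail of a correlator
    return A * np.exp(-m * t)

def fit_once(t, y, err):
    # weighted non-linear least squares on one (resampled) mean
    popt, _ = curve_fit(model, t, y, sigma=err, p0=[1.0, 0.5], absolute_sigma=True)
    return popt

def bootstrap_fit(t, data, n_boot=500):
    # `data` has shape (n_configs, n_times): one measurement per configuration
    n_cfg = data.shape[0]
    err = data.std(axis=0) / np.sqrt(n_cfg)          # per-time-slice statistical error
    fits = []
    for _ in range(n_boot):
        sample = data[rng.integers(0, n_cfg, n_cfg)]  # resample configurations
        fits.append(fit_once(t, sample.mean(axis=0), err))
    fits = np.array(fits)
    return fits.mean(axis=0), fits.std(axis=0)        # central values, bootstrap errors

# synthetic toy data, only to make the sketch runnable
t = np.arange(1, 12, dtype=float)
data = model(t, 1.0, 0.5) * (1 + 0.05 * rng.normal(size=(200, len(t))))
central, errors = bootstrap_fit(t, data)
```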