This thesis presents a measurement of Higgs boson decays to bottom and charm quarks, where the Higgs boson is produced in association with a $W$ boson, using proton-proton ($pp$) collision data with a total integrated luminosity of 140 fb$^{-1}$ at a center-of-mass energy of $\sqrt{s} = 13$ TeV, collected with the ATLAS detector during Run 2 of the Large Hadron Collider (LHC). The analysis uses machine learning techniques to identify bottom and charm quarks and to discriminate between signal and background, and implements a coherent treatment of the systematic uncertainties affecting the measurement of the $WH(H \to b\bar{b})$ and $WH(H \to c\bar{c})$ processes, which enhances the sensitivity of the measurement compared to previous similar analyses.
The $WH(H \to b\bar{b}/c\bar{c})$ processes are found to be compatible with the predictions of the Standard Model.
For the $WH(H \to b\bar{b})$ process, an excess of events over the background expectation is observed with a significance of $4.8\sigma$ ($5.3\sigma$ expected).
The ratio of the signal yield to the Standard Model prediction is measured to be $0.88 \pm 0.14~(\text{stat.}) \pm 0.14~(\text{syst.})$.
For the $WH(H \to c\bar{c})$ process, no excess is observed over the background prediction, and an observed (expected) upper limit of 19.1 (18.4) times the Standard Model prediction for the production cross section times branching ratio is set at the 95\% confidence level.
Direct constraints on the Higgs-bottom and Higgs-charm coupling modifiers are also set, with the ratio $|\kappa_c/\kappa_b|$ being constrained to be less than 4.9 ($|\kappa_c/\kappa_b| < 4.5$ expected) at the 95\% confidence level.
For the identification of Higgs decays to bottom and charm quarks, jet reconstruction and flavor tagging represent crucial data processing tasks. They depend, in turn, on information about the reconstructed tracks, which correspond to the trajectories of charged particles produced in $pp$ collisions at the LHC and recorded with the ATLAS detector.
The ATLAS track reconstruction algorithm, based on a combinatorial Kalman filter, requires an initial estimate (seed) of the track parameters in order to subsequently perform track finding.
This thesis presents the implementation and performance of the seed finding algorithm used by ATLAS during Run 3 of the LHC.
The efficiency of seed finding is found to be high (85--95\% for a fiducial selection of charged particles produced in $t\bar{t}$ events), as is required in order to maintain a high efficiency during the subsequent track finding stage.
The duplicate and fake track seed creation rates are also found to be comparatively high, with the latter depending directly on the number of interactions per bunch crossing, $\mu$.
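The combinatorial nature of seed finding can be illustrated with a minimal sketch: triplets of space points from successive detector layers are formed, the curvature of the circle through each triplet in the transverse plane is estimated, and triplets failing a transverse-momentum cut are discarded. This is an illustrative simplification, not the ATLAS implementation; the layer grouping, magnetic field value, and $p_T$ threshold below are placeholder assumptions.

```python
import itertools
import math

def circle_radius(p1, p2, p3):
    """Radius (in m) of the circle through three 2D points in the transverse plane."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = math.dist(p1, p2)
    b = math.dist(p2, p3)
    c = math.dist(p3, p1)
    # Twice the signed triangle area via the cross product; zero means collinear.
    cross = (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)
    if abs(cross) < 1e-12:
        return float("inf")  # collinear points: a straight track (infinite radius)
    return a * b * c / (2 * abs(cross))

def find_seeds(inner, middle, outer, b_field=2.0, min_pt=0.9):
    """Form all inner-middle-outer triplets and keep those above a pT threshold.

    b_field in tesla, min_pt in GeV; pT [GeV] ~= 0.3 * B [T] * R [m].
    Field and threshold values are illustrative placeholders.
    """
    seeds = []
    for triplet in itertools.product(inner, middle, outer):
        pt = 0.3 * b_field * circle_radius(*triplet)
        if pt >= min_pt:
            seeds.append(triplet)
    return seeds
```

The nested loop over layer combinations is the source of the combinatorial scaling with detector occupancy (and hence with $\mu$) noted above: the number of candidate triplets grows as the product of the space-point multiplicities per layer, which is why low-quality candidates must be rejected as early as possible.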
While the seed finding algorithm has been extensively optimized for Run 3, the high-luminosity upgrade of the LHC
(HL-LHC), slated to begin operation around 2029, is expected to have $\mu$ up to 200, which is well in excess of the average value of $\langle\mu\rangle = 45$ seen in Run 3.
Hence, the seed finding algorithm must be revisited to cope with the much harder combinatorics and updated to exploit the characteristics of the planned Inner Detector upgrade: the ATLAS Inner Tracker (ITk).
The measurement of $H \to c\bar{c}$ decays will benefit significantly from the dataset of at least 3000 fb$^{-1}$ that will be collected during the operation of the HL-LHC. However, the harsh conditions at the HL-LHC pose a significant challenge for track reconstruction, given that the time needed to reconstruct complex events scales combinatorially with pile-up.
In order to stay within the limited computing budget, the track reconstruction algorithm for the ITk must be updated to minimize the total processing time per event.
This thesis presents a study of the CPU timing and tracking performance of the fast ITk track reconstruction chain, a variant of the offline track reconstruction chain with significantly reduced CPU requirements, in the context of performing track reconstruction at trigger level, where the latency requirements are stringent.
The fast ITk reconstruction is found to provide nearly the same tracking performance as the offline ITk reconstruction at only a quarter of the CPU cost.
This makes the fast ITk track reconstruction algorithm a viable candidate for track reconstruction at trigger level.