* LAVA benchmark pipeline
Overview of the CI changes for running performance tests
** Agenda
- Why?
- What?
- How?
- What's next?
- Implementation
** Why?
- Where is the limit of the workload we can handle?
- How do we measure the performance impact of proposed changes?
** What?
- Dummy database generator
- Supplying dummy database
- Performing benchmarks
** How? (dummy generator)
- A generic approach is too complex at this stage
- Supports specific scenarios
- [[https://gitlab.collabora.com/igo95862/lava/-/commits/generate-dummy-db][Currently developed]] by @igo95862
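To illustrate the scenario-specific (rather than generic) approach, a dummy generator can populate only the tables a given benchmark needs. A minimal sketch using SQLite as a stand-in; the single-table layout, table name, and columns are illustrative assumptions, not LAVA's actual Django schema:

```python
import sqlite3

def generate_dummy_db(path, n_jobs=1000):
    """Populate a throwaway database with fake test jobs.

    The schema below is a hypothetical stand-in for illustration;
    the real generator targets LAVA's own database schema.
    """
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS testjob ("
        "id INTEGER PRIMARY KEY, state TEXT, device TEXT)"
    )
    conn.executemany(
        "INSERT INTO testjob (state, device) VALUES (?, ?)",
        (("Finished", f"device-{i % 10}") for i in range(n_jobs)),
    )
    conn.commit()
    conn.close()

generate_dummy_db(":memory:")  # in-memory smoke run; real use writes a file
```

A scenario then maps to a small function like this, rather than a generic schema walker.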
** How? (testing environment)
- Regenerating the mock database can be time-consuming
- Mounting volumes is not straightforward in GitLab CI
- Reusing [[https://gitlab.collabora.com/pawiecz/ci-images/-/commits/benchmark-demo][ci-images approach]]
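One way to reuse the ci-images approach is to bake the pre-generated database into the image itself, so benchmark jobs need neither regeneration nor volume mounts. A minimal sketch; the base image, paths, and database filename are assumptions, not the actual ci-images recipe:

```dockerfile
# Hypothetical benchmark ci-image; base image, paths and filename
# are illustrative assumptions.
FROM debian:bookworm-slim

# Copy the dummy database artifact produced by the generator job
COPY dummy-db.sql /opt/lava-benchmark/dummy-db.sql
```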
** How? (running benchmarks)
- =pytest-benchmark= extension
- Currently focused on frequently used API endpoints
- [[https://gitlab.collabora.com/pawiecz/lava/-/commits/benchmark-demo][Example available]]
** How? (pipeline)
- Dummy database regenerated for each patch to the generator tree
- Image build (to be) triggered by artifact creation
- Benchmarks run within the new =benchmark= ci-image
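The three stages above could be wired together roughly as follows in =.gitlab-ci.yml=; the job names, paths, and rules are illustrative assumptions, not the actual pipeline:

```yaml
stages:
  - generate
  - build
  - benchmark

generate-dummy-db:
  stage: generate
  rules:
    - changes:
        - dummy-db-generator/**/*      # hypothetical generator tree path
  script:
    - ./generate-dummy-db.sh           # hypothetical entry point
  artifacts:
    paths:
      - dummy-db.sql

build-benchmark-image:
  stage: build
  needs: ["generate-dummy-db"]
  script:
    - docker build -t "$CI_REGISTRY_IMAGE/benchmark" .

run-benchmarks:
  stage: benchmark
  image: "$CI_REGISTRY_IMAGE/benchmark"
  script:
    - pytest --benchmark-only
```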
** What's next?
- New scenarios
- Optimizing artifact storage (dummy database)
- Minimizing built images
- Tuning generator performance
- Tagging benchmark jobs to specific runners
** Questions?