Contributing to quadrants#

Good practice reminder#

  • testing: Any new feature or modified code must be tested. Run the test suite with python tests/run_tests.py, which sets up the right test environment for pytest; CLI arguments are forwarded to pytest. Do not invoke pytest directly, as it behaves differently.

  • format/linter: Before pushing any commits, set up pre-commit and run it with pre-commit run -a

  • There is no need to force-push to keep a clean history: merging is eventually done by squashing commits.

Creating your build/dev environment#

It is recommended to use a virtual env. Quadrants supports Python 3.10–3.13 for building and testing. However, pre-commit is configured with a pinned Python 3.10 to ensure consistent formatting, so you will need Python 3.10 available for running pre-commit hooks.

After cloning, make sure to initialize submodules:

git submodule update --init --recursive

uv is handy for initializing such an environment:

# create the venv for development
uv venv --python 3.10

# activate it
source .venv/bin/activate

# install deps groups from pyproject.toml
uv pip install --group dev --group test

build.py#

build.py is a Python script that automatically sets up the build environment before invoking the build commands:

  • LLVM libraries: downloads an archive for LLVM libraries, decompresses it and sets LLVM_DIR.

  • clang: depending on the platform, downloads clang or checks that a copy with the right version is available.
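The version check mentioned in the clang step could be sketched like this (hypothetical helper; build.py's actual logic may differ):

```python
import re

def clang_major_version(version_output: str) -> int:
    """Parse the major version from `clang++ --version` output (illustrative)."""
    # Works for common shapes: "clang version 17.0.6",
    # "Ubuntu clang version 15.0.7", "Apple clang version 15.0.0".
    m = re.search(r"clang version (\d+)", version_output)
    if not m:
        raise ValueError("could not parse clang version")
    return int(m.group(1))
```

A build script would feed this the captured output of `clang++ --version` and compare the result against the required version.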

build.py can be used in at least two ways:

  • build.py wheel to build the wheel (currently via setup.py bdist_wheel)

  • build.py --shell to enter a shell with the same environment variables as build.py wheel, so you can invoke the commands yourself.

python setup.py develop provides incremental builds:

./build.py --shell # run a new shell with environment variables
python setup.py develop

To write the environment variables to a file, use ./build.py -w [filename]. For example:

./build.py -w env.sh
source env.sh
python setup.py develop

Building the package for release purposes#

To build the release package:

./build.py wheel

We use cmake to build the C++ core. The build directory depends on the host architecture and the Python version. For example: _skbuild/linux-x86_64-3.10/cmake-build.
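Assuming scikit-build's default naming convention, a small helper (hypothetical, not part of the repo) to compute this directory for the current interpreter:

```python
import sys
import sysconfig
from pathlib import Path

def skbuild_cmake_dir() -> Path:
    """Guess the scikit-build cmake build directory, e.g.
    _skbuild/linux-x86_64-3.10/cmake-build (illustrative)."""
    plat = sysconfig.get_platform()  # host platform tag, e.g. "linux-x86_64"
    pyver = f"{sys.version_info.major}.{sys.version_info.minor}"
    return Path("_skbuild") / f"{plat}-{pyver}" / "cmake-build"
```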

You can adjust the cmake options to enable or disable the features you need or don’t need. To discover the available options, use ccmake:

ccmake _skbuild/linux-x86_64-3.10/cmake-build

You can then set the environment variable QUADRANTS_CMAKE_ARGS, whose contents are appended to the cmake command used to configure the build. For instance, to disable the CUDA and AMDGPU backends:

export QUADRANTS_CMAKE_ARGS="-DQD_WITH_CUDA=OFF -DQD_WITH_AMDGPU=OFF"

To tell cmake where to look for a dependency such as LLVM, you can either set the environment variable LLVM_DIR or pass the cmake option LLVM_ROOT:

# using an env var
export LLVM_DIR="/path/to/llvm/"
# or with a cmake option
export QUADRANTS_CMAKE_ARGS="$QUADRANTS_CMAKE_ARGS -DLLVM_ROOT=/path/to/llvm"

Advanced usage#

CI Convention about compilers/LLVM#

Quadrants comprises at least three important parts:

  1. quadrants host runtime: Made with a mix of Python and C++. The C++ core is compiled using the OS default C/C++ compiler.

  2. quadrants device runtime (bitcode): C++ code compiled using clang++ from the distribution/OS. Using clang++ is required as it has to support the same targets as LLVM.

  3. LLVM libraries used by host runtime: statically or dynamically linked, used to lower the kernel’s final IR to machine code on the host. The CI uses an LLVM version compiled from source.

Building LLVM for debugging it#

Sometimes it is useful to have an LLVM build that can print intermediate passes, or that carries debug symbols, to find out where and why LLVM fails (for example, when instruction selection fails). To do so, you have to build LLVM yourself. Take inspiration from the way our CI pipeline builds LLVM and tweak it to your liking, without enabling or disabling options that would create discrepancies.

You can then use LLVM_DIR to point to the LLVM build directory.
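As a starting point, a debug-friendly configure could look like the sketch below. This is a configuration sketch only: -DCMAKE_BUILD_TYPE=Debug and -DLLVM_ENABLE_ASSERTIONS=ON are standard LLVM CMake options, the paths are placeholders, and the remaining options should mirror the CI pipeline.

```shell
# Sketch: configure and build a debug LLVM (paths are placeholders;
# copy the rest of the options from the CI pipeline to avoid discrepancies).
cmake -S llvm-project/llvm -B llvm-debug-build \
    -DCMAKE_BUILD_TYPE=Debug \
    -DLLVM_ENABLE_ASSERTIONS=ON
cmake --build llvm-debug-build

# Point the Quadrants build at this LLVM
export LLVM_DIR="$PWD/llvm-debug-build"
```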

CI checks#

Pull requests are validated by several CI jobs. Most run automatically; a failing check blocks merge.

Pre-commit / linters (linters.yml)#

Runs pre-commit run -a which enforces:

  • black — Python formatting with a 120-character line limit

  • clang-format — C/C++ formatting

  • trailing-whitespace and end-of-file-fixer — whitespace hygiene

  • ruff — Python linting

  • pylint — additional Python linting (scoped to python/quadrants/)

You can run these locally with pre-commit run -a after pip install pre-commit.

Other CI jobs#

  • pyright (pyright_linter.yml) — Python type checking

  • clang-tidy (clang_tidy.yml) — C++ static analysis

  • check-markup-links (check_markup_links.yml) — validates links in documentation

  • linux / macosx / win — build and test on each platform

  • test-gpu — GPU-specific tests

  • coverage report — a one-line diff coverage summary is posted as a PR comment on each push, linking to the full annotated report. This includes kernel-level branch coverage. See Kernel code coverage for details.

Line wrapping check (check_wrapping.yml)#

Uses an AI agent to check that lines in changed files follow wrapping conventions:

  • Markdown files (.md): lines should not be hard-wrapped. Each paragraph should be a single long line.

  • Code comments and docstrings: lines should be wrapped at 120 characters, not at 80.

The check runs only on lines changed in the PR and reports up to 3 violations. This check is delayed by 30 minutes to avoid running repeatedly when multiple commits are pushed in quick succession.
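A rough sketch of the two conventions as a checker (a hypothetical heuristic for illustration; the actual check is an AI agent, not this code):

```python
def wrapping_violations(path: str, lines: list[str]) -> list[str]:
    """Flag lines that break the wrapping conventions (illustrative only)."""
    violations = []
    if path.endswith(".md"):
        # Markdown: a non-empty line followed by more text, not ending a
        # sentence or structure, suggests a hard-wrapped paragraph.
        for i, line in enumerate(lines[:-1]):
            nxt = lines[i + 1]
            if line.strip() and nxt.strip() and not line.rstrip().endswith((".", ":", "|", ">")):
                violations.append(f"{path}:{i + 1}: markdown paragraph appears hard-wrapped")
    else:
        # Code comments: the limit is 120 characters, not 80.
        for i, line in enumerate(lines):
            if len(line) > 120 and line.lstrip().startswith(("#", "//")):
                violations.append(f"{path}:{i + 1}: comment exceeds 120 characters")
    return violations[:3]  # the CI check reports at most 3 violations
```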

Deleted comments check (check_deleted_comments.yml)#

Uses an AI agent to check that comments and docstrings have not been unnecessarily deleted. It reports up to 10 violations. This check is delayed by 30 minutes to avoid running repeatedly when multiple commits are pushed in quick succession.

Test coverage check (check_test_coverage.yml)#

Uses an AI agent to verify that new or modified source code in a PR has corresponding test coverage. The agent examines the diff of non-test source files and cross-references them against test files in the repo (existing or added in the PR). It flags up to 5 violations. This check is delayed by 30 minutes to avoid running repeatedly when multiple commits are pushed in quick succession.

Feature factorization check (check_feature_factorization.yml)#

Uses an AI agent to flag feature-specific code being piled into heavily-tracked core files when it could live in its own feature-specific file instead. The concern is not that the new code is in the “wrong” place semantically — it is usually topically related to the host file — but that the host file is already a hot, central, frequently-edited file, and adding more self-contained feature code to it makes review, merge conflicts, and future churn worse. The fix is almost always to extract the feature-specific block (top-level function, class, large block, or even a cluster of new methods on an existing class) into its own module, with the host file delegating to it via a narrow interface.

The agent reports up to 5 violations, each annotated with the host file’s hotness numbers (commits / authors / size). This check is delayed by 30 minutes to avoid running repeatedly when multiple commits are pushed in quick succession.

PR change report (pr_change_report.yml)#

Posts a fresh PR comment on every push. The comment is a single line: the totals (file count, code lines added, code lines removed) formatted as a markdown link to a GitHub Check whose page contains the full per-file / per-function breakdown. “Code lines” exclude blank lines, comment-only lines, and (in Python) lines whose only token content is a string literal (i.e. docstrings and continuation lines of multi-line strings). C/C++ /* */ block comments are stripped before counting.
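The code-line rule can be sketched for Python files with the standard tokenize module (an illustrative approximation, not the report's actual implementation):

```python
import io
import tokenize

# Tokens that never make a line count as code: comments, layout tokens,
# and bare string literals (docstrings / multi-line string continuations).
SKIP_TYPES = {tokenize.COMMENT, tokenize.NL, tokenize.NEWLINE,
              tokenize.INDENT, tokenize.DEDENT, tokenize.ENDMARKER,
              tokenize.STRING}

def count_code_lines(source: str) -> int:
    """Count 'code lines': blank lines, comment-only lines, and lines whose
    only token content is a string literal are excluded (illustrative)."""
    code_rows: set[int] = set()
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type in SKIP_TYPES:
            continue
        # Any remaining token makes every row it spans a code line.
        code_rows.update(range(tok.start[0], tok.end[0] + 1))
    return len(code_rows)
```

Note that a line like `x = "foo"` still counts: the name and the `=` are non-string tokens on that row, so only lines containing nothing but string-literal tokens are skipped.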

The number columns on the Check page (without a + or - sign) are code-line counts in the BASE (pre-PR) version: file size before this PR (0 for newly-added files), function body size before this PR (0 for new functions; original body size for deleted functions). +<n> / -<n> are code lines added / removed by this PR.

Files are sorted by added lines descending. Within each file, functions are split into a New: group (added by this PR), an Existing: group (modified by this PR), and a Deleted: group (removed by this PR), and within each group sorted by added lines descending, then removed lines descending. Files that are deleted in their entirety appear as a single per-file row (so the totals stay accurate) but skip the per-function breakdown. Sample shape:

quadrants/program/program_stream.cpp 0 +151
    New:
      StreamManager::create_event()             0     +18
      StreamManager::create_stream()            0     +18
      StreamManager::record_event()             0     +15
      StreamManager::destroy_event()            0     +13
      StreamManager::destroy_stream()           0     +13

python/quadrants/lang/stream.py 0 +111
    New:
      Event.destroy()             0      +9
      Stream.destroy()            0      +9
      Event._destroy_prog()       0      +8
      Stream._destroy_prog()      0      +8
      Event.__del__()             0      +7

quadrants/program/legacy_stream.cpp 42 -42
    # entire file deleted (per-function breakdown skipped)

The 0 in the LoC column for the two new files reflects that both files did not exist before this PR (their pre-PR code-line count is 0). The 42 -42 row for legacy_stream.cpp is a fully-deleted file: 42 code lines existed before this PR and all 42 were removed.
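The grouping and sorting described above can be sketched as follows (illustrative; the per-function tuple layout is an assumption):

```python
def group_functions(funcs):
    """funcs: list of (name, base_lines, added, removed), matching the
    report's columns. Split into New / Existing / Deleted groups and sort
    each by added desc, then removed desc (illustrative sketch)."""
    groups = {"New": [], "Existing": [], "Deleted": []}
    for name, base, added, removed in funcs:
        if base == 0 and added > 0:
            groups["New"].append((name, base, added, removed))      # added by this PR
        elif added == 0 and base > 0 and removed == base:
            groups["Deleted"].append((name, base, added, removed))  # fully removed
        else:
            groups["Existing"].append((name, base, added, removed))
    for group in groups.values():
        group.sort(key=lambda f: (-f[2], -f[3]))
    return groups
```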

This check is delayed by 30 minutes to avoid running repeatedly when multiple commits are pushed in quick succession.