# Getting Started With CodePerf and Python

After writing benchmarks and generating profiles with Python, you can upload your benchmark data to CodePerf as part of your CI workflow.

This documentation page will show you how to get a simple Python project up and running with Poetry, tox, and pytest.

After we have our sample project up and running, we'll learn how to generate micro-benchmarks and run them locally and on CI.

Finally, we'll visualize the auto-generated code profiles with CodePerf. You can view the entire source code of this post at our example repository.

Here's a sample profile output that you can check live here!

## Creating the project

We'll start by creating the sample project folder with Poetry. Poetry is a tool for dependency management and packaging in Python: it allows you to declare the libraries your project depends on, and it manages (installs/updates) them for you. You can read more about Poetry at https://python-poetry.org/docs/.

poetry new example-python-poetry
cd example-python-poetry

## Setting up our dev dependencies

We'll start by adding tox and pytest-benchmark as development dependencies, as follows:

poetry add tox --dev
poetry add pytest-benchmark --dev
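
At this point the dev dependencies are recorded in pyproject.toml. As a rough sketch (the exact contents and version constraints will vary with your Poetry version and what it resolves), the relevant sections look like:

```toml
[tool.poetry]
name = "example-python-poetry"
version = "0.1.0"
description = ""
authors = ["Your Name <you@example.com>"]

[tool.poetry.dependencies]
python = "^3.8"

[tool.poetry.dev-dependencies]
tox = "^3.24"
pytest-benchmark = "^3.4"

[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
```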

and then including the following tox.ini file in the root project folder. Note: make sure to adjust the envlist according to the Python versions you want to test.

```ini
[tox]
isolated_build = true
envlist = py38

[testenv]
whitelist_externals = poetry
commands =
    poetry install -v
    poetry run pytest tests/
```

At the end of this step you should have the following dir structure:

```
example-python-poetry % tree .
.
├── README.rst
├── example_python_poetry
│   └── __init__.py
├── poetry.lock
├── pyproject.toml
├── tests
│   ├── __init__.py
│   └── test_example_python_poetry.py
└── tox.ini

2 directories, 7 files
```



Finally, if you run:

tox


You should see a similar output:

```
(...)
Installing the current project: example-python-poetry (0.1.0)
py38 run-test: commands[1] | poetry run pytest tests/
=========================== test session starts ===========================
platform darwin -- Python 3.8.8, pytest-5.4.3, py-1.11.0, pluggy-0.13.1
cachedir: .tox/py38/.pytest_cache
rootdir: /Users/fco/codeperf/example-python-poetry
collected 1 item

tests/test_example_python_poetry.py .                               [100%]

============================ 1 passed in 0.00s ============================
_________________________________ summary _________________________________
py38: commands succeeded
congratulations 😃
```


## Writing the benchmark


We'll start by adding a file named "example_python_poetry/fib.py" with the following function defined:

```python
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```
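
This naive recursion runs in exponential time, which is exactly what makes it a nice benchmarking target. If you want a faster variant to compare against in later benchmarks, one option (not part of the example repository) is to memoize it with the standard library:

```python
from functools import lru_cache


@lru_cache(maxsize=None)
def fib_cached(n):
    # Same recurrence, but each fib_cached(k) is computed only once.
    if n < 2:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)


print(fib_cached(10))   # 55, same result as the naive version
print(fib_cached(100))  # feasible now; the naive recursion would never finish
```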


Next, we'll edit the "tests/test_example_python_poetry.py" file and add a benchmark to it:

```python
from example_python_poetry.fib import fib


def test_fib(benchmark):
    # benchmark fib of 10
    result = benchmark(fib, 10)

    # Extra code, to verify that the run completed correctly.
    # Sometimes you may want to check the result; fast functions
    # are no good if they return incorrect results :-)
    assert result == 55
```
You should be able to check locally that the benchmark is running as expected and producing the desired profile, as follows:

```shell
poetry run python -m cProfile -o profile.out -m pytest tests/
```

After the benchmark run, you should have a profile.out file in the project root, along with pytest-benchmark's results table (min/max/mean/stddev per benchmark) in the terminal output.
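
If you just want a rough local sanity check of the timing outside the pytest harness, here's a minimal sketch using the standard library's timeit module (the measured numbers depend entirely on your machine):

```python
import timeit


def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)


# Average over a large number of calls, roughly what the benchmark fixture does.
runs = 100_000
per_call = timeit.timeit(lambda: fib(10), number=runs) / runs
print(f"fib(10): {per_call * 1e9:.0f} ns per call")
```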

## Uploading to CodePerf

Now that we are able to run benchmarks locally, we can begin uploading these reports to CodePerf.

## Installing CodePerf's bash uploader

We'll upload the profile to codeperf.io by taking advantage of CodePerf's bash uploader, codeperf. If you already have the codeperf bash utility installed, skip to the next step. You can install the latest bash uploader using the following bash command:

bash <(curl -s https://raw.githubusercontent.com/codeperfio/codeperf/main/get.sh)

## Uploading the profile with codeperf

The only required arguments to push the data to codeperf are the benchmark name (the "--bench" argument) and the profile output. You can upload the generated profile.out to codeperf.io as follows:

codeperf --bench test_fib profile.out

For a deeper understanding of the tweaks you can make, just run codeperf --help.

## Automated GitHub Actions Integration

We will not go into detail about setting up a pipeline, given that the sections above describe all steps in detail, but CodePerf will work with any CI/CD system. Below, we show you a basic configuration for GitHub Actions.

```yaml
name: Benchmark and push to CodePerf

on: [push, pull_request]

jobs:
  profile:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 2
      - uses: actions/setup-python@v2
        with:
          python-version: '3.8'
      - name: Install Poetry
        run: pip install poetry
      - name: Install dependencies
        run: poetry install
      - name: Run benchmark and generate the profile
        run: poetry run python -m cProfile -o profile.out -m pytest tests/
      - name: Get CodePerf utility
        run: bash <(curl -s https://raw.githubusercontent.com/codeperfio/codeperf/main/get.sh)
      - name: Upload profile data to CodePerf
        run: codeperf --bench test_fib profile.out
```

After running the action you should get a link to the profile data viewer. If you follow that link, you should see all benchmarks.

Here's a sample one: https://www.codeperf.io/gh/codeperfio/example-go/commit/7376a9b/bench/BenchmarkFib10/cpu