>>> py3-pytest-benchmark: Building community/py3-pytest-benchmark 3.4.1-r1 (using abuild 3.10.0_rc1-r2) started Wed, 26 Oct 2022 17:12:06 +0000 >>> py3-pytest-benchmark: Checking sanity of /home/buildozer/aports/community/py3-pytest-benchmark/APKBUILD... >>> py3-pytest-benchmark: Analyzing dependencies... >>> py3-pytest-benchmark: Installing for build: build-base python3 py3-pytest py3-py-cpuinfo py3-setuptools py3-pytest-xdist py3-freezegun py3-pygal py3-elasticsearch (1/29) Installing libbz2 (1.0.8-r3) (2/29) Installing libffi (3.4.3-r0) (3/29) Installing gdbm (1.23-r0) (4/29) Installing xz-libs (5.2.7-r0) (5/29) Installing mpdecimal (2.5.1-r1) (6/29) Installing readline (8.2.0-r0) (7/29) Installing sqlite-libs (3.39.4-r0) (8/29) Installing python3 (3.10.8-r3) (9/29) Installing py3-attrs (22.1.0-r0) (10/29) Installing py3-iniconfig (1.1.1-r3) (11/29) Installing py3-parsing (3.0.9-r0) (12/29) Installing py3-packaging (21.3-r2) (13/29) Installing py3-pluggy (1.0.0-r1) (14/29) Installing py3-py (1.11.0-r0) (15/29) Installing py3-tomli (2.0.1-r1) (16/29) Installing py3-pytest (7.1.3-r1) (17/29) Installing py3-py-cpuinfo (8.0.0-r0) (18/29) Installing py3-setuptools (65.5.0-r0) (19/29) Installing py3-apipkg (2.1.0-r0) (20/29) Installing py3-execnet (1.9.0-r0) (21/29) Installing py3-pytest-forked (1.4.0-r1) (22/29) Installing py3-pytest-xdist (2.5.0-r1) (23/29) Installing py3-six (1.16.0-r3) (24/29) Installing py3-dateutil (2.8.2-r1) (25/29) Installing py3-freezegun (1.2.2-r0) (26/29) Installing py3-pygal (3.0.0-r1) (27/29) Installing py3-urllib3 (1.26.12-r0) (28/29) Installing py3-elasticsearch (7.11.0-r1) (29/29) Installing .makedepends-py3-pytest-benchmark (20221026.171219) Executing busybox-1.35.0-r27.trigger OK: 327 MiB in 121 packages >>> py3-pytest-benchmark: Cleaning up srcdir >>> py3-pytest-benchmark: Cleaning up pkgdir >>> py3-pytest-benchmark: Fetching https://distfiles.alpinelinux.org/distfiles/v3.17/pytest-benchmark-3.4.1.tar.gz >>> py3-pytest-benchmark: 
Fetching https://distfiles.alpinelinux.org/distfiles/v3.17/pytest-benchmark-3.4.1.tar.gz >>> py3-pytest-benchmark: Checking sha512sums... pytest-benchmark-3.4.1.tar.gz: OK python-3.10.patch: OK >>> py3-pytest-benchmark: Unpacking /var/cache/distfiles/v3.17/pytest-benchmark-3.4.1.tar.gz... >>> py3-pytest-benchmark: python-3.10.patch patching file tests/test_cli.py running build running build_py creating build creating build/lib creating build/lib/pytest_benchmark copying src/pytest_benchmark/cli.py -> build/lib/pytest_benchmark copying src/pytest_benchmark/utils.py -> build/lib/pytest_benchmark copying src/pytest_benchmark/session.py -> build/lib/pytest_benchmark copying src/pytest_benchmark/csv.py -> build/lib/pytest_benchmark copying src/pytest_benchmark/plugin.py -> build/lib/pytest_benchmark copying src/pytest_benchmark/stats.py -> build/lib/pytest_benchmark copying src/pytest_benchmark/__init__.py -> build/lib/pytest_benchmark copying src/pytest_benchmark/table.py -> build/lib/pytest_benchmark copying src/pytest_benchmark/__main__.py -> build/lib/pytest_benchmark copying src/pytest_benchmark/fixture.py -> build/lib/pytest_benchmark copying src/pytest_benchmark/logger.py -> build/lib/pytest_benchmark copying src/pytest_benchmark/pep418.py -> build/lib/pytest_benchmark copying src/pytest_benchmark/compat.py -> build/lib/pytest_benchmark copying src/pytest_benchmark/histogram.py -> build/lib/pytest_benchmark copying src/pytest_benchmark/hookspec.py -> build/lib/pytest_benchmark copying src/pytest_benchmark/timers.py -> build/lib/pytest_benchmark creating build/lib/pytest_benchmark/storage copying src/pytest_benchmark/storage/__init__.py -> build/lib/pytest_benchmark/storage copying src/pytest_benchmark/storage/elasticsearch.py -> build/lib/pytest_benchmark/storage copying src/pytest_benchmark/storage/file.py -> build/lib/pytest_benchmark/storage running egg_info creating src/pytest_benchmark.egg-info writing src/pytest_benchmark.egg-info/PKG-INFO writing 
dependency_links to src/pytest_benchmark.egg-info/dependency_links.txt writing entry points to src/pytest_benchmark.egg-info/entry_points.txt writing requirements to src/pytest_benchmark.egg-info/requires.txt writing top-level names to src/pytest_benchmark.egg-info/top_level.txt writing manifest file 'src/pytest_benchmark.egg-info/SOURCES.txt' reading manifest file 'src/pytest_benchmark.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no previously-included files matching '*.py[cod]' found anywhere in distribution warning: no previously-included files matching '__pycache__/*' found anywhere in distribution warning: no previously-included files matching '*.so' found anywhere in distribution warning: no previously-included files matching '*.dylib' found anywhere in distribution adding license file 'LICENSE' adding license file 'AUTHORS.rst' writing manifest file 'src/pytest_benchmark.egg-info/SOURCES.txt' running install /usr/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. 
warnings.warn( running build running build_py running egg_info writing src/pytest_benchmark.egg-info/PKG-INFO writing dependency_links to src/pytest_benchmark.egg-info/dependency_links.txt writing entry points to src/pytest_benchmark.egg-info/entry_points.txt writing requirements to src/pytest_benchmark.egg-info/requires.txt writing top-level names to src/pytest_benchmark.egg-info/top_level.txt reading manifest file 'src/pytest_benchmark.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no previously-included files matching '*.py[cod]' found anywhere in distribution warning: no previously-included files matching '__pycache__/*' found anywhere in distribution warning: no previously-included files matching '*.so' found anywhere in distribution warning: no previously-included files matching '*.dylib' found anywhere in distribution adding license file 'LICENSE' adding license file 'AUTHORS.rst' writing manifest file 'src/pytest_benchmark.egg-info/SOURCES.txt' running install_lib creating /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install creating /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr creating /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib creating /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10 creating /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages creating /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark copying build/lib/pytest_benchmark/cli.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark copying build/lib/pytest_benchmark/utils.py -> 
/home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark copying build/lib/pytest_benchmark/session.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark copying build/lib/pytest_benchmark/csv.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark creating /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/storage copying build/lib/pytest_benchmark/storage/__init__.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/storage copying build/lib/pytest_benchmark/storage/elasticsearch.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/storage copying build/lib/pytest_benchmark/storage/file.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/storage copying build/lib/pytest_benchmark/plugin.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark copying build/lib/pytest_benchmark/stats.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark copying build/lib/pytest_benchmark/__init__.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark copying build/lib/pytest_benchmark/table.py -> 
/home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark copying build/lib/pytest_benchmark/__main__.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark copying build/lib/pytest_benchmark/fixture.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark copying build/lib/pytest_benchmark/logger.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark copying build/lib/pytest_benchmark/pep418.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark copying build/lib/pytest_benchmark/compat.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark copying build/lib/pytest_benchmark/histogram.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark copying build/lib/pytest_benchmark/hookspec.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark copying build/lib/pytest_benchmark/timers.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/cli.py to cli.cpython-310.pyc byte-compiling 
/home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/utils.py to utils.cpython-310.pyc byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/session.py to session.cpython-310.pyc byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/csv.py to csv.cpython-310.pyc byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/storage/__init__.py to __init__.cpython-310.pyc byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/storage/elasticsearch.py to elasticsearch.cpython-310.pyc byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/storage/file.py to file.cpython-310.pyc byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/plugin.py to plugin.cpython-310.pyc byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/stats.py to stats.cpython-310.pyc byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/__init__.py to __init__.cpython-310.pyc byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/table.py to table.cpython-310.pyc byte-compiling 
/home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/__main__.py to __main__.cpython-310.pyc byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/fixture.py to fixture.cpython-310.pyc byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/logger.py to logger.cpython-310.pyc byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/pep418.py to pep418.cpython-310.pyc byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/compat.py to compat.cpython-310.pyc byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/histogram.py to histogram.cpython-310.pyc byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/hookspec.py to hookspec.cpython-310.pyc byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/timers.py to timers.cpython-310.pyc running install_egg_info Copying src/pytest_benchmark.egg-info to /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark-3.4.1-py3.10.egg-info running install_scripts Installing py.test-benchmark script to /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/bin Installing pytest-benchmark script to 
/home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/bin ============================= test session starts ============================== platform linux -- Python 3.10.8, pytest-7.1.3, pluggy-1.0.0 -- /usr/bin/python3 cachedir: .pytest_cache benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000) rootdir: /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1, configfile: setup.cfg, testpaths: tests plugins: benchmark-3.4.1, forked-1.4.0, xdist-2.5.0 collecting ... collected 232 items / 10 deselected / 222 selected tests/test_benchmark.py::test_help PASSED [ 0%] tests/test_benchmark.py::test_groups PASSED [ 0%] tests/test_benchmark.py::test_group_by_name PASSED [ 1%] tests/test_benchmark.py::test_group_by_func PASSED [ 1%] tests/test_benchmark.py::test_group_by_fullfunc PASSED [ 2%] tests/test_benchmark.py::test_group_by_param_all PASSED [ 2%] tests/test_benchmark.py::test_group_by_param_select PASSED [ 3%] tests/test_benchmark.py::test_group_by_param_select_multiple PASSED [ 3%] tests/test_benchmark.py::test_group_by_fullname PASSED [ 4%] tests/test_benchmark.py::test_double_use PASSED [ 4%] tests/test_benchmark.py::test_only_override_skip PASSED [ 4%] tests/test_benchmark.py::test_fixtures_also_skipped PASSED [ 5%] tests/test_benchmark.py::test_conflict_between_only_and_disable PASSED [ 5%] tests/test_benchmark.py::test_max_time_min_rounds PASSED [ 6%] tests/test_benchmark.py::test_max_time PASSED [ 6%] tests/test_benchmark.py::test_bogus_max_time PASSED [ 7%] tests/test_benchmark.py::test_pep418_timer PASSED [ 7%] tests/test_benchmark.py::test_bad_save PASSED [ 8%] tests/test_benchmark.py::test_bad_save_2 PASSED [ 8%] tests/test_benchmark.py::test_bad_compare_fail PASSED [ 9%] tests/test_benchmark.py::test_bad_rounds PASSED [ 9%] tests/test_benchmark.py::test_bad_rounds_2 
PASSED [ 9%] tests/test_benchmark.py::test_compare PASSED [ 10%] tests/test_benchmark.py::test_compare_last PASSED [ 10%] tests/test_benchmark.py::test_compare_non_existing PASSED [ 11%] tests/test_benchmark.py::test_compare_non_existing_verbose PASSED [ 11%] tests/test_benchmark.py::test_compare_no_files PASSED [ 12%] tests/test_benchmark.py::test_compare_no_files_verbose PASSED [ 12%] tests/test_benchmark.py::test_compare_no_files_match PASSED [ 13%] tests/test_benchmark.py::test_compare_no_files_match_verbose PASSED [ 13%] tests/test_benchmark.py::test_verbose PASSED [ 13%] tests/test_benchmark.py::test_save PASSED [ 14%] tests/test_benchmark.py::test_save_extra_info PASSED [ 14%] tests/test_benchmark.py::test_update_machine_info_hook_detection PASSED [ 15%] tests/test_benchmark.py::test_histogram PASSED [ 15%] tests/test_benchmark.py::test_autosave PASSED [ 16%] tests/test_benchmark.py::test_bogus_min_time PASSED [ 16%] tests/test_benchmark.py::test_disable_gc PASSED [ 17%] tests/test_benchmark.py::test_custom_timer PASSED [ 17%] tests/test_benchmark.py::test_bogus_timer PASSED [ 18%] tests/test_benchmark.py::test_sort_by_mean PASSED [ 18%] tests/test_benchmark.py::test_bogus_sort PASSED [ 18%] tests/test_benchmark.py::test_xdist PASSED [ 19%] tests/test_benchmark.py::test_xdist_verbose PASSED [ 19%] tests/test_benchmark.py::test_cprofile PASSED [ 20%] tests/test_benchmark.py::test_disabled_and_cprofile PASSED [ 20%] tests/test_benchmark.py::test_abort_broken PASSED [ 21%] tests/test_benchmark.py::test_basic FAILED [ 21%] tests/test_benchmark.py::test_skip FAILED [ 22%] tests/test_benchmark.py::test_disable FAILED [ 22%] tests/test_benchmark.py::test_mark_selection PASSED [ 22%] tests/test_benchmark.py::test_only_benchmarks FAILED [ 23%] tests/test_benchmark.py::test_columns PASSED [ 23%] tests/test_calibration.py::test_calibrate PASSED [ 24%] tests/test_calibration.py::test_calibrate_fast PASSED [ 24%] tests/test_calibration.py::test_calibrate_xfast PASSED [ 
25%] tests/test_calibration.py::test_calibrate_slow PASSED [ 25%] tests/test_calibration.py::test_calibrate_stuck[True-0-1] PASSED [ 26%] tests/test_calibration.py::test_calibrate_stuck[True-0-0.01] PASSED [ 26%] tests/test_calibration.py::test_calibrate_stuck[True-0-1e-09] PASSED [ 27%] tests/test_calibration.py::test_calibrate_stuck[True-0-1e-10] PASSED [ 27%] tests/test_calibration.py::test_calibrate_stuck[True-0-1.000000000000001] PASSED [ 27%] tests/test_calibration.py::test_calibrate_stuck[True-1-1] PASSED [ 28%] tests/test_calibration.py::test_calibrate_stuck[True-1-0.01] PASSED [ 28%] tests/test_calibration.py::test_calibrate_stuck[True-1-1e-09] PASSED [ 29%] tests/test_calibration.py::test_calibrate_stuck[True-1-1e-10] PASSED [ 29%] tests/test_calibration.py::test_calibrate_stuck[True-1-1.000000000000001] PASSED [ 30%] tests/test_calibration.py::test_calibrate_stuck[True--1-1] PASSED [ 30%] tests/test_calibration.py::test_calibrate_stuck[True--1-0.01] PASSED [ 31%] tests/test_calibration.py::test_calibrate_stuck[True--1-1e-09] PASSED [ 31%] tests/test_calibration.py::test_calibrate_stuck[True--1-1e-10] PASSED [ 31%] tests/test_calibration.py::test_calibrate_stuck[True--1-1.000000000000001] PASSED [ 32%] tests/test_calibration.py::test_calibrate_stuck[False-0-1] PASSED [ 32%] tests/test_calibration.py::test_calibrate_stuck[False-0-0.01] PASSED [ 33%] tests/test_calibration.py::test_calibrate_stuck[False-0-1e-09] PASSED [ 33%] tests/test_calibration.py::test_calibrate_stuck[False-0-1e-10] PASSED [ 34%] tests/test_calibration.py::test_calibrate_stuck[False-0-1.000000000000001] PASSED [ 34%] tests/test_calibration.py::test_calibrate_stuck[False-1-1] PASSED [ 35%] tests/test_calibration.py::test_calibrate_stuck[False-1-0.01] PASSED [ 35%] tests/test_calibration.py::test_calibrate_stuck[False-1-1e-09] PASSED [ 36%] tests/test_calibration.py::test_calibrate_stuck[False-1-1e-10] PASSED [ 36%] 
tests/test_calibration.py::test_calibrate_stuck[False-1-1.000000000000001] PASSED [ 36%] tests/test_calibration.py::test_calibrate_stuck[False--1-1] PASSED [ 37%] tests/test_calibration.py::test_calibrate_stuck[False--1-0.01] PASSED [ 37%] tests/test_calibration.py::test_calibrate_stuck[False--1-1e-09] PASSED [ 38%] tests/test_calibration.py::test_calibrate_stuck[False--1-1e-10] PASSED [ 38%] tests/test_calibration.py::test_calibrate_stuck[False--1-1.000000000000001] PASSED [ 39%] tests/test_cli.py::test_list PASSED [ 39%] tests/test_cli.py::test_compare[short-] PASSED [ 40%] tests/test_cli.py::test_compare[long-] PASSED [ 40%] tests/test_cli.py::test_compare[normal-] PASSED [ 40%] tests/test_cli.py::test_compare[trial-] PASSED [ 41%] tests/test_doctest.rst::test_doctest.rst PASSED [ 41%] tests/test_elasticsearch_storage.py::test_handle_saving PASSED [ 42%] tests/test_elasticsearch_storage.py::test_parse_with_no_creds PASSED [ 42%] tests/test_elasticsearch_storage.py::test_parse_with_creds_in_first_host_of_url PASSED [ 43%] tests/test_elasticsearch_storage.py::test_parse_with_creds_in_second_host_of_url PASSED [ 43%] tests/test_elasticsearch_storage.py::test_parse_with_creds_in_netrc PASSED [ 44%] tests/test_elasticsearch_storage.py::test_parse_url_creds_supersedes_netrc_creds PASSED [ 44%] tests/test_elasticsearch_storage.py::test__mask_hosts PASSED [ 45%] tests/test_normal.py::test_normal PASSED [ 45%] tests/test_normal.py::test_fast PASSED [ 45%] tests/test_normal.py::test_slow PASSED [ 46%] tests/test_normal.py::test_slower PASSED [ 46%] tests/test_normal.py::test_xfast PASSED [ 47%] tests/test_normal.py::test_parametrized[0] PASSED [ 47%] tests/test_normal.py::test_parametrized[1] PASSED [ 48%] tests/test_normal.py::test_parametrized[2] PASSED [ 48%] tests/test_normal.py::test_parametrized[3] PASSED [ 49%] tests/test_normal.py::test_parametrized[4] PASSED [ 49%] tests/test_pedantic.py::test_single PASSED [ 50%] tests/test_pedantic.py::test_setup PASSED [ 50%] 
tests/test_pedantic.py::test_setup_cprofile PASSED [ 50%] tests/test_pedantic.py::test_args_kwargs PASSED [ 51%] tests/test_pedantic.py::test_iterations PASSED [ 51%] tests/test_pedantic.py::test_rounds_iterations PASSED [ 52%] tests/test_pedantic.py::test_rounds PASSED [ 52%] tests/test_pedantic.py::test_warmup_rounds PASSED [ 53%] tests/test_pedantic.py::test_rounds_must_be_int[0] PASSED [ 53%] tests/test_pedantic.py::test_rounds_must_be_int[x] PASSED [ 54%] tests/test_pedantic.py::test_warmup_rounds_must_be_int[-15] PASSED [ 54%] tests/test_pedantic.py::test_warmup_rounds_must_be_int[x] PASSED [ 54%] tests/test_pedantic.py::test_setup_many_rounds PASSED [ 55%] tests/test_pedantic.py::test_cant_use_both_args_and_setup_with_return PASSED [ 55%] tests/test_pedantic.py::test_can_use_both_args_and_setup_without_return PASSED [ 56%] tests/test_pedantic.py::test_cant_use_setup_with_many_iterations PASSED [ 56%] tests/test_pedantic.py::test_iterations_must_be_positive_int[0] PASSED [ 57%] tests/test_pedantic.py::test_iterations_must_be_positive_int[-1] PASSED [ 57%] tests/test_pedantic.py::test_iterations_must_be_positive_int[asdf] PASSED [ 58%] tests/test_sample.py::test_proto[SimpleProxy] PASSED [ 58%] tests/test_sample.py::test_proto[CachedPropertyProxy] PASSED [ 59%] tests/test_sample.py::test_proto[LocalsSimpleProxy] PASSED [ 59%] tests/test_sample.py::test_proto[LocalsCachedPropertyProxy] PASSED [ 59%] tests/test_skip.py::test_skip SKIPPED (bla) [ 60%] tests/test_stats.py::test_1 PASSED [ 60%] tests/test_stats.py::test_2 PASSED [ 61%] tests/test_stats.py::test_single_item PASSED [ 61%] tests/test_stats.py::test_length[1] PASSED [ 62%] tests/test_stats.py::test_length[2] PASSED [ 62%] tests/test_stats.py::test_length[3] PASSED [ 63%] tests/test_stats.py::test_length[4] PASSED [ 63%] tests/test_stats.py::test_length[5] PASSED [ 63%] tests/test_stats.py::test_length[6] PASSED [ 64%] tests/test_stats.py::test_length[7] PASSED [ 64%] tests/test_stats.py::test_length[8] 
PASSED [ 65%] tests/test_stats.py::test_length[9] PASSED [ 65%] tests/test_stats.py::test_iqr PASSED [ 66%] tests/test_stats.py::test_ops PASSED [ 66%] tests/test_storage.py::test_rendering[short] PASSED [ 67%] tests/test_storage.py::test_rendering[normal] PASSED [ 67%] tests/test_storage.py::test_rendering[long] PASSED [ 68%] tests/test_storage.py::test_rendering[trial] PASSED [ 68%] tests/test_storage.py::test_regression_checks[short] PASSED [ 68%] tests/test_storage.py::test_regression_checks[normal] PASSED [ 69%] tests/test_storage.py::test_regression_checks[long] PASSED [ 69%] tests/test_storage.py::test_regression_checks[trial] PASSED [ 70%] tests/test_storage.py::test_regression_checks_inf[short] PASSED [ 70%] tests/test_storage.py::test_regression_checks_inf[normal] PASSED [ 71%] tests/test_storage.py::test_regression_checks_inf[long] PASSED [ 71%] tests/test_storage.py::test_regression_checks_inf[trial] PASSED [ 72%] tests/test_storage.py::test_compare_1[short] PASSED [ 72%] tests/test_storage.py::test_compare_1[normal] PASSED [ 72%] tests/test_storage.py::test_compare_1[long] PASSED [ 73%] tests/test_storage.py::test_compare_1[trial] PASSED [ 73%] tests/test_storage.py::test_compare_2[short] PASSED [ 74%] tests/test_storage.py::test_compare_2[normal] PASSED [ 74%] tests/test_storage.py::test_compare_2[long] PASSED [ 75%] tests/test_storage.py::test_compare_2[trial] PASSED [ 75%] tests/test_storage.py::test_save_json[short] PASSED [ 76%] tests/test_storage.py::test_save_json[normal] PASSED [ 76%] tests/test_storage.py::test_save_json[long] PASSED [ 77%] tests/test_storage.py::test_save_json[trial] PASSED [ 77%] tests/test_storage.py::test_save_with_name[short] PASSED [ 77%] tests/test_storage.py::test_save_with_name[normal] PASSED [ 78%] tests/test_storage.py::test_save_with_name[long] PASSED [ 78%] tests/test_storage.py::test_save_with_name[trial] PASSED [ 79%] tests/test_storage.py::test_save_no_name[short] PASSED [ 79%] 
tests/test_storage.py::test_save_no_name[normal] PASSED [ 80%] tests/test_storage.py::test_save_no_name[long] PASSED [ 80%] tests/test_storage.py::test_save_no_name[trial] PASSED [ 81%] tests/test_storage.py::test_save_with_error[short] PASSED [ 81%] tests/test_storage.py::test_save_with_error[normal] PASSED [ 81%] tests/test_storage.py::test_save_with_error[long] PASSED [ 82%] tests/test_storage.py::test_save_with_error[trial] PASSED [ 82%] tests/test_storage.py::test_autosave[short] PASSED [ 83%] tests/test_storage.py::test_autosave[normal] PASSED [ 83%] tests/test_storage.py::test_autosave[long] PASSED [ 84%] tests/test_storage.py::test_autosave[trial] PASSED [ 84%] tests/test_utils.py::test_clonefunc[] PASSED [ 85%] tests/test_utils.py::test_clonefunc[f2] PASSED [ 85%] tests/test_utils.py::test_clonefunc_not_function PASSED [ 86%] tests/test_utils.py::test_get_commit_info[git-True] PASSED [ 86%] tests/test_utils.py::test_get_commit_info[git-False] PASSED [ 86%] tests/test_utils.py::test_get_commit_info[hg-True] SKIPPED (%r not a...) [ 87%] tests/test_utils.py::test_get_commit_info[hg-False] SKIPPED (%r not ...) [ 87%] tests/test_utils.py::test_missing_scm_bins[git-True] PASSED [ 88%] tests/test_utils.py::test_missing_scm_bins[git-False] PASSED [ 88%] tests/test_utils.py::test_missing_scm_bins[hg-True] SKIPPED (%r not ...) [ 89%] tests/test_utils.py::test_missing_scm_bins[hg-False] SKIPPED (%r not...) [ 89%] tests/test_utils.py::test_get_branch_info[git] PASSED [ 90%] tests/test_utils.py::test_get_branch_info[hg] SKIPPED (%r not availa...) 
[ 90%]
tests/test_utils.py::test_no_branch_info PASSED [ 90%]
tests/test_utils.py::test_commit_info_error PASSED [ 91%]
tests/test_utils.py::test_parse_warmup PASSED [ 91%]
tests/test_utils.py::test_parse_columns PASSED [ 92%]
tests/test_utils.py::test_get_project_name[False-None] PASSED [ 92%]
tests/test_utils.py::test_get_project_name[False-git] PASSED [ 93%]
tests/test_utils.py::test_get_project_name[False-hg] SKIPPED (%r not...) [ 93%]
tests/test_utils.py::test_get_project_name[https://example.com/pytest_benchmark_repo-None] PASSED [ 94%]
tests/test_utils.py::test_get_project_name[https://example.com/pytest_benchmark_repo-git] PASSED [ 94%]
tests/test_utils.py::test_get_project_name[https://example.com/pytest_benchmark_repo-hg] SKIPPED [ 95%]
tests/test_utils.py::test_get_project_name[https://example.com/pytest_benchmark_repo.git-None] PASSED [ 95%]
tests/test_utils.py::test_get_project_name[https://example.com/pytest_benchmark_repo.git-git] PASSED [ 95%]
tests/test_utils.py::test_get_project_name[https://example.com/pytest_benchmark_repo.git-hg] SKIPPED [ 96%]
tests/test_utils.py::test_get_project_name[c:\\foo\\bar\\pytest_benchmark_repo.gitfoo@example.com:pytest_benchmark_repo.git-None] PASSED [ 96%]
tests/test_utils.py::test_get_project_name[c:\\foo\\bar\\pytest_benchmark_repo.gitfoo@example.com:pytest_benchmark_repo.git-git] PASSED [ 97%]
tests/test_utils.py::test_get_project_name[c:\\foo\\bar\\pytest_benchmark_repo.gitfoo@example.com:pytest_benchmark_repo.git-hg] SKIPPED [ 97%]
tests/test_utils.py::test_get_project_name_broken[git] PASSED [ 98%]
tests/test_utils.py::test_get_project_name_broken[hg] PASSED [ 98%]
tests/test_utils.py::test_get_project_name_fallback PASSED [ 99%]
tests/test_utils.py::test_get_project_name_fallback_broken_hgrc PASSED [ 99%]
tests/test_with_testcase.py::TerribleTerribleWayToWriteTests::test_foo PASSED [100%]
=================================== FAILURES ===================================
__________________________________ test_basic __________________________________
/home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/tests/test_benchmark.py:1028: in test_basic
result.stdout.fnmatch_lines([
E Failed: nomatch: '*collected 5 items'
E and: '============================= test session starts =============================='
E and: 'platform linux -- Python 3.10.8, pytest-7.1.3, pluggy-1.0.0 -- /usr/bin/python3'
E and: 'cachedir: .pytest_cache'
E and: 'benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)'
E and: 'rootdir: /tmp/pytest-of-buildozer/pytest-220/test_basic0'
E and: 'plugins: benchmark-3.4.1, forked-1.4.0, xdist-2.5.0'
E fnmatch: '*collected 5 items'
E with: 'collecting ... collected 5 items'
E nomatch: 'test_basic.py::*test_basic PASSED*'
E and: ''
E fnmatch: 'test_basic.py::*test_basic PASSED*'
E with: 'test_basic.py::test_basic PASSED [ 20%]'
E nomatch: 'test_basic.py::test_slow PASSED*'
E and: 'test_basic.py::test_fast PASSED [ 40%]'
E fnmatch: 'test_basic.py::test_slow PASSED*'
E with: 'test_basic.py::test_slow PASSED [ 60%]'
E fnmatch: 'test_basic.py::test_slower PASSED*'
E with: 'test_basic.py::test_slower PASSED [ 80%]'
E fnmatch: 'test_basic.py::test_xfast PASSED*'
E with: 'test_basic.py::test_xfast PASSED [100%]'
E nomatch: 'test_basic.py::test_fast PASSED*'
E and: ''
E and: ''
E and: '---------------------------------------------------------------------------------------------------------- benchmark: 4 tests ----------------------------------------------------------------------------------------------------------'
E and: 'Name (time in ns) Min Max Mean StdDev Median IQR Outliers OPS Rounds Iterations'
E and: '----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------'
E and: 'test_fast 122.1150 (1.0) 3,438.7782 (1.0) 129.5483 (1.0) 34.1428 (1.0) 128.7088 (1.0) 1.8254 (1.0) 299;1579 7,719,129.9401 (1.0) 74962 100'
E and: 'test_xfast 450.7601 (3.69) 25,507.0627 (7.42) 577.5402 (4.46) 211.4634 (6.19) 547.6177 (4.25) 96.8575 (53.06) 9859;11035 1,731,481.3138 (0.22) 119199 1'
E and: 'test_slow 1,017,555.5944 (>1000.0) 1,339,904.9640 (389.65) 1,085,674.2579 (>1000.0) 25,861.0357 (757.44) 1,070,041.2095 (>1000.0) 46,133.9951 (>1000.0) 208;3 921.0866 (0.00) 887 1'
E and: 'test_slower 10,088,264.9422 (>1000.0) 10,143,458.8432 (>1000.0) 10,119,828.3861 (>1000.0) 6,550.5972 (191.86) 10,119,911.2833 (>1000.0) 7,209.3681 (>1000.0) 18;4 98.8159 (0.00) 99 1'
E and: '----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------'
E and: ''
E and: 'Legend:'
E and: ' Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.'
E and: ' OPS: Operations Per Second, computed as 1 / Mean'
E and: '============================== 5 passed in 5.54s ==============================='
E remains unmatched: 'test_basic.py::test_fast PASSED*'
----------------------------- Captured stdout call -----------------------------
running: /usr/bin/python3 -mpytest --basetemp=/tmp/pytest-of-buildozer/pytest-220/test_basic0/runpytest-0 -vv --doctest-modules /tmp/pytest-of-buildozer/pytest-220/test_basic0/test_basic.py
in: /tmp/pytest-of-buildozer/pytest-220/test_basic0
============================= test session starts ==============================
platform linux -- Python 3.10.8, pytest-7.1.3, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /tmp/pytest-of-buildozer/pytest-220/test_basic0
plugins: benchmark-3.4.1, forked-1.4.0, xdist-2.5.0
collecting ... collected 5 items
test_basic.py::test_basic PASSED [ 20%]
test_basic.py::test_fast PASSED [ 40%]
test_basic.py::test_slow PASSED [ 60%]
test_basic.py::test_slower PASSED [ 80%]
test_basic.py::test_xfast PASSED [100%]
---------------------------------------------------------------------------------------------------------- benchmark: 4 tests ----------------------------------------------------------------------------------------------------------
Name (time in ns) Min Max Mean StdDev Median IQR Outliers OPS Rounds Iterations
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_fast 122.1150 (1.0) 3,438.7782 (1.0) 129.5483 (1.0) 34.1428 (1.0) 128.7088 (1.0) 1.8254 (1.0) 299;1579 7,719,129.9401 (1.0) 74962 100
test_xfast 450.7601 (3.69) 25,507.0627 (7.42) 577.5402 (4.46) 211.4634 (6.19) 547.6177 (4.25) 96.8575 (53.06) 9859;11035 1,731,481.3138 (0.22) 119199 1
test_slow 1,017,555.5944 (>1000.0) 1,339,904.9640 (389.65) 1,085,674.2579 (>1000.0) 25,861.0357 (757.44) 1,070,041.2095 (>1000.0) 46,133.9951 (>1000.0) 208;3 921.0866 (0.00) 887 1
test_slower 10,088,264.9422 (>1000.0) 10,143,458.8432 (>1000.0) 10,119,828.3861 (>1000.0) 6,550.5972 (191.86) 10,119,911.2833 (>1000.0) 7,209.3681 (>1000.0) 18;4 98.8159 (0.00) 99 1
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Legend:
  Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.
  OPS: Operations Per Second, computed as 1 / Mean
============================== 5 passed in 5.54s ===============================
__________________________________ test_skip ___________________________________
/home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/tests/test_benchmark.py:1052: in test_skip
result.stdout.fnmatch_lines([
E Failed: nomatch: '*collected 5 items'
E and: '============================= test session starts =============================='
E and: 'platform linux -- Python 3.10.8, pytest-7.1.3, pluggy-1.0.0 -- /usr/bin/python3'
E and: 'cachedir: .pytest_cache'
E and: 'benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)'
E and: 'rootdir: /tmp/pytest-of-buildozer/pytest-220/test_skip0'
E and: 'plugins: benchmark-3.4.1, forked-1.4.0, xdist-2.5.0'
E fnmatch: '*collected 5 items'
E with: 'collecting ... collected 5 items'
E nomatch: 'test_skip.py::*test_skip PASSED*'
E and: ''
E fnmatch: 'test_skip.py::*test_skip PASSED*'
E with: 'test_skip.py::test_skip PASSED [ 20%]'
E nomatch: 'test_skip.py::test_slow SKIPPED*'
E and: 'test_skip.py::test_fast SKIPPED (Skipping benchmark (--benchmark-skip active).) [ 40%]'
E fnmatch: 'test_skip.py::test_slow SKIPPED*'
E with: 'test_skip.py::test_slow SKIPPED (Skipping benchmark (--benchmark-skip active).) [ 60%]'
E fnmatch: 'test_skip.py::test_slower SKIPPED*'
E with: 'test_skip.py::test_slower SKIPPED (Skipping benchmark (--benchmark-skip active).) [ 80%]'
E fnmatch: 'test_skip.py::test_xfast SKIPPED*'
E with: 'test_skip.py::test_xfast SKIPPED (Skipping benchmark (--benchmark-skip active).) [100%]'
E nomatch: 'test_skip.py::test_fast SKIPPED*'
E and: ''
E and: '========================= 1 passed, 4 skipped in 0.05s ========================='
E remains unmatched: 'test_skip.py::test_fast SKIPPED*'
----------------------------- Captured stdout call -----------------------------
running: /usr/bin/python3 -mpytest --basetemp=/tmp/pytest-of-buildozer/pytest-220/test_skip0/runpytest-0 -vv --doctest-modules --benchmark-skip /tmp/pytest-of-buildozer/pytest-220/test_skip0/test_skip.py
in: /tmp/pytest-of-buildozer/pytest-220/test_skip0
============================= test session starts ==============================
platform linux -- Python 3.10.8, pytest-7.1.3, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /tmp/pytest-of-buildozer/pytest-220/test_skip0
plugins: benchmark-3.4.1, forked-1.4.0, xdist-2.5.0
collecting ... collected 5 items
test_skip.py::test_skip PASSED [ 20%]
test_skip.py::test_fast SKIPPED (Skipping benchmark (--benchmark-skip active).) [ 40%]
test_skip.py::test_slow SKIPPED (Skipping benchmark (--benchmark-skip active).) [ 60%]
test_skip.py::test_slower SKIPPED (Skipping benchmark (--benchmark-skip active).) [ 80%]
test_skip.py::test_xfast SKIPPED (Skipping benchmark (--benchmark-skip active).) [100%]
========================= 1 passed, 4 skipped in 0.05s =========================
_________________________________ test_disable _________________________________
/home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/tests/test_benchmark.py:1066: in test_disable
result.stdout.fnmatch_lines([
E Failed: nomatch: '*collected 5 items'
E and: '============================= test session starts =============================='
E and: 'platform linux -- Python 3.10.8, pytest-7.1.3, pluggy-1.0.0 -- /usr/bin/python3'
E and: 'cachedir: .pytest_cache'
E and: 'benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)'
E and: 'rootdir: /tmp/pytest-of-buildozer/pytest-220/test_disable0'
E and: 'plugins: benchmark-3.4.1, forked-1.4.0, xdist-2.5.0'
E fnmatch: '*collected 5 items'
E with: 'collecting ... collected 5 items'
E nomatch: 'test_disable.py::*test_disable PASSED*'
E and: ''
E fnmatch: 'test_disable.py::*test_disable PASSED*'
E with: 'test_disable.py::test_disable PASSED [ 20%]'
E nomatch: 'test_disable.py::test_slow PASSED*'
E and: 'test_disable.py::test_fast PASSED [ 40%]'
E fnmatch: 'test_disable.py::test_slow PASSED*'
E with: 'test_disable.py::test_slow PASSED [ 60%]'
E fnmatch: 'test_disable.py::test_slower PASSED*'
E with: 'test_disable.py::test_slower PASSED [ 80%]'
E fnmatch: 'test_disable.py::test_xfast PASSED*'
E with: 'test_disable.py::test_xfast PASSED [100%]'
E nomatch: 'test_disable.py::test_fast PASSED*'
E and: ''
E and: '============================== 5 passed in 0.07s ==============================='
E remains unmatched: 'test_disable.py::test_fast PASSED*'
----------------------------- Captured stdout call -----------------------------
running: /usr/bin/python3 -mpytest --basetemp=/tmp/pytest-of-buildozer/pytest-220/test_disable0/runpytest-0 -vv --doctest-modules --benchmark-disable /tmp/pytest-of-buildozer/pytest-220/test_disable0/test_disable.py
in: /tmp/pytest-of-buildozer/pytest-220/test_disable0
============================= test session starts ==============================
platform linux -- Python 3.10.8, pytest-7.1.3, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /tmp/pytest-of-buildozer/pytest-220/test_disable0
plugins: benchmark-3.4.1, forked-1.4.0, xdist-2.5.0
collecting ... collected 5 items
test_disable.py::test_disable PASSED [ 20%]
test_disable.py::test_fast PASSED [ 40%]
test_disable.py::test_slow PASSED [ 60%]
test_disable.py::test_slower PASSED [ 80%]
test_disable.py::test_xfast PASSED [100%]
============================== 5 passed in 0.07s ===============================
_____________________________ test_only_benchmarks _____________________________
/home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/tests/test_benchmark.py:1095: in test_only_benchmarks
result.stdout.fnmatch_lines([
E Failed: nomatch: '*collected 5 items'
E and: '============================= test session starts =============================='
E and: 'platform linux -- Python 3.10.8, pytest-7.1.3, pluggy-1.0.0 -- /usr/bin/python3'
E and: 'cachedir: .pytest_cache'
E and: 'benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)'
E and: 'rootdir: /tmp/pytest-of-buildozer/pytest-220/test_only_benchmarks0'
E and: 'plugins: benchmark-3.4.1, forked-1.4.0, xdist-2.5.0'
E fnmatch: '*collected 5 items'
E with: 'collecting ... collected 5 items'
E nomatch: 'test_only_benchmarks.py::*test_only_benchmarks SKIPPED*'
E and: ''
E fnmatch: 'test_only_benchmarks.py::*test_only_benchmarks SKIPPED*'
E with: 'test_only_benchmarks.py::test_only_benchmarks SKIPPED (Skipping non-benchmark (--benchmark-only active).) [ 20%]'
E nomatch: 'test_only_benchmarks.py::test_slow PASSED*'
E and: 'test_only_benchmarks.py::test_fast PASSED [ 40%]'
E fnmatch: 'test_only_benchmarks.py::test_slow PASSED*'
E with: 'test_only_benchmarks.py::test_slow PASSED [ 60%]'
E fnmatch: 'test_only_benchmarks.py::test_slower PASSED*'
E with: 'test_only_benchmarks.py::test_slower PASSED [ 80%]'
E fnmatch: 'test_only_benchmarks.py::test_xfast PASSED*'
E with: 'test_only_benchmarks.py::test_xfast PASSED [100%]'
E nomatch: 'test_only_benchmarks.py::test_fast PASSED*'
E and: ''
E and: ''
E and: '--------------------------------------------------------------------------------------------------------- benchmark: 4 tests ---------------------------------------------------------------------------------------------------------'
E and: 'Name (time in ns) Min Max Mean StdDev Median IQR Outliers OPS Rounds Iterations'
E and: '--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------'
E and: 'test_fast 128.5970 (1.0) 3,493.2420 (1.0) 137.4903 (1.0) 34.3568 (1.0) 136.6809 (1.0) 1.8254 (1.0) 299;1929 7,273,241.1725 (1.0) 73868 100'
E and: 'test_xfast 447.0348 (3.48) 24,095.1777 (6.90) 554.0788 (4.03) 192.8061 (5.61) 510.3648 (3.73) 55.8794 (30.61) 11782;14831 1,804,797.3734 (0.25) 125087 1'
E and: 'test_slow 1,037,053.7639 (>1000.0) 1,487,102.3595 (425.71) 1,086,556.2547 (>1000.0) 30,315.3778 (882.37) 1,083,046.1979 (>1000.0) 47,444.3659 (>1000.0) 47;3 920.3389 (0.00) 903 1'
E and: 'test_slower 10,097,272.6941 (>1000.0) 10,144,177.8243 (>1000.0) 10,120,098.3004 (>1000.0) 5,873.4289 (170.95) 10,119,426.9955 (>1000.0) 6,725.0803 (>1000.0) 20;3 98.8133 (0.00) 99 1'
E and: '--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------'
E and: ''
E and: 'Legend:'
E and: ' Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.'
E and: ' OPS: Operations Per Second, computed as 1 / Mean'
E and: '========================= 4 passed, 1 skipped in 5.51s ========================='
E remains unmatched: 'test_only_benchmarks.py::test_fast PASSED*'
----------------------------- Captured stdout call -----------------------------
running: /usr/bin/python3 -mpytest --basetemp=/tmp/pytest-of-buildozer/pytest-220/test_only_benchmarks0/runpytest-0 -vv --doctest-modules --benchmark-only /tmp/pytest-of-buildozer/pytest-220/test_only_benchmarks0/test_only_benchmarks.py
in: /tmp/pytest-of-buildozer/pytest-220/test_only_benchmarks0
============================= test session starts ==============================
platform linux -- Python 3.10.8, pytest-7.1.3, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /tmp/pytest-of-buildozer/pytest-220/test_only_benchmarks0
plugins: benchmark-3.4.1, forked-1.4.0, xdist-2.5.0
collecting ... collected 5 items
test_only_benchmarks.py::test_only_benchmarks SKIPPED (Skipping non-benchmark (--benchmark-only active).) [ 20%]
test_only_benchmarks.py::test_fast PASSED [ 40%]
test_only_benchmarks.py::test_slow PASSED [ 60%]
test_only_benchmarks.py::test_slower PASSED [ 80%]
test_only_benchmarks.py::test_xfast PASSED [100%]
--------------------------------------------------------------------------------------------------------- benchmark: 4 tests ---------------------------------------------------------------------------------------------------------
Name (time in ns) Min Max Mean StdDev Median IQR Outliers OPS Rounds Iterations
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_fast 128.5970 (1.0) 3,493.2420 (1.0) 137.4903 (1.0) 34.3568 (1.0) 136.6809 (1.0) 1.8254 (1.0) 299;1929 7,273,241.1725 (1.0) 73868 100
test_xfast 447.0348 (3.48) 24,095.1777 (6.90) 554.0788 (4.03) 192.8061 (5.61) 510.3648 (3.73) 55.8794 (30.61) 11782;14831 1,804,797.3734 (0.25) 125087 1
test_slow 1,037,053.7639 (>1000.0) 1,487,102.3595 (425.71) 1,086,556.2547 (>1000.0) 30,315.3778 (882.37) 1,083,046.1979 (>1000.0) 47,444.3659 (>1000.0) 47;3 920.3389 (0.00) 903 1
test_slower 10,097,272.6941 (>1000.0) 10,144,177.8243 (>1000.0) 10,120,098.3004 (>1000.0) 5,873.4289 (170.95) 10,119,426.9955 (>1000.0) 6,725.0803 (>1000.0) 20;3 98.8133 (0.00) 99 1
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Legend:
  Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.
  OPS: Operations Per Second, computed as 1 / Mean
========================= 4 passed, 1 skipped in 5.51s =========================
=============================== warnings summary ===============================
../../../../../../../usr/lib/python3.10/site-packages/_pytest/config/__init__.py:1199
  /usr/lib/python3.10/site-packages/_pytest/config/__init__.py:1199: PytestRemovedIn8Warning: The --strict option is deprecated, use --strict-markers instead.
    self.issue_config_time_warning(
tests/test_utils.py:35
  /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/tests/test_utils.py:35: PytestDeprecationWarning: @pytest.yield_fixture is deprecated. Use @pytest.fixture instead; they are the same.
    @pytest.yield_fixture(params=(True, False))
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
------------------------------------------------------------------------------------------------------------------------------------ benchmark: 58 tests ------------------------------------------------------------------------------------------------------------------------------------
Name (time in ns) Min Max Mean StdDev Median IQR Outliers OPS Rounds Iterations
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_calibrate_stuck[False--1-1e-10] 5.0500 (1.0) 5.0500 (1.0) 5.0500 (1.0) 0.0000 (1.0) 5.0500 (1.0) 0.0000 (1.0) 0;0 198,019,801.9802 (1.0) 1 2
test_calibrate_stuck[False-0-1e-10] 5.0500 (1.0) 5.0500 (1.0) 5.0500 (1.0) 0.0000 (1.0) 5.0500 (1.0) 0.0000 (1.0) 0;0 198,019,801.9802 (1.0) 1 2
test_calibrate_stuck[False-1-1e-10] 5.0500 (1.0) 5.0500 (1.0) 5.0500 (1.0) 0.0000 (1.0) 5.0500 (1.0) 0.0000 (1.0) 0;0 198,019,801.9802 (1.0) 1 2
test_calibrate_stuck[True--1-1e-10] 10.0000 (1.98) 10.0000 (1.98) 10.0000 (1.98) 0.0000 (1.0) 10.0000 (1.98) 0.0000 (1.0) 0;0 100,000,000.0001 (0.51) 1 1
test_calibrate_stuck[True-0-1e-10] 10.0000 (1.98) 10.0000 (1.98) 10.0000 (1.98) 0.0000 (1.0) 10.0000 (1.98) 0.0000 (1.0) 0;0 100,000,000.0001 (0.51) 1 1
test_calibrate_stuck[True-1-1e-10] 10.0000 (1.98) 10.0000 (1.98) 10.0000 (1.98) 0.0000 (1.0) 10.0000 (1.98) 0.0000 (1.0) 0;0 100,000,000.0001 (0.51) 1 1
test_calibrate_stuck[False--1-1e-09] 50.5000 (10.00) 50.5000 (10.00) 50.5000 (10.00) 0.0000 (1.0) 50.5000 (10.00) 0.0000 (1.0) 0;0 19,801,980.1981 (0.10) 1 2
test_calibrate_stuck[False-0-1e-09] 50.5000 (10.00) 50.5000 (10.00) 50.5000 (10.00) 0.0000 (1.0) 50.5000 (10.00) 0.0000 (1.0) 0;0 19,801,980.1981 (0.10) 1 2
test_calibrate_stuck[False-1-1e-09] 50.5000 (10.00) 50.5000 (10.00) 50.5000 (10.00) 0.0000 (1.0) 50.5000 (10.00) 0.0000 (1.0) 0;0 19,801,980.1981 (0.10) 1 2
test_calibrate_stuck[True--1-1e-09] 100.0000 (19.80) 100.0000 (19.80) 100.0000 (19.80) 0.0000 (1.0) 100.0000 (19.80) 0.0000 (1.0) 0;0 10,000,000.0000 (0.05) 1 1
test_calibrate_stuck[True-0-1e-09] 100.0000 (19.80) 100.0000 (19.80) 100.0000 (19.80) 0.0000 (1.0) 100.0000 (19.80) 0.0000 (1.0) 0;0 10,000,000.0000 (0.05) 1 1
test_calibrate_stuck[True-1-1e-09] 100.0000 (19.80) 100.0000 (19.80) 100.0000 (19.80) 0.0000 (1.0) 100.0000 (19.80) 0.0000 (1.0) 0;0 10,000,000.0000 (0.05) 1 1
test_xfast 122.6153 (24.28) 688.0079 (136.24) 133.0991 (26.36) 20.4515 (inf) 129.4272 (25.63) 6.8120 (inf) 12;19 7,513,196.5143 (0.04) 1998 35
test_calibrate_xfast 146.5902 (29.03) 3,468.3570 (686.80) 153.7351 (30.44) 37.3364 (inf) 152.7742 (30.25) 2.0117 (inf) 2955;9564 6,504,695.2142 (0.03) 682174 100
test_rounds_iterations 426.5457 (84.46) 1,499.8019 (296.99) 593.9603 (117.62) 264.1470 (inf) 514.0901 (101.80) 153.4820 (inf) 1;1 1,683,614.2499 (0.01) 15 10
test_proto[LocalsSimpleProxy] 461.7497 (91.44) 16,563.3857 (>1000.0) 485.9036 (96.22) 137.3900 (inf) 482.0526 (95.46) 6.8918 (inf) 449;1498 2,058,021.5358 (0.01) 87268 20
test_calibrate_fast 538.6770 (106.67) 34,030.1543 (>1000.0) 575.9520 (114.05) 214.5976 (inf) 572.2046 (113.31) 12.2935 (inf) 4826;22646 1,736,255.9069 (0.01) 1838600 10
test_iterations 659.0039 (130.50) 659.0039 (130.50) 659.0039 (130.50) 0.0000 (1.0) 659.0039 (130.50) 0.0000 (1.0) 0;0 1,517,441.8089 (0.01) 1 10
test_rounds 689.1787 (136.47) 1,821.6670 (360.73) 882.3971 (174.73) 289.7420 (inf) 759.9592 (150.49) 230.9680 (inf) 1;1 1,133,276.6226 (0.01) 15 1
test_proto[LocalsCachedPropertyProxy] 696.6293 (137.95) 317,130.2378 (>1000.0) 815.0831 (161.40) 753.3152 (inf) 804.6627 (159.34) 44.7035 (inf) 168;2968 1,226,868.7521 (0.01) 193537 1
test_proto[CachedPropertyProxy] 707.8052 (140.16) 20,097.9412 (>1000.0) 1,034.0272 (204.76) 206.2484 (inf) 1,061.7077 (210.24) 63.3299 (inf) 18834;21306 967,092.5574 (0.00) 121355 1
test_warmup_rounds 715.2557 (141.63) 1,117.5871 (221.30) 821.0540 (162.58) 172.1229 (inf) 722.7063 (143.11) 178.8139 (inf) 1;0 1,217,946.7151 (0.01) 5 1
test_proto[SimpleProxy] 972.3008 (192.53) 20,880.2521 (>1000.0) 1,323.2341 (262.03) 470.0151 (inf) 1,102.6859 (218.35) 167.6381 (inf) 24735;24759 755,724.1852 (0.00) 101604 1
test_single 1,717.3588 (340.07) 1,717.3588 (340.07) 1,717.3588 (340.07) 0.0000 (1.0) 1,717.3588 (340.07) 0.0000 (1.0) 0;0 582,289.4924 (0.00) 1 1
test_setup_many_rounds 1,844.0187 (365.15) 12,259.9304 (>1000.0) 3,452.2265 (683.61) 3,186.4371 (inf) 2,291.0535 (453.67) 651.9258 (inf) 1;2 289,668.1299 (0.00) 10 1
test_can_use_both_args_and_setup_without_return 3,132.9691 (620.39) 3,132.9691 (620.39) 3,132.9691 (620.39) 0.0000 (1.0) 3,132.9691 (620.39) 0.0000 (1.0) 0;0 319,186.0357 (0.00) 1 1
test_setup_cprofile 4,183.5010 (828.42) 4,183.5010 (828.42) 4,183.5010 (828.42) 0.0000 (1.0) 4,183.5010 (828.42) 0.0000 (1.0) 0;0 239,034.2440 (0.00) 1 1
test_setup 5,342.0663 (>1000.0) 5,342.0663 (>1000.0) 5,342.0663 (>1000.0) 0.0000 (1.0) 5,342.0663 (>1000.0) 0.0000 (1.0) 0;0 187,193.4840 (0.00) 1 1
test_args_kwargs 14,521.1816 (>1000.0) 14,521.1816 (>1000.0) 14,521.1816 (>1000.0) 0.0000 (1.0) 14,521.1816 (>1000.0) 0.0000 (1.0) 0;0 68,864.9194 (0.00) 1 1
test_fast 16,305.5956 (>1000.0) 423,926.8601 (>1000.0) 63,237.4693 (>1000.0) 8,304.1925 (inf) 63,717.3653 (>1000.0) 331.5508 (inf) 189;2810 15,813.4095 (0.00) 11419 1
test_calibrate_slow 21,193.1765 (>1000.0) 441,398.4716 (>1000.0) 71,976.5027 (>1000.0) 8,756.3291 (inf) 72,084.3673 (>1000.0) 219.7921 (inf) 6442;45914 13,893.4230 (0.00) 430254 1
test_foo 21,919.6081 (>1000.0) 408,325.3443 (>1000.0) 63,307.7182 (>1000.0) 8,065.5978 (inf) 63,028.1866 (>1000.0) 227.2427 (inf) 126;1177 15,795.8623 (0.00) 12565 1
test_parametrized[3] 22,981.3159 (>1000.0) 416,576.8623 (>1000.0) 71,991.9514 (>1000.0) 8,809.2984 (inf) 72,084.3673 (>1000.0) 223.5174 (inf) 181;1160 13,890.4417 (0.00) 11633 1
test_parametrized[1] 23,059.5469 (>1000.0) 399,578.3627 (>1000.0) 72,093.0335 (>1000.0) 9,000.2618 (inf) 72,117.8949 (>1000.0) 238.4186 (inf) 174;1112 13,870.9658 (0.00) 11461 1
test_parametrized[0] 23,391.0978 (>1000.0) 428,535.0442 (>1000.0) 71,724.4011 (>1000.0) 8,335.4016 (inf) 72,088.0926 (>1000.0) 256.1137 (inf) 173;2498 13,942.2565 (0.00) 12363 1
test_parametrized[2] 23,599.7140 (>1000.0) 421,136.6177 (>1000.0) 72,047.0571 (>1000.0) 9,154.0610 (inf) 72,102.9937 (>1000.0) 216.0668 (inf) 163;1169 13,879.8174 (0.00) 11642 1
test_parametrized[4] 23,614.6152 (>1000.0) 423,401.5942 (>1000.0) 72,153.6238 (>1000.0) 9,279.1582 (inf) 72,073.1914 (>1000.0) 249.5944 (inf) 190;1608 13,859.3178 (0.00) 11545 1
test_calibrate 69,681.5550 (>1000.0) 469,606.3697 (>1000.0) 71,508.9972 (>1000.0) 8,139.5689 (inf) 70,948.1537 (>1000.0) 469.3866 (inf) 2494;8896 13,984.2543 (0.00) 142907 1
test_slow 1,041,527.8375 (>1000.0) 1,473,579.5557 (>1000.0) 1,108,211.7476 (>1000.0) 18,150.9814 (inf) 1,112,096.0116 (>1000.0) 7,063.1504 (inf) 180;184 902.3546 (0.00) 930 1
test_slower 10,096,322.7451 (>1000.0) 10,143,708.4377 (>1000.0) 10,120,363.3100 (>1000.0) 7,011.9188 (inf) 10,120,064.0202 (>1000.0) 7,249.4149 (inf) 25;5 98.8107 (0.00) 100 1
test_calibrate_stuck[False--1-0.01] 504,999,999.9992 (>1000.0) 504,999,999.9992 (>1000.0) 504,999,999.9992 (>1000.0) 0.0000 (1.0) 504,999,999.9992 (>1000.0) 0.0000 (1.0) 0;0 1.9802 (0.00) 1 2
test_calibrate_stuck[False-0-0.01] 504,999,999.9992 (>1000.0) 504,999,999.9992 (>1000.0) 504,999,999.9992 (>1000.0) 0.0000 (1.0) 504,999,999.9992 (>1000.0) 0.0000 (1.0) 0;0 1.9802 (0.00) 1 2
test_calibrate_stuck[False-1-0.01] 504,999,999.9992 (>1000.0) 504,999,999.9992 (>1000.0) 504,999,999.9992 (>1000.0) 0.0000 (1.0) 504,999,999.9992 (>1000.0) 0.0000 (1.0) 0;0 1.9802 (0.00) 1 2
test_calibrate_stuck[True--1-0.01] 1,000,000,000.0000 (>1000.0) 1,000,000,000.0000 (>1000.0) 1,000,000,000.0000 (>1000.0) 0.0000 (1.0) 1,000,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 1.0000 (0.00) 1 1
test_calibrate_stuck[True-0-0.01] 1,000,000,000.0000 (>1000.0) 1,000,000,000.0000 (>1000.0) 1,000,000,000.0000 (>1000.0) 0.0000 (1.0) 1,000,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 1.0000 (0.00) 1 1
test_calibrate_stuck[True-1-0.01] 1,000,000,000.0000 (>1000.0) 1,000,000,000.0000 (>1000.0) 1,000,000,000.0000 (>1000.0) 0.0000 (1.0) 1,000,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 1.0000 (0.00) 1 1
test_calibrate_stuck[False--1-1.000000000000001] 50,500,000,000.0000 (>1000.0) 50,500,000,000.0000 (>1000.0) 50,500,000,000.0000 (>1000.0) 0.0000 (1.0) 50,500,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 0.0198 (0.00) 1 2
test_calibrate_stuck[False--1-1] 50,500,000,000.0000 (>1000.0) 50,500,000,000.0000 (>1000.0) 50,500,000,000.0000 (>1000.0) 0.0000 (1.0) 50,500,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 0.0198 (0.00) 1 2
test_calibrate_stuck[False-0-1.000000000000001] 50,500,000,000.0000 (>1000.0) 50,500,000,000.0000 (>1000.0) 50,500,000,000.0000 (>1000.0) 0.0000 (1.0) 50,500,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 0.0198 (0.00) 1 2
test_calibrate_stuck[False-0-1] 50,500,000,000.0000 (>1000.0) 50,500,000,000.0000 (>1000.0) 50,500,000,000.0000 (>1000.0) 0.0000 (1.0) 50,500,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 0.0198 (0.00) 1 2
test_calibrate_stuck[False-1-1.000000000000001] 50,500,000,000.0000 (>1000.0) 50,500,000,000.0000 (>1000.0) 50,500,000,000.0000 (>1000.0) 0.0000 (1.0) 50,500,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 0.0198 (0.00) 1 2
test_calibrate_stuck[False-1-1] 50,500,000,000.0000 (>1000.0) 50,500,000,000.0000 (>1000.0) 50,500,000,000.0000 (>1000.0) 0.0000 (1.0) 50,500,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 0.0198 (0.00) 1 2
test_calibrate_stuck[True--1-1.000000000000001] 100,000,000,000.0000 (>1000.0) 100,000,000,000.0000 (>1000.0) 100,000,000,000.0000 (>1000.0) 0.0000 (1.0) 100,000,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 0.0100 (0.00) 1 1
test_calibrate_stuck[True--1-1] 100,000,000,000.0000 (>1000.0) 100,000,000,000.0000 (>1000.0) 100,000,000,000.0000 (>1000.0) 0.0000 (1.0) 100,000,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 0.0100 (0.00) 1 1
test_calibrate_stuck[True-0-1.000000000000001] 100,000,000,000.0000 (>1000.0) 100,000,000,000.0000 (>1000.0) 100,000,000,000.0000 (>1000.0) 0.0000 (1.0) 100,000,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 0.0100 (0.00) 1 1
test_calibrate_stuck[True-0-1] 100,000,000,000.0000 (>1000.0) 100,000,000,000.0000 (>1000.0) 100,000,000,000.0000 (>1000.0) 0.0000 (1.0) 100,000,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 0.0100 (0.00) 1 1
test_calibrate_stuck[True-1-1.000000000000001] 100,000,000,000.0000 (>1000.0) 100,000,000,000.0000 (>1000.0) 100,000,000,000.0000 (>1000.0) 0.0000 (1.0) 100,000,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 0.0100 (0.00) 1 1
test_calibrate_stuck[True-1-1] 100,000,000,000.0000 (>1000.0) 100,000,000,000.0000 (>1000.0) 100,000,000,000.0000 (>1000.0) 0.0000 (1.0) 100,000,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 0.0100 (0.00) 1 1
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Legend:
  Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.
  OPS: Operations Per Second, computed as 1 / Mean
----------------------------- cProfile (time in s) -----------------------------
tests/test_pedantic.py::test_setup_cprofile
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.0000 0.0000 0.0000 0.0000 pytest-benchmark-3.4.1/tests/test_pedantic.py:29(stuff)
1 0.0000 0.0000 0.0000 0.0000 ~:0()
1 0.0000 0.0000 0.0000 0.0000 ~:0()
=========================== short test summary info ============================
SKIPPED [1] tests/test_skip.py:5: bla
SKIPPED [5] tests/test_utils.py:47: %r not availabe on $PATH
SKIPPED [4] tests/test_utils.py:160: %r not availabe on $PATH
FAILED tests/test_benchmark.py::test_basic - Failed: nomatch: '*collected 5 i...
FAILED tests/test_benchmark.py::test_skip - Failed: nomatch: '*collected 5 it...
FAILED tests/test_benchmark.py::test_disable - Failed: nomatch: '*collected 5...
FAILED tests/test_benchmark.py::test_only_benchmarks - Failed: nomatch: '*col...
= 4 failed, 208 passed, 10 skipped, 10 deselected, 2 warnings in 430.67s (0:07:10) =
>>> ERROR: py3-pytest-benchmark: check failed
>>> py3-pytest-benchmark: Uninstalling dependencies...
(1/29) Purging .makedepends-py3-pytest-benchmark (20221026.171219)
(2/29) Purging py3-py-cpuinfo (8.0.0-r0)
(3/29) Purging py3-setuptools (65.5.0-r0)
(4/29) Purging py3-pytest-xdist (2.5.0-r1)
(5/29) Purging py3-execnet (1.9.0-r0)
(6/29) Purging py3-apipkg (2.1.0-r0)
(7/29) Purging py3-pytest-forked (1.4.0-r1)
(8/29) Purging py3-pytest (7.1.3-r1)
(9/29) Purging py3-attrs (22.1.0-r0)
(10/29) Purging py3-iniconfig (1.1.1-r3)
(11/29) Purging py3-packaging (21.3-r2)
(12/29) Purging py3-parsing (3.0.9-r0)
(13/29) Purging py3-pluggy (1.0.0-r1)
(14/29) Purging py3-py (1.11.0-r0)
(15/29) Purging py3-tomli (2.0.1-r1)
(16/29) Purging py3-freezegun (1.2.2-r0)
(17/29) Purging py3-dateutil (2.8.2-r1)
(18/29) Purging py3-six (1.16.0-r3)
(19/29) Purging py3-pygal (3.0.0-r1)
(20/29) Purging py3-elasticsearch (7.11.0-r1)
(21/29) Purging py3-urllib3 (1.26.12-r0)
(22/29) Purging python3 (3.10.8-r3)
(23/29) Purging libbz2 (1.0.8-r3)
(24/29) Purging libffi (3.4.3-r0)
(25/29) Purging gdbm (1.23-r0)
(26/29) Purging xz-libs (5.2.7-r0)
(27/29) Purging mpdecimal (2.5.1-r1)
(28/29) Purging readline (8.2.0-r0)
(29/29) Purging sqlite-libs (3.39.4-r0)
Executing busybox-1.35.0-r27.trigger
OK: 258 MiB in 92 packages