>>> py3-pytest-benchmark: Building community/py3-pytest-benchmark 3.4.1-r1 (using abuild 3.10.0_rc1-r2) started Wed, 26 Oct 2022 14:14:33 +0000
>>> py3-pytest-benchmark: Checking sanity of /home/buildozer/aports/community/py3-pytest-benchmark/APKBUILD...
>>> py3-pytest-benchmark: Analyzing dependencies...
>>> py3-pytest-benchmark: Installing for build: build-base python3 py3-pytest py3-py-cpuinfo py3-setuptools py3-pytest-xdist py3-freezegun py3-pygal py3-elasticsearch
(1/29) Installing libbz2 (1.0.8-r3)
(2/29) Installing libffi (3.4.3-r0)
(3/29) Installing gdbm (1.23-r0)
(4/29) Installing xz-libs (5.2.7-r0)
(5/29) Installing mpdecimal (2.5.1-r1)
(6/29) Installing readline (8.2.0-r0)
(7/29) Installing sqlite-libs (3.39.4-r0)
(8/29) Installing python3 (3.10.8-r1)
(9/29) Installing py3-attrs (22.1.0-r0)
(10/29) Installing py3-iniconfig (1.1.1-r3)
(11/29) Installing py3-parsing (3.0.9-r0)
(12/29) Installing py3-packaging (21.3-r2)
(13/29) Installing py3-pluggy (1.0.0-r1)
(14/29) Installing py3-py (1.11.0-r0)
(15/29) Installing py3-tomli (2.0.1-r1)
(16/29) Installing py3-pytest (7.1.3-r1)
(17/29) Installing py3-py-cpuinfo (8.0.0-r0)
(18/29) Installing py3-setuptools (65.5.0-r0)
(19/29) Installing py3-apipkg (2.1.0-r0)
(20/29) Installing py3-execnet (1.9.0-r0)
(21/29) Installing py3-pytest-forked (1.4.0-r1)
(22/29) Installing py3-pytest-xdist (2.5.0-r1)
(23/29) Installing py3-six (1.16.0-r3)
(24/29) Installing py3-dateutil (2.8.2-r1)
(25/29) Installing py3-freezegun (1.2.2-r0)
(26/29) Installing py3-pygal (3.0.0-r1)
(27/29) Installing py3-urllib3 (1.26.12-r0)
(28/29) Installing py3-elasticsearch (7.11.0-r1)
(29/29) Installing .makedepends-py3-pytest-benchmark (20221026.141451)
Executing busybox-1.35.0-r27.trigger
OK: 360 MiB in 121 packages
>>> py3-pytest-benchmark: Cleaning up srcdir
>>> py3-pytest-benchmark: Cleaning up pkgdir
>>> py3-pytest-benchmark: Fetching https://distfiles.alpinelinux.org/distfiles/v3.17/pytest-benchmark-3.4.1.tar.gz
>>> py3-pytest-benchmark: Fetching https://distfiles.alpinelinux.org/distfiles/v3.17/pytest-benchmark-3.4.1.tar.gz
>>> py3-pytest-benchmark: Checking sha512sums...
pytest-benchmark-3.4.1.tar.gz: OK
python-3.10.patch: OK
>>> py3-pytest-benchmark: Unpacking /var/cache/distfiles/v3.17/pytest-benchmark-3.4.1.tar.gz...
>>> py3-pytest-benchmark: python-3.10.patch
patching file tests/test_cli.py
running build
running build_py
creating build
creating build/lib
creating build/lib/pytest_benchmark
copying src/pytest_benchmark/plugin.py -> build/lib/pytest_benchmark
copying src/pytest_benchmark/cli.py -> build/lib/pytest_benchmark
copying src/pytest_benchmark/histogram.py -> build/lib/pytest_benchmark
copying src/pytest_benchmark/pep418.py -> build/lib/pytest_benchmark
copying src/pytest_benchmark/csv.py -> build/lib/pytest_benchmark
copying src/pytest_benchmark/timers.py -> build/lib/pytest_benchmark
copying src/pytest_benchmark/hookspec.py -> build/lib/pytest_benchmark
copying src/pytest_benchmark/utils.py -> build/lib/pytest_benchmark
copying src/pytest_benchmark/table.py -> build/lib/pytest_benchmark
copying src/pytest_benchmark/__main__.py -> build/lib/pytest_benchmark
copying src/pytest_benchmark/session.py -> build/lib/pytest_benchmark
copying src/pytest_benchmark/logger.py -> build/lib/pytest_benchmark
copying src/pytest_benchmark/fixture.py -> build/lib/pytest_benchmark
copying src/pytest_benchmark/__init__.py -> build/lib/pytest_benchmark
copying src/pytest_benchmark/compat.py -> build/lib/pytest_benchmark
copying src/pytest_benchmark/stats.py -> build/lib/pytest_benchmark
creating build/lib/pytest_benchmark/storage
copying src/pytest_benchmark/storage/elasticsearch.py -> build/lib/pytest_benchmark/storage
copying src/pytest_benchmark/storage/__init__.py -> build/lib/pytest_benchmark/storage
copying src/pytest_benchmark/storage/file.py -> build/lib/pytest_benchmark/storage
running egg_info
creating src/pytest_benchmark.egg-info
writing src/pytest_benchmark.egg-info/PKG-INFO
writing dependency_links to src/pytest_benchmark.egg-info/dependency_links.txt
writing entry points to src/pytest_benchmark.egg-info/entry_points.txt
writing requirements to src/pytest_benchmark.egg-info/requires.txt
writing top-level names to src/pytest_benchmark.egg-info/top_level.txt
writing manifest file 'src/pytest_benchmark.egg-info/SOURCES.txt'
reading manifest file 'src/pytest_benchmark.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no previously-included files matching '*.py[cod]' found anywhere in distribution
warning: no previously-included files matching '__pycache__/*' found anywhere in distribution
warning: no previously-included files matching '*.so' found anywhere in distribution
warning: no previously-included files matching '*.dylib' found anywhere in distribution
adding license file 'LICENSE'
adding license file 'AUTHORS.rst'
writing manifest file 'src/pytest_benchmark.egg-info/SOURCES.txt'
running install
/usr/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
  warnings.warn(
running build
running build_py
running egg_info
writing src/pytest_benchmark.egg-info/PKG-INFO
writing dependency_links to src/pytest_benchmark.egg-info/dependency_links.txt
writing entry points to src/pytest_benchmark.egg-info/entry_points.txt
writing requirements to src/pytest_benchmark.egg-info/requires.txt
writing top-level names to src/pytest_benchmark.egg-info/top_level.txt
reading manifest file 'src/pytest_benchmark.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no previously-included files matching '*.py[cod]' found anywhere in distribution
warning: no previously-included files matching '__pycache__/*' found anywhere in distribution
warning: no previously-included files matching '*.so' found anywhere in distribution
warning: no previously-included files matching '*.dylib' found anywhere in distribution
adding license file 'LICENSE'
adding license file 'AUTHORS.rst'
writing manifest file 'src/pytest_benchmark.egg-info/SOURCES.txt'
running install_lib
creating /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install
creating /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr
creating /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib
creating /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10
creating /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages
creating /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark
copying build/lib/pytest_benchmark/plugin.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark
copying build/lib/pytest_benchmark/cli.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark
copying build/lib/pytest_benchmark/histogram.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark
copying build/lib/pytest_benchmark/pep418.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark
copying build/lib/pytest_benchmark/csv.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark
copying build/lib/pytest_benchmark/timers.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark
copying build/lib/pytest_benchmark/hookspec.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark
creating /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/storage
copying build/lib/pytest_benchmark/storage/elasticsearch.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/storage
copying build/lib/pytest_benchmark/storage/__init__.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/storage
copying build/lib/pytest_benchmark/storage/file.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/storage
copying build/lib/pytest_benchmark/utils.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark
copying build/lib/pytest_benchmark/table.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark
copying build/lib/pytest_benchmark/__main__.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark
copying build/lib/pytest_benchmark/session.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark
copying build/lib/pytest_benchmark/logger.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark
copying build/lib/pytest_benchmark/fixture.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark
copying build/lib/pytest_benchmark/__init__.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark
copying build/lib/pytest_benchmark/compat.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark
copying build/lib/pytest_benchmark/stats.py -> /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark
byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/plugin.py to plugin.cpython-310.pyc
byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/cli.py to cli.cpython-310.pyc
byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/histogram.py to histogram.cpython-310.pyc
byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/pep418.py to pep418.cpython-310.pyc
byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/csv.py to csv.cpython-310.pyc
byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/timers.py to timers.cpython-310.pyc
byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/hookspec.py to hookspec.cpython-310.pyc
byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/storage/elasticsearch.py to elasticsearch.cpython-310.pyc
byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/storage/__init__.py to __init__.cpython-310.pyc
byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/storage/file.py to file.cpython-310.pyc
byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/utils.py to utils.cpython-310.pyc
byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/table.py to table.cpython-310.pyc
byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/__main__.py to __main__.cpython-310.pyc
byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/session.py to session.cpython-310.pyc
byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/logger.py to logger.cpython-310.pyc
byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/fixture.py to fixture.cpython-310.pyc
byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/__init__.py to __init__.cpython-310.pyc
byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/compat.py to compat.cpython-310.pyc
byte-compiling /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark/stats.py to stats.cpython-310.pyc
running install_egg_info
Copying src/pytest_benchmark.egg-info to /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/lib/python3.10/site-packages/pytest_benchmark-3.4.1-py3.10.egg-info
running install_scripts
Installing py.test-benchmark script to /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/bin
Installing pytest-benchmark script to /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/test_install/usr/bin
============================= test session starts ==============================
platform linux -- Python 3.10.8, pytest-7.1.3, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1, configfile: setup.cfg, testpaths: tests
plugins: benchmark-3.4.1, xdist-2.5.0, forked-1.4.0
collecting ... collected 232 items / 10 deselected / 222 selected

tests/test_benchmark.py::test_help PASSED [ 0%]
tests/test_benchmark.py::test_groups PASSED [ 0%]
tests/test_benchmark.py::test_group_by_name PASSED [ 1%]
tests/test_benchmark.py::test_group_by_func PASSED [ 1%]
tests/test_benchmark.py::test_group_by_fullfunc PASSED [ 2%]
tests/test_benchmark.py::test_group_by_param_all PASSED [ 2%]
tests/test_benchmark.py::test_group_by_param_select PASSED [ 3%]
tests/test_benchmark.py::test_group_by_param_select_multiple PASSED [ 3%]
tests/test_benchmark.py::test_group_by_fullname PASSED [ 4%]
tests/test_benchmark.py::test_double_use PASSED [ 4%]
tests/test_benchmark.py::test_only_override_skip PASSED [ 4%]
tests/test_benchmark.py::test_fixtures_also_skipped PASSED [ 5%]
tests/test_benchmark.py::test_conflict_between_only_and_disable PASSED [ 5%]
tests/test_benchmark.py::test_max_time_min_rounds PASSED [ 6%]
tests/test_benchmark.py::test_max_time PASSED [ 6%]
tests/test_benchmark.py::test_bogus_max_time PASSED [ 7%]
tests/test_benchmark.py::test_pep418_timer PASSED [ 7%]
tests/test_benchmark.py::test_bad_save PASSED [ 8%]
tests/test_benchmark.py::test_bad_save_2 PASSED [ 8%]
tests/test_benchmark.py::test_bad_compare_fail PASSED [ 9%]
tests/test_benchmark.py::test_bad_rounds PASSED [ 9%]
tests/test_benchmark.py::test_bad_rounds_2 PASSED [ 9%]
tests/test_benchmark.py::test_compare PASSED [ 10%]
tests/test_benchmark.py::test_compare_last PASSED [ 10%]
tests/test_benchmark.py::test_compare_non_existing PASSED [ 11%]
tests/test_benchmark.py::test_compare_non_existing_verbose PASSED [ 11%]
tests/test_benchmark.py::test_compare_no_files PASSED [ 12%]
tests/test_benchmark.py::test_compare_no_files_verbose PASSED [ 12%]
tests/test_benchmark.py::test_compare_no_files_match PASSED [ 13%]
tests/test_benchmark.py::test_compare_no_files_match_verbose PASSED [ 13%]
tests/test_benchmark.py::test_verbose PASSED [ 13%]
tests/test_benchmark.py::test_save PASSED [ 14%]
tests/test_benchmark.py::test_save_extra_info PASSED [ 14%]
tests/test_benchmark.py::test_update_machine_info_hook_detection PASSED [ 15%]
tests/test_benchmark.py::test_histogram PASSED [ 15%]
tests/test_benchmark.py::test_autosave PASSED [ 16%]
tests/test_benchmark.py::test_bogus_min_time PASSED [ 16%]
tests/test_benchmark.py::test_disable_gc PASSED [ 17%]
tests/test_benchmark.py::test_custom_timer PASSED [ 17%]
tests/test_benchmark.py::test_bogus_timer PASSED [ 18%]
tests/test_benchmark.py::test_sort_by_mean PASSED [ 18%]
tests/test_benchmark.py::test_bogus_sort PASSED [ 18%]
tests/test_benchmark.py::test_xdist PASSED [ 19%]
tests/test_benchmark.py::test_xdist_verbose PASSED [ 19%]
tests/test_benchmark.py::test_cprofile PASSED [ 20%]
tests/test_benchmark.py::test_disabled_and_cprofile PASSED [ 20%]
tests/test_benchmark.py::test_abort_broken PASSED [ 21%]
tests/test_benchmark.py::test_basic FAILED [ 21%]
tests/test_benchmark.py::test_skip FAILED [ 22%]
tests/test_benchmark.py::test_disable FAILED [ 22%]
tests/test_benchmark.py::test_mark_selection PASSED [ 22%]
tests/test_benchmark.py::test_only_benchmarks FAILED [ 23%]
tests/test_benchmark.py::test_columns PASSED [ 23%]
tests/test_calibration.py::test_calibrate PASSED [ 24%]
tests/test_calibration.py::test_calibrate_fast PASSED [ 24%]
tests/test_calibration.py::test_calibrate_xfast PASSED [ 25%]
tests/test_calibration.py::test_calibrate_slow PASSED [ 25%]
tests/test_calibration.py::test_calibrate_stuck[True-0-1] PASSED [ 26%]
tests/test_calibration.py::test_calibrate_stuck[True-0-0.01] PASSED [ 26%]
tests/test_calibration.py::test_calibrate_stuck[True-0-1e-09] PASSED [ 27%]
tests/test_calibration.py::test_calibrate_stuck[True-0-1e-10] PASSED [ 27%]
tests/test_calibration.py::test_calibrate_stuck[True-0-1.000000000000001] PASSED [ 27%]
tests/test_calibration.py::test_calibrate_stuck[True-1-1] PASSED [ 28%]
tests/test_calibration.py::test_calibrate_stuck[True-1-0.01] PASSED [ 28%]
tests/test_calibration.py::test_calibrate_stuck[True-1-1e-09] PASSED [ 29%]
tests/test_calibration.py::test_calibrate_stuck[True-1-1e-10] PASSED [ 29%]
tests/test_calibration.py::test_calibrate_stuck[True-1-1.000000000000001] PASSED [ 30%]
tests/test_calibration.py::test_calibrate_stuck[True--1-1] PASSED [ 30%]
tests/test_calibration.py::test_calibrate_stuck[True--1-0.01] PASSED [ 31%]
tests/test_calibration.py::test_calibrate_stuck[True--1-1e-09] PASSED [ 31%]
tests/test_calibration.py::test_calibrate_stuck[True--1-1e-10] PASSED [ 31%]
tests/test_calibration.py::test_calibrate_stuck[True--1-1.000000000000001] PASSED [ 32%]
tests/test_calibration.py::test_calibrate_stuck[False-0-1] PASSED [ 32%]
tests/test_calibration.py::test_calibrate_stuck[False-0-0.01] PASSED [ 33%]
tests/test_calibration.py::test_calibrate_stuck[False-0-1e-09] PASSED [ 33%]
tests/test_calibration.py::test_calibrate_stuck[False-0-1e-10] PASSED [ 34%]
tests/test_calibration.py::test_calibrate_stuck[False-0-1.000000000000001] PASSED [ 34%]
tests/test_calibration.py::test_calibrate_stuck[False-1-1] PASSED [ 35%]
tests/test_calibration.py::test_calibrate_stuck[False-1-0.01] PASSED [ 35%]
tests/test_calibration.py::test_calibrate_stuck[False-1-1e-09] PASSED [ 36%]
tests/test_calibration.py::test_calibrate_stuck[False-1-1e-10] PASSED [ 36%]
tests/test_calibration.py::test_calibrate_stuck[False-1-1.000000000000001] PASSED [ 36%]
tests/test_calibration.py::test_calibrate_stuck[False--1-1] PASSED [ 37%]
tests/test_calibration.py::test_calibrate_stuck[False--1-0.01] PASSED [ 37%]
tests/test_calibration.py::test_calibrate_stuck[False--1-1e-09] PASSED [ 38%]
tests/test_calibration.py::test_calibrate_stuck[False--1-1e-10] PASSED [ 38%]
tests/test_calibration.py::test_calibrate_stuck[False--1-1.000000000000001] PASSED [ 39%]
tests/test_cli.py::test_list PASSED [ 39%]
tests/test_cli.py::test_compare[short-] PASSED [ 40%]
tests/test_cli.py::test_compare[long-] PASSED [ 40%]
tests/test_cli.py::test_compare[normal-] PASSED [ 40%]
tests/test_cli.py::test_compare[trial-] PASSED [ 41%]
tests/test_doctest.rst::test_doctest.rst PASSED [ 41%]
tests/test_elasticsearch_storage.py::test_handle_saving PASSED [ 42%]
tests/test_elasticsearch_storage.py::test_parse_with_no_creds PASSED [ 42%]
tests/test_elasticsearch_storage.py::test_parse_with_creds_in_first_host_of_url PASSED [ 43%]
tests/test_elasticsearch_storage.py::test_parse_with_creds_in_second_host_of_url PASSED [ 43%]
tests/test_elasticsearch_storage.py::test_parse_with_creds_in_netrc PASSED [ 44%]
tests/test_elasticsearch_storage.py::test_parse_url_creds_supersedes_netrc_creds PASSED [ 44%]
tests/test_elasticsearch_storage.py::test__mask_hosts PASSED [ 45%]
tests/test_normal.py::test_normal PASSED [ 45%]
tests/test_normal.py::test_fast PASSED [ 45%]
tests/test_normal.py::test_slow PASSED [ 46%]
tests/test_normal.py::test_slower PASSED [ 46%]
tests/test_normal.py::test_xfast PASSED [ 47%]
tests/test_normal.py::test_parametrized[0] PASSED [ 47%]
tests/test_normal.py::test_parametrized[1] PASSED [ 48%]
tests/test_normal.py::test_parametrized[2] PASSED [ 48%]
tests/test_normal.py::test_parametrized[3] PASSED [ 49%]
tests/test_normal.py::test_parametrized[4] PASSED [ 49%]
tests/test_pedantic.py::test_single PASSED [ 50%]
tests/test_pedantic.py::test_setup PASSED [ 50%]
tests/test_pedantic.py::test_setup_cprofile PASSED [ 50%]
tests/test_pedantic.py::test_args_kwargs PASSED [ 51%]
tests/test_pedantic.py::test_iterations PASSED [ 51%]
tests/test_pedantic.py::test_rounds_iterations PASSED [ 52%]
tests/test_pedantic.py::test_rounds PASSED [ 52%]
tests/test_pedantic.py::test_warmup_rounds PASSED [ 53%]
tests/test_pedantic.py::test_rounds_must_be_int[0] PASSED [ 53%]
tests/test_pedantic.py::test_rounds_must_be_int[x] PASSED [ 54%]
tests/test_pedantic.py::test_warmup_rounds_must_be_int[-15] PASSED [ 54%]
tests/test_pedantic.py::test_warmup_rounds_must_be_int[x] PASSED [ 54%]
tests/test_pedantic.py::test_setup_many_rounds PASSED [ 55%]
tests/test_pedantic.py::test_cant_use_both_args_and_setup_with_return PASSED [ 55%]
tests/test_pedantic.py::test_can_use_both_args_and_setup_without_return PASSED [ 56%]
tests/test_pedantic.py::test_cant_use_setup_with_many_iterations PASSED [ 56%]
tests/test_pedantic.py::test_iterations_must_be_positive_int[0] PASSED [ 57%]
tests/test_pedantic.py::test_iterations_must_be_positive_int[-1] PASSED [ 57%]
tests/test_pedantic.py::test_iterations_must_be_positive_int[asdf] PASSED [ 58%]
tests/test_sample.py::test_proto[SimpleProxy] PASSED [ 58%]
tests/test_sample.py::test_proto[CachedPropertyProxy] PASSED [ 59%]
tests/test_sample.py::test_proto[LocalsSimpleProxy] PASSED [ 59%]
tests/test_sample.py::test_proto[LocalsCachedPropertyProxy] PASSED [ 59%]
tests/test_skip.py::test_skip SKIPPED (bla) [ 60%]
tests/test_stats.py::test_1 PASSED [ 60%]
tests/test_stats.py::test_2 PASSED [ 61%]
tests/test_stats.py::test_single_item PASSED [ 61%]
tests/test_stats.py::test_length[1] PASSED [ 62%]
tests/test_stats.py::test_length[2] PASSED [ 62%]
tests/test_stats.py::test_length[3] PASSED [ 63%]
tests/test_stats.py::test_length[4] PASSED [ 63%]
tests/test_stats.py::test_length[5] PASSED [ 63%]
tests/test_stats.py::test_length[6] PASSED [ 64%]
tests/test_stats.py::test_length[7] PASSED [ 64%]
tests/test_stats.py::test_length[8] PASSED [ 65%]
tests/test_stats.py::test_length[9] PASSED [ 65%]
tests/test_stats.py::test_iqr PASSED [ 66%]
tests/test_stats.py::test_ops PASSED [ 66%]
tests/test_storage.py::test_rendering[short] PASSED [ 67%]
tests/test_storage.py::test_rendering[normal] PASSED [ 67%]
tests/test_storage.py::test_rendering[long] PASSED [ 68%]
tests/test_storage.py::test_rendering[trial] PASSED [ 68%]
tests/test_storage.py::test_regression_checks[short] PASSED [ 68%]
tests/test_storage.py::test_regression_checks[normal] PASSED [ 69%]
tests/test_storage.py::test_regression_checks[long] PASSED [ 69%]
tests/test_storage.py::test_regression_checks[trial] PASSED [ 70%]
tests/test_storage.py::test_regression_checks_inf[short] PASSED [ 70%]
tests/test_storage.py::test_regression_checks_inf[normal] PASSED [ 71%]
tests/test_storage.py::test_regression_checks_inf[long] PASSED [ 71%]
tests/test_storage.py::test_regression_checks_inf[trial] PASSED [ 72%]
tests/test_storage.py::test_compare_1[short] PASSED [ 72%]
tests/test_storage.py::test_compare_1[normal] PASSED [ 72%]
tests/test_storage.py::test_compare_1[long] PASSED [ 73%]
tests/test_storage.py::test_compare_1[trial] PASSED [ 73%]
tests/test_storage.py::test_compare_2[short] PASSED [ 74%]
tests/test_storage.py::test_compare_2[normal] PASSED [ 74%]
tests/test_storage.py::test_compare_2[long] PASSED [ 75%]
tests/test_storage.py::test_compare_2[trial] PASSED [ 75%]
tests/test_storage.py::test_save_json[short] PASSED [ 76%]
tests/test_storage.py::test_save_json[normal] PASSED [ 76%]
tests/test_storage.py::test_save_json[long] PASSED [ 77%]
tests/test_storage.py::test_save_json[trial] PASSED [ 77%]
tests/test_storage.py::test_save_with_name[short] PASSED [ 77%]
tests/test_storage.py::test_save_with_name[normal] PASSED [ 78%]
tests/test_storage.py::test_save_with_name[long] PASSED [ 78%]
tests/test_storage.py::test_save_with_name[trial] PASSED [ 79%]
tests/test_storage.py::test_save_no_name[short] PASSED [ 79%]
tests/test_storage.py::test_save_no_name[normal] PASSED [ 80%]
tests/test_storage.py::test_save_no_name[long] PASSED [ 80%]
tests/test_storage.py::test_save_no_name[trial] PASSED [ 81%]
tests/test_storage.py::test_save_with_error[short] PASSED [ 81%]
tests/test_storage.py::test_save_with_error[normal] PASSED [ 81%]
tests/test_storage.py::test_save_with_error[long] PASSED [ 82%]
tests/test_storage.py::test_save_with_error[trial] PASSED [ 82%]
tests/test_storage.py::test_autosave[short] PASSED [ 83%]
tests/test_storage.py::test_autosave[normal] PASSED [ 83%]
tests/test_storage.py::test_autosave[long] PASSED [ 84%]
tests/test_storage.py::test_autosave[trial] PASSED [ 84%]
tests/test_utils.py::test_clonefunc[] PASSED [ 85%]
tests/test_utils.py::test_clonefunc[f2] PASSED [ 85%]
tests/test_utils.py::test_clonefunc_not_function PASSED [ 86%]
tests/test_utils.py::test_get_commit_info[git-True] PASSED [ 86%]
tests/test_utils.py::test_get_commit_info[git-False] PASSED [ 86%]
tests/test_utils.py::test_get_commit_info[hg-True] SKIPPED (%r not a...) [ 87%]
tests/test_utils.py::test_get_commit_info[hg-False] SKIPPED (%r not ...) [ 87%]
tests/test_utils.py::test_missing_scm_bins[git-True] PASSED [ 88%]
tests/test_utils.py::test_missing_scm_bins[git-False] PASSED [ 88%]
tests/test_utils.py::test_missing_scm_bins[hg-True] SKIPPED (%r not ...) [ 89%]
tests/test_utils.py::test_missing_scm_bins[hg-False] SKIPPED (%r not...) [ 89%]
tests/test_utils.py::test_get_branch_info[git] PASSED [ 90%]
tests/test_utils.py::test_get_branch_info[hg] SKIPPED (%r not availa...) [ 90%]
tests/test_utils.py::test_no_branch_info PASSED [ 90%]
tests/test_utils.py::test_commit_info_error PASSED [ 91%]
tests/test_utils.py::test_parse_warmup PASSED [ 91%]
tests/test_utils.py::test_parse_columns PASSED [ 92%]
tests/test_utils.py::test_get_project_name[False-None] PASSED [ 92%]
tests/test_utils.py::test_get_project_name[False-git] PASSED [ 93%]
tests/test_utils.py::test_get_project_name[False-hg] SKIPPED (%r not...) [ 93%]
tests/test_utils.py::test_get_project_name[https://example.com/pytest_benchmark_repo-None] PASSED [ 94%]
tests/test_utils.py::test_get_project_name[https://example.com/pytest_benchmark_repo-git] PASSED [ 94%]
tests/test_utils.py::test_get_project_name[https://example.com/pytest_benchmark_repo-hg] SKIPPED [ 95%]
tests/test_utils.py::test_get_project_name[https://example.com/pytest_benchmark_repo.git-None] PASSED [ 95%]
tests/test_utils.py::test_get_project_name[https://example.com/pytest_benchmark_repo.git-git] PASSED [ 95%]
tests/test_utils.py::test_get_project_name[https://example.com/pytest_benchmark_repo.git-hg] SKIPPED [ 96%]
tests/test_utils.py::test_get_project_name[c:\\foo\\bar\\pytest_benchmark_repo.gitfoo@example.com:pytest_benchmark_repo.git-None] PASSED [ 96%]
tests/test_utils.py::test_get_project_name[c:\\foo\\bar\\pytest_benchmark_repo.gitfoo@example.com:pytest_benchmark_repo.git-git] PASSED [ 97%]
tests/test_utils.py::test_get_project_name[c:\\foo\\bar\\pytest_benchmark_repo.gitfoo@example.com:pytest_benchmark_repo.git-hg] SKIPPED [ 97%]
tests/test_utils.py::test_get_project_name_broken[git] PASSED [ 98%]
tests/test_utils.py::test_get_project_name_broken[hg] PASSED [ 98%]
tests/test_utils.py::test_get_project_name_fallback PASSED [ 99%]
tests/test_utils.py::test_get_project_name_fallback_broken_hgrc PASSED [ 99%]
tests/test_with_testcase.py::TerribleTerribleWayToWriteTests::test_foo PASSED [100%]

=================================== FAILURES ===================================
__________________________________ test_basic __________________________________
/home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/tests/test_benchmark.py:1028: in test_basic
    result.stdout.fnmatch_lines([
E Failed: nomatch: '*collected 5 items'
E and: '============================= test session starts =============================='
E and: 'platform linux -- Python 3.10.8, pytest-7.1.3, pluggy-1.0.0 -- /usr/bin/python3'
E and: 'cachedir: .pytest_cache'
E and: 'benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)'
E and: 'rootdir: /tmp/pytest-of-buildozer/pytest-11/test_basic0'
E and: 'plugins: benchmark-3.4.1, xdist-2.5.0, forked-1.4.0'
E fnmatch: '*collected 5 items'
E with: 'collecting ... collected 5 items'
E nomatch: 'test_basic.py::*test_basic PASSED*'
E and: ''
E fnmatch: 'test_basic.py::*test_basic PASSED*'
E with: 'test_basic.py::test_basic PASSED [ 20%]'
E nomatch: 'test_basic.py::test_slow PASSED*'
E and: 'test_basic.py::test_fast PASSED [ 40%]'
E fnmatch: 'test_basic.py::test_slow PASSED*'
E with: 'test_basic.py::test_slow PASSED [ 60%]'
E fnmatch: 'test_basic.py::test_slower PASSED*'
E with: 'test_basic.py::test_slower PASSED [ 80%]'
E fnmatch: 'test_basic.py::test_xfast PASSED*'
E with: 'test_basic.py::test_xfast PASSED [100%]'
E nomatch: 'test_basic.py::test_fast PASSED*'
E and: ''
E and: ''
E and: '--------------------------------------------------------------------------------------------------------- benchmark: 4 tests ---------------------------------------------------------------------------------------------------------'
E and: 'Name (time in ns) Min Max Mean StdDev Median IQR Outliers OPS Rounds Iterations'
E and: '--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------'
E and: 'test_xfast 560.1719 (1.0) 754.2968 (1.0) 567.9812 (1.0) 7.4592 (1.0) 567.4362 (1.0) 3.2783 (1.0) 132;234 1,760,621.5414 (1.0) 17478 100'
E and: 'test_fast 568.5911 (1.02) 5,347.9150 (7.09) 577.7529 (1.02) 37.1190 (4.98) 576.8239 (1.02) 3.4273 (1.05) 104;187 1,730,843.8964 (0.98) 17310 100'
E and: 'test_slow 1,019,071.7876 (>1000.0) 1,468,677.0737 (>1000.0) 1,081,298.7054 (>1000.0) 13,457.9307 (>1000.0) 1,079,998.9104 (>1000.0) 2,544.3733 (776.14) 23;44 924.8138 (0.00) 922 1'
E and: 'test_slower 10,058,779.2695 (>1000.0) 10,559,368.8786 (>1000.0) 10,167,523.1010 (>1000.0) 182,009.0550 (>1000.0) 10,085,796.9373 (>1000.0) 6,202.6083 (>1000.0) 18;21 98.3524 (0.00) 100 1'
E and: '--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------'
E and: ''
E and: 'Legend:'
E and: ' Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.'
E and: ' OPS: Operations Per Second, computed as 1 / Mean' E and: '============================== 5 passed in 6.00s ===============================' E remains unmatched: 'test_basic.py::test_fast PASSED*' ----------------------------- Captured stdout call ----------------------------- running: /usr/bin/python3 -mpytest --basetemp=/tmp/pytest-of-buildozer/pytest-11/test_basic0/runpytest-0 -vv --doctest-modules /tmp/pytest-of-buildozer/pytest-11/test_basic0/test_basic.py in: /tmp/pytest-of-buildozer/pytest-11/test_basic0 ============================= test session starts ============================== platform linux -- Python 3.10.8, pytest-7.1.3, pluggy-1.0.0 -- /usr/bin/python3 cachedir: .pytest_cache benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000) rootdir: /tmp/pytest-of-buildozer/pytest-11/test_basic0 plugins: benchmark-3.4.1, xdist-2.5.0, forked-1.4.0 collecting ... 
collected 5 items test_basic.py::test_basic PASSED [ 20%] test_basic.py::test_fast PASSED [ 40%] test_basic.py::test_slow PASSED [ 60%] test_basic.py::test_slower PASSED [ 80%] test_basic.py::test_xfast PASSED [100%] --------------------------------------------------------------------------------------------------------- benchmark: 4 tests --------------------------------------------------------------------------------------------------------- Name (time in ns) Min Max Mean StdDev Median IQR Outliers OPS Rounds Iterations -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- test_xfast 560.1719 (1.0) 754.2968 (1.0) 567.9812 (1.0) 7.4592 (1.0) 567.4362 (1.0) 3.2783 (1.0) 132;234 1,760,621.5414 (1.0) 17478 100 test_fast 568.5911 (1.02) 5,347.9150 (7.09) 577.7529 (1.02) 37.1190 (4.98) 576.8239 (1.02) 3.4273 (1.05) 104;187 1,730,843.8964 (0.98) 17310 100 test_slow 1,019,071.7876 (>1000.0) 1,468,677.0737 (>1000.0) 1,081,298.7054 (>1000.0) 13,457.9307 (>1000.0) 1,079,998.9104 (>1000.0) 2,544.3733 (776.14) 23;44 924.8138 (0.00) 922 1 test_slower 10,058,779.2695 (>1000.0) 10,559,368.8786 (>1000.0) 10,167,523.1010 (>1000.0) 182,009.0550 (>1000.0) 10,085,796.9373 (>1000.0) 6,202.6083 (>1000.0) 18;21 98.3524 (0.00) 100 1 -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Legend: Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile. 
OPS: Operations Per Second, computed as 1 / Mean ============================== 5 passed in 6.00s =============================== __________________________________ test_skip ___________________________________ /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/tests/test_benchmark.py:1052: in test_skip result.stdout.fnmatch_lines([ E Failed: nomatch: '*collected 5 items' E and: '============================= test session starts ==============================' E and: 'platform linux -- Python 3.10.8, pytest-7.1.3, pluggy-1.0.0 -- /usr/bin/python3' E and: 'cachedir: .pytest_cache' E and: 'benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)' E and: 'rootdir: /tmp/pytest-of-buildozer/pytest-11/test_skip0' E and: 'plugins: benchmark-3.4.1, xdist-2.5.0, forked-1.4.0' E fnmatch: '*collected 5 items' E with: 'collecting ... collected 5 items' E nomatch: 'test_skip.py::*test_skip PASSED*' E and: '' E fnmatch: 'test_skip.py::*test_skip PASSED*' E with: 'test_skip.py::test_skip PASSED [ 20%]' E nomatch: 'test_skip.py::test_slow SKIPPED*' E and: 'test_skip.py::test_fast SKIPPED (Skipping benchmark (--benchmark-skip active).) [ 40%]' E fnmatch: 'test_skip.py::test_slow SKIPPED*' E with: 'test_skip.py::test_slow SKIPPED (Skipping benchmark (--benchmark-skip active).) [ 60%]' E fnmatch: 'test_skip.py::test_slower SKIPPED*' E with: 'test_skip.py::test_slower SKIPPED (Skipping benchmark (--benchmark-skip active).) [ 80%]' E fnmatch: 'test_skip.py::test_xfast SKIPPED*' E with: 'test_skip.py::test_xfast SKIPPED (Skipping benchmark (--benchmark-skip active).) 
[100%]' E nomatch: 'test_skip.py::test_fast SKIPPED*' E and: '' E and: '========================= 1 passed, 4 skipped in 0.09s =========================' E remains unmatched: 'test_skip.py::test_fast SKIPPED*' ----------------------------- Captured stdout call ----------------------------- running: /usr/bin/python3 -mpytest --basetemp=/tmp/pytest-of-buildozer/pytest-11/test_skip0/runpytest-0 -vv --doctest-modules --benchmark-skip /tmp/pytest-of-buildozer/pytest-11/test_skip0/test_skip.py in: /tmp/pytest-of-buildozer/pytest-11/test_skip0 ============================= test session starts ============================== platform linux -- Python 3.10.8, pytest-7.1.3, pluggy-1.0.0 -- /usr/bin/python3 cachedir: .pytest_cache benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000) rootdir: /tmp/pytest-of-buildozer/pytest-11/test_skip0 plugins: benchmark-3.4.1, xdist-2.5.0, forked-1.4.0 collecting ... collected 5 items test_skip.py::test_skip PASSED [ 20%] test_skip.py::test_fast SKIPPED (Skipping benchmark (--benchmark-skip active).) [ 40%] test_skip.py::test_slow SKIPPED (Skipping benchmark (--benchmark-skip active).) [ 60%] test_skip.py::test_slower SKIPPED (Skipping benchmark (--benchmark-skip active).) [ 80%] test_skip.py::test_xfast SKIPPED (Skipping benchmark (--benchmark-skip active).) 
[100%] ========================= 1 passed, 4 skipped in 0.09s ========================= _________________________________ test_disable _________________________________ /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/tests/test_benchmark.py:1066: in test_disable result.stdout.fnmatch_lines([ E Failed: nomatch: '*collected 5 items' E and: '============================= test session starts ==============================' E and: 'platform linux -- Python 3.10.8, pytest-7.1.3, pluggy-1.0.0 -- /usr/bin/python3' E and: 'cachedir: .pytest_cache' E and: 'benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)' E and: 'rootdir: /tmp/pytest-of-buildozer/pytest-11/test_disable0' E and: 'plugins: benchmark-3.4.1, xdist-2.5.0, forked-1.4.0' E fnmatch: '*collected 5 items' E with: 'collecting ... collected 5 items' E nomatch: 'test_disable.py::*test_disable PASSED*' E and: '' E fnmatch: 'test_disable.py::*test_disable PASSED*' E with: 'test_disable.py::test_disable PASSED [ 20%]' E nomatch: 'test_disable.py::test_slow PASSED*' E and: 'test_disable.py::test_fast PASSED [ 40%]' E fnmatch: 'test_disable.py::test_slow PASSED*' E with: 'test_disable.py::test_slow PASSED [ 60%]' E fnmatch: 'test_disable.py::test_slower PASSED*' E with: 'test_disable.py::test_slower PASSED [ 80%]' E fnmatch: 'test_disable.py::test_xfast PASSED*' E with: 'test_disable.py::test_xfast PASSED [100%]' E nomatch: 'test_disable.py::test_fast PASSED*' E and: '' E and: '============================== 5 passed in 0.11s ===============================' E remains unmatched: 'test_disable.py::test_fast PASSED*' ----------------------------- Captured stdout call ----------------------------- running: /usr/bin/python3 -mpytest --basetemp=/tmp/pytest-of-buildozer/pytest-11/test_disable0/runpytest-0 -vv --doctest-modules --benchmark-disable 
/tmp/pytest-of-buildozer/pytest-11/test_disable0/test_disable.py in: /tmp/pytest-of-buildozer/pytest-11/test_disable0 ============================= test session starts ============================== platform linux -- Python 3.10.8, pytest-7.1.3, pluggy-1.0.0 -- /usr/bin/python3 cachedir: .pytest_cache benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000) rootdir: /tmp/pytest-of-buildozer/pytest-11/test_disable0 plugins: benchmark-3.4.1, xdist-2.5.0, forked-1.4.0 collecting ... collected 5 items test_disable.py::test_disable PASSED [ 20%] test_disable.py::test_fast PASSED [ 40%] test_disable.py::test_slow PASSED [ 60%] test_disable.py::test_slower PASSED [ 80%] test_disable.py::test_xfast PASSED [100%] ============================== 5 passed in 0.11s =============================== _____________________________ test_only_benchmarks _____________________________ /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/tests/test_benchmark.py:1095: in test_only_benchmarks result.stdout.fnmatch_lines([ E Failed: nomatch: '*collected 5 items' E and: '============================= test session starts ==============================' E and: 'platform linux -- Python 3.10.8, pytest-7.1.3, pluggy-1.0.0 -- /usr/bin/python3' E and: 'cachedir: .pytest_cache' E and: 'benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)' E and: 'rootdir: /tmp/pytest-of-buildozer/pytest-11/test_only_benchmarks0' E and: 'plugins: benchmark-3.4.1, xdist-2.5.0, forked-1.4.0' E fnmatch: '*collected 5 items' E with: 'collecting ... 
collected 5 items' E nomatch: 'test_only_benchmarks.py::*test_only_benchmarks SKIPPED*' E and: '' E fnmatch: 'test_only_benchmarks.py::*test_only_benchmarks SKIPPED*' E with: 'test_only_benchmarks.py::test_only_benchmarks SKIPPED (Skipping non-benchmark (--benchmark-only active).) [ 20%]' E nomatch: 'test_only_benchmarks.py::test_slow PASSED*' E and: 'test_only_benchmarks.py::test_fast PASSED [ 40%]' E fnmatch: 'test_only_benchmarks.py::test_slow PASSED*' E with: 'test_only_benchmarks.py::test_slow PASSED [ 60%]' E fnmatch: 'test_only_benchmarks.py::test_slower PASSED*' E with: 'test_only_benchmarks.py::test_slower PASSED [ 80%]' E fnmatch: 'test_only_benchmarks.py::test_xfast PASSED*' E with: 'test_only_benchmarks.py::test_xfast PASSED [100%]' E nomatch: 'test_only_benchmarks.py::test_fast PASSED*' E and: '' E and: '' E and: '--------------------------------------------------------------------------------------------------------- benchmark: 4 tests ---------------------------------------------------------------------------------------------------------' E and: 'Name (time in ns) Min Max Mean StdDev Median IQR Outliers OPS Rounds Iterations' E and: '--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------' E and: 'test_xfast 560.3954 (1.0) 857.3011 (1.0) 567.5488 (1.0) 9.3827 (1.14) 566.7284 (1.0) 2.9057 (1.0) 129;325 1,761,963.0628 (1.0) 17602 100' E and: 'test_fast 569.0753 (1.02) 873.0963 (1.02) 577.2308 (1.02) 8.2168 (1.0) 576.5632 (1.02) 3.1292 (1.08) 120;235 1,732,409.3445 (0.98) 17123 100' E and: 'test_slow 1,019,977.0331 (>1000.0) 1,486,018.3001 (>1000.0) 1,081,938.9360 (>1000.0) 23,862.8368 (>1000.0) 1,079,987.7346 (>1000.0) 2,199.7839 (757.05) 11;97 924.2666 (0.00) 924 1' E and: 'test_slower 10,029,084.9805 (>1000.0) 10,583,035.6479 (>1000.0) 10,153,433.7550 (>1000.0) 170,200.8527 
(>1000.0) 10,085,977.6139 (>1000.0) 6,081.5364 (>1000.0) 15;23 98.4888 (0.00) 100 1' E and: '--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------' E and: '' E and: 'Legend:' E and: ' Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.' E and: ' OPS: Operations Per Second, computed as 1 / Mean' E and: '========================= 4 passed, 1 skipped in 5.99s =========================' E remains unmatched: 'test_only_benchmarks.py::test_fast PASSED*' ----------------------------- Captured stdout call ----------------------------- running: /usr/bin/python3 -mpytest --basetemp=/tmp/pytest-of-buildozer/pytest-11/test_only_benchmarks0/runpytest-0 -vv --doctest-modules --benchmark-only /tmp/pytest-of-buildozer/pytest-11/test_only_benchmarks0/test_only_benchmarks.py in: /tmp/pytest-of-buildozer/pytest-11/test_only_benchmarks0 ============================= test session starts ============================== platform linux -- Python 3.10.8, pytest-7.1.3, pluggy-1.0.0 -- /usr/bin/python3 cachedir: .pytest_cache benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000) rootdir: /tmp/pytest-of-buildozer/pytest-11/test_only_benchmarks0 plugins: benchmark-3.4.1, xdist-2.5.0, forked-1.4.0 collecting ... collected 5 items test_only_benchmarks.py::test_only_benchmarks SKIPPED (Skipping non-benchmark (--benchmark-only active).) 
[ 20%] test_only_benchmarks.py::test_fast PASSED [ 40%] test_only_benchmarks.py::test_slow PASSED [ 60%] test_only_benchmarks.py::test_slower PASSED [ 80%] test_only_benchmarks.py::test_xfast PASSED [100%] --------------------------------------------------------------------------------------------------------- benchmark: 4 tests --------------------------------------------------------------------------------------------------------- Name (time in ns) Min Max Mean StdDev Median IQR Outliers OPS Rounds Iterations -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- test_xfast 560.3954 (1.0) 857.3011 (1.0) 567.5488 (1.0) 9.3827 (1.14) 566.7284 (1.0) 2.9057 (1.0) 129;325 1,761,963.0628 (1.0) 17602 100 test_fast 569.0753 (1.02) 873.0963 (1.02) 577.2308 (1.02) 8.2168 (1.0) 576.5632 (1.02) 3.1292 (1.08) 120;235 1,732,409.3445 (0.98) 17123 100 test_slow 1,019,977.0331 (>1000.0) 1,486,018.3001 (>1000.0) 1,081,938.9360 (>1000.0) 23,862.8368 (>1000.0) 1,079,987.7346 (>1000.0) 2,199.7839 (757.05) 11;97 924.2666 (0.00) 924 1 test_slower 10,029,084.9805 (>1000.0) 10,583,035.6479 (>1000.0) 10,153,433.7550 (>1000.0) 170,200.8527 (>1000.0) 10,085,977.6139 (>1000.0) 6,081.5364 (>1000.0) 15;23 98.4888 (0.00) 100 1 -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Legend: Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile. 
OPS: Operations Per Second, computed as 1 / Mean ========================= 4 passed, 1 skipped in 5.99s ========================= =============================== warnings summary =============================== ../../../../../../../usr/lib/python3.10/site-packages/_pytest/config/__init__.py:1199 /usr/lib/python3.10/site-packages/_pytest/config/__init__.py:1199: PytestRemovedIn8Warning: The --strict option is deprecated, use --strict-markers instead. self.issue_config_time_warning( tests/test_utils.py:35 /home/buildozer/aports/community/py3-pytest-benchmark/src/pytest-benchmark-3.4.1/tests/test_utils.py:35: PytestDeprecationWarning: @pytest.yield_fixture is deprecated. Use @pytest.fixture instead; they are the same. @pytest.yield_fixture(params=(True, False)) -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html ------------------------------------------------------------------------------------------------------------------------------------ benchmark: 58 tests ------------------------------------------------------------------------------------------------------------------------------------ Name (time in ns) Min Max Mean StdDev Median IQR Outliers OPS Rounds Iterations --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- test_calibrate_stuck[False--1-1e-10] 5.0500 (1.0) 5.0500 (1.0) 5.0500 (1.0) 0.0000 (1.0) 5.0500 (1.0) 0.0000 (1.0) 0;0 198,019,801.9803 (1.0) 1 2 test_calibrate_stuck[False-0-1e-10] 5.0500 (1.0) 5.0500 (1.0) 5.0500 (1.0) 0.0000 (1.0) 5.0500 (1.0) 0.0000 (1.0) 0;0 198,019,801.9803 (1.0) 1 2 test_calibrate_stuck[False-1-1e-10] 5.0500 (1.0) 5.0500 (1.0) 5.0500 (1.0) 0.0000 (1.0) 5.0500 (1.0) 0.0000 (1.0) 0;0 198,019,801.9803 (1.0) 1 2 test_calibrate_stuck[True--1-1e-10] 10.0000 (1.98) 10.0000 (1.98) 
10.0000 (1.98) 0.0000 (1.0) 10.0000 (1.98) 0.0000 (1.0) 0;0 100,000,000.0001 (0.51) 1 1 test_calibrate_stuck[True-0-1e-10] 10.0000 (1.98) 10.0000 (1.98) 10.0000 (1.98) 0.0000 (1.0) 10.0000 (1.98) 0.0000 (1.0) 0;0 100,000,000.0001 (0.51) 1 1 test_calibrate_stuck[True-1-1e-10] 10.0000 (1.98) 10.0000 (1.98) 10.0000 (1.98) 0.0000 (1.0) 10.0000 (1.98) 0.0000 (1.0) 0;0 100,000,000.0001 (0.51) 1 1 test_calibrate_stuck[False--1-1e-09] 50.5000 (10.00) 50.5000 (10.00) 50.5000 (10.00) 0.0000 (1.0) 50.5000 (10.00) 0.0000 (1.0) 0;0 19,801,980.1980 (0.10) 1 2 test_calibrate_stuck[False-0-1e-09] 50.5000 (10.00) 50.5000 (10.00) 50.5000 (10.00) 0.0000 (1.0) 50.5000 (10.00) 0.0000 (1.0) 0;0 19,801,980.1980 (0.10) 1 2 test_calibrate_stuck[False-1-1e-09] 50.5000 (10.00) 50.5000 (10.00) 50.5000 (10.00) 0.0000 (1.0) 50.5000 (10.00) 0.0000 (1.0) 0;0 19,801,980.1980 (0.10) 1 2 test_calibrate_stuck[True--1-1e-09] 100.0000 (19.80) 100.0000 (19.80) 100.0000 (19.80) 0.0000 (1.0) 100.0000 (19.80) 0.0000 (1.0) 0;0 10,000,000.0000 (0.05) 1 1 test_calibrate_stuck[True-0-1e-09] 100.0000 (19.80) 100.0000 (19.80) 100.0000 (19.80) 0.0000 (1.0) 100.0000 (19.80) 0.0000 (1.0) 0;0 10,000,000.0000 (0.05) 1 1 test_calibrate_stuck[True-1-1e-09] 100.0000 (19.80) 100.0000 (19.80) 100.0000 (19.80) 0.0000 (1.0) 100.0000 (19.80) 0.0000 (1.0) 0;0 10,000,000.0000 (0.05) 1 1 test_xfast 650.2325 (128.76) 877.8139 (173.82) 658.7384 (130.44) 10.9873 (inf) 661.0697 (130.90) 10.8372 (inf) 86;1 1,518,053.2124 (0.01) 688 22 test_calibrate_xfast 731.4527 (144.84) 26,130.1580 (>1000.0) 867.5639 (171.79) 396.8166 (inf) 748.4594 (148.21) 5.5070 (inf) 56969;69124 1,152,652.7930 (0.01) 588933 23 test_rounds_iterations 819.1913 (162.22) 945.1061 (187.15) 874.8968 (173.25) 38.8682 (inf) 872.0905 (172.69) 68.3591 (inf) 5;0 1,142,991.8928 (0.01) 15 10 test_iterations 943.6160 (186.85) 943.6160 (186.85) 943.6160 (186.85) 0.0000 (1.0) 943.6160 (186.85) 0.0000 (1.0) 0;0 1,059,753.0833 (0.01) 1 10 test_proto[CachedPropertyProxy] 
1,917.5932 (379.72) 2,799.7419 (554.40) 1,978.7855 (391.84) 38.3374 (inf) 1,976.4528 (391.38) 19.9303 (inf) 469;366 505,360.4826 (0.00) 23203 20 test_proto[LocalsCachedPropertyProxy] 1,963.6005 (388.83) 3,020.0928 (598.04) 2,064.9319 (408.90) 37.1364 (inf) 2,063.4383 (408.60) 21.0479 (inf) 544;295 484,277.4653 (0.00) 24639 20 test_proto[LocalsSimpleProxy] 2,021.3425 (400.27) 3,825.8731 (757.60) 2,081.6307 (412.20) 51.7147 (inf) 2,078.7120 (411.63) 20.1166 (inf) 384;538 480,392.6127 (0.00) 39940 10 test_proto[SimpleProxy] 2,023.9502 (400.78) 3,838.1666 (760.03) 2,079.7766 (411.84) 54.5401 (inf) 2,076.4768 (411.18) 18.9990 (inf) 219;474 480,820.8809 (0.00) 38055 10 test_warmup_rounds 2,384.1858 (472.12) 2,734.3631 (541.46) 2,457.9465 (486.72) 154.5749 (inf) 2,387.9111 (472.85) 93.1323 (inf) 1;1 406,843.6738 (0.00) 5 1 test_rounds 2,387.9111 (472.85) 2,764.1654 (547.36) 2,485.7620 (492.23) 121.3150 (inf) 2,443.7904 (483.92) 74.5058 (inf) 3;3 402,291.1220 (0.00) 15 1 test_calibrate_fast 2,660.6023 (526.85) 46,413.0193 (>1000.0) 5,057.7373 (>1000.0) 2,567.9740 (inf) 3,034.9940 (600.99) 4,824.2509 (inf) 88480;1120 197,716.8736 (0.00) 375908 10 test_single 2,745.5389 (543.67) 2,745.5389 (543.67) 2,745.5389 (543.67) 0.0000 (1.0) 2,745.5389 (543.67) 0.0000 (1.0) 0;0 364,227.2130 (0.00) 1 1 test_can_use_both_args_and_setup_without_return 3,937.6318 (779.73) 3,937.6318 (779.73) 3,937.6318 (779.73) 0.0000 (1.0) 3,937.6318 (779.73) 0.0000 (1.0) 0;0 253,959.7502 (0.00) 1 1 test_setup_many_rounds 4,000.9618 (792.27) 4,816.8004 (953.82) 4,238.2628 (839.26) 257.1263 (inf) 4,120.1711 (815.88) 298.0232 (inf) 2;0 235,945.7291 (0.00) 10 1 test_setup_cprofile 4,805.6245 (951.61) 4,805.6245 (951.61) 4,805.6245 (951.61) 0.0000 (1.0) 4,805.6245 (951.61) 0.0000 (1.0) 0;0 208,089.5008 (0.00) 1 1 test_args_kwargs 4,861.5038 (962.67) 4,861.5038 (962.67) 4,861.5038 (962.67) 0.0000 (1.0) 4,861.5038 (962.67) 0.0000 (1.0) 0;0 205,697.6674 (0.00) 1 1 test_setup 5,014.2407 (992.92) 5,014.2407 
(992.92) 5,014.2407 (992.92) 0.0000 (1.0) 5,014.2407 (992.92) 0.0000 (1.0) 0;0 199,431.9881 (0.00) 1 1 test_foo 12,688.3388 (>1000.0) 89,548.5282 (>1000.0) 62,451.2391 (>1000.0) 2,118.2466 (inf) 62,536.4482 (>1000.0) 178.8139 (inf) 110;314 16,012.4925 (0.00) 15136 1 test_fast 16,741.4546 (>1000.0) 74,993.8190 (>1000.0) 63,258.2552 (>1000.0) 2,712.0816 (inf) 63,430.5179 (>1000.0) 182.5392 (inf) 118;407 15,808.2135 (0.00) 14617 1 test_parametrized[1] 24,326.1456 (>1000.0) 83,219.2600 (>1000.0) 71,499.7850 (>1000.0) 1,802.3950 (inf) 71,544.2002 (>1000.0) 171.3634 (inf) 74;281 13,986.0560 (0.00) 13632 1 test_parametrized[3] 24,668.8724 (>1000.0) 83,833.9329 (>1000.0) 71,539.2820 (>1000.0) 1,645.2600 (inf) 71,566.5519 (>1000.0) 175.0886 (inf) 81;284 13,978.3343 (0.00) 13522 1 test_parametrized[0] 24,974.3462 (>1000.0) 83,144.7542 (>1000.0) 71,521.1912 (>1000.0) 1,741.7431 (inf) 71,559.1013 (>1000.0) 171.3634 (inf) 73;276 13,981.8700 (0.00) 13297 1 test_parametrized[4] 25,574.1179 (>1000.0) 83,301.2164 (>1000.0) 71,581.9076 (>1000.0) 1,390.1623 (inf) 71,596.3542 (>1000.0) 178.8139 (inf) 83;287 13,970.0105 (0.00) 13475 1 test_parametrized[2] 26,658.1774 (>1000.0) 86,732.2087 (>1000.0) 71,519.3132 (>1000.0) 1,681.2250 (inf) 71,540.4749 (>1000.0) 175.0886 (inf) 88;302 13,982.2372 (0.00) 13539 1 test_calibrate_slow 30,480.3252 (>1000.0) 10,209,985.0774 (>1000.0) 87,124.9596 (>1000.0) 70,953.2697 (inf) 87,991.3568 (>1000.0) 10,736.2866 (inf) 315;3355 11,477.7672 (0.00) 213214 1 test_calibrate 360,995.5311 (>1000.0) 1,970,559.3586 (>1000.0) 419,154.3450 (>1000.0) 113,350.1945 (inf) 367,224.2165 (>1000.0) 4,604.4588 (inf) 3713;4956 2,385.7560 (0.00) 22698 1 test_slow 1,019,306.4809 (>1000.0) 1,328,747.7195 (>1000.0) 1,082,246.5762 (>1000.0) 15,703.7446 (inf) 1,081,109.0469 (>1000.0) 2,650.5440 (inf) 29;49 924.0038 (0.00) 940 1 test_slower 10,063,115.5074 (>1000.0) 10,865,967.7207 (>1000.0) 10,437,847.2641 (>1000.0) 216,167.3555 (inf) 10,550,765.3207 (>1000.0) 470,243.3944 (inf) 
28;0 95.8052 (0.00) 100 1 test_calibrate_stuck[False--1-0.01] 505,000,000.0001 (>1000.0) 505,000,000.0001 (>1000.0) 505,000,000.0001 (>1000.0) 0.0000 (1.0) 505,000,000.0001 (>1000.0) 0.0000 (1.0) 0;0 1.9802 (0.00) 1 2 test_calibrate_stuck[False-0-0.01] 505,000,000.0001 (>1000.0) 505,000,000.0001 (>1000.0) 505,000,000.0001 (>1000.0) 0.0000 (1.0) 505,000,000.0001 (>1000.0) 0.0000 (1.0) 0;0 1.9802 (0.00) 1 2 test_calibrate_stuck[False-1-0.01] 505,000,000.0001 (>1000.0) 505,000,000.0001 (>1000.0) 505,000,000.0001 (>1000.0) 0.0000 (1.0) 505,000,000.0001 (>1000.0) 0.0000 (1.0) 0;0 1.9802 (0.00) 1 2 test_calibrate_stuck[True--1-0.01] 1,000,000,000.0000 (>1000.0) 1,000,000,000.0000 (>1000.0) 1,000,000,000.0000 (>1000.0) 0.0000 (1.0) 1,000,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 1.0000 (0.00) 1 1 test_calibrate_stuck[True-0-0.01] 1,000,000,000.0000 (>1000.0) 1,000,000,000.0000 (>1000.0) 1,000,000,000.0000 (>1000.0) 0.0000 (1.0) 1,000,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 1.0000 (0.00) 1 1 test_calibrate_stuck[True-1-0.01] 1,000,000,000.0000 (>1000.0) 1,000,000,000.0000 (>1000.0) 1,000,000,000.0000 (>1000.0) 0.0000 (1.0) 1,000,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 1.0000 (0.00) 1 1 test_calibrate_stuck[False--1-1.000000000000001] 50,500,000,000.0000 (>1000.0) 50,500,000,000.0000 (>1000.0) 50,500,000,000.0000 (>1000.0) 0.0000 (1.0) 50,500,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 0.0198 (0.00) 1 2 test_calibrate_stuck[False--1-1] 50,500,000,000.0000 (>1000.0) 50,500,000,000.0000 (>1000.0) 50,500,000,000.0000 (>1000.0) 0.0000 (1.0) 50,500,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 0.0198 (0.00) 1 2 test_calibrate_stuck[False-0-1.000000000000001] 50,500,000,000.0000 (>1000.0) 50,500,000,000.0000 (>1000.0) 50,500,000,000.0000 (>1000.0) 0.0000 (1.0) 50,500,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 0.0198 (0.00) 1 2 test_calibrate_stuck[False-0-1] 50,500,000,000.0000 (>1000.0) 50,500,000,000.0000 (>1000.0) 50,500,000,000.0000 (>1000.0) 0.0000 (1.0) 50,500,000,000.0000 (>1000.0) 
0.0000 (1.0) 0;0 0.0198 (0.00) 1 2 test_calibrate_stuck[False-1-1.000000000000001] 50,500,000,000.0000 (>1000.0) 50,500,000,000.0000 (>1000.0) 50,500,000,000.0000 (>1000.0) 0.0000 (1.0) 50,500,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 0.0198 (0.00) 1 2 test_calibrate_stuck[False-1-1] 50,500,000,000.0000 (>1000.0) 50,500,000,000.0000 (>1000.0) 50,500,000,000.0000 (>1000.0) 0.0000 (1.0) 50,500,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 0.0198 (0.00) 1 2 test_calibrate_stuck[True--1-1.000000000000001] 100,000,000,000.0000 (>1000.0) 100,000,000,000.0000 (>1000.0) 100,000,000,000.0000 (>1000.0) 0.0000 (1.0) 100,000,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 0.0100 (0.00) 1 1 test_calibrate_stuck[True--1-1] 100,000,000,000.0000 (>1000.0) 100,000,000,000.0000 (>1000.0) 100,000,000,000.0000 (>1000.0) 0.0000 (1.0) 100,000,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 0.0100 (0.00) 1 1 test_calibrate_stuck[True-0-1.000000000000001] 100,000,000,000.0000 (>1000.0) 100,000,000,000.0000 (>1000.0) 100,000,000,000.0000 (>1000.0) 0.0000 (1.0) 100,000,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 0.0100 (0.00) 1 1 test_calibrate_stuck[True-0-1] 100,000,000,000.0000 (>1000.0) 100,000,000,000.0000 (>1000.0) 100,000,000,000.0000 (>1000.0) 0.0000 (1.0) 100,000,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 0.0100 (0.00) 1 1 test_calibrate_stuck[True-1-1.000000000000001] 100,000,000,000.0000 (>1000.0) 100,000,000,000.0000 (>1000.0) 100,000,000,000.0000 (>1000.0) 0.0000 (1.0) 100,000,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 0.0100 (0.00) 1 1 test_calibrate_stuck[True-1-1] 100,000,000,000.0000 (>1000.0) 100,000,000,000.0000 (>1000.0) 100,000,000,000.0000 (>1000.0) 0.0000 (1.0) 100,000,000,000.0000 (>1000.0) 0.0000 (1.0) 0;0 0.0100 (0.00) 1 1 
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Legend:
  Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.
  OPS: Operations Per Second, computed as 1 / Mean
----------------------------- cProfile (time in s) -----------------------------
tests/test_pedantic.py::test_setup_cprofile
ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
     1   0.0000   0.0000   0.0000   0.0000  pytest-benchmark-3.4.1/tests/test_pedantic.py:29(stuff)
     1   0.0000   0.0000   0.0000   0.0000  ~:0()
     1   0.0000   0.0000   0.0000   0.0000  ~:0()
=========================== short test summary info ============================
SKIPPED [1] tests/test_skip.py:5: bla
SKIPPED [5] tests/test_utils.py:47: %r not availabe on $PATH
SKIPPED [4] tests/test_utils.py:160: %r not availabe on $PATH
FAILED tests/test_benchmark.py::test_basic - Failed: nomatch: '*collected 5 i...
FAILED tests/test_benchmark.py::test_skip - Failed: nomatch: '*collected 5 it...
FAILED tests/test_benchmark.py::test_disable - Failed: nomatch: '*collected 5...
FAILED tests/test_benchmark.py::test_only_benchmarks - Failed: nomatch: '*col...
= 4 failed, 208 passed, 10 skipped, 10 deselected, 2 warnings in 508.78s (0:08:28) =
>>> ERROR: py3-pytest-benchmark: check failed
>>> py3-pytest-benchmark: Uninstalling dependencies...
(1/29) Purging .makedepends-py3-pytest-benchmark (20221026.141451)
(2/29) Purging py3-py-cpuinfo (8.0.0-r0)
(3/29) Purging py3-setuptools (65.5.0-r0)
(4/29) Purging py3-pytest-xdist (2.5.0-r1)
(5/29) Purging py3-execnet (1.9.0-r0)
(6/29) Purging py3-apipkg (2.1.0-r0)
(7/29) Purging py3-pytest-forked (1.4.0-r1)
(8/29) Purging py3-pytest (7.1.3-r1)
(9/29) Purging py3-attrs (22.1.0-r0)
(10/29) Purging py3-iniconfig (1.1.1-r3)
(11/29) Purging py3-packaging (21.3-r2)
(12/29) Purging py3-parsing (3.0.9-r0)
(13/29) Purging py3-pluggy (1.0.0-r1)
(14/29) Purging py3-py (1.11.0-r0)
(15/29) Purging py3-tomli (2.0.1-r1)
(16/29) Purging py3-freezegun (1.2.2-r0)
(17/29) Purging py3-dateutil (2.8.2-r1)
(18/29) Purging py3-six (1.16.0-r3)
(19/29) Purging py3-pygal (3.0.0-r1)
(20/29) Purging py3-elasticsearch (7.11.0-r1)
(21/29) Purging py3-urllib3 (1.26.12-r0)
(22/29) Purging python3 (3.10.8-r1)
(23/29) Purging libbz2 (1.0.8-r3)
(24/29) Purging libffi (3.4.3-r0)
(25/29) Purging gdbm (1.23-r0)
(26/29) Purging xz-libs (5.2.7-r0)
(27/29) Purging mpdecimal (2.5.1-r1)
(28/29) Purging readline (8.2.0-r0)
(29/29) Purging sqlite-libs (3.39.4-r0)
Executing busybox-1.35.0-r27.trigger
OK: 288 MiB in 92 packages