Bug 2079938 - qemu coredump when boot with multi disks (qemu) failed to set up stack guard page: Cannot allocate memory
Summary: qemu coredump when boot with multi disks (qemu) failed to set up stack guard ...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: qemu-kvm
Version: 9.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Kevin Wolf
QA Contact: qing.wang
URL:
Whiteboard:
Duplicates: 2080930 (view as bug list)
Depends On:
Blocks:
 
Reported: 2022-04-28 14:55 UTC by qing.wang
Modified: 2022-11-15 10:19 UTC (History)
19 users

Fixed In Version: qemu-kvm-7.0.0-4.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-11-15 09:54:42 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Gitlab redhat/centos-stream/src qemu-kvm merge_requests 87 0 None opened coroutine: Fix crashes due to too large pool batch size 2022-05-13 14:02:33 UTC
Red Hat Issue Tracker RHELPLAN-120390 0 None None None 2022-04-28 15:32:25 UTC
Red Hat Product Errata RHSA-2022:7967 0 None None None 2022-11-15 09:55:24 UTC

Internal Links: 2143006

Description qing.wang 2022-04-28 14:55:20 UTC
Description of problem:

Guest boot fails and QEMU dumps core when started with many disks (>10):

(qemu) failed to set up stack guard page: Cannot allocate memory

Version-Release number of selected component (if applicable):
Red Hat Enterprise Linux release 9.1 Beta (Plow)
5.14.0-80.el9.x86_64
qemu-kvm-7.0.0-1.el9.x86_64
seabios-bin-1.16.0-1.el9.noarch


How reproducible:
100%

Steps to Reproduce:
1. Create the images:
qemu-img create -f qcow2 /home/kvm_autotest_root/images/mstg0.qcow2 1G
...
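The elided lines repeat the same invocation for the remaining data disks. A sketch of a loop that would generate them (hypothetical: the disk count and names are inferred from the mstg0 through mstg10 images referenced in step 2, and the commands are only echoed here rather than executed):

```shell
# Generate the qemu-img commands for all data disks used in step 2
# (mstg0 through mstg10); drop the leading "echo" to execute them.
for i in $(seq 0 10); do
    echo qemu-img create -f qcow2 \
        "/home/kvm_autotest_root/images/mstg${i}.qcow2" 1G
done
```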

2. Boot the VM:
/usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 8G \
    -object memory-backend-ram,size=8G,id=mem-machine_mem  \
    -smp 16,maxcpus=16,cores=8,threads=1,dies=1,sockets=2  \
    -cpu 'EPYC-Rome',+kvm_pv_unhalt \
    \
    -blockdev node-name=file_image1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/rhel900-64-virtio.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,write-cache=on,bus=pcie-root-port-1,addr=0x0 \
    -blockdev node-name=file_stg0,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg0.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg0,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg0 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-blk-pci,id=stg0,drive=drive_stg0,write-cache=on,bus=pcie-root-port-2,addr=0x0 \
    -blockdev node-name=file_stg1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg1.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg1 \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-blk-pci,id=stg1,drive=drive_stg1,write-cache=on,bus=pcie-root-port-3,addr=0x0 \
    -blockdev node-name=file_stg2,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg2.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg2,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg2 \
    -device pcie-root-port,id=pcie-root-port-4,port=0x4,addr=0x1.0x4,bus=pcie.0,chassis=5 \
    -device virtio-blk-pci,id=stg2,drive=drive_stg2,write-cache=on,bus=pcie-root-port-4,addr=0x0 \
    -blockdev node-name=file_stg3,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg3.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg3,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg3 \
    -device pcie-root-port,id=pcie-root-port-5,port=0x5,addr=0x1.0x5,bus=pcie.0,chassis=6 \
    -device virtio-blk-pci,id=stg3,drive=drive_stg3,write-cache=on,bus=pcie-root-port-5,addr=0x0 \
    -blockdev node-name=file_stg4,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg4.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg4,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg4 \
    -device pcie-root-port,id=pcie-root-port-6,port=0x6,addr=0x1.0x6,bus=pcie.0,chassis=7 \
    -device virtio-blk-pci,id=stg4,drive=drive_stg4,write-cache=on,bus=pcie-root-port-6,addr=0x0 \
    -blockdev node-name=file_stg5,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg5.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg5,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg5 \
    -device pcie-root-port,id=pcie-root-port-7,port=0x7,addr=0x1.0x7,bus=pcie.0,chassis=8 \
    -device virtio-blk-pci,id=stg5,drive=drive_stg5,write-cache=on,bus=pcie-root-port-7,addr=0x0 \
    \
    -blockdev node-name=file_stg6,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg6.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg6,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg6 \
    -device pcie-root-port,id=pcie-root-port-8,port=0x8,multifunction=on,bus=pcie.0,addr=0x3,chassis=9 \
    -device virtio-blk-pci,id=stg6,drive=drive_stg6,write-cache=on,bus=pcie-root-port-8,addr=0x0 \
    -blockdev node-name=file_stg7,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg7.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg7,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg7 \
    -device pcie-root-port,id=pcie-root-port-9,port=0x9,addr=0x3.0x1,bus=pcie.0,chassis=10 \
    -device virtio-blk-pci,id=stg7,drive=drive_stg7,write-cache=on,bus=pcie-root-port-9,addr=0x0 \
    -blockdev node-name=file_stg8,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg8.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg8,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg8 \
    -device pcie-root-port,id=pcie-root-port-10,port=0xa,addr=0x3.0x2,bus=pcie.0,chassis=11 \
    -device virtio-blk-pci,id=stg8,drive=drive_stg8,write-cache=on,bus=pcie-root-port-10,addr=0x0 \
    -blockdev node-name=file_stg9,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg9.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg9,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg9 \
    -device pcie-root-port,id=pcie-root-port-11,port=0xb,addr=0x3.0x3,bus=pcie.0,chassis=12 \
    -device virtio-blk-pci,id=stg9,drive=drive_stg9,write-cache=on,bus=pcie-root-port-11,addr=0x0 \
    -blockdev node-name=file_stg10,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg10.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg10,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg10 \
    -device pcie-root-port,id=pcie-root-port-12,port=0xc,addr=0x3.0x4,bus=pcie.0,chassis=13 \
    -device virtio-blk-pci,id=stg10,drive=drive_stg10,write-cache=on,bus=pcie-root-port-12,addr=0x0 \
\
    -device pcie-root-port,id=pcie-root-port-29,port=0x1d,addr=0x5.0x5,bus=pcie.0,chassis=30 \
    -device virtio-net-pci,mac=9a:bf:2f:a0:13:6c,id=idkGiUhA,netdev=idjf89nC,bus=pcie-root-port-29,addr=0x0  \
    -netdev tap,id=idjf89nC,vhost=on  \
    \
    -vnc :5 \
  -monitor stdio \
  -qmp tcp:0:5955,server=on,wait=off \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x6,chassis=31


3.

Actual results:
Guest hangs during boot and QEMU dumps core.
Expected results:
Boot succeeds.
Additional info:
  No issue found on

Red Hat Enterprise Linux release 9.0 (Plow)
5.14.0-70.13.1.el9_0.x86_64
qemu-kvm-6.2.0-11.el9_0.2.x86_64

automation:
python3 ConfigTest.py --testcase=multi_disk.max_disk --guestname=RHEL.9.0.0 --machines=q35 --driveformat=virtio_blk

Comment 1 Klaus Heinrich Kiwi 2022-05-03 10:33:52 UTC
Hanna, can you take this one?

Comment 2 Hanna Czenczek 2022-05-04 11:56:31 UTC
I don’t know, this isn’t my area of expertise.  Grepping for the error string yields an `mprotect(PROT_NONE)` in `qemu_alloc_stack()`, and I don’t know how that could fail.  `mprotect(2)` says the most likely reason is `ENOMEM` because “Changing the protection of a memory region would result in the total number of mappings with distinct attributes exceeding the allowed maximum.”

I presume the reason has something to do with the number of allocated stacks increasing, which causes so many fragmented memory regions to appear that the kernel doesn’t like.  Perhaps because of commit 4c41c69e05fe28c0f95f8abd2ebf407e95a4f04b (“util: adjust coroutine pool size to virtio block queue”)?

Qing Wang, can you perhaps try configuring all virtio-blk-pci devices with queue-size=4,num-queues=1?  (Just trying to see whether that might reduce the auto-adjusted coroutine pool size.)

Cc-ing Stefan for his thoughts on this.

(The only thing I myself can reproduce is a kernel panic in the guest if I configure those virtio-blk-pci devices with queue-size=1024, but that doesn’t have to do anything with said commit...  That also happens with RHEL 8.6’s qemu.)

Hanna

Comment 3 qing.wang 2022-05-05 06:57:54 UTC
The guest boots successfully after adding queue-size=4,num-queues=1 to all virtio-blk-pci devices:


/usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 8G \
    -object memory-backend-ram,size=8G,id=mem-machine_mem  \
    -smp 16,maxcpus=16,cores=8,threads=1,dies=1,sockets=2  \
    -cpu 'EPYC-Rome',+kvm_pv_unhalt \
    \
    -blockdev node-name=file_image1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/rhel900-64-virtio.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,write-cache=on,bus=pcie-root-port-1,addr=0x0,queue-size=4,num-queues=1 \
    \
    -blockdev node-name=file_stg0,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg0.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg0,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg0 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-blk-pci,id=stg0,drive=drive_stg0,write-cache=on,bus=pcie-root-port-2,addr=0x0,queue-size=4,num-queues=1 \
    \
    -blockdev node-name=file_stg1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg1.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg1 \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-blk-pci,id=stg1,drive=drive_stg1,write-cache=on,bus=pcie-root-port-3,addr=0x0,queue-size=4,num-queues=1 \
    \
    -blockdev node-name=file_stg2,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg2.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg2,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg2 \
    -device pcie-root-port,id=pcie-root-port-4,port=0x4,addr=0x1.0x4,bus=pcie.0,chassis=5 \
    -device virtio-blk-pci,id=stg2,drive=drive_stg2,write-cache=on,bus=pcie-root-port-4,addr=0x0,queue-size=4,num-queues=1 \
    \
    -blockdev node-name=file_stg3,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg3.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg3,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg3 \
    -device pcie-root-port,id=pcie-root-port-5,port=0x5,addr=0x1.0x5,bus=pcie.0,chassis=6 \
    -device virtio-blk-pci,id=stg3,drive=drive_stg3,write-cache=on,bus=pcie-root-port-5,addr=0x0,queue-size=4,num-queues=1 \
    \
    -blockdev node-name=file_stg4,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg4.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg4,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg4 \
    -device pcie-root-port,id=pcie-root-port-6,port=0x6,addr=0x1.0x6,bus=pcie.0,chassis=7 \
    -device virtio-blk-pci,id=stg4,drive=drive_stg4,write-cache=on,bus=pcie-root-port-6,addr=0x0,queue-size=4,num-queues=1 \
    \
    -blockdev node-name=file_stg5,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg5.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg5,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg5 \
    -device pcie-root-port,id=pcie-root-port-7,port=0x7,addr=0x1.0x7,bus=pcie.0,chassis=8 \
    -device virtio-blk-pci,id=stg5,drive=drive_stg5,write-cache=on,bus=pcie-root-port-7,addr=0x0,queue-size=4,num-queues=1 \
    \
    -blockdev node-name=file_stg6,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg6.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg6,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg6 \
    -device pcie-root-port,id=pcie-root-port-8,port=0x8,multifunction=on,bus=pcie.0,addr=0x3,chassis=9 \
    -device virtio-blk-pci,id=stg6,drive=drive_stg6,write-cache=on,bus=pcie-root-port-8,addr=0x0,queue-size=4,num-queues=1 \
    \
    -blockdev node-name=file_stg7,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg7.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg7,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg7 \
    -device pcie-root-port,id=pcie-root-port-9,port=0x9,addr=0x3.0x1,bus=pcie.0,chassis=10 \
    -device virtio-blk-pci,id=stg7,drive=drive_stg7,write-cache=on,bus=pcie-root-port-9,addr=0x0,queue-size=4,num-queues=1 \
    \
    -blockdev node-name=file_stg8,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg8.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg8,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg8 \
    -device pcie-root-port,id=pcie-root-port-10,port=0xa,addr=0x3.0x2,bus=pcie.0,chassis=11 \
    -device virtio-blk-pci,id=stg8,drive=drive_stg8,write-cache=on,bus=pcie-root-port-10,addr=0x0,queue-size=4,num-queues=1 \
    \
    -blockdev node-name=file_stg9,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg9.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg9,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg9 \
    -device pcie-root-port,id=pcie-root-port-11,port=0xb,addr=0x3.0x3,bus=pcie.0,chassis=12 \
    -device virtio-blk-pci,id=stg9,drive=drive_stg9,write-cache=on,bus=pcie-root-port-11,addr=0x0,queue-size=4,num-queues=1 \
    \
    -blockdev node-name=file_stg10,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg10.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg10,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg10 \
    -device pcie-root-port,id=pcie-root-port-12,port=0xc,addr=0x3.0x4,bus=pcie.0,chassis=13 \
    -device virtio-blk-pci,id=stg10,drive=drive_stg10,write-cache=on,bus=pcie-root-port-12,addr=0x0,queue-size=4,num-queues=1 \
    \
    -device pcie-root-port,id=pcie-root-port-29,port=0x1d,addr=0x5.0x5,bus=pcie.0,chassis=30 \
    -device virtio-net-pci,mac=9a:bf:2f:a0:13:6c,id=idkGiUhA,netdev=idjf89nC,bus=pcie-root-port-29,addr=0x0  \
    -netdev tap,id=idjf89nC,vhost=on  \
    \
    -vnc :5 \
    -monitor stdio \
    -qmp tcp:0:5955,server=on,wait=off \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x6,chassis=31

Comment 4 Kevin Wolf 2022-05-06 10:10:59 UTC
*** Bug 2080930 has been marked as a duplicate of this bug. ***

Comment 5 Kevin Wolf 2022-05-06 10:24:23 UTC
(In reply to Hanna Reitz from comment #2)
> I presume the reason has something to do with the number of allocated stacks
> increasing, which causes so many fragmented memory regions to appear that
> the kernel doesn’t like.

Agreed, each coroutine adds two mappings for its stack, the stack itself and the guard page. If we create enough coroutines, we hit vm.max_map_count. Now, on my Fedora system this is 65530 by default, so we can have ~32k coroutines. That should be more than enough. Maybe we are leaking coroutines somewhere?

> Perhaps because of commit
> 4c41c69e05fe28c0f95f8abd2ebf407e95a4f04b (“util: adjust coroutine pool size
> to virtio block queue”)?

In theory, a larger pool size shouldn't increase the number of coroutines existing because the pool should only contain coroutines that were in use at the same time (otherwise they would have been allocated from the pool).

But you're right, there is one more detail to this: We're batching released coroutines before they go back to the allocation pool, and the batch size has been increased by this commit, too. It should probably only increase the alloc_pool size, but not the release_pool size, so that we can keep many allocated coroutines around, but don't let large numbers of coroutines accumulate in the release_pool before they are reused.
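The mapping arithmetic above can be checked directly on a host. A minimal sketch (assuming a Linux system, and taking the two-mappings-per-coroutine figure from the comment as given):

```shell
# Each coroutine stack costs two VMAs (the stack itself plus its
# PROT_NONE guard page), so the coroutine ceiling is roughly half
# of the kernel's per-process mapping limit.
max_maps=$(cat /proc/sys/vm/max_map_count)
echo "vm.max_map_count = ${max_maps}"
echo "approx. coroutine limit = $((max_maps / 2))"
```

With the default limit of 65530 this prints an approximate limit of 32765, matching the "~32k coroutines" figure above.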

Comment 6 Yanan Fu 2022-05-07 09:10:09 UTC
Hit the same issue during VM installation with qemu-kvm-7.0.0-2.el9.


One virtio-blk-pci device
Two ide-cd devices


Full command line:
 /usr/libexec/qemu-kvm \
    -S  \
    -name 'avocado-vt-vm1'  \
    -sandbox off  \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -device i6300esb,bus=pcie-pci-bridge-0,addr=0x1 \
    -watchdog-action reset \
    -m 14336 \
    -object memory-backend-ram,size=14336M,id=mem-machine_mem  \
    -smp 32,maxcpus=32,cores=16,threads=1,dies=1,sockets=2  \
    -cpu 'EPYC-Milan',x2apic=on,tsc-deadline=on,hypervisor=on,tsc-adjust=on,vaes=on,vpclmulqdq=on,spec-ctrl=on,stibp=on,arch-capabilities=on,ssbd=on,cmp-legacy=on,virt-ssbd=on,rdctl-no=on,skip-l1dfl-vmentry=on,mds-no=on,pschange-mc-no=on,kvm_pv_unhalt=on \
    -device intel-hda,bus=pcie-pci-bridge-0,addr=0x2 \
    -device hda-duplex \
    -chardev socket,path=/tmp/avocado_pho551ss/monitor-qmpmonitor1-20220507-002938-ynZo9emc,server=on,wait=off,id=qmp_id_qmpmonitor1  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,path=/tmp/avocado_pho551ss/monitor-catch_monitor-20220507-002938-ynZo9emc,server=on,wait=off,id=qmp_id_catch_monitor  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idrByIMw \
    -chardev socket,path=/tmp/avocado_pho551ss/serial-serial0-20220507-002938-ynZo9emc,server=on,wait=off,id=chardev_serial0 \
    -device isa-serial,id=serial0,chardev=chardev_serial0 \
    -object rng-random,filename=/dev/random,id=passthrough-CVOJa5xt \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device virtio-rng-pci,id=virtio-rng-pci-7DnjKxGR,rng=passthrough-CVOJa5xt,bus=pcie-root-port-1,addr=0x0  \
    -chardev socket,id=seabioslog_id_20220507-002938-ynZo9emc,path=/tmp/avocado_pho551ss/seabios-20220507-002938-ynZo9emc,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20220507-002938-ynZo9emc,iobase=0x402 \
    -device ich9-usb-ehci1,id=usb1,addr=0x1d.0x7,multifunction=on,bus=pcie.0 \
    -device ich9-usb-uhci1,id=usb1.0,multifunction=on,masterbus=usb1.0,addr=0x1d.0x0,firstport=0,bus=pcie.0 \
    -device ich9-usb-uhci2,id=usb1.1,multifunction=on,masterbus=usb1.0,addr=0x1d.0x2,firstport=2,bus=pcie.0 \
    -device ich9-usb-uhci3,id=usb1.2,multifunction=on,masterbus=usb1.0,addr=0x1d.0x4,firstport=4,bus=pcie.0 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device qemu-xhci,id=usb2,bus=pcie-root-port-2,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb2.0,port=1 \
    -blockdev node-name=file_image1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/rhel910-64-virtio.qcow2,cache.direct=off,cache.no-flush=on \
    -blockdev node-name=drive_image1,driver=qcow2,read-only=off,cache.direct=off,cache.no-flush=on,file=file_image1 \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=1,write-cache=on,bus=pcie-root-port-3,addr=0x0 \
    -device pcie-root-port,id=pcie-root-port-4,port=0x4,addr=0x1.0x4,bus=pcie.0,chassis=5 \
    -device virtio-net-pci,mac=9a:75:a9:de:b5:a1,id=idvHOcW0,netdev=idinQZFr,bus=pcie-root-port-4,addr=0x0  \
    -netdev tap,id=idinQZFr,vhost=on,vhostfd=19,fd=7 \
    -blockdev node-name=file_cd1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/iso/linux/RHEL-9.1.0-20220506.0-x86_64-dvd1.iso,cache.direct=off,cache.no-flush=on \
    -blockdev node-name=drive_cd1,driver=raw,read-only=on,cache.direct=off,cache.no-flush=on,file=file_cd1 \
    -device ide-cd,id=cd1,drive=drive_cd1,bootindex=2,write-cache=on,bus=ide.0,unit=0 \
    -blockdev node-name=file_unattended,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/rhel910-64/ks.iso,cache.direct=off,cache.no-flush=on \
    -blockdev node-name=drive_unattended,driver=raw,read-only=on,cache.direct=off,cache.no-flush=on,file=file_unattended \
    -device ide-cd,id=unattended,drive=drive_unattended,bootindex=3,write-cache=on,bus=ide.1,unit=0  \
    -kernel '/home/kvm_autotest_root/images/rhel910-64/vmlinuz'  \
    -append 'inst.sshd inst.repo=cdrom inst.ks=cdrom:/ks.cfg net.ifnames=0 console=ttyS0,115200'  \
    -initrd '/home/kvm_autotest_root/images/rhel910-64/initrd.img'  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=d,strict=off \
    -smbios type=1,product=gating \
    -enable-kvm \
    -device pcie-root-port,id=pcie-root-port-5,port=0x5,addr=0x1.0x5,bus=pcie.0,chassis=6 \
    -device virtio-balloon-pci,id=balloon0,bus=pcie-root-port-5,addr=0x0 \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=7

Comment 7 Kevin Wolf 2022-05-09 10:09:43 UTC
Can you confirm that this happens only with virtio-blk?

Especially for the configuration in comment 6, can you provide the output for 'cat /proc/sys/vm/max_map_count'? A single virtio-blk device with 32 queues shouldn't normally quite exceed the maximum number of mappings yet without iothreads, though getting somewhat close.
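One way to collect the numbers requested here (a sketch, not from the report: it assumes a Linux host, and takes the qemu-kvm process name from the command lines above):

```shell
# Print the kernel's per-process mapping limit, then the number of
# mappings the running qemu-kvm process currently holds (each line
# of /proc/PID/maps is one VMA).
cat /proc/sys/vm/max_map_count
pid=$(pgrep -o qemu-kvm || true)
if [ -n "$pid" ]; then
    wc -l < "/proc/${pid}/maps"
else
    echo "no qemu-kvm process running"
fi
```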

Comment 8 Lukas Kotek 2022-05-09 13:25:02 UTC
(In reply to Kevin Wolf from comment #7)
> Can you confirm that this happens only with virtio-blk?
> 
> Especially for the configuration in comment 6, can you provide the output
> for 'cat /proc/sys/vm/max_map_count'? A single virtio-blk device with 32
> queues shouldn't normally quite exceed the maximum number of mappings yet
> without iothreads, though getting somewhat close.

I saw this behaviour multiple times, including in a recent gating-stage job run with Windows guests. While installation using virtio-blk failed multiple times, it always worked with virtio-scsi.

(I put details I observed into description of https://bugzilla.redhat.com/show_bug.cgi?id=2080930)

Comment 9 Kevin Wolf 2022-05-10 15:45:20 UTC
Thanks Lukas, this confirms that I was looking at the right thing.

I sent patches upstream:
https://lists.gnu.org/archive/html/qemu-block/2022-05/msg00252.html

Comment 15 xiagao 2022-05-19 02:23:01 UTC
Hit the same issue on qemu-kvm-7.0.0-3.el9.x86_64 with 1 ide-hd, 1 virtio-blk-pci, 1 virtio-scsi-pci, and 2 ide-cd devices.

MALLOC_PERTURB_=1  /usr/libexec/qemu-kvm \
    -S  \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35 \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 4096 \
    -object memory-backend-file,size=4G,mem-path=/dev/shm,share=yes,id=mem-mem1  \
    -smp 24,maxcpus=24,cores=12,threads=1,dies=1,sockets=2  \
    -numa node,memdev=mem-mem1,nodeid=0  \
    -cpu 'Skylake-Server-IBRS',ss=on,vmx=on,pdcm=on,hypervisor=on,tsc-adjust=on,clflushopt=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,ssbd=on,xsaves=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rsba=on,skip-l1dfl-vmentry=on,pschange-mc-no=on,hv_stimer,hv_synic,hv_vpindex,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_frequencies,hv_runtime,hv_tlbflush,hv_reenlightenment,hv_stimer_direct,hv_ipi,kvm_pv_unhalt=on \
    -chardev socket,wait=off,path=/tmp/avocado_xgd9ivaj/monitor-qmpmonitor1-20220518-212003-MykgqTb6,id=qmp_id_qmpmonitor1,server=on  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,wait=off,path=/tmp/avocado_xgd9ivaj/monitor-catch_monitor-20220518-212003-MykgqTb6,id=qmp_id_catch_monitor,server=on  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idEJ7FVj \
    -chardev socket,wait=off,path=/tmp/avocado_xgd9ivaj/serial-serial0-20220518-212003-MykgqTb6,id=chardev_serial0,server=on \
    -device isa-serial,id=serial0,chardev=chardev_serial0 \
    -chardev socket,wait=off,path=/tmp/avocado_xgd9ivaj/serial-vs-20220518-212003-MykgqTb6,id=chardev_vs,server=on \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device virtio-serial-pci,id=virtio_serial_pci0,bus=pcie-root-port-1,addr=0x0 \
    -device virtserialport,id=vs,name=vs,chardev=chardev_vs,bus=virtio_serial_pci0.0,nr=1 \
    -object rng-builtin,id=builtin-6k9dp7R4 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-rng-pci,id=virtio-rng-pci-S5fVOzF1,rng=builtin-6k9dp7R4,bus=pcie-root-port-2,addr=0x0  \
    -chardev socket,id=seabioslog_id_20220518-212003-MykgqTb6,path=/tmp/avocado_xgd9ivaj/seabios-20220518-212003-MykgqTb6,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20220518-212003-MykgqTb6,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-3,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -blockdev node-name=file_image1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/win10-64-virtio-scsi_avocado-vt-vm1.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device ide-hd,id=image1,drive=drive_image1,write-cache=on,bus=ide.0,unit=0 \
    -blockdev node-name=file_stg0,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/stg0.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg0,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg0 \
    -device pcie-root-port,id=pcie-root-port-4,port=0x4,addr=0x1.0x4,bus=pcie.0,chassis=5 \
    -device virtio-blk-pci,id=stg0,drive=drive_stg0,write-cache=on,bus=pcie-root-port-4,addr=0x0 \
    -device pcie-root-port,id=pcie-root-port-5,port=0x5,addr=0x1.0x5,bus=pcie.0,chassis=6 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-root-port-5,addr=0x0 \
    -blockdev node-name=file_stg1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/stg1.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg1 \
    -device scsi-hd,id=stg1,drive=drive_stg1,write-cache=on \
    -chardev socket,id=char_virtiofs_fs,path=/tmp/avocado_xgd9ivaj/avocado-vt-vm1-fs-virtiofsd.sock \
    -device pcie-root-port,id=pcie-root-port-6,port=0x6,addr=0x1.0x6,bus=pcie.0,chassis=7 \
    -device vhost-user-fs-pci,id=vufs_virtiofs_fs,chardev=char_virtiofs_fs,tag=myfs,queue-size=1024,bus=pcie-root-port-6,addr=0x0 \
    -device pcie-root-port,id=pcie-root-port-7,port=0x7,addr=0x1.0x7,bus=pcie.0,chassis=8 \
    -device e1000e,mac=9a:99:96:08:12:01,id=iduswwYt,netdev=idZKK4jX,bus=pcie-root-port-7,addr=0x0  \
    -netdev tap,id=idZKK4jX,vhost=on,vhostfd=19,fd=15 \
    -device pcie-root-port,id=pcie-root-port-8,port=0x8,multifunction=on,bus=pcie.0,addr=0x3,chassis=9 \
    -device virtio-net-pci,mac=9a:99:96:08:12:02,id=idXqIMT8,netdev=idZVtjn7,bus=pcie-root-port-8,addr=0x0  \
    -netdev tap,id=idZVtjn7,vhost=on,vhostfd=21,fd=16 \
    -blockdev node-name=file_cd1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/iso/windows/winutils.iso,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_cd1,driver=raw,read-only=on,cache.direct=on,cache.no-flush=off,file=file_cd1 \
    -device ide-cd,id=cd1,drive=drive_cd1,write-cache=on,bus=ide.1,unit=0 \
    -blockdev node-name=file_virtio,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/iso/windows/virtio-win-1.9.25-2.el9_0.iso,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_virtio,driver=raw,read-only=on,cache.direct=on,cache.no-flush=off,file=file_virtio \
    -device ide-cd,id=virtio,drive=drive_virtio,write-cache=on,bus=ide.2,unit=0  \
    -vnc :0  \
    -rtc base=localtime,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie-root-port-9,port=0x9,addr=0x3.0x1,bus=pcie.0,chassis=10 \
    -device virtio-mouse-pci,id=input_input1,bus=pcie-root-port-9,addr=0x0 \
    -device pcie-root-port,id=pcie-root-port-10,port=0xa,addr=0x3.0x2,bus=pcie.0,chassis=11 \
    -device virtio-balloon-pci,id=balloon0,bus=pcie-root-port-10,addr=0x0 \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x4,chassis=12

Comment 17 Yanan Fu 2022-05-23 05:43:32 UTC
QE bot(pre verify): Set 'Verified:Tested,SanityOnly' as gating/tier1 test pass.

Comment 20 qing.wang 2022-05-24 08:09:32 UTC
Passed test on:
Red Hat Enterprise Linux release 9.1 Beta (Plow)
5.14.0-96.el9.x86_64
qemu-kvm-7.0.0-4.el9.x86_64
seabios-bin-1.16.0-2.el9.noarch
edk2-ovmf-20220221gitb24306f15d-1.el9.noarch
virtio-win-prewhql-0.1-219.iso


Boot succeeded:

/usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 8G \
    -object memory-backend-ram,size=8G,id=mem-machine_mem  \
    -smp 16,maxcpus=16,cores=8,threads=1,dies=1,sockets=2  \
    -cpu 'EPYC-Rome',+kvm_pv_unhalt \
    \
    -blockdev node-name=file_image1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/rhel900-64-virtio.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,write-cache=on,bus=pcie-root-port-1,addr=0x0 \
    -blockdev node-name=file_stg0,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg0.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg0,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg0 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-blk-pci,id=stg0,drive=drive_stg0,write-cache=on,bus=pcie-root-port-2,addr=0x0 \
    -blockdev node-name=file_stg1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg1.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg1 \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-blk-pci,id=stg1,drive=drive_stg1,write-cache=on,bus=pcie-root-port-3,addr=0x0 \
    -blockdev node-name=file_stg2,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg2.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg2,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg2 \
    -device pcie-root-port,id=pcie-root-port-4,port=0x4,addr=0x1.0x4,bus=pcie.0,chassis=5 \
    -device virtio-blk-pci,id=stg2,drive=drive_stg2,write-cache=on,bus=pcie-root-port-4,addr=0x0 \
    -blockdev node-name=file_stg3,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg3.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg3,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg3 \
    -device pcie-root-port,id=pcie-root-port-5,port=0x5,addr=0x1.0x5,bus=pcie.0,chassis=6 \
    -device virtio-blk-pci,id=stg3,drive=drive_stg3,write-cache=on,bus=pcie-root-port-5,addr=0x0 \
    -blockdev node-name=file_stg4,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg4.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg4,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg4 \
    -device pcie-root-port,id=pcie-root-port-6,port=0x6,addr=0x1.0x6,bus=pcie.0,chassis=7 \
    -device virtio-blk-pci,id=stg4,drive=drive_stg4,write-cache=on,bus=pcie-root-port-6,addr=0x0 \
    -blockdev node-name=file_stg5,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg5.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg5,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg5 \
    -device pcie-root-port,id=pcie-root-port-7,port=0x7,addr=0x1.0x7,bus=pcie.0,chassis=8 \
    -device virtio-blk-pci,id=stg5,drive=drive_stg5,write-cache=on,bus=pcie-root-port-7,addr=0x0 \
    \
    -blockdev node-name=file_stg6,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg6.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg6,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg6 \
    -device pcie-root-port,id=pcie-root-port-8,port=0x8,multifunction=on,bus=pcie.0,addr=0x3,chassis=9 \
    -device virtio-blk-pci,id=stg6,drive=drive_stg6,write-cache=on,bus=pcie-root-port-8,addr=0x0 \
    -blockdev node-name=file_stg7,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg7.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg7,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg7 \
    -device pcie-root-port,id=pcie-root-port-9,port=0x9,addr=0x3.0x1,bus=pcie.0,chassis=10 \
    -device virtio-blk-pci,id=stg7,drive=drive_stg7,write-cache=on,bus=pcie-root-port-9,addr=0x0 \
    -blockdev node-name=file_stg8,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg8.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg8,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg8 \
    -device pcie-root-port,id=pcie-root-port-10,port=0xa,addr=0x3.0x2,bus=pcie.0,chassis=11 \
    -device virtio-blk-pci,id=stg8,drive=drive_stg8,write-cache=on,bus=pcie-root-port-10,addr=0x0 \
    -blockdev node-name=file_stg9,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg9.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg9,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg9 \
    -device pcie-root-port,id=pcie-root-port-11,port=0xb,addr=0x3.0x3,bus=pcie.0,chassis=12 \
    -device virtio-blk-pci,id=stg9,drive=drive_stg9,write-cache=on,bus=pcie-root-port-11,addr=0x0 \
    -blockdev node-name=file_stg10,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/mstg10.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg10,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_stg10 \
    -device pcie-root-port,id=pcie-root-port-12,port=0xc,addr=0x3.0x4,bus=pcie.0,chassis=13 \
    -device virtio-blk-pci,id=stg10,drive=drive_stg10,write-cache=on,bus=pcie-root-port-12,addr=0x0 \
    \
    -device pcie-root-port,id=pcie-root-port-29,port=0x1d,addr=0x5.0x5,bus=pcie.0,chassis=30 \
    -device virtio-net-pci,mac=9a:bf:2f:a0:13:6c,id=idkGiUhA,netdev=idjf89nC,bus=pcie-root-port-29,addr=0x0  \
    -netdev tap,id=idjf89nC,vhost=on  \
    \
    -vnc :5 \
    -monitor stdio \
    -qmp tcp:0:5955,server=on,wait=off \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x6,chassis=31
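The original failure ("failed to set up stack guard page: Cannot allocate memory") is the error QEMU reports when the mmap() for a coroutine stack's guard page fails, which can happen once the process exhausts the kernel's per-process mapping limit (vm.max_map_count); each additional disk adds I/O coroutines and therefore mappings. A minimal diagnostic sketch for reproducing environments, assuming a Linux host and a running qemu-kvm process (the process name and paths are illustrative):

```shell
#!/bin/sh
# Compare a running qemu-kvm process's mapping count against the
# kernel's per-process limit. Approaching the limit is consistent
# with mmap() failing for coroutine stack guard pages.
limit=$(cat /proc/sys/vm/max_map_count)
pid=$(pgrep -o qemu-kvm || true)   # oldest qemu-kvm process, if any
if [ -n "$pid" ]; then
    # Each line in /proc/<pid>/maps is one memory mapping.
    used=$(wc -l < /proc/"$pid"/maps)
    echo "qemu-kvm pid $pid uses $used of $limit mappings"
else
    echo "no qemu-kvm process found; vm.max_map_count limit is $limit"
fi
```

If a guest legitimately needs more mappings than the default allows, raising the limit (e.g. `sysctl vm.max_map_count=<larger value>`) is a common host-side workaround, independent of the qemu-kvm fix shipped in the advisory below.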

Comment 22 errata-xmlrpc 2022-11-15 09:54:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: qemu-kvm security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:7967
