System Information
Linux distribution
Fedora release 41 (Forty One)
Terraform version
terraform -v
Terraform v1.10.3
on linux_amd64
+ provider registry.terraform.io/dmacvicar/libvirt v0.8.1
Provider and libvirt versions
terraform-provider-libvirt -version
v0.8.1
If that gives you "was not built correctly", get the Git commit hash from your local provider repository:
Using the official provider, not from git.
Checklist
Is your issue/contribution related to enabling a setting/option exposed by libvirt that the plugin does not yet support, or does it require changing/extending the provider's Terraform schema?
Make sure you explain why this option is important to you and why it should be important to everyone. Describe your use case in detail and provide examples where possible.
If it is a very special case, consider using the XSLT support in the provider to tweak the definition instead of opening an issue.
Maintainers do not have expertise in every libvirt setting, so please describe the feature and how it is used, and link to the appropriate documentation.
Is it a bug or something that does not work as expected? Please make sure you fill the version information below:
Description of Issue/Question
With a pretty standard install of Fedora 41 (SELinux disabled for testing, no AppArmor), most of my disk is assigned to /home, and a minimal amount to / (root).
So if I want to build out bigger setups, using the default volume pool is kind of out of the question.
I set up a volume pool at "${abspath(path.root)}/pool", but during terraform apply I get:
Plan: 4 to add, 0 to change, 0 to destroy.
libvirt_pool.local_pool: Creating...
libvirt_pool.local_pool: Creation complete after 0s [id=a175f530-c2ff-4ffc-94b3-1dbe0e40839a]
libvirt_volume.fedora_cloud: Creating...
libvirt_volume.fedora_cloud: Creation complete after 1s [id=/home/bhundven/Projects/github.com/bhundven/libvirt_xslt_bug/pool/fedora_cloud_41]
libvirt_volume.os_disk: Creating...
libvirt_volume.os_disk: Creation complete after 0s [id=/home/bhundven/Projects/github.com/bhundven/libvirt_xslt_bug/pool/os_disk]
libvirt_domain.fedora_vm: Creating...
╷
│ Error: error creating libvirt domain: Cannot access storage file '/home/bhundven/Projects/github.com/bhundven/libvirt_xslt_bug/pool/os_disk' (as uid:107, gid:107): Permission denied
│
│ with libvirt_domain.fedora_vm,
│ on main.tf line 37, in resource "libvirt_domain" "fedora_vm":
│ 37: resource "libvirt_domain" "fedora_vm" {
│
╵
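A note on this class of error: when qemu (uid 107) reports `Permission denied` on a file it can clearly see being created, the blocker is usually a parent directory without the execute (search) bit, not the file or pool itself — and on Fedora, `/home/$USER` defaults to mode 0700. One way to diagnose is `namei -l` on the failing path (the real path from the error above); the sketch below reproduces the effect with a scratch directory instead:

```shell
# Inspect the mode of every directory component on the failing path, e.g.:
#   namei -l /home/bhundven/.../pool/os_disk
# Any component without o+x blocks uid 107 from reaching the volume.

# Reproduce the effect with a scratch directory:
tmp=$(mktemp -d)
chmod 0700 "$tmp"        # same mode as a default /home/$USER on Fedora
mkdir "$tmp/pool"
stat -c '%a' "$tmp"      # prints 700: "other" lacks the search bit
chmod o+x "$tmp"         # minimal fix, applied to the parent directory
stat -c '%a' "$tmp"      # prints 701
rm -rf "$tmp"
```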
virsh pool-dumpxml --pool local_pool outputs:
<pool type='dir'>
  <name>local_pool</name>
  <uuid>[a uuid]</uuid>
  <capacity unit='bytes'>[lots of storage]</capacity>
  <allocation unit='bytes'>[lots of storage]</allocation>
  <available unit='bytes'>[lots of storage]</available>
  <source>
  </source>
  <target>
    <path>/home/bhundven/Projects/github.com/bhundven/libvirt_xslt_bug/pool</path>
    <permissions>
      <mode>0711</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>
and virsh vol-dumpxml --pool local_pool --vol os_disk outputs:
The thing I don't understand is that libvirt is creating the pool directory in the same directory as main.tf. It can create the disk, but it can't attach the disk to the domain?
Ok, so let's XSLT this (maybe the wrong way?). On Fedora, uid/gid 107 is qemu. If I add:
xml {
  xslt = file("${abspath(path.root)}/pool_permissions.xsl")
}
to the pool, and the equivalent volume_permissions.xsl to the volume(s), then I should be able to change the permissions, right? Testing those XSLT files before jumping to conclusions looks good to me, but pool-dumpxml and vol-dumpxml show the same old root permissions, and I get the exact same error as before.
It's possible that something is wrong with my XSLT, or that I have something wacky going on. This install has been release-upgraded since Fedora 37 Workstation.
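The contents of pool_permissions.xsl are not included in the issue. For reference, a stylesheet that rewrites the owner/group in the pool XML would typically be an identity transform plus targeted overrides; the following is a hypothetical reconstruction of that shape, not the reporter's actual file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Identity transform: copy everything through unchanged... -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>

  <!-- ...except owner/group under <permissions>, forced to qemu (107). -->
  <xsl:template match="permissions/owner/text()">107</xsl:template>
  <xsl:template match="permissions/group/text()">107</xsl:template>
</xsl:stylesheet>
```

Note that even if the provider applies such a transform to the XML it submits, libvirt decides how pool directory permissions are actually realized on disk, so the transform succeeding in isolation does not guarantee the on-disk modes change.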
Setup
terraform {
  required_version = "~> 1.10"

  required_providers {
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "0.8.1"
    }
  }
}

provider "libvirt" {
  uri = "qemu:///system"
}

# Most of my disk is partitioned to use /home, so...
resource "libvirt_pool" "local_pool" {
  name = "local_pool"
  type = "dir"

  target {
    path = "${abspath(path.root)}/pool"
  }

  # See above for pool_permissions.xsl
  #xml {
  #  xslt = file("${abspath(path.root)}/pool_permissions.xsl")
  #}
}

resource "libvirt_volume" "fedora_cloud" {
  name   = "fedora_cloud_41"
  source = "/home/bhundven/Downloads/Fedora-Cloud-Base-Generic-41-1.4.x86_64.qcow2"
  pool   = libvirt_pool.local_pool.name

  # See above for volume_permissions.xsl
  #xml {
  #  xslt = file("${abspath(path.root)}/volume_permissions.xsl")
  #}
}

# not using a cloud-init to resize, but this is just an example...
resource "libvirt_volume" "os_disk" {
  name           = "os_disk"
  base_volume_id = libvirt_volume.fedora_cloud.id
  size           = "42949672960" # 40GB
  pool           = libvirt_pool.local_pool.name

  # See above for volume_permissions.xsl
  #xml {
  #  xslt = file("${abspath(path.root)}/volume_permissions.xsl")
  #}
}

resource "libvirt_domain" "fedora_vm" {
  name       = "Fedora VM"
  memory     = "4096"
  vcpu       = 2
  qemu_agent = true
  machine    = "q35"
  autostart  = true

  console {
    type        = "pty"
    target_port = "0"
    target_type = "serial"
  }

  console {
    type        = "pty"
    target_type = "virtio"
    target_port = "1"
  }

  graphics {
    type        = "spice"
    listen_type = "address"
    autoport    = true
  }

  network_interface {
    network_name   = "default"
    hostname       = "fedora41"
    wait_for_lease = true
  }

  disk {
    volume_id = libvirt_volume.os_disk.id
  }
}
Steps to Reproduce Issue
All provided above.
Additional information:
Do you have SELinux or Apparmor/Firewall enabled? Some special configuration?
Have you tried to reproduce the issue without them enabled?
SELinux has been disabled in /etc/selinux/config (and the machine rebooted to take effect); no AppArmor. I can't imagine that a firewall is blocking, but:
$ sudo iptables -nvL
[sudo] password for bhundven:
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
So if I change the pool path to /srv/vmpool (0755), that works. I assume this is because /home/bhundven is 0700.
I'm still curious why you cannot change the permissions of the pool/volume(s)?
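If moving the pool out of /home is undesirable, a narrower alternative than making the home directory world-searchable is a POSIX ACL granting only the qemu user the search bit. This is a sketch, assuming the filesystem is mounted with ACL support (the Fedora defaults are):

```shell
# Grant only the qemu user (uid 107 on Fedora) permission to traverse the
# home directory; the base mode stays 0700 for everyone else.
setfacl -m u:qemu:x /home/bhundven

# Verify: getfacl lists the extra entry alongside the base permissions.
getfacl -p /home/bhundven
```

The same `setfacl` pattern can be applied to any intermediate directory that `namei -l` shows as lacking o+x.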