<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <h1>KVM/QEMU hypervisor driver</h1>

    <ul id="toc"></ul>

    <p>
      The libvirt KVM/QEMU driver can manage any QEMU emulator from
      version 0.12.0 or later.
    </p>

    <h2><a name="project">Project Links</a></h2>

    <ul>
      <li>
        The <a href="http://www.linux-kvm.org/">KVM</a> Linux
        hypervisor
      </li>
      <li>
        The <a href="http://wiki.qemu.org/Index.html">QEMU</a> emulator
      </li>
    </ul>

    <h2><a name="prereq">Deployment pre-requisites</a></h2>

    <ul>
      <li>
        <strong>QEMU emulators</strong>: The driver will probe <code>/usr/bin</code>
        for the presence of <code>qemu</code>, <code>qemu-system-x86_64</code>,
        <code>qemu-system-microblaze</code>,
        <code>qemu-system-microblazeel</code>,
        <code>qemu-system-mips</code>, <code>qemu-system-mipsel</code>,
        <code>qemu-system-sparc</code>, <code>qemu-system-ppc</code>. The results
        of this probing can be seen in the capabilities XML output (see
        the example after this list).
      </li>
      <li>
        <strong>KVM hypervisor</strong>: The driver will probe <code>/usr/bin</code>
        for the presence of <code>qemu-kvm</code> and the <code>/dev/kvm</code> device
        node. If both are found, then KVM fully virtualized, hardware accelerated
        guests will be available.
      </li>
    </ul>
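
    <p>
      For example, the probe results can be inspected by dumping the
      capabilities XML (a usage sketch; the system instance URI is
      shown, but any connection works):
    </p>

<pre>
virsh -c qemu:///system capabilities
</pre>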

    <h2><a name="uris">Connections to QEMU driver</a></h2>

    <p>
    The libvirt QEMU driver is a multi-instance driver, providing a single
    system wide privileged driver (the "system" instance), and per-user
    unprivileged drivers (the "session" instance). The URI driver protocol
    is "qemu". Some example connection URIs for the libvirt driver are:
    </p>

<pre>
qemu:///session                      (local access to per-user instance)
qemu+unix:///session                 (local access to per-user instance)

qemu:///system                       (local access to system instance)
qemu+unix:///system                  (local access to system instance)
qemu://example.com/system            (remote access, TLS/x509)
qemu+tcp://example.com/system        (remote access, SASL/Kerberos)
qemu+ssh://root@example.com/system   (remote access, SSH tunnelled)
</pre>
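
    <p>
      For example, to list all guests managed by the privileged system
      instance (a usage sketch):
    </p>

<pre>
virsh -c qemu:///system list --all
</pre>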

    <h2><a name="security">Driver security architecture</a></h2>

    <p>
      There are multiple layers to security in the QEMU driver, allowing for
      flexibility in the use of QEMU based virtual machines.
    </p>

    <h3><a name="securitydriver">Driver instances</a></h3>

    <p>
      As explained above there are two ways to access the QEMU driver
      in libvirt. The "qemu:///session" family of URIs connect to a
      libvirtd instance running as the same user/group ID as the client
      application. Thus the QEMU instances spawned from this driver will
      share the same privileges as the client application. The intended
      use case for this driver is desktop virtualization, with virtual
      machines storing their disk images in the user's home directory and
      being managed from the local desktop login session.
    </p>

    <p>
      The "qemu:///system" family of URIs connect to a
      libvirtd instance running as the privileged system account 'root'.
      Thus the QEMU instances spawned from this driver may have much
      higher privileges than the client application managing them.
      The intended use case for this driver is server virtualization,
      where the virtual machines may need to be connected to host
      resources (block, PCI, USB, network devices) whose access requires
      elevated privileges.
    </p>

    <h3><a name="securitydac">POSIX users/groups</a></h3>

    <p>
      In the "session" instance, the POSIX users/groups model restricts QEMU
      virtual machines (and libvirtd in general) to only have access to resources
      with the same user/group ID as the client application. There is no
      finer level of configuration possible for the "session" instances.
    </p>

    <p>
      In the "system" instance, libvirt releases from 0.7.0 onwards allow
      control over the user/group that the QEMU virtual machines are run
      as. A build of libvirt with no configuration parameters set will
      still run QEMU processes as root:root. It is possible to change
      this default by using the --with-qemu-user=$USERNAME and
      --with-qemu-group=$GROUPNAME arguments to 'configure' during
      build. It is strongly recommended that vendors build with both
      of these arguments set to 'qemu'. Regardless of this build time
      default, administrators can set a per-host default setting in
      the <code>/etc/libvirt/qemu.conf</code> configuration file via
      the <code>user=$USERNAME</code> and <code>group=$GROUPNAME</code>
      parameters. When a non-root user or group is configured, the
      libvirt QEMU driver will change uid/gid to match immediately
      before executing the QEMU binary for a virtual machine.
    </p>
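
    <p>
      For example, a minimal per-host override in
      <code>/etc/libvirt/qemu.conf</code> might look like:
    </p>

<pre>
# /etc/libvirt/qemu.conf
user = "qemu"
group = "qemu"
</pre>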

    <p>
      If QEMU virtual machines from the "system" instance are being
      run as non-root, there will be greater restrictions on what
      host resources the QEMU process will be able to access. The
      libvirtd daemon will attempt to manage permissions on resources
      to minimise the likelihood of unintentional security denials,
      but the administrator / application developer must be aware of
      some of the consequences / restrictions.
    </p>

    <ul>
      <li>
        <p>
          The directories <code>/var/run/libvirt/qemu/</code>,
          <code>/var/lib/libvirt/qemu/</code> and
          <code>/var/cache/libvirt/qemu/</code> must all have their
          ownership set to match the user / group ID that QEMU
          guests will be run as. If the vendor has set a non-root
          user/group for the QEMU driver at build time, the
          permissions should be set automatically at install time.
          If a host administrator customizes user/group in
          <code>/etc/libvirt/qemu.conf</code>, they will need to
          manually set the ownership on these directories.
        </p>
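        <p>
          For example, if the configured user and group are both
          'qemu', ownership could be corrected with (a sketch):
        </p>
<pre>
chown -R qemu:qemu /var/run/libvirt/qemu/ \
      /var/lib/libvirt/qemu/ /var/cache/libvirt/qemu/
</pre>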
      </li>
      <li>
        <p>
          When attaching USB and PCI devices to a QEMU guest,
          QEMU will need to access files in <code>/dev/bus/usb</code>
          and <code>/sys/bus/pci/devices</code> respectively. The libvirtd daemon
          will automatically set the ownership on specific devices
          that are assigned to a guest at start time. There should
          not be any need for administrator changes in this respect.
        </p>
      </li>
      <li>
        <p>
          Any files/devices used as guest disk images must be
          accessible to the user/group ID that QEMU guests are
          configured to run as. The libvirtd daemon will automatically
          set the ownership of the file/device path to the correct
          user/group ID. Applications / administrators must be aware
          though that the parent directory permissions may still
          deny access. The directories containing disk images
          must either have their ownership set to match the user/group
          configured for QEMU, or their UNIX file permissions must
          have the 'execute/search' bit enabled for 'others'.
        </p>
        <p>
          The simplest option is the latter one, of just enabling
          the 'execute/search' bit. For any directory to be used
          for storing disk images, this can be achieved by running
          the following command on the directory itself, and any
          parent directories:
        </p>
<pre>
chmod o+x /path/to/directory
</pre>
        <p>
          In particular note that if using the "system" instance
          and attempting to store disk images in a user home
          directory, the default permissions on $HOME are typically
          too restrictive to allow access.
        </p>
      </li>
    </ul>

    <h3><a name="securitycap">Linux process capabilities</a></h3>

    <p>
      The libvirt QEMU driver has a build time option allowing it to use
      the <a href="http://people.redhat.com/sgrubb/libcap-ng/index.html">libcap-ng</a>
      library to manage process capabilities. If this build option is
      enabled, then the QEMU driver will use this to ensure that all
      process capabilities are dropped before executing a QEMU virtual
      machine. Process capabilities are what give the 'root' account
      its high power; in particular the CAP_DAC_OVERRIDE capability
      is what allows a process running as 'root' to access files owned
      by any user.
    </p>

    <p>
      If the QEMU driver is configured to run virtual machines as non-root,
      then they will already lose all their process capabilities at time
      of startup. The Linux capability feature is thus aimed primarily at
      the scenario where the QEMU processes are running as root. In this
      case, before launching a QEMU virtual machine, libvirtd will use
      libcap-ng APIs to drop all process capabilities. It is important
      for administrators to note that this implies the QEMU process will
      <strong>only</strong> be able to access files owned by root, and
      not files owned by any other user.
    </p>

    <p>
      Thus, if a vendor / distributor has configured their libvirt package
      to run as 'qemu' by default, a number of changes will be required
      before an administrator can change a host to run guests as root.
      In particular it will be necessary to change ownership on the
      directories <code>/var/run/libvirt/qemu/</code>,
      <code>/var/lib/libvirt/qemu/</code> and
      <code>/var/cache/libvirt/qemu/</code> back to root, in addition
      to changing the <code>/etc/libvirt/qemu.conf</code> settings.
    </p>

    <h3><a name="securityselinux">SELinux basic confinement</a></h3>

    <p>
      The basic SELinux protection for QEMU virtual machines is intended to
      protect the host OS from a compromised virtual machine process. There
      is no protection between guests.
    </p>

    <p>
      In the basic model, all QEMU virtual machines run under the confined
      domain <code>root:system_r:qemu_t</code>. It is required that any
      disk image assigned to a QEMU virtual machine is labelled with
      <code>system_u:object_r:virt_image_t</code>. In a default deployment,
      package vendors/distributors will typically ensure that the directory
      <code>/var/lib/libvirt/images</code> has this label, such that any
      disk images created in this directory will automatically inherit the
      correct labelling. If attempting to use disk images in another
      location, the user/administrator must ensure the directory has been
      given the requisite label. Likewise physical block devices must
      be labelled <code>system_u:object_r:virt_image_t</code>.
    </p>
    <p>
      Not all filesystems allow for labelling of individual files. In
      particular NFS, VFat and NTFS have no support for labelling. In
      these cases administrators must use the 'context' option when
      mounting the filesystem to set the default label to
      <code>system_u:object_r:virt_image_t</code>. In the case of
      NFS, there is an alternative option, of enabling the <code>virt_use_nfs</code>
      SELinux boolean.
    </p>
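
    <p>
      For example, a custom image directory can be given a persistent
      label, and an NFS mount a default context (a sketch; the
      <code>/srv/images</code> path and the NFS server shown are
      hypothetical):
    </p>

<pre>
# persistently label a custom disk image directory
semanage fcontext -a -t virt_image_t "/srv/images(/.*)?"
restorecon -R /srv/images

# set a default label for all files on an NFS mount
mount -o context=system_u:object_r:virt_image_t:s0 \
      nfs.example.com:/export/images /var/lib/libvirt/images
</pre>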

    <h3><a name="securitysvirt">SELinux sVirt confinement</a></h3>

    <p>
      The SELinux sVirt protection for QEMU virtual machines builds on the
      basic level of protection, to also allow individual guests to be
      protected from each other.
    </p>

    <p>
      In the sVirt model, each QEMU virtual machine runs under its own
      confined domain, which is based on <code>system_u:system_r:svirt_t:s0</code>
      with a unique category appended, eg, <code>system_u:system_r:svirt_t:s0:c34,c44</code>.
      The rules are setup such that a domain can only access files which are
      labelled with the matching category level, eg
      <code>system_u:object_r:svirt_image_t:s0:c34,c44</code>. This prevents one
      QEMU process from accessing any file resources that are private to another QEMU
      process.
    </p>

    <p>
      There are two ways of assigning labels to virtual machines under sVirt.
      In the default setup, if sVirt is enabled, guests will get an automatically
      assigned unique label each time they are booted. The libvirtd daemon will
      also automatically relabel exclusive access disk images to match this
      label.  Disks that are marked as &lt;shared&gt; will get a generic
      label <code>system_u:system_r:svirt_image_t:s0</code> allowing all guests
      read/write access to them, while disks marked as &lt;readonly&gt; will
      get a generic label <code>system_u:system_r:svirt_content_t:s0</code>
      which allows all guests read-only access.
    </p>

    <p>
      With statically assigned labels, the application should include the
      desired guest and file labels in the XML at time of creating the
      guest with libvirt. In this scenario the application is responsible
      for ensuring the disk images &amp; similar resources are suitably
      labelled to match, libvirtd will not attempt any relabelling.
    </p>
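
    <p>
      A minimal sketch of a statically labelled guest, using the
      <code>&lt;seclabel&gt;</code> element of the domain XML (the
      category pair shown is illustrative):
    </p>

<pre>
&lt;seclabel type='static' model='selinux'&gt;
  &lt;label&gt;system_u:system_r:svirt_t:s0:c392,c662&lt;/label&gt;
&lt;/seclabel&gt;
</pre>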

    <p>
      If the sVirt security model is active, then the node capabilities
      XML will include its details. If a virtual machine is currently
      protected by the security model, then the guest XML will include
      its assigned labels. If enabled at compile time, the sVirt security
      model will always be activated if SELinux is available on the host
      OS. To disable sVirt, and revert to the basic level of SELinux
      protection (host protection only), the <code>/etc/libvirt/qemu.conf</code>
      file can be used to change the setting to <code>security_driver="none"</code>.
    </p>

    <h3><a name="securitysvirtaa">AppArmor sVirt confinement</a></h3>

    <p>
      When using basic AppArmor protection for the libvirtd daemon and
      QEMU virtual machines, the intention is to protect the host OS
      from a compromised virtual machine process. There is no protection
      between guests.
    </p>

    <p>
      The AppArmor sVirt protection for QEMU virtual machines builds on
      this basic level of protection, to also allow individual guests to
      be protected from each other.
    </p>

    <p>
      In the sVirt model, if a profile is loaded for the libvirtd daemon,
      then each <code>qemu:///system</code> QEMU virtual machine will have
      a profile created for it when the virtual machine is started if one
      does not already exist. This generated profile uses a profile name
      based on the UUID of the QEMU virtual machine and contains rules
      allowing access to only the files it needs to run, such as its disks,
      pid file and log files. Just before the QEMU virtual machine is
      started, the libvirtd daemon will change into this unique profile,
      preventing the QEMU process from accessing any file resources that
      are present in another QEMU process or the host machine.
    </p>

    <p>
      The AppArmor sVirt implementation is flexible in that it allows an
      administrator to customize the template file in
      <code>/etc/apparmor.d/libvirt/TEMPLATE</code> for site-specific
      access for all newly created QEMU virtual machines. Also, when a new
      profile is generated, two files are created:
      <code>/etc/apparmor.d/libvirt/libvirt-&lt;uuid&gt;</code> and
      <code>/etc/apparmor.d/libvirt/libvirt-&lt;uuid&gt;.files</code>. The
      former can be fine-tuned by the administrator to allow custom access
      for this particular QEMU virtual machine, and the latter will be
      updated appropriately when required file access changes, such as when
      a disk is added. This flexibility allows for situations such as
      having one virtual machine in complain mode with all others in
      enforce mode.
    </p>
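
    <p>
      For example, the loaded profiles, including the per-guest
      <code>libvirt-&lt;uuid&gt;</code> profiles, can be listed with
      (a sketch, assuming the AppArmor utilities are installed):
    </p>

<pre>
aa-status | grep libvirt
</pre>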

    <p>
      While users can define their own AppArmor profile scheme, a typical
      configuration will include a profile for <code>/usr/sbin/libvirtd</code>,
      <code>/usr/lib/libvirt/virt-aa-helper</code> (a helper program which the
      libvirtd daemon uses instead of manipulating AppArmor directly), and
      an abstraction to be included by <code>/etc/apparmor.d/libvirt/TEMPLATE</code>
      (typically <code>/etc/apparmor.d/abstractions/libvirt-qemu</code>).
      An example profile scheme can be found in the examples/apparmor
      directory of the source distribution.
    </p>

    <p>
      If the sVirt security model is active, then the node capabilities
      XML will include its details. If a virtual machine is currently
      protected by the security model, then the guest XML will include
      its assigned profile name. If enabled at compile time, the sVirt
      security model will be activated if AppArmor is available on the host
      OS and a profile for the libvirtd daemon is loaded when libvirtd is
      started. To disable sVirt, and revert to the basic level of AppArmor
      protection (host protection only), the <code>/etc/libvirt/qemu.conf</code>
      file can be used to change the setting to <code>security_driver="none"</code>.
    </p>


    <h3><a name="securityacl">Cgroups device ACLs</a></h3>

    <p>
      Recent Linux kernels have a capability known as "cgroups" which is used
      for resource management. It is implemented via a number of "controllers",
      each controller covering a specific task/functional area. One of the
      available controllers is the "devices" controller, which is able to
      setup whitelists of block/character devices that a cgroup should be
      allowed to access. If the "devices" controller is mounted on a host,
      then libvirt will automatically create a dedicated cgroup for each
      QEMU virtual machine and setup the device whitelist so that the QEMU
      process can only access shared devices, and explicitly assigned disk images
      backed by block devices.
    </p>

    <p>
      The list of shared devices a guest is allowed access to is:
    </p>

<pre>
/dev/null, /dev/full, /dev/zero,
/dev/random, /dev/urandom,
/dev/ptmx, /dev/kvm, /dev/kqemu,
/dev/rtc, /dev/hpet, /dev/net/tun
</pre>

    <p>
      In the event of unanticipated needs arising, this list can be
      customized via the <code>cgroup_device_acl</code> parameter in the
      <code>/etc/libvirt/qemu.conf</code> file (see the example at the
      end of this section).
      To mount the cgroups device controller, the following command
      should be run as root, prior to starting libvirtd:
    </p>

<pre>
mkdir /dev/cgroup
mount -t cgroup none /dev/cgroup -o devices
</pre>

    <p>
      libvirt will then place each virtual machine in a cgroup at
      <code>/dev/cgroup/libvirt/qemu/$VMNAME/</code>
    </p>
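
    <p>
      The effective whitelist of a running guest can be read back from
      its cgroup, and extended via the <code>cgroup_device_acl</code>
      parameter in <code>/etc/libvirt/qemu.conf</code> (a sketch; the
      extra <code>/dev/sdb1</code> entry is hypothetical):
    </p>

<pre>
# inspect the device whitelist of a running guest
cat /dev/cgroup/libvirt/qemu/$VMNAME/devices.list

# /etc/libvirt/qemu.conf - extend the default whitelist
cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/sdb1"
]
</pre>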

    <h2><a name="imex">Import and export of libvirt domain XML configs</a></h2>

    <p>The QEMU driver currently supports a single native
      config format known as <code>qemu-argv</code>. The data for this format
      is expected to be a single line containing, in order, a list of
      environment variables, the QEMU binary name, and finally the QEMU
      command line arguments.</p>

    <h3><a name="xmlimport">Converting from QEMU args to domain XML</a></h3>

    <p>
      The <code>virsh domxml-from-native</code> command provides a way to
      convert an existing set of QEMU args into a guest description
      using libvirt Domain XML that can then be used by libvirt.
      Please note that this command is intended to be used to convert
      existing qemu guests previously started from the command line to
      be managed through libvirt.  It should not be used as a method of
      creating new guests from scratch.  New guests should be created
      using an application calling the libvirt APIs (see
      the <a href="apps.html">libvirt applications page</a> for some
      examples) or by manually crafting XML to pass to virsh.
    </p>

    <pre>$ cat &gt; demo.args &lt;&lt;EOF
LC_ALL=C PATH=/bin HOME=/home/test USER=test \
LOGNAME=test /usr/bin/qemu -S -M pc -m 214 -smp 1 \
-nographic -monitor pty -no-acpi -boot c -hda \
/dev/HostVG/QEMUGuest1 -net none -serial none \
-parallel none -usb
EOF

$ virsh domxml-from-native qemu-argv demo.args
&lt;domain type='qemu'&gt;
  &lt;uuid&gt;00000000-0000-0000-0000-000000000000&lt;/uuid&gt;
  &lt;memory&gt;219136&lt;/memory&gt;
  &lt;currentMemory&gt;219136&lt;/currentMemory&gt;
  &lt;vcpu&gt;1&lt;/vcpu&gt;
  &lt;os&gt;
    &lt;type arch='i686' machine='pc'&gt;hvm&lt;/type&gt;
    &lt;boot dev='hd'/&gt;
  &lt;/os&gt;
  &lt;clock offset='utc'/&gt;
  &lt;on_poweroff&gt;destroy&lt;/on_poweroff&gt;
  &lt;on_reboot&gt;restart&lt;/on_reboot&gt;
  &lt;on_crash&gt;destroy&lt;/on_crash&gt;
  &lt;devices&gt;
    &lt;emulator&gt;/usr/bin/qemu&lt;/emulator&gt;
    &lt;disk type='block' device='disk'&gt;
      &lt;source dev='/dev/HostVG/QEMUGuest1'/&gt;
      &lt;target dev='hda' bus='ide'/&gt;
    &lt;/disk&gt;
  &lt;/devices&gt;
&lt;/domain&gt;
</pre>

    <p>NB: do not include the literal <code>\</code> line-continuation characters in the args file; put everything on one line.</p>

    <h3><a name="xmlexport">Converting from domain XML to QEMU args</a></h3>

    <p>
      The <code>virsh domxml-to-native</code> command provides a way to convert a
      guest description using libvirt Domain XML, into a set of QEMU args
      that can be run manually.
    </p>

    <pre>$ cat &gt; demo.xml &lt;&lt;EOF
&lt;domain type='qemu'&gt;
  &lt;name&gt;QEMUGuest1&lt;/name&gt;
  &lt;uuid&gt;c7a5fdbd-edaf-9455-926a-d65c16db1809&lt;/uuid&gt;
  &lt;memory&gt;219200&lt;/memory&gt;
  &lt;currentMemory&gt;219200&lt;/currentMemory&gt;
  &lt;vcpu&gt;1&lt;/vcpu&gt;
  &lt;os&gt;
    &lt;type arch='i686' machine='pc'&gt;hvm&lt;/type&gt;
    &lt;boot dev='hd'/&gt;
  &lt;/os&gt;
  &lt;clock offset='utc'/&gt;
  &lt;on_poweroff&gt;destroy&lt;/on_poweroff&gt;
  &lt;on_reboot&gt;restart&lt;/on_reboot&gt;
  &lt;on_crash&gt;destroy&lt;/on_crash&gt;
  &lt;devices&gt;
    &lt;emulator&gt;/usr/bin/qemu&lt;/emulator&gt;
    &lt;disk type='block' device='disk'&gt;
      &lt;source dev='/dev/HostVG/QEMUGuest1'/&gt;
      &lt;target dev='hda' bus='ide'/&gt;
    &lt;/disk&gt;
  &lt;/devices&gt;
&lt;/domain&gt;
EOF

$ virsh domxml-to-native qemu-argv demo.xml
  LC_ALL=C PATH=/usr/bin:/bin HOME=/home/test \
  USER=test LOGNAME=test /usr/bin/qemu -S -M pc \
  -no-kqemu -m 214 -smp 1 -name QEMUGuest1 -nographic \
  -monitor pty -no-acpi -boot c -drive \
  file=/dev/HostVG/QEMUGuest1,if=ide,index=0 -net none \
  -serial none -parallel none -usb
</pre>

    <h2><a name="qemucommand">Pass-through of arbitrary qemu
    commands</a></h2>

    <p>Libvirt provides an XML namespace and an optional
      library <code>libvirt-qemu.so</code> for dealing specifically
      with qemu.  When used correctly, these extensions allow testing
      specific qemu features that have not yet been ported to the
      generic libvirt XML and API interfaces.  However, they
      are <b>unsupported</b>, in that the library is not guaranteed to
      have a stable API, abusing the library or XML may result in an
      inconsistent state that crashes libvirtd, and upgrading either
      qemu-kvm or libvirtd may break behavior of a domain that was
      relying on a qemu-specific pass-through.  If you find yourself
      needing to use them to access a particular qemu feature, then
      please post an RFE to the libvirt mailing list to get that
      feature incorporated into the stable libvirt XML and API
      interfaces.
    </p>
    <p>The library provides two
      APIs: <code>virDomainQemuMonitorCommand</code>, for sending an
      arbitrary monitor command (in either HMP or QMP format) to a
      qemu guest (<span class="since">Since 0.8.3</span>),
      and <code>virDomainQemuAttach</code>, for registering a qemu
      domain that was manually started so that it can then be managed
      by libvirtd (<span class="since">Since 0.9.4</span>).
    </p>
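    <p>
      Both APIs are also exposed as <code>virsh</code> commands; for
      example (a usage sketch, assuming a running guest named
      <code>demo2</code> and a manually started QEMU process with
      pid 3421):
    </p>

<pre>
# send an HMP monitor command to a guest
virsh qemu-monitor-command --hmp demo2 'info kvm'

# register a manually started QEMU process with libvirtd
virsh qemu-attach 3421
</pre>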
    <p>Additionally, the following XML additions allow fine-tuning of
      the command line given to qemu when starting a domain
      (<span class="since">Since 0.8.3</span>).  In order to use the
      XML additions, it is necessary to issue an XML namespace request
      (the special <code>xmlns:<i>name</i></code> attribute) that
      pulls in <code>http://libvirt.org/schemas/domain/qemu/1.0</code>;
      typically, the namespace is given the name
      of <code>qemu</code>.  With the namespace in place, it is then
      possible to add an element <code>&lt;qemu:commandline&gt;</code>
      under <code>domain</code>, with the following sub-elements
      repeated as often as needed:
    </p>
      <dl>
        <dt><code>qemu:arg</code></dt>
        <dd>Add an additional command-line argument to the qemu
          process when starting the domain, given by the value of the
          attribute <code>value</code>.
        </dd>
        <dt><code>qemu:env</code></dt>
        <dd>Add an additional environment variable to the qemu
          process when starting the domain, given with the name-value
          pair recorded in the attributes <code>name</code>
          and optional <code>value</code>.</dd>
      </dl>
      <p>Example:</p><pre>
&lt;domain type='qemu' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'&gt;
  &lt;name&gt;QEmu-fedora-i686&lt;/name&gt;
  &lt;memory&gt;219200&lt;/memory&gt;
  &lt;os&gt;
    &lt;type arch='i686' machine='pc'&gt;hvm&lt;/type&gt;
  &lt;/os&gt;
  &lt;devices&gt;
    &lt;emulator&gt;/usr/bin/qemu-system-x86_64&lt;/emulator&gt;
  &lt;/devices&gt;
  &lt;qemu:commandline&gt;
    &lt;qemu:arg value='-newarg'/&gt;
    &lt;qemu:env name='QEMU_ENV' value='VAL'/&gt;
  &lt;/qemu:commandline&gt;
&lt;/domain&gt;
</pre>

    <h2><a name="xmlconfig">Example domain XML config</a></h2>

    <h3>QEMU emulated guest on x86_64</h3>

        <pre>&lt;domain type='qemu'&gt;
  &lt;name&gt;QEmu-fedora-i686&lt;/name&gt;
  &lt;uuid&gt;c7a5fdbd-cdaf-9455-926a-d65c16db1809&lt;/uuid&gt;
  &lt;memory&gt;219200&lt;/memory&gt;
  &lt;currentMemory&gt;219200&lt;/currentMemory&gt;
  &lt;vcpu&gt;2&lt;/vcpu&gt;
  &lt;os&gt;
    &lt;type arch='i686' machine='pc'&gt;hvm&lt;/type&gt;
    &lt;boot dev='cdrom'/&gt;
  &lt;/os&gt;
  &lt;devices&gt;
    &lt;emulator&gt;/usr/bin/qemu-system-x86_64&lt;/emulator&gt;
    &lt;disk type='file' device='cdrom'&gt;
      &lt;source file='/home/user/boot.iso'/&gt;
      &lt;target dev='hdc'/&gt;
      &lt;readonly/&gt;
    &lt;/disk&gt;
    &lt;disk type='file' device='disk'&gt;
      &lt;source file='/home/user/fedora.img'/&gt;
      &lt;target dev='hda'/&gt;
    &lt;/disk&gt;
    &lt;interface type='network'&gt;
      &lt;source network='default'/&gt;
    &lt;/interface&gt;
    &lt;graphics type='vnc' port='-1'/&gt;
  &lt;/devices&gt;
&lt;/domain&gt;</pre>

    <h3>KVM hardware accelerated guest on i686</h3>

        <pre>&lt;domain type='kvm'&gt;
  &lt;name&gt;demo2&lt;/name&gt;
  &lt;uuid&gt;4dea24b3-1d52-d8f3-2516-782e98a23fa0&lt;/uuid&gt;
  &lt;memory&gt;131072&lt;/memory&gt;
  &lt;vcpu&gt;1&lt;/vcpu&gt;
  &lt;os&gt;
    &lt;type arch="i686"&gt;hvm&lt;/type&gt;
  &lt;/os&gt;
  &lt;clock sync="localtime"/&gt;
  &lt;devices&gt;
    &lt;emulator&gt;/usr/bin/qemu-kvm&lt;/emulator&gt;
    &lt;disk type='file' device='disk'&gt;
      &lt;source file='/var/lib/libvirt/images/demo2.img'/&gt;
      &lt;target dev='hda'/&gt;
    &lt;/disk&gt;
    &lt;interface type='network'&gt;
      &lt;source network='default'/&gt;
      &lt;mac address='24:42:53:21:52:45'/&gt;
    &lt;/interface&gt;
    &lt;graphics type='vnc' port='-1' keymap='de'/&gt;
  &lt;/devices&gt;
&lt;/domain&gt;</pre>

  </body>
</html>