Which of the following is a common solution for BIOS time and setting resets?

A computer losing time or having the date and time reset is a symptom of an issue relating to the computer hardware or software. There are multiple causes for date and time loss or resetting issues. The most common causes are detailed below. Review each possible reason for help with how to fix the problem.

Computer CMOS battery failing or bad

If the date reset to the BIOS manufacturer date, epoch, or a default date (1970, 1980, or 1990), the CMOS battery is failing or is already bad.

Before replacing the battery, set the date and time to the correct values in CMOS setup, then save and exit the setup. If the values are lost again, set them once more, but leave your computer on for two to three days without turning it off. In some cases, this helps the CMOS battery retain its settings longer.

If this does not resolve your issue, replace your CMOS battery.
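On Linux, one quick diagnostic is to compare the operating system clock with the hardware (CMOS/RTC) clock. A minimal sketch, assuming the hwclock utility is installed (it usually requires root):

```shell
# Show the OS clock; if the RTC below keeps falling back to an epoch year
# such as 1980, suspect the CMOS battery.
date '+System clock: %Y-%m-%d %H:%M:%S'
# Read the real-time clock directly (needs root; a message is shown otherwise).
hwclock --show 2>/dev/null || echo "hwclock needs root privileges"
```

If the two clocks diverge after every cold boot, the battery is the likely culprit.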

  • How to enter and exit the BIOS or CMOS setup.
  • How to replace the CMOS battery.

Note

Older computers may not have a BIOS that is compatible with any year 2000 dates. If your computer was manufactured before 1995, we recommend you contact the manufacturer to determine if the motherboard's BIOS is Y2K compatible.

Issue with APM

APM, or Advanced Power Management, can cause issues with the computer keeping time. Verify this is not the issue by entering CMOS setup and disabling APM or Power Management.

If this does resolve your issue, consult with the motherboard manufacturer or computer manufacturer for a possible BIOS update.

  • How to enter and exit the BIOS or CMOS setup.
  • Help with computer BIOS updates.

Third-party utility or program

Third-party programs or screen savers can cause the clock to stop or fall behind significantly. If you are running Windows, close and disable all screen savers, and End Task all TSRs (terminate-and-stay-resident programs) to make sure the lost time is not caused by one of these programs.

  • How to remove TSRs and startup programs.

If this resolves your issue, reboot the computer and determine which TSR or screen saver was causing the problem by re-enabling one TSR at a time. You can also leave the screen saver disabled to eliminate it as the cause. Once the culprit is found, check whether the program has any available updates that resolve the issue.

Virus infection

Some computer viruses can infect a computer and cause the date and time in the operating system to be incorrect or reset to a wrong time zone. A virus can conflict with operating system files that manage the date and time or cause operating system files to become corrupted.

We recommend you run a virus scan to see if your computer is infected. If any viruses are found, remove the viruses from your computer to eliminate the infection. Change the date, time, and time zone back to the correct settings, then restart the computer.

  • How to scan or check for computer viruses.
  • How to remove a virus and malware from my computer.

If the date and time are incorrect again after restarting the computer, there may be corrupt operating system files causing the issue. Determine when the problem started to occur, then restore the operating system to a previous date before the problem occurred the first time.

  • How to restore Windows to an earlier copy.

Corrupt operating system files

It is possible for operating system files to become corrupted, causing the date and time to be incorrect. Corrupt files can occur due to a virus infection, as mentioned above, or for other reasons. The best option to fix corrupt operating system files is to restore the operating system to a date before the problem first occurred. The restore process replaces the corrupt files with good ones and hopefully fixes the date and time issue.

  • How to restore Windows to an earlier copy.

Windows 95, Windows 98, or Windows ME user

When changing the year in Windows 9x or Windows ME, the time stops until the Apply button is pressed.

When PC components are not getting enough power, common fixes include disconnecting all extraneous peripheral devices that might be putting too much load on the power supply unit (PSU), reseating the PSU cable connectors inside the computer case, and using a PSU tester to check whether the power supply is working properly. (T/F)

a) True
b) False

Many modern CPUs have built-in features that improve the performance of virtual machines (VMs), to the point where virtualised systems are indistinguishable from non-virtualised systems. This allows us to create virtual machines on a Linux host platform without compromising the performance of the (Windows) guest system.

For some benchmarks of my current system, see Windows 10 Virtual Machine Benchmarks.

The Solution

In the tutorial below I describe how to install and run Windows 10 as a KVM virtual machine on a Linux Mint or Ubuntu host. The tutorial uses a technology called VGA passthrough (also referred to as “GPU passthrough” or “VFIO” after the vfio driver used), which provides near-native graphics performance in the VM. I’ve been doing VGA passthrough since summer 2012, first running Windows 7 on a Xen hypervisor, then switching to KVM and Windows 10 in December 2015. The performance – both graphics and computing – under Xen and KVM has been nothing less than stellar!

The tutorial below will only work with suitable hardware! If your computer does not fulfill the basic hardware requirements outlined below, you won’t be able to make it work.

The tutorial is not written for the beginner! I assume that you do have some Linux background, at least enough to be able to restore your system when things go wrong.

I am also providing links to other, similar tutorials that might help; see below. Last but not least, you will find links to different forums and communities where you can find further information and help.

Note: The tutorial was originally posted on the Linux Mint forum.

Disclaimer

All information and data provided in this tutorial is for informational purposes only. I make no representations as to accuracy, completeness, currentness, suitability, or validity of any information in this tutorial and will not be liable for any errors, omissions, or delays in this information or any losses, injuries, or damages arising from its use. All information is provided on an as-is basis.

You are aware that by following this tutorial you may risk the loss of data, or may render your computer inoperable. Backup your computer! Make sure that important documents/data are accessible elsewhere in case your computer becomes inoperable.

For a glossary of terms used in the tutorial, see Glossary of Virtualization Terms.

Tutorial

Note 1: My tutorial uses the “xed” command found in Linux Mint MATE to edit documents. You will have to replace it with “gedit”, “nano”, or whatever editor you use on your Linux distro / desktop.

Note 2: I’ve just published a new tutorial for a Ryzen based system running Pop!_OS. See “Creating a Windows 10 VM on the AMD Ryzen 9 3900X using Qemu 4.0 and VGA Passthrough“.

Important Note: This tutorial was written several years ago and has been updated to Linux Mint 19 and Ubuntu 18.04 syntax. It uses QEMU 2.11 or QEMU 2.12. Today I use libvirt / virt-manager and either QEMU 4.2 (on Linux Mint 20 / Ubuntu 20.04) or QEMU 6.0 on Manjaro (see the link above).

If you are following this tutorial on a newer version of the OS and QEMU (e.g. QEMU 4.2, 5.x, or 6.0), note that some of the QEMU syntax has changed. For the changes, see the QEMU User Documentation.

I simply haven’t found the time to revise the tutorial. That said, you should still be able to use it and find valuable information.

Part 1 – Hardware Requirements

For this tutorial to succeed, your computer hardware must fulfill all of the following requirements:

IOMMU support

In Intel jargon it’s called VT-d. AMD calls it variously AMD Virtualisation, AMD-Vi, or Secure Virtual Machine (SVM); sometimes the plain term IOMMU is used. If you plan to purchase a new PC/CPU, check the following websites for more information:

  • Reddit VFIO group – Hardware configuration for successful VFIO
  • Arch Linux wiki – PCI passthrough via OVMF examples
  • Community reports on working systems – https://passthroughpo.st/vfio-increments/
  • The Reddit VFIO community – the most active GPU passthrough community around
  • Intel – https://ark.intel.com/Search/FeatureFilter?productType=processors&VTD=true
  • Intel processors with ACS support – https://vfio.blogspot.com/2015/10/intel-processors-with-acs-support.html
  • Wikipedia – https://en.wikipedia.org/wiki/List_of_IOMMU-supporting_hardware
  • AMD – https://products.amd.com/en-us and check the processor specs
  • AMD – B350 motherboards are reported to NOT support IOMMU.

Both Intel and AMD have improved their IOMMU support in recent years. There are still differences between CPUs – specifically, ACS (Access Control Services) support may vary between CPU models. Generally speaking, high-end Intel or AMD CPUs provide better ACS or device-isolation capabilities. That is not to say that the more down-to-earth CPUs won’t work, as long as they support IOMMU.

The first link above provides a non-comprehensive list of CPU/motherboard/GPU configurations where users were successful with GPU passthrough. When building a new PC, make sure you purchase components that support GPU passthrough.

Most PC / motherboard manufacturers disable IOMMU by default. You will have to enable IOMMU in the BIOS. To check your current CPU / motherboard IOMMU support and enable it, do the following:

  1. Reboot your PC and enter the BIOS setup menu (usually by pressing F2, DEL, or a similar key during boot).
  2. Search for IOMMU, VT-d, SVM, or “virtualisation technology for directed I/O”, or whatever it may be called on your system. Turn on VT-d / IOMMU.
  3. Save and Exit BIOS and boot into Linux.
  4. Edit the /etc/default/grub file (you need root permission to do so). Open a terminal window (Ctrl+Alt+T) and enter (copy/paste):
    xed admin:///etc/default/grub
    (use gksudo gedit /etc/default/grub for older Linux Mint/Ubuntu releases)
    Here is my /etc/default/grub file before the edit:
    GRUB_DEFAULT=0
    #GRUB_HIDDEN_TIMEOUT=10
    #GRUB_HIDDEN_TIMEOUT_QUIET=true
    GRUB_TIMEOUT_STYLE=countdown
    GRUB_TIMEOUT=0
    GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
    GRUB_CMDLINE_LINUX_DEFAULT="quiet"
    GRUB_CMDLINE_LINUX=""

    Look for the line that starts with GRUB_CMDLINE_LINUX_DEFAULT="…". You need to add one of the following options to this line, depending on your hardware:
    Intel CPU:
    intel_iommu=on
    AMD CPU:
    amd_iommu=on
    Save the file and exit. Then type:
    sudo update-grub
  5. Now check that IOMMU is actually supported. Reboot the PC. Open a terminal window.
    On AMD machines use:
    dmesg | grep AMD-Vi
    The output should be similar to this:

    AMD-Vi: Enabling IOMMU at 0000:00:00.2 cap 0x40
    AMD-Vi: Lazy IO/TLB flushing enabled
    AMD-Vi: Initialized for Passthrough Mode

    Or use:
    cat /proc/cpuinfo | grep svm

On Intel machines use:
dmesg | grep "Virtualization Technology for Directed I/O"

The output should be this:
[ 0.902214] DMAR: Intel(R) Virtualization Technology for Directed I/O

If you do not get this output, then VT-d or AMD-Vi is not working – you need to fix that before you continue! Most likely your hardware (CPU or motherboard) doesn’t support IOMMU, in which case there is no point continuing this tutorial 😥 . Check again to make sure your CPU supports IOMMU. If it does, the cause may be a faulty motherboard BIOS. See the troubleshooting section further below. You may need to update your motherboard BIOS (be careful: flashing the BIOS can potentially brick your motherboard).
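The GRUB edit in step 4 can also be done non-interactively. Below is a sketch that performs the edit with sed on a copy of the file; /tmp/grub.example is a stand-in for the real /etc/default/grub, and the appended option assumes an Intel CPU (use amd_iommu=on on AMD):

```shell
grub_file=/tmp/grub.example   # stand-in for /etc/default/grub
cat > "$grub_file" <<'EOF'
GRUB_DEFAULT=0
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX=""
EOF
# Append intel_iommu=on inside the existing quotes of the DEFAULT line:
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\(.*\)"/GRUB_CMDLINE_LINUX_DEFAULT="\1 intel_iommu=on"/' "$grub_file"
grep CMDLINE_LINUX_DEFAULT "$grub_file"
```

When editing the real file, remember to run sudo update-grub afterwards.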

Two graphics processors

In addition to a CPU and motherboard that support IOMMU, you need two graphics processors (GPUs):

1. One GPU for your Linux host (the OS you are currently running, I hope);

2. One GPU (graphics card) for your Windows guest.

We are building a system that runs two operating systems at the same time. Many resources like disk space, memory, etc. can be switched back and forth between the host and the guest as needed. Unfortunately, the GPU cannot be switched or shared between the two OSes, at least not in an easy way. (There are ways to reset the graphics card as well as the X server in Linux, so you could get away with one graphics card, but I personally believe it’s not ideal. See for example here and here for more on that.)

If, like me, you use Linux for everyday tasks such as email, web browsing, and documents, and Windows for gaming, photo, or video editing, you’ll have to give Windows the more powerful GPU, while Linux will run happily with an inexpensive GPU or the integrated graphics processor (IGP). (You can also create a Linux VM with GPU passthrough if you need Linux for gaming or graphics-intensive applications.)

The graphics card to be passed through to Windows [or Linux] must be able to reset properly after VM shutdown. I’ve written a separate post on AMD vs. Nvidia graphics cards. The good news today: the AMD 6000 series seem to work well. But do yourself a favor and avoid the older AMD graphics cards! See also passthroughpo.st and open “Guest GPUs” towards the bottom.

UEFI support in the GPU used with Windows

In this tutorial I use UEFI to boot the Windows VM. That means that the graphics card you are going to use for the Windows guest must support UEFI – most newer cards do. You can check here whether your video card and BIOS support UEFI. If you run Windows, download and run GPU-Z and check whether there is a check mark next to UEFI. (For more information, see here.)

GPU-Z with UEFI graphic card information

There are several advantages to UEFI: it starts faster and overcomes some issues associated with legacy boot (SeaBIOS).

If you plan to use the Intel IGD (integrated graphics device) for your Linux host, UEFI boot is the way to go. UEFI overcomes the VGA arbitration problem associated with the IGD and the use of the legacy SeaBIOS.
If, for some reason, you cannot boot the VM using UEFI and you want to use the Intel IGD for the host, you need to compile the i915 VGA arbiter patch into the kernel. Before you do, check the note below. For more on VGA arbitration, see here.

Note: If your GPU does NOT support UEFI, there is still hope. You might be able to find a UEFI BIOS for your card in the TechPowerUp Video BIOS Collection. A YouTube blogger calling himself Spaceinvader has produced a very helpful video on using a VBIOS.

If there is no UEFI video BIOS for your Windows graphics card, you will have to look for a tutorial using the SeaBIOS method. It’s not much different from this one, but there are some things to consider.

Laptop users with Nvidia Optimus technology: Misairu_G (username) published an in-depth guide to VGA passthrough on laptops using Nvidia Optimus technology – see GUIDE to VGA passthrough on Nvidia Optimus laptops.

Note: In recent years AMD graphics cards have suffered from a bug termed the “reset bug”. Modern AMD graphics cards are often not capable of performing a proper “function level reset” (FLR, for short). They will boot fine, but when you shut down the VM and boot it again, you’ll get an “internal error: Unknown PCI header type ‘127’”.

There are some workarounds for this error. See the Troubleshooting section below.

Modern Nvidia graphics cards (series 1000, 2000, and 3000) sometimes require you to pass through a modified VBIOS (ROM) file. I wrote a tutorial on that here.

Part 2 – Installing Qemu / KVM

The Qemu release shipped with Linux Mint 19 is version 2.11 and supports the latest KVM features.

In order to have Linux Mint “remember” the installed packages, use the Software Manager to install the following packages:

qemu-kvm
qemu-utils
seabios
ovmf
hugepages
cpu-checker
bridge-utils

Software Manager

For AMD Ryzen, see also the notes further below (Linux Mint 19 / Ubuntu 18.04 only require the BIOS update). Generally, AMD has had a range of issues with VFIO/GPU passthrough support. Read through the troubleshooting section further below and check the links under hardware compatibility for further information.

Alternatively, open a terminal window and use

sudo apt install qemu-kvm qemu-utils seabios ovmf hugepages cpu-checker bridge-utils

to install the required packages.

Part 3 – Determining the Devices to Pass Through to Windows

We need to find the PCI ID(s) of the graphics card and any other devices we want to pass through to the Windows VM. Normally the IGP (the GPU inside the processor) will be used for Linux, and the discrete graphics card for the Windows guest. My CPU does not have an integrated GPU, so I use two graphics cards. Here is my hardware setup:

  • GPU for Linux: Nvidia Quadro 2000 residing in the first PCIe graphics card slot.
  • GPU for Windows: Nvidia GTX 970 residing in the second PCIe graphics card slot.

To determine the PCI bus number and PCI IDs, enter:

lspci | grep VGA

Here is the output on my system:
01:00.0 VGA compatible controller: NVIDIA Corporation GF106GL [Quadro 2000] [rev a1]
02:00.0 VGA compatible controller: NVIDIA Corporation Device 13c2 [rev a1]

The first card under 01:00.0 is the Quadro 2000 I want to use for the Linux host. The other card under 02:00.0 I want to pass to Windows.

Modern graphics cards usually come with an on-board audio controller, which we need to pass through as well. To find its ID, enter:

lspci -nn | grep 02:00.

Substitute “02:00.” with the bus number of the graphics card you wish to pass to Windows, without the trailing “0”. Here is the output on my computer:
02:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:13c2] [rev a1]
02:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fbb] [rev a1]

Write down the bus numbers [02:00.0 and 02:00.1 above], as well as the PCI IDs [10de:13c2 and 10de:0fbb in the example above]. We need them in the next part.
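As a sketch, the vendor:device pairs can also be extracted mechanically. The sample text below mirrors the lspci output shown above; on a real system you would pipe the lspci command in instead:

```shell
# Sample lines as produced by `lspci -nn` (mirroring the output above).
lspci_out='02:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:13c2] (rev a1)
02:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fbb] (rev a1)'
# Pull out the [vendor:device] IDs and strip the brackets.
echo "$lspci_out" | grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' | tr -d '[]'
```

This prints one vendor:device ID per line, ready to paste into the vfio-pci options later on.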

Now check to see that the graphics card resides within its own IOMMU group:

find /sys/kernel/iommu_groups/ -type l

For a sorted list, use:

for a in /sys/kernel/iommu_groups/*; do find $a -type l; done | sort --version-sort

Look for the bus number of the graphics card you want to pass through. Here is the [shortened] output on my system:

/sys/kernel/iommu_groups/19/devices/0000:00:1f.3
/sys/kernel/iommu_groups/20/devices/0000:01:00.0
/sys/kernel/iommu_groups/20/devices/0000:01:00.1
/sys/kernel/iommu_groups/21/devices/0000:02:00.0
/sys/kernel/iommu_groups/21/devices/0000:02:00.1
/sys/kernel/iommu_groups/22/devices/0000:05:00.0
/sys/kernel/iommu_groups/22/devices/0000:06:04.0

Make sure the GPU and perhaps other PCI devices you wish to pass through reside within their own IOMMU group. In my case the graphics card and its audio controller designated for passthrough both reside in IOMMU group 21. No other PCI devices reside in this group, so all is well.
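To double-check isolation, the listing can be grouped by IOMMU group number. A small sketch; the sample paths mirror the listing above, and on a real system you would feed in the output of find /sys/kernel/iommu_groups/ -type l:

```shell
# Sample sysfs paths (mirroring the listing above).
paths='/sys/kernel/iommu_groups/20/devices/0000:01:00.0
/sys/kernel/iommu_groups/20/devices/0000:01:00.1
/sys/kernel/iommu_groups/21/devices/0000:02:00.0
/sys/kernel/iommu_groups/21/devices/0000:02:00.1'
# Field 5 is the group number, field 7 the PCI device address.
echo "$paths" | awk -F/ '{g[$5]=g[$5]" "$7} END {for (k in g) print "group "k":"g[k]}' | sort
```

A group line that lists only your GPU and its audio function means the card is properly isolated for passthrough.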

If your VGA card shares an IOMMU group with other PCI devices, look into the ACS override patch for a solution!

Note on newer Nvidia graphics cards (series 1000, 2000, 3000): These modern Nvidia GPUs often require you to download and edit the VBIOS file to be passed through to the VM. The process is described in my post “Passing Through a Nvidia RTX 2070 Super GPU”. You can skip the xml stuff in that post. Simply replace the following lines in my start script below (see Part 10):

-device vfio-pci,host=02:00.0,multifunction=on \
-device vfio-pci,host=02:00.1 \

with these lines (you need to edit them to match your PCI IDs, the path to your ROM file, etc.):

-device vfio-pci,host=02:00.0,multifunction=on,romfile=/path/to/patched.rom \
-device vfio-pci,host=02:00.1 \

The next step is to find the mouse and keyboard (USB devices) that we want to assign to the Windows guest. Remember, we are going to run two independent operating systems side by side, and we control them via mouse and keyboard.

About keyboard and mouse

Depending on whether and how much control you want to have over each system, there are different approaches:

1. Get a USB-KVM (Keyboard/VGA/Mouse) switch. This is a small hardware device with usually 2 USB ports for keyboard and mouse, as well as VGA or (the more expensive) DVI or HDMI graphics outputs. In addition, the USB-KVM switch has two USB cables and 2 VGA/DVI/HDMI cables to connect to two different PCs. Since we run 2 virtual PCs on one single system, this is a viable solution. See also my Virtualization Hardware Accessories post.
Advantages:
– Works without special software in the OS, just the usual mouse and keyboard drivers;
– Best in performance – no software overhead.
Disadvantages:
– Requires extra [though inexpensive] hardware;
– More cable clutter and another box with cables on your desk;
– Requires you to press a button to switch between host and guest and vice versa;
– Many low-cost KVM switches are unreliable and do not initialize the keyboard or mouse properly when switching USB ports;
– Need to pass through a USB port or controller – see below on IOMMU groups.

2. Without spending a nickel you can simply pass through your mouse and keyboard when the VM starts.
Advantages:
– Easy to implement;
– No money to invest;
– Good solution for setting up Windows.

There are at least two ways to accomplish this task. I will describe both options.

3. Synergy (https://symless.com/synergy/) is a commercial software solution that, once installed and configured, allows you to interact with two PCs or virtual machines.
Advantages:
– Most versatile solution, especially with dual screens;
– Software only, easy to configure;
– No hardware purchase required.
Disadvantages:
– Requires the installation of software on both the host and the guest;
– Doesn’t work during Windows installation (see option 2);
– Costs $10 for a Basic, lifetime license;
– May produce lag, although I doubt you’ll notice unless there is something wrong with the bridge configuration.

4. A “multi-device” Bluetooth keyboard and mouse that can connect to two different devices and switch between them at the press of a button (see for example here):
Advantages:
– Most convenient solution;
– Same performance as option 1.
Disadvantages:
– Price.
– Make sure the device supports Linux, or that you can return it if it doesn’t!

I first went with option 1 for simplicity and universality, but have since replaced it with option 4, after the USB-KVM started to malfunction and gave me lots of trouble.

I’m now using a Logitech MX Master 2S mouse and a Logitech K780 BT keyboard. See my separate post for how to pair these devices to the USB dongles.

Both options 1 and 4 usually require you to pass through a USB PCI device to the Windows guest. I needed both USB 2 and USB 3 ports in my Windows VM, and I was able to pass through two USB controllers to my Windows guest using PCI passthrough.

For the VM installation we choose option 2 (see above), that is, we pass our keyboard and mouse through to the Windows VM. To do so, we need to identify their USB IDs:

lsusb

Here is my system output (truncated):

Bus 010 Device 006: ID 045e:076c Microsoft Corp. Comfort Mouse 4500
Bus 010 Device 005: ID 045e:0750 Microsoft Corp. Wired Keyboard 600

Note down the IDs: 045e:076c and 045e:0750 in my case.
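The IDs can likewise be extracted with a one-liner. A sketch; the sample text mirrors the lsusb output above, and on a real system you would pipe lsusb in instead:

```shell
# Sample lines as produced by `lsusb` (mirroring the output above).
lsusb_out='Bus 010 Device 006: ID 045e:076c Microsoft Corp. Comfort Mouse 4500
Bus 010 Device 005: ID 045e:0750 Microsoft Corp. Wired Keyboard 600'
# Keep only the vendor:product ID of each device.
echo "$lsusb_out" | sed -n 's/.*ID \([0-9a-f:]*\) .*/\1/p'
```

Filter the full lsusb output with grep (e.g. for “Mouse” or “Keyboard”) first if you have many devices attached.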

Part 4 – Prepare for Passthrough

In order to make the graphics card available to the Windows VM, we will assign a “dummy” driver as a placeholder: vfio-pci. To do that, we first have to prevent the default driver from binding to the graphics card. This can sometimes be tricky, as some drivers load early in the boot process and grab the card before vfio-pci can bind to it.

(One way to accomplish that is by blacklisting driver modules, or by using Kernel Mode Setting. For more on Kernel Mode Setting, see https://wiki.archlinux.org/index.php/kernel_mode_setting.)

Note: If you have two identical graphics cards for both the host and the VM, the method below won’t work; in that case, check the links in the troubleshooting section further below.

The method I describe below uses the module alias (thanks to this post). Another promising method is described in this tutorial.

Run the following command:
cat /sys/bus/pci/devices/0000:02:00.0/modalias
where 0000:02:00.0 is the PCI bus number of your graphics card obtained in Part 3 above. The output will look something like:
pci:v000010DEd000013C2sv00001458sd00003679bc03sc00i00

Repeat the above command with the PCI bus number of the audio part:
cat /sys/bus/pci/devices/0000:02:00.1/modalias
where 0000:02:00.1 is the PCI bus number of your graphics card’s audio device.

In the terminal window, enter the following:
sudo -i
followed by your password to get a root terminal.

Open or create /etc/modprobe.d/local.conf:
xed /etc/modprobe.d/local.conf
and copy and paste the results from the two cat /sys/… commands above. Then precede the lines with “alias” and append the lines with “vfio-pci”, as shown below:
alias pci:v000010DEd000013C2sv00001458sd00003679bc03sc00i00 vfio-pci
alias pci:v000010DEd00000FBBsv00001458sd00003679bc04sc03i00 vfio-pci

At the end of that file, add the following line:
options vfio-pci ids=10de:13c2,10de:0fbb
where 10de:13c2 and 10de:0fbb are the PCI IDs of your graphics card’s VGA and audio part, as determined in the previous paragraph.

You can also add the following option below the options vfio-pci entry:
options vfio-pci disable_vga=1
(The above entry is only valid for 4.1 and newer kernels and UEFI guests. It helps prevent VGA arbitration from interfering with host devices.)

Save the file and exit the editor.
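For reference, here is what the assembled /etc/modprobe.d/local.conf could look like, written to /tmp purely for illustration. The modalias strings and PCI IDs are the examples from my card; substitute your own:

```shell
# Write an example local.conf to /tmp (use /etc/modprobe.d/local.conf for real).
cat > /tmp/local.conf <<'EOF'
alias pci:v000010DEd000013C2sv00001458sd00003679bc03sc00i00 vfio-pci
alias pci:v000010DEd00000FBBsv00001458sd00003679bc04sc03i00 vfio-pci
options vfio-pci ids=10de:13c2,10de:0fbb
options vfio-pci disable_vga=1
EOF
cat /tmp/local.conf
```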

Some applications like Passmark, as well as Windows 10 releases 1803 and newer, require the following option:
options kvm ignore_msrs=1

To load vfio and other required modules at boot, edit the /etc/initramfs-tools/modules file:
xed /etc/initramfs-tools/modules

Note: if you run Ubuntu 20.04, Linux Mint 20, or similar, the following modules have been integrated into the kernel and you do not need to load them. See my recent tutorial using virt-manager.

At the end of the file, add the following modules in the order listed below:

vfio
vfio_iommu_type1
vfio_pci ids=10de:13c2,10de:0fbb
vfio_virqfd
vhost-net

Save and close the file.

Any changes in /etc/modprobe.d require you to update the initramfs. Enter at the command line:
update-initramfs -u

Part 5 – Network Settings

For performance reasons it is best to create a virtual network bridge that connects the VM with the host. In a separate post I have written a detailed tutorial on how to set up a bridge using Network Manager.

Note: Bridging only works for wired networks. If your PC is connected to the router via a wireless link (Wi-Fi), you won’t be able to use a bridge. In that case, the easiest way to get networking inside the Windows VM is NOT to set up any network and to delete the network configuration from the qemu command (script); qemu then falls back to its default user-mode networking. If you still want to use a bridged network, there are workarounds such as routing or ebtables.

Once you’ve setup the network, reboot the computer and test your network configuration – open your browser and see if you have Internet access.

Part 6 – Setting up Hugepages

Moved to Part 18 – Performance Tuning. This is a performance-tuning measure and is not required to run Windows on Linux.

Part 7 – Download the VirtIO Drivers

Download the VirtIO driver ISO to be used with the Windows installation from https://docs.fedoraproject.org/en-US/quick-docs/creating-windows-virtual-machines-using-virtio-drivers/index.html. Below are the direct links to the ISO images:

Latest VirtIO drivers: https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/latest-virtio/virtio-win.iso

Stable VirtIO drivers: https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso

I chose the latest driver ISO.

Part 8 – Prepare Windows VM Storage Space

We need some storage space on which to install the Windows VM. There are several choices:

  1. Create a raw image file.
    Advantages:
    – Easy to implement;
    – Flexible – the file can grow with your requirements;
    – Snapshots;
    – Easy migration;
    – Good performance.
    Disadvantages:
    – Takes up the entire space you specify.
  2. Create a dedicated LVM volume.
    Advantages:
    – Familiar technology [at least to me];
    – Excellent performance, like bare-metal;
    – Flexible – you can add physical drives to increase the volume size;
    – Snapshots;
    – Mountable within Linux host using kpartx.
    Disadvantages:
    – Takes up the entire space specified;
    – Migration isn’t that easy.
  3. Pass through a PCI SATA controller / disk.
    Advantages:
    – Excellent performance, using original Windows disk drivers;
    – Allows the use of Windows virtual drive features;
    – Can use an existing bare-metal installation of Windows in a VM;
    – Possibility to boot Windows directly, i.e. not as VM;
    – Possible to add more drives.
    Disadvantages:
    – The PC needs at least two discrete SATA controllers;
    – Host has no access to disk while VM is running;
    – Requires a dedicated SATA controller and drive[s];
    – SATA controller must have its own IOMMU group;
    – Possible conflicts in Windows between bare-metal and VM operation.

For further information on these and other image options, see https://en.wikibooks.org/wiki/QEMU/Images

Although I’m using an LVM volume, I suggest you start with the raw image. Let’s create a raw disk image:

qemu-img create -f raw -o preallocation=full /media/user/win.img 100G

for best performance, or simply:

qemu-img create -f raw /media/user/win.img 100G

Note: Adjust the size (100G) and path to match your needs and resources.

See also my post on Tuning VM disk performance.
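If you would rather not preallocate the full image up front, a sparse file also works as a raw image. A sketch, with example path and size, using truncate; blocks are only allocated as the guest writes them:

```shell
truncate -s 1G /tmp/win-test.img           # create a 1 GiB sparse file
size=$(stat -c '%s' /tmp/win-test.img)     # virtual size in bytes
alloc=$(du -k /tmp/win-test.img | cut -f1) # allocated size in KiB (~0 while sparse)
echo "virtual: $size bytes, allocated: ${alloc}K"
rm /tmp/win-test.img
```

The trade-off is the one listed above: a preallocated image cannot run out of underlying disk space later and tends to give more predictable performance.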

Part 9 – Check Configuration

It’s best to check that we got everything right:

KVM: kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

KVM module: lsmod | grep kvm
kvm_intel 200704 0
kvm 593920 1 kvm_intel
irqbypass 16384 2 kvm,vfio_pci

Above is the output for the Intel module. (Note that Ubuntu 20.04 etc. won’t list irqbypass.)

VFIO: lsmod | grep vfio
vfio_pci 45056 0
vfio_virqfd 16384 1 vfio_pci
irqbypass 16384 2 kvm,vfio_pci
vfio_iommu_type1 24576 0
vfio 32768 2 vfio_iommu_type1,vfio_pci

The above step is not needed for Ubuntu 20.04 and later [Linux Mint 20].

QEMU: qemu-system-x86_64 --version
You need QEMU emulator version 2.5.0 or newer. On Linux Mint 19 / Ubuntu 18.04 the QEMU version is 2.11. Ubuntu 20.04 comes with QEMU 4.2.

Did vfio load and bind to the graphics card?
lspci -kn | grep -A 2 02:00
where 02:00 is the bus number of the graphics card to pass to Windows. Here the output on my PC:
02:00.0 0300: 10de:13c2 [rev a1]
Subsystem: 1458:3679
Kernel driver in use: vfio-pci
02:00.1 0403: 10de:0fbb [rev a1]
Subsystem: 1458:3679
Kernel driver in use: vfio-pci

Kernel driver in use is vfio-pci. It worked!

Interrupt remapping: dmesg | grep VFIO
[ 3.288843] VFIO – User Level meta-driver version: 0.3
All good!

If you get this message:
vfio_iommu_type1_attach_group: No interrupt remapping support. Use the module param “allow_unsafe_interrupts” to enable VFIO IOMMU support on this platform
enter the following command in a root terminal (or use sudo -i):

echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/vfio_iommu_type1.conf

followed by:

update-initramfs -u

In this case you need to reboot once more.

Part 10 – Create Script to Start Windows

To create and start the Windows VM, copy the script below and save it as windows10vm.sh (or whatever name you like, just keep the .sh extension):

#!/bin/bash

vmname="windows10vm"

if ps -A | grep -q $vmname; then
echo "$vmname is already running."
exit 1

else

# use pulseaudio for sound
export QEMU_AUDIO_DRV=pa

# copy the UEFI variables to a file the VM can write to
cp /usr/share/OVMF/OVMF_VARS.fd /tmp/my_vars.fd

qemu-system-x86_64 \
  -name $vmname,process=$vmname \
  -machine type=q35,accel=kvm \
  -cpu host,kvm=off \
  -smp 4,sockets=1,cores=2,threads=2 \
  -m 8G \
  -mem-path /dev/hugepages \
  -mem-prealloc \
  -balloon none \
  -rtc clock=host,base=localtime \
  -vga none \
  -nographic \
  -serial none \
  -parallel none \
  -soundhw hda \
  -usb \
  -device usb-host,vendorid=0x045e,productid=0x076c \
  -device usb-host,vendorid=0x045e,productid=0x0750 \
  -device vfio-pci,host=02:00.0,multifunction=on \
  -device vfio-pci,host=02:00.1 \
  -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=/tmp/my_vars.fd \
  -boot order=dc \
  -drive id=disk0,if=virtio,cache=none,format=raw,file=/media/user/win.img \
  -drive file=/home/user/ISOs/win10.iso,index=1,media=cdrom \
  -drive file=/home/user/Downloads/virtio-win-0.1.140.iso,index=2,media=cdrom \
  -netdev type=tap,id=net0,ifname=vmtap0,vhost=on \
  -device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01

exit 0

fi

Make the file executable:
sudo chmod +x windows10vm.sh

You need to edit the file and change the settings and paths to match your CPU and configuration. See below for explanations on the qemu-system-x86 options:

-name $vmname,process=$vmname
Name and process name of the VM. The process name is displayed when using ps -A to show all processes, and is used in the script to determine whether the VM is already running. Don't use win10 as the process name; for some inexplicable reason it doesn't work!

-machine type=q35,accel=kvm
This specifies the machine to emulate. The accel=kvm option tells qemu to use KVM acceleration; without it the Windows guest will run in pure qemu emulation mode, which is painfully slow.
I have chosen the type=q35 option, as it improved my SSD read and write speeds. In some cases type=q35 will prevent you from installing Windows; you may then need to use type=pc,accel=kvm instead. To see all options for type=…, enter the following command:
qemu-system-x86_64 -machine help
Important: Several users passing through Radeon RX 480 and Radeon RX 470 cards have reported reboot loops after updating and installing the Radeon drivers. If you pass through a Radeon graphics card, it is better to replace the -machine line in the startup script with the following line:
-machine type=pc,accel=kvm
to use the default i440fx emulation.
Note for IGD users: If you have an Intel CPU with internal graphics [IGD], and want to use the Intel IGD for Windows, there is a new option to enable passthrough:
igd-passthru=on|off controls IGD GFX passthrough support [default=off].
In most cases you will want to use a discrete graphics card with Windows.

-cpu host
This tells qemu to emulate the host’s exact CPU. There are more options, but it’s best to stay with host.

-cpu host,kvm=off
The kvm=off option is a workaround that was needed for Nvidia graphics cards. Nvidia has since removed the VM check in its new drivers, so you don’t need the kvm=off option. Likewise if you have an AMD/Radeon card for your Windows guest. So normally you would only specify -cpu host.

-smp 4,sockets=1,cores=2,threads=2
This specifies multiprocessing. -smp 4 tells the system to use 4 [virtual] processors. My CPU has 6 cores, each supporting 2 threads, which makes a total of 12 threads. It’s probably best not to assign all CPU resources to the Windows VM – the host also needs some resources [remember that some of the processing and I/O coming from the guest takes up CPU resources in the host]. In the above example I gave Windows 4 virtual processors. sockets=1 specifies the number of actual CPU sockets qemu should assign, cores=2 tells qemu to assign 2 processor cores to the VM, and threads=2 specifies 2 threads per core. It may be enough to simply specify -smp 4, but I’m not sure about the performance consequences [if any].
If you have a 4-core Intel CPU with hyper-threading, you can specify -smp 6,sockets=1,cores=3,threads=2 to assign 75% of your CPU resources to the Windows VM. This should usually be enough even for demanding games and applications.
Note: If your CPU doesn’t support hyper-threading, specify threads=1.
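As a sanity check of the arithmetic, the -smp argument can be derived from the core and thread counts; a minimal sketch [the numbers mirror the example above, not necessarily your hardware]:

```shell
#!/bin/sh
# Sketch: build the -smp argument from a chosen number of physical
# cores. threads=2 assumes a hyper-threading CPU like the author's;
# use threads=1 if your CPU has no hyper-threading.
vcores=2        # physical cores to dedicate to the guest
threads=2       # threads per core on the host
vcpus=$((vcores * threads))
echo "-smp ${vcpus},sockets=1,cores=${vcores},threads=${threads}"
# prints: -smp 4,sockets=1,cores=2,threads=2
```

The output matches the -smp line used in the startup script.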

-m 8G
The -m option assigns memory [RAM] to the VM, in this case 8 GByte. Same as -m 8192. You can increase or decrease it, depending on your resources and needs. With modern Windows releases it doesn’t make sense to give it less than 4G, unless you are really stretched with RAM. If you use hugepages, make sure your hugepage size matches this!

-mem-path /dev/hugepages
This tells qemu where to find the hugepages we reserved. If you haven’t configured hugepages, you need to remove this option.
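As a rough sizing aid [an illustration, not from the article]: with the usual 2 MB hugepage size on x86, an 8 GB guest plus a small qemu overhead needs roughly this many pages:

```shell
#!/bin/sh
# Sketch: estimate vm.nr_hugepages for the -m value above.
# Assumes 2 MB hugepages; confirm with: grep Hugepagesize /proc/meminfo
vm_mem_mb=8192                                        # matches -m 8G
page_mb=2
pages=$(( (vm_mem_mb + vm_mem_mb / 50) / page_mb ))   # ~2% headroom
echo "vm.nr_hugepages = $pages"
# prints: vm.nr_hugepages = 4177
```

The resulting value would go into /etc/sysctl.conf on the host.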

-mem-prealloc
Preallocates the memory we assigned to the VM.

-balloon none
We don’t want memory ballooning.

-rtc clock=host,base=localtime
-rtc clock=host tells qemu to use the host clock for synchronization. base=localtime allows the Windows guest to use the local time from the host system. Another option is utc.

-vga none
Disables the built-in graphics card emulation. You can remove this option for debugging.

-nographic
Totally disables SDL graphical output. For debugging purposes, remove this option if you don’t get to the Tiano Core screen.

-serial none
-parallel none
Disable serial and parallel interfaces. Who needs them anyway?

-soundhw hda
Together with the export QEMU_AUDIO_DRV=pa shell command, this option enables sound through PulseAudio.
If you want to pass through a physical audio card or audio device and stream audio from your Linux host to your Windows guest, see here: Streaming Audio from Linux to Windows.

-usb
-device usb-host,vendorid=0x045e,productid=0x076c
-device usb-host,vendorid=0x045e,productid=0x0750
-usb enables USB support and -device usb-host… assigns the USB host devices mouse [045e:076c] and keyboard [045e:0750] to the guest. Replace the device IDs with the ones you found using the lsusb command in Part 3 above!
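For example, an lsusb line can be turned into the matching -device argument like this [the sample line is made up for illustration]:

```shell
#!/bin/sh
# Sketch: extract vendor:product from an lsusb line and format it as a
# qemu usb-host option. The sample line below is hypothetical.
line="Bus 001 Device 004: ID 045e:076c Microsoft Corp. Comfort Mouse 4500"
id=$(echo "$line" | sed -n 's/.* ID \([0-9a-f]*\):\([0-9a-f]*\).*/0x\1 0x\2/p')
set -- $id
echo "-device usb-host,vendorid=$1,productid=$2"
# prints: -device usb-host,vendorid=0x045e,productid=0x076c
```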
Note the new syntax. There are many more options, which you can find in the qemu documentation.
There are three options to assign host devices to guests. Here is the syntax:
-usb -device usb-kbd -device usb-mouse
passes through the keyboard and mouse to the VM. When using this option, remove the -vga none and -nographic options from the script to enable switching back and forth between Windows VM and Linux host using CTRL+ALT.
-usb -device usb-host,hostbus=bus,hostaddr=addr
passes through the host device identified by bus and addr.
-usb -device usb-host,vendorid=vendor,productid=product
passes through the host device identified by vendor and product ID.

-device vfio-pci,host=02:00.0,multifunction=on
-device vfio-pci,host=02:00.1
Here we specify the graphics card to pass through to the guest, using vfio-pci. Fill in the PCI IDs you found under Part 3 above. It is a multifunction device [graphics and sound]. Make sure to pass through both the video and the sound part [02:00.0 and 02:00.1 in my case].
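Both functions must also sit in an IOMMU group that is fully handed to the guest. A sketch for listing the group members [the PCI address is the example from this article; adjust it to your card]:

```shell
#!/bin/sh
# Sketch: show which devices share an IOMMU group with the GPU.
# Everything in the group must be bound to vfio-pci or passed through.
gpu=0000:02:00.0                 # adjust to your card's PCI address
link=$(readlink "/sys/bus/pci/devices/$gpu/iommu_group" 2>/dev/null)
if [ -n "$link" ]; then
  ls "/sys/kernel/iommu_groups/${link##*/}/devices"
else
  echo "no IOMMU group for $gpu (IOMMU off or wrong address)"
fi
```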

If you need to pass through a ROM [VBIOS] file, use the following syntax as an example:

-device vfio-pci,host=0000:02:00.0,multifunction=on,romfile=/home/heiko/.bin/Gigabyte_RTX2070Super_8192_191021_edit.rom \
-device vfio-pci,host=0000:02:00.1 \
-device vfio-pci,host=0000:02:00.2 \
-device vfio-pci,host=0000:02:00.3 \

-drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd
Specifies the location and format of the bootable OVMF UEFI file. This file doesn’t contain the variables, which are loaded separately [see right below].

-drive if=pflash,format=raw,file=/tmp/my_vars.fd
These are the variables for the UEFI boot file, which were copied by the script to /tmp/my_vars.fd.

-boot order=dc
Start boot from CD [d], then first hard disk [c]. After installation of Windows you can remove the “d” to boot straight from disk.

-drive id=disk0,if=virtio,cache=none,format=raw,file=/media/user/win.img
Defines the first hard disk. With the options above it will be accessed as a paravirtualized [if=virtio] drive in raw format [format=raw].
Important: file=/… enter the path to your previously created win.img file.
Other possible drive options are file=/dev/mapper/group-vol for LVM volumes, or file=/dev/sdx1 for entire disks or partitions.
For some basic -drive options, see my post here. For the new Qemu syntax and drive performance tuning, see Tuning VM Disk Performance.

-drive file=/home/user/ISOs/win10.iso,index=1,media=cdrom \
This attaches the Windows win10.iso as CD or DVD. The driver used is the ide-cd driver.
Important: file=/… enter the path to your Windows ISO image.
Note: This option is only needed during installation. Afterwards, copy the line to the end of the file and comment it out with #.

-drive file=/home/user/Downloads/virtio-win-0.1.140.iso,index=2,media=cdrom \
This attaches the virtio ISO image as CD. Note the different index.
Important: file=/… enter the path to your virtio ISO image. If you downloaded it to the default location, it should be in your Downloads directory.
Note 1: There are many ways to attach ISO images or drives and invoke drivers. My system didn't want to take a second scsi-cd device, so this option did the job. If it works for you, don't change it.
Note 2: This option is only needed during installation. Afterwards, copy the line to the end of the file and comment it out with #.

-netdev type=tap,id=net0,ifname=vmtap0,vhost=on \
-device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01
Defines the network interface and network driver. It’s best to define a MAC address, here 00:16:3e:00:01:01. The MAC is specified in Hex and you can change the last :01:01 to your liking. Make sure no two MAC addresses are the same!
vhost=on is optional – some people reported problems with this option. It is for network performance improvement.
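The tap interface named in ifname=vmtap0 must exist before qemu starts. A dry-run sketch of the commands a tap helper would execute [br0 is an assumed pre-existing host bridge; the real commands require root]:

```shell
#!/bin/sh
# Sketch: the commands a qemu ifup helper for vmtap0 would run,
# echoed as a dry run. Assumes an existing host bridge named br0.
tap=vmtap0
bridge=br0
for cmd in \
  "ip tuntap add dev $tap mode tap" \
  "ip link set $tap up" \
  "ip link set $tap master $bridge"
do
  echo "$cmd"
done
```

Drop the echo wrapper and run the commands as root [or from a qemu ifup script] once the bridge is in place.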

Important: Documentation on the installed QEMU can be found here: file:///usr/share/doc/qemu-system-common/qemu-doc.html.

For syntax changes in newer versions, see https://wiki.qemu.org/Features/RemovedFeatures.

Linux Mint 19.2 and Ubuntu 18.04 come with QEMU 2.11, Ubuntu 18.10 with 2.12. Ubuntu 19.04 uses QEMU 3.1. The latest stable version of QEMU is 4.1.0. For additional documentation on QEMU, see https://www.qemu.org/documentation/. Some configuration examples can be found in the following directory:
/usr/share/doc/qemu-system-common/config/

Part 11 – Install Windows

Start the VM by running the script as root:

sudo ./windows10vm.sh
[Make sure you specify the correct path.]

You should get a Tiano Core splash screen with the memory test result.

You might land in an EFI shell. Type exit and you should get a menu. Enter the “Boot Manager” menu, select your boot disk, and hit Enter. [See below.]

[Screenshots: UEFI shell, UEFI menu, and UEFI boot manager menu [OVMF]]

Now the Windows ISO boots and asks you to:
Press any key to start the CD / DVD…

Press a key!

Windows will then ask you to:
Select the driver to install

Click “Browse”, then select your VFIO ISO image and go to “viostor“, open and select your Windows version [w10 for Windows 10], then select the “AMD64” version for 64-bit systems, and click OK.

Note: Instead of the viostor driver, you can also install the vioscsi driver. See qemu documentation for proper syntax in the qemu command – make sure to change the startup script before you choose this driver. The vioscsi driver supports trim for SSD drives.
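If you choose vioscsi, the drive definition in the startup script changes roughly as follows [a sketch based on generic qemu virtio-scsi syntax, not taken from this article; verify it against your qemu version's documentation]. The discard=unmap option is what passes trim through to the image:

```
-device virtio-scsi-pci,id=scsi0 \
-drive file=/media/user/win.img,id=disk0,format=raw,if=none,cache=none,discard=unmap \
-device scsi-hd,drive=disk0,bus=scsi0.0 \
```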

Windows will ask for the license key, and you need to specify how to install: choose “Custom”. Then select your drive [there should be only disk0] and install.

Windows may reboot several times. When it is done rebooting, open Device Manager and select the network interface. Right-click it and select “Update driver”. Then browse to the VFIO disk and install NetKVM.

Windows should be looking for a display driver by itself. If not, install it manually.

Note: In my case, Windows did not correctly detect my drives being SSD drives. Not only will Windows 10 perform unnecessary disk optimization tasks, but these “optimizations” can actually lead to reduced SSD life and performance issues. To make Windows 10 determine the correct disk drive type, do the following:

1. Inside Windows 10, right-click the Start menu.
2. Select “Command prompt [admin]”.
3. At the command prompt, run:

winsat formal
4. It will run a while and then print the Windows Experience Index [WEI].
5. Please share your WEI in a comment below!

To check that Windows correctly identified your SSD:
1. Open Explorer
2. Click “This PC” in the left tab.
3. Right-click your drive [e.g. C:] and select “Properties”.
4. Select the “Tools” tab.
5. Click “Optimize”
You should see something similar to this:

[Screenshot: the Optimize Drives window, showing the media type of each drive]

In my case, I have drive C: [my Windows 10 system partition] and a “Recovery” partition located on an SSD, the other two partitions [“photos” and “raw_photos”] are using regular hard drives [HDD]. Notice the “Optimization not available” 😀 .

Turn off hibernation and suspend! Having either of them enabled can cause your Windows VM to hang, or may even affect the host. To turn them off, follow the instructions for hibernation and suspend.

Turn off fast startup! When you shut down the Windows VM, fast startup leaves the file system in a state that is unmountable by Linux. If something goes wrong, you’re screwed. NEVER EVER let proprietary technology have control over your data. Follow these instructions to turn off fast startup.

By now you should have a working Windows VM with VGA passthrough.

If this article has been helpful, click the “Like” button below. Don’t forget to share this page with your friends.
