[PTLsim-devel] better guest SMP support?

Matt T. Yourst
Thu Oct 18 16:49:16 EDT 2007


On Thursday 18 October 2007 12:51, Sasa Tomic wrote:
>
> I don't know how stable the SVM part of Xen is. I've seen on mailing
> lists that it sometimes blocks without reason (probably due to some
> deadlock or something similar).
>
> KVM: support for SMP guests was added only recently, with the new
> 2.6.23 kernel, so it is also bound to be unstable for some time.
>
> On the other hand, both Xen and KVM are based on QEMU, which has
> kqemu, a kernel module for acceleration, and very good guest SMP
> support. I ran 128 guest CPUs on my T60 dual-core laptop without any
> problems, and with very good performance.
> The biggest problem was the Linux kernel, which had trouble running on
> that many cores. After switching to the latest available kernel
> version, booting succeeded with all 128 cores :)
>
> The question is: are there any attempts to run kqemu + QEMU + PTLsim,
> i.e. to port the PTLsim to QEMU?
> The installation of kqemu + QEMU is very straightforward, easy and fast.
> kqemu installs without even rebooting the machine (it requires root for
> installation, but not to run afterwards).
> QEMU itself doesn't require root for compiling, installing or running,
> but it has very poor performance when used without kqemu.

I realize Xen is becoming increasingly inconvenient, and we plan to fix that 
in the near future. Here's our current plan:

Currently, PTLsim runs in (virtualized) kernel mode inside the Xen domain, and 
uses Xen hypercalls to communicate with the outside world. In this model, 
PTLsim acts as its own kernel and does not depend on Linux or any other host.

This is very different from QEMU, which simulates the virtual machine using a 
userspace Linux process (kqemu still falls back to this model when the kernel 
module cannot handle certain cases).

For the KVM/QEMU/etc. version, we plan to use the same philosophy, but take it 
one step further. PTLsim will be modified to run on the bare x86 hardware in 
kernel mode (like a real hypervisor), so in theory you could boot your entire 
machine under it (as you now do with Xen's dom0). It will not depend on any 
Xen hypercalls.

However, this is obviously not very convenient for development purposes, so 
the plan is to run the PTLsim "kernel" inside a virtual machine. Unlike the 
current Xen-based PTLsim, this is compatible with any virtual machine 
monitor, including KVM, QEMU/kqemu, a Xen HVM domain, and in theory even 
VMware or other VMMs.

We only need two modifications to the VMM software itself:

1. Ability to switch into PTLsim after the virtual machine has booted and is 
running in 32-bit or 64-bit protected mode (since PTLsim doesn't currently 
support real mode code).

This is fairly easy to implement - in response to some outside command, the 
virtual machine will stop executing whatever code it's running, and will 
switch to 64-bit mode with a certain set of page tables and registers. This 
will be conceptually similar to how system management mode (SMM) is entered 
on real x86 chips, based on a special external interrupt.

2. Interface between PTLsim and the outside world.

Currently, the Xen-based PTLsim uses virtual interrupts and a shared memory 
page to make a subset of Linux-like system calls, which are executed by the 
PTLsim monitor process running on the Linux host system.

In the new model, we will modify the VMM so it includes a virtual PCI device 
with memory mapped registers that PTLsim will access to initiate the same 
actions we do now via Xen.

This is very easy to implement from the PTLsim side, but will require some 
additions to the VMM software. Conveniently, all the relevant VMMs (KVM, Xen, 
kqemu) use QEMU for their virtual device models, so no matter which VMM you 
choose, the same code will need to be modified.

Just keep in mind that we would really, really like to keep PTLsim running on 
the bare hardware (even if that hardware is inside a virtual machine) rather 
than switching to a QEMU-like model where it runs as a normal Linux process. 
I can't elaborate on the reasons for needing this model, but it would save us 
considerable effort (and avoid the need to maintain an internal fork of the 
code) if PTLsim were kept separate from any specific host OS and userspace 
libraries, and instead remained entirely self-contained, supporting only 
64-bit mode.

In summary, we are going to make PTLsim work with *all* VMMs by only using the 
bare x86 hardware features, plus a few small VMM modifications to let PTLsim 
talk to the outside world and let us switch between native mode and 
simulation mode. Since KVM, Xen and kqemu all use QEMU as a base, the 
modifications all affect the same code. PTLsim itself will continue to be 
self-contained and will run in 64-bit kernel mode inside the virtual machine.

I can't give a timeline for this work since I don't have time to do it myself, 
and everyone else on our team is busy working on the fully cycle-accurate MESI 
model (which, by the way, should be released within the next month).

If you're interested in working on this (and are willing to follow the model 
I've outlined above), I'd be glad to give advice and help debug problems. 
Actually, we're hoping to hire someone to work on this specific project full 
time (your code will be contributed back to the open source PTLsim version), 
so if you're interested, please send me a private e-mail. Thanks,

- Matt

-------------------------------------------------------
 Matt T. Yourst                    yourst at peptidal.com
 Peptidal Research Inc., Co-Founder and Lead Architect
-------------------------------------------------------
