This is the Tashi package. Currently, we are using KVM and Xen.

Quick start
========================================================================================================================
XXX: This needs to be rewritten

Notes on the VMs
========================================================================================================================
KVM    - uses Intel VT and is open source
KQEMU  - syscalls are faster than KVM, but everything else is slower
QEMU   - same as KVM, but slower
Xen    - crashed for me, but will try it again in the future
VMware - not open source -- does this exclude it?

Filename                                  Description
========================================================================================================================
STYLE                                     Specifies some rules about what should and shouldn't be done to the code
README                                    This file
doc                                       Project documentation
doc/external_OC2_pitch_04_03_08           First round of the external "OC2" pitch
doc/reading_group_03_10_08                Reading group presentation
doc/notes                                 Notes from project meetings
doc/html                                  Automatically generated HTML doc for the project (made by mkhtmldoc.sh)
mkhtmldoc.sh                              Automatically generates HTML doc for the project
.pydevproject                             Eclipse project file?
.project                                  Eclipse project file?
TODO                                      List of things to do for the project
src                                       Root of the Python packages
src/tashi                                 Base tashi package
src/tashi/__init__.py                     Contains some universally useful functions
src/tashi/messaging                       Messaging subsystem
src/tashi/client                          Client package
src/tashi/client/client.py                Client executable
src/tashi/client/__init__.py              Package stub
src/tashi/data                            Data backend package (for the Cluster Manager)
src/tashi/data/__init__.py                Package functions
src/tashi/data/schema.py                  Database schema
src/tashi/data/util.py                    Utility functions
src/tashi/services                        Generated by tashi/thrift/build.py (Thrift-generated code)
src/tashi/nodemanager                     Node manager package -- needs to be reorganized
src/tashi/thrift                          Thrift stuff
src/tashi/thrift/services.thrift          Thrift spec
src/tashi/thrift/build.py                 Tool to build the Thrift code and put it in the right place
src/tashi/clustermanager                  Cluster manager package
src/tashi/clustermanager/__init__.py      Cluster manager functions
src/tashi/clustermanager/policies.py      Simple policy implementation (XXX: this needs to be reorganized)
src/tashi/clustermanager/service.py       Service implementation (for Thrift RPCs)
src/tashi/clustermanager/demo.py          Populates the data backend with test data
src/tashi/clustermanager/clusterman...    Cluster manager executable
etc                                       Configuration files
etc/ClusterManager.cfg                    Cluster manager configuration file
etc/ClusterManagerLogging.cfg             Cluster manager logging configuration file (going away)
guest                                     Guest stuff
guest/tashi                               Script for setting the hostname from the IP and registering the IP

Client
========================================================================================================================
The client uses Thrift RPCs to communicate with the Cluster Manager.

Guest
========================================================================================================================
Steps to set up a guest:
XXX: Optional
# Remove /etc/hostname so that the hostname is not fixed
# Place the "oc2" script in /etc/network/if-up.d/ to set the hostname and register the IP with the master
# Comment out eth0 in /etc/iftab so that multiple MAC addresses show up as eth0
# Add "acpi=force" to the kernel arguments to support shutdown
# Add "noapictimer" if configuring a 64-bit guest
# Install SSH so that the machine can be accessed

NodeManager
========================================================================================================================
The steps currently involved in prepping a machine to be a host include:
XXX: This list needs to be rewritten
# Enable VT in the BIOS (for Dell machines, "./tokenCtlS --token 0x014b --activate"), rebooting if necessary
# Install KVM ("cd /; tar xvjf kvm-60-bin.tar.bz2")
# Load the new kernel modules ("rmmod kvm; rmmod kvm-intel; depmod -a; modprobe kvm-intel")
# Make sure SDL is installed ("apt-get install libsdl1.2debian-oss")
# Make sure bridge-utils is installed ("apt-get install bridge-utils")
# Set up a bridge for the guests ("brctl addbr vmbr")
# Add a physical NIC to the bridge ("brctl addif vmbr eth1")
# Set that physical NIC to be up and in promiscuous mode ("ifconfig eth1 0.0.0.0 up promisc")
# Set the bridge to be up and in promiscuous mode ("ifconfig vmbr up promisc")
# Make sure the disk images are available ("mkdir /mnt/mryan3; mount mryan3-d3:/export /mnt/mryan3")

To prepare an image for booting natively on a host:
XXX: This also needs to be rewritten
# Add losetup to the initrd in /sbin
# Apply the diff "initrd-real-boot-diff.txt" to the initrd
# Rebuild the initrd
# Place the image at /x/hd.img on the host machine (this could be part of the initrd)
# Set the kernel parameters to "root=/dev/hda1 rw --"

ClusterManager
========================================================================================================================
XXX: There is a server that runs here -- more doc needed later

Packages
========================================================================================================================
Python     [Python license, does not affect code]
KVM        [GPL & LGPL, external binary -- shouldn't affect code]
Xen        [?, external binary or library?]
SQLAlchemy [MIT]
Thrift     [ASL eventually?]
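The guest-setup steps in the Guest section above can be partially automated. The sketch below is illustrative and not part of the Tashi tree: prepare_guest and its root parameter are made-up names, and it only covers the file edits. Installing the "oc2" script, changing the kernel arguments, and installing SSH stay manual because they are distro- and bootloader-specific.

```python
import os

def prepare_guest(root="/"):
    """Apply the file-level guest-setup steps from this README.

    Use root="/" on a real guest; pass a scratch directory to try it out.
    """
    # Remove /etc/hostname so that the hostname is not fixed
    hostname = os.path.join(root, "etc/hostname")
    if os.path.exists(hostname):
        os.remove(hostname)
    # Comment out eth0 in /etc/iftab so that multiple MAC addresses
    # show up as eth0
    iftab = os.path.join(root, "etc/iftab")
    if os.path.exists(iftab):
        with open(iftab) as f:
            lines = f.readlines()
        with open(iftab, "w") as f:
            for line in lines:
                if line.startswith("eth0"):
                    line = "# " + line
                f.write(line)
    # Remaining steps are manual: place the "oc2" script in
    # /etc/network/if-up.d/, add "acpi=force" (plus "noapictimer" on
    # 64-bit guests) to the kernel arguments, and install SSH.
```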
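The host-prep commands quoted in the NodeManager section above can also be gathered into one place. This helper is a sketch, not part of Tashi: host_prep_commands, nic, and bridge are invented names, and the BIOS VT step is omitted because it happens before the OS boots. The commands themselves are exactly the ones quoted in the list.

```python
def host_prep_commands(nic="eth1", bridge="vmbr"):
    """Return the host-prep shell commands from the list above, in order."""
    return [
        # Install KVM
        'cd /; tar xvjf kvm-60-bin.tar.bz2',
        # Load the new kernel modules
        'rmmod kvm; rmmod kvm-intel; depmod -a; modprobe kvm-intel',
        # Make sure SDL and bridge-utils are installed
        'apt-get install libsdl1.2debian-oss',
        'apt-get install bridge-utils',
        # Set up a bridge for the guests and attach a physical NIC
        'brctl addbr %s' % bridge,
        'brctl addif %s %s' % (bridge, nic),
        # Bring the NIC and the bridge up in promiscuous mode
        'ifconfig %s 0.0.0.0 up promisc' % nic,
        'ifconfig %s up promisc' % bridge,
        # Make sure the disk images are available
        'mkdir /mnt/mryan3; mount mryan3-d3:/export /mnt/mryan3',
    ]
```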
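The native-boot image-prep steps above can likewise be sketched. One caveat: this README does not say how the initrd is unpacked and rebuilt, so the gzipped-cpio commands below are an assumption, as is the helper name initrd_rebuild_commands. Placing the image at /x/hd.img and setting the kernel parameters to "root=/dev/hda1 rw --" happen on the host afterwards, as the list above says.

```python
def initrd_rebuild_commands(initrd="initrd.img", workdir="initrd-tree",
                            patch="initrd-real-boot-diff.txt"):
    """Return shell commands for the initrd rework described above.

    Assumes a gzipped-cpio initrd; adjust the unpack/repack steps for
    other formats.
    """
    return [
        # Unpack the initrd into a scratch tree (assumed format)
        'mkdir -p %s' % workdir,
        'cd %s && gunzip -c ../%s | cpio -id' % (workdir, initrd),
        # Add losetup to the initrd in /sbin
        'cp /sbin/losetup %s/sbin/' % workdir,
        # Apply the diff "initrd-real-boot-diff.txt"
        'cd %s && patch -p1 < ../%s' % (workdir, patch),
        # Rebuild the initrd
        'cd %s && find . | cpio -o -H newc | gzip > ../%s' % (workdir, initrd),
    ]
```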