I wanted my own EC2 cloud to experiment with and to use for testing poolparty. I searched for a VMware instance that was already installed and configured, so I could just download it and play around. Unfortunately, I couldn't find one, so I went ahead and created my own. You can download it and take it for a spin. I tried to keep the settings as general as possible so it will require as few changes as possible.
The front end is a VMware Ubuntu 9.04 server image with a single ethernet device in bridged mode. The image has a user "administrator" with password "s3cr3t" and hostname "jauntyserver". The front end is set up to use SYSTEM networking. Basically, this means that your Eucalyptus cloud will be on the same network, and use the same DHCP servers, as your real network. This doesn't give you all the features of EC2, but it seemed the most foolproof way to start out.
I set up a bridged network device on my front end with a static address:
auto br0
iface br0 inet static
address 192.168.4.20
netmask 255.255.252.0
network 192.168.4.0
broadcast 192.168.7.255
gateway 192.168.4.1
dns-nameservers 192.168.4.1
bridge_ports eth0
Depending on your network, you may want to change this to use DHCP or different network values.
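If your network hands out addresses via DHCP, a minimal variant of the same stanza (still assuming eth0 is the physical NIC) would be:
auto br0
iface br0 inet dhcp
bridge_ports eth0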
Once the image is running, you can log into the admin interface at https://192.168.4.20:8443
The Eucalyptus admin username is admin and the password has been set to "s3cr3t".
The admin email is eucalyptus@mailinator.com. (Note: this means any system emails sent to the admin are publicly viewable by anyone at http://www.mailinator.com/maildir.jsp?email=eucalyptus. You can go there to check for mail and delete it.)
The Walrus (S3) URL is set to:
http://eucalyptus.stimble.net:8773/services/Walrus
This Walrus URL is specific to my setup. You can either reset it by changing the Walrus URL at https://192.168.4.20:8443/#conf, or add eucalyptus.stimble.net to your hosts file. Whether you change the Walrus URL or use the image's default, you need to be sure that the URL is resolvable by your compute nodes. It is also important to note that if you change the Walrus URL, you need to re-download the certificates and update your local ec2 configuration.
(On the Mac, I use the Ruby gem ghost to manage my hosts file.)
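For example, if you stick with the image's default Walrus URL, an entry like this on your workstation and compute nodes points it at the front end (assuming the front end keeps the 192.168.4.20 address above):
# point the image's default Walrus hostname at the front end
echo "192.168.4.20 eucalyptus.stimble.net" | sudo tee -a /etc/hosts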
To set yourself up to use the new Eucalyptus cloud, download the admin credentials from https://192.168.4.20:8443/#credentials. Clicking the "Download Certificate" button downloads a zip containing your credentials and a small script, eucarc, that will set up your shell to work with the Eucalyptus server. I unzipped this download to $HOME/.euca and then sourced the eucarc file:
. $HOME/.euca/eucarc
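For reference, the unzip step before that was just the following; the zip filename is whatever your browser saved, euca2-admin-x509.zip is only a guess:
mkdir -p $HOME/.euca
unzip -d $HOME/.euca euca2-admin-x509.zip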
Make sure that your cloud controller is working by issuing an ec2 command. Try:
ec2-describe-images
You should see:
IMAGE emi-39F01613 ubuntu-image-bucket/ubuntu.9-04.x86-64.img.manifest.xml admin available public x86_64 machine eki-AE9D17D8 eri-17561931
IMAGE eki-AE9D17D8 ubuntu-kernel-bucket/vmlinuz-2.6.28-11-generic.manifest.xml admin available public x86_64 kernel
IMAGE eri-17561931 ubuntu-ramdisk-bucket/initrd.img-2.6.28-11-generic.manifest.xml admin available public x86_64 ramdisk
These are the Ubuntu images from euca-ubuntu-9.04-x86_64.tar.gz, which I had already uploaded and registered with the front end.
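If you ever want to register your own images, the euca2ools workflow looks roughly like this; the bucket and file names below are placeholders, not what this image actually uses:
# bundle, upload and register the machine image (the kernel and ramdisk are done
# the same way, with --kernel true / --ramdisk true on the bundle step)
euca-bundle-image -i ubuntu.9-04.x86-64.img --kernel eki-XXXXXXXX --ramdisk eri-XXXXXXXX
euca-upload-bundle -b my-image-bucket -m /tmp/ubuntu.9-04.x86-64.img.manifest.xml
euca-register my-image-bucket/ubuntu.9-04.x86-64.img.manifest.xml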
Now you have a functioning front-end cloud controller, but that's not much use without some compute nodes. You need a computer that supports hardware virtualization. I set up a Jaunty server on one of my machines and installed just the node controller as per the instructions. I also had to update VNET_INTERFACE in /etc/eucalyptus/eucalyptus.conf. It was set to "peth0", which is probably the right option if you are using Xen; I am using KVM, so I changed it to
VNET_INTERFACE='br0'
Change it to whatever interface your packets will use to reach the front end.
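For what it's worth, the node setup on my compute node amounted to roughly this (the package and init script names are the standard Jaunty ones, so double-check them against the official instructions):
# install just the node controller on the Jaunty box
sudo apt-get install eucalyptus-nc
# point it at the bridge that reaches the front end
sudo sed -i "s/^VNET_INTERFACE=.*/VNET_INTERFACE='br0'/" /etc/eucalyptus/eucalyptus.conf
# restart so the change is picked up
sudo /etc/init.d/eucalyptus-nc restart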
Then I updated the configuration on the front end with the name of this node ("saint" in my case):
/usr/sbin/euca_conf -addnode compute_node_name /etc/eucalyptus/eucalyptus.conf
If your compute node happens to be named saint, you can skip this step. Also, once again, be sure the names all resolve, or use IP addresses.
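A quick way to check that from the front end (the node name "saint" and the address 192.168.4.21 below are just examples):
# does the node name resolve on the front end?
getent hosts saint
# if not, either add it to /etc/hosts or register the node by IP instead
echo "192.168.4.21 saint" | sudo tee -a /etc/hosts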
Now, you need to add a keypair:
ec2-add-keypair keypair_name | tee $HOME/.euca/keypair_name
chmod 600 $HOME/.euca/keypair_name
Then edit $HOME/.euca/keypair_name and remove the first line of the file (the KEYPAIR line with the fingerprint), leaving just the private key.
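If you would rather skip the manual edit, a one-liner like this should give the same result (keypair_name is whatever name you picked above):
# keep only the private key; the fingerprint line starts with KEYPAIR
ec2-add-keypair keypair_name | grep -v '^KEYPAIR' > $HOME/.euca/keypair_name
chmod 600 $HOME/.euca/keypair_name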
Now, in theory at least, you should be able to
ec2-run-instances emi-39F01613 -k keypair_name
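While it comes up, I just poll with ec2-describe-instances; a quick loop like this works in a pinch:
# wait until the instance reports "running"
until ec2-describe-instances | grep -q running; do sleep 20; done
ec2-describe-instances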
After a few minutes the instance should show as running. (Note: the first boot takes longer, as the instance image has to be copied over to the node; subsequent boots use a cached copy and come up much faster.) Once it is running:
ssh -i $HOME/.euca/keypair_name -l root ip-or-name-of-instance
Now dance a little jig and enjoy your micro cloud!
1 comment:
Hi there. Great post. I have a couple of questions that I am trying to resolve generally with Eucalyptus, about automating the master and nodes for CentOS. I am hoping to do a kickstart configuration for these. The first issue is a partitioning strategy.
I have yet to see anything that recommends a particular approach for either the master or a node. There are two types of configurations that I see as important. One is direct disk, where you use a local volume for Walrus. The other is a filesystem of some form, whether NFS or a distributed FS. I would like to know what the most suitable partitioning approach would be in each of these cases.
I have also been thinking about redundancy for the master. With direct disk, I see a need for a second master to be continuously synced with the data and storage of the original master. If you are exporting a filesystem, you would still need to sync data from the master, but not Walrus, since you are managing your storage elsewhere. In any case, I would like to hear more thoughts on this to work out a strategy for master redundancy and a failover approach.