Monday, October 24, 2011

Glance Authentication With Keystone (2012.1-dev)

Glance may optionally be integrated with Keystone. Setting this up is relatively straightforward: the Keystone distribution includes the requisite middleware and examples of appropriately modified glance-api.conf and glance-registry.conf configuration files in the examples/paste directory. Once you have installed Keystone and edited your configuration files, newly created images will have their owner attribute set to the tenant of the authenticated user, and images whose is_public attribute is false will be accessible only to their owner.
The exception is images whose owner is set to null, which only users with the Admin role may do. These images may still be accessed by the public, but will not appear in the list of public images. This allows the Glance Registry owner to publish images for beta testing without having them show up in lists, potentially confusing users.
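For example, with the legacy bin/glance client shipped with this release, a private image could be uploaded along these lines (the image name, formats, and file name are illustrative assumptions, not values from this setup):
$ glance add name="my-private-image" is_public=false disk_format=raw container_format=bare < my-image.raw
Because is_public is false, only the owning tenant will see this image once Keystone authentication is in place.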

Configuring the Glance Client to use Keystone

Once the Glance API and Registry servers have been configured to use Keystone, you will need to configure the Glance client (bin/glance) to use Keystone as well.
Just as with Nova, authentication credentials are specified via environment variables. The only difference is that Glance's environment variables start with OS_AUTH_ while Nova's begin with NOVA_.
If you already have Nova credentials present in your environment, you can use the included tool, tools/nova_to_os_env.sh, to create Glance-style credentials. To use this tool, verify that Nova credentials are present by running:
$ env | grep NOVA_
NOVA_USERNAME=<YOUR USERNAME>
NOVA_API_KEY=<YOUR API KEY>
NOVA_PROJECT_ID=<YOUR TENANT ID>
NOVA_URL=<THIS SHOULD POINT TO KEYSTONE>
NOVA_AUTH_STRATEGY=keystone
Note
If NOVA_AUTH_STRATEGY=keystone is not present, add that to your novarc file and re-source it. If the command produces no output at all, then you will need to source your novarc.
Also, make sure that NOVA_URL points to Keystone and not the Nova API server. Keystone will return the address for Nova and Glance’s API servers via its “service catalog”.
Once Nova credentials are present in the environment, you will need to source the conversion script:
$ source ./tools/nova_to_os_env.sh
The final step is to verify that the OS_AUTH_ credentials are present:
$ env | grep OS_AUTH
OS_AUTH_USER=<YOUR USERNAME>
OS_AUTH_KEY=<YOUR API KEY>
OS_AUTH_TENANT=<YOUR TENANT ID>
OS_AUTH_URL=<THIS SHOULD POINT TO KEYSTONE>
OS_AUTH_STRATEGY=keystone
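With the OS_AUTH_ variables in place, a quick sanity check is to list images with the client; a minimal example, assuming the legacy bin/glance index command:
$ glance index
If the credentials are valid, the command returns the list of public images; otherwise the request should be rejected during Keystone authentication.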

Configuring the Glance servers to use Keystone

Keystone is integrated with Glance through the use of middleware. The default configuration files for both the Glance API and the Glance Registry use a single piece of middleware called context, which generates a request context without any knowledge of Keystone. In order to configure Glance to use Keystone, this context middleware must be replaced with two other pieces of middleware: the authtoken middleware and the auth-context middleware, both of which may be found in the Keystone distribution. The authtoken middleware performs the Keystone token validation, which is the heart of Keystone authentication. On the other hand, the auth-context middleware performs the necessary tie-in between Keystone and Glance; it is the component which replaces the context middleware that Glance uses by default.
One other important concept to keep in mind is the request context. In the default Glance configuration, the context middleware sets up a basic request context; configuring Glance to use auth-context causes a more advanced context to be configured. It is also important to note that the Glance API and the Glance Registry use two different context classes; this is because the registry needs advanced methods that are not available in the default context class. The implications of this will be obvious in the example below for configuring the Glance Registry.

Configuring Glance API to use Keystone

Configuring Glance API to use Keystone is relatively straightforward. The first step is to ensure that declarations for the two pieces of middleware exist. Here is an example for authtoken:
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = 127.0.0.1
service_port = 5000
auth_host = 127.0.0.1
auth_port = 5001
auth_protocol = http
auth_uri = http://127.0.0.1:5000/
admin_token = 999888777666
The actual values for these variables will need to be set depending on your situation. For more information, please refer to the Keystone documentation on the auth_token middleware, but in short:
  • Those variables beginning with service_ are only needed if you are using a proxy; they define the actual location of Glance. That said, they must be present.
  • Except for auth_uri, those variables beginning with auth_ point to the Keystone Admin service. This information is used by the middleware to actually query Keystone about the validity of the authentication tokens.
  • The auth_uri variable must point to the Keystone Auth service, which is the service users use to obtain Keystone tokens. If the user does not have a valid Keystone token, they will be redirected to this URI to obtain one.
  • The admin_token variable specifies the administrative token that Glance uses in its query to the Keystone Admin service.
The other piece of middleware needed for Glance API is the auth-context:
[filter:auth-context]
paste.filter_factory = keystone.middleware.glance_auth_token:filter_factory
Finally, to actually enable using Keystone authentication, the application pipeline must be modified. By default, it looks like:
[pipeline:glance-api]
pipeline = versionnegotiation context apiv1app
(Your particular pipeline may vary depending on other options, such as the image cache.) This must be changed by replacing context with authtoken and auth-context:
[pipeline:glance-api]
pipeline = versionnegotiation authtoken auth-context apiv1app

Configuring Glance Registry to use Keystone

Configuring Glance Registry to use Keystone is also relatively straightforward. The same pieces of middleware need to be added as are needed by Glance API; see above for an example of the authtoken configuration. There is a slight difference for the auth-context middleware, which should look like this:
[filter:auth-context]
context_class = glance.registry.context.RequestContext
paste.filter_factory = keystone.middleware.glance_auth_token:filter_factory
The context_class variable is needed to specify the Registry-specific request context, which contains the extra access checks used by the Registry.
Again, to enable using Keystone authentication, the application pipeline must be modified. By default, it looks like:
[pipeline:glance-registry]
pipeline = context registryapp
This must be changed by replacing context with authtoken and auth-context:
[pipeline:glance-registry]
pipeline = authtoken auth-context registryapp

Sharing Images With Others

It is possible to allow a private image to be shared with one or more alternate tenants. This is done through image memberships, which are available via the members resource of images. (For more details, see the Glance API documentation.) Essentially, a membership is an association between an image and a tenant which has permission to access that image. These membership associations may also have a can_share attribute, which, if set to true, delegates the authority to share an image to the named tenant.
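As a rough sketch of what a membership request looks like against the v1 members resource (the endpoint, image ID, and tenant ID below are placeholders, and the token must be a valid Keystone token for a user allowed to share the image):
curl -X PUT \
     -H "X-Auth-Token: <TOKEN>" \
     -H "Content-Type: application/json" \
     -d '{"member": {"can_share": true}}' \
     http://127.0.0.1:9292/v1/images/<IMAGE_ID>/members/<TENANT_ID>
A successful call adds <TENANT_ID> as a member of the image and, because can_share is true, allows that tenant to share the image further.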

References:
http://glance.openstack.org/authentication.html

Tuesday, October 18, 2011

Monitoring Tool Part 2: Ganglia 3.0.7 on Ubuntu 10.10

Ganglia: Installation

This is part one of a three-part tutorial on installing and configuring Ganglia on Debian.

Supporting Packages

Ganglia uses RRDTool to generate and display its graphs and won't work without it. Fortunately, rrdtool doesn't need to be installed from source. Just run
apt-get install rrdtool librrds-perl librrd2-dev
In order to see the pie charts in PHP, you'll need an additional package:
apt-get install php5-gd

Getting Ganglia

Except for the frontend, Ganglia is in the Debian repository and easy to install. On the webserver, you'll need both packages, so
apt-get install ganglia-monitor gmetad
On all of the worker nodes (and the head node, if it isn't running your webserver), you'll only need ganglia-monitor. You can run apt-get on each node one at a time, or see the Cluster Time-saving Tricks for tips on how to script it; a minimal loop is sketched below.
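As a rough sketch of such a script (the hostnames are assumptions; substitute your own node names), something like this installs ganglia-monitor on every worker node over SSH:
for node in node01 node02 node03; do
    ssh root@$node "apt-get -y install ganglia-monitor"
done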

Getting the Ganglia Frontend

Unfortunately, the frontend requires an entire build of Ganglia. That doesn't detract from the convenience of having the other two packages in the Debian repository, though, since you'll only need to build this one.
Visit Ganglia's SourceForge download page and copy the file location of the most recent version (the file should end in .tar.gz). Then, from wherever you keep your source files, run
wget "http://downloads.sourceforge.net/ganglia/ganglia-3.0.7.tar.gz?modtime=1204128965&big_mirror=0"
or similar. Then untar the file with
tar xvzf ganglia*.tar.gz
and cd into the new directory. Ganglia follows the typical source installation paradigm. Run ./configure --help to see all of the options. Important ones include
  • --prefix= - specify where the binaries should be installed; optional software is often put in /opt
  • --enable-gexec - use gexec support
  • --with-gmetad - compile and install the metad daemon
The full line to enter should look similar to this:
./configure --prefix=/opt/ganglia --enable-gexec --with-gmetad
If it errors out and is unable to find rrd.h, make sure you installed everything listed above under "RRDTool". When it finishes successfully, you should see a screen like this:
Welcome to..
     ______                  ___
    / ____/___ _____  ____ _/ (_)___ _
   / / __/ __ `/ __ \/ __ `/ / / __ `/
  / /_/ / /_/ / / / / /_/ / / / /_/ /
  \____/\__,_/_/ /_/\__, /_/_/\__,_/
                   /____/

Copyright (c) 2005 University of California, Berkeley

Version: 3.0.7 (Fossett)
Library: Release 3.0.7 0:0:0

Type "make" to compile.
Go ahead and enter make, and after that finishes, make install.
Finally, when it's done, create a ganglia directory and copy the contents of the web directory into it:
mkdir /var/www/ganglia
cp web/* /var/www/ganglia

Apache

For convenience, you can update Apache to redirect any HTTP requests for the root directory (i.e., yourserver.yourdomain.com) straight to /ganglia. Open /etc/apache2/sites-enabled/000-default, find the block of code for the /var/www directory, and add a redirect. The block should now look like this:
<Directory /var/www/>
        Options FollowSymLinks MultiViews
        AllowOverride None
        Order allow,deny
        allow from all
        RedirectMatch ^/$ /ganglia/
</Directory>


After this, restart Apache:
apache2ctl restart
When you visit yourserver.yourdomain.com for the first time, you should be redirected to yourserver.yourdomain.com/ganglia and see an "unspecified Grid report". Cool!

Configuring Gmetad

/etc/gmetad.conf

The main configuration file for gmetad on Debian is /etc/gmetad.conf. Going through the file from beginning to end, here are a few values you may want to change; you can search for them in your favorite text editor.

Required Changes

Some of these are commented out by default (they have a # in front of the line). They need to be uncommented to work.
  • authority - This should be set to yourhost.yourdomain.com/ganglia. If you're behind a firewall and the URL appears as the firewall's, you should use that instead. For instance, my webserver is gyrfalcon, but through NAT with iptables my URL appears as eyrie.mydomain.edu, so I use that URL for authority.
  • trusted_hosts - If your webserver has multiple domain names, they should all be listed here. Otherwise, this can remain empty.

Optional Changes

  • gridname - If you don't like having the overall wrapper named "Grid", you can change it to something else.
  • rrd_rootdir - Ganglia needs to store a lot of data for RRD. If you want this stored some place other than /var/lib/ganglia/rrds, change this value.
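Taken together, the relevant lines of /etc/gmetad.conf might end up looking roughly like this (the hostnames, grid name, and directory are only examples):
authority "http://eyrie.mydomain.edu/ganglia/"
trusted_hosts eyrie.mydomain.edu gyrfalcon.mydomain.edu
gridname "MyGrid"
rrd_rootdir "/var/lib/ganglia/rrds"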

Restarting Ganglia

After any changes, gmetad will need to be restarted. Do this with
/etc/init.d/gmetad restart

Configuring Gmon

The host running gmetad is probably also running gmond, assuming you want to monitor that host as well; it will need to be configured as a client node too.

/etc/gmond.conf

The file responsible for connecting each node to the server hosting Ganglia is /etc/gmond.conf. This file needs to be edited appropriately on each node. This can be done individually, or one file can be created on a single node, then scripted and copied out to each of the other nodes.
The following values need to be edited:
  • name - This is the name of the cluster this node is associated with. This will show up on the web page.
  • owner - Different owners will be used to separate different clusters into administrative domains. If you only have one cluster, it's not such a big deal.
  • mcast_if - If the node has multiple interfaces, the one to be used to connect to the host should be specified.
  • num_nodes - The number of nodes in the cluster.
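As a sketch of those edits in the flat gmond.conf format these directives come from (newer Ganglia releases use a different, sectioned syntax, and the values here are illustrative assumptions):
name "my-cluster"
owner "My Research Group"
mcast_if eth0
num_nodes 8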

Restarting Ganglia Monitoring

After making the changes, gmond needs to be restarted on the node. Do this with
/etc/init.d/ganglia-monitor restart

Restarting Ganglia Host

After making the changes on all the nodes, gmetad on the webserver needs to be restarted. Do this with
/etc/init.d/gmetad restart
You may need to wait around ten minutes to see your changes take effect.
------------------------------------------------------------------------------------------

Sources:  http://debianclusters.org/index.php/Ganglia:_Installation

Monday, October 17, 2011

Nova-Volume

Concept:
The nova-volume service exposes LVM volumes over iSCSI to the compute nodes that run instances. Thus, there are two components involved:
  1. lvm2, which works with a VG called "nova-volumes" (Refer to http://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux) for further details)
  2. open-iscsi, the iSCSI implementation which manages iSCSI sessions on the compute nodes
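For reference, the "nova-volumes" volume group is created with standard LVM commands; a minimal sketch, assuming /dev/sdb is a spare disk set aside for volumes:
pvcreate /dev/sdb
vgcreate nova-volumes /dev/sdb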
Here is what happens from volume creation to attachment (we use euca2ools for the examples, but the same explanation applies to the API):
  1. The volume is created via euca-create-volume, which creates a logical volume (LV) in the volume group (VG) "nova-volumes".
  2. The volume is attached to an instance via euca-attach-volume, which creates a unique iSCSI IQN that is exposed to the compute node.
  3. The compute node that runs the instance now has an active iSCSI session and a new local storage device (usually a /dev/sdX disk).
  4. libvirt uses that local storage as backing storage for the instance; the instance gets a new disk (usually a /dev/vdX disk).
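As a quick example of the first two steps with euca2ools (the size, zone, instance ID, volume ID, and device name are placeholders, not values from this setup):
$ euca-create-volume -s 10 -z nova
$ euca-attach-volume -i i-00000001 -d /dev/vdb vol-00000001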

Friday, October 14, 2011