A Study of the OpenStack Nova Scheduler Service and How Its Filters Are Written

This article takes a detailed look at the OpenStack Nova scheduler service and, through examples, at how its filters are written. It is quite practical, so it is shared here for reference; hopefully you will take something away from it.

Initial Analysis

Inspecting the Related Processes

$ ps -aux | grep nova | awk '{for(i=11;i<=NF;i++) printf "%s ", $i};NF>=11 {print ""}'
/usr/bin/python /usr/bin/nova-compute --config-file=/etc/nova/nova.conf --config-file=/etc/nova/nova-compute.conf 
--log-file=/var/log/nova/nova-compute.log 
/usr/bin/python /usr/bin/nova-api --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-api.log 
/usr/bin/python /usr/bin/nova-conductor --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-conductor.log 
/usr/bin/python /usr/bin/nova-scheduler --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-scheduler.log 
/usr/bin/python /usr/bin/nova-consoleauth --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-consoleauth.log 
/usr/bin/python /usr/bin/nova-novncproxy --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-novncproxy.log 
/usr/bin/python /usr/bin/nova-conductor --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-conductor.log 
/usr/bin/python /usr/bin/nova-conductor --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-conductor.log 
/usr/bin/python /usr/bin/nova-api --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-api.log 
/usr/bin/python /usr/bin/nova-api --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-api.log 
/usr/bin/python /usr/bin/nova-api --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-api.log 
/usr/bin/python /usr/bin/nova-api --config-file=/etc/nova/nova.conf --log-file=/var/log/nova/nova-api.log 
/usr/bin/python2.7 /usr/bin/privsep-helper --config-file /etc/nova/nova.conf --config-file /etc/nova/nova-compute.conf 
--privsep_context vif_plug_linux_bridge.privsep.vif_plug --privsep_sock_path /tmp/tmpZtOLbU/privsep.sock 
/usr/bin/python2.7 /usr/bin/privsep-helper --config-file /etc/nova/nova.conf --config-file /etc/nova/nova-compute.conf 
--privsep_context os_brick.privileged.default --privsep_sock_path /tmp/tmpjXCHkt/privsep.sock 
pluma /usr/lib/python2.7/dist-packages/nova/scheduler/filters/retry_filter.py

Inspecting the Related Services

  • List all Nova services:

$ systemctl list-units | grep nova
  nova-api.service                                loaded active running   OpenStack Compute API
  nova-compute.service                            loaded active running   OpenStack Compute
  nova-conductor.service                          loaded active running   OpenStack Compute Conductor
  nova-consoleauth.service                        loaded active running   OpenStack Compute Console
  nova-novncproxy.service                         loaded active running   OpenStack Compute novncproxy
  nova-scheduler.service                          loaded active running   OpenStack Compute Scheduler
  • Find the package that provides each service unit:

$ apt-get install apt-file
$ apt-file update

$ apt-file search /lib/systemd/system/nova-api.service
nova-api: /lib/systemd/system/nova-api.service

$ apt-file search /lib/systemd/system/nova-compute.service
nova-compute: /lib/systemd/system/nova-compute.service

$ apt-file search /lib/systemd/system/nova-conductor.service
nova-conductor: /lib/systemd/system/nova-conductor.service

$ apt-file search /lib/systemd/system/nova-consoleauth.service
nova-consoleauth: /lib/systemd/system/nova-consoleauth.service

$ apt-file search /lib/systemd/system/nova-novncproxy.service
nova-novncproxy: /lib/systemd/system/nova-novncproxy.service

$ apt-file search /lib/systemd/system/nova-scheduler.service
nova-scheduler: /lib/systemd/system/nova-scheduler.service
  • Check the status of the Nova scheduler service:

$ systemctl status nova-scheduler.service
● nova-scheduler.service - OpenStack Compute Scheduler
   Loaded: loaded (/lib/systemd/system/nova-scheduler.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2017-12-06 23:23:39 CST; 10h ago
  Process: 3150 ExecStartPre=/bin/chown nova:adm /var/log/nova (code=exited, status=0/SUCCESS)
  Process: 3076 ExecStartPre=/bin/chown nova:nova /var/lock/nova /var/lib/nova (code=exited, status=0/SUCCESS)
  Process: 3016 ExecStartPre=/bin/mkdir -p /var/lock/nova /var/log/nova /var/lib/nova (code=exited, status=0/SUCCESS)
 Main PID: 3241 (nova-scheduler)
   CGroup: /system.slice/nova-scheduler.service
           └─3241 /usr/bin/python /usr/bin/nova-scheduler --config-file=/etc/nova/nova.conf 
--log-file=/var/log/nova/nova-scheduler.

Dec 07 09:59:13 UbuntuStack nova-scheduler[3241]: 2017-12-07 09:59:13.690 3241 INFO 
nova.scheduler.host_manager [req-27631738-6333-
...
  • View the Nova scheduler systemd unit file:

$ cat /lib/systemd/system/nova-scheduler.service
[Unit]
Description=OpenStack Compute Scheduler
After=postgresql.service mysql.service keystone.service 

[Service]
User=nova
Group=nova
Type=simple
WorkingDirectory=/var/lib/nova
PermissionsStartOnly=true
ExecStartPre=/bin/mkdir -p /var/lock/nova /var/log/nova /var/lib/nova
ExecStartPre=/bin/chown nova:nova /var/lock/nova /var/lib/nova
ExecStartPre=/bin/chown nova:adm /var/log/nova
ExecStart=/etc/init.d/nova-scheduler systemd-start
Restart=on-failure
LimitNOFILE=65535
TimeoutStopSec=15

[Install]
WantedBy=multi-user.target
  • View the Nova scheduler init script:

$ cat /etc/init.d/nova-scheduler
#!/bin/sh
### BEGIN INIT INFO
# Provides:          nova-scheduler
# Required-Start:    $network $local_fs $remote_fs $syslog
# Required-Stop:     $remote_fs
# Should-Start:      postgresql mysql keystone
# Should-Stop:       postgresql mysql keystone
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Nova Scheduler
# Description:       Schedules instances, volumes, etc. for Nova
### END INIT INFO

# Author: Julien Danjou <acid@debian.org>

DESC="OpenStack Compute Scheduler"
PROJECT_NAME=nova
NAME=nova-scheduler

...

if [ -z "${DAEMON}" ] ; then
	DAEMON=/usr/bin/${NAME}
fi

...

if [ -z "${NO_OPENSTACK_CONFIG_FILE_DAEMON_ARG}" ] ; then
    DAEMON_ARGS="--config-file=${CONFIG_FILE} ${DAEMON_ARGS}"
fi

...

do_systemd_start() {
	exec $DAEMON $DAEMON_ARGS
}

...

systemd-start)
	do_systemd_start
;;  
...

exit 0
  • View the Nova scheduler entry-point script:

$ cat /usr/bin/nova-scheduler
#!/usr/bin/python
# PBR Generated from u'console_scripts'

import sys

from nova.cmd.scheduler import main


if __name__ == "__main__":
    sys.exit(main())

Inspecting Package Information

  • Check the status of the Nova-related packages on the system:

$ dpkg -l "*nova*"
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name                      Version           Architecture      Description
+++-=========================-=================-=================-=======================================================
ii  nova-api                  2:14.0.1-0ubuntu1 all               OpenStack Compute - API frontend
hi  nova-common               2:14.0.1-0ubuntu1 all               OpenStack Compute - common files
ii  nova-compute              2:14.0.1-0ubuntu1 all               OpenStack Compute - compute node base
un  nova-compute-hypervisor   <none>            <none>            (no description available)
ii  nova-compute-kvm          2:14.0.1-0ubuntu1 all               OpenStack Compute - compute node (KVM)
ii  nova-compute-libvirt      2:14.0.1-0ubuntu1 all               OpenStack Compute - compute node libvirt support
ii  nova-conductor            2:14.0.1-0ubuntu1 all               OpenStack Compute - conductor service
un  nova-console              <none>            <none>            (no description available)
ii  nova-consoleauth          2:14.0.1-0ubuntu1 all               OpenStack Compute - Console Authenticator
ii  nova-novncproxy           2:14.0.1-0ubuntu1 all               OpenStack Compute - NoVNC proxy
ii  nova-scheduler            2:14.0.1-0ubuntu1 all               OpenStack Compute - virtual machine scheduler
hi  python-nova               2:14.0.1-0ubuntu1 all               OpenStack Compute Python libraries
ii  python-novaclient         2:6.0.0-0ubuntu1~ all               client library for OpenStack Compute API - Python 2.7
un  python2.7-nova            <none>            <none>            (no description available)
  • Find the packages that own the Nova Python sources:

$ apt-file search /usr/lib/python2.7/dist-packages/nova* | awk -F '/' '{print $1 $6}' | sort -u
python-nova-adminclient: nova_adminclient
python-nova-adminclient: nova_adminclient-0.1.8.egg-info
python-novaclient: novaclient
python-nova-lxd: nova
python-nova-lxd: nova_lxd
python-nova-lxd: nova_lxd-13.0.0.egg-info
python-nova-lxd: nova_lxd-13.0.0-nspkg.pth
python-nova-lxd: nova_lxd-13.2.0.egg-info
python-nova-lxd: nova_lxd-13.2.0-nspkg.pth
python-nova-lxd: nova_lxd-13.3.0.egg-info
python-nova-lxd: nova_lxd-13.3.0-nspkg.pth
python-nova-lxd: nova_lxd-14.2.2.egg-info
python-nova: nova
python-nova: nova-13.0.0.egg-info
python-nova: nova-13.1.4.egg-info
python-nova: nova-14.0.8.egg-info
  • View the nova-scheduler package's dependencies and file list:

$ apt-cache show nova-scheduler
Package: nova-scheduler
Source: nova
Priority: extra
Section: net
Installed-Size: 56
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Architecture: all
Version: 2:14.0.8-0ubuntu1~cloud1
Depends: nova-common (= 2:14.0.8-0ubuntu1~cloud1), init-system-helpers (>= 1.18~), 
lsb-base (>= 4.1+Debian11ubuntu7), python:any (>= 2.7~)
Supported: 24m
Filename: pool/main/n/nova/nova-scheduler_14.0.8-0ubuntu1~cloud1_all.deb
Size: 6460
SHA256: 133a23ea0ab69e08716e117aa5881c7840d0431c29fc0cba9fd7146180460965
SHA1: 41649a1311faa52fe539a293670dd58a79fd200b
MD5sum: bc448ccba9626103924c086c30012d7e
Description-en: OpenStack Compute - virtual machine scheduler
 OpenStack is a reliable cloud infrastructure. Its mission is to produce
 the ubiquitous cloud computing platform that will meet the needs of public
 and private cloud providers regardless of size, by being simple to implement
 and massively scalable.
 .
 OpenStack Compute, codenamed Nova, is a cloud computing fabric controller. In
 addition to its "native" API (the OpenStack API), it also supports the Amazon
 EC2 API.
 .
 Nova is intended to be modular and easy to extend and adapt. It supports many
 different hypervisors (KVM and Xen to name a few), different database backends
 (SQLite, MySQL, and PostgreSQL, for instance), different types of user
 databases (LDAP or SQL), etc.
 .
 This is the Nova scheduler.
Description-md5: 8edec11a409c894d59bffef4d16d21b6
Original-Maintainer: Openstack Maintainers <openstack@lists.launchpad.net>
...

$ dpkg -L nova-scheduler
/.
/usr
/usr/bin
/usr/bin/nova-scheduler
/usr/share
/usr/share/man
/usr/share/man/man1
/usr/share/man/man1/nova-scheduler.1.gz
/usr/share/doc
/usr/share/doc/nova-scheduler
/usr/share/doc/nova-scheduler/copyright
/etc
/etc/init.d
/etc/init.d/nova-scheduler
/etc/init
/etc/init/nova-scheduler.conf
/lib
/lib/systemd
/lib/systemd/system
/lib/systemd/system/nova-scheduler.service
/usr/share/doc/nova-scheduler/changelog.Debian.gz
  • View the nova-common package's dependencies and file list:

$ apt-cache show nova-common
Package: nova-common
Source: nova
Priority: extra
Section: net
Installed-Size: 79
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Architecture: all
Version: 2:14.0.8-0ubuntu1~cloud1
Recommends: python-glanceclient, python-keystone
Depends: adduser, python-nova (= 2:14.0.8-0ubuntu1~cloud1), python:any (>= 2.7~)
Supported: 24m
Filename: pool/main/n/nova/nova-common_14.0.8-0ubuntu1~cloud1_all.deb
...

$ dpkg -L nova-common
/.
/usr
/usr/bin
/usr/bin/nova-rootwrap
/usr/bin/nova-policy
/usr/bin/nova-rootwrap-daemon
/usr/bin/nova-manage
/usr/share
/usr/share/man
/usr/share/man/man1
/usr/share/man/man1/nova-rootwrap.1.gz
/usr/share/man/man1/nova-manage.1.gz
/usr/share/doc
/usr/share/doc/nova-common
/usr/share/doc/nova-common/copyright
/etc
/etc/nova
/etc/nova/policy.json
/etc/nova/api-paste.ini
/etc/nova/nova.conf
/etc/nova/rootwrap.d
/etc/nova/rootwrap.conf
/etc/nova/logging.conf
/etc/logrotate.d
/etc/logrotate.d/nova-common
/etc/sudoers.d
/etc/sudoers.d/nova_sudoers
/var
/var/log
/var/log/nova
/var/lib
/var/lib/nova
/var/lib/nova/CA
/var/lib/nova/CA/INTER
/var/lib/nova/CA/private
/var/lib/nova/CA/newcerts
/var/lib/nova/CA/reqs
/var/lib/nova/instances
/var/lib/nova/buckets
/var/lib/nova/keys
/var/lib/nova/tmp
/var/lib/nova/images
/var/lib/nova/networks
/usr/share/doc/nova-common/changelog.Debian.gz
  • View the python-nova package's dependencies and file list:

$ apt-cache show python-nova
Package: python-nova
Source: nova
Priority: extra
Section: python
Installed-Size: 22286
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Architecture: all
Version: 2:14.0.8-0ubuntu1~cloud1
Suggests: python-ldap
Provides: python2.7-nova
Depends: alembic (>= 0.8.0), openssh-client, openssl, python-babel (>= 2.3.4), python-boto (>= 2.32.1), 
python-castellan (>= 0.4.0), python-cinderclient (>= 1:1.6.0), python-cryptography (>= 1.0), 
python-decorator (>= 3.4.0), python-eventlet (>= 0.18.2), python-glanceclient (>= 1:2.0.0), 
python-greenlet (>= 0.3.2), python-iso8601 (>= 0.1.11), python-jinja2 (>= 2.8), python-jsonschema (>= 2.0.0), 
python-keystoneauth2 (>= 2.7.0), python-keystonemiddleware (>= 4.0.0), python-lxml (>= 2.3), 
python-microversion-parse (>= 0.1.2), python-migrate (>= 0.9.6), python-netaddr (>= 0.7.13), 
python-netifaces (>= 0.10.4), python-neutronclient (>= 4.2.0), python-os-brick (>= 1.6.1), 
python-os-vif (>= 1.1.0), python-os-win (>= 0.2.3), python-oslo.cache (>= 1.5.0), 
python-oslo.concurrency (>= 3.8.0), python-oslo.config (>= 1:3.10.0), python-oslo.context (>= 2.4.0), 
python-oslo.db (>= 4.1.0), python-oslo.i18n (>= 2.1.0), python-oslo.log (>= 3.16.0), 
python-oslo.messaging (>= 5.2.0), python-oslo.middleware (>= 3.0.0), python-oslo.policy (>= 1.9.0), 
python-oslo.privsep (>= 1.9.0), python-oslo.reports (>= 0.6.0), python-oslo.rootwrap (>= 2.0.0), 
python-oslo.serialization (>= 1.10.0), python-oslo.service (>= 1.10.0), python-oslo.utils (>= 3.11.0), 
python-oslo.versionedobjects (>= 1.9.1), python-paramiko (>= 1.16.0), python-paste, 
python-pastedeploy (>= 1.5.0), python-prettytable (>= 0.7), python-psutil (>= 1.1.1), python-pymysql, 
python-requests (>= 2.10.0), python-rfc3986 (>= 0.2.2), python-routes (>= 1.12.3), python-setuptools (>= 16.0), 
python-six (>= 1.9.0), python-sqlalchemy (>= 1.0.10), python-stevedore (>= 1.10.0), python-webob (>= 1.2.3), 
python-pbr, python-pkg-resources, python-sqlalchemy (< 1.1), python:any (< 2.8), python:any (>= 2.7.5-5~)
Conflicts: python-cjson
Supported: 24m
Filename: pool/main/n/nova/python-nova_14.0.8-0ubuntu1~cloud1_all.deb
Size: 2573388
SHA256: 9656466b5daff7af684cd1032e2e5d72ab415aa82a0f8cadf0106cfe9ea36096
SHA1: 67dc9bf0bd71a03d87e87b1082bf1892a635f007
MD5sum: 5301e37eacef81b4519a4bcd418ef9be
Description-en: OpenStack Compute Python libraries
 OpenStack is a reliable cloud infrastructure. Its mission is to produce
 the ubiquitous cloud computing platform that will meet the needs of public
 and private cloud providers regardless of size, by being simple to implement
 and massively scalable.
 .
 OpenStack Compute, codenamed Nova, is a cloud computing fabric controller. In
 addition to its "native" API (the OpenStack API), it also supports the Amazon
 EC2 API.
 .
 Nova is intended to be modular and easy to extend and adapt. It supports many
 different hypervisors (KVM and Xen to name a few), different database backends
 (SQLite, MySQL, and PostgreSQL, for instance), different types of user
 databases (LDAP or SQL), etc.
 .
 This package contains the core Python parts of Nova.
Description-md5: 9e7471c108af7843da4a920afe750d19
Python-Version: 2.7
Original-Maintainer: Openstack Maintainers <openstack@lists.launchpad.net>
...

$ dpkg -L python-nova
/.
/usr
/usr/share
/usr/share/apport
/usr/share/apport/package-hooks
/usr/share/apport/package-hooks/source_nova.py
/usr/share/doc
/usr/share/doc/python-nova
/usr/share/doc/python-nova/changelog.Debian.gz
/usr/share/doc/python-nova/copyright
/usr/lib
/usr/lib/python2.7
/usr/lib/python2.7/dist-packages
/usr/lib/python2.7/dist-packages/nova-14.0.1.egg-info
/usr/lib/python2.7/dist-packages/nova-14.0.1.egg-info/dependency_links.txt
/usr/lib/python2.7/dist-packages/nova-14.0.1.egg-info/top_level.txt
/usr/lib/python2.7/dist-packages/nova-14.0.1.egg-info/not-zip-safe
/usr/lib/python2.7/dist-packages/nova-14.0.1.egg-info/requires.txt
/usr/lib/python2.7/dist-packages/nova-14.0.1.egg-info/pbr.json
/usr/lib/python2.7/dist-packages/nova-14.0.1.egg-info/PKG-INFO
/usr/lib/python2.7/dist-packages/nova-14.0.1.egg-info/entry_points.txt
/usr/lib/python2.7/dist-packages/nova
...

Summary of the Initial Analysis

  • systemd startup (/lib/systemd/system/nova-scheduler.service):

ExecStart=/etc/init.d/nova-scheduler systemd-start
  • init-script startup (/etc/init.d/nova-scheduler):

NAME=nova-scheduler
DAEMON=/usr/bin/${NAME}
DAEMON_ARGS="--config-file=${CONFIG_FILE} ${DAEMON_ARGS}"
exec $DAEMON $DAEMON_ARGS
  • The final command that gets executed is:

/usr/bin/python /usr/bin/nova-scheduler --config-file=/etc/nova/nova.conf 
--log-file=/var/log/nova/nova-scheduler.log
  • Nova scheduler entry-point source (/usr/bin/nova-scheduler):

import sys
from nova.cmd.scheduler import main

if __name__ == "__main__":
	sys.exit(main())

Source Analysis of Instance Creation

When an instance is created, the select_destinations method of the nova-scheduler service is called over RPC to obtain the list of candidate hosts.

  • conductor/manager.py - when ComputeTaskManager.build_instances creates new instances, it builds the requested resource specification with scheduler_utils.build_request_spec and finally calls select_destinations over RPC to pick suitable hosts.

...

from nova.scheduler import utils as scheduler_utils
...

class ComputeTaskManager(base.Base):
...

    def build_instances(self, context, instances, image, filter_properties,
            admin_password, injected_files, requested_networks,
            security_groups, block_device_mapping=None, legacy_bdm=True):
        # TODO(ndipanov): Remove block_device_mapping and legacy_bdm in version
        #                 2.0 of the RPC API.
        # TODO(danms): Remove this in version 2.0 of the RPC API
        if (requested_networks and
                not isinstance(requested_networks,
                               objects.NetworkRequestList)):
            requested_networks = objects.NetworkRequestList.from_tuples(
                requested_networks)
        # TODO(melwitt): Remove this in version 2.0 of the RPC API
        flavor = filter_properties.get('instance_type')
        if flavor and not isinstance(flavor, objects.Flavor):
            # Code downstream may expect extra_specs to be populated since it
            # is receiving an object, so lookup the flavor to ensure this.
            flavor = objects.Flavor.get_by_id(context, flavor['id'])
            filter_properties = dict(filter_properties, instance_type=flavor)

        request_spec = {}
        try:
            # check retry policy. Rather ugly use of instances[0]...
            # but if we've exceeded max retries... then we really only
            # have a single instance.
            scheduler_utils.populate_retry(
                filter_properties, instances[0].uuid)
            request_spec = scheduler_utils.build_request_spec(
                    context, image, instances)
            hosts = self._schedule_instances(
                    context, request_spec, filter_properties)
        except Exception as exc:
            updates = {'vm_state': vm_states.ERROR, 'task_state': None}
            for instance in instances:
                self._set_vm_state_and_notify(
                    context, instance.uuid, 'build_instances', updates,
                    exc, request_spec)
                try:
                    # If the BuildRequest stays around then instance show/lists
                    # will pull from it rather than the errored instance.
                    self._destroy_build_request(context, instance)
                except exception.BuildRequestNotFound:
                    pass
                self._cleanup_allocated_networks(
                    context, instance, requested_networks)
            return

        for (instance, host) in six.moves.zip(instances, hosts):
            try:
                instance.refresh()
            except (exception.InstanceNotFound,
                    exception.InstanceInfoCacheNotFound):
                LOG.debug('Instance deleted during build', instance=instance)
                continue
            local_filter_props = copy.deepcopy(filter_properties)
            scheduler_utils.populate_filter_properties(local_filter_props,
                host)
            # The block_device_mapping passed from the api doesn't contain
            # instance specific information
            bdms = objects.BlockDeviceMappingList.get_by_instance_uuid(
                    context, instance.uuid)

            # This is populated in scheduler_utils.populate_retry
            num_attempts = local_filter_props.get('retry',
                                                  {}).get('num_attempts', 1)
            if num_attempts <= 1:
                # If this is a reschedule the instance is already mapped to
                # this cell and the BuildRequest is already deleted so ignore
                # the logic below.
                inst_mapping = self._populate_instance_mapping(context,
                                                               instance,
                                                               host)
                try:
                    self._destroy_build_request(context, instance)
                except exception.BuildRequestNotFound:
                    # This indicates an instance delete has been requested in
                    # the API. Stop the build, cleanup the instance_mapping and
                    # potentially the block_device_mappings
                    # TODO(alaski): Handle block_device_mapping cleanup
                    if inst_mapping:
                        inst_mapping.destroy()
                    return

            self.compute_rpcapi.build_and_run_instance(context,
                    instance=instance, host=host['host'], image=image,
                    request_spec=request_spec,
                    filter_properties=local_filter_props,
                    admin_password=admin_password,
                    injected_files=injected_files,
                    requested_networks=requested_networks,
                    security_groups=security_groups,
                    block_device_mapping=bdms, node=host['nodename'],
                    limits=host['limits'])

    def _schedule_instances(self, context, request_spec, filter_properties):
        scheduler_utils.setup_instance_group(context, request_spec,
                                             filter_properties)
        # TODO(sbauza): Hydrate here the object until we modify the
        # scheduler.utils methods to directly use the RequestSpec object
        spec_obj = objects.RequestSpec.from_primitives(
            context, request_spec, filter_properties)
        hosts = self.scheduler_client.select_destinations(context, spec_obj)
        return hosts

...
  • scheduler/client/__init__.py - lazily loads the RPC client classes.

class SchedulerClient(object):
    """Client library for placing calls to the scheduler."""

    def __init__(self):
        self.queryclient = LazyLoader(importutils.import_class(
            'nova.scheduler.client.query.SchedulerQueryClient'))
        self.reportclient = LazyLoader(importutils.import_class(
            'nova.scheduler.client.report.SchedulerReportClient'))

    @utils.retry_select_destinations
    def select_destinations(self, context, spec_obj):
        return self.queryclient.select_destinations(context, spec_obj)

...
  • scheduler/client/query.py - wraps the RPC call interface.

from nova.scheduler import rpcapi as scheduler_rpcapi

class SchedulerQueryClient(object):
    """Client class for querying to the scheduler."""

    def __init__(self):
        self.scheduler_rpcapi = scheduler_rpcapi.SchedulerAPI()

    def select_destinations(self, context, spec_obj):
        """Returns destinations(s) best suited for this request_spec and
        filter_properties.

        The result should be a list of dicts with 'host', 'nodename' and
        'limits' as keys.
        """
        return self.scheduler_rpcapi.select_destinations(context, spec_obj)
...
  • scheduler/rpcapi.py - performs the actual RPC call.

class SchedulerAPI(object):
...
    def select_destinations(self, ctxt, spec_obj):
        version = '4.3'
        msg_args = {'spec_obj': spec_obj}
        if not self.client.can_send_version(version):
            del msg_args['spec_obj']
            msg_args['request_spec'] = spec_obj.to_legacy_request_spec_dict()
            msg_args['filter_properties'
                     ] = spec_obj.to_legacy_filter_properties_dict()
            version = '4.0'
        cctxt = self.client.prepare(version=version)
        return cctxt.call(ctxt, 'select_destinations', **msg_args)

...

Source Analysis of RPC Service Startup

  • cmd/scheduler.py - creates the nova-scheduler service with the Service class.

CONF = nova.conf.CONF


def main():
    config.parse_args(sys.argv)
    logging.setup(CONF, "nova")
    utils.monkey_patch()
    objects.register_all()

    gmr.TextGuruMeditation.setup_autorun(version)

    server = service.Service.create(binary='nova-scheduler',
                                    topic=CONF.scheduler_topic)
    service.serve(server)
    service.wait()
  • conf/scheduler.py - the RPC topic of nova-scheduler defaults to "scheduler".

rpc_sched_topic_opt = cfg.StrOpt("scheduler_topic",
        default="scheduler",
        help="""
This is the message queue topic that the scheduler 'listens' on. It is used
when the scheduler service is started up to configure the queue, and whenever
an RPC call to the scheduler is made. There is almost never any reason to ever
change this value.

* Related options:

    None
""")

...
  • service.py - as the line "manager_cls = ('%s_manager' %" shows, the RPC service requests are handled by the class named by the scheduler_manager option.

...
class Service(service.Service):
    """Service object for binaries running on hosts.

    A service takes a manager and enables rpc by listening to queues based
    on topic. It also periodically runs tasks on the manager and reports
    its state to the database services table.
    """

    def __init__(self, host, binary, topic, manager, report_interval=None,
                 periodic_enable=None, periodic_fuzzy_delay=None,
                 periodic_interval_max=None, db_allowed=True,
                 *args, **kwargs):
        super(Service, self).__init__()
        self.host = host
        self.binary = binary
        self.topic = topic
        self.manager_class_name = manager
        self.servicegroup_api = servicegroup.API()
        manager_class = importutils.import_class(self.manager_class_name)
        self.manager = manager_class(host=self.host, *args, **kwargs)
        self.rpcserver = None
        self.report_interval = report_interval
        self.periodic_enable = periodic_enable
        self.periodic_fuzzy_delay = periodic_fuzzy_delay
        self.periodic_interval_max = periodic_interval_max
        self.saved_args, self.saved_kwargs = args, kwargs
        self.backdoor_port = None
        self.conductor_api = conductor.API(use_local=db_allowed)
        self.conductor_api.wait_until_ready(context.get_admin_context())

...
    @classmethod
    def create(cls, host=None, binary=None, topic=None, manager=None,
               report_interval=None, periodic_enable=None,
               periodic_fuzzy_delay=None, periodic_interval_max=None,
               db_allowed=True):
        """Instantiates class and passes back application object.

        :param host: defaults to CONF.host
        :param binary: defaults to basename of executable
        :param topic: defaults to bin_name - 'nova-' part
        :param manager: defaults to CONF.<topic>_manager
        :param report_interval: defaults to CONF.report_interval
        :param periodic_enable: defaults to CONF.periodic_enable
        :param periodic_fuzzy_delay: defaults to CONF.periodic_fuzzy_delay
        :param periodic_interval_max: if set, the max time to wait between runs

        """
        if not host:
            host = CONF.host
        if not binary:
            binary = os.path.basename(sys.argv[0])
        if not topic:
            topic = binary.rpartition('nova-')[2]
        if not manager:
            manager_cls = ('%s_manager' %
                           binary.rpartition('nova-')[2])
            manager = CONF.get(manager_cls, None)
        if report_interval is None:
            report_interval = CONF.report_interval
        if periodic_enable is None:
            periodic_enable = CONF.periodic_enable
        if periodic_fuzzy_delay is None:
            periodic_fuzzy_delay = CONF.periodic_fuzzy_delay

        debugger.init()

        service_obj = cls(host, binary, topic, manager,
                          report_interval=report_interval,
                          periodic_enable=periodic_enable,
                          periodic_fuzzy_delay=periodic_fuzzy_delay,
                          periodic_interval_max=periodic_interval_max,
                          db_allowed=db_allowed)

        return service_obj

...
  • conf/service.py - the scheduler_manager option defaults to "nova.scheduler.manager.SchedulerManager".

...

service_opts = [

...
    cfg.StrOpt('scheduler_manager',
               default='nova.scheduler.manager.SchedulerManager',
               help='DEPRECATED: Full class name for the Manager for '
                   'scheduler',
               deprecated_for_removal=True),

...
  • scheduler/manager.py - loads the scheduler driver, which defaults to "filter_scheduler".

class SchedulerManager(manager.Manager):
    """Chooses a host to run instances on."""

    target = messaging.Target(version='4.3')

    _sentinel = object()

    def __init__(self, scheduler_driver=None, *args, **kwargs):
        if not scheduler_driver:
            scheduler_driver = CONF.scheduler_driver
        try:
            self.driver = driver.DriverManager(
                    "nova.scheduler.driver",
                    scheduler_driver,
                    invoke_on_load=True).driver
        # TODO(Yingxin): Change to catch stevedore.exceptions.NoMatches after
        # stevedore v1.9.0
        except RuntimeError:
            # NOTE(Yingxin): Loading full class path is deprecated and should
            # be removed in the N release.
            try:
                self.driver = importutils.import_object(scheduler_driver)
                LOG.warning(_LW("DEPRECATED: scheduler_driver uses "
                                "classloader to load %(path)s. This legacy "
                                "loading style will be removed in the "
                                "N release."),
                            {'path': scheduler_driver})
            except (ImportError, ValueError):
                raise RuntimeError(
                        _("Cannot load scheduler driver from configuration "
                          "%(conf)s."),
                        {'conf': scheduler_driver})
        super(SchedulerManager, self).__init__(service_name='scheduler',
                                               *args, **kwargs)

...
    @messaging.expected_exceptions(exception.NoValidHost)
    def select_destinations(self, ctxt,
                            request_spec=None, filter_properties=None,
                            spec_obj=_sentinel):
        """Returns destinations(s) best suited for this RequestSpec.

        The result should be a list of dicts with 'host', 'nodename' and
        'limits' as keys.
        """

        # TODO(sbauza): Change the method signature to only accept a spec_obj
        # argument once API v5 is provided.
        if spec_obj is self._sentinel:
            spec_obj = objects.RequestSpec.from_primitives(ctxt,
                                                           request_spec,
                                                           filter_properties)
        dests = self.driver.select_destinations(ctxt, spec_obj)
        return jsonutils.to_primitive(dests)

...
  • conf/scheduler.py - scheduler_driver defaults to "filter_scheduler", so the RPC request is ultimately handled by the FilterScheduler class from the filter_scheduler module.

driver_opt = cfg.StrOpt("scheduler_driver",
        default="filter_scheduler",
        help="""
The class of the driver used by the scheduler. This should be chosen from one
of the entrypoints under the namespace 'nova.scheduler.driver' of file
'setup.cfg'. If nothing is specified in this option, the 'filter_scheduler' is
used.

This option also supports deprecated full Python path to the class to be used.
For example, "nova.scheduler.filter_scheduler.FilterScheduler". But note: this
support will be dropped in the N Release.

Other options are:

    * 'caching_scheduler' which aggressively caches the system state for better
    individual scheduler performance at the risk of more retries when running
    multiple schedulers.

    * 'chance_scheduler' which simply picks a host at random.

    * 'fake_scheduler' which is used for testing.

* Related options:

    None
""")

Source Analysis of RPC Request Handling

FilterScheduler.select_destinations picks the target hosts in the following key steps:

  1. get the state of every known host with self.host_manager.get_all_host_states;

  2. keep only the hosts that pass all filters, via self.host_manager.get_filtered_hosts;

  3. weigh the remaining hosts with self.host_manager.get_weighed_hosts, keeping the best-weighted ones;

  4. if several candidates remain (up to scheduler_host_subset_size), pick one at random with random.choice;

  5. consume the requested resources on the chosen host with chosen_host.obj.consume_from_request, so that filtering and weighing for the next instance take the choice into account.

  • scheduler/filter_scheduler.py - filters the hosts that can satisfy the request.

class FilterScheduler(driver.Scheduler):
    """Scheduler that can be used for filtering and weighing."""
    def __init__(self, *args, **kwargs):
        super(FilterScheduler, self).__init__(*args, **kwargs)
        self.options = scheduler_options.SchedulerOptions()
        self.notifier = rpc.get_notifier('scheduler')

    def select_destinations(self, context, spec_obj):
        """Selects a filtered set of hosts and nodes."""
        self.notifier.info(
            context, 'scheduler.select_destinations.start',
            dict(request_spec=spec_obj.to_legacy_request_spec_dict()))

        num_instances = spec_obj.num_instances
        selected_hosts = self._schedule(context, spec_obj)

        # Couldn't fulfill the request_spec
        if len(selected_hosts) < num_instances:
            # NOTE(Rui Chen): If multiple creates failed, set the updated time
            # of selected HostState to None so that these HostStates are
            # refreshed according to database in next schedule, and release
            # the resource consumed by instance in the process of selecting
            # host.
            for host in selected_hosts:
                host.obj.updated = None

            # Log the details but don't put those into the reason since
            # we don't want to give away too much information about our
            # actual environment.
            LOG.debug('There are %(hosts)d hosts available but '
                      '%(num_instances)d instances requested to build.',
                      {'hosts': len(selected_hosts),
                       'num_instances': num_instances})

            reason = _('There are not enough hosts available.')
            raise exception.NoValidHost(reason=reason)

        dests = [dict(host=host.obj.host, nodename=host.obj.nodename,
                      limits=host.obj.limits) for host in selected_hosts]

        self.notifier.info(
            context, 'scheduler.select_destinations.end',
            dict(request_spec=spec_obj.to_legacy_request_spec_dict()))
        return dests

    def _get_configuration_options(self):
        """Fetch options dictionary. Broken out for testing."""
        return self.options.get_configuration()

    def _schedule(self, context, spec_obj):
        """Returns a list of hosts that meet the required specs,
        ordered by their fitness.
        """
        elevated = context.elevated()

        config_options = self._get_configuration_options()

        # Find our local list of acceptable hosts by repeatedly
        # filtering and weighing our options. Each time we choose a
        # host, we virtually consume resources on it so subsequent
        # selections can adjust accordingly.

        # Note: remember, we are using an iterator here. So only
        # traverse this list once. This can bite you if the hosts
        # are being scanned in a filter or weighing function.
        hosts = self._get_all_host_states(elevated)

        selected_hosts = []
        num_instances = spec_obj.num_instances
        # NOTE(sbauza): Adding one field for any out-of-tree need
        spec_obj.config_options = config_options
        for num in range(num_instances):
            # Filter local hosts based on requirements ...
            hosts = self.host_manager.get_filtered_hosts(hosts,
                    spec_obj, index=num)
            if not hosts:
                # Can't get any more locally.
                break

            LOG.debug("Filtered %(hosts)s", {'hosts': hosts})

            weighed_hosts = self.host_manager.get_weighed_hosts(hosts,
                    spec_obj)

            LOG.debug("Weighed %(hosts)s", {'hosts': weighed_hosts})

            scheduler_host_subset_size = max(1,
                                             CONF.scheduler_host_subset_size)
            if scheduler_host_subset_size < len(weighed_hosts):
                weighed_hosts = weighed_hosts[0:scheduler_host_subset_size]
            chosen_host = random.choice(weighed_hosts)

            LOG.debug("Selected host: %(host)s", {'host': chosen_host})
            selected_hosts.append(chosen_host)

            # Now consume the resources so the filter/weights
            # will change for the next instance.
            chosen_host.obj.consume_from_request(spec_obj)
            if spec_obj.instance_group is not None:
                spec_obj.instance_group.hosts.append(chosen_host.obj.host)
                # hosts has to be not part of the updates when saving
                spec_obj.instance_group.obj_reset_changes(['hosts'])
        return selected_hosts

    def _get_all_host_states(self, context):
        """Template method, so a subclass can implement caching."""
        return self.host_manager.get_all_host_states(context)

...
  • scheduler/driver.py - the parent class of FilterScheduler initializes host_manager by loading CONF.scheduler_host_manager from the "nova.scheduler.host_manager" entry-point namespace.

class Scheduler(object):
    """The base class that all Scheduler classes should inherit from."""

    def __init__(self):
        self.host_manager = driver.DriverManager(
                "nova.scheduler.host_manager",
                CONF.scheduler_host_manager,
                invoke_on_load=True).driver
        self.servicegroup_api = servicegroup.API()

...

Getting the State of All Hosts

FilterScheduler._get_all_host_states calls host_manager.get_all_host_states to obtain the state of every host.

  • scheduler/host_manager.py - iterates over all compute nodes, refreshes each one's state via HostState.update, and caches the results in host_state_map.

...

class HostManager(object):
...

    def host_state_cls(self, host, node, **kwargs):
        return HostState(host, node)
...

    def get_all_host_states(self, context):
        """Returns a list of HostStates that represents all the hosts
        the HostManager knows about. Also, each of the consumable resources
        in HostState are pre-populated and adjusted based on data in the db.
        """

        service_refs = {service.host: service
                        for service in objects.ServiceList.get_by_binary(
                            context, 'nova-compute', include_disabled=True)}
        # Get resource usage across the available compute nodes:
        compute_nodes = objects.ComputeNodeList.get_all(context)
        seen_nodes = set()
        for compute in compute_nodes:
            service = service_refs.get(compute.host)

            if not service:
                LOG.warning(_LW(
                    "No compute service record found for host %(host)s"),
                    {'host': compute.host})
                continue
            host = compute.host
            node = compute.hypervisor_hostname
            state_key = (host, node)
            host_state = self.host_state_map.get(state_key)
            if not host_state:
                host_state = self.host_state_cls(host, node, compute=compute)
                self.host_state_map[state_key] = host_state
            # We force to update the aggregates info each time a new request
            # comes in, because some changes on the aggregates could have been
            # happening after setting this field for the first time
            host_state.update(compute,
                              dict(service),
                              self._get_aggregates_info(host),
                              self._get_instance_info(context, compute))

            seen_nodes.add(state_key)

        # remove compute nodes from host_state_map if they are not active
        dead_nodes = set(self.host_state_map.keys()) - seen_nodes
        for state_key in dead_nodes:
            host, node = state_key
            LOG.info(_LI("Removing dead compute node %(host)s:%(node)s "
                         "from scheduler"), {'host': host, 'node': node})
            del self.host_state_map[state_key]

        return six.itervalues(self.host_state_map)

Getting the Filtered Hosts

  • scheduler/host_manager.py - HostManager.get_filtered_hosts filters the hosts, delegating the per-filter work to get_filtered_objects of nova.scheduler.filters.HostFilterHandler.

...

from nova.scheduler import filters
...

class HostManager(object):
    """Base HostManager class."""

    # Can be overridden in a subclass
    def host_state_cls(self, host, node, **kwargs):
        return HostState(host, node)

    def __init__(self):
        self.host_state_map = {}
        self.filter_handler = filters.HostFilterHandler()
        filter_classes = self.filter_handler.get_matching_classes(
                CONF.scheduler_available_filters)
        self.filter_cls_map = {cls.__name__: cls for cls in filter_classes}
        self.filter_obj_map = {}
        self.default_filters = self._choose_host_filters(self._load_filters())
        self.weight_handler = weights.HostWeightHandler()
        weigher_classes = self.weight_handler.get_matching_classes(
                CONF.scheduler_weight_classes)
        self.weighers = [cls() for cls in weigher_classes]
        # Dict of aggregates keyed by their ID
        self.aggs_by_id = {}
        # Dict of set of aggregate IDs keyed by the name of the host belonging
        # to those aggregates
        self.host_aggregates_map = collections.defaultdict(set)
        self._init_aggregates()
        self.tracks_instance_changes = CONF.scheduler_tracks_instance_changes
        # Dict of instances and status, keyed by host
        self._instance_info = {}
        if self.tracks_instance_changes:
            self._init_instance_info()

...

    def get_filtered_hosts(self, hosts, spec_obj, index=0):
        """Filter hosts and return only ones passing all filters."""

        def _strip_ignore_hosts(host_map, hosts_to_ignore):
            ignored_hosts = []
            for host in hosts_to_ignore:
                for (hostname, nodename) in list(host_map.keys()):
                    if host.lower() == hostname.lower():
                        del host_map[(hostname, nodename)]
                        ignored_hosts.append(host)
            ignored_hosts_str = ', '.join(ignored_hosts)
            LOG.info(_LI('Host filter ignoring hosts: %s'), ignored_hosts_str)

        def _match_forced_hosts(host_map, hosts_to_force):
            forced_hosts = []
            lowered_hosts_to_force = [host.lower() for host in hosts_to_force]
            for (hostname, nodename) in list(host_map.keys()):
                if hostname.lower() not in lowered_hosts_to_force:
                    del host_map[(hostname, nodename)]
                else:
                    forced_hosts.append(hostname)
            if host_map:
                forced_hosts_str = ', '.join(forced_hosts)
                msg = _LI('Host filter forcing available hosts to %s')
            else:
                forced_hosts_str = ', '.join(hosts_to_force)
                msg = _LI("No hosts matched due to not matching "
                          "'force_hosts' value of '%s'")
            LOG.info(msg % forced_hosts_str)

        def _match_forced_nodes(host_map, nodes_to_force):
            forced_nodes = []
            for (hostname, nodename) in list(host_map.keys()):
                if nodename not in nodes_to_force:
                    del host_map[(hostname, nodename)]
                else:
                    forced_nodes.append(nodename)
            if host_map:
                forced_nodes_str = ', '.join(forced_nodes)
                msg = _LI('Host filter forcing available nodes to %s')
            else:
                forced_nodes_str = ', '.join(nodes_to_force)
                msg = _LI("No nodes matched due to not matching "
                          "'force_nodes' value of '%s'")
            LOG.info(msg % forced_nodes_str)

        def _get_hosts_matching_request(hosts, requested_destination):
            (host, node) = (requested_destination.host,
                            requested_destination.node)
            requested_nodes = [x for x in hosts
                               if x.host == host and x.nodename == node]
            if requested_nodes:
                LOG.info(_LI('Host filter only checking host %(host)s and '
                             'node %(node)s') % {'host': host, 'node': node})
            else:
                # NOTE(sbauza): The API level should prevent the user from
                # providing a wrong destination but let's make sure a wrong
                # destination doesn't trample the scheduler still.
                LOG.info(_LI('No hosts matched due to not matching requested '
                             'destination (%(host)s, %(node)s)'
                             ) % {'host': host, 'node': node})
            return iter(requested_nodes)

        ignore_hosts = spec_obj.ignore_hosts or []
        force_hosts = spec_obj.force_hosts or []
        force_nodes = spec_obj.force_nodes or []
        requested_node = spec_obj.requested_destination

        if requested_node is not None:
            # NOTE(sbauza): Reduce a potentially long set of hosts as much as
            # possible to any requested destination nodes before passing the
            # list to the filters
            hosts = _get_hosts_matching_request(hosts, requested_node)
        if ignore_hosts or force_hosts or force_nodes:
            # NOTE(deva): we can't assume "host" is unique because
            #             one host may have many nodes.
            name_to_cls_map = {(x.host, x.nodename): x for x in hosts}
            if ignore_hosts:
                _strip_ignore_hosts(name_to_cls_map, ignore_hosts)
                if not name_to_cls_map:
                    return []
            # NOTE(deva): allow force_hosts and force_nodes independently
            if force_hosts:
                _match_forced_hosts(name_to_cls_map, force_hosts)
            if force_nodes:
                _match_forced_nodes(name_to_cls_map, force_nodes)
            if force_hosts or force_nodes:
                # NOTE(deva): Skip filters when forcing host or node
                if name_to_cls_map:
                    return name_to_cls_map.values()
                else:
                    return []
            hosts = six.itervalues(name_to_cls_map)

        return self.filter_handler.get_filtered_objects(self.default_filters,
                hosts, spec_obj, index)

...
  • scheduler/filters/__init__.py - HostFilterHandler inherits from the filters.BaseFilterHandler base class.

class HostFilterHandler(filters.BaseFilterHandler):
    def __init__(self):
        super(HostFilterHandler, self).__init__(BaseHostFilter)
  • filters.py - iterates over each filter and calls its filter_all method on the whole host list; by default filter_all falls back to the base class's _filter_one method to test each host individually (a minimal custom-filter sketch follows after this listing).

...

class BaseFilter(object):
    """Base class for all filter classes."""
    def _filter_one(self, obj, spec_obj):
        """Return True if it passes the filter, False otherwise.
        Override this in a subclass.
        """
        return True

    def filter_all(self, filter_obj_list, spec_obj):
        """Yield objects that pass the filter.

        Can be overridden in a subclass, if you need to base filtering
        decisions on all objects.  Otherwise, one can just override
        _filter_one() to filter a single object.
        """
        for obj in filter_obj_list:
            if self._filter_one(obj, spec_obj):
                yield obj

    # Set to true in a subclass if a filter only needs to be run once
    # for each request rather than for each instance
    run_filter_once_per_request = False

    def run_filter_for_index(self, index):
        """Return True if the filter needs to be run for the "index-th"
        instance in a request.  Only need to override this if a filter
        needs anything other than "first only" or "all" behaviour.
        """
        if self.run_filter_once_per_request and index > 0:
            return False
        else:
            return True
...

class BaseFilterHandler(loadables.BaseLoader):
    """Base class to handle loading filter classes.

    This class should be subclassed where one needs to use filters.
    """

    def get_filtered_objects(self, filters, objs, spec_obj, index=0):
        list_objs = list(objs)
        LOG.debug("Starting with %d host(s)", len(list_objs))
        # Track the hosts as they are removed. The 'full_filter_results' list
        # contains the host/nodename info for every host that passes each
        # filter, while the 'part_filter_results' list just tracks the number
        # removed by each filter, unless the filter returns zero hosts, in
        # which case it records the host/nodename for the last batch that was
        # removed. Since the full_filter_results can be very large, it is only
        # recorded if the LOG level is set to debug.
        part_filter_results = []
        full_filter_results = []
        log_msg = "%(cls_name)s: (start: %(start)s, end: %(end)s)"
        for filter_ in filters:
            if filter_.run_filter_for_index(index):
                cls_name = filter_.__class__.__name__
                start_count = len(list_objs)
                objs = filter_.filter_all(list_objs, spec_obj)
                if objs is None:
                    LOG.debug("Filter %s says to stop filtering", cls_name)
                    return
                list_objs = list(objs)
                end_count = len(list_objs)
                part_filter_results.append(log_msg % {"cls_name": cls_name,
                        "start": start_count, "end": end_count})
                if list_objs:
                    remaining = [(getattr(obj, "host", obj),
                                  getattr(obj, "nodename", ""))
                                 for obj in list_objs]
                    full_filter_results.append((cls_name, remaining))
                else:
                    LOG.info(_LI("Filter %s returned 0 hosts"), cls_name)
                    full_filter_results.append((cls_name, None))
                    break
                LOG.debug("Filter %(cls_name)s returned "
                          "%(obj_len)d host(s)",
                          {'cls_name': cls_name, 'obj_len': len(list_objs)})
        if not list_objs:
            # Log the filtration history
            # NOTE(sbauza): Since the Cells scheduler still provides a legacy
            # dictionary for filter_props, and since we agreed on not modifying
            # the Cells scheduler to support that because of Cells v2, we
            # prefer to define a compatible way to address both types
            if isinstance(spec_obj, dict):
                rspec = spec_obj.get("request_spec", {})
                inst_props = rspec.get("instance_properties", {})
                inst_uuid = inst_props.get("uuid", "")
            else:
                inst_uuid = spec_obj.instance_uuid
            msg_dict = {"inst_uuid": inst_uuid,
                        "str_results": str(full_filter_results),
                       }
            full_msg = ("Filtering removed all hosts for the request with "
                        "instance ID "
                        "'%(inst_uuid)s'. Filter results: %(str_results)s"
                       ) % msg_dict
            msg_dict["str_results"] = str(part_filter_results)
            part_msg = _LI("Filtering removed all hosts for the request with "
                           "instance ID "
                           "'%(inst_uuid)s'. Filter results: %(str_results)s"
                           ) % msg_dict
            LOG.debug(full_msg)
            LOG.info(part_msg)
        return list_objs
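
Putting the pieces above together, writing your own filter mostly comes down to subclassing BaseHostFilter and implementing host_passes. The sketch below is only an illustration: the class name MyRamFilter and its module are hypothetical, and the check loosely mirrors the idea of the built-in RamFilter rather than reproducing it.

from oslo_log import log as logging

from nova.scheduler import filters

LOG = logging.getLogger(__name__)


class MyRamFilter(filters.BaseHostFilter):
    """Only pass hosts whose free RAM can hold the requested flavor."""

    def host_passes(self, host_state, spec_obj):
        # spec_obj is the RequestSpec; memory_mb reflects the flavor.
        requested_ram = spec_obj.memory_mb
        if host_state.free_ram_mb < requested_ram:
            LOG.debug("%(host)s rejected: %(free)d MB free, "
                      "%(req)d MB requested",
                      {'host': host_state.host,
                       'free': host_state.free_ram_mb,
                       'req': requested_ram})
            return False
        return True

Saving such a module under nova/scheduler/filters/ should make it discoverable through the default scheduler_available_filters value (nova.scheduler.filters.all_filters); it still has to be listed in scheduler_default_filters, discussed below, before the scheduler will actually run it.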
  • scheduler/host_manager.py - builds the list of enabled filter objects.

...

CONF = nova.conf.CONF
...

class HostManager(object):
...

    def __init__(self):
...

         self.default_filters = self._choose_host_filters(self._load_filters())

...

    def _load_filters(self):
        return CONF.scheduler_default_filters
...

    def _choose_host_filters(self, filter_cls_names):
        """Since the caller may specify which filters to use we need
        to have an authoritative list of what is permissible. This
        function checks the filter names against a predefined set
        of acceptable filters.
        """
        if not isinstance(filter_cls_names, (list, tuple)):
            filter_cls_names = [filter_cls_names]

        good_filters = []
        bad_filters = []
        for filter_name in filter_cls_names:
            if filter_name not in self.filter_obj_map:
                if filter_name not in self.filter_cls_map:
                    bad_filters.append(filter_name)
                    continue
                filter_cls = self.filter_cls_map[filter_name]
                self.filter_obj_map[filter_name] = filter_cls()
            good_filters.append(self.filter_obj_map[filter_name])
        if bad_filters:
            msg = ", ".join(bad_filters)
            raise exception.SchedulerHostFilterNotFound(filter_name=msg)
        return good_filters

...
  • conf/scheduler.py - the default filter list (an example of enabling an additional filter follows after this listing).

...

host_mgr_default_filt_opt = cfg.ListOpt("scheduler_default_filters",
        default=[
          "RetryFilter",
          "AvailabilityZoneFilter",
          "RamFilter",
          "DiskFilter",
          "ComputeFilter",
          "ComputeCapabilitiesFilter",
          "ImagePropertiesFilter",
          "ServerGroupAntiAffinityFilter",
          "ServerGroupAffinityFilter",
          ],
        help="""
This option is the list of filter class names that will be used for filtering
hosts. The use of 'default' in the name of this option implies that other
filters may sometimes be used, but that is not the case. These filters will be
applied in the order they are listed, so place your most restrictive filters
first to make the filtering process more efficient.

This option is only used by the FilterScheduler and its subclasses; if you use
a different scheduler, this option has no effect.

* Related options:

    All of the filters in this option *must* be present in the
    'scheduler_available_filters' option, or a SchedulerHostFilterNotFound
    exception will be raised.
""")

...
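
For completeness, enabling a custom filter such as the hypothetical MyRamFilter sketched earlier would, with the option names shown above, look roughly like the following nova.conf fragment. If the filter lives inside the nova.scheduler.filters package, the default all_filters entry already discovers it and only scheduler_default_filters needs the new class name; an out-of-tree filter would additionally be registered by repeating scheduler_available_filters (a multi-valued option) with its full class path. The "mypackage" path below is hypothetical, and nova-scheduler has to be restarted for the change to take effect.

scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_available_filters = mypackage.filters.my_ram_filter.MyRamFilter
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,MyRamFilter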

Weighing the Candidate Hosts

  • scheduler/host_manager.py - get_weighed_hosts delegates the weighing to weights.HostWeightHandler (a custom weigher sketch follows after this listing).

...

from nova.scheduler import weights
...

class HostManager(object):
...

    def __init__(self):
...
        self.weight_handler = weights.HostWeightHandler()
...

    def get_weighed_hosts(self, hosts, spec_obj):
        """Weigh the hosts."""
        return self.weight_handler.get_weighed_objects(self.weighers,
                hosts, spec_obj)

...
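
The weighing step can be extended in the same spirit as the filters: a weigher subclasses BaseHostWeigher and implements _weigh_object, and the weight handler normalizes the raw values across all hosts before applying each weigher's multiplier. The sketch below is only an illustration; the class name FreeDiskWeigher is hypothetical, and the module would have to be reachable through the scheduler_weight_classes option (whose default loads all weighers under nova.scheduler.weights) to take effect.

from nova.scheduler import weights


class FreeDiskWeigher(weights.BaseHostWeigher):
    """Prefer hosts with more free disk space."""

    minval = 0

    def weight_multiplier(self):
        # A negative multiplier would invert the behaviour and pack
        # instances onto the fullest hosts instead of spreading them.
        return 1.0

    def _weigh_object(self, host_state, weight_properties):
        # Return the raw value; the weight handler normalizes it to 0..1
        # across all candidate hosts.
        return host_state.free_disk_mb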

Summary of the Instance Creation Flow

nova-api service

  • /nova/api/openstack/compute/servers.py: Controller create() - handles the user's HTTP request;

  • /nova/compute/api.py: API create() -> _create_instance() -> compute_task_api - handles the instance create call;

  • /nova/conductor/__init__.py: ComputeTaskAPI build_instances() - chooses between a local and a remote (RPC) call based on configuration;

  • /nova/conductor/rpcapi.py: ComputeTaskAPI build_instances() - invokes nova-conductor's build_instances method over RPC.

nova-conductor service

  • /nova/conductor/manager.py: ComputeTaskManager build_instances() - asks the nova-scheduler service over RPC to select hosts (self.scheduler_client.select_destinations());

  • /nova/compute/rpcapi.py: ComputeAPI build_and_run_instance() - calls the nova-compute service over RPC to build and run the instance.

nova-scheduler service

  • /nova/scheduler/client/__init__.py: SchedulerClient select_destinations() - the RPC client for nova-scheduler host selection;

  • /nova/scheduler/manager.py: SchedulerManager select_destinations() - selects hosts using the configured scheduler driver;

  • /nova/scheduler/filter_scheduler.py: FilterScheduler select_destinations() - selects hosts using the filter scheduler driver;

  • /nova/scheduler/filter_scheduler.py: FilterScheduler _schedule() - filters and weighs the hosts and picks the suitable ones.

nova-compute service

  • /nova/compute/manager.py: ComputeManager build_and_run_instance() - builds and runs the instance.

Source Analysis of Host State Retrieval

  • scheduler/host_manager.py

class HostState(object):
    """Mutable and immutable information tracked for a host.
    This is an attempt to remove the ad-hoc data structures
    previously used and lock down access.
    """

    def __init__(self, host, node):
        self.host = host
        self.nodename = node
        self._lock_name = (host, node)

        # Mutable available resources.
        # These will change as resources are virtually "consumed".
        self.total_usable_ram_mb = 0
        self.total_usable_disk_gb = 0
        self.disk_mb_used = 0
        self.free_ram_mb = 0
        self.free_disk_mb = 0
        self.vcpus_total = 0
        self.vcpus_used = 0
        self.pci_stats = None
        self.numa_topology = None

        # Additional host information from the compute node stats:
        self.num_instances = 0
        self.num_io_ops = 0

        # Other information
        self.host_ip = None
        self.hypervisor_type = None
        self.hypervisor_version = None
        self.hypervisor_hostname = None
        self.cpu_info = None
        self.supported_instances = None

        # Resource oversubscription values for the compute host:
        self.limits = {}

        # Generic metrics from compute nodes
        self.metrics = None

        # List of aggregates the host belongs to
        self.aggregates = []

        # Instances on this host
        self.instances = {}

        # Allocation ratios for this host
        self.ram_allocation_ratio = None
        self.cpu_allocation_ratio = None
        self.disk_allocation_ratio = None

        self.updated = None

    def update(self, compute=None, service=None, aggregates=None,
            inst_dict=None):
        """Update all information about a host."""

        @utils.synchronized(self._lock_name)
        def _locked_update(self, compute, service, aggregates, inst_dict):
            # Scheduler API is inherently multi-threaded as every incoming RPC
            # message will be dispatched in it's own green thread. So the
            # shared host state should be updated in a consistent way to make
            # sure its data is valid under concurrent write operations.
            if compute is not None:
                LOG.debug("Update host state from compute node: %s", compute)
                self._update_from_compute_node(compute)
            if aggregates is not None:
                LOG.debug("Update host state with aggregates: %s", aggregates)
                self.aggregates = aggregates
            if service is not None:
                LOG.debug("Update host state with service dict: %s", service)
                self.service = ReadOnlyDict(service)
            if inst_dict is not None:
                LOG.debug("Update host state with instances: %s", inst_dict)
                self.instances = inst_dict

        return _locked_update(self, compute, service, aggregates, inst_dict)

    def _update_from_compute_node(self, compute):
        """Update information about a host from a ComputeNode object."""
        if (self.updated and compute.updated_at
                and self.updated > compute.updated_at):
            return
        all_ram_mb = compute.memory_mb

        # Assume virtual size is all consumed by instances if use qcow2 disk.
        free_gb = compute.free_disk_gb
        least_gb = compute.disk_available_least
        if least_gb is not None:
            if least_gb > free_gb:
                # can occur when an instance in database is not on host
                LOG.warning(_LW("Host %(hostname)s has more disk space than "
                                "database expected "
                                "(%(physical)s GB > %(database)s GB)"),
                            {'physical': least_gb, 'database': free_gb,
                             'hostname': compute.hypervisor_hostname})
            free_gb = min(least_gb, free_gb)
        free_disk_mb = free_gb * 1024

        self.disk_mb_used = compute.local_gb_used * 1024

        # NOTE(jogo) free_ram_mb can be negative
        self.free_ram_mb = compute.free_ram_mb
        self.total_usable_ram_mb = all_ram_mb
        self.total_usable_disk_gb = compute.local_gb
        self.free_disk_mb = free_disk_mb
        self.vcpus_total = compute.vcpus
        self.vcpus_used = compute.vcpus_used
        self.updated = compute.updated_at
        self.numa_topology = compute.numa_topology
        self.pci_stats = pci_stats.PciDeviceStats(
            compute.pci_device_pools)

        # All virt drivers report host_ip
        self.host_ip = compute.host_ip
        self.hypervisor_type = compute.hypervisor_type
        self.hypervisor_version = compute.hypervisor_version
        self.hypervisor_hostname = compute.hypervisor_hostname
        self.cpu_info = compute.cpu_info
        if compute.supported_hv_specs:
            self.supported_instances = [spec.to_list() for spec
                                        in compute.supported_hv_specs]
        else:
            self.supported_instances = []

        # Don't store stats directly in host_state to make sure these don't
        # overwrite any values, or get overwritten themselves. Store in self so
        # filters can schedule with them.
        self.stats = compute.stats or {}

        # Track number of instances on host
        self.num_instances = int(self.stats.get('num_instances', 0))

        self.num_io_ops = int(self.stats.get('io_workload', 0))

        # update metrics
        self.metrics = objects.MonitorMetricList.from_json(compute.metrics)

        # update allocation ratios given by the ComputeNode object
        self.cpu_allocation_ratio = compute.cpu_allocation_ratio
        self.ram_allocation_ratio = compute.ram_allocation_ratio
        self.disk_allocation_ratio = compute.disk_allocation_ratio

    def consume_from_request(self, spec_obj):
        """Incrementally update host state from a RequestSpec object."""

        @utils.synchronized(self._lock_name)
        @set_update_time_on_success
        def _locked(self, spec_obj):
            # Scheduler API is inherently multi-threaded as every incoming RPC
            # message will be dispatched in it's own green thread. So the
            # shared host state should be consumed in a consistent way to make
            # sure its data is valid under concurrent write operations.
            self._locked_consume_from_request(spec_obj)

        return _locked(self, spec_obj)

    def _locked_consume_from_request(self, spec_obj):
        disk_mb = (spec_obj.root_gb +
                   spec_obj.ephemeral_gb) * 1024
        ram_mb = spec_obj.memory_mb
        vcpus = spec_obj.vcpus
        self.free_ram_mb -= ram_mb
        self.free_disk_mb -= disk_mb
        self.vcpus_used += vcpus

        # Track number of instances on host
        self.num_instances += 1

        pci_requests = spec_obj.pci_requests
        if pci_requests and self.pci_stats:
            pci_requests = pci_requests.requests
        else:
            pci_requests = None

        # Calculate the numa usage
        host_numa_topology, _fmt = hardware.host_topology_and_format_from_host(
                                self)
        instance_numa_topology = spec_obj.numa_topology

        spec_obj.numa_topology = hardware.numa_fit_instance_to_host(
            host_numa_topology, instance_numa_topology,
            limits=self.limits.get('numa_topology'),
            pci_requests=pci_requests, pci_stats=self.pci_stats)
        if pci_requests:
            instance_cells = None
            if spec_obj.numa_topology:
                instance_cells = spec_obj.numa_topology.cells
            self.pci_stats.apply_requests(pci_requests, instance_cells)

        # NOTE(sbauza): Yeah, that's crap. We should get rid of all of those
        # NUMA helpers because now we're 100% sure that spec_obj.numa_topology
        # is an InstanceNUMATopology object. Unfortunately, since
        # HostState.host_numa_topology is still limbo between an NUMATopology
        # object (when updated by consume_from_request), a ComputeNode object
        # (when updated by update_from_compute_node), we need to keep the call
        # to get_host_numa_usage_from_instance until it's fixed (and use a
        # temporary orphaned Instance object as a proxy)
        instance = objects.Instance(numa_topology=spec_obj.numa_topology)

        self.numa_topology = hardware.get_host_numa_usage_from_instance(
                self, instance)

        # NOTE(sbauza): By considering all cases when the scheduler is called
        # and when consume_from_request() is run, we can safely say that there
        # is always an IO operation because we want to move the instance
        self.num_io_ops += 1

    def __repr__(self):
        return ("(%(host)s, %(node)s) ram: %(free_ram)sMB "
                "disk: %(free_disk)sMB io_ops: %(num_io_ops)s "
                "instances: %(num_instances)s" %
                {'host': self.host, 'node': self.nodename,
                 'free_ram': self.free_ram_mb, 'free_disk': self.free_disk_mb,
                 'num_io_ops': self.num_io_ops,
                 'num_instances': self.num_instances})
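
The fields filled in above are exactly what the host filters read. As a small sketch of how a filter combines HostState with the RequestSpec (a simplified version of the logic in the stock RamFilter, not a drop-in replacement for it), consider:

from nova.scheduler import filters


class SimpleRamCheckFilter(filters.BaseHostFilter):
    """Illustrative filter: pass hosts whose oversubscribed RAM still
    fits the requested flavor.
    """

    def host_passes(self, host_state, spec_obj):
        requested_ram = spec_obj.memory_mb
        # ram_allocation_ratio comes from the compute node and allows
        # memory oversubscription.
        memory_limit = (host_state.total_usable_ram_mb *
                        host_state.ram_allocation_ratio)
        used_ram = host_state.total_usable_ram_mb - host_state.free_ram_mb
        return memory_limit - used_ram >= requested_ram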

Source analysis: generating the request spec

Parameter construction when creating an instance

This is the parameter construction in compute/api.py; another part of it happens in the build_instances method of conductor/manager.py.

...

class API(base.Base):
    """API for interacting with the compute manager."""
...

    def _create_instance(self, context, instance_type,
               image_href, kernel_id, ramdisk_id,
               min_count, max_count,
               display_name, display_description,
               key_name, key_data, security_groups,
               availability_zone, user_data, metadata, injected_files,
               admin_password, access_ip_v4, access_ip_v6,
               requested_networks, config_drive,
               block_device_mapping, auto_disk_config, filter_properties,
               reservation_id=None, legacy_bdm=True, shutdown_terminate=False,
               check_server_group_quota=False):
        """Verify all the input parameters regardless of the provisioning
        strategy being performed and schedule the instance(s) for
        creation.
        """

        # Normalize and setup some parameters
        if reservation_id is None:
            reservation_id = utils.generate_uid('r')
        security_groups = security_groups or ['default']
        min_count = min_count or 1
        max_count = max_count or min_count
        block_device_mapping = block_device_mapping or []

        if image_href:
            image_id, boot_meta = self._get_image(context, image_href)
        else:
            image_id = None
            boot_meta = self._get_bdm_image_metadata(
                context, block_device_mapping, legacy_bdm)

        self._check_auto_disk_config(image=boot_meta,
                                     auto_disk_config=auto_disk_config)

        base_options, max_net_count, key_pair = \
                self._validate_and_build_base_options(
                    context, instance_type, boot_meta, image_href, image_id,
                    kernel_id, ramdisk_id, display_name, display_description,
                    key_name, key_data, security_groups, availability_zone,
                    user_data, metadata, access_ip_v4, access_ip_v6,
                    requested_networks, config_drive, auto_disk_config,
                    reservation_id, max_count)

        # max_net_count is the maximum number of instances requested by the
        # user adjusted for any network quota constraints, including
        # consideration of connections to each requested network
        if max_net_count < min_count:
            raise exception.PortLimitExceeded()
        elif max_net_count < max_count:
            LOG.info(_LI("max count reduced from %(max_count)d to "
                         "%(max_net_count)d due to network port quota"),
                        {'max_count': max_count,
                         'max_net_count': max_net_count})
            max_count = max_net_count

        block_device_mapping = self._check_and_transform_bdm(context,
            base_options, instance_type, boot_meta, min_count, max_count,
            block_device_mapping, legacy_bdm)

        # We can't do this check earlier because we need bdms from all sources
        # to have been merged in order to get the root bdm.
        self._checks_for_create_and_rebuild(context, image_id, boot_meta,
                instance_type, metadata, injected_files,
                block_device_mapping.root_bdm())

        instance_group = self._get_requested_instance_group(context,
                                   filter_properties)

        instances = self._provision_instances(context, instance_type,
                min_count, max_count, base_options, boot_meta, security_groups,
                block_device_mapping, shutdown_terminate,
                instance_group, check_server_group_quota, filter_properties,
                key_pair)

        for instance in instances:
            self._record_action_start(context, instance,
                                      instance_actions.CREATE)

        self.compute_task_api.build_instances(context,
                instances=instances, image=boot_meta,
                filter_properties=filter_properties,
                admin_password=admin_password,
                injected_files=injected_files,
                requested_networks=requested_networks,
                security_groups=security_groups,
                block_device_mapping=block_device_mapping,
                legacy_bdm=False)

        return (instances, reservation_id)
...

Building the base options

(compute/api.py)

...

    def _validate_and_build_base_options(self, context, instance_type,
                                         boot_meta, image_href, image_id,
                                         kernel_id, ramdisk_id, display_name,
                                         display_description, key_name,
                                         key_data, security_groups,
                                         availability_zone, user_data,
                                         metadata, access_ip_v4, access_ip_v6,
                                         requested_networks, config_drive,
                                         auto_disk_config, reservation_id,
                                         max_count):
        """Verify all the input parameters regardless of the provisioning
        strategy being performed.
        """
        if instance_type['disabled']:
            raise exception.FlavorNotFound(flavor_id=instance_type['id'])

        if user_data:
            l = len(user_data)
            if l > MAX_USERDATA_SIZE:
                # NOTE(mikal): user_data is stored in a text column, and
                # the database might silently truncate if its over length.
                raise exception.InstanceUserDataTooLarge(
                    length=l, maxsize=MAX_USERDATA_SIZE)

            try:
                base64.decodestring(user_data)
            except base64.binascii.Error:
                raise exception.InstanceUserDataMalformed()

        self._check_requested_secgroups(context, security_groups)

        # Note:  max_count is the number of instances requested by the user,
        # max_network_count is the maximum number of instances taking into
        # account any network quotas
        max_network_count = self._check_requested_networks(context,
                                     requested_networks, max_count)

        kernel_id, ramdisk_id = self._handle_kernel_and_ramdisk(
                context, kernel_id, ramdisk_id, boot_meta)

        config_drive = self._check_config_drive(config_drive)

        if key_data is None and key_name is not None:
            key_pair = objects.KeyPair.get_by_name(context,
                                                   context.user_id,
                                                   key_name)
            key_data = key_pair.public_key
        else:
            key_pair = None

        root_device_name = block_device.prepend_dev(
                block_device.properties_root_device_name(
                    boot_meta.get('properties', {})))

        try:
            image_meta = objects.ImageMeta.from_dict(boot_meta)
        except ValueError as e:
            # there must be invalid values in the image meta properties so
            # consider this an invalid request
            msg = _('Invalid image metadata. Error: %s') % six.text_type(e)
            raise exception.InvalidRequest(msg)
        numa_topology = hardware.numa_get_constraints(
                instance_type, image_meta)

        system_metadata = {}

        # PCI requests come from two sources: instance flavor and
        # requested_networks. The first call in below returns an
        # InstancePCIRequests object which is a list of InstancePCIRequest
        # objects. The second call in below creates an InstancePCIRequest
        # object for each SR-IOV port, and append it to the list in the
        # InstancePCIRequests object
        pci_request_info = pci_request.get_pci_requests_from_flavor(
            instance_type)
        self.network_api.create_pci_requests_for_sriov_ports(context,
            pci_request_info, requested_networks)

        base_options = {
            'reservation_id': reservation_id,
            'image_ref': image_href,
            'kernel_id': kernel_id or '',
            'ramdisk_id': ramdisk_id or '',
            'power_state': power_state.NOSTATE,
            'vm_state': vm_states.BUILDING,
            'config_drive': config_drive,
            'user_id': context.user_id,
            'project_id': context.project_id,
            'instance_type_id': instance_type['id'],
            'memory_mb': instance_type['memory_mb'],
            'vcpus': instance_type['vcpus'],
            'root_gb': instance_type['root_gb'],
            'ephemeral_gb': instance_type['ephemeral_gb'],
            'display_name': display_name,
            'display_description': display_description,
            'user_data': user_data,
            'key_name': key_name,
            'key_data': key_data,
            'locked': False,
            'metadata': metadata or {},
            'access_ip_v4': access_ip_v4,
            'access_ip_v6': access_ip_v6,
            'availability_zone': availability_zone,
            'root_device_name': root_device_name,
            'progress': 0,
            'pci_requests': pci_request_info,
            'numa_topology': numa_topology,
            'system_metadata': system_metadata}

        options_from_image = self._inherit_properties_from_image(
                boot_meta, auto_disk_config)

        base_options.update(options_from_image)

        # return the validated options and maximum number of instances allowed
        # by the network quotas
        return base_options, max_network_count, key_pair

...

Building the instance parameters

(compute/api.py)

...

    def _provision_instances(self, context, instance_type, min_count,
            max_count, base_options, boot_meta, security_groups,
            block_device_mapping, shutdown_terminate,
            instance_group, check_server_group_quota, filter_properties,
            key_pair):
        # Reserve quotas
        num_instances, quotas = self._check_num_instances_quota(
                context, instance_type, min_count, max_count)
        security_groups = self.security_group_api.populate_security_groups(
                security_groups)
        self.security_group_api.ensure_default(context)
        LOG.debug("Going to run %s instances...", num_instances)
        instances = []
        instance_mappings = []
        build_requests = []
        try:
            for i in range(num_instances):
                # Create a uuid for the instance so we can store the
                # RequestSpec before the instance is created.
                instance_uuid = str(uuid.uuid4())
                # Store the RequestSpec that will be used for scheduling.
                req_spec = objects.RequestSpec.from_components(context,
                        instance_uuid, boot_meta, instance_type,
                        base_options['numa_topology'],
                        base_options['pci_requests'], filter_properties,
                        instance_group, base_options['availability_zone'])
                req_spec.create()

                # Create an instance object, but do not store in db yet.
                instance = objects.Instance(context=context)
                instance.uuid = instance_uuid
                instance.update(base_options)
                instance.keypairs = objects.KeyPairList(objects=[])
                if key_pair:
                    instance.keypairs.objects.append(key_pair)
                instance = self.create_db_entry_for_new_instance(context,
                        instance_type, boot_meta, instance, security_groups,
                        block_device_mapping, num_instances, i,
                        shutdown_terminate, create_instance=False)
                block_device_mapping = (
                    self._bdm_validate_set_size_and_instance(context,
                        instance, instance_type, block_device_mapping))

                build_request = objects.BuildRequest(context,
                        instance=instance, instance_uuid=instance.uuid,
                        project_id=instance.project_id,
                        block_device_mappings=block_device_mapping)
                build_request.create()
                build_requests.append(build_request)
                # Create an instance_mapping.  The null cell_mapping indicates
                # that the instance doesn't yet exist in a cell, and lookups
                # for it need to instead look for the RequestSpec.
                # cell_mapping will be populated after scheduling, with a
                # scheduling failure using the cell_mapping for the special
                # cell0.
                inst_mapping = objects.InstanceMapping(context=context)
                inst_mapping.instance_uuid = instance_uuid
                inst_mapping.project_id = context.project_id
                inst_mapping.cell_mapping = None
                inst_mapping.create()
                instance_mappings.append(inst_mapping)
                # TODO(alaski): Cast to conductor here which will call the
                # scheduler and defer instance creation until the scheduler
                # has picked a cell/host. Set the instance_mapping to the cell
                # that the instance is scheduled to.
                # NOTE(alaski): Instance and block device creation are going
                # to move to the conductor.
                instance.create()
                instances.append(instance)

                self._create_block_device_mapping(block_device_mapping)

                if instance_group:
                    if check_server_group_quota:
                        count = objects.Quotas.count(context,
                                             'server_group_members',
                                             instance_group,
                                             context.user_id)
                        try:
                            objects.Quotas.limit_check(context,
                                               server_group_members=count + 1)
                        except exception.OverQuota:
                            msg = _("Quota exceeded, too many servers in "
                                    "group")
                            raise exception.QuotaError(msg)

                    objects.InstanceGroup.add_members(context,
                                                      instance_group.uuid,
                                                      [instance.uuid])

                # send a state update notification for the initial create to
                # show it going from non-existent to BUILDING
                notifications.send_update_with_states(context, instance, None,
                        vm_states.BUILDING, None, None, service="api")

        # In the case of any exceptions, attempt DB cleanup and rollback the
        # quota reservations.
        except Exception:
            with excutils.save_and_reraise_exception():
                try:
                    for instance in instances:
                        try:
                            instance.destroy()
                        except exception.ObjectActionError:
                            pass
                    for instance_mapping in instance_mappings:
                        try:
                            instance_mapping.destroy()
                        except exception.InstanceMappingNotFound:
                            pass
                    for build_request in build_requests:
                        try:
                            build_request.destroy()
                        except exception.BuildRequestNotFound:
                            pass
                finally:
                    quotas.rollback()

        # Commit the reservations
        quotas.commit()
        return instances
...

Building the image parameters

(compute/api.py)

...

    def _get_bdm_image_metadata(self, context, block_device_mapping,
                                legacy_bdm=True):
        """If we are booting from a volume, we need to get the
        volume details from Cinder and make sure we pass the
        metadata back accordingly.
        """
        if not block_device_mapping:
            return {}

        for bdm in block_device_mapping:
            if (legacy_bdm and
                    block_device.get_device_letter(
                       bdm.get('device_name', '')) != 'a'):
                continue
            elif not legacy_bdm and bdm.get('boot_index') != 0:
                continue

            volume_id = bdm.get('volume_id')
            snapshot_id = bdm.get('snapshot_id')
            if snapshot_id:
                # NOTE(alaski): A volume snapshot inherits metadata from the
                # originating volume, but the API does not expose metadata
                # on the snapshot itself.  So we query the volume for it below.
                snapshot = self.volume_api.get_snapshot(context, snapshot_id)
                volume_id = snapshot['volume_id']

            if bdm.get('image_id'):
                try:
                    image_id = bdm['image_id']
                    image_meta = self.image_api.get(context, image_id)
                    return image_meta
                except Exception:
                    raise exception.InvalidBDMImage(id=image_id)
            elif volume_id:
                try:
                    volume = self.volume_api.get(context, volume_id)
                except exception.CinderConnectionFailed:
                    raise
                except Exception:
                    raise exception.InvalidBDMVolume(id=volume_id)

                if not volume.get('bootable', True):
                    raise exception.InvalidBDMVolumeNotBootable(id=volume_id)

                return utils.get_image_metadata_from_volume(volume)
        return {}
...

Assembling the request spec parameters

Called by conductor.manager.ComputeTaskManager.build_instances (scheduler/utils.py).

...

def build_request_spec(ctxt, image, instances, instance_type=None):
    """Build a request_spec for the scheduler.

    The request_spec assumes that all instances to be scheduled are the same
    type.
    """
    instance = instances[0]
    if instance_type is None:
        if isinstance(instance, obj_instance.Instance):
            instance_type = instance.get_flavor()
        else:
            instance_type = flavors.extract_flavor(instance)

    if isinstance(instance, obj_instance.Instance):
        instance = obj_base.obj_to_primitive(instance)
        # obj_to_primitive doesn't copy this enough, so be sure
        # to detach our metadata blob because we modify it below.
        instance['system_metadata'] = dict(instance.get('system_metadata', {}))

    if isinstance(instance_type, objects.Flavor):
        instance_type = obj_base.obj_to_primitive(instance_type)
        # NOTE(danms): Replicate this old behavior because the
        # scheduler RPC interface technically expects it to be
        # there. Remove this when we bump the scheduler RPC API to
        # v5.0
        try:
            flavors.save_flavor_info(instance.get('system_metadata', {}),
                                     instance_type)
        except KeyError:
            # If the flavor isn't complete (which is legit with a
            # flavor object, just don't put it in the request spec
            pass

    request_spec = {
            'image': image or {},
            'instance_properties': instance,
            'instance_type': instance_type,
            'num_instances': len(instances)}
    return jsonutils.to_primitive(request_spec)

...
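
For a single-instance boot, the legacy dictionary returned above has roughly the following shape. The values below are illustrative and heavily trimmed; a real instance_properties dict carries every primitived Instance field:

request_spec = {
    'image': {'id': 'e345e8b0-71b7-44e0-b1a1-e168f85a19f6',
              'min_ram': 0, 'min_disk': 1},
    'instance_properties': {'uuid': 'f7279e6b-7c15-421f-8c81-033fc9f70e30',
                            'memory_mb': 64, 'vcpus': 1, 'root_gb': 1,
                            'ephemeral_gb': 0, 'system_metadata': {}},
    'instance_type': {'name': 'm1.nano', 'memory_mb': 64, 'vcpus': 1,
                      'root_gb': 1, 'ephemeral_gb': 0, 'swap': 0},
    'num_instances': 1,
}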

Building the final RequestSpec

When responding to the RPC call, both the _schedule_instances method of ComputeTaskManager (conductor/manager.py) and the select_destinations method of SchedulerManager (scheduler/manager.py) use objects.RequestSpec.from_primitives (objects/request_spec.py) to build a new-style RequestSpec.

  • objects/request_spec.py: builds the new-style RequestSpec

...

from nova.objects import instance as obj_instance

...

@base.NovaObjectRegistry.register
class RequestSpec(base.NovaObject):
...

    @classmethod
    def from_primitives(cls, context, request_spec, filter_properties):
        """Returns a new RequestSpec object by hydrating it from legacy dicts.

        Deprecated.  A RequestSpec object is created early in the boot process
        using the from_components method.  That object will either be passed to
        places that require it, or it can be looked up with
        get_by_instance_uuid.  This method can be removed when there are no
        longer any callers.  Because the method is not remotable it is not tied
        to object versioning.

        That helper is not intended to leave the legacy dicts kept in the nova
        codebase, but is rather just for giving a temporary solution for
        populating the Spec object until we get rid of scheduler_utils'
        build_request_spec() and the filter_properties hydratation in the
        conductor.

        :param context: a context object
        :param request_spec: An old-style request_spec dictionary
        :param filter_properties: An old-style filter_properties dictionary
        """
        num_instances = request_spec.get('num_instances', 1)
        spec = cls(context, num_instances=num_instances)
        # Hydrate from request_spec first
        image = request_spec.get('image')
        spec._image_meta_from_image(image)
        instance = request_spec.get('instance_properties')
        spec._from_instance(instance)
        flavor = request_spec.get('instance_type')
        spec._from_flavor(flavor)
        # Hydrate now from filter_properties
        spec.ignore_hosts = filter_properties.get('ignore_hosts')
        spec.force_hosts = filter_properties.get('force_hosts')
        spec.force_nodes = filter_properties.get('force_nodes')
        retry = filter_properties.get('retry', {})
        spec._from_retry(retry)
        limits = filter_properties.get('limits', {})
        spec._from_limits(limits)
        spec._populate_group_info(filter_properties)
        scheduler_hints = filter_properties.get('scheduler_hints', {})
        spec._from_hints(scheduler_hints)

        # NOTE(sbauza): Default the other fields that are not part of the
        # original contract
        spec.obj_set_defaults()

        return spec
...

    def _image_meta_from_image(self, image):
        if isinstance(image, objects.ImageMeta):
            self.image = image
        elif isinstance(image, dict):
            # NOTE(sbauza): Until Nova is fully providing an ImageMeta object
            # for getting properties, we still need to hydrate it here
            # TODO(sbauza): To be removed once all RequestSpec hydrations are
            # done on the conductor side and if the image is an ImageMeta
            self.image = objects.ImageMeta.from_dict(image)
        else:
            self.image = None

...

    def _from_instance(self, instance):
        if isinstance(instance, obj_instance.Instance):
            # NOTE(sbauza): Instance should normally be a NovaObject...
            getter = getattr
        elif isinstance(instance, dict):
            # NOTE(sbauza): ... but there are some cases where request_spec
            # has an instance key as a dictionary, just because
            # select_destinations() is getting a request_spec dict made by
            # sched_utils.build_request_spec()
            # TODO(sbauza): To be removed once all RequestSpec hydrations are
            # done on the conductor side
            getter = lambda x, y: x.get(y)
        else:
            # If the instance is None, there is no reason to set the fields
            return

        instance_fields = ['numa_topology', 'pci_requests', 'uuid',
                           'project_id', 'availability_zone']
        for field in instance_fields:
            if field == 'uuid':
                setattr(self, 'instance_uuid', getter(instance, field))
            elif field == 'pci_requests':
                self._from_instance_pci_requests(getter(instance, field))
            elif field == 'numa_topology':
                self._from_instance_numa_topology(getter(instance, field))
            else:
                setattr(self, field, getter(instance, field))

...

    def _from_flavor(self, flavor):
        if isinstance(flavor, objects.Flavor):
            self.flavor = flavor
        elif isinstance(flavor, dict):
            # NOTE(sbauza): Again, request_spec is primitived by
            # sched_utils.build_request_spec() and passed to
            # select_destinations() like this
            # TODO(sbauza): To be removed once all RequestSpec hydrations are
            # done on the conductor side
            self.flavor = objects.Flavor(**flavor)

...

    def _from_retry(self, retry_dict):
        self.retry = (SchedulerRetries.from_dict(self._context, retry_dict)
                      if retry_dict else None)

...

    def _from_limits(self, limits_dict):
        self.limits = SchedulerLimits.from_dict(limits_dict)

...

    def _from_hints(self, hints_dict):
        if hints_dict is None:
            self.scheduler_hints = None
            return
        self.scheduler_hints = {
            hint: value if isinstance(value, list) else [value]
            for hint, value in six.iteritems(hints_dict)}

...
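
The only job of _from_hints is to guarantee that every hint value ends up as a list, so filters can treat all hints uniformly. A minimal illustration (the hint value is hypothetical):

hints_dict = {'group': 'aef838f5-8f9d-4ff2-8a34-17bd5ac05a8a'}
scheduler_hints = {hint: value if isinstance(value, list) else [value]
                   for hint, value in hints_dict.items()}
# scheduler_hints == {'group': ['aef838f5-8f9d-4ff2-8a34-17bd5ac05a8a']}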

class SchedulerRetries(base.NovaObject):
    # Version 1.0: Initial version
    # Version 1.1: ComputeNodeList version 1.14
    VERSION = '1.1'

    fields = {
        'num_attempts': fields.IntegerField(),
        # NOTE(sbauza): Even if we are only using host/node strings, we need to
        # know which compute nodes were tried
        'hosts': fields.ObjectField('ComputeNodeList'),
    }

...

class SchedulerLimits(base.NovaObject):
    # Version 1.0: Initial version
    VERSION = '1.0'

    fields = {
        'numa_topology': fields.ObjectField('NUMATopologyLimits',
                                            nullable=True,
                                            default=None),
        'vcpu': fields.IntegerField(nullable=True, default=None),
        'disk_gb': fields.IntegerField(nullable=True, default=None),
        'memory_mb': fields.IntegerField(nullable=True, default=None),
    }

...
  • objects/image_meta.py: the image spec

...

class ImageMeta(base.NovaObject):

...

    fields = {
        'id': fields.UUIDField(),
        'name': fields.StringField(),
        'status': fields.StringField(),
        'visibility': fields.StringField(),
        'protected': fields.FlexibleBooleanField(),
        'checksum': fields.StringField(),
        'owner': fields.StringField(),
        'size': fields.IntegerField(),
        'virtual_size': fields.IntegerField(),
        'container_format': fields.StringField(),
        'disk_format': fields.StringField(),
        'created_at': fields.DateTimeField(nullable=True),
        'updated_at': fields.DateTimeField(nullable=True),
        'tags': fields.ListOfStringsField(),
        'direct_url': fields.StringField(),
        'min_ram': fields.IntegerField(),
        'min_disk': fields.IntegerField(),
        'properties': fields.ObjectField('ImageMetaProps'),
    }

...
  • objects/instance.py: the instance spec

...

class Instance(base.NovaPersistentObject, base.NovaObject,
               base.NovaObjectDictCompat):
...

    fields = {
        'id': fields.IntegerField(),

        'user_id': fields.StringField(nullable=True),
        'project_id': fields.StringField(nullable=True),

        'image_ref': fields.StringField(nullable=True),
        'kernel_id': fields.StringField(nullable=True),
        'ramdisk_id': fields.StringField(nullable=True),
        'hostname': fields.StringField(nullable=True),

        'launch_index': fields.IntegerField(nullable=True),
        'key_name': fields.StringField(nullable=True),
        'key_data': fields.StringField(nullable=True),

        'power_state': fields.IntegerField(nullable=True),
        'vm_state': fields.StringField(nullable=True),
        'task_state': fields.StringField(nullable=True),

        'services': fields.ObjectField('ServiceList'),

        'memory_mb': fields.IntegerField(nullable=True),
        'vcpus': fields.IntegerField(nullable=True),
        'root_gb': fields.IntegerField(nullable=True),
        'ephemeral_gb': fields.IntegerField(nullable=True),
        'ephemeral_key_uuid': fields.UUIDField(nullable=True),

        'host': fields.StringField(nullable=True),
        'node': fields.StringField(nullable=True),

        'instance_type_id': fields.IntegerField(nullable=True),

        'user_data': fields.StringField(nullable=True),

        'reservation_id': fields.StringField(nullable=True),

        'launched_at': fields.DateTimeField(nullable=True),
        'terminated_at': fields.DateTimeField(nullable=True),

        'availability_zone': fields.StringField(nullable=True),

        'display_name': fields.StringField(nullable=True),
        'display_description': fields.StringField(nullable=True),

        'launched_on': fields.StringField(nullable=True),

        # NOTE(jdillaman): locked deprecated in favor of locked_by,
        # to be removed in Icehouse
        'locked': fields.BooleanField(default=False),
        'locked_by': fields.StringField(nullable=True),

        'os_type': fields.StringField(nullable=True),
        'architecture': fields.StringField(nullable=True),
        'vm_mode': fields.StringField(nullable=True),
        'uuid': fields.UUIDField(),

        'root_device_name': fields.StringField(nullable=True),
        'default_ephemeral_device': fields.StringField(nullable=True),
        'default_swap_device': fields.StringField(nullable=True),
        'config_drive': fields.StringField(nullable=True),

        'access_ip_v4': fields.IPV4AddressField(nullable=True),
        'access_ip_v6': fields.IPV6AddressField(nullable=True),

        'auto_disk_config': fields.BooleanField(default=False),
        'progress': fields.IntegerField(nullable=True),

        'shutdown_terminate': fields.BooleanField(default=False),
        'disable_terminate': fields.BooleanField(default=False),

        'cell_name': fields.StringField(nullable=True),

        'metadata': fields.DictOfStringsField(),
        'system_metadata': fields.DictOfNullableStringsField(),

        'info_cache': fields.ObjectField('InstanceInfoCache',
                                         nullable=True),

        'security_groups': fields.ObjectField('SecurityGroupList'),

        'fault': fields.ObjectField('InstanceFault', nullable=True),

        'cleaned': fields.BooleanField(default=False),

        'pci_devices': fields.ObjectField('PciDeviceList', nullable=True),
        'numa_topology': fields.ObjectField('InstanceNUMATopology',
                                            nullable=True),
        'pci_requests': fields.ObjectField('InstancePCIRequests',
                                           nullable=True),
        'device_metadata': fields.ObjectField('InstanceDeviceMetadata',
                                              nullable=True),
        'tags': fields.ObjectField('TagList'),
        'flavor': fields.ObjectField('Flavor'),
        'old_flavor': fields.ObjectField('Flavor', nullable=True),
        'new_flavor': fields.ObjectField('Flavor', nullable=True),
        'vcpu_model': fields.ObjectField('VirtCPUModel', nullable=True),
        'ec2_ids': fields.ObjectField('EC2Ids'),
        'migration_context': fields.ObjectField('MigrationContext',
                                                nullable=True),
        'keypairs': fields.ObjectField('KeyPairList'),
        }

...
  • objects/flavor.py: the flavor spec

...

class Flavor(base.NovaPersistentObject, base.NovaObject,
             base.NovaObjectDictCompat):
...

    fields = {
        'id': fields.IntegerField(),
        'name': fields.StringField(nullable=True),
        'memory_mb': fields.IntegerField(),
        'vcpus': fields.IntegerField(),
        'root_gb': fields.IntegerField(),
        'ephemeral_gb': fields.IntegerField(),
        'flavorid': fields.StringField(),
        'swap': fields.IntegerField(),
        'rxtx_factor': fields.FloatField(nullable=True, default=1.0),
        'vcpu_weight': fields.IntegerField(nullable=True),
        'disabled': fields.BooleanField(),
        'is_public': fields.BooleanField(),
        'extra_specs': fields.DictOfStringsField(),
        'projects': fields.ListOfStringsField(),
        }

...

Implementing a simple filter

Filter base classes

  • filters.py: BaseFilter is the base class that the filter handler drives.

class BaseFilter(object):
    """Base class for all filter classes."""
    def _filter_one(self, obj, spec_obj):
        """Return True if it passes the filter, False otherwise.
        Override this in a subclass.
        """
        return True

    def filter_all(self, filter_obj_list, spec_obj):
        """Yield objects that pass the filter.

        Can be overridden in a subclass, if you need to base filtering
        decisions on all objects.  Otherwise, one can just override
        _filter_one() to filter a single object.
        """
        for obj in filter_obj_list:
            if self._filter_one(obj, spec_obj):
                yield obj

    # Set to true in a subclass if a filter only needs to be run once
    # for each request rather than for each instance
    run_filter_once_per_request = False

    def run_filter_for_index(self, index):
        """Return True if the filter needs to be run for the "index-th"
        instance in a request.  Only need to override this if a filter
        needs anything other than "first only" or "all" behaviour.
        """
        if self.run_filter_once_per_request and index > 0:
            return False
        else:
            return True
  • scheduler/filters/__init__.py: BaseHostFilter is the base class that host filters derive from. It overrides _filter_one, and every subclass must override host_passes to decide whether a host passes the filter.

class BaseHostFilter(filters.BaseFilter):
    """Base class for host filters."""
    def _filter_one(self, obj, filter_properties):
        """Return True if the object passes the filter, otherwise False."""
        return self.host_passes(obj, filter_properties)

    def host_passes(self, host_state, filter_properties):
        """Return True if the HostState passes the filter, otherwise False.
        Override this in a subclass.
        """
        raise NotImplementedError()

Writing the filter

This filter simply dumps the host state and the request spec to the log.

  • Write the dump_filter filter:

$ vi /usr/lib/python2.7/dist-packages/nova/scheduler/filters/dump_filter.py
from oslo_log import log as logging

from nova.i18n import _LI
from nova.scheduler import filters

LOG = logging.getLogger(__name__)


class DumpFilter(filters.BaseHostFilter):

    def host_passes(self, host_state, spec_obj):
        
        LOG.info(_LI("------ Dump Filter ------"))
        
        LOG.info(_LI(" ** Host Status **"))
        
        LOG.info(_LI("host                  %s"), host_state.host)
        LOG.info(_LI("nodename              %s"), host_state.nodename)
        
        LOG.info(_LI("total_usable_ram_mb   %s"), host_state.total_usable_ram_mb)
        LOG.info(_LI("total_usable_disk_gb  %s"), host_state.total_usable_disk_gb)
        LOG.info(_LI("disk_mb_used          %s"), host_state.disk_mb_used)
        LOG.info(_LI("free_ram_mb           %s"), host_state.free_ram_mb)
        LOG.info(_LI("free_disk_mb          %s"), host_state.free_disk_mb)
        LOG.info(_LI("vcpus_total           %s"), host_state.vcpus_total)
        LOG.info(_LI("vcpus_used            %s"), host_state.vcpus_used)
        LOG.info(_LI("pci_stats             %s"), host_state.pci_stats)
        LOG.info(_LI("numa_topology         %s"), host_state.numa_topology)
        
        LOG.info(_LI("num_instances         %s"), host_state.num_instances)
        LOG.info(_LI("num_io_ops            %s"), host_state.num_io_ops)
        
        LOG.info(_LI("host_ip               %s"), host_state.host_ip)
        LOG.info(_LI("hypervisor_type       %s"), host_state.hypervisor_type)
        LOG.info(_LI("hypervisor_version    %s"), host_state.hypervisor_version)
        LOG.info(_LI("hypervisor_hostname   %s"), host_state.hypervisor_hostname)
        LOG.info(_LI("cpu_info              %s"), host_state.cpu_info)
        LOG.info(_LI("supported_instances   %s"), host_state.supported_instances)
        
        LOG.info(_LI("limits                %s"), host_state.limits)
        LOG.info(_LI("metrics               %s"), host_state.metrics)
        LOG.info(_LI("aggregates            %s"), host_state.aggregates)
        LOG.info(_LI("instances             %s"), host_state.instances)
        
        LOG.info(_LI("ram_allocation_ratio  %s"), host_state.ram_allocation_ratio)
        LOG.info(_LI("cpu_allocation_ratio  %s"), host_state.cpu_allocation_ratio)
        LOG.info(_LI("disk_allocation_ratio %s"), host_state.disk_allocation_ratio)
        LOG.info(_LI("updated %s"), host_state.updated)
        
        LOG.info(_LI(" ** Request Spect **"))
        
        LOG.info(_LI("memory_mb            %s"), spec_obj.memory_mb)
        LOG.info(_LI("vcpus                %s"), spec_obj.vcpus)
      
        return True
  • Generate the pyc file:

$ python
Python 2.7.12 (default, Dec  4 2017, 14:50:18) 
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import py_compile
>>> py_compile.compile("/usr/lib/python2.7/dist-packages/nova/scheduler/filters/dump_filter.py")
  • Update the default filter list (in nova.conf) and raise the log level (in logging.conf):

$ vi /etc/nova/nova.conf
[DEFAULT]
...

scheduler_default_filters = RetryFilter, DumpFilter, AvailabilityZoneFilter, RamFilter, DiskFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter

...

$ vi /etc/nova/logging.conf
...

[logger_root]
level = DEBUG
handlers = null

[logger_nova]
level = DEBUG
handlers = stderr
qualname = nova

...
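
Note that DumpFilter does not need to be added to scheduler_available_filters: that option defaults to nova.scheduler.filters.all_filters, which exposes every filter class found in the nova/scheduler/filters/ package, including the new dump_filter.py. A filter shipped outside that package would need an extra entry, for example (the module path below is hypothetical):

scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_available_filters = mypackage.filters.dump_filter.DumpFilter

Because this is a multi-valued option, repeating it appends entries rather than overwriting them.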
  • Stop the existing nova-scheduler system service and start nova-scheduler manually in the foreground:

$ systemctl stop nova-scheduler
$ /usr/bin/python /usr/bin/nova-scheduler --config-file=/etc/nova/nova.conf
  • Create and then delete an instance:

$ . demo-openrc

$ openstack server create --flavor m1.nano --image cirros \
	--nic net-id=eb2f08c3-dbc4-423e-8206-6b0fb07d94b7 \
	--security-group default --key-name mykey test1

$ openstack server delete test1
  • Check the nova-scheduler output:

...
2017-12-14 10:43:15.670 77206 WARNING oslo_reports.guru_meditation_report [-] Guru meditation now registers SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 will no longer be registered in a future release, so please use SIGUSR2 to generate reports.


2017-12-14 10:43:16.265 77206 INFO nova.service [-] Starting scheduler node (version 14.0.1)


2017-12-14 10:43:24.264 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
------ Dump Filter ------


2017-12-14 10:43:24.265 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -]  
** Host Status **


2017-12-14 10:43:24.266 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
host                  UbuntuStack


2017-12-14 10:43:24.266 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
nodename              ubuntustack


2017-12-14 10:43:24.267 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] total_usable_ram_mb   3934


2017-12-14 10:43:24.267 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] total_usable_disk_gb  18


2017-12-14 10:43:24.268 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
disk_mb_used          2048


2017-12-14 10:43:24.273 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
free_ram_mb           3294


2017-12-14 10:43:24.274 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
free_disk_mb          7168


2017-12-14 10:43:24.274 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
vcpus_total           2


2017-12-14 10:43:24.275 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
vcpus_used            2


2017-12-14 10:43:24.275 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
pci_stats             <nova.pci.stats.PciDeviceStats object at 0x7f0cd5da9b50>


2017-12-14 10:43:24.276 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
numa_topology         
{"nova_object.version": "1.2", "nova_object.changes": ["cells"], "nova_object.name": "NUMATopology", "nova_object.data": {"cells": [{"nova_object.version": "1.2", "nova_object.changes": ["cpu_usage", "memory_usage", "cpuset", "pinned_cpus", "siblings", "memory", "mempages", "id"], "nova_object.name": "NUMACell", "nova_object.data": {"cpu_usage": 0, "memory_usage": 0, "cpuset": [0, 1], "pinned_cpus": [], "siblings": [], "memory": 3934, "mempages": [{"nova_object.version": "1.1", "nova_object.changes": ["total", "reserved", "size_kb", "used"], "nova_object.name": "NUMAPagesTopology", "nova_object.data": {"used": 0, "total": 1007177, "reserved": 0, "size_kb": 4}, "nova_object.namespace": "nova"}, {"nova_object.version": "1.1", "nova_object.changes": ["total", "reserved", "size_kb", "used"], "nova_object.name": "NUMAPagesTopology", "nova_object.data": {"used": 0, "total": 0, "reserved": 0, "size_kb": 2048}, "nova_object.namespace": "nova"}, {"nova_object.version": "1.1", "nova_object.changes": ["total", "reserved", "size_kb", "used"], "nova_object.name": "NUMAPagesTopology", "nova_object.data": {"used": 0, "total": 0, "reserved": 0, "size_kb": 1048576}, "nova_object.namespace": "nova"}], "id": 0}, "nova_object.namespace": "nova"}]}, "nova_object.namespace": "nova"}


2017-12-14 10:43:24.281 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
num_instances         2


2017-12-14 10:43:24.281 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
num_io_ops            0


2017-12-14 10:43:24.282 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
host_ip               192.168.195.160


2017-12-14 10:43:24.283 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
hypervisor_type       QEMU


2017-12-14 10:43:24.283 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
hypervisor_version    2005000


2017-12-14 10:43:24.283 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
hypervisor_hostname   ubuntustack


2017-12-14 10:43:24.284 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
cpu_info              {"vendor": "Intel", "model": "Broadwell-noTSX", "arch": "x86_64", "features": ["smap", "avx", "clflush", "sep", "syscall", "vme", "invpcid", "tsc", "fsgsbase", "xsave", "pge", "erms", "cmov", "smep", "pcid", "pat", "lm", "msr", "adx", "3dnowprefetch", "nx", "fxsr", "sse4.1", "pae", "sse4.2", "pclmuldq", "fma", "tsc-deadline", "mmx", "osxsave", "cx8", "mce", "de", "rdtscp", "ht", "pse", "lahf_lm", "abm", "rdseed", "popcnt", "mca", "pdpe1gb", "apic", "sse", "f16c", "mpx", "invtsc", "pni", "aes", "avx2", "sse2", "ss", "hypervisor", "bmi1", "bmi2", "ssse3", "fpu", "cx16", "pse36", "mtrr", "movbe", "rdrand", "x2apic"], "topology": {"cores": 2, "cells": 1, "threads": 1, "sockets": 1}}


2017-12-14 10:43:24.284 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
supported_instances   [[u'i686', u'qemu', u'hvm'], [u'x86_64', u'qemu', u'hvm']]


2017-12-14 10:43:24.285 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
limits                {}
2017-12-14 10:43:24.285 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
metrics               MonitorMetricList(objects=[])


2017-12-14 10:43:24.286 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
aggregates            []


2017-12-14 10:43:24.292 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
instances             {'aef838f5-8f9d-4ff2-8a34-17bd5ac05a8a': Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone=None,cell_name=None,cleaned=False,config_drive='',created_at=2016-11-29T09:59:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='selfservice-instance',display_name='selfservice-instance',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=<?>,host='UbuntuStack',hostname='selfservice-instance',id=2,image_ref='e345e8b0-71b7-44e0-b1a1-e168f85a19f6',info_cache=<?>,instance_type_id=1,kernel_id='',key_data='ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDb088fU8458jkAfgBuvCC6fEwlYjM2Chnj3CHKaepskuZ556KKNqyVuVKxc813gagh/bIFu/tN4WMC0DXqR5+v9Sx3IoK2m35LMUe0Lukuii4Ztny3pJp4/zs/lRtOfc9w+ykrocw2yyw14KqGwOkh7QEESr8/CChc5T7d5IqpUWYHObZ2hVr8Z6JxP7wBQKT8wLCiZ8DtSkpNxrIBdZGEO/RaBAa2H1Jumik01Lh/2iXEDsI+ohcnd2trH/k3D7HYhfb2Oz/Da2CQISHenTJVUBaX19I3Eass6VflVYP8msQsPPdQVE5oQsYaNbzblYaLNT79z04Miu4VjeNjTTvb root@UbuntuStack
',key_name='mykey',keypairs=<?>,launch_index=0,launched_at=2016-11-29T10:00:29Z,launched_on='UbuntuStack',locked=False,locked_by=None,memory_mb=64,metadata=<?>,migration_context=<?>,new_flavor=<?>,node='ubuntustack',numa_topology=<?>,old_flavor=<?>,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c7ddc0ecab64419486df0d7f66e8174c',ramdisk_id='',reservation_id='r-s86llflp',root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata=<?>,tags=<?>,task_state=None,terminated_at=None,updated_at=2017-12-06T14:51:10Z,user_data=None,user_id='ffff52bbf1da4c86a3d2b57e977f6b82',uuid=aef838f5-8f9d-4ff2-8a34-17bd5ac05a8a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped'), 'f7279e6b-7c15-421f-8c81-033fc9f70e30': Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone=None,cell_name=None,cleaned=False,config_drive='',created_at=2017-12-06T15:40:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test',display_name='test',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=<?>,host='UbuntuStack',hostname='test',id=3,image_ref='e345e8b0-71b7-44e0-b1a1-e168f85a19f6',info_cache=<?>,instance_type_id=1,kernel_id='',key_data='ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDb088fU8458jkAfgBuvCC6fEwlYjM2Chnj3CHKaepskuZ556KKNqyVuVKxc813gagh/bIFu/tN4WMC0DXqR5+v9Sx3IoK2m35LMUe0Lukuii4Ztny3pJp4/zs/lRtOfc9w+ykrocw2yyw14KqGwOkh7QEESr8/CChc5T7d5IqpUWYHObZ2hVr8Z6JxP7wBQKT8wLCiZ8DtSkpNxrIBdZGEO/RaBAa2H1Jumik01Lh/2iXEDsI+ohcnd2trH/k3D7HYhfb2Oz/Da2CQISHenTJVUBaX19I3Eass6VflVYP8msQsPPdQVE5oQsYaNbzblYaLNT79z04Miu4VjeNjTTvb root@UbuntuStack
',key_name='mykey',keypairs=<?>,launch_index=0,launched_at=2017-12-06T15:40:45Z,launched_on='UbuntuStack',locked=False,locked_by=None,memory_mb=64,metadata=<?>,migration_context=<?>,new_flavor=<?>,node='ubuntustack',numa_topology=<?>,old_flavor=<?>,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c7ddc0ecab64419486df0d7f66e8174c',ramdisk_id='',reservation_id='r-d00bhg7u',root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata=<?>,tags=<?>,task_state=None,terminated_at=None,updated_at=2017-12-13T06:50:48Z,user_data=None,user_id='ffff52bbf1da4c86a3d2b57e977f6b82',uuid=f7279e6b-7c15-421f-8c81-033fc9f70e30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped')}


2017-12-14 10:43:24.293 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
ram_allocation_ratio  1.5


2017-12-14 10:43:24.294 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
cpu_allocation_ratio  16.0


2017-12-14 10:43:24.294 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] disk_allocation_ratio 1.0


2017-12-14 10:43:24.294 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
updated 2017-12-14 02:43:13+00:00


2017-12-14 10:43:24.295 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
 ** Request Spec **


2017-12-14 10:43:24.295 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
memory_mb            64


2017-12-14 10:43:24.296 77206 INFO nova.scheduler.filters.dump_filter [req-6a06ba1d-aeaf-41a8-a922-0402c0be3444 ffff52bbf1da4c86a3d2b57e977f6b82 c7ddc0ecab64419486df0d7f66e8174c - - -] 
vcpus                1
...
