Extension development of the Ceph management platform Calamari
In the architecture diagram, the parts in red boxes are implemented by Calamari's own code; the parts outside the red boxes are open-source frameworks that Calamari uses but does not implement itself.
The components installed on the Ceph server node are Diamond and Salt-minion. Diamond is responsible for collecting monitoring data and supports many data types and metrics; each data type corresponds to a collector in the figure above. Besides Ceph's own status information, Diamond can also collect usage and performance data for key resources, including CPU, memory, network, I/O load, and disk metrics. A collector gathers data through local command-line tools and then reports it to Graphite.
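A Diamond collector is essentially a class that gathers values and publishes them under metric names. The sketch below mimics that shape with a self-contained stand-in for Diamond's base class (the real base class ships with Diamond itself); the class name and metric name here are purely illustrative.

```python
import os

# Stand-in for Diamond's collector base class, so this sketch is
# self-contained; the real one is provided by the Diamond package.
class Collector(object):
    def __init__(self):
        self.metrics = {}

    def publish(self, name, value):
        # Diamond would forward this to Graphite; here we just record it.
        self.metrics[name] = value

class LoadAvgCollector(Collector):
    """Illustrative collector: publish the 1-minute load average."""
    def collect(self):
        self.publish('loadavg.01', os.getloadavg()[0])
```

In real Diamond deployments, each collector's `collect()` is invoked periodically and the published values flow to carbon-cache.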
Graphite is not only an enterprise-grade monitoring tool; it can also render graphs in real time. carbon-cache is a back-end daemon, implemented in Python on a highly scalable, event-driven I/O architecture, that can communicate with a large number of clients and handle high traffic volumes with low overhead.
Whisper is similar to RRDtool: it provides a database library that applications use to store and retrieve time-series data points kept in a special file format. Whisper's most basic operations are creating new Whisper files, updating them by writing new data points, and fetching stored data points.
Graphite-web is the user interface, used to generate graphs; users can access the generated images directly through URLs.
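Because Graphite-web exposes rendered graphs by URL, a monitoring front end can simply construct a `/render` URL. A minimal sketch, assuming a reachable Graphite host; the host name and metric path below are placeholders, not values from this article:

```python
from urllib.parse import urlencode

def graphite_render_url(host, target, minutes=60, fmt='png'):
    """Build a Graphite /render URL for one metric target."""
    params = {'target': target, 'from': '-%dmin' % minutes, 'format': fmt}
    return 'http://%s/render/?%s' % (host, urlencode(params))

# Hypothetical Diamond metric path for CPU usage on a Ceph node.
url = graphite_render_url('graphite.example.com',
                          'servers.ceph-node1.cpu.total.user')
```

Requesting that URL in a browser returns the rendered image directly; changing `fmt` to `'json'` returns the raw data points instead.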
Calamari uses SaltStack for communication between the Calamari Server and the Ceph server nodes. SaltStack is an open-source automated operations management tool, similar in function to Chef and Puppet. The Salt-master sends instructions to designated Salt-minions to manage the Ceph cluster. After installation on a Ceph server node, the Salt-minion synchronizes a ceph.py file from the master; this file contains the API for Ceph operations and ultimately calls librados or the command line to communicate with the Ceph cluster.
calamari_rest provides the Calamari REST API (see the official documentation for the detailed interfaces). Ceph's own REST API is a low-level interface in which each URL maps directly to an equivalent Ceph CLI command. The Calamari REST API is a higher-level interface: API users can operate on objects with GET/POST/PATCH methods without knowing the underlying Ceph commands. The main difference is that users of Ceph's REST API need to know Ceph itself very well, while Calamari's REST API is closer to a description of Ceph resources and is therefore better suited for upper-layer applications to call.
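The difference in style is visible from the client side: against the Calamari REST API one simply issues GET/POST/PATCH requests on resource URLs. A minimal standard-library sketch; the host is a placeholder, authentication is omitted, and the request is only built, not sent:

```python
import json
import urllib.request

API_BASE = 'http://calamari.example.com/api/v2'  # placeholder host

def build_request(method, path, body=None):
    """Build an HTTP request against the Calamari v2 API (not sent here)."""
    data = json.dumps(body).encode('utf-8') if body is not None else None
    req = urllib.request.Request(API_BASE + path, data=data, method=method)
    req.add_header('Content-Type', 'application/json')
    return req

# PATCH a pool attribute without knowing the underlying Ceph command:
req = build_request('PATCH', '/cluster/xxxx/pool/1', {'pg_num': 128})
```

Sending such a request with `urllib.request.urlopen(req)` would hand the operation to cthulhu, which translates it into the corresponding Ceph commands.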
cthulhu can be understood as the service layer of the Calamari Server: it provides an interface to the REST API above it and calls the Salt-master below it.
calamari_clients is a set of user interfaces. During installation, the Calamari Server first creates the /opt/calamari/webapp directory and checks in the manage.py (Django configuration) file under webapp/calamari. All calamari_web content must be placed under /opt/calamari/webapp to provide the UI access pages.
The files under the calamari-web package provide all web-related configuration, used by both calamari_rest and calamari_clients.

Developing new functionality based on Calamari mainly involves the following modules: the REST API part, cthulhu, and the Salt client extension. The basic steps for extending new functionality are as follows:
>> Extend the URL module: determine the URL's parameters and the corresponding response interface in the ViewSet.
>> Implement those interfaces in the ViewSet. This mainly involves interacting with cthulhu to obtain data; in some cases the object must also be serialized via a serializer.
>> Extend the corresponding type in the background rpc.py. This is mainly needed for POST operations.
>> Extend cluster_monitor.py. Functions that provide operations need to support create, update, delete, and so on, so a corresponding RequestFactory must be provided and registered in cluster_monitor.py.
>> Write the corresponding RequestFactory class. This mainly encapsulates the command operations and builds the corresponding request.
>> Extend the salt-minion. This mainly means extending the ceph.py file, though a new xxx.py file can also be provided.
The following takes the control and operation of PGs as an example.
Calamari currently exposes a REST API built on Django's REST Framework; this part lives in the rest-api code directory. Django separates URLs from code logic, so the URLs can be extended independently.
Add the following PG-related URLs to rest-api/calamari-rest/urls/v2.py:
url(r'^cluster/(?P<fsid>[a-zA-Z0-9-]+)/pool/(?P<pool_id>\d+)/pg$',
    calamari_rest.views.v2.PgViewSet.as_view({'get': 'list'})),
url(r'^cluster/(?P<fsid>[a-zA-Z0-9-]+)/pool/(?P<pool_id>\d+)/pg/(?P<pg_id>[^/]+)/command/(?P<command>[a-z_]+)$',
    calamari_rest.views.v2.PgViewSet.as_view({'post': 'apply'}),
    name='cluster-pool-pg-control'),
Two URLs are defined above, which are:
api/v2/cluster/xxxx/pool/x/pg
api/v2/cluster/xxxx/pool/x/pg/xx/command/xxx
These two URLs map to interfaces in PgViewSet: the GET method corresponds to the list interface, and the POST method corresponds to the apply interface. Both interfaces must be implemented in PgViewSet.
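Django resolves these URLs with regular expressions and passes the named capture groups to the view as keyword arguments, which is why list receives fsid and pool_id. The pattern below is an illustrative stand-alone equivalent, not the exact one from v2.py:

```python
import re

# Illustrative pattern for api/v2/cluster/<fsid>/pool/<pool_id>/pg
PG_LIST = re.compile(r'^cluster/(?P<fsid>[a-zA-Z0-9-]+)/pool/(?P<pool_id>\d+)/pg$')

match = PG_LIST.match('cluster/abcd-1234/pool/3/pg')
kwargs = match.groupdict()  # what Django hands to the view as **kwargs
```

The group names in the pattern must therefore match the parameter names of the ViewSet methods they route to.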
After extending the URLs, the next step is to implement the corresponding response interfaces, i.e., the interface classes specified in the URLs. For PG, two different interfaces were specified: data retrieval and operation commands. The corresponding code path is rest-api/calamari-rest/view/v2.py. The code is as follows:
class PgViewSet(RPCViewSet):
    serializer_class = PgSerializer

    def list(self, request, fsid, pool_id):
        poolName = self.client.get(fsid, POOL, int(pool_id))['pool_name']
        pg_summary = self.client.get_sync_object(fsid, PgSummary.str)
        pg_pools = pg_summary['pg_pools']['by_pool'][int(pool_id)]
        for pg in pg_pools:
            pg['pool'] = poolName
        return Response(PgSerializer(pg_pools, many=True).data)

    def apply(self, request, fsid, pool_id, pg_id, command):
        return Response(self.client.apply(fsid, PG, pg_id, command), status=202)
As can be seen above, the code implements two interfaces, list and apply, corresponding to the GET and POST operations respectively. Both interact with the background cthulhu: one fetches data and the other submits a request, and their return contents also differ.
The list interface also configures serialization via PgSerializer, which is implemented in rest-api/calamari-rest/serializer/v2.py.
Data returned by a REST API is usually serialized, but this step is not mandatory; it is typically needed where fields must be renamed or documented. The following is the serialization of Pg:
class PgSerializer(serializers.Serializer):
    class Meta:
        fields = ('id', 'pool', 'state', 'up', 'acting', 'up_primary',
                  'acting_primary')

    id = serializers.CharField(source='pgid')
    pool = serializers.CharField(help_text='pool name')
    state = serializers.CharField(source='state', help_text='pg state')
    up = serializers.Field(help_text='pg up set')
    acting = serializers.Field(help_text='pg acting set')
    up_primary = serializers.IntegerField(help_text='pg up primary')
    acting_primary = serializers.IntegerField(help_text='pg acting primary')
This part is optional; some modules may not need it. The previous three steps basically complete the extension of the REST API part. The main work is in the ViewSet, which implements the interaction between the REST API and cthulhu.
Since the ViewSet extension actually interacts with the background via RPC, the cthulhu implementation mainly has to handle the corresponding RPC requests.
rpc.py implements all request operations, and newly extended operations must be supported there as well. Continuing with PG as the example:
def apply(self, fs_id, object_type, object_id, command):
    """
    Apply commands that do not modify an object in a cluster.
    """
    cluster = self._fs_resolve(fs_id)
    if object_type == OSD:
        # Run a resolve to throw exception if it's unknown
        self._osd_resolve(cluster, object_id)
        return cluster.request_apply(OSD, object_id, command)
    elif object_type == PG:
        return cluster.request_apply(PG, object_id, command)
    else:
        raise NotImplementedError(object_type)
The list of PGs is obtained through PgSummary. This part already existed in the previous implementation:
def get_sync_object(self, fs_id, object_type, path=None):
    """
    Get one of the objects that ClusterMonitor keeps a copy of from the mon,
    such as the cluster maps.

    :param fs_id: The fsid of a cluster
    :param object_type: String, one of SYNC_OBJECT_TYPES
    :param path: List, optional, a path within the object to return instead of the whole thing
    :return: the requested data, or None if it was not found (including if any
             element of ``path`` was not found)
    """
    if path:
        obj = self._fs_resolve(fs_id).get_sync_object(SYNC_OBJECT_STR_TYPE[object_type])
        try:
            for part in path:
                if isinstance(obj, dict):
                    obj = obj[part]
                else:
                    obj = getattr(obj, part)
        except (AttributeError, KeyError) as e:
            log.exception("Exception %s traversing %s: obj=%s" % (e, path, obj))
            raise NotFound(object_type, path)
        return obj
    else:
        return self._fs_resolve(fs_id).get_sync_object_data(SYNC_OBJECT_STR_TYPE[object_type])
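The path handling above is just a step-by-step walk through nested dicts and object attributes. A self-contained sketch of the same idea, using an invented pg_summary structure shaped like the one used elsewhere in this article:

```python
def traverse(obj, path):
    """Walk nested dicts/objects one path element at a time."""
    for part in path:
        obj = obj[part] if isinstance(obj, dict) else getattr(obj, part)
    return obj

# Invented sample data mirroring the pg_summary shape used in this article.
pg_summary = {'pg_pools': {'by_pool': {3: [{'pgid': '3.0', 'state': 'active+clean'}]}}}
pgs = traverse(pg_summary, ['pg_pools', 'by_pool', 3])
```

This is why callers can ask for a sub-tree of a sync object by passing a path instead of fetching the whole map.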
All requested operations are controlled by the cluster. This is implemented through cluster_monitor.py, again taking PG as an example.
def __init__(self, fsid, cluster_name, notifier, persister, servers, eventer, requests):
    super(ClusterMonitor, self).__init__()
    self.fsid = fsid
    self.name = cluster_name
    self.update_time = datetime.datetime.utcnow().replace(tzinfo=utc)
    self._notifier = notifier
    self._persister = persister
    self._servers = servers
    self._eventer = eventer
    self._requests = requests
    # Which mon we are currently using for running requests,
    # identified by minion ID
    self._favorite_mon = None
    self._last_heartbeat = {}
    self._complete = gevent.event.Event()
    self.done = gevent.event.Event()
    self._sync_objects = SyncObjects(self.name)
    self._request_factories = {
        CRUSH_MAP: CrushRequestFactory,
        OSD: OsdRequestFactory,
        POOL: PoolRequestFactory,
        CACHE_TIER: CacheTierRequestFactory,
        PG: PgRequestFactory,
        ASYNC_COMMAND: AsyncCommandRequestFactory
    }
    self._plugin_monitor = PluginMonitor(servers)
    self._ready = gevent.event.Event()
This part mainly binds each request type to its request factory class, so that a suitable request can be generated.
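The effect of this binding is a simple type-to-factory dispatch: when a request arrives for a given object type, the monitor looks up the factory for that type and calls the method named by the command. A stripped-down sketch of the pattern (the class and type names here are illustrative, not the real Calamari ones):

```python
class FakePgRequestFactory(object):
    """Illustrative factory: each method builds one command tuple."""
    def scrub(self, pg_id):
        return ('pg scrub', {'pgid': pg_id})

# Type-to-factory binding, analogous to self._request_factories above.
REQUEST_FACTORIES = {'pg': FakePgRequestFactory}

def request_apply(object_type, object_id, command):
    # Look up the factory for this object type, then dispatch by command name.
    factory = REQUEST_FACTORIES[object_type]()
    return getattr(factory, command)(object_id)

result = request_apply('pg', '3.0', 'scrub')
```

Registering a new object type therefore only requires adding one entry to the dictionary plus the factory class itself.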
The factory classes implement concrete request-building interfaces for different needs; each object type has its own request class. Taking PG as an example:
from cthulhu.manager.request_factory import RequestFactory
from cthulhu.manager.user_request import RadosRequest
from calamari_common.types import PG_IMPLEMENTED_COMMANDS, PgSummary

class PgRequestFactory(RequestFactory):
    def scrub(self, pg_id):
        return RadosRequest(
            "Initiating scrub on {cluster_name}-pg{id}".format(
                cluster_name=self._cluster_monitor.name, id=pg_id),
            self._cluster_monitor.fsid,
            self._cluster_monitor.name,
            [('pg scrub', {'pgid': pg_id})])

    def deep_scrub(self, pg_id):
        return RadosRequest(
            "Initiating deep-scrub on {cluster_name}-pg{id}".format(
                cluster_name=self._cluster_monitor.name, id=pg_id),
            self._cluster_monitor.fsid,
            self._cluster_monitor.name,
            [('pg deep-scrub', {'pgid': pg_id})])

    def repair(self, pg_id):
        return RadosRequest(
            "Initiating repair on {cluster_name}-pg{id}".format(
                cluster_name=self._cluster_monitor.name, id=pg_id),
            self._cluster_monitor.fsid,
            self._cluster_monitor.name,
            [('pg repair', {'pgid': pg_id})])

    def get_valid_commands(self, pg_id):
        file('/tmp/pgsummary.txt', 'a').write(PgSummary.str + '\n')  # debug trace
        pg_summary = self._cluster_monitor.get_sync_object(PgSummary)
        pg_pools = pg_summary['pg_pools']['by_pool']
        pool_id = int(pg_id.split('.')[0])
        pool = pg_pools[pool_id]
        ret_val = {pg_id: {'valid_commands': []}}
        for pg in pool:
            if pg['pgid'] == pg_id:
                ret_val[pg_id] = {'valid_commands': PG_IMPLEMENTED_COMMANDS}
                break
        return ret_val
This class implements three different commands, each of which is mainly an encapsulation of the corresponding Ceph operation. The command keywords must match the parameters defined in the Ceph source code, so when coding you should refer to the JSON parameter names of the corresponding command in the Ceph source.
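For reference, each command tuple pairs a command prefix with its JSON arguments, using exactly the parameter names Ceph's command table expects. A hedged sketch of how such a tuple corresponds to an equivalent ceph CLI invocation (illustrative only; Calamari actually submits it via librados/Salt rather than the shell):

```python
def command_to_cli(command):
    """Render a (prefix, args) command tuple as a ceph CLI argument list."""
    prefix, args = command
    return ['ceph'] + prefix.split() + [str(v) for v in args.values()]

# The tuple built by PgRequestFactory.scrub('3.0') maps to `ceph pg scrub 3.0`.
cli = command_to_cli(('pg scrub', {'pgid': '3.0'}))
```

This is why the argument key must be 'pgid' and not an arbitrary name: it is looked up against the command's JSON schema in the Ceph source.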