Audit events for CM-managed services are only retrieved if the Cloudera Navigator server is running.
Selector | Description | SCM | HDFS | HBase | Hive | Impala | Sentry |
---|---|---|---|---|---|---|---|
service | Cloudera Manager Service | x | x | x | x | x | x |
operation | Operation name | x | x | x | x | x | x |
username | User name | x | x | x | x | x | x |
impersonator | Impersonator | x | x | x | x | x | |
ip_address | IP Address | x | x | x | x | x | x |
allowed | Whether the request was allowed or denied | x | x | x | x | x | x |
qualifier | Column qualifier | | | x | | | |
source | Source resource of the operation | x | x | x | x | x | |
destination | Destination resource of the operation | x | x | x | x | | |
hostIpAddress | Host IP Address | x | | | | | |
role | Cloudera Manager Role | x | | | | | |
family | Column family | | | x | | | |
database_name | Database name | | | | x | x | x |
table_name | Table name | | | x | x | x | x |
object_type | Type of object being handled | | | | x | x | x |
operation_text | Command/query text | | | | x | x | x |
The only supported operator is ";" (Boolean AND). Boolean OR is not supported.
The supported comparators are == and !=. "LIKE" comparison is supported using wildcard syntax, for example foo==*value*. The asterisk is interpreted as a wildcard character and must not be part of the value itself. (LIKE comparison queries are converted to standard SQL LIKE syntax, so any % (%25) character in a value that also contains a wildcard will be interpreted as a wildcard.)
Values for time-related query parameters (startTime and endTime) should be ISO 8601 timestamps.
Available since API v8. A subset of these features is available since API v4.
The other valid comparators are =lt=, =le=, =ge=, and =gt=. They stand for "<", "<=", ">=", and ">" respectively. These comparators are only applicable to date-time fields.
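The filter rules above can be sketched with a small helper that assembles a query string; the helper name and the surrounding request parameters are illustrative, not part of the API.

```python
from urllib.parse import urlencode

def audit_filter(*terms: str) -> str:
    """Join filter terms with ';' (Boolean AND); OR is not supported."""
    return ";".join(terms)

# '==' and '!=' are the basic comparators; '*' acts as a LIKE wildcard.
query = audit_filter("username==root", "allowed==false", "service==*hdfs*")

# Time bounds are ISO 8601 timestamps.
params = urlencode({
    "query": query,
    "startTime": "2015-06-01T00:00:00Z",
    "endTime": "2015-06-01T01:00:00Z",
})
```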
You must specify at least the hostname and ipAddress in the request objects. If no hostId is specified, it will be set to the hostname. It is an error to try to create a host with the same hostId as an existing host.
To rename the cluster, provide the new name in the "displayName" property for API v6 and later, or in the "name" property for API v5 and earlier.
Available since API v2.
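As a sketch, the rename payload depends on the API version in use; the helper below is hypothetical and simply picks the right property name per the note above.

```python
import json

def rename_body(new_name: str, api_version: int) -> str:
    # "displayName" for API v6 and later, "name" for v5 and earlier.
    key = "displayName" if api_version >= 6 else "name"
    return json.dumps({key: new_name})
```

The resulting JSON body would be sent in the update request for the cluster resource.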
Settings that are not listed in the input will maintain their current values.
The license file should be uploaded using a request with content type "multipart/form-data", instead of being encoded into a JSON representation.
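A minimal sketch of building such a multipart/form-data request body with the standard library; the form field name ("license") and file name are assumptions, so check your API version's reference.

```python
import uuid

def multipart_body(field: str, filename: str, data: bytes):
    """Build a multipart/form-data body and its Content-Type header value."""
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return f"multipart/form-data; boundary={boundary}", head + data + tail

content_type, body = multipart_body("license", "cloudera.lic", b"...license text...")
```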
It is recommended to run the remote server with TLS enabled, since creating and using peers involves transferring credentials over the network.
Available since API v3. Only available with Cloudera Manager Enterprise Edition.
Regardless of the list of roles provided in the input data, all management roles are created by this call. The input is used to override any default settings for the specific roles.
This method needs a valid CM license to be installed beforehand.
This method does not start any services or roles.
This method will fail if a CMS instance already exists.
Available role types:
Currently, only updating the rackId is supported. All other fields of the host will be ignored.
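For example, the update body only needs the rackId property (the rack path shown is illustrative); any other host fields in the payload are ignored:

```python
import json

# Only rackId is honored by the update; all other host fields are ignored.
body = json.dumps({"rackId": "/rack1"})
```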
The message field contains a description of the error. The causes field, if not null, contains a list of causes for the error.
Note that this never returns an echoMessage. Instead, the result (and all error results) has the above structure.
Available since API v6.
Available since API v6.
Available since API v3.
Cluster Version | Available Service Types |
---|---|
CDH4 | HDFS, MAPREDUCE, HBASE, OOZIE, ZOOKEEPER, HUE, YARN, IMPALA, FLUME, HIVE, SOLR, SQOOP, KS_INDEXER |
CDH5 | HDFS, MAPREDUCE, HBASE, OOZIE, ZOOKEEPER, HUE, YARN, IMPALA, FLUME, HIVE, SOLR, SQOOP, KS_INDEXER, SQOOP_CLIENT, SENTRY, ACCUMULO16, KMS, SPARK_ON_YARN |
Available since API v3. Only available with Cloudera Manager Enterprise Edition.
If changing the peer's URL, an attempt will be made to contact the old Cloudera Manager to delete the existing credentials.
Available since API v3. Only available with Cloudera Manager Enterprise Edition.
Available since API v6.
Available since API v6.
Attributes that are not listed in the input will maintain their current values in the configuration.
Attributes that are not listed in the input will maintain their current values in the configuration.
By default, the returned results correspond to a 5 minute window based on the provided end time (which defaults to the current server time). The from and to parameters can be used to control the window being queried. A maximum window of 3 hours is enforced.
When requesting a "full" view, aside from the extended properties of the returned metric data, the collection will also contain information about all metrics available for the role, even if no readings are available in the requested window.
Host metrics also include per-network-interface and per-storage-device metrics. Since collecting this data incurs more overhead, query parameters can be used to choose which network interfaces and storage devices to query, or to skip these metrics altogether.
Storage metrics are collected at different levels; for example, per-disk and per-partition metrics are available. The "storageIds" parameter can be used to filter specific storage IDs.
In the returned data, the network interfaces and storage IDs can be identified by looking at the "context" property of the metric objects.
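The windowing rules can be sketched as follows; the helper is illustrative, emitting ISO 8601 'from'/'to' values while respecting the 3-hour cap.

```python
from datetime import datetime, timedelta, timezone

def metrics_window(end: datetime, minutes: int = 5) -> dict:
    """Build 'from'/'to' query parameters for a metrics request."""
    if minutes > 180:
        raise ValueError("a maximum window of 3 hours is enforced")
    start = end - timedelta(minutes=minutes)
    return {"from": start.isoformat(), "to": end.isoformat()}

# Defaults to the 5 minute window described above.
window = metrics_window(datetime(2015, 6, 1, 12, 0, tzinfo=timezone.utc))
```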
Available since API v2.
Available since API v7.
Available since API v7.
If using packages, CDH packages on all hosts of the cluster must be manually upgraded before this command is issued.
The command will upgrade the services and their configuration to the version available in the CDH5 distribution.
Unless rolling upgrade options are provided, the entire cluster will experience downtime. If rolling upgrade options are provided, the command will do a "best-effort" rolling upgrade of the given cluster: it performs a plain upgrade of services that cannot be rolling-upgraded, then rolling-upgrades the non-slave roles, and finally rolling-restarts the slave roles of services that support rolling restart. The slave restarts are done host by host.
Available since API v9. Rolling upgrade is only available with Cloudera Manager Enterprise Edition. A more limited upgrade variant is available since API v6.
If parcels are used instead of CDH system packages, then the following steps need to happen in order:
Available since API v3.
The valid comparators are ==, !=, =lt=, =le=, =ge=, and =gt=. They stand for "==", "!=", "<", "<=", ">=", ">" respectively.
The "full" view contains all available configuration parameters for the service and its role types. This mode performs validation on the configuration, which could take a few seconds on a large cluster (around 500 nodes or more).
Attributes that are not listed in the input will maintain their current values in the configuration.
By default, the returned results correspond to a 5 minute window based on the provided end time (which defaults to the current server time). The from and to parameters can be used to control the window being queried. A maximum window of 3 hours is enforced.
When requesting a "full" view, aside from the extended properties of the returned metric data, the collection will also contain information about all metrics available for the service, even if no readings are available in the requested window.
HDFS services that have more than one nameservice will not expose any metrics. Instead, the nameservices should be queried separately.
Attributes that are not listed in the input will maintain their current values in the configuration.
Available since API v3.
For HDFS services, the list should contain names of DataNodes to decommission.
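As an illustration, the request body is a list of role names; the "items" field name and the DataNode role names below are assumptions, not values from the source.

```python
import json

# Hypothetical DataNode role names for a decommission request.
body = json.dumps({"items": ["hdfs-DATANODE-host1", "hdfs-DATANODE-host2"]})
```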
The command will create a new JobTracker on the specified host and then create an active/standby pair with the existing JobTracker. Autofailover will be enabled using ZooKeeper, and a ZNode will be created for this purpose. The command arguments provide an option to forcefully create this ZNode if one already exists. A ZNode may already exist if the JobTracker was previously enabled in HA mode and HA mode was later disabled; the ZNode is not deleted when HA is disabled.
As part of enabling HA, any services that depend on the MapReduce service being modified will be stopped. The command will redeploy the client configurations for services of the cluster after HA has been enabled.
The command will create a new ResourceManager on the specified host and then create an active/standby pair with the existing ResourceManager. Autofailover will be enabled using ZooKeeper.
As part of enabling HA, any services that depend on the YARN service being modified will be stopped. The command will redeploy the client configurations for services of the cluster after HA has been enabled.
Available since API v7.
The ZooKeeper dependency of the service will not be removed.
As part of disabling HA, any services that depend on the HDFS service being modified will be stopped. The command arguments provide options to re-start these services and to re-deploy the client configurations for services of the cluster after HA has been disabled.
If no nameservice uses Quorum Journal after HA is disabled for the specified nameservice, then all JournalNodes are also deleted.
The HDFS service is then restarted, and all services that were stopped are started again afterwards. Finally, client configurations for HDFS and its dependents will be re-deployed.
The command will create the needed failover controllers, perform the needed initialization and configuration, and will start the new roles. The existing NameNodes which are part of the nameservice will be re-started in the process.
This process may require changing the service's configuration, to add a dependency on the provided ZooKeeper service. This will be done if such a dependency has not been configured yet, and will cause roles that are not affected by this command to show an "outdated configuration" status.
If a ZooKeeper dependency has already been set up by some other means, it does not need to be provided in the command arguments.
If there is a SecondaryNameNode associated with either given NameNode instance, it will be deleted.
Note that while the shared edits path may differ between the two nodes, both paths must point to the same underlying storage (e.g., an NFS share).
As part of enabling HA, any services that depend on the HDFS service being modified will be stopped. The command arguments provide options to re-start these services and to re-deploy the client configurations for services of the cluster after HA has been enabled.
The command will also create JournalNodes needed for HDFS HA if they do not already exist.
As part of enabling HA, any services that depend on the HDFS service being modified will be stopped. They will be restarted after HA has been enabled. Finally, client configurations for HDFS and its dependents will be re-deployed.
Available since API v8.
Available since API v4.
Available since API v3.
This command is to be run after enabling HDFS High Availability. Only available when all Hive Metastore Servers are stopped.
Available since API v4.
This command is to be run whenever a new user and database need to be created in the embedded PostgreSQL database for the Impala Catalog Server. This command should usually be followed by a call to impalaCreateCatalogDatabaseTables.
Available since API v6.
This command is to be run whenever a new database has been specified. Will do nothing if tables already exist. Will not perform an upgrade. Only available when all Impala Catalog Servers are stopped.
Available since API v6.
The command argument specifies the name of the Llama role to be retained. The other Llama role in the HA pair will be removed. As part of disabling HA, any services that depend on the Impala service being modified will be stopped. The command will redeploy the client configurations for all services of the cluster after HA has been disabled.
Available since API v8.
This command disables resource management for Impala by removing all Llama roles present in the Impala service. Any services that depend on the Impala service being modified are restarted by the command, and client configuration is deployed for all services of the cluster.
Note that any configuration changes made to YARN and Impala when enabling resource management are not reverted by this command.
Available since API v8.
The command will create a new Llama role on the specified host, and then create an active/standby pair with the existing Llama role. Autofailover will be enabled using ZooKeeper.
If an optional role name is supplied, the new Llama role will be named accordingly; otherwise, a role name will be automatically generated.
As part of enabling HA, any services that depend on the Impala service being modified will be stopped. The command will redeploy the client configurations for services of the cluster after HA has been enabled.
Available since API v8.
This command configures YARN and Impala for Llama resource management, and then creates one or two Llama roles, as specified by the arguments. When two Llama roles are created, they are configured as an active/standby pair. Auto-failover from active to standby Llama will be enabled using ZooKeeper.
If optional role names are supplied, the new Llama roles will be named accordingly; otherwise, role names will be generated automatically.
By default, YARN, Impala, and any dependent services will be restarted, and client configuration will be re-deployed across the cluster. These default actions may be suppressed via setSkipRestart().
In order to enable Llama resource management, a YARN service must be present in the cluster, and Cgroup-based resource management must be enabled for all hosts with NodeManager roles. If these preconditions are not met, the command will fail.
Available since API v8.
Before running this command, YARN must be stopped and a MapReduce service must exist with a valid configuration.
Available since API v6.
The command will create new Oozie Servers on the specified hosts and set the ZooKeeper and Load Balancer configs needed for Oozie HA.
As part of enabling HA, any services that depend on the Oozie service being modified will be stopped, and then restarted after enabling Oozie HA.
Available since API v2.
Available since API v3. Only available with Cloudera Manager Enterprise Edition.
Available since API v7.
Available since API v7.
Available since API v8.
Available since API v6.
Available since API v4. Only available with Cloudera Manager Enterprise Edition.
Available since API v4. Only available with Cloudera Manager Enterprise Edition.
The HA partner must already be formatted and running for this command to run.
Note about high availability: when two NameNodes are working in an HA pair, only one of them should be formatted.
Bulk command operations are not atomic, and may contain partial failures. The returned list will contain references to all successful commands, and a list of error messages identifying the roles on which the command failed.
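Since the operation is not atomic, callers should inspect both parts of the response. A hedged sketch of splitting successes from failures; the field names, command ids, and error message are assumptions for illustration.

```python
def summarize_bulk(response: dict):
    """Return (successful command ids, error messages); field names assumed."""
    commands = response.get("items", [])
    errors = response.get("errors", [])
    return [c.get("id") for c in commands], errors

# Example response shape (made-up data).
ok, failed = summarize_bulk({
    "items": [{"id": 101}, {"id": 102}],
    "errors": ["Role on host3 could not run the command."],
})
```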
Only one controller per nameservice needs to be initialized.
The provided role names should reflect one of the NameNodes in the respective HA pair; the role must be stopped and its data directory must already have been formatted. The shared edits directory must be empty for this command to succeed.
This request should be sent to Hue servers only.
For HDFS services, this command should be executed on NameNode roles. It refreshes the NameNode's node list.
For YARN services, this command should be executed on ResourceManager roles. It refreshes the role's queue and node information.
These attributes can be used to search for specific YARN applications through the getYarnApplications API. For example, the 'user' attribute could be used in the search 'user = root'. If the attribute is numeric, it can also be used as a metric in a tsquery (for example, 'select maps_completed from YARN_APPLICATIONS').
Note that this response is identical for all YARN services.
Available since API v6.
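The 'user = root' search above could be sent as a filter parameter like this; the endpoint path, cluster name, and service name are illustrative.

```python
from urllib.parse import urlencode

# Build the query string for a getYarnApplications request.
params = urlencode({"filter": "user = root", "limit": 100})
path = "/api/v6/clusters/Cluster1/services/yarn1/yarnApplications?" + params
```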
Log files are returned as plain text (type "text/plain").
Log files are returned as plain text (type "text/plain").
Log files are returned as plain text (type "text/plain").
By default, the returned results correspond to a 5 minute window based on the provided end time (which defaults to the current server time). The from and to parameters can be used to control the window being queried. A maximum window of 3 hours is enforced.
When requesting a "full" view, aside from the extended properties of the returned metric data, the collection will also contain information about all metrics available for the activity, even if no readings are available in the requested window.
By default, the returned results correspond to a 5 minute window based on the provided end time (which defaults to the current server time). The from and to parameters can be used to control the window being queried. A maximum window of 3 hours is enforced.
When requesting a "full" view, aside from the extended properties of the returned metric data, the collection will also contain information about all metrics available, even if no readings are available in the requested window.
Available since API v3. Only available with Cloudera Manager Enterprise Edition.
Attributes that are not listed in the input will maintain their current values in the configuration.
By default, the returned results correspond to a 5 minute window based on the provided end time (which defaults to the current server time). The from and to parameters can be used to control the window being queried. A maximum window of 3 hours is enforced.
When requesting a "full" view, aside from the extended properties of the returned metric data, the collection will also contain information about all metrics available for the role, even if no readings are available in the requested window.
Log files are returned as plain text (type "text/plain").
Log files are returned as plain text (type "text/plain").
Log files are returned as plain text (type "text/plain").